[Unit]
Description=Kibana 4
[Service]
Type=simple
User=logstash
Environment=CONFIG_PATH=/opt/kibana/config/kibana.yml
Environment=NODE_ENV=production
ExecStart=/opt/kibana/node/bin/node /opt/kibana/src/cli
[Install]
WantedBy=multi-user.target
-
https://links.infomee.fr/?ynURrg
On the user-defined network isolated_nw, the Docker network feature updated /etc/hosts with the proper name resolution. Inside container2 it is possible to ping container3 by name.
-
https://docs.docker.com/engine/userguide/networking/work-with-networks/
The documentation evolves quickly from version to version; rereading it, I learned plenty of new things.
-
https://docs.docker.com/engine/userguide/networking/dockernetworks/
This solution looks like a good compromise when you want a bidirectional link (which is not possible with links):
http://sgillis.github.io/posts/2015-03-23-docker-dns.html
https://github.com/tonistiigi/dnsdock
-
http://abdelrahmanhosny.com/2015/07/01/3-solutions-to-bi-directional-linking-problem-in-docker-compose/
To graph the cumulative CPU time of certain processes:
ps -e --format pid,time
The downside is that the returned value is in an awkward format.
It would be better to have it in seconds, like etimes.
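To get seconds, the TIME column ([DD-]HH:MM:SS) can be converted with awk. A minimal sketch; the helper name to_seconds is mine, not from the linked answer:

```shell
# Convert ps's cputime format ([DD-]HH:MM:SS) to plain seconds
to_seconds() {
    echo "$1" | awk -F'[-:]' '{
        if (NF == 4) print $1*86400 + $2*3600 + $3*60 + $4   # with a days field
        else         print $1*3600 + $2*60 + $3              # plain HH:MM:SS
    }'
}

to_seconds "1-02:03:04"   # 93784
to_seconds "00:05:30"     # 330
```

Recent procps-ng also seems to accept a `times` format key that prints cumulative CPU time directly in seconds, which would avoid the conversion entirely.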
-
http://unix.stackexchange.com/questions/156607/format-cputime-for-ps
Node.js process manager
-
https://github.com/Unitech/pm2
Everything is in man ps; I am keeping this aside to write a collectd plugin:
ps -p $(cat /var/run/xx.pid) --no-headers --format rssize,vsize
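A sketch of what such a plugin could report; the PUTVAL line follows collectd's exec-plugin protocol, but the value path is a placeholder, and $$ stands in for the real pidfile so the example is self-contained:

```shell
# Print "<rss> <vsz>" in KiB for the given pid (same ps invocation as above)
mem_usage() {
    ps -p "$1" --no-headers --format rssize,vsize
}

# In a real plugin, $1 would come from: cat /var/run/xx.pid
read -r rss vsz <<EOF
$(mem_usage $$)
EOF
echo "PUTVAL \"$(uname -n)/processes-xx/ps_rss\" N:$rss"
```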
-
https://links.infomee.fr/?U5Vm5Q
To get Apache's logs in the console when running it in the foreground:
LogLevel info
ErrorLog "|cat"
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog "|cat" common
-
http://zroger.com/blog/apache-in-the-foreground/
Raising rsyslog's rate limit makes it work:
imuxsock lost
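The rate limit is controlled by the imuxsock input. A hedged sketch of the relevant /etc/rsyslog.conf directives; the values are illustrative, not taken from the linked answer:

```
# /etc/rsyslog.conf — legacy-style imuxsock rate-limit directives
$ModLoad imuxsock
$SystemLogRateLimitInterval 2    # window in seconds (0 disables rate limiting)
$SystemLogRateLimitBurst 2000    # messages allowed per window
```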
-
http://serverfault.com/questions/444061/imuxsock-messages-in-syslog-and-system-becomes-unresponsive
To manage multiple identities
-
https://github.com/ccontavalli/ssh-ident
It took me some time to figure this one out, as everybody is using rsync and ssh-keys without passphrases, but I insist that an ssh-key should have a passphrase.
In my first attempts I got this error message mailed to me by crontab:
Permission denied (gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive).
Here are the steps to automate a backup initiated from crontab using rsync, SSH and ssh-keys with a passphrase:
Make a set of SSH keys.
Setup SSH to use the agent automatically.
Log in once as the user whose cron will run the backup script. You will be asked for the passphrase. When the machine reboots, you will need to log in once more to enter the passphrase again.
Make a backup script that includes some SSH variables.
This script could be as simple as this:
. /home/username/.ssh/variables
rsync -avz --delete /data/ example.com:data
N.B. This variables file only contains these lines:
SSH_AUTH_SOCK=/tmp/ssh-DmFcb18036/agent.18036; export SSH_AUTH_SOCK;
SSH_AGENT_PID=18037; export SSH_AGENT_PID;
echo Agent pid 18037;
Put that script in crontab.
That should do it for you, as it works like a charm for me!
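The final step ("put that script in crontab") could look like this; the script path, schedule and log file are assumptions for illustration:

```
# crontab -e — run the backup script nightly at 02:30
30 2 * * * /home/username/bin/backup.sh >> /home/username/backup.log 2>&1
```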
-
https://meinit.nl/using-rsync-from-cron-with-ssh-keys-that-have-a-passphrase
variables=~/.ssh/variables
sshadd() {
    source "$variables" > /dev/null
    ssh-add -l > /dev/null 2>&1
    case "$?" in
        1)
            # agent is running but holds no identities: add the default key
            ssh-add > /dev/null 2>&1
            ;;
        2)
            # agent unreachable: the variables file is stale, start over
            rm "$variables"
            sshagent
            ;;
    esac
}

sshagent() {
    if [ -f "$variables" ] ; then
        sshadd
    else
        ssh-agent -s > "$variables"
        sshadd
    fi
}

sshagent
-
https://meinit.nl/enter-your-ssh-passphrase-once-use-it-many-times-even-from-crontab