In most cases you can now forget about Logstash and use Elasticsearch ingest nodes (pipelines) instead.
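A minimal sketch of what that looks like, assuming a recent Elasticsearch (5.x+ for the ingest API, 6.x+ for the _doc endpoint); the pipeline name "apache-logs", the index "logs" and the grok pattern are made-up examples:

# Define an ingest pipeline that parses the "message" field at index time (no Logstash needed).
curl -XPUT 'http://localhost:9200/_ingest/pipeline/apache-logs' \
  -H 'Content-Type: application/json' -d '{
  "description" : "parse apache access logs",
  "processors" : [
    { "grok" : { "field" : "message", "patterns" : ["%{COMMONAPACHELOG}"] } }
  ]
}'

# Index a document through the pipeline.
curl -XPOST 'http://localhost:9200/logs/_doc/1?pipeline=apache-logs' \
  -H 'Content-Type: application/json' \
  -d '{ "message" : "127.0.0.1 - - [05/May/2017:16:21:15 +0000] \"GET / HTTP/1.1\" 200 3638" }'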
EFK (Elasticsearch, Filebeat, Kibana)
Amazon Elasticsearch access control can be based on IAM accounts with a signed-request mechanism.
One way to avoid rewriting every application is to use such a proxy.
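A hedged sketch of that setup: a SigV4 signing proxy (for example aws-es-proxy; the listen address below is an assumption) runs next to the application, which keeps speaking plain HTTP to localhost while the proxy signs and forwards the requests to the Amazon Elasticsearch endpoint.

# The application talks to the local proxy exactly as it would to a plain
# Elasticsearch node; the proxy adds the IAM SigV4 signature before forwarding.
curl -XGET 'http://localhost:9200/_cat/indices?v'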
OK, so this article was really useful to me; keep it around in case of other problems with ES.
# Force-allocate every UNASSIGNED shard of the "t37" index onto node "datanode15".
# Warning: "allow_primary" : true can promote an empty primary and lose data.
# Note: this is the pre-5.0 reroute syntax; ES 5+ split "allocate" into
# allocate_replica / allocate_stale_primary / allocate_empty_primary.
for shard in $(curl -s -XGET http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $2}'); do
    curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
        "commands" : [ {
            "allocate" : {
                "index" : "t37",
                "shard" : '$shard',
                "node" : "datanode15",
                "allow_primary" : true
            }
        } ]
    }'
    sleep 5
done
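To check that the reroute worked, something along these lines (standard cat/health APIs, nothing cluster-specific assumed):

# List any shards that are still unassigned, then check overall cluster health.
curl -s -XGET 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED
curl -s -XGET 'http://localhost:9200/_cluster/health?pretty'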
[07:27] < torkelo>| matejz: I have managed to get about 140~ bytes per measurement (ES)
[07:27] < matejz>| and was thinking of using it for metrics as well
[07:27] < torkelo>| which is 12x the size requirement of Graphite (12 bytes per measurement)
[10:16] < torkelo>| agree, if you store more than 100 000 metrics/s I think ES is not a good option. But for short term performance logging the new metric features for percentile and moving average, etc are looking very good
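The metric features mentioned above are the percentiles and moving_avg aggregations. A minimal sketch of such a query for the ES versions of that era (moving_avg appeared as a pipeline aggregation in 2.0); the index "metrics" and the fields "@timestamp" and "response_time" are made-up assumptions:

curl -XGET 'http://localhost:9200/metrics/_search?size=0&pretty' \
  -H 'Content-Type: application/json' -d '{
  "aggs" : {
    "per_minute" : {
      "date_histogram" : { "field" : "@timestamp", "interval" : "1m" },
      "aggs" : {
        "avg_rt" : { "avg" : { "field" : "response_time" } },
        "rt_pct" : { "percentiles" : { "field" : "response_time", "percents" : [50, 95, 99] } },
        "rt_moving_avg" : { "moving_avg" : { "buckets_path" : "avg_rt" } }
      }
    }
  }
}'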
To back up your Elasticsearch.
Careful: a snapshot is not restorable from one version to another. I ran into this problem and used https://github.com/mallocator/Elasticsearch-Exporter
Works well!
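For same-version backups, the built-in snapshot API is enough. A minimal sketch; the repository name "my_backup", snapshot name "snapshot_1" and the path are assumptions, and the path must be whitelisted under path.repo in elasticsearch.yml:

# Register a filesystem snapshot repository.
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' \
  -H 'Content-Type: application/json' -d '{
  "type" : "fs",
  "settings" : { "location" : "/mnt/es_backups" }
}'

# Take a snapshot of all indices and wait for it to finish.
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# Restore it later (same or compatible newer version only, hence the warning above).
curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'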