Amazon Elasticsearch access control can be based on an IAM account, using the Signature Version 4 signed-request mechanism.
One way to avoid rewriting every application is to put a signing proxy in front of the cluster, which adds the signature for you.
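A minimal sketch of a signed request from the shell, assuming the third-party awscurl tool (pip install awscurl) and a hypothetical domain endpoint:
# awscurl signs the request with the credentials from your environment
awscurl --service es --region us-east-1 https://my-domain.us-east-1.es.amazonaws.com/_cluster/health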
Scan a DynamoDB table:
aws dynamodb scan --table-name foo
Delete one item by its key:
aws dynamodb delete-item --table-name foo --key "{\"id\":{\"S\":\"$id\"}}"
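A minimal sketch combining the two to empty the table, assuming the partition key is a string attribute named id (pagination of large scans is not handled here):
aws dynamodb scan --table-name foo --query "Items[].id.S" --output text \
  | tr '\t' '\n' \
  | while read -r id; do
      # one delete-item call per key
      aws dynamodb delete-item --table-name foo --key "{\"id\":{\"S\":\"$id\"}}"
    done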
You can "requeue" an SQS message, or rather make it available again, by changing its visibility timeout to 0.
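From the CLI, assuming you still hold the receipt handle from the receive call (queue URL and handle are placeholders):
aws sqs change-message-visibility \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
  --receipt-handle "$RECEIPT_HANDLE" \
  --visibility-timeout 0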
Count the objects under an S3 prefix:
aws s3 ls s3://bucket/path/ --recursive --summarize | grep "Total Objects:"
Silly gotcha: you cannot feed the output of get-repository-policy straight into set-repository-policy to clone an ECR repository policy.
Along the way you have to strip the stray \n characters left in the response.
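A minimal sketch of the workaround, assuming repositories named source-repo and target-repo:
# extract the raw policy text and drop the stray newlines
policy=$(aws ecr get-repository-policy --repository-name source-repo \
  --query policyText --output text | tr -d '\n')
# apply it to the other repository
aws ecr set-repository-policy --repository-name target-repo --policy-text "$policy"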
So you can put your data into Glacier in two different ways:
1) directly into Glacier via its API
2) store it in S3, then a lifecycle management policy moves it to Glacier (see the sketch below)
Warning: huge cost when you download from Glacier, and an early-deletion fee when you delete objects less than 3 months old.
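A minimal sketch of option 2, assuming a bucket named my-bucket, a 30-day transition, and an illustrative rule name:
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-to-glacier",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }]
  }'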
aws efs describe-file-systems | jq '.FileSystems|.[]|[.Name, .SizeInBytes.Timestamp, .SizeInBytes.Value]' -c
Returns one line per EFS file system.
Each line is an array with:
[0] = the name of the EFS file system
[1] = the timestamp at which the size was computed
[2] = the size in bytes
To get the size in GiB (the second variant drops the timestamp):
aws efs describe-file-systems | jq '.FileSystems|.[]|[.Name, .SizeInBytes.Timestamp, .SizeInBytes.Value / 1024 / 1024 / 1024]' -c
aws efs describe-file-systems | jq '.FileSystems|.[]|[.Name, .SizeInBytes.Value / 1024 / 1024 / 1024]' -c
If you enable an S3 endpoint in your route table, it's kind of tricky to know whether the endpoint is really working. Two things to validate:
1) traceroute over TCP before and after (traceroute -T s3-us-west-1.amazonaws.com 443)
You will see more hops when the endpoint is not activated
2) try an s3 sync cross-region with the endpoint activated: it should fail, since cross-region traffic is not supported (yet, as of 2017-05-02)
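Beyond those two checks, you can also list the endpoints and the route tables they are attached to (a sketch; the --query expression is just one way to trim the output):
aws ec2 describe-vpc-endpoints \
  --query "VpcEndpoints[].[VpcEndpointId,ServiceName,RouteTableIds]"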