If you created a presigned URL by using a temporary token, then the URL expires when the token expires, even if you created the URL with a later expiration time. For more information about how the credentials you use affect the expiration time, see Who can create a presigned URL.
So you have to use a regular IAM user instead of an IAM role for a service that generates presigned URLs..? :-/
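A minimal sketch with the CLI (bucket and key are placeholders); the seven-day expiration only sticks if the signing credentials are long-lived, otherwise the URL dies with the session token:
aws s3 presign s3://my-bucket/my-key --expires-in 604800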
-
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html#who-presigned-url
To apply this lifecycle rule to all objects in the bucket, choose Next.
That's why the wildcard was not working :D
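CLI equivalent, as a sketch (bucket name, rule ID and expiration are placeholders): an empty Prefix in the filter matches every object, no wildcard needed.
aws s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration '{"Rules":[{"ID":"expire-all","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'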
-
http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
aws s3 ls s3://bucket/path/ --recursive --summarize | grep "Total Objects:"
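Same idea if you also want the total size (only the grep changes):
aws s3 ls s3://bucket/path/ --recursive --summarize | grep -E "Total (Objects|Size):"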
-
https://links.infomee.fr/?rcG0hg
So you can put your data into Glacier in two different ways (both sketched below):
1) Directly to Glacier via the API
2) Store it in S3, then a lifecycle management policy will transition it to Glacier
Warning: huge cost when you download from Glacier, and when you delete objects stored for less than 3 months
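Rough sketches of both (vault, bucket and file names are placeholders):
# 1) direct upload to a Glacier vault
aws glacier upload-archive --account-id - --vault-name my-vault --body archive.zip
# 2) S3 lifecycle rule that transitions everything to Glacier after 30 days
aws s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration '{"Rules":[{"ID":"to-glacier","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":30,"StorageClass":"GLACIER"}]}]}'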
-
https://www.cloudberrylab.com/blog/compare-amazon-glacier-direct-upload-and-glacier-upload-through-amazon-s3/
aws s3api list-objects --bucket BUCKET_NAME --output json --query "[sum(Contents[].Size), length(Contents[])]"
Returns the size in bytes and the number of objects.
With a file that contains the bucket names (note: get-bucket-location returns null for us-east-1, hence the jq fallback):
for BUCKET_NAME in $(cat s3list); do echo -n "$BUCKET_NAME " ; region=$(aws s3api get-bucket-location --bucket "$BUCKET_NAME" | jq -r '.LocationConstraint // "us-east-1"'); aws s3api list-objects --region "$region" --bucket "$BUCKET_NAME" --output json --query "sum(Contents[].Size)"; done
-
https://links.infomee.fr/?pbKktw
-
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
-
https://links.infomee.fr/?Vp7r6Q
If you enable the S3 endpoint in your route table, it's kind of tricky to know whether the endpoint is really working. Two things to validate (plus a CLI check sketched below):
1) TCP traceroute before and after (traceroute -T -p 443 s3-us-west-1.amazonaws.com)
You will see more hops when the endpoint is not activated
2) Try an s3 sync cross-region with the endpoint activated: it should fail, since that's not supported (yet, as of 2017-05-02)
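The CLI check, as a sketch (the region in the service name is an assumption, adjust to yours): list the S3 endpoints and the route tables they are attached to.
aws ec2 describe-vpc-endpoints --filters Name=service-name,Values=com.amazonaws.us-west-1.s3 --query "VpcEndpoints[].{Id:VpcEndpointId,State:State,RouteTables:RouteTableIds}"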
-
https://links.infomee.fr/?1wF2Kg
I may have to use this one, but the number of issues is freaking me out
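For reference, basic s3fs usage as sketched in the project README (bucket name, mount point and credentials are placeholders):
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ~/.passwd-s3fs && chmod 600 ~/.passwd-s3fs
s3fs mybucket /mnt/s3 -o passwd_file=~/.passwd-s3fs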
-
https://github.com/s3fs-fuse/s3fs-fuse