Handy when you have to juggle between different AWS accounts
And the equivalent for s3cmd: http://mikesisk.tumblr.com/post/8703449578/s3cmd-and-multiple-accounts
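For what it's worth, a minimal sketch of the same idea from Python with boto3 named profiles (the profile name is made up), assuming the profiles are declared in ~/.aws/credentials; the aws CLI accepts the same profiles via --profile:
import boto3
# Each profile in ~/.aws/credentials maps to a different AWS account
session = boto3.Session(profile_name="client-a")
print(list(session.resource("s3").buckets.all()))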
I may have to use this one, but the number of open issues is freaking me out
import boto3
ec2 = boto3.resource("ec2")  # assumes default credentials/region
subnets = ec2.subnets.all()
subnets_sorted = sorted(subnets, key=lambda s: next(t["Value"] for t in s.tags if t["Key"] == "Name"))  # sort by the "Name" tag
Well, my Python level is not good enough to clearly understand this, but my Google level was more than enough to build it
As the included diagram portrays, in a common workflow the worker instance consumes messages sent to a specified Amazon SQS queue from another service (e.g. a web server or another worker). These messages are delivered to the worker as POST requests. This eliminates the need to configure the worker as an always-on service, as well as the need to add code for reading and consuming messages from an SQS queue. In other words, the worker is implemented as a standard RESTful API/service that reacts to a message sent to a specific endpoint via a POST request. This is an awesome approach by Amazon to microservices and reactive design.
The conversion of the SQS message into a POST request is done by what AWS calls the "SQS Daemon" or "Sqsd". This is a simple daemon they pre-install on the worker tier instances that constantly monitors a specific AWS SQS queue (provided by configuration) for new messages. When new messages arrive, it builds a POST request and sends it to a specific endpoint (also provided via configuration). If the endpoint consumes it without errors and returns a 2xx HTTP status code in its response, the "Sqsd" deletes the message from the queue to signal that it was consumed successfully.
However, even though this approach is extremely powerful, Amazon does not provide the code of this daemon as open source. Therefore, we have reproduced its behavior by creating our own version of the "Sqsd" free for everyone to use. Moreover, we have provided lots of customization and configuration properties so that it can be molded to your specific use cases.
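To make the flow described above concrete, here is a minimal sketch of such a worker endpoint, assuming Flask and assuming the daemon is configured to POST to /worker (the framework, the path and the do_long_running_task helper are arbitrary choices for the example, not anything AWS imposes):
from flask import Flask, request

app = Flask(__name__)

@app.route("/worker", methods=["POST"])
def worker():
    message = request.get_json(force=True)  # body of the SQS message forwarded by the daemon
    do_long_running_task(message)           # hypothetical task handler
    return "", 200                          # a 2xx response tells the daemon to delete the message
Any non-2xx response (or an unhandled exception) leaves the message in the queue, and it will be delivered again once its visibility timeout expires.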
Don't give your app user the S3 full-access policy
Prefer to allow access only to a specific bucket
I wonder what the best choice is: create a managed policy or simply use an inline policy? I've got a 1-to-1 relationship between my app users and their buckets, so... an inline policy looks good here
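For the record, a sketch of the inline-policy option with boto3; user name, bucket name and policy name are made up:
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
        "Resource": ["arn:aws:s3:::my-app-bucket", "arn:aws:s3:::my-app-bucket/*"],
    }],
}

# Inline policy: lives on the user itself, which fits the 1-to-1 user/bucket case
iam.put_user_policy(
    UserName="my-app-user",
    PolicyName="my-app-bucket-access",
    PolicyDocument=json.dumps(policy),
)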
Spoiler alert: auto scaling is far from magical
As usual with AWS, everything is an object, and for auto scaling you get several objects with several links between them:
First you define a launch configuration: what type of machine you want to launch.
Then an auto scaling group: it will use the launch configuration to create new EC2 instances.
In this auto scaling group, you have to define auto scaling policies, i.e. what to do (add or remove x instances), and link them to a CloudWatch alarm (CPU is high, network traffic, or whatever else CloudWatch monitors).
The auto scaling group can also be linked to an ELB, so when EC2 instances are added/removed they are also registered/deregistered from the ELB.
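A rough sketch of how those objects fit together with boto3 (every name, AMI, size and threshold below is made up):
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# 1. Launch configuration: what kind of machine to launch
autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-lc",
    ImageId="ami-12345678",
    InstanceType="t2.micro",
)

# 2. Auto scaling group: uses the launch configuration to create new EC2 instances,
#    and registers/deregisters them in an ELB
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchConfigurationName="my-lc",
    MinSize=1,
    MaxSize=4,
    AvailabilityZones=["eu-west-1a"],
    LoadBalancerNames=["my-elb"],
)

# 3. Scaling policy: what to do (here, add one instance)
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="scale-out",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)

# 4. CloudWatch alarm: when to do it (here, average CPU > 70%), linked to the policy
cloudwatch.put_metric_alarm(
    AlarmName="my-asg-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)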
Each time you start a stopped instance we charge a full instance hour, even if you make this transition multiple times within a single hour.
Elastic Beanstalk simplifies this process by managing the Amazon SQS queue and running a daemon process on each instance that reads from the queue for you. When the daemon pulls an item from the queue, it sends an HTTP POST request locally to http://localhost/ with the contents of the queue message in the body. All that your application needs to do is perform the long-running task in response to the POST. You can configure the daemon to post to a different path, use a MIME type other than application/JSON, connect to an existing queue, or customize connections, timeouts, and retries.
Amazon SQS message lifecycle
Messages stored in Amazon SQS have a lifecycle that is easy to manage but that guarantees every message gets processed.
A system that needs to send a message selects an Amazon SQS queue and uses SendMessage to send a new message to it.
Another system, which processes messages, needs more messages to process: it calls ReceiveMessage, and that message is returned.
Once a message has been returned by ReceiveMessage, it will not be returned by any other ReceiveMessage request until the visibility timeout has expired. This prevents multiple consumers from processing the same message at the same time.
If the consuming system successfully finishes processing the message, it calls DeleteMessage, which removes the message from the queue so that no one else processes it. If the system fails to process the message, it will be read by another ReceiveMessage call as soon as the visibility timeout expires.
If you have associated a dead-letter queue with a source queue, messages are moved to the dead-letter queue once the number of receive attempts you configured has been reached.
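The same lifecycle as a boto3 sketch (the queue URL is a placeholder and process() is a hypothetical handler):
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue"  # placeholder

# 1. A producer sends a message
sqs.send_message(QueueUrl=queue_url, MessageBody="do something")

# 2. A consumer receives it; it becomes invisible to other consumers
#    until the visibility timeout expires
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)

for message in response.get("Messages", []):
    process(message["Body"])  # hypothetical handler
    # 3. Processing succeeded: delete the message so nobody else processes it.
    #    Without this call, it reappears once the visibility timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])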
Beanstalk doesn't let you define an auto scaling trigger for worker environments based on the number of messages in the SQS queue
It is possible though, but it takes a bit of work:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
https://forums.aws.amazon.com/thread.jspa?messageID=722589
It can probably be integrated into Beanstalk with .ebextensions (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html)
Besides being a bit complex and not really integrated with Beanstalk, it doesn't look very responsive; this article talks about it and offers a solution:
Rapid Auto Scaling with Amazon SQS: https://aws.amazon.com/blogs/aws/auto-scaling-with-sqs/
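The gist of those links, roughly: publish the queue backlog yourself and alarm on it. A sketch of the first half with boto3 (queue URL, namespace and metric name are made up); a CloudWatch alarm and scaling policy then hang off this custom metric, like in the example above:
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")

# Number of messages waiting in the worker queue
attrs = sqs.get_queue_attributes(
    QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/my-worker-queue",
    AttributeNames=["ApproximateNumberOfMessages"],
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Publish it as a custom metric; an alarm on this metric can then drive
# the scaling policies of the worker environment
cloudwatch.put_metric_data(
    Namespace="MyWorker",
    MetricData=[{"MetricName": "QueueBacklog", "Value": backlog, "Unit": "Count"}],
)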
Since yesterday, February 22nd 2017
seems cool and powerful
Since December 22nd 2016 you are able to configure the application version lifecycle in Beanstalk.
Very convenient, we can trash our custom API cleaning scripts :-)
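If I remember the API correctly, the same thing can be set from boto3 with update_application_resource_lifecycle; a sketch, with application name, role ARN and limit made up:
import boto3

eb = boto3.client("elasticbeanstalk")

# Keep at most 50 application versions and delete the source bundle from S3 as well
eb.update_application_resource_lifecycle(
    ApplicationName="my-app",
    ResourceLifecycleConfig={
        "ServiceRole": "arn:aws:iam::123456789012:role/aws-elasticbeanstalk-service-role",
        "VersionLifecycleConfig": {
            "MaxCountRule": {"Enabled": True, "MaxCount": 50, "DeleteSourceFromS3": True},
        },
    },
)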
The default listener (port 80) is enabled by default; to disable it:
aws:elb:listener:
  ListenerEnabled: false