import boto3

ec2 = boto3.resource("ec2")
subnets = ec2.subnets.all()
# sort subnets by the value of their "Name" tag
subnets_sorted = sorted(subnets, key=lambda s: next(t["Value"] for t in s.tags if t["Key"] == "Name"))
Well, my Python level is not good enough to clearly understand this, but my Google level was more than enough to build it
Easy one?
Not even close
openssl s_client -connect www.cyberciti.biz:443
As the included diagram portrays, in a common workflow the worker instance consumes messages sent to a specified Amazon SQS queue from another service (e.g., a web server or another worker). These messages are delivered to the worker as POST requests. This eliminates the need to configure the worker as an always-on service, as well as having to add code for reading and consuming messages from an SQS queue. In other words, the worker is implemented as a standard RESTful API/service that reacts to a message sent to it at a specific endpoint via a POST request. This is an awesome approach by Amazon to microservices and reactive design.
The conversion of the SQS message to a POST request is performed by what AWS calls the "SQS Daemon" or "Sqsd". This is a simple daemon they pre-install on the worker tier instances that constantly monitors a specific AWS SQS queue (provided via configuration) for new messages. When new messages arrive, it builds a POST request and sends it to a specific endpoint (also provided via configuration). If the endpoint consumes it without errors and returns a 2xx HTTP status code in its response, the "Sqsd" deletes the message from the queue to signal that its consumption was successful.
However, even though this approach is extremely powerful, Amazon does not provide the code of this daemon as open source. Therefore, we have reproduced its behavior by creating our own version of the "Sqsd", free for everyone to use. Moreover, we have provided lots of customization and configuration properties so that it can be molded to your specific use cases.
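A minimal sketch of that receive → POST → delete loop in Python, using boto3 and requests (the queue URL and endpoint here are placeholders, not the real sqsd configuration):

import boto3
import requests

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/worker-queue"  # placeholder
ENDPOINT = "http://localhost/"  # placeholder: the endpoint the daemon posts to

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        # POST the message body to the worker endpoint, like sqsd does
        r = requests.post(ENDPOINT, data=msg["Body"], headers={"Content-Type": "application/json"})
        if 200 <= r.status_code < 300:
            # 2xx response: consumption succeeded, delete the message from the queue
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])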
Don't give your app user the S3 full-access policy
Prefer to allow access only to a specific bucket
I wonder what's the best choice: create a managed policy or simply use an inline policy. I have a 1-to-1 relationship between my app users and buckets, so... an inline policy looks good here
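For illustration, a minimal boto3 sketch of attaching such a bucket-scoped inline policy to an app user (the user, bucket and policy names are made up):

import json
import boto3

iam = boto3.client("iam")

# inline policy scoped to a single bucket instead of s3:* on everything
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
         "Resource": "arn:aws:s3:::my-app-bucket/*"},
        {"Effect": "Allow",
         "Action": "s3:ListBucket",
         "Resource": "arn:aws:s3:::my-app-bucket"},
    ],
}

iam.put_user_policy(
    UserName="my-app-user",
    PolicyName="my-app-bucket-access",
    PolicyDocument=json.dumps(policy),
)

An inline policy lives and dies with its user, which fits a strict 1-to-1 user/bucket mapping; a managed policy would make more sense if several users shared the same bucket.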
spoiler alert: auto scaling is far from magical
as usual with AWS, everything is an object, and for auto scaling you get several objects and several links between them
First you define a launch configuration: what type of machine you want to launch
Then an auto scaling group: it will use the launch configuration to create new EC2 instances
In this auto scaling group, you have to define auto scaling policies, i.e. what to do (remove or add x instances), and link them to a CloudWatch alarm (CPU is high, or network, or whatever CloudWatch monitors)
the auto scaling group can also be linked to an ELB, so when EC2 instances are added/removed they are also registered/deregistered from the ELB
A lot of work to do..
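To make those objects and links concrete, here is a minimal boto3 sketch wiring the four pieces together (all names, the AMI id and the threshold are assumptions):

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# 1. launch configuration: what type of machine to launch
autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-launch-config",
    ImageId="ami-12345678",  # placeholder AMI
    InstanceType="t2.micro",
)

# 2. auto scaling group: uses the launch configuration to create new EC2
#    instances, and registers/deregisters them with the ELB
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchConfigurationName="my-launch-config",
    MinSize=1,
    MaxSize=4,
    AvailabilityZones=["eu-west-1a"],
    LoadBalancerNames=["my-elb"],
)

# 3. scaling policy: what to do (here, add one instance)
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="scale-out",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)

# 4. CloudWatch alarm: triggers the policy when average CPU stays above 70%
cloudwatch.put_metric_alarm(
    AlarmName="cpu-high",
    MetricName="CPUUtilization",
    Namespace="AWS/EC2",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
    AlarmActions=[policy["PolicyARN"]],
)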
Example: I change the auto scaling trigger in the web UI. After a few seconds, my auto scaling trigger is changed (checked).
But if I save the Beanstalk environment configuration to a file, the change I made is not there :-(
Maybe working with the UI is not the best option, but it's very convenient to test stuff without terminating/recreating the environment from scratch (which, by the way, is what I have to do when I commit changes to the files, checking that the behaviour stays the same)
A lot of time lost, but it's the price of having a reproducible environment and 'infrastructure as code'
Each time you start a stopped instance we charge a full instance hour, even if you make this transition multiple times within a single hour.
I'm happy with vimdiff (colorscheme Murphy) but why not!
Elastic Beanstalk simplifies this process by managing the Amazon SQS queue and running a daemon process on each instance that reads from the queue for you. When the daemon pulls an item from the queue, it sends an HTTP POST request locally to http://localhost/ with the contents of the queue message in the body. All that your application needs to do is perform the long-running task in response to the POST. You can configure the daemon to post to a different path, use a MIME type other than application/json, connect to an existing queue, or customize connections, timeouts, and retries.
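On the application side, the worker can be as small as a single POST handler; a minimal Flask sketch (do_long_running_task is a hypothetical placeholder for the actual work):

from flask import Flask, request

app = Flask(__name__)

# the daemon POSTs each queue message here; a 2xx response tells it
# the message can be deleted from the queue
@app.route("/", methods=["POST"])
def worker():
    payload = request.get_json(force=True)  # message body, assumed to be JSON
    do_long_running_task(payload)           # hypothetical placeholder for the real work
    return "", 200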