Arnaud's links
156 results tagged aws
  • Configuring the AWS Command Line Interface - AWS Command Line Interface

    Handy when you have to juggle several AWS accounts (a profile-switching sketch follows below).

    And the equivalent for s3cmd: http://mikesisk.tumblr.com/post/8703449578/s3cmd-and-multiple-accounts

    April 21, 2017 at 12:01:30 PM GMT+2 - permalink - archive.org - http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles
    aws profile
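
    A minimal sketch of switching profiles from boto3 (the "dev" profile name is hypothetical; any profile defined in ~/.aws/credentials works):

    ```
    import boto3

    # Use the credentials of the [dev] profile from ~/.aws/credentials
    session = boto3.Session(profile_name="dev")
    s3 = session.client("s3")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])
    ```

    On the CLI side the same profile is picked with `aws --profile dev ...` or by exporting AWS_PROFILE.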
  • wallix/awless: A Mighty CLI for AWS
    April 10, 2017 at 3:39:46 PM GMT+2 - permalink - archive.org - https://github.com/wallix/awless
    aws cmd
  • s3fs-fuse/s3fs-fuse: FUSE-based file system backed by Amazon S3

    I may have to use this one, but the number of open issues is freaking me out

    March 30, 2017 at 9:47:14 AM GMT+2 - permalink - archive.org - https://github.com/s3fs-fuse/s3fs-fuse
    aws fs s3 s3fs
  • localstack/README.md at master · atlassian/localstack · GitHub

    Good boy atlassian!

    March 27, 2017 at 5:14:56 PM GMT+2 - permalink - archive.org - https://github.com/atlassian/localstack/blob/master/README.md
    atlassian aws
  • Note: boto3 subnet sorted by name

    # ec2 is assumed to be a boto3.resource("ec2")
    subnets = ec2.subnets.all()
    # sort the subnets by the value of their "Name" tag
    subnets_sorted = sorted(subnets, key=lambda k: k.tags[next(index for (index, d) in enumerate(k.tags) if d["Key"] == "Name")]['Value'])

    Well, my Python level is not good enough to clearly understand this, but my Google level was more than enough to put it together. A clearer step-by-step equivalent is sketched below.

    March 24, 2017 at 4:02:34 PM GMT+1 - permalink - archive.org - https://links.infomee.fr/?Fn--jw
    aws boto boto3 python
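
    A clearer, equivalent way to write it, assuming ec2 is a boto3.resource("ec2") and every subnet has a Name tag:

    ```
    import boto3

    ec2 = boto3.resource("ec2")

    def name_tag(subnet):
        # Return the value of the subnet's "Name" tag (empty string if missing)
        for tag in subnet.tags or []:
            if tag["Key"] == "Name":
                return tag["Value"]
        return ""

    subnets_sorted = sorted(ec2.subnets.all(), key=name_tag)
    ```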
  • mozart-analytics/sqsd: A simple alternative to the Amazon SQS Daemon ("sqsd") used on AWS Beanstalk worker tier instances.

    As the included diagram portrays, in a common workflow, the worker instance will consume messages sent to a specified Amazon SQS queue from another service (e.g.: a web server or another worker). These messages will be received by the worker via POST requests. This eliminates the necessity of configuring a worker as an always-on service, as well as having to add code for reading and consuming messages from an AWS SQS queue. In other words, the worker is implemented as a standard RESTful API/Service that will react to a message sent to it at a specific endpoint via a POST request. This is an awesome approach by Amazon to microservices and reactive design.

    The conversion of the SQS message to a POST request is executed by what AWS calls the "SQS Daemon" or "Sqsd". This is a simple daemon they pre-install in the worker tier instances that constantly monitors a specific AWS SQS queue (provided by configuration) for new messages. When new messages arrive, it constructs a POST request and sends it to a specific endpoint (also provided via configuration). If the endpoint consumes it without errors and returns a 2** HTTP code in its response, the "Sqsd" deletes the message from the queue to signal that its consumption was successful.

    However, even though this approach is extremely powerful, Amazon does not provide the code of this daemon as open source. Therefore, we have reproduced its behavior by creating our own version of the "Sqsd" free for everyone to use. Moreover, we have provided lots of customization and configuration properties so that it can be molded to your specific use cases.

    March 16, 2017 at 8:31:04 AM GMT+1 - permalink - archive.org - https://github.com/mozart-analytics/sqsd
    aws beanstalk sqs
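
    A rough sketch of what such a daemon does, using boto3 and requests (the queue URL and endpoint are hypothetical; the real sqsd also handles headers, timeouts and retries):

    ```
    import boto3
    import requests

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue"  # hypothetical
    ENDPOINT = "http://localhost/"  # the worker endpoint that handles the POST

    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            # Forward the SQS message body to the worker as a POST request
            r = requests.post(ENDPOINT, data=msg["Body"])
            if 200 <= r.status_code < 300:
                # 2xx response: consumption succeeded, drop the message from the queue
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    ```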
  • thumbnail
    Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket | AWS Security Blog

    Don't give the full-access S3 policy to your app user.
    Prefer allowing access to a specific bucket only.

    I wonder what the best choice is: create a managed policy or simply use an inline policy. I have a 1-to-1 relationship between my app users and buckets, so an inline policy looks good here (a sketch follows below).

    March 15, 2017 at 11:00:52 AM GMT+1 - permalink - archive.org - https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/
    aws policy s3
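
    A sketch of attaching such a bucket-scoped inline policy with boto3 (the user, policy and bucket names are hypothetical):

    ```
    import json

    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # list the bucket itself
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::my-app-bucket",
            },
            {   # read/write objects inside the bucket
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::my-app-bucket/*",
            },
        ],
    }

    iam.put_user_policy(
        UserName="my-app-user",
        PolicyName="my-app-bucket-access",
        PolicyDocument=json.dumps(policy),
    )
    ```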
  • Note: aws auto scaling

    Spoiler alert: auto scaling is far from magical.

    As usual with AWS, everything is an object, and for auto scaling you get several objects and several links between them (sketched below):
    First you define a launch configuration: what type of machine you want to launch.
    Then an Auto Scaling group: it uses the launch configuration to create new EC2 instances.
    In this Auto Scaling group you define auto scaling policies, i.e. what to do (remove or add x instances), and link them to a CloudWatch alarm (CPU is high, or network, or whatever CloudWatch monitors).
    The Auto Scaling group can also be linked to an ELB, so when EC2 instances are added/removed they are also registered/deregistered from the ELB.

    March 13, 2017 at 3:49:45 PM GMT+1 - permalink - archive.org - https://links.infomee.fr/?JIGagw
    auto aws scale scaling
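
    Roughly how those objects fit together in boto3 (all names, the AMI, the zone and the sizes are hypothetical):

    ```
    import boto3

    autoscaling = boto3.client("autoscaling")

    # 1. Launch configuration: what type of machine to launch
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="my-launch-config",
        ImageId="ami-12345678",      # hypothetical AMI
        InstanceType="t2.micro",
    )

    # 2. Auto Scaling group: uses the launch configuration to create new EC2
    #    instances, and registers/deregisters them from the ELB automatically
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="my-asg",
        LaunchConfigurationName="my-launch-config",
        MinSize=1,
        MaxSize=4,
        AvailabilityZones=["eu-west-1a"],
        LoadBalancerNames=["my-elb"],
    )

    # 3. Scaling policy: what to do (here, add one instance)
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",
        PolicyName="scale-out",
        ScalingAdjustment=1,
        AdjustmentType="ChangeInCapacity",
    )

    # 4. A CloudWatch alarm (CPU, network, ...) then triggers the policy via
    #    AlarmActions=[policy["PolicyARN"]]; a full put_metric_alarm call is
    #    shown in the SQS-based scaling entry further down.
    ```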
  • Stop and Start Your Instance - Amazon Elastic Compute Cloud

    Each time you start a stopped instance we charge a full instance hour, even if you make this transition multiple times within a single hour.

    March 9, 2017 at 9:37:58 AM GMT+1 - permalink - archive.org - http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
    aws pricing
  • GitHub - capitalone/cloud-custodian: Rules engine for AWS management, DSL in yaml for query, filter, and actions on resources
    March 7, 2017 at 10:30:37 PM GMT+1 - permalink - archive.org - https://github.com/capitalone/cloud-custodian
    aws policy
  • Worker Environments - AWS Elastic Beanstalk

    Elastic Beanstalk simplifies this process by managing the Amazon SQS queue and running a daemon process on each instance that reads from the queue for you. When the daemon pulls an item from the queue, it sends an HTTP POST request locally to http://localhost/ with the contents of the queue message in the body. All that your application needs to do is perform the long-running task in response to the POST. You can configure the daemon to post to a different path, use a MIME type other than application/JSON, connect to an existing queue, or customize connections, timeouts, and retries.

    March 2, 2017 at 9:15:10 PM GMT+1 - permalink - archive.org - http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
    aws beanstalk sqs worker
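
    On the application side, all that is needed is an endpoint that answers the POST with a 2xx; a minimal sketch with Flask (the route and the task function are hypothetical):

    ```
    from flask import Flask, request

    app = Flask(__name__)

    def do_long_running_task(payload):
        ...  # hypothetical long-running work

    @app.route("/", methods=["POST"])
    def handle_queue_message():
        # The sqsd daemon posts the SQS message body here
        payload = request.get_json(force=True)
        do_long_running_task(payload)
        return "", 200  # a 2xx response tells the daemon to delete the message
    ```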
  • Amazon Simple Queue Service (SQS) detailed description – Amazon Web Services (AWS)

    Amazon SQS message lifecycle

    Messages stored in Amazon SQS have a lifecycle that is easy to manage but guarantees that all messages are processed.

    A system that needs to send a message selects an Amazon SQS queue and uses SendMessage to send a new message to it.
    Another system that processes messages needs more messages to process: it calls ReceiveMessage, and that message is returned.
    Once a message has been returned by ReceiveMessage, it will not be returned by any other ReceiveMessage request until the visibility timeout has expired. This keeps multiple consumers from processing the same message at the same time.
    If the system processing the message completes it successfully, it calls DeleteMessage, which removes the message from the queue so that no one else processes it. If it fails to process the message, the message will be read by another ReceiveMessage call as soon as the visibility timeout expires.
    If you have associated a dead-letter queue with a source queue, messages are moved to the dead-letter queue once the number of receive attempts you have configured has been reached.
    March 2, 2017 at 9:12:54 PM GMT+1 - permalink - archive.org - https://aws.amazon.com/fr/sqs/details/
    amazon aws sqs
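
    The producer side and the two queue settings mentioned above (visibility timeout and dead-letter queue), roughly, in boto3 (queue URL and DLQ ARN are hypothetical):

    ```
    import json

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue"  # hypothetical
    dlq_arn = "arn:aws:sqs:eu-west-1:123456789012:my-queue-dlq"              # hypothetical

    # SendMessage: push a new message onto the queue
    sqs.send_message(QueueUrl=queue_url, MessageBody="hello")

    # Visibility timeout: how long a received message stays hidden from other
    # ReceiveMessage calls before it becomes visible again
    sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"VisibilityTimeout": "60"})

    # Dead-letter queue: after 5 failed receives the message is moved to the DLQ
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",
        })},
    )
    ```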
  • Scaling Based on Amazon SQS - Auto Scaling

    Beanstalk does not let you define an auto scaling trigger based on the number of items in the SQS queue for worker-tier environments.

    It is still possible, but it takes a bit of work:

    • create scaling policies that add or remove instances
    • create CloudWatch alarms that fire on the number of items in the queue and invoke those scaling policies (a sketch follows below)

    http://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
    https://forums.aws.amazon.com/thread.jspa?messageID=722589

    This can probably be integrated into Beanstalk with .ebextensions (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html)

    Besides being a bit complex and not really integrated with Beanstalk, it does not look very responsive either; this article discusses the issue and offers a solution:
    Rapid Auto Scaling with Amazon SQS: https://aws.amazon.com/blogs/aws/auto-scaling-with-sqs/

    February 28, 2017 at 11:16:08 AM GMT+1 - permalink - archive.org - http://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
    auto aws beanstalk scaling sqs
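
    A sketch of the two pieces in boto3; the Auto Scaling group is assumed to be the one Beanstalk created for the worker environment, and all names and thresholds are hypothetical:

    ```
    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Scaling policy on the worker environment's Auto Scaling group: add one instance
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="awseb-my-worker-env-AutoScalingGroup",  # hypothetical
        PolicyName="scale-out-on-queue-depth",
        ScalingAdjustment=1,
        AdjustmentType="ChangeInCapacity",
    )

    # CloudWatch alarm on the number of visible messages in the queue,
    # wired to the scaling policy above
    cloudwatch.put_metric_alarm(
        AlarmName="worker-queue-backlog",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "my-worker-queue"}],  # hypothetical
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )
    ```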
  • Configure Connection Draining for Your Classic Load Balancer - Elastic Load Balancing
    February 24, 2017 at 11:57:55 AM GMT+1 - permalink - archive.org - https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-conn-drain.html
    aws elb
  • Configure the Idle Connection Timeout for Your Classic Load Balancer - Elastic Load Balancing
    February 24, 2017 at 11:57:42 AM GMT+1 - permalink - archive.org - https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html
    aws elb
  • Custom Platforms - AWS Elastic Beanstalk

    Since yesterday, February 22, 2017

    seems cool and powerful

    February 23, 2017 at 10:04:35 AM GMT+1 * - permalink - archive.org - http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/custom-platforms.html
    aws beanstalk
  • Note: How to know which policy contains a specific action?
    Warning: this loop makes a lot of calls to the AWS API; use it with caution.

    To find that out, I needed to list all policies and their associated statements (for the default policy version):

    ```
    #!/bin/bash
    # Split the loop input on newlines only (one policy per line)
    IFS=$'\n'
    # List every policy as "name arn default-version-id"
    for line in $(aws iam list-policies|jq '.Policies|.[]|[ .PolicyName, .Arn, .DefaultVersionId ]| @csv' -r|sed 's/","/ /g'|sed 's/"//g'); do
        name=$(echo $line|cut -d' ' -f1);
        arn=$(echo $line|cut -d' ' -f2);
        version=$(echo $line|cut -d' ' -f3);
        echo "$name"
        # Dump the statements of the default version of this policy
        aws iam get-policy-version --policy-arn $arn --version-id $version
    done
    ```

    Put this in a script, redirect the output to a file, and grep away!
    February 22, 2017 at 4:16:06 PM GMT+1 * - permalink - archive.org - https://links.infomee.fr/?bERNcg
    aws bash for foreach iam policy separator
  • Configuring Application Version Lifecycle Settings - AWS Elastic Beanstalk

    Since December 22nd, 2016 you can configure the application version lifecycle in Beanstalk.

    Very convenient; we can trash our custom API cleanup scripts :-)

    February 22, 2017 at 3:13:17 PM GMT+1 - permalink - archive.org - http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-lifecycle.html
    aws beanstalk
  • General Options for All Environments - AWS Elastic Beanstalk

    The default listener (port 80) is enabled by default; to disable it:
    aws:elb:listener:
        ListenerEnabled: false

    February 20, 2017 at 4:44:59 PM GMT+1 - permalink - archive.org - http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elblistener
    aws beanstalk
  • Amazon EBS Update – New Elastic Volumes Change Everything | AWS Blog

    aws <3

    February 14, 2017 at 1:33:04 PM GMT+1 - permalink - archive.org - https://aws.amazon.com/blogs/aws/amazon-ebs-update-new-elastic-volumes-change-everything/
    aws ebs