If you created a presigned URL by using a temporary token, then the URL expires when the token expires, even if you created the URL with a later expiration time. For more information about how the credentials you use affect the expiration time, see Who can create a presigned URL.
So you have to use a regular IAM user instead of an IAM role for a service that generates presigned URLs..? :-/
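A minimal sketch of the effect (bucket and key names are made up):
aws s3 presign s3://my-bucket/my-key --expires-in 604800
# If this runs under assumed-role (temporary) credentials, the URL stops
# working when the session expires, even though --expires-in asked for 7 days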
"Create Amazon MemoryDB Cluster Instances#
You can create Amazon MemoryDB Clusters using the Cluster custom resource"
They really named their CRD "Cluster"?? :facepalm:
AWS documentation index; for each product doc you can get an RSS feed
Note to self: be careful about which headers you forward to the origin in this specific case, because it can cause authentication problems between CloudFront and the S3 bucket
Interesting for pulling AWS CloudWatch metrics into Datadog faster
To run a CLI command from within an Amazon Elastic Compute Cloud (Amazon EC2) instance or an Amazon Elastic Container Service (Amazon ECS) container, you can use an IAM role attached to the instance profile or the container. If you specify no profile or set no environment variables, that role is used directly. This enables you to avoid storing long-lived access keys on your instances. You can also use those instance or container roles only to get credentials for another role. To do this, you use credential_source (instead of source_profile) to specify how to find the credentials. The credential_source attribute supports the following values:
Environment – Retrieves the source credentials from environment variables.
Ec2InstanceMetadata – Uses the IAM role attached to the Amazon EC2 instance profile.
EcsContainer – Uses the IAM role attached to the Amazon ECS container.
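For illustration, a minimal ~/.aws/config sketch (profile name, account ID and role name are assumptions):
[profile deploy]
role_arn = arn:aws:iam::123456789012:role/deploy-role
credential_source = Ec2InstanceMetadata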
A container to emulate the metadata API locally, and thus assume a role
https://github.com/awslabs/amazon-ecs-local-container-endpoints
Manage AWS resources with a Kubernetes controller provided by AWS
Another simple solution would be to write a custom middleware that answers the ELB health check before ALLOWED_HOSTS is checked. That way you don't have to load ALLOWED_HOSTS dynamically.
The middleware can be as simple as:
project/app/middleware.py
from django.http import HttpResponse
from django.utils.deprecation import MiddlewareMixin

class HealthCheckMiddleware(MiddlewareMixin):
    def process_request(self, request):
        # Returning a response here short-circuits the remaining middleware,
        # so the ALLOWED_HOSTS host check never runs for the health check path
        if request.META["PATH_INFO"] == "/ping/":
            return HttpResponse("pong")
settings.py
MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',
    'app.middleware.HealthCheckMiddleware',
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    ...
]
Django Middleware reference https://docs.djangoproject.com/en/dev/topics/http/middleware/
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-monitoring.html#enable-as-instance-metrics
https://www.terraform.io/docs/providers/aws/r/appautoscaling_policy.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html
By default, the temporary credentials handed out by an assume-role are valid for 1 hour.
That's a bit short in dev; to increase this period (see the sketch below):
1) on the role in question, allow requesting more (raise its max session duration)
2) when doing the assume-role from the CLI, pass a parameter to request more
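A minimal sketch, assuming we raise the role's max session duration to 4 hours (role name and account ID are made up):
# 1) allow the role to hand out longer sessions
aws iam update-role --role-name my-dev-role --max-session-duration 14400
# 2) ask for the longer duration when assuming the role
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/my-dev-role \
  --role-session-name dev-session \
  --duration-seconds 14400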
For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port. A UDP flow has the same source and destination, so it is consistently routed to a single target throughout its lifetime. Different UDP flows have different source IP addresses and ports, so they can be routed to different targets.
Your task definition contains a parameter that requires a specific container instance attribute that is not available on your container instances. For example, if your task uses the awsvpc network mode, but there are no instances in your specified subnets with the ecs.capability.task-eni attribute. For more information about which attributes are required for specific task definition parameters and agent configuration variables, see Task Definition Parameters and Amazon ECS Container Agent Configuration.
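To check which of your container instances expose a given attribute, something like this should work (cluster name is an assumption):
aws ecs list-attributes --cluster my-cluster \
  --target-type container-instance \
  --attribute-name ecs.capability.task-eni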
To be done as superuser:
-- Revoke privileges from 'public' role
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON DATABASE mydatabase FROM PUBLIC;
-- Create schema
CREATE SCHEMA myschema;
-- Read-only role
CREATE ROLE readonly;
GRANT CONNECT ON DATABASE mydatabase TO readonly;
GRANT USAGE ON SCHEMA myschema TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO readonly;
-- Read/write role
CREATE ROLE readwrite;
GRANT CONNECT ON DATABASE mydatabase TO readwrite;
GRANT USAGE, CREATE ON SCHEMA myschema TO readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA myschema TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA myschema TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT USAGE ON SEQUENCES TO readwrite;
-- Users creation
CREATE USER reporting_user1 WITH PASSWORD 'some_secret_passwd';
CREATE USER reporting_user2 WITH PASSWORD 'some_secret_passwd';
CREATE USER app_user1 WITH PASSWORD 'some_secret_passwd';
CREATE USER app_user2 WITH PASSWORD 'some_secret_passwd';
-- Grant privileges to users
GRANT readonly TO reporting_user1;
GRANT readonly TO reporting_user2;
GRANT readwrite TO app_user1;
GRANT readwrite TO app_user2;
0.25 vCPU + 0.5 GB = $9.01 (1 month)
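The figure checks out against the Fargate rates at the time (us-east-1: $0.04048 per vCPU-hour, $0.004445 per GB-hour), over ~730 hours a month:
0.25 vCPU × $0.04048 × 730 h ≈ $7.39
0.50 GB × $0.004445 × 730 h ≈ $1.62
≈ $9.01 total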
The number of pods per worker node depends on the EC2 instance type used, which determines how many secondary interfaces are available and how many IPs can be allocated on each interface.
Example with a t2.small: there are 2 secondary interfaces, and 4 IPs can be allocated on each one. So at most 8 pods can run on a t2.small.
You should also know that an EKS cluster runs a few pods out of the box: the aws-node and kube-proxy DaemonSets, plus coredns.
That amounts to 2 pods used on every worker node (because of the two DaemonSets) and 2 extra pods running somewhere on the cluster (for coredns).
You also have to count one pod for the dashboard, one for the metrics-server, and probably 2 pods for external-dns, not counting the ingresses.
All this to say that it's not easy to run a "small" EKS cluster: you quickly hit the IP limit, which is quite low on the most modest EC2 instances, and you quickly end up launching extra EC2 instances just to get more IPs.
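To see those default pods on a fresh cluster:
kubectl -n kube-system get daemonsets   # aws-node and kube-proxy
kubectl -n kube-system get deployment coredns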
Use a role in a script
SYNOPSIS
  get-login
    [--registry-ids <value> [<value>...]]
    [--include-email | --no-include-email]
OPTIONS
  --registry-ids (string) A list of AWS account IDs that correspond to
    the Amazon ECR registries that you want to log in to.
  --include-email | --no-include-email (boolean) Specify if the '-e' flag
    should be included in the 'docker login' command. The '-e' option has
    been deprecated and is removed in docker version 17.06 and later. You
    must specify --no-include-email if you're using docker version 17.06 or
    later. The default behavior is to include the '-e' flag in the 'docker
    login' output.
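Typical usage with docker >= 17.06 (the region is an assumption); get-login prints a docker login command, so you can execute its output directly:
$(aws ecr get-login --no-include-email --region eu-west-1)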
for user in $(aws iam list-users | jq -r '.Users[].UserName'); do echo "$user"; aws iam list-user-policies --user-name "$user"; done
To apply this lifecycle rule to all objects in the bucket, choose Next.
That's why the wildcard was not working :D
Amazon Elasticsearch access control can be based on an IAM account, using a signed-request mechanism
One way to avoid rewriting all your applications is to use such a proxy
aws dynamodb scan --table-name foo
aws dynamodb delete-item --table-name foo --key "{\"id\":{\"S\":\"$id\"}}"
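Combining the two, a sketch to empty the table (assumes the hash key is a string attribute named "id", as in the commands above):
for id in $(aws dynamodb scan --table-name foo --query 'Items[].id.S' --output text); do
  aws dynamodb delete-item --table-name foo --key "{\"id\":{\"S\":\"$id\"}}"
done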
You can "put back", or rather make available again, an SQS message by changing its visibility timeout to 0
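A sketch (queue URL and receipt handle are placeholders):
aws sqs change-message-visibility \
  --queue-url https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue \
  --receipt-handle "$receipt_handle" \
  --visibility-timeout 0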
aws s3 ls s3://bucket/path/ --recursive --summarize | grep "Total Objects:"
Silly gotcha: you can't feed the output of get-repository-policy straight into set-repository-policy to clone a policy.
You also have to strip the stray \n characters from the response
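A sketch, assuming repositories "src" and "dst": policyText comes back as an escaped JSON string, so jq re-parses and compacts it, which drops the \n
aws ecr get-repository-policy --repository-name src \
  | jq -c '.policyText | fromjson' > policy.json
aws ecr set-repository-policy --repository-name dst --policy-text file://policy.json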
So you can put your data into Glacier in two different ways:
1) directly to Glacier via the API
2) store it in S3, then a lifecycle management policy moves it to Glacier (see the sketch after this list)
Warning: huge costs when you download from Glacier and when you delete data less than 3 months old
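A sketch for option 2 (bucket name, rule ID and the 30-day delay are assumptions):
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "to-glacier",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }]
  }'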
aws efs describe-file-systems| jq '.FileSystems|.[]|[.Name, .SizeInBytes.Timestamp, .SizeInBytes.Value]' -c
Returns one line per EFS
On each line, an array with:
[0] = name of the EFS
[1] = timestamp of when the size was computed
[2] = the size in bytes
To get the size in GB:
aws efs describe-file-systems| jq '.FileSystems|.[]|[.Name, .SizeInBytes.Timestamp, .SizeInBytes.Value / 1024 /1024 / 1024]' -c
aws efs describe-file-systems| jq '.FileSystems|.[]|[.Name, .SizeInBytes.Value / 1024 /1024 / 1024]' -c
If you enable the S3 endpoint in your route table, it's kind of tricky to know whether the endpoint is really working. Two things to validate:
1) traceroute over TCP before and after (traceroute -T s3-us-west-1.amazonaws.com 443)
You will see more hops when the endpoint is not activated
2) try an S3 sync cross-region with the endpoint activated: it should fail since it's not supported (yet, as of 2017-05-02)