Force role session name in AWS IAM

I want to force users to set a role session name when assuming a role in AWS. I’ve tried the following condition in IAM policies:

      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<redacted>:root"
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "sts:RoleSessionName": "${aws:username}"

While this works fine when assuming a role as a user, it does not work when assuming a role from a role with administrator privileges (Allow * on *). The only way to block this would be an explicit Deny when a role tries to assume a given role without a session name set. Any ideas on how to write a policy that does this? A simple Deny like the one below does not work, because aws:username is not present when an assumed role is the principal (see the docs).

      "Effect": "Deny",
      "Principal": {
        "AWS": "arn:aws:iam::<redacted>:root"
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringNotLike": {
          "sts:RoleSessionName": "${aws:username}"

Go to Source
Author: pmichna

Deployment of Docker containers on ECS (TCP server, Java)

The given problem:

  • Created a Java TCP server with Eclipse Vert.x (a toolkit for building
    reactive, event-driven applications on the JVM). The application has
    two TCP servers, each listening on a fixed port:

  • I created a Docker image and ran the application inside a container;
    everything works fine locally.

  • I want to deploy the Docker container on ECS (Amazon Elastic
    Container Service) and use a Network Load Balancer. The question is:
    how can I assign different ports to different containers if the port
    is set explicitly in the Java code?

//tcp server 1 listens on port 5050

  • And how does ECS manage which ports get assigned to which container?

    The TCP client always targets one IP address and one port number;
    that cannot be changed.
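The source leaves this open, but the usual pattern can be sketched. With the EC2 launch type and bridge networking, setting `hostPort` to 0 lets ECS assign an ephemeral host port to each container, and the load balancer’s target group registers those ports automatically, so the client still targets only the fixed listener port on the NLB. A minimal task-definition fragment, with image and names as illustrative assumptions:

```json
{
  "family": "tcp-server",
  "containerDefinitions": [
    {
      "name": "tcp-server-1",
      "image": "myrepo/tcp-server:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 5050, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}
```

The NLB listener keeps its fixed port; only the target registrations behind it use the dynamic host ports.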


Go to Source
Author: user581026

AWX cannot find matching host

I have an on-premise AWX instance and an inventory (instances on AWS). To provision newly launched instances I want to use provisioning callbacks.
But when executing this command from the instance

$ curl -v -H 'Content-Type:application/json' -d '{"host_config_key": "key1"}'

an error occurs

{"msg":"No matching host could be found!"}

What might be wrong with the AWX configuration, given that the host is present in the dynamic inventory and the UPDATE ON LAUNCH flag is enabled?

Go to Source
Author: Most Wanted

S3 Bucket Policy: require KMS for all files except one

I have an S3 bucket that contains sensitive data, so I want to ensure that any objects put into the bucket are encrypted with a specific KMS key. I’m doing this already with a bucket policy statement and it works well:

    "Sid": "DenyWhenWrongCMK",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:REDACTED"

But I really want to create one exception to this Deny rule. One thing I like doing with my S3 buckets is putting a file in the root directory, so that when future maintainers go looking around, they have documentation about the original intent and purpose. This works best if that file is not encrypted with a CMK (Customer Managed Key), so I want to make an exception to the rule above.

But Deny statements have precedence over Allow statements in S3 bucket policy, so there doesn’t seem to be any way for me to make an exception here. Am I missing something? Is there any way to enforce usage of a specific KMS CMK for all files except one?
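The source doesn’t resolve this, but one possible approach can be sketched: rather than carving out the exception in an Allow (which a Deny would override), scope the Deny itself with `NotResource` so it never applies to the one unencrypted object. The `README.txt` key below is an assumption for illustration:

```json
{
    "Sid": "DenyWhenWrongCMK",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:PutObject",
    "NotResource": "arn:aws:s3:::mybucket/README.txt",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:REDACTED"
        }
    }
}
```

In a Deny, `NotResource` makes the statement apply to every object except the listed key, so the KMS requirement is enforced everywhere but that one file.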

Go to Source
Author: Nic

Are there serious problems with an EC2 auto-scaling AMI that automatically downloads newest server from git?

I’m converting some servers from manual scaling (with a load balancer) to auto-scaling. I’m currently using an Ansible script to manage upgrades, and I wouldn’t want an automatically created instance to not be using the newest version of the software. I created an AMI that will download the newest commit from a chosen branch on first boot.

I’ve since realized my approach is somewhat like “cowboy coding”, since AWS provides a mechanism to replace auto-scaling instances when there should be an update. (If using that, my existing update script would become obsolete, and I guess the update would entail creating a new AMI containing the new server version.)

Are there any serious problems with using the “cowboy” approach? I realize auto-created servers might end up with newer code than the other servers (if someone pushes but does not deploy their code changes). I wonder whether auto-scaling will cause Ansible scripts to fail if servers are terminated while being processed by Ansible. What have I missed?

Go to Source
Author: piojo

How does CI deployment to AWS typically work at scale?

I am familiar with deploying a personal app to Heroku with git push and scaling it up by adding more dynos. But how do you deploy to a scaled AWS infrastructure with thousands of private instances behind dozens of load balancers across multiple regions?

I have searched and searched for this on Google and only found “hello world” tutorials describing cloning a repo directly to a single instance, or using CodeDeploy to deploy to a single instance, and then using autoscaling groups. Basically equivalent to my Heroku example.

But what does it actually look like in production systems at, say, Facebook, GitHub, Twitter, Stripe, or other large companies, if they were to run on AWS? Are they pushing to a single “global” bastion instance, which then fans out to the rest of the private instances across multiple regions in one swoop? Or is some sort of plan (like a terraform plan) created, which needs manual approval, and then a custom deploy script ssh’s into the bastion instance and fans out to each region? Or is it a git hook integrated into CI somehow? Are AMIs typically produced? Is traffic switched over at the load balancer? Etc.

What is typical of a large AWS deployment in terms of how you actually deploy your latest code changes to production, given you have thousands of instances across multiple availability zones and regions?

I am just wondering about a single service. I would imagine this process would be repeated per microservice or whatever. So for the sake of the question, imagine there is a single webserver with thousands of instances in every region. How would a deployment for that typically look? I am looking to create some GitHub actions to deploy to AWS as practice for a large project, but have no idea what the state of the art is, and haven’t been able to find any information on the topic.


Go to Source
Author: Lance Pollard

What are the advantages of using AWS nameservers as opposed to servers provided by other registrars?

I am transferring a domain name from Namecheap to the AWS Route 53 service. AWS provides three options when transferring:

  1. Continue to use the name servers provided by the current registrar
     or DNS service.
  2. Import name servers from a Route 53 hosted zone that has the same
     name as the domain.
  3. Specify new name servers to replace the current registrar’s name
     servers (not recommended).

What is the advantage of using option 2 over the others? What are the pros and cons of each option?

Go to Source
Author: YulePale

AWS autoscaling and ELB, use different health checking mechanism

I need some advice about AWS autoscaling, load balancers, and health checks:

Is it possible to distinguish between two health-checking mechanisms: one that decides when to stop sending traffic to an instance (but not terminate/replace it), and another that decides when to terminate/replace that instance?


  • I have two instances behind a load balancer and I use autoscaling. I
    want to perform some maintenance tasks on one of the instances, so I
    do not want the load balancer to send any traffic to it. But later I
    want to put it back into service.
  • Same as above: two instances behind a load balancer, using
    autoscaling. One of the instances has an unrecoverable issue, so I
    want autoscaling to terminate it and replace it with another
    instance.
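For what it’s worth, the two decisions do map to two separate knobs: the target group’s health check controls whether traffic is routed to an instance, while the Auto Scaling group’s `HealthCheckType` controls whether it is terminated and replaced. With `HealthCheckType` set to `EC2`, an instance failing the ELB check stops receiving traffic but is not replaced. An abbreviated CloudFormation fragment (launch template and other required properties omitted; names and subnet IDs are assumptions):

```json
{
  "Resources": {
    "AppAsg": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "MinSize": "2",
        "MaxSize": "2",
        "VPCZoneIdentifier": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "HealthCheckType": "EC2",
        "HealthCheckGracePeriod": 300,
        "TargetGroupARNs": [{ "Ref": "AppTargetGroup" }]
      }
    }
  }
}
```

Setting `HealthCheckType` to `ELB` instead would make a failed load-balancer check also trigger replacement.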

Thanks in advance for the answers.

Go to Source
Author: zsolt.k

AWS API Gateway: 403 forbidden with disabled API key; error from cloudfront

How can it happen that, with a Lambda/API Gateway combination and no explicit CloudFront configuration, the error response headers mention CloudFront?

header: x-amzn-ErrorType: ForbiddenException
header: x-amz-apigw-id: P3mmbT_DREF8bg=
header: X-Cache: Error from cloudfront

Actual problem: with a disabled API key, the API keeps responding “forbidden”.

UPDATE: it seems the problem is not related to the edge API/CloudFront, as the issue with the disabled key persists after changing the API type to regional.

Go to Source
Author: Peter Muryshkin

Sending HTTP requests to private EC2 DNS from other EC2 host works, but not from container on EC2 host

I want to make a container hosted on one EC2 host communicate with another container hosted on another EC2 host.

Currently, if I send an HTTP request from host A to host B using host B’s private DNS and the exposed container port, I receive a response correctly.
However, if I try the same thing from inside the container on host B, I get a “Host is unreachable” error.
To me, this indicates it is not a host-level networking issue (that part is configured in AWS), so I won’t go into detail on it.

What puzzles me is that I have done the exact same thing in our pre-production environment and it works perfectly.

What could be the issue / where do you think I should look to troubleshoot this? Please keep in mind I’m a noob 🙂

Go to Source
Author: Konrad

What VPN configuration do I need to connect an AWS VPC to a VPN using strongSWAN?

I need to connect the system I’m helping develop, which is deployed on AWS, to another system through a VPN. Looking at the remote system’s VPN configuration, I saw that it is based on a Linux machine running strongSwan. Authentication is done through pre-shared keys.
Both systems need to exchange information but I don’t know exactly at which rate.

Given this scenario, would it make sense to use the AWS VPN managed solution? Unless I missed something, it seems I should be able to connect the two of them through static routing.
But according to the AWS documentation, the communication needs to be initiated from the other system, as the virtual private gateway cannot open the connection by itself. That leaves me to implement a ping mechanism that would always depend on the other system starting a new connection whenever the previous one is broken; and since I don’t have access to the other system, I cannot create this mechanism.

Would it make more sense in this case to go the same route as the other system and just deploy a software VPN on an EC2 instance (or on several for high availability), whether strongSwan or another?

Go to Source
Author: Juan Vega

Accessing an AWS Docker instance via localhost only?

Suppose I have two apps launched via the AWS ECS cluster (using Docker containers).

I want to expose one app to the world via a public IP (I do this via the AWS load balancer), but I want the other one to be accessible only internally, without any public IP.

Is it possible to do that at all? I suppose it should be easier with Docker containers, because I could make them communicate with each other by exposing localhost via

--network="host" in docker run

But that only works if I run the two apps on the same EC2 instance.

What if I run them on separate instances behind the same load balancer, or on separate instances in the same AWS availability zone?

What setting would I use in ECS to expose this app via localhost only?
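Not from the source, but a hedged sketch of the common pattern: rather than relying on localhost, keep the second service off the public internet by fronting it with an internal load balancer (`"Scheme": "internal"`), which is reachable only from inside the VPC regardless of which instance each container lands on. A minimal CloudFormation fragment, with all names and IDs as illustrative assumptions:

```json
{
  "Resources": {
    "InternalServiceAlb": {
      "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
      "Properties": {
        "Scheme": "internal",
        "Subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "SecurityGroups": ["sg-0123456789abcdef0"]
      }
    }
  }
}
```

Placing the service’s tasks in private subnets and restricting the security group to VPC-internal sources completes the setup.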

Go to Source
Author: Aerodynamika

Best way to create temporary Windows EC2 instances with pre-installed apps?

I use Windows EC2 instances with 2-3 tools that I need to install. I occasionally need a 20-40 GB disk for processing; I always know what disk size I want before launching the instance, and I always want the tools available when I launch it.

The problem is, I only need to use it for about 1-2 hours at a time, maybe 3 times a month.

What is the best and most cost-effective way to tackle this, so that I can launch with the pre-installed apps as fast as possible? (Processor and RAM could be anything.)

Go to Source
Author: LotusORB

Terraform duplicates the security groups, how do I stop it?

I’m using terraform to create two instances and a security group. Here is how I did it:

# bastion host
resource "aws_instance" "bastion_1a" {
  ami                    =
  availability_zone      = var.az_1a
  instance_type          = var.instance_type
  key_name               = var.bastion_key_name
  subnet_id              =
  vpc_security_group_ids = []
}

resource "aws_instance" "bastion_1b" {
  ami                    =
  availability_zone      = var.az_1b
  instance_type          = var.instance_type
  key_name               = var.bastion_key_name
  subnet_id              =
  vpc_security_group_ids = []
}

resource "aws_security_group" "bastion_sg" {
  name        = var.name_bastion_sg
  description = var.bastion_sg_description
  vpc_id      = module.vpc.vpc_id

  ingress {
    description = var.ssh_ingress_description
    from_port   = var.port_ssh
    to_port     = var.port_ssh
    protocol    = var.protocol_tcp
    cidr_blocks = local.ips_allowed_ssh
  }
}
The problem: after the resources were created, I checked the console and found that two security groups had been created, one for bastion_1a and another for bastion_1b. However, as you can see in my Terraform, I only specified one security group.

I would like to use Terraform to create one security group for both instances. Can someone please explain why Terraform is creating two security groups?

What do I need to do so that Terraform only creates one security group?

Go to Source
Author: davidonstack