S3 Bucket Policy: require KMS for all files except one

I have an S3 bucket that contains sensitive data, so I want to ensure that any objects put into the bucket are encrypted with a specific KMS key. I’m doing this already with a bucket policy statement and it works well:

{
    "Sid": "DenyWhenWrongCMK",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:REDACTED"
        }
    }
}

But I really want to create one exception to this Deny rule. One thing I like doing with my S3 buckets is putting a README.md file in the root directory, so that when future maintainers go looking around, they have documentation about the original intent and purpose. This works best if the README.md is not encrypted with a CMK (Customer Managed Key), so I want to make an exception to the rule above.

But Deny statements take precedence over Allow statements in an S3 bucket policy, so there doesn’t seem to be any way for me to make an exception here. Am I missing something? Is there any way to enforce usage of a specific KMS CMK for all files except one?
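
For illustration, one possible shape for such an exception would be to swap Resource for NotResource so that the README key falls outside the Deny. This is only a sketch: whether S3 accepts NotResource in a bucket policy statement like this is an assumption worth verifying, and the key name is just the README.md from above.

{
    "Sid": "DenyWhenWrongCMKExceptReadme",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:PutObject",
    "NotResource": "arn:aws:s3:::mybucket/README.md",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:REDACTED"
        }
    }
}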

Go to Source
Author: Nic

Are there serious problems with an EC2 auto-scaling AMI that automatically downloads newest server from git?

I’m converting some servers from manual scaling (with a load balancer) to auto-scaling. I’m currently using an Ansible script to manage upgrades, and I don’t want an automatically created instance to end up running an outdated version of the software. I created an AMI that will download the newest commit from a chosen branch on first boot.
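
For context, the first-boot step is conceptually something like the script below. This is a generic sketch rather than the actual user data: the repository URL, branch, install path, and service name are all placeholders.

#!/bin/bash
# First-boot sketch: pull the latest commit of a chosen branch and (re)start the
# service. All names, URLs, and paths below are placeholders, not the real setup.
set -euo pipefail

APP_DIR=/opt/myapp        # placeholder install path
BRANCH=production         # placeholder branch

if [ ! -d "$APP_DIR/.git" ]; then
    git clone --branch "$BRANCH" https://example.com/myorg/myapp.git "$APP_DIR"
else
    git -C "$APP_DIR" fetch origin "$BRANCH"
    git -C "$APP_DIR" reset --hard "origin/$BRANCH"
fi

systemctl restart myapp   # placeholder service unit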

I’ve since realized my approach is somewhat like “cowboy coding”, since AWS provides a mechanism to replace auto-scaling instances when an update is needed. (If I used that, my existing update script would become obsolete, and I guess an update would entail creating a new AMI containing the new server version.)

Are there any serious problems with using the “cowboy” approach? I realize auto-created servers might end up with newer code than the other servers (if someone pushes but does not deploy their code changes). I wonder whether auto-scaling will cause Ansible scripts to fail if servers are terminated while being processed by Ansible. What have I missed?

Go to Source
Author: piojo

How does CI deployment to AWS typically work at scale?

I am familiar with deploying a personal app to Heroku with git push and scaling it up by adding more dynos. But how do you deploy to a scaled AWS infrastructure with thousands of private instances behind dozens of load balancers across multiple regions?

I have searched and searched for this on Google and only found “hello world” tutorials describing cloning a repo directly to a single instance, or using CodeDeploy to deploy to a single instance, and then using autoscaling groups. Basically equivalent to my Heroku example.

But what does it actually look like in production systems at, say, Facebook, GitHub, Twitter, Stripe, or other large companies, if they were to run on AWS? Are they pushing to a single “global” bastion instance, which then fans the change out to the rest of the private instances across multiple regions in one go? Or is some sort of plan (like a terraform plan) created, which needs manual approval, and then a custom deploy script SSHes into the bastion instance and fans out to each region? Or is it a git hook integrated into CI somehow? Are AMIs typically produced? Is traffic switched over at the load balancer? Etc.

What is typical of a large AWS deployment in terms of how you actually deploy your latest code changes to production, given you have thousands of instances across multiple availability zones and regions?

I am just wondering about a single service. I would imagine this process would be repeated per microservice or whatever. So for the sake of the question, imagine there is a single webserver with thousands of instances in every region. How would a deployment for that typically look? I am looking to create some GitHub actions to deploy to AWS as practice for a large project, but have no idea what the state of the art is, and haven’t been able to find any information on the topic.
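
For what it’s worth, the pattern that keeps coming up in descriptions of large AWS deployments is “bake an image, then roll it out”: CI produces an AMI (or container image), points the launch template at it, and asks each Auto Scaling group to replace its instances, region by region. The sketch below shows the CLI side of that idea with placeholder names; it is one possible shape, not a definitive recipe.

# Sketch only: assumes CI has already baked a new AMI (e.g. with Packer) and
# that the Auto Scaling group uses a launch template. All names are placeholders.
NEW_AMI=ami-0123456789abcdef0

# Point the launch template at the freshly baked AMI
aws ec2 create-launch-template-version \
    --launch-template-name web-server \
    --source-version '$Latest' \
    --launch-template-data "{\"ImageId\": \"$NEW_AMI\"}"

# Ask the Auto Scaling group to gradually replace its instances
aws autoscaling start-instance-refresh \
    --auto-scaling-group-name web-server-asg \
    --preferences '{"MinHealthyPercentage": 90}'

This would be repeated per region and per group, and normally orchestrated by the CI system (for example a GitHub Actions job) rather than by hand.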

Go to Source
Author: Lance Pollard

What are the advantages of using AWS nameservers as opposed to servers provided by other registrars?

I am transferring a domain name from Namecheap to the AWS Route 53 service. AWS provides you with three options when transferring:

  1. Continue to use the name servers provided by the current registrar or DNS service.
  2. Import name servers from a Route 53 hosted zone that has the same name as the domain.
  3. Specify new name servers to replace the current registrar’s name servers (not recommended)

What is the advantage of using option 2 over the others? What are the pros and cons of each option?

Go to Source
Author: YulePale

AWS autoscaling and ELB: using different health-check mechanisms

I need some advice about AWS autoscaling, load balancers, and health checks:

Is it possible to have two distinct health-check mechanisms: one that decides whether or not to send traffic to an instance (but does not terminate/replace it), and another that decides when to terminate and replace that instance?

Scenarios:

  • I have two instances behind a load balancer and I use autoscaling. I want to perform some maintenance tasks on one of the instances, so I do not want the load balancer to send any traffic to it. But later I want to put it back into service.
  • Same as above: two instances behind a load balancer, using autoscaling. One of the instances has an unrecoverable issue, so I want autoscaling to terminate it and replace it with another instance.
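
For concreteness, a rough sketch of the kind of calls that would cover these two scenarios; the instance IDs and group names are placeholders, and this only illustrates the shape of what is being asked about:

# Scenario 1 (sketch): take an instance out of traffic for maintenance without
# terminating it, then put it back into service later.
aws autoscaling enter-standby \
    --instance-ids i-0abc1234567890def \
    --auto-scaling-group-name my-asg \
    --should-decrement-desired-capacity
aws autoscaling exit-standby \
    --instance-ids i-0abc1234567890def \
    --auto-scaling-group-name my-asg

# Scenario 2 (sketch): mark an instance unhealthy so the group terminates and
# replaces it.
aws autoscaling set-instance-health \
    --instance-id i-0abc1234567890def \
    --health-status Unhealthy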

Thanks in advance for the answers.

Go to Source
Author: zsolt.k

AWS API Gateway: 403 forbidden with disabled API key; error from cloudfront

How can it happen that, with a Lambda/API Gateway combination and no explicit CloudFront configuration, the error response headers mention CloudFront?

header: x-amzn-ErrorType: ForbiddenException
header: x-amz-apigw-id: P3mmbT_DREF8bg=
header: X-Cache: Error from cloudfront

Actual problem: with a disabled API key, the API keeps responding “forbidden”.

UPDATE: It seems the problem is not related to the edge-optimized API/CloudFront, as the issue with the disabled key persists even after changing the API type to regional.
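
For reference, a minimal way to reproduce the 403, assuming the key is passed in the usual x-api-key header (the endpoint URL and key value below are placeholders):

# Minimal reproduction (sketch): endpoint URL and key value are placeholders.
curl -i \
  -H "x-api-key: REDACTED" \
  https://abcd1234.execute-api.eu-central-1.amazonaws.com/prod/myresource
# Response headers observed:
#   x-amzn-ErrorType: ForbiddenException
#   X-Cache: Error from cloudfront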

Go to Source
Author: Peter Muryshkin

Sending HTTP requests to private EC2 DNS from other EC2 host works, but not from container on EC2 host

I want to make a container hosted on one EC2 host communicate with another container hosted on another EC2 host.

Currently, if I send an HTTP request from host A to host B using host B’s private DNS and the exposed container port, I receive a response correctly.
However, if I try the same thing within the container on host B, I get a “Host is unreachable” error.
To me, this indicates it is not a host-level networking issue (that part is configured in AWS), so I won’t go into detail on it.
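
Concretely, the two tests look something like this (the private DNS name, port, and container name are placeholders):

# Directly from the EC2 host: works (placeholder private DNS name and port)
curl -v http://ip-10-0-1-23.ec2.internal:8080/

# The same request issued from inside the container: fails
docker exec -it myapp curl -v http://ip-10-0-1-23.ec2.internal:8080/
# curl: (7) Failed to connect ... Host is unreachable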

What puzzles me is that I have done the exact same thing in our pre-production environment and it works perfectly.

What could be the issue / where do you think I should look to troubleshoot this? Please keep in mind I’m a noob 🙂

Go to Source
Author: Konrad

What VPN configuration do I need to connect an AWS VPC to a VPN using strongSWAN?

I need to connect the system I’m helping develop, which is deployed on AWS, to another system through a VPN. Looking at the remote system’s VPN configuration, I saw that it is based on a Linux machine running strongSWAN. Authentication is done through pre-shared keys.
Both systems need to exchange information, but I don’t know exactly at what rate.

Given this scenario, would it make sense to use the AWS managed VPN solution? Unless I missed something, it seems I should be able to connect the two of them through static routing.
But according to the AWS documentation, the communication needs to be initiated from the other system, as the virtual private gateway (VGW) cannot open the connection by itself. That leaves me needing a ping mechanism that relies on the other system starting a new connection whenever the previous one breaks, and since I don’t have access to the other system, I cannot set that up.

Would it make more sense in this case to go the same route as the other system and just deploy a software VPN (strongSWAN or another) on an EC2 instance, or on several instances for high availability?

Go to Source
Author: Juan Vega

Accessing an AWS Docker instance via localhost only?

Suppose I have two apps launched via the AWS ECS cluster (using Docker containers).

I want to expose one app to the world via a public IP (which I do via the AWS load balancer), but the other one should only be accessible internally, so it would not have any public IPs.

Is it possible to do that at all? I suppose it should be easier with Docker containers because I could make them communicate with each other by exposing localhost via

--network="host" in docker run

But that would only work if I run the two apps on the same EC2 instance.

What if I run them on separate instances that use the same load balancer, or on separate instances in the same AWS availability zone?

What setting would I use in ECS to expose this app only via localhost?
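
For reference, the piece that seems closest to this is an “internal” load balancer, which only gets private IPs, so a service behind it is reachable from inside the VPC but not from the internet. A sketch of creating one (the subnet and security group IDs are placeholders):

# Sketch: an internal ALB only gets private IP addresses, so the second app
# would not be reachable from the internet. IDs below are placeholders.
aws elbv2 create-load-balancer \
    --name internal-app-lb \
    --scheme internal \
    --subnets subnet-0aaa1111 subnet-0bbb2222 \
    --security-groups sg-0ccc3333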

Go to Source
Author: Aerodynamika

Best way to create temporary Windows EC2 instances with pre-installed apps?

I use Windows EC2 instances and there are 2-3 tools I need to install. I occasionally need a 20-40 GB hard disk for processing, I always know what disk size I want before launching the instance, and I always want the tools available when I launch it.

The problem is, I only need to use it for about 1-2 hours at a time, maybe 3 times a month.

What is the most cost-effective way to tackle this, so that I can launch with the pre-installed apps as quickly as possible?
(Processor and RAM can be anything.)
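
One commonly suggested pattern for this is to install the tools once, bake a custom AMI from that instance, and then launch from the AMI only when needed and terminate afterwards. A sketch with placeholder IDs and sizes:

# One-time: bake a custom AMI from an instance that already has the tools installed
aws ec2 create-image --instance-id i-0abc1234567890def --name "win-tools-base"

# Each session: launch from the custom AMI with the disk size needed...
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.large \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":40}}]'

# ...and terminate the instance when the 1-2 hours of work are done
aws ec2 terminate-instances --instance-ids i-0fed0987654321cba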

Go to Source
Author: LotusORB

Terraform duplicates the security groups, how do I stop it?

I’m using terraform to create two instances and a security group. Here is how I did it:

# bastion host
resource "aws_instance" "bastion_1a" {
  ami                         = data.aws_ami.amazon_linux_2.id
  availability_zone           = var.az_1a
  instance_type               = var.instance_type
  key_name                    = var.bastion_key_name
  subnet_id                   = data.aws_subnet.public_subnet_us_east_1a.id
  vpc_security_group_ids      = [aws_security_group.bastion_sg.id]
}

resource "aws_instance" "bastion_1b" {
  ami                         = data.aws_ami.amazon_linux_2.id
  availability_zone           = var.az_1b
  instance_type               = var.instance_type
  key_name                    = var.bastion_key_name
  subnet_id                   = data.aws_subnet.public_subnet_us_east_1b.id
  vpc_security_group_ids      = [aws_security_group.bastion_sg.id]    
}

resource "aws_security_group" "bastion_sg" {
  name        = var.name_bastion_sg
  description = var.bastion_sg_description  
  vpc_id      = module.vpc.vpc_id

  ingress {
    description = var.ssh_ingress_description
    from_port   = var.port_ssh
    to_port     = var.port_ssh
    protocol    = var.protocol_tcp
    cidr_blocks = local.ips_allowed_ssh
  }
}

The problem: after the resources were created, I checked the console and found that two security groups had been created, one for bastion_1a and another for bastion_1b. However, as you can see in my Terraform, I only specified one security group.

I would like Terraform to create one security group for both instances. Can someone please explain why Terraform is creating two security groups?

What do I need to do so that Terraform only creates one security group?
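
For completeness, a quick way to see whether the second security group is something Terraform itself is tracking (as opposed to something only visible in the console) is to inspect the state:

# Sketch: list and inspect the security groups Terraform is actually managing
terraform state list | grep aws_security_group
terraform state show aws_security_group.bastion_sg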

Go to Source
Author: davidonstack

Get tagging to work on AWS CLI

I am trying to tag the commit hash and app version onto the ECS service, to make it easier for developers to match the service with the code.

But I get this error:

+ aws ecs tag-resource --resource-arn arn:aws:ecs:ap-southeast-1:12344556677:service/master-web-dev-WebECSServices --tags key=COMMITHASH,value=85491def key=VERSION,value=0.2.4
An error occurred (InvalidParameterException) when calling the TagResource operation: Long arn format must be used for tagging operations

Is there any problem with the ARN? Why do I always get this error?
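
For reference, the error message refers to the ECS long ARN format opt-in. The relevant account settings can be inspected, and the service ARN format enabled, with something like the following (run as the account/role that owns the service):

# Check which ARN/ID formats are currently in effect for this account/role
aws ecs list-account-settings --effective-settings

# Opt service ARNs into the long (taggable) format
aws ecs put-account-setting --name serviceLongArnFormat --value enabled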

Go to Source
Author: Moses Liao GZ

AWS LightSail OpenVPN + ASUS RT-3200 – TLS Error: Key negotiation failed

Using https://github.com/angristan/openvpn-install, I have installed OpenVPN on an AWS Lightsail instance (Ubuntu 18.04 LTS). My server config file is as follows:

    port 33434
    proto udp
    dev tun
    user nobody
    group nogroup
    persist-key
    persist-tun
    keepalive 10 120
    topology subnet
    server 10.8.0.0 255.255.255.0
    ifconfig-pool-persist ipp.txt
    push "dhcp-option DNS 8.8.8.8"
    push "dhcp-option DNS 8.8.4.4"
    push "redirect-gateway def1 bypass-dhcp"
    dh dh.pem
    tls-auth tls-auth.key 0
    crl-verify crl.pem
    ca ca.crt
    cert server_VYtknmf1PC80WpTG.crt
    key server_VYtknmf1PC80WpTG.key
    auth SHA512
    cipher AES-256-CBC
    ncp-ciphers AES-256-CBC
    tls-server
    client-config-dir /etc/openvpn/ccd
    status /var/log/openvpn/status.log
    verb 3

And my client.ovpn is as follows:

client
proto udp
explicit-exit-notify
remote 35.173.69.115 33434
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
verify-x509-name server_VYtknmf1PC80WpTG name
auth SHA512
auth-nocache
cipher AES-256-CBC
tls-client
setenv opt block-outside-dns # Prevent Windows 10 DNS leak
verb 3

I have added UDP port 33434 to the firewall rules of the AWS instance.

I added a new profile in the VPN Client section of the ASUS router and uploaded the client.ovpn file. I then clicked Activate and got the following messages in the log.

May 15 22:29:43 vpnclient1[17419]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
May 15 22:29:43 vpnclient1[17419]: Socket Buffers: R=[122880->122880] S=[122880->122880]
May 15 22:29:43 vpnclient1[17419]: UDPv4 link local: [undef]
May 15 22:29:43 vpnclient1[17419]: UDPv4 link remote: [AF_INET]35.173.69.115:33434
May 15 22:29:43 vpnclient1[17419]: TLS: Initial packet from [AF_INET]35.173.69.115:33434, sid=61d9df2d 1f97bdd0
May 15 22:30:02 rc_service: service 17718:notify_rc restart_letsencrypt
May 15 22:30:43 vpnclient1[17419]: TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
May 15 22:30:43 vpnclient1[17419]: TLS Error: TLS handshake failed
May 15 22:30:43 vpnclient1[17419]: SIGUSR1[soft,tls-error] received, process restarting
May 15 22:30:43 vpnclient1[17419]: Restart pause, 2 second(s)

Please suggest how to fix the issue. Thank you.
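
For what it’s worth, a quick way to confirm whether the client’s handshake packets are reaching the server at all (the network interface name is a placeholder) would be:

# On the Lightsail server: watch for incoming handshake packets on the OpenVPN port
sudo tcpdump -ni eth0 udp port 33434

# From the client side: send a test UDP datagram to the server
echo test | nc -u -w 2 35.173.69.115 33434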

Go to Source
Author: Saad Bashir