Deployment of Docker containers on ECS (TCP server, Java)

The given problem:

  • Created a Java TCP server with Eclipse Vert.x (a toolkit for building
    reactive, event-driven applications on the JVM). The application has two
    TCP servers, running on ports: server1 (port 5050); server2 (port 5051).

  • I created a Docker image and ran the application inside a container;
    everything works fine locally.

  • I want to deploy the Docker container on ECS (Amazon Elastic Container
    Service) and use a Network Load Balancer. The question is: how can I
    assign different ports to different containers if the port is set
    explicitly in the Java code (one way to make it configurable is
    sketched below)?

// TCP server 1 listens on port 5050
server.listen(5050);
  • and how does ECS manage which ports to assign to which container?

    The TCP client always targets one IP address and one port number; this
    cannot be changed.

    Thanks!
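One common way to handle this (a sketch, not the original application's code) is to stop hard-coding the port and read it from an environment variable that the ECS task definition or docker run command sets per container. The variable names TCP_PORT_1/TCP_PORT_2 below are made up for illustration; the defaults match the original 5050/5051:

import io.vertx.core.Vertx;
import io.vertx.core.net.NetServer;

public class TcpServers {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Hypothetical environment variables; fall back to the hard-coded ports.
        int port1 = Integer.parseInt(System.getenv().getOrDefault("TCP_PORT_1", "5050"));
        int port2 = Integer.parseInt(System.getenv().getOrDefault("TCP_PORT_2", "5051"));

        // TCP server 1: echoes whatever the client sends.
        NetServer server1 = vertx.createNetServer();
        server1.connectHandler(socket -> socket.handler(buffer -> socket.write(buffer)));
        server1.listen(port1);

        // TCP server 2: same handler, second port.
        NetServer server2 = vertx.createNetServer();
        server2.connectHandler(socket -> socket.handler(buffer -> socket.write(buffer)));
        server2.listen(port2);
    }
}

Whether this is needed at all depends on the ECS networking mode: with bridge networking and a host port of 0 in the task definition, ECS picks a free host port per container and registers it with the NLB target group, so the container port itself can stay fixed.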

Go to Source
Author: user581026

AWX cannot find matching host

I have an on-premises AWX and an inventory (instances on AWS). To provision newly launched instances I want to use provisioning callbacks.
But when executing this command from the instance

$ curl -v -H 'Content-Type:application/json' -d '{"host_config_key": "key1"}' https://my-host.com/api/v2/job_templates/28/callback/

an error occurs

{"msg":"No matching host could be found!"}

What might be wrong with the AWX configuration if the host is present in the dynamic inventory and the UPDATE ON LAUNCH flag is enabled?

Go to Source
Author: Most Wanted

S3 Bucket Policy: require KMS for all files except one

I have an S3 bucket that contains sensitive data, so I want to ensure that any objects put into the bucket are encrypted with a specific KMS key. I’m doing this already with a bucket policy statement and it works well:

{
    "Sid": "DenyWhenWrongCMK",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:REDACTED"
        }
    }
}
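
For context, here is a sketch (AWS SDK for Java v2, with a hypothetical object key and the redacted key ARN as a placeholder) of an upload that satisfies this statement; the two SSE settings populate the headers that the StringNotEquals condition inspects, and omitting them or using a different key is exactly what the Deny catches:

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.ServerSideEncryption;

public class EncryptedUpload {
    public static void main(String[] args) {
        // Placeholder values: substitute your bucket and KMS key ARN.
        String bucket = "mybucket";
        String kmsKeyArn = "arn:aws:kms:REDACTED";

        try (S3Client s3 = S3Client.create()) {
            // serverSideEncryption + ssekmsKeyId set the
            // x-amz-server-side-encryption-aws-kms-key-id header that the
            // bucket policy condition checks.
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket(bucket)
                    .key("data/report.csv")
                    .serverSideEncryption(ServerSideEncryption.AWS_KMS)
                    .ssekmsKeyId(kmsKeyArn)
                    .build();

            s3.putObject(request, RequestBody.fromString("sensitive contents"));
        }
    }
}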

But I really want to create one exception to this Deny rule. One thing I like doing with my S3 buckets is putting a README.md file in the root directory, so that when future maintainers go looking around, they have documentation about the original intent and purpose. This works best if the README.md is not encrypted with a CMK (Customer Managed Key), so I want to make an exception to the rule above.

But Deny statements have precedence over Allow statements in S3 bucket policy, so there doesn’t seem to be any way for me to make an exception here. Am I missing something? Is there any way to enforce usage of a specific KMS CMK for all files except one?

Go to Source
Author: Nic

Are there serious problems with an EC2 auto-scaling AMI that automatically downloads newest server from git?

I’m converting some servers from manual scaling (with a load balancer) to auto-scaling. I’m currently using an Ansible script to manage upgrades, and I wouldn’t want an automatically created instance to not be using the newest version of the software. I created an AMI that will download the newest commit from a chosen branch on first boot.

I’ve since realized my approach is somewhat like “cowboy coding”, since AWS provides a mechanism to replace auto-scaling instances when there should be an update. (If using that, my existing update script would become obsolete, and I guess the update would entail creating a new AMI containing the new server version.)

Are there any serious problems with using the “cowboy” approach? I realize auto-created servers might end up with newer code than the other servers (if someone pushes but does not deploy their code changes). I wonder whether auto-scaling will cause Ansible scripts to fail if servers are terminated while being processed by Ansible. What have I missed?

Go to Source
Author: piojo

How does CI deployment to AWS typically work at scale?

I am familiar with deploying a personal app to Heroku with git push and scaling it up by adding more dynos. But how do you deploy to a scaled AWS infrastructure with thousands of private instances behind dozens of load balancers across multiple regions?

I have searched and searched for this on Google and only found “hello world” tutorials describing cloning a repo directly to a single instance, or using CodeDeploy to deploy to a single instance, and then using autoscaling groups. Basically equivalent to my Heroku example.

But what does it actually look like in production systems at say Facebook, GitHub, Twitter, Stripe, or other large companies, if they were to run on AWS? Are they pushing to a single “global” bastion instance, and then it fans out to the rest of the private instances across multiple regions in one swoop? Or is some sort of plan (like a terraform plan) created, which needs manual approval, and then there’s a custom deploy script which ssh’s into the bastion instance and then fans out to each region? Or is it a git hook integrated into CI somehow? Are AMIs typically produced? Is a Load Balancer switch switched? Etc.

What is typical of a large AWS deployment in terms of how you actually deploy your latest code changes to production, given you have thousands of instances across multiple availability zones and regions?

I am just wondering about a single service. I would imagine this process would be repeated per microservice or whatever. So for the sake of the question, imagine there is a single webserver with thousands of instances in every region. How would a deployment for that typically look? I am looking to create some GitHub actions to deploy to AWS as practice for a large project, but have no idea what the state of the art is, and haven’t been able to find any information on the topic.


Go to Source
Author: Lance Pollard

What are the advantages of using AWS nameservers as opposed to servers provided by other registrars?

I am transferring a domain name from Namecheap to the AWS Route 53 service. AWS provides you with three options when transferring:

  1. Continue to use the name servers provided by the current registrar or
    DNS service.
  2. Import name servers from a Route 53 hosted zone that has
    the same name as the domain.
  3. Specify new name servers to replace the
    current registrar’s name servers (not recommended)

What is the advantage of using option 2 over the others? What are the pros and cons of each option?

Go to Source
Author: YulePale

AWS autoscaling and ELB: use different health checking mechanisms

I need some advice about AWS autoscaling, load balancers and health checks:

Is it possible to distinguish between two health checking mechanisms: one that decides when to send (or not send) traffic to an instance (but does not terminate/replace it), and another that decides when to terminate/replace that instance?

Scenarios:

  • I have two instances behind a load balancer and I use autoscaling. I want to perform some maintenance tasks on one of the instances, so I do not want the load balancer
    to send any traffic to it. But later I want to put it back into service (see the sketch after this list).
  • Same as above, two instances behind a load balancer and using autoscaling. One of the instances has an unrecoverable issue, so I want autoscaling to terminate it
    and replace it with another instance.
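
For the first scenario, one mechanism that works independently of health checks is Auto Scaling standby: the instance is deregistered from the load balancer (so it stops receiving traffic) but is not terminated or replaced. A minimal sketch with the AWS SDK for Java v2, using made-up group and instance names, one possible approach rather than a full answer:

import software.amazon.awssdk.services.autoscaling.AutoScalingClient;
import software.amazon.awssdk.services.autoscaling.model.EnterStandbyRequest;
import software.amazon.awssdk.services.autoscaling.model.ExitStandbyRequest;

public class MaintenanceStandby {
    public static void main(String[] args) {
        // Hypothetical names: replace with your ASG name and instance ID.
        String asgName = "my-asg";
        String instanceId = "i-0123456789abcdef0";

        try (AutoScalingClient asg = AutoScalingClient.create()) {
            // Move the instance to Standby: the ASG deregisters it from the
            // load balancer / target group, but does not terminate it.
            // Decrementing desired capacity prevents a replacement launch.
            asg.enterStandby(EnterStandbyRequest.builder()
                    .autoScalingGroupName(asgName)
                    .instanceIds(instanceId)
                    .shouldDecrementDesiredCapacity(true)
                    .build());

            // ... perform maintenance on the instance ...

            // Put it back in service when maintenance is done.
            asg.exitStandby(ExitStandbyRequest.builder()
                    .autoScalingGroupName(asgName)
                    .instanceIds(instanceId)
                    .build());
        }
    }
}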

Thanks in advance for the answers.

Go to Source
Author: zsolt.k