How does CI deployment to AWS typically work at scale?

I am familiar with deploying a personal app to Heroku with git push and scaling it up by adding more dynos. But how do you deploy to large-scale AWS infrastructure with thousands of private instances behind dozens of load balancers across multiple regions?

I have searched and searched for this on Google and only found “hello world” tutorials that describe cloning a repo directly onto a single instance, or using CodeDeploy to deploy to a single instance and then putting it behind an autoscaling group. That is basically equivalent to my Heroku example.
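To be concrete, the pattern those tutorials show boils down to something like the sketch below. The application name, deployment group, repo, and commit SHA are all placeholders I made up:

```python
# Minimal sketch of the single-target CodeDeploy pattern the tutorials describe.
# All names below are hypothetical placeholders.
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

response = codedeploy.create_deployment(
    applicationName="my-app",
    deploymentGroupName="my-deployment-group",
    revision={
        "revisionType": "GitHub",
        "gitHubLocation": {
            "repository": "my-org/my-app",
            # hypothetical commit SHA to deploy
            "commitId": "0123456789abcdef0123456789abcdef01234567",
        },
    },
)
print("Deployment started:", response["deploymentId"])
```

That works fine for one instance or one deployment group, but it doesn’t tell me anything about how the same release reaches thousands of instances in many regions.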

But what does it actually look like in the production systems at, say, Facebook, GitHub, Twitter, Stripe, or other large companies, if they were to run on AWS? Do they push to a single “global” bastion instance, which then fans the release out to the rest of the private instances across every region in one go? Or is some sort of plan created (like a terraform plan) that needs manual approval, and then a custom deploy script ssh’s into the bastion instance and fans out to each region? Or is it a git hook integrated into CI somehow? Are AMIs typically baked? Is traffic switched over at the load balancer? And so on.
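To make the question more concrete, here is a rough sketch of one pattern I can imagine: an “immutable AMI” rollout driven through an autoscaling group. I have no idea whether this is how it is actually done at scale. The launch template and group names are placeholders, and it assumes the ASG already tracks the $Latest version of the template:

```python
# Hypothetical sketch: register a freshly baked AMI as a new launch template
# version, then roll the autoscaling group onto it with an instance refresh.
import boto3

REGION = "us-east-1"
LAUNCH_TEMPLATE = "webserver-template"   # hypothetical launch template name
ASG_NAME = "webserver-asg"               # hypothetical autoscaling group name
NEW_AMI_ID = "ami-0123456789abcdef0"     # AMI baked earlier in the pipeline (e.g. with Packer)

ec2 = boto3.client("ec2", region_name=REGION)
autoscaling = boto3.client("autoscaling", region_name=REGION)

# 1. Add the new AMI as a new version of the launch template.
ec2.create_launch_template_version(
    LaunchTemplateName=LAUNCH_TEMPLATE,
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": NEW_AMI_ID},
)

# 2. Ask the ASG to gradually replace its instances with ones built from the
#    new version, keeping at least 90% of capacity healthy behind the load balancer.
refresh = autoscaling.start_instance_refresh(
    AutoScalingGroupName=ASG_NAME,
    Preferences={"MinHealthyPercentage": 90, "InstanceWarmup": 300},
)
print("Instance refresh started:", refresh["InstanceRefreshId"])
```

Is something along these lines what large deployments actually do, or is the real process completely different?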

What is typical of a large AWS deployment in terms of how you actually deploy your latest code changes to production, given you have thousands of instances across multiple availability zones and regions?

I am only asking about a single service; I would imagine the process is repeated per microservice or whatever. So for the sake of the question, imagine there is a single webserver with thousands of instances in every region. What would a deployment for that typically look like? I am looking to create some GitHub Actions workflows that deploy to AWS as practice for a large project, but I have no idea what the state of the art is and haven’t been able to find any information on the topic.
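For example, if the AMI-based pattern above is roughly right, I imagine the GitHub Actions job would end up driving something like this region-by-region rollout script. Again, the region list and names are made up, and it assumes the AMI has already been copied to each region and registered as a new launch template version there:

```python
# Hypothetical sketch of a staged, region-by-region rollout: trigger an instance
# refresh in one region, wait for it to finish, then move on to the next region.
import time
import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]  # canary region first
ASG_NAME = "webserver-asg"  # hypothetical autoscaling group name


def wait_for_refresh(client, refresh_id):
    """Poll until the instance refresh in this region reaches a terminal state."""
    while True:
        resp = client.describe_instance_refreshes(
            AutoScalingGroupName=ASG_NAME, InstanceRefreshIds=[refresh_id]
        )
        status = resp["InstanceRefreshes"][0]["Status"]
        if status in ("Successful", "Failed", "Cancelled"):
            return status
        time.sleep(60)


for region in REGIONS:
    autoscaling = boto3.client("autoscaling", region_name=region)
    refresh = autoscaling.start_instance_refresh(
        AutoScalingGroupName=ASG_NAME,
        Preferences={"MinHealthyPercentage": 90},
    )
    status = wait_for_refresh(autoscaling, refresh["InstanceRefreshId"])
    if status != "Successful":
        raise SystemExit(f"Rollout halted: {region} finished with status {status}")
    print(f"{region} rolled out successfully")
```

Is a script like this (or its Terraform/CodePipeline equivalent) what actually runs in CI, or do large teams rely on entirely different tooling for the fan-out?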
