I have developed my DRF back-end API locally, deployed it on an AWS Lightsail instance (with a public static IP), and I now want to secure it with HTTPS.
I understand that to use Let’s Encrypt (rather than pay for an SSL certificate), I need a domain name associated with my instance’s IP, since Let’s Encrypt doesn’t issue certificates for bare public IP addresses. As this is my back-end API (and not just a website), I don’t intend to buy a domain specifically for it.
Can I, somehow, associate my Lightsail IP with another domain that I’ve already purchased (and is used to host my company’s landing page)? If yes, will there be any impact on my API’s performance?
Is there any other way to obtain an SSL certificate, apart from paying another CA to issue one for my public IP?
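One common approach is to point a subdomain of the existing company domain at the Lightsail IP and issue the certificate for that subdomain. A minimal sketch, assuming `api.example.com` is a hypothetical subdomain whose A record already points at the static IP, nginx fronts the DRF app, and the instance runs Ubuntu/Debian:

```shell
# Install certbot and its nginx plugin
sudo apt-get update
sudo apt-get install -y certbot python3-certbot-nginx

# Request a certificate for the subdomain; certbot updates the nginx config
sudo certbot --nginx -d api.example.com

# Let's Encrypt certificates last 90 days; confirm auto-renewal works
sudo certbot renew --dry-run
```

Since DNS resolution happens on the client side before the request ever reaches the instance, serving the API from a subdomain of an existing domain has no measurable impact on the API's performance.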
I am working on a system that involves launching multiple AWS Batch jobs, each of which takes between 5 and 15 minutes on average to complete.
I need to provide a mechanism that will let me know once all the jobs have completed successfully, or if any failures have occurred, so I can proceed to the next step accordingly.
Also, I don’t yet have a good strategy for handling errors. For example, how should I deal with the case where only one or a few jobs have failed after a certain number of retries? My first thought is to have a failure threshold that dictates whether the overall step in the process (i.e. the collection of AWS Batch jobs) can/should proceed. Something along the lines of:
if failed_jobs > failed_job_threshold:
    raise RuntimeError("Too many failed jobs")
The process recreates/repopulates a database table periodically (i.e. each month the overall job runs to recreate/repopulate the table). Therefore any individual batch-job failure will leave the table incomplete and will require attention.
Is there a “best practice” architecture for handling this use case?
My development landscape includes Python, Terraform, Bitbucket Pipelines, AWS (Lambda, Batch, SQS, RDS/Aurora, etc.), and PostgreSQL.
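The threshold idea above can be sketched as a small self-contained check. The status strings below stand in for the `status` field that AWS Batch's `DescribeJobs` API returns per job; in a real poller you would fetch them with `boto3.client("batch").describe_jobs(jobs=[...])`, but here they are passed in as a plain list so the logic is testable on its own. The function names are illustrative, not an established API.

```python
# Terminal states in the AWS Batch job lifecycle
TERMINAL_STATES = {"SUCCEEDED", "FAILED"}

def all_jobs_done(statuses):
    """True once every job has reached a terminal state."""
    return all(s in TERMINAL_STATES for s in statuses)

def check_threshold(statuses, failed_job_threshold):
    """Count FAILED jobs and raise if the threshold is exceeded;
    otherwise return the failure count so the caller can log it."""
    failed_jobs = sum(1 for s in statuses if s == "FAILED")
    if failed_jobs > failed_job_threshold:
        raise RuntimeError(f"Too many failed jobs: {failed_jobs}")
    return failed_jobs
```

As a managed alternative to hand-rolled polling, AWS Step Functions can run the Batch jobs in parallel and gate the next step on their outcomes, and EventBridge can fire on Batch job state-change events, which avoids polling entirely.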
Author: James Adams
How to copy a .csv file into another bucket
I have 3 buckets in AWS: a) test, b) testjson, c) testcsv.
I have uploaded data.json and data.csv to the test bucket.
After uploading the files, the following should happen:
the data.json file is copied to the testjson bucket
the data.csv file is copied to the testcsv bucket
"Name" : "Madk"
In a single Lambda handler, I need to copy the .json file into one bucket and the .csv file into the other.
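A sketch of such a handler, assuming the three bucket names from the question and an S3 `ObjectCreated` event trigger on the test bucket. The extension-to-bucket mapping is pulled into a small helper so the routing logic is testable without AWS; the `copy_object` call is the standard boto3 S3 API.

```python
import os

# Destination bucket per file extension (names taken from the question)
DEST_BUCKETS = {".json": "testjson", ".csv": "testcsv"}

def destination_for(key):
    """Return the destination bucket for an uploaded key, or None."""
    _, ext = os.path.splitext(key)
    return DEST_BUCKETS.get(ext.lower())

def lambda_handler(event, context):
    import boto3  # imported lazily so the routing helper runs without AWS
    s3 = boto3.client("s3")
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        dest = destination_for(key)
        if dest:
            s3.copy_object(
                Bucket=dest,
                Key=key,
                CopySource={"Bucket": src_bucket, "Key": key},
            )
```

Files with other extensions are simply ignored; add more entries to `DEST_BUCKETS` to route them.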
Maintaining Objects Across API Deployment Instances
I am working on a web application as a hobby and trying to learn some concepts related to cloud development and distributed applications. I am currently targeting an AWS EC2 instance as a deployment environment, and while I don’t currently plan to deploy the same instance of my API application to many servers, I would like to design my application so that this is possible in the future.
I have a search operation that I currently have implemented using a trie. I think it would be slow to rebuild the trie every time I need to perform the search operation, so I would like to keep it in memory and insert into it as the search domain grows. I know that if I only wanted to have one server, I could implement the trie as a singleton and dependency-inject it. If I do this in a potentially distributed application, though, I would be opening myself up to data-consistency issues.
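For concreteness, the in-memory structure being discussed might look like this minimal sketch of a trie with insert and prefix search; it is the per-process state that becomes inconsistent once more than one server holds its own copy:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    """Minimal in-memory trie supporting insert and prefix search."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search_prefix(self, prefix):
        """Return every inserted word that starts with `prefix`."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Depth-first walk collecting complete words under this node
        results, stack = [], [(node, prefix)]
        while stack:
            cur, word = stack.pop()
            if cur.is_word:
                results.append(word)
            for ch, child in cur.children.items():
                stack.append((child, word + ch))
        return results
```

If each API instance builds its own `Trie`, an insert on one instance is invisible to the others, which is exactly the consistency problem described below.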
My thought was to implement the trie in another service, deploy it separately, and make requests to it (this sounds like microservice concepts, but I have no experience with those). Is this common practice? Is there a better solution for maintaining persistent data structures in this way?