I currently work at a company with about 50 microservices spread across the infrastructure. These include NodeJS apps, Elixir/Phoenix apps, Ruby/Rails apps, and a handful of Go services. We have a single API Gateway that all of these services talk to, and no app knows anything about the others. I’m wondering how practical this is, and whether it is common practice at larger companies that span hundreds if not thousands of services.
For example, does Google or Uber have a single API that feeds all their systems at the top level?
If the approach I’m describing isn’t practical, what is the standard approach for a multi-service architecture?
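At its core, the single-gateway pattern described above is prefix routing: the gateway holds a route table and forwards each request to the service that owns that path. A minimal sketch of just the routing decision; the service names and ports are hypothetical, not from the post, and a real gateway (nginx, Kong, Envoy, etc.) would also forward headers, bodies, and handle auth and retries:

```python
# Hypothetical route table mapping path prefixes to upstream services.
ROUTES = {
    "/users": "http://users-svc:5001",    # e.g. the Rails app
    "/orders": "http://orders-svc:5002",  # e.g. a Go service
}

def resolve(path):
    """Return the full upstream URL for a request path, or None if unrouted."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream + path
    return None
```

The point of the pattern is that only this table knows where services live; the services themselves stay ignorant of one another.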
Go to Source
Author: Trenton Tyler
There are multiple ways to test code: unit tests, e2e tests, manual testing, and so on.
I’m developing a project whose implementation details change very quickly (and sometimes the core functionality as well).
Some of our microservices talk to each other directly, while others communicate through events, e.g. via Kafka.
When I create an e2e test (for the backend image only), I build the Docker image of my microservice and run it before each test.
I find it really hard to set up this kind of e2e test for a microservice that talks to other microservices directly (sending GET/POST/… requests).
As a result, I also have to build/pull the other images and run them before each test. That isn’t easy: you can end up reimplementing a version of docker-compose inside your test infrastructure.
I would like to test a specific microservice in isolation and minimize the errors that can come from the other services.
My proposal: changing the microservice architecture.
Whenever possible, a microservice communicates with the others via events. In the tests, we then only need to set up Kafka and the microservice under test.
I arrived at this solution purely from a testing perspective, not from a “what is best” perspective; for example, direct communication without Kafka is faster.
What are the pros and cons of my proposal? From your experience, is it maintainable?
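One way to see the testing benefit of the event-only proposal: if a service’s only dependency is the event bus, the bus itself can be faked in memory and no other service images need to run at all. A hedged sketch, where all names (FakeBus, InventoryService, the topic strings) are hypothetical, not from the post:

```python
from collections import defaultdict

class FakeBus:
    """In-memory stand-in for a Kafka producer/consumer pair."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.published = []  # record of every event, handy for assertions

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        self.published.append((topic, event))
        for handler in self.handlers[topic]:
            handler(event)

class InventoryService:
    """The one service under test; it only knows the bus, not other services."""
    def __init__(self, bus):
        self.stock = {"widget": 5}
        self.bus = bus
        bus.subscribe("order.created", self.on_order)

    def on_order(self, event):
        self.stock[event["sku"]] -= event["qty"]
        self.bus.publish("stock.updated", dict(self.stock))

# A test needs no Docker images: just the fake bus and the service.
bus = FakeBus()
svc = InventoryService(bus)
bus.publish("order.created", {"sku": "widget", "qty": 2})
assert svc.stock["widget"] == 3
```

The same test against a direct-HTTP dependency would need the other service actually running, which is exactly the docker-compose-in-tests problem described above.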
Go to Source
Author: Stav Alfi
I’m currently researching whether my current monolith architecture
can be improved by moving to microservices.
From what I’ve seen, there are a lot of good things about microservices,
but there are also many caveats and pitfalls that can complicate
the whole system (transactions, data querying, data sync and duplication, fault tolerance, error handling, etc.).
It seems there is a middle ground in going to a “Modular Monolith” (1,2,3), where the system is divided vertically into loosely coupled modules that interact through APIs.
The question is how the fact that these modules operate against the same database and sit in the same process space can be leveraged to ease the complexity, e.g.:
- Can the modules declare a “transaction context” in their API that allows an ACID transaction when a business-logic operation spans modules? (In contrast to microservices, where this is unachievable by design.)
- Can special modules declare database views that join data at the database level rather than the application level?
If these cannot be leveraged, what is the real difference between a modular monolith and microservices (besides independent scaling and deployment)?
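The first bullet can be made concrete: because the modules share one process and one database, a “transaction context” can be as simple as the connection handle passed through each module’s API. A sketch using sqlite3 purely for illustration; the module and table names are hypothetical:

```python
import sqlite3

# Two "modules" of a modular monolith; each takes the caller's
# transaction context (here, just the shared connection).
def orders_module_create(conn, order_id):
    conn.execute("INSERT INTO orders VALUES (?)", (order_id,))

def billing_module_charge(conn, order_id, amount):
    conn.execute("INSERT INTO invoices VALUES (?, ?)", (order_id, amount))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT)")
conn.execute("CREATE TABLE invoices (order_id TEXT, amount REAL)")

with conn:  # one ACID transaction spanning both modules:
    orders_module_create(conn, "o-1")
    billing_module_charge(conn, "o-1", 9.99)
# Both inserts commit or roll back together. With two microservices
# owning two databases, this would instead need patterns like sagas.
```

This is exactly the leverage the question asks about: the shared process and database make the cross-module operation atomic for free.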
Go to Source
I am planning to build a service that will expose simple REST APIs and have a database in the backend. I also want to add logic that listens to notifications emitted by another service; some business logic then updates a row in the database.
For updating the database row from notifications, I can think of 2 approaches:
1. Create an internal API, used only by this service, and have the listener process call this API instead of updating the database directly.
2. Have the listener process update the database directly.
I can see pros and cons to each approach. In Approach 1, we add a REST API unnecessarily, one that is never used by clients.
In Approach 2, we open a back-door path to the database instead of having all requests come through the REST API.
Can someone tell me whether one of these is an anti-pattern, and which one is better to use?
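For what it’s worth, Approach 1 does not have to mean a full HTTP endpoint: the listener can call the same internal write function the REST handler already uses, so validation and business rules live in one place. A hedged sketch with entirely hypothetical names:

```python
DB = {}  # stand-in for the real database table

def update_row(key, value):
    """Single choke point for writes: validation + business logic."""
    if not key:
        raise ValueError("key required")
    DB[key] = value

def rest_handler(request):
    """What the public REST API would call after parsing the request."""
    update_row(request["key"], request["value"])

def notification_listener(event):
    # Approach 1, minus the HTTP hop: reuse the internal write path
    # instead of touching the database directly (Approach 2).
    update_row(event["key"], event["value"])

notification_listener({"key": "order-1", "value": "shipped"})
assert DB["order-1"] == "shipped"
```

Both entry points funnel through `update_row`, so neither path bypasses the business logic; whether that shared function is exposed over HTTP or called in-process is a separate decision.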
Go to Source
Maintaining Objects Across API Deployment Instances
I am working on a web application as a hobby and trying to learn some concepts related to cloud development and distributed applications. I am currently targeting an AWS EC2 instance as a deployment environment, and while I don’t currently have plans to deploy the same instance of my API application to many servers, I would like to design my application so that it is possible in the future.
I have a search operation that I currently have implemented using a trie. I think it would be slow to rebuild the trie every time I need to perform the search, so I would like to keep it in memory and insert into it as the search domain grows. I know that if I only wanted one server, I could implement the trie as a singleton and dependency-inject it. In a potentially distributed application, though, that would open me up to data-consistency issues.
My thought was to implement the trie in another service, deploy it separately, and make requests to it (this sounds like a microservice concept, but I have no experience with those). Is this common practice? Is there a better solution for maintaining persistent data structures in this way?
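For concreteness, the in-memory structure in question might look like the following minimal prefix-only trie (illustrative, not the poster’s actual code). The consistency problem is that each deployed instance would hold its own copy of this object, and the copies drift apart as inserts land on different instances:

```python
class Trie:
    """Minimal trie supporting insert and prefix lookup."""
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True

t = Trie()
t.insert("search")
assert t.has_prefix("sea") and not t.has_prefix("xyz")
```

Extracting this behind its own service (or a shared external store) keeps a single authoritative copy that every API instance queries, at the cost of a network hop per lookup.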
Go to Source