One API feeding all services: is this best practice?

I currently work at a company with about 50 microservices spread across the infrastructure. These include NodeJS apps, Elixir/Phoenix apps, Ruby/Rails apps, and a handful of Go services. We have a single API Gateway that all of these services talk to. No app knows anything about the others. I’m wondering how practical this is, and whether it is common practice in larger companies that span hundreds if not thousands of services.

For example, does Google or Uber have a single API that feeds all their systems at the top level?

If the approach I’m describing isn’t practical, what is the standard approach for a multi-service architecture?
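
For reference, our gateway is conceptually just a routing layer like the sketch below (a simplified illustration; the service names and upstream URLs are made up, not our real setup):

```python
# Simplified sketch of what our single API gateway conceptually does.
# Service names and upstream URLs are made up for illustration.
import requests

ROUTES = {
    "/billing":  "http://billing-service:4000",    # e.g. a NodeJS app
    "/accounts": "http://accounts-service:4001",   # e.g. an Elixir/Phoenix app
    "/orders":   "http://orders-service:4002",     # e.g. a Ruby/Rails app
    "/search":   "http://search-service:4003",     # e.g. a Go service
}

def forward(path: str, method: str = "GET", **kwargs) -> requests.Response:
    """Route an incoming request to the owning service; the services never call each other."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return requests.request(method, upstream + path, **kwargs)
    raise LookupError(f"no service owns {path}")
```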

Go to Source
Author: Trenton Tyler

How can we design a highly optimized view count for a video website like YouTube?

As per my understanding:

We can have a scalable backend microservice MS1 that exposes an API. The client calls this API whenever a user plays a video. The microservice uses a sharded cache C1 and a message broker MB1. The cache C1 keeps the view count per video as <VideoId, VideoCount>; for every new request, MS1 increments the count in C1 and publishes the request <VideoId, UserId> to the message broker MB1.
On the other side of MB1, a service MS2 persists the request to the database DB1. When data is missing from the sharded cache C1, it is fetched back from MS2.
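
A minimal sketch of that flow (Redis and Kafka here are only stand-ins I picked for C1 and MB1, and the topic name is a placeholder):

```python
# Minimal sketch of the MS1 flow described above.
# Redis stands in for the sharded cache C1 and Kafka for the message broker MB1;
# both are placeholders chosen for illustration, not required technologies.
import json
import redis
from kafka import KafkaProducer

cache = redis.Redis(host="c1-cache", port=6379)          # C1: <VideoId, VideoCount>
producer = KafkaProducer(
    bootstrap_servers="mb1-broker:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def record_view(video_id: str, user_id: str) -> int:
    """Called by MS1's API whenever a user plays a video."""
    # 1. Increment the per-video counter in the cache.
    new_count = cache.incr(f"views:{video_id}")
    # 2. Hand the raw event to the broker; MS2 consumes it and updates DB1.
    producer.send("video-views", {"video_id": video_id, "user_id": user_id})
    return new_count
```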

Recently, I was in an interview where the interviewer asked me to design this view count so that it scales. His only concern was the millions of connections opened against cache C1 when there are millions of requests.
I was under the impression that since cache C1 is scalable, this is not an issue.

I have designed something similar before in a project, including like and dislike counts, so I tried to explain it to him the same way, but he wasn’t convinced.
I tried to find a standard approach or algorithm to optimize it further, but I couldn’t find one on Google, so here I am. Kindly help me: have I done anything wrong?

Go to Source
Author: Lovin

Testable architecture of micro-services

Intro:

There are multiple ways to test code: unit tests / e2e / manual testing / ...

I’m developing a project whose implementation details change very quickly (and sometimes the core functionality as well).

Some of our microservices talk to each other directly, while others communicate using events, e.g. over Kafka.

Problem:

When I create an e2e test (for the backend image only), I build the Docker image of my microservice and run it again before each test.

I find it really hard to set up this kind of e2e test for a microservice that talks directly to other microservices (sending GET/POST/… requests).

As a result, I also build/pull the other images and run them before each test as well. But it’s not that easy, because you can end up reimplementing a version of docker-compose in your test infrastructure.

I would like to minimize the number of errors that can come from other services and test a specific microservice in isolation.

Possible solution:

Changing the microservices architecture.

Whenever possible, a microservice will communicate with others using events. Then, in the tests, we only need to set up Kafka and the microservice we are trying to test.

I only thought of this solution from a testing perspective and not from a “what is best” perspective; for example, it’s faster to communicate without Kafka.
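
To illustrate, an e2e test for an event-driven service could shrink to something like this sketch (using testcontainers and kafka-python as examples; the topic names are hypothetical, and the service under test is assumed to be started separately against the same broker):

```python
# Rough sketch of an e2e test where only Kafka and the service under test are needed.
# Topics and payload fields are hypothetical placeholders.
import json
import pytest
from testcontainers.kafka import KafkaContainer
from kafka import KafkaProducer, KafkaConsumer

@pytest.fixture(scope="session")
def kafka_bootstrap():
    # One Kafka container for the whole test session instead of a docker-compose of services.
    with KafkaContainer() as kafka:
        yield kafka.get_bootstrap_server()

def test_order_created_event_is_processed(kafka_bootstrap):
    # The microservice under test is assumed to consume "orders.created"
    # and publish its result to "orders.processed".
    producer = KafkaProducer(
        bootstrap_servers=kafka_bootstrap,
        value_serializer=lambda v: json.dumps(v).encode(),
    )
    producer.send("orders.created", {"order_id": "42"})
    producer.flush()

    consumer = KafkaConsumer(
        "orders.processed",
        bootstrap_servers=kafka_bootstrap,
        auto_offset_reset="earliest",
        consumer_timeout_ms=10_000,
        value_deserializer=lambda v: json.loads(v.decode()),
    )
    results = [msg.value for msg in consumer]
    assert any(r["order_id"] == "42" for r in results)
```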

Question:

What are the pros and cons of my proposal? From your experience, is it maintainable?

Go to Source
Author: Stav Alfi

Leveraging the Process Context In Modular Monolith

I’m currently researching whether my current monolith architecture can be improved by moving to microservices.

From what I have seen, there are a lot of good things about microservices, but there are also a lot of caveats and pitfalls that can complicate the whole system (transactions, data querying, data sync and duplication, fault tolerance, error handling, etc.).

It seems there is a middle ground of going with a “Modular Monolith” (1, 2, 3), where the system is sliced vertically into loosely coupled modules that interact through APIs.

The question is how the fact that these modules operate against the same database and sit in the same process space can be leveraged to ease that complexity, for example:

  1. Can the modules declare a “Transaction Context” in their API that allows an ACID transaction when a business-logic operation spans modules? (In contrast to microservices, where this is not achievable by design.) See the sketch after this list.
  2. Declaring database views in dedicated modules that allow joining data at the database level rather than at the application level.
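
To make point 1 concrete, here is a minimal sketch of what I mean by a “Transaction Context” (the module names, schema, and use of sqlite are purely illustrative):

```python
# Minimal sketch of point 1: two modules sharing one ACID transaction
# because they live in the same process and use the same database.
# Module names and schema are illustrative only.
import sqlite3
from contextlib import contextmanager

class OrdersModule:
    def place_order(self, tx: sqlite3.Connection, order_id: str, amount: int) -> None:
        tx.execute("INSERT INTO orders (id, amount) VALUES (?, ?)", (order_id, amount))

class BillingModule:
    def charge(self, tx: sqlite3.Connection, order_id: str, amount: int) -> None:
        tx.execute("INSERT INTO charges (order_id, amount) VALUES (?, ?)", (order_id, amount))

@contextmanager
def transaction_context(db: sqlite3.Connection):
    """The 'Transaction Context' passed through module APIs: commit all or nothing."""
    try:
        yield db
        db.commit()
    except Exception:
        db.rollback()
        raise

# A cross-module business operation inside one ACID transaction:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT, amount INTEGER)")
db.execute("CREATE TABLE charges (order_id TEXT, amount INTEGER)")
orders, billing = OrdersModule(), BillingModule()

with transaction_context(db) as tx:
    orders.place_order(tx, "o-1", 100)
    billing.charge(tx, "o-1", 100)
```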

If these cannot be leveraged – what is the real difference between a modular monolith and microservices (besides independent scaling and deployment)?

Go to Source
Author: sborpo

Is it an anti-pattern for a service to both expose APIs and listen to events?

I am planning to build a service that will expose simple REST APIs and have a database in the backend. I also want to add logic that listens to notifications emitted by another service; some business logic will then update a row in the database.

For updating the database row from notifications, I can think of 2 approaches (both sketched below):

  1. Should I create an API that is internal, used only by the service itself, so that the listener process calls this API instead of directly updating the database?

  2. The listener process directly updates the database.

I can see some pros and cons to each approach. In Approach 1, we are unnecessarily adding a REST API that is never used by clients.

In Approach 2, we are opening a back way into the database instead of having all updates come through the REST API.
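
Roughly, the two approaches look like this (a sketch only; the endpoint path, table, and event fields are made up):

```python
# Rough sketch of the two approaches; paths, table and field names are made up.
import requests
import sqlite3

db = sqlite3.connect("service.db")

# Approach 1: the listener calls an internal-only REST endpoint of the same service.
def on_notification_approach_1(event: dict) -> None:
    requests.put(
        "http://localhost:8080/internal/orders/" + event["order_id"],
        json={"status": event["status"]},
        timeout=5,
    )

# Approach 2: the listener updates the database row directly, bypassing the API layer.
def on_notification_approach_2(event: dict) -> None:
    db.execute(
        "UPDATE orders SET status = ? WHERE id = ?",
        (event["status"], event["order_id"]),
    )
    db.commit()
```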

Can someone help me tell whether one of them is an anti-pattern, and which one is better to use?

Go to Source
Author: hatellaCollege

Maintaining Objects Across API Deployment Instances

I am working on a web application as a hobby and trying to learn some concepts related to cloud development and distributed applications. I am currently targeting an AWS EC2 instance as a deployment environment, and while I don’t currently have plans to deploy my API application to many servers, I would like to design my application so that this is possible in the future.

I have a search operation that I currently have implemented using a Trie. I am thinking that it would be slow to rebuild the trie every time I need to perform the search operation, so I would like to keep it in memory and insert into it as the search domain grows. I know that if I only wanted to have one server, I could just implement the trie structure as a singleton and dependency inject it. If I do this in a potentially distributed application, though, I would be opening myself up to data consistency issues.
My thought was to implement the trie in another service, deploy it separately, and make requests to it (this sounds like a microservices concept, but I have no experience with those). Is this common practice? Is there a better solution for maintaining persistent data structures in this way?
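
For example, the separate trie service I’m imagining would be something as small as this sketch (the endpoints and in-memory trie are placeholders; a real deployment would still need persistence):

```python
# Sketch of the trie kept in a single, separately deployed service so that
# every API instance queries the same in-memory structure.
# Endpoints and structure are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def starts_with(self, prefix: str, limit: int = 10) -> list[str]:
        node = self.root
        for ch in prefix:
            if ch not in node:
                return []
            node = node[ch]
        results: list[str] = []

        def walk(n, acc):
            if len(results) >= limit:
                return
            if "$" in n:
                results.append(prefix + acc)
            for ch, child in n.items():
                if ch != "$":
                    walk(child, acc + ch)

        walk(node, "")
        return results

trie = Trie()  # single shared instance for this process

@app.post("/words")
def add_word():
    trie.insert(request.get_json()["word"])
    return "", 204

@app.get("/search")
def search():
    return jsonify(trie.starts_with(request.args.get("prefix", "")))
```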

Go to Source
Author: jlat96