I currently work at a company with about 50 microservices spread across our infrastructure. These include NodeJS apps, Elixir/Phoenix apps, Ruby/Rails apps, and a handful of Go services. A single API Gateway fronts all of these services, and no one app knows anything about the others. I’m wondering how practical this is, and whether it’s common practice in larger companies that span hundreds if not thousands of services.
For example, does Google or Uber have a single API that feeds all their systems at the top level?
If this approach I’m speaking of isn’t practical, what is the standard approach for having a multi-service architecture?
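For concreteness, the single-gateway setup described above usually boils down to a route table that maps request paths to upstream services. A minimal sketch, assuming hypothetical service names and internal URLs (none of these come from the question):

```python
from typing import Optional

# Hypothetical route table: path prefix -> upstream service base URL.
ROUTES = {
    "/billing": "http://billing.internal:8080",
    "/users": "http://users.internal:8080",
    "/orders": "http://orders.internal:8080",
}

def resolve_upstream(path: str) -> Optional[str]:
    """Return the upstream for the longest matching path prefix, or None."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    if not matches:
        return None
    return ROUTES[max(matches, key=len)]
```

Larger companies tend to keep this idea but decentralize it: rather than one global gateway, each product area runs its own edge gateway with a table like this, so no single routing layer has to know about thousands of services.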
Go to Source
Author: Trenton Tyler
We’re a smaller shop that puts out a number of products requiring
authentication and authorization. We’re currently using a third-party
service to “spin up new auth APIs” for each application. However, I
would like to investigate potential designs for doing this ourselves
in-house, due to additional security requirements that may arise in
the future. I’ve included information below and would appreciate any
design guidance or support, as I’m relatively new to designs of this scale.
Summary of goals:
* An interface that allows a developer to create and manage authentication APIs for multiple applications via a webpage. This includes the mostly automated process of spinning up new auth APIs, and ideally the ability to make some form of RBAC / ABAC changes via the same interface.
* All APIs expose the same common-auth endpoints, so they’re basically just “images” of one another (i.e. Login / Register / etc.).
While this would ideally be entirely automated, some parts I’ve thought of being manual are:
* Domain configuration (pointing subdomains to new endpoints)
* Spinning up additional VMs (needed?)
* Spinning up additional DBs or tables?
* Minor configuration changes
* Others I haven’t thought of?
- As a Developer, I want to login to a web portal so that I can manage auth APIs.
- As a Developer, I want to create a new API in the web UI, so that I can then integrate it to new applications.
- As a Developer, I want to manage users in the web UI, so I can oversee access to our applications.
- As a Developer, I want to **…
Each new API should likely have its user store (the table containing
user information) segregated into a separate database on separate
hardware, to minimize attack vectors and improve security/scaling. For
now, I’m thinking of different subdomains, or maybe request
parameters, to separate the APIs?
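The subdomain idea above can be sketched as a simple lookup from the request’s Host header to a per-application user store. This is only an illustration, and the app names and connection strings are invented:

```python
from typing import Optional

# Hypothetical mapping: application subdomain -> dedicated user-store DSN.
# Each DSN points at a database on separate hardware, per the goal above.
TENANT_DBS = {
    "app-a": "postgresql://auth-a.internal/users",
    "app-b": "postgresql://auth-b.internal/users",
}

def user_store_for(host: str) -> Optional[str]:
    """Pick the user store from the Host header,
    e.g. 'app-a.auth.example.com' -> 'app-a'."""
    subdomain = host.split(".", 1)[0]
    return TENANT_DBS.get(subdomain)
```

One advantage of subdomains over request parameters is that this routing decision can also happen earlier, at the load balancer or reverse proxy, before a request ever reaches shared code.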
I feel like there may be a solution that involves building a
template/image of an auth API on Azure and just duplicating the VM or
image, but I’m not too sure of this route either. Obviously
management, maintenance, updates, etc. would be more hands-on, but
feel free to provide feedback on this as well.
Thanks in advance!
Go to Source
I am planning to build a service with simple REST APIs and a database in the backend. I also want it to listen to notifications emitted by another service; some business logic will then update a row in the database.
For updating the database row from notifications, I can think of two approaches:
1. Create an API that is internal to the service, and have the listener process call this API instead of updating the database directly.
2. Have the listener process update the database directly.
I can see pros and cons of each approach. With Approach 1, we add a REST API that no client will ever call.
With Approach 2, we open a back door to the database, so not all writes come through the REST API.
Can someone help me figure out whether one of these is an anti-pattern, and which one is better to use?
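Approach 1 can be sketched in a few lines: the listener translates a notification into a call on the service’s own internal update path, so every write goes through one code path. All names (event types, fields, the fake API) are illustrative, not from the question:

```python
class FakeInternalApi:
    """Stand-in for the service's internal update endpoint (Approach 1)."""
    def __init__(self):
        self.calls = []

    def update_row(self, row_id, fields):
        # In the real service this would be an authenticated internal
        # HTTP call, reusing the service's validation and audit logic.
        self.calls.append((row_id, fields))

def handle_notification(event, api):
    """Listener process: business logic turns a notification
    into an internal API call instead of a direct DB write."""
    if event.get("type") == "price_changed":
        api.update_row(event["item_id"], {"price": event["new_price"]})
```

The main argument for this shape is that validation, auditing, and cache invalidation live in one place; the cost is the extra hop and an endpoint that must be kept off the public surface.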
Go to Source
On multiple occasions, we’ve deployed frontend code to production only to find out that the backend (REST or GraphQL) hasn’t shipped its side yet. Worse yet, we unexpectedly find out that a param name changed, which may throw an error. Another example: the backend removes an API, thinking no clients still use it, and the frontend crashes. If any layer of communication between frontend and backend breaks down, we may end up with catastrophic errors.
I think the “best solution” is to use a tool like Cypress or Codecept to create a suite of integration tests which checks every API call the frontend may use. Sadly, that’s a heavyweight solution that requires significant upfront investment in developer time.
Anyway, I’m looking for a simple solution to this problem. Maybe something that checks affected APIs when the frontend opens a PR, and/or something that checks the frontend repo when the backend deploys to production.
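One lightweight option along these lines is a CI check that diffs the parameter names the frontend sends against the names the backend declares (e.g. extracted from an OpenAPI spec). A minimal sketch, with invented parameter names; how you collect the two sets is the real work:

```python
def contract_diff(frontend_params, backend_params):
    """Compare param names the frontend uses against names the backend
    accepts. Non-empty 'missing_on_backend' means a rename/removal that
    would break the frontend; fail the PR or block the deploy on it."""
    return {
        "missing_on_backend": sorted(frontend_params - backend_params),
        "unused_by_frontend": sorted(backend_params - frontend_params),
    }
```

For example, if the backend renamed `user_id` to `userId`, the diff would flag `user_id` as missing on the backend before anything ships. This is the same idea consumer-driven contract testing tools such as Pact automate more thoroughly, without the full Cypress-style end-to-end suite.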
Any ideas or experience solving this problem?
Go to Source