One API feeding all services: best practice?

I currently work at a company with about 50 microservices spread across our infrastructure. These include Node.js apps, Elixir/Phoenix apps, Ruby/Rails apps, and a handful of Go services. We have a single API Gateway that all of these services talk to, and no one service knows anything about the others. I’m wondering how practical this is, and whether it is a common practice in larger companies that span hundreds, if not thousands, of services.

For example, does Google or Uber have a single API that feeds all their systems at the top level?

If the approach I’m describing isn’t practical, what is the standard approach for a multi-service architecture?
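
For context on the pattern being described, here is a minimal sketch of a single gateway routing every incoming path to one internal service. The service names, ports, and route table are hypothetical, and a real deployment would more likely use a dedicated gateway product (NGINX, Envoy, Kong, and the like) than hand-rolled routing.

```typescript
// Minimal API-gateway sketch (hypothetical service names and ports).
import * as http from "http";

// Map URL prefixes to upstream services. The gateway is the only public
// entry point; services never call each other directly.
const routes: Record<string, { host: string; port: number }> = {
  "/users": { host: "user-service.internal", port: 3001 },
  "/orders": { host: "order-service.internal", port: 3002 },
  "/payments": { host: "payment-service.internal", port: 3003 },
};

const gateway = http.createServer((req, res) => {
  const prefix = Object.keys(routes).find((p) => req.url?.startsWith(p));
  if (!prefix) {
    res.writeHead(404);
    res.end("no route");
    return;
  }
  const upstream = routes[prefix];
  // Forward the request as-is to the matching service.
  const proxied = http.request(
    {
      host: upstream.host,
      port: upstream.port,
      path: req.url,
      method: req.method,
      headers: req.headers,
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(proxied);
});

gateway.listen(8080);
```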

Go to Source
Author: Trenton Tyler

API / System Design [Flexible Authentication / Authorization]?

Background:

We’re a smaller shop that puts out a number of products requiring authentication and authorization. We’re currently using a third-party service to “spin up new auth APIs” for each application. However, I would like to investigate potential designs for doing this ourselves in-house, due to additional security requirements that may be needed in the future. I’ve included information below and would appreciate any design guidance or support, as I’m relatively new to designs of this scale.

Summary of goals:

An interface that allows a developer to create and manage authentication APIs for multiple applications via a webpage. This includes a mostly automated process for spinning up new auth APIs, and ideally the ability to make some RBAC / ABAC changes via this page.

Acceptable ‘Limitations’:

  1. All APIs can expose the same common auth endpoints (login, register, etc.), so they’re basically just “images” of one another.

  2. While this would ideally be entirely automated, some parts I’ve thought of being manual are:
    * Domain configuration (pointing subdomains to new endpoints)
    * Spinning up additional VMs (needed?)
    * Spinning up additional DBs or tables?
    * Minor configuration changes
    * others I haven’t thought of?

User Stories:

  • As a Developer, I want to login to a web portal so that I can manage auth APIs.
  • As a Developer, I want to create a new API in the web UI, so that I can then integrate it to new applications.
  • As a Developer, I want to manage users in the web UI, so I can oversee access to our applications.
  • As a Developer, I want to …

Future Considerations:

Each application’s API should likely have its user store (the table containing user information) segregated into a separate database on separate hardware, to minimize the attack surface and improve security and scaling. For now, I’m thinking of different subdomains, or maybe request parameters, to separate the APIs?
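
As a rough illustration of the subdomain idea, here is a minimal sketch of resolving the application (tenant) from the request’s subdomain and selecting that tenant’s isolated user store. The tenant names and connection strings are placeholders, not a recommendation.

```typescript
// Hypothetical sketch: resolve the auth "tenant" from the subdomain
// (e.g. app1.auth.example.com) and pick that tenant's isolated user store.
import * as http from "http";

// Each application gets its own user database (separate hardware, per the
// future considerations above); these connection strings are placeholders.
const tenantDatabases: Record<string, string> = {
  app1: "postgres://auth-db-app1.internal/users",
  app2: "postgres://auth-db-app2.internal/users",
};

function resolveTenant(req: http.IncomingMessage): string | undefined {
  const host = req.headers.host ?? ""; // e.g. "app1.auth.example.com"
  const subdomain = host.split(".")[0];
  return tenantDatabases[subdomain] ? subdomain : undefined;
}

const server = http.createServer((req, res) => {
  const tenant = resolveTenant(req);
  if (!tenant) {
    res.writeHead(404);
    res.end("unknown tenant");
    return;
  }
  // From here, /login, /register, etc. would all operate against the
  // tenant-specific user store only.
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ tenant, userStore: tenantDatabases[tenant] }));
});

server.listen(3000);
```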

Thoughts:

I feel like there may be some solution that involves building a template/image of an auth API on Azure and just duplicating the VM or image, but I’m not too sure about this route either. Obviously management, maintenance, updates, etc. for these would be more hands-on, but feel free to provide feedback on this as well.

Thanks in advance!

Go to Source
Author: ClicheCoffeeMug

Is it a good practice to have an endpoint URL with a parameter accepting different types of values?

In my current maintenance project, there is a REST API resource URL like this:

/sites/<site id or site code>/buildings/<building id or building code>

In this endpoint URL, there are two parameters:

  • <site id or site code>
  • <building id or building code>

As the names indicate, these two parameters are ambiguous: the value of the first can be either a site id or a site code, and the value of the second can be either a building id or a building code. Implicitly, however, the different forms are meant to identify the same resource.

For instance, suppose there is a building with 1 as the building id and rake as the building code, located in a site with 5 as the site id and SF as the site code. Then the following endpoint URLs should all retrieve the same result:

  • /sites/5/buildings/1
  • /sites/SF/buildings/1
  • /sites/5/buildings/rake
  • /sites/SF/buildings/rake

The implementation of such a resource endpoint contains lots of if conditions due to the ambiguity. However, from the end user’s perspective, this seems handy.
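
To make the branching concrete, here is a minimal sketch of the kind of disambiguation such a handler needs; the in-memory data stands in for whatever data store the real service uses.

```typescript
// Hypothetical sketch of the "id or code" disambiguation the handler needs.
interface Site { id: number; code: string }
interface Building { id: number; code: string; siteId: number }

const sites: Site[] = [{ id: 5, code: "SF" }];
const buildings: Building[] = [{ id: 1, code: "rake", siteId: 5 }];

// Handler for GET /sites/:siteKey/buildings/:buildingKey
function getBuilding(siteKey: string, buildingKey: string): Building | undefined {
  // If a path segment is purely numeric, treat it as an id; otherwise as a code.
  const site = /^\d+$/.test(siteKey)
    ? sites.find((s) => s.id === Number(siteKey))
    : sites.find((s) => s.code.toLowerCase() === siteKey.toLowerCase());
  if (!site) return undefined;

  return /^\d+$/.test(buildingKey)
    ? buildings.find((b) => b.siteId === site.id && b.id === Number(buildingKey))
    : buildings.find((b) => b.siteId === site.id && b.code === buildingKey);
}

// All four URL variants above resolve to the same building:
console.log(getBuilding("5", "1"), getBuilding("SF", "rake"));
```

Note the hidden assumption that codes are never purely numeric; a numeric code would be misread as an id, which is part of what makes the design questionable.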

My question is whether such an endpoint design is a good practice or a typical bad practice?

Go to Source
Author: Rui

Is it an anti-pattern for a service to both expose APIs and listen to events?

I am planning to build a service that will expose simple REST APIs and have a database in the backend. I also want to add logic that listens to notifications emitted by another service; some business logic will then update the corresponding row in the database.

For updating the database row from notifications, I can think of two approaches:

  1. The listener process calls an internal API (used only by this service, never by clients) instead of updating the database directly.

  2. The listener process updates the database directly.

I can see some pros and cons of each approach. In Approach 1, we are unnecessarily adding a REST API that is never used by clients.

In Approach 2, we are opening a back way into the database, rather than having all writes come through the REST API.
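
For illustration, here is a minimal sketch of what the two approaches look like side by side; the event names, endpoint path, and update function are hypothetical.

```typescript
// Hypothetical sketch contrasting the two approaches (assumes Node 18+ for global fetch).
import { EventEmitter } from "events";

// Shared persistence logic, also used by the public REST handler.
function updateRowInDatabase(id: string, status: string): void {
  console.log(`UPDATE rows SET status='${status}' WHERE id='${id}'`);
}

// Public REST API handler (e.g. PUT /rows/:id) calls the same logic.
function handlePutRow(id: string, status: string): void {
  updateRowInDatabase(id, status);
}

// Stand-in for the notification stream from the other service.
const notifications = new EventEmitter();

// Approach 1: the listener calls an internal-only HTTP endpoint,
// so every write still flows through the API layer.
notifications.on("row-changed-approach-1", async (evt: { id: string; status: string }) => {
  await fetch(`http://localhost:8080/internal/rows/${evt.id}`, {
    method: "PUT",
    body: JSON.stringify({ status: evt.status }),
  });
});

// Approach 2: the listener updates the database directly,
// bypassing the REST layer but reusing the same business logic.
notifications.on("row-changed-approach-2", (evt: { id: string; status: string }) => {
  updateRowInDatabase(evt.id, evt.status);
});
```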

Can someone help me tell whether one of these is an anti-pattern, and which one is better to use?

Go to Source
Author: hatellaCollege

How can I ensure the client and server both have access to all API calls in use?

On multiple occasions, we’ve deployed frontend code to production only to find out the backend (REST or GraphQL) hasn’t shipped their side yet. Worse yet, we unexpectedly find out that a param name changed, which may throw an error. Another example: the backend removes an API thinking that clients no longer use it, and the frontend crashes. If any layer of communication between frontend and backend breaks down, we may end up with catastrophic errors.

I think the “best solution” is to use a tool like Cypress or Codecept to create a suite of integration tests which checks every API call the frontend may use. Sadly, that’s a heavyweight solution that requires significant upfront investment in developer time.

Anyway, I’m looking for a simple solution to this problem. Maybe something that checks affected APIs when the frontend opens a PR, and/or something that checks the frontend repo when the backend deploys to production.
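
One lightweight shape this could take, assuming the backend publishes an OpenAPI document and the frontend maintains (or generates) a list of the endpoints it calls, is a small CI script that fails a PR when a used endpoint is missing from the spec. The file names below are assumptions for the sketch.

```typescript
// Hypothetical CI check: fail if the frontend references an endpoint
// that the backend's published OpenAPI spec no longer declares.
import { readFileSync } from "fs";

// e.g. ["GET /users/{id}", "POST /orders"], maintained in the frontend repo
// or generated from its API client layer (file name is an assumption).
const usedEndpoints: string[] = JSON.parse(
  readFileSync("frontend/used-endpoints.json", "utf8")
);

// OpenAPI document exported by the backend build (file name is an assumption).
const spec: any = JSON.parse(readFileSync("backend/openapi.json", "utf8"));

// Flatten the spec's paths into "METHOD /path" strings.
const declared = new Set<string>();
for (const [path, methods] of Object.entries(spec.paths ?? {})) {
  for (const method of Object.keys(methods as Record<string, unknown>)) {
    declared.add(`${method.toUpperCase()} ${path}`);
  }
}

const missing = usedEndpoints.filter((e) => !declared.has(e));
if (missing.length > 0) {
  console.error("Endpoints used by the frontend but absent from the backend spec:");
  missing.forEach((e) => console.error(`  ${e}`));
  process.exit(1);
}
console.log("All frontend API calls are declared in the backend spec.");
```

Consumer-driven contract testing tools such as Pact cover the same ground more thoroughly, but a check along these lines stays closer to the simple end being asked for.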

Any ideas or experience solving this problem?

Go to Source
Author: Brandon