In a micro-services architecture we can have:
A single API gateway providing a single API for all clients.
A single API gateway providing an API for each kind of client.
A per-client API gateway providing each client with its own API, which is the Backends for Frontends (BFF) pattern.
Netflix uses the second style (see Inside the Netflix API Redesign). We can surely say that they have created a smart piece of middleware in their architecture that takes on multiple responsibilities.
But how much load can this single API back-end handle? It seems that it could easily become a bottleneck.
So my question is: what are the benefits of choosing a single API to handle requests from more than 1000 kinds of clients, instead of creating an API gateway specifically designed for one type of client? Don't they face major challenges in managing and maintaining this complex piece?
It all depends on where your end users are. In the case of Netflix, they have different types of clients: web, mobile, streaming sticks, Blu-ray players, and so on. The web client is updated to the latest version all the time, mobile clients are updated eventually, while a Blu-ray player with a pre-installed app, for example, may never be updated.
You have to version your APIs accordingly for each platform and maintain them according to each client's update cycle for backward compatibility. If you have too many variations in a single API, it will be hard to maintain; it is easier to write an API for each type of client. Unless you have a real need for the third option and enough resources to develop for each type of client, I wouldn't jump into it, as you would have to maintain many variations of the API for the same purpose.
I would start small, with the first option.
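To make the distinction concrete, here is a minimal sketch of the second style: one gateway process exposing a separate API surface per kind of client. The service URLs and response shapes are hypothetical, and the sketch is in TypeScript/Express purely for illustration:

```typescript
import express from "express";

const app = express();

// One API surface per kind of client, tuned to what that client needs.
// A TV client wants a lean payload; the web client wants a richer one.
app.get("/api/web/home", async (_req, res) => {
  // hypothetical downstream microservices
  const catalog = await fetch("http://catalog-service/titles").then(r => r.json());
  const reviews = await fetch("http://review-service/top").then(r => r.json());
  res.json({ catalog, reviews }); // rich response for the web client
});

app.get("/api/tv/home", async (_req, res) => {
  const catalog = await fetch("http://catalog-service/titles").then(r => r.json());
  res.json({ titles: catalog.slice(0, 20) }); // lean response for TVs
});

app.listen(8080);
```

In the per-client (BFF) variant, the /api/web and /api/tv surfaces would instead be split into separate gateway deployments, each owned by the corresponding client team.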
When doing microservices, it's quite common and good practice to have a top layer, called an API gateway, that sits in front of a group of microservices to facilitate requests and the delivery of data and services.
When using micro-frontends (MFEs) to compose a UI app from multiple sub-apps, I cannot really see the benefit of still using an API gateway to communicate between each MFE and its corresponding microservice.
In many diagrams, an API gateway is still used in front of the microservices.
But the fact that each micro-frontend is its own app probably also means it will communicate with its own API. In that case I would rather use a thin API layer over each microservice and have each MFE communicate directly with that layer, because an MFE probably won't ever need to read data from another MFE (that would go against the spirit of MFEs, or am I wrong?).
What kind of approach would you use in your projects?
I am a Spring Boot dev.
I develop RESTful web services.
One of my colleagues developed an API that does two things depending on an operation type.
If opType = Set, the API sets/unsets a flag on the backend, and if opType = Get, the API gets the status of the flag.
Does this not break the architecture of REST APIs?
We have POST/PUT to change data on the backend, either creating or updating it.
And we have GET, to fetch the status of something from the backend.
Now, I want the opinion of more experienced developers!
Should this be allowed, i.e., having multiple operations behind one API call, or should we create a separate API for each task?
Also, the front-end devs on my team don't like integrating multiple APIs, suggesting that the more API calls there are, the poorer the customer's user experience will be.
Is this the normal practice among app developers?
Comments requested.
GET requests in REST are not supposed to change the state of the server; they are read operations, whereas PUT/POST modify the state of the server, in the most general sense.
So usually you should have two endpoints: GET to read the state of the flag, and PUT/POST to create or modify it.
Having said that, nothing technically prevents you from implementing everything in one API. Such an API won't adhere to REST conventions, that's true, but from the client-server communication standpoint (usually HTTP-based), it's still perfectly doable.
Sure enough, the separation into two endpoints makes the API clearer and the code easier to debug and maintain. But beyond being "RESTful", this can be treated as an opinionated claim.
I didn't really get the argument about integrating multiple APIs; in my understanding the effort is the same, and the result is even clearer to front-enders, but they might have their own arguments.
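For illustration, here is a minimal sketch of that two-endpoint split. It is written in TypeScript/Express for brevity (the Spring Boot equivalent would pair a @GetMapping with a @PutMapping); the route name and the in-memory flag are made up:

```typescript
import express from "express";

const app = express();
app.use(express.json());

let flag = false; // backend state; kept in memory only for this sketch

// GET: a read operation, never changes server state
app.get("/flag", (_req, res) => {
  res.json({ enabled: flag });
});

// PUT: an idempotent write that sets/unsets the flag
app.put("/flag", (req, res) => {
  flag = Boolean(req.body.enabled);
  res.json({ enabled: flag });
});

app.listen(8080);
```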
After coding for a couple of years, I have integrated many different software services into applications I was working on, using the API documentation the software owner provided. And I thought that was all I needed to know about APIs: that they're just a way to make software services communicate with each other.
But now I have been given a task to create an application. I won't go into detail, but let's say it just needs to implement CRUD operations, using Vue on the front end and Laravel on the back end. The task description mentions that I should use a REST API for triggering those operations, and that's the part that confuses me!
Since I have never created an application from scratch (I have only worked on already-stable applications, fixing bugs and implementing new functionality, which I guess is what the job looks like for most people in big companies today), I thought that those two frameworks (Vue and Laravel) already implement REST APIs, since they can communicate with each other.
Why am I specifically asked to use a REST API to trigger those operations? Is there any way other than an API to make the front end communicate with the back end (even though I am already using frameworks)? If not, do they want me to create my own REST API for communication, and not use one already provided by the frameworks? I am confused: why did they mention using a REST API as if it weren't the default option, something that shouldn't even be questioned, just expected behavior?
why did they mention using a REST API as if it weren't the default option
For many years, offering an API in the backend for JS frontend consumption was not the default option. Traditional "round trip" applications use a form that submits to the server with a full page refresh, and I'd hazard a guess that most web applications in production today still work like that.
With the advent of Vue, React, Angular, etc., there is an expectation that fetching and sending data is done via APIs in AJAX operations. This gives applications a more seamless feel, and makes them faster, since only a relatively small amount of data needs to be sent or received.
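As a minimal sketch of that AJAX style (the /api/tasks endpoint and its response shape are made up for illustration):

```typescript
// Fetch a small JSON payload from the backend's REST API instead of
// submitting a form and redrawing the whole page.
async function loadTasks(): Promise<void> {
  const res = await fetch("/api/tasks", {
    headers: { Accept: "application/json" },
  });
  const tasks: { id: number; title: string }[] = await res.json();
  // hand the data to the frontend component's state; only this JSON
  // crosses the network, not a full HTML page
  console.log(tasks);
}
```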
In small Laravel/Vue applications, the frontend and backend are often in the same repo, and are deployed together as a single unit. However, as the size and complexity of an application increases, there is value in splitting up these pieces into microservices, which can be deployed separately, without tricky system dependencies complicating the deployment pipeline and sign-off process. Using an API lends itself well to that approach.
Indeed, as the backend grows, the API is not one service but several, split by process area (e.g. user, sign-up, checkout, dashboard, etc.).
Do Laravel and Vue always use ... APIs to communicate?
So, to answer your main question, you don't have to use APIs/AJAX with Vue and Laravel. You can still use standard HTTP forms and redraw the whole screen if you want.
Do Laravel and Vue always use RESTful APIs to communicate? [my emphasis]
Another way of interpreting the question is that perhaps you have received instructions from someone who was differentiating a REST API from a different kind of API. On the web, GraphQL is becoming more popular. Server-to-server, SOAP (XML) used to be very common, and is still in use in many enterprises.
First of all, the gap is not going to be filled "ASAP", because it requires domain knowledge that you are missing. And yes, a RESTful API is the best way, unless you want multi-dimensional communication across multiple platforms.
Say we have the following taxi-hailing application, composed of loosely coupled microservices:
The example is taken from https://www.nginx.com/blog/introduction-to-microservices/
Each service has its own REST API, and all services are combined behind a single API gateway. The client does not talk to the individual services but to the gateway. The gateway requests information from several services and combines it into a single response. To the client, it looks like it is talking to a monolithic application.
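For concreteness, a minimal sketch of that fan-out-and-combine step at the gateway (hypothetical endpoints and response shapes, TypeScript/Express):

```typescript
import express from "express";

const app = express();

app.get("/trip/:id", async (req, res) => {
  // the gateway calls several microservices in parallel...
  const [trip, driver, billing] = await Promise.all([
    fetch(`http://trip-service/trips/${req.params.id}`).then(r => r.json()),
    fetch(`http://driver-service/for-trip/${req.params.id}`).then(r => r.json()),
    fetch(`http://billing-service/for-trip/${req.params.id}`).then(r => r.json()),
  ]);
  // ...and combines the results into a single response, so the client
  // appears to be talking to one monolithic application
  res.json({ ...trip, driver, billing });
});

app.listen(8080);
```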
I am trying to understand: where could we incorporate Falcor into this application?
"One Model Everywhere", from http://netflix.github.io/falcor/:
Falcor lets you represent all your remote data sources as a single domain model via a virtual JSON graph. You code the same way no matter where the data is, whether in memory on the client or over the network on the server.
In this taxi-hailing application, each microservice already represents a single domain model. Can you think of any benefit we could derive from wrapping each microservice with Falcor? I cannot.
However, I think it would be very convenient to incorporate Falcor into the API gateway, because we could abstract the different domain models created by the microservices into one single model, or at least a few models.
What is your opinion?
You are right. This is how Netflix uses Falcor and what the Falcor router is designed for.
From the documentation:
The Router is appropriate as an abstraction over a service layer or REST API. Using a Router over these types of APIs provides just enough flexibility to avoid client round-trips without introducing heavy-weight abstractions. Service-oriented architectures are common in systems that are designed for scalability. These systems typically store data in different data sources and expose them through a variety of different services. For example, Netflix uses a Router in front of its Microservice architecture.
It is rarely ideal to use a Router to directly access a single SQL Database. Applications that use a single SQL store often attempt to build one SQL Query for every server request. Routers work by splitting up requests for different sections of the JSON Graph into separate handlers and sending individual requests to services to retrieve the requested data. As a consequence, individual Router handlers rarely have sufficient context to produce a single optimized SQL query. We are currently exploring different options for supporting this type of data access pattern with Falcor in future.
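To make that concrete, here is a sketch of a Falcor Router sitting at the gateway of the taxi-hailing example. The route pattern and the driver-service URL are invented for illustration; the handler contract (routes resolving to path/value pairs) is the one the documentation above describes:

```typescript
import Router from "falcor-router";

// One virtual JSON-graph path, backed by the driver microservice.
const router = new Router([
  {
    route: "driversById[{integers:ids}]['name','rating']",
    async get(pathSet: any) {
      const fields: string[] = pathSet[2]; // which keys the client asked for
      const pathValues: { path: (string | number)[]; value: unknown }[] = [];
      for (const id of pathSet.ids) {
        // fan out to the service that owns this part of the graph
        const driver = await fetch(`http://driver-service/drivers/${id}`)
          .then(r => r.json());
        for (const field of fields) {
          pathValues.push({
            path: ["driversById", id, field],
            value: driver[field],
          });
        }
      }
      return pathValues; // Falcor merges these into the client's cache
    },
  },
]);
```

The client then asks for paths like driversById[42]['name','rating'] and lets the router decide which services to call, which is exactly the gateway role described above.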
Falcor is really a great API if it is used in the right way, for relevant use cases such as:
If your page has to make multiple REST endpoint calls
These calls don't depend on each other
All the REST calls happen on the initial page load
Performance: if you want to cache the REST responses (for example, if the microservice uses GemFire caching, you may not need the Falcor cache; you could still use Falcor caching to reduce network latency)
Server request batching: when running Falcor in a Node environment, you may want to cut down the number of calls from the client side to the Node server.
Easier response parsing: if you don't want the client code to worry about extracting the data points from the REST response (including error handling)
and so on.
However, there are plenty of situations where Falcor does not serve the purpose as well, and I feel it is better to call the endpoint directly:
If the REST calls are dependent on one another
If you need to pass a lot of parameters when calling the endpoint
If you don't intend to cache the response(s)
If you want to share secure cookies (e.g. XSRF tokens) with the REST web service
I have an existing web service that supports ordering, with multiple operations (approximately 20). It is a single web service that supports the ordering function, and it interacts with multiple other services to provide that capability.
Since there is a lot of business functionality within this app and it is supported by a 10-member team, I believe it is a monolith (though I assume there is no hard-and-fast rule for defining what a monolith is).
We are planning to deploy the application in a Cloud Foundry environment, and we plan to split the app into 2-3 microservices, primarily to enable them to scale independently.
The first few APIs, which enable searching for a product, typically receive far more hits, whereas the API that supports actual order submission receives less than 5% of the hits. So the product-search API should have a significantly larger number of instances than the order-submission API.
Though I am not sure whether we could split it based on sub-domains (which I have read should be the basis), we are thinking of splitting it based on the call sequence, as explained above.
I have also read that microservices should be choreographed, not orchestrated. However, to ensure our existing consumers are not impacted, I believe we should expose an API layer that would orchestrate the calls to these microservices. Is providing an API gateway the normal approach to ensure consumers do not end up calling multiple microservices, while also providing a layer of abstraction?
This seems to be orchestration more than choreography. Though I am not hung up on the theoretical aspects, I would like to understand the different solutions pursued for this problem statement in the enterprise world.
The Benefits of Microservices
Deploy & Scale Independently
Easier to 'Reason About'
Separation of Concerns
Single Responsibility
(Micro)Service-Oriented Architecture
I would suggest splitting your services based on domain. This is a logical and efficient approach which makes it an easy starting point. Your monolithic package structure may already be organized in this manner, which simplifies the refactoring even more.
API Gateway
The typical Spring Cloud approach for this would be to use a Zuul proxy at the edge of your network, which receives requests from your clients (web, mobile, etc.) and routes them to the microservices behind your firewall. The client interfaces with only a single domain, and Zuul handles CORS out of the box.
Resources:
API Gateway Pattern
Routing and Filtering