Mapping microservices on the frontend - URL rewriting

This is probably a somewhat opinion-based question, but I will try to keep it technical so it stays relevant.
Consider having several microservices: a, b, c.
To make these available to the frontend, they could be exposed as:
https://host/services/a
https://host/services/b
https://host/services/c
However, the fact that the endpoints are split between different services is largely irrelevant to the frontend, and if we can guarantee the endpoints don't clash, it would be great to have them available directly:
a/endpoint1 -> https://host/services/endpoint1
a/endpoint2 -> https://host/services/endpoint2
b/endpoint3 -> https://host/services/endpoint3
c/endpoint4 -> https://host/services/endpoint4
To implement such a mapping, one needs to list all endpoints, or at least write some matching pattern, within the proxy service. This is very nice for the frontend team to work with, but it is unfortunately very easy to break.
What are the best practices for mapping the URLs of microservices? The only thing that comes to my mind is some export of OpenAPI, which the frontend could use to resolve the right path. However, every service generates its own OpenAPI JSON, so we are basically back to the original problem.
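For illustration, here is a minimal sketch of the explicit route table the question describes, written as a tiny TypeScript/Express proxy (Node 18+ for the global fetch); the service hosts and endpoint names are assumptions, not part of the original setup. It also shows why this is fragile: every new or renamed endpoint requires a table update.

    import express from "express";

    // Hypothetical upstream services; hosts and endpoint names are assumptions.
    const routeTable: Record<string, string> = {
      "/services/endpoint1": "http://service-a/endpoint1",
      "/services/endpoint2": "http://service-a/endpoint2",
      "/services/endpoint3": "http://service-b/endpoint3",
      "/services/endpoint4": "http://service-c/endpoint4",
    };

    const app = express();

    app.use(async (req, res) => {
      const upstream = routeTable[req.path];
      if (!upstream) {
        // Any endpoint missing from the table silently 404s, which is
        // exactly how this mapping "breaks" as services evolve.
        res.status(404).end();
        return;
      }
      const response = await fetch(upstream);
      res.status(response.status).json(await response.json());
    });

    app.listen(8080);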

Are you sure the frontend team needs ALL the exposed endpoints? Usually, frontends talk to an API Gateway or, as the cool kids call them these days, a "Backend for Frontend".
In a nutshell, it's a special service that takes care of exposing only the functionalities/endpoints needed by the frontend. It will forward calls to the relevant services or, if necessary, call multiple services and aggregate the results.
In most cases these API Gateways don't have a database, as they're retrieving all the data from other services. They might, however, make use of a caching layer to speed things up.
You can even have multiple API Gateways, one per frontend (e.g. desktop, mobile).
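To make the BFF idea concrete, here is a minimal sketch in TypeScript/Express; the /api/dashboard route, service URLs, and response shape are all illustrative assumptions:

    import express from "express";

    const app = express();

    // The BFF exposes only what this particular frontend needs.
    app.get("/api/dashboard", async (_req, res) => {
      // Fan out to the relevant services and aggregate the results;
      // no database here, the data comes from the services themselves.
      const [profile, notifications] = await Promise.all([
        fetch("http://service-a/profile").then(r => r.json()),
        fetch("http://service-b/notifications").then(r => r.json()),
      ]);
      res.json({ profile, notifications });
    });

    app.listen(8080);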

Related

Should an API do more than one thing?

I am a Spring Boot dev.
I develop RESTful web services.
One of my colleagues developed an API that does two things based on an operation type.
If opType = Set, the API sets/unsets a flag at the backend, and if opType = Get, the API gets the status of the flag.
Does this not break the architecture of REST APIs?
We have POST/PUT to change some data at the backend, either create or update.
And we have GET, to read the status of something from the backend.
Now I would like the opinion of more experienced developers!
Should this be allowed, i.e. having multiple operations behind one API call, or should we create a separate API for each task?
Also, the frontend devs on my team don't like integrating multiple APIs, arguing that the more API calls there are, the poorer the user experience will be.
Is this the normal practice among app developers?
Comments requested.
GET requests in REST are not supposed to change the state of the server; they are read operations, whereas PUT/POST modify the state of the server in the most general sense.
So usually you should have two endpoints: GET to read the state of the flag, and PUT/POST to create or modify it.
Having said that, there is nothing that technically restricts you from implementing everything in one API. Such an API won't adhere to REST conventions, that's true, but from the client-server communication standpoint (usually HTTP-based), it's still perfectly doable.
Sure thing, the separation into two endpoints makes the API clearer and the code easier to debug and maintain. But beyond being "RESTful", this can be treated as an opinionated claim.
I didn't really get the argument about integrating multiple APIs: in my understanding, the effort is the same, and it is even clearer to frontend developers, but they might have their own arguments.
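To make the two-endpoint split concrete, here is a minimal sketch (in TypeScript/Express rather than Spring Boot, purely for brevity; the /flag resource and its shape are illustrative assumptions):

    import express from "express";

    const app = express();
    app.use(express.json());

    // In-memory stand-in for whatever backend store holds the flag.
    let flagEnabled = false;

    // GET only reads state; it never changes it.
    app.get("/flag", (_req, res) => {
      res.json({ enabled: flagEnabled });
    });

    // PUT sets/unsets the flag; all writes live on this endpoint.
    app.put("/flag", (req, res) => {
      flagEnabled = Boolean(req.body.enabled);
      res.json({ enabled: flagEnabled });
    });

    app.listen(8080);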

Should Microservices be reusable?

With "reusable" I do NOT mean sharing domain-specific models.
I mean: should a microservice created for one application be reusable in another application?
Is it sufficient if they are reusable within an application?
What is the best way to decouple microservices?
From my point of view, as soon as a microservice calls another microservice it is tightly coupled, meaning it cannot easily (without modifications) be extracted and put into another microservice application that does not have the same service it refers to.
To decouple them, in my opinion, there are the following ways:
1. Microservice A talks to microservice B through a standard contract, e.g. a specific protocol.
2. Another microservice C acts as a gateway: it asks microservice B for the data and passes it as input to microservice A.
A concrete example for option 2 would be:
Coupled:
Client -> API Gateway -> UserProfileService -> Authorization Service
Decoupled:
Client -> API Gateway -> Authorization Service -> API Gateway -> UserProfileService
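A minimal sketch of the decoupled variant, assuming a TypeScript/Express gateway and hypothetical service URLs: the gateway calls the Authorization Service first and passes the result to the UserProfileService, so the two services never call each other.

    import express from "express";

    const app = express();

    app.get("/profile", async (req, res) => {
      // Step 1: the gateway asks the Authorization Service to check the caller.
      const auth = await fetch("http://authorization-service/check", {
        headers: { authorization: req.headers.authorization ?? "" },
      });
      if (!auth.ok) {
        res.status(401).end();
        return;
      }
      const { userId } = await auth.json();

      // Step 2: pass the authorization result as plain input to the
      // UserProfileService, which stays unaware of the Authorization Service.
      const profile = await fetch(`http://user-profile-service/users/${userId}`);
      res.status(profile.status).json(await profile.json());
    });

    app.listen(8080);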
Am I right in assuming that this all boils down to the goal of the microservice, and that there is no right or wrong?
Are there any other strategies I'm missing to decouple a microservice?
I think the responses you're likely to get will represent opinions more than answers, but I'll go ahead and give mine!
The literature for microservices has long said, "decouple, decouple, decouple", but frankly I don't find this to be reality. When someone has created a useful API that would empower the functions of your own (auth, payments, and obviously databases come to mind), is it wrong to suggest that those need to be run alongside yours? Most people don't go through complicated, logic-filled gateways in order to make payments via Stripe or send text messages via Twilio, so why should privately hosted APIs be any different?
It is great to design your own service to be a reusable, easily consumable/deployable component. That shouldn't mean it can't have dependencies, but rather that we should be mindful of the bloat those dependencies introduce. This mindfulness is something devs should practice whenever they introduce dependencies, regardless of whether they are app packages or dependent services/APIs.
Disclosure: I build and run a framework/platform, Architect.io, to help cloud-native teams collaborate and build upon each other's services. I've seen first-hand how companies like Facebook use similar tactics to enable service re-use and consumption, and wanted to build a microservices dependency resolver for the general public.
It completely depends on what microservice you are building. For example, say you are building an email notification service: that can be reused by different applications. Another example: say you are building a recommendation system. It's very specific to a single application, and it hardly makes sense to design it so that it can be reused in different applications.
Choose according to the context. There is no right way. It all depends on the application.

Microservice requests

I'm trying to start a little microservice application, but I'm a little bit stuck on some technicalities.
I'm trying to build an issue tracker application as an example.
It has 2 database tables, issues and comments. These will also be separate microservices, for the sake of the example.
It has to be a separate API that can be consumed by multiple types of clients e.g. mobile, web etc..
When using a monolithic approach, the whole codebase is coupled together, and when handling a request to the REST API, say '/issues/19', I would fetch the issue with id '19' and its corresponding comments by means of the following pseudocode:
def on_request_issue(id):  # handler for the route '/issues/<id>'
    issue = IssuesModel.findById(id)
    issue.comments = CommentsModel.findByIssueId(id)
    return issue
But I'm not sure on how I should approach this with microservices. Let's say that we have microservice-issues and microservice-comments.
I could either let the client send a request to both '/issues/19' and '/comments/byissueid/19'. But that doesn't seem nice to me, since if a page needs several such resources, we're sending a lot of requests for one page.
I could also make a request to microservice-issues and, inside it, make a request to microservice-comments, but that looks even worse to me, since from what I've read microservices should not be coupled, and this couples them pretty hard.
So then I read about API gateways: they could/should receive a request and fan out to the other microservices. But I couldn't really figure out how to use an API gateway. Should I write code in there, for example, to catch the '/issues/19' request, then fan out to both microservice-issues and microservice-comments, assemble the results and return them?
In that case, I feel I'm doing the work twice; won't the API gateway become a new monolith then?
Thank you for your time.
An API gateway sounds like what you need.
If you keep it simple, just forwarding to internal APIs, it will not become your new monolith.
It will also allow you to do better processing when your application grows with new microservices, or when you have to support different clients (browser, mobile apps, watch, IoT, etc.).
BTW, the example you show sounds like a good exercise; in reality, for most web apps, it looks like over-design. I would not break every DB call out into its own microservice.
One of the motivations for breaking something into small(er) services is service autonomy. In this case the question is: when the comments service is down, should you display the issue or not? If they are always coupled anyway, they probably shouldn't reside in two services; if they aren't, then making two calls will let you get this decoupling.
That said, you may still need an API gateway to solve CORS issues for your client.
Lastly, '/comments/byissueid' is not a good REST interface; the issue id should be a query parameter: /comments/?issueId=..
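Putting the pieces together, here is a minimal sketch of such a gateway handler in TypeScript/Express (the service URLs are assumptions; it uses the query-parameter style suggested above and tolerates the comments service being down, which is the autonomy question mentioned earlier):

    import express from "express";

    const app = express();

    app.get("/issues/:id", async (req, res) => {
      // Start both upstream calls in parallel; swallow comment failures.
      const issuePromise = fetch(`http://microservice-issues/issues/${req.params.id}`);
      const commentsPromise = fetch(
        `http://microservice-comments/comments/?issueId=${req.params.id}`
      ).catch(() => null);

      const issueResponse = await issuePromise;
      if (!issueResponse.ok) {
        res.status(issueResponse.status).end();
        return;
      }
      const issue = await issueResponse.json();

      // Service autonomy: if comments are unavailable, still show the issue.
      const commentsResponse = await commentsPromise;
      issue.comments =
        commentsResponse && commentsResponse.ok ? await commentsResponse.json() : [];

      res.json(issue);
    });

    app.listen(8080);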

How to define API gateway URLs when splitting a monolith into microservices

We are splitting a monolith application into microservices. This will be a gradual process: initially we will start with 2 microservices, later we will split them further, and so on.
The monolith exposes a REST API which provides methods for managing tens of different entities (e.g. users, user_types, roles, role_types, etc.). There is only one consumer of this REST API: a JavaScript frontend app.
We are currently investigating two possibilities how to configure the API gateway (Zuul):
URLs will contain the microservice name, e.g. /api/dictionary will serve /api/dictionary/user_types and /api/dictionary/role_types, while /api/data will serve /api/data/users and /api/data/roles. This means the URLs will change over time as we create more microservices, and every time we do, the consumer (frontend) will have to change.
URLs will be based on the entity names, e.g. /api/users, /api/user_types, /api/roles and /api/role_types. The disadvantage is that the Zuul configuration will have to contain an explicit configuration for every single entity managed by the system.
Which of the above approaches is correct?
What Manmay says is correct: you should go with the first approach for long-term gain.
If you still want an alternative, you can combine both approaches by configuring your API gateway so that it routes requests as follows:
/api/users -> /api/data/users
/api/user_types -> /api/dictionary/user_types
/api/roles -> /api/data/roles
/api/role_types -> /api/dictionary/role_types
With this approach, you will not have to compromise on either concern, maintenance or client-side changes.
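One way to keep that per-entity configuration from becoming a maintenance burden is to generate the rewrite table from the entity lists instead of hand-writing every rule. A small TypeScript sketch of the idea (gateway-agnostic, not actual Zuul configuration; the service prefixes and entity names are taken from the question):

    // Which entities live behind which service prefix.
    const services: Record<string, string[]> = {
      "/api/data": ["users", "roles"],
      "/api/dictionary": ["user_types", "role_types"],
    };

    // Build { "/api/users": "/api/data/users", ... } for the gateway to apply.
    const rewrites: Record<string, string> = {};
    for (const [prefix, entities] of Object.entries(services)) {
      for (const entity of entities) {
        rewrites[`/api/${entity}`] = `${prefix}/${entity}`;
      }
    }

    console.log(rewrites);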

What is the role of falcor in a microservice architecture?

Say we have the following taxi-hailing application composed of loosely coupled microservices (the example is taken from https://www.nginx.com/blog/introduction-to-microservices/).
Each service has its own REST API, and all services sit behind a single API gateway. The client does not talk to the individual services but to the gateway. The gateway requests information from several services and combines it into a single response, so to the client it looks like it is talking to a monolithic application.
I am trying to understand: where could we incorporate Falcor into this application?
One Model Everywhere from http://netflix.github.io/falcor/
Falcor lets you represent all your remote data sources as a single domain model via a virtual JSON graph. You code the same way no matter where the data is, whether in memory on the client or over the network on the server.
In this taxi-hailing application, each microservice already represents a single domain model. Can you think of any benefit we could derive by wrapping each microservice with Falcor? I cannot.
However, I think it is very convenient to incorporate Falcor into the API gateway, because we can abstract the different domain models created by the microservices into one single model, or at least a few models.
What is your opinion?
You are right. This is how Netflix uses Falcor, and it is what the Falcor Router is designed for.
From the documentation:
The Router is appropriate as an abstraction over a service layer or REST API. Using a Router over these types of APIs provides just enough flexibility to avoid client round-trips without introducing heavy-weight abstractions. Service-oriented architectures are common in systems that are designed for scalability. These systems typically store data in different data sources and expose them through a variety of different services. For example, Netflix uses a Router in front of its Microservice architecture.
It is rarely ideal to use a Router to directly access a single SQL Database. Applications that use a single SQL store often attempt to build one SQL Query for every server request. Routers work by splitting up requests for different sections of the JSON Graph into separate handlers and sending individual requests to services to retrieve the requested data. As a consequence, individual Router handlers rarely have sufficient context to produce a single optimized SQL query. We are currently exploring different options for supporting this type of data access pattern with Falcor in future.
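To make that concrete, here is a minimal sketch of a Falcor Router sitting in front of a microservice, assuming the open-source falcor-router package; the JSON Graph layout and the service URL are illustrative assumptions.

    // TypeScript; `npm install falcor-router` and Node 18+ assumed.
    const Router = require("falcor-router");

    const router = new Router([
      {
        // One virtual model path, backed by a hypothetical issues service.
        route: "issuesById[{integers:ids}].title",
        get: async (pathSet: { ids: number[] }) => {
          const pathValues: { path: (string | number)[]; value: string }[] = [];
          for (const id of pathSet.ids) {
            const issue = await fetch(`http://microservice-issues/issues/${id}`)
              .then(r => r.json());
            pathValues.push({ path: ["issuesById", id, "title"], value: issue.title });
          }
          return pathValues;
        },
      },
    ]);

The client asks the gateway for paths like ["issuesById", 19, "title"] and neither knows nor cares which microservice ultimately serves them.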
Falcor is really a great API when used the right way, for very relevant use cases, like:
Your page has to make multiple REST endpoint calls
These calls don't depend on each other
All the REST calls happen on initial page load
Performance: if you want to cache the REST responses (for example, if the microservice uses GemFire caching you may not need the Falcor cache; you could still use Falcor caching if you want to reduce network latency)
Server request batching: when running Falcor in a Node environment, you may want to cut down the number of calls from the client side to the Node server
Easier response parsing: if you don't want the client code to worry about extracting the data points from the REST response (including error handling)
and so on...
However, there are plenty of situations where Falcor does not serve the purpose and you are better off calling the endpoint directly:
If the REST calls depend on one another
If you want to pass a lot of parameters when calling the endpoint
If you don't intend to cache the response(s)
If you want to share secure cookies (e.g. XSRF tokens) with the REST web service
