We have multiple microservices which expose health endpoints in the form of JSON. A health response may contain the states of other services that a microservice calls. Is there a way we can monitor these in Grafana? We have Grafana and Telegraf.
Thx in advance
sam
Check this out, I believe the Telegraf HTTP plugin has JSON parsing and can satisfy this.
If you're just doing plain health checks though, I imagine you may have something like service discovery which pretty much has plain HTTP health checking out of the box.
That aside, one suggestion I have is to break up the health checks for independent services. That is to say, if you aggregate them in a top-level microservice and that microservice fails for whatever reason, your monitoring will falsely report failures for the services behind it that may still be up. This goes hand in hand with service discovery if you're just looking for a plain 200 OK HTTP status code.
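For the Telegraf route mentioned above, a minimal telegraf.conf sketch could look like the following. The service URL, field names, and measurement name are assumptions for illustration; check them against your own /health payloads:

```toml
# Example health payload this assumes:
# {"status": "UP", "dependencies": {"payments": "UP", "billing": "DOWN"}}
[[inputs.http]]
  ## URLs of the health endpoints to poll (hypothetical host)
  urls = ["http://orders.internal:8080/health"]
  method = "GET"
  ## Parse the JSON body; nested objects are flattened with "_" separators,
  ## so dependencies.payments becomes the field dependencies_payments
  data_format = "json"
  ## Status values are strings, so they must be listed explicitly to be kept
  json_string_fields = ["status", "dependencies_payments", "dependencies_billing"]
  name_override = "service_health"
```

Grafana can then query the service_health measurement; mapping the string states (UP/DOWN) to numbers, for example with a Telegraf enum processor or a Grafana value mapping, makes them easier to graph and alert on.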
Related
How to sync an action of many microservices and return a single response to the client that takes into consideration each microservice's response?
I'm making a social network application (like Facebook) with microservices, for learning purposes. I've divided the app into the following microservices, each with its own database:
Authentication - Login/Register, returns a JWT token.
Database stored properties are: UserName, Email, PasswordHash, PasswordSalt.
UserProfiles - Gets and Updates profiles.
Database stored properties are: UserName, FirstName, LastName, Gender, Photos
UserPosts - users can publish posts about anything they like; others can comment.
Database stored properties are: UserName, UserPosts, Comments
Gateway - collects http requests from clients, forwards them to correct microservices
There will be more, like Messages between users.
Some properties will be duplicated across databases (UserName, and there may be more). I suppose I cannot avoid that if I want to keep the services independent.
Now, what do I do if the user decides to change a shared property, like UserName? Obviously it will require every service to update its database. But what if one of the services cannot connect to its database or hits some other error? The response should be 500 Internal Server Error. I can see two options:
Make the Gateway send an HTTP request to each microservice, requesting an update. But how do I pass information about an error in one of the services? This seems like a bad approach.
Publish a message (MassTransit, RabbitMQ) to all microservices with update request. This way I can await a response from each service and decide what to return to client. But who should be the publisher here? Gateway? Authentication?
Is there some other way I have not thought about? I'll be thankful for any good-practice, clean-code advice.
Thank you
I tried publishing the messages from the Authentication service, since it's the one that creates the User entity in the first place. But that doesn't feel like a good enough reason to put it there.
After a little more research, I think I may have found a solution. I learnt about the Saga Orchestrator design pattern. Basically it's just a central unit of command; I don't know what all this fancy naming is for. However, I will incorporate a central unit in the Gateway microservice. It will still aggregate HTTP requests from clients, but instead of just redirecting them to the appropriate microservices, it will send a message to each microservice involved over the AMQP protocol (with MassTransit, RabbitMQ or similar). It will await a response from each microservice and decide what to return to the client. If any service returns an error, I will request a rollback operation or a retry.
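To make the idea concrete, here is a toy sketch of that central unit, under stated assumptions: the names (UserNameSaga, ServiceStub) are made up, the services are in-process stubs rather than real AMQP consumers, and compensation is just a callback.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the saga-orchestrator idea described above. Real services
// would be reached over AMQP (MassTransit/RabbitMQ), not in-process calls.
public class UserNameSaga {

    /** Stand-in for one microservice's "update UserName" handler. */
    public interface ServiceStub {
        boolean update(String userName);   // true = success
        void rollback(String userName);    // compensating action
    }

    /**
     * Runs the update against every service; on the first failure, rolls
     * back the services that already succeeded and reports failure (which
     * the gateway would map to a 500, or use to trigger a retry).
     */
    public static boolean updateUserName(String userName, List<ServiceStub> services) {
        List<ServiceStub> done = new ArrayList<>();
        for (ServiceStub s : services) {
            if (s.update(userName)) {
                done.add(s);
            } else {
                for (ServiceStub d : done) d.rollback(userName); // compensate
                return false;
            }
        }
        return true; // all services updated: gateway returns 200
    }
}
```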
To start off, I would argue that such a decomposition would be excessive for a real-world scenario. IMO microservices should evolve around bounded contexts, and all the social media stuff is a single bounded context. If you wanted to introduce another one that is totally different (i.e. a payment gateway), then decomposing into microservices would make some sense. Although, even in such a case I would default to a modular monolith, since microservices are distributed systems and writing a distributed system correctly is hard (which you might have already experienced).
Nevertheless, I recognize that the example you've brought up is solely for learning purposes. So to answer your question: most users are ready to tolerate some inconsistency as long as they are warned about it. So the natural solution to your question is eventual consistency. If you want to update UserName, you make a call to the UserProfile microservice and, if it succeeds, you return a 200 response while sending a UsernameUpdated message via RabbitMQ to the other microservices. In that case, retrying on failure becomes the concern of every microservice that receives the message, each acknowledging it only once its own update succeeds.
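The consumer side of that eventual-consistency flow can be sketched as follows; the class name is hypothetical, and a real handler would be wired to RabbitMQ/MassTransit rather than plain callbacks:

```java
import java.util.function.Supplier;

// Sketch of the retry-then-ack loop described above. A real consumer would
// rely on broker redelivery with backoff and a dead-letter queue instead of
// a simple in-process loop.
public class UsernameUpdatedConsumer {

    /**
     * Retries the local DB update until it succeeds, then acknowledges the
     * message. Returns the number of attempts used, or -1 if exhausted
     * (the message stays unacked, so the broker will redeliver it later).
     */
    public static int handle(Supplier<Boolean> applyUpdate, Runnable ack, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (applyUpdate.get()) {   // e.g. UPDATE profiles SET user_name = ...
                ack.run();             // only ack after a successful update
                return attempt;
            }
        }
        return -1;
    }
}
```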
As an alternative, you may call other microservices via RPC (HTTP) and wait for them to return a response in a synchronous fashion. But I suggest avoiding that.
Regarding the API Gateway: it's a nice pattern when you want a single entry point for your clients that routes requests to different microservices. You can also handle additional cross-cutting concerns like authorization in your API Gateway. But beware that every additional moving part adds complexity to your system.
How can I return a 503 Service Unavailable response code when my API is under maintenance?
Microservice architectures usually have an API gateway where you implement solutions such as parsing a header token and inserting new headers, rate limiting, checks like whether a user is an administrator, etc.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern
You can use https://spring.io/projects/spring-cloud-gateway, which is used to build an API gateway based on the Spring stack.
I believe you can use this to temporarily shut down certain routes and also configure different errors.
Assuming your Spring Boot web service is down completely for maintenance, then it's best not to handle this at the Spring Boot level. Perhaps it's better to come up with a solution by which you can swap out the server for something else that returns the 503.
Here's a very simple example:
Your API's domain is api.myservice.com
You then switch the domain to point at a stand-in that statically responds with the 503.
You do the maintenance on the server/database/etc.
Once the maintenance is done and your Spring Boot service is up and running, you then switch your domain to point back to the server.
Note: domain records have a Time To Live (TTL), so the above example is just something to give you an idea. You'll have to take the timing into consideration. An actual solution is hard to recommend without your environment details or context.
The point I'm trying to make is that perhaps Spring Boot is usually not the place to do this.
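As an illustration of the "something else that returns the 503" above, here is a minimal stand-in using the JDK's built-in com.sun.net.httpserver (the port, body text, and Retry-After value are arbitrary):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal "maintenance page" server: every request, on any path, gets a
// 503 plus a Retry-After hint.
public class MaintenancePage {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Down for maintenance".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Retry-After", "3600"); // seconds
            exchange.sendResponseHeaders(503, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

A real setup would more likely use a static page on a CDN or a load-balancer rule, but the behavior is the same.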
I am using Spring Boot and Spring Cloud for a microservices architecture, using various things like an API Gateway, Distributed Config, Zipkin + Sleuth, and the 12-factor methodology, where we have a single DB server with the same schema but tables private to each service.
Now I am looking at the options below. Note: the response object is nested and returns data in a hierarchy.
1. Can we ask the downstream system to develop an API that accepts a list of CustomerIds and gives the response in one go?
2. Or can we simply call the same API multiple times, passing a single CustomerId each time, and collect the responses?
Please advise for both a complex response set and a simple response set. Which would be better, considering performance and microservices?
I would go with option 1. This may be less RESTful, but it is more performant, especially if the list of CustomerIds is large. Following standards is certainly good, but sometimes the use case requires us to bend the standards a bit so that the system stays useful.
With option 2 you will most probably "waste" more time on the HTTP connection "dance" than on your actual use case of getting the data. Imagine having to call the same downstream service 50 times because you are required to retrieve the data for 50 CustomerIds.
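A back-of-envelope model of that overhead, with made-up numbers (20 ms of per-request connection overhead, 2 ms of real work per customer):

```java
// Illustrative latency model for option 1 (one batched call) vs option 2
// (n separate calls). The numbers are assumptions, not measurements.
public class BatchVsSingle {
    /** Total time for n separate calls: each pays the round-trip overhead. */
    public static double nCallsMs(int n, double overheadMs, double perCustomerMs) {
        return n * (overheadMs + perCustomerMs);
    }

    /** Total time for one batched call: the overhead is paid once. */
    public static double batchedMs(int n, double overheadMs, double perCustomerMs) {
        return overheadMs + n * perCustomerMs;
    }

    public static void main(String[] args) {
        System.out.printf("50 calls: %.0f ms%n", nCallsMs(50, 20, 2));  // 1100 ms
        System.out.printf("1 batch:  %.0f ms%n", batchedMs(50, 20, 2)); //  120 ms
    }
}
```

Even with these modest assumptions the batched call is roughly 9x faster; with TLS handshakes and higher network latencies, the gap widens.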
A bit of background:
We have around 10 Spring boot microservices, which communicate with each other via kafka. The logs of each microservice are sent to Kibana, and in case of any errors, we have to sift through Kibana logs.
The good thing is: at the start of any flow, a message-id is generated by one of our microservices, and that is propagated to all the others as part of the message transfer (which happens through kafka), so we can search for the message-id in the logs, and we can see the footprint of that flow across all our microservices.
The bad part: having to sift through tons of logs to get a basic idea of where things broke and why.
Now the Question:
So I was wondering if we can have distributed tracing implemented, maybe through Zipkin (or some other OpenTracing framework), that can work with the message-id our ecosystem already produces, instead of generating a new one?
Thank you for your time :)
I'm not entirely sure if that's what you mean, but you can use Jaeger (https://www.jaegertracing.io/), which checks whether a trace id already exists in the invocation metadata and, if so, generates a child span id under it. Call diagrams are then generated based on all the trace ids.
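The propagation rule being described can be sketched like this: reuse an incoming id as the trace id and only mint a new child span id. The header names here are made up; real tracers use standardized headers (e.g. uber-trace-id, b3, traceparent):

```java
import java.util.Map;
import java.util.UUID;

// Sketch of trace propagation: join an existing trace if the incoming
// metadata carries an id, otherwise start a new root trace.
public class TracePropagation {
    public record SpanContext(String traceId, String spanId, String parentSpanId) {}

    public static SpanContext extractOrStart(Map<String, String> headers) {
        String incoming = headers.get("message-id"); // the id your ecosystem already emits
        if (incoming != null) {
            // Join the existing trace: keep its id, create a child span.
            return new SpanContext(incoming, UUID.randomUUID().toString(),
                                   headers.get("span-id"));
        }
        // No trace yet: start a new one (the root span has no parent).
        return new SpanContext(UUID.randomUUID().toString(),
                               UUID.randomUUID().toString(), null);
    }
}
```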
For microservices, the common design pattern used is API-Gateway. I am a bit confused about its implementation and implications. My questions/concerns are as follows:
1. Why are other patterns for microservices not generally discussed? If they are, did I miss them?
2. If we deploy a gateway server, isn't it a bottleneck?
3. Isn't the gateway server vulnerable to crashes/failures due to excessive requests at a single point? I believe the load would be enormous there (keeping in mind that Netflix does something like this). Correct me if my understanding is wrong.
4. Will stream/download/upload data (like files, videos, images) also pass through the gateway server along with the other middleware services?
5. Why can't we use the proxy pattern instead of a gateway?
From my understanding, in an ideal environment a gateway server would handle the requests from clients and respond after the microservices have performed the due task.
Additionally, I was looking at Spring Cloud Gateway. It seems to be what I am looking for in a gateway server, but its routing functionality confuses me: is it just a routing (redirect) service, with the microservice directly responsible for the response to the client?
The gateway pattern is used to provide a single interface to a bunch of different microservices. If you have multiple microservices providing data for your API, you don't want to expose all of these to your clients. Much better for them to have just a single point of entry, without having to think about which service to poll for which data. It's also nice to be able to centralise common processing such as authentication. Like any design pattern, it can be applied very nicely to some solutions and doesn't work well for others.
If throughput becomes an issue, the gateway is very scalable. You can just add more gateways and load balance them.
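That load-balancing point can be illustrated with a toy round-robin picker over interchangeable gateway instances; in practice a DNS/L4/L7 load balancer does this job, and the instance names here are made up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Toy round-robin balancer: because gateway instances are stateless and
// interchangeable, spreading requests evenly across them is enough.
public class RoundRobin {
    private final List<String> instances;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobin(List<String> instances) { this.instances = instances; }

    /** Each call hands back the next instance in turn. */
    public String next() {
        int i = (int) (counter.getAndIncrement() % instances.size());
        return instances.get(i);
    }
}
```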
There are some subtle differences between proxy pattern and API gateway pattern. I recommend this article for a pretty straightforward explanation
https://blog.akana.com/api-proxy-or-gateway/
In the area of microservices, the API gateway is a proven pattern. It has several advantages, e.g.:
It encapsulates several edge functionalities (like authentication, authorization, routing, monitoring, ...)
It hides all your microservices and controls access to them (I don't think you want your clients to be able to access your microservices directly).
It may encapsulate the communication protocols required by your microservices (sometimes the services internally use a mixture of protocols that are only allowed within a firewall).
An API gateway may also provide "API composition" (orchestrating the calls to several services and merging their results into one). It is not recommended to implement such composition in a microservice itself.
and so on
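The "API composition" point above can be sketched like this: the gateway fans out two calls concurrently and merges the results into one response. The service suppliers and the Composite type are hypothetical stand-ins for real HTTP calls:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Sketch of API composition at the gateway: fetch from two services in
// parallel and merge their payloads into a single client-facing response.
public class ApiComposition {
    public record Composite(String profile, String posts) {}

    public static Composite fetchUserPage(Supplier<String> profileSvc,
                                          Supplier<String> postsSvc) {
        CompletableFuture<String> profile = CompletableFuture.supplyAsync(profileSvc);
        CompletableFuture<String> posts   = CompletableFuture.supplyAsync(postsSvc);
        // Wait for both calls and merge the results into one payload.
        return profile.thenCombine(posts, Composite::new).join();
    }
}
```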
Implementing all these features in a proxy is not trivial. There are a couple of API gateways which provide all these functionalities and more, like Netflix Zuul, Spring Cloud Gateway, or the Akana Gateway.
Furthermore, in order to keep your API gateway from becoming a bottleneck, you may:
Scale your API-Gateway and load balance it (as mentioned above by Arran_Duff)
Avoid providing a single one-size-fits-all API for all your clients. If you do, then in the case of a huge number of requests (or large files to download/upload) you will for sure encounter the problems you mention in questions 3 and 4. To mitigate such a situation, your gateway may provide each client with a client-specific API (an API gateway instance serving only a certain client type or business area). This is exactly what Netflix did to resolve this problem (see https://medium.com/netflix-techblog/embracing-the-differences-inside-the-netflix-api-redesign-15fd8b3dc49d).
1. Why are other patterns for microservices not generally discussed? If they are, did I miss them?
There are many microservice patterns under different categories, such as database, service, etc. This is a very good article: https://microservices.io/patterns/index.html
2. If we deploy a gateway server, isn't it a bottleneck?
Yes, to some extent. The image in the answer to question 3 addresses this.
3. Isn't the gateway server vulnerable to crashes/failures due to excessive requests at a single point? I believe the load would be enormous there (keeping in mind that Netflix does something like this). Correct me if my understanding is wrong.
4. Will stream/download/upload data (like files, videos, images) also pass through the gateway server along with the other middleware services?
5. Why can't we use the proxy pattern instead of a gateway?
The use case for an API proxy versus an API gateway depends on what kinds of capabilities you require and where you are in the API lifecycle. If you already have an existing API that doesn't require the advanced capabilities an API gateway can offer, then an API proxy would be the recommended route.
You can save valuable engineering bandwidth because proxies are much easier to maintain, and you will suffer only a negligible performance loss. If you need specific capabilities that a proxy doesn't offer, you could also develop an in-house layer to accommodate your use case. If you are earlier in the API lifecycle, or need the extra features an API gateway can provide, then investing in one would pay dividends.