Spring Boot user job/process monitoring

Using Spring Boot 2.0.3.
I have a set of Spring servers, and I need to find out whether a given server is currently processing a request that was sent to it. Only one of these requests is processed at a time. Depending on the options supplied, a request can exercise a good number of code paths; to support the different variations of the starting call there are about 30 different services and some other classes.
I need to be able to send some request to these servers and ask the question: are you working on one of these requests? The response can be a simple yes or no.
In trying to come up with an approach, it seems like the Spring Actuator might be the way to go. However, at least some of the material I have looked at treats it at more of a sysadmin level.
My question is how to approach this issue. Is the Actuator the best bet to achieve what I am looking for, and if not, what should I do? If possible I would like to avoid placing code in each service/class to see what is going on.
thanks
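One possible shape for the Actuator-based approach, sketched purely for illustration (the endpoint id "busy", the servlet filter, and the assumption that the long-running work happens on the request-handling thread are all assumptions, not taken from the question): a filter counts in-flight requests, and a custom Actuator endpoint answers yes/no, so none of the ~30 services need to be touched.

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Counts requests that are currently being processed, without touching the services themselves.
@Component
public class InFlightRequestFilter extends OncePerRequestFilter {

    static final AtomicInteger IN_FLIGHT = new AtomicInteger();

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Don't count calls to the actuator itself, otherwise "busy" would always be true.
        if (request.getRequestURI().startsWith("/actuator")) {
            chain.doFilter(request, response);
            return;
        }
        IN_FLIGHT.incrementAndGet();
        try {
            chain.doFilter(request, response);
        } finally {
            IN_FLIGHT.decrementAndGet();
        }
    }
}

// Answers GET /actuator/busy once "busy" is added to management.endpoints.web.exposure.include.
@Component
@Endpoint(id = "busy")
class BusyEndpoint {

    @ReadOperation
    public boolean busy() {
        return InFlightRequestFilter.IN_FLIGHT.get() > 0;
    }
}

If the processing is handed off to a background thread rather than running on the request thread, the counter would instead have to be incremented and decremented where that job starts and finishes.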

Related

Make Zipkin (or any open-tracing framework) work with existing "trace id"

A bit of background:
We have around 10 Spring boot microservices, which communicate with each other via kafka. The logs of each microservice are sent to Kibana, and in case of any errors, we have to sift through Kibana logs.
The good thing is: at the start of any flow, a message-id is generated by one of our microservices and propagated to all the others as part of the message transfer (which happens through Kafka). So we can search for the message-id in the logs and see the footprint of that flow across all our microservices.
The bad part: having to sift through tons of logs to get a basic idea of where things broke and why.
Now the Question:
So I was wondering if we can have some distributed tracing implemented, maybe through Zipkin (or some other open-tracing framework), that can work with the message-id our ecosystem already produces instead of generating a new one?
Thank you for your time :)
I'm not entirely sure if that's what you mean, but you can use Jaeger (https://www.jaegertracing.io/), which checks whether a trace id already exists in the invocation metadata and, if so, generates a child span under it. Call diagrams are then generated from all the trace ids.
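For illustration, a minimal sketch of how a trace context carried in Kafka headers can be picked up as the parent context using the OpenTracing API with a Jaeger tracer. The header handling, the service name and the span name are assumptions for this example; in practice instrumentation libraries (e.g. Spring Cloud Sleuth or the OpenTracing Kafka instrumentation) do this extraction for you.

import io.jaegertracing.Configuration;
import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.propagation.Format;
import io.opentracing.propagation.TextMapAdapter;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

import java.util.HashMap;
import java.util.Map;

public class TracedConsumer {

    // Jaeger tracer configured from the JAEGER_* environment variables.
    private final Tracer tracer = Configuration.fromEnv("comment-service").getTracer();

    public void consume(ConsumerRecord<String, String> record) {
        // Copy the Kafka headers into a plain map so the tracer can read them.
        Map<String, String> carrier = new HashMap<>();
        for (Header header : record.headers()) {
            carrier.put(header.key(), new String(header.value()));
        }

        // If the upstream service injected a trace context, extract() returns it;
        // otherwise it is null and a brand-new trace is started.
        SpanContext parent = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(carrier));

        Span span = tracer.buildSpan("process-message").asChildOf(parent).start();
        try {
            // ... business logic ...
        } finally {
            span.finish();
        }
    }
}

Note that Jaeger/Zipkin trace ids have a fixed hex format, so an existing business message-id usually ends up propagated as baggage or a span tag rather than literally becoming the trace id.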

rate limiting and throttling in java

I need to implement a rate limiter / throttling in one of my microservices.
For example, I have one User microservice that handles login and returns user data based on role (Admin or normal user), implemented using JWT tokens and the @Secured annotation. My ask is to throttle based on which API is being called, and I should be able to modify the throttle limit at runtime too.
I don't want to re-invent the wheel, so any ideas please?
Technology stack: Java, Spring Boot
The answer to this surely depends on what you mean by throttling.
If you want to throttle the data returned by an API based on role for some period of time, you could achieve this simply by using Spring Boot's cache support. You can control the cache evict time in the Spring Boot app (even if you want to externalize the configuration).
Please have a look at https://spring.io/guides/gs/caching/. Also, have a look at the demonstration at https://www.youtube.com/watch?v=nfZxXGjXVfc if required.
I am not putting in the details of how caching is done, as it's very well explained in the Spring Boot docs. You might have to tune it according to your needs, but this is the first answer to your controlled throttling.
If you want to throttle the API endpoint itself, or throttle the amount of data it can serve (i.e. control the number of requests it can serve per second, etc.), then you could use RateLimiter from Guava, as sketched below.
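A minimal sketch of the Guava approach (the limit of 10 permits per second and the wrapper class are made-up examples, not part of the original answer):

import com.google.common.util.concurrent.RateLimiter;

public class EndpointThrottle {

    // Allow roughly 10 calls per second to the endpoint this wraps.
    private final RateLimiter limiter = RateLimiter.create(10.0);

    public boolean tryHandle(Runnable handler) {
        // tryAcquire() does not block: it reports whether a permit is available
        // right now, so the caller can return HTTP 429 instead of waiting.
        if (limiter.tryAcquire()) {
            handler.run();
            return true;
        }
        return false;
    }

    public void changeLimit(double permitsPerSecond) {
        // setRate() can be called at runtime, which covers the requirement to
        // modify the throttle limit without restarting the service.
        limiter.setRate(permitsPerSecond);
    }
}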
Also, I managed to find another one, probably more relevant if you are using Spring Boot: weddini/spring-boot-throttling.
The second approach seems to fit what you need better.
Hope it helps!
I have implemented a rate limiter based on the token-bucket algorithm. The other related technologies are Spring Boot, Spring Data Redis and Lua.
Hope it can be helpful.
https://github.com/AllstarVirgo/rateLimit
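The linked repository is not reproduced here, but a rough sketch of the same idea (all details below are assumptions for illustration, not taken from that project): a Lua script keeps the bucket state in a Redis hash so the refill-and-consume step is atomic across containers, and Spring Data Redis executes it.

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

import java.util.Collections;

public class RedisTokenBucket {

    // Atomically refill the bucket based on elapsed time, then try to take one token.
    // KEYS[1] = bucket key; ARGV = capacity, refill rate per second, current time in ms.
    private static final String LUA =
        "local data = redis.call('HMGET', KEYS[1], 'tokens', 'ts')\n" +
        "local capacity = tonumber(ARGV[1])\n" +
        "local rate = tonumber(ARGV[2])\n" +
        "local now = tonumber(ARGV[3])\n" +
        "local tokens = capacity\n" +
        "local ts = now\n" +
        "if data[1] then tokens = tonumber(data[1]) end\n" +
        "if data[2] then ts = tonumber(data[2]) end\n" +
        "tokens = math.min(capacity, tokens + (now - ts) / 1000 * rate)\n" +
        "local allowed = 0\n" +
        "if tokens >= 1 then tokens = tokens - 1; allowed = 1 end\n" +
        "redis.call('HMSET', KEYS[1], 'tokens', tokens, 'ts', now)\n" +
        "redis.call('EXPIRE', KEYS[1], 3600)\n" +
        "return allowed";

    private final StringRedisTemplate redis;
    private final DefaultRedisScript<Long> script = new DefaultRedisScript<>(LUA, Long.class);

    public RedisTokenBucket(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public boolean tryConsume(String key, int capacity, int refillPerSecond) {
        Long allowed = redis.execute(script, Collections.singletonList(key),
                String.valueOf(capacity), String.valueOf(refillPerSecond),
                String.valueOf(System.currentTimeMillis()));
        return allowed != null && allowed == 1L;
    }
}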

Microservice requests

I'm trying to start a little microservice application, but I'm a little bit stuck on some technicalities.
I'm trying to build an issue tracker application as an example.
It has 2 database tables, issues and comments. These will also be separate microservices, for the sake of the example.
It has to be a separate API that can be consumed by multiple types of clients, e.g. mobile, web, etc.
When using a monolithic approach, all the codebase is coupled together, and when making a request to, let's say, the REST API, I would handle for example the '/issues/19' request
to fetch the issue with the id '19' and its corresponding comments by means of something like the following pseudocode.
// handler for the route '/issues/<id>'
Issue onRequestIssue(long id) {
    Issue issue = issuesModel.findById(id);
    issue.setComments(commentsModel.findByIssueId(id));
    return issue;
}
But I'm not sure how I should approach this with microservices. Let's say we have microservice-issues and microservice-comments.
I could let the client send a request to both '/issues/19' and '/comments/byissueid/19', but that doesn't seem nice to me: as soon as a page needs multiple things, we end up sending a lot of requests for one page.
I could also make a request to microservice-issues and, from within it, make a request to microservice-comments, but that looks even worse to me than the above, since from what I've read microservices should not be coupled, and this couples them pretty hard.
So then I read about API gateways, which could/should receive a request and fan out to the other microservices, but then I couldn't really figure out how to use an API gateway. Should I write code in there, for example, to catch the '/issues/19' request, then fan out to both microservice-issues and microservice-comments, assemble the results and return them?
In that case, I feel I'm doing the work twice; won't the API gateway become a new monolith then?
Thank you for your time
An API gateway sounds like what you need.
If you keep it simple, just triggering the internal APIs, it will not become your new monolith.
It will allow you to do even better processing when your application grows with new microservices, or when you have to support different clients (browser, mobile apps, watch, IoT, etc.).
BTW, the example you show sounds like a good exercise; in reality, for most webapps, it looks like over-design. I would not break every DB call into its own microservice.
One of the motivations for breaking something into small(er) services is service autonomy. In this case the question is: when the comments service is down, should you display the issue or not? If they are always coupled anyway, they probably shouldn't reside in two services; if they aren't, then making two calls will give you that decoupling.
That said, you may still need an API gateway to solve CORS issues with your client.
Lastly, comments/byissueid is not a good REST interface; the issueId should be a query parameter: /comments/?issueId=..
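To make the fan-out idea concrete, here is a rough sketch of an aggregating endpoint on the gateway (the service hostnames and the use of RestTemplate are assumptions for illustration; a real gateway would typically add service discovery, timeouts and fallbacks):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import java.util.LinkedHashMap;
import java.util.Map;

@RestController
public class IssueGatewayController {

    private final RestTemplate rest = new RestTemplate();

    // Fan out to both backend services and assemble a single response for the client.
    @GetMapping("/issues/{id}")
    public Map<String, Object> issueWithComments(@PathVariable long id) {
        Object issue = rest.getForObject("http://microservice-issues/issues/" + id, Object.class);
        Object comments = rest.getForObject("http://microservice-comments/comments/?issueId=" + id, Object.class);

        Map<String, Object> result = new LinkedHashMap<>();
        result.put("issue", issue);
        result.put("comments", comments);
        return result;
    }
}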

Ways to update config properties for a Spring boot rest service

I have a Spring Boot REST service whose configuration values are stored in Git and fetched through a config server. Deployment is done in a Docker Swarm cluster where this service runs across multiple containers. So one thing I had to keep in mind is that when Actuator's refresh endpoint is called, it should refresh all the containers for this service seamlessly and not just some random container. This is quite an obvious ask, I believe.
I could implement updating the config values for a service as and when its config changes in Git by using a message broker. However, that would take time, and time is not with me at the moment.
I have come up with two quick solutions and would like your help, based on your experience, as to which one is better than the other. Keep in mind that both work and I tested them both.
Solution 1
Create a scheduler using @Scheduled in Application.java and keep pinging Actuator's refresh endpoint every 5 seconds. I think this is really expensive and resource intensive in production.
Solution 2
Call Actuator's refresh endpoint in the controller method itself. This way, I call the refresh endpoint on demand instead of polling it like solution 1 and being wasteful. It also ensures that whichever container is picked to service a request refreshes itself, since the refresh call refreshes only the properties referenced by that container.
Do you have a preference for one over the other? Do you see any pros and cons with these solutions? Which one would you pick and why?
Please let me know what your thoughts are.
This sounds like an interesting problem. Also, as you pointed out, Solution 1 is resource intensive and should not be used in production. If you are running out of time, I would suggest you go ahead with Solution 2; it is smarter than the former.
However, I think the optimal way to solve this problem is to use GitHub webhooks. That way GitHub will make an API call to a predefined endpoint of yours whenever a specific event is generated. Events are the core of GitHub webhooks; here is the list of all GitHub events, so choose the one that best suits your requirement: https://developer.github.com/webhooks/#events
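On the service side, that could look roughly like the sketch below (the /config-refresh path is made up, and ContextRefresher from spring-cloud-context is assumed to be on the classpath; note that with several Swarm replicas the webhook still reaches only one container, so something like Spring Cloud Bus is needed to propagate the refresh to all of them):

import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Set;

@RestController
public class ConfigWebhookController {

    private final ContextRefresher contextRefresher;

    public ConfigWebhookController(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    // GitHub calls this endpoint on a push event; re-fetch config from the
    // config server and rebind @ConfigurationProperties / @RefreshScope beans.
    @PostMapping("/config-refresh")
    public Set<String> onConfigPush() {
        return contextRefresher.refresh();
    }
}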

Rate-Limit an API (spring MVC)

I'm looking for the best and most efficient way to implement (or use an already available) rate limiter that would protect all my REST API URLs. The protection I'm looking at is a "calls per second per user" limit.
I had a look on the net, and what came out of it was to use either Redis or Guava's RateLimiter.
To be honest, I have never used Redis and I'm really not familiar with it. But looking at its docs, it seems to have a quite robust rate limiting story.
I have also had a look at Guava's RateLimiter, and it looks a bit easier to use (no need for a Redis installation, etc.).
So I would like some suggestions on what would be, in my case, the best solution. Is using Redis "too much"?
Have any of you already tried RateLimiter? Is it a good solution? Is it scalable?
PS: I am also open to solutions other than the two mentioned above if you think there are better choices.
Thank you!
If you are trying to limit access to your Spring-based REST API, you should use the token-bucket algorithm.
There is the bucket4j-spring-boot-starter project, which uses the bucket4j library to rate-limit access to the REST API. You can configure it via the application properties file, and there is an option to limit access based on IP address or username.
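For a feel of the underlying library, a minimal sketch of plain Bucket4j (this uses the classic builder API and made-up limit values, not the starter's property-based configuration; newer Bucket4j versions expose Bucket.builder() instead):

import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Bucket4j;

import java.time.Duration;

public class ApiBucket {

    // Token bucket holding at most 5 tokens, refilled at 5 tokens per second.
    private final Bucket bucket = Bucket4j.builder()
            .addLimit(Bandwidth.simple(5, Duration.ofSeconds(1)))
            .build();

    public boolean tryConsume() {
        // One token per request; false means the caller should answer with HTTP 429.
        return bucket.tryConsume(1);
    }
}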
If you are using Netflix Zuul, you could use Spring Cloud Zuul RateLimit, which supports different storage options: Consul, Redis, Spring Data and Bucket4j.
Guava's RateLimiter blocks the current thread, so if there's a burst of asynchronous calls against the throttled service, lots of threads will be blocked and the pool of free threads might be exhausted.
Perhaps the Spring-based library Kite meets your needs. Kite's "rate-limiting throttle" rejects requests after the principal reaches a configurable limit on the number of requests in some time period. The rate limiter uses Spring Security to determine the principal involved.
But Kite is still a single-JVM approach. If you do need a cluster-aware approach, Redis is the way to go.
There is no hard rule; it totally depends on your specific situation. Given that you have never used Redis, I would recommend Guava's RateLimiter: compared to Redis, a completely new NoSQL system for you, Guava's RateLimiter is much easier to get started with. By adding a few lines of code, you are able to hand out permits at a configurable rate. What's left to do is adapt it to your needs, like providing the rate limit on a per-user basis, as sketched below.
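A sketch of what that per-user adaptation might look like as a Spring MVC interceptor (the 5-permits-per-second limit, the fallback to the client IP, and the interceptor itself are assumptions for illustration):

import com.google.common.util.concurrent.RateLimiter;
import org.springframework.web.servlet.HandlerInterceptor;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerUserRateLimitInterceptor implements HandlerInterceptor {

    // One limiter per user, created lazily; 5 permits per second each.
    private final Map<String, RateLimiter> limiters = new ConcurrentHashMap<>();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        String user = request.getUserPrincipal() != null
                ? request.getUserPrincipal().getName()
                : request.getRemoteAddr();

        RateLimiter limiter = limiters.computeIfAbsent(user, u -> RateLimiter.create(5.0));
        if (limiter.tryAcquire()) {
            return true;
        }
        response.setStatus(429); // Too Many Requests
        return false;
    }
}

It would still need to be registered through a WebMvcConfigurer, and unlike the Redis-based options, the counters live in a single JVM, so each instance enforces its own limit.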
