Rate limiting and throttling in Java - Spring Boot

I need to implement a rate limiter / throttling in one of my microservices.
For example, I have a User microservice that handles login and returns user data based on role (Admin or normal user), implemented using JWT tokens and the @Secured annotation. My ask is to throttle based on which API is being called, and I should also be able to modify the throttle limit at runtime.
I don't want to re-invent the wheel, so any ideas please?
Technology stack: Java, Spring Boot

The answer to this surely depends on what you mean by throttling.
If you want to throttle the data returned by an API based on role for some period of time, you can achieve this simply with Spring Boot's cache support. You can control the cache evict time in the Spring Boot app (and externalize that configuration if you want).
Please have a look at https://spring.io/guides/gs/caching/. Also, have a look at https://www.youtube.com/watch?v=nfZxXGjXVfc for a demonstration if required.
I am not putting in the details of how caching is done, as it is very well explained in the Spring Boot docs. You might have to tune it according to your needs, but this is the first answer to your controlled throttling.
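For reference, a minimal sketch of what the cache-based approach could look like; the service name, cache name, and eviction setup below are assumptions, not anything from your code:

```java
import java.util.Collections;
import java.util.Map;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical service: the result for a given role is cached, so repeated calls
// within the eviction window are served from the cache instead of the data source.
// Requires @EnableCaching on a configuration class and a configured cache provider.
@Service
public class UserDataService {

    // "userData" is an assumed cache name; its TTL/eviction policy is configured
    // in the cache provider (Caffeine, Redis, ...) and can be externalized.
    @Cacheable(value = "userData", key = "#role")
    public Map<String, Object> loadUserData(String role) {
        // Expensive lookup, executed only on a cache miss.
        return Collections.singletonMap("role", role);
    }
}
```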
If you want to throttle the API endpoint itself or the amount of traffic it serves, i.e. control the number of requests it handles per second, then you could use RateLimiter from Guava.
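A minimal sketch of the Guava approach guarding a single endpoint at a fixed rate (the endpoint path and the limit are made up for illustration):

```java
import com.google.common.util.concurrent.RateLimiter;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    // Allow roughly 10 requests per second across all callers (assumed limit).
    private final RateLimiter rateLimiter = RateLimiter.create(10.0);

    @GetMapping("/users/me")
    public ResponseEntity<String> currentUser() {
        // tryAcquire() does not block; reject immediately when the budget is spent.
        if (!rateLimiter.tryAcquire()) {
            return ResponseEntity.status(HttpStatus.TOO_MANY_REQUESTS).body("Too many requests");
        }
        return ResponseEntity.ok("user data");
    }
}
```

Since you also want to change the limit at runtime, note that RateLimiter.setRate(double) can adjust the rate on a live limiter.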
I also managed to find another one, probably more relevant if you are using Spring Boot: weddini/spring-boot-throttling.
It seems like the second approach fits what you need more closely.
Hope it helps!

I have implemented a rate limiter based on the token-bucket algorithm. The other related technologies are Spring Boot, Spring Data Redis, and Lua.
Hope it can be helpful.
https://github.com/AllstarVirgo/rateLimit
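I have not reviewed that repository, but the general shape of a Redis-backed token bucket in Spring looks roughly like this; the Lua script, key naming, and limits below are illustrative and not taken from the project:

```java
import java.time.Instant;
import java.util.Collections;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;
import org.springframework.stereotype.Component;

// Illustrative token-bucket check: tokens refill continuously, one token is
// consumed per request, and the whole read-modify-write runs atomically in Redis.
@Component
public class RedisTokenBucket {

    private static final String LUA =
        "local key = KEYS[1] " +
        "local capacity = tonumber(ARGV[1]) " +
        "local refill = tonumber(ARGV[2]) " +
        "local now = tonumber(ARGV[3]) " +
        "local state = redis.call('HMGET', key, 'tokens', 'ts') " +
        "local tokens = tonumber(state[1]) or capacity " +
        "local ts = tonumber(state[2]) or now " +
        "tokens = math.min(capacity, tokens + (now - ts) * refill) " +
        "local allowed = 0 " +
        "if tokens >= 1 then tokens = tokens - 1 allowed = 1 end " +
        "redis.call('HMSET', key, 'tokens', tokens, 'ts', now) " +
        "redis.call('EXPIRE', key, 3600) " +
        "return allowed";

    private final StringRedisTemplate redis;
    private final DefaultRedisScript<Long> script;

    public RedisTokenBucket(StringRedisTemplate redis) {
        this.redis = redis;
        this.script = new DefaultRedisScript<>(LUA, Long.class);
    }

    /** Returns true if the caller identified by {@code key} may proceed. */
    public boolean tryConsume(String key, int capacity, double refillPerSecond) {
        Long allowed = redis.execute(script,
                Collections.singletonList("ratelimit:" + key),
                String.valueOf(capacity),
                String.valueOf(refillPerSecond),
                String.valueOf(Instant.now().getEpochSecond()));
        return allowed != null && allowed == 1L;
    }
}
```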

Related

How to scale one specific REST API without microservices?

I am developing a web application backend with Spring where the client and server talk through RESTful APIs. There is one specific API where I expect the traffic to be high. Is there any way to scale this specific API (like assigning more threads)?
In this application everything is interdependent, so I guess microservices won't be the best approach.
There are two possible ways I can think of:
1. Use a load balancer. This lets you add multiple application instances of the REST API and is the classical approach in such cases.
2. Depending on the existing implementation, the API can be refactored to just receive the message and decouple the processing onto a separate thread.
Your suggested way of increasing threads has limitations and requires more fine tuning. If the use case is just to support a limited number of users, the Tomcat thread pool can be configured, as in the sketch below.
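If you do go the thread-tuning route, a minimal sketch of raising the Tomcat worker pool in Spring Boot could look like this; the numbers are arbitrary, and depending on the Spring Boot version the same can usually be done with the server.tomcat.* properties instead:

```java
import org.apache.coyote.AbstractProtocol;
import org.apache.coyote.ProtocolHandler;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatThreadPoolConfig {

    // Raises the Tomcat worker thread pool for the whole application; a single
    // endpoint cannot be given its own pool this way without extra work.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatCustomizer() {
        return factory -> factory.addConnectorCustomizers(connector -> {
            ProtocolHandler handler = connector.getProtocolHandler();
            if (handler instanceof AbstractProtocol) {
                ((AbstractProtocol<?>) handler).setMaxThreads(400);     // assumed value
                ((AbstractProtocol<?>) handler).setMinSpareThreads(50); // assumed value
            }
        });
    }
}
```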
Just have multiple instances of the same service. REST has a statelessness constraint, so this is easy to do.

Spring Boot user job/process monitoring

Using Spring Boot 2.0.3.
I have a set of Spring servers, and I need to find out whether a server is currently processing a request sent to it. Only one of these requests is processed at a time. Depending on the options, the request can cause a good number of code paths to be used; to support the different variations of the starting call there are about 30 different services and some other classes.
I need to be able to send some request to these servers and ask the question: are you working on one of these requests? The response can be a simple yes or no.
In trying to come up with an approach, it seems like Spring Actuator might be the way to go. However, at least some of the material I have looked at makes it seem aimed more at a sysadmin level.
My question is how to approach this issue. Is the Actuator the best bet to achieve what I am looking for, and if not, what should I do? If possible I would like to avoid placing code in each service/class to see what is going on.
Thanks
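For what it's worth, a custom Actuator endpoint is one way to expose such a yes/no answer without touching the 30-odd services. This is only a sketch under the assumption of a single, hypothetical in-memory flag that the request-handling entry point flips on and off:

```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// Exposed at /actuator/workstatus once "workstatus" is added to
// management.endpoints.web.exposure.include.
@Component
@Endpoint(id = "workstatus")
public class WorkStatusEndpoint {

    // Hypothetical flag: set to true when the single long-running request starts
    // and back to false when it finishes (e.g. in the controller or a filter).
    private final AtomicBoolean busy = new AtomicBoolean(false);

    public void markBusy(boolean value) {
        busy.set(value);
    }

    @ReadOperation
    public Map<String, Boolean> status() {
        return Collections.singletonMap("processing", busy.get());
    }
}
```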

Rate-limit an API (Spring MVC)

I'm looking for the best and most efficient way to implement (or use an already set up) rate limiter that would protect all my REST API URLs. The protection I'm looking for is a "calls per second per user" limiter.
I had a look on the net, and what came up was the use of either Redis or Guava's RateLimiter.
To be honest, I have never used Redis and I am really not familiar with it. But looking at its docs, it seems to have a quite robust rate-limiting system.
I have also had a look at Guava's RateLimiter, and it looks a bit easier to use (no need for a Redis installation, etc.).
So I would like some suggestions on what would be the best solution in my case. Is using Redis too much?
Have any of you already tried RateLimiter? Is it a good solution? Is it scalable?
PS: I am also open to other solutions than the two I mentioned above if you think there are better choices.
Thank you!
If you are trying to limit access to your Spring-based REST API, you should use the token-bucket algorithm.
There is the bucket4j-spring-boot-starter project, which uses the bucket4j library to rate-limit access to the REST API. You can configure it via the application properties file. There is an option to limit the access based on IP address or username.
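If you prefer to wire Bucket4j in by hand rather than through the starter, the core API is roughly the following; the numbers are illustrative and the exact builder names vary a little between Bucket4j versions:

```java
import java.time.Duration;

import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Refill;

public class Bucket4jExample {

    public static void main(String[] args) {
        // 20 requests per minute, refilled gradually (illustrative numbers).
        Bandwidth limit = Bandwidth.classic(20, Refill.greedy(20, Duration.ofMinutes(1)));
        Bucket bucket = Bucket.builder().addLimit(limit).build();

        // In a real application there would be one bucket per user or IP,
        // typically kept in a map or, for a cluster, a distributed store.
        if (bucket.tryConsume(1)) {
            System.out.println("request allowed");
        } else {
            System.out.println("rate limit exceeded");
        }
    }
}
```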
If you are using Netflix Zuul you could use Spring Cloud Zuul RateLimit which uses different storage options: Consul, Redis, Spring Data and Bucket4j.
Guava's RateLimiter blocks the current thread, so if there is a burst of asynchronous calls against the throttled service, a lot of threads will be blocked and the pool of free threads might be exhausted.
Perhaps Spring-based library Kite meets your needs. Kite's "rate-limiting throttle" rejects requests after the principal reaches a configurable limit on the number of requests in some time period. The rate limiter uses Spring Security to determine the principal involved.
But Kite is still a single-JVM approach. If you do need a cluster-aware approach Redis is a way to go.
There is no hard rule; it totally depends on your specific situation. Given that you "have never used Redis", I would recommend Guava's RateLimiter. Compared to Redis, a completely new NoSQL system for you, Guava's RateLimiter is much easier to get started with. By adding a few lines of code you are able to hand out permits at a configurable rate. What is left is to adapt it to fit your needs, like providing a rate limit on a per-user basis, as in the sketch below.
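A rough per-user adaptation, keeping one RateLimiter per principal; the limit and the way the username is obtained are assumptions:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import com.google.common.util.concurrent.RateLimiter;

// One RateLimiter per user; computeIfAbsent creates it lazily on first use.
public class PerUserRateLimiter {

    private static final double PERMITS_PER_SECOND = 5.0; // assumed limit

    private final ConcurrentMap<String, RateLimiter> limiters = new ConcurrentHashMap<>();

    /** Returns true if the given user is still within their per-second budget. */
    public boolean tryAcquire(String username) {
        RateLimiter limiter = limiters.computeIfAbsent(
                username, user -> RateLimiter.create(PERMITS_PER_SECOND));
        return limiter.tryAcquire();
    }
}
```

In a Spring MVC application this would typically be called from a HandlerInterceptor or servlet filter, with the username taken from the security context. Note that the map only covers a single JVM, which is the same single-node limitation mentioned above for Kite; for a cluster you would need Redis or similar.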

Using the geocoder gem to make more requests

I am working with the geocoder gem and would like to process a larger number of requests from one IP. By default, the Google API provides only 2,500 requests per day.
Please share your thoughts on how I can make more requests than the limit allows.
As stated before: using only the Google API, the only way around the limitation is to pay for it, or, in a more shady way, to make the requests from more than one IP/API key, which I would not recommend.
But to stay on the safe side, I would suggest mixing services, since there are a few more geocoding APIs out there, for free.
With the right gem mixing them is also not a big issue:
http://www.rubygeocoder.com/
It supports a couple of them with a nice interface. You would pretty much only have to add some rate-limiting counters to make sure you stay within the limits of each provider.
Or go the heavyweight route of implementing your own geocoding, for example with your own running OpenStreetMap database. The database can be downloaded here: http://wiki.openstreetmap.org/wiki/Planet.osm#Worldwide_data
Which way is best depends on what your actual requirements are and what resources you have available.

Usage of observer pattern with EJB and AJAX

I want to build an Ajax GUI that is notified of any state changes happening in my EJB application. To achieve this, I thought I would build a stateful EJB (3.0) that implements the Observable interface, to which the Ajax client is added as an observer.
First, is this possible with Ajax? If yes, is this a good design idea, or is there a more appropriate way to do this?
Thanks in advance!
Cheers,
Andreas
It sounds like you are interested in 'Reverse Ajax', where the client is notified when an event happens server side. This is different from standard Ajax, where an asynchronous request is sent to the server based on some client action. Reverse Ajax is possible, and one framework that does a very good job of allowing this and simplifying the underlying complexity is DWR.
http://directwebremoting.org/dwr/reverse-ajax
You'll want to read up on the performance implications of the various ways to implement based on your expected load, webapp container, etc. regardless of which framework you use.
As for whether or not it is good practice, that really depends on your application. If it is important to get near-real time data pushed back to the client and you don't want to use something like Flex or other heavier frameworks, then I'd say you are on the right track. If the data does not need to be real time, or if your load is extremely high, then perhaps a more simple approach like a scheduled page refresh will save you some complexity and help with performance.
Now, some time later, there is a new possible answer to your question: the usage of WebSockets.
From the website previously linked by Pete: "The web was not designed to allow web servers to make connections to web browsers..." That is changing now with HTML5.
http://en.wikipedia.org/wiki/WebSockets
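For completeness, the server side of the WebSocket approach can be as small as a JSR 356 endpoint; the endpoint path and message format here are made up:

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Browsers connect to ws://host/notifications; the EJB layer (or any server-side
// code observing state changes) calls broadcast(...) to push updates to them.
@ServerEndpoint("/notifications")
public class NotificationEndpoint {

    private static final Set<Session> SESSIONS = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) {
        SESSIONS.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.remove(session);
    }

    public static void broadcast(String message) throws IOException {
        for (Session session : SESSIONS) {
            if (session.isOpen()) {
                session.getBasicRemote().sendText(message);
            }
        }
    }
}
```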
