Spring Boot Soap Service Performance Question - spring-boot

I am load testing a stub I created in soapUI. My stub is very simple: it takes a request and then does a **Thread.sleep(X)**, where X is picked up from an application.properties file. It is a Spring Boot SOAP service, and my question is this.
If X is around 1 ms, my application handles thousands of TPS (transactions per second). If I change that Thread.sleep to something like 10 seconds, it only handles about 5 TPS. What is the limiting factor that is causing the degradation?
I have plenty of threads, CPU, and memory available. How can I make it utilize all of my resources and achieve a higher TPS, while still mimicking a delay?
Thanks,
Brian
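
For reference, a minimal sketch of the kind of stub described above, assuming a plain Spring service method and a hypothetical `stub.delay-ms` property (the names are placeholders, not the original code):

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
public class DelayStubService {

    // Hypothetical property name; the delay X is read from application.properties.
    @Value("${stub.delay-ms:1}")
    private long delayMs;

    public String handle(String request) throws InterruptedException {
        // Each request blocks a servlet worker thread for the whole delay, so with a
        // fixed-size worker pool the throughput is capped at roughly
        // (worker threads / delay in seconds), no matter how idle the CPU and memory are.
        Thread.sleep(delayMs);
        return "stubbed response";
    }
}
```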

Related

Spring boot Netflix zuul delays request to microservice

I am using Spring Boot Netflix Zuul router version 2.1.1. We are doing a performance test with two services, and both services have a stubbed response. Our requirement is to test both services concurrently at a TPS of 15, and the response time of the stub is 8 seconds. I checked the elapsed time for both services and found that both take about 8.4 seconds to respond. When we raised the TPS to 10.5, one of the services reached an average response time of 11.5 seconds and the second took about 9.5 seconds. Our target is 15 TPS, and we want to test the services up to 40 TPS to find their limit. I suspect the Zuul Netflix router: when I tweaked the Tomcat thread count to 100, I saw an improvement from 12.5 to 11.5 seconds. Please let me know what I am missing and what I should do to improve performance. I am not using Eureka; I use routes to connect. Currently the limit is 200 with a refresh limit of 1 second.

Slow First call rest API springboot Jersey Eureka

My application is a REST API built with Spring Boot + Jersey, based on a microservice architecture with Eureka.
The first call after the instance starts is slow compared to the following calls (even though I stubbed the results).
Example: the first call takes about 500 ms, the other calls about 60-100 ms.
Can anyone help me to resolve this issue?
Most likely, this is related to JVM warm up. From https://www.baeldung.com/java-jvm-warmup:
The first request made to a Java web application is often substantially slower than the average response time during the lifetime of the process. This warm-up period can usually be attributed to lazy class loading and just-in-time compilation.
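
One common mitigation (a sketch, not part of the answer above) is to fire a few warm-up requests right after startup so that lazy class loading and JIT compilation happen before real traffic arrives; the port and path below are placeholders:

```java
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class WarmUpRunner implements ApplicationRunner {

    private final RestTemplate restTemplate = new RestTemplate();

    @Override
    public void run(ApplicationArguments args) {
        // Runs after the embedded server has started; hit the service's own endpoint
        // a few times so the first real request doesn't pay the warm-up cost.
        for (int i = 0; i < 10; i++) {
            try {
                restTemplate.getForObject("http://localhost:8080/api/ping", String.class);
            } catch (Exception ignored) {
                // Failures during warm-up are not important.
            }
        }
    }
}
```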

How to ensure my Reactive application is running in event loop style

I am using Spring Boot 2.0.4.RELEASE. My doubt is whether my application is running in event-loop style or not. I am using Tomcat as my server.
I am running some performance tests on my application, and after a certain point I see strange behaviour. Once the load reaches 500 req/second, my application is not able to serve more than 500 req/second. Via Prometheus I was able to figure out that the max thread count for Tomcat is 200 by default. It looks like all the threads were consumed, and that's why it could not serve more than 500 req/second. Please correct me if I am wrong.
Can the Tomcat server run in event-loop style?
How can I change the event-loop size for the Tomcat server, if that is possible?
I tried changing to Jetty; still the same issue. I am wondering whether my application is running in event-loop style.
Hey, I think you are doing something wrong in your project; maybe one of your dependencies does not support reactive programming. If you want to benefit from async (reactive) programming, your code must be 100% reactive; even for security you must use reactive Spring Security.
Normally a reactive Spring application will run on Netty, not on Tomcat, so check your dependencies, because Tomcat is not reactive.
This is more of an analysis. After running some performance tests on my local machine, I was able to figure out what was actually happening inside my application.
What I did was run the performance tests on my local machine and analyse the application through JConsole.
As I said, I scheduled all my blocking DB calls on Schedulers.elastic(). What I realised is that this was causing the bottleneck: my DB connections are limited and I am using HikariCP for connection pooling, so the number of threads I create on the elastic pool doesn't matter.
Reactive programming is about consuming resources to the fullest with a smaller number of threads, but since the threads were being created in an unbounded way, it was no different from a normal application.
So, as part of the resolution, I limited the number of threads used for DB calls to 100. And bang, the number jumped from 500 TPS to 2300 TPS.
I know this is not the number one should expect from a reactive application; it is capable of much more. But right now I have no choice but to bear with non-reactive drivers, waiting for production-grade availability of reactive drivers for MS SQL Server.
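
A sketch of the kind of fix described above, assuming a Reactor-based service with blocking JDBC calls behind HikariCP: instead of the unbounded `Schedulers.elastic()`, the blocking work is off-loaded to a pool capped near the connection-pool size (the cap of 100 and the helper name are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class BlockingDbAdapter {

    // Cap the threads used for blocking DB calls; creating them in an unbounded way
    // only piles up threads waiting on the limited Hikari connections.
    // (Newer Reactor versions offer Schedulers.newBoundedElastic(...) for the same purpose.)
    private static final Scheduler DB_SCHEDULER =
            Schedulers.fromExecutorService(Executors.newFixedThreadPool(100));

    // Wraps any blocking call (e.g. a JDBC query) so request threads are never blocked.
    public <T> Mono<T> offload(Callable<T> blockingCall) {
        return Mono.fromCallable(blockingCall)
                   .subscribeOn(DB_SCHEDULER);
    }
}
```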

How to achieve high TPS with Spring Boot

I am working on an application (banking) which has a TPS requirement of 100 and multiple concurrent users.
Will Spring Boot 1.x.x allow me to achieve this?
Note: I would have used Spring Boot 2.x.x, which supports the reactive paradigm, but there is some legacy code that I have to use, and it does not work on 2.x.x.
You can hit these numbers running a Java application on any reasonable hardware. LMAX claims that Disruptor can do over 100k TPS with 1ms latency. Spring Boot, or Java in general, won't be the limiting factor.
The problem will be the business requirements. If your application has to produce complex reports from an over-utilised database located in another data centre, well, just the packet round trip from CA to the Netherlands is 150 ms. If your SQL queries take 30+ seconds, you are toast.
You can take a look at Tuning Tomcat For A High Throughput, Fail Fast System. It gives good insight into what can be tuned in a standard Tomcat deployment (assuming you will use Tomcat in Spring Boot). However, it's unlikely that HTTP connections (assuming you will expose an HTTP API) will be the initial bottleneck.
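
As a hedged illustration of the kind of Tomcat tuning the linked article covers, the connector limits can be raised either in application.properties (`server.tomcat.max-threads`, `server.tomcat.max-connections`) or programmatically; the Spring Boot 1.x sketch below uses placeholder values, not recommendations:

```java
import org.apache.coyote.AbstractProtocol;
import org.apache.coyote.ProtocolHandler;
import org.springframework.boot.context.embedded.EmbeddedServletContainerFactory;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTuningConfig {

    @Bean
    public EmbeddedServletContainerFactory servletContainer() {
        TomcatEmbeddedServletContainerFactory factory = new TomcatEmbeddedServletContainerFactory();
        factory.addConnectorCustomizers(connector -> {
            ProtocolHandler handler = connector.getProtocolHandler();
            if (handler instanceof AbstractProtocol) {
                AbstractProtocol<?> protocol = (AbstractProtocol<?>) handler;
                protocol.setMaxThreads(400);       // worker threads processing requests
                protocol.setMaxConnections(10000); // connections Tomcat will keep open
            }
        });
        return factory;
    }
}
```

In Spring Boot 2.x the same idea applies through a `WebServerFactoryCustomizer<TomcatServletWebServerFactory>`.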

Spring boot service higher response times under heavy load

The response time of my Spring Boot REST service running on embedded Tomcat sometimes goes really high. I have isolated the external dependencies, and all of that is pretty quick.
I am at a point where I think it has something to do with Tomcat's default thread pool size of 200 that it reserves for incoming requests to the service.
What I believe is that under heavy load (100 requests per second) all 200 threads are held up, and other requests are queued, leading to higher response times.
I was wondering if there is a definitive way to find out whether incoming requests are really getting queued. I have done extensive research in the Tomcat documentation and the Spring Boot embedded container documentation. Unfortunately, I don't see anything relevant.
Does anyone have any ideas on how to check this?
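
One way to check this, assuming Spring Boot 2.x with the Actuator/Micrometer starter on the classpath, is to watch the Tomcat thread gauges registered by Micrometer's Tomcat binder (`tomcat.threads.busy` versus `tomcat.threads.config.max`): if the busy count sits at the maximum while load keeps rising, new requests are waiting for a worker thread. On some Boot versions Tomcat's MBean registry must also be enabled (`server.tomcat.mbeanregistry.enabled=true`) for these gauges to be populated. A sketch that logs them periodically:

```java
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class TomcatThreadPoolMonitor {

    private final MeterRegistry registry;

    public TomcatThreadPoolMonitor(MeterRegistry registry) {
        this.registry = registry;
    }

    // Requires @EnableScheduling on a configuration class.
    @Scheduled(fixedRate = 5000)
    public void logThreadPoolState() {
        double busy = registry.get("tomcat.threads.busy").gauge().value();
        double max = registry.get("tomcat.threads.config.max").gauge().value();
        // If busy stays pinned at max under load, additional requests are queueing
        // for a Tomcat worker thread.
        System.out.printf("tomcat threads: busy=%.0f max=%.0f%n", busy, max);
    }
}
```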
