I am using the Spring Boot Netflix Zuul router, version 2.1.1. We are running a performance test against two services, both of which return a stubbed response. The requirement is to test both services concurrently at 15 TPS, and the stub's response time is 8 seconds. Initially, the elapsed time for both services was about 8.4 seconds. Once we pushed the load to 10.5 TPS, one service's average response time rose to 11.5 seconds and the other's to about 9.5 seconds. Our target is 15 TPS, and we then want to push the test up to 40 TPS to find the limit of our services.

I suspect the Netflix Zuul router: when I raised the Tomcat thread count to 100, the response time improved from 12.5 to 11.5 seconds. Please let me know what I am missing and what I should do to improve performance. I am not using Eureka; I connect via routes. Currently the limit is 200 with a refresh limit of 1 second.
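For reference, the two knobs described above (the Tomcat worker threads and the limit of 200) correspond roughly to the sketch below. This is only an illustration for Spring Cloud Netflix Zuul with URL-based routes; the value of 400 is an assumption sized from target TPS × response time (40 × 8.4 s ≈ 336 concurrent requests), not a recommendation.

```java
import org.apache.coyote.AbstractProtocol;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayTuning {

    // Raise the Tomcat worker pool that handles requests entering the gateway.
    // 400 is an assumed value: concurrent requests ≈ target TPS x response time,
    // so 40 TPS x 8.4 s needs roughly 340 threads plus some headroom.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatThreads() {
        return factory -> factory.addConnectorCustomizers(connector -> {
            if (connector.getProtocolHandler() instanceof AbstractProtocol) {
                ((AbstractProtocol<?>) connector.getProtocolHandler()).setMaxThreads(400);
            }
        });
    }
}
```

With URL-based routes Zuul also proxies through an Apache HttpClient pool governed by `zuul.host.max-total-connections` (default 200) and `zuul.host.max-per-route-connections` (default 20), so the per-route default is worth checking alongside the total.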
Related
I have a Spring Boot app that acts as a Kafka producer. We are trying to process 300 TPS, and each request is about 1 KB in size. When the app runs on a single PCF instance we achieve the desired TPS, but when we scale up to 2 instances the TPS drops to only 150. We are using the properties below to speed up execution, but still cannot figure out why it works fine on 1 instance and not on 2:
```
workerPoolCoreSize=50
workerPoolMaxSize=1000
batch.size=327680
```
Any idea?
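For context, `batch.size` is a Kafka producer property, while `workerPoolCoreSize`/`workerPoolMaxSize` look like application-level settings. A minimal sketch of the producer-side throughput knobs, with the broker address, topic and payload as placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Throughput-related knobs; batch.size=327680 matches the value quoted above,
        // the rest are illustrative.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 327680);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);            // let batches fill up
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // smaller batches on the wire
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64 * 1024 * 1024L);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "1kb-payload")); // placeholders
        }
    }
}
```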
After upgrading to Spring Boot 2.5, CancelledServerWebExchangeException started to appear quite frequently in the Prometheus http_server_requests_seconds metrics (according to the graphs, up to 10% of server responses end up with it). It appears in my own API metrics as well as in the actuator endpoint metrics (health, info, prometheus).
Example:
```
http_server_requests_seconds_count{exception="CancelledServerWebExchangeException",method="GET",outcome="UNKNOWN",status="200",uri="/actuator/health"} 137.0
```
That is a rather strange combination: outcome="UNKNOWN" together with status="200".
The problem is that all of these requests actually get successful responses.
Questions: what is this exception, and why might it occur so often?
How to reproduce: start the application locally and put some load on it (I used 50 threads in JMeter hitting the actuator endpoints).
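For the reproduction step, cancellations can also be provoked deliberately: a client that abandons each request before the response arrives is what tends to end up tagged as a cancelled exchange. A rough sketch, assuming the app runs on localhost:8080 and using an aggressively short 5 ms timeout purely to force cancellations:

```java
import java.time.Duration;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class CancelLoad {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080"); // assumed base URL

        // Fire requests with 50-way concurrency and give up on each after 5 ms.
        // Cancelling the subscription aborts the HTTP exchange mid-flight, which is
        // what shows up server-side as a cancelled exchange.
        Flux.range(0, 1_000)
            .flatMap(i -> client.get().uri("/actuator/health")
                    .retrieve()
                    .bodyToMono(String.class)
                    .timeout(Duration.ofMillis(5))
                    .onErrorResume(e -> Mono.empty()), 50)
            .blockLast();
    }
}
```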
I have developed a microservice using Spring Boot and deployed it as a Docker container. When performance testing the service, I see that at most 20 threads are created for it at any point in time, even though the number of calls being made is much higher. I have set the maximum threads to 4000 and the maximum connections to 10000, along with all the DB configuration, and the server has 24 cores and 64 GB of RAM, yet there is still no improvement.

Are there any limitations on the number of calls that can be made to a microservice built with Spring Boot, is the issue with the Docker container, or is this normal?
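One way to pin down the 20-thread observation from inside the container is a small debug endpoint that counts the live Tomcat worker threads. The `/debug/http-threads` path and the `http-nio` name prefix below are assumptions based on the default NIO connector; this is a sketch, not a fixed recipe.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;
import java.util.Objects;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HttpThreadCountController {

    // Counts live Tomcat request-processing threads (named "http-nio-<port>-exec-N"
    // by default) so the externally observed 20-thread ceiling can be cross-checked.
    @GetMapping("/debug/http-threads")
    public long httpThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        return Arrays.stream(mx.getThreadInfo(mx.getAllThreadIds()))
                .filter(Objects::nonNull)
                .filter(info -> info.getThreadName().startsWith("http-nio"))
                .count();
    }
}
```

Note that Tomcat creates worker threads on demand, so the live count tends to track the number of requests actually in flight rather than the configured maximum.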
I am load testing a stub I created in SoapUI. My stub is very simple: it takes a request and then does a `Thread.sleep(X)`, where X is picked up from an application.properties file. It is a Spring Boot SOAP service, and my question is this.
If X is something like 1 ms, my application handles thousands of TPS (transactions per second). If I change that Thread.sleep to something like 10 seconds, it only handles about 5 TPS. What is the limiting factor causing the degradation?
I have plenty of threads, CPU and memory available. How can I make it utilize all of my resources and reach a higher TPS while still mimicking a delay?
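For what it is worth, with a blocking `Thread.sleep` every in-flight request pins a servlet worker thread for the whole delay, so the ceiling is roughly worker threads ÷ sleep seconds (for example, 200 threads with a 10-second sleep tops out around 20 TPS no matter how idle the CPU is). Below is a sketch of mimicking the delay without holding the request thread, shown as a plain Spring MVC endpoint since wiring it into a SOAP endpoint depends on the stack; the `stub.delay-ms` property name and `/stub` path are made up.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class DelayStubController {

    // Hypothetical property; the real stub already reads its delay from application.properties.
    @Value("${stub.delay-ms:10000}")
    private long delayMs;

    // A handful of scheduler threads can hold thousands of pending responses,
    // because nothing blocks while the delay elapses.
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

    @GetMapping("/stub")
    public DeferredResult<String> stub() {
        DeferredResult<String> result = new DeferredResult<>();
        // The servlet thread is released immediately; the response completes later.
        scheduler.schedule(() -> result.setResult("stubbed response"),
                delayMs, TimeUnit.MILLISECONDS);
        return result;
    }
}
```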
Thanks,
Brian
The response time of my Spring Boot REST service running on embedded Tomcat sometimes goes really high. I have isolated the external dependencies, and all of them are pretty quick.
I am at the point where I think it has something to do with Tomcat's default thread pool size of 200, which it reserves for incoming requests to the service.
What I believe is that under heavy load (100 requests per second) all 200 threads are held up, so further requests get queued, which leads to the higher response times.
I was wondering whether there is a definitive way to find out if incoming requests are really getting queued. I have done extensive research in the Tomcat documentation and the Spring Boot embedded container documentation, but unfortunately I don't see anything relevant.
Does anyone have any ideas on how to check this?
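One way to check this, assuming embedded Tomcat 9 (Spring Boot 2.x) with its default internal executor, is to grab the connector's thread pool at startup and print its pressure periodically: if the active count equals the maximum and the queue is non-empty, requests really are waiting for a worker. This is a sketch only; the `tomcat.threads.*` Micrometer metrics from Actuator are an alternative if that is already on the classpath.

```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.threads.ThreadPoolExecutor;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.boot.web.embedded.tomcat.TomcatWebServer;
import org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class RequestQueueProbe {

    @EventListener
    public void onReady(ApplicationReadyEvent event) {
        if (!(event.getApplicationContext() instanceof ServletWebServerApplicationContext)) {
            return;
        }
        ServletWebServerApplicationContext ctx =
                (ServletWebServerApplicationContext) event.getApplicationContext();
        TomcatWebServer server = (TomcatWebServer) ctx.getWebServer();
        Connector connector = server.getTomcat().getConnector();
        Executor executor = connector.getProtocolHandler().getExecutor();
        if (!(executor instanceof ThreadPoolExecutor)) {
            return;
        }
        ThreadPoolExecutor pool = (ThreadPoolExecutor) executor;

        // Print pool pressure once a second: active threads vs. the maximum,
        // plus how many accepted requests are sitting in the executor queue.
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                () -> System.out.printf("active=%d/%d queued=%d%n",
                        pool.getActiveCount(), pool.getMaximumPoolSize(), pool.getQueue().size()),
                1, 1, TimeUnit.SECONDS);
    }
}
```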