Creation of parallel threads for bulk request handling? - spring

I have a REST service and want to handle close to 100 requests in parallel. I set both the number of threads and the number of connections to 100 in my application.yml, yet I do not see 100 connections created to handle requests.
Here is what I have in my application.yml:
server.tomcat.max-threads=100
server.tomcat.max-connections=100
I am using YourKit to inspect the internals. On startup only 10 threads are created to handle requests, and even when I send multiple requests the count of request-handling threads does not increase; it stays at 10.
[Screenshot: YourKit thread view]

You're setting the maximum number of threads, not the minimum. Tomcat has decided here that the minimum (the min-spare pool) should be 10, which is its default.
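If the goal is to have all 100 workers created up front, raise the minimum as well. A minimal sketch, using the same pre-2.3 Spring Boot property naming as the question (verify the exact key against your Boot version):

    server.tomcat.max-threads=100
    server.tomcat.max-connections=100
    # workers Tomcat keeps alive even when idle (default: 10)
    server.tomcat.min-spare-threads=100

With min-spare-threads raised, the pool no longer sits at Tomcat's default minimum of 10.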

Related

What happens if I give a large value to server.tomcat.max-threads to handle load on my application?

Around 1,000+ jobs run through our service per day, with around 70-80 jobs starting at the same time and running in parallel.
To handle this, we considered raising the server.tomcat.max-threads property of our Spring application to a large number, but I am not confident about the side effects of setting it to something as high as 800.
Can you please help here?
The default installation of Tomcat sets the maximum number of HTTP servicing threads at 200. Effectively, this means that the system can handle a maximum of 200 simultaneous HTTP requests. When the number of simultaneous HTTP requests exceeds this count, the unhandled requests are placed in a queue, and the requests in this queue are serviced as processing threads become available. The default queue length is 100. At these default settings, a large web load that generates over 300 simultaneous requests will surpass the thread availability, resulting in "service unavailable" responses (HTTP 503).
Further reference: https://docs.bmc.com/docs/brid91/en/tomcat-container-workload-configuration-825210082.html
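In Spring Boot property terms (again pre-2.3 naming, matching the question), those two knobs would look roughly like this; the accept-count value mirrors Tomcat's default queue length mentioned above:

    # maximum HTTP worker threads (Tomcat default: 200)
    server.tomcat.max-threads=800
    # requests queued when all threads are busy (Tomcat default: 100)
    server.tomcat.accept-count=100

As for side effects of a number like 800: each thread typically reserves its own stack (often on the order of 1 MB), and more runnable threads mean more context switching, so memory use and CPU scheduling overhead are the usual costs to watch.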
How do I run multiple servlet executions in parallel on Tomcat?
If this is a batch-job-like workload, you can use Spring Batch.

Spring boot thread pool executor rest template behavior in case queueCapacity is 0 is decreasing performance for a rest apis application

I am stuck with a strange problem and am not able to find its root cause. This is my RestTemplate thread pool executor configuration:
connectionRequestTimeout: 60000
connectTimeout: 60000
socketTimeout: 60000
responseTimeout: 60000
connectionpoolmax: 900
defaultMaxPerRoute: 20
corePoolSize: 10
maxPoolSize: 300
queueCapacity: 0
keepAliveSeconds: 1
allowCoreThreadTimeOut: true
1) I know that when queueCapacity is 0, the thread pool executor creates a SynchronousQueue. The first issue: if I give it a positive integer value such as 50, application performance decreases. As per my understanding, a SynchronousQueue should only be used in rare cases, not in a Spring Boot REST API application like mine.
2) Second, I want to understand how a SynchronousQueue works in a Spring Boot REST API application deployed on a server (Tomcat). I know a SynchronousQueue has zero capacity, so a producer blocks until a consumer is available or a thread is created. But who are the consumer and the producer in this case, given that all requests are served by a web or application server? How will the SynchronousQueue behave here?
I am checking performance by running a JMeter script on my machine. The script sustains more load with queueCapacity 0 than with any value > 0.
I really appreciate any insight.
1) Don't set the queueCapacity explicitly; otherwise it is bound to degrade performance, because you would be capping the number of incoming requests that can wait in the queue, and each queued request is only picked up once one of the threads in the fixed pool becomes available.
ThreadPoolTaskExecutor has a default configuration of the core pool
size of 1, with unlimited max pool size and unlimited queue capacity.
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/concurrent/ThreadPoolTaskExecutor.html
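For reference, here is a minimal sketch of how such an executor is declared in Java, using the values from the question but leaving queueCapacity at its default, per the advice above:

    import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(10);
    executor.setMaxPoolSize(300);
    // queueCapacity left at its default (Integer.MAX_VALUE): tasks wait in an
    // unbounded queue, so the pool never actually grows past corePoolSize.
    // Setting it to 0 switches to a SynchronousQueue, where every submitted
    // task must be handed directly to a (possibly new) thread.
    executor.setKeepAliveSeconds(1);
    executor.setAllowCoreThreadTimeOut(true);
    executor.initialize();

Note the subtlety in the comment: with an unbounded queue, maxPoolSize is effectively ignored, which is one reason tuning queueCapacity changes behavior so visibly.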
2) In a SynchronousQueue, pairs of insert and remove operations always occur simultaneously, so the queue never actually contains anything. It passes data synchronously to another thread: the producer waits for the other party to take the item instead of just enqueueing it and returning.
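A tiny standalone illustration of that handoff; in the executor case the producer is the request thread calling execute() and the consumer is an idle pool worker:

    import java.util.concurrent.SynchronousQueue;

    public class HandoffDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> queue = new SynchronousQueue<>();
            Thread producer = new Thread(() -> {
                try {
                    queue.put("task"); // blocks until another thread calls take()
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();
            String task = queue.take(); // completes the handoff; nothing was ever stored
            System.out.println("received: " + task);
        }
    }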
Read more:
https://javarevisited.blogspot.com/2014/06/synchronousqueue-example-in-java.html
https://www.baeldung.com/thread-pool-java-and-guava
I hope my answer helps you in some way.

How to get high rps with JMeter load testing https endpoint

I'm trying to load test my HTTPS endpoint with JMeter. I want to reach at least 10,000 requests per second, but when I set the number of threads to 10000 I get far fewer rps, around 500.
I've tried setting the number of threads to 1000 and to 100; surprisingly, I get the same rps. I'm using the HTTP Sampler with "Use Keep-Alive" set to true. In the statistics I see that with 100 threads Keep-Alive is used and connect_time is around 100 ms, but with a higher thread count connect_time grows; it's as if JMeter stops reusing the connections.
I know this isn't a server issue, because I've tested the same endpoint with Yandex.Tank and phantom, and it can easily sustain 10,000 requests per second; the problem is that it can't use response data to build further requests, which is why I need JMeter for this task.
This can be done by using the "Stepping Thread Group" plugin. It lets you ramp up to 10000 requests per second and hold that load for a specified time.
[Screenshot: Stepping Thread Group configuration]
Download the jar from the link below:
https://jmeter-plugins.org/wiki/SteppingThreadGroup/
I assume you are trying to achieve this using one machine. Try multiple machines, or JMeter distributed mode:
https://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf
https://www.blazemeter.com/blog/how-to-perform-distributed-testing-in-jmeter/
https://blazemeter.com/blog/3-common-issues-when-running-jmeter-scripts-and-how-solve-them/
I am assuming the issue is with the machine, which is not able to generate that much load. I have usually used at most 300 threads per machine, but it depends on the machine's configuration. Check whether the single machine is the bottleneck and whether multiple machines can generate more load, assuming the server itself is not the limiting factor.
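If you do go distributed, the controller is typically launched from the command line along these lines (host addresses are placeholders; see the PDF above for the full setup):

    jmeter -n -t load_test.jmx -R 192.168.0.101,192.168.0.102 -l results.jtl

Here -n runs non-GUI mode, -t selects the test plan, -R lists the remote load generators, and -l writes the results file.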
Hope this helps.
Update: usually 200-500 threads per machine can be handled by modern machines.
Please check the links below for some more info:
1. How do threads and number of iterations impact a test, and what is JMeter's max thread limit?
2. https://www.blazemeter.com/blog/what%e2%80%99s-the-max-number-of-users-you-can-test-on-jmeter/

Bigquery Streaming inserts, persistent or new http connection on every insert?

I am using google-api-ruby-client for streaming data into BigQuery. Whenever there is a request, it is pushed into Redis as a queue, and then a new Sidekiq worker tries to insert it into BigQuery. I think this involves opening a new HTTPS connection to BigQuery on every insert.
The way I have it set up is:
Events are posted every 1 second, or when the batch size reaches 1 MB (one megabyte), whichever occurs first. This is per worker, so the BigQuery API may receive tens of HTTP posts per second over multiple HTTPS connections.
This is done using the provided API client by Google.
Now the question: for streaming inserts, which is the better approach?
1) A persistent HTTPS connection. If yes, should it be a global connection shared across all requests, or something else?
2) Opening a new connection every time, as we do now with google-api-ruby-client.
I think it's far too early to be talking about these optimizations. Other context is also missing, such as whether you have exhausted the kernel's TCP connections or not, or how many connections are sitting in TIME_WAIT state, and so on.
Until the worker pool reaches something like 1,000 connections per second on the same machine, you should stick with the default mode the library offers.
Otherwise this would need a lot of other context and a deep understanding of how this works in order to optimize anything here.
On the other hand, you can batch more rows into the same streaming insert request; the limits are:
Maximum row size: 1 MB
HTTP request size limit: 10 MB
Maximum rows per second: 100,000 rows per second, per table.
Maximum rows per request: 500
Maximum bytes per second: 100 MB per second, per table
Read my other recommendations here:
Google BigQuery: Slow streaming inserts performance
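The question is about google-api-ruby-client, but batching is client-agnostic. A sketch with the Java client (dataset and table names are placeholders), packing a buffered batch into a single insertAll call while staying under the limits above:

    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.InsertAllRequest;
    import com.google.cloud.bigquery.InsertAllResponse;
    import com.google.cloud.bigquery.TableId;
    import java.util.List;
    import java.util.Map;

    public class BatchedStreamInsert {
        private final BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        void insertBatch(List<Map<String, Object>> bufferedRows) {
            InsertAllRequest.Builder builder =
                    InsertAllRequest.newBuilder(TableId.of("my_dataset", "my_table"));
            // keep bufferedRows under 500 rows / 10 MB per the limits above
            for (Map<String, Object> row : bufferedRows) {
                builder.addRow(row);
            }
            InsertAllResponse response = bigquery.insertAll(builder.build());
            if (response.hasErrors()) {
                // errors are reported per row; retry only the failed ones
                response.getInsertErrors().forEach((index, errors) ->
                        System.err.println("row " + index + " failed: " + errors));
            }
        }
    }

One connection carrying fewer, larger requests also eases the port pressure described next.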
I will also try to give some context to better understand the complex situation when ports are exhausted.
Let's say a machine has a pool of 30,000 ports and opens 500 new connections per second (typical), with each port held for 60 seconds in TIME_WAIT:
after 1 second you have 29,500 ports free;
after 10 seconds, 25,000;
after 30 seconds, 15,000;
at 59 seconds you are down to 500;
at 60 seconds the first 500 ports are released again, so usage settles into a rolling 29,500 in use. Everyone is happy.
Now say that you're seeing an average of 550 connections a second: 550 x 60 s = 33,000 ports needed, more than the pool holds. Suddenly there aren't any available ports to use.
So, your first option is to bump up the range of allowed local ports; easy enough, but even if you open it up as much as you can and go from 1025 to 65535, that's still only about 64,000 ports; with your 60-second TCP_TIMEWAIT_LEN, you can sustain an average of roughly 1,000 connections a second. And still no persistent connections are in use.
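Condensed, the arithmetic in that walkthrough is:

    ports in use at steady state = new connections per second x TCP_TIMEWAIT_LEN
    500/s x 60 s = 30,000   (exactly the pool: saturated but stable)
    550/s x 60 s = 33,000   (exceeds the 30,000 pool: ports run out)
    64,000 ports / 60 s ~ 1,066/s   (the ceiling even after widening the range)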
This port exhaustion is discussed in more depth here: http://www.gossamer-threads.com/lists/nanog/users/158655

How many simultaneous requests can I send to ElasticSearch cluster?

I want to send multiple bulk operation requests to an Elasticsearch cluster, and I ran into this issue: EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction
I have a cluster of 4 Elasticsearch instances (version 1.3.4). When I sent this request to inspect the bulk thread pool:
GET /_cat/thread_pool?v&h=host,bulk.active,bulk.queueSize
I got
host bulk.active bulk.queueSize
1D4HPY1 0 50
1D4HPY2 0 50
1D4HPY3 0 50
1D4HPY4 0 50
So how many simultaneous bulk operation requests can I send to that cluster: 50 or 200?
I would suggest having a look at this section of the documentation.
Also, you need to be more specific when you say "simultaneous requests that you can send" because, as you can see on that page, there are various thread pools that handle various jobs; your post gives the example of "bulk" operations.
In my opinion, the right request for "bulk" to see the number of simultaneously running threads (as per this piece of documentation) is GET /_cat/thread_pool?v&h=host,bulk.queueSize,bulk.min,bulk.max. You then have at most bulk.max active threads in the thread pool, with room for bulk.queueSize tasks waiting in the queue. When a request comes in and there is no thread to handle it, it is put in the queue to wait; once the queue is full as well, further requests are rejected, which is exactly the EsRejectedExecutionException you are seeing.
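As a worked illustration (bulk.max is not shown in the output above, so assume, purely for the sake of the example, that each node has bulk.max = 4):

    per node: 4 active + 50 queued = 54 bulk requests accepted at a time
    a 55th concurrent request to that node -> EsRejectedExecutionException
    across 4 nodes: at most 4 x 54 = 216 in flight, and only if requests
    are spread evenly across the nodes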
