Elasticsearch.Net.UnexpectedElasticsearchClientException: There were not enough free threads in the ThreadPool to complete the operation. ---> System.InvalidOperationException: There were not enough free threads in the ThreadPool to complete the operation
If I use the Search method of the IElasticClient interface, the search completes successfully.
Also, I have a hard time believing it's a setting on the Elasticsearch server, since the same call works fine on another machine.
Any ideas what thread pool this is referring to? Much appreciated.
Discovered there was a piece of code in the application that was capping the thread pool:
ThreadPool.SetMaxThreads(maxWorkerCount, maxIOCount);
Because the virtual server was configured with only 2 processors, this code set the maximum number of threads to 2, which caused the async calls of the application on that machine to throw those errors.
In WSO2 EI 6.6, a proxy stopped working abruptly. Upon analysis we observed an error in the WSO2 Carbon log, "GC overhead limit exceeded", and after this error nothing happens in the EI.
The proxy logic is to get data from a SQL Server table, form an XML payload, and send it to an external API. The proxy runs at a 5-minute interval, and in every interval a maximum of 5 records is pushed to the API.
After restarting the WSO2 Carbon services, the proxy starts working again. Currently we are restarting the services every 3 days to avoid this issue.
We need to know how to identify the root cause and resolve this.
This means the JVM has effectively run out of allocated memory: it is spending nearly all of its time in garbage collection while reclaiming very little heap. There can be many reasons for this. For example, if you haven't allocated enough heap to the JVM, you can easily run out of memory. If that's not the case, you need to analyze a memory dump and see what's occupying the memory and causing it to fill up.
Generally, when you see the mentioned error the JVM automatically creates a heap dump (heap-dump.hprof) in the <EI_HOME>/repository/logs directory. You can try analyzing the dump to find the root cause. If the server doesn't generate a memory dump, manually take one when memory usage is higher than the expected level and analyze it.
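If it helps, you can usually capture a dump manually with jmap, e.g. jmap -dump:live,format=b,file=heap-dump.hprof <carbon-pid> (where <carbon-pid> is the process id of the running EI server), and open it in a tool such as Eclipse Memory Analyzer (MAT) to see which objects and references dominate the heap.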
When I add the asynchronous timer, it gets stuck. How can I fix this?
It's not connected with the "asynchronous timer" (whatever that means); it's a classic OutOfMemoryError exception.
The reason is that the JVM asks the underlying operating system to create a new native thread and the request fails; possible causes include:
The Java process doesn't have sufficient address space
The underlying OS lacks virtual memory
There is a limit at the underlying OS level that doesn't allow that many native threads, e.g. /proc/sys/kernel/threads-max
So you either need to amend your JVM/OS configuration or allocate another machine and switch to Distributed Testing.
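As a rough illustration (exact paths and limits depend on your OS and JMeter setup): on Linux you can check the per-user process cap with ulimit -u and the kernel-wide cap in /proc/sys/kernel/threads-max, and you can make each thread cheaper by shrinking the stack size, for example by passing -Xss256k through the JVM_ARGS environment variable before starting JMeter.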
More information: java.lang.OutOfMemoryError: Unable to create new native thread
The web application is running on Spring Boot and deployed on WebLogic.
We have assigned 400 as the maximum number of threads and 100 as the maximum number of JDBC connections.
When we perform load testing on the web application, performance is good when the load is low (the response time is less than 200 ms for most of the HTTP requests we call).
When we increase the load, we can see that the thread count and JDBC connection count increase gradually but remain nowhere near the maximum. However, the response time gets much longer and can exceed 5 seconds.
CPU usage, thread count, memory, and JDBC connections all seem normal during this period.
Another observation: while performance was degrading during the test, we used another machine to make an HTTP call to an endpoint that only returns text, with no DB access or business logic, and even this simple HTTP call took 10 s to respond. (And the server resources were still not maxed out!)
So, we are wondering: what keeps them waiting?
Any other possible bottlenecks?
If the server doesn't lack resources like CPU/RAM/etc., only a profiler can tell you where your application spends most of its time, which might be:
Waiting in a queue for the next thread/DB connection from the pool to become available
Slow database queries
Inefficient functions/algorithms which are subject to optimization
WebLogic configuration not suitable for high loads
JVM configuration not suitable for high loads (e.g. the system is doing garbage collection too often or for too long)
So I would recommend re-running your test with profiler telemetry enabled and, at the same time, monitoring essential JVM metrics using e.g. the JMXMon Sample Collector, which can monitor your application-specific metrics as well. It's a plugin that can be installed using the JMeter Plugins Manager.
For a detailed approach on how to go about identifying poor thread performance, I suggest you take a look at the TSA Method by Brendan Gregg.
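If a full profiler isn't available, a cheaper first step (my suggestion, not part of the TSA write-up) is to take a few thread dumps with jstack <pid> while response times are degraded and check which threads are BLOCKED or WAITING and on what lock or pool; threads queuing for a connection or an executor usually show it clearly in their stack traces.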
I am often getting the stuck thread error while trying to send a JMS message to another managed server within the domain in our production environment.
Initially we felt it might be due to load on the server, but the issue occurs randomly, even at times of low load, while the system sometimes handles high-volume periods without any problem.
We are not able to find the reason for this.
Error Information:
weblogic.jms.client.JMSConnectionFactory.createQueueConnection(JMSConnectionFactory.java:199)
What is the advantage of using a thread pool in Hystrix?
Suppose we are calling a third-party service. When we call a service or DB, that thread goes into a waiting state, so what is the use of creating a thread for each call?
So, I mean, how is the short-circuited (thread-pooled) method better than the normal (non-short-circuited) method?
Let's say a remote service (any service) starts to respond slowly, but a typical application (the service making calls to the remote service) will still continue to call that remote service. The short-circuited (thread-pooled) approach helps you build a defensive system in this particular case.
The calling service does not know whether the remote service is healthy, and new threads are spawned every time a request comes in, so threads on an already struggling server keep being consumed.
We don't want this to happen, as we need these threads for other remote calls or processes running on our server, and we also want to avoid CPU utilization spiking. Isolating the call in a bounded thread pool prevents resources from becoming blocked when latency occurs, and it also gives downstream services some breathing room to recover.
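As a rough sketch of what that isolation looks like in code (the class name, group key and return values below are made up for illustration, not taken from the question), wrapping the remote call in a HystrixCommand runs it on a bounded, shared thread pool and falls back instead of letting callers pile up when the dependency is slow:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Hypothetical command wrapping a slow remote call.
public class RemoteServiceCommand extends HystrixCommand<String> {

    public RemoteServiceCommand() {
        // Commands sharing this group key share one bounded thread pool by default.
        super(HystrixCommandGroupKey.Factory.asKey("RemoteService"));
    }

    @Override
    protected String run() {
        // The actual remote/DB call goes here; it runs on a pool thread,
        // not on the caller's thread, so a slow dependency cannot tie up the caller.
        return "response from remote service";
    }

    @Override
    protected String getFallback() {
        // Used when the pool is saturated, the call times out, or the circuit is open.
        return "fallback response";
    }
}

// Usage: String result = new RemoteServiceCommand().execute();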
For details: ThreadPool in Hystrix