I see that idle connections are not getting cleared and I am not sure what the reason is.
initialSize-10
maxTotal-20
maxIdle-10
minIdle-0
minEvictableIdleTimeMillis-30min
timeBetweenEvictionRunsMillis-60min
numTestsPerEvictionRun-20
testOnBorrow-true
testOnIdle-true
validationQuery-select 1 from dual
From the various sources I have read, the following is my understanding:
maxTotal - the maximum number of active connections to the DataSource, which is 20 in the above case.
maxIdle - the number of idle connections that can remain in the pool; idle connections returned beyond this limit are closed immediately. In the above case, a connection counts as idle once it has been unused for 30 minutes, and the sweeper, running every 60 minutes, checks up to 20 idle connections per run and clears the ones that qualify.
Is the above understanding correct?
I am using BasicDataSourceMXBean to print the stats
{"NumActive":"0","NumIdle":"10","isClosed":"false","maxTotal":"20","MaxIdle":"10","MinIdle":"0"}
The idle connections are never getting cleared even though there is no traffic. Is there anything wrong with the above config?
Also, what is minIdle, and when should we set it to a non-zero value?
We recently upgraded Hibernate from 3.6.0.Final to 4.3.11.Final and Spring from an older version to 4.2.9.
Before the upgrade the idle connections were getting cleared, but since the upgrade they are not.
Everywhere I have looked, it seems that the property should be testWhileIdle rather than testOnIdle. testWhileIdle is false by default, so your idle connections aren't being tested for validity and thus aren't being evicted.
minIdle basically tells the connection pool the minimum number of idle connections to keep on hand. It's my understanding from the documentation that when minIdle is 0, the pool is free to evict down to no idle connections at all.
Typically minIdle defaults to the same value as initialSize.
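To make the settings concrete, here is a minimal sketch of the configuration discussed above, done programmatically against commons-dbcp2's BasicDataSource. The JDBC URL and credentials are placeholders, and the values simply mirror the question's config with the property-name fixes applied:

import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfigSketch {
    public static BasicDataSource newDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder
        ds.setUsername("app_user");                           // placeholder
        ds.setPassword("app_password");                       // placeholder
        ds.setInitialSize(10);
        ds.setMaxTotal(20);
        ds.setMaxIdle(10);
        ds.setMinIdle(0);
        ds.setTestOnBorrow(true);
        ds.setTestWhileIdle(true);                            // the correct property name (not testOnIdle)
        ds.setValidationQuery("select 1 from dual");
        ds.setTimeBetweenEvictionRunsMillis(60L * 60 * 1000); // eviction sweeper runs every 60 minutes
        ds.setMinEvictableIdleTimeMillis(30L * 60 * 1000);    // idle for 30 minutes => eligible for eviction
        ds.setNumTestsPerEvictionRun(20);                     // idle connections examined per sweep
        return ds;
    }
}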
I see that idle connections are not getting cleared and I am not sure what the reason is.
https://commons.apache.org/proper/commons-dbcp/configuration.html
The testWhileIdle vs. testOnIdle issue that others have pointed out should resolve your question as to why the idle connections remain open. You are correct in assuming that your initialSize=10 connections will be cleared by the eviction sweeper at the 60-minute mark to bring you down to minIdle=0. Why you would want minIdle=0 is a different question, though. The whole point of connection pooling is really to pre-authenticate, test, and establish your connections so they can sit in your pool "idle" and available to be "borrowed" by incoming requests. This improves performance by reducing each request's execution time to little more than the SQL itself.
Also, what is minIdle, and when should we set it to a non-zero value?
These idle connections are pre-established and kept waiting for your future SQL requests. The right minIdle sizing depends on your application, but DBCP2's default maxIdle of 8 is probably not a bad place to start. The idea is to keep enough connections on hand to keep up with the average demand on the pool; you would set maxIdle to deal with those peak times when you have bursts of traffic. Once testWhileIdle=true is in place, the sweeper will run the validationQuery when it comes around, but it only tests 3 connections per run by default; you can configure numTestsPerEvictionRun to a higher number if you want more to be tested. These "tests" ensure your connections are still in a good state so that you don't grab a "bad" idle connection from the pool during execution.
I suspect that you may be more concerned with "hung" connections than with "idle" connections. If that is the case, you will want to review the "abandoned" settings, which are designed to destroy "active" connections that have been running longer than a given amount of time: removeAbandonedOnMaintenance=true along with removeAbandonedTimeout={numberOfSecondsBeforeEligibleForRemoval}.
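A hedged sketch of those abandoned-connection settings on a DBCP2 BasicDataSource; the 300-second timeout is only an illustration, so pick something longer than your slowest legitimate operation:

import org.apache.commons.dbcp2.BasicDataSource;

public class AbandonedConfigSketch {
    public static void configureAbandonedHandling(BasicDataSource ds) {
        ds.setRemoveAbandonedOnMaintenance(true); // check for abandoned connections when the evictor runs
        ds.setRemoveAbandonedOnBorrow(true);      // also check whenever a connection is borrowed
        ds.setRemoveAbandonedTimeout(300);        // seconds a connection may stay "active" before removal
        ds.setLogAbandoned(true);                 // log a stack trace of the code that abandoned it
    }
}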
Related
https://quarkus.io/guides/http-reference#http-limits-configuration
quarkus.http.limits.max-connections
The maximum number of connections that are allowed at any one time. If this is set it is recommended to set a short idle timeout.
What does this mean exactly, and which property sets the idle timeout?
It means that Quarkus will limit the number of open HTTP connections to whatever you have set.
The reason we recommend also setting a low idle timeout via quarkus.http.idle-timeout (it depends on the application, but you probably want something in the low seconds) is that if you have idle connections sitting around while you are at the maximum number of connections, you could run out of available connections very quickly.
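For illustration, the two settings might sit together in application.properties like this (the values are assumptions for a small service, not recommendations):

quarkus.http.limits.max-connections=1000
quarkus.http.idle-timeout=5s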
P.S. All Quarkus configuration options can be found here.
We're currently using H2 version 1.4.199 in embedded mode with the default nio file protocol and the MVStore storage engine. The write_delay parameter is set to 60 seconds.
We run a batch of about 30,000 insert/update/delete statements within 2 seconds (in one transaction), followed by another batch of a couple of hundred statements only 30 seconds later (in a second transaction). The next attempt to open a DB connection (only 2 minutes later) shows that the DB is corrupt:
File corrupted while reading record: null. Possible solution: use the recovery tool [90030-199]
Since the transactions occur within a minute, we wonder whether the write_delay of 60 seconds might be contributing to the issue.
Changing write_delay to 60 s (from the default of 0.5 s) will definitely increase your risk of lost transactions, and I do not see a good reason for doing it. It should not cause a DB corruption, though. More likely some thread interruptions do that, since you are running a web server and who knows what else in the same JVM. Using the async file store might help in that area, and yes, it is stable enough (how much worse can it get for your app than a database corruption, anyway?).
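As a rough sketch of that suggestion (the path is a placeholder, and the async file-store URL prefix should be verified against the H2 documentation for your exact build), the connection URL would look something like:

jdbc:h2:async:/path/to/data/mydb

and write_delay can be left at (or reset to) its default with SET WRITE_DELAY 500, where the value is in milliseconds.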
This is a general question about flow -
Lately we have started getting .NET errors of "Connection timeout" or "Connection must be open for this operation".
We are working with an Oracle DB and we set up a job that runs every 5 seconds and counts how many connections there are (both ACTIVE and INACTIVE) for the w3wp process (we are querying gv$session).
The max pool size for each WS (we have 2) is 300, meaning 600 connections in total.
We noticed that we are indeed reaching the 600 sessions before the crash; however, many of those 600 sessions are INACTIVE.
I would expect those sessions to be reused, since they are INACTIVE at the moment.
In addition, the prev_sql_id recorded for most of these INACTIVE sessions is: SELECT PARAMETER, VALUE FROM SYS.NLS_DATABASE_PARAMETERS WHERE PARAMETER IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET').
Is it a normal behavior?
Furthermore, after recycling, the connection count is of course small (around 30), but 1 minute later it jumps to 200. Again, the majority are INACTIVE sessions.
What is the best way to understand what these sessions are and to troubleshoot them?
Thanks!
I am testing the server with spring-boot.
However, I ran into some problems while doing the test.
My test is:
How much memory does the server use as the number of WebSocket sessions (the number of clients) increases?
With 1,000 clients (fewer than 9,000 sessions) the test ran with no issues.
But when I tried to test 10k connections, the server created connections up to almost 10,000 (sometimes stopping at 9,990 sessions, sometimes at 9,988 or 9,996; there was no specific limit on the number of sockets).
After that, it just stopped creating sessions: no errors, just no response.
If some clients time out and release their connections, other clients that were waiting to connect are able to get connections.
Environment:
Tomcat: 8.0.36
Spring Boot: 1.3.3
Java: 1.8
To solve this, I tried:
Increasing the heap size.
I increased the JVM heap to 5 GB, but the heap memory used for the connections is only about 2 GB, so I think it is not related to JVM memory.
I also set server.tomcat.max-threads=20000 in application.properties,
but it failed; there was no difference from before.
I am really curious about this issue. If you know this problem and have any ideas, please let me know the reason.
Thanks.
Tomcat - maxThreads vs maxConnections
Try setting the maxConnections property to more than 10,000.
From the doc:
The maximum number of connections that the server will accept and process at any given time. When this number has been reached, the server will accept, but not process, one further connection. This additional connection will be blocked until the number of connections being processed falls below maxConnections, at which point the server will start accepting and processing new connections again. Note that once the limit has been reached, the operating system may still accept connections based on the acceptCount setting. The default value varies by connector type. For BIO the default is the value of maxThreads unless an Executor is used, in which case the default will be the value of maxThreads from the executor. For NIO the default is 10000. For APR/native, the default is 8192.
Note that for APR/native on Windows, the configured value will be reduced to the highest multiple of 1024 that is less than or equal to maxConnections. This is done for performance reasons.
If set to a value of -1, the maxConnections feature is disabled and connections are not counted.
There is a Spring Boot property, server.tomcat.max-connections, which needs to be set in the application.properties file:
server.tomcat.max-connections= # Maximum number of connections that the server will accept and process at any given time.
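For example, in application.properties (the numbers are purely illustrative rather than recommendations; 20000 just mirrors the figure already tried for threads, and the max-connections property is assumed to be available in your Spring Boot version):

server.tomcat.max-connections=20000
server.tomcat.max-threads=200

max-connections caps how many sockets the embedded Tomcat will keep open at once, while max-threads caps how many requests it processes concurrently.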
I have an application running in CF8 that frequently calls external systems such as a search engine and LDAP servers. But at times some requests never get a response and remain in the active request list indefinitely.
Even though a request timeout is set in the administrator, it is not being applied in these scenarios.
I have around 5 requests that have been pending for the last 20 hours!
My server settings are as below:
Timeout Requests after (seconds) : 300
Max no of simultaneous requests : 20
Maximum number of running JRun threads : 50
Maximum number of queued JRun threads : 1000
Timeout requests waiting in queue after 300 seconds
I read through some articles and found that there are cases where threads never get a response and are never killed. But I don't have a solid solution for how to time these out or kill them automatically.
I would really appreciate it if you have any ideas on this. :)
The ColdFusion timeout does not apply to 'third party' connections.
A long-running LDAP query, for example, will take as long as it needs. When the calling template gets the result from the query, your timeout will apply.
This often leads to confusion when interpreting errors: you will get an error suggesting that whichever function ran after the long-running request caused the timeout.
Further reading available here
You can (and probably should) set a timeout on the CFLDAP call itself. http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7f97.html
Thanks, Antony, for recommending my blog entry CF911: Lies, Damned Lies, and CF Request Timeouts...What You May Not Realize. This problem of requests not timing out when expected can be very troublesome and a surprise for most.
But Anooj, while that at least explains WHY they don't die (and you can't kill them within CF), one thing to consider is that you may be able to kill them in the REMOTE server being called, in your case, the LDAP server.
You may be able to go to the administrator of THAT server; on showing them that CF has a long-running request, they may be able to spot and resolve the problem. If they can, that may free the connection from CF, and your request will then stop.
I have just added a new section on this idea to the bottom of that blog entry, as "So is there really nothing I can do for the hung requests?"
Hope that helps.