JDBC connection not available

I appreciate everybody giving solutions/suggestions to my post.
Environment: Portlet, IBM WebSphere, Java.
Scenario: In the portal application, whenever I hit a menu item (or portlet), the server goes down, usually within an hour. It doesn't matter whether I stay on the same menu item (or portlet) or move to another one. After the server goes down, we get "backside connection cannot be established" errors.
Connection pool size in server = 50.
In the application: database calls are made inside a for loop with 900 iterations. Checking the log, I found that the first 50 iterations complete within seconds. From the 51st iteration onward there is a connection timeout stating "JDBC connection not available", and every subsequent iteration takes 3 minutes (it keeps waiting for a database connection but never gets one).
Sample code:
int listSize = 900;
for (int i = 0; i < listSize; i++) {
    // database query that sets a status message
}
We suspect this is due to open database connections: once the pool size of 50 is reached, no connection is available for the 51st iteration. But the application uses Spring's JdbcTemplate, which should automatically open and close connections.
Question(s):
What could be the exact cause of this scenario? Does making DB calls inside the for loop cause the performance issue and prevent threads from getting connections from the 51st iteration onward?
If Spring automatically closes connections, why is it not giving a connection to the iterations from the 51st onward?
Are the for loop iterations faster than Spring's connection closure, so that only the first 50 iterations get connections?

When you said "Connection pool size in server = 50", I assume you mean max connections is set to 50. As you suspected, the behavior you're seeing indicates that the free pool is being exhausted by connection requests. Based on your for loop, the first 50 connection requests caused by queries succeed, but since the connections are not being returned to the free pool, the 51st connection request goes to the waiter pool and eventually times out after 180 seconds. You're correct that the Spring JdbcTemplate configuration is supposed to close() the connection when complete, thus returning it to the pool, so you will need to investigate why that is not happening. Turning on WebSphere Application Server tracing with a trace spec of rra=all might give you some insight; see the IBM Knowledge Center topic on enabling trace. Additional trace can be obtained with WAS.j2c=all, but it will be verbose.
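For reference, here is a minimal sketch of the two patterns to compare while investigating (class, table, and column names are illustrative, not from the original application). The first returns each connection to the pool per call; the second leaks one connection per iteration and would exhaust a pool of 50 at exactly the 51st iteration.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class StatusUpdater {
    private final JdbcTemplate jdbcTemplate;
    private final DataSource dataSource;

    public StatusUpdater(JdbcTemplate jdbcTemplate, DataSource dataSource) {
        this.jdbcTemplate = jdbcTemplate;
        this.dataSource = dataSource;
    }

    // Pool-friendly: JdbcTemplate borrows a connection per call and always returns
    // it to the pool, so 900 iterations reuse the same few connections.
    public void updateViaTemplate(List<Long> ids) {
        for (Long id : ids) {
            jdbcTemplate.update("UPDATE item SET status = ? WHERE id = ?", "DONE", id);
        }
    }

    // Pool-exhausting: each iteration borrows a connection and never closes it.
    // With a pool of 50, iteration 51 lands in the waiter pool and times out.
    public void updateWithLeak(List<Long> ids) throws Exception {
        for (Long id : ids) {
            Connection con = dataSource.getConnection(); // never closed, never returned
            PreparedStatement ps = con.prepareStatement("UPDATE item SET status = ? WHERE id = ?");
            ps.setString(1, "DONE");
            ps.setLong(2, id);
            ps.executeUpdate();
            // missing ps.close() and con.close()
        }
    }
}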

Check this article: Default behavior of managed connections in WebSphere Application Server. You are probably using shareable connections and local transactions. Try configuring a resource reference for your application and set connections to unshareable. Put something like this in your web.xml, and use java:comp/env/jdbc/datasourceRef in your Spring configuration.
<resource-ref>
    <res-ref-name>jdbc/datasourceRef</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Application</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>
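If your Spring wiring is Java-based, the lookup could look roughly like this (a sketch only; bean method names are illustrative, and Spring's XML <jee:jndi-lookup> element achieves the same thing):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class DataSourceConfig {

    // Resolves the resource reference declared in web.xml, so the container applies
    // the Unshareable sharing scope to every connection it hands to Spring.
    @Bean
    public DataSource dataSource() throws NamingException {
        return (DataSource) new InitialContext().lookup("java:comp/env/jdbc/datasourceRef");
    }

    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}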

Related

Connection Pool Behavior ODP.NET

I'm trying to work out the behavior of connection pooling with ODP.NET. I get the basics but I don't understand what's happening. I have an application that spins up threads every X seconds and that thread connects and performs a number of searches against the database then disconnects, everything is being disposed and disconnected as you would expect. With the defaults in the connection string and X set to a high number that ensures searches are complete before the next search takes place, I get an exception, not on connect, as I would have expected but on OracleDataAdapter.Fill(). I get the following exception:
'ORA-00604: error occurred at recursive SQL level 1 ORA-01000: maximum open cursors exceeded'
After the 9th connection. Every time. Then, the application will run indefinitely without another error. It's definitely related to connection pooling. If I turn off pooling it works without error. If I turn Min Pool Size up then it takes longer for the error but it eventually happens.
My expectation for connection pooling would be a wait on the call to connect to get a new connection, not Fill failing on an adapter that's already connected (although I get that the connection object is using a pool, so maybe that's not what's happening). Anyway it's odd behavior.
Your error is not related to a maximum number of connections but to a maximum number of cursors.
A cursor is effectively a pointer to a memory address within the database server that lets the server look up the query the cursor is executing and the current state of the cursor.
Your code is connecting and then opening cursors but, for whatever reason, it is not closing the cursors. When you close a connection it will automatically close all the cursors; however, when you return a connection to a connection pool it keeps the connection open so it can be reused (and because it does not close the connection it does not automatically close all the cursors).
It is best practice to make sure that a cursor is closed once you finish reading from it, and that if an error during execution prevents the normal path, the cursor is still closed when you catch the exception.
You need to debug your code and make sure you close all the cursors you open.
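Since the other examples in this document are Java, here is a JDBC sketch of the same pattern, offered only as an analogy to your ODP.NET code: the statement and result set (the server-side cursors) are closed on every path, even though the pooled connection itself stays open.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class CursorSafeQuery {
    private final DataSource pooledDataSource;

    public CursorSafeQuery(DataSource pooledDataSource) {
        this.pooledDataSource = pooledDataSource;
    }

    // try-with-resources closes the ResultSet and PreparedStatement (and with them the
    // server-side cursors) even if the query throws; the connection goes back to the
    // pool still open, which is exactly why unclosed cursors pile up otherwise.
    public int countRows(String name) throws Exception {
        try (Connection con = pooledDataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM some_table WHERE name = ?")) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}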

How to use Pomelo.EntityFrameworkCore.MySql provider for ef core 3 in async mode properly?

We are building an ASP.NET Core 3 application which uses EF Core 3.0 with the Pomelo.EntityFrameworkCore.MySql provider 3.0.
Right now we are trying to convert all database calls from sync to async, like:
//from
dbContext.SaveChanges();
//to
await dbContext.SaveChangesAsync();
Unfortunately, when we do this we experience two issues:
The number of connections to the server grows significantly compared to the same tests with sync calls
The average processing speed of our application drops significantly
What is the recommended way to use EF Core with MySQL asynchronously? Any working example or evidence of using EF Core 3 with MySQL asynchronously would be appreciated.
It's hard to say what the issue here is without seeing more code. Can you provide us with a small sample app that reproduces the issue?
Any DbContext instance uses exactly one database connection for normal operations, independent of whether you call sync or async methods.
The number of connections to the server grows significantly compared to the same tests with sync calls
What kind of tests are we talking about? Are they automated? If so, how many tests are being run? Because of the nature of async calls, if you run 1000 tests in parallel, each test with its own DbContext, you will end up with 1000 parallel connections.
Though with Pomelo you will not additionally end up with 1000 threads, as you would when using Oracle's provider.
Update:
We test an ASP.NET Core (MVC) call which goes to the DB and reads and writes something: 50 threads, using DbContextPool with limit 500. If I use dbContext.SaveChanges(), Add(), and all other context methods synchronously, I end up with around 50 connections to MySQL. Using dbContext.SaveChangesAsync(), AddAsync(), ReadAsync(), etc., I end up seeing a maximum of 250 connections to MySQL, and the average response time of the page gets worse by a factor of 2 to 3.
(I am talking about ASP.NET and requests below; the same is true for test cases run in parallel.)
If you use Async methods all the way, nothing will block, so your 50 threads are free to handle the next 50 requests while the database is still executing the queries for the first 50 requests.
This will happen again and again because ASP.NET might process your requests faster than your database can return its results. So you will end up with a lot of parallel database queries.
This does not happen when executing the Sync methods, because every thread blocks and you end up with a maximum of 50 parallel queries (one per thread).
So this is expected behavior and just a consequence of async method calls.
You can always modify your code or web server configuration to limit the amount of concurrent ASP.NET requests.
50 threads, using DbContextPool with limit 500.
Also be aware that DbContextPool does not limit how many DbContext objects can concurrently exist, but only how many will be kept in the pool. So if you set DbContextPool to 500, you can create more than 500 contexts, but only 500 will be kept alive after using them.
Update:
There is a very interesting low-level talk about lock-free connection pool programming from @roji that addresses this behavior. It takes your position, that there should be an upper limit in the connection pool which results in blocking when exceeded, and makes a great case for this behavior.
According to @bgrainger from MySqlConnector, that is how it is already implemented (the docs did not explicitly state this, but they do now). The MaxPoolSize connection string option has a default value of 100, so if you use connection pooling, don't override this value, and don't use multiple connection pools, you should not have more than 100 connections active at a given time.
From GitHub:
This is a documentation error, if you are interpreting the docs to mean that you can create an unlimited number of connections.
When pooling is true, each connection pool (there is one per unique connection string) only allows MaximumPoolSize connections to be open simultaneously. Each additional call to MySqlConnection.Open will block until a connection is returned to the pool.
When pooling is false, there is no limit to the number of connections that can be opened simultaneously; it's up to the user to manage the concurrency.
Check to see whether you have Pooling=false in your connection string, as mentioned by Bradley Grainger in comments.
After I removed pooling=false from my connection string, my app ran literally 3x faster.
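For illustration, a pooled connection string along these lines (server, database, and credentials are placeholders; option names as listed in MySqlConnector's connection string documentation) caps the pool at 100 connections per unique connection string:

Server=db.example.com;Database=app;User ID=app;Password=secret;Pooling=true;Maximum Pool Size=100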

Manage multi-tenancy ArangoDB connection

I use ArangoDB/Go (using go-driver) and need to implement multi-tenancy, meaning every customer's data lives in a separate DB.
What I'm trying to figure out is how to make this multi-tenancy work. I understand that it's not sustainable to create a new DB connection for each request, which means I have to maintain a pool of connections (not a typical connection pool, though). Of course, I can't create connections without limit; there has to be a cap. However, the more I think about it, the more I realize I need some advice. I'm new to Go, coming from the PHP world, and it's obviously a completely different paradigm from PHP.
Some details
I have an API (written in Go) which talks to ArangoDb using arangodb/go-driver. A standard way of creating a DB connection is
create a connection
conn, err := graphHTTP.NewConnection(...)
create client
c, err := graphDriver.NewClient(...)
create DB connection
graphDB, err := p.cl.Database(...)
This works if there is only one DB and the DB connection is created when the API boots up.
In my case there are many DBs, and, as previously suggested, I need to maintain a pool of DB connections.
Where it gets fuzzy for me is how to maintain this pool, keeping in mind that the pool has to have a limit.
Say my pool is of size 5, and over time it has been filled up with connections. A new request comes in, and it needs a connection to a DB which is not in the pool.
The way I see it, I have only 2 options:
Kill one of the pooled connections, if it's not used
Wait till #1 can be done, or throw an error if waiting time is too long.
The biggest unknown for me, mainly because I've never done anything like this, is how to track whether a connection is being used or not.
What makes things even more complex is that the DB connection has its own pool, handled at the transport level.
Any recommendations on how to approach this task?
I implemented this in a Java proof of concept SaaS application a few months ago.
My approach can be described at a high level as:
Create a concurrent queue to hold the Java driver instances (the Java driver has connection pooling built in)
Use the subdomain to determine which SaaS client is being used (you can use a URL param, but I don't like that approach)
Reference the correct connection from the queue based on the SaaS client, or create a new one if it is not in the queue.
Continue with the request.
This was fairly trivial by naming each DB to match the subdomain, but a lookup from the _system db could also be used.
Edit:
The concurrent queue holds at most one driver object per database, and hence its size will at most match the number of databases. In my testing I did not manage the size of this queue at all.
A good server should be able to hold hundreds of these or even thousands depending on memory, and a load balancing strategy can be used to split clients into different server clusters if scaling large enough. A worker thread could also be used to remove objects based on age but that might interfere with throughput.
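A rough Java sketch of that registry (host, credentials, and class names are illustrative; the driver calls follow the arangodb-java-driver builder API, so check them against your driver version). It uses a concurrent map keyed by tenant rather than a queue, since lookups are by subdomain:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import com.arangodb.ArangoDB;
import com.arangodb.ArangoDatabase;

public class TenantDatabaseRegistry {

    // One driver instance per tenant database; each driver pools its own connections.
    private final Map<String, ArangoDB> drivers = new ConcurrentHashMap<>();

    // The subdomain doubles as the database name, as described above.
    public ArangoDatabase forTenant(String subdomain) {
        ArangoDB driver = drivers.computeIfAbsent(subdomain,
                name -> new ArangoDB.Builder()
                        .host("127.0.0.1", 8529)
                        .user("root")
                        .password("secret")
                        .build());
        return driver.db(subdomain);
    }
}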

Spring JMS WebSphere MQ open input count issue

I am using Spring 3.2.8 with JDK 6 and WebSphere MQ 7.5.0.5. In my application I make JMS calls using jmsTemplate via a ThreadPool. First I faced a condition where the "Current queue depth" count increased as I made JMS calls. I tracked all objects I initiate via the ThreadPool and interrupt or cancel all threads/future objects, so the "Current queue depth" count is now under control.
Now the problem is that the "Open input count" value increases to nearly the number of requests I am sending. When I stop my server, this count goes back to 0.
In this situation I can send requests and get responses up to a count of about 80, and my ThreadPool size is 30. After the request count reaches around 80, I keep getting future object rejection errors and can no longer receive responses; in fact, null responses are received for the remaining calls.
Please suggest.
I am using a queue in my application with a filter on correlation id. I read more about it and found that calling jmsTemplate.receiveSelected(queue, filter) has a serious impact on performance. Once I removed this filter, the thread contention issue was resolved. But filtering is still a problem for me.
For now I will apply the filter in a different way, with some limitations in the application, and not use receiveSelected; instead I am using jmsTemplate.receive.
Update on 14-Sep
All, I found a solution and would like to post it here.
One of my colleagues helped rectify this issue, which was a great help. What we observed after debugging is that if cacheConsumers is true, then consumers are cached by Spring based on the combination of
queue + message selector + session
and even calling the close() method does not do anything (it is basically an empty method), which causes the thread to hang/get stuck.
After setting cacheConsumers to false, I reverted my code back to the original, i.e. jmsTemplate.receiveSelected(destination, messageSelector). Now when I hit 100 requests, the thread count only increases by 5 to 10 during multiple iterations of the test.
So this property needs to be used carefully.
Hope this helps!
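For anyone hitting the same issue, here is a minimal sketch of the relevant Spring wiring (the MQ connection factory and the timeout/cache-size values are placeholders; the key line is setCacheConsumers(false) on Spring's CachingConnectionFactory):

import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class JmsConfig {

    // Wraps the WebSphere MQ connection factory but leaves consumer caching off, so a
    // consumer created for one correlation-id selector is not kept open indefinitely.
    public JmsTemplate jmsTemplate(ConnectionFactory mqConnectionFactory) {
        CachingConnectionFactory caching = new CachingConnectionFactory(mqConnectionFactory);
        caching.setCacheConsumers(false);  // the property discussed above
        caching.setSessionCacheSize(10);   // illustrative value
        JmsTemplate template = new JmsTemplate(caching);
        template.setReceiveTimeout(5000);  // illustrative: wait at most 5 seconds per receive
        return template;
    }
}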
First I faced a condition where the "Current queue depth" count increased as I made JMS calls. I tracked all objects I initiate via the ThreadPool and interrupt or cancel all threads/future objects.
I have no idea what you are talking about but you should NOT be using/monitoring the 'current queue depth' value from your application. Bad, bad design. Only MQ monitoring tools should be using it.
Now the problem is that the "Open input count" value increases to nearly the number of requests I am sending. When I stop my server, this count goes back to 0.
Bad programming. You are 'opening a queue then putting a message' over and over again. Add code to CLOSE the queue, or better yet, REUSE the open queue!

WebLogic questions

I have a couple of questions
1) How can we define in the WebLogic configuration how many concurrent users are allowed at a time for a particular application?
2) How can we tell how many threads are being used in WebLogic at a time?
3) What maximum number of JDBC connections should I set so that users are not blocked because all connections are used up? How do I keep a balance between the number of concurrent users/threads allowed and the maximum number of JDBC connections?
Thanks
It is different in each use case scenario, but usually one WLS instance can cover 50~100 active users, with the instance having 2 CPUs and a 1~1.5 GB heap.
This document will be useful to your question:
"Planning Number Of Instance And Thread In Web Application Server"
1) You can use Work Managers to do this for managing requests; see the descriptor sketch after this answer. However, how you restrict the number of concurrent users will vary from application to application. If it is a web app, use a work manager with a max constraint equal to the number of users you want to restrict it to. Be sure you figure out how to handle overflow: what will you do when you get 100 requests but have a 5-user restriction? Is this synchronous or asynchronous processing?
2) Ideally you would want a 1:1 ratio of threads to connections in the pool. This guarantees that no thread (user request) is waiting for a connection. I would suggest trying this. You can monitor the JDBC connection pools using the WebLogic console by adding fields to the columns under the 'Monitoring' tab for the connection. If you have a high number of waiters and/or a high wait time, then you would want to increase the number of connections in the pool. You could start with a 1:0.75 ratio of threads to connections, do performance/load testing, and adjust based on your findings. It really depends on how well you manage the connections. Do you release the connection immediately after you get the data from the database, or do you proceed with application logic and release the connection at the end of the method? If you hold the connection for a long time, you will likely need closer to a 1:1 ratio.
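A rough weblogic.xml fragment for the max constraint mentioned in point 1 (the names and the count of 5 are illustrative; check the descriptor schema for your WebLogic version, and point your servlets at the work manager with wl-dispatch-policy):

<work-manager>
    <name>five-user-wm</name>
    <max-threads-constraint>
        <name>max-five-threads</name>
        <count>5</count>
    </max-threads-constraint>
</work-manager>
<wl-dispatch-policy>five-user-wm</wl-dispatch-policy>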
1) If you assign a session to each user, then you can control the max number of sessions in your webapp's WebLogic descriptor, for example by adding the following constraint:
<session-descriptor>
    <max-in-memory-sessions>12</max-in-memory-sessions>
</session-descriptor>
It's more effective (if you mean 1 user = 1 session) than limiting the number of requests with work managers.
Another way, when you can't predict the size of sessions or the number of users, is to adjust the memory overload protection parameters and set:
weblogic.management.configuration.WebAppContainerMBean.OverloadProtectionEnabled
More info here:
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/webapp/sessions.html#wp150466
2) Thread capacity is managed by WebLogic through work managers. By default just one exists: 'default', with an unlimited number of threads (!).
3) Usually, adapting the number of JDBC connections to the number of threads is the most effective approach.
The following page could be of great interest:
http://download.oracle.com/docs/cd/E11035_01/wls100/config_wls/overload.html
As far as I know, you have to control these kinds of things in weblogic-ejb-jar.xml or weblogic.xml. If you look through the weblogic-ejb-jar.xml elements you can find what you need.
