Connection Pool Size concept in Oracle

Initial and Minimum Pool Size
The minimum number of connections in the pool. This value also determines the number of connections placed in the pool when the pool is first created or when the application server starts.
Maximum Pool Size
The maximum number of connections in the pool.
According to the above two definitions, if the min pool size is 1 and the max pool size is 100, then:
When the pool is first created, or when the application server starts, only one connection is created.
Many requests will hit the server concurrently during the day and will require more connections, which will be created over the course of the day, up to a maximum of 100. But once these connections are created, they are not removed from the pool until the application server shuts down or we remove the entire pool?
Am I right for these two points?

The pool size will stay between the limits you describe. As a general idea:
Concept #1 is correct.
Concept #2 depends on the JDBC connection pool. Typically the connection pool is able to grow and shrink according to the observed usage during the day. Heavy load will make it grow while idleness will make it shrink.
In any case, every JDBC connection pool behaves a little bit differently, so you should check the specific connection pool you want to use.

1 is correct, but assumption 2 holds only if you never close connections and don't set a max lifetime for them.
Usually you close the connection, and it is then returned/released to the connection pool.
Also, a max pool size of 100 is probably not needed. Although you didn't specify which connection pool you are using, you can read more about sizing in the HikariCP "About Pool Sizing" documentation.
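To make the borrow/release behavior concrete, here is a toy pool sketch in Java (illustrative only - real pools such as HikariCP or an app server's built-in pool add validation, timeouts, max lifetime, and shrinking of idle connections):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy pool: illustrates min/max sizing and borrow/release only.
class ToyPool {
    private final int max;
    private final Deque<String> idle = new ArrayDeque<>();
    private int total; // physical connections created so far

    ToyPool(int min, int max) {
        this.max = max;
        // Concept #1: only `min` connections exist at startup.
        for (int i = 0; i < min; i++) {
            idle.push("conn-" + total++);
        }
    }

    String borrow() {
        if (!idle.isEmpty()) return idle.pop();    // reuse an idle connection
        if (total < max) return "conn-" + total++; // grow on demand, up to max
        throw new IllegalStateException("pool exhausted");
    }

    // Concept #2: "closing" returns the connection to the pool;
    // the physical connection is not destroyed.
    void release(String conn) {
        idle.push(conn);
    }

    int totalCreated() { return total; }
}

public class ToyPoolDemo {
    public static void main(String[] args) {
        ToyPool pool = new ToyPool(1, 100);
        System.out.println(pool.totalCreated()); // 1: only min created at startup
        String c1 = pool.borrow();
        String c2 = pool.borrow();               // second borrow forces growth
        System.out.println(pool.totalCreated()); // 2
        pool.release(c1);
        pool.release(c2);
        pool.borrow();                           // reuses a released connection
        System.out.println(pool.totalCreated()); // still 2
    }
}
```

The point of the sketch: max pool size caps growth, and application-level close() is a release back to the pool, not a teardown of the physical connection.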

Related

WebLogic connection pool count increase

Let's say I want to increase the connection pool counts (max, min, and initial) in WebLogic data sources. What do I need to be concerned about? Do I need to check anything on the DBMS side?
Please advise.
You should ask your DBA first, because increasing the connection pool capacity will have an impact on memory and CPU usage on the database hosts.

Max connection pool size and autoscaling group

In Sequelize.js you should configure the max connection pool size (default 5). I don't know how to deal with this configuration as I work on an autoscaling platform in AWS.
The Aurora DB cluster on r3.2xlarge allows 2000 max connections per read replica (you can get that by running SELECT @@MAX_CONNECTIONS;).
The problem is I don't know what the right configuration should be for each server hosted on our EC2 instances. What should the max connection pool size be, given that I don't know how many servers will be launched by the autoscaling group? Normally, the DB MAX_CONNECTIONS value should be divided by the number of connection pools (one per server), but I don't know how many servers will be running in the end.
Our concurrent users count is estimated to be between 50000 and 75000 concurrent users at our release date.
Did someone get previous experience with this kind of situation?
It has been 6 weeks since you asked, but since I got involved in this recently I thought I would share my experience.
The answer varies based on how the application works and performs, plus the characteristics of the application under load for the instance type.
1) You want your pool size to be greater than the expected number of simultaneous queries running on your host.
2) You never want a situation where (number of clients * pool size) approaches your max connection limit.
Remember, though, that the number of simultaneous queries is generally less than the number of simultaneous web requests, since most code uses a connection to do a query and then releases it.
So you would need to model your application to understand the actual queries (and their volume) that would happen for your 75K users. This is likely a lot LESS than 75K DB queries per second.
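As a sketch of the sizing arithmetic implied above (the instance cap and reserved-connection figures below are hypothetical, not from the question):

```java
// Sanity-check pool sizing for an autoscaled fleet (illustrative numbers only).
public class PoolSizing {
    // Largest per-instance pool that keeps the whole fleet under the DB limit,
    // with some headroom reserved for admin sessions and other clients.
    static int maxPoolPerInstance(int dbMaxConnections, int maxInstances, int reserved) {
        return (dbMaxConnections - reserved) / maxInstances;
    }

    public static void main(String[] args) {
        // Aurora read replica limit from the question: 2000.
        // Assume autoscaling is capped at 40 instances (hypothetical),
        // and we reserve 100 connections for everything else.
        int perInstance = maxPoolPerInstance(2000, 40, 100);
        System.out.println(perInstance); // 47 -> round down to e.g. 40 per instance
    }
}
```

The key input is the autoscaling group's maximum instance count; without a cap on that, the division has no safe answer.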
You can then construct a script - we used JMeter - and run a test to simulate performance. One of the things we did during our test was to increase the pool size and observe the difference in performance. We actually used a large number (100) after doing a baseline and found the number made a difference. We then dropped it down until it started making a difference. In our case that was 15, so I set it to 20.
This was against t2.micro as our app server. If I change the servers to something bigger, this value likely will go up.
Please note that you pay a cost at application startup when you set a higher number... and you also incur some overhead on your server to keep those idle connections, so making it larger than you need isn't good.
Hope this helps.

Websphere ejb pool

The documentation here http://www.ibm.com/support/knowledgecenter/SS7JFU_7.0.0/com.ibm.websphere.express.doc/info/exp/ae/rejb_ecnt.html mentions that a minimum of 50 and a maximum of 500 instances are created by default per ejb.
Let's say at a given point in time 500 clients are trying to use the service. Does it mean that from then on, at any time, there will be 500 instances? Or will the server destroy the instances after a period of time, when there are no incoming clients?
After reading further I came across something called a hard limit (H), which tells the container not to create more than the specified maximum number of instances.
So what I understood in the case of 50/500 is:
At any point there will be at most the maximum number of instances (500).
If more clients try to connect, the server will create a new instance (501, 502, 503, ...) per client and destroy it after serving that client.
Can anyone tell me if I am right?
Pooling EJB instances allows you to save system resources. Let's say that you need 100 instances of the EJB at a given point. You initialize the beans, process the logic, and then destroy them. If you need an additional 100 instances after that, you need to do this all over again. This puts a strain on system resources.
When you pool EJB instances, they move in and out of a pool that is maintained by the EJB container. Active instances process the incoming requests, while passive ones stay in the pool. To control the size of the pool, there needs to be an upper and lower bound on the number of instances within the pool.
Consider the default setting: The minimum is 50 instances and the maximum is 500 instances. When the server starts up, there are no instances of the EJB on the server. As your application gets concurrent requests/hits, the pool size increases. Let's assume that there are 30 concurrent hits. The pool size stays at 30. WebSphere will not create additional instances to maintain the pool size at the minimum value. After that, assume that the concurrent hits increase to 75 and then drop below 50. At this point, WebSphere will destroy the additional 25 EJB instances and maintain the pool size at 50. Now, if you define the lower limit as 'H50' (hard limit), WebSphere will expend resources to create 50 EJB instances as the server/application starts up. Therefore, the pool size will never drop below 50.
Now let's look at the upper limit, which is 500. As the number of concurrent hits increases, the pool size grows and exceeds 500. Beyond this limit, WebSphere tries to lower the pool size by destroying the EJB instances as soon as they become inactive (i.e. return to the pool). However, the number of EJB instances can continue to grow beyond this limit. If you have 600 concurrent requests, there will be 600 EJB instances. If it falls to 540, the additional 60 beans are destroyed. The hard limit ('H500') ensures that this overflow never happens. Up to 500 concurrent requests can be handled at the pool's maximum size. Additional requests must wait until an EJB instance becomes inactive (i.e. returns to the pool).
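The soft-versus-hard upper limit described above boils down to the following (a sketch; WebSphere's actual container logic is more involved):

```java
public class EjbPoolLimits {
    // Soft limit (e.g. "500"): instances can exceed the max under load;
    // the overflow is destroyed as instances return to the pool.
    static int instancesWithSoftLimit(int concurrentRequests, int max) {
        return concurrentRequests; // one instance per concurrent request
    }

    // Hard limit (e.g. "H500"): never more than max instances exist;
    // requests beyond the limit wait for an instance to be returned.
    static int instancesWithHardLimit(int concurrentRequests, int max) {
        return Math.min(concurrentRequests, max);
    }

    public static void main(String[] args) {
        System.out.println(instancesWithSoftLimit(600, 500)); // 600
        System.out.println(instancesWithHardLimit(600, 500)); // 500
    }
}
```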

WebSphere JDBC Connection Pool advice

I am having a hard time understanding what is happening in our WebSphere 7 on AIX environment. We have a JDBC Datasource that has a connection pool with a Min/Max of 1/10.
We are running a Performance Test with HP LoadRunner and when our test finishes we gather the data for the JDBC connection pool.
The max pool size shows as 10, the average pool size shows as 9, and the Percent Used is 12%. With just this info, would you make any changes or keep things the same? The pool size grows from 1 to 9 during our test, but it says it's only 12% used overall. The final question: every time our test is in the last 15 minutes before stopping, we see an average wait time of 1.8 seconds and an average thread wait of 0.5, but the percent used is still 10%. FYI, the last 15 minutes of our test does not add additional users or load; it's steady.
Can anyone provide any clarity or recommendations on if we should make any changes? thx!
First, I'm not an expert in this, so take this for whatever it's worth.
You're looking at WebSphere's PMI data, correct? PercentUsed is "Average percent of the pool that is in use." The pool size includes connections that were created, but not all of those will be in-use at any point in time. See FreePoolSize, "The number of free connections in the pool".
Based on just that, I'd say your pool is large enough for the load you gave it.
Your decreasing performance at the end of the test, though, does seem to indicate a performance bottleneck of some sort. Have you isolated it enough to know for certain that it's in database access? If so, can you tell if your database server, for instance, may be limiting things?
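As a rough reading of those PMI numbers (assuming PercentUsed averages over the current pool size, as described above - a back-of-the-envelope sketch, not an official PMI formula):

```java
public class PmiReading {
    // Rough interpretation: average in-use connections is approximately
    // average pool size multiplied by PercentUsed.
    static double avgInUse(double avgPoolSize, double percentUsed) {
        return avgPoolSize * percentUsed / 100.0;
    }

    public static void main(String[] args) {
        // Numbers from the question: avg pool size 9, PercentUsed 12%.
        System.out.println(avgInUse(9, 12)); // ~1.08 connections busy on average
    }
}
```

On that reading, only about one connection is busy at any moment, which supports the answer's view that the pool is large enough for the load.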

Are Clustered WebLogic JDBC DataSource settings per node or per cluster?

I have a WebLogic 9.2 Cluster which runs 2 managed server nodes. I have created a JDBC Connection Pool which I have targeted at All servers in the cluster. I believe this will result in the physical creation of connection pools on each of the 2 managed servers (although please correct me if I'm wrong)?
Working on this assumption I have also assumed that the configuration attributes of the Connection Pool e.g. Min/ Max Size etc are per managed server rather than per cluster. However I am unsure of this and can't find anything which confirms or denies it in the WebLogic documentation.
Just to be clear here's an example:
I create connection-pool-a with the following settings and target it at All servers in the cluster:
Initial Capacity: 30
Maximum Capacity: 60
Are these settings applied:
Per managed server - i.e. each node has an initial capacity of 30 and max of 60 connections.
Across the cluster - i.e. the initial number of connections across all managed servers is 30 rising to a maximum of 60.
In some other way I haven't considered?
I ask as this will obviously have a significant effect on the total number of connections being made to the Database and I'm trying to figure out how best to size the connection pools given the constraints of our Database.
Cheers,
Edd
1. Per managed server - i.e. each node has an initial capacity of 30 and max of 60 connections.
It is per server in the Cluster.
I cannot find the documentation right now, but the reason I know this is that when the DBA used to monitor actual DB sessions, as each managed server was started, our number of open connections would increment by the value of "Initial Capacity" for that data source.
Say Initial Capacity = 10 for the Cluster, which has Server A and B.
When both are starting up, we would first see 10 open (but inactive) sessions on the DB, then 20.
At the database, using Oracle for example, there is a limiting value set for the DB user's profile which limits the total number of open sessions which the Weblogic user can hold.
WebLogic's ability to target resources to a cluster is intended to help keep settings consistent across a large number of application servers. The resource settings are per server, so whenever you bump up the connections for a data source used by a cluster, you should multiply it by the maximum number of WebLogic servers running at any time (this isn't always the same as the number of members in the cluster).
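With the example numbers from the question, the per-server settings multiply out like this (a sketch):

```java
public class ClusterPoolMath {
    // Pool settings are per managed server, so the database can see
    // up to (max capacity x running servers) connections from the cluster.
    static int peakDbConnections(int maxCapacityPerServer, int runningServers) {
        return maxCapacityPerServer * runningServers;
    }

    public static void main(String[] args) {
        // Example from the question: Maximum Capacity 60, 2 managed servers.
        System.out.println(peakDbConnections(60, 2)); // 120
    }
}
```

This peak figure is what needs to fit under the database-side session limit mentioned above.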
