Let's say I want to increase the connection pool counts (max, min, and initial) in WebLogic data sources. What do I need to be concerned about? Do I need to check anything on the DBMS side?
Please advise.
You should ask your DBA first, because increasing the connection pool capacity will have an impact on memory and CPU usage on the database hosts.
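If the backing database is Oracle, for example, a quick check you (or your DBA) can run looks roughly like the sketch below; the JDBC URL, credentials, and the standalone-class framing are only illustrative, and the query needs SELECT access to V$PARAMETER:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Rough sketch: check the database-side limits before raising the pool sizes.
// Assumes an Oracle backend; the URL and credentials are placeholders.
public class CheckDbLimits {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521/ORCL", "admin_user", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT name, value FROM v$parameter WHERE name IN ('processes', 'sessions')")) {
            while (rs.next()) {
                // The sum of all pool maximums across every server and data source
                // should stay well below these values.
                System.out.println(rs.getString("name") + " = " + rs.getString("value"));
            }
        }
    }
}
```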
Related
Initial and Minimum Pool Size
The minimum number of connections in the pool. This value also determines the number of connections placed in the pool when the pool is first created or when the application server starts.
Maximum Pool Size
The maximum number of connections in the pool.
According to the above 2 definitions, if the min pool size is 1 and max pool size is 100 then:
When the pool is first created or when the application server starts, only one connection is created.
There will be many requests hitting concurrently during the day, which will definitely require more connections to be created over the course of the day, up to the maximum of 100. But once those connections are made, are they kept in the pool and never removed until the application server shuts down or we remove the entire pool?
Am I right for these two points?
The pool size will stay between the limits you describe. As a general idea:
Concept #1 is correct.
Concept #2 depends on the JDBC connection pool. Typically the connection pool is able to grow and shrink according to the observed usage during the day. Heavy load will make it grow while idleness will make it shrink.
In any case, every JDBC connection pool behaves a little bit differently, so you should check the specific connection pool you want to use.
1 is correct, but assumption 2 is true only if you don't close connections and you don't set a max lifetime for the connections.
Usually you close the connection, and it is then returned/released to the connection pool.
Also, a max pool size of 100 is usually not needed. Although you didn't specify which connection pool you are using, you can read more about pool sizing in the HikariCP pool sizing documentation.
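As an illustration (this is a generic sketch with made-up connection details, not your actual setup), a HikariCP pool shows both points: close() simply hands the connection back to the pool, and max lifetime is a separate setting:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb"); // placeholder URL
        config.setUsername("app");
        config.setPassword("secret");
        config.setMinimumIdle(1);          // pool can shrink back to 1 idle connection
        config.setMaximumPoolSize(10);     // far less than 100 is usually enough
        config.setMaxLifetime(1_800_000);  // connections retired after 30 minutes

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // try-with-resources: close() returns the connection to the pool,
            // it does not tear down the physical connection.
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement("SELECT 1");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
            }
        }
    }
}
```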
In Sequelize.js you should configure the max connection pool size (default 5). I don't know how to deal with this configuration as I work on an autoscaling platform in AWS.
The Aurora DB cluster on r3.2xlarge allows a maximum of 2000 connections per read replica (you can get that by running SELECT @@MAX_CONNECTIONS;).
The problem is I don't know what the right configuration should be for each server hosted on our EC2 instances. What should the right max connection pool size be, given that I don't know how many servers will be launched by the autoscaling group? Normally, the DB MAX_CONNECTIONS value should be divided by the number of connection pools (one per server), but I don't know how many servers will be running in the end.
Our concurrent user count is estimated to be between 50,000 and 75,000 at our release date.
Did someone get previous experience with this kind of situation?
It has been 6 weeks since you asked, but since I got involved in this recently I thought I would share my experience.
The answer varies based on how the application works and performs, plus the characteristics of the application under load on that instance type.
1) You want your pool size to be greater than the expected number of simultaneous queries running on your host.
2) You never want a situation where (number of clients * pool size) approaches your max connection limit.
Remember, though, that the number of simultaneous queries is generally less than the number of simultaneous web requests, since most code uses a connection to do a query and then releases it.
So you would need to model your application to understand the actual queries (and how many of them) that would happen for your 75K users. This is likely a lot LESS than 75K db queries per second.
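As a rough illustration of that modeling step, here is a back-of-the-envelope calculation; every number in it is an assumption you would replace with your own measurements:

```java
// Back-of-the-envelope sizing sketch; every number below is an assumption.
public class PoolSizing {
    public static void main(String[] args) {
        double concurrentUsers   = 75_000;   // upper estimate from the question
        double secondsBetweenReq = 10.0;     // assume each user acts every ~10 s
        double queriesPerRequest = 2.0;      // assume 2 DB queries per web request
        double avgQuerySeconds   = 0.005;    // assume 5 ms per query

        double requestsPerSecond = concurrentUsers / secondsBetweenReq;   // ~7,500 req/s
        double queriesPerSecond  = requestsPerSecond * queriesPerRequest; // ~15,000 q/s

        // Little's law: connections busy at once = arrival rate * time per query
        double busyConnections = queriesPerSecond * avgQuerySeconds;      // ~75 cluster-wide

        int appServers = 10; // assumed autoscaling group size at peak
        System.out.printf("~%.0f busy connections cluster-wide, ~%.0f per app server%n",
                busyConnections, busyConnections / appServers);
    }
}
```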
You can then construct a script - we used JMeter - and run a test to simulate performance. One of the things we did during our test was to increase the pool size and see the difference in performance. We actually used a large number (100) after doing a baseline and found the number made a difference. We then dropped it down until it started making a difference. In our case that was 15, so I set it to 20.
This was against a t2.micro as our app server. If I change the servers to something bigger, this value will likely go up.
Please note that you pay a cost on application startup when you set a higher number... and you also incur some overhead on your server to keep those idle connections, so making it larger than you need isn't good.
Hope this helps.
When creating an AmazonDynamoDBClient in the Java API we can specify the maximum connection pool size using setMaxConnections on ClientConfiguration. Is there a hard / recommended limit on this? For example, although the default limit is 50 connections, a Linux client should be able to sustain 5,000 open connections; will Amazon allow this?
If there is a maximum limit, does it only apply to a single client instance? What if there are several machines using DynamoDB through the same account; will they share a connection limit?
Thanks!
Since the connections to Amazon DynamoDB are HTTP(S) based, the concept of open connections is limited by your maximum number of simultaneously open TCP connections. I highly doubt there's a limit on Amazon's end at all, as it is load balanced close to infinity.
Naturally, the exception is your read and write capacity limits. Note that they want you to contact them if you will exceed a certain number of capacity units, which depends on your region.
You've probably already read them, but the limits of DynamoDB are documented here:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html
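For reference, setting the pool size with the Java SDK client builder looks roughly like this; the region and the value of 200 are only illustrative:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoClientFactory {
    public static AmazonDynamoDB build() {
        ClientConfiguration clientConfig = new ClientConfiguration();
        // Default is 50; raise it only if you actually see connection-pool
        // contention, since each connection is just an HTTPS connection.
        clientConfig.setMaxConnections(200);

        return AmazonDynamoDBClientBuilder.standard()
                .withClientConfiguration(clientConfig)
                .withRegion(Regions.US_EAST_1) // placeholder region
                .build();
    }
}
```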
I am having a hard time understanding what is happening in our WebSphere 7 on AIX environment. We have a JDBC Datasource that has a connection pool with a Min/Max of 1/10.
We are running a Performance Test with HP LoadRunner and when our test finishes we gather the data for the JDBC connection pool.
The Max Pool size shows as 10, the Avg pool size shows as 9, and the Percent Used is 12%. With just this info, would you make any changes or keep things the same? The pool size is growing from 1 to 9 during our test, but it says it's only 12% used overall. The final question: every time our test is in the last 15 minutes before stopping, we see an Avg Wait time of 1.8 seconds and an avg thread wait of 0.5, but the percent used is still 10%. FYI, the last 15 minutes of our test does not add additional users or load; it's steady.
Can anyone provide any clarity or recommendations on whether we should make any changes? Thanks!
First, I'm not an expert in this, so take this for whatever it's worth.
You're looking at WebSphere's PMI data, correct? PercentUsed is "Average percent of the pool that is in use." The pool size includes connections that were created, but not all of those will be in-use at any point in time. See FreePoolSize, "The number of free connections in the pool".
Based on just that, I'd say your pool is large enough for the load you gave it.
Your decreasing performance at the end of the test, though, does seem to indicate a performance bottleneck of some sort. Have you isolated it enough to know for certain that it's in database access? If so, can you tell if your database server, for instance, may be limiting things?
I have a WebLogic 9.2 Cluster which runs 2 managed server nodes. I have created a JDBC Connection Pool which I have targeted at All servers in the cluster. I believe this will result in the physical creation of connection pools on each of the 2 managed servers (although please correct me if I'm wrong)?
Working on this assumption I have also assumed that the configuration attributes of the Connection Pool e.g. Min/ Max Size etc are per managed server rather than per cluster. However I am unsure of this and can't find anything which confirms or denies it in the WebLogic documentation.
Just to be clear here's an example:
I create connection-pool-a with the following settings and target it at All servers in the cluster:
Initial Capacity: 30
Maximum Capacity: 60
Are these settings applied:
Per managed server - i.e. each node has an initial capacity of 30 and max of 60 connections.
Across the cluster - i.e. the initial number of connections across all managed servers is 30 rising to a maximum of 60.
In some other way I haven't considered?
I ask as this will obviously have a significant effect on the total number of connections being made to the Database and I'm trying to figure out how best to size the connection pools given the constraints of our Database.
Cheers,
Edd
1. Per managed server - i.e. each node has an initial capacity of 30 and max of 60 connections.
It is per server in the Cluster.
I cannot find the documentation right now, but the reason I know this is that when the DBA used to monitor actual DB sessions, as each managed server started, our number of open connections would increment by the value of "Initial Capacity" for that data source.
Say Initial Capacity = 10 for the Cluster, which has Server A and B.
When both are starting up, we would first see 10 open (but inactive) sessions on the DB, then 20.
At the database, using Oracle for example, there is a limit set in the DB user's profile on the total number of open sessions that the WebLogic user can hold.
WebLogic's ability to target resources to a cluster is intended to help keep settings consistent across a large number of application servers. The resource settings are per server, so whenever you bump up the connections for a data source that is used by a cluster, you would want to multiply that by the maximum number of WebLogic servers running at any time (this isn't always the same as the number of members in the cluster).
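Using the numbers from the question as a quick sanity check (a sketch assuming both managed servers are up):

```java
public class ClusterPoolMath {
    public static void main(String[] args) {
        int initialCapacityPerServer = 30; // from the question
        int maxCapacityPerServer     = 60; // from the question
        int runningManagedServers    = 2;  // managed servers currently up in the cluster

        System.out.println("Initial DB sessions at startup: "
                + initialCapacityPerServer * runningManagedServers); // 60
        System.out.println("Worst-case DB sessions under load: "
                + maxCapacityPerServer * runningManagedServers);     // 120
    }
}
```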