I have a WebLogic 9.2 cluster which runs 2 managed server nodes. I have created a JDBC connection pool which I have targeted at All servers in the cluster. I believe this will result in the physical creation of connection pools on each of the 2 managed servers (although please correct me if I'm wrong).
Working on this assumption, I have also assumed that the configuration attributes of the connection pool (e.g. min/max size) are per managed server rather than per cluster. However, I am unsure of this and can't find anything in the WebLogic documentation which confirms or denies it.
Just to be clear here's an example:
I create connection-pool-a with the following settings and target it at All servers in the cluster:
Initial Capacity: 30
Maximum Capacity: 60
Are these settings applied:
1. Per managed server - i.e. each node has an initial capacity of 30 and max of 60 connections.
2. Across the cluster - i.e. the initial number of connections across all managed servers is 30, rising to a maximum of 60.
3. In some other way I haven't considered?
I ask because this will obviously have a significant effect on the total number of connections being made to the database, and I'm trying to figure out how best to size the connection pools given the constraints of our database.
Cheers,
Edd
"1. Per managed server - i.e. each node has an initial capacity of 30 and max of 60 connections."
It is per server in the cluster.
I cannot find the documentation right now, but the reason I know this is that when the DBA monitored actual DB sessions, the number of open connections incremented by the value of "Initial Capacity" for that data source as each managed server started.
Say Initial Capacity = 10 for the cluster, which has Servers A and B.
As both start up, we would first see 10 open (but inactive) sessions on the DB, then 20.
At the database, using Oracle as an example, the DB user's profile sets a limit (SESSIONS_PER_USER) on the total number of open sessions the WebLogic user can hold.
WebLogic's ability to target resources to a cluster is intended to help keep settings consistent across a large number of application servers. The resource settings are per server, so whenever you bump up the connections for a data source that is used by a cluster, you would want to multiply them by the maximum number of WebLogic servers running at any time (this isn't always the same as the number of members in the cluster).
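As a rough sanity check, here is a minimal sketch of that multiplication (the pool values are the ones from the question; the DB session limit is a made-up number for illustration):

// Back-of-the-envelope check of the cluster-wide connection footprint.
// All values are illustrative; substitute your own data source settings.
const initialCapacityPerServer = 30;  // "Initial Capacity" on the data source
const maxCapacityPerServer = 60;      // "Maximum Capacity" on the data source
const managedServersRunning = 2;      // managed servers actually up at any time

const sessionsAtStartup = initialCapacityPerServer * managedServersRunning; // 60
const sessionsAtPeak = maxCapacityPerServer * managedServersRunning;        // 120

const dbSessionLimitForUser = 150;    // hypothetical SESSIONS_PER_USER limit on the DB side
if (sessionsAtPeak > dbSessionLimitForUser) {
  console.warn('Pool settings could exceed the DB session limit for this user');
}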
Initial and Minimum Pool Size
The minimum number of connections in the pool. This value also determines the number of connections placed in the pool when the pool is first created or when the application server starts.
Maximum Pool Size
The maximum number of connections in the pool.
According to the above 2 definitions, if the min pool size is 1 and max pool size is 100 then:
When the pool is first created, or when the application server starts, only one connection is created.
There will be many concurrent requests during the day, which will require more connections to be created, potentially up to the maximum of 100. But once these connections are made, are they kept in the pool until the application server shuts down or we remove the entire pool?
Am I right on these two points?
The pool size will stay between the limits you describe. As a general idea:
Concept #1 is correct.
Concept #2 depends on the JDBC connection pool. Typically the connection pool is able to grow and shrink according to the observed usage during the day. Heavy load will make it grow while idleness will make it shrink.
In any case, every JDBC connection pool behaves a little bit differently, so you should check the specific connection pool you want to use.
1 is correct, but assumption 2 is true only if you don't close connections and you don't set a max lifetime for them.
Usually you close the connection and it is then returned/released to the connection pool.
Also, a max pool size of 100 is usually not needed. Although you didn't specify which connection pool you are using, you can read more about sizing in the HikariCP pool sizing documentation.
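To illustrate points 1 and 2 with a concrete pool, here is a sketch using Sequelize's built-in pool; this is just one example of a pool with min/max and idle-eviction settings (not necessarily the pool you are using), and the connection details are placeholders:

import { Sequelize } from 'sequelize';

// min/max bound the pool size; idle + evict let the pool shrink back toward
// min when connections sit unused, so it does not stay at 100 until shutdown.
const sequelize = new Sequelize('mydb', 'user', 'secret', {
  host: 'localhost',   // placeholder connection details
  dialect: 'postgres',
  pool: {
    min: 1,
    max: 100,
    acquire: 30000,    // ms to wait for a free connection before erroring
    idle: 10000,       // ms a connection may sit idle before it can be evicted
    evict: 1000,       // ms between eviction runs
  },
});

async function demo(): Promise<void> {
  // Each query borrows a connection and releases it back to the pool when done;
  // releasing does not close the physical connection.
  await sequelize.query('SELECT 1');
  await sequelize.close(); // closes the pool and all physical connections
}

demo().catch(console.error);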
We have Oracle 11g Enterprise 64-bit running as a cluster of 4 nodes.
There is a user with a SESSIONS_PER_USER limit of 96. We thought that the total limit of sessions for this user would be 4 nodes * 96 = 384 sessions, but in reality it is no more than about 180 sessions. After approximately 180 sessions have been opened we get errors:
ORA-12850: Could not allocate slaves on all specified instances: 4 needed, 3 allocated
ORA-12801: error signaled in parallel query server P004, instance 3599
ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit
The question is: why is the total limit only about 180 sessions? Why is it not 4 * 96?
We would greatly appreciate your answer.
Although I can't find it documented, a quick test implies you are correct that the maximum total number of sessions is equal to SESSIONS_PER_USER * Number of Nodes. However, that will only be true if the sessions are balanced evenly across the nodes. Each instance still enforces that limit.
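A quick way to see how those sessions are actually spread across the instances is to count them per inst_id. Here is a sketch using node-oracledb, though the same query works from any SQL client; the credentials and connect string are placeholders:

import oracledb from 'oracledb';

// Counts current sessions per RAC instance for one DB user.
async function sessionsPerInstance(username: string): Promise<void> {
  const conn = await oracledb.getConnection({
    user: 'monitoring_user',             // placeholder credentials
    password: 'secret',
    connectString: 'db-scan/your_service',
  });
  try {
    const result = await conn.execute(
      `SELECT inst_id, COUNT(*) AS sessions
         FROM gv$session
        WHERE username = :username
        GROUP BY inst_id
        ORDER BY inst_id`,
      { username },
    );
    // If one instance holds most of the sessions, its SESSIONS_PER_USER limit
    // is reached long before the theoretical nodes * 96 total.
    console.table(result.rows);
  } finally {
    await conn.close();
  }
}

sessionsPerInstance('YOUR_APP_USER').catch(console.error);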
Check the service you are connecting to, and whether that service is available on all nodes. Run these commands to look at the preferred nodes and the nodes it is actually running on. It's possible that there was a failure, the service migrated to one node, and never migrated back.
# Preferred nodes:
srvctl config service -d $your_db_name
# Running nodes:
srvctl status service -d $your_db_name
Or possibly the connections are hard-wired to a specific instance. This is usually a mistake, but sometimes it is necessary for things like running the PL/SQL debuggers. Run this query to see where your parallel sessions are spawning:
select inst_id, gv$session.* from gv$session;
Also check the parameter PARALLEL_FORCE_LOCAL and make sure it is not set to true:
select value from gv$parameter where name = 'parallel_force_local';
Or perhaps there's an issue with counting the number of sessions. The number of sessions is frequently more than the requested degree of parallelism. For example, if the query sorts or hashes, Oracle will double the number of parallel sessions: one set to produce the rows and one set to consume the rows. Are you sure of the number of parallel sessions being requested?
Also, in my tests, when I ran a parallel query without enough SESSIONS_PER_USER, it simply downgraded my query. I'm not sure why your database is throwing an error. (Perhaps you've got parallel queuing and a timeout set?)
Lastly, it looks like you are using an extremely high degree of parallelism. Are you sure that you need hundreds of parallel processes?
Chances are there are a lot of other potential issues I haven't thought of. Parallelism and RAC are complicated.
In Sequelize.js you should configure the max connection pool size (default 5). I don't know how to deal with this configuration as I work on an autoscaling platform in AWS.
The Aurora DB cluster on r3.2xlarge allows 2000 max connections per read replica (you can get that by running SELECT @@MAX_CONNECTIONS;).
The problem is that I don't know what the right configuration is for each server hosted on our EC2 instances. What should the max connection pool size be, given that I don't know how many servers will be launched by the autoscaling group? Normally the DB MAX_CONNECTIONS value would be divided by the number of connection pools (one per server), but I don't know how many servers will be running in the end.
Our concurrent user count is estimated to be between 50,000 and 75,000 at our release date.
Does anyone have previous experience with this kind of situation?
It has been 6 weeks since you asked, but since I got involved in this recently I thought I would share my experience.
The answer varies based on how the application works and performs, plus the characteristics of the application under load for the given instance type.
1) You want your pool size to be greater than the number of simultaneous queries you expect to be running on your host.
2) You never want a situation where (number of clients * pool size) approaches your max connection limit.
Remember, though, that the number of simultaneous queries is generally lower than the number of simultaneous web requests, since most code uses a connection to run a query and then releases it.
So you would need to model your application to understand the actual queries (and how many of them) would happen for your 75K users. This is likely a lot LESS than 75K DB queries per second.
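As a hedged sketch of how that budget can be turned into a per-instance Sequelize setting (the instance count, headroom and endpoint below are assumptions for illustration, not recommendations):

import { Sequelize } from 'sequelize';

// Illustrative numbers only:
const dbMaxConnections = 2000;   // SELECT @@MAX_CONNECTIONS on the Aurora replica
const maxAppInstances = 40;      // worst-case number of EC2 instances the ASG may launch
const headroom = 0.8;            // keep 20% of connections free for admin tools, failover, etc.

// Size each instance's pool so that (instances * pool max) stays under the DB limit.
const poolMaxPerInstance = Math.floor((dbMaxConnections * headroom) / maxAppInstances); // 40

const sequelize = new Sequelize('mydb', 'user', 'secret', {
  host: 'my-aurora-cluster.cluster-ro-example.rds.amazonaws.com', // placeholder endpoint
  dialect: 'mysql',
  pool: {
    max: poolMaxPerInstance,
    min: 0,
    acquire: 30000,  // ms to wait for a connection before failing the request
    idle: 10000,     // ms before an idle connection may be released
  },
});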
You can then construct a script (we used JMeter) and run a test to simulate performance. One of the things we did during our test was to increase the pool size and see the difference in performance. We actually used a large number (100) after doing a baseline and found that the number made a difference. We then dropped it down until it started making a difference; in our case that was 15, so I set it to 20.
This was against t2.micro as our app server. If I change the servers to something bigger, this value likely will go up.
Please note that you pay a cost at application startup when you set a higher number... and you also incur some overhead on your server to keep those idle connections, so making the pool larger than you need isn't good.
Hope this helps.
The documentation here http://www.ibm.com/support/knowledgecenter/SS7JFU_7.0.0/com.ibm.websphere.express.doc/info/exp/ae/rejb_ecnt.html mentions that a minimum of 50 and a maximum of 500 instances are created by default per EJB.
Let's say that at a given point in time 500 clients are trying to use the service. Does that mean that from then on there will always be 500 instances? Or will the server destroy the instances after a period of time, when there are no incoming clients?
After reading further I came across something called a hard limit (H), which tells the container not to create more than the specified maximum number of instances.
So what I understood in the case of 50,500 is:
At any point there will be at most the maximum number of instances (500).
If more clients try to connect, the server will create a new instance (501, 502, 503, ...) per client and destroy it after serving the client.
Can anyone tell me if I am right?
Pooling EJB instances allows you to save system resources. Let's say that you need 100 instances of the EJB at a given point. You initialize the beans, process the logic and then destroy them. If you need an additional 100 instances after that, you need to do this all over again. This puts a strain on system resources.
When you pool EJB instances, they move in and out of a pool that is maintained by the EJB container. Active instances process the incoming requests, while passive ones stay in the pool. To control the size of the pool, there needs to be an upper and lower bound on the number of instances within the pool.
Consider the default setting: The minimum is 50 instances and the maximum is 500 instances. When the server starts up, there are no instances of the EJB on the server. As your application gets concurrent requests/hits, the pool size increases. Let's assume that there are 30 concurrent hits. The pool size stays at 30. WebSphere will not create additional instances to maintain the pool size at the minimum value. After that, assume that the concurrent hits increase to 75 and then drop below 50. At this point, WebSphere will destroy the additional 25 EJB instances and maintain the pool size at 50. Now, if you define the lower limit as 'H50' (hard limit), WebSphere will expend resources to create 50 EJB instances as the server/application starts up. Therefore, the pool size will never drop below 50.
Now let's look at the upper limit, which is 500. As the number of concurrent hits increases, the pool size grows and can exceed 500. Beyond this limit, WebSphere tries to lower the pool size by destroying EJB instances as soon as they become inactive (i.e. return to the pool). However, the number of EJB instances can continue to grow beyond this limit. If you have 600 concurrent requests, there will be 600 EJB instances. If it falls to 540, the additional 60 beans are destroyed. The hard limit ('H500') ensures that this overflow never happens. Up to 500 concurrent requests can be handled at the pool's maximum size; additional requests must wait until an EJB instance becomes inactive (i.e. returns to the pool).
I want to know what configuration setup would be ideal for my case. I have 4 servers (nodes) each with 128 GB RAM. I'll have all 4 nodes under one cluster.
The total number of indexes would be 10, each receiving 1,500,000 documents per day.
Since I'll have 4 servers (nodes), I'll set master: true and data: true on all of them, so that if one node goes down another can become master. Every index will have 5 shards.
I want to know which config parameters I should alter in order to get the maximum potential out of Elasticsearch.
Also, tell me how much memory is enough for my usage, since I'll have very frequent select queries in production (maybe 1000 requests per second).
Need a detailed suggestion.
I'm not sure anyone can give you a definitive answer to exactly how to configure your servers since it is very dependent on your data structure, mapping and specific queries.
You should read this great article series by Elastic regarding production environments.