I just took over a project and noticed that they are using a DB profile like this for the service accounts used for connection caching:
ALTER PROFILE APP_PROF LIMIT
SESSIONS_PER_USER 100
CONNECT_TIME 640
IDLE_TIME 15
...
I believe that is why we are sometimes getting stale connections and the error "ORA-02399: exceeded maximum connect time, you are being logged off".
My question is: for middle-tier applications where connections are cached, are there any good reasons to apply a profile with limits like these to service accounts?
Personally, I'd be hard-pressed to imagine a situation where I'd want a middle-tier service account with a connect_time or idle_time set. I suppose it's possible that someone somewhere has a legitimate reason to use such a configuration: maybe forcing connections to be recycled frequently is the least painful way to put a quick band-aid on a resource leak, for example, while you work to fix the underlying code issue. But those would certainly be settings I'd look at carefully.
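If the goal really is to recycle connections periodically, that is usually better done in the connection pool itself, where a connection is retired between check-outs rather than killed mid-session. A minimal sketch, assuming HikariCP as the pool (the JDBC URL, credentials, and timing values here are illustrative, not from the original post):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolRecyclingExample {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/APPSVC"); // hypothetical URL
        config.setUsername("app_service");                            // hypothetical service account
        config.setPassword(System.getenv("DB_PASSWORD"));
        config.setMaximumPoolSize(100);          // mirrors SESSIONS_PER_USER 100 at the pool level

        // Recycle connections at the pool level instead of letting the
        // database kill them with ORA-02399:
        config.setMaxLifetime(30 * 60 * 1000L);  // retire connections after 30 minutes
        config.setIdleTimeout(10 * 60 * 1000L);  // close idle connections after 10 minutes

        HikariDataSource ds = new HikariDataSource(config);
        // ... hand ds to the application ...
        ds.close();
    }
}

Unlike CONNECT_TIME, maxLifetime only retires a connection once it has been returned to the pool, so the application never sees a session killed mid-request.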
I've seen and heard of cases where someone wanted to set sessions_per_user for a middle tier service account where the relationships between the middle tier app server admins and the database admins were strained. Normally, the middle tier admins set a cap for the size of the connection pool in the middle tier in consultation with the DBA team that makes sure that the database can handle connection_pool_max * number_of_app_servers connections. If the middle tier admins have a history of spinning up new app server farms or bumping up the number of allowed connections in the connection pool without talking to the DBA team, the DBA team may want to set their own limit to protect the database. I'd much rather solve the communication problem than have a separate database limit.
I am building a website with Django, hosted on Heroku. I think that at peak times about 500-600 users may be using it simultaneously. I can't figure out which Postgres plan is best.
According to this: https://elements.heroku.com/addons/heroku-postgresql#details
heroku-postgresql:standard-0 has a connection limit of 120, and
heroku-postgresql:standard-2 has a connection limit of 400.
Is it enough to have 120 connections for about 500 users? Or is that entirely irrelevant?
Is it enough to have 120 connections for about 500 users?
This isn't something that can be answered with certainty by anyone other than you but there's some general understanding that might be helpful here.
In most cases for basic web applications, one user on your website != one connection used for as long as they're using the app. For instance, that user might need a connection while they log in and load their profile, but not while they're simply viewing the content. While they're idling on a page, those database connections can service other users. With sensible defaults and connection pooling, 120 connections should be plenty for 500 concurrent users.
All that being said, it's on the application developer to manage database connections and pooling to ensure that this behavior is enforced. Also, this position only represents an average web app and there are certainly apps out there whose users require longer-lived connections.
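To make that concrete, here's a minimal sketch of the borrow-use-return pattern in Java/JDBC (the question is about Django, but the pattern is the same in any stack; the DataSource wiring, table, and query are illustrative): the connection is held only for the duration of one query, then goes back to the pool for the next user.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ProfileDao {
    private final DataSource pool; // a pool capped well below the plan's 120-connection limit

    public ProfileDao(DataSource pool) {
        this.pool = pool;
    }

    public String loadDisplayName(long userId) throws SQLException {
        // Borrow a connection only for this one query; try-with-resources
        // returns it to the pool immediately, so 120 physical connections
        // can serve far more than 120 logged-in users.
        try (Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT display_name FROM users WHERE id = ?")) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}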
I'm configuring a system for production, and we have chosen Heroku's Postgres Standard-0 as the database. It states that the maximum number of connections is 120, but I am aware that this does not mean I can set Sequelize's POOL_MAX to 120, as there are other considerations.
From experience, what would be an upper POOL_MAX setting?
Since Heroku needs to occasionally connect to your database to perform health-checks and various other tasks, I wouldn't recommend setting POOL_MAX to 120 as you've already intuited. A figure around 110 seems more appropriate as it will leave room for Heroku to monitor the database and allow a couple extra connections in case you need to connect to the database in a pinch. If you add more dynos or have other clients connecting to the database regularly, you'll want to adjust the POOL_MAX setting downward to account for the additional connections.
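As a rough back-of-the-envelope sketch (in Java for illustration; the headroom and dyno counts are assumptions, not Heroku-documented figures): reserve some connections as headroom, then divide what remains across every process that opens its own pool.

public class PoolSizing {
    public static void main(String[] args) {
        int dbConnectionLimit = 120; // heroku-postgresql:standard-0
        int reservedHeadroom  = 10;  // assumed: platform health checks + ad-hoc sessions
        int dynos             = 4;   // assumed number of app processes, each with its own pool

        int poolMaxPerDyno = (dbConnectionLimit - reservedHeadroom) / dynos;
        System.out.println("POOL_MAX per dyno: " + poolMaxPerDyno); // 27 in this example
    }
}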
Pardon me for my limited knowledge of Oracle Database.
The following is the scenario:
I am currently working on an application where my web application is deployed on 4 profiles of WebSphere Application Server. Together, these 4 profiles can handle a load of 600 users.
All 4 of these profiles point to a single Oracle database. In the normal scenario we observe, on average, nearly 100 inactive sessions (yes, I am aware that inactive sessions are sessions which are not performing any activity at that moment), but in certain scenarios we have observed the inactive session count go above 150, and users complain of slowness.
I would appreciate it if someone could help me with pointers on the following questions:
1] How do I calculate the required Oracle session count based on the connection pools, considering the above scenario of 4 profiles?
2] How do I ensure that even if my Oracle inactive session count goes beyond the peak limit (in this case 150), the application will not see any impact from a slowness point of view?
We're in the process of rewriting a web application in Java, coming from PHP. I think, but I'm not really sure, that we might run into problems with regard to connection pooling. The application itself is multitenant and is a combination of "separate database" and "separate schema".
For every Postgres database server instance, there can be more than 1 database (named schemas_XXX), each holding more than 1 schema (where a schema is a tenant). On signup, one of two things can happen:
A new tenant schema is created in the highest-numbered schemas_XXX database.
The signup process sees that a database has been fully allocated and creates a new schemas_XXX+1 database. In this new database, the tenant schema is created.
All tenants are known via a central registry (also a Postgres database). When a session is established the registry will resolve the host, database and schema of the tenant and a database session is established for that HTTP request.
Now, the problem I think I'm seeing here is twofold:
A JDBC connection pool is defined when the application starts. By that I mean that all databases (host + database) must be known at startup. This conflicts with the signup process.
As I write this, we have ~20 database servers with ~1000 databases (for a total of ~100k tenant schemas). Given those numbers, I would need 20 * 1000 data sources for every instance of the application. I'm assuming that all pools are also, at one time or another, started. I'm not sure how many resources a pool allocates, but it must be a non-trivial amount for 20k pools.
So, is it even feasible to assume that a connection pool can be used for this?
For the first problem, I guess a pool with support for JMX could be used, so that we create a new data source when and if a new schemas_XXX database is created. The larger issue is the huge number of pools. For that, I guess, some sort of pool manager should be used that can terminate a pool that has no open connections (and start one on demand). I have not found anything that supports this.
What options do I have? Or should I just bite the bullet and fall back to an out-of-process connection pool such as PgBouncer, establishing a plain JDBC connection per request, similar to how we're handling it now with PHP?
A few things:
A Connection pool need not be instantiated only at application start-up. You can create or destroy them whenever you want;
You obviously don't want to eagerly create one Connection pool per database or schema to be open at all times. You'd need to keep at least 20000 or 100000 Connections open if you did, a nonstarter even before you get to the non-Connection resources used by the DataSource;
If, as is likely, requests for Connections for a particular tenant tend to cluster, you might consider lazily, dynamically instantiating pools and destroying them after some timeout if they've not handled a request for a while; a sketch of that approach follows.
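To illustrate that last point, here is a minimal sketch assuming HikariCP (the class name, credentials, timeouts, and pool sizes are all illustrative): pools are created lazily per host+database pair and evicted after a period of inactivity.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TenantPoolManager {
    private static final long IDLE_EVICT_MILLIS = TimeUnit.MINUTES.toMillis(15);

    private static final class Entry {
        final HikariDataSource pool;
        volatile long lastUsed = System.currentTimeMillis();
        Entry(HikariDataSource pool) { this.pool = pool; }
    }

    private final Map<String, Entry> pools = new ConcurrentHashMap<>();
    private final ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();

    public TenantPoolManager() {
        reaper.scheduleAtFixedRate(this::evictIdlePools, 1, 1, TimeUnit.MINUTES);
    }

    // Lazily create (or reuse) the pool for one host+database pair from the registry.
    public HikariDataSource poolFor(String host, String database) {
        String key = host + "/" + database;
        Entry entry = pools.computeIfAbsent(key, k -> new Entry(newPool(host, database)));
        entry.lastUsed = System.currentTimeMillis();
        return entry.pool;
    }

    private HikariDataSource newPool(String host, String database) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://" + host + "/" + database);
        config.setUsername("app");                        // hypothetical credentials
        config.setPassword(System.getenv("DB_PASSWORD"));
        config.setMaximumPoolSize(10);                    // keep per-database pools small
        config.setMinimumIdle(0);                         // let a quiet pool drop to zero connections
        return new HikariDataSource(config);
    }

    private void evictIdlePools() {
        long cutoff = System.currentTimeMillis() - IDLE_EVICT_MILLIS;
        pools.entrySet().removeIf(e -> {
            if (e.getValue().lastUsed < cutoff) {
                e.getValue().pool.close(); // closes the pool's physical connections
                return true;
            }
            return false;
        });
    }
}

A production version would need to handle the race between handing out a pool and the reaper closing it, and the tenant's schema would still be selected per request on the borrowed connection (e.g. via SET search_path), but this is the shape of the idea.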
Good luck!
In a normal enterprise application, there is just one database user (set in hibernate.xml or another config file) and many concurrent connections/sessions (because it's a multi-threaded application).
So, will that ONE user's multiple sessions interfere with each other?
Depends what you mean by "interfere".
Your middle tier connection pool will open a number of physical connections to the database. Sessions in the middle tier will request a connection from the pool, do some work, and return the connection to the pool. Assuming that your connection pool is large enough to handle the number of simultaneous calls being made from your application (based on the number of sessions, the length of time each session needs a logical connection, and the fraction of "think time" to "action time" in each session), you won't experience contention due to opening connections.
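One way to sanity-check "large enough" is a Little's law estimate: average connections in use ≈ request arrival rate × average time each request holds a connection. A small illustrative calculation (all numbers are assumptions):

public class PoolSizeEstimate {
    public static void main(String[] args) {
        double requestsPerSecond  = 200.0; // assumed peak arrival rate across all sessions
        double connectionHoldSecs = 0.05;  // assumed 50 ms of database work per request

        // Little's law: average connections in use = arrival rate * hold time
        double avgInUse = requestsPerSecond * connectionHoldSecs;  // 10 here
        int poolSize = (int) Math.ceil(avgInUse * 2);              // assumed 2x headroom for bursts
        System.out.println("Suggested pool size: " + poolSize);    // 20 in this example
    }
}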
Oracle is perfectly happy to run queries in multiple sessions simultaneously. Obviously, though, there is the potential for one session to contend with another session for resources. Two sessions might contend for the same row-level lock if they are both trying to update the same row. If you have enough sessions, you might end up in a situation where CPU or RAM or I/O is being overtaxed and the load that one session creates causes performance issues in another session. Oracle doesn't care which Oracle user(s) are involved in this sort of contention: you'd have the same potential for interference with 10 sessions all running as 1 user as you would with 10 sessions running as 10 different users, assuming the sessions were doing the same things.
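To make the row-lock case concrete, here is a minimal sketch assuming a JDBC pool and a hypothetical accounts(id, balance) table (the URL and credentials are illustrative): two sessions authenticated as the same database user still block each other on the same row, exactly as two different users would.

import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class RowLockContentionDemo {

    // Both sessions authenticate as the same database user; the row lock
    // neither knows nor cares about that.
    static void updateBalance(DataSource ds, long accountId, int delta) throws Exception {
        try (Connection conn = ds.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                ps.setInt(1, delta);
                ps.setLong(2, accountId);
                ps.executeUpdate();   // takes a row-level lock on this account
                Thread.sleep(5_000);  // hold the lock; a second session updating
                                      // the same id waits here until we commit
            }
            conn.commit();            // releases the lock
        }
    }

    public static void main(String[] args) throws Exception {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/APPSVC"); // hypothetical URL
        ds.setUsername("app_service");                            // one user for both sessions
        ds.setPassword(System.getenv("DB_PASSWORD"));

        Thread a = new Thread(() -> { try { updateBalance(ds, 42L, -10); } catch (Exception ignored) {} });
        Thread b = new Thread(() -> { try { updateBalance(ds, 42L, +10); } catch (Exception ignored) {} });
        a.start(); b.start();  // whichever thread is second blocks on the row lock
        a.join(); b.join();
        ds.close();
    }
}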