Oracle: one user (application connection), multiple concurrent sessions (multi-threaded connection)

In a normal enterprise application, there is just one user (set in hibernate.xml or another config file) and multiple concurrent connections/sessions (because it is a multi-threaded application).
So, will that one user's multiple sessions interfere with each other?

Depends what you mean by "interfere".
Your middle tier connection pool will open a number of physical connections to the database. Sessions in the middle tier will request a connection from the pool, do some work, and return the connection to the pool. Assuming that your connection pool is large enough to handle the number of simultaneous calls being made from your application (based on the number of sessions, the length of time each session needs a logical connection, and the fraction of "think time" to "action time" in each session), you won't experience contention due to opening connections.
Oracle is perfectly happy to run queries in multiple sessions simultaneously. Obviously, though, there is the potential for one session to contend with another session for resources. Two sessions might contend for the same row-level lock if they are both trying to update the same row. If you have enough sessions, you might end up in a situation where CPU, RAM, or I/O is overtaxed, and the load that one session creates causes performance issues in another session. Oracle doesn't care which Oracle users are involved in this sort of contention: you'd have the same potential for interference with 10 sessions all running as one user as you would with 10 sessions running as 10 different users, assuming the sessions were doing the same things.
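The row-lock point can be illustrated with a toy analogy (plain Python threads and a lock standing in for two database sessions and a row lock; nothing Oracle-specific here):

```python
import threading
import time

row_lock = threading.Lock()  # stands in for a row-level lock on one row
events = []

def session(name):
    # Both "sessions" update the same row: the second must wait until
    # the first releases the lock. Which user each session runs as is
    # irrelevant; only the contended resource matters.
    with row_lock:
        events.append(f"{name} acquired")
        time.sleep(0.1)  # simulate work before commit
        events.append(f"{name} released")

t1 = threading.Thread(target=session, args=("session-1",))
t2 = threading.Thread(target=session, args=("session-2",))
t1.start()
time.sleep(0.01)  # ensure session-1 grabs the lock first
t2.start()
t1.join(); t2.join()
print(events)
```

Running this shows session-2 blocking until session-1 "commits", the same serialization you would see with two Oracle sessions updating one row.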

Related

Polling the database every 'n' seconds vs CQN (Continuous Query Notification) - Oracle

My application currently polls the database every n seconds to see if there are any new records.
To reduce the network round trips and CPU cycles of this polling, I was thinking of replacing it with a CQN-based approach, where the database itself notifies the subscribed application whenever there is a commit.
The only problem is: what if Oracle is not able to notify the application due to a connection issue between Oracle and the subscribed application, or if the application has crashed or been killed for any reason? Is there a way to know if the application has missed any CQN notifications?
Is polling the database from application code the only option for mission-critical applications?
You didn't say whether every 'n' seconds means you're expecting data every few seconds, or whether you just need your "staleness" to be as low as that. That has an impact on the choice of CQN because, as per the docs, https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adfns/cqn.html#GUID-98FB4276-0827-4A50-9506-E5C1CA0B7778
"Good candidates for CQN are applications that cache the result sets of queries on infrequently changed objects in the middle tier, to avoid network round trips to the database. These applications can use CQN to register the queries to be cached. When such an application receives a notification, it can refresh its cache by rerunning the registered queries"
However, you have control over how persistent you want the notifications to be:
"Reliable Option:
By default, a CQN registration is stored in shared memory. To store it in a persistent database queue instead—that is, to generate reliable notifications—specify QOS_RELIABLE in the QOSFLAGS attribute of the CQ_NOTIFICATION$_REG_INFO object.
The advantage of reliable notifications is that if the database fails after generating them, it can still deliver them after it restarts. In an Oracle RAC environment, a surviving database instance can deliver them.
The disadvantage of reliable notifications is that they have higher CPU and I/O costs than default notifications do."
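Even with QOS_RELIABLE, an application that was down cannot always be sure it saw every notification, so a common belt-and-braces pattern is to keep a monotonically increasing watermark (for example a sequence-backed column) and run a catch-up query on every notification and on every restart. The sketch below uses an in-memory list as a stand-in for the table, and the class and column names are assumptions, not an Oracle API:

```python
class Subscriber:
    """Toy reconciliation: 'table' stands in for a monitored table whose
    rows carry a monotonically increasing 'id' (e.g. a sequence column)."""

    def __init__(self, table):
        self.table = table
        self.last_seen = 0  # in a real app, persist this watermark

    def catch_up(self):
        # Re-run the registered query for anything newer than the
        # watermark. Safe to call on every notification AND after a
        # restart, so a missed CQN notification cannot lose rows.
        new_rows = [r for r in self.table if r["id"] > self.last_seen]
        if new_rows:
            self.last_seen = max(r["id"] for r in new_rows)
        return new_rows

table = [{"id": 1}, {"id": 2}]
sub = Subscriber(table)
print(sub.catch_up())    # initial catch-up picks up both existing rows
table.append({"id": 3})  # a commit that happened while the app was down
print(sub.catch_up())    # restart catch-up still finds id 3
```

With this in place, CQN becomes an optimization that makes the catch-up query run promptly, rather than the sole delivery mechanism.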

Does Oracle allocate different PGA's for the same user when connected from multiple PCs?

Assume that I've connected 3 times to the database with the same user from different PCs. Does Oracle create separate PGA areas for each of them, or just one? If one, how does it handle multiple queries coming from different sessions connected as the same user and executed at the same time?
Each session (assuming you're using dedicated server connections) allocates its own memory in the PGA for things like sorts. It doesn't matter whether those sessions come from 1 user or 100 users; each session gets its own memory.
Answering your questions
Does Oracle create separate PGA areas for each of them, or just one?
The Program Global Area or PGA is an area of memory allocated and private for one process. The configuration of the PGA depends on the connection configuration of the Oracle database: either shared server or dedicated.
In a shared server configuration, multiple users share a connection to the database, minimizing memory usage on the server, but potentially affecting response time for user requests. In a shared server environment, the SGA holds the session information for a user instead of the PGA. Shared server environments are ideal for a large number of simultaneous connections to the database with infrequent or short-lived requests. In a dedicated server environment, each user process gets its own connection to the database; the PGA contains the session memory for this configuration. The PGA also includes a sort area. The sort area is used whenever a user request requires a sort, bitmap merge, or hash join operation.
Therefore, the answer is yes, assuming you are not using a shared server configuration.
If one, how does it handle multiple queries coming from different sessions connected as the same user and executed at the same time?
In a shared server configuration, the SGA holds the session information for a user instead of the PGA. That is precisely what makes it possible to handle multiple connections with the same server processes: shared server tasks have to keep these working areas in the SGA, because all the dispatcher processes handle requests from any user process.

Oracle: Difference between non-pooled connections and DRCP

I am currently reading the Oracle cx_Oracle tutorial.
There I came across non-pooled connections and DRCP. I am not a DBA, so I searched Google but couldn't find anything.
Could somebody help me understand what they are and how they differ from each other?
Thank you.
Web tier and mid-tier applications typically have many threads of execution, which take turns using RDBMS resources. Multi-threaded applications can already share connections to the database efficiently, allowing great mid-tier scalability. Starting with Oracle 11g, application developers and DBAs can use Database Resident Connection Pooling to achieve the same scalability by sharing connections among multi-process as well as multi-threaded applications, even spanning multiple mid-tier systems.
DRCP provides a connection pool in the database server for typical Web application usage scenarios where the application acquires a database connection, works on it for a relatively short duration, and then releases it. DRCP pools "dedicated" servers. A pooled server is the equivalent of a server foreground process and a database session combined.
DRCP complements middle-tier connection pools that share connections between threads in a middle-tier process. In addition, DRCP enables sharing of database connections across middle-tier processes on the same middle-tier host and even across middle-tier hosts. This results in significant reduction in key database resources needed to support a large number of client connections, thereby reducing the database tier memory footprint and boosting the scalability of both middle-tier and database tiers. Having a pool of readily available servers also has the additional benefit of reducing the cost of creating and tearing down client connections.
DRCP is especially relevant for architectures with multi-process single threaded application servers (such as PHP/Apache) that cannot perform middle-tier connection pooling. The database can still scale to tens of thousands of simultaneous connections with DRCP.
DRCP stands for Database Resident Connection Pooling, as opposed to "non-pooled" connections.
In short, with DRCP, Oracle will cache all the connections opened, making a pool out of them, and will use the connections in the pool for future requests.
The aim of this is to avoid opening new connections when some of the existing connections are available/free, and thus to save database resources and time (the time needed to open a new connection).
If all connections in the pool are being used, then a new connection is automatically created (by Oracle) and added to the pool.
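That reuse-or-create behaviour can be sketched with a toy pool (a pure illustration of the idea, not the DRCP implementation):

```python
class Pool:
    """Toy connection pool: reuse free connections, grow when exhausted."""

    def __init__(self):
        self.free = []     # connections released back to the pool
        self.created = 0   # how many physical connections were opened

    def acquire(self):
        # Reuse a free pooled connection if one exists; otherwise pay
        # the cost of creating a new one and add it to the pool.
        if self.free:
            return self.free.pop()
        self.created += 1
        return f"conn-{self.created}"

    def release(self, conn):
        self.free.append(conn)

pool = Pool()
c1 = pool.acquire()            # pool empty: a connection is created
pool.release(c1)               # returned to the pool, not closed
c2 = pool.acquire()            # the released connection is reused
print(c1 == c2, pool.created)  # True 1
```

Two acquire/release cycles cost only one physical connection, which is exactly the saving DRCP aims for.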
With non-pooled connections, a connection is created and (in theory) closed by the application querying the database.
For instance, on a static PHP page querying the database, you always have the same scheme:
Open DB connection
Queries on the DB
Close the DB connection
And you know what your scheme will be.
Now suppose you have a dynamic PHP page (with AJAX or something) that queries the database only if the user performs certain actions; the scheme becomes unpredictable. That is where DRCP can be beneficial for your database, especially if you have a lot of users and possible requests.
This quote from the official doc fairly summarizes the concept and when it should be used:
"Database Resident Connection Pool (DRCP) is a connection pool in the server that is shared across many clients. You should use DRCP in connection pools where the number of active connections is fairly less than the number of open connections. As the number of instances of connection pools that can share the connections from DRCP pool increases, the benefits derived from using DRCP increases. DRCP increases Database server scalability and resolves the resource wastage issue that is associated with middle-tier connection pooling."
DRCP increases the level of "centralization" of the pools:
Classic connection pools are managed within the client middleware. This means that if, for instance, you have several independent web servers, each one will likely have its own server-managed connection pool. There is a pool per server, and the server is responsible for managing it. For instance, you may have 3 separate pools with a limit of 50 connections per pool. Depending on usage patterns this may be a waste, because you may use the total of 150 connections very seldom, while on the other hand you may hit the individual limit of 50 connections very often.
DRCP is a single pool managed by the DB server, not the client servers. This can lead to a more efficient distribution of connections. In the example above, the 3 servers may share the same database-managed pool of fewer than 150 connections, say 100. And if two servers are idle, the third server can take up all 100 connections if needed.
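The difference can be made concrete with a toy admission check using the numbers from the example above (the function and numbers are illustrative, not DRCP internals):

```python
def can_serve(demand_per_server, per_server_limit=None, shared_limit=None):
    """demand_per_server: concurrent connection demand on each web server.
    Either each server has its own private cap (classic middle-tier
    pools), or all servers draw from one database-managed cap (DRCP)."""
    if per_server_limit is not None:
        # Private pools: every server must fit under its own limit.
        return all(d <= per_server_limit for d in demand_per_server)
    # Shared pool: only the total demand matters.
    return sum(demand_per_server) <= shared_limit

demand = [70, 0, 0]  # one busy server, two idle ones
# Three private pools of 50 each reject the burst...
print(can_serve(demand, per_server_limit=50))  # False
# ...while a single shared pool of 100 absorbs it.
print(can_serve(demand, shared_limit=100))     # True
```

The same total capacity goes further when it is pooled in one place, which is the centralization benefit described above.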
See Oracle Database 11g: The Top New Features for DBAs and Developers for more details and About Database Resident Connection Pooling:
This results in significant reduction in key database resources needed to support a large number of client connections, thereby reducing the database tier memory footprint and boosting the scalability of both middle-tier and database tiers
In addition, DRCP compensates the complete lack of middleware connection pools in certain technologies (quoted again from About Database Resident Connection Pooling):
DRCP is especially relevant for architectures with multi-process single threaded application servers (such as PHP/Apache) that cannot perform middle-tier connection pooling. The database can still scale to tens of thousands of simultaneous connections with DRCP.
As a further reference, see for instance Connection pooling in PHP - Stack Overflow.

Mystery with Oracle inactive sessions JDBC connections and application slowness

Pardon me for my limited knowledge of Oracle Database.
The following is the scenario:
I am currently working on an application where my web application is deployed on 4 profiles of WebSphere Application Server. Together, these 4 profiles can handle a load of 600 users.
All 4 profiles point to a single Oracle database. In the normal scenario we observe that, on average, nearly 100 inactive sessions exist (yes, I am aware that inactive sessions are sessions that are not performing any activity at that moment), but in certain scenarios we observed that the inactive session count goes above 150 and users complain of slowness.
I would appreciate it if someone could help me by providing pointers for the following queries:
1] How do I calculate the required Oracle session count based on the connection pools, considering the above scenario of 4 profiles?
2] How do I ensure that even if my Oracle inactive session count goes beyond the peak limit (in this case 150), the application will not suffer any slowness?

JDBC connection pool manager

We're in the process of rewriting a web application in Java, coming from PHP. I think, but I'm not really sure, that we might run into problems with regard to connection pooling. The application itself is multi-tenant, and is a combination of "separate database" and "separate schema".
For every Postgres database server instance, there can be more than 1 database (named schemas_XXX), each holding more than 1 schema (where a schema is a tenant). On signup, one of two things can happen:
A new tenant schema is created in the highest-numbered schemas_XXX database.
The signup process sees that a database has been fully allocated and creates a new schemas_XXX+1 database. In this new database, the tenant schema is created.
All tenants are known via a central registry (also a Postgres database). When a session is established the registry will resolve the host, database and schema of the tenant and a database session is established for that HTTP request.
Now, the problem I think I'm seeing here is twofold:
A JDBC connection pool is defined when the application starts. With that I mean that all databases (host+database) are known at startup. This conflicts with the signup process.
As I'm writing this, we have ~20 database servers with ~1000 databases, for a total of ~100k (tenant) schemas. Given those numbers, I would need 20*1000 data sources for every instance of the application. I'm assuming that all pools are also, at one time or another, started. I'm not sure how many resources a pool allocates, but it must be a non-trivial amount for 20k pools.
So, is it feasible to even assume that a connection pool can be used for this?
For the first problem, I guess that a pool with support for JMX can be used, and that we create a new data source if and when a new schemas_XXX database is created. The larger issue is the huge number of pools. For this, I guess, some sort of pool manager should be used that can terminate a pool that has no open connections (and on demand also start a pool). I have not found anything that supports this.
What options do I have? Or should I just bite the bullet and fall back to an out of process connection pool such as PgBouncer and establish a plain JDBC connection per request, similar to how we're handling it now with PHP?
A few things:
A Connection pool need not be instantiated only at application start-up. You can create or destroy them whenever you want;
You obviously don't want to eagerly create one Connection pool per database or schema to be open at all times. You'd need to keep at least 20000 or 100000 Connections open if you did, a nonstarter even before you get to the non-Connection resources used by the DataSource;
If, as is likely, requests for Connections for a particular tenant tend to cluster, you might consider lazily, dynamically instantiating pools, and destroying them after some timeout if they've not handled a request for a while.
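The third point, lazily creating per-tenant pools and evicting idle ones, can be sketched like this (PoolManager and the pool factory are hypothetical names for illustration, not an existing library API; the example uses Python for brevity, but the same shape works in Java with a ConcurrentHashMap):

```python
import time

class PoolManager:
    """Lazily creates one pool per tenant database and evicts pools
    that have been idle longer than idle_timeout_s."""

    def __init__(self, pool_factory, idle_timeout_s=300.0,
                 clock=time.monotonic):
        self.pool_factory = pool_factory  # builds a pool for a DB key
        self.idle_timeout_s = idle_timeout_s
        self.clock = clock                # injectable for testing
        self.pools = {}                   # db_key -> (pool, last_used)

    def get_pool(self, db_key):
        # Instantiate the tenant's pool on first use only.
        pool, _ = self.pools.get(db_key, (None, None))
        if pool is None:
            pool = self.pool_factory(db_key)
        self.pools[db_key] = (pool, self.clock())  # refresh last-used
        return pool

    def evict_idle(self):
        # Drop pools that have not served a request for a while.
        now = self.clock()
        for key, (pool, last_used) in list(self.pools.items()):
            if now - last_used > self.idle_timeout_s:
                del self.pools[key]  # real code would also close the pool

fake_now = [0.0]
mgr = PoolManager(lambda key: f"pool-for-{key}",
                  idle_timeout_s=10.0, clock=lambda: fake_now[0])
mgr.get_pool("schemas_001")
fake_now[0] = 5.0;  mgr.get_pool("schemas_002")
fake_now[0] = 12.0; mgr.evict_idle()  # schemas_001 idle for 12 s: evicted
print(sorted(mgr.pools))              # ['schemas_002']
```

A production version would need thread-safe access to the map and would close evicted pools cleanly, but the lifecycle (create on demand, time out when idle) is the whole trick.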
Good luck!
