Oracle session is being shared by different web clients - weblogic - oracle

This is causing issues, as our procedures use session-based global temporary tables; we need those for good reasons. The actions of one web client interfere with those of another. Why is the same Oracle session reused by a separate client?
Connection pooling with WebLogic is in place. I have printed the following to confirm that two clients are indeed being assigned the same Oracle session:
SELECT SYS_CONTEXT ('USERENV', 'INSTANCE'),
SYS_CONTEXT ('USERENV', 'SID'),
SYS_CONTEXT ('USERENV', 'SESSIONID')
FROM DUAL;
How can I ensure each client gets a different session (not an HTTP session, but an Oracle session)? Is this something that needs to be modified at the WebLogic level?

If you're using connection pooling in the middle tier, then once your middle tier code closes a connection, it is returned to the pool and is available to be used by another middle tier session. If you are trying to use global temporary tables that store data past the point that the middle tier closes a connection, you're doing something wrong.
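To make the mechanics concrete, here is a minimal, hypothetical sketch of the middle-tier pattern being described; the JNDI name jdbc/appDS and the temporary table gtt_report_rows are invented for illustration. The key point is that conn.close() only returns the connection to the pool, while the Oracle session behind it stays alive and can be handed to a different web client next.

import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ReportService {

    public void buildReport() throws Exception {
        // Look up the WebLogic-managed pool (the JNDI name is a placeholder).
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/appDS");
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
            // Stage rows in a session-scoped global temporary table (hypothetical table).
            stmt.executeUpdate("INSERT INTO gtt_report_rows (id, label) VALUES (1, 'demo')");
            // ... read the staged rows back and render the report ...
        }
        // conn.close() above only returned the connection to the pool. The Oracle
        // session is still alive, and if the GTT is ON COMMIT PRESERVE ROWS and was
        // not truncated, the next borrower of this connection inherits its rows.
    }
}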
You could design and build your middle tier so that it doesn't use connection pooling and so that each middle-tier session opens a private database connection that is used only by that user. That would generally be a horrible idea, however: you would end up spending oodles of time opening and closing database connections, you'd end up with thousands of orphaned database connections when an application user simply navigates away from the site rather than explicitly logging out, and it would become very difficult to do things like have the same user serviced by different app servers at different points in time for load-balancing purposes.
A better approach would be to get rid of the global temporary table and store whatever data you need in a permanent table that includes your unique session ID as part of the key. You'd need to write code to purge the data at some point but that shouldn't be terribly difficult.
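A minimal sketch of that approach follows, assuming a hypothetical permanent table report_work (work_id, val, created_at): each logical web session tags its rows with its own generated id, so it no longer matters which pooled connection (Oracle session) executes the statements, and a scheduled purge cleans up old rows.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.UUID;
import javax.sql.DataSource;

public class WorkTableDao {

    private final DataSource dataSource;

    public WorkTableDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Writes a working row tagged with the caller's unique id and returns that id.
    public String stageRow(int value) throws Exception {
        String workId = UUID.randomUUID().toString();   // unique per logical web session/request
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO report_work (work_id, val, created_at) VALUES (?, ?, SYSDATE)")) {
            ps.setString(1, workId);
            ps.setInt(2, value);
            ps.executeUpdate();
        }
        return workId;
    }

    // Purge job: delete working rows older than a day (run from a scheduler).
    public void purgeOldRows() throws Exception {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "DELETE FROM report_work WHERE created_at < SYSDATE - 1")) {
            ps.executeUpdate();
        }
    }
}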

Related

Understanding two SQL windows in PL/SQL Developer

Is it a correct understanding that queries run in two SQL windows in PL/SQL Developer are executed as two separate transactions? (I tend to conclude this based on the fact that the results of a modification query issued in one window are not reflected in the results of a SELECT query issued in another window). If this understanding is correct, what is the utility of that given that the two transactions share a single connection?
Two transactions cannot share a single connection. If each window is a separate transaction, each window would open a separate connection to the database. If you have two transactions, you have two sessions.
If you want to see whether the different windows are using different connections, you can run
select sys_context( 'USERENV', 'SID' ) from dual;
If you get the same result in both windows, you have a single connection and a single transaction. If you get different results, you have different connections.
"Session Mode" is configurable via the preference settings. The default is "Multi-Session", in which each window runs in its own session.
The other options are "Dual Session" (my preferred setting), in which all windows share one session while the schema browser, session monitor, compilations, etc. use a second session, or "Single Session", where the whole application uses a single session.

JDBC connection pool manager

We're in the process of rewriting a web application in Java, coming from PHP. I think, but I'm not really sure, that we might run into problems with regard to connection pooling. The application itself is multitenant, and is a combination of "Separate database" and "Separate schema".
For every Postgres database server instance, there can be more than one database (named schemas_XXX) holding more than one schema (where a schema is a tenant). On signup, one of two things can happen:
A new tenant schema is created in the highest-numbered schemas_XXX database.
The signup process sees that a database has been fully allocated and creates a new schemas_XXX+1 database. In this new database, the tenant schema is created.
All tenants are known via a central registry (also a Postgres database). When a session is established, the registry resolves the tenant's host, database, and schema, and a database session is established for that HTTP request.
Now, the problem I think I'm seeing here is twofold:
A JDBC connection pool is defined when the application starts. By that I mean that all databases (host + database) must be known at startup. This conflicts with the signup process.
As I'm writing this we have ~20 database servers with ~1000 databases each (for a total of ~100k tenant schemas). Given those numbers, I would need 20*1000 data sources for every instance of the application. I'm assuming that all pools are also, at one time or another, started. I'm not sure how many resources a pool allocates, but it must be a non-trivial amount for 20k pools.
So, is it feasible to even assume that a connection pool can be used for this?
For the first problem, I guess that a pool with support for JMX can be used, and that we create a new data source when and if a new schemas_XXX database is created. The larger issue is the huge number of pools. For this, I guess, some sort of pool manager should be used that can terminate a pool that has no open connections (and also start a pool on demand). I have not found anything that supports this.
What options do I have? Or should I just bite the bullet and fall back to an out-of-process connection pool such as PgBouncer and establish a plain JDBC connection per request, similar to how we're handling it now with PHP?
A few things:
A connection pool need not be instantiated only at application start-up; you can create or destroy pools whenever you want;
You obviously don't want to eagerly create one Connection pool per database or schema to be open at all times. You'd need to keep at least 20000 or 100000 Connections open if you did, a nonstarter even before you get to the non-Connection resources used by the DataSource;
If, as is likely, requests for Connections for a particular tenant tend to cluster, you might consider lazily, dynamically instantiating pools, and destroying them after some timeout if they've not handled a request for a while.
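For what it's worth, here is a rough sketch of that last idea, assuming HikariCP for the per-database pools and a Guava cache for the lazy create/close bookkeeping; the libraries, JDBC URL pattern, credentials, and sizing are illustrative assumptions rather than anything the question prescribes. Note that Guava only evicts entries in the course of other cache activity, so a completely idle manager may hold pools somewhat longer than the timeout.

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.TimeUnit;

public class TenantPoolManager {

    // One small pool per tenant database, created on first use and closed after
    // 30 minutes without a request for that database.
    private final LoadingCache<String, HikariDataSource> pools = CacheBuilder.newBuilder()
            .expireAfterAccess(30, TimeUnit.MINUTES)
            .removalListener((RemovalListener<String, HikariDataSource>) removal ->
                    removal.getValue().close())              // shut the pool down on eviction
            .build(new CacheLoader<String, HikariDataSource>() {
                @Override
                public HikariDataSource load(String jdbcUrl) {
                    HikariConfig config = new HikariConfig();
                    config.setJdbcUrl(jdbcUrl);              // e.g. jdbc:postgresql://host/schemas_042
                    config.setUsername("app");               // placeholder credentials
                    config.setPassword("secret");
                    config.setMaximumPoolSize(5);            // keep per-tenant pools small
                    return new HikariDataSource(config);
                }
            });

    // The central registry resolves the tenant to host/database, which maps to a JDBC URL here.
    public Connection connectionFor(String jdbcUrl) throws SQLException {
        return pools.getUnchecked(jdbcUrl).getConnection();
    }
}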
Good luck!

Select on one row table takes seconds

I am experiencing very poor performance in my web application: trivial HTTP requests take dozens of seconds to be processed. Tracing through the application code I discovered that the majority of the time is spent executing the first DB query, even if it is as simple as a SELECT on a single-row, single-column table. This happens for every HTTP request, independently of the query performed. After this first pathological DB interaction the remaining queries run smoothly.
I am using Hibernate on top of an Oracle DB (via JDBC).
It is not a connection pool problem, since I am successfully using Hibernate with c3p0; nor does it seem to be related to Oracle itself, because all queries return immediately when performed directly on the DB.
Furthermore, the Hibernate SessionFactory is correctly created only once, at application start-up time, and concurrency is not a problem at all since tests have been done with a single user.
Finally, my DB's IP address is correctly resolved in my application server's /etc/hosts, so even DNS-related issues can be ruled out (I am using two distinct virtual machines, DB and APP server).
I do not know what to look for; any help?
This sounds like your session factory object is being spun up on the first query. I generally try to initialize the session factory at application startup to avoid this, because the user can see that slowdown on the first query. Doing it up front at application startup avoids it.
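For example, one way to do that up-front initialization in a plain servlet web app is a context listener along these lines. This is a hedged sketch: it assumes the classic Configuration().configure().buildSessionFactory() bootstrap (reading hibernate.cfg.xml from the classpath) and a listener registered in web.xml, so adjust to however your application is actually wired.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateStartupListener implements ServletContextListener {

    private static SessionFactory sessionFactory;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Pay the mapping/metadata parsing and pool warm-up cost at deployment,
        // instead of on the first user-visible request.
        sessionFactory = new Configuration().configure().buildSessionFactory();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (sessionFactory != null) {
            sessionFactory.close();
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}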

What constitutes a user in Oracle (and in DBs in general) in relation to transactions?

I usually see references to locks being held by a user. Does this mean a single connection, all logged-in connections by a user account, etc.?
How does this apply to Oracle and DBs in general?
If it applies to more than one connection, wouldn't people see data while it's being modified?
How does this apply to JDBC?
You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4010.htm
It means a specific user session. If you look at the DBA_LOCKS view you'll see a session_id column, which relates to v$session and represents a single user session (i.e. a connection), not all sessions for the user ID. The locking mechanisms are explained in the documentation. You're right: if that were not the case then other sessions/connections for the same user would see uncommitted changes, which is never allowed (in Oracle, at least, and as far as I'm aware in all RDBMSs).
For JDBC the same applies: each lock is held by a single connection. If you have a connection pool with multiple open connections against the same user account, changes made using one connection from the pool will not be visible to other connections until they are committed. So, if you're executing multiple statements as part of a logical transaction (in which case hopefully you do not have auto-commit on), you need to keep reusing the same connection for all of them, not fetch a new connection from the pool each time, as that may or may not get the same one with the pending changes.
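As a concrete, made-up illustration in plain JDBC: the accounts table and the transfer logic below are placeholders, but the shape is the point. Both statements run on the same borrowed connection with auto-commit off, so the locks they take belong to one session and are released together at commit or rollback.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TransferDao {

    private final DataSource dataSource;

    public TransferDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void transfer(long fromId, long toId, int amount) throws SQLException {
        // One connection for both statements: the row locks taken by the first
        // UPDATE are held by this connection/session until commit or rollback.
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);                      // group the statements into one transaction
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setInt(1, amount);
                debit.setLong(2, fromId);
                debit.executeUpdate();

                credit.setInt(1, amount);
                credit.setLong(2, toId);
                credit.executeUpdate();

                conn.commit();                              // locks released, changes now visible to other sessions
            } catch (SQLException e) {
                conn.rollback();                            // undo both statements on failure
                throw e;
            }
        }
    }
}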

Persisting user data in MVC 3

I have been given a requirement to persist user data once the user has authenticated initially. We don't want to hit the database to look up the user every time they navigate to a new view, etc.
I have a User class that is [Serializable] so it can be stored in a session. I am using SQL Server for session state as well. I was thinking of storing the object in the session, but I really hate doing that.
How are developers handling this type of requirement these days?
Three ways:
Encrypting data in cookies and sending it to the client, decrypting it whenever you need it
Storing it server-side by an id (e.g. UserId) in Cache, Session, or any other storage (which is safer than a cookie)
Using a second-level caching strategy if you use an ORM
Assuming your user object is not huge and does not change often, I think it is acceptable to store it in the session.
Since you already have a SQL Server session, you will already be making stored-procedure calls to pull/push the data, and adding a small object to that should have minimal perf impact compared to other options like persisting it down to the client and sending it back on every request.
I would also consider the server a much more secure place to keep this info.
You want to minimize the number of times you write to the session (request a lock) when it is stored in SQL, as it is implemented in a sealed class that exclusively locks the session. If any of your other requests in this session require write access to the SQL session, they will be blocked by the initial request until it releases the session lock. (There are some new hooks in .NET 4 that allow you to change the SessionStateBehavior in the pipeline before the session is accessed.)
You might consider a session state server (AppFabric) if the performance of your SQL session store is an issue.
