I know that when using PreparedStatement the statement will be cached for the next use, but I've read docs saying the cache is made per connection. From my understanding, this means each connection maintains its own cache, i.e. Connection A can't use a statement cached by Connection B, even if those two connections are in the same connection pool.
I'm wondering why the connection pool can't manage the cache for all the connections in it, so a statement could be reused by all of them.
My question: am I right about this, or have I misunderstood it? And if I'm right, what about the idea above: could it be implemented that way?
A statement handle is - usually - linked to the physical connection that created it (not only on the JDBC side, but also on the database side). It is also deleted/closed/disposed when the connection is closed. As it is linked to the connection, the handle can't be used from a different connection, therefore the statement cache - if any - is per connection.
Even if this were technically possible, there could be additional problems (e.g. privilege leaks if connections have different rights, etc.).
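A minimal plain-JDBC sketch of that binding (the URL, credentials and table are hypothetical): a PreparedStatement obtained from one connection cannot be handed to another, so each connection has to prepare - and cache - its own handle.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PerConnectionStatements {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//dbhost:1521/service"; // hypothetical URL
        String sql = "SELECT name FROM users WHERE id = ?";     // hypothetical table

        try (Connection connA = DriverManager.getConnection(url, "user", "pwd");
             Connection connB = DriverManager.getConnection(url, "user", "pwd")) {

            // This handle lives inside connA (and in the server-side session
            // behind connA); there is no API to transfer it to connB.
            PreparedStatement psA = connA.prepareStatement(sql);

            // connB must prepare its own handle for the same SQL text. A driver
            // with a per-connection statement cache would only reuse it for later
            // prepareStatement(sql) calls made on connB itself.
            PreparedStatement psB = connB.prepareStatement(sql);

            psA.close();
            psB.close();
        } // closing each connection also disposes of its cached statements
    }
}
```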
We have recently upgraded from Oracle.ManagedDataAccess.EntityFramework to Oracle.EntityFrameworkCore (we are on .NET Standard 2.0). When we connect to the database we use proxy credentials, with the following connection string:
User Id=changingUserId;Data Source=dbname;Proxy User Id=proxyUserId;Proxy Password=proxyUserPassword;
The UserID element changes based on who is connecting.
The problem we have is that the connection pools are no longer working as expected, with many connections being spawned and not closed - we very quickly reach the pool size limit and everything grinds to a halt. Before the upgrade, pools would increase and decrease in size, but now they only grow!
Reading the Oracle docs, it appears the connection string must be identical for connection pooling to work correctly, but I don't see how this is possible when we are using proxy users. Has anyone else come across this or got around it, or am I missing something?
Thanks
Chris
We have found a workaround: adding the user's password to the connection string makes it work as expected - the connection pool no longer fills up, and connection numbers rise and fall again.
User Id=changingUserId;Password=usersPwd;Data Source=dbname;Proxy User Id=proxyUserId;Proxy Password=proxyUserPassword;
This isn't ideal for us - authentication/authorisation is handled elsewhere - but it will do for now. We are raising a call with Oracle as I suspect this is a bug in their library.
Reading this article: http://go-database-sql.org/accessing.html
It says that the sql.DB object is designed to be long-lived and that we should not Open() and Close() databases frequently. But what should I do if I have 10 different MySQL servers, sharded so that each server holds 511 databases, the way Pinterest shards their data with MySQL?
https://medium.com/@Pinterest_Engineering/sharding-pinterest-how-we-scaled-our-mysql-fleet-3f341e96ca6f
Wouldn't I then need to constantly access new nodes and new databases? As I understand it, I would have to Open and Close the database connection all the time, depending on which node and database I have to access.
It also says that:
If you don’t treat the sql.DB as a long-lived object, you could experience problems such as poor reuse and sharing of connections, running out of available network resources, or sporadic failures due to a lot of TCP connections remaining in TIME_WAIT status. Such problems are signs that you’re not using database/sql as it was designed.
Will this be a problem? How should I solve this issue then?
I am also interested in this question. I guess a solution could look like this:
Minimize the number of idle connections in each pool: db.SetMaxIdleConns(N).
Keep a map[serverID]*sql.DB; when you have no pool for a server yet, open one and add it to the map (see the sketch after this list).
Make data more local, so backends usually go to “their” databases. However, Pinterest seems not to do this.
Increase the socket and file descriptor limits on the backend machines so they can keep more connections open.
Set some reasonable idle timeout so very old unused connections get closed.
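A minimal Go sketch of the map-of-pools idea (the driver import, DSNs and tuning numbers are assumptions). Note that you only need one *sql.DB per server, not per database: with MySQL you can keep a single pool per server and qualify table names with the database ("SELECT ... FROM db042.pins"), so 10 servers means only 10 pools rather than 10*511.

```go
package main

import (
	"database/sql"
	"sync"
	"time"

	_ "github.com/go-sql-driver/mysql" // assumed MySQL driver
)

var (
	mu    sync.Mutex
	pools = map[string]*sql.DB{} // one pool per shard server
)

// dbFor lazily opens a pool for a shard server and caches it in the map.
func dbFor(serverID, dsn string) (*sql.DB, error) {
	mu.Lock()
	defer mu.Unlock()
	if db, ok := pools[serverID]; ok {
		return db, nil
	}
	db, err := sql.Open("mysql", dsn) // does not dial yet; connections open on demand
	if err != nil {
		return nil, err
	}
	db.SetMaxIdleConns(2)                   // point 1: few idle connections per pool
	db.SetConnMaxIdleTime(5 * time.Minute) // point 5: close long-unused connections
	pools[serverID] = db
	return db, nil
}
```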
In my project, developers use a single instance of Connection instead of a connection pool, against an Oracle 12c database.
Using a pool is a common practice and Oracle itself documents it: http://docs.oracle.com/database/121/JJUCP/get_started.htm#JJUCP8120.
But JDBC 4.2 specification says:
13.1.1 Creating Statements
Each Connection object can create multiple Statement objects that may be used concurrently by the program.
Why use a pool of connections instead of a single connection, if it's possible to use multiple statements to manage concurrency?
The Oracle Database Dev Team strongly discourages using a single Connection in multiple threads. That almost always causes problems. As a general rule we will not consider any problem report that does this.
A Connection can have multiple Statements and/or ResultSets open at one time, but only one can execute at a time. Connections are strictly single-threaded and blocking. We try to prevent multiple threads from accessing a Connection simultaneously, but there are a few odd cases where it is possible. These are all but guaranteed to cause problems. (It is not practical to fix or prevent these cases, mostly for performance reasons. Just don't share a single Connection across multiple threads.)
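A short sketch of the pooled pattern instead, using the UCP API from the getting-started guide linked above (the URL, credentials and pool sizes are placeholders): each worker thread borrows its own Connection and returns it on close(), rather than sharing one Connection across threads.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class PooledWorkers {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/service"); // placeholder
        pds.setUser("app");                                    // placeholder
        pds.setPassword("secret");                             // placeholder
        pds.setInitialPoolSize(2);
        pds.setMaxPoolSize(10);

        Runnable worker = () -> {
            // Each thread borrows its own Connection; close() returns it to the pool.
            try (Connection con = pds.getConnection();
                 PreparedStatement ps = con.prepareStatement("SELECT 1 FROM dual");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* process the row */ }
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        for (int i = 0; i < 4; i++) {
            new Thread(worker).start();
        }
    }
}
```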
If a client connects to the database via a dedicated server connection, then that database session will only serve that client. If the client connects to the database via a shared server connection, then a given database session may serve multiple clients over its lifetime.
This is documented here.
Also, a session can only execute one thing at a time. If that weren't the case, then running things in parallel wouldn't spawn multiple other sessions!
A single connection cannot execute several statements concurrently.
Yes, one connection can execute more than one statement. It is up to the programmer to choose between connection pooling settings and multiple statements when executing over more than one thread. Most databases on the market can handle multiple statements on one connection.
We're in the process of rewriting a web application in Java, coming from PHP. I think, but I'm not really sure, that we might run into problems with connection pooling. The application itself is multi-tenant, and is a combination of "Separate database" and "Separate schema".
For every Postgres database server instance, there can be more than one database (named schemas_XXX), each holding more than one schema (where a schema is a tenant). On signup, one of two things can happen:
A new tenant schema is created in the highest-numbered schemas_XXX database.
The signup process sees that a database has been fully allocated and creates a new schemas_XXX+1 database. In this new database, the tenant schema is created.
All tenants are known via a central registry (also a Postgres database). When a session is established, the registry resolves the tenant's host, database and schema, and a database session is established for that HTTP request.
Now, the problem I think I'm seeing here is twofold:
A JDBC connection pool is defined when the application starts. By that I mean that all databases (host+database) are known at startup. This conflicts with the signup process.
As I write this, we have ~20 database servers with ~1000 databases each (for a total of ~100k tenant schemas). Given those numbers, I would need 20*1000 data sources for every instance of the application. I'm assuming that all pools are, at one time or another, also started. I'm not sure how many resources a pool allocates, but it must be a non-trivial amount for 20k pools.
So, is it feasible to even assume that a connection pool can be used for this?
For the first problem, I guess a pool with support for JMX can be used, and we'd create a new data source when and if a new schemas_XXX database is created. The larger issue is the huge number of pools. For this, I guess, some sort of pool manager should be used that can terminate a pool that has no open connections (and also start a pool on demand). I have not found anything that supports this.
What options do I have? Or should I just bite the bullet and fall back to an out-of-process connection pool such as PgBouncer, establishing a plain JDBC connection per request, similar to how we're handling it now with PHP?
A few things:
A Connection pool need not be instantiated only at application start-up. You can create or destroy them whenever you want;
You obviously don't want to eagerly create one Connection pool per database or schema to be open at all times. You'd need to keep at least 20000 or 100000 Connections open if you did, a nonstarter even before you get to the non-Connection resources used by the DataSource;
If, as is likely, requests for Connections for a particular tenant tend to cluster, you might consider lazily, dynamically instantiating pools, and destroying them after some timeout if they've not handled a request for a while.
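A rough sketch of that third idea, assuming HikariCP (the JDBC URL, credentials, pool sizes and eviction period are all placeholders): a map of lazily created per-database pools, plus a background task that closes pools nobody has borrowed from in a while.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class TenantPoolManager {

    private static final long IDLE_EVICT_MILLIS = 10 * 60 * 1000; // placeholder timeout

    private static final class PoolEntry {
        final HikariDataSource ds;
        volatile long lastUsed = System.currentTimeMillis();
        PoolEntry(HikariDataSource ds) { this.ds = ds; }
    }

    private final ConcurrentHashMap<String, PoolEntry> pools = new ConcurrentHashMap<>();
    private final ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();

    public TenantPoolManager() {
        reaper.scheduleAtFixedRate(this::evictIdlePools, 1, 1, TimeUnit.MINUTES);
    }

    /** Lazily create (or reuse) the pool for one host+database. */
    public HikariDataSource poolFor(String host, String database) {
        PoolEntry e = pools.computeIfAbsent(host + "/" + database, key -> {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:postgresql://" + key); // key is host/database
            cfg.setUsername("app");       // placeholder credentials
            cfg.setPassword("secret");
            cfg.setMaximumPoolSize(5);    // keep per-database pools small
            cfg.setMinimumIdle(0);        // let the pool shrink to zero when unused
            return new PoolEntry(new HikariDataSource(cfg));
        });
        e.lastUsed = System.currentTimeMillis();
        return e.ds;
    }

    /** Close pools that have not been borrowed from recently. */
    private void evictIdlePools() {
        long cutoff = System.currentTimeMillis() - IDLE_EVICT_MILLIS;
        for (Iterator<Map.Entry<String, PoolEntry>> it = pools.entrySet().iterator(); it.hasNext(); ) {
            PoolEntry e = it.next().getValue();
            if (e.lastUsed < cutoff) {
                it.remove();
                e.ds.close(); // releases the pool's connections
            }
        }
    }
}
```

A real implementation would also need to handle the race where a request grabs a pool at the same moment the reaper closes it, e.g. by locking per entry.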
Good luck!
We have a web service written in Java that connects to an Oracle database for data extraction. Recently, we encountered too many inactive sessions in the Oracle database coming from JDBC, i.e. from our web service.
We are very sure that every connection is being closed and set to null after each process.
Can anyone help us with this? Why does it cause inactive sessions in the database, and what could the solution be?
Thank you.
What, exactly, is the problem?
Normally, the middle tier application server creates a connection pool. When your code requests a connection, it gets an already open connection from the pool rather than going through the overhead of spawning a new connection to the database. When your code closes a connection, the connection is returned to the pool rather than going through the overhead of physically closing the connection. That means that there will be a reasonable number of connections to the database where the STATUS in V$SESSION is "INACTIVE" at any given point in time. That's perfectly normal.
Even under load, most database connections from a middle tier are "INACTIVE" most of the time. A status of "INACTIVE" merely means that at the instant you ran the query, the session was not executing a SQL statement. Most connections will spend most of their time either sitting in the connection pool waiting for a Java session to open them or waiting on the Java session to do something with the data or waiting on the network to transfer data between the machines.
Are you actually getting an error (e.g. ORA-00020: maximum number of processes exceeded)? Or are you just confused by the number of entries in V$SESSION?
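If it's the latter, a quick way to tell is to group the sessions by status. A minimal JDBC sketch (the URL, credentials and the application's schema user are placeholders, and the querying account needs SELECT access to v$session):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SessionStatusCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/service", "monitoring", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT status, COUNT(*) FROM v$session WHERE username = ? GROUP BY status")) {
            ps.setString(1, "WEBSERVICE_USER"); // placeholder schema user
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // A stable count of INACTIVE sessions roughly the size of the
                    // connection pool is normal; a count that only ever grows
                    // suggests connections are not being returned to the pool.
                    System.out.println(rs.getString(1) + ": " + rs.getInt(2));
                }
            }
        }
    }
}
```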