Connection pool opens more connections than maximum pool size - Oracle

Hey, I'm using GlassFish Open Source v4 and I'm having a weird problem.
I have defined a JDBC connection pool to Oracle 11g in the admin console and I've set:
Pool Settings
Initial and Minimum Pool Size: 500
Maximum Pool Size: 1000
Pool Resize Quantity: 750
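For reference, the same three settings expressed as asadmin commands (the pool name MyOraclePool is a placeholder, and the dotted attribute names assume GlassFish's standard syntax):
asadmin set resources.jdbc-connection-pool.MyOraclePool.steady-pool-size=500
asadmin set resources.jdbc-connection-pool.MyOraclePool.max-pool-size=1000
asadmin set resources.jdbc-connection-pool.MyOraclePool.pool-resize-quantity=750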
And I've created a specific user for this connection pool. Yet sometimes when I inspect the open connections in the database, I see that there are more than 1000 (the maximum I've seen was 1440).
When this happens, query attempts fail: some with an OutOfMemory exception, some with HTTP thread interruptions, and some produce no logs at all and simply take a long time.
What I am wondering is: how is it possible that GlassFish opens more connections than I've configured it to?

First, try to compare the output of netstat on the application server and the DB server side. You may have some "dangling" connections. Also try to find documentation about DCD (Dead Connection Detection) in Oracle.
A few years ago I saw situations where a Java application server decided a connection was dead because it had not responded for a few minutes. Such a connection was put onto a dead-connection list and a new connection was created.
There can also be network issues - for example, a firewall between the application and DB server.
When a TCP connection is inactive for an hour, it may be cut off on one side while the DB server knows nothing about it.
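If DCD turns out to be relevant, it is enabled on the database server side in sqlnet.ora; the value is a probe interval in minutes (10 is just an illustrative choice):
[sqlnet.ora]
SQLNET.EXPIRE_TIME=10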
The usual way to investigate this is:
compare the output of netstat on both sides (application/DB)
identify the dangling TCP connections
translate each TCP connection to the Unix process ID (PID) of the Oracle session process
translate the PID to the Oracle session (SID and SERIAL#)
kill the session at the Oracle level (alter system kill session ...), as sketched below
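The last two steps in SQL, as a sketch: it assumes the standard v$process/v$session views, and the PID (12345) and the SID/SERIAL# pair (123,45678) are placeholders for the values you actually find:
-- translate the OS process ID (from netstat) to the Oracle session
SELECT s.sid, s.serial#, s.username, s.status
FROM v$session s
JOIN v$process p ON p.addr = s.paddr
WHERE p.spid = '12345';
-- then kill that session using the SID and SERIAL# returned above
ALTER SYSTEM KILL SESSION '123,45678';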

Related

ORDS was unable to make a connection to the database

I have been seeing this issue frequently since implementing ORDS in our R12.2.9 upgrade. Our ORDS is hosted on a WebLogic server, and the issue occurs when there are 10 connections updating a single table. Is there any setting for maximum connection control?
Complete Error:
ORDS was unable to make a connection to the database. This can occur if the database is unavailable, the maximum number of sessions has been reached or the pool is not correctly configured. The connection pool named: |apex|pu| had the following error(s): Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException: All connections in the Universal Connection Pool are in use
That error means the pool has been exhausted. 10 is the DEFAULT pool size, and is almost NEVER correct for a production deployment.
It's very likely that a decently active application will use all 10 connections from the pool, resulting in the exact error you are seeing.
So the answer: increase the max connections property for your pool, and restart ORDS. The hard part is: based on your application performance and activity profile, how large should the pool be?
Some good advice from our Real World Performance Team can be found here.
You can use the jdbc.MaxLimit parameter when configuring ORDS. It defaults to 10 as the maximum number of connections.
jdbc.MaxLimit
Specifies the maximum number of connections.
Defaults to 10. (Might be too low for some production environments.)
Using a command like java -jar ords.war set-property jdbc.MaxLimit 50 would set the maximum number of connections to 50 (after reloading ORDS or restarting WebLogic).
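Under the hood, that command simply persists the value into the ORDS configuration; in legacy ords.war deployments this typically lands in the defaults.xml file of the configuration directory as an XML-properties entry (the exact file name and layout may differ by version):
<entry key="jdbc.MaxLimit">50</entry>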

How to preserve a database connection while a thread is blocked

The database I'm integrating with is configured so that if a connection is idle (not used for a while), the connection is dropped. Since I'm using Spring Batch in a persistent configuration, there is always an active database connection on running threads.
One of my Spring Batch jobs depends on data from an external web service that takes a long time to execute. That's why the database connection is already lost by the time I get the result.
I tried using a TaskScheduler to register a heartbeat query (select 1 from dual) before the web request occurs, executing the query every 5 minutes to keep the connection alive. But even though the query executes periodically, I suspect it runs on a separate connection since it runs on another thread.
Does anyone have an alternative suggestion for keeping the connection alive while the thread is blocked?
I use JPA's EntityManager for the heartbeat query.
If you use Spring, then you are most likely using HikariCP. Recent versions of the JDBC standard define the method Connection.isValid(), so you do not have to issue SQL to check whether a connection is alive.
Moreover, there is one more mechanism you can use: TCP keepalive.
If you insert the stanza ENABLE=BROKEN into your JDBC URL, the Oracle JDBC driver will enable the TCP keepalive feature on the connection:
jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=myhost))(CONNECT_DATA=(SERVICE_NAME=orcl)))
Then it will be the Linux kernel that sends keepalive probes over the TCP connection, even while your thread is blocked.
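A minimal sketch tying both ideas together, assuming HikariCP and the Oracle thin driver are on the classpath (host, service name, and credentials are placeholders):
import java.sql.Connection;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class KeepaliveExample {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        // ENABLE=BROKEN in the long-form URL asks the driver to turn on TCP keepalive
        config.setJdbcUrl("jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)"
                + "(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=myhost))"
                + "(CONNECT_DATA=(SERVICE_NAME=orcl)))");
        config.setUsername("scott");
        config.setPassword("tiger");
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            // JDBC 4.0 liveness check - no "select 1 from dual" round trip needed
            System.out.println("still alive: " + conn.isValid(5));
        }
    }
}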
Beware: the delay before the first probe and the probe frequency are determined by Linux kernel parameters:
# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
75
# cat /proc/sys/net/ipv4/tcp_keepalive_probes
9
By default, the first keepalive probe (a TCP segment carrying 0 bytes of data) is sent after 2 hours, while Cisco/Juniper devices usually cut off idle TCP connections after one hour.
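If a firewall in the path drops idle connections after one hour, the two-hour default is clearly too slow; the kernel parameter can be lowered at runtime (the value below is illustrative):
# send the first keepalive probe after 10 minutes of idle time instead of 2 hours
sysctl -w net.ipv4.tcp_keepalive_time=600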

TCPListener in golang: error when number of connections is above 60 / 260

I am building a TCP proxy: client <-> proxy <-> Vertica
I have a net.TCPListener, which accepts incoming requests via AcceptTCP() and creates connections, then connects to the destination socket via net.DialTCP("tcp", nil, raddr). It works like a bridge - the default proxy model.
Firstly, in the first version, I had a problem: with 59 parallel incoming requests, everything is fine. But with one more (60) there is trouble: connections 1-59 are OK, but 60 and beyond fail. I can't catch the error properly; it looks like some socket closes unexpectedly.
Secondly, I tried to set up a queue for the listener. That helped a lot, but with more than 258 requests I get the error again.
My question: is there any connection limit in the net package? Or maybe it is a system limitation?
For context: Vertica runs in a Docker container; hw/system: a MacBook; the Vertica connection pool limit is 5, but the pool logic is implemented in the proxy.
I also tried a "raw" proxy without the pool logic (that's why I set the queue for the listener: I must not exceed the threshold of the Vertica user's pool); the result is the same 258-request ceiling.
UPDATE (05.04.2020):
It looks like it's the system's limitations at fault. Did I mention anywhere that I was trying to run the whole system on one PC?
So, what I had:
300 parallel processes as requests (made with Python's multiprocessing.Pool), i.e. 300 sockets
a listener that creates 300 connections (another 300 sockets)
a series of rapidly created/closed sockets deep inside the proxy (per the queue and the Vertica pool)
What I have now:
300 Python requests made from another PC on my local network (running Windows)
the proxy works fine
but I get several errors on the Windows PC that creates the requests to my proxy - errors like low memory in the swap file
I still need to run some stress tests on the proxy. Adjusting the swap file size didn't solve the problem on the Windows PC. I will be grateful for any suggestions and ideas. Thanks!
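One thing worth checking, given the ~258 ceiling on the MacBook: the default per-process open-file limit on macOS is 256 descriptors, and every socket consumes one. This is only an assumption about the cause, but the numbers line up:
$ ulimit -n
256
$ ulimit -n 4096    # raise the soft limit for the current shell session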
How does the proxy connect to Vertica?
By default, a maximum of 50 ordinary mortal users can be connected to one Vertica node at any one time. The superuser "dbadmin" always has 5 additional connections on top of that.
So if I try to connect 60 times as dbadmin, I get this on a default Vertica configuration:
Connection attempt failed: FATAL 4060: New session rejected due to limit, already 55 sessions active
You can increase the Vertica config item MaxClientSessions from its default of 50 per node.
The command is, for example: ALTER NODE <node_name> SET MaxClientSessions = 100;
I suppose you are always connecting to the same Vertica node, and that you have set ConnectionLoadBalancing to FALSE. So you always connect to the same node, and soon reach the default maximum of 50.
Hope that's the reason ...
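To confirm how many sessions are actually open per node, you can query Vertica's sessions system view (a sketch; the column names assume the standard v_monitor.sessions layout):
SELECT node_name, user_name, client_hostname
FROM v_monitor.sessions;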

How to configure the H2 connection pool when starting from the command line?

I start the H2 DB as a TCP server. Looking at the documentation, I found how to set up a connection pool from Java code.
I wonder: is it possible to create a connection pool from the command line and set the maximum number of connections in the pool?
As I understand it, a connection pool is a kind of separate "client" that sits in front of the database and keeps one or more connections open. Your program asks the connection pool - and not the database - for a connection to the database. The connection pool checks whether an idle connection is available and hands it to your program.
You just start the database as a "normal" TCP server as documented in http://h2database.com/html/tutorial.html#using_server. But it's still up to you to write the connection pool program.
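As noted above, the pool program lives in the client, not in the server. A minimal sketch using H2's own org.h2.jdbcx.JdbcConnectionPool (URL and credentials are placeholders):
import java.sql.Connection;
import org.h2.jdbcx.JdbcConnectionPool;

public class H2PoolExample {
    public static void main(String[] args) throws Exception {
        // a pool of JDBC connections to the H2 TCP server started earlier
        JdbcConnectionPool pool = JdbcConnectionPool.create(
                "jdbc:h2:tcp://localhost/~/test", "sa", "");
        pool.setMaxConnections(20);  // the cap is enforced by the pool, client-side
        try (Connection conn = pool.getConnection()) {
            System.out.println("connected: " + conn.isValid(5));
        }
        pool.dispose();  // close all pooled connections
    }
}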

How can I set the timeout on OCILogon2?

When the Oracle 10 databases are up and running fine, OCILogon2() connects immediately. When the databases are turned off or inaccessible due to network issues, it fails immediately.
However, when our DBAs go into emergency maintenance and block incoming connections, it can take 5 to 10 minutes to time out.
This is problematic for me since I've found that OCILogon2 isn't thread-safe and we can only use it serially - and I connect to quite a few Oracle DBs. 3 blocked servers x 5-10 minutes = 15 to 30 minutes of lockup time.
Does anyone know how to set the OCILogon2 connection timeout?
Thanks.
I'm currently playing with OCI and it seems to me that it's impossible.
The only way I can think of is to use non-blocking mode. You'll need OCIServerAttach() and OCISessionBegin() instead of OCILogon() in this case. But when I tried this, OCISessionBegin() kept returning OCI_ERROR with the following error code:
ORA-03123 operation would block
Cause: The attempted operation cannot complete now.
Action: Retry the operation later.
It looks strange, and I don't yet know how to deal with it.
A possible workaround is to run your logon in another process, which you can kill after a timeout...
We think we found the right file setting - but it's one of those problems where we have to wait until something rare and horrible occurs before we can verify it :-/
[sqlnet.ora]
SQLNET.OUTBOUND_CONNECT_TIMEOUT=60
From the Oracle docs:
http://download.oracle.com/docs/cd/B28359_01/network.111/b28317/sqlnet.htm#BIIFGFHI
5.2.35 SQLNET.OUTBOUND_CONNECT_TIMEOUT
Purpose
Use the SQLNET.OUTBOUND_CONNECT_TIMEOUT parameter to specify the time, in seconds, for a client to establish an Oracle Net connection to the database instance.
If an Oracle Net connection is not established in the time specified, the connect attempt is terminated. The client receives an ORA-12170: TNS:Connect timeout occurred error.
The outbound connect timeout interval is a superset of the TCP connect timeout interval, which specifies a limit on the time taken to establish a TCP connection. Additionally, the outbound connect timeout interval includes the time taken to be connected to an Oracle instance providing the requested service.
Without this parameter, a client connection request to the database server may block for the default TCP connect timeout duration (approximately 8 minutes on Linux) when the database server host system is unreachable.
The outbound connect timeout interval is only applicable for TCP, TCP with SSL, and IPC transport connections.
Default
None
Example
SQLNET.OUTBOUND_CONNECT_TIMEOUT=10
