Is there a library to share Oracle Connection Pools across multiple servers? - oracle

Are there open-source solutions to share Oracle connection pools across server instances, so that application instances don't exhaust a limited number of connections? For example, say I have a limit of 100 connections and 10 servers. If I create 2 pools of 50, is there a way to share each pool of 50 across 5 servers?

If by "5 servers" you mean 5 application servers, then you could look at using DRCP (Database Resident Connection Pooling) where the connection pool management is done via the database, and thus multiple app servers can all make use of a common connection pool.
Docs here https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdbc/database-resident-connection-pooling.html#GUID-D4F9DBD7-7DC6-4233-B831-933809173E39
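To make the opt-in concrete, here is a minimal sketch of the Easy Connect string a client would use to route through DRCP. The hostname and service name are invented for illustration; this is plain string construction, no database required:

```python
# Sketch: building an Easy Connect string that routes through DRCP.
# With DRCP enabled on the database, clients opt in by appending
# ":pooled" (Easy Connect) or SERVER=POOLED (TNS descriptor).
# "db.example.com" and "ORCL" are made-up names for illustration.

def easy_connect(host: str, port: int, service: str, pooled: bool = False) -> str:
    """Return an Easy Connect string, optionally targeting the DRCP pool."""
    dsn = f"{host}:{port}/{service}"
    return dsn + (":pooled" if pooled else "")

# A dedicated-server connection string (the default):
print(easy_connect("db.example.com", 1521, "ORCL"))               # db.example.com:1521/ORCL
# The same service, routed through the shared DRCP pool instead:
print(easy_connect("db.example.com", 1521, "ORCL", pooled=True))  # db.example.com:1521/ORCL:pooled
```

Because the pool lives in the database, every app server that uses the pooled string draws from the same shared pool.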

Related

How to handle connection pooling in Oracle

We have lots of microservices which need connections to the DB. Oracle and PostgreSQL can each handle only a limited number of connections. PostgreSQL solves this with PgBouncer: applications connect to its URL and it handles the connection pooling. Is there something similar in Oracle? What I found is Oracle UCP, but I think that is still pooling in the application, not in the DB.
If your client drivers are not too old (pre Oracle 12), you can use Oracle's Database Resident Connection Pooling (DRCP).
See https://docs.oracle.com/en/database/oracle/oracle-database/19/jjdbc/database-resident-connection-pooling.html#GUID-D4F9DBD7-7DC6-4233-B831-933809173E39
Database Resident Connection Pooling
Database Resident Connection Pool (DRCP) is a connection pool in the server that is shared across many clients. You should use DRCP when the number of active connections is significantly lower than the number of open connections.
Database administrators in particular will like this, because it gives them good control over the load on the database.
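To illustrate why that distinction between active and open connections matters, here is a toy Python model (not Oracle's implementation) of the DRCP idea: many short-lived client requests share a small pool of server processes, because only a fraction are active at any instant. The class and the numbers are made up for illustration:

```python
# Toy model of the DRCP idea: many logical client sessions share a
# small pool of server processes, because only a fraction are active
# at any instant. The numbers are illustrative, not Oracle defaults.

class PooledServers:
    def __init__(self, size: int):
        self.free = list(range(size))   # idle server processes
        self.busy = {}                  # client -> server process

    def acquire(self, client: str) -> int:
        server = self.free.pop()        # raises IndexError if the pool is exhausted
        self.busy[client] = server
        return server

    def release(self, client: str) -> None:
        self.free.append(self.busy.pop(client))

pool = PooledServers(size=5)            # 5 server processes...
for i in range(100):                    # ...serving 100 short requests in turn
    pool.acquire(f"client-{i}")
    pool.release(f"client-{i}")         # each releases before the next one needs it
print(len(pool.free))                   # 5 -> all servers are free again
```

With dedicated connections, those 100 clients would have needed 100 server processes; here 5 were enough because the work never overlapped.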

How to scale websocket connection load as one adds/removes servers?

To explain the problem:
With HTTP:
Assume there are 100 requests/second arriving.
If there are 4 servers, the load balancer (LB) can distribute the load across them evenly: 25/second per server.
If I add a server (5 servers total), the LB rebalances to 20/second per server.
If I remove a server (3 servers total), the load per server rises to 33.3/second.
So the load per server is automatically balanced as I add/remove servers, since each connection is short-lived.
With Websockets
Assume there are 100 clients, 2 servers (behind a LB)
The LB initially balances each incoming connection evenly, so each server has 50 connections.
However, if I add a server (3 servers total), the 3rd server gets 0 connections, since the existing 100 clients are already connected to the first 2 servers.
If I remove a server (1 server total), the 50 clients that were connected to it reconnect, and all 100 connections are now served by the 1 remaining server.
Problem
Since websocket connections are persistent, adding/removing a server does not increase/decrease load per server until the clients decide to reconnect.
How does one then efficiently scale websockets and manage load per server?
This is similar to problems the gaming industry has been trying to solve for a long time. That is an area where you have many concurrent connections and you have to have fast communication between many clients.
Options:
Slave/master architecture, where the master retains connections to the slaves to monitor health, load, etc. When someone joins the session/application, they ping the master and the master responds with the next server. This is a kind of client-side load balancing, except you are using server-side heuristics.
This prevents your clients from blowing up a single server. You'll have to have the client poll the master before establishing the WS connection but that is simple.
This way you can also scale out to multi master if you need to and put them behind load balancers.
If you need to send a message between servers there are many options for that (handle it yourself, queues, etc).
This is how my drawing app Pixmap for Android, which I built last year, works. Works very well too.
Client side load balancing where client connects to a random host name. This is how Watch.ly works. Each host can then be its own load balancer and cluster of servers for safety. Risky but simple.
Traditional load balancing, i.e. round robin. Hard to beat haproxy. This should be your first approach and will scale to many thousands of concurrent users. It doesn't solve the problem of redistributing load, though. One way to solve that with this setup is to push an event to your clients telling them to reconnect (and have each client attempt to reconnect after a random timeout so you don't kill your servers).
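The reconnect-with-random-timeout trick from that last option can be sketched like this (a toy Python sketch; the 30-second spread is an arbitrary choice, not a recommendation):

```python
# Sketch of the reconnect-with-random-jitter trick: when told to
# rebalance, each client waits a random delay before reconnecting,
# so the servers are not hit by a thundering herd all at once.

import random

def reconnect_delay(base: float = 1.0, spread: float = 30.0) -> float:
    """Seconds to wait before reconnecting: a base delay plus random jitter."""
    return base + random.uniform(0.0, spread)

# 1000 clients receiving the same "please reconnect" event will spread
# their reconnects over a ~30-second window instead of one instant:
delays = [reconnect_delay() for _ in range(1000)]
print(all(1.0 <= d <= 31.0 for d in delays))   # True
```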

Can the Oracle DRCP connection pool and a middle tier connection pool run simultaneously?

We have several web applications using MyBatis as a connection pool on the web server, against an Oracle 12c database. Now we are adding a new application X, which opens an individual connection for every request and is very inefficient. We would like to enable the Oracle DRCP connection pool and have application X use it, without affecting the previously existing apps that already have a connection pool. Our DBA believes that all the applications would have to use the Oracle DRCP connection pool once it is enabled, meaning we would have to reconfigure the connection method in all of our applications just for application X.
Does anyone know if this is the case, or can you have the Oracle DRCP connection pool running without affecting our middle tier connection pools?
Oracle DRCP does not interfere with any other connection methods you are currently using; it simply adds another option: connecting to the DRCP connection pool.
This was verified when our DBA decided to give it a try and turned on the DRCP connection pool in our Oracle database: we were able to connect to the new DRCP pool with our new application X, and our other applications continued to work without any modification.
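A sketch of why the two coexist: DRCP is opt-in per connect string, so application X can target the pool while the existing apps keep their dedicated-server strings against the same database (hostnames and service names below are invented for illustration):

```python
# Sketch: DRCP is selected per connect string, so pooled and
# dedicated-server connections can target the same database side
# by side. "db.example.com" and "ORCL" are made-up names.

def dsn(host: str, service: str, use_drcp: bool) -> str:
    """Easy Connect string; ':pooled' opts this connection into DRCP."""
    return f"{host}:1521/{service}" + (":pooled" if use_drcp else "")

existing_apps = dsn("db.example.com", "ORCL", use_drcp=False)  # unchanged, dedicated server
app_x = dsn("db.example.com", "ORCL", use_drcp=True)           # opts into the DRCP pool
print(existing_apps)   # db.example.com:1521/ORCL
print(app_x)           # db.example.com:1521/ORCL:pooled
```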

Websockets and scalability

I am a beginner with websockets.
I have a need in my application where the server must notify clients when something changes, and I am planning to use websockets.
Single server instance and single client ==> How many websockets will be created and how many connections to websockets?
Single server instance and 10 clients ==> How many websockets will be created and how many connections to websockets?
Single server instance and 1000 clients ==> How many websockets will be created and how many connections to websockets?
How do you scale websockets when your application has a user base in the 1000's?
Thanks much for your feedback.
1) Single server instance and single client ==> How many websockets will be created and how many connections to websockets?
If your client creates one webSocket connection, then that's what there will be: one webSocket connection on the client and one on the server. It's the client that creates webSocket connections to the server, so it is the client that determines how many there will be. If it creates 3, there will be 3; if it creates 1, there will be 1. Usually, the client would just create 1.
2) Single server instance and 10 clients ==> How many websockets will be created and how many connections to websockets?
As described above, it depends upon what the client does. If each client creates 1 webSocket connection and there are 10 clients connected to the server, then the server will see a total of 10 webSocket connections.
3) Single server instance and 1000 clients ==> How many websockets will be created and how many connections to websockets?
Same as point #2.
4) How do you scale with websockets when your application has a user base in the 1000's?
A single server, configured appropriately can handle hundreds of thousands of simultaneous webSocket connections that are mostly idle since an idle webSocket uses pretty much no server CPU. For even larger scale deployments, one can cluster the server (run multiple server processes) and use sticky load balancing to spread the load.
There are many other articles like these on Google worth reading if you're pursuing large scale webSocket or socket.io deployments:
The Road to 2 Million Websocket Connections in Phoenix
600k concurrent websocket connections on AWS using Node.js
10 million concurrent webSockets
Ultimately, the achievable scale for a properly configured server will likely have more to do with how much activity there is per connection and how much computation is needed to service that activity.
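The sticky load balancing mentioned above can be sketched with a simple hash of a stable client id (a toy Python sketch; a real deployment would use the balancer's ip-hash or cookie-based stickiness, and the server names here are invented):

```python
# Sketch of sticky load balancing for clustered webSocket servers:
# hashing a stable client id to a server keeps each client's
# connection (and its reconnects) on the same server process, so
# any per-client state stays local to that process.

import hashlib

SERVERS = ["ws-a", "ws-b", "ws-c", "ws-d"]   # made-up cluster members

def pick_server(client_id: str) -> str:
    """Deterministically map a client id to one server in the cluster."""
    digest = hashlib.sha256(client_id.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# The same client always lands on the same server:
print(pick_server("user-42") == pick_server("user-42"))   # True
```

Note the trade-off: a plain modulo mapping reshuffles most clients when the server list changes, which is why production systems often use consistent hashing or balancer-managed stickiness instead.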

Can application sessions share F5 load balancer connections?

I understand that two sessions cannot use a connection at the exact same time. But I had thought it was possible for multiple sessions to share the same connection or pipe, similar in principle to a threaded application, where processing of execution is time-sliced.
The reason I bring this up is I'm somewhat perplexed by how the F5 load balancer manages connections, in relation to application sessions. I have a web server which connects to the F5, which load balances 2 application servers:
Client (i.e. laptop) --> Web Server --> F5 Load balancer for app servers --> (app server 1, app server 2)
I was looking at the number of connections on the F5 load balancer the other day and it showed 7 connections to app server 1 and 3 connections to app server 2; 10 total connections. But the actual application had hundreds of sessions. How could this be? If there are 1000 sessions, let's say, that averages out to 1 connection per 100 sessions. Something doesn't add up here, I'm thinking.
Can and does the F5 load balancer distinguish between inbound connections and outbound connections? If so, how can I see both inbound and outbound connections? I'm thinking that, perhaps, there are 10 inbound connections from the web server to the load balancer and 1000 outbound connections (because 1000 sessions) to the app servers.
I'm thinking it should be possible to queue or share multiple sessions per connections but maybe that's not how it works, particularly with load balancers. Any help making sense of all of this would be most appreciated. Thank you.
If you were using the OneConnect feature, this is exactly what it's intended for. The BIG-IP manages an internal session state for these connections and can reuse and maintain server-side sessions for multiple external connections.
Useful for potentially bulky applications, but it can cause issues if you have internal applications that reuse unique keys for session state (Java is a good example).
SOL7208 - Overview of the Oneconnect Profile
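A toy Python model of what OneConnect-style multiplexing does (not the BIG-IP implementation): many client sessions are carried over a small set of reusable server-side connections, which is why the F5 can show ~10 connections to the app servers while the application has hundreds of sessions:

```python
# Toy model of OneConnect-style connection multiplexing: server-side
# connections are returned to a reuse pool after each request instead
# of being closed, so many client sessions share a few connections.

class MultiplexedBackend:
    def __init__(self):
        self.idle = []      # reusable server-side connections
        self.opened = 0     # total server-side connections ever opened

    def handle_request(self, session_id: str) -> None:
        # Reuse an idle server-side connection if one exists, else open one.
        conn = self.idle.pop() if self.idle else self._open()
        # ... forward this session's request over `conn` ...
        self.idle.append(conn)   # back to the reuse pool, not closed

    def _open(self):
        self.opened += 1
        return object()          # stand-in for a real TCP connection

backend = MultiplexedBackend()
for i in range(1000):            # 1000 sequential session requests
    backend.handle_request(f"session-{i}")
print(backend.opened)            # 1 -> one server-side connection, reused throughout
```

With overlapping (concurrent) requests the pool would grow to match the peak concurrency, not the session count, which matches the 10-connections-for-hundreds-of-sessions observation in the question.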
