What is the maximum number of connections? - rethinkdb

Since the native RethinkDB drivers do not support connection pooling yet, I was wondering, what is the maximum number of connections to the RethinkDB server?

There are a couple answers to this question:
Connection Pooling
There are some third-party drivers that do have connection pooling. rethinkdbdash, for example, is a great Node.js driver that has connection pooling.
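For illustration, here's a minimal sketch of rethinkdbdash's pooling (assuming a local server on the default port; max and buffer are rethinkdbdash's pool-size options):

// rethinkdbdash manages a connection pool internally.
const r = require('rethinkdbdash')({
  host: 'localhost',
  port: 28015,
  max: 1000,  // maximum number of connections in the pool
  buffer: 50  // minimum number of connections kept open
});

// No explicit connect/close: each query borrows a pooled connection.
r.table('users').limit(10).run()
  .then(users => console.log(users))
  .catch(err => console.error(err));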
Maximum Number Of Connections
I'm not sure there's a hard limit on the number of connections on the RethinkDB side; in practice, users usually run into the OS's connection limits long before hitting any limit in RethinkDB itself. Basically, RethinkDB can easily handle thousands of connections without a problem.

Related

How to handle connection pooling in Oracle

We have lots of microservices which need connections to the DB. Oracle and PostgreSQL can handle only a limited number of connections. PostgreSQL solves this with PgBouncer: the application connects to its URL, and it handles connection pooling. Is there something similar in Oracle? What I found is Oracle UCP, but I think this is still pooling in the application, not in the DB.
If your client drivers are not too old (pre Oracle 12), you can use Oracle's Database Resident Connection Pooling.
See https://docs.oracle.com/en/database/oracle/oracle-database/19/jjdbc/database-resident-connection-pooling.html#GUID-D4F9DBD7-7DC6-4233-B831-933809173E39
Database Resident Connection Pooling
Database Resident Connection Pool (DRCP) is a connection pool in the server that is shared across many clients. You should use DRCP when the number of active connections is much lower than the number of open connections.
Database administrators in particular will love this, because it gives them good control over the load on the database.
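As a sketch of what this looks like from a client (using node-oracledb and a hypothetical host/service name), opting into DRCP is mostly a matter of requesting a pooled server in the connect string:

const oracledb = require('oracledb');

async function countEmployees() {
  // Hypothetical credentials/host; the ':pooled' suffix requests a DRCP server.
  const conn = await oracledb.getConnection({
    user: 'scott',
    password: 'tiger',
    connectString: 'dbhost.example.com/orclpdb:pooled'
  });
  try {
    const result = await conn.execute('SELECT COUNT(*) FROM employees');
    console.log(result.rows);
  } finally {
    await conn.close(); // releases the server process back to the DRCP pool
  }
}

countEmployees().catch(console.error);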

ADO connection with C++

I am working with an ADO connection in C++, where I connect to the database, fetch records, and close the connection.
My problem is that we have to connect, fetch records, and close the connection every time for a single user, and we have more than one million users.
Is there any way we can keep the connection alive, or have some pooling mechanism running in the background, so we don't have to reconnect every time?
I searched for connection pooling with C++ but could not find anything.
Thanks in advance
Connections are pooled by default with classic ADO (which uses OLE DB services for connection pooling). The cost of the physical network connection is incurred only during the initial open. As long as you properly close/release connections, the subsequent overhead is minimal, without any extra code to keep connections persistent.
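As a rough sketch of that pattern (shown here in JScript via Windows Script Host rather than C++, since the pooling happens in the OLE DB layer either way; the provider and server names are hypothetical):

// Open, fetch, close - the physical connection is kept in the OLE DB session pool.
var conn = new ActiveXObject('ADODB.Connection');
conn.Open('Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;');
var rs = conn.Execute('SELECT TOP 10 name FROM users');
while (!rs.EOF) {
  WScript.Echo(rs.Fields('name').Value);
  rs.MoveNext();
}
rs.Close();
conn.Close(); // returns the connection to the pool; a later Open with the same string reuses it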

How many SSE connections can a web server maintain?

I'm experimenting with server-sent events (SSE) as an alternative to websockets for real-time data pushing (data in my application is primarily one-directional).
How scalable would this be? I know that each SSE connection uses an HTTP request -- does this mean that a web server can handle as many SSE connections as HTTP requests (something like this answer)? I feel as though this might be the case, but I'm not sure how an SSE connection works and whether it is substantially more complex or resource-hungry than a simple HTTP request.
I'm mostly wondering how this compares to the number of concurrent WebSockets a server can keep open. This answer suggests that only ~1400-1800 sockets can be handled by a server at the same time.
Can someone provide some insight on this?
(To clarify, I am not asking about how many SSE connections can be kept open from the client; I am asking about how many can be reasonably kept open by a web server.)
Tomcat 8 and above (to give a web-server example) uses the NIO connector for handling incoming requests. It can service a maximum of 10,000 concurrent connections (per the docs). The docs do not state a maximum number of connections per se. They also provide another parameter, acceptCount, which is the fallback queue used once connections exceed 10,000.
Socket connections are treated as files. Every incoming connection to Tomcat opens a socket, and, depending on the OS (in Linux, for example), each one consumes a file descriptor subject to the file-descriptor policy. When too many connections are open, or the maximum has been reached, you will find a common error like the following:
java.net.SocketException: Too many open files
You can change the number of open files by editing
/etc/security/limits.conf
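For example, raising the descriptor limit for a hypothetical tomcat user might look like this (the values are illustrative, not recommendations):

tomcat soft nofile 65535
tomcat hard nofile 65535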
It is not clear what the maximum allowed limit is. Some say the default for Tomcat is 1,096, but the default for Linux is 30,000, which can be changed.
In the article I shared, the LinkedIn team was able to reach 250K connections on one host.
That should give you a pretty good idea of the maximum number of SSE connections possible: it depends on your web server's max-connection configuration, OS capacity, and so on.
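To make concrete how lightweight an SSE connection is, here is a minimal sketch of an SSE endpoint in plain Node.js (each connection is just an ordinary HTTP response that is kept open):

const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/events') {
    // An SSE connection is a regular HTTP response with this content type, kept open.
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive'
    });
    // Push an event every 5 seconds until the client disconnects.
    const timer = setInterval(() => res.write('data: ' + Date.now() + '\n\n'), 5000);
    req.on('close', () => clearInterval(timer));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);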

Is it necessary to close database connections?

[using the JavaScript driver]
I've seen on several of the rethinkdb example projects that connections are closed at the end of each query (e.g., conn.close())
While I understand the pedagogical reasons to include that in the tutorials, is manually closing connections actually better for performance? I am under the impression that the connection will automatically close once it goes out of scope.
You should not leave connections open.
You should:
1) Open connections as late as possible
2) Close connections as soon as possible
The connection itself is returned to the connection pool (when your driver pools connections). Connections are a limited and relatively expensive resource. Any new connection you establish with exactly the same connection string can reuse a connection from the pool.
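With the official RethinkDB JavaScript driver (which, per the question at the top, has no built-in pool), the usual pattern is to close explicitly, e.g. in a try/finally (hypothetical table name):

const r = require('rethinkdb');

async function getUsers() {
  const conn = await r.connect({ host: 'localhost', port: 28015 });
  try {
    const cursor = await r.table('users').run(conn);
    return await cursor.toArray();
  } finally {
    conn.close(); // the connection is not closed automatically when it goes out of scope
  }
}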

Design/Architecture: web-socket one connection vs multiple connections

When designing a client/server architecture, is there any advantage to multiplexing multiple sessions over one WebSocket connection from the same process (i.e., sharing one connection) vs. opening one WebSocket connection per thread/session in the client (as is typically done when connecting to memcached or database servers)?
I'm aware of the overhead associated with each connection (e.g., RAM). But I expect to have at most 1K-10K sessions on each client side.
Specific use case:
Let's assume I have a remote server with multiple sessions on one side, and multiple clients on the other side; each client connects to a different session through the WebSocket server.
On the remote server, there are two ways to implement it: (1) each session creates its own WebSocket connection; (2) all sessions share the same WebSocket connection.
From a connection point of view, I like the sharing solution (one WebSocket connection for all sessions), because the WebSocket server is limited by the number of connections available (saving servers/scaling).
But from a traffic/data-speed/performance point of view, if the sessions send lots of small packets through the connection, a single shared connection may not let us use the bandwidth efficiently (e.g., by collecting several small packets into one, or splitting a big packet into smaller ones), because we may have to send different packets to different clients from different sessions. In that case we cannot batch the small packets, since they have different destinations and come from different sources, unless we create a "virtual connection" layer that manages each session's data to maximize throughput; but that would add a lot of implementation complexity.
Any other opinions?
Thanks,
JB.
I think you should consider using a limited connection pool, as is done in database connection architectures.
Another solution I would consider is a pub/sub middleman such as Redis. This lets you use existing solutions and scale more easily.
To the best of my understanding, both having a single connection and using a multitude of connections have their issues.
For example, one connection can send only one message at a time. A big enough message could block the connection... are you moving big data?
Many connections can cause an overhead that could be very expensive as well as introduce more chances for errors. Consider the following:
Creating new connections is very expensive: it uses bandwidth, suffers longer network delays, and requires local resources, which is exactly what WebSockets allow us to avoid.
You will run into scalability issues. For instance, Heroku limits websocket connections to 600 per server, or at least they did so a short while back (and I think it's reasonable)... How will you connect all the servers together to one data-store?
Remember that every OS has an open-file limit and that WebSockets use the OS's I/O architecture (each connection is an 'open file'), so WebSockets are a limited resource.
Regarding traffic/data speed/performance, it is a question of server architecture... but I believe you will actually see a slight speed increase by using one connection (or a small pool of connections). It's important to remember that there isn't any effective multi-tasking when you need to send TCP/IP packets.
Also, with a limited number of connections (even with one connection), you will be able to benefit from the OS's packet-joining behavior, which allows a number of WebSocket frames to be sent in one TCP/IP packet (unless you constantly flush the TCP/IP socket). You will actually waste more bandwidth with more connections - even disregarding the bandwidth used to open each new connection.
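As a rough sketch of the "virtual connection" idea from the question (a hypothetical envelope format, not a standard protocol), multiplexing sessions over one WebSocket can be as simple as tagging each frame with a session id:

// Client side, using the browser WebSocket API; the endpoint is hypothetical.
const ws = new WebSocket('wss://server.example.com/mux');
const handlers = new Map(); // sessionId -> message callback

function openSession(sessionId, onMessage) {
  handlers.set(sessionId, onMessage);
}

function send(sessionId, payload) {
  // Envelope: every frame carries its session id.
  ws.send(JSON.stringify({ sid: sessionId, data: payload }));
}

ws.onmessage = (event) => {
  // Demultiplex: route each incoming frame to its session's handler.
  const { sid, data } = JSON.parse(event.data);
  const handler = handlers.get(sid);
  if (handler) handler(data);
};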
Just my 5 cents, we will all think differently, I'm sure.
Good Luck!
