Python concurrent requests and mysql connection pool - discord.py

Earlier I was opening and closing a connection for each request, but then I learned about MySQL connection pooling, which keeps connections open so we can reuse them whenever required, which ultimately saves time and improves efficiency. Now my question is: the maximum size of a connection pool is 32, so imagine a situation where more than 32 requests are executing concurrently; in that case it will raise an error. Currently I am doing this:
db.py
from mysql.connector import connect, pooling
from mysql.connector.errors import PoolError

pool = None

def init():
    global pool
    pool = pooling.MySQLConnectionPool(
        pool_name='mypool',
        pool_size=32,
        pool_reset_session=True,
        **dbconfig,  # dbconfig is defined elsewhere in this module
    )

def do_work():
    try:
        con = pool.get_connection()
    except PoolError:
        # all 32 pooled connections are in use, fall back to a standalone connection
        con = connect(**dbconfig)
    # execute queries here
    con.close()
It checks for an available connection in the pool; if all of them are busy, it opens a new one.
So, is this the best approach, or is there a better alternative?
Note: this is a single-threaded application running entirely on asyncio; basically it's a Discord bot running on a busy server.
Also, what would be an ideal size for a connection pool? Thanks!

Related

Vertx - Closing connections - JDBC and others

I have a verticle which consumes a message from the event bus and processes it. I have a question as to when the JDBC connection should be closed. There are two approaches:
Closing the connection once the message is processed. But this will be very expensive, because I will open/close a connection every time.
Trusting that Vert.x will close the connection when the verticle is stopped/undeployed (which is literally never) and that there won't be any memory leaks as long as the connection is open. I will open the connection in the start() method, so that it is available whenever there is a message.
On the other hand, if I have an Elasticsearch backend and I am using the Elasticsearch SDK, which has a specific method to close the client, when should the connection really be closed?
Use a connection pool; that will take away most of the cost of opening/closing connections. When using a connection pool, closing the connection returns it to the pool for re-use.
The basic usage pattern is:
try (Connection connection = dataSource.getConnection()) {
    // use connection
}
At the end of the block the connection is closed, which - if dataSource has a connection pool - will make it available for re-use.
You can always put your clean-up code in the stop() method of the Verticle interface. It will be called when the verticle starts its undeploy procedure.
See Vert.x Docs
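A minimal sketch of that pattern, assuming a verticle that receives a pooled DataSource from whoever deploys it and listens on an event bus address "work.queue" (the class name, the address and the DataSource wiring are illustrative placeholders, not part of the original question):
import io.vertx.core.AbstractVerticle;
import java.sql.Connection;
import javax.sql.DataSource;

public class WorkerVerticle extends AbstractVerticle {

    private final DataSource dataSource; // pooled DataSource, passed in by the deployer
    private Connection connection;       // long-lived resource opened in start()

    public WorkerVerticle(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void start() throws Exception {
        connection = dataSource.getConnection();
        vertx.eventBus().consumer("work.queue", msg -> {
            // process the message using `connection`
        });
    }

    @Override
    public void stop() throws Exception {
        // called when the verticle is undeployed; release the resource here
        if (connection != null) {
            connection.close();
        }
    }
}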

What would happen if a process established multiple PostgreSQL connections and terminated without closing them?

I'm writing a DLL for a piece of purchased software.
The software will perform multi-threaded calculations on certain tasks.
My job is to output the relative result into a database.
However, due to the limited support of the software, it is kind of difficult to do multi-threaded output of the data.
The key problem is that there is no info on the last execution of the DLL function.
Therefore, the database connection will not be closed.
So may I ask: if I leave the connection open and terminate the process, what would the potential problems be?
My platform is Windows Server 2008 with PostgreSQL 10.
I don't understand the background information you are giving, but I can answer the question:
If a PostgreSQL client process dies without closing the database (and TCP) connection, the PostgreSQL server process (“backend process”) that serves this connection will not realize this immediately.
Of course, as soon as the server tries to communicate with the client, e.g. to return some results, it will notice that the partner has gone away and will return an error.
However, the backend process is often idle, waiting for the client to send the next request. In this case it would never notice that its partner has died, which could eventually cause max_connections to be exhausted by dead connections.
Because this is a common problem in networking, TCP provides the “keepalive” functionality: when a connection has been idle for a while (2 hours by default), the operating system will send a so-called “keepalive packet” and wait for a response from the other side. Sending keepalive packets is repeated several times (5 times by default) in short intervals (1 second by default), and if no response is received, the connection is closed by the operating system, the backend process receives an error message and terminates.
PostgreSQL provides parameters with which you can configure these settings on the server side: tcp_keepalives_idle, tcp_keepalives_interval and tcp_keepalives_count. If you set tcp_keepalives_idle to a shorter value, dead connections will be detected and removed faster, at the cost of a little added network traffic.
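As a hedged illustration, the settings could be tightened on the server like this (the values are examples only, and whether each parameter is honoured depends on the server's operating system); they can equally be set in postgresql.conf:
-- send the first keepalive probe after 5 minutes of idle time instead of 2 hours
ALTER SYSTEM SET tcp_keepalives_idle = 300;
-- probe every 10 seconds, give up after 5 unanswered probes
ALTER SYSTEM SET tcp_keepalives_interval = 10;
ALTER SYSTEM SET tcp_keepalives_count = 5;
-- these parameters only need a configuration reload, not a restart
SELECT pg_reload_conf();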

Sql Azure and JDBC connection pool

Does SQL Azure allow 3rd-party connection pools like HikariCP or BoneCP?
We configured HikariCP and it works when we first run the app, but later the database stops responding to requests. Is it a HikariCP issue, or a common connection pool issue that isn't worth spending more time investigating?
HikariConfig config = new HikariConfig();
config.setMaximumPoolSize(50);
config.setDriverClassName(env.getProperty("jdbc.driverClassName"));
config.setJdbcUrl(env.getProperty("jdbc.url"));
config.setUsername(env.getProperty("jdbc.user"));
config.setPassword(env.getProperty("jdbc.pass"));
config.addDataSourceProperty("cachePrepStmts", env.getProperty("jdbc.cachePrepStmts"));
config.addDataSourceProperty("prepStmtCacheSize", env.getProperty("jdbc.prepStmtCacheSize"));
config.addDataSourceProperty("prepStmtCacheSqlLimit", env.getProperty("jdbc.prepStmtCacheSqlLimit"));
config.addDataSourceProperty("useServerPrepStmts", env.getProperty("jdbc.useServerPrepStmts"));
See this SQL Azure page re: Connection Constraints.
Maximum allowable durations are subject to change depending on the resource usage.
A logged-in session that has been idle for 30 minutes will be terminated
automatically. We strongly recommend that you use the connection pooling and
always close the connection when you are finished using it so that the unused
connection will be returned to the pool. For more information about connection
pooling, see Connection Pooling.
See if any of these errors match up to your logs. Search that page for "terminated" and "busy" to find error codes that might be relevant to your issue.
I would suggest setting the maxLifetime property in HikariCP to 15 minutes, and the idleTimeout to 2 minutes.
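A sketch of that suggestion applied to the configuration above (HikariCP takes both values in milliseconds; the env properties are the same ones used in the question):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.concurrent.TimeUnit;

HikariConfig config = new HikariConfig();
config.setJdbcUrl(env.getProperty("jdbc.url"));
config.setUsername(env.getProperty("jdbc.user"));
config.setPassword(env.getProperty("jdbc.pass"));
// retire every connection well before SQL Azure's 30-minute idle termination
config.setMaxLifetime(TimeUnit.MINUTES.toMillis(15));
// drop connections that sit idle in the pool for more than 2 minutes
config.setIdleTimeout(TimeUnit.MINUTES.toMillis(2));
HikariDataSource dataSource = new HikariDataSource(config);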
There is nothing on the SQL Azure side that would prohibit you from using a 3rd party connection pool. My guess is that the connection failed between the server and the client and the client didn't remove the connection from the pool.
Moving forward, I'd ensure that whichever 3rd-party connection pool you end up using tests that the connection is still alive before handing it out of the pool for use.
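With HikariCP specifically, checkout validation is done automatically through JDBC4's Connection.isValid(); if your driver doesn't support that, a hedged fallback is an explicit test query on the config object shown above:
// only needed for drivers without JDBC4 isValid() support
config.setConnectionTestQuery("SELECT 1");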
Hope that helps.

google app engine cloud sql connection never closes

I'm in the development stage of an app and I don't make many server/Cloud SQL calls, but for some reason I have an average of 400 usage hours a month.
When I look at the Cloud SQL active connections dashboard I see there is always at least one active connection, but the read/write operations are usually at 0, apart from the occasional small bumps.
I create a new connection each time I make a request to the server/Cloud SQL and close the connection when I return the response.
The connection code is (I followed the guestbook tutorial/example):
Class.forName("com.mysql.jdbc.GoogleDriver");
this.dbUrl = "jdbc:google:mysql://trivia9991:triviadb?user=root";
this.dbConn = DriverManager.getConnection(dbUrl);
The connection-closing code is:
this.dbConn.close();
How can this keep a connection open at all times?
If the connection-closing code is actually running, this should not be the issue. You should make sure that the connection is closed even if an exception occurs beforehand.
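A minimal sketch of that, reusing the connection code from the question (the SELECT 1 query is just a placeholder; imports needed are java.sql.Connection, DriverManager, Statement and ResultSet):
Class.forName("com.mysql.jdbc.GoogleDriver");
String dbUrl = "jdbc:google:mysql://trivia9991:triviadb?user=root";
try (Connection dbConn = DriverManager.getConnection(dbUrl);
     Statement stmt = dbConn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT 1")) {
    while (rs.next()) {
        // build the response from the result set
    }
}   // dbConn, stmt and rs are all closed here, even if an exception was thrown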
It is also possible that a connection you made using the MySQL command line client is still open.
You can examine what connections are open by connecting using the MySQL command line client and running a SHOW PROCESSLIST; statement.

Connection pool opens more connections than the maximum pool size

Hey, I'm using Glassfish open source v4 and I'm having a weird problem.
I have defined a JDBC connection pool to Oracle 11g in the admin console and I've set :
Pool Settings
Initial and Minimum Pool Size: 500
Maximum Pool Size: 1000
Pool Resize Quantity: 750
And I've created a specific user for this connection pool. Yet sometimes when I inspect the open connections in the database I see that there are more than 1000 (the maximum I've seen was 1440).
When this happens any query attempt fails, sometimes with an OutOfMemory exception; some show HTTP thread interruptions, and some don't show any logs at all and just take a long time.
What I am wondering is: how is it possible that Glassfish opens more connections than I've defined?
First, try to compare the output of netstat on the application server and the DB server side. You may have some "dangling" connections. Also try to find some documentation about DCD (Dead Connection Detection) in Oracle.
A few years ago I saw situations where the Java application server thought that a connection was dead because it had not responded for a few minutes. So this connection was put onto a dead-connection list and a new connection was created.
There can also be network issues - for example, a firewall between the application and DB server.
When a TCP connection is not active for one hour it gets cut on one side, but the DB server does not know about that.
The usual way to investigate this is (the last steps are sketched below):
compare the output of both netstat(s) (appl./db)
identify dangling TCP connections
translate a TCP connection to the Unix process id (PID) of the Oracle session process
translate the PID to an Oracle session (SID and SERIAL#)
kill the session at the Oracle level (alter system kill session ...)
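A hedged sketch of those last steps, assuming a dedicated-server configuration where v$process.spid matches the OS PID you identified with netstat (the PID value is a placeholder):
-- map the OS process id (from netstat) to an Oracle session
SELECT s.sid, s.serial#
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  p.spid = '12345';   -- placeholder PID taken from netstat

-- then kill that session using the values returned above
ALTER SYSTEM KILL SESSION '<sid>,<serial#>';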
