packets.go:123: closing bad idle connection: connection reset by peer - go

I am using Go, the Fiber web framework, MariaDB 10.6, Debian 11, and github.com/go-sql-driver/mysql to connect to MariaDB. I have played with these settings:
db.SetMaxOpenConns(25)
db.SetMaxIdleConns(25)
db.SetConnMaxLifetime(5 * time.Minute)
i.e. I have increased and decreased the values, but I still get one or two warnings like
packets.go:123: closing bad idle connection: connection reset by peer
per minute. Any suggestions?
The answer: the server had wait_timeout at 20 seconds and interactive_timeout at 50 seconds. I increased them and now it's fixed. Thanks to ysth for the solution.
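For anyone who can't raise the server timeouts, the client-side equivalent is to keep the pool's connection lifetime below wait_timeout, so the driver retires idle connections before MariaDB drops them. A minimal sketch (the DSN is a placeholder; the 20-second wait_timeout is the value from the question):

package main

import (
    "database/sql"
    "time"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // Placeholder DSN; replace with your own credentials and host.
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/dbname")
    if err != nil {
        panic(err)
    }
    defer db.Close()
    db.SetMaxOpenConns(25)
    db.SetMaxIdleConns(25)
    // With wait_timeout = 20s on the server, retire connections well before
    // the server does, so the pool never hands out an already-dead one.
    db.SetConnMaxLifetime(15 * time.Second)
}

On Go 1.15+, db.SetConnMaxIdleTime can additionally retire connections that have sat idle for a while, which targets exactly the idle connections the warning complains about.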

Related

Postgres connect time delay on Windows

There is a long delay between "forked new backend" and "connection received", from about 200 to 13000 ms. Postgres 12.2, Windows Server 2016.
During this delay the client is waiting for the network packet to start the authentication. Example:
14:26:33.312 CEST 3184 DEBUG: forked new backend, pid=4904 socket=5340
14:26:33.771 CEST 172.30.100.238 [unknown] 4904 LOG: connection received: host=* port=56983
This was discussed earlier here:
Postgresql slow connect time on Windows
But I have not found a solution.
After rebooting the server the delay is much shorter, about 50 ms. Then it gradually increases in the course of a few hours. There are about 100 clients connected.
I use ip addresses only in "pg_hba.conf". "log_hostname" is off.
There is BitDefender running on the server but switching it off did not help. Further, Postgres files are excluded from BitDefender checks.
I used Process Monitor which revealed the following: Forking the postgres.exe process needs 3 to 4 ms. Then, after loading DLLs, postgres.exe is looking for custom and extended locale info of 648 locales. It finds none of these. This locale search takes 560 ms (there is a gap of 420 ms, though). Perhaps this step can be skipped by setting a connection parameter. After reading some TCP/IP parameters, there are no events for 388 ms. This time period overlaps the 420 ms mentioned above. Then postgres.exe creates a thread. The total connection time measured by the client was 823 ms.
Locale example, performed 648 times:
"02.9760160","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale","REPARSE","Desired Access: Read"
"02.9760500","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale","SUCCESS","Desired Access: Read"
"02.9760673","RegQueryValue","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale\bg-BG","NAME NOT FOUND","Length: 532"
"02.9760827","RegCloseKey","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale","SUCCESS",""
"02.9761052","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale","REPARSE","Desired Access: Read"
"02.9761309","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale","SUCCESS","Desired Access: Read"
"02.9761502","RegQueryValue","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale\bg-BG","NAME NOT FOUND","Length: 532"
"02.9761688","RegCloseKey","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale","SUCCESS",""
No events for 388 ms:
"03.0988152","RegCloseKey","HKLM\System\CurrentControlSet\Services\Tcpip6\Parameters\Winsock","SUCCESS",""
"03.4869332","Thread Create","","SUCCESS","Thread ID: 2036"

Does HikariCP support a command timeout in a Spring Boot application similar to C#

I am using the Hikari connection pool in my Spring Boot application. I have enabled connectionTimeout using the following configuration:
spring.datasource.hikari.connectionTimeout: 30000
If I increase the number of concurrent users, I get the following exception in the logs:
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection;
nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
I am perfectly fine with the above exception; I can increase the number of connections. But my concern is that a few endpoints take more than 2 minutes to respond. Those endpoints get a DB connection from the pool but take more time to process. Is there a setting where I can specify a timeout so that if a DB operation takes more than some time (say 40 seconds) it throws a SQLException, similar to the command timeout in C#?
application.properties
# A list of all Hikari parameters with a good explanation is available on https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby
# This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. Default: same as maximumPoolSize
spring.datasource.hikari.minimumIdle: 10
# This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections. Basically this value will determine the maximum number of actual connections to the database backend.
# Default: 10
spring.datasource.hikari.maximumPoolSize: 20
#This property controls the maximum number of milliseconds that a client (that's you) will wait for a connection from the pool. If this time is exceeded without a connection becoming available, a SQLException will be thrown.
#Lowest acceptable connection timeout is 250 ms. Default: 30000 (30 seconds)
spring.datasource.hikari.connectionTimeout: 30000
# This property controls the maximum amount of time that a connection is allowed to sit idle in the pool. This setting only applies when minimumIdle is defined to be less than maximumPoolSize
# Default: 600000 (10 minutes)
spring.datasource.hikari.idleTimeout: 600000
# This property controls the maximum lifetime of a connection in the pool. An in-use connection will never be retired, only when it is closed will it then be removed.
# Default: 1800000 (30 minutes)
spring.datasource.hikari.maxLifetime: 1800000
# This property sets a SQL statement that will be executed after every new connection creation before adding it to the pool. Default: none
spring.datasource.hikari.connectionInitSql: SELECT 1 FROM DUAL
As explained in this comment, it seems the connectionTimeout value is used as the connection-acquisition timeout.
You can try setting it to a higher value than 30 seconds and see if that helps.
The Druid connection pool has a setting that lets you configure the query timeout:
spring.datasource.druid.query-timeout=10000
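HikariCP itself only times the pool acquisition; the closest JDBC analogue to C#'s CommandTimeout is Statement.setQueryTimeout, which Spring exposes on JdbcTemplate. A sketch under that assumption (the DAO class and table name are made up for illustration):

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ReportDao {
    private final JdbcTemplate jdbc;

    public ReportDao(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
        // JDBC analogue of C#'s CommandTimeout: statements running longer
        // than 40 seconds fail with a SQLTimeoutException (wrapped by Spring).
        this.jdbc.setQueryTimeout(40);
    }

    public int slowCount() {
        // "big_table" is a placeholder for the slow query in question.
        return jdbc.queryForObject("SELECT COUNT(*) FROM big_table", Integer.class);
    }
}

Recent Spring Boot versions also expose this as a property, e.g. spring.jdbc.template.query-timeout: 40s, which applies to the auto-configured JdbcTemplate.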

Set infinite session timeout but limited per request timeout

I'm trying to quickly connect to a couple thousand sites (some up, some down), but it seems that setting aiohttp.ClientTimeout(total=60) and passing that to ClientSession means only 60 seconds are allowed in total for all sites. After about a minute they all quickly fail with concurrent.futures._base.TimeoutError. I tried raising that timeout, which fixes that failure, but then all of the tasks end up hung on non-responding sites for its entire length.
Is it possible to disable the total timeout but have a per-request timeout of 60 seconds? (Edited) There does seem to be a timeout parameter on session.get(...); however, it seems to override the session timeout and cause the entire session to time out when it expires, not just that request. If I set my ClientSession timeout to 600 but the session.get timeout to 15, all requests fail after 15 seconds.
I want to get through my full list of a couple thousand sites waiting at most 60 seconds for each connection, but with no total time limit. Is the only way to do this to create a new session for each request?
timeout = aiohttp.ClientTimeout(total=60)
connector = aiohttp.TCPConnector(limit=40)
dummy_jar = aiohttp.DummyCookieJar()
tasks = []
async with aiohttp.ClientSession(connector=connector, timeout=timeout, cookie_jar=dummy_jar) as session:
    for site in sites:
        task = asyncio.ensure_future(make_request(session, site, connection_pool))
        tasks.append(task)
    await asyncio.wait(tasks)
connection_pool.close()
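One approach that may do what the question asks, assuming aiohttp 3.x: disable the session-wide total and bound each request with per-socket timeouts instead, so every connection attempt and read is capped at 60 seconds while the batch as a whole has no deadline. A sketch (the fetch helper and URL are illustrative, not from the original code):

import asyncio
import aiohttp

async def fetch(session, url):
    # Failures (timeouts, refused connections, DNS errors) are returned
    # per-site instead of cancelling the whole batch.
    try:
        async with session.get(url) as resp:
            return url, resp.status
    except (aiohttp.ClientError, asyncio.TimeoutError) as exc:
        return url, exc

async def main(sites):
    # total=None removes the whole-session deadline; sock_connect/sock_read
    # cap each connection attempt and each read at 60 seconds.
    timeout = aiohttp.ClientTimeout(total=None, sock_connect=60, sock_read=60)
    connector = aiohttp.TCPConnector(limit=40)
    async with aiohttp.ClientSession(connector=connector, timeout=timeout) as session:
        return await asyncio.gather(*(fetch(session, s) for s in sites))

# results = asyncio.run(main(["http://example.com"]))

This avoids creating a new session per request while still bounding each individual connection.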

af:poll timeout is not working as expected

I'm using af:poll in a page.
<af:poll binding="#{backingBeanScope.backing_pages_xyzView.p2}"
id="p2"
interval="#{sessionScope.manage_Template.dataRefreshRate}"
pollListener="#{backingBeanScope.backing_pages_xzy.pollTablexyzView}"
partialTriggers="resId1" timeout="36000000"/>
I have set a large value for the timeout parameter because I don't want this page to time out after the default 10-minute period.
However, the page still times out after 10 minutes.
Why is this happening? If I set a timeout shorter than 10 minutes, it works.
Here is the documentation I referred to: http://docs.oracle.com/html/E12419_09/tagdoc/af_poll.html
Note: This Oracle web app is configured to run on Tomcat 7.0.16 (ADF version: 11.1.1.7).
UPDATE: When I remove the partialTriggers="resId1" it works. Why?

Windows Snmp Management Api - Snmp timeout/retry doesn't appear to work

I'm noticing some odd SNMP communication behavior when using the MS SNMP Management API, in terms of timeout and retries. I was wondering if the Management API is supported on Windows Server 2008 R1 x64. My program is a C++ 64-bit SNMP extension agent that uses the Management API to communicate with other agents as well.
This is my pseudo code:
SnmpMgrOpen(ip address, 150ms timeout, 3 retries)
start = getTickCount()
result = SnmpMgrRequest(get request with 3 or 4 OIDs)
finish = getTickCount()
if (result == some error)
{
    log Error including total time (i.e. finish - start ticks)
}
SnmpMgrClose()
When the SnmpMgrRequest call times out, the total time is anywhere from 1014 ms to 5000 ms. If I set retries to 0, the total time is still 1014 ms to 5000 ms.
I would expect that, with retries set to 0, SnmpMgrRequest would time out within 150 ms; the documentation seems to imply this. Am I missing something? Is there a minimum timeout period of at least a second? What could be causing this behavior?
Any help would be greatly appreciated. I'm at a loss here.
From my experience with SNMP on Windows platforms, the minimum timeout value is 1 second: even if you set it to any lower value, it will default to 1 second.
Also, the timeout value is doubled for retries, so with a 150 ms / 3-retry configuration you will, in the worst case, get a failed response to a request after 1 + 2 + 2 + 2 = 7 seconds.
I hope this helps.
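To see that floor in practice, one can time a single request the way the question's pseudocode does. A sketch against the documented mgmtapi.h signatures (the agent address, community string, and OID are placeholders; link with mgmtapi.lib and snmpapi.lib):

#include <windows.h>
#include <snmp.h>
#include <mgmtapi.h>
#include <cstdio>

int main() {
    // 150 ms timeout, 0 retries; per the answer above, Windows floors this at ~1 s.
    // "192.168.1.50" and "public" are placeholders for a real agent and community.
    LPSNMP_MGR_SESSION session = SnmpMgrOpen((LPSTR)"192.168.1.50", (LPSTR)"public", 150, 0);
    if (session == NULL) {
        std::printf("SnmpMgrOpen failed: %lu\n", GetLastError());
        return 1;
    }

    AsnObjectIdentifier oid;
    SnmpMgrStrToOid((LPSTR)".1.3.6.1.2.1.1.1.0", &oid); // sysDescr.0

    SnmpVarBind vb = { oid, { ASN_NULL } };
    SnmpVarBindList vbl = { &vb, 1 };
    AsnInteger32 errorStatus = 0, errorIndex = 0;

    // Time the request the same way the pseudocode does with getTickCount().
    ULONGLONG start = GetTickCount64();
    SNMPAPI ok = SnmpMgrRequest(session, SNMP_PDU_GET, &vbl, &errorStatus, &errorIndex);
    ULONGLONG elapsed = GetTickCount64() - start;

    std::printf("result=%d errorStatus=%ld elapsed=%llu ms\n", (int)ok, (long)errorStatus, elapsed);
    SnmpMgrClose(session);
    return 0;
}

With a 150 ms / 0-retry configuration, the elapsed time printed here should still be at least about a second if the 1-second floor described above holds.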
