I noticed that the connection to PostgreSQL is pretty slow.
import psycopg2
import time

start_time = time.time()
try:
    # Local connection to a freshly installed PostgreSQL on port 5433
    db = psycopg2.connect("dbname='xx' user='xxx' host='127.0.0.1' password='xxx' port='5433'")
except Exception as e:
    print(e)
    exit(1)
print('connect time', time.time() - start_time)
The usual connect time is 2.5-3.5 seconds:
connect time 3.3095390796661377
It's pretty much the default configuration of a freshly installed PostgreSQL.
I turned off log_hostname, but it changed nothing. I have run both PostgreSQL 9.4 and 10, and both have the same problem.
I'm using this machine for development, but even so, I noticed it because my Django requests take 2.5-3.5 seconds, which makes it unbearable even for development.
Windows 10
Python 2/3
psycopg2 2.7.4
Here are the relevant logs with max debug from PostgreSQL:
2018-03-19 21:24:43.654 +03 [10048] DEBUG: 00000: forked new backend, pid=21268 socket=5072
2018-03-19 21:24:43.654 +03 [10048] LOCATION: BackendStartup, postmaster.c:4099
2018-03-19 21:24:45.248 +03 [21268] LOG: 00000: connection received: host=127.0.0.1 port=9897
It forks a new backend and only logs "connection received" about 2 seconds later.
UPD
Even if I manage to avoid PostgreSQL's connection delay (for example via pgbouncer, or when PostgreSQL is running in Docker), the request still takes 1.3-2 seconds. From the first packet sent to the last it is only 0.022 seconds; I don't know what happens during the rest of the time, but it is not network communication between client and server. The same code run from within Docker takes 0.025 seconds; from Windows it takes 1.3-2 seconds, even though the network interaction is only 0.022 seconds.
There are actually two problems, which may or may not have the same cause:
1) PostgreSQL does not send the packet for 1.8 seconds, for an unknown reason.
2) Even with the first problem eliminated and the network interaction down to 0.022 seconds, the whole thing still takes 1.3-2 seconds (using either psql or psycopg2).
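To see where the time goes in the second case, a minimal sketch along the lines of the snippet above (same placeholder connection string) times the connect and a trivial query separately:

import time
import psycopg2

start = time.time()
db = psycopg2.connect("dbname='xx' user='xxx' host='127.0.0.1' password='xxx' port='5433'")
print('connect:', time.time() - start)

start = time.time()
cur = db.cursor()
cur.execute('SELECT 1')   # trivial round trip; locally this should be a few milliseconds
cur.fetchone()
print('query:', time.time() - start)

cur.close()
db.close()

If the connect step alone accounts for the 1.3-2 seconds while the query stays in the millisecond range, the delay is in connection establishment rather than in the query traffic itself.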
Related
I am trying to run long SOAK tests (24 h) in JMeter while monitoring server CPU/RAM utilization, using the PerfMon server agent and plugin. The tests are run headless, using the JMeter Docker image. I have everything set up, and the job runs fine in Jenkins. Measurements are sent to the server every 10 minutes. The test results are saved to CSV during the test.
However, although the test runs for 24 hours, the PerfMon agent seems to send data for only around 2 hours; that is how much data I can see saved in the CSV file. Whether the test runs for 5 hours or 24, only 2 hours of data is saved.
I wonder what causes this; ideally I would like to see data saved for the whole duration of the test. I would appreciate any comments. Cheers
The issue was caused by a 'packet_write_wait: connection to 3.126.218.168 port 22: broken pipe' error.
This was resolved by keeping the SSH session alive, i.e. modifying the ~/.ssh/config file and setting suitable values for 'Host *', 'ServerAliveInterval', and 'ServerAliveCountMax'.
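For reference, the corresponding ~/.ssh/config entry looks roughly like this (the interval and count below are illustrative values, not the ones from the original setup):

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3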
I have a problem that's been plaguing me about a year now. I have Oracle 12.1.x.x installed on my machine. After a day or two the listener stops responding and the listener.log contains a bunch of TNS-12531 messages. If I reboot, the problem goes away and I'm fine for another day or two. I'm lazy and I hate rebooting, so I decided to finally track this down, but I'm having no luck. Since the alternative is to do work that I really don't want to do, I'm going to spend all my time researching this.
Some notes:
Windows 10 Pro
64-Bit
32 GB RAM
Generally, about 20GB free when the error occurs
I have several databases and it doesn't matter which DB is running
Restarting the DB doesn't help
Restarting the listener doesn't help
Only rebooting clears the problem
When I set TRACE_LEVEL_LISTENER = 16, I don't get much more info, and the trace files are not written to
I can connect to the DB if I bypass the listener (i.e., set ORACLE_SID=xxx and connect without a DB identifier)
All other network interactions seem to work fine after the listener stops
lsnrctl status hangs and adds another TNS-12531 to the listener.log
I have roughly the same config at home and this does not happen
Below is an example of a listener.log file:
Fri Jul 28 14:21:47 2017
System parameter file is D:\app\user\product\12.1.0\dbhome_1\network\admin\listener.ora
Log messages written to D:\app\user\diag\tnslsnr\LJ-Quad\listener\alert\log.xml
Trace information written to D:\app\user\diag\tnslsnr\LJ-Quad\listener\trace\ora_24288_14976.trc
Trace level is currently 16
Started with pid=24288
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=LJ-Quad)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1521ipc)))
Listener completed notification to CRS on start
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
28-JUL-2017 14:22:06 * 12531
TNS-12531: TNS:cannot allocate memory
28-JUL-2017 14:22:47 * 12531
TNS-12531: TNS:cannot allocate memory
28-JUL-2017 14:26:24 * 12531
TNS-12531: TNS:cannot allocate memory
Thanks a bunch for any help you can provide!
Issue 1
This error can occur after approximately 2048 connections have been made via the listener when running on a non-English Windows installation.
Fix for Issue 1
Create a Windows User Group named Administrators on the computer where the listener.exe resides. This can fix the issue of the listener dying.
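On such a system, creating the group can be done from an elevated command prompt; this is a sketch of the idea rather than a confirmed part of the original fix:

net localgroup Administrators /add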
Reference: I'll post the link for the first issue as soon as I find it again
Issue 2
This error can also occur on Windows 64-Bit systems where the Desktop Application Heap is too small.
Fix for Issue 2
Try increasing the Desktop Application Heap registry value in Windows. It is located at:
HKLM\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
Just as a note, don't change this value on your own; rely on the relevant documentation.
Basically, search for the registry entry and alter the third value of the key SharedSection=1024,20480,1024. This is a trial-and-error approach, but it seems to improve the listener's stability and memory issues.
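Purely as an illustration (the actual data in that registry key is a much longer string, and the number here is not a recommendation), the change amounts to raising the third number of the SharedSection triple, for example:

before: SharedSection=1024,20480,1024
after:  SharedSection=1024,20480,2048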
Reference: TNS:cannot allocate memory - is there limit to the num databases on one box (Oracle Developer Community)
We have MariaDB 10.1, beanstalkd 1.10, and Laravel 4.2.
We have one query that runs successfully without the queue, but when we run it through beanstalkd nothing is affected and we get a 'MySQL server has gone away' error in the log file.
config:
wait_timeout = 120
max_allowed_packet = 1024M
Why is the behavior different with and without the queue?
We had similar issues; either it was because the code was running in a different thread and the connection was being lost, or because of strange garbage collection closing the connection in long-running processes.
Anyway, what we implemented is:
- when a job is reserved and starts processing, we always reconnect to the DB
- when we detect a connection that has gone away, we release the job (so it will be picked up again)
In case it happens in the middle of the processing flow, you may want to reconnect without losing the work done so far on that job, if the job is somehow transactional.
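The stack in the question is PHP/Laravel, but here is a short Python sketch of the pattern described above (reconnect when a job starts, release the job when the connection has gone away). The queue object's delete/release methods mirror the beanstalkd verbs, and the connection parameters are placeholders:

import pymysql

def open_db():
    # Placeholder connection parameters; open a fresh connection for each job.
    return pymysql.connect(host="127.0.0.1", user="app", password="xxx", database="app")

def handle_job(queue, job):
    conn = open_db()                          # always reconnect when the job starts
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")           # stand-in for the job's real query
        conn.commit()
        queue.delete(job)                     # job finished, remove it from the queue
    except pymysql.err.OperationalError:      # e.g. "MySQL server has gone away"
        queue.release(job)                    # put the job back so it is picked up again
    finally:
        conn.close()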
I was running some tests to understand MaxMemory-Reserved and MaxMemory-Policy, and we faced a 'Server closed the connection' error a few times when the Redis DB was almost full. Here are the details:
1) Created the Redis cache with the Standard C1 (1 GB) tier and chose "allkeys-lru" and maxmemory-reserved of 50 MB.
2) Ran the redis-benchmark tool to add keys to the Redis DB to make sure the DB is almost full.
3) As soon as the DB reached around 960-980 MB, ran the benchmark tool again to add some more keys and got the following error. In which scenarios can this error occur?
Note: The connected_clients value was 0 when we ran the info command just before we encountered this error.
4) At the same time, ran the info command in the Azure Portal console and got "Error" as the output.
5) This error lasted for approximately 2-3 minutes, and we were able to add keys after that. Once we ran the info command again, we got the following stats. Here we see that the difference between used_memory and used_memory_rss is around 76 MB. Do you think the above error could be because of this?
info
# Server
redis_version:3.2.3
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
hz:10
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
client_total_writes_outstanding:0
client_total_sent_bytes_outstanding:0
blocked_clients:0
# Memory
used_memory:968991592
used_memory_human:924.10M
used_memory_rss:1049776128
used_memory_rss_human:1001.14M
used_memory_peak:1070912296
used_memory_peak_human:1021.30M
used_memory_lua:37888
maxmemory:1100000000
maxmemory_human:1.02G
maxmemory_policy:allkeys-lru
mem_allocator:jemalloc-3.6.0
Most likely you are running into a scenario of many unauthenticated connections. redis-benchmark first creates all the client connections (in your case -c 400 connections) and then authenticates them. The delay in authentication causes a high number of unauthenticated connections from a single IP, and Azure Redis Cache closes them for DoS protection; hence the error 'Server closed the connection'.
You can try the redis-benchmark from here, which I have modified to authenticate as soon as a connection has been made; that should solve this issue.
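For context, the kind of invocation being discussed looks roughly like this (cache host name and access key are placeholders); note that, as described above, the stock tool opens all -c connections before it authenticates them:

redis-benchmark -h mycache.redis.cache.windows.net -p 6379 -a <access-key> -c 400 -n 100000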
Connecting to a Heroku Postgres database using the DBeaver client, after enabling SSL and using org.postgresql.ssl.NonValidatingFactory, takes about 4-5 minutes.
Is this behavior normal?
Just uncheck "Show non-default databases".
It hangs/freezes because it tries to load the huge number of databases on that host.
No, this is not normal at all. Connecting should be quite instantaneous. A few seconds is already way too long, and minutes is completely off, so something else is going on.
If you try connecting with some other tool (like psql directly), do you have the same problem? I'd check there to make sure your code or some dependency is not doing something odd.
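For example, a quick test outside DBeaver with psql (connection details are placeholders; Heroku requires SSL) would look something like:

psql "postgres://<user>:<password>@<host>.compute-1.amazonaws.com:5432/<dbname>?sslmode=require"

If psql connects quickly here, the delay is specific to the DBeaver setup (for example the non-default database listing mentioned in the other answer).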