I was running some tests to understand the MaxMemory-Reserved and MaxMemory-Policy settings, and we hit a "Server closed the connection" error a few times when the Redis DB was almost full. Here are the details:
1) Created the Redis cache on the Standard C1 (1 GB) tier and chose "allkeys-lru" as the policy and 50 MB as maxmemory-reserved.
2) Ran the redis-benchmark tool to add keys until the Redis DB was almost full.
3) As soon as the DB reached around ~960-980 MB, I ran the benchmark tool again to add some more keys and got the following error. In which scenarios can this error occur?
Note: The connected_clients value was 0 when we ran the info command just before we encountered this error.
4) At the same time, we ran the info command in the Azure portal console and the output was "Error".
5) The error lasted for approximately 2-3 minutes and we were able to add keys after that. Once we ran the info command again, we got the following stats. Here we see that the difference between used_memory and used_memory_rss is around 76 MB. Do you think the above error could be because of this?
info
# Server
redis_version:3.2.3
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
hz:10
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
client_total_writes_outstanding:0
client_total_sent_bytes_outstanding:0
blocked_clients:0
# Memory
used_memory:968991592
used_memory_human:924.10M
used_memory_rss:1049776128
used_memory_rss_human:1001.14M
used_memory_peak:1070912296
used_memory_peak_human:1021.30M
used_memory_lua:37888
maxmemory:1100000000
maxmemory_human:1.02G
maxmemory_policy:allkeys-lru
mem_allocator:jemalloc-3.6.0
Most likely you are running into a scenario with a high number of unauthenticated connections. redis-benchmark first creates all the client connections (in your case -c 400 connections) and only then authenticates them. The delay in authentication causes a high number of unauthenticated connections from a single IP, and Azure Redis Cache closes them for DOS protection. Hence the error "Server closed the connection".
You can try the redis-benchmark from here, which I have modified to authenticate as soon as a connection has been made; that should solve this issue.
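Whichever build of redis-benchmark you use, the invocation against an Azure cache looks roughly like this (the cache name and access key are placeholders; note that redis-benchmark talks plain TCP, so the non-SSL port 6379 has to be enabled on the cache):
redis-benchmark -h <cache-name>.redis.cache.windows.net -p 6379 -a <access-key> -c 400 -n 1000000 -d 1024 -t set,get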
I am just finishing an upload of 8000 assets to candy machine (via the upload command). Everything seemed to be working well when it was creating the bundles and saving them to the cache, but once it started to write the indices I started seeing these two errors on and off:
1)
Waiting 5 seconds to check Bundlr balance.
Requesting a withdrawal of 0.638239951 SOL from Bundlr...
Successfully withdrew 0.638244951 SOL.
Writing all indices in 719 transactions...
Progress: [█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░] 2% | 18/719
Transaction simulation failed: Blockhash not found
Failed writing indices 3682-3691: Transaction was not confirmed in 60.01 seconds. It is unknown if it succeeded or failed.
I have been searching the internet and from what I can tell these errors are out of my control... is this correct? Or what can I do to get these indices to write successfully? It's at 50% progress right now, but I assume the upload is not going to be successful when it finishes. If that is the case, do I need to run the candy machine upload command all over again, or is there a way for me to just run the transaction portion (where it started to fail) again? I've seen some notes on retry but it wasn't completely clear to me.
The upload process took about 2.5 hours so would like to avoid that if at all possible.
Help is very much appreciated.
Both errors are common, so you don't have to worry about them. You should use a custom RPC (pass --rpc-url to the upload command) and wait until the upload command ends. When it ends, use the verify_upload command to see if everything went well (if verify_upload shows an error, run upload again and repeat until verify_upload shows the "ready to deploy" message).
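As a rough illustration (the exact script path and flags depend on the candy machine CLI version you have; the keypair path and RPC URL are placeholders):
ts-node candy-machine-v2-cli.ts upload -e mainnet-beta -k ~/.config/solana/keypair.json --rpc-url https://<your-custom-rpc> ./assets
ts-node candy-machine-v2-cli.ts verify_upload -e mainnet-beta -k ~/.config/solana/keypair.json
A re-run of upload should only retry the items that failed (progress is tracked in the cache file you already saw being written), so you should not have to sit through the full 2.5-hour upload from scratch, but verify_upload is what confirms everything made it.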
I have a problem that's been plaguing me for about a year now. I have Oracle 12.1.x.x installed on my machine. After a day or two the listener stops responding and the listener.log contains a bunch of TNS-12531 messages. If I reboot, the problem goes away and I'm fine for another day or two. I'm lazy and I hate rebooting, so I decided to finally track this down, but I'm having no luck. Since the alternative is to do work that I really don't want to do, I'm going to spend all my time researching this.
Some notes:
Windows 10 Pro
64-Bit
32 GB RAM
Generally, about 20GB free when the error occurs
I have several databases and it doesn't matter which DB is running
Restarting the DB doesn't help
Restarting the listener doesn't help
Only rebooting clears the problem
When I set TRACE_LEVEL_LISTENER = 16, I don't get much more info; the trace files are never written to
I can connect to the DB if I bypass the listener (i.e., set ORACLE_SID=xxx and connect without a DB identifier)
All other network interactions seem to work fine after the listener stops
lsnrctl status hangs and adds another TNS-12531 to the listener.log
I have roughly the same config at home and this does not happen
Below is an example of a listener.log file:
Fri Jul 28 14:21:47 2017
System parameter file is D:\app\user\product\12.1.0\dbhome_1\network\admin\listener.ora
Log messages written to D:\app\user\diag\tnslsnr\LJ-Quad\listener\alert\log.xml
Trace information written to D:\app\user\diag\tnslsnr\LJ-Quad\listener\trace\ora_24288_14976.trc
Trace level is currently 16
Started with pid=24288
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=LJ-Quad)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1521ipc)))
Listener completed notification to CRS on start
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
28-JUL-2017 14:22:06 * 12531
TNS-12531: TNS:cannot allocate memory
28-JUL-2017 14:22:47 * 12531
TNS-12531: TNS:cannot allocate memory
28-JUL-2017 14:26:24 * 12531
TNS-12531: TNS:cannot allocate memory
Thanks a bunch for any help you can provide!
Issue 1
This error can occur approximately after 2048 connections have been made via the listener when running on a non-English Windows installation.
Fix for Issue 1
Create a Windows User Group named Administrators on the computer where the listener.exe resides. This can fix the issue of the listener dying.
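If you prefer to script it instead of using Computer Management, something along these lines from an elevated command prompt should create the group (it errors out harmlessly if a group with that exact name already exists):
net localgroup Administrators /add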
Reference: I'll post the link for the first issue as soon as I find it again
Issue 2
This error can also occur on Windows 64-Bit systems where the Desktop Application Heap is too small.
Fix for Issue 2
Try increasing the Desktop Application Heap registry setting in Windows; it is located at
HKLM\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
Just as a note: don't pick a value on your own; follow the relevant documentation.
Basically, search for the registry entry and alter the third value of the SharedSection parameter so it reads SharedSection=1024,20480,1024. This is a trial-and-error approach, but it seems to improve the listener's stability and memory issues.
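For illustration only (the full registry value is a long string and the default numbers can differ per Windows version; change only the third number of the SharedSection portion and keep a copy of the original value):
SharedSection=1024,20480,768    <-- example of a default value
SharedSection=1024,20480,1024   <-- after increasing the desktop heap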
Reference: TNS:cannot allocate memory - is there limit to the num databases on one box (Oracle Developer Community)
We have MariaDB 10.1, beanstalkd 1.10, and Laravel 4.2.
We have one query that runs successfully without the queue, but when we run it through beanstalkd it has no effect and we get a 'MySQL server has gone away' error in the log file.
config:
wait_timeout = 120
max_allowed_packet = 1024M
Why is the behavior different with and without the queue?
We had similar issues, and it was either because the code was running in a different thread and the connection was being lost, or some strange garbage collection closing the connection for long-running processes.
Anyway, what we implemented is the following (a rough sketch is below):
- when a job is reserved and starts processing, we always reconnect the DB
- when we detect that the connection has gone away, we release the job (so it will be picked up again)
In case it happens in the middle of the processing flow, you may want to reconnect rather than lose the work done so far on that job, if the job is somehow transactional.
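A minimal sketch of how both points might look in a Laravel 4.2 queue handler (the class name, payload, and exception check are illustrative, not our exact code):
<?php
use Illuminate\Support\Facades\DB;

class LongReportJob {

    public function fire($job, $data)
    {
        // Point 1: reconnect before touching the DB. The worker process may
        // have been idle longer than wait_timeout (120s here), so the old
        // connection can already be stale.
        DB::reconnect();

        try {
            // ... run the long query / processing here ...
        } catch (\Exception $e) {
            // Point 2: if the connection has gone away, release the job so
            // another worker (with a fresh connection) picks it up again.
            if (stripos($e->getMessage(), 'server has gone away') !== false) {
                $job->release(10); // retry after 10 seconds
                return;
            }
            throw $e;
        }

        $job->delete();
    }
}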
I have used Redis in my project for caching, and I used Spring for that setup. You can go through the link below to understand what I did in my project.
http://caseyscarborough.com/blog/2014/12/18/caching-data-in-spring-using-redis/
This code had been running fine in the production environment (RHEL 7, EC2 instance) for the last 6 to 8 months. Now it has suddenly started giving an "ERR operation not permitted" error:
org.springframework.dao.InvalidDataAccessApiUsageException: ERR operation not permitted; nested exception is redis.clients.jedis.exceptions.JedisDataException: ERR operation not permitted
at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:44)
Because of this we are unable to fetch data from the Redis server, so our application doesn't work properly.
I searched on this issue and went through links like
redis (error) ERR operation not permitted
This says to check whether "requirepass" in the redis.conf file is commented out or not, but when I looked at the redis.conf file in the production environment, it is commented out.
Even though it's commented out, I ran the command below in redis-cli:
"AUTH foobared"
After running the above command, it didn't work.
Note: when we kill the running Redis instance and restart it, it starts working properly and no longer gives the "ERR operation not permitted" error.
After the Redis restart the system works properly for another one to two hours, then the same issue arises again, and it again goes away after I restart the Redis server.
Note: I tried upgrading the Redis server from 2.6 to 3, but even that didn't work.
Is your Redis exposed to the internet?
It's possibly a CONFIG SET requirepass attack.
See this SO question and antirez's comments here
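A quick way to check while the problem is happening:
redis-cli -h <host> CONFIG GET requirepass
If that now demands authentication even though requirepass is commented out in redis.conf, something set a password at runtime, which is exactly what this attack does. If the instance does not need to be reachable from the internet, the usual hardening in redis.conf is (the bind address is just an example; adjust to your network):
bind 127.0.0.1
requirepass <a strong password>
rename-command CONFIG ""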
So, I have a Java app on Heroku that uses the RedisCloud add-on.
The add-on clearly states that the free version comes with a maximum of 30 connections:
The problem is that I'm getting this error:
ERR max number of clients reached
So the first thing I did, obviously, was check the RedisCloud monitor, and to my surprise it reports a limit of 10 connections:
The question:
Why are we getting a connection limit of 10 on RedisCloud when the limit on the Heroku add-on page says it should be 30?
It appears that your add-on is using an old version of the plan from before we launched our Bigger and Improved XXXL Free plan earlier this year.
The easiest way to resolve that is to use the Heroku Toolbelt and run the command:
heroku addons:upgrade rediscloud:30 -a <your app's name>
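If you want to confirm which plan the app is actually on before and after the upgrade, listing the add-ons should show it (assuming a reasonably current Heroku CLI):
heroku addons -a <your app's name>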