I am new to microservices and I am getting the exception below. It happens only sometimes, and I am not able to proceed.
eureka_1 | 2019-02-25 15:08:22.666 ERROR 1 --- [target_eureka-7] c.n.e.cluster.ReplicationTaskProcessor : It seems to be a socket read timeout exception, it will retry later. if it continues to happen and some eureka node occupied all the cpu time, you should set property 'eureka.server.peer-node-read-timeout-ms' to a bigger value
Any thoughts?
I got this error while trying to run my microservices in a virtual machine. When I raised the VM's memory from 2.5 GB to 10 GB, the error disappeared.
By default, Docker for Windows allocates only 2 GB, so try raising the memory allocation.
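If the timeout itself keeps recurring, the property named in the log can also be raised on the Eureka server; a minimal sketch for a Spring Cloud Eureka server's application.properties (the 5000 ms value is an illustrative assumption, not a recommendation):
eureka.server.peer-node-read-timeout-ms=5000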
I am trying to run a load test for an application. For this I am using JMeter (v4 & v5) on a Red Hat 7.5 Linux VM with 16 GB RAM and 8 vCPUs. The goal is to reach 20k users connected via a µ-service.
However, during the test runs I get the following error on the console:
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread.
Here is my JMeter JVM configuration:
cat bin/jmeter | grep HEAP
HEAP (Optional) Java runtime options for memory management
: "${HEAP:="-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m"}"
Any ideas?
I tried changing the heap size in JMeter, but that didn't seem to help at all.
unable to create new native thread is not something you can work around by increasing the JVM heap; you are exceeding the maximum number of threads, a threshold defined at the OS level.
You will need to raise the nproc value via the ulimit command or by modifying /etc/security/limits.conf to look like:
your_user soft nproc 1024
your_user hard nproc 32768
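To check and raise the limit for the current shell session before launching JMeter (32768 mirrors the hard limit above; pick a value that suits your test):
ulimit -u          # show the current max user processes
ulimit -u 32768    # raise it for this session, up to the hard limit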
Reference: Unable to create new native thread
If you still receive this error after raising the maximum number of processes at the OS level, you will most probably have to switch to Distributed Testing.
I built my app with docker-compose: one container is the database, using the mariadb image; one runs PHP for Laravel (I installed the php-memcached and php-redis extensions for my app); and one cache container is built on the redis Docker image.
At first everything goes well, but after running for 2 or 3 days I get this PHP exception: Connection timed out [tcp://redis:6379]
I monitor CPU, memory, and network usage with Zabbix, which I installed myself on the host server; here is what it shows:
(screenshots: CPU and memory monitoring graphs)
I changed the cache container to memcached, and after 2 or 3 days the same thing happened.
The only way I have found to solve this problem is to restart the system, after which it runs another 2 or 3 days before the same error returns. Since restarting the system in production is not an option, can anyone suggest where to look to solve the problem other than restarting?
Thanks!
I think you are facing a problem with the Redis Docker container. This type of error occurs when memory is exhausted. You need to set the maxmemory parameter of the Redis server.
Advice: try using another Redis image.
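A minimal sketch of setting that limit at runtime via redis-cli (the 256mb cap and the eviction policy are illustrative assumptions; size them for your workload):
redis-cli config set maxmemory 256mb
redis-cli config set maxmemory-policy allkeys-lru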
My production environment has started constantly throwing this error:
Error fetching message: ERR Error running script (call to f_0ab965f5b2899a2dec38dec41fff8c97f7a33ee9): #user_script:56: #user_script: 56: -OOM command not allowed when used memory > 'maxmemory'.
I am using the Heroku Redis addon with a worker dyno running Sidekiq.
Both Redis and the Worker Dyno have plenty of memory right now and the logs don't show them running out.
What is causing this error to be thrown and how can I fix it?
It turned out I had a job that required more memory than was available in order to run.
Run "config get maxmemory" on your redis server. Maybe that config is limiting the amount of memory Redis is using.
I deploy Docker containers on Mesos (0.21) and Marathon (0.7.6) on Google Compute Engine.
I use JMeter to test a REST service that runs on Marathon. With fewer than 10 concurrent requests it works normally, but when concurrent requests exceed 50, the container is killed and Mesos starts another one. I increased RAM and CPU, but it still happens.
This is the log in /var/log/mesos/:
E0116 09:33:31.554816 19298 slave.cpp:2344] Failed to update resources for container 10e47946-4c54-4d64-9276-0ce94af31d44 of executor dev_service.2e25332d-964f-11e4-9004-42010af05efe running task dev_service.2e25332d-964f-11e4-9004-42010af05efe on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/612/cgroup: Failed to open file '/proc/612/cgroup': No such file or directory
The error message you're seeing is actually another symptom, not the root cause of the issue. There's a good explanation/discussion in this Apache Jira bug report:
https://issues.apache.org/jira/browse/MESOS-1837
Basically, your container is crashing for one reason or another and the /proc/pid#/ directory is getting cleared without Mesos being aware, so it throws the error message you found when it goes to check that /proc directory.
Try setting your allocated CPU higher in the JSON file describing your task.
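As a sketch, the resource fields in the Marathon app definition look like this (the id, image, and values here are placeholders, not from the question):
{
  "id": "/dev-service",
  "cpus": 2.0,
  "mem": 1024,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "your/image:latest" }
  }
}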
WebLogic 10.3 gives an out-of-memory error. The following are the things I have done:
Increased -Xms to 512m
Increased -Xmx to 1024m
Increased the max perm size in setdomainenv.bat
Is there any other way to resolve this issue? I have a 2 GB system.
It is a production machine, and the log is around 4 GB. When I analysed the log I found many connection-refused errors.
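For reference, such settings are usually applied via the USER_MEM_ARGS variable in setdomainenv.bat; a sketch (the perm-size value is an illustrative assumption):
set USER_MEM_ARGS=-Xms512m -Xmx1024m -XX:MaxPermSize=256m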
You'll need to profile your application to find the memory leak. It could be open database connections or other resources not being handled properly.
Just increasing Xms and Xmx won't work beyond a point.
Take a heap dump into an HPROF file and analyse it using the Eclipse Memory Analyzer Tool or VisualVM, or monitor the JVM live using JConsole.
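A sketch of capturing that heap dump with the JDK's jmap tool (the PID and output path are placeholders to fill in):
jmap -dump:format=b,file=/tmp/weblogic-heap.hprof <pid>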