I am trying to run multiple Docker containers in a single network, but as soon as I reached 7 containers this problem started: if I start one container, another exits automatically. I have increased memory to 3 GB and CPU to 100%. This looks like a resource problem, but how do I solve it?
Related
I am trying to run 2 tasks on the same EC2 container instance. The container instance is running on a t2.large EC2 instance.
One of the tasks (which is a daemon) starts fine and is RUNNING.
The other task, which is an application, does not start, and I see the following errors in the Events tab:
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient CPU units available. For more information, see the Troubleshooting section.
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient memory available. For more information, see the Troubleshooting section.
I looked at the CPU and memory section for the container instance and the values are -
         Registered   Available
CPU      1024         1014
Memory   985          729
My task definition for the task that does not run has the following CPU and memory values -
"memory": 512,
"cpu": 10
The daemon that successfully runs on the same EC2 container instance also has the same values for memory and CPU.
I read through the AWS docs at https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-instance-requirement-error/ and tried to reduce the CPU and memory requirements for the test-service task definition, but nothing helped. I also changed the instance type to something bigger, but that did not help either.
Can someone please help me with what CPU and memory values I should use for both tasks (daemon and application) so they can run on the same EC2 container instance?
Note: I cannot add another container instance to the ECS cluster.
The task definition sets the CPU limit to 10 units, which is probably insufficient in your case. ECS can manage CPU resources dynamically when you set the value to 0; however, that is not possible for memory.
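As a rough sketch of what that could look like, assuming the task definition is registered with the AWS CLI (the family, container name, image, and memory value below are placeholders, not taken from the question):

# Hypothetical task definition snippet: "cpu": 0 lets ECS share CPU dynamically
# instead of reserving a fixed number of units, as described above.
cat > taskdef.json <<'EOF'
{
  "family": "test-service",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "cpu": 0,
      "memory": 512,
      "essential": true
    }
  ]
}
EOF

# Register the revised task definition, then point the service at the new revision.
aws ecs register-task-definition --cli-input-json file://taskdef.json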
I built my app with docker-compose: one container is a database using the mariadb image, one runs PHP for Laravel (I installed the php-memcached or php-redis extension for my app), and one is a cache container built on the redis Docker image.
At first everything goes well, but after running for 2 or 3 days I get the PHP exception: Connection timed out [tcp://redis:6379];
I monitor CPU, memory, and network usage with Zabbix, which I installed myself on the host server, but I got these errors:
(Screenshots: Zabbix CPU and memory monitoring graphs.)
I changed the cache container to memcached, and after 2 or 3 days the same thing happened.
The only way I have found to solve this problem is to restart the system, after which it runs for another 2 or 3 days before hitting the same error. Restarting the system is not an option in production, so can anyone suggest how to solve the problem without restarting?
Thanks!
I think you are facing a problem with the Redis Docker container. This type of error occurs when memory is exhausted. You need to set the maxmemory parameter of the Redis server.
Advice: please also try another Redis image.
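One possible way to set that, sketched under the assumption that you use the stock redis image (the 256 MB cap and the eviction policy are placeholders to tune for your host), is to pass the limit on the server command line; the same arguments can go in the command: entry of the redis service in docker-compose.yml:

# Hypothetical example: start Redis with an explicit memory cap and an eviction
# policy so it evicts old keys instead of misbehaving once the cap is reached.
docker run -d --name cache redis:latest \
  redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru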
I deploy Docker containers on Mesos (0.21) and Marathon (0.7.6) on Google Cloud Engine.
I use JMeter to test a REST service that runs on Marathon. With fewer than 10 concurrent requests it works normally, but when the concurrent requests go over 50, the container is killed and Mesos starts another container. I have increased RAM and CPU, but it still happens.
This is the log in /var/log/mesos/:
E0116 09:33:31.554816 19298 slave.cpp:2344] Failed to update resources for container 10e47946-4c54-4d64-9276-0ce94af31d44 of executor dev_service.2e25332d-964f-11e4-9004-42010af05efe running task dev_service.2e25332d-964f-11e4-9004-42010af05efe on status update for terminal task, destroying container: Failed to determine cgroup for the 'cpu' subsystem: Failed to read /proc/612/cgroup: Failed to open file '/proc/612/cgroup': No such file or directory
The error message you're seeing is actually another symptom, not the root cause of the issue. There's a good explanation/discussion in this Apache Jira bug report:
https://issues.apache.org/jira/browse/MESOS-1837
Basically, your container is crashing for one reason or another and the /proc/pid#/ directory is getting cleared without Mesos being aware, so it throws the error message you found when it goes to check that /proc directory.
Try setting your allocated CPU higher in the JSON file describing your task.
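For reference, a rough sketch of what that might look like, assuming a hypothetical app id, image, and Marathon host (the cpus and mem values are placeholders to tune against your JMeter load, not values from the question):

# Hypothetical Marathon app definition with a larger CPU and memory allocation.
cat > dev_service.json <<'EOF'
{
  "id": "dev_service",
  "instances": 1,
  "cpus": 1.0,
  "mem": 1024,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-registry/dev-service:latest"
    }
  }
}
EOF

# Update the running app through Marathon's REST API (host and port are placeholders).
curl -X PUT -H "Content-Type: application/json" \
  http://marathon-host:8080/v2/apps/dev_service \
  -d @dev_service.json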
I am new to LXC and Docker. Does the maximum number of containers Docker can run depend solely on CPU and RAM, or are there other factors associated with running multiple containers simultaneously?
As mentioned in the comments to your question, it will largely depend on the requirements of the applications inside the containers.
What follows is anecdotal data I collected for this answer (this is on a MacBook Pro with 8 cores and 16 GB of RAM, with Docker running in a VirtualBox/boot2docker VM given 2 GB of RAM and 2 of the MBP's cores):
I was able to launch 242 (idle) redis containers before getting:
2014/06/30 08:07:58 Error: Cannot start container c4b49372111c45ae30bb4e7edb322dbffad8b47c5fa6eafad890e8df4b347ffa: pipe2: too many open files
After that, top inside the VM reports CPU use around 30%-55% user and 10%-12% system (each redis process seems to use about 0.2%). I also get timeouts when trying to connect to a Redis server.
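If you want to probe a similar ceiling on your own setup, a minimal sketch is below (the cap of 250 containers and the redis image are arbitrary choices, and the daemon process name varies across Docker versions, so the last line is only a best-effort check):

# Launch idle Redis containers until the daemon starts refusing new ones.
for i in $(seq 1 250); do
  docker run -d --name "redis_$i" redis || break
done

# How many actually started, and the daemon's open-file limit
# (the limit behind the "too many open files" error above).
docker ps -q | wc -l
grep "open files" /proc/$(pidof dockerd || pidof docker)/limits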
I just started with Docker because I'd like to use it to run parallel tasks.
My problem is that I don't understand how Docker handles the resources on the host (CPU, RAM, etc.): i.e. how can I evaluate the maximum number of containers to run at the same time?
Thanks for your suggestions.