How many Docker containers can I run simultaneously on a single host? - linux-kernel

I am new to LXC and Docker. Does the maximum number of containers Docker can run depend solely on CPU and RAM, or are there other factors involved in running multiple containers simultaneously?

As mentioned in the comments to your question, it will largely depend on the requirements of the applications inside the containers.
What follows is anecdotal data I collected for this answer (this is on a MacBook Pro with 8 cores and 16 GB RAM, with Docker running in VirtualBox via boot2docker, given 2 GB RAM and 2 of the MBP's cores):
I was able to launch 242 (idle) redis containers before getting:
2014/06/30 08:07:58 Error: Cannot start container c4b49372111c45ae30bb4e7edb322dbffad8b47c5fa6eafad890e8df4b347ffa: pipe2: too many open files
After that, top inside the VM reports CPU use around 30%-55% user and 10%-12% system (each redis process seems to use 0.2%). Also, I get timeouts while trying to connect to a redis server.
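If you hit the same "too many open files" error, one thing worth checking (a suggestion based on the error message, not something verified on boot2docker) is the file-descriptor limit of the environment that starts the Docker daemon:
# Check the current open-file limit; 1024 is a common default:
ulimit -n
# Raise it before starting the daemon (65536 is an arbitrary example):
ulimit -n 65536
# A quick way to repeat the experiment above: launch idle redis
# containers until docker run starts failing.
for i in $(seq 1 300); do docker run -d redis || break; done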

Related

Docker containers not running simultaneously

I am trying to run multiple Docker containers in a single network, but as soon as I reach 7 containers this problem starts: if I start one container, another exits automatically. I have increased memory to 3 GB and CPU to 100%. This looks like a resource problem, but how do we solve it?
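A first diagnostic step here (a suggestion using standard Docker commands, not an accepted answer) is to find out why the exiting containers die:
# Show the exit code and whether the kernel OOM-killed the container;
# <container> is an ID or name taken from docker ps -a.
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' <container>
# Then read the last log lines for an application-level error:
docker logs --tail 50 <container>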

docker ubuntu container filesystem

I pulled a standard docker ubuntu image and ran it like this:
docker run -i -t ubuntu bash -l
When I do an ls inside the container I see a proper filesystem and I can create files, etc. How is this different from a VM? Also, what are the limits on how big a file I can create on this container filesystem? And is there a way to create a file inside the container filesystem that persists in the host filesystem after the container is stopped or killed?
How is this different from a VM?
A VM will lock and allocate resources (disk, CPU, memory) for its full stack, even if it does nothing.
A Container isolates resources from the host (disk, CPU, memory), but won't actually use them unless it does something. You can launch many containers; if they are doing nothing, they won't use memory, CPU, or disk.
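If you do want to cap what a container may consume, docker run exposes cgroup limits directly; for instance (the values below are arbitrary illustrations):
# Cap memory at 256 MB and give the container a reduced CPU weight.
docker run -d --memory 256m --cpu-shares 512 redis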
Regarding the disk, those containers (launched from the same image) share the same filesystem and, through a copy-on-write (COW) mechanism and UnionFS, add a layer when you write inside the container.
That layer will be lost when the container exits and is removed.
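You can inspect that writable layer with docker diff, which lists files added (A), changed (C), or deleted (D) relative to the image:
# <container> is the name or ID shown by docker ps.
docker diff <container>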
To persist data written in a container, see "Manage data in a container"
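For the file-persistence part of the question, the usual approach is to mount a host directory into the container as a volume; for example (the paths are placeholders):
# Anything written under /data inside the container lands in /tmp/data
# on the host and survives the container being stopped or removed.
docker run -i -t -v /tmp/data:/data ubuntu bash -l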
For more, read the insightful article from Jessie Frazelle "Setting the Record Straight: containers vs. Zones vs. Jails vs. VMs"

docker running for a long time makes the cache connection time out

I built my app with docker-compose: one container is the database, using the mariadb image; one runs PHP for Laravel (I installed the php-memcached and php-redis extensions for my app); and one cache container is built on the redis Docker image.
At first everything goes well, but after running for 2 or 3 days I get this PHP exception: Connection timed out [tcp://redis:6379];
I monitor CPU, memory, and network usage with Zabbix, which I installed myself on the host server, but I still got this error:
[Zabbix screenshots: CPU monitoring, memory monitoring]
I changed the cache container to memcached, and after 2 or 3 days the same thing happened.
The only way I have found to solve this problem is to restart the system; it then runs another 2 or 3 days before hitting the same error. Restarting the system is not an option in production, so can anyone suggest where to look other than restarting the system?
Thanks!
I think you are facing a problem with the redis Docker container. This type of error occurs when memory is exhausted. You need to set the maxmemory parameter of the redis server.
Advice: please also try another redis image.
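As a concrete illustration of that advice (the values are examples, not tested against this particular setup), maxmemory settings can be passed straight to redis-server when starting the container:
# Start redis with an explicit memory cap and an eviction policy so it
# evicts old keys instead of exhausting memory; 256mb and allkeys-lru
# are example choices to tune for your cache workload.
docker run -d --name cache redis redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru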

Docker for parallel tasks

I just started with Docker because I'd like to use it to run parallel tasks.
My problem is that I don't understand how Docker handles the resources on the host (CPU, RAM, etc.): that is, how can I evaluate the maximum number of containers to run at the same time?
Thanks for your suggestions.
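One rough but practical approach (a sketch, not a definitive method): measure what a single container of your workload actually uses, then divide the host's resources by that figure, leaving headroom for the Docker daemon and the kernel:
# my-task is a placeholder for your own image.
docker run -d --name probe my-task
# Snapshot the container's CPU and memory usage:
docker stats --no-stream probe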

liferay performance issue on linux

I have Liferay 6 with Tomcat system setup on two machines:
Machine 1:
Windows 2003 Server
2 GB RAM, 2 GHz CPU
Mysql Ver 14.14 Distrib 5.1.49
Liferay 6.0.6 with Tomcat 6
Machine 2:
Linux CentOS 5.5
4 GB RAM, 2 GHz CPU
Mysql Ver 14.14 Distrib 5.5.10
Liferay 6.0.6 with Tomcat 6
Both Liferay systems have identical startup parameters and MySQL configurations.
The Liferay system contains a custom theme and a servlet filter hook that checks each URL access.
We have written a Grinder script to test the load of the system, starting with 50 concurrent users.
The test script does the following things:
Open home page
Login with username/password
Enter security key (custom portlet)
Move to a private community
Logout
On the Windows system the response time is as expected (nearly 40 seconds mean time for each test in Grinder).
However, on the Linux system the response time is far too high (nearly 4 minutes) for the same operations.
We tried tuning the MySQL, Tomcat, connection pool, and a few other parameters, but the results were the same. We also tested each Liferay against the other machine's MySQL (machine 1 Liferay -> machine 2 MySQL).
We are facing the same issue on Linux machines in our test environment and also at our client's end.
This looks like a duplicate question. I suspect your issue is related to memory/JVM configuration, specifically garbage collection. High CPU utilization under small loads tends to point in that direction.
In your Grinder script, have you set each of the steps as a separate transaction? This will allow you to see how much time each step is taking. It might be useful to know if everything is slower across the board, or if it's just one transaction type that's slowing you down.
Also, is there anything in the Tomcat logs on the Linux box you aren't seeing on Windows? Unexpected Java stack traces, etc.?
Finally, are the databases on each machine identical? Do they have the same amount of data? Do they have the same indexes?
edit: Is it one transaction that's taking up all the extra time, or is each transaction slower? When you run 'top' on your linux box, is it the tomcat java process that's eating all your CPU, or some other process?
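If you want to confirm the garbage-collection theory, one option (using standard HotSpot flags of that Java 6/7 era; the log path is an example) is to enable GC logging in Tomcat's startup options on both machines and compare:
# e.g. in Tomcat's bin/setenv.sh:
export CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log"
# Long or frequent pauses in /tmp/gc.log on the Linux box would point to
# a memory/GC configuration problem.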
