In Docker, is there a way to put Go shared libraries that are needed by containerized Go apps in a read-only shared memory area? The objective is to allow many containers to execute the same code, reducing the memory requirements of the containers. I expect a side effect would be smaller container images.
Where would these Go shared libraries be in memory?
The image size is less about memory and more about disk space.
You could put those shared dynamic libraries in an image of their own, with a shared VOLUME path.
You then docker create a container based on that image: that is a data volume container.
Finally, you reuse that created container as many times as you want, passing --volumes-from=<yourCreatedContainer> to each docker run of your Go app containers.
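A minimal sketch of that pattern, with hypothetical names (golibs-image, golibs, my-go-app) and a local ./lib directory holding the shared libraries:

# Dockerfile for the library-only image
FROM busybox
COPY ./lib /golibs
VOLUME /golibs

# build it once, create a never-started data volume container, reuse it for every app container
docker build -t golibs-image .
docker create --name golibs golibs-image
docker run --volumes-from=golibs my-go-app

Note that this shares the library files on disk; as the comments above point out, whether their pages also end up shared in memory is up to the host kernel's page cache.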
Related
I am trying to create a Docker image from a Dockerfile on Windows 10. I am new to this, and the build crashed multiple times due to one or more syntax errors in the Dockerfile. I tried to clear all the images by using docker system prune --all, and some disk space was freed up (if I am right, "system" here means the HDD rather than RAM?). Still, I see that Docker.Service seems to be using 6+ GB of memory.
My question is: is there a way to clear the memory used by Docker.Service? Why is it using so much memory when no image is being used? I know that it can be cleared by exiting Docker or force-closing it.
Update
By the way, I am using Linux containers; there is an option for this when you right-click the Docker icon in the tray.
Update 2
I tried all the commands from their documentation page - https://docs.docker.com/config/pruning/ - No effect.
Update 3
The memory doesn't seem to be cleared even when the image is created and saved successfully. Looks like a bug?
Docker creates an image layer for each command in the Dockerfile and stores it in the build cache, so whenever your Dockerfile build is interrupted the intermediate images remain in the cache. Hence the consumption you are seeing.
Try the following command, which will remove all the images:
docker rmi $(docker images -a -q)
Docker runs Linux containers in a VM on Windows. The 6 GB of RAM is likely what you have assigned to that VM. Use docker stats to see the resource usage of containers running inside that VM.
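If Docker Desktop is using the WSL 2 backend (an assumption on my part; older installs run a Hyper-V VM whose memory is set in Docker Desktop's Settings > Resources UI instead), you can cap how much RAM that VM may take with a %UserProfile%\.wslconfig file, roughly like this:

# %UserProfile%\.wslconfig
[wsl2]
memory=4GB      # cap the VM at 4 GB instead of the default share of host RAM
processors=4

Then run wsl --shutdown and restart Docker Desktop for the limit to take effect.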
I'm using Docker for Mac and for saving disk space I want to get rid of some unused local images. As far as I know the local files are stored in ~/Library/Containers/com.docker.docker.
But even if I remove all images and containers from Docker, the folder keeps its size of several GB and not a single byte is released. The only option is to remove the com.docker.docker folder completely, but that only makes sense if you want to remove all the data, not just the unused images. What am I doing wrong?
sudo docker system prune -af
Simply run the command given above; it will delete all unused images automatically.
To remove all unused images, use docker image prune (add -a to also remove images that aren't referenced by any container).
There's also another way to remove all unused (dangling) images: docker rmi $(docker images -aq --filter dangling=true)
To remove images you do not need, use docker rmi <image>. You may pass multiple images, e.g. docker rmi abcdef ghijkl, where abcdef and ghijkl are image hashes.
If this does not help immediately, try restarting your Docker service (sudo service docker restart). In some cases images/containers are only marked as deleted and are not removed until after a restart.
What is going on here is that your Docker for Mac application is really running a virtual machine. That virtual machine has its own disk, which is what that qcow2 file is. The disk gets lazily allocated and won't take up all its space at once; as space is used inside the VM, the disk file grows on the host.
The disk image will grow until it reaches the configured size of the virtual disk, and deleting a file inside the VM will not cause the disk image on the host to shrink again.
There is a possible solution to shrink your disk image: https://forums.docker.com/t/where-does-docker-keep-images-containers-so-i-can-better-track-my-disk-usage/8370/7
I have seen a few folks use this trick with some success.
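I won't reproduce the linked post, but a commonly cited variant of the trick (assumptions: the qcow2 backend, qemu-img installed e.g. via Homebrew, and Docker for Mac fully quit first; the exact path of Docker.qcow2 differs between versions) looks roughly like:

cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
mv Docker.qcow2 Docker.qcow2.bak
qemu-img convert -O qcow2 Docker.qcow2.bak Docker.qcow2   # rewrite the image without unallocated clusters
rm Docker.qcow2.bak

How much space this actually reclaims depends on how much of the disk the VM has really freed, so keep the .bak copy until you have confirmed Docker still starts.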
List your images with:
docker images
Delete a single image with:
docker rmi image_id
How can Docker on a Debian host run, say, an openSUSE container? It uses a different kernel, with separate modules. Also, older Debian versions used older kernels, so how can they run on a kernel version 3.10+? Older kernels only have older built-in functions, so how can an old distro manage new features?
What is "the trick" to it?
Docker never uses a different kernel: the kernel is always your host kernel.
If your host kernel is "compatible enough" with the software in the container you want to run it will work; otherwise, it won't.
"Containers" Are Just Process Configuration
The key thing to understand is that a Docker container is not a virtual machine: it doesn't create a new virtual computer on which to run the software. Instead, Docker starts processes in your existing OS, just like you start new processes from the command line.
The difference between a "containerized" process and an ordinary process is the restrictions put on the containerized process and the changes to how it sees the environment around it. (These are passed on to any child processes started by the containerized process.) Typical restrictions and changes include:
Instead of using the host's root filesystem, mount a different filesystem on / (usually one supplied with the container's image). Parts of the host filesystem may be mounted underneath the new process' root filesystem, e.g. by using docker run -v /u/myprogram-data:/var/data/myprogram so that when the containerized process reads or writes /var/data/myprogram/file this reads/writes /u/myprogram-data/file in the host filesystem.
Create a separate process space for the containerized process so that it can see only itself and its children (with ps or similar commands), but cannot see other processes running on the host.
Create a separate user namespace so that the users in the container are different from those in the host: e.g., UID 1234 in the containerized process will not be the same as UID 1234 for non-containerized processes.
Create a separate set of network interfaces with their own IP addresses, often using a "virtual router" and address translation between those and the host network interfaces. (E.g., the host, when it receives a packet on port 8080, forwards it to port 80 on the container processes' virtual network interface.)
All of this is done by facilities built into the kernel; you can do any of it yourself without Docker if you write a program to do the appropriate setup and set the appropriate parameters when it starts a new process.
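To make the "you could do this yourself" point concrete, here is a minimal sketch using util-linux's unshare that reproduces just the process-space isolation from the list above (no image and no filesystem switch):

sudo unshare --pid --fork --mount-proc /bin/bash
# this bash is PID 1 in a brand-new PID namespace
ps aux    # shows only this shell and its children, not the host's other processes
exit

Docker does the same kind of setup for mounts, users and network interfaces before starting your container's process.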
Compatibility
So what does "compatible enough" mean? It depends on what requests the program makes of the kernel (system calls) and what features it expects the kernel to support. Some programs make requests that will break things; others don't. For example, on an Ubuntu 18.04 (kernel 4.19) or similar host:
docker run centos:7 bash works fine.
docker run centos:6 bash fails with exit code 139, meaning it terminated with a segmentation violation signal; this is because the 4.19 kernel doesn't support something that that build of bash tried to do.
docker run centos:6 ls works fine because it's not making a request the kernel can't handle, as bash was.
If you try docker run centos:6 bash on an older kernel, say 4.9 or earlier, you'll find it will work fine. (At least as far as I tested it.)
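If you want to reproduce that experiment (assuming the centos:6 and centos:7 tags are still pullable from Docker Hub), the commands are simply:

docker run --rm centos:7 bash -c 'echo centos7 ok'    # prints centos7 ok
docker run --rm centos:6 bash -c 'echo centos6 ok'    # may die with exit code 139 on newer kernels, as described above
echo $?                                               # shows the exit status of the last run
docker run --rm centos:6 ls /                         # still fine: ls avoids the problematic request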
"How can docker run on a Debian host maybe an OpenSUSE in a container?"
Because the kernel is the same and will support the Docker engine running all those container images: the host kernel must be 3.10 or newer, but its list of system calls is fairly stable.
See "Architecting Containers: Why Understanding User Space vs. Kernel Space Matters":
Applications contain business logic, but rely on system calls.
Once an application is compiled, the set of system calls that an application uses (i.e. relies upon) is embedded in the binary (in higher level languages, this is the interpreter or JVM).
Containers don’t abstract the need for the user space and kernel space to share a common set of system calls.
In a containerized world, this user space is bundled up and shipped around to different hosts, ranging from laptops to production servers.
Over the coming years, this will create challenges.
From time to time new system calls are added, and old system calls are deprecated; this should be considered when thinking about the lifecycle of your container infrastructure and the applications that will run within it.
See also "Why kernel version doesn't match Ubuntu version in a Docker container?":
There's no kernel inside a container. Even if you install a kernel, it won't be loaded when the container starts. The very purpose of a container is to isolate processes without the need to run a new kernel.
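You can check this yourself: the kernel version reported inside a container is the host's kernel (debian here is just an example image; any image with uname works):

uname -r                          # kernel version on the host
docker run --rm debian uname -r   # prints exactly the same version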
According to countless sources, Docker provides ultra-lightweight virtualization by sharing system resources across containers, instead of allocating copies of those resources per container.
I've even read articles where it is boasted that you could "run dozens, even hundreds of containers on the same VM."
But if my app requires 2GB RAM to run, and the underlying physical machine has only 8GB RAM on it, I would normally only be able to run 3 instances of my app on it (leaving ~2GB for system memory, utilities, etc.).
Does Docker do some kind of magic with RAM, allowing me to actually run dozens of containers, each one allocated 2GB RAM, but somehow sharing unused memory under the hood?
Or are those statements more media hype than anything else?
When people talk about running "dozens or hundreds of containers" they are normally thinking about microservices; small applications that do a specific task. Each of these may have memory usage measured in KBs rather than MBs, and probably not GBs, and as such there is no reason a decent machine couldn't run dozens or hundreds of them.
There is actually a competition (I think it's on-going) to get as many containers as possible running on a Raspberry Pi. The result currently stands at over a thousand, but admittedly these containers won't be running a real-life application.
Regarding memory, the answer is "it's complicated". If you're using the AUFS or Overlay driver, containers with the same base image should be able to share "memory pages"; meaning shared libraries shouldn't need to get loaded twice for two containers. This isn't something special though; normal processes running on the host will work the same way.
At the end of the day, containers are little more than isolated processes. We can easily run dozens or hundreds of processes on a host, so it's not unfeasible to run dozens or hundreds of containers.
A Docker container only consumes the resources that it needs, as it needs them. So yes, you could literally run hundreds of containers on one box, as long as they are not all actively consuming your resources. That is what makes Docker unique: a container will use what resources it can and then release them, making them available for another container on the same host. It is best practice to let the container and Docker handle allocating resources instead of hard-assigning them.
The alternative would be a virtual machine. Each virtual machine that you run has to run a full Linux kernel, and the host OS will hold a chunk of memory aside for the virtualized environment. This means that you can really only run a couple of VMs on all but the heaviest-duty hardware.
A container does NOT run a kernel; it just runs a single process (plus sub-processes). This means that you can run as many processes in containers as you could if you were running those same processes without containers: each thinks it is running on a separate machine, but they all just show up as processes on the host kernel.
There is no magic that will let you use RAM dozens of times over. But you can pack smaller processes together a LOT more tightly than you could when using virtual machines for separation.
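If you do want explicit caps instead of relying on sharing, Docker simply exposes the kernel's cgroup limits; a short sketch (myapp is a hypothetical image):

docker run -d --name app1 -m 2g myapp                        # hard limit: exceeding 2 GB triggers the OOM killer
docker run -d --name app2 --memory-reservation 512m myapp    # soft limit: reserve 512 MB, burst when memory is free
docker stats --no-stream app1 app2                           # observe real usage, usually far below the cap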
I know there are lots of Docker experts around, but I have spent considerable time trying to find facts and figures about Docker's runtime performance, and unfortunately I could not get any concrete answers. Let me start by telling you my system's configuration:
(a) Running CentOS 6.5 on a machine with 48 GB RAM, a 1 TB disk and 12 CPU cores.
(b) I built a Docker image that is almost 6.5 GB in size.
Below are questions if someone can answer for the benefit of readers:
(a) With the given configuration, how many containers can I run in parallel without breaking any functionality?
(b) Assume I have two images of 3.5 GB each; is it better to run multiple small images, or do we get good performance with one big image?
(c) What is the best filesystem option to use with Docker?
EDIT: more information
(d) Actually, I'm trying to put many compilers inside a container and give users a facility to compile their languages online. This tool is under development and will replace my existing website compileonlone.com. Things are going fine: I built two images with a few compilers in each, and I'm able to run around 250 containers successfully; after that I start getting "too many open files" errors. At 250 containers, my RAM usage reaches somewhere around 40 GB and CPU utilization is around 50%.
The main problem I'm facing is removal of the old containers. A user will come, compile his code and then go away, so I need to remove those containers after a certain period of time, but when I try to remove such stopped containers using docker rm -v, it slows down the main Docker process and it almost hangs. By that I mean the docker -d daemon listening at /var/run/docker.sock. I'm not sure if there is another way to clean up these containers or whether I have hit a bug. Here is the detail of Docker:
# docker info
Containers: 1016
Images: 41
Storage Driver: devicemapper
Pool Name: docker-0:20-258-pool
Pool Blocksize: 64 Kb
Data file: /var/lib/docker/devicemapper/devicemapper/data
Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 17820.7 Mb
Data Space Total: 102400.0 Mb
Metadata Space Used: 102.4 Mb
Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.17.2-1.el6.elrepo.x86_64
Operating System: <unknown>
WARNING: No swap limit support
If someone can help me with the fastest way to delete old containers, that would be great. A simple shell script and the like are not working. I have already tried something like
# docker rm -v $(docker ps -a | grep Exited | awk '{print $1}')
but it completely slows down the main Docker process, and it is unable to create new containers while this removal process is running.
Thanks for taking the time to answer these questions; it will help me as well as many others going ahead with Docker.
a): A container is like a process. This question is like asking "how many processes can I run in parallel". It is not answerable without knowing what the processes are doing. Please add this information to your question.
b) Both 3.5GB and 6.5GB are very large for a Docker image. Best practice is to put one application in one container: if you have an application that size, then great. If not, maybe you have put your application's data into the image. This is not a good idea because the layered filesystem is slower than a regular filesystem, and you won't be wanting any of the features of layering or snapshotting on your transactional data.
The documentation on managing data explains how to mount a regular disk so it is accessible from your containers.
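For example, a sketch of keeping transactional data on a regular disk instead of in the image (paths and image name are hypothetical):

docker run -d -v /u/myapp-data:/var/data/myapp myapp-image
# the image holds only the application; reads and writes to /var/data/myapp
# land on the host's /u/myapp-data, outside the layered filesystem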
Edit, after more information was supplied
d) Using up RAM implies the containers are still running. If there is some way within the logic of your site to know when a container is no longer needed you can docker kill it, then docker rm to remove the disk storage. Or docker rm -f does those two operations in one.
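As a concrete sketch, once your site's logic knows a compile job is finished (the container name is hypothetical):

docker kill job_abc123      # stop the container's process
docker rm -v job_abc123     # remove it, including its anonymous volumes
# or both steps at once:
docker rm -f -v job_abc123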
After a lot of R&D and discussions with many experts, I found a solution that deletes containers at lightning speed. It's simple: run your Docker daemon with the dm.blkdiscard=false option, as follows.
docker -d --storage-opt dm.blkdiscard=false
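On newer Docker releases the daemon binary is dockerd rather than docker -d, so the equivalent (assuming you are still on the devicemapper storage driver, which is deprecated in current versions) is either the flag or a daemon.json entry:

dockerd --storage-opt dm.blkdiscard=false

# or in /etc/docker/daemon.json
{
  "storage-opts": ["dm.blkdiscard=false"]
}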
By the way, here is what I have developed; it needs to create and delete containers at high speed:
http://codingground.tutorialspoint.com
Hope this will help many others.