How can Docker on a Debian host run, say, an OpenSUSE userland in a container? It uses a different kernel, with separate modules. Also, older Debian versions used older kernels, so how can it run on a kernel version 3.10+? Older kernels have only older built-in functions, so how can an old distro manage new features?
What is "the trick" in it?
Docker never uses a different kernel: the kernel is always your host kernel.
If your host kernel is "compatible enough" with the software in the container you want to run it will work; otherwise, it won't.
"Containers" Are Just Process Configuration
The key thing to understand is that a Docker container is not a virtual machine: it doesn't create a new virtual computer on which to run the software. Instead, Docker starts processes in your existing OS, just like you start new processes from the command line.
The difference between a "containerized" process and an ordinary process is the restrictions put on the containerized process and the changes to how it sees the environment around it. (These are passed on to any child processes started by the containerized process.) Typical restrictions and changes include:
Instead of using the host's root filesystem, mount a different filesystem on / (usually one supplied with the container's image). Parts of the host filesystem may be mounted underneath the new process' root filesystem, e.g. by using docker run -v /u/myprogram-data:/var/data/myprogram so that when the containerized process reads or writes /var/data/myprogram/file this reads/writes /u/myprogram-data/file in the host filesystem.
Create a separate process space for the containerized process so that it can see only itself and its children (with ps or similar commands), but cannot see other processes running on the host.
Create a separate user namespace so that the users in the container are different from those in the host: e.g., UID 1234 in the containerized process will not be the same user as UID 1234 in a non-containerized process.
Create a separate set of network interfaces with their own IP addresses, often using a "virtual router" and address translation between those and the host network interfaces. (E.g., the host, when it receives a packet on port 8080, forwards it to port 80 on the container processes' virtual network interface.)
All of this is done by facilities built into the kernel; you can do any of it yourself without Docker if you write a program to do the appropriate setup and set the appropriate parameters when it starts a new process.
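For example, here is a rough sketch of doing the same kind of setup by hand with util-linux's unshare plus chroot (this is not Docker's actual code path, and the rootfs path is a placeholder for an extracted image filesystem):

    # Give a shell its own mount, PID, UTS and network namespaces, then
    # switch its root to the extracted filesystem.
    sudo unshare --fork --pid --mount --uts --net \
         chroot /srv/rootfs/opensuse /bin/sh
    # Inside, mount /proc so that tools like ps only see this namespace:
    #   mount -t proc proc /proc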
Compatibility
So what does "compatible enough" mean? It depends on what requests the program makes of the kernel (system calls) and what features it expects the kernel to support. Some programs make requests that will break things; others don't. For example, on an Ubuntu 18.04 (kernel 4.19) or similar host:
docker run centos:7 bash works fine.
docker run centos:6 bash fails with exit code 139, meaning it terminated with a segmentation violation (SIGSEGV) signal; this is because the 4.19 kernel doesn't support something that that build of bash tried to do (see the quick check after this list).
docker run centos:6 ls works fine because it's not making a request the kernel can't handle, as bash was.
If you try docker run centos:6 bash on an older kernel, say 4.9 or earlier, you'll find it will work fine. (At least as far as I tested it.)
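(As an aside, you can decode such an exit status yourself: Docker reports 128 plus the signal number, and SIGSEGV is signal 11, hence 139.)

    docker run centos:6 bash
    echo $?    # 139 = 128 + 11 (SIGSEGV) on a kernel where this bash build breaks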
How can docker run on a Debian host maybe an OpenSUSE in a container
Because the kernel is the same: the host kernel supports the Docker engine and runs all those container images. The host kernel should be 3.10 or newer, but its list of system calls is fairly stable.
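You can verify this yourself (a quick check, assuming the opensuse/leap image from Docker Hub):

    uname -r                                  # kernel version reported on the Debian host
    docker run --rm opensuse/leap uname -r    # the same version: the container reuses the host kernel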
See "Architecting Containers: Why Understanding User Space vs. Kernel Space Matters":
Applications contain business logic, but rely on system calls.
Once an application is compiled, the set of system calls that an application uses (i.e. relies upon) is embedded in the binary (in higher level languages, this is the interpreter or JVM).
Containers don’t abstract the need for the user space and kernel space to share a common set of system calls.
In a containerized world, this user space is bundled up and shipped around to different hosts, ranging from laptops to production servers.
Over the coming years, this will create challenges.
From time to time new system calls are added, and old system calls are deprecated; this should be considered when thinking about the lifecycle of your container infrastructure and the applications that will run within it.
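(As an illustration of that point, not from the quoted article: you can see which system calls a given binary ends up making with strace.)

    strace -c ls > /dev/null    # -c prints a summary of the system calls ls made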
See also "Why kernel version doesn't match Ubuntu version in a Docker container?":
There's no kernel inside a container. Even if you install a kernel, it won't be loaded when the container starts. The very purpose of a container is to isolate processes without the need to run a new kernel.
Related
My embedded board uses Linux Kernel version 3.18.
I would like to configure my Wifi (using wpa_supplicant and then dhcpcd commands) automatically, as soon as the board boots up.
I made a shell script for the same (I verified the script by executing it manually) and placed this in "/etc/init.d" directory.
Then made a symbolic link to the shell script file in the "/etc/rc.d" directory.
However, this change does not serve my purpose. Can anyone please help me out?
PS: It is important to note that it takes around 3-4 seconds for my Wifi module to be inserted into the kernel once the board boots up.
TLDR;
In the init script, call a different script that manages wpa_supplicant and dhcpcd, so that the init script itself won't block.
It is good practice not to block in init scripts, so do the deferred processing outside of them: start a separate script in the background which waits for the module to be inserted and then runs wpa_supplicant (you can also extend it to keep checking the connection status). Something similar happens on desktop Linux, where the program doing this is NetworkManager.
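A minimal sketch of that approach (the interface name, timeout and paths are assumptions; adjust them for your board):

    #!/bin/sh
    # /etc/init.d/S90wifi -- don't block the boot; hand the slow part to a worker
    case "$1" in
      start)
        /usr/local/bin/wifi-worker.sh &
        ;;
      stop)
        killall wpa_supplicant dhcpcd 2>/dev/null
        ;;
    esac

    #!/bin/sh
    # /usr/local/bin/wifi-worker.sh -- wait for the driver, then connect
    for i in $(seq 1 15); do                  # the module takes ~3-4 s to be inserted
      [ -d /sys/class/net/wlan0 ] && break
      sleep 1
    done
    wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
    dhcpcd wlan0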
According to countless sources, Docker provides ultra-lightweight virtualization by sharing system resources across containers, instead of allocating copies of those resources per container.
I've even read articles where it is boasted that you could "run dozens, even hundreds of containers on the same VM."
But if my app requires 2GB RAM to run, and the underlying physical machine has only 8GB RAM on it, I would normally only be able to run 3 instances of my app on it (leaving ~2GB for system memory, utilities, etc.).
Does Docker do some kind of magic with RAM, allowing me to actually run dozens of containers, each one allocated 2GB RAM, but somehow sharing unused memory under the hood?
Or are those statements more media hype than anything else?
When people talk about running "dozens or hundreds of containers" they are normally thinking about microservices: small applications that do a specific task. Each of these may have memory usage measured in KBs rather than MBs (and probably not GBs), and as such there is no reason a decent machine couldn't run dozens or hundreds of them.
There is actually a competition (I think it's on-going) to get as many containers as possible running on a Raspberry Pi. The result currently stands at over a thousand, but admittedly these containers won't be running a real-life application.
Regarding memory, the answer is "it's complicated". If you're using the AUFS or Overlay driver, containers with the same base image should be able to share "memory pages"; meaning shared libraries shouldn't need to get loaded twice for two containers. This isn't something special though; normal processes running on the host will work the same way.
At the end of the day, containers are little more than isolated processes. We can easily run dozens or hundreds of processes on a host, so it's not unfeasible to run dozens or hundreds of containers.
A Docker container only consumes the resources that it needs, as it needs them. So yes, you could literally run hundreds of containers on one box as long as they are not all actively consuming your resources. That is what makes Docker unique: a container will use what resources it can and then release them, making them available to another container on the same host. It is best practice to let the container and Docker handle allocating resources instead of hard-assigning them.
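If you do want a cap, docker run exposes the kernel's cgroup memory limits as flags (the image name myapp below is just a placeholder):

    docker run -d --memory 2g --memory-reservation 1g myapp
    # --memory is a hard limit; --memory-reservation is a softer target enforced under memory pressure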
The alternative would be a virtual machine. Each virtual machine that you run has to run a full Linux kernel, and the host OS will hold a chunk of memory aside for the virtualized environment. This means that you can really only run a couple of VMs on all but the heaviest-duty hardware.
A container does NOT run a kernel- it just runs a single process (plus sub processes). This means that you can run as many processes in containers as you could if you were running those same processes without containers- each thinks it is running on a separate machine, but they all just show up as processes on the host kernel.
There is no magic that will make you able to use RAM dozens of times over. But you can pack smaller processes together a LOT tighter than you could using virtual machines for separation.
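You can see this for yourself (assuming the alpine image is available locally or from a registry):

    docker run -d --name sleeper alpine sleep 300
    ps -ef | grep 'sleep 300'    # the "containerized" sleep shows up as an ordinary process on the host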
I am developing software that external software can use. This external software may use ports that I use, so I want to be able to reserve a range of ports that is available only to my software; when the external software wants to use them, it will get an error.
Is there any system call that tells the kernel to reserve a range of ports for my application?
As soon as your application starts, you can open the ports and bind to them; there is no reason to reserve them anymore. Before your application starts, you can't use any system call.
The system administrator can, on linux, do something like
# echo 30000 31000 > /proc/sys/net/ipv4/ip_local_port_range
which means the kernel will ONLY use that range of ports when it assigns a port number randomly. There's a sysctl to go along with it as well. So you might, in theory, reserve ports 64000-65000 for your application and tell the admin of the machine your software runs on to use
# echo 1024 64000 > /proc/sys/net/ipv4/ip_local_port_range
somewhere early in the boot process.
However, I strongly recommend against that - any sysadmin with at least a little experience will tell you to get your effing software right. This kind of dependency makes administration a mess, and you totally lose if you're trying to use several different software packages needing different port ranges on one machine.
The best thing you can do is open and bind a socket for each of the ports you want to reserve before running any external program.
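A crude placeholder for that idea (an assumption on my part, not the small program described below; note that netcat flag syntax varies between variants, this uses the traditional nc -l -p form):

    # Hold ports 64000-64010 open until the real service is ready to bind them itself.
    for port in $(seq 64000 64010); do
      nc -l -p "$port" </dev/null >/dev/null &
    done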
BTW, I had a similar problem once, on a machine that mounted NFS directories early in the boot stage, which would, from time to time, use ports 993 and 995 locally, which prevented the SSL versions of pop3d and imapd from starting up. I solved that by writing a small program that bound to those ports, starting that program even before NFS did anything, and killing it again in the boot scripts for pop3d and imapd. Maybe something like this could be a solution for you as well, if some other software uses your ports before your program starts up. But again, I'd consider this an evil hack, not something well-written software should depend on.
My question may seem too weird, but I thought about the Windows hibernation feature and I was wondering if there is a way to hibernate a specific process or application.
i.e.: when Windows starts up after a normal shutdown/restart, it will load all startup programs normally, but in addition it will load a specific program with its previous state from before the computer was shut down.
I have thought about preserving the program's memory and retrieving it when the computer starts up, but is there any application that does that in a Windows environment?
That cannot work. The state of a process is almost never contained in just the process itself. A gui app creates user32 and gdi objects that are stored in a heap associated with the desktop. It makes calls to Windows that affect the window manager state. It makes I/O calls that cause code inside drivers to run. Which in turn affects allocations inside the kernel pools. Multiply the trouble by every pipe or rpc channel it opens to talk to other processes. And shared resources like the clipboard.
Only making a snapshot of the entire operating system state works.
There are multiple solutions for this now on Linux: CRIU, CryoPID2, and BLCR.
I think Docker can be used (on both Windows and Linux), but it requires pre-packaging your app in a container, which carries some overhead.
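For example, CRIU can checkpoint a running process tree to disk and restore it later (a sketch; the PID and image directory are placeholders):

    criu dump    --tree 1234 --images-dir /tmp/ckpt --shell-job   # freeze PID 1234 and write its state to disk
    criu restore --images-dir /tmp/ckpt --shell-job               # later (or after a reboot): recreate the process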
I am porting an application which runs as a background service on Windows at startup. We are porting the application to Linux (SUSE Enterprise Server), and I'm completely new to Linux. Can somebody help me with how to proceed? For example:
Should I build a Linux executable?
After building the binary, what changes should I make to the Linux startup files to run this executable?
How can my service register a callback function so that commands can be sent to my service, or its behaviour modified, while it is running?
Yes, you should build a Linux binary. You may want to rephrase your question since I doubt this is the answer you want :-)
You should generally create what is known as an "init" file, which lives in /etc/init.d. Novell has a guide online which you can use to author the file. Note that while the init file is common, the exact method of letting the operating system use it varies depending on the distribution.
This is going to be a marked change for you. If you are doing simple actions such as re-loading a configuration file, you can use the signals functionality, especially the SIGHUP/HUP signal which is generally used for this purpose. If you require extended communication with your daemon, you can use a UNIX domain socket (think of it as a named pipe) or a network socket.
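As a small illustration of the SIGHUP convention (a sketch in shell; the config path is a placeholder, and a real daemon would do the equivalent in its own language):

    #!/bin/sh
    # Re-read the configuration whenever this process receives SIGHUP,
    # e.g. triggered from another shell with: kill -HUP <pid>
    reload() { echo "re-reading /etc/mydaemon.conf"; }
    trap reload HUP
    while true; do
      sleep 60 &    # sleep in the background and wait on it, so the trap fires promptly
      wait $!
    done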
Another task you are going to need to accomplish is to daemonize your application. Generally this is done by first fork()ing your process, then redirecting the stdin/stdout pipes in the child. There are more details which can be answered by reading this document
See how-to-migrate-a-net-windows-service-application-to-linux-using-mono.
Under Linux, daemons are simple background processes. No special control methods (e.g. start(), stop()) are used as in Windows. Build your service as a simple (console) application, and run it in the background. You can use a tool like daemonize to run a program as a Unix daemon.
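For example (a sketch; the paths are placeholders, and you should check daemonize's man page for the exact flags on your system):

    daemonize -p /var/run/myservice.pid \
              -o /var/log/myservice.out -e /var/log/myservice.err \
              /usr/local/bin/myservice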