Does someone know where containerd & lima (see https://medium.com/nttlabs/containerd-and-lima-39e0b64d2a59) store the images and containers on macOS?
For Docker, as far as I remember, it was ~/Library/Containers/com.docker.docker, but I cannot find a similar directory.
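One way to poke around, as a sketch: Lima keeps its per-instance VM data under ~/.lima, while the containerd image store lives inside the VM rather than on the macOS filesystem (the instance name default and the in-VM path are assumptions about a default setup):
# Per-instance VM data (disk image, sockets, logs) on the Mac side
ls ~/.lima/default/
# The containerd content store, inside the Lima VM
limactl shell default sudo ls /var/lib/containerd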
An image is built on Mac OSX with an M1 processor and deployed to an EC2 instance. But when scripts are run it yields the error:
standard_init_linux.go:219: exec user process caused: exec format error
Elsewhere on Stack Overflow, this is explained as a mismatch of OS architecture. Sure enough, running "uname -m" on the EC2 instance shows it to be x86_64, and "docker image inspect" shows the container to have architecture arm64.
Here's what I don't understand. "uname -m" on my Mac shows that to be x86_64 too. So how does the container inherit a different architecture?
More significantly, how do I build an image on my Mac that I can run on EC2?
The Dockerfile is simply
FROM python
WORKDIR /
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src /src
with src containing, currently, some simple Python scripts, executed thus:
docker run container/name python test.py
This works fine on my Mac, but gives the error above when executed on AWS.
OK. Here's what's happening. My Mac has the new M1 chip and I'm running the Tech Preview version of Docker Desktop. Under the hood the chip has the arm64 architecture, but interrogating it through iTerm and VSCode it claims to be x86_64 instead, hence my confusion when I posted the question. This is probably because both those apps are quietly run through Apple's Intel emulator (Rosetta 2) behind the scenes, and that's what's responding to the uname command.
However, because the processor is really arm64, that's the base architecture when I pull Python images from Docker Hub (I tried lots of different flavours and versions of Python, all with the same result).
To force use of an amd64, AWS-compatible image I changed the first line of the Dockerfile to:
FROM --platform=linux/amd64 python
When containers from this image are run on the Mac that causes a warning
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
but it's just that, a warning, and the script runs (presumably by redirecting back through the Intel emulator). The scripts now run without problems (or warnings) on the EC2 instance.
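As an alternative to editing the Dockerfile, the target platform can also be passed at build time; a minimal sketch, assuming a BuildKit-enabled Docker and a placeholder image name myapp:
# Build an amd64 image on the M1 Mac without touching the Dockerfile
docker build --platform linux/amd64 -t myapp .
# Verify the architecture recorded in the image metadata
docker image inspect myapp --format '{{.Architecture}}'   # should print amd64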
I'm not sure why you're getting this error, but there is a nice way to get around it if you don't mind your code and images being public. I'm guessing that this is just home stuff anyway, so it might not be too bad.
1. Put your code in GitHub.
2. Configure a repository on hub.docker.com for your image and configure automatic builds from GitHub.
3. SSH onto your EC2 instance and pull your image directly from Docker Hub.
An alternative is to start with step 1, then log into your EC2 instance using SSH and clone the repo on that machine. You can then build it directly on a real Linux machine (your OSX machine doesn't run Linux, which is an instant mismatch with Docker), as sketched below. If you build it on the server you should be able to run it there with no problems.
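A minimal sketch of that flow, with hypothetical host, repo, and image names:
# On your Mac: log into the EC2 instance
ssh ec2-user@my-instance.example.com
# On the instance: clone and build; building on the x86_64 host
# produces an x86_64 image, so there is no architecture mismatch
git clone https://github.com/username/myapp.git
cd myapp
docker build -t myapp .
docker run myapp python test.py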
Try running the container with CMD ["lscpu"] or something similar like cat /proc/cpuinfo, and compare the architectures.
Another thing: you might be pulling the arm architecture of the python image when building, and then trying to run it on x86_64 (EC2).
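A quick way to compare, as a sketch (the image name is a placeholder):
# Architecture recorded in the image metadata
docker image inspect python --format '{{.Os}}/{{.Architecture}}'
# Architecture the container actually reports at runtime
docker run --rm python uname -m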
In addition to what has been shared above, you could also Build a multi-arch image with Buildx.
Basically, recent Docker versions come with a CLI command called buildx. You can use the buildx command on Docker Desktop for Mac and Windows to build multi-arch images, link them together with a manifest list, and push them all to a registry using a single command.
Here is what works for me:
Create a new builder which gives access to the new multi-architecture features:
docker buildx create --name mybuilder --use
Build the Dockerfile with buildx, passing the list of architectures to build for:
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
=> pushing layers 2.7s
=> pushing manifest for docker.io/username/demo:latest 2.2
where username is a valid Docker Hub username.
Notes:
The --platform flag informs buildx to generate Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.
To inspect the resulting multi-arch image, use the command below:
docker buildx imagetools inspect username/demo:latest
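On the consuming side nothing extra is needed; a sketch, assuming the image above has been pushed:
# On the EC2 instance (x86_64): the manifest list lets the daemon
# select the matching amd64 variant automatically
docker pull username/demo:latest
docker run --rm username/demo:latest uname -m   # should print x86_64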
Docker can work decently against a remote host by defining the DOCKER_HOST variable, but I want to avoid installing the fat Docker for macOS, which also installs and starts the Docker Engine in a VM, one that seems to consume resources.
As Docker works remotely, it should still be able to build images, list images, and start and stop them without a Docker service/VM on the Mac.
How can I do this? (The docker CLI seems to come only with the entire cow.)
I guess you are looking for install-client-binaries-on-macos.
Docker provides prebuilt binaries; just download one here, unpack it, and you will find a standalone docker client binary inside. Copy it to your Mac and it works out of the box.
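A sketch of using the standalone client against a remote engine (the download URL, version, and remote host are illustrative; check the download site for current ones):
# Download and unpack the static client binary
curl -fsSLO https://download.docker.com/mac/static/stable/x86_64/docker-20.10.9.tgz
tar xzf docker-20.10.9.tgz
cp docker/docker /usr/local/bin/
# Point the client at a remote engine instead of a local VM
export DOCKER_HOST=ssh://user@remote-host
docker images   # runs against the remote daemon, no local engine needed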
There seems to be no client-only install, but after installing the fat cow you can tell it to stay off your grass by unchecking "Start Docker Desktop when you log in" in the preferences, and then shutting down the Docker server.
That's what I do: when I need Docker locally I just start it, and after using it I shut it down again.
I'm not able to work out the purpose of the Docker CLI (started by Kitematic) vs. the regular OSX terminal.
I have the docker command available in both (OSX terminal and Docker CLI), but when typing in:
//OSX terminal
$ docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
// Kitematic terminal
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mariadb 5.5.47 a31b2e03c00b 6 months ago 238.8 MB
mysql latest d617bcfd159e 6 months ago 360.3 MB
mysql 5.7.9 a5ad9eb2ff48 8 months ago 359.9 MB
<none> <none> 9ee13ca3b908 8 months ago 125.1 MB
and the same for the rest of the commands like docker-compose up: they work in the terminal started by Kitematic but not in my local terminal.
Can you please explain what I'm missing here?
This is due to the virtualization of the Docker environment on macOS. The Docker environment is not available as a native application but is contained in a virtual machine. See the docs:
Docker for Mac is our newest offering for the Mac. It runs as a native
Mac application and uses xhyve to virtualize the Docker Engine
environment and Linux kernel-specific features for the Docker daemon.
You do not need to access that machine manually in any way. The Docker Terminal listed under your Applications as well as Kitematic will do that for you.
However, if you take a look at the virtual machines listed in VirtualBox you will find a VM named default which is the one created during the Docker installation.
So the docker CLI is kind of an adapter between your local OSX terminal and the Docker environment: it forwards the commands from your OSX terminal to the Docker daemon inside the VM. The terminal started by Kitematic has environment variables set that point the client at that VM, which is exactly what your plain OSX terminal is missing, as sketched below.
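To make the plain OSX terminal behave the same way, a sketch using docker-machine (which Docker Toolbox/Kitematic installs; the VM name default is the one visible in VirtualBox):
# Show the environment variables the Kitematic terminal sets for you
docker-machine env default
# Apply them in the current shell so the client can reach the daemon in the VM
eval "$(docker-machine env default)"
docker images   # now works in the plain terminal too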
I am new to docker and trying to understand the concept of base image.
Let's say I have a hello-world docker app on a Windows machine with ubuntu as the base image in the Dockerfile.
Now, to run this hello-world application, is Docker going to install the whole ubuntu to run the application?
If not, then how will the ubuntu base image be used here, and how will the Docker container facilitate the communication between the ubuntu-based application and the Windows OS?
Now, to run this hello-world application, is Docker going to install the whole ubuntu to run the application?
No, the ubuntu image used is not "the whole ubuntu". It is a trimmed-down version, without the X11 graphics layer. Still 180 MB though: see "Docker Base Image OS Size Comparison".
These days, you would rather use an Alpine image (5 MB): see "Docker Official Images are Moving to Alpine Linux".
Regarding the hello-world application specifically, there is no Ubuntu or Alpine involved. Just 1.8 KB of machine code compiled from C, which makes only direct calls to the Linux kernel of the host.
That Linux host is used by the Docker container through system calls: see "What is meant by shared kernel in Docker?"
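A small demonstration of the shared kernel, as a sketch: containers from different distros all report the host's kernel release, because none of them ships a kernel of its own.
# Both containers print the same kernel release: the host's
docker run --rm ubuntu uname -r
docker run --rm alpine uname -r
# Compare with the host (on Windows, the "host" here is the Linux VM)
uname -r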
On Windows, said Linux host was provided by a VirtualBox VM running boot2docker, built from a TinyCore distro.
With the more recent "Docker for Windows", that same VM is run through the Hyper-V Windows feature.
I have a Linux x86 application inside a docker container and I want to run it under Windows. I don't want to force users to install Virtual Box. Ideally a qemu or similar virtualization tool can be used, since it is very tiny and requires no installation at all.
My approach was to use qemu for Windows and boot2docker, so I can boot a minimal Linux with docker installed and then run my docker container within it.
This is the command I'm using to run it:
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
The boot goes well, but I have several problems:
at every boot the image goes through all the configuration steps (generating keys for SSH, setting the hostname, etc.) that could be skipped the second time the image runs; it seems that the changes to the image are not persisted through runs. I want to build an image that is already configured and needs only to boot;
to add my application inside the image I have to rebuild the whole boot2docker.iso image by using the steps described in How to build boot2docker.iso locally.
So, the question is: how can I use the base boot2docker.iso image and add some persisting data (such as configurations and my application)? Perhaps a read/write partition mounted from another file?
I like the idea.
Maybe you can check MobaLiveCD; it has a nice lightweight GUI and it embeds the QEMU system inside. I tried it with the Tiny Core live CD ISO (the base of boot2docker), and it works quite OK.
It seems it doesn't support 64-bit (which boot2docker needs), but the functionality fits your need.
Your command
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
launches an ISO; what you want is to:
1. reserve some disk space for this ISO in a .img
2. run this ISO and install it into this .img
3. reboot
On Linux you would start by doing:
qemu-img create -f qcow2 /home/myuser/my_image.img 6G
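A sketch of the follow-up boots, assuming the my_image.img created above (standard QEMU flags; the install-then-reboot flow mirrors the steps listed above):
# First boot: attach both the disk image and the ISO, boot from the CD (-boot d)
qemu-system-x86_64.exe -m 256 -hda my_image.img -cdrom boot2docker.iso -boot d
# Subsequent boots: boot straight from the now-persistent disk image
qemu-system-x86_64.exe -m 256 -hda my_image.img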
There is a docker-cli for Windows; it seems to be what you are looking for, see
http://azure.microsoft.com/blog/2014/11/18/docker-cli-for-windows-clients/
You can use boot2docker: http://boot2docker.io/
The boot2docker installer installs VirtualBox behind the scenes.
You only have to start the boot2docker shortcut; the VirtualBox management UI and the VMs stay hidden.
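A sketch of the usual first-run sequence with the boot2docker CLI (command names as documented for boot2docker at the time):
# Create and start the hidden VirtualBox VM
boot2docker init
boot2docker up
# Point the docker client at the daemon inside the VM
eval "$(boot2docker shellinit)"
docker version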