Run a docker container in Windows

I have a Linux x86 application inside a docker container and I want to run it under Windows. I don't want to force users to install VirtualBox. Ideally QEMU or a similar virtualization tool could be used, since it is very tiny and requires no installation at all.
My approach was to use QEMU for Windows and boot2docker, so I can boot a minimal Linux with docker installed and then run my docker container within it.
This is the command I'm using to run it:
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
The boot goes well, but I have several problems:
at every boot the image goes through all the configuration steps (generating keys for SSH, setting the hostname, etc.) that could be skipped the second time the image runs; it seems that changes to the image are not persisted through runs. I want to build an image that is already configured and only needs to boot;
to add my application inside the image I have to rebuild the whole boot2docker.iso image by following the steps described in How to build boot2docker.iso locally.
So, the question is: how can I use the base boot2docker.iso image and add some persisting data (such as configurations and my application)? Perhaps a read/write partition mounted from another file?

I like the idea.
Maybe you can check MobaLiveCD; it has a nice lightweight GUI and it embeds QEMU inside. I tried it with the Tiny Core live CD ISO (the base of boot2docker), and it works quite well.
It seems it doesn't support 64-bit (which boot2docker needs), but the functionality fits your need.

Your command
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
launches the ISO as a live CD. What you want is to:
reserve some disk space for this ISO in a .img
run the ISO and install it into that .img
reboot
On Linux you would start by doing
qemu-img create -f qcow2 /home/myuser/my_image.img 6G
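On Windows the steps look much the same (a sketch, assuming the file names from the question; qemu-img.exe ships with QEMU for Windows):
qemu-img.exe create -f qcow2 my_image.img 6G
qemu-system-x86_64.exe -m 256 -hda my_image.img -cdrom boot2docker.iso -boot d
Once the install is done, boot from the disk image instead of the CD:
qemu-system-x86_64.exe -m 256 -hda my_image.img -boot c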
There is also a docker CLI for Windows, which may be what you are looking for; see
http://azure.microsoft.com/blog/2014/11/18/docker-cli-for-windows-clients/

You can use boot2docker http://boot2docker.io/
The boot2docker installer will install VirtualBox behind the scenes.
You only have to start the boot2docker shortcut; the VirtualBox management and VMs stay hidden.

Related

macOS: How to update Docker Desktop that's running on an external drive, without removing existing containers?

I have installed Docker Desktop 3.0 on my Mac, on an external drive.
I'm trying to upgrade it to the latest version. I don't get prompted for updates and I can't see any option to check for updates in the GUI.
I'd rather not uninstall in case it removes the containers and the data I have set up. But maybe I won't lose them?
Can someone point me in the right direction?
Thanks.
The location of the files doesn't play a role in the upgrade. What plays the major role is the source version and the target version. There are two ways to upgrade Docker for Mac, and they are described in detail below:
Incremental upgrade
Upgrade with rebuild (Export / Import)
In the specific case of an upgrade from 3.0.3 to 3.3, the upgrades to 3.1 and 3.2 can be done incrementally, without the need to export/import data; however, the upgrade to 3.2.1 will require a rebuild of the VM, so the export/import method will help to move the data to the new VM.
Location of data
The application itself doesn't contain the data for the containers. Docker for macOS runs the engine in a virtual machine, and the data is stored on the virtual disk of that virtual machine.
The raw data (the virtual disk) of the virtual machine is located in
~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
where 0 is the index of the VM. By default there is only one VM, but it's better to check. You can find this information in the GUI under Preferences > Resources > Disk Image Location. A simple copy of the contents of the data folder will preserve the virtual machine.
Incremental Upgrade
It's always recommended to upgrade incrementally, following the official release versions between your current version and the target version. The releases can be found in the release notes.
The steps to upgrade are:
Stop the Docker application (from the menu bar, find the Docker icon and select Quit Docker Desktop)
Make a full backup of ~/Library/Containers/com.docker.docker/Data (a copy command is sketched below)
Rename the application (you might need it to go back, in case the upgrade is not successful)
Copy the new application into the destination and start it.
These steps are repeated for each minor version between the source version and the target version. Some versions can be skipped, but this is not recommended.
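For the backup step, a plain copy is enough (a sketch; the destination path is an arbitrary choice):
cp -a ~/Library/Containers/com.docker.docker/Data ~/docker-data-backup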
In case of problems you can use the troubleshooting guide.
Upgrades with rebuild of the VM
Be aware that upgrading Docker from certain older versions to a new version might require a rebuild of the virtual machine.
In addition, sometimes the upgrade does not work properly and you will need to uninstall the application as described in the troubleshooting page.
So before you proceed with the uninstall and rebuild of the VM, if you have data that is important to you, it is highly recommended to export the containers, volumes and images before the upgrade.
You will need to make a list of all the containers, images and volumes you want to move across to the new VM. The following docker commands will help:
~# docker ps -a
~# docker volume ls
~# docker image ls
To export the data, you can use docker export and docker image save. The following examples will save a container or an image to a file called file.raw:
~# docker export <<container name>> > file.raw
~# docker image save <<image name>> > file.raw
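Note that docker export only works on containers; volumes are not covered by it. A common workaround (a sketch; myvolume is a hypothetical name taken from docker volume ls) is to archive the volume's contents from a throwaway container:
~# docker run --rm -v myvolume:/data -v "$PWD":/backup alpine tar czf /backup/myvolume.tar.gz -C /data .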
Once the latest version of Docker is installed and the VM is rebuilt, to import the data back from the files, you will need to use:
~# docker import file.raw
~# docker image load -i file.raw

Docker Image built on Mac OSX won't run on AWS EC2 instance

An image built on Mac OSX with an M1 processor is deployed to an EC2 instance, but when scripts are run it yields the error:
standard_init_linux.go:219: exec user process caused: exec format error
Elsewhere on Stack Overflow, this is explained as a mismatch of OS architecture. Sure enough, running "uname -m" on the EC2 instance shows it to be x86_64, and "docker image inspect" shows the container to have architecture arm64.
Here's what I don't understand: "uname -m" on my Mac shows that to be x86_64 too. So how does the container inherit a different architecture?
More significantly, how do I build an image on my Mac that I can run on EC2?
The Dockerfile is simply
FROM python
WORKDIR /
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src /src
with src containing, currently, some simple python scripts, executed thus:
docker run container/name python test.py
This works fine on my Mac, but gives the error above when executed on AWS.
OK. Here's what's happening. My Mac has the new M1 chip and I'm running the Tech Preview version of Docker Desktop. Under the hood the chip has the arm64 architecture, but interrogating it through iTerm and VSCode it claims to be x86_64 instead, hence my confusion when I posted the question. This is probably because both those apps are quietly run through an Intel simulator behind the scenes, and that's what responds to the uname command.
However, because the processor is really arm64, that's the base architecture when I pull Python images from Docker (I tried lots of different flavours and versions of Python, all with the same results).
To force use of an amd64 AWS-compatible image I changed the first line of the Dockerfile to:
FROM --platform=linux/x86-64 python
When containers from this image are run on the Mac, that causes a warning:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
but it's just that, a warning, and the script runs (presumably by redirecting back through the Intel simulator). The scripts now run without problem (or warning) on the EC2 instance.
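As a side note, instead of hard-coding the platform in the Dockerfile, the same effect can be requested at build time (a sketch, assuming the image name used in the question and a BuildKit-enabled Docker):
docker build --platform linux/amd64 -t container/name .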
I'm not sure why you're getting this error, but there is a nice way to get around it if you don't mind your code and images being public. I'm guessing this is just home stuff anyway, so that might not be too bad.
Put your code in GitHub.
Configure a repository on hub.docker.com for your image and configure automatic builds from GitHub.
SSH onto your EC2 instance and pull your image directly from Docker Hub.
An alternative is to start with step 1, then log into your EC2 instance using SSH and clone the repo on that machine. You can then build it directly on a real Linux machine (your OSX machine doesn't run Linux, which is an instant mismatch with Docker). If you build it on the server, you should be able to run it there with no problems.
Try running with CMD ["lscpu"] or something related like cat /proc/cpuinfo in the container, and compare the architectures.
Another thing: you might be pulling the arm architecture of the python image when building, and then trying to run it on x86_64 (EC2).
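A quick way to compare the two sides (a sketch; container/name is the image name from the question):
docker run --rm container/name uname -m
docker image inspect --format '{{.Architecture}}' container/name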
In addition to what has been shared above, you could also build a multi-arch image with Buildx.
Basically, recent Docker versions come with a CLI command called buildx. You can use the buildx command on Docker Desktop for Mac and Windows to build multi-arch images, link them together with a manifest file, and push them all to a registry with a single command.
Here is what works for me:
Create a new builder, which gives access to the new multi-architecture features:
docker buildx create --name mybuilder --use
Build the Dockerfile with buildx, passing the list of architectures to build for:
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
=> pushing layers 2.7s
=> pushing manifest for docker.io/username/demo:latest 2.2s
where username is a valid Docker username.
Notes:
The --platform flag informs buildx to generate Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.
To inspect the image, use the command below:
docker buildx imagetools inspect username/demo:latest

How to install only Docker client on MacOS?

Docker can work decently remotely by defining the DOCKER_HOST variable, but I want to avoid installing the fat Docker for macOS package, which also installs and starts the Docker engine in a VM, one that seems to consume resources.
As docker works remotely, it should still be able to build images, list images, and start and stop containers without having a Docker service/VM on the Mac.
How can I do this? (The docker CLI seems to come only with the entire cow.)
I guess you are looking for install-client-binaries-on-macos.
Docker provides prebuilt binaries; just download the archive here, unpack it, and you will find a standalone docker client binary inside. Copy it to your Mac and it works out of the box.
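With the standalone client in place, point it at the remote engine via DOCKER_HOST (a sketch; user@remote-host is a hypothetical machine running Docker, and the ssh:// scheme needs a reasonably recent client):
export DOCKER_HOST=ssh://user@remote-host
docker ps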
There seems to be no client-only install; but after installing the fat cow, you can tell it to stay off your grass by unchecking Start Docker Desktop when you log in in the preferences, and then shutting down the Docker server.
That's what I do: when I need docker locally I just start it, and after using it I shut it down again.

the command "docker-machine ls " list nothing on my mac

I installed the latest stable Docker for Mac and started Docker directly, without a VirtualBox. I know that it must have started a virtual machine, so I used "docker-machine ls" to find the default machine, but it lists nothing. How can I find the virtual machine? My OS version is 10.10.5.
PS:
In fact, I didn't create any virtual machines, but I do run my Spring Boot app on the "alpine-oraclejdk8" image, so does that mean I'm actually using Docker? The reason I want to find the virtual machine is that I used "nsenter" to enter the container to debug my app's log, but it doesn't work (the author of "nsenter" said that I need to enter the virtual machine first). So my point of confusion is how Docker is running when I can't find the virtual machine on my Mac.
Docker for Mac does not use docker-machine. The app that runs and gives you the little whale icon in the top menu bar runs its own virtual machine. This virtual machine uses HyperKit, a project that uses xhyve, which is a port of bhyve to the macOS Darwin kernel.
This will not create any entries that make docker-machine aware of the VM.
Rather than using nsenter to enter your container, you should use the docker exec command instead. The advantage of docker exec is that it works without having to first SSH to wherever Docker is running.
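For example, to get an interactive shell in a running container (a sketch; mycontainer stands for a name or ID shown by docker ps):
docker exec -it mycontainer sh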
Because you need to create it.
Run the command
docker-machine create vm1
And you'll have your machine.
To redirect your docker client to the specific machine, use this command:
eval $(docker-machine env vm1)
where 'vm1' is the same name that you used to create the machine. You can have a number of docker machines running at the same time, using various backends like VirtualBox or AWS.
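The backend is chosen with the --driver flag when the machine is created, for example (a sketch):
docker-machine create --driver virtualbox vm1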

Docker: Mount volume from Windows host

I use version 1.12 of Docker on Windows, since I can't use the Hyper-V feature with the newer "native" version - so I have my quickstart terminal and communicate with the Docker host via the invisible underlying VirtualBox.
Now I have the problem that I need to mount a local folder into a container, which worked successfully from within the docker-machine by adding
--volume="`pwd`:/root/data"
to the docker run command, but it does not work when I launch the same command from my Windows quickstart terminal (even though the pwd command works correctly in the terminal).
I tried to find the Windows-specific settings for the directory and tested several combinations of format, but no luck. Can anyone help me out on how to correctly specify a Windows folder (e.g. C:\Users\alexander.ruehl) for the volume parameter?
You can use a relative path for your volume: --volume="./mydata:/root/data"
Also make sure that you have given Docker permission to read/write the folder.
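If the relative path doesn't help: with the VirtualBox-based setup, only C:\Users is shared into the Docker VM by default, and the Windows path is usually written in forward-slash form (a sketch, using the folder from the question; myimage is a placeholder image name):
docker run --volume="//c/Users/alexander.ruehl:/root/data" myimage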
