Docker run -d changes container architecture - Windows

I was trying to install software on 32-bit CentOS 4.8 and ran into a problem. I ran the container using docker run -d (or -itd). The installation software keeps pointing to an x86_64 folder that doesn't exist. I was confused because I'm sure I used the correct CentOS image. I ran uname -a and it tells me that my container architecture is 64-bit (x86_64).
I tried running it with docker run -it instead, and when I check uname -a it correctly shows that I'm using a 32-bit image.
My question is: is there any explanation for why the -d flag changes the architecture?
I'm using Docker version 20.10.5 on Windows 10 (64-bit).
Edit: Even when I start a stopped container (originally created with docker run -it) using docker start, it uses the 64-bit architecture instead. I need to run it using docker start -i.
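For reference, a minimal way to compare what the image metadata records with what a container reports at runtime (the image tag below is a placeholder for whichever 32-bit CentOS image was pulled):
docker image inspect --format '{{.Os}}/{{.Architecture}}' my-centos-32bit
docker run --rm -it my-centos-32bit uname -m
Note that uname reports the kernel's architecture, which on a 64-bit kernel can read x86_64 even for a 32-bit userland; docker image inspect shows what the image itself was built for.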

Related

How to run amd64 docker images on arm64 host platform

I have an M1 Mac and I am trying to run an amd64-based Docker image on my arm64-based host platform. However, when I try to do so (with docker run) I get the following error:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested.
When I try adding the flag --platform linux/amd64 the error message doesn't appear, but I can't seem to get into the relevant shell, and docker ps -a shows that the container exits immediately upon starting. Would anyone know how I can run this exact image on my machine given the circumstances, or how to make the --platform flag work?
Using --platform is correct. On my M1 Mac I'm able to run both arm64 and amd64 versions of the Ubuntu image from Docker Hub. The machine hardware name provided by uname proves it.
# docker run --rm -ti --platform linux/arm/v7 ubuntu:latest uname -m
armv7l
# docker run --rm -ti --platform linux/amd64 ubuntu:latest uname -m
x86_64
Running amd64 images is enabled by Rosetta 2 emulation, as the documentation indicates:
Not all images are available for ARM64 architecture. You can add --platform linux/amd64 to run an Intel image under emulation.
If the container is exiting immediately, that's a problem with the specific container you're using.
To address the problem of your container immediately exiting after starting, try using the --entrypoint flag to override the container's entry point. It would look something like this:
docker run -it --entrypoint=/bin/bash image_name
Credit goes to this other SO answer that helped me solve a similar issue on my own container.
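If the container still exits immediately, a quick way to see why is to check its logs and recorded exit state; the container ID below is a placeholder:
docker logs <container-id>
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' <container-id>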

Docker Image built on Mac OSX won't run on AWS EC2 instance

The image was built on Mac OSX with an M1 processor and deployed to an EC2 instance. But when scripts are run, it yields the error:
standard_init_linux.go:219: exec user process caused: exec format error
Elsewhere on Stack Overflow, this is explained as a mismatch of OS architecture. Sure enough, running "uname -m" on the EC2 instance shows it to be x86_64, and "docker image inspect" shows the image to have architecture arm64.
Here's what I don't understand: "uname -m" on my Mac shows that to be x86_64 too. So how does the container inherit a different architecture?
More significantly, how do I build an image on my Mac that I can run on EC2?
The Dockerfile is simply:
FROM python
WORKDIR /
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src /src
with src currently containing some simple Python scripts, executed thus:
docker run container/name python test.py
This works fine on my Mac, but gives the error above when executed on AWS.
OK. Here's what's happening. My Mac has the new M1 chip and I'm running the Tech Preview version of Docker Desktop. Under the hood the chip has the arm64 architecture, but when interrogated through iTerm and VSCode it claims to be x86_64 instead, hence my confusion when I posted the question. This is probably because both those apps are quietly being run through Rosetta 2, Apple's Intel translation layer, behind the scenes, and that's what's responding to the uname command.
However, because the processor is really arm64, that's the base architecture when I pull Python images from Docker (I tried lots of different flavours and versions of Python, all with the same results).
To force use of an amd64, AWS-compatible image, I changed the first line of the Dockerfile to:
FROM --platform=linux/x86-64 python
(linux/x86-64 is an accepted alias for the canonical linux/amd64.)
When containers from this image are run on the Mac, that causes a warning:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
but it's just that, a warning, and the script runs (presumably by redirecting back through the Intel emulation). The scripts now run without problems (or warnings) on the EC2 instance.
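Alternatively, assuming BuildKit is enabled (the default in recent Docker Desktop releases), you can leave the Dockerfile untouched and select the platform at build time instead; the tag is just an example:
docker build --platform linux/amd64 -t container/name .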
I'm not sure why you're getting this error, but there is a nice way to get around it if you'd like, provided you don't mind your code and images being public. I'm guessing this is just home stuff anyway, so it might not be too bad.
1. Put your code in GitHub.
2. Configure a repository on hub.docker.com for your image and configure automatic builds from GitHub.
3. ssh onto your EC2 instance and pull your image directly from Docker Hub.
An alternative is to start with step 1, then log into your EC2 instance using ssh and clone the repo on that machine. You can then build it directly on a real Linux machine (your OSX machine doesn't run Linux, which is an instant mismatch with Docker). If you build it on the server, you should be able to run it there with no problems.
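A sketch of that alternative, assuming git and docker are already installed on the instance and using a placeholder repository URL:
# On the EC2 instance
git clone https://github.com/youruser/yourrepo.git
cd yourrepo
docker build -t container/name .
docker run container/name python test.py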
Try running with CMD ["lscpu"] or something related like cat /proc/cpuinfo in the container, and compare architectures.
Another thing: you might be pulling the arm architecture of the python image when building, and then trying to run it on x86_64 (EC2).
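Following that suggestion, you can also compare architectures without running the container at all by reading the image metadata directly (the tag is the one from the question):
docker image inspect --format '{{.Architecture}}' container/name
uname -m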
In addition to what has been shared above, you could also build a multi-arch image with Buildx.
Recent Docker versions come with a CLI plugin called buildx. You can use the buildx command on Docker Desktop for Mac and Windows to build multi-arch images, link them together with a manifest file, and push them all to a registry using a single command.
Here is what works for me:
Create a new builder, which gives access to the multi-architecture features:
docker buildx create --name mybuilder --use
Build the Dockerfile with buildx, passing the list of architectures to build for:
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
=> pushing layers 2.7s
=> pushing manifest for docker.io/username/demo:latest 2.2
where username is a valid Docker username.
Notes:
The --platform flag informs buildx to generate Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.
To inspect the image, use the command below:
docker buildx imagetools inspect username/demo:latest
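Once pushed, any host pulling the tag should automatically receive the architecture matching its platform from the manifest list. A quick sanity check, using the example tag from above:
docker run --rm username/demo:latest uname -m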

Docker run armv7 images on mac

My Mac uses x86_64 hardware, and in theory I shouldn't be able to run Docker images built for armv7.
HOWEVER
Docker documentation says:
Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.
This does not require any special configuration in the container itself as it uses qemu-static from the Docker for Mac VM.
and I'm also reading articles like this one which confirm the above
docker run -it --rm arm32v7/debian /bin/bash
should work on a mac although it doesn't work for me:
Unable to find image 'arm32v7/debian:latest' locally
latest: Pulling from arm32v7/debian
Digest: sha256:9b61eaedd46400386ecad01e2633e4b62d2ddbab8a95e460f4e0057c612ad085
Status: Image is up to date for arm32v7/debian:latest
docker: Error response from daemon: image with reference arm32v7/debian was found but does not match the specified platform cpu architecture: wanted: amd64, actual: arm.
See 'docker run --help'.
I wonder whether I'm misunderstanding something.
Docker desktop community version 2.4.2.0 (48975) edge
Docker version 20.10.0-beta1, build ac365d7
MacOS version 10.15.7 (19H2)
Note: while researching the topic I've tried to use qemu and ran:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
which has potentially interfered with the default behaviour.
I think my problem is related to this moby issue.
The fix was quite trivial as I only needed to add the --platform argument, in my case linux/arm or linux/arm/v7:
docker run -it --rm arm32v7/debian /bin/bash
has become
docker run --platform=linux/arm -it --rm arm32v7/debian /bin/bash
and voila:
root@82c3ff8752d3:/# uname -a
Linux 82c3ff8752d3 5.4.39-linuxkit #1 SMP Fri May 8 23:03:06 UTC 2020 armv7l GNU/Linux
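To double-check what was actually pulled, you can also read the image metadata; this is a generic check rather than part of the original fix:
docker image inspect --format '{{.Os}}/{{.Architecture}}' arm32v7/debian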

Where Docker Image is run

I am learning Docker.
I made a simple Dockerfile on Ubuntu 18 as below:
FROM gcc:4.9
COPY . /home/user/Desktop/HelloWorld
WORKDIR /home/user/Desktop/HelloWorld
RUN g++ HelloWorld.cpp -o HelloWorld
CMD ["./HelloWorld
I built and ran it on Ubuntu without any problem.
Then I shared it on Docker Hub so I could run it from outside.
I tried to run the image on a different Ubuntu machine and it worked fine.
I tried to run the image on Windows 7 and it also worked fine!!
I don't know how it can run on Windows, given that the Dockerfile uses g++ to build and ./ to run, neither of which is supported on Windows.
Are g++ HelloWorld.cpp -o HelloWorld and CMD ["./HelloWorld"] getting run on Windows? If not, where do they get run?
And what exactly does the FROM command do?
There is no "native" support for Linux containers in Windows. The official binary from Docker solves this by provisioning a virtual machine using Hyper-V that runs a small Linux distribution and the Docker daemon.
The Docker CLI runs natively on Windows but is configured to use a remote daemon (the one in the VM).
So your Linux containers do not run on Windows; they run on Linux (and in the case of Docker for Windows, that Linux lives in a VM).
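You can see this from the CLI itself: the client reports Windows while the daemon it talks to reports Linux. A quick check using docker version's Go-template support:
docker version --format '{{.Client.Os}} client -> {{.Server.Os}}/{{.Server.Arch}} server'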

Run a docker image on Windows results in "oci runtime error: exec: "bash": executable file not found in $PATH."

I'm running Docker on Windows ("Docker Toolbox", not "Docker for Windows").
I've built an image with a Rails app inside. It works properly on my Mac OS machine but gets stuck in production on Windows.
I'm using Docker 1.12 and docker-machine 0.8.0 on both machines.
When I create a machine and try to run the container from image, I do:
docker run -it myRepo:myTag bash
which opens an interactive terminal on Mac OS, but Windows 7 and Windows Server 2011 both respond with:
"Error response from daemon: oci runtime error: exec: "bash":
executable file not found in $PATH."
I use the MINGW64 shell via the Docker Quickstart Terminal, but the old cmd.exe returns the same.
Can anybody help me with this issue? I've spent several hours trying to find a solution, but there are too few answers for Windows.
Thank you in advance!
I also use Windows 7 with MINGW64. Here is what I get using nginx as an example:
$ docker run -it nginx bash
cannot enable tty mode on non tty input
I don't think you can open a tty using MINGW64.
You can try:
$ docker run -i nginx bash
ls
bin
...
You will see no prompt or any indication that you are inside the container. Just run ls and it should work inside your container.
Another option is to use winpty to get a tty:
$ winpty docker run -it myRepo:myTag bash
root#644f59e6f818:/#
Have you tried?
$ winpty docker run -it myRepo:myTag /bin/bash
I haven't had the problem you're mentioning, but I have seen it before when I was mapping volumes.
If you are mapping volumes using MINGW64, you will need to add an extra / before the local path. For example:
docker run -p 8080:80 -v "/$PWD":/usr/share/nginx/html nginx
Let me know your findings.
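An alternative to the extra leading slash, assuming you're in Git Bash or another MSYS-based shell that honours it, is to disable path conversion for just that command:
MSYS_NO_PATHCONV=1 docker run -p 8080:80 -v "$PWD":/usr/share/nginx/html nginx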
