I am learning Docker.
I made a simple Dockerfile on ubuntu18 as below:
FROM gcc:4.9
COPY . /home/user/Desktop/HelloWorld
WORKDIR /home/user/Desktop/HelloWorld
RUN g++ HelloWorld.cpp -o HelloWorld
CMD ["./HelloWorld
I built and ran it on Ubuntu without any problem.
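For reference, the build and run commands were just the standard ones (the tag name here is only an example):
docker build -t helloworld .
docker run helloworld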
Then I pushed it to Docker Hub so that I could run it from other machines.
I tried to run the image on a different Ubuntu machine and it worked fine.
I tried to run the image on Windows 7 and it also worked fine!!
I don't know how it can run on Windows, given that the Dockerfile uses g++ to build and ./ to run, neither of which is supported on Windows.
Are g++ HelloWorld.cpp -o HelloWorld and CMD ["./HelloWorld"] actually being run on Windows? If not, where do they run?
And what exactly does the FROM instruction do?
There is no "native" support for Linux containers in Windows. The official binary from docker solves this by provisioning a virtual machine using Hyper V that runs a small Linux distribution an the docker daemon.
The docker cli runs natively on Windows but is configured to use a remote daemon (the one in the VM).
So your linux containers does not run on windows, they run on Linux (and in case you use docker for Windows it is in a VM)
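A quick way to confirm this yourself (assuming a standard Docker for Windows install, so the exact output may differ):
docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
# typically prints linux/amd64 even though the client runs on Windows
docker info --format '{{.OperatingSystem}}'
# reports the Linux distribution inside the VM, not Windows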
Related
I was trying to install software on 32-bit CentOS 4.8 and ran into a problem. I ran the container using docker run -d (or -itd). The installer keeps pointing to an x86_64 folder that doesn't exist. I was confused because I'm sure I used the correct CentOS image. I ran uname -a and it tells me that my container's architecture is 64-bit (x86_64).
When I run it with docker run -it instead and check uname -a, it correctly shows that I'm using the 32-bit image.
My question is: is there any explanation for why the -d flag changes the architecture?
I'm using Docker version 20.10.5 on Windows 10 (64-bit).
Edit: Even when I start a stopped container that was created with docker run -it using docker start, it uses the 64-bit architecture. I need to run it with docker start -i.
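For reference, these are roughly the commands I am comparing (the image name is just a placeholder for my 32-bit CentOS image):
docker run -d --name c1 my-centos32 sleep 3600
docker exec c1 uname -m                    # reports x86_64 here
docker run --rm -it my-centos32 uname -m   # reports the 32-bit architecture as expected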
The image was built on Mac OSX with an M1 processor and deployed to an EC2 instance, but when the scripts are run it yields the error:
standard_init_linux.go:219: exec user process caused: exec format error
Elsewhere on Stack Overflow, this is explained as a mismatch of OS architecture. Sure enough, running uname -m on the EC2 instance shows it to be x86_64, and docker image inspect shows the container to have architecture arm64.
Here's what I don't understand: uname -m on my Mac shows it to be x86_64 too. So how does the container end up with a different architecture?
More significantly, how do I build an image on my Mac that I can run on EC2?
The Dockerfile is simply
FROM python
WORKDIR /
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src /src
with src currently containing some simple Python scripts, executed like this:
docker run container/name python test.py
This works fine on my Mac, but gives the error above when executed on AWS.
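For completeness, these are the checks I ran (container/name is the image tag from the run command above):
uname -m                                                            # on the EC2 instance: x86_64
docker image inspect container/name --format '{{.Architecture}}'    # arm64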
OK. Here's what's happening. My Mac has the new M1 chip and I'm running the Tech Preview version of Docker Desktop. Under the hood the chip has the arm64 architecture, but when interrogated through iTerm and VSCode it claims to be x86_64 instead, hence my confusion when I posted the question. This is probably because both of those apps are quietly run through an Intel simulator behind the scenes, and that's what responds to the uname command.
However, because the processor is really arm64, that's the base architecture I get when I pull Python images from Docker (I tried lots of different flavours and versions of Python, all with the same results).
To force use of an amd64 AWS-compatible image I changed the first line of the Dockerfile to:
FROM --platform=linux/x86-64 python
When containers from this image are run on the Mac that causes a warning
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
but it's just that, a warning, and the script runs (presumably by redirecting back through the Intel simulator). The scripts now run without problem (or warning) on the EC2 instance.
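A rough sketch of the resulting workflow (the image tag and file names are just examples; this assumes a recent Docker with BuildKit, which also accepts the platform on the command line):
# build an amd64 image on the M1 Mac and push it
docker build --platform linux/amd64 -t myuser/myapp:latest .
docker push myuser/myapp:latest
# then, on the EC2 instance
docker pull myuser/myapp:latest
docker run myuser/myapp:latest python test.py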
I'm not sure why you're getting this error, but there is a nice way to get around it if you'd like and if you don't mind your code and images being public. I'm guessing that this is just home-stuff anyway, so it might not be too bad.
1. Put your code in GitHub.
2. Configure a repository on hub.docker.com for your image and configure automatic builds from GitHub.
3. SSH onto your EC2 instance and pull your image directly from Docker Hub.
An alternative is to start with step 1, then log into your EC2 instance using SSH and clone the repo on that machine. You can then build it directly on a real Linux machine (your OSX machine doesn't run Linux, which is an instant mismatch with Docker). If you build it on the server you should be able to run it there with no problems.
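A rough sketch of that alternative (the key file, host, and repository URL are placeholders):
ssh -i my-key.pem ec2-user@<ec2-public-ip>
git clone https://github.com/<you>/<repo>.git
cd <repo>
docker build -t myapp .
docker run myapp python test.py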
Try running the container with CMD ["lscpu"] or something similar, like cat /proc/cpuinfo, and compare the architectures.
Another thing: you might be pulling the arm architecture of the python image when building, and then trying to run it on x86_64 (EC2).
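For example (the image name is a placeholder), compare what the image reports with what the host reports:
docker run --rm <your-image> uname -m   # architecture baked into the image
uname -m                                # architecture of the EC2 host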
In addition to what has been shared above, you could also build a multi-arch image with Buildx.
Basically, recent Docker versions come with a CLI command called buildx. You can use buildx on Docker Desktop for Mac and Windows to build multi-arch images, link them together with a manifest file, and push them all to a registry with a single command.
Here is what works for me:
Create a new builder, which gives access to the new multi-architecture features:
docker buildx create --name mybuilder --use
Build the Dockerfile with buildx, passing the list of architectures to build for:
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
=> pushing layers 2.7s
=> pushing manifest for docker.io/username/demo:latest 2.2
where username is a valid Docker username.
Notes:
The --platform flag informs buildx to generate Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.
To inspect the image, use the command below:
docker buildx imagetools inspect username/demo:latest
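Once the multi-arch manifest has been pushed, each machine pulls its native variant automatically; for example, on the EC2 instance you would expect something like:
docker run --rm username/demo:latest uname -m   # x86_64 on the EC2 host, aarch64 on the M1 Mac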
Help me, friends, please. I'm a noob with Docker.
I'm working with Laravel and created a project with Docker in this location:
root@VIGIA-PC:/mnt/d/projects/fastfood-app/fastfood-api#
Docker was working, but after I shut down my PC and turned it back on, the Docker container is not running. Why?
Also, when I go to the project folder and type ls in the Debian terminal, no files appear, but when I look in Windows they are there.
I ran docker ps and it showed the following:
Information
windows version 10
distro linux debian
build 19042
Docker version 20.10.2, build 2291f61
You'll have to change the behavior of your Docker containers.
If the host is restarted, the containers running on it are shut down as well; to have them start again automatically you need to configure a restart policy: https://docs.docker.com/config/containers/start-containers-automatically/
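For example, to have a container come back automatically after a reboot (the container and image names are placeholders):
docker update --restart unless-stopped <container-name>   # change an existing container
docker run -d --restart unless-stopped <image>            # or set the policy when creating a new one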
I have already solved my problem. It was a problem with the path.
Has anyone succeeded in running fabric-composer on the Windows 10 Linux subsystem running Ubuntu 16.04?
You can run composer on Windows 10 WSL (Windows Subsystem for Linux), but you will not be able to run Docker containers in it. Linux Docker containers require a Linux kernel and WSL doesn't have one; it is a clever piece of technology that dynamically converts user-space Linux API calls to Windows API calls.
Therefore you will have to run Hyperledger Fabric either by using Docker for Windows (which runs it for you in Hyper-V) or by running your own hypervisor.
It is possible to have the docker commands run in WSL, but you will need to configure them to talk to the Docker daemon running inside a hypervisor.
Yes, you can use Hyperledger-fabric-composer on Windows 10, but as David said in the answer above, you will not be able to run Docker containers directly from the Ubuntu subsystem.
To do that, you have to do the following things:
METHOD:- 1
You will need to install the Docker CE client and docker-compose in the Ubuntu subsystem and install Docker (version 17.09) on Windows as well. But those two won't connect to each other out of the box.
So you need to expose the daemon on port 2375 first, by right-clicking the Docker icon in the task bar, clicking Settings, and checking the Expose daemon box.
Now the Docker server will be reachable over the Windows network, including from the Ubuntu subsystem. We need to set the environment variable in Ubuntu by running the commands below:
echo "export DOCKER_HOST='tcp://0.0.0.0:2375'" >> ~/.bashrc
source ~/.bashrc
These commands add DOCKER_HOST to the environment every time we start a new Bash session.
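A quick way to check that the WSL client is now talking to the Windows daemon (the exact output will vary):
docker version                                # the Server section should now be populated
docker info --format '{{.OperatingSystem}}'   # reports the daemon's Linux VM, not your Ubuntu subsystem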
METHOD:- 2
If you don't want to use the Ubuntu subsystem, you can simply install Git Bash and Docker (version 17.09).
Then install Hyperledger Fabric using Git Bash.
I am new to docker and trying to understand the concept of base image.
Let's say I have a hello-world Docker app on a Windows machine with ubuntu as the base image in the Dockerfile.
Now, to run this hello-world application, is Docker going to install the whole of Ubuntu to run the application?
If not, how will the ubuntu base image be used here, and how will the Docker container facilitate communication between the Ubuntu-based application and the Windows OS?
Now, to run this hello-world application, is Docker going to install the whole of Ubuntu to run the application?
No, the ubuntu image used is not "the whole of Ubuntu". It is a trimmed-down version, without the whole X11 graphics layer. Still around 180 MB though: see "Docker Base Image OS Size Comparison".
These days you would rather use an Alpine image (around 5 MB): see "Docker Official Images are Moving to Alpine Linux".
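If you want to compare the sizes yourself (a quick check; the exact numbers depend on the current tags):
docker pull ubuntu
docker pull alpine
docker images | grep -E 'ubuntu|alpine'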
Regarding the hello-world application specifically, there is no Ubuntu or Alpine involved. Just 1.8 KB of C machine-code, which makes only direct calls to the Linux kernel of the host.
That Linux host is used by the Docker container through system calls: see "What is meant by shared kernel in Docker?"
On Windows, that Linux host used to be provided by a boot2docker VM running in VirtualBox, built from a TinyCore distro.
With the more recent "Docker for Windows", that same VM is run through the Hyper-V Windows feature.
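The shared kernel is easy to observe (any small Linux image will do; on Windows or macOS the second command reports the kernel of the Docker VM rather than of the host OS):
uname -r                          # kernel of the host (or of the Docker VM)
docker run --rm alpine uname -r   # same kernel version: no separate OS is booted for the container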