This is something that's been bugging me for a while: how come I can't access programs that are installed in my base image from my Docker image?
For example, my base image has make & gcc installed. However, in the current image that I am accessing via docker run, I can't access make or gcc despite having FROM base img. I get bash: make: command not found.
Make sure you are running the docker run command with the correct format, e.g.:
docker run -it python:3.4.3-slim python --version
               ^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^
               IMAGE             COMMAND
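If make and gcc really are installed in the base image, a quick sanity check is to open a shell in a container from that image and look for the tools directly (a minimal sketch; base_img stands in for your actual image name):
docker run -it base_img bash   # base_img is a placeholder for your image
# then, inside the container:
which make gcc
make --version
If the tools are present here but missing in the derived image, compare the image you are actually running against the one you built from that base.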
I'm currently running Docker Desktop version 20.10.22 (build 3a2c30b, fresh install) on Windows (using WSL 2), but docker compose commands with the -f flag do not work correctly. Since I'm using Docker Compose V2 (I checked the option in the Docker Desktop settings), my commands use a space instead of a hyphen. I get the following message when running any docker compose command using -f:
unknown shorthand flag: 'f' in -f
See 'docker --help'.
Specifically, I'm running the FIWARE NGSI-LD tutorials. All docker compose commands that are used within those tutorials fail. The commands can be found in the services file for each tutorial. For example, a command that fails (saying that the -f flag does not exist) within the Short-Term-History tutorial is the following:
docker compose -f docker-compose/mintaka.yml -p fiware up -d --remove-orphans --renew-anon-volumes
The weird thing is that docker compose --help and docker compose --version both return the output of docker --help and docker --version respectively, as if the compose keyword were excluded. The output of the above command also refers to the standard docker help instead of the docker compose help.
UPDATE: Docker drops the compose keyword between docker and the rest of the command. Replacing compose with a random string of letters has the same effect. It seems the CLI cannot recognize the compose keyword at all.
The old docker-compose is not installed, so that does not work either. Running which docker-compose returns the docker-compose.exe location inside the .../Docker/resources/bin folder. Running which docker compose returns the location of docker.exe. The .../Docker/resources/bin folder is on the PATH environment variable.
Does anybody know what the problem might be? I've searched countless websites but did not find any solutions for this problem yet.
Kind regards
Here is what I have in running processes when I run docker compose events: (screenshot omitted)
Please check if you have all these directories and files (referring to a screenshot not shown here). Then we can troubleshoot further.
I reinstalled Docker Desktop with the same installer (also the same version) and, weirdly enough, this resolved the problem...
The only difference between my old and new installation was that this time I already had WSL 2 installed.
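For anyone hitting this before resorting to a reinstall: docker compose is a CLI plugin, so a hedged first check is whether the docker client is discovering the plugin at all (plugin locations vary by platform; the directory below is the usual user-level one):
docker info                  # 'compose' should appear under Client -> Plugins
ls ~/.docker/cli-plugins     # user-level CLI plugin directory, if present
If compose does not appear in that plugin list, the client is not finding the plugin, which would explain the behaviour described above.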
I have written some scripts to extract data from a website using Selenium. Although my code runs well outside of a Docker container, it fails when I run it after building an image. I searched Google and around the internet but could not find a similar issue. Here is my Dockerfile:
# syntax=docker/dockerfile:1
FROM python:latest
WORKDIR /Users/ufomammut/Documents/eplrestapi
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN curl https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip -O
RUN unzip chromedriver_linux64.zip
COPY . .
CMD [ "python3", "epl.py"]
The error message I receive when I run the Docker image that I built:
Traceback (most recent call last):
  File "/Users/ufomammut/Documents/eplrestapi/epl.py", line 172, in <module>
    browser = webdriver.Chrome("./chromedriver",options = chr_options)
  File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/chrome/webdriver.py", line 73, in __init__
    self.service.start()
  File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 98, in start
    self.assert_process_still_running()
  File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 109, in assert_process_still_running
    raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: Service ./chromedriver unexpectedly exited. Status code was: 255
In my code I have provided the path to the chromedriver as follows:
browser = webdriver.Chrome("./chromedriver",options = chr_options)
Right now I am using a Linux arm64 Python base image and was able to curl and unzip the chromedriver, as reflected in my Dockerfile above. I am no longer receiving the format error, but I am receiving the error message posted above, where it says the chromedriver unexpectedly exited.
You appear to have copied a chromedriver binary built for macOS into a Debian container.
You've tagged this question with macos, and the python:3.9.5-slim-buster image you are using is amd64-based Debian.
I suggest you keep trying to curl the Linux 64-bit chromedriver into the image via your Dockerfile.
To test it from your host OS you can simply rename your current chromedriver binary to chromedriver.backup, download the Linux version to chromedriver, and rebuild your Docker image; the COPY step will then ship the Linux version into the new image.
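Concretely, that host-side swap might look like this (a sketch; the image tag eplrestapi is a placeholder, and the chromedriver version is the one from your Dockerfile):
# on the macOS host, in the build context
mv chromedriver chromedriver.backup
curl -O https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip
unzip chromedriver_linux64.zip   # extracts a Linux 'chromedriver'
docker build -t eplrestapi .     # COPY . . now picks up the Linux binary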
It looks like you have been using the chromedriver binary for macOS instead of its Linux version.
To get a workable setup, you can use the Linux binary and perform a volume mapping when you run your Docker image: put the Linux chromedriver in a separate directory on your system, and map that directory into the container when you run the image:
docker run -v <directory_path_to_chromedriver>:<docker_image_directory_path> <image>
Your final piece of code will then use the new binary file path:
browser = webdriver.Chrome("<docker_image_directory_path>/chromedriver",options = chr_options)
But for a permanent fix, curl and unzip the Linux binary while building the Docker image.
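For instance (hypothetical paths: /opt/drivers on the host holding the Linux chromedriver, mapped to /drivers in the container, with eplrestapi as a placeholder image tag):
docker run -v /opt/drivers:/drivers eplrestapi
The script would then construct the driver with webdriver.Chrome("/drivers/chromedriver", options=chr_options).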
After going through the comments, I believe you're a little confused about Docker containers and how they interact with your system.
A Docker container runs its own independent, isolated and optimised version of an operating system determined by the FROM layer, and leverages your system's kernel.
Now, you must use Chrome WebDriver binaries compatible with Linux arm64, as rightly pointed out by @Shubham and @lgflorentino. It seems that you've already done so.
Your next steps should be to check permissions on the Chrome WebDriver executable, and make sure that your container has set up all the environment paths and variables correctly.
You can do so by entering your container with docker exec -it <container> /bin/bash (or /bin/sh) and then manually executing your binaries. This will give you a clearer picture.
Check your environment variables as well!
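For example, a manual session inside the container might look like this (a sketch; <container> is whatever docker ps lists for your running container):
docker exec -it <container> /bin/sh
ls -l ./chromedriver       # is the executable bit set?
./chromedriver --version   # does the binary start at all?
echo $PATH                 # are the expected paths present?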
Also, here is an improved Dockerfile with a reduced number of layers:
FROM python:latest
# Use a deterministic Python image: mention a version instead of 'latest'
WORKDIR /Users/ufomammut/Documents/eplrestapi
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt && \
    curl https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip -O && \
    unzip chromedriver_linux64.zip
COPY . .
# Try adding an ENV layer to make sure your paths are set up properly (after debugging your container).
CMD [ "python3", "epl.py" ]
Since you have tagged the question with macos and arm64, I believe your development machine has an Apple Silicon (M1) chip. The Docker engine on an Apple Silicon machine pulls arm64-based images by default, while your chromedriver is a linux64 build. This means the chromedriver can't run inside the container.
You can either use an arm64 chromedriver or build the Docker image for the amd64 platform (note that --build-arg ARCH=amd64 only has an effect if the Dockerfile declares ARG ARCH):
docker build -t your-username/multiarch-example:manifest-amd64 --build-arg ARCH=amd64 .
Try adding --platform=linux/amd64 to the FROM statement. I used this for Debian:
FROM --platform=linux/amd64 python:3.9
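Alternatively, with BuildKit you can pass the platform at build time instead of hard-coding it in the Dockerfile (myimage is a placeholder tag):
docker build --platform linux/amd64 -t myimage .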
I built an image on Mac OSX with an M1 processor and deployed it to an EC2 instance. But when the scripts are run it yields the error:
standard_init_linux.go:219: exec user process caused: exec format error
Elsewhere on Stack Overflow, this is explained as a mismatch of OS architecture. Sure enough, running uname -m on the EC2 instance shows it to be x86_64, and docker image inspect shows the container to have architecture arm64.
Here's what I don't understand: uname -m on my Mac shows that to be x86_64 too. So how does the container inherit a different architecture?
More significantly, how do I build an image on my Mac that I can run on EC2?
The Dockerfile is simply
FROM python
WORKDIR /
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src /src
with src containing, currently, some simple python scripts, executed thus:
docker run container/name python test.py
This works fine on my Mac, but gives the error above when executed on AWS.
OK. Here's what's happening. My Mac has the new M1 chip and I'm running the Tech Preview version of Docker Desktop. Under the hood the chip has the arm64 architecture, but interrogated through iTerm and VSCode it claims to be x86_64, hence my confusion when I posted the question. This is probably because both those apps are quietly run through an Intel simulator behind the scenes, and that's what responds to the uname command.
However, because the processor is really arm64, that's the base architecture when I pull Python images from Docker (I tried lots of different flavours and versions of Python, all with the same results).
To force use of an amd64 AWS-compatible image I changed the first line of the Dockerfile to:
FROM --platform=linux/x86-64 python
When containers from this image are run on the Mac, that causes a warning:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
but it's just that, a warning, and the script runs (presumably by redirecting back through the Intel simulator). The scripts now run without problems (or warnings) on the EC2 instance.
I'm not sure why you're getting this error, but there is a nice way to get around it if you don't mind your code and images being public. I'm guessing this is just home stuff anyway, so it might not be too bad.
1. Put your code on GitHub.
2. Configure a repository on hub.docker.com for your image and configure automatic builds from GitHub.
3. SSH onto your EC2 instance and pull your image directly from Docker Hub.
An alternative is to start with step 1, then log into your EC2 instance using ssh and clone the repo on that machine. You can then build the image directly on a real Linux machine (your OSX machine doesn't run Linux, which is an instant mismatch with Docker). If you build it on the server you should be able to run it there with no problems.
Try running the container with CMD ["lscpu"], or something related like cat /proc/cpuinfo, and compare architectures.
Another thing: you might be pulling the arm architecture of the python image when building, and then trying to run it on x86_64 (EC2).
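A quick way to compare the two (myimage is a placeholder for your image):
uname -m                           # architecture of the host / EC2 instance
docker run --rm myimage uname -m   # architecture the image was built for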
In addition to what has been shared above, you could also build a multi-arch image with Buildx.
Basically, recent Docker versions come with a CLI command called buildx. You can use the buildx command on Docker Desktop for Mac and Windows to build multi-arch images, link them together with a manifest file, and push them all to a registry using a single command.
Here is what works for me:
Create a new builder, which gives access to the new multi-architecture features:
docker buildx create --name mybuilder --use
Build the Dockerfile with buildx, passing the list of architectures to build for:
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
=> pushing layers                                       2.7s
=> pushing manifest for docker.io/username/demo:latest  2.2s
Here, username is a valid Docker username.
Notes:
The --platform flag informs buildx to generate Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.
To inspect the resulting multi-arch image, use the command below:
docker buildx imagetools inspect username/demo:latest
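The output should list one manifest per requested platform; illustratively (digests and exact fields will differ):
Name:      docker.io/username/demo:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Manifests:
  Platform: linux/amd64
  Platform: linux/arm64
  Platform: linux/arm/v7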
I am trying to use Microsoft's CNTK library on a Mac; for this purpose I am using Docker. I am not an expert in all this, though, so I am having a hard time figuring out how to make it work.
From my understanding, Docker provides a way to run an app in a virtualized environment without having to virtualize the entire operating system. So you download (or create) images, and you run them in "containers".
Alright, so I have followed the required steps to make the cntk library work on Docker; and if I list the images, I find
$: docker images
REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
microsoft/cntk    latest   c2c192036e19   7 days ago      5.92 GB
ubuntu            14.04    7c09e61e9035   5 weeks ago     188 MB
hello-world       latest   48b5124b2768   2 months ago    1.84 kB
At this point I want to run one of the tutorials in the CNTK repository. I have downloaded the master branch of the CNTK repository to my desktop and try to run one of the examples in the "Tutorials" folder, but I get the following error:
terminal~ username$ docker run -w /Users/username/Desktop/CNTK-master/Tutorials microsoft/cntk configFile=lr_bs.cntk
container_linux.go:247: starting container process caused "exec: \"configFile=lr_bs.cntk\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"configFile=lr_bs.cntk\": executable file not found in $PATH".
ERRO[0001] error getting events from daemon: net/http: request canceled
terminal~ username$
Essentially I call docker run with the -w flag to tell it where the files are, but it does not work. I tried searching online but it's not clear to me how to solve the issue. Should I create a new image? Should I call the docker run command with different parameters?
The -w flag sets the working directory, which is just the default directory inside the container. Your directory is on the host, so it won't work here. Instead you need to use volumes to mount your host directory into the container. The final paragraph in the document you link has an example:
$ docker run --name cntk_container1 -ti -v /project1/data:/data -v /project1/config:/config cntk bash
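Applied to your setup, that might look like the following sketch (the /tutorials mount point is an assumption; the run inside the container mirrors the configFile=lr_bs.cntk invocation from your question):
docker run --name cntk_container1 -ti -v /Users/username/Desktop/CNTK-master/Tutorials:/tutorials microsoft/cntk bash
# then, inside the container's shell:
cd /tutorials
cntk configFile=lr_bs.cntk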
My docker image is tianon/gentoo-stage3:latest
My host system is CentOS 7 and my Docker version is Docker version 1.6.0, build 4749651.
When I run this image, I find I cannot use the rc-update command. ls -l /sbin/rc* shows an empty result.
I have no idea what package I need to install.
rc-update is provided by the sys-apps/openrc package. Why you don't have it is a mystery without knowing more about the image / setup. The image may be using systemd, but that doesn't necessarily rule out the openrc package being installed.
You should run: ps -p 1 -o command. That will give you an indication of your init system. If it says systemd, whatever you are trying to do with rc-update should probably be done with the systemctl command instead.
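Illustratively, the two cases would look roughly like this (hypothetical output; exact init paths vary by distro):
ps -p 1 -o command
# COMMAND
# /sbin/init              <- sysvinit / OpenRC
# /lib/systemd/systemd    <- systemd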
If you are indeed using sysvinit / openrc, I suggest you update your openrc package with emerge -a openrc. That will restore the rc-update command.