wercker with debian and alpine box and diff - wercker

I have one wercker.yml with a Debian box and one with an Alpine box, both with curl installed. A diff command works fine with Debian, but with Alpine I get
diff: unrecognized option: old-line-format=
BusyBox v1.26.2 (2017-06-11 06:38:32 GMT) multi-call binary.
The diff should be the same. Any idea?

It turned out that Alpine ships only a very basic diff as part of BusyBox, so the --old-line-format option is not supported. I installed diffutils and everything worked fine.
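For reference, installing GNU diffutils on Alpine is a single apk step; a minimal Dockerfile sketch (the base image tag here is just an example):

```dockerfile
FROM alpine:3.18
# BusyBox's diff applet lacks GNU options such as --old-line-format,
# so install the full GNU diffutils (and curl, as in the question).
RUN apk add --no-cache diffutils curl
```

After this, diff resolves to the GNU version and options like --old-line-format are available.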

Related

Issue running chrome driver in docker image

I have written some scripts to extract data from a website using Selenium. Although my code runs well outside of a Docker container, it fails when I run it after building an image. I searched Google and around the internet but could not find a similar issue. Here is my Dockerfile,
# syntax=docker/dockerfile:1
FROM python:latest
WORKDIR /Users/ufomammut/Documents/eplrestapi
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN curl https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip -O
RUN unzip chromedriver_linux64.zip
COPY . .
CMD [ "python3", "epl.py"]
The error message I receive when I run the Docker image I built:
Traceback (most recent call last):
File "/Users/ufomammut/Documents/eplrestapi/epl.py", line 172, in <module>
browser = webdriver.Chrome("./chromedriver",options = chr_options)
File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/chrome/webdriver.py", line 73, in __init__
self.service.start()
File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 98, in start
self.assert_process_still_running()
File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 109, in assert_process_still_running
raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: Service ./chromedriver unexpectedly exited. Status code was: 255
In my code I have provided the path to the chromedriver as follows:
browser = webdriver.Chrome("./chromedriver",options = chr_options)
Right now I used a Linux arm64 Python base image and was able to curl and unzip the chromedriver as reflected in my Dockerfile above. I am no longer receiving the format error, but I now get the error message posted above, saying the chromedriver unexpectedly exited.
You appear to have copied a chromedriver binary built for macOS into a Debian machine.
You tagged this question with macos, and the image python:3.9.5-slim-buster you are using is amd64-based Debian.
I suggest you keep trying to curl the Linux 64-bit chromedriver into your image via your Dockerfile.
To test it in your host OS you can simply rename your current chromedriver binary to chromedriver.backup. Download the Linux version as chromedriver, rebuild your Docker image, and the build will copy the Linux version into the new image.
Looks like you have been using chromedriver binary for MacOS instead of its linux version.
To make it a 'workable' piece of code, you can use the Linux binary and perform volume mapping when you run your Docker image. Put the Linux chromedriver binary in a separate directory on your system that will be mapped, and when you run your image use:
docker run -v <directory_path_to_chromedriver>:<docker_image_directory_path> <image>
Your final piece of code will then point at the new binary path.
browser = webdriver.Chrome("<docker_image_directory_path>/chromedriver",options = chr_options)
But for a permanent fix, curl and unzip the Linux binary while you build the Docker image.
After going through the comments, I believe you're a little confused about Docker containers and how they interact with your system.
A Docker container runs its own independent, isolated and optimized version of an operating system determined by the FROM layer, and leverages your system's kernel.
Now, you must use Chrome WebDriver binaries compatible with Linux arm64, as rightly pointed out by @Shubham and @lgflorentino. It seems that you've already done so.
Your next steps should be to check permissions on the Chrome Webdriver executables, and make sure that your container has setup all the environment paths and variables correctly.
You can do so by entering your container with docker exec -it <container> /bin/bash (or /bin/sh) and then manually executing your binaries. This will give you a clearer picture.
Check your environment variables as well!
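One quick way to confirm which platform a chromedriver binary was actually built for is to inspect its file header. A minimal Python sketch (the helper name detect_binary_arch is my own, not part of Selenium or Docker):

```python
import struct

# e_machine values from the ELF spec (a little-endian u16 at offset 18)
ELF_MACHINES = {0x3E: "x86-64", 0xB7: "aarch64", 0x28: "arm"}

# Mach-O magic numbers as they appear on disk (64-bit little-endian,
# plus the "fat"/universal magic)
MACHO_MAGICS = (b"\xcf\xfa\xed\xfe", b"\xca\xfe\xba\xbe")

def detect_binary_arch(path):
    """Return a rough (format, arch) guess for an executable file."""
    with open(path, "rb") as f:
        head = f.read(20)
    if head[:4] == b"\x7fELF" and len(head) >= 20:
        machine = struct.unpack_from("<H", head, 18)[0]
        return ("ELF", ELF_MACHINES.get(machine, hex(machine)))
    if head[:4] in MACHO_MAGICS:
        return ("Mach-O", "macOS binary")
    return ("unknown", "unknown")
```

If detect_binary_arch("./chromedriver") reports Mach-O, the binary was copied from the Mac host; if it reports ELF x86-64 inside an arm64 image (or vice versa), the architecture is mismatched.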
Also, here is an improved Dockerfile with a reduced number of layers.
FROM python:latest
#Use a deterministic Python image, mention a version instead of 'latest'
WORKDIR /Users/ufomammut/Documents/eplrestapi
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt && \
curl https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip -O && \
unzip chromedriver_linux64.zip
COPY . .
#Try adding an ENV layer to make sure your paths are setup properly. (After debugging your container)
CMD [ "python3", "epl.py"]
Since you have tagged macos and arm64, I believe your development machine has an Apple Silicon (M1) chip. The Docker engine on an Apple Silicon machine pulls arm64-based images by default, and your chromedriver is linux64 (amd64), so the chromedriver can't run inside the container.
You can either use an arm64 chromedriver or build the Docker image for the amd64 platform:
docker build -t your-username/multiarch-example:manifest-amd64 --build-arg ARCH=amd64 .
Try adding the --platform=linux/amd64 to the FROM statement. I used this for Debian:
FROM --platform=linux/amd64 python:3.9

how to run amd64 docker images on arm64 host platform

I have an M1 Mac and I am trying to run an amd64-based Docker image on my arm64-based host platform. However, when I try to do so (with docker run) I get the following error:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested.
When I try adding the flag --platform linux/amd64 the error message doesn't appear, but I can't seem to get into the relevant shell, and docker ps -a shows that the container exits immediately upon starting. Would anyone know how I can run this exact image on my machine given the circumstances, or how to make the --platform flag work?
Using --platform is correct. On my M1 Mac I'm able to run both arm64 and amd64 versions of the Ubuntu image from Docker Hub. The machine hardware name provided by uname proves it.
# docker run --rm -ti --platform linux/arm/v7 ubuntu:latest uname -m
armv7l
# docker run --rm -ti --platform linux/amd64 ubuntu:latest uname -m
x86_64
Running amd64 images is enabled by Rosetta2 emulation, as indicated here.
Not all images are available for ARM64 architecture. You can add --platform linux/amd64 to run an Intel image under emulation.
If the container is exiting immediately, that's a problem with the specific container you're using.
To address the problem of your container immediately exiting after starting, try using the entrypoint flag to overwrite the container's entry point. It would look something like this:
docker run -it --entrypoint=/bin/bash image_name
Credit goes to this other SO answer that helped me solve a similar issue on my own container.

Docker run armv7 images on mac

My Mac uses x86_64 hardware, and in theory I shouldn't be able to run Docker images built for armv7.
HOWEVER
Docker documentation says:
Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.
This does not require any special configuration in the container itself as it uses qemu-static from the Docker for Mac VM.
and I'm also reading articles like this one which confirm the above
docker run -it --rm arm32v7/debian /bin/bash
should work on a mac although it doesn't work for me:
Unable to find image 'arm32v7/debian:latest' locally
latest: Pulling from arm32v7/debian
Digest: sha256:9b61eaedd46400386ecad01e2633e4b62d2ddbab8a95e460f4e0057c612ad085
Status: Image is up to date for arm32v7/debian:latest
docker: Error response from daemon: image with reference arm32v7/debian was found but does not match the specified platform cpu architecture: wanted: amd64, actual: arm.
See 'docker run --help'.
I wonder whether I'm misunderstanding something.
Docker desktop community version 2.4.2.0 (48975) edge
Docker version 20.10.0-beta1, build ac365d7
MacOS version 10.15.7 (19H2)
Note: while researching the topic I tried to use qemu and ran:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
which has potentially interfered with the default behaviour.
I think my problem is related to this moby issue.
The fix was quite trivial as I only needed to add the --platform argument, in my case linux/arm or linux/arm/v7:
docker run -it --rm arm32v7/debian /bin/bash
has become
docker run --platform=linux/arm -it --rm arm32v7/debian /bin/bash
and voila:
root@82c3ff8752d3:/# uname -a
Linux 82c3ff8752d3 5.4.39-linuxkit #1 SMP Fri May 8 23:03:06 UTC 2020 armv7l GNU/Linux

How can I resolve the error oci runtime error: exec: no such file or directory when using docker run on Windows

When running a Docker command such as
docker run ubuntu /bin/echo 'Hello world'
used in the in the starter example docs on the Learn by Example page of the Docker docs I see the error
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: oci runtime error: exec: "C:/Program Files/Git/usr/bin/bash": stat C:/Program Files/Git/usr/bin/bash: no such file or directory.
How can I resolve this?
This error could be caused by the setup on your system, including MinGW (you might see this if you have installed Git for Windows with MSYS2, for example; see here for more information). The path is being converted; to stop this you can use a double slash // before the command. In this example you can use
docker run ubuntu //bin/echo 'Hello world'
(notice the double slash (//) above). If all goes well you should now see
Hello world
A complete and slightly more complex example is starting an interactive Ubuntu shell
docker run -it -v /$(pwd)/app:/root/app ubuntu //bin/bash
Note that in my case using Git Bash I only needed one extra slash because echo $(pwd) on my machine expands to:
/c/Users/UserName/path/to/volume/mount
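As an alternative to the double slash, Git for Windows' MSYS2 runtime honors the MSYS_NO_PATHCONV environment variable, which disables the path conversion for a single invocation (this variable is specific to Git for Windows, not upstream MSYS2):

```shell
# Disable MSYS path mangling for just this command
MSYS_NO_PATHCONV=1 docker run ubuntu /bin/echo 'Hello world'
```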
As another example, the following can be used if zip is not available (as is the case on Windows 10 as well as Git Bash) and you cannot easily zip a file for something like an AWS Lambda function (actually there are a few ways to do it without Docker, or even without installing third-party software, if you prefer). If you want to zip the app folder under your current directory, use this:
docker run -it -v /$(pwd)/app:/root/app mydockeraccount/dockerimagewithzip //usr/bin/zip -r //root/app/test1.zip //root/app
The mydockeraccount/dockerimagewithzip can be built by creating a Dockerfile like this:
FROM ubuntu
RUN apt-get update && apt-get install -y zip
Then run:
docker build -t mydockeraccount/dockerimagewithzip .

rc-update command can not be found in docker gentoo image

My docker image is tianon/gentoo-stage3:latest
My host system is CentOS 7 and my Docker version is 1.6.0, build 4749651.
When I run this image I find I cannot use the rc-update command; ls -l /sbin/rc* shows an empty result.
I have no idea what package I need to install.
rc-update is provided by the sys-apps/openrc package. Why you don't have it is a mystery without knowing more about the image / setup. The image may be using systemd, but that doesn't necessarily rule out the openrc package being installed.
You should run: ps -p 1 -o command. That will give you an indication of your init system. If it says systemd, whatever you are trying to do with rc-update should probably be done with the systemctl command instead.
If you are indeed using sysvinit/openrc, I suggest you update your openrc package with emerge -a openrc. That will restore the rc-update command.
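Inside the container, the check and fix described above look roughly like this (assuming an openrc-based Gentoo image; exact output will vary):

```shell
# What is PID 1? "init" suggests sysvinit/openrc; "systemd" means
# you should be using systemctl instead of rc-update.
ps -p 1 -o command

# Reinstall openrc to restore /sbin/rc-update
emerge -a sys-apps/openrc

# Verify that rc-update works again
rc-update show
```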
