Issue with importing image to microk8s

Rarely, when importing a Docker tar file into microk8s using the command below, the command appears to finish successfully but the image is not imported. This has happened twice, on two different servers. When I rerun the same command, the image is imported correctly.
microk8s ctr image import file.tar
The OS is Ubuntu 20.04 LTS.
The command is performed from the tar file's directory or with a full path to the tar file.
I searched and didn't find any information about this issue.
Any idea what the problem is, or where to start digging?
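One place to start: check the command's exit status and explicitly verify that the image landed after each import (a sketch; the expected image name is a placeholder):
microk8s ctr image import file.tar || echo "import failed with status $?"
microk8s ctr images ls | grep <expected-image-name>   # confirm the image is actually present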

Related

Issue running chrome driver in docker image

I have written some scripts to extract data from a website using Selenium. Although my code runs well outside of a Docker container, it fails when I run it after building an image. I searched Google and looked around the internet for a similar issue but could not find one. Here is my Dockerfile:
# syntax=docker/dockerfile:1
FROM python:latest
WORKDIR /Users/ufomammut/Documents/eplrestapi
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN curl https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip -O
RUN unzip chromedriver_linux64.zip
COPY . .
CMD [ "python3", "epl.py"]
The error message I receive when I run the Docker image that I built:
Traceback (most recent call last):
File "/Users/ufomammut/Documents/eplrestapi/epl.py", line 172, in <module>
browser = webdriver.Chrome("./chromedriver",options = chr_options)
File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/chrome/webdriver.py", line 73, in __init__
self.service.start()
File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 98, in start
self.assert_process_still_running()
File "/usr/local/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 109, in assert_process_still_running
raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: Service ./chromedriver unexpectedly exited. Status code was: 255
In my code I have provided the path to the chromedriver as follows:
browser = webdriver.Chrome("./chromedriver",options = chr_options)
Right now I am using a Linux arm64 Python base image and was able to curl and unzip the chromedriver, as reflected in my Dockerfile above. I am no longer receiving the format error, but I am receiving the error message posted above, where it says the chromedriver unexpectedly exited.
You appear to have copied a chromedriver binary built for macOS into a Debian machine.
You've tagged this question with macos, and the python:3.9.5-slim-buster image you are using is amd64-based Debian.
I suggest you keep trying to curl the Linux 64-bit chromedriver into the image via your Dockerfile.
To test this from your host OS, rename your current chromedriver binary to chromedriver.backup, download the Linux version as chromedriver, and rebuild your Docker image; the COPY step will then copy the Linux version into the new image.
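Concretely, the host-side steps might look like this (a sketch; the image tag eplrestapi is a placeholder, and the chromedriver version is taken from the Dockerfile above):
# On the Mac host, in the project directory
mv chromedriver chromedriver.backup
curl -O https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip
unzip chromedriver_linux64.zip   # extracts a Linux 'chromedriver'
docker build -t eplrestapi .     # COPY . . now picks up the Linux binary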
Looks like you have been using the chromedriver binary for macOS instead of its Linux version.
To make it a 'workable' piece of code, you can use the Linux binary and perform volume mapping when you run your Docker image. Put the Linux chromedriver binary in a separate directory on your system, then map that directory when you run the image:
docker run -v <directory_path_to_chromedriver>:<docker_image_directory_path> <image>
Then update your code with the new binary file path:
browser = webdriver.Chrome("<docker_image_directory_path>/chromedriver",options = chr_options)
But for a permanent fix, curl and unzip the Linux binary while building the Docker image.
After going through the comments, I believe you're a little confused about Docker containers and how they interact with your system.
A Docker container runs its own independent, isolated and optimised version of an operating system determined by the FROM layer, and leverages your system's kernel.
Now, you must use Chrome Webdriver binaries compatible with linux arm64, as rightly pointed out by @Shubham and @lgflorentino. It seems that you've already done so.
Your next steps should be to check permissions on the Chrome Webdriver executable, and make sure that your container has set up all the environment paths and variables correctly.
You can do so by entering your container with docker exec -it <container_name> /bin/bash (or /bin/sh) and then manually executing your binaries. This will give you a clearer picture.
Check your environment variables as well!
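For example (a sketch; the container name is a placeholder, and the paths assume chromedriver sits in the WORKDIR):
# Enter the running container
docker exec -it <container_name> /bin/bash
# Inside the container: check format, permissions, and that the binary starts
file ./chromedriver        # should report an ELF binary matching the image architecture
ls -l ./chromedriver       # confirm the executable bit is set
./chromedriver --version   # should print a version rather than exit with status 255
echo $PATH                 # verify the environment paths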
Also, here is an improved Dockerfile with a reduced number of layers:
FROM python:latest
#Use a deterministic Python image, mention a version instead of 'latest'
WORKDIR /Users/ufomammut/Documents/eplrestapi
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt && \
curl https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip -O && \
unzip chromedriver_linux64.zip
COPY . .
#Try adding an ENV layer to make sure your paths are setup properly. (After debugging your container)
CMD [ "python3", "epl.py"]
Since you have tagged the question with macos and arm64, I believe your development machine has an Apple Silicon (M1) chip. The Docker engine on an Apple Silicon machine pulls ARM64-based images by default, and your chromedriver is linux64 (amd64). This means the chromedriver can't run inside the container.
You can either use an ARM64 chromedriver or build the Docker image for the amd64 platform.
docker build -t your-username/multiarch-example:manifest-amd64 --build-arg ARCH=amd64 .
Try adding --platform=linux/amd64 to the FROM statement. I used this for Debian:
FROM --platform=linux/amd64 python:3.9
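Alternatively, the platform can be forced at build and run time instead of being hard-coded in the Dockerfile (a sketch; the image name eplrestapi is a placeholder):
docker build --platform linux/amd64 -t eplrestapi .
docker run --rm --platform linux/amd64 eplrestapi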

Docker Mac - error running ls on large directory

I have a Docker for Mac host running an Ubuntu 19.10 guest. I am sharing a folder on my Mac filesystem that contains ~56,000 files totaling 230 MB. When I ls that directory from within a Docker container, it responds with:
Input/output error: <the directory name>
The folder is shared when I run the container via docker run -v /path/on/mac/fs:/path/in/guest -it my-image:latest /bin/bash.
When I ls the directory from a Mac shell, it works as expected.
Trying with a pared-down version of the directory (29k files) works as expected.
I have found a workaround: disabling "Use gRPC FUSE for file sharing" in the Docker dashboard. After that, ls works from within the container, but I still don't really understand the root of the problem.
56k files is not trivial, but I also wouldn't have guessed it was big enough to break Docker.
So my questions:
Why might this be happening? Is this actually a gRPC issue, or something else?
Is there any documentation on gRPC FUSE limits? I couldn't find any.
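For anyone wanting to reproduce or bisect this, a minimal sketch (assuming the failure depends mainly on file count, which matches the 29k-works / 56k-fails observation above):
# On the Mac host: generate a directory with ~56k small files
mkdir -p /tmp/manyfiles
for i in $(seq 1 56000); do : > "/tmp/manyfiles/f$i"; done
# Mount it and list it from inside a container
docker run --rm -v /tmp/manyfiles:/data ubuntu:19.10 ls /data > /dev/null && echo OK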

What's the default location for Zeppelin notebooks in the Windows file system?

On a fresh Win10 machine with fresh docker, the following command instantiates Zeppelin:
docker run -p 8080:8080 --rm --name zeppelin apache/zeppelin:0.8.1
... allowing me to create a new notebook using the GUI at http://localhost:8080/#/
... but where are these notebooks stored? What's the default path to their directory so that I can git init and get to work? With Jupyter there is a 'tree' showing clearly the location/path of all notebooks; I don't see one for Zep and an hour's Googling has not been informative.
The GUI's 'Notebook Repos' button doesn't seem to help.
With the help of this answer I've been able to view the specific .json file representing my notebook. This answers the question of where Zeppelin notebooks exist in Docker images, leaving unanswered the question of where they exist in the Windows filesystem when NOT using Docker.
If using Docker (as recommended by the Zeppelin docs) this is the recipe for finding a notebook file:
# first steps are taken in PowerShell
docker ps # displays a container id for use in the next line
docker commit <container_id_here> mysnapshot
docker run -t -i mysnapshot /bin/bash
# now in bash inside the Docker container
root@91f4bf850583:/zeppelin# ll notebook/2E6D1WBGT/note.json
... so (if using Docker) the best way to version-control a notebook is by using Docker versioning. This is quite different from Jupyter's approach, where one can run individual .ipynb notebooks locally and version-control each of them. Having answered my own question, I feel much better informed about Docker and the differences between Jupyter and Zeppelin. Very curious to know if anyone can solve the original question of where notebooks are stored when running on Windows WITHOUT Docker 🙏
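One practical alternative (a sketch, assuming the image keeps notebooks under /zeppelin/notebook, as the recipe above suggests; the Windows path is an example) is to mount a host directory over that path, so notebooks land on the Windows filesystem and can be version-controlled with git directly:
docker run -p 8080:8080 --rm --name zeppelin -v C:\Users\me\zeppelin-notebooks:/zeppelin/notebook apache/zeppelin:0.8.1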

How to run a library in docker - confusion

I am trying to use Microsoft's CNTK library on a Mac; for this purpose I am using Docker. I am not an expert in all this, though, so I am having a hard time figuring out how to make it work.
From my understanding, Docker provides a way to run an app in a virtualized environment, without having to virtualize the entire operating system. So you download (or create) images, and you run them in "containers".
Alright, so I have followed the required steps to make the cntk library work on Docker; and if I list the images, I find
$: docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
microsoft/cntk latest c2c192036e19 7 days ago 5.92 GB
ubuntu 14.04 7c09e61e9035 5 weeks ago 188 MB
hello-world latest 48b5124b2768 2 months ago 1.84 kB
At this point I want to run one of the tutorials that are in the cntk repository. I have downloaded the master branch of the cntk repository to my desktop and tried to run one of the examples in the "Tutorials" folder, but I get the following error:
terminal~ username$ docker run -w /Users/username/Desktop/CNTK-master/Tutorials microsoft/cntk configFile=lr_bs.cntk
container_linux.go:247: starting container process caused "exec: \"configFile=lr_bs.cntk\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"configFile=lr_bs.cntk\": executable file not found in $PATH".
ERRO[0001] error getting events from daemon: net/http: request canceled
terminal~ username$
Essentially I call docker run with the -w flag to tell it where the files are, but it does not work. I tried searching online, but it's not clear to me how to solve the issue. Should I create a new image? Should I call the docker run command with different parameters?
The -w flag sets the working directory, which is just the default directory inside the container. Your directory is on the host, so it won't work here. Instead you need to use a volume to mount your host directory into the container. The final paragraph in the document you link has an example:
$ docker run --name cntk_container1 -ti -v /project1/data:/data -v /project1/config:/config cntk bash
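Applied to your case, a sketch might look like this (the container-side path /data/Tutorials is an arbitrary choice, and the cntk invocation assumes the image puts the CNTK binary on the PATH; check the CNTK Docker docs for the exact entrypoint):
docker run -ti -v /Users/username/Desktop/CNTK-master/Tutorials:/data/Tutorials -w /data/Tutorials microsoft/cntk cntk configFile=lr_bs.cntk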

rc-update command can not be found in docker gentoo image

My Docker image is tianon/gentoo-stage3:latest.
My host system is CentOS 7, and my Docker version is 1.6.0, build 4749651.
When I run this image, I find I cannot use the rc-update command, and ls -l /sbin/rc* shows an empty result.
I have no idea what package I need to install.
rc-update is provided by the sys-apps/openrc package. Why you don't have it is a mystery without knowing more about the image / setup. The image may be using systemd, but that doesn't necessarily rule out the openrc package being installed.
You should run: ps -p 1 -o command. That will give you an indication of your init system. If it says systemd, whatever you are trying to do with rc-update should probably be done with the systemctl command instead.
If you are indeed using sysvinit / OpenRC, I suggest you update your openrc package with emerge -a openrc. That will restore the rc-update command.
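Put together, a diagnostic sequence inside the container might look like this (a sketch; qlist requires app-portage/portage-utils, and the package name follows the sys-apps/openrc mentioned above):
ps -p 1 -o command             # print the init process, e.g. systemd or init
qlist -I sys-apps/openrc       # check whether openrc is installed
emerge --ask sys-apps/openrc   # (re)install openrc to restore rc-update
rc-update show                 # verify rc-update works again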