Can you convert/build a Docker image into a full OS image?

I have built a Docker container with some code for deployment. However, I've realised that, given the structure of the project I'm working on, it would be more suitable to deploy a full ISO image instead of running Docker on top of a cloud VM running stock Debian, which adds an unnecessary layer of virtualization.
I know that Docker containers are meant to be deployed on Kubernetes, but before diving into that route, is there a simple way to convert a Debian 9 Docker image into a full Debian 9 OS image? Like the opposite of docker import?

You can convert your Docker container to a full OS image. An example Debian Dockerfile would be:
FROM debian:stretch
RUN apt-get update && apt-get -y install --no-install-recommends \
    linux-image-amd64 \
    systemd-sysv
In principle, you have to install a kernel and an init system.
Full instructions can be found on GitHub.
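If you want to see the mechanics end to end, here is a minimal sketch of turning such an image into a raw disk image. It is illustrative only: mydebianimage, the size, and the mount point are assumptions, and making the result actually boot (partition table, bootloader, fstab, root password) is exactly the fiddly part the full instructions cover:
# carve out an empty raw disk and put a filesystem on it
truncate -s 2G disk.img
mkfs.ext4 -F disk.img
mkdir -p /mnt/rootfs
sudo mount -o loop disk.img /mnt/rootfs
# unpack the container filesystem into it (mydebianimage is hypothetical)
docker export $(docker create mydebianimage) | sudo tar -xf - -C /mnt/rootfs
sudo umount /mnt/rootfs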

Docker images don't contain a Linux kernel and aren't configured to do things like run a full init system or go out and get their network configuration, so this won't work well.
Put differently: the same mismatch that makes docker import work poorly (the resulting container has too much stuff in it and isn't set up for Docker) also means you can't export a container or image into a VM and expect it to work in that environment.
But! If you've written a Dockerfile for your image, that's very close to saying "I'm going to start from this base Linux distribution and run this shell script to get an image". Tools like Packer are set up to take a base Linux VM image, run some sort of provisioner, and create a new image. If you have a good Docker setup already, and decide a VM is a better fit, you've probably done a lot of the leg-work already toward an automated build of your VM image.
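For illustration, a minimal Packer sketch along those lines might look like the following. Everything in it is a placeholder assumption (the builder, region, AMI filter, and provision.sh, which would carry the same steps as your Dockerfile's RUN lines), not a drop-in config:
source "amazon-ebs" "debian" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  ami_name      = "myapp-${formatdate("YYYYMMDDhhmm", timestamp())}"
  ssh_username  = "admin"
  source_ami_filter {
    filters     = { name = "debian-11-amd64-*" }
    owners      = ["136693071363"]   # Debian's AWS account; verify for your region
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.debian"]
  provisioner "shell" {
    script = "provision.sh"   # hypothetical: same steps as your Dockerfile RUNs
  }
}
Run packer init . once to fetch plugins, then packer build . to produce the image.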

#info
We can use Docker like a VM / Flatpak / AppImage / Termux, with GUI, audio and hardware acceleration :)
How to do it?
1. Remove old Docker packages:
sudo apt-get remove docker docker-engine docker.io containerd runc
===================
2. Install the prerequisite components:
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
=============================
3. Add the Docker repository:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
================
4. Install Docker:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
=============================
5. Create the Docker image.
As root, open a Dockerfile in an editor (kwrite is just an example):
su
cd /home/
kwrite Dockerfile
FROM debian:testing
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install midori bash mpv pulseaudio pulseaudio-utils \
    pulsemixer pulseaudio-module-jack apulse neofetch vlc smplayer wget sudo \
    cairo-dock cairo-dock-plug-ins xfce4 xfce4-goodies falkon kde-full \
    xfce4-whiskermenu-plugin gnome tigervnc-standalone-server openssh-server \
    openssh-client network-manager net-tools iproute2 gerbera openjdk-11-jdk \
    mediainfo dcraw htop gimp krita libreoffice python3-pip terminator uget alsa-utils
ENV PULSE_SERVER=tcp:host.docker.internal:4713
CMD bash
Save it, then run in a terminal:
sudo docker build -t supersifobian .
How to use a GUI: allow local X connections on the host:
xhost +
and install the X helper packages:
sudo apt-get -y install xpra xserver-xephyr xinit xauth \
    xclip x11-xserver-utils x11-utils
Run the container:
sudo docker run -ti --net=host --privileged --cap-add=ALL \
    --device=/dev/dri:/dev/dri --device /dev/snd \
    -e DISPLAY=:0 -e PULSE_SERVER=tcp:$PULSE_SERVER \
    -v /dev:/dev --group-add audio \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
    -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
    -v /media:/host/media:ro -v /home:/host/home:ro \
    <image id or name>
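Most of those flags aren't strictly needed. A pared-down sketch that is usually enough for X11 plus PulseAudio (assuming the host display is :0, reachable via the shared network namespace from --net=host, and PULSE_SERVER is already exported) would be:
sudo docker run -ti --net=host \
    -e DISPLAY=:0 -e PULSE_SERVER=tcp:$PULSE_SERVER \
    --device=/dev/dri --device=/dev/snd --group-add audio \
    supersifobian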
NB:
Enter the container:
docker exec -it <container name or id> bash
Add a user:
adduser <username>
Let the user use sudo:
usermod -aG sudo <username>
Add the user to the audio group:
usermod -aG audio <username>
=========================
Audio in the Docker CLI, using paprefs:
a. Install paprefs on the host:
apt-get install paprefs
b. Choose the network option in paprefs (enable network access to local sound devices).
c. To find out the PulseAudio address and port, type:
pax11publish
d. Export it in a terminal inside Docker:
export "PULSE_SERVER=tcp:192.168.43.135:37721"
==================================
Save the container as an image:
docker commit <container id> <image name>:<version>
========================
Check the images:
docker images
==================================
thanks :)

Related

Docker image based on Playwright image runs on my Mac but doesn't run on Ubuntu server

When I run this image on my Mac with an M1 chip, everything is OK.
But when I try to run it on a server with Ubuntu, the container stops with the error "exec /bin/sh: exec format error"
FROM mcr.microsoft.com/playwright:v1.18.1-arm64
RUN apt-get -y update && apt-get -y upgrade
ADD build/libs/program.jar /tmp
WORKDIR /tmp
RUN apt-get -y install openjdk-11-jre-headless && apt-get clean;
CMD java -jar program.jar
The error shows up on the very first RUN command, whatever it is; even if the command is just RUN ls -la, I get "/bin/sh -c ls -la returned a non-zero code: 1".
I tried changing to SHELL ["/bin/bash","-c"] and changing the image version, but there was no effect.
If I use FROM ubuntu, commands work, but I need exactly the Playwright image with the browser dependencies.
You are building an image with the ARM architecture (check with docker inspect <your_image> | grep "Archi"). This image cannot be executed on another architecture (probably amd64 for your Ubuntu server).
You should either:
use an amd64 base image (mcr.microsoft.com/playwright:v1.18.1-arm64 => mcr.microsoft.com/playwright:v1.18.1-focal, for example), or
build your image with docker build --platform linux/amd64.
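Concretely, a sketch using the same placeholder image name as above (--platform needs BuildKit, and building amd64 on an M1 Mac runs under emulation, so it is slower):
# check the architecture of what you built
docker inspect --format '{{.Architecture}}' <your_image>
# rebuild for the server's architecture
docker build --platform linux/amd64 -t <your_image> .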

build docker container from command line (windows)

I'd like to build a docker container from the command line only, on Windows.
On Linux it works like this:
docker build -t tcpdump - <<EOF
FROM ubuntu
RUN apt-get update && apt-get install -y <packages here>
EOF
Any ideas how to port it to windows?
I believe you can use this (in PowerShell, where echo is an alias for Write-Output and a double-quoted string can span lines):
ECHO "FROM python:3
RUN pip install requests" | docker build -t yourimage:tag -
Please take a look at this doc as well.
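If you're in plain cmd.exe rather than PowerShell, there are no multi-line strings, but chaining echo commands in a grouped pipeline achieves the same thing (a sketch of standard cmd behaviour, reusing the image tag above):
(echo FROM python:3 & echo RUN pip install requests) | docker build -t yourimage:tag -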

Unable to open remote display on Mac when running Docker

I have a Dockerfile written as below:
FROM joesan/raspi_opencv_3:latest
RUN apt-get update
RUN sudo apt-get install -y --no-install-recommends xserver-xorg
RUN sudo apt-get install -y --no-install-recommends xinit
RUN apt-get install -qqy x11-apps
RUN mkdir -p /raspi_motion_detection/project
WORKDIR /raspi_motion_detection/project
COPY ./ $WORKDIR/
COPY ./requirements.txt $WORKDIR/
ADD . $WORKDIR
CMD xclock
I have a Raspberry Pi to which I ssh from my Mac (running High Sierra).
Here is what I do:
I ssh into the RaspPi from my Mac
I execute the docker command using:
docker run -ti --device=/dev/vcsm \
--device=/dev/vchiq \
-e DISPLAY=$DISPLAY:0 \
-e XAUTHORITY=/.Xauthority \
-v /tmp/.X11-unix:/tmp/.X11-unix \
joesan/motion_detector
I get an error message as below:
Error: Can't open display: localhost:11.0:0
But when I just run xclock directly on the ssh terminal, I can see that the xclock window opens up.
So I cannot understand why running xclock from within a Docker container fails to open the display. Any ideas? I also came across the post below and followed what is described there, but I could not get it to work!
https://medium.com/@dimitris.kapanidis/running-gui-apps-in-docker-containers-3bd25efa862a
A bit simplified: each Docker container runs inside the Docker daemon, which basically provides a stripped-down OS to each container. That OS has no window manager.
That is why the command xclock inside a Docker container exits with an error.
When you connect via ssh to your Raspberry Pi and call xclock, it is executed inside the Raspberry's OS (probably Raspbian), which has a running window manager.
Ok! So I think I found the solution to my problem! Here is what I did!
Re-installed Raspbian Stretch Lite on my SD card. You can skip this step, but for me there were some corrupt files on the old installation, so I decided to get a fresh install!
On my Raspberry Pi, run the following command:
xauth list
I copied the cookie locally to a text editor, as I need it later!
Removed the xclock command from the Dockerfile that I originally had!
Ran the image using the following command:
docker run -it --net=host --device=/dev/vcsm --device=/dev/vchiq -e DISPLAY -v /tmp/.X11-unix joesan/motion_detector bash
Notice that I'm appending bash to the docker run so that I can get a bash prompt from the running image!
That gives me a bash prompt inside the container, where I now need to install xauth:
apt-get install xauth
I then add the xauth cookie that I copied from the xauth list step; a sketch of that is below.
It is after this that, bang, I got what I wanted!
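For completeness, the cookie step looks something like this inside the container. The display name and hex cookie are placeholders; use whatever your own xauth list printed on the Pi:
xauth add raspberrypi/unix:10 MIT-MAGIC-COOKIE-1 0123456789abcdef0123456789abcdef
export DISPLAY=:10
xclock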

How to access /var/run/docker.sock from inside a docker container as a non-root user? (MacOS Host)

I have installed docker on Mac and everything is running fine. I am using a Jenkins docker image and running it. While using Jenkins as a CI server and to build further images by running docker commands through it, I came to know that we have to bind mount /var/run/docker.sock while running the Jenkins images so it can access the docker daemon.
I did that, and installed docker CLI inside Jenkins’s container. But when running docker ps or any other docker commands it is throwing an error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.28/containers/json: dial unix /var/run/docker.sock: connect: permission denied
When I connect to the container as the root user, it works fine. But switching to the 'jenkins' user throws the above error. I have already added 'jenkins' to the sudo list, but it does not help.
I found a few articles suggesting adding the 'jenkins' user to a 'docker' group, but to my surprise I do not find any docker group on the Mac or inside the container.
Any help is much appreciated. Thanks
It looks like the reason this is happening is pretty straightforward: UNIX permissions are not letting the jenkins user read /var/run/docker.sock. Really the easiest option is to just change the group assignment on /var/run/docker.sock from root to another group, and then add jenkins to that group:
[as root, inside the container]
root@host:/# usermod -aG docker jenkins
root@host:/# chgrp docker /var/run/docker.sock
This assumes of course that you already have the docker CLI installed, and that a group called docker exists. If not:
[as root, inside the container]
root@host:/# groupadd docker
Alternatively, you could change the world permissions on /var/run/docker.sock to allow non-root users to access the socket, but I wouldn't recommend doing that; it just seems like bad security practice. Similarly, you could outright chown the socket to the jenkins user, although I'd rather just change the group settings.
I'm confused why using sudo didn't work for you. I just tried what I believe is exactly the setup you described and it worked without problems.
Start the container:
[on macos host]
darkstar:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
docker.io/jenkins/jenkins:lts
darkstar:~$ docker exec -u root -it <container id> /bin/bash
Install Docker CLI:
[as root, inside container]
root@host:/# apt-get update
root@host:/# apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
root@host:/# rel_id=$(. /etc/os-release; echo "$ID")
root@host:/# curl -fsSL https://download.docker.com/linux/${rel_id}/gpg > /tmp/dkey
root@host:/# apt-key add /tmp/dkey
root@host:/# add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/${rel_id} \
$(lsb_release -cs) stable"
root@host:/# apt-get update
root@host:/# apt-get -y install docker-ce
Then set up the jenkins user:
[as root, inside container]
root@host:/# usermod -aG sudo jenkins
root@host:/# passwd jenkins
[...]
And trying it out:
[as jenkins, inside container]
jenkins@host:/$ sudo docker ps -a
[...]
password for jenkins:
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 jenkins/jenkins:lts "/sbin/tini -- /usr/…" 8 minutes ago ...
It seems to work fine for me. Maybe you took a different route to install the Docker CLI? Not sure, but if you want to access the docker socket using sudo, those steps will work. Although I think it would be easier to just change the group assignment as explained above. Good luck :)
Note: All tests performed using macOS Mojave v10.14.3 running Docker Engine v19.03.2. This doesn't seem to be heavily dependent on the host platform, so I would expect it to work on Linux or any other UNIX-like OS, including other versions of macOS/OSX.
No, but this works:
Add the user (e.g. jenkins) to the staff group: sudo dseditgroup -o edit -a jenkins -t user staff
Allow the group to sudo; in sudo visudo add:
%staff ALL = (ALL) ALL

Docker - container with multiple images

I wanted to make a Dockerfile with multiple images to run in one container.
What is the best method to go about this? Below is a list of what I want to run in a single container. I have not had any luck making a Dockerfile with all of these included.
MySQL Server
RabbitMQ
Java8
Node.js
Xvfb
Firefox
Chrome
This is what I have so far; can I get a few tips?
FROM stackbrew/ubuntu:12.04
MAINTAINER
# Update the repository sources list
#RUN apt-get update
# MySQL Server ###############
RUN apt-get update -qq && apt-get install -y mysql-server-5.5
ADD my.cnf /etc/mysql/conf.d/my.cnf
RUN chmod 664 /etc/mysql/conf.d/my.cnf
ADD run /usr/local/bin/run
RUN chmod +x /usr/local/bin/run
VOLUME ["/var/lib/mysql"]
EXPOSE 3306
CMD ["/usr/local/bin/run"]
You cannot have "multiple images to run in one container"; that wouldn't make sense.
But you can write a Dockerfile to create an image that installs all the services you mentioned. Example (Ubuntu/Debian distribution):
[...header...]
# or use ubuntu-upstart:12.04
FROM stackbrew/ubuntu:12.04
MAINTAINER BPetkov
# Update the repository sources list
RUN apt-get update -qq
# Mysql
RUN apt-get install -y mysql-server-5.5
ADD my.cnf /etc/mysql/conf.d/my.cnf
RUN chmod 664 /etc/mysql/conf.d/my.cnf
ADD run /usr/local/bin/run
RUN chmod +x /usr/local/bin/run
# Other stuff
RUN apt-get -y install rabbitmq-server
RUN apt-get -y install nodejs
[...]
VOLUME ["/var/lib/mysql"]
EXPOSE 3306
EXPOSE .......
CMD ["/sbin/init"]
Then you would have to get all of them started automatically when the container starts.
You can use a process manager such as supervisord (Docker documentation here).
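A supervisord setup might look roughly like this (the program names and paths are illustrative, not tested against those exact packages):
# /etc/supervisor/conf.d/services.conf, added to the image with ADD/COPY
[supervisord]
nodaemon=true

[program:mysql]
command=/usr/bin/mysqld_safe

[program:rabbitmq]
command=/usr/sbin/rabbitmq-server
and in the Dockerfile:
RUN apt-get -y install supervisor
CMD ["/usr/bin/supervisord", "-n"]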
Alternatively, you could use a regular init system; check this base image: ubuntu-upstart. It lets you simply install the packages in your Dockerfile and have them started automatically with no extra effort, by specifying /sbin/init as the ENTRYPOINT or CMD in your Dockerfile.
The feature you're looking for is Docker Compose.
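With Compose, each service stays in its own container built from its own image; a minimal sketch (service names and image tags are placeholders matching the list above):
# docker-compose.yml
version: "2"
services:
  mysql:
    image: mysql:5.5
  rabbitmq:
    image: rabbitmq:3
  app:
    build: .
    depends_on:
      - mysql
      - rabbitmq
Then docker-compose up starts them all together.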
