Best approach for implementing Docker inheritance [duplicate]

Is it possible with Docker to combine two images into one?
Like this here:
genericA --
            \
             ---> specificAB
            /
genericB --
For example there's an image for Java and an image for MySQL.
I'd like to have an image with Java and MySQL.

No, you can only inherit from one image.
You probably don't want Java and MySQL in the same image anyway: it's more idiomatic to have a single component per container, i.e. create a separate MySQL container and link it to the Java container rather than putting both into the same container (see the Compose sketch below).
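For illustration, a minimal docker-compose sketch of that two-container setup, using Compose's default network in place of legacy links; the image tag, service names and environment variables are assumptions, not taken from the original answer:
version: "2"
services:
  app:
    build: .              # your Java application image
    depends_on:
      - db
    environment:
      DB_HOST: db         # the app reaches MySQL via the service name
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example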
However, if you really must have them in the same image, write a Dockerfile with Java as the base image (FROM statement) and install MySQL in that Dockerfile. You should be able to largely copy the statements from the official MySQL Dockerfile.
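A minimal sketch of such a combined Dockerfile, assuming a Debian-based Java base image; the base tag, package name and CMD paths are illustrative rather than copied from the official MySQL Dockerfile:
FROM openjdk:8-jdk
# Install the MySQL server from the distribution packages
RUN apt-get update \
    && apt-get install -y --no-install-recommends mysql-server \
    && rm -rf /var/lib/apt/lists/*
# A container runs one command, so something still has to start both
# processes; the jar path here is hypothetical
CMD ["sh", "-c", "service mysql start && java -jar /app/app.jar"]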

Docker doesn't directly support this, but you can use DockerMake (full disclosure: I wrote it) to manage this sort of "inheritance". It uses a YAML file to set up the individual pieces of the image, then drives the build by generating the appropriate Dockerfiles.
Here's how you would build this slightly more complicated example:
                              --> genericA --
                             /               \
debian:jessie --> customBase                  ---> specificAB
                             \               /
                              --> genericB --
You would use this DockerMake.yml file:
specificAB:
  requires:
    - genericA
    - genericB

genericA:
  requires:
    - customBase
  build_directory: [some local directory]
  build: |
    # Dockerfile commands go here, such as
    ADD installA.sh ./
    RUN ./installA.sh

genericB:
  requires:
    - customBase
  build: |
    # Here are some other commands you could run
    RUN apt-get install -y genericB
    ENV PATH=$PATH:something

customBase:
  FROM: debian:jessie
  build: |
    RUN apt-get update && apt-get install -y build-essential
After installing the docker-make CLI tool (pip install dockermake), you can then build the specificAB image just by running
docker-make specificAB

If you use docker commit, it is not easy to see which commands were used to build your image; you have to run docker history <image>.
If you have a Dockerfile, you can just look at it and see how the image was built and what it contains.
docker commit is 'by hand', and therefore error-prone; docker build with a working Dockerfile is much better.
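For example, to reconstruct how a committed image was built you would have to walk its layers (the image name here is a placeholder):
docker history --no-trunc my-committed-image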

You can put multiple FROM instructions in a single Dockerfile, but note that this does not merge images: each FROM starts a new build stage, and only files you explicitly copy with COPY --from are carried into the final image.
https://docs.docker.com/reference/builder/#from
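A minimal multi-stage sketch of that behaviour (the stage name, base images and paths are illustrative, not from the linked docs):
# First stage: build the application
FROM node:14 AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Second stage: only the copied build output ends up in the final image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html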

Related

Modify the Docker image and save it so the content is not lost when building the container again. How?

Hi, I need to modify a Docker image from the Autoware_AI repository after building it. The problem is:
A) I build the image running a .sh file:
cd $WORKING_DIRECTORY/docker/generic
./run.sh -t 1.14.0
It is specifically from Autoware: https://www.svlsimulator.com/docs/system-under-test/autoware-instructions/
B) I modify the scripts contained inside the packages in the Autoware folder
C) When I exit the container and enter it again later, the modifications are not there anymore, of course, because the image is built from the Dockerfile from scratch...
To find a solution I have tried 2 different approaches:
To modify the container and save it as described here: https://www.scalyr.com/blog/create-docker-image/
Issue: when using another terminal to copy a .txt file into the running Autoware_AI container (to test modifying it), the Autoware_AI container does not appear as active (but it is). Only other containers show up when I try to copy a file to Autoware_AI:
Commit Changes To a Docker image: https://phoenixnap.com/kb/how-to-commit-changes-to-docker-image
Issue: problems connecting to the Autoware_AI server and running the ROS packages. This problem does not happen when building the original Dockerfile with the .sh script.
The complete description of my problem, as well as the terminal output of my attempts, is given here:
https://answers.ros.org/question/376512/fork-autoware_ai-repository-and-create-docker-image/?answer=376583#post-id-376583
https://get-help.robotigniteacademy.com/t/fork-autoware-ai-repository-and-create-docker-image/9533/4
I am kind of new to forking and changing Docker images. I do not understand how to fix this and find a solution to create my custom Docker image and make it functional.
Thanks very much in advance!
As David Maze suggested, one feasible way would be to change the Dockerfile and then build my image from it. It is a good idea. In my case, however, additional steps were required, because I had a build.sh script that called different Dockerfiles to build the image, and this build.sh file also installed some ROS packages and other dependencies.
Even though the build.sh also needed to change, the modifications to the Dockerfile solved 90% of the issue. My Dockerfile was:
(screenshot of the original Dockerfile, not reproduced here)
After the modification, the Dockerfile became:
(screenshot of the modified Dockerfile, not reproduced here)
To download the modified ROS code into my image, instead of the autoware repo ROS pkgs, I needed to:
1 - Copy the autoware.ai.repos file from here:
https://raw.githubusercontent.com/Autoware-AI/autoware.ai/1.14.0/autoware.ai.repos
to my local docker folder (docker/generic) and unwrap the repositories with the vcs import command, as the Dockerfile above shows...
2 - Edit autoware.ai.repos to change the address of some of the repositories it contains to my personal GitHub:
I removed the lines:
autoware/visualization:
  type: git
  url: https://github.com/Autoware-AI/visualization.git
  version: 1.14.0
And replaced them with:
autoware/visualization:
  type: git
  url: https://github.com/marcusvinicius178/visualization
Afterwards I followed the build instructions in case 3 here: https://github.com/Autoware-AI/autoware.ai/wiki/Generic-x86-Docker#run-an-autoware-docker-container
$ ./build.sh
$ ./run.sh -t local
I know there may be a more professional way to work with Docker images and build a new Dockerfile based on the original one, but I am not that expert in Docker, and this way my problem was solved.

Docker Build/Deploy using Bash Script

I have a deploy script that I am trying to use on my server for CD, but I am running into issues writing the bash script to complete some of the required steps, such as running npm and the migration commands.
How would I go about getting into the container's bash from this script, running the commands below, and then exiting so the script can finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like:
FROM node
WORKDIR /app
COPY package*.json ./
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
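Concretely, the anti-pattern to delete looks something like this in docker-compose.yml (the service name and paths are illustrative):
services:
  web:
    build: .
    volumes:                 # <-- delete this whole block
      - .:/app               # bind mount that hides the image's code
      - /app/node_modules    # anonymous volume that keeps stale packages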
Database migrations are a little trickier, since you can't run them during the image-build phase. There are two good approaches. One is to have the container always run migrations on startup, using an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$#"
Make this script executable and set it as the image's ENTRYPOINT, leaving CMD as the command that actually starts the application. On every container startup it will run migrations and then run the main container command, whatever it may be.
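Wired into the Dockerfile above, that might look like this (the entrypoint script name is an assumption):
# entrypoint.sh runs migrations, then exec's the CMD it receives as "$@"
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["node", "index.js"]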
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to run migrations by hand. You can do that separately from the main container lifecycle with:
docker-compose run web \
  python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate

How do I uninstall Docker packages?

I wanted to install CVAT for training an object-detection AI using Docker. The install failed partway through for some reason, so CVAT wasn't installed, but all the files were still occupying space on my machine. I tried reinstalling CVAT and the files keep adding to the occupied space. How do I remove all of these files? I am using a MacBook Pro with macOS Big Sur Beta 4.
Edit: https://github.com/opencv/cvat/blob/develop/cvat/apps/documentation/installation.md#mac-os-mojave
These are the commands I am running to install CVAT.
docker-compose build output: https://pastebin.com/7EkeQ289
docker-compose up -d output: https://pastebin.com/hF3GFDkX
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser' output: https://pastebin.com/Mfh8CivL
If you are trying to remove the containers, attempt the following:
1. docker ps -a - lists all containers
2. docker stop [label or SHA of the containers you want to remove]
3. docker-compose down [YAML configuration file you targeted with docker-compose up] - this should stop all containers, tear down networks, etc. that docker-compose started with 'up'
4. docker container prune - removes all stopped containers
NOTE: if you have other stopped containers that you want to keep, do not run this; instead remove the unwanted ones individually, as in step 2 above, or per Konrad Botor's comment.
https://docs.docker.com/compose/reference/down/
https://docs.docker.com/engine/reference/commandline/container_prune/
If you want to remove the images:
docker images
docker rmi [label or SHA] (RMI is the remove image command)
https://docs.docker.com/engine/reference/commandline/images/
https://docs.docker.com/engine/reference/commandline/rmi/
To speed up this process, analyze the YAML configuration file targeted by your docker-compose build command, and/or consult the documentation for the specific project (CVAT), if available, to determine which containers (software) it starts (and how, if necessary). It might help to paste its contents into the question.
Note: what is taking up space may be volumes that are not cleaned up properly by the project's Docker build scripts. See the following documentation on how to remove those:
https://docs.docker.com/engine/reference/commandline/volume_rm/
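Putting those together, a cleanup sequence along these lines should reclaim the space (run it from the directory containing CVAT's docker-compose.yml; the exact volume names will differ, so check docker volume ls first):
docker-compose down        # stop and remove the compose containers and networks
docker container prune     # remove any remaining stopped containers
docker image prune -a      # remove images not used by any container
docker volume ls           # list the volumes left behind
docker volume rm <volume>  # remove the ones you identified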
I might be missing some context, as I cannot access your pastebin links (I am behind a firewall at the moment).

Unable to find docker image locally

I was following this post - the reference code is on GitHub. I have cloned the repository to my local machine.
The project has a React app inside it. I'm trying to run it locally, following step 7 of the same post:
docker run -p 8080:80 shakyshane/cra-docker
This returns:
Unable to find image 'shakyshane/cra-docker:latest' locally
docker: Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'.
See 'docker run --help'.
I tried logging in to Docker again, but it looks like since the image belongs to @shakyshane I cannot access it.
I idiotically tried npm start too, but it's not a simple React app running on Node; it runs in a container, and containers are not controlled by npm.
It looks like docker pull shakyshane/cra-docker:latest throws this:
Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'
So the question is: how do I run this Docker image on my local Mac machine?
Well, this is illogical, but I'm still sharing it so future people like me don't get stuck.
The problem was that I was trying to run a Docker image which didn't exist.
I needed to build the image:
docker build . -t xameeramir/cra-docker
And then run it:
docker run -p 8080:80 xameeramir/cra-docker
In my case, my image had a TAG specified and I was not using it.
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
testimage    testtag   189b7354c60a   13 hours ago   88.3MB
I got Unable to find image 'testimage:latest' locally for the command docker run testimage.
So specifying the tag, as in docker run testimage:testtag, worked for me.
Posting my solution since none of the above worked.
Working on a MacBook M1 Pro.
The issue I had is that the image was built as arm64, while I was running the command:
docker run --platform=linux/amd64 ...
So I had to build the image for the amd64 platform in order to run it.
Command below:
docker buildx build --platform=linux/amd64 ...
In conclusion, your Docker image platform and your docker run platform need to be the same, from what I experienced.
In my case, the Docker image did exist on the system and I still couldn't run the container locally, so I used the exact image ID instead of the image name and tag, like this:
docker run c29150c8588e
I received this error message when I typed the name wrong, i.e. "name1\name2" instead of "name1/name2" (wrong slash).
In my case, I saw this error when I was logged in to Docker Hub in Docker Desktop. The repo I was pulling was internal to my enterprise. Once I logged out of Docker Hub, the pull worked.
This just happened to me because my local Docker VM on macOS ran out of disk space.
I just deleted some old images using docker image prune and it started working correctly again.
shakyshane/cra-docker does not exist in that user's repo: https://hub.docker.com/u/shakyshane/
The problem is that you are trying to run an image that does not exist. If you are building from a Dockerfile, the image is not created until the Dockerfile finishes with no errors, so when you try to run the image it can't be found. Make sure your scripts execute without errors.
The simplest answer can be the correct one! Make sure you have permission to execute the command; use:
sudo docker run -p 8080:80 shakyshane/cra-docker
In my case, I didn't realise there was a difference between docker run and docker start, and I kept using the run command when I should have been using the start command.
FYI: run creates and starts a new container from an image; start just starts an existing, stopped container.
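For example (the image and container names are placeholders):
docker run --name myapp -p 8080:80 myimage   # creates AND starts a new container
docker stop myapp                            # the container still exists, stopped
docker start myapp                           # restarts the existing container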
Use -d
sudo docker run -d -p 8000:8000 rasa/duckling
Learn about -d with:
sudo docker run --help
At first, I built the image on a Mac M1 Pro with this command: docker build -t hello_k8s_world:0.0.1 . When I ran this image, the issue appeared.
After reading Master Yi's answer, I realized the crux of the matter and rebuilt my image like this: docker build --platform=arm64 -t hello_k8s_world:0.0.1 .
Finally, it worked.

gitlab-runner errors on local windows

I am trying to generate my work-in-progress Hugo website locally. It works fine with GitLab CI.
I installed Docker and the gitlab-runner service.
Then, using the guide here, I figured that I am supposed to run gitlab-runner exec docker pages.
But that results in:
WARNING: Since GitLab Runner 10.0 this command is marked as DEPRECATED and will be removed in one of upcoming releases
Running with gitlab-runner 10.5.0 (80b03db9)
Using Docker executor with image rocker/tidyverse:latest ...
Pulling docker image rocker/tidyverse:latest ...
Using docker image sha256:f9a62417cb9b800a07695f86027801d8dfa34552c621738a80f5fed649c1bc80 for rocker/tidyverse:latest ...
ERROR: Job failed (system failure): Error response from daemon: invalid volume specification: '/host_mnt/c/builds/project-0/Users/jan/Desktop/gits/stanstrup-web:C:\Users\jan\Desktop\gits\stanstrup-web:ro'
FATAL: Error response from daemon: invalid volume specification: '/host_mnt/c/builds/project-0/Users/jan/Desktop/gits/stanstrup-web:C:\Users\jan\Desktop\gits\stanstrup-web:ro'
I also tried registering the runner as other guides show, but I end up with the same issue.
Others have had similar issues:
https://gitlab.com/gitlab-org/gitlab-runner/issues/1775 - it was said this was fixed...
https://github.com/moby/moby/issues/12751 suggests that you can set COMPOSE_CONVERT_WINDOWS_PATHS=1. I tried setting that as an environment variable but it didn't help.
More discussion of how to escape the path correctly: https://github.com/docker/compose/issues/3285
More discussion suggesting COMPOSE_CONVERT_WINDOWS_PATHS=1 would work: https://github.com/docker/toolbox/issues/607
Am I supposed to set something in .gitlab-ci.yml? Should volumes be set there? If so, how and where?
The .gitlab-ci.yml says:
image: rocker/tidyverse:latest

before_script:
  - apt-get update && apt-get -y install default-jdk pandoc r-base r-cran-rjava curl netcdf-bin libnetcdf-dev libxml2-dev libssl-dev
  - R CMD javareconf
  - Rscript .gitlab-ci.R

pages:
  script:
    - R -e "blogdown::build_site()"
  artifacts:
    paths:
      - public
  only:
    - master
Looks like you hit the colon-separator bug in Docker for Windows, which lots of tools have to work around; GitLab has noticed it.
Until the fix comes out, the simplest workaround would be to do this in a Linux VM on your Windows box.
You can get prebuilt GitLab VM images from Bitnami here.
Otherwise you could check out and run the gitlab-runner source branch with the fix; however, it shows some conflicts and might have other bugs.
