Makefile and docker attach. How to execute from another Makefile task? - bash

I have a Makefile:
attach:
	docker run --rm \
		-it \
		alpine:3.8

run-inside: attach
	uname -a
I can run make attach and type uname -a by hand (uname -a is just an example).
I want run-inside to start the container, attach to it, execute the command, and stop the container. Is this possible? I need this because I'm setting up CI for my project, and I need a way to run the command without copy/pasting.
I know I can do this:
run:
	docker run --rm \
		alpine:3.8 uname -a
But this way I'm duplicating the docker command.

A possible solution is to use the -d (--detach) option of docker run, along with the docker exec command.
For example:
Makefile
IMAGE ?= alpine:3.8
NAME ?= foobar
RUN = docker exec $(NAME)

all: start run-inside stop

start:
	docker run -d -i --name=$(NAME) --rm --init $(IMAGE)

run-inside:
	$(RUN) cat /etc/os-release
	$(RUN) uname -a

stop:
	docker stop $(NAME)

.PHONY: all start run-inside stop
Regarding the options passed to docker run:
-d tells the Docker Engine to run the container in the background;
-i is necessary to keep the container running (while -t is useless here);
--name specifies the container's name;
--rm triggers the container's removal as soon as it is stopped (here, with docker stop);
--init is optional (it is especially handy when the entrypoint is a shell, so that the signal sent by docker stop can be processed immediately by the tini process, run as PID 1).
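With this Makefile, a single make invocation drives the whole lifecycle. Since make echoes each recipe line by default, a run should look roughly like the following (the containers' own output appears between the echoed commands and will vary):
$ make
docker run -d -i --name=foobar --rm --init alpine:3.8
docker exec foobar cat /etc/os-release
docker exec foobar uname -a
docker stop foobar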
As an aside, relying on a Makefile may be unnecessary when configuring a Docker-based CI: it can work well, but you might instead want to:
inline the docker commands at stake directly in a .travis.yml or .gitlab-ci.yml or so;
use a docker-compose.yml file and install docker-compose beforehand.
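To illustrate the first option: assuming a CI runner that can already talk to a Docker daemon, a minimal .gitlab-ci.yml could be as small as this sketch (the job name is arbitrary):
test:
  script:
    - docker run --rm alpine:3.8 uname -a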

Related

How to pass ALL environment variables to container with docker exec

It's possible to set one or more environment variables in the container while doing docker exec, for example:
docker exec -ti -e VAR=1 -e HOME container_name command
But I would like to pass all the shell's environment variables without explicitly specifying them individually. Essentially the equivalent of sudo -E, although it's a different thing.
According to the documentation, there is no such option. But one hack would be something like:
env > env_vars && docker exec -ti --env-file ./env_vars container_name command
Which works, but I'm looking for a simple one step solution that doesn't involve creating a temporary file. Perhaps a bash trick I don't know or haven't thought of yet. Thanks.
Please note: Passing all environment variables is not recommended and defeats the purpose of container process isolation. This question is for knowledge, not about what should be done. Also, the question is specifically about running a temporary command in an existing container with docker exec, not about docker run.
With Bash, it seems process substitution works:
docker run --rm -ti --env-file <(env) alpine sh
Note: this creates a temporary FIFO behind the scenes anyway.
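You can see the mechanism yourself: the <(...) expression expands to a file-descriptor path, which docker then reads like a regular file (the descriptor number will vary):
$ echo <(env)
/dev/fd/63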
Note: this will not work properly with variables containing newlines; their values are cut off at the first newline. In that case you should do something along these lines (I tried to keep it short):
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker run --rm -ti "${args[@]}" alpine sh
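Since the question is specifically about docker exec, both approaches should carry over unchanged; a sketch (container_name and command are placeholders):
# simple variant (values cut off at newlines):
docker exec -ti --env-file <(env) container_name command
# newline-safe variant: env -0 emits NUL-separated VAR=val records,
# sed -z prefixes each record with a literal --env and a NUL,
# and readarray -d '' splits the NUL-separated stream into an array
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker exec -ti "${args[@]}" container_name command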

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script test.sh in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mounted folder without the container exiting.
(I don't want to use docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is the RStudio Server image from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/ but even then the container exits.
A Docker container stops as soon as its main process exits; running in detached mode does not keep it alive by itself. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the container's main command and replaces the CMD specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't change anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
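You can see the override in action with the image above, whose Dockerfile ends in CMD tail -f /dev/null:
docker run -d image_name                        # no command given: CMD runs, container stays up
docker run -d image_name /home/rstudio/test.sh  # command replaces CMD: container exits when the script ends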
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'

How to bash into a docker container

Trying to bash into a container and run a for loop which simply performs a command (which works on a single file, by the way). It even seems to echo the right command... what did I forget?
for pdf in *.pdf ; do
	docker run --rm -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -g jpeg2000 -v -i '\'''$pdf''\''';
done
You can bash into a container with these commands:
To see the container IDs:
docker container ls
To enter bash inside a container:
docker exec -it CONTAINER_ID bash
First, you are not allocating a TTY in your docker run command, and the container dies soon after converting the files. Here is the main process of the container (its entrypoint script):
#!/bin/bash
cd /home/docker
exec pdf2pdfocr.py "$@"
So, in this case, the life of this container is the life of the exec pdf2pdfocr.py "$@" command.
As mentioned by @Fra, override the entrypoint and run the command manually:
docker run --rm -v "$(pwd):/home/docker" -it --entrypoint /bin/bash leofcardoso/pdf2pdfocr
but with the above run command, the container will not do anything on its own; it will just allocate the TTY and open bash. You can then convert files inside the container by running pdf2pdfocr.py -g jpeg2000 -v -i mypdf.pdf in that shell (or via docker exec).
So, if you want to run the conversion while overriding the entrypoint, you can try:
docker run -it --rm --entrypoint /bin/bash -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -c "pdf2pdfocr.py -g jpeg2000 -v -i mypdf.pdf"
or with a bash script:
#!/bin/bash
for pdf in *.pdf ; do
	echo "converting $pdf"
	docker run -it --rm --entrypoint /bin/bash -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -c "pdf2pdfocr.py -g jpeg2000 -v -i $pdf"
done
But the container will die after completing the conversion.
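Note that since the image's entrypoint (shown above) already execs pdf2pdfocr.py "$@", the original loop may only need its extra quoting removed; a sketch:
for pdf in *.pdf ; do
	echo "converting $pdf"
	docker run --rm -v "$(pwd):/home/docker" leofcardoso/pdf2pdfocr -g jpeg2000 -v -i "$pdf"
done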

Docker build fails when executed within Makefile

So, I have a Docker build command that I have tested and which works great:
docker build \
	-t app \
	--no-cache --network host \
	--build-arg ssh_private_key="$(cat ~/.ssh/id_rsa)" \
	--build-arg python_version="3.6.8" -f Dockerfile .
To ease the pain of the team learning Docker, I encapsulated a few of the commands (build, start, stop) in a Makefile. However, within the Makefile I need to change the command slightly, modifying
$(cat ~/.ssh/id_rsa)
to
$(shell cat ~/.ssh/id_rsa)
When I execute the following:
make build
I receive the following message:
Step 13/20 : RUN git clone --depth 1 "${git_user}@${git_host}:${git_repo}" app
---> Running in d2eb41a71315
Cloning into 'app'...
Warning: Permanently added the ECDSA host key for IP address [ip_address] to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
However, I do not have the same issue when executing from the command line. I think it has something to do with the way I call the cat command, but I do not know how to resolve it.
Any ideas?
Makefile:
APP_NAME=ccs_data_pipeline
DATA?="${HOME}/data"
DOCKER_FILE=Dockerfile
PYTHON_VERSION?=3.6.8
SRC?=$(shell dirname `pwd`)
PRIVATE_KEY?=$(shell echo $(shell cat ~/.ssh/id_rsa))

build: ## Build container for ccs data pipeline
	docker build \
		-t $(APP_NAME) \
		--no-cache --network host \
		--build-arg ssh_private_key="$(PRIVATE_KEY)" \
		--build-arg python_version="$(PYTHON_VERSION)" \
		-f $(DOCKER_FILE) .

start: ## Start the docker container
	docker run \
		-it -v $(DATA):/data \
		--network host \
		--rm \
		--name="$(APP_NAME)" $(APP_NAME)

stop: ## Stop the docker container
	docker stop $(APP_NAME); \
	docker rm $(APP_NAME)
Please show your actual makefile, or at least the entire rule that is having the error. The single command you provided, with no context, is not enough to understand what you're doing or what might be wrong.
Note that it is often not correct to replace a shell command substitution $(...) with make's $(shell ...) function. In particular, $(shell ...) collapses the newlines in a command's output into spaces, which corrupts a multi-line value such as a private key. Sometimes the replacement works "by accident", when the real differences between the two don't happen to matter.
In general you should never use $(shell ...) inside a recipe (I have no idea if this command appears in a recipe). Instead, you should escape all the dollar signs that you want to be passed verbatim to the shell when it runs your recipe:
$$(cat ~/.ssh/id_rsa)
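Applied to the Makefile above, the build recipe might then look like this sketch (the PRIVATE_KEY variable is dropped, and the escaped substitution runs in the recipe's shell instead):
build: ## Build container for ccs data pipeline
	docker build \
		-t $(APP_NAME) \
		--no-cache --network host \
		--build-arg ssh_private_key="$$(cat ~/.ssh/id_rsa)" \
		--build-arg python_version="$(PYTHON_VERSION)" \
		-f $(DOCKER_FILE) .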

"docker run" dies after exiting a bash shell script

I'm attempting to craft system admin bash tools for starting up a Docker image.
But the container started by docker run keeps dying on me after the bash script exits.
The actual working bash script in question is:
#!/bin/sh
docker run \
--name publicnginx1 \
-v /var/www:/usr/share/nginx/html:ro \
-v /var/nginx/conf:/etc/nginx:ro \
--rm \
-p 80 \
-p 443 \
-d \
nginx
docker ps
Executing the simple script resulted in:
# ./docker-run-nginx.sh
743a6eaa33f435e3e0d211c4047bc9af4d4667dc31cd249e481850f40f848c83
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
743a6eaa33f4 nginx "nginx -g 'daemon of…" 1 second ago Up Less than a second 0.0.0.0:32778->80/tcp, 0.0.0.0:32777->443/tcp publicnginx1
And after the bash script completed, I executed docker ps:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
There is no container running.
What did I do wrong?
Try to run it without --rm.
You can see all containers (including those that have already died) using this command:
> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
743a6eaa33f4 nginx "nginx -g 'daemon of…" 1 second ago Exited (??) ??
^^^^^
You should be able to look at the exit code of the container. Using the container ID, you can also look into its logs to understand better what is going on:
docker logs 743a6eaa33f4
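If the exit code alone is enough, docker inspect can pull it out directly (this assumes the container still exists, i.e. it was started without --rm):
docker inspect 743a6eaa33f4 --format '{{.State.ExitCode}}'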
If you still can't figure it out, you can start the container with a TTY running bash, and try to run the command inside it:
docker run -it -v /var/www:/usr/share/nginx/html:ro -v /var/nginx/conf:/etc/nginx:ro --rm -p 80 -p 443 nginx bash
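Inside that shell, running the image's default command in the foreground (shown as "nginx -g 'daemon of…'" in the docker ps output above) should surface whatever error is killing the container:
nginx -g 'daemon off;'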
