copy bash command history (recursive search commands) into Docker container - bash

I have a container which I am using interactively (docker run -it). In it, I have to run a pretty common set of commands, though not always in a set order, so I cannot just run a script.
Thus, I would like a way to have my commands available in reverse search (Ctrl+R) inside the Docker container.
Any idea how I can do this?

Mount the history file into the container from the host, so its contents are preserved across the container's death.
# In some directory
touch bash_history
docker run -v "$(pwd)/bash_history":/root/.bash_history:Z -it fedora /bin/bash
I would recommend keeping a bash history separate from the one you use on the host, for safety reasons.

I found helpful info in these questions:
Docker and .bash_history
Docker: preserve command history
https://superuser.com/questions/1158739/prompt-command-to-reload-from-bash-history
They use docker volume mounts, however, which means that the container commands affect the local (host PC) command history, which I do not want.
It seems I will have to copy ~/.bash_history from the host into the container, which will make the history work 'one-way'.
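For example, a one-off, one-way copy into a running container could be done with docker cp (a sketch; "mycontainer" is a placeholder name):
docker cp ~/.bash_history mycontainer:/root/.bash_history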
UPDATE: Working:
COPY your_command_script.sh some_folder/my_history
ENV HISTFILE some_folder/my_history
ENV PROMPT_COMMAND="history -a; history -r"
Explanation:
copy the command script into a file in the container
tell the shell (via HISTFILE) to use that file for its history
append to and re-read the history file at each prompt (PROMPT_COMMAND)
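To use it, the build and run steps might look like this (the image tag is a placeholder, assuming the snippet above lives in a Dockerfile based on an image that ships bash):
docker build -t my-image-with-history .
docker run -it my-image-with-history /bin/bash
# Ctrl+R inside the container now searches some_folder/my_history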

Related

file descriptor redirection in docker

I want to be able to pipe some content into a docker process without clobbering its stdin.
I thought I could do this by opening a new file descriptor in bash before spawning the docker process, then consuming that descriptor within the docker process. However, it doesn't work.
outside docker:
exec 4<>somefile.txt
docker run --rm -i image cmd args > output.txt
inside docker:
exec 4>file.txt # also tried without the exec
do something with file.txt
The docker container stops when it reaches the 4>file.txt line.
It must be an atomic action, so I can't use docker cp or anything like that.
Also, the docker image does not expose any network ports, so netcat cannot be used.
I would prefer to not use any complex docker mounts.
STDIN is required for other purposes, so I can't clobber that
Are there any other options for getting the file content into a transient container for the use of a single command?
The usual approach here is to mount the current directory into the container. You can choose any directory name inside the container, and should try to avoid hiding the script itself with the mount.
docker run --rm -i -v "$PWD":/data image \
  cmd -i /data/file.txt -o /data/output.txt --other-args
Filesystem permissions can be tricky on both sides of this: you can name any directory in the first half of the -v option, even system directories like /etc; and if the process inside the container runs as a non-root user, it might have trouble reading the files in the directory you mount in.
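One common workaround for the non-root permission issue, assuming the containerized command can run as an arbitrary user, is to run the container with your own UID/GID (a sketch, reusing the placeholder names above):
docker run --rm -u "$(id -u):$(id -g)" -v "$PWD":/data image \
  cmd -i /data/file.txt -o /data/output.txt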
You can bind-mount either files or directories, with the one caveat that they must exist on the host first; otherwise Docker will create a directory for you (even if you wanted a file, and likely owned by root rather than your local user).
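So, sticking with the placeholder names from above, a single-file mount would need the file created first (a sketch):
touch output.txt   # must exist beforehand, or Docker creates a directory named output.txt
docker run --rm -v "$PWD/output.txt":/data/output.txt image cmd -o /data/output.txt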

Entering text into a docker container via ssh from bash file

What I am trying to do is setup a local development database and to prevent everyone having to go through all the steps I thought it would be useful to create a script.
What I have below stops once it is inside the container's shell, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command via a script into a dockers container terminal?
In order to execute a command inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty, which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend looking at building an image with a Dockerfile instead, i.e. once you figure out the commands to run, they become RUN commands, and the container started by docker run exposes a ready local development database.
If you really want some commands to run within the shell after it is started, and to maintain the session, then depending on the base image you might be able to mount a bash profile directory that has the required commands, e.g. -v db_profile:/etc/profile.d where db_profile is a folder with the shell scripts you want to run. To get them to run, you'd exec sh -l so that the login startup scripts run, as in the sketch below.
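A rough sketch of that profile-mount approach (the container and image names are placeholders, it binds a host folder rather than a named volume, and it assumes the image's default command keeps the container running and that its shell reads /etc/profile.d on login):
docker run -d --name devdb -v "$PWD/db_profile":/etc/profile.d my_db_image
docker exec -it devdb sh -l   # the login shell sources /etc/profile.d/*.sh on startup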

Problem in executing a shell script present on host using docker exec

I'm trying to execute a script on the master node of AWS EMR cluster. The intention is to create a new conda env and link it to jupyter. I'm following this doc from AWS. Problem is, whatever be the content of the script, I'm getting the same error: bash: /home/hadoop/scripts/bootstrap.sh: No such file or directory while executing sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh. I've made sure the sh file is in the correct location.
But if I copy the bootstrap.sh file into the container and then run the same docker exec command, it works fine. What am I missing here? I've tried a simple script with the following contents, but it throws the same error:
#!/bin/bash
echo "Hello"
The doc clearly says:
Kernels are installed within the Docker container. The easiest way to
accomplish this is to create a bash script with installation commands,
save it to the master node, and then use the sudo docker exec
jupyterhub script_name command to run the script within the jupyterhub
container.
The docker exec command runs a command within the container's namespaces. One of those namespaces is the filesystem. So unless the command is part of the image, written into the container directly, or you have mounted a host volume to map a host directory into the container, you won't be able to execute it. A host volume could look like:
docker run -d --name your_container -v /host/scripts:/container/scripts $your_image
docker exec -it your_container /container/scripts/test.sh
That host volume could be the same path on both the host and the container.
If it is a shell script, you could use I/O redirection, e.g.:
docker exec -i $container_id /bin/bash <local_script.sh
but be aware that you cannot do interactive stuff this way since the script content has replaced your terminal as stdin. This works because the shell inside the container is just processing commands from stdin.
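Applied to the question above, an equivalent piped form could look like this (a sketch reusing the container name and script path from the question):
cat /home/hadoop/scripts/bootstrap.sh | sudo docker exec -i jupyterhub bash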
Other than those scenarios, I don't know what to tell you, except that the documentation from AWS appears to be wrong.

Reuse inherited image's CMD or ENTRYPOINT

How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?
I am using this, which does execute my script fine, but appears to overwrite the PHP CMD:
FROM php
COPY start.sh /usr/local/bin
CMD ["/usr/local/bin/start.sh"]
What should I do differently? I am avoiding the prospect of copy/pasting the ENTRYPOINT or CMD of the parent image, and maybe that's not a good approach.
As mentioned in the comments, there's no built-in solution to this. From the Dockerfile, you can't see the value of the current CMD or ENTRYPOINT. Having a run-parts solution is nice if you control the upstream base image and include this code there, allowing downstream components to make their changes. But with Docker there's one inherent issue that will cause problems with this: containers should only run a single command, and that command needs to run in the foreground. So if the upstream command kicks off first, it stays running without giving your later steps a chance to run, and you're left with the complexity of ordering the commands so that a single foreground command does eventually run without exiting.
My personal preference is a much simpler and hardcoded option: add my own command or entrypoint, and make the last step of my command an exec of the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now in your start.sh, you would have:
#!/bin/sh
# run various pieces of initialization code here
# ...
# kick off the upstream command:
exec /upstream-entrypoint.sh "$@"
By using an exec call, you transfer PID 1 to the upstream entrypoint so that signals get handled correctly. And the trailing "$@" passes through any command line arguments. You can use set to adjust the value of "$@" if there are some args you want to process and extract in your own start.sh script, as sketched below.
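For example, a small sketch of using set to adjust the arguments before handing off (the upstream script name and the extra flag are placeholders):
#!/bin/sh
# append a default flag to "$@" before delegating to the upstream entrypoint
set -- "$@" --default-flag
exec /upstream-entrypoint.sh "$@"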
If the base image is not yours, you unfortunately have to call the parent command manually.
If you own the parent image, you can try what the people at camptocamp suggest here.
They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.
However, that means you'll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later...).
Anyway, that could work.
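A minimal sketch of that pattern (not camptocamp's exact script; it assumes run-parts is available in the image and that the extension scripts live in /docker-entrypoint.d):
#!/bin/sh
# entrypoint: run every script in /docker-entrypoint.d in lexicographic order,
# then hand off to whatever CMD was supplied
run-parts /docker-entrypoint.d
exec "$@"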
Update #1
There is a long discussion on this docker-compose issue about provisioning after a container runs. One suggestion is to wrap your docker run or docker-compose command in a shell script and then run docker exec for your other commands.
If you'd like to use that approach, you basically keep the parent CMD as the run command and place your own commands as docker exec calls after your docker run.
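A sketch of that wrapper approach (the image name, container name, and provisioning script are placeholders):
#!/bin/sh
docker run -d --name myapp my_image            # parent ENTRYPOINT/CMD run unchanged
docker exec myapp /usr/local/bin/provision.sh  # then run your own setup commands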
Using mysql image as an example
Do docker inspect mysql/mysql-server:5.7 and see that:
Config.Cmd="mysqld"
Config.Entrypoint="/entrypoint.sh"
We put these into bootstrap.sh (remember to chmod a+x it):
#!/bin/bash
echo $HOSTNAME
echo "Start my initialization script..."
# docker inspect results used here
/entrypoint.sh mysqld
Dockerfile is now:
FROM mysql/mysql-server:5.7
# put our script inside the image
ADD bootstrap.sh /etc/bootstrap.sh
# set to run our script
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/etc/bootstrap.sh"]
Build and run our new image:
docker build --rm -t sidazhou/tmp-mysql:5.7 .
docker run -it --rm sidazhou/tmp-mysql:5.7
Outputs:
6f5be7c6d587
Start my initialization script...
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
You'll see this has the same output as the original image:
docker run -it --rm mysql/mysql-server:5.7
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...

Cannot run script added to existing docker container

I have a container that is running with no issues. I added a bash script to complement a couple of other scripts already in the container. The docker image copies 2 scripts to /usr/local/bin and they can be accessed with docker exec container-name existingscript.
I added my own script to the same directory, and when running the same command I get an error that exec cannot run the script: no file or directory, script not located in $PATH. I checked the path and, sure enough, /usr/local/bin is listed. I checked permissions and the script is 755.
I then open an interactive shell with docker exec -it mycontainer bash and run /usr/local/bin/myscript, and it runs with no problem.
Why can I not run the script from outside the container like I can the other two (that were included in the image)? All three have almost the same functions and do not use any special programs; one lists files, one adds files, one reads the file.
The base is Ubuntu.
EDIT: Found where I was running into the issue. Provided the answer in case anyone else happens to make the same mistake.
EDIT-2: So the script that came with the docker image to perform a couple of common functions calls the image, not the container, so my adding the scripts to the container had no effect on that script, which was why I kept getting the no file or directory error.
The line in the script in question was:
docker run --rm -v "$(pwd)/config":/path/to/file -ti image_name:latest mynewscript "$@"
Of course that ran against the image and NOT the container.
Once I noticed that I tried running it with exec instead of run and it ran without error, like so:
docker exec -it container_name mynewscript
The reason may be that /usr/local/bin is not in the $PATH seen when your script is invoked; you can call /usr/local/bin/myscript explicitly, or export PATH first in the script.
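A tiny sketch of that suggestion (the script name is a placeholder):
#!/bin/bash
export PATH="/usr/local/bin:$PATH"   # make sure /usr/local/bin is searched
myscript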
While I was adding snippets to help explain the issue I found the problem and the solution.
So I access the scripts inside the container from the host with another script that lets you do different things based on a case switch. The scripts are called against the docker image and not the container, so the script I added does not actually exist in the image.
I modified the script to call the container instead of the image and it works as expected.
