I'm running the Docker container locally to troubleshoot its state. I don't always want to execute the RUN/ENTRYPOINT; I often want to get into the running container, do some things, and then run the RUN/ENTRYPOINT.
It would be super convenient to have the RUN/ENTRYPOINT available after I docker run bash by just pressing the up key. So I thought it would be nice if I could modify the history with history -s ... in the Dockerfile. That way, as soon as I docker run bash, I can just press up and have the RUN/ENTRYPOINT available.
When I put this in the Dockerfile, I got this error:
/bin/sh: 1: history: not found
Is there a way to set the bash history in a Dockerfile?
You get the error because RUN commands run in /bin/sh, which has no history command available.
To make this work, you need to run an interactive bash shell during the build, so it will store your history entry.
RUN bash -ic 'history -s foobar'
That should leave behind a history file with foobar as its most recent (and probably only) entry.
You will see an error during build about ioctl... that is normal, because interactive bash expects to find a terminal, and there won't be one. But it should still work fine.
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
Note that the history will be stored for the user you run the command as. If your image switches to a non-root user with the USER statement, put the RUN line after the USER line so the history belongs to the user your image runs as.
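For illustration, a minimal Dockerfile sketch (the base image, user name, and the seeded command are assumptions, not from the question) could look like this:
# minimal sketch -- base image, user, and the seeded command are placeholders
FROM ubuntu
RUN useradd -m appuser
USER appuser
# seed appuser's ~/.bash_history so pressing up recalls the command in an interactive shell
RUN bash -ic 'history -s ./entrypoint.sh --debug'
CMD ["bash"]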
I have a WSL Ubuntu distro that I've set up so that when I log in, 4 services start, including a web API that I can test via Swagger to verify it is up and working.
I'm at the point where what I want to do now is start WSL via a script - that is, launch my distro, have all of the services start, and do it from Python. The problem is I cannot even figure out the correct syntax to get WSL to start from PowerShell in a manner where my services start.
Side note: "services" != systemctl (or similar) calls, but just executing bash CLI commands from either my .bashrc or .profile at login.
I've put the commands to execute in .profile & .bashrc. I've configured it both for root execution and non-root user execution. I've taken the commands out of those 2 files and put them into a script in the Windows file system that I pass in on the start of wsl. And I've put that shell script in the WSL file system as well. Nothing seems to work, and sometimes the distro starts and then stops after about 30 seconds.
Some of the PS CLI commands I've tried:
Start-Job -ScriptBlock{ wsl -d distro -u root }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c /root/bin/start.sh' }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c .\start.sh' }
wsl -d distro -u root -- bash -i -l -c /root/bin/start.sh
wsl -d distro -u root -- bash -i -l -c .\start.sh
wsl -d distro -u root -- /root/bin/start.sh
Permutations of the above that I've tried: replace root with my default login, and turning all of the Start-Job bash options into a comma-separated list of single-quoted strings (Ex: 'bash', '-i', '-l', ... ). Nothing I launch from the CLI will allow me access to the web API that is supposed to be hosted on my distro.
Any advice on what to try next?
Not necessarily an answer here as much as troubleshooting tips which will hopefully lead to an answer:
First, most of the forms that you are using seem to be correct. The only ones that absolutely shouldn't work are those that attempt to run the script from the Windows filesystem.
Make sure that you have a shebang line starting your script. I'm assuming you do, but other readers may come across this as well. For the moment, try this form:
#!/usr/bin/env -S bash -li
That's going to have the same effect as the bash -li you tried -- it will source both interactive startup files such as ~/.bashrc and login profiles such as ~/.bash_profile (and /etc/profile.d/*, etc.).
Note that preferably, you won't need the -li. Best practice would be to move anything necessary for the services over from the startup scripts to your start.sh script, and avoid parsing the profile and rc. I need to go update some of my answers, since I just realized I've been guilty of giving some potentially bad advice ...
Specifically, though, I'm wondering if your interactive Bash config has something truly, well, "interactive" in it that might be preventing the automatic running of the script itself. Again, best practice would be for ~/.bashrc to only hold configuration that is needed for interactive shell sessions.
Make sure the script is set as executable (chmod +x start.sh). Again, I'm assuming this is the case for you.
With a shebang line and an executable script, use something like:
wsl -d distro -u root -e /root/bin/start.sh
The -e tells WSL to launch the script directly. Since it has a shebang line, it will be parsed by Bash. Most of the other forms you used above actually run Bash twice: once when launching WSL and again when it finds the shebang line in the script.
Try some basic troubleshooting for your script like:
Add set -x to the top (right under the shebang line) to turn on script debugging.
Add a ps -efH at the end to show the processes that are running when the script completes.
If needed, resort to quick-and-dirty echo statements to show where things have progressed in the script.
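Put together, a debug version of start.sh might look like this (the two service commands are placeholders for whatever your login scripts currently launch):
#!/usr/bin/env -S bash -li
set -x                               # trace each command as it runs
# placeholder service start-up -- replace with your real commands
/usr/local/bin/my-api --listen 0.0.0.0:5000 &
/usr/local/bin/my-worker &
ps -efH                              # show the process tree before the script exits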
I'm hopeful that the above will at least show you the problem, but if not, add the debugging info that you gain from this to your question, and we can troubleshoot further.
I have a container which I am using interactively (docker run -it). In it, I have to run a pretty common set of commands, though not always in a set order, so I cannot just run a script.
Thus, I would like a way to have my commands available in reverse search (Ctrl+R) in the Docker container.
Any idea how I can do this?
Let's mount the history file into the container from the host so its contents are preserved after the container dies.
# In some directory
touch bash_history
docker run -v "$(pwd)/bash_history":/root/.bash_history:Z -it fedora /bin/bash
I would recommend having a separate bash history from the one you use on the host, for safety reasons.
I found helpful info in these questions:
Docker and .bash_history
Docker: preserve command history
https://superuser.com/questions/1158739/prompt-command-to-reload-from-bash-history
They use Docker volume mounts however, which means that commands run in the container affect the local (host PC) history, which I do not want.
It seems I will have to copy ~/.bash_history from local into container which will make the history work 'one-way'.
UPDATE: Working:
COPY your_command_script.sh some_folder/my_history
ENV HISTFILE some_folder/my_history
ENV PROMPT_COMMAND="history -a; history -r"
Explanation:
copy command script into a file in container
tell the shell to look at a different file for history
reload the history file
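For example (the image tag is a placeholder), building and using an image containing the three lines above looks like:
docker build -t myimage .
docker run -it myimage bash
# inside the container, press Up or Ctrl+R to recall the commands seeded into my_history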
Using the Linux terminal, I run bash scripts (.sh files) containing sequences of commands I want to execute.
The issue is that I am unable to run a Docker command from within my shell script. I can run this Docker command when it's typed directly at the terminal with root privileges but not when I include it in the shell script file.
My script, executed as a general user from the command line, looks like this:
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. my system is set to only run Docker as root
su
# Copy a folder from Docker container to host OS
docker cp <container-name>:/home/user/data /home/user/docker_backup
# More general user commands
cd ..
My code only runs until the su line above. After I enter the root password, nothing happens. If I type exit, I get permission errors, meaning the docker cp command failed.
This is my desired solution:
After thorough research, as I wanted to run my script as a general user and only run certain commands as root when necessary, I came up with a solution that works.
My script now looks like this (run with $ sh script_name.sh):
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. my system is set to only run Docker as root
su - root -c "docker cp <container-name>:/home/user/data /home/user/docker_backup"
# More general user commands
cd ..
Run the shell script as a general user. For commands that require root privileges, use su - root -c "<command>". The terminal prompts for the root password, executes the quoted command as root, and then the shell proceeds as the general user.
Actually posting this as an answer:
You switch your current user to root during the script, but su starts a new interactive shell; it does not elevate the rest of the script, which was started by your own user.
So the docker cp command will still be executed as your own user, and only after you exit that root shell.
This also means you don't see useful output from docker cp, which might otherwise give you insight into why it is not working - I suspect insufficient privileges.
A solution to this is either using sudo before docker cp, starting the script as root, or adding your user to the "docker" group, which authorizes your user to use docker commands.
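For instance, the sudo variant of the script from the question might look like this (the paths and container-name placeholder are taken from the question):
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# only the docker step needs elevated privileges
sudo docker cp <container-name>:/home/user/data /home/user/docker_backup
# More general user commands
cd ..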
I had a similar issue where the docker commands ran fine in the terminal, but the same commands did not run when I put them into a bash script. It came down to two reasons.
The docker commands need to be run with elevated privileges, that is, with the sudo command (e.g. sudo docker ps works but docker ps won't). Alternatively, add the current user to the docker group so that sudo is not needed with each docker command; please visit this link and follow section 2 to do the same.
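For reference, adding the current user to the docker group is typically done with:
sudo usermod -aG docker "$USER"
# log out and back in for the group change to take effect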
Run the script in the correct way
One should have #!/bin/bash at the start of the script. It is a shebang that is required by each script.
One should save the file without .sh extension
One should provide the execution permission to the script by giving command chmod 777 script_name
One should run the script with bash script_name
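For example (the script name is a placeholder), the steps above amount to:
chmod 777 docker_script      # or, more conservatively, chmod +x docker_script
bash docker_script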
How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?
I am using this, which does execute my script fine, but appears to overwrite the PHP CMD:
FROM php
COPY start.sh /usr/local/bin
CMD ["/usr/local/bin/start.sh"]
What should I do differently? I am avoiding the prospect of copy/pasting the ENTRYPOINT or CMD of the parent image, and maybe that's not a good approach.
As mentioned in the comments, there's no built-in solution to this. From the Dockerfile, you can't see the value of the current CMD or ENTRYPOINT. Having a run-parts solution is nice if you control the upstream base image and include this code there, allowing downstream images to add their changes. But in Docker there's one inherent issue that will cause problems with this: a container should only run a single command, and that command needs to run in the foreground. So if the upstream command kicks off first, it stays running without giving your later steps a chance to run, and you're left with the complexity of ordering the commands so that a single command does eventually run without exiting.
My personal preference is a much simpler and hardcoded option: add my own command or entrypoint, and make the last step of my script exec the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now, in your start.sh, you would have:
#!/bin/sh
# run various pieces of initialization code here
# ...
# kick off the upstream command:
exec /upstream-entrypoint.sh "$@"
By using an exec call, you transfer PID 1 to the upstream entrypoint so that signals get handled correctly. And the trailing "$@" passes through any command line arguments. You can use set to adjust the contents of "$@" if there are some args you want to process and extract in your own start.sh script.
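A corresponding Dockerfile sketch (the base image name and default command are assumptions; /upstream-entrypoint.sh is the path used in the example above):
FROM upstream/image
COPY start.sh /usr/local/bin/start.sh
# our wrapper becomes the entrypoint; its last step execs the upstream entrypoint
ENTRYPOINT ["/usr/local/bin/start.sh"]
# setting ENTRYPOINT resets any inherited CMD, so re-declare the upstream default command here
CMD ["upstream-default-command"]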
If the base image is not yours, you unfortunately have to call the parent command manually.
If you own the parent image, you can try what the people at camptocamp suggest here.
They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.
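A rough sketch of that pattern, for a parent image you control (the directory name is an assumption):
#!/bin/sh
# generic parent-image entrypoint: run every script in /docker-entrypoint.d in lexicographic order
run-parts /docker-entrypoint.d
# then hand off to the main command
exec "$@"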
However, that means you'll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later...).
Anyway, that could work.
Update #1
There is a long discussion on this docker compose issue about provisioning after container run. One suggestion is to wrap your docker run or compose command in a shell script and then run docker exec for your other commands.
If you'd like to use that approach, you basically keep the parent CMD as the run command and you place yours as a docker exec after your docker run.
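In sketch form (the image and container names are placeholders):
docker run -d --name myapp parent-image        # the parent ENTRYPOINT/CMD runs untouched
docker exec myapp /usr/local/bin/start.sh      # then run your additional steps inside the running container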
Using mysql image as an example
Do docker inspect mysql/mysql-server:5.7 and see that:
Config.Cmd="mysqld"
Config.Entrypoint="/entrypoint.sh"
which we put in bootstrap.sh (remember to chmod a+x):
#!/bin/bash
echo $HOSTNAME
echo "Start my initialization script..."
# docker inspect results used here
/entrypoint.sh mysqld
Dockerfile is now:
FROM mysql/mysql-server:5.7
# put our script inside the image
ADD bootstrap.sh /etc/bootstrap.sh
# set to run our script
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/etc/bootstrap.sh"]
Build and run our new image:
docker build --rm -t sidazhou/tmp-mysql:5.7 .
docker run -it --rm sidazhou/tmp-mysql:5.7
Outputs:
6f5be7c6d587
Start my initialization script...
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
You'll see this has the same output as the original image:
docker run -it --rm mysql/mysql-server:5.7
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
From my user on my machine, I ssh to a shared user on another machine that runs t-shell by default. I would like to create an alias that logs me in to the other machine as the shared user, cds to my personal folder on that machine, switches shell to bash, and sources a script which defines some additional aliases. How can I achieve this?
This is what I've tried so far. From my machine I run:
ssh -ty <otheruser>@<otherhost> 'cd <myfolder>; source tsh.personal'
On the other machine, I have the file ~/<myfolder>/tsh.personal which looks like
#!/bin/tcsh
/bin/bash -c 'source ~/<myfolder>/bash.personal'
However, when I use the -c option for bash, it just runs the command and then exits, and then the connection to the other machine closes because all commands passed to the ssh command have finished. I have also tried replacing the last row in ~/<myfolder>/tsh.personal with
/bin/bash -c 'source ~/<myfolder>/bash.personal; /bin/bash'
which tells bash to start another instance of bash, which won't exit. However, when that instance is started, it is as if ~/<myfolder>/bash.personal was never sourced. Are all aliases reset whenever a new instance of bash is started, or why are the aliases not passed on to the new instance?
Change tsh.personal to
exec /bin/bash --rcfile ~/<myfolder>/bash.personal
The exec isn't strictly necessary, but it cleans up the process table by replacing the tsh instance with a bash instance.
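Aliases are not exported to child processes, which is why the bash -c 'source ...; /bin/bash' attempt loses them: the inner /bin/bash is a fresh shell that never reads bash.personal. With --rcfile, the new interactive shell sources the file itself. Putting it together, the alias on your local machine could then be something like (the alias name is a placeholder; the rest is from the question):
alias otherhost='ssh -t <otheruser>@<otherhost> "cd <myfolder>; source tsh.personal"'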