"bash -c" vs. "dash -c" - bash

dash -c behaves differently from bash -c:
$ docker run -it ubuntu /bin/dash -c ps
  PID TTY          TIME CMD
    1 ?        00:00:00 sh
    7 ?        00:00:00 ps
$ docker run -it ubuntu /bin/bash -c ps
  PID TTY          TIME CMD
    1 ?        00:00:00 ps
Is there an explanation for this difference?

bash has an optimisation where the very last command in a script implicitly gets executed with exec. dash recently gained this optimisation as well, but not yet in the version you're using. You'll see the same behaviour with bash -c 'exec ps' and dash -c 'exec ps'.
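A quick sketch to observe the implicit exec (behaviour can vary with shell version): if bash really replaces itself with the final command, that command inherits bash's PID.

```shell
# The outer bash prints its own PID ($$), then runs `sh -c 'echo $$'`
# as the last command of the -c string. With the implicit exec, the
# inner sh *is* the outer bash process, so both lines print the same PID.
pids=$(bash -c 'echo "$$"; sh -c "echo \$\$"')
outer=$(printf '%s\n' "$pids" | head -n 1)
inner=$(printf '%s\n' "$pids" | tail -n 1)
if [ "$outer" = "$inner" ]; then
    echo "last command was exec'd in place"
else
    echo "last command ran as a separate child"
fi
```

With a dash new enough to carry the optimisation, the same comparison should also print matching PIDs.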

Related

How not to terminate after carried out commands in bash

After running commands with bash's "-c" option, how can I keep the shell waiting for input while preserving the environment?
Like CMD /K *** or pwsh -NoExit -Command ***.
From a comment by Cyrus:
You can achieve something similar by abusing the --rcfile option:
bash --rcfile <(echo "export PS1='> ' && ls")
From bash manpage:
--rcfile file
Execute commands from file instead of the system wide initialization file /etc/bash.bashrc and the standard personal initialization file ~/.bashrc if the shell is interactive
This is the answer I was looking for. Thank you!!
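To check that the rcfile commands really run before the shell starts reading input, here is a minimal sketch (MARKER is an arbitrary variable name used for illustration; input is fed via stdin instead of the keyboard):

```shell
# Write a throwaway rcfile, then start a forced-interactive bash (-i)
# with it. The rcfile runs first, so MARKER is already set when the
# shell reads its first command.
rcfile=$(mktemp)
echo 'MARKER=hello' > "$rcfile"
out=$(echo 'echo "$MARKER"; exit' | bash --rcfile "$rcfile" -i 2>/dev/null)
rm -f "$rcfile"
echo "$out"    # prints: hello
```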
As an example of use, I use the following method to run the latest docker image with my preferred package repository, without building a custom image:
# Call bash in the container from bash
docker run --rm -it ubuntu:22.04 bash -c "bash --rcfile <(echo 'sed -i -E '\''s%^(deb(-src|)\s+)https?://(archive|security)\.ubuntu\.com/ubuntu/%\1http://mirrors.xtom.com/ubuntu/%'\'' /etc/apt/sources.list && apt update && FooBar=`date -uIs`')"
# ... from pwsh
docker run --rm -it ubuntu:22.04 bash -c "bash --rcfile <(echo 'sed -i -E '\''s%^(deb(-src|)\s+)https?://(archive|security)\.ubuntu\.com/ubuntu/%\1http://mirrors.xtom.com/ubuntu/%'\'' /etc/apt/sources.list && apt update && FooBar=``date -uIs``')"
# Call dash (BusyBox ash) in the container from bash
docker run --rm -it alpine:latest ash -c "ash -c 'export ENV=\$1;ash' -s <(echo 'sed -i -E '\''s%^https?://dl-cdn\.alpinelinux\.org/alpine/%https://ftp.udx.icscoe.jp/Linux/alpine/%'\'' /etc/apk/repositories && apk update && FooBar=`date -uIs`')"
# ... from pwsh
docker run --rm -it alpine:latest ash -c "ash -c 'export ENV=`$1;ash' -s <(echo 'sed -i -E '\''s%^https?://dl-cdn\.alpinelinux\.org/alpine/%https://ftp.udx.icscoe.jp/Linux/alpine/%'\'' /etc/apk/repositories && apk update && FooBar=``date -uIs``')"

Why is Bash handling child processes differently compared to sh

The tini init-process, used in Docker, mentions that process group killing is not activated by default and gives the following example:
docker run krallin/ubuntu-tini sh -c 'sleep 10'
If I run this, and press Ctrl-C immediately after, I indeed have to wait for 10 seconds till the child process exits.
However, if instead of sh I used bash:
docker run krallin/ubuntu-tini bash -c 'sleep 10'
and press Ctrl-C, the process exits immediately.
Why do sh (which is symlinked to dash) and bash behave differently towards this child process?
And how does Bash kill the child process, I thought Bash does not propagate signals by default?
Answered thanks to chepner and Charles Duffy:
bash -c has an implicit optimization where it uses exec to replace itself if possible. sh (dash) does not have this optimization. See also this observation.
To verify:
Process tree using bash:
❯ docker run --name test --rm --detach krallin/ubuntu-tini bash -c 'sleep 60'
03194d48a4dcc8225251fe1e5de2dcbb901c8a9cfd0853ae910bfe4d3735608d
❯ docker exec test ps axfo pid,ppid,args
PID PPID COMMAND
1 0 /usr/bin/tini -- bash -c sleep 60
7 1 sleep 60
Process tree using sh:
❯ docker run --name test --rm --detach krallin/ubuntu-tini sh -c 'sleep 60'
e56f207509df4b0b57f8e6b2b2760835f6784a147b200d798dffad112bb11d6a
❯ docker exec test ps axfo pid,ppid,args
PID PPID COMMAND
1 0 /usr/bin/tini -- sh -c sleep 60
7 1 sh -c sleep 60
8 7 \_ sleep 60
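A corollary that can be checked without tini or Docker: an explicit exec makes sh behave like bash here, leaving no intermediate shell between the spawned PID and the child (a sketch):

```shell
# Without exec, $! would be the PID of the sh wrapper and a signal
# would stop at it; with exec, sh replaces itself, so the PID we
# spawned *is* the sleep process and signals reach it directly.
sh -c 'exec sleep 30' &
pid=$!
sleep 1                               # give sh a moment to exec
comm=$(ps -o comm= -p "$pid")
echo "spawned PID is running: $comm"  # sleep
kill "$pid"
```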

bash starts /bin/sh on docker:git

On my computer, bash starts and keeps running on docker:git.
~ # bash
bash-4.4# ps
PID USER TIME COMMAND
1 root 0:00 sh
26 root 0:00 bash
32 root 0:00 ps
bash-4.4# echo $0
bash
bash-4.4# echo $SHELL
/bin/ash
ash seems a bit unfamiliar, but I'm able to run a #!/bin/bash file, so it's fine so far.
However, on GitLab CI on gitlab.com, the bash command doesn't return anything, but it doesn't seem to keep running either. Why is this?
$ apk add --update bash
$ bash
$ ps && pwd && echo $0 && echo $SHELL && ls /bin
PID USER TIME COMMAND
1 root 0:00 /bin/sh
10 root 0:00 /bin/sh
25 root 0:00 ps
/builds/230s/industrial_calibration
/bin/sh
ash
base64
bash
bashbug
:
More detailed output on my computer:
$ lsb_release -a|grep Description
No LSB modules are available.
Description: Ubuntu 16.04.4 LTS
$ docker pull docker:git
$ docker images | grep -i docker
docker git 5c58d1939c5d 10 days ago 152MB
$ docker run -it docker:git
~ # apk add --update bash
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
:
(10/10) Installing tar (1.29-r1)
Executing busybox-1.27.2-r11.trigger
OK: 37 MiB in 31 packages
~ # bash
bash-4.4# ps
PID USER TIME COMMAND
1 root 0:00 sh
26 root 0:00 bash
32 root 0:00 ps
bash-4.4# echo $0
bash
bash-4.4# echo $SHELL
/bin/ash
.gitlab-ci.yml used (this fails at the last line, as the sourced file uses bash-specific syntax):
image: docker:git
before_script:
- apk add --update bash coreutils tar # install industrial_ci dependencies
- bash
- git clone https://github.com/plusone-robotics/industrial_ci.git .ci_config -b gitlab_modularize
- ps && pwd && echo $0 && echo $SHELL && ls /bin
- source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh
UPDATE: Sourcing a bash-based file in bash -c does work, but it's probably not useful to me: what I really want is to use a function defined in that file, and because the bash -c line terminates without carrying its context over, the function won't be available in later lines IMO.
- /bin/bash -c "source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh"
- pass_sshkey_docker
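The reasoning in the update can be checked directly: each bash -c line runs in its own process, so a function defined or sourced there (mylib is a hypothetical name) is gone by the next line.

```shell
# Define a function inside one bash -c invocation; it works there...
bash -c 'mylib() { echo "from mylib"; }; mylib'   # prints: from mylib
# ...but the next command is a brand-new process with no such function.
bash -c 'type mylib >/dev/null 2>&1 && echo "still defined" || echo "mylib: not found"'
# prints: mylib: not found
```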
image: alpine
before_script:
- apk add --update bash coreutils tar
- bash
- echo smth
Now imagine you are the computer. You wait for each command to finish before executing the next one, and you don't use the keyboard. So what do you do? Let's try it with alpine, substituting newlines with ;:
$ docker run -ti --rm alpine sh -c 'apk add --update bash; bash; echo smth'
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/6) Installing pkgconf (1.3.10-r0)
(2/6) Installing ncurses-terminfo-base (6.0_p20171125-r0)
(3/6) Installing ncurses-terminfo (6.0_p20171125-r0)
(4/6) Installing ncurses-libs (6.0_p20171125-r0)
(5/6) Installing readline (7.0.003-r0)
(6/6) Installing bash (4.4.19-r1)
Executing bash-4.4.19-r1.post-install
Executing busybox-1.27.2-r7.trigger
OK: 13 MiB in 17 packages
bash-4.4#
YOU DON'T TOUCH THE KEYBOARD. You can wait endlessly for the bash-4.4# line to disappear, as bash will wait endlessly for you to type anything. The command echo smth will never execute, GitLab will time out waiting for bash to end, the end.
Now, if you want to execute something in alpine using bash from gitlab-ci, I suggest doing it this way: create an executable script, ci-myscript.sh, that you git add and commit to your repo:
$ cat ci-myscript.sh
#!/bin/bash
git clone https://github.com/plusone-robotics/industrial_ci.git .ci_config -b gitlab_modularize
ps && pwd && echo $0 && echo $SHELL && ls /bin
source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh
The first line, #!/bin/bash, tells the shell to execute this script under bash. Now from your gitlab-ci you run:
image: docker:git
before_script:
- apk add --update bash coreutils tar
- ./ci-myscript.sh
Creating such scripts is actually a good workflow, because you can test the script locally on your computer before testing it in gitlab-ci.
The other option is to make a single call to bash, as suggested by @Mazel in the comments:
image: docker:git
before_script:
- apk add --update bash coreutils tar
- bash -c 'git clone https://github.com/plusone-robotics/industrial_ci.git .ci_config -b gitlab_modularize; ps && pwd && echo $0 && echo $SHELL && ls /bin; source ./.ci_config/industrial_ci/src/tests/gitlab_module.sh'
That way you need to call everything in a single line, because the next line won't have the same environment as the previous one.

Why does `uname -a` trigger Exit 1 for sslocal?

The shadowsocks client was running on my PC.
uname -a
Linux MiWiFi 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u2 (2017-06-26) x86_64 GNU/Linux
[1]+ Exit 1 sudo sh -c '/usr/bin/nohup /usr/local/bin/sslocal -c /etc/shadowsocks_racks.json > /var/log/ss.log 2>&1'
Why does the command uname -a trigger Exit 1 for sslocal here?
And what does
[1]+ Exit 1 sudo sh -c '/usr/bin/nohup /usr/local/bin/sslocal -c /etc/shadowsocks_racks.json > /var/log/ss.log 2>&1'
mean?
Why does the command uname -a trigger Exit 1 for sslocal here?
You've completely misinterpreted the output. Before uname ran, a background command running as job 1 exited with a status code of 1. The shell couldn't tell you until a command was run, and now you have been told.
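The mechanism is plain job control and can be reproduced without shadowsocks (a sketch):

```shell
# Start a background job that exits with status 1. In an interactive
# shell you would see "[1]+  Exit 1 ..." at the next prompt; in a
# script, `wait` retrieves the same status.
sh -c 'exit 1' &
pid=$!
wait "$pid"
status=$?
echo "background job exited with status $status"   # status 1
```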

Why docker exec is killing nohup process on exit?

I have a running Docker Ubuntu container with just a bash script inside. I want to start my application inside that container with docker exec like this:
docker exec -it 0b3fc9dd35f2 ./main.sh
Inside main script I want to run another application with nohup as this is a long running application:
#!/bin/bash
nohup ./java.sh &
#with this strange sleep the script is working
#sleep 1
echo `date` finish main >> /status.log
The java.sh script is as follow (for simplicity it is a dummy script):
#!/bin/bash
sleep 10
echo `date` finish java >> /status.log
The problem is that java.sh is killed immediately after docker exec returns. The question is: why?
The only solution I found is to add a dummy sleep 1 to the first script after nohup is started. Then the second process runs fine. Do you have any idea why that is?
[EDIT]
A second solution is to add an echo or trap command to the java.sh script just before the sleep. Then it works fine. Unfortunately, I cannot use this workaround, because in reality I have a Java process instead of this script.
This is not an answer, but I still don't have the required reputation to comment.
I don't know why the nohup doesn't work. But I did a workaround that worked, using your ideas:
docker exec -ti running_container bash -c 'nohup ./main.sh &> output & sleep 1'
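For what it's worth, outside Docker a nohup'd child normally survives its parent's exit, which suggests the kill observed here comes from how the docker exec session is torn down (e.g. a SIGHUP racing nohup's setup) rather than from main.sh exiting as such. A sketch, assuming a Linux host:

```shell
# A short-lived parent starts a nohup'd child and exits immediately.
child=$(sh -c 'nohup sleep 30 >/dev/null 2>&1 & echo $!')
sleep 1                      # parent is gone; child is reparented
if kill -0 "$child" 2>/dev/null; then
    echo "child survived its parent's exit"
fi
kill "$child" 2>/dev/null
```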
Okay, let's join two answers above :D
First, rcmgleite is exactly right: use the
-d
option to run the process detached in the background.
And second (most important!): if you run a detached process, you don't need nohup!
deploy_app.sh
#!/bin/bash
cd /opt/git/app
git pull
python3 setup.py install
python3 -u webui.py >> nohup.out
Execute this inside a container
docker exec -itd container_name bash -c "/opt/scripts/deploy_app.sh"
Check it
$ docker attach container_name
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11768 1940 pts/0 Ss Aug31 0:00 /bin/bash
root 887 0.4 0.0 11632 1396 pts/1 Ss+ 02:47 0:00 /bin/bash /opt/scripts/deploy_app
root 932 31.6 0.4 235288 32332 pts/1 Sl+ 02:47 0:00 python3 -u webui.py
I know this is a late response but I will add it here for documentation reasons.
When using nohup in bash and running it via docker exec on a container, you should use
$ docker exec -d 0b3fc9dd35f2 /bin/bash -c "./main.sh"
The -d option means:
-d, --detach Detached mode: run command in the
background
for more information about docker exec, see:
https://docs.docker.com/engine/reference/commandline/exec/
This should do the trick.
