Multiple exec in a shell script - bash

What happens if you have multiple exec commands in a shell script, for example:
#!/bin/sh
exec yes > /dev/null &
exec yes alex > /dev/null
I assume that a fork is still needed in order to execute the first command since the shell needs to continue executing?
Or does the & specify to create a sub process in which the exec is actually then run?

The use of & implies a sub-process.
So exec has no effect here.
Demo:
export LANG=C
echo $$
17259
exec sh -c 'echo $$;read foo' &
[1] 17538
17538
[1]+ Stopped exec sh -c 'echo $$;read foo'
fg
exec sh -c 'echo $$;read foo'
17259
I run the command echo $$;read foo in order to prevent the shell from exiting before I have quietly read the previous output.
In this sample, the current process ID is 17259.
When run with an ampersand (&), the output shows another (higher) PID. When run without the ampersand, the new shell replaces the current process and is not forked.
Replacing the command with:
sh -c 'echo $$;set >/tmp/fork_test-$$.env;read'
and re-running the whole test generates two files in /tmp.
On my machine, I could read:
19772
19994
19772
So I found two files in /tmp:
-rw-r--r-- 1 user0 user0 2677 jan 22 00:26 /tmp/fork_test-19772.env
-rw-r--r-- 1 user0 user0 2689 jan 22 00:27 /tmp/fork_test-19994.env
If I run: diff /tmp/fork_test-19*env, I read:
29c29
< SHLVL='0'
---
> SHLVL='1'
46a47
> _='/bin/sh'
So the first run, with the ampersand, executed in a sublevel (subshell), as the SHLVL difference shows.
Note: this was tested under many different shells.

The shell forks to run the background process, but that means the new shell still needs to fork to run yes. Using exec eliminates the fork in the subshell.
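A rough way to observe this yourself (a sketch only; exact process counts and pstree output vary between shells and systems, and pstree may need to be installed):
#!/bin/sh
echo "script pid: $$"
exec yes > /dev/null &      # the & forces a fork; exec applies inside that child, so yes replaces the forked subshell
echo "background pid: $!"
pstree -p $$                # inspect which processes actually exist at this point
exec yes alex > /dev/null   # no &: this replaces the script's own process
echo "never reached"        # never runs, because the exec above replaced the shell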

Related

How many child processes (subprocesses) are generated by 'su -c command'?

When using Upstart, controlling subprocesses (child processes) is quite important. But what confuses me is the following, which goes beyond Upstart itself:
scenario 1:
root@ubuntu-jstorm:~/Desktop# su cr -c 'sleep 20 > /tmp/a.out'
I got 3 processes by: cr@ubuntu-jstorm:~$ ps -ef | grep -v grep | grep sleep
root 8026 6544 0 11:11 pts/2 00:00:00 su cr -c sleep 20 > /tmp/a.out
cr 8027 8026 0 11:11 ? 00:00:00 bash -c sleep 20 > /tmp/a.out
cr 8028 8027 0 11:11 ? 00:00:00 sleep 20
scenario 2:
root@ubuntu-jstorm:~/Desktop# su cr -c 'sleep 20'
I got 2 processes by: cr@ubuntu-jstorm:~$ ps -ef | grep -v grep | grep sleep
root 7975 6544 0 10:03 pts/2 00:00:00 su cr -c sleep 20
cr 7976 7975 0 10:03 ? 00:00:00 sleep 20
The sleep 20 process is the one I care about, especially in Upstart: the process managed by Upstart should be sleep 20, but in scenario 1 it is bash -c sleep 20 > /tmp/a.out that gets managed, not sleep 20.
That is why Upstart will not work correctly in scenario 1.
So why does scenario 1 produce 3 processes? That doesn't make sense to me. Even though I know I can use exec to fix it, I just want to understand the procedure, i.e. what happens when each of the two commands is run.
su -c starts the shell and passes it the command via its -c option. The shell may spawn as many processes as it likes (it depends on the given command).
It appears the shell executes the command directly without forking in some cases, e.g., if you run su -c '/bin/sleep $$' then the apparent behaviour is as if:
su starts a shell process (e.g., /bin/sh)
the shell gets its own process id (PID) and substitutes it for $$
the shell exec()s /bin/sleep.
You should see in ps output that sleep's argument is equal to its pid in this case.
If you run su -c '/bin/sleep $$ >/tmp/sleep' then /bin/sleep's argument is different from its PID (it is equal to its ancestor's PID), i.e.:
su starts a shell process (e.g., /bin/sh)
the shell gets its own process id (PID) and substitutes it for $$
the shell double-forks and exec()s /bin/sleep.
The double fork indicates that the actual sequence of events might be different; e.g., su, rather than the shell, could orchestrate the forking or not forking (I don't know). It seems the double fork is there to make sure that the command won't get a controlling terminal.
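If you want to check this on your own system, here is a rough sketch (run as root; "someuser" is a placeholder, and the behaviour may vary by shell and su implementation):
su someuser -c '/bin/sleep $$' &
su someuser -c '/bin/sleep $$ >/tmp/sleep' &
sleep 1
ps -C sleep -o pid,ppid,args   # if a sleep's argument equals its own PID, the shell exec()ed it directly
# the sleeps run for thousands of seconds; clean up afterwards with: pkill -u someuser sleep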
command > file
This is not an atomic action, and is actually done by two processes:
one executes the command;
the other does the output redirection.
The above two actions cannot be done in one process.
Am I right?

Why doesn't nohup sh -c "..." store variable?

Why doesn't "nohup sh -c" stores variable?
$ nohup sh -c "RU=ru" &
[1] 17042
[1]+ Done nohup sh -c "RU=ru"
$ echo ${RU}
$ RU=ru
$ echo ${RU}
ru
How do I make it store the variable value so that I can use it in a loop later?
For now, it's not recording the value when I use RU=ru inside my bash loop, i.e.
nohup sh -c "RU=ru; for file in *.${RU}-en.${RU}; do some_command ${RU}.txt; done" &
It doesn't work within the sh -c "..." either; cat nohup.out outputs nothing for the echo:
$ nohup sh -c "FR=fr; echo ${FR}" &
[1] 17999
[1]+ Done nohup sh -c "FR=fr; echo ${FR}"
$ cat nohup.out
Why doesn't "nohup sh -c" stores variable?
Environment variables only live in a process, and potentially children of a process. When you run sh -c you are creating a new child process. Any environment variables in that child process cannot "move up" to the parent process. That's simply how shells and environment variables work.
It doesn't work within the sh -c "..." either; cat nohup.out outputs nothing for the echo.
The reason for this is that you are using double quotes. When you use double quotes, the shell does variable expansion before running the command. If you switch to single quotes, the variable won't be expanded until the shell command runs:
nohup sh -c 'FR=fr; echo ${FR}'
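A minimal illustration of the quoting difference (FR and the value "parent" are just example names for this sketch; FR is deliberately not exported):
FR=parent
sh -c "echo ${FR}"   # double quotes: the parent shell expands ${FR} before sh runs, so this prints "parent"
sh -c 'echo ${FR}'   # single quotes: the child shell expands ${FR}; it is unset there, so this prints an empty line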

$> bash script.sh ... does the forked bash process in turn create a sub-shell?

If I run:
$> bash script.sh
a fork-and-exec happens to run the bash binary. Does that process execute script.sh or does it create a sub-shell in turn in the same way that
$> ./script.sh
first creates a sub-shell to execute the script?
The bash process that runs bash script.sh executes the script directly, not as a second layer of fork and exec. Obviously, individual commands within the script are forked and executed separately, but not the script itself.
You could use ps to show that. For example, script.sh might contain:
tty
echo $$
sleep 20
You could run that and in another terminal window run ps -ft tty0 (if the tty command indicated tty0), and you'd see the shell in which you ran the bash script.sh command, the shell which is running script.sh and the sleep command.
Example
In ttys000:
$ bash script.sh
/dev/ttys000
65090
$
In ttys001:
$ ps -ft ttys000
UID PID PPID C STIME TTY TIME CMD
0 2422 2407 0 9Jul14 ttys000 0:00.13 login -pfl jleffler /bin/bash -c exec -la bash /bin/bash
199484 2428 2422 0 9Jul14 ttys000 0:00.56 -bash
199484 65090 2428 0 3:58PM ttys000 0:00.01 bash script.sh
199484 65092 65090 0 3:58PM ttys000 0:00.00 sleep 20
$
You can use pstree or ps -fax to look at the process tree. In your case, when bash is invoked (after a fork) with a script file as its argument, it does not need to fork a subshell for the script itself: running with a "command file" argument is one of its modes of operation (as opposed to -c or interactive mode).
BTW: you can also use exec sh script.sh to replace your current shell process with the new shell.
When you call a shell script without the source (or .) command, it will run in a subshell. This is the case for your second line. If you want to run the script in the current one, you would need to use . ./script.sh.
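A quick way to convince yourself of the difference (demo.sh is a throwaway name used only for this sketch):
cat > demo.sh <<'EOF'
echo "script pid: $$"
EOF
echo "interactive shell pid: $$"
bash demo.sh    # prints a different PID: a new bash process, and that process runs the script directly
. ./demo.sh     # prints the same PID: the script is read and executed by the current shell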

Bash script runs with two pids

When I run a bash script I get two entries in the ps list, one being a child of the other.
My script contains just one command
test.sh
sleep 20
pidof test.sh
2494 2493
And how can I get the parent PID?
When you run that script, two processes are created. The first one is the bash interpreter running your script. sleep, on the other hand, is a separate binary (often in /bin) and thus requires launching a new process. (The process naming seems to differ between systems; on my test system neither process was named test.sh, just bash and sleep.)
To get the parent process ID for one or more processes (by ID or name) you might use ps:
$ ps -p 6194 -o ppid=
6187
$ ps -p 6194,6748 -o ppid=
6187
6747
$ ps -C bash -o ppid=
6187
6747
6782
On a CentOS system, pidof returned only the parent process. To obtain the child(ren) you could use pstree:
$ pidof test.sh
22220
$ pstree -p 22220
mytest(22220)---sleep(22223)
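From inside the script itself you do not even need ps; a minimal sketch (both $$ and $PPID are standard shell parameters):
#!/bin/sh
echo "script pid:  $$"
echo "parent pid:  $PPID"
sleep 20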

executing shell command in background from script [duplicate]

How can I execute a shell command in the background from within a bash script, if the command is in a string?
For example:
#!/bin/bash
cmd="nohup mycommand";
other_cmd="nohup othercommand";
"$cmd &";
"$othercmd &";
this does not work -- how can I do this?
Leave off the quotes
$cmd &
$other_cmd &
eg:
nicholas@nick-win7 /tmp
$ cat test
#!/bin/bash
cmd="ls -la"
$cmd &
nicholas@nick-win7 /tmp
$ ./test
nicholas@nick-win7 /tmp
$ total 6
drwxrwxrwt+ 1 nicholas root 0 2010-09-10 20:44 .
drwxr-xr-x+ 1 nicholas root 4096 2010-09-10 14:40 ..
-rwxrwxrwx 1 nicholas None 35 2010-09-10 20:44 test
-rwxr-xr-x 1 nicholas None 41 2010-09-10 20:43 test~
Building off of ngoozeff's answer, if you want to make a command run completely in the background (i.e., if you want to hide its output and prevent it from being killed when you close its Terminal window), you can do this instead:
cmd="google-chrome";
"${cmd}" &>/dev/null & disown;
&>/dev/null sets the command’s stdout and stderr to /dev/null instead of inheriting them from the parent process.
& makes the shell run the command in the background.
disown removes the “current” job, last one stopped or put in the background, from under the shell’s job control.
In some shells you can also use &! instead of & disown; they both have the same effect. Bash doesn’t support &!, though.
Also, when a command is stored in a variable, it's more appropriate to use eval "${cmd}" rather than "${cmd}":
cmd="google-chrome";
eval "${cmd}" &>/dev/null & disown;
If you run this command directly in Terminal, it will show the PID of the process which the command starts. But inside of a shell script, no output will be shown.
Here's a function for it:
#!/bin/bash
# Run a command in the background.
_evalBg() {
eval "$#" &>/dev/null & disown;
}
cmd="google-chrome";
_evalBg "${cmd}";
Also, see: Running bash commands in the background properly
This works because it's a static variable.
You could do something much cooler like this:
filename="filename"
extension="txt"
for i in {1..20}; do
    eval "filename${i}=${filename}${i}.${extension}"
    touch filename${i}
    echo "this rox" > filename${i}
done
This code will create 20 files and dynamically set 20 variables. Of course you could use an array, but I'm just showing you the feature :). Note that you can use the variables $filename1, $filename2, $filename3... because they were created with the eval command. In this case I'm just creating files, but you could use this to build command arguments dynamically and then execute them in the background.
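For completeness, a rough sketch of the array-based version mentioned above (bash only; names are illustrative):
files=()
for i in {1..20}; do
    f="filename${i}.txt"
    files+=("$f")            # remember each name in an indexed array
    echo "this rox" > "$f"   # create the file
done
echo "${files[3]}"           # prints filename4.txt (bash arrays are zero-indexed)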
For example, if you have a start-up program named run.sh, to start it working in the background run the following command line:
./run.sh &>/dev/null &
