Is there any difference between "sh -c 'some command'" and directly running some command? - shell

Let's take the echo command; we can run it in two ways:
# by 1
echo 'hello'
# or by 2
sh -c "echo 'hello'"
Is there any difference between the two ways? By the way, I see that way 2 is very popular in YAML config files.
- name: init-mydb
image: busybox:1.28
command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']

The first way uses an inherited command interpreter, e.g. the /bin/bash running your terminal; the second way executes sh (a.k.a. the Bourne shell) as the interpreter and instructs it (via -c) to run the given string.
sh, ksh, csh and bash are all shell interpreters, and they provide features that are not always compatible with one another. So if you don't know the environment your program will run in, it is best to specify the interpreter you want; that is less error-prone.
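As an illustration of such incompatibilities (assuming /bin/sh is a strictly POSIX shell such as dash, as on Debian/Ubuntu), brace expansion is a bash feature that plain sh does not perform:

```shell
# Brace expansion works in bash:
bash -c 'echo {1..3}'   # prints: 1 2 3

# Under a strictly POSIX /bin/sh (e.g. dash) the braces are literal:
sh -c 'echo {1..3}'     # may print: {1..3}
```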

This is a single command:
foo 1 2 3
So is this
sh -c 'foo 1 2 3'
This is not a single command (but rather a pair of commands separated by a ;)
foo; bar
but this is
sh -c "foo; bar"
This does not specify a command using the name of an executable file
for x in 1 2 3; do echo "$x"; done
but this does
sh -c 'for x in 1 2 3; do echo "$x"; done'
sh -c is basically a way to pass an arbitrary shell script to a command as a single argument, rather than executing it from a file.
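A sketch of how sh -c interacts with any additional arguments (per POSIX, the first operand after the command string becomes $0 of the script, and the rest become $1, $2, ...):

```shell
# 'myscript' fills the $0 slot; 'foo' and 'bar' become $1 and $2:
sh -c 'echo "$0 received $# args: $1 $2"' myscript foo bar
# prints: myscript received 2 args: foo bar
```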

Related

Why can't "bash -c" receive the full list of arguments?

I have the following two scripts:
test.sh:
echo "start"
echo $@
echo "end"
run.sh:
echo "parameters from user:"
echo $@
echo "start call test.sh:"
bash -c "./test.sh $@"
Executing the above run.sh:
$ ./run.sh 1 2
parameters from user:
1 2
start call test.sh:
start
1
end
You can see that although I pass 2 arguments to run.sh, test.sh receives only the first one.
But if I change run.sh to the following, which just drops bash -c:
echo "parameters from user:"
echo $@
echo "start call test.sh:"
./test.sh $@
The behavior becomes as expected: test.sh receives 2 arguments:
$ ./run.sh 1 2
parameters from user:
1 2
start call test.sh:
start
1 2
end
Question:
For some reason I have to use bash -c in my full scenario, so could you kindly tell me what's wrong here? How can I fix it?
It is because the quoting of the arguments is in the wrong place. When you run a sequence of commands inside bash -c, think of it as a full shell script in itself, to which you need to pass arguments accordingly. From the bash manual:
If Bash is started with the -c option (see Invoking Bash), then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the filename used to invoke Bash, as given by argument zero.
But notice your command below:
bash -c "./test.sh $@"
Your expectation was to pass the arguments to test.sh inside '..', but $@ inside double quotes is expanded prematurely by the calling shell: each positional parameter becomes a separate word, so only the first argument ends up inside the -c string, and the rest become extra arguments to bash itself.
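The splitting can be seen directly with a small probe (my addition, not from the original answer), using printf to show each resulting word:

```shell
set -- 1 2
# "$@" inside a double-quoted string expands each positional parameter
# as a separate word, splitting the string around it:
printf '<%s> ' "./test.sh $@"; echo
# prints: <./test.sh 1> <2>
```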
But even when you fix that by using single quotes as below, it still can't work. Remember, the contents passed to -c are evaluated in their own shell context and need arguments passed explicitly:
set -- 1 2
bash -c 'echo $@' # Both cases still don't work, as the script
bash -c 'echo "$@"' # inside '-c' is still not passed any arguments
To fix the above, you need to pass the arguments explicitly to the script inside -c, as below. The _ (underscore) is just a placeholder; it fills the $0 slot of the -c script. More at Bash Variables in the manual:
set -- 1 2
bash -c 'printf "[%s]\n" "$@"' _ "$@"
[1]
[2]
So to fix your script, pass the arguments in run.sh as:
bash -c './test.sh "$@"' _ "$@"
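Putting it together, a minimal runnable version of the fixed pair (same file names as in the question, with #!/bin/sh shebangs added) might look like:

```shell
cat > test.sh <<'EOF'
#!/bin/sh
echo "start"
echo "$@"
echo "end"
EOF

cat > run.sh <<'EOF'
#!/bin/sh
echo "parameters from user:"
echo "$@"
echo "start call test.sh:"
# '_' fills $0 of the -c script; "$@" forwards all arguments intact
bash -c './test.sh "$@"' _ "$@"
EOF

chmod +x test.sh run.sh
./run.sh 1 2
# prints:
# parameters from user:
# 1 2
# start call test.sh:
# start
# 1 2
# end
```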
Besides the accepted answer, I found another solution just now. If I add -x when calling run.sh, I see the following:
$ bash -x ./run.sh 1 2
+ echo 'parameters from user:'
parameters from user:
+ echo 1 2
1 2
+ echo 'start call test.sh:'
start call test.sh:
+ bash -c './test.sh 1' 2
start
1
end
So it looks like bash -c "./test.sh $@" is interpreted as bash -c './test.sh 1' 2.
Inspired by this, I tried using $* in place of $@, which joins all the parameters into the single -c command string, and the following also works well:
run.sh:
echo "parameters from user:"
echo $*
echo "start call test.sh:"
bash -c "./test.sh $*"
Execution:
$ bash -x ./run.sh 1 2
+ echo 'parameters from user:'
parameters from user:
+ echo 1 2
1 2
+ echo 'start call test.sh:'
start call test.sh:
+ bash -c './test.sh 1 2'
start
1 2
end
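One caveat with the $* variant (my addition, not from the answers above): because the parameters are flattened into the command string and re-parsed by the inner shell, arguments containing spaces get re-split, while the explicit _ "$@" form preserves them:

```shell
set -- "a b" c

# $* flattens everything into one string that the inner bash re-parses:
bash -c "printf '<%s> ' $*"; echo
# prints: <a> <b> <c>

# Passing "$@" explicitly keeps each argument intact:
bash -c 'printf "<%s> " "$@"' _ "$@"; echo
# prints: <a b> <c>
```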

Assign the result of a mathematical calculation to a variable without a subshell

My question is twofold.
First:
Is it possible to achieve this without using a subshell?
FOO=$((6-5))
or this?
BAR=`echo "$FOO/100" | bc -l`
If I understand the second one correctly, I'm creating 2 subshells by using ` and |
Second
Does creating/using subshells for this kind of stuff impact the overall performance of a script?
You can check the count of subprocesses allocated with this simple line:
bash -c 'echo $$'
It creates a new shell and outputs the current process id.
Since Linux assigns process ids sequentially, we can use this "hack" to detect how many processes were started between two commands. (Some processes may be started in the background by cron or at, so check the results multiple times.)
This command itself also starts a process, so if you run it multiple times you will see an increasing number. To get the real count of processes started in between, you must subtract 1.
So let's start checking:
$ bash -c 'echo $$'
4240
$ FOO=$((6-5))
$ bash -c 'echo $$'
4241
4241 - 4240 - 1 = 0. No subprocess was started.
$ FOO=1111
$ bash -c 'echo $$'
4244
$ BAR=`echo "$FOO/100" | bc -l`
$ bash -c 'echo $$'
4248
4248 - 4244 - 1 = 3. So 3 processes were started.
If we use a here-string instead, removing the useless echo:
$ bash -c 'echo $$'
4256
$ BAR=`bc -l <<< "$FOO/100"`
$ bash -c 'echo $$'
4259
4259 - 4256 - 1 = 2. Now 2 subprocesses were started. So it seems the echo side implicitly allocated an extra shell (each side of a pipeline runs in its own process).
backticks allocate a new shell - a new process to read the output
bc allocates a new process
This variant will create two subprocesses too:
read BAR < <(bc -l <<< "$FOO / 100")
read is a bash builtin - it does not fork a subprocess and executes in the same shell
<( ) (process substitution) creates a shell - a subprocess
bc creates a subprocess
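In bash specifically (this is a bash extension, not POSIX), there is a more direct probe than comparing sequential PIDs: $BASHPID always reports the current process, while $$ keeps the PID of the main shell even inside subshells:

```shell
echo "main:     $$ $BASHPID"       # the two PIDs match
( echo "subshell: $$ $BASHPID" )   # $BASHPID now differs from $$
```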
One way to see that $(( ... )) does not invoke a subshell is to modify the value of a variable inside the construct, and see that the change is visible in the current shell.
$ count=5
$ : $(( count++ ))
$ echo $count
6
If $(( ... )) created a subshell, the echo in the example above would still print 5.
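The contrast with an explicit subshell, sketched:

```shell
count=5
: $(( count++ ))       # arithmetic expansion runs in the current shell
( : $(( count++ )) )   # this increment happens in a subshell and is lost
echo "$count"          # prints: 6
```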

from csh to bash and re-source the same file

I have a bash file that needs to be sourced. Users might be running csh without knowing it (the default configuration), so I wanted to switch the shell to bash but still source the file, as that was the user's intention.
There is a lot of help around this, and the resulting code would be:
#!/bin/csh (AND bash!)
[ "$?shell" = "1" ] && bash --login -c 'source this_file; bash' && exit
...
Everything works as expected except that the sourced file this_file must be hard-coded. In csh, $_ contains source this_file, since that was the command that started sourcing the script, but there is no way I can pass it to the bash command.
This means:
If I use:
... && set this_file=($_) && bash -c "$this_file; bash" && ...
bash complains that the parentheses are wrong (this happens the second time around, when bash is started and tries to source this_file).
If I use:
... && bash -c ""$_"; bash" && ...
bash gets a broken command that doesn't work either: bash -c "source" "this_file" "; bash"
If I use:
... && bash --login -c "$_; bash" && ...
csh gets a broken command: `Unmatched ".
I can't find a way to use $_ with a syntax accepted by bash that passes the value as a single command (i.e. bash -c "source this_file; bash")
This is the test:
cat >a.sh<<'EOF'
[ "$?shell" = "1" ] && bash --login -c 'source a.sh; bash' && exit
a=1
EOF
And then I expect this to work:
$ csh -c 'source a.sh'
$ echo $a
1
I'm pretty sure it used to work... I'm trying to find out why it doesn't now. (I solved this problem using the tcl modules package, but I'll give this a try)
It is possible to pass arguments to a bash -c 'cmd' construct, e.g. bash -c 'echo $@' arg0 1 2 3 4 5!
#!/bin/csh (AND bash!)
#[ "$?shell" = "1" ] && bash --login -c 'source this_file; bash' && exit
[ "$?shell" = "1" ] && bash --login -c '${@}; bash' arg0 $_ && exit

Subshells and PIDs: Why do $$ and \$\$ sometimes match under nested sh -c?

I know this is an artificially complicated example, but why are both PIDs the same in the first line, while (as expected, to me at least) the two other lines yield different PIDs?
$ sh -c 'sh -c "echo $$ \$\$"'
4500 4500
$ sh -c 'sh -c "echo $$ \$\$"; true'
4596 5060
$ sh -c 'true; sh -c "echo $$ \$\$"'
4728 2868
Thanks!
For me in bash 4.1.5, the output of first line is:
sh -c 'sh -c "echo $$ \$\$"'
4063 4064
as expected - the values are different.
Also tested on ash, sh, and zsh.
It must be some tricky optimization.
Update:
In bash 3.2 there is an "ONESHOT" feature;
see the comment in shell.c:1243:
#if defined (ONESHOT)
/* Run one command, given as the argument to the -c option. Tell
parse_and_execute not to fork for a simple command. */
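A sketch of what that optimization amounts to: when the -c string is a single simple command, the shell can exec it directly instead of forking, so the inner shell inherits the outer shell's PID. The same effect can be reproduced explicitly with exec:

```shell
# exec replaces the current shell process, so the PID is preserved
# and the inner shell's $$ equals the outer shell's $$:
bash -c 'echo "outer $$"; exec bash -c "echo inner \$\$"'
# both lines report the same PID
```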

Using bash shell inside Matlab

I'm trying to put a large set of bash commands into a MATLAB script and manage my variables (file paths, parameters, etc.) from there. This is also needed because the workflow requires manual intervention at certain steps, and I would like to use the step debugger for it.
The problem is, I don't understand how MATLAB interfaces with the bash shell.
I can't do system('source .bash_profile') to define my bash variables. Similarly, I can't define them by hand and read them back either; e.g. system('export var=somepath') followed by system('echo $var') returns nothing, since each system() call spawns a fresh shell.
What is the correct way of defining bash variables from inside MATLAB? How can I construct a workflow of commands which uses the variables I defined as well as those in my .bash_profile?
If all you need to do is set environment variables, do this in MATLAB:
>> setenv('var','somepath')
>> system('echo $var')
Invoke Bash as a login shell to get your ~/.bash_profile sourced and use the -c option to execute a group of shell commands in one go.
# in Terminal.app
man bash | less -p 'the --login option'
man bash | less -p '-c string'
echo 'export profilevar=myProfileVar' >> ~/.bash_profile
# test in Terminal.app
/bin/bash --login -c '
echo "$0"
echo "$3"
echo "$#"
export var=somepath
echo "$var"
echo "$profilevar"
ps
export | nl
' zero 1 2 3 4 5
# in Matlab
cmd=sprintf('/bin/bash --login -c ''echo "$profilevar"; ps''');
[r,s]=system(cmd);
disp(s);
