I'm having trouble with saving the PID into a variable when I'm using bash -c. For example:
bash -c "PID=$$; echo $PID"
In this case the output is empty. How can I save the child PID (the PID of the bash started by the command inside the double quotes) in the variable PID?
Just use single quotes; otherwise your expression is evaluated inside your current command line (too soon), not in the child bash command. PID isn't defined yet at that moment, so you're actually passing
bash -c "PID=4353; echo"
(where 4353 is the pid of the current bash process)
Someone noted that it's not clear whether you want to pass the parent PID or the child PID.
To pass the parent PID, fix it like this (only the part within double quotes is evaluated before the child bash process is called):
bash -c "PID=$$; "'echo $PID'
To pass the child PID, fix it like this (nothing is evaluated in the current shell; it's the same trick used for awk scripts):
bash -c 'PID=$$; echo $PID'
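For example, in a hypothetical session (the PID values are made up and will differ on your machine):
$ echo $$
1900
$ bash -c "PID=$$; "'echo $PID'
1900
$ bash -c 'PID=$$; echo $PID'
2345
The first bash -c prints the parent's PID, because $$ was substituted before the child started; the second prints the child bash's own PID.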
set -x is useful for debugging: it shows the actual command you end up running, prefixed with +:
$ set -x
$ bash -c "PID=$$; echo $PID"
+ bash -c 'PID=1900; echo '
And indeed, in that command you would expect empty output. This happens because $$ and $PID are substituted before bash is called.
To avoid this, you can single quote the string or escape the "$"s:
$ set -x
$ bash -c 'PID=$$; echo $PID'
+ bash -c 'PID=$$; echo $PID'
1925
$ bash -c "PID=\$\$; echo \$PID"
+ bash -c 'PID=$$; echo $PID'
1929
I am migrating a script from plain sh to bash. The script originally looked like this:
#!/bin/sh
... a bunch of setup ...
exec "$#"
When I run the script via:
./my_script kill -l
I get a list of available signals:
HUP INT QUIT ILL TRAP ABRT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT
CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS
However, I want to use the bash signal names, so I thought I could simply:
#!/bin/bash
exec bash -l "$@"
The problem is now kill is not recognized:
/bin/kill: /bin/kill: cannot execute binary file
Really, my script is just a wrapper around another process, and I need to make sure a kill -SIGTERM signal can be sent to it.
You need to add the -c option. Otherwise (see the ARGUMENTS section of the bash man page) "...the first argument is assumed to be the name of a file containing shell commands."
I.e.:
exec bash -lc "$*"
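For example, a minimal sketch of the wrapper (the setup steps are placeholders standing in for whatever the original script does):
#!/bin/bash
# ... a bunch of setup ...
# Join all arguments into one command string and run it in a login bash,
# so "kill" resolves to bash's builtin and understands bash's signal names.
exec bash -lc "$*"
With that, ./my_script kill -l should print bash's signal list.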
You are telling bash to run a file which it expects to be a shell script, but which turns out to be a binary executable file.
Instead of this:
#!/bin/bash
exec bash -l "$@"
Use this:
#!/bin/bash
exec bash -c "$1"
Is there a specific reason you need the -l option to run bash as a "login" shell? If not, just use the -c option to run the string argument.
Updated to use $1 instead of $@, as it is more appropriate for a string argument, as @chepner commented.
This also requires you to pass the command as a single string, not as a reference to the binary.
Instead of this:
./my_script kill -l
Do this:
./my_script "kill -l"
I'm trying to execute a series of bash commands using /bin/bash -l -c as follows:
/bin/bash -l -c "cmd1 && cmd2 && cmd3..."
What I notice is that if cmd1 happens to export an environment variable, it is not seen by cmd2 and so on. If I run the same concatenated commands without the /bin/bash -l -c option, it just runs fine.
For example, this does not print the value of MYVAR:
$/bin/bash -l -c "export MYVAR="myvar" && echo $MYVAR"
whereas, the following works as expected
$export MYVAR="myvar" && echo $MYVAR
myvar
I can't figure out why using /bin/bash -l -c won't work here. Are there any suggestions on how to make this work using /bin/bash -l -c?
Variables are exported to child processes; a parent process's environment can't be changed by a child process, so consecutive commands (forked sub-processes) can't see variables exported by a preceding command. Some commands are special because they don't fork a new process but run inside the current shell process: these are builtin commands, and only builtins can change the current process's environment. export is a builtin. For more information about builtins, see man bash and search for /^SHELL BUILTIN.
Apart from that, expansion is done when the command line is parsed:
/bin/bash -l -c "export MYVAR="myvar" && echo $MYVAR"
is one command (note that the nested quotes are not doing what you expect: they close and reopen the quoted string, so myvar is not actually quoted and may be subject to word splitting). So $MYVAR is expanded to its current value before it is ever assigned.
export MYVAR="myvar" && echo $MYVAR
are two commands, because here && is bash syntax (not a literal "&&" inside quotes), so $MYVAR is expanded after it has been assigned.
To give $ a literal meaning (so it is not expanded by the current shell), use single quotes or escape it with a backslash:
/bin/bash -l -c 'export MYVAR="myvar" && echo "$MYVAR"'
or
/bin/bash -l -c "export MYVAR=\"myvar\" && echo \"\$MYVAR\""
Is there a way I can find the process name of a bash script from the shell script that was used to invoke it? Or can I set the process name of a bash script to something such as
-myprocess
(I have looked into argv[0], but I am not clear about it)
so when I use
ps -ef | grep -c '[M]yprocess'
I get all the instances of myprocess, and only those?
To obtain the command name of the current script that is running, use:
ps -q $$ -o comm=
To get process information on all running scripts that have the same name as the current script, use:
ps -C "$(ps -q $$ -o comm=)"
To find just the process IDs of all scripts currently being run that have the same name as the current script, use:
pgrep "$(ps -q $$ -o comm=)"
How it works
$$ is the process ID of the script that is being run.
The option -q $$ tells ps to report on only process ID $$.
The option -o comm= tells ps to omit the header and the usual columns and instead print just the command name.
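As an illustration, inside a script named myscript.sh (the name is made up; note that comm values may be truncated to 15 characters by the kernel), you might write:
#!/bin/bash
# Print this script's own command name and the PIDs of all scripts with that name.
echo "I am: $(ps -q $$ -o comm=)"
echo "Instances: $(pgrep "$(ps -q $$ -o comm=)")"
Running ./myscript.sh would then print something like "I am: myscript.sh" followed by the matching process IDs.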
The parent process id can be obtained from $PPID on bash and ksh. We can read the fields from ps into an array.
For bash you can try this. The problem with ps is that many options are non-standard, so I have kept that as generic as possible:
#!/bin/bash
# Read each line of the ps output below, splitting it into an array of fields.
while read -a fields
do
    # Skip the header line; only the line whose first field is the parent PID matches.
    if [[ ${fields[0]} == $PPID ]]
    then
        echo "Shell: ${fields[3]}"
        echo "Command: ${fields[4]}"
    fi
done < <(ps -p $PPID)
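If the script was started from another script, say caller.sh (a made-up name), the output might look like:
Shell: bash
Command: ./caller.sh
When the parent is an interactive shell, the Command field may be empty, since ps then shows only the shell name.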
You have tagged both bash and ksh, but they have different syntax rules. To read into an array, bash uses -a but ksh uses -A, so for Korn shell you would need to change the read line (and the #! line):
while read -A fields
Not sure what you mean by that, but let's go with an example script b.sh.
#!/usr/local/bin/bash
echo "My name is $0, and I am running under $SHELL as the shell."
Running the script will give you:
$ bash b.sh
My name is b.sh, and I am running under /usr/local/bin/bash as the shell.
For more check this answer: HOWTO: Detect bash from shell script
I'm attempting to run a command in a shell script and set a variable with the resulting process id. Stripped down to the relevant parts, I have:
#!/bin/bash
USER=myAppUser
PATH_TO_APP=/opt/folder/subfolder
PID=`su - $USER -c 'nohup $PATH_TO_APP/myapp --option > /dev/null 2>&1 & echo $!'`;
echo $PID
I understand that I need to use double quotes around the nohup command for variable substitution, but if I do, PID is not being set. If I use double quotes and hardcode PATH_TO_APP, it will execute and set PID. I'm guessing it's a problem with the combination of the backticks and single/double quotes, but I'm not sure what the solution is.
Put the command in double quotes so that $PATH_TO_APP will be substituted, and then escape the $ in $! so it will be included literally in the argument to su, and then processed by the subshell.
PID=$(su - $USER -c "nohup $PATH_TO_APP/myapp --option > /dev/null 2>&1 & echo \$!")
echo $PID
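Once the PID is captured, it can be used later in the script, for example to check whether the program is still running (a sketch; myapp is the example program from the question):
if ps -p "$PID" > /dev/null 2>&1; then
    echo "myapp is still running with PID $PID"
    # kill "$PID"    # uncomment to stop it
fi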
Why doesn't "nohup sh -c" store the variable?
$ nohup sh -c "RU=ru" &
[1] 17042
[1]+ Done nohup sh -c "RU=ru"
$ echo ${RU}
$ RU=ru
$ echo ${RU}
ru
How do I make it store the variable value so that I can use it in a loop later?
For now, it's not recording the value when I use RU=ru inside my bash loop, i.e.
nohup sh -c "RU=ru; for file in *.${RU}-en.${RU}; do some_command ${RU}.txt; done" &
It doesn't work within the sh -c "..." either; cat nohup.out outputs nothing for the echo:
$ nohup sh -c "FR=fr; echo ${FR}" &
[1] 17999
[1]+ Done nohup sh -c "FR=fr; echo ${FR}"
$ cat nohup.out
Why doesn't "nohup sh -c" store the variable?
Environment variables only live in a process, and potentially children of a process. When you run sh -c you are creating a new child process. Any environment variables in that child process cannot "move up" to the parent process. That's simply how shells and environment variables work.
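You can see the same thing without nohup at all (a hypothetical session):
$ sh -c 'RU=ru; echo "inside: $RU"'
inside: ru
$ echo "outside: ${RU:-unset}"
outside: unset
The assignment is visible inside the child sh, but the parent shell never sees it.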
It doesn't work within the sh -c "..." either; cat nohup.out outputs
nothing for the echo"
The reason for this is that you are using double quotes. When you use double quotes, the shell does variable expansion before running the command. If you switch to single quotes, the variable won't be expanded until the shell command runs:
nohup sh -c 'FR=fr; echo ${FR}'
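With single quotes, ${FR} is expanded by the child sh after the assignment, so nohup.out should now contain the value (a hypothetical session; the job number and PID will differ):
$ nohup sh -c 'FR=fr; echo ${FR}' &
[1] 18042
$ cat nohup.out
fr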