I have a very simple Unix bash script that I am using to execute a command every second. It has the following form:
while : ; do
    cat /proc/`pidof iBrowser.bin`/smaps | awk -f ./myawkscript.awk >> $DIRPATH
    sleep 1
done
The script runs fine, but it won't stop! If I hit ctrl-C while the script is running, the process does not stop, and I get the following error:
cat: can't open '/proc//smaps': No such file or directory
Does anyone know how this can be avoided?
You should consider setting a trap to handle the signal.
To trap Ctrl-C, you'd define a handler, e.g.:
ctrl_c ()
{
    # Handler for the Control + C trap
    echo ""
    echo "Control + C Caught..."
    exit
}
And then state that you wish to trap it with that handler:
trap ctrl_c SIGINT
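Putting the two together with the loop from the question gives something like this (a sketch; myawkscript.awk and $DIRPATH are taken from the question as-is):
#!/bin/bash
ctrl_c ()
{
    echo ""
    echo "Control + C Caught..."
    exit
}
trap ctrl_c SIGINT

while : ; do
    cat /proc/$(pidof iBrowser.bin)/smaps | awk -f ./myawkscript.awk >> "$DIRPATH"
    sleep 1
done
Now Ctrl-C runs the handler and exits the script, instead of only killing whichever child (cat, awk, or sleep) happens to be running.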
Alternatively...
you could run the script in the background by appending &, e.g.
$ ./your_script.sh &
Which would present you with a job id in [square brackets]:
$ ./your_script.sh &
[1] 5183
(in this case 1). When you were done, you could terminate the process with
$ kill %1
Note that the percent sign indicates you are referencing a job, not a process id.
awk -f ./myawkscript.awk /proc/`pidof iBrowser.bin`/smaps >> $DIRPATH \
|| exit 1
will exit the script if the awk invocation fails, which happens when pidof fails and leaves an invalid /proc path. I've taken the liberty of removing your UUOC (useless use of cat).
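In context, the loop from the question becomes (a sketch):
while : ; do
    awk -f ./myawkscript.awk /proc/$(pidof iBrowser.bin)/smaps >> "$DIRPATH" \
        || exit 1
    sleep 1
done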
Related
I am currently using dash as my main shell.
I tried to write a little function that should imitate wait, but with some text.
Here's a minimal, working code:
#!/bin/dash
wait() {
    echo Waiting...
    local pid="${1}"; shift
    local delay=.250
    while kill -0 "${pid}" 2>/dev/null; do
        echo Still waiting...
        sleep "${delay}"
    done
    echo Resuming
}

main() {
    sleep 3 &
    wait %1
}
main
If you copy-paste it into an interactive dash shell, you can see that the code works just fine.
However, if you save it to a file and run it, it does not.
After some troubleshooting, I found that if I delete 2>/dev/null, I can see an error message, kill: No such process, whereas using command wait "${pid}" instead just waits for the process.
So for example:
#!/bin/dash
wait() {
    echo Waiting...
    local pid="${1}"; shift
    command wait "${pid}"
    echo Resuming
}

main() {
    sleep 3 &
    wait %1
}
main
works fine as a script file, too.
I am not sure where I am going wrong in this piece of code, and the things I tried didn't help.
Among other things, I tried to convert %1 to its pid, but jobs -p %1 in a subshell (such as var="$(jobs -p %1)") fails badly.
Any tip?
Job control is disabled in non-interactive shells. Enable it with set -m, or by appending -m to the shebang, and it'll work.
Example:
$ dash -c 'sleep 1 & kill %1 && echo success'
dash: 1: kill: No such process
$ dash -m -c 'sleep 1 & kill %1 && echo success'
success
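Applied to the first script, only one line changes (a sketch; everything else is the original code):
#!/bin/dash
set -m    # enable job control in this non-interactive shell

wait() {
    echo Waiting...
    local pid="${1}"; shift
    local delay=.250
    while kill -0 "${pid}" 2>/dev/null; do
        echo Still waiting...
        sleep "${delay}"
    done
    echo Resuming
}

main() {
    sleep 3 &
    wait %1
}
main
With job control enabled, %1 resolves to the background sleep, and the kill -0 poll works from a file as well.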
I have a complicated mechanism built into my bash environment that requires the execution of a couple scripts when the prompt is generated, but also when the user hits enter to begin processing a command. I'll give an oversimplified description:
The DEBUG trap does this in a fairly limited way: it fires every time a statement is executed.
trap 'echo $BASH_COMMAND' DEBUG # example
Unfortunately, this means that when I type this:
sleep 1; sleep 2; sleep 3
rather than processing a $BASH_COMMAND that contains the entire line, I get the three sleeps in three different traps. Worse yet:
sleep 1 | sleep 2 | sleep 3
fires all three traps as the pipeline is set up. Before sleep 1 even starts executing, the output might lead you to believe that sleep 3 is running.
I need a way to execute a script right at the beginning, processing the entire command, and I'd rather it not fire when the prompt command is run, but I can deal with that if I must.
Warning: there's a major problem with this solution. Commands with pipes (|) will finish executing the trap, but backgrounding a process during the trap will cause the processing of the command to freeze: you'll never get a prompt back without hitting ^C. The trap completes, but $PROMPT_COMMAND never runs. This problem persists even if you disown the process immediately after backgrounding it.
This wound up being a little more interesting than I expected:
LOGFILE=~/logfiles/$BASHPID

start_timer() {
    if [ ! -e "$LOGFILE" ]; then
        # You may have to adjust this to fit your history output format:
        CMD=$(history | tail -1 | tr -s " " | cut -f2-1000 -d" ")
        # timer2 keeps updating the status line with how long the cmd has been running
        timer2 -p "$PROMPT_BNW $CMD" -u -q & echo $! > "$LOGFILE"
    fi
}

stop_timer() {
    # Unfortunately, killing a process always prints that nasty confirmation line,
    # and you can't silence it by redirecting stdout and stderr to /dev/null, so you
    # have to disown the process before killing it.
    disown $(cat "$LOGFILE")
    kill -9 $(cat "$LOGFILE")
    rm -f "$LOGFILE"
}

trap 'start_timer' DEBUG
# stop_timer is meant to run when the prompt is redrawn, e.g. via PROMPT_COMMAND=stop_timer
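For reference, here is a minimal sketch of the flag-based variant of this idea, which fires only once per command line (the names preexec and at_prompt are mine, not from the code above):
at_prompt=1
preexec() {
    [ -n "$at_prompt" ] || return    # only act on the first DEBUG trap after a prompt
    unset at_prompt
    local line
    line=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')  # the full line just entered
    echo "about to run: $line"
}
trap 'preexec' DEBUG
PROMPT_COMMAND='at_prompt=1'         # re-arm when the prompt is drawn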
I am attempting to run a couple of commands in a bash script, but it hangs on one command, waiting for it to complete (which it won't). The script simply makes sure the process is running:
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
    echo "Process is running."
else
    echo "Process is not running... Starting..."
    python likebot.py
    echo $(ps aux | grep python | grep -v color | awk '{print $2}')
fi
Once it gets to the python command, the script hangs while that command executes; it's not until I hit Ctrl-C that it prints the pid. Is there any way I can have the script run these commands and then exit, without waiting for them to complete?
In general, if you want to execute a command and not wait for it, you can simply use & as the delimiter rather than ; or a newline. When you do, the pid of that process is available to the shell in the special variable $!. If you want to wait for that process to complete, you can use wait; if you do not wish to wait for it, simply omit the wait. In your case:
python likebot.py & # Start command asynchronously
echo $! # echo the pid of the most recent asynchronous process
Since it looks like likebot should always be running, you might want to consider nohup as well: with a bare &, the job is still a child of your login process and will die if that dies.
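For instance, a sketch (likebot.py is from the question; the log path is an assumption):
nohup python likebot.py >> likebot.log 2>&1 &   # survives the login shell exiting
echo $!                                         # pid of the detached process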
I'm learning Bash for a Unix class, and I'm trying to figure out how to run a script, then run a second script while the first is running and have the two interact. To clarify, the scripts look like this:
#!/bin/bash
num = 1
trap exit 0 SIGINT SIGTERM
trap "{ echo $num ; num++; }" SIGUSR1
while :
do
    sleep 2
done
and the second one:
#!/bin/bash
if ps | grep "$1" > /dev/null
then
    kill -SIGUSR1 $1
else
    echo "Process doesn't exist"
fi
exit 0
In case the code isn't correct, the general idea is for the first script to loop until it receives a SIGINT or SIGTERM, and to echo and increment a number whenever it receives a SIGUSR1. The second script takes a pid as an argument, checks that it exists, and sends a SIGUSR1 to the given process. The problem is that when I run the first script, I can't do anything unless I move it to the background with Ctrl-Z, but when it's there it doesn't seem to respond to any signal except a kill signal. Any ideas on how to make this work?
You can use mycommand & to run a script in the background. Ctrl-Z stops the script, but you can then use bg to let it run in the background. In either case, you can use fg to bring it to the foreground again.
Also note that you can't have spaces around the = in assignments, and you can use let num++ to increment num. You should also single-quote the command in trap, to prevent $num from expanding when the trap is defined rather than when it fires.
All in all:
#!/bin/bash
num=1
trap exit 0 SIGINT SIGTERM
trap '{ echo $num ; let num++; }' SIGUSR1
while :
do
    sleep 2
done
Finally, you can more easily check whether a pid exists by using kill -0 pid, or by just attempting to send it SIGUSR1 and checking the result; this avoids grep "123" matching the longer pid "1234" and the like.
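For example, the second script could become (a sketch):
#!/bin/bash
if kill -0 "$1" 2>/dev/null
then
    kill -SIGUSR1 "$1"
else
    echo "Process doesn't exist"
fi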
You need to make the first script run in the background. When you press Ctrl+Z it is suspended. Then you can type "bg" to make it run in the background (it will stop again if it tries to read from standard input, to allow you to switch back to it with the "fg" command).
Another way is to start script1 already in the background like this:
$ ./script1 &
The ampersand starts a job in the background and returns you to the prompt immediately.
Look in the bash man page under "JOB CONTROL" for more information on how this works. The key commands for dealing with jobs from an interactive shell are "jobs", "fg", and "bg".
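For example (the pid and job listing are illustrative):
$ ./script1 &
[1] 12345
$ jobs
[1]+  Running                 ./script1 &
$ fg %1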
How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. ($! expands to the pid of the most recent background process.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
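A self-contained sketch of the pattern:
#!/bin/bash
sleep 100 &               # some background job
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null   # reap it here; the Terminated message goes to /dev/null
echo "done, no message"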
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
See notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make the redirection temporary by running the script in a subshell, which leaves the original environment alone:
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this: the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT: For a more appropriate answer, check the one given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
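A sketch of that idea:
sleep 100 &
disown $!    # remove the job from the shell's job table
kill $!      # the shell no longer reports its termination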
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait until the trap is installed
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdout/stderr
sleep 1 # wait for the kill to take effect
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here, in parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
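One way to get that pid back (a sketch; the redirect matters, since the command substitution would otherwise wait for the background job's stdout to close):
pid=$(set +m; sleep 30 >/dev/null 2>&1 & echo $!)
kill "$pid"    # later: terminates it with no job-end message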
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only be silently killed by sending SIGINT (what Ctrl-C sends) instead of the default SIGTERM.
disown did exactly the right thing for me; the exec 3>&2 approach is risky for a lot of reasons, and set +bm didn't seem to work inside a script, only at the command prompt.
I had success with adding 'jobs 2>&1 >/dev/null' to the script; I'm not certain it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
(Note the semicolon before the closing brace; bash requires it.)
Advantage? You can use any signal. For example:
{ kill -9 $PID; } 2>/dev/null