When using exec with &, the final command does not run - bash

It seems the code after if/fi is not running. Here is what I have:
I have a script, /my/scripts/dir/directoryPercentFull.sh:
directoryPercentFull="$(df | grep '/aDir/anotherDir' | grep -o '...%' | sed 's/%//g' | sed 's/ //g')"
if [ $directoryPercentFull -gt 90 ]
then
echo $directoryPercentFull
exec /someDir/someOtherDir/test01.sh &
exec /someDir/someOtherOtherDir/test02.sh &
exec /someDir/yetAnotherDir/test03.sh
fi
echo "Processing Done"
The scripts being called are:
/someDir/someOtherDir/test01.sh
#!/usr/bin/env bash
echo "inside test01.sh"
sleep 5
echo "leaving test01.sh"
/someDir/someOtherOtherDir/test02.sh
#!/usr/bin/env bash
echo "inside test02.sh"
sleep 5
echo "leaving test02.sh"
/someDir/yetAnotherDir/test03.sh
#!/usr/bin/env bash
echo "inside test03.sh"
sleep 5
echo "leaving test03.sh"
Running the script by cd-ing to /my/scripts/dir and then doing ./directoryPercentFull.sh gives:
OUTPUT:
93
inside test03.sh
inside test02.sh
inside test01.sh
leaving test03.sh
leaving test01.sh
leaving test02.sh
OUTPUT EXPECTED:
93
inside test01.sh
inside test02.sh
inside test03.sh
leaving test01.sh
leaving test02.sh
leaving test03.sh
Processing Done
The order of the echo commands is not that big of a deal, though if someone knows why they go 3,2,1, then 3,1,2, I wouldn't hate an explanation.
However, I am not getting that final Processing Done. Anyone have any clue why the final echo back in /my/scripts/dir/directoryPercentFull.sh does not occur? I have purposefully not placed an & after the last exec statement, as I don't want what is after the if/fi to run until all of it is finished processing.

/someDir/someOtherDir/test01.sh &
/someDir/someOtherOtherDir/test02.sh &
/someDir/yetAnotherDir/test03.sh
Get rid of all the execs. exec replaces the shell process with the given command, so that shell never gets to run another command. For the first two, the & means the exec happens in a background subshell, so only that subshell is replaced; but the last exec replaces the script's own shell, which is why the script never reaches the final echo.
The order of the echo commands is not that big of a deal, though if someone knows why they go 3,2,1, then 3,1,2, I wouldn't hate an explanation.
The printouts can come in any order. The three scripts run as parallel processes, so there is no telling in which order they will print.
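If the goal is to keep the three scripts running in parallel but still hold off the final echo until they all finish, bash's wait builtin does exactly that; a minimal sketch of the if body:

/someDir/someOtherDir/test01.sh &
/someDir/someOtherOtherDir/test02.sh &
/someDir/yetAnotherDir/test03.sh &
wait    # blocks until all three background jobs have exited
echo "Processing Done"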

Related

Script executes but fails to increment

So I have this shell script that I think should run a given number of times, sleep, then resume, and output the results to a log file:
#!/bin/bash
log=/path/to/file/info.log
a=$(COMMAND1 | cut -d : -f 2)
b=$(COMMAND2 | grep VALUE| cut -c 7,8)
for i in {1..4}
do
echo "Test" $i >> $log
date >> $log
echo $a >> $log
echo "$((-113 + (($b * 2)))) VALUE" >> $log
sleep 60
done
When I run ps -ef | grep scriptname.sh, it seems the script does run: it executes once, then the PID is gone as if the run has completed.
I have tested the script and know that it is running and capturing the data I want. But I do not understand why it's not incrementing, and I'm not sure why it's ending earlier than expected.
info.log output sample
Test {1..4}
DATE IN UTC
EXPECTED VALUE OF a
EXPECTED VALUE OF b
Note that the output is literally "Test {1..4}", not "Test 1", "Test 2", "Test 3", and so on, as I would expect.
I have run the script as ./scriptname.sh & and as /path/to/file/scriptname.sh &
I have read that there is a difference in running the script with sh and bash, though I don't fully understand what effect that would have on the script. I am not a software person at all.
I have tried to run the script with nohup to keep it running in the background if I close the terminal. I also thought the & in the command was supposed to keep the script running in the background. Still, it seems the script does not continue to run.
I previously asked this question and it was closed, citing that it was similar to a post about the difference between sh and bash...but that's not my main question.
Also, echo "$BASH_VERSION" returns nothing, a blank line; echo "$-" returns smi, and I have no idea what that means; but bash --version returns:
BusyBox v1.17.1 (2019-11-26 10:41:00 PST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
So my questions are:
Is running the script with sh done as ./scriptname.sh &, and running it with bash done as /path/to/file/scriptname.sh &? If so, what effect does that have on how the script code is processed? I do not fully understand the difference between sh and bash.
Why does the script not continue to run when I close the terminal? This is my big concern. I would like to run this script hourly for a set period of time, but every time I try something and come back, I get just one instance in the log.
Neither brace expansion nor seq is part of the POSIX specification, and your BusyBox shell (ash) does not do brace expansion: {1..4} stays a single literal word, so the loop body runs exactly once. Use a while loop.
log=/path/to/file/info.log
a=$(COMMAND1 | cut -d : -f 2)
b=$(COMMAND2 | grep VALUE| cut -c 7,8)
i=1
while [ "$i" -le 4 ]; do
printf 'Test %s\n' "$i"
date
printf '%s\n' "$a"
printf '%s\n' "$((-113 + (($b * 2)))) VALUE"
sleep 60
i=$((i+1))
done >> "$log"   # one redirection captures the whole loop's output
(I suspect that you want to move the assignments to a and b inside the loop as well; right now, you are simply writing the same values to the log on every iteration.)
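A quick way to see the difference at a prompt (assuming both bash and BusyBox ash are available): bash expands {1..4} into four separate words before the loop runs, while a POSIX shell keeps it as one literal word, so the loop body runs exactly once.

$ bash -c 'for i in {1..4}; do echo "Test $i"; done'
Test 1
Test 2
Test 3
Test 4
$ ash -c 'for i in {1..4}; do echo "Test $i"; done'
Test {1..4}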

Using while read, do Loop in bash script, to parse command line output

So I am trying to create a script that will wait for a certain string in the output from the command that's starting another script.
I am running into a problem where my script will not move past this line of code
$(source path/to/script/LOOPER >> /tmp/looplogger.txt)
I have tried almost every variation I can think of for this line
e.g. (./LOOPER& >> /tmp/looplogger.txt),
or bash /path/to/script/LOOPER 2>1& /tmp/looplogger.txt, etc.
For some reason I cannot get it to run in a subshell and have the rest of the script go about its day.
I am trying to run a script from another script, access its output, and parse it line by line until a certain string is found.
Then, once that string is found, my script would kill the other script (though I am aware that if it is sourced, killing it would terminate the parent script as well).
The script that is starting looper then trying to kill it-
#!/bin/bash
# deleting contents of .txt
echo "" > /tmp/looplogger.txt
#Code cannot get past this command
$(source "/usr/bin/gcti/LOOPER" >> /tmp/ifstester.txt)
while [[ $(tail -1 /tmp/looplogger.txt) != "Kill me" ]]; do
sleep 1
echo ' in loop ' >> /tmp/looplogger.txt
done >> /tmp/looplogger.txt
echo 'Out of loop' >> looplogger.txt
#This kill command works as intended
kill -9 $(ps -ef | grep LOOPER | grep -v grep | awk '{print $2}')
echo "Looper was killed" > /tmp/looplogger.txt
I have tried using while IFS= read -r as well for the above script, but I find its syntax a little confusing.
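For what it's worth, here is a minimal sketch of that pattern, assuming bash and reusing the LOOPER path from the script above. The core problem is that command substitution $(...) waits for the command inside it to exit, which LOOPER never does, so LOOPER has to be started with & instead:

#!/bin/bash
# Sketch only: start LOOPER in the background rather than inside $(...).
/usr/bin/gcti/LOOPER > /tmp/looplogger.txt 2>&1 &
looper_pid=$!            # remember the PID for a clean kill later

# Follow the log line by line until the marker appears.
while IFS= read -r line; do
    [ "$line" = "Kill me" ] && break
done < <(tail -f /tmp/looplogger.txt)
# (the tail left behind by the process substitution exits on its own the
# next time it tries to write into the now-closed pipe)

kill "$looper_pid"
echo "Looper was killed" > /tmp/looplogger.txt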
Looper Script -
./LOOPER
#!/bin/bash
# Script to test with scripts that kill & start processes
let i=0
# Infinite While Loop
while :
do
i=$((i+1))
until [ $i -gt 10 ]
do
echo "I am looping :)"
sleep 1
((i=i+1))
done
echo "Kill me"
sleep 1
done
Sorry for my very wordy question.

applescript blocks shell script cmd when writing to pipe

The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep ; then
echo "killing blocking cmd..."
kill -KILL $cpid
# non zero status to inform launch script of problem...
exit 1
fi
But if the sleep command (line 2) is swapped for the echo command (line 3), with the if statement adjusted accordingly, the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only blocks when nothing is reading from the pipe...
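That is standard FIFO behavior: opening a named pipe for writing blocks until some process opens it for reading. It is easy to reproduce in a terminal (assuming /tmp/apipe does not already exist):

mkfifo /tmp/apipe
echo "hello world" > /tmp/apipe   # blocks here until a reader appears,
                                  # e.g. `cat /tmp/apipe` in another shell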
OK, the following will do the trick. It basically kills the job using its jobid. Since there is only one, it's the current job %%.
I was lucky that I came across this answer or it would have driven me crazy :)
#!/bin/sh
echo $1 > $2 &
sleep 1
# Following is necessary. Seems to need it or
# job will not complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo "Taking too long. Killed..."
exit 1
fi
exit 0

Bash script: `exit 0` fails to exit

So I have this Bash script:
#!/bin/bash
PID=`ps -u ...`
if [ "$PID" = "" ]; then
echo $(date) Server off: not backing up
exit
else
echo "say Server backup in 10 seconds..." >> fifo
sleep 10
STARTTIME="$(date +%s)"
echo nosave >> fifo
echo savenow >> fifo
tail -n 3 -f server.log | while read line
do
if echo $line | grep -q 'save complete'; then
echo $(date) Backing up...
OF="./backups/backup $(date +%Y-%m-%d\ %H:%M:%S).tar.gz"
tar -czhf "$OF" data
echo autosave >> fifo
echo "$(date) Backup complete, resuming..."
echo "done"
exit 0
echo "done2"
fi
TIMEDIFF="$(($(date +%s)-STARTTIME))"
if ((TIMEDIFF > 70)); then
echo "Save took too long, canceling backup."
exit 1
fi
done
fi
Basically, the server takes input from a fifo and outputs to server.log. The fifo is used to send stop/start commands to the server for autosaves. At the end, once it receives the message from the server that the server has completed a save, it tar's the data directory and starts saves again.
It's at the exit 0 line that I'm having trouble. Everything executes fine, but I get this output:
srv:scripts $ ./backup.sh
Sun Nov 24 22:42:09 EST 2013 Backing up...
Sun Nov 24 22:42:10 EST 2013 Backup complete, resuming...
done
But it hangs there. Notice how "done" echoes but "done2" fails. Something is causing it to hang on exit 0.
ADDENDUM: Just to avoid confusion for people looking at this in the future, it hangs at the exit line and never returns to the command prompt. Not sure if I was clear enough in my original description.
Any thoughts? This is the entire script, there's nothing else going on and I'm calling it direct from bash.
Here's a smaller, self-contained example that exhibits the same behavior:
echo foo > file
tail -f file | while read; do exit; done
The problem is that each part of a pipeline runs in a subshell, so exit only exits the subshell running the while read loop, not the entire script.
The script then hangs until tail finds a new line, tries to write it, and discovers that the pipe is broken.
To fix it, you can replace
tail -n 3 -f server.log | while read line
do
...
done
with
while read line
do
...
done < <(tail -n 3 -f server.log)
By redirecting from a process substitution instead, the loop no longer runs in a subshell, so exit actually exits the entire script, and the script doesn't have to wait for tail to finish the way it would at the end of a pipeline.
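Applied to the smaller self-contained example above, the same fix looks like this, and the script now exits immediately:

echo foo > file
while read; do exit; done < <(tail -f file)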
But it hangs there. Notice how "done" echoes but "done2" fails.
done2 won't be printed at all since exit 0 has already ended your script with return code 0.
I don't know the details of bash subshells inside loops, but normally the appropriate way to leave a loop is the break command. In some cases that's not enough (you really need to exit the program), but refactoring the program may be the easiest (safest, most portable) way to solve that. It may also improve readability, because people don't expect programs to exit in the middle of a loop.

Getting exit code of last shell command in another script

I am trying to beef up my notify script. The way the script works is that I put it behind a long-running shell command, and then all sorts of notifications get invoked after the long-running command finishes.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long running script. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware that if I change it to work more like time, e.g. my_notify longrunning_command... my problem would be solved, but I actually like that I can tack it onto the end of a command, and I fear the complications of this second solution.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function in order to accomplish that. For a simple script like that it should be pretty easy to have it working in both zsh and bash. Just place the following in a file:
my_notify() {
echo "exit code: $?"
echo "PPID: $PPID"
}
Then source that file from your shell startup files. Note that since the function runs within your interactive shell, you may want to use $$ rather than $PPID.
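For example, with the function saved in a file (~/.my_notify.sh is just an arbitrary name here):

# In ~/.bashrc or ~/.zshrc:
. ~/.my_notify.sh

# Then, at the prompt, my_notify runs inside the current shell
# and sees the real exit status of the previous command:
ls nonexisting_file; my_notify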
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
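A minimal sketch of that approach (LAST_STATUS is an arbitrary name): a VAR=value prefix exports the variable to that one command only, so the separate my_notify process can read it from its environment.

ls nonexisting_file; LAST_STATUS=$? my_notify
# ...and inside my_notify:
echo "exit code: $LAST_STATUS"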
One method would be to use a heredoc (EOF tag) and a master script that creates your my_notify script.
#!/bin/bash
if [ -f my_notify ] ; then
rm -rf my_notify
fi
if [ -f my_temp ] ; then
rm -rf my_temp
fi
retval=`ls non_existent_file &> /dev/null ; echo $?`
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"
# Note: the EOF delimiter is unquoted so that $retval and $ppid are
# expanded now, while the file is written; with << 'EOF' they would be
# left literal and my_notify would print empty values.
cat >> my_notify << EOF
#!/bin/bash
echo "exit code: $retval"
echo "PPID: $ppid"
EOF
sh my_notify
You can refine this script for your purpose.
