"allowed" operations in bash read while loop - bash

I have a file text.txt which contains two lines:
first line
second line
I am trying to loop over it in bash using the following loop:
while read -r LINE || [[ -n "$LINE" ]]; do
    # sed -i 'some command' somefile
    echo "echo something"
    echo "$LINE"
    sh call_other_script.sh
    if ! sh some_complex_script.sh ; then
        echo "operation failed"
    fi
done < text.txt
When some_complex_script.sh is called, only the first line is processed; when it is commented out, both lines are processed.
some_complex_script.sh does all kinds of things, like starting processes, running sqlplus, starting WildFly etc.
./bin/call_some_script.sh | tee $SOME_LOGFILE &
wait
...
sqlplus $ORACLE_USER/$ORACLE_PWD@$DB <<EOF
whenever sqlerror exit 1;
whenever oserror exit 2;
INSERT INTO TABLE ....
COMMIT;
quit;
EOF
...
nohup $SERVER_DIR/bin/standalone.sh -c $WILDFLY_PROFILE -u 230.0.0.4 >/dev/null 2>&1 &
My question is whether there are operations that should not be called from some_complex_script.sh inside such a loop, and which may break that loop (it may take as long as 10 minutes to finish; is this a good idea at all?).
The script is called using Jenkins and the Publish over SSH Plugin. When some_complex_script.sh is called on its own, there are no problems.

You should close or redirect stdin for the other commands you run, to stop them reading from the file. For example:
sh call_other_script.sh </dev/null
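The same goes for anything else in the loop body that might read from standard input. A minimal sketch of the corrected loop (my own illustration, assuming the two scripts should not consume the input file):

while read -r LINE || [[ -n "$LINE" ]]; do
    echo "$LINE"
    sh call_other_script.sh </dev/null
    # redirect stdin so the script cannot swallow the remaining lines of text.txt
    if ! sh some_complex_script.sh </dev/null; then
        echo "operation failed"
    fi
done < text.txt

Alternatively, feed the loop from a different file descriptor (done 3< text.txt together with read -r -u 3 LINE), so that stdin stays free for the commands inside the loop.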

Related

How to monitor the stdout of a command with a timer?

I'd like to know when an application hasn't printed a line to stdout for N seconds.
Here is a reproducible example:
#!/bin/bash
dmesg -w | {
    while IFS= read -t 3 -r line
    do
        echo "$line"
    done
    echo "NO NEW LINE"
}
echo "END"
I can see NO NEW LINE, but the pipe doesn't stop and bash doesn't continue; END is never displayed.
How to exit from the braces' code?
Source: https://unix.stackexchange.com/questions/117501/in-bash-script-how-to-capture-stdout-line-by-line
Not all commands exit when they can't write to output or receive SIGPIPE, and they will not exit until they actually notice they can't write to output. Instead, run the command in the background. If the intention is not to wait on the process, in bash you could just use process substitution:
{
    while IFS= read -t 3 -r line; do
        printf "%s\n" "$line"
    done
    echo "end"
} < <(dmesg -w)
You could also use a coprocess, or just run the command in the background with a pipe and kill it when you are done with it.
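For completeness, here is a minimal coprocess sketch of my own (not from the original answer): start dmesg -w as a coprocess, read from its file descriptor with a timeout, and kill it once the timeout fires:

#!/bin/bash
coproc dmesg -w                     # coprocess output is on ${COPROC[0]}
while IFS= read -t 3 -r -u "${COPROC[0]}" line; do
    printf '%s\n' "$line"
done
echo "NO NEW LINE"
kill "$COPROC_PID"                  # stop the background dmesg
echo "END"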

Using while read, do Loop in bash script, to parse command line output

So I am trying to create a script that will wait for a certain string in the output of another script that it starts.
I am running into a problem where my script will not move past this line of code:
$(source path/to/script/LOOPER >> /tmp/looplogger.txt)
I have tried almost every variation I can think of for this line, e.g.:
(./LOOPER& >> /tmp/looplogger.txt)
bash /path/to/script/LOOPER 2>1& /tmp/looplogger.txt
etc. For some reason I cannot get it to run in a subshell and have the rest of the script go about its day.
I am trying to run a script from another script, access its output, and parse it line by line until a certain string is found.
Once that string is found, my script would kill said script (I am aware that if it is sourced, the parent script would terminate as well).
The script that starts LOOPER and then tries to kill it:
#!/bin/bash
# deleting contents of .txt
echo "" > /tmp/looplogger.txt

# Code cannot get past this command
$(source "/usr/bin/gcti/LOOPER" >> /tmp/ifstester.txt)

while [[ $(tail -1 /tmp/looplogger.txt) != "Kill me" ]]; do
    sleep 1
    echo ' in loop ' >> /tmp/looplogger.txt
done >> /tmp/looplogger.txt
echo 'Out of loop' >> looplogger.txt

# This kill command works as intended
kill -9 $(ps -ef | grep LOOPER | grep -v grep | awk '{print $2}')
echo "Looper was killed" > /tmp/looplogger.txt
I have tried using while IFS= read -r as well for the above script, but I find its syntax a little confusing.
The LOOPER script (./LOOPER):
#!/bin/bash
# Script to test with scripts that kill & start processes
let i=0
# Infinite while loop
while :
do
    i=$((i+1))
    until [ $i -gt 10 ]
    do
        echo "I am looping :)"
        sleep 1
        ((i=i+1))
    done
    echo "Kill me"
    sleep 1
done
Sorry for my very wordy question.
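No answer was posted, but the pattern the question is reaching for (run LOOPER in the background instead of sourcing it, follow its output until the marker line appears, then kill it) might look roughly like the sketch below; the LOOPER path and the "Kill me" marker are taken from the question, everything else is illustrative:

#!/bin/bash
: > /tmp/looplogger.txt
/usr/bin/gcti/LOOPER >> /tmp/looplogger.txt 2>&1 &   # background, not sourced
looper_pid=$!

while IFS= read -r line; do
    [ "$line" = "Kill me" ] && break
done < <(tail -f /tmp/looplogger.txt)

echo "Out of loop"
kill "$looper_pid"
echo "Looper was killed"
# note: the tail started by the process substitution may linger; kill it too if that matters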

bash: Wait for process substitution subshell to finish

How can bash wait for the subshell used in process substitution to finish in the following construct? (This is of course simplified from the real for loop and subshell which I am using, but it illustrates the intent well.)
for i in {1..3}; do
    echo "$i"
done > >(xargs -n1 bash -c 'sleep 1; echo "Subshell: $0"')
echo "Finished"
Prints:
Finished
Subshell: 1
Subshell: 2
Subshell: 3
Instead of:
Subshell: 1
Subshell: 2
Subshell: 3
Finished
How can I make bash wait for those subshells to complete?
UPDATE
The reason for using process substitution is that I want to use file descriptors to control what is printed to the screen and what is sent to the process. Here is a fuller version of what I'm doing:
for myFile in file1 file2 file3; do
    echo "Downloading $myFile"     # Should print to terminal
    scp -q $user@$host:$myFile ./  # Might take a long time
    echo "$myFile" >&3             # Should go to process substitution
done 3> >(xargs -n1 bash -c 'sleep 1; echo "Processing: $0"')
echo "Finished"
Prints:
Downloading file1
Downloading file2
Downloading file3
Finished
Processing: file1
Processing: file2
Processing: file3
Processing each file may take much longer than the transfer. The file transfers should be sequential, since bandwidth is the limiting factor. I would like to start processing each file after it is received, without waiting for all of them to transfer. The processing can be done in parallel, but only with a limited number of instances (due to limited memory/CPU). So if the fifth file has just finished transferring but only the second file has finished processing, the third and fourth files should complete processing before the fifth file is processed. Meanwhile the sixth file should start transferring.
Bash 4.4 lets you collect the PID of a process substitution with $!, so you can actually use wait, just as you would for a background process:
case $BASH_VERSION in ''|[123].*|4.[0123])
    echo "ERROR: Bash 4.4 required" >&2; exit 1;;
esac

# open the process substitution
exec {ps_out_fd}> >(xargs -n1 bash -c 'sleep 1; echo "Subshell: $0"'); ps_out_pid=$!

for i in {1..3}; do
    echo "$i"
done >&$ps_out_fd

# close the process substitution
exec {ps_out_fd}>&-

# ...and wait for it to exit.
wait "$ps_out_pid"
Beyond that, consider flock-style locking -- though beware of races:
for i in {1..3}; do
    echo "$i"
done > >(flock -x my.lock xargs -n1 bash -c 'sleep 1; echo "Subshell: $0"')

# this is only safe if the "for" loop can't exit without the process substitution reading
# something (and thus signalling that it successfully started up)
flock -x my.lock echo "Lock grabbed; the subshell has finished"
That said, given your actual use case, what you want should presumably look more like:
download() {
    local retval=0
    for arg; do
        scp -q $user@$host:$arg ./ || (( retval |= $? ))
    done
    exit "$retval"
}
export -f download

printf '%s\0' file1 file2 file3 |
    xargs -0 -P2 -n1 bash -c 'download "$@"' _
You could have the subshell create a file that the main shell waits for:
tempfile=/tmp/finished.$$
for i in {1..3}; do
    echo "$i"
done > >(xargs -n1 bash -c 'sleep 1; echo "Subshell: $0"'; touch $tempfile)
while ! test -f $tempfile; do sleep 1; done
rm $tempfile
echo "Finished"
You can use a bash coproc to hold a readable file descriptor that is closed when all of the process's children die:
coproc read    # previously: `coproc cat`, see comments
for i in {1..3}; do
    echo "$i"
done > >(xargs -n1 bash -c 'sleep 1; echo "Subshell: $0"')
exec {COPROC[1]}>&-    # close my writing side
read -u ${COPROC[0]}   # will wait until all potential writers (i.e. process children) end
echo "Finished"
If this is to be run on a system where there may be an attacker, you should not use a temp file name that can be guessed. So, based on @Barmar's solution, here is one that avoids that:
tempfile="$(tempfile)"
for i in {1..3}; do
    echo "$i"
done > >(xargs -n1 bash -c 'sleep 1; echo "Subshell: $0"'; rm "$tempfile")
while test -f "$tempfile"; do sleep 1; done
echo "Finished"
I think you are making it more complicated than it needs to be. Something like this works because the bash invocations are subprocesses of the main process, and wait causes the script to wait until all of them have finished before printing.
for i in {1..3}
do
    bash -c "sleep 1; echo Subshell: $i" &
done
wait
echo "Finished"
Unix and its derivatives (Linux) have the ability to wait for child (sub)processes but not for grandchild processes, such as those created in your original example. Some would consider the polling solution, where you go back and check for completion, to be vulgar since it does not use this mechanism.
The solution where the xargs PID was captured was not vulgar, just too complicated.
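As a small illustration of that point (my own toy example, not from the thread): wait only reaps the shell's own children, so a grandchild started by a child is not waited for:

#!/bin/bash
( sleep 5 & )                 # the subshell is our child; sleep becomes a grandchild
wait                          # returns immediately: the subshell has already exited
echo "wait returned"
ps -o pid,ppid,cmd -C sleep   # the grandchild sleep is still running (Linux ps)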

sqlplus within a loop - unix

Is there a way to send multiple sqlplus commands within a loop, but wait for one to run successfully before the next one starts?
Here is a sample of my code. I have added that sleep 15 because the functions I'm going to execute take about 10-20s to run. I want to get rid of that 15s constant and make them run one after the other.
if [ "$#" -eq 1 ]; then
checkUser "$1"
while read line; do
sqlplus $user/$pass#$server $line
sleep 15
done < "$wrapperList"
fi
The instructions in a while loop are executed in sequence. It would be equivalent to doing it like this, chaining the commands one after the other:
sqlplus $user/$pass@$server $line1
sqlplus $user/$pass@$server $line2
So you don't need the sleep 15 here, since the sqlplus commands will not be called in parallel. The way you wrote it already calls them one after the other.
Note: it is even better to stop running if the first line did not return correctly, using && to say: run only if the previous return code is 0.
sqlplus $user/$pass@$server $line1 && \
sqlplus $user/$pass@$server $line2
To have this in the while loop:
checkUser "$1"
while read line; do
    sqlplus $user/$pass@$server $line
    RET_CODE=$?    # check return code, and break if not ok.
    if [ ${RET_CODE} != 0 ]; then
        echo "aborted." ; break
    fi
done < "$wrapperList"
On the other hand, when you want to run them in parallel, the syntax is different; see, for example: Unix shell script run SQL scripts in parallel. A rough sketch is shown below.
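For illustration only (my own sketch, not the code from the linked answer), the parallel variant would background each call and then wait for all of them to finish:

while read line; do
    sqlplus $user/$pass@$server $line &   # each call runs in the background
done < "$wrapperList"
wait                                      # block until every background sqlplus has finished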

Bash script: `exit 0` fails to exit

So I have this Bash script:
#!/bin/bash
PID=`ps -u ...`
if [ "$PID" = "" ]; then
    echo $(date) Server off: not backing up
    exit
else
    echo "say Server backup in 10 seconds..." >> fifo
    sleep 10
    STARTTIME="$(date +%s)"
    echo nosave >> fifo
    echo savenow >> fifo
    tail -n 3 -f server.log | while read line
    do
        if echo $line | grep -q 'save complete'; then
            echo $(date) Backing up...
            OF="./backups/backup $(date +%Y-%m-%d\ %H:%M:%S).tar.gz"
            tar -czhf "$OF" data
            echo autosave >> fifo
            echo "$(date) Backup complete, resuming..."
            echo "done"
            exit 0
            echo "done2"
        fi
        TIMEDIFF="$(($(date +%s)-STARTTIME))"
        if ((TIMEDIFF > 70)); then
            echo "Save took too long, canceling backup."
            exit 1
        fi
    done
fi
Basically, the server takes input from a fifo and outputs to server.log. The fifo is used to send stop/start commands to the server for autosaves. At the end, once it receives the message from the server that the save has completed, it tars the data directory and starts saves again.
It's at the exit 0 line that I'm having trouble. Everything executes fine, but I get this output:
srv:scripts $ ./backup.sh
Sun Nov 24 22:42:09 EST 2013 Backing up...
Sun Nov 24 22:42:10 EST 2013 Backup complete, resuming...
done
But it hangs there. Notice how "done" echoes but "done2" fails. Something is causing it to hang on exit 0.
ADDENDUM: Just to avoid confusion for people looking at this in the future, it hangs at the exit line and never returns to the command prompt. Not sure if I was clear enough in my original description.
Any thoughts? This is the entire script, there's nothing else going on and I'm calling it direct from bash.
Here's a smaller, self-contained example that exhibits the same behavior:
echo foo > file
tail -f file | while read; do exit; done
The problem is that since each part of the pipeline runs in a subshell, exit only exits the while read loop, not the entire script.
It will then hang until tail finds a new line, tries to write it, and discovers that the pipe is broken.
To fix it, you can replace
tail -n 3 -f server.log | while read line
do
    ...
done
with
while read line
do
    ...
done < <(tail -n 3 -f server.log)
By redirecting from a process substitution instead, the script doesn't have to wait for tail to finish as it would in a pipeline, and the loop doesn't run in a subshell, so exit actually exits the entire script.
But it hangs there. Notice how "done" echoes but "done2" fails.
done2 won't be printed at all since exit 0 has already ended your script with return code 0.
I don't know the details of bash subshells inside loops, but normally the appropriate way to exit a loop is the break command. In some cases that's not enough (you really need to exit the program), but refactoring the program may be the easiest (safest, most portable) way to solve that. It may also improve readability, because people don't expect programs to exit in the middle of a loop.
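For instance, a rough sketch of that refactor (combining break with the process substitution shown above; the flag name is my own): set a flag, break out of the loop, and decide the exit code afterwards:

backed_up=0
while read line; do
    if echo "$line" | grep -q 'save complete'; then
        # do the backup here ...
        backed_up=1
        break    # leave the loop instead of calling exit inside it
    fi
done < <(tail -n 3 -f server.log)

if [ "$backed_up" -eq 1 ]; then
    echo "done"
    exit 0
fi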
