Unable to exit loop in bash script

I am writing a script to start an application, watch its log for the phrase "Server startup", and then move on to the next command. But the script never exits the loop and runs the next command after the condition is met. Any help?
#!/bin/bash
application start; tail -f /application/log/file/name | \
while read line ; do
    echo "$line" | grep "Server startup"
    if [ $? = 0 ]; then
        echo "application started...!"
    fi
done

Don't Use Tail's Follow Flag
Tail's follow flag (-f) will not exit on its own; tail keeps following the file until it receives an appropriate signal or encounters an error condition. You will need a different approach to tracking data at the end of your file, such as watch, logwatch, or periodic log rotation using logrotate. The best tool will depend a lot on the format and frequency of your log data.
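For example, here is a minimal polling sketch (reusing the log path from the question) that avoids -f entirely:
application start
# Poll the log until the startup message appears; grep -q exits 0 on
# the first match (stderr silenced in case the log doesn't exist yet).
until grep -q "Server startup" /application/log/file/name 2>/dev/null; do
    sleep 1
done
echo "application started...!"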

Related

Running commands in bash script in parallel with loop

I have a script where I start a packet capture with tshark and then check whether the user has submitted an input text file.
If there is a file present, I need to run a command for every item in the file through a loop (while tshark is running); else continue running tshark.
I would also like some way to stop tshark with user input such as a letter.
Code snippet:
echo "Starting tshark..."
sleep 2
tshark -i ${iface} &>/dev/null
tshark_pid=$!
# if devices aren't provided (such as in case of new devices, start capturing directly)
if [ -z "$targets" ]; then
echo "No target list provided."
else
for i in $targets; do
echo "Attempting to deauthenticate $i..."
sudo aireplay-ng -0 $number -a $ap -c $i $iface$mon
done
fi
What happens here is that tshark starts, and only when I quit it using Ctrl+C does it move on to the if statement and subsequent loop.
Adding a & at the end of a command executes it in a new subprocess, so the shell is not blocked. Mind that you won't be able to kill it with Ctrl+C.
example:
firefox        # blocks the shell
firefox &      # does not block the shell
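Applied to the snippet above, a hedged sketch of the fix (note that &>/dev/null only redirects output; the trailing & is what actually backgrounds the command, and without it $! never holds tshark's PID):
tshark -i ${iface} &>/dev/null &   # trailing & backgrounds the capture
tshark_pid=$!                      # $! now really holds tshark's PID
# ... run the deauthentication loop while tshark keeps capturing ...
kill $tshark_pid                   # stop the capture when finished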

bash run multiple files exit condition

I have a function like so
function generic_build_a_module(){
    move_to_the_right_directory
    echo 'copying the common packages'; ./build/build_sdist.sh;
    echo 'installing the api common package'; ./build/cache_deps.sh;
}
I want to exit the function if ./build/build_sdist.sh doesn't finish successfully.
Here is the content of ./build/build_sdist.sh:
... multiple operations....
echo "installing all pip dependencies from $REQUIREMENTS_FILE_PATH and placing their tar.gz into $PACKAGES_DIR"
pip install --no-use-wheel -d $PACKAGES_DIR -f $PACKAGES_DIR -r $REQUIREMENTS_FILE_PATH $PACKAGES_DIR/*
In other words, how does the main function generic_build_a_module "know" whether ./build/build_sdist.sh finished successfully?
You can check the exit status of a command by surrounding it with an if. ! inverts the exit status. Use return 1 to exit your function with exit status 1.
generic_build_a_module() {
    move_to_the_right_directory
    echo 'copying the common packages'
    if ! ./build/build_sdist.sh; then
        echo "Aborted due to error while executing build."
        return 1
    fi
    echo 'installing the api common package'
    ./build/cache_deps.sh
}
If you don't want to print an error message, the same program can be written more concisely using ||.
generic_build_a_module() {
    move_to_the_right_directory
    echo 'copying the common packages'
    ./build/build_sdist.sh || return 1
    echo 'installing the api common package'
    ./build/cache_deps.sh
}
Alternatively, you could use set -e. This will exit your script immediately when some command exits with a non-zero status.
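A minimal sketch of that variant (assuming the same helper scripts as above):
#!/bin/bash
set -e    # exit the script as soon as any command returns non-zero
move_to_the_right_directory
echo 'copying the common packages'; ./build/build_sdist.sh
echo 'installing the api common package'; ./build/cache_deps.sh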
You have to do the following:
1. Run both scripts in the background and store their respective process IDs in two variables.
2. Keep checking whether the scripts have completed, at an interval of, say, 1 to 2 seconds.
3. Kill any process that has not completed after a specific time, say 30 seconds.
Example:
sdist=$(ps -fu $USER | grep -v "grep" | grep "build_sdist.sh" | awk '{print $2}')
OR
sdist=$(ps -fu $USER | grep "[b]uild_sdist.sh" | awk '{print $2}')
deps=$(ps -fu $USER | grep -v "grep" | grep "cache_deps.sh" | awk '{print $2}')
Now use a while loop to check the status at a certain interval, or just check once after 30 seconds like below (ps -p tests whether the PID is still alive):
sleep 30
if ps -p "$sdist" > /dev/null; then
    kill -9 "$sdist"
fi
if ps -p "$deps" > /dev/null; then
    kill -9 "$deps"
fi
You can check the exit code status of the last executed command by checking the $? variable. Exit code 0 is a typical indication that the command completed successfully.
Exit codes can be set by using exit followed by the code number within a script.
Here's a previous question regarding the use of $? with more detail, but to simply check this value try:
echo "test";echo $?
# Example
echo 'copying the common packages'; ./build/build_sdist.sh;
if [ $? -ne 0 ]; then
    echo "The last command exited with a non-zero code"
fi
[ $? -ne 0 ] checks whether the last executed command's exit code is not equal to 0. Testing for any non-zero value rather than a specific code also captures statuses such as 255, which is what an exit -1 becomes, since shell exit codes are limited to the range 0-255.
The caveat of the above approach is that we have only checked the last command executed, not the "... multiple operations...." that you mentioned, so we may have missed an error generated by a command that ran before pip install.
Depending on the situation, you could set -e within the called script, which instructs the shell to exit the script the first time a command exits with a non-zero status.
Another option would be to perform a similar check on the exit code of each command within ./build/build_sdist.sh itself. This gives you the most control over when and how the script finishes, and allows the script to set its own exit code.
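A hedged sketch of that last option; the first step name is illustrative, not from the original script:
#!/bin/bash
# Inside build_sdist.sh: give each step its own distinct exit code.
prepare_sources || exit 2    # hypothetical stand-in for the earlier operations
pip install --no-use-wheel -d $PACKAGES_DIR -f $PACKAGES_DIR \
    -r $REQUIREMENTS_FILE_PATH $PACKAGES_DIR/* || exit 3
exit 0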

bash: redirected file seen by script as 'does not exist'

I want to check whether the last command produced any errors, so I redirect stderr to a file and check the file for the string "error". (Only one possible error in this case.)
My script looks like below:
# acquire lock
rm -f /some/path/err.out
MyProgramme 2>/some/path/err.out &
if grep -i "error" /some/path/err.out ; then
    echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
    # release lock
    exit 1
fi
The 'if' condition gives the error 'No such file or directory' for err.out, even though I can see that the file exists.
Did I miss anything? Any help is appreciated. Thanks!
PS: I couldn't check the exit code using $? as the programme is running in the background.
In addition to the file possibly not existing when you call grep, you only call grep once, and it only sees whatever data is currently in the file. grep will not continue reading from the file when it reaches the end, waiting for MyProgramme to complete. Instead, I would recommend using a named pipe as the input to grep. This will cause grep to continue reading from the pipe until MyProgramme does, in fact, complete.
# acquire lock
p=/some/path/err.out
rm -f "$p"
mkfifo "$p"
MyProgramme 2> "$p" &
if grep -i "error" "$p" ; then
    echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
    # release lock
    exit 1
fi
When you start MyProgramme in the background, it's possible that grep executes before MyProgramme has written to (and thus created) the file /some/path/err.out. That's why, even though the file exists later when you check it yourself, grep couldn't find it.
You can wait until the background job completes using wait before inspecting the file using grep.
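A minimal sketch of the wait approach (same paths as the question); note that this blocks until MyProgramme finishes completely before the check runs:
# acquire lock
rm -f /some/path/err.out
MyProgramme 2>/some/path/err.out &
wait $!    # block until the background job terminates
if grep -i "error" /some/path/err.out ; then
    echo "ERROR while running MyProgramme, check /some/path/err.out for error(s)"
    # release lock
    exit 1
fi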

Bash script: `exit 0` fails to exit

So I have this Bash script:
#!/bin/bash
PID=`ps -u ...`
if [ "$PID" = "" ]; then
    echo $(date) Server off: not backing up
    exit
else
    echo "say Server backup in 10 seconds..." >> fifo
    sleep 10
    STARTTIME="$(date +%s)"
    echo nosave >> fifo
    echo savenow >> fifo
    tail -n 3 -f server.log | while read line
    do
        if echo $line | grep -q 'save complete'; then
            echo $(date) Backing up...
            OF="./backups/backup $(date +%Y-%m-%d\ %H:%M:%S).tar.gz"
            tar -czhf "$OF" data
            echo autosave >> fifo
            echo "$(date) Backup complete, resuming..."
            echo "done"
            exit 0
            echo "done2"
        fi
        TIMEDIFF="$(($(date +%s)-STARTTIME))"
        if ((TIMEDIFF > 70)); then
            echo "Save took too long, canceling backup."
            exit 1
        fi
    done
fi
Basically, the server takes input from a fifo and outputs to server.log. The fifo is used to send stop/start commands to the server for autosaves. At the end, once it receives the message from the server that the server has completed a save, it tar's the data directory and starts saves again.
It's at the exit 0 line that I'm having trouble. Everything executes fine, but I get this output:
srv:scripts $ ./backup.sh
Sun Nov 24 22:42:09 EST 2013 Backing up...
Sun Nov 24 22:42:10 EST 2013 Backup complete, resuming...
done
But it hangs there. Notice how "done" echoes but "done2" fails. Something is causing it to hang on exit 0.
ADDENDUM: Just to avoid confusion for people looking at this in the future, it hangs at the exit line and never returns to the command prompt. Not sure if I was clear enough in my original description.
Any thoughts? This is the entire script, there's nothing else going on and I'm calling it direct from bash.
Here's a smaller, self contained example that exhibits the same behavior:
echo foo > file
tail -f file | while read; do exit; done
The problem is that since each part of the pipeline runs in a subshell, exit only exits the while read loop, not the entire script.
It will then hang until tail finds a new line, tries to write it, and discovers that the pipe is broken.
To fix it, you can replace
tail -n 3 -f server.log | while read line
do
...
done
with
while read line
do
...
done < <(tail -n 3 -f server.log)
By redirecting from a process substitution instead, the script doesn't have to wait for tail to finish as it would in a pipeline, and the loop doesn't run in a subshell, so exit actually exits the entire script.
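Applied to the small example above, a quick way to see the difference (the leftover tail lingers until it next tries to write, but the script itself returns immediately):
echo foo > file
while read; do exit; done < <(tail -f file)   # exits right away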
But it hangs there. Notice how "done" echoes but "done2" fails.
done2 won't be printed at all since exit 0 has already ended your script with return code 0.
I don't know the details of bash subshells inside loops, but normally the appropriate way to leave a loop is the break command. In some cases that's not enough (you really need to exit the program), but refactoring the program may be the easiest (safest, most portable) way to solve that. It may also improve readability, because people don't expect programs to exit in the middle of a loop.
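A hedged sketch of that refactoring, reusing the loop from the question (STARTTIME is assumed to be set earlier, as in the original script):
status=0
while read line; do
    if echo "$line" | grep -q 'save complete'; then
        # ... perform the backup, then leave the loop instead of exiting in it
        break
    fi
    if (( $(date +%s) - STARTTIME > 70 )); then
        echo "Save took too long, canceling backup."
        status=1
        break
    fi
done < <(tail -n 3 -f server.log)
exit $status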

How can I wait for certain output from a process then continue in Bash?

I'm trying to write a bash script to do some stuff, start a process, wait for that process to say it's ready, and then do more stuff while that process continues to run. The issue I'm running into is finding a way to wait for that process to be ready before continuing, and allowing it to continue to run.
In my specific case I'm trying to setup a PPP connection. I need to wait until it has connected before I run the next command. I would also like to stop the script if PPP fails to connect. pppd prints to stdout.
In pseudocode, what I want to do is:
[some stuff]
echo START
[set up the ppp connection]
pppd <options> /dev/ttyUSB0
while 1
    if output of pppd contains "Script /etc/ppp/ipv6-up finished (pid ####), status = 0x0"
        break
    if output of pppd contains "Sending requests timed out"
        exit 1
[more stuff, and pppd continues to run]
echo CONTINUING
Any ideas on how to do this?
I had to do something similar waiting for a line in /var/log/syslog to appear. This is what worked for me:
FILE_TO_WATCH=/var/log/syslog
SEARCH_PATTERN='file system mounted'
tail -f -n0 "$FILE_TO_WATCH" | grep -qe "$SEARCH_PATTERN"
if [ $? -eq 1 ]; then
    echo "Search terminated without finding the pattern"
fi
It pipes all new lines appended to the watched file to grep and instructs grep to exit quietly as soon as the pattern is discovered. The following if statement detects if the 'wait' terminated without finding the pattern.
The quickest solution I came up with was to run pppd with nohup in the background and check the nohup.out file for stdout. It ended up something like this:
sudo nohup pppd [options] 2> /dev/null &

# check to see if it started correctly
PPP_RESULT="unknown"
while true; do
    if [[ $PPP_RESULT != "unknown" ]]; then
        break
    fi
    sleep 1

    # read in the file containing the stdout of the pppd command
    # and look for the lines that tell us what happened
    while read line; do
        if [[ $line == Script\ /etc/ppp/ipv6-up\ finished* ]]; then
            echo "pppd has been successfully started"
            PPP_RESULT="success"
            break
        elif [[ $line == LCP:\ timeout\ sending\ Config-Requests ]]; then
            echo "pppd was unable to connect"
            PPP_RESULT="failed"
            break
        elif [[ $line == *is\ locked\ by\ pid* ]]; then
            echo "pppd is already running and has locked the serial port."
            PPP_RESULT="running"
            break
        fi
    done < <( sudo cat ./nohup.out )
done
There's a tool called "Expect" that does almost exactly what you want. More info: http://en.wikipedia.org/wiki/Expect
You might also take a look at the man pages for chat, a companion program to pppd that does some of the things Expect can do.
If you go with Expect, as @sblom advised, please check autoexpect.
You run what you need via the autoexpect command and it will create an expect script for you.
Check its man page for examples.
Sorry for the late response, but a simpler way would be to use wait.
wait is a Bash built-in command which waits for a process to finish.
Following is the excerpt from the man page:
wait [n ...]
    Wait for each specified process and return its termination
    status. Each n may be a process ID or a job specification; if a
    job spec is given, all processes in that job's pipeline are
    waited for. If n is not given, all currently active child
    processes are waited for, and the return status is zero. If n
    specifies a non-existent process or job, the return status is
    127. Otherwise, the return status is the exit status of the
    last process or job waited for.
For further reference on usage, refer to the wiki page.
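A minimal usage sketch (hedged; wait blocks until pppd exits entirely, so this fits the case where you want full termination rather than a particular log line):
pppd <options> /dev/ttyUSB0 &    # <options> is the placeholder from the question
pid=$!
# ... do other stuff while pppd runs ...
wait "$pid"
echo "pppd exited with status $?"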
