expect script failing when called from a function in bash script - bash

I have written a bash script that contains several functions; some of those functions call an expect script. When I test each function on its own in the terminal it runs fine and completes, but when I run the entire script it hangs. If I remove the expect scripts it does finish, but it takes the cumulative time of all the functions rather than what I would expect for concurrent runs: roughly 6m30.400s, the time of the longest function.
Is backup_ironport &> /dev/null the correct syntax to background the process and send its output to the bit bucket, so that all the processes run concurrently?
I ran the script with only the first six functions and expected it to take about 2m48s, but it took 4m47s, which suggests the functions are running one after another rather than concurrently.
time ./network-bak.sh
real 4m47.033s
If I let it run all the functions it hangs for over 15 minutes and I have to stop it, yet each function completes when I test it separately in the shell. Running the script with bash -x, it just sits at the first expect call, backup_cisco_firewall, and goes nowhere.
If I check the process on the system, it sits there forever:
ps aux |grep fw-bak-expect
user 30925 0.0 0.0 0 0 pts/7 Z 10:18 0:00 [fw-bak-expect.s] <defunct>
Here is a snippet of the script's functions and the main run sequence. I timed each function in the terminal with time and put the real time next to each call as a # comment to show how long it took to run.
backup_fortigate()
{ for fortigate in `cat "$h5"`; do scp $fortigate:sys_config "$b3"/$fortigate-$date; done; }
backup_cisco_firewall()
{ cd "$sc" ; for fw in `cat "$h2"`; do ./fw-bak-expect.sh $fw ; done; }
########################
# Start of MAIN #
# First Run the Backups#
########################
rotate &
rpid=$!
backup_ironport &> /dev/null # real 0m27.490s
backup_fortigate &> /dev/null # real 0m40.816s
backup_nexus &> /dev/null # real 0m35.346s
backup_switch-router &> /dev/null # real 2m48.649s
backup_rsa &> /dev/null # real 0m1.017s
backup_tlite &> /dev/null # real 0m29.589s
backup_cisco_firewall &> /dev/null # real 6m30.400s # no sys-context
backup_sw-no-pk &> /dev/null # real 4m6.729s
backup_esx &> /dev/null # real 1m24.330s
wait
##############################
# Now we confirm the backups #
##############################
confirm_backup > /dev/null
search_for_backups > /dev/null
vh1=$(wc -l < "$f1")
vh2=$(wc -l < "$f2")
backup_verify
# zero the verification files for the next run
cat /dev/null > "$f1"
cat /dev/null > "$f2"
cat /dev/null > "$mh"
kill -9 $rpid
echo "\b\b "

The &> syntax is a short-hand to redirect both stdout and stderr at the same time, here to /dev/null. If you want to background the function call and drop the output, you need to include the actual backgrounding token:
...
backup_ironport &> /dev/null &
...
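Applied to the whole backup block it would look something like this sketch (function names reused from the script above; the existing wait then has real background jobs to collect):
backup_ironport &> /dev/null &
backup_fortigate &> /dev/null &
backup_cisco_firewall &> /dev/null &
# ...same for the remaining backup_* functions
wait    # returns once every backgrounded backup has finished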

Related

Issues with script run from /etc/rc.local

I'm trying to run a bash script at boot time from /etc/rc.local on a headless Raspberry Pi 4 (Raspbian buster lite - Debian based). I've done something similar on a Pi 3 with success so I'm confused about why the Pi 4 would misbehave - or behave differently.
The script executed from /etc/rc.local fires but appears to just exit at seemingly random intervals with no indication as to why it's being terminated.
To test it, I dumbed down the script and just stuck the following into a test script called /home/pi/test.sh:
#!/bin/bash
exec 2> /tmp/output # send stderr from rc.local to a log file
exec 1>&2 # send stdout to the same log file
set -x # tell bash to display commands before execution
while true
do
    echo 'Still alive'
    sleep .1
done
I then call it from /etc/rc.local just before the exit line:
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel
#
# Make sure that the script will "exit 0" on success or any other
# value on error.
/home/pi/test.sh
echo $? >/tmp/exiterr #output exit code to /tmp/exiterr
exit 0
The contents of /tmp/output:
+ true
+ echo 'Still alive'
Still alive
+ sleep .1
+ true
+ echo 'Still alive'
Still alive
+ sleep .1
and /tmp/exiterr shows
0
If I reduce the sleep period, /tmp/output is longer (over 6000 lines without the sleep).
Any ideas why the script is exiting shortly after starting?

Can't run a shell script every 24 hours

I have written a shell script that runs some commands. I added logic to run the script once every 24 hours, but it runs once and then never runs again.
The script is as below:
#!/bin/bash
while true; do
    cd /home/ubuntu/;
    DATE=`date '+%Y-%m-%d'`;
    aws s3 cp --recursive "/home/ubuntu/" s3://bucket_name/$DATE/;
    rm -r -f ./*;
    # sleep 24 hours
    sleep $((24 * 60 * 60))
done
Why does it not run once every 24 hours? I do not get any errors when the script runs. The copy takes about 10 minutes.
Good practice is to protect your script against being started more than once. That way you can be sure that only one instance is running.
#!/bin/bash
LOCKFILE=/tmp/block_file
if ( set -o noclobber; echo "$$" > "$LOCKFILE") 2> /dev/null;
then
    trap 'rm -f "$LOCKFILE"; exit $?' INT TERM EXIT
    while true; do
        cd /home/ubuntu/;
        DATE=`date '+%Y-%m-%d'`;
        aws s3 cp --recursive "/home/ubuntu/" s3://bucket_name/$DATE/;
        rm -r -f ./*;
        # sleep 24 hours
        sleep $((24 * 60 * 60))
    done
    rm -f "$LOCKFILE"
    trap - INT TERM EXIT
else
    echo "Warning. Script is already running!"
    echo "Block by PID $(cat $LOCKFILE) ."
    exit
fi
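With the lock in place, starting a second copy while the first is still running just prints the warning from the else branch and exits, along these lines (the PID shown is illustrative):
$ ./yourscript.sh
Warning. Script is already running!
Block by PID 12345 .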
You can also run the script immune to hangups. nohup is a UNIX utility that runs the specified command while ignoring the hangup signal (SIGHUP), so the script keeps working in the background even after the user logs out.
nohup ./yourscript.sh
The lock file /tmp/block_file protects the running script against a second instance being started. To stop the script, press Ctrl+C or run kill pidofyourscript in a terminal; the INT/TERM trap then removes /tmp/block_file. The script's output is written to the file nohup.out.
To run in background (preferred way):
nohup ./yourscript.sh &
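If you later need the PID to stop it cleanly, one small addition (the pid-file path here is just an example, not from the original) is to record $! when launching:
nohup ./yourscript.sh &
echo $! > /tmp/yourscript.pid        # remember the background PID
# ...later, to stop it (TERM is trapped, so /tmp/block_file is removed):
kill "$(cat /tmp/yourscript.pid)"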
Your script is probably killed due to inactivity, or when you exit the shell. The proper way to do this is to use cron, as @Christian.K mentioned. See https://help.ubuntu.com/community/CronHowto
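For reference, a minimal crontab entry in that spirit might look like this (the script name, schedule, and log path are placeholders, not from the original post; edit with crontab -e):
# run the S3 upload once a day at midnight and log its output
0 0 * * * /home/ubuntu/daily_s3_backup.sh >> /home/ubuntu/daily_s3_backup.log 2>&1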

Output of background process output to Shell variable

I want to capture the output of a command/script in a variable, but the process is triggered to run in the background. I tried the approach below; a few servers ran it correctly and I got the response, but on a few others i_res comes back empty.
I am running it in the background because the command has a chance of hanging, and I don't want it to hang the parent script.
Hope I will get a response soon.
#!/bin/ksh
x_cmd="ls -l"
i_res=$(eval $x_cmd 2>&1 &)
k_pid=$(pgrep -P $$ | head -1)
sleep 5
c_errm="$(kill -0 $k_pid 2>&1 )"; c_prs=$?
if [ $c_prs -eq 0 ]; then
c_errm=$(kill -9 $k_pid)
fi
wait $k_pid
echo "Result : $i_res"
Try something like this:
#!/bin/ksh
pid=$$ # parent process
(sleep 5 && kill $pid) & # this will sleep and wake up after 5 seconds
# and kill off the parent.
termpid=$! # remember the timebomb pid
# put the command that can hang here
result=$( ls -l )
# if we got here in less than 5 five seconds:
kill $termpid # kill off the timebomb
echo "$result" # disply result
exit 0
Add whatever messages you need to the code. On average this will complete much faster than always having a sleep statement. You can see what it does by making the command sleep 6 instead of ls -l.
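For instance, swapping in the suggested hanging command (everything else unchanged) looks like this:
# simulate a command that hangs longer than the 5-second timebomb
result=$( sleep 6 )
With sleep 6 the timebomb fires first and kills the parent before the result is echoed; with ls -l the command finishes quickly and the timebomb itself is killed.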

Bash - Hiding a command but not its output

I have a bash script (this_script.sh) that invokes multiple instances of another TCL script.
set -m
for vars in $( cat vars.txt );
do
exec tclsh8.5 the_script.tcl "$vars" &
done
while [ 1 ]; do fg 2> /dev/null; [ $? == 1 ] && break; done
The multi threading portion was taken from Aleksandr's answer on: Forking / Multi-Threaded Processes | Bash.
The script works perfectly (still trying to figure out the last line). However, this line is always displayed: exec tclsh8.5 the_script.tcl "$vars"
How do I hide that line? I tried running the script as :
bash this_script.sh > /dev/null
But this hides the output of the invoked tcl scripts too (I need the output of the TCL scripts).
I tried adding the /dev/null redirect to the end of the statement inside the for loop, but that did not work either. Basically, I am trying to hide the command but not its output.
You should use $! to get the PID of the background process just started, accumulate those in a variable, and then wait for each of those in turn in a second for loop.
set -m
pids=""
for vars in $( cat vars.txt ); do
    tclsh8.5 the_script.tcl "$vars" &
    pids="$pids $!"
done
for pid in $pids; do
    wait $pid
    # Ought to look at $? for failures, but there's no point in not reaping them all
done
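If you do want to act on failures as the comment suggests, a small extension of the second loop (a sketch, not from the original answer) could count non-zero exit statuses:
failures=0
for pid in $pids; do
    wait $pid || failures=$((failures + 1))   # count scripts that exited non-zero
done
echo "$failures job(s) failed" >&2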

Bash script to watch execution time of other scripts

I have a main script which runs all the scripts in a folder.
#!/bin/bash
for each in /some_folder/*.sh
do
bash $each
done;
I want to know if the execution of one of them lasts too long (more than N seconds). For example, the execution of a script such as:
#!/bin/bash
ping -c 10000 google.com
will last very long, and I want my main script to e-mail me after N seconds.
All I can do now is run all the scripts with the timeout N option, but that stops them!
Is it possible to e-mail me without stopping the execution of the script?
Try this :
#!/bin/bash
# max seconds before mail alert
MAX_SECONDS=3600
# running the command in the background and get the pid
command_that_takes_a_long_time & _pid=$!
sleep $MAX_SECONDS
# if the pid is alive...
if kill &>/dev/null -0 $_pid; then
mail -s "script $0 takes more than $MAX_SECONDS" user#domain.tld < /dev/null
fi
We run the command in the background, sleep for MAX_SECONDS in parallel, and alert by e-mail if the process takes longer than permitted.
Finally, with your specific requirements:
#!/bin/bash
MAX_SECONDS=3600
alerter(){
    bash "$1" & _pid=$!
    sleep $MAX_SECONDS
    if kill &>/dev/null -0 $_pid; then
        mail -s "$1 takes more than $MAX_SECONDS" user@domain.tld < /dev/null
    fi
}
for each in /some_folder/*.sh; do
    alerter "$each" &
    wait $_pid # remove this line if you would like to run all scripts in parallel
done
You can do something like this:
( sleep 10 ; echo 'Takes a while' | sendmail myself@example.com ) &
email_pid=$!
bash $each
kill $email_pid
The first command is run in a subshell in the background. It first sleeps a while, then sends email. If the script $each finishes before the sleep expires, the subshell is killed without sending email.
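Dropped back into the main loop from the question, that approach might look like the following sketch (the 10-second threshold and e-mail address are placeholders):
#!/bin/bash
for each in /some_folder/*.sh; do
    ( sleep 10 ; echo "$each takes a while" | sendmail myself@example.com ) &
    email_pid=$!
    bash "$each"                    # the script keeps running even if the alert fires
    kill $email_pid 2> /dev/null    # no mail if the script finished in time
done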
