Bash script for do loop that detects a time period elapsed and then continues - bash

I have a script that runs through a list of servers, connects to each one, and grabs files over SCP to store locally.
Occasionally, for various reasons, one of the servers crashes and my script gets stuck for around 4 hours before moving on through the list.
I would like to detect a connection issue, or a period of time elapsed after the command has started, then kill that command and move on to the next server.
I suspect this would involve a wait or sleep and continue, but I am new to loops and bash.
#!/bin/bash
#
# Generate a list of backups to grab
df | grep backups | awk -F/ '{ print $NF }' > /tmp/backuplistsmb
# Get each backup in turn
for BACKUP in `cat /tmp/backuplistsmb`
do
    cd /srv/backups/$BACKUP
    scp -o StrictHostKeyChecking=no $BACKUP:* .
    sleep 3h
done
The above script works fine but does get stuck for 4 hours should there be a connection issue. It is worth noting that some of the transfers take 10 minutes and some 2.5 hours.
Any ideas or help would be very much appreciated.

Try to use the timeout program for that:
Usage:
timeout [OPTION] DURATION COMMAND [ARG]...
E.g. time (timeout 3 sleep 5) will run for 3 secs.
So in your code you can use:
timeout 300 scp -o StrictHostKeyChecking=no $BACKUP:* .
This limits the copy to 5 minutes.
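For context, here is a minimal sketch of how that could sit inside the original loop; GNU timeout exits with status 124 when it has to kill the command, so the script can log the failure and move straight on to the next server:
for BACKUP in `cat /tmp/backuplistsmb`
do
    cd /srv/backups/$BACKUP
    # give the copy at most 5 minutes, then give up on this server
    timeout 300 scp -o StrictHostKeyChecking=no $BACKUP:* .
    if [ $? -eq 124 ]; then
        echo "Transfer from $BACKUP timed out, moving on" >&2
        continue
    fi
    sleep 3h
done
Since some of your transfers take up to 2.5 hours, you would want a duration comfortably above that, e.g. timeout 3h instead of timeout 300.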

Related

Continuing script after long command executed over SSH

My local computer is running a bash script that executes another script (located on a remote machine) like so:
#!/bin/bash
# do stuff
ssh remote@remote "/home/remote/path/to/script.sh"
echo "Done"
# do other stuff
script.sh takes around 15 minutes to execute. Without loss of connection, script.sh is executed completely (until the very last line). However, Done is never echoed (nor is the other stuff executed).
Notes :
I've experimented with screen and nohup, but as I said, the connection is stable and script.sh runs to completion (it doesn't seem to be dropped).
I need script.sh to finish before I can move on to the other stuff, so I can't really run the script and detach (or I would need to know when the script has finished before starting the other stuff).
Everything works fine if I use a dummy script that lasts only 5 minutes (instead of 15).
Edit :
script.sh used for testing :
#!/bin/bash
touch /tmp/start
echo "Start..." & sleep 900; touch /tmp/endofscript
Adding -o ServerAliveInterval=60 fixes the issue.
The ServerAliveInterval option prevents your router from thinking the SSH connection is idle by sending packets over the network between your device and the destination server every 60 seconds.
In the case of a script that takes several minutes to execute and has no output, this keeps the connection alive and prevents it from timing out and being left hanging.
Two options:
Pass the option on the command line:
ssh -o ServerAliveInterval=60 remote@remote "/home/remote/path/to/script.sh"
Or add the following lines to the ~/.ssh/config of the local computer (replace remote with the name of your remote, or * to enable this for any remote):
Host remote
    ServerAliveInterval 60
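For example, to enable the keepalive for any remote rather than just this one, a minimal sketch of the config entry would be:
Host *
    ServerAliveInterval 60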
For additional information:
What do options ServerAliveInterval and ClientAliveInterval in sshd_config do exactly?
Have you tried setting set -xv in the scripts, or executing both scripts with bash -xv script.sh to get the details of the scripts' execution?

Shell script: How to loop run two programs?

I'm running an Ubuntu server to mine crypto. It's not a very stable coin yet and their main node gets disconnected sometimes. When this happens it crashes the program with a fatal error.
At first I wrote a loop script so it would keep running after a crash and just try again after 15 seconds:
while true; do
    ./miner <somecodetoconfiguretheminer> && break
    sleep 15
done
This works, but is inefficient. Sometimes the loop will keep running for 30 minutes until the main node is back up, which costs me 30 minutes of unused hashing power. So I want it to run a second miner for 15 minutes to mine another coin, then check whether the first miner is working again.
So basically: Start -> Mine coin 1 -> if crash -> Mine coin 2 for 15 minutes -> go to Start
I tried the script below but the server just becomes unresponsive once the first miner disconnects:
while true; do
    ./miner1 <somecodetoconfiguretheminer> && break
    timeout 900 ./miner2
    sleep 15
done
I've read through several topics/questions on how && break works, how timeout works, and how while true works, but I can't figure out what I'm missing here.
Thanks in advance for the help!
A much simpler solution would be to run both of the programs all the time, and lower the priority of the less-preferred one. On Linux and similar systems, that is:
nice -10 ./miner2loop.sh &
./miner1loop.sh
Then the scripts can be similar to your first one.
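For concreteness, a minimal sketch of what such a wrapper could look like; miner2loop.sh is the hypothetical name used above, and miner1loop.sh would be identical with ./miner1:
#!/bin/bash
# miner2loop.sh - restart miner2 whenever it crashes, same idea as the loop in the question
while true; do
    ./miner2 <somecodetoconfiguretheminer> && break
    sleep 15
done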
Okay, so after trial and error, and some help, I found out that there is nothing wrong with my initial code. timeout appears to behave differently on my Linux instance when used in a terminal than in a bash script. Used in a terminal it behaves as it should: it counts down and then kills the process it started. Used in a bash script, however, it acts as if I had typed 'sleep': it counts down and then just stops.
Apparently this has to do with my Ubuntu instance (running on a VPS), even though I installed the latest version of coreutils and have everything up to date through apt-get update etc. This is the case for me on DigitalOcean as well as Google Compute.
The solution is to implement the timeout as a function within the bash script, as found in another thread on Stack Overflow. I named the function timeout2 so as not to collide with the timeout command that wasn't working properly:
#!/bin/bash
# Executes command with a timeout
# Params:
#   $1 timeout in seconds
#   $2 command
# Returns 1 if timed out, 0 otherwise
timeout2() {
    time=$1
    # start the command in a subshell to avoid problem with pipes
    # (spawn accepts one command)
    command="/bin/sh -c \"$2\""
    expect -c "set echo \"-noecho\"; set timeout $time; spawn -noecho $command; expect timeout { exit 1 } eof { exit 0 }"
    if [ $? = 1 ] ; then
        echo "Timeout after ${time} seconds"
    fi
}
while true; do
    ./miner1 <parameters for miner> && break
    sleep 5
    timeout2 300 "./miner2 <parameters for miner>"
done

Running a shell script once a day at random time [duplicate]

This question already has answers here:
Cron jobs and random times, within given hours
I need to run a shell script once a day at a random time (so once every day, somewhere between 00:00 and 23:59).
I know the sleep command and cron too, but
cron has no random times,
and the sleep solution is not very nice: my idea is to launch the script every midnight and sleep a random amount of time at the start of the script.
Is there something more elegant?
If you have the at command, you can combine cron and at.
Run the following script from cron every midnight:
#!/bin/bash
script="/tmp/script.sh" #insert the path to your script here
min=$(( 24 * 60 ))
rmin=$(( $RANDOM % $min ))
at -f "$script" now+${rmin}min
The above will run the at command every midnight and will execute your script at a random time. You should check in your crontab how often the atrun command is started (atrun runs the commands stored with at).
The main benefit in comparison with the sleep method: this "survives" the system reboot.
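For completeness, a minimal sketch of the corresponding crontab entry (the path to the scheduling script above is a placeholder):
# run the at-scheduling script every midnight
0 0 * * * /path/to/schedule_random.sh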
I would simply launch your script at midnight, and sleep for a random time between 0 and 86400 seconds. Since my bash's $RANDOM returns a number between 0 and 32767:
sleep $(( ($RANDOM % 1440)*60 + ($RANDOM % 60) ))
The best alternative to cron is probably at
See at man page
Usually, at reads commands from standard input, but you can give a file of jobs with -f.
Time wise, you can specify many formats. Maybe in your case the most convenient would be
at -f jobs now + xxx minutes
where your script gives xxx as a random value from 1 to 1440 (1440 minutes in a day), and jobs contains the commands you want to be executed.
Nothing prevents you from running sed to patch your crontab as the last thing your program does and just changing the next start time. I wouldn't sleep well though.
You can use cron to launch a bash script which generates a pseudorandom timestamp and gives it to the Unix program at.
I see you are familiar enough with bash and cron, so at will be a piece of cake for you. Documentation as always: man at, or you can try the wiki:
http://en.wikipedia.org/wiki/At_(Unix)

ssh and bring up many processes in parallel + solaris

I have many processes and each of them takes a lot of time to come up (5-10 min).
I am running my script on abc@abc1 and ssh to xyz@xyz1 to bring up the daemons.
On the other machine (xyz@xyz1) I want to bring up 10 processes in parallel (call their startup scripts).
Then after 10 minutes I will check their status, whether they are up or down.
I am doing this because I want the execution time of my script to be minimal.
How can I do this using a shell script in the minimum amount of time?
Thanks
Something like this should get your processes started:
for cmd in bin/proc1 bin/proc2 bin/procn; do
    logfile=var/${cmd#bin/}.out
    ssh xyz@xyz1 "bash -c '$cmd > $logfile 2>&1 &' && echo 'started $cmd in the background. See $logfile for its output.'"
done
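To then check after 10 minutes whether the daemons actually came up, a rough sketch along the same lines (the pgrep check is an assumption; substitute whatever status command your startup scripts provide):
sleep 600    # give the daemons their 5-10 minutes to come up
for cmd in bin/proc1 bin/proc2 bin/procn; do
    name=${cmd#bin/}
    if ssh xyz@xyz1 "pgrep -f $name > /dev/null"; then
        echo "$name is up"
    else
        echo "$name is down"
    fi
done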

Need Help with Unix bash "timer" for mp3 URL Download Script

I've been using the following Unix bash script:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
done < ~/Desktop/URLs.txt
to download and rename a bunch of mp3 files from URLs listed in "URLs.txt". It works well (thanks to Stack Overflow users), but due to a suspected server quantity/time download limit, it only allows me to access a range of 40-50 files from my URL list.
Is there a way to work around this by adding a "timer" inside the while loop so it downloads 1 file per "X" seconds?
I found another related question, here:
How to include a timer in Bash Scripting?
but I'm not sure where to add the "sleep [number of seconds]"... or even if "sleep" is really what I need for my script...?
Any help enormously appreciated — as always.
Dave
curl has some pretty awesome command-line options (documentation), for example, --limit-rate will limit the amount of bandwidth that curl uses, which might completely solve your problem.
For example, replace the curl line with:
curl --limit-rate 200K "$mp3" > ~/Desktop/URLs/$n.mp3
would limit the transfers to an average of 200K per second, which would download a typical 5MB MP3 file in 25 seconds, and you could experiment with different values until you found the maximum speed that worked.
You could also try a combination of --retry and --retry-delay so that when and if a download fails, curl waits and then tries again after a certain amount of time.
For example, replace the curl line with:
curl --retry 30 "$mp3" > ~/Desktop/URLs/$n.mp3
This will transfer the file. If the transfer fails, it will wait a second and try again. If it fails again, it will wait two seconds. If it fails again, it will wait four seconds, and so on, doubling the waiting time until it succeeds. The "30" means it will retry up to 30 times, and it will never wait more than 10 minutes. You can learn this all at the documentation link I gave you.
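If you would rather have a fixed pause between attempts than the doubling wait, --retry-delay sets it explicitly; a hedged variant of the same line (the 5-second value is just an example):
curl --retry 30 --retry-delay 5 "$mp3" > ~/Desktop/URLs/$n.mp3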
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3 &
    ((n++))
    if ! ((n % 4)); then
        wait
        sleep 5
    fi
done < ~/Desktop/URLs.txt
This will spawn at most 4 instances of curl and then wait for them to complete before it spawns 4 more.
A timer?
Like your crontab?
man cron
You know how much they let you download; just count the disk usage of the files you did get.
That is the transfer you are allowed. You need that number, and you will need the PID of your script:
ps aux | grep "$progname" | awk '{print $2}'
or something like that. The secret sauce here is that you can suspend with
kill -SIGSTOP PID
and resume with
kill -SIGCONT PID
So the general method would be (a rough sketch follows after the disclaimers below):
Keep the URLs in an array or queue or whatever bash lets you have.
Process a URL.
Increment a transfer counter.
When the transfer counter gets close to the limit:
kill -SIGSTOP MYPID
You are suspended.
From your crontab, kill -SIGCONT your script after a minute/hour/day, whatever.
Continue processing.
Repeat until done.
Just don't log out or you'll need to do the whole thing over again (although if you used Perl it would be trivial).
Disclaimer: I am not sure if this is an exercise in bash or whatnot; I confess freely that I see the answer in Perl, which is always my choice outside of a REPL. Code in Bash long enough, or heaven forbid, Zsh (my shell), and you will see why Perl was so popular. Ah, memories...
Disclaimer 2: Untested, drive-by, garbage methodology here, only made possible because you have an idea what that transfer limit might be. Obviously, if you have ssh, use ssh -D PORT you@host and pull the mp3s out of the proxy half the time.
In my own defense, if you slow-pull the URLs with sleep you'll be connected for a while, and perhaps "they" might notice that. Suspend and resume, and you should only be connected while grabbing tracks, and gone otherwise.
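A rough, untested sketch of that suspend-and-resume idea in bash, with the byte limit and the PID file path as assumptions; a cron job would later run kill -SIGCONT against the PID stored in /tmp/grab.pid to wake the script up:
#!/bin/bash
# hypothetical limit: suspend ourselves after roughly 200 MB of downloads
LIMIT=$((200 * 1024 * 1024))
echo $$ > /tmp/grab.pid              # record our PID so cron can SIGCONT us later
mkdir -p ~/Desktop/URLs
n=1
bytes=0
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    bytes=$((bytes + $(wc -c < ~/Desktop/URLs/$n.mp3)))
    ((n++))
    if [ "$bytes" -ge "$LIMIT" ]; then
        bytes=0
        kill -SIGSTOP $$             # suspend until cron resumes us with SIGCONT
    fi
done < ~/Desktop/URLs.txt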
Not so much an answer as an optimization. If you can consistently get the first few URLs, but it times out on the later ones, perhaps you could trim your URL file as the mp3s were successfully received?
That is, as 1.mp3 is successfully downloaded, strip it from the list:
tail url.txt -n +2 > url2.txt; mv -f url2.txt url.txt
Then the next time the script runs, it'll begin from 2.mp3
If that works, you might just set up a cron job to periodically execute the script over and over, taking bites at a time.
It just occurred to me that you're programmatically numbering the mp3s, and curl might clobber some of them on restart, since every time it runs it'll start counting at 1.mp3 again. Something to be aware of if you go this route.
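One hedged workaround for that, assuming the numbered files stay in ~/Desktop/URLs: seed n from the count of mp3s already downloaded, so a restarted run continues the numbering instead of starting over at 1.mp3:
# continue numbering after whatever is already downloaded
n=$(ls ~/Desktop/URLs/*.mp3 2>/dev/null | wc -l)
n=$((n + 1))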
