Shell script to rsync a file every week without cronjob (school assignment) - shell

#!/bin/bash
z=1
b=$(date)
while [[ $z -eq 1 ]]
do
    a=$(date)
    if [ "$a" == "$b" ]
    then
        b=$(date -d "+7 days")
        rsync -v -e ssh user@ip_address:~/sample.tgz /home/kartik2
        sleep 1d
    fi
done
I want to rsync a file every week. But if I start this script on every boot, the file will be rsynced every time the system starts. How do I alter the code so the rsync really happens on a weekly basis? (PS: I don't want to do this through a cron job; it's a school assignment.)

You are talking about having this run for weeks, right? So we have to take into account that the system will be rebooted and that the script needs to run unattended. In short, you need some means of ensuring the script is run at least once every week even when no one is around. The options look like this:
Option 1 (worst)
You set a reminder for yourself and you log in every week and run the script. While you may be reliable as a person, this doesn't allow you to go on vacation. Besides, it goes against our principle of "when no one is around".
Option 2 (okay)
You can background the process (./once-a-week.sh &), but this will not be reliable over time. Among other things, if the system restarts then your script will no longer be running and you won't know.
Option 3 (better)
For this to be reliable over weeks one option is to daemonize the script. For a more detailed discussion on the matter, see: Best way to make a shell script daemon?
You would need to make sure the daemon is started after reboot or system failure. For more discussion on that matter, see: Make daemon start up with Linux
Option 4 (best)
You said no cron, but it really is the best option. In particular, it would consume no system resources for the 6 days, 23 hours and 59 minutes when it does not need to be running. Additionally, it is naturally resilient to reboots and the like. So I feel compelled to say that creating a crontab entry like the following would be my top vote: @weekly /full/path/to/script
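If you go that route, a minimal sketch of installing the entry non-interactively might look like this (the script path is the placeholder from above):
# append a weekly entry to the current user's crontab
(crontab -l 2>/dev/null; echo "@weekly /full/path/to/script") | crontab -
Or simply run crontab -e and add the @weekly line by hand.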
If you do choose option 2 or 3 above, you will need to modify your script to keep track of the week number (date +%V) in which it last successfully completed a run. The problem is that keeping that value only in memory means it will not survive a reboot.
To make any of the above more resilient, it might be best to create a directory where you can store a file to serve as a semaphore (e.g. week21.txt) or a file to store the state of the last run. Something like once-a-week.state, to which you would write a value when the script runs:
date +%V > once-a-week.state # write the week number to a file
Then to read the value, you would:
file="/path/to/once-a-week.state" # the file where the week number is stored
read -d $'\x04' name < "$file"
echo "$name"
You would then check whether the stored week number matches the current week number and take the appropriate action based on whether or not they match, as sketched below.
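Putting that together, a minimal sketch of the weekly check might look like this (the state-file path and the rsync source/destination are placeholders):
state="/path/to/once-a-week.state"
current=$(date +%V)                # ISO week number of today
last=$(cat "$state" 2>/dev/null)   # empty if no run has been recorded yet
if [ "$current" != "$last" ]; then
    rsync -v -e ssh user@host:~/sample.tgz /home/user && echo "$current" > "$state"
fi
Writing the state file only after a successful rsync means a failed transfer gets retried on the next check.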

#!/bin/bash
z=1
b=$(cat f1.txt 2>/dev/null)    # date of the next scheduled run, empty on the first run
while [[ $z -eq 1 ]]
do
    a=$(date +"%Y%m%d")        # numeric date format so it can be compared with -ge
    if [ "$b" == "" ] || [ "$a" -ge "$b" ]
    then
        b=$(date -d "+7 days" +"%Y%m%d")
        echo "$b" > f1.txt
        rsync -v -e ssh user@ip:~/sample.tgz /home/user
        if [ $? -eq 0 ]
        then
            sleep 1d
        fi
    fi
done
This code seems to work well. If there are any changes I should make to it, let me know.

Related

How to use timeout for this nested Bash script?

I wrote the following bash script, which works all right, apart from some random moments when it freezes completely and doesn't progress past a certain value of a0:
export OMP_NUM_THREADS=4
N_SIM=15000
N_NODE=1
for ((i = 1; i <= $N_SIM; i++))
do
    index=$((i))
    a0=$(awk "NR==${index} { print \$2 }" Intensity_Wcm2_versus_a0_10_20_10_25_range.txt)
    dirname="a0_${a0}"
    if [ -d "${dirname}" ]; then
        cd -P -- "${dirname}" # enter the directory because it exists already
        if [ -f "ParticleBinning0.h5" ]; then # move to the next directory because the sim has already been done and the results are there
            cd ..
            echo "${a0}"
            echo "We move to the next directory because ParticleBinning0.h5 exists in this one already."
            continue 1
        else
            awk -v s="a0=${a0}" 'NR==6 {print s} 1 {print}' ../namelist_for_smilei.py > namelist_for_smilei_a0included.py
            echo "${a0}"
            mpirun -n 1 ../smilei namelist_for_smilei_a0included.py &> smilei.log
            cd ..
        fi
    else
        mkdir -p "$dirname"
        cd "$dirname"
        awk -v s="a0=${a0}" 'NR==6 {print s} 1 {print}' ../namelist_for_smilei.py > namelist_for_smilei_a0included.py
        echo "${a0}"
        mpirun -n 1 ../smilei namelist_for_smilei_a0included.py &> smilei.log
        cd ..
    fi
done
I need to let this run for 12 hours or so in order for it to complete all 15,000 simulations.
One mpirun -n 1 ../smilei namelist_for_smilei.py &> smilei.log command takes 4 seconds to run on average.
Sometimes it just stops at one value of a0, and the last printed value of a0 on the screen is, say, a0_12.032131.
And it just stays like that, for no apparent reason.
There's no output being written to smilei.log in that particular faulty a0_12.032131 folder.
So I don't know what has happened with this particular value of a0.
No single value of a0 is particularly important; I can live without the computation for that one value.
I have tried to use the timeout utility in Ubuntu to make the loop advance past any value of a0 that takes more than 2 minutes to run; if it takes longer than that, it has clearly failed and is blocking the whole process from moving forward.
It is beyond my capabilities to write such a script.
What would a template for my particular pipeline look like?
Thank you!
It seems that the mpirun program is hanging. As you said, you could use the timeout utility to terminate its execution after a reasonable amount of time has passed:
timeout --signal INT 2m mpirun...
Depending on how mpirun handles signals, it may be necessary to use KILL instead of INT to terminate the process.
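Applied to the loop from the question, the wrapped call might look something like this (the 2-minute limit is taken from the question; the exit-status check is just one way to log the skipped value):
timeout --signal INT 2m mpirun -n 1 ../smilei namelist_for_smilei_a0included.py &> smilei.log
if [ $? -eq 124 ]; then                    # GNU timeout exits with 124 when the limit is hit
    echo "a0=${a0} timed out after 2 minutes, skipping" >&2
fi
Because the loop simply moves on after the if, a hung simulation costs at most 2 minutes instead of blocking the remaining runs.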

Passing a directory in bash through pipe communication

This problem is homework; I know there would be much easier ways of solving it, but it is what it is.
The question goes as follows:
- I have a bash script that creates some files.
- I have to pass the first argument of this script (which is a directory) to another script through a pipeline (|).
The problem comes when I try to pass this directory along: in the other script, the value I receive is null.
This is the whole code, in this exact order.
Nothing was left out.
The first script receives at least 11 arguments:
if [[ $# -lt 11 ]]; then
echo "Not enough arguments." >&2;
exit 1;
fi;
The script will check if the first argument is an actual directory:
if [[ ! -d $1 ]]; then
echo "Not a valid first argument." >&2;
exit 1;
fi;
Here I'll save my first argument (the directory) so that I can shift, and then I'll declare the other things I need:
Directory=$1;
shift;
N=$#
name_array=("$#")
Pos=0;
Afterwards, the script uses all the other arguments to create .txt files with those names and adds a number of non-null lines to each .txt file (so argument2.txt has N-1 lines, argument3.txt has N-2 lines, ..., argumentN.txt has 1 line). Then I have to change their permissions to 600.
while [[ $# -gt 0 ]]; do
    if [[ -a "$Directory"/"$1".txt ]]; then
        rm "$Directory"/"$1".txt;
    fi
    touch "$Directory"/"$1".txt
    for((i=0;i<N;i++)); do
        echo "$N" >> "$Directory"/"$1".txt;
    done;
    chmod u=rw "$Directory"/"$1".txt;
    N=$(($N-1));
    name_array["$Pos"]="$Directory"/"$1".txt;
    Pos=$(($Pos + 1));
    shift;
done;
The problem comes here, where I'm trying to echo the first directory...
echo $Directory;
I have tried this as well, with the same result as above:
echo "$(cd "$(dirname "$Directory")" && pwd -P)/$(basename "$Directory")";
...like this, so that I can use it with the pipeline in the shell.
The second script should receive the directory, enter it, and search through all the files that have just been created.
In the Unix terminal, I use this:
./FirstScript.bash 1 2 3 4 5 6 7 8 9 10 11 12 | ./SecondScript.bash
Thank you in advance!
Thanks for being honest; of course we're not going to do your homework, but...
Most of the code seems pretty good. There are some smaller issues, which you might easily solve with shellcheck (if it isn't installed, install it or use the web version), and I would not put ; at the end of each line.
So, instead of running the complete pipeline at once, break it down to find where your problem lies. If you run
./FirstScript.bash 1 2 3 4 5 6 7 8 9 10 11 12
and the output is as expected (that is, the echo $Directory line appears), and the files are created, then the first script is good. So, in this case, the directory 1 will contain the files 2.txt through 12.txt, and the output will be 1.
Then, you will run the second script
./SecondScript.bash
and as input you will give:
1
^D
(that is control-d on Unix). And then the second script should do what you want (or not).
In this way you can debug the scripts separately.
(quick face-palm question: is 1 really the directory you want, or did you just forget the directory name as the first argument?)
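For what it's worth, a minimal sketch of how SecondScript.bash could pick the directory up from the pipe (the loop body is only an illustration, not your assignment's actual task):
#!/bin/bash
IFS= read -r Directory                 # read the line the first script echoed
if [[ ! -d $Directory ]]; then
    echo "Did not receive a valid directory on stdin." >&2
    exit 1
fi
for f in "$Directory"/*.txt; do        # walk the files the first script created
    echo "$f has $(wc -l < "$f") lines"
done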

Determine how many times my Raspberry Pi has rebooted

I'm working on a Raspberry Pi with a bash script, and I wanted to know if it is possible to determine how many times the RPi has rebooted. The point is that my program does something, and once it reaches 3 reboots it should start doing something else.
I already found this https://unix.stackexchange.com/questions/131888/is-there-a-way-to-tell-how-many-times-my-computer-has-rebooted-in-a-24-hour-peri
but the problem is that it gives me a number that can't be modified easily.
Any ideas?
Thanks for the clarification.
last reboot | grep ^reboot | wc -l
That's the number of reboots your system did. Since your program will not "survive" a reboot, I assume that you want the number of reboots since your program ran the first time. So you want to store the number of reboots the first time around, and read it back on (the first and) subsequent starts:
if [[ ! -e ~/.reboots ]]
then
echo $(last reboot | grep ^reboot | wc -l) > ~/.reboots
fi
INITIAL_REBOOTS=$(cat ~/.reboots)
# Now you can check if the *current* number of reboots
# is larger than the *initial* number by three or more:
REBOOTS=$(last reboot | grep ^reboot | wc -l)
if [[ $(expr $REBOOTS - $INITIAL_REBOOTS) -ge 3 ]]
then
echo "Three or more reboots"
else
echo "Less than three reboots"
fi
The above lacks all kinds of finesse and error checking (e.g. in case someone has tampered with ~/.reboots), but is meant as proof of concept only.

Stop a script in a specified time (2 mins)

#!/bin/bash
commonguess(){
    for guess in $(< passwordlist)
    do
        try=$(echo "$guess" | sha256sum | awk '{print $1}' )
        if [ "$try" == "$xxx" ]
        then
            echo "$name:$try"
            return 0
        fi
    done
    return 1
}
dict(){...}
brute(){...}
while IFS=':' read -r name hashing;do
    commonguess || dict || brute
done
My code has been fixed, and I need to do one more thing: when I run the function brute, it should stop after 2 minutes. I know the sleep command can make the script pause; however, I have been told it is not a good idea to use "kill". So I am wondering, is there any way to do this?
You'd get a better answer if you were more specific about what brute actually consists of.
If it's pure shell itself, the cleanest way to have it manage its own execution time is to check the SECONDS variable frequently enough so you can abort the process yourself if it ever goes over some threshold.
If it's external, you're not going to be able to avoid kill, or some invocation wrapper that handles the timeout on its own, which probably already exists.
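If brute is pure shell, a sketch of the SECONDS approach could look like this (the candidate loop is a placeholder for whatever brute actually does):
brute(){
    local start=$SECONDS
    while next_candidate; do                   # placeholder for the real candidate generator
        if (( SECONDS - start >= 120 )); then  # give up after 2 minutes
            return 1
        fi
        # ...test the candidate here...
    done
    return 1
}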

What is the Best Way to Perform Timestamp Comparison in Bash

I have an alert script that I am trying to keep from spamming me, so I'd like to add a condition: if an alert has been sent within, say, the last hour, don't send another one. I have a cron job that checks the condition every minute, because I need to be alerted quickly when the condition is met, but I don't need to get the email every minute until I get the issue under control. What is the best way to compare times in bash to accomplish this?
By far the easiest is to store time stamps as modification times of dummy files. GNU touch and date commands can set/get these times and perform date calculations. Bash has tests to check whether a file is newer than (-nt) or older than (-ot) another.
For example, to only send a notification if the last notification was more than an hour ago:
touch -d '-1 hour' limit
if [ limit -nt last_notification ]; then
#send notification...
touch last_notification
fi
Use "test":
if test file1 -nt file2; then
# file1 is newer than file2
fi
EDIT: If you want to know when an event occurred, you can use "touch" to create a file which you can later compare using "test".
Use the date command to convert the two times into a standard format and subtract them. You'll probably want to store the previous execution time in a dotfile, then do something like:
last=$(cat /tmp/.lastrun)
curr=$(date '+%s')
diff=$(($curr - $last))
if [ $diff -gt 3600 ]; then
    # ...
fi
echo "$curr" > /tmp/.lastrun
(Thanks, Steve.)
