I'm working on a Raspberry Pi with a bash script and I wanted to know if it is possible to determine how many times the RPi has rebooted. The point is that my program does something, and once I reach 3 reboots it starts doing something else.
I already found this https://unix.stackexchange.com/questions/131888/is-there-a-way-to-tell-how-many-times-my-computer-has-rebooted-in-a-24-hour-peri
but the problem is that it gives me a number that can't be modified easily.
Any ideas?
Thanks for the clarification.
last reboot | grep ^reboot | wc -l
That's the number of reboots your system did. Since your program will not "survive" a reboot, I assume that you want the number of reboots since your program ran the first time. So you want to store the number of reboots the first time around, and read it back on (the first and) subsequent starts:
if [[ ! -e ~/.reboots ]]
then
echo $(last reboot | grep ^reboot | wc -l) > ~/.reboots
fi
INITIAL_REBOOTS=$(cat ~/.reboots)
# Now you can check if the *current* number of reboots
# is larger than the *initial* number by three or more:
REBOOTS=$(last reboot | grep ^reboot | wc -l)
if [[ $(expr $REBOOTS - $INITIAL_REBOOTS) -ge 3 ]]
then
echo "Three or more reboots"
else
echo "Less than three reboots"
fi
The above lacks all kinds of finesse and error checking (e.g. in case someone has tampered with ~/.reboots), but is meant as proof of concept only.
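If you want to be a bit more defensive, a minimal sketch might look like this (the validation of ~/.reboots is an assumption: anything that is not a plain number is treated as missing and re-initialized):
#!/bin/bash
# Count reboots as above (grep -c is equivalent to grep | wc -l).
REBOOTS=$(last reboot | grep -c '^reboot')
# Read the stored baseline; re-initialize it if it is missing or not a number.
INITIAL_REBOOTS=$(cat ~/.reboots 2>/dev/null)
if ! [[ $INITIAL_REBOOTS =~ ^[0-9]+$ ]]; then
    INITIAL_REBOOTS=$REBOOTS
    echo "$INITIAL_REBOOTS" > ~/.reboots
fi
if (( REBOOTS - INITIAL_REBOOTS >= 3 )); then
    echo "Three or more reboots"
else
    echo "Less than three reboots"
fi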
Related
I use this code on an embedded device to receive a chapter number. This is the number after the "/" sign; if it is more than 01, a script is executed:
echo -n "REMOTE QCH" | /tmp/nc 0.0.0.0 48360 > /tmp/QCH
sleep 1s
a=$(cat /tmp/QCH | grep -o '[^/":]\+$' | grep -o '[[:digit:]]*')
if [ "$a" -gt "01" ]; then
echo "action"
fi
This code sends a TCP packet, receives the reply, and saves it to the file /tmp/QCH. It can give you numbers like 01/12, 04/18...
echo -n "REMOTE QCH" | /tmp/nc 0.0.0.0 48360 > /tmp/QCH
Everything works fine. I wrote the code myself, but is it well optimized? Maybe it can be faster or better?
Greetings
No, it is not optimized. bash in general is slower than a compiled language such as C. Pipes also take up a lot of resources; most, if not all, could be removed. grep and regular expressions can use a lot of resources; replacing these with exact string matches is almost always more efficient, where possible (not sure in this case). Not storing variables might also reduce memory usage (trivially).
Another issue, regarding correctness, is that /tmp/QCH might change within 1 second, which would break the cat.
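As a rough illustration of removing the pipes (a sketch only, not tested on your device; it assumes the reply saved in /tmp/QCH always ends in something like NN/MM), bash parameter expansion can replace the cat/grep chain:
echo -n "REMOTE QCH" | /tmp/nc 0.0.0.0 48360 > /tmp/QCH
sleep 1s
line=$(< /tmp/QCH)     # read the file without spawning cat
a=${line##*/}          # keep only what follows the last "/"
a=${a//[!0-9]/}        # strip anything that is not a digit
if [ "${a:-0}" -gt 1 ]; then
    echo "action"
fi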
I wrote the following bash script, which works all right, apart from some random moments when it freezes completely and doesn't evolve further past a certain value of a0
export OMP_NUM_THREADS=4
N_SIM=15000
N_NODE=1
for ((i = 1; i <= $N_SIM; i++))
do
index=$((i))
a0=$(awk "NR==${index} { print \$2 }" Intensity_Wcm2_versus_a0_10_20_10_25_range.txt)
dirname="a0_${a0}"
if [ -d "${dirname}" ]; then
cd -P -- "${dirname}" # enter the directory because it exists already
if [ -f "ParticleBinning0.h5" ]; then # move to next directory because the sim has been already done and results are there
cd ..
echo ${a0}
echo "We move to the next directory because ParticleBinning0.h5 exists in this one already."
continue 1
else
awk -v s="a0=${a0}" 'NR==6 {print s} 1 {print}' ../namelist_for_smilei.py > namelist_for_smilei_a0included.py
echo ${a0}
mpirun -n 1 ../smilei namelist_for_smilei_a0included.py 2&> smilei.log
cd ..
fi
else
mkdir -p $dirname
cd $dirname
awk -v s="a0=${a0}" 'NR==6 {print s} 1 {print}' ../namelist_for_smilei.py > namelist_for_smilei_a0included.py
echo ${a0}
mpirun -n 1 ../smilei namelist_for_smilei_a0included.py 2&> smilei.log
cd ..
fi
done
I need to let this run for 12 hours or so in order for it to complete all 15,000 simulations.
One mpirun -n 1 ../smilei namelist_for_smilei.py 2&> smilei.log command takes 4 seconds to run on average.
Sometimes it just stops at one value of a0 and the last printed value of a0 on the screen is say a0_12.032131.
And it just stays like this, for no apparent reason.
There's no output being written in smilei.log from that particular faulty a0_12.032131 folder.
So I don't know what has happened with this particular value of a0.
No single value of a0 is critical; I can live without the computations for that one value of a0.
I have tried to use the timeout utility in Ubuntu to make the loop advance past any value of a0 which takes more than 2 minutes to run. If it takes longer than that, it has clearly failed and is just holding up the whole process.
It is beyond my capabilities to write such a script.
What should a template look like for my particular pipeline?
Thank you!
It seems that this mpirun program is hanging. As you said, you could use the timeout utility to terminate its execution after a reasonable amount of time has passed:
timeout --signal INT 2m mpirun...
Depending on how mpirun handles signals it may be necessary to use KILL instead of INT to terminate the process.
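Inside your loop, that might look roughly like this (a sketch; the 2-minute limit comes from your question, the redirection is just written out explicitly, and the skip message is only an example):
if ! timeout --signal INT 2m mpirun -n 1 ../smilei namelist_for_smilei_a0included.py > smilei.log 2>&1
then
    echo "a0=${a0} timed out or failed, moving on" >&2   # the run is abandoned after 2 minutes
fi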
#!/bin/bash
z=1
b=$(date)
while [[ $z -eq 1 ]]
do
a=$(date)
if [ "$a" == "$b" ]
then
b=$(date -d "+7 days")
rsync -v -e ssh user@ip_address:~/sample.tgz /home/kartik2
sleep 1d
fi
done
I want to rsync a file every week! But if I start this script on every boot, the file will be rsynced every time the system starts! How do I alter the code so the rsync happens on a weekly basis? (PS: I don't want to do this through a cronjob - school assignment.)
You are talking about having this run for weeks, right? So, we have to take into account that the system will be rebooted and it needs to run unattended. In short, you need some means of ensuring the script is run at least once every week even when no one is around. The options look like this:
Option 1 (worst)
You set a reminder for yourself and you log in every week and run the script. While you may be reliable as a person, this doesn't allow you to go on vacation. Besides, it goes against our principle of "when no one is around".
Option 2 (okay)
You can background the process (./once-a-week.sh &) but this will not be reliable over time. Among other things, if the system restarts then your script will not be running and you won't know.
Option 3 (better)
For this to be reliable over weeks one option is to daemonize the script. For a more detailed discussion on the matter, see: Best way to make a shell script daemon?
You would need to make sure the daemon is started after reboot or system failure. For more discussion on that matter, see: Make daemon start up with Linux
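A very rough sketch of the detaching part (setsid starts the script in its own session so it survives logout; this is not a full daemon and does not cover restarting after a reboot):
setsid ./once-a-week.sh < /dev/null > /dev/null 2>&1 &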
Option 4 (best)
You said no cron, but it really is the best option. In particular, it would consume no system resources for the 6 days, 23 hours, and 59 minutes when it does not need to be running. Additionally, it is naturally resilient to reboots and the like. So, I feel compelled to say that creating a crontab entry like the following would be my top vote: @weekly /full/path/to/script
If you do choose option 2 or 3 above, you will need to modify your script to track the week number (date +%V) in which it last successfully completed its run. The problem is, just keeping that in memory means it will not survive a reboot.
To make any of the above more resilient, it might be best to create a directory where you can store a file to serve as a semaphore (e.g. week21.txt) or a file to store the state of the last run. Something like once-a-week.state to which you would write a value when run:
date +%V > once-a-week.state # write the week number to a file
Then to read the value, you would:
file="/path/to/once-a-week.state" # the file where the week number is stored
read -d $'\x04' name < "$file"
echo "$name"
You would then need to check to see if the week number matched this present week number and handle the needed action based on match or not.
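Putting that together, a minimal sketch of the weekly check might look like this (the paths and the rsync target are placeholders taken from your script; adjust as needed):
#!/bin/bash
state="/path/to/once-a-week.state"   # where the last successful week number is stored
current=$(date +%V)                  # ISO week number of today
last=$(cat "$state" 2>/dev/null)     # empty if the state file does not exist yet
if [ "$current" != "$last" ]; then
    rsync -v -e ssh user@ip_address:~/sample.tgz /home/kartik2 &&
        echo "$current" > "$state"   # record the week only if rsync succeeded
fi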
#!/bin/bash
z=1
b=$(cat f1.txt)
while [[ $z -eq 1 ]]
do
a=$(date +"%d-%m-%y")
if [ "$a" == "$b" ] || [ "$b" == "" ] || [$a -ge $b ]
then
b=$(date +"%d-%m-%y" -d "+7 days")
echo $b > f1.txt
rsync -v -e ssh HOST@ip:~/sample.tgz /home/user
if [ $? -eq 0 ]
then
sleep 1d
fi
fi
done
This code seems to work well! If there are any changes to make to it, let me know.
I am having an interesting issue that I can't seem to figure out.
I have a basic script that pulls configuration information and just redirects it to a file:
cat /etc/something > 1
cat /etc/something-else > 2
As soon as my data gathering is finished, I run a "parser" that presents info about each check:
#58
id="RHEL-06-000001"
ruleid="The system must require passwords to contain at least one special character."
if grep -G '[a-z]' 1; then
ocredit=`cat 1 | grep -v "^#" | awk '{print $2}'`
if [ "$ocredit" -le -1 ]; then
result="Not A Finding"
todo="None"
else
result="Open"
todo="The current value is $ocredit. This is less than the minimum requirement of - 1."
fi
else
result="Open"
todo="The option is not configured"
fi
echo "$id, $ruleid, $result, $todo" >> Findings.csv
#59
id="RHEL-06-000002"
ruleid="The system must require passwords to contain at least one lowercase alphabetic character."
if grep -G '[a-z]' 2; then
lcredit=`cat 2 | awk -F"=" '{print $2}'`
if [ "$lcredit" -le -1 ]; then
result="Not A Finding"
todo="None"
else
result="Open"
todo="The current value is $lcredit. This is less than the minimum requirement of -1."
fi
else
result="Open"
todo="The system is not configured to require at least one lowercase alphabetical charatcer in passwords."
echo "$id, $ruleid, $result, $todo" >> Findings.csv
Or something remotely close to that.
I have roughly 250 of these checks happening, but my code runs the first 58 and then stops and no longer redirects content to the CSV file.
I do get an error after the script finishes prematurely, stating
./checker.sh: line 2898: syntax error: unexpected end of file
which is the end of my file, but I can't seem to figure out how it is escaping to that point in the script.
The kicker: this all worked until about a half hour ago, and it has me stumped.
Can you help me out?
You seem to be missing the fi after your second-last line:
else
result="Open"
todo="The system is not configured to require at least one lowercase alphabetical charatcer in passwords."
## HERE ##
echo "$id, $ruleid, $result, $todo" >> Findings.csv
That could potentially cause problems for the bash parser when encountered, causing an EOF error when bash tries to find the missing fi.
That probably means you've got an unclosed if statement or similar. Bash reads simple commands on-demand, but when it comes upon a complex statement like that, it wants to read the whole if statement and its contents. If it then comes upon an EOF while still trying to read to the end of it, it will give you that error.
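For reference, the end of that check with the missing fi added back would read:
else
    result="Open"
    todo="The system is not configured to require at least one lowercase alphabetical character in passwords."
fi
echo "$id, $ruleid, $result, $todo" >> Findings.csv
Running bash -n checker.sh is a quick way to check the syntax of the whole script without executing it, which helps catch this kind of unbalanced if/fi.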
I often find myself writing simple for loops to perform an operation to many files, for example:
for i in `find . | grep ".xml$"`; do bzip2 $i; done
It seems a bit depressing that on my 4-core machine only one core is getting used. Is there an easy way I can add parallelism to my shell scripting?
EDIT: To introduce a bit more context to my problems, sorry I was not more clear to start with!
I often want to run simple(ish) scripts, such as plotting a graph, compressing or uncompressing, or running some program, on reasonably sized datasets (usually between 100 and 10,000). The scripts I use to solve such problems look like the one above, but might have a different command, or even a sequence of commands, to execute.
For example, just now I am running:
for i in `find . | grep ".xml.bz2$"`; do find_graph -build_graph $i.graph $i; done
So my problems are in no way bzip specific! (Although parallel bzip does look cool, I intend to use it in the future.)
Solution: Use xargs to run in parallel (don't forget the -n option!)
find -name \*.xml -print0 | xargs -0 -n 1 -P 3 bzip2
This perl program fits your needs fairly well, you would just do this:
runN -n 4 bzip2 `find . | grep ".xml$"`
GNU make has a nice parallelism feature (e.g. -j 5) that would work in your case. Create a Makefile:
%.xml.bz2 : %.xml
	bzip2 $<

all: $(patsubst %.xml,%.xml.bz2,$(shell find . -name '*.xml'))
then do a
nice make -j 5
Replace '5' with some number, probably 1 more than the number of CPUs. You might want to use 'nice' just in case someone else wants to use the machine while you are on it.
The answer to the general question is difficult, because it depends on the details of the things you are parallelizing.
On the other hand, for this specific purpose, you should use pbzip2 instead of plain bzip2 (chances are that pbzip2 is already installed, or at least in the repositories of your distro). See here for details: http://compression.ca/pbzip2/
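For basic compression it is essentially a drop-in replacement for bzip2, so the loop from the question could simply become:
for i in `find . | grep ".xml$"`; do pbzip2 $i; done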
I find this kind of operation counterproductive. The reason is that the more processes access the disk at the same time, the longer the read/write times get, so the final result is a longer total run time. The bottleneck here won't be the CPU, no matter how many cores you have.
Haven't you ever performed two big file copies at the same time on the same HD drive? It is usually faster to copy one and then the other.
I know this task involves some CPU power (bzip2 is a demanding compression method), but try measuring the CPU load first before going down the "challenging" path we technicians all tend to choose more often than needed.
I did something like this in bash. The parallel make trick is probably a lot faster for one-offs, but here is the main code section to implement something like this in bash; you will need to modify it for your purposes, though:
#!/bin/bash
# Replace NNN with the number of loops you want to run through
# and CMD with the command you want to parallel-ize.
set -m
nodes=`grep processor /proc/cpuinfo | wc -l`
job=($(yes 0 | head -n $nodes | tr '\n' ' '))
isin()
{
local v=$1
shift 1
while (( $# > 0 ))
do
if [ $v = $1 ]; then return 0; fi
shift 1
done
return 1
}
dowait()
{
while true
do
nj=( $(jobs -p) )
if (( ${#nj[@]} < nodes ))
then
for (( o=0; o<nodes; o++ ))
do
if ! isin ${job[$o]} ${nj[*]}; then let job[o]=0; fi
done
return;
fi
sleep 1
done
}
let x=0
while (( x < NNN ))
do
for (( o=0; o<nodes; o++ ))
do
if (( job[o] == 0 )); then break; fi
done
if (( o == nodes )); then
dowait;
continue;
fi
CMD &
let job[o]=$!
let x++
done
wait
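For the bzip2 example from the question, the placeholders might be filled in roughly like this (a sketch; it assumes the file names contain no whitespace):
files=( $(find . -name '*.xml') )   # build the work list up front
NNN=${#files[@]}                    # one loop iteration per file
# ...and inside the while loop, replace "CMD &" with:
bzip2 "${files[$x]}" &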
If you had to solve the problem today you would probably use a tool like GNU Parallel (unless there is a specialized parallelized tool for your task like pbzip2):
find . | grep ".xml$" | parallel bzip2
To learn more:
Watch the intro video for a quick introduction:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
I think you could do the following:
for i in `find . | grep ".xml$"`; do bzip2 $i & done
But that would instantly spin off as many processes as you have files, and isn't as optimal as just running four processes at a time.
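A crude way to cap it at four jobs at a time (an illustration only; it waits for the whole batch of four before starting the next, so it is less efficient than xargs -P or parallel):
n=0
for i in `find . | grep ".xml$"`; do
    bzip2 $i &
    n=$((n + 1))
    if (( n % 4 == 0 )); then wait; fi   # pause after every 4th background job
done
wait   # wait for whatever is still running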