I have an application which registers OnLine or OffLine status, which is stored in my test.log file. The status can change every second, every minute, or not at all for many hours. Once per 15 minutes I need to send the current status to an external machine [my.ip.address]. In the example below, let's assume that I just need to echo the current status.
I wrote the script below, which watches my test.log and stores the current status in the FLAG variable. However, I cannot send it (or echo it) to my external machine [my.ip.address] because FLAG is not saved properly. Do you have any idea what's wrong in the example below?
#!/usr/bin/env bash
FLAG="OffLine"
FLAG_tmp=$FLAG
tail -f /my/path/test.log | while read line
do
if [[ $line == *"OnLine"* ]]; then
FLAG_tmp="OnLine"
fi
if [[ $line == *"OffLine"* ]]; then
FLAG_tmp="OffLine"
fi
if [ "$FLAG" != "$FLAG_tmp" ];then
FLAG=$FLAG_tmp
echo $FLAG # this works: FLAG now holds the current status
fi
done &
# Up to this line I suppose that everything went well, but from here on (i.e. outside
# the tail -f loop) $FLAG only ever holds OffLine - even when it was changed to OnLine a few lines above.
while :
do
#(echo $FLAG > /dev/udp/[my.ip.address]/[port])
echo "$FLAG" # for debug purposes - just echo the current status.
# However it is always OffLine! WHY?
#sleep 15*60 # wait 15 minutes
sleep 2 # for debug, wait only 2 sec
done
EDIT:
Thanks guys for your answers, but I still haven't got a solution.
@123: I corrected my code based on your example, but it still doesn't seem to work.
#!/usr/bin/env bash
FLAG="OffLine"
FLAG_tmp=$FLAG
while read line
do
if [[ $line == *"OnLine"* ]]; then
FLAG_tmp="OnLine"
fi
if [[ $line == *"OffLine"* ]]; then
FLAG_tmp="OffLine"
fi
if [ "$FLAG" != "$FLAG_tmp" ];then
FLAG=$FLAG_tmp
#echo $FLAG
fi
done & < <(tail -f /c/vagrant_data/iso/rpos/log/rpos.log)
while :
do
echo "$FLAG"
sleep 2
done
@chepner: do you have any concrete proposals for how I can solve this problem?
The underlying problem is that each command in a pipeline runs in a subshell, and so does anything started with &, so the assignments to FLAG inside the tail -f | while loop never reach the parent shell. You can see the effect in isolation:
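FLAG="OffLine"
echo "OnLine" | while read line; do FLAG="OnLine"; done
echo "$FLAG"   # still prints OffLine: the loop body ran in a subshell
That said, I think you are making it overly complicated. If you just want to send yourself the last state of OffLine or OnLine, you might try something like this: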
#!/bin/bash
while :
do
FLAG="$(grep -E 'OffLine|OnLine' test.log | tail -n 1)"
if echo "$FLAG" | grep -q OffLine   # grep -q avoids the fragile [ $(...) ] test, which breaks on lines containing spaces
then
FLAG=OffLine
else
FLAG=OnLine
fi
echo $FLAG
sleep 2
done
Or, if you really want to keep the two processes,
#!/bin/bash
echo OffLine > status
tail -f test.log | while read line
do
if [[ "$line" =~ "OffLine" ]]
then
echo OffLine > status
elif [[ "$line" =~ "OnLine" ]]
then
echo OnLine > status
fi
done &
while :
do
cat status > /dev/udp/[my.ip.address]/[port]
sleep 900 # 15 minutes; note that sleep does not evaluate expressions like 15*60
done
I can't tell if something I'm trying here is simply impossible or if I'm really lacking knowledge in bash's syntax. This is the first script I've written.
I've got a Nextcloud instance that I am backing up daily using a script. I want to log the output of the script as it runs to a log file. This is working fine, but I wanted to see if I could also pipe the Nextcloud occ command's output to the log file too.
I've got an if statement here checking if the file scan fails:
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
This works fine and I am able to handle the error if the system cannot execute the command. The error string above is sent to this function:
Print()
{
if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
echo "$1" | tee -a "$log_file"
elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
echo "$1" >> "$log_file"
elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
echo "$1"
fi
}
How can I make it so the output of the occ command is also piped to the Print() function so it can be logged to the console and log file?
I've tried piping the command after ! using | Print without success.
Any help would be appreciated, cheers!
The Print function doesn't read standard input so there's no point piping data to it. One possible way to do what you want with the current implementation of Print is:
if ! occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
Print "'occ' output: $occ_output"
Since there is only one line in the body of the if statement you could use || instead:
occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1) \
|| Print "Error: Failed to scan files. Are you in maintenance mode?"
Print "'occ' output: $occ_output"
The 2>&1 causes both standard output and error output of occ to be captured to occ_output.
Note that the body of the Print function could be simplified to:
[[ $quiet_mode == No ]] && printf '%s\n' "$1"
(( logging )) && printf '%s\n' "$1" >> "$log_file"
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I replaced echo "$1" with printf '%s\n' "$1".
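For instance, with bash's builtin echo:
msg=-n
echo "$msg"            # prints nothing: echo consumes -n as an option flag
printf '%s\n' "$msg"   # prints -n followed by a newline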
How's this? A bit unorthodox perhaps.
Print()
{
case $# in
0) cat;;         # no arguments: read the message from standard input
*) echo "$@";;   # arguments given: use those as the message
esac |
if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
tee -a "$log_file"
elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
cat >> "$log_file"
elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
cat
fi
}
With this, you can either
echo "hello mom" | Print
or
Print "hello mom"
and so your invocation could be refactored to
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
echo "Error: Failed to scan files. Are you in maintenance mode?"
fi |
Print
The obvious drawback is that piping into a function loses the exit code of any failure earlier in the pipeline.
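If you do go this route, bash's pipefail option softens that drawback by making a pipeline report the status of the last command that failed rather than the status of the last command:
set -o pipefail
false | Print    # without pipefail the pipeline's status would be Print's, i.e. 0
echo "$?"        # with pipefail this prints 1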
For a more traditional approach, keep your original Print definition and refactor the calling code to
if output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
: nothing
else
Print "error $?: $output"
Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
I would imagine that the error message is printed to standard error, not standard output; hence the addition of 2>&1.
I included the error code $? in the error message in case that would be useful.
The sending and receiving ends of a pipe must be processes, typically represented by executable commands. An if statement is not a process. You can of course put such a statement into a process. For example,
echo a | (
if true
then
cat
fi )
causes cat to write a to stdout, because the parentheses put the if statement into a child process.
UPDATE: As was pointed out in a comment, the explicit subshell is not needed. One can also write:
echo a | if true
then
cat
fi
I want to apply a dark theme depending on whether it is "AM" or "PM", so I have created a bash script with a "while loop" (so that it runs forever).
But running this script makes the CPU fan speed up, as happens when the PC struggles with something computationally heavy such as a game.
My script is just a few simple lines, so how can I run it without the fan spinning up?
#!/bin/bash
isNightThemeApplied=0
isDayThemeApplied=0
while [[ 1 -le 1 ]]
do
if [[ `date +%r` == *"AM"* ]]
then
if [[ isNightThemeApplied -eq 0 ]]
then
echo "Applying Night Theme..."
lookandfeeltool -a 'org.kde.breezedark.desktop'
isNightThemeApplied=1
isDayThemeApplied=0
echo "Night Theme Applied Successfully"
fi
else
if [[ isDayThemeApplied -eq 0 ]]
then
echo "Applying Day Theme..."
lookandfeeltool -a 'org.kde.breeze.desktop'
isDayThemeApplied=1
isNightThemeApplied=0
echo "Day Theme Applied Successfully"
fi
fi
done
You could do something like this instead: it uses a while loop that wakes up only every 30 minutes, so it is hardly resource hungry, and there is no need to echo anything in my opinion. Just put it in your startup script and run it. Note that date +%p must be evaluated inside the loop; if it were read only once at the top of the script, the theme would never change after the first check.
#!/usr/bin/env bash
b="PM"
c="AM"
theme_change () {
    a=$(date +%p)   # must be re-read on every check, not just once at startup
    if [[ "$a" == "$b" ]]; then
        lookandfeeltool -a 'org.kde.breezedark.desktop'
    elif [[ "$a" == "$c" ]]; then
        lookandfeeltool -a 'org.kde.breeze.desktop'
    fi
}
while true; do
    theme_change
    sleep 1800   # check every 30 minutes
done
A background process should create a file (e.g. result.txt) and populate it with 5 log lines.
I need to check: 1) that the file exists, and 2) that all the logs (5 lines) have been stored.
If these conditions are not satisfied within xxx seconds, the process has failed and the script should print "FAILED" in the terminal; otherwise it should print "SUCCEED".
I think I need to use a while loop, but I don't know how to implement these conditions.
N.B.: the lines are appended to the file (asynchronously), and I don't have to check the content of the logs, just that all of them have been stored.
This one checks the log and waits 2 seconds before failing:
#!/bin/bash   # the [[ ... ]] test below is a bashism, so use bash rather than sh
log_success() {
[[ $(tail -n "$2" "$1" 2> /dev/null | wc -l) -eq "$2" ]]
}
log_success 'file.log' 5 || sleep 2
if log_success 'file.log' 5; then
echo "success"
else
echo "fail"
fi
Well, here's my draft for that:
FILE=/path/to/something
for (( ;; )); do
if [[ -e $FILE ]] && WC=$(wc -l < "$FILE") && [[ $WC -ge 5 ]]; then
: # Valid.
fi
done
Or
FILE=/path/to/something
for (( ;; )); do
if [[ ! -e $FILE ]]; then
: # File doesn't exist. Do something.
elif WC=$(wc -l < "$FILE") && [[ $WC -ge 5 ]]; then
: # Valid.
else
: # File doesn't contain 5 or more lines or is unreadable. Invalid.
fi
done
This one could still have problems with race conditions though.
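Putting the two checks together with a deadline gives a complete sketch; the file name, expected line count, and timeout (the question's "xxx") are assumptions you would adjust:
#!/bin/bash
FILE=result.txt   # assumed name of the file the background process creates
WANT=5            # expected number of log lines
TIMEOUT=30        # assumed deadline in seconds ("xxx" in the question)
deadline=$(( SECONDS + TIMEOUT ))
while (( SECONDS < deadline )); do
    # Success: the file exists and already holds all the expected lines.
    if [[ -e $FILE ]] && (( $(wc -l < "$FILE") >= WANT )); then
        echo "SUCCEED"
        exit 0
    fi
    sleep 1   # poll once per second instead of busy-waiting
done
echo "FAILED"
exit 1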
I wrote a little bash script called "wp" which uploads files to an FTP server. It uses the wput utility. It takes the list of files from a text file; when an upload has finished, it comments out the corresponding line in the text file with a hash sign (#). The success of the upload is detected according to the last line in the logfile. My question is: how can I avoid multiple instances of my script being started? I am trying to detect a running instance with pgrep, but it doesn't work correctly:
#!/bin/bash
if [ "$(pgrep ^wp$|wc -l)" -eq "2" ]
then
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is starting..."
else
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is already running!"
exit
fi
server="ftp://username:password@ftp.ftpserver.com"
logfile=~/uploads.log
listfile=~/uploads.txt
list_backup=~/uploads_bak000.txt
while read f;
do
ret=""
if [ "${f:0:1}" = "#" -o "$f"1 = 1 ]
then
if [ "$f"1 = 1 ]
then
:
#echo "invalid string: "$f
else
#first character is remark sign # then empty command -> :
echo "remark line skipped: "$f
fi
else
#while string $ret is empty
while [ -z "$ret" ]
do
wput "$f" --tries=-1 "$server" 2>&1|tee -a $logfile #> /dev/null
ret=$(tail -n 1 "$logfile"|grep "FINISHED\|Nothing\|Skipped\|Transfered")
done
if [ -n "$ret" ]
then
cat $listfile > $list_backup
awk -v f="$f" '{if ($0==f && $0!~/#/) print "#" $0; else print $0;}' $list_backup > $listfile
fi
fi
done < $listfile
There are quick-n-dirty solutions that use ps with grep (don't do this).
It is better to use a lock file as a "mutex". A nice way of doing this is by using a directory as a lock file (http://mywiki.wooledge.org/BashFAQ/045).
I would also suggest taking a look at http://mywiki.wooledge.org/ProcessManagement#How_do_I_make_sure_only_one_copy_of_my_script_can_run_at_a_time.3F, which mentions the use of setlock (http://cr.yp.to/daemontools/setlock.html), a tool that abstracts the lock file handling for you.
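A minimal sketch of the directory-as-lock approach from the FAQ (the lock path is an assumption):
#!/bin/bash
lockdir=/tmp/wp.lock   # assumed location for the lock

# mkdir is atomic: it fails if the directory already exists,
# so at most one instance can acquire the lock at a time.
if ! mkdir "$lockdir" 2>/dev/null; then
    echo "wp script is already running!" >&2
    exit 1
fi
# Release the lock when the script exits, however it exits.
trap 'rmdir "$lockdir"' EXIT

echo "wp script is starting..."
# ... rest of the upload script ...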
We have to cache quite a big database of data after each upload, so we created a bash script that should handle it for us. The script should start 4 parallel curls to the site and, once they're done, start the next ones from the URL list we store in a file.
In theory everything works, and the concept works if we run 4 processes from our local machines against the target site.
If I set MAX_NPROC=1, each curl takes as long as it would if a browser hit the URL, i.e. about 20 s. If I set MAX_NPROC=2, the time each request takes triples.
Am I missing something? Is there an Apache setting that is slowing us down, or a secret cURL setting that I'm missing?
Any help will be appreciated. Please find the bash script below.
#!/bin/bash
if [[ -z $2 ]]; then
MAX_NPROC=4 # default
else
MAX_NPROC=$2
fi
if [[ -z $1 ]]; then
echo "File with URLs is missing"
exit
fi;
NUM=0
QUEUE=""
DATA=""
URL=""
declare -a URL_ARRAY
declare -a TIME_ARRAY
ERROR_LOG=""
function queue {
QUEUE="$QUEUE $1"
NUM=$(($NUM+1))
}
function regeneratequeue {
OLDREQUEUE=$QUEUE
echo "OLDREQUEUE:$OLDREQUEUE"
QUEUE=""
NUM=0
for PID in $OLDREQUEUE
do
process_count=`ps ax | awk '{print $1 }' | grep -c "^${PID}$"`
if [ $process_count -eq 1 ] ; then
QUEUE="$QUEUE $PID"
NUM=$(($NUM+1))
fi
done
}
function checkqueue {
OLDCHQUEUE=$QUEUE
for PID in $OLDCHQUEUE
do
process_count=`ps ax | awk '{print $1 }' | grep -c "^${PID}$"`
if [ $process_count -eq 0 ] ; then
wait $PID
my_status=$?
if [[ $my_status -ne 0 ]]
then
echo "`date` $my_status ${URL_ARRAY[$PID]}" >> $ERROR_LOG
fi
current_time=`date +%s`
old_time=${TIME_ARRAY[$PID]}
time_difference=$(expr $current_time - $old_time)
echo "`date` ${URL_ARRAY[$PID]} END ($time_difference seconds)" >> $REVERSE_LOG
#unset TIME_ARRAY[$PID]
#unset URL_ARRAY[$PID]
regeneratequeue # at least one PID has finished
break
fi
done
}
REVERSE_LOG="$1.rvrs"
ERROR_LOG="$1.error"
echo "Cache STARTED at `date`" > $REVERSE_LOG
echo "" > ERROR_LOG
while read line; do
# create the command to be run
DATA="username=user@server.com&password=password"
URL=$line
CMD=$(curl --data "${DATA}" -s -o /dev/null --url "${URL}")
echo "Command: ${CMD}"
# Run the command
$CMD &
# Get PID for process
PID=$!
queue $PID;
URL_ARRAY[$PID]=$URL;
TIME_ARRAY[$PID]=`date +%s`
while [ $NUM -ge $MAX_NPROC ]; do
checkqueue
sleep 0.4
done
done < $1
echo "Cache FINISHED at `date`" >> $REVERSE_LOG
exit
The network is almost always the bottleneck; spawning more connections usually makes each one slower.
You can check whether parallelizing buys you anything by spawning several
time curl ...... &
commands by hand and comparing the timings.
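For example, a rough manual test along those lines (reusing the DATA and URL variables from the script above):
for i in 1 2 3 4; do
    time curl --data "${DATA}" -s -o /dev/null --url "${URL}" &
done
wait   # all four run concurrently; compare the reported times with a single request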