I wrote a shell script and added it to my crontab. It's supposed to run every minute, check the 1-minute load average, and, if it's over 40, log the date and load and then restart Apache httpd. Here is my script:
#!/bin/bash
LOGFILE=/home/user/public_html/domain.com/cron/restart.log
function float_to_int() {
echo $1 | cut -d. -f1
}
check=$(uptime | awk -F' *,? *' '{print $12}')
now=$(date)
checkk=$(float_to_int $check)
if [[ $checkk > 40 ]]; then
echo $now $checkk >> $LOGFILE 2>&1
/usr/bin/systemctl restart httpd.service
fi
If I look at the log file I see the following:
Wed Jul 3 20:02:01 EDT 2019 70
Wed Jul 3 23:03:01 EDT 2019 43
Wed Jul 3 23:12:01 EDT 2019 9
Wed Jul 3 23:13:01 EDT 2019 7
Wed Jul 3 23:14:01 EDT 2019 6
Wed Jul 3 23:15:02 EDT 2019 5
Wed Jul 3 23:16:01 EDT 2019 5
Something is clearly wrong: it should only log and restart Apache when the load is over 40, but as you can see from the log, entries were written when the load was 9, 7, 6, 5 and 5. Could someone point me in the right direction?
From man bash, section CONDITIONAL EXPRESSIONS:
string1 > string2
True if string1 sorts after string2 lexicographically.
You will want to either use [[ with its numeric -gt operator, or use arithmetic evaluation (( )) instead of [[:
if (( checkk > 40 )); then
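For example, with arithmetic evaluation the check in your script would become (a sketch using the variable names from your script; [[ $checkk -gt 40 ]] works the same way):

if (( checkk > 40 )); then
    echo "$now $checkk" >> "$LOGFILE"
    /usr/bin/systemctl restart httpd.service
fi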
Here's one in GNU awk (GNU awk due to strftime()):
awk '
$1 > 0.4 { # interval above 0.4
logfile="./log.txt" # my logpath, change it
print strftime("%c"), $1 >> logfile # date and load to log
cmd="/usr/bin/systemctl restart httpd.service" # command to use for restarting
if((ret=(cmd|getline res)) !=0 ) # store return value and result
print "failed: " ret # if failed
else
print "success"
}' /proc/loadavg # getting load avg from /proc
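If you save the awk program itself (without the surrounding shell quotes) to a file, it can be run straight from cron; a sketch, where /home/user/check_load.awk is a hypothetical path:

# crontab entry: run every minute; gawk is needed for strftime()
* * * * * /usr/bin/gawk -f /home/user/check_load.awk /proc/loadavg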
I need to add the timestamp of each remote server to the output and check whether the timestamps are the same or not.
I am able to print the machine IP and date.
#!/bin/bash
all_ip=(192.168.1.121 192.168.1.122 192.168.1.123)
for ip_addr in "${all_ip[@]}"; do
aws_ip=$"ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"
date=date
sshpass -p "password" ssh root#$ip_addr "$aws_ip & $date"
echo "==================================================="
done
Getting output as:
Wed 27 Jul 2022 05:48:15 AM PDT
192.168.1.121
===================================================
Wed Jul 27 05:48:15 PDT 2022
192.168.1.122
===================================================
Wed Jul 27 05:48:15 PDT 2022
192.168.1.123
===================================================
How can I check whether the timestamps (ignoring seconds) of all machines are the same or not?
e.g. (Wed 27 Jul 2022 05:48:15 || Wed 27 Jul 2022 05:48:15 || Wed 27 Jul 2022 05:48:15)
Expected Output:
|| Time are in sync on all machines || # if in sync
|| Time are not in sync on all machines || # if not sync
tmpdir=$(mktemp -d)
trap 'rm -r "$tmpdir"' EXIT
for ip in "${allips[#]}"; do
# Do N connections, in paralllel, each one writes to a separate file.
sshpass -p "password" ssh root#"$ip" "date +%Y-%m-%d_%H:%M" > "$tmpdir/$ip.txt" &
done
wait
times=$(
for i in "$tmpdir"/*.txt; do
# print filename with file contents.
echo "$i $(<$i)"
done |
# Sort them on second column
sort -k2 |
# Uniq on second field
uniq -f 1
)
echo "$times"
timeslines=$(wc -l <<<"$times")
if ((timeslines == 1)); then
echo "YAY! minutes on all servers the same"
fi
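To also report the out-of-sync case from your expected output, the final check could get an else branch (a sketch):

if ((timeslines == 1)); then
    echo "|| Time are in sync on all machines ||"
else
    echo "|| Time are not in sync on all machines ||"
fi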
First, you may adjust your date command as follows in order to exclude the seconds:
date +%Y-%m-%d_%H:%M
Then, simply grep your output and validate that all the timestamps are identical. You may dump the output to a temporary file or handle it any other way.
Ex:
grep [aPatternSpecificToTheLinewithTheDate] [yourTemporaryFile] | sort | uniq | wc -l
If the result is 1, it means that all the timestamps are identical.
However, you will have to deal with the corner case where the minute changes while you are fetching the time from all your servers.
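As a concrete sketch of that approach, reusing the all_ip array and the sshpass/ssh invocation from your question (no grep is needed here because the temporary file contains only timestamps):

tmpfile=$(mktemp)
for ip_addr in "${all_ip[@]}"; do
    # one minute-resolution timestamp per server
    sshpass -p "password" ssh root@"$ip_addr" "date +%Y-%m-%d_%H:%M" >> "$tmpfile"
done

# count distinct timestamps; exactly 1 means every server reports the same minute
if [ "$(sort "$tmpfile" | uniq | wc -l)" -eq 1 ]; then
    echo "|| Time are in sync on all machines ||"
else
    echo "|| Time are not in sync on all machines ||"
fi
rm -f "$tmpfile"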
I want to create a simple command which logs the date/time, error and stdout to a file. So if I move a file to a folder and it says "Permission denied", I want a line in my file that shows the current date/time and also the error.
I know how to write the stdout and the error of a command to a file, but how do I add a time? Thanks in advance!
Here's some code that can help:
function add_date() {
while IFS= read -r line; do
echo "$(date): $line"
done
}
{
# Your code here
} 2>&1 | add_date >> $LOGFILE
This will add the date to the beginning of every line output by your code (everything between the braces, anyway).
There may be some issues with output buffering: if the code buffers its output, several lines can arrive at once and show up with the same timestamp in your logfile (see the note after the example below).
Here's an example of the code above applied:
: ${LOGFILE:=logfile}
function add_date() {
while IFS= read -r line; do
echo "$(date): $line"
done
}
{
for a in {1..10}
do
echo $a
sleep 2
done
} 2>&1 | add_date >> $LOGFILE
And the results:
$ cat logfile
Thu Mar 28 14:50:46 EDT 2019: 1
Thu Mar 28 14:50:48 EDT 2019: 2
Thu Mar 28 14:50:50 EDT 2019: 3
Thu Mar 28 14:50:52 EDT 2019: 4
Thu Mar 28 14:50:54 EDT 2019: 5
Thu Mar 28 14:50:56 EDT 2019: 6
Thu Mar 28 14:50:58 EDT 2019: 7
Thu Mar 28 14:51:00 EDT 2019: 8
Thu Mar 28 14:51:02 EDT 2019: 9
Thu Mar 28 14:51:04 EDT 2019: 10
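If the buffering mentioned above becomes a problem, forcing the producing command to line-buffer its output can help; a sketch, assuming GNU coreutils' stdbuf is available and ./myjob stands in for your command:

stdbuf -oL ./myjob 2>&1 | add_date >> "$LOGFILE"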
I use a function to do something like what you're asking for:
status_msg () {
echo "`hostname --short`:`date '+%m_%d_%Y_%H_%M_%S'`: $*"
}
The above function can be called via
status_msg "This is a test line"
Which would result in
hostname:03_28_2019_13_50_31: This is a test line
Or if you're running a command which produces output, you can use it like so:
<command> 2>&1 | while read -r line
do
status_msg $line
done
You can redirect stderr to stdout like so:
2>&1
then prepend the time to each line of either stream like so:
sed "s/\(.\)/`date` \1/"
so we wind up with something like this:
2>&1 | sed "s/\(.\)/`date` \1/"
Note that the backquoted date is expanded once by the shell before sed starts, so every line receives the same timestamp; for long-running commands one of the per-line approaches above is more accurate.
I need to know when I can do maintenance on a frequently used system. All I can check is a logfile, where I can see when the users, on average, start and end their work.
I need to do this for weekdays, Saturday and Sunday.
I know how to grep this information, but I don't know how to separate weekdays from weekends or how to build an average from the timestamps. Can anyone help me with that, please? Kind regards
Edit: More information as requested
Here is my script so far:
i=14
while [ "$i" -ge 0 ]
do dow=$(date -d "-$i day" +%A)
if [ "$dow" = "Saturday" ] || [ "$dow" = "Sunday" ]
then i=$((i-1)); continue
fi
beginnweek+=(`zgrep T400: logfile|grep -v 'T811:Icinga'|head -n 1|cut -d " " -f2`)
endweek+=(`zgrep T400: logfile|grep -v 'T811:Icinga'|tail -n 1|cut -d " " -f2`)
i=$((i-1))
done
###calculate average beginn and end - that's what's missing
i=14
while [ "$i" -ge 0 ]
do dow=$(date -d "-$i day" +%A)
if [ "$dow" = "Monday" ] || [ "$dow" = "Tuesday" ] || [ "$dow" = "Wednesday" ] || [ "$dow" = "Thursday" ] || [ "$dow" = "Friday" ] || [ "$dow" = "Sunday" ]
then i=$((i-1)); continue
fi
beginnSat+=(`zgrep T400: logfile|grep -v 'T811:Icinga'|head -n 1|cut -d " " -f2`)
endSat+=(`zgrep T400: logfile|grep -v 'T811:Icinga'|tail -n 1|cut -d " " -f2`)
i=$((i-1))
done
###calculate average beginn and end - that's what's missing
i=14
while [ "$i" -ge 0 ]
do dow=$(date -d "-$i day" +%A)
if [ "$dow" = "Monday" ] || [ "$dow" = "Tuesday" ] || [ "$dow" = "Wednesday" ] || [ "$dow" = "Thursday" ] || [ "$dow" = "Friday" ] || [ "$dow" = "Saturday" ]
then i=$((i-1)); continue
fi
beginnSun+=(`zgrep T400: logfile|grep -v 'T811:Icinga'|head -n 1|cut -d " " -f2`)
endSun+=(`zgrep T400: logfile|grep -v 'T811:Icinga'|tail -n 1|cut -d " " -f2`)
i=$((i-1))
done
###calculate average beginn and end - that's what's missing
I'm working with
GNU bash, version 4.2.46
on SLES and with
GNU bash, version 3.1.17
The logfiles look like this:
19/10/2018 04:00:03.175 : [32631] INFO : (8) >>\\\\\\\\\\T090:NOPRINT,NOSAVE|T400:551200015480|T811:Icinga|T8904:001|T8905:001|//////////
19/10/2018 07:17:19.501 : [4935] INFO : >>\\\\\\\\\\T021:datamax|T050:software|T051:V 1.0|T101:|T400:428568605212|T520:00000000|T510:|T500:|T545:19.10.2018||T821:DE|PRINTINFO:|PRINT1:|PRINT0:intermec pf4i.int01|//////////
First of all you should ask yourself if you really want to use an average. An average only makes sense if all users log in in the morning, stay logged in over noon, and log out in the evening. If you have logouts distributed all over the day, the average logout time is meaningless.
But even in such an idealized case you shouldn't start maintenance right after the average logout time since around 50% of the users would still be logged in at that time.
I would rather visualize logins as bars and determine a good maintenance time by hand. ranwhen.py is a very nice tool to display when your system was up. Maybe you can find something similar for logins or adapt the tool yourself.
Nevertheless, here's what you asked for:
Parsing The Logs
Instead of parsing the log manually, I would advise you to use the last tool, which prints the last logins in a simpler format. Since you are on Linux, there should be an -F option for last to print dates prefixed with their weekday. With -R we suppress some unneeded information. The output of last -FR looks as follows:
socowi pts/5 Fri Oct 19 17:42:16 2018 still logged in
reboot system boot Fri Oct 19 14:34:44 2018 still running
alice pts/2 Fri Oct 19 10:35:05 2018 - Fri Oct 19 11:51:03 2018 (01:15)
alice tty7 Fri Oct 19 10:24:32 2018 - Fri Oct 19 11:51:52 2018 (01:27)
bob tty7 Fri Oct 19 10:04:21 2018 - Fri Oct 19 10:14:01 2018 (00:09)
reboot system boot Fri Oct 19 12:03:34 2018 - Fri Oct 19 11:51:55 2018 (00:-11)
carol tty7 Fri Oct 19 08:10:49 2018 - down (01:50)
dave tty7 Thu Oct 18 12:48:12 2018 - crash (04:28)
wtmp begins Tue Oct 16 12:38:03 2018
To extract the valid login and logout dates we use the following functions.
onlyUsers() { last -FR | head -n -2 | grep -Ev '^reboot '; }
onlyDates() { grep -F :; }
loginDates() { onlyUsers | cut -c 23-46 | onlyDates; }
logoutDates() { onlyUsers | cut -c 50-73 | onlyDates; }
Filter By Weekday
The functions loginDates and logoutDates print something like
Fri Oct 19 17:42:16 2018
Fri Oct 19 14:34:44 2018
[...]
Thu Oct 18 12:48:12 2018
Filtering out specific weekdays is pretty easy:
workweek() { grep -E 'Mon|Tue|Wed|Thu|Fri'; }
weekend() { grep -E 'Sat|Sun'; }
If you want all login dates on weekends, you would write loginDates | weekend.
Computing An Average Time
To compute the average time from multiple dates, we first extract the time of day from each date. Then we convert the HH:MM format to minutes since midnight. Computing the average of a list of numbers is easy. Afterwards we convert back to HH:MM.
timeOfDay() { cut -c 12-16; }
timeToMins() { awk -F: '{print $1*60 + $2}'; }
minsToTime() { awk '{printf "%02d:%02d", $1/60, $1%60}'; }
avgMins() { awk '{s+=$1}END{printf "%d", s/NR}'; }
avgTime() { timeOfDay | timeToMins | avgMins | minsToTime; }
Putting Everything Together
To get the average times just combine the commands as needed. Some examples:
# Average login times during workweeks
avg="$(loginDates | workweek | avgTime)"
# Average logout times on weekends
avg="$(logoutDates | weekend | avgTime)"
How can I get the number of logins of each day from the beginning of the wtmp file using AWK?
I thought about using an associative array, but I don't know how to implement it in AWK.
myscript.sh
#!/bin/bash
awk 'BEGIN{numberoflogins=0}
#code goes here'
The output of the last command:
[fnorbert#localhost Documents]$ last
fnorbert tty2 /dev/tty2 Mon Apr 24 13:25 still logged in
reboot system boot 4.8.6-300.fc25.x Mon Apr 24 16:25 still running
reboot system boot 4.8.6-300.fc25.x Mon Apr 24 13:42 still running
fnorbert tty2 /dev/tty2 Fri Apr 21 16:14 - 21:56 (05:42)
reboot system boot 4.8.6-300.fc25.x Fri Apr 21 19:13 - 21:56 (02:43)
fnorbert tty2 /dev/tty2 Tue Apr 4 08:31 - 10:02 (01:30)
reboot system boot 4.8.6-300.fc25.x Tue Apr 4 10:30 - 10:02 (00:-27)
fnorbert tty2 /dev/tty2 Tue Apr 4 08:14 - 08:26 (00:11)
reboot system boot 4.8.6-300.fc25.x Tue Apr 4 10:13 - 08:26 (-1:-47)
wtmp begins Mon Mar 6 09:39:43 2017
The shell script's output should be:
Apr 4: 4
Apr 21: 2
Apr 24: 3
(using an associative array, if possible)
In awk, arrays can be indexed by strings or numbers, so you can use them as associative arrays.
However, what you're asking will be hard to do reliably with awk alone, because the delimiters are whitespace: empty fields will throw off the column numbering, and if you use FIELDWIDTHS you'll get thrown off by columns longer than their assigned width.
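As a minimal illustration of awk's associative arrays, counting occurrences of the first field of some throwaway input:

printf 'a\nb\na\n' | awk '{count[$1]++} END {for (k in count) print k": "count[k]}'

which prints a: 2 and b: 1 (the order of a for (k in count) loop is not guaranteed).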
If all you're looking for is just the number of logins per day you might want to use a combination of sed and awk (and sort):
last | \
sed -E 's/^.*(Mon|Tue|Wed|Thu|Fri|Sat|Sun) (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) ([ 0-9]{2}).*$/\2 \3/p;d' | \
awk '{arr[$0]++} END { for (a in arr) print a": " arr[a]}' | \
sort -M
The sed -E uses extended regular expressions, and the pattern just prints the date of each line that is emitted by last (it matches on the day of week, but prints only the month and day).
We could have used uniq -c to get the counts, but using awk we can do an associative array as you hinted.
Finally using sort -M we're sorting on the abbreviated date formats like Apr 24, Mar 16, etc.
Try the following GNU awk script (the three-argument match() it uses is a gawk extension; it also assumes that all entries belong to the current month):
myscript.awk:
#!/bin/awk -f
{
a[NR]=$0; # saving each line into an array indexed by line number
}
END {
for (i=NR-1;i>1;i--) { # iterating lines in reverse order(except the first/last line)
if (match(a[i],/[A-Z][a-z]{2} ([A-Z][a-z]{2}) *([0-9]{1,2}) [0-9]{2}:[0-9]{2}/, b))
m=b[1]; # saving month name
c[b[2]]++; # accumulating the number of occurrences
}
for (i in c) print m,i": "c[i]
}
Usage:
last | awk -f myscript.awk
The output:
Apr 4: 4
Apr 21: 2
Apr 24: 3
Okay, so I run an openssl command to get the end date of an expired certificate. Doing so gives me this:
enddate=Jun 26 23:59:59 2012 GMT
Then I cut everything out and just leave the month, which is "Jun".
Now the next part of my script is to tell the user whether the certificate is expired or not, and to do that I use an if statement which looks like this:
if [ $exp_year -lt $cur_year && $exp_month -lt $cur_month ]; then
echo ""
echo "Certificate is still valid until $exp_date"
echo ""
else
echo ""
echo "Certificate has expired on $exp_date, please renew."
echo ""
fi
I can't figure out how to convert the month into an integer to even do the comparison.
I thought of doing it the brute-force way, which is this:
Jan=01
Feb=02
Mar=03
...
Clearly that's a terrible way to do it. Does anyone know what I can do?
Well, you can use:
now=$(date +%s)
cert=$(date --date="$enddate" +%s)
if [ $cert -lt $now ]; then
echo "Old!"
fi
i.e. convert the dates into seconds past the epoch and compare those.
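For example, fed with the notAfter date from openssl (a sketch; the certificate path is hypothetical and GNU date is assumed for --date):

# extract "Jun 26 23:59:59 2012 GMT" from the certificate
enddate=$(openssl x509 -enddate -noout -in /path/to/cert.pem | cut -d= -f2)

now=$(date +%s)
cert=$(date --date="$enddate" +%s)

if [ "$cert" -lt "$now" ]; then
    echo "Certificate has expired on $enddate, please renew."
else
    echo "Certificate is still valid until $enddate"
fi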
I would recommend using Petesh's answer, but here's a way to set up an associative array if you have Bash 4:
months=(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec)
declare -A mlookup
for monthnum in "${!months[@]}"
do
mlookup[${months[monthnum]}]=$((monthnum + 1))
done
echo "${mlookup["Jun"]}" # outputs 6
If you have Bash before version 4, you can use AWK to help you out:
month=Feb
awk -v "month=$month" 'BEGIN {months = "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec"; print (index(months, month) + 3) / 4}'
Another way in pure Bash (any version):
months="Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec"
month=Aug
string="${months%$month*}"
echo "$((${#string}/4 + 1))"