Unix - bash - OS X - macOS

I'm trying to use grep to find some information, evaluate it, and then perform an action based on it.
Here's what I have; any help is appreciated.
Edit:
#! /bin/bash
UT=$(/usr/sbin/system_profiler SPSoftwareDataType | grep "Time since boot" | grep "days")
if [ "$UT" -ge "5 days" ]; then
echo this
else
echo that
fi
The output of system_profiler SPSoftwareDataType looks like this:
System Software Overview:
System Version: OS X 10.9.5
Kernel Version: Darwin 13.4.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: xxxxxxxxxxxx
User Name: xxxxxxxxxx (xxxxxxxxxx)
Secure Virtual Memory: Enabled
Time since boot: 8 days 3:25
Trying with sysctl:
#! /bin/bash
UT=$(awk -F":" ' $4 > 200 ' sysctl -n kern.boottime)
echo $UT
if [ "$UT" -ge "1430315296" ]; then
echo this
else
echo that
fi

I can't see how you expect awk to compare anything specified in days and hours with anything else... nor do I know why you would choose to parse system_profiler output.
Have you considered:
sysctl -n kern.boottime
{ sec = 1431023230, usec = 0 } Thu May 7 19:27:10 2015
which gives the boot time in seconds since the epoch, a simple integer that you can compare with other times?
So you can parse out the seconds like this:
UT=$(sysctl -n kern.boottime | awk -F"[ ,]+" '{print $4}')
The -F"[ ,]+" tells awk to treat runs of spaces or commas as field separators.
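Putting that together, a minimal sketch of the original "at least five days" check, assuming the kern.boottime output format shown above (the BOOT and NOW variable names are just for illustration):
#!/bin/bash
# Boot time in seconds since the epoch (4th field of the kern.boottime output).
BOOT=$(sysctl -n kern.boottime | awk -F"[ ,]+" '{print $4}')
NOW=$(date +%s)
# 5 days = 5 * 86400 seconds of uptime.
if [ $(( NOW - BOOT )) -ge $(( 5 * 86400 )) ]; then
    echo this
else
    echo that
fi
Since both values are plain epoch seconds, the arithmetic comparison works without any date parsing.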

You can do it all in awk:
awk '/Time since boot.*days/ {print ($4>5?"This":"That")}' file
This
awk '/Time since boot.*days/ {print ($4>8?"This":"That")}' file
That
Edit:
#!/bin/bash
/usr/sbin/system_profiler SPSoftwareDataType | awk '/Time since boot.*days/ {print ($4>8?"This":"That")}'
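If the threshold should be configurable rather than hard-coded, the day count can be passed in with awk's -v option; a small variation on the same one-liner, where DAYS is a hypothetical shell variable:
DAYS=5
/usr/sbin/system_profiler SPSoftwareDataType | awk -v d="$DAYS" '/Time since boot.*days/ {print ($4>d?"This":"That")}'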

Find remote and local version numbers, compare them, and download if greater

I have a workable solution but it is not presentable/clean for public usage.
The file "version.txt" is both remote and local. The difference is the number:
Remote:
17 March 2022
FVWM myExtensions ver. 3.1.4
Local:
15 March 2022
FVWM myExtensions ver. 3.1.1
In my "poor" solution I manually changed the lines into one line for awk to find the last column and sed removing the dots between the number. Both results are made as variables.
awk '{print $NF}' download/version.txt > tmpGit.txt
VARgit=`sed 's|[.]||g' tmpGit.txt`
awk '{print $NF}' ~/.fvwm/version.txt > tmpLocal.txt
VARlocal=`sed 's|[.]||g' tmpLocal.txt`
if [ "$VARgit" -gt "$VARlocal" ]; then
echo "New update available.";
else
echo "No update.";
fi
I have not found a solution for finding the number in text lines and comparing multiple dot numbers.
Thank you in advance.
You could do this with grep, e.g.:
IFS=. read rmajor rminor rpatch < <(grep -oE '[0-9]+\.[0-9]+\.[0-9]+' remote.txt)
IFS=. read lmajor lminor lpatch < <(grep -oE '[0-9]+\.[0-9]+\.[0-9]+' local.txt)
[ $rmajor -gt $lmajor ] && echo "New major version"
[ $rminor -gt $lminor ] && echo "New minor version"
[ $rpatch -gt $lpatch ] && echo "New patchlevel"
Edit
So to test if remote.txt contains a newer version, assuming all version items are numerical, something like this works:
if [ $rmajor -gt $lmajor ]; then
echo "New major version"
elif [ $rmajor -eq $lmajor -a $rminor -gt $lminor ]; then
echo "New minor version"
elif [ $rmajor -eq $lmajor -a $rminor -eq $lminor -a $rpatch -gt $lpatch ]; then
echo "New patchlevel"
else
echo "Remote has same version or older."
fi
Using GNU sort for Version-sort:
$ awk -v OFS='\t' '/FVWM/{print $NF, FILENAME}' local remote | sort -k1,1Vr
3.1.4 remote
3.1.1 local
That tells you the version number in decreasing order and the name of the file containing each version number.
The above was run on these input files:
$ head local remote
==> local <==
15 March 2022
FVWM myExtensions ver. 3.1.1
==> remote <==
17 March 2022
FVWM myExtensions ver. 3.1.4
Given the above, this will print the name of the file that contains the higher version number if the two differ, and nothing otherwise:
$ awk -v OFS='\t' '/FVWM/{print $NF, FILENAME}' local remote |
sort -k1,1Vr |
awk '
{ vers[NR]=$1; files[NR]=$0; sub(/[^\t]+\t/,"",files[NR]) }
END{ if ( vers[1] != vers[2] ) print files[1] }
'
remote
So then you can just test whether that output is the remote file name and, if so, download.
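A minimal sketch of that final step, assuming the same local and remote file names as above; newest holds the pipeline's output, and DOWNLOAD_URL is a hypothetical placeholder for wherever the new version lives:
newest=$(awk -v OFS='\t' '/FVWM/{print $NF, FILENAME}' local remote |
    sort -k1,1Vr |
    awk '{ vers[NR]=$1; files[NR]=$0; sub(/[^\t]+\t/,"",files[NR]) }
         END{ if ( vers[1] != vers[2] ) print files[1] }')
if [ "$newest" = "remote" ]; then
    echo "New update available."
    # curl -o download/version.txt "$DOWNLOAD_URL"   # hypothetical download step
fi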
finding the number in text line
This is easy to do using a regular expression, for example with GNU AWK. Let file.txt's content be
17 March 2022
FVWM myExtensions ver. 3.1.4
then
awk 'BEGIN{FPAT="[0-9.]+[.][0-9.]+"}NF{print $1}' file.txt
output
3.1.4
Explanation: I inform GNU AWK that a field consists of one or more digits or dots, a literal dot (hence [.] rather than .), and one or more digits or dots. Then, if at least one such field is present in a given line, print the 1st field (this solution assumes you have at most one such field in each line).
(tested in gawk 4.2.1)
comparing multiple dot numbers
I do not know of a ready-made solution for this, but I want to note that this task might seem easy, yet it is not unless you can enforce certain restrictions. In your example you have 3.1.1 and 3.1.4, where all elements are below 10, so you will get the expected result if you compare them as you would for alphabetic ordering. Consider what happens if you have 3.9 and 3.11: the latter will be considered earlier, because the first difference is at the 3rd position and 1 comes earlier in the alphabet than 9. This problem does not arise if you can enforce that every part consists of exactly one digit.
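If that restriction cannot be enforced, one workaround is to compare the dotted numbers field by field instead of as strings; a minimal bash sketch, assuming exactly three numeric components (ver_newer is a hypothetical helper name):
ver_newer() {    # succeeds if version $1 is strictly newer than version $2
    IFS=. read -r a1 a2 a3 <<< "$1"
    IFS=. read -r b1 b2 b3 <<< "$2"
    [ "$a1" -gt "$b1" ] && return 0
    [ "$a1" -eq "$b1" ] && [ "$a2" -gt "$b2" ] && return 0
    [ "$a1" -eq "$b1" ] && [ "$a2" -eq "$b2" ] && [ "$a3" -gt "$b3" ] && return 0
    return 1
}
ver_newer 3.11.0 3.9.0 && echo "3.11.0 is newer"    # numeric compare gets this right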

Awk extracting column but just from one row

I have a log file in the following form:
2017-12-11 10:20:16.993 ...
2017-12-12 10:19:16.993 ...
2017-12-13 10:17:16.993 ...
and I want to extract the first column via awk -F., compare it to the actual system time in seconds, and print the line if the difference is less than 300 seconds.
SYSTEM_TIME=$(date +%s)
awk -F. -v system_time=$SYSTEM_TIME '{gsub(/[-:]/," ",$1); if(system_time-mktime($1) <= 300) {print $0}}' log.txt
This is my code, but I can't use mktime because it's not in the POSIX norm. Can it be done without it?
Thanks,
Ahmed
General remark: log files are often incomplete. A date-time format is given, but the time zone is often missing. When daylight saving time comes into play, it can mess up your complete karma if you are missing the time zone.
Note: in all commands below, it is assumed that the dates in the log file are in UTC and that the system runs in UTC. If this is not the case, be aware that daylight saving time will create problems when running any of the commands below around the time daylight saving kicks in.
Combination of date and awk: (not POSIX)
If your date command has the -d flag (not POSIX), you can run the following:
awk -v r="$(date -d '300 seconds ago' '+%F %T.%3N')" '(r < $0)' logfile
GNU awk only:
If you want to make use of mktime, it is then easier to just do:
awk 'BEGIN{s=systime();FS=OFS="."}
{t=$1;gsub(/[-:]/," ",t); t=mktime(t)}
(s-t < 300)' logfile
I am working under the assumption that the log files are not created in the future, so all log times are always smaller than the system time.
POSIX:
If you cannot make use of mktime but want to use POSIX only, which also implies that date does not have the -d flag, you can create your own implementation of mktime. Be aware that the version presented here does not do any timezone corrections as mktime does; mktime_posix assumes that the date string is in UTC:
awk -v s="$(date +%s)" '
# Algorithm from "Astronomical Algorithms" By J.Meeus
function mktime_posix(datestring, a,t) {
split(datestring,a," ")
if (a[1] < 1970) return -1
if (a[2] <= 2) { a[1]--; a[2]+=12 }
t=int(a[1]/100); t=2-t+int(t/4)
t=int(365.25*a[1]) + int(30.6001*(a[2]+1)) + a[3] + t - 719593
return t*86400 + a[4]*3600 + a[5]*60 + a[6]
}
BEGIN{FS=OFS="."}
{t=$1;gsub(/[-:]/," ",t); t=mktime_posix(t)}
(s-t <= 300)' logfile
I can think of doing it this way, as it's shorter.
#!/bin/bash
SYSTEM_TIME=$(date +%s)
LOGTIME=$( date "+%s" -d "$( awk -F'.' '{print $1}' <( head -1 inputtime.txt ))" )
DIFFERENCEINSECONDS=$( echo "$SYSTEM_TIME $LOGTIME" | awk '{ print ($1 - $2)}' )
if [[ "$DIFFERENCEINSECONDS" -gt 300 ]]
then
echo "TRIGGERED!"
fi
Hope it's useful for you. Let me know.
Note: I assumed your input log is called inputtime.txt. You need to change it to your actual filename, of course.

What's the easiest way to find multiple unused local ports within a range?

What I need is to find unused local ports within a range for further use (for Appium nodes). I found this code:
getPorts() {
freePort=$(netstat -aln | awk '
$6 == "LISTEN" {
if ($4 ~ "[.:][0-9]+$") {
split($4, a, /[:.]/);
port = a[length(a)];
p[port] = 1
}
}
END {
for (i = 7777; i < 65000 && p[i]; i++){};
if (i == 65000) {exit 1};
print i
}
')
echo ${freePort}
}
This works pretty well if I need a single free port, but for parallel test execution we need multiple unused ports. So I need to modify the function to get not one free port but several (depending on a parameter), starting from the first free port found, and to store the result in one string variable. For example, if I need ports for 3 devices, the result should be:
7777 7778 7779
The code should work on macOS, because we're using a Mac mini as a test server.
Since I've only just started with bash, this is a bit complicated for me.
This is bash code; it works fine on Linux, so if your Mac also runs bash it will work for you.
getPorts() {
amount=${1}
found=0
ports=""
for ((i=7777;i<=65000;i++))
do
(echo > /dev/tcp/127.0.0.1/${i}) >/dev/null 2>&1 || {
#echo "${i}"
ports="${ports} ${i}"
found=$((found+1))
if [[ ${found} -ge ${amount} ]]
then
echo "${ports:1}"
return 0
fi
}
done
return 1
}
Here is how to use it, and the output:
$ getPorts 3
7777 7778 7779
$ getPorts 10
7777 7778 7779 7780 7781 7782 7783 7784 7785 7786
Finding unused ports from 5000 to 5100:
range=(`seq 5000 5100`)
ports=`netstat -tuwan | awk '{print $4}' | grep ':' | cut -d ":" -f 2`
echo ${range[@]} ${ports[@]} ${ports[@]} | tr ' ' '\n' | sort | uniq -u
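Listing the occupied ports twice means every port that is in use appears at least twice in the combined list, so uniq -u drops it and only unused ports from the range survive. If only a fixed number of free ports is needed (three in this hypothetical example), the same output can be trimmed and joined into one string:
freePorts=`echo ${range[@]} ${ports[@]} ${ports[@]} | tr ' ' '\n' | sort -n | uniq -u | head -n 3 | tr '\n' ' '`
echo "$freePorts"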

How to hand over a bash variable to your Python script?

I am rather new to bash scripting, more used to batch. Anyway, what I am trying to do is get a string from a bash variable that is created from an nmap scan and make it a variable for a Python script. I was going to use grep, but it grabs too much. Here are the results:
Starting Nmap 5.21 ( http://nmap.org ) at 2014-05-22 20:12 PDT
Nmap scan report for 192.168.1.201
Host is up (0.00020s latency).
Not shown: 96 filtered ports
PORT STATE SERVICE
135/tcp open msrpc
139/tcp open netbios-ssn
445/tcp open microsoft-ds
3389/tcp open ms-term-serv
MAC Address: 02:21:9B:88:3C:06 (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 4.77 seconds
What I want to get is: MAC Address: 02:21:9B:88:3C:06 WITH the space at the end. So it would be MAC=$
Thank you in advance
MAC=$(egrep -o '^MAC Address: (..:){5}.. ' filename.txt)
The -o option makes egrep just output the part of the line that matches the regexp, so it will just go up to the space after the address.
You can use grep and pipe (|) to awk:
MAC=`grep -a 'MAC' Nmapscan.txt | awk -F ' ' {'print $1" "$2" "$3" " '}`
Or
MAC=`nmap 192.168.1.201 | grep 'MAC' | awk -F ' ' {'print $1" "$2" "$3" " '}`
If possible I'd strongly recommend adding this change too for security:
NMAP_FILE=`mktemp`
# start
nmap 192.168.1.201 > $NMAP_FILE
# middle
MAC=`grep -a 'MAC' $NMAP_FILE | awk -F ' ' {'print $1" "$2" "$3" " '}`
# end
rm -f $NMAP_FILE
To pass the $MAC over to your Python script you can use:
ALERT.py "$MAC" &
and add this to ALERT.py:
import sys
mac_address = sys.argv[1]  # argv[0] is the script name; the quoted argument arrives as argv[1]
# now `mac_address` is "MAC Address: 02:21:... "

syntax error: operand expected (error token is ">= 75 ")

#!/bin/bash
CURRENT=$(df -h / | grep / | awk '{ print $4}')
THRESHOLD=75
if (( "$CURRENT" >= "$THRESHOLD" )); then
mail -s "CENTOS-6 localhost 10.10.1.238 Disk Space Alert" sss#abc.net << EOF
Your root partition remaining free space is critically low. Used: $CURRENT%
EOF
fi
I get the following error when I run the script: syntax error: operand expected (error token is ">= 75 ")
It's because CURRENT will contain a percent sign, so it won't be a valid operand for the comparison operation.
You can remove the last character like this:
CURRENT=${CURRENT%?};
Also make sure that df -h / | grep / | awk '{ print $4}' is correctly returning the usage percentage; on most systems you have to use print $5.
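As a quick illustration of the ${CURRENT%?} expansion above, with a made-up value:
CURRENT="75%"
CURRENT=${CURRENT%?}    # strip the last character (the % sign)
echo "$CURRENT"         # prints 75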
A couple of things:
You don't need grep at all; awk is quite capable of doing its own regex stuff.
If you search for / in the df output, you'll probably get most lines, as most mounts have a / somewhere in them. If you just want the root mountpoint, you can use <space>/$.
Check that 4 is the correct field number; on my box it's 5.
In any case, that field is of the form 55%, which will not be considered numeric. You can use gsub to get rid of the % sign.
With that in mind, the following snippet can be used to get the percentage:
df -h | awk '$0 ~ / \/$/ { gsub("%","",$5); print $5 }'
And, just as an aside, I'm not that big a fan of here-docs in shell scripts, since they either (1) screw up my nicely indented files or (2) make me burn half an hour while I try to remember the various syntax options that allow indented EOF strings :-)
I prefer something like:
(
echo Your root partition remaining free space is critically low: Used: ${CURRENT}%
) | mail -s "CENTOS-6 localhost 10.10.1.238 Disk Space Alert" sss#abc.net
Especially since that means I can put arbitrarily complex commands in the sub-shell to generate whatever info I want in the mail message (rather than just simple text substitutions).
So, bottom line, I'd be looking at something more like:
#!/usr/bin/env bash
# Config section.
LIMIT=75
# Code section.
CURR=$(df -h | awk '$0 ~ / \/$/ { gsub("%","",$5); print $5 }')
if [[ ${CURR} -ge ${LIMIT} ]] ; then
(
echo "Your root partition remaining free space is critically low: Used: ${CURR}%"
) | mail -s "CENTOS-6 localhost 10.10.1.238 Disk Space Alert" sss#abc.net
fi
Just try:
CURRENT=$(df -h |awk '{print $4}' |sort -n |tail -n1 |sed 's/%//g')
THRESHOLD=90
if [ "$THRESHOLD" -gt "$CURRENT" ]; then
    echo "Disk usage is below the threshold."
else
    echo "Disk usage is at ${CURRENT}%."
fi
