I would like to get an opinion on how best to do this in bash, thank you.
For x number of servers, each has its own list of replication agreements and their statuses. It's easy to run a few commands and get this data, for example:
Get servers; output (a setting/variable from a local config file):
. ./ldap-config ; echo "$MASTER $REPLICAS"
dc1-server1 dc1-server2 dc2-server1 dc2-server2 dc3...
For dc1-server1, get agreements; output:
ipa-replica-manage -p $(cat ~/.dspw) list -v $SERVER.$DOMAIN | grep ': replica' | sed 's/: replica//'
dc2-server1
dc3-server1
dc4-server1
For dc1-server1, get agreement status codes; output:
ipa-replica-manage -p $(cat ~/.dspw) list -v $SERVER.$DOMAIN | grep 'status: Error (' | sed -e 's/.*status: Error (//' -e 's/).*//'
0
0
18
So the output would be several columns based on the 'get servers' list, with each 'replica: status' pair listed under its server.
I'm looking to achieve something like:
dc2-server1: 0 dc2-server2: 0 dc1-server1: 0 ...
dc3-server1: 0 dc3-server2: 18 dc3-server1: 13 ...
dc4-server1: 18 dc4-server2: 0 dc4-server1: 0 ...
Generally eval is considered evil. Nevertheless, I'm going to use it.
paste is handy for printing files side-by-side.
Bash process substitutions can be used where you'd use a filename.
So, I'm going to dynamically build up a paste command and then eval it.
I'm going to use get.sh as a placeholder for your mystery commands.
cmd="paste"
while read -ra servers; do
for server in "${servers[@]}"; do
cmd+=" <(./get.sh \"$server\" agreements | sed 's/\$/:/')"
cmd+=" <(./get.sh \"$server\" status)"
done
done < <(./get.sh servers)
eval "$cmd" | column -t
I have a reference file with device names in it, for example WABEL8499IPM101. I'm using this script to take the base name (without the last three digits), look at the reference file, and see what is already used. If 101 is used, it will create a file for me with 102, and with 103 as well if I request 2 total. I'm looking to use an input file to run it multiple times. I'm also trying to figure out how to start at 101 if no name is found when searching the reference file.
I would like to loop this using an input file instead of manually entering bash test.sh WABEL8499IPM 2 each time. I would like to be able to build an input file of all the names that need comparing and then get the output. It would also be nice if, when there isn't a match, it started creating names at WABEL8499IPM101 instead of just WABEL8499IPM1.
Input file example:
ColumnA (BASE NAME) ColumnB (QUANTITY)
WABEL8499IPM 2
Script:
SRCFILE="~/Desktop/deviceinfo.csv"
LOGDIR="~/Desktop/"
LOGFILE="$LOGDIR/DeviceNames.csv"
# base name, such as "WABEL8499IPM"
device_name=$1
# quantity, such as "2"
quantityNum=$2
# the largest in sequence, such as "WABEL8499IPM108"
max_sequence_name=$(cat $SRCFILE | grep -o -e "$device_name[0-9]*" | sort --reverse | head -n 1)
# extract the last 3digit number (such as "108") from max_sequence_name
max_sequence_num=$(echo $max_sequence_name | rev | cut -c 1-3 | rev)
# create new sequence_name
# such as ["WABEL8499IPM109", "WABEL8499IPM110"]
array_new_sequence_name=()
for i in $(seq 1 $quantityNum);
do
cnum=$((max_sequence_num + i))
array_new_sequence_name+=($(echo $device_name$cnum))
done
#CODE FOR CREATING OUTPUT FILE HERE
#for fn in ${array_new_sequence_name[@]}; do touch $fn; done;
# write log
for sqn in ${array_new_sequence_name[@]};
do
echo $sqn >> $LOGFILE
done
Usage:
bash test.sh WABEL8499IPM 2
Result in the log file:
WABEL8499IPM109
WABEL8499IPM110
Just wrap a loop around your code instead of assuming the args come in on the command line.
# Note: ~ does not expand inside double quotes, so use $HOME instead.
SRCFILE="$HOME/Desktop/deviceinfo.csv"
LOGDIR="$HOME/Desktop"
LOGFILE="$LOGDIR/DeviceNames.csv"
while read device_name quantityNum
do max_sequence_name=$( grep -o -e "$device_name[0-9]*" $SRCFILE |
sort --reverse | head -n 1)
max_sequence_num=${max_sequence_name: -3}
array_new_sequence_name=()
for i in $(seq 1 $quantityNum)
do cnum=$((max_sequence_num + i))
array_new_sequence_name+=("$device_name$cnum")
done
for sqn in ${array_new_sequence_name[@]};
do echo $sqn >> $LOGFILE
done
done < input.file
I'd maybe pass the input file as the parameter now.
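For instance, here's a minimal sketch of that variant, assuming the input file holds plain "basename quantity" pairs. It takes the input file as the first argument and also covers your "start at 101" case by defaulting the sequence base to 100 when nothing matches (the 10# prefix is there so a suffix like 099 isn't read as octal):
#!/bin/bash
# Usage: bash test.sh input.file
SRCFILE="$HOME/Desktop/deviceinfo.csv"
LOGFILE="$HOME/Desktop/DeviceNames.csv"
while read -r device_name quantityNum
do
    max_sequence_name=$(grep -o -e "$device_name[0-9]*" "$SRCFILE" |
                        sort --reverse | head -n 1)
    # Last three digits of the largest match; default to 100 so the
    # first generated name ends in 101 when nothing matched.
    max_sequence_num=${max_sequence_name: -3}
    max_sequence_num=${max_sequence_num:-100}
    for i in $(seq 1 "$quantityNum")
    do
        echo "$device_name$((10#$max_sequence_num + i))" >> "$LOGFILE"
    done
done < "${1:-input.file}"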
I am currently building a bash script for class, and I am trying to use the grep command to grab the values from a simple calculator program and store them in the variables I assign, but I keep receiving a syntax error message when I try to run the script. Any advice on how to fix it? My script looks like this:
#!/bin/bash
addanwser=$(grep -o "num1 + num2" Lab9 -a 5 2)
echo "addanwser"
subanwser=$(grep -o "num1 - num2" Lab9 -s 10 15)
echo "subanwser"
multianwser=$(grep -o "num1 * num2" Lab9 -m 3 10)
echo "multianwser"
divanwser=$(grep -o "num1 / num2" Lab9 -d 100 4)
echo "divanwser"
modanwser=$(grep -o "num1 % num2" Lab9 -r 300 7)
echo "modawser"`
You want to grep the output of a command.
grep reads from either a file or standard input, so you can use any of these equivalent forms:
grep X file # 1. from a file
... things ... | grep X # 2. from stdin
grep X <<< "content" # 3. using here-strings
For this case, you want to use the last one, so that you execute the program and its output feeds grep directly:
grep <something> <<< "$(Lab9 -s 10 15)"
Which is the same as saying:
Lab9 -s 10 15 | grep <something>
Either way, grep acts on the output of your program. Since I don't know how Lab9 works, let's use a simple example with seq, which returns the numbers from 5 to 15:
$ grep 5 <<< "$(seq 5 15)"
5
15
grep is usually used for finding matching lines in a text file. To actually grab a part of the matched line, other tools such as awk are used.
Assuming the output looks like "num1 + num2 = 54" (i.e. fields are separated by space), this should do your job:
addanwser=$(Lab9 -a 5 2 | awk '{print $NF}')
echo "$addanwser"
Make sure you don't miss the '$' sign before addanwser when echo'ing it.
$NF selects the last field. You may select nth field using $n.
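For instance, assuming the output line looks like the one above:
$ echo "num1 + num2 = 54" | awk '{print $NF}'
54
$ echo "num1 + num2 = 54" | awk '{print $1}'
num1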
In the sections below, you'll see the shell script I am trying to run on a UNIX machine, along with a transcript.
When I run this program, it gives the expected output but it also gives an error shown in the transcript. What could be the problem and how can I fix it?
First, the script:
#!/usr/bin/bash
while read A B C D E F
do
E=`echo $E | cut -f 1 -d "%"`
if test $# -eq 2
then
I=`echo $2`
else
I=90
fi
if test $E -ge $I
then
echo $F
fi
done
And the transcript of running it:
$ df -k | ./filter.sh -c 50
./filter.sh: line 12: test: capacity: integer expression expected
/etc/svc/volatile
/var/run
/home/ug
/home/pg
/home/staff/t
/packages/turnin
$ _
Before the line that says:
if test $E -ge $I
temporarily place the line:
echo "[$E]"
and you'll find something very much non-numeric, and that's because the output of df -k looks like this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 954316620 212723892 693109608 24% /
udev 10240 0 10240 0% /dev
: :
The offending line there is the first, which will have its fifth field Use% turned into Use, which is definitely not an integer.
A quick fix may be to change your usage to something like:
df -k | sed -n '2,$p' | ./filter.sh -c 50
or:
df -k | tail -n+2 | ./filter.sh -c 50
Either of those extra filters (sed or tail) will print only from line 2 onwards.
If you're open to not needing a special script at all, you could probably just get away with something like:
df -k | awk -vlimit=40 '$5+0>=limit&&NR>1{print $5" "$6}'
The way it works is to only operate on lines where both:
the fifth field, converted to a number, is at least equal to the limit passed in with -v; and
the record number (line) is two or greater.
Then it simply outputs the relevant information for those matching lines.
This particular example outputs the file system and usage (as a percentage like 42%) but, if you just want the file system as per your script, just change the print to output $6 on its own: {print $6}.
Alternatively, if you do the percentage but without the %, you can use the same method I used in the conditional: {print $5+0" "$6}.
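If you'd rather keep your script and have it ignore the header itself, here's a minimal sketch (same variable names as your script; since there's no NR equivalent in the shell, a line counter does the job):
#!/usr/bin/bash
n=0
while read A B C D E F
do
    n=$((n + 1))
    # Skip the header line that df -k prints.
    [ "$n" -eq 1 ] && continue
    E=$(echo "$E" | cut -f 1 -d "%")
    if test $# -eq 2
    then I=$2
    else I=90
    fi
    if test "$E" -ge "$I"
    then echo "$F"
    fi
done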
I'm having an issue when I try to port my bash script to Nagios. The script works fine when I run it from the console, but when I run it from Nagios I get the message "(null)". In the Nagios debug log I can see that it parses the script fine, but it still returns that error message.
I'm not very good at scripting, so I guess I'll need some help.
The objective of the script is to check the *.ear versions on some servers, md5 them, and compare the output to see whether the versions match.
To do that, I have a JSON file on these servers that prints the name of each *.ear and its md5.
So the first part of the script gets that info from the JSON with curl and stores just the md5 number in a temp file; then it compares both temp files, and if they match I get the $STATE_OK message. If they don't, it creates a .datetmp file with the date (the objective of this is to print a message after 48 hours of inconsistency). Then I take the difference between the .datetmp timestamp and now: if the result is less than 48 hours it prints $STATE_WARNING, and if it is more than 48 hours it prints $STATE_CRITICAL.
The syntax of the script is " $ sh script.sh nameoftheear.ear server1 server2 ".
Thanks in advance
#!/bin/bash
#Variables For Nagios
cont=$1
bas1=$2
bas2=$3
## Here you set the servers hostname
svr1= curl -s "http://$bas1.domain.com:7877/apps.json" | grep -Po '"EAR File":.*? [^\\]",' | grep $cont | awk '{ print $5 }' > .$cont-tmpsvr1
svr2= curl -s "http://$bas2.domain.com:7877/apps.json" | grep -Po '"EAR File":.*? [^\\]",' | grep $cont | awk '{ print $5 }' > .$cont-tmpsvr2
file1=.$cont-tmpsvr1
file2=.$cont-tmpsvr2
md51=$(head -n 1 .$cont-tmpsvr1)
md52=$(head -n 1 .$cont-tmpsvr2)
datenow=$(date +%s)
#Error Msg
ERR_WAR="Not updated $bas1: $cont $md51 --- $bas2: $cont $md52 "
ERR_CRI="48 hs un-updated $bas1: $cont $md51 --- $bas2: $cont $md52 "
OK_MSG="Is up to date $bas1: $cont $md51 --- $bas2: $cont $md52 "
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
##Matching md5 Files
if cmp -s "$file1" "$file2"
then
echo $STATE_OK
echo $OK_MSG
# I do the rm to delete the date tmp file so i can get the $STATE_OK or $STATE_WARNING
rm .$cont-datetmp
exit 0
elif
echo $datenow >> .$cont-datetmp
#Vars to set modification date
datetmp=$(head -n 1 .$cont-datetmp)
diffdate=$(( ($datenow - $datetmp) /60 ))
#This var is to set the time of the critical ERR
days=$((48*60))
[ $diffdate -lt $days ]
then
echo $STATE_WARNING
echo $ERR_WAR
exit 1
else
echo $STATE_CRITICAL
echo $ERR_CRI
exit 2
fi
I am guessing some kind of permission problem - more specifically, I don't think the nagios user can write to its own home directory. You either fix those permissions or write to a file in /tmp (and consider using mktemp?).
...but ideally you'd skip writing all those files; as far as I can see, all of those comparisons could be kept in memory.
UPDATE
Looked at your script again - I see some obvious errors you can look into:
You are printing out the exit value before you print the message.
You print the exit value rather than exit with the exit value.
...so this:
echo $STATE_WARNING
echo $ERR_WAR
exit 1
Should rather be:
echo $ERR_WAR
exit $STATE_WARNING
Also, I am wondering if this is really the script or if you missed something when pasting. There seems to be an 'if' missing, and also a superfluous line break, in your last piece of code. It should rather be:
if [ $diffdate -lt $days ]
then
...
else
...
fi
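Putting all of that together - and keeping the md5s in memory rather than in temp files, since only the date stamp needs to persist between runs - the script might look something like this (a sketch only, with your URL and grep pattern copied as-is):
#!/bin/bash
cont=$1; bas1=$2; bas2=$3
# Fetch the md5 for $cont straight into variables instead of temp files.
md51=$(curl -s "http://$bas1.domain.com:7877/apps.json" |
       grep -Po '"EAR File":.*? [^\\]",' | grep "$cont" | awk '{ print $5 }' | head -n 1)
md52=$(curl -s "http://$bas2.domain.com:7877/apps.json" |
       grep -Po '"EAR File":.*? [^\\]",' | grep "$cont" | awk '{ print $5 }' | head -n 1)
datenow=$(date +%s)
if [ "$md51" = "$md52" ]
then
    echo "Is up to date $bas1: $cont $md51 --- $bas2: $cont $md52"
    rm -f ".$cont-datetmp"   # clear any previous failure timestamp
    exit 0                   # STATE_OK
fi
# Mismatch: record when we first noticed, then warn or go critical after 48 h.
# (Unlike the original, this writes the timestamp only once.)
[ -f ".$cont-datetmp" ] || echo "$datenow" > ".$cont-datetmp"
datetmp=$(head -n 1 ".$cont-datetmp")
diffmin=$(( (datenow - datetmp) / 60 ))
if [ "$diffmin" -lt $((48*60)) ]
then
    echo "Not updated $bas1: $cont $md51 --- $bas2: $cont $md52"
    exit 1                   # STATE_WARNING
else
    echo "48 hs un-updated $bas1: $cont $md51 --- $bas2: $cont $md52"
    exit 2                   # STATE_CRITICAL
fi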
I have a text file with an unknown number of lines. I need to grab some of those lines at random, but I don't want there to be any risk of repeats.
I tried this:
jot -r 3 1 `wc -l<input.txt` | while read n; do
awk -v n=$n 'NR==n' input.txt
done
But this is ugly, and doesn't protect against repeats.
I also tried this:
awk -vmax=3 'rand() > 0.5 {print;count++} count>max {exit}' input.txt
But that obviously isn't the right approach either, as I'm not guaranteed even to get max lines.
I'm stuck. How do I do this?
This might work for you:
shuf -n3 file
shuf is one of GNU coreutils.
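For example (the three lines picked will differ on every run):
$ printf 'one\ntwo\nthree\nfour\nfive\n' > input.txt
$ shuf -n3 input.txt
four
one
three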
If you have Python accessible (change the 10 to what you'd like):
python -c 'import random, sys; print("".join(random.sample(sys.stdin.readlines(), 10)).rstrip("\n"))' < input.txt
(This will work in Python 2.x and 3.x.)
Also, (again change the 10 to the appropriate value):
sort -R input.txt | head -10
If jot is on your system, then I guess you're running FreeBSD or OSX rather than Linux, so you probably don't have tools like rl or sort -R available.
No worries. I had to do this a while ago. Try this instead:
$ printf 'one\ntwo\nthree\nfour\nfive\n' > input.txt
$ cat rndlines
#!/bin/sh
# default to 3 lines of output
lines="${1:-3}"
# default to "input.txt" as input file
input="${2:-input.txt}"
# First, put a random number at the beginning of each line.
while IFS= read -r line; do
printf '%8d%s\n' $(jot -r 1 1 99999999) "$line"
done < "$input" |
sort -n | # Next, sort by the random number.
sed 's/^.\{8\}//' | # Last, remove the number from the start of each line.
head -n "$lines" # Show our output
$ ./rndlines input.txt
two
one
five
$ ./rndlines input.txt
four
two
three
$
Here's a 1-line example that also inserts the random number a little more cleanly using awk:
$ printf 'one\ntwo\nthree\nfour\nfive\n' | awk 'BEGIN{srand()} {printf("%8d%s\n", rand()*10000000, $0)}' | sort -n | head -n 3 | cut -c9-
Note that different versions of sed (on FreeBSD and OSX) may require the -E option instead of -r to get ERE rather than BRE dialect in the regular expression, if you want to use that explicitly, though everything I've tested works with escaped bounds in BRE. (Ancient versions of sed (HP/UX, etc.) might not support this notation, but you'd only be using those if you already knew how to do this.)
This should do the trick, at least with bash and assuming your environment has the other commands available:
cat chk.c | while IFS= read -r x; do
echo "$RANDOM:$x"
done | sort -t: -k1 -n | tail -10 | sed 's/^[0-9]*://'
It basically outputs your file, placing a random number at the start of each line.
Then it sorts on that number, grabs the last 10 lines, and removes that number from them.
Hence, it gives you ten random lines from the file, with no repeats.
For example, here's a transcript of it running three times with that chk.c file:
====
pax$ testprog chk.c
} else {
}
newNode->next = NULL;
colm++;
====
pax$ testprog chk.c
}
arg++;
printf (" [%s] n", currNode->value);
free (tempNode->value);
====
pax$ testprog chk.c
char tagBuff[101];
}
return ERR_OTHER;
#define ERR_MEM 1
====
pax$ _
sort -Ru filename | head -5
will ensure no duplicates. Not all implementations of sort have the -R option.
To get N random lines from FILE with Perl:
perl -MList::Util=shuffle -e 'print shuffle <>' FILE | head -n N
Here's an answer using ruby if you don't want to install anything else:
cat filename | ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
for example, given a file (dups.txt) that looks like:
1 2
1 3
2
1 2
3
4
1 3
5
6
6
7
You might get the following output (or some permutation):
cat dups.txt| ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
4
6
5
1 2
2
3
7
1 3
Further example from the comments:
printf 'test\ntest1\ntest2\n' | ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
test1
test
test2
Of course, if you have a file with repeated lines of test, you'll get just one line:
printf 'test\ntest\ntest\n' | ruby -e 'puts ARGF.read.split("\n").uniq.shuffle.join("\n")'
test