How to automatically remove inactive OSSEC agents (batch) - bash

As part of a batch bash job, how can I automatically remove inactive OSSEC agents when autoscaling groups create and delete instances constantly?

Here is a quick script you can run to remove 'Disconnected' and 'Never connected' agents:
for OUTPUT in $(/var/ossec/bin/agent_control -l | grep -E 'Disconnected|Never' | tr ':' ',' | cut -d "," -f 2)
do
    /var/ossec/bin/manage_agents -r "$OUTPUT"
done
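To see what that pipeline actually extracts, here is a dry run on a single sample line (the exact agent_control -l output format is an assumption and may vary between OSSEC versions):

```shell
# Sample agent line (format assumed from typical agent_control -l output):
line='   ID: 001, Name: web-01, IP: 10.0.0.5, Disconnected'

# tr turns every ':' into ',', so the agent ID becomes the second
# comma-separated field; cut then picks it out.
id=$(echo "$line" | tr ':' ',' | cut -d ',' -f 2)

# The field still carries a leading space, which the unquoted expansion
# in the for loop strips via word splitting before manage_agents runs.
echo "[$id]"   # -> [ 001]
```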

#This is to be run on the OSSEC server; OSSEC is installed under /var/ossec/
file=agents.txt
/var/ossec/bin/agent_control -l > "$file"
#Wipe working tmp files (-f so a missing file is not an error)
rm -f remove.txt removed.txt
echo -n "" > remove.txt
echo -n "" > removed.txt
#Find Disconnected agents
while IFS= read -r line
do
    ids=$(echo "$line" | awk '{print $2}')
    status=$(echo "$line" | awk '{print $NF}')
    if [ "$status" == "Disconnected" ]; then
        echo "$ids" >> remove.txt
    fi
done < "$file"
#Find Never connected agents
while IFS= read -r line
do
    ids=$(echo "$line" | awk '{print $2}')
    status=$(echo "$line" | awk '{ if (NF > 1) print $(NF-1), $NF; else print $NF }')
    if [ "$status" == "Never connected" ]; then
        echo "$ids" >> remove.txt
    fi
done < "$file"
#Strip the trailing comma from each ID
sed 's/.$//' remove.txt > removed.txt
#Remove the agents whose IDs are listed in removed.txt
file2=removed.txt
while IFS= read -r line
do
    /var/ossec/bin/manage_agents -r "$line"
done < "$file2"
#Restart the OSSEC service so the removals take effect
/var/ossec/bin/ossec-control restart
#End

Related

Bash while loop: Preventing third-party commands from reading stdin

Assume an input table (intable.csv) that contains ID numbers in its second column, and a fresh output table (outlist.csv) into which the input file - extended by one column - is to be written line by line.
echo -ne "foo,NC_045043\nbar,NC_045193\nbaz,n.a.\nqux,NC_045054\n" > intable.csv
echo -n "" > outtable.csv
Further assume that one or more third-party commands (here: esearch, efetch; both part of Entrez Direct) are employed to retrieve additional information for each ID number. This additional info is to form the third column of the output table.
while IFS="" read -r line || [[ -n "$line" ]]
do
echo -n "$line" >> outtable.csv
NCNUM=$(echo "$line" | awk -F"," '{print $2}')
if [[ $NCNUM == NC_* ]]
then
echo "$NCNUM"
RECORD=$(esearch -db nucleotide -query "$NCNUM" | efetch -format gb)
echo "$RECORD" | grep "^LOCUS" | awk '{print ","$3}' | \
tr -d "\n" >> outtable.csv
else
echo ",n.a." >> outtable.csv
fi
done < intable.csv
Why does the while loop iterate only over the first input table entry under the above code, whereas it iterates over all input table entries if the code lines starting with RECORD and echo "$RECORD" are commented out? How can I correct this behavior?
This would happen if esearch reads from standard input. It inherits the input redirection from the while loop, so it consumes the rest of the input file.
The solution is to redirect its standard input elsewhere, e.g. /dev/null.
while IFS="" read -r line || [[ -n "$line" ]]
do
echo -n "$line" >> outtable.csv
NCNUM=$(echo "$line" | awk -F"," '{print $2}')
if [[ $NCNUM == NC_* ]]
then
echo "$NCNUM"
RECORD=$(esearch -db nucleotide -query "$NCNUM" </dev/null | efetch -format gb)
echo "$RECORD" | grep "^LOCUS" | awk '{print ","$3}' | \
tr -d "\n" >> outtable.csv
else
echo ",n.a." >> outtable.csv
fi
done < intable.csv
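The effect is easy to reproduce without Entrez Direct; in this sketch plain cat stands in for esearch, since any command that reads stdin behaves the same way:

```shell
# Three input lines; the loop should iterate three times.
printf 'a\nb\nc\n' > demo_input.txt

broken=0
while IFS= read -r line; do
    broken=$((broken + 1))
    cat > /dev/null                 # inherits the loop's stdin and eats lines b and c
done < demo_input.txt

fixed=0
while IFS= read -r line; do
    fixed=$((fixed + 1))
    cat < /dev/null > /dev/null     # stdin redirected away; the loop keeps every line
done < demo_input.txt

echo "broken=$broken fixed=$fixed"  # -> broken=1 fixed=3
```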

How to parse the data from a .property file from Jenkins using an index

I have a property file in Jenkins, let's call it Something.txt, and Something.txt contains
A_B_C
D_E_F
and I have used the shell script below to read the file and execute my automation:
file="/var/lib/jenkins/components.txt"
if [ -f "$file" ]
then
echo "$file found."
Websites="$(awk -F '_' '{print $1}' $file | paste -d, -s)"
Profiles="$(awk -F '_' '{print $2}' $file | paste -d, -s)"
Component="$(awk -F '_' '{print $3}' $file | paste -d, -s)"
for i in $(echo $Websites | sed "s/,/ /g"); do
for j in $(echo $Profiles | sed "s/,/ /g"); do
for k in $(echo $Component| sed "s/,/ /g"); do
mvn clean verify -D "cucumber.options=--tags #"${j} -D surefire.suiteXmlFiles=./XMLScripts/${i}.${k}.testng.xml ||true
done
done
done
but what is happening is that my job runs as
A-B-C & A-B-F & D-B-C & B-E-F
but the expected result is A-B-C & D-E-F. How can I achieve this?
Don't read lines with for
#!/usr/bin/env bash
file="/var/lib/jenkins/components.txt"
if [[ -f "$file" ]]; then
    while IFS=_ read -r website profile component; do
        printf '%s %s %s\n' "$website" "$profile" "$component"
    done < "$file"
fi
In your case you can do
#!/usr/bin/env bash
file="/var/lib/jenkins/components.txt"
if [[ -f "$file" ]]; then
    while IFS=_ read -r website profile component; do
        echo mvn clean verify -D "cucumber.options=--tags #$profile" -D "surefire.suiteXmlFiles=./XMLScripts/$website.$component.testng.xml" || true
    done < "$file"
fi
Remove the echo if you're satisfied with the result.
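As a quick sanity check of the IFS=_ splitting, this self-contained sketch (using a throwaway demo file rather than the real Jenkins path) prints exactly the two expected combinations:

```shell
# The same two records as in the question:
printf 'A_B_C\nD_E_F\n' > components_demo.txt

# IFS=_ makes read split each line on underscores into three variables.
result=$(while IFS=_ read -r website profile component; do
    echo "$website-$profile-$component"
done < components_demo.txt)

echo "$result"
# -> A-B-C
#    D-E-F
```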

How to take the first field of each line from a text file and execute it in a while-loop?

I tried the script below, but it is not working: it does not cut the first field of each line so it can be used for "chmod".
#!/bin/bash
if [ -z "$1" ]; then
echo -e "Usage: $(basename $0) FILE\n"
exit 1
fi
if [ ! -e "$1" ]; then
echo -e "$1: File doesn't exist.\n"
exit 1
fi
while read -r line; do
awk '{print $1}'
[ -n "$line" ] && chown root "$line" && echo -e "$line Ownership changed"
done < "$1"
If the field separator is a space, try this:
while read -r line; do
    FILE_TO_CHANGE=$(echo "$line" | awk '{print $1}')
    [ -n "$line" ] && chown root "$FILE_TO_CHANGE" && echo -e "$line Ownership changed"
done < "$1"
awk reads $line and prints the first token on standard output; the result is saved in the FILE_TO_CHANGE variable and then used to run chown.
Another way could be:
awk '{print $1}' "$1" | while read -r line; do
    chown root "$line" && echo -e "$line Ownership changed"
done
awk reads your file and prints the first field of each line; the while loop then reads awk's output line by line and runs chown on each field.
You could extract the first word on each line with awk and pipe to xargs, invoking chown only as few times as possible:
awk '{print $1}' "$1" | xargs chown root
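A side-effect-free sketch of the awk-plus-xargs approach, using made-up file names and an echo in front of chown so nothing is actually changed:

```shell
# Sample input: first column is a path, second column extra data (names assumed):
printf '/tmp/f1.txt alice\n/tmp/f2.txt bob\n' > files_demo.txt

# xargs batches all the extracted names into a single command invocation,
# so chown would run once rather than once per line.
cmd=$(awk '{print $1}' files_demo.txt | xargs echo chown root)

echo "$cmd"   # -> chown root /tmp/f1.txt /tmp/f2.txt
```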

bash script loop to check if variable contains string - not working

I have a script which copies files from one S3 bucket to a local server, does some work, and uploads the result to another S3 bucket.
In the original bucket I have a few folders; one of them is called "OTHER".
I don't want my script to work on this folder.
I tried to write a loop that continues to the other commands only if the path string does not contain the string "OTHER", but for some reason it is not working.
What am I doing wrong?
#!/bin/bash
shopt -s extglob
gcs3='s3://gc-reporting-pud-production/splunk_printer_log_files/'
gcs3ls=$((aws s3 ls 's3://gc-reporting-pud-production/splunk_printer_log_files/' --recursive) | sed 's/^.*\(splunk_printer.*\)/\1/g'| tr -s ' ' | tr ' ' '_')
ssyss3=s3://ssyssplunk
tokenFile=/splunkData/GCLogs/tokenFile.txt
nextToken=$((aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5) |grep -o 'NEXTTOKEN.*' |awk -F " " '{print $2}')
newToken=$( tail -n 1 /splunkData/GCLogs/tokenFile.txt )
waterMark=$(aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5 --starting-token $newToken|sed 's/^.*\(splunk_printer.*zip\).*$/\1/'|sed '1d'|sed '$d')
while true; do
for j in $waterMark ; do
echo $j
if [ "$j" != *"OTHER"* ]; then
gcRegion=$(echo $j | awk -F'/' '{print $2}')
echo "gcRegion:"$gcRegion
if [ "$gcRegion" != "OTHER" ]; then
gcTech=$(echo $j | awk -F'/' '{print $3}')
echo "GCTech:"$gcTech
gcPrinterFamily=$(echo $j | awk -F'/' '{print $4}')
echo "gcPrinterFamily:" $gcPrinterFamily
gcPrinterType=$(echo $j | awk -F'/' '{print $5}')
echo "gcPrinterType:" $gcPrinterType
gcPrinterName=$(echo $j| awk -F'/' '{print $6}')
echo "gcPrinterName:" $gcPrinterName
gcFileName=$(echo $j| awk -F'/' '{print $7}'| awk -F'.zip' '{print $1}')
echo "gcFileName:" $gcFileName
cd /splunkData/GCLogs
dir="/splunkData/GCLogs/$gcRegion/$gcTech/$gcPrinterFamily/$gcPrinterType/$gcPrinterName"
echo "dir:"$dir
mkdir -p $dir
aws s3 sync $gcs3$gcRegion/$gcTech/$gcPrinterFamily/$gcPrinterType/$gcPrinterName/ $dir
find $dir -name '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'
aws s3 cp $dir $ssyss3/$gcRegion/$gcTech/$gcPrinterFamily/$gcPrinterType/$gcPrinterName/ --recursive --exclude "*.zip"
newToken=$( tail -n 1 /splunkData/GCLogs/tokenFile.txt )
nextToken=$(aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5 --starting-token $newToken |grep -o 'NEXTTOKEN.*' |awk -F " " '{print $2}')
waterMark=$(aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5 --starting-token $newToken|sed 's/^.*\(splunk_printer.*zip\).*$/\1/'|sed '1d'|sed '$d')
echo "$nextToken" > "$tokenFile"
fi
fi
done
done
You need to use the double-bracket conditional command to turn == and != into pattern matching operators:
if [[ "$j" != *"OTHER"* ]]; then
# ^^ ^^
Or use case
case "$j" in
*OTHER*) ... ;;
*) echo "this is like an `else` block" ;;
esac
Paste your code into https://www.shellcheck.net/ for other things to fix.
I think glenn jackman was on the right path. Try this:
if [[ "$j" != *OTHER* ]]; then
The [[ ]] is required for pattern string matching (and you have to remove the quotes around the pattern). The case statement is also a good idea. You can abandon the shell test altogether and use grep as follows:
if
grep -q '.*OTHER.*' <<< "$j" 2>/dev/null
then
...
fi
Here's a check of the [[ ]]:
$ echo $j
abOTHERc
$ [[ "$j" == *OTHER* ]]
$ echo $?
0
As per BenjaminW., the quotes around $j in [[ ]] are unnecessary. However, the quotes around *OTHER* do make a big difference. See below:
$ j="OTHER THINGS"
$ [[ $j == "*OTHER*" ]] ; echo "$j" matches '"*OTHER*"': $?
OTHER THINGS matches "*OTHER*": 1
$ [[ $j == *OTHER* ]] ; echo "$j" matches '*OTHER*': $?
OTHER THINGS matches *OTHER*: 0
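For completeness, the case variant can be exercised the same way (the path value here is a made-up sample):

```shell
# A sample key as the script's loop variable would see it (value assumed):
j='splunk_printer_log_files/OTHER/printer.zip'

# Unquoted *OTHER* is a glob pattern, so this matches any string
# containing OTHER anywhere.
case "$j" in
    *OTHER*) verdict=skip ;;
    *)       verdict=process ;;
esac

echo "$verdict"   # -> skip
```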

How to pass a variable string to a txt file at the beginning of the text?

I have a problem.
I have a general program, gene.sh, that for each file (e.g. geneX.csv) makes a directory with the name of the gene (example: GeneX/geneX.csv). Next, this program runs other programs inside gene.sh, but one of those programs needs a variable and I don't know how to pass it.
This is the program gene.sh:
#!/bin/bash
# Create a directory for each *.xlsx and *.csv file
for fname in *.xlsx *csv
do
dname=${fname%.*}
[[ -d $dname ]] || mkdir "$dname"
mv "$fname" "$dname"
done
# For each gene, go inside the directory and run getChromosomicPositions.sh to get the positions and getHaplotypeStrings.sh to get the variants
for geni in */; do
cd $geni
z=$(tail -n 1 *.csv | tr ';' "\n" | wc -l)
cd ..
cp getChromosomicPositions.sh $geni --->
cp getHaplotypeStrings.sh $geni
cd $geni
export z
./getChromosomicPositions.sh *.csv
export z
./getHaplotypeStrings.sh *.csv
cd ..
done
This is the program getChromosomicPositions.sh:
rm chrPosRs.txt
grep '^Haplotype\ ID' $1 | cut -d ";" -f 4-61 | tr ";" "\n" | awk '{print "select chrom,chromStart,chromEnd,name from snp147 where name=\""$1"\";"}' > listOfQuery.txt
while read l; do
echo $l > query.txt
mysql -h genome-mysql.cse.ucsc.edu -u genome -A -D hg38 --skip-column-names < query.txt > queryResult.txt
if [[ "$(cat queryResult.txt)" == "" ]];
then
cat query.txt |
while read line; do
echo $line | awk '$6 ~/rs/ {print $6}' > temp.txt;
if [[ "$(cat temp.txt)" != "" ]];
then cat temp.txt | awk -F'name="' '{print $2}' | sed -e 's/";//g' > temp.txt;
./getHGSVposHG19.sh temp.txt ---> Here is the problem --->
else
echo $line | awk '{num=sub(/.*:g\./,"");num+=sub(/\".*/,"");if(num==2){print};num=""}' > temp2.txt
fi
done
cat query.txt >> varianti.txt
echo "Missing Data" >> chrPosRs.txt
else
cat queryResult.txt >> chrPosRs.txt
fi
done < listOfQuery.txt
rm query*
Here is the problem:
I need to automatically put the variable $geni from the program gene.sh at the beginning of the file temp.txt.
How can I do that?
Why not pass "$geni" as, say, the first argument when invoking your script, and treat the rest of the arguments as your expected .csv files?
./getChromosomicPositions.sh "$geni" *.csv
Alternatively, you can set it as an environment variable for the script so that it can be used there (or just export it).
geni="$geni" ./getChromosomicPositions.sh *.csv
In any case, once you have it available in the second script, you can do
if passed as the first argument:
echo "${1}:$(cat temp.txt | awk -F'name="' '{print $2}' | sed -e 's/";//g')"
or if passed as environment variable:
echo "${geni}:$(cat temp.txt | awk -F'name="' '{print $2}' | sed -e 's/";//g')"
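A minimal, self-contained sketch of the first-argument approach; child.sh is a hypothetical stand-in for getChromosomicPositions.sh:

```shell
# Create a tiny child script that expects the gene name first,
# then the CSV files (child.sh is a made-up name for this demo):
cat > child.sh <<'EOF'
#!/bin/bash
geni=$1      # the gene name arrives as the first argument
shift        # now "$@" holds only the CSV files
for csv in "$@"; do
    echo "$geni:$csv"
done
EOF
chmod +x child.sh

out=$(./child.sh GeneX a.csv b.csv)
echo "$out"
# -> GeneX:a.csv
#    GeneX:b.csv
```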
