How to parse the data from a .property file in Jenkins using an index - bash

I have a property file in Jenkins, let's call it Something.txt, and it contains:
A_B_C
D_E_F
I have used the shell script below to read the file and execute my automation:
file="/var/lib/jenkins/components.txt"
if [ -f "$file" ]
then
echo "$file found."
Websites="$(awk -F '_' '{print $1}' $file | paste -d, -s)"
Profiles="$(awk -F '_' '{print $2}' $file | paste -d, -s)"
Component="$(awk -F '_' '{print $3}' $file | paste -d, -s)"
for i in $(echo $Websites | sed "s/,/ /g"); do
for j in $(echo $Profiles | sed "s/,/ /g"); do
for k in $(echo $Component| sed "s/,/ /g"); do
mvn clean verify -D "cucumber.options=--tags #"${j} -D surefire.suiteXmlFiles=./XMLScripts/${i}.${k}.testng.xml ||true
done
done
done
But what is happening is that my job runs as
A-B-C & A-B-F & D-B-C & B-E-F
while the expected result is A-B-C & D-E-F. How can I achieve this?

Don't read lines with for
#!/usr/bin/env bash
file="/var/lib/jenkins/components.txt"
if [[ -f "$file" ]]; then
    while IFS=_ read -r website profile component; do
        printf '%s %s %s\n' "$website" "$profile" "$component"
    done < "$file"
fi
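With the two sample lines from Something.txt, that loop prints one record per line, so the fields stay paired:
A B C
D E F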
In your case you can do
#!/usr/bin/env bash
file="/var/lib/jenkins/components.txt"
if [[ -f "$file" ]]; then
    while IFS=_ read -r website profile component; do
        echo mvn clean verify -D "cucumber.options=--tags #$profile" -D "surefire.suiteXmlFiles=./XMLScripts/$website.$component.testng.xml" || true
    done < "$file"
fi
Remove the echo if you're satisfied with the result.
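For example, the first sample line A_B_C generates (tag taken from the second field, suite file from the first and third, matching the original loop):
mvn clean verify -D "cucumber.options=--tags #B" -D "surefire.suiteXmlFiles=./XMLScripts/A.C.testng.xml"
and D_E_F produces the D/E/F counterpart, so only the two expected runs happen instead of the full cross product.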

Related

bash script loop to check if variable contains string - not working

I have a script which copies files from one S3 bucket to a local server, does some stuff, and uploads them to another S3 bucket.
In the original bucket I have a few folders, one of them called "OTHER".
I don't want my script to work on this folder.
I tried to check whether the path string does not contain the string "OTHER", and only then continue with the other commands, but for some reason it is not working.
What am I doing wrong?
#!/bin/bash
shopt -s extglob
gcs3='s3://gc-reporting-pud-production/splunk_printer_log_files/'
gcs3ls=$((aws s3 ls 's3://gc-reporting-pud-production/splunk_printer_log_files/' --recursive) | sed 's/^.*\(splunk_printer.*\)/\1/g'| tr -s ' ' | tr ' ' '_')
ssyss3=s3://ssyssplunk
tokenFile=/splunkData/GCLogs/tokenFile.txt
nextToken=$((aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5) |grep -o 'NEXTTOKEN.*' |awk -F " " '{print $2}')
newToken=$( tail -n 1 /splunkData/GCLogs/tokenFile.txt )
waterMark=$(aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5 --starting-token $newToken|sed 's/^.*\(splunk_printer.*zip\).*$/\1/'|sed '1d'|sed '$d')
while true; do
    for j in $waterMark ; do
        echo $j
        if [ "$j" != *"OTHER"* ]; then
            gcRegion=$(echo $j | awk -F'/' '{print $2}')
            echo "gcRegion:"$gcRegion
            if [ "$gcRegion" != "OTHER" ]; then
                gcTech=$(echo $j | awk -F'/' '{print $3}')
                echo "GCTech:"$gcTech
                gcPrinterFamily=$(echo $j | awk -F'/' '{print $4}')
                echo "gcPrinterFamily:" $gcPrinterFamily
                gcPrinterType=$(echo $j | awk -F'/' '{print $5}')
                echo "gcPrinterType:" $gcPrinterType
                gcPrinterName=$(echo $j| awk -F'/' '{print $6}')
                echo "gcPrinterName:" $gcPrinterName
                gcFileName=$(echo $j| awk -F'/' '{print $7}'| awk -F'.zip' '{print $1}')
                echo "gcFileName:" $gcFileName
                cd /splunkData/GCLogs
                dir="/splunkData/GCLogs/$gcRegion/$gcTech/$gcPrinterFamily/$gcPrinterType/$gcPrinterName"
                echo "dir:"$dir
                mkdir -p $dir
                aws s3 sync $gcs3$gcRegion/$gcTech/$gcPrinterFamily/$gcPrinterType/$gcPrinterName/ $dir
                find $dir -name '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'
                aws s3 cp $dir $ssyss3/$gcRegion/$gcTech/$gcPrinterFamily/$gcPrinterType/$gcPrinterName/ --recursive --exclude "*.zip"
                newToken=$( tail -n 1 /splunkData/GCLogs/tokenFile.txt )
                nextToken=$(aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5 --starting-token $newToken |grep -o 'NEXTTOKEN.*' |awk -F " " '{print $2}')
                waterMark=$(aws s3api list-objects-v2 --bucket "gc-reporting-pud-production" --prefix splunk_printer_log_files/ --max-items 5 --starting-token $newToken|sed 's/^.*\(splunk_printer.*zip\).*$/\1/'|sed '1d'|sed '$d')
                echo "$nextToken" > "$tokenFile"
            fi
        fi
    done
done
You need to use the double-bracket conditional command to turn == and != into pattern matching operators:
if [[ "$j" != *"OTHER"* ]]; then
#  ^^                   ^^
Or use case
case "$j" in
*OTHER*) ... ;;
*) echo "this is like an `else` block" ;;
esac
Paste your code into https://www.shellcheck.net/ for other things to fix.
I think glenn jackman was on the right path. Try this:
if [[ "$j" != *OTHER* ]]; then
The [[ ]] is required for pattern string matching (and you have to remove the quotes around the pattern). The case statement is also a good idea. You can abandon the shell test altogether and use grep as follows:
if grep -q '.*OTHER.*' <<< "$j" 2>/dev/null
then
    ...
fi
Here's a check of the [[ ]]:
$ echo $j
abOTHERc
$ [[ "$j" == *OTHER* ]]
$ echo $?
0
As per BenjaminW., the quotes around $j in [[ ]] are unnecessary. However, the quotes around *OTHER* do make a big difference. See below:
$ j="OTHER THINGS"
$ [[ $j == "*OTHER*" ]] ; echo "$j" matches '"*OTHER*"': $?
OTHER THINGS matches "*OTHER*": 1
$ [[ $j == *OTHER* ]] ; echo "$j" matches '*OTHER*': $?
OTHER THINGS matches *OTHER*: 0

Bash, deleting specific row from file

I have a file that contains a filename and the path to that file on each row.
I want to delete the rows whose files do not exist anymore.
file.txt (For now all existing files):
file1;~/Documents/test/123
file2;~/Documents/test/456
file3;~/Test
file4;~/Files/678
Now if I delete any of the given files (file2 and file4, for example) and run my script, I want it to test whether the file in each row still exists and remove the row if it does not.
file.txt (after removing file2 and file4):
file1;~/Documents/test/123
file3;~/Test
What I have got so far (not working at all; it does not want to run):
#!/bin/sh
backup=`cat file.txt`
rm -f file.txt
touch file.txt
while read -r line
do
    dir=`echo "$line" | awk -F';' '{print $2}'`
    file=`echo "$line" | awk -F';' '{print $1}'`
    if [ -f "$dir"/"$file" ];then
        echo "$line" >> file.txt
    fi
done << "$backup"
Here's one way:
tmp=$(mktemp)
while IFS=';' read -r file rest; do
    [ -f "$file" ] && printf '%s;%s\n' "$file" "$rest"
done < file.txt > "$tmp" && mv "$tmp" file.txt
or if you don't want a temp file for some reason:
tmp=()
while IFS=';' read -r file rest; do
    [ -f "$file" ] && tmp+=( "$file;$rest" )
done < file.txt &&
printf '%s\n' "${tmp[@]}" > file.txt
Both are untested but should be very close if not exactly correct.
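One caveat with the sample data: both variants test only the first field, while the rows keep the directory in the second field, written with a literal ~ that the test command will not expand. A minimal sketch that accounts for both, assuming the second field is the directory that should contain the file:
tmp=$(mktemp)
while IFS=';' read -r name dir; do
    path=${dir/#\~/$HOME}    # expand a leading literal ~ to $HOME (assumption about the data)
    [ -f "$path/$name" ] && printf '%s;%s\n' "$name" "$dir"
done < file.txt > "$tmp" && mv "$tmp" file.txt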
If I understand, this should do it.
touch file.txt file2.txt
for i in `cat file.txt`; do
    fp=`echo $i | cut -d ';' -f2`
    if [ -e "$fp" ]; then
        echo "$i" >> file2.txt
    fi
done
mv file2.txt file.txt

How to pass a variable string to a txt file at the beginning of the text?

I have a problem.
I have a general program, gene.sh, that for each file (e.g. geneX.csv) makes a directory named after the gene (example: geneX/geneX.csv). Next, this program runs another program from inside gene.sh, but that program needs a variable and I don't know how to pass it.
This is the program gene.sh:
#!/bin/bash
# Create a directory for each *.xlsx and *.csv file
for fname in *.xlsx *.csv
do
    dname=${fname%.*}
    [[ -d $dname ]] || mkdir "$dname"
    mv "$fname" "$dname"
done
# For each gene, go inside the directory and run getChromosomicPositions.sh to get the positions,
# and getHaplotypeStrings.sh to get the variants
for geni in */; do
    cd $geni
    z=$(tail -n 1 *.csv | tr ';' "\n" | wc -l)
    cd ..
    cp getChromosomicPositions.sh $geni
    cp getHaplotypeStrings.sh $geni
    cd $geni
    export z
    ./getChromosomicPositions.sh *.csv
    export z
    ./getHaplotypeStrings.sh *.csv
    cd ..
done
This is the program getChromosomicPositions.sh:
rm chrPosRs.txt
grep '^Haplotype\ ID' $1 | cut -d ";" -f 4-61 | tr ";" "\n" | awk '{print "select chrom,chromStart,chromEnd,name from snp147 where name=\""$1"\";"}' > listOfQuery.txt
while read l; do
    echo $l > query.txt
    mysql -h genome-mysql.cse.ucsc.edu -u genome -A -D hg38 --skip-column-names < query.txt > queryResult.txt
    if [[ "$(cat queryResult.txt)" == "" ]];
    then
        cat query.txt |
        while read line; do
            echo $line | awk '$6 ~/rs/ {print $6}' > temp.txt;
            if [[ "$(cat temp.txt)" != "" ]];
            then
                cat temp.txt | awk -F'name="' '{print $2}' | sed -e 's/";//g' > temp.txt;
                ./getHGSVposHG19.sh temp.txt    # <--- Here is the problem
            else
                echo $line | awk '{num=sub(/.*:g\./,"");num+=sub(/\".*/,"");if(num==2){print};num=""}' > temp2.txt
            fi
        done
        cat query.txt >> varianti.txt
        echo "Missing Data" >> chrPosRs.txt
    else
        cat queryResult.txt >> chrPosRs.txt
    fi
done < listOfQuery.txt
rm query*
Here is the problem:
I need to automatically put the variable $geni from gene.sh at the beginning of the data written to temp.txt.
How can I do that?
Why not pass "$geni" as, say, the first argument when invoking your script, and treat the rest of the arguments as your expected .csv files:
./getChromosomicPositions.sh "$geni" *.csv
Alternatively, you can set it as an environment variable for the script, so that it can be used there (or just export it):
geni="$geni" ./getChromosomicPositions.sh *.csv
In any case, once you have it available in the second script, you can use it.
If it is passed as the first argument:
echo "${1}:$(cat temp.txt | awk -F'name="' '{print $2}' | sed -e 's/";//g')"
Or, if it is passed as an environment variable:
echo "${geni}:$(cat temp.txt | awk -F'name="' '{print $2}' | sed -e 's/";//g')"

How to automatically remove inactive OSSEC agents (batch)

As part of a batch bash program, how can I automatically remove inactive OSSEC agents when auto-scaling groups create and delete instances constantly?
Here is a quick script you can run to remove 'Disconnected' and 'Never connected' agents
for OUTPUT in $(/var/ossec/bin/agent_control -l | grep -E 'Disconnected|Never' | tr ':' ',' | cut -d "," -f 2 )
do
    /var/ossec/bin/manage_agents -r $OUTPUT
done
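To make this fully automatic, you could put that loop in a small script and schedule it on the OSSEC server; for example, a hypothetical cron entry (the script path and interval below are assumptions, not part of OSSEC):
# /etc/cron.d/ossec-agent-cleanup (hypothetical)
*/30 * * * * root /usr/local/bin/remove_inactive_ossec_agents.sh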
# This is to be run on the OSSEC server; the OSSEC path is /var/ossec/
file=agents.txt
/var/ossec/bin/agent_control -l > $file
# Wipe working tmp files
rm -f remove.txt
rm -f removed.txt
echo -n "" > remove.txt
echo -n "" > removed.txt
# Find Disconnected agents
while IFS= read -r line
do
    ids=$(echo $line | awk '{print $2}')
    status=$(echo $line | awk '{print $NF}')
    if [ "$status" == "Disconnected" ]; then
        echo $ids >> remove.txt
    fi
done < "$file"
# Find Never connected agents
while IFS= read -r line
do
    ids=$(echo $line | awk '{print $2}')
    status=$(echo $line | awk '{ if (NF > 1) print $(NF-1),$NF ; else print $NF; }')
    if [ "$status" == "Never connected" ]; then
        echo $ids >> remove.txt
    fi
done < "$file"
# Strip the trailing comma from each ID
sed 's/.$//' remove.txt > removed.txt
# Remove agents whose IDs are listed in removed.txt
file2=removed.txt
while IFS= read -r line
do
    /var/ossec/bin/manage_agents -r "$line"
done < $file2
# Restart the OSSEC service
/var/ossec/bin/ossec-control restart
# End

Simplest Bash code to find what files from a defined list don't exist in a directory?

This is what I came up with. It works perfectly; I'm just curious whether there's a smaller/crunchier way to do it (wondering if it's possible without a loop).
files='file1|file2|file3|file4|file5'
path='/my/path'
found=$(find "$path" -regextype posix-extended -type f -regex ".*\/($files)")
for file in $(echo "$files" | tr '|', ' ')
do
    if [[ ! "$found" =~ "$file" ]]
    then
        echo "$file"
    fi
done
You can do this without invoking any external tools:
IFS="|"
for file in $files
do
[ -f "$file" ] || printf "%s\n" "$file"
done
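Note that assigning IFS like this affects the rest of the script; if that matters, a subshell keeps the change local:
( IFS="|"; for file in $files; do [ -f "$path/$file" ] || printf "%s\n" "$file"; done )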
Your code will break if you have file names with whitespace. This is how I would do it, which is a bit more concise.
echo "$files" | tr '|' '\n' | while read file; do
[ -e "$file" ] || echo "$file"
done
You can probably play around with xargs if you want to get rid of the loop all together.
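For instance, a rough sketch that hands each name to a tiny sh -c test via xargs (the path is passed as $0 of the inner shell; there is still a loop, it is just hidden inside xargs):
echo "$files" | tr '|' '\n' | xargs -I{} sh -c '[ -e "$0/$1" ] || echo "$1"' "$path" {}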
$ eval "ls $path/{${files//|/,}} 2>&1 1>/dev/null | awk '{print \$4}' | tr -d :"
Or use awk
$ echo -n $files | awk -v path=$path -v RS='|' '{printf("! [[ -e %s ]] && echo %s\n", path"/"$0, path"/"$0) | "bash"}'
without whitespace in filenames:
files=(mbox todo watt zoff xorf)
for f in ${files[@]}; do test -f $f || echo $f ; done
