Reading multiple text files and setting variables - bash

How can I read each line and then set each string as a separate variable?
For example:
a=555
b=abc
c=5343/abc
d=22
e=2323
f=233/2344
test1.txt
555 abc 5343/abc
444 cde 343/ccc
test2.txt
22 2323 233/2344
112 223 13/12
echo $a $d $f
desired output:
555 22 233/2344
444 112 13/12
The following script will set each whole line as a variable, but I want each string in each line as a separate variable.
paste test1.txt test2.txt | while IFS="$(printf '\t')" read -r f1 f2
do
printf 'codesonar %s %s\n' "$f1" "$f2"
done

You have to use the variables you want in your read.
$: paste test1.txt test2.txt |
> while read a b c d e f g h i j k l m n o p q etc
> do echo $a $d $f
> done
555 22 233/2344
444 112 13/12
Am I missing something?
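For readability you can also name only as many variables as there are fields (read assigns any leftover fields to the last name in the list). A minimal sketch of the same approach, assuming three whitespace-separated fields per file, so six after paste:

paste test1.txt test2.txt |
while read -r a b c d e f; do
    printf '%s %s %s\n' "$a" "$d" "$f"   # prints: 555 22 233/2344, then 444 112 13/12
done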

Related

Converting string using bash

I want to convert the output of command:
dmidecode -s system-serial-number
which is a string looking like this:
VMware-56 4d ad 01 22 5a 73 c2-89 ce 3f d8 ba d6 e4 0c
to:
564dad01-225a-73c2-89ce-3fd8bad6e40c
I suspect I need to first of all extract all letters and numbers after the "VMware-" part at that start and then insert "-" at the known positions after character 10, 14, 18, 22.
To try the first extraction I have tried:
$ echo `dmidecode -s system-serial-number | grep -oE '(VMware-)?[a0-Z9]'`
VMware-5 6 4 d a d 0 1 2 2 5 a 7 3 c 2 8 9 c e 3 f d 8 b a d 6 e 4 0 c
However this isn't going the right way.
EDIT:
This gets me to a single long string, however it's not elegant:
$ echo `dmidecode -s system-serial-number | sed -s "s/VMware-//" | sed -s "s/-//" | sed -s "s/ //g"`
564dad01225a73c289ce3fd8bad6e40c
Like this:
dmidecode -s system-serial-number |
sed -E 's/VMware-//;
s/ +//g;
s/(.)/\1-/8;
s/(.)/\1-/13;
s/(.)/\1-/23'
You can use Bash sub string extraction:
$ s="VMware-56 4d ad 01 22 5a 73 c2-89 ce 3f d8 ba d6 e4 0c"
$ s1=$(echo "${s:7}" | tr -d '[:space:]')
$ echo "${s1:0:8}-${s1:8:4}-${s1:12:9}-${s1:21}"
564dad01-225a-73c2-89ce-3fd8bad6e40c
Or, with built-ins only (i.e., no tr):
$ s1=${s:7}
$ s1="${s1// /}"
$ echo "${s1:0:8}-${s1:8:4}-${s1:12:9}-${s1:21}"

Print the entire row which has difference in value while compare the columns

I want to print the entire row whose values don't match.
E.g.:
Symbol Qty Symbol Qty Symbol qty
a 10 a 10 a 11
b 11 b 11 b 11
c 12 c 12 f 13
f 12 f 12 g 13
OUTPUT:
a 10 a 10 a 11
c 12 c 12 (empty space)
f 12 f 12 f 13
(empty space) g 13
awk 'FNR==NR{a[$0];next}!($0 in a ) ' output1.csv output2.csv >> finn1.csv
awk 'FNR==NR{a[$0];next}!($0 in a ) ' finn1.csv output4.csv >> finn.csv
but this prints only the missing entries in one column, like a 11, whereas I require the whole line.
Assuming that you only want to test for mismatched Qty fields, try this:
#!/bin/bash
declare input_file="/path/to/input_file"
declare -i header_flag=0 a b c
while read -r line; do
    [ ${header_flag} -eq 0 ] && header_flag=1 && continue   # Ignore first line.
    [ ${#line} -eq 0 ] && continue                          # Ignore blank lines.
    read -r x a x b x c x <<< "${line}"                     # Reuse ${x} because it is not used.
    [ ${a} -ne ${b} -o ${a} -ne ${c} -o ${b} -ne ${c} ] && echo "${line}"
done < "${input_file}"
The awk one-liner
awk '!($1 == $3 && $2 == $4 && $3 == $5 && $4 == $6)' file
will output
Symbol Qty Symbol Qty Symbol qty
a 10 a 10 a 11
c 12 c 12 f 13
f 12 f 12 g 13
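Spelled out with comments, the one-liner keeps every row where the three Symbol/Qty pairs are not all identical (the same logic, just expanded):

awk '{
    same = ($1 == $3 && $3 == $5 &&   # all three Symbol columns agree
            $2 == $4 && $4 == $6)     # all three Qty columns agree
    if (!same) print                  # print rows with any mismatch
}' file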
You're going about this the wrong way: you can't mash up all the files into one and then try to find which ones have different/missing values. You need to process the individual files:
$ cat file1
Symbol Qty
a 10
b 11
c 12
f 12
$ cat file2
Symbol Qty
a 10
b 11
c 12
f 12
$ cat file3
Symbol qty
a 11
b 11
f 13
g 13
Then
assuming you have GNU awk
gawk '
FNR > 1 { qty[$1][FILENAME] = $1 " " $2 }
END {
    OFS = "\t"
    for (sym in qty) {
        missing = !((ARGV[1] in qty[sym]) && (ARGV[2] in qty[sym]) && (ARGV[3] in qty[sym]))
        unequal = !(qty[sym][ARGV[1]] == qty[sym][ARGV[2]] && qty[sym][ARGV[1]] == qty[sym][ARGV[3]])
        if (missing || unequal) {
            print qty[sym][ARGV[1]], qty[sym][ARGV[2]], qty[sym][ARGV[3]]
        }
    }
}
' file{1,2,3}
outputs
a 10 a 10 a 11
c 12 c 12
f 12 f 12 f 13
g 13
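If GNU awk is not available, the same idea can be written with classic (SUBSEP) multidimensional arrays, which POSIX awk supports. A sketch under the same three-file assumption:

awk '
FNR > 1 { qty[FILENAME, $1] = $1 " " $2; seen[$1] = 1 }
END {
    OFS = "\t"
    for (sym in seen) {
        n = 0
        for (i = 1; i <= 3; i++)
            if ((ARGV[i], sym) in qty) n++   # count files containing sym
        if (n < 3 || qty[ARGV[1], sym] != qty[ARGV[2], sym] || qty[ARGV[1], sym] != qty[ARGV[3], sym])
            print qty[ARGV[1], sym], qty[ARGV[2], sym], qty[ARGV[3], sym]
    }
}' file1 file2 file3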

How can I compare two 2D-array files with bash?

I have two 2D-array files to read with bash.
What I want to do is extract the elements inside both files.
These two files contain different rows x columns such as:
file1.txt (nx7)
NO DESC ID TYPE W S GRADE
1 AAA 20 AD 100 100 E2
2 BBB C0 U 200 200 D
3 CCC 9G R 135 135 U1
4 DDD 9H Z 246 246 T1
5 EEE 9J R 789 789 U1
.
.
.
file2.txt (mx3)
DESC W S
AAA 100 100
CCC 135 135
EEE 789 789
.
.
.
Here is what I want to do:
Extract the element in DESC column of file2.txt then find the corresponding element in file1.txt.
Extract the W,S elements in such row of file2.txt then find the corresponding W,S elements in such row of file1.txt.
If [W1==W2 && S1==S2]; then echo "${DESC[colindex]} ok"; else echo "${DESC[colindex]} NG"
How can I read this kind of file as a 2D array with bash or is there any convenient way to do that?
bash does not support 2D arrays. You can simulate them by generating 1D array variables like array1, array2, and so on.
Assuming DESC is a key (i.e. has no duplicate values) and does not contain any spaces:
#!/bin/bash
# read data from file1
idx=0
while read -a data$idx; do
    let idx++
done <file1.txt
# process data from file2
while read desc w2 s2; do
    for ((i=0; i<idx; i++)); do
        v="data$i[1]"
        [ "$desc" = "${!v}" ] && {
            w1="data$i[4]"
            s1="data$i[5]"
            if [ "$w2" = "${!w1}" -a "$s2" = "${!s1}" ]; then
                echo "$desc ok"
            else
                echo "$desc NG"
            fi
            break
        }
    done
done <file2.txt
For brevity, optimizations such as taking advantage of sort order are left out.
If the files actually contain the header NO DESC ID TYPE ... then use tail -n +2 to discard it before processing.
A more elegant solution is also possible, which avoids reading the entire file in memory. This should only be relevant for really large files though.
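For example, a single awk pass can do the lookup without simulating 2D arrays in bash. A sketch, assuming the headers shown above; it holds only the smaller file2.txt in memory and streams file1.txt:

awk '
NR == FNR {               # first file on the command line: file2.txt (DESC W S)
    if (FNR > 1) { w[$1] = $2; s[$1] = $3 }
    next
}
FNR > 1 && ($2 in w) {    # file1.txt: NO DESC ID TYPE W S GRADE
    print $2, ($5 == w[$2] && $6 == s[$2] ? "ok" : "NG")
}' file2.txt file1.txt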
If the row order does not need to be preserved (the rows can be sorted), maybe this is enough:
join -2 2 -o 1.1,1.2,1.3,2.5,2.6 <(tail -n +2 file2.txt|sort) <(tail -n +2 file1.txt|sort) |\
sed 's/^\([^ ]*\) \([^ ]*\) \([^ ]*\) \2 \3/\1 OK/' |\
sed '/ OK$/!s/\([^ ]*\) .*/\1 NG/'
For file1.txt
NO DESC ID TYPE W S GRADE
1 AAA 20 AD 100 100 E2
2 BBB C0 U 200 200 D
3 CCC 9G R 135 135 U1
4 DDD 9H Z 246 246 T1
5 EEE 9J R 789 789 U1
and file2.txt
DESC W S
AAA 000 100
CCC 135 135
EEE 789 000
FCK xxx 135
produces:
AAA NG
CCC OK
EEE NG
Explanation:
skip the header line in both files (tail -n +2)
sort both files
join the needed columns from both files into one table; the result contains only the lines that have a common DESC field, like this:
AAA 000 100 100 100
CCC 135 135 135 135
EEE 789 000 789 789
in the lines where columns 2 and 4 and columns 3 and 5 have the same values, substitute everything but the first column with OK
in the remaining lines, substitute the columns with NG

Read the number of columns using awk/sed

I have the following test file
Kmax Event File - Text Format
1 4 1000
65 4121 9426 12312
56 4118 8882 12307
1273 4188 8217 12309
1291 4204 8233 12308
1329 4170 8225 12303
1341 4135 8207 12306
63 4108 8904 12300
60 4106 8897 12307
731 4108 8192 12306
...
ÿÿÿÿÿÿÿÿ
In this file I want to delete the first two lines and apply some mathematical calculations. For instance, each column i will become $i-(i-1)*number. A script that does this is the following:
#!/bin/bash
if test $1 ; then
    if [ -f $1.evnt ] ; then
        rm -f $1.dat
        sed -n '2p' $1.evnt | (read v1 v2 v3
            for filename in $1*.evnt ; do
                echo -e "Processing file $filename"
                sed '$d' < $filename > $1_tmp
                sed -i '/Kmax/d' $1_tmp
                sed -i '/^'"$v1"' '"$v2"' /d' $1_tmp
                cat $1_tmp >> $1.dat
            done
            v3=`wc -l $1.dat | awk '{print $1}' `
            echo -e "$v1 $v2 $v3" > .$1.dat
            rm -f $1_tmp)
    else
        echo -e "\a!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
        echo -e " Event file $1.evnt doesn't exist !!!!!!"
        echo -e "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
    fi
else
    echo -e "\a!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
    echo -e "!!!!! Give name for event files !!!!!"
    echo -e "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
fi
awk '{print $1, $2-4096, $3-(2*4096), $4-(3*4096)}' $1.dat >$1_Processed.dat
rm -f $1.dat
exit 0
The file won't always have 4 columns. Is there a way to read the number of columns, print this number and apply those calculations?
EDIT: The idea is to have an input file (*.evnt) and convert it to *.dat or any other ASCII file (the format doesn't really matter) that contains only the numeric columns, then apply the calculation $i=$i-(i-1)*number. In addition, the number of columns should be kept in a variable that will be used by another program. For instance, in the above file number=4096, and a sample output file is the following:
65 25 1234 24
56 22 690 19
1273 92 25 21
1291 108 41 20
1329 74 33 15
1341 39 15 18
63 12 712 12
60 10 705 19
731 12 0 18
while in the console I will get the message "There are 4 detectors".
Finally a new file_processed.dat will be produced, where file is the initial name of awk's input file.
The way it should be executed is the following
./myscript <filename>
where <filename> is the name without the format. For instance, the files will have the format filename.evnt so it should be executed using
./myscript filename
Let's start with this to see if it's close to what you're trying to do:
$ numdet=$( awk -v num=4096 '
    NR>2 && NF>1 {
        out = FILENAME "_processed.dat"
        for (i=1;i<=NF;i++) {
            $i = $i-(i-1)*num
        }
        nf = NF
        print > out
    }
    END {
        printf "There are %d detectors\n", nf | "cat>&2"
        print nf
    }
' file )
There are 4 detectors
$ cat file_processed.dat
65 25 1234 24
56 22 690 19
1273 92 25 21
1291 108 41 20
1329 74 33 15
1341 39 15 18
63 12 712 12
60 10 705 19
731 12 0 18
$ echo "$numdet"
4
Is that it?
Using awk
awk 'NR<=2{next}{for (i=1;i<=NF;i++) $i=$i-(i-1)*4096}1' file
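The final 1 is a condition that is always true with no action, so awk's default action (print the current, now modified, record) fires for every line that gets past the first rule. A slightly guarded variant that also skips the non-numeric trailer line and takes the offset as a parameter (a sketch):

awk -v num=4096 'NR<=2 || NF<2 {next} {for (i=1;i<=NF;i++) $i -= (i-1)*num} 1' file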

shellscript and awk extraction to calculate averages

I have a shell script that contains a loop. This loop calls another script. The output of each run of the loop is appended to a file (outOfLoop.tr). When the loop is finished, an awk command should calculate the average of specific columns and append the results to another file (fin.tr). At the end, fin.tr is printed.
I managed to get the first part, appending the results from the loop into the outOfLoop.tr file, and my awk commands seem to work... but I'm not getting the final expected output in terms of format. I think I'm missing something. Here is my try:
#!/bin/bash
rm outOfLoop.tr
rm fin.tr
x=1
lmax=4
while [ $x -le $lmax ]
do
    calling another script >> outOfLoop.tr
    x=$(( $x + 1 ))
done
cat outOfLoop.tr
#/////////////////
#// I'm getting the above part correctly and the output is:
27 194 119 59 178
27 180 100 30 187
27 175 120 59 130
27 189 125 80 145
#////////////////////
#back again to the script
echo "noRun\t A\t B\t C\t D\t E"
echo "----------------------\n"
#// print the total number of runs from the loop
echo "$lmax\t">>fin.tr
#// extract the first column from the output which is 27
awk '{print $1}' outOfLoop.tr >>fin.tr
echo "\t">>fin.tr
#Sum the column---calculate average
awk '{s+=$5;max+=0.5}END{print s/max}' outOfLoop.tr >>fin.tr
echo "\t">>fin.tr
awk '{s+=$4;max+=0.5}END{print s/max}' outOfLoop.tr >>fin.tr
echo "\t">>fin.tr
awk '{s+=$3;max+=0.5}END{print s/max}' outOfLoop.tr >>fin.tr
echo "\t">>fin.tr
awk '{s+=$2;max+=0.5}END{print s/max}' outOfLoop.tr >> fin.tr
echo "-------------------------------------------\n"
cat fin.tr
rm outOfLoop.tr
I want the format to be like:
noRun A B C D E
----------------------------------------------------------
4 27 average average average average
I have incremented max inside the awk command by 0.5 because there is a blank line between each result in the outOfLoop.tr file.
$ cat file
27 194 119 59 178
27 180 100 30 187
27 175 120 59 130
27 189 125 80 145
$ cat tst.awk
NF {
    for (i=1;i<=NF;i++) {
        sum[i] += $i
    }
    noRun++
}
END {
    fmt="%-10s%-10s%-10s%-10s%-10s%-10s\n"
    printf fmt,"noRun","A","B","C","D","E"
    printf "----------------------------------------------------------\n"
    printf fmt,noRun,$1,sum[2]/noRun,sum[3]/noRun,sum[4]/noRun,sum[5]/noRun
}
$ awk -f tst.awk file
noRun A B C D E
----------------------------------------------------------
4 27 184.5 116 57 160
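To fold this back into the original shell script, the five separate awk invocations and the echo "\t" lines can be replaced by a single call (a sketch; "calling another script" is the placeholder from the question):

#!/bin/bash
rm -f outOfLoop.tr fin.tr
lmax=4
for ((x=1; x<=lmax; x++)); do
    calling another script >> outOfLoop.tr   # placeholder
done
awk -f tst.awk outOfLoop.tr > fin.tr
cat fin.tr
rm -f outOfLoop.tr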
