Input from an array for awk to find duplicates - shell

I tried to input data for awk from an array:
awk -v var="${A[*]}" 'BEGIN{split(var,list,"\n"); for (i=1;i<=length(list);i++) print list[i]}'
Also using awk to find duplicates between files:
filecnt=$(find "${pmdir}" -type f)
awk -v n=filecnt '{a[$0]++}END{for (i in a)if (a[i]>1){print i, a[i];}}' $filecnt >> ${outputfile}
But I had a hard time figuring out how to do it when awk takes an array as its input.
Something like:
awk -v var="${A[*]}" '{var[$0]++}END{for (i in var)if (var[i]>1){print i, var[i];}}'
A is column data read from a file:
for i in $( awk -F ',' '{ print $1; }' "${ifile}" )
do
A[$j]=$i
#echo "${A[$j]}"
j=$((j+1))
done
An example of A is:
0x10000
0x11000
0x01100
0x00010
0x11000
0x00010
0x00010
The wanted output is:
0x11000 2
0x00010 3
Thanks for your suggestions.

Is this what you want?
$ printf '%s\n' "${A[#]}" | sort | uniq -cd | awk '{print $2, $1}'
0x00010 3
0x11000 2
or if you prefer:
$ printf '%s\n' "${A[#]}" | awk '{cnt[$0]++} END{for (val in cnt) if (cnt[val]>1) print val, cnt[val]}'
0x11000 2
0x00010 3
or:
$ awk -v vals="${A[*]}" 'BEGIN{split(vals,tmp); for (i in tmp) cnt[tmp[i]]++; for (val in cnt) if (cnt[val]>1) print val, cnt[val]}'
0x11000 2
0x00010 3
Note that that last one relies on none of the values in A[] containing spaces or escape chars.
Your for loop isn't the right way to populate A[] in the first place, though; this is:
A=()
while IFS= read -r i; do
A+=( "$i" )
done < <(cut -d',' -f1 "$ifile")
or:
A=()
while IFS=',' read -r i _; do
A+=( "$i" )
done < "$ifile"
or:
readarray -t A < <(cut -d',' -f1 "$ifile")
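If the array is only a means to an end, you could also skip it entirely and let awk read the first column of the file and count duplicates in one pass (a minimal sketch, assuming all you want are the duplicated column-1 values and their counts from "$ifile"):
awk -F',' '{cnt[$1]++} END{for (val in cnt) if (cnt[val]>1) print val, cnt[val]}' "$ifile"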

Related

How to grab fields in inverted commas

I have a text file which contains the following lines:
"user","password_last_changed","expires_in"
"jeffrey","2021-09-21 12:54:26","90 days"
"root","2021-09-21 11:06:57","0 days"
How can I grab the two fields jeffrey and 90 days from between the inverted commas and save them in variables?
If awk is an option, you could read the result into an array and then save the elements as individual variables.
$ IFS="\"" read -ra var <<< $(awk -F, '/jeffrey/{ print $1, $NF }' input_file)
$ var2="${var[3]}"
$ echo "$var2"
90 days
$ var1="${var[1]}"
$ echo "$var1"
jeffrey
while read -r line; do # read in line by line
name=$(echo "$line" | awk -F, '{ print $1 }' | sed 's/"//g')   # grab first col and strip "
expire=$(echo "$line" | awk -F, '{ print $3 }' | sed 's/"//g') # grab third col and strip "
echo "$name" "$expire" # do your business
done < yourfile.txt
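The same loop can also be written without spawning awk and sed for every line, using read to split on the commas and parameter expansion to strip the quotes (a sketch, assuming none of the quoted fields contain an embedded comma):
while IFS=',' read -r name _ expire; do  # split on commas; the middle field is ignored
  name=${name//\"/}                      # strip the double quotes
  expire=${expire//\"/}
  echo "$name" "$expire"
done < yourfile.txt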
IFS=","
arr=( $(cat txt | head -2 | tail -1 | cut -d, -f 1,3 | tr -d '"') )
echo "${arr[0]}"
echo "${arr[1]}"
The result is in an array; you can access the elements by index.
Maybe the method below, using the sed and awk commands, will help you:
#!/bin/sh
username=$(sed -n '/jeffrey/p' demo.txt | awk -F',' '{print $1}')
echo "$username"
expires_in=$(sed -n '/jeffrey/p' demo.txt | awk -F',' '{print $3}')
echo "$expires_in"
Output :
jeffrey
90 days
Note:
The method above will work only if the username is distinct; as far as I know, usernames are not duplicated.
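For what it's worth, the sed-plus-awk pipeline can also be collapsed into a single awk call (a sketch against the same demo.txt, assuming no field contains an embedded "," sequence):
awk -F'","' '/jeffrey/ {gsub(/"/, "", $1); gsub(/"/, "", $3); print $1; print $3}' demo.txt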

Bash Grep Takes 3 Days To Run. Any Way to Enhance It?

I have a script like this and I would like some suggestions on enhancing it.
cd /home/output/
cat R*op.txt > R.total.op.txt
awk '{if( (length($8)>9) || ($8 ~ /^AAA/) ) {print $0}}' R.total.op.txt > temp && mv temp R.total.op.txt
cat S*op.txt > S.total.op.txt
awk '{if( (length($8)>9) || ($8 ~ /^AAA/) ) {print $0}}' S.total.op.txt > temp && mv temp S.total.op.txt
cat R.total.op.txt S.total.op.txt | awk '{print $4}' | sort -k1,1 | awk '!x[$1]++' > genes.txt
rm *total.op.txt
head genes.txt
cd /home/output/
for j in R1_with-genename R2_with-genename S1_with-genename S2_with-genename
do
**for i in `cat genes.txt`; do cat $j'.op.txt' | grep -w $i >> $j'_'$i'_gene.txt'**;done
done
ls -m1 *gene.txt | wc -l
find . -size 0 -delete
ls -m1 *gene.txt | wc -l
rm genes.txt
cd /home/output/
for i in `ls *gene.txt`
do
paste <(awk '{print $4"\t"$8"\t"$9"\t"$13}' $i | awk '!x[$1]++' | awk '{print $1}') <(awk '{print $4"\t"$8"\t"$9}' $i | awk '{if( (length($2)>9) || ($2 ~ /^AAA/) ) {print $0}}' | sort -k2,2 | awk '{ sum += $3 } END { if (NR > 0) print sum / NR }') <(awk '{print $4"\t"$8"\t"$9}' $i| awk '{if( (length($2)>9) || ($2 ~ /^AAA/) ) {print $0}}' | sort -k2,2 | wc -l) <(awk '{print $4"\t"$8"\t"$9"\t"$13}' $i | awk '{if( (length($2)>9) || ($2 ~ /^AAA/) ) {print $0}}' | sort -k2,2 | grep -v ":::" | wc -l) > $i'_stats.txt'
done
rm *gene.txt
cd /home/output/
for j in R1_with-genename R2_with-genename S1_with-genename S2_with-genename
do
cat $j*stats.txt > $j'.final.txt'
done
rm *stats.txt
cd /home/output/
for i in `ls *final.txt`
do
sed "1iGene_Name\tMean1\tCalculated\tbases" $i > temp && mv temp $i
done
head *final.txt
The very first for loop (marked with asterisks), the one with cat genes.txt, is the grep loop that is taking 3 days to finish. Can someone please advise on any enhancements to the command, and whether this entire script can be made into a single command? Thanks in advance.
Try replacing the nested loops with a single awk.
awk 'FNR == NR {words[$0] = "\\b" $0 "\\b"; next}
{ for (i in words) if ($0 ~ words[i]) {
      fn = FILENAME "_" i "_gene.txt";
      print >> fn;
      close(fn);
  }
}' genes.txt {R,S}{1,2}_with-genename.op.txt
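Since genes.txt is itself built from column 4 of these same files, an even cheaper variant is possible (a sketch, on the assumption that the gene name always appears as whole field $4): hash the gene names once and do an exact lookup per line instead of testing every gene's regex against every line.
awk 'FNR == NR {genes[$0]; next}   # first file: remember every gene name
     $4 in genes {                 # remaining files: constant-time lookup on field 4
         fn = FILENAME "_" $4 "_gene.txt"
         print >> fn
         close(fn)
     }' genes.txt {R,S}{1,2}_with-genename.op.txt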
I suggest creating a sed script:
# name script
SEDSCRIPT=split.sed
# Make sure it is empty
echo "" > ${SEDSCRIPT}
# Loop through all the words in genes.txt and
# create sed command that will write that line to a file
for word in `cat genes.txt`; do
echo "/${word}/w ${word}.txt" >> ${SEDSCRIPT}
done
basenames="R1_with-genename R2_with-genename S1_with-genename S2_with-genename"
# Loop over input files
for name in "${basenames}"; do
# Run sed script against file
sed -n -f ${SEDSCRIPT} ${name}.op.txt
# Move the temporary files created by sed to their permanent names
for word in `cat genes.txt`; do
mv ${word}.txt ${name}_${word}_gene.txt
done
done

How can I specify a row in awk in for loop?

I'm using the following awk command:
my_command | awk -F "[[:space:]]{2,}+" 'NR>1 {print $2}' | egrep "^[[:alnum:]]"
which successfully returns my data like this:
fileName1
file Name 1
file Nameone
f i l e Name 1
So as you can see some file names have spaces. This is fine as I'm just trying to echo the file name (nothing special). The problem is calling that specific row within a loop. I'm trying to do it this way:
i=1
for num in $rows
do
fileName=$(my_command | awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]])"
echo "$num $fileName"
$((i++))
done
But my output is always null
I've also tried using awk -v record=$i and then printing $record but I get the below results.
f i l e Name 1
EDIT
Sorry for the confusion: rows is a variable that lists ids like this: 11 12 13,
and each one of those ids ties to a file name. My command, without doing any parsing, looks like this:
id File Info OS
11 File Name1 OS1
12 Fi leNa me2 OS2
13 FileName 3 OS3
I can only use the id field to run the command that I need, but I want to use the File Info field to notify the user of the actual file that the command is being executed against.
I think your $i does not expand as expected. You should quote your arguments this way:
fileName=$(my_command | awk -F "[[:space:]]{2,}+" "NR==$i {print \$2}" | egrep "^[[:alnum:]]")
And you forgot the other ).
EDIT
As an update to your requirement, you could just pass the rows to a single awk command instead of a repetitive one inside a loop:
#!/bin/bash
ROWS=(11 12)
function my_command {
# This function just emulates my_command and should be removed later.
echo " id File Info OS
11 File Name1 OS1
12 Fi leNa me2 OS2
13 FileName 3 OS3"
}
awk -- '
BEGIN {
input = ARGV[1]
while (getline line < input) {
sub(/^ +/, "", line)
split(line, a, / +/)
for (i = 2; i < ARGC; ++i) {
if (a[1] == ARGV[i]) {
printf "%s %s\n", a[1], a[2]
break
}
}
}
exit
}
' <(my_command) "${ROWS[@]}"
That awk command could be condensed to one line as:
awk -- 'BEGIN { input = ARGV[1]; while (getline line < input) { sub(/^ +/, "", line); split(line, a, / +/); for (i = 2; i < ARGC; ++i) { if (a[1] == ARGV[i]) { printf "%s %s\n", a[1], a[2]; break; }; }; }; exit; }' <(my_command) "${ROWS[@]}"
Or better yet just use Bash instead as a whole:
#!/bin/bash
shopt -s extglob  # needed for the +( ) pattern used below
ROWS=(11 12)
while IFS=$' ' read -r LINE; do
IFS='|' read -ra FIELDS <<< "${LINE// +( )/|}"
for R in "${ROWS[#]}"; do
if [[ ${FIELDS[0]} == "$R" ]]; then
echo "${R} ${FIELDS[1]}"
break
fi
done
done < <(my_command)
It should give an output like:
11 File Name1
12 Fi leNa me2
Shell variables aren't expanded inside single-quoted strings. Use the -v option to set an awk variable to the shell variable:
fileName=$(my_command | awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]")
This method avoids having to escape all the $ characters in the awk script, as required in konsolebox's answer.
As you already heard, you need to populate an awk variable from your shell variable to be able to use the desired value within the awk script, so this:
awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]]"
should be this:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
Also, though, you don't need awk AND grep since awk can do anything grep can do, so you can change this part of your script:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
to this:
awk -v i="$i" -F "[[:space:]]{2,}+" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
and you don't need a + after a numeric range so you can change {2,}+ to just {2,}:
awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
Most importantly, though, instead of invoking awk once for every invocation of my_command, you can just invoke it once for all of them, i.e. instead of this (assuming this does what you want):
i=1
for num in $rows
do
fileName=$(my_command | awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}')
echo "$num $fileName"
$((i++))
done
you can do something more like this:
for num in $rows
do
my_command
done |
awk -F '[[:space:]]{2,}' '$2~/^[[:alnum:]]/{print NR, $2}'
I say "something like" because you don't tell us what "my_command", "rows" or "num" are so I can't be precise but hopefully you see the pattern. If you give us more info we can provide a better answer.
It's pretty inefficient to rerun my_command (and awk) every time through the loop just to extract one line from its output. Especially when all you're doing is printing out part of each line in order. (I'm assuming that my_command really is exactly the same command and produces the same output every time through your loop.)
If that's the case, this one-liner should do the trick:
paste -d' ' <(printf '%s\n' $rows) <(my_command |
awk -F '[[:space:]]{2,}+' '($2 ~ /^[[:alnum:]]/) {print $2}')

How can I print the duplicates in a file only once?

I have an input file that contains:
123,apple,orange
123,pineapple,strawberry
543,grapes,orange
790,strawberry,apple
870,peach,grape
543,almond,tomato
123,orange,apple
i want the output to be:
The following numbers are repeated:
123
543
Is there a way to get this output using awk? I'm writing the script in bash on Solaris.
sed -e 's/,/ , /g' <filename> | awk '{print $1}' | sort | uniq -d
awk -vFS=',' \
'{KEY=$1;if (KEY in KEYS) { DUPS[KEY]; }; KEYS[KEY]; } \
END{print "Repeated Keys:"; for (i in DUPS){print i} }' \
< yourfile
There are solutions with sort/uniq/cut as well (see above).
If you can live without awk, you can use this to get the repeating numbers:
cut -d, -f 1 my_file.txt | sort | uniq -d
Prints
123
543
Edit: (in response to your comment)
You can buffer the output and decide if you want to continue. For example:
out=$(cut -d, -f 1 a.txt | sort | uniq -d | tr '\n' ' ')
if [[ -n $out ]] ; then
echo "The following numbers are repeated: $out"
exit
fi
# continue...
This script will print only the numbers from the first column that are repeated more than once:
awk -F, '{a[$1]++}END{printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print ""}' file
Or in a bit shorter form:
awk -F, 'BEGIN{printf "Repeated "}(a[$1]++ == 1){printf "%s ", $1}END{print ""} ' file
If you want to exit your script in case a dup is found, then you can exit with a non-zero exit code. For example:
awk -F, 'a[$1]++==1{dup=1}END{if (dup) {printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print "";exit(1)}}' file
In your main script you can do:
awk -F, 'a[$1]++==1{dup=1}END{if (dup) {printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print "";exit(-1)}}' file || exit -1
Or in a more readable format:
awk -F, '
a[$1]++==1{
dup=1
}
END{
if (dup) {
printf "The following numbers are repeated: ";
for (i in a)
if (a[i]>1)
printf "%s ",i;
print "";
exit(-1)
}
}
' file || exit -1

Having trouble with awk

I am trying to assign the output of an awk statement to a variable. I am getting an error. Here is the code:
for i in `checksums.txt` do
md=`echo $i|awk -F'|' '{print $1}'`
file=`echo $i|awk -F'|' '{print $2}'`
done
Thanks
for i in `checksums.txt` do
This will try to execute checksums.txt, which is very probably not what you want. If you want the contents of that file do:
for i in $(<checksums.txt) ; do
md=$(echo $i|awk -F'|' '{print $1}')
file=$(echo $i|awk -F'|' '{print $2}')
# ...
done
(This is not optimal, and will not do what you want if the file has lines with spaces in them, but at least it should get you started.)
You don't need external programs for this:
while IFS=\| read m f; do
printf 'md is %s, filename is %s\n' "$m" "$f"
done < checksums.txt
Edited as per new requirement.
Given the file is already sorted, you could use uniq (assuming GNU uniq and md hash length of 33 characters):
uniq -Dw33 checksums.txt
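If checksums.txt turns out not to be sorted, you could sort it on the fly first (same 33-character key-width assumption as above):
sort checksums.txt | uniq -Dw33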
If GNU uniq is not available, you can use awk
(this version doesn't require a sorted input):
awk 'END {
for (M in m)
if (m[M] > 1)
print M, "==>", f[M]
}
{
m[$1]++
f[$1] = f[$1] ? f[$1] FS $2 : $2
}' checksums.txt
while read line
do
set -- `echo $line | tr '|' ' '`
echo md is $1, file is $2
done < checksums.txt
