I have an input file that contains:
123,apple,orange
123,pineapple,strawberry
543,grapes,orange
790,strawberry,apple
870,peach,grape
543,almond,tomato
123,orange,apple
I want the output to be:
The following numbers are repeated:
123
543
Is there a way to get this output using awk? I'm writing the script in bash on Solaris.
sed -e 's/,/ , /g' <filename> | awk '{print $1}' | sort | uniq -d
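The sed step isn't strictly necessary; letting awk split on the comma directly gives the same result (a slightly leaner version of the same idea):
awk -F, '{print $1}' <filename> | sort | uniq -d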
awk -F',' '
    { key = $1; if (key in KEYS) DUPS[key] = 1; KEYS[key] = 1 }
    END { print "Repeated Keys:"; for (i in DUPS) print i }
' yourfile
There are solutions with sort/uniq/cut as well (see above).
If you can live without awk, you can use this to get the repeating numbers:
cut -d, -f 1 my_file.txt | sort | uniq -d
Prints
123
543
Edit: (in response to your comment)
You can buffer the output and decide if you want to continue. For example:
out=$(cut -d, -f 1 a.txt | sort | uniq -d | tr '\n' ' ')
if [[ -n $out ]] ; then
    echo "The following numbers are repeated: $out"
    exit
fi
# continue...
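If you want each number on its own line, matching the expected output, drop the tr and print the buffer as-is (a minimal variation of the same approach):
out=$(cut -d, -f 1 a.txt | sort | uniq -d)
if [[ -n $out ]] ; then
    echo "The following numbers are repeated:"
    echo "$out"
    exit
fi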
This script prints only the numbers in the first column that appear more than once:
awk -F, '{a[$1]++}END{printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print ""}' file
Or in a bit shorter form (the a[$1]++ == 1 test fires exactly the second time a key is seen, so each duplicate is reported once):
awk -F, 'BEGIN{printf "Repeated "}(a[$1]++ == 1){printf "%s ", $1}END{print ""} ' file
If you want to exit your script when a dup is found, you can exit with a non-zero code. For example:
awk -F, 'a[$1]++==1{dup=1}END{if (dup) {printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print "";exit(1)}}' file
In your main script you can do:
awk -F, 'a[$1]++==1{dup=1}END{if (dup) {printf "The following numbers are repeated: ";for (i in a) if (a[i]>1) printf "%s ",i; print "";exit(1)}}' file || exit 1
Or in a more readable format:
awk -F, '
a[$1]++ == 1 {
    dup = 1
}
END {
    if (dup) {
        printf "The following numbers are repeated: ";
        for (i in a)
            if (a[i] > 1)
                printf "%s ", i;
        print "";
        exit(1)
    }
}
' file || exit 1
I am trying to write a script that checks whether the value snp (column $4 of the test file) is present in another file (the map file). If it is, print the snp value and the distance value taken from the map file (distance is column $4 of the map file). If the snp value from the test file is not present in the map file, print the snp value with a 0 (zero) in the second column as the distance value.
The script is:
for chr in {1..22};
do
for snp in awk '{print $4}' test$chr.bim
i=$(grep $snp map$chr.txt | wc -l | awk '{print $1}')
if [[ $i == "0" ]]
then
echo "$snp 0" >> position.$chr
else
distance=$(grep $snp map$chr.txt | awk '{print $4}')
echo "$snp $distance" >> position.$chr
fi
done
done
My map file looks like this:
Chromosome Position(bp) Rate(cM/Mb) Map(cM)
chr22 16051347 8.096992 0.000000
chr22 16052618 8.131520 0.010291
chr22 16053624 8.131967 0.018472
and so on..
My test file looks like this:
22 16051347 0 16051347 C A
22 16052618 0 16052618 G T
22 17306184 0 17306184 T G
and so on..
I'm getting the following syntax errors:
position.sh: line 6: syntax error near unexpected token `i=$(grep $snp map$chr.txt | wc -l | awk '{print $1}')'
position.sh: line 6: `i=$(grep $snp map$chr.txt | wc -l | awk '{print $1}')'
Any tip?
Using an awk command directly as the argument of for is a syntax error, and there are several other syntax problems and inefficiencies here as well.
Try this:
for chr in {1..22}; do
    awk '{print $4}' "test$chr.bim" |
    while IFS="" read -r snp; do
        if ! grep -q "$snp" "map$chr.txt"; then
            echo "$snp 0"
        else
            awk -v snp="$snp" '
                $0 ~ snp { print snp, $4 }' "map$chr.txt"
        fi >> "position.$chr"
    done
done
The entire thing could probably be further refactored to a single Awk script.
for chr in {1..22}; do
    awk 'NR == FNR { ++a[$4]; next }
         $2 in a { print $2, $4; ++found[$2] }
         END { for (k in a) if (!found[k]) print k, 0 }' \
        "test$chr.bim" "map$chr.txt" >> "position.$chr"
done
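With the sample test and map files shown above, this should produce something like the following (the order of the unmatched entries printed by the END block is unspecified):
16051347 0.000000
16052618 0.010291
17306184 0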
The correct for syntax for what I'm guessing you wanted would look like
for snp in $(awk '{print $4}' "test$chr.bim"); do
but this has other problems; see don't read lines with for
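The gist of that advice, as a generic sketch (file and the column here are placeholders):
for snp in $(awk '{print $4}' file); do echo "$snp"; done              # word-splits and glob-expands every token
awk '{print $4}' file | while IFS= read -r snp; do echo "$snp"; done   # reads each line verbatim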
I have a big text file with millions of log lines.
I would like to filter all the lines which satisfy the following criteria:
url should be url=/v2/testB
totalTime value should be greater than 500
INFO|id=1|totaltime=5000|httpmethod=POST|url=/v1/testA
INFO|id=2|totaltime=200|httpmethod=POST|url=/v2/testB
INFO|id=3|totaltime=1000|httpmethod=POST|url=/v2/testB
INFO|id=4|totaltime=501|httpmethod=POST|url=/v2/testB
Result:
id=3,totaltime=1000
id=4,totaltime=501
I have tried using multiple awk invocations with an if block; I wonder if it can be done more efficiently. Thanks!
while IFS= read -r line; do
value=`echo $line|grep "url=/v2/testB" | awk -F"totaltime=" '{ print $2}'| awk -F"|" '{ print $1}'`
if (( $value > 500 )); then
echo $line
fi
done < file.log
You may use this awk:
awk -F '|' -v OFS=, '$NF == "url=/v2/testB" {v=$3; sub(/^totaltime=/, "", v); if (v+0 > 500) print $2, $3}' file
id=3,totaltime=1000
id=4,totaltime=501
To make it more readable:
awk -F '|' -v OFS=, '
$NF == "url=/v2/testB" {
    v = $3
    sub(/^totaltime=/, "", v)
    if (v+0 > 500)
        print $2, $3
}' file
If you have gnu-awk then it can be reduced to:
awk -F '|' -v OFS=, '$NF == "url=/v2/testB" &&
gensub(/^totaltime=/, "", "1", $3)+0 > 500 {print $2, $3}' file
v+0 is shorthand in awk to convert a string value to a number.
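A self-contained illustration of why the +0 matters when a value starts life as a string (the values here are made up):
awk 'BEGIN {
    s = "90"             # a string, like a field with its prefix stripped
    print (s > 500)      # prints 1 (!): string comparison, "9" sorts after "5"
    print (s + 0 > 500)  # prints 0: numeric comparison, 90 < 500
}'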
$ awk -F'|' -v OFS=',' '{split($3,t,/=/)} $5=="url=/v2/testB" && t[2]>500{print $2, $3}' file
id=3,totaltime=1000
id=4,totaltime=501
You seem to be in luck:
awk 'BEGIN{FS="|"; OFS=","}
 { url = substr($NF,index($NF,"=")+1)
   totaltime = substr($3,index($3,"=")+1)
 }
 (url == "/v2/testB") && (totaltime+0 > 500) { print $2,$3 }
' file
With your shown samples, please try the following awk program.
awk -F'\\||totaltime=' '$NF=="url=/v2/testB" && $4>500{print $2",totaltime="$4}' Input_file
Explanation: a detailed breakdown of the above code follows.
The -F option sets the field separator: fields are split on either | or totaltime= for every line of Input_file.
In the main program, check two conditions:
a- $NF (the last field) equals url=/v2/testB, AND
b- the 4th field is greater than 500; if both hold, then:
print the 2nd field of the current line, followed by the string ,totaltime=, followed by the 4th field, as per the output the OP requires (see the field dump below).
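To see how that composite separator splits a sample line, dump the fields (just a demonstration, runnable on its own):
echo 'INFO|id=3|totaltime=1000|httpmethod=POST|url=/v2/testB' |
awk -F'\\||totaltime=' '{for (i=1; i<=NF; i++) printf "$%d = <%s>\n", i, $i}'
This shows $4 = <1000>, with an empty $3 left where totaltime= was consumed as a separator.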
All the awk solutions are great; if awk is an option for you, use one of them.
If you want to fix your Bash attempt, you can do the following (${ti#*=} strips everything through the first =, so the -gt comparison sees just the number):
while IFS='|' read -r id ti; do
    [[ "${ti#*=}" -gt 500 ]] && printf "%s,%s\n" "$id" "$ti"
done < <(grep 'url=/v2/testB$' file | cut -d '|' -f 2,3)
Alternatively, you can eliminate cut and keep all five fields:
while IFS='|' read -r c1 c2 c3 c4 c5; do
    [[ "${c3#*=}" -gt 500 ]] && printf "%s,%s\n" "$c2" "$c5"
done < <(grep 'url=/v2/testB$' file)
Either prints:
id=3,totaltime=1000
id=4,totaltime=501
Final.txt
Failed,2021-12-07 22:30 EST,Scheduled Backup,abc,/clients/FORD_1030PM_EST_Windows2008,Windows File System
Failed,2021-12-07 22:00 EST,Scheduled Backup,def,/clients/FORD_10PM_EST_Windows2008,Windows File System
I want to iterate through these rows instead of columns.
Expected Output
client=abc
client=def
group=/clients/FORD_1030PM_EST_Windows2008
group=/clients/FORD_10PM_EST_Windows2008
I tried this
while read line ; do
group=$(awk -F',' '{print $4}')
client=$(awk -F',' '{print $5}')
echo $group
echo $client
done < Final
It's not working, but when I run this individually:
cat Final | awk -F',' '{print $4}'
it gives me the expected output; it just doesn't work inside the loop.
With GNU awk:
awk -F ',' 'BEGINFILE{f++}
f==1{print "client=" $4}
f==2{print "group=" $5}
' Final Final
Output:
client=abc
client=def
group=/clients/FORD_1030PM_EST_Windows2008
group=/clients/FORD_10PM_EST_Windows2008
One-pass awk solution, storing field $5 in an array for printing at the end:
$ awk -F, '{print $4; groups[NR]=$5} END {for (i=1;i<=NR;i++) print groups[i]}' Final.txt
abc
def
/clients/FORD_1030PM_EST_Windows2008
/clients/FORD_10PM_EST_Windows2008
Two-pass awk that eliminates the need to store field $5 in an array:
$ awk -F, 'FNR==NR {print $4;next} {print $5}' Final.txt Final.txt
abc
def
/clients/FORD_1030PM_EST_Windows2008
/clients/FORD_10PM_EST_Windows2008
With bash:
declare -a client group
while IFS=, read -ra fields; do
    client+=("${fields[3]}")
    group+=("${fields[4]}")
done < Final
printf 'client=%s\n' "${client[@]}"
printf 'group=%s\n' "${group[@]}"
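An alternative sketch of the same collect-then-print idea, using mapfile and cut (assumes bash 4+ and no embedded commas in the fields):
mapfile -t client < <(cut -d, -f4 Final)
mapfile -t group  < <(cut -d, -f5 Final)
printf 'client=%s\n' "${client[@]}"
printf 'group=%s\n' "${group[@]}"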
Using miller, it's no different really from a single-pass awk
solution, collecting the values in arrays:
mlr --icsv --implicit-csv-header put '
    @client[NR] = $4;
    @group[NR] = $5;
    filter false;
    end {
        emit @client, "client";
        emit @group, "group";
    }
' Final
The equivalent of the above, as more readable (IMO) awk code:
awk -F, '
    {client[NR] = $4; group[NR] = $5}
    END {
        for (i=1; i<=NR; i++) print "client=" client[i]
        for (i=1; i<=NR; i++) print "group=" group[i]
    }
' Final
Using csvtool is nice because it has a transpose function, but it still needs help getting to the desired output:
csvtool col 4,5 Final \
| csvtool cat <(echo "client,group") - \
| csvtool transpose - \
| awk -F, -v OFS="=" '{for (i=2; i<=NF; i++) print $1, $i}'
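For clarity, after the col/cat/transpose steps the intermediate stream should look like this (given the two sample rows); the final awk just unrolls each row into key=value lines:
client,abc,def
group,/clients/FORD_1030PM_EST_Windows2008,/clients/FORD_10PM_EST_Windows2008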
LOG="Failed,2021-12-07 22:30 EST,Scheduled Backup,abc,/clients/FORD_1030PM_EST_Windows2008,Windows File System
Failed,2021-12-07 22:00 EST,Scheduled Backup,def,/clients/FORD_10PM_EST_Windows2008,Windows File System"
getColumnTextSed(){
    log="$1"
    column=$2
    [ -n "$log" -a -n "$column" ] && sed -E 's/([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),(.*)/\'$column'/mg;t;d' <<<"$log"
}
getColumnTextAWK(){
    log="$1"
    column=$2
    [ -n "$log" -a -n "$column" ] && awk -F',' -v "c=$column" '{print $(c)}' <<<"$log"
}
echo "# with sed and column number"
echo "-------------------------------"
getColumnTextSed "$LOG" 4
getColumnTextSed "$LOG" 5
echo "-------------------------------"
echo "# with awk and column number"
echo "-------------------------------"
getColumnTextAWK "$LOG" 4
getColumnTextAWK "$LOG" 5
OUTPUT
# with sed and column number
-------------------------------
abc
def
/clients/FORD_1030PM_EST_Windows2008
/clients/FORD_10PM_EST_Windows2008
-------------------------------
# with awk and column number
-------------------------------
abc
def
/clients/FORD_1030PM_EST_Windows2008
/clients/FORD_10PM_EST_Windows2008
How can I use awk to add or append 000 to the timestamp column below? I tried the following command,
head -n 10000001 ratings.csv | tail -n +2 | awk '{print $1 "000"}' >> ratings_1.csv
but the data is not as expected.
$ cat ratings.csv |wc -l
20000264
$ head ratings.csv
userId,movieId,rating,timestamp
1,2,3.5,1112486027
1,29,3.5,1112484676
1,32,3.5,1112484819
1,47,3.5,1112484727
1,50,3.5,1112484580
1,112,3.5,1094785740
1,151,4.0,1094785734
1,223,4.0,1112485573
1,253,4.0,1112484940
My expected output should look like
1,2,3.5,1112486027000
awk '{ if (NR > 1) { $1 = $1 "000" } print }'
Maybe a faster version that wouldn't run the if on every line would be:
awk 'BEGIN { getline; print } { print $0 "000" }'
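If you'd rather not rely on the rows containing no spaces (with the default FS, the whole line is $1 here), a comma-aware variant that touches only the timestamp column might look like this sketch:
awk -F, -v OFS=, 'NR > 1 { $4 = $4 "000" } { print }' ratings.csv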
I'm using the following awk command:
my_command | awk -F "[[:space:]]{2,}+" 'NR>1 {print $2}' | egrep "^[[:alnum:]]"
which successfully returns my data like this:
fileName1
file Name 1
file Nameone
f i l e Name 1
So as you can see some file names have spaces. This is fine as I'm just trying to echo the file name (nothing special). The problem is calling that specific row within a loop. I'm trying to do it this way:
i=1
for num in $rows
do
fileName=$(my_command | awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]])"
echo "$num $fileName"
$((i++))
done
But my output is always null
I've also tried using awk -v record=$i and then printing $record but I get the below results.
f i l e Name 1
EDIT
Sorry for the confusion: rows is a variable that lists ids like this: 11 12 13,
and each one of those ids ties to a file name. My command's output, without any parsing, looks like this:
id File Info OS
11 File Name1 OS1
12 Fi leNa me2 OS2
13 FileName 3 OS3
I can only use the id field to run the command that I need, but I want to use the File Info field to notify the user of the actual file that the command is being executed against.
I think your $i does not expand as expected. You should quote your arguments this way:
fileName=$(my_command | awk -F "[[:space:]]{2,}+" "NR==$i {print \$2}" | egrep "^[[:alnum:]]")
And the closing ) of the command substitution ended up inside the egrep quotes.
EDIT
As an update for your edited requirement, you can pass the rows to a single awk command instead of running a repetitive one inside a loop:
#!/bin/bash
ROWS=(11 12)
function my_command {
# This function just emulates my_command and should be removed later.
echo " id File Info OS
11 File Name1 OS1
12 Fi leNa me2 OS2
13 FileName 3 OS3"
}
awk -- '
BEGIN {
    input = ARGV[1]
    while (getline line < input) {
        sub(/^ +/, "", line)
        split(line, a, / +/)
        for (i = 2; i < ARGC; ++i) {
            if (a[1] == ARGV[i]) {
                printf "%s %s\n", a[1], a[2]
                break
            }
        }
    }
    exit
}
' <(my_command) "${ROWS[@]}"
That awk command could be condensed to one line as:
awk -- 'BEGIN { input = ARGV[1]; while (getline line < input) { sub(/^ +/, "", line); split(line, a, / +/); for (i = 2; i < ARGC; ++i) { if (a[1] == ARGV[i]) { printf "%s %s\n", a[1], a[2]; break } } } exit }' <(my_command) "${ROWS[@]}"
Or better yet, just use Bash for the whole thing:
#!/bin/bash
ROWS=(11 12)
shopt -s extglob    # needed for the +( ) pattern in the expansion below
while IFS=$' ' read -r LINE; do
    IFS='|' read -ra FIELDS <<< "${LINE// +( )/|}"
    for R in "${ROWS[@]}"; do
        if [[ ${FIELDS[0]} == "$R" ]]; then
            echo "${R} ${FIELDS[1]}"
            break
        fi
    done
done < <(my_command)
It should give an output like:
11 File Name1
12 Fi leNa me2
Shell variables aren't expanded inside single-quoted strings. Use the -v option to set an awk variable to the shell variable:
fileName=$(my_command | awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]")
This method avoids having to escape all the $ characters in the awk script, as required in konsolebox's answer.
As you already heard, you need to populate an awk variable from your shell variable to be able to use the desired value within the awk script, so this:
awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]]"
should be this:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
Also, though, you don't need awk AND grep, since awk can do anything grep can do, so you can change this part of your script:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
to this:
awk -v i="$i" -F "[[:space:]]{2,}+" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
and you don't need a + after a numeric range so you can change {2,}+ to just {2,}:
awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
Most importantly, though, instead of invoking awk once for every invocation of my_command, you can just invoke it once for all of them, i.e. instead of this (assuming this does what you want):
i=1
for num in $rows
do
    fileName=$(my_command | awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}')
    echo "$num $fileName"
    ((i++))
done
you can do something more like this:
for num in $rows
do
    my_command
done |
awk -F '[[:space:]]{2,}' '$2~/^[[:alnum:]]/{print NR, $2}'
I say "something like" because you don't tell us what "my_command", "rows" or "num" are so I can't be precise but hopefully you see the pattern. If you give us more info we can provide a better answer.
It's pretty inefficient to rerun my_command (and awk) every time through the loop just to extract one line from its output. Especially when all you're doing is printing out part of each line in order. (I'm assuming that my_command really is exactly the same command and produces the same output every time through your loop.)
If that's the case, this one-liner should do the trick:
paste -d' ' <(printf '%s\n' $rows) <(my_command |
awk -F '[[:space:]]{2,}+' '($2 ~ /^[[:alnum:]]/) {print $2}')