awk issue inside for loop - bash

I have many files with different names that end with txt.
rtfgtq56.txt
fgutr567.txt
..
So I am running this command
for i in *txt
do
awk -F "\t" '{print $2}' $i | grep "K" | awk '{print}' ORS=';' | awk -F "\t" '{OFS="\t"; print $i, $1}' > ${i%.txt*}.k
done
My problem is that I want to add the name of every file in the first column, so I run this part:
awk -F "\t" '{OFS="\t"; print $i, $1}' > ${i%.txt*}
$i means the current file in the for loop, but it does not work: inside the single-quoted awk program, $i is interpreted by awk itself (as a field reference), not expanded by the shell.
Do you know how I can solve it?

You want to refactor everything into a single Awk script anyway, and take care to quote your shell variables.
for i in *.txt
do
awk -F "\t" '/K/{a = a ";" $2}
END { print FILENAME, substr(a, 1) }' "$i" > "${i%.txt*}.k"
done
... assuming I untangled your logic correctly. The FILENAME Awk variable contains the current input file name, and substr(a, 2) drops the leading ";" from the accumulated list.
More generally, if you genuinely want to pass a variable from a shell script to Awk, you can use
awk -v awkvar="$shellvar" ' .... # your awk script here
# Use awkvar to refer to the Awk variable'
Perhaps see also useless use of grep.
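For instance, a sketch of the same loop passing the file name in with -v instead of relying on FILENAME (fname is just an arbitrary name for the injected variable):
for i in *.txt
do
awk -F "\t" -v fname="$i" '/K/ { a = a ";" $2 }
END { print fname, substr(a, 2) }' "$i" > "${i%.txt*}.k"
done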

Using the -v option of awk, you can create an awk Variable based on a shell variable.
awk -v i="$i" ....
Another possibility would be to make i an environment variable, which means that awk can access it via the predefined ENVIRON array, i.e. as ENVIRON["i"].
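A minimal sketch of the ENVIRON route, reusing the loop from the question:
for i in *.txt
do
export i
awk -F "\t" '{ print ENVIRON["i"], $2 }' "$i"
done
Note that i must be exported before awk starts, or ENVIRON["i"] will be empty.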

Related

How to combine two awk commands into a single line

How can I combine the following two awk commands into a single line:
awk -F= '$1=="ID" { print $2 ;}' /etc/*release && awk -F= '$1=="VERSION_ID" { print $2 ;}' /etc/*release | xargs
I'm trying to get the Linux distribution and OS version on a single line, in the format distribution+version.
For example: ubuntu20.04, rhel7.5
With bash:
source /etc/os-release; echo "$ID $VERSION_ID"
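If you want the two glued together as in your examples (ubuntu20.04), a minimal variation, run in a subshell so the sourced variables don't leak into your session:
( . /etc/os-release && echo "${ID}${VERSION_ID}" )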
You can capture each part into a variable and then print them out once you have processed the file:
awk -F= '$1=="ID"{id=$2}$1=="VERSION_ID"{vid=$2}END{print id,vid}' /etc/*release
I assume you don't want the quotes (as in your example), in that case:
awk -F '["=]' '$1=="ID" {printf("%s,",$3)} $1=="VERSION_ID" {printf("%s\n",$3)}' < /etc/*release

Script returns '/usr/bin/awk: Argument list too long' when using -v in an awk command

Here is the part of my script that uses awk.
ids=`cut -d ',' -f1 $file | sed ':a;N;$!ba;s/\n/,/g'`
awk -vdata="$ids" -F',' 'NR > 1 {if(index(data,$2)>0){print $0",true"}else{print $0",false"}}' $input_file >> $output_file
This works perfectly, but when I tried to get data to two or more files like this.
ids=`cut -d ',' -f1 $file1 $file2 $file3 | sed ':a;N;$!ba;s/\n/,/g'`
It returned this error.
/usr/bin/awk: Argument list too long
From what I researched, the error is caused not by the number of files but by the number of ids fetched.
Does anybody have an idea on how to solve this? Thanks.
You could use an environment variable to pass the data to awk. In awk, environment variables are accessible via the predefined ENVIRON array.
So try something like this:
export ids=`cut -d ',' -f1 $file | sed ':a;N;$!ba;s/\n/,/g'`
awk -F',' 'NR > 1 {if(index(ENVIRON["ids"],$2)>0){print $0",true"}else{print $0",false"}}' $input_file >> $output_file
Change the way you generate your ids so they come out one per line, like this, which I use as a very simple way to generate ids 2,3 and 9:
echo 2; echo 3; echo 9
2
3
9
Now pass that as the first file to awk and your $input_file as the second file to awk:
awk '...' <(echo 2; echo 3; echo 9) "$input_file"
In bash you can generate a pseudo-file with the output of a process using <(some commands), and that is what I am using.
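You can see the pseudo-file for yourself by reading it with cat:
$ cat <(echo 2; echo 3; echo 9)
2
3
9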
Now, in your awk, pick up the ids from the first file like this:
awk 'FNR==NR{ids[$1]++;next}' <(echo 2; echo 3; echo 9)
which will set ids[2]=1, ids[3]=1 and ids[9]=1.
Then pass both your files and add in your original processing:
awk 'FNR==NR{ids[$1]++;next} {if($2 in ids) print $0",true"; else print $0",false"}' <(echo 2; echo 3; echo 9) "$input_file"
So, for my final answer, your entire code will look like:
awk 'FNR==NR{ids[$1]++;next} {if($2 in ids) print $0",true"; else print $0",false"}' <(cut ... file1 file2 file3 | sed ...) "$input_file"
As @hek2mgl alludes in the comments, you can likely just pass the files which include the ids to awk "as is" and let awk find the ids itself rather than using cut and sed. If there are many, you can make them all come to awk as the first file with:
awk '...' <(cat file1 file2 file3) "$input_file"
There are 2 problems in your script:
awk -vdata="$ids" -F',' 'NR > 1 {if(index(data,$2)>0){print $0",true"}else{print $0",false"}}' $input_file >> $output_file
that could be causing that error:
-vdata=.. - that is gawk-specific; in other awks you need to leave a space between -v and data=. So if you aren't running gawk, it's hard to say what your awk will make of that statement, but it might treat it as multiple args.
$input_file - you MUST quote shell variables unless you have a specific purpose in mind when leaving them unquoted. If $input_file contains globbing chars or spaces, then leaving it unquoted will cause it to be expanded into potentially multiple files/args.
So try this:
awk -v data="$ids" -F',' 'NR > 1 {if(index(data,$2)>0){print $0",true"}else{print $0",false"}}' "$input_file" >> "$output_file"
and see if you still have the problem. Your script does have other unrelated issues of course, some of which have already been pointed out, and you can post a followup question if you want help with those, but just FYI that awk script could be written more concisely as:
awk -v data="$ids" 'BEGIN{FS=OFS=","} NR > 1{print $0, (index(data,$2) ? "true" : "false")}'
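A hypothetical run of that concise version, with a header line and ids containing 5 and 7:
$ ids="5,7"
$ printf 'name,id\na,5\nb,6\n' | awk -v data="$ids" 'BEGIN{FS=OFS=","} NR > 1{print $0, (index(data,$2) ? "true" : "false")}'
a,5,true
b,6,false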

Bash: AWK - $1 as first parameter of shell script

I have spent 2 hours on this and got nothing. I want $1 and $2 to be the first command-line arguments of the shell script, while $3 and $0 should remain awk's column references. I tried different methods but nothing works for me.
awk -F':' -v "limit=1000" '{ if ( $3 >=limit ) gsub("~/$1/",~/$2/); print \$0}' file.txt
The cleanest method is to explicitly pass the values from shell to awk with awk's -v option:
awk -F: -v limit=1000 -v patt="~/$1/" -v repl="~/$2/" '
$3 >=limit {gsub(patt,repl); print}
' file.txt
When your awk line is part of a script file, and you want to use $1 and $2 from the script in your awk command, you can temporarily end the single-quoted string and start it again.
awk -F':' -v "limit=1000" '{ if ( $3 >=limit ) gsub("~/'$1'/",~/'$2'/); print $0}' file.txt
You didn't post any sample input or expected output so this is a guess but you probably want something like this:
awk -F':' -v limit=1000 -v arg1="$1" -v arg2="$2" '$3 >= limit{gsub("~/" arg1 "/","~/" arg2 "/"); print}' file.txt
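For example, assuming that command sits in a script called replace.sh (a hypothetical name) and file.txt contains colon-separated records, calling it with olduser and newuser as the first two arguments would give:
$ cat file.txt
bob:x:1500:~/olduser/data
ann:x:500:~/olduser/data
$ ./replace.sh olduser newuser
bob:x:1500:~/newuser/data
Only the first record passes the $3 >= limit test, so only it is rewritten and printed.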

Awk: Drop last record separator in one-liner

I have a simple command (part of a bash script) that I'm piping through awk but can't seem to suppress the final record separator without then piping to sed. (Yes, I have many choices and mine is sed.) Is there a simpler way without needing the last pipe?
dolls=$(egrep -o 'alpha|echo|november|sierra|victor|whiskey' /etc/passwd \
| uniq | awk '{IRS="\n"; ORS=","; print}'| sed s/,$//);
Without the sed, this produces output like echo,sierra,victor, and I'm just trying to drop the last comma.
You don't need awk, try:
egrep -o ....uniq|paste -d, -s
Here is another example:
kent$ echo "a
b
c"|paste -d, -s
a,b,c
Also I think your chained command could be simplified; awk can do all of this in a one-liner.
Instead of egrep, uniq, awk, sed etc, all this can be done in one single awk command:
awk -F":" '!($1 in a){l=l $1 ","; a[$1]} END{sub(/,$/, "", l); print l}' /etc/password
Here is a small and quite straightforward one-liner in awk that suppresses the final record separator:
echo -e "alpha\necho\nnovember" | awk 'y {print s} {s=$0;y=1} END {ORS=""; print s}' ORS=","
Gives:
alpha,echo,november
So, your example becomes:
dolls=$(egrep -o 'alpha|echo|november|sierra|victor|whiskey' /etc/passwd | uniq | awk 'y {print s} {s=$0;y=1} END {ORS=""; print s}' ORS=",");
The benefit of using awk over paste or tr is that this also works with a multi-character ORS.
Since you tagged it bash here is one way of doing it:
#!/bin/bash
# Read the /etc/passwd file in to an array called names
while IFS=':' read -r name _; do
names+=("$name");
done < /etc/passwd
# Assign the content of the array to a variable
dolls=$( IFS=, ; echo "${names[*]}")
# Display the value of the variable
echo "$dolls"
echo "a
b
c" |
mawk 'NF-= _==$NF' FS='\n' OFS=, RS=
a,b,c

How to replace the nth column/field in a comma-separated string using sed/awk?

Assume I have a string
"1,2,3,4"
Now I want to replace, e.g. the 3rd field of the string by some different value.
"1,2,NEW,4"
I managed to do this with the following command:
echo "1,2,3,4" | awk -F, -v OFS=, '{$3="NEW"; print }'
Now the index for the column to be replaced should be passed as a variable. So in this case
index=3
How can I pass this to awk? Because this won't work:
echo "1,2,3,4" | awk -F, -v OFS=, '{$index="NEW"; print }'
echo "1,2,3,4" | awk -F, -v OFS=, '{$($index)="NEW"; print }'
echo "1,2,3,4" | awk -F, -v OFS=, '{\$$index="NEW"; print }'
Thanks for your help!
This might work for you:
index=3
echo "1,2,3,4" | awk -F, -v OFS=, -v INDEX=$index '{$INDEX="NEW"; print }'
or:
index=3
echo "1,2,3,4" | sed 's/[^,]*/NEW/'$index
Have the shell interpolate the index in the awk program:
echo "1,2,3,4" | awk -F, -v OFS=, '{$'$index'="NEW"; print }'
Note how the originally single quoted awk program is split in three parts, a single quoted beginning '{$', the interpolated index value, followed by the single quoted remainder of the program.
Here's a seductive way to break the awkwardness:
$ echo "1,2,3,4" | sed 's/,/\n/g' | sed -e $index's/.*/NEW/'
This is easily extendable to multiple indexes just by adding another -e $newindex's/.*/NEWNEW/'
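For instance, with GNU sed (assumed here, for the \n in the replacement) and two indexes:
$ index=2 newindex=4
$ echo "1,2,3,4" | sed 's/,/\n/g' | sed -e $index's/.*/NEW/' -e $newindex's/.*/NEWNEW/'
1
NEW
3
NEWNEW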
# This should be faster than awk or sed.
str="1,2,3,4"
IFS=',' read -r -a f <<< "$str"      # split on commas into array f
f[2]='NEW'                           # arrays are 0-indexed: f[2] is the 3rd field
(IFS=','; printf '%s\n' "${f[*]}")   # join with commas; the subshell keeps IFS local
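To take the column number from a variable as in the question, a small hypothetical extension (bash arrays are 0-based, hence the -1):
index=3
f[index-1]='NEW'   # arithmetic is allowed inside an array subscript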
With plain awk (i.e. not gawk etc.) I believe you'll have to use split(string, array, [fieldsep]), change the array entry of choice, and then join the entries back together with sprintf or similar in a loop.
gawk allows you to use a variable as a field number, $index in your example.
gawk is usually the default awk on Linux, so change your invocation to gawk "script" and see if it works.
