What's the Ruby one-liner equivalent to awk's RS, NF, and OFS?

I have this file:
1
2
3
4
a
b
c
XY
Z
I want to convert every block into a TAB-separated line and append the current timestamp as the last column, to get output like this:
1 2 3 4 1548915098
a b c 1548915098
XY Z 1548915098
I can use awk to do it like this:
awk '$(NF+1)=systime()' RS= OFS="\t" file
where an empty RS is roughly equivalent to setting RS="\n\n+".
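For example, a quick way to see what paragraph mode gives you (the $1=$1 forces awk to rebuild $0 with OFS; each output line shows the record number, NF, then the rebuilt record):
$ printf '1\n2\n3\n4\n\na\nb\nc\n\nXY\nZ\n' | awk 'BEGIN{RS=""; OFS="\t"} {$1=$1; print NR, NF, $0}'
1 4 1 2 3 4
2 3 a b c
3 2 XY Z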
But I want to use a Ruby one-liner to do it. I've come up with this:
ruby -a -ne 'BEGIN{@lines=Array.new}; if ($_ !~ /^$/) then @lines.push($_.chomp) else (puts @lines.push(Time.now.to_i.to_s).join "\t"; @lines=Array.new) unless @lines.empty? end; END{puts @lines.push(Time.now.to_i.to_s).join "\t" unless @lines.empty?}' file
which is somewhat awkward.
Is there any elegant way to do this?
And is there any ruby equivalent to awk's RS, NF, and OFS?
Thanks :)

$ awk '$(NF+1)=systime()' RS= OFS="\t" ip.txt
1 2 3 4 1548917728
a b c 1548917728
XY Z 1548917728
$ # .to_s can be dropped here, since join will stringify it
$ ruby -00 -lane '$F.append(Time.now.to_i.to_s); puts $F.join("\t")' ip.txt
1 2 3 4 1548917730
a b c 1548917730
XY Z 1548917730
-00 paragraph mode
-a auto split, results available in the $F array
-l chomps record separator
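For reference, a rough mapping of those flags back to awk, with the answer's command written out long-hand (systime() is a GNU awk extension, so this assumes gawk):
# ruby -00       ~  RS=""   paragraph mode
# ruby -a ($F)   ~  awk's automatic $1..$NF field splitting
# ruby -l        ~  awk stripping the record separator when it reads a record
awk 'BEGIN{RS=""; OFS="\t"} {$(NF+1)=systime()} 1' ip.txt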

Count Occurrences of a text in a file with a shell command

The question seems simple, but there is a twist.
Consider a file with this data:
A,B
A,C
A,D
D,A
C,A
B,A
Here, I need a bash command which gives the count of occurrences taking
A,B
B,A
as a single count. Hence total count for this example should be 3 and not 6.
Essentially the same as the other answers, but it fixes the order of the key components before hashing:
$ awk -F, '!(($(($1<$2)+1),$(($2<=$1)+1)) in a){a[$(($1<$2)+1),$(($2<=$1)+1)];c++}END{print c}' file
3
Explained
$ awk -F, '
!( ( $(($1<$2)+1), $(($2<=$1)+1) ) in a ) {
    a[$(($1<$2)+1), $(($2<=$1)+1)]
    c++
}
END { print c }' file
$1<$2 is either 0 or 1, therefore ($1<$2)+1 is 1 or 2, and $(($1<$2)+1) is either $1 or $2. The same applies to the other component, $(($2<=$1)+1): it's either $2 or $1. So the expression always references a[$1,$2] or a[$2,$1] in a consistent order. Tested with:
A,A
A,A
The <= in the second component could just as well be <; either way, $1==$2 leads to a[$1,$1].
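The same idea is arguably easier to read if the canonical key is computed once into a variable (a sketch; key and seen are illustrative names):
awk -F, '{ key = ($1 < $2) ? $1 SUBSEP $2 : $2 SUBSEP $1 }
         !(key in seen) { seen[key]; c++ }
         END { print c }' file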
awk to the rescue!
$ awk -F, '!(($1,$2) in a){a[$1,$2];a[$2,$1];c++} END{print c}' file

Bash: replacing a column by another and using AWK to print specific order

I have a dummy file that looks like so:
a ID_1 S1 S2
b SNP1 1 0
c SNP2 2 1
d SNP3 1 0
I want to replace the contents of column 2 by the corresponding line number. My file would then look like so:
a 1 S1 S2
b 2 1 0
c 3 2 1
d 4 1 0
I can do this with the following command:
cut -f 1,3-4 -d " " file.txt | awk '{print $1 " " FNR " " $2,$3}'
My question is, is there a better way of doing this? In particular, the real file I am working on has 2303 columns. Obviously I don't want to have to write:
cut -f 1,3-2303 -d " " file.txt | awk '{print $1 " " FNR " " $2,$3,$4,$5 ETC}'
Is there a way to tell awk to print from column 2 to the last column without having to write all the names?
Thanks
I think this should do
$ awk '{$2=FNR} 1' file.txt
a 1 S1 S2
b 2 1 0
c 3 2 1
d 4 1 0
This changes the second column and prints the changed record. The default OFS is a single space, which is what you need here.
The above command is an idiomatic way of writing
awk '{$2=FNR} {print $0}' file.txt
You can think of a simple awk program as awk 'cond1{action1} cond2{action2} ...': action1 is executed only if cond1 evaluates to true, and so on. If the action portion is omitted, awk by default prints the input record. 1 is simply one way of writing an always-true condition.
See Idiomatic awk mentioned in https://stackoverflow.com/tags/awk/info for more such idioms
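A toy illustration of how those pieces compose (a condition with no action, then an action followed by the always-true 1):
$ seq 4 | awk 'NR%2'                  # condition only: default action prints
1
3
$ seq 4 | awk '{$0="line " $0} 1'     # action, then always-true print
line 1
line 2
line 3
line 4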
The following awk may also help here.
awk '{sub(/.*/,FNR,$2)} 1' Input_file
Output will be as follows.
a 1 S1 S2
b 2 1 0
c 3 2 1
d 4 1 0
Explanation: this uses awk's sub() function to substitute everything in $2 (the second field) with FNR, awk's built-in variable holding the current line number of the input file; the trailing 1 then prints the current line.

Merging word counts with Bash and Unix

I made a Bash script that extracts words from a text file with grep and sed, sorts them with sort, counts the repetitions with uniq -c, and then sorts again by frequency. The example output looks like this:
12 the
7 code
7 with
7 add
5 quite
3 do
3 well
1 quick
1 can
1 pick
1 easy
Now I'd like to merge all words with the same frequency into one line, like this:
12 the
7 code with add
5 quite
3 do well
1 quick can pick easy
Is there any way to do that with Bash and the standard Unix toolset? Or would I have to write a script/program in some more sophisticated scripting language?
With awk:
$ echo "12 the
7 code
7 with
7 add
5 quite
3 do
3 well
1 quick
1 can
1 pick
1 easy" | awk '{cnt[$1]=cnt[$1] ? cnt[$1] OFS $2 : $2} END {for (e in cnt) print e, cnt[e]} ' | sort -nr
12 the
7 code with add
5 quite
3 do well
1 quick can pick easy
You can do something similar with Bash 4 associative arrays (a rough sketch follows at the end of this answer), but awk is easier, and POSIX. Use that.
Explanation:
awk splits each line apart at the separator in FS, in this case the default of horizontal whitespace;
$1, the first field, is the count; use it to key an associative array, cnt[$1], that collects the items sharing that count;
cnt[$1]=cnt[$1] ? cnt[$1] OFS $2 : $2 is a ternary assignment: if cnt[$1] has no value yet, just assign the second field $2 to it (the right-hand side of the :); if it does have a previous value, concatenate $2 onto it, separated by the value of OFS (the left-hand side of the :);
At the end, print out the contents of the associative array.
Since awk associative arrays are unordered, you need to sort again by the numeric value of the first column. gawk can sort internally, but it is just as easy to call sort. The input to awk does not need to be sorted, so you can eliminate that part of the pipeline.
If you want the digits to be right-justified (as you have in your example):
$ awk '{cnt[$1]=cnt[$1] ? cnt[$1] OFS $2 : $2}
END {for (e in cnt) printf "%3s %s\n", e, cnt[e]} '
If you want gawk to sort numerically by descending values, you can add PROCINFO["sorted_in"]="#ind_num_desc" prior to traversing the array:
$ gawk '{cnt[$1]=cnt[$1] ? cnt[$1] OFS $2 : $2}
END {PROCINFO["sorted_in"]="#ind_num_desc"
for (e in cnt) printf "%3s %s\n", e, cnt[e]} '
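As for the Bash 4 associative arrays mentioned above, a rough sketch of the same idea (assuming the input is in a file named file; the awk version remains the better tool):
declare -A cnt
while read -r n w; do                  # n = count, w = word
    cnt[$n]+=${cnt[$n]:+ }$w           # append, space-separated
done < file
for n in "${!cnt[@]}"; do
    printf '%s %s\n' "$n" "${cnt[$n]}"
done | sort -nr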
With a single GNU awk expression (without a sort pipeline):
awk 'BEGIN{ PROCINFO["sorted_in"]="#ind_num_desc" }
{ a[$1]=(a[$1])? a[$1]" "$2:$2 }END{ for(i in a) print i,a[i]}' file
The output:
12 the
7 code with add
5 quite
3 do well
1 quick can pick easy
Bonus alternative solution using the GNU datamash tool:
datamash -W -g1 collapse 2 <file
The output (comma-separated collapsed fields):
12 the
7 code,with,add
5 quite
3 do,well
1 quick,can,pick,easy
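If you need space-separated output like the other answers, one option is to post-process the commas (safe only if the words themselves can never contain commas):
datamash -W -g1 collapse 2 <file | tr ',' ' '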
awk:
awk '{a[$1]=a[$1] FS $2}!b[$1]++{d[++c]=$1}END{while(i++<c)print d[i] a[d[i]]}' file
sed:
sed -r ':a;N;s/(\b([0-9]+).*)\n\s*\2/\1/;ta;P;D'
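The same sed program spread out with comments, since it is fairly dense (GNU sed; the comments sit on their own lines):
sed -r '
    :a
    # append the next input line to the pattern space
    N
    # if the next line repeats the leading count, splice its word on
    s/(\b([0-9]+).*)\n\s*\2/\1/
    # a substitution happened: branch back to try the following line too
    ta
    # no match: print up to the first newline, delete it, and restart
    P
    D
'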
You start with sorted data, so you only need a new line when the first field changes.
echo "12 the
7 code
7 with
7 add
5 quite
3 do
3 well
1 quick
1 can
1 pick
1 easy" |
awk '
{
    if ($1==last) {
        printf(" %s", $2)
    } else {
        last=$1
        printf("%s%s", (NR>1 ? "\n" : ""), $0)
    }
}
END { print "" }'
Next time you find yourself trying to manipulate text with a combination of grep, sed, shell, and so on, stop and just use awk instead; the end result will be clearer, simpler, more efficient, more portable, etc.
$ cat file
It was the best of times, it was the worst of times,
it was the age of wisdom, it was the age of foolishness.
$ cat tst.awk
BEGIN { FS="[^[:alpha:]]+" }
{
    for (i=1; i<NF; i++) {
        word2cnt[tolower($i)]++
    }
}
END {
    for (word in word2cnt) {
        cnt = word2cnt[word]
        cnt2words[cnt] = (cnt in cnt2words ? cnt2words[cnt] " " : "") word
        printf "%3d %s\n", cnt, word
    }
    for (cnt in cnt2words) {
        words = cnt2words[cnt]
        # printf "%3d %s\n", cnt, words
    }
}
$
$ awk -f tst.awk file | sort -rn
4 was
4 the
4 of
4 it
2 times
2 age
1 worst
1 wisdom
1 foolishness
1 best
$ cat tst.awk
BEGIN { FS="[^[:alpha:]]+" }
{
    for (i=1; i<NF; i++) {
        word2cnt[tolower($i)]++
    }
}
END {
    for (word in word2cnt) {
        cnt = word2cnt[word]
        cnt2words[cnt] = (cnt in cnt2words ? cnt2words[cnt] " " : "") word
        # printf "%3d %s\n", cnt, word
    }
    for (cnt in cnt2words) {
        words = cnt2words[cnt]
        printf "%3d %s\n", cnt, words
    }
}
$
$ awk -f tst.awk file | sort -rn
4 it was of the
2 age times
1 best worst wisdom foolishness
Just uncomment whichever printf line you like in the above script to get whichever type of output you want. The above will work in any awk on any UNIX system.
Using Miller's nest verb:
mlr -p nest --implode --values --across-records -f 2 --nested-fs ' ' file
Output:
12 the
7 code with add
5 quite
3 do well
1 quick can pick easy

Sum values of specific columns using awk

So I have a file which looks like this:
1 4 6
2 5
3
I want to sum only specific columns, let's say the first and third.
And the output should look like this:
7
2
3
I store the column numbers (arguments) in a variable:
x=${@:2} (because I omit the first passed argument, which is the filename)
How do I calculate this using awk in a bash script?
I was thinking of something like this:
for i in ${@:2}
do
awk -v c=$i '{sum+=$c;print sum}' $fname
done
But it does not work properly.
How about something like this:
$ awk -v c="1 3" 'BEGIN{split(c,a)}{c=0;for(i in a) c+=$a[i]; print c}' file
7
2
3
Explained:
$ awk -v c="1 3" '       # the desired column list, space-separated
BEGIN {
    split(c,a)           # if not space-separated, change it here
}
{
    c=0                  # reusing col var as count var. recycle or die!
    for(i in a)          # after split, the desired cols are in a: a[1]=1, a[2]=3
        c+=$a[i]         # sum em up
    print c              # print it
}' file
EDIT: changed comma-separation to space-separation.
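To wire this into the asker's script, one possible sketch (sumcols.sh is an illustrative name; the column numbers are the positional parameters after the filename):
#!/bin/bash
# usage: sumcols.sh FILE COL [COL...]
fname=$1
shift
# "$*" joins the remaining arguments with spaces, matching split(c,a) above
awk -v c="$*" 'BEGIN{split(c,a)} {s=0; for(i in a) s+=$a[i]; print s}' "$fname"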
awk '{print $1 + $3}' file
7
2
3
Missing fields such as $3 on the last two lines evaluate to 0 in numeric context, so no guard is needed.

Print a comma except on the last line in Awk

I have the following script
awk '{printf "%s", $1"-"$2", "}' $a >> positions;
where $a stores the name of the file. I am actually writing multiple column values into one row. However, I would like to print a comma only if I am not on the last line.
Single pass approach:
cat "$a" | # look, I can use this in a pipeline!
awk 'NR > 1 { printf(", ") } { printf("%s-%s", $1, $2) }'
Note that I've also simplified the string formatting.
Enjoy this one:
awk '{printf t $1"-"$2} {t=", "}' $a >> positions
Yeah, it looks a bit tricky at first sight, so I'll explain. First of all, let's change printf to print for clarity:
awk '{print t $1"-"$2} {t=", "}' file
and have a look at what it does, for example, for a file with this simple content:
1 A
2 B
3 C
4 D
so it will produce the following:
1-A
, 2-B
, 3-C
, 4-D
The trick is the leading t variable, which is empty at the beginning. It is assigned ({t=", "}) only after it has already been printed ({print t ...}), so as awk iterates over the records, every record after the first is prefixed with the separator, producing the desired sequence.
I would do it by finding the number of lines before running the script, e.g. with coreutils and bash:
awk -v nlines=$(wc -l < $a) '{printf "%s", $1"-"$2} NR != nlines { printf ", " }' $a >>positions
If your file only has 2 columns, the following coreutils alternative also works. Example data:
paste <(seq 5) <(seq 5 -1 1) | tee testfile
Output:
1 5
2 4
3 3
4 2
5 1
Now, replacing tabs with newlines, paste easily assembles the data into the desired format:
<testfile tr '\t' '\n' | paste -sd-,
Output:
1-5,2-4,3-3,4-2,5-1
You might think that awk's ORS and OFS would be a reasonable way to handle this:
$ awk '{print $1,$2}' OFS="-" ORS=", " input.txt
But this results in a final ORS because the input contains a newline on the last line. The newline is a record separator, so from awk's perspective there is an empty last record in the input. You can work around this with a bit of hackery, but the resultant complexity eliminates the elegance of the one-liner.
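You can see the dangling separator directly; here the trailing $ is the next shell prompt landing on the same line:
$ printf '3 2\n5 4\n1 8\n' | awk '{print $1,$2}' OFS="-" ORS=", "
3-2, 5-4, 1-8, $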
So here's my take on this. Since you say you're "writing multiple column values", it's possible that mucking with ORS and OFS would cause problems. So we can achieve the desired output entirely with formatting.
$ cat input.txt
3 2
5 4
1 8
$ awk '{printf "%s%d-%d",t,$1,$2; t=", "} END{print ""}' input.txt
3-2, 5-4, 1-8
This is similar to Michael's and rook's single-pass approaches, but it uses a single printf and correctly uses the format string for formatting.
This will likely perform negligibly better than Michael's solution because an assignment should take less CPU than a test, and noticeably better than any of the multi-pass solutions because the file only needs to be read once.
Here's a better way, without resorting to coreutils:
awk 'FNR==NR { c++; next } { ORS = (FNR==c ? "\n" : ", "); print $1, $2 }' OFS="-" file file
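The same two-pass command spread out with comments (note the file is named twice, once per pass):
awk '
    FNR==NR { c++; next }             # pass 1: count the records
    {                                 # pass 2:
        ORS = (FNR==c ? "\n" : ", ")  #   newline after the last record,
        print $1, $2                  #   ", " after every other one
    }
' OFS="-" file file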
awk '{a[NR]=$1"-"$2} END{for(i=1;i<=NR;i++) printf "%s%s", a[i], (i<NR ? ", " : "\n")}' $a > positions
