Get common lines, for only specific fields, from multiple files - bash

I am trying to understand the following code, used to pull out overlapping lines across multiple files using bash.
awk 'END {
    # the END block is executed after
    # all the input has been read
    # loop over the rec array
    # and build the dup array, indexed by the number of
    # filenames containing a given record
    for (R in rec) {
        n = split(rec[R], t, "/")
        if (n > 1)
            dup[n] = dup[n] ? dup[n] RS sprintf("\t%-20s -->\t%s", rec[R], R) : \
                sprintf("\t%-20s -->\t%s", rec[R], R)
    }
    # loop over the dup array
    # and report the number and the names of the files
    # containing the record
    for (D in dup) {
        printf "records found in %d files:\n\n", D
        printf "%s\n\n", dup[D]
    }
}
{
    # build an array named rec (short for record), indexed by
    # the content of the current record ($0), concatenating
    # the filenames separated by / as values
    rec[$0] = rec[$0] ? rec[$0] "/" FILENAME : FILENAME
}' file[a-d]
After understanding what each sub-block of code is doing, I would like to extend this code to find specific fields with overlap, rather than the entire line. For example, I have tried changing the line:
n = split(rec[R], t, "/")
to
n = split(rec[R$1], t, "/")
to find the lines where the first field is the same across all files but this did not work. Eventually I would like to extend this to check that a line has fields 1, 2, and 4 the same, and then print the line.
Specifically, for the files mentioned in the example in the link:
if file 1 is:
chr1 31237964 NP_055491.1 PUM1 M340L
chr1 33251518 NP_037543.1 AK2 H191D
and file 2 is:
chr1 116944164 NP_001533.2 IGSF3 R671W
chr1 33251518 NP_001616.1 AK2 H191D
chr1 57027345 NP_001004303.2 C1orf168 P270S
I would like to pull out:
file1/file2 --> chr1 33251518 AK2 H191D
I found this code at the following link:
http://www.unix.com/shell-programming-and-scripting/140390-get-common-lines-multiple-files.html#post302437738. Specifically, I would like to understand what R, rec, n, dup, and D represent from the files themselves. It is unclear from the comments provided, and the printf statements I've added within the sub-loops fail.
Thank you very much for any insight on this!

The script works by building an auxiliary array, the indices of which are the lines in the input files (denoted by $0 in rec[$0]), and the values are filename1/filename3/... for those filenames in which the given line $0 is present. You can hack it up to just work with $1,$2 and $4 like so:
awk 'END {
    # the END block is executed after
    # all the input has been read
    # loop over the rec array
    # and build the dup array, indexed by the number of
    # filenames containing a given record
    for (R in rec) {
        n = split(rec[R], t, "/")
        if (n > 1) {
            split(R, R1R2R4, SUBSEP)
            dup[n] = dup[n] ? dup[n] RS sprintf("\t%-20s -->\t%s\t%s\t%s", rec[R], R1R2R4[1], R1R2R4[2], R1R2R4[3]) : \
                sprintf("\t%-20s -->\t%s\t%s\t%s", rec[R], R1R2R4[1], R1R2R4[2], R1R2R4[3])
        }
    }
    # loop over the dup array
    # and report the number and the names of the files
    # containing the record
    for (D in dup) {
        printf "records found in %d files:\n\n", D
        printf "%s\n\n", dup[D]
    }
}
{
    # build an array named rec (short for record), indexed by
    # the partial content of the current record
    # (special concatenation of $1, $2 and $4),
    # concatenating the filenames separated by / as values
    rec[$1,$2,$4] = rec[$1,$2,$4] ? rec[$1,$2,$4] "/" FILENAME : FILENAME
}' file[a-d]
This solution makes use of multidimensional arrays: we create rec[$1,$2,$4] instead of rec[$0]. This special syntax of awk concatenates the indices with the SUBSEP character, which is by default non-printable ("\034" to be precise), and so is unlikely to be part of any of the fields. In effect it does rec[$1 SUBSEP $2 SUBSEP $4]=.... Otherwise this part of the code is the same. Note that it would be more logical to move the second block to the beginning of the script and finish with the END block.
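As a quick standalone illustration of the SUBSEP mechanics (a toy example, not part of the solution):
awk 'BEGIN {
    rec["chr1", "33251518", "AK2"] = "file1/file2"   # same as rec["chr1" SUBSEP "33251518" SUBSEP "AK2"]
    for (R in rec) {
        n = split(R, f, SUBSEP)   # recover the three printable fields
        print n, f[1], f[2], f[3], "->", rec[R]
    }
}'
# prints: 3 chr1 33251518 AK2 -> file1/file2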
The first part of the code also has to be changed: now for (R in rec) loops over these tricky concatenated indices, $1 SUBSEP $2 SUBSEP $4. This is good for indexing, but you need to split R at the SUBSEP characters to obtain the printable fields $1, $2, $4 again. These are put into the array R1R2R4, which can be used to print the necessary output: instead of %s, ..., R we now have %s\t%s\t%s, ..., R1R2R4[1], R1R2R4[2], R1R2R4[3]. In effect we're doing sprintf ... %s ..., $1, $2, $4 with pre-saved fields $1, $2, $4. For your input example this will print
records found in 2 files:
foo11.inp1/foo11.inp2 --> chr1 33251518 AK2
Note that the output is missing H191D but rightly so: that is not in field 1, 2 or 4 (but rather in field 5), so there's no guarantee that it is the same in the printed files! You probably don't want to print that, or anyway have to specify how you should treat the columns which are not checked between files (and so may differ).
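If you do want to carry the unchecked columns along (for instance, to print the whole matching line, as in your desired output), one possible sketch is to remember a representative line per key in a second array (line is just an illustrative name here); note that this silently picks whichever file's version of the unchecked columns is read first:
{
    rec[$1,$2,$4] = rec[$1,$2,$4] ? rec[$1,$2,$4] "/" FILENAME : FILENAME
    line[$1,$2,$4] = line[$1,$2,$4] ? line[$1,$2,$4] : $0   # keep the first full line seen for this key
}
and in the END block print line[R] instead of reassembling the key with split().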
A bit of explanation for the original code:
rec is an array, the indices of which are full lines of input, and the values are the slash-separated list of files in which those lines appear. For instance, if file1 contains a line "foo bar", then rec["foo bar"]=="file1" initially. If file2 also contains this line, then rec["foo bar"]=="file1/file2". Note that there are no checks for multiplicity, so if file1 contains this line twice, you'll eventually get rec["foo bar"]=="file1/file1/file2" and obtain 3 for the number of files containing this line.
R goes over the indices of the array rec after it has been fully built. This means that R will eventually assume each unique line of every input file, allowing us to loop over rec[R], containing the filenames in which that specific line R was present.
n is a return value from split, which splits the value of rec[R] --- that is the filename list corresponding to line R --- at each slash. Eventually the array t is filled with the list of files, but we don't make use of this, we only use the length of the array t, i.e. the number of files in which line R is present (this is saved in the variable n). If n==1, we don't do anything, only if there are multiplicities.
The n > 1 branch creates classes according to the multiplicity of a given line: n==2 applies to lines that are present in exactly 2 files, n==3 to those which appear thrice, and so on. What it does is build an array dup, which for every multiplicity class (i.e. for every n) collects the output strings "filename1/filename2/... --> R", separated by RS (the record separator), for each value of R that appears in n files total. So eventually dup[n] for a given n contains all the strings of the form "filename1/filename2/... --> R", concatenated with the RS character (by default a newline).
The loop over D in dup will then go through multiplicity classes (i.e. valid values of n larger than 1), and print the gathered output lines which are in dup[D] for each D. Since we only defined dup[n] for n>1, D starts from 2 if there are multiplicities (or, if there aren't any, then dup is empty, and the loop over D will not do anything).
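To see rec and this reporting in action on a minimal input (a hypothetical two-file run; f1 and f2 are made-up file names):
printf 'a\nb\n' > f1; printf 'b\nc\n' > f2
awk '{ rec[$0] = rec[$0] ? rec[$0] "/" FILENAME : FILENAME }
END { for (R in rec) { n = split(rec[R], t, "/"); if (n > 1) print n, rec[R], "->", R } }' f1 f2
# prints: 2 f1/f2 -> b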

first you'll need to understand the 3 blocks in an AWK script:
BEGIN {
    # code that is executed once, before the data processing starts
}
{
    # block without a name (the default/main block),
    # executed per line of input
    # $0 contains the whole line (all columns)
    # $1 is the first column
    # $2 the second column, and so on...
}
END {
    # code that is executed once, after all data processing has finished
}
so you'll probably need to edit this part of the script:
{
    # build an array named rec (short for record), indexed by
    # the content of the current record ($0), concatenating
    # the filenames separated by / as values
    rec[$0] = rec[$0] ? rec[$0] "/" FILENAME : FILENAME
}

Related

Copy columns of a file to specific location of another pipe delimited file

I have a file suppose xyz.dat which has data like below -
a1|b1|c1|d1|e1|f1|g1
a2|b2|c2|d2|e2|f2|g2
a3|b3|c3|d3|e3|f3|g3
Due to some requirement, I am making two new files (m.dat and o.dat) from the original xyz.dat.
M.dat contains columns 2|4|6 like below after running some logic on it -
b11|d11|f11
b22|d22|f22
b33|d33|f33
O.dat contains all the columns except 2|4|6 like below without any change in it -
a1|c1|e1|g1
a2|c2|e2|g2
a3|c3|e3|g3
Now I want to merge the M and O files to recreate xyz.dat in its original format.
a1|b11|c1|d11|e1|f11|g1
a2|b22|c2|d22|e2|f22|g2
a3|b33|c3|d33|e3|f33|g3
Please note that the column positions can change for another file. I will get the column positions as input (in the above example: 2, 4 and 6), so I need either a generic command to run in a loop to merge the new M and O files, or a single command to which I can pass the column positions and which will copy the columns from the M.dat file and paste them into O.dat.
I tried paste, sed and cut but was not able to build a working command.
Please help.
To perform a column-wise merge of two files, it is better to use a scripting engine (Python, awk, Perl or even bash). Tools like paste, sed and cut do not have enough flexibility for such tasks (join may come close, but requires extra work).
Consider the following awk based script
awk -F'|' -v OFS='|' '
{
    # read the matching line from o.dat into s and split it on FS ("|")
    getline s < "o.dat"
    n = split(s, a)
    # print the merged line; add a[n], $n, ... as needed based on the actual number of fields
    print a[1], $1, a[2], $2, a[3], $3, a[4]
}
' m.dat
The print line can be customized to generate whatever column order is needed.
Based on clarification from OP, looks like the goal is: Given an input of two files, and list of columns where data should be merged from the 2nd file, produce an output file that contain the merge data.
For example:
awk -f mergeCols COLS=2,4,6 M=b.dat a.dat
# If file is marked executable (chmod +x mergeCols)
mergeCols COLS=2,4,6 M=b.dat a.dat
This will insert the columns from b.dat into positions 2, 4 and 6, whereas the other columns will contain data from a.dat.
Implementation, using awk (create a file mergeCols):
#! /usr/bin/awk -f
BEGIN {
    FS = OFS = "|"
}
NR == 1 {
    # Set the column map
    nc = split(COLS, c, ",")
    for (i = 1; i <= nc; i++) {
        cmap[c[i]] = i
    }
}
{
    # Read one line from the merged file, split into tokens in 'a'
    getline s < M
    n = split(s, a)
    # Merge columns using the pre-set 'cmap'
    k = 0
    for (i = 1; i <= NF + nc; i++) {
        # Pick a column: from the M file if mapped, otherwise the next field of the main input
        v = cmap[i] ? a[cmap[i]] : $(++k)
        sep = (i < NF + nc) ? "|" : "\n"
        printf "%s%s", v, sep
    }
}
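Applied to the sample data from the question (with the merge file M=m.dat and the main input o.dat), this should reproduce the original layout:
awk -f mergeCols COLS=2,4,6 M=m.dat o.dat
a1|b11|c1|d11|e1|f11|g1
a2|b22|c2|d22|e2|f22|g2
a3|b33|c3|d33|e3|f33|g3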

Awk substring doesnt yield expected result

I've a file whose content is below:
C2:0301,353458082243570,353458082243580,0;
C2:0301,353458082462440,353458082462450,0;
C2:0301,353458082069130,353458082069140,0;
C2:0301,353458082246230,353458082246240,0;
C2:0301,353458082559320,353458082559330,0;
C2:0301,353458080153530,353458080153540,0;
C2:0301,353458082462670,353458082462680,0;
C2:0301,353458081943950,353458081943960,0;
C2:0301,353458081719070,353458081719080,0;
C2:0301,353458081392470,353458081392490,0;
Field 2 and Field 3 (considering , as the separator) contain 15-digit IMEI number ranges, not individual IMEI numbers. The usual format of an IMEI is 8 digits (TAC) + 6 digits (serial number) + a padded 0. The 6-digit serial-number part defines the start and end of the range, everything else remaining the same. So in order to enumerate the individual IMEIs in the ranges (which is exactly what I want), I need to increment the 6-digit serial number of the starting IMEI in field 2 up to the 6-digit serial number of the ending IMEI in field 3. I am using the below AWK script:
awk -F"," '{v = substr($2,9,6); t = substr($3,9,6); while(v <= t) printf "%s%0"6"s%s,%s\n", substr($3,1,8),v++,substr($3,15,2),$4;}' TEMP.OUT.merge_range_part1_21
It gives me the below result:
353458082243570,0
353458082243580,0
353458082462440,0
353458082462450,0
353458082069130,0
353458082069140,0
353458082246230,0
353458082246240,0
353458082559320,0
353458082559330,0
353458080153530,0
353458082462670,0
353458082462680,0
353458081943950,0
353458081943960,0
353458081719070,0
353458081719080,0
353458081392470,0
353458081392480,0
353458081392490,0
The above is as expected except for the below line in the result:
353458080153530,0
The result is actually from the below line in the input file:
C2:0301,353458080153530,353458080153540,0;
But the expected output for the above line in input file is:
353458080153530,0
353458080153540,0
I need to know what's going wrong in my script.
The problem with your script is that you start with two string variables, v and t (typed as strings, since they are the result of a string operation, substr()), and then convert one to a number with v++, which strips leading zeros; but then you're still doing a string comparison with v <= t, since a string (t) compared to a number, string or numeric string is always a string comparison. Yes, you can add zero to each of the variables to force a numeric comparison.
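For example, a standalone sketch of the difference (toy values, not from your data):
awk 'BEGIN {
    v = substr("x9", 2, 1)      # "9"  - a string, because substr() returns strings
    t = substr("x10", 2, 2)     # "10" - also a string
    print (v <= t)              # 0: string comparison; "9" sorts after "10"
    print (v+0 <= t+0)          # 1: adding 0 forces a numeric comparison
}'
But IMHO this is more like what you're really trying to do: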
$ cat tst.awk
BEGIN { FS=","; re="(.{8})(.{6})(.*)" }
{
match($2,re,beg)
match($3,re,end)
for (i=beg[2]; i<=end[2]; i++) {
printf "%s%06d%s\n", end[1], i, end[3]
}
}
$ gawk -f tst.awk file
353458082243570
353458082243580
353458082462440
353458082462450
353458082069130
353458082069140
353458082246230
353458082246240
353458082559320
353458082559330
353458080153530
353458080153540
353458082462670
353458082462680
353458081943950
353458081943960
353458081719070
353458081719080
353458081392470
353458081392480
353458081392490
and when done with appropriate variables like that no conversion is necessary. Note also that with the above you don't need to repeatedly state the same or relative numbers to extract the part of the strings you care about, you just state the number of characters to skip (8) and the number to select (6) once. The above uses GNU awk for the 3rd arg to match().
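For awks without the three-argument match(), the same skip-8/take-6 decomposition can be done with substr() and explicit numeric coercion (a sketch under the same assumptions about the input layout):
awk -F',' '{
    beg = substr($2, 9, 6) + 0                 # starting serial, coerced to a number
    end = substr($3, 9, 6) + 0                 # ending serial
    for (i = beg; i <= end; i++)
        printf "%s%06d%s\n", substr($3, 1, 8), i, substr($3, 15)
}' file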
The problem was in the while(v <= t) part of the script. I believe that with leading 0s the comparison was not happening properly, so I ensured that the values are cast to numbers when doing the comparison in the while loop. The AWK documentation says you can coerce a value to a number by adding 0 to it. So while(v <= t) in the awk script needed to change to while(v+0 <= t+0). So the below AWK script:
awk -F"," '{v = substr($2,9,6); t = substr($3,9,6); while(v <= t) printf "%s%0"6"s%s,%s\n", substr($3,1,8),v++,substr($3,15,2),$4;}' TEMP.OUT.merge_range_part1_21
was changed to :
awk -F"," '{v = substr($2,9,6); t = substr($3,9,6); while(v+0 <= t+0) printf "%s%0"6"s%s,%s\n", substr($3,1,8),v++,substr($3,15,2),$4;}' TEMP.OUT.merge_range_part1_21
That only change got me the expected value for the failure case. For example this in my input file:
C2:0301,353458080153530,353458080153540,0;
Now gives me individual IMEIs as :
353458080153530,0
353458080153540,0
Use an if statement that checks for leading zeros in variable v, setting y accordingly:
awk -F"," '{v = substr($2,9,6); t = substr($3,9,6); while (v <= t) { if (substr(v,1,1)=="0") { v++; y="0"v } else { v++; y=v }; printf "%s%0"6"s%s,%s\n", substr($3,1,8), y, substr($3,15,2), $4; v=y } }' TEMP.OUT.merge_range_part1_21
Make sure that the while condition is contained in braces and also that v is incremented WITHIN the if conditions.
Set v=y at the end of the statement to allow this to work on additional increments.

Deleting lines with more than 30% lowercase letters

I am trying to process some data but am unable to find a working solution for my problem. I have a file which looks like this:
>ram
cacacacacacacacacatatacacatacacatacacacacacacacacacacacacaca
cacacacacacacaca
>pam
GAATGTCAAAAAAAAAAAAAAAAActctctct
>sam
AATTGGCCAATTGGCAATTCCGGAATTCaattggccaattccggaattccaattccgg
and many more lines....
I want to filter out all the lines and the corresponding headers (headers start with >) where the sequence string (the lines not starting with >) contains 30 percent or more lowercase letters. And the sequence strings can span multiple lines.
So after command xy the output should look like:
>pam
GAATGTCAAAAAAAAAAAAAAAAActctctct
I tried a mix of a while loop for reading the input file and then working with awk, grep and sed, but there was no good outcome.
Here's one idea, which sets the record separator to ">" to treat each header with its sequence lines as a single record.
Because the input starts with a ">", which causes an initial empty record, we guard the computation with NR > 1 (record number greater than one).
To count the number of characters we add the lengths of all the lines after the header. To count the number of lower-case characters, we save the string in another variable and use gsub to replace all the lower-case letters with nothing --- just because gsub returns the number of substitutions made, which is a convenient way of counting them.
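For example, the counting trick in isolation (a one-liner demo):
awk 'BEGIN { s = "aBcD"; n = gsub(/[a-z]/, "", s); print n, s }'   # prints: 2 BD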
Finally we check the ratio and print or not (adding back the initial ">" when we do print).
" }">
BEGIN { RS = ">" }
NR > 1 {
    total_cnt = 0
    lower_cnt = 0
    for (i = 2; i <= NF; ++i) {
        total_cnt += length($i)
        s = $i
        lower_cnt += gsub(/[a-z]/, "", s)
    }
    ratio = lower_cnt / total_cnt
    if (ratio < 0.3) print ">" $0
}
$ awk -f seq.awk seq.txt
>pam
GAATGTCAAAAAAAAAAAAAAAAActctctct
Or:
awk '{n=length(gensub(/[A-Z]/,"","g"));if(NF && n/length*100 < 30)print a $0;a=RT}' RS='>[a-z]+\n' file
RS='>[a-z]+\n' - Sets the record separator to the line containing '>' and name
RT - This value is set by what is matched by RS above
a=RT - save previous RT value
n=length(gensub(/[A-Z]/,"","g")); - get the length of lower case chars
if(NF && n/length*100 < 30)print a $0; - check we have a value and that the percentage of lower-case chars is less than 30. Note that gensub() and RT are GNU awk (gawk) extensions.
awk '/^>/ { b = B; gsub(/[A-Z]/, "", b)
            if (length(b) < length(B) * 0.3) print H "\n" B
            H = $0; B = ""; next }
     { B = ((B != "") ? B "\n" : "") $0 }
     END { b = B; gsub(/[A-Z]/, "", b)
           if (length(b) < length(B) * 0.3) print H "\n" B
     }' YourFile
Quick and dirty; a function would suit the printing need better.
Nowadays I would not use sed or awk anymore for anything longer than 2 lines.
#! /usr/bin/perl
use strict; # Force variable declaration.
use warnings; # Warn about dangerous language use.
sub filter # Declare a subroutine, a function called filter.
{
my ($header, $body) = @_; # Give the first two function arguments the names header and body.
my $lower = $body =~ tr/a-z//; # Count the translation of the characters a-z to nothing.
print $header, $body, "\n" # Print header, body and newline,
unless $lower / length ($body) > 0.3; # unless lower characters have more than 30%.
}
my ($header, $body); # Declare two variables for header and body.
while (<>) { # Loop over all lines from stdin or a file given in the command line.
if (/^>/) { # If the line starts with >,
filter ($header, $body) # call filter with header and body,
if defined $header; # if header is defined, which is not the case at the beginning of the file.
($header, $body) = ($_, ''); # Assign the current line to header and an empty string to body.
} else {
chomp; # Remove the newline at the end of the line.
$body .= $_; # Append the line to body.
}
}
filter ($header, $body); # Filter the last record.

Find lines that have partial matches

So I have a text file that contains a large number of lines. Each line is one long string with no spacing, however, the line contains several pieces of information. The program knows how to differentiate the important information in each line. The program identifies that the first 4 numbers/letters of the line coincide to a specific instrument. Here is a small example portion of the text file.
example text file
1002IPU3...
POIPIPU2...
1435IPU1...
1812IPU3...
BFTOIPD3...
1435IPD2...
As you can see, there are two lines that contain 1435 within this text file, which coincides with a specific instrument. However, these lines are not identical. The program I'm using cannot do its calculation if there are duplicates of the same station (i.e., there are two 1435* stations). I need a way to search through my text files and identify any duplicates of the partial strings that represent the stations, so that I can delete one or both of the duplicates. If a bash script could output the numbers of the lines containing the duplicates and what the duplicate lines say, that would be appreciated. I think there might be an easy way to do this, but I haven't been able to find any examples. Your help is appreciated.
If all you want to do is detect if there are duplicates (not necessarily count or eliminate them), this would be a good starting point:
awk '{ if (++seen[substr($0, 1, 4)] > 1) printf "Duplicates found : %s\n",$0 }' inputfile.txt
For that matter, it's a good starting point for counting or eliminating, too; it'll just take a bit more work...
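For instance, eliminating all but the first line per station prefix is the classic one-liner (a sketch; same inputfile.txt as above):
awk '!seen[substr($0, 1, 4)]++' inputfile.txt
This prints a line only the first time its 4-character prefix is seen.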
If you want the count of duplicates:
awk '{a[substr($0,1,4)]++} END {for (i in a) {if(a[i]>1) print i": "a[i]}}' test.in
1435: 2
or:
{
    a[substr($0,1,4)]++            # put prefixes into an array and count them
}
END {                              # in the end
    for (i in a) {                 # go through all indexes
        if (a[i] > 1) print i ": " a[i]   # and print out the duplicate prefixes and their counts
    }
}
Slightly roundabout, but this should work -
cut -c 1-4 file.txt | sort -u > list
for i in `cat list`;
do
echo -n "$i "
grep -c ^"$i" file.txt #This tells you how many occurrences of each 'station'
done
Then you can do whatever you want with the ones that occur more than once.
Use the following Python script (Python 2.7 syntax):
#!/usr/bin/python
file_name = "device.txt"
f1 = open(file_name, 'r')
device = {}
line_count = 0
for line in f1:
    line_count += 1
    if device.has_key(line[:4]):
        device[line[:4]] = device[line[:4]] + "," + str(line_count)
    else:
        device[line[:4]] = str(line_count)
f1.close()
print device
Here the script reads each line, and the initial 4 characters of each line are considered the device name. It creates a dictionary device, with each key being a device name and the value the comma-separated line numbers where that device name is found.
The output would be:
{'POIP': '2', '1435': '3,6', '1002': '1', '1812': '4', 'BFTO': '5'}
this might help you out!!

Bash/Awk: Reformat uneven columns with multiple deliminators

I have a CSV where I need to reformat a single column's contents.
The problem is that each cell has completely different lengths to reformat.
The current column looks like this (these are two lines of a single column):
Foo*foo*foo*1970,1980+Bar*bar*bar*1970
Foobar*Foobar*foobarbar*1970,1975,1980
The result should look like this (still two lines, one column):
Foo*foo*foo*1970+Foo*foo*foo*1980+Bar*bar*bar*1970
Foobar*Foobar*foobarbar*1970+Foobar*Foobar*foobarbar*1975+Foobar*Foobar*foobarbar*1980
This is what I'm trying to do:
#!/bin/bash
cat foocol | \
awk -F'+' \
'{for i in NF print $i}' \
| awk -F'*' \
'{$Foo=$1"*"$2"*"$3"*" print $4}' \
\
| awk -v Foo=$Foo -F',' \
'{for j in NF do \
print Foo""$j"+" }' \
> newcol
The idea is to iterate over the multiple '+' delimited data, while the first three '*' delimited values are to be grouped for every ',' delimited year, with a '+' between them
But I'm just getting syntax errors everywhere.
Thanks
$ awk --re-interval -F, -v OFS=+ '{match($1,/([^*]*\*){3}/);
prefix=substr($0,RSTART,RLENGTH);
for(i=2;i<=NF;i++) $i=prefix $i }1' file
Foo*foo*foo*1970+Foo*foo*foo*1980+Bar*bar*bar*1970
Foobar*Foobar*foobarbar*1970+Foobar*Foobar*foobarbar*1975+Foobar*Foobar*foobarbar*1980
perhaps add validation with if(match(...
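A quick note on the built-ins used above: match() sets RSTART and RLENGTH to the position and length of the regex match, which substr() then cuts out as the record prefix. In isolation (a toy example, with the interval {3} expanded so it runs without --re-interval):
awk 'BEGIN {
    s = "Foo*foo*foo*1970,1980"
    if (match(s, /([^*]*\*)([^*]*\*)([^*]*\*)/))   # three *-terminated fields
        print substr(s, RSTART, RLENGTH)           # prints: Foo*foo*foo*
}'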
Solution in TXR:
$ txr reformat.txr data
Foo*foo*foo*1970+Foo*foo*foo*1980+Bar*bar*bar*1970
Foobar*Foobar*foobarbar*1970+Foobar*Foobar*foobarbar*1975+Foobar*Foobar*foobarbar*1980
Code in reformat.txr:
#(repeat)
# (coll)#/\+?/#a*#b*#c*#(coll)#{x /[^,+]+/}#(until)+#(end)#(end)
# (output :into items)
# (repeat)
# (repeat)
#a*#b*#c*#x
# (end)
# (end)
# (end)
# (output)
# {items "+"}
# (end)
#(end)
This solution is based on regarding the data as having a nested syntax: groups of records are delimited by newlines, records within groups are separated by +, and within records there are four fields separated by *. The last field contains comma-separated items. The data is to be normalized by expanding copies of the records such that the comma-separated items are distributed across the copies.
The outer #(repeat) handles walking over the lines. The outer #(coll) iterates over records, collecting the first three fields into variables a, b and c. Then an inner #(coll) gets each comma separated item into the variable x. The inner #(coll) collects the x-s into a list, and the outer #(coll) also collects all the variables into lists, so a, b, c become lists of strings, and x is a list of lists of strings.
The :into items keyword parameter in the output causes the lines which would normally go the standard output device to be collected into a list of strings, and bound to a variable. For instance:
#(output :into lines)
a
b
cd
#(end)
establishes a variable lines which contains the list ("a" "b" "cd").
So here we are getting the output of the doubly-nested repeat as a bunch of lines, where each line represents a record, stored in a variable called items. Then we output these using the #{items "+"}, a syntax which outputs the contents of a list variable with the given separator.
The doubly nested repeat handles the expansion of records over each comma separated item from the fourth field. The outer repeat implicitly iterates over the lists a, b, c and x. Inside the repeat, these variables denote the items of their respective lists. Variable x is a list of lists, and so the inner repeat iterates over that. Inside the outer repeat, variables a, b, c are already scalar, and stay that way in the scope of the inner repeat: only x varies, which is exactly what we want.
In the data collection across each line, there are some subtleties:
# (coll)#/\+?/#a*#b*#c*#(coll)#{x /[^,+]+/}#(until)+#(end)#(end)
Firstly, we match an optional leading plus with the /\+?/ regex, thereby consuming it. Without this, the a field of every record, except for the first one, would include that separating + and we would get double +-s in the final output. The a, b, c variables are matched simply. TXR is non-greedy with regard to the separating material: #a* means match some characters up to the nearest * and bind them to a variable a. Collecting the x list is more tricky. Here we use a positive-regex-match variable: #{x /[^,+]+/} to extract the sub-field. Each x is a sequence of one or more characters which are not pluses or commas, extracted positively without regard for whatever follows, much like a tokenizer extracts a token. This inner collect terminates when it encounters a +, which is what the #(until)+ clause ensures. It will also implicitly terminate if it hits the end of the line; the #(until) match isn't mandatory (by default). That terminating + stays in the input stream, which is why we have to recognize it and discard it in front of the #a.
It should be noted that #(coll), by default, scans for matches and skips regions of text that do not match, just like its cousin #(collect) does with lines. For instance if we have #(coll)#{foo /[a-z]+/}#(end), which collects sequences of lower-case letters into foo, turning foo into a list of such strings, and if the input is 1234abcd-efgh.... ijk, then foo ends up with the list ("abcd" "efgh" "ijk"). This is why there is no explicit logic in the inner #(coll) to consume the separating commas: they are implicitly skipped.
