Sort multiple tables inside Markdown file with text interspersed between them - bash

There is a Markdown file with headings, text, and unsorted tables. I want to programmatically sort each table by ID (the 3rd column) in descending order, preferably using PowerShell or Bash. Each table should remain in its place in the file.
# Heading
Text
| Col A | Col B | ID |
|---------|---------|----|
| Item 1A | Item 1B | 8 |
| Item 2A | Item 2B | 9 |
| Item 3A | Item 3B | 6 |
# Heading
Text
| Col A | Col B | ID |
|---------|---------|----|
| Item 4A | Item 4B | 3 |
| Item 5A | Item 5B | 2 |
| Item 6A | Item 6B | 4 |
I have no control over how the Markdown file is generated. Truly.
Ideally the file would remain in Markdown after the sort for additional processing. However, I explored these options without success:
Convert to JSON and sort (the solutions I tried didn't agree with tables)
Convert to HTML and sort (only found JavaScript solutions)
This script alone, while helpful, would need to be modified to parse through the Markdown file (having trouble finding understandable guidance on how to run a script on content between two strings)
The reason for command line (and not JavaScript on the HTML, for example) is that this transformation will take place in an Azure Release Pipeline. It is possible to add an Azure Function to the pipeline, which would allow me to run JavaScript code in the cloud, and I will pursue that if all else fails. I want to exhaust command-line options first because I am not very familiar with JavaScript or how to pass content between Functions and releases.
Thank you for any ideas.

By modifying the referenced script, how about this:
flush() {
    printf "%s\n" "${lines[@]:0:2}"
    printf "%s\n" "${lines[@]:2}" | sort -t \| -nr -k 4
    lines=()
}
while IFS= read -r line; do
    if [[ ${line:0:1} = "|" ]]; then
        lines+=("$line")
    else
        (( ${#lines[@]} > 0 )) && flush
        echo "$line"
    fi
done < input.md
(( ${#lines[@]} > 0 )) && flush
Output:
# Heading
Text
| Col A | Col B | ID |
|---------|---------|----|
| Item 2A | Item 2B | 9 |
| Item 1A | Item 1B | 8 |
| Item 3A | Item 3B | 6 |
# Heading
Text
| Col A | Col B | ID |
|---------|---------|----|
| Item 6A | Item 6B | 4 |
| Item 4A | Item 4B | 3 |
| Item 5A | Item 5B | 2 |
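If the sorted result has to stay in the Markdown file for later pipeline steps, one possible way to apply it in place (assuming the loop above is saved as sort_tables.sh, a file name chosen here just for illustration) is to write to a temporary file and then replace the original:
# hypothetical wrapper: sort_tables.sh contains the loop above, which reads input.md
bash sort_tables.sh > input.sorted.md && mv input.sorted.md input.md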
BTW, if Perl is an option for you, here is an alternative:
perl -ne '
  sub flush {
      print splice(@ary, 0, 2);     # print the two header lines
      # sort the table rows on the ID column via a Schwartzian transform
      print map  { $_->[0] }
            sort { $b->[1] <=> $a->[1] }
            map  { [$_, (split(/\s*\|\s*/))[3]] }
            @ary;
      @ary = ();
  }
  # main loop
  if (/^\|/) {                      # table line: buffer it
      push(@ary, $_);
  } else {                          # any other line
      &flush if $#ary > 0;          # flush a buffered table first
      print;
  }
  END {
      &flush if $#ary > 0;          # flush a table that ends at EOF
  }
' input.md
Hope this helps.

If it is possible to identify the markdown tables, a small awk script (gawk, for asorti() with a custom comparator) or bash/python/perl can filter the output. It assumes each table has two header lines (the column names and the separator row).
awk -F'|' '
function cmp_id(i1, v1, i2, v2) {
    # numeric, descending on the captured ID values
    return v2 - v1
}
function show() {
    asorti(k, d, "cmp_id")
    # for (i = 1; i <= n; i++) print i, k[i], d[i]   # debug
    # Print the first 2 original header rows, followed by the sorted data lines
    print s[1]; print s[2]
    for (i = 1; i <= n; i++) if (d[i] + 0 >= 3) print s[d[i]]
    delete s; delete k; n = 0
}
# Capture table lines: the row text in s[], the ID column (4th |-field) in k[]
/^\|/ { s[++n] = $0; k[n] = $4; next }
n > 0 { show() }
{ print }
END { if (n > 0) show() }
' input.md

Related

bash looping and extracting of the fragment of txt file

I am dealing with the analysis of a large number of dlg text files located within the workdir. Each file has a table (usually located in different positions of the log) in the following format:
File 1:
CLUSTERING HISTOGRAM
____________________
________________________________________________________________________________
| | | | |
Clus | Lowest | Run | Mean | Num | Histogram
-ter | Binding | | Binding | in |
Rank | Energy | | Energy | Clus| 5 10 15 20 25 30 35
_____|___________|_____|___________|_____|____:____|____:____|____:____|____:___
1 | -5.78 | 11 | -5.78 | 1 |#
2 | -5.53 | 13 | -5.53 | 1 |#
3 | -5.47 | 17 | -5.44 | 2 |##
4 | -5.43 | 20 | -5.43 | 1 |#
5 | -5.26 | 19 | -5.26 | 1 |#
6 | -5.24 | 3 | -5.24 | 1 |#
7 | -5.19 | 4 | -5.19 | 1 |#
8 | -5.14 | 16 | -5.14 | 1 |#
9 | -5.11 | 9 | -5.11 | 1 |#
10 | -5.07 | 1 | -5.07 | 1 |#
11 | -5.05 | 14 | -5.05 | 1 |#
12 | -4.99 | 12 | -4.99 | 1 |#
13 | -4.95 | 8 | -4.95 | 1 |#
14 | -4.93 | 2 | -4.93 | 1 |#
15 | -4.90 | 10 | -4.90 | 1 |#
16 | -4.83 | 15 | -4.83 | 1 |#
17 | -4.82 | 6 | -4.82 | 1 |#
18 | -4.43 | 5 | -4.43 | 1 |#
19 | -4.26 | 7 | -4.26 | 1 |#
_____|___________|_____|___________|_____|______________________________________
The aim is to loop over all the dlg files and take the single line from the table corresponding to the widest cluster (the one with the largest number of # characters in the Histogram column). In the above example this is the third line.
3 | -5.47 | 17 | -5.44 | 2 |##
Then I need to add this line to final_log.txt together with the name of the log file (which should come before the line). So in the end I should have something in the following format (for 3 different log files):
"Name of the file 1": 3 | -5.47 | 17 | -5.44 | 2 |##
"Name_of_the_file_2": 1 | -5.99 | 13 | -5.98 | 16 |################
"Name_of_the_file_3": 2 | -4.78 | 19 | -4.44 | 3 |###
A possible model of my BASH workflow would be:
#!/bin/bash
for f in ./*.dlg   # loop over all the dlg files in the workdir
do
file_name2=$(basename "$f")
file_name="${file_name2/.dlg}"
echo "Processing of $f..."
# take a name of the file and save it in the log
echo "$file_name" >> $PWD/final_results.log
# search of the beginning of the table inside of each file and save it after its name
cat $f |grep 'CLUSTERING HISTOGRAM' >> $PWD/final_results.log
# check whether it works
gedit $PWD/final_results.log
done
Here I need to replace the combination of echo and grep with something that takes the selected parts of the table.
You can use this one, expected to be fast enough. Extra lines in your files, besides the tables, are not expected to be a problem.
grep "#$" *.dlg | sort -rk11 | awk '!seen[$1]++'
grep fetches all the histogram lines, which are then sorted in reverse order by the last field (so the lines with the most # characters end up on top), and finally awk keeps only the first line seen per file. Note that when grep parses more than one file it prints the filename at the beginning of each line by default (-H), so if you test it on a single file, use grep -H explicitly.
Result should be like this:
file1.dlg: 3 | -5.47 | 17 | -5.44 | 2 |##########
file2.dlg: 3 | -5.47 | 17 | -5.44 | 2 |####
file3.dlg: 3 | -5.47 | 17 | -5.44 | 2 |#######
Here is a modification to get the first appearance in case of many equal max lines in a file:
grep "#$" *.dlg | sort -k11 | tac | awk '!seen[$1]++'
We replaced sort's reverse flag with the tac command, which reverses the stream, so now for any equal lines the initial order is preserved.
Second solution
Here using only awk:
awk -F"|" '/#$/ && $NF > max[FILENAME] {max[FILENAME]=$NF; row[FILENAME]=$0}
END {for (i in row) print i ":" row[i]}' *.dlg
Update: if you execute it from a different directory and want to keep only the basename of every file, remove the path prefix:
awk -F"|" '/#$/ && $NF > max[FILENAME] {max[FILENAME]=$NF; row[FILENAME]=$0}
END {for (i in row) {sub(".*/","",i); print i ":" row[i]}}'
Probably makes more sense as an Awk script.
This picks the first line with the widest histogram in the case of a tie within an input file.
#!/bin/bash
awk 'FNR == 1 { if(sel) print sel; sel = ""; max = 0 }
FNR < 9 { next }
length($10) > max { max = length($10); sel = FILENAME ":" $0 }
END { if (sel) print sel }' ./"$prot"/*.dlg
This assumes the histograms are always the tenth field; if your input format is even messier than the lump you show, maybe adapt to taste.
In some more detail, the first line triggers on the first line of each input file. If we have collected a previous line (meaning this is not the first input file), print that, and start over. Otherwise, initialize for the first input file. Set sel to nothing and max to zero.
The second line skips lines 1-8 which contain the header.
The third line checks if the current line's histogram is longer than max. If it is, update max to this histogram's length, and remember the current line in sel.
The last line is spillover for when we have processed all files. We never printed the sel from the last file, so print that too, if it's set.
If you mean to say we should find the lines between CLUSTERING HISTOGRAM and the end of the table, we should probably have more information about what the surrounding lines look like. Maybe something like this, though:
awk '/CLUSTERING HISTOGRAM/ { if (sel) print sel; looking = 1; sel = ""; max = 0 }
!looking { next }
looking > 1 && $1 != looking { looking = 0; nextfile }
$1 == looking && length($10) > max { max = length($10); sel = FILENAME ":" $0 }
$1 == looking { ++looking }   # count up through the numbered cluster lines
END { if (sel) print sel }' ./"$prot"/*.dlg
This sets looking to 1 when we see CLUSTERING HISTOGRAM and then increments it on each numbered cluster line, so the table ends at the first line whose first field no longer matches the counter.
I would suggest processing using awk:
for i in $FILES
do
echo -n \""$i\": "
awk 'BEGIN {
output="";
outputlength=0
}
/(^ *[0-9]+)/ { # process only lines that start with a number
if (length(substr($10, 2)) > outputlength) { # if line has more hashes, store it
output=$0;
outputlength=length(substr($10, 2))
}
}
END {
print output # output the resulting line
}' "$i"
done

Bash - Removing empty columns from .csv file

I have a large .csv file in which I have to remove columns which are empty. By empty, I mean that they have a header, but the rest of the column contains no data.
I've written a Bash script to try and do this, but am running into a few issues.
Here's the code:
#!/bin/bash
total="$(head -n 1 Reddit-cleaner.csv | grep -o ',' | wc -l)"
i=1
count=0
while [ $i -le $total ]; do
cat Reddit-cleaner.csv | cut -d "," -f$i | while read CMD; do if [ -n CMD ]; then count=$count+1; fi; done
if [ $count -eq 1 ]; then
cut -d "," -f$i --complement <Reddit-cleaner.csv >Reddit-cleanerer.csv
fi
count=0
i=$i+1
done
Firstly I find the number of columns and store it in total. Then, while the program has not reached the last column, I loop through the columns individually. The nested while loop counts the non-empty rows in the column, and if only the header row is non-empty, it writes all the other columns to another file.
I recognise that there are a few problems with this script. Firstly, the count modification occurs in a subshell, so count is never modified in the parent shell. Secondly, the file I am writing to will be overwritten every time the script finds an empty column.
So my question is: how can I fix this? I initially wanted it to write to a new file column by column, based on count, but couldn't figure out how to get that done either.
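For what it's worth, the subshell problem described above is easy to reproduce in isolation; here is a minimal sketch (independent of the CSV itself) showing why count stays 0, and one common workaround:
#!/bin/bash
# count updated inside a pipeline is lost: the while loop runs in a subshell
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
echo "pipeline count: $count"        # prints 0
# reading via redirection/process substitution keeps the loop in the current shell
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\nc\n')
echo "redirection count: $count"     # prints 3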
Edit: People have asked for a sample input and output.
Sample input:
User, Date, Email, Administrator, Posts, Comments
a, 20201719, a@a.com, Yes, , 3
b, 20182817, b@b.com, No, , 4
c, 20191618, , No, , 4
d, 20190126, , No, , 2
Sample output:
User, Date, Email, Administrator, Comments
a, 20201719, a@a.com, Yes, 3
b, 20182817, b@b.com, No, 4
c, 20191618, , No, 4
d, 20190126, , No, 2
In the sample output, the column which has no data in it except for the header (Posts) has been removed, while the columns which are either entirely or partially filled remain.
I may be misinterpreting the question (due to its lack of example input and expected output), but this should be as simple as:
$ x="1,2,3,,4,field 5,,,six,7"
$ echo "${x//,+(,)/,}"
1,2,3,4,field 5,six,7
This requires bash with extglob enabled (shopt -s extglob). Otherwise, you can use an external call to sed:
$ echo "1,2,3,,4,field 5,,,six,7" |sed 's/,,,*/,/g'
1,2,3,4,field 5,six,7
There's a lot of redundancy in your sample code. You should really consider awk, since it already tracks the current field count (as NF) and the number of lines (as NR), so you could total them up with a simple total+=NF on each line (a tiny sketch of this follows the example below). With the empty fields collapsed, awk can just use the field number you want.
$ echo "1,2,3,,4,field 5,,,six,7" |awk -F ',+' '
{ printf "line %d has %d fields, the 6th of which is <%s>\n", NR, NF, $6 }'
line 1 has 7 fields, the 6th of which is <six>
This uses printf to show the number of records (NR, the current line number), the number of fields (NF), and the value of the sixth field ($6; the field number can also come from a variable, e.g. $NF is the value of the final field, since awk is one-indexed).
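As a side note, here is a tiny, purely illustrative sketch of the total+=NF bookkeeping mentioned above, using the same sample line:
echo "1,2,3,,4,field 5,,,six,7" | awk -F ',+' '{ total += NF } END { printf "%d fields across %d line(s)\n", total, NR }'
# prints: 7 fields across 1 line(s)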
This is really a job for a CSV parser, but you may use this awk script to get the job done:
cat removeEmptyCellsCsv.awk
BEGIN {
FS = OFS = ", "
}
NR == 1 {
for (i=1; i<=NF; i++)
e[i] = 1 # initially all cols are marked empty
next
}
FNR == NR {
for (i=1; i<=NF; i++)
e[i] = e[i] && ($i == "")
next
}
{
s = ""
for (i=1; i<=NF; i++)
s = s (i==1 || e[i-1] ? "" : OFS) (e[i] ? "" : $i)
print s
}
Then run it as:
awk -f removeEmptyCellsCsv.awk file.csv{,}
Using the sample data provided in the question, it will produce the following output:
User, Date, Email, Administrator, Comments
a, 20201719, a@a.com, Yes, 3
b, 20182817, b@b.com, No, 4
c, 20191618, , No, 4
d, 20190126, , No, 2
Note that the Posts column has been removed because it is empty in every record.
$ cat tst.awk
BEGIN { FS=OFS="," }
NR==FNR {
if ( NR > 1 ) {
for (i=1; i<=NF; i++) {
if ( $i ~ /[^[:space:]]/ ) {
gotValues[i]
}
}
}
next
}
{
c=0
for (i=1; i<=NF; i++) {
if (i in gotValues) {
printf "%s%s", (c++ ? OFS : ""), $i
}
}
print ""
}
$ awk -f tst.awk file file
User, Date, Email, Administrator, Comments
a, 20201719, a@a.com, Yes, 3
b, 20182817, b@b.com, No, 4
c, 20191618, , No, 4
d, 20190126, , No, 2
See also What's the most robust way to efficiently parse CSV using awk? if you need to work with any more complicated CSVs than the one in your question.
You can use Miller (https://github.com/johnkerl/miller) and its remove-empty-columns verb.
Starting from
+------+----------+---------+---------------+-------+----------+
| User | Date     | Email   | Administrator | Posts | Comments |
+------+----------+---------+---------------+-------+----------+
| a    | 20201719 | a@a.com | Yes           | -     | 3        |
| b    | 20182817 | b@b.com | No            | -     | 4        |
| c    | 20191618 | -       | No            | -     | 4        |
| d    | 20190126 | -       | No            | -     | 2        |
+------+----------+---------+---------------+-------+----------+
and running
mlr --csv remove-empty-columns input.csv >output.csv
you will have
+------+----------+---------+---------------+----------+
| User | Date     | Email   | Administrator | Comments |
+------+----------+---------+---------------+----------+
| a    | 20201719 | a@a.com | Yes           | 3        |
| b    | 20182817 | b@b.com | No            | 4        |
| c    | 20191618 | -       | No            | 4        |
| d    | 20190126 | -       | No            | 2        |
+------+----------+---------+---------------+----------+

Join two csv files if value is between interval in file 2

I have two csv files that I need to join; F1 has millions of lines, F2 has thousands of lines. I need to join these files if the position in file F1 (F1.pos) is between F2.start and F2.end. Is there any way to do this in bash? I already have code for this in Python (pandas and sqlite3) and I am looking for something quicker.
Table F1 looks like:
| name | pos |
|------ |------ |
| a | 1020 |
| b | 1200 |
| c | 1800 |
Table F2 looks like:
| interval_name | start | end |
|--------------- |------- |------ |
| int1 | 990 | 1090 |
| int2 | 1100 | 1150 |
| int3 | 500 | 2000 |
Result should look like:
| name | pos | interval_name | start | end |
|------ |------ |--------------- |------- |------ |
| a | 1020 | int1 | 990 | 1090 |
| a | 1020 | int3 | 500 | 2000 |
| b | 1200 | int1 | 990 | 1090 |
| b | 1200 | int3 | 500 | 2000 |
| c | 1800 | int3 | 500 | 2000 |
DISCLAIMER: Use dedicated/local tools if available, this is hacking:
There is an apparent error in your desired output: name b should not match int1.
$ tail -n+1 *.csv
==> f1.csv <==
name,pos
a,1020
b,1200
c,1800
==> f2.csv <==
interval_name,start,end
int1,990,1090
int2,1100,1150
int3,500,2000
$ awk -F, -vOFS=, '
BEGIN {
print "name,pos,interval_name,start,end"
PROCINFO["sorted_in"]="@ind_num_asc"
}
FNR==1 {next}
NR==FNR {Int[$1] = $2 "," $3; next}
{
for(i in Int) {
split(Int[i], I)
if($2 >= I[1] && $2 <= I[2]) print $0, i, Int[i]
}
}
' f2.csv f1.csv
Outputs:
name,pos,interval_name,start,end
a,1020,int1,990,1090
a,1020,int3,500,2000
b,1200,int3,500,2000
c,1800,int3,500,2000
This is not particularly efficient in any way; the only sorting used is to ensure that the Int array is parsed in the correct order, which changes if your sample data is not indicative of the actual schema. I would be very interested to know how my solution performs vs pandas.
Here's one in awk. It hashes the smaller file's records into arrays, and for each record of the bigger file it iterates through the hashes, so it is slow:
$ awk '
NR==FNR { # hash f2 records
start[NR]=$4
end[NR]=$6
data[NR]=substr($0,2)
next
}
FNR<=2 { # mind the front matter
print $0 data[FNR]
}
{ # check if in range and output
for(i in start)
if($4>start[i] && $4<end[i])
print $0 data[i]
}' f2 f1
Output:
| name | pos | interval_name | start | end |
|------ |------ |--------------- |------- |------ |
| a | 1020 | int1 | 990 | 1090 |
| a | 1020 | int3 | 500 | 2000 |
| b | 1200 | int3 | 500 | 2000 |
| c | 1800 | int3 | 500 | 2000 |
I doubt that a bash script would be faster than a python script. Just don't import the files into a database – write a custom join function instead!
The best way to join depends on your input data. If nearly all F1.pos are inside of nearly all intervals then a naive approach would be the fastest. The naive approach in bash would look like this:
#! /bin/bash
join --header -t, -j99 F1 F2 |
sed 's/^,//' |
awk -F, 'NR>1 && $2 >= $4 && $2 <= $5'
# NR>1 is only there to skip the column headers
However, this will be very slow if there are only a few intersections, for instance when the average F1.pos falls in only 5 intervals. In this case the following approach will be way faster. Implement it in a programming language of your choice – bash is not appropriate for this (a rough gawk sketch follows the sorting commands below):
Sort F1 by pos in ascending order.
Sort F2 by start and then by end in ascending order.
For each sorted file, keep a pointer to a line, starting at the first line.
Repeat until F1's pointer reaches the end:
For the current F1.pos advance F2's pointer until F1.pos ≥ F2.start.
Lock F2's pointer, but continue to read lines until F1.pos ≤ F2.end. Print the read lines in the output format name,pos,interval_name,start,end.
Advance F1's pointer by one line.
Only the sorting of the files could actually be faster in bash. Here is a script to sort both files (headers assumed to be already removed):
#! /bin/bash
sort -t, -k2,2n F1-without-headers > F1-sorted
sort -t, -k2,2n -k3,3n F2-without-headers > F2-sorted
Consider using LC_ALL=C, -S N%, and --parallel=N to speed up the sorting process.
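For reference, here is a minimal gawk sketch of the merge approach described above. It assumes the comma-separated, headerless F1-sorted and F2-sorted files produced by the script just shown; because pos only increases, intervals that already ended below the current pos can be skipped permanently:
gawk -F, -v OFS=, '
BEGIN { print "name,pos,interval_name,start,end" }
NR == FNR {                     # first input: F2-sorted (interval_name,start,end)
    iname[NR] = $1; s[NR] = $2; e[NR] = $3; m = NR
    next
}
{                               # second input: F1-sorted (name,pos), pos ascending
    pos = $2
    # drop leading intervals that can never match again (their end is below pos)
    while (lo < m && e[lo + 1] < pos) lo++
    # scan the remaining intervals whose start does not exceed pos
    for (i = lo + 1; i <= m && s[i] <= pos; i++)
        if (e[i] >= pos) print $1, pos, iname[i], s[i], e[i]
}' F2-sorted F1-sorted
For heavily overlapping intervals this still degrades toward the naive scan, but for mostly disjoint intervals each F2 row is only touched a few times; within each pos the matches come out ordered by F2.start.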

Check a Value and Tag according to that value and append in same row using shell

I have a file such as:
NUMBER|05-1-2016|05-2-2016|05-3-2016|05-4-2016|
0000000 | 0 | 225.993 | 0 | 324|
0003450 | 89| 225.993 | 0 | 324|
0005350 | 454 | 225.993 | 54 | 324|
In the example there are four dates in the header.
For each NUMBER row, I want to check the value under each date and tag it accordingly using shell:
for example, if the value is between 0 and 100, tag it 'L', and if it is greater than 100, tag it 'H'.
So the output should look like:
NUMBER|05-1-2016|05-2-2016|05-3-2016|05-4-2016|05-1-2016|05-2-2016|05-3-2016|05-4-2016|
0000000 | 0 | 225.993 | 0 | 324| L | H | L | H|
0003450 | 89| 225.993 | 0 | 324|L | H | L | H|
0005350 | 454 | 225.993 | 54 | 324|H | H | L | H|
A quick and dirty example that:
sets the input and output field separators (-F and OFS below) to |,
prints the header (the record with NR==1),
for all other records prints fields 1-5, and then the result of function lh for fields 2-5,
defines the function lh as one returning L for values < 100, and H for all others
Code:
awk -F \| '
BEGIN {OFS="|"}
NR==1 {print}
NR > 1 {print $1, $2, $3, $4, $5, lh($2), lh($3), lh($4), lh($5) }
function lh(val) { return (val < 100) ? "L" : "H"}
' file.txt
Alternative function lh:
function lh(val) {
result = "";
if (val < 100) {
result = "L";
} else {
result = "H";
}
return result;
}

Split a column into separate columns based on value

I have a tab delimited file that looks as follows:
cat myfile.txt
gives:
1 299
1 150
1 50
1 57
2 -45
2 62
3 515
3 215
3 -315
3 -35
3 3
3 6789
3 34
5 66
5 1334
5 123
I'd like to use Unix commands to get a tab-delimited file in which, based on the values in column #1, each column of the output file holds all the relevant values of column #2
(I'm using the separator "|" here for the example, instead of a tab, only to illustrate my desired output file):
299 | -45 | 515 | 66
150 | 62 | 215 | 1334
50 | | -315 |
57 | | -35 |
| | 3 |
The corresponding headers (1, 2, 3, 5; based on the column #1 values) would be a nice addition to the code (as shown below), but the main request is to split the information of the first file into separate columns. Thanks!
1 | 2 | 3 | 5
299 | -45 | 515 | 66
150 | 62 | 215 | 1334
50 | | -315 |
57 | | -35 |
| | 3 |
Here's a one-liner that matches your output. It builds a string $ARGS containing as many process substitutions as there are unique values in the first column. Then $ARGS is used as the argument list for the paste command:
HEADERS=$(cut -f 1 file.txt | sort -n | uniq); ARGS=""; for h in $HEADERS; do ARGS+=" <(grep ^"$h"$'\t' file.txt | cut -f 2)"; done; echo $HEADERS | tr ' ' '|'; eval "paste -d '|' $ARGS"
Output:
1|2|3|5
299|-45|515|66
150|62|215|1334
50||-315|
57||-35|
||3|
You can use gnu-awk
awk '
BEGIN{max=0;}
{
d[$1][length(d[$1])+1] = $2;
if(length(d[$1])>max)
max = length(d[$1]);
}
END{
PROCINFO["sorted_in"] = "@ind_num_asc";
line = "";
flag = 0;
for(j in d){
line = line (flag?"\t|\t":"") j;
flag = 1;
}
print line;
for(i=1; i<=max; ++i){
line = "";
flag = 0;
for(j in d){
line = line (flag?"\t|\t":"") d[j][i];
flag = 1;
}
print line;
}
}' file.txt
you get
1 | 2 | 3 | 5
299 | -45 | 515 | 66
150 | 62 | 215 | 1334
50 | | -315 |
57 | | -35 |
| | 3 |
Or you can use Python; for example, in split2Columns.py:
import sys
records = [line.split() for line in open(sys.argv[1])]
import collections
records_dict = collections.defaultdict(list)
for key, val in records:
records_dict[key].append(val)
from itertools import izip_longest
print "\t|\t".join(records_dict.keys())
print "\n".join(("\t|\t".join(map(str,l)) for l in izip_longest(*records_dict.values(), fillvalue="")))
python split2Columns.py file.txt
and you get the same result
@Jose Ricardo Bustos M. - thanks for your answer! Unfortunately I couldn't install gnu-awk on my Mac, but based on your suggestion I've done something similar using awk:
HEADERS=$(cut -f 1 try.txt | awk '!x[$0]++');
H=( ${HEADERS// / });
MAXUNIQNUM=$(cut -f 1 try.txt |uniq -c|awk '{print $1}'|sort -nr|head -1);
awk -v header="${H[*]}" -v max=$MAXUNIQNUM \
'BEGIN {
split(header,headerlist," ");
for (q = 1;q <= length(headerlist); q++)
{counter[q]=1;}
}
{for (z = 1; z <= length(headerlist); z++){
if (headerlist[z] == $1){
arr[counter[z],headerlist[z]] = $2;
counter[z]++
};
}
}
END {
for (x = 1; x <= max; x++){
for (y = 1; y<= length(headerlist); y++){
printf "%s\t",arr[x,headerlist[y]];
}
printf "\n"
}
}' try.txt
This is using an array to keep track of the column headings, using them to name temporary files and paste everything together in the end:
#!/bin/bash
infile=$1
filenames=()
idx=0
while read -r key value; do
if [[ "${filenames[$idx]}" != "$key" ]]; then
(( ++idx ))
filenames[$idx]="$key"
echo -e "$key\n----" > "$key"
fi
echo "$value" >> "$key"
done < "$1"
paste "${filenames[#]}"
rm "${filenames[#]}"
