Sed search&replace from CSV file inserts carriage return - bash

I have a file retimp_info.csv with two columns and ~500 rows like this:
rettag, retid
231,1
and a file mdb_ret_exp.csv with multiple rows and columns:
a,s,d,231,f,g
a,s,d,345,f,g
So the goal is to find and replace occurrences of the rettag with the corresponding retid from the first file. There are multiple rettags that need to be replaced inside mdb_ret_exp.csv. (I match with surrounding commas so the column is pinned down, in case that number occurs anywhere else I may not know about, i.e. in a different column.)
Here's what I tried:
while IFS="," read -r rettag retid; do
sed -i "s/,$rettag,/,$retid,/" mdb_ret_exp.csv
done < $HOME/retimp_info.csv
It almost works, but it adds an extra carriage return on every replacement:
a,s,d,1
,f,g
a,s,d,345,f,g
I expected it to still remain on one line:
a,s,d,1,f,g
a,s,d,345,f,g
How do I avoid the extra carriage return?

This is most likely caused by your retimp_info.csv having DOS/Windows style \r\n line endings. You could remove them from the file while reading:
cat "$HOME/retimp_info.csv" | tr -d '\r' | while IFS="," read -r rettag retid; do
sed -i "s/,$rettag,/,$retid,/" mdb_ret_exp.csv
done
or strip them from the file in advance with dos2unix or by opening the file in a text editor, choosing "Unix line endings" or equivalent option, and then saving it again.
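For example (the sed variant assumes GNU sed's -i):
dos2unix "$HOME/retimp_info.csv"
# or, without dos2unix installed:
sed -i 's/\r$//' "$HOME/retimp_info.csv"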

You're barking up the wrong tree. Just do this:
awk '
BEGIN { FS=OFS="," }
NR==FNR { map[$1] = $2; next }   # first file: build a rettag -> retid lookup table
{
    for (i=1; i<=NF; i++) {      # check every field, by exact string comparison
        if ($i in map) {
            $i = map[$i]
        }
    }
    print
}
' "$HOME/retimp_info.csv" mdb_ret_exp.csv
That will solve all of your current problems and the ones you may not have hit yet, but probably will, related to:
doing regexp instead of string comparisons (see the example below), and
the fact your current approach can't work for the first or last fields on each line, and
as written your sed loop could replace the replacements after making them
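To see the first issue concretely: sed treats the rettag as a regular expression, so a hypothetical tag like 2.1 (the . matching any character) would clobber 231 as well:
echo 'a,231,b' | sed 's/,2.1,/,9,/'    # prints a,9,b - the wrong tag matched
The awk version compares whole fields as strings, so this cannot happen.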
In addition to being far more robust, the awk approach will also be at least an order of magnitude faster than your current approach. See also why-is-using-a-shell-loop-to-process-text-considered-bad-practice.
Oh, and run dos2unix or similar on your input files first as they currently have Windows control-M line endings (use cat -v file to see them).
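For instance, with Windows line endings each carriage return shows up as a trailing ^M, so you would see something like:
cat -v retimp_info.csv
rettag, retid^M
231,1^M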

Update: used the following -
while IFS="," read -r rettag retid; do
sed -i "s/,$rettag,/,$retid,/g" mdb_ret_exp.csv
done < $home/retimp_info.csv
It worked fine, but after it replaces the proper value (which resides in the middle of the line/row) it inserts a carriage return, causing the following information to be moved to the next row.
i.e.:
a,s,d,231,f,g
now is -
a,s,d,1
,f,g
Need ,f,g to remain on the same line...

Related

Unix bash - using cut to regex lines in a file, match regex result with another similar line

I have a text file, file.txt, with several thousand lines. It contains a lot of junk lines which I am not interested in, so I use the cut command to regex for the lines I am interested in first. Each entry I am interested in is listed twice in the text file: once in a "definition" section, and once in a "value" section. I want to retrieve the first value from the "definition" section, and then for each entry found there find its corresponding "value" section entry.
The first entry starts with ' gl_ ', while the 2nd entry would look like ' "gl_ ', starting with a '"'.
This is the code I have so far for looping through the text document, which then retrieves the values I am interested in and appends them to a .csv file:
while read -r line
do
    if [[ $line == gl_* ]] ; then (param=$(cut -d'\' -f 1 $line) | def=$(cut -d'\' -f 2 $line) | type=$(cut -d'\' -f 4 $line) | prompt=$(cut -d'\' -f 8 $line))
    while read -r glline
    do
        if [[ $glline == '"'$param* ]] ; then val=$(cut -d'\' -f 3 $glline) |
        "$project";"$param";"$val";"$def";"$type";"$prompt" >> /filepath/file.csv
    done < file.txt
done < file.txt
This seems to throw some syntax errors related to unexpected tokens near the first 'done' statement.
Example of text that needs to be parsed, and paired:
gl_one\User Defined\1\String\1\\1\Some Text
gl_two\User Defined\1\String\1\\1\Some Text also
gl_three\User Defined\1\Time\1\\1\Datetime now
some\junk
"gl_one\1\Value1
some\junk
"gl_two\1\Value2
"gl_three\1\Value3
So effectively, the while loop reads each line until it hits the first line that starts with 'gl_', which then stores that value (i.e. gl_one) as a variable 'param'.
It then starts the nested while loop that looks for the line that starts with a '"' in front of the gl_, and is equivalent to the 'param' value. In other words, the script should couple the lines gl_one and "gl_one, gl_two and "gl_two, gl_three and "gl_three.
The text file is large, and these are settings that have been defined this way. I need to collect the values for each gl_ parameter, to save them together in a .csv file with their corresponding "gl_ values.
Wanted regex output stored in variables would be something like this:
first while loop:
$param = gl_one, $def = User Defined, $type = String, $prompt = Some Text
second while loop:
$val = Value1
Then it stores these variables to the file.csv, with semi-colon separators.
Currently, I have an error for the first 'done' statement, which seems to indicate an issue with the quotation marks. Apart from this, I am looking for general ideas and comments on the script; e.g. I am not entirely sure I am matching the quotation-mark parameters "gl_ correctly, or whether the semi-colons as .csv separators are added correctly.
Edit: Overall, the script runs now, but extremely slowly due to the inner while loop. Is there any faster way to match the two lines together and add them to the .csv file?
Any ideas and comments?
This will generate a file containing the data you want:
cat file.txt | grep gl_ | sed -E "s/\"//" | sort | sed '$!N;s/\n/\\/' | awk -F'\' '{print $1"; "$5"; "$7"; "$NF}' > /filepath/file.csv
It uses grep to extract all lines containing 'gl_'
then sed to remove the leading '"' from the lines that contain one [I have assumed there are no further '"' in the line]
The lines are sorted
sed removes the return from each pair of lines
awk then prints the required columns according to your requirements
Output routed to the file.
LANG=C sort -t\\ -sd -k1,1 <file.txt |\
sed '
/^gl_/{ # if definition
N; # append next line to buffer
s/\n"gl_[^\\]*//; # if value, strip first column
t; # and start next loop
}
D; # otherwise, delete the line
' |\
awk -F\\ -v p="$project" -v OFS=\; '{print p,$1,$10,$2,$4,$8 }' \
>>/filepath/file.csv
sort lines so gl_... appears immediately before "gl_... (LANG=C pins down the collation order) - assumes definition appears before value
sed to help ensure matching definition and value (may still fail if duplicate/missing value), and tidy for awk
awk to pull out relevant fields
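With the sample data above and $project empty, the first row this emits should look something like:
;gl_one;Value1;User Defined;String;Some Text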

Convert multi-line csv to single line using Linux tools

I have a .csv file that contains double quoted multi-line fields. I need to convert the multi-line cells to a single line. It doesn't show in the sample data, but I do not know which fields might be multi-line, so any solution will need to check every field. I do know how many columns I'll have. The first line will also need to be skipped. I don't know how much data there will be, so performance isn't a consideration.
I need something that I can run from a bash script on Linux. Preferably using tools such as awk or sed and not actual programming languages.
The data will be processed further with Logstash but it doesn't handle double quoted multi-line fields hence the need to do some pre-processing.
I tried something like this and it kind of works on one row but fails on multiple rows.
sed -e :0 -e '/,.*,.*,.*,.*,/b' -e N -e '1n;N;N;N;s/\n/ /g' -e b0 file.csv
CSV example
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
The output I want is
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
Jane,Doe,Country City Street,67890
etc.
etc.
First my apologies for getting here 7 months late...
I came across a problem similar to yours today, with multiple fields with multi-line types. I was glad to find your question but at least for my case I have the complexity that, as more than one field is conflicting, quotes might open, close and open again on the same line... anyway, reading a lot and combining answers from different posts I came up with something like this:
First I count the quotes in a line; to do that, I take out everything but the quotes and then use wc:
quotes=`echo $line | tr -cd '"' | wc -c` # Counts the quotes
If you think of a single multi-line field, knowing if the quotes are 1 or 2 is enough. In a more generic scenario like mine I have to know if the number of quotes is odd or even to know if the line completes the record or expects more information.
To check for even or odd you can use the modulo operator (%); in general:
even % 2 = 0
odd % 2 = 1
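For example, counting the quotes on the line John,Doe,"Country from the sample:
echo 'John,Doe,"Country' | tr -cd '"' | wc -c    # prints 1, i.e. odd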
For the first line:
Odd means that the line expects more information on the next line.
Even means the line is complete.
For the subsequent lines, I have to know the status of the previous one. For instance, in your sample text:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
You can say line 1 (John,Doe,"Country) has 1 quote (odd), which means the status of the record is incomplete or open.
When you go to line 2, there is no quote (even). Nevertheless, this does not mean the record is complete; you have to consider the previous status... so for the lines following the first one it will be:
Odd means that record status toggles (incomplete to complete).
Even means that record status remains as the previous line.
What I did was looping line by line while carrying the status of the last line to the next one:
incomplete=0
cat file.csv | while read line; do
    quotes=`echo $line | tr -cd '"' | wc -c` # Counts the quotes
    incomplete=$((($quotes+$incomplete)%2)) # Check if Odd or Even to decide status
    if [ $incomplete -eq 1 ]; then
        echo -n "$line " >> new.csv # If line is incomplete join with next
    else
        echo "$line" >> new.csv # If line completes the record finish
    fi
done
Once this is executed on a file in your format, it generates a new.csv like this:
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345
I like one-liners as much as anyone; I wrote that script just for the sake of clarity. You can - arguably - write it in one line like:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
I would appreciate it if you could go back to your example and see if this works for your case (which you most likely already solved). Hopefully this can still help someone else down the road...
Recovering the multi-line fields
Every need is different, in my case I wanted the records in one line to further process the csv to add some bash-extracted data, but I would like to keep the csv as it was. To accomplish that, instead of joining the lines with a space I used a code - likely unique - that I could then search and replace:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l ~newline~ " || echo "$l";done >new.csv
The code is ~newline~; this is totally arbitrary, of course.
Then, after doing my processing, I took the csv text file and replaced the coded newlines with real newlines:
sed -i 's/ ~newline~ /\n/g' new.csv
References:
Ternary operator: https://stackoverflow.com/a/3953666/6316852
Count char occurrences: https://stackoverflow.com/a/41119233/6316852
Other peculiar cases: https://www.linuxquestions.org/questions/programming-9/complex-bash-string-substitution-of-csv-file-with-multiline-data-937179/
TL;DR
Run this:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
... and collect results in new.csv
I hope it helps!
If Perl is your option, please try the following:
perl -e '
while (<>) {
    $str .= $_;                         # slurp the whole input into one string
}
while ($str =~ /("(("")|[^"])*")|((^|(?<=,))[^,]*((?=,)|$))/g) {
    if (($el = $&) =~ /^".*"$/s) {      # a double-quoted field
        $el =~ s/^"//s; $el =~ s/"$//s; # strip the enclosing quotes
        $el =~ s/""/"/g;                # unescape doubled quotes
        $el =~ s/\s+(?!$)/ /g;          # squeeze inner whitespace runs (incl. newlines) to one space
    }
    push(@ary, $el);
}
foreach (@ary) {
    print /\n$/ ? "$_" : "$_,";
}' sample.csv
sample.csv:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
John,Doe,"Country
City
Street",67890
Result:
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
John,Doe,Country City Street,67890
This might work for you (GNU sed):
sed ':a;s/[^,]\+/&/4;tb;N;ba;:b;s/\n\+/ /g;s/"//g' file
Test each line to see that it contains the correct number of fields (in the example that was 4). If there are not enough fields, append the next line and repeat the test. Otherwise, replace the newline(s) by spaces and finally remove the "'s.
N.B. This may be fraught with problems such as ,'s between "'s and quoted "'s.
Try cat -v file.csv. When the file was made with Excel, you might have some luck: When the newlines in a field are a simple \n and the newline at the end is a \r\n (which will look like ^M), parsing is simple.
# delete all newlines and replace the ^M with a new newline.
tr -d "\n" < file.csv| tr "\r" "\n"
# Above two steps with one command
tr "\n\r" " \n" < file.csv
When you want a space between the joined lines, you need an additional step.
tr "\n\r" " \n" < file.csv | sed '2,$ s/^ //'
EDIT: @sjaak commented this didn't work in his case.
When your broken lines also have ^M you still can be a lucky (wo-)man.
When your broken field is always the first field in double quotes and you have GNU sed 4.2.2, you can join 2 lines when the first line has exactly one double quote.
sed -rz ':a;s/(\n|^)([^"]*)"([^"]*)\n/\1\2"\3 /;ta' file.csv
Explanation:
-z don't use \n as line endings
:a label for repeating the step after successful replacement
(\n|^) Match just after a newline, or at the very start of the input
([^"]*) Substring without a "
ta Go back to label a and repeat
awk pattern matching works for this.
Answer in one line:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile
If you'd like to drop the quotes, you could use:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile | sed 's/"//gw NewFile'
but I prefer to keep them.
To explain the code:
/Pattern/ : find the pattern in the current line.
ORS : the output record separator, printed after each record.
$0 : the whole of the current line.
's/OldPattern/NewPattern/' : substitute the first OldPattern with NewPattern
/g : does the previous action for all occurrences of OldPattern
/w : write the result to NewFile
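For instance, on the four-line sample from the question, the one-liner should produce:
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345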

'sed' replace last pattern and delete the other patterns

I want to replace only the last occurrence of the string "delay" by "ens_delay" in my file, and delete the other ones before the last one:
Input file:
alpha_notify_teta=''
alpha_notify_check='YES'
text='CRDS'
delay=''
delay=''
delay=''
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
alpha_orange='YES'
alpha_orange_interval='300'
alpha_notification_level='ALL'
expression='YES'
delay='9'
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
Output file: (expected value)
alpha_notify_teta=''
alpha_notify_check='YES'
text='CRDS'
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
alpha_orange='YES'
alpha_orange_interval='300'
alpha_notification_level='ALL'
expression='YES'
ens_delay='9'
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
Here is my first command, but it doesn't work: it only works if delay is on the last line.
sed -e '$,/delay/ s/delay/ens_delay/'
My second command deletes all lines containing "delay"; even "ens_delay" would be deleted.
sed -i '/delay/d'
Thank you
This might work for you (GNU sed):
sed '/^delay=/,$!b;/^delay=/!H;//{x;s/^[^\n]*\n\?//;/./p;x;h};$!d;x;s/^/ens_/' file
Lines before the first line beginning delay= should be printed as normal. Otherwise, a line beginning delay= is stored in the hold space and subsequent lines that do not begin delay= are appended to it. Should the hold space already contain such lines, the first line is deleted and the remaining lines printed before the hold space is replaced by the current line. At the end of the file, the first line of the hold space is amended to prepend the string ens_ and then the whole of the hold space is printed.
You cannot do this kind of thing with sed. There is no way in sed to "look forward" and tell if there are more matches to the pattern. You can kind of look back, but that won't be sufficient to solve this problem.
This perl script will solve it:
#!/usr/bin/perl
use strict;
use warnings;
my ($seek, $replacement, $last, @new) = (shift, shift, 0);
open(my $fh, shift) or die $!;
my @l = <$fh>;
close($fh) or die $!;
foreach (reverse @l){
    if(/$seek/){
        if ($last++ == 0){
            s/$seek/$replacement/;
        } else {
            next;
        }
    }
    unshift(@new, $_);
}
print join "", @new;
Call like:
./script delay= ens_delay= inputfile
I chose to entirely eliminate the lines you intended to delete, rather than collapse them into a single blank line. If that is really required then it's a bit more complicated: the first such line in any consecutive set (or rather the last such) must be pushed onto the output list, and you have to track whether this has just been done so you know whether to push the next time, too.
You could also solve this problem with awk, python, or any number of other languages. Just not sed.
Have this monster:
sed -e "1,$(expr $(sed -n '/^delay=/=' your_file.txt | tail -1) - 1)"'s/^delay=.*$//' \
-e 's/^delay=/ens_delay=/' your_file.txt
Here:
sed -n '/^delay=/=' your_file.txt | tail -1 returns the line number of the last occurrence of the pattern (let's name it X)
expr is used to compute X-1
"1,X-1"'[command]' means "perform this command betwen the first and the X-1 line included (I used double quotes to let the expansion getting done)
's/^delay=.*$//' the said [command]
-e 's/^delay=/ens_delay=/' the next expression to perform (will occur only on the last line)
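With the input file shown in the question, the inner pipeline evaluates to 14 (the line number of delay='9'), so the command that effectively runs is:
sed -e "1,13"'s/^delay=.*$//' -e 's/^delay=/ens_delay=/' your_file.txt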
Output:
alpha_notify_teta=''
alpha_notify_check='YES'
text='CRDS'
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
alpha_hsm_backup_notification='YES'
alpha_orange='YES'
alpha_orange_interval='300'
alpha_notification_level='ALL'
expression='YES'
ens_delay='9'
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
alpha_hsm_backup_notification='YES'
If you want to delete the lines instead of leaving them blank:
sed -e "1,$(expr $(sed -n '/^delay=/=' your_file.txt | tail -1) - 1)"'{/^delay=.*$/d}' \
-e 's/^delay=/ens_delay=/' your_file.txt
As was mentioned elsewhere, sed can't know which occurrence of a substring is the last one. But awk can keep track of things in arrays. For example, the following will delete all duplicate assignments, as well as making your substitution:
awk 'BEGIN{FS=OFS="="} $1=="delay"{$1="ens_delay"} !($1 in a){o[++i]=$1} {a[$1]=$0} END{for(x=0;x<i;x++) printf "%s\n",a[o[x]]}' inputfile
Or, broken out for easier reading/comments:
BEGIN {
    FS=OFS="="                 # set the field separator, to help isolate the left hand side
}
$1=="delay" {
    $1="ens_delay"             # your field substitution
}
!($1 in a) {
    o[++i]=$1                  # if we haven't seen this variable, record its position
}
{
    a[$1]=$0                   # record the value of the last-seen occurrence of this variable
}
END {
    for (x=0;x<i;x++)          # step through the array,
        printf "%s\n",a[o[x]]  # printing the last-seen values, in the order
}                              # their variable was first seen in the input file.
You might not care about the order of the variables. If so, the following might be simpler:
awk 'BEGIN{FS=OFS="="} $1=="delay"{$1="ens_delay"} {o[$1]=$0} END{for(i in o) printf "%s\n", o[i]}' inputfile
This simply stores the last-seen line in an array whose key is the variable name, then prints out the content of the array in an unknown order.
Assuming I understand your specifications properly, this should do what you need. Given infile x,
$: last=$( grep -n delay x|tail -1|sed 's/:.*//' )
This greps the file for all lines containing delay and returns them with the line number prepended, separated by a colon. The tail -1 grabs the last of those lines, ignoring all the others. sed 's/:.*//' strips the colon and the actual line content, leaving only the number (here it was 14).
That all evaluates out to assign 14 as $last.
$: sed '/delay/ { '$last'!d; '$last' s/delay/ens_delay/; }' x
alpha_notify_teta=''
alpha_notify_check='YES'
text='CRDS'
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
alpha_orange='YES'
alpha_orange_interval='300'
alpha_notification_level='ALL'
expression='YES'
ens_delay='9'
textfileooooop=''
alpha_enable='YES'
alpha_hostnames=''
Apologies for the ugly catenation. What this does is writes the script using the value of $last so that the result looks like this to sed:
$: sed '/delay/ { 14!d; 14 s/delay/ens_delay/; }' x
sed reads leading numbers as line selectors, so here is what this script of commands does -
First, sed automatically prints lines unless told not to, so by default it would just print every line. The script modifies that.
/delay/ {...} is a pattern-based record selector. It will apply the commands between the {} to all lines that match /delay/, which is why it doesn't need another grep - it handles that itself. Inside the curlies, the script does two things.
First, 14!d says: unless the line number is 14, delete the record (the ! negates the address, so d applies to every matching line except line 14). Since all the other lines with delay won't be line 14 (or whatever line number the earlier command found for the last one), those get deleted, which automatically restarts the cycle and reads the next record.
Second, if the line number is 14, then it won't delete, and so will progress to the s/delay/ens_delay/ which updates your value.
For all lines that don't match /delay/, sed just prints them as-is.

Find Replace using Values in another File

I have a directory of files, myFiles/, and a text file values.txt in which one column is a set of values to find, and the second column is the corresponding replace value.
The goal is to replace all instances of find values (first column of values.txt) with the corresponding replace values (second column of values.txt) in all of the files located in myFiles/.
For example...
values.txt:
Hello Goodbye
Happy Sad
Running the command would replace all instances of "Hello" with "Goodbye" in every file in myFiles/, as well as replace every instance of "Happy" with "Sad" in every file in myFiles/.
I've made as many attempts at using awk/sed and so on as seemed logical, but have failed to produce a command that performs the desired action.
Any guidance is appreciated. Thank you!
Read each line from values.txt
Split that line into 2 words
Use sed for each line to replace the 1st word with the 2nd word in all files in the myFiles/ directory
Note: I've used bash parameter expansion to split the line (${line% *} etc.), assuming values.txt is a space separated, 2 columnar file. If that's not the case, you may use awk or cut to split the line.
while read -r line;do
    sed -i "s/${line% *}/${line#* }/g" myFiles/* # '-i' edits files in place and 'g' replaces all occurrences; the first word is the pattern, the second the replacement
done < values.txt
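To see what the two expansions extract, given a line like Hello Goodbye:
line='Hello Goodbye'
echo "${line% *}"    # Hello   - everything before the last space (the value to find)
echo "${line#* }"    # Goodbye - everything after the first space (the replacement)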
You can do what you want with awk.
#! /usr/bin/awk -f
# snarf in first file, values.txt
FNR == NR {
    subs[$1] = $2
    next
}
# apply replacements to subsequent files
{
    for( old in subs ) {
        # index(haystack, needle); assumes a replacement never contains its own search string
        while( start = index($0, old) ) {
            len = length(old)
            $0 = substr($0, 1, start - 1) subs[old] substr($0, start + len)
        }
    }
    print
}
When you invoke it, put values.txt as the first file to be processed.
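A hypothetical invocation (replace.awk stands for the script above; it prints the rewritten content to stdout, so redirect it per file):
awk -f replace.awk values.txt myFiles/somefile.txt > somefile.new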
Option One:
create a python script
with open('filename', 'r') as infile, etc., read the values.txt file into a python dict with 'from' as key and 'to' as value; close the infile.
iterate over the files in the wanted directory (os.listdir), and for each either popen sed 's/from/to/g' or read in the file, iterating over all the lines and doing the find/replace on each.
Option Two:
bash script
read in a from/to pair
invoke
perl -p -i -e 's/from/to/g' dirname/*.txt
done
The second is probably easier to write, but offers less exception handling.
It's called 'Perl PIE' and it's a relatively famous hack for doing find/replace in lots of files at once.
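A minimal sketch of Option Two as a bash loop (assuming values.txt is space separated; \Q...\E keeps regex metacharacters in the search column from misfiring, and the replacement column is assumed to contain no / or $):
while read -r from to; do
    perl -p -i -e "s/\Q$from\E/$to/g" myFiles/*
done < values.txt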

convert multiple lines between patterns to a comma separated string

I need help in processing data from STDIN (data is taken from another file with 'tail -f' plus grepped to filter out garbage). There are several lines between patterns:
<DN> 589</DN>
<DD>03.12.2014</DD>
<ST> </ST>
<STC>0</STC>
<STT>0</STT>
<PU>5</PU>
<OT>01</OT>
<DSN></DSN>
<NRA>40807,40820,426,30231,40818,30230</NRA>
<GR>300 000-00
&#10</GR>
then next block with DN/GR starts
I need to convert the lines between <DN> and </GR> to a single line, comma-separated:
<DN> 589</DN>,<DD>03.12.2014</DD>,<ST> </ST>,<STC>0</STC>,<STT>0</STT>,<PU>5</PU>,<OT>01</OT>,<DSN></DSN>,<NRA>40807,40820,426,30231,40818,30230</NRA>,<GR>300 000-00
&#10</GR>
I need a one-liner with awk or sed or perl to do it and put the result on STDOUT.
I've tried to do it, but failed due to lack of experience. Also tried to google and didn't find a working solution.
whatever..| awk '{sub(/^\s*/,"");printf "%s%s",$0,(/\/GR>\s*$/?"\n":",")}'
This line does the following:
remove the leading spaces from each line
join all lines with the separator , until the block end /GR>
if you have x data blocks, it gives you x long lines.
sed -nr '/<DN>/,/<GR>/{ H; /<GR>/{ g; s%\n%,%g; s%^,%%; p; s%.*%%; h }; }' <<'EOSEQ'
<DN> 589</DN>
<DD>03.12.2014</DD>
<STC>0</STC>
<GR>300 000-00
&#10</GR>
<DN>900</DN>
<DD>20.11.2014</DD>
<OT>01</OT>
<NRA>40807,40820,426,30231,40818,30230</NRA>
<GR>300 000-00
&#10</GR>
EOSEQ
SED one-liner, as you wish :)
Using awk you could do the following:
awk '{printf ("%s,", $NF)}' test.txt ##Will have comma at the end which may/may not be ok for you.
You can use the following in sed:
sed -r ':loop;N;s/(.*)\n(.*)/\1,\2/;t loop' file_name
