I'm reading a text file line by line in a bash script. The file is a tab-separated CSV; however, when I try to cut the line I read, it does not work. It seems like the \t is converted to a blank space somewhere.
The code below is not my final version; I have not yet implemented the actual workload, since I first need the data to be read reliably.
for (( currlineno=2 ; $currlineno <= $maxlines ; currlineno++ )); do
currline=$(sed -n "$currlineno"p "$IMPORT_TABLE".csv )
echo $currline |cut -f2
done
Now, when I change the two lines as below, it works:
for (( currlineno=2 ; $currlineno <= $maxlines ; currlineno++ )); do
currline=$(sed -n "$currlineno"p "$IMPORT_TABLE".csv |tr '\t' ';')
echo $currline |cut -f2 -d ';'
done
But I cannot do it like that, as my text file also contains ';', ',' and '.' in the fields. Tab is the only acceptable delimiter for me, as my fields will never contain it.
That's because you don't double-quote your variable:
tabbed=$'a\tb'
echo $tabbed : "$tabbed"
When bash sees the variable outside of quotes, it applies word splitting on its contents, and echo just outputs its parameters separated by spaces. Double quotes make the value one parameter, even if it contains whitespace, newlines, etc.
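Applied to the loop from the question, the fix is simply to double-quote the expansion (a minimal sketch reusing the question's variables):

for (( currlineno=2 ; currlineno <= maxlines ; currlineno++ )); do
    currline=$(sed -n "${currlineno}p" "$IMPORT_TABLE".csv)
    echo "$currline" | cut -f2    # double quotes preserve the tab characters
done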
Hello everyone, I'm a beginner in shell scripting. On a daily basis I need to convert a file's data to another format; I usually do it manually with a text editor, but I often make mistakes, so I decided to write a simple script that can do the work for me.
The file's content looks like this:
/release201209
a1,a2,"a3",a4,a5
b1,b2,"b3",b4,b5
c1,c2,"c3",c4,c5
to this:
a2>a3
b2>b3
c2>c3
The script should ignore the first line and print the second and third values separated by '>'
I'm halfway there; here is my code:
#!/bin/bash
#while Loops
i=1
while IFS=\" read t1 t2 t3
do
test $i -eq 1 && ((i=i+1)) && continue
echo $t1|cut -d\, -f2 | { tr -d '\n'; echo \>$t2; }
done < $1
The problem with my code is that the last line isn't printed unless the file ends with an empty line (\n).
I also want the output to go into a new CSV file (I tried redirecting standard output to my new file, but only the last echo is printed there).
Can someone please help me out? Thanks in advance.
Rather than treating the double quotes as a field separator, it seems cleaner to just delete them (assuming that is valid), e.g.:
$ < input tr -d '"' | awk 'NR>1{print $2,$3}' FS=, OFS=\>
a2>a3
b2>b3
c2>c3
If you cannot simply strip the quotes as in your sample input because those quotes are escaping commas, you could hack together a solution, but you would be better off using a proper CSV parsing tool (e.g. Perl's Text::CSV).
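For reference, a minimal Text::CSV sketch (assuming the module is installed; untested against your real data):

perl -MText::CSV -e '
    my $csv = Text::CSV->new({ binary => 1 });
    <STDIN>;                                # skip the first line
    while (my $row = $csv->getline(\*STDIN)) {
        print "$row->[1]>$row->[2]\n";      # second and third fields
    }
' < input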
Here's a simple pipeline that will do the trick:
sed '1d' data.txt | cut -d, -f2-3 | tr -d '"' | tr ',' '>'
Here, we're just removing the first line (as desired), selecting fields 2 & 3 (based on a comma field separator), removing the double quotes and mapping the remaining , to >.
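Run against the sample input above, that pipeline produces:
a2>a3
b2>b3
c2>c3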
Use this Perl one-liner:
perl -F',' -lane 'next if $. == 1; print join ">", map { tr/"//d; $_ } @F[1,2]' in_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
-F',' : Split into #F on comma, rather than on whitespace.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
I have a text file, file.txt, with several thousand lines. It contains a lot of junk lines which I am not interested in, so I first filter for the lines I am interested in. Each entry I care about is listed twice in the text file: once in a "definition" section and once in a "value" section. I want to retrieve the first value from the "definition" section, and then for each entry found there, find its corresponding entry in the "value" section.
The first entry starts with 'gl_', while the second entry looks like '"gl_', i.e. it starts with a '"'.
This is the code I have so far for looping through the text document, which then retrieves the values I am interested in and appends them to a .csv file:
while read -r line
do
if [[ $line == gl_* ]] ; then (param=$(cut -d'\' -f 1 $line) | def=$(cut -d'\' -f 2 $line) | type=$(cut -d'\' -f 4 $line) | prompt=$(cut -d'\' -f 8 $line))
while read -r glline
do
if [[ $glline == '"'$param* ]] ; then val=$(cut -d'\' -f 3 $glline) |
"$project";"$param";"$val";"$def";"$type";"$prompt" >> /filepath/file.csv
done < file.txt
done < file.txt
This seems to throw some syntax errors related to unexpected tokens near the first 'done' statement.
Example of text that needs to be parsed, and paired:
gl_one\User Defined\1\String\1\\1\Some Text
gl_two\User Defined\1\String\1\\1\Some Text also
gl_three\User Defined\1\Time\1\\1\Datetime now
some\junk
"gl_one\1\Value1
some\junk
"gl_two\1\Value2
"gl_three\1\Value3
So effectively, the while loop reads each line until it hits the first line that starts with 'gl_', and stores that value (i.e. gl_one) in the variable 'param'.
It then starts the nested while loop that looks for the line starting with a '"' in front of the gl_, matching the 'param' value. In other words, the script should couple the lines gl_one and "gl_one, gl_two and "gl_two, gl_three and "gl_three.
The text file is large, and these are settings that have been defined this way. I need to collect the values for each gl_ parameter, to save them together in a .csv file with their corresponding "gl_ values.
The desired output, stored in variables, would be something like this:
first while loop:
$param = gl_one, $def = User Defined, $type = String, $prompt = Some Text
second while loop:
$val = Value1
Then it stores these variables to the file.csv, with semi-colon separators.
Currently, I get an error at the first 'done' statement, which seems to indicate an issue with the quotation marks. Apart from this, I am looking for general ideas and comments on the script, e.g. whether I am matching the quotation-mark parameters ("gl_) correctly, and whether the semicolons are added correctly as .csv separators.
Edit: Overall, the script runs now, but extremely slow due to the inner while loop. Is there any faster way to match the two lines together and add them to the .csv file?
Any ideas and comments?
This will generate a file containing the data you want:
cat file.txt | grep gl_ | sed -E "s/\"//" | sort | sed '$!N;s/\n/\\/' | awk -F'\\' '{print $1"; "$5"; "$7"; "$NF}' > /filepath/file.csv
It uses grep to extract all lines containing 'gl_',
then sed to remove the leading '"' from the lines that contain one (I have assumed there are no further '"' in the line).
The lines are sorted,
sed removes the newline from each pair of lines,
and awk then prints the required columns according to your requirements.
The output is routed to the file.
LANG=C sort -t\\ -sd -k1,1 <file.txt |\
sed '
/^gl_/{ # if definition
N; # append next line to buffer
s/\n"gl_[^\\]*//; # if value, strip first column
t; # and start next loop
}
D; # otherwise, delete the line
' |\
awk -F'\\' -v p="$project" -v OFS=\; '{print p,$1,$10,$2,$4,$8 }' \
>>/filepath/file.csv
sort the lines so gl_... appears immediately before "gl_... (LANG=C fixes the collation order, LC_COLLATE); this assumes each definition appears before its value
sed helps ensure matching of definition and value (it may still fail if a value is duplicated or missing), and tidies up for awk
awk to pull out relevant fields
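With the sample data above (and $project set to, say, myproj; this is my trace of the pipeline, worth verifying on real data), the first record written to file.csv would be:

myproj;gl_one;Value1;User Defined;String;Some Text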
I have a string as follows:
string='[ "file1.py", "file2.py", "file3.py", "file4.py" ]'
How do I convert it to an iterable list of individual filenames so that I can run a for loop over it, as follows:
for filen in fileArr
do
echo $filen
done
Expected output:
file1.py
file2.py
file3.py
file4.py
So far I have just removed the first and last square brackets using string=${string:1:${#string}-2}, but I still have quotes and commas to remove. Is there a clean and simple way to achieve this?
You can use the tr command (more information about the tr command here) to eliminate the opening bracket, the closing bracket, and the double quotes. Also, you can substitute a tab for each comma so you can later iterate over the result.
Here is the code:
$ s='[ "file1.py", "file2.py", "file3.py", "file4.py" ]'
$ s2=$(echo $s | tr "[" " " | tr "]" " " | tr "\"" " " | tr "," "\\t")
$ for x in $s2; do echo $x ; done
file1.py
file2.py
file3.py
file4.py
In more detail, the s2 assignment uses:
s2=$( --> Assigns the output of the full command to s2
echo $s --> Prints the content of s into the pipe
| tr "[" " " --> Replaces the opening bracket with a space
| tr "]" " " --> Replaces the closing bracket with a space
| tr "\"" " " --> Replaces the double quotes with spaces
| tr "," "\\t" --> Replaces each comma with a tab
)
Notice that this code is valid for the kind of input you provided, but it will not work if the filenames contain spaces.
EDIT:
Another solution is possible using substring replacement instead of the tr command.
Here is the code:
$ s2=${s//\[/} # Erase the start brackets
$ s3=${s2//\]/} # Erase the end brackets
$ s4=${s3//\"/} # Erase the double quotations
$ s5=${s4//,/ } # Replace each comma with a space
$ for x in $s5; do echo $x ; done
file1.py
file2.py
file3.py
file4.py
As with the previous solution, notice that the code will not work if the filenames contain spaces (they would be treated as separate entries).
EDIT 2:
As @Z4-tier pointed out, the first option can be rewritten more compactly using the -d option of tr, which deletes the given characters. Also, the resulting string might not iterate as expected if the Internal Field Separator (IFS) is set incorrectly. Although I think the previous solutions cover most cases, you might consider setting and later restoring the IFS value if it was set to something other than the default.
Hence, you could write:
$ s='[ "file1.py", "file2.py", "file3.py", "file4.py" ]'
$ s2=$(echo $s | tr -d "[]\"" | tr "," "\\t")
$ IFS=$' '
$ for x in $s2; do echo $x ; done
file1.py
file2.py
file3.py
file4.py
$ unset IFS # restore the default word-splitting behaviour (or restore your saved IFS value)
tr <string1> <string2> will replace any occurrence of a character in string1 with the character at the same index position in string2, so this can be done with one pipe.
tr can also be used as tr -d <string1>, where any occurrence of a character in string1 is deleted.
s='[ "file1.py", "file2.py", "file3.py", "file4.py" ]'
s2=$(echo $s | tr -d '[]",')
This is not iterable like you might think. It is still one string:
bash-$ echo "${s2[0]}"
file1.py file2.py file3.py file4.py
Try this:
IFS=$' '
bash-$ my_array=($(echo $s | tr -d '[]",'))
bash-$ echo "${my_array[0]}"
file1.py
bash-$ for k in "${my_array[@]}"; do echo $k; done
file1.py
file2.py
file3.py
file4.py
This sets IFS (the internal field separator) to a space character to take advantage of the spaces that are left in the string after it is run through tr. It uses a subshell to translate s just as we built s2 above (now named my_array to indicate that it's not a string), and surrounds the command with (...) to create an indexed array (this is where IFS matters).
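A variant that avoids changing the global IFS scopes it to the read builtin instead; a minimal sketch, under the same assumption that the filenames contain no spaces:

IFS=' ' read -ra my_array <<< "$(tr -d '[]",' <<< "$s")"
for k in "${my_array[@]}"; do echo "$k"; done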
I have a .csv file that contains double-quoted multi-line fields. I need to convert each multi-line cell to a single line. It doesn't show in the sample data, but I do not know which fields might be multi-line, so any solution will need to check every field. I do know how many columns I'll have. The first line will also need to be skipped. I don't know how much data there will be, so performance isn't a consideration.
I need something that I can run from a bash script on Linux. Preferably using tools such as awk or sed and not actual programming languages.
The data will be processed further with Logstash but it doesn't handle double quoted multi-line fields hence the need to do some pre-processing.
I tried something like this and it kind of works on one row but fails on multiple rows.
sed -e :0 -e '/,.*,.*,.*,.*,/b' -e N -e '1n;N;N;N;s/\n/ /g' -e b0 file.csv
CSV example
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
The output I want is
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
Jane,Doe,Country City Street,67890
etc.
etc.
First my apologies for getting here 7 months late...
I came across a problem similar to yours today, with multiple multi-line fields. I was glad to find your question, but at least in my case there is the added complexity that, as more than one field is involved, quotes might open, close and open again on the same line. Anyway, after reading a lot and combining answers from different posts, I came up with something like this:
First I count the quotes in a line. To do that, I strip out everything but the quotes and then use wc:
quotes=`echo $line | tr -cd '"' | wc -c` # Counts the quotes
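For instance, on the line John,Doe,"Country from the question's sample:

$ line='John,Doe,"Country'
$ echo $line | tr -cd '"' | wc -c
1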
If you think of a single multi-line field, knowing whether there are 1 or 2 quotes is enough. In a more generic scenario like mine, I have to know whether the number of quotes is odd or even to know whether the line completes the record or expects more information.
To check for odd or even you can use the modulo operator (%); in general:
even % 2 = 0
odd % 2 = 1
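In bash arithmetic that check might look like this (a minimal sketch):

quotes=3
if (( quotes % 2 == 1 )); then
    echo "odd: the record expects more lines"
else
    echo "even: the record is complete"
fi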
For the first line:
Odd means that the line expects more information on the next line.
Even means the line is complete.
For the subsequent lines, I have to know the status of the previous one. For instance, in your sample text:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
You can say line 1 (John,Doe,"Country) has 1 quote (odd), which means the status of the record is incomplete, or open.
When you go to line 2, there are no quotes (even). Nevertheless, this does not mean the record is complete; you have to consider the previous status. So for the lines following the first one:
Odd means that the record status toggles (incomplete to complete, or vice versa).
Even means that the record status remains the same as on the previous line.
What I did was loop line by line while carrying the status of each line over to the next one:
incomplete=0
cat file.csv | while read line; do
quotes=`echo $line | tr -cd '"' | wc -c` # Counts the quotes
incomplete=$((($quotes+$incomplete)%2)) # Check if Odd or Even to decide status
if [ $incomplete -eq 1 ]; then
echo -n "$line " >> new.csv # If line is incomplete join with next
else
echo "$line" >> new.csv # If line completes the record finish
fi
done
Once this is executed, a file in your format produces a new.csv like this:
First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345
I like one-liners as much as everyone; I wrote the script above for the sake of clarity, but you can (arguably) write it in one line like:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
I would appreciate it if you could go back to your example and see if this works for your case (which you most likely already solved). Hopefully this can still help someone else down the road...
Recovering the multi-line fields
Every need is different. In my case I wanted the records on one line so I could further process the csv and add some bash-extracted data, but I wanted to keep the csv as it was. To accomplish that, instead of joining the lines with a space I used a code (likely unique) that I could later search and replace:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l ~newline~ " || echo "$l";done >new.csv
The code is ~newline~; this is totally arbitrary, of course.
Then, after doing my processing, I took the csv text file and replaced the coded newlines with real newlines:
sed -i 's/ ~newline~ /\n/g' new.csv
References:
Ternary operator: https://stackoverflow.com/a/3953666/6316852
Count char occurrences: https://stackoverflow.com/a/41119233/6316852
Other peculiar cases: https://www.linuxquestions.org/questions/programming-9/complex-bash-string-substitution-of-csv-file-with-multiline-data-937179/
TL;DR
Run this:
i=0;cat file.csv|while read l;do i=$((($(echo $l|tr -cd '"'|wc -c)+$i)%2));[[ $i = 1 ]] && echo -n "$l " || echo "$l";done >new.csv
... and collect results in new.csv
I hope it helps!
If Perl is an option, please try the following:
perl -e '
while (<>) {
$str .= $_;
}
while ($str =~ /("(("")|[^"])*")|((^|(?<=,))[^,]*((?=,)|$))/g) {
if (($el = $&) =~ /^".*"$/s) {
$el =~ s/^"//s; $el =~ s/"$//s;
$el =~ s/""/"/g;
$el =~ s/\s+(?!$)/ /g;
}
push(@ary, $el);
}
foreach (@ary) {
print /\n$/ ? "$_" : "$_,";
}' sample.csv
sample.csv:
First name,Last name,Address,ZIP
John,Doe,"Country
City
Street",12345
John,Doe,"Country
City
Street",67890
Result:
First name,Last name,Address,ZIP
John,Doe,Country City Street,12345
John,Doe,Country City Street,67890
This might work for you (GNU sed):
sed ':a;s/[^,]\+/&/4;tb;N;ba;:b;s/\n\+/ /g;s/"//g' file
Test each line to see whether it contains the correct number of fields (in the example, 4). If there are not enough fields, append the next line and repeat the test. Otherwise, replace the newline(s) with spaces and finally remove the double quotes.
N.B. This may be fraught with problems, such as commas between double quotes and escaped double quotes.
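For the sample above, though, it yields the desired output:

First name,Last name,Address,ZIP
John,Doe,Country City Street,12345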
Try cat -v file.csv. If the file was made with Excel, you might be in luck: when the newlines inside a field are a simple \n and the newline at the end of each record is \r\n (which will show up as ^M), parsing is simple.
# delete all newlines and replace the ^M with a new newline.
tr -d "\n" < file.csv| tr "\r" "\n"
# Above two steps with one command
tr "\n\r" " \n" < file.csv
When you want a space between the joined lines, you need an additional step.
tr "\n\r" " \n" < file.csv | sed '2,$ s/^ //'
EDIT: @sjaak commented that this didn't work in his case.
When your broken lines also contain ^M, you can still be a lucky (wo)man.
When your broken field is always the first field in double quotes and you have GNU sed 4.2.2 or later, you can join two lines when the first line has exactly one double quote.
sed -rz ':a;s/(\n|^)([^"]*)"([^"]*)\n/\1\2"\3 /;ta' file.csv
Explanation:
-z don't use \n as the line separator (the whole file is treated as one record)
:a label for repeating the step after successful replacement
(\n|^) Match right after a newline or at the very beginning
([^"]*) Substring without a "
ta Go back to label a and repeat
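On the sample input this joins the quoted field while keeping the quotes:

First name,Last name,Address,ZIP
John,Doe,"Country City Street",12345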
awk pattern matching can also do the job.
Here is a one-line answer:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile
If you'd like to drop the quotes, you could use:
awk '/,"/{ORS=" "};/",/{ORS="\n"}{print $0}' YourFile | sed 's/"//gw NewFile'
but I prefer to keep them.
To explain the code:
/Pattern/ : finds the pattern in the current line.
ORS : the output record separator.
$0 : the whole of the current line.
's/OldPattern/NewPattern/' : substitutes the first OldPattern with NewPattern.
/g : applies the previous action to all occurrences of OldPattern.
/w : writes the result to NewFile.
I'm writing a for loop in bash to run a command, and I need to add a comma after one of my variables. I can't seem to do this without an extra space being added. When I move the "," right next to $bams, it outputs *.sorted,
#!/bin/bash
bams=*.sorted
for i in $bams
do echo $bams ","
done;
Output should be this:
'file1.sorted','file2.sorted','file3.sorted'
The eventual end goal is to be able to insert a list of files into a --flag in the format above. Not sure how to do that either.
First, a literal answer (if your goal were to generate a string of the form 'foo','bar','baz', rather than to run a program with a command line equivalent to somecommand --flag='foo','bar','baz', which is quite different):
shopt -s nullglob # generate a null result if no matches exist
printf -v var "'%s'," *.sorted # put list of files, each w/ a comma, in var
echo "${var%,}" # echo contents of var, with last comma removed
Or, if you don't need the literal single quotes (and if you're passing your result to another program on its command line with the single quotes being syntactic rather than literal, you absolutely don't want them):
files=( *.sorted ) # put *.sorted in an array
IFS=, # set the comma character as the field separator
somecommand --flag "${files[*]}" # run your program with the comma-separated list
Try this:
lst=$( echo *.sorted | sed 's/ /,/g' ) # stack filenames with commas
echo $lst
If you really need the single quotes around each filename, then:
lst="'$( echo *.sorted | sed "s/ /','/g" )'" # commas AND quotes
#!/bin/bash
bams=*.sorted
for i in $bams
do flag+="${flag:+,}'$i'"
done
echo $flag
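With the three .sorted files from the question in the current directory, this prints:

'file1.sorted','file2.sorted','file3.sorted'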