How to concatenate continuation lines in Perl? - windows

I have a CSV file which contains strings like this:
ID1;banana
| apple
| oranges
and I want that every time there is a pipe at the beginning of a line, the string is appended to the previous line; the output should look like this:
ID1;banana | apple | oranges
How can I remove the newlines that precede a line beginning with a pipe |?

In a hackish one-liner, removing the newlines before pipes:
perl -e '$s = do {local $/; <>}; $s =~ s/\n\|/ |/g; print $s' file.csv

Instead of trying to backspace/erase what's already been printed, you could instead print the newline only when a | isn't the first character:
perl -n -e 'chomp; /^\s*\|/? print " $_": print "\n$_" ' yourfile.txt

Is the string you show the value of a single CSV field? If so then you should be using Text::CSV to divide each line into fields (as its getline method is the simplest way to cope with data that contains embedded newlines) and you can use the substitution s/\n(?=\|)/ /g to change a newline into a space if it precedes a pipe character.
Here's an example
use strict;
use warnings;
use Text::CSV;
my $csv = Text::CSV->new({ binary => 1, eol => $/ });
while (my $row = $csv->getline(*DATA)) {
s/\n(?=\|)/ /g for @$row;
print "$_: $row->[$_]\n" for 0 .. $#$row;
print "\n";
}
__DATA__
"ID1;banana
| apple
| oranges",f2,f3
g1,g2,g3
output
0: ID1;banana | apple | oranges
1: f2
2: f3
0: g1
1: g2
2: g3
If your circumstance is different from that then you need to explain.

Related

lowercase and remove punctuation from a csv

I have a giant file (6gb) which is a csv and the rows look like so:
"87687","institute Polytechnic, Brazil"
"342424","university of India, India"
"24343","univefrsity columbia, Bogata, Colombia"
and I would like to remove all punctuation and lower the case of second column yielding:
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
what would be the most efficient way to do this on the terminal?
Tried:
cat TEXTFILE | tr -d '[:punct:]' > OUTFILE
Problem: the result is not in lowercase, and tr acts on both columns, not just the second.
With a real CSV parser in Perl, this is the robust/reliable way, using just one process.
Since it works line by line, the 6 GB file size should not be an issue.
#!/usr/bin/perl
use strict; use warnings; # harness
use Text::CSV; # load the needed module (install it)
use feature qw/say/; # say = print("...\n")
# create an instance of a new CSV parser
my $csv = Text::CSV->new({ auto_diag => 1 });
# open a File Handle or exit with error
open my $fh, "<:encoding(utf8)", "file.csv" or die "file.csv: $!";
while (my $row = $csv->getline ($fh)) { # parse line by line
$_ = $row->[1]; # parse only column 2
s/[\s[:punct:]]//g; # removes both space(s) and punct(s)
$_ = lc $_; # Lower Case current value $_
$row->[1] = qq/"$_"/; # edit changes and (re)"quote"
say join ",", #$row; # display the whole current row
}
close $fh; # close the File Handle
Output
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
install
cpan Text::CSV
Here's an approach using xsv and process substitution:
paste -d, \
<(xsv select 1 infile.csv) \
<(xsv select 2 infile.csv | sed 's/[[:blank:][:punct:]]*//g;s/.*/\L&/')
The sed command first removes all blanks and punctuation, then lowercases the entire match.
This also works when the first field contains blanks and commas, and retains quoting where required.
Using sed
$ sed -E ':a;s/([^,]*,)([^ ,]*)[ ,]([[:alpha:]]+)/\1\L\2\3/;ta' input_file
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia
I suggest using this awk solution, which should work with any version of awk:
awk 'BEGIN{FS=OFS="\",\""} {
gsub(/[^[:alnum:]"]+/, "", $2); $2 = tolower($2)} 1' file
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
Details:
We make "," input and output field separators in BEGIN block
gsub(/[^[:alnum:]"]+/, "", $2): Strip all non-alphanumeric characters except "
$2 = tolower($2): Lowercase second column
One GNU awk (for gensub()) idea:
awk '
BEGIN { FS=OFS="\"" }
{ $4=gensub(/[^[:alnum:]]/,"","g",tolower($4)) }
1'
This generates:
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
Another sed approach -
sed -E 's/ +//g; s/([^"]),/\1/g; s/"([^"]*)"/"\L\1"/g' file
I don't like how that leaves no flexibility, and makes you rewrite the logic if you find something else you want to remove, though.
Another in awk -
awk -F'[", ]+' '
{ printf "\"%s\",\"", $2;
for(c=3;c<=NF;c++) printf "%s", tolower($c);
print "\"";
}' file
This approach lets you define and add any additional offending characters into the field delimiters without editing your logic.
$: pat=$"[\"',_;:!@#\$%)(* -]+"
$: echo "$pat"
["',_;:!##$%)(* -]+
$: cat file
"87687","institute 'Polytechnic, Brazil"
"342424","university; of-India, India"
"24343","univefrsity )columbia, Bogata, Colombia"
$: awk -F"$pat" '{printf "\"%s\",\"", $2; for(c=3;c<=NF;c++) printf "%s", tolower($c); print "\"" }' file
"87687","institutepolytechnicbrazil"
"342424","universityofindiaindia"
"24343","univefrsitycolumbiabogatacolombia"
(I hate the way that lone single quote throws the markup color/format parsing off, lol)
Another way using ruby. Edited the data to show only the second field is modified.
% ruby -r 'csv' -e 'f = open("file");
CSV.parse(f) do |i|
puts "\"" + i[0] + "\",\"" + i[1].downcase.gsub(/[ ,]/,"") + "\"" end'
"8768, 7","institutepolytechnicbrazil"
"342 424","universityofindiaindia"
"243 43","univefrsitycolumbiabogatacolombia"
Using FastCSV gives a huge speedup
gem install fastcsv
% ruby -r 'fastcsv' -e 'f = open("file");
FastCSV.raw_parse(f) do |i|
puts "\"" + i[0] + "\",\"" + i[1].downcase.gsub(/[ ,]/,"") + "\"" end'
"8768, 7","institutepolytechnicbrazil"
"342 424","universityofindiaindia"
"243 43","univefrsitycolumbiabogatacolombia"
Data
% cat file
"8768, 7","institute Polytechnic, Brazil"
"342 424","university of India, India"
"243 43","univefrsity columbia, Bogata, Colombia"
With your shown samples and attempts, please try the following GNU awk code, which uses awk's match function. The regex (^"[^"]*",")([^"]*)(".*)$ given to match creates 3 capturing groups and stores their values in the array arr; those values are then fetched later in the program to meet the OP's requirement.
awk '
match($0,/(^"[^"]*",")([^"]*)(".*)$/,arr){
gsub(/[^[:alnum:]]+/,"",arr[2])
print arr[1] tolower(arr[2]) arr[3]
}
' Input_file
This might work for you (GNU sed):
sed -E 's/("[^"]*",)/\1\n/;h;s/.*\n//;s/[[:punct:] ]//g;s/.*/"\L&"/;H;g;s/\n.*\n//' file
Divide and rule.
Partition the line into two fields, make a copy, process the second field (removing punctuation and spaces), re-quote and lowercase it, and then re-assemble the fields.
An alternative, perhaps?
sed -E ':a;s/^("[^"]*",".*)[^[:alpha:]"](.*)/\L\1\2/;ta' file
Here is a way to do it in PHP.
Note: fputcsv only outputs double quotes when a field needs them, so the first column will only be quoted if required; the second column will never need quotes once cleaned, since it contains no spaces or special characters.
$max_line_length = 100;
if (($fp = fopen("file.csv", "r")) !== FALSE) {
while (($data = fgetcsv($fp, $max_line_length, ",")) !== FALSE) {
$data[1] = strtolower(preg_replace('/[\s[:punct:]]/', '', $data[1]));
fputcsv(STDOUT, $data, ',', '"');
}
fclose($fp);
}
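A possible way to run it from the command line, assuming the snippet is saved as clean_csv.php (a hypothetical name) and given an opening <?php tag:
php clean_csv.php > cleaned.csv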

How to replace text in file between known start and stop positions with a command line utility like sed or awk?

I have been tinkering with this for a while but can't quite figure it out. A sample line within the file looks like this:
"...~236 characters of data...Y YYY. Y...many more characters of data"
How would I use sed or awk to replace spaces with a B character only between positions 236 and 246? In that example string it starts at character 29 and ends at character 39 within the string. I would want to preserve all the text preceding and following the target chunk of data within the line.
For clarification based on the comments, it should be applied to all lines in the file and expected output would be:
"...~236 characters of data...YBBYYY.BBY...many more characters of data"
With GNU awk:
$ awk -v FIELDWIDTHS='29 10 *' -v OFS= '{gsub(/ /, "B", $2)} 1' ip.txt
...~236 characters of data...YBBYYY.BBY...many more characters of data
FIELDWIDTHS='29 10 *' means 29 characters for first field, next 10 characters for second field and the rest for third field. OFS is set to empty, otherwise you'll get space added between the fields.
With perl:
$ perl -pe 's/^.{29}\K.{10}/$&=~tr| |B|r/e' ip.txt
...~236 characters of data...YBBYYY.BBY...many more characters of data
^.{29}\K match and ignore first 29 characters
.{10} match 10 characters
e flag to allow Perl code instead of string in replacement section
$&=~tr| |B|r convert space to B for the matched portion
Use this Perl one-liner with substr and tr. Note that this uses the fact that you can assign to substr, which changes the original string:
perl -lpe 'BEGIN { $from = 29; $to = 39; } (substr $_, ( $from - 1 ), ( $to - $from + 1 ) ) =~ tr/ /B/;' in_file > out_file
To change the file in-place, use:
perl -i.bak -lpe 'BEGIN { $from = 29; $to = 39; } (substr $_, ( $from - 1 ), ( $to - $from + 1 ) ) =~ tr/ /B/;' in_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-p : Loop over the input one line at a time, assigning it to $_ by default. Add print $_ after each loop iteration.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-i.bak : Edit input files in-place (overwrite the input file). Before overwriting, save a backup copy of the original file by appending to its name the extension .bak.
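As noted above, substr can be used as an lvalue. A minimal standalone sketch of that behaviour, with made-up strings rather than the question's data:
my $s = "ab cd ef";
substr($s, 0, 2) = "XY"; # assigning to substr edits $s in place: "XY cd ef"
(substr $s, 3, 5) =~ tr/ /B/; # tr/// applied to the slice changes $s: "XY cdBef"
print "$s\n"; # prints "XY cdBef"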
I would use GNU AWK in the following way. For simplicity's sake, say we have file.txt with this content:
S o m e s t r i n g
and we want to change spaces from position 5 (inclusive) to position 10 (inclusive), then:
awk 'BEGIN{FPAT=".";OFS=""}{for(i=5;i<=10;i+=1)$i=($i==" "?"B":$i);print}' file.txt
output is
S o mBeBsBt r i n g
Explanation: I set the field pattern (FPAT) to any single character and the output field separator (OFS) to the empty string, so every field holds a single character and no superfluous spaces are added when print-ing. I use a for loop to visit the desired fields; for each one I check whether it is a space and, if so, assign B, otherwise I keep the original value. Finally I print the whole changed line.
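Adapted to the positions used elsewhere in this thread (29 through 39 inclusive), the same idea would be, as an untested sketch along the lines of the command above:
awk 'BEGIN{FPAT=".";OFS=""}{for(i=29;i<=39;i++)$i=($i==" "?"B":$i);print}' file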
Using GNU awk:
awk -v strt=29 -v end=39 '{ ram=substr($0,strt,(end-strt));gsub(" ","B",ram);print substr($0,1,(strt-1)) ram substr($0,(end)) }' file
Explanation:
awk -v strt=29 -v end=39 '{ # Pass the start and end character positions as strt and end respectively
ram=substr($0,strt,(end-strt)); # Extract the 29th to the 39th characters of the line and read into variable ram
gsub(" ","B",ram); # Replace spaces with B in ram
print substr($0,1,(strt-1)) ram substr($0,(end)) # Rebuild the line incorporating ram and print the result
}' file
This is certainly a suitable task for perl, and it saddens me that my perl has become so rusty that this is the best I can come up with at the moment:
perl -e 'local $/=\1;while(<>) { s/ /B/ if $. >= 236 && $. <= 246; print }' input;
Another awk but using FS="":
$ awk 'BEGIN{FS=OFS=""}{for(i=29;i<=39;i++)sub(/ /,"B",$i)}1' file
Output:
"...~236 characters of data...YBBYYY.BBY...many more characters of data"
Explained:
$ awk ' # yes awk yes
BEGIN {
FS=OFS="" # set empty field delimiters
}
{
for(i=29;i<=39;i++) # between desired indexes
sub(/ /,"B",$i) # replace space with B
# if($i==" ") # couldve taken this route, too
# $i="B"
}1' file # implicit output
With sed :
sed '
H
s/\(.\{236\}\)\(.\{11\}\).*/\2/
s/ /B/g
H
g
s/\n//g
s/\(.\{236\}\)\(.\{11\}\)\(.*\)\(.\{11\}\)/\1\4\3/
x
s/.*//
x' infile
When you have an input string without \r, you can use:
sed -r 's/(.{236})(.{10})(.*)/\1\r\2\r\3/;:a;s/(\r.*) (.*\r)/\1B\2/;ta;s/\r//g' input
Explanation:
First put \r around the area that you want to change.
Next introduce a label to jump back to.
Next replace a space between 2 markers.
Repeat until all spaces are replaced.
Remove the markers.
In your case, where the length doesn't change, you can do without the markers.
Replace a space after 236..245 characters and try again when it succeeds.
sed -r ':a; s/^(.{236})([^ ]{0,9}) /\1\2B/;ta' input
This might work for you (GNU sed):
sed -E 's/./&\n/245;s//\n&/236/;h;y/ /B/;H;g;s/\n.*\n(.*)\n.*\n(.*)\n.*/\2\1/' file
Divide the problem into 2 lines, one with spaces and one with B's where there were spaces.
Then using pattern matching make a composite line from the two lines.
N.B. The newline can be used as a delimiter as it is guaranteed not to be in sed's pattern space.

How to add a constant number to all entries of a row in a text file in bash

I want to add or subtract a constant number from all entries of a row in a text file in Bash.
eg. my text file looks like:
21.018000 26.107000 51.489000 71.649000 123.523000 127.618000 132.642000 169.247000 173.276000 208.721000 260.032000 264.127000 320.610000 324.639000 339.709000 354.779000 385.084000
(it has only one row)
and I want to subtract value 18 from all columns and save it in a new file. What is the easiest way to do this in bash?
Thanks a lot!
Use simple awk like this:
awk '{for (i=1; i<=NF; i++) $i -= 18} 1' file >> $$.tmp && mv $$.tmp file
cat file
3.018 8.107 33.489 53.649 105.523 109.618 114.642 151.247 155.276 190.721 242.032 246.127 302.61 306.639 321.709 336.779 367.084
Taking advantage of awk's RS and ORS variables we can do it like this:
awk 'BEGIN {ORS=RS=" "}{print $1 - 18 }' your_file > your_new_filename
It sets the record separator for input and output to space. This makes every field a record of its own and we have only to deal with $1.
Give this compact and funny version a try:
$ printf "%s 18-n[ ]P" $(cat text.file) | dc
dc is a reverse-polish desk calculator (hehehe).
printf generates one string per number. The first string is 21.018000 18-n[ ]P. Other strings follow, one per number.
21.018000 18: the values separated with spaces are pushed to the dc stack.
- Pops two values off, subtracts the first one popped from the second one popped, and pushes the result.
n Prints the value on the top of the stack, popping it off, and does not print a newline after.
[ ] pushes a string (a single space) onto the stack.
P Pops off the value on top of the stack. If it is a string, it is simply printed without a trailing newline.
The test, with an additional sed to replace the useless last space character with a newline:
$ printf "%s 18-n[ ]P" $(cat text.file) | dc | sed "s/ $/\n/" > new.file
$ cat new.file
3.018000 8.107000 33.489000 53.649000 105.523000 109.618000 114.642000 151.247000 155.276000 190.721000 242.032000 246.127000 302.610000 306.639000 321.709000 336.779000 367.084000
----
For the record, a version with sed:
$ sed "s/\([1-9][0-9]*[.][0-9][0-9]*\)\{1,\}/\1 18-n[ ]P/g" text.file | dc
With Perl, which will work on multiple rows:
perl -i -nlae '@F = map {$_ - 18} @F; print "@F"' num_file
# ^ ^^^^ ^
# | |||| Printing an array in quotes will join
# | |||| with spaces
# | |||Evaluate code instead of expecting filename.pl
# | |||Split input on spaces and store in @F
# | |Remove (chomp) newline and add newline after print
# | Read each line of specified file (num_file)
# In-place edit: change the original file (use -i.bak to keep a backup)

Taking multiple header (rows matching condition) and convert into a column

Hello I have a file that has multiple Headers in it that I need to have turned into column values. The file looks like this:
Day1
1,Smith,London
2,Bruce,Seattle
5,Will,Dallas
Day2
1,Mike,Frisco
4,James,LA
I would like the file to end up looking like this:
Day1,1,Smith,London
Day1,2,Bruce,Seattle
Day1,5,Will,Dallas
Day2,1,Mike,Frisco
Day2,4,James,LA
The file doesn't have sequential numbers before the names and it doesn't have the same quantity of records after the "Day" Header.
Does anyone have any ideas on how to accomplish this using the command-line?
In awk
awk -F, 'NF==1{a=$0;next}{print a","$0}' file
Checks if the number of fields is 1; if it is, it saves that line in a variable and skips to the next line.
For each line that doesn't have 1 field, it prints the saved variable and the line
And in sed
sed -n '/,/!{h};/,/{x;G;s/\n/,/;p;s/,.*//;x}' file
Broken down for MrBones wild ride.
sed -n '
/,/!{h}; // If the line does not contain a comma overwrite buffer with line
/,/{ // If the line contains a comma, do everything inside the brackets
x; // Exchange the line for the held in buffer
G; // Append buffer to line
s/\n/,/; // Replace the newline with a comma
p; // Print the line
s/,.*//; // Remove everything after the first comma
x // exchange line for hold buffer to put title back in buffer for the next line.
}' file // The file you are using
In essence it saves the lines without a comma, i.e. the headers. Then, if the line is not a header, it switches the current line with the saved header and appends the now-switched line to the end of the header. As it is appended with a newline, the next statement replaces that newline with a comma. Then the line is printed. Next, to recover the header, everything after it is removed and it is swapped back into the hold buffer, ready for the next line.
sed '/^Day/ {h;d;}
G;s/\(.*\)\n\(.*\)/\2,\1/
' YourFile
POSIX compliant
Prints nothing if there is not at least one data line after a Day header
Blank lines are treated as data
awk '{if ( $0 ~ /^Day/ ) Head = $0; else print Head "," $0}' YourFile
Uses Day as a paragraph separator and its content as the header to prepend to each following line.
Perl solution:
#! /usr/bin/perl
use warnings;
use strict;
my $header;
while (<>) { # Read line by line.
if (/,/) { # If the line contains a comma,
print "$header,$_"; # prepend the header.
} else {
chomp; # Remove the newline.
$header = $_; # Remember the header.
}
}
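A possible invocation, assuming the script above is saved as prepend_header.pl (a hypothetical name); it reads the file named on the command line or standard input:
perl prepend_header.pl file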
Another sed version
sed -n '/Day[0-9]\+/{h;b end};{G;s/\(.*\)\n\(.*\)/\2,\1/;p;:end}'
Perl
$ perl -F, -wlane 'if(@F == 1){$s=$F[0]; next} print "$s,$_"' file
Day1,1,Smith,London
Day1,2,Bruce,Seattle
Day1,5,Will,Dallas
Day2,1,Mike,Frisco
Day2,4,James,LA
This Perl one-line program will do as you ask. It requires Perl v5.14 or better
perl -ne'tr/,// ? print $c,$_ : ($c = s/\s*\z/,/r)' myfile.txt
for earlier versions of perl, use
perl -ne'tr/,// ? print $c,$_ : ($c = $_) =~ s/\s*\z/,/' myfile.txt
output
Day1,1,Smith,London
Day1,2,Bruce,Seattle
Day1,5,Will,Dallas
Day2,1,Mike,Frisco
Day2,4,James,LA
Another perl example- this time using $/ to separate each record.
use strict;
use warnings;
local $/ = "Day";
while (<>) {
next unless my ($num) = m/^(\d+)/;
for ( split /\n/ ) {
print "Day${num},$_\n" if m/,/;
}
}

how can I convert this string into a list through the command line

I have files that are named like C1_1_B_(1)IMG1511.jpg and I want to split them up into a list so that I get back:
C1
1
B
(1)
IMG1511.jpg
I'm trying to figure out whether I need to do this with sed, awk, or a regex, and what that would look like. I could do it in AppleScript, but I would rather call a shell command as it is much faster.
EDIT
OK, so now it's changed a bit and I can't figure out how to fix it.
Examples are:
"P24-M_(1)Lighter_Ray_Logo_Full_Color.jpg"
"P24_(1)24x36loren.jpg"
so _(*) indicates where I want to stop splitting, so I end up with:
P24
M
(1)
Lighter_Ray_Logo_Full_Color.jpg
and
P24
(1)
24x36loren.jpg
Translate _ to new lines:
echo "C1_1_B_(1)IMG1511.jpg" | tr '_' '\n'
Output:
C1
1
B
(1)IMG1511.jpg
Although, it looks like you want to split on ) as well. No can do with tr, but...
echo "C1_1_B_(1)IMG1511.jpg" | tr '_' '\n' | sed -e 's/)/)\
/'
There's a linefeed inside the replacement string, which is needed for Mac. On other *nix OS's, a simple escape works:
echo "C1_1_B_(1)IMG1511.jpg" | tr '_' '\n' | sed -e 's/)/)\n/'
Output:
C1
1
B
(1)
IMG1511.jpg
Would this do?
<<<"C1_1_B_(1)IMG1511.jpg" sed -r 'y/_/\n/;s/\([^)]*\)/&\n/g;'
I know it's not sed/awk, but here's something that would work in perl:
#!/usr/bin/perl
while(<STDIN>) {
my($line) = $_;
chomp($line);
my @values = split(/_|(\(\d+\))/, $line);
foreach my $val (@values) {
if ( $val !~ m/^$/)
{
print "$val\n";
}
}
}
exit 0;
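A possible invocation, assuming the script is saved as split_name.pl (a hypothetical name); it should print the parts of the name shown in the question, one per line:
echo "C1_1_B_(1)IMG1511.jpg" | perl split_name.pl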
If the filename is stored in $P, the following works with zsh:
myarr=${(s/_/)$(echo $P | sed 's/)/)_/g')}
This creates an actual array.
This handles filenames which contain _ ( ) in other places.
<<< '
C1_1_B_(1)IMG151).jpg
C1_1_B_(1)IMG_(4444).jpg
C(22)2_1_22_333_B_(144)I_M_G_(_1511).jpg
' sed -nr '# isolate, process and print first section
s/^([^(]+)_/\1\n/;h
s/(.*)\n.*/\1/
s/([^_]+)_/\1\n/gp;x
# process the second section
s/.*\n(.*)/\1/
s/([^)]+\))/\1\n/p
';exit
str="C1_1_B_(1)IMG1511.jpg"
ary=( $(IFS=_; echo $str) )
for ((idx=0; idx < ${#ary[@]}; idx++)); do echo ${ary[$idx]}; done
outputs
C1
1
B
(1)IMG1511.jpg
