For the following code, which according to the style guide should be wrapped at 80 chars:
opts.on('--scores_min <uint>', Integer, 'Drop reads if a single position in ',
        'the index have a quality score ',
        'below scores_main (default= ',
        "#{DEFAULT_SCORE_MIN})") do |o|
  options[:scores_min] = o
end
The resulting output is:
    --scores_min <uint>          Drop reads if a single position in
                                 the index have a quality score
                                 below scores_main (default=
                                 16)
Which wraps at 72 chars and looks wrong :o(
I really want it wrapped at 80 chars and aligned like this:
    --scores_min <uint>          Drop reads if a single position in the
                                 index have a quality score below
                                 scores_min (default=16)
How can this be achieved in a clever way?
The easiest solution in this case is to stack parameters like this:
opts.on('--scores_min <uint>',
        Integer,
        "Drop reads if a single position in the ",
        "index have a quality score below ",
        "scores_min (default= #{DEFAULT_SCORE_MIN})") do |o|
  options[:scores_min] = o
end
That results in a fairly pleasant output:
    --scores_min <uint>          Drop reads if a single position in the
                                 index have a quality score below
                                 scores_min (default= 16)
More generally, heredocs can make it easier to format output strings in a way that looks good both in the code and in the output:
# Deeply nested code
puts <<~EOT
  Drop reads if a single position in the
  index have a quality score below
  scores_min (default= #{DEFAULT_SCORE_MIN})
EOT
But in this case it doesn't work so well since the description string is indented automatically.
So I think the solution is to follow the Ruby Style Guide:
When using heredocs for multi-line strings keep in mind the fact that
they preserve leading whitespace. It's a good practice to employ some
margin based on which to trim the excessive whitespace.
code = <<-END.gsub(/^\s+\|/, '')
  |def test
  |  some_method
  |  other_method
  |end
END
# => "def test\n  some_method\n  other_method\nend\n"
[EDIT] In Ruby 2.3 you can do (same ref):
code = <<~END
  def test
    some_method
    other_method
  end
END
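Applied to the original option, a sketch (untested) that keeps the description readable in the source and still hands OptionParser one string per output line could look like this, assuming DEFAULT_SCORE_MIN is defined as in the question:

desc = <<~DESC.lines.map(&:chomp)
  Drop reads if a single position in the
  index have a quality score below
  scores_min (default=#{DEFAULT_SCORE_MIN})
DESC

opts.on('--scores_min <uint>', Integer, *desc) do |o|
  options[:scores_min] = o
end

The splat passes each dedented line as its own description argument, which is what the stacked-parameters version above does by hand.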
I'm using a regex to grab parameters from an HTML file.
I've tested the regexp and it seems to be fine; it appears that the CSV conversion is what's causing the issue, but I'm not sure.
Here is what I have:
mechanics_file = File.read(filename)
mechanics = mechanics_file.scan(/(?<=70%">)(.*)(?=<\/td)/)
id_file = File.read(filename)
id = id_file.scan(/(?<="propertyids\[]" value=")(.*)(?=")/)
puts id.zip(mechanics)
CSV.open('csvfile.csv', 'w') do |csv|
  id.zip(mechanics) { |row| csv << row }
end
The puts output looks like this:
2073
Acting
2689
Action / Movement Programming
But the contents of the csv look like this:
"[""2073""]","[""Acting""]"
"[""2689""]","[""Action / Movement Programming""]"
How do I get rid of all of the extra quotes and brackets? Am I doing something wrong in the process of writing to a csv?
This is my first project in ruby so I would appreciate a child-friendly explanation :) Thanks in advance!
String#scan returns an Array of Arrays:
scan(pattern) → array
Both forms iterate through str, matching the pattern (which may be a Regexp or a String). For each match, a result is generated and either added to the result array or passed to the block. If the pattern contains no groups, each individual result consists of the matched string, $&. If the pattern contains groups, each individual result is itself an array containing one entry per group.
a = "cruel world"
# […]
a.scan(/(...)/) #=> [["cru"], ["el "], ["wor"]]
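The same documentation also shows the group-less form for comparison:

a.scan(/\w+/) #=> ["cruel", "world"]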
So, id looks like this:
id == [['2073'], ['2689']]
and mechanics looks like this:
mechanics == [['Acting'], ['Action / Movement Programming']]
id.zip(mechanics) then looks like this:
id.zip(mechanics) == [[['2073'], ['Acting']], [['2689'], ['Action / Movement Programming']]]
Which means that in your loop, each row looks like this:
row == [['2073'], ['Acting']]
row == [['2689'], ['Action / Movement Programming']]
CSV#<< expects as its argument an Array of Strings, or of things that can be converted to Strings. You are passing it an Array of Arrays, which it will happily convert to an Array of Strings for you by calling Array#to_s on each element, and that looks like this:
[['2073'], ['Acting']].map(&:to_s) == [ '["2073"]', '["Acting"]' ]
[['2689'], ['Action / Movement Programming']].map(&:to_s) == [ '["2689"]', '["Action / Movement Programming"]' ]
Lastly, " is the string delimiter in CSV, and needs to be escaped by doubling it, so what actually gets written to the CSV file is this:
"[""2073""]", "[""Acting""]"
"[""2689""]", "[""Action / Movement Programming""]"
The simplest way to correct this would be to flatten the return values of the scans (and maybe also convert the IDs to Integers, assuming that they are, in fact, Integers):
require 'csv'

mechanics_file = File.read(filename)
mechanics = mechanics_file.scan(/(?<=70%">)(.*)(?=<\/td)/).flatten
id_file = File.read(filename)
id = id_file.scan(/(?<="propertyids\[]" value=")(.*)(?=")/).flatten.map(&:to_i)
CSV.open('csvfile.csv', 'w') do |csv|
  id.zip(mechanics) { |row| csv << row }
end
Another suggestion would be to forgo the Regexps completely and use an HTML parser to parse the HTML.
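For instance, a rough sketch with Nokogiri (untested; the XPath expressions are guesses reverse-engineered from the lookbehinds in the original regexes, so the assumed HTML structure is exactly that, an assumption):

require 'nokogiri'
require 'csv'

doc = Nokogiri::HTML(File.read(filename))

# Assumed structure: ids come from <input name="propertyids[]" value="..."> elements
# and mechanics from <td width="70%"> cells.
ids       = doc.xpath('//input[@name="propertyids[]"]').map { |node| node['value'].to_i }
mechanics = doc.xpath('//td[@width="70%"]').map { |node| node.text.strip }

CSV.open('csvfile.csv', 'w') do |csv|
  ids.zip(mechanics) { |row| csv << row }
end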
I would like to write a Ruby script (repl.rb) which can replace a string in a binary file (the string is defined by a regex) with a different string of the same length.
It should work like a filter that outputs to STDOUT, which can then be redirected (ruby repl.rb data.bin > data2.bin); the regex and replacement can be hardcoded. My approach is:
#!/usr/bin/ruby

fn = ARGV[0]
regex = /\-\-[0-9a-z]{32,32}\-\-/
replacement = "--0ca2765b4fd186d6fc7c0ce385f0e9d9--"
blk_size = 1024

File.open(fn, "rb") { |f|
  while not f.eof?
    data = f.read(blk_size)
    data.gsub!(regex, replacement)
    print data
  end
}
My problem is that the string can be positioned in the file so that it straddles the block boundary used when reading. For example, when blk_size=1024 and the first occurrence of the string begins at byte position 1000, I will not find it in the data variable; the same can happen in any later read cycle. Should I process the whole file twice with different block sizes to ensure this worst-case scenario is avoided, or is there another approach?
I would posit that a tool like sed might be a better choice for this. That said, here's an idea: read block 1 and block 2 and join them into a single string, then perform the replacement on the combined string. Split them apart again and print block 1. Then read block 3, join blocks 2 and 3, and perform the replacement as above. Split them again and print block 2. Repeat until the end of the file. I haven't tested it, but it ought to look something like this:
File.open(fn, "rb") do |f|
last_block, this_block = nil
while not f.eof?
last_block, this_block = this_block, f.read(blk_size)
data = "#{last_block}#{this_block}".gsub(regex, str)
last_block, this_block = data.slice!(0, blk_size), data
print last_block
end
print this_block
end
There's probably a nontrivial performance penalty for doing it this way, but it could be acceptable depending on your use case.
Maybe a cheeky
f.pos = f.pos - replacement.size
at the end of the while loop, just before reading the next chunk.
I'm parsing a pdf that has some dates by splitting the lines and then searching them. The following are example lines:
Posted Date: 02/11/2015
Effective Date: 02/05/2015
When I find Posted Date, I split on the : and pull out 02/11/2015. But when I do the same for effective date, it only returns /05/2015. When I write all lines, it displays that date as /05/2015 while the PDF has the 02. Would 02 be converted to nil for some reason? Am I missing something?
lines = reader.pages[0].text.split(/\r?\n/)
lines.each_with_index do |line, index|
  values_to_insert = []
  if line.include? "Legal Name:"
    name_line = line.split(":")
    values_to_insert.push(name_line[1])
  end
  if line.include? "Active/Pending Insurance"
    topLine = lines[index + 2].split(" ")
    middleLine = lines[index + 5].split(" ")
    insuranceLine = lines[index + 7]
    insurance_line_split = insuranceLine.split(" ")
    insurance_line_split.each_with_index do |word, i|
      if word.include? "Insurance"
        values_to_insert.push(insuranceLine.split(":")[1])
      end
    end
    topLine.each_with_index do |word, i|
      if word.include? "Posted"
        values_to_insert.push(topLine[i + 2])
      end
    end
    middleLine.each_with_index do |word, i|
      if word.include? "Effective" or word.include? "Cancellation"
        # puts middleLine[0]
        puts middleLine[1]
        # puts middleLine[i + 1].split(":")[1]
      end
    end
  end
end
Here is what happens when I print all lines:
Active/Pending Insurance:
Form: 91X Type: BIPD/Primary Posted Date: 02/11
/2015
Policy/Surety Number:A 3491819 Coverage From: $0
To: $1,000,000
Effective Date:/05/2015 Cancellation Date:
Insurance Carrier: PROGRESSIVE EXPRESS INSURANCE COMPANY
Attn: CUSTOMER SERVICE
Address: P. O. BOX 94739
CLEVELAND, OH 44101 US
Telephone: (800) 444 - 4487 Fax: (440) 603 - 4555
Edited to show the code and even add a picture. I'm splitting by lines and then splitting again on colons and sometimes spaces. It's not amazingly clean but I don't think there's a much better way.
The problem occurs at positions where multiple pieces of text are on the same line but don't use exactly the same base line. In the case of the PDF at hand, (at least) the policy number and the effective date are positioned slightly higher than their respective labels.
The cause of this is the way the pdf-reader library used by the OP brings together the text pieces drawn on the page: it determines a number of columns and rows to arrange the letters in, and creates an array with one string per row, each filled with one space character per column. It then combines consecutive text pieces from the PDF that sit on exactly the same base line, and finally puts these combined text pieces into the string array, starting at the position that best matches their starting position in the PDF.
As fonts used in PDFs usually are not monospaced, this procedure can result in overlapping strings, i.e. erasure of one of the two. The step combining strings on the same baseline prevents erasure in that case, but for strings on slightly different base lines, this overlapping effect can still occur.
What one can do is increase the number of columns used here. The library, in page_layout.rb, defines:
def col_count
  @col_count ||= ((@page_width / @mean_glyph_width) * 1.05).floor
end
As you see there already is some magic number 1.05 in use to slightly increase the number of columns. By increasing this number even more, no erasures as observed by the OP should occur anymore. One should not increase the factor too much, though, because that can introduce unwanted space characters where none belong.
The OP reported that increasing the magic number to 1.10 sufficed in his case.
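If editing the installed gem is undesirable, roughly the same effect can be had by reopening the class from one's own code. This is only a sketch: the class name PDF::Reader::PageLayout and the instance variables are taken from the pdf-reader source quoted above, and 1.10 is simply the factor the OP reported as sufficient.

require 'pdf-reader'

class PDF::Reader::PageLayout
  def col_count
    # A larger factor means more columns and less chance of overlapping text,
    # at the risk of introducing spurious space characters.
    @col_count ||= ((@page_width / @mean_glyph_width) * 1.10).floor
  end
end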
I'm just learning Ruby and have been tackling small code projects to accelerate the process.
What I'm trying to do here is read only the alphabetic words from a text file into an array, then delete the words from the array that are less than 5 characters long. Then where the stdout is at the bottom, I'm intending to use the array. My code currently works, but is very very slow since it has to read the entire file, then individually check each element and delete the appropriate ones. This seems like it's doing too much work.
goal = File.read('big.txt').split(/\s/).map do |word|
  word.scan(/[[:alpha:]]+/).uniq
end

goal.each { |word|
  if word.length < 5
    goal.delete(word)
  end
}

puts goal.sample
Is there a way to apply the criteria to my File.read block to keep it from mapping the short words to begin with? I'm open to anything that would help me speed this up.
You might want to change your regex instead to catch only words longer than 5 characters to begin with:
goal = File.read('C:\Users\bkuhar\Documents\php\big.txt').split(/\s/).flat_map do |word|
  word.scan(/[[:alpha:]]{6,}/).uniq
end
Further optimization might be to maintain a Set instead of an Array, to avoid re-scanning for uniqueness:
require 'set'

goal = Set.new
File.read('C:\Users\bkuhar\Documents\php\big.txt').scan(/\b[[:alpha:]]{6,}\b/).each do |w|
  goal << w
end
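Note that Set has no sample method, so the final line of the original script would need to go through an Array first:

puts goal.to_a.sample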
In this case, use the delete_if method:

goal # => your array
goal.delete_if { |w| w.length < 5 }

This deletes the words shorter than 5 characters in place and returns the modified array itself, not a new one.
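A throwaway example of that in-place behaviour (values made up for illustration):

words = ["tea", "bubble", "lunch"]
words.delete_if { |w| w.length < 5 }
words #=> ["bubble", "lunch"]  (the same array object, now without the short word)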
Hope this helps.
I really don't understand what a lot of the stuff you are doing in the first loop is for.
You take every chunk of text separated by whitespace, map it to an array of the unique groups of letter characters it contains, and plug that into an array.
This is way too complicated for what you want. Try this:
# chomp strips the trailing newline so the length check sees only the word itself
goal = File.readlines('big.txt').map(&:chomp).select do |word|
  word =~ /^[a-zA-Z]+$/ &&
    word.length >= 5
end
This makes it easy to add new conditions, too. If the word can't contain 'q' or 'Q', for example:
goal = File.readlines('big.txt').map(&:chomp).select do |word|
  word =~ /^[a-zA-Z]+$/ &&
    word.length >= 5 &&
    !word.upcase.include?('Q')
end
This assumes that each word in your dictionary is on its own line. You could go back to splitting it on whitespace, but it makes me wonder whether the file you are reading in is written, human-readable text, i.e. it has 'words' ending in periods or commas, like this sentence. In that case, splitting on whitespace will not work.
Another note: map is the wrong array method to use here. It transforms the values of one array and creates a new array out of the results. You want to select certain values from an array, not modify them, and the Array#select method is what does that.
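A tiny illustration of the difference (throwaway values, not from the OP's file):

[1, 2, 3, 4].map    { |n| n * 2 }    #=> [2, 4, 6, 8]  (every value transformed)
[1, 2, 3, 4].select { |n| n.even? }  #=> [2, 4]        (only matching values kept)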
Also, feel free to modify the Regex back to using the :alpha: tag if you are expecting non-standard letter characters.
Edit: Second version
goal = File.readlines('big.txt').join(" ").scan(/[a-z][a-z']{4,}/i)
Explanation: load a file, and join all the lines in the file together with a space. scan then captures every occurrence of a group of letters, at least 5 long and possibly containing but not starting with a ', and puts all those occurrences into an array.
This works well, and it's only one line for your whole task, but it'll match
sugar'
in
I'd like some 'sugar', if you know what I mean
Like above, if your word can't contain q or Q, you could change the regex to
/[a-pr-z][a-pr-z']{4,}/i
And an idea: do another select on goal, removing all those entries that end with a '. This overcomes the limitation of my regex.
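That follow-up pass could look like this (a small sketch; reject keeps everything that does not match):

goal = goal.reject { |word| word.end_with?("'") }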
Is there a method similar to the Rails truncate method that accepts an index where I can indicate the start of truncation and a separator parameter so that it does not start in the middle of a word or string?
For example:
"i love the taste of bubble tea after lunch."
I would like to grab a string of size 15 starting from index 9 so this should result in:
"the taste of bubble"
I don't think there is one function to do this, so you'll have to write your own. I'd recommend chopping off the start of the string first and then using truncate to handle the end. Something like this might do what you want:
def truncate_beginning_and_end(str, beginning, length, separator)
  first_space_before_beginning = str[0..beginning].rindex(separator)
  str_without_beginning = str[(first_space_before_beginning + 1)..-1]
  truncate(str_without_beginning, length: length, separator: separator, omission: '')
end
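Hypothetical usage with the example sentence, assuming the method is defined somewhere Rails' truncate helper (ActionView::Helpers::TextHelper) is available. A length of 19 is used here because "the taste of bubble" is 19 characters long; the exact cut point depends on how your Rails version handles the separator, so the shown result is expected rather than verified:

s = "i love the taste of bubble tea after lunch."
truncate_beginning_and_end(s, 9, 19, " ")
# => "the taste of bubble"  (expected)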