Choose starting row for CSV.foreach or similar method? Don't want to load file into memory - ruby

Edit (I adjusted the title): I am currently using CSV.foreach but that starts at the first row. I'd like to start reading a file at an arbitrary line without loading the file into memory. CSV.foreach works well for retrieving data at the beginning of a file but not for data I need towards the end of a file.
This answer is similar to what I am looking to do, but it loads the entire file into memory, which is what I don't want to do.
I have a 10gb file and the key column is sorted in ascending order:
# example 10gb file rows
key,state,name
1,NY,Jessica
1,NY,Frank
1,NY,Matt
2,NM,Jesse
2,NM,Saul
2,NM,Walt
etc..
I find the line I want to start with this way ...
file = File.expand_path('~/path/10gb_file.csv')
File.open(file, 'rb').each do |line|
  if line[/^2,/]
    puts "#{$.}: #{line}" # 5: 2,NM,Jesse
    row_number = $. # 5
    break
  end
end
... and I'd like to take row_number and do something like this but not load the 10gb file into memory:
CSV.foreach(file, headers: true).drop(row_number) { |row| "..load data..." }
Lastly, I'm currently handling it like the next snippet; it works fine when the rows are towards the front of the file but not when they're near the end.
CSV.foreach(file, headers: true) do |row|
  next if row['key'].to_i < row_number.to_i
  break if row['key'].to_i > row_number.to_i
  "..load data..."
end
I am trying to use CSV.foreach, but I'm open to suggestions. An alternative approach I am considering, though it does not seem efficient for keys towards the middle of the file:
Use IO or File and read the file line by line
Get the header row and build the hash manually (see the sketch after this list)
Read the file from the bottom for numbers near the max key value
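For what it's worth, here is a minimal sketch of the second idea, reading line by line and building the row hash manually. The key value 2 and the file path are just the ones from the example above, and it assumes no quoted fields containing commas:
file = File.expand_path('~/path/10gb_file.csv')
File.open(file) do |f|
  headers = f.gets.chomp.split(',')  # header row
  f.each_line do |line|
    row = headers.zip(line.chomp.split(',')).to_h
    next  if row['key'].to_i < 2     # skip rows before the target key
    break if row['key'].to_i > 2     # stop once we're past it
    # ..load data..
  end
end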

I think you have the right idea. Since you've said you're not worried about fields spanning multiple lines, you can seek to a certain line in the file using IO methods and start parsing there. Here's how you might do it:
begin
  file = File.open(FILENAME)
  # Get the headers from the first line
  headers = CSV.parse_line(file.gets)
  # Seek through the file until we find a matching line
  match = "2,"
  while line = file.gets
    break if line.start_with?(match)
  end
  # Rewind the cursor to the beginning of that line
  # (bytesize, not size: IO#seek works in bytes)
  file.seek(-line.bytesize, IO::SEEK_CUR)
  csv = CSV.new(file, headers: headers)
  # ...do whatever you want...
ensure
  # Don't forget to close the file
  file.close
end
The result of the above is that csv will be a CSV object whose first row is the row that starts with 2,.
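From there you can iterate much as you would with CSV.foreach. For example, a sketch that stops once the key moves past 2 (this assumes the file is sorted on the key column, as described in the question):
csv.each do |row|
  break if row['key'].to_i > 2
  # ..load data..
end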
I benchmarked this with an 8MB (170k rows) CSV file (from Lahman's Baseball Database) and found that it was much, much faster than using CSV.foreach alone. For a record in the middle of the file it was about 110x faster, and for a record toward the end about 66x faster. If you want, you can take a look at the benchmark here: https://gist.github.com/jrunning/229f8c2348fee4ba1d88d0dffa58edb7
Obviously 8MB is nothing like 10GB, so regardless this is going to take you a long time. But I'm pretty sure this will be quite a bit faster for you while also accomplishing your goal of not reading all of the data into memory at once.

CSV.foreach will do everything you need. It streams, so it works well with big files.
CSV.foreach('~/path/10gb_file.csv') do |line|
  # Only one line will be read into memory at a time.
  line
end
The fastest way to skip data we're not interested in is to use seek/read to advance past a portion of the file:
File.open("/path/10gb_file.csv") do |f|
  f.seek(107) # skip 107 bytes, e.g. one line (constant time)
  f.read(50)  # read the first 50 bytes of the second line
end

Related

Append line every n lines to file - ruby

I need to add a line to a text file every 1000 lines. The main issue is that the file is about 3GB, so I can't load the whole file into a string or array. To work with big files I usually use File.foreach, but I can't find any information about working with an index in this case. Is there any other option to resolve this without loading the whole file into memory?
Here are a couple options.
Option 1:
File.open('output.txt', 'w') do |outfile|
  File.foreach('input.txt').each_with_index do |line, i|
    outfile.puts(line)
    outfile.puts '--- 1000 ---' if (i + 1) % 1000 == 0 && i != 0
  end
end
This inserts the line '--- 1000 ---' after every 1000 lines from the original file. It has some drawbacks, though: mainly, it has to check the index (and that we're not at line zero) on every single line. But it works, and it works on large files without hogging memory.
Option 2:
File.open('output.txt', 'w') do |outfile|
  File.foreach('input.txt').each_slice(1000) do |lines|
    outfile.puts(lines)
    outfile.puts '--- 1000 ---'
  end
end
This code does almost exactly the same thing using Enumerable's each_slice method. It yields an array of every 1000 lines, writes them out using puts (which accepts Arrays), then writes our marker line after them, repeating for the next 1000 lines. The difference is that if the file isn't a multiple of 1000 lines, the last call to this block will yield an array smaller than 1000 lines, and our code will still append the marker line after it.
We can fix this by testing the array's length and only writing out our line if the array is exactly 1000 lines. Which will be true for every batch of 1000 lines except the last one (given a file that is not a multiple of 1000 lines).
Option 2a:
File.open('output.txt', 'w') do |outfile|
  File.foreach('input.txt').each_slice(1000) do |lines|
    outfile.puts(lines)
    outfile.puts '--- 1000 ---' unless lines.size < 1000
  end
end
This extra check is only needed if appending that line to the end of the file is a problem for you. Otherwise you can leave it out for a small performance boost.
Speaking of performance, here is how each option performed on a 335.5 MB file containing 1,000,000 paragraphs of Lorem Ipsum. Each benchmark is the total time to process the entire file 100 times.
Option 1:
103.859825 44.646519 148.506344 (152.286349)
[Finished in 152.6s]
Option 2:
96.249542 43.780160 140.029702 (145.210728)
[Finished in 145.7s]
Option 2a:
98.041073 45.788944 143.830017 (149.769698)
[Finished in 150.2s]
As you can see, option 2 is the fastest. Keep in mind that options 2/2a will in theory use more memory, since they hold 1000 lines at a time, but even then the usage is capped at a very small level, so handling enormous files shouldn't be a problem. However, they are all so close that I would recommend going with whichever option reads best or makes the most sense.
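If you want to reproduce numbers like these yourself, a benchmark along these lines would do it. This is only a sketch: the file names and the 100 iterations match the description above, but they are assumptions, not the original benchmark script.
require 'benchmark'

Benchmark.bm do |x|
  x.report('option 2') do
    100.times do
      File.open('output.txt', 'w') do |outfile|
        File.foreach('input.txt').each_slice(1000) do |lines|
          outfile.puts(lines)
          outfile.puts '--- 1000 ---'
        end
      end
    end
  end
end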
Hope this helped.

How to read a large text file line-by-line and append this stream to a file line-by-line in Ruby?

Let's say I want to combine several massive files into one and then uniq! the one (THAT alone might take a hot second)
It's my understanding that File.readlines() loads ALL the lines into memory. Is there a way to read it line by line, sort of like how node.js pipe() system works?
One of the great things about Ruby is that you can do file IO in a block:
File.open("test.txt", "r").each_line do |row|
puts row
end # file closed here
so things get cleaned up automatically. Maybe it doesn't matter on a little script but it's always nice to know you can get it for free.
This way you aren't operating on the entire file's contents at once, and you only need to hold one line in memory at a time if you use readline.
file = File.open("sample.txt", 'r')
while !file.eof?
line = file.readline
puts line
end
Large files are best read with streaming methods like each_line (shown in the other answer) or File.foreach, which opens the file and reads it line by line. If the process doesn't need the whole file in memory, you should use a streaming method: the memory required stays flat as the file grows, unlike non-streaming methods such as readlines.
File.foreach("name.txt") { |line| puts line }
uniq! is defined on Array, so you'll have to read the files into an Array anyway. You cannot process the file line-by-line because you don't want to process a file, you want to process an Array, and an Array is a strict in-memory data structure.

Editing a CSV file in place, row by row

I have a long CSV file with two columns of numbers:
1,2
2,5
7,3
etc...
I would like to add a third column equal to the sum of the first two:
1,2,3
2,5,7
7,3,10
The following code is a solution to the problem, and it makes a copy of the input file with the third column appended. Instead, I would like to operate on the input file line by line, writing the third column to each line as I go along. If the process errored out partway through for some reason, the answers for the first half of the file would already be saved and would not need to be recalculated.
I can't come up with a good way to do this using ruby's CSV class. Here's my current solution with the copied file:
require 'csv'

CSV.open("big_file.csv", "w") do |csv|
  csv << %w{1 2}
  csv << %w{2 5}
  csv << %w{3 8}
end

big_csv_file = CSV.open("big_file.csv", 'r')

# I'm creating a copy of big_file.csv here
# I'd rather edit it in place
CSV.open("copy_with_extra_column.csv", "w") do |csv|
  big_csv_file.each do |row|
    row << eval(row[0] + row[1])
    csv << row
  end
end
To put this another way: there is no way, at the fundamental file level, to "insert" the sum into the file. In your example:
1,2
2,5
7,3
If we ignore the whole notion of a "CSV" file (which is really just a concept layered on top of a plain text stream), then to "insert" the text ",3" at the end of the first line we need to do all of these things:
move the "\n" after the 2, and all of the following text, two positions later in the file (leaving some junk in its place)
overwrite the junk with ",3"
Then you would repeat this process for each additional row.
This is obviously very inefficient. In simple terms, the CSV file format is not designed for efficient insertion of data.
Your two options are:
Load the file into memory (e.g., as an array of lines), operate on it there, and then write it all back out over the existing file. Assuming your file only grows, this will work fine, but you'll need to be willing to allocate enough memory to read and operate on the whole file.
Write to a temporary file as you work through the data, and then move the temporary file in place of the original when you're done.
Updating the file "in place" is not practical.
A file is like one long string, for example:
1,2\n2,5
However, unlike a string, you can only overwrite characters in a file. In the example above, there are 7 characters. You can overwrite any of those characters with any characters you choose. So for instance, if you put the sum of the numbers at position 0 and position 2 into position 3, the result is:
1,232,5
That's probably not what you want, because it looks like the first two numbers are 1 and 232 and their sum is 5. However, that is all you can do when editing a file in place: you can only overwrite characters with other characters.
For a large file, you can read in one line, then write the altered line to a new file. When you are done, you can delete the original file, and then you can rename the new file to the old file name. You can use the Tempfile class to avoid name clashes for the new file name.
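Here is a minimal sketch of that approach, assuming the file has exactly the two integer columns from the question (the file name and the use of CSV here are illustrative):
require 'csv'
require 'tempfile'
require 'fileutils'

source = 'big_file.csv'
temp   = Tempfile.new('big_file')

# Stream the original row by row, appending the sum to each row in a temp file.
CSV.open(temp.path, 'w') do |out|
  CSV.foreach(source) do |row|
    out << row + [row[0].to_i + row[1].to_i]
  end
end

temp.close
FileUtils.mv(temp.path, source) # replace the original with the new file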
Instead of CSV.open(), try CSV.read(). For example, it's obviously a little ugly, but:
big_csv_file = CSV.read("big_file.csv")
big_csv_file[0] << eval(big_csv_file[0][0] + big_csv_file[0][1])

CSV.open("copy_with_extra_column.csv", "w") do |csv|
  big_csv_file.each do |row|
    csv << row
  end
end
If you need the file to always be up to date, the alterations and the writing will need to happen in a loop, obviously.

Easier way to search through large files in Ruby?

I'm writing a simple log sniffer that will search logs for specific errors that are indicative of issues with the software I support. It allows the user to specify the path to the log and specify how many days back they'd like to search.
If users have log roll over turned off, the log files can sometimes get quite large. Currently I'm doing the following (though not done with it yet):
File.open(@log_file, "r") do |file_handle|
  file_handle.each do |line|
    if line.match(/\d+-\d+-\d+/)
      # etc...
The line.match obviously looks for the date format we use in the logs, and the rest of the logic will be below. However, is there a better way to search through the file without .each_line? If not, I'm totally fine with that. I just wanted to make sure I'm using the best resources available to me.
Thanks
fgrep as a standalone, or called via system('fgrep ...'), may be a faster solution.
file.readlines might be better in speed, but it's a time-space tradeoff
Have a look at this little research - the last approaches seem to be rather fast.
Here are some coding hints...
Instead of:
File.open(@log_file, "r") do |file_handle|
  file_handle.each do |line|
use:
File.foreach(@log_file) do |line|
  next unless line[/\A\d+-\d+-\d+/]
foreach simplifies opening and looping over the file.
next unless... makes a tight loop skipping every line that does NOT start with your target string. The less you do before figuring out whether you have a good line, the faster your code will run.
Using an anchor at the start of your pattern, like \A, gives the regex engine a major hint about where to look in the line, and allows it to bail out very quickly if the line doesn't match. Also, line[/\A\d+-\d+-\d+/] is a bit more concise.
If your log file is sorted by date, then you can avoid having to search through the entire file by doing a binary search. In this case you'd:
Open the file like you are doing
Use seek to jump to the middle of the file.
Check whether the date at the beginning of the line is higher or lower than the date you are looking for.
Continue splitting the file in halves until you find what you need.
I do however think your file needs to be very large for the above to make sense.
Edit
Here is some code which shows the basic idea. It finds a line containing the search date, not necessarily the first such line. This can be fixed either with more binary searching, or by doing a linear search from the last midpoint that did not contain the date. There also isn't a termination condition for the case where the date is not in the file at all. These small additions are left as an exercise to the reader :-)
require 'date'

def bin_fsearch(search_date, file)
  f = File.open file
  search = { min: 0, max: f.size }
  while true
    # go to the file midpoint
    f.seek((search[:max] + search[:min]) / 2)
    # read to the end of the current (partial) line so we're aligned on a line boundary
    f.gets
    # record the actual mid-point we are using
    pos = f.pos
    # read in the next line
    line = f.gets
    # get the date from that line
    line_date = Date.parse(line)
    if line_date < search_date
      search[:min] = f.pos
    elsif line_date > search_date
      search[:max] = pos
    else
      f.seek pos
      return
    end
  end
end
bin_fsearch(Date.new(2013, 5, 4), '/var/log/system.log')
Try this; it reads one line at a time, should be pretty fast, and uses little memory.
File.open(file, 'r') do |f|
  f.each_line do |line|
    # do stuff here to line
  end
end
Another, faster option is to read the whole file into one array. It would be fast, but will take a LOT of memory.
File.readlines(file).each do |line|
  # do stuff with each line
end
Further, if you need the fastest approach with the least amount of memory, try grep, which is specifically tuned for searching through large files, so it should be fast and memory-friendly.
`grep -e regex bigfile`.split(/\n/).each do |line|
  # ... (called on each matching line) ...
end
Faster than reading line by line is reading the file in chunks:
File.open('file.txt') do |f|
  while buff = f.read(10240)
    # ...
  end
end
But since you are using a regexp to match dates, chunked reads may give you incomplete lines at the chunk boundaries; you will have to deal with that in your logic.
Also, if performance is that important, consider writing a really simple C extension.
If the log file can get huge, and that is your concern, then maybe you should consider saving the errors in a database. Then you will get faster responses.

dealing with large CSV files (20G) in ruby

I am working on a little problem and would like some advice on how to solve it:
Given a CSV file with an unknown number of columns and rows, output a list of columns with values and the number of times each value was repeated, without using any library.
If the file is small this shouldn't be a problem, but when it is a few gigs, I get NoMemoryError: failed to allocate memory. Is there a way to create a hash and read from the disk instead of loading the file into memory? You can do that in Perl with tied hashes.
EDIT: Will IO#foreach load the file into memory? How about File.open(filename).each?
Read the file one line at a time, discarding each line as you go:
open("big.csv") do |csv|
csv.each_line do |line|
values = line.split(",")
# process the values
end
end
Using this method, you should never run out of memory.
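To make "# process the values" concrete for the task in the question, here is one possible sketch of the counting step. It assumes the first line is a header row and that fields contain no quoted commas; the variable names are illustrative:
counts = Hash.new { |h, col| h[col] = Hash.new(0) }

open("big.csv") do |csv|
  headers = csv.gets.chomp.split(",")
  csv.each_line do |line|
    line.chomp.split(",").each_with_index do |value, i|
      counts[headers[i]][value] += 1
    end
  end
end

counts.each do |column, tallies|
  puts column
  tallies.each { |value, n| puts "  #{value}: #{n}" }
end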
Do you read the whole file at once? Reading it on a per-line basis, i.e. using ruby -pe, ruby -ne, or $stdin.each, should reduce the memory usage by garbage collecting lines after they are processed.
data = {}
$stdin.each do |line|
  # Process line, store results in the data hash.
end
Save it as script.rb and pipe the huge CSV file into this script's standard input:
ruby script.rb < data.csv
If you don't feel like reading from the standard input we'll need a small change.
data = {}
File.open("data.csv").each do |line|
  # Process line, store results in the data hash.
end
For future reference, in such cases you want to use CSV.foreach with headers:
CSV.foreach('big_file.csv', headers: true) do |row|
  # process row
end
This will read the file line by line from the IO object with a minimal memory footprint (it should stay below 1MB regardless of file size).
