Why is Ruby CSV file reading very slow?

I have a fairly large CSV file, with 4 million records of 375 fields each, that needs to be processed.
I'm using the Ruby CSV library to read this file and it is very slow. I thought PHP CSV file processing was slow, but comparing the two reads, PHP is more than 100 times faster. I'm not sure if I'm doing something dumb or if this is just the reality of Ruby not being optimized for this type of batch processing. I set up simple test programs to get comparative times in both Ruby and PHP. All I do is read; no writing, no building of big arrays, and I break out of the CSV read loops after processing 50,000 records. Has anyone else experienced this performance issue?
I'm running locally on a Mac with 4 GB of memory, running OS X 10.6.8 and Ruby 1.8.7.
The Ruby process takes 497 seconds to simply read 50,000 records; the PHP process runs in 4 seconds, which is not a typo, it's more than 100 times faster. FYI, I had code in the loops to print out data values to make sure that each of the processes was actually reading the file and bringing data back.
This is the Ruby Code:
require 'time'
require 'csv'

# pathfile is assumed to already hold the path to the CSV file
x = 0
t1 = Time.new
CSV.foreach(pathfile) do |row|
  x += 1
  break if x > 50000
end
t2 = Time.new
puts "Time to read the file was #{t2 - t1} seconds"
Here is the PHP code:
$t1 = time();
$fpiData = fopen($pathfile, 'r') or die("cannot open input file");
$seqno = 0;
while ($inrec = fgetcsv($fpiData, 0, ',', '"')) {
    if ($seqno > 50000) break;
    $seqno++;
}
fclose($fpiData) or die("cannot close input data file");
$t2 = time();
$t3 = $t2 - $t1;
echo "Start time is $t1 - end time is $t2 - Time to Process was " . $t3 . "\n";

You'll likely get a massive speed boost simply by updating to a current version of Ruby. In version 1.9, FasterCSV was integrated as Ruby's standard CSV library.
Check out chruby to manage your different Ruby versions.
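For a rough comparison on 1.9 or later, the same foreach loop can be timed with the standard Benchmark module (a minimal sketch; 'big.csv' is a placeholder for your file):

require 'csv'
require 'benchmark'

path  = 'big.csv'   # placeholder: point this at your 375-field file
count = 0

elapsed = Benchmark.realtime do
  CSV.foreach(path) do |row|
    count += 1
    break if count >= 50_000
  end
end
puts "Read #{count} rows in #{elapsed.round(2)} seconds"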

Check out the smarter_csv gem, which has special options for handling huge files by reading the data in chunks.
It also returns the CSV data as hashes, which can make it easier to insert or update the data in a database.
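A sketch of that chunked style (the file name and chunk size are illustrative; each chunk arrives as an array of hashes keyed by the header names):

require 'smarter_csv'   # gem install smarter_csv

SmarterCSV.process('big.csv', chunk_size: 1_000) do |chunk|
  # chunk is an array of up to 1,000 hashes, one per row
  chunk.each do |row_hash|
    # insert or update row_hash in your database here
  end
end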

I think that using CSV is a little bit overkill for this.
A long time ago I saw this question, and the reason Ruby is slow here is that it loads the entire CSV file into memory at once. I have seen some people get around this issue by using the IO class directly. For example, take a look at this gist for its self.perform(url) method.
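If the fields never contain quoted commas or embedded newlines, a raw line read with a plain split is one way to sidestep CSV's per-field parsing entirely (a sketch, not a full CSV parser):

count = 0
File.foreach(pathfile) do |line|   # pathfile as in the question's script
  fields = line.chomp.split(',')   # naive split: no quoting rules applied
  count += 1
  break if count >= 50_000
end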

Related

Reading and Writing the same CSV file in Ruby

I have some processing to do involving a third party API, and I was planning to use a CSV file as a backlog of things to do.
Example:

Task to do    Resulting file
#1            data/1.json
#2            data/2.json
#3
So, #1 and #2 are already done. I want to work on #3, and save the CSV file as soon as data/3.json is completed.
As the task is unstable and error prone, I want to save progress after each task in the CSV file.
I've written this script in Ruby and it's working well, but as tasks are numerous (> 100k), it rewrites the whole file, a couple of megabytes, to disk each time a task is processed. That seems like a good way to kill my HD:
class CSVResolver
  require 'csv'
  require 'json'   # needed for JSON.pretty_generate below

  attr_accessor :csv_path

  def initialize(csv_path:)
    self.csv_path = csv_path
  end

  def resolve
    csv = CSV.read(csv_path)
    csv.each_with_index do |row, index|
      next if row[1] # Don't do anything if we've already processed this task and got JSON data
      json = very_expensive_task_and_error_prone
      row[1] = "/data/#{index}.json"
      File.write row[1], JSON.pretty_generate(json)
      csv[index] = row
      # Rewrite the entire CSV file after every single task
      CSV.open(csv_path, "wb") do |out_csv|
        csv.each do |csv_row|
          out_csv << csv_row
        end
      end
      resolve
    end
  end
end
Is there any way to improve on this, like making the write to CSV file atomic?
I'd use an embedded database for this purpose, such as SQLite or LevelDB.
Unlike a regular database, you'll still get many of the benefits of a CSV file, i.e. it can be stored in a single file/folder without any server or permission hassle. At the same time, you'll get better I/O characteristics than reading and writing a monolithic file on each update: the library should be smart enough to index records, minimise changes, and buffer output in memory.
For data persistence you would, in most cases, be best served by selecting a tool designed for the job: a database. You've already named enough of a reason not to use the hand-spun CSV design, as it is memory inefficient and creates more problems than it likely solves. Also, depending on the amount of data you need to process via the 3rd party API, you may need multi-threaded processing, where reading/writing a single file won't work.
You might want to check out https://github.com/jeremyevans/sequel
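As a rough sketch of what the Sequel-plus-SQLite route could look like (the tasks table, its columns, and the backlog.db file name are made up for illustration; very_expensive_task_and_error_prone is the call from the question):

require 'sequel'
require 'json'

DB = Sequel.sqlite('backlog.db')   # a single file on disk, no server needed

DB.create_table? :tasks do
  primary_key :id
  String :payload       # whatever describes the task to do
  String :result_path   # nil until the task has been processed
end

tasks = DB[:tasks]

# Materialize the pending rows first, then update them one by one.
tasks.where(result_path: nil).all.each do |task|
  json = very_expensive_task_and_error_prone
  path = "data/#{task[:id]}.json"
  File.write(path, JSON.pretty_generate(json))
  # Only this row is touched; nothing rewrites the whole dataset.
  tasks.where(id: task[:id]).update(result_path: path)
end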

Speed up PostgreSQL data loading from a text file using a Django Python script

I am working with a server whose configuration is:
RAM: 56 GB
Processor: 2.6 GHz x 16 cores
How can I do parallel processing using the shell? How can I utilize all the cores of the processor?
I have to load data from text files which contain millions of entries; for example, one file contains half a million lines of data.
I am using a Django Python script to load the data into a PostgreSQL database.
But it takes a lot of time to add the data to the database even though I have such a well-configured server, and I don't know how to utilize the server resources in parallel so that it takes less time to process the data.
Yesterday I loaded only 15,000 lines of data from a text file into PostgreSQL and it took nearly 12 hours to do it.
My django python script is as below:
import re
import collections

def SystemType():
    filename = raw_input("Enter file Name:")
    in_file = file(filename, "r")
    out_file = file("SystemType.txt", "w+")
    for line in in_file:
        line = line.decode("unicode_escape")
        line = line.encode("ascii", "ignore")
        values = line.split("\t")
        if values[1]:
            for list in values[1].strip("wordnetyagowikicategory"):
                out_file.write(re.sub("[^\ a-zA-Z()<>\n""]", " ", list))

# Eliminate duplicate entries from extracted data using a regular expression
def FSystemType():
    lines_seen = set()
    outfile = open("Output.txt", "w+")
    infile = open("SystemType.txt", "r+")
    for line in infile:
        if line not in lines_seen:
            l = line.lstrip()
            # The regexp below is used to handle camel case.
            outfile.write(re.sub(r'((?<=[a-z])[A-Z]|(?<!\A)[A-Z](?=[a-z]))', r' \1', l).lower())
            lines_seen.add(line)
    infile.close()
    outfile.close()

sylist = []

def create_system_type(stname):
    syslist = Systemtype.objects.all()
    for i in syslist:
        sylist.append(str(i.title))
    if not stname in sylist:
        slu = slugify(stname)
        st = Systemtype()
        st.title = stname
        st.slug = slu
        # st.sites = Site.objects.all()[0]
        st.save()
        print "one ST added."
If you could express your requirements without the code (not every shell programmer can really read Python), possibly we could help here.
E.g. your report of 12 hours for 15,000 lines suggests you have a too-busy for loop somewhere, and I'd suspect the nested one:
for list in values[1]....
What are you trying to strip? Individual characters, whole words? ...
Then I'd suggest awk.
If you are able to work out the precise data structure required by Django, you can load the database tables directly, using the psql "copy" command. You could do this by preparing a csv file to load into the db.
There are any number of reasons why loading is slow with your approach. First of all, Django has a lot of transactional overhead. Secondly, it is not clear how you are running the Django code -- is it via the internal testing server? If so, you may have to deal with the slowness of that. Finally, what makes a database fast is normally not CPU, but rather fast IO and lots of memory.

How can I increase the performance of watir-webdriver automated scripts

The main problem I'm having is pulling data from tables, but any other general tips would be welcome too. The tables I'm dealing with have roughly 25 columns and varying numbers of rows (anywhere from 5-50).
Currently I am grabbing the table and converting it to an array:
require "watir-webdriver"
b = Watir::Browser.new :chrome
b.goto "http://someurl"
# The following operation takes way too long
table = b.table(:index, 1).to_a
# The rest is fast enough
table.each do |row|
# Code for pulling data from about 15 of the columns goes here
# ...
end
b.close
The operation table = b.table(:index, 1).to_a takes over a minute when the table has 20 rows. It seems like it should be very fast to put the cells of a 20 x 25 table into an array. I need to do this for over 80 tables, so it ends up taking 1-2 hours to run. Why is it taking so long, and how can I improve the speed?
I have tried iterating over the table rows without first converting to an array as well, but there was no improvement in performance:
b.table(:index, 1).rows.each do |row|
  # ...
end
Same results using Windows 7 and Ubuntu. I've also tried Firefox instead of Chrome without a noticeable difference.
A quick workaround would be to use Nokogiri if you're just reading data from a big page:
require 'nokogiri'
doc = Nokogiri::HTML.parse(b.table(:index, 1).html)
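From there the cell text can be pulled out entirely in memory, without any further browser round-trips (a sketch assuming a plain table of td cells):

rows = doc.css('tr').map do |tr|
  tr.css('td').map { |td| td.text.strip }
end
# rows is now an array of arrays of cell strings, built from a single
# .html call to the browser instead of one remote call per cell.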
I'd love to see more detail though. If you can provide a code + HTML example that demonstrates the issue, please file it in the issue tracker.
The #1 thing you can do to improve the performance of a script that uses watir is to reduce the number of remote calls into the browser. Each time you locate or operate on a DOM element, that's a call into the browser and can take 5ms or more.
In your case, you can reduce the number of remote calls by doing the work on the browser side via execute_script() and checking the result on the ruby side.
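For example, all of a table's cell text can be collected in one call and returned to Ruby as a nested array (a sketch; passing the element through .wd and the use of textContent are assumptions, not something from the original answer):

js = <<-JS
  var rows = arguments[0].querySelectorAll('tr');
  return Array.prototype.map.call(rows, function (tr) {
    return Array.prototype.map.call(tr.querySelectorAll('td'), function (td) {
      return td.textContent;
    });
  });
JS

# One remote call for the whole table instead of one per cell.
table_data = b.execute_script(js, b.table(:index, 1).wd)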
When attempting to improve the speed of your code, it's vital to have some means of measuring execution times (e.g. Ruby's benchmark library). You might also like to look at ruby-prof to get a detailed breakdown of the time spent in each method.
I would start by trying to establish whether it's the to_a method rather than the table lookup that's causing the delay on that line of code. Watir's internals (or Nokogiri, as per jarib's answer) may be quicker.
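A minimal sketch of how to separate the two suspects with the standard library's Benchmark (exists? is used here only to force the element lookup):

require 'benchmark'

table = b.table(:index, 1)

Benchmark.bm(8) do |bm|
  bm.report('locate') { table.exists? }   # forces the element lookup
  bm.report('to_a')   { table.to_a }      # the conversion under suspicion
end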

Fastest Way to Parse a Large File in Ruby

I have a simple text file that is ~150 MB. My code reads each line, and if it matches certain regexes, it gets written to an output file.
But right now it just takes a long time (several minutes) to iterate through all of the lines of the file, doing it like this:
File.open(filename).each do |line|
# do some stuff
end
I know that it is the looping through the lines of the file that is taking a while, because even if I do nothing with the data in "# do some stuff", it still takes a long time.
I know that some Unix programs can parse large files like this almost instantly (like grep), so I am wondering why Ruby (MRI 1.9) takes so long to read the file, and whether there is some way to make it faster?
It's not really fair to compare to grep because that is a highly tuned utility that only scans the data, it doesn't store any of it. When you're reading that file using Ruby you end up allocating memory for each line, then releasing it during the garbage collection cycle. grep is a pretty lean and mean regexp processing machine.
You may find that you can achieve the speed you want by using an external program like grep called using system or through the pipe facility:
`grep ABC bigfile`.split(/\n/).each do |line|
# ... (called on each matching line) ...
end
File.readlines(filename).each do |line|
  # do stuff with each line
end
This will read the whole file into one array of lines. It should be a lot faster, but it takes more memory.
You should read it into memory and then parse it. Of course, it depends on what you are looking for. Don't expect miracle performance from Ruby, especially compared to C/C++ programs which have been optimized for the past 30 years ;-)
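A sketch of that approach, slurping the whole ~150 MB file into one string and filtering it line by line (the pattern is a placeholder for the real regexes; filename is the same variable as in the question):

pattern = /ABC/                # placeholder for the real regexes
content = File.read(filename)  # the whole ~150 MB held in memory at once

File.open('output.txt', 'w') do |out|
  content.each_line do |line|
    out.write(line) if line =~ pattern
  end
end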

Increasing the Loading Speed of Large Files

There are two large text files (Millions of lines) that my program uses. These files are parsed and loaded into hashes so that the data can be accessed quickly. The problem I face is that, currently, the parsing and loading is the slowest part of the program. Below is the code where this is done.
database = extractDatabase(@type).chomp("fasta") + "yml"
revDatabase = extractDatabase(@type + "-r").chomp("fasta.reverse") + "yml"
@proteins = Hash.new
@decoyProteins = Hash.new

File.open(database, "r").each_line do |line|
  parts = line.split(": ")
  @proteins[parts[0]] = parts[1]
end

File.open(revDatabase, "r").each_line do |line|
  parts = line.split(": ")
  @decoyProteins[parts[0]] = parts[1]
end
And the files look like the example below. It started off as a YAML file, but the format was modified to increase parsing speed.
MTMDK: P31946 Q14624 Q14624-2 B5BU24 B7ZKJ8 B7Z545 Q4VY19 B2RMS9 B7Z544 Q4VY20
MTMDKSELVQK: P31946 B5BU24 Q4VY19 Q4VY20
....
I've messed around with different ways of setting up the file and parsing them, and so far this is the fastest way, but it's still awfully slow.
Is there a way to improve the speed of this, or is there a whole other approach I can take?
List of things that don't work:
YAML.
Standard Ruby threads.
Forking off processes and then retrieving the hash through a pipe.
In my usage, reading all or part of the file into memory before parsing usually goes faster. If the database sizes are small enough this could be as simple as:
buffer = File.readlines(database)
buffer.each do |line|
  ...
end
If they're too big to fit into memory, it gets more complicated: you either have to set up block reads of the data followed by parsing, or use separate read and parse threads.
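A rough sketch of the block-read variant, using the database path and line format from the question: read fixed-size chunks and carry any partial trailing line over to the next read.

CHUNK_SIZE = 8 * 1024 * 1024   # 8 MB per read; tune to taste
proteins   = {}

File.open(database, 'r') do |f|
  leftover = ''
  while (chunk = f.read(CHUNK_SIZE))
    chunk  = leftover + chunk
    pieces = chunk.split("\n")
    # Carry a trailing partial line (if any) over to the next read.
    leftover = chunk.end_with?("\n") ? '' : pieces.pop
    pieces.each do |line|
      next if line.empty?
      key, value = line.split(': ', 2)
      proteins[key] = value
    end
  end
  unless leftover.empty?
    key, value = leftover.split(': ', 2)
    proteins[key] = value
  end
end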
Why not use the solution devised through decades of experience: a database, say SQLite3?
(To be different: although I'd first recommend looking at (Ruby) BDB and other "NoSQL" backend engines, if they fit your need.)
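A sketch of what that could look like with the sqlite3 gem (the table and column names are made up; database and the MTMDK key come from the question):

require 'sqlite3'

db = SQLite3::Database.new('proteins.db')
db.execute('CREATE TABLE IF NOT EXISTS proteins (peptide TEXT PRIMARY KEY, ids TEXT)')

# One-time load; a single transaction keeps the inserts fast.
db.transaction do
  File.foreach(database) do |line|
    peptide, ids = line.chomp.split(': ', 2)
    db.execute('INSERT OR REPLACE INTO proteins (peptide, ids) VALUES (?, ?)', peptide, ids)
  end
end

# Afterwards: an indexed lookup instead of a giant in-memory hash.
row = db.get_first_row('SELECT ids FROM proteins WHERE peptide = ?', 'MTMDK')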
If fixed-sized records with a deterministic index are used then you can perform a lazy-load of each item through a proxy object. This would be a suitable candidate for a mmap. However, this will not speed up the total access time, but will merely amortize the loading throughout the life-cycle of the program (at least until first use and if some data is never used then you get the benefit of never loading it). Without fixed-sized records or deterministic index values this problem is more complex and starts to look more like a traditional "index" store (eg. a B-tree in an SQL back-end or whatever BDB uses :-).
The general problems with threading here are:
IO will likely be your bottleneck with Ruby's "green" threads
You still need all the data before use
You may be interested in the Widefinder Project, just in general "trying to get faster IO processing".
I don't know too much about Ruby, but I have had to deal with this problem before. I found the best way was to split the file up into chunks or separate files and then spawn threads to read the chunks in parallel. Once the partitioned files are in memory, combining the results should be fast. Here is some information on threads in Ruby:
http://rubylearning.com/satishtalim/ruby_threads.html
Hope that helps.
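A sketch of the split-and-thread idea in Ruby (the chunk file names are made up; note that on MRI the global lock means threads mostly help when the work is IO-bound, as the earlier answer about "green" threads points out):

require 'thread'

chunk_files = Dir.glob('chunks/*.txt')   # pre-split pieces of the big file
results     = Queue.new

threads = chunk_files.map do |path|
  Thread.new do
    partial = {}
    File.foreach(path) do |line|
      key, value = line.chomp.split(': ', 2)
      partial[key] = value
    end
    results << partial
  end
end
threads.each(&:join)

# Merge the per-chunk hashes once all threads have finished.
proteins = {}
proteins.merge!(results.pop) until results.empty?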
