I am writing a program that loads data from four XML files into four different data structures. It has methods like this:
def loadFirst(year)
  File.open("games_#{year}.xml", 'r') do |f|
    doc = REXML::Document.new f
    ...
  end
end

def loadSecond(year)
  File.open("teams_#{year}.xml", 'r') do |f|
    doc = REXML::Document.new f
    ...
  end
end
etc...
I originally just used one thread and loaded one file after another:
def loadData(year)
  time = Time.now
  loadFirst(year)
  loadSecond(year)
  loadThird(year)
  loadFourth(year)
  puts Time.now - time
end
Then I realized that I should be using multiple threads. My expectation was that loading from each file on a separate thread would be very close to four times as fast as doing it all sequentially (I have a MacBook Pro with an i7 processor):
def loadData(year)
  time = Time.now
  t1 = Thread.start { loadFirst(year) }
  t2 = Thread.start { loadSecond(year) }
  t3 = Thread.start { loadThird(year) }
  loadFourth(year)
  t1.join
  t2.join
  t3.join
  puts Time.now - time
end
What I found was that the version using multiple threads is actually slower than the sequential one. How can this be? The difference is around 20 seconds, with each version taking around 2 to 3 minutes.
There are no shared resources between the threads. Each opens a different data file and loads data into a different data structure than the others.
I think (but I'm not sure) the problem is that all the threads are reading from the same disk, so they can't really run simultaneously: each of them ends up waiting on I/O.
A few days ago I had to do something similar (but fetching data over the network), and the difference between the sequential and the threaded versions was huge.
A possible solution would be to read each file's entire contents up front instead of consuming it the way your code does, line by line. If you load the whole content and then process it, the threads should spend far less time waiting on I/O and perform much better.
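For example, a minimal sketch of that change for the first loader (loadFirst and the games_*.xml naming are taken from the question):

require 'rexml/document'

def loadFirst(year)
  # One big read into memory, then parse the string,
  # instead of letting the parser pull from the open file handle.
  xml = File.read("games_#{year}.xml")
  doc = REXML::Document.new(xml)
  # ... walk the document and fill the data structure as before ...
end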
It's impossible to give a conclusive answer to why your parallel version is slower than the sequential one without a lot more information, but one possibility is:
With the sequential program, the disk seeks to the first file, reads it all, then seeks to the second file, reads it all, and so on.
With the parallel program, the disk head keeps moving back and forth trying to service I/O requests from all four threads.
I don't know whether there's a way to measure disk seek time on your system; if there is, you could confirm whether this hypothesis holds.
Related
I have some processing to do involving a third party API, and I was planning to use a CSV file as a backlog of things to do.
Example
Task to do    Resulting file
#1            data/1.json
#2            data/2.json
#3
So, #1 and #2 are already done. I want to work on #3, and save the CSV file as soon as data/3.json is completed.
As the task is unstable and error-prone, I want to save progress after each task in the CSV file.
I've written this script in Ruby and it works well, but since there are a lot of tasks (> 100k), it rewrites the whole file, a couple of megabytes, to disk every time a single task is processed. That seems like a good way to kill my HD:
class CSVResolver
  require 'csv'
  require 'json'

  attr_accessor :csv_path

  def initialize csv_path:
    self.csv_path = csv_path
  end

  def resolve
    csv = CSV.read(csv_path)
    csv.each_with_index do |row, index|
      next if row[1] # Skip tasks we've already processed and have JSON data for

      json = very_expensive_task_and_error_prone
      row[1] = "/data/#{index}.json"
      File.write row[1], JSON.pretty_generate(json)
      csv[index] = row

      # Rewrite the entire CSV file after every single task
      CSV.open(csv_path, "wb") do |old_csv|
        csv.each do |row|
          old_csv << row
        end
      end
      resolve
    end
  end
end
Is there any way to improve on this, like making the write to CSV file atomic?
I'd use an embedded database for this purpose, such as SQLite or LevelDB.
Unlike a regular database, you'll still get many of the benefits of a CSV file, i.e. it can be stored in a single file/folder without any server or permissioning hassle. At the same time, you'll get better I/O characteristics than reading and writing a monolithic file on every update: the library should be smart enough to index records, minimise changes, and buffer output in memory.
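A rough sketch of what that could look like with the sqlite3 gem (the tasks table is made up for illustration; very_expensive_task_and_error_prone is from the question):

require 'sqlite3'
require 'json'

db = SQLite3::Database.new('tasks.db')
db.execute('CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, result_path TEXT)')

# Pending tasks are the rows whose result_path is still NULL.
pending = db.execute('SELECT id FROM tasks WHERE result_path IS NULL').flatten

pending.each do |id|
  json = very_expensive_task_and_error_prone
  path = "data/#{id}.json"
  File.write(path, JSON.pretty_generate(json))
  # One small UPDATE per task instead of rewriting the whole file;
  # SQLite makes it durable when the statement commits.
  db.execute('UPDATE tasks SET result_path = ? WHERE id = ?', [path, id])
end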
For data persistence you would, in most cases, be best served by selecting a tool designed for the job: a database. You've already named reason enough not to use the hand-spun CSV design; it is memory-inefficient and creates more problems than it solves. Also, depending on the amount of data you need to process via the third-party API, you may want multi-threaded processing, where reading and writing a single file won't work.
You might want to check out https://github.com/jeremyevans/sequel
I'll try to expand on the title of my question. I'm working on a Ruby project where I have to process a large amount of data (around 120,000 rows) stored in CSV files. I have to read this data, process it, and put it in a database. Right now it takes a couple of days, and I need to make it much faster. The problem is that sometimes I get anomalies during processing and have to repeat the whole import, so I've decided that improving performance is more important than hunting for the bug with a small amount of data. For now I'm sticking with CSV files. I've decided to benchmark the processing script to find the bottlenecks and improve loading data from CSV. I see the following steps:
Benchmark and fix the most problematic bottlenecks
Maybe split CSV loading from processing. For example, create a separate staging table and load the data there; in the next step, read that data, process it, and put it in the right table.
Introduce threads to load data from CSV
For now I use the standard Ruby CSV library. Can you recommend a better gem?
If any of you are familiar with a similar problem, I'd be happy to hear your opinion.
Edit:
Database: PostgreSQL
System: Linux
I haven't had the opportunity to test it myself, but I recently came across this article, which seems to do the job:
https://infinum.co/the-capsized-eight/articles/how-to-efficiently-process-large-excel-files-using-ruby
You'll have to adapt it to use CSV instead of XLSX (a sketch of that adaptation follows the code below).
For future reference, in case the site ever goes away, here's the code.
It works by writing BATCH_IMPORT_SIZE records to the database at a time, which should give a huge improvement.
require 'creek' # gem for streaming XLSX files

class ExcelDataParser
  BATCH_IMPORT_SIZE = 1000

  def initialize(file_path)
    @file_path = file_path
    @records = []
    @counter = 0
  end

  def call
    rows.each do |row|
      increment_counter
      records << build_new_record(row)
      import_records if reached_batch_import_size? || reached_end_of_file?
    end
  end

  private

  attr_reader :file_path, :records
  attr_accessor :counter

  def book
    @book ||= Creek::Book.new(file_path)
  end

  # in this example, we assume that the
  # content is in the first Excel sheet
  def rows
    @rows ||= book.sheets.first.rows
  end

  def increment_counter
    self.counter += 1
  end

  def row_count
    @row_count ||= rows.count
  end

  def build_new_record(row)
    # only build a new record without saving it
    RecordModel.new(...)
  end

  def import_records
    # save multiple records using activerecord-import gem
    RecordModel.import(records)
    # clear records array
    records.clear
  end

  def reached_batch_import_size?
    (counter % BATCH_IMPORT_SIZE).zero?
  end

  def reached_end_of_file?
    counter == row_count
  end
end
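To adapt this to CSV, roughly only the row source needs to change. An untested sketch of the two methods that would differ, keeping the rest of the class as above:

require 'csv'

# CSV replacements for the two Excel-specific methods above (sketch).
def rows
  # CSV.foreach returns an Enumerator that streams the file row by row;
  # headers: true yields CSV::Row objects keyed by column name.
  @rows ||= CSV.foreach(file_path, headers: true)
end

def row_count
  # Counting needs a second pass over the file.
  @row_count ||= CSV.foreach(file_path, headers: true).count
end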
Make sure your Ruby script can process multiple files given as parameters, i.e. that you can run it like this:
script.rb abc.csv def.csv xyz.csv
Then parallelise it using GNU Parallel like this to keep all your CPU cores busy:
find . -name \*.csv -print0 | parallel -0 -X script.rb
The -X passes as many CSV files as possible to your Ruby job without exceeding the maximum command-line length. You can add -j 8 after parallel if you want GNU Parallel to run, say, 8 jobs at a time, and you can use --eta to get the estimated finish time:
find . -name \*.csv -print0 | parallel -0 -X -j 8 --eta script.rb
By default, GNU Parallel will run as many jobs in parallel as you have CPU cores.
There are a few ways of going about this. I would personally recommend SmarterCSV, which makes it much faster and easier to process CSVs as arrays of hashes. You should definitely split up the work if possible, perhaps by building a queue of files to process and doing it in batches with Redis. A rough sketch of the SmarterCSV part is below.
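In this sketch the chunk size, the file name, and the model are placeholders; RecordModel.import is the activerecord-import call used in the code above:

require 'smarter_csv'

# Stream the CSV in chunks of 1,000 rows; each chunk arrives as an
# array of hashes keyed by the (symbolized) header names.
SmarterCSV.process('data.csv', chunk_size: 1_000) do |chunk|
  # e.g. bulk-insert the whole chunk with activerecord-import,
  # or push it onto a Redis-backed queue for worker processes.
  RecordModel.import(chunk.map { |attrs| RecordModel.new(attrs) })
end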
I'm working with 6 csv files that each contain the attributes of an object. I can read them one at a time, but the idea of splitting each one to a thread to do in parallel is very appealing.
I've created a database object (no relational DBs or ORMs allowed) that has an array for each of the objects it is holding. I've tried the following to make each CSV open and initialize concurrently, but have seen no impact on speed.
threads = []
CLASS_FILES.each do |klass, filename|
  threads << Thread.new do
    file_to_objects(klass, filename)
  end
end
threads.each { |thread| thread.join }
update
end
def self.load(filename)
  CSV.open("data/#{filename}", CSV_OPTIONS)
end

def self.file_to_objects(klass, filename)
  file = load(filename)
  method_name = filename.sub("s.csv", "")
  file.each do |line|
    instance = klass.new(line.to_hash)
    Database.instance.send("#{method_name}") << instance
  end
end
How can I speed things up in ruby (MRI 1.9.3)? Is this a good case for Rubinius?
Even though Ruby 1.9.3 uses native threads to implement concurrency, it has a global interpreter lock that ensures only one thread executes Ruby code at a time.
Therefore, nothing really runs in parallel in C Ruby. I know that JRuby imposes no internal lock on any thread, so try running your code on it if possible.
This answer by Jörg W Mittag has a more in-depth look at the threading models of the several Ruby implementations. It isn't clear to me whether Rubinius is fit for the job, but I'd give it a try.
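A quick way to see the effect for yourself (a rough sketch): time a CPU-bound loop run sequentially and then split across threads. Under MRI the threaded version won't be meaningfully faster; under JRuby it usually is.

require 'benchmark'

# Pure CPU-bound busywork, no I/O at all.
def spin(n)
  n.times { Math.sqrt(rand) }
end

N = 10_000_000

Benchmark.bm(12) do |x|
  x.report('sequential:') { 4.times { spin(N / 4) } }
  x.report('threaded:') do
    4.times.map { Thread.new { spin(N / 4) } }.each(&:join)
  end
end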
The main problem I'm having is pulling data from tables, but any other general tips would be welcome too. The tables I'm dealing with have roughly 25 columns and varying numbers of rows (anywhere from 5-50).
Currently I am grabbing the table and converting it to an array:
require "watir-webdriver"
b = Watir::Browser.new :chrome
b.goto "http://someurl"
# The following operation takes way too long
table = b.table(:index, 1).to_a
# The rest is fast enough
table.each do |row|
# Code for pulling data from about 15 of the columns goes here
# ...
end
b.close
The operation table = b.table(:index, 1).to_a takes over a minute when the table has 20 rows. It seems like it should be very fast to put the cells of a 20 x 25 table into an array. I need to do this for over 80 tables, so it ends up taking 1-2 hours to run. Why does it take so long, and how can I improve the speed?
I have tried iterating over the table rows without first converting to an array as well, but there was no improvement in performance:
b.table(:index, 1).rows.each do |row|
# ...
Same results using Windows 7 and Ubuntu. I've also tried Firefox instead of Chrome without a noticeable difference.
A quick workaround would be to use Nokogiri if you're just reading data from a big page:
require 'nokogiri'
doc = Nokogiri::HTML.parse(b.table(:index, 1).html)
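From there the rows can be pulled apart entirely in memory, with no further calls into the browser; something like this (a sketch):

# Every row as an array of cell strings, extracted from the parsed HTML.
data = doc.css('tr').map { |tr| tr.css('th, td').map { |cell| cell.text.strip } }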
I'd love to see more detail though. If you can provide a code + HTML example that demonstrates the issue, please file it in the issue tracker.
The #1 thing you can do to improve the performance of a script that uses watir is to reduce the number of remote calls into the browser. Each time you locate or operate on a DOM element, that's a call into the browser and can take 5ms or more.
In your case, you can reduce the number of remote calls by doing the work on the browser side via execute_script() and checking the result on the Ruby side, for example:
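A sketch of pulling the whole table in a single round-trip (the JavaScript assumes the same :index => 1 table as in the question):

# One remote call: the browser walks the table and returns a nested array
# of cell texts, which comes back to Ruby as an Array of Arrays of Strings.
js = <<-JS
  var table = document.getElementsByTagName('table')[1];
  var data = [];
  for (var i = 0; i < table.rows.length; i++) {
    var row = [];
    for (var j = 0; j < table.rows[i].cells.length; j++) {
      row.push(table.rows[i].cells[j].textContent);
    }
    data.push(row);
  }
  return data;
JS
table = b.execute_script(js)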
When attempting to improve the speed of your code it's vital to have some means of measuring execution times (e.g. Ruby's benchmark library). You might also like to look at ruby-prof to get a detailed breakdown of the time spent in each method.
I would start by trying to establish whether it's the to_a call, rather than locating the table, that's causing the delay on that line of code. Watir's internals (or Nokogiri, as per jarib's answer) may be quicker. For instance:
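This sketch times the three pieces separately (locating the element, grabbing its markup in one call, and the per-cell to_a conversion):

require 'benchmark'

t = b.table(:index, 1)
Benchmark.bm(8) do |x|
  x.report('locate:') { t.exists? } # just finding the element
  x.report('html:')   { t.html }    # one remote call returning the markup
  x.report('to_a:')   { t.to_a }    # per-cell remote calls
end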
There are two large text files (Millions of lines) that my program uses. These files are parsed and loaded into hashes so that the data can be accessed quickly. The problem I face is that, currently, the parsing and loading is the slowest part of the program. Below is the code where this is done.
database = extractDatabase(@type).chomp("fasta") + "yml"
revDatabase = extractDatabase(@type + "-r").chomp("fasta.reverse") + "yml"

@proteins = Hash.new
@decoyProteins = Hash.new

File.open(database, "r").each_line do |line|
  parts = line.split(": ")
  @proteins[parts[0]] = parts[1]
end

File.open(revDatabase, "r").each_line do |line|
  parts = line.split(": ")
  @decoyProteins[parts[0]] = parts[1]
end
And the files look like the example below. It started off as a YAML file, but the format was modified to increase parsing speed.
MTMDK: P31946 Q14624 Q14624-2 B5BU24 B7ZKJ8 B7Z545 Q4VY19 B2RMS9 B7Z544 Q4VY20
MTMDKSELVQK: P31946 B5BU24 Q4VY19 Q4VY20
....
I've messed around with different ways of setting up the file and parsing them, and so far this is the fastest way, but it's still awfully slow.
Is there a way to improve the speed of this, or is there a whole other approach I can take?
List of things that don't work:
YAML.
Standard Ruby threads.
Forking off processes and then retrieving the hash through a pipe.
In my experience, reading all or part of the file into memory before parsing usually goes faster. If the database files are small enough, this could be as simple as:
buffer = File.readlines(database)
buffer.each do |line|
  ...
end
If they're too big to fit into memory, it gets more complicated: you have to set up block reads of the data followed by parsing, or use separate read and parse threads, as sketched below.
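A sketch of the read-thread/parse-thread variant using a bounded queue (database and the line format are from the question; on MRI this overlaps I/O with parsing rather than running the Ruby code in parallel):

require 'thread' # Queue / SizedQueue on Ruby 1.9

queue    = SizedQueue.new(10_000) # bounded so the reader can't run far ahead
proteins = {}

reader = Thread.new do
  File.foreach(database) { |line| queue << line }
  queue << :done
end

parser = Thread.new do
  while (line = queue.pop) != :done
    parts = line.split(": ")
    proteins[parts[0]] = parts[1]
  end
end

[reader, parser].each(&:join)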
Why not use the solution devised through decades of experience: a database, say SQLite3?
(To be different: although I'd first recommend looking at (Ruby) BDB and other "NoSQL" backend engines, if they fit your needs.)
If fixed-size records with a deterministic index are used, then you can perform a lazy load of each item through a proxy object; this would be a suitable candidate for mmap. However, this will not speed up the total access time; it merely amortizes the loading over the life-cycle of the program (at least until first use, and if some data is never used you get the benefit of never loading it). Without fixed-size records or deterministic index values the problem is more complex and starts to look more like a traditional "index" store (e.g. a B-tree in an SQL back-end, or whatever BDB uses :-).
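A minimal sketch of the fixed-size-record idea (the record width and file layout are assumptions for illustration):

# Each record occupies exactly RECORD_SIZE bytes at offset index * RECORD_SIZE,
# so any record can be fetched on demand without loading the whole file.
class LazyStore
  RECORD_SIZE = 128 # assumed fixed record width, padded

  def initialize(path)
    @file = File.open(path, 'rb')
  end

  def [](index)
    @file.seek(index * RECORD_SIZE)
    @file.read(RECORD_SIZE).to_s.rstrip
  end
end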
The general problems with threading here are:
With Ruby's "green" threads, the I/O will likely be your bottleneck
You still need all the data before use
You may be interested in the Widefinder Project, just in general "trying to get faster IO processing".
I don't know too much about Ruby, but I have had to deal with this problem before. I found the best way was to split the file up into chunks or separate files and then spawn threads to read each chunk in parallel (a rough sketch follows the link below). Once the partitioned files are in memory, combining the results should be fast. Here is some information on threads in Ruby:
http://rubylearning.com/satishtalim/ruby_threads.html
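In Ruby, that approach might look roughly like this (the chunk file names are assumed; on MRI the gain comes mostly from overlapping I/O):

# Parse each pre-split file in its own thread into its own hash, then merge;
# no shared state while parsing, so no locking is needed.
chunks = Dir['database_chunk_*.txt'] # assumed naming of the split files

hashes = chunks.map do |path|
  Thread.new do
    h = {}
    File.foreach(path) do |line|
      parts = line.split(": ")
      h[parts[0]] = parts[1]
    end
    h
  end
end.map(&:value) # Thread#value joins and returns the block's result

proteins = hashes.reduce({}) { |merged, h| merged.merge!(h) }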
Hope that helps.