I am looking for a gem that will split a CSV dataset into smaller training and test datasets for a machine learning system. There is a package in R which will do this, based on random sampling, but my research has not turned up anything in Ruby. The reason I wanted to do this in Ruby is that the original dataset is quite large, e.g. 17 million rows or 5.5 GB. R expects to load the entire dataset into memory; Ruby is far more flexible. Any suggestions would be appreciated.
This will partition your original data into two files without loading it all into memory:
require 'csv'

sample_perc = 0.75

CSV.open('sample.csv', 'w') do |sample_out|
  CSV.open('test.csv', 'w') do |test_out|
    CSV.foreach('alldata.csv') do |row|
      # send each row to the training sample or the test file at random
      (Random.rand < sample_perc ? sample_out : test_out) << row
    end
  end
end
You can use the smarter_csv Ruby gem and set chunk_size to the desired sample size, then save the chunks as Resque jobs, which can be processed in parallel.
https://github.com/tilo/smarter_csv
See the examples on that GitHub page.
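A minimal sketch of that idea; the file name, chunk size, and SampleWorker class are placeholders, not part of the gem:

require 'smarter_csv'

SmarterCSV.process('alldata.csv', chunk_size: 100) do |chunk|
  # chunk is an array of up to 100 row hashes; hand it to a worker,
  # e.g. Resque.enqueue(SampleWorker, chunk)
  p chunk.first
end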
CSV is built into Ruby; you don't need any gem to do this:

require 'csv'

# open ten output files, then deal each input row to one of them at random
csvs = (1..10).map { |i| CSV.open("data#{i}.csv", "w") }

CSV.foreach("data.csv") do |row|
  csvs.sample << row
end

csvs.each(&:close)

CSV.foreach will not load the entire file into memory.
You will probably want to write your own code for this, based around Ruby's bundled csv gem. There are lots of possibilities for how to split the data, and the requirement to do this efficiently over such a large data set is quite specialist, whilst also not requiring that much code.
However, you might have some luck looking through the many sub-features of ai4r.
I've not yet found many mature pre-packaged machine learning algorithms for Ruby (of the kind you might find in R or in Python's scikit-learn): no random forests, GBMs, etc., or if there are, they are difficult to find. There is a Ruby interface to R, and also wrappers for ATLAS. I have tried neither.
I do make use of ruby-fann (neural nets), and the gem narray is your friend for large numerical data sets.
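A tiny illustration of why narray helps at this scale; just a sketch with arbitrary sizes:

require 'narray'

# a million floats held in one packed C buffer rather than a million Ruby objects
a = NArray.float(1_000_000).indgen!  # fills with 0.0, 1.0, 2.0, ...
b = a * 2.0                          # vectorized arithmetic, no Ruby loop
puts a.mean                          # reductions also run in C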
I need to fetch some data, but I'm completely stumped after trying a few things.
I want to access Airlines & Destinations from the Albuquerque International Sunport's wiki page; keep in mind, I'll be going through a prepopulated list of airports with this data.
There are multiple "types" of airlines: Passenger, Cargo; sometimes there are other (sub?)sections, other times there are none.
Articles for multiple airports will be accessed automatically - including some less known airports. This means I need to:
Check if "Airlines & Destinations" section exists
Take all data inside of any table
Scrape it; otherwise do nothing
I've tried using the Ruby wikipedia-client gem; however, the .raw_data method isn't even returning the section data.
Next, I went to Wikipedia's API: unless I am mistaken, it doesn't return section names! This doesn't seem right, but I wasn't able to get it working.
So I suppose that leaves Nokogiri. I can grab and parse the pages fine, but:
How would I go about detecting the presence of the "Airlines & Destinations" section and getting all table data BEFORE the end of the section? I have a suspicion I need some tricky XPath for this.
It seems to be the only viable solution.
Any thoughts welcome. Putting a bounty on this question when I can.
Edit: Perhaps it's better to simply grab a list of all airlines in the world somehow and match them against the HTML? Seems like it could be computationally expensive.
Well, I'm not an expert user of Nokogiri but maybe this can give you some idea.
require 'nokogiri'
require 'open-uri'

page = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# this is the passenger table
page.xpath('//*[@id="mw-content-text"]/div/table[2]/tr').each do |tr|
  p tr.text
  puts "-" * 50
end

# this is the cargo table
page.xpath('//*[@id="mw-content-text"]/div/table[3]/tr').each do |tr|
  p tr.text
  puts "-" * 50
end
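Index-based paths like table[2] break as soon as the page layout shifts. A more robust idea, just a sketch, where the anchor id Airlines_and_destinations is my assumption about how MediaWiki names that section, is to walk from the section heading to the next h2 and collect every table in between:

require 'nokogiri'
require 'open-uri'

page = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# find the section heading by its anchor span, if the section exists at all
heading = page.at_xpath('//h2[span[@id="Airlines_and_destinations"]]')
if heading
  node = heading.next_element
  # walk forward through siblings until the next h2 closes the section
  while node && node.name != 'h2'
    if node.name == 'table'
      node.xpath('.//tr').each { |tr| puts tr.text.strip }
    end
    node = node.next_element
  end
end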
I need to parse a large (4 GB) XML file in Ruby, preferably with Nokogiri. I've seen a lot of code examples using
File.open(path)
but this takes too much time in my case. Is there an option to read the XML node by node, in order to avoid loading the whole file at once? What would be the fastest way to parse such a large file?
Best,
Phil
You can try using Nokogiri::XML::SAX:
The basic way a SAX style parser works is by creating a parser, telling the parser about the events we're interested in, then giving the parser some XML to process. The parser will notify you when it encounters events you said you would like to know about.
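A minimal sketch of such a handler; the element handling and file name are placeholders:

require 'nokogiri'

# SAX handlers subclass Nokogiri::XML::SAX::Document and override the
# callbacks they care about; nothing else is held in memory.
class MyHandler < Nokogiri::XML::SAX::Document
  def start_element(name, attrs = [])
    puts "start: #{name}"
  end

  def characters(text)
    # text content arrives in chunks; accumulate it here if needed
  end
end

parser = Nokogiri::XML::SAX::Parser.new(MyHandler.new)
parser.parse(File.open("huge.xml"))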
I do this kind of work with LibXML http://xml4r.github.io/libxml-ruby/ (require 'xml') and its LibXML::XML::Reader API. It's simpler than SAX and lets you do almost everything. REXML includes a similar API as well, but it's quite buggy. Stream APIs like the one I mention, or SAX, shouldn't have any problem with huge files. I have not tested Nokogiri.
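A sketch of that Reader (pull-parser) style; the element and file names are assumptions:

require 'xml'

# stream through the document and handle each <record> element as it arrives
reader = XML::Reader.file('huge.xml')
while reader.read
  if reader.node_type == XML::Reader::TYPE_ELEMENT && reader.name == 'record'
    puts reader.read_inner_xml
  end
end
reader.close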
You may like to try this out: https://github.com/amolpujari/reading-huge-xml

HugeXML.read xml, elements_lookup do |element|
  # => element { :name, :value, :attributes }
end

I also tried using Ox.
I must create an application using only tools available in Ruby core or stdlib. Do YAML or SQLite come with Ruby? What are some of the other tools available that would allow me to store data to a file? What are their advantages and disadvantages?
Ruby's stdlib is deep. Maybe too deep. I knew sqlite wasn't in there, but I figured something was. Here is what I found...
There are up to 4 different simple databases already in the stdlib:
PStore - Very simple persistent hash. Handles marshaling for you, so you can store trees of Ruby objects. Pure Ruby solution.
SDBM - C-based key/value store. Ruby ships with the entire source, so it should be portable across platforms. Simple string keys and values only.
GDBM - Another string-only key/value store. Uses GNU dbm. It's Enumerable, so it's a little more hash-like. Possibly not very portable.
DBM - Uses the DBM headers available on the platform Ruby was compiled on, so it could be one of several DBM implementations (read: not portable). Yet another string-only key/value store; that makes three dbm variants. Unlike GDBM, though, this one will allow you to store non-string values and silently ruin them by calling #to_s or #inspect.
I might actually use PStore for small things myself now. SQLite is probably better, but PStore is undoubtedly simpler so if the job is small enough it makes sense.
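A minimal PStore sketch; the file name and keys are arbitrary:

require 'pstore'

# all reads and writes happen inside transactions, and arbitrary
# Ruby object trees can be stored under hash-like keys
store = PStore.new('data.pstore')

store.transaction do
  store[:users] = [{ name: 'Alice', scores: [1, 2, 3] }]
  store[:updated_at] = Time.now
end

store.transaction(true) do   # true = read-only transaction
  p store[:users]
end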
You can also use serialization. Marshal will dump actual Ruby objects and their data. YAML can sort of do this as well. Using JSON/YAML/CSV you can finely control the format of the data. All of these can be used with File to write their output to a file.
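For example, a quick Marshal round trip; a sketch with an arbitrary file name:

# dump a Ruby object graph to a file and load it back
data = { config: { retries: 3 }, tags: %w[a b] }
File.binwrite('data.marshal', Marshal.dump(data))
restored = Marshal.load(File.binread('data.marshal'))
p restored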
You can use Ruby's stdlib CSV library to store any database data. Its format is very useful for storing, exporting, and importing DB data. See the documentation on CSV here. As an example, just do:

require 'csv'

# save
CSV.open("file.csv", "wb") do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end

# load
CSV.foreach("file.csv") do |row|
  row # => ["row", "of", "CSV", "data"]
  # ...
end
require 'json'

File.open('local.rbdb', 'w+') do |f|
  f.write JSON.generate(write_target)
end
Build your write_target data in a typical manner, and then use JSON as a storage format.
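Reading it back is the mirror image; a sketch assuming the same file name:

require 'json'

restored = JSON.parse(File.read('local.rbdb'))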
I have a number of data files to process from a data warehouse that have the following format:
:header 1 ...
:header n
# remarks 1 ...
# remarks n
# column header 1
# column header 2
DATA ROWS
(Example: "#### ## ## ##### ######## ####### ###afp## ##e###")
The data is separated by white spaces and has both numbers and other ASCII chars. Some of those pieces of data will be split up and made more meaningful.
All of the data will go into a database, initially an SQLite db for development, and then pushed up to another, more permanent, storage.
These files will actually be pulled in via HTTP from the remote server, and I will have to crawl a bit to get some of them, as they span folders and many files.
I was hoping to get some input on what the best tools and methods may be to accomplish this the "Ruby way", as well as to abstract out some of this. Otherwise, I'll probably tackle it similarly to how I would in Perl, or other such approaches I've taken before.
I was thinking along the lines of using OpenURI to open each URL, then, if the input is HTML, collect links to crawl; otherwise, process the data. I would use String.scan to break the file apart each time into a multi-dimensional array, parsing each component based on the formatting established by the data provider. Upon completion, push the data into the database. Move on to the next input file/URI. Rinse and repeat.
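For instance, roughly what I have in mind for the splitting step; the sample line and pattern are made up for illustration:

line = "1234 56 78 90210 ###afp## ##e###"
fields = line.scan(/\S+/)
# => ["1234", "56", "78", "90210", "###afp##", "##e###"]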
I figure I must be missing some libs that those with more experience would use to clean/quicken this process up dramatically and make the script much more flexible for reuse on other data sets.
Additionally, I will be graphing and visualizing this data, as well as generating reports, so perhaps that should be considered too.
Any input as to a better approach or libs to simplify this?
Your question focuses a lot on "low-level" details: parsing URLs and so on. One key aspect of the "Ruby Way" is "Don't reinvent the wheel." Leverage existing libraries. :)
My recommendation? First, leverage a crawler such as spider or anemone. Second, use Nokogiri for HTML/XML parsing. Third, store the results. I recommend this because you might do different analyses later and you don't want to throw away the hard work of your spidering.
Without knowing too much about your constraints, I would look at storing your results in MongoDB. After thinking about this, I did a quick search and found a nice tutorial: Scraping a blog with Anemone and MongoDB.
I've written probably a bajillion spiders and site analyzers and find that Ruby has some nice tools that should make this an easy process.
OpenURI makes it easy to retrieve pages.
URI.extract makes it easy to find links in pages. From the docs:
Extracts URIs from a string. If block given, iterates through all matched URIs. Returns nil if block given or array with matches.
require "uri"
URI.extract("text here http://foo.example.org/bla and here mailto:test@example.com and here also.")
# => ["http://foo.example.org/bla", "mailto:test@example.com"]
Simple, untested logic to start might look like:

require "open-uri"
require "uri"

urls_to_scan = %w[
  http://www.example.com/page1
  http://www.example.com/page2
]

loop do
  break if urls_to_scan.empty?
  url = urls_to_scan.shift
  html = open(url).read

  # you probably want to do something to make sure the URLs are not
  # pointing outside the site you're walking.
  #
  # Something like:
  #
  #   URI.extract(html).select{ |u| u[%r{^http://www\.example\.com}i] }
  #
  new_urls = URI.extract(html)

  if new_urls.any?
    urls_to_scan += new_urls
  else
    # parse your file as data using the content in html
  end
end
Unless you own the site you're crawling, you want to be kind and gentle: don't run as fast as possible, because it's not your pipe. Pay attention to the site's robots.txt file or risk being banned.
There are true web-crawler gems for Ruby, but the basic task is so simple I never bother with them. If you want to check out other alternatives, visit some of the links to the right for other questions on SO that touch on this subject.
If you need more power or flexibility, the Nokogiri gem makes short work of parsing HTML, allowing you to use CSS accessors to search for tags of interest. There are some pretty powerful gems for making it easy to grab pages such as typhoeus.
Finally, while ActiveRecord, which is recommended in some comments, is nice, finding documentation for using it outside of Rails can be difficult or confusing. I recommend using Sequel. It is a great ORM, very flexible, and well documented.
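A minimal Sequel sketch; the SQLite file, table, and columns are made up for illustration:

require 'sequel'

DB = Sequel.sqlite('scrape.db')

# create the table only if it doesn't exist yet
DB.create_table? :pages do
  primary_key :id
  String :url
  String :body
end

pages = DB[:pages]
pages.insert(url: 'http://www.example.com/page1', body: '...')
pages.where(url: 'http://www.example.com/page1').each { |row| p row }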
Hi, I would start by taking a very close look at the gem called Mechanize before firing up any basic open-uri stuff, because that functionality is built into Mechanize. It's a brilliant, fast, and easy-to-use gem for automating web crawling. Since your data format is pretty strange (at least compared to JSON, XML, or HTML), I don't think you will make any use of the built-in parser, but you could still take a look at it: it's called Nokogiri and is extremely smart as well. But in the end, after crawling and fetching the resources, you will probably have to go with some good old regular-expression stuff.
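A sketch of the crawling part with Mechanize; the URL is a placeholder:

require 'mechanize'

agent = Mechanize.new
page = agent.get('http://www.example.com/data/')

# follow every link on the index page and fetch each resource
page.links.each do |link|
  target = link.click
  puts target.body[0, 200]   # peek at the first 200 bytes of each response
end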
Good luck!
I am trying to get values from the employee table, connecting through the Oracle database. Since there are hundreds of values in one column, I need to iterate over the table and get the exact values.
I have code that works if I use the index number, such as row[1], but I want to use the column name "first name" instead of row[1]. Below is the code that I have, which works.
Code:
def load_borrower
  connection = OCI8.new('usrname', 'pwd', '//host:portno/sid')
  connection.exec("SELECT BI_PREFIX, BI_FNAME, BI_MNAME, BI_LNAME, B.BI_SUFFIX, BI_ID_TYPE, BI_ID_NUMBER, BI_DOB, B1.*, R.*, M.*, C.*, L.* FROM EMPLOYEE, SC_BORROWERPREF_NEW S1, BORROWER_NEW B, BORROWERPREF_NEW B1, RES_ADD R, MAIL_ADD M, CLOS_ADD C, LLORD_ADD L WHERE S2=SCENARIO_ID = S1.SCENARIO_ID AND S1.PREF_ID = B1.PREF_ID AND B1.BORROWER_ID = B.BORROWER_ID AND B1.PREF_ID = R.RES_PREF_ID AND B1.PREF_ID = M.MAIL_PREF_ID AND B1.PREF_ID = C.CLOS_PREF_ID AND B1.PREF_ID = L.LLORD_PREF_ID AND S.RELEASE_ID = '1' AND S.SCENARIO_NO = '2' ORDER BY S1.SC_BORROWERPREF_ID") do |row|
    $BI_PREFIX = row[0].to_s
    $BI_FNAME = row[1].to_s
    $BI_MNAME = row[2].to_s
    $BI_LNAME = row[3].to_s
    $BI_SUFFIX = row[4].to_s
    $BI_BI_ID_TYPE = row[5].to_s
    $BI_BI_ID_NUMBER = row[6].to_s
    $BI_DOB = row[7].to_s
    $BI_EMAIL = row[9].to_s
    $BI_CELL_PH = row[11].to_s
    $BI_WORK_PH = row[12].to_s
    $BI_PREF_CONT = row[13].to_s
    $BI_MAR_STATUS = row[16].to_s
    $BI_EMP_STATUS = row[23].to_s
    $BI_EDUC_YEARS = row[17].to_s
    $BI_NUM_DEPEND = row[21].to_s
  end
end
Now I'm running the above function below:
load_borrower
So the code above works fine. But as you can see, I am reading values from the db table as row[5], row[24], and so on, which is very hectic and time-consuming, although it works. So I was wondering if there is a method or command to use the column name, so that it gets the value from the row by column, such as row['Emp_id'], instead of my finding out the index of every column name.
I am not sure if this is a drawback of Ruby, as it treats the table from the db as an array, and maybe that's why we can't specify a column name.
Firstly, it appears you are a bit confused by the boundaries and separations between the various bits of technology you are using. There is no Watir in the code you provided, NONE. It's all pure Ruby and a tiny bit of stuff from the OCI8 gem. A gem is a standard way that Ruby folks use to distribute code libraries and programs written in the Ruby language. See HERE for more info to better understand what a gem is and how they are used.
Watir is another Ruby gem, for driving web browsers, and you might be using it elsewhere in your code, but it doesn't relate to this question or OCI8, other than both of them being Ruby code libraries distributed as gems. So let's leave it aside so as not to confuse things.
The behavior you are seeing is how the OCI8 gem works, NOT anything to do with Ruby specifically. If you want something more elegant, then look into different gems that have been created for doing db access with Ruby, for example ActiveRecord, which was suggested in another answer already. The OCI8 gem only returns an array if you have the results feeding into a block like you do in your current code. Otherwise the results are in an object called a cursor, and you can use the cursor's fetch_hash method to get fetched data as a hash. The hash keys are column names. (See http://ruby-oci8.rubyforge.org/en/api_OCI8Cursor.html)
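A minimal sketch of that cursor style; the connection details and query are placeholders:

# exec without a block returns an OCI8::Cursor for SELECT statements
cursor = connection.exec("SELECT BI_FNAME, BI_LNAME FROM EMPLOYEE")
while row = cursor.fetch_hash
  puts row['BI_FNAME']   # hash keys are the column names
  puts row['BI_LNAME']
end
cursor.close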
Allow me to strongly recommend that you spend a little time learning a bit more about the Ruby language before you tear much further into your current project. Given the nature of the coding you seem to be doing, I'd advise you to read Brian Marick's book "Everyday Scripting with Ruby"; that's going to give you a much better understanding of the technology you are using, and you'll understand better when we toss around terms like 'hash' as I just did.
If you will allow a bit of general advice on how you are interfacing with your database: IMHO, you should be taking advantage of the db by constructing a query that returns JUST the data you want, instead of grabbing huge amounts of data and trying to parse through it manually. It's better use of the resource, uses less memory, takes less time to transfer the info from the db, and no matter how good your parsing code might be, it won't be as good as what the Oracle people wrote. Let the db do the heavy lifting; that's what it's there for.
If what you are dealing with here is data to drive your testing or validate results, then rather than constructing one huge monolithic array, I'd recommend a much more modular approach. Use one global variable, such as the EMP_ID of the current user you are testing with or against, and have the test code query just the values needed for each validation, or a small logical group of validations like the parts of an address. It's a lot easier to build things up that way on a case-by-case basis as you go, instead of trying to write the whole data-retrieval bit in one giant piece that will be a nightmare to maintain.
As it stands, all your test code that verifies function or validates how the site works is going to be tightly coupled to a big monolithic piece that fetches the data from the db. That creates a lot of dependencies and makes your test code hard to maintain. If you deal with things in a more modular way, where each validation step retrieves just the data it needs, then it's a lot easier to expand or modify your test code as the site or database changes.
If you had an array containing the column names then you could zip it up with the row array and build a hash:
Hash[column_names.zip(row)]
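For instance, a tiny illustration with made-up values:

column_names = ['BI_PREFIX', 'BI_FNAME']
row = ['Mr', 'John']
Hash[column_names.zip(row)]  # => {"BI_PREFIX"=>"Mr", "BI_FNAME"=>"John"}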
I would recommend using ActiveRecord for this, though.
This should work:

connection = OCI8.new('usrname', 'pwd', '//host:portno/sid')
cursor = connection.exec("SELECT BI_PREFIX ...")
cols = cursor.get_col_names

while r = cursor.fetch
  $BI_PREFIX = r[cols.index('BI_PREFIX')].to_s
  # ...
end