Accessing and scraping sporadically available Wikipedia sections - ruby

I need to fetch some data but I'm completely stumped after trying a few things.
I want to access the Airlines & Destinations section of the Albuquerque_International_Sunport wiki page; keep in mind, I'll be going through a prepopulated list of airports for this data.
There are multiple "types" of airlines: Passenger, Cargo; sometimes there are other (sub)sections, and other times there are none.
Articles for multiple airports will be accessed automatically - including some less known airports. This means I need to:
Check whether the "Airlines & Destinations" section exists
If it does, grab all the data inside its tables and scrape it
Otherwise, do nothing
I've tried using the Ruby wikipedia-client gem; however, the .raw_data method isn't even returning the section data.
Next, I went to Wikipedia's API: unless I'm mistaken, it doesn't return "section" names! That doesn't seem right, but I wasn't able to get it working.
So I suppose that leaves Nokogiri. I can grab and parse the pages fine, but:
How would I go about detecting the presence of the "Airlines & Destinations" section and getting all the table data before the end of that section? I suspect I need some tricky XPath for this.
It seems to be the only viable solution.
Any thoughts welcome. Putting a bounty on this question when I can.
Edit: Perhaps it's better to simply grab a list of all airlines in the world somehow and match them against the HTML? That seems like it could be computationally expensive.

Well, I'm not an expert user of Nokogiri, but maybe this can give you some idea.
require 'nokogiri'
require 'open-uri'

page = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# this is the passenger table (note @id, not #id, in the XPath)
page.xpath('//*[@id="mw-content-text"]/div/table[2]/tr').each do |tr|
  puts tr.text
  puts "-" * 50
end

# this is the cargo table
page.xpath('//*[@id="mw-content-text"]/div/table[3]/tr').each do |tr|
  puts tr.text
  puts "-" * 50
end
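The table indexes will shift from page to page, though, since these sections come and go. A more robust variant is to anchor on the section heading itself and walk forward until the next heading. Here is a rough sketch; the heading id "Airlines_and_destinations" is an assumption based on how MediaWiki generates heading anchors, so inspect a real page to confirm it:

require 'nokogiri'
require 'open-uri'

page = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# MediaWiki renders section titles as <span class="mw-headline" id="...">
# inside an <h2>; verify the exact id against the live page.
heading = page.at_xpath('//span[@class="mw-headline" and @id="Airlines_and_destinations"]')

if heading
  # Walk the h2's siblings, keep everything up to the next h2,
  # then pick out the tables in that slice.
  section = heading.parent.xpath('following-sibling::*').take_while { |n| n.name != 'h2' }
  section.select { |n| n.name == 'table' }.each do |table|
    table.xpath('.//tr').each { |tr| puts tr.text.strip }
  end
end
# If the heading is missing, nothing happens, which matches the requirement.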

Related

Getting HTML table values using XPath in Nokogiri?

I'm trying to get some values from a table using the XPath of this table but it only returns [] (empty):
require 'nokogiri'
require 'open-uri'
url = "http://riopretrans.com.br/linhas.php?ln=106"
doc = Nokogiri::HTML(open(url))
doc.xpath("html/body/table[1]/tbody/tr[2]/td/table/tbody/tr/td/table/tbody/tr[2]/td/div/table[1]/tbody/tr[3]/td/div/div/center/font/table").each do |lines|
  puts lines.content
end
I found the table's XPath using Firebug so I think it's correct.
Can anyone help me?
Remove tbody/ from your XPath.
The tbody tag is part of the HTML spec for tables, but it's rarely actually written in the markup. Browsers insert it into the DOM even though it's not in the page's HTML; Firebug inspects the DOM, so it shows you a tbody, and you think it must be in the source.
Even using "view source" can confuse you, because you expect it to be accurate, but the browser may have already munged the content to include tbody, so, well, basically it's lying to you.
You can confirm this by looking at the HTML that Nokogiri is getting. Use puts doc.to_html['tbody'] and see if you get "tbody" or nil.
...because in the HTML file all of them were specified (written by the programmer)
If you are positive they actually belong there, because they exist in the HTML source, then you'll need to take apart your XPath. Start with a broad path, and slowly add to it to narrow down your search.
The server is unreachable for me right now, so I can't confirm that, or dig into what the hierarchy should be, and show an example. (That's why actually giving us REAL HTML in your question is SO much better than a link which might not work.)
An alternative is to use XPath's // (search anywhere) with a less restrictive path, or CSS selectors. Either way, actually examine the HTML instead of relying on Firebug's XPath, and determine what "landmarks" you can use in the source to navigate to your desired table. Today's HTML is chock-full of id and class attributes, or a particular series of tags that act as a fingerprint for the table you want. Search for the minimum needed to pinpoint that table.
If the table is something like <table id="foo">, then use doc.at('table#foo'). If it's in a <div class="bar"><table>, use doc.at('div.bar table'). In any case, use the smallest accessor necessary to get the job done. That will increase your chances of success if anything in the HTML changes in the future.
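For instance, a tiny runnable illustration of those two selectors, using stand-in markup:

require 'nokogiri'

# Stand-in markup; substitute the real page you are scraping.
html = '<div class="bar"><table id="foo"><tr><td>cell</td></tr></table></div>'
doc = Nokogiri::HTML(html)

p doc.at('table#foo').name      # found by id => "table"
p doc.at('div.bar table').name  # found by landmark class => "table"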

Wiki quotes API?

I want to get a structured version of a Wikiquote page via JSON (basically I need all the phrases).
Example: http://en.wikiquote.org/wiki/Fight_Club_(film)
I tried with: http://en.wikiquote.org/w/api.php?format=xml&action=parse&page=Fight_Club_(film)&prop=text
but I get all the HTML source code. I need each phrase as an element of an Array.
How could I achieve that with DBpedia?
For one thing, I'm not sure whether you can query Wikiquote using DBpedia, and secondly, DBpedia only gives you infobox data in a structured way; it does not give you the article content in any structured form. Instead, with a little bit of trouble, you can use the MediaWiki API to get the data.
EDIT
The URI you are trying gives you the text, so this makes things easier, but not completely.
Try this piece of code in your console:
require 'json'
require 'open-uri'
require 'nokogiri'

content = JSON.parse(open("http://en.wikiquote.org/w/api.php?format=json&action=parse&page=Fight_Club_%28film%29&prop=text").read)
data = content['parse']['text']['*']
xpath_data = Nokogiri::HTML(data)
xpath_data.xpath("//ul/li").map { |data_node| data_node.text }
This is the closest I have come to an answer. Of course it's not completely right, because you will get a lot of unnecessary data. But if you dig into Nokogiri and XPath and work out how to pinpoint the nodes you need, you can get a solution that will give you correct quotes at least 90% of the time.
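For example, one possible refinement, continuing from the code above; it assumes the quotes sit in top-level <ul> lists and that attributions live in nested lists, which you would need to verify against the actual markup:

quotes = xpath_data.xpath('//ul/li')
                   .map { |li| li.xpath('text()').text.strip } # direct text only, skips nested <ul>
                   .reject(&:empty?)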
Just change the format to JSON. Look up the Wikipedia API for more details.
http://en.wikiquote.org/w/api.php?format=json&action=parse&page=Fight_Club_(film)&prop=text

How can I scrape, parse and crawl files in Ruby?

I have a number of data files to process from a data warehouse that have the following format:
:header 1 ...
:header n
# remarks 1 ...
# remarks n
# column header 1
# column header 2
DATA ROWS
(Example: "#### ## ## ##### ######## ####### ###afp## ##e###")
The data is separated by white spaces and has both numbers and other ASCII chars. Some of those pieces of data will be split up and made more meaningful.
All of the data will go into a database, initially an SQLite db for development, and then pushed up to another, more permanent, storage.
These files will actually be pulled in via HTTP from the remote server, and I will have to crawl a bit to get some of them, as they span folders and many files.
I was hoping to get some input on the best tools and methods to accomplish this the "Ruby way", as well as how to abstract some of it out. Otherwise, I'll probably tackle it similar to how I would in Perl, or other such approaches I've taken before.
I was thinking along the lines of using OpenURI to open each URL, then, if the input is HTML, collect links to crawl; otherwise process the data. I would use String.scan to break apart the file appropriately each time into a multi-dimensional array, parsing each component based on the established formatting by the data provider. Upon completion, push the data into the database. Move on to the next input file/URI. Rinse and repeat.
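In sketch form, the per-file parsing step of that plan might look something like this (the file name is a placeholder, and the dispatch on leading : and # characters follows the format described above):

headers, remarks, rows = [], [], []

# "datafile.txt" stands in for whatever was fetched over HTTP.
File.foreach("datafile.txt") do |line|
  case line
  when /\A:/ then headers << line.chomp # :header lines
  when /\A#/ then remarks << line.chomp # remark and column-header lines
  else rows << line.split               # whitespace-separated data row
  end
end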
I figure I must be missing some libs that those with more experience would use to clean up and speed up this process dramatically, and to make the script much more flexible for reuse on other data sets.
Additionally, I will be graphing and visualizing this data as well as generating reports, so perhaps that should be considered too.
Any input on a better approach or libs to simplify this?
Your question focuses a lot on "low level" details -- parsing URLs and so on. One key aspect of the "Ruby Way" is "Don't reinvent the wheel." Leverage existing libraries. :)
My recommendation? First, leverage a crawler such as spider or anemone. Second, use Nokogiri for HTML/XML parsing. Third, store the results. I recommend this because you might do different analyses later and you don't want to throw away the hard work of your spidering.
Without knowing too much about your constraints, I would look at storing your results in MongoDB. After thinking about this, I did a quick search and found a nice tutorial, Scraping a blog with Anemone and MongoDB.
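To give a flavor of it, a minimal Anemone skeleton; the URL is a placeholder, and the storage call is left as a comment since that depends on your setup:

require 'anemone'

Anemone.crawl("http://www.example.com/") do |anemone|
  anemone.on_every_page do |page|
    # page.doc is already a Nokogiri document; persist what you need here.
    puts page.url
  end
end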
I've written probably a bajillion spiders and site analyzers and find that Ruby has some nice tools that should make this an easy process.
OpenURI makes it easy to retrieve pages.
URI.extract makes it easy to find links in pages. From the docs:
Description
Extracts URIs from a string. If block given, iterates through all matched URIs. Returns nil if block given or array with matches.
require "uri"
URI.extract("text here http://foo.example.org/bla and here mailto:test@example.com and here also.")
# => ["http://foo.example.org/bla", "mailto:test@example.com"]
Simple, untested, logic to start might look like:
require "openuri"
require "uri"
urls_to_scan = %w[
http://www.example.com/page1
http://www.example.com/page2
]
loop do
break if urls_to_scan.empty?
url = urls_to_scan.shift
html = open(url).read
# you probably want to do something to make sure the URLs are not
# pointing outside the site you're walking.
#
# Something like:
#
# URI.extract(html).select{ |u| u[%r{^http://www\.example\.com}i] }
#
new_urls = URI.extract(html)
if (new_urls.any?)
urls_to_scan += new_urls
else
; # parse your file as data using the content in html
end
end
Unless you own the site you're crawling, you want to be kind and gentle: don't run as fast as possible, because it's not your pipe. Pay attention to the site's robots.txt file or risk being banned.
There are true web-crawler gems for Ruby, but the basic task is so simple I never bother with them. If you want to check out other alternatives, visit some of the links to the right for other questions on SO that touch on this subject.
If you need more power or flexibility, the Nokogiri gem makes short work of parsing HTML, allowing you to use CSS accessors to search for tags of interest. There are some pretty powerful gems for making it easy to grab pages such as typhoeus.
Finally, while ActiveRecord, which is recommended in some comments, is nice, finding documentation for using it outside of Rails can be difficult or confusing. I recommend using Sequel. It is a great ORM, very flexible, and well documented.
Hi, I would start by taking a very close look at the gem called Mechanize before firing up any basic open-uri stuff, because that functionality is built into Mechanize. It's a brilliant, fast, and easy-to-use gem for automating web crawling. Since your data format is pretty strange (at least compared to JSON, XML, or HTML) I don't think you will make any use of the built-in parser, but you could still take a look at it: it's called Nokogiri and is extremely smart as well. In the end, though, after crawling and fetching the resources, you will probably have to go with some good old regular-expression stuff.
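As a starting point, here's a minimal Mechanize sketch; the URL and the .txt link filter are made-up examples:

require 'mechanize'

agent = Mechanize.new
agent.history.max_size = 10 # keep memory bounded on long crawls

page = agent.get('http://www.example.com/data/') # placeholder listing page
page.links_with(href: /\.txt\z/).each do |link|  # hypothetical file filter
  file = agent.get(link.href)
  puts file.body.lines.first                     # do the real parsing here
end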
Good luck!

calling the column_name in the table while connecting via Oracle DB

I am trying to get a value from a table (employee) by connecting through the Oracle database. Since there are hundreds of values in one column, I need to iterate over the table to get the exact value.
I have code that works if I use an index number such as row[1], but I want to use the column name "first name" instead of row[1]. Below is the code I have, which works.
Code:
def load_borrower
  connection = OCI8.new('usrname', 'pwd', '//host:portno/sid')
  connection.exec("SELECT BI_PREFIX, BI_FNAME, BI_MNAME, BI_LNAME, B.BI_SUFFIX, BI_ID_TYPE, BI_ID_NUMBER, BI_DOB, B1.*, R.*, M.*, C.*, L.* FROM EMPLOYEE, SC_BORROWERPREF_NEW S1, BORROWER_NEW B, BORROWERPREF_NEW B1, RES_ADD R, MAIL_ADD M, CLOS_ADD C, LLORD_ADD L WHERE S2=SCENARIO_ID = S1.SCENARIO_ID AND S1.PREF_ID = B1.PREF_ID AND B1.BORROWER_ID = B.BORROWER_ID AND B1.PREF_ID = R.RES_PREF_ID AND B1.PREF_ID = M.MAIL_PREF_ID AND B1.PREF_ID = C.CLOS_PREF_ID AND B1.PREF_ID = L.LLORD_PREF_ID AND S.RELEASE_ID = '1' AND S.SCENARIO_NO = '2' ORDER BY S1.SC_BORROWERPREF_ID") do |row|
    $BI_PREFIX = row[0].to_s
    $BI_FNAME = row[1].to_s
    $BI_MNAME = row[2].to_s
    $BI_LNAME = row[3].to_s
    $BI_SUFFIX = row[4].to_s
    $BI_BI_ID_TYPE = row[5].to_s
    $BI_BI_ID_NUMBER = row[6].to_s
    $BI_DOB = row[7].to_s
    $BI_EMAIL = row[9].to_s
    $BI_CELL_PH = row[11].to_s
    $BI_WORK_PH = row[12].to_s
    $BI_PREF_CONT = row[13].to_s
    $BI_MAR_STATUS = row[16].to_s
    $BI_EMP_STATUS = row[23].to_s
    $BI_EDUC_YEARS = row[17].to_s
    $BI_NUM_DEPEND = row[21].to_s
  end
end
end
Now I'm running the above function:
load_borrower
The code above works fine right now. But as you can see, I am pulling values from the db table as row[5], row[24], and so on, which is very hectic and time-consuming, although it works. So I was wondering whether there is a method or command to use the column name to get the value by row and column, such as row['Emp_id'], instead of having to find out the index of every column name.
I am not sure if this is a drawback of Ruby, as it treats the table from the db as an array, and maybe that's why we can't specify the column name.
Firstly, it appears you are a bit confused by the boundaries and separations between the various bits of technology you are using. There is no Watir in the code you provided, NONE; it's all pure Ruby and a tiny bit of stuff from the OCI8 gem. A gem is the standard way Ruby folks distribute code libraries and programs written in the Ruby language. See HERE for more info to better understand what a gem is and how gems are used.
Watir is another Ruby gem, for driving web browsers, and you might be using it elsewhere in your code, but it doesn't relate to this question or OCI8 other than both of them being Ruby code libraries distributed as gems. So let's leave it aside so as to not confuse things.
The behavior you are seeing is how the OCI8 gem works, NOT anything to do with Ruby specifically. If you want something more elegant, then look into different gems that have been created for doing db access with Ruby, for example ActiveRecord, which was suggested in another answer already. The OCI8 Gem only returns an array if you have the results feeding into a block like you do in your current code. Otherwise the results are in an object called a Cursor, and you can use the cursor's fetch_hash method to get fetched data as a Hash. The hash keys are column names. (see http://ruby-oci8.rubyforge.org/en/api_OCI8Cursor.html)
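For example, reusing the connection from the question (the trimmed-down query is just for illustration):

cursor = connection.exec("SELECT BI_FNAME, BI_LNAME FROM EMPLOYEE")
while row = cursor.fetch_hash
  puts row['BI_FNAME'] # columns addressed by name instead of index
  puts row['BI_LNAME']
end
cursor.close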
Allow me to strongly recommend that you spend a little time learning a bit more about the Ruby language before you tear much further into your current project. Given the nature of the coding you seem to be doing, I'd advise you to read Brian Marick's book "Everyday Scripting with Ruby"; that's going to give you a much better understanding of the technology you are using, and you'll understand better when we toss around terms like 'hash', as I just did.
If you will allow a bit of general advice in terms of how you are going about interfacing with your database: IMHO, you should be taking advantage of the db by constructing a query that returns just the data you want, instead of grabbing huge amounts of data and trying to parse through it manually. It's a better use of the resource, uses less memory, takes less time to transfer the info from the db, and no matter how good your parsing code might be, it won't be as good as what the Oracle people wrote. Let the db do the heavy lifting; that's what it's there for.
If what you are dealing with here is data to drive your testing, or validate results, rather than construct one huge monolithic array, I'd recommend you use a much more modular approach. Use one global variable such as the EMP_ID of the current user you are testing with or against, and have the test code get query results for just the values needed for each validation, or a small logical group of validations like the parts of an address. It's a lot easier to build up stuff that way on a case by case basis working as you go, instead of trying to write the whole data retrieval bit in one giant piece that will be a nightmare to maintain.
As it stands, all your test code that verifies function or validates how the site works is going to be tightly coupled to a big monolithic piece that fetches the data from the db. That creates a lot of dependencies and makes your test code hard to maintain. If you deal with things in a more modular way, where each validation step retrieves just the data it needs, then it's a lot easier to expand or modify your test code as the site or database changes.
If you had an array containing the column names, then you could zip it up with the row array and build a hash:
Hash[column_names.zip(row)]
I would recommend using ActiveRecord for this, though.
This should work:
connection = OCI8.new('usrname', 'pwd', '//host:portno/sid')
cursor = connection.exec("SELECT BI_PREFIX ...")
cols = cursor.get_col_names
while r = cursor.fetch
  $BI_PREFIX = r[cols.index('BI_PREFIX')].to_s
  # ...
end

Generate summary of an url like facebook in Ruby

Is there any gem in Ruby to generate a summary of a URL, similar to what Facebook does when you post a link?
None that I'm aware of, but it shouldn't be too hard to roll your own. In the simplest case you can just require 'open-uri' and then use the open method to retrieve the contents of the site, or go for one of the HTTP libraries.
Once you've got the document, all you have to do is use something like Nokogiri or Hpricot to get the title, the first paragraph of text, and an image, and you are done.
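In sketch form, with the crude heuristic of taking the page's first <p> and <img> (real pages usually need smarter selection, and the URL is a placeholder):

require 'nokogiri'
require 'open-uri'

doc = Nokogiri::HTML(open('http://www.example.com'))

title     = doc.at('title') && doc.at('title').text
first_par = doc.at('p')     && doc.at('p').text
image_src = doc.at('img')   && doc.at('img')['src']

p({ title: title, text: first_par, image: image_src })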
Generating a thumbnail isn't a straightforward task. The page has to be rendered, the window captured, shrunk down, then stored or returned. While it would be possible for a gem to do it, there would be significant overhead.
There are websites that can create the thumbnails, then you can reference the image:
Websnapr
Webthumb
ShrinkTheWeb
iWEBTOOL
I haven't tried them, but there's a good page discussing the first two on The Accidental Technologist.
If you need some text from the page, it's simple to grab some, but making it sensible is a different problem:
require 'nokogiri'
require 'open-uri'
doc = Nokogiri::HTML(open('http://www.example.com'))
page_text = doc.text
print page_text.gsub(/\s+/, ' ').squeeze(' ')[0..99]
# >> IANA — Example domains Domains Numbers Protocols About IANA Example Domains As described in RFC 2606
