How to get 'Next Page' link with Scrubyt - ruby

I'm trying to use Scrubyt to get the details from this page http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php?section=events. I've managed to get the titles and detail URLs from the list, but I can't use next_page to get the scraper to go to the next page. I assume that's because I'm not using the correct pattern for the next page link. I tried the string "Next Page", and I've also tried the XPath. Any other ideas?
The code is below:
require 'rubygems'
require 'scrubyt'

nuffield_data = Scrubyt::Extractor.define do
  fetch 'http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php?section=events'
  event do
    title 'The Coast of Mayo'
    #url "href", :type => :attribute
    link_url
  end
  next_page "Next Page", :limit => 2
end

nuffield_data.to_xml.write($stdout, 1)

Try this with a slightly different URL:
fetch 'http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php'
Scrubyt seems to have issues with the "?section=events" query string on the end of the URL.
When it looks for the next page it is trying to return this URL:
http://www.nuffieldtheatre.co.uk/cn/events/?pageNum_rsSearch=1&totalRows_rsSearch=39&section=events
instead of:
http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php?pageNum_rsSearch=1&totalRows_rsSearch=39&section=events
Removing the query string on the end of the URL seems to fix this - you might want to file this as a bug.
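For reference, here is a sketch of the adjusted extractor with the query string removed from the fetch URL; the patterns and the :limit are unchanged from the code in the question:

require 'rubygems'
require 'scrubyt'

# Same extractor as above, but fetching without "?section=events"
# so next_page resolves against event_listings.php correctly.
nuffield_data = Scrubyt::Extractor.define do
  fetch 'http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php'
  event do
    title 'The Coast of Mayo'
    link_url
  end
  next_page "Next Page", :limit => 2
end

nuffield_data.to_xml.write($stdout, 1)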

Related

Scraping successive pages until the last page using Nokogiri and Mechanize

I am trying to scrape multiple pages from a website. I want to scrape a page, then click on next, get that page, and repeat until I hit the end.
I wrote this so far:
page = agent.submit(form, form.buttons.first)
# submitting a form
while lien = page.link_with(:text => 'Next')
  # while I have a next link on page, keep scraping
  html_body = Nokogiri::HTML(body)
  links = html_body.css('.list').xpath("//table/tbody/tr/td[2]/a[1]")
  links.each do |link|
    purelink = link['href']
    puts purelink[/codeClub=([^&]*)/].gsub('codeClub=', '')
    lien.click
  end
end
Unfortunately, with this script I keep scraping the same page in an infinite loop. How can I achieve what I want to do?
I would try replacing lien.click with page = lien.click.
It should look more like this:
page = form.submit form.button
scrape page
while link = page.link_with :text => 'Next'
  page = link.click
  scrape page
end
Also, you don't need to parse the page body with Nokogiri; Mechanize already does that for you.
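Putting those two points together, a sketch of the full loop might look like this; it reuses the selectors and the codeClub= pattern from the question, and uses Mechanize's page.search, which queries the already-parsed document:

page = agent.submit(form, form.buttons.first)

loop do
  # Mechanize pages are already parsed, so we can query them directly.
  page.search('.list').xpath("//table/tbody/tr/td[2]/a[1]").each do |link|
    puts link['href'][/codeClub=([^&]*)/, 1]
  end

  next_link = page.link_with(:text => 'Next')
  break unless next_link
  page = next_link.click # reassign page so the loop advances
end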

Using Nokogiri to scrape search results from POST form

I would like to scrape search results from http://maxdelivery.com, but unfortunately, they are using POST instead of GET for their search form. I found this description of how to use Nokogiri and RestClient to fake a post form submission, but it's not returning any results for me: http://ruby.bastardsbook.com/chapters/web-crawling/
I've worked with Nokogiri before, but not for the results of a POST form submission.
Here's my code right now, only slightly modified from the example at the link above:
require 'rest-client'
require 'nokogiri'

class MaxDeliverySearch
  REQUEST_URL = "http://www.maxdelivery.com/nkz/exec/Search/Display"

  def initialize(search_term)
    @term = search_term
  end

  def search
    if page = RestClient.post(REQUEST_URL, {
         'searchCategory' => '*',
         'searchString'   => @term,
         'x' => '0',
         'y' => '0'
       })
      puts "Success finding search term: #{@term}"
      File.open("temp/Display-#{@term}.html", 'w') { |f| f.write page.body }
      npage = Nokogiri::HTML(page)
      rows = npage.css('table tr')
      puts "#{rows.length} rows"
      rows.each do |row|
        puts row.css('td').map { |td| td.text }.join(', ')
      end
    end
  end
end
Now (ignoring the formatting stuff), I would expect RestClient.post(REQUEST_URL, {...}) to return some search results if passed a 'good' search term, but each time I just get the search results page back with no actual results, as if I had pasted the URL into the browser.
Anyone have any idea what I'm missing? Or, just how to get back the results I'm looking for with another gem?
With the class above, I would like to be able to do:
s = MaxDeliverySearch.new("ham")
s.search #=> big block of search results objects to traverse
Mechanize is what you should use to automate a web search form. This should get you started using Mechanize.
require 'mechanize'

agent = Mechanize.new
page = agent.get('http://maxdelivery.com')
form = page.form('SearchForm')
form.searchString = "ham"
page = agent.submit(form)

page.search("div.searchResultItem").each do |item|
  puts item.search(".searchName i").text.strip
end
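If you want to keep the MaxDeliverySearch interface from the question, a rough sketch of wrapping the same Mechanize calls in that class could look like this; the 'SearchForm' name and the searchResultItem selectors come from the snippet above and are assumed to match the live site:

require 'mechanize'

class MaxDeliverySearch
  def initialize(search_term)
    @term = search_term
    @agent = Mechanize.new
  end

  # Returns an array of result names (a sketch; adjust the selectors
  # if the site's markup differs).
  def search
    page = @agent.get('http://maxdelivery.com')
    form = page.form('SearchForm')
    form.searchString = @term
    results_page = @agent.submit(form)
    results_page.search("div.searchResultItem").map do |item|
      item.search(".searchName i").text.strip
    end
  end
end

s = MaxDeliverySearch.new("ham")
puts s.search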

How to submit formstack form using ruby?

I have a form similar to THIS and want to submit data to it from a CSV file using Ruby. Here is what I have been trying to do:
require 'uri'
require 'net/http'
params = {
  'field15157482-first' => 'bip',
  'field15157482-last'  => 'bop',
  'field15157485'       => 'bip@bob.com',
  'field15157487'       => 'option1',
  'fsSubmitButton1196962' => 'Submit'
}
x = Net::HTTP.post_form(URI.parse('http://www.formstack.com/forms/?1196833-GxMTxR20GK'), params)
I keep getting "A valid form ID was not supplied." I have a hunch I am using the wrong URL, but I don't know what to replace it with.
I would use the API, but I don't have access to the token, hence my stone-age approach. Any suggestions would be much appreciated.
The form uses hidden variables and cookies to attempt to maintain a "unique session". Fortunately, Mechanize makes handling 'sneaky' forms quite easy.
require "mechanize"
form_uri = "http://www.formstack.com/forms/?1196962-617Z6Foyif"
#agent = Mechanize.new
page = #agent.get form_uri
form = page.forms[0]
form.fields_with(:class => /fsField/).each do |field|
field.value = case field.name
when /first/ then "First Name"
when /last/ then "Last Name"
else "email#address.com"
end
end
page = form.submit form.buttons.first
puts
puts "=== Response Header"
puts
puts page.header
puts
puts "=== Response Body"
puts
puts page.body
Looking at the source on http://www.formstack.com/forms/?1196833-GxMTxR20GK and the example in your link, it appears that Formstack forms post to index.php and require a form ID to be passed in to identify which form is being submitted. Looking at the forms in both examples, you'll see a field similar to this:
<input type="hidden" name="form" value="1196833" />
Try adding the following to your params hash:
'form' => '1196833' # or other appropriate form value
You may also need to include the other hidden fields for a valid submit.
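A sketch of what that change might look like against the question's code: the 'form' value comes from the hidden input above, and posting to index.php follows from the observation that Formstack forms post there, but verify both, and copy any other hidden fields, from the actual page source:

require 'uri'
require 'net/http'

params = {
  'field15157482-first' => 'bip',
  'field15157482-last'  => 'bop',
  'field15157485'       => 'bip@bob.com',
  'field15157487'       => 'option1',
  'fsSubmitButton1196962' => 'Submit',
  'form' => '1196833'   # hidden form ID taken from the page source
  # ...plus any other hidden inputs the form contains, copied verbatim
}

x = Net::HTTP.post_form(URI.parse('http://www.formstack.com/forms/index.php'), params)
puts x.body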

Stripping out results from a website that doesn't have differing URLs

I'm trying to automate the process of searching for alternative telephone numbers using SayNoTo0870. Every time one searches for an alternate number or name, it brings up the '/companysearch.php' page.
Clearly this page has no unique reference, and in my mind you can't just link to this page.
What I'm hoping to do is use the code below to automate opening a browser, searching for a name/number, stripping out the HTML, and then providing the top 5 results. I've got the automation part down, but when trying to save the webpage using Hpricot it only brings up the 'Sorry, nothing can be found' page, because I can't link directly to the search result page.
Here is my code thus far:
(I've removed comments to shorten it)
require 'rubygems'
require 'watir'
require 'hpricot'
require 'open-uri'

class OH870

  def searchName(name)
    browser = Watir::Browser.new
    browser.goto 'http://www.saynoto0870.com/search.php'
    browser.text_field(:name => 'search_name').set name
    browser.button(:name => 'submit').click
  end

  def searchNumber(number)
    browser = Watir::Browser.new
    browser.goto 'http://www.saynoto0870.com/search.php'
    browser.text_field(:name => 'number').set number
    browser.button(:name => 'submit').click
  end

  def loadNew(website)
    doc = Hpricot(open(website))
    puts(doc)
  end

  def strip_tags
    stripped = website.gsub(%r{</?[^>]+?>}, '')
    puts stripped
  end

end # class
class Main < OH870
  puts "What is the name of the place you want?"
  website = 'http://www.saynoto0870.com/companysearch.php'
  question = gets.chomp
  whichNumber = OH870.new
  whichNumber.searchName(question)
  #result = OH870.new
  #withoutTags = website.strip_tags
  #result.loadNew(withoutTags)
end
Now I'm not sure whether there's a way of asking Watir to follow through to the companysearch.php page and dump the results without having to pass this page as a variable.
I wonder if anyone has any suggestions here?
With WATIR, minus the extraneous libraries, here's all it takes to accomplish what you've described (using the 'name' test case only). I've pulled it out of the function format since you already know how to do that, and this will be a clearer test case path.
require 'watir'

@browser = Watir::Browser.new :firefox                    # open a browser called @browser
@browser.goto "http://(your search page here)"            # go to the search page
@browser.text_field(:name => 'name').value = "Awesome"    # fill in the 'name' field
@browser.button(:name => 'submit').click                  # submit the form
If all goes well, we should now be looking at the search results. WATIR already knows it's on a new page - we don't have to specify a URL. In the case that the results are in a frame, we do need to access that frame before we can view its content. Let's pretend they're in a DIV element with an ID of "search_results":
results = @browser.div(:id => "search_results").text

resultsFrame = @browser.frame(:index => 1)   # in the case of a frame
results = resultsFrame.div(:id => "search_results").text
As you can see, you do not need to save the entire page to parse the results. They could be in table cells, they could be in a different div per line, or a new frame. All are easily accessible with WATIR to be stored in a variable, array, or immediately written to the console or log file.
@results = Array.new                 # create an Array to store our results
@browser.divs.each do |div|          # for each div element on the page
  if div.id == "search_results"      # if the div ID equals "search_results"
    @results << div.text             # add it to our array named @results
  end
end
Now, if you just wanted the top 5 there are many ways to access them.
@results[0]     # first element
@results[0..4]  # first 5 elements
I'd also suggest you look into a few programming principles like DRY (Don't Repeat Yourself). In your function definitions where you see that they share code, like opening the browser and visiting the same URL - you can consolidate those:
def search(how, what)
  @browser = Watir::Browser.new :firefox
  @browser.goto "(that search url again)"
  @browser.text_field(:name => how).value = what
  # etc...
end

search("name", "Hilton")
search("number", "555555")
Since we know that the two available text_field names are "name" and "number", and those make good logical sense as a 'how', we can parameterize them and use a single function for both the Search by Name and Search by Number test cases. This is more efficient, as long as the test cases remain similar enough to be shared.
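Applied to the site in the question, a sketch of that consolidated function might look like this. The field names 'search_name' and 'number' and the 'submit' button come from the question's code, while the "search_results" div ID is just the placeholder used above, so swap in whatever the results page actually uses:

require 'watir'

def search(how, what)
  @browser = Watir::Browser.new :firefox
  @browser.goto 'http://www.saynoto0870.com/search.php'
  @browser.text_field(:name => how).set what
  @browser.button(:name => 'submit').click

  # Watir is now on the companysearch.php results page; collect matching divs.
  @results = []
  @browser.divs.each do |div|
    @results << div.text if div.id == "search_results"
  end
  @results[0..4]   # top 5 results
end

puts search('search_name', 'Hilton')
puts search('number', '555555')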

Mechanize: picking right submit from multiple in same form

I use Mechanize to loop through a table, which is paginated.
I have a problem with a form that holds multiple submit inputs. The input tags are used for pagination and are generated dynamically. When I loop through the pages I need to scrape, I need to be able to pick the right input, since only one of them will take me to the “next page”. The right tag can be identified by different attributes such as name, class, or value. My problem, though, is that I can't work out how to tell Mechanize which one to use.
I tried this:
require 'mechanize'
require 'yaml'

url = "http://www.somewhere.com"
agent = Mechanize.new
page = agent.get(url)

loop do
  puts "some content from site using nokogiri"
  if next_page = page.form_with(:action => /.*/)
    page = next_page.submit(page.form_with(:action => /.*/).submits[3])
  else
    break
  end
end
I got that approach from this post: http://rubyforge.org/pipermail/mechanize-users/2008-November/000314.html, but as mentioned, the number of tags changes, so picking a hard-coded index into the submits is not a good idea.
What I would like to know is if there is a way like this:
loop do
  puts "some content from site using nokogiri"
  if next_page = page.form_with(:action => /.*/)
    page = next_page.submit(:name => /the_right_submit_button/)
  else
    break
  end
end
or something like that, maybe with a css or xpath selector.
I usually use form.button_with to select the right button to click:
form = results_page.forms[0]
results_page = form.submit(form.button_with(:name => 'ctl00$ContentBody$ResultsPager$NextButton'))
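Combined with the loop from the question, a sketch might look like this; the /next/i pattern is only an example, so match on whatever name, value, or class actually identifies the pagination button:

loop do
  puts "some content from site using nokogiri"

  form = page.form_with(:action => /.*/)
  break unless form

  # Pick the submit button by a regex on its name instead of a hard-coded index.
  next_button = form.button_with(:name => /next/i)
  break unless next_button

  page = form.submit(next_button)
end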
