I'm having trouble scraping the rows from "List of Nobel laureates" in Nokogiri.
I believe my CSS selector is correct, but it's returning empty.
The original tutorial is "Writing a Web Crawler".
require 'rubygems'
require 'nokogiri'
require 'open-uri'
BASE_WIKIPEDIA_URL = 'http://en.wikipedia.org/'
LIST_URL = "#{BASE_WIKIPEDIA_URL}/wiki/List_of_Nobel_laureates"
page = Nokogiri::HTML(open(LIST_URL))
rows = page.css('div#content.mw-body div#bodyContent div#mw-content-text.mw-content-ltr table.wikitable.sortable.jquery-tablesorter tr')
puts "length : #{rows.size}"
There are two problems:
You have a double slash in the URL you are building, so you're not actually looking at the page you think you're looking at. The URL you are actually requesting is http://en.wikipedia.org//wiki/List_of_Nobel_laureates; if you follow that link you'll see that it redirects to the Wikipedia homepage.
Your CSS selector is far too specific, and it includes information that isn't present in the raw page source. Try a simpler selector:
rows = page.css('table.wikitable tr')
Specifically you are including the jquery-tablesorter class in your selector. This class is added by JavaScript, but the tools you're using don't execute the page's JavaScript, so the class won't be present and you can't use it to find table rows.
If you use "view source" instead of your browser's DOM inspector, you will see the raw source code before any JavaScript has run.
I can see that you're expecting a table with the class jquery-tablesorter. That's because you're inspecting the table in your browser, where it has that class. The problem is that jQuery adds that class after the page loads. Since open-uri doesn't execute JavaScript, the class never gets added to the table that Nokogiri sees.
Long story short, you probably want to go with just:
page.css('table.wikitable tr')
Related
I'm doing a scraping exercise and trying to scrape the poster from a website using Nokogiri.
This is the link that I want to get:
https://a.ltrbxd.com/resized/film-poster/5/8/6/7/2/3/586723-glass-onion-a-knives-out-mystery-0-460-0-690-crop.jpg?v=ce7ed2a83f
But instead I got this:
https://s.ltrbxd.com/static/img/empty-poster-500.825678f0.png
Why?
This is what I tried:
url = "https://letterboxd.com/film/glass-onion-a-knives-out-mystery/"
serialized_html = URI.open(url).read
html = Nokogiri::HTML.parse(serialized_html)
title = html.search('.headline-1').text.strip
overview = html.search('.truncate p').text.strip
poster = html.search('.film-poster img').attribute('src').value
{
title: title,
overview: overview,
poster_url: poster,
}
It has nothing to do with your Ruby code.
If you run something like the following in your terminal:
curl https://letterboxd.com/film/glass-onion-a-knives-out-mystery/
you can see that the output HTML does not contain the images you are looking for. You see them in your browser because, after that initial load, some JavaScript runs and loads more resources.
The AJAX call that loads the image you are looking for is https://letterboxd.com/ajax/poster/film/glass-onion-a-knives-out-mystery/std/500x750/?k=0c10a16c
Play with your browser's network inspector and you will be able to identify the different parts of the website and how each one loads.
Nokogiri does not execute JavaScript, but the link has to be in the page somewhere, or at least there has to be a link to some API that returns it.
The first place I would look is the data attributes of the image element or its parent; in this case, however, it was hidden in an inline script along with some other interesting data about the movie.
First, download the web page using curl or wget and open the file in a text editor to see what Nokogiri sees. Search for something you know is in the file; I searched for the ce7ed2a83f part of the image URL and found the JSON.
Then the data can be extracted like this:
require 'nokogiri'
require 'open-uri'
require 'json'
url = "https://letterboxd.com/film/glass-onion-a-knives-out-mystery/"
serialized_html = URI.open(url).read
html = Nokogiri::HTML.parse(serialized_html)
json_script = html.search('script[type="application/ld+json"]').first
data_str = json_script.to_s.gsub("\n", '').match(/{.*}/).to_s
data = JSON.parse(data_str)
data['image']
I know how to find an element using Nokogiri. I know how to click a link using Mechanize. But I can't figure out how to find a specific link and click it. This seems like it should be really easy, but for some reason I can't find a solution.
Let's say I'm just trying to click on the first result on a Google search. I can't just click the first link with Mechanize, because the Google page has a bunch of other links, like Settings. The search result links themselves don't seem to have class names, but they're enveloped in <h3 class="r"></h3>.
I could just use Nokogiri to follow the href value of the link like so:
document = open("https://www.google.com/search?q=stackoverflow")
parsed_content = Nokogiri::HTML(document.read)
href = parsed_content.css('.r').children.first['href']
new_document = open(href)
# href is equal to "/url?sa=t&rct=j&q=&esrc=s&source=web&url=https%3A%2F%2Fstackoverflow.com%2F"
but it's not a direct URL, and going to that URL gives an error. The data-href value is a direct URL, but I can't figure out how to get that value: doing the same thing with ...first['data-href'] returns nil.
Anyone know how I can just find the first .r element on the page and click the link inside it?
Here's the start to my action:
require 'open-uri'
require 'nokogiri'
require 'mechanize'
document = open("https://www.google.com/search?q=stackoverflow")
parsed_content = Nokogiri::HTML(document.read)
Here's the .r element on the Google search results page:
<h3 class="r">
Stack Overflow
</h3>
First, make sure the code in your question is what you actually ran - it looks like it isn't, because the URL isn't wrapped in quotes and the CSS selector should be .r a, not .r. You use .r a because you want the link inside elements with the r class.
Anyway, you can resolve the relative URL against the page's base URL like so:
require 'open-uri'
require 'nokogiri'
require 'uri'
base_url = "https://www.google.com/search?q=stackoverflow"
document = open(base_url)
parsed_content = Nokogiri::HTML(document.read)
href = parsed_content.css('.r').first.children.first['href']
new_url = URI.join base_url, href
new_document = open(new_url)
I tested this, and following new_url does redirect to Stack Overflow as expected.
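URI.join is what turns the root-relative href into something you can actually open; here is a quick stdlib-only illustration (the href value is a shortened stand-in for Google's real redirect URL):

```ruby
require 'uri'

base_url = "https://www.google.com/search?q=stackoverflow"
href = "/url?sa=t&url=https%3A%2F%2Fstackoverflow.com%2F"

# URI.join resolves the root-relative path against the scheme and host
# of the base URL, replacing the base's path and query entirely.
puts URI.join(base_url, href)
# => https://www.google.com/url?sa=t&url=https%3A%2F%2Fstackoverflow.com%2F
```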
In the past, I have successfully used Nokogiri to scrape websites using a simple Ruby script. For a current project, I need to scrape a website that only uses inline CSS. As you can imagine, it is an old website.
What possibilities do I have to target specific elements on the page based on the inline CSS of the elements? It seems this is not possible with Nokogiri or have I overlooked something?
UPDATE: An example can be found here. I basically need the main content without the footnotes. The latter have a smaller font size and are grouped below each section.
I'm going to teach you how to fish. Instead of trying to find what I want, it's sometimes a lot easier to find what I don't want and remove it.
Start with this code:
require 'nokogiri'
require 'open-uri'
URL = 'http://www.eximsystems.com/LaVerdad/Antiguo/Gn/Genesis.htm'
FOOTNOTE_ACCESSORS = [
'span[style*="font-size: 8.0pt"]',
'span[style*="font-size:8.0pt"]',
'span[style*="font-size: 7.5pt"]',
'span[style*="font-size:7.5pt"]',
'font[size="1"]'
].join(',')
doc = Nokogiri.HTML(open(URL))
doc.search(FOOTNOTE_ACCESSORS).each do |footnote|
footnote.remove
end
File.write(File.basename(URI.parse(URL).path), doc.to_html)
Run it, then open the resulting HTML file in your browser. Scroll through the file looking for footnotes you want to remove. Select part of their text, then use "Inspect Element", or whatever tool you have that will find that selected text in the source of the page. Find something unique in that text that makes it possible to isolate it from the text you want to keep. For instance, I locate footnotes using the font-sizes in <span> and <font> tags.
Keep adding accessors to the FOOTNOTE_ACCESSORS array until you have all undesirable elements removed.
This code isn't complete, nor is it written as tightly as I'd normally do it for this sort of task, but it will give you an idea how to go about this particular task.
This is a version that is a bit more flexible:
require 'nokogiri'
require 'open-uri'
URL = 'http://www.eximsystems.com/LaVerdad/Antiguo/Gn/Genesis.htm'
FOOTNOTE_ACCESSORS = [
'span[style*="font-size: 8.0pt"]',
'span[style*="font-size:8.0pt"]',
'span[style*="font-size: 7.5pt"]',
'span[style*="font-size:7.5pt"]',
'font[size="1"]',
]
doc = Nokogiri.HTML(open(URL))
FOOTNOTE_ACCESSORS.each do |accessor|
doc.search(accessor).each do |footnote|
footnote.remove
end
end
File.write(File.basename(URI.parse(URL).path), doc.to_html)
The major difference is that the previous version assumed all entries in FOOTNOTE_ACCESSORS were CSS selectors. With this change, XPath can also be used. The code will take a little longer to run, since the entries are iterated over individually, but the ability to dig in with XPath might make it worthwhile for you.
You can do something like:
doc.css('*[style*="foo"]')
That will select any element with foo appearing anywhere in its style attribute.
I want to click a link with Mechanize that I select with xpath (nokogiri).
How is that possible?
next_page = page.search "//div[@class='grid-dataset-pager']/span[@class='currentPage']/following-sibling::a[starts-with(@class, 'page')][1]"
next_page.click
The problem is that a Nokogiri element doesn't have a click method.
I can't read the href (URL) and send a GET request, because the link has an onclick function defined and no href attribute.
If that's not possible, what are the alternatives?
Use page.at instead of page.search when you're trying to find only one element.
You can make your selector simpler (shorter) by using CSS selector syntax:
next_page = page.at('div.grid-dataset-pager > span.currentPage + a[class^="page"]')
You can construct your own Link instance if you have the Nokogiri element, page, and mechanize object to feed the constructor:
next_link = Mechanize::Page::Link.new( next_page, mech, page )
next_link.click
However, you might not need that, because Mechanize#click lets you supply a string with the text of the anchor/button to click on.
# Assuming this link text is unique on the page, which I suspect it is
mech.click next_page.text
Edit after re-reading the question completely: However, none of this is going to help you, because Mechanize is not a web browser! It does not have a JavaScript engine, and thus won't (can't) execute your onclick for you. For this you'll need to use Ruby to control a real web browser, e.g. using Watir or Selenium or Celerity or the like.
In general you would do:
page.link_with(:node => next_link).click
However like Phrogz says, this won't really do what you want.
Why don't you use an Hpricot element instead? Mechanize can click on an Hpricot element as long as the link has a 'src' or 'href' attribute. Try something along these lines:
page = agent.get("http://www.example.com")
next_page = agent.click((page/"//your/xpath/a"))
Edit: After reading Phrogz's answer I also realized that this won't really do it. Mechanize doesn't support JavaScript yet. With this in mind you have 3 options:
Use a library that controls a real web browser. See Phrogz's answer.
Use Capybara, which is an integration-testing library but can also be used as a standalone crawler. I've done this successfully with HtmlUnit, which is also an integration-testing library, in Java. Capybara comes with Selenium support by default, and it also supports WebKit via an external gem. Capybara interprets JavaScript out of the box. This blog post might help.
Grok the page that you intend to crawl and use something like HTTPFox to monitor what the onclick JavaScript function does, then replicate this in your Mechanize script.
Good luck.
I'm writing a sample test with Watir where I navigate around a site with the IE class, issue queries, etc.
That works perfectly.
I want to continue by using PageContainer's methods on the last page I landed on.
For instance, using its HTML method on that page.
Now I'm new to Ruby and just started learning it for Watir.
I tried asking this question on OpenQA, but for some reason the Watir section is restricted to normal members.
Thanks for looking at my question.
edit: here is a simple example
require "rubygems"
require "watir"
test_site = "http://wiki.openqa.org/"
browser = Watir::IE.new
browser.goto(test_site)
# now if I want to get the HTML source of this page, I can't use the IE class
# because it doesn't have a method which supports that
# the PageContainer class, does have a method that supports that
# I'll continue what I want to do in pseudo code
Store HTML source in text file
# I know how to write to a file, so that's not a problem;
# retrieving the HTML is the problem.
# more specifically, using another Watir class is the problem.
Close browser
# end
Currently, the best place to get answers to your Watir questions is the Watir-General email list.
For this question, it would be nice to see more code. Is the application under test (AUT) opening a new window/tab that you were having trouble getting to and therefore wanted to try the PageContainer, or is it just navigating to a second page?
If it is the first one, you want to look at #attach, if it is the second, then I would recommend reading the quick start tutorial.
Edit after code added above:
What I think you missed is that Watir::IE includes the Watir::PageContainer module. So you can call browser.html to get the html displayed on the page to which you've navigated.
I agree. It seems to me that browser.html is what you want.