I'm trying to write a Ruby script that will take the Flickr BBCode for an image, find just the actual image link, and ignore all the other stuff.
The BBCode from Flickr looks like this:
<img src="https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg" width="800" height="526" alt="Wiggle Wiggle">
and I'm trying to get my output to be just the link, so: https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg
So far, my code is this:
#!/usr/bin/ruby
require 'rubygems'
str1 = ""
puts "What text would you like me to use? "
text = gets
text.scan(/"([^"]*)"/) { str1 = $1}
puts str1
I need to know how I can scan through the input and find only the part that starts with https and ends at the closing quote. Any help is appreciated.
Don't try to parse HTML with a regex.
Instead, use an HTML parser, something like Nokogiri (http://nokogiri.org/):
require 'nokogiri'
doc = Nokogiri::HTML.parse '<img src="https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg" width="800" height="526" alt="Wiggle Wiggle">'
doc.css('img').each do |img|
  puts img['src']
end
You should really use a proper HTML parser if you're trying to parse HTML.
For example, this is trivial in Nokogiri:
require 'nokogiri'
bbcode = %Q[<img src="https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg" width="800" height="526" alt="Wiggle Wiggle">]
Nokogiri::HTML(bbcode).css('img')[0]['src']
# => "https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg"
You'll obviously have to add some error checking in there, but that's the basics.
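For example, a minimal sketch of that error checking (the nil guards here are my own addition, not part of the one-liner above) might look like:
require 'nokogiri'
bbcode = %Q[<img src="https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg" width="800" height="526" alt="Wiggle Wiggle">]
# Guard against the <img> tag or its src attribute being missing.
img = Nokogiri::HTML(bbcode).at_css('img')
if img && img['src']
  puts img['src']
else
  warn 'No image link found in that BBCode'
end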
require 'nokogiri'
doc = Nokogiri::HTML(<<-eol)
<img src="https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg" width="800" height="526" alt="Wiggle Wiggle">
eol
doc.at_css("img")['src']
# => "https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg"
doc.at("img")['src']
# => "https://farm3.staticflickr.com/2864/92917419471_248187_c.jpg"
I'm trying to parse a raw HTML file using Nokogiri.
html_file = URI.open(url).read
html_doc = Nokogiri::HTML(html_file)
puts html_doc.search("p", "h2").map(&:text)
When I do this, I get all the "p" text and then all the "h2" text. Is there a way to get them in the order that they appear in the original text?
I tried something like this below, but it doesn't quite work:
puts html_doc.search("p" || "h2").map(&:text)
Sorry, found my own answer.
puts html_doc.search("p, h2").map(&:text)
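A single comma-separated selector is treated as one query, so the matches come back in document order. Here's a quick self-contained sketch (made-up HTML) showing it:
require 'nokogiri'
html_doc = Nokogiri::HTML(<<-EOS)
<h2>First heading</h2>
<p>First paragraph</p>
<h2>Second heading</h2>
<p>Second paragraph</p>
EOS
# One comma-separated selector returns a single NodeSet in document order.
puts html_doc.search("p, h2").map(&:text)
# >> First heading
# >> First paragraph
# >> Second heading
# >> Second paragraph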
I wrote a simple script:
require 'rubygems'
require 'nokogiri'
require 'open-uri'
url = "http://au.finance.yahoo.com/q/bs?s=MYGN"
doc = Nokogiri::HTML(open(url))
name = doc.at_css("#yfi_rt_quote_summary h2").text
market_cap = doc.at_css("#yfs_j10_mygn").text
ebit = doc.at("//*[@id='yfncsumtab']/tbody/tr[2]/td/table[2]/tbody/tr/td/table/tbody/tr[11]/td[2]/strong").text
puts "#{name} - #{market_cap} - #{ebit}"
The script grabs three values from Yahoo Finance. The problem is that the ebit XPath returns nil. I got the XPath by using the Chrome developer tools and copying and pasting.
This is the page I'm trying to get the value from: http://au.finance.yahoo.com/q/bs?s=MYGN. The actual value I'm after is 483,992, in the Total Current Assets row.
Any help would be appreciated, especially if there is a way to get this value with CSS selectors.
Nokogiri supports CSS selectors with jQuery-like extensions, so you can do this:
require 'nokogiri'
require 'open-uri'
doc = Nokogiri::HTML(open("http://au.finance.yahoo.com/q/bs?s=MYGN"))
ebit = doc.at('strong:contains("Total Current Assets")').parent.next_sibling.text.gsub(/[^,\d]+/, '')
puts ebit
# >> 483,992
I'm using the <strong> tag as a place-marker with the :contains pseudo-class, then backing up to the containing <td>, moving to the next <td> and grabbing its text, and finally cleaning it up using gsub(/[^,\d]+/, ''), which removes everything that isn't a digit or a comma.
Nokogiri supports a number of jQuery's JavaScript extensions, which is why :contains works.
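If you want to try that selector chain without fetching the live page, here's a tiny self-contained sketch against a made-up fragment shaped roughly like that balance-sheet row:
require 'nokogiri'
# Made-up fragment standing in for the real page.
html = '<table><tr><td><strong>Total Current Assets</strong></td><td>483,992</td></tr></table>'
doc = Nokogiri::HTML(html)
puts doc.at('strong:contains("Total Current Assets")').parent.next_sibling.text.gsub(/[^,\d]+/, '')
# >> 483,992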
This seems to work for me
doc.css("table.yfnc_tabledata1 tr[11] td[2]").text.tr(",","").to_i
#=> 483992
Or as a string
doc.css("table.yfnc_tabledata1 tr[11] td[2]").text.strip.gsub(/\u00A0/,"")
#=> "483,992"
When parsing an HTML document, how does Nokogiri handle <br> tags? Suppose we have a document that looks like this one:
<div>
Hi <br>
How are you? <br>
</div>
Does Nokogiri know that <br> tags are something special, not just regular XML tags, and treat them accordingly when parsing? I think Nokogiri is that smart, but I want to make sure before I accept this project, which involves scraping a site written in HTML4. You know what I mean: "How are you?" is not the content of the first <br>, as it would be in XML.
Here's how Nokogiri behaves when parsing (malformed) XML:
require 'nokogiri'
doc = Nokogiri::XML("<div>Hello<br>World</div>")
puts doc.root
#=> <div>Hello<br>World</br></div>
Here's how Nokogiri behaves when parsing HTML:
require 'nokogiri'
doc = Nokogiri::HTML("<div>Hello<br>World</div>")
puts doc.root
#=> <html><body><div>Hello<br>World</div></body></html>
p doc.at('div').text
#=> "HelloWorld"
I'm assuming that by "something special" you mean that you want Nokogiri to treat it like a newline in the source text. A <br> is not something special, and so appropriately Nokogiri does not treat it differently than any other element.
If you want it to be treated as a newline, you can do this:
doc.css('br').each{ |br| br.replace("\n") }
p doc.at('div').text
#=> "Hello\nWorld"
Similarly, if you wanted a space instead:
doc.css('br').each{ |br| br.replace(" ") }
p doc.at('div').text
#=> "Hello World"
You must parse this fragment using the HTML parser, as obviously this is not valid XML. When using the HTML parser, Nokogiri behaves as you'd expect:
require 'nokogiri'
doc = Nokogiri::HTML(<<-EOS
<div>
Hi <br>
How are you? <br>
</div>
EOS
)
doc.xpath("//br").each{ |e| puts e }
prints
<br>
<br>
Mechanize is based on Nokogiri for doing web scraping, so it is quite appropriate for the task.
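For example, a rough sketch (the URL is only a placeholder, and in newer versions the class is plain Mechanize rather than WWW::Mechanize) showing that a Mechanize page exposes the same Nokogiri document:
require 'mechanize'
agent = Mechanize.new
page  = agent.get('http://example.com/')  # placeholder URL
# page.parser is the underlying Nokogiri::HTML::Document,
# so the same XPath from above works here too.
page.parser.xpath('//br').each { |br| puts br }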
As far as I can remember from doing some HTML parsing last year, it'll view them as separate.
EDIT: My bad. I just got someone to send me the code and retested it; we ended up dealing with some things, including <br>, separately.
Say I have:
<div class="amt" id="displayFare-1_69-61-0" style="">
<div class="per">per person</div>
<div class="per" id="showTotalSubIndex-1_69-61-0" style="">Total $334</div>
$293
</div>
I want to grab just the $334. It will always have "Total $", but the id showTotalSubIndex... will be dynamic, so I can't use that.
You can use a Nokogiri XPath expression to iterate over all the div text nodes and scan each one for the 'Total $' prefix, like this:
require 'rubygems'
require 'nokogiri'
doc = Nokogiri::XML.parse( open( "test.xml" ))
doc.xpath("//div/text()").each{ |t|
tmp = t.to_str.strip
puts tmp[7..-1] if tmp.index('Total $') == 0
}
Rather than scanning for the text, you can go by position:
doc = Nokogiri::HTML(html)
doc.at_css("div.amt").element_children[1].text.gsub(/^Total /, '')
I assume here that the HTML is structured in such a way that the second element child of any div.amt tag is the value you're after (element_children skips the whitespace text nodes), and then we just grab its text and gsub the "Total " prefix off.
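As a self-contained sketch against the markup from the question, just to show the chain end to end:
require 'nokogiri'
html = <<-EOS
<div class="amt" id="displayFare-1_69-61-0">
  <div class="per">per person</div>
  <div class="per" id="showTotalSubIndex-1_69-61-0">Total $334</div>
  $293
</div>
EOS
doc = Nokogiri::HTML(html)
# element_children skips the whitespace text nodes, so index 1 is the "Total" div.
puts doc.at_css("div.amt").element_children[1].text.gsub(/^Total /, '')
# >> $334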
Both of these work:
require 'nokogiri'
doc = Nokogiri::XML(xml)
doc.search('//div[@id]/text()').select{ |n| n.text['Total'] }.first.text.split.last
and
doc.search('//div/text()').select{ |n| n.text['Total'] }.first.text.split.last
The difference is the first should run a bit faster if you know the div you're looking for always has an id.
If the ID always starts with "showTotalSubIndex" you could use:
doc.search('//div[starts-with(@id,"showTotalSubIndex")]').first.text.split.last
and if you know there's only going to be one in the document, you can use:
doc.at('//div[starts-with(@id,"showTotalSubIndex")]').text.split.last
EDIT:
Ryan posits the idea that the XML structure might be consistent. If so:
doc.at('//div[2]').text[/(\$\d+)/, 1]
:-)
I want to extract the members' home site links from a site.
A link looks like this:
<a href="http://www.ptop.se" target="_blank">
I tested it with this site:
http://www.rubular.com/
<a href="(.*?)" target="_blank">
It should output http://www.ptop.se.
Here is the code:
require 'open-uri'
url = "http://itproffs.se/forumv2/showprofile.aspx?memid=2683"
open(url) { |page| content = page.read()
links = content.scan(/<a href="(.*?)" target="_blank">/)
links.each {|link| puts #{link}
}
}
If you run this, it doesn't work. Why not?
I would suggest that you use one of the good Ruby HTML/XML parsing libraries, e.g. Hpricot or Nokogiri.
If you need to log in on the site, you might be interested in a library like WWW::Mechanize (there's a rough sketch after the code example below).
Code example:
require "open-uri"
require "hpricot"
require "nokogiri"
url = "http://itproffs.se/forumv2"
# Using Hpricot
doc = Hpricot(open(url))
doc.search("//a[@target='_blank']").each { |user| puts "found #{user.inner_html}" }
# Using Nokogiri
doc = Nokogiri::HTML(open(url))
doc.xpath("//a[@target='_blank']").each { |user| puts "found #{user.text}" }
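And if the profile page is only visible when logged in, here's a very rough Mechanize sketch; the login URL and field names below are guesses, not taken from the site:
require 'mechanize'
agent = Mechanize.new
login_page = agent.get('http://itproffs.se/forumv2/login.aspx')  # guessed URL
# Fill in the first form on the page; these field names are also guesses.
form = login_page.forms.first
form['username'] = 'your-username'
form['password'] = 'your-password'
agent.submit(form)
profile = agent.get('http://itproffs.se/forumv2/showprofile.aspx?memid=2683')
profile.parser.xpath("//a[@target='_blank']").each { |a| puts a['href'] }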
Several issues with your code:
1. I don't know what you mean by puts #{link}. Outside a quoted string, everything after the # is treated as a comment, so puts just prints a blank line; if you want to interpolate the link, wrap it in quotes, i.e. puts "#{link}".
2. String#scan accepts a block. Use it to loop through the matches.
3. The page you are trying to access does not return any links that the regex would match anyway.
Here's something that would work:
require 'open-uri'
url = "http://itproffs.se/forumv2/"
open(url) do |page|
content = page.read()
content.scan(/<a href="(.*?)" target="_blank">/) do |match|
match.each { |link| puts link}
end
end
There are better ways to do it, I'm sure, but this should work.
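For example, with Nokogiri the whole thing can shrink to something like this sketch, along the same lines as the earlier answer:
require 'nokogiri'
require 'open-uri'
url = "http://itproffs.se/forumv2/showprofile.aspx?memid=2683"
doc = Nokogiri::HTML(open(url))
# Grab the href of every link that opens in a new window.
puts doc.css('a[target="_blank"]').map { |a| a['href'] }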
Hope it helps