How do I extract links from HTML using regex? - ruby

I want to extract links from google.com. The HTML looks like this:
<a href="http://www.test.com/" class="l"
It took me around five minutes to find a regex that works using www.rubular.com.
It is:
"(.*?)" class="l"
The code is:
require "open-uri"
url = "http://www.google.com/search?q=ruby"
source = open(url).read()
links = source.scan(/"(.*?)" class="l"/)
links.each { |link| puts link }
The problem is that it is not outputting the websites' links.

Those links actually have class=l, not class="l". By the way, to figure this out I added some logging to the method so that you can see the output at various stages and debug it. I searched for the string you were expecting to find and didn't find it, which is why your regex failed. So I looked for the string you actually wanted and changed the regex accordingly. Debugging skills are handy.
require "open-uri"
url = "http://www.google.com/search?q=ruby"
source = open(url).read
puts "--- PAGE SOURCE ---"
puts source
links = source.scan(/<a.+?href="(.+?)".+?class=l/)
puts "--- FOUND THIS MANY LINKS ---"
puts links.size
puts "--- PRINTING LINKS ---"
links.each do |link|
  puts "- #{link}"
end
I also improved your regex. You are looking for some text that starts with the opening of an a tag (<a), then some characters you don't care about (.+?), an href attribute (href="), the contents of the href attribute that you want to capture ((.+?)), some spaces or other attributes (.+?), and lastly the class attribute (class=l).
I have .+? in three places there. The . means any character, the + means there must be one or more of the thing right before it, and the ? means that the .+ should try to match as short a string as possible.
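As a standalone illustration of greedy versus lazy matching (the sample string here is made up, not Google's actual markup):

```ruby
s = '<a href="http://a.example/" class=l><a href="http://b.example/" class=l>'

# Greedy .+ runs to the last quote in the string, swallowing both
# anchors in a single match.
greedy = s.scan(/href="(.+)"/)

# Lazy .+? stops at the first closing quote, so each URL is captured
# separately.
lazy = s.scan(/href="(.+?)"/)

puts greedy.length  # one oversized match
puts lazy.length    # two clean matches
```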

To put it bluntly, the problem is that you're using regexes at all: HTML is what is known as a context-free language, while regular expressions can only match the class of languages known as regular languages.
What you should do is send the page data to a parser that can handle HTML code, such as Hpricot, and then walk the parse tree you get from the parser.

What am I doing wrong?
You're trying to parse HTML with regex. Don't do that. Regular expressions cannot cover the range of syntax allowed even by valid XHTML, let alone real-world tag soup. Use an HTML parser library such as Hpricot.
FWIW, when I fetch http://www.google.com/search?q=ruby I do not receive class="l" anywhere in the returned markup. Perhaps it depends on which local Google you are using and/or whether you are logged in or otherwise have a Google cookie. (Your script, like me, would not.)

Related

Ruby Regex: Return just the match

When I do
puts /<title>(.*?)<\/title>/.match(html)
I get
<h2>foobar</h2>
But I want just
foobar
What's the most elegant method for doing so?
The most elegant way would be to parse HTML with an HTML parser:
require 'nokogiri'
html = '<title><h2>Pancakes</h2></title>'
doc = Nokogiri::HTML(html)
title = doc.at('title').text
# title is now 'Pancakes'
If you try to do this with a regular expression, you will probably fail. For example, if you have an <h2> in your <title> what's to prevent you from having something like this:
<title><strong>Where</strong> is <span>pancakes</span> <em>house?</em></title>
Trying to handle something like that with a single regex is going to be ugly but doc.at('title').text handles that as easily as it handles <title>Pancakes</title> or <title><h2>Pancakes</h2></title>.
Regular expressions are great tools but they shouldn't be the only tool in your toolbox.
Something of this style will return just the contents of the match.
html[/<title>(.*?)<\/title>/,1]
Maybe you need to tell us more, like what html might contain, but right now you are capturing the contents of the title block, irrespective of the internal tags. I think that is the way you should do it, rather than assuming there is an internal tag you want to handle; what would happen if you had two internal tags? This is why everyone is telling you to use an HTML parser, which you really should do.
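For reference, a quick sketch of what the String#[] form returns (the html value here is made up to match the question):

```ruby
html = '<title><h2>foobar</h2></title>'

# String#[] with a regex and a capture-group index returns just that
# group, or nil when there is no match at all.
title = html[/<title>(.*?)<\/title>/, 1]
puts title  # => <h2>foobar</h2>
```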

Getting all links of a webpage using Ruby

I'm trying to retrieve every external link of a webpage using Ruby. I'm using String.scan with this regex:
/href="https?:[^"]*|href='https?:[^']*/i
Then, I can use gsub to remove the href part:
str.gsub(/href=['"]/, '')
This works fine, but I'm not sure if it's efficient in terms of performance. Is this OK to use, or should I work with a more specific parser (Nokogiri, for example)? Which way is better?
Thanks!
Using regular expressions is fine for a quick and dirty script, but Nokogiri is very simple to use:
require 'nokogiri'
require 'open-uri'
fail("Usage: extract_links URL [URL ...]") if ARGV.empty?

ARGV.each do |url|
  doc = Nokogiri::HTML(open(url))
  hrefs = doc.css("a").map do |link|
    if (href = link.attr("href")) && !href.empty?
      URI::join(url, href)
    end
  end.compact.uniq
  STDOUT.puts(hrefs.join("\n"))
end
If you want just the method, refactor it a little bit to your needs:
def get_links(url)
  Nokogiri::HTML(open(url).read).css("a").map do |link|
    if (href = link.attr("href")) && href.match(/^https?:/)
      href
    end
  end.compact
end
I'm a big fan of Nokogiri, but why reinvent the wheel?
Ruby's URI module already has the extract method to do this:
URI::extract(str[, schemes][,&blk])
From the docs:
Extracts URIs from a string. If block given, iterates through all matched URIs. Returns nil if block given or array with matches.
require "uri"
URI.extract("text here http://foo.example.org/bla and here mailto:test@example.org and here also.")
# => ["http://foo.example.org/bla", "mailto:test@example.org"]
You could use Nokogiri to walk the DOM and pull all the tags that have URLs, or have it retrieve just the text and pass it to URI.extract, or just let URI.extract do it all.
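For instance, a sketch of restricting URI.extract to particular schemes (the sample text here is invented):

```ruby
require 'uri'

text = 'see http://foo.example.org/bla and mailto:test@example.org too'

# The optional second argument limits which schemes are extracted,
# so the mailto: link is skipped here.
links = URI.extract(text, %w[http https])
puts links
```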
And, why use a parser, such as Nokogiri, instead of regex patterns? Because HTML, and XML, can be formatted in a lot of different ways and still render correctly on the page or effectively transfer the data. Browsers are very forgiving when it comes to accepting bad markup. Regex patterns, on the other hand, work in very limited ranges of "acceptability", where that range is defined by how well you anticipate the variations in the markup, or, conversely, how well you anticipate the ways your pattern can go wrong when presented with unexpected patterns.
A parser doesn't work like a regex. It builds an internal representation of the document and then walks through that. It doesn't care how the file/markup is laid out, it does its work on the internal representation of the DOM. Nokogiri relaxes its parsing to handle HTML, because HTML is notorious for being poorly written. That helps us because with most non-validating HTML Nokogiri can fix it up. Occasionally I'll encounter something that is SO badly written that Nokogiri can't fix it correctly, so I'll have to give it a minor nudge by tweaking the HTML before I pass it to Nokogiri; I'll still use the parser though, rather than try to use patterns.
Mechanize uses Nokogiri under the hood but has built-in niceties for parsing HTML, including links:
require 'mechanize'
agent = Mechanize.new
page = agent.get('http://example.com/')
page.links_with(:href => /^https?/).each do |link|
  puts link.href
end
Using a parser is generally always better than using regular expressions for parsing HTML. This is an often-asked question here on Stack Overflow, with this being the most famous answer. Why is this the case? Because constructing a robust regular expression that can handle real-world variations of HTML, some valid some not, is very difficult and ultimately more complicated than a simple parsing solution that will work for just about all pages that will render in a browser.
Why don't you use groups in your pattern?
e.g.
/http[s]?:\/\/(.+)/i
So the first group will already be the link you searched for.
Can you put groups in your regex? That would reduce your regular expressions to one instead of two.
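A sketch of that single-pass version (the sample HTML is made up): one scan with a capture group grabs the URLs directly, so no second gsub pass is needed.

```ruby
html = %q{<a href="http://a.example/x">one</a> <a href='https://b.example/y'>two</a>}

# The capture group pulls out only the URL; scan returns one
# single-element array per match, so flatten gives a plain list.
urls = html.scan(/href=['"](https?:[^'"]*)/i).flatten
puts urls
```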

Rails HTML Sanitizing

I am trying to sanitize an HTML file and it isn't working correctly. I want it all to be plain text except for paragraph and line break tags. Here is my sanitization code (the dots signify other code in my class that isn't relevant to the problem):
.
.
.
include ActionView::Helpers::SanitizeHelper
.
.
.
def remove_html(html_content)
  sanitized_content_1 = sanitize(html_content, :tags => %w(p br))
  sanitized_content_2 = Nokogiri::HTML(sanitized_content_1)
  sanitized_content_2.css("style", "script").remove
  return sanitized_content_2
end
It isn't working correctly. Here is the original HTML file from which the function is reading its input, and here is the "sanitized" code it is returning. It is leaving in the body of CSS tags, JavaScript, and HTML comment tags. It might be leaving in other stuff as well that I have not noticed. How can I thoroughly remove all CSS, JavaScript, and HTML other than paragraph and line break tags?
I don't think you want to sanitize it. Sanitizing strips HTML, leaving the text behind, except for the HTML elements you deem OK. It is intended for allowing a user-input field to contain some markup.
Instead, you probably want to parse it. For example, the following will print the text content of the <p> tags in a given html string.
doc = Nokogiri::HTML.parse(html)
doc.search('p').each do |el|
  puts el.text
end
You can sanitize using the CGI module too.
require 'cgi'
str = "<html><head><title>Hello</title></head><body></body></html>"
p str
p CGI.escapeHTML(str)
Run this script and we get the following result.
$ ruby sanitize.rb
"<html><head><title>Hello</title></head><body></body></html>"
"&lt;html&gt;&lt;head&gt;&lt;title&gt;Hello&lt;/title&gt;&lt;/head&gt;&lt;body&gt;&lt;/body&gt;&lt;/html&gt;"
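Note that CGI.escapeHTML escapes markup rather than stripping it; the tags survive as entity-encoded text, which may not be what "sanitize" implies. A minimal sketch:

```ruby
require 'cgi'

# escapeHTML converts <, >, &, and quotes into entities; nothing is
# removed, so a browser would display the tags as literal text.
escaped = CGI.escapeHTML('<p>Hello & goodbye</p>')
puts escaped
```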

How do I count a sub string using a regex in ruby?

I have a very large XML file which I load as a string.
My XML looks like:
<publication ID="7728" contentstatus="Unchanged" idID="0b000064800e9e39">
<volume contentstatus="Unchanged" idID="0b0000648151c35d">
<article ID="5756261" contentstatus="Changed" doi="10.1109/TNB.2011.2145270" idID="0b0000648151d8ca"/>
</volume>
I want to count the number of occurrences of the string
article ID="5705641" contentstatus="Changed"
where the ID varies. How can I convert the ID to a regex?
Here is what I have tried doing
searchstr = 'article ID=\"/[1-9]{7}/\" contentstatus=\"Changed\"'
count = ((xml.scan(searchstr).length)).to_s
puts count
Please let me know how I can achieve this.
Thanks
I'm going to go out on a limb and guess that you're new to Ruby. First, it's not necessary to convert count into a string to puts it; puts automatically calls to_s on anything you send it.
Second, it's rarely a good idea to handle XML with string manipulation. I would strongly advise that you use a full fledged XML parser such as Nokogiri.
That said, you can't embed a regex in a string like that. The entire query string would need to be a regex.
Something like
/article ID="[1-9]{7}" contentstatus="Changed"/
Quotation marks aren't special characters in a regex, so you don't need to escape them.
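Putting that together, a quick sketch of counting matches with scan (the XML snippet here is invented but mirrors the question's shape):

```ruby
xml = <<EOT
<volume>
<article ID="5756261" contentstatus="Changed"/>
<article ID="5756262" contentstatus="Unchanged"/>
<article ID="5756263" contentstatus="Changed"/>
</volume>
EOT

# scan with a regex literal returns one array element per match,
# so length gives the number of matching articles.
count = xml.scan(/article ID="[1-9]{7}" contentstatus="Changed"/).length
puts count  # => 2
```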
When in doubt about regex in Ruby, I recommend checking out Rubular.com.
And once again, I can't emphasize enough that I really don't condone trying to manipulate XML via regex. Nokogiri will make dealing with XML a billion times easier and more reliable.
If XPath is an option, it is a preferred way of selecting XML elements. You can use the selector:
//article[@contentstatus="Changed"]
Or, if possible:
count(//article[@contentstatus="Changed"])
Nokogiri is my recommended Ruby XML parser. It's very robust, and is probably the standard for the language now.
I added two more "articles" to show how easily you can find and manipulate the contents, without having to rely on a regex.
require 'nokogiri'
xml = <<EOT
<publication ID="7728" contentstatus="Unchanged" idID="0b000064800e9e39">
  <volume contentstatus="Unchanged" idID="0b0000648151c35d">
    <article ID="5756261" contentstatus="Changed" doi="10.1109/TNB.2011.2145270" idID="0b0000648151d8ca"/>
    <article ID="5756262" contentstatus="Unchanged" doi="10.1109/TNB.2011.2145270" idID="0b0000648151d8ca"/>
    <article ID="5756263" contentstatus="Changed" doi="10.1109/TNB.2011.2145270" idID="0b0000648151d8ca"/>
  </volume>
</publication>
EOT
doc = Nokogiri::XML(xml)
puts doc.search('//article[@contentstatus="Changed"]').size.to_s + ' found'
puts doc.search('//article[@contentstatus="Changed"]').map{ |n| "#{ n['ID'] } #{ n['doi'] } #{ n['idID'] }" }
>> 2 found
>> 5756261 10.1109/TNB.2011.2145270 0b0000648151d8ca
>> 5756263 10.1109/TNB.2011.2145270 0b0000648151d8ca
The problem with using regex with HTML or XML is that they'll break really easily if the XML changes, or if your XML comes from different sources or is malformed. Regex was never designed to handle that sort of problem, but a parser was. You could have XML with line ends after every tag, or none at all, and the parser won't really care as long as the XML is well-formed. A good parser, like Nokogiri, can even do fixups if the XML is broken, in order to try to make sense of it.
Your current pattern is almost right: remove the errant / from around the numbers, and use a regex literal rather than a string, since String#scan treats a plain string as a literal substring to find:
searchstr = /article ID="[1-9]{7}" contentstatus="Changed"/

How can I get Nokogiri to parse and return an XML document?

Here's a sample of some oddness:
#!/usr/bin/ruby
require 'rubygems'
require 'open-uri'
require 'nokogiri'
print "without read: ", Nokogiri(open('http://weblog.rubyonrails.org/')).class, "\n"
print "with read: ", Nokogiri(open('http://weblog.rubyonrails.org/').read).class, "\n"
Running this returns:
without read: Nokogiri::XML::Document
with read: Nokogiri::HTML::Document
Without the read returns XML, and with it is HTML? The web page is defined as "XHTML transitional", so at first I thought Nokogiri must have been reading OpenURI's "content-type" from the stream, but that returns 'text/html':
(rdb:1) doc = open(('http://weblog.rubyonrails.org/'))
(rdb:1) doc.content_type
"text/html"
which is what the server is returning. So, now I'm trying to figure out why Nokogiri is returning two different values. It doesn't appear to be parsing the text and using heuristics to determine whether the content is HTML or XML.
The same thing is happening with the ATOM feed pointed to by that page:
(rdb:1) doc = Nokogiri.parse(open('http://feeds.feedburner.com/RidingRails'))
(rdb:1) doc.class
Nokogiri::XML::Document
(rdb:1) doc = Nokogiri.parse(open('http://feeds.feedburner.com/RidingRails').read)
(rdb:1) doc.class
Nokogiri::HTML::Document
I need to be able to parse a page without knowing in advance whether it is HTML or a feed (RSS or Atom), and reliably determine which it is. I asked Nokogiri to parse the body of either an HTML or an XML feed file, but I'm seeing those inconsistent results.
I thought I could write some tests to determine the type but then I ran into xpaths not finding elements, but regular searches working:
(rdb:1) doc = Nokogiri.parse(open('http://feeds.feedburner.com/RidingRails'))
(rdb:1) doc.class
Nokogiri::XML::Document
(rdb:1) doc.xpath('/feed/entry').length
0
(rdb:1) doc.search('feed entry').length
15
I figured xpaths would work with XML but the results don't look trustworthy either.
These tests were all done on my Ubuntu box, but I've seen the same behavior on my Macbook Pro. I'd love to find out I'm doing something wrong, but I haven't seen an example for parsing and searching that gave me consistent results. Can anyone show me the error of my ways?
It has to do with the way Nokogiri's parse method works. Here's the source:
# File lib/nokogiri.rb, line 55
def parse string, url = nil, encoding = nil, options = nil
  doc =
    if string =~ /^\s*<[^Hh>]*html/i # Probably html
      Nokogiri::HTML::Document.parse(string, url, encoding, options || XML::ParseOptions::DEFAULT_HTML)
    else
      Nokogiri::XML::Document.parse(string, url, encoding, options || XML::ParseOptions::DEFAULT_XML)
    end
  yield doc if block_given?
  doc
end
The key is the line if string =~ /^\s*<[^Hh>]*html/i # Probably html. When you just use open, it returns an IO-like object (a StringIO or Tempfile), not a string, so the regex never matches and the test is always false. On the other hand, read returns a string, so it can be regarded as HTML. In this case it is, because it matches that regex. Here's the start of that string:
<!DOCTYPE html PUBLIC
The regex matches the "!DOCTYPE " to [^Hh>]* and then matches the "html", thus assuming it's HTML. Why someone selected this regex to determine if the file is HTML is beyond me. With this regex, a file that begins with a tag like <definitely-not-html> is considered HTML, but <this-is-still-not-html> is considered XML. You're probably best off staying away from this dumb function and invoking Nokogiri::HTML::Document#parse or Nokogiri::XML::Document#parse directly.
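To see the quirk in isolation, here is a small sketch exercising that sniffing regex against made-up strings:

```ruby
SNIFF = /^\s*<[^Hh>]*html/i

# "<!DOCTYPE " contains no h or >, so a real doctype matches as HTML.
puts('<!DOCTYPE html PUBLIC' =~ SNIFF ? 'html' : 'xml')

# "definitely-not-" also contains no h, so this fake tag is "HTML" too.
puts('<definitely-not-html>' =~ SNIFF ? 'html' : 'xml')

# The h in "this" stops [^Hh>]* before the regex can reach "html",
# so this one falls through to the XML branch.
puts('<this-is-still-not-html>' =~ SNIFF ? 'html' : 'xml')
```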
Responding to this part of your question:
I thought I could write some tests to determine the type but then I ran into xpaths not finding elements, but regular searches working:
I've just come across this problem using Nokogiri to parse an Atom feed. The problem seemed to come down to the anonymous namespace declaration:
<feed xmlns="http://www.w3.org/2005/Atom">
Removing the XMLNS declaration from the source XML would enable Nokogiri to search with XPath as usual. Removing that declaration from the feed obviously wasn't an option here, so instead I just removed the namespaces from the document after parsing:
doc = Nokogiri.parse(open('http://feeds.feedburner.com/RidingRails'))
doc.remove_namespaces!
doc.xpath('/feed/entry').length
Ugly I know, but it did the trick.
