I am trying to verify the title of a website. After some trial and error, I found that this can be done in Ruby using Nokogiri and rest-client:
require 'nokogiri'
require 'rest-client'
page = Nokogiri::HTML(RestClient.get("http://#{user.username}.domain.com/"))
simian = page.at_css("title").text
if simian == "Welcome to"
  puts "default monkey"
else
  puts "website updated"
end
Unfortunately, for a large number of websites this doesn't always seem to work; it returns:
RestClient::InternalServerError at /admin/users/list
500 Internal Server Error
I was wondering if there is any way to achieve the same thing by simply using
mycurl = %x(curl http://........)
What would be an efficient way to parse the title from that output without using any gem? Or can the curl output be used directly with Nokogiri?
Thanks
After reading your question I wasn't really sure whether you are set on those two gems, so here is another way that may prove simpler.
require 'open-uri'
url="http://google.com"
source = open(url).read
source[/<title>(.*)<\/title>, 1]
There are two parts to this: one is fetching the page and the other is parsing it. For fetching, you don't really need the rest-client gem when open-uri from the standard library will do. Nokogiri does the parsing, and it is not likely your problem. Try this:
require 'open-uri'
require 'nokogiri'
page = Nokogiri::HTML(open('http://example.com/'))
puts page.at('title').text
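To answer the curl part of the question: Nokogiri::HTML just takes a string, so you can also feed it the output of curl directly. A minimal sketch, assuming curl is installed (-s only suppresses the progress meter):

require 'nokogiri'

# shell out to curl; %x() returns the response body as a string
html = %x(curl -s http://example.com/)
page = Nokogiri::HTML(html)
puts page.at('title').text

That said, open-uri keeps everything in Ruby and avoids spawning a shell, so I'd prefer it unless you specifically need curl's behavior.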
Related
I want to create a Rails 3 app for reading RSS feeds, but I could not find a good way to do this in Rails 3. The code below works, but I want to read the full content of the RSS items, which it does not support.
require 'rss/2.0'
require 'open-uri'
response = open(feed_url).read # fetch the feed; feed_url holds the feed's address
result = RSS::Parser.parse(response, false)
output += "Feed Title: #{result.channel.title}<br />"
Feedzirra is a good gem for this sort of thing. Simply do this:
require 'feedzirra'
feed = Feedzirra::Feed.fetch_and_parse("http://feeds.feedburner.com/PaulDixExplainsNothing")
puts feed.entries.first.title
puts feed.entries.first.content
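If you want every item rather than just the first, entries is a plain collection you can iterate over; a small sketch (note that content can be nil for feeds that only provide a summary):

feed.entries.each do |entry|
  puts entry.title
  puts entry.content || entry.summary
end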
I'm using Ruby 1.8.7's RSS::Parser, part of stdlib. I'm new to Ruby.
I want to parse an RSS feed, make some changes to the data, then output it (as RSS).
The docs say I can use #to_s, and it seems to work with some feeds, but not others.
This works:
#!/usr/bin/ruby -w
require 'rss'
require 'net/http'
url = 'http://news.ycombinator.com/rss'
feed = Net::HTTP.get_response(URI.parse(url)).body
rss = RSS::Parser.parse(feed, false, true)
# Here I would make some changes to the RSS, but right now I'm not.
p rss.to_s
Returns expected output: XML text.
This fails:
#!/usr/bin/ruby -w
require 'rss'
require 'net/http'
url = 'http://feeds.feedburner.com/devourfeed'
feed = Net::HTTP.get_response(URI.parse(url)).body
rss = RSS::Parser.parse(feed, false, true)
# Here I would make some changes to the RSS, but right now I'm not.
p rss.to_s
Returns nothing (empty quotes).
And yet, if I change the last line to:
p rss
I can see that the object is filled with all of the feed data. It's the to_s method that fails.
Why?
How can I get some kind of error output to debug a problem like this?
From what I can tell, the problem isn't in to_s; it's in the parser itself. Stepping deep into the parser.rb code showed nothing being returned, so to_s returning an empty string is valid.
I'd recommend looking at something like Feedzirra.
Also, as an FYI, take a look at Ruby's OpenURI module (open-uri) for easy retrieval of web assets, like feeds. Open-URI is simple but adequate for most tasks. Net::HTTP is lower level and will require you to write a lot more code to replace the functionality of Open-URI.
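As a sketch of how to surface an actual error instead of an empty string: the second argument to RSS::Parser.parse is do_validate, and with validation on the parser should raise (for example RSS::MissingTagError) rather than silently building a half-empty object. This is an assumption worth verifying against your Ruby version:

require 'rss'
require 'net/http'

url = 'http://feeds.feedburner.com/devourfeed'
feed = Net::HTTP.get_response(URI.parse(url)).body

begin
  # do_validate = true: raise on missing required elements
  rss = RSS::Parser.parse(feed, true)
  puts rss.to_s
rescue RSS::Error => e
  puts "Parse problem: #{e.class}: #{e.message}"
end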
I had the same problem, so I started debugging the code. I think Ruby's RSS library has a few too many required elements. The channel needs to have a title, link, and description; if one is missing, to_s will fail.
The second feed in the example above is missing the description, which makes to_s fail...
I believe this is a bug, but I really don't understand the code, and I barely know Ruby, so who knows. It would seem natural to me that to_s would try its best even if some elements are missing.
Either way
rss.channel.description="something"
rss.to_s
will "work"
The problem lies in def have_required_elements?, or in self.class::MODELS.
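Building on that workaround, here is a minimal sketch that fills in placeholders for whichever required channel elements the feed left out before serializing (the placeholder strings are my own choices):

require 'rss'
require 'net/http'

url = 'http://feeds.feedburner.com/devourfeed'
feed = Net::HTTP.get_response(URI.parse(url)).body
rss = RSS::Parser.parse(feed, false, true)

# to_s quietly returns "" unless the channel has title, link, and description
rss.channel.title       ||= '(untitled)'
rss.channel.link        ||= url
rss.channel.description ||= '(no description)'

puts rss.to_s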
Looking at this site here: http://www.grammy.com/nominees/search?artist=&title=&year=1958&genre=All
I can view all winners by year -- how can I scrape just the names of the winners on each page (for each year) and get them into a simple database?
thanks!
This will get you the actual names; cleaning them up a little bit and inserting them into a DB is an exercise left to you:
require 'rubygems'
require 'hpricot'
require 'open-uri'
html = open("http://www.grammy.com/nominees/search?artist=&title=&year=1958&genre=All")
doc = Hpricot(html)
doc.search("td.views-field-field-nominee-extended-value").each do |winner|
puts winner.inner_html.strip
end
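If you want to take the next step and store the names, here is a minimal sketch using the sqlite3 gem; the database file name and table schema are my own assumptions:

require 'rubygems'
require 'hpricot'
require 'open-uri'
require 'sqlite3'

year = 1958
html = open("http://www.grammy.com/nominees/search?artist=&title=&year=#{year}&genre=All")
doc = Hpricot(html)

db = SQLite3::Database.new('winners.db')
db.execute('CREATE TABLE IF NOT EXISTS winners (year INTEGER, name TEXT)')

doc.search('td.views-field-field-nominee-extended-value').each do |winner|
  # strip the markup whitespace before storing the name
  db.execute('INSERT INTO winners (year, name) VALUES (?, ?)', [year, winner.inner_html.strip])
end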
I'm having an issue getting Nokogiri to work properly. I'm using version 1.4.4 with Ruby 1.9.2.
I have both libxml2 and libxslt installed and up to date. When I run a Ruby script with XML, it works great.
require 'nokogiri'
doc = Nokogiri::XML(File.open("test.xml"))
doc = doc.css("name").each do |node|
  puts node.text
end
From the command line, running ruby test.rb returns:
Name 1
Name 2
Name 3
And the crowd goes wild.
I tweak a few things, make a few adjustments to the code...
require 'nokogiri'
require 'open-uri'
doc = Nokogiri::HTML(open("http://domain.tld"))
doc = doc.css("p").each do |node|
  puts node.text
end
Back at the command line, ruby test.rb returns... nothing! Just a new, empty line.
Is there any reason that it will work with an XML file, but not HTML?
To debug this sort of problem we need more information from you. Since you're not giving a working URL, and because we know that Nokogiri works fine for this sort of problem, the debugging falls on you.
Here's what I would do to test:
In IRB:
1. Do you get output when you do open('http://whateverURLyouarehiding.com').read?
2. If that returns a valid document, what do you get when you wrap the previous open statement in Nokogiri::HTML(...)? That needs to preserve the .read from the previous line too, so Nokogiri receives the body of the page, NOT an IO stream.
3. Try #2 above, but remove the .read. That will tell you whether there's a problem with Nokogiri reading an IO stream, though I seriously doubt there is, since I use it all the time. At that point I'd suspect a problem on your system.
4. If you're getting a document in #2 and #3, then the problem could be in your accessor; I suspect what you're looking for doesn't exist.
5. If it does exist, then check the value of doc.errors after Nokogiri parses the document. It could be finding errors in the document, and, if so, they'll be captured there.
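A condensed sketch of that checklist in script form (the URL is a placeholder for whatever page you are actually fetching):

require 'open-uri'
require 'nokogiri'

body = open('http://example.com/').read  # step 1: does this print HTML?
doc  = Nokogiri::HTML(body)              # step 2: parse the body string
p doc.errors                             # any parse errors Nokogiri recorded
p doc.css('p').size                      # does the selector match anything?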
How do I load a web page and search for a word in Ruby?
Here's a complete solution:
require 'open-uri'
if open('http://example.com/').read =~ /searchword/
  # do something
end
For something simple like this, I would prefer to write a couple of lines of code instead of using a full-blown gem. Here is what I would do:
require 'net/http'
# let's take the url of this page
uri = 'http://stackoverflow.com/questions/1878891/how-to-load-a-web-page-and-search-for-a-word-in-ruby'
response = Net::HTTP.get_response(URI.parse(uri)) # => #<Net::HTTPOK 200 OK readbody=true>
# match the word Ruby
/Ruby/.match(response.body) # => #<MatchData "Ruby">
I would go down the path of using a gem if I needed to do more than this, or if I needed to implement some algorithm that is already done in one of the gems.
I suggest using Nokogiri or Hpricot to open and parse HTML documents. If you need something simple that doesn't require parsing the HTML, you can just use the open-uri library built into most Ruby distributions. If you need something more complex for posting forms (or logging in), you can elect to use Mechanize.
Nokogiri is probably the preferred solution post _why, but both are about as simple as this:
require 'nokogiri'
require 'open-uri'
doc = Nokogiri(open("http://www.example.com"))
if doc.inner_text.match(/someword/)
  puts "got it"
end
Both also allow you to search using XPath-like queries or CSS selectors, so you can grab, for example, the items out of all divs with class=foo (see the sketch below).
Fortunately, it's not that big of a leap to move between open-uri, nokogiri and mechanize, so use the first one that meets your needs, and revise your code once you realize you need the capabilities of one of the other libraries.
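A minimal sketch of that selector-based search using Nokogiri (the foo class name is just the example from the text above):

require 'nokogiri'
require 'open-uri'

doc = Nokogiri::HTML(open('http://www.example.com'))

# print the text of every div with class="foo"
doc.css('div.foo').each do |div|
  puts div.text
end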
You can also use the Mechanize gem, with something similar to this:
require 'rubygems'
require 'mechanize'
mech = WWW::Mechanize.new.get('http://example.com') do |page|
  if page.body =~ /mysearchregex/
    puts "found it"
  end
end
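Note: in newer versions of the Mechanize gem the WWW namespace was dropped, so the class is just Mechanize.new rather than WWW::Mechanize.new.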