I'm trying to read this Atom feed (http://ffffound.com/feed), but I'm unable to get at any of the values that are defined as part of a namespace, e.g. media:content and media:thumbnail.
Do I need to make the parser aware of the namespaces?
Here's what I've got:
require 'rss/2.0'
require 'open-uri'
source = "http://ffffound.com/feed"
content = ""
open(source) { |s| content = s.read }
rss = RSS::Parser.parse(content, false)
I believe you would have to use libxml-ruby for that.
gem 'libxml-ruby', '>= 0.8.3'
require 'xml'
require 'open-uri'

xml = open("http://ffffound.com/feed").read
parser = XML::Parser.string(xml, :options => XML::Parser::Options::RECOVER)
doc = parser.parse

# media:* elements live in the Media RSS namespace, so register a prefix for XPath
media_ns = 'media:http://search.yahoo.com/mrss/'

doc.find("//channel/item").each do |item|
  content = item.find_first("media:content", media_ns)
  puts content
  # and just guessing you want that url thingy
  puts content.attributes.get_attribute("url").value if content
end
I hope that points you in the right direction.
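If you'd rather not pull in libxml-ruby, Nokogiri handles namespaced elements the same way. A rough sketch of the equivalent lookup, assuming the feed binds the media: prefix to the standard Media RSS namespace (http://search.yahoo.com/mrss/); check the feed's own xmlns declarations to be sure:

require 'nokogiri'
require 'open-uri'

doc = Nokogiri::XML(open("http://ffffound.com/feed"))

# Bind the media: prefix explicitly so the XPath can see the namespaced elements
ns = { 'media' => 'http://search.yahoo.com/mrss/' }

doc.xpath('//item').each do |item|
  content   = item.at_xpath('media:content', ns)
  thumbnail = item.at_xpath('media:thumbnail', ns)
  puts content['url']   if content
  puts thumbnail['url'] if thumbnail
end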
Related
I am trying to scrape data from Instagram. Here is my code:
require 'open-uri'
require 'nokogiri'
require 'json'
require "unicode/emoji"
def get_html
  url = 'https://www.instagram.com/muriithi_kabogo/'
  html = open(url)
end

def pass_data
  html = get_html
  doc = Nokogiri::HTML(html)
end

def get_data
  profiles = []
  body = pass_data.at('body')
  script = body.at('script').text
  myText = script
  json_object_data = eval(myText)
end

get_data()
When I try to convert the text into JSON, I get an error:
(eval):1: invalid Unicode codepoint (SyntaxError)
usinessmen #beautiful #smile\ud83d\ude0a #teambringit #shebr
How do I move past this error?
JSON, like JavaScript, escapes characters outside the Basic Multilingual Plane as UTF-16 surrogate pairs (that's what \ud83d\ude0a is), and Ruby's string-literal syntax rejects those escapes as invalid codepoints.
Do not use eval. For one thing, Ruby will reject \ud83d\ude0a as an invalid codepoint, as it should; for another, eval-ing scraped content is a security hole; and lastly, it slows down your code.
Use JSON.parse, which is safer, faster, and knows how to decode those escape sequences:
require 'json'
json_str = '"usinessmen #beautiful #smile\ud83d\ude0a #teambringit #shebr"'
JSON.parse(json_str)
# => "usinessmen #beautiful #smile😊 #teambringit #shebr"
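In the asker's case the text being passed to eval comes from a <script> tag. At the time, Instagram profile pages embedded their data in a script along the lines of window._sharedData = {...}; that exact shape is an assumption here and may well have changed, but assuming it, a sketch of the non-eval route would look like this:

require 'open-uri'
require 'nokogiri'
require 'json'

html = open('https://www.instagram.com/muriithi_kabogo/')
doc  = Nokogiri::HTML(html)

# Find the script tag carrying the embedded JSON (assumed shape:
# "window._sharedData = {...};" -- adjust to whatever the page actually serves)
script = doc.css('script').map(&:text).find { |t| t.include?('window._sharedData') }

if script
  json_str = script[/window\._sharedData\s*=\s*(\{.*\});/m, 1]
  data = JSON.parse(json_str) # decodes \ud83d\ude0a and friends correctly
  puts data.keys
end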
I am trying to parse the RSS feed from a Stack Overflow Jobs search (URL) using the built-in Ruby RSS parser. For some reason I cannot access the Location element from the feed.
It crashes with
undefined method `location' for #<RSS::Rss::Channel::Item:0x000000020db720> (NoMethodError)
I am able to get the other elements (title, description, link, etc.) but not the location.
Code
require 'rss'
require 'open-uri'
url = "https://stackoverflow.com/jobs/feed?q=%5bruby%5d&l=New+York%2c+NY%2c+United+States&d=20&u=Miles"
open(url) do |rss|
  feed = RSS::Parser.parse(rss)
  puts "Title: #{feed.channel.title}"
  feed.items.each do |item|
    puts item.title
    puts item.location
  end
end
How do I get this location value?
The location element is in a custom namespace (declared with an xmlns attribute), which is presumably why the plain item.location accessor isn't defined. I have not been able to find anything on how to use namespaces in the Ruby RSS model, but with Nokogiri you can declare the namespace yourself and parse out the location element with XPath (see the Nokogiri docs).
require 'nokogiri'
require 'open-uri'
# Open the URL with Nokogiri XML parser
xml = Nokogiri::XML(open("http://stackoverflow.com/jobs/feed?q=%5Bruby%5D&l=New%20York%2C%20NY%2C%20United%20States&d=20&u=Miles"))
# xml.xpath('XPath', 'custom_namespace' => "http://namespace/url")
puts xml.xpath('//rss/channel/item/so:location', 'so' => 'http://stackoverflow.com/jobs/')
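To print titles and locations together, the same namespace mapping works per item. A short sketch (still assuming the feed binds the so prefix to http://stackoverflow.com/jobs/, as in the XPath above):

require 'nokogiri'
require 'open-uri'

xml = Nokogiri::XML(open("https://stackoverflow.com/jobs/feed?q=%5Bruby%5D&l=New%20York%2C%20NY%2C%20United%20States&d=20&u=Miles"))
ns  = { 'so' => 'http://stackoverflow.com/jobs/' }

xml.xpath('//rss/channel/item').each do |item|
  title    = item.at_xpath('title').text
  location = item.at_xpath('so:location', ns)
  puts "#{title} (#{location ? location.text : 'no location given'})"
end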
I am trying to search for a specific node in an XML file using XPath. This search worked just fine under REXML, but REXML was too slow for large XML docs, so I moved over to LibXML.
My simple example is processing a Yum repomd.xml file, an example can be found here: http://mirror.san.fastserv.com/pub/linux/centos/6/os/x86_64/repodata/repomd.xml
My test script is as follows:
require 'rubygems'
require 'libxml'
p = LibXML::XML::Parser.file("/tmp/dr.xml")
repomd = p.parse
filelist = repomd.find_first("/repomd/data[@type='filelists']/location@href")
puts "Length: " + filelist.length.to_s
filelist.each do |f|
  puts f.attributes['href']
end
I get this error:
Error: Invalid expression.
/usr/lib/ruby/gems/1.8/gems/libxml-ruby-2.7.0/lib/libxml/document.rb:123:in `find': Error: Invalid expression. (LibXML::XML::Error)
from /usr/lib/ruby/gems/1.8/gems/libxml-ruby-2.7.0/lib/libxml/document.rb:123:in `find'
from /usr/lib/ruby/gems/1.8/gems/libxml-ruby-2.7.0/lib/libxml/document.rb:130:in `find_first'
from /tmp/scripty.rb:6
I have also tried simpler examples like below, but still no dice.
p = LibXML::XML::Parser.file( "/tmp/dr.xml")
repomd = p.parse
filelist = repomd.root.find(".//location")
puts "Length: " + filelist.length.to_s
In the above case I get the output:
Length: 0
Your inspired guidance would be greatly appreciated; I have searched for what I am doing wrong and I just can't figure it out...
Here is some code that fetches the file and processes it; it still doesn't work:
require 'rubygems'
require 'open-uri'
require 'libxml'
raw_xml = open('http://mirror.san.fastserv.com/pub/linux/centos/6/os/x86_64/repodata/repomd.xml').read
p = LibXML::XML::Parser.string(raw_xml)
repomd = p.parse
filelist = repomd.find_first("//data[@type='filelists']/location[@href]")
puts "First: #{filelist}"
In the end I reverted to REXML and used stream processing. It was much faster, and its XPath syntax was much easier to work with.
Looking at your code, it seems you want to collect only those location elements which have an href attribute. If that's the case, the XPath below should work:
"//data[@type='filelists']/location[@href]"
I'm working on a script to grab data and images from webshop product pages (with approval from the owner).
I have a working script that loops through a CSV file with 20042 product URLs and collects the data I need into a CSV file. The final thing I need is to save the product images.
I have this code (thanks to Phrogz in this thread)
URL = 'http://www.sample.com/page.html'
require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'uri'
def make_absolute( href, root )
  URI.parse(root).merge(URI.parse(href)).to_s
end

Nokogiri::HTML(open(URL)).xpath('//*[@id="zoom"]/@href').each do |src|
  uri = make_absolute(src, URL)
  File.open(File.basename(uri), 'wb') { |f| f.write(open(uri).read) }
end
That runs great for a single URL, but I'm struggling to get it working inside the loop over the URLs from the CSV file in my main script, which starts like this:
# encoding: utf-8
require 'nokogiri'
require 'open-uri'
require 'csv'
require 'mechanize'
@prices = Array.new
@title = Array.new
@description = Array.new
@warranty = Array.new
@leadtime = Array.new
@urls = Array.new
@categories = Array.new
@subcategories = Array.new
@subsubcategories = Array.new

urls = CSV.read("lotofurls.csv")

(0..urls.length - 1).each do |index|
  puts urls[index][0]
  doc = Nokogiri::HTML(open(urls[index][0]))
It looks like all I need to figure out is how to feed the URLs into the image-saving code, but any help would be much appreciated!
You can make quick work of this with something like RMagick (or ImageMagick, MiniMagick, etc.).
For RMagick, you could do something like this:
require 'rmagick'
images.each do |image|
url = image.url # should be a string
Magick::Image.read(url).first.resize_to_fill(200,200).write(image.desired_filename)
end
That would write a 200x200px image for each URL you provide (resize_to_fill is optional, obviously). The library is very powerful, with many, many options. If you go this route, I'd recommend the RailsCast on image manipulation: http://railscasts.com/episodes/374-image-manipulation
And the documentation if you want to get more advanced: http://rmagick.rubyforge.org/
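To tie this back to the question: the missing piece is just calling the image-saving snippet from inside the CSV loop. A rough sketch combining the two snippets from the question (the lotofurls.csv name, the one-URL-per-row layout, and the //*[@id="zoom"]/@href XPath are taken from the question and may need adjusting):

require 'nokogiri'
require 'open-uri'
require 'csv'
require 'uri'

def make_absolute(href, root)
  URI.parse(root).merge(URI.parse(href)).to_s
end

def save_product_images(page_url)
  # Grab every zoom link on the product page and write the image to the current directory
  Nokogiri::HTML(open(page_url)).xpath('//*[@id="zoom"]/@href').each do |src|
    uri = make_absolute(src, page_url)
    File.open(File.basename(uri), 'wb') { |f| f.write(open(uri).read) }
  end
rescue OpenURI::HTTPError => e
  warn "Skipping #{page_url}: #{e.message}"
end

CSV.read("lotofurls.csv").each do |row|
  url = row[0]
  puts url
  save_product_images(url)
  # ...the existing data-scraping code can run here as well
end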
I'm writing a very simple Ruby script to parse tweets out of a twitter RSS feed. Here's the code I have:
require 'rss'
@rss = RSS::Parser.parse('statuses.xml', false)
outputfile = open("output.txt", "w")

@rss.items.each do |i|
  pubdate = i.published.to_s
  if pubdate.include? '2011-05'
    tweet = i.title.to_s
    tweet = tweet.gsub(/<title>SlyFlourish: /, "")
    tweet = tweet.gsub(/<\/title>/, "\n\n")
    outputfile << tweet
  end
end
I think I'm missing something about dealing with the objects coming out of the RSS parser. Can someone tell me how I can better pull out the title and date entries from the object returned by the parser?
Is there a reason you chose RSS? Parsing XML is expensive.
I'd consider using JSON instead.
There's also a twitter Ruby gem that makes this really easy:
require "twitter"
Twitter.user_timeline("gavin_morrice").each do |tweet|
  puts tweet.text
  puts tweet.created_at
end
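If you'd rather stay with the built-in RSS parser, the gsub on <title> tags shouldn't be necessary. The published accessor suggests statuses.xml is an Atom feed, and the Atom entry objects wrap their values in elements whose text lives in a content accessor; assuming that's the case here, something along these lines should pull the fields out cleanly:

require 'rss'

feed = RSS::Parser.parse('statuses.xml', false)

File.open('output.txt', 'w') do |out|
  feed.items.each do |entry|
    published = entry.published.content # a Time object for Atom entries
    next unless published.strftime('%Y-%m') == '2011-05'

    title = entry.title.content # plain text, no <title> markup
    out << title.sub(/\ASlyFlourish: /, '') << "\n\n"
  end
end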