Save image with Mechanize and Nokogiri? - ruby

I'm using Mechanize and Nokogiri to gather some data. I need to save a picture that's randomly generated on each request.
In my attempt below I end up downloading every picture on the page, but the only one I actually want is the image inside div#specific.
Also, is it possible to generate Base64 data from it without saving it to disk or re-requesting its source?
require 'rubygems'
require 'mechanize'
require 'nokogiri'

a = Mechanize.new { |agent|
  agent.keep_alive = true
  agent.max_history = 0
}

urls = ['http://www.domain.com']

urls.each do |url|
  page = a.get(url)
  doc = Nokogiri::HTML(page.body)
  if doc.at_css('#specific')
    # Problem: this grabs every image on the page, not just the one in #specific
    page.images.each do |img|
      img.fetch.save('picture.png')
    end
  end
end

To fetch the images from the specific location:
agent = Mechanize.new
page = agent.get('http://www.domain.com')
images = page.search("#specific img")
To save the image:
agent.get(images.first['src']).save("path/to/folder/image_name.jpg")
To get the image Base64-encoded without saving it to disk (Base64 is in the Ruby stdlib, so require it first):
require 'base64'
encoded_image = Base64.encode64(agent.get(images.first['src']).body)
I ran this just to make sure the encoded image can be decoded back:
File.open("images/image_name.jpg", "wb") {|f| f.write(Base64.decode64(encoded_image))}

Related

want to get taobao's list of product URLs on a search result page without the taobao API

I want to get the list of product URLs on a taobao search result page without using the taobao API.
I tried the following Ruby script.
require "open-uri"
require "rubygems"
require "nokogiri"
url='https://world.taobao.com/search/search.htm?_ksTS=1517338530524_300&spm=a21bp.7806943.20151106.1&search_type=0&_input_charset=utf-8&navigator=all&json=on&q=%E6%99%BA%E8%83%BD%E6%89%8B%E8%A1%A8&cna=htqfEgp0pnwCATyQWEDB%2FRCE&callback=__jsonp_cb&abtest=_AB-LR517-LR854-LR895-PR517-PR854-PR895'
charset = nil
html = open(url) do |f|
charset = f.charset
f.read
end
doc = Nokogiri::HTML.parse(html, nil, charset)
p doc.xpath('//*[#id="list-itemList"]/div/div/ul/li[1]/div/div[1]/div/a/#href').each{|i| puts i.text}
# => 0
I want to get a list of URLs like https://click.simba.taobao.com/cc_im?p=%D6%C7%C4%DC%CA%D6%B1%ED&s=328917633&k=525&e=lDs3%2BStGrhmNjUyxd8vQgTvfT37ERKUkJtUYVk0Fu%2FVZc0vyfhbmm9J7EYm6FR5sh%2BLS%2FyzVVWDh7%2FfsE6tfNMMXhI%2B0UDC%2FWUl0TVvvELm1aVClOoSyIIt8ABsLj0Cfp5je%2FwbwaEz8tmCoZFXvwyPz%2F%2ByQnqo1aHsxssXTFVCsSHkx4WMF4kAJ56h9nOp2im5c3WXYS4sLWfJKNVUNrw%2BpEPOoEyjgc%2Fum8LOuDJdaryOqOtghPVQXDFcIJ70E1c5A%2F3bFCO7mlhhsIlyS%2F6JgcI%2BCdFFR%2BwwAwPq4J5149i5fG90xFC36H%2B6u9EBPvn2ws%2F3%2BHHXRqztKxB9a0FyA0nyd%2BlQX%2FeDu0eNS7syyliXsttpfoRv3qrkLwaIIuERgjVDODL9nFyPftrSrn0UKrE5HoJxUtEjsZNeQxqovgnMsw6Jeaosp7zbesM2QBfpp6NMvKM5e5s1buUV%2F1AkICwRxH7wrUN4%2BFn%2FJ0%2FIDJa4fQd4KNO7J5gQRFseQ9Z1SEPDHzgw%3D, but instead I am getting 0.
What should I do?
I don't know taobao.com, but the page seems to run a lot of JavaScript, so the content probably can't be retrieved by a client without JavaScript support. Instead of open-uri, you could try the selenium-webdriver gem:
https://rubygems.org/gems/selenium-webdriver/versions/2.53.4
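
A minimal sketch of that approach (untested against taobao; the CSS selector is a guess derived from the XPath in the question, and you need a matching browser driver installed for Selenium to control):

require 'selenium-webdriver'

url = 'https://world.taobao.com/search/search.htm?...' # the full search URL from the question

driver = Selenium::WebDriver.for :firefox # or :chrome
driver.get url

# Let the page's JavaScript render, then pull the product links
links = driver.find_elements(css: '#list-itemList a').map { |a| a.attribute('href') }
puts links

driver.quit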

scanning a webpage for urls with ruby and regex

I'm trying to create an array of all the links found at the URL below. Using page.scan(URI.regexp) or URI.extract(page) returns more than just URLs.
How do I get just the URLs?
require 'net/http'
require 'uri'
uri = URI("https://gist.github.com/JsWatt/59f4b8ce6bbf0c7e4dc7")
page = Net::HTTP.get(uri)
p page.scan(URI.regexp)
p URI.extract(page)
If you are just trying to extract links (<a href="..."> elements) from the text file, then it seems better to parse it as real HTML with Nokogiri and extract the links that way:
require 'nokogiri'
require 'open-uri'

# Parse the raw HTML text
doc = Nokogiri.parse(open('https://gist.githubusercontent.com/JsWatt/59f4b8ce6bbf0c7e4dc7/raw/c340b3fbcab7923e52e5b50165432b6e5f2e3cf4/for_scraper.txt'))

# Extract all a-elements (HTML links)
all_links = doc.css('a')

# Sort, weed out duplicates and empty links
links = all_links.map { |link| link.attribute('href').to_s }
                 .uniq.sort
                 .delete_if { |h| h.empty? }

# Print out some of them
puts links.grep(/store/)
http://store.steampowered.com/app/214590/
http://store.steampowered.com/app/218090/
http://store.steampowered.com/app/220780/
http://store.steampowered.com/app/226720/
...
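
As an aside, if you want to stay with the regex approach: URI.extract accepts an optional list of schemes, which at least restricts the matches to real http/https URLs (a stdlib feature, not part of the original answer):

require 'net/http'
require 'uri'

page = Net::HTTP.get(URI("https://gist.github.com/JsWatt/59f4b8ce6bbf0c7e4dc7"))

# Only extract http and https URLs, skipping mailto:, ftp:, etc.
p URI.extract(page, %w[http https])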

Downloading images with Mechanize gem

I'm trying to download all full-res images from a site by checking for image links, visiting them, and downloading the full image.
I have managed to make it kind of work: I can fetch all the links and download the images from i.imgur. However, I want to make it work with more sites, with normal imgur albums, and without wget (which I'm using now, as shown below).
This is the code I'm currently playing around with (don't judge, it's only test code):
require 'mechanize'
require 'uri'

def get_images
  crawler = Mechanize.new
  # Collect all links on the listing page that point at i.imgur
  crawler.get("http://www.reddit.com/r/climbing/new/?count=25&after=t3_39qccc")
         .links_with(href: %r{i.imgur})
end

def download_images
  img_links = get_images
  clean_links = []

  # Deduplicate, and skip links whose URL contains "domain"
  img_links.each do |link|
    current_link = link.uri.to_s
    unless current_link.include?("domain")
      clean_links << current_link unless clean_links.include?(current_link)
    end
  end

  p clean_links

  # Hand each link to wget for the actual download
  clean_links.each do |link|
    system("wget -P ./images -A jpeg,jpg,bmp,gif,png #{link}")
  end
end

download_images
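
For what it's worth, a sketch of the same download step without wget, letting Mechanize fetch and save each file itself (clean_links here stands in for the deduplicated list built above):

require 'mechanize'
require 'fileutils'
require 'uri'

FileUtils.mkdir_p('./images')

crawler = Mechanize.new
clean_links = ['http://i.imgur.com/example.jpg'] # placeholder; use the list built above

clean_links.each do |link|
  # Name each file after the last path segment, e.g. "example.jpg"
  filename = File.basename(URI.parse(link).path)
  crawler.get(link).save("./images/#{filename}")
end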

Web Scraping with Nokogiri::HTML and Ruby - save images

I'm working on a script to grab data & images from webshop product pages
(with approval from the owner).
I have a working script that loops through a CSV file with 20,042 product URLs to get the data I need, which is stored in a CSV file. The final thing I need is to save the product images.
I have this code (thanks to Phrogz in this thread):
URL = 'http://www.sample.com/page.html'

require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'uri'

def make_absolute(href, root)
  URI.parse(root).merge(URI.parse(href)).to_s
end

Nokogiri::HTML(open(URL)).xpath('//*[@id="zoom"]/@href').each do |src|
  uri = make_absolute(src, URL)
  File.open(File.basename(uri), 'wb') { |f| f.write(open(uri).read) }
end
That runs great for a single URL, but I'm struggling to get it working inside the loop over the URLs from the CSV file in my main script, which starts like this:
# encoding: utf-8
require 'nokogiri'
require 'open-uri'
require 'csv'
require 'mechanize'

@prices = Array.new
@title = Array.new
@description = Array.new
@warranty = Array.new
@leadtime = Array.new
@urls = Array.new
@categories = Array.new
@subcategories = Array.new
@subsubcategories = Array.new

urls = CSV.read("lotofurls.csv")

(0..urls.length - 1).each do |index|
  puts urls[index][0]
  doc = Nokogiri::HTML(open(urls[index][0]))
It looks like all I need to figure out is how to feed the URLs into the image-saving code, but any help would be much appreciated!
You can make quick work of this with something like RMagick (or ImageMagick, MiniMagick, etc.).
For RMagick, you could do something like this:
require 'rmagick'

images.each do |image|
  url = image.url # should be a string
  # Read the remote image, resize it, and write it under the desired filename
  Magick::Image.read(url).first.resize_to_fill(200, 200).write(image.desired_filename)
end
That would write a 200x200 px image for each URL you provide (resize_to_fill is optional, obviously). The library is very powerful, with many, many options. If you go this route, I'd recommend the RailsCast on image manipulation: http://railscasts.com/episodes/374-image-manipulation
And the documentation if you want to get more advanced: http://rmagick.rubyforge.org/
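
Back to the original question of feeding the CSV URLs into the image-saving snippet: a minimal sketch (assuming each row of lotofurls.csv holds one product URL in its first column, and reusing the zoom XPath from the question):

require 'nokogiri'
require 'open-uri'
require 'csv'
require 'uri'

def make_absolute(href, root)
  URI.parse(root).merge(URI.parse(href)).to_s
end

CSV.read("lotofurls.csv").each do |row|
  url = row[0]
  # Find the zoom link on each product page and save the image it points to
  Nokogiri::HTML(open(url)).xpath('//*[@id="zoom"]/@href').each do |src|
    uri = make_absolute(src.to_s, url)
    File.open(File.basename(uri), 'wb') { |f| f.write(open(uri).read) }
  end
end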

Iterate through pages nokogiri get link address

I'm trying to get images or image addresses from the website below. It works for the one page I put below, http://www.1stsourceservall.com/Category/Accessories. However, once it's finished with that page, I want it to click on the next-page link and cycle through all 20+ pages. How would I do that?
require 'rubygems'
require 'nokogiri'
require 'open-uri'

url = "http://www.1stsourceservall.com/Category/Accessories"

while url
  doc = Nokogiri::HTML(open(url))
  puts doc.css(".productImageMed")

  # Follow the pagination link; url becomes nil when no link is found,
  # which ends the loop
  link = doc.css('.pagination a')
  url = link[0] && link[0]['href']
end
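
One thing that loop glosses over: if the site's pagination hrefs are relative, they need to be resolved against the current page URL before the next iteration. URI.join from the stdlib handles that (the href below is a made-up example):

require 'uri'

base = "http://www.1stsourceservall.com/Category/Accessories"
href = "/Category/Accessories?page=2" # hypothetical relative href

next_url = URI.join(base, href).to_s
# => "http://www.1stsourceservall.com/Category/Accessories?page=2"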
