Ruby: expand shortened URLs the hard way

Is there a way to open URLs in Ruby and output the redirected URL?
i.e. convert http://bit.ly/l223ue to http://paper.li/CoyDavidsonCRE/1309121465
I find that there are more URL-shortening services than gems can keep up with, so I'm asking for the hard (but robust) way, instead of using a gem that connects to some API.

Here is a lengthen method.
This has very little error handling, but it might help you get started.
You could wrap lengthen in a begin/rescue block that returns nil, or attempt to retry it later (see the sketch after the code). Not sure what you are trying to build, but I hope it helps.
require 'uri'
require 'net/http'
def lengthen(url)
  uri = URI(url)
  # A shortener answers with a redirect; the Location header holds the long URL.
  Net::HTTP.new(uri.host, uri.port).get(uri.path).header['location']
end
irb(main):008:0> lengthen('http://bit.ly/l223ue')
=> "http://paper.li/CoyDavidsonCRE/1309121465"

Related

Setting an HTTP Timeout in Ruby 1.9.3

I'm using Ruby 1.9.3 and need to GET a URL. I have this working with Net::HTTP; however, if the site is down, Net::HTTP ends up hanging.
While searching the internet, I've seen that many people have faced similar problems, all with hacky solutions. However, many of those posts are quite old.
Requirements:
I'd prefer using Net::HTTP to installing a new gem.
I need both the Body and the Response Code. (e.g. 200)
I do not want to require open-uri, since that makes global changes and raises some security issues.
I need to GET a URL within X seconds, or return error.
Using Ruby 1.9.3, how can I GET a URL while setting a timeout?
To clarify, my existing code looks like:
Net::HTTP.get_response(URI.parse(url))
Trying to add:
Net::HTTP.open_timeout(1000)
Results in:
NoMethodError: undefined method `open_timeout' for Net::HTTP:Class
You can set the open_timeout attribute of the Net::HTTP object before making the connection. Note that Net::HTTP timeouts are in seconds, not milliseconds:
uri = URI.parse(url)
http = Net::HTTP.new(uri.hostname, uri.port)
http.open_timeout = 10   # seconds to wait for the connection to open
http.read_timeout = 10   # seconds to wait for data while reading the response
response = http.request_get(uri.request_uri)
response.code  # => "200"
response.body
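To cover the "GET within X seconds or return an error" requirement end to end, here is a small sketch that also rescues the failure cases (fetch_with_timeout is a hypothetical name; on 1.9.3 both the open and read timeouts surface as Timeout::Error):
require 'net/http'
require 'uri'

def fetch_with_timeout(url, seconds)
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.hostname, uri.port)
  http.open_timeout = seconds
  http.read_timeout = seconds
  http.request_get(uri.request_uri)   # a Net::HTTPResponse with .code and .body
rescue Timeout::Error, Errno::ECONNREFUSED, SocketError
  nil   # site is down, unreachable, or too slow
end

response = fetch_with_timeout("http://example.com/", 5)
puts response ? "#{response.code}: #{response.body[0, 60]}" : "request failed"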
I tried all the solutions here and on the other questions about this problem, but I only got everything right with the following code. (open-uri, which ships with Ruby's standard library, is a wrapper around Net::HTTP.)
I needed a GET that had to wait longer than the default timeout and read the response. The code is also simpler.
require 'open-uri'
open(url, :read_timeout => 5 * 60) do |response|
  if response.read[/Return: Ok/i]
    log "sending ok"   # log is the poster's own helper method
  else
    raise "error sending, no confirmation received"
  end
end

How to parse a webpage in Ruby without any library or gem?

I want to use the API of a website in a Ruby script, and the only return from the API is a number over the HTTPS protocol. Nothing more, not even tags or anything, so I was wondering if there is a way to get that number into a string or integer in my script without using any XML parsing library or gem like REXML, Hpricot, or libxml, because the webpages I want to parse are, as I said, extremely basic...
If I understand correctly: a request to https://www.website.com/api/getid returns 2.
Then I guess this would do (the method is named fetch here so it doesn't shadow Kernel#open):
require 'net/https'
require 'uri'

def fetch(url)
  Net::HTTP.get(URI.parse(url))  # returns the response body as a String
end

response = fetch("https://www.website.com/api/getid")
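Since Net::HTTP.get hands back the body as a String, turning it into an integer is just a conversion; Integer(...) is the stricter variant if you want a hard failure on non-numeric bodies:
number = fetch("https://www.website.com/api/getid").to_i            # => 2
number = Integer(fetch("https://www.website.com/api/getid").strip)  # raises on garbage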
EDIT
You'll find many useful examples here.
As mentioned in the link above, HTTParty is quite popular. An example:
require 'httparty'
response = HTTParty.get('http://twitter.com/statuses/public_timeline.json')
puts response.body, response.code, response.message, response.headers.inspect

Download an image from a URL?

I am trying to use HTTP::get to download an image of a Google chart from a URL I created.
This was my first attempt:
failures_url = [title, type, data, size, colors, labels].join("&")
require 'net/http'
Net::HTTP.start("http://chart.googleapis.com") { |http|
resp = http.get("/chart?#{failures_url")
open("pie.png" ,"wb") { |file|
file.write(resp.body)
}
}
Which produced only an empty PNG file.
For my second attempt I used the value stored inside failures_url directly inside the http.get() call.
require 'net/http'
Net::HTTP.start("http://chart.googleapis.com") { |http|
resp = http.get("/chart?chtt=Builds+in+the+last+12+months&cht=bvg&chd=t:296,1058,1217,1615,1200,611,2055,1663,1746,1950,2044,2781,1553&chs=800x375&chco=4466AA&chxl=0:|Jul-2010|Aug-2010|Sep-2010|Oct-2010|Nov-2010|Dec-2010|Jan-2011|Feb-2011|Mar-2011|Apr-2011|May-2011|Jun-2011|Jul-2011|2:|Months|3:|Builds&chxt=x,y,x,y&chg=0,6.6666666666666666666666666666667,5,5,0,0&chxp=3,50|2,50&chbh=23,5,30&chxr=1,0,3000&chds=0,3000")
open("pie.png" ,"wb") { |file|
file.write(resp.body)
}
}
And, for some reason, this version works even though the first attempt had the same data inside the http.get() call. Does anyone know why this is?
SOLUTION:
After trying to figure out why this was happening, I found "How do I download a binary file over HTTP?".
One of the comments mentions removing http:// from the Net::HTTP.start(...) call, otherwise it won't succeed. Sure enough, after I did this:
failures_url = [title, type, data, size, colors, labels].join("&")
require 'net/http'
Net::HTTP.start("chart.googleapis.com") { |http|
resp = http.get("/chart?#{failures_url")
open("pie.png" ,"wb") { |file|
file.write(resp.body)
}
}
it worked.
I'd go after the file using Ruby's Open::URI:
require "open-uri"
File.open('pie.png', 'wb') do |fo|
  fo.write open("http://chart.googleapis.com/chart?#{failures_url}").read
end
The reason I prefer Open::URI is that it handles redirects automatically, so WHEN Google makes a change to their back-end and tries to redirect the URL, the code will handle it magically. It also handles timeouts and retries more gracefully, if I remember right.
If you must have lower-level control, then I'd look at one of the many other HTTP clients for Ruby; Net::HTTP is fine for creating new services or when a client doesn't exist, but I'd use Open::URI or something besides Net::HTTP until the need presents itself.
The URL:
http://chart.googleapis.com/chart?chtt=Builds+in+the+last+12+months&cht=bvg&chd=t:296,1058,1217,1615,1200,611,2055,1663,1746,1950,2044,2781,1553&chs=800x375&chco=4466AA&chxl=0:|Jul-2010|Aug-2010|Sep-2010|Oct-2010|Nov-2010|Dec-2010|Jan-2011|Feb-2011|Mar-2011|Apr-2011|May-2011|Jun-2011|Jul-2011|2:|Months|3:|Builds&chxt=x,y,x,y&chg=0,6.6666666666666666666666666666667,5,5,0,0&chxp=3,50|2,50&chbh=23,5,30&chxr=1,0,3000&chds=0,3000
makes URI upset. I suspect it is seeing characters that should be encoded in URLs.
For documentation purposes, here is what URI says when trying to parse that URL as-is:
URI::InvalidURIError: bad URI(is not URI?)
If I encode the URI first, I get a successful parse. Testing further using Open::URI shows it is able to retrieve the document at that point and returns 23701 bytes.
I think that is the appropriate fix for the problem, if some of those characters are truly not acceptable to URI AND they are outside the RFC.
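A sketch of that fix, assuming the raw URL is in a hypothetical chart_url variable. URI.escape did the job on Rubies of that era; it was deprecated later and removed in 3.0, where URI::DEFAULT_PARSER.escape is the drop-in equivalent:
require 'uri'

# Percent-encode the characters URI objects to (the | separators, etc.),
# then parse the cleaned-up result.
encoded = URI::DEFAULT_PARSER.escape(chart_url)
uri = URI.parse(encoded)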
Just for information, the Addressable::URI gem is a great replacement for the built-in URI.
resp = http.get("/chart?#{failures_url")
If you copied your original code, then you're missing a closing curly brace in your path string.
Your original version did not have the parameter name for each parameter, just the data. For example, for the title you cannot just submit "Builds+in+the+last+12+months"; it must be "chtt=Builds+in+the+last+12+months".
Try this:
failures_url = ["title="+title, "type="+type, "data="+data, "size="+size, "colors="+colors, "labels="+labels].join("&")

Using Watir to check for bad links

I have an unordered list of links that I save off to the side, and I want to click each link and make sure it goes to a real page and doesn't 404, 500, etc.
The issue is that I do not know how to do it. Is there some object I can inspect that will give me the HTTP status code or anything?
mylinks = Browser.ul(:id, 'my_ul_id').links

mylinks.each do |link|
  link.click
  # need to check for a 200 status or something here! how?
  Browser.back
end
My answer is a similar idea to the Tin Man's.
require 'net/http'
require 'uri'
mylinks = Browser.ul(:id, 'my_ul_id').links

mylinks.each do |link|
  u = URI.parse link.href
  status_code = Net::HTTP.start(u.host, u.port) { |http| http.head(u.request_uri).code }
  # testing with RSpec
  status_code.should == '200'
end
If you use Test::Unit as your testing framework, you can test it like the following, I think:
assert_equal '200', status_code
Another sample (including Chuck van der Linden's idea): check the status code and log the URL if the status is not good.
require 'net/http'
require 'uri'
mylinks = Browser.ul(:id, 'my_ul_id').links

mylinks.each do |link|
  u = URI.parse link.href
  status_code = Net::HTTP.start(u.host, u.port) { |http| http.head(u.request_uri).code }
  unless status_code == '200'
    File.open('error_log.txt', 'a+') { |file| file.puts "#{link.href} is #{status_code}" }
  end
end
There's no need to use Watir for this. An HTTP HEAD request will give you an idea whether the URL resolves, and it will be faster.
Ruby's Net::HTTP can do it, or you can use Open::URI.
Using Open::URI you can request a URI and get a page back. Because you don't really care what the page contains, you can throw that part away and only return whether you got something:
require 'open-uri'
if open('http://www.example.com').read.length > 0
  puts "is"
else
  puts "isn't"
end
The upside is that Open::URI resolves HTTP redirects. The downside is that it returns full pages, so it can be slow.
Ruby's Net::HTTP can help somewhat, because it can use HTTP HEAD requests, which don't return the entire page, only a header. That by itself isn't enough to know whether the actual page is reachable, because the HEAD response could redirect to a page that doesn't resolve, so you have to loop through the redirects until you either don't get a redirect or you get an error. The Net::HTTP docs have an example to get you started:
require 'net/http'
require 'uri'
def fetch(uri_str, limit = 10)
  # You should choose a better exception.
  raise ArgumentError, 'HTTP redirect too deep' if limit == 0

  response = Net::HTTP.get_response(URI.parse(uri_str))
  case response
  when Net::HTTPSuccess     then response
  when Net::HTTPRedirection then fetch(response['location'], limit - 1)
  else
    response.error!
  end
end
print fetch('http://www.ruby-lang.org')
Again, that example is returning pages, which might slow you down. You can replace the GET with a HEAD request (Net::HTTP#request_head, which needs a connection object since there's no class-level shortcut); it returns a response like get_response does, just without the body, which should help. A sketch of that variant follows.
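Here is a minimal sketch of the HEAD-based variant, keeping the same redirect limit (fetch_head is a hypothetical name):
require 'net/http'
require 'uri'

def fetch_head(uri_str, limit = 10)
  raise ArgumentError, 'HTTP redirect too deep' if limit == 0

  uri = URI.parse(uri_str)
  # Only headers cross the wire, so sweeping a long link list is much faster.
  response = Net::HTTP.start(uri.host, uri.port) { |http| http.request_head(uri.request_uri) }
  case response
  when Net::HTTPSuccess     then response
  when Net::HTTPRedirection then fetch_head(response['location'], limit - 1)
  else
    response.error!
  end
end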
In either case, there's another thing you have to consider. A lot of sites use "meta refreshes", which cause the browser to refresh the page using an alternate URL after parsing the page. Handling these requires requesting the page and parsing it, looking for <meta http-equiv="refresh" content="5" /> tags; a rough sketch follows.
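This is only a sketch of that meta-refresh check; a regex is used for brevity, but real-world markup varies enough that an HTML parser is the more robust choice. It only covers the common content="N; url=..." form:
require 'net/http'
require 'uri'

url  = 'http://www.example.com'   # hypothetical page to check
body = Net::HTTP.get_response(URI.parse(url)).body

# Look for <meta http-equiv="refresh" content="5; url=http://...">.
if body =~ /<meta[^>]+http-equiv=["']refresh["'][^>]+content=["']\s*\d+\s*;\s*url=([^"']+)["']/i
  refresh_target = $1   # treat this like one more redirect hop
end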
Other HTTP gems like Typhoeus and Patron can also do HEAD requests easily, so take a look at them too. In particular, Typhoeus can handle some heavy loads via its companion Hydra, allowing you to easily run requests in parallel.
EDIT:
require 'typhoeus'

response = Typhoeus::Request.head("http://www.example.com")
response.code # => 302

case response.code
when (200 .. 299)
  # success
when (300 .. 399)
  # redirect; dig the target out of the raw header block
  headers = Hash[*response.headers.split(/[\r\n]+/).map{ |h| h.split(' ', 2) }.flatten]
  puts "Redirected to: #{ headers['Location:'] }"
when (400 .. 499)
  # client error, e.g. 404
when (500 .. 599)
  # server error
end
# >> Redirected to: http://www.iana.org/domains/example/
Just in case you haven't played with one, here's what the response looks like. It's useful for exactly the sort of situation you're looking at:
(rdb:1) pp response
#<Typhoeus::Response:0x00000100ac3f68
 @app_connect_time=0.0,
 @body="",
 @code=302,
 @connect_time=0.055054,
 @curl_error_message="No error",
 @curl_return_code=0,
 @effective_url="http://www.example.com",
 @headers=
  "HTTP/1.0 302 Found\r\nLocation: http://www.iana.org/domains/example/\r\nServer: BigIP\r\nConnection: Keep-Alive\r\nContent-Length: 0\r\n\r\n",
 @http_version=nil,
 @mock=false,
 @name_lookup_time=0.001436,
 @pretransfer_time=0.055058,
 @request=
  :method => :head,
  :url => http://www.example.com,
  :headers => {"User-Agent"=>"Typhoeus - http://github.com/dbalatero/typhoeus/tree/master"},
 @requested_http_method=nil,
 @requested_url=nil,
 @start_time=nil,
 @start_transfer_time=0.109741,
 @status_message=nil,
 @time=0.109822>
If you have a lot of URLs to check, see the Hydra example that is part of Typhoeus.
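For reference, a sketch of the Hydra pattern for checking many links in parallel (the exact API has shifted between Typhoeus versions, so treat this as the shape of the code rather than version-exact):
require 'typhoeus'

urls  = ['http://www.example.com', 'http://www.iana.org']   # hypothetical list
hydra = Typhoeus::Hydra.new

requests = urls.map { |url| Typhoeus::Request.new(url, method: :head) }
requests.each { |request| hydra.queue(request) }

hydra.run   # runs all queued HEAD requests in parallel

requests.each_with_index do |request, i|
  puts "#{urls[i]} => #{request.response.code}"
end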
There's a bit of a philosophical debate on whether Watir or watir-webdriver should provide HTTP return-code information. The premise is that an ordinary 'user', which is what Watir simulates on the DOM, is ignorant of HTTP return codes. I don't necessarily agree with this, as I have a slightly different use case perhaps to the main one (performance testing, etc.)... but it is what it is. This thread expresses some opinions about the distinction => http://groups.google.com/group/watir-general/browse_thread/thread/26486904e89340b7
At present there's no easy way to determine HTTP response codes from Watir without using supplementary tools like proxies/Fiddler/HTTPWatch/tcpdump, or downgrading to a net/http level of scripting mid-test... I personally like using Firebug with the NetExport plugin to keep a retrospective look at tests.
All the previous solutions are inefficient if you have a huge number of links, because for each one a new HTTP connection is established with the server hosting the link.
I have written a one-liner bash command that uses curl to fetch a list of links supplied on stdin and returns a list of status codes corresponding to each link. The key point here is that curl takes the whole batch of links in the same invocation and reuses HTTP connections, which dramatically improves speed.
However, curl divides the list into chunks of 256, which is still far more than 1! To make sure connections are reused, sort the links first (simply using the sort command).
cat <YOUR_LINKS_FILE_ONE_PER_LINE> | xargs curl --head --location -w '---HTTP_STATUS_CODE:%{http_code}\n\n' -s --retry 10 --globoff | grep HTTP_STATUS_CODE | cut -d: -f2 > <RESULTS_FILE>
It is worth noting that the above command follows HTTP redirects, retries up to 10 times on temporary errors (timeouts or 5xx), and of course only fetches headers.
Update: added --globoff so that curl won't expand any URL that contains {} or [].

Is there a workaround to open URLs containing underscores in Ruby?

I'm using open-uri to open URLs.
resp = open("http://sub_domain.domain.com")
If it contains an underscore, I get an error:
URI::InvalidURIError: the scheme http does not accept registry part: sub_domain.domain.com (or bad hostname?)
I understand that this is because, according to the RFC, hostnames can contain only letters, numbers, and hyphens. Is there any workaround?
This looks like a bug in URI, and open-uri, HTTParty, and many other gems make use of URI.parse.
Here's a workaround:
require 'net/http'
require 'open-uri'
def hopen(url)
  open(url)
rescue URI::InvalidURIError
  host = url.match(%r{.+://([^/]+)})[1]
  path = url.partition(host)[2]
  path = "/" if path.empty?   # partition returns "", not nil, for a bare host
  Net::HTTP.get host, path
end
resp = hopen("http://dear_raed.blogspot.com/2009_01_01_archive.html")
URI has an old-fashioned idea of what a URL looks like.
Lately I'm using addressable to get around that:
require 'open-uri'
require 'addressable/uri'
class URI::Parser
  def split(url)
    a = Addressable::URI.parse(url)
    [a.scheme, a.userinfo, a.host, a.port, nil, a.path, nil, a.query, a.fragment]
  end
end
resp = open("http://sub_domain.domain.com") # Yay!
Don't forget to gem install addressable.
This initializer in my Rails app seems to make URI.parse work, at least:
# config/initializers/uri_underscore.rb
class URI::Generic
  def initialize_with_registry_check(scheme, userinfo, host, port, registry,
                                     path, opaque, query, fragment,
                                     parser = DEFAULT_PARSER, arg_check = false)
    if %w(http https).include?(scheme) && host.nil? && registry =~ /_/
      initialize_without_registry_check(scheme, userinfo, registry, port, nil, path, opaque, query, fragment, parser, arg_check)
    else
      initialize_without_registry_check(scheme, userinfo, host, port, registry, path, opaque, query, fragment, parser, arg_check)
    end
  end
  alias_method_chain :initialize, :registry_check
end
Here is a patch that solves the problem for a wide variety of situations (rest-client, open-uri, etc.) without using external gems or overriding parts of URI.parse:
module URI
  # The stock HOSTNAME pattern, widened so "_" is allowed inside labels.
  DEFAULT_PARSER = Parser.new(:HOSTNAME => "(?:(?:[a-zA-Z\\d](?:[-\\_a-zA-Z\\d]*[a-zA-Z\\d])?)\\.)*(?:[a-zA-Z](?:[-\\_a-zA-Z\\d]*[a-zA-Z\\d])?)\\.?")
end
Source: lib/uri/rfc2396_parser.rb#L86
Ruby-core has an open issue: https://bugs.ruby-lang.org/issues/8241
An underscore cannot be contained in a domain name like that. That is part of the DNS standard. Did you mean to use a dash (-)?
Even if open-uri didn't throw an error, such a command would be pointless. Why? Because there is no way it can resolve such a domain name. At best you'd get an unknown-host error. There is no way for you to register a domain name with an _ in it, and even if you run your own private DNS server, it is against the specification to use a _. You could bend the rules and allow it (by modifying the DNS server software), but then your operating system's DNS resolver won't support it, and neither will your router's DNS software.
Solution: Don't try to use a _ in a DNS name. It won't work anywhere, and it's against the specifications.
I had this same error while trying to use gem update / gem install, etc., so I used the IP address instead, and it's fine now.
Here is another ugly hack, no gem needed:
def parse(url = nil)
  URI.parse(url)
rescue URI::InvalidURIError
  host = url.match(%r{.+://([^/]+)})[1]
  uri = URI.parse(url.sub(host, 'dummy-host'))
  uri.instance_variable_set('@host', host)   # '@host', not '#host'
  uri
end
I recommend using the Curb gem (https://github.com/taf2/curb), which just wraps libcurl. Here is a simple example that automatically follows redirects and prints the response code and response body:
rsp = Curl::Easy.http_get(url) { |curl| curl.follow_location = true; curl.max_redirects = 10 }
puts rsp.response_code
puts rsp.body_str
I usually avoid the Ruby URI classes since they stick too strictly to the spec, and as you know, the web is the wild west :) Curl/curb handles every URL I throw at it like a champ.
For anyone stumbling upon this:
Ruby's URI.parse used to be based on RFC 2396 (published in August 1998); see https://bugs.ruby-lang.org/issues/8241
But starting with Ruby 2.2, URI was upgraded to RFC 3986, so if you're on a modern version, no monkey patches are necessary anymore.
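A quick sanity check on Ruby 2.2 or later, where the parse simply succeeds:
require 'uri'

uri = URI.parse("http://sub_domain.domain.com")
uri.host   # => "sub_domain.domain.com"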
