Ruby basic syntax and Net::HTTP

I am completely new to ruby. I have the following code:
body = "hello"
site = "api.mysite.net"
port = 80
conn = Net::HTTP.new(site, port)
resp, data = conn.post("/v1/profile", body, {})
puts body
my questions are:
Where should I go for documentation on how Net::HTTP.new(), conn.post(), etc. work?
What does the comma between resp and data mean?
How come puts body gives me nothing even though I set it to "hello" initially? I figured passing it through post() would assign it a value, but instead it's puts resp.body that actually gives me the HTTP response.
This is all so new to me, just trying to get a handle on things.

Read the docs, I guess, but you will need background knowledge of HTTP to really understand them.
That's shorthand for assigning two variables at the same time, assuming the right-hand side returns an array of 2 (or more) items.
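A quick illustration of parallel assignment (made-up values):
pair = ["response", "data"]
resp, data = pair   # resp == "response", data == "data"
a, b = 1, 2         # a == 1, b == 2
x, y = 1            # x == 1, y == nil (nothing left over for y)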
You've posted the body in your request; resp.body is the body of the response. I don't know why body should be empty, though. I would double-check that, but it sounds like a side effect of conn.post if anything.
BTW there are several nice third-party gems which make HTTP client development much easier than dealing with Net::HTTP, e.g. RestClient, Excon, HTTParty. Check these out. Or if you want to stick with the standard Ruby library, look at open-uri as a higher-level API.
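For instance, a minimal rest-client sketch of the same POST (the endpoint is just the asker's example host, so don't expect it to resolve):
require 'rest-client'  # gem install rest-client
resp = RestClient.post("http://api.mysite.net/v1/profile", "hello")
puts resp.code
puts resp.body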

Related

Totally stuck trying to get HTTPS data using Ruby on Windows

I'm using Ruby 1.9.3 and trying to write a Google Play scraper loosely based on this one. I am having a really hard time with the HTTPS part of it.
Basically, using Nokogiri::HTML(open("https://play.google.com/store/#{type}/details?id=#{id}")) (as in the original gem) failed on Windows, for reasons explained on this thread.
So, I tried implementing the solution from that same thread, but it is really not working at all. I've even stopped trying with HTTPS for now, because there must be something basic I am missing on even just HTTP.
Here's the code I currently have:
url = URI.parse( "http://google.com/" )
http = Net::HTTP.new( url.host, url.port )
http.use_ssl = true if url.port == 443
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
res, data = http.get ("http://google.com/")
puts data
In this case, I get nothing. Not even "nil", just no output at all.
However, when I just do a straight Net::HTTP.get_print URI('http://www.google.com'), I get the output, no problems.
Any help would be most appreciated. The real solution I am looking for is a simple way to scrape Google Play pages when using Windows -- this is just a step on the way there. So, if you know of a simpler way to accomplish this, I'd love to hear about it.
The reason you are getting nil is that data never has anything assigned to it. This line only assigns to res:
res, data = http.get("http://google.com/")
Also, Google must be accessed as http://www.google.com, with the www; otherwise all you get back is a 301 redirect and a Net::HTTPMovedPermanently object.
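Putting the two fixes together, a minimal sketch (VERIFY_NONE is kept from the question only as a Windows workaround; avoid it where you can):
require 'net/http'
require 'openssl'
url = URI.parse("https://www.google.com/")
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = (url.scheme == "https")
http.verify_mode = OpenSSL::SSL::VERIFY_NONE if http.use_ssl?
res = http.get(url.request_uri)  # get returns a single Net::HTTPResponse
puts res.body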

Can I reference a complete Ruby Net::HTTP request as a string before sending?

I'm using Net::HTTP in Ruby 1.9.2p290 to handle some, obviously, networking calls.
I now have a need to see the complete request that is sent to the server (as one long String conforming to HTTP/1.0 or 1.1).
In other words, I want Net::HTTP to handle the heavy lifting of generating the HTTP standard-compliant request+body, but I want to send the string with a custom delivery mechanism.
Net::HTTPRequest doesn't seem to have any helpful methods here -- do I need to go lower down the stack and hijack something?
Does anyone know of a good library, maybe other than Net::HTTP, that could help?
EDIT: I'd also like to do the same going the other way (turning a string response into a Net::HTTP::* object) -- although it seems I may be able to instantiate Net::HTTPResponse by myself?
Request:
require 'net/http'
require 'stringio'
post = Net::HTTP::Post.new('http://google.com')
post.set_form_data :query => 'ruby http'
# exec serializes the request onto anything that responds to write
sio = StringIO.new
post.exec sio, Net::HTTP::HTTPVersion, post.path
puts sio.string
Response:
require 'net/http'
require 'stringio'
# the header section must end with a blank line, or read_new hits EOF
sio = StringIO.new("HTTP/1.1 200 OK\r\n\r\n")
bio = Net::BufferedIO.new(sio)
res = Net::HTTPResponse.read_new(bio)

How do I make a uber-simple API wrapper in Ruby?

I'm trying to use a super simple API from is.gd:
http://is.gd/api.php?longurl=http://www.example.com
Which returns the response header "HTTP/1.1 200 OK" if the URL was shortened as expected, or "HTTP/1.1 500 Internal Server Error" if there was any problem that prevented this. Assuming the request was successful, the body of the response will contain only the new shortened URL.
I don't even know where to begin or if there are any available ruby methods to make sending and receiving of these API requests frictionless. I basically want to assign the response (the shortened url) to a ruby object.
How would you do this? Thanks in advance.
Super simple:
require 'open-uri'
def shorten(url)
  open("http://is.gd/api.php?longurl=#{url}").read
rescue
  nil
end
open-uri is part of the Ruby standard library and (among other things) makes it possible to do HTTP requests using the open method (which usually opens files). open returns an IO, and calling read on the IO returns the body. open-uri will raise an exception if the server returns a 500 error; in this case I'm catching the exception and returning nil, but if you want you can let the exception bubble up to the caller, or raise another exception.
Oh, and you would use it like this:
url = "http://www.example.com"
puts "The short version of #{url} is #{shorten(url)}"
I know you already got an answer you accepted, but I still want to mention httparty, because I've had very good experiences wrapping APIs (Delicious and GitHub) with it.
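For comparison, a sketch of the same wrapper with httparty (my own illustration, not taken from its docs):
require 'httparty'  # gem install httparty
def shorten(url)
  response = HTTParty.get("http://is.gd/api.php", query: { longurl: url })
  response.success? ? response.body : nil
end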

Ruby's open-uri and cookies

I would like to store the cookies from one open-uri call and pass them to the next one. I can't seem to find the right docs for doing this. I'd appreciate it if you could tell me the right way to do this.
NOTES: w3.org is not the actual url, but it's shorter; pretend cookies matter here.
h1 = open("http://www.w3.org/")
h2 = open("http://www.w3.org/People/Berners-Lee/", "Cookie" => h1.FixThisSpot)
Update after 2 nays: While this wasn't intended as a rhetorical question, I guarantee that it's possible.
Update after tumbleweeds: See the answer below; it's possible. Took me a good while, but it works. I thought someone would just know, but I guess it's not commonly done with open-uri.
Here's the ugly version that checks for neither privacy, expiration, the correct domain, nor the correct path:
require 'open-uri'
h1 = open("http://www.w3.org/")
h2 = open("http://www.w3.org/People/Berners-Lee/",
          "Cookie" => h1.meta['set-cookie'].split('; ', 2)[0])
Yes, it works. No, it's not pretty, nor fully compliant with recommendations, nor does it handle multiple cookies (as is).
Clearly, HTTP is a very straightforward protocol, and open-uri gives you access to most of it. I guess what I really needed to know was how to get the cookie from the h1 request so that it could be passed to the h2 request (that part I already knew and showed). The surprising thing here is how many people basically answered by telling me not to use open-uri, and only one of those showed how to get a cookie set in one request passed to the next.
You need to add a "Cookie" header.
I'm not sure if open-uri can do this or not, but it can be done using Net::HTTP.
require 'net/http'
# Create a new connection object.
conn = Net::HTTP.new(site, port)
# Get the response when we log in, to set the cookie.
# body is the encoded arguments to log in.
resp, data = conn.post(login_path, body, {})
cookie = resp.response['set-cookie']
# Headers need to be in a hash.
headers = { "Cookie" => cookie }
# On a GET, we don't need a body.
resp, data = conn.get(path, headers)
Thanks Matthew Schinckel, your answer was really useful. Using Net::HTTP I was successful:
require 'net/http'
# Create a new connection object.
site = "google.com"
port = 80
conn = Net::HTTP.new(site, port)
# Get the response when we login, to set the cookie.
# body is the encoded arguments to log in.
resp, data = conn.post(login_path, body, {})
cookie = resp.response['set-cookie']
# Headers need to be in a hash.
headers = { "Cookie" => cookie }
# On a get, we don't need a body.
resp, data = conn.get(path, headers)
puts resp.body
Depending on what you are trying to accomplish, check out webrat. I know it is usually used for testing, but it can also hit live sites, and it does a lot of the stuff that your web browser would do for you, like store cookies between requests and follow redirects.
If you are using open-uri, you would have to roll your own cookie support by parsing the meta headers when reading and adding a Cookie header when submitting a request. Consider using httpclient (http://raa.ruby-lang.org/project/httpclient/) or something like Mechanize (http://mechanize.rubyforge.org/) instead, as they have cookie support built in; see the sketch below.
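For illustration, a minimal Mechanize sketch (same stand-in w3.org URLs as the question):
require 'mechanize'  # gem install mechanize
agent = Mechanize.new
agent.get("http://www.w3.org/")  # any Set-Cookie is stored in the agent's cookie jar
page = agent.get("http://www.w3.org/People/Berners-Lee/")  # Cookie header sent back automatically
puts agent.cookies  # inspect what was stored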
For those who want standards-compliant cookie handling, there is an RFC 2109 and RFC 2965 cookie jar implementation to be found here:
https://github.com/dwaite/cookiejar

How do I read only x number of bytes of the body using Net::HTTP?

It seems like the methods of Ruby's Net::HTTP are all or nothing when it comes to reading the body of a web page. How can I read, say, just the first 100 bytes of the body?
I am trying to read from a content server that returns a short error message in the body of the response if the file requested isn't available. I need to read enough of the body to determine whether the file is there. The files are huge, so I don't want to get the whole body just to check if the file is available.
This is an old thread, but the question of how to read only a portion of a file via HTTP in Ruby is still a mostly unanswered one according to my research. Here's a solution I came up with by monkey-patching Net::HTTP a bit:
require 'net/http'

# provide access to the actual socket
class Net::HTTPResponse
  attr_reader :socket
end

uri = URI("http://www.example.com/path/to/file")
begin
  Net::HTTP.start(uri.host, uri.port) do |http|
    request = Net::HTTP::Get.new(uri.request_uri)
    # calling request with a block prevents the body from being read
    http.request(request) do |response|
      # do whatever limited reading you want to do with the socket
      x = response.socket.read(100)
      # be sure to call finish before exiting the block
      http.finish
    end
  end
rescue IOError
  # ignore
end
The rescue catches the IOError that's raised when you call http.finish prematurely.
FYI, the socket within the HTTPResponse object isn't a true IO object (it's an internal class called BufferedIO), but it's pretty easy to monkey-patch that, too, to mimic the IO methods you need. For example, another library I was using (exifr) needed the readchar method, which was easy to add:
class Net::BufferedIO
  def readchar
    read(1)[0].ord
  end
end
Shouldn't you just use an HTTP HEAD request (Net::HTTP#head in Ruby) to see if the resource is there, and only proceed if you get a 2xx or 3xx response? This presumes your server is configured to return a 4xx error code if the document is not available. I would argue this is the correct solution.
An alternative is to request the HTTP head and look at the content-length header value in the result: if your server is correctly configured, you should easily be able to tell the difference in length between a short message and a long document. Another alternative: set the Range header field in the request (which again assumes that the server is behaving correctly WRT the HTTP spec).
I don't think that solving the problem in the client after you've sent the GET request is the way to go: by that time, the network has done the heavy lifting, and you won't really save any wasted resources.
Reference: HTTP header definitions
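A sketch of both suggestions (the URL is a stand-in, and Range support depends on the server):
require 'net/http'
uri = URI("http://www.example.com/path/to/file")
Net::HTTP.start(uri.host, uri.port) do |http|
  # HEAD: status and headers only, no body is transferred
  head = http.head(uri.request_uri)
  puts head.code               # "200", "404", ...
  puts head['content-length']  # size a GET would return, if the server sets it
  # Range: ask for only the first 100 bytes
  get = Net::HTTP::Get.new(uri.request_uri)
  get['Range'] = 'bytes=0-99'
  part = http.request(get)
  puts part.code               # "206" if the server honors the range
end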
I wanted to do this once, and the only thing that I could think of is monkey-patching the Net::HTTPResponse#read_body and Net::HTTPResponse#read_body_0 methods to accept a length parameter, then in the former just passing the length parameter through to read_body_0, where you can read only as many as length bytes.
To read the body of an HTTP response in chunks, you'll need to use Net::HTTPResponse#read_body like this:
http.request_get('/large_resource') do |response|
  response.read_body do |segment|
    print segment
  end
end
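Building on that, a sketch that stops after the first 100 bytes; catch/throw is my own way of bailing out of read_body, and '/large_resource' is the placeholder from the answer above:
require 'net/http'
Net::HTTP.start('www.example.com') do |http|
  buffer = ""
  catch(:done) do
    http.request_get('/large_resource') do |response|
      response.read_body do |segment|
        buffer << segment
        throw :done if buffer.bytesize >= 100  # bail once we have enough
      end
    end
  end
  puts buffer.byteslice(0, 100)
end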
Are you sure the content server only returns a short error page? Doesn't it also set the HTTP status to something appropriate, like 404? In that case the response will be a Net::HTTPClientError subclass (most likely Net::HTTPNotFound), and calling Net::HTTPResponse#value will raise an exception for it.
If you get an error, your file wasn't there; if you get a 200, the file is starting to download and you can close the connection.
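A minimal sketch of that check, using HEAD so no body is transferred (the exact exception class raised by value varies across Ruby versions):
require 'net/http'
uri = URI("http://www.example.com/path/to/file")
res = Net::HTTP.start(uri.host, uri.port) { |http| http.head(uri.request_uri) }
begin
  res.value  # nil on 2xx, raises on 3xx/4xx/5xx
  puts "file is there"
rescue => e  # e.g. a 404 lands here
  puts "not available: #{e.message}"
end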
You can't. But why do you need to? Surely if the page just says that the file isn't available then it won't be a huge page (i.e. by definition, the file won't be there)?
