Maintaining session and cookies over a 302 redirect - ruby

I am trying to fetch a PDF file that gets generated on demand behind an auth wall. Based on my testing, the flow is as follows:
I make a GET request with several parameters (including auth credentials) to the appropriate page. That page validates my credentials and then processes my request. When the request is finished processing (nearly instantly), I am sent a 302 response that redirects me to the location of the generated PDF. This PDF can then only be accessed by that session.
Using a browser, there's really nothing strange that happens. I attempted to do the same via curl and wget without any optional parameters, but those both failed. I was able to get curl working by adding -L -b /tmp/cookie.txt as options, though (to follow redirects and store cookies).
According to the ruby-doc, using Net::HTTP.start should get me close to what I want. After playing around with it, I was indeed fairly close. I believe the only issue, however, was that my Set-Cookie values were different between requests, even though they were using the same http object in the same start block.
I tried keeping it as simple as possible and then expanding once I got the results I was looking for:
url = URI.parse("http://dev.example.com:8888/path/to/page.jsp?option1=test1&option2=test2&username=user1&password=password1")
Net::HTTP.start(url.host, url.port) do |http|
  # Request the first URL
  first_req = Net::HTTP::Get.new url
  first_res = http.request first_req
  # Grab the 302 redirect location (it will always be relative like "../servlet/sendfile/result/543675843657843965743895642865273847328.pdf")
  redirect_loc = URI.parse(first_res['Location'])
  # Request the PDF
  second_req = Net::HTTP::Get.new redirect_loc
  second_res = http.request second_req
end
I also attempted to use http.get instead of creating a new request each time, but still no luck.

The problem is with the cookie: it needs to be passed along with the second request. Something like:
second_req = Net::HTTP::Get.new(redirect_loc, {'Cookie' => first_res['Set-Cookie']})
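For completeness, here is a rough sketch of the whole flow with the cookie forwarded (untested; it naively reuses the raw Set-Cookie value rather than a proper cookie jar, and resolves the relative Location header against the first URL):
require 'net/http'
require 'uri'

url = URI.parse("http://dev.example.com:8888/path/to/page.jsp?option1=test1&option2=test2&username=user1&password=password1")

pdf = Net::HTTP.start(url.host, url.port) do |http|
  first_res = http.request(Net::HTTP::Get.new(url))

  # The Location header is relative, so resolve it against the first URL
  redirect_loc = URI.join(url, first_res['Location'])

  second_req = Net::HTTP::Get.new(redirect_loc)
  second_req['Cookie'] = first_res['Set-Cookie']

  http.request(second_req).body
end

File.binwrite("result.pdf", pdf)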

Related

Using Ruby Script to perform a login

My goal is to use a Ruby script to perform a login.
The website uses JavaScript to render the login form, so I cannot use mechanize. I want to avoid using Selenium.
If I log in with false data, I can see under the network section that an action URL is requested:
Request URL: https://www.example.com/admin/bocontroller/bocontroller.cfm?action=dologin
Further down I can see the Form Data:
username: Sample
password: 12345678
Based on this I tried to write several scripts (this being the closest, I hope...):
require "net/http"
require "uri"
uri = URI.parse("https://www.example.com/admin/bocontroller/bocontroller.cfm?action=dologin")
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Post.new(uri.request_uri)
request.set_form_data({'username' => 'Sample', 'password' => '12345678'})
request["Content-Type"] = "application/json"
response = http.request(request)
Unfortunately my script just stops running... and I am kind of lost. Can anyone give me some hints to point me in the right direction? Is this the right approach?
As it seems to have gained some traction as a comment, I thought I'd move it to an answer.
There's a good chance this is timing out because of CSRF protection. Here's a link to the Rails docs explaining this: https://guides.rubyonrails.org/security.html#csrf-countermeasures.
In a nutshell, sites will send (and require) an authenticity token along with any potentially transformative request (POST, PUT, DELETE, etc.), in order to prevent people from sending such requests from outside the domain - as you're doing.
I'm not suggesting you have ill intent, but this is what stops someone from gaining access to something they shouldn't by driving the site in ways it was not designed for.
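If the site does embed a CSRF token in the login form, one workaround is to fetch the form first, extract the token, and post it along with the credentials and the session cookie. A rough sketch, assuming the token lives in a hidden input named "csrf_token" (the field name and the login page URL below are hypothetical; inspect the real form to find them):
require 'net/http'
require 'uri'
require 'nokogiri'

login_page_uri   = URI.parse("https://www.example.com/admin/login.cfm") # hypothetical
login_action_uri = URI.parse("https://www.example.com/admin/bocontroller/bocontroller.cfm?action=dologin")

Net::HTTP.start(login_page_uri.host, login_page_uri.port, use_ssl: true) do |http|
  # 1. Fetch the login page to obtain the session cookie and the CSRF token
  page_res = http.get(login_page_uri.request_uri)
  token_el = Nokogiri::HTML(page_res.body).at_css('input[name="csrf_token"]')
  token    = token_el && token_el['value']

  # 2. Post the credentials as form data (not JSON), with cookie and token included
  request = Net::HTTP::Post.new(login_action_uri.request_uri)
  request.set_form_data('username' => 'Sample', 'password' => '12345678', 'csrf_token' => token)
  request['Cookie'] = page_res['Set-Cookie']

  response = http.request(request)
  puts response.code
end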

Wait for selector to be present

When doing web scraping with Nokogiri I occasionally get the following error message
undefined method `at_css' for nil:NilClass (NoMethodError)
I know that the selected element is present at some time, but the site is sometimes a bit slow to respond, and I guess this is the reason why I'm getting the error.
Is there some way to wait until a certain selector is present before proceeding with the script?
My current HTTP request block looks like this:
require 'net/http'
require 'nokogiri'

url = URL
body = BODY
uri = URI.parse(url)
http = Net::HTTP.new(uri.host, uri.port)
http.read_timeout = 200 # default 60 seconds
http.open_timeout = 200 # default nil
http.use_ssl = true
request = Net::HTTP::Post.new(uri.request_uri)
request.body = body
request["Content-Type"] = "application/x-www-form-urlencoded"
begin
  response = http.request(request)
  doc = Nokogiri::HTML(response.body)
rescue
  sleep 100
  retry
end
While you can use a streaming Net::HTTP like @Stefan says in his comment, and an associated handler that includes Nokogiri, you can't parse a partial HTTP document using a DOM model, which is Nokogiri's default, because the DOM parser expects the full document.
You could use Nokogiri's SAX parser, but that's an entirely different programming style.
If you're retrieving an entire page, then use OpenURI instead of the lower-level Net::HTTP. It automatically handles a number of things that Net::HTTP will not do by default, such as redirection, which makes it a lot easier to retrieve pages and will greatly simplify your code.
I suspect the problem is either that the site is timing out, or the tag you're trying to find is dynamically loaded after the real page loads.
If it's timing out you'll need to increase your wait time.
If it's dynamically loading that markup, you can request the main page, locate the appropriate URL for the dynamic content and load it separately. Once you have it, you can either insert it into the first page if you need everything, or just parse it separately.
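As a rough illustration of the OpenURI route with a bounded retry while the selector is missing (the URL, selector, and retry limits below are placeholders, not from the question):
require 'open-uri'
require 'nokogiri'

# Retry a few times if the selector is not yet present in the returned markup
def fetch_with_selector(url, selector, attempts: 5, delay: 10)
  attempts.times do
    html = URI.open(url, read_timeout: 200) # plain open(url, ...) on older Rubies
    node = Nokogiri::HTML(html).at_css(selector)
    return node if node
    sleep delay
  end
  nil
end

node = fetch_with_selector("https://example.com/page", "#result-table")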

Output Net::HTTP request to human readable form

I am trying to debug a request that I am making to a server, but I can't figure out what is wrong with it, as it seems to be just like the request that I am putting through on RESTClient.
I am initializing it like so:
request = Net::HTTP::Post.new(url.to_s)
request.add_field "HeaderKey", "HeaderValue"
request.body = requestBody
and then I am executing it like so:
Net::HTTP.start(url.host, url.port) do |http|
  response = http.request(request)
end
The requestBody is a string that is encoded with Base64.encode64.
Is there a way to output the request to see exactly where it's going and with what contents? I've used Paros for checking my iOS connections and I can also output a description of requests from most platforms I've worked with, but I can't figure it out for Ruby.
I've found that HTTP Scoop works pretty well for grabbing network traffic (disclaimer - it's not free!)
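Not mentioned above, but Net::HTTP can also dump the raw request and response itself via set_debug_output (intended for debugging only, as it writes headers and bodies to the log); a minimal sketch with a hypothetical URL and payload:
require 'net/http'
require 'uri'

url = URI.parse("http://example.com/endpoint") # hypothetical URL

http = Net::HTTP.new(url.host, url.port)
http.set_debug_output($stderr) # prints the raw wire traffic

request = Net::HTTP::Post.new(url.request_uri)
request.add_field "HeaderKey", "HeaderValue"
request.body = "base64-encoded payload here"

response = http.request(request)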

Fetching only X/HTML links (not images) based on mime type

I'm crawling a site using Ruby + OpenURI + Nokogiri. I fetch a page, find all the a[href] links, and (if they're in the same domain and use the right protocol) follow them to crawl again.
Sometimes there are links to large binaries (e.g. jpeg, exe), and I don't want to crawl those.
I tried using the HTTP "Accept" header to get an error or empty response for the wrong mime types like so:
require 'open-uri'
page = open(url, 'Accept'=>'text/html,application/xhtml+xml,application/xml')
...but OpenURI still downloads binaries sent with another mime type.
Other than looking at file extensions in the url for a probable file type, how can I prevent the download (or detect a conflicting response type) for an arbitrary URL?
You could send a HEAD request first, then check the Content-type header of the response and only make the real request if it’s acceptable:
require 'net/http'

ACCEPTABLE_TYPES = %w{text/html application/xhtml+xml application/xml}
uri = URI(url)
type = Net::HTTP.start(uri.host, uri.port) do |http|
  http.head(uri.path).content_type
end
if ACCEPTABLE_TYPES.include? type
  # fetch the url
else
  # do whatever
end
This will need an extra request for each page, but I can’t see a way of avoiding it. It also relies on the server sending the same headers for a HEAD request as it does for a GET, which I think is a reasonable assumption but something to be aware of.
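Wrapped into a hypothetical helper for use inside a crawl loop (the method name and the HTTPS handling are my additions, not from the answer above):
require 'net/http'
require 'uri'

ACCEPTABLE_TYPES = %w{text/html application/xhtml+xml application/xml}

# Returns true when the server reports an acceptable Content-Type for the URL
def html_like?(url)
  uri = URI(url)
  type = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.head(uri.request_uri).content_type
  end
  ACCEPTABLE_TYPES.include?(type)
rescue StandardError
  false
end

# crawl(url) if html_like?(url)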

Ruby's open-uri and cookies

I would like to store the cookies from one open-uri call and pass them to the next one. I can't seem to find the right docs for doing this. I'd appreciate it if you could tell me the right way to do this.
NOTES: w3.org is not the actual url, but it's shorter; pretend cookies matter here.
h1 = open("http://www.w3.org/")
h2 = open("http://www.w3.org/People/Berners-Lee/", "Cookie" => h1.FixThisSpot)
Update after 2 nays: While this wasn't intended as a rhetorical question, I guarantee that it's possible.
Update after tumbleweeds: See (the answer), it's possible. Took me a good while, but it works.
I thought someone would just know, but I guess it's not commonly done with open-uri.
Here's the ugly version that neither checks for privacy, expiration, the correct domain, nor the correct path:
h1 = open("http://www.w3.org/")
h2 = open("http://www.w3.org/People/Berners-Lee/",
"Cookie" => h1.meta['set-cookie'].split('; ',2)[0])
Yes, it works. No it's not pretty, nor fully compliant with recommendations, nor does it handle multiple cookies (as is).
Clearly, HTTP is a very straightforward protocol, and open-uri gives you access to most of it. I guess what I really needed to know was how to get the cookie from the h1 request so that it could be passed to the h2 request (that part I already knew and showed). The surprising thing here is how many people basically answered by telling me not to use open-uri, and only one of those showed how to get a cookie set in one request passed to the next request.
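Since the snippet above handles only a single cookie, here is a slightly more general sketch (still ignoring domain, path, and expiry; on Ruby 3+ use URI.open instead of open). open-uri's .metas exposes each Set-Cookie header separately, so several cookies can be combined into one Cookie header:
require 'open-uri'

h1 = open("http://www.w3.org/")
raw_cookies = h1.metas['set-cookie'] || []
cookies = raw_cookies.map { |c| c.split('; ', 2).first }.join('; ')

h2 = open("http://www.w3.org/People/Berners-Lee/", "Cookie" => cookies)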
You need to add a "Cookie" header.
I'm not sure if open-uri can do this or not, but it can be done using Net::HTTP.
# Create a new connection object.
conn = Net::HTTP.new(site, port)
# Get the response when we login, to set the cookie.
# body is the encoded arguments to log in.
resp, data = conn.post(login_path, body, {})
cookie = resp.response['set-cookie']
# Headers need to be in a hash.
headers = { "Cookie" => cookie }
# On a get, we don't need a body.
resp, data = conn.get(path, headers)
Thanks Matthew Schinckel, your answer was really useful. Using Net::HTTP, I was successful:
# Create a new connection object.
site = "google.com"
port = 80
conn = Net::HTTP.new(site, port)
# Get the response when we login, to set the cookie.
# body is the encoded arguments to log in.
resp, data = conn.post(login_path, body, {})
cookie = resp.response['set-cookie']
# Headers need to be in a hash.
headers = { "Cookie" => cookie }
# On a get, we don't need a body.
resp, data = conn.get(path, headers)
puts resp.body
Depending on what you are trying to accomplish, check out webrat. I know it is usually used for testing, but it can also hit live sites, and it does a lot of the stuff that your web browser would do for you, like store cookies between requests and follow redirects.
If you are using open-uri, you would have to roll your own cookie support by parsing the meta headers when reading and adding a Cookie header when submitting a request. Consider using httpclient (http://raa.ruby-lang.org/project/httpclient/) or something like mechanize (http://mechanize.rubyforge.org/) instead, as they have cookie support built in.
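For example, a minimal mechanize sketch (assuming the gem is installed); the agent keeps a cookie jar and sends it back automatically between requests:
require 'mechanize'

agent = Mechanize.new
agent.get("http://www.w3.org/")                            # any Set-Cookie lands in agent.cookie_jar
page = agent.get("http://www.w3.org/People/Berners-Lee/")  # Cookie header sent automatically
puts page.title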
There is an RFC 2109 and RFC 2965 cookie jar implementation here, for those who want standards-compliant cookie handling:
https://github.com/dwaite/cookiejar
