I'm trying to learn to use AJAX with Rails.
Here is my client side coffeescript code:
$(document).ready ->
  $("#url").blur ->
    $.get("/test_url?url=" + $(this).val(), (data) ->
      alert("Response code: " + data)
    ).fail( () ->
      alert("Why am I failing?")
    )
Here is my server-side Ruby code:
def url_response
  url = URI.parse(params[:url])
  Net::HTTP.get_response(url).code unless url.port.nil?
end
The Ruby code is being called and correctly returns the HTTP response code, but I can't do anything with the data because the client-side script says the call has failed. As far as I can see, it is not failing. url_response is being called and it is returning a value, so what exactly is failing here?
The problem was that I had removed the line that rendered the response. I previously had it in, but thanks to Frederick Cheung's hint to check whether the URL works directly in the browser, I realised that it no longer worked in the browser as it had previously, which is why I didn't think to check again!
The code below got everything working again.
def url_response
  url = URI.parse(params[:url])
  render :text => Net::HTTP.get_response(url).code unless url.port.nil?
end
I have been trying for days to pull down activity data from the Withings API using the OAuth Ruby gem. Regardless of what method I try, I consistently get back a 503 error response (not enough params), even though I copied the example URI from the documentation, having of course swapped out the userid. Has anybody had any luck with this in the past? I hope it is just something stupid I am doing.
class Withings
  API_KEY = 'REMOVED'
  API_SECRET = 'REMOVED'
  CONFIGURATION = { site: 'https://oauth.withings.com',
                    request_token_path: '/account/request_token',
                    access_token_path: '/account/access_token',
                    authorize_path: '/account/authorize' }

  before do
    @consumer = OAuth::Consumer.new API_KEY, API_SECRET, CONFIGURATION
    @base_url ||= "#{request.env['rack.url_scheme']}://#{request.env['HTTP_HOST']}#{request.env['SCRIPT_NAME']}"
  end

  get '/' do
    @request_token = @consumer.get_request_token oauth_callback: "#{@base_url}/access_token"
    session[:token] = @request_token.token
    session[:secret] = @request_token.secret
    redirect @request_token.authorize_url
  end

  get '/access_token' do
    @request_token = OAuth::RequestToken.new @consumer, session[:token], session[:secret]
    @access_token = @request_token.get_access_token oauth_verifier: params[:oauth_verifier]
    session[:token] = @access_token.token
    session[:secret] = @access_token.secret
    session[:userid] = params[:userid]
    redirect "#{@base_url}/activity"
  end

  get '/activity' do
    @access_token = OAuth::AccessToken.new @consumer, session[:token], session[:secret]
    response = @access_token.get("http://wbsapi.withings.net/v2/measure?action=getactivity&userid=#{session[:userid]}&startdateymd=2014-01-01&enddateymd=2014-05-09")
    JSON.parse(response.body)
  end
end
For other API endpoints I get an error response of 247 - The userid provided is absent, or incorrect. This is really frustrating. Thanks
So I figured out the answer after copious amounts of Googling and gaining a better understanding of both the Withings API and the OAuth library I was using. Basically, Withings uses query strings to pass in API parameters. I thought I was passing these parameters correctly when making API calls, but apparently I needed to explicitly set the OAuth library to use the query string scheme, like so:
http_method: :get, scheme: :query_string
This is appended to my OAuth consumer configuration, and everything worked fine immediately.
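For illustration, the CONFIGURATION hash from the question would presumably end up looking something like this, with the two extra keys being the only change:

CONFIGURATION = { site: 'https://oauth.withings.com',
                  request_token_path: '/account/request_token',
                  access_token_path: '/account/access_token',
                  authorize_path: '/account/authorize',
                  # tell the OAuth gem to pass parameters in the query string
                  http_method: :get, scheme: :query_string }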
I'm using the Google Custom Search API and I'm trying to access it through some Ruby code. Here is a snippet of the code:
req = Typhoeus::Request.new("https://www.googleapis.com/customsearch/v1?key={my_key}&cx=017576662512468239146:omuauf_lfve&q=" + keyword, followlocation: true)
res = req.run
It appears that the body of the response is this:
<p>Your client has issued a malformed or illegal request. <ins>That’s all we know.</ins>
'
from /usr/local/lib/ruby/2.1.0/json/common.rb:155:in `parse'
from main.rb:20:in `initialize'
from main.rb:41:in `new'
from main.rb:41:in `<main>'
When I try to do the same thing from the browser it works like a charm. Even more confusing is that this same code worked 12 hours ago. I only changed the keyword that it should look for, yet it started returning this error.
Any suggestions? I'm sure that I have enough credits for more requests.
You probably have problems with special characters in your GET parameter keyword. If you enter the URL in your browser, the browser adjusts these for you. In Ruby, however, you need to escape these characters yourself, so that a string like "sky line" becomes "sky+line" and so on. There is a utility function, CGI::escape, which is used like this:
require 'cgi'
CGI::escape("sky line")
=> "sky+line"
Your fixed code would look something like this:
req = Typhoeus::Request.new("https://www.googleapis.com/customsearch/v1?key={my_key}&cx=017576662512468239146:omuauf_lfve&q=" + CGI::escape(keyword), followlocation: true)
res = req.run
However, since you're using Typhoeus anyway, you should be able to use its params parameter and let Typhoeus handle the escaping:
req = Typhoeus::Request.new(
  "https://www.googleapis.com/customsearch/v1?cx=017576662512468239146:omuauf_lfve",
  followlocation: true,
  params: { q: keyword, key: my_key }
)
res = req.run
There are more examples on Typhoeus' GitHub page.
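If it helps, here is a quick sketch of inspecting what comes back, using standard Typhoeus response accessors and assuming the req object built above:

require 'typhoeus'
require 'json'

res = req.run
puts res.code           # HTTP status code returned by the API
puts res.effective_url  # final URL after any redirects
data = JSON.parse(res.body) if res.success?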
I have a Rails action which responds with head :ok, rather than rendering any content. I'm calling this action using RestClient, like so:
resp = RestClient.post("#{api_server_url}/action/path", {:param_1 => thing, :param_2 => other_thing}, :authorization => auth)
The Rails server log shows that this worked as expected:
Completed 200 OK in 78ms (ActiveRecord: 21.3ms)
However, the resulting value of resp is the string " ", rather than an object I can examine (to see what its status code is, for instance).
I tried changing the action to use head :created instead, just to see if it produced a different result, but it's the same: " ".
How can I get the status code of this response?
RestClient.post returns an instance of the class RestClient::Response, which inherits from the String class.
You can still check the return code by calling the code method: resp.code. Other useful methods include resp.headers and resp.cookies.
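A minimal sketch of how that might look with the call from the question (the URL and parameters are the placeholders from the question itself):

resp = RestClient.post("#{api_server_url}/action/path",
                       { :param_1 => thing, :param_2 => other_thing },
                       :authorization => auth)

resp.code     # => 200, the integer status code
resp.headers  # => hash of response headers
resp.body     # => the raw body, " " here since the action renders no content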
I've written a short snippet which sends a GET request, performs auth and checks if there is a 200 OK response (when auth succeeds). One thing I noticed with this specific GET request is that the response is always 200, irrespective of whether auth succeeds or not.
The difference is in the HTTP response: when auth fails, the first response is 200 OK, just the same as when auth succeeds, but after this there is a second step and the page gets redirected back to the login page.
I am just trying to make a quick script which can check my login user and pass on my web application and tell me which auth passed and which didn't.
How should I check this? The sample code is like this:
def funcA(u, p)
  print_A("#{ip} - '#{u}' : '#{p}' - Pass")
end

def try_login(u, p)
  path = "/index.php?uuser=#{u}&ppass=#{p}"
  r = send_request_raw({
    'URI'    => path,
    'method' => 'GET'
  })
  if (r and r.code.to_i == 200)
    check = true
  end
  if check == true
    funcA(u, p)
  else
    out = "#{ip} - '#{u}' - Fail"
    print_B(out)
  end
  return check, r
end
end
Update:
I also tried adding a new check for matching a 'Success/Fail' keyword coming back in the HTTP response. It didn't work either. But I have now noticed that the response coming back seems to be in a different form. The Content-Type of the response is text/html;charset=utf-8, though, and I am not doing any parsing, so it is failing.
Success Response is in form of:
{"param1":1,"param2"="Auth Success","menu":0,"userdesc":"My User","user":"uuser","pass":"ppass","check":"success"}
Fail response is in form of:
{"param1":-1,"param2"="Auth Fail","check":"fail"}
So now I need some pointers on how to parse this response.
Many Thanks.
I do this with "net/http":
require 'net/http'

uri = URI(url)
connection = Net::HTTP.start(uri.host, uri.port)
@response = Net::HTTP.get_response(URI(url))
@httpStatusCode = @response.code
connection.finish
If there's a redirect from a 200 then it must be a javascript or meta redirect. So just look for that in the response body.
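Given the @response from the snippet above, and since the update to the question shows the body is actually JSON despite the text/html Content-Type, here is a minimal sketch of checking the result, assuming the body is well-formed JSON with the field names quoted in the question:

require 'json'

# Parse the body manually, since the server labels it text/html.
data = JSON.parse(@response.body)

if data['check'] == 'success'
  puts "Auth passed for #{data['user']}"
else
  puts "Auth failed"
end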
I am writing a video crawler in Ruby. I have to log in to a page by enabling cookies and then download the pages behind that login. For that I am using the Curl (Curb) library in Ruby. I can successfully log in, but I can't download the pages behind the login with curl. How can I fix this, or download the pages some other way?
My code is:
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_post(1st url,field)
curl.perform
curl = Curl::Easy.perform(2nd url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_get
code = curl.body_str
What I've seen in writing my own similar "post-then-get" script is that Ruby/Curb (I'm using version 0.7.15 with Ruby 1.8) seems to ignore the cookiejar/cookiefile fields of a Curl::Easy object. If I set either of those fields and the http_post completes successfully, no cookiejar or cookiefile is created. Also, curl.cookies will still be nil after your curl.http_post; however, the cookies ARE set within the curl object. I promise :)
I think where you're going wrong is here:
curl = Curl::Easy.perform(2nd url)
The Curb documentation states that this creates a new object, and that new object doesn't have any of your existing cookies set. If you change your code to look like the following, I believe it should work. I've also removed the curl.perform for the first url, since curl.http_post already implicitly performs the request. You were basically http_post'ing twice before trying your http_get.
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.http_post(1st url,field)
curl.url = 2nd url
curl.http_get
code = curl.body_str
If this still doesn't seem to be working for you, you can verify if the cookie is getting set by adding
curl.verbose = true
before
curl.http_post
Your Curl::Easy object will dump all the headers that it gets in the response from the server to $stdout, and somewhere in there you should see a line stating that it added/set a cookie. I don't have any example output right now but I'll try to post a follow-up soon.
HTTPClient automatically enables cookies, as does Mechanize.
From the HTTPClient docs:
clnt = HTTPClient.new
clnt.get_content(url1) # receives Cookies.
clnt.get_content(url2) # sends Cookies if needed.
Posting a form is easy too:
body = { 'keyword' => 'ruby', 'lang' => 'en' }
res = clnt.post(uri, body)
Mechanize makes this sort of thing really simple (it will handle storing the cookies, among other things).
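For completeness, a rough sketch of the same log-in-then-fetch flow with Mechanize; the URLs and form field names below are placeholders, not taken from the original post:

require 'mechanize'

agent = Mechanize.new  # keeps a cookie jar across requests automatically

# Log in via the site's login form (placeholder URL and field names).
login_page = agent.get('https://example.com/login')
form = login_page.forms.first
form['username'] = 'user'
form['password'] = 'secret'
agent.submit(form)

# Subsequent requests reuse the session cookies set during login.
page = agent.get('https://example.com/protected/videos')
puts page.body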