I'm trying to write some tools with the Chrome DevTools Protocol (https://chromedevtools.github.io/devtools-protocol/tot/Network/#method-enable).
I want to get a page's response body, but I don't know where to find the requestId it requires. Here is my simple Ruby code:
chrome = ChromeRemote.client
# Enable events
chrome.send_cmd("Network.enable")
chrome.send_cmd("Page.enable")
puts chrome.send_cmd "Network.getCookies"
# for this command I need RequestId ->
puts chrome.send_cmd "Network.getResponseBody"
For now I get an empty result from puts chrome.send_cmd "Network.getResponseBody".
You need to listen for the Network.requestWillBeSent event, which gives you both the URL and the requestId.
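For illustration, here is a minimal sketch of that flow, assuming the chrome_remote gem's event API (on / listen_until) and a hypothetical target_url; check the gem's README for the exact send_cmd signature:
require 'chrome_remote'

chrome = ChromeRemote.client
chrome.send_cmd "Network.enable"

# Capture the requestId as soon as the matching request is announced.
# target_url is a hypothetical placeholder for the page you care about.
request_id = nil
chrome.on "Network.requestWillBeSent" do |params|
  request_id ||= params["requestId"] if params["request"]["url"] == target_url
end

chrome.send_cmd "Page.navigate", params: { url: target_url }

# Process events until the handler above has captured the id
chrome.listen_until { !request_id.nil? }

# The body is only retrievable once the response has finished loading
result = chrome.send_cmd "Network.getResponseBody", params: { requestId: request_id }
puts result["body"]
In practice you may also want to wait for the Network.loadingFinished event for that requestId before calling Network.getResponseBody.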
I am building an iOS app with Swift 2.0 and Xcode 7.2.
I am trying to make an API call to:
htttp://xyz.com/t/restaurants-us?KEY=someKey&filters={"locality":{"$eq":"miami"}}
let endPoint:String = "htttp://xyz.com/t/restaurants-us?KEY=someKey&filters={%22locality%22:{%22$eq%22:%22miami%22}}"
When I try to create a URL using this string (endPoint):
let url = NSURL(string: endPoint), nil is returned.
So I tried encoding the string before trying to create URL:
let encodedString = endPoint.stringByAddingPercentEncodingWithAllowedCharacters(NSCharacterSet.URLQueryAllowedCharacterSet())
Now the encodedString:
"htttp://xyz.com/t/restaurants-us?KEY=someKey&filters=%7B%2522locality%2522:%7B%2522$eq%2522:%2522miami%2522%7D%7D"
But now when I create an NSURLSession and send the request, I get an unexpected response from the server:
Reply from server:
{
"error_type" = InvalidJsonArgument;
message = "Parameter 'filters' contains an error in its JSON syntax. For documentation, please see: http://developer.factual.com.";
status = error;
version = 3;
}
So if I don't encode the string, I am not able to create the NSURL.
But if I do encode it and send the request, the server is not able to handle the request.
Can anyone please suggest a workaround?
When you declare endPoint, you have already percent-encoded some characters (the quotes, as %22). When you ask iOS to percent-encode the string, it also encodes the existing percent signs, so each %22 becomes %2522. Decoding the encodedString once therefore results in:
htttp://xyz.com/t/restaurants-us?KEY=someKey&filters={%22locality%22:{%22$eq%22:%22miami%22}}
Instead, you should start with actual quotes in endPoint:
let endPoint:String = "htttp://xyz.com/t/restaurants-us?KEY=someKey&filters={\"locality\":{\"$eq\":\"miami\"}}"
A single pass of stringByAddingPercentEncodingWithAllowedCharacters will then encode each quote once (as %22), which the server can decode correctly.
I'm trying to learn to use AJAX with Rails.
Here is my client side coffeescript code:
$(document).ready ->
  $("#url").blur ->
    $.get("/test_url?url=" + $(this).val(), (data) ->
      alert("Response code: " + data)
    ).fail( () ->
      alert("Why am I failing?")
    )
Here is my server-side Ruby code:
def url_response
  url = URI.parse(params[:url])
  Net::HTTP.get_response(url).code unless url.port.nil?
end
The Ruby code is being called and correctly returns the HTTP response code, but I can't do anything with the data because the client-side script says the call has failed. As far as I can see, it is not failing. url_response is being called and it is returning a value, so what exactly is failing here?
The problem was that I had removed the line that rendered the response. I previously had it in, but thanks to Frederick Cheung's hint to check whether the URL works directly in the browser, I realised that it no longer worked in the browser as it had previously, which is why I didn't think to check again!
The code below got everything working again.
def url_response
  url = URI.parse(params[:url])
  render :text => Net::HTTP.get_response(url).code unless url.port.nil?
end
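As an aside, render :text was eventually removed in newer Rails releases; on those versions the equivalent would be something like this (a sketch, assuming Rails 4.1+ where render plain: is available):
def url_response
  url = URI.parse(params[:url])
  # render plain: replaces the older render :text =>
  render plain: Net::HTTP.get_response(url).code unless url.port.nil?
end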
I know that Snoo seems to be unmaintained, but I wanted to use a ruby framework since I'm trying to improve my Ruby skill.
I'm trying to add some functionality starting with subscribing and unsubscribing to subreddits. Link to API doc.
My first attempt was with the built-in post method, which returned a 404 error:
def subscribe(subreddit)
  logged_in?
  post('/api/subscribe.json', body: { uh: @modhash, action: 'sub', sr: subreddit, api_type: 'json' })
end
Since the built-in post method was giving me a 404 I decided to try the HTTParty post method:
def subscribe(subreddit)
  logged_in?
  HTTParty.post('http://www.reddit.com/api/subscribe.json', body: { uh: @modhash, action: 'sub', sr: subreddit, api_type: 'json' })
end
That returns this:
pry(main)> reddit.subscribe('/r/nba')
=> {"json"=>{"errors"=>[["USER_REQUIRED", "please login to do that", nil]]}}
Does anyone know if I need to pass more info in the body or if I'm just sending a badly formed request? Thanks!
Also, before running "reddit.subscribe" I have verified that I'm logged in with a cookie, have a modhash, can access my account info, etc.
Solution found:
def subscribe(subreddit)
  # query the subreddit for its 'about' info and get JSON back
  subreddit_json = self.subreddit_info(subreddit)
  # build the coded unique identifier for the targeted subreddit
  subreddit_id = subreddit_json['kind'] + "_" + subreddit_json['data']['id']
  # send the post request to the server
  server_response = self.class.post('/api/subscribe.json',
    body: { uh: @modhash, action: 'sub', sr: subreddit_id, api_type: 'json' })
end
The Reddit API doesn't accept the subreddit name as the value passed with 'sr', (e.g. sr:'/r/funny'). It requires the subreddit "type" (which is always 't5' for subreddits) and unique forum id. The parameter passed would look something like: sr: "t5_2qo4s". This information is available if you go to your target subreddit and add about.json, e.g., www.reddit.com/r/funny/about.json
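For illustration, here is a standalone sketch of that about.json lookup using HTTParty directly (the method name subreddit_fullname is hypothetical):
require 'httparty'
require 'json'

# Returns the fullname reddit expects for 'sr', e.g. "t5_2qo4s"
# (reddit may also require a descriptive User-Agent header)
def subreddit_fullname(subreddit)
  about = JSON.parse(HTTParty.get("http://www.reddit.com/r/#{subreddit}/about.json").body)
  "#{about['kind']}_#{about['data']['id']}"
end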
I have a requirement to proxy a request in a Rails app. I was hoping I could proxy it with chunking (so, one chunk received, one chunk sent). The app is working fine without chunking (load the whole response into memory, then transmit it).
Here is my code to proxy the chunks through to the end-client:
self.response.headers['Last-Modified'] = Time.now.ctime.to_s
self.response_body = Enumerator.new do |y|
  client = HTTPClient.new
  http_response = client.get(proxy_url, nil, headers) do |chunk|
    y << chunk
  end
end
The problem is that I can't inspect http_response until all the chunks have been received, so I can't set my response headers based on the headers HTTPClient received.
What I'm trying to do is transmit the headers returned from the client before the first chunk is sent. Is this possible?
If not, is this pattern possible in any other Ruby HTTP client gem?
Update
I have a solution for you.
If you call get_async instead, it will return immediately with an HTTPClient::Connection object that is updated with the header information as soon as it is received. The code sample below demonstrates this.
The patch to HTTPClient::Connection is almost certainly not necessary for you, but it lets you write things like conn.queue.size and conn.queue.empty?.
conn.pop blocks until the response (or exception) has been pushed to the queue by the async thread and then returns the normal HTTP::Message object. (Note that, if you are using the monkey patch, you can use conn.queue.empty? to see if pop is going to block.)
resp.content returns an IO object which is the read end of a pipe, and it can be called as soon as pop has returned. The write end is fed by the async thread as the data arrives, and you can read the entire content in one go or in whatever size chunks you like using read.
require 'httpclient'

# Optional monkey patch: exposes the connection's internal queue so you
# can write things like conn.queue.empty?
class HTTPClient::Connection
  attr_reader :queue
end

client = HTTPClient.new
conn = client.get_async 'http://en.wikipedia.org/wiki/Ruby_(programming_language)'

# pop blocks until the headers have arrived, then returns an HTTP::Message
resp = conn.pop
resp.header.all.each { |name, val| puts "#{name}=#{val}" }
puts

# resp.content is the read end of a pipe fed by the async thread
pipe = resp.content
while chunk = pipe.read(8192)
  print chunk
end
You could parse the first chunk you receive to extract the headers, but I suggest you call head first to get the header information, then do the get as well.
(Update: the first chunk holds the beginning of the content, not the header block, so parsing it for headers won't work.)
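If you do try the head-first approach, a minimal sketch might look like this (assuming the origin serves the same headers for HEAD as for GET, and reusing proxy_url and headers from the question):
client = HTTPClient.new

# Fetch just the headers up front with a HEAD request
head = client.head(proxy_url, nil, headers)
head.header.all.each do |name, val|
  # Skip framing headers; Rails manages its own chunked encoding
  next if %w[transfer-encoding content-length connection].include?(name.downcase)
  self.response.headers[name] = val
end

# Then stream the body exactly as before
self.response_body = Enumerator.new do |y|
  client.get(proxy_url, nil, headers) do |chunk|
    y << chunk
  end
end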
I am writing a video crawler in Ruby. It has to log in to a site (with cookies enabled) and then download pages from it. For that I am using the Curb (libcurl) library in Ruby. I can successfully log in, but I can't download the pages behind the login with curl. How can I fix this, or how else can I download the pages?
My code is
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_post(1st url,field)
curl.perform
curl = Curl::Easy.perform(2nd url)
curl.follow_location = true
curl.enable_cookies = true
curl.cookiefile = "cookie.txt"
curl.cookiejar = "cookie.txt"
curl.http_get
code = curl.body_str
What I've seen in writing my own similar "post-then-get" script is that Ruby/Curb (I'm using version 0.7.15 with Ruby 1.8) seems to ignore the cookiejar/cookiefile fields of a Curl::Easy object. If I set either of those fields and the http_post completes successfully, no cookiejar or cookiefile file is created. Also, curl.cookies will still be nil after your curl.http_post; however, the cookies ARE set within the curl object. I promise :)
I think where you're going wrong is here:
curl = Curl::Easy.perform(2nd url)
The curb documentation states that this creates a new object. That new object doesn't have any of your existing cookies set. If you change your code to look like the following, I believe it should work. I've also removed the curl.perform for the first url since curl.http_post already implicitly does the "perform". You were basically http_post'ing twice before trying your http_get.
curl = Curl::Easy.new(1st url)
curl.follow_location = true
curl.enable_cookies = true
curl.http_post(1st url,field)
curl.url = 2nd url
curl.http_get
code = curl.body_str
If this still doesn't seem to be working for you, you can verify whether the cookie is getting set by adding curl.verbose = true before the curl.http_post call.
Your Curl::Easy object will dump all the headers that it gets in the response from the server to $stdout, and somewhere in there you should see a line stating that it added/set a cookie. I don't have any example output right now but I'll try to post a follow-up soon.
HTTPClient automatically enables cookies, as does Mechanize.
From the HTTPClient docs:
clnt = HTTPClient.new
clnt.get_content(url1) # receives Cookies.
clnt.get_content(url2) # sends Cookies if needed.
Posting a form is easy too:
body = { 'keyword' => 'ruby', 'lang' => 'en' }
res = clnt.post(uri, body)
Mechanize makes this sort of thing really simple (it will handle storing the cookies, among other things).
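For example, here is a minimal Mechanize sketch (the URLs and form field names are hypothetical):
require 'mechanize'

agent = Mechanize.new
# Cookies from every response are stored in agent.cookie_jar automatically
login_page = agent.get('http://example.com/login')

form = login_page.forms.first
form['username'] = 'user'
form['password'] = 'secret'
agent.submit(form)

# The session cookie set at login is sent automatically with this request
page = agent.get('http://example.com/members-only')
puts page.body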