WEBrick socket returns eof? == true - ruby

I'm writing a MITM proxy with WEBrick and SSL support (for mocking out requests with VCR on the client side, see this thread VCRProxy: Record PhantomJS ajax calls with VCR inside Capybara or my github repository https://github.com/23tux/vcr_proxy), and I've made it really far (in my opinion). My setup is that PhantomJS is configured to use a proxy and to ignore SSL errors. That proxy (written with WEBrick) records normal HTTP requests with VCR. If an SSL request is made, the proxy starts another WEBrick server, mounts it at / and rewrites the unparsed_uri of the request, so that the newly started WEBrick server is called instead of the original server. The new server then handles the requests, records them with VCR, and so on.
Everything works fine when using cURL to test the MITM proxy. For example a request made by curl like
curl --proxy localhost:11111 --ssl --insecure https://blekko.com/ws/?q=rails+/json -v
gets handled, recorded...
But: when I try to make the same request from JavaScript (a JSONP ajax request) inside a page served to Poltergeist, something goes wrong. I debugged it down to the line that causes the problem. It's inside httpserver.rb from WEBrick in the Ruby source code, around line 80 (Ruby 1.9.3):
def run(sock)
  while true
    res = HTTPResponse.new(@config)
    req = HTTPRequest.new(@config)
    server = self
    begin
      timeout = @config[:RequestTimeout]
      while timeout > 0
        break if IO.select([sock], nil, nil, 0.5)
        timeout = 0 if @status != :Running
        timeout -= 0.5
      end
      raise HTTPStatus::EOFError if timeout <= 0
      raise HTTPStatus::EOFError if sock.eof?
The last line, raise HTTPStatus::EOFError if sock.eof?, raises an error when the request comes from PhantomJS, because sock.eof? == true:
1.9.3p392 :002 > sock
=> #<OpenSSL::SSL::SSLSocket:0x007fa36885e090>
1.9.3p392 :003 > sock.eof?
=> true
I tried it with the curl command, and there sock.eof? == false, so the error isn't raised and everything works fine:
1.9.3p392 :001 > sock
=> #<OpenSSL::SSL::SSLSocket:0x007fa36b7156b8>
1.9.3p392 :002 > sock.eof?
=> false
I have only very little experience with socket programming in Ruby, so I'm a bit stuck.
How can I find out what the difference is between the two requests, based on the sock variable? As far as I can see in Ruby's IO docs, eof? blocks until the other side either sends some data or closes the connection. Am I right? But why is the connection closed when the same request (same parameters, same method) is made with PhantomJS, yet not closed when using curl?
Hope somebody can help me figure this out. Thanks!

Since this is HTTPS, I bet the client is closing the connection. With HTTPS this can happen when, for example, the server certificate is not valid. What HTTPS library are you using? Such libraries can usually be configured to ignore an invalid SSL certificate and continue working.
In curl you are actually doing exactly that with -k (--insecure); without this option it would not work. Try the request without it, and if curl fails, then your server certificate is not valid. Note that to get this working you usually need to either turn certificate checking off on the client or provide a valid certificate that the client can verify.
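If you go the certificate route, WEBrick itself can be started with an explicit certificate and key so that clients are able to verify it. A minimal sketch, assuming you have a certificate and key on disk; the file names and port below are placeholders, not taken from the original setup:

require 'webrick'
require 'webrick/https'

# Placeholder paths to a certificate/key pair the client trusts.
cert = OpenSSL::X509::Certificate.new(File.read('server.crt'))
key  = OpenSSL::PKey::RSA.new(File.read('server.key'))

server = WEBrick::HTTPServer.new(
  :Port           => 11112,   # placeholder port for the spawned SSL server
  :SSLEnable      => true,
  :SSLCertificate => cert,
  :SSLPrivateKey  => key
)

trap('INT') { server.shutdown }
server.start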

Related

Intercept WEBrick request

I have a web app that runs on different pieces of hardware, which for the most part are smart TVs and set-top boxes.
My web app contains a Ruby script to set up the app for local debugging. This script builds my app, listens for file changes, and hosts the app using a simple WEBrick server.
Now I'm running into a problem on a specific piece of hardware. This hardware expects to get a success response from a POST request to a health_check API running on the same host as the web app, before it will load up the web app.
I'm simply hoping to intercept this request and spoof it so that the hardware will load my client. So far I've gotten as far as this:
def start_server
  require 'webrick'

  root = File.expand_path 'public'

  request_callback = Proc.new { |req, res|
    if req.path =~ /health_check/
      # return 200 response somehow?
    end
  }

  server = WEBrick::HTTPServer.new :Port => 5000, :DocumentRoot => root, :RequestCallback => request_callback
  server.start
end
I can modify the response object to set status to 200, but it still ends up returning a 404.
You don't need to "intercept" all requests and check for a specific path. You simply want mount_proc, which handles a specific route with a proc.
Add the following before server.start:
server.mount_proc '/health_check' do |req, res|
  res.body = 'what what' # your content here
end
You'll probably want to wrap this in a check to determine if you're running on whatever custom hardware requires this behavior.
See Custom Behavior in the WEBrick docs.
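Put together, the check mentioned above might look something like the sketch below. How you detect the custom hardware is entirely up to your setup; the needs_health_check? helper is only a hypothetical placeholder:

def start_server
  require 'webrick'

  root = File.expand_path 'public'
  server = WEBrick::HTTPServer.new :Port => 5000, :DocumentRoot => root

  # Only spoof the health check on the hardware that actually asks for it.
  if needs_health_check?  # hypothetical helper, e.g. driven by an environment variable
    server.mount_proc '/health_check' do |req, res|
      res.status = 200
      res.body   = 'OK'
    end
  end

  server.start
end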

Savon proxy works in script, not in Rails

I'm using Savon to make calls to a SOAP API. The API I'm accessing requires calls to come from a whitelisted IP address, so I'm using a QuotaGuard proxy.
The call that I'm making returns perfectly in IRB and also as a plain ruby script. When I put the exact same code into a method in my Rails model, the call times out because it isn't coming through the proxy IP. QuotaGuard has a dashboard where I can look at requests going through the proxy IP, so I know for sure that this call is not going through.
Here is my ruby script code:
require 'savon'

ping_request = Savon.client do
  wsdl "http://xx.xxx.xxx.xx:8080/svbase4api/Ping?wsdl"
  proxy "http://xxxxxxxxxxx:xxxxxxxxx@us-east-1-static-brooks.quotaguard.com:9293"
end

response = ping_request.call(:ping, message: {message: "oogly boogly"})
puts response.to_hash[:ping_response][:return]
The puts statement does exactly what I want. It puts "saved ping message oogly boogly"
Here's my Rails model:
class Debitcard < ActiveRecord::Base
  def self.ping
    ping_request = Savon.client do
      wsdl "http://xx.xxx.xxx.xx:8080/svbase4api/Ping?wsdl"
      proxy "http://xxxxxxxxxxx:xxxxxxxxx@us-east-1-static-brooks.quotaguard.com:9293"
    end
    response = ping_request.call(:ping, message: {message: "oogly boogly"})
    puts response.to_hash[:ping_response][:return]
    @ping_response = response.to_hash[:ping_response][:return]
  end
end
And this is the result in the rails server when I press a button which posts to the controller action which calls the ping method:
D, [2014-10-23T18:38:08.587540 #2200] DEBUG -- : HTTPI GET request to
xx.xxx.xxx.xx (net_http) Completed 500 Internal Server Error in 75228ms
Errno::ETIMEDOUT (Operation timed out - connect(2)):
Can anyone shine a light on this? Thanks!

How to use SOCKSify proxy

I'm trying to proxy the traffic of a Ruby application over a SOCKS proxy using Ruby 2.0 and SOCKSify 1.5.0.
require 'socksify/http'
uri = URI.parse("www.example.org")
proxy_addr = "127.0.0.1"
proxy_port = 14000
puts Net::HTTP.SOCKSProxy(proxy_addr, proxy_port).get(uri)
This is my minimal example. Obviously it doesn't work, but I think it should. I receive no error messages when executing the file; it doesn't stop, so I have to abort it manually. I tried the solution after finding it in this answer (the code in that answer is different, but as mentioned above, I first adapted it to match my existing non-proxy code and then reduced it).
The proxies themselves work; I tested both Tor and an ssh -D connection against my own webserver and other websites.
As RubyForge no longer seems to exist, I can't access the SOCKSify documentation there. I think the version might be outdated and not work with Ruby 2.0, or something like that.
What am I doing wrong here? Or is there an alternative to SOCKSify?
Checking the documentation for Net::HTTP#Proxies gives an example we can base our code on. Note also that the URL now includes the http:// scheme, so uri.host and uri.path are actually populated, and note the addition of the .body method, also found in the documentation.
Try this code:
require 'socksify/http'

uri = URI.parse('http://www.example.org/')
proxy_addr = '127.0.0.1'
proxy_port = 1400

Net::HTTP.SOCKSProxy(proxy_addr, proxy_port).start(uri.host, uri.port) do |http|
  puts http.get(uri.path).body
end

Sinatra Net::HTTP causes timeouts on a simple request

I have a small, simple Net::HTTP POST request to make to my Sinatra app:
def collect(website)
  uri = URI("http://localhost:9393/save/#{website}")
  res = Net::HTTP.post_form(uri, 'q' => 'ruby', 'max' => '50')
  puts res.body
end
But it causes a timeout. Here is the request handler:
post '/save/:website' do |website|
  puts request.body
  "done"
end
I never reach the puts nor the "done". My shotgun server is running on port 9393, of course. When I use the REST Console extension and post valid JSON to that same path, it works.
What is causing this Timeout::Error?
The weird thing is, I changed my server from shotgun to simply running it with Sinatra plus the sinatra/reloader gem. I was using shotgun because it would auto-reload whenever the source file changed, and Sinatra itself didn't.
After ditching shotgun, it worked straight away.
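For reference, a minimal sketch of that setup, assuming the sinatra-contrib gem (which provides sinatra/reloader) is installed and the app is started with plain ruby app.rb:

require 'sinatra'
require 'sinatra/reloader' if development?  # auto-reload, provided by sinatra-contrib

set :port, 9393  # keep the port the collect() client code points at

post '/save/:website' do |website|
  puts request.body.read  # read the raw request body
  "done"
end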

How can I make ruby's xmlrpc client ignore SSL certificate errors?

When accessing an XML-RPC service using xmlrpc/client in Ruby, it throws an OpenSSL::SSL::SSLError when the server certificate is not valid. How can I make it ignore this error and proceed with the connection?
Turns out it's like this:
xmlrpc = ::XMLRPC::Client.new("foohost")
xmlrpc.instance_variable_get(:@http).instance_variable_set(:@verify_mode, OpenSSL::SSL::VERIFY_NONE)
That works with Ruby 1.9.2, but it's clearly poking at internals, so the real answer is "the API doesn't provide such a mechanism, but here's a hack".
Actually, the client has been updated; now one has direct access to the http connection:
https://bugs.ruby-lang.org/projects/ruby-trunk/repository/revisions/41286/diff/lib/xmlrpc/client.rb
xmlrpc.http.verify_mode = OpenSSL::SSL::VERIFY_NONE
But it's better to set ca_file or ca_path.
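A minimal sketch of that safer variant, assuming a Ruby new enough to expose the http accessor (see the revision linked above); the CA bundle path is a placeholder:

require 'xmlrpc/client'
require 'openssl'

# use_ssl is the 8th positional argument of XMLRPC::Client.new
xmlrpc = ::XMLRPC::Client.new("foohost", "/RPC2", 443, nil, nil, nil, nil, true)
xmlrpc.http.ca_file     = "/path/to/ca.pem"           # placeholder path to your CA bundle
xmlrpc.http.verify_mode = OpenSSL::SSL::VERIFY_PEER   # verify the peer instead of ignoring errors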
Still, I see no option to apply such a config to the _async calls.
Update: found a workaround by monkey patching the client object:
xmlrpc_client.http.ca_file = @options[:ca_file]
xmlrpc_client.instance_variable_set(:@ca_file, @options[:ca_file])

def xmlrpc_client.net_http(host, port, proxy_host, proxy_port)
  h = Net::HTTP.new host, port, proxy_host, proxy_port
  h.ca_file = @ca_file
  h
end
So you need both: the older approach and the monkey patching. We also add an instance variable, otherwise the new method cannot see the actual value.
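With the patch in place, a subsequent async call builds its Net::HTTP object through the overridden net_http and therefore picks up the CA file; the method name and argument below are only placeholders:

response = xmlrpc_client.call_async("some.remote_method", "arg")  # placeholder method and argument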
