Savon proxy works in script, not in Rails - ruby

I'm using Savon to make calls to a SOAP API. The API I'm accessing requires calls to be coming from a whitelisted IP address, so I'm using a QuotaGuard proxy.
The call that I'm making returns perfectly in IRB and also as a plain ruby script. When I put the exact same code into a method in my Rails model, the call times out because it isn't coming through the proxy IP. QuotaGuard has a dashboard where I can look at requests going through the proxy IP, so I know for sure that this call is not going through.
Here is my ruby script code:
require 'savon'

ping_request = Savon.client do
  wsdl "http://xx.xxx.xxx.xx:8080/svbase4api/Ping?wsdl"
  proxy "http://xxxxxxxxxxx:xxxxxxxxx#us-east-1-static-brooks.quotaguard.com:9293"
end

response = ping_request.call(:ping, message: { message: "oogly boogly" })
puts response.to_hash[:ping_response][:return]
The puts statement does exactly what I want. It puts "saved ping message oogly boogly"
Here's my Rails model:
class Debitcard < ActiveRecord::Base
  def self.ping
    ping_request = Savon.client do
      wsdl "http://xx.xxx.xxx.xx:8080/svbase4api/Ping?wsdl"
      proxy "http://xxxxxxxxxxx:xxxxxxxxx#us-east-1-static-brooks.quotaguard.com:9293"
    end
    response = ping_request.call(:ping, message: { message: "oogly boogly" })
    puts response.to_hash[:ping_response][:return]
    # ping_response = response.to_hash[:ping_response][:return]
  end
end
And this is the result in the rails server when I press a button that posts to the controller action that calls the ping method:
D, [2014-10-23T18:38:08.587540 #2200] DEBUG -- : HTTPI GET request to
xx.xxx.xxx.xx (net_http) Completed 500 Internal Server Error in 75228ms
Errno::ETIMEDOUT (Operation timed out - connect(2)):
Can anyone shine a light on this? Thanks!
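One thing worth ruling out first (assuming the `#` in the proxy string above is literal and not an artifact of redaction): with HTTP basic auth, the credentials in a proxy URL are separated from the host by `@`, not `#`. A quick sanity check with the stdlib, using made-up credentials:

```ruby
require 'uri'

# Hypothetical credentials with the QuotaGuard host from the question.
proxy = URI.parse('http://user:secret@us-east-1-static-brooks.quotaguard.com:9293')

proxy.userinfo # => "user:secret"
proxy.host     # => "us-east-1-static-brooks.quotaguard.com"
proxy.port     # => 9293
```

If the separator is wrong, the host and credentials won't parse out the way the HTTP client expects, and the request silently goes direct instead of through the proxy.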

Related

Need help handling Timeout::Error in Savon

I'm building a connection between REST-API and SOAP API in Ruby (without Rails).
For SOAP calls I use Savon gem, which is great.
However, I cannot figure out from the docs, how does Savon handle Timeout::Error?
Does it raise Savon::HTTPError or Savon::SOAPFault?
Please advise.
I was curious myself. After a bit of experimentation and skimming through Savon sources it seems that transport-level errors aren't handled and translated to Savon's own exception types, but thrown "as is", so if you need to handle them, you have to handle exceptions thrown by the underlying HTTP client library.
It's important to note that Savon supports multiple HTTP clients through the httpi abstraction layer. By default it simply picks one of those available, but if you need to handle its exceptions, you shouldn't rely on the automatic selection; instead, explicitly configure which HTTPI adapter should be used (e.g. HTTPI.adapter = :net_http).
The code below can be used to test the timeout scenario with HTTPI adapter of your choice.
Code for experimentation
Server (written in PHP, because there are no up-to-date working solutions for writing a dead-simple SOAP server like this, without a ton of boilerplate code, in Ruby):
<?php
// simple SOAP server - run with 'php -S localhost:12312 name_of_this_file.php'
class SleepySoapServer
{
    public function hello()
    {
        sleep(3600); // take an hour's nap before responding
        return 'Hello, world!';
    }
}

$options = array('uri' => 'http://localhost:12312/');
$server = new SoapServer(null, $options);
$server->setClass(SleepySoapServer::class);
$server->handle();
Client (using Savon 2):
require 'savon'

HTTPI.adapter = :net_http # request Net::HTTP client from the standard library

uri = 'http://localhost:12312'

client = Savon.client do
  endpoint uri
  namespace uri
end

response = client.call :Hello
p response.body
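With the adapter pinned to Net::HTTP, handling a timeout then means rescuing the Net::HTTP exceptions directly. A minimal sketch; the wrapper method and its return value are illustrative, not part of Savon's API:

```ruby
require 'net/http' # defines Net::OpenTimeout and Net::ReadTimeout

# Hypothetical wrapper: Savon passes transport-level errors through
# untranslated, so we rescue the underlying Net::HTTP exceptions.
def call_with_timeout_handling(client, operation)
  client.call(operation)
rescue Net::OpenTimeout, Net::ReadTimeout, Errno::ETIMEDOUT
  :transport_timeout
end
```

With a real Savon client you would typically also rescue Savon::SOAPFault and Savon::HTTPError for the errors Savon does translate.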
If you don’t like rescuing errors, you can tell Savon not to raise them at all. With Savon 2 this is a global option on the client (the Savon.configure block seen in older examples is the Savon 1 API):
client = Savon.client do
  endpoint uri
  namespace uri
  raise_errors false
end

Intercept WEBrick request

I have a web app that runs on different pieces of hardware, that for the most part consists of smart TVs and set-top boxes.
My web app contains a ruby script to setup the app for local debugging. This script builds my app, listens for file changes, and hosts the app using a simple WEBrick server.
Now I'm running into a problem on a specific piece of hardware. This hardware expects to get a success response from a POST request to a health_check API running on the same host as the web app, before it will load up the web app.
I'm simply hoping to intercept this request and spoof it so that the hardware will load my client. So far I've gotten as far as this:
def start_server
  require 'webrick'
  root = File.expand_path 'public'
  request_callback = Proc.new { |req, res|
    if req.path =~ /health_check/
      # return 200 response somehow?
    end
  }
  server = WEBrick::HTTPServer.new :Port => 5000, :DocumentRoot => root, :RequestCallback => request_callback
  server.start
end
I can modify the response object to set status to 200, but it still ends up returning a 404.
You don't need to "intercept" all requests and check for a specific path. You simply want mount_proc, which handles a specific route with a proc.
Add the following before server.start:
server.mount_proc '/health_check' do |req, res|
  res.body = 'what what' # your content here
end
You'll probably want to wrap this in a check to determine if you're running on whatever custom hardware requires this behavior.
See Custom Behavior in the WEBrick docs.

Attempting a PUT to Google Groups API using OAuth2 gem and Ruby

I'm trying to do a PUT call to a Google Groups API using Ruby and the OAuth2 gem. I've managed to authenticate OK, and the GET call works properly, but I can't seem to get the call to use the PUT method. I thought the following would work, since OAuth2 uses Faraday, but I just keep getting a 400 back, with an indication that something's "required":
data = access_token.put('https://www.googleapis.com/groups/v1/groups/{email address}?alt=json').parsed do |request|
  request.params['email'] = "{email address}"
end
Has anyone got a working example of passing parameters to a PUT request?
OK, it looks like the ".parsed" was interfering with the call. Here's what works, with some additions to the request object:
response = access_token.put('https://www.googleapis.com/groups/v1/groups/{email address}') do |request|
  request.headers['Content-Type'] = 'application/json'
  request.body = '{"email": "{email address}"}'
end

# check the status; it works if it's 200
puts response.status
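Since hand-writing the JSON string is easy to get wrong, the body can also be built with the stdlib json module. The address here is a stand-in for the real group email:

```ruby
require 'json'

# Build the request body programmatically instead of by hand.
body = JSON.generate(email: 'group@example.com')
# body == '{"email":"group@example.com"}'
```

Then assign it with request.body = body inside the block as above.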

WEBrick socket returns eof? == true

I'm writing a MITM proxy with webrick and ssl support (for mocking out requests with VCR on the client side; see this thread VCRProxy: Record PhantomJS ajax calls with VCR inside Capybara or my github repository https://github.com/23tux/vcr_proxy), and in my opinion I've made it really far. My setup is that phantomjs is configured to use a proxy and to ignore ssl errors. That proxy (written in webrick) records normal HTTP requests with VCR. When an SSL request is made, the proxy starts a second webrick server, mounts it at / and rewrites the unparsed_uri of the request, so that my freshly started server is called instead of the original one. The new server then handles the requests, records them with VCR, and so on.
Everything works fine when using cURL to test the MITM proxy. For example a request made by curl like
curl --proxy localhost:11111 --ssl --insecure https://blekko.com/ws/?q=rails+/json -v
gets handled, recorded...
But: when I try to do the same request from javascript, with a jsonp ajax request inside a page served by poltergeist, something goes wrong. I debugged it down to the line that causes the problem. It's in webrick's httpserver.rb in the Ruby source code, at line 80 (Ruby 1.9.3):
def run(sock)
  while true
    res = HTTPResponse.new(@config)
    req = HTTPRequest.new(@config)
    server = self
    begin
      timeout = @config[:RequestTimeout]
      while timeout > 0
        break if IO.select([sock], nil, nil, 0.5)
        timeout = 0 if @status != :Running
        timeout -= 0.5
      end
      raise HTTPStatus::EOFError if timeout <= 0
      raise HTTPStatus::EOFError if sock.eof?
The last line raise HTTPStatus::EOFError if sock.eof? raises an error when doing requests with phantomjs, because sock.eof? == true:
1.9.3p392 :002 > sock
=> #<OpenSSL::SSL::SSLSocket:0x007fa36885e090>
1.9.3p392 :003 > sock.eof?
=> true
I tried it with the curl command and there it's sock.eof? == false, so the error doesn't get raised, and everything works fine:
1.9.3p392 :001 > sock
=> #<OpenSSL::SSL::SSLSocket:0x007fa36b7156b8>
1.9.3p392 :002 > sock.eof?
=> false
I only have very little experience with socket programming in ruby, so I'm a little bit stuck.
How can I find out what the difference between the two requests is, based on the sock variable? As far as I can see in ruby's IO docs, eof? blocks until the other side sends some data or closes the connection. Am I right? But why is the socket closed when the same request, with the same parameters and method, comes from phantomjs, and not closed when it comes from curl?
Hope somebody can help me figure this out. Thanks!
Since this is HTTPS, I bet the client is closing the connection. In HTTPS this can happen when, for example, the server certificate is not valid. Which HTTPS library do you use? These libraries can usually be configured to ignore the SSL cert and continue working even when it is not valid.
In curl you are actually doing that with -k (--insecure); without this option it would not work. Try it without this option: if curl then fails, your server certificate is not valid. Note that to get this working you usually need to either turn the checking off or provide a valid certificate to the client so it can verify it.
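The same strict-vs-insecure distinction can be reproduced from Ruby, which may help when testing the proxy without curl. A rough sketch; the URL is the one from the question, and no request is actually issued here:

```ruby
require 'net/http'
require 'openssl'

uri = URI('https://blekko.com/ws/?q=rails+/json')

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

# Equivalent of curl --insecure: skip certificate verification entirely.
http.verify_mode = OpenSSL::SSL::VERIFY_NONE

# Strict mode, which curl uses without -k; a self-signed MITM
# certificate will make the handshake fail here:
# http.verify_mode = OpenSSL::SSL::VERIFY_PEER
```

A client that verifies the peer (like phantomjs without its ignore-ssl-errors flag, or VERIFY_PEER above) will abort the handshake and close the socket, which is exactly what a sock.eof? == true on the server side looks like.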

Can not access response.body inside after filter block in Sinatra 1.0

I'm struggling with a strange issue. According to http://github.com/sinatra/sinatra (section Filters), a response object is available in after filter blocks in Sinatra 1.0. While response.status is correctly accessible, I cannot see a non-empty response.body from my routes inside the after filter.
I have this rackup file:
config.ru
require 'app'
run TestApp
Then Sinatra 1.0.b gem installed using:
gem install --pre sinatra
And this is my tiny app with a single route:
app.rb
require 'rubygems'
require 'sinatra/base'

class TestApp < Sinatra::Base
  set :root, File.dirname(__FILE__)

  get '/test' do
    'Some response'
  end

  after do
    halt 500 if response.empty? # used 500 just for illustration
  end
end
And now I would like to access the response inside the after filter. When I run this app and access the /test URL, I get a 500 response as if the response were empty, but the response clearly is 'Some response'.
Along with my request to /test, a separate request to /favicon.ico is issued by the browser, and that returns 404 as there is neither a route nor a static file for it. But I would expect the 500 status to be returned there, since that response should be empty.
In the console I can see that within the after filter the response to /favicon.ico is something like 'Not found', and the response to /test really is empty, even though the route does return a response.
What do I miss?
The response.body is set in Sinatra::Base#invoke, which wraps around Sinatra::Base#dispatch!, which in turn calls the filters. However, #invoke only sets the response body after dispatch! is done, so inside the after filter the body is not yet set. What you want to do is probably better solved with a rack middleware.
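A rack middleware along those lines could do the empty-body check, since by the time a middleware sees the response Sinatra has finished assembling the body. The class name and error body below are made up for illustration:

```ruby
# Hypothetical middleware: replaces an empty response with a 500.
class EmptyBodyGuard
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)

    # Rack bodies respond to #each; collect the chunks so we can
    # inspect them and still return them afterwards.
    chunks = []
    body.each { |chunk| chunks << chunk }

    if chunks.join.empty?
      [500, { 'Content-Type' => 'text/plain' }, ['empty response']]
    else
      [status, headers, chunks]
    end
  end
end
```

In config.ru you would add use EmptyBodyGuard before run TestApp.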
