Specify client certificate with Ruby Typhoeus - ruby

I'm trying to use Typhoeus to make an HTTP request that requires a client certificate. The Typhoeus README does not mention using client certificates at all. The only discussion I could find about specifying a client certificate with Typhoeus is this GitHub issue discussion. Based on that discussion, this is the non-functioning code I've come up with (actual URL and filenames have been changed):
require 'openssl'
require 'typhoeus'
cert = OpenSSL::X509::Certificate.new(File.read("cert.pem"))
key = OpenSSL::PKey::RSA.new(File.read("cert-key.pem"))
ca = OpenSSL::X509::Certificate.new(File.read("cert-ca.pem"))
t = Typhoeus.get(
  "https://example.com/",
  ssl_verifyhost: 0,
  ssl_verifypeer: false,
  sslcert: cert,
  sslkey: key,
  cainfo: ca
)
p t.return_code
This returns :ssl_certproblem. The response body is empty and :response_code is 0.
I confirmed that my certificate files and URL are correct. This curl command returns the response body I expect:
curl --key cert-key.pem --cert cert.pem --cacert cert-ca.pem https://example.com
I did get a request to complete using Faraday; however, I also need to make requests in parallel. Faraday's way of doing that is to use Typhoeus as an adapter, which still results in a client certificate error. It seems that as long as Typhoeus is involved, I won't be able to authenticate with a client certificate, and I don't know of another HTTP gem that can handle parallel requests for me. For now, I'll settle for sending requests in series with Faraday, which makes my script a LOT slower. I'll probably end up rewriting the script in another language eventually.
This is how I made the request using Faraday:
require 'faraday'
require 'openssl'
ssl_opts = {
  :client_cert => OpenSSL::X509::Certificate.new(File.read("cert.pem")),
  :client_key => OpenSSL::PKey::RSA.new(File.read("cert-key.pem")),
  :ca_file => "cert-ca.pem"
}
f = Faraday.new(
  "https://example.com",
  :ssl => ssl_opts
).get
What is the correct way to use Typhoeus to make an HTTP request that requires a client certificate? I'm open to using alternatives to Typhoeus, though I will need the ability to make parallel requests.

In the linked GitHub issue there was a hint that worked for me: I skipped cainfo, but sslcert and sslkey needed to be plain strings containing the paths to the files, not OpenSSL objects.
Something similar to this worked for me:
t = Typhoeus.get(
  "https://example.com/",
  ssl_verifyhost: 0,
  ssl_verifypeer: false,
  sslcert: "/path/to/project/cert.pem",
  sslkey: "/path/to/project/cert-key.pem"
)

Related

Firebase Cloud Function no longer receiving calls from Ruby's Net::HTTP

Question
Why, suddenly, do all calls to Firebase Cloud Function webhooks time out when made via Ruby's standard HTTP library (Net::HTTP)?
Background
This works just fine:
require 'net/http'
require 'json'
uri = URI("https://postb.in/1570228026855-4628713761921?hello=world")
res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  req = Net::HTTP::Post.new(uri)
  req['Content-Type'] = 'application/json'
  req.body = {a: 1}.to_json
  http.request(req)
end
However, the same script does not work with a Cloud Function URL in place of the postb.in one.
Making the same POST request to the Cloud Function URL via cURL works. It only times out when made via Ruby's Net::HTTP library:
/usr/lib/ruby/2.5.0/net/http.rb:937:in `initialize': execution expired (Net::OpenTimeout)
This function has been called many times per second over the past several months from a Ruby Net::HTTP POST without issue, and it suddenly stopped working last night. I've tested on multiple servers with Ruby versions 2.3.8 and 2.5.
The Cloud Function code is:
export const testHook = functions.https.onRequest((request, response) => {
  console.log(request)
  response.status(200).send('works')
})
The answer ended up being to add require 'resolv-replace' to the Ruby script using Net::HTTP to make the HTTP POST. Found that thanks to: Ruby Net::OpenTimeout: execution expired
Why Net::HTTP is able to resolve postb.in URLs but not Firebase ones, and why it was able to resolve Firebase ones successfully for many months until suddenly not, I can't explain.
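For reference, the fix amounts to one extra require before the Net::HTTP code; resolv-replace swaps the blocking C getaddrinfo lookup for Ruby's pure-Ruby Resolv resolver. A sketch (the webhook URL is a placeholder):

```ruby
# resolv-replace must be loaded before the sockets are opened;
# it monkey-patches TCPSocket & friends to resolve names via Resolv.
require 'resolv-replace'
require 'net/http'
require 'json'

# Placeholder Cloud Function URL.
uri = URI("https://us-central1-example.cloudfunctions.net/testHook")

def post_json(uri, payload)
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    req = Net::HTTP::Post.new(uri)
    req['Content-Type'] = 'application/json'
    req.body = payload.to_json
    http.request(req)
  end
end

# res = post_json(uri, a: 1)
```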

How to use SOCKSify proxy

I'm trying to proxy the traffic of a Ruby application over a SOCKS proxy using Ruby 2.0 and SOCKSify 1.5.0.
require 'socksify/http'
uri = URI.parse("www.example.org")
proxy_addr = "127.0.0.1"
proxy_port = 14000
puts Net::HTTP.SOCKSProxy(proxy_addr, proxy_port).get(uri)
This is my minimal example. Obviously it doesn't work, but I think it should. I receive no error messages executing the file; it doesn't stop, so I have to abort it manually. I tried the solution after I found it in this answer (the code in that answer is different, but as mentioned above, I first adapted it to match my existing non-proxy code and afterwards reduced it).
The proxies work, I tested both tor and ssh -D connection on my own webserver and other websites.
As RubyForge seems to no longer exist, I can't access the SOCKSify documentation hosted there. I think the version might be outdated and not work with Ruby 2.0, or something like that.
What am I doing wrong here? Or is there an alternative to SOCKSify?
Checking the documentation for Net::HTTP's proxy support gives an example we can base our code on. Also note the addition of the .body call, also found in the documentation.
Try this code:
require 'socksify/http'

uri = URI.parse('http://www.example.org/')
proxy_addr = '127.0.0.1'
proxy_port = 14000

Net::HTTP.SOCKSProxy(proxy_addr, proxy_port).start(uri.host, uri.port) do |http|
  puts http.get(uri.path).body
end

WEBrick socket returns eof? == true

I'm writing a MITM proxy with WEBrick and SSL support (for mocking out requests with VCR on the client side; see this thread VCRProxy: Record PhantomJS ajax calls with VCR inside Capybara or my GitHub repository https://github.com/23tux/vcr_proxy), and I've made it really far (in my opinion). My setup is that PhantomJS is configured to use a proxy and to ignore SSL errors. That proxy (written with WEBrick) records normal HTTP requests with VCR. If an SSL request is made, the proxy starts another WEBrick server, mounts it at /, and rewrites the request's unparsed_uri so that my just-started WEBrick server is called instead of the original server. The new server then handles the requests, records them with VCR, and so on.
Everything works fine when using cURL to test the MITM proxy. For example, a request made by curl like
curl --proxy localhost:11111 --ssl --insecure https://blekko.com/ws/?q=rails+/json -v
gets handled, recorded...
But: when I try to do the same request inside a page served by poltergeist, from JavaScript with a JSONP ajax request, something goes wrong. I debugged it down to the line that causes the problem. It's inside httpserver.rb from WEBrick in the Ruby source code, at line 80 (Ruby 1.9.3):
def run(sock)
  while true
    res = HTTPResponse.new(@config)
    req = HTTPRequest.new(@config)
    server = self
    begin
      timeout = @config[:RequestTimeout]
      while timeout > 0
        break if IO.select([sock], nil, nil, 0.5)
        timeout = 0 if @status != :Running
        timeout -= 0.5
      end
      raise HTTPStatus::EOFError if timeout <= 0
      raise HTTPStatus::EOFError if sock.eof?
The last line, raise HTTPStatus::EOFError if sock.eof?, raises an error when doing requests with PhantomJS, because sock.eof? == true:
1.9.3p392 :002 > sock
=> #<OpenSSL::SSL::SSLSocket:0x007fa36885e090>
1.9.3p392 :003 > sock.eof?
=> true
I tried it with the curl command and there it's sock.eof? == false, so the error doesn't get raised, and everything works fine:
1.9.3p392 :001 > sock
=> #<OpenSSL::SSL::SSLSocket:0x007fa36b7156b8>
1.9.3p392 :002 > sock.eof?
=> false
I have only very little experience with socket programming in Ruby, so I'm a little bit stuck.
How can I find out what the difference between the two requests is, based on the sock variable? As I can see in Ruby's IO docs, eof? blocks until the other side sends some data or closes the connection. Am I right? But why is the connection closed when the same request (same parameters, same method) is made with PhantomJS, yet not closed when using curl?
Hope somebody can help me to figure this out. thx!
Since this is HTTPS, I bet the client is closing the connection. In HTTPS this can happen when, for example, the server certificate is not valid. What kind of HTTPS library do you use? These libraries can usually be configured to ignore an invalid SSL cert and continue working.
In curl you are actually doing that with -k (--insecure); without this option it would not work. Try the request without that option: if curl fails, then your server certificate is not valid. Note that to get this working you usually need to either turn certificate checking off or provide a certificate to the client that it can verify.
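On the PhantomJS side, the usual way to tolerate the proxy's self-signed certificate is via its command-line switches. With poltergeist, the driver registration looks roughly like this (a sketch: the driver name is made up, and the proxy address is the one from the question):

```ruby
require 'capybara/poltergeist'

# Register a driver whose PhantomJS instance routes through the MITM
# proxy and accepts its self-signed certificate.
Capybara.register_driver :poltergeist_insecure do |app|
  Capybara::Poltergeist::Driver.new(app,
    phantomjs_options: [
      '--proxy=localhost:11111',   # the MITM proxy from the question
      '--ignore-ssl-errors=yes',   # accept invalid/self-signed certs
      '--ssl-protocol=any'         # older PhantomJS builds default to SSLv3 only
    ])
end
Capybara.javascript_driver = :poltergeist_insecure
```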

Google Places API server key doesn't work using Curl

I have obtained a valid API key from the Google Places API. I need to use it on the backend, so I got a server-side key. However, the call does not work using curl or the Rails console.
It DOES, however, work through the browser. That said, I have triple-checked that I am using the server-side key that I generated. I'm also using only the sample URL from the Google Places documentation, so all params should be correct. Here is my curl:
curl -v https://maps.googleapis.com/maps/api/place/search/xml?location=-33.8670522,151.1957362&radius=500&types=food&name=harbour&sensor=false&key=my_key
Also, in (Ruby) Rails console:
Net::HTTP.get_response(URI.parse("https://maps.googleapis.com/maps/api/place/search/xml?location=-33.8670522,151.1957362&radius=500&types=food&name=harbour&sensor=false&key=my_key"))
Any ideas? It seems like multiple people have had issues, but there is nothing specific out there for server keys not working.
Thanks!
With CURL, be sure to put quotes around the URL. Otherwise, if you're working in Linux, the URL will be truncated after the first ampersand, which will cause a REQUEST_DENIED response.
For HTTPS with Ruby, the following should work (ref http://www.rubyinside.com/nethttp-cheat-sheet-2940.html):
require "net/https"
require "uri"
uri = URI.parse("https://maps.googleapis.com/maps/api/place/search/xml?location=-33.8670522,151.1957362&radius=500&types=food&name=harbour&sensor=false&key=...")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)
print response.body
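The quoting problem can also be sidestepped in Ruby by letting URI assemble the query string instead of pasting one together by hand, so ampersands and commas are escaped for you (the API key below is a placeholder):

```ruby
require 'net/https'
require 'uri'

params = {
  location: "-33.8670522,151.1957362",
  radius:   500,
  types:    "food",
  name:     "harbour",
  sensor:   false,
  key:      "my_key"  # placeholder API key
}

uri = URI("https://maps.googleapis.com/maps/api/place/search/xml")
uri.query = URI.encode_www_form(params)  # percent-encodes values safely

# http = Net::HTTP.new(uri.host, uri.port)
# http.use_ssl = true
# response = http.request(Net::HTTP::Get.new(uri.request_uri))
```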

How can I make ruby's xmlrpc client ignore SSL certificate errors?

When accessing an XML-RPC service using xmlrpc/client in Ruby, it throws an OpenSSL::SSL::SSLError when the server certificate is not valid. How can I make it ignore this error and proceed with the connection?
Turns out it's like this:
xmlrpc = ::XMLRPC::Client.new("foohost")
xmlrpc.instance_variable_get(:@http).instance_variable_set(:@verify_mode, OpenSSL::SSL::VERIFY_NONE)
That works with ruby 1.9.2, but clearly is poking at internals, so the real answer is "the API doesn't provide such a mechanism, but here's a hack".
Actually, the client has been updated; now one has direct access to the HTTP connection:
https://bugs.ruby-lang.org/projects/ruby-trunk/repository/revisions/41286/diff/lib/xmlrpc/client.rb
xmlrpc.http.verify_mode = OpenSSL::SSL::VERIFY_NONE
But it's better to set ca_file or ca_path.
Still, I see no option to apply such configuration to _async calls.
Update: found a workaround by monkey-patching the client object:
xmlrpc_client.http.ca_file = @options[:ca_file]
xmlrpc_client.instance_variable_set(:@ca_file, @options[:ca_file])
def xmlrpc_client.net_http(host, port, proxy_host, proxy_port)
  h = Net::HTTP.new host, port, proxy_host, proxy_port
  h.ca_file = @ca_file
  h
end
So you need both the older approach and the monkey patch. We also set an instance variable; otherwise the new method cannot see the actual value.
