I'm trying to proxy the traffic of a Ruby application over a SOCKS proxy, using Ruby 2.0 and SOCKSify 1.5.0.
require 'socksify/http'
uri = URI.parse("www.example.org")
proxy_addr = "127.0.0.1"
proxy_port = 14000
puts Net::HTTP.SOCKSProxy(proxy_addr, proxy_port).get(uri)
This is my minimal example. Obviously it doesn't work, but I think it should: I get no error messages when executing the file, it just never finishes, so I have to abort it manually. I tried the solution from this answer (the code in that answer is different, but as mentioned above I first adapted it to match my existing non-proxy code and then reduced it).
The proxies themselves work; I tested both Tor and an ssh -D connection against my own webserver and other websites.
Since RubyForge no longer seems to exist, I can't access the SOCKSify documentation that was hosted there. I suspect the gem version might be outdated, might not work with Ruby 2.0, or something like that.
What am I doing wrong here? Or is there an alternative to SOCKSify?
The Proxies section of the Net::HTTP documentation gives an example we can base our code on. Note that the URI needs a scheme (http://) so that uri.host and uri.path are actually set, and note the addition of the .body call, which is also covered in the documentation.
Try this code:
require 'socksify/http'
uri = URI.parse('http://www.example.org/')  # include the scheme so uri.host and uri.path are set
proxy_addr = '127.0.0.1'
proxy_port = 14000                          # the SOCKS port from your setup
Net::HTTP.SOCKSProxy(proxy_addr, proxy_port).start(uri.host, uri.port) do |http|
  puts http.get(uri.path).body
end
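If you need the same thing for an HTTPS URL, the regular use_ssl option should apply, since SOCKSProxy returns a Net::HTTP-like class, though I haven't tested it over SOCKS (sketch):
require 'socksify/http'
uri = URI.parse('https://www.example.org/')
# use_ssl is the standard Net::HTTP option; the SOCKS tunnel itself is unchanged
Net::HTTP.SOCKSProxy('127.0.0.1', 14000).start(uri.host, uri.port, use_ssl: true) do |http|
  puts http.get(uri.path).body
end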
Related
I am trying to do something that looks simple but clearly isn't: authenticate with NTLM.
The example in the documentation (http://www.rubydoc.info/gems/httpclient/2.1.5.2/HTTPClient) looks straightforward, but when I try it I always get a 401.
I created a simple website in IIS which requires NTLM, and tested that it works in IE and Chrome.
I then ran:
require 'httpclient'
require 'pp'
require 'kconv'    # seemed to be needed due to a bug..
require 'rubyntlm' # probably not needed directly?
domain = 'http://qvcluster1/'
url = 'http://qvcluster1/default.htm'
user = 'testuser'
password = 'testpassword'
client = HTTPClient.new
client.set_auth(nil, user, password)
r = client.get(url)
pp r
From my understanding there is not much more to it, yet it is failing.
This is the tail end of what I get from the last line:
@reason_phrase="Unauthorized",
@request_absolute_uri=nil,
@request_method="GET",
@request_query=nil,
@request_uri=#<URI::HTTP http://qvcluster1/default.htm>,
@status_code=401>,
Any suggestions are appreciated!
As an aside, I just tested from curl and this works fine:
"C:\Program Files\cURL\bin\curl.exe" --ntlm -u testuser:testpassword http://qvcluster1/default.htm
False alarm: it turns out there were problems with the web server communicating with the domain controller, which was breaking authentication. I removed the server from the domain and re-added it, and all is fine now!
I have an app I created on Heroku which is written in Ruby (not Rails) using Sinatra.
It is hosted on the default herokuapp domain so I can address the app with both HTTP and HTTPS.
The app requests user credentials which I forward on to an HTTPS call so the forwarding part is secure.
I want to ensure my users always connect securely to my app so the credentials aren't passed in clear text.
Despite lots of research, I've not found a solution to this simple requirement.
Is there a simple solution that doesn't require moving my app to Rails or otherwise rewriting it?
Thanks,
Alan
I use a helper that looks like this:
def https_required!
if settings.production? && request.scheme == 'http'
headers['Location'] = request.url.sub('http', 'https')
halt 301, "https required\n"
end
end
I can then call it from any single route I want to force to HTTPS, or use it in a before filter to enforce it on a set of URLs:
before "/admin/*" do
https_required!
end
Redirect in a Before Filter
This is untested, but it should work. If not, or if it needs additional refinement, it should at least give you a reasonable starting point.
before do
redirect request.url.sub('http', 'https') unless request.secure?
end
See Also
Filters
Request Object
RackSsl::Enforcer
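If you go the middleware route instead, the rack-ssl-enforcer gem linked above can be dropped into config.ru; roughly like this (untested sketch, the require path for your app is a placeholder):
# config.ru
require 'rack/ssl-enforcer'
require './app'            # placeholder: wherever your Sinatra app is defined
use Rack::SslEnforcer      # redirects plain-HTTP requests to HTTPS
run Sinatra::Application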
I'm trying to test whether a URI is valid (i.e. it actually has content; I'm not testing whether it is well formed here) using Ruby, and I can open a URI using open(uri). But in my case the URI is a link to a file to be downloaded, and I don't want to download the whole file just to verify that there is content there.
Is there another solution for this?
Try this
require 'net/http'
u = URI.parse('http://www.example.com/')
status = Net::HTTP.start(u.host, u.port).head(u.request_uri).code
# status is HTTP status code
You'll need a rescue to catch the exception in case domain resolution fails.
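A slightly fuller sketch along those lines, also handling HTTPS URLs; the helper name and the rescued exception list are just illustrative:
require 'net/http'

def uri_has_content?(url)
  u = URI.parse(url)
  Net::HTTP.start(u.host, u.port, use_ssl: u.scheme == 'https') do |http|
    # a 2xx on HEAD means there is content without downloading the body
    http.head(u.request_uri).code.start_with?('2')
  end
rescue SocketError, Errno::ECONNREFUSED, Net::OpenTimeout
  false
end

puts uri_has_content?('https://www.example.com/')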
When accessing an XML-RPC service using xmlrpc/client in Ruby, it throws an OpenSSL::SSL::SSLError when the server certificate is not valid. How can I make it ignore this error and proceed with the connection?
Turns out it's like this:
xmlrpc = ::XMLRPC::Client.new("foohost")
xmlrpc.instance_variable_get(:@http).instance_variable_set(:@verify_mode, OpenSSL::SSL::VERIFY_NONE)
That works with ruby 1.9.2, but clearly is poking at internals, so the real answer is "the API doesn't provide such a mechanism, but here's a hack".
Actually the client has been updated; one now has direct access to the HTTP connection:
https://bugs.ruby-lang.org/projects/ruby-trunk/repository/revisions/41286/diff/lib/xmlrpc/client.rb
xmlrpc.http.verify_mode = OpenSSL::SSL::VERIFY_NONE
But it's better to set ca_file or ca_path.
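For example, pointing the client at a CA bundle instead of disabling verification (the path below is just a placeholder for your system's bundle):
xmlrpc.http.ca_file = '/etc/ssl/certs/ca-certificates.crt'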
Still, I see no option to apply such configuration to the _async calls.
Update: found a workaround by monkey patching the client object:
xmlrpc_client.http.ca_file = @options[:ca_file]
xmlrpc_client.instance_variable_set(:@ca_file, @options[:ca_file])
def xmlrpc_client.net_http(host, port, proxy_host, proxy_port)
  h = Net::HTTP.new host, port, proxy_host, proxy_port
  h.ca_file = @ca_file
  h
end
So you need both the older approach and the monkey patch. We also set the instance variable; otherwise the new method cannot see the actual value.
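With that in place, the asynchronous variants should pick up the CA file too; for illustration (the method name and argument are placeholders):
result = xmlrpc_client.call_async('examples.getStateName', 41)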
I'm new to Ruby, coming from Java. I'm trying to make an HTTP GET request and I'm getting an HTTP response code of 400. The service I'm calling over HTTP is very particular, and I'm pretty sure my request isn't exactly correct. It would be helpful to "look inside" the req object after I do the head request (below) to double-check that the request_headers being sent are what I think I'm sending. Is there a way to print out the req object?
req = Net::HTTP.new(url.host, url.port)
req.use_ssl = true
res = req.head(pathWithScope, request_headers)
code = res.code.to_i
puts "Response code: #{code}"
I tried this: puts "Request Debug: #{req.inspect}" but it only prints this: #<Net::HTTP www.blah.com:443 open=false>
Use set_debug_output.
http = Net::HTTP.new(url.host, url.port)
http.set_debug_output($stdout) # Logger.new("foo.log") works too
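Any request made on that object will then dump the raw request and response headers to the chosen output; for example (the path and header are placeholders):
http.use_ssl = true if url.scheme == 'https'
res = http.head('/some/path', { 'Accept' => 'application/json' })
puts "Response code: #{res.code}"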
That and more in http://github.com/augustl/net-http-cheat-sheet :)
If you want to see & debug exactly what your app is sending, not just see its log output, I've just released an open-source tool for exactly this: http://httptoolkit.tech/view/ruby/
It supports almost all Ruby HTTP libraries so it'll work perfectly for this case, but also many other tools & languages too (Python, Node, Chrome, Firefox, etc).
As noted in the other answer, you can configure Net::HTTP to print its logs to work out what it's doing, but that only shows you what that one library is trying to do. It won't help if you use any other HTTP libraries or tools (or modules that do), and it requires changing your actual application code (and remembering to change it back).
With HTTP Toolkit you can just click a button to open a terminal, run your Ruby code from there as normal, and every HTTP request sent gets collected automatically.