Anyone get FTPS/FTP::TLS working under Ruby 1.9.3? - ruby

I've tried several gems, examples, etc., and cannot get this working. The most promising gems were double-bag-ftps and ftpfxp: I can connect, but I cannot transfer files in either active or passive mode.
Sample code with ftpfxp:
require 'net/ftpfxptls' # require path assumed from the ftpfxp gem's layout

conn2 = Net::FTPFXPTLS.new
conn2.passive = true
conn2.debug_mode = true
conn2.connect('192.168.0.2', 990)
conn2.login('myuser2', 'mypass2')
conn2.chdir('/')
conn2.get("data.txt")
conn2.close
Sample code with double-bag-ftps:
require 'openssl'
require 'double_bag_ftps'

ftps = DoubleBagFTPS.new
ftps.ssl_context = DoubleBagFTPS.create_ssl_context(:verify_mode => OpenSSL::SSL::VERIFY_NONE)
ftps.connect('192.168.0.2')
ftps.login('myuser2', 'mypass2')
ftps.chdir('/')
ftps.get("data.txt")
ftps.close
Sample error with double-bag-ftps:
~/.rbenv/versions/1.9.3-p385/lib/ruby/gems/1.9.1/gems/double-bag-ftps-0.1.0/lib/double_bag_ftps.rb:148:in `connect': Broken pipe - SSL_connect (Errno::EPIPE)
Sample error with ftpfxp:
~/.rbenv/versions/1.9.3-p385/lib/ruby/1.9.1/net/ftp.rb:206:in `initialize': No route to host - connect(2) (Errno::EHOSTUNREACH)
Any recommendation that does not involve external commands?
Thanks.

I've solved the issue: the server was returning a private IP address when connecting in passive mode with explicit TLS, so I added a check to Double-Bag-FTPS that falls back to the original public IP address whenever the returned IP is private...
GitHub Pull request
So if someone has the same issue, maybe this is the answer. Hope it helps someone else :)
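For illustration, here is a minimal sketch of that fallback logic; the constant and method names are my own for this example, not the actual code in the pull request:
require 'ipaddr'

PRIVATE_RANGES = %w[10.0.0.0/8 172.16.0.0/12 192.168.0.0/16].map { |r| IPAddr.new(r) }

# If the IP in the server's PASV reply is a private (RFC 1918) address,
# reuse the public host we originally connected to instead.
def passive_host(pasv_ip, original_host)
  if PRIVATE_RANGES.any? { |range| range.include?(IPAddr.new(pasv_ip)) }
    original_host
  else
    pasv_ip
  end
end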

Related

Pusher not able to establish connection

I'm trying to connect to the Liquid exchange's Stream API with Pusher. Documentation here.
I have constructed a function that is in line with the Pusher library's documentation. Link to that here.
My current code looks like this:
require 'pusher-client'

channels_client = PusherClient::Socket.new('LIQUID', {
  ws_host: 'tap.liquid.com'
})
and running that gives me an error message of:
D, [2019-08-20Txx:xx:xx.xxxxxx #xxxxx] DEBUG -- : Binding pusher:connection_established to pusher_global_channel
D, [2019-08-20Txx:xx:xx.xxxxxx #xxxxx] DEBUG -- : Binding pusher:connection_disconnected to pusher_global_channel
D, [2019-08-20Txx:xx:xx.xxxxxx #xxxxx] DEBUG -- : Binding pusher:error to pusher_global_channel
D, [2019-08-20Txx:xx:xx.xxxxxx #xxxxx] DEBUG -- : Binding pusher:ping to pusher_global_channel
If I try to run this after the first command:
channels_client.connect
...it refuses to connect.
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/pusher-client-0.6.2/lib/pusher-client/websocket.rb:17:in `initialize': No connection could be made because the target machine actively refused it. - connect(2) for "tap.liquid.com" port 80 (Errno::ECONNREFUSED)
The error comes from TCPSocket.new (https://ruby-doc.org/stdlib-2.5.1/libdoc/socket/rdoc/TCPSocket.html), called at line 17 of websocket.rb: https://github.com/pusher-community/pusher-websocket-ruby/blob/master/lib/pusher-client/websocket.rb
Connecting to ws_host tap.liquid.com on port 80 seems to be invalid, per https://pusher.com/docs/channels/library_auth_reference/pusher-websockets-protocol
I have tried using a secure connection, at least in the browser, and got a 404 for ws and an invalid request for https (https://tap.liquid.com/), which is good in this case.
You need to configure SSL in the client configuration, but the pusher-client gem, as well as the one other gem I have found, is pretty old. It may be wiser to switch to a more up-to-date library such as https://github.com/pusher/pusher-http-ruby/blob/master/README.md
Its example contains an option for enabling SSL.
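For completeness, a minimal sketch of staying with pusher-client but enabling TLS, assuming the gem's :secure option behaves as its README describes (the app key and ws_host are the ones from the question and may need adjusting for Liquid's endpoint):
require 'pusher-client'

# :secure => true makes the client use wss:// on port 443 instead of ws:// on port 80
channels_client = PusherClient::Socket.new('LIQUID', {
  ws_host: 'tap.liquid.com',
  secure:  true
})
channels_client.connect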

ECONNRESET error when accessing Ruby application which runs in Webrick

I am trying to install Redmine (a bug tracker that runs on Ruby). I use WEBrick; it starts up fine, but when I access http://IP:3000/, it throws the following error in the server logs and the page does not load in the browser.
ERROR Errno::ECONNRESET: Connection reset by peer # io_fillbuf - fd:11
/root/.rbenv/versions/2.5.1/lib/ruby/2.5.0/webrick/httpserver.rb:82:in `eof?'
/root/.rbenv/versions/2.5.1/lib/ruby/2.5.0/webrick/httpserver.rb:82:in `run'
/root/.rbenv/versions/2.5.1/lib/ruby/2.5.0/webrick/server.rb:307:in `block in start_thread'
I am a bit stuck here, any help would be greatly appreciated.
Thanks in advance.
In the end, it was because port 3000 was being blocked by the network the server was part of. Once the port block was removed, it started to work. Thanks #mike-k for your tips.
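As a quick way to tell a firewall block apart from a WEBrick problem, a plain TCP connect test run from a machine outside that network can help; a small sketch (the hostname is a placeholder):
require 'socket'

begin
  # Try to open a TCP connection to the Redmine port; replace the host with your server.
  Socket.tcp('my.server.example', 3000, connect_timeout: 5) { puts 'port 3000 is reachable' }
rescue SystemCallError => e
  # A refusal/timeout from outside but not from the server itself points at the network.
  puts "cannot reach port 3000: #{e.class}: #{e.message}"
end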

API Request - OpenSSL::SSL::SSLError: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read server hello A

Another one of these questions. I know this has been asked (and answered) a lot on Stack Overflow, but I can't get any of those answers to work for me, and I also have a few questions I would like answered so I can learn.
Here is my error:
OpenSSL::SSL::SSLError: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read server hello A
To start, here is my system settings.
I am on OSX El Capitan version 10.11.6
openssl version
OpenSSL 0.9.8zh 14 Jan 2016
which openssl
/usr/bin/openssl
ruby -v
ruby 2.1.6p336 (2015-04-13 revision 50298) [x86_64-darwin14.0]
rbenv -v
rbenv 0.4.0
My questions are these:
1) Does this error mean that a certificate was sent back to me, and then my OpenSSL version was unable to verify it? Did the other server have a chance to read mine, or even see it yet? Is there a way to dig into this request using Net::HTTP and inspect it, other than opening up a program like Wireshark? Once I call Net::HTTP.new.request(request) I seem to lose control and it just errors.
2) Did I even successfully talk to the other server, and it denied me?
3) At what point in the request am I when I get this message?
and most of all
4) What are my options to get past this point?
4a. So far I'm seeing a possible brew solution, but I haven't been able to get brew to link.
4b. I could manually install Mozilla's CA (Or any other CA) into my Mac OSX Keychain
4c. Can I attach the file using the request.ca_file = "file" as I tried in my code? (see below)
4d. Is there any other solution / a best and most politically correct approach?
5) Am I going to have this issue when I deploy to Heroku?
From what I'm reading, this is an issue of my OS not containing the correct CA files. The ca_file part is from my first attempts to add the correct CA file to my requests; I'm guessing I don't need that. I am using a proxy with Heroku because this API requires a static IP.
Here is my generic code
require 'net/http'
require 'openssl'

cert = File.read(File.join(Rails.root, 'ssl', 'test_env', 'their_test_cert.der'))
# ca_file= expects a path to a (PEM-encoded) CA file, not the file contents
ca_file = File.join(Rails.root, 'ssl', 'test_env', 'Class3PublicPrimaryCA.der')
uri = URI("https://xml.theirtestenv.com/api/receive")
headers = {
  'x-IK-Version' => 'IKR/V4.00',
}
proxy_host = "myproxyhost"
proxy_port = "1234"
proxy_user = "myproxyuser"
proxy_pass = "myproxypass"
proxy_request = Net::HTTP.new(uri.hostname, 443, proxy_host, proxy_port, proxy_user, proxy_pass)
# http.key = OpenSSL::PKey::RSA.new(rsa_key)
proxy_request.use_ssl = true
proxy_request.cert = OpenSSL::X509::Certificate.new(cert)
proxy_request.ca_file = ca_file
proxy_request.verify_mode = OpenSSL::SSL::VERIFY_PEER
# proxy_request.ssl_version = :SSLv3
# This doesn't seem to matter whether I put it or not...
# Tried variations of these...
# proxy_request.ssl_version = :TLSv1
# proxy_request.ciphers = ['DES-CBC3-SHA']
post_request = Net::HTTP::Post.new(uri, headers)
post_request.content_type = "multipart/related"
response = proxy_request.request(post_request)
puts response.inspect
Also, I've noticed that no matter what proxy_request.ssl_version I set, my error always mentions SSLv2/v3. Does that mean their end requires that version?
Sorry for all the questions. Thanks in advance.
It's been a while, but I just wanted to post that this came down to a couple of issues: the certificates I was passing were not the correct ones; they were for the wrong environment. Once the correct certificates were passed, this started working, though I never got the SSL version questions quite figured out.
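For anyone still chasing the version question, one way to look at the handshake without Wireshark is to drive OpenSSL directly, outside Net::HTTP. A rough sketch (hostname taken from the question; verification disabled purely for diagnosis):
require 'socket'
require 'openssl'

tcp = TCPSocket.new('xml.theirtestenv.com', 443)
ctx = OpenSSL::SSL::SSLContext.new
ctx.verify_mode = OpenSSL::SSL::VERIFY_NONE # diagnosis only, never in production

ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
ssl.hostname = 'xml.theirtestenv.com' # SNI; many servers require it
ssl.connect                           # raises OpenSSL::SSL::SSLError if the handshake fails
puts "negotiated #{ssl.ssl_version} with cipher #{ssl.cipher.first}"
ssl.close
tcp.close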

Connecting to Mongod via Ruby driver using SSL returns Mongo::ConnectionFailure

I want to use SSL with MongoDB. It's not enabled by default, so one has to compile from source with the necessary options. I followed the official documentation and got the v2.6.4 binary built and running nicely on a freshly deployed server running Ubuntu 14.04. All good so far.
Next I set up mongod as described in the official docs. I followed their example of using a self-signed certificate for testing purposes. The relevant part of the config looks like:
...
net:
  bindIp: 127.0.0.1
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/mongo/security/mongodb.pem
...
If I then run the client and tell it to use SSL, I connect fine ($ mongo --ssl). FWIW, if I try without the --ssl argument, it doesn't connect.
OK, time to link up via Ruby. I'm on the same server and I try the following Ruby script:
require 'rubygems'
require 'mongo'
client = Mongo::MongoClient.new('localhost', 27017, {:ssl => true})
Nope. It's not having it:
/home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:422:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:661:in `setup'
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:177:in `initialize'
from test_mongo_ssl.rb:8:in `new'
from test_mongo_ssl.rb:8:in `<main>'
So, best to make sure there's nothing wrong with the default connection without SSL. I disable SSL on mongod and restart, then try the Ruby script again, this time without the ssl option:
...
client = Mongo::MongoClient.new('localhost', 27017)
And it's fine. Therefore I feel I've narrowed it down to the Ruby driver and SSL, but beyond that there's little else to go on.
EDIT I tried their Python driver on the same server and used their example program:
from pymongo import MongoClient
c = MongoClient(host="localhost", port=27017, ssl=True)
And that did connect OK, so at least I can feel fairly confident that mongod is configured properly and the issue lies somewhere within the Mongo Ruby driver, quite possibly a bug in the current driver (v1.11.1).
UPDATE I've also had success connecting via ssl using the node.js driver:
var mongo = require('mongodb');
var database = new mongo.Db("my_database", new mongo.Server("127.0.0.1", 27017, {ssl:true} ), {w:0});
database.open(function(err, db) {
  if (err) throw err;
  db.authenticate('user', 'password', function(err, result) {
    var collection = db.collection('foo');
    collection.findOne(function(err, item) {
      if (err) throw err;
      console.log(item);
      db.close();
    });
  });
});
It therefore seems increasingly likely that there's either a bug in the Ruby driver, or that the documentation is incomplete and doesn't accurately explain how to use SSL connections. I've opened a new issue on MongoDB's issue tracker to hopefully get to the bottom of this.
Rather embarrassingly, the solution to this issue was that my /etc/hosts file had a typo in the localhost entry:
127.0.0.1 localhost.localdomain locahost
As you can see, it's missing the second letter L in "localhost". (I suspect it went missing during an accidental vim gesture.) Therefore to resolve I just had to reinstate the missing "l":
127.0.0.1 localhost.localdomain localhost
It's still a mystery why the Python sample worked correctly, and it's because of that that I didn't twig earlier that the problem was with the hosts file.
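A one-line check that would have caught this immediately, since Ruby's resolver consults /etc/hosts first:
require 'resolv'

# With the typo above this prints an unexpected address (or raises Resolv::ResolvError);
# with a correct hosts file it prints 127.0.0.1 (or ::1).
puts Resolv.getaddress('localhost')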

APNS Connection Issue

OK, I know there has been a lot of discussion regarding APNS connection failures. Most of it recommends checking that outgoing port 2195 on the server allows the connection. That is not my problem, although I am experiencing the 'connection refused' error (111).
I have validated communication between my server and the sandbox server by receiving a 200 response from the Apple gateway. I know my certificates are good because I have tested the connection using openssl from a Mac. I have also been able to connect to the gateway once from my server, but I cannot get a consistent connection.
The test code I am using is as follows:
$ctx = stream_context_create();
stream_context_set_option($ctx,'ssl', 'local_cert', $pem);
stream_context_set_option($ctx, 'ssl', 'passphrase', $key);
$gateway = 'gateway.sandbox.push.apple.com';
$port = '2195';
$remote_socket = 'ssl://'.$gateway.':'.$port;
$fp = stream_socket_client($remote_socket, $err, $errstr, 60,STREAM_CLIENT_CONNECT, $ctx);
if (!$fp) {
    echo $err.'<br>';
    echo $errstr.'</br>';
    echo 'error=apple failed to connect';
} else {
    fclose($fp);
    echo 'success';
}
I have placed the .pem file in the same directory as the script file, removed the use of the passphrase, specified verify_peer, and used the STREAM_CLIENT_ASYNC_CONNECT and STREAM_CLIENT_PERSISTENT flags, all without success.
Is this an issue with something I am doing, an issue with the APNS sandbox server, or is this just what I should expect from APNS? Any insights or help you can provide are greatly appreciated -- my hair is getting pretty thin!
Regards.
PS -- If I remove the passphrase I get a 115 error saying the key cannot be accessed.
Despite being able to connect at least once, it turns out that you cannot get access to the APNS gateway without at least a virtual dedicated server on GoDaddy. At a minimum of $80 per month, paid upfront, that is too expensive. On to other means.
Some other things to try (there is also a Ruby connection-test sketch after this list):
Use STREAM_CLIENT_CONNECT|STREAM_CLIENT_PERSISTENT in the stream_socket_client call
Don't use verify_peer
Check that the owner of the PHP script has enough rights to read the .pem file and to connect to a port located elsewhere (i.e. not just localhost)
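For what it's worth, the same connectivity test can be written in Ruby (the language used in the rest of this thread). This is only a sketch: the .pem path and passphrase are placeholders, and a failure here points at the TLS/certificate layer rather than the firewall:
require 'socket'
require 'openssl'

pem = File.read('apns-dev.pem')          # exported certificate + key; path is a placeholder
ctx = OpenSSL::SSL::SSLContext.new
ctx.cert = OpenSSL::X509::Certificate.new(pem)
ctx.key  = OpenSSL::PKey::RSA.new(pem, 'passphrase-if-any')

tcp = TCPSocket.new('gateway.sandbox.push.apple.com', 2195)
ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
ssl.connect                              # raises on handshake/certificate problems
puts 'TLS connection to the sandbox gateway established'
ssl.close
tcp.close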
