I'm attempting to use Ruby to connect to MongoDB, but first I need to reach the server hosting Mongo.
There is a bastion host that is publicly reachable; the database host can only be reached through the bastion.
Here is my code:
require 'net/ssh/gateway'
require 'mongo'
include Mongo
ops_host = '<PUBLIC IP/DNS>'
db_host = '<PRIVATE IP>'
port = '27017'
user = 'ubuntu'
key = File.read("<PATHTOKEY>")
gateway = Net::SSH::Gateway.new(ops_host, user, :key_data => key, :keys_only => true)

gateway.ssh(db_host, user, :key_data => key, :keys_only => true) do |ssh|
  puts ssh.exec!("hostname")
  client = MongoClient.new(db_host, port)
  db = client.db('<DBNAME>')
  coll = db.collection('profiles')
  puts coll.find().count()
end
I'm able to connect to the database host, since puts ssh.exec!("hostname") returns the correct hostname, but I cannot get Mongo to connect: I keep receiving "Failed to connect to a master node" errors.
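A likely cause is that gateway.ssh only runs commands on the database host, while MongoClient still executes locally and tries to reach the private IP directly. A minimal sketch of the port-forwarding approach, reusing ops_host, user, key and db_host from above and assuming Net::SSH::Gateway#open and the mongo 1.x MongoClient API:
gateway = Net::SSH::Gateway.new(ops_host, user, :key_data => key, :keys_only => true)

# Forward the remote mongod port to an ephemeral local port and connect there.
gateway.open(db_host, 27017) do |local_port|
  client = MongoClient.new('127.0.0.1', local_port)
  db = client.db('<DBNAME>')
  puts db.collection('profiles').find.count
end

gateway.shutdown!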
How do I establish a ruby Datamapper connection to MariaDB on Amazon RDS with SSL?
Here's what I did:
A non-SSL connection works when testing with:
uri = 'mysql://user:pass@host:port/db_name'
connection = DataObjects::Connection.new(uri)
=> #<DataObjects::Mysql::Connection:0x000056179a3a5921
connection.secure?
=> false
According to the MySQL datamapper wiki, an ssl connection requires the following options: :ssl_ca, :client_key, and :client_cert.
This would result in the following code:
uri = 'mysql://user:pass@host:port/db_name?'
ssl_opts = 'ssl[ssl_ca]=file&ssl[client_key]=file&ssl[client_cert]=file'
connection = DataObjects::Connection.new(uri + ssl_opts)
connection.secure?
=> false
However, the only file I have is the RDS combined CA bundle referenced in the RDS docs.
I do not have a client_cert at all.
Connecting with the mysql client on the CLI works with SSL:
mysql --ssl -h host -u user -p pass db_name
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 1638
Server version: 10.1.26-MariaDB MariaDB Server
In the doc
https://github.com/datamapper/do/wiki/MySQL
it also says:
"as tested only ca_cert was required to connect to RDS."
So try adding only the ca_cert path and do a test.
There's only one parameter required: :ssl => {:ca_cert => 'pem_file'}.
However, it looks like using a URI string for configuration does not work. The reason is a limitation in Addressable::URI: it cannot handle query strings that represent hashes nested more than one level deep.
The good news is that it works using DataMapper.setup with a config Hash:
DataMapper.setup(:default,
  :adapter  => 'mysql',
  :user     => 'user',
  :database => 'db_name',
  :host     => 'host',
  :password => 'pass',
  :ssl      => {
    :ca_cert => '/path/to/rds-combined-ca-bundle.pem'
  }
)
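To confirm the SSL option actually took effect, one quick check (a sketch that assumes the DataObjects adapter's raw-SQL select helper; MariaDB reports an empty Ssl_cipher when the session is not encrypted):
DataMapper.finalize

adapter = DataMapper.repository(:default).adapter
# A non-empty cipher value here means the session really is using TLS.
puts adapter.select("SHOW STATUS LIKE 'Ssl_cipher'").inspect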
I'm trying to connect to FTP via a SOCKS5 proxy using Ruby's Net::FTP library. The documentation says to set the SOCKS_SERVER environment variable in order to connect through a proxy (http://ruby-doc.org/stdlib-2.0.0/libdoc/net/ftp/rdoc/Net/FTP.html#method-i-connect), but it does not seem to work.
The code I'm running is this:
irb(main):054:0> ftp = Net::FTP.new
=> #<Net::FTP:0x007efd08c73768 @mon_owner=nil, @mon_count=0, @mon_mutex=#<Thread::Mutex:0x007efd08c73718>, @binary=true, @passive=true, @debug_mode=false, @resume=false, @sock=#<Net::FTP::NullSocket:0x007efd08c736f0>, @logged_in=false, @open_timeout=nil, @read_timeout=60>
irb(main):056:0> ENV['SOCKS_SERVER'] = 'host:port'
=> "host:port"
irb(main):055:0> ftp.connect('test.rebex.net')
=> nil
irb(main):057:0> ftp.login('demo', 'password')
=> true
irb(main):058:0> ftp.ls
=> ["10-27-15 03:46PM <DIR> pub", "04-08-14 03:09PM 403 readme.txt"]
When I look at the proxy logs, I cannot see any requests going through.
What am I doing wrong, or does anybody have an example of how to achieve this?
If you're on a Windows computer you'll need to use the dress_socks gem and monkeypatch Net::FTP:
require 'dress_socks'

$socks_server = '127.0.0.1'
$socks_port = '9090'

class Net::FTP
  def open_socket(host, port) # :nodoc:
    # puts "opening socket #{host}:#{port}"
    return DressSocks::Socket.new(host, port,
      socks_server: $socks_server, socks_port: $socks_port)
  end
end
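With the patch in place (net/ftp needs to be loaded first so the class reopen targets the real Net::FTP), usage is the same as with plain Net::FTP. For example, against the public Rebex test server from the question:
require 'net/ftp'
require 'dress_socks'
# ...monkeypatch from above...

ftp = Net::FTP.new
ftp.connect('test.rebex.net')   # the socket is now opened through 127.0.0.1:9090
ftp.login('demo', 'password')
puts ftp.ls
ftp.close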
I wrote a Ruby script that's trying to connect to a Postgres database hosted on Heroku.
If I use a hardcoded password, or if I load the password using gets, everything works fine.
However, if I load the password using IO.noecho, I get the following exception:
storing.rb:11:in `initialize': FATAL: password authentication failed for user "***" (PG::ConnectionBad)
FATAL: no pg_hba.conf entry for host "****", user "***", database "***", SSL off
from storing.rb:11:in `new'
from storing.rb:11:in `create_conn'
from fetch_currencies.rb:11:in `<main>'
Here's my code:
require 'pg'
require 'io/console'

def create_conn(password)
  conn = PGconn.connect(
    :host => '***',
    :port => 5432,
    :dbname => '***',
    :user => '***',
    :password => password)
  return conn
end

puts 'Postgres DB password:'
pass = STDIN.noecho(&:gets)
conn = create_conn(pass)
I tried printing the password after loading it, as well as checking whether it's a String, and everything seems to be fine. What could be the problem?
The problem, of course, was that I didn't chomp the input, so the trailing newline character was also passed as part of the password.
The right way to go is then
pass = STDIN.noecho(&:gets).chomp
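This also explains why printing the password looked fine: puts swallows a trailing newline, whereas p (String#inspect) reveals it. A short illustrative session (the password value is made up):
require 'io/console'

print 'Postgres DB password: '
raw = STDIN.noecho(&:gets)
puts

puts raw        # prints exactly like the real password
p raw           # => "secret\n"  <- the stray newline shows up here
p raw.chomp     # => "secret"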
I am trying to connect via Net::FTPTLS to a Microsoft-based file server (IIS) which is configured to use FTP on port 22 and requires SSL.
I connect via:
require 'net/ftptls'
ftp = Net::FTPTLS.new()
ftp.connect('host.com', port_number)
ftp.login('Username', 'Password')
ftp.puttextfile('somefile.txt', 'where/to/save/somefile.txt')
ftp.close
Problem is, I get the following error:
hostname does not match the server certificate
It seems that I have to disable the openssl peer verification: OpenSSL::SSL::VERIFY_PEER should become OpenSSL::SSL::VERIFY_NONE.
Any ideas on how to monkey-patch the Net::FTPTLS class? Has anyone done this successfully?
Instead of using Net::FTPTLS, use Ruby 2.4+ with the following code:
require 'net/ftp'
ftp = Net::FTP.new(nil, ssl: {:verify_mode => OpenSSL::SSL::VERIFY_NONE})
ftp.connect('host.com', port_number)
ftp.login('Username', 'Password')
ftp.puttextfile('somefile.txt', 'where/to/save/somefile.txt')
ftp.close
What I did, rather than monkeypatching Ruby itself, was bring a copy of the class into lib/ in my project:
module Net
  class FTPTLS < FTP
    def connect(host, port = FTP_PORT)
      @hostname = host
      super
    end

    def login(user = "anonymous", params = {:password => nil, :acct => nil, :ignore_cert => false})
      store = OpenSSL::X509::Store.new
      store.set_default_paths
      ctx = OpenSSL::SSL::SSLContext.new('SSLv23')
      ctx.cert_store = store
      ctx.verify_mode = params[:ignore_cert] ? OpenSSL::SSL::VERIFY_NONE : OpenSSL::SSL::VERIFY_PEER
      ctx.key = nil
      ctx.cert = nil
      voidcmd("AUTH TLS")
      @sock = OpenSSL::SSL::SSLSocket.new(@sock, ctx)
      @sock.connect
      @sock.post_connection_check(@hostname) unless params[:ignore_cert]
      super(user, params[:password], params[:acct])
      voidcmd("PBSZ 0")
    end
  end
end
I also cleaned up the param passing a bit. You would use this like so:
require 'ftptls' # Use my local version, not net/ftptls

@ftp_connection = Net::FTPTLS.new()
@ftp_connection.passive = true
@ftp_connection.connect(host, 21)
@ftp_connection.login('user', :password => 'pass', :ignore_cert => true)
HTH
This works fine for me:
require 'net/ftp'

def ftp_options
  {
    port: Net::FTP::FTP_PORT,
    username: 'ftp_user',
    password: 'password',
    passive: true,
    ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE }
  }
end

ftp = Net::FTP.new("ftps.host.com", ftp_options)
ftp.putbinaryfile('where/is/your/file/somefile.txt', 'where/to/save/somefile.txt')
ftp.puttextfile('somefile.txt', 'where/to/save/somefile.txt')
ftp.close
Remember that you have to provide the FTPS hostname, e.g. ftps.host.com.
I am new to Cassandra, so I followed this guide on how to get Cassandra set up on an EC2 instance. I have it all set up and ready to go, but for some reason I'm not able to connect from Ruby. Here is what I have been trying:
require 'cassandra'
client = Cassandra.new('PERSON', 'ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:9160')
# =>
# <Cassandra:0x100cda3b0
# @auto_discover_nodes = true,
# @column_name_class = {},
# @column_name_maker = {},
# @is_super = {},
# @sub_column_name_class = {},
# @sub_column_name_maker = {},
# attr_accessor :keyspace = "PERSON",
# attr_reader :servers = [
# [0] "ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:9160"
# ],
# attr_reader :thrift_client_class = ThriftClient < AbstractThriftClient,
# attr_reader :thrift_client_options = {
# :transport_wrapper => Thrift::FramedTransport < Thrift::BaseTransport,
# :thrift_client_class => ThriftClient < AbstractThriftClient,
# :protocol => Thrift::BinaryProtocolAccelerated < Thrift::BinaryProtocol
# }
# >
Then when I try to use the client, I get this error:
client.keyspaces
#=> ThriftClient::NoServersAvailable: No live servers in [ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:9160].
I am able to connect via SSH so I'm not sure what I'm doing wrong here.
Update:
My security group includes:
port: 9160
protocol: tcp
source: sg-xxxxxxxx
Where is the "source" of the connection into the Cassandra server?
The security group you are using only opens up port 9160 to instances in the group sg-xxxxxxxx. If you are trying to connect from anywhere else (like the outside world) you will not be successful.
There are two things to check:
do you have a firewall running on the client and/or the server?
do you allow access in your EC2 security group from your client to your server?
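If both of those look right, a quick way to confirm that the client machine can actually reach port 9160 is a plain TCP check run from the client (an illustrative sketch using only the Ruby standard library; substitute your real hostname):
require 'socket'
require 'timeout'

host = 'ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com'

begin
  Timeout.timeout(5) { TCPSocket.new(host, 9160).close }
  puts 'port 9160 is reachable'
rescue Timeout::Error, SystemCallError => e
  puts "port 9160 is NOT reachable: #{e.class}: #{e.message}"
end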