Unable to connect to PostgreSQL with Ruby - ruby

I've searched the archives but could not find an answer to my dilemma. I'm coding in Ruby with the watir-webdriver framework on my local Mac (Yosemite) and want to connect to a PostgreSQL database on a Linux box.
I have the required Ruby gems installed on my local Mac:
*** LOCAL GEMS ***
dbd-pg (0.3.9)
pg (0.18.4)
dbi (0.4.5, 0.4.4)
I am using the following code:
require 'rubygems'
require 'pg'
require 'dbd/pg'
require 'dbi'
conn = PGconn.connect("10.0.xx.xx","5432",'','',"mydbname","dbuser", "")
res = conn.exec('select * from priorities_map;')
puts res.getvalue(0,0)
conn.close if conn
On running this, I am getting these errors:
.initialize': Could not connect to server: Connection refused (PG::ConnectionBad)
Is the server running on host "10.0.xx.xx" and accepting
TCP/IP connections on port 5432?
If I use the following code:
dbh = DBI.connect("dbi:pg:mydbname:ipaddress", "user", "")
row = dbh.exec('select * from etr_priorities_map;')
puts row.getvalue(0,0)
dbh.disconnect if dbh
I get the error
block in load_driver': Unable to load driver 'pg' (underlying error: wrong constant name pg) (DBI::InterfaceError) from System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
I am new to Ruby. How can I resolve these issues?

The first error, as @Doon said in the comments, comes from the TCP connection and usually means your database is not listening on the network. Most PostgreSQL packages ship with a default configuration that only allows local connections, but you can enable connections over the network via the listen_addresses setting in the server configuration. I installed PostgreSQL through Homebrew on my Mac, and my config is at /usr/local/var/postgres/postgresql.conf, but if you installed it some other way the path may be different.
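For reference, here is a minimal sketch of the settings involved, assuming the stock file layout (the subnet below is a placeholder for wherever your Mac connects from):
# postgresql.conf -- listen on the network instead of only localhost
listen_addresses = '*'          # default is 'localhost'

# pg_hba.conf -- you also need a rule allowing your client, e.g.:
# host    mydbname    dbuser    10.0.0.0/24    md5
Restart the server after changing either file.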
The second error is happening because the "driver" part of the connection string is case-sensitive, and the DBD driver for Postgres is named Pg, not pg. Try this:
dbh = DBI.connect("dbi:Pg:mydbname:ipaddress", "user", "")
Also, unless you have your heart set on using Ruby/DBI, you might want to consider a more recently maintained library. Ruby-DBI is well written and tested, but it hasn't seen a release since 2010, and Ruby itself has changed significantly in the interim.
If you do want to consider alternatives, I use Sequel for almost everything and I highly recommend it, especially for Postgres development, but DataMapper and ActiveRecord both have a large user base as well.
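For example, the query from the question looks something like this in Sequel (a sketch only; it reuses the placeholder host and credentials from the question, and assumes the pg gem is installed):
require 'sequel'

# Sequel's postgres adapter uses the pg gem under the hood.
DB = Sequel.connect('postgres://dbuser@10.0.xx.xx:5432/mydbname')

DB[:priorities_map].first    # => first row as a Hash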

Related

Selenium Webdriver connection timing out while running automation tests

While running automation tests I sometimes get a timeout error from Selenium WebDriver (I think this is where the issue is, at least). My team and I have all recently migrated to MacBooks (from a combination of Windows and Ubuntu machines) and are all seeing this behaviour.
While running a suite of tests I will (seemingly at random) get the following error output in the console:
Errno::ETIMEDOUT: Failed to open TCP connection to 127.0.0.1:9515 (Operation timed out - connect(2) for "127.0.0.1" port 9515)
This doesn't happen consistently: sometimes I'll run a pack and not have any such errors, sometimes I'll have multiple occurrences.
Here is the code which registers the driver (in case anything here points to what the issue could be):
Capybara.register_driver :selenium do |app|
  opts = Selenium::WebDriver::Chrome::Options.new
  opts.add_argument '--start-maximized'
  opts.add_argument 'disable-infobars'
  opts.add_argument '--disable-notifications'
  opts.add_preference(:safebrowsing, enabled: true)
  opts.add_preference(:browser, set_download_behavior: { behavior: 'allow' })
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: opts)
end
The gems I am using are Capybara (3.11.0), Cucumber (3.1.0), and selenium-webdriver (3.141.0). I have ChromeDriver (73.0.3683.68) installed via Homebrew.
Has anyone encountered this issue and worked out what the cause is?
Port 9515 is the default port chromedriver runs on. If you happen to be using Chrome/chromedriver v74, try rolling back to v73 or forward to v75; v74 has been reported to have issues where it will randomly hang.
Another potential solution is to upgrade to Capybara >= 3.16.0, which defaults to using a persistent connection to chromedriver. That means less opening and closing of connections, and less chance for chromedriver to hang during connection establishment.
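If you go that route, the Gemfile change is just a version constraint (a sketch; keep whatever other constraints your bundle already has):
# Gemfile
gem 'capybara', '>= 3.16.0'          # persistent connection to chromedriver
gem 'selenium-webdriver', '~> 3.141'
followed by bundle update capybara.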

Unresponsive socket after x time (puma - ruby)

I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue. I was hoping somebody over here can help me with some answers or point me in the right direction. I have the following setup:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3. I've also installed Nginx as a reverse proxy, but I'm already sure Nginx isn't the problem here: I tried to verify whether the socket was working using this script, and it wasn't; I got a timeout there as well, so I can rule Nginx out.
This is a testing environment, so the server isn't experiencing any extreme load. I've also checked memory consumption; it still has several GBs free, so that couldn't be the issue either.
What triggered me to look at the Puma socket was the error message I got in my Nginx error log:
upstream timed out (110: Connection timed out) while reading response header from upstream
Also, I couldn't find anything in Puma's logs indicating what is going wrong. Here is my Puma setup:
threads 0, 16

app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']

daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app

on_worker_boot do
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this is the output in my Puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
  cli_options:
  conf:
  options:
    :min_threads: 0
    :max_threads: 16
    :quiet: false
    :debug: false
    :binds:
    - unix:///APP/sockets/puma.sock
    :workers: 1
    :daemon: true
    :mode: :http
    :before_fork: []
    :worker_timeout: 60
    :worker_boot_timeout: 60
    :worker_shutdown_timeout: 30
    :environment: staging
    :redirect_stdout: "/APP/log/puma.stdout.log"
    :redirect_stderr: "/APP/log/puma.stderr.log"
    :redirect_append: true
    :pidfile: "/APP/pids/puma.pid"
    :state: "/APP/pids/puma.state"
    :control_url: unix:///tmp/puma-status-1449260516541-37
    :config_file: config/puma.rb
    :control_url_temp: "/tmp/puma-status-1449260516541-37"
    :control_auth_token: cda8879717be7a645ea323d931b88d4b
    :tag: APP
The application itself is a Rails app on the latest version 4.2.5, it's deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, that would be very much appreciated, because right now I don't see any output anywhere that could help me.
EDIT
I replaced the Unix socket with a TCP connection to Puma, with the same result: it still hangs after some amount of time.
I'd start with:
How many requests get processed successfully per instance of puma?
Make sure you log the beginning and end of each request with the id of the thread executing it. What do you see?
Not knowing more about your application, I'd say it's likely the threads get stuck doing some long/blocking calls without timeouts or spinning on some computation until the whole thread pool gets depleted.
We'll see.
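To make the logging suggestion concrete, here is a minimal sketch of per-request logging with thread ids, written as a Rack middleware (the class name and logger are assumptions, not part of the original setup):
require 'logger'

class RequestLogger
  def initialize(app, logger = Logger.new($stdout))
    @app = app
    @logger = logger
  end

  def call(env)
    tid = Thread.current.object_id
    @logger.info "[thread #{tid}] start #{env['REQUEST_METHOD']} #{env['PATH_INFO']}"
    status, headers, body = @app.call(env)
    @logger.info "[thread #{tid}] end #{env['PATH_INFO']} -> #{status}"
    [status, headers, body]
  end
end

# In config.ru: use RequestLogger
If threads are getting stuck, you should see starts that never get a matching end line; once 16 of them pile up, the pool is depleted and the socket stops responding.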
I finally found out why my application was behaving the way it was.
After trying a TCP connection and switching to Unicorn, I started looking into other possible sources.
That's when I thought my connection to Google Cloud SQL could be the problem. Once I read the Cloud SQL FAQ, I saw they mention that you have to tweak your Compute Engine instances to ensure they keep your DB connections open. So I performed the steps they recommend, and that solved the problem for me. I've added them here just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time

Connecting to Mongod via Ruby driver using SSL returns Mongo::ConnectionFailure

I want to use SSL with MongoDB. It's not enabled by default so one has to compile from source with the necessary options. I followed the official documentation and got the v2.6.4 binary built and running nicely on a freshly deployed server running Ubuntu 14.04. All good so far.
Next I set up mongod as described in the official docs. I followed their example of using a self-signed certificate for testing purposes. The relevant part of the config looks like:
...
net:
  bindIp: 127.0.0.1
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/mongo/security/mongodb.pem
...
If I then run the client and tell it to use SSL, it connects fine ($ mongo --ssl). FWIW, if I try without the --ssl argument it doesn't connect.
OK, time to link up via Ruby. I'm on the same server, and I try the following Ruby script:
require 'rubygems'
require 'mongo'
client = Mongo::MongoClient.new('localhost', 27017, {:ssl => true})
Nope. It's not having it:
/home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:422:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:661:in `setup'
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:177:in `initialize'
from test_mongo_ssl.rb:8:in `new'
from test_mongo_ssl.rb:8:in `<main>'
So, best to make sure there's nothing wrong with the default connection without SSL. I disable SSL on mongod and restart, then try the Ruby script again, this time without the ssl option:
...
client = Mongo::MongoClient.new('localhost', 27017)
And it's fine. Therefore I feel I've narrowed it down to the Ruby driver and SSL, but beyond that there's little else to go on.
EDIT I tried their Python driver on the same server and used their example program:
from pymongo import MongoClient
c = MongoClient(host="localhost", port=27017, ssl=True)
And that did connect OK. So at least I can feel fairly confident that the mongod is configured properly and the issue lies somewhere within the Mongo Ruby driver. Quite possibly a bug in their current driver (v1.11.1).
UPDATE I've also had success connecting via ssl using the node.js driver:
var mongo = require('mongodb');
var database = new mongo.Db("my_database", new mongo.Server("127.0.0.1", 27017, {ssl: true}), {w: 0});
database.open(function(err, db) {
  if (err) throw err;
  db.authenticate('user', 'password', function(err, result) {
    var collection = db.collection('foo');
    collection.findOne(function(err, item) {
      if (err) throw err;
      console.log(item);
      db.close();
    });
  });
});
It therefore seems increasingly likely that there's either a bug in the Ruby driver, or the documentation is incomplete and doesn't accurately explain how to use SSL connections. I've opened a new issue on MongoDB's issue tracker to hopefully get to the bottom of this.
Rather embarrassingly, the solution to this issue was that my /etc/hosts file had a typo in the localhost entry:
127.0.0.1 localhost.localdomain locahost
As you can see, it's missing the second letter L in "localhost". (I suspect it went missing during an accidental vim gesture.) To resolve it, I just had to reinstate the missing "l":
127.0.0.1 localhost.localdomain localhost
It's still a mystery why the Python sample worked correctly, and it's because it did that I didn't twig earlier that the hosts file was the problem.

OpenSSL::SSL::SSLError: hostname does not match the server certificate

All of a sudden this morning my HTTP client (HTTParty) threw an error: OpenSSL::SSL::SSLError: hostname does not match the server certificate
Firstly, I'm not able to understand why this happened today: we have been making that API call many times a day for the past 2 years without any issue.
Secondly, I don't understand how to solve it, since it's internal to HTTParty.
The only thing I know of is that I can set SSL_CERT_FILE in ENV, but as said, I already have the root CA listed in my /etc/ssl/certs (SSL_CERT_DIR).
Here is my output:
irb(main):001:0> require "openssl"
=> true
irb(main):002:0> puts OpenSSL::OPENSSL_VERSION
OpenSSL 1.0.1 14 Mar 2012
=> nil
irb(main):003:0> puts "SSL_CERT_FILE: %s" % OpenSSL::X509::DEFAULT_CERT_FILE
SSL_CERT_FILE: /usr/lib/ssl/cert.pem
=> nil
irb(main):004:0> puts "SSL_CERT_DIR: %s" % OpenSSL::X509::DEFAULT_CERT_DIR
SSL_CERT_DIR: /usr/lib/ssl/certs
Lastly, as said, nothing has changed in OpenSSL or in the code; the only thing that has happened is that the OpenSSL version was patched for the Heartbleed vulnerability.
Mind you, we just patched the OpenSSL version but didn't recompile Ruby. Could that be the cause of this?
The Ruby in question is ruby 1.9.3p327.
The HTTP library is httparty-0.13.0.
NOTE: I don't expect a solution that sets the VERIFY_NONE option in OpenSSL.
It's hard to be sure without knowing the host you are connecting to, but my guess is that they simply changed the certificate at the server's end. The problem might be that your script does not support SNI (Server Name Indication, i.e. multiple host names and certificates behind the same IP), and the provider has now changed the default certificate for this site (the one which is used when the client does not support SNI).
But like I said, it's hard to be sure with this lack of detail in the question.
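One way to test the SNI theory is to compare the certificate the server presents with and without SNI. A minimal sketch (the host name is a placeholder for the API endpoint you actually call):
require 'socket'
require 'openssl'

host = 'api.example.com'    # placeholder for the real endpoint

tcp = TCPSocket.new(host, 443)
ssl = OpenSSL::SSL::SSLSocket.new(tcp, OpenSSL::SSL::SSLContext.new)
ssl.hostname = host         # sends SNI; comment this line out to see the default certificate
ssl.connect
puts ssl.peer_cert.subject  # compare the output of the two runs
ssl.close
tcp.close
If the two runs print different subjects, the server relies on SNI and your client has to send it.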

ADODB.Connection error from Ruby script on Apache server

I have a Ruby script (non-Rails) that connects to a SQL Server database. When run from the command line, it runs fine. When executed via an HTTP request, it generates an error, specifically when opening the DB connection. Something about the combination of the HTTP/SQL methods is failing.
I'm running the script on a machine with: Windows 7 Ultimate (64-bit), Ruby 1.9.3p125, Apache 2.2.11. The database is SQL Server 10.0.4000, hosted on a separate (corporate, internal) server.
The script looks something like this:
#!/Ruby193/bin/ruby
require 'win32ole'
...
$qadb = nil
begin
  $qadb = SqlServer.new('192.168.100.249', 'qauser', 'password')
  $qadb.open('qadb')
rescue
  logRegression("Rescued: Unable to access QADB: #{$!}")
end
The SqlServer class is based on David Mullet's code, found at http://rubyonwindows.blogspot.com/2007/03/ruby-ado-and-sqlserver.html (not copied here for brevity).
From the command line, the DB opens fine and I get the expected result from the script. When I call the script via my internal server (http://qatools/getTask.rb), I get the following error in my log file:
Rescued: Unable to access QADB: failed to create WIN32OLE object from `ADODB.Connection'
HRESULT error code:0x8007007e
The specified module could not be found.
I've considered that I might be missing a DLL. Other research led me to ntwdblib.dll; I tried downloading a copy and placing it in various folders. I've also considered that I might be facing an Apache configuration issue and/or a security/permissions issue, but I haven't found any solutions for those that seem to fit my specific problem.
Any ideas?
