RStudio fails to connect to an SSL-enabled PostgreSQL server on Windows

I am facing a problem while connecting to an SSL-enabled PostgreSQL server from Windows. I am getting the following error:
Error:
Error in postgresqlNewConnection(drv, …) :
RS-DBI driver: (could not connect ip:80 on dbname "all": sslmode value "require" invalid when SSL support is not compiled in.
Commands I have used:
install.packages("RPostgreSQL")
install.packages("rstudioapi")
require("RPostgreSQL")
require("rstudioapi")
drv <- dbDriver("PostgreSQL")
pg_dsn <- paste0(
  'dbname=', "all", ' ',
  'sslmode=require')
con <- dbConnect(drv,
                 dbname = pg_dsn,
                 host = "ip",
                 port = 80,
                 user = "abcd",
                 password = rstudioapi::askForPassword("Database password"))

The error means that the libpq.dll your RPostgreSQL build loaded was compiled without SSL support. You need to use a PostgreSQL client shared library (libpq.dll) that was built with SSL support, for example the one shipped with a full PostgreSQL installation for Windows, and make sure it is found on the PATH ahead of any non-SSL copy.
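As an aside, the `pg_dsn` string pasted together above follows libpq's space-separated `key=value` format. A minimal Python sketch of the same composition (a hypothetical helper, not part of RPostgreSQL; the quoting rules follow the libpq documentation):

```python
def make_dsn(**params):
    """Compose a libpq keyword/value connection string.

    Per libpq's rules, empty values and values containing spaces,
    single quotes, or backslashes must be single-quoted, with the
    quote and backslash characters backslash-escaped.
    """
    parts = []
    for key, value in params.items():
        value = str(value)
        if value == "" or any(c in value for c in " '\\"):
            value = "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"
        parts.append("{0}={1}".format(key, value))
    return " ".join(parts)

print(make_dsn(dbname="all", sslmode="require"))  # dbname=all sslmode=require
```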

Connecting to Neo4j from Ruby

Currently I'm having problems connecting to a local Neo4j instance from Ruby. I have the Mac version of Neo4j Desktop running. My Ruby version is 2.6.5, and my gem versions are:
neo4j (9.6.0)
neo4j-core (9.0.0)
Neo4j version: Neo4j Desktop 1.2.3 for Mac, using Neo4j 3.5.12
Here is the code that I use:
user = 'neo4j'
pass = 'asdf1234'
url = "https://localhost:7473"
options = {user: user, pass: pass}
neo4j_adaptor = Neo4j::Core::CypherSession::Adaptors::HTTP.new(url, options)
neo4j_session = Neo4j::Core::CypherSession.new(neo4j_adaptor)
result = neo4j_session.query("MATCH (n:blah) RETURN count(n)")
The error I get is:
Neo4j::Core::CypherSession::ConnectionFailedError: Faraday::ConnectionFailed: SSL peer certificate or SSH remote key was not OK
When I add the no-SSL option like this, the error is the same:
options = {ssl: false, user: user, pass: pass}
When I switch to bolt like this:
require 'neo4j/core/cypher_session/adaptors/bolt'
user = 'neo4j'
pass = 'asdf1234'
url = "bolt://localhost:7687"
options = {user: user, pass: pass}
neo4j_adaptor = Neo4j::Core::CypherSession::Adaptors::Bolt.new(url, options)
neo4j_session = Neo4j::Core::CypherSession.new(neo4j_adaptor)
result = neo4j_session.query("MATCH (n:blah) RETURN count(n)")
The error becomes:
Net::TCPClient::ConnectionFailure: #connect Failed to connect to any of localhost:7687 after 0 retries. Net::TCPClient::ConnectionFailure: #connect SSL handshake failure with 'localhost[127.0.0.1]:7687': OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate)
When I set the ssl option to false:
options = {ssl: false, user: user, pass: pass}
the error changes to:
RuntimeError: Init did not complete successfully
Neo.ClientError.Security.Unauthorized
The client is unauthorized due to authentication failure.
My graph settings are set to default with only these options being enabled:
dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=1G
dbms.memory.pagecache.size=512m
dbms.connector.bolt.enabled=true
dbms.connector.http.enabled=true
dbms.jvm.additional=-XX:+UseG1GC
dbms.jvm.additional=-XX:-OmitStackTraceInFastThrow
dbms.jvm.additional=-XX:+AlwaysPreTouch
dbms.jvm.additional=-XX:+UnlockExperimentalVMOptions
dbms.jvm.additional=-XX:+TrustFinalNonStaticFields
dbms.jvm.additional=-XX:+DisableExplicitGC
dbms.jvm.additional=-Djdk.tls.ephemeralDHKeySize=2048
dbms.jvm.additional=-Djdk.tls.rejectClientInitiatedRenegotiation=true
dbms.windows_service_name=neo4j
dbms.jvm.additional=-Dunsupported.dbms.udc.source=desktop
Any ideas please?
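One observation from the errors above: the `ssl: false` Bolt attempt got past the TLS handshake and failed with `Neo.ClientError.Security.Unauthorized`, which suggests the remaining problem is the credentials rather than the connection. As a sanity check on the URLs themselves, the scheme and port being dialed can be compared against Neo4j 3.5's stock connector ports; a small Python sketch (the defaults below assume an unmodified neo4j.conf):

```python
from urllib.parse import urlsplit

# Default Neo4j 3.5 connector ports (assumes a stock neo4j.conf).
NEO4J_DEFAULT_PORTS = {
    "http": 7474,   # dbms.connector.http
    "https": 7473,  # dbms.connector.https
    "bolt": 7687,   # dbms.connector.bolt
}

def check_neo4j_url(url):
    """Return (scheme, port, port_matches_default) for a connection URL."""
    parts = urlsplit(url)
    return parts.scheme, parts.port, parts.port == NEO4J_DEFAULT_PORTS.get(parts.scheme)

print(check_neo4j_url("https://localhost:7473"))  # ('https', 7473, True)
print(check_neo4j_url("bolt://localhost:7687"))   # ('bolt', 7687, True)
```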

Traefik not getting SSL certificates for new domains

I've got Traefik/Docker Swarm/Let's Encrypt/Consul set up, and it's been working fine. It successfully obtained certificates for the domains admin.domain.tld, registry.domain.tld and staging.domain.tld, but now that I've added containers serving domain.tld and matomo.domain.tld, those aren't getting any certificates (the browser warns of a self-signed certificate because it's the default Traefik certificate).
My Traefik configuration (that's being uploaded to Consul):
debug = false
logLevel = "DEBUG"
insecureSkipVerify = true
defaultEntryPoints = ["https", "http"]
[entryPoints]
[entryPoints.ping]
address = ":8082"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[traefikLog]
filePath = '/var/log/traefik/traefik.log'
format = 'json'
[accessLog]
filePath = '/var/log/traefik/access.log'
format = 'json'
[accessLog.fields]
defaultMode = 'keep'
[accessLog.fields.headers]
defaultMode = 'keep'
[accessLog.fields.headers.names]
"Authorization" = "drop"
[retry]
[api]
entryPoint = "traefik"
dashboard = true
debug = false
[ping]
entryPoint = "ping"
[metrics]
[metrics.influxdb]
address = "http://influxdb:8086"
protocol = "http"
pushinterval = "10s"
database = "metrics"
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "domain.tld"
watch = true
exposedByDefault = false
network = "net_web"
swarmMode = true
[acme]
email = "my@mail.tld"
storage = "traefik/acme/account"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
Possibly related: in traefik.log I repeatedly (almost once per second) get the following, but only for the registry subdomain. It sounds like an issue persisting the data to Consul, but there are no errors indicating such a problem.
{"level":"debug","msg":"Looking for an existing ACME challenge for registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"Looking for provided certificate to validate registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"No provided certificate found for domains registry.domain.tld, get ACME certificate.","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"ACME got domain cert registry.domain.tld","time":"2019-07-07T11:37:23Z"}
Update: I managed to find this line in the log:
{"level":"error","msg":"Error getting ACME certificates [matomo.domain.tld] : cannot obtain certificates: acme: Error -\u003e One or more domains had a problem:\n[matomo.domain.tld] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Fetching http://matomo.domain.tld/.well-known/acme-challenge/WJZOZ9UC1aJl9ishmL2ACKFbKoGOe_xQoSbD34v8mSk: Timeout after connect (your server may be slow or overloaded), url: \n","time":"2019-07-09T16:27:43Z"}
So it seems the issue is the challenge failing because of a timeout. Why the timeout though?
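Since traefik.log is JSON-per-line, the error-level ACME entries can be pulled out programmatically rather than eyeballed; a minimal sketch (hypothetical helper, field names taken from the entries above):

```python
import json

def acme_errors(lines):
    """Yield (time, msg) for error-level ACME entries in a Traefik JSON log."""
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines
        if entry.get("level") == "error" and "ACME" in entry.get("msg", ""):
            yield entry["time"], entry["msg"]

log = [
    '{"level":"debug","msg":"Looking for an existing ACME challenge...","time":"t1"}',
    '{"level":"error","msg":"Error getting ACME certificates [matomo.domain.tld]","time":"t2"}',
]
print(list(acme_errors(log)))
```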
Update 2: More log entries:
{"level":"debug","msg":"Looking for an existing ACME challenge for staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"Looking for provided certificate to validate staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No provided certificate found for domains staging.domain.tld, get ACME certificate.","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No certificate found or generated for staging.domain.tld","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"http: TLS handshake error from 10.255.0.2:51981: remote error: tls: unknown certificate","time":"2019-07-10T19:38:34Z"}
But then, after a couple minutes to an hour, it works (for two domains so far).
Not sure if it's a feature or a bug, but removing the following HTTP-to-HTTPS redirect solved it for me:
[entryPoints.http.redirect]
entryPoint = "https"

Multiple hosts in a JDBC connection URL

We are using a JDBC URL like "jdbc:vertica://80.90..:***/". How can I set a second Vertica host for a separate cluster in this URL? Both clusters have the same table, username, and password; the only difference is the host IP.
I have tried to set the URL as shown below but it doesn't work.
jdbc:vertica://00.00.00.2:1111,00.00.00.1:1111/vertica
url = "jdbc:vertica://****:***/"
url1 = "jdbc:vertica://***:****/"
properties = {
"user": "****",
"password": "*****",
"driver": "com.vertica.jdbc.Driver"
}
df = spark.read.format("JDBC").options(
    url = url and url1,
    query = "SELECT COUNT(*) FROM traffic.stats WHERE date(time_stamp) BETWEEN '2019-03-16' AND '2019-03-17'",
    **properties
).load().show()
Note: PySpark 2.4, Vertica JDBC jar 9.1.1.
One way to do this is to specify a backup host.
url = "jdbc:vertica://00.00.00.2:1111/vertica"
properties = {
"user": "****",
"password": "*****",
"driver": "com.vertica.jdbc.Driver",
"ConnectionLoadBalance": 1,
"BackupServerNode": "00.00.00.1:1111"
}
This will try the host specified in the URL (00.00.00.2:1111). If that host is unavailable, it will try the BackupServerNode. You can specify multiple backup server nodes separated by commas.
Note that the backup node is only tried when the original host is unavailable.
Another option, if you want a random host selected, is to do that logic in Python itself:
import random

host_list = ["00.00.00.2:1111", "00.00.00.1:1111"]
host = random.choice(host_list)  # works the same in Python 2 and 3
url = "jdbc:vertica://{0}/vertica".format(host)
Note: the connection property BackupServerNode is named that way because it is usually used to specify an alternate node within the same database cluster, but if, as in your case, you have two databases with the same username and password, it will also work for connecting to a host in a separate database cluster.
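Combining the two approaches, a small sketch (hypothetical helper) that picks a random primary host and passes the remaining hosts as BackupServerNode, using the property names from the answer above:

```python
import random

def vertica_jdbc_options(hosts, dbname, **props):
    """Build a Spark JDBC options dict: random primary host, rest as backups."""
    hosts = list(hosts)
    primary = random.choice(hosts)
    backups = [h for h in hosts if h != primary]
    options = {"url": "jdbc:vertica://{0}/{1}".format(primary, dbname)}
    if backups:
        options["BackupServerNode"] = ",".join(backups)
    options.update(props)
    return options

opts = vertica_jdbc_options(["00.00.00.2:1111", "00.00.00.1:1111"], "vertica",
                            driver="com.vertica.jdbc.Driver")
```

`spark.read.format("jdbc").options(**opts)` would then consume the dict, with `user` and `password` supplied as in the properties above.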

Firebird connection to remote server using FbConnectionStringBuilder

We're having trouble connecting to a remote Firebird server using the .NET provider's FbConnectionStringBuilder class. We can connect to a local database file in either embedded or server mode; however, we cannot establish a connection to a remote server.
We assign properties of the FbConnectionStringBuilder class with the following code (server mode). I have omitted the code which assigns properties for embedded mode.
var cs = new FbConnectionStringBuilder
{
Database = databaseSessionInfo.PathAbsoluteToDatabase,
Charset = "UTF8",
Dialect = 3,
};
cs.DataSource = databaseSessionInfo.Hostname;
cs.Port = databaseSessionInfo.Port;
cs.ServerType = (FbServerType)databaseSessionInfo.Mode;
cs.Pooling = true;
cs.ConnectionLifeTime = 30;
if (databaseSessionInfo.UseCustomUserAccount)
{
cs.UserID = databaseSessionInfo.Username;
cs.Password = databaseSessionInfo.Password;
}
else
{
cs.UserID = Constants.DB_DefaultUsername;
cs.Password = Constants.DB_DefaultPassword;
}
Pretty straightforward. Our software contains a connection configuration screen whereby a user can supply different connection properties. These properties get assigned to the FbConnectionStringBuilder class using the code above.
The connection builder class outputs a connection string in the following format:
initial catalog="P:\Source\database.fdb";character set=UTF8;dialect=3;data source=localhost;port number=3050;server type=Default;pooling=True;connection lifetime=30;user id=USER;password=example
However, the literature on Firebird connection strings indicated on this page (Firebird Connection Strings) describes a different structure. I can only assume the FbConnectionStringBuilder class builds a connection string satisfying Firebird's requirements.
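For reference, the builder's output is a semicolon-separated list of key=value segments, with values quoted when they contain spaces. A minimal Python sketch parsing that shape back into a dict (a hypothetical helper; it assumes no semicolons inside quoted values, which a full parser would need to handle):

```python
def parse_fb_connection_string(cs):
    """Split a semicolon-separated key=value connection string into a dict."""
    result = {}
    for segment in cs.split(";"):
        if not segment.strip():
            continue
        key, _, value = segment.partition("=")
        result[key.strip().lower()] = value.strip().strip('"')
    return result

cs = 'initial catalog="P:\\Source\\database.fdb";data source=localhost;port number=3050'
print(parse_fb_connection_string(cs)["data source"])  # localhost
```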
1) Does the FbConnectionStringBuilder class append the hostname to the connection string correctly?
2) The Firebird server is running on the server machine. I assume there is no need to install it on the client?
3) What libraries need to be installed on the client to support a remote server connection?
4) Are we doing this right?
Any advice is appreciated.
Answering your questions:
1) Yes it will.
2) Correct, the client library will connect over the network.
3) If you use the ADO library, just FirebirdSql.Data.FirebirdClient.dll
4) Maybe. I don't know if this will help, but this is how I connect:
FbConnectionStringBuilder csb = new FbConnectionStringBuilder();
csb.DataSource = Host;
csb.Port = Port;
csb.Database = Database;
csb.UserID = User;
csb.Password = Password;
csb.Charset = CharacterSet;
FbConnection connection = new FbConnection(csb.ToString());
Interestingly, what absolute path are you providing to the string builder? Is it the server's absolute path, or some kind of network-mapped drive?
Also, I assume you've reviewed your firewall settings and are allowing port 3050 inbound.

Unable to use RODM to connect to an Oracle database from R

I am trying to connect to an Oracle database from R.
I used RODM_open_dbms_connection(dsn, uid = "", pwd = ""), but it doesn't work, and I am not sure what kind of error it is.
Here is the error output from R:
> library(RODM)
Loading required package: RODBC
> DB <- RODM_open_dbms_connection(dsn="****", uid="****", pwd="****")
Error in typesR2DBMS[[driver]] <<- value[c("double", "integer", "character", :
  cannot change value of locked binding for 'typesR2DBMS'
Have you tried ROracle? After you get the Oracle Instant Client installed on your machine, connecting and fetching records from R looks like this:
library(ROracle)
library(data.table)  # for data.table()
con <- dbConnect(dbDriver("Oracle"), username = "username", password = "password", dbname = "dbname")
res <- dbSendQuery(con, "select * from schema.table")
dt <- data.table(fetch(res, n = -1))
I explored RODM_open_dbms_connection and commented out the call to setSqlTypeInfo(). After that I didn't receive that error.
Install the RODM package from source; only then can you edit the package.
