Currently I'm having problems connecting to a local Neo4j instance with Ruby. I have the Mac version of Neo4j Desktop running. My Ruby version is 2.6.5 and my gem versions are:
neo4j (9.6.0)
neo4j-core (9.0.0)
neo4j version: Neo4j Desktop 1.2.3 for Mac using Neo4j 3.5.12
Here is the code that I use:
require 'neo4j/core/cypher_session/adaptors/http'
user = 'neo4j'
pass = 'asdf1234'
url = "https://localhost:7473"
options = {user: user, pass: pass}
neo4j_adaptor = Neo4j::Core::CypherSession::Adaptors::HTTP.new(url, options)
neo4j_session = Neo4j::Core::CypherSession.new(neo4j_adaptor)
result = neo4j_session.query("MATCH (n:blah) RETURN count(n)")
The error I get is:
Neo4j::Core::CypherSession::ConnectionFailedError: Faraday::ConnectionFailed: SSL peer certificate or SSH remote key was not OK
When I add the option to disable SSL like this, the error is the same:
options = {ssl: false, user: user, pass: pass}
When I switch to bolt like this:
require 'neo4j/core/cypher_session/adaptors/bolt'
user = 'neo4j'
pass = 'asdf1234'
url = "bolt://localhost:7687"
options = {user: user, pass: pass}
neo4j_adaptor = Neo4j::Core::CypherSession::Adaptors::Bolt.new(url, options)
neo4j_session = Neo4j::Core::CypherSession.new(neo4j_adaptor)
result = neo4j_session.query("MATCH (n:blah) RETURN count(n)")
The error becomes:
Net::TCPClient::ConnectionFailure: #connect Failed to connect to any of localhost:7687 after 0 retries. Net::TCPClient::ConnectionFailure: #connect SSL handshake failure with 'localhost[127.0.0.1]:7687': OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate)
When I turn SSL off like this:
options = {ssl: false, user: user, pass: pass}
The error changes to:
RuntimeError: Init did not complete successfully
Neo.ClientError.Security.Unauthorized
The client is unauthorized due to authentication failure.
My graph settings are at the defaults, with only these options enabled:
dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=1G
dbms.memory.pagecache.size=512m
dbms.connector.bolt.enabled=true
dbms.connector.http.enabled=true
dbms.jvm.additional=-XX:+UseG1GC
dbms.jvm.additional=-XX:-OmitStackTraceInFastThrow
dbms.jvm.additional=-XX:+AlwaysPreTouch
dbms.jvm.additional=-XX:+UnlockExperimentalVMOptions
dbms.jvm.additional=-XX:+TrustFinalNonStaticFields
dbms.jvm.additional=-XX:+DisableExplicitGC
dbms.jvm.additional=-Djdk.tls.ephemeralDHKeySize=2048
dbms.jvm.additional=-Djdk.tls.rejectClientInitiatedRenegotiation=true
dbms.windows_service_name=neo4j
dbms.jvm.additional=-Dunsupported.dbms.udc.source=desktop
Any ideas please?
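For reference, here is a minimal sketch of the same connection over plain HTTP on 7474 with the credentials embedded in the URL; this assumes the HTTP adaptor reads basic-auth credentials from the URL's user info (an assumption about this setup, not a verified fix), and it sidesteps the self-signed certificate served on 7473:
require 'neo4j/core/cypher_session/adaptors/http'

# Credentials embedded in the URL (assumption: the adaptor uses them for basic auth).
# Plain HTTP on 7474 avoids the self-signed certificate served on 7473.
url = 'http://neo4j:asdf1234@localhost:7474'

neo4j_adaptor = Neo4j::Core::CypherSession::Adaptors::HTTP.new(url)
neo4j_session = Neo4j::Core::CypherSession.new(neo4j_adaptor)
result = neo4j_session.query("MATCH (n:blah) RETURN count(n)")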
I have a server working that looks a little bit like this:
require "socket"
require "openssl"
require "thread"
listeningPort = Integer(ARGV[0])
server = TCPServer.new(listeningPort)
sslContext = OpenSSL::SSL::SSLContext.new
sslContext.cert = OpenSSL::X509::Certificate.new(File.open("cert.pem"))
sslContext.key = OpenSSL::PKey::RSA.new(File.open("priv.pem"))
sslServer = OpenSSL::SSL::SSLServer.new(server, sslContext)
puts "Listening on port #{listeningPort}"
loop do
  connection = sslServer.accept
  Thread.new {...}
end
When I connect with TLS 1.3 and provide a client cert, I can see that it's working when I verify the cert in the SSL context, but peer_cert is never set on the connection; only the context receives a session.
Do I need to upgrade to TLS manually to access the cert from the client?
The reason I want it is that I can restrict content or authenticate by looking at the cert, as the Gemini protocol allows.
After a lot of reading in the OpenSSL docs I found a solution:
I set sslContext.verify_mode = OpenSSL::SSL::VERIFY_PEER and add a verification callback:
sslContext.verify_callback = proc do |_a, _b|
  true
end
This behaves like VERIFY_NONE, but it does request the peer certificate (which it won't when the mode is set to VERIFY_NONE, as the documentation states: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_verify.html).
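Putting that together with the server snippet above, a minimal sketch of an accept loop that reads the client certificate (cert.pem/priv.pem are the files from the snippet above; 1965 is the default Gemini port):
require "socket"
require "openssl"

sslContext = OpenSSL::SSL::SSLContext.new
sslContext.cert = OpenSSL::X509::Certificate.new(File.open("cert.pem"))
sslContext.key = OpenSSL::PKey::RSA.new(File.open("priv.pem"))
# Request the client certificate, but accept whatever the client sends.
sslContext.verify_mode = OpenSSL::SSL::VERIFY_PEER
sslContext.verify_callback = proc { |_preverify_ok, _store_context| true }

server = TCPServer.new(1965) # default Gemini port
sslServer = OpenSSL::SSL::SSLServer.new(server, sslContext)

loop do
  connection = sslServer.accept
  client_cert = connection.peer_cert # populated now that the cert is requested
  puts client_cert.subject if client_cert
  connection.close
end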
When I am creating a secure connection, I am getting the following error:
OpenSSL::SSL::SSLError: certificate verify failed
connect at org/jruby/ext/openssl/SSLSocket.java:266
I get the error on the line @ssl_socket.connect.
Sample code:
@socket = TCPSocket.new(@uri.host, @port)
@ssl_context = OpenSSL::SSL::SSLContext.new
@ssl_context.cert = OpenSSL::X509::Certificate.new(File.open("certificate.crt"))
@ssl_context.key = OpenSSL::PKey::RSA.new(File.open("certificate.key"))
@ssl_context.ssl_version = :TLSv1_2_client
@ssl_context.verify_mode = OpenSSL::SSL::VERIFY_PEER
@ssl_socket = OpenSSL::SSL::SSLSocket.new(@socket, @ssl_context)
@ssl_socket.sync_close = true
@ssl_socket.connect
Any help on why I am getting this error on the @ssl_socket.connect line?
My aim is to connect to a secure websocket.
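For what it's worth, with VERIFY_PEER a "certificate verify failed" usually means the server's certificate isn't trusted by the context, rather than anything being wrong with the client cert/key. A hedged sketch of one way to address that, pointing the context at the CA that signed the server certificate (host, port and ca.pem are placeholders, not values from the question):
require "socket"
require "openssl"

host = "example.com" # placeholder for the websocket host
port = 443           # placeholder port

socket = TCPSocket.new(host, port)
ssl_context = OpenSSL::SSL::SSLContext.new
ssl_context.ssl_version = :TLSv1_2_client
ssl_context.verify_mode = OpenSSL::SSL::VERIFY_PEER
# Trust the CA that signed the server's certificate...
ssl_context.ca_file = "ca.pem"
# ...or fall back to the system trust store instead:
# store = OpenSSL::X509::Store.new
# store.set_default_paths
# ssl_context.cert_store = store

ssl_socket = OpenSSL::SSL::SSLSocket.new(socket, ssl_context)
ssl_socket.sync_close = true
ssl_socket.connect
The certificate.crt/certificate.key lines from the sample are only needed if the server requests client authentication; they don't affect verification of the server's certificate.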
I've got Traefik/Docker Swarm/Let's Encrypt/Consul set up, and it's been working fine. It successfully got certificates for the domains admin.domain.tld, registry.domain.tld and staging.domain.tld, but now that I've tried adding containers serving domain.tld and matomo.domain.tld, those aren't getting any certificates (the browser warns of a self-signed certificate because it's the default Traefik certificate).
My Traefik configuration (that's being uploaded to Consul):
debug = false
logLevel = "DEBUG"
insecureSkipVerify = true
defaultEntryPoints = ["https", "http"]
[entryPoints]
[entryPoints.ping]
address = ":8082"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[traefikLog]
filePath = '/var/log/traefik/traefik.log'
format = 'json'
[accessLog]
filePath = '/var/log/traefik/access.log'
format = 'json'
[accessLog.fields]
defaultMode = 'keep'
[accessLog.fields.headers]
defaultMode = 'keep'
[accessLog.fields.headers.names]
"Authorization" = "drop"
[retry]
[api]
entryPoint = "traefik"
dashboard = true
debug = false
[ping]
entryPoint = "ping"
[metrics]
[metrics.influxdb]
address = "http://influxdb:8086"
protocol = "http"
pushinterval = "10s"
database = "metrics"
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "domain.tld"
watch = true
exposedByDefault = false
network = "net_web"
swarmMode = true
[acme]
email = "my#mail.tld"
storage = "traefik/acme/account"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
Possibly related: in traefik.log I repeatedly (as in almost once per second) get the following, but only for the registry subdomain. It sounds like an issue persisting the data to Consul, but there are no errors indicating such a problem.
{"level":"debug","msg":"Looking for an existing ACME challenge for registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"Looking for provided certificate to validate registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"No provided certificate found for domains registry.domain.tld, get ACME certificate.","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"ACME got domain cert registry.domain.tld","time":"2019-07-07T11:37:23Z"}
Update: I managed to find this line in the log:
{"level":"error","msg":"Error getting ACME certificates [matomo.domain.tld] : cannot obtain certificates: acme: Error -\u003e One or more domains had a problem:\n[matomo.domain.tld] acme: error: 400 :: urn:ietf:paramsacme:error:connection :: Fetching http://matomo.domain.tld/.well-known/acme-challenge/WJZOZ9UC1aJl9ishmL2ACKFbKoGOe_xQoSbD34v8mSk: Timeout after connect (your server may be slow or overloaded), url: \n","time":"2019-07-09T16:27:43Z"}
So it seems the issue is the challenge failing because of a timeout. Why the timeout though?
Update 2: More log entries:
{"level":"debug","msg":"Looking for an existing ACME challenge for staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"Looking for provided certificate to validate staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No provided certificate found for domains staging.domain.tld, get ACME certificate.","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No certificate found or generated for staging.domain.tld","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"http: TLS handshake error from 10.255.0.2:51981: remote error: tls: unknown certificate","time":"2019-07-10T19:38:34Z"}
But then, after a couple of minutes to an hour, it works (for two domains so far).
Not sure if it's a feature or a bug, but removing the following HTTP-to-HTTPS redirect solved it for me:
[entryPoints.http.redirect]
entryPoint = "https"
I am facing a problem while connecting to an SSL-enabled PostgreSQL server from Windows. I am getting the following error:
Error:
Error in postgresqlNewConnection(drv, …) :
RS-DBI driver: (could not connect ip:80 on dbname "all": sslmode value "require" invalid when SSL support is not compiled in.
Commands I have used:
install.packages("RPostgreSQL")
install.packages("rstudioapi")
require("RPostgreSQL")
require("rstudioapi")
drv <- dbDriver("PostgreSQL")
pg_dsn = paste0(
'dbname=', "all", ' ',
'sslmode=require')
con <- dbConnect(drv,
dbname = pg_dsn,
host = "ip",
port = 80,
user = "abcd",
password = rstudioapi::askForPassword("Database password"))
You need to use a PostgreSQL client shared library (libpq.dll) that was built with SSL support.
I'm trying to run the following code (taken from Codecademy) on CentOS 6.2:
require 'rubygems'
require 'oauth'
# Change the following values to those provided on dev.twitter.com
# The consumer key identifies the application making the request.
# The access token identifies the user making the request.
consumer_key = OAuth::Consumer.new(
"MY_KEY",
"MY_SECRET")
access_token = OAuth::Token.new(
"STRING1",
"STRING2")
# All requests will be sent to this server.
baseurl = "https://api.twitter.com"
# The verify credentials endpoint returns a 200 status if
# the request is signed correctly.
address = URI("#{baseurl}/1.1/account/verify_credentials.json")
# Set up Net::HTTP to use SSL, which is required by Twitter.
http = Net::HTTP.new address.host, address.port
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
# Build the request and authorize it with OAuth.
request = Net::HTTP::Get.new address.request_uri
request.oauth! http, consumer_key, access_token
# Issue the request and return the response.
http.start
response = http.request request
puts "The response status was #{response.code}"
and get the following error message:
/usr/lib/ruby/1.8/net/http.rb:586:in `connect': SSL_connect returned=1
errno=0 state=SSLv3 read server certificate B: certificate verify
failed (OpenSSL::SSL::SSLError)
The keys have been omitted (they are, after all, secret), but I'm using the correct ones.
The necessary gems are installed.
What might the problem be?
http = Net::HTTP.new address.host, address.port
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
...
You also need:
http.ca_file = File.join(File.dirname(__FILE__), "ca-cert.pem")
Since it's Twitter:
$ openssl s_client -connect api.twitter.com:443
CONNECTED(00000003)
depth=1 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU = Terms of use at https://www.verisign.com/rpa (c)10, CN = VeriSign Class 3 Secure Server CA - G3
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
0 s:/C=US/ST=California/L=San Francisco/O=Twitter, Inc./OU=Twitter Security/CN=api.twitter.com
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
...
You need the top issuer (i: at level 1), which is VeriSign Class 3 Public Primary Certification Authority - G5. You can get that from Public Root CA - VeriSign. The filename is PCA-3G5.pem.
After you download the root, you can then run s_client again and the server certificate will verify:
$ openssl s_client -connect api.twitter.com:443 -CAfile PCA-3G5.pem
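Back in Ruby, the downloaded root can then be handed to Net::HTTP so that VERIFY_PEER succeeds. A small sketch, assuming PCA-3G5.pem sits next to the script (without the OAuth signature Twitter will return an error status, but the point here is that the TLS handshake now verifies):
require "net/http"
require "openssl"

address = URI("https://api.twitter.com/1.1/account/verify_credentials.json")

http = Net::HTTP.new address.host, address.port
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
# Verify the chain against the downloaded VeriSign root.
http.ca_file = File.join(File.dirname(__FILE__), "PCA-3G5.pem")

http.start
response = http.request Net::HTTP::Get.new(address.request_uri)
puts "The response status was #{response.code}"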