I want to write a simple server in Ruby that returns a different TLS certificate depending on the hostname.
Currently I wrap a TCPServer with an SSLContext and give the SSLContext a certificate and key. That certificate is then used for all connections, regardless of the hostname.
context = OpenSSL::SSL::SSLContext.new
context.min_version = :TLS1_2
context.add_certificate cert, key
serv = TCPServer.new host, port
secure = OpenSSL::SSL::SSLServer.new(serv, context)
Thread.new(secure.accept) do |conn|
  # do stuff
end
Instead, a different certificate should be sent depending on the SNI hostname. How can I implement this?
You can use servername_cb:
context.servername_cb = lambda do |_, name|
  ctx = OpenSSL::SSL::SSLContext.new
  # load certificate and key for name
  ctx.add_certificate cert[0], cert[1]
  return ctx
end
Alternatively, you can use the already existing Context:
context.servername_cb = lambda do |socket, name|
  ctx = socket.context
  # load certificate and key for name
  ctx.add_certificate cert[0], cert[1]
  return ctx
end
The callback assigned to servername_cb is called during the TLS handshake. It is passed the SSLSocket and the requested server name as arguments, and it is expected to return an SSLContext set up with the corresponding certificate.
https://ruby-doc.org/3.1.2/exts/openssl/OpenSSL/SSL/SSLContext.html#attribute-i-servername_cb
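For completeness, here is how the pieces might fit together — a minimal sketch, assuming a CERTS table mapping SNI hostnames to certificate/key pairs (the hostnames, file names, and port are illustrative, not from the original answer):

require "socket"
require "openssl"

# Hypothetical lookup table: SNI hostname => [certificate, key]
CERTS = {
  "a.example.com" => [
    OpenSSL::X509::Certificate.new(File.read("a.pem")),
    OpenSSL::PKey::RSA.new(File.read("a.key"))
  ],
  "b.example.com" => [
    OpenSSL::X509::Certificate.new(File.read("b.pem")),
    OpenSSL::PKey::RSA.new(File.read("b.key"))
  ]
}
DEFAULT = CERTS["a.example.com"]

context = OpenSSL::SSL::SSLContext.new
context.min_version = :TLS1_2
# Fallback certificate for clients that do not send SNI
context.add_certificate DEFAULT[0], DEFAULT[1]

context.servername_cb = lambda do |_, name|
  cert, key = CERTS.fetch(name, DEFAULT)
  ctx = OpenSSL::SSL::SSLContext.new
  ctx.min_version = :TLS1_2
  ctx.add_certificate cert, key
  ctx
end

serv = TCPServer.new "0.0.0.0", 4433
secure = OpenSSL::SSL::SSLServer.new(serv, context)

loop do
  Thread.new(secure.accept) do |conn|
    # do stuff
    conn.close
  end
end

Returning a fresh SSLContext from the callback (the first variant above) avoids mutating the context that concurrent handshakes share.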
I have code to login to my email account to fetch recent emails:
def fetchRecentEmail(emailAddr, emailPassword, timeout=120):
    host = fetch_imap_server(emailAddr)  # e.g. 'outlook.office365.com'
    with IMAP4_SSL(host) as session:
        status, _ = session.login(emailAddr, emailPassword)
        if status == 'OK':
            # fetch most recent message
            status, messageData = session.select("Inbox")
            :
I'm trying to tweak it to go through a proxy.
ref: How can I fetch emails via POP or IMAP through a proxy?
ref: https://gist.github.com/sstevan/efccf3d5d3e73039c21aa848353ff52f
In each of the above resources, the proxy is of clean form IP:PORT.
However my proxy is of the form USER:PASS:HOST:PORT.
The proxy works:
USER = 'Pp7fwti5n-res-any-sid-' + random8Digits()
PASS = 'abEDxts7v'
HOST = 'gw.proxy.rainproxy.io'
PORT = 5959

proxy = f'{USER}:{PASS}@{HOST}:{PORT}'
proxies = {
    'http': 'http://' + proxy,
    'https': 'http://' + proxy
}
response = requests.get(
    'https://ip.nf/me.json',
    proxies=proxies, timeout=15
)
The following code looks like it should work, but errors:
HOST = 'outlook.office365.com'
IMAP_PORT = 963
PROXY_TYPE = 'http' # rainproxies are HTTP
mailbox = SocksIMAP4SSL(
    host=HOST,
    port=IMAP_PORT,
    proxy_type=PROXY_TYPE,
    proxy_addr=URL,
    proxy_port=PORT,
    username=USER,
    password=PASS
)
emailAddress, emailPassword = EMAIL.split(',')
mailbox.login(emailAddress, emailPassword)
typ, data = mailbox.list()
print(typ)
print(data)
I needed to add a timeout arg/param in 2 places to get the code to run:
def _create_socket(self, timeout=None):
    sock = SocksIMAP4._create_socket(self, timeout)
    server_hostname = self.host if ssl.HAS_SNI else None
    return self.ssl_context.wrap_socket(
        sock, server_hostname=server_hostname
    )

def open(self, host='', port=IMAP4_PORT, timeout=None):
    SocksIMAP4.open(self, host, port, timeout)
Rather confusing that nobody else seems to have flagged that in the gist.
But it still won't work.
If I use any number other than 443 for IMAP_PORT I get this error:
GeneralProxyError: Socket error: 403: Forbidden
[*] Note: The HTTP proxy server may not be supported by PySocks (must be a CONNECT tunnel proxy)
And if I use 443, I no longer get an error, but the mailbox = SocksIMAP4SSL(...) call never completes.
So I am still far from a working solution.
I am hoping to run this code simultaneously on 2 CPU cores, and I don't understand the implications of using port 443. Does that mean that no other process on my system can use that port? And if this code uses the port simultaneously in two processes, will there be a conflict?
Maybe you can try monkeypatching socket.socket with PySocks:
import socket
import socks

# Every socket created afterwards (including the one inside IMAP4_SSL)
# is routed through the configured proxy.
socks.set_default_proxy(socks.SOCKS5, HOST, PORT, True, USER, PASS)
socket.socket = socks.socksocket
Then check whether your IMAP traffic is going through the given proxy.
I have a server working that looks a little bit like this:
require "socket"
require "openssl"
require "thread"
listeningPort = Integer(ARGV[0])
server = TCPServer.new(listeningPort)
sslContext = OpenSSL::SSL::SSLContext.new
sslContext.cert = OpenSSL::X509::Certificate.new(File.open("cert.pem"))
sslContext.key = OpenSSL::PKey::RSA.new(File.open("priv.pem"))
sslServer = OpenSSL::SSL::SSLServer.new(server, sslContext)
puts "Listening on port #{listeningPort}"
loop do
  connection = sslServer.accept
  Thread.new {...}
end
When I connect with TLS 1.3 and provide a client cert, I can see that it works when I verify the cert in the SSL context, but peer_cert is never set on the connection; only the context receives a session.
Do I need to upgrade to TLS manually to access the client's cert?
The reason I want this: on the Gemini protocol I can restrict content or authenticate clients by looking at their cert.
After a lot of reading in the OpenSSL docs I found a solution:
I set sslContext.verify_mode = OpenSSL::SSL::VERIFY_PEER and added a verification callback:
sslContext.verify_callback = proc do |_a, _b|
  true
end
This behaves like VERIFY_NONE, but it does request the peer certificate (which the server will not do when the mode is set to VERIFY_NONE, as the documentation states: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_verify.html).
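Putting this together with the server from the question, a minimal sketch (assuming the same cert.pem and priv.pem; port 1965 is Gemini's default, and the subject logging is illustrative):

require "socket"
require "openssl"

sslContext = OpenSSL::SSL::SSLContext.new
sslContext.cert = OpenSSL::X509::Certificate.new(File.open("cert.pem"))
sslContext.key = OpenSSL::PKey::RSA.new(File.open("priv.pem"))

# Request a client certificate, but accept any of them:
sslContext.verify_mode = OpenSSL::SSL::VERIFY_PEER
sslContext.verify_callback = proc { |_preverify_ok, _store_ctx| true }

sslServer = OpenSSL::SSL::SSLServer.new(TCPServer.new(1965), sslContext)

loop do
  connection = sslServer.accept
  Thread.new do
    cert = connection.peer_cert # nil if the client sent no certificate
    if cert
      # restrict content or authenticate based on cert.subject here
      puts "client: #{cert.subject}"
    end
    connection.close
  end
end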
I've got Traefik/Docker Swarm/Let's Encrypt/Consul set up, and it's been working fine. It managed to successfully get certificates for the domains admin.domain.tld, registry.domain.tld and staging.domain.tld, but now that I've added containers serving domain.tld and matomo.domain.tld, those aren't getting any certificates (the browser warns of a self-signed certificate because it's the default Traefik certificate).
My Traefik configuration (that's being uploaded to Consul):
debug = false
logLevel = "DEBUG"
insecureSkipVerify = true
defaultEntryPoints = ["https", "http"]
[entryPoints]
[entryPoints.ping]
address = ":8082"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[traefikLog]
filePath = '/var/log/traefik/traefik.log'
format = 'json'
[accessLog]
filePath = '/var/log/traefik/access.log'
format = 'json'
[accessLog.fields]
defaultMode = 'keep'
[accessLog.fields.headers]
defaultMode = 'keep'
[accessLog.fields.headers.names]
"Authorization" = "drop"
[retry]
[api]
entryPoint = "traefik"
dashboard = true
debug = false
[ping]
entryPoint = "ping"
[metrics]
[metrics.influxdb]
address = "http://influxdb:8086"
protocol = "http"
pushinterval = "10s"
database = "metrics"
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "domain.tld"
watch = true
exposedByDefault = false
network = "net_web"
swarmMode = true
[acme]
email = "my#mail.tld"
storage = "traefik/acme/account"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
Possibly related: in traefik.log I repeatedly (almost once per second) get the following, but only for the registry subdomain. It sounds like an issue with persisting the data to Consul, but there are no errors indicating such a problem.
{"level":"debug","msg":"Looking for an existing ACME challenge for registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"Looking for provided certificate to validate registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"No provided certificate found for domains registry.domain.tld, get ACME certificate.","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"ACME got domain cert registry.domain.tld","time":"2019-07-07T11:37:23Z"}
Update: I managed to find this line in the log:
{"level":"error","msg":"Error getting ACME certificates [matomo.domain.tld] : cannot obtain certificates: acme: Error -\u003e One or more domains had a problem:\n[matomo.domain.tld] acme: error: 400 :: urn:ietf:paramsacme:error:connection :: Fetching http://matomo.domain.tld/.well-known/acme-challenge/WJZOZ9UC1aJl9ishmL2ACKFbKoGOe_xQoSbD34v8mSk: Timeout after connect (your server may be slow or overloaded), url: \n","time":"2019-07-09T16:27:43Z"}
So it seems the issue is the challenge failing because of a timeout. Why the timeout though?
Update 2: More log entries:
{"level":"debug","msg":"Looking for an existing ACME challenge for staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"Looking for provided certificate to validate staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No provided certificate found for domains staging.domain.tld, get ACME certificate.","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No certificate found or generated for staging.domain.tld","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"http: TLS handshake error from 10.255.0.2:51981: remote error: tls: unknown certificate","time":"2019-07-10T19:38:34Z"}
But then, after a couple of minutes to an hour, it works (for two domains so far).
Not sure if it's a feature or a bug, but removing the following HTTP-to-HTTPS redirect solved it for me:
[entryPoints.http.redirect]
entryPoint = "https"
So, I can pretty easily create an SSLSocket in asyncio:
# Create a socket
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setblocking(False)
# Connect
await loop.sock_connect(sock, host_address)  # type: ignore
# Create an SSL context
ssl_context = ssl.create_default_context(cafile=PROXY_CA_BUNDLE)
...
# Wrap the SSL context around the socket (sslconn below is the
# wrapped socket produced by the elided code above)
def do_handshake(loop, sock, waiter):
    sock_fd = sock.fileno()
    try:
        sock.do_handshake()
    except ssl.SSLWantReadError:
        loop.remove_reader(sock_fd)
        loop.add_reader(sock_fd, do_handshake, loop, sock, waiter)
        return
    except ssl.SSLWantWriteError:
        loop.remove_writer(sock_fd)
        loop.add_writer(sock_fd, do_handshake, loop, sock, waiter)
        return
    loop.remove_reader(sock_fd)
    loop.remove_writer(sock_fd)
    waiter.set_result(None)

waiter = loop.create_future()
do_handshake(loop, sslconn, waiter)
await waiter
The problem is that an SSLSocket breaks the non-blocking interface of the original socket library, so it is no longer compatible with other asyncio methods like asyncio.sock_sendall. Is there a way to instead wrap a socket and have it still respect the original socket interface?
I am trying to interact with the Cleverbot API with Lua. I've got a key and a username, so I tested with Postman and it worked perfectly. Then I tried to do the same thing with Lua but I'm having a weird error.
This is the code:
local https = require("ssl.https")
local string = require("string")
local ltn12 = require("ltn12")
local funcs = (loadfile "./libs/functions.lua")()

local function cleverbot(msg)
  local params = {
    ['user'] = 'SyR2nvN1cAxxxxxx',
    ['key'] = 'ckym8oDRNvpYO95GmTD14O9PuGxxxxxx',
    ['nick'] = 'cleverbot',
    ['text'] = tostring(msg),
  }
  local body = funcs.encode_table(params)
  local response = {}
  ok, code, headers, status = https.request({
    method = "POST",
    url = "https://cleverbot.io/1.0/ask/",
    headers = {
      ['Accept'] = '*/*',
      ['content-type'] = 'application/x-www-form-urlencoded',
      ['accept-encoding'] = 'gzip',
      ['content-length'] = tostring(#body),
    },
    source = ltn12.source.string(body),
    sink = ltn12.sink.table(response)
  })
  print(tostring(ok))
  print(tostring(code))
  print(tostring(headers))
  print(tostring(status))
  response = table.concat(response)
  if code ~= 200 then
    return
  end
  if response[1] ~= nil then
    return tostring(response)
  end
end
However, when I call this, this is what those 4 prints show:
nil
tlsv1 alert internal error
nil
nil
I tried to connect using HTTP instead, but this is what happens:
1
301
table: 0xe5f7d60
HTTP/1.1 301 Moved Permanently
response is always empty. Please, what am I doing wrong?
Thanks!
My strong suspicion is that the target host (cleverbot.io) insists on receiving a hostname via SNI (Server Name Indication), which the SSL library you use does not send. Usually servers fall back to a default site in that case, but of course they are free to let the handshake fail instead. That seems to be what Cloudflare (which cleverbot.io is hosted on or proxied through) does.
Unfortunately there is no easy way to fix this, unless the underlying SSL libraries are changed to use SNI with hostname cleverbot.io for the SSL handshake.
You can see the difference with openssl s_client:
Fails:
openssl s_client -connect cleverbot.io:443 -tls1_1
Succeeds:
openssl s_client -servername cleverbot.io -connect cleverbot.io:443 -tls1_1
This means that not only do the underlying SSL libraries have to support SNI, they also have to be told which server name to use by the Lua binding layer in between. LuaSec, for example, does not make use of SNI currently, AFAIK.
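For illustration only (it does not fix the Lua code): in bindings that do expose the hook, sending SNI is a single call before the handshake. A sketch with Ruby's OpenSSL bindings, assuming cleverbot.io still behaves as described:

require "socket"
require "openssl"

sock = TCPSocket.new("cleverbot.io", 443)
ssl = OpenSSL::SSL::SSLSocket.new(sock, OpenSSL::SSL::SSLContext.new)
ssl.hostname = "cleverbot.io" # send SNI; omit this and the handshake fails as above
ssl.connect
puts ssl.peer_cert.subject
ssl.close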