Socat - certificate rotation for mTLS connection - reload credential files interval

Use case:
I use socat to stream traffic between an app and the outside world via Squid (app->socat->Squid). To authenticate to Squid I use mTLS.
My socat usage:
socat -d -d tcp-listen:3128,reuseaddr,fork \
openssl-connect:<SQUID_IP>:3128,cert=client-cert-key.pem,cafile=squid-ca.crt,openssl-commonname=<SQUID_CN>-prd,keepalive
where the contents of the PEM and CRT files rotate.
Problem: if I put some garbage into the squid-ca.crt file, socat notices the change after a couple of seconds and logs errors:
socat[72] E SSL_connect(): error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
socat[72] N exit(1)
socat[9] N childdied(): handling signal 17
and when I restore the squid-ca.crt file, socat picks up the change after some time (several to several dozen seconds) and starts working again with the recovered data.
Is there an option to control how often socat probes these files?

With your command, socat waits for client connections and forks a new child process for each one. Only in these child processes is the OpenSSL module initialized and the certificate loaded.
So it is not a polling interval; it simply depends on when the next TCP connection arrives.
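You can see this in practice with a minimal sketch (Python, assuming the listener above runs on localhost:3128): every accepted connection makes socat fork a child, and only that child initializes OpenSSL and reads the credential files, so forcing a new connection is what makes a recovered squid-ca.crt take effect.
import socket

# Each accepted connection makes the socat listener fork a child; the child
# then initializes OpenSSL and reads cert/cafile, so a fresh connection is
# what picks up rotated credentials - there is no separate polling interval.
conn = socket.create_connection(("localhost", 3128))
conn.close()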

Related

rsyslogd does not write data to logfile when configured with TLS

I'm trying to set up rsyslog with TLS to forward specific records from /var/log/auth.log from host A to a remote server B.
The configuration file I wrote for rsyslog is the following:
$DefaultNetstreamDriverCAFile /etc/licensing/certificates/ca.pem
$DefaultNetstreamDriverCertFile /etc/licensing/certificates/client-cert.pem
$DefaultNetstreamDriverKeyFile /etc/licensing/certificates/client-key.pem
$InputFilePollInterval 10
#Read from the auth.log file and assign the tag "ssl-auth" for its messages
input(type="imfile"
      File="/var/log/auth.log"
      reopenOnTruncate="on"
      deleteStateOnFileDelete="on"
      Tag="ssl-auth")
$template auth_log, " %msg% "
# Send ssl traffic to server on port 514
if ($syslogtag == 'ssl-auth') then {
    action(type="omfwd"
           protocol="tcp"
           target="<ip#server>"
           port="514"
           template="auth_log"
           StreamDriver="gtls"
           StreamDriverMode="1"
           StreamDriverAuthMode="x509/name")
}
Using this configuration, when I SSH into host A from another host X for the first time, everything works fine; the file /var/log/auth.log is written and tcpdump shows traffic towards server B.
But from then on, it does not work anymore.
Even if I log out of host A and log in again, whenever I do, the file /var/log/auth.log is never written and no traffic appears in tcpdump.
The very strange thing is that if I remove TLS from the configuration, it works.
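One way to isolate whether the TLS layer itself is the problem is to make a test connection to server B's TLS syslog port outside rsyslog. A rough sketch in Python (server-b.example.com is a placeholder for server B; the certificate paths are taken from the configuration above):
import socket, ssl

# Build a context that trusts the CA and presents the client certificate,
# mirroring the $DefaultNetstreamDriver* settings above.
ctx = ssl.create_default_context(cafile="/etc/licensing/certificates/ca.pem")
ctx.load_cert_chain("/etc/licensing/certificates/client-cert.pem",
                    "/etc/licensing/certificates/client-key.pem")

with socket.create_connection(("server-b.example.com", 514)) as raw:
    with ctx.wrap_socket(raw, server_hostname="server-b.example.com") as tls:
        print("TLS established:", tls.version(), tls.cipher())
If this handshake fails, the problem is in the TLS setup itself; if it always succeeds, the issue is more likely in how rsyslog handles the forwarding action.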

ssh client to show server-supported algorithms

To check that none of the servers across a fleet support deprecated algorithms, I'm (programmatically) doing this:
telnet localhost 22
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SSH-2.0-OpenSSH_8.0p1 Ubuntu-6build1
SSH-2.0-Censor-SSH2
4&m����&F �V��curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1Arsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519lchacha20-poly1305#openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm#openssh.com,aes256-gcm#openssh.comlchacha20-poly1305#openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm#openssh.com,aes256-gcm#openssh.com�umac-64-etm#openssh.com,umac-128-etm#openssh.com,hmac-sha2-256-etm#openssh.com,hmac-sha2-512-etm#openssh.com,hmac-sha1-etm#openssh.com,umac-64#openssh.com,umac-128#openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1�umac-64-etm#openssh.com,umac-128-etm#openssh.com,hmac-sha2-256-etm#openssh.com,hmac-sha2-512-etm#openssh.com,hmac-sha1-etm#openssh.com,umac-64#openssh.com,umac-128#openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1none,zlib#openssh.comnone,zlib#openssh.comSSH-2.0-Censor-SSH2
Connection closed by foreign host.
This is supposed to be a list of supported algorithms for the various phases of setting up a connection (kex, host key, etc.). Every time I run it, I get a different piece of odd data at the start, always of a different length.
There's an nmap script, ssh2-enum-algos, which returns the data in its complete form, but I don't want to run nmap; I have a Go program which opens the port and sends the query, but it gets the same as telnet. What am I missing, and how do I fix it?
For comparison, here's the top few lines from the output of nmap script:
$ nmap --script ssh2-enum-algos super
Starting Nmap 7.80 ( https://nmap.org ) at 2019-12-27 22:15 GMT
Nmap scan report for super (192.168.50.1)
Host is up (0.0051s latency).
rDNS record for 192.168.50.1: supermaster
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
| ssh2-enum-algos:
| kex_algorithms: (12)
| curve25519-sha256
| curve25519-sha256@libssh.org
| ecdh-sha2-nistp256
| ecdh-sha2-nistp384
| ecdh-sha2-nistp521
Opening a TCP connection to port 22 (in Go, with net.Dial), then exchanging the identification strings, leaves us able to Read() from the connection. From there the data is in a standard format described by the RFC, and from it I can list the algorithms supported in each phase of an SSH connection. This is very useful for measuring what is actually being offered, rather than what appears to be configured (it's easy to configure sshd to use a different config file).
It's a useful thing to be able to do from a security POV.
Tested on every version of SSH I could find, from 1.x on a very old Solaris or AIX box to RHEL 8.1.
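For reference, here is a sketch of that approach in Python rather than Go (host and port are assumptions; the field layout follows RFC 4253's SSH_MSG_KEXINIT):
import socket, struct

HOST, PORT = "127.0.0.1", 22
FIELDS = ["kex_algorithms", "server_host_key_algorithms",
          "ciphers_c2s", "ciphers_s2c", "macs_c2s", "macs_s2c",
          "compression_c2s", "compression_s2c"]

with socket.create_connection((HOST, PORT)) as s:
    # Read the server's identification line(s) byte by byte so we do not
    # consume any of the binary packet that follows.
    line = b""
    while not line.startswith(b"SSH-"):
        line = b""
        while not line.endswith(b"\n"):
            line += s.recv(1)
    s.sendall(b"SSH-2.0-AlgoProbe\r\n")          # our identification string

    data = b""
    while len(data) < 4:
        data += s.recv(4096)
    pkt_len = struct.unpack(">I", data[:4])[0]   # binary packet length
    while len(data) < 4 + pkt_len:
        data += s.recv(4096)
    pad = data[4]
    payload = data[5:4 + pkt_len - pad]          # strip length and padding
    assert payload[0] == 20                      # SSH_MSG_KEXINIT

    off = 17                                     # message type + 16-byte cookie
    for name in FIELDS:
        n = struct.unpack(">I", payload[off:off + 4])[0]
        print(name, payload[off + 4:off + 4 + n].decode().split(","))
        off += 4 + n
The "odd data" at the start of the telnet output is simply this binary packet header and the random cookie printed raw.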
In some cases you can specify an algorithm to use, and if you specify one that is not supported the server will reply with a list of supported algorithms.
For example, to check for supported key exchange algorithms you can use:
ssh 127.0.0.1 -oKexAlgorithms=diffie-hellman-group1-sha1
diffie-hellman-group1-sha1 is insecure and should be missing from most modern servers. The server will probably respond with something like:
Unable to negotiate with 127.0.0.1 port 22: no matching key exchange method found. Their offer: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
Exit 255
Typing: "ssh -Q cipher | cipher-auth | mac | kex | key"
will give you a list of the algorithms supported by your client
Typing: "man ssh"
will let you see what options you can specify with the -o argument, including Cipher, MACs, and KexAlgorithms
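If you want to automate that trick across a fleet, a small sketch (Python; the host and the choice of weak algorithm are assumptions) can offer only a deprecated KEX and parse the server's "Their offer:" list from the error output. Note this only yields the list when the server actually rejects the offer:
import re
import subprocess

# Offer only a deprecated KEX; a modern server refuses and lists what it supports.
proc = subprocess.run(
    ["ssh", "-oKexAlgorithms=diffie-hellman-group1-sha1",
     "-oBatchMode=yes", "-oStrictHostKeyChecking=no", "127.0.0.1"],
    capture_output=True, text=True)

m = re.search(r"Their offer: (.*)", proc.stderr)
if m:
    print(m.group(1).split(","))
else:
    print("Server accepted the deprecated algorithm (or another error occurred).")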

OpenLDAP as a Proxy cache only, no local database

I am trying to get a local LDAP proxy cache running. The idea is this:
Currently a computer (A) sends all LDAP requests to a remote LDAP server (L).
Instead, there should be a proxy cache "server" running on A to act as an intermediary between A and L. The cache would store all queries and all their attributes (until it fills up, at which point it starts "recycling").
OpenLDAP's Proxy Cache Engine looks pretty good, but there is not much information about how to set it up. There is an example config file, but I cannot get it to work.
When connected to the internet, running this command will successfully bind me.
ldapwhoami -vvv -h localhost -D "CN=Melka Martin,OU=something,OU=else,(...),DC=int,DC=somedomain,DC=com" -x -w <passwd>
However, each following request still polls the remote LDAP server (as shown by sniffing the connection; and when the machine is disconnected from the internet, the local bind fails).
There is a lot of output from slapd, but the relevant lines are:
56449abd QUERY NOT ANSWERABLE
56449abd QUERY CACHEABLE
This is the current config file, which should cache all the bind requests
database ldap
suffix "dc=int,dc=somedomain,dc=com"
rootdn "cn=admin,dc=int,dc=somedomain,dc=com"
rootpw <something>
uri ldap://dc-04.int.somedomain.com:389
overlay pcache
pcache hdb 100000 1 1000 100
pcacheAttrset 0 *
pcacheTemplate (sn=) 0 3600
pcacheBind (sn=) 0 3600 sub dc=int,dc=somedomain,dc=com
cachesize 200
directory /var/lib/ldap
index objectClass eq
index cn eq,sub
I have created the /var/lib/ldap directory, added a default DB_CONFIG file in there and then edited the slapd.conf file. If there are more things to do to set it up properly, could you instruct me?
I am a little confused about the rootdn/rootpw directives. They are used to write into the remote LDAP server, correct?
Edit: Below here is the original issue, which was resolved by using the full proper DN.
As this is supposed to only be a proxy cache, I shouldn't need to set up a local database. So the config file looks like this:
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
moduleload pcache.la
database ldap
suffix "dc=int,dc=somedomain,dc=com"
rootdn "dc=int,dc=somedomain,dc=com"
uri ldap://dc-04.int.somedomain.com:389
overlay pcache
pcache hdb 100000 1 1000 100
pcacheAttrset 0 *
pcacheTemplate (sn=) 0 3600
cachesize 20
directory /var/lib/ldap
index objectClass eq
index cn eq,sub
Now I would expect any request to ldap://localhost to be forwarded to the remote LDAP server if it is not in the cache.
I use this command to test the auth on the remote server:
ldapwhoami -vvv -h dc-04.int.somedomain.com -p 389 -D melka@somedomain.com -x -w <passwd>
This works well; I get authenticated.
However, when I try to run the same command on localhost:
ldapwhoami -vvv -h localhost -p 389 -D melka@somedomain.com -x -w <passwd>
It fails, saying
ldap_initialize( ldap://localhost:389 )
ldap_bind: Invalid DN syntax (34)
additional info: invalid DN
Slapd is listening on localhost, netstat contains this line:
tcp 0 0 0.0.0.0:389 0.0.0.0:* LISTEN 10352/slapd
Is there something I am missing?
Thanks
melka@somedomain.com
That may be a DN in the target LDAP system, who knows, but it certainly isn't in OpenLDAP. You need to provide a proper Distinguished Name.
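For completeness, the same contrast can be shown programmatically; a sketch using the third-party Python ldap3 library against the local proxy (the DN, OU components, and password are placeholders taken from the question):
from ldap3 import Server, Connection

server = Server("ldap://localhost:389")

# Mirrors `-D melka@somedomain.com`: not a DN, so the bind is rejected
# with "invalid DN syntax" before anything is forwarded or cached.
bad = Connection(server, user="melka@somedomain.com", password="<passwd>")
print(bad.bind(), bad.result)

# Binding with the full Distinguished Name, as in the working ldapwhoami
# call above, is what the ldap backend (and pcacheBind) can work with.
good = Connection(server,
                  user="CN=Melka Martin,OU=something,DC=int,DC=somedomain,DC=com",
                  password="<passwd>")
print(good.bind(), good.result)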

Faking a FTP service

I am writing a piece of software which can fool Nmap into believing that GuildFTPd FTP Server is running on port 21. My python code so far looks like this:
import socket
s = socket.socket( )
s.bind(('', 21))
s.listen(1)
conn, addr = s.accept()
conn.send("220-GuildFTPd FTP Server (c) 1997-2002\r\n220-Version 0.999.14")
conn.close()
The Nmap regexp for matching this particular service is:
match ftp m|^220-GuildFTPd FTP Server \(c\) \d\d\d\d(-\d\d\d\d)?\r\n220-Version (\d[-.\w]+)\r\n| p/Guild ftpd/ v/$2/ o/Windows/
However when I scan the host which is running the script with Nmap the results are:
21/tcp open ftp?
How can this be? When I scan the real service with Nmap it identifies the service correctly.
First, you're missing a \r\n at the end of your fake response that the match line needs.
The other major issue is that your program only handles one connection, then closes. Nmap will first do a port scan, then send service fingerprinting probes. If you ran nmap as root (or Administrator on Windows), it would use a half-open TCP SYN scan, and your application wouldn't see the portscan as a connection, but otherwise it will accept the portscan, close the connection, and be unavailable for the service scan phase.
Here's a very basic adaptation of your script that can handle sequential connections (but not parallel), which is enough to fool Nmap:
import socket
s = socket.socket( )
s.bind(('', 21))
s.listen(1)
while True:
    conn, addr = s.accept()
    # Respond to every connection, including Nmap's initial port scan,
    # so the later service-fingerprinting probe still gets the banner.
    conn.send("220-GuildFTPd FTP Server (c) 1997-2002\r\n220-Version 0.999.14\r\n")
    conn.close()

Sending an email from R using the sendmailR package

I am trying to send an email from R, using the sendmailR package. The code below works fine when I run it on my PC, and I receive the email. However, when I run it on my MacBook Pro, it fails with the following error:
library(sendmailR)
from <- sprintf("<sendmailR@%s>", Sys.info()[4])
to <- "<myemail@gmail.com>"
subject <- "TEST"
sendmail(from, to, subject, body,
control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
Error in socketConnection(host = server, port = port, blocking = TRUE) :
cannot open the connection
In addition: Warning message:
In socketConnection(host = server, port = port, blocking = TRUE) :
ASPMX.L.GOOGLE.COM:25 cannot be opened
Any ideas as to why this would work on a PC but not a Mac? I turned the firewall off on both machines.
Are you able to send email via the command-line?
So, first of all, fire up a Terminal and then
$ echo "Test 123" | mail -s "Test" user@domain.com
Look into /var/log/mail.log, or better use
$ tail -f /var/log/mail.log
in a different window while you send your email. If you see something like
... setting up TLS connection to smtp.gmail.com[xxx.xx.xxx.xxx]:587
... Trusted TLS connection established to smtp.gmail.com[xxx.xx.xxx.xxx]:587:\
TLSv1 with cipher RC4-MD5 (128/128 bits)
then you succeeded. Otherwise, it means you have to configure your mailing system. I have used Postfix with Gmail for two years now, and I have never had a problem with it. Basically, you need to grab the Equifax certificate, Equifax_Secure_CA.pem, from here: http://www.geotrust.com/resources/root-certificates/. (They were using Thawte certificates before, but they changed last year.) Then, assuming you use Gmail:
Create relay_password in /etc/postfix and put a single line like this (with your correct login and password):
smtp.gmail.com login@gmail.com:password
then in a Terminal,
$ sudo postmap /etc/postfix/relay_password
to update Postfix lookup table.
Add the certificates in /etc/postfix/certs, or any folder you like, then
$ sudo c_rehash /etc/postfix/certs/
(i.e., rehash the certificates with Openssl).
Edit /etc/postfix/main.cf so that it includes the following lines (adjust the paths if needed):
relayhost = smtp.gmail.com:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_password
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may
smtp_tls_CApath = /etc/postfix/certs
smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
smtp_tls_session_cache_timeout = 3600s
smtp_tls_loglevel = 1
tls_random_source = dev:/dev/urandom
Finally, just reload the Postfix process, with e.g.
$ sudo postfix reload
(a combination of start/stop works too).
You can choose a different port for the SMTP, e.g. 465.
It's still possible to use SASL without TLS (the above steps are basically the same), but in both cases the main problem is that your login information is stored in a plain-text file... Also, should you want to use your MobileMe account, just replace the Gmail SMTP server with smtp.me.com.
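Once the relay works from the command line, the original R call can then point at the local Postfix (e.g. control=list(smtpServer="localhost")) instead of Google's MX. If you prefer to test the relay without R at all, here is a rough equivalent in Python (addresses are placeholders; it assumes Postfix is accepting mail on localhost:25):
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sendmailR@localhost"
msg["To"] = "myemail@gmail.com"
msg["Subject"] = "TEST"
msg.set_content("Test 123")

# Hand the message to the local Postfix, which relays it to smtp.gmail.com:587
# over TLS as configured above.
with smtplib.SMTP("localhost", 25) as smtp:
    smtp.send_message(msg)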
