We have a program called server.exe which starts a WebSocket server (ws, wss) on the client's computer.
Its main purpose is to accept connections from the browser (127.0.0.1) and send data to it. It uses the OpenSSL DLLs (1.0.2.20).
Problem: after Windows starts up, server.exe does not work. It does not accept secure connections.
Debug Log with errors:
10.12.2019_16:11:09:0861 <<< ID = 728, msg: SSL library error during handshake on fd = 728 error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
10.12.2019_16:11:09:0876 <<< ID = 592, msg: SSL library error during handshake on fd = 592 error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
10.12.2019_16:11:09:0876 <<< ID = 776, msg: SSL library error during handshake on fd = 776 error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
But if we simply restart server.exe, everything starts working fine!
If we launch server.exe from a .bat file with a 5-second delay, it also works.
Why? How can we solve this problem?
Fixed.
server.exe could not find the path to the OpenSSL DLLs.
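A rough sketch of the delayed .bat launch mentioned above (the install path is hypothetical; the point is to start server.exe from the folder that contains the OpenSSL DLLs, after a short delay):
@echo off
rem Wait a few seconds after logon/startup
timeout /t 5 /nobreak >nul
rem Run server.exe from its install folder so the OpenSSL DLLs next to it are found
cd /d "C:\Program Files\MyApp"
start "" server.exe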
I am trying to access a 6.x ES instance using the High Level REST Client 6.7.2.
Access to this ES instance is provided to me via a hostname (https://****.azureedge.net), username, and password.
My Spring Boot application gets data from the same ES without issues when it runs from my dev environment (IDE), but it throws an SSLHandshakeException as soon as I run it from a Docker container (from my development machine or a K8s cluster in the cloud).
The container is built from the base image FROM debian:stretch-slim with OpenJDK 11.0.2 and the necessary Spring Boot modules.
I made some progress debugging with -Djavax.net.debug=all. It turns out that when running in the Docker image, only the first few steps of the usual SSL handshake happen:
Produced ClientHello handshake message
WRITE: TLS13 handshake, length = 2352
Raw write
Raw read (0000: 15 03 03 00 02 02 28 ......( )
READ: TLSv1.2 alert, length = 2
Received alert message (
"Alert": {
"level" : "fatal",
"description": "handshake_failure"
}
)
followed by an SSLHandshakeException:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:938)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1749)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1708)
at org.elasticsearch.client.SecurityClient.getSslCertificates(SecurityClient.java:508)
....
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
at java.base/sun.security.ssl.Alert.createSSLException(Unknown Source)
at java.base/sun.security.ssl.Alert.createSSLException(Unknown Source)
at java.base/sun.security.ssl.TransportContext.fatal(Unknown Source)
at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Unknown Source)
at java.base/sun.security.ssl.TransportContext.dispatch(Unknown Source)
at java.base/sun.security.ssl.SSLTransport.decode(Unknown Source)
at java.base/sun.security.ssl.SSLEngineImpl.decode(Unknown Source)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(Unknown Source)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source)
at java.base/javax.net.ssl.SSLEngine.unwrap(Unknown Source)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doUnwrap(SSLIOSession.java:271)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:316)
at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(SSLIOSession.java:509)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:120)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
When running from my local environment, the handshake looks uninterrupted:
Produced ClientHello handshake message
WRITE: TLS13 handshake, length = 460
Raw write
Raw read
READ: TLSv1.2 handshake, length = 155
Consuming ServerHello
ServerHello
Negotiated protocol version: TLSv1.3
Session initialized: Session(1560119025211|TLS_AES_256_GCM_SHA384)
WRITE: TLS13 change_cipher_spec, length = 1
Raw write
Raw read
READ: TLSv1.2 change_cipher_spec, length = 1
Consuming ChangeCipherSpec message
Raw read
READ: TLSv1.2 application_data, length = 27
...
Raw read
READ: TLSv1.2 application_data, length = 8469
Consuming server Certificate handshake message
... // here is the list of 3 certificates with "SHA256withRSA", "SHA256withRSA", "SHA1withRSA" signature algorithms
Found trusted certificate ⇢ SHA1withRSA
...
While running locally I noticed CN=Microsoft IT TLS CA 2, OU=Microsoft IT, O=Microsoft Corporation, L=Redmond, ST=Washington, C=US, as well as CN=Baltimore CyberTrust Root, OU=CyberTrust, O=Baltimore, C=IE, as issuers. Maybe this is important, but I guess it is expected given the ES host address (Azure).
Finally, I want to emphasize that I didn't need to do anything special to make this work in my macOS Java 11.0.2 development environment.
I have already tried the following, but it didn't change anything:
Changing the base Docker image from "slim" to the non-slim version
Using OpenJDK 11.0.1 or 11.0.2
Adding the cert from the host to the TrustStore the JVM uses at runtime (I checked in the Docker container that there is indeed one more cert, but given when the handshake failure happens, I imagine this is irrelevant)
Running the app with "-Dcom.sun.net.ssl.enableECC=false", "-Djdk.tls.client.protocols=TLSv1.3", "-Dhttps.protocols=TLSv1.3"; that didn't help
Interesting: curl from the Docker image with BasicAuth "talks" to the same URL without issues (the handshake completes), and a small query returns results. I guess that curl and the JVM use different sources of trusted CAs inside Docker, different handshake algorithms, and so on.
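For reference, the curl check from inside the container was along these lines (the path and the credential variables here are placeholders, not the real values):
curl -v -u "$ES_USER:$ES_PASS" "https://****.azureedge.net/_cluster/health"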
Thanks in advance for any help
TL;DR: enforcing TLSv1.2 for the client in the app made the handshake complete from inside Docker.
After a lot of trial and error I made it work. The following things didn't make any difference:
Using the non-"slim" Debian base image instead of "slim"
Using OpenJDK 11.0.2 instead of 11.0.1
Adding the host's certificate to the JVM TrustStore while building the Docker image, so it is available when the container starts
Enforcing com.sun.net.ssl.enableECC=false
Enforcing TLSv1.3 for https.protocols and/or jdk.tls.client.protocols
Enforcing TLSv1.2 for https.protocols
What fixed the handshake with the host was enforcing TLSv1.2 for the client by using -Djdk.tls.client.protocols=TLSv1.2 in the Dockerfile, so the app runs with this flag inside the container. This allowed the SSL handshake to complete, as it should have worked anyway. For some reason, the actual protocol version negotiation didn't work without forcing the lower protocol version on the client. The logs from the local and Docker environments don't show any difference, but this helped in Docker.
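As a minimal sketch of the relevant Dockerfile change (the jar path and entrypoint shape are assumptions, not my actual setup; the base image stays debian:stretch-slim with OpenJDK 11 as before):
# ... base image, OpenJDK install and app COPY as before ...
# Force the JVM TLS client to TLSv1.2 so the handshake with the ES host completes
ENTRYPOINT ["java", "-Djdk.tls.client.protocols=TLSv1.2", "-jar", "/opt/app/app.jar"]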
What helped me figure it out was:
Setting javax.net.debug=ssl:handshake, or the even more detailed javax.net.debug=all, so I could see the details of the handshake attempts
Confirming that "at least someone" could establish outbound communication from inside Docker by using curl to send the same request the app was trying, which worked because curl somehow figured out how to proceed with the handshake against this host
Pure luck
Thanks everyone for the support and ideas.
I found many questions quite similar to my problem, but they didn't solve it, so here I am asking for your help.
I am trying to get data from a web page with Perl LWP over https.
I can get data from almost every site I have tried, except the one I really need.
I am using Perl v5.18.2 under Windows x64.
This is my basic dummy example:
use strict;
use LWP::UserAgent;
use HTTP::Request;
use IO::Socket::SSL qw(debug3);
my $ua = LWP::UserAgent->new;
my $url = 'https://www.domainx.com:443';
my $req = HTTP::Request->new( GET => $url);
my $response = $ua->request($req);
print $response->status_line . "\n";
And the result from $response->status_line:
500 Can't connect to www.domainx.com:443
SSL debug output:
DEBUG: .../IO/Socket/SSL.pm:1890: new ctx 48125200
DEBUG: .../IO/Socket/SSL.pm:393: socket not yet connected
DEBUG: .../IO/Socket/SSL.pm:395: socket connected
DEBUG: .../IO/Socket/SSL.pm:413: ssl handshake not started
DEBUG: .../IO/Socket/SSL.pm:443: using SNI with hostname www.domainx.com
DEBUG: .../IO/Socket/SSL.pm:466: set socket to non-blocking to enforce timeout=180
DEBUG: .../IO/Socket/SSL.pm:479: Net::SSLeay::connect -> -1
DEBUG: .../IO/Socket/SSL.pm:489: ssl handshake in progress
DEBUG: .../IO/Socket/SSL.pm:499: waiting for fd to become ready: SSL wants a read first
DEBUG: .../IO/Socket/SSL.pm:519: socket ready, retrying connect
DEBUG: .../IO/Socket/SSL.pm:479: Net::SSLeay::connect -> -1
DEBUG: .../IO/Socket/SSL.pm:1359: SSL connect attempt failed with unknown error
DEBUG: .../IO/Socket/SSL.pm:485: fatal SSL error: SSL connect attempt failed with unknown error error:14092105:SSL routines:SSL3_GET_SERVER_HELLO:wrong cipher returned
DEBUG: .../IO/Socket/SSL.pm:1924: free ctx 48125200 open=48125200
DEBUG: .../IO/Socket/SSL.pm:1932: OK free ctx 48125200
From checking previous posts I tried applying ssl_opts => { verify_hostname => 0 }, but that didn't help.
If I try to connect to the same site with a browser (IE or Chrome), it works just fine.
Is this some certificate-related error, or what is going wrong here?
The actual site is www.firstcard.fi.
The server is heavily broken, as can also be seen from the report by SSLLabs. To get a connection to the server, one must work around these problems by using only the single good cipher the server offers:
my $ua = LWP::UserAgent->new;
$ua->ssl_opts(SSL_cipher_list => 'DES-CBC3-SHA');
Interestingly, this cipher is included in the default cipher list used by IO::Socket::SSL, but the server is too broken to deal properly with the correct ClientHello.
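Putting it together with the script from the question, a minimal sketch (fetching just the front page) would be:
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
# restrict the client to the single cipher this broken server handles correctly
$ua->ssl_opts(SSL_cipher_list => 'DES-CBC3-SHA');

my $response = $ua->get('https://www.firstcard.fi');
print $response->status_line, "\n";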
I have been looking at options to ship logs from Windows. I already have logstash set up, and I currently ship logs from Linux (CentOS) servers to my ELK stack using logstash-forwarder with SSL encryption.
For compliance reasons, encryption is pretty much essential in this environment.
I was hoping to use logstash-forwarder on Windows as well, but after compiling it with Go I ran into issues shipping Event Logs, and I found some people saying it wasn't possible because of file-locking issues, which the logstash-forwarder people appear to be working on, but I can't really wait.
Eventually I found out that nxlog seems to be able to ship logs in an encrypted format using SSL. I've found a few posts about similar topics, and while I've learned quite a bit about how to ship the logs across and how to set up nxlog, I am still at a loss as to how to set up logstash to accept the logs so I can process them.
I've asked in the #nxlog and #logstash IRC channels, and got confirmation in #nxlog that it is possible, but no further information on how it should be configured.
Anyway, I have taken the crt file created for use with my logstash-forwarder (I will create a new one if needed once I am happy this will work) and renamed it with a .pem extension, which I believe should work as it is readable in ASCII format. I have created the %CERTDIR% environment variable and put my file in there, and I have written the following nxlog config file based on the other articles I have read. I think it is right, but I am not 100% sure:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
# Enable json extension
<Extension json>
Module xm_json
</Extension>
# Nxlog internal logs
<Input internal>
Module im_internal
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
# Windows Event Log
<Input eventlog>
# Uncomment im_msvistalog for Windows Vista/2008 and later
Module im_msvistalog
# Uncomment im_mseventlog for Windows XP/2000/2003
# Module im_mseventlog
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
<Output sslout>
Module om_ssl
Host lumberjack.domain.com
Port 5000
CertFile %CERTDIR%/logstash-forwarder.crt
AllowUntrusted TRUE
OutputType Binary
</Output>
<Route 1>
Path eventlog, internal => sslout
</Route>
What I want to know is what input format to use in logstash. I have tried shipping logs into a lumberjack input (using the same config as my logstash-forwarders use) with the following config:
input {
lumberjack {
port => 5000
type => "logs"
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
But when the service started, I got the following in the nxlog log files:
2014-11-06 21:16:20 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:20 INFO nxlog-ce-2.8.1248 started
2014-11-06 21:16:21 INFO successfully connected to lumberjack.domain.com:5000
2014-11-06 21:16:22 INFO remote closed SSL socket
2014-11-06 21:16:22 INFO reconnecting in 1 seconds
2014-11-06 21:16:23 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:24 INFO reconnecting in 2 seconds
2014-11-06 21:16:24 ERROR couldn't connect to ssl socket on lumberjack.antmarketing.com:5000; No connection could be made because the target machine actively refused it.
When I turned the logging up to DEBUG, I saw a massive number of log lines flying through, but I think the key part is:
2014-11-06 21:20:18 ERROR Exception was caused by "rv" at om_ssl.c:532/io_err_handler(); [om_ssl.c:532/io_err_handler()] -; [om_ssl.c:501/om_ssl_connect()] couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
I assume this points to me using the wrong input method on logstash, but I guess it could also be an issue with my SSL certs or the way they are configured. I don't appear to be getting any logs generated on the logstash server at the time I make the connection from my Windows machine.
Thanks to b0ti for the help. There were a number of issues: my logstash config was crashing the service, but I also had problems with my nxlog setup, as well as with the way my SSL certs were set up.
I found this post about creating SSL certs, which covers the setup really nicely for self-signed certs used with a web service.
The main thing wrong with nxlog was, as b0ti pointed out, that I was trying to ship in binary, which only works when shipping to another nxlog server. I also noticed in the docs that the default for AllowUntrusted is false, so I just had to delete it once I was happy SSL was working.
<Output sslout>
Module om_ssl
Host lumberjack.domain.com
Port 5001
CAFile %CERTDIR%\nxlog-ca.crt
OutputType LineBased
</Output>
Create the CA key and secure it, as this needs to be kept secret (cd to /etc/pki/tls first):
certtool --generate-privkey --bits 2048 --outfile private/nxlog-ca.key
chown logstash:logstash private/nxlog-ca.key
chmod 600 private/nxlog-ca.key
Then create the self-signed CA cert, which will need to be transferred to your clients:
certtool --generate-self-signed --load-privkey private/nxlog-ca.key --bits 2048 --template nxlog-ca-rules.cnf --outfile certs/nxlog-ca.crt
The cnf file is standard, with only this option modified:
# Whether this is a CA certificate or not
ca
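As a rough sketch, the template file could look like this (the cn and lifetime below are placeholder values, not from my actual nxlog-ca-rules.cnf):
# Hypothetical nxlog-ca-rules.cnf
cn = "nxlog CA"
expiration_days = 3650
# Whether this is a CA certificate or not
ca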
The logstash input method:
input {
tcp {
port => 5001
type => "nxlogs"
ssl_cacert => "/etc/pki/tls/certs/nxlog-ca.crt"
ssl_cert => "/etc/pki/tls/certs/nxlog.crt"
ssl_key => "/etc/pki/tls/private/nxlog.key"
ssl_enable => true
format => 'json'
}
}
Generate the private key:
certtool --generate-privkey --bits 2048 --outfile private/nxlog.key
chown logstash:logstash private/nxlog.key
chmod 600 private/nxlog.key
Generate the CSR (Certificate Signing Request):
certtool --generate-request --bits 2048 --load-privkey private/nxlog.key --outfile private/nxlog.csr
Sign the cert with the CA private key:
certtool --generate-certificate --bits 2048 --load-request private/nxlog.csr --outfile certs/nxlog.crt --load-ca-certificate certs/nxlog-ca.crt --load-ca-privkey private/nxlog-ca.key --template nxlog-rules.cnf
Again, the only important additions over the standard cnf file inputs are:
# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key
# Whether this certificate will be used for a TLS client
tls_www_client
I've tested this and it works well; I just need to get the filters set up now.
The binary data format is nxlog-specific; you should only use it if you are sending to another nxlog instance.
OutputType Binary
If this doesn't help, check the logstash logs, since it's the remote end (logstash) that closes the connection.
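For example, something like this while nxlog is reconnecting (the log path is an assumption and depends on your install):
tail -f /var/log/logstash/logstash.log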
When I run the following from Windows 7 with Cygwin to connect to CFEngine version 3.4.2:
cf-agent -Bs 217.64.173.210
Challenge response from server 217.64.173.210/217.64.173.210 was incorrect!
I: Made in version 'not specified' of '/var/cfengine/inputs/update.cf' near line 47
!! Authentication dialogue with 217.64.173.210 failed
Challenge response from server 217.64.173.210/217.64.173.210 was incorrect!
I: Made in version 'not specified' of '/var/cfengine/inputs/update.cf' near line
and in /var/cfengine/inputs/update.cf, line 47 is:
47 : perms => m("600"),
On Cygwin, in the keys folder:
/var/cfengine/ppkeys
localhost.pub
localhost.priv
root-MD5=b8825ba0a0e7017e34b15766d3b3ac58 (which is also the shared key on the CFEngine server side)
On the CFEngine server side:
/var/cfengine/ppkeys/
localhost.priv
localhost.pub
root-MD5=b8825ba0a0e7017e34b15766d3b3ac58
With Regards
Sandeep
Did you also get the server to trust the client's key? Like so:
cf-key -t root-MD5=b8825ba0a0e7017e34b15766d3b3ac58
(on the server)
Also, try restarting cf-serverd in verbose mode with the -v switch on the server, and watch what error messages you get on that end.
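For example, on the server (assuming the default install prefix; stop the running daemon first):
/var/cfengine/bin/cf-serverd --no-fork --verbose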
I have been playing around with node.js and socket.io for the past few days. Everything works fine on my local machine (Windows, using IIS as the web server), but when I upload it to my remote server (Ubuntu box), I get security errors.
[trace] Warning: Failed to load policy file from http://localhost:8000/crossdomain.xml
[trace] *** Security Sandbox Violation ***
[trace] Connection to http://localhost:8000/socket.io/1/ halted - not permitted from http://****/virtualcinema/VirtualCinema.swf
[trace] Error #2044: Unhandled securityError:. text=Error #2048: Security sandbox violation: http://****/virtualcinema/VirtualCinema.swf cannot load data from http://localhost:8000/socket.io/1/.
The AS3 code it errors on is:
Security.loadPolicyFile("xmlsocket://localhost:10843");
socket = new FlashSocket("localhost:8000");
The policy file is being served correctly on port 10843, and I can fetch it fine at http://**:10843/ in my browser. Why is it trying to load the policy file on port 8000? That warning does not appear on my local build.
The socket.io code:
socket = io.listen(8000);
socket.configure(function()
{
socket.set("transports", ["flashsocket"]);
socket.set("log level", 2);
});
I'm confused as to why it resolves fine when I test it on a local machine but not on a remote one. Any help would be much appreciated :)
The crossdomain.xml I am using:
<cross-domain-policy>
<allow-access-from domain="*" to-ports="*"/>
</cross-domain-policy>
Fixed. I changed it from pointing to localhost to my server's external IP.
I had tried this before, but unfortunately the server had cached my SWF file and I did not realise it was fixed.
Security.loadPolicyFile("xmlsocket://****.com:10843");
socket = new FlashSocket("****.com:8000");