I am trying to run a microservice (based on Eclipse MicroProfile) on OpenLiberty (v20.0.0.1/wlp-1.0.36.cl200120200108-0300) on the Eclipse OpenJ9 VM, version 1.8.0_242-b08 (en_US).
I run the server from the official Docker image (open-liberty:kernel).
In my service I try to connect to another REST service via HTTPS:
Client client = ClientBuilder.newClient();
client.target("https://myservice.foo.com/").request(....);
This throws the following exception:
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
I already added the features 'transportSecurity-1.0' and 'ssl-1.0' to the server.xml file:
<featureManager>
<feature>jaxrs-2.1</feature>
<feature>microProfile-2.2</feature>
<feature>transportSecurity-1.0</feature>
<feature>ssl-1.0</feature>
</featureManager>
and I also tweaked the jvm.options file like this:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777
-Dhttps.protocols=TLSv11,TLSv12
-Djdk.tls.client.protocols=TLSv11,TLSv12
-Dcom.ibm.jsse2.overrideDefaultProtocol=TLSv11,TLSv12
But nothing gets rid of the exception.
What is the correct configuration for OpenLiberty to enable outgoing SSL connections?
Liberty doesn't trust anything over SSL by default, so unless the service you are connecting to uses an identical keystore/truststore file, or you've otherwise configured your service to trust that remote service in some way, you can get that exception. If this is the problem, something like this will probably be seen in messages.log as well:
com.ibm.ws.ssl.core.WSX509TrustManager E CWPKI0823E: SSL HANDSHAKE FAILURE: A signer with SubjectDN [CN=localhost, OU=oidcdemo_client, O=ibm, C=us] was sent from the host [localhost:19443]. The signer might need to be added to local trust store [/Users/tester/tmp/liberty/20003wlp/wlp/usr/servers/urlcheck/resources/security/key.p12], located in SSL configuration alias [defaultSSLConfig]. The extended error message from the SSL handshake exception is: [PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target].
How to manually patch up the truststore is documented here:
https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_add_trust_cert.html
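For example, a rough sketch of that manual patch-up from the command line (the host comes from the question; the keystore path, alias, and password are placeholders that must match what your server.xml and messages.log report):
# Grab the remote service's certificate
openssl s_client -connect myservice.foo.com:443 -showcerts </dev/null | openssl x509 -out myservice-foo.crt
# Import it as a trusted signer into the server's default PKCS12 keystore
keytool -importcert -noprompt -alias myservice-foo -file myservice-foo.crt \
  -keystore /path/to/wlp/usr/servers/yourServer/resources/security/key.p12 \
  -storetype PKCS12 -storepass yourKeystorePassword
A restart of the server may be needed before the new signer is picked up.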
but what you will probably want to do in a Docker environment is modify your images to either include a common keystore/truststore, or read one from somewhere outside (such as a Kubernetes secret). By default, each Docker image creates its own unique key/truststore, and they won't be able to "talk" over SSL.
If you only need to communicate with services that have a certificate signed by a well-known authority, you can add
ENV SEC_TLS_TRUSTDEFAULTCERTS=true
to your Dockerfile (20.0.0.3+) to enable that.
As mentioned by Bruce in the answer above, Liberty doesn't trust any certificates by default. If you are making outgoing connections from Liberty to a server, you either need to add that server's certificate to the truststore you have configured, OR you need to trust the JRE's cacerts if the remote endpoint is using a certificate from a well-known CA.
When you say you are using Let's Encrypt certificates, do you mean the remote end-point is using them, or your Liberty server is?
If the remote end-point is, most JREs include Let's Encrypt in their cacerts. If the Liberty server is using a certificate signed by Let's Encrypt, that doesn't really have an effect on the outgoing connection unless you are using mutual SSL authentication.
As an FYI, if you are using a certificate signed by Let's Encrypt in Liberty as the default certificate, we will be adding built-in support for the ACME protocol in a few releases. See here for progress: https://github.com/OpenLiberty/open-liberty/issues/9017
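If you want to check whether the JRE running Liberty already trusts Let's Encrypt, one quick (hedged) check is to list the JRE's default cacerts; the path below assumes a Java 8 layout and the default changeit password:
# Look for the ISRG / Let's Encrypt root in the JRE's default trust store
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit | grep -i isrg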
Related
NiFi is unable to connect to an HTTPS URL using the InvokeHTTP processor; no certificate is required to access the site via a browser (only user & pass).
The error observed is "Request Processing Failed: javax.net.ssl.SSLPeerUnverifiedException".
I have tried adding an SSL context with the Java truststore and the NiFi keystore, but it is not working. Kindly suggest.
When using InvokeHTTP to connect to an HTTPS URL, you will need to add an SSLContextService which InvokeHTTP can use to verify the remote server. The SSLContextService refers to a truststore which contains the public Certificate Authority certificate. For example, if connecting to Stack Overflow with NiFi, you would need the CN = ISRG Root X1, O = Internet Security Research Group, C = US certificate installed in a PKCS12 truststore, which is used by the SSLContextService. Another option is to use the truststore provided by Java, typically located at $JAVA_HOME/lib/security/cacerts, which will trust most publicly signed web domain certificates.
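For example, a rough sketch of building such a truststore with keytool (the file names, alias, and password are placeholders; isrg-root-x1.pem would be the CA certificate you downloaded):
# Put the public CA certificate (e.g. ISRG Root X1) into a PKCS12 truststore for the SSLContextService
keytool -importcert -noprompt -alias isrg-root-x1 -file isrg-root-x1.pem \
  -keystore nifi-truststore.p12 -storetype PKCS12 -storepass changeit
Then point the SSLContextService's truststore filename/password/type properties at that file (or at the Java cacerts file mentioned above).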
Please add more details of the error message if this still does not work.
I installed Elasticsearch via Packer and Ansible onto a machine image on GCP. I tried running elasticsearch-reset-password -u elastic to change the password. I think I'm getting the following error because the installation was done on a different IP address (the IP of the instance Packer launches to bake the machine image vs the IP of the launched instance).
WARN org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [10.206.0.10]; the server provided a certificate with subject name [CN=packer-62d379fb-f7c3-ca0f-471a-82185776ac77], fingerprint [eb5436427cb38928b3f16994bfdb8102ac5011be], no keyUsage and extendedKeyUsage [serverAuth]; the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [IP:10.128.0.20,DNS:localhost,DNS:packer-62d379fb-f7c3-ca0f-471a-82185776ac77,IP:0:0:0:0:0:0:0:1,IP:127.0.0.1,IP:fe80:0:0:0:4001:aff:fe80:14]; the certificate is issued by [CN=Elasticsearch security auto-configuration HTTP CA]; the certificate is signed by (subject [CN=Elasticsearch security auto-configuration HTTP CA] fingerprint [63fa2023ea0d36865d838d8d3bd17c5e96f8b684] {trusted issuer}) which is self-issued; the [CN=Elasticsearch security auto-configuration HTTP CA] certificate is trusted in this ssl context ([xpack.security.http.ssl (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/http.p12, password=<non-empty>, type=PKCS12, algorithm=PKIX}})])
java.security.cert.CertificateException: No subject alternative names matching IP address 10.206.0.10 found
The IP address of the instance I'm launching from the prebaked machine image is 10.206.0.8 instead of ...10.
This is what I get when I test TLS:
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
Enter host password for user 'elastic':
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I would like to get this ideal path to work but I'm at a loss for what to do. Seems like an opportunity to learn but I'm just scratching my head right now.
Working solutions that I have right now are:
run ES from Docker and disable xpack when creating the container with an env variable
pre-bake an install of deb package for es7 (predating the xpack auto config)
install es8 manually on each node I launch vs pre baking a machine image that is pre-installed and pre-configured
None of those are suitable paths forward, as they either circumvent the platform's new security conventions or throw a wrench in my automated-infrastructure goals.
Can I modify the generated certs to work on a newly launched instance from a pre-baked image with a different IP address?
This is still an issue; any help would be appreciated.
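In case it helps anyone reading this, one possible (untested) approach is to re-issue the HTTP certificate at launch time from the auto-configured CA with the new instance's address in the SANs; the paths, passwords, and output name below are placeholders and assume the CA key is still available as a PKCS12:
# Sketch only: re-issue the HTTP certificate with the correct SANs for the launched instance
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca /etc/elasticsearch/certs/http-ca.p12 --ca-pass "capassword" \
  --name instance --dns localhost --ip 10.206.0.8,127.0.0.1 \
  --out /etc/elasticsearch/certs/http-new.p12 --pass "kspassword"
xpack.security.http.ssl would then need to point at the new keystore.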
I am having an issue setting up TLS through a custom CA between Kibana and Enterprise Search. I have the default X-Pack security set up for the interconnection of my Elasticsearch nodes with both Kibana and Enterprise Search, which was done according to the following docs: minimal security, basic security, ssl/tls config. I can successfully run Enterprise Search over HTTP; however, my issue arises when I enable SSL/TLS for ent-search.
When I have HTTPS configured for ent-search using this doc, the server is "running"; however, I receive an error after boot, and Kibana throws an error when attempting to connect.
ent-search error (not corresponding with Kibana's hit to the ent-search hostname; this error arises shortly after ent-search is "starting successfully", but isn't fatal):
[2022-06-14T20:37:45.734+00:00][6081][4496][cron-Work::Cron::SendTelemetry][ERROR]: Exception:
Exception while performing Work::Cron::SendTelemetry.perform()!: Faraday::ClientError: PKIX path
building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid
certification path to requested target
Kibana error (directly corresponding to when I ping http://obfuscated-dns:5601/app/enterprise_search/overview)
[2022-06-14T20:43:51.772+00:00][ERROR][plugins.enterpriseSearch] Could not perform access check to
Enterprise Search: FetchError: request to https://obfuscated-dns:3002/api/ent/v2/internal/client_config
failed, reason: unable to get issuer certificate
The steps I took to generate said certificate were: I created a CSR on my server using elasticsearch-certutil csr along with a yml file which specified the distinguished name; I sent the unzipped CSR to my CA (DigiCert); I uploaded the signed certificate and intermediate certificate provided by DigiCert to my server; I used openssl to generate a keystore from the signed cert and the private key generated alongside the original CSR; then finally I converted the keystore to .jks format using keytool.
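Roughly, the sequence looked like this (file names and passwords here are placeholders):
# 1. Create the CSR bundle (the distinguished name lives in the yml)
elasticsearch-certutil csr --in instances.yml --out csr-bundle.zip
# 2. After DigiCert returned the signed cert + intermediate, build a PKCS12 keystore
openssl pkcs12 -export -in ent-search-signed.crt -certfile digicert-intermediate.crt \
  -inkey ent-search.key -name ent-search -out keystore.p12 -passout pass:pass
# 3. Convert to JKS for ent_search.ssl.keystore.path
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass pass \
  -destkeystore keystore.jks -deststoretype JKS -deststorepass pass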
From my understanding, the path of this keystore is what is needed in the configuration file for enterprise-search, and the intermediate cert is what is used in the Kibana certificate-authority config field (ca.pem). I have also tried to stuff both the signed and intermediate cert into the same .pem, as well as the private key, signed cert, and intermediate cert. Below are the relevant configurations:
kibana.yml
enterpriseSearch.host: https://obfuscated-dns:3002
enterpriseSearch.ssl.verificationMode: certificate
enterpriseSearch.ssl.certificateAuthorities:
- /path/ca.pem
enterprise-search.yml
ent_search.external_url: https://obfuscated-dns:3002
ent_search.listen_host: 0.0.0.0
ent_search.listen_port: 3002
ent_search.ssl.enabled: true
ent_search.ssl.keystore.path: "/path/keystore.jks"
ent_search.ssl.keystore.password: "pass"
ent_search.ssl.keystore.key_password: "pass"
I'm starting to feel like I fundamentally misunderstand something here. A lot of the jargon behind SSL/TLS certificates seems to lack standardization. While we are at it, what is a root cert in relation to what I have listed? Is it the intermediate cert? I see there is a master "root certificate" for the DigiCert CN I certified under, however I'm unsure where this fits in. The config variable "certificateAuthorities" doesn't document what this .pem file should contain specifically, and when searched, the concept of a certificate authority is never associated with file contents, but instead is simply abstracted to the entity which provides certification (duh).
To put it succinctly: What does this variable "certificateAuthorities" explicitly entail?
UPDATE 09/28/2022
I have now confirmed that SSL is working when calling enterprise-search from outside the VM it's running in. I can use its endpoint from my Flutter and React apps; however, Kibana is still throwing the error mentioned above. I have checked that the root/intermediate CA provided to Kibana's configuration is indeed the certificate linked with the signed cert provided to Enterprise Search, and even confirmed so using SSLPoke. This leaves me with the suspicion that perhaps Java is a bad actor in the mix. I've added the root/intermediate CA to the cacerts keystore in the ssl/java directory of the Linux VM, but still no luck. Any thoughts?
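For reference, the cacerts import was along these lines (the JRE path, alias, and file name are placeholders; changeit is the usual default password):
# Add the root/intermediate CA to the JVM's default trust store (adjust the path to the JRE actually in use)
keytool -importcert -noprompt -alias digicert-intermediate -file /path/ca.pem \
  -keystore /usr/lib/jvm/default-java/lib/security/cacerts -storepass changeit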
I have written a RESTful API project which is developed using Spring Boot, and I am using the embedded Tomcat and running a jar on a Linux server.
The APIs are live at:
https://api.arevogroup.com:8089/api/regions
and I can see the verified and correct SSL certificate, as shown in the given screenshot,
but I am getting this exception in Postman when I call these APIs.
These APIs are consumed by a Xamarin-based app which seems to work all good when consumed from an iPhone, but gives this same exception when the APIs are accessed via Android.
I guess the way I have generated the SSL certificate has some issues.
I have used a pfx file, and my SSL config in the properties file looks like this:
###SSL Key Info
security.require-ssl=true
server.ssl.key-store-password=PASSWORD
server.ssl.key-store=classpath:ssl_pfx.pfx
server.ssl.key-store-type=PKCS12
I have two questions. If I disable the SSL verification, would the communication still be encrypted or not? (A man-in-the-middle attack would still be possible, but the info would still be encrypted, right?)
If not, how can I fix this?
You can't disable the verification of the server certificate. No browser will allow you to do it, except on an exceptional basis (the user must confirm the exception). If the client disables the verification, then the communication will still be encrypted (i.e. no passive attack will be possible).
The errors you see are caused by a misconfiguration of your server.
Your certificate chain contains just the certificate for your server and lacks the intermediate certificate CN=Go Daddy Secure Certificate Authority - G2. You need to download it from Go Daddy (it is the one named gdig2.crt.pem) and add it to your keystore.
Refer to this question on how to do it.
Some browsers cache intermediate certificates and are able to verify your site even if one certificate is missing. However you should not rely on it.
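For example, one way to rebuild the PKCS12 so it carries the full chain (file names are placeholders; gdig2.crt.pem is the intermediate mentioned above):
# Re-export the keystore with the server cert, its private key, and the Go Daddy intermediate
openssl pkcs12 -export -in api_arevogroup_com.crt -inkey api_arevogroup_com.key \
  -certfile gdig2.crt.pem -name tomcat -out ssl_pfx.pfx -passout pass:PASSWORD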
security.require-ssl=true
server.ssl.key-store-password=PASSWORD
server.ssl.key-store=keystore.jks
server.ssl.key-store-provider=SUN
server.ssl.key-store-type=JKS
I used the JKS file instead of the pfx and it worked all good. Thought I'd share with others too.
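In case it helps, the pfx-to-JKS conversion can be done with keytool roughly like this (store passwords are placeholders):
# Convert the existing PKCS12 (.pfx) keystore into a JKS keystore
keytool -importkeystore -srckeystore ssl_pfx.pfx -srcstoretype PKCS12 -srcstorepass PASSWORD \
  -destkeystore keystore.jks -deststoretype JKS -deststorepass PASSWORD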
I'm trying to run the Liquibase update command using:
liquibase --driver="com.ibm.db2.jcc.DB2Driver" --changeLogFile="masterchangelog.xml " --url="jdbc:db2://localhost:60001/SMDINTDB:retrieveMessageFromServerOnGetMessage=true;sslConnection=true;" --username="" --password="" --classpath=/home/db2inst1/sqllib/java/db2jcc4.jar validate
But I'm getting the following error. Can anyone help me resolve this issue? How can I specify the location of the certs?
Unexpected error running Liquibase: com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2030][11211][4.26.14] A communication error occurred during operations on the connection's underlying socket, socket input stream,
or socket output stream. Error location: Reply.fill() - socketInputStream.read (-1). Message: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target. ERRORCODE=-4499, SQLSTATE=08001
Several prerequisites exist for on-premises Db2-LUW SSL connectivity with JDBC.
Liquibase works correctly with SSL connections to on-premises Db2-LUW if all the prerequisite configuration is completed successfully. Here are some tips:
The target Db2-LUW instance has to be already configured for SSL as per the IBM Db2 documentation here. If you are using a cloud-based Db2 service from IBM, then this is already done for you, although you may need to use the IBM-supplied root cert on the client side.
Your client-side JRE needs to be configured per IBM's Db2-LUW documentation here. I use the IBM JRE (as supplied with the Db2-LUW server) for Liquibase.
For on-premises Db2-LUW, your client side needs a Java keystore created, with the server's certificate imported into it (keytool -importcert -file /your/path/to/server_certificate ...).
For your specific error, for on-premises Db2-LUW you might try additional options in the connection string to tell the JRE how to access the client-side keystore into which you already imported the server certificate: specifically sslTrustStoreLocation=/path/to/.keystore;sslTrustStorePassword=whatever; (see the sketch below). Note that I did not need these options when using Db2-on-cloud (Liquibase worked correctly with SSL to Db2-on-cloud once I added DigiCertGlobalRootCA.crt to my keystore, although even that may be unnecessary), but I did not try Db2-warehouse-on-cloud as I don't use that service.
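For illustration, with placeholder paths and passwords, the import and the resulting command might look like this (server.arm is assumed to be the certificate exported from the Db2 server):
# Import the Db2 server certificate into a client-side Java keystore
keytool -importcert -noprompt -alias db2server -file server.arm \
  -keystore /home/db2inst1/.keystore -storepass whatever
# Then reference that keystore in the JDBC URL
liquibase --driver="com.ibm.db2.jcc.DB2Driver" --changeLogFile="masterchangelog.xml" \
  --url="jdbc:db2://localhost:60001/SMDINTDB:retrieveMessageFromServerOnGetMessage=true;sslConnection=true;sslTrustStoreLocation=/home/db2inst1/.keystore;sslTrustStorePassword=whatever;" \
  --username="" --password="" --classpath=/home/db2inst1/sqllib/java/db2jcc4.jar validate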