THIS IS STILL AN ISSUE. ANY HELP WOULD BE APPRECIATED.
I am having an issue setting up TLS through a custom CA between Kibana and Enterprise Search. I have the default X-Pack security set up for the interconnection of my Elasticsearch nodes with both Kibana and Enterprise Search, done according to the following docs: minimal security, basic security, ssl/tls config. I can successfully run Enterprise Search over HTTP; my issue arises when I enable SSL/TLS for ent-search.
With HTTPS configured for ent-search using this doc, the server is "running", but I receive an error after boot and Kibana throws an error when attempting to connect.
ent-search error (not corresponding to Kibana's hit to the ent-search hostname; this error is raised shortly after ent-search "starts successfully", but isn't fatal)
[2022-06-14T20:37:45.734+00:00][6081][4496][cron-Work::Cron::SendTelemetry][ERROR]: Exception:
Exception while performing Work::Cron::SendTelemetry.perform()!: Faraday::ClientError: PKIX path
building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid
certification path to requested target
Kibana error (directly corresponding to when I ping http://obfuscated-dns:5601/app/enterprise_search/overview)
[2022-06-14T20:43:51.772+00:00][ERROR][plugins.enterpriseSearch] Could not perform access check to
Enterprise Search: FetchError: request to https://obfuscated-dns:3002/api/ent/v2/internal/client_config
failed, reason: unable to get issuer certificate
The steps I took to generate said certificate were:
create a CSR on my server using elasticsearch-certutil csr along with a yml file which specified the distinguished name
send the unzipped CSR to my CA (Digicert)
upload the signed certificate and intermediate certificate provided by Digicert to my server
use openssl to generate a keystore from the signed cert and the private key generated alongside the original CSR
finally, convert the keystore to .jks format using keytool
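Roughly, those last two steps looked like this (a sketch; file names are stand-ins for mine, and "pass" matches the config below):
# Bundle the signed cert, the intermediate, and the private key into PKCS12:
openssl pkcs12 -export -in signed.crt -certfile intermediate.crt \
  -inkey instance.key -name ent-search -out keystore.p12 -passout pass:pass
# Convert to JKS for ent_search.ssl.keystore.path:
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 \
  -destkeystore keystore.jks -deststoretype JKS \
  -srcstorepass pass -deststorepass pass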
From my understanding, the path of this keystore is what is needed in the enterprise-search configuration file, and the intermediate cert is what goes in Kibana's certificate authority config field (ca.pem). I have also tried putting both the signed and intermediate certs in the same .pem, as well as the private key, signed cert, and intermediate cert together. Below are the relevant configurations:
kibana.yml
enterpriseSearch.host: https://obfuscated-dns:3002
enterpriseSearch.ssl.verificationMode: certificate
enterpriseSearch.ssl.certificateAuthorities:
- /path/ca.pem
enterprise-search.yml
ent_search.external_url: https://obfuscated-dns:3002
ent_search.listen_host: 0.0.0.0
ent_search.listen_port: 3002
ent_search.ssl.enabled: true
ent_search.ssl.keystore.path: "/path/keystore.jks"
ent_search.ssl.keystore.password: "pass"
ent_search.ssl.keystore.key_password: "pass"
I'm starting to feel like I fundamentally misunderstand something here; a lot of the jargon around SSL/TLS certificates seems to lack standardization. While we're at it, what is a root cert in relation to what I have listed? Is it the intermediate cert? I see there is a master "root certificate" for the Digicert CN I certified under, but I'm unsure where this fits in. The config variable "certificateAuthorities" doesn't document what this .pem file should specifically contain, and when I search, the concept of a certificate authority is never associated with file contents; it is simply abstracted to the entity which provides certification (duh).
To put it succinctly: What does this variable "certificateAuthorities" explicitly entail?
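From what I can gather from general TLS practice (not from the ent-search docs), the file should contain the CA certificates themselves, i.e. every issuer above your leaf, as concatenated PEM blocks. A sketch with placeholder file names:
# Intermediate first, root last; Kibana treats the file as a PEM bundle:
cat digicert_intermediate.pem digicert_root.pem > /path/ca.pem
# Sanity check; "unable to get issuer certificate" here means the
# bundle is missing a link (usually the root):
openssl verify -CAfile /path/ca.pem signed.crt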
UPDATE 09/28/2022
I have now confirmed that SSL is working when calling enterprise-search from outside the VM it's running in. I can use its endpoint from my Flutter and React apps, however Kibana is still throwing the error mentioned above. I have checked that the root/intermediate CA provided to Kibana's configuration is indeed the certificate linked with the signed cert provided to enterprise search, and even confirmed so using SSLPoke. This leaves me with the suspicion that perhaps Java is a bad actor in the mix? I've added the root/intermediate CA to the cacerts keystore in the ssl/java directory of the Linux VM, but still no luck. Any thoughts?
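For reference, the cacerts import I attempted looked roughly like this (a sketch; the cacerts path varies by distro/JVM, "changeit" is the usual default password, and keytool imports one certificate per entry, so a bundled ca.pem may need to be split first):
# JDK 9+ can address the default truststore directly:
keytool -importcert -alias digicert-ca -file /path/ca.pem \
  -cacerts -storepass changeit -noprompt
# On older JDKs, point at the file instead:
# keytool -importcert -alias digicert-ca -file /path/ca.pem \
#   -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit -noprompt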
Related
I installed Elasticsearch via Packer and Ansible onto a machine image on GCP. I tried running elasticsearch-reset-password -u elastic to change the password. I think I'm getting the following error because the installation was done on a different IP address (the IP of the instance Packer launches to bake the machine image vs the IP of the launched instance).
WARN org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [10.206.0.10]; the server provided a certificate with subject name [CN=packer-62d379fb-f7c3-ca0f-471a-82185776ac77], fingerprint [eb5436427cb38928b3f16994bfdb8102ac5011be], no keyUsage and extendedKeyUsage [serverAuth]; the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [IP:10.128.0.20,DNS:localhost,DNS:packer-62d379fb-f7c3-ca0f-471a-82185776ac77,IP:0:0:0:0:0:0:0:1,IP:127.0.0.1,IP:fe80:0:0:0:4001:aff:fe80:14]; the certificate is issued by [CN=Elasticsearch security auto-configuration HTTP CA]; the certificate is signed by (subject [CN=Elasticsearch security auto-configuration HTTP CA] fingerprint [63fa2023ea0d36865d838d8d3bd17c5e96f8b684] {trusted issuer}) which is self-issued; the [CN=Elasticsearch security auto-configuration HTTP CA] certificate is trusted in this ssl context ([xpack.security.http.ssl (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/http.p12, password=<non-empty>, type=PKCS12, algorithm=PKIX}})])
java.security.cert.CertificateException: No subject alternative names matching IP address 10.206.0.10 found
The IP address of the instance I'm launching from the prebaked machine image is 10.206.0.8 instead of ...10.
This is what I get when I test TLS:
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
Enter host password for user 'elastic':
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I would like to get this ideal path to work, but I'm at a loss for what to do. It seems like an opportunity to learn, but right now I'm just scratching my head.
Working solutions that I have right now are:
run ES from Docker and disable xpack when creating the container with an env variable
pre-bake an install of deb package for es7 (predating the xpack auto config)
install es8 manually on each node I launch vs pre-baking a machine image that is pre-installed and pre-configured
None of those are suitable paths forward, as they either circumvent the platform's new security conventions or throw a wrench in my automated infrastructure goals.
Can I modify the generated certs to work on a newly launched instance from a pre-baked image with a different IP address?
I'm fairly new to the whole certificate shebang and not a versed Linux admin.
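One direction that might work (a sketch under assumptions, not verified against the auto-config layout): instead of patching the baked-in certificate, issue a fresh HTTP certificate at first boot whose SANs include the instance's actual IP, then point xpack.security.http.ssl at it. File names here are hypothetical; the IP is the one from above.
# Create a CA you control, then a cert whose SANs match the new instance:
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out my-ca.p12 --pass ""
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca my-ca.p12 --ca-pass "" --name node1 \
  --ip 10.206.0.8 --dns localhost --out http-new.p12 --pass ""
# Then in elasticsearch.yml:
# xpack.security.http.ssl.keystore.path: certs/http-new.p12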
In our company, we run a Windows domain, but we also have some CentOS servers for different services.
On one of said servers we have our ticket system, which is browser based. I want to secure it with a certificate signed by our Windows root CA, but no matter what I do, the certificate is shown as invalid in the browser.
Funnily enough, both certificates in the chain (CA -> server) are shown as valid.
I already did the following:
start certificate process from scratch
tried different certificate formats (.cer, .pem)
verified server cert with root cert
checked validity with openssl (OK)
checked SSL connection with openssl, no issues
added root cert to Linux server trusted CA store
recreated cert chain (of 2)
restarted Apache over and over
reset browser cache
tried different browser
checked DNS entries
checked if the root CA is trusted in Windows (it is)
manually installed server cert in my browser
Both the server cert and the root cert show up as valid in the browser, with the correct relation.
I'm completely lost here. Is there some key step I forgot, one that none of the ~30 guides I read mentioned?
Any help is greatly appreciated
Your question is missing some information:
Did you check the SSL connection from outside the server?
Did you verify the RootCA cert is inside the cert-store of the server (sometimes it is rejected without error messages)?
I would check the reason for rejecting the certificate in the browser (Firefox is usually more informative than Chrome), and look for the error code.
Reasons can be (some of which you have already verified):
Wrong certificate properties (missing the required values in the "usage" attribute)
Wrong domain name
Expired certificate
Certificate could not be verified on the client-side
See this image as an example of an error code:
https://user-images.githubusercontent.com/165314/71407838-14f55a00-2634-11ea-8a30-c119d2eb1eb1.png
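To check the SSL connection from outside the server (the first question above), something like this sketch can help; the hostname is a placeholder:
# Shows the exact chain the server presents and the validation verdict:
openssl s_client -connect ticketsystem.example.com:443 \
  -servername ticketsystem.example.com -showcerts </dev/null
# Look for "Verify return code: 0 (ok)" at the end; any other code
# names the reason the client rejects the chain.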
I have written a RESTful API project developed using Spring Boot; I am using the embedded Tomcat and running a jar on a Linux server.
The APIs are live at:
https://api.arevogroup.com:8089/api/regions
and I can see the verified and correct SSL certificate, as shown in the given screenshot,
but I am getting this exception in Postman when I call these APIs.
These APIs are consumed by a Xamarin based app, which seems to work all good when consumed from an iPhone, but gives this same exception when the APIs are accessed via Android.
I guess the way I have generated the SSL certificate has some issues.
I have used a pfx file, and my SSL config in the properties file looks like this:
###SSL Key Info
security.require-ssl=true
server.ssl.key-store-password=PASSWORD
server.ssl.key-store=classpath:ssl_pfx.pfx
server.ssl.key-store-type=PKCS12
I have 2 questions: if I disable SSL verification, would the communication still be encrypted or not? (A man-in-the-middle attack is still possible, but the info will still be encrypted, right?)
If not, how can I fix this?
You can't disable the verification of the server certificate. No browser will allow you to do it, except on an exceptional basis (the user must confirm the exception). If the client disables the verification, then the communication will still be encrypted (i.e. no passive attack will be possible).
The errors you see are caused by a misconfiguration of your server.
Your certificate chain contains just the certificate for your server and lacks the intermediate certificate CN=Go Daddy Secure Certificate Authority - G2. You need to download it from Go Daddy (it is the one named gdig2.crt.pem) and add it to your keystore.
Refer to this question on how to do it.
Some browsers cache intermediate certificates and are able to verify your site even if one certificate is missing. However you should not rely on it.
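One way to add it, as a sketch (file names are placeholders; gdig2.crt.pem is the intermediate named above), is to rebuild the PKCS12 so the key entry carries the full chain:
# Concatenate leaf + intermediate, then re-export the keystore:
cat mysite.crt gdig2.crt.pem > fullchain.pem
openssl pkcs12 -export -inkey mysite.key -in fullchain.pem \
  -name tomcat -out ssl_pfx.pfx -passout pass:PASSWORD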
security.require-ssl=true
server.ssl.key-store-password=PASSWORD
server.ssl.key-store=keystore.jks
server.ssl.key-store-provider=SUN
server.ssl.key-store-type=JKS
I used a JKS file instead of the pfx and it worked all good. Thought I'd share with others too.
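For anyone repeating this, the pfx-to-JKS conversion is roughly (a sketch; names and passwords are placeholders):
keytool -importkeystore -srckeystore ssl_pfx.pfx -srcstoretype PKCS12 \
  -destkeystore keystore.jks -deststoretype JKS \
  -srcstorepass PASSWORD -deststorepass PASSWORD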
I am trying to run a microservice (based on Eclipse MicroProfile) on OpenLiberty (v20.0.0.1/wlp-1.0.36.cl200120200108-0300) on Eclipse OpenJ9 VM, version 1.8.0_242-b08 (en_US).
I run the server as the official Docker image (open-liberty:kernel)
In my service I try to connect to another REST service via HTTPS:
Client client = ClientBuilder.newClient();
client.target("https://myservice.foo.com/").request(....);
This throws the following exception:
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
I already added the features 'transportSecurity-1.0' and 'ssl-1.0' into the server.xml file:
<featureManager>
<feature>jaxrs-2.1</feature>
<feature>microProfile-2.2</feature>
<feature>transportSecurity-1.0</feature>
<feature>ssl-1.0</feature>
</featureManager>
and I also tweaked the jvm.options file like this:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777
-Dhttps.protocols=TLSv11,TLSv12
-Djdk.tls.client.protocols=TLSv11,TLSv12
-Dhttps.protocols=TLSv11,TLSv12
-Dcom.ibm.jsse2.overrideDefaultProtocol=TLSv11,TLSv12
But nothing helps to get rid of the exception.
What is the correct configuration for OpenLiberty to enable outgoing SSL connections?
Liberty doesn't trust anything over ssl by default, so unless the service you are connecting to uses an identical keystore/truststore file, or you've otherwise configured your service to trust the microservice in some way, you can get that exception. If this is the problem, something like this will probably be seen in messages.log as well:
com.ibm.ws.ssl.core.WSX509TrustManager E CWPKI0823E: SSL HANDSHAKE FAILURE: A signer with SubjectDN [CN=localhost, OU=oidcdemo_client, O=ibm, C=us] was sent from the host [localhost:19443]. The signer might need to be added to local trust store [/Users/tester/tmp/liberty/20003wlp/wlp/usr/servers/urlcheck/resources/security/key.p12], located in SSL configuration alias [defaultSSLConfig]. The extended error message from the SSL handshake exception is: [PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target].
How to manually patch up the truststore is documented here,
https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_add_trust_cert.html
but what you will probably want to do in a Docker environment is modify your images to either include a common keystore/truststore, or read one from outside somewhere (such as a Kubernetes secret). By default, each Docker image creates its own unique key/truststore, and they won't be able to "talk" over SSL.
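A command-line sketch of that manual patch-up (the host is the one from the question; the keystore path and password are placeholders for your server's defaults):
# Fetch the remote endpoint's certificate:
echo | openssl s_client -connect myservice.foo.com:443 \
  -servername myservice.foo.com | openssl x509 > remote.crt
# Import it into the server's default keystore (also its truststore
# unless a separate one is configured):
keytool -importcert -alias myservice -file remote.crt \
  -keystore wlp/usr/servers/defaultServer/resources/security/key.p12 \
  -storetype PKCS12 -storepass yourStorePass -noprompt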
If you only need to communicate with services that have a certificate signed by a well-known authority, you can add
ENV SEC_TLS_TRUSTDEFAULTCERTS=true
to your Dockerfile (20.0.0.3+) to enable that.
As mentioned by Bruce in the answer above, Liberty doesn't trust any certificates by default. If you are making outgoing connections from Liberty to a server, you either need to add their certificate to the truststore you have configured OR you need to trust the JRE's cacerts if the remote endpoint is using a certificate from a well-known CA.
When you say you are using Let's Encrypt certificates, do you mean the remote end-point is using them, or your Liberty server is?
If the remote end-point is, most JREs include Let's Encrypt in their cacerts. If the Liberty server is using a certificate signed by Let's Encrypt, that doesn't really have an effect on the outgoing connection unless you are using mutual SSL authentication.
As an FYI, if you are using a certificate signed by Let's Encrypt in Liberty as the default certificate, we will be adding built-in support for the ACME protocol in a few releases. See here for progress: https://github.com/OpenLiberty/open-liberty/issues/9017
I am working on an EC2 instance. I have connected my PHP files, e.g. http://13.57.220.172/phpinsert.php, but it is not a secured site, so I want to turn http into https://13.57.220.172.
I have Cloudflare SSL. When I try to add the SSL certificate, it shows:
com.amazonaws.pki.acm.exceptions.external.ValidationException: Provided certificate is not a valid self signed. Please provide either a valid self-signed certificate or certificate chain. Choose Previous button below and fix it.
I have enclosed the image with it.
So how can I get the self-signed certificate? Is there any online tool available?
I think the error message you're seeing has to do with this sentence:
If your certificate is signed by a CA, you must include the
certificate chain when you import your certificate.
from https://docs.aws.amazon.com/acm/latest/userguide/import-certificate-prerequisites.html.
Since it sounds like you're not yet in "production" mode, I'm guessing you're not particularly attached to your existing certificate, but just want a certificate to be able to do HTTPS on your web server (and don't really care if it's self-signed).
If you want to use AWS Certificate Manager, I think it would be easier to just let them (AWS) issue you a certificate instead of trying to import one from somewhere else. AWS doesn't charge anything for certificates. https://docs.aws.amazon.com/acm/latest/userguide/acm-billing.html
Even if you get the certificate setup in AWS Certificate Manager, that's not going to be installed directly on your EC2 instance, but rather (most likely) on a load balancer in front of your web server, which will add a little complexity to your setup. https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
If all you want to do is use HTTPS on your web server, Let's Encrypt (also free) is probably a simpler option. If you are using AWS Linux 2, there are instructions for getting a certificate here - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/SSL-on-amazon-linux-2.html
Well, to add to the points which #jefftrotman has already mentioned:
If your expectation is just to secure your IP address using HTTPS, you can achieve that using the approaches below:
A SELF SIGNED certificate that you can create using OpenSSL (see the sketch after this list).
You can also get an SSL certificate from a trusted signing authority (like GoDaddy or VeriSign) or Let's Encrypt.
The only requirement in the second point is that, to get a certificate from a valid signing authority, you need to have a domain name like "myphpapp.com" and then use this domain to get the SSL certificate.
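For the first point, a minimal OpenSSL sketch (the IP is the one from the question; -addext requires OpenSSL 1.1.1 or newer):
# Key + cert in one shot, valid one year, SAN set to the bare IP:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=13.57.220.172" \
  -addext "subjectAltName=IP:13.57.220.172"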
The details below are in case you want to use ACM (AWS Certificate Manager).
If you prefer ACM, you can get a free public SSL certificate for a domain name pointing at your instance (public ACM certs are not issued for bare IP addresses), and your web application will be secured.
If your requirement is to add SSL certificates (like PEM files) to a web server like
NGINX or Apache, then you first need to create a Private CA in ACM, and then using this CA you will be able to create private SSL certificates. After creating those, you can export the files and add them to the web server's configuration file. (Try to use the Amazon Linux 2 EC2 image for ease.)