NiFi is unable to connect to URL with HTTPS using InvokeHTTP processor; no certificate is required to access the site via browser (only user & pass) - apache-nifi

NiFi is unable to connect to a URL with HTTPS using the InvokeHTTP processor; no certificate is required to access the site via a browser (only user & pass).
The error observed is "Request Processing Failed: javax.net.ssl.SSLPeerUnverifiedException".
I have tried adding an SSL Context with the Java truststore and the NiFi keystore, but it is not working. Kindly suggest.

When using InvokeHTTP to connect to an HTTPS URL, you will need to add an SSLContextService which InvokeHTTP can use to verify the remote server. The SSLContextService refers to a truststore which contains the public certificate of the Certificate Authority that signed the server's certificate. For example, if connecting to Stack Overflow with NiFi, you would need the certificate with CN = ISRG Root X1, O = Internet Security Research Group, C = US installed in a PKCS12 truststore, which is then used by the SSLContextService. Another option is to use the truststore provided by Java, typically located at $JAVA_HOME/lib/security/cacerts, which trusts most publicly signed web domain certificates.
Please add more details of the error message if this still does not work.
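For illustration, here is one way to build such a PKCS12 truststore with openssl (keytool -importcert works equally well). The file names, the changeit password, and the throwaway self-signed certificate standing in for the real downloaded root CA are all assumptions for this sketch; in practice you would save the actual root CA's PEM as ca.pem:

```shell
# Stand-in for the downloaded root CA (e.g. ISRG Root X1 for Let's
# Encrypt-signed sites); in practice save the real root's PEM as ca.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Example Root CA" -keyout ca.key -out ca.pem

# Bundle the CA certificate (no private key) into a PKCS12 truststore.
openssl pkcs12 -export -nokeys -in ca.pem \
  -out truststore.p12 -passout pass:changeit

# Sanity check: list the certificate back out of the truststore.
openssl pkcs12 -in truststore.p12 -passin pass:changeit -nokeys
```

In the SSLContextService you would then set the truststore filename to truststore.p12, the truststore type to PKCS12, and the password accordingly.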

Related

Kibana to EnterpriseSearch TLS issue

THIS IS STILL AN ISSUE. ANY HELP WOULD BE APPRECIATED.
I am having an issue setting up TLS through a custom CA between Kibana and Enterprise Search. I have the default X-Pack security set up for the interconnection of my Elasticsearch nodes with both Kibana and Enterprise Search, which was done according to the following docs: minimal security, basic security, ssl/tls config. I can successfully run Enterprise Search over HTTP; however, my issue arises when I enable SSL/TLS for ent-search.
When I have HTTPS configured for ent-search using this doc, the server is "running"; however, I receive an error after boot, and Kibana throws an error when attempting to connect.
ent-search error (not corresponding to Kibana's hit to the ent-search hostname; this error is raised shortly after ent-search reports "starting successfully", but isn't fatal):
[2022-06-14T20:37:45.734+00:00][6081][4496][cron-Work::Cron::SendTelemetry][ERROR]: Exception:
Exception while performing Work::Cron::SendTelemetry.perform()!: Faraday::ClientError: PKIX path
building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid
certification path to requested target
Kibana error (directly corresponding to when I ping http://obfuscated-dns:5601/app/enterprise_search/overview)
[2022-06-14T20:43:51.772+00:00][ERROR][plugins.enterpriseSearch] Could not perform access check to
Enterprise Search: FetchError: request to https://obfuscated-dns:3002/api/ent/v2/internal/client_config
failed, reason: unable to get issuer certificate
The steps I took to generate said certificate were: I created a CSR on my server using elasticsearch-certutil csr along with a yml file which specified the distinguished name; I sent the unzipped CSR to my CA (Digicert); uploaded the signed certificate and intermediate certificate provided by Digicert to my server; used openssl to generate a keystore from the signed cert and the private key generated alongside the original CSR; then finally converted the keystore to .jks format using keytool.
From my understanding, the path of this keystore is what goes in the configuration file for enterprise-search, and the intermediate cert is what goes in the Kibana certificate authority config field (ca.pem). I have also tried to stuff both the signed and intermediate cert in the same .pem, as well as the private key, signed, and intermediate cert. Below are the relevant configurations:
kibana.yml
enterpriseSearch.host: https://obfuscated-dns:3002
enterpriseSearch.ssl.verificationMode: certificate
enterpriseSearch.ssl.certificateAuthorities:
- /path/ca.pem
enterprise-search.yml
ent_search.external_url: https://obfuscated-dns:3002
ent_search.listen_host: 0.0.0.0
ent_search.listen_port: 3002
ent_search.ssl.enabled: true
ent_search.ssl.keystore.path: "/path/keystore.jks"
ent_search.ssl.keystore.password: "pass"
ent_search.ssl.keystore.key_password: "pass"
I'm starting to feel like I fundamentally misunderstand something here. A lot of the jargon behind SSL/TLS certificates seems to lack standardization. While we are at it: what is a root cert in relation to what I have listed? Is it the intermediate cert? I see there is a master "root certificate" for the Digicert CN I certified under; however, I'm unsure where this fits in. The config variable "certificateAuthorities" doesn't document what this .pem file should contain specifically, and when I search, the concept of a certificate authority is never associated with file contents but is instead simply abstracted to the entity which provides certification (duh).
To put it succinctly: What does this variable "certificateAuthorities" explicitly entail?
UPDATE 09/28/2022
I have now confirmed that SSL is working when calling enterprise-search from outside the VM it's running in. I can use its endpoint with my Flutter and React apps; however, Kibana is still throwing the error mentioned above. I have checked that the root/intermediate CA provided to Kibana's configuration is indeed the certificate linked with the signed cert provided to enterprise-search, and even confirmed so using SSLPoke. This leaves me with the suspicion that perhaps Java is a bad actor in the mix. I've added the root/intermediate CA to the cacerts keystore in the ssl/java directory of the Linux VM, but still no luck. Any thoughts?
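For reference, the relationship being asked about, certificateAuthorities holding the issuer chain that signed the certificate ent-search presents, can be checked locally with openssl verify. The certs below are throwaway stand-ins for the Digicert chain; every file name and CN is an assumption for the sketch:

```shell
# Throwaway "CA", standing in for the Digicert intermediate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Example Issuing CA" -keyout ca.key -out ca.pem

# Server key + CSR, standing in for the ent-search CSR.
openssl req -newkey rsa:2048 -nodes -subj "/CN=obfuscated-dns" \
  -keyout server.key -out server.csr

# The CA signs the CSR, as Digicert did with yours.
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 1 -out server.pem

# The check Kibana effectively performs: does ca.pem complete the
# chain for the certificate the server presents?
openssl verify -CAfile ca.pem server.pem   # prints "server.pem: OK"
```

If this verify fails against your real ca.pem and the cert ent-search serves, the .pem handed to certificateAuthorities is not the chain that actually issued the server certificate.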

OpenLiberty throws javax.net.ssl.SSLHandshakeException

I am trying to run a microservice (based on Eclipse MicroProfile) on Open Liberty (v20.0.0.1/wlp-1.0.36.cl200120200108-0300) on an Eclipse OpenJ9 VM, version 1.8.0_242-b08 (en_US).
I run the server as the official Docker image (open-liberty:kernel).
In my service I try to connect to another REST service via HTTPS:
Client client = ClientBuilder.newClient();
client.target("https://myservice.foo.com/").request(....);
This throws the following exception:
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
I already added the features 'transportSecurity-1.0' and 'ssl-1.0' to the server.xml file:
<featureManager>
    <feature>jaxrs-2.1</feature>
    <feature>microProfile-2.2</feature>
    <feature>transportSecurity-1.0</feature>
    <feature>ssl-1.0</feature>
</featureManager>
and I also tweaked the jvm.options file like this:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777
-Dhttps.protocols=TLSv11,TLSv12
-Djdk.tls.client.protocols=TLSv11,TLSv12
-Dcom.ibm.jsse2.overrideDefaultProtocol=TLSv11,TLSv12
But nothing helps to get rid of the exception.
What is the correct configuration for OpenLiberty to enable outgoing SSL connections?
Liberty doesn't trust anything over SSL by default, so unless the service you are connecting to uses an identical keystore/truststore file, or you've otherwise configured your service to trust that endpoint in some way, you can get that exception. If this is the problem, something like this will probably be seen in messages.log as well:
com.ibm.ws.ssl.core.WSX509TrustManager E CWPKI0823E: SSL HANDSHAKE FAILURE: A signer with SubjectDN [CN=localhost, OU=oidcdemo_client, O=ibm, C=us] was sent from the host [localhost:19443]. The signer might need to be added to local trust store [/Users/tester/tmp/liberty/20003wlp/wlp/usr/servers/urlcheck/resources/security/key.p12], located in SSL configuration alias [defaultSSLConfig]. The extended error message from the SSL handshake exception is: [PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target].
How to manually patch up the truststore is documented here:
https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_add_trust_cert.html
but what you will probably want to do in a Docker environment is modify your images to either include a common keystore/truststore, or read one from outside somewhere (such as a Kubernetes secret). By default, each Docker image creates its own unique key/truststore, and they won't be able to "talk" over SSL.
If you only need to communicate with services that have a certificate signed by a well-known authority, you can add
ENV SEC_TLS_TRUSTDEFAULTCERTS=true
to your Dockerfile (20.0.0.3+) to enable that.
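For reference, a minimal server.xml sketch of the shared keystore/truststore wiring described above. The ids are the conventional Liberty defaults, but the file names and passwords are placeholders for whatever common stores you mount into the images:

```xml
<!-- Sketch only: ids are the Liberty defaults, but the locations and
     passwords are placeholders for your own shared stores. -->
<keyStore id="defaultKeyStore"   location="key.p12"   password="yourKeyPass" />
<keyStore id="defaultTrustStore" location="trust.p12" password="yourTrustPass" />
<ssl id="defaultSSLConfig" keyStoreRef="defaultKeyStore"
     trustStoreRef="defaultTrustStore" />
```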
As mentioned by Bruce in the answer above, Liberty doesn't trust any certificates by default. If you are making outgoing connections from Liberty to a server, you either need to add their certificate to the truststore you have configured OR you need to trust the JRE's cacerts if the remote endpoint is using a certificate from a well-known CA.
When you say you are using Let's Encrypt certificates, do you mean the remote endpoint is using them, or your Liberty server is?
If the remote endpoint is, most JREs include Let's Encrypt in their cacerts. If the Liberty server is using a certificate signed by Let's Encrypt, that doesn't really affect the outgoing connection unless you are using mutual SSL authentication.
As an FYI, if you are using a certificate signed by Let's Encrypt in Liberty as the default certificate, we will be adding built-in support for the ACME protocol in a few releases. See here for progress: https://github.com/OpenLiberty/open-liberty/issues/9017

Configuring SSL on Nifi 1.9 Single Node setup

Could you please help me set up SSL on the NiFi application.
To explain the steps taken so far:
I have used the instructions in the following link to use the CA-signed certs provided to us (these include the root, intermediate, and server certs). I have successfully configured NiFi to run on SSL on the server end, but I am missing the steps to create a client cert so that we can log in to NiFi using it.
Help in this regard will be highly appreciated.
You'll need to generate a Certificate Signing Request (CSR) or request from your security/IT team who provided the CA-signed server certs that they provide a client certificate (and private key) signed by the same intermediate or root CA. You could also generate your own client certificate signed by a self-signed CA and put the public certificate of that CA in the NiFi truststore. More documentation around this process can be found in the NiFi Toolkit Guide.
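A minimal openssl sketch of the self-signed-CA route mentioned above. Every file name, DN, and password below is an assumption for illustration; the DN you choose becomes the identity NiFi sees, and ca.pem is what you would load into the NiFi truststore:

```shell
# Self-signed CA; alternatively reuse the CA that signed your server certs.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Example NiFi CA" -keyout ca.key -out ca.pem

# Client key + CSR; the DN here becomes the identity NiFi authorizes.
openssl req -newkey rsa:2048 -nodes -subj "/OU=NIFI/CN=admin" \
  -keyout client.key -out client.csr

# Sign the client CSR with the CA.
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 1 -out client.pem

# Bundle key + cert into a PKCS12 that browsers can import.
openssl pkcs12 -export -in client.pem -inkey client.key \
  -name nifi-client -out client.p12 -passout pass:changeit
```

Import client.p12 into your browser, add ca.pem to NiFi's truststore, and authorize the certificate's DN (here CN=admin, OU=NIFI) as a NiFi user.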

SSL certificate error while securing nifi with ranger

I am using HDF 3.0
I created keystore and truststore certificates and mentioned their paths in the NiFi Ranger policy. When I test the connection I get the issue shown below. I followed this link here
NiFi is secured using TLS certificates. When installed and secured via Ambari, by default Ambari installs a NiFi Certificate Authority (CA) and will generate certificates for each NiFi node and get those certificates signed by that CA.
The NiFi node private key is loaded in the keystore.jks file of each NiFi node, and the public key for the CA is loaded in the truststore.jks file on each NiFi node.
When a client (Ranger, in the above case) initiates a connection to NiFi, a two-way TLS connection is negotiated. This involves the server sending the node's public key (derived from the certificate in the keystore.jks) to the client. The client will check that key against the list of trustedCertEntries in its truststore.jks file. If it finds the server's public key, or the public key of the CA who signed that server key, in the truststore.jks, it will trust the cert provided by the server. The client will then provide its client certificate (derived from the Ranger keystore.jks private key) to the target NiFi node. NiFi will follow the same steps above to determine whether it should trust the cert provided by that client, using its own truststore.jks file.
The error you are seeing indicates that this two-way TLS negotiation is failing because Ranger does not trust the cert being presented by NiFi. If you get past this, you will likely have a failure of trust in the other direction as well. You need to make sure the truststore.jks used by both NiFi's nodes and Ranger contains all the necessary trustedCertEntries for both sides of this connection. (This means having the public key of the NiFi CA loaded in Ranger's truststore.jks file, and loading the public key for your Ranger certificate in the truststore.jks file used by your NiFi instances.)
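The cross-trust requirement described above can be demonstrated with openssl alone. The certs below are throwaway stand-ins (a "NiFi CA", a node cert it signed, and a self-signed "Ranger" cert; all names are made up), and the two verify calls mirror the checks each side's truststore must be able to satisfy:

```shell
# Throwaway stand-ins: the NiFi CA and a self-signed Ranger cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Example NiFi CA" -keyout nifi-ca.key -out nifi-ca.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=ranger-host" -keyout ranger.key -out ranger.pem

# A NiFi node certificate signed by the NiFi CA.
openssl req -newkey rsa:2048 -nodes -subj "/CN=nifi-node1" \
  -keyout node1.key -out node1.csr
openssl x509 -req -in node1.csr -CA nifi-ca.pem -CAkey nifi-ca.key \
  -CAcreateserial -days 1 -out node1.pem

# Ranger's truststore needs the NiFi CA's public cert...
openssl verify -CAfile nifi-ca.pem node1.pem
# ...and NiFi's truststore needs Ranger's (here self-signed) cert.
openssl verify -CAfile ranger.pem ranger.pem
```

If either verify fails with "unable to find valid certification path" / "unable to get local issuer certificate", the corresponding truststore.jks is missing that trustedCertEntry.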

configuring CA certificates in WSO2 API Manager

I have WSO2 API manager deployed in AWS EC2 instance.
I have purchased an SSL certificate via sslforfree.com. I tried to import it via the keytool command, but it's not working and throws an error. It gives me
KrbException: Cannot locate default realm
How can I associate this certificate with the API Manager? I don't have a domain name for WSO2 and I access it via IP address.
Is it possible to have a CA-signed certificate in this case?
In case I want a domain name for this EC2, how can I get one?
You can import the certificate inside Carbon. Log in to <your_server>:9443/carbon as admin. After that, go to Main -> Manage -> Keystores -> List.
If you're still using the default settings, you'll have the wso2carbon.jks entry here. Click on Import Cert, choose your cert file, and click Import. Your certificate should be working after this.
There are several topics in this question:
I tried to import it via keytool command. But its not working and
throwing error. It gives me KrbException: Cannot locate default realm
The keytool gives you this exception? It would be useful to provide the keytool command you've used. There's no reason for that exception.
Please note that the certificate CN must be the same as the FQDN (domain name) of the server (how your browser accesses it).
How can I associate this certificate with the API Manager?
There are two options.
Import the keypair (private key and certificate chain) into a keystore and configure the APIM to use that keystore (in repository/conf/tomcat/catalina-server.xml).
Have a reverse proxy server (Apache HTTPD, nginx) in front, and configure SSL on that proxy server. This is my favorite approach.
See: https://docs.wso2.com/display/AM210/Adding+a+Reverse+Proxy+Server
Then you have control over who/where can access the carbon console, store and publisher.
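If you go with the first option, the place to wire the keystore in is the HTTPS Connector in repository/conf/tomcat/catalina-server.xml. A hedged sketch follows; the values shown are the stock wso2carbon defaults, so substitute your own keystore path and password:

```xml
<!-- Sketch of the 9443 Connector; values shown are the stock
     wso2carbon defaults - substitute your own keystore and password. -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443" scheme="https" secure="true" SSLEnabled="true"
           keystoreFile="${carbon.home}/repository/resources/security/wso2carbon.jks"
           keystorePass="wso2carbon" />
```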
I don't have a domain name for WSO2 and I access it via IP address. Is
it possible for have CA signed certificate in this case?
Certificate authorities don't provide IP-based certificates, as they can validate ownership/control of a domain name, but not of an IP address.
You can create (and make trusted) your own CA and certificate (good for a PoC, a DEV environment, ...), but in the long run you'll need a trusted certificate on a hostname.
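If you do want to experiment with an IP-only certificate from your own CA, the IP must appear as a subjectAltName. A self-signed sketch follows (203.0.113.10 is a documentation address standing in for your EC2 IP, and the file names are made up; -addext needs OpenSSL 1.1.1+):

```shell
# Self-signed certificate for a bare IP (203.0.113.10 is a documentation
# address; substitute your EC2 IP). -addext needs OpenSSL 1.1.1+.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=203.0.113.10" \
  -addext "subjectAltName=IP:203.0.113.10" \
  -keyout ip.key -out ip.pem

# Confirm the SAN made it into the certificate.
openssl x509 -in ip.pem -noout -text | grep "IP Address"
```

Browsers check the SAN, not the CN, so without the IP entry the certificate will be rejected even if you trust your own CA.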
In case I want a domain name for this EC2, how can I get one?
You can always buy one :D For a start, when running an EC2 instance with a dynamic IP address, you can use a dynamic DNS service (e.g. https://ydns.io/; just search for more if you wish).
