Ambari Kerberos Wizard Trust ldaps subject alternative DNS name - hadoop

I'm setting up Kerberos with an existing Active Directory as the KDC and having an issue communicating with the LDAPS server. We have a cluster of AD servers, say server1.example.com, server2.example.com, and server3.example.com, and the company just uses example.com to connect. I've set up LDAP integration with Ambari for user access to the portal via ambari-server setup-ldap, but did it without SSL; I can use ldap://example.com as the LDAP server and it works fine.

With LDAPS, however, ldaps://example.com:636 doesn't work. I get an error in ambari-server.log: "java.security.cert.CertificateException: No subject alternative DNS name matching example.com found". I have imported the CA cert and each individual server's certificate into my keystore and put the CA in /etc/pki/ca-trust/source/anchors/activedirectory.pem, but I still can't get it to work for example.com. It works for server1.example.com and all the others individually, just not for the example.com DNS name.

I don't have control over certificate creation on the AD LDAPS side; these certs were self-signed by the AD servers and each server has its own certificate. Is there any way to tell Ambari to accept invalid certs for the Kerberos wizard, or any other way to get the broader domain name to work? Thanks in advance for any help.
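Java's hostname verification requires the name you dial to appear in the certificate's Subject Alternative Names, so importing the CA into the truststore fixes trust but not a name mismatch. A local sketch of the failure mode (requires openssl 1.1.1+; the hostnames are the ones from the question, the certificate is a stand-in):

```shell
# Generate a self-signed certificate whose SAN lists only the individual
# server names, mimicking the AD servers' certificates.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out ad.pem \
  -subj "/CN=server1.example.com" -days 1 \
  -addext "subjectAltName=DNS:server1.example.com,DNS:server2.example.com" 2>/dev/null

# Hostname verification succeeds for a name listed in the SAN...
openssl x509 -in ad.pem -noout -checkhost server1.example.com
# ...but fails for the bare domain, which is exactly the CertificateException.
openssl x509 -in ad.pem -noout -checkhost example.com
```

Given that, the realistic fixes are to point Ambari at a name the certificates do cover (e.g. ldaps://server1.example.com:636) or to have the AD team reissue the certificates with example.com as an additional DNS SAN.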

Related

Unable to connect to remote HTTPS APIs without DNS - Istio

I have a service running in Istio 1.16 with Envoy sidecar injection enabled.
The service connects to a remote API every now and then to send health information.
The remote endpoint is HTTPS but has no domain name, so it has to be invoked as https://168.x.x.x/http/health. The connection works fine with another API that does have a proper hostname.
So the issue is clearly on the DNS/certificate side. I am not great with networking, so I'm hoping you folks can help me out.
This is the error I get from the service's server:
x509: cannot validate certificate for because it doesn't contain any IP SANs
Istio version - 1.16
Kubernetes - 1.24
golang (service) - 1.19
Can we bypass this x509 SAN check using DestinationRules?
The error "x509: cannot validate certificate ... because it doesn't contain any IP SANs" occurs when the certificate presented by the server does not list the IP address you are connecting to among its Subject Alternative Names; validating a certificate against a bare IP address requires an IP SAN. The related error "x509: certificate has expired or is not yet valid" occurs when the certificate has expired or has not yet become active, or when it is not valid for the domain or IP address the request is being sent to.
To resolve this, you will need a certificate that includes the endpoint's IP address as an IP SAN, or a hostname for the endpoint that the certificate does cover. An expired certificate needs to be renewed or replaced.
For certificates managed by the cluster itself, you can check expiration dates with the command below:
kubeadm certs check-expiration
Refer to this SO for more detailed steps.
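If fixing the remote certificate isn't an option, recent Istio releases do allow relaxing client-side verification in a DestinationRule. A sketch, assuming traffic to 168.x.x.x is already modeled by a ServiceEntry whose host matches the one below, and that your Istio version supports the insecureSkipVerify field (introduced around Istio 1.9; verify against your version's API reference, and note all names here are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: remote-health-api
spec:
  host: health-api.external       # must match the ServiceEntry host entry
  trafficPolicy:
    tls:
      mode: SIMPLE
      insecureSkipVerify: true    # skips CA and SAN validation; use with care
```

The safer long-term fix is still a certificate that carries the IP address as an IP SAN, or a DNS name for the endpoint.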

SCOM - Issue with single server domain-management & agent

For my new task I have to use SCOM to monitor non-domain servers/computers. My company told me to do it with only one management server that contains the other SCOM features. So I have a Windows 2016 server running SCOM in a local domain, and I have to connect the other devices. It seemed easy, but I have a problem with certificates: when I issue certificates for my server and computers and import the certificate with MOMCertImport, I see event ID 21007 in Event Viewer: "The OpsMgr Connector cannot create a mutually authenticated connection to 'PC-NAME' because it is not in a trusted domain." So the certificates are installed, but I still can't connect the agent to SCOM. What should I do? I've searched everywhere for this problem, but no solution has worked for me!
There are a few things you need to look at.
1. The certificate must have both client authentication and server authentication purposes.
2. Authentication is MUTUAL, i.e. your agent confirms its identity to a gateway or management server, AND the gateway or management server confirms its identity to the agent.
3. Certificates must be issued to the EXACT computer FQDN. If you rename the machine, join a domain, or change the DNS suffix, the certificate is invalidated, because the FQDN changes.
4. Install and bind certificates at both participating servers (i.e. the agent and the MS or GW), because of #2.
5. Obviously, you need individual certificates for each server, because of #3.
6. Ensure that both servers can build the trust chain to their own certificate and to the other party's. Ideally you have a single root/issuing CA that issued both certificates; in that case, just install the root/issuing CA certs in the appropriate stores of the local computer account. If using self-signed certificates, you need to install them as trusted on the other party.
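Points 1 and 3 can be checked from the certificate file itself. A local sketch using openssl (the FQDN is illustrative; the self-signed cert just stands in for one issued by your CA):

```shell
# Stand-in certificate issued to an exact FQDN with both EKU purposes,
# as SCOM requires (serverAuth + clientAuth).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out scom.pem \
  -subj "/CN=agent01.workgroup.local" -days 1 \
  -addext "extendedKeyUsage=serverAuth,clientAuth" 2>/dev/null

# Verify the subject FQDN and that both EKUs are present.
openssl x509 -in scom.pem -noout -subject -ext extendedKeyUsage
```

The output should list both "TLS Web Server Authentication" and "TLS Web Client Authentication"; if either is missing, or the subject doesn't match the machine's exact FQDN, the mutual authentication in point 2 cannot succeed.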

SSL Cert for multiple subdomains in poste.io

To the poste.io users:
The mail server supports multiple subdomains. So far so good. But the SSL cert is always for the main domain; i.e., when opening the webmail using a subdomain's URL, I see a security exception.
Isn't it possible to extend the Let's Encrypt cert's name list to also include the subdomains, so that we have valid certs per subdomain?
I found the option to manually issue an SSL cert with multiple subdomains. After logging into the admin console, this option can be found under System Settings -> TLS Certificate -> Change Certificate Settings.

Configuring CA certificates in WSO2 API Manager

I have WSO2 API manager deployed in AWS EC2 instance.
I have purchased an SSL certificate via sslforfree.com. I tried to import it via the keytool command, but it's not working and throws an error:
KrbException: Cannot locate default realm
How can I associate this certificate with the API Manager? I don't have a domain name for WSO2 and I access it via IP address.
Is it possible to have a CA-signed certificate in this case?
If I want a domain name for this EC2, how can I get one?
You can import the certificate inside Carbon. Log into <your_server>:9443/carbon as admin, then go to Main -> Manage -> Keystores -> List.
If you're still using the default settings, you'll have the wso2carbon.jks entry here. Click on Import Cert, choose your cert file, and click Import. Your certificate should be working after this.
There are several topics in this question:
I tried to import it via keytool command. But its not working and
throwing error. It gives me KrbException: Cannot locate default realm
The keytool gives you this exception? It would be useful to provide the keytool command you used; there's no reason for that exception.
Please note that the certificate CN must be the same as the FQDN (domain name) of the server (the name your browser uses to access it).
How can I associate this certificate with the API Manager?
There are two options.
Import the keypair (private key and certificate chain) into a keystore and configure the APIM to use that keystore (in repository/conf/tomcat/catalina-server.xml)
Have a reverse proxy server (Apache HTTPD, NGINX) and configure SSL on that proxy server. This is my favorite approach.
See: https://docs.wso2.com/display/AM210/Adding+a+Reverse+Proxy+Server
Then you have control over who can access the Carbon console, Store, and Publisher, and from where.
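For the reverse-proxy option, a minimal sketch of TLS termination in NGINX in front of the APIM instance (the hostname and certificate paths are placeholders, not taken from the question):

```nginx
server {
    listen 443 ssl;
    server_name apim.example.com;              # placeholder hostname
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        # Forward to the APIM/Carbon HTTPS port on the same host.
        proxy_pass https://127.0.0.1:9443;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place, the purchased certificate lives on the proxy, and the APIM can keep its default keystore for the internal hop.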
I don't have a domain name for WSO2 and I access it via IP address. Is
it possible to have a CA-signed certificate in this case?
Most certificate authorities won't provide an IP-based certificate, as they can validate ownership/control of a domain name, but not of an IP address.
You can create (and make trusted) your own CA and certificate (good for a PoC, DEV environment, ...), but in the long run you'll need a trusted certificate on a hostname.
In case I want a domain name for this EC2, how can I get one?
You can always buy one :D For a start, when running an EC2 instance with a dynamic IP address, you may use a dynamic DNS service (e.g. https://ydns.io/ ; just search for more if you wish)

Worklight Quality Assurance https setup

I recently set up HTTPS on a Worklight Quality Assurance virtual appliance. I provided the certificate signed by my CA, following the directions in the IBM Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/SSFRDS_6.0.0/com.ibm.mqa.install.doc/topics/t_confighttps.html?lang=en
and configured the appliance to accept connections only over HTTPS (I disabled port 80 through the firewall configuration wizard).
However, when I connect over HTTPS, the certificate retrieved by the browser is still the default certificate issued by the appliance.
Is this correct? I was expecting the browser to retrieve the certificate I just imported.
Many thanks,
Marco
The best way to check if the certificate has been updated properly is to check the modified date of the cert and the key.
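Another way to check is to compare the fingerprint of the certificate file you imported with the fingerprint of the certificate the appliance actually presents. A self-contained local sketch of the technique, using openssl's test server on port 18443 (against the real appliance you would instead point s_client at the appliance's host on port 443):

```shell
# Stand-in for the imported certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out appliance.pem \
  -subj "/CN=appliance.example.com" -days 1 2>/dev/null

# Serve it locally, mimicking the appliance's HTTPS listener.
openssl s_server -cert appliance.pem -key key.pem -accept 18443 -www >/dev/null 2>&1 &
SRV=$!
sleep 1

# Fingerprint of the certificate actually served vs. the file on disk.
openssl s_client -connect localhost:18443 </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256 > served.txt
openssl x509 -in appliance.pem -noout -fingerprint -sha256 > local.txt
kill $SRV

diff served.txt local.txt && echo "server is presenting the imported certificate"
```

If the fingerprints differ, the appliance is still serving its default certificate and the import did not take effect.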
