The WS-Management service cannot find the certificate that was requested - windows

Server1 sends WinRM Get request -
Server2 has been listening -
I can guarantee that the CertificateThumbprint and the IP addresses on both servers match (sorry, parts of the IP addresses and the CertificateThumbprint had to be removed, since I am not allowed to publish everything here).
I don't know why WinRM still reports the error "The WS-Management service cannot find the certificate that was requested".

I've found a solution to this problem. You must create a CSR; from the CSR you use the DigiCert utility to create the certificate. You import that, then export it again with the private key. Import that into the certificate store and use winrm create to create the listener.
All found in my post.
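For reference, the listener-creation step I mean looks roughly like this, run from an elevated cmd prompt (the hostname and thumbprint are placeholders; in PowerShell the @{...} part needs quoting):
winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="server2.example.com"; CertificateThumbprint="<thumbprint>"}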

There are probably many reasons for this error. In my case we were using the existing IIS SSL cert which was working for WinRM on some machines but not others. The difference was the certificate was marked as exportable on the ones that worked.
Try re-importing the certificate and making sure it is marked as exportable.
Export/Import certificates:
https://www.digicert.com/ssl-support/pfx-import-export-iis-7.htm
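If it helps, a PowerShell sketch of the re-import with the exportable flag set (the path and password are placeholders):
$pfxPwd = ConvertTo-SecureString -String "PfxPassword" -AsPlainText -Force
Import-PfxCertificate -FilePath C:\certs\iis-cert.pfx -CertStoreLocation Cert:\LocalMachine\My -Exportable -Password $pfxPwd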

Related

Unable to connect to remote HTTPS API's without DNS - Istio

I have a Service which is running in Istio 1.16 with envoy sidecar injection enabled.
The service connects to a remote API every now and then to send health information.
The remote endpoint is HTTPS but has no domain name, so it has to be invoked like https://168.x.x.x/http/health. I can see the connection works fine with another API that does have a proper hostname.
So the issue is clearly with the DNS resolution; I am not great with networking, so I could use your help here.
This is the error I get from the service's server:
x509: cannot validate certificate for because it doesn't contain any IP SANs
Istio version - 1.16
Kubernetes - 1.24
golang (service) - 1.19
Can we bypass this x509 SAN check using DestinationRules?
The error "x509: certificate has expired or is not yet valid" usually occurs when the SSL certificate being used has expired or has not yet been activated. This error can also occur when the certificate being used is not valid for the domain or IP address that the request is being sent to.
To resolve this issue, you will need to either obtain a new valid SSL certificate or renew the existing certificate.
You can check your certificate expiration date by using the below command:
kubeadm certs check-expiration
Refer to this SO for more detailed steps.

Pre-baking machine image of Elasticsearch 8 with xpack auto security configure shutting me out with TLS errors

I installed Elasticsearch via Packer and Ansible onto a machine image on GCP. I tried running elasticsearch-reset-password -u elastic to change the password. I think I'm getting the following error because the installation was done on a different IP address (the IP of the instance Packer launches to bake the machine image vs the IP of the launched instance).
WARN org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [10.206.0.10]; the server provided a certificate with subject name [CN=packer-62d379fb-f7c3-ca0f-471a-82185776ac77], fingerprint [eb5436427cb38928b3f16994bfdb8102ac5011be], no keyUsage and extendedKeyUsage [serverAuth]; the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [IP:10.128.0.20,DNS:localhost,DNS:packer-62d379fb-f7c3-ca0f-471a-82185776ac77,IP:0:0:0:0:0:0:0:1,IP:127.0.0.1,IP:fe80:0:0:0:4001:aff:fe80:14]; the certificate is issued by [CN=Elasticsearch security auto-configuration HTTP CA]; the certificate is signed by (subject [CN=Elasticsearch security auto-configuration HTTP CA] fingerprint [63fa2023ea0d36865d838d8d3bd17c5e96f8b684] {trusted issuer}) which is self-issued; the [CN=Elasticsearch security auto-configuration HTTP CA] certificate is trusted in this ssl context ([xpack.security.http.ssl (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/http.p12, password=<non-empty>, type=PKCS12, algorithm=PKIX}})])
java.security.cert.CertificateException: No subject alternative names matching IP address 10.206.0.10 found
The IP address of the instance I'm launching from the pre-baked machine image is 10.206.0.8 instead of 10.206.0.10.
This is what I get when I test TLS:
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
Enter host password for user 'elastic':
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I would like to get this ideal path to work but I'm at a loss for what to do. Seems like an opportunity to learn but I'm just scratching my head right now.
Working solutions that I have right now are:
run ES from Docker and disable xpack when creating the container with an env variable
pre-bake an install of deb package for es7 (predating the xpack auto config)
install es8 manually on each node I launch vs pre baking a machine image that is pre-installed and pre-configured
None of those are suitable paths forward, as they either circumvent the platform's new security conventions or throw a wrench into my automated-infrastructure goals.
Can I modify the generated certs to work on a newly launched instance from a pre-baked image with a different IP address?
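What I imagine (but have not verified) is re-issuing the HTTP certificate against the auto-generated CA at first boot, with the new instance's IP in the SANs, something along these lines (sketch only: it assumes the auto-configured CA cert and key can be extracted to the placeholder paths below, it prompts for a password for the new keystore, and the Elasticsearch keystore would still need to be told that password):
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca-cert /etc/elasticsearch/certs/ca.crt \
  --ca-key /etc/elasticsearch/certs/ca.key \
  --name "$(hostname)" \
  --dns localhost,"$(hostname)" \
  --ip 127.0.0.1,10.206.0.8 \
  --out /etc/elasticsearch/certs/http.p12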

Kibana to EnterpriseSearch TLS issue

THIS IS STILL AN ISSUE, ANY HELP WOULD BE APPRECIATED
I am having an issue setting up TLS through a custom CA between Kibana and Enterprise Search. I have the default x-pack security set up for the interconnection of my Elasticsearch nodes with both Kibana and Enterprise Search, which was done according to the following docs: minimal security, basic security, ssl/tls config. I can successfully run Enterprise Search over HTTP; however, my issue arises when I enable SSL/TLS for ent-search.
When I have HTTPS configured for ent-search using this doc, the server is "running"; however, I receive an error after boot, and Kibana throws an error when attempting to connect.
ent-search error (not corresponding to Kibana's request to the ent-search hostname; this error is raised shortly after ent-search "starts successfully", but isn't fatal):
[2022-06-14T20:37:45.734+00:00][6081][4496][cron-Work::Cron::SendTelemetry][ERROR]: Exception:
Exception while performing Work::Cron::SendTelemetry.perform()!: Faraday::ClientError: PKIX path
building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid
certification path to requested target
Kibana error (directly corresponding to when I ping http://obfuscated-dns:5601/app/enterprise_search/overview)
[2022-06-14T20:43:51.772+00:00][ERROR][plugins.enterpriseSearch] Could not perform access check to
Enterprise Search: FetchError: request to https://obfuscated-dns:3002/api/ent/v2/internal/client_config
failed, reason: unable to get issuer certificate
The steps I took to generate said certificate were: I created a CSR on my server using elasticsearch-certutil csr along with a yml file that specified the distinguished name, sent the unzipped CSR to my CA (DigiCert), uploaded the signed certificate and intermediate certificate provided by DigiCert to my server, used openssl to generate a keystore from the signed cert and the private key generated alongside the original CSR, and finally converted the keystore to .jks format using keytool.
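Roughly, the keystore commands were along these lines (exact filenames and passwords are placeholders):
openssl pkcs12 -export -in signed-cert.crt -inkey private.key -name ent-search -out keystore.p12 -passout pass:pass
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass pass -destkeystore keystore.jks -deststoretype JKS -deststorepass pass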
From my understanding, the path of this keystore is what is needed in the configuration file for enterprise-search, and the intermediate cert is what is used in the Kibana certificate authority config field (ca.pem). I have also tried to stuff both the signed and intermediate cert into the same .pem, as well as the private key, signed, and intermediate cert. Below are the relevant configurations:
kibana.yml
enterpriseSearch.host: https://obfuscated-dns:3002
enterpriseSearch.ssl.verificationMode: certificate
enterpriseSearch.ssl.certificateAuthorities:
- /path/ca.pem
enterprise-search.yml
ent_search.external_url: https://obfuscated-dns:3002
ent_search.listen_host: 0.0.0.0
ent_search.listen_port: 3002
ent_search.ssl.enabled: true
ent_search.ssl.keystore.path: "/path/keystore.jks"
ent_search.ssl.keystore.password: "pass"
ent_search.ssl.keystore.key_password: "pass"
I'm starting to feel like I fundamentally misunderstand something here. A lot of the jargon behind SSL/TLS certificates seems to lack standardization. While we are at it, what is a root cert in relation to what I have listed? Is it the intermediate cert? I see there is a master "root certificate" for the DigiCert CN I certified under, but I'm unsure where this fits in. The config variable "certificateAuthorities" doesn't document what this .pem file should contain specifically, and when I search, the concept of a certificate authority is never associated with file contents but is instead simply abstracted to the entity which provides certification (duh).
To put it succinctly: What does this variable "certificateAuthorities" explicitly entail?
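For concreteness, my current (possibly wrong) mental model is that ca.pem should just be the PEM-encoded issuing chain, i.e. the intermediate and root certificates concatenated, not the leaf certificate (filenames are placeholders):
cat digicert-intermediate.pem digicert-root.pem > /path/ca.pem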
UPDATE 09/28/2022
I have now confirmed that SSL is working when calling enterprise-search from outside the VM it's running in. I can use its endpoint with my Flutter and React apps; however, Kibana is still throwing the error mentioned above. I have checked that the root/intermediate CA provided to Kibana's configuration is indeed the certificate linked with the signed cert provided to Enterprise Search, and even confirmed so using SSLPoke. This leaves me with the suspicion that perhaps Java is a bad actor in the mix? I've added the root/intermediate CA to the cacerts keystore in the ssl/java directory of the Linux VM, but still no luck. Any thoughts?
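For reference, the kind of keytool import I mean (the cacerts path depends on the distro/JVM, the default cacerts password is "changeit", and the alias and filename are placeholders):
keytool -importcert -alias digicert-intermediate -file digicert-intermediate.pem -keystore /etc/ssl/certs/java/cacerts -storepass changeit -noprompt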

Identifying which certificate is needed in order to perform https post using Oracle utl_http

Short story
I'm trying to send a POST request from a PL/SQL script using the utl_http utility in Oracle. I've been able to send the request using HTTP, but not HTTPS. I've added what I thought were the necessary certificates to an Oracle Wallet, and I believe they are being imported and used (but in all honesty, this is a little hard to verify). My current assumption is that calls from our DB server are passing through a proxy server, and that this is somehow messing up some part of the HTTPS / certificate functionality.
Supporting evidence (possibly?): I tried to make calls (POST requests) to a dummy service at webhook.site. Again, I got this working with http, but not https - the latter results in a cert validation error.
I then tried to replicate the behavior using postman, and that basically produces the same result, unless I fiddle around with the settings:
Initial Postman result:
Could not get any response
There was an error connecting to https://webhook.site/950...
Disabling SSL verification
Under the Postman settings, I turned off SSL Certificate Verification and tried again. This time, I got a 200 OK response and confirmed that the webhook received the POST request fine.
It seems clear that the error is due to a missing cert, but I can't figure out which, or how to configure it. My assumption is that if I can get this to work for a webhook-url from Postman (without disabling cert verification), then I should also be able to get it to work from PL/SQL later.
When I look at the webhook site in a browser and inspect the certs, the webhook cert is the lowest cert (leaf node?). Above it there is one intermediate cert related to the company I'm working for, and then a root cert also related to the company. The root node is named something like "Company Proxy Server CA", so I'm assuming the proxy somehow manipulates my requests and inserts its own cert here.
I've tried downloading all of these certs and importing them into my cert store, as well as importing them under the Postman settings (under Certificates) in various combinations, but nothing seems to make any difference; all attempts at posting with HTTPS produces the following error in my Postman Console:
POST https://webhook.site/9505...
Error: unable to verify the first certificate
Any ideas about how to resolve this, or at least obtain more information about what to do would be greatly appreciated.
Switching OFF "SSL Certificate Verification" in Postman only means that it (i.e. Postman) will not check the validity of SSL certificates while making a request. Meaning that it will just send the certificates as they are. Because your connection fails if the setting in ON, this means Postman cannot verify the validity of your certificates.
This is most likely the case with the actual service you're trying to POST to, they cannot verify the certificates. Is that service outside your company network? And is it a public one or one owned by your company? Where is that service hosted? What certificate do they need?
BTW, TLS client certificates are sent as part of establishing the SSL connection, not as part of the HTTP request. The TLS handshake (and exchange/validation of client and server certificates) happens before any HTTP message is sent.
I'm thinking this might be a blocked port issue.
You said... ""Company Proxy Server CA" - So I'm assuming the proxy somehow manipulates my requests and inserts it's own cert here."
That means your client software needs your Company Proxy Server CA in its trusted certificates list. If that client's trusted list is that of the Oracle wallet...
https://knowledge.digicert.com/solution/SO979.html
This talks about how to do that.
Also, if the system running Postman uses a non-Oracle trust store (probably the operating system's), you'll have to do something like adding the trust to your account on the workstation
https://www.thewindowsclub.com/manage-trusted-root-certificates-windows
in order to have the proxy server certificate trusted.
Once the certificate you're making the connection with has a root of trust per the effective configuration of the client being used, then you'll be able to verify the certificate.
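For the Oracle side, adding the proxy CA to the wallet usually looks something like this (the wallet path, password, and certificate filename are placeholders):
orapki wallet add -wallet /u01/app/oracle/wallet -trusted_cert -cert company_proxy_ca.cer -pwd WalletPassword
and then, in the PL/SQL session, point utl_http at that wallet before making the request:
utl_http.set_wallet('file:/u01/app/oracle/wallet', 'WalletPassword');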
A couple of possible issues:
The server doesn't actually support HTTPS. Connect a browser to the URL that you POST to, and see if you receive a response. (It looks like you already did this, but I'm documenting it for completeness.)
The server uses the Server Name Indication (SNI) extension to determine what certificate chain to send back, but your POSTing client doesn't send that extension. You can identify this case by looking up the IP for the host you're POSTing to, then going to https://nnn.nnn.nnn.nnn/ (obviously use the IP here, instead of the literal string 'nnn.nnn.nnn.nnn') in your browser, and checking the certificate chain it returns. If it is not the same as you get from step 1, this is your problem, and you need to figure out how to either get SNI support in your Oracle PL/SQL client or get the POST endpoint exposed on that hostname. (alternatively, you might be able to use these certificates to prime your Oracle Wallet, but they might have an issue with the hostname in the certificate not matching the hostname you connect to.)
You have a proxy in the way. I don't think this is what's going on, since that would basically only cause problems if you were doing client-side certificate authentication. (If this is the problem or is a condition, you need to import those certificates into your trusted wallet; you also need to ensure that the server you're posting from is going through the same proxy. Otherwise, you need to ensure that the certificate authority for the proxy that the machine actually running the code sees is in the wallet. This may require the assistance of the system/network administrators who run that machine and its connection to the network.)
HTTPS is a finicky beast. Many, many things must work exactly correctly for TLS connections to work and the certificates to correctly verify (the TLS port must respond, the client and server must agree to speak the same version of TLS, the client and server must agree to use the same cipher combination, the certificate chain presented by the server must be issued by a CA the client recognizes, and the leaf certificate in that chain must certify the name client requested).
SNI is needed to support multiple names on a single host without messing with the certificates of other names on the same host. Unfortunately, SNI is one of those things that has been standardized for over a decade (RFC 3546), but much enterprise-grade software still hasn't implemented it.
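A quick way to see whether SNI matters here is to compare what the server presents with and without it (needs a reasonably recent OpenSSL; the hostname is the one from the question):
openssl s_client -connect webhook.site:443 -servername webhook.site </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
openssl s_client -connect webhook.site:443 -noservername </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
If the two results differ, the server is relying on SNI to pick its certificate chain.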

Telegram Bot SSL Error

So I have made a small script on my website for my Telegram bot. The only problem is that if I set my URL as the webhook for the bot, it gives an SSL error.
I also tried adding a self-signed certificate, so has_custom_certificate turned to true, but the same error appeared.
What am I doing wrong?
You have to create a self-signed certificate for deploying your server over https. If you are using flask you can follow this nice tutorial - https://blog.miguelgrinberg.com/post/running-your-flask-application-over-https
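For example, a self-signed certificate can be generated with OpenSSL; the CN must match the domain the webhook URL points at (YOURDOMAIN.EXAMPLE and the subject fields are placeholders):
openssl req -newkey rsa:2048 -sha256 -nodes -keyout private.key -x509 -days 365 -out public.pem -subj "/C=US/ST=State/L=City/O=Org/CN=YOURDOMAIN.EXAMPLE"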
The problem is with your certificate.
The error in your getWebHookInfo:
"last_error_message":"SSL error {337047686, error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed}"
is Telegram saying that it needs the whole certificate chain (also called a CA bundle or full chained certificate).
How to check your certificate:
You can use the SSL Labs SSL Server Test service to check your certificate:
Just pass your URL like the following example, replacing coderade.github.io with your host:
https://www.ssllabs.com/ssltest/analyze.html?d=coderade.github.io&hideResults=on&latest
If you see "Chain issues: Incomplete" you do not serve a full chained certificate.
How to fix:
You need to add all three needed files (.key, .crt, and .ca-bundle). Namecheap has very good documentation on how to install an SSL certificate on your site in many different ways (Apache, Node.js, Nginx, etc.). Please check whether you can follow one of the available ways: Namecheap - How to Install SSL certificates
Anyway, you need to download the full chained certificate from your SSL certificate provider and install it on your web server.
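A typical way to build that full chained certificate by hand is simply to concatenate the files (filenames are placeholders; the leaf certificate goes first, then the bundle):
cat my-service.crt my-service.ca-bundle > my-service-chained.crt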
I don't know which server you are using, but in my case, with gunicorn, I solved it by adding the ca-bundle file sent by my SSL certificate provider (in my case Namecheap/Comodo) as ca_certs in my SSL configuration, like the following example:
ca_certs = "cert/my-service.ca-bundle"
For further information: #martini answer on this thread and the FIX: Telegram Webhooks Not Working post.

Resources