Self-Signed Certificate - macOS

I'm trying to get a cert to work with a dev URL on my local machine.
I've generated a self-signed cert using keytool and have it hooked up to JBoss. In Chrome I can click on the lock with the x in it to view the cert details.
I downloaded the cert, added it to the System keychain, and set the trust level to Always Trust, as per the directions in Getting Chrome to accept self-signed localhost certificate. Then I loaded the page (I even restarted the browser, followed by a system reboot, to make sure everything was picked up).
I still see the lock with the red x in Chrome for my dev URL, 127.0.0.1, and localhost. What am I doing wrong in getting Chrome to trust the site for localhost? And the real question that follows: do I need to do anything special to get it to work for my dev URL?
My hosts file has the dev URL and localhost resolving to 127.0.0.1. With real certs I know the domain has to be specified, which makes me wonder whether I need to do anything special for the custom dev URL.

I finally figured out my issue and am posting the answer for anyone else who runs into the same problem. I also posted the answer in the referenced question.
The referenced question has an answer suggested by bjnord: Google Chrome, Mac OS X and Self-Signed SSL Certificates. That blog did not solve the problem directly; however, there was a comment on the blog that was gold:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain site.crt
You pretty much have to follow the directions in the blog to get the cert, then use the command above to install it properly.
I also found that with the Java keytool, when you are prompted for your first and last name, that value acts as the CN, so you enter your URL there instead. After doing this, everything worked fine with the custom dev URL.
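For example, a keytool command along these lines should generate a keystore whose certificate uses the dev URL as the CN and also lists it as a SAN (the alias, hostname, and keystore name below are placeholders, not values from the original setup):
keytool -genkeypair -alias devsite -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=dev.example.local" \
  -ext "SAN=dns:dev.example.local,dns:localhost,ip:127.0.0.1" \
  -keystore keystore.jks
Recent Chrome versions validate the subjectAltName rather than the CN, so including the SAN entry avoids a name-mismatch error even when the CN is set correctly.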

Related

How to set up TLS certificates for a Windows gitlab-runner?

I've been trying to use this documentation as a guide, but I'm having no luck setting up a gitlab-runner on Windows. It correctly polls for jobs, but when it tries to pull artifacts it returns an x509: certificate signed by unknown authority error.
Can anyone step through how to generate the proper certificate and attach it to the Windows gitlab-runner in order to get things to work?
I've tried generating certificates using openssl and setting the --tls-ca-file flag but so far, it hasn't helped.
I got this working finally using this as a reference.
The basic idea, when you're not hosting your own gitlab server, is to pull the certificate from gitlab.com. From your browser, click on the little lock symbol next to the https://gitlab.com URL and download the certificate. From Safari, it's just dragging the little certificate image over to your Desktop.
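If you'd rather do it from a command line, something along these lines should also pull the certificate (a sketch, assuming openssl is available, e.g. via Git Bash on the Windows box):
openssl s_client -connect gitlab.com:443 -servername gitlab.com -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > gitlab.com.crt
This extracts the first (leaf) certificate; if the runner still complains about the chain, save the intermediate certificates from the -showcerts output into the same file as well.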
Once you have the cert, store it in your Gitlab-Runner folder and reference it with the tls-ca-file parameter in your config.toml.
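For reference, the corresponding section of config.toml might look roughly like this (the runner name, token, executor, and path are placeholders, not values from the original setup):
[[runners]]
  name = "windows-runner"
  url = "https://gitlab.com"
  token = "YOUR_RUNNER_TOKEN"
  executor = "shell"
  tls-ca-file = 'C:\Gitlab-Runner\certs\gitlab.com.crt'
Restart the runner service afterwards (e.g. gitlab-runner restart) so the new setting is picked up.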

How to configure Apache NiFi for HTTPS

1) HTTP and no DNS (working)
I was using NiFi over HTTP, accessing it via the IP or the server name itself, and everything was working fine.
2) HTTPS (self-signed) and no DNS (working)
I did the HTTPS setup using the toolkit and it worked, although Chrome kept showing the red warning, which was expected. But at least things worked.
3) DNS (internal) + signed certificate (external authority, Symantec)
DNS works fine, as I am able to ping the box using the DNS name, and I also added this name to the etc/hosts file.
Even though NiFi is internal to the org, I still went ahead and bought a certificate, and the common name I used was the DNS name of my server.
The certificate I got was a chained certificate:
my_dns > TrustedSecureCertificateAuthority5 > USERTrustRSAAddTrustCA > AddTrustExternalCARoot
I created a JKS, added all of them to it, also added a key pair to the JKS, and appended all the certificates to the key pair as well.
Then I changed nifi.properties and used the same JKS as both trust-store and key-store.
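For reference, these are the nifi.properties entries I mean (paths and passwords below are placeholders; keystore and truststore point at the same JKS):
nifi.security.keystore=./conf/my_dns.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=changeit
nifi.security.keyPasswd=changeit
nifi.security.truststore=./conf/my_dns.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit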
Now if I use NiFi with the new DNS name and HTTPS, I get "ERR_SSL_VERSION_OR_CIPHER_MISMATCH" in Chrome. In IE I get a TLS error, so I don't think there is something wrong with either the certificate or the browser.
If I give the URL as https://server_name:9447/nifi, NiFi opens up, but it still shows the red warning; this time it is not for a self-signed certificate but for the name not matching. That confirms the NiFi web server has access to my new JKS and reads it... but then why is it not working?
What am I missing here? Can NiFi run with an externally bought certificate, or does it always have to work with a self-signed certificate?
If you are running NiFi with an externally issued certificate, please share your configuration.
Do I still have to use the toolkit? Or does the toolkit do the same thing I did by buying the certificate? If so, what am I missing here?
So here is how I resolved it.
After nothing else worked, I went back to the HTTPS setup of NiFi, where NiFi generates the keystore and truststore JKS files.
I then downloaded both and edited them: I removed all the previous (self-signed) certificates and added my CA certificate chain.
Then I simply uploaded them back and restarted NiFi.
NiFi is now on HTTPS.
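In keytool terms, the edit was roughly the following; the aliases and file names are examples rather than the exact ones I used:
# truststore: remove the old self-signed entry and import the CA chain
keytool -delete -alias nifi-cert -keystore truststore.jks
keytool -importcert -trustcacerts -alias addtrust-root -file AddTrustExternalCARoot.crt -keystore truststore.jks
keytool -importcert -trustcacerts -alias usertrust-ca -file USERTrustRSAAddTrustCA.crt -keystore truststore.jks
keytool -importcert -trustcacerts -alias trusted-secure-ca5 -file TrustedSecureCertificateAuthority5.crt -keystore truststore.jks
# keystore: import the CA-signed certificate (with its chain) onto the existing private key alias
keytool -importcert -alias nifi-key -file my_dns_chain.crt -keystore keystore.jks
The keystore and truststore passwords have to match what is configured in nifi.properties, otherwise NiFi will fail to start.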

Having trouble authenticating in Drush with SSL

My Drupal site is hosted at Pantheon (getpantheon.com), and I'm using Drush on a Windows 7 x64 machine. I was reading this article on commands using Drush + Terminus (a special Drush extension for Pantheon sites):
https://www.getpantheon.com/blog/five-steps-feeling-drush
I want to be able to use both Drush and Terminus to quickly and efficiently manage my Pantheon Drupal sites.
I installed Terminus fine and was able to issue all the Drush-related commands and connect to the server. However, when I got to the part about using 'pauth' to authenticate and use the actual Terminus commands, my authentication was successful, but then at the point where it's supposed to say 'Success!' :) it says instead:
Dashboard unavailable: SSL certificate problem: unable to get local issuer certificate
Pantheon told me:
This is due to Windows not bundling an Internet-friendly set of Certificate Authority (CA) certs with curl. Check Stack Overflow or the like for a bunch of solutions
Any suggestions on how to proceed? I'm not familiar with cURL at all, so something basic would be great, thanks.
Still new here...figuring this out. I should have done more research :p I found the answer here:
AWS SSL security error : [curl] 60: SSL certificate prob...: unable to get local issuer certificate
Once I'd downloaded the .pem file, saved it in a directory, and referenced it from php.ini, I was good to go.
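Concretely, the php.ini change is a single directive pointing at wherever you saved the bundle (the path below is just an example):
curl.cainfo = "C:\php\extras\ssl\cacert.pem"
Note that Drush uses the CLI php.ini, which may be a different file from the one your web server loads, so make sure you edit the right one.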

certificate working on IP but not on URL

I have a problem accessing my site (on https) with IEMobile 9 (WP 7.5).
It says it has a problem with the certificate, as if it weren't valid. Everything works on every other browser or platform I tested: Android (several phones and a Galaxy Tab with the stock browser, Firefox, Opera, Dolphin), iOS (iPhone and iPad with Safari and Chrome), an old Nokia with Symbian, Windows 7, Linux, and Mac.
To try to solve this I saved the certificate (.cer) on the server and accessed it from the phone browser. It always complained except when I accessed it through the server IP (192.168.xx.xx). At that point it (said it) installed the certificate correctly. If I then try to access index.html still using the IP, everything works fine and it doesn't complain about the certificate. If, though, I try to access the index using the actual URL (blah.myblah.com), it complains again about the certificate, as if it weren't installed!
It isn't a DNS problem, because DNS is up and serving the right IP, and the phone is correctly set up to use it.
The certificate is signed by GeoTrust/RapidSSL for *.myblah.com.
That's normal: certificates are issued to a particular host + domain name. Basically, SSL's validation code will have something like
if (requested host name != certificate issued hostname) {
    issue security alert
}
so you're doing
if (192.168.xx.xx != example.com) {
    issue security alert
}
and get the security warning.
I have had issues with certificates related to how some HTTP-over-TLS implementations look for the SubjectAltName (SAN). RFC 2818 states that, if the hostname is a DNS name, implementations must check it against the subjectAltName extension, looking for a DNS entry that matches the host. Only when there is no subjectAltName is the CommonName used.
If the hostname is an IP, the certificate must contain a subjectAltName IP entry matching that IP.
Also note that wildcard certificates are discouraged by the newer RFC 6125, so maybe Windows Phone is already enforcing this, although I might be wrong.
My first step would be to check the SAN portion of the certificate and make sure it has a DNS entry matching your site's host.
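A quick way to check that is to dump the certificate's SAN block with openssl (a sketch; substitute your real hostname for blah.myblah.com):
openssl s_client -connect blah.myblah.com:443 -servername blah.myblah.com </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
A wildcard certificate for *.myblah.com should show up there as a DNS:*.myblah.com entry.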

Google Chrome doesn't trust mitmproxy's certificates

I'm running mitmdump (from mitmproxy) on my Macbook Pro, and I'm connecting to the proxy through my Windows desktop PC.
However, Chrome (running on the PC) refuses to connect to so many sites because of the invalid certificates which mitmproxy provides.
Chrome throws the error: NET::ERR_CERT_AUTHORITY_INVALID
Here's what mitmdump shows (output screenshot not reproduced here).
But why? What's wrong with mitmproxy's certificates? Why can't it just send back Google's as if nothing happened?
I'd like to know how I can fix this and make (force) my desktop PC to connect to any website through my MacBook's mitmproxy.
Answering this question for people who may find this important now. To get the proxy working, you have to add the certificate as trusted in your browser.
For windows follow this: https://www.nullalo.com/en/chrome-how-to-install-self-signed-ssl-certificates/2/
For linux follow this: https://dev.to/suntong/using-squid-to-proxy-ssl-sites-nj3
For Mac-os follow this: https://www.andrewconnell.com/blog/updated-creating-and-trusting-self-signed-certs-on-macos-and-chrome/#add-certificate-to-trusted-root-authority
There are some additional details in the above links; tl;dr: import the certificate via the chrome://settings page and add it as trusted. That should do it.
This will make your browser trust your self-signed certificate (mitmproxy's auto-generated certificates too).
mitmproxy's default certificates live in the ~/.mitmproxy/ directory.
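To trust that CA outside the browser UI, something like the following should work; the file names are mitmproxy's defaults, and the macOS command mirrors the one quoted earlier on this page:
# macOS: add the mitmproxy CA to the System keychain as trusted
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ~/.mitmproxy/mitmproxy-ca-cert.pem
# Windows: copy the .cer over first, then add it to the Trusted Root store (elevated prompt)
certutil -addstore Root mitmproxy-ca-cert.cer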
Per the Getting Started page of the docs, you add the CA by going to http://mitm.it while mitmproxy is running and selecting the operating system you are using. This should solve your problem and allow HTTPS sites to work with mitmproxy.
This is the expected behavior.
mitmproxy performs a man-in-the-middle attack on HTTPS connections by presenting on-the-fly generated fake certificates to the client, while it keeps communicating with the server over a fully encrypted connection using the real certificates.
This way the communication between client and proxy can be decrypted, but the client has to actively approve the use of those fake certificates.
If that weren't the case, SSL would be broken, which it isn't.
The whole story is very well explained here:
http://docs.mitmproxy.org/en/stable/howmitmproxy.html
