Yes, I need to downgrade to SHA-1 to add compatibility with older browsers in a project.
Is there a way to do this?
I'm using CentOS 6.5 and Apache/2.2.15.
I have 3 files:
SSLCertificateFile /etc/ssl/certs/portal.crt
SSLCertificateKeyFile /etc/ssl/certs/portal.key
SSLCertificateChainFile /etc/ssl/certs/gd_bundle-g2-g1.crt
Thanks a lot for any support!
You need to re-issue the certificate yourself (if it's self-signed), or request a re-issue from the CA (if it was obtained from one).
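For the self-signed case, a minimal sketch of re-issuing with a SHA-1 signature, reusing the key and certificate paths from the question (the subject and validity period are placeholders):

# Re-create the self-signed certificate with a SHA-1 signature, reusing the existing key
openssl req -x509 -new -sha1 -key /etc/ssl/certs/portal.key \
    -out /etc/ssl/certs/portal.crt -days 365 -subj "/CN=portal.example.com"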
I have a .cer and a .p7b certificate. I tried to apply them in my SSL config file, but Apache suddenly crashed, so I restored the previous settings (which use ca.crt) and Apache returned to active status.
Question 1: Do I need to convert my .cer file to .crt before applying it in my SSL config?
Question 2: How do I convert the files and make them compatible with my server/SSL setup?
If you need more details, let me know. Thank you!
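For reference, a .cer file is often just a DER- or PEM-encoded certificate and a .p7b is a PKCS#7 bundle; both can usually be turned into PEM .crt files with openssl, roughly like this (file names are placeholders, and if the .cer is already PEM the first command isn't needed):

# DER-encoded .cer -> PEM .crt
openssl x509 -inform DER -in certificate.cer -out certificate.crt
# PKCS#7 .p7b -> PEM bundle of the certificates it contains (add -inform DER if the .p7b is DER-encoded)
openssl pkcs7 -print_certs -in certificate.p7b -out chain.crt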
I'm stuck on how to fix this SSL error --
My SSL certs work fine in Chrome, but in Safari and Firefox I get a host name mismatch error if I go to www.domain.com instead of just domain.com.
I've set up SSL certificates using Certbot for my domain, for both domain.com and www.domain.com.
To check on nginx that the certificates exist, I run sudo certbot --nginx and select both domains when asked "Which names would you like to activate HTTPS for?". For both domain.com and www.domain.com I get the result "You have an existing certificate that has exactly the same domains or certificate name you requested and isn't close to expiry", and it asks if I'd like to attempt to reinstall, or to renew and replace the cert.
I'm not sure what other steps I can take; last time I installed certbot I simply followed the instructions, did the above for both www and non-www addresses, and it just worked for both!
Does anyone have any suggestions what to do next?
TLDR:
domain.com: works fine in firefox/safari, nginx says cert exists
www.domain.com: host name mismatch in firefox/safari, nginx says cert exists
why?!
After messing with it for a while and trying #xyz's SSL checker, I figured out the following things:
Both certs were valid.
When I re-installed the certs using certbot, the most recent cert would start working and the previous one would stop working.
It turned out that I needed to add the other URL to the existing cert as an additional domain, and that fixed it!
I used:
sudo certbot -d domain.com -d www.domain.com
and that did the trick
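To confirm afterwards which names a certificate actually covers, certbot can list its managed certificates and their domains:

sudo certbot certificates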
You can check both domains from an external service, e.g. here:
https://www.sslshopper.com/ssl-checker.html
It will tell you if the certificate is correctly installed on both.
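A command-line alternative is to look at which names the certificate served for each host actually contains, for example with openssl (using www.domain.com from the question):

# Print the SAN list of the certificate presented for www.domain.com
openssl s_client -connect www.domain.com:443 -servername www.domain.com </dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"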
You should also open a new tab in Chrome, open developer tools, record network requests, then go to www.domain.com and see what redirects Chrome makes and what URLs it actually makes requests to. Maybe it has some automatic URL rewriting based on previously successfully resolved URLs.
I'm having trouble enabling TLS 1.2 connections on a Windows platform (the environment has both Windows 2008 and Windows 10 machines). Currently, my private keys are managed by the Windows certificate store, using the CAPI engineId within stunnel (v 5.41), which uses OpenSSL 1.0.2k-fips. Because of this, stunnel can only negotiate a TLS 1.1 connection (SSLv2 and SSLv3/TLS1 are disabled for obvious reasons).
I've tried compiling OpenSSL 1.1.0f and stunnel 5.41 myself, but had no luck either cross-compiling under CentOS or building under Windows with MSYS2/MinGW32 or Cygwin.
I am specifically looking for a way to manage the pfx/p12 (private key) in stunnel without resorting to the Windows certificate store. I found an example of how to configure stunnel to use capi, which worked beautifully, but because the capi engine with OpenSSL 1.0.2 doesn't support the ciphers used in TLS 1.2, only TLS 1.1 works. I need TLS 1.2.
https://www.stunnel.org/pipermail/stunnel-users/2017-February/005720.html documents why I can't use TLS 1.2 with OpenSSL 1.0.2.
OpenSSL 1.0.2 is what is built into stunnel 5.41. Recompiling didn't work. I'm specifically looking for how to configure stunnel to point at a pkcs12 key.
Solution based on dave_thompson_085's comment:
The solution was simply to put the location of the .p12 file in the cert option, and not to include engineId or key at all. Don't worry about the password; stunnel will prompt for it.
I kept thinking that I needed to set an engine, as with pkcs11 or capi.
I.e., I was overthinking it and completely missed the obvious.
An example of the snippet that worked for me is below. (Everything above it was left at the defaults, except sslOptions, which was set to sslOptions=TLS1.2.)
[https-test-services]
client = yes
accept = 127.0.0.1:7000
connect = hostname.of.remote.server:8443
verifyChain = yes
CAfile = ca-certs.pem
; the PKCS#12 file provides both certificate and key; no engineId or key option is needed
cert = C:\Location\To\certandkey.p12
checkHost = hostname.of.remote.server
OCSPaia = yes
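If the certificate and key are still in separate PEM files, they can be combined into a PKCS#12 container for stunnel with openssl; a rough sketch with placeholder file names:

# Bundle certificate and private key into one .p12 file (prompts for an export password)
openssl pkcs12 -export -in client.crt -inkey client.key -out certandkey.p12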
I have the settings to switch Mac OS X Server from TLS 1.0 to TLS 1.1, but I do not know which file to add them to.
"
SSLProtocol -all -SSLv2 -SSLv3 -TLSv1 +TLSv1.1 +TLSv1.2
SSLHonorCipherOrder on
SSLCompression Off
SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-SSLv3:-EXP:!kEDH
"
What file(s) do I need to add these settings to?
Update to the latest Server app version 5.3. It supports TLS v1.2 and has dropped TLS 1.0, so it should give you all you need.
For a discussion of the relevant config files, also see my question tlsv1 alert protocol version when connecting via SSL to OS X Server, although it does not give an ultimate answer to which of the files are actually used.
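Whichever file turns out to be the active one on your Server.app version, the directives from the question belong inside the SSL virtual host (or the global SSL configuration); a generic Apache sketch, with the host name as a placeholder:

<VirtualHost *:443>
    ServerName server.example.com
    SSLEngine on
    SSLProtocol -all -SSLv2 -SSLv3 -TLSv1 +TLSv1.1 +TLSv1.2
    SSLHonorCipherOrder on
    SSLCompression Off
    SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-SSLv3:-EXP:!kEDH
</VirtualHost>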
I made my own git server on a centos distribution.
I can contact the server via the git protocol from home. But when I try to access it via HTTPS from the office I get:
Cloning into /Users/vito/Documents/... error:
error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112) while
accessing https://gitolite#myserverxyz.com/vitorepo.git/info/refs
fatal: HTTP request failed
Where is the problem? On my server or on my office-mac?
I got the exact same response from curl when trying to connect with an Ubuntu instance running OpenSSL 1.0.0e. I successfully resolved the problem by adding the --sslv3 flag to the curl command.
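For reference, a quick way to test this against the server (host and path adapted from the question; note that --sslv3 is deprecated or removed in modern curl builds):

# Force SSLv3 for a single request to see whether the handshake succeeds
curl -v --sslv3 https://myserverxyz.com/vitorepo.git/info/refs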
It seems to be a compatibility problem between an older OpenSSL version (0.9.8) acting as the client and a recent OpenSSL version (1.0.0) acting as the server, combined with some specific options used by curl on the client side and Apache on the server side.
It's probably due to some recent security fix in OpenSSL (probably the one against protocol downgrade attacks).
Try upgrading the OpenSSL library version on the client side to 1.0.0.
See:
https://sourceforge.net/tracker/?func=detail&atid=100976&aid=3395520&group_id=976
In case anyone has this issue with XMLRPC.
Daniel's answer (forcing SSL version 3) solved the issue for me: just specify XMLRPC_SSLVERSION_SSLv3 in the clientXmlTransport_curl options (C++).
The problem began when we upgraded our server to OpenSSL version 1.0.1-4ubuntu5.5 and the clients were still running 0.9.8o-5ubuntu1.7.
I believe this is a host-name matching issue on the server. Error 1112 is SSL_R_TLSV1_UNRECOGNIZED_NAME, and comes from an SNI name mismatch (info on SNI). I was having the same issue in curl.
For me, the workaround was to make sure the name I used on the client matched one of the ServerName or ServerAlias configurations on the server. Of course, these directives are for an Apache server; I don't know what you need to do for a git server. But I suspect the server names you're using from home and work are different, and the home name is the canonical name the git server is using (and therefore SNI is working).
The 'real' fix will probably take a client change in git to allow a way to ignore the name-mismatch warning (the way your browser already does).
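For context, the ServerName and ServerAlias directives referred to above live in the Apache virtual host, roughly like this (host names are placeholders):

<VirtualHost *:443>
    ServerName myserverxyz.com
    ServerAlias git-office.example.com
    # ... SSL and git repository configuration ...
</VirtualHost>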
Not sure if I had exactly the same problem, but the error message was the same. It only seemed to be happening on the Ubuntu box I set up a git server on; for some reason the CentOS box with a git server set up on it was fine.
I only just solved it after 3 or 4 days. It turns out to be because git's underlying curl library has a broken keep-alive implementation (I ended up dumping HTTP traffic and verifying the behaviour by hand).
In a nutshell, curl (at least the version used inside every Git implementation I could find, including command-line git and Eclipse's EGit) doesn't seem to correctly interpret the Connection response header, or more precisely, doesn't seem to correctly interpret the absence of it.
To fix the problem you need to configure the SSL virtual host in the Apache instance that is serving your git repository with an extra directive specifically for git. Add this line just before the </VirtualHost>.
BrowserMatch "git" nokeepalive ssl-unclean-shutdown
Unfortunately you can't tell Apache to just downgrade to HTTP/1.0 (which would be cleaner) because curl can't handle that, but you can tell it to force a Connection: close on every request, which curl does know how to handle.
In a misleading coincidence, if you try to test curl directly without this change it will seem to work, because it makes a single request and then aborts. Only by getting curl to execute two requests on the same keep-alive connection over SSL will this problem become apparent.
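One way to make curl issue two requests on the same keep-alive connection is to give it two URLs on the same host in a single invocation (URL adapted from the question):

# Two requests in one invocation reuse the connection, which is what
# triggers the failure when the workaround above is not in place
curl -v https://myserverxyz.com/vitorepo.git/info/refs https://myserverxyz.com/vitorepo.git/info/refs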
I had the same error. The root cause seems to be an incompatibility between the client and server OpenSSL versions.
I upgraded my server with apt-get upgrade openssl and upgraded my Windows git installation.
The combination of the Windows git client (git version 1.9.4.msysgit.0, which contains OpenSSL 0.9.8e 23 Feb 2007) and a server with OpenSSL 1.0.1c 10 May 2012 seems to work fine together.