I have a working client and a server which, once finished, will have a valid SSL certificate.
At the moment, for testing purposes, I'm simply disabling SSL certificate verification in the client by adding the INTERNET_FLAG_IGNORE_CERT_CN_INVALID and SECURITY_FLAG_IGNORE_UNKNOWN_CA flags, like this:
HINTERNET hRequest = HttpOpenRequest(hConnection,
    "GET", "index.html",
    NULL, NULL, NULL,
    INTERNET_FLAG_RELOAD |
    INTERNET_FLAG_EXISTING_CONNECT
#ifdef __HTTPS__
    | INTERNET_FLAG_SECURE | INTERNET_FLAG_IGNORE_CERT_CN_INVALID | SECURITY_FLAG_IGNORE_UNKNOWN_CA
#endif
    ,
    dwContext);
I got this to work perfectly on my laptop. Now I'm trying to use it on my PC with exactly the same code: I copied and pasted the files (both machines run Visual Studio Professional 2008), but now I'm getting error 12045, which means "Invalid Certificate Authority".
I tried disabling Windows Firewall, but that didn't help. Both computers are connected to the same router.
Any ideas what could cause this?
Thanks!
EDIT
Basically what happens is the same as what Microsoft describes here, except that the article is for Windows CE.
Does this help?
If a server SSL certificate is issued by an unknown or invalid certificate authority, the WinInet HttpSendRequest API or MFC CInternetFile::SendRequest will fail with error 12045 (ERROR_INTERNET_INVALID_CA).
I think you should call InternetSetOption() on the request before sending it, like this, because SECURITY_FLAG_IGNORE_UNKNOWN_CA is not one of the flags HttpOpenRequest() understands; it has to be applied through INTERNET_OPTION_SECURITY_FLAGS:
HINTERNET hRequest = HttpOpenRequest(hConnection, ...

#ifdef __HTTPS__
    DWORD dwFlags;
    DWORD dwBuffLen = sizeof(dwFlags);
    InternetQueryOption(hRequest, INTERNET_OPTION_SECURITY_FLAGS, (LPVOID)&dwFlags, &dwBuffLen);
    dwFlags |= SECURITY_FLAG_IGNORE_UNKNOWN_CA;
    InternetSetOption(hRequest, INTERNET_OPTION_SECURITY_FLAGS, &dwFlags, sizeof(dwFlags));
#endif

HttpSendRequest(hRequest, ...
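To make the sequence concrete, here is a minimal, test-only sketch of the whole flow with the option applied before the send (assuming a non-Unicode build like the code in the question; the host name and page are placeholders, and the retry after a 12045 failure is an extra step of mine, not something from the original code):

#include <windows.h>
#include <wininet.h>
#include <stdio.h>
#pragma comment(lib, "wininet.lib")

int main()
{
    // Error checks on the handles are omitted for brevity; host and page are placeholders.
    HINTERNET hSession = InternetOpen("test-client", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    HINTERNET hConnection = InternetConnect(hSession, "server.example.com",
        INTERNET_DEFAULT_HTTPS_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    HINTERNET hRequest = HttpOpenRequest(hConnection, "GET", "index.html", NULL, NULL, NULL,
        INTERNET_FLAG_RELOAD | INTERNET_FLAG_SECURE, 0);

    // Relax certificate checks for testing only, via INTERNET_OPTION_SECURITY_FLAGS.
    DWORD dwFlags = 0;
    DWORD dwBuffLen = sizeof(dwFlags);
    InternetQueryOption(hRequest, INTERNET_OPTION_SECURITY_FLAGS, &dwFlags, &dwBuffLen);
    dwFlags |= SECURITY_FLAG_IGNORE_UNKNOWN_CA | SECURITY_FLAG_IGNORE_CERT_CN_INVALID;
    InternetSetOption(hRequest, INTERNET_OPTION_SECURITY_FLAGS, &dwFlags, sizeof(dwFlags));

    if (!HttpSendRequest(hRequest, NULL, 0, NULL, 0))
    {
        DWORD err = GetLastError();
        if (err == ERROR_INTERNET_INVALID_CA) // 12045
        {
            // If the send still fails with 12045, re-apply the flag and retry once.
            InternetQueryOption(hRequest, INTERNET_OPTION_SECURITY_FLAGS, &dwFlags, &dwBuffLen);
            dwFlags |= SECURITY_FLAG_IGNORE_UNKNOWN_CA;
            InternetSetOption(hRequest, INTERNET_OPTION_SECURITY_FLAGS, &dwFlags, sizeof(dwFlags));
            if (!HttpSendRequest(hRequest, NULL, 0, NULL, 0))
                printf("Request failed, error %lu\n", GetLastError());
        }
        else
        {
            printf("Request failed, error %lu\n", err);
        }
    }

    InternetCloseHandle(hRequest);
    InternetCloseHandle(hConnection);
    InternetCloseHandle(hSession);
    return 0;
}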
Client environment: Xamarin.Android using the native TLS 1.2 SSL/TLS implementation (BoringSSL, a.k.a. btls) and System.Net.WebSockets.ClientWebSocket, running on an Android 7.0 device. Visual Studio 2017 15.8.1, Xamarin.Android 9.0.0.18.
Server environment: Windows .NET 4.7 running Fleck (a WebSocket server) configured with TLS 1.2, using a certificate issued by a homemade (not trusted anywhere on the globe) Certificate Authority (CA).
Assuming a homemade CA Cert (.pem or .cer format) has been installed on the android device via Settings->Security->Install from SD Card, the ClientWebSocket connects using TLS 1.2 without problems, as one would expect. Since this is a global solution to a local (one part of my app) problem, not to mention opening a security hole for the larger device ecosystem, I do not wish to require this setup.
I have then tried several methods to localize the trust of the CA to only my application without success. Regardless of approach, there is always the same exception thrown by ClientWebSocket.ConnectAsync(): A call to SSPI failed and Ssl error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED at /Users/builder/jenkins/workspace/xamarin-android-d15-8/xamarin-android/external/mono/external/boringssl/ssl/handshake_client.c:1132
I created a sample windows server and console app and Xamarin.Forms Android app that demonstrates the issue and the attempts to workaround it described below. Included is a custom CA cert. The server code dynamically issues a client cert with SANs bound to your IP/hostnames for ease of repro.
Attempt 1:
Apply the android:networkSecurityConfig="@xml/network_security_config" attribute to the application element in the AndroidManifest.xml file, including the resources Resources\raw\sample_ca.pem and Resources\xml\network_security_config.xml:
<?xml version="1.0" encoding="utf-8" ?>
<network-security-config>
  <base-config>
    <trust-anchors>
      <certificates src="@raw/sample_ca"/>
      <certificates src="system"/>
      <certificates src="user"/>
    </trust-anchors>
  </base-config>
</network-security-config>
This had no visible effect, and I cannot see anything in the debug output that would indicate that the runtime is even loading this. I have seen references to messages like this:
D/NetworkSecurityConfig: No Network Security Config specified, using platform default
However, with or without this in place I have never seen messages like this or anything similar. I really have no idea whether it is being applied or not, or whether the btls implementation even uses/respects it.
Interestingly, since the Android minSdk is set to 24 and the target SDK is 27, I would expect that without this declaration, simply adding the CA to the device's user certificate store should not be enough for TLS 1.2 to work (user-added CAs are not trusted by default at that API level). I suspect there are a few Xamarin bugs surrounding this.
Attempt 2:
Add the CA to the X509 Store, hoping btls uses that as a source of certificates. This approach works on Windows/.NET 4 (it does bring up a dialog to accept the addition of the certificate).
X509Store store = new X509Store(StoreName.Root, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
var certs = store.Certificates.Find(X509FindType.FindByThumbprint, cert.Thumbprint, false);
if (certs.Count == 0)
store.Add(cert);
store.Close();
Attempt 3:
Handle ServerCertificateValidationCallback. This never gets called in Xamarin Android, but this approach works on Windows/.NET 4.
ServicePointManager.ServerCertificateValidationCallback += (sender, certificate, chain, errors) =>
{
if (errors == SslPolicyErrors.None)
return true;
//BUGBUG: Obviously there should be a much more extensive check here.
if (certificate.Issuer == caCert.Issuer)
return true;
return false;
};
There are some Mono issues surrounding btls and a pull request that makes this approach look possible in the near future.
Attempt 4:
Add the CA cert (and/or the cert issued from the CA) to the ClientWebSocket.Options ClientCertificates collection. Technically this should not work with CA Certs but should be the approach to use with self-signed certificates. I offer it here merely for completeness. As expected, it does not work either.
Full Repro Code
Easy to use code that demonstrates this issue with all of the attempted workarounds described above is available on GitHub.
I don't know what the problem was, but I also had some problems when I started with my first HTTPS web services.
The reason was that I had started with a self-signed certificate, which does not work without ugly workarounds...
So: only use signed (trusted) certificates from a public vendor and you should not have any problems...
To switch between HTTP and HTTPS you only have to change the URL (from http to https); no further changes in the app are needed.
I normally do the first (local) tests of the web service over HTTP (the URL is loaded from an .ini file) and then copy the web service to the "real" web server (with a certificate and an HTTPS URL).
I have never had any problems (when a trusted certificate is used)...
I've got some in-house code that performs a Microsoft Update scan using the Windows Update API. Because some of the clients do not have direct internet access, I explicitly set the WebProxy property to point to our local proxy server. During testing (on Windows 7) this seemed to work perfectly. Now I'm testing it on Windows 10 (see footnote 1) and it seems that the proxy setting is being ignored.
The Windows Update client was revised significantly in Windows 10, so it is possible that this is a bug or undocumented limitation in the new version of the client, but on the other hand I have little previous experience using COM so I might be doing something wrong.
Observed results, using the test code posted below:
The code works as desired on Windows 7, regardless of what security context it is run in, and regardless of whether or not the client has direct internet access and/or a proxy server configured in the user's Internet settings.
The code also works on Windows 10, provided the client has direct internet access.
The code mostly works on Windows 10 if the client does not have direct internet access but the user that runs it has a suitable proxy server configured in their Internet settings. (See footnote 2.)
The code does not work on Windows 10 if the client does not have direct internet access and the user that runs it does not have a suitable proxy configured. Instead of connecting to the proxy specified in the code, it attempts to connect to a series of external IP addresses; once all of these connection attempts have finally timed out, it returns 0x80072ee2, ERROR_INTERNET_TIMEOUT, on the line shown. (I can get very similar behaviour on Windows 7 by leaving out the part of the code that sets the proxy server.)
Also, if I deliberately change the proxy URL in the code to point to a non-existent server, the code stops working on Windows 7, as expected, but the behaviour on Windows 10 is unchanged. So it really does look as though Windows Update is simply ignoring the WebProxy property. (The property is being set; I can read it back from the IUpdateSession object.)
Changing the Delivery Optimization Download Mode does not appear to help. I've tried all of the different modes that are available via Group Policy. Adding a trailing slash to the proxy URL broke the code for Windows 7 and made no difference for Windows 10. Using a bare DNS name rather than a URL worked on Windows 7 but made no difference on Windows 10.
Since the code is ultimately intended to become part of a system service and/or be run remotely, configuring proxy settings at the user level is not an ideal option, though I might be able to fall back on that if no other solution is available.
This is the code I've been testing, a cut-down version of the original code. The test code does not actually process the results, if any, since the problem occurs before that point. I've hidden the real DNS name of our proxy server, but the URL is of the form shown. Anyone wanting to test the code in their own environment will of course need to point it at their own proxy anyway.
#include <windows.h>
#include <wuapi.h>
#include <stdio.h>

#define stringize1(x) L#x
#define stringize(x) stringize1(x)
#define fail() fail_fn(L"Fatal error at line " stringize(__LINE__))

void fail_fn(wchar_t * msg)
{
    wprintf(L"%s\n", msg);
    exit(1);
}

int wmain(int argc, wchar_t ** argv)
{
    IUpdateSearcher* updateSearcher;
    IWebProxy* webProxy;
    IUpdateServiceManager2* serviceManager;
    IUpdateServiceRegistration* serviceRegistration;
    IUpdateSession* updateSession;
    ISearchResult* results;
    BSTR searchString, proxyString, bstrServiceID;
    HRESULT hr;

    if ((hr = CoInitialize(NULL)) != S_OK) {
        fail();
    }

    hr = CoCreateInstance(&CLSID_UpdateServiceManager, NULL, CLSCTX_INPROC_SERVER,
                          &IID_IUpdateServiceManager2, (void **)&serviceManager);
    if (hr != S_OK) fail();

    bstrServiceID = SysAllocString(L"7971f918-a847-4430-9279-4a52d1efe18d");
    hr = serviceManager->lpVtbl->AddService2(serviceManager, bstrServiceID,
                                             asfAllowPendingRegistration | asfRegisterServiceWithAU,
                                             NULL, &serviceRegistration);
    if (hr != S_OK) fail();

    hr = CoCreateInstance(&CLSID_UpdateSession, NULL, CLSCTX_INPROC_SERVER,
                          &IID_IUpdateSession, (LPVOID*)&updateSession);
    if (hr != S_OK) fail();

    hr = CoCreateInstance(&CLSID_WebProxy, NULL, CLSCTX_INPROC_SERVER,
                          &IID_IWebProxy, (void **)&webProxy);
    if (hr != S_OK) fail();

    hr = webProxy->lpVtbl->put_AutoDetect(webProxy, VARIANT_FALSE);
    if (hr != S_OK) fail();

    proxyString = SysAllocString(L"http://proxy.contoso.co.nz:80");
    if (proxyString == NULL) fail();

    hr = webProxy->lpVtbl->put_Address(webProxy, proxyString);
    if (hr != S_OK) fail();

    hr = updateSession->lpVtbl->put_WebProxy(updateSession, webProxy);
    if (hr != S_OK) fail();

    hr = updateSession->lpVtbl->CreateUpdateSearcher(updateSession, &updateSearcher);
    if (hr != S_OK) fail();

    hr = updateSearcher->lpVtbl->put_ServerSelection(updateSearcher, ssOthers);
    if (hr != S_OK) fail();

    hr = updateSearcher->lpVtbl->put_ServiceID(updateSearcher, bstrServiceID);
    if (hr != S_OK) fail();

    searchString = SysAllocString(L"IsInstalled=0 and Type='Software'");
    hr = updateSearcher->lpVtbl->Search(updateSearcher, searchString, &results);
    if (hr != S_OK) /* fails here */
    {
        wprintf(L"Error 0x%08lx\n", hr);
        fail();
    }

    wprintf(L"Update search completed successfully.\n");
    CoUninitialize();
    exit(0);
}
Is there anything I can do to make this work on Windows 10 the same way as it does on Windows 7?
(1) I am running Windows 10 LTSB 2016. This is basically the same as Windows 10 version 1607, also known as Windows 10 Anniversary Update. Most of my clients don't have the March updates but are otherwise up to date. I've also confirmed that the problem still occurs on a client with the March updates installed.
(2) During testing, using the user-configured proxy has failed on two occasions, both on newly reinstalled machines; once it starts working, it keeps working. In this scenario, the scan does still attempt to connect to various external IP addresses, but the fact that these connections time out does not cause the scan to fail. I suspect this all has something to do with the Delivery Optimization download mode, but I'm still experimenting.
Addendum: the case where the user account running the code has a suitable proxy server configured in its Internet Settings only works when the code is run interactively. In a non-interactive context, e.g., a service or scheduled task, it does not work. At present it appears to be impossible on a Windows 10 machine to access Microsoft Update from a service unless you have direct internet access.
I've read through this question and answer: "Is it Possible to Dynamically Return an SSL Certificate in NodeJS?", but it uses .key and .crt files for the domains and the server.
On a Windows 2008 R2 machine, I can't find the domain1.key, server.key, and server.crt files. Instead I've created a domain1.pfx file by exporting the SSL certificate from IIS.
I am able to successfully run an https node.js server using this one PFX file with one domain like this:
var fs = require('fs');
var https = require('https');
var crypto = require('crypto');

function getSecureContext(domain) {
    return crypto.createCredentials({
        pfx: fs.readFileSync('/path/to/' + domain + '.pfx'),
        passphrase: 'passphrase'
    }).context;
}

var secureContext = {
    'domain1': getSecureContext('domain1')
};

var options = {
    SNICallback: function (domain) {
        return (secureContext.hasOwnProperty(domain) ? secureContext[domain] : {});
    },
    pfx: fs.readFileSync('/path/to/domain1.pfx') // for the server certificate
};

var server = https.createServer(
    options,
    requestListener).listen(443);
However, what if I have a multiple-domain certificate plus another certificate for a single domain? How would the SNICallback and getSecureContext functions be configured so that each domain name uses the correct certificate?
I think the server certificate should be the same for both PFX files since they are on the same server, so I'm using only the first PFX file (for domain1) as the server certificate.
I've tried changing the secureContext object like this:
var secureContext = {
'domain1': getSecureContext('domain1'),
'domain2': getSecureContext('domain2'),
.
.
}
This gives me the error "listen EACCES".
In my specific situation I have two SSL certificates. One is an extended validation certificate for one domain name, and the second is a multiple domain certificate supporting five domain names.
I've found it very difficult to debug the EACCES error. There doesn't seem to be more detail as to what is causing the EACCES. Is my configuration wrong, is there a problem with the certificates? I do know that these certificates work correctly when I use them in IIS running an IIS server (instead of a node.js server) on the same Windows 2008 R2 server.
I would like to stay with a pure windows and node.js configuration. (Not nginx, iisnode or any other libraries if possible).
Solved it. The EACCES error was due to my not listing all the sites that need to use the two certificates. Since I was testing, I was only working with two site names, but the multi-domain certificate includes some other sites. Each site needs to be listed as below; otherwise one or more of the sites will not have a certificate associated with it, causing the EACCES error.
var secureContext = {
    'domain1': getSecureContext('domain1'),
    'domain2': getSecureContext('domain2'),
    'domain3': getSecureContext('domain2'),
    'domain4': getSecureContext('domain2')
}
I'm writing a server application that uses CryptoAPI and Schannel for setting up a secure SSL connection to clients. The server requires the clients to submit a certificate for verification (by setting the ASC_REQ_MUTUAL_AUTH flag in AcceptSecurityContext).
The problem I have is that some clients (namely clients using javax.net.ssl) do not pass along their client certificate (even though they have been configured to do so). I suspect this is because the CA certificate used to sign the client certificates is not in the list of CAs passed to the client during the handshake.
I've tried to do variations of the following to add the CA certificate to this list:
PCERT_CONTEXT caCertContext = ...; /* Imported from a DER formatted file */

HCERTSTORE systemStore = CertOpenStore(
    CERT_STORE_PROV_SYSTEM,
    0,
    0,
    CERT_STORE_OPEN_EXISTING_FLAG |
    CERT_SYSTEM_STORE_LOCAL_MACHINE,
    L"ROOT");

bool ok = CertAddCertificateContextToStore(
    systemStore,
    caCertContext,
    CERT_STORE_ADD_USE_EXISTING,
    NULL);

if (!ok)
{
    std::cerr << "Could not add certificate to system store!" << std::endl;
}
In the above example CertAddCertificateContextToStore always fails. If I change CERT_SYSTEM_STORE_LOCAL_MACHINE to CERT_SYSTEM_STORE_CURRENT_USER, I am presented with a popup asking me to confirm the certificate, but even if I accept, the CA certificate will not appear in the list sent to the client.
I also tried extending the system store collection with a temporary memory store (something I picked up from here), roughly as sketched below, but to no avail.
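Roughly, that attempt looked like the following (sketched from memory; BuildCollectionWithExtraCa is just an illustrative name, and caCertContext is the imported CA certificate from the snippet above):

#include <windows.h>
#include <wincrypt.h>
#pragma comment(lib, "crypt32.lib")

// Combine the machine ROOT store with a temporary memory store holding the extra CA.
HCERTSTORE BuildCollectionWithExtraCa(PCCERT_CONTEXT caCertContext)
{
    HCERTSTORE memStore = CertOpenStore(CERT_STORE_PROV_MEMORY, 0, 0, 0, NULL);
    CertAddCertificateContextToStore(memStore, caCertContext, CERT_STORE_ADD_USE_EXISTING, NULL);

    // Open the machine ROOT store read-only, so no elevation or modification is involved.
    HCERTSTORE rootStore = CertOpenStore(
        CERT_STORE_PROV_SYSTEM, 0, 0,
        CERT_STORE_OPEN_EXISTING_FLAG | CERT_STORE_READONLY_FLAG | CERT_SYSTEM_STORE_LOCAL_MACHINE,
        L"ROOT");

    // A collection store makes lookups see both stores, but only within this process.
    HCERTSTORE collection = CertOpenStore(CERT_STORE_PROV_COLLECTION, 0, 0, 0, NULL);
    CertAddStoreToCollection(collection, rootStore, 0, 0);
    CertAddStoreToCollection(collection, memStore, 0, 0);
    return collection;
}

Since the collection exists only inside the process, it apparently has no effect on the trusted-issuer list Schannel builds for the handshake.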
Anyone know of a way to solve this? Ideally programmatically without using any GUI or external tool?
You are getting that error because you don't have permission to open the store for both read and write access; you can only open it for read access. So what you have to do is add CERT_STORE_READONLY_FLAG, so it becomes:
HCERTSTORE systemStore = CertOpenStore(
CERT_STORE_PROV_SYSTEM,
0,
0,
CERT_STORE_OPEN_EXISTING_FLAG |
CERT_SYSTEM_STORE_LOCAL_MACHINE | CERT_STORE_READONLY_FLAG ,
L"ROOT");
If you wish to make changes to the store rather than open it read-only, you will need to run your C++ application with administrative elevation.
If you don't want to add it system-wide (as you mentioned in the comment), you can open the store with the CERT_SYSTEM_STORE_CURRENT_USER flag:
CertOpenStore(CERT_STORE_PROV_SYSTEM, 0, 0, CERT_STORE_OPEN_EXISTING_FLAG |
CERT_SYSTEM_STORE_CURRENT_USER, L"MY");
We have a web application where requests sometimes break on an irregular basis, and only in the Firefox browser. The error that comes up is:
SSL_ERROR_BAD_MAC_READ
-12273
"SSL received a record with an incorrect Message Authentication Code."
One customer claims that they get this error about every 3 minutes, while the other customers see it only a few times.
Any idea how to track down the source of this problem?
I browsed a little through the Firefox code and found this:
if (NSS_SecureMemcmp(mac, pBuf, macLen) != 0) {
    /* MAC's didn't match... */
    SSL_DBG(("%d: SSL[%d]: mac check failed, seq=%d",
             SSL_GETPID(), ss->fd, ss->sec.rcvSequence));
    PRINT_BUF(1, (ss, "computed mac:", mac, macLen));
    PRINT_BUF(1, (ss, "received mac:", pBuf, macLen));
    PORT_SetError(SSL_ERROR_BAD_MAC_READ);
    rv = SECFailure;
    goto cleanup;
}
Obviously it is possible to see what the received MAC and the computed MAC were... does anyone know where those logs end up in Firefox, or do I need to enable some logging?
Where can I find the logs for this in Firefox?
We upgraded OpenSSL to the latest version available for our platform, and that fixed it. The problem is gone, so it was probably a bug in the OpenSSL implementation.
This could be an issue with the SSL implementation you are using. The MAC is essentially a hash of the SSL record being transferred. If the implementation does not flush the SSL record properly (eating some bytes or not flushing completely), you will see this kind of issue.
I opened a cmd window and ran the ipconfig /flushdns command while Firefox was closed. When I reopened Firefox, I was able to access the URL successfully.