letsencrypt: domain was skipped due to unreachable file

I am running 300 domains on my server; they are all symlinked to the same destination (it is a whitelabel system). Every domain I add that has the correct nameservers (we run our own nameservers) works fine for creating an SSL certificate in DirectAdmin using the Let's Encrypt service. But today I added a domain (it has had the correct nameservers for 7 days already, and when I use Safari to skip the warning, the website that should appear does appear), tried to get a certificate, and got the following error (I tried every key size):
xxx.nl was skipped due to unreachable http://xxx.nl/.well-known/acme-challenge/ file.
www.xxx.nl was skipped due to unreachable http://www.xxx.nl/.well-known/acme-challenge/ file.
No domains pointing to this server to generate the certificate for.
After this I added other .nl domains that I had checked for correct nameservers, and they worked.
I hope someone knows a solution or can point me in the right direction. Thanks!
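A quick way to see what the Let's Encrypt validator sees is to drop a test file under the challenge path and fetch it over plain HTTP from outside the server; a minimal sketch, assuming a DirectAdmin-style web root (the path and the test.txt name are placeholders):

```
# Assumed document root for the domain; adjust to your symlinked layout.
WEBROOT=/home/admin/domains/xxx.nl/public_html

# Create a throwaway file where the ACME HTTP-01 check looks for challenges.
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo "reachable" > "$WEBROOT/.well-known/acme-challenge/test.txt"

# Fetch it from an outside network, following redirects; a 404, a timeout or an
# unexpected redirect here is what produces the "skipped due to unreachable
# file" message.
curl -vL http://xxx.nl/.well-known/acme-challenge/test.txt
curl -vL http://www.xxx.nl/.well-known/acme-challenge/test.txt
```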

Related

How to configure Apache NiFi on HTTPS

1) HTTP and no DNS (working)
I was using NiFi over HTTP, accessing it via the IP or the server name itself, and everything was working fine.
2) HTTPS (self-signed) and no DNS (working)
I did the HTTPS setup using the toolkit and it worked. Chrome kept showing the red warning, which was expected, but at least things worked.
3) DNS (internal) + certificate signed by an external authority (Symantec)
DNS works fine, as I am able to ping the box using the DNS name; I also added this name to the etc/hosts file.
Even though NiFi is internal to the org, I still went ahead and bought a certificate, and the CNAME I used was the DNS name of my server.
The certificate I got was a chained certificate:
my_dns > TrustedSecureCertificateAuthority5 > USERTrustRSAAddTrustCA > AddTrustExternalCARoot
I created a JKS and added all of them to it; I also added a key pair to the JKS and appended all the certificates to the key pair too.
Then I changed nifi.properties and used the same JKS as both the truststore and the keystore.
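For reference, the keystore/truststore entries in nifi.properties look roughly like this (the paths, passwords and hostname below are placeholders rather than the actual values used; the HTTPS port is the one mentioned further down):

```
# HTTPS listener (the HTTP port is left blank once HTTPS is enabled)
nifi.web.http.port=
nifi.web.https.host=my_dns
nifi.web.https.port=9447

# Same JKS used as both keystore and truststore, as described above
nifi.security.keystore=/opt/nifi/conf/keystore.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=changeit
nifi.security.keyPasswd=changeit
nifi.security.truststore=/opt/nifi/conf/keystore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit
```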
Now if I use NiFi with the new DNS name and HTTPS, I get "ERR_SSL_VERSION_OR_CIPHER_MISMATCH" (attached image) in Chrome. In IE I get a TLS error, so I don't think the problem is with either the certificate or the browser.
If I give the URL as (see image) https://server_name:9447/nifi, NiFi opens up, but it still shows the red warning, this time not for a self-signed certificate but for the name not matching. That confirms the NiFi web server has access to my new JKS and reads it... so why is it not working?
What am I missing here? Can NiFi run with an externally bought certificate, or does it always have to work with a self-signed certificate?
If you are running NiFi with an externally issued certificate, please share your configuration.
Do I still have to use the toolkit? Or does the toolkit do the same thing that I did by buying the certificate? If so, what am I missing here?
Here is how I resolved it.
After nothing else worked, I went back to the HTTPS setup of NiFi, where NiFi generates the keystore and truststore JKS files.
I then downloaded both and edited them: I removed all the previous (self-signed) certificates and added my CA certificate chain.
Then I simply uploaded them back and restarted NiFi.
NiFi is now on HTTPS.
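Roughly the same keystore surgery can be done with keytool instead of a GUI editor; a sketch, assuming the toolkit-generated keystore.jks/truststore.jks and the purchased chain saved as individual .crt files (the alias and file names are illustrative, so list the stores first to find the real aliases):

```
# Find the actual entry aliases (the toolkit commonly uses nifi-key / nifi-cert).
keytool -list -v -keystore keystore.jks -storepass changeit
keytool -list -v -keystore truststore.jks -storepass changeit

# Truststore: drop the toolkit's self-signed CA and import the purchased chain.
keytool -delete -alias nifi-cert -keystore truststore.jks -storepass changeit
keytool -importcert -trustcacerts -alias addtrust-root -file AddTrustExternalCARoot.crt -keystore truststore.jks -storepass changeit
keytool -importcert -trustcacerts -alias usertrust-ca -file USERTrustRSAAddTrustCA.crt -keystore truststore.jks -storepass changeit
keytool -importcert -trustcacerts -alias trusted-secure-5 -file TrustedSecureCertificateAuthority5.crt -keystore truststore.jks -storepass changeit

# Keystore: import the signed certificate (full chain) onto the existing
# private-key entry so it replaces the self-signed certificate.
keytool -importcert -trustcacerts -alias nifi-key -file my_dns_chain.crt -keystore keystore.jks -storepass changeit
```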

Certificate validation unnecessarily using the internet?

I have an application that receives items from a high-speed scanner device. As the items are received, they are written to disk using SQL Compact. The following digitally signed Microsoft DLLs are used:
sqlceca40.dll
sqlcecompact40.dll
sqlceer40EN.dll
sqlceme40.dll
sqlceoledb40.dll
sqlceqp40.dll
sqlcese40.dll
I received a performance complaint from a customer and traced the issue, using Microsoft Procmon, to a TCP Reconnect failure when attempting to contact the certificate-validation site whenever we make calls to methods in these DLLs. At first, I could not recreate the issue locally. After talking to their infrastructure people and developers, I learned that they must use a proxy for internet connectivity. Some of the customer's users (in the test environment) had valid proxy settings, and they got good performance from our application. Naturally, when they turned their proxy settings off, the validation could not be done and the performance issue arose.
I attempted to recreate the issue by setting our machine up with false proxy settings pointing to a non-existent machine. On my initial attempt, I still got good performance from our application, and no attempt was made to contact the internet for certificate validation. After looking at the certificate's validation chain, I noticed that it derived from the "Microsoft Root Certificate Authority" certificate. I then exported and deleted that certificate and was able to reproduce the issue, as determined by a comparison of logs.
I did the following tests:
Test 1:
1. Opened the proxy settings, and enabled them pointing to a non-existent address.
2. Ran a test.
Results: No performance issue.
Test 2:
1. Exported the “Microsoft Root Certificate Authority” cert and moved it to the untrusted folder.
2. Ran a test.
Results: The performance issue occurred.
Test 3:
1. Deleted the “Microsoft Root Certificate Authority” cert.
2. Started a test.
Results: The performance issue began occurring.
3. While the test was in progress and the device was hesitating, I removed the false proxy settings.
Results: The performance issue disappeared and the application recovered.
Tentative Conclusions:
1. I can simulate the no-internet-access condition by providing false proxy settings.
2. If the "Microsoft Root Certificate Authority" cert is installed properly, the .NET infrastructure does not need to access the network to verify the necessary cert.
3. If not, it will attempt to validate via the internet connection (the certutil check sketched below makes this visible).
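To confirm conclusions 2 and 3 directly, Windows' own certutil can be asked to build the chain and explicitly fetch the AIA/CRL URLs; a sketch (first export the Authenticode signing certificate from one of the DLLs, e.g. via the file's Digital Signatures tab; sqlce_signing.cer is just a placeholder name):

```
REM Build the chain and retrieve AIA/CRL URLs; behind the bad proxy the output
REM shows which network fetches are attempted and which ones time out.
certutil -verify -urlfetch sqlce_signing.cer

REM Confirm the root really is present in the machine's Trusted Root store.
certutil -store Root "Microsoft Root Certificate Authority"
```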
Nevertheless, when the customer checked the certificates in the "Trusted Root Certificates" folder of mmc -> Certificates (Local Computer), the "Microsoft Root Certificate Authority" certificate did appear there, and it seems to be identical to mine. Yet for some reason the use of the DLLs causes certificate validation to attempt to access the internet, resulting in a performance issue.
In the customer's situation, eventually devices will be used in production with no internet access.
My question is, is there a setting (registry, or GPO) that might cause certificate validation to always attempt to use the internet, regardless of whether the root certificate of the validation chain is installed in the local computer?
Can a setting be enabled that causes a certificate validation to access the internet to check to see if the root certificate has been revoked, for example?
Please feel free to ask questions if you need more information.
This appears to occur for SQL Server Compact 4.0 on any system with an invalid proxy configuration, as a Certificate Revocation List check is run each time the engine is loaded (which happens on the first call to .Open()).
Solution: To avoid this delay, which probably affects any signed app on the system in question, you must fix the proxy configuration or disable the check. The check can be disabled via the UI or via registry settings, as described here (a sketch of the registry value is included below): http://digital.ni.com/public.nsf/allkb/18E25101F0839C6286256F960061B282
For additional issues, see my blog post here: http://erikej.blogspot.com/2013/08/faq-why-is-opening-my-sql-server.html
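For reference, the option in question is "Check for publisher's certificate revocation" under Internet Options > Advanced > Security; the per-user registry value it is commonly reported to toggle is sketched here (the exact value is an assumption on my part, so verify against the linked KB article before deploying it):

```
Windows Registry Editor Version 5.00

; "State" holds the WinTrust Software Publishing policy flags for the current
; user; 0x00023c00 corresponds to the publisher-revocation check being
; unchecked (0x00023e00 is the default with the check enabled).
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing]
"State"=dword:00023c00
```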

Firefox forces .dev domains to use SSL, just like Chrome

Today when I woke up to continue my development work, I got a Firefox update and then couldn't reach my localhost websites any more; they were being redirected to the HTTPS protocol.
We all know Google did the same a while ago, but since many of us mostly use Firefox, we (at least I) didn't care and kept working in Firefox. Now that Firefox has decided to play with us (developers), here are some questions that are still unanswered for me:
Questions
How do we add HTTPS to our localhost?
Should we buy an SSL certificate for our local environment?
How do I add SSL to my Laravel project on localhost?
What will happen if I develop an application with SSL and then move it to a host where my domain doesn't have SSL (will there be any conflict)?
Concerns
My main concerns are:
What if I don't want to buy an SSL certificate for my local environment and don't want to share my projects' data (such as names, etc.) with others (basically SSL companies)?
What if I develop with HTTPS and my live site is HTTP?
UPDATE
As I'm working on Windows and using Laragon (I don't know about MAMP, XAMPP, etc.), here is how I solved my issue, but I'm still looking for answers to my other questions.
First of all I turned on Laragon's SSL certificate, then I changed my domain suffix to .pp; now my sites load as domain.pp.
PS: I also tested the same way with .local, .test and .app; they didn't work, but .pp did.
You can also change the domain suffix to something like the following (and serve it over a locally generated certificate, as sketched after the list):
.localhost
.invalid
.test
.example
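On the "should we buy a certificate" question: for local development you don't need to, since the hostname never leaves your machine; a self-signed certificate (or one generated by a tool like mkcert and trusted locally) is enough. A minimal sketch with OpenSSL, where the myapp.localhost name and file paths are placeholders and Laragon/XAMPP would normally wire up the equivalent vhost for you:

```
# Generate a self-signed certificate and key for a local-only hostname
# (-addext needs OpenSSL 1.1.1+; browsers want the name in subjectAltName).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout myapp.localhost.key -out myapp.localhost.crt \
  -subj "/CN=myapp.localhost" \
  -addext "subjectAltName=DNS:myapp.localhost"

# Point the name at this machine in the hosts file
# (C:\Windows\System32\drivers\etc\hosts on Windows):
#   127.0.0.1  myapp.localhost

# Then reference the two files from the project's Apache/nginx vhost
# (SSLCertificateFile / SSLCertificateKeyFile in Apache terms) and trust the
# certificate in the OS or browser store to silence the warning.
```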
The folks that created DesktopServer (which I highly recommend over MAMP/XAMPP) registered the domain .dev.cc for local development use when Google did its thing with .dev, which, as we all know, now requires HTTPS for local work when you use Chrome or Firefox. When you use DesktopServer to install a new instance of a site locally, DS appends the .dev.cc TLD, which only exists on your local computer. DesktopServer rewrites all instances of .dev.cc to the correct production domain when you push your site live. But even if you don't use DS, you can use the .dev.cc domain.

Can't use bitbucket any more. Your connection is not secure

I've been using Bitbucket for 2 years on my MacBook. Today I went to view one of my repositories, but I am getting the error message "Your connection is not secure". All other sites work; it's only bitbucket.org that is giving me this error. I've tried using Safari and Firefox, and neither works. I also cannot connect using SourceTree. I am able to connect on my Windows computer, so that rules out my router. I've deleted all expired certificates in Keychain and deleted cookies and cache. Does anyone know what the issue might be?
The MacBook's clock is set automatically and is displaying the correct time. In Firefox, when the website fails to load, I can see these 3 messages by clicking the Advanced button:
bitbucket.org uses an invalid security certificate.
The certificate is only valid for search.dnsadvantage.com
Error code: SSL_ERROR_BAD_CERT_DOMAIN.
If I click on the last error, it opens another page which displays, https://bitbucket.org/ Unable to communicate securely with peer: requested domain name does not match the server's certificate. HTTP Strict Transport Security: true HTTP Public Key Pinning: false.
Is there somewhere else I need to go to locate more information about the error?
Looks like you've picked up a virus and/or malware:
http://www.fixingvirus.com/always-redirected-to-search-dnsadvantage-com-how-to-stop-it/
That link is for Windows machines, so maybe check this one for the MacBook:
https://www.fixyourbrowser.com/how-to/remove-adware-mac-osx-safari-chrome-firefox/
Note: I don't vouch for the above links, but they were the first ones that came up when I googled "search.dnsadvantage.com". It seems to be a common problem.
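Given that the certificate presented is for search.dnsadvantage.com, it is also worth checking what the Mac actually resolves bitbucket.org to, and which DNS servers it is using, before cleaning anything; a quick sketch from Terminal (compare the answers against the Windows machine that still works):

```
# What does this Mac resolve bitbucket.org to, and via which nameservers?
dig +short bitbucket.org
scutil --dns | grep nameserver

# Ask a resolver you trust directly; a mismatch with the answer above points
# at hijacked DNS settings (local, or in the network profile this Mac uses).
dig +short bitbucket.org @8.8.8.8

# Also check for a local override of the name.
grep -i bitbucket /etc/hosts
```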

Why does Windows Azure Tools insist that my SSL configuration is incorrect?

I'm about at the end of my rope with Windows Azure Tools and SSL configuration in the ServiceDefinition/ServiceConfiguration files in a cloud project.
At first, I had a web role with RDP enabled (and the certificate configured, etc.). All of that worked for a long time. Then I added an SSL certificate for an HTTPS endpoint. It wouldn't deploy because the certificates weren't installed in my LocalMachine/Personal store, etc. After messing with it, I've somehow gotten into a bad state where, even if I completely remove every configuration entry having to do with RDP or SSL, I still get this from the emulator:
Windows Azure Tools: Warning: The SSL certificate 'Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption' for endpoint 'HttpsIn' of role 'My.Web' was not found in the local machine's certificate store.
Windows Azure Tools: Warning: Certificate identification setting 'Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption' for role 'My.Worker' specified in the service configuration file is not declared in the service definition file in the Certificate or as part of an SSL endpoint
Like I said, there is no such configuration in any of my files, and when there was, it hadn't changed between the time it worked and now. I've tried deleting the dftemp directory where the deployments get placed, I've cleaned and rebuilt the cloud project, and I've killed Visual Studio and the emulator(s), but I always wind up back in the same place.
Has anyone else seen this?
I'm not sure what happened, but after leaving for the day and coming back this morning, I found that a system reboot seemed to clear the issue. I have no idea why, but this seems to have resolved itself.
For the first error, you have to remove the certificate from the Certificates section of your service configuration file (.cscfg). This certificate is used for encrypting your RDP password.
For the second error, I think that if you remove the section above, it will automatically disappear. Also ensure that you have removed the RemoteAccess and RemoteForwarder modules from the service definition file (.csdef); the relevant elements are sketched below.
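A sketch of the elements being referred to, using the standard Azure SDK schema (the thumbprint is elided and the surrounding role markup is omitted):

```
<!-- ServiceDefinition.csdef: remove these imports from each role that has them -->
<Imports>
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
</Imports>

<!-- ServiceConfiguration.cscfg: remove the matching certificate entry
     (and any Microsoft.WindowsAzure.Plugins.RemoteAccess.* settings) -->
<Certificates>
  <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
               thumbprint="..." thumbprintAlgorithm="sha1" />
</Certificates>
```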
