List all CRL Distribution Points in Windows Certificate Store

Windows Server 2019 seems to check whether an X.509 certificate was revoked. Since the machine is behind a proxy, I'd like to whitelist the necessary URLs.
For that I'd like to generate a list of all hosts (like crl.xxx.com, crl.yyy.com) that are needed to check X.509 certificates.
I tried Get-ChildItem Cert:\ -Recurse
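As a starting point, here is a minimal sketch (not a verified solution): walk the certificate stores, pull the CRL Distribution Points extension (OID 2.5.29.31) from each certificate, and collect the unique hosts referenced in its HTTP URLs.
# Enumerate certificates (skipping store containers), extract the CRL
# Distribution Points extension, and list the distinct hosts in its URLs
Get-ChildItem Cert:\ -Recurse |
    Where-Object { -not $_.PSIsContainer } |
    ForEach-Object {
        $ext = $_.Extensions | Where-Object { $_.Oid.Value -eq '2.5.29.31' }
        if ($ext) {
            # Format($true) renders the raw extension as readable text containing the URLs
            $text = ($ext | ForEach-Object { $_.Format($true) }) -join "`n"
            [regex]::Matches($text, 'https?://[^\s,)]+') | ForEach-Object { ([uri]$_.Value).Host }
        }
    } |
    Sort-Object -Unique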

Related

How can I verify and run a signed PowerShell script only trusting the Issuing Certificate (root) and not the Signing Certificate (leaf) itself?

I have a Windows image that uses the PowerShell execution policy AllSigned
I also have a PowerShell script that is signed by a signing certificate issued by an internal CA
The certificate of the issuing CA is installed/trusted on the target machine
When I sign the PowerShell script, I am including the full certificate chain (IncludeChain = 'All')
The certificate chain looks like this:
|- Issuing CA Certificate
   |- Signing Certificate
The PowerShell script is signed by the Signing Certificate, but we are not installing that certificate on our target machine. We are only installing the Issuing CA Certificate into the Trusted Publishers and other certificate stores.
This method works when we sign our custom application binaries using the Signing Certificate (we use Windows Defender Application Control to ensure that any applications running on our target are signed by trusted publishers) but it does not work when running PowerShell scripts.
According to this MSDN community post, PowerShell uses Known Publisher rules, which state that the Signing Certificate itself has to be in the Trusted Publishers certificate store.
This is not ideal, as the signing certificate we use to sign the PowerShell scripts is not something we want to ship out, nor will it be valid anymore by the time our product ships.
I understand that if I use a timestamp server when signing the PowerShell scripts, the signature will still be considered valid if it was generated within the validity window of the signing certificate, but this is not our preferred solution.
Ideally I would like to know if it is possible, and how, to have PowerShell use the Issuing CA Certificate to validate the signed PowerShell scripts. (i.e. Known Issuer rules)
In the case that it's not possible, I would like to know why Microsoft departed from the practice of allowing you to validate signatures without explicitly trusting the signing certificate (i.e. using the issuing certificate to validate it).
Any help would be greatly appreciated.
There is a difference between deployment and the running of PowerShell scripts.
The confusion is that Windows Defender Application Control can use code signing 2 different ways, for 2 different reasons, and PowerShell has only ever supported one. Windows Defender Application Control can use code signing:
With a trusted Issuing CA Certificate to authenticate applications. This is the situation when your company wants to share many internal applications among all employees easily. It is also used for "trusted" Microsoft Store applications.
Because "normal" people don't blindly trust all applications and generic certificate authorities, you can instead deploy based off of trusting just the Signing Certificate for verification. (see Catalog Files). This is so that you could deploy applications with certificates that may not have an accessible CA. e.g. if you singed the application with an internal CA and you sell it to another company, or if you are using a self signed certificate.
Windows Defender Application Control primary purpose is for Application deployment/control, and a byproduct is that it can do PowerShell scripts as well. Most "normal" applications can run with "invalid" or "broken" certificates with incomplete certificate chains. The certificate was only used to control the distribution of the code and validation that the application was not tampered with/changed, and has nothing to do with the active "running" of the code.
PowerShell, on the other hand, when running with AllSigned, always validates the entire chain before running. It doesn't care about the distribution, it cares about what it runs. That means that the entire certificate chain needs to be present and trusted on the running machine. Yes, that means that if you sign with an internal CA, you need the Issuing CA Certificate, and the Signing Certificate distributed and trusted by the running party.
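As a quick check of what the running machine itself thinks of a script's signature, you can inspect it directly (the script path here is a hypothetical example):
# Inspect the Authenticode signature PowerShell will evaluate under AllSigned
$sig = Get-AuthenticodeSignature -FilePath .\MyScript.ps1
$sig.Status             # e.g. Valid, NotSigned, UnknownError
$sig.SignerCertificate  # the signing (leaf) certificate being evaluated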
This leads you to 3 options:
Self-signed certificate - This is OK for personal/development projects, and marginally better than distributing unsigned code.
Internal CA certificate - This is OK for internal projects. In this case, yes, you would have to share the entire certificate chain if you wanted to distribute.
Global/Public CA certificate - This is the recommended method if you are distributing publicly/commercially. Public CAs are trusted, and code signing certificates can be bought from places like DigiCert and can last 3 years. I know, for me, I would feel much more comfortable running code signed with a DigiCert certificate over having to mess with internal or self-signed certificates.
That doesn't make sense.
If the internal root CA certificate is in your Trusted Root CA store and the intermediate is in your Intermediate CA store, and a PS script is signed by a certificate with a trusted chain back to that trusted root, then the signature should be trusted.
There should be no difference between an internal trusted CA and a public trusted CA.
If anything, a code signing certificate issued by an internal CA is more trustworthy than a publicly signed one. Internal processes and controls mean only actually trusted internal publishers can get one, unlike a public code signing cert bought for a small amount.

How to make certificate trusted and valid for more than one year

I am using the following command to create a certificate that will be used in a Windows application. We need this certificate to validate the application for the CyberArk security tool.
New-SelfSignedCertificate -DnsName "www.companyname.com", "www.companyname.com" -CertStoreLocation "cert:\LocalMachine\My" -Type "CodeSigningCert" -Subject "Application Name" -KeyUsage "DigitalSignature"
By using the above command, I am able to create the certificate.
Problem:
1. When I look at the installed certificate, it is not shown as trusted.
2. It also shows a validity of only one year. How can I extend the validity beyond one year?
Self-signed certificates aren't considered trustworthy unless you tell machines to trust them. Because cybercreeps.
To make your self-signed certificate trusted by a Windows machine, you must import it into the Trusted Root Certification Authority / Certificates location in the machine's certificate store. There are plenty of tutorials out there to walk you through this. Look for "How to install a self-signed certificate on Windows".
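For example, a minimal sketch of that import (the .cer file name is hypothetical, assuming you have exported the certificate's public key):
# Add the self-signed certificate to the machine's Trusted Root store
Import-Certificate -FilePath .\MyAppCert.cer -CertStoreLocation Cert:\LocalMachine\Root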
For the validity duration problem: Add -NotAfter (Get-Date).AddYears(10) to your command line if you want a self-signed certificate good for ten years.
Docs here.
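Putting it together, a sketch of the command from the question with a ten-year validity:
New-SelfSignedCertificate -DnsName "www.companyname.com" -CertStoreLocation "cert:\LocalMachine\My" -Type "CodeSigningCert" -Subject "Application Name" -KeyUsage "DigitalSignature" -NotAfter (Get-Date).AddYears(10)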

Errors with TLS 1.0 client connecting to my ASP .Net Core service

TL;DR: how do I configure an ASP.Net Core service which:
Is installed on Windows (7 and above, though I'm developing/testing on Windows 10)
Serves data over HTTPS
Uses a Self-signed SSL Certificate (clients will reside on the same local network as the service & be accessed via local hostname, so purchasing a cert is not practical)
Allows TLS1.0 clients to connect (running on Windows XP, sadly)
I have no problems with the first 3 things, it's just the TLS 1.0 clients I have a problem with.
I don't believe it's a problem with my service, rather the Windows OS and/or the self-signed SSL Certificate.
The certificate is being generated via a wrapper around the CERTENROLLLib dll. I can change how this is generated (key lengths and hash algorithms etc), but it's not (clearly) my area of expertise....
Multiple clients based on Windows 7 and up work fine, assuming they have the self-signed cert added as a trusted cert and/or are simply written to not reject this cert...
When clients (well, the only single client I have available to test with) supporting only TLS 1.0 try to connect, they get SSL errors, and I can see in my Windows 10 System Event logs this:
An TLS 1.0 connection request was received from a remote client application,
but none of the cipher suites supported by the client application are
supported by the server. The TLS connection request has failed.
If I create a local client on my machine and force it to use only TLS 1.0 via:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
Then my local client works fine... but then this is the machine on which the cert was generated, so it'll have available exactly the same list of ciphers etc. so this isn't a huge surprise..
I have followed various advice found online about ensuring my windows 10 install will accept TLS 1.0 connections, and believe it is set up to do so.
So I think that the problem is one of these:
My windows machine will accept TLS1.0 connections, but doesn't have whatever form of encryption cipher the windows XP client is requesting so the two can't agree; or
The Self-Signed Certificate is using some form of hash algorithm that the Windows XP client doesn't understand.
Does anyone have any idea which of these it's likely to be, or any suggestions for what hash algorithms etc. I should use in the cert if it's the latter? Or if I'm way off and there's something else amiss....?
Thanks.
The Actual Problem
OK, I finally figured this out. It was actually the certificate - but NOT cipher suite related (well, that may have been part of the initial problem, but it wasn't the final piece of the puzzle)
We had a third-party service available which was accessible from the XP machine over SSL. There were a number of differences with this system (hosted on Win7/IIS vs. Win10/Kestrel), but after eliminating these (by eventually hosting a basic site on IIS/Win7 using different certs) I eventually found that it was simply the certificate.
Careful analysis of the one working cert vs. the self-signed ones we were generating revealed a number of differences:
SHA-1 vs. a stronger hash algorithm
very limited other information (e.g. Subject/Issuer were simply the hostname, no additional DNS entries, etc.)
The 'Key Usage' part was marked as critical on our self-signed certs, but not on the working one
The last part appears to have been the real key to this. When the cert had the 'Key Usage' extension marked as critical it simply wouldn't work.
What was very odd was that even when looking at the network traffic, you never saw the server actually sending the Certificate back to the client (usually all you'd see is the client sending ClientHello TLS packets), so I don't fully understand how this caused the issue, but it definitely was the cert.
The Fix:
We were originally using C# code which wrapped the 'CERTENROLLLib' Interop library to generate certificates. It turns out there is a bug in this code somewhere, which meant that even if these parts were explicitly defined with Critical = false, they would still end up being flagged as Critical:
var keyUsageClass = new CX509ExtensionKeyUsageClass();
keyUsageClass.InitializeEncode(
    CERTENROLLLib.X509KeyUsageFlags.XCN_CERT_DIGITAL_SIGNATURE_KEY_USAGE
    | CERTENROLLLib.X509KeyUsageFlags.XCN_CERT_KEY_ENCIPHERMENT_KEY_USAGE
    | CERTENROLLLib.X509KeyUsageFlags.XCN_CERT_DATA_ENCIPHERMENT_KEY_USAGE
);
// This appears to be IGNORED; maybe fixed in some later version, no idea.
keyUsageClass.Critical = false;
...
cert.InitializeFromPrivateKey(
    X509CertificateEnrollmentContext.ContextMachine,
    privateKey, "");
cert.X509Extensions.Add((CX509Extension)keyUsageClass);
In the end I simply used PowerShell. This appears to still have the problem, but with the New-SelfSignedCertificate cmdlet it is at least possible to specify a Key Usage of None (which seems not to be possible via CertEnrollLib), which means this section is omitted entirely.
For reference, this is the section of PowerShell script I used, which creates the cert and then saves it to a .pfx file for import on clients as 'trusted':
#various vars to use in the create command
$certSubject = $env:computername
$certFriendlyName= "$($certSubject)_MyCert"
$certPfxFileName = "$($certFriendlyName).pfx"
$expiryDate = (Get-Date).AddYears(10)
$flagsForServerCert = "2.5.29.37={text}1.3.6.1.5.5.7.3.1"
$certPassword = "somePassword123!"
#Note: this is one command, split across lines with backticks for readability:
New-SelfSignedCertificate `
    -DNSName $certSubject `
    -FriendlyName $certFriendlyName `
    -CertStoreLocation cert:\localmachine\my `
    -Subject $certSubject `
    -HashAlgorithm SHA `
    -NotAfter $expiryDate `
    -TextExtension $flagsForServerCert `
    -KeyUsage None
#Now we need to fetch the thumbprint of that cert. note, multiple matching Certs will
#mean this doesn't work, so ensure FriendlyName is unique in the first place.
$thumbprint=(Get-ChildItem -Path cert:\localmachine\my | Where-Object {$_.FriendlyName -match $certFriendlyName}).Thumbprint
#Now to save as a pfx file, need to create a SecureString from the password.
$pwd = ConvertTo-SecureString -String $certPassword -Force -AsPlainText
#Get the certificate, and pipe it through Export-PfxCertificate to save it.
Get-ChildItem -Path "cert:\localmachine\my\$($thumbprint)" | Export-PfxCertificate -FilePath $certPfxFileName -Password $pwd
This then generated a Server cert which XP clients are happy with.

Running CodeSigned powershell scripts on target machines

Goal: I would like to run my custom powershell scripts that are signed with a valid certificate against target machines with their powershell execution policy set to “AllSigned” without having to install another certificate on the target machine.
Problem: The powershell scripts will not run until I install the public key of the certificate I used to sign the scripts as a trusted publisher (lets call it MyCert.cer) on the target machine.
Possible Misunderstanding: I could be misunderstanding the way code signing works with my "problem" above. However, I was under the impression that since Windows comes with DigiCert certificates installed by default as "Trusted Root Certification Authorities" (see image below), all I would need for my signed scripts to work is to sign them with a certificate issued by a DigiCert authority.
My Certificate details:
I purchased a code signing certificate from DigiCert. The certificate is valid and has an “EKU” of “Code Signing (1.3.6.1.5.5.7.3.3)”.
Certificate chain:
Final Thoughts: I signed the PowerShell scripts using the cmdlet Set-AuthenticodeSignature and my issued code signing certificate. The scripts will run if I install the MyCert.cer public key as a "Trusted Publisher" on the target machine. However, I would like to not have to touch the target machine and still be able to run my code-signed scripts against it. Is this possible? Have I purchased the wrong certificate for my goal? Or is an entry in the Trusted Publishers certificate store required for running code-signed scripts?
Thank you for your time.
Update: Here is the command I used to sign the powershell scripts.
Set-AuthenticodeSignature -Certificate $cert -FilePath $FileToSign -IncludeChain all
I wanted to let everyone know that I did include "All" for -IncludeChain. I have also tried using DigiCert's timestamp server for the -TimestampServer parameter. However, adding the timestamp does not make a difference for running the script. As I understand it, the -TimestampServer parameter is for keeping a signature valid after the signing certificate expires; the certificate I am using is still current and not expired.
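For reference, a sketch of the same signing call with DigiCert's public timestamping endpoint added:
Set-AuthenticodeSignature -Certificate $cert -FilePath $FileToSign -IncludeChain All -TimestampServer "http://timestamp.digicert.com"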
You are finding the intended behavior of the AllSigned Execution Policy. From Get-Help about_Execution_Policies you will see:
AllSigned
- Scripts can run.
- Requires that all scripts and configuration files
be signed by a trusted publisher, including scripts
that you write on the local computer.
The short answer is that you'll need to trust your cert on all your computers (the easy way to do this is with Group Policy). The Group Policy Object that you'll write will modify Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Publishers and then you'll need to follow the instructions in the Certificate Import Wizard. The key thing here is that the certificate can be traced back to the Trusted Root Certification Authorities in your organization, so it's a very good thing that you bought a Digicert certificate and that your organization trusts Digicert certs.
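On an individual machine, a minimal sketch of that import (using the MyCert.cer public key from the question):
# Add the signing certificate's public key to the Trusted Publishers store
Import-Certificate -FilePath .\MyCert.cer -CertStoreLocation Cert:\LocalMachine\TrustedPublisher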
So why does Digicert show up under Trusted Root Certification Authorities? The answer here is pretty simple. It means that your organization recognizes Digicert certificates and allows them to be trusted. This doesn't mean that every single Digicert certificate automatically gets a pass, it just means that they are allowed to be installed to your domain. I am gonna pick on Comodo, since they're close alphabetically and they don't show up in your screenshot. Because Comodo also gives out digital certificates, if I were to sign my scripts with my Comodo cert and try and install that certificate across your domain, it wouldn't stick since Comodo is not a Trusted Root Certification Authority in your domain.
I hope that helps explain what's going on!

Powershell Scripting Signing of SSL CSRs by CA?

I'm in a situation where I need to deploy around 200 SSL Certificates to various devices around our Agency (HP iLO - such joy they bring...). At present I have a powershell script that obtains a CSR from the iLO Device, but I now need to be able to sign this with our CA in an automated manner so I can pass back to the script and import into the device.
We are using Microsoft Certificate Services and I can download the CA Certificate, Certificate Chain, or CRL if required.
I do not really care how I get the signed certificate - what I mean is, if PowerShell needs to call an external app to get it signed, then that's fine. I've had a look at makecert.exe in the .NET SDK but it does not appear to do the job.
I have the feeling it may be possible using OpenSSL - If someone could enlighten me, that would be great.
Cheers
Having dealt with a Microsoft engineer this morning, the most graceful solution for doing this with existing infrastructure is using certreq.exe. Once you have a CSR, you can run this command to obtain a certificate from the MS CA:
certreq.exe -attrib "CertificateTemplate:WebServer" infile.csr outfile.cer
From there, you can get the certificate using Get-Content outfile.cer to feed back into the script.
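For example, a sketch of wrapping that in the script (file names are hypothetical; $csrFromIlo stands in for the CSR text obtained from the iLO earlier in your script):
# Write the CSR from the iLO to disk, submit it to the Microsoft CA,
# and read the issued certificate back for import into the device
Set-Content -Path .\ilo.csr -Value $csrFromIlo
certreq.exe -attrib "CertificateTemplate:WebServer" .\ilo.csr .\ilo.cer
$signedCert = Get-Content -Path .\ilo.cer -Raw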
Cheers
This article contains the steps to create a CA and sign a CSR from IIS:
Creating a Self-Signed Certificate using OpenSSL for use with Microsoft Internet Information Services (IIS) 5
You can export the CA into a format OpenSSL can read, and follow the steps after "Sign the Certificate Request".
Here's a FAQ for OpenSSL on managing a CA with the tool.
http://www.gtlib.cc.gatech.edu/pub/linux/docs/HOWTO/other-formats/html_single/SSL-Certificates-HOWTO.html
I've used OpenSSL successfully to create and manage a root CA, sign CSRs, and generate certificates. PowerShell cannot handle this natively, but it can definitely drive and script the whole process.
-Oisin
