After reading a lot, I still have a question: how do I provide reasonable proof that SRK_pub is bound to a given TPM?
I am working in a Windows 10 TPM context.
Creating an attestation for normal platform keys using an AIK is straightforward. However, for hostage keys (wrapped key import) you need to use the storage root key, and you need to send this key to the server. It is not clear how to generate an attestation for this key.
A normal certify command with this key handle (0x40000000) fails with error 0x84.
Thanks.
I'm trying to learn a little about Windows application development, and I have yet to wrap my head around certificate handling.
Are encryption and decryption handled by Windows OS functions, or by fetching the private key from the store and performing the cryptographic tasks separately?
An example: assume I have some web page hosted with IIS using SSL certs.
Is the IIS web server, for example, using API calls like the one below, or does IIS ask the OS to encrypt/decrypt using some user-selected cert from the store?
https://learn.microsoft.com/en-us/dotnet/api/system.security.cryptography.x509certificates.rsacertificateextensions.getrsaprivatekey?view=net-5.0
In the given case, IIS acquires a private key handle and calls CryptoAPI functions to perform cryptographic operations. However, IIS doesn't use the referenced API; it uses native functions directly.
Windows uses an abstraction layer by defining APIs for cryptography, and one important part of these APIs is the key handle. The OS may not have access to the raw key material; it uses the key handle to reach the key through the key owner. Key owners are implemented as a Cryptographic Service Provider (CSP) or a modern Key Storage Provider (KSP). When necessary, the OS calls the defined API and passes the key handle to the CSP or KSP. The CSP/KSP implementation is then responsible for accessing the raw key material and performing the actual cryptographic operations. CSP/KSP implementations are vendor-specific. Windows ships a dozen software-based CSP/KSP implementations, and third-party vendors may ship their own, especially when the key is stored in hardware (a smart card or HSM).
Either way, the OS doesn't care how or where the key is stored; it simply calls the defined API and passes the key handle and the input parameters. The CSP/KSP then accesses the raw key material, performs the requested operation, and returns the result to the caller.
And here is the answer to your question: cryptographic operations are handled by the CSP/KSP that owns/stores the particular key identified by the key handle. Whether that is the OS or not depends on the CSP/KSP implementation. If it is software, then most likely (not necessarily, but very often) it is handled by the OS. If it is hardware, then it is handled by the hardware itself.
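To make the key-handle idea concrete, here is a minimal Java sketch using the SunMSCAPI provider (Windows only): the "Windows-MY" keystore hands back a handle-backed private key object, and the signing call is delegated to whichever CSP/KSP actually holds the key. The alias "my-cert" is a hypothetical example.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyStore;
    import java.security.PrivateKey;
    import java.security.Signature;
    import java.security.cert.X509Certificate;

    public class CspKspHandleDemo {
        public static void main(String[] args) throws Exception {
            // "Windows-MY" is backed by the current user's Windows certificate store.
            KeyStore windowsStore = KeyStore.getInstance("Windows-MY");
            windowsStore.load(null, null); // no file, no password: the OS store is used

            // The returned object is only a handle/reference; the raw key material
            // stays inside whatever CSP/KSP owns it (software, smart card, TPM, ...).
            PrivateKey handle = (PrivateKey) windowsStore.getKey("my-cert", null);
            X509Certificate cert = (X509Certificate) windowsStore.getCertificate("my-cert");

            // The signature is computed by the provider that owns the key; the JVM
            // (and the application) never see the private key bytes.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(handle);
            signer.update("hello".getBytes(StandardCharsets.UTF_8));
            byte[] sig = signer.sign();
            System.out.println("Signed " + sig.length + " bytes with " + cert.getSubjectX500Principal());
        }
    }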
My use case looks like this:
- encrypt some super-secret data using a key provided by the user
- when requested, ask the user for that key and decrypt the data
- re-encrypt the data with a key that will allow my program to access the data for a user-defined period of time
- if the token has expired, ask the user for the original key again
This feels like it should be a solved problem, but my google-fu is weak today.
I could just decrypt the data and store it re-encrypted with a key known to my program, but cracking my code would expose those secrets.
I could, and maybe should, use some local secure storage for this data, like the macOS keychain, but I'd like to keep the number of native variations to a minimum.
The answer to this specific question appears to be: no, it is not possible to do this locally.
The best solution to this kind of problem, i.e. a temporary cache of data decrypted with a user's key, is either to use security tooling present on the user's machine (e.g. the macOS keychain) or to simply re-encrypt the cache with a key known to the program and accept that it is possible to reverse engineer that program to find the decryption key.
My plan to deliver this is to generate an encryption key when the program is first run and use that plus a known salt to encrypt my cache. The idea is that the program, the generated key, and the cache would all need to be compromised together to decrypt my cache.
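A minimal Java sketch of that plan, for illustration only: generate an AES key on first run, persist it next to the program, and use AES-GCM to encrypt and decrypt the cache. The file name is a hypothetical choice, and the plan's "known salt" becomes a per-encryption random nonce here, since GCM requires one. As discussed above, anyone who obtains both the key file and the cache can decrypt it; that is the accepted trade-off.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.SecureRandom;

    public class CacheCrypto {
        private static final Path KEY_FILE = Path.of("cache.key");   // hypothetical location
        private static final int GCM_TAG_BITS = 128;
        private static final int IV_BYTES = 12;

        // Generate the key on first run, otherwise reuse the stored one.
        static SecretKey loadOrCreateKey() throws Exception {
            if (Files.exists(KEY_FILE)) {
                return new SecretKeySpec(Files.readAllBytes(KEY_FILE), "AES");
            }
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256, new SecureRandom());          // CSPRNG, not a plain PRNG
            SecretKey key = kg.generateKey();
            Files.write(KEY_FILE, key.getEncoded());   // compromising this file + the cache = game over
            return key;
        }

        // Encrypt the cache with AES-GCM; the random nonce is prepended to the ciphertext.
        static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] ct = c.doFinal(plaintext);
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return out;
        }

        static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, blob, 0, IV_BYTES));
            return c.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
        }
    }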
I am designing a generic Java library which needs to sign messages before sending and verify messages before accepting. Sign and verify need to work under both of the following constraints:
Use a raw 32-character key together with some additional details such as the key start date, end date, grace period, and algorithm, all supplied as raw information (I do not have a choice; I have to accept it as raw data).
Use a proper PKI certificate containing a public key
For the PKI-based secret it is straightforward to use a JKS/PKCS12 keystore to store the information and use it.
The problem I am facing is how to manage and store the raw information. What should the data structure be? So far my options are:
Use a JCEKS keystore (provided by the JRE) to store all the raw information, in my own made-up data structure, as secret entries and resolve them at runtime in my library (a sketch of this approach follows below)
Use JKS/PKCS12 together with an X509 certificate data structure, storing all the raw information under its extensions
What I am really looking for are best practices for these kinds of versatile requirements, i.e. managing unmanaged secret properties.
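For the first option, here is a minimal sketch of what storing the raw key plus its metadata in a JCEKS keystore could look like. The aliases, passwords, and the idea of keeping the metadata as a serialized Properties blob in a second SecretKeyEntry are all illustrative assumptions, not an established best practice.

    import javax.crypto.spec.SecretKeySpec;
    import java.io.ByteArrayOutputStream;
    import java.io.FileOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.security.KeyStore;
    import java.util.Properties;

    public class RawKeyStore {
        public static void main(String[] args) throws Exception {
            char[] storePass = "changeit".toCharArray();   // hypothetical password
            // The 32-character raw key handed to us (hypothetical value).
            byte[] rawKey = "0123456789abcdef0123456789abcdef".getBytes(StandardCharsets.UTF_8);

            KeyStore ks = KeyStore.getInstance("JCEKS");
            ks.load(null, storePass);                      // start with an empty store

            // Entry 1: the raw key itself, stored as an opaque HMAC key.
            SecretKeySpec keyEntry = new SecretKeySpec(rawKey, "HmacSHA256");
            ks.setEntry("partner-key", new KeyStore.SecretKeyEntry(keyEntry),
                    new KeyStore.PasswordProtection(storePass));

            // Entry 2: the metadata (start/end date, grace period, algorithm) as a
            // serialized java.util.Properties blob wrapped in a second secret entry.
            Properties meta = new Properties();
            meta.setProperty("startDate", "2024-01-01");
            meta.setProperty("endDate", "2024-12-31");
            meta.setProperty("gracePeriodDays", "30");
            meta.setProperty("algorithm", "HmacSHA256");
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            meta.store(buf, "raw key metadata");
            SecretKeySpec metaEntry = new SecretKeySpec(buf.toByteArray(), "RAW");
            ks.setEntry("partner-key.meta", new KeyStore.SecretKeyEntry(metaEntry),
                    new KeyStore.PasswordProtection(storePass));

            try (FileOutputStream out = new FileOutputStream("rawkeys.jceks")) {
                ks.store(out, storePass);
            }
        }
    }

Either way, keep the keystore password out of source control; the sketch hard-codes it only for brevity.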
The following YouTube video does a pretty good job of summarizing how EFS works.
For those interested, I have attached a summary of its contents below.
However, this leaves me with one question concerning security:
When a user logs on in Windows, presumably a hash is computed from the password
(or alternatively from the password plus the username and perhaps other data such
as a salt). When a user first creates a password, such hash must be stored somewhere
on the hard drive if I am not mistaken. At least, old Unix systems used to work in
such manner (with such hash stored in /etc/passwd). Thus when a user logs on, the
password hash is computed and compared to what is stored in such file in order
to authenticate the user. If the hashes match, the user is logged in.
So far so good. If the above mechanism is the one used (on modern Windows systems), this means that when someone hacks into a Windows system, they can read that password hash. The special Microsoft symmetric encryption algorithm (described below) is implemented in code stored on the hard drive and can thus be learned by a hacker. So the password hash, plus the Microsoft symmetric algorithm, plus knowledge of where the encrypted private key is stored on the hard drive, allows the hacker to decrypt it and obtain the private key. And once the private key is obtained, of course, all data encrypted using the public key in the certificate can be decrypted by the hacker.
Can someone please point out the flaw in my reasoning?
Presumably the flaw is due to a misunderstanding of mine concerning
how Windows authentication is carried out.
Thanks.
http://www.youtube.com/watch?v=YxgWsa-slOU
Summary of the contents of the above video:
- EFS (available in the NTFS file system) is designed to allow users to encrypt files and folders so that nobody except the person who encrypted the file or folder can access it. Administrative accounts on stolen machines can be created with minimal hacking knowledge and can thus gain access to virtually any file on the hard drive; EFS is meant to protect against exactly this. Symmetric key encryption algorithms work about 100 to 1000 times faster than public key encryption algorithms.
- To encrypt a file: right-click -> Properties -> General -> Advanced... -> Encrypt Contents to Secure Data and click Apply (you can then choose between encrypting just the file or encrypting the file and its parent folder, then click OK). Windows will display the file name in green and we will still have full access to the file. Once this is done, someone logging in with an administrator account will not be able to access the file.
- You can in fact access the certificate manager with the "certmgr" command, and from there you can view the contents of the Personal -> Certificates folder, which may start out empty. When we encrypt a file in the above manner, a symmetric file encryption key (FEK) for the DESX algorithm is generated, and the certificate's public key is used to encrypt the FEK and store it with the encrypted data. In the certificate contained in the certificate store you can get access to the public key but not the private key (the certificate attests that the user is who they say they are and displays the user's public key). The certificate also points to the private key, but that private key is stored in a special location on the hard drive and is encrypted using a master key generated by a special Microsoft symmetric key algorithm. The master key is generated from a hash component of the user's username and password every time the user logs on, and the resulting symmetric key is not stored anywhere on the hard drive (i.e. it must be kept somewhere in memory).
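As an illustration of the general pattern described above (not of EFS's actual on-disk format): a random symmetric FEK encrypts the data, and the FEK is then wrapped with the certificate's RSA public key, so only the holder of the private key can recover it. A minimal Java sketch, generating a throwaway key pair to stand in for the user's certificate; EFS itself historically used DESX and later AES for the FEK.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.SecureRandom;
    import java.util.Arrays;

    public class FekPatternDemo {
        public static void main(String[] args) throws Exception {
            // Stand-in for the user's certificate key pair.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair userKeys = kpg.generateKeyPair();

            // 1. Generate a random file encryption key (the "FEK") - symmetric, fast.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey fek = kg.generateKey();

            // 2. Encrypt the file contents with the FEK.
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher data = Cipher.getInstance("AES/GCM/NoPadding");
            data.init(Cipher.ENCRYPT_MODE, fek, new GCMParameterSpec(128, iv));
            byte[] ciphertext = data.doFinal("file contents".getBytes(StandardCharsets.UTF_8));

            // 3. Wrap the FEK with the public key and store it alongside the ciphertext.
            Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
            wrap.init(Cipher.WRAP_MODE, userKeys.getPublic());
            byte[] wrappedFek = wrap.wrap(fek);

            // 4. Only the private key (itself protected by the user's master key) can
            //    unwrap the FEK and therefore decrypt the file.
            Cipher unwrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
            unwrap.init(Cipher.UNWRAP_MODE, userKeys.getPrivate());
            SecretKey recovered = (SecretKey) unwrap.unwrap(wrappedFek, "AES", Cipher.SECRET_KEY);
            System.out.println("FEK recovered: "
                    + Arrays.equals(recovered.getEncoded(), fek.getEncoded())
                    + ", ciphertext bytes: " + ciphertext.length);
        }
    }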
The hash value used to unlock the private key (via the symmetric master key) is not the same as the hash value that is stored and used for authentication. Both hashes are derived from the password, but they are not computed the same way. Neither is reversible, and they cannot be used interchangeably.
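A toy illustration of that point (not Microsoft's actual scheme): deriving the same password with two different salts/contexts yields two unrelated values, so knowing the stored authentication hash tells you nothing about the key-protection value, and neither can be run backwards to recover the password.

    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class TwoDerivationsDemo {
        public static void main(String[] args) throws Exception {
            char[] password = "correct horse battery staple".toCharArray();
            SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");

            // Same password, two different salts/purposes -> two unrelated outputs.
            byte[] authHash = kdf.generateSecret(new PBEKeySpec(
                    password, "authentication".getBytes(StandardCharsets.UTF_8), 100_000, 256)).getEncoded();
            byte[] keyProtection = kdf.generateSecret(new PBEKeySpec(
                    password, "master-key-protection".getBytes(StandardCharsets.UTF_8), 100_000, 256)).getEncoded();

            System.out.println("stored auth hash : " + Base64.getEncoder().encodeToString(authHash));
            System.out.println("key protection   : " + Base64.getEncoder().encodeToString(keyProtection));
            // The two values differ, and neither is reversible to the password.
        }
    }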
To access your files, they need you to either be logged in already, or they need your password.
Also note that EFS normally designates the administrator or domain administrator as the "recovery agent". When the private key is stored, it also stores a copy that can be accessed by the administrator.
As shown in Figure 15.9, encryption with EFS requires the presence of at least two certificates in the certificate store on the local computer: one for the user (file owner) and one for a recovery agent account.
You can disable this feature by setting another of your accounts as the recovery agent, but in a domain, normally your domain administrator will set this policy and not allow you to disable it. So, the administrator can still access your files.
As long as an attacker doesn't gain the password for the recovery agent's account (or yours), your data should still be safe from an attacker, assuming the attacker isn't the same person as the recovery agent.
It's important to have strong passwords, keep them safe, and avoid running malicious software that could access the data directly.
Thanks for your views on my YouTube video. I am certainly no expert on the details of current encryption technology and so my answer won't do your question justice. The video is intended to give someone who is unfamiliar with the details of EFS a more coherent understanding of how it all works.
However, having said that, it looks like the previous reply answers the question. Hashes are not reversible. I think I used the words 'virtually impossible' to reverse engineer, but really hashes are used because they cannot be reversed to give the passwords. Password-cracking programs, from my limited understanding, start with a plaintext word from a dictionary, run it through the same hash algorithm, and check whether it produces the same hash as the target hash they are attempting to crack. As long as you've used a good password, the hash can't be cracked; bad passwords are the only way passwords get cracked.
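A toy sketch of the dictionary-attack loop described above, using plain SHA-256 for concreteness; real crackers target the specific credential format (NTLM, cached credentials, and so on) and use far larger wordlists.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;
    import java.util.List;

    public class DictionaryAttackDemo {
        public static void main(String[] args) throws Exception {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");

            // The stolen hash the attacker is trying to match (here: hash of "sunshine").
            byte[] targetHash = sha256.digest("sunshine".getBytes(StandardCharsets.UTF_8));

            // Hash each candidate word with the same algorithm and compare.
            List<String> wordlist = List.of("password", "letmein", "sunshine", "dragon");
            for (String candidate : wordlist) {
                byte[] h = sha256.digest(candidate.getBytes(StandardCharsets.UTF_8));
                if (Arrays.equals(h, targetHash)) {
                    System.out.println("Cracked: " + candidate);
                    return;
                }
            }
            System.out.println("Not in wordlist - a strong password survives this attack.");
        }
    }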
It is easy to set up an administrative account if you have access to any machine, but any newly created account will not have access to any private keys. A recovery agent has to be set up PRIOR to encrypting anything with EFS in order for the recovery agent to have access to a user's files. But then, both the recovery agent's private key and the target person's private key are unrecoverable to a new admin account.
I think that's the way it needs to work, or there is no real security.
Dave Crabbe
Is it possible to store passwords on the local system (Windows XP) that can only be accessed by the application itself?
My instinctive answer would be "no". Even if some kind of hashing or encryption is used, I would think that as long as the source code is available, a determined seeker could always use it to retrieve the password.
I'm working on a personal open source hobby project in which I would like to give users the option of storing passwords on disk so that they don't need to type them every time they use the software. One example of a password that could be stored would be the one used to authenticate on their network's proxy server.
There are a few related questions here on Stack Overflow and the most appropriate solution sounds like using an operating system service like DPAPI.
Is the basic premise correct that, as long as the password is retrievable by the software without any user input and the source code is open source, the password will always be retrievable by a (suitably technically and willfully inclined) passer-by?
You could read about the Pidgin developers' take on it here: Plain Text Passwords.
Using the DPAPI in UserData mode will only allow your account on your machine to access the encrypted data.
It generates a master key based on your login credentials and uses that for the encryption.
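Staying with Java for consistency with the rest of this page, here is a sketch of how that could look through the JNA platform bindings (assuming the jna-platform dependency is available); Crypt32Util wraps CryptProtectData/CryptUnprotectData, which by default protect data so that only the same Windows user account can decrypt it.

    import com.sun.jna.platform.win32.Crypt32Util;
    import java.nio.charset.StandardCharsets;

    public class DpapiDemo {
        public static void main(String[] args) {
            byte[] secret = "proxy-password".getBytes(StandardCharsets.UTF_8);

            // Encrypt under the current user's DPAPI master key (derived from the
            // user's logon credentials); this program never handles that key itself.
            byte[] blob = Crypt32Util.cryptProtectData(secret);

            // Only the same user on the same machine can undo this.
            byte[] roundTrip = Crypt32Util.cryptUnprotectData(blob);
            System.out.println(new String(roundTrip, StandardCharsets.UTF_8));
        }
    }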
If the password is retrievable by the software without any user input, then the password will always be retrievable by a (suitably technically and willfully inclined) passer-by. Open or closed source only affects how much effort is involved.
Absolutely, you can write a program to store passwords securely.
Using AES, you could have your program generate an AES Key, and have that key stored in an operating system protected area. In WinXP, this is the registry, encrypted with DPAPI. Thus the only way to access the key is to have physical access to the machine.
You need to ensure that when you generate your AES key that you do so in a cryptographically secure manner. Just using RAND won't work, nor will generating a random character string.
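For example, in Java the difference is between filling a buffer from java.util.Random (or a home-grown character string) and asking the JCA for a CSPRNG-backed key; a brief sketch:

    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.security.SecureRandom;
    import java.util.Random;

    public class KeyGenContrast {
        public static void main(String[] args) throws Exception {
            // NOT acceptable: java.util.Random is a predictable PRNG.
            byte[] weak = new byte[32];
            new Random().nextBytes(weak);

            // Acceptable: a CSPRNG-backed key generator from the JCA.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256, SecureRandom.getInstanceStrong());
            SecretKey key = kg.generateKey();
            System.out.println("Key length in bytes: " + key.getEncoded().length);
        }
    }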
Open Source has very little to do with security (in my opinion). Given the level of sophistication in tools for reverse engineering source code, even if you had a closed source solution, people determined to snoop at your code could do so.
Your effort is better spent ensuring that you follow best practice guidelines while using the chosen encryption scheme. I would argue that having your code openly looked at by a larger community would actually make your code more secure; vulnerabilities and threats would likely be identified sooner with a larger audience looking through your code.