My use case looks like this:
- encrypt some super secret data using a key provided by the user
- when requested, ask the user for that key and decrypt the data
- re-encrypt the data with a key that will allow my program to access the data for a user-defined period of time
- if the token has expired, ask the user for the original key again
This feels like it should be a solved problem, but my google-fu is weak today.
I could just decrypt the data and store it with a key known to my program, but cracking my code would expose those secrets.
I could, and maybe should, use some local secure storage for this data, like the macOS Keychain, but I'd like to keep the number of native variations to a minimum.
The answer to this specific question appears to be, no it is not possible to do locally.
The best solution to this kind of problem, i.e. a temporary cache of data decrypted with a user's key, is either to use security tooling present on the user's machine (e.g. the macOS Keychain), or to simply re-encrypt the cache with a key known to the program and accept that it is possible to reverse engineer the program to find the decryption key.
My plan to deliver this is to generate an encryption key when the program is first run, and use that plus a known salt to encrypt my cache. The idea is that the program, the generated key, and the cache would all need to be compromised together to decrypt my cache.
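A minimal sketch of that plan in Java, assuming AES-GCM for the cache encryption (the class and method names are illustrative, not from any existing codebase):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CacheCrypto {
    // Generate a fresh 256-bit AES key on first run (it would then be persisted locally).
    static SecretKey generateKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Encrypt cache bytes with AES-GCM; a random 12-byte IV is prepended to the ciphertext.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Decrypt by reading the IV back off the front of the blob.
    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, blob, 0, 12));
        return c.doFinal(blob, 12, blob.length - 12);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = generateKey();
        byte[] secret = "super secret data".getBytes(StandardCharsets.UTF_8);
        byte[] blob = encrypt(key, secret);
        System.out.println(new String(decrypt(key, blob), StandardCharsets.UTF_8));
    }
}
```

Note that with GCM, a per-message random IV replaces the "known salt" idea; the security still rests entirely on keeping the generated key away from the attacker.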
Related
Are there any examples of using encryption to encrypt the disk-cache used by OkHttp's HttpResponseCache? Naively, I don't think this is a very hard thing to do, but I'd appreciate any advice or experience to avoid security-pitfalls.
Without too many specifics, here's what I'm trying to achieve: a server that accepts users' api-keys (typically 40-character random strings) for established service X and makes many API calls on the user's behalf. The server won't persist users' api-keys, but a likely use case is that users will periodically call the server, supplying the api-key each time. Established service X uses reasonable rate-limiting, but supports conditional (ETag, If-Modified-Since) requests, so server-side caching by my server makes sense. The information is private, though, and the server will be hosted on Heroku or the like, so I'd like to encrypt the files cached by HttpResponseCache so that if the machine is compromised, they don't yield any information.
My plan would be to create a wrapper around HttpResponseCache that accepts a secret key - which would actually be a hash of half of the api-key string. This would be used to AES-encrypt the cached contents and keys used by HttpResponseCache. Does that sound reasonable?
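The key-derivation part of that plan could be sketched as follows, assuming SHA-256 over the first half of the api-key (names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.spec.SecretKeySpec;

public class CacheKeyDerivation {
    // Hypothetical: derive an AES key from the first half of a 40-char api-key,
    // so the full api-key never needs to be stored alongside the cache.
    static SecretKeySpec deriveKey(String apiKey) throws Exception {
        String half = apiKey.substring(0, apiKey.length() / 2);
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(half.getBytes(StandardCharsets.UTF_8));
        return new SecretKeySpec(digest, "AES"); // 32-byte digest -> AES-256 key
    }

    public static void main(String[] args) throws Exception {
        SecretKeySpec k = deriveKey("0123456789abcdef0123456789abcdef01234567");
        System.out.println(k.getAlgorithm() + " " + k.getEncoded().length * 8 + "-bit");
    }
}
```

Since half of a random 40-character api-key already carries substantial entropy, a single hash pass is defensible here, though a proper KDF (e.g. HKDF or PBKDF2) would be the more conservative choice.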
Very difficult to do with the existing cache code. It's a journaled on-disk datastructure that is not designed to support privacy, and privacy is not a feature you can add on top.
One option is to mount an encrypted disk image and put the cache in there. Similar to Mac OS X's FileVault for example. If you can figure out how to do that, you're golden.
Your other option is to implement your own cache, using the existing cache as a guide. Fair warning: the OkResponseCache is subject to change in the next release!
The following youtube video does a pretty good job at summarizing how EFS works.
For those interested, I have attached a summary of the video's contents below. However, this leaves me with one question concerning security:
When a user logs on in Windows, presumably a hash is computed from the password (or alternatively from the password plus the username and perhaps other data such as a salt). When a user first creates a password, that hash must be stored somewhere on the hard drive, if I am not mistaken. At least, old Unix systems used to work in that manner (with the hash stored in /etc/passwd). Thus when a user logs on, the password hash is computed and compared to what is stored in that file in order to authenticate the user. If the hashes match, the user is logged in.
So far so good. If the above mechanism is the one used (on modern Windows systems), this means that when someone hacks into a Windows system, they can read the password hash. The special Microsoft symmetric encryption algorithm (as described below) is stored on the hard drive and thus can be learned by a hacker. The password hash, plus that algorithm, plus knowledge of where the encrypted private key is stored on the hard drive, would then allow the hacker to decrypt it and obtain the private key. And once the private key is obtained, all data encrypted using the public key in the certificate can be decrypted by the hacker.
Can someone please point out the flaw in my reasoning?
Presumably the flaw is due to a misunderstanding of mine concerning
how Windows authentication is carried out.
Thanks.
http://www.youtube.com/watch?v=YxgWsa-slOU
Summary of the contents of the above video:
- EFS (available in the NTFS file system) is designed to allow users to encrypt files and folders so that nobody except the person encrypting the file or folder can access it.
- Administrative accounts on stolen machines can be created with minimal hacking knowledge, and can thus gain access to virtually any files contained on the hard drive.
- Symmetric key encryption algorithms work about 100 to 1000 times faster than public key encryption algorithms.
- To encrypt a file: right-click -> Properties -> General -> Advanced... -> Encrypt Contents to Secure Data, then click Apply (you can choose between encrypting just the file or the file and its parent folder, then click OK). Windows will turn the file name green, and we will still have full access to the file. Once this is done, someone logging in with an administrator account will not be able to see the file.
- You can access the certificate manager with the "certmgr" command, and from there you can view the contents of the Personal -> Certificates folder, which can start out empty. When we encrypt a file in the above manner, a symmetric key called a DESX-algorithm file encryption key (FEK) is generated, and the certificate's public key is used to encrypt the FEK and store it with the encrypted data. From the certificate in the certificate store you can get access to the public key but not the private key (the certificate attests that the user is who they say they are and displays the user's public key). The certificate also points to the private key, but that private key is stored in a special location on the hard drive and is encrypted using a master key produced by a special Microsoft symmetric key algorithm. The master key is generated from a hash component of the user's username and password every time the user logs on, and the resulting symmetric key is not stored anywhere on the hard drive (i.e. it must be kept somewhere in memory).
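The hybrid scheme the summary describes (a per-file symmetric key wrapped with the certificate's public key) can be sketched generically in Java. This uses AES and RSA-OAEP rather than EFS's actual DESX/RSA details, purely as an illustration of key wrapping:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class FekWrapDemo {
    public static void main(String[] args) throws Exception {
        // Per-file symmetric key (the FEK); AES here, where EFS historically used DESX.
        SecretKey fek = KeyGenerator.getInstance("AES").generateKey();

        // The user's certificate key pair.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair user = kpg.generateKeyPair();

        // Wrap the FEK with the public key; the wrapped blob is stored with the ciphertext.
        Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        wrap.init(Cipher.WRAP_MODE, user.getPublic());
        byte[] wrappedFek = wrap.wrap(fek);

        // Only the holder of the private key can unwrap it again.
        Cipher unwrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        unwrap.init(Cipher.UNWRAP_MODE, user.getPrivate());
        SecretKey recovered = (SecretKey) unwrap.unwrap(wrappedFek, "AES", Cipher.SECRET_KEY);

        System.out.println(Arrays.equals(fek.getEncoded(), recovered.getEncoded()));
    }
}
```

This is why protecting the private key (via the master key derived at logon) is the linchpin of the whole design: whoever can unwrap the FEK can read the file.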
The hash value that is used to access the private key, which unlocks the symmetric key, is not the same as the hash value that is stored (used for authentication). Both hashes are derived from the password, but they are not computed the same way. Neither are reversible, and they cannot be used interchangeably.
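The general idea, two independent, non-interchangeable values derived from one password, can be illustrated with a context-separation sketch. This is not Windows' actual algorithm; it just uses PBKDF2 with different salts to show that the two outputs tell you nothing about each other:

```java
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class TwoHashes {
    // Derive a context-separated value from the password (illustrative only):
    // different salts yield unrelated, non-interchangeable outputs.
    static byte[] derive(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        char[] pw = "hunter2".toCharArray();
        byte[] authHash = derive(pw, "auth".getBytes());     // stored, checked at login
        byte[] keyHash  = derive(pw, "key-wrap".getBytes()); // unlocks the master key
        // Stealing the stored authentication hash does not yield the key-unlocking value.
        System.out.println(Arrays.equals(authHash, keyHash));
    }
}
```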
To access your files, they need you to either be logged in already, or they need your password.
Also note that EFS normally designates the administrator or domain administrator as the "recovery agent". When the private key is stored, it also stores a copy that can be accessed by the administrator.
As shown in Figure 15.9, encryption with EFS requires the presence of at least two certificates in the certificate store on the local computer: one for the user (file owner) and one for a recovery agent account.
You can disable this feature by setting another of your accounts as the recovery agent, but in a domain, normally your domain administrator will set this policy and not allow you to disable it. So, the administrator can still access your files.
As long as an attacker doesn't gain the password for the recovery agent's account (or yours), your data should still be safe from an attacker, assuming the attacker isn't the same person as the recovery agent.
It's important to have strong passwords, keep them safe, and avoid running malicious software that could access the data directly.
Thanks for your views on my YouTube video. I am certainly no expert on the details of current encryption technology and so my answer won't do your question justice. The video is intended to give someone who is unfamiliar with the details of EFS a more coherent understanding of how it all works.
However, having said that, it looks like the previous reply answers the question. Hashes are not reversible. I think I used the words 'virtually impossible' to reverse engineer, but really hashes are used because they cannot be reversed to give the passwords. Password-cracking programs, from my limited understanding, start with a plaintext word from a dictionary, use the same hash algorithm, and attempt to generate the same hash as the target hash they are trying to crack. As long as you've used a good password, the hash can't be cracked; bad passwords are the only way passwords get cracked.
It is easy to set up an administrative account if you have access to a machine, but any newly created account will not have access to any private keys. A recovery agent has to be set up PRIOR to encrypting anything with EFS in order for the recovery agent to have access to a user's files. But then, both the recovery agent's private key hash and the target person's private key hash are unrecoverable to a new admin account.
I think that's the way it needs to work, or there is no real security.
Dave Crabbe
How can I securely store a crypto key object of type javax.crypto.SecretKey during a user session in a java web application? I have to manage such a key, because I can create that key only after login but may need that key later for some decryption of sensitive user data.
The secret key itself is derived from the user's password by a password-based key derivation function (currently "PBKDF2WithHmacSHA1"). The salt and iteration count used are persisted in the database. With those parameters, password, salt, and iterations, I can recreate that key right after login, when the password is available. After that, I'd like to keep the generated key in memory, rather than keeping the plain password around the whole time.
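That recreation step might look like this (the parameter values are illustrative):

```java
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class SessionKey {
    // Recreate the AES key right after login from the password plus the salt
    // and iteration count persisted in the database.
    static SecretKey recreateKey(char[] password, byte[] salt, int iterations)
            throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        SecretKey derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec);
        spec.clearPassword(); // drop the plain password as soon as possible
        return new SecretKeySpec(derived.getEncoded(), "AES");
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8};
        SecretKey k = recreateKey("correct horse".toCharArray(), salt, 65_536);
        System.out.println(k.getAlgorithm() + "/" + k.getEncoded().length);
    }
}
```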
Since I'm using Spring / Hibernate, is it safe to put that key object into a bean with session scope? Such an object exists in-memory only and should be safe, isn't it?
The general question: is it possible to build secure environments if the time a secret key becomes available differs from the time the key should be used, even by some minutes?
It all depends on what your requirement/definition of 'safe' is for this project.
Keeping the secret key in memory, in session scope, is 'safe' from the perspective that it theoretically should not be accessible from other sessions. Unless, of course, there are bugs or security vulnerabilities in Spring, the web container, or your own code; take a look at session hijacking, for example, and make sure you understand the potential risks.
On the other hand, once the secret key is in memory in readable form, it can potentially be recovered via a memory dump or through an unsecured swap file. If the session is distributed or persistent, it could be intercepted when session data is transmitted to another node or persisted to disk or a database. Granted, this is relatively more difficult and would require access to the network or the box which runs your software.
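One common mitigation for the memory-dump risk is to shorten the lifetime of secrets, for example holding the password in a char[] (not an immutable String) and wiping it the moment the key has been derived. A minimal sketch:

```java
import java.util.Arrays;

public class WipeDemo {
    public static void main(String[] args) {
        // Hold secrets in a char[] so they can be overwritten, shortening the
        // window in which a heap dump or swap file would expose them.
        char[] password = {'s', 'e', 'c', 'r', 'e', 't'};
        try {
            // ... derive the session key from the password here ...
        } finally {
            Arrays.fill(password, '\0'); // wipe as soon as it is no longer needed
        }
        System.out.println((int) password[0]); // the secret is gone from this buffer
    }
}
```

This does not defeat an attacker who can dump memory at exactly the right moment, but it narrows the exposure from "whole session" to "a few milliseconds around login".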
I was thinking of making a small tool. It is not important what the tool will do. The important thing, is that the tool will need to store some sensitive information on the user's HDD. EDIT: The information that will be stored is USER'S information - I'm not trying to protect my own content, that I distribute with the app.
I understand that I need to encrypt this information. But then, where do I safely store the encryption password? It's some sort of an infinite recursion...
So, is there a way, to encrypt information on windows, and have windows securely manage the passwords? When I say windows I mean Windows XP SP2 or later.
I should also note, that users on the same system must not have access to other users information (even when they are both running my application).
I'm looking for both - .NET 2.0 (C#) and native (C/C++) solutions to this problem.
is there a way, to encrypt information on windows, and have windows securely manage the passwords?
CryptProtectData: http://msdn.microsoft.com/en-us/library/windows/desktop/aa380261(v=vs.85).aspx
Using from .NET: http://msdn.microsoft.com/en-us/library/aa302402.aspx
Historically, Protected Storage (available in XP, read-only in vista+): http://msdn.microsoft.com/en-us/library/bb432403%28VS.85%29.aspx
You should consider using DPAPI for this purpose. It will encrypt your data with a special (internal) symmetric key that is managed on a per-user basis. You don't even need to ask for passwords in this case, because different users on the system will have different keys assigned to them.
The downside of it might be that you can't recover the data if the user is deleted/Windows reinstalled (I believe that this is the case, not quite sure though). In that case encrypt the data with a "self-generated" key derived from the password and store the password in registry/file encrypted using DPAPI.
You can use the native encryption facility. Set the encrypt attribute on your folder or file (from the property page, click on the "advanced" button). Then you can set the users that can access the file (by default this only includes the file creator). The big advantage of this solution is that it is totally transparent from the application and the users points of view.
To do it programmatically: using the Win32 API, call EncryptFile() on the directory where you want to store your sensitive per-user data. From now on all newly created files within this dir will be encrypted and only readable by their creator (that would be the current user of your app). Alternatively you can use the FILE_ATTRIBUTE_ENCRYPTED flag on individual files at creation time. You can check encryption info from the explorer on the file's property page, and see that app-created files are correctly encrypted and restricted to their respective users. There is no password to store or use, everything is transparent.
If you want to hide data from all users then you can create a special app-specific user and impersonate it from your app. This, along with ACLs, is the blessed technique on Windows for system services.
You might want to look at Isolated Storage, which is a way of storing settings and other data on a per-application basis automatically.
See an example and MSDN.
This is an alternative to storing normal settings in the registry, and a better one in a lot of cases. I'm not sure how the data is stored to file, however, so you'd need to check; you wouldn't want it to be accessible, even encrypted, to other users. From memory, only the app that created the storage can open it, but that needs checking.
Edit:
From memory when I last used this, a good approach is to write a "Setting" class which handles all the settings etc. in your app. This class then has the equivalent of Serialize and DeSerialize methods which allow it to write all its data to an IsolatedStorage file, or load them back again.
The extra advantage of implementing it in this way is you can use attributes to mark up bits of the source and can then use a Property Grid to quickly give you user-edit control of settings (the Property Grid manipulates class properties at runtime using reflection).
I recommend you look at the Enterprise Library Cryptography Application Block. Check this blog post. Windows has a built in Data Protection API for encrypting data, but the Crypto Application Block makes it more straightforward.
Um, what you're trying to achieve is exactly what DRM tried to achieve. Encrypt something then give the user the keys (however obfuscated) and the crypto. They did it with DVDs. They did it with Blu-Ray. They did it with iTunes.
What you are proposing to do will never be secure. Your average lay person will probably not figure it out, but any sufficiently motivated attacker will work it out and discover the keys, the algorithm and decrypt the data.
If all you're doing is encrypting user data then ask the user for their password. If you're trying to protect your internal data from the user running the application you're S.O.L.
Erm, hash the password? You don't need to store the real deal anywhere on the machine, just a hashed password (possibly salted too). Then when the user enters their password, you perform the same operation on it and compare the result to the hash you've stored on disk.
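A minimal sketch of that scheme (in practice a slow, salted KDF such as PBKDF2 or bcrypt is preferable to a single SHA-256 pass):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class PasswordCheck {
    // Store salt + hash(salt || password); never the password itself.
    static byte[] hash(byte[] salt, String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        return md.digest(password.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        byte[] stored = hash(salt, "open sesame"); // this is what goes on disk

        // Later: hash the entered password with the same salt and compare
        // using a constant-time comparison.
        boolean ok = MessageDigest.isEqual(stored, hash(salt, "open sesame"));
        boolean bad = MessageDigest.isEqual(stored, hash(salt, "open sesme"));
        System.out.println(ok + " " + bad);
    }
}
```

Note this only works for *verifying* a password; if the application needs the secret back (e.g. a proxy password to send on the wire), hashing alone is not enough and something like DPAPI is needed.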
Is it possible to store passwords on the local system (Windows XP) that can only be accessed by the application itself?
My instinctive answer would be "no". Even if some kind of hashing or encryption is used, I would think that as long as the source code is available, a determined seeker could always use it to retrieve the password.
I'm working on a personal open source hobby project in which I would like to give users the option of storing passwords on disk so that they don't need to type them every time they use the software. One example of a password that could be stored would be the one used to authenticate on their network's proxy server.
There are a few related questions here on Stack Overflow and the most appropriate solution sounds like using an operating system service like DPAPI.
Is the basic premise correct that as long as the password is retrievable by the software without any user input, and the source code is open source, that the password will always be retrievable by a (suitably technically and willfully inclined) passer-by?
You could read about the Pidgin developers' take on it here: Plain Text Passwords.
Using the DPAPI in UserData mode will only allow your account on your machine to access the encrypted data.
It generates a master key based off of your login credentials and uses that for the encryption.
If the password is retrievable by the software without any user input, then the password will always be retrievable by a (suitably technically and willfully inclined) passer-by. Open or closed source only affects how much effort is involved.
Absolutely, you can write a program to store passwords securely.
Using AES, you could have your program generate an AES Key, and have that key stored in an operating system protected area. In WinXP, this is the registry, encrypted with DPAPI. Thus the only way to access the key is to have physical access to the machine.
You need to ensure that when you generate your AES key that you do so in a cryptographically secure manner. Just using RAND won't work, nor will generating a random character string.
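In Java, for instance, that means obtaining the key from a CSPRNG-backed KeyGenerator rather than a plain rand()-style generator or a random character string (illustrative sketch):

```java
import java.security.SecureRandom;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyGenDemo {
    public static void main(String[] args) throws Exception {
        // KeyGenerator seeded from SecureRandom draws from a cryptographically
        // secure source, unlike rand()-style generators whose output is predictable.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256, new SecureRandom());
        SecretKey key = kg.generateKey();
        System.out.println(key.getEncoded().length * 8); // key size in bits
    }
}
```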
Open Source has very little to do with security (in my opinion). Given the level of sophistication in tools for reverse engineering source code, even if you had a closed source solution, people determined to snoop at your code could do so.
Your effort is better spent ensuring that you follow best practice guidelines while using the chosen encryption scheme. I would argue that having your code openly looked at by a larger community would actually make your code more secure; vulnerabilities and threats would likely be identified sooner with a larger audience looking through your code.