Working on implementing flash encryption and secure boot on ESP32. The first step is to get flash encryption working. I am targeting the following settings:
Release Mode
No reflash over UART.
Use the ESP-generated key (no need to reflash anything).
Along with my 2 OTA app partitions, I have a data partition of sub-type nvs that stores my device security certificate for access to my cloud backend.
In my partitions.csv file, I don't think I can mark the nvs partition as encrypted, as that would brick my device. How can this be made secure?
Do I need NVS encryption and an nvs_keys partition?
NVS partitions can also be encrypted, but it's done differently from encrypting everything else in flash. In short, you need two partitions. One is your NVS data partition, which gets encrypted using the NVS encryption scheme (XTS-AES). The other is the nvs_keys partition, which holds the encryption keys for the former, and that one gets encrypted using the standard flash encryption algorithm. If it sounds confusing, welcome to the club. But once you dig through it, it works fine.
Pre-generating an encrypted NVS data partition with some data on it (e.g. your certificate) now gains an extra step: you encrypt it on your computer, using the nvs_partition_gen.py tool from ESP-IDF, before writing it to flash.
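To make the two-partition layout concrete, here is a sketch of what such a partitions.csv might look like. Offsets and sizes are illustrative (the original post gives none); the key points are that the nvs data partition carries no `encrypted` flag (NVS encryption handles it separately), the nvs_keys partition does carry it, and app partitions are encrypted automatically once flash encryption is enabled:

```
# Name,     Type, SubType,  Offset,  Size,     Flags
nvs,        data, nvs,      0x9000,  0x6000,
otadata,    data, ota,      0xf000,  0x2000,
app0,       app,  ota_0,    0x20000, 0x1A0000,
app1,       app,  ota_1,    ,        0x1A0000,
nvs_keys,   data, nvs_keys, ,        0x1000,   encrypted
```

The nvs_keys partition only needs to be large enough for one key block (4 KB is the usual size in ESP-IDF examples).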
Related
My use case looks like this:
encrypt some super secret data using a key provided by user
when requested, ask the user for that key and decrypt the data
re-encrypt the data with a key that will allow my program to access the data for a user defined period of time
if token expired ask user for original key again
This feels like it should be a solved problem, but my google-fu is weak today.
I could just decrypt the data and re-encrypt it with a key known to my program, but cracking my code would expose those secrets.
I could, and maybe should, use some local secure storage for this data, like the macOS keychain, but I'd like to keep the number of native variations to a minimum.
The answer to this specific question appears to be: no, it is not possible to do locally.
The best solution to this kind of problem (i.e. a temporary cache of data decrypted with a user's key) is either to use security tooling present on the user's machine, such as the macOS keychain, or to simply re-encrypt the cache with a key known to the program and accept that it is possible to reverse engineer the program and find the decryption key.
My plan is to generate an encryption key when the program is first run and use that, plus a known salt, to encrypt my cache. The idea is that the program, the generated key, and the cache would all need to be compromised together to decrypt the cache.
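The plan above (first-run key, known salt, time-limited access) can be sketched as follows. This is an illustrative outline, not the poster's actual implementation: the class name, TTL mechanism, and salt value are all invented here, and the derived key would still need to feed a real AEAD cipher (e.g. AES-GCM from the third-party `cryptography` package) to encrypt the cache bytes.

```python
import time
import secrets
import hashlib

def derive_cache_key(program_key: bytes, salt: bytes) -> bytes:
    # Stretch the program's stored key with the known salt; decrypting
    # the cache then requires the binary, the generated key, and the
    # cache file together, as described in the plan.
    return hashlib.pbkdf2_hmac("sha256", program_key, salt, 200_000)

class CacheKeyManager:
    def __init__(self, ttl_seconds: float):
        self.program_key = secrets.token_bytes(32)  # created on first run
        self.salt = b"known-application-salt"       # known to the program
        self.ttl = ttl_seconds
        self.expires_at = 0.0                       # starts expired

    def unlock(self):
        # Called after the user has supplied their original key.
        self.expires_at = time.time() + self.ttl

    def cache_key(self) -> bytes:
        # Hand out the derived key only inside the user-defined window;
        # afterwards the caller must ask the user for their key again.
        if time.time() >= self.expires_at:
            raise PermissionError("token expired; re-prompt for user key")
        return derive_cache_key(self.program_key, self.salt)
```

The expiry check is purely advisory inside the same process, of course; it only limits how long the derived key is handed out, not what a debugger can read.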
I am designing a generic Java library which needs to sign messages before sending and verify messages before accepting. Sign and verify need to work with both of the following constraints:
Use a raw 32-character key, with additional details like key start date, end date, grace period, and the algorithm supplied as raw information (I do not have a choice; I have to accept it as raw data).
Use a proper PKI certificate containing a public key
For the PKI-based secret, it is straightforward to use JKS/PKCS12 to store the information and use it.
The problem I am facing is how to manage and store the raw information. What should the data structure be? So far my options are:
Use a JCEKS keystore (provided by the JRE) to store all the raw information as secret entries, in my own made-up data structure, and resolve it at runtime when my library executes
Use JKS/PKCS12 along with an X.509 certificate data structure, storing all the raw information in its extensions
What I am really looking for is best practice for these kinds of versatile requirements, i.e. managing un-managed secret properties.
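One common shape for the first option is a small self-describing record holding the raw key plus its metadata, used directly for HMAC sign/verify. Sketched here in Python to keep all code in this write-up in one language; the same structure maps directly onto a Java POJO whose key material is stored as a `SecretKeyEntry` in a JCEKS keystore. Field names and dates are invented for illustration.

```python
import hmac
import hashlib
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RawSigningKey:
    # All fields arrive as raw data from the key provider.
    key: bytes          # the raw 32-character key
    algorithm: str      # e.g. "HmacSHA256"
    start: date
    end: date
    grace_days: int

    def active(self, on: date) -> bool:
        # Valid from the start date until the end date plus grace period.
        return self.start <= on <= self.end + timedelta(days=self.grace_days)

    def sign(self, message: bytes) -> str:
        return hmac.new(self.key, message, hashlib.sha256).hexdigest()

    def verify(self, message: bytes, signature: str) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(self.sign(message), signature)

k = RawSigningKey(key=b"0123456789abcdef0123456789abcdef",
                  algorithm="HmacSHA256",
                  start=date(2024, 1, 1), end=date(2025, 1, 1),
                  grace_days=7)
sig = k.sign(b"payload")
```

Keeping the metadata in your own record (rather than shoehorning it into X.509 extensions) keeps the raw-key path honest about what it is: a shared secret with a validity window, not a certificate.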
I want to prevent modification of my application's files stored in the Documents directory.
I tried the CryptoSwift and AES256CBC libraries, but they greatly slow down my application, which does many file reads and writes.
Is enabling the Data Protection capability on my application project enough to prevent users from modifying these files' content?
The data protection feature is secure against everyone except the iPhone's owner, provided the iPhone is not jailbroken. It uses AES encryption, and the encryption key is stored in the keychain.
The data protection feature uses Common Crypto, which uses the hardware encryption engine and is very fast: on my iPhone 6s, 1 MB encrypts in ~2.3 ms, a rate of over 400 MB/s.
My question is about general crypto implementations, specifically how they prevent the private key from being read out of memory.
I know that in SSH, for example, the private key is saved on disk with permissions only for the user. But when the ssh process (or any other crypto implementation, whether asymmetric or symmetric) needs to decrypt something, it obviously has to read the private key (if it uses one). How do such implementations prevent other processes from reading the memory holding the private key?
Without encapsulating encryption hardware the key will be available.
iOS gets around this when decrypting binaries to run by using hardware decryption in the DMA (hardware) path between flash storage and RAM.
Serious security, such as iMessage's cloud store-and-forward and banking, to name a couple, uses HSMs (Hardware Security Modules). The keys are never outside the hardware unless encrypted by an HSM key, and the encryption is done inside the HSM. But even that is not enough in some situations: the HSM must be in a secure area with sign-in. Further, they are tamper-resistant; they clear their keys if they sense a physical or access attack. To administer the HSM, two keys are needed, and three or more administrators must be present, each inserting their smart card (which in the case of iMessage is shredded after initial setup) and entering the associated code.
OK, but the real question is how much security you need. Carefully evaluate who your potential attackers are, how much time, money, and technical talent they have, and how much they are willing to spend on your data. Evaluate the value of your data to you, your users, attackers, and your reputation.
If you are protecting against the device's authorized user(s), there is little you can do; what you need is DRM.
If you are protecting against a well-funded or repressive government, there is little chance.
But if you do it right and control both the software and the hardware, you can come very close, right up until the court order is issued (see FBI vs. Apple).
Is it possible to store passwords on the local system (Windows XP) that can only be accessed by the application itself?
My instinctive answer would be "no". Even if some kind of hashing or encryption is used, I would think that as long as the source code is available, a determined seeker could always use it to retrieve the password.
I'm working on a personal open source hobby project in which I would like to give users the option of storing passwords on disk so that they don't need to type them every time they use the software. One example of a password that could be stored would be the one used to authenticate on their network's proxy server.
There are a few related questions here on Stack Overflow and the most appropriate solution sounds like using an operating system service like DPAPI.
Is the basic premise correct that as long as the password is retrievable by the software without any user input, and the source code is open source, that the password will always be retrievable by a (suitably technically and willfully inclined) passer-by?
You could read about the Pidgin developers' take on it here: Plain Text Passwords.
Using the DPAPI in UserData mode will only allow your account on your machine to access the encrypted data.
It generates a master key based off of your login credentials and uses that for the encryption.
If the password is retrievable by the software without any user input, then the password will always be retrievable by a (suitably technically and willfully inclined) passer-by. Open or closed source only affects how much effort is involved.
Absolutely, you can write a program to store passwords securely.
Using AES, you could have your program generate an AES key and store it in an operating-system-protected area. On Windows XP, this is the registry, encrypted with DPAPI. The key can then only be recovered from that user's account on that machine.
You need to ensure that when you generate your AES key, you do so in a cryptographically secure manner. Just calling rand() won't work, nor will generating a random character string.
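The point about weak randomness can be made concrete. Sketched in Python here; the same distinction exists in every language (e.g. `SecureRandom` vs `Random` in Java, `CryptGenRandom` vs `rand()` on Win32):

```python
import random
import secrets

# Suitable: the OS CSPRNG, designed for generating key material.
aes_key = secrets.token_bytes(32)   # 256-bit AES key

# Not suitable: a deterministic PRNG. Its internal state can be
# reconstructed from a small amount of observed output, making every
# "key" it produces predictable to an attacker.
weak_key = bytes(random.getrandbits(8) for _ in range(32))
```

Both calls produce 32 bytes, which is exactly why this mistake is easy to make: the weak key looks just as random.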
Open Source has very little to do with security (in my opinion). Given the level of sophistication in tools for reverse engineering source code, even if you had a closed source solution, people determined to snoop at your code could do so.
Your effort is better spent ensuring that you follow best practice guidelines while using the chosen encryption scheme. I would argue that having your code openly looked at by a larger community would actually make your code more secure; vulnerabilities and threats would likely be identified sooner with a larger audience looking through your code.