Is encrypting a password in memory overkill? - macos

I need to build a secure application for the Mac. For that I am using a master password that exists only in the head of its creator.
The password first has to be entered in a secure text field*; it can then be used to encrypt and decrypt files. While the application remains open, that master password is stored in a variable, meaning it exists in memory. Would encrypting this password in memory be overkill?
The reason I ask is that before the master password can be encrypted in memory it already exists as a plain variable, so it is already exposed to memory-scanning attacks. Is this something I should be worried about?
I read the following on https://www.apple.com/macos/security/:
Runtime protections defend at the core. The technically sophisticated
runtime protections in macOS work at the very core of your Mac to help
keep your system safe. Built right into the processor, the XD (execute
disable) feature creates a strong wall between memory used for data
and memory used for executable instructions. This protects against
malware that attempts to trick the Mac into treating data the same way
it treats a program in order to compromise your system. Address Space
Layout Randomization (ASLR) changes the memory locations where
different parts of an app are stored. This makes it difficult for an
attacker to do harm by finding and reordering parts of an app to make
it do something it wasn’t intended to do. macOS brings ASLR to the
memory used by the kernel at the heart of the operating system, so the
same defenses work at every level in your Mac.
Can I conclude that the Mac already has built-in protection against memory scanning and hijacking?
(* I am aware this might create a keylogger vulnerability)

In any case, you would first derive a key from the user's password and use this key to encrypt the files. So instead of holding the password in memory, you can immediately calculate the key with a key-derivation function and hold only the key in memory. The advantage is that an attacker can at most learn the key, which allows decrypting the files, but not the original password, which may be reused elsewhere.
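For illustration, a minimal Swift sketch of that idea using PBKDF2 from CommonCrypto; the salt handling and iteration count are placeholders, not a recommendation:

    import Foundation
    import CommonCrypto

    // Derive a 256-bit key from the master password so that only the derived key,
    // not the password itself, has to stay in memory. Salt and rounds are illustrative.
    func deriveKey(from password: String, salt: Data, rounds: UInt32 = 200_000) -> Data? {
        var key = Data(count: kCCKeySizeAES256)
        let status = key.withUnsafeMutableBytes { keyBuffer -> Int32 in
            salt.withUnsafeBytes { saltBuffer -> Int32 in
                CCKeyDerivationPBKDF(
                    CCPBKDFAlgorithm(kCCPBKDF2),
                    password, password.utf8.count,
                    saltBuffer.bindMemory(to: UInt8.self).baseAddress, salt.count,
                    CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                    rounds,
                    keyBuffer.bindMemory(to: UInt8.self).baseAddress, kCCKeySizeAES256
                )
            }
        }
        return status == Int32(kCCSuccess) ? key : nil
    }

The salt should be random and stored alongside the encrypted files; the derived key can then be handed straight to whatever AES routine encrypts them.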
Some operating systems offer a specialized SecureString, which is probably the nearest you can get to what you want: it holds a string encrypted in memory and can wipe it from there. I do not know whether OS X provides anything like this.
I doubt that an encrypted key in memory is of much use. If an attacker is capable of analysing the memory, they will probably be able to decrypt it as well; the application must be able to decrypt the key after all. But it certainly raises the bar and makes the attack more work.
The linked article addresses a different problem in my opinion: preventing an attacker from placing executable code in memory (as input data) and tricking the processor into executing it afterwards.

The existence of tools such as mach_inject and Cycript clearly indicates that your program's memory is never safe. In the iOS world, the security of the keychain comes from the fact that the key is engraved in a separate hardware chip and is never copied to application memory (see the Secure Enclave sketch after the list below). If you're doing the encryption/decryption inside your program, by definition it is prone to being hijacked in some form. Key things to consider:
What do you want to protect? The data? The encryption method? Both?
Having access to your binary, an attacker is likely to reverse engineer it; what are the implications?
Do you need the actual encryption/decryption to happen in your program? If at least one crucial step required for the data to be useful were moved to an external backend, it could be far safer.
Supplementing your solution with file system encryption like FileVault or TrueCrypt will always improve security.
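To make the "key engraved in separate hardware" point concrete on the Mac side, here is a hedged sketch, assuming a Mac with a Secure Enclave (T2 or Apple silicon); the application tag is invented and error handling is trimmed:

    import Foundation
    import Security

    // Create a private key that lives only inside the Secure Enclave; the app
    // never holds the raw key bytes, it only gets a SecKey handle.
    func makeEnclaveKey() -> SecKey? {
        let attributes: [String: Any] = [
            kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
            kSecAttrKeySizeInBits as String: 256,
            kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
            kSecPrivateKeyAttrs as String: [
                kSecAttrIsPermanent as String: true,
                kSecAttrApplicationTag as String: Data("com.example.masterkey".utf8)
            ]
        ]
        var error: Unmanaged<CFError>?
        return SecKeyCreateRandomKey(attributes as CFDictionary, &error)
    }

    // Encrypt with the enclave-backed key; the matching SecKeyCreateDecryptedData
    // call later also runs inside the enclave rather than in application memory.
    func seal(_ plaintext: Data, with privateKey: SecKey) -> Data? {
        guard let publicKey = SecKeyCopyPublicKey(privateKey) else { return nil }
        var error: Unmanaged<CFError>?
        return SecKeyCreateEncryptedData(publicKey,
                                         .eciesEncryptionCofactorX963SHA256AESGCM,
                                         plaintext as CFData,
                                         &error) as Data?
    }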

Related

How do crypto implementations prevent reading the saved private key out of memory while it is in use?

My question is about general crypto implementations, specifically how they prevent the private key from being read out of memory.
I know that in SSH, for example, the private key is saved on the HDD with permissions only for the user. But when the SSH process (or any other crypto implementation, whether asymmetric or symmetric) needs to decrypt something, it obviously has to read the private key (if it uses one). How do such implementations prevent other processes from reading the memory that holds the private key variable?
Without encapsulating encryption hardware, the key will be available.
iOS gets around this when decrypting binaries to run by using hardware decryption in the DMA path between flash storage and RAM.
Serious security, such as iMessage's cloud store-and-forward and banking to name a couple, uses HSMs (Hardware Security Modules). The keys are never outside the hardware unless encrypted by an HSM key, and the encryption itself is done inside the HSM. Even that is not enough in some situations: the HSM must sit in a secure area with sign-in. Further, HSMs are tamper-resistant and clear their keys if they sense a physical or access attack. Administering one requires two keys for the HSM, and three or more administrators must be present, each inserting their smart card (which in the case of iMessage is shredded after initial setup) and entering the associated code.
OK, but the real question is: how much security do you need? Carefully evaluate who your potential attackers are, how much time, money and technical talent they have, and how much they are willing to spend to get your data. Evaluate the value of your data to you, your users, your attackers and your reputation.
If you are protecting against the device's authorized user(s), there is little you can do; what you need is DRM.
If you are protecting against a well-funded or repressive government, there is little chance.
But if you do it right and control both the software and the hardware, you can come very close, at least until the court order is issued (see FBI vs. Apple).

Encrypting OkHttp's HttpResponseCache

Are there any examples of using encryption to encrypt the disk cache used by OkHttp's HttpResponseCache? Naively, I don't think this is a very hard thing to do, but I'd appreciate any advice or experience to avoid security pitfalls.
Without too many specifics, here's what I'm trying to achieve: a server that accepts users' api-keys (typically a 40-character random string) for established service X, and makes many API calls on the user's behalf. The server won't persist users' api-keys, but a likely use case is that users will periodically call the server, supplying the api-key each time. Established service X uses reasonable rate-limiting, but supports conditional (ETag, If-Modified-Since) requests, so server-side caching by my server makes sense. The information is private, though, and the server will be hosted on Heroku or the like, so I'd like to encrypt the files cached by HttpResponseCache so that, if the machine is compromised, they don't yield any information.
My plan would be to create a wrapper around HttpResponseCache that accepts a secret key - which would actually be a hash of half of the api-key string. This would be used to AES-encrypt the cached contents and keys used by HttpResponseCache. Does that sound reasonable?
Very difficult to do with the existing cache code. It's a journaled on-disk data structure that is not designed to support privacy, and privacy is not a feature you can add on top.
One option is to mount an encrypted disk image and put the cache in there. Similar to Mac OS X's FileVault for example. If you can figure out how to do that, you're golden.
Your other option is to implement your own cache, using the existing cache as a guide. Fair warning: the OkResponseCache is subject to change in the next release!
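For what it's worth, the encrypt-before-write idea from the question looks roughly like the sketch below, written in Swift with CryptoKit for consistency with the rest of this page (an actual OkHttp wrapper would do the same with javax.crypto in Java); all names are hypothetical:

    import Foundation
    import CryptoKit

    // Hash part of the api-key into an AES key, then seal each cache entry
    // before it is written to disk and open it again on read.
    func cacheKey(fromAPIKeyFragment fragment: String) -> SymmetricKey {
        SymmetricKey(data: SHA256.hash(data: Data(fragment.utf8)))
    }

    func sealCacheEntry(_ plaintext: Data, key: SymmetricKey) throws -> Data {
        let box = try AES.GCM.seal(plaintext, using: key)
        return box.combined!   // nonce + ciphertext + tag in one blob
    }

    func openCacheEntry(_ blob: Data, key: SymmetricKey) throws -> Data {
        try AES.GCM.open(AES.GCM.SealedBox(combined: blob), using: key)
    }

Whether deriving the key from half of the api-key is strong enough is a separate question; the sketch only shows where the encryption step would sit.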

How well are Cocoa UI and general framework elements protected against malicious attacks?

So far I have had little concern about security, because I have been developing only promotional and non-critical iPhone apps.
Currently, however, I'm working on a Mac application that requires a few more thoughts on the matter, because it deals with sensitive user information.
While I know that I must take care to protect the data in its physical form (on disk), for example by encrypting it, I wonder how safe it is while it resides in memory in the course of normal use of the application.
Thus I'd like to know:
How safe is my application as long as it is built only upon framework elements such as NSTextField and Core Data?
How vulnerable are Cocoa input elements to malicious attacks? And what would be the best way to protect data stored using Core Data?
Objective-C is a dynamic language, which means that it is possible to replace classes and specific methods of classes at runtime. For example, this is how the 1Password plugin finds its way into Safari, and how Dropbox finds its way into the Finder. It is currently possible for a malicious attacker to use the low-level mach_inject API, or a number of other slightly higher-level methods, such as SIMBL or OSAX injection, to load code into your app. Once code is loaded into your app, the dynamic nature of Objective-C makes it possible in theory to replace NSTextField with a subclass of the attacker's choice, or to replace specific methods of the class, including ones that listen to and store user input. The secure version of NSTextField, which is designed for passwords, may have some protections against this, though I haven't found specific documentation to that effect. Security.framework and the keychain APIs in general do have protection for your data in memory, and they are not based on Objective-C, so it is significantly harder (although maybe still possible) to interfere with them.
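To make the "replace specific methods at runtime" point concrete, here is a toy Swift sketch of method exchange via the Objective-C runtime; the method name is invented, and this illustrates the mechanism rather than any real attack:

    import AppKit
    import ObjectiveC

    extension NSTextField {
        // After the exchange below, reading stringValue runs this method, which can
        // observe the value before forwarding to the original getter.
        @objc dynamic func snooped_stringValue() -> String {
            let value = self.snooped_stringValue()   // now points at the original getter
            NSLog("intercepted: %@", value)
            return value
        }
    }

    func swizzleStringValue() {
        guard
            let original = class_getInstanceMethod(NSTextField.self,
                                                   #selector(getter: NSTextField.stringValue)),
            let replacement = class_getInstanceMethod(NSTextField.self,
                                                      #selector(NSTextField.snooped_stringValue))
        else { return }
        method_exchangeImplementations(original, replacement)
    }

Combined with an injection vector like mach_inject or SIMBL, this is exactly the kind of substitution the answer describes.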
To add to mgorbach's answer above (which is very good), Core Data can store data in four forms:
SQLite3 Database (most common)
.plist File (e.g. XML)
Binary File
In-Memory (non-persistent storage)
None of the .plist, binary file, or SQLite formats is secure. .plist files can be read trivially. A binary file is trickier, but AFAIK it doesn't use any encryption, and any Objective-C coder should easily be able to extract its contents. SQLite isn't secure either; tools like SQLite Manager for Firefox, or Base for Mac, make it trivial to read Core Data's SQLite data.
Since no Core Data storage methods are secure, your best bet is to encrypt data before committing it to disk.
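A hedged sketch of that "encrypt before it hits the store" advice, using a reversible ValueTransformer with CryptoKit; the class name is invented, and the static key is a placeholder for real key management (e.g. the keychain):

    import Foundation
    import CryptoKit

    // Transformer that seals a String attribute with AES-GCM on the way into the
    // persistent store and opens it again on the way out.
    final class EncryptingTransformer: ValueTransformer {
        static let key = SymmetricKey(size: .bits256)   // placeholder; not persisted here

        override class func transformedValueClass() -> AnyClass { NSData.self }
        override class func allowsReverseTransformation() -> Bool { true }

        override func transformedValue(_ value: Any?) -> Any? {
            guard let text = value as? String else { return nil }
            return try? AES.GCM.seal(Data(text.utf8), using: Self.key).combined
        }

        override func reverseTransformedValue(_ value: Any?) -> Any? {
            guard let data = value as? Data,
                  let box = try? AES.GCM.SealedBox(combined: data),
                  let plain = try? AES.GCM.open(box, using: Self.key) else { return nil }
            return String(decoding: plain, as: UTF8.self)
        }
    }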
This doesn't take into consideration any in-memory attacks. Of course, for this to be successful, a system typically has to already be compromised somehow.
If an end-user has FileVault enabled (encrypts their entire home folder), secure virtual memory enabled, their Firewall on, and a strong password, they're reasonably safe against many attacks.

Write Secure Cocoa Code

I'm making an application in Cocoa and wanted to see if some strings in it were easily accessible, so I ran OTX on it and, sadly, all of my code was exposed. Is there a method I can use to make my code more "secure", or at least encrypt/hide the strings? The reason I want to encrypt the string is that it's a password for a server. I don't need it to be really secure; I just don't want the password to be so easy to find.
Thanks for any help
You should never put a password into an executable.
This is like putting the password on a sticky note next to the monitor. If a malicious hacker has your application they can eventually extract the password regardless of what language or API you use to write it.
For example, if I know that your application connects to a password-protected server but the application never asks for a password, then I know you've made the mistake of including the password. To find the password, I need only monitor the operation of the program to see what areas of code are active around the time it connects to the server. This will tell me where to focus the search for the password, regardless of how big your application is. Then it is only a matter of time until I track the password down. Encrypting the password does no good because the encryption algorithm must also be in the app, and I can unravel that as well.
Remember that there are many people out there who can unravel your code using only the raw machine code. For those people it doesn't matter what language or API you use, because it all distills to machine code in the end. Those people are the scary, skilled gods of programming, and they laugh at mere mortals such as you or me. Unfortunately, some of them are evil.
Did I mention that you should never put a password into an executable? If I didn't, let me repeat that you should never put a password into an executable.
In your particular case, as a novice programmer, you have no hope of hiding the password from someone with even a little more experience than yourself. This is yet another good reason why you should never put a password into an executable.
1. Avoid ObjC in secure code.
Because ObjC's class system depends heavily on runtime reflection, the whole interface needs to be included alongside the executable. This allows tools like class-dump to easily recover the source @interface of the binary.
Therefore, security-sensitive code should be written as plain C functions, not ObjC methods.
2. Use strip.
By default the compiler keeps all the private symbols (which makes stack traces more readable). You can use strip to delete all of these symbols.
3. Obfuscation.
The above steps only hide the code logic. But if the password is a constant string, it is immediately visible with the strings utility. You may obfuscate it by constructing the password at runtime (e.g. store the password ROT-13-encoded in the binary); a small sketch follows this list.
4. Or just change your design.
No matter how good your protection system is, since the attacker has total control of their machine, given enough time they always win. It's better to revise your design: why must the password ship with the executable at all? Why is a global password even needed?
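A toy sketch of the ROT-13 idea from step 3, with an invented helper and string; this is obfuscation only, not real protection:

    import Foundation

    // Decode a ROT-13-encoded constant at runtime so the real string never
    // appears verbatim in the binary (it is still recoverable by a debugger).
    func rot13(_ input: String) -> String {
        String(input.unicodeScalars.map { scalar -> Character in
            switch scalar.value {
            case 65...90:  return Character(UnicodeScalar((scalar.value - 65 + 13) % 26 + 65)!)
            case 97...122: return Character(UnicodeScalar((scalar.value - 97 + 13) % 26 + 97)!)
            default:       return Character(scalar)
            }
        })
    }

    let obfuscated = "frperg-cnffjbeq"        // what `strings` would see
    let password = rot13(obfuscated)          // "secret-password", built only at runtime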

Encryption of passwords on disk for open source desktop applications

Is it possible to store passwords on the local system (Windows XP) that can only be accessed by the application itself?
My instinctive answer would be "no". Even if some kind of hashing or encryption is used, I would think that as long as the source code is available, a determined seeker could always use it to retrieve the password.
I'm working on a personal open source hobby project in which I would like to give users the option of storing passwords on disk so that they don't need to type them every time they use the software. One example of a password that could be stored would be the one used to authenticate on their network's proxy server.
There are a few related questions here on Stack Overflow and the most appropriate solution sounds like using an operating system service like DPAPI.
Is the basic premise correct that, as long as the password is retrievable by the software without any user input and the source code is open, the password will always be retrievable by a (suitably technically and willfully inclined) passer-by?
You could read about the Pidgin developers' take on it here: Plain Text Passwords.
Using the DPAPI in UserData mode will only allow your account on your machine to access the encrypted data.
It generates a master key based on your login credentials and uses that for the encryption.
If the password is retrievable by the software without any user input, then the password will always be retrievable by a (suitably technically and willfully inclined) passer-by. Open or closed source only affects how much effort is involved.
Absolutely, you can write a program to store passwords securely.
Using AES, you could have your program generate an AES Key, and have that key stored in an operating system protected area. In WinXP, this is the registry, encrypted with DPAPI. Thus the only way to access the key is to have physical access to the machine.
You need to ensure that when you generate your AES key, you do so in a cryptographically secure manner. Just using RAND won't work, nor will generating a random character string.
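On Apple platforms the "cryptographically secure" part looks like the sketch below, shown in Swift for consistency with the rest of this page (on Windows the equivalent would be the OS random provider rather than this API):

    import Foundation
    import Security
    import CryptoKit

    // Fill the key with bytes from the OS CSPRNG instead of rand() or a
    // home-grown random string.
    func makeAESKey() -> Data? {
        var bytes = [UInt8](repeating: 0, count: 32)
        guard SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes) == errSecSuccess else {
            return nil
        }
        return Data(bytes)
    }

    // CryptoKit does the same thing in one line:
    let key = SymmetricKey(size: .bits256)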
Open source has very little to do with security (in my opinion). Given the level of sophistication of reverse-engineering tools, even with a closed-source solution, people determined to snoop at your code could do so.
Your effort is better spent ensuring that you follow best practice guidelines while using the chosen encryption scheme. I would argue that having your code openly looked at by a larger community would actually make your code more secure; vulnerabilities and threats would likely be identified sooner with a larger audience looking through your code.
