How should a "secret key" be retained in memory? - windows

We have a server that securely sends a key to the client via a custom login program. The key is subsequently used for encrypting further client requests. That key is kept on the client's disk, like a cookie, and is used by a program that might be started and stopped multiple times before the client decides to log out, which makes the key obsolete (hence the key is saved on disk, because there may be long periods between login and logout when no program is running).
It would seem to be a bit more secure to keep the key only in memory instead of on disk (it's OK if a crash or restart loses the key and subsequently forces a new login).
On Windows, what's the best way to retain the key only in memory (ignoring that the memory might be virtual and paged to disk) between separate executions of a program?
One possible solution is to leave a trivial Windows service running on the client that accepts the key, retains it in the service's memory, and returns it upon request (or use an equivalent trivial DDE server that does the same thing). A non-.NET solution is preferred.
Is there a standard Windows service usually running that already provides this ability?
Is there a better approach?

There are a couple of solutions you can try that do not involve a running process:
Store it in a volatile registry key (REG_OPTION_VOLATILE); a sketch follows this list.
Store it in the global atom table. The key has to be stored as a string. You would probably need two atoms: one that stores the key and one used to locate the first atom so you can call GlobalGetAtomName. The second atom should have a known name like "YourAppName:S-UsersSidGoesHere" so you can call GlobalFindAtom.
If you decide to store it in a file in %temp%, you could use TOKEN_STATISTICS.AuthenticationId as part of a key used to encrypt the real key. You could also encrypt the file itself with EFS (FILE_ATTRIBUTE_ENCRYPTED)...
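For the first option, here is a minimal sketch, assuming a hypothetical key path (Software\MyApp\Session) and value name; link against Advapi32 and add real error handling:

    // Sketch: keep a small secret in a volatile registry key. The key lives only
    // until the HKCU hive is unloaded (user logoff / reboot) and is never written
    // to the on-disk hive. Path and value name below are hypothetical.
    #include <windows.h>
    #include <vector>

    bool SaveSessionKey(const BYTE* key, DWORD keyLen)
    {
        HKEY hKey = nullptr;
        LONG rc = RegCreateKeyExW(HKEY_CURRENT_USER, L"Software\\MyApp\\Session",
                                  0, nullptr,
                                  REG_OPTION_VOLATILE,   // volatile: not persisted to disk
                                  KEY_SET_VALUE, nullptr, &hKey, nullptr);
        if (rc != ERROR_SUCCESS) return false;
        rc = RegSetValueExW(hKey, L"SessionKey", 0, REG_BINARY, key, keyLen);
        RegCloseKey(hKey);
        return rc == ERROR_SUCCESS;
    }

    bool LoadSessionKey(std::vector<BYTE>& keyOut)
    {
        DWORD size = 0;
        // First call gets the size, second call gets the data.
        if (RegGetValueW(HKEY_CURRENT_USER, L"Software\\MyApp\\Session", L"SessionKey",
                         RRF_RT_REG_BINARY, nullptr, nullptr, &size) != ERROR_SUCCESS)
            return false;
        keyOut.resize(size);
        return RegGetValueW(HKEY_CURRENT_USER, L"Software\\MyApp\\Session", L"SessionKey",
                            RRF_RT_REG_BINARY, nullptr, keyOut.data(), &size) == ERROR_SUCCESS;
    }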

Related

Ways to find out if the process is created by system (by pid) on macOS?

I'm implementing an API that allows launching other apps (using NSTask) inside a VFS (FUSE on macOS). After the VFS is mounted, a bunch of processes start accessing the VFS in which my app works, and I'd like to implement some kind of filtering mechanism that detects whether a process accessing the VFS was created by the system (and is potentially safe); if so, it will be granted access to the file system where my app runs.
So far I'm able to get basic information about a process from its pid, for example: process path, uid, ppid, code signature of the process, etc. (using the Security framework, libproc, etc.)
I've done a couple of tests and see that there are processes with uid != 0 that are still critical for my app to run (if I deny access to them, the app started in the VFS crashes), e.g. /usr/libexec/secinitd and /System/Library/CoreServices/Dock.app/Contents/MacOS/Dock. So it looks like filtering processes by pid, uid, or ppid might not work.
So the question is: is it possible to distinguish whether a process accessing my app was created by the system and is potentially safe? I also don't want to go too far and deny access to critical system processes that the app needs in order to start and run successfully in the VFS.
Judging from the comment thread, your threat model is data theft via malware etc.
In this case, you can trust almost nothing, so the best way is probably to maintain an explicit whitelist of processes which are allowed to access your mount point, and block access to everything else by default. Log any processes to which access is denied, and allow the user to reverse that decision and add them to the whitelist. In other words, let the user decide what applications they consider safe.
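If you want a signal to help seed that whitelist, one option is to ask the Security framework whether the process behind a given pid satisfies the "anchor apple" code-signing requirement, i.e. whether it is signed by Apple as part of the OS. This is only a sketch of one heuristic, not a complete access policy:

    // Sketch: check whether the process with the given pid is Apple-signed
    // ("anchor apple" matches code shipped by Apple). Combine this with a
    // user-managed whitelist; do not rely on it alone.
    // Build with: clang++ ... -framework Security -framework CoreFoundation
    #include <Security/Security.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <sys/types.h>

    bool IsAppleSignedProcess(pid_t pid)
    {
        CFNumberRef pidNum = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &pid);
        const void* keys[]   = { kSecGuestAttributePid };
        const void* values[] = { pidNum };
        CFDictionaryRef attrs = CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                                                   &kCFTypeDictionaryKeyCallBacks,
                                                   &kCFTypeDictionaryValueCallBacks);
        SecCodeRef code = nullptr;
        bool isApple = false;
        if (SecCodeCopyGuestWithAttributes(nullptr, attrs, kSecCSDefaultFlags, &code) == errSecSuccess) {
            SecRequirementRef req = nullptr;
            if (SecRequirementCreateWithString(CFSTR("anchor apple"), kSecCSDefaultFlags, &req) == errSecSuccess) {
                isApple = (SecCodeCheckValidity(code, kSecCSDefaultFlags, req) == errSecSuccess);
                CFRelease(req);
            }
            CFRelease(code);
        }
        CFRelease(attrs);
        CFRelease(pidNum);
        return isApple;
    }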
You said that, according to your inspection, several processes were mandatory for your app to run, so why not use a trial-and-error approach?
Deploy your FUSE drive in a clean environment and record all processes that attempt to access your files. Try blocking each process, keep only the ones whose denial crashes your app, and add those to a whitelist.
Of course, this list is subject to change across macOS versions, but it should give you the general idea.
Alternatively, you can break your app into a couple of parts: for example, put the sensitive logic inside a separate dylib and restrict access to that file only. Since the dylib is not your app's main executable, fewer processes should require mandatory access to it.

What would happen to Flask sessions if the secret key changed? [duplicate]

When using sessions, Flask requires a secret key. In every example I've seen, the secret key is somehow generated and then stored either in source code or in a configuration file.
What is the reason to store it permanently? Why not simply generate it when the application starts?
app.secret_key = os.urandom(50)
The secret key is used to sign the session cookie. If you had to restart your application, and regenerated the key, all the existing sessions would be invalidated. That's probably not what you want (or at least, not the right way to go about invalidating sessions). A similar case could be made for anything else that relies on the secret key, such as tokens generated by itsdangerous to provide reset password urls (for example).
The application might need to be restarted because of a crash, or because the server rebooted, or because you are pushing a bug fix or new feature, or because the server you're using spawns new processes, etc. So you can't rely on the server being up forever.
The standard practice is to have some throwaway key committed to the repo (so that there's something there for dev machines) and then to set the real key in the local config when deploying. This way, the key isn't leaked and doesn't need to be regenerated.
There's also the case of running secondary systems that depend on the app context, such as Celery for running background tasks, or multiple load balanced instances of the application. If each running instance of the application has different settings, they may not work together correctly in some cases.

How to guarantee file integrity without mandatory file lock on OS X?

AFAIK, OS X is a BSD derivative, which doesn't have actual mandatory file locking. If so, it seems I have no way to prevent write access from other programs, even while I am writing a file.
How can I guarantee file integrity in such an environment? I don't care about integrity after my program exits, because that's the user's responsibility. But at least, I think I need some kind of guarantee while my program is running.
How do other programs, especially database programs, guarantee file content integrity without mandatory locking? If there's a common technique or recommended practice, please let me know.
Update
I am looking at this for the data layer of a GUI application for non-engineer users, and currently my program has these constraints.
The data is too big to fit in RAM, and even too big to copy temporarily, so it cannot be read or written atomically and must be used directly from disk while the program is running.
It's a long-running professional GUI content editor used by humans who are not engineers. Even though the users are not engineers, they can still access the file simultaneously with Finder or other programs, so they can accidentally delete or overwrite the file currently in use. The problem is that users don't understand what is actually happening, and they expect the program to handle file integrity at least while it is running.
I think the only way to guarantee the file's integrity in this situation is:
Open the file with a system-wide exclusive mandatory lock. Now the file is the program's responsibility.
Check its integrity.
Use the file like external memory while the program is running.
Write all the modifications.
Unlock. Now the file is the user's responsibility.
Because OS X lacks a system-wide mandatory lock, I don't know how to do this. But I still believe there's a way to achieve this kind of file integrity that I just don't know about, and I want to know how everybody else handles it.
This question is not about my own programming errors; that's a separate problem. The current problem is protecting the data from other programs that don't respect advisory file locks. Also, users are usually root and the program runs as the same user, so plain Unix file permissions don't help.
You have to look at the problem you are actually trying to solve with mandatory locking.
File content integrity is not guaranteed by mandatory locking; unless you keep your file locked 24/7, it will still depend on all processes observing file format and access conventions (and can still fail due to hard drive errors etc.).
What mandatory locking protects you against is programming errors that (by accident, not out of malice) fail to respect the proper locking protocols. At the same time, that protection is only partial, since failure to acquire a lock (mandatory or not) can still lead to file corruption. Mandatory locking can also reduce possible concurrency more than needed. In short, mandatory locking provides more protection than advisory locking against software defects, but the protection is not complete.
One solution to the problem of accidental corruption is to use a library that is aggressively tested for preserving data integrity. One such library (there are others) is SQLite. On OS X, Core Data provides an abstraction layer over SQLite as a data store. Obviously, such an approach should be complemented by replication/backup so that you have protection against other causes of data corruption where the storage layer cannot help you (media failure, accidental deletion).
Additional protection can be gained by restricting file access to a database and allowing access only through a gateway (such as a socket or messaging library). Then you will just have a single process running that merely acquires a lock (and never releases it). This setup is fairly easy to test; the lock is merely to prevent having more than one instance of the gateway process running.
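A minimal sketch of that single-instance guard, with a hypothetical lock-file path; the advisory flock() is enough here because only your own gateway processes take part in the protocol:

    // Sketch: make sure only one gateway process ever has the data file open.
    // Lock-file path is hypothetical; flock() is advisory, which is fine because
    // only our own gateway binary participates in this locking scheme.
    #include <sys/file.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        int lockFd = open("/usr/local/var/myapp/gateway.lock",  // hypothetical path
                          O_CREAT | O_RDWR, 0644);
        if (lockFd < 0) { perror("open"); return 1; }
        // Non-blocking exclusive lock: fails immediately if another gateway runs.
        if (flock(lockFd, LOCK_EX | LOCK_NB) != 0) {
            fprintf(stderr, "another gateway instance is already running\n");
            return 1;
        }
        // ... open the database and serve requests over a local socket ...
        // The lock is held, never released, until this process exits.
        pause();  // placeholder for the real event loop
        return 0;
    }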
One simple solution would be to hide the file from the user until your program is done with it.
There are various ways to hide files. It depends on whether you're modifying an existing file that was previously visible to the user or creating a new file. Even if you're modifying an existing file, it might be best to create a hidden working copy and then atomically exchange its contents with the file that's visible to the user.
One approach to hiding a file is to create it in a location which is not normally visible to users. (That is, it's not necessary that the file be totally impossible for the user to reach, just out of the way enough that they won't stumble on it.) You can obtain such a location using -[NSFileManager URLForDirectory:inDomain:appropriateForURL:create:error:], passing NSItemReplacementDirectory and NSUserDomainMask for the first two parameters. See the -replaceItemAtURL:withItemAtURL:backupItemName:options:resultingItemURL:error: method for how to atomically move the file into its final place.
You can set a file to be hidden using various APIs. You can use -[NSURL setResourceValue:forKey:error:] with the key NSURLIsHiddenKey. You can use the chflags() system call to set UF_HIDDEN. The old Unix standby is to use a filename starting with a period ('.').
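For the chflags() route, a minimal sketch (hiding and later revealing a hypothetical working copy):

    // Sketch: toggle the Finder-hidden flag (UF_HIDDEN) on a file, e.g. a hidden
    // working copy that is swapped into place when the program finishes.
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    bool SetFinderHidden(const char* path, bool hidden)
    {
        struct stat st {};
        if (stat(path, &st) != 0) { perror("stat"); return false; }
        unsigned int flags = hidden ? (st.st_flags | UF_HIDDEN)
                                    : (st.st_flags & ~UF_HIDDEN);
        if (chflags(path, flags) != 0) { perror("chflags"); return false; }
        return true;
    }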
Here are some details about this topic:
https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileCoordinators/FileCoordinators.html
I think the basic policy on OS X is something like this:
Always allow access by any process.
Always be prepared for the shared data file to be mutated.
Be notified when other processes mutate the file content, and respond appropriately. For example, you can display an error to end users if another process is trying to access the file; users will then learn that's a bad idea and won't do it again. One way to get such a notification is sketched below.
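If you want that notification at the BSD layer rather than through the file-coordination API above, one option is a kqueue vnode watch; a minimal sketch, with a hypothetical path, that reports writes, renames and deletions done by any process:

    // Sketch: watch one file with kqueue and report when it is written to,
    // renamed or deleted (by any process). A real app would run this on its own
    // thread or hook it into the run loop; the path is hypothetical.
    #include <sys/event.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        int fd = open("/Users/me/Documents/project.data", O_EVTONLY);  // hypothetical path
        if (fd < 0) { perror("open"); return 1; }
        int kq = kqueue();
        struct kevent change;
        EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
               NOTE_WRITE | NOTE_DELETE | NOTE_RENAME, 0, nullptr);
        for (;;) {
            struct kevent event;
            if (kevent(kq, &change, 1, &event, 1, nullptr) <= 0) break;  // blocks until a change
            if (event.fflags & NOTE_DELETE) { fprintf(stderr, "file was deleted\n"); break; }
            if (event.fflags & NOTE_RENAME) fprintf(stderr, "file was renamed\n");
            if (event.fflags & NOTE_WRITE)  fprintf(stderr, "file content was modified\n");
            // ... warn the user, reload, or refuse to continue ...
        }
        close(kq);
        close(fd);
        return 0;
    }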

How to emulate shm_open on Windows?

My service needs to store a few bits of information (at least 20 bits or so, but I can easily make use of more) such that
it persists across service restarts, even if the service crashed or was otherwise terminated abnormally
it does not persist across a reboot
can be read and updated with very little overhead
If I store this information in the registry or in a file, it will not get automatically emptied when the system reboots.
Now, if I were on a modern POSIX system, I would use shm_open, which creates a shared memory segment that persists across process restarts but not system reboots, and I could use shm_unlink to clean it up if the persistent data somehow got corrupted.
I found MSDN: Creating Named Shared Memory and started reimplementing pieces of it within my service; this basically uses CreateFileMapping(INVALID_HANDLE_VALUE, ..., PAGE_READWRITE, ..., "Global\\my_service") instead of shm_open("/my_service", O_RDWR | O_CREAT, ...).
However, I have a few concerns, especially centered around the lifetime of this pagefile-backed mapping. I haven't found answers to these questions in the MSDN documentation:
Does the mapping persist across reboots?
If not, does the mapping disappear when all open handles to it are closed?
If not, is there a way to remove or clear the mapping? Doesn't need to be while it's in use.
If it does persist across reboots, or does disappear when unreferenced, or is not able to be reset manually, this method is useless to me.
Can you verify or find faults in these points, and/or recommend a different approach?
If there were a directory that were guaranteed to be cleaned out upon reboot, I could save data in a temporary file there, but it still wouldn't be ideal: under certain system loads, we are encountering file open/write failures (rare, under 0.01% of the time, but still happening), and this functionality is to be used in the logging path. I would like not to introduce any more file operations here.
The shared memory mapping will not persist across reboots, and it will disappear when all of its handles are closed. A memory mapping object is a kernel object, and kernel objects are always deleted when the last reference to them goes away, either explicitly via CloseHandle or when the process holding the reference exits.
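For reference, a minimal sketch of the CreateFileMapping approach from the question, with a hypothetical object name; as described above, the mapping only lives while at least one handle to it stays open, so it does not survive a full service restart on its own:

    // Sketch: pagefile-backed named shared memory, the closest Win32 analogue to
    // shm_open. Object name is hypothetical; creating in the "Global\" namespace
    // needs SeCreateGlobalPrivilege, which services normally have. The object is
    // destroyed as soon as its last handle is closed.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const DWORD kSize = 4096;
        HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE,  // backed by the pagefile
                                         nullptr, PAGE_READWRITE, 0, kSize,
                                         L"Global\\my_service");
        if (!hMap) { fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError()); return 1; }
        // ERROR_ALREADY_EXISTS means some other process created (and still holds) it.
        bool existed = (GetLastError() == ERROR_ALREADY_EXISTS);
        void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, kSize);
        if (!view) { fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError()); CloseHandle(hMap); return 1; }
        printf("mapping %s\n", existed ? "already existed" : "was newly created");
        // ... read/write the shared bits through 'view' ...
        UnmapViewOfFile(view);
        CloseHandle(hMap);  // if this was the last handle, the mapping is gone
        return 0;
    }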
Try creating a registry key with RegCreateKeyEx with REG_OPTION_VOLATILE - the data will not be preserved when the corresponding hive is unloaded. This will be at system shutdown for HKLM or user logoff for HKCU.
Sounds like maybe you want serialization instead of shared memory? If that is indeed appropriate for your application, the way you serialize will depend on your language. If you're using C++, check out Boost.Serialization. C# undoubtedly has lots of serialization options (like Java), if that's what you're using.

What are the performance implications of accessing a windows registry value?

If I have multiple processes accessing a registry value thousands of times per second, will there be any significant performance implications of reading this registry value?
The registry value will never change; it is read-only. I guess another question is: is reading a registry value a blocking operation?
The registry value is for storing database connection details, accessed by an ASP.NET applications, Win Forms applications, and WCF services.
Thanks,
Stuart
The registry is fast, really fast. But thousands of times per second? At the very least, cache the value in each application so you only have to read it once on app startup.
The Windows registry is ultimately backed by files (hives) on disk; it just has more protection and caching around it than ordinary files.
Like any file-backed store, there will be some performance cost when it is accessed.
I would suggest that you read your values once, on application startup, store them in memory, and pass them to your objects as required.
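A minimal sketch of that read-once-and-cache pattern (shown as a native Win32 sketch; the same idea applies in .NET), with a hypothetical key path and value name:

    // Sketch: read the connection string from the registry once, on first use,
    // and serve it from memory afterwards. Key path and value name are hypothetical.
    #include <windows.h>
    #include <string>

    const std::wstring& GetConnectionString()
    {
        // The lambda runs exactly once (thread-safe since C++11); later calls
        // return the cached copy without touching the registry.
        static const std::wstring cached = [] {
            wchar_t buffer[1024] = L"";
            DWORD size = sizeof(buffer);
            RegGetValueW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\MyCompany\\MyApp",
                         L"ConnectionString", RRF_RT_REG_SZ,
                         nullptr, buffer, &size);
            return std::wstring(buffer);
        }();
        return cached;
    }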
