How to emulate shm_open on Windows?

My service needs to store a few bits of information (at least 20 bits or so, but I can easily make use of more) such that:
- it persists across service restarts, even if the service crashed or was otherwise terminated abnormally;
- it does not persist across a reboot;
- it can be read and updated with very little overhead.
If I store this information in the registry or in a file, it will not get automatically emptied when the system reboots.
Now, if I were on a modern POSIX system, I would use shm_open, which would create a shared memory segment which persists across process restarts but not system reboots, and I could use shm_unlink to clean it up if the persistent data somehow got corrupted.
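For reference, this is roughly the POSIX pattern I have in mind (a minimal sketch; error handling omitted):

```cpp
// POSIX sketch: shared memory that survives process restarts but not reboots
// (on systems where the shm namespace is tmpfs-backed).
#include <fcntl.h>     // O_RDWR, O_CREAT
#include <sys/mman.h>  // shm_open, shm_unlink, mmap
#include <unistd.h>    // ftruncate, close
#include <cstdint>

int main() {
    int fd = shm_open("/my_service", O_RDWR | O_CREAT, 0600);
    if (fd == -1) return 1;
    ftruncate(fd, sizeof(uint32_t));          // room for ~32 bits of state
    void* p = mmap(nullptr, sizeof(uint32_t),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    uint32_t* state = static_cast<uint32_t*>(p);
    *state += 1;                              // read/update with low overhead
    munmap(p, sizeof(uint32_t));
    close(fd);
    // shm_unlink("/my_service");  // only if the persistent data got corrupted
    return 0;
}
```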
I found MSDN: Creating Named Shared Memory and started reimplementing pieces of it within my service; this basically uses CreateFileMapping(INVALID_HANDLE_VALUE, ..., PAGE_READWRITE, ..., "Global\\my_service") instead of shm_open("/my_service", O_RDWR | O_CREAT, 0600).
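A sketch of what I'm reimplementing (the section name is from above; the size is illustrative, and error handling is omitted):

```cpp
// Win32 sketch: pagefile-backed named shared memory, analogous to shm_open.
#include <windows.h>
#include <cstdint>

int main() {
    // Creates the section if it doesn't exist, opens it if it does
    // (GetLastError() == ERROR_ALREADY_EXISTS in the latter case).
    HANDLE hMap = CreateFileMappingW(
        INVALID_HANDLE_VALUE,   // back by the system paging file
        nullptr,                // default security
        PAGE_READWRITE,
        0, sizeof(uint32_t),    // high/low size: room for ~32 bits
        L"Global\\my_service"); // "Global\\" needs SeCreateGlobalPrivilege
    if (!hMap) return 1;

    auto* state = static_cast<uint32_t*>(
        MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(uint32_t)));
    *state += 1;                // read/update with very little overhead

    UnmapViewOfFile(state);
    CloseHandle(hMap);          // the lifetime question below is exactly about this
    return 0;
}
```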
However, I have a few concerns, especially centered around the lifetime of this pagefile-backed mapping. I haven't found answers to these questions in the MSDN documentation:
- Does the mapping persist across reboots?
- If not, does the mapping disappear when all open handles to it are closed?
- If not, is there a way to remove or clear the mapping? (It doesn't need to work while the mapping is in use.)
If the mapping persists across reboots, disappears when unreferenced, or cannot be cleared manually, this method is useless to me.
Can you verify or find faults in these points, and/or recommend a different approach?
If there were a directory guaranteed to be cleaned out upon reboot, I could save the data in a temporary file there, but that still wouldn't be ideal: under certain system loads we encounter file open/write failures (rare, under 0.01% of the time, but still happening), and this functionality is to be used in the logging path. I would prefer not to introduce any more file operations here.

The shared memory mapping does not persist across reboots, and it disappears when all of its handles are closed. A file-mapping object is a kernel object, and kernel objects are always deleted when the last reference to them goes away, either explicitly via CloseHandle or when the process holding the reference exits.
Try creating a registry key with RegCreateKeyEx and REG_OPTION_VOLATILE - the data will not be preserved when the corresponding hive is unloaded, which happens at system shutdown for HKLM or at user logoff for HKCU.
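A minimal sketch of that approach (the key path and value name here are illustrative, not prescribed):

```cpp
// Sketch: a volatile registry key vanishes when its hive is unloaded
// (system shutdown for HKLM), giving "persists until reboot" semantics.
#include <windows.h>
#include <cstdint>

int main() {
    HKEY key;
    DWORD disposition;  // created vs. reopened, in case you need to know
    LONG rc = RegCreateKeyExW(
        HKEY_LOCAL_MACHINE,
        L"SOFTWARE\\MyService\\VolatileState",  // illustrative path
        0, nullptr,
        REG_OPTION_VOLATILE,  // kept in memory only, never written to the hive file
        KEY_READ | KEY_WRITE, nullptr, &key, &disposition);
    if (rc != ERROR_SUCCESS) return 1;

    uint32_t state = 0;
    DWORD size = sizeof(state);
    RegQueryValueExW(key, L"state", nullptr, nullptr,
                     reinterpret_cast<BYTE*>(&state), &size);  // absent on first run
    ++state;
    RegSetValueExW(key, L"state", 0, REG_DWORD,
                   reinterpret_cast<const BYTE*>(&state), sizeof(state));
    RegCloseKey(key);
    return 0;
}
```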

It sounds like maybe you want serialization instead of shared memory? If that is indeed appropriate for your application, the way you serialize will depend on your language. If you're using C++, check out Boost.Serialization. C# undoubtedly has lots of serialization options (as does Java), if that's what you're using.
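For instance, with C++ a minimal Boost.Serialization round trip might look like this (the State type and file name are invented for illustration); note this writes to an ordinary file, so it would persist across reboots unless the file lives somewhere volatile:

```cpp
// Sketch: round-tripping a small state object through Boost.Serialization.
#include <fstream>
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>

struct State {                       // invented example type
    unsigned counter = 0;
    template <class Archive>
    void serialize(Archive& ar, const unsigned /*version*/) {
        ar & counter;
    }
};

int main() {
    {
        const State s{42};
        std::ofstream ofs("state.txt");
        boost::archive::text_oarchive oa(ofs);
        oa << s;                     // write the state out
    }
    State restored;
    std::ifstream ifs("state.txt");
    boost::archive::text_iarchive ia(ifs);
    ia >> restored;                  // read it back: restored.counter == 42
    return 0;
}
```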

Related

Lotus Notes - CreateMIMEEntity not releasing the control of .NSF file

I am using Interop.Domino to work with an .NSF file. To generate the HTML MIME entity I used nnote, but in some cases it failed to generate it, so in those cases I took the RTF TEXT / PLAIN TEXT as output, using CreateMIMEEntity:
NotesMIMEEntity MIMEBody = NoteDocument.CreateMIMEEntity("Body");
It works, but it keeps hold of the database (.nsf file); the file gets marked as being in use by another process.
Troubleshooting made it clear that the statement above is what holds on to the file.
I have released all the Notes objects associated with it, but the problem remains.
Is there a proper way to use it or release it?
The Notes core DLLs that are underneath the COM classes keep databases open in cache. The only way that I know of to close them is to terminate the process that loaded the DLLs. One option is to design code using the COM API so that it dispatches short-term worker processes to open the database, do the work, and terminate. Yeah, it's ugly and slow, but if you need a long-running service and you're using the COM API instead of the Notes C API, it's the best way.
In any case, the cached open databases should not cause a sharing violation if you are opening the database through the Domino server. If you are using "" instead of the server name when opening the database however, it's going to be a problem -- and you shouldn't even do that in short-running worker processes.
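The original code is .NET/COM, but the dispatch shape of that worker-process design looks roughly like this (a hedged Win32 sketch; worker.exe and its arguments are invented):

```cpp
// Sketch of the short-lived-worker pattern: the long-running service spawns
// a worker that opens the database, does one unit of work, and exits,
// taking the Notes DLLs' cached database handles down with it.
#include <windows.h>

bool RunWorkerOnce() {
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"worker.exe mail\\user.nsf";  // invented worker + args
    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                        0, nullptr, nullptr, &si, &pi))
        return false;
    WaitForSingleObject(pi.hProcess, INFINITE);  // wait for the work unit
    DWORD exitCode = 0;
    GetExitCodeProcess(pi.hProcess, &exitCode);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);   // process gone => DLL caches released
    return exitCode == 0;
}
```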

Ways to find out if the process is created by system (by pid) on macOS?

I'm implementing an API that allows launching other apps (using NSTask) inside a VFS (FUSE on macOS). After the VFS is mounted, a bunch of processes start accessing it, and I'd like to implement some kind of filtering mechanism that detects whether a process accessing the VFS was created by the system (and is potentially safe); if so, it is granted access to the file system where my app runs.
So far I'm able to get basic information about a process from its pid: for example the process path, uid, ppid, the code signature of the process, etc. (using the Security framework, libproc, etc.).
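For example, the path and uid/ppid lookups can be done with libproc (a sketch; error handling omitted):

```cpp
// Sketch: basic process information by pid on macOS via libproc.
#include <libproc.h>       // proc_pidpath, proc_pidinfo
#include <sys/proc_info.h> // proc_bsdinfo, PROC_PIDTBSDINFO
#include <cstdio>

int main() {
    pid_t pid = 1;  // example: launchd
    char path[PROC_PIDPATHINFO_MAXSIZE];
    if (proc_pidpath(pid, path, sizeof(path)) > 0)
        std::printf("path: %s\n", path);

    struct proc_bsdinfo info;
    if (proc_pidinfo(pid, PROC_PIDTBSDINFO, 0,
                     &info, sizeof(info)) == (int)sizeof(info))
        std::printf("uid: %u  ppid: %u\n", info.pbi_uid, info.pbi_ppid);
    return 0;
}
```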
I've done a couple of tests and seen that there are processes with uid != 0 that are still critical for my app to run (if I deny them access, the app started in the VFS crashes), e.g. /usr/libexec/secinitd and /System/Library/CoreServices/Dock.app/Contents/MacOS/Dock, so it looks like filtering processes by pid, uid, or ppid might not work.
So the question is: is it possible to distinguish whether a process accessing my app was created by the system and is potentially safe? I also don't want to overdo it by denying access to the critical system processes that allow the app to successfully start and run in the VFS.
Judging from the comment thread, your threat model is data theft via malware etc.
In this case, you can trust almost nothing, so the best way is probably to maintain an explicit whitelist of processes which are allowed to access your mount point, and block access to everything else by default. Log any processes to which access is denied, and allow the user to reverse that decision and add them to the whitelist. In other words, let the user decide what applications they consider safe.
You said that, according to your inspection, several processes were mandatory for the app to run, so why not use a trial-and-error approach?
Deploy your FUSE drive in a clean environment and record all processes that attempt to access your files; try blocking each process, keep only those whose denial crashes your app, and add them to a whitelist.
Of course, this list is subject to change across macOS versions, but it can give you the general idea.
Alternatively, you can break your app into a couple of parts: for example, put the sensitive logic inside a separate dylib file and restrict access to that file only. Since the dylib is not the main executable of your app, I believe fewer processes need mandatory access to it.

What errors can happen when (Windows) system file cache disk write-back fails? How are they reported?

Apparently the Windows file cache flushes data to disk asynchronously, even when using the synchronous WriteFile() API. Quoting "File Caching" on MSDN:
By default, [...] write operations write file data to the system file cache rather than to the disk, and this type of cache is referred to as a write-back cache.
Assuming that write-through and no-buffering flags are not used, what happens if the actual write to disk fails? Can clients be notified of such failures? What is the expected client error handling model for such failures? "Fire and forget" and "Write and pray" come to mind but maybe there is something else.
Secondary question: are there certain classes of errors that are guaranteed to be detected early? E.g. will WriteFile() always return an error if the disk is full? -- even though the actual write to disk would be deferred?
I would like to know how to write reliable file I/O that responds to these kinds of errors without disabling the Windows file cache.
Bonus points: is this handled differently on other operating systems? Can you recommend a good resource on the topic?
In Windows 7, the user is notified via a pop-up dialog from the notification area.
Normal errors (such as the disk being full, lack of permissions, etc.) are reported back to the application immediately; these do not cause late failures.
Late failures can only happen in a handful of situations, such as a hardware failure or operating system crash. They can also happen when writing to a network share if the connection drops unexpectedly for any reason.
In most cases, it doesn't make sense for an application to worry about this. Data loss is to be expected under these circumstances; let the user deal with it.
If the data you are writing is unusually important, then you may need to worry, in which case you will have to use the write-through and/or no-buffering flags.
There is no third option.
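A sketch of the write-through option, where WriteFile only returns once the data has gone to the device (the file path is invented; FlushFileBuffers is the corresponding tool for an ordinary cached handle):

```cpp
// Sketch: surface write errors eagerly instead of in the write-back path.
#include <windows.h>

int main() {
    HANDLE h = CreateFileW(L"C:\\logs\\important.dat",  // invented path
                           GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                           FILE_FLAG_WRITE_THROUGH,  // bypass write-back caching
                           nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    const char data[] = "record\n";
    DWORD written = 0;
    if (!WriteFile(h, data, sizeof(data) - 1, &written, nullptr)) {
        // With write-through, device-level failures are reported here,
        // not silently during a later lazy flush.
        DWORD err = GetLastError();  // e.g. ERROR_DISK_FULL, ERROR_ACCESS_DENIED
        (void)err;
    }
    // On a normally cached handle you would instead call:
    //   FlushFileBuffers(h);  // forces dirty pages out; fails loudly on error
    CloseHandle(h);
    return 0;
}
```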

How to guarantee file integrity without mandatory file lock on OS X?

AFAIK, OS X is a BSD derivative, which doesn't have actual mandatory file locking. If so, it seems I have no way to prevent write access from other programs, even while I am writing a file.
How can I guarantee file integrity in such an environment? I don't care about integrity after my program has exited; that is the user's responsibility. But I think I need some kind of guarantee at least while my program is running.
How do other programs - especially database programs - guarantee file content integrity without mandatory locking? If there's a common technique or recommended practice, please let me know.
Update
I am looking at this for the data layer of a GUI application for non-engineer users. Currently, my program has these constraints:
The data is too big to fit in RAM, and even hard to copy temporarily, so it cannot be read/written atomically and must be used directly from disk while the program is running.
It is a long-running professional GUI content editor used by humans who are non-engineers. Though the users are not engineers, they can still access the file simultaneously with Finder or other programs, so they can accidentally delete or overwrite a file that is currently in use. The problem is that users don't understand what is actually happening, and they expect the program to handle file integrity, at least while it is running.
I think the only way to guarantee the file's integrity in this situation is:
1. Open the file with a system-wide exclusive mandatory lock. Now the file is the program's responsibility.
2. Check its integrity.
3. Use the file like external memory while the program is running.
4. Write all the modifications.
5. Unlock. Now the file is the user's responsibility.
Because OS X lacks a system-wide mandatory lock, I don't know how to do this. Still, I believe there is a way to achieve this kind of file integrity that I just don't know about, and I want to know how everybody else handles it.
This question is not about my own programming errors; that's a separate problem. The current problem is protecting data from other programs that don't respect advisory file locks. Also, users are usually root and the program runs as the same user, so ordinary Unix file permissions are not useful.
You have to look at the problem that you are trying to actually solve with mandatory locking.
File content integrity is not guaranteed by mandatory locking: unless you keep your file locked 24/7, integrity will still depend on all processes observing file format/access conventions (and it can still fail due to hard drive errors etc.).
What mandatory locking protects you against is programming errors that (by accident, not out of malice) fail to respect the proper locking protocols. At the same time, that protection is only partial, since failure to acquire a lock (mandatory or not) can still lead to file corruption. Mandatory locking can also reduce possible concurrency more than needed. In short, mandatory locking provides more protection than advisory locking against software defects, but the protection is not complete.
One solution to the problem of accidental corruption is to use a library that is aggressively tested for preserving data integrity. One such library (there are others) is SQLite. On OS X, Core Data provides an abstraction layer over SQLite as a data store. Obviously, such an approach should be complemented by replication/backup so that you have protection against causes of data corruption where the storage layer cannot help you (media failure, accidental deletion).
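A minimal sketch of the SQLite route (the database file and table are invented for illustration); transactions give you the atomicity that raw file writes lack:

```cpp
// Sketch: SQLite keeps the file consistent even if the process dies
// mid-write, via its journal/WAL machinery.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("content.db", &db) != SQLITE_OK) return 1;  // invented file

    char* err = nullptr;
    // Each transaction either fully applies or fully rolls back.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS doc(id INTEGER PRIMARY KEY, body TEXT);"
        "BEGIN;"
        "INSERT INTO doc(body) VALUES('chapter 1');"
        "COMMIT;",
        nullptr, nullptr, &err);
    if (err) { std::fprintf(stderr, "%s\n", err); sqlite3_free(err); }

    sqlite3_close(db);
    return 0;
}
```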
Additional protection can be gained by restricting file access to a database and allowing access only through a gateway (such as a socket or messaging library). Then you will just have a single process running that merely acquires a lock (and never releases it). This setup is fairly easy to test; the lock is merely to prevent having more than one instance of the gateway process running.
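The single-instance aspect of such a gateway needs only an advisory lock, since only your own gateway has to honor it; a sketch with flock (the lock-file path is invented):

```cpp
// Sketch: ensure only one gateway process runs; the (advisory) lock is held
// for the life of the process and released automatically if it dies.
#include <fcntl.h>
#include <sys/file.h>  // flock
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/tmp/myapp.gateway.lock", O_CREAT | O_RDWR, 0600);  // invented path
    if (fd == -1) return 1;
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        std::fprintf(stderr, "another gateway instance is already running\n");
        return 1;
    }
    // ... serve requests over a socket; never release the lock ...
    return 0;
}
```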
One simple solution would be to hide the file from the user until your program is done using it.
There are various ways to hide files. It depends on whether you're modifying an existing file that was previously visible to the user or creating a new file. Even if modifying an existing file, it might be best to create a hidden working copy and then atomically exchange its contents with the file that's visible to the user.
One approach to hiding a file is to create it in a location which is not normally visible to users. (That is, it's not necessary that the file be totally impossible for the user to reach, just out of the way so that they won't stumble on it.) You can obtain such a location using -[NSFileManager URLForDirectory:inDomain:appropriateForURL:create:error:], passing NSItemReplacementDirectory and NSUserDomainMask for the first two parameters. See the -replaceItemAtURL:withItemAtURL:backupItemName:options:resultingItemURL:error: method for how to atomically move the file into its final place.
You can set a file to be hidden using various APIs. You can use -[NSURL setResourceValue:forKey:error:] with the key NSURLIsHiddenKey. You can use the chflags() system call to set UF_HIDDEN. The old Unix standby is to use a filename starting with a period ('.').
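A sketch of the chflags() route (the path is invented; the same flag can be set from Objective-C via NSURLIsHiddenKey):

```cpp
// Sketch: toggle the Finder-hidden flag on a working file.
#include <sys/stat.h>  // UF_HIDDEN, stat
#include <unistd.h>    // chflags
#include <cstdio>

int main() {
    const char* path = "working-copy.dat";  // invented path
    struct stat st;
    if (stat(path, &st) != 0) return 1;

    // Preserve existing flags and add UF_HIDDEN while we own the file.
    if (chflags(path, st.st_flags | UF_HIDDEN) != 0) {
        std::perror("chflags");
        return 1;
    }
    // ... work on the file ...
    chflags(path, st.st_flags & ~UF_HIDDEN);  // make it visible again
    return 0;
}
```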
Here are some details about this topic:
https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileCoordinators/FileCoordinators.html
Now I think the basic policy on OS X is something like this:
- Always allow access by any process.
- Always be prepared for shared data-file mutation.
- Get notified when another process mutates the file content, and respond appropriately. For example, you can display an error to end users if another process is trying to access the file; users will then learn that's bad, and won't do it again.

Storing a value in Memory Independent of Process

I need a way to store a value somewhere temporarily, by, say, Process A. Process A can exit after storing the value in memory. Some time later, Process B comes along, accesses the same memory location, and reads the value. I need to store it in memory because I don't want the data to persist across reboots, but as long as the system is up, the data must be accessible independent of any process. I tried mailslots and temporary files on Windows; both seem to have the problem that when the process reference count drops to zero, the entities don't persist in memory. What is a suitable mechanism for this on Windows, preferably using the Win32 API?
Write a service that is started at boot time, and let it create some shared memory.
This shared memory can then be filled by process A, and process B can read it afterwards.
If your system is rebooted, the shared memory is gone and you have a fresh, new piece of shared memory.
Make sure that your service correctly 'initializes' the shared memory.
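A sketch of what such a service might do at startup (the section name and size are invented); the key point is that the service keeps the handle open forever:

```cpp
// Sketch: a boot-started service pins a named shared-memory section so its
// contents outlive processes A and B but not a reboot.
#include <windows.h>
#include <cstring>

int main() {
    HANDLE hMap = CreateFileMappingW(
        INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
        0, 4096,                      // invented size
        L"Global\\MyAppSharedState"); // invented name
    if (!hMap) return 1;
    bool existed = (GetLastError() == ERROR_ALREADY_EXISTS);

    void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (!existed)
        std::memset(view, 0, 4096);  // initialize once so readers never see garbage
    UnmapViewOfFile(view);

    // Keep hMap open for the lifetime of the service; processes A and B
    // attach with OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE, name).
    Sleep(INFINITE);
    return 0;
}
```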
Is there a reason why the data must be resident in memory when Process A quits, as opposed to being stored somewhere on disk? I ask because you mention temporary files, which should work unless Process A fails in an unexpected way.
Depending on your needs, a nice way to provide shared/fast/atomic data is via the ESENT API.
Try the following. I can't say I know this works, but it seems reasonable.
Create a shared memory section in the global namespace using CreateFileMapping. Then call DuplicateHandle, and for the target process use some process that will live longer than process A; you may be able to add the handle to winlogon.exe. This should stop the shared memory from being destroyed when process A terminates. Then process B can look up the shared memory section by name.
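A hedged sketch of that handle-duplication idea (as said above, it's untested; the pid of a suitable long-lived process is left as an input):

```cpp
// Sketch: keep a section alive past process A's exit by duplicating its
// handle into a longer-lived process, which then owns an extra reference.
#include <windows.h>

HANDLE ParkHandleIn(DWORD longLivedPid, HANDLE section) {
    HANDLE target = OpenProcess(PROCESS_DUP_HANDLE, FALSE, longLivedPid);
    if (!target) return nullptr;

    HANDLE parked = nullptr;  // valid only in the target process
    BOOL ok = DuplicateHandle(GetCurrentProcess(), section,
                              target, &parked,
                              0, FALSE, DUPLICATE_SAME_ACCESS);
    CloseHandle(target);
    return ok ? parked : nullptr;  // extra reference now held by the target
}
```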
Well, I managed to create a mailslot on a process which doesn't exit; the other two processes can read and write to the mailslot server as clients. Even if the clients exit, the mailslot still has the data. The mailslot server lets me store data in volatile memory for as long as the server process (or the OS) is up, and it vanishes on OS reboot. Thanks for all the ideas and help! :)
