Are Interlocked* functions useful on shared memory? - winapi

Two Windows processes have memory-mapped the same shared file. If the file consists of counters, is it appropriate to use the Interlocked* functions (like InterlockedIncrement) to update those counters? Will those synchronize access across processes? Or do I need to use something heavier, like a mutex? Or does the shared-memory mechanism itself ensure consistent views?

The interlocked functions are intended for exactly that type of use.
From http://msdn.microsoft.com/en-us/library/ms684122.aspx:
The threads of different processes can use these functions if the variable is in shared memory.
Of course, if you need to have more than one item updated atomically, you'll need to go with a mutex or some other synchronization object that works across processes. Nothing built into the shared-memory mechanism itself synchronizes access; you'll need to use the interlocked functions or a synchronization object.

From MSDN, The Interlocked API:
The interlocked functions provide a simple mechanism for synchronizing access to a variable that is shared by multiple threads. They also perform operations on variables in an atomic manner. The threads of different processes can use these functions if the variable is in shared memory.
So, yes, it is safe with your shared memory approach.
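To make that concrete, here's a minimal sketch. The section name "Local\\CounterShm" is made up for illustration, error handling is trimmed, and a pagefile-backed section stands in for the real mapped file from the question; the call pattern is identical with an actual file handle. Any process that opens the same named section and calls InterlockedIncrement on the counter participates in the same atomic update.

```cpp
// Minimal sketch (assumed name "Local\\CounterShm"; error handling trimmed).
#include <windows.h>
#include <stdio.h>

int main() {
    // Create (or open, if another process created it first) a named
    // shared-memory section big enough for one LONG.
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, (DWORD)sizeof(LONG),
                                     L"Local\\CounterShm");
    if (!hMap) return 1;

    // Map it into this process. Views are page-aligned, so the LONG is
    // suitably aligned for the interlocked functions (a hard requirement).
    volatile LONG* counter =
        (volatile LONG*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (!counter) return 1;

    // Atomic with respect to every process that has the section mapped.
    LONG value = InterlockedIncrement(counter);
    printf("counter is now %ld\n", value);

    UnmapViewOfFile((LPCVOID)counter);
    CloseHandle(hMap);
    return 0;
}
```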

Related

Is there a robust implementation of condition_variable and mutex that can be stored in shared memory on Windows?

As described in this question, use of Boost's interprocess_mutex and interprocess condition_variable may result in a deadlock if the process holding the mutex crashes.
This is because Boost's mutex is not a kernel object and is therefore not automatically released when the process holding it exits.
Is there a way in boost to use interprocess conditional variables with the mutex returned by a call to CreateMutex?
Just use CreateSemaphore() directly to implement condvars across multiple processes. You don't need to use Boost condvars. Windows provides a very rich set of well-defined, fairly correct, named machine-wide synchronisation objects. Use those instead.
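For illustration, here is a minimal sketch of the named-semaphore building block. The name "Local\\CondSem" is made up, error handling is trimmed, and this is only the cross-process wait/signal primitive, not a complete condition variable.

```cpp
// Hedged sketch: cross-process wait/signal via a named kernel semaphore.
#include <windows.h>
#include <limits.h>

int main() {
    // Every process that uses the same name gets the same kernel object.
    HANDLE hSem = CreateSemaphoreW(NULL, 0, LONG_MAX, L"Local\\CondSem");
    if (!hSem) return 1;

    // Waiter side: block until some other process signals.
    WaitForSingleObject(hSem, INFINITE);

    // Signaler side, run in the other process instead of the wait:
    //   ReleaseSemaphore(hSem, 1, NULL);

    CloseHandle(hSem);
    return 0;
}
```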

Implementation of MTA COM server

I can't find any source code showing the prerequisites of an MTA-compliant COM object. I tried changing the ThreadingModel registry key of my object from Apartment to Both, and it results in a crash when a secondary thread calls the method before any data is accessed.
If STA COMs require a message pump, what kind of plumbing code do MTA COM objects require?
I do not think there is anything special about the MTA, except that you need to use synchronization primitives like mutexes to guard access to your internal structures. Doesn't the "Multithreaded Apartments" documentation give you all that you need?
Quoting from the documentation, emphasis is mine:
Because calls to objects are not serialized in any way, multithreaded object concurrency offers the highest performance and takes the best advantage of multiprocessor hardware for cross-thread, cross-process, and cross-machine calling. This means, however, that the code for objects must provide synchronization in their interface implementations, typically through the use of synchronization primitives such as event objects, critical sections, mutexes, or semaphores, which are described later in this section. In addition, because the object doesn't control the lifetime of the threads that are accessing it, no thread-specific state may be stored in the object (in thread local storage).
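As a hedged sketch of what that looks like in practice: a hypothetical object whose method locks a critical section before touching shared members. All the IUnknown and registration plumbing is omitted; the point is only that in the MTA, every interface method must serialize access itself.

```cpp
// Hedged sketch: a hypothetical MTA object guarding its internal state.
#include <windows.h>

class CCounter /* : public ISomeInterface */ {
    CRITICAL_SECTION m_lock;   // protects m_count
    long m_count = 0;
public:
    CCounter()  { InitializeCriticalSection(&m_lock); }
    ~CCounter() { DeleteCriticalSection(&m_lock); }

    // In the MTA this can be entered by several RPC worker threads at
    // once, so the body must lock around any shared state.
    HRESULT Increment(long* out) {
        EnterCriticalSection(&m_lock);
        *out = ++m_count;
        LeaveCriticalSection(&m_lock);
        return S_OK;
    }
};
```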

Does Go support volatile / non-volatile variables?

I'm new to the language so bear with me.
I am curious how Go handles data storage available to threads, in the sense that non-local variables can also be non-volatile, as in Java for instance.
Go has the concept of a channel which, by its nature as an inter-thread communication mechanism, means it bypasses the processor cache and reads/writes to the heap directly.
Also, I have not found any reference to volatile in the Go language documentation.
TL;DR: Go does not have a keyword to make a variable safe for multiple goroutines to write/read it. Use the sync/atomic package for that. Or, better yet: Do not communicate by sharing memory; instead, share memory by communicating.
Two answers for the two meanings of volatile
.NET/Java concurrency
Some excerpts from the Go Memory Model.
If the effects of a goroutine must be observed by another goroutine, use a synchronization mechanism such as a lock or channel communication to establish a relative ordering.
One of the examples from the Incorrect Synchronization section shows a busy wait on a value.
Worse, there is no guarantee that the write to done will ever be observed by main, since there are no synchronization events between the two threads. The loop in main is not guaranteed to finish.
Indeed, this code (play.golang.org/p/K8ndH7DUzq) never exits.
C/C++ non-standard memory
Go's memory model does not provide a way to address non-standard memory. If you have raw access to a device's I/O bus you'll need to use assembly or C to safely write values to the memory locations. I have only ever needed to do this in a device driver which generally precludes use of Go.
The simple answer is that volatile is not supported by the current Go specification, period.
If you do have one of the use cases where volatile is necessary, such as low-level atomic memory access that is unsupported by existing packages in the standard library, or unbuffered access to hardware mapped memory, you'll need to link in a C or assembly file.
Note that if you do use C or assembly as understood by the GC compiler suite, you don't even need cgo for that, since the [568]c C/asm compilers are also able to handle it.
You can find examples of that in Go's source code. For example:
http://golang.org/src/pkg/runtime/sema.goc
http://golang.org/src/pkg/runtime/atomic_arm.c
Grep for many other instances.
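For illustration, here is a hedged sketch of what such a linked-in C file might contain: a volatile-qualified pointer to a hypothetical memory-mapped device register (the address 0xFFFF0040 is made up), so that every access actually reaches the hardware rather than a value cached in a register.

```cpp
/* Hedged sketch; DEVICE_STATUS and its address are illustrative only. */
#include <stdint.h>

#define DEVICE_STATUS ((volatile uint32_t*)0xFFFF0040u)

uint32_t read_status(void) {
    /* volatile forces a real load on every call; the compiler must not
       cache the previously read value. */
    return *DEVICE_STATUS;
}
```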
For how memory access in Go does work, check out The Go Memory Model.
No, Go does not support the volatile or register keyword.
See this post for more information.
This is also noted in the Go for C++ Programmers guide.
The Go Memory Model documentation explains why the concept of 'volatile' has no application in Go.
Loosely: among other things, goroutines are free to keep goroutine-local changes cached in registers, so those changes are not observable by other goroutines. To "flush" those changes to memory, synchronization must be performed, either by using locks or by communicating (a channel send or receive).

Controlling read/writes to memory mapped files (windows)

Are you meant to protect against simultaneous reads/writes to file-mapped memory that is open in multiple processes?
For example, if a string in the memory is "hello" and one process writes "hi..." over it, am I correct in saying that another process reading at the same time may get a torn value like "hi.lo"?
Basically, what I am asking is how people protect against these sorts of things. Are you meant to use semaphores? Do those work across processes?
Yes, if you need to protect against multiple writers or avoid reading partial updates, then a shared mutex/semaphore used by each process would work to control access to the shared data.
There is some sample code which does this at the bottom of this MSDN article: Memory-Mapped Files in .NET 4.0
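For the raw Win32 equivalent of that sample, here is a minimal sketch. The names "Local\\StringShm" and "Local\\StringShmMutex" are made up and error handling is trimmed; readers would take the same named mutex before examining the buffer.

```cpp
// Minimal sketch: a named mutex serializing access to a shared mapping.
#include <windows.h>
#include <string.h>

int main() {
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, 64,
                                     L"Local\\StringShm");
    char* shared = (char*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    // A named mutex is visible to every process that knows the name.
    HANDLE hMutex = CreateMutexW(NULL, FALSE, L"Local\\StringShmMutex");

    // Writer: because readers lock the same mutex first, none of them
    // can observe a half-written "hi.lo" state.
    WaitForSingleObject(hMutex, INFINITE);
    memcpy(shared, "hi...", 6);   // 5 chars plus the terminating NUL
    ReleaseMutex(hMutex);

    CloseHandle(hMutex);
    UnmapViewOfFile(shared);
    CloseHandle(hMap);
    return 0;
}
```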

Why use SysV or POSIX shared memory vs mmap()?

Needing to use IPC to pass large-ish amounts of data (200kb+) from a child process to a parent on OS X 10.4 and above, I read up on shared memory on Unix, specifically System V and POSIX shared memory mechanisms. Then I realized that mmap() can be used with the MAP_ANON and MAP_SHARED flags to do a similar thing (or just with the MAP_SHARED flag, if I don't mind a regular file being created).
My question is, is there any reason not to just use mmap()? It seems much simpler, the memory is still shared, and it doesn't have to create a real file if I use MAP_ANON. I can create the file in the parent process then fork() and exec() the child and use it in the child process.
Second part of the question is, what would be reasons that this approach is not sufficient, and one would have to use SysV or POSIX shared memory mechanisms?
Note that I was planning on doing synchronization using pipes that I need for other communication, i.e. the parent asks for data over the pipe, the child writes it to shared memory, and responds over the pipe that it's ready. No multiple readers or writers involved. Portability is not a priority.
If you have a parent/child relationship, it's perfectly fine to use mmap.
sysv_shm is the original Unix implementation that allows related and unrelated processes to share memory. posix_shm standardized shared memory.
If you're on a POSIX system without mmap, use posix_shm. If you're on a Unix without posix_shm, use sysv_shm. If you only need to share memory between a parent and child, use mmap if it's available.
If memory serves, the only reason to use SysV/POSIX shared memory over mmap is portability. In particular, older Unix systems don't support MAP_ANON. Solaris, Linux, the BSDs, and OS X do, however, so in practice there's little reason not to use mmap.
shm on Linux is typically implemented via a /dev/shm file that gets mmapped, so performance should be equivalent. I'd go with mmap (with MAP_ANON and MAP_SHARED, as you mention) for simplicity, given that portability is no issue, as you say is the case for you.
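To make the parent/child pattern concrete, a minimal sketch follows. Error handling is trimmed, wait() stands in for the pipe handshake the question describes, and note that an anonymous mapping survives fork() but not exec() (also, MAP_ANON is spelled MAP_ANONYMOUS on some systems).

```cpp
// Minimal sketch: anonymous shared mapping inherited across fork().
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    // Shared, anonymous region: no backing file is ever created.
    char* buf = (char*)mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANON, -1, 0);

    if (fork() == 0) {            // child: produce the data
        strcpy(buf, "payload from child");
        _exit(0);
    }
    wait(NULL);                   // stand-in for the pipe handshake
    printf("parent read: %s\n", buf);

    munmap(buf, 4096);
    return 0;
}
```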
As far as I can tell from the documentation, you must use SysV shared memory if you want to use Xlib/XCB shared-memory images.
