boost asio concurrent async_read and async_write - c++14

Looking at the documentation, it seems the TCP socket object is not thread-safe. So I cannot issue async_read from one thread and async_write concurrently from another thread? I would guess the same applies to boost::asio::write() as well.
Can I issue a synchronous write() while doing an async_read from another thread?
If that is not safe, then the only way is probably to get the socket's native handle and use synchronous Linux mechanisms to achieve concurrent reads and writes. I have an application where the reads and writes are actually independent.

It is thread-safe for the use cases you listed. You can read in one thread and write in another, and you can use synchronous as well as asynchronous operations for that.
You will, however, run into problems if you try to do one dedicated operation type (e.g. reads) from more than one thread, especially if you are using the free-standing/composed operations (boost::asio::read(socket) instead of socket.read_some()). The reason is that only the primitive operations are atomic/thread-safe, and the composed operations work by calling into those primitives multiple times. A sketch of the safe full-duplex pattern follows.
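To make that concrete, here is a minimal sketch of the pattern described above, assuming a reasonably recent Boost (io_context and make_address are Boost 1.66+) and a placeholder address/port: one thread runs a single chain of asynchronous reads while a second thread issues synchronous writes on the same socket.

    // One async read chain + one synchronous writer thread on one socket.
    #include <boost/asio.hpp>
    #include <array>
    #include <functional>
    #include <iostream>
    #include <thread>

    namespace asio = boost::asio;
    using asio::ip::tcp;

    int main() {
        asio::io_context io;
        tcp::socket socket(io);
        socket.connect(tcp::endpoint(asio::ip::make_address("127.0.0.1"), 9000));

        // Reader: never more than one async_read_some in flight at a time.
        std::array<char, 1024> buf;
        std::function<void()> start_read = [&] {
            socket.async_read_some(asio::buffer(buf),
                [&](boost::system::error_code ec, std::size_t n) {
                    if (ec) return;
                    std::cout.write(buf.data(), n);
                    start_read();            // chain the next read
                });
        };
        start_read();

        // Writer: synchronous writes from one dedicated thread are fine,
        // as long as no other thread ever writes concurrently.
        std::thread writer([&] {
            boost::system::error_code ec;
            std::string msg = "ping\n";
            for (int i = 0; i < 3 && !ec; ++i)
                asio::write(socket, asio::buffer(msg), ec);
        });

        io.run();       // read completions execute on this thread
        writer.join();
    }

The key constraint, per the answer, is that only one read and only one write are ever in flight at a time, each owned by its own thread.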

Will GetOverlappedResultEx create a thread to process on, or do I have to create and sync the threads?

Trying to understand how this works... do I have to create various threads to take advantage of the functionality of GetOverlappedResultEx? Or why couldn't I just call GetOverlappedResult on a separate thread from the main thread, so the blocking I/O doesn't interfere with the main thread's operations?
GetOverlappedResult function
https://learn.microsoft.com/en-us/windows/win32/api/ioapiset/nf-ioapiset-getoverlappedresult
Retrieves the results of an overlapped operation on the specified file, named pipe, or communications device. To specify a timeout interval or wait on an alertable thread, use GetOverlappedResultEx.
https://learn.microsoft.com/en-us/windows/win32/api/ioapiset/nf-ioapiset-getoverlappedresultex
Retrieves the results of an overlapped operation on the specified file, named pipe, or communications device within the specified time-out interval. The calling thread can perform an alertable wait.
https://learn.microsoft.com/en-us/windows/win32/fileio/alertable-i-o
You handle threads, for concurrency, yourself.
There are basically three ways to do it:
Having initiated an overlapped (i.e., async completion) I/O operation, you do something else and then every once in a while poll the handle to see if the overlapped operation has completed. This is how you can use GetOverlappedResult with bWait = FALSE, checking for ERROR_IO_INCOMPLETE (the HasOverlappedIoCompleted macro performs the equivalent STATUS_PENDING test) to see if the operation isn't done yet. (A sketch of this and of option #2 follows this list.)
You sit around waiting for an overlapped operation to complete. It's not as bad as it sounds, because you can actually wait for any of a set of overlapped operations to complete. As soon as any one completes you handle it, then loop around to wait for the rest. Handling it, of course, may fire off another async operation; you then add that handle to the list. This is where you use WaitForSingleObject{Ex} or, better, WaitForMultipleObjects{Ex}.
You use I/O Completion ports. Here you pass some handles to a kernel object called an I/O Completion port - this kernel object cleverly combines a thread pool (that it manages itself) with callbacks. It is a very efficient way of dealing with multiple - in fact, very many - async operations in-flight simultaneously. In these callbacks you can do whatever you want, including initiating more async operations and adding them to the same I/O Completion port.
There is also a fourth concept: alertable I/O, which executes a callback as an "APC" on the thread that initiated the I/O, provided that thread is in an "alertable" state - which means it is executing one of certain APIs that wait in the kernel. But I've never used it, as it seems to have drawbacks (such as only working on the thread that initiated the I/O, and that the environment the callback runs in isn't as clear as it could be), and if you're going to go that far, just figure out I/O Completion ports and use them.
Options #2 and #3 of course involve concurrent programming - so in both cases you have to make sure your callbacks are thread-safe with respect to your other threads.
There are plenty of examples of all these methods out there on the intertubes.
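By way of illustration, here is a minimal sketch of option #1 (polling), with option #2 (blocking on the event) shown as a commented-out alternative. The file name is a placeholder:

    // Overlapped read, then poll for completion.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE file = CreateFileA("some_file.bin", GENERIC_READ, FILE_SHARE_READ,
                                  nullptr, OPEN_EXISTING, FILE_FLAG_OVERLAPPED,
                                  nullptr);
        if (file == INVALID_HANDLE_VALUE) return 1;

        OVERLAPPED ov = {};
        ov.hEvent = CreateEventA(nullptr, TRUE, FALSE, nullptr); // manual-reset

        char buf[4096];
        if (!ReadFile(file, buf, sizeof buf, nullptr, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            return 1;                                // real failure

        // Option #1: poll. bWait == FALSE makes the call return at once;
        // ERROR_IO_INCOMPLETE means "still pending, come back later".
        DWORD bytes = 0;
        while (!GetOverlappedResult(file, &ov, &bytes, FALSE)) {
            if (GetLastError() != ERROR_IO_INCOMPLETE) return 1;
            // ... do other useful work between polls ...
            Sleep(10);
        }

        // Option #2 (alternative): block until the operation completes.
        // WaitForSingleObject(ov.hEvent, INFINITE);
        // GetOverlappedResult(file, &ov, &bytes, TRUE);

        printf("read %lu bytes\n", bytes);
        CloseHandle(ov.hEvent);
        CloseHandle(file);
    }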

IO callbacks on goroutine

I'm a beginner at golang. Looking at all the golang tutorials, it looks like you should create goroutines for everything. Coming from something like libuv in C, where you can define callbacks for socket read/write on a single thread, is the right way to achieve that in golang to create nested goroutines for any IO tasks needed?
As an example, take something like nginx where a single thread will handle multiple connections. To do something like that in golang, we would need a goroutine for every connection?
Go stands out among tools for writing networked services precisely because it has I/O awareness integrated right into the runtime scheduler powering any running Go program.
The basic idea is roughly this: a goroutine performs normal, sequential, callback-free operations on sockets — that is, plain reads and plain writes. As soon as the next I/O operation would block (yes, the relevant syscall on a Unix-like kernel returns EWOULDBLOCK), the goroutine is suspended, its socket is handed off to a component of the runtime called the "netpoller", which is implemented using the platform-native socket I/O multiplexer such as epoll, kqueue or IOCP, and the OS thread the goroutine was running on is handed off to another goroutine that wants to run. As soon as the netpoller signals that the I/O which caused the goroutine to suspend can proceed, the scheduler queues that goroutine for execution, and it continues to run exactly where it left off.
Because of this, the usual model employed when writing networking services in Go is to have one goroutine per socket. When you're writing a plain TCP server, you create a goroutine yourself and hand it the socket returned by the listener once it has accepted a client's connection (a sketch of this shape follows below).
net/http.Server has this behaviour built-in as it creates a goroutine to serve each incoming client request (actually, for HTTP/1.x, two or even three goroutines are created per connection, but it's invisible to HTTP request handlers).
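For contrast, here is a rough C++ analogue of that goroutine-per-connection shape, with one OS thread per accepted socket (Boost.Asio and port 9000 are my own choices for illustration, not part of the original answer). Go's runtime makes exactly this shape cheap by parking goroutines in the netpoller instead of blocking kernel threads:

    // One blocking thread per connection: the expensive version of what
    // `go handle(conn)` gives you almost for free.
    #include <boost/asio.hpp>
    #include <thread>

    namespace asio = boost::asio;
    using asio::ip::tcp;

    int main() {
        asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));
        for (;;) {
            tcp::socket socket = acceptor.accept();  // like listener.Accept()
            // Like `go handle(conn)`: give the connection its own thread.
            std::thread([s = std::move(socket)]() mutable {
                char buf[1024];
                boost::system::error_code ec;
                // Plain blocking reads/writes, like conn.Read / conn.Write.
                while (std::size_t n = s.read_some(asio::buffer(buf), ec))
                    asio::write(s, asio::buffer(buf, n), ec);  // echo back
            }).detach();
        }
    }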
Now, we've just covered the basics. Of course, there might exist legitimate reasons to have extra goroutines to handle tasks needed to complete a request, and that's what @Volker referred to.
More info:
"What color is your function?" — a classical essay dealing with I/O multiplexing implemented as a library vs it being implemented in the core.
"Go's work-stealing scheduler"; also see this and this and this design doc.
The State Threads library implements an approach quite similar to Go's, just at a much lower level. Its documentation is quite insightful about the approach implemented in Go.
libtask is a much more recent stab at the same problem, by one of Go's creators.

boost.asio - do i need to use locks if sharing database type object between different async handlers?

I'm making a little server for a project. I have a log handler class which contains a log implemented as a map and some methods to act on it (add entry, flush to disk, commit, etc.).
This object is instantiated in the server class, and I'm passing its address to each session so every session can add entries to it.
The sessions are async; the log writes will happen in the async_read callback. I'm wondering if this will be an issue and if I need to use locks?
The map format is map<transactionId, map<sequenceNum, pair<head, body>>>, and each session will access a different transactionId, so there should be no clashes as far as I can figure. Also, hypothetically, if they were all writing to the same place in memory -- something large enough that the operation would not be atomic -- would I need locks? As far as I understand, each async method dispatches a thread to handle the operation, which would make me assume yes. At the same time, I read that one of the great uses of async functions is that synchronization primitives are not needed. So I'm a bit confused.
This is my first time using Asio or any type of asynchronous functions altogether, and I'm not a very experienced coder. I hope the question makes sense! The code seems to run fine so far, but I'm curious whether it's correct.
Thank you!
Asynchronous handlers will only be invoked in application threads processing the io_service event loop via run(), run_one(), poll(), or poll_one(). The documentation states:
Asynchronous completion handlers will only be called from threads that are currently calling io_service::run().
Hence, for a non-thread-safe shared resource:
If the application code only has one thread, then there is neither concurrency nor race conditions. Thus, no additional form of synchronization is required. Boost.Asio refers to this as an implicit strand.
If the application code has multiple threads processing the event-loop and the shared resource is only accessed within handlers, then synchronization needs to occur, as multiple threads may attempt to concurrently access the shared resource. To resolve this, one can either:
Protect the calls to the shared resource via a synchronization primitive, such as a mutex. This question covers using mutexes within handlers.
Use the same strand to wrap() the ReadHandlers. A strand will prevent concurrent invocation of handlers dispatched through it. For more details on the usage of strands, particularly for composed operations such as async_read(), consider reading this answer.
Rather than posting the entire ReadHandler into the strand, one could limit interaction with the shared resource to a specific set of functions, and post those functions as CompletionHandlers to the same strand. The subtle difference between this and the previous solution is the granularity of synchronization. (A strand sketch follows this answer.)
If the application code has multiple threads and the shared resource is accessed both from threads processing the event loop and from threads not processing the event loop, then synchronization primitives, such as a mutex, need to be used.
Also, even if a shared resource is small enough that writes and reads are always atomic, one should prefer explicit and proper synchronization. For example, although the write and read may each be atomic, without proper memory fencing to guarantee memory visibility, a thread may not observe a change in memory even though the memory has actually changed. Boost.Asio will perform the proper memory barriers to guarantee visibility. For more details on Boost.Asio and memory barriers, consider reading this answer.
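As a concrete illustration of the strand option above, here is a minimal sketch. It uses the newer make_strand/post spelling (Boost 1.66+); the answer's io_service::strand and wrap() are the older spelling of the same idea. The Log type and its method are made up for illustration:

    // Multiple threads run the event loop, but every access to the shared
    // map is serialized through one strand, so no mutex is needed.
    #include <boost/asio.hpp>
    #include <map>
    #include <string>
    #include <thread>
    #include <vector>

    namespace asio = boost::asio;

    struct Log {                        // shared, non-thread-safe resource
        std::map<int, std::string> entries;
        void add_entry(int id, std::string body) { entries[id] = std::move(body); }
    };

    int main() {
        asio::io_context io;
        auto strand = asio::make_strand(io);
        Log log;

        // Handlers (simulated here with post) funnel all log work through
        // the strand; the strand never runs two of them concurrently.
        for (int i = 0; i < 100; ++i)
            asio::post(strand, [&log, i] { log.add_entry(i, "entry"); });

        std::vector<std::thread> pool;
        for (int t = 0; t < 4; ++t)
            pool.emplace_back([&io] { io.run(); });
        for (auto& th : pool) th.join();

        return log.entries.size() == 100 ? 0 : 1;   // all writes arrived
    }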

ZeroMQ: several I/O threads but only one of them in user code?

By default, there is only one thread doing I/O in ZeroMQ. Thus, no more than one such thread will be executing user code, in the case that we are using callbacks, like in Node.js:
aSocket.on ('message', function(request) { ... user code ... } );
But, at least in the C API, one may ask ZeroMQ to have more than one I/O thread.
In this case (several I/O threads), can we assume that no more than one I/O thread will be executing user code in callbacks?
If that's not true in general, I guess it is at least so in Node.js.
To directly answer:
In this case (several I/O threads), can we assume that no more than one I/O thread will be executing user code in callbacks?
The ZeroMQ C library doesn't have a callback-based framework, so yes, we can assume that. However, as you note in your post, you can set it up to have multiple I/O threads, in which case you need to deal with this manually in your own way -- again, no callbacks.
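To make the point concrete, here is a minimal sketch against the ZeroMQ C API (the socket type and endpoint are arbitrary choices for illustration): the I/O thread count is just a context option, and user code still touches the socket only from its own threads, with no callbacks anywhere:

    // More I/O threads is a context option; user code remains synchronous.
    #include <zmq.h>
    #include <cstdio>

    int main() {
        void* ctx = zmq_ctx_new();
        zmq_ctx_set(ctx, ZMQ_IO_THREADS, 4);  // set before creating sockets
        printf("io threads: %d\n", zmq_ctx_get(ctx, ZMQ_IO_THREADS));

        void* sock = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(sock, "tcp://*:5555");       // placeholder endpoint

        char buf[256];
        int n = zmq_recv(sock, buf, sizeof buf, 0);  // blocks *this* thread
        if (n >= 0)
            zmq_send(sock, buf, n, 0);   // user code runs here, nowhere else

        zmq_close(sock);
        zmq_ctx_term(ctx);
    }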

WaitForSingleObject() vs RegisterWaitForSingleObject()?

What is the advantage/disadvantage over using RegisterWaitForSingleObject() instead of WaitForSingleObject()?
The reasons that I know of:
RegisterWaitForSingleObject() uses the thread pool already available in the OS.
In the case of WaitForSingleObject(), a thread of your own has to wait, blocked, on the event.
Is the only difference polling vs. an automatic callback, or is there any considerable performance advantage between these?
It's pretty straightforward: WaitForSingleObject() blocks a thread. That thread consumes a megabyte of virtual memory (its stack) and does nothing useful while it is blocked. It won't wake up and resume doing useful work until the handle is signaled.
RegisterWaitForSingleObject() does not block a thread. The thread can continue doing useful work. When the handle is signaled, Windows grabs a thread-pool thread to run the code you specified as the callback. The same code you would have programmed after a WFSO call. There is still a thread involved with getting that callback to run, the wait thread, but it can handle many RWFSO requests.
So the big advantage is that your program can use a lot fewer threads while still handling many service requests. A disadvantage is that it can take a bit longer for the completion code to start running, and it is harder to program correctly since that code runs on another thread. Also note that you don't need RWFSO when you already use overlapped I/O.
They serve two different code models. With RegisterWaitForSingleObject you get an asynchronous notification callback on a random thread from the thread pool managed by the OS. If you can structure your code like this, it might be more efficient. On the other hand, WaitForSingleObject is a synchronous wait call that blocks (and thus 'occupies') the calling thread. In most cases, such code is easier to write and would probably be less error-prone to various deadlock and race conditions.
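A minimal sketch contrasting the two calls; the event and the callback are made up for illustration:

    // RegisterWaitForSingleObject: no thread of ours blocks; the OS thread
    // pool runs the callback when the event is signaled.
    #include <windows.h>
    #include <cstdio>

    // Runs on a thread-pool thread once the event fires (or the wait times out).
    static void CALLBACK on_signaled(PVOID /*context*/, BOOLEAN timed_out) {
        printf("callback: %s\n", timed_out ? "timeout" : "signaled");
    }

    int main() {
        HANDLE event = CreateEventA(nullptr, TRUE, FALSE, nullptr);

        HANDLE wait = nullptr;
        RegisterWaitForSingleObject(&wait, event, on_signaled, nullptr,
                                    INFINITE, WT_EXECUTEONLYONCE);

        // ... this thread stays free to do useful work here ...
        SetEvent(event);
        Sleep(100);              // crude: give the pool callback time to run
        UnregisterWait(wait);    // tear down the registration

        // WaitForSingleObject, by contrast, parks the calling thread:
        // WaitForSingleObject(event, INFINITE);

        CloseHandle(event);
    }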
