I'm just starting out with WinRT's concurrency model. I have a task that I need to wait on, but calling wait() throws an exception that I can't catch.
Simplest possible code:
concurrency::task<StorageFile^> getFileTask = concurrency::create_task(Windows::Storage::ApplicationData::Current->LocalFolder->GetFileAsync(fileString));
getFileTask.wait();
The exception it throws is:
Microsoft C++ exception: Concurrency::invalid_operation at memory location 0x0402C45C
How do I set this up so that it works?
One of the most important rules that you must follow when building a Windows Store app is that you must never block the UI thread. Never ever.
If you start an asynchronous operation, you get a future or task that "owns" that operation. If you call get() or wait() on that task before the asynchronous operation has completed, the call will block until the operation completes and then return the result.
Since blocking the UI thread is bad, if you attempt to synchronize with a not-yet-completed asynchronous operation on the UI thread, the call to get() or wait() will throw, to prevent the UI thread from being blocked. This exception is there to help you to remember not to block the UI thread. :-)
You should use task's then() to provide a continuation that will run when the asynchronous operation completes. If the continuation needs to run on the UI thread as well, be sure to pass task_continuation_context::use_current() to then() to ensure that the continuation is marshaled back to the UI thread for execution.
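For example, a minimal sketch along those lines, assuming the same fileString from the question and that this code runs on the UI thread:

    concurrency::create_task(
        Windows::Storage::ApplicationData::Current->LocalFolder->GetFileAsync(fileString))
        .then([](Windows::Storage::StorageFile^ file)
    {
        // Runs only after GetFileAsync has completed, so the UI thread is never blocked.
        // use_current() marshals this continuation back onto the thread that attached it.
        // ... use 'file' here ...
    }, concurrency::task_continuation_context::use_current());

The continuation receives the StorageFile^ that get() or wait() would have returned.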
Note: This exception is only thrown if you are using C++/CX. If you are using C++ without the C++/CX language extensions, the call to get() or wait() will successfully block, potentially resulting in a poor user experience. In general, C++/CX has many more "guard rail" features like this one that are designed to make it easier for you to write good code. When using plain C++ without the language extensions, you get the full power of C++, with the understanding that there are more opportunities for error.
Related
When would I choose to use Dispatchers.Unconfined? Is it when it doesn't really matter where the coroutine runs? So you let the coroutine pick whichever thread pool suits it best?
And how does it differ from Dispatchers.Default? Is it that the Default dispatcher always runs the coroutine within a specific thread pool defined as the default one?
So you let the coroutine pick whichever thread pool suits it best?
That's not really how Unconfined works. The best way to understand it is that it is a "no-op" dispatcher that doesn't actually do any dispatch at all. Wherever you call continuation.resume(), that's where the coroutine resumes execution — within that very call. When the resume() call returns, it means the coroutine has either suspended again or completed.
In normal programming, you usually call continuation.resume() from a callback and it is not your code that runs the callback, so you don't actually have any control over the thread where your coroutine will resume. It is not advisable to use the Unconfined dispatcher when resuming from a callback provided by a library that is not under your control.
Unconfined is really a special-cased tool you can use when building a coroutine execution environment yourself, or in other custom scenarios. Basically, you should use it only when you are actively looking for a way to disable the normal dispatching mechanism.
The unconfined dispatcher is appropriate for coroutines which neither consume CPU time nor update any shared data (like UI) confined to a specific thread.
So basically, I'd use it in situations that aren't IO-bound, UI-related, or computation-heavy :D.
I think the number of use cases for this is pretty low, but I'd think of an operation that isn't heavy, yet for some reason you'd still like it to run on a different thread.
Here's a link for how it actually works.
Dispatchers.Default is really different; it's mostly used for heavy CPU operations.
This is because it dispatches work to a thread pool with as many threads as there are CPU cores (and at least 2). This way developers can leverage the full capacity of the CPU when doing heavy computational work.
My activities throw exceptions from time to time during execution, so I've implemented the Faulted methods of Activity<TInstance> to handle that, discarding the changes made in the Execute method. I assumed there was some wiring underneath Automatonymous that would run the Faulted method when the Execute method throws an exception and then call the Faulted methods of the activities that had already executed. It turns out there's no such thing: my Faulted methods are never executed.
Should I call those myself in a try/catch block instead? I could produce the BehaviorExceptionContextProxy based on the BehaviorContext and the exception thrown. The only next Behavior I could pass would be the one injected into that activity's Execute method, but logically that means I'd be compensating in the wrong direction: that next Behavior is meant to execute after this activity succeeds, so I'd compensate too much.
I also tried using Catch in the state machine, which does handle the exception; however, I couldn't find any way to start the compensation flow for the activity that failed when all I have is the ExceptionActivityBinder.
Is there any good way to trigger the compensation flow of the activities?
An activity within a state machine (using Automatonymous) is much different than an activity within Courier. Unfortunately, they both have the same name, which can create confusion.
When an activity throws an exception, the Faulted method of the next activity in the behavior is called. If that activity is a regular activity (such as .Then, .Publish, etc.), it is skipped, since the Faulted method of those activities simply passes the exception on to the next activity in the behavior.
A Catch activity, however, can be used to catch the exception and execute a rescue behavior (which is a sequence of activities).
Either way, the Faulted method of the activity which throws an exception within the Execute method is not called. So yes, you should use a try/catch, but allow the exception to flow back out of the Execute method so that the behavior handles it properly.
I don't understand part of the latest Windows threadpool API. I need help with that.
From the documentation, the recipe to use it for I/O (in my case, for SOCKET) can be summarized as follows:
1. Call CreateThreadpoolIo.
2. Call StartThreadpoolIo. You can find this warning there:
   You must call this function before initiating each asynchronous I/O operation on the file handle bound to the I/O completion object. Failure to do so will cause the thread pool to ignore an I/O operation when it completes and will cause memory corruption.
3. Call the operation on the file handle (e.g., WSARecvFrom). If it fails, call CancelThreadpoolIo. Otherwise, process the result when it is available. WSARecvFrom, when used asynchronously, asks for a WSAOVERLAPPED (that you have to create beforehand) but not for any information that links it to the previous call to StartThreadpoolIo. CancelThreadpoolIo only asks for the PTP_IO, not for any additional information that would identify a specific asynchronous operation.
4. Repeat steps 2 and 3.
5. Call CloseThreadpoolIo to finish. You can find this warning there:
   It may be necessary to cancel threadpool I/O notifications to prevent memory leaks. For more information, see CancelThreadpoolIo.
I usually need it for UDP, so I strive to have several reception operations queued (asynchronous WSARecvFrom operations started) at any given time. That way I don't have to rush to start another reception operation at the beginning of the callback function, nor synchronize access to the reception buffers: I can have a pool of them, each one able to contain a datagram, and reissue the reception operation when I finish processing each message; in the interim, the other queued operations keep the receiver busy. Datagrams are independent and self-contained. I'm aware that this approach may not be valid for TCP.
StartThreadpoolIo/CancelThreadpoolIo seem to me to be the source of the problem: StartThreadpoolIo and WSARecvFrom are not directly bound (they don't share any arguments). So:
How can the framework know which operation to cancel when you call CancelThreadpoolIo? How does it cancel just the operation that failed and not any of the pending ones?
You can say, "don't call StartThreadpoolIo concurrently". I can live without several concurrent WSARecvFrom's, but I can't live without concurrent WSARecvFrom and WSASendTo. So I think being unable to have several asynchronous operations at the same time can't be the way the API was designed.
You can say, "call StartThreadpoolIo only once, that will suffice to register the callback; it is an on/off process". But the documentation says:
You must call this function before initiating each asynchronous I/O operation on the file handle...
You can say, "it cancels the operation started by the same thread that just called StartThreadpoolIo". But then the advice of calling CancelThreadpoolIo in the context of calling CloseThreadpoolIo doesn't make sense (I will call CloseThreadpoolIo from the thread that triggers stopping, which will be completely independent from the threads issuing the asynchronous operations; and a single call to CancelThreadpoolIo may not be enough to cancel several operations). Being unable to trigger cancellation from a different thread is a serious limitation, anyway. I'm aware of the existence of CreateThreadpoolCleanupGroup, but my question is more fundamental. I want to understand how this API can be fundamentally right and useful.
You can say "call CreateThreadpoolIo several times, so that you have independent PTP_IO's to work with". It doesn't work. When I call CreateThreadpoolIo a second time, nullptr is returned.
Am I wrong, or is this API awkward? Normally, other asynchronous APIs work with one of these patterns:
Create an operation and receive a handle => call methods passing the handle.
Create a reusable handle => call methods (including starting operations) passing the handle.
The latest Windows threadpool API uses neither of them: the handle seems to be implicit, or rather there are several handles for the same operation (the TP_IO, the WSAOVERLAPPED, the StartThreadpoolIo call) that aren't all explicitly linked together.
Thank you very much for your help.
How can the framework know which operation to cancel when you call CancelThreadpoolIo? How does it cancel just the operation that failed and not any of the pending ones?
CancelThreadpoolIo() doesn't cancel I/O. It is reciprocal to StartThreadpoolIo(): StartThreadpoolIo() prepares the thread pool to accept a completion. If the thread pool doesn't expect a completion, it won't wait for it, so you may miss it. If the thread pool expects a completion that never happens, it may waste resources.
CancelThreadpoolIo() undoes whatever StartThreadpoolIo() did.
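To make the pairing concrete, here is a minimal sketch (error handling trimmed; it assumes 'io' was created once with CreateThreadpoolIo on the socket handle, and that 'ov', 'buf', 'from' and 'fromLen' are per-operation state you own and keep alive until the completion fires):

    #include <winsock2.h>
    #include <windows.h>

    void PostReceive(SOCKET s, PTP_IO io, WSAOVERLAPPED* ov, WSABUF* buf,
                     sockaddr* from, int* fromLen)
    {
        DWORD flags = 0;

        // Tell the thread pool to expect one more completion on this handle.
        StartThreadpoolIo(io);

        int rc = WSARecvFrom(s, buf, 1, nullptr, &flags, from, fromLen, ov, nullptr);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
        {
            // The call failed synchronously, so no completion will ever arrive:
            // balance the StartThreadpoolIo call.
            CancelThreadpoolIo(io);
        }
        // Otherwise the completion (success or error) is delivered to the callback
        // registered with CreateThreadpoolIo; the OVERLAPPED pointer passed to that
        // callback identifies which of the queued operations finished.
    }

Note how the per-operation identity lives in the OVERLAPPED, not in the PTP_IO: StartThreadpoolIo/CancelThreadpoolIo only adjust the thread pool's count of expected completions, which is why they take no per-operation argument.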
What is the advantage/disadvantage over using RegisterWaitForSingleObject() instead of WaitForSingleObject()?
The reasons that I know of:
RegisterWaitForSingleObject() uses the thread pool already available in the OS.
With WaitForSingleObject(), a thread of your own has to sit waiting (polling) for the event.
Is the only difference polling vs. an automatic callback, or is there any considerable performance advantage between the two?
It's pretty straightforward: WaitForSingleObject() blocks a thread. That thread consumes a megabyte of virtual memory (for its stack) and does nothing useful with it while it is blocked. It won't wake up and resume doing useful work until the handle is signaled.
RegisterWaitForSingleObject() does not block a thread. The thread can continue doing useful work. When the handle is signaled, Windows grabs a thread-pool thread to run the code you specified as the callback. The same code you would have programmed after a WFSO call. There is still a thread involved with getting that callback to run, the wait thread, but it can handle many RWFSO requests.
So the big advantage is that your program can use a lot less threads while still handling many service requests. A disadvantage is that it can take a bit longer for the completion code to start running. And it is harder to program correctly since that code runs on another thread. Also note that you don't need RWFSO when you already use overlapped I/O.
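A minimal sketch of the callback model, as a console demo (OnSignaled is a hypothetical handler; in real code the event would be signaled by some other component):

    #include <windows.h>
    #include <stdio.h>

    // Runs on a thread-pool thread when the handle is signaled (or the wait times out).
    void CALLBACK OnSignaled(PVOID context, BOOLEAN timedOut)
    {
        printf(timedOut ? "timed out\n" : "event signaled\n");
    }

    int main()
    {
        HANDLE event = CreateEventW(nullptr, FALSE, FALSE, nullptr);
        HANDLE waitHandle = nullptr;

        // Instead of blocking a thread with WaitForSingleObject(event, INFINITE),
        // ask the OS thread pool to call OnSignaled when the event fires.
        RegisterWaitForSingleObject(&waitHandle, event, OnSignaled, nullptr,
                                    INFINITE, WT_EXECUTEONLYONCE);

        SetEvent(event);   // simulate the event being signaled elsewhere
        Sleep(100);        // demo only: give the callback a moment to run

        UnregisterWaitEx(waitHandle, INVALID_HANDLE_VALUE);  // wait for pending callbacks
        CloseHandle(event);
        return 0;
    }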
They serve two different code models. With RegisterWaitForSingleObject you get an asynchronous notification callback on an arbitrary thread from the thread pool managed by the OS. If you can structure your code like this, it might be more efficient. On the other hand, WaitForSingleObject is a synchronous wait call that blocks (and thus 'occupies') the calling thread. In most cases, such code is easier to write and is probably less prone to various deadlock and race conditions.
I remember there was a way to do this, something similar to Unix signals but not so widely used, yet I can't remember the term. No events/mutexes are used: the thread is just interrupted at a random place, the function is called, and when it returns, the thread continues.
Windows has Asynchronous Procedure Calls, which can call a function in the context of a specific thread. APCs do not just interrupt a thread at a random place (that would be dangerous: the thread could be in the middle of writing to a file, holding a lock, or running in kernel mode). Instead, an APC is dispatched when the target thread enters an alertable wait by calling a specific function (see the APC documentation).
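A minimal sketch of the mechanism, using QueueUserAPC and an alertable SleepEx (the worker thread here is hypothetical and does nothing but wait):

    #include <windows.h>
    #include <stdio.h>

    // The APC routine: it runs in the target thread, but only once that thread
    // performs an alertable wait (SleepEx, WaitForSingleObjectEx, ... with TRUE).
    void CALLBACK MyApc(ULONG_PTR param)
    {
        printf("APC running in thread %lu, param=%llu\n",
               GetCurrentThreadId(), (unsigned long long)param);
    }

    DWORD WINAPI Worker(LPVOID)
    {
        // The thread is not interrupted at a random point; queued APCs are
        // delivered here, inside the alertable wait.
        SleepEx(INFINITE, TRUE);
        return 0;
    }

    int main()
    {
        HANDLE thread = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);

        // Queue the call to run in the context of the specific target thread.
        QueueUserAPC(MyApc, thread, 42);

        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
        return 0;
    }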
If the reason that you need to call code in a specific thread is because you are interacting with the user interface, it would be more direct to send or post a window message to the window handle that you want to update. Window messages are always processed in the thread that created the window.
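For the UI case, a minimal sketch of handing work to the thread that owns a window (the message-only window and WM_APP_UPDATE are hypothetical demo names; in a real app you would post to your existing top-level window):

    #include <windows.h>
    #include <stdio.h>

    #define WM_APP_UPDATE (WM_APP + 1)   // application-defined message

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_APP_UPDATE)
        {
            // Always runs on the thread that created the window.
            printf("handled on thread %lu, value=%llu\n",
                   GetCurrentThreadId(), (unsigned long long)wParam);
            PostQuitMessage(0);          // end the demo
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wParam, lParam);
    }

    DWORD WINAPI Worker(LPVOID param)
    {
        // From a background thread: hand the value to the window's owning thread.
        PostMessageW(static_cast<HWND>(param), WM_APP_UPDATE, 42, 0);
        return 0;
    }

    int main()
    {
        WNDCLASSW wc = {};
        wc.lpfnWndProc = WndProc;
        wc.hInstance = GetModuleHandleW(nullptr);
        wc.lpszClassName = L"DemoClass";
        RegisterClassW(&wc);

        // A message-only window is enough to receive posted messages.
        HWND hwnd = CreateWindowExW(0, L"DemoClass", L"", 0, 0, 0, 0, 0,
                                    HWND_MESSAGE, nullptr, wc.hInstance, nullptr);

        HANDLE worker = CreateThread(nullptr, 0, Worker, hwnd, 0, nullptr);

        MSG msg;
        while (GetMessageW(&msg, nullptr, 0, 0) > 0)
            DispatchMessageW(&msg);

        WaitForSingleObject(worker, INFINITE);
        CloseHandle(worker);
        return 0;
    }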
You can search for RtlRemoteCall, though it's an undocumented routine. Windows also has APCs, which are semantically similar to Unix signals; however, an APC requires the target thread to be in an alertable state to be delivered, and it's not guaranteed that this condition is always met.