Why would I use Sleep() with infinite timeout? - windows

According to MSDN, Sleep() can be given the value INFINITE, which "indicates that the suspension should not time out".
Why would I want to call Sleep() with INFINITE timeout in my program?

I have used Sleep(INFINITE) and it makes perfect sense: I used it to keep the thread alive. I had registered for WMI notification events (via ExecNotificationQueryAsync, which keeps delivering event notifications indefinitely), so the application needed to be kept alive to receive them. Don't know if this makes sense to you.
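To illustrate just the keep-alive pattern (not the actual WMI/COM setup; the listener thread below is a hypothetical stand-in for the asynchronous notification source), a minimal sketch might look like this:

#include <windows.h>
#include <stdio.h>

// Hypothetical stand-in for an asynchronous event source such as WMI notifications
DWORD WINAPI EventListener(LPVOID)
{
    for (;;) {
        Sleep(1000);                     // pretend an event arrives every second
        printf("notification received\n");
    }
}

int main()
{
    CreateThread(NULL, 0, EventListener, NULL, 0, NULL);
    Sleep(INFINITE);                     // park the main thread forever; the process
                                         // stays alive so the callbacks keep arriving
}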

A sleep with no timeout does not need a timer. This reduces the overhead where you anticipate a variable-length wait but are absolutely sure that the thread will be resumed.

As far as I know, ever since SleepEx was introduced, Sleep has just been a thin, convenient wrapper around it, and when they rewrote it as a wrapper they decided simply to pass the timeout parameter through to SleepEx without processing it in any way. Naturally, this means the behavior with INFINITE as the timeout carries over to Sleep (and so must be documented), even though, without SleepEx's bAlertable parameter, it's completely useless: Sleep(timeout) is equivalent to SleepEx(timeout, FALSE), so you get an infinite non-alertable wait.
On Windows CE they may have decided to change this behavior because it was actually rather silly, so I think a Sleep(INFINITE) on CE is translated automatically into a SuspendThread; on desktop Windows, however, they are probably forced to keep the "simple" behavior for compatibility reasons.
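A tiny illustration of the equivalence mentioned above (just a sketch; both finite calls block the current thread for five seconds and cannot be woken early by APCs or I/O completion routines):

Sleep(5000);          // effectively the same as...
SleepEx(5000, FALSE); // ...a non-alertable SleepEx with the same timeout
Sleep(INFINITE);      // hence this is a non-alertable wait that never ends on its own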

In addition to what was said (basically waiting for an interrupt to happen), you might very well use an infinite timeout without being insane. For example, I have an application (a worker) that needs to do three different things at a time.
I chose to run each of those tasks in its own thread and use an infinite timeout in the Main() thread (since the program is not supposed to exit, except if an exception is thrown, in which case the whole app is restarted). This is done for convenience and readability: I can comment out any of the three tasks without affecting the overall behavior, or easily split them into different workers if needed.
This probably adds a small overhead compared to having two new threads plus the main thread doing the third task, but that's negligible given the performance and memory of today's computers.

There's no reason anyone in their right mind would ever call Sleep(INFINITE); it has no practical meaning on its own.
It exists for generality and symmetry with WaitForSingleObject(..., timeout) and SleepEx(timeout), where INFINITE does make sense.
As a reminder, SleepEx (when the wait is alertable) will also run items queued to your thread's APC queue.
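For example, an alertable SleepEx(INFINITE, TRUE) is a reasonable way to park a thread until someone queues an APC to it. A rough, self-contained sketch (not from the answer above):

#include <windows.h>
#include <stdio.h>

VOID CALLBACK MyApc(ULONG_PTR param)
{
    printf("APC ran with parameter %lu\n", (unsigned long)param);
}

DWORD WINAPI Waiter(LPVOID)
{
    DWORD r = SleepEx(INFINITE, TRUE);   // alertable: wakes only to run queued APCs
    printf("SleepEx returned %lu\n", r); // WAIT_IO_COMPLETION after the APC has run
    return 0;
}

int main()
{
    HANDLE h = CreateThread(NULL, 0, Waiter, NULL, 0, NULL);
    Sleep(100);                          // crude: give the waiter time to enter SleepEx
    QueueUserAPC(MyApc, h, 42);          // wakes the alertable wait
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
}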

Well, when we need to wait until ^C but do not want a busy while(1); loop.
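A sketch of that idea: register a console control handler and let the main thread sleep forever instead of spinning. (The handler runs on its own thread when ^C arrives; returning FALSE then lets the default handler terminate the process.)

#include <windows.h>
#include <stdio.h>

BOOL WINAPI OnCtrl(DWORD type)
{
    if (type == CTRL_C_EVENT) {
        printf("Ctrl+C received, shutting down\n");
        // do any cleanup here, then let the default handler terminate the process
    }
    return FALSE;
}

int main()
{
    SetConsoleCtrlHandler(OnCtrl, TRUE);
    puts("Running until Ctrl+C...");
    Sleep(INFINITE);                     // no busy while(1) loop needed
}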

Related

How to terminate long running function after a timeout

So I am attempting to shut down a long-running function if it takes too long. Maybe this is just treating the symptoms rather than the cause, but in any case, in my situation it didn't really work out.
I did it like this:
func foo(abort <-chan struct{}) {
    for {
        select {
        case <-abort:
            return
        default:
            // long running code
        }
    }
}
And in a separate function I close the passed channel after some time, which works: if I cut out the body, the function returns. However, if there is some long-running code, it is not affected; it simply continues its work as if nothing had happened.
I am pretty new to Go, and it feels like this should work, but it does not. Is there anything I am missing? After all, router frameworks have a timeout feature, after which whatever is running is terminated. So maybe this is just out of curiosity, but I would really like to know how to do it.
Your code only checks whether the channel was closed once per iteration, before executing the long-running code. There is no opportunity to check the abort channel after the long-running code starts, so it will run to completion.
You need to occasionally check whether to exit early within the body of the long-running code, and this is more idiomatically accomplished using context.Context and WithTimeout, for example: https://pkg.go.dev/context#example-WithTimeout
In your "long running code" you have to periodically check that abort channel.
The usual approach to implement that "periodically" is to split the code into chunks each of which completes in a reasonably short time frame (given that the system the process runs on is not overloaded).
After executing each such chunk you check whether the termination condition holds and then terminate execution if it is.
The idiomatic approach to perform such a check is "select with default":
select {
case <-channel:
    // terminate processing
default:
}
Here, the default no-op branch is immediately taken if channel is not ready to be received from (or closed).
Some algorithms make such chunking easier because they employ a loop where each iteration takes roughly the same time to execute.
If your algorithm is not like this, you'd have to chunk it manually; in this case, it's best to create a separate function (or a method) for each chunk.
Further points.
Consider using contexts: they provide a useful framework to solve the style of problems like the one you're solving.
What's more, the fact that they can "inherit" from one another allows one to easily implement two neat things:
You can combine various ways to cancel contexts: say, it's possible to create a context which is cancelled either when some timeout passes or explicitly by some other code.
They make it possible to create "cancellation trees" — when cancelling the root context propagates this signal to all the inheriting contexts — making them cancel what other goroutines are doing.
Sometimes, when people say "long-running code" they do not mean code actually crunching numbers on a CPU all that time, but rather code which performs requests to slow entities — such as databases, HTTP servers, etc. — in which case the code is not actually running but sleeping on I/O, waiting for data to arrive so it can be processed.
If this is your case, note that all well-written Go packages (and of course this includes all the packages of the Go standard library which deal with networked services) accept contexts in those API functions which actually make calls to such slow entities. This means that if you make your function accept a context, you can (and actually should) pass that context down the call stack where applicable, so that all the code you call can be cancelled in the same way as yours.
Further reading:
https://go.dev/blog/pipelines
https://blog.golang.org/advanced-go-concurrency-patterns

When to use Unconfined in Kotlin

When would I choose to use Dispatchers.Unconfined? Is it for when it doesn't really matter where the coroutine runs, so you let the coroutine choose whichever thread pool suits it best?
And how does it differ from Dispatchers.Default? Is it that the Default dispatcher always runs within a specific thread pool defined as the default one?
So you let the coroutine choose whichever thread pool suits it best?
That's not really how Unconfined works. The best way to understand it is that it is a "no-op" dispatcher that doesn't actually do any dispatch at all. Wherever you call continuation.resume(), that's where the coroutine resumes execution — within that very call. When the resume() call returns, it means the coroutine has either suspended again or completed.
In normal programming, you usually call continuation.resume() from a callback and it is not your code that runs the callback, so you don't actually have any control over the thread where your coroutine will resume. It is not advisable to use the Unconfined dispatcher when resuming from a callback provided by a library that is not under your control.
Unconfined is really a special-cased tool you can use when building a coroutine execution environment yourself, or in other custom scenarios. Basically, you should use it only when you are actively looking for a way to disable the normal dispatching mechanism.
The unconfined dispatcher is appropriate for coroutines which neither consume CPU time nor update any shared data (like UI) confined to a specific thread.
So basically, I'd use it in situations that are not IO-, UI-, or computation-heavy :D.
I think the number of use cases for this is pretty low, but I'd think of an operation which isn't heavy, yet which for some reason you'd still like to run on a different thread.
Here's a link for how it actually works.
Dispatchers.Default is really different, and it's mostly used for heavy CPU operations.
This is because it dispatches work to a thread pool with a number of threads equal to the number of CPU cores (and at least 2). This way developers can leverage the full capacity of the CPU when doing heavy computational work.

WaitForSingleObject() vs RegisterWaitForSingleObject()?

What is the advantage/disadvantage over using RegisterWaitForSingleObject() instead of WaitForSingleObject()?
The reasons that I know of:
RegisterWaitForSingleObject() uses the thread pool already available in the OS.
In the case of WaitForSingleObject(), a thread of your own has to block waiting for the event.
Is the only difference polling vs. an automatic callback, or is there a considerable performance advantage to one of these?
It's pretty straightforward: WaitForSingleObject() blocks a thread. The thread is consuming a megabyte of virtual memory (its stack) and not doing anything useful with it while it is blocked. It won't wake up and resume doing useful work until the handle is signaled.
RegisterWaitForSingleObject() does not block a thread. The thread can continue doing useful work. When the handle is signaled, Windows grabs a thread-pool thread to run the code you specified as the callback. The same code you would have programmed after a WFSO call. There is still a thread involved with getting that callback to run, the wait thread, but it can handle many RWFSO requests.
So the big advantage is that your program can use a lot less threads while still handling many service requests. A disadvantage is that it can take a bit longer for the completion code to start running. And it is harder to program correctly since that code runs on another thread. Also note that you don't need RWFSO when you already use overlapped I/O.
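A minimal sketch of the RegisterWaitForSingleObject() pattern (the Sleep at the end is just a crude way to let the demo callback finish; real code would synchronize properly or use UnregisterWaitEx):

#include <windows.h>
#include <stdio.h>

VOID CALLBACK OnSignaled(PVOID context, BOOLEAN timedOut)
{
    // Runs on a thread-pool thread once the event is signaled
    printf("event signaled (timed out: %d)\n", (int)timedOut);
}

int main()
{
    HANDLE ev = CreateEvent(NULL, FALSE, FALSE, NULL);     // auto-reset event
    HANDLE waitHandle = NULL;
    RegisterWaitForSingleObject(&waitHandle, ev, OnSignaled, NULL,
                                INFINITE, WT_EXECUTEONLYONCE);

    // ... this thread is free to keep doing useful work here ...

    SetEvent(ev);                 // the callback fires on a pool thread
    Sleep(100);                   // crude: give the callback a chance to run
    UnregisterWait(waitHandle);
    CloseHandle(ev);
}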
They serve two different code models. With RegisterWaitForSingleObject you'll get an asynchronous notification callback on a random thread from the thread pool managed by the OS. If you can structure your code like this, it might be more efficient. On the other hand, WaitForSingleObject is a synchronous wait call, blocking (and thus "occupying") the calling thread. In most cases, such code is easier to write and would probably be less prone to various deadlock and race conditions.

MFC - Add function call to mainloop?

I am fixing an MFC application written in C++. It is a GUI and it communicates with an external module connected to the PC via USB.
I want to avoid using a separate thread. Is there a way I can add things to the main loop so that it runs continuously rather than being purely event-driven?
I want the main loop to call a function runCommStack() on every iteration.
Some possible approaches:
You can use CWnd::SetTimer to set a timer.
You can override CWinApp::OnIdle (called by CWinApp::Run).
You can override CWinApp::Run, copying and modifying the original MFC CWinApp::Run. This is definitely not the easiest solution.
You can create a background thread.
It depends on the requirements of runCommStack(). Does this function run for a long time? Then you probably don't want to run it in the GUI thread. Does runCommStack() need to be called every n milliseconds? Then it might also be better to run it in its own thread. In other cases you can just use the timer or OnIdle approach.
Regarding solution 1: as Tim pointed out WM_TIMER messages are low priority messages and will not be passed to the application while other higher-priority messages are in the message queue. See also Determine priority of a window message.
With solution 2 you should keep in mind that OnIdle will only be called when no window messages are pending, so it is much the same as solution 1 (in fact a little worse).
Also keep in mind that with solutions 2 and 3, runCommStack may not get called while a dialog's DoModal() is running or a message box is displayed, because during MessageBox() control does not return to CWinApp::Run().
I'd implement solution 1 or 4; a rough sketch of solution 1 follows.
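This sketch assumes a dialog-based app; CMyDlg is a placeholder class name and runCommStack() is the function from the question. It is a fragment, not a complete program:

static const UINT_PTR kCommTimerId = 1;

BOOL CMyDlg::OnInitDialog()
{
    CDialogEx::OnInitDialog();
    SetTimer(kCommTimerId, 50, nullptr);    // post WM_TIMER roughly every 50 ms
    return TRUE;
}

void CMyDlg::OnTimer(UINT_PTR nIDEvent)
{
    if (nIDEvent == kCommTimerId)
        runCommStack();                     // must return quickly, or the UI will stall
    CDialogEx::OnTimer(nIDEvent);
}

// Message map entry required: ON_WM_TIMER()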
When no messages (keys, mouse, repaint, and so on) are arriving, the main loop suspends the program, waiting for the next message. So the idea of adding a call to the main loop will give you very erratic operation; for example, minimizing your window will stop all USB communication.
Using SetTimer and calling your runCommStack function in a WM_TIMER handler will probably be much more satisfactory.
You can use idle processing with CWinApp::OnIdle; this will work if reading your USB device takes only a short amount of time, otherwise the user interface will be blocked during long reads.
But using a separate thread is definitely a better method.

Inter-thread communication (worker threads)

I've created two threads A & B using CreateThread windows API. I'm trying to send the data from thread A to B.
I know I can use an Event object and wait on it in the other thread with WaitForSingleObject. But all that event does is signal the thread; that's it! How can I send data? Also, I don't want thread B to block until thread A signals; it has its own job to do. I can't make it wait.
I can't find a Windows function that will let me send data between the worker thread and the main thread, referencing the worker thread either by thread ID or by the returned HANDLE. I do not want to introduce an MFC dependency into my project and would like to hear any suggestions as to how others have handled this situation. Thanks in advance for any help!
First of all, you should keep in mind that Windows provides a number of mechanisms to deal with threading for you: I/O Completion Ports, old thread pools and new thread pools. Depending on what you're doing any of them might be useful for your purposes.
As to "sending" data from one thread to another, you have a couple of choices. Windows message queues are thread-safe, and a a thread (even if it doesn't have a window) can have a message queue, which you can post messages to using PostThreadMessage.
I've also posted code for a thread-safe queue in another answer.
As for having the thread continue executing while still taking note when a change has happened, the typical method is to have it call WaitForSingleObject with a timeout value of 0, then check the return value: if it's WAIT_OBJECT_0, the Event (or whatever) has been set, so the thread needs to take note of the change; if it's WAIT_TIMEOUT, there's been no change and it can continue executing. Either way, WaitForSingleObject returns immediately.
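Sketched out, that pattern looks roughly like this (hEvent is assumed to be an event handle shared with thread A, and DoOwnWork() is a hypothetical placeholder for thread B's regular job):

for (;;) {
    DWORD r = WaitForSingleObject(hEvent, 0);  // timeout of 0: returns immediately
    if (r == WAIT_OBJECT_0) {
        // the event was signaled: pick up the new data / take note of the change
    } else if (r == WAIT_TIMEOUT) {
        // nothing happened: just carry on
    }
    DoOwnWork();                               // hypothetical: thread B's own job
}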
Since the two threads are in the same process (at least that's what it sounds like), then it is not necessary to "send" data. They can share it (e.g., a simple global variable). You do need to synchronize access to it via either an event, semaphore, mutex, etc.
Depending on what you are doing, it can be very simple.
// g_data and g_dataReady (a semaphore from CreateSemaphore) are shared globals
int g_data;
HANDLE g_dataReady;

DWORD WINAPI Thread1Func(LPVOID) {
    g_data = 42;                            // set some global data
    ReleaseSemaphore(g_dataReady, 1, NULL); // signal that it is available
    return 0;
}

DWORD WINAPI Thread2Func(LPVOID) {
    WaitForSingleObject(g_dataReady, INFINITE); // check/wait until data is available
    // use g_data
    return 0;
}
If you are concerned with minimizing Windows dependencies, and assuming you are coding in C++, then I recommend using Boost.Thread, which is a pretty nice, POSIX-like C++ threading interface. This will give you easy portability between Windows and Linux.
If you go this route, then use a mutex to protect any data shared across threads, and a condition variable (combined with the mutex) to signal one thread from the other.
Don't use a mutex when working within a single process, because it has more overhead (it is a system-wide kernel object)... Place a critical section around your data and enter it before each access (as Jerry Coffin did in his code for the thread-safe queue).
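A small sketch of the critical-section approach (not the thread-safe queue referred to above, just the basic pattern):

#include <windows.h>
#include <stdio.h>

CRITICAL_SECTION g_lock;       // process-local lock, cheaper than a kernel mutex
int g_shared = 0;

DWORD WINAPI Worker(LPVOID)
{
    for (int i = 0; i < 100000; ++i) {
        EnterCriticalSection(&g_lock);
        ++g_shared;                             // protected access to shared data
        LeaveCriticalSection(&g_lock);
    }
    return 0;
}

int main()
{
    InitializeCriticalSection(&g_lock);
    HANDLE h[2] = { CreateThread(NULL, 0, Worker, NULL, 0, NULL),
                    CreateThread(NULL, 0, Worker, NULL, 0, NULL) };
    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    printf("g_shared = %d\n", g_shared);        // always 200000 thanks to the lock
    DeleteCriticalSection(&g_lock);
    CloseHandle(h[0]);
    CloseHandle(h[1]);
}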
