pthread_kill to a GCD-managed thread - macOS

I am attempting to send a signal to a specific thread with pthread_kill. I use pthread_from_mach_thread_np() to get a handle and then use pthread_kill to send the signal.
This worked well in my other testing, but now I see that when attempting to signal a thread internally created by GCD, I get a return code of 45 from pthread_kill.
GCD API that spawned that thread:
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{ ... });
Any reason this is happening?
---
To add some further information, I am not attempting to kill threads. pthread_kill() is the standard POSIX API to send signals to threads. If a signal handler is installed, the thread’s context is switched with a trampoline to the handler.
While what I am attempting to achieve with my signal handler could be achieved in better ways, that is not the question here. Even if only for academic reasons, I would like to understand what is going on internally.

The pthread_kill() API is specifically disallowed on workqueue threads (the worker threads underlying GCD) and returns ENOTSUP for such threads.
This is primarily intended to prevent execution of arbitrary signal handlers in the context of code that may not expect it (since these threads are a shared resource used by many independent subsystems in a process), as well as to abstract away that execution context so that the system has the freedom to change it in the future.
You can see the details of how this is achieved in Apple's open-source libpthread implementation.
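A minimal sketch that reproduces the behaviour (macOS assumed; the no-op handler and the error printout are only illustrative):

#include <dispatch/dispatch.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_signal(int signo) { (void)signo; }   // no-op handler, in case a signal were ever delivered

int main(void)
{
    signal(SIGUSR1, on_signal);

    __block pthread_t worker;
    dispatch_semaphore_t ready = dispatch_semaphore_create(0);

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        worker = pthread_self();                // a workqueue thread owned by GCD
        dispatch_semaphore_signal(ready);
        sleep(5);                               // keep the thread alive for the test
    });

    dispatch_semaphore_wait(ready, DISPATCH_TIME_FOREVER);

    int rc = pthread_kill(worker, SIGUSR1);     // rejected for workqueue threads
    printf("pthread_kill returned %d (%s)\n", rc, strerror(rc));   // 45, ENOTSUP
    return 0;
}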

That is a very bad idea. You don't own GCD's thread pool, and you absolutely must not kill its threads out from under it.
The answer to your question is DO NOT DO THAT UNDER ANY CIRCUMSTANCES.

Related

ZeroMQ: several I/O threads but only one of them in user code?

By default, there is only one thread doing I/O in ZeroMQ. Thus, no more than one such thread will be executing user code when we are using callbacks, as in Node.js:
aSocket.on ('message', function(request) { ... user code ... } );
But, at least in the C API, one may ask ZeroMQ to have more than one I/O thread.
In this case (several I/O threads), can we assume that no more than one I/O thread will be executing user code in callbacks?
If that is not true in general, I guess it is at least true in Node.js.
To directly answer:
In this case (several I/O threads), can we assume that no more than one I/O thread will be executing user code in callbacks?
The ZeroMQ C library doesn't have a callback-based framework, so yes, we can assume that. However, as you note in your post, you can configure it to use multiple I/O threads, in which case you need to handle any synchronization yourself -- again, there are no callbacks.
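As a point of reference, here is a small sketch against the C API (libzmq assumed to be installed). Raising ZMQ_IO_THREADS only adds background network threads; "user code" still runs on whichever thread calls zmq_recv()/zmq_send():

#include <assert.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    assert(zmq_ctx_set(ctx, ZMQ_IO_THREADS, 4) == 0);   // must be set before any sockets are created

    void *sock = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(sock, "tcp://*:5555");

    char buf[256];
    int n = zmq_recv(sock, buf, sizeof buf, 0);   // "user code" runs here, on the calling thread
    if (n >= 0)
        zmq_send(sock, buf, (size_t)n, 0);        // echo the request back

    zmq_close(sock);
    zmq_ctx_term(ctx);
    return 0;
}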

WaitForSingleObject() vs RegisterWaitForSingleObject()?

What are the advantages and disadvantages of using RegisterWaitForSingleObject() instead of WaitForSingleObject()?
What I know so far:
RegisterWaitForSingleObject() uses the thread pool already provided by the OS.
With WaitForSingleObject(), a thread of your own has to block (or poll) waiting for the event.
Is the only difference polling vs. an automatic callback, or is there any considerable performance advantage between the two?
It's pretty straightforward: WaitForSingleObject() blocks a thread. That thread consumes a megabyte of virtual memory for its stack and does nothing useful with it while it is blocked. It won't wake up and resume doing useful work until the handle is signaled.
RegisterWaitForSingleObject() does not block a thread. The thread can continue doing useful work. When the handle is signaled, Windows grabs a thread-pool thread to run the code you specified as the callback. The same code you would have programmed after a WFSO call. There is still a thread involved with getting that callback to run, the wait thread, but it can handle many RWFSO requests.
So the big advantage is that your program can use a lot fewer threads while still handling many service requests. A disadvantage is that it can take a bit longer for the completion code to start running, and it is harder to program correctly since that code runs on another thread. Also note that you don't need RWFSO when you already use overlapped I/O.
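A minimal sketch of that callback model (Win32; the handle names and the callback are just illustrative):

#include <stdio.h>
#include <windows.h>

static VOID CALLBACK OnSignaled(PVOID context, BOOLEAN timedOut)
{
    // Runs on a thread-pool thread, not on the thread that registered the wait.
    printf("event signaled (timedOut=%d)\n", timedOut);
}

int main(void)
{
    HANDLE event = CreateEvent(NULL, FALSE, FALSE, NULL);     // auto-reset event
    HANDLE waitHandle = NULL;

    RegisterWaitForSingleObject(&waitHandle, event, OnSignaled,
                                NULL, INFINITE, WT_EXECUTEONLYONCE);

    // This thread stays free to do useful work; signal the event from anywhere.
    SetEvent(event);
    Sleep(100);                     // give the callback a moment to run (demo only)

    UnregisterWait(waitHandle);     // tear down the registered wait
    CloseHandle(event);
    return 0;
}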
They serve two different code models. With RegisterWaitForSingleObject you get an asynchronous notification callback on a random thread from the thread pool managed by the OS. If you can structure your code like this, it might be more efficient. On the other hand, WaitForSingleObject is a synchronous wait call that blocks (and thus 'occupies') the calling thread. In most cases, such code is easier to write and is probably less prone to deadlocks and race conditions.

How would you implement safe thread cooperation with signals in Ruby 2.0?

I just began to work with threads. I know the theory and understand the main aspects of it, but I've got only a little practice on this topic.
I am looking for a good solution (or pattern, if available) for the following problem.
Assume there should be a transaction component which holds a pool of threads processing tasks from a queue, which is also part of this transaction component.
Each thread of this pool waits until there's a task to do, pops it from the queue, processes it and then waits for the next turn.
Assume also, there are multiple threads adding tasks to this queue. Then I want these threads to suspend until their tasks are processed.
If a task is processed, the thread, which enqueued the processed task, should be made runnable again.
The Ruby class Thread provides the methods Thread#stop and Thread#run. However, I have read that you should not use these methods if you want a stable implementation, and that you should use some kind of signalling mechanism instead.
In Ruby, there are several classes that deal with synchronization and thread cooperation in general, such as Thread, Mutex, Monitor, ConditionVariable, etc.
Maybe ConditionVariable could be my friend, because it allows you to emit signals, but I'm just not sure.
How would you implement this?
Ruby provides a thread-safe Queue class that will handle some of this for you:
queue.pop
Will block until a value is pushed to the queue. You can have as many threads as you want waiting on the queue in this fashion. If one of the things you push onto the queue is another queue or a condition variable then you could use that to signal task completion.
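A rough sketch of that idea in plain Ruby (the worker count and the per-task reply queue are just one possible arrangement):

require 'thread'

tasks = Queue.new

workers = 4.times.map do
  Thread.new do
    loop do
      job, done = tasks.pop          # blocks until a task is available
      result = job.call              # process the task
      done.push(result)              # wake the thread that enqueued it
    end
  end
end

# Producer side: enqueue a task, then block until it has been processed.
done = Queue.new
tasks.push([-> { 21 * 2 }, done])
puts done.pop                        # => 42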
Threads are notoriously hard to reason about effectively. You may find an alternative, higher-level approach such as Celluloid easier to work with.

Inter-thread communication (worker threads)

I've created two threads, A and B, using the Windows CreateThread API. I'm trying to send data from thread A to thread B.
I know I can use an Event object and wait for it in the other thread with WaitForSingleObject. But all that event does is signal the thread; that's it! How can I send data along with it? Also, I don't want thread B to wait until thread A signals: it has its own job to do, and I can't make it wait.
I can't find a Windows function that will let me send data between the worker thread and the main thread, referencing the worker thread either by thread ID or by the returned HANDLE. I do not want to introduce an MFC dependency in my project and would like to hear suggestions on how others have handled this situation. Thanks in advance for any help!
First of all, you should keep in mind that Windows provides a number of mechanisms to deal with threading for you: I/O Completion Ports, the old thread pools and the new thread pools. Depending on what you're doing, any of them might be useful for your purposes.
As to "sending" data from one thread to another, you have a couple of choices. Windows message queues are thread-safe, and a thread (even if it doesn't have a window) can have a message queue, which you can post messages to using PostThreadMessage.
I've also posted code for a thread-safe queue in another answer.
As for having the thread continue executing but take note when a change has happened, the typical method is to have it call WaitForSingleObject with a timeout value of 0 and then check the return value: if it's WAIT_OBJECT_0, the Event (or whatever) has been set, so the thread needs to take note of the change; if it's WAIT_TIMEOUT, there's been no change and it can continue executing. Either way, WaitForSingleObject returns immediately.
Since the two threads are in the same process (at least that's what it sounds like), it is not necessary to "send" data. They can share it (e.g., a simple global variable). You do need to synchronize access to it via an event, semaphore, mutex, etc.
Depending on what you are doing, it can be very simple.
#include <stdio.h>
#include <windows.h>
HANDLE dataReady;    // semaphore, e.g. CreateSemaphore(NULL, 0, 1, NULL)
int    sharedData;   // simple global shared by both threads

DWORD WINAPI Thread1Func(LPVOID unused) {
    sharedData = 42;                           // set some global data
    ReleaseSemaphore(dataReady, 1, NULL);      // signal that it is available
    return 0;
}

DWORD WINAPI Thread2Func(LPVOID unused) {
    WaitForSingleObject(dataReady, INFINITE);  // check/wait until data is available
    printf("got %d\n", sharedData);            // use the data
    return 0;
}
If you are concerned with minimizing Windows dependencies, and assuming you are coding in C++, then I recommend using Boost.Threads, which is a pretty nice, Posix-like C++ threading interface. This will give you easy portability between Windows and Linux.
If you go this route, then use a mutex to protect any data shared across threads, and a condition variable (combined with the mutex) to signal one thread from the other.
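For illustration only, here is the same mutex-plus-condition-variable pattern sketched with the native Win32 primitives (SRW lock and CONDITION_VARIABLE) rather than Boost:

#include <stdio.h>
#include <windows.h>

static SRWLOCK            lock  = SRWLOCK_INIT;
static CONDITION_VARIABLE ready = CONDITION_VARIABLE_INIT;
static int                sharedValue;
static BOOL               hasValue;

static DWORD WINAPI Consumer(LPVOID unused)
{
    AcquireSRWLockExclusive(&lock);
    while (!hasValue)                                   // guard against spurious wakeups
        SleepConditionVariableSRW(&ready, &lock, INFINITE, 0);
    printf("consumer got %d\n", sharedValue);
    ReleaseSRWLockExclusive(&lock);
    return 0;
}

int main(void)
{
    HANDLE t = CreateThread(NULL, 0, Consumer, NULL, 0, NULL);

    AcquireSRWLockExclusive(&lock);
    sharedValue = 42;                                   // shared data, protected by the lock
    hasValue = TRUE;
    ReleaseSRWLockExclusive(&lock);
    WakeConditionVariable(&ready);                      // signal the waiting thread

    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    return 0;
}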
Don't use a mutex when you are only working within a single process, because it has more overhead (it is a system-wide object)... Place a critical section around your data and enter it before access (as Jerry Coffin did in his code for the thread-safe queue).

Are there any benefits of suspending a thread over making it wait?

I was going through a legacy code and found that the code uses SuspendThread Function to suspend the execution of a worker thread. Whenever the worker thread needs to process a request, the calling thread resumes this worker thread. Once the task is done the thread suspends itself.
I don't know why it was done this way. In my opinion, it could have been done more elegantly using an Event object with the WaitForSingleObject API.
My question is, what are the benefits (if any) of suspending a thread as compared to making a thread wait on a synchronization object? In which scenarios would you prefer SuspendThread, ResumeThread APIs?
No.
Suspending a thread is discouraged in every environment I've ever worked in. The main concern is that a thread may be suspended while holding a lock on some resource, potentially causing a deadlock. Any resources saved in terms of synchronization objects aren't worth the deadlock risk.
This is not a concern when a thread is made to wait, as the thread inherently controls its own "suspension" and can be sure to release any locks it is holding.
If you read the documentation on SuspendThread, you'll see that it is meant for use by debuggers. Tear it out of any application code if you can.
To illustrate my point, a list of the "do not use" suspension methods I've come across:
SuspendThread
Thread.Suspend
Thread.suspend
As an aside, I'm really surprised that Thread.Suspend in .NET was "supported" in 1.0/1.1; it really should have carried a warning from the start.
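A minimal sketch of the wait-based alternative the question describes (one auto-reset event per worker; the names here are made up): the worker parks itself by waiting, and the caller "resumes" it with SetEvent:

#include <stdio.h>
#include <windows.h>

static HANDLE workAvailable;            // auto-reset event, one per worker
static volatile LONG shuttingDown;

static DWORD WINAPI WorkerProc(LPVOID unused)
{
    for (;;) {
        WaitForSingleObject(workAvailable, INFINITE);  // "suspend" until resumed
        if (shuttingDown) return 0;
        printf("processing a request\n");              // the actual work goes here
    }
}

int main(void)
{
    workAvailable = CreateEvent(NULL, FALSE, FALSE, NULL);   // auto-reset, initially unsignaled
    HANDLE worker = CreateThread(NULL, 0, WorkerProc, NULL, 0, NULL);

    SetEvent(workAvailable);            // "resume": hand the worker one request
    Sleep(100);

    InterlockedExchange(&shuttingDown, 1);
    SetEvent(workAvailable);            // wake the worker one last time so it can exit
    WaitForSingleObject(worker, INFINITE);
    CloseHandle(worker);
    CloseHandle(workAvailable);
    return 0;
}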
You'll need a separate event object for each thread if you want to be able to wake up a specific thread. That leads to higher kernel-object consumption, which is not good in itself and could cause problems on early versions of Windows. With manual resume you don't need any new kernel objects.
