I have a blocking call to accept(). From another thread I close the socket, hoping that it'll unblock the accept() call, which it usually does, but I have a case where it doesn't: thread A enters accept(), thread B closes the socket, and thread A never returns from accept().
Question: what could cause closing a socket to not unblock an accept()?
One hacky trick to unblock accept(2) is to actually connect(2) to the listening end from your other thread. Flip some flag indicating it's time to stop the loop, connect(2), close(2) the connecting socket. That way the accept(2)-ing thread would know to close the socket and shut itself down.
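A minimal sketch of that trick, assuming the listener is bound to a known port on localhost; stop_flag and listen_port are illustrative names, not anything from your code:

```cpp
// Sketch of the "connect to yourself" trick (POSIX sockets).
// stop_flag and listen_port are hypothetical names for illustration.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <cstdint>

std::atomic<bool> stop_flag{false};

void request_accept_shutdown(uint16_t listen_port)
{
    stop_flag.store(true);                 // tell the accept loop it's time to quit

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(listen_port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    // The dummy connection makes the blocked accept() return; the accepting
    // thread then sees stop_flag, closes the listener itself, and exits.
    connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(s);
}
```

The accepting thread checks stop_flag each time accept() returns and, when it is set, closes the listening socket itself and shuts down.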
You must not ever free a resource in one thread while another thread is or might be using it. You will never get this to work reliably. For one thing, you can never be 100% sure the thread is actually blocked in accept, as opposed to about to block in it. So there will always be race conditions.
And, of course, shutdown won't work because the socket is not connected.
There are a couple of ways to handle this problem. For example, you can set a flag that the thread checks when it returns from accept and then make a connection yourself. That will cause the thread to return from accept and then it will see the flag and terminate.
You can also switch to non-blocking sockets. Have the thread call select or poll with a timeout and check if the thread should terminate when it returns from select or poll. You can also select or poll on both the socket and a pipe. Then just send a byte on the pipe to unblock the thread. pthread_kill is another possibility, as is pthread_cancel.
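For illustration, here is a rough sketch of the socket-plus-pipe variant with poll; listen_fd and wake_pipe are made-up names and error handling is minimal:

```cpp
// Sketch of the "self-pipe" variant: poll() on the listening socket and a pipe.
// Writing one byte to wake_pipe[1] from another thread unblocks the poll().
// listen_fd and wake_pipe are illustrative names.
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

void accept_loop(int listen_fd, int wake_pipe[2])
{
    for (;;) {
        pollfd fds[2] = {
            { listen_fd,    POLLIN, 0 },
            { wake_pipe[0], POLLIN, 0 },
        };
        if (poll(fds, 2, -1) < 0)
            continue;                       // EINTR etc.

        if (fds[1].revents & POLLIN)        // someone wrote to the pipe: shut down
            break;

        if (fds[0].revents & POLLIN) {
            int client = accept(listen_fd, nullptr, nullptr);
            if (client >= 0) {
                // hand the connection off to a worker ...
                close(client);              // placeholder
            }
        }
    }
}
```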
Not knowing the details of your problem, my guess would be the best solution is to rearchitect so you don't have a thread whose sole job is to wait forever in accept. That way, you won't even have a thread you need to kill. If you don't want to keep accepting connections, just rig things so that your threads stop doing that, but let the threads keep going doing other things. (The number of running threads you have should be dependent on the number of things you can usefully do at once, not the total number of things you have to do.)
Try calling shutdown() followed by close() from thread B. You should be checking the return codes on these calls too as they may help you figure out what is going wrong when thread A fails to become unblocked.
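For reference, a minimal sketch of that suggestion with the return codes checked; listen_fd stands in for whatever descriptor thread A is accepting on:

```cpp
// Sketch: shut down then close the listening socket from thread B,
// logging the return codes as suggested above.
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

void close_listener(int listen_fd)
{
    if (shutdown(listen_fd, SHUT_RDWR) < 0)
        perror("shutdown");   // e.g. some platforms report ENOTCONN for a listener
    if (close(listen_fd) < 0)
        perror("close");
}
```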
What I would like to do is to have one thread waiting for messages (WaitMessage) and another processing the logic of the application. The first thread would wake up on every message, somehow signal this event to the other thread, go to sleep again, and so on. Is this possible?
UPDATE
Consider the following situation. We have a GUI thread, and this thread is busy in a long calculation. If there is no other thread, there is no option but to check for new messages from time to time. Otherwise, the GUI would become unresponsive during the long calculation. Right now my system uses this "polling" approach (it has a single thread that checks the message queue from time to time). However, I would like to know whether this other solution is possible: have another thread waiting on the OS message queue of the GUI so that when a Windows message arrives this thread will wake up and tell the other about the message. Note that I'm not asking how to communicate the news between threads but whether it is possible for the second thread to wait for OS messages that arrive in the queue of the first thread.
I should also add that I cannot have two different threads, one for the GUI and another for the calculations, because the system I'm working on is a Virtual Machine on top of which runs a Smalltalk image that is not thread safe. That's why having a thread that only signals new OS messages would be the ideal solution (if possible.)
This depends on what the second thread needs to do once the first thread has received a message.
If the second thread simply needs to know the first thread received a message, the first thread could signal an Event object using SetEvent() or PulseEvent(), and the second thread could wait on that event using WaitForSingleObject().
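A minimal sketch of the event-based variant using SetEvent(); g_msgEvent is an illustrative name:

```cpp
// Sketch: thread 1 signals an event after receiving a message,
// thread 2 waits for it. g_msgEvent is an illustrative name.
#include <windows.h>

HANDLE g_msgEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr); // auto-reset event

// Thread 1: after WaitMessage()/PeekMessage() sees something new
void NotifyMessageArrived()
{
    SetEvent(g_msgEvent);
}

// Thread 2: wake up whenever thread 1 signals
void WaitForMessageNotification()
{
    if (WaitForSingleObject(g_msgEvent, INFINITE) == WAIT_OBJECT_0) {
        // react to the notification ...
    }
}
```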
If the second thread needs data from the first thread, it could use an I/O Completion Port. The first thread could wrap the data inside a dynamically allocated struct and post it to the port using PostQueuedCompletionStatus(), and the second thread could wait for the data using GetQueuedCompletionStatus() and then free it when done using it.
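A rough sketch of the completion-port variant; MsgData and g_port are hypothetical names standing in for whatever you actually need to pass:

```cpp
// Sketch: pass dynamically allocated data between threads via an I/O completion port.
// MsgData and g_port are hypothetical names used only for illustration.
#include <windows.h>

struct MsgData { UINT msg; WPARAM wParam; LPARAM lParam; };

HANDLE g_port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);

// Producer (the thread that owns the message queue)
void PostToWorker(UINT msg, WPARAM wp, LPARAM lp)
{
    MsgData* data = new MsgData{ msg, wp, lp };
    PostQueuedCompletionStatus(g_port, 0, 0, reinterpret_cast<LPOVERLAPPED>(data));
}

// Consumer (the second thread)
void WorkerLoop()
{
    DWORD bytes;
    ULONG_PTR key;
    LPOVERLAPPED ov;
    while (GetQueuedCompletionStatus(g_port, &bytes, &key, &ov, INFINITE)) {
        MsgData* data = reinterpret_cast<MsgData*>(ov);
        // ... use data->msg / data->wParam / data->lParam ...
        delete data;   // free it when done, as described above
    }
}
```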
Update: based on new information you have provided, it is not possible for one thread to wait on or service another thread's message queue. Only the thread that created and owns the queue can poll messages from its queue. Each thread has its own message queue.
You really need to move your long calculations to a different thread, they don't belong in the GUI thread to begin with. Let the GUI thread manage the GUI and service messages, do any long-running things in another thread.
If you can't do that because your chosen library is not thread safe, then you have 4 options:
1. Find a different library that is thread safe.
2. Have the calculations poll the message queue periodically when running in the GUI thread.
3. Break up the calculations into small chunks that can be triggered by the GUI thread posting messages to itself. Post a message and return to the message loop. When the message is received, do a little bit of work, post the next message, and return to the message loop. Repeat as needed until the work is done. This allows the GUI thread to continue servicing the message queue in between each calculation step (see the sketch after this list).
4. Move the library to a separate process that communicates back with your main app as needed.
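A minimal sketch of option 3, assuming a private WM_APP-based message and a placeholder DoOneChunk() helper (both names are made up for illustration):

```cpp
// Sketch of option 3: do the long calculation in small chunks driven by
// messages the GUI thread posts to itself. WM_DO_CHUNK and DoOneChunk()
// are hypothetical names.
#include <windows.h>

const UINT WM_DO_CHUNK = WM_APP + 1;

static int g_remaining = 100;   // placeholder amount of work
bool DoOneChunk()               // returns true while there is still work left
{
    // ... do one slice of the real calculation here ...
    return --g_remaining > 0;
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_DO_CHUNK:
        if (DoOneChunk())                         // do a little bit of work...
            PostMessage(hwnd, WM_DO_CHUNK, 0, 0); // ...then schedule the next chunk
        return 0;                                 // and return to the message loop
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

// Kick off the calculation from wherever it starts:
//   PostMessage(hwnd, WM_DO_CHUNK, 0, 0);
```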
I have a hand-made thread pool. Threads read from a completion port and do some other stuff. One particular thread has to be ended. How do I interrupt its waiting if it hangs in GetQueuedCompletionStatus() or GetQueuedCompletionStatusEx()?
A finite timeout (100-1000 ms) plus an exit flag is far from elegant, causes delays, and is left as a last resort.
CancelIo(completionPortHandle) within an APC in the target thread fails with ERROR_INVALID_HANDLE.
CancelSynchronousIo(completionPortHandle) fails with ERROR_NOT_FOUND.
PostQueuedCompletionStatus() with a termination packet doesn't let me choose which thread dequeues it.
A rough TerminateThread() guarded by a mutex should work (I haven't tested it), but is it really a good idea?
I tried waiting on a special event and the completion port together. WaitForMultipleObjects() returned immediately, as if the completion port were signalled, but GetQueuedCompletionStatus() then didn't return anything.
I read Overlapped I/O: How to wake a thread on a completion port event or a normal event? and googled a lot.
Probably the problem itself – ending a particular thread's work – is a sign of bad design, and all my threads should be equal and combined into a normal thread pool. In that case, the PostQueuedCompletionStatus() approach should work. (Although I have doubts that this approach is beautiful and laconic, especially if threads use GetQueuedCompletionStatusEx() to get multiple packets at once.)
If you just want to reduce the size of the thread pool it doesn't matter which thread exits.
However, if for some reason you need to signal to a particular thread that it needs to exit, rather than allowing any thread to exit, you can use this method.
If you use GetQueuedCompletionStatusEx you can do an alertable wait, by passing TRUE for fAlertable. You can then use QueueUserAPC to queue an APC to the thread you want to quit.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms684954(v=vs.85).aspx
If the thread is busy then you will still have to wait for the current work item to be completed.
Certainly don't call TerminateThread.
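A sketch of the QueueUserAPC approach; the exit flag and APC body are illustrative, not taken from any particular pool implementation:

```cpp
// Sketch: wake a specific pool thread out of GetQueuedCompletionStatusEx()
// by queuing a user APC to it. The thread must wait alertably (fAlertable = TRUE).
#include <windows.h>

// The flag is thread-local; the APC runs on the target thread, so it can set it.
thread_local bool g_exitRequested = false;

void CALLBACK ExitApc(ULONG_PTR /*param*/)
{
    g_exitRequested = true;
}

void PoolThreadLoop(HANDLE port)
{
    OVERLAPPED_ENTRY entries[8];
    ULONG count = 0;

    while (!g_exitRequested) {
        BOOL ok = GetQueuedCompletionStatusEx(port, entries, 8, &count,
                                              INFINITE, TRUE /*fAlertable*/);
        if (!ok) {
            if (GetLastError() == WAIT_IO_COMPLETION)
                continue;        // an APC ran; loop around and re-check the flag
            break;               // real error
        }
        // ... process 'count' completion packets ...
    }
}

// From the controlling thread, to ask one specific thread to quit
// (hThread needs THREAD_SET_CONTEXT access):
//   QueueUserAPC(ExitApc, hThread, 0);
```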
Unfortunately, I/O completion port handles are always in a signaled state and as such cannot really be used in WaitFor* functions.
GetQueuedCompletionStatus[Ex] is the only way to block on the completion port. With an empty queue, the function will return only if the thread becomes alerted. As mentioned by @Ben, QueueUserAPC will make the thread alerted and cause GetQueuedCompletionStatus to return.
However, QueueUserAPC allocates memory and thus can fail in low-memory conditions or when memory quotas are in effect. The same holds for PostQueuedCompletionStatus. As such, using any of these functions on an exit path is not a good idea.
Unfortunately, the only robust way seems to be calling the undocumented NtAlertThread exported by ntdll.dll.
extern "C" NTSTATUS __stdcall NtAlertThread(HANDLE hThread);
Link with ntdll.lib. This function will put the target thread into an alerted state without queuing anything.
I'm trying to use boost::asio for the first time to write a process that connects to N servers and reads data from them.
My question regards the way in which asynchronicity works. My design goal is to connect to all servers in parallel, and also read data from every server in parallel. This should be done with async_connect and async_read, and calling io_service::run() N times, then reading the results. And the question is: is it enough to call io_service::run() from a single thread, sequentially, N times, in order to achieve parallelism?
Note that this is a matter of the implementation of asio: specifically, when calling async_connect and async_read, does the call signal the OS to begin connecting/reading before returning, or does it simply delegate a synchronous connect/read task to a worker thread and return immediately? In the latter case, calling io_service::run() from a single thread would mean serial execution of tasks.
My guess is the former, of course, but I need someone to please confirm. I find it odd that the documentation for the async functions (http://think-async.com/Asio/boost_asio_1_3_1/doc/html/boost_asio/overview/core/basics.html) doesn't mention when the async_xxx calls return, which would clarify my question.
The heart of asio is an event loop, which begins with the call to io_service::run(), a blocking call. When you call async_connect, you queue up the connect operation in the io_service's event queue. To achieve parallelism, you must create a thread pool and have each thread call run() on the same io_service instance.
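A short sketch of that pattern using the older io_service interface the question refers to; the thread count of 4 is arbitrary:

```cpp
// Sketch: run one io_service from a pool of threads so that completion
// handlers can execute in parallel.
#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main()
{
    boost::asio::io_service io;

    // Keep run() from returning while there is no pending work yet.
    boost::asio::io_service::work work(io);

    // ... queue async_connect / async_read operations against 'io' here ...

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); });   // each thread blocks in run()

    // ... later, when shutting down:
    io.stop();
    for (auto& t : pool)
        t.join();
}
```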
If I have an app that is creating threads which do their work and then exit, and one or more threads get themselves into a deadlock (possibly through no fault of my own!), is there a way of programmatically forcing one of the threads to advance past the WaitForSingleObject it might be stuck at, and thus resolving the deadlock?
I don't necessarily want to terminate the thread, I just want to have it move on (and thus allow the threads to exit "gracefully").
(yes, I know this sounds like a duplicate of my earlier question Delphi 2006 - What's the best way to gracefully kill a thread and still have the OnTerminate handler fire?, but the situation is slightly different - what I'm asking here is whether it is possible to make a WaitForSingleObject (Handle, INFINITE) behave like a WaitForSingleObject (Handle, ItCantPossiblyBeWorkingProperlyAfterThisLong)).
Please be gentle with me.
* MORE INFO *
The problem is not necessarily in code I have the source to. The actual situation is a serial COM port library (AsyncFree) that is thread based. When the port is USB-based, the library seems to have a deadlock between two of the threads it creates on closing the port. I've already discussed this at length in this forum. I did recode one of the WaitForSingleObject calls to not be infinite, and that cured that deadlock, but then another one appeared later in the thread shutdown sequence, this time in the Delphi TThread.Destroy routine.
So my rationale for this is simple: when my threads deadlock, I fix the code if I can. If I can't, or a deadlock appears that I don't know about, I just want the thread to finish. It doesn't have to be pretty. I can't afford to have my app choke.
You can make a handle used in WaitForSingleObject invalid by closing it (from some other thread). In this case WaitForSingleObject should return WAIT_FAILED and your thread will be 'moved on'.
If you don't use INFINITE but instead set a given timeout, you can check whether the call returned because the timeout expired or because the handle you were waiting for became signalled. Then your code can decide what to do next: enter another waiting cycle, or simply exit anyway, maybe noting somewhere 'hey, I was waiting but it took too long and I terminated anyway'.
Another option is to use WaitForMultipleObjects and wait on something like an event as well, so the wait can be terminated if needed. The advantage is that it doesn't need the timeout to expire.
Of course, once the thread is awakened it must be able to handle the "exceptional" condition of continuing even though the "main" handle it was waiting for didn't become signalled in time.
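In Win32 terms (the original code is Delphi, so treat this purely as an illustration), the combined wait might look something like this:

```cpp
// Sketch: wait on the "real" handle and an abort event at the same time,
// with a timeout as a final fallback. Names are illustrative.
#include <windows.h>

enum class WaitOutcome { Signaled, Aborted, TimedOut, Failed };

WaitOutcome WaitWithEscapeHatch(HANDLE realHandle, HANDLE abortEvent, DWORD timeoutMs)
{
    HANDLE handles[2] = { realHandle, abortEvent };
    switch (WaitForMultipleObjects(2, handles, FALSE, timeoutMs)) {
    case WAIT_OBJECT_0:     return WaitOutcome::Signaled;  // the thing we waited for
    case WAIT_OBJECT_0 + 1: return WaitOutcome::Aborted;   // someone set the abort event
    case WAIT_TIMEOUT:      return WaitOutcome::TimedOut;  // took too long; give up
    default:                return WaitOutcome::Failed;    // e.g. the handle was closed
    }
}
```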
I've read the documentation for ReadDirectoryChangesW() and also seen the CDirectoryChangeWatcher project, but neither says why one would want to call it asynchronously. I understand that the current thread will not block, but, at least for the CDirectoryChangeWatcher code that uses a completion port, when it calls GetQueuedCompletionStatus(), that thread blocks anyway (if there are no changes).
So if I call ReadDirectoryChangesW() synchronously in a separate thread in the first place, whose blocking I don't care about, why would I ever want to call ReadDirectoryChangesW() asynchronously?
When you call it asynchronously, you have more control over which thread does the waiting. It also allows you to have a single thread wait for multiple things, such as a directory change, an event, and a message. Finally, even if you're doing the waiting in the same thread that set up the watch in the first place, it gives you control over how long you're willing to wait. GetQueuedCompletionStatus has a timeout parameter that ReadDirectoryChangesW doesn't offer by itself.
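For illustration, a rough sketch of the asynchronous form driven through a completion port, where the wait can time out; the filter flags and 2-second timeout are arbitrary choices:

```cpp
// Sketch: asynchronous ReadDirectoryChangesW driven through a completion port,
// so the waiting thread can use a timeout (and could watch other handles too).
#include <windows.h>

void WatchDirectory(const wchar_t* path)
{
    HANDLE dir = CreateFileW(path, FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                             nullptr, OPEN_EXISTING,
                             FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, nullptr);
    if (dir == INVALID_HANDLE_VALUE)
        return;

    HANDLE port = CreateIoCompletionPort(dir, nullptr, 0, 0);

    alignas(DWORD) BYTE buffer[64 * 1024];
    OVERLAPPED ov{};
    DWORD ignored;
    const DWORD filter = FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE;

    ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE, filter,
                          &ignored, &ov, nullptr);

    for (;;) {
        DWORD bytes;
        ULONG_PTR key;
        LPOVERLAPPED pov;
        // Unlike the synchronous call, this wait can time out (2 s here).
        if (GetQueuedCompletionStatus(port, &bytes, &key, &pov, 2000)) {
            // ... walk the FILE_NOTIFY_INFORMATION records in 'buffer' ...
            ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE, filter,
                                  &ignored, &ov, nullptr);   // re-arm the watch
        } else if (GetLastError() == WAIT_TIMEOUT) {
            // ... chance to check an exit flag, service other work, etc. ...
        } else {
            break;   // real error or handle closed
        }
    }

    CloseHandle(port);
    CloseHandle(dir);
}
```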
You would call ReadDirectoryChangesW such that it returns its results asynchronously if you ever needed the calling thread to not block. A tautology, but the truth.
Candidates for such threads: the UI thread & any thread that is solely responsible for servicing a number of resources (Sockets, any sort of IPC, independent files, etc.).
Not being familiar with the project, I'd guess the CDirectoryChangeWatcher doesn't care if its worker thread blocks. Generally, that's the nature of worker threads.
I tried using ReadDirectoryChangesW synchronously in a worker thread, and guess what: it blocked, so the thread wouldn't exit by itself at program exit.
So if you don't want to use evil things like TerminateThread, you should use asynchronous calls.