libuv has a central event loop and provides asynchronous network I/O, timers, etc. built around it.
The high-level architecture as presented in the docs is: [libuv architecture diagram]
When the event loop blocks waiting for "ready" sockets (using epoll etc.), how does it unblock itself if none of the sockets become ready? It might miss timers that expire in the meantime.
And if it unblocks immediately when none of the sockets are ready, and there are no timers to trigger, doesn't the event loop degenerate into "busy waiting" for sockets to become ready?
uv_run makes sure not to busy-wait by passing an additional timeout parameter to the polling function. On Windows, the implementation that calculates the timeout for the polling call basically looks like this:
int uv__next_timeout(const uv_loop_t* loop) {
  const struct heap_node* heap_node;
  const uv_timer_t* handle;
  uint64_t diff;

  /* If there is no timer, block indefinitely. */
  heap_node = heap_min(timer_heap(loop));
  if (heap_node == NULL)
    return -1;

  /* The nearest timer should have fired already; don't block at all. */
  handle = container_of(heap_node, uv_timer_t, heap_node);
  if (handle->timeout <= loop->time)
    return 0;

  /* The timer fires in the future; compute the timeout until the
     next timer should fire. */
  diff = handle->timeout - loop->time;
  if (diff > INT_MAX)
    diff = INT_MAX;

  return (int) diff;
}
Only if no timers are pending will the loop block until a socket / I/O completion port is ready to be consumed; otherwise it blocks for at most the time until the next timer should fire. heap_min makes sure the nearest timer is always returned, so no timer is missed.
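To see where that return value goes, here is a condensed sketch of the core of uv_run. It is simplified, not the literal libuv source, although the helper names mirror libuv's internals:

int uv_run(uv_loop_t* loop, uv_run_mode mode) {
    (void) mode;                             /* run-mode handling omitted */
    while (uv__loop_alive(loop)) {
        uv__update_time(loop);
        uv__run_timers(loop);                /* fire timers that are already due */

        int timeout = uv__next_timeout(loop);   /* -1: block forever, 0: don't block */
        uv__io_poll(loop, timeout);             /* e.g. epoll_wait(..., timeout) */

        /* pending callbacks, idle/prepare/check handles etc. omitted */
    }
    return 0;
}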
AVFormatContext's interrupt_callback field is described as:
"Custom interrupt callbacks for the I/O layer."
Its type is AVIOInterruptCB, and the comment section explains:
Callback for checking whether to abort blocking functions.
AVERROR_EXIT is returned in this case by the interrupted function. During blocking operations, callback is called with opaque as parameter. If the callback returns 1, the blocking operation will be aborted.
No members can be added to this struct without a major bump, if new elements have been added after this struct in AVFormatContext or AVIOContext.
I have 2 questions:
1. What does the last paragraph mean? Especially "without a major bump"?
2. If I use this along with an RTSP source, when I close the input with avformat_close_input, the "TEARDOWN" message is sent out, but it never reaches the RTSP server.
For 2, here is a quick demo (condensed, error handling omitted):
#include <libavformat/avformat.h>

static volatile int early_exit = 0;

int InterruptCallback(void* ctx) {
    return early_exit ? 1 : 0;
}

int main() {
    AVFormatContext* ctx = avformat_alloc_context();
    ctx->interrupt_callback.callback = InterruptCallback;

    /* error handling omitted for brevity */
    avformat_open_input(&ctx, "rtsp://...", NULL, NULL);
    avformat_find_stream_info(ctx, NULL);

    AVPacket pkt;
    int pkts = 0;
    while (!early_exit) {
        av_read_frame(ctx, &pkt);
        av_packet_unref(&pkt);
        if (pkts++ > 100) early_exit = 1;
    }

    avformat_close_input(&ctx);  /* TEARDOWN is sent here */
    return 0;
}
In case I don't use the interrupt callback at all, TEARDOWN is sent out and it also reaches the RTSP server, so the server can actually tear down the connection. Otherwise the connection is not torn down, and I have to wait until the TCP socket times out.
What is the proper way of using this interrupt callback?
It means that they are not going to add members to this structure (AVIOInterruptCB); if that ever happens, it will be in a major bump (a major version change, e.g. from 4.4 to 5.0).
You need to pass a meaningful parameter to void* ctx: anything you like, so you can check it within the static function. For example, a bool that you set on cancel, so you can interrupt av_read_frame (which will then return AVERROR_EXIT). Usually you pass a class of your decoder context or something similar which also holds all the info you require to decide whether to return 1 to interrupt or 0 to continue the requests. A real example would be that you open a wrong RTSP URL and then want to open another one (the right one), so you need to cancel your previous requests.
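For example, here is a minimal sketch of that pattern. DecoderCtx and the function names are illustrative (they are not an FFmpeg API); the only real pieces are the AVIOInterruptCB fields:

#include <libavformat/avformat.h>

/* Illustrative context type; in practice this is your decoder class/struct. */
typedef struct DecoderCtx {
    volatile int cancel;                  /* set to 1 (e.g. from another thread) to abort */
} DecoderCtx;

static int on_interrupt(void* opaque) {
    const DecoderCtx* dc = (const DecoderCtx*) opaque;
    return dc->cancel ? 1 : 0;            /* 1 aborts the blocking call, 0 continues */
}

static void wire_up(AVFormatContext* fmt, DecoderCtx* dc) {
    fmt->interrupt_callback.callback = on_interrupt;
    fmt->interrupt_callback.opaque = dc;  /* handed back to on_interrupt as 'opaque' */
    /* While dc->cancel is 1, blocking calls such as av_read_frame return
       AVERROR_EXIT. Remember to clear it again before avformat_close_input,
       or the RTSP TEARDOWN may be aborted as described in the question. */
}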
NOTE: I have added the C++ tag to this because a) the code is C++ and b) people using C++ may well have used IO completion ports. So please don't shout.
I am playing with IO completion ports, and have eventually fully understood (and tested, to prove), with help from RbMm, the meaning of the NumberOfConcurrentThreads parameter within CreateIoCompletionPort().
I have the following small program which creates 10 threads all waiting on the completion port. I tell my completion port to only allow 4 threads to be runnable at once (I have four CPUs). I then enqueue 8 packets to the port. My thread function outputs a message if it dequeues a packet with an ID > 4; in order for this message to be output, I have to stop at least one of the four currently running threads, which happens when I enter '1' at the console.
Now this is all fairly simple code. I have one big concern however, and that is that if all of the threads that are processing a completion packet get bogged down, it will mean no more packets can be dequeued and processed. That is what I am simulating with my infinite loop - the fact that no more packets are dequeued until I enter '1' at the console highlights this potential problem!
Would a better solution not be to have my four threads dequeuing packets (or as many threads as CPUs), and then, when one is dequeued, farm the processing of that packet off to a worker thread from a separate pool, thereby removing the risk of all threads in the IOCP getting bogged down so that no more packets can be dequeued?
I ask this as all the examples of IO completion port code I have seen use a method similar to what I show below, not using a separate thread pool which I propose. This is what makes me think that I am missing something because I am outnumbered!
Note: this is a somewhat contrived example, because Windows will allow an additional packet to be dequeued if one of the runnable threads enters a wait state; I show this in my code with a commented out cout call:
The system also allows a thread waiting in GetQueuedCompletionStatus to process a completion packet if another running thread associated with the same I/O completion port enters a wait state for other reasons, for example the SuspendThread function. When the thread in the wait state begins running again, there may be a brief period when the number of active threads exceeds the concurrency value. However, the system quickly reduces this number by not allowing any new active threads until the number of active threads falls below the concurrency value.
But I won't be calling SuspendThread in my thread functions, and I don't know which functions other than cout will cause the thread to enter a wait state, thus I can't predict if one or more of my threads will ever get bogged down! Hence my idea of a thread pool; at least context switching would mean that other packets get a chance to be dequeued!
#define _CRT_SECURE_NO_WARNINGS
#include <windows.h>
#include <thread>
#include <vector>
#include <algorithm>
#include <atomic>
#include <ctime>
#include <iostream>
using namespace std;
int main()
{
HANDLE hCompletionPort1;
if ((hCompletionPort1 = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 4)) == NULL)
{
return -1;
}
vector<thread> vecAllThreads;
atomic_bool bStop(false);
// Fill our vector with 10 threads, each of which waits on our IOCP.
generate_n(back_inserter(vecAllThreads), 10, [hCompletionPort1, &bStop] {
thread t([hCompletionPort1, &bStop]()
{
// Thread body
while (true)
{
DWORD dwBytes = 0;
LPOVERLAPPED pOverlapped = 0;
ULONG_PTR uKey;
if (::GetQueuedCompletionStatus(hCompletionPort1, &dwBytes, &uKey, &pOverlapped, INFINITE) == 1)
{
if (dwBytes == 0 && uKey == 0 && pOverlapped == 0)
break; // Special completion packet; end processing.
//cout << uKey; // EVEN THIS WILL CAUSE A "wait" which causes MORE THAN 4 THREADS TO ENTER!
if (uKey > 4)
cout << "Started processing packet ID > 4!" << endl;
while (!bStop)
; // INFINITE LOOP
}
}
});
return move(t);
}
);
// Queue 8 completion packets to our IOCP...only four will be processed until we set our bool
for (int i = 1; i <= 8; ++i)
{
PostQueuedCompletionStatus(hCompletionPort1, 0, i, new OVERLAPPED);
}
while (!bStop)
{
int nVal;
cout << "Enter 1 to cause current processing threads to end: ";
cin >> nVal;
bStop = (nVal == 1);
}
for (int i = 0; i < 10; ++i) // Tell all 10 threads to stop processing on the IOCP
{
PostQueuedCompletionStatus(hCompletionPort1, 0, 0, 0); // Special packet marking end of IOCP usage
}
for_each(begin(vecAllThreads), end(vecAllThreads), mem_fn(&thread::join));
return 0;
}
EDIT #1
What I mean by "separate thread pool" is something like the following:
class myThread {
public:
void SetTask(LPOVERLAPPED pO) { /* start processing pO*/ }
private:
thread m_thread; // Actual thread object
};
// The threads in this thread pool are not associated with the IOCP in any way whatsoever; they exist
// purely to be handed a completion packet which they then process!
class ThreadPool
{
public:
void Initialise() { /* create 100 worker threads and add them to some internal storage*/}
myThread& GetNextFreeThread() { /* return one of the 100 worker threads we created */ }
} g_threadPool;
The code that each of my four threads associated with the IOCP runs then changes to:
if (::GetQueuedCompletionStatus(hCompletionPort1, &dwBytes, &uKey, &pOverlapped, INFINITE) == 1)
{
if (dwBytes == 0 && uKey == 0 && pOverlapped == 0)
break; // Special completion packet; end processing.
// Pick a new thread from a pool of pre-created threads and assign it the packet to process
myThread& thr = g_threadPool.GetNextFreeThread();
thr.SetTask(pOverlapped);
// Now, this thread can immediately return to the IOCP; it doesn't matter if the
// packet we dequeued would take forever to process; that is happening in the
// separate thread thr *that will not interfere with packets being dequeued from the IOCP!*
}
This way, there is no possible way that I can end up in the situation where no more packets are being dequeued!
It seems there are conflicting opinions on whether a separate thread pool should be used. Clearly, as the sample code I have posted shows, there is potential for packets to stop being dequeued from the IOCP if the processing of a packet does not enter a wait state; granted, the infinite loop is perhaps unrealistic, but it does demonstrate the point.
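For what it's worth, here is a minimal sketch of that hand-off using the thread pool Windows itself provides (via TrySubmitThreadpoolCallback) instead of a hand-rolled pool; ProcessPacket is an assumed application function:

#include <windows.h>

void ProcessPacket(LPOVERLAPPED pOverlapped);   // assumed application function

// Runs on a Windows thread-pool thread; any long-running work lives here,
// off the threads that service the completion port.
VOID CALLBACK ProcessPacketCallback(PTP_CALLBACK_INSTANCE instance, PVOID context)
{
    UNREFERENCED_PARAMETER(instance);
    ProcessPacket(static_cast<LPOVERLAPPED>(context));
}

// Called from the dequeuing loop right after GetQueuedCompletionStatus succeeds.
// It returns immediately, so the dequeuing thread goes straight back to the IOCP.
void HandOff(LPOVERLAPPED pOverlapped)
{
    TrySubmitThreadpoolCallback(ProcessPacketCallback, pOverlapped, NULL);
}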
I use MPI non-blocking communication (MPI_Irecv, MPI_Isend) to monitor the slaves' idle states; the code is like below.
rank 0:
int dest = -1;
while (dest <= 0) {
    int i;
    for (i = 1; i <= slaves_num; i++) {
        printf("slave %d, now is %d \n", i, idle_node[i]);
        if (idle_node[i] == 1) {
            idle_node[i] = 0;
            dest = i;
            break;
        }
    }
    if (dest <= 0) {
        MPI_Irecv(&idle_node[1], 1, MPI_INT, 1, MSG_IDLE, MPI_COMM_WORLD, &request);
        MPI_Irecv(&idle_node[2], 1, MPI_INT, 2, MSG_IDLE, MPI_COMM_WORLD, &request);
        MPI_Irecv(&idle_node[3], 1, MPI_INT, 3, MSG_IDLE, MPI_COMM_WORLD, &request);
        // MPI_Wait(&request, &status);
    }
    usleep(100000);
}
idle_node[dest] = 0; // indicates this slave is busy now
rank 1,2,3:
while (1)
{
    ... // do something
    MPI_Isend(&idle, 1, MPI_INT, 0, MSG_IDLE, MPI_COMM_WORLD, &request);
    MPI_Wait(&request, &status);
}
It works, but I want it to be faster, so I deleted the line:
usleep(100000);
Then rank 0 goes into a dead loop like this:
slave 1, now is 0
slave 2, now is 0
slave 3, now is 0
slave 1, now is 0
slave 2, now is 0
slave 3, now is 0
...
So does this indicate that when I use MPI_Irecv, it just tells MPI that I want to receive a message here (it hasn't received the message yet), and MPI needs more time to receive the actual data? Or is there some other reason?
The use of non-blocking operations has been discussed over and over again here. From the MPI specification (section Nonblocking Communication):
Similarly, a nonblocking receive start call initiates the receive operation, but does not complete it. The call can return before a message is stored into the receive buffer. A separate receive complete call is needed to complete the receive operation and verify that the data has been received into the receive buffer. With suitable hardware, the transfer of data into the receiver memory may proceed concurrently with computations done after the receive was initiated and before it completed.
(the text is quoted verbatim from the standard; the emphasis is mine)
The key sentence is the last one. The standard does not give any guarantee that a non-blocking receive operation will ever complete (or even start) unless MPI_WAIT[ALL|SOME|ANY] or MPI_TEST[ALL|SOME|ANY] was called (with MPI_TEST* setting a value of true for the completion flag).
By default, Open MPI comes as a single-threaded library, and without special hardware acceleration the only way to progress non-blocking operations is to either call periodically into some non-blocking calls (the primary example being MPI_TEST*) or call into a blocking one (the primary example being MPI_WAIT*).
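For example, a minimal sketch of such a polling loop for the request from the question:

int flag = 0;
MPI_Status status;
while (!flag) {
    /* Each MPI_Test call gives the library a chance to progress the receive. */
    MPI_Test(&request, &flag, &status);
    /* ... do other useful work between polls ... */
}
/* Only here is the value in the receive buffer guaranteed to have arrived. */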
Also your code leads to a nasty leak that will sooner or later result in resource exhaustion: you are calling MPI_Irecv multiple times with the same request variable, effectively overwriting its value and losing the reference to the previously started requests. Requests that are not waited upon are never freed and therefore remain in memory.
There is absolutely no need to use non-blocking operations in your case. If I understand the logic correctly, you can achieve what you want with code as simple as:
MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, MSG_IDLE, MPI_COMM_WORLD, &status);
idle_node[status.MPI_SOURCE] = 0;
If you'd like to process more than one worker process at the same time, it is a bit more involved:
MPI_Request reqs[slaves_num];
int indices[slaves_num], num_completed;
for (i = 0; i < slaves_num; i++)
reqs[i] = MPI_REQUEST_NULL;
while (1)
{
// Repost all completed (or never started) receives
for (i = 1; i <= slaves_num; i++)
if (reqs[i-1] == MPI_REQUEST_NULL)
MPI_Irecv(&idle_node[i], 1, MPI_INT, i, MSG_IDLE,
MPI_COMM_WORLD, &reqs[i-1]);
MPI_Waitsome(slaves_num, reqs, &num_completed, indices, MPI_STATUSES_IGNORE);
// Examine num_completed and indices and feed the workers with data
...
}
After the call to MPI_Waitsome there will be one or more completed requests. The exact number will be in num_completed and the indices of the completed requests will be filled in the first num_completed elements of indices[]. The completed requests will be freed and the corresponding elements of reqs[] will be set to MPI_REQUEST_NULL.
Also, there appears to be a common misconception about using non-blocking operations. A non-blocking send can be matched by a blocking receive, and a blocking send can equally be matched by a non-blocking receive. That makes constructs like the following nonsensical:
// Receiver
MPI_Irecv(..., &request);
... do something ...
MPI_Wait(&request, &status);
// Sender
MPI_Isend(..., &request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
MPI_Isend immediately followed by MPI_Wait is equivalent to MPI_Send and the following code is perfectly valid (and easier to understand):
// Receiver
MPI_Irecv(..., &request);
... do something ...
MPI_Wait(&request, &status);
// Sender
MPI_Send(...);
queue_delayed_work is a non-blocking "timer" function in the Linux kernel which will queue `dwork` after `delay` jiffies.
I cannot find any non-blocking WIN API function which does the same.
Is there such a function that I just don't know about? Or how can I implement something like this (a non-blocking "timer" which will queue dwork on a workqueue after at least delay jiffies)?
For Linux user space:
Only two system calls allow setting a timer without simultaneously blocking the calling process on an event:
- timer_settime
- alarm
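A minimal user-space sketch using timer_settime (POSIX timers) with SIGEV_THREAD, so the expiry is delivered on a separate thread without blocking the caller; on older glibc, link with -lrt:

#include <signal.h>
#include <string.h>
#include <time.h>

/* Runs ~100 ms after arming, analogous to the queued dwork. */
static void work(union sigval sv) {
    (void) sv;
    /* ... deferred work ... */
}

void arm_timer(void) {
    timer_t t;
    struct sigevent sev;
    struct itimerspec its;

    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_THREAD;           /* deliver on a new thread, no signal */
    sev.sigev_notify_function = work;
    timer_create(CLOCK_MONOTONIC, &sev, &t);

    memset(&its, 0, sizeof(its));
    its.it_value.tv_nsec = 100 * 1000 * 1000;  /* one-shot, 100 ms */
    timer_settime(t, 0, &its, NULL);           /* returns immediately */
}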
For the Windows kernel:
You can implement this with a KTIMER (Wdm.h):
KeInitializeTimer(PKTIMER Timer)
Timer: pointer to a timer object, for which the caller provides the storage.
KeSetTimer(PKTIMER Timer, LARGE_INTEGER DueTime, **PKDPC Dpc**)
The **Dpc** is queued when the timer expires, which is where the deferred work gets kicked off; a sketch follows below.
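A minimal kernel-mode sketch of that combination; the DPC is the piece that gets your deferred work queued once the timer expires (assumed driver context; IRQL and cleanup details omitted):

#include <wdm.h>

static KTIMER g_Timer;
static KDPC   g_Dpc;

/* Runs at DISPATCH_LEVEL when the timer expires; a DPC must not block,
   so hand the real work to a work item (e.g. IoQueueWorkItem) here. */
VOID DpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2) {
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);
    /* ... queue the deferred work ... */
}

VOID ArmKernelTimer(VOID) {
    LARGE_INTEGER due;
    due.QuadPart = -100LL * 10 * 1000;   /* relative 100 ms, in 100 ns units */
    KeInitializeTimer(&g_Timer);
    KeInitializeDpc(&g_Dpc, DpcRoutine, NULL);
    KeSetTimer(&g_Timer, due, &g_Dpc);   /* returns immediately; non-blocking */
}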
For Windows user space:
You can implement this with Waitable Timer Objects:
BOOL SetWaitableTimer(
HANDLE hTimer, // handle to a timer object
const LARGE_INTEGER *pDueTime, // when timer will become signaled
LONG lPeriod, // periodic timer interval
PTIMERAPCROUTINE pfnCompletionRoutine, // pointer to the completion routine
LPVOID lpArgToCompletionRoutine, // data passed to the completion routine
BOOL fResume // flag for resume state
);
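A minimal usage sketch; QueueWork is an assumed application function, and note that this APC route requires the arming thread to perform alertable waits:

#include <windows.h>

void QueueWork(void* arg);               /* assumed application function */

/* Matches PTIMERAPCROUTINE; runs when the arming thread is next alertable. */
VOID CALLBACK TimerApc(LPVOID lpArg, DWORD dwLow, DWORD dwHigh) {
    UNREFERENCED_PARAMETER(dwLow);
    UNREFERENCED_PARAMETER(dwHigh);
    QueueWork(lpArg);                    /* analogous to the queued dwork */
}

void ArmUserTimer(void* arg) {
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -100LL * 10 * 1000;   /* relative 100 ms, in 100 ns units */
    /* Returns immediately; the APC fires when this thread next performs an
       alertable wait, e.g. SleepEx(INFINITE, TRUE). */
    SetWaitableTimer(hTimer, &due, 0, TimerApc, arg, FALSE);
}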
I am building a Visual C++ WinSock TCP server using BindIoCompletionCallback. It works fine for receiving and sending data, but I can't find a good way to detect timeouts: setsockopt with SO_RCVTIMEO/SO_SNDTIMEO has no effect on non-blocking sockets, and if the peer is not sending any data, the CompletionRoutine is not called at all.
I am thinking about using RegisterWaitForSingleObject with the hEvent field of OVERLAPPED. That might work, but then the CompletionRoutine is not needed at all; am I still using IOCP? Is there a performance concern if I use only RegisterWaitForSingleObject and don't use BindIoCompletionCallback?
Update: Code Sample:
My first try:
bool CServer::Startup() {
SOCKET ServerSocket = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED);
WSAEVENT ServerEvent = WSACreateEvent();
WSAEventSelect(ServerSocket, ServerEvent, FD_ACCEPT);
......
bind(ServerSocket......);
listen(ServerSocket......);
_beginthread(ListeningThread, 128 * 1024, (void*) this);
......
......
}
void __cdecl CServer::ListeningThread( void* param ) // static
{
CServer* server = (CServer*) param;
while (true) {
if (WSAWaitForMultipleEvents(1, &server->ServerEvent, FALSE, 100, FALSE) == WSA_WAIT_EVENT_0) {
WSANETWORKEVENTS events = {};
if (WSAEnumNetworkEvents(server->ServerSocket, server->ServerEvent, &events) != SOCKET_ERROR) {
if ((events.lNetworkEvents & FD_ACCEPT) && (events.iErrorCode[FD_ACCEPT_BIT] == 0)) {
SOCKET socket = accept(server->ServerSocket, NULL, NULL);
if (socket != SOCKET_ERROR) {
BindIoCompletionCallback((HANDLE) socket, CompletionRoutine, 0);
......
}
}
}
}
}
}
VOID CALLBACK CServer::CompletionRoutine( __in DWORD dwErrorCode, __in DWORD dwNumberOfBytesTransfered, __in LPOVERLAPPED lpOverlapped ) // static
{
......
BOOL res = GetOverlappedResult(......, TRUE);
......
}
class CIoOperation {
public:
OVERLAPPED Overlapped;
......
......
};
bool CServer::Receive(SOCKET socket, PBYTE buffer, DWORD length, void* context)
{
    if (socket != INVALID_SOCKET) {
        CIoOperation* io = new CIoOperation();
        WSABUF buf = {length, (PCHAR) buffer};
        DWORD flags = 0;
        if ((WSARecv(socket, &buf, 1, NULL, &flags, &io->Overlapped, NULL) != 0) && (GetLastError() != WSA_IO_PENDING)) {
            delete io;
            return false;
        } else return true;
    }
    return false;
}
As I said, it works fine if the client is actually sending data to me: 'Receive' is not blocking, CompletionRoutine gets called, and the data is received. But here is the gotcha: if the client is not sending any data to me, how can I give up after a timeout?
Since setsockopt with SO_RCVTIMEO/SO_SNDTIMEO won't help here, I thought I should use the hEvent field in the OVERLAPPED structure, which will be signaled when the I/O completes. But a WaitForSingleObject / WSAWaitForMultipleEvents on that would block the Receive call, and I want Receive to always return immediately, so I used RegisterWaitForSingleObject with a WAITORTIMERCALLBACK. That worked: the callback got called either after the timeout or when the I/O completed. But now I have two callbacks for every single I/O operation, the CompletionRoutine and the WaitOrTimerCallback:
If the I/O completes, they are called simultaneously; if the I/O does not complete, the WaitOrTimerCallback is called, then I call CancelIoEx, which causes the CompletionRoutine to be called with an ABORTED error. But there is a race condition here: the I/O may complete right before I cancel it, then ... blah blah. All in all, it's quite complicated.
Then I realized I don't actually need BindIoCompletionCallback and the CompletionRoutine at all and could do everything from the WaitOrTimerCallback. That may work, but here is the interesting question: I wanted to build an IOCP-based WinSock server in the first place, and I thought BindIoCompletionCallback was the easiest way to do that, using the thread pool provided by Windows itself. Now I end up with a server without any IOCP code at all? Is it still IOCP? Or should I forget BindIoCompletionCallback and build my own IOCP thread pool implementation? Why?
What I did was to force the timeout/completion notifications to enter a critical section in the socket object. Once in, the winner can set a socket state variable and perform its action, whatever that might be. If the I/O completion gets in first, the I/O buffer array is processed in the normal way and any timeout is directed to restart by the state-machine. Similarly if the timeout gets in first, the I/O gets CancelIOEx'd and any later queued completion notification is discarded by the state-engine. Because of these possible 'late' notifications, I put released sockets onto a timeout queue and only recycle them onto the socket object pool after five minutes, in a similar way to how the TCP stack itself puts its sockets into 'TIME_WAIT'.
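A minimal sketch of that "winner sets the state" idea; the type and function names here are illustrative, not from the original server:

#include <windows.h>

enum SockState { IO_PENDING_STATE, IO_COMPLETED, IO_TIMED_OUT };

struct Connection {
    CRITICAL_SECTION cs;
    SockState state;
    SOCKET sock;
};

// Called from the I/O completion notification.
void OnIoComplete(Connection* c)
{
    EnterCriticalSection(&c->cs);
    if (c->state == IO_PENDING_STATE)
        c->state = IO_COMPLETED;             // we won: process the buffer array
    // else the timeout won earlier; this (possibly aborted) completion
    // is simply discarded by the state machine.
    LeaveCriticalSection(&c->cs);
}

// Called from the timeout thread's event.
void OnTimeout(Connection* c)
{
    EnterCriticalSection(&c->cs);
    if (c->state == IO_PENDING_STATE) {
        c->state = IO_TIMED_OUT;
        CancelIoEx((HANDLE) c->sock, NULL);  // the late completion is handled above
    }
    LeaveCriticalSection(&c->cs);
}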
To do the timeouts, I have one thread that operates on FIFO delta-queues of timing-out objects, one queue for each timeout limit. The thread waits on an input queue for new objects with a timeout calculated from the smallest timeout-expiry-time of the objects at the head of the queues.
There were only a few timeouts used in the server, so I used queues fixed at compile-time. It would be fairly easy to add new queues or modify the timeout by sending appropriate 'command' messages to the thread input queue, mixed-in with the new sockets, but I didn't get that far.
Upon timeout, the thread called an event in the object which, in the case of a socket, would enter the socket object's CS-protected state machine (there was a TimeoutObject class which the socket descended from, amongst other things).
More:
I wait on the semaphore that controls the timeout thread input queue. If it's signaled, I get the new TimeoutObject from the input queue and add it to the end of whatever timeout queue it asks for. If the semaphore wait times out, I check the items at the heads of the timeout FIFO queues and recalculate their remaining interval by subtracting the current time from their timeout time. If the interval is 0 or negative, the timeout event gets called. While iterating the queues and their heads, I keep in a local the minimum remaining interval before the next timeout. When all the head items in all the queues have a non-zero remaining interval, I go back to waiting on the queue semaphore using the minimum remaining interval I have accumulated.
The event call returns an enumeration. This enumeration instructs the timeout thread how to handle an object whose event it has just fired. One option is to restart the timeout by recalculating the timeout time and pushing the object back onto the end of its timeout queue.
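A compact sketch of that thread; the types are assumptions for illustration (TimeoutObject carries an absolute deadline and the OnTimeout() event reports whether to restart), and the input queue's own locking is omitted for brevity:

#include <windows.h>
#include <deque>
#include <vector>

struct TimeoutObject {
    DWORD deadline;                       // absolute GetTickCount() deadline
    size_t queueId;                       // which fixed timeout queue to use
    virtual bool OnTimeout() = 0;         // true -> restart the timeout
};

struct TimeoutQueue { DWORD limitMs; std::deque<TimeoutObject*> fifo; };

std::vector<TimeoutQueue> queues = { { 5000, {} }, { 30000, {} } };
std::deque<TimeoutObject*> inputQueue;    // fed by other threads (locking omitted)
HANDLE inputSem;                          // released once per queued object

void TimeoutThread()
{
    DWORD wait = INFINITE;
    for (;;) {
        if (WaitForSingleObject(inputSem, wait) == WAIT_OBJECT_0) {
            TimeoutObject* obj = inputQueue.front(); inputQueue.pop_front();
            obj->deadline = GetTickCount() + queues[obj->queueId].limitMs;
            queues[obj->queueId].fifo.push_back(obj);  // FIFO: heads expire first
        }
        wait = INFINITE;                  // recompute the smallest remaining interval
        for (size_t i = 0; i < queues.size(); ++i) {
            std::deque<TimeoutObject*>& q = queues[i].fifo;
            while (!q.empty()) {
                LONG remain = (LONG)(q.front()->deadline - GetTickCount());
                if (remain > 0) {         // head not expired; items behind it expire later
                    if ((DWORD) remain < wait) wait = (DWORD) remain;
                    break;
                }
                TimeoutObject* obj = q.front(); q.pop_front();
                if (obj->OnTimeout()) {   // the event asked for a restart
                    obj->deadline = GetTickCount() + queues[i].limitMs;
                    q.push_back(obj);
                }
            }
        }
    }
}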
I did not use RegisterWaitForSingleObject() because it needed .NET and my Delphi server was all unmanaged, (I wrote my server a long time ago!).
That, and because, IIRC, it has a limit of 64 handles, like WaitForMultipleObjects(). My server had upwards of 23000 clients timing out. I found the single timeout thread and multiple FIFO queues to be more flexible - any old object could be timed out on it as long as it was descended from TimeoutObject - no extra OS calls/handles needed.
The basic idea is that, since you're using asynchronous I/O with the system thread pool, you shouldn't need to check for timeouts via events because you're not blocking any threads.
The recommended way to check for stale connections is to call getsockopt with the SO_CONNECT_TIME option. This returns the number of seconds that the socket has been connected. I know that's a poll operation, but if you're smart about how and when you query this value, it's actually a pretty good mechanism for managing connections. I explain below how this is done.
Typically I'll call getsockopt in two places: one is during my completion callback (so that I have a timestamp for the last time that an I/O completion occurred on that socket), and one is in my accept thread.
The accept thread monitors my socket backlog via WSAEventSelect and the FD_ACCEPT parameter. This means that the accept thread only executes when Windows determines that there are incoming connections that require accepting. At this time I enumerate my accepted sockets and query SO_CONNECT_TIME again for each socket. I subtract the timestamp of the connection's last I/O completion from this value, and if the difference is above a specified threshold my code deems the connection as having timed out.
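A minimal sketch of that query; the staleness check shown in the comment is an assumption pieced together from the description above (lastIoSeconds would be the per-connection timestamp captured in the completion callback):

#include <winsock2.h>
#include <mswsock.h>   /* SO_CONNECT_TIME */

/* Returns the number of seconds the socket has been connected,
   or 0xFFFFFFFF if it is not connected. */
DWORD SecondsConnected(SOCKET s)
{
    DWORD seconds = 0;
    int len = sizeof(seconds);
    if (getsockopt(s, SOL_SOCKET, SO_CONNECT_TIME, (char*) &seconds, &len) != 0)
        return (DWORD) -1;
    return seconds;
}

/* In the accept thread, per accepted socket:
       if (SecondsConnected(s) - lastIoSeconds > thresholdSeconds)
           ... deem the connection timed out and close it ...
*/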