How to detect WinSock TCP timeout with BindIoCompletionCallback - winapi

I am building a Visual C++ WinSock TCP server using BindIoCompletionCallback. It works fine receiving and sending data, but I can't find a good way to detect timeouts: setsockopt with SO_RCVTIMEO/SO_SNDTIMEO has no effect on nonblocking sockets, and if the peer is not sending any data, the CompletionRoutine is never called.
I am thinking about using RegisterWaitForSingleObject with the hEvent field of OVERLAPPED. That might work, but then the CompletionRoutine is not needed at all. Am I still using IOCP? Is there a performance concern if I use only RegisterWaitForSingleObject and don't use BindIoCompletionCallback at all?
Update: Code Sample:
My first try:
bool CServer::Startup() {
    SOCKET ServerSocket = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED);
    WSAEVENT ServerEvent = WSACreateEvent();
    WSAEventSelect(ServerSocket, ServerEvent, FD_ACCEPT);
    ......
    bind(ServerSocket......);
    listen(ServerSocket......);
    _beginthread(ListeningThread, 128 * 1024, (void*) this);
    ......
    ......
}
void __cdecl CServer::ListeningThread(void* param) // static
{
    CServer* server = (CServer*) param;
    while (true) {
        if (WSAWaitForMultipleEvents(1, &server->ServerEvent, FALSE, 100, FALSE) == WSA_WAIT_EVENT_0) {
            WSANETWORKEVENTS events = {};
            if (WSAEnumNetworkEvents(server->ServerSocket, server->ServerEvent, &events) != SOCKET_ERROR) {
                if ((events.lNetworkEvents & FD_ACCEPT) && (events.iErrorCode[FD_ACCEPT_BIT] == 0)) {
                    SOCKET socket = accept(server->ServerSocket, NULL, NULL);
                    if (socket != INVALID_SOCKET) { // accept() returns INVALID_SOCKET on failure, not SOCKET_ERROR
                        BindIoCompletionCallback((HANDLE) socket, CompletionRoutine, 0);
                        ......
                    }
                }
            }
        }
    }
}
VOID CALLBACK CServer::CompletionRoutine(__in DWORD dwErrorCode, __in DWORD dwNumberOfBytesTransfered, __in LPOVERLAPPED lpOverlapped) // static
{
    ......
    BOOL res = GetOverlappedResult(......, TRUE);
    ......
}
class CIoOperation {
public:
    OVERLAPPED Overlapped;
    ......
    ......
};
bool CServer::Receive(SOCKET socket, PBYTE buffer, DWORD length, void* context)
{
    if (socket != INVALID_SOCKET) {
        CIoOperation* io = new CIoOperation();
        WSABUF buf = {length, (PCHAR) buffer};
        DWORD flags = 0;
        if ((WSARecv(socket, &buf, 1, NULL, &flags, &io->Overlapped, NULL) != 0) &&
            (WSAGetLastError() != WSA_IO_PENDING)) {
            delete io;
            return false;
        } else return true;
    }
    return false;
}
As I said, it works fine if the client is actually sending data to me: Receive does not block, CompletionRoutine gets called, and the data is received. But here is the gotcha: if the client is not sending any data to me, how can I give up after a timeout?
Since setsockopt with SO_RCVTIMEO/SO_SNDTIMEO won't help here, I think I should use the hEvent field in the OVERLAPPED structure, which will be signaled when the I/O completes. But a WaitForSingleObject / WSAWaitForMultipleEvents on that would block the Receive call, and I want Receive to always return immediately, so I used RegisterWaitForSingleObject and a WAITORTIMERCALLBACK. It worked: the callback gets called after the timeout, or when the I/O completes. But now I have two callbacks for every single I/O operation, the CompletionRoutine and the WaitOrTimerCallback:
If the I/O completes, they are called simultaneously; if the I/O does not complete, the WaitOrTimerCallback is called, then I call CancelIoEx, which causes the CompletionRoutine to be called with an ABORTED error. But there is a race condition: the I/O may complete right before I cancel it, and then... all in all, it's quite complicated.
Then I realized I don't actually need BindIoCompletionCallback and the CompletionRoutine at all, and could do everything from the WaitOrTimerCallback. That may work, but here is the interesting question: I wanted to build an IOCP-based WinSock server in the first place, and thought BindIoCompletionCallback was the easiest way to do that, using the thread pool provided by Windows itself. Now I end up with a server without any IOCP code at all. Is it still IOCP? Or should I forget BindIoCompletionCallback and build my own IOCP thread pool implementation? And why?
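Here is a minimal sketch of the two-callback arrangement I mean (the Socket member of CIoOperation and the 30-second timeout are illustrative, not part of the code above):

// Sketch only: one manual-reset event per operation, registered with a
// timeout. WaitOrTimerCallback fires when the event is signaled (I/O done)
// or when the timeout elapses; io->Socket is a hypothetical member.
VOID CALLBACK WaitOrTimerCallback(PVOID param, BOOLEAN timedOut)
{
    CIoOperation* io = (CIoOperation*) param;
    if (timedOut) {
        // Race: the I/O may complete between this check and the cancel, in
        // which case CompletionRoutine sees a normal (not aborted) result.
        CancelIoEx((HANDLE) io->Socket, &io->Overlapped);
    }
    // otherwise the I/O completed, and CompletionRoutine runs as well
}

// When issuing the receive:
io->Overlapped.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); // manual-reset
HANDLE hWait = NULL;
RegisterWaitForSingleObject(&hWait, io->Overlapped.hEvent,
                            WaitOrTimerCallback, io,
                            30000, WT_EXECUTEONLYONCE);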

What I did was to force the timeout/completion notifications to enter a critical section in the socket object. Once in, the winner can set a socket state variable and perform its action, whatever that might be. If the I/O completion gets in first, the I/O buffer array is processed in the normal way, and any timeout is directed by the state-machine to restart. Similarly, if the timeout gets in first, the I/O gets CancelIoEx'd and any later queued completion notification is discarded by the state-engine. Because of these possible 'late' notifications, I put released sockets onto a timeout queue and only recycle them onto the socket object pool after five minutes, similar to how the TCP stack itself puts its sockets into TIME_WAIT.
To do the timeouts, I have one thread that operates on FIFO delta-queues of timing-out objects, one queue for each timeout limit. The thread waits on an input queue for new objects with a timeout calculated from the smallest timeout-expiry-time of the objects at the head of the queues.
There were only a few timeouts used in the server, so I used queues fixed at compile-time. It would be fairly easy to add new queues or modify the timeout by sending appropriate 'command' messages to the thread input queue, mixed-in with the new sockets, but I didn't get that far.
Upon timeout, the thread calls an event in the object which, in the case of a socket, enters the socket object's CS-protected state-machine (there was a TimeoutObject class which the socket descended from, amongst other things).
More:
I wait on the semaphore that controls the timeout thread's input queue. If it's signaled, I get the new TimeoutObject from the input queue and add it to the end of whatever timeout queue it asks for. If the semaphore wait times out, I check the items at the heads of the timeout FIFO queues and recalculate their remaining interval by subtracting the current time from their timeout time. If the interval is zero or negative, the timeout event gets called. While iterating the queues and their heads, I keep in a local variable the minimum remaining interval before the next timeout. When all the head items in all the queues have a non-zero remaining interval, I go back to waiting on the queue semaphore using the minimum remaining interval I have accumulated.
The event call returns an enumeration. This enumeration instructs the timeout thread how to handle an object whose event it has just fired. One option is to restart the timeout by recalculating the timeout-time and pushing the object back onto the end of its timeout queue.
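A condensed sketch of that timeout thread (the names and the PopInput helper are illustrative, not the original Delphi code):

#include <windows.h>
#include <deque>
#include <vector>

enum TimeoutAction { eRestartTimeout, eDropObject };

struct TimeoutObject {
    DWORD timeoutAt;                          // absolute tick when due
    virtual TimeoutAction OnTimeout() = 0;    // e.g. enters the socket state-machine
    virtual ~TimeoutObject() {}
};

struct TimeoutQueue {
    DWORD limitMs;                            // fixed at compile time
    std::deque<TimeoutObject*> fifo;          // one FIFO per timeout limit
};

static std::vector<TimeoutQueue> g_queues;
static HANDLE g_inputSemaphore;               // counts objects in the input queue

// Hypothetical helper: dequeues the next new object and the queue it asked for.
TimeoutObject* PopInput(size_t* queueIndex);

DWORD WINAPI TimeoutThread(LPVOID)
{
    for (;;) {
        // Smallest remaining interval among the queue heads, INFINITE if none.
        DWORD wait = INFINITE;
        DWORD now = GetTickCount();
        for (size_t i = 0; i < g_queues.size(); ++i) {
            if (!g_queues[i].fifo.empty()) {
                DWORD due = g_queues[i].fifo.front()->timeoutAt;
                DWORD remaining = (due > now) ? (due - now) : 0;
                if (remaining < wait) wait = remaining;
            }
        }
        if (WaitForSingleObject(g_inputSemaphore, wait) == WAIT_OBJECT_0) {
            size_t qi;
            TimeoutObject* obj = PopInput(&qi);          // new object arrived
            obj->timeoutAt = GetTickCount() + g_queues[qi].limitMs;
            g_queues[qi].fifo.push_back(obj);
        } else {                                         // WAIT_TIMEOUT: fire due events
            now = GetTickCount();
            for (size_t i = 0; i < g_queues.size(); ++i) {
                std::deque<TimeoutObject*>& q = g_queues[i].fifo;
                while (!q.empty() && q.front()->timeoutAt <= now) {
                    TimeoutObject* obj = q.front(); q.pop_front();
                    if (obj->OnTimeout() == eRestartTimeout) {
                        obj->timeoutAt = now + g_queues[i].limitMs;
                        q.push_back(obj);
                    }
                }
            }
        }
    }
}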
I did not use RegisterWaitForSingleObject() because it needed .NET and my Delphi server was all unmanaged, (I wrote my server a long time ago!).
That, and because, IIRC, it has a limit of 64 handles, like WaitForMultipleObjects(). My server had upwards of 23,000 clients timing out. I found the single timeout thread and multiple FIFO queues more flexible: any object could be timed out on it as long as it descended from TimeoutObject, with no extra OS calls or handles needed.

The basic idea is that, since you're using asynchronous I/O with the system thread pool, you shouldn't need to check for timeouts via events because you're not blocking any threads.
The recommended way to check for stale connections is to call getsockopt with the SO_CONNECT_TIME option. This returns the number of seconds that the socket has been connected. I know that's a poll operation, but if you're smart about how and when you query this value, it's actually a pretty good mechanism for managing connections. I explain below how this is done.
Typically I'll call getsockopt in two places: one is during my completion callback (so that I have a timestamp for the last time that an I/O completion occurred on that socket), and one is in my accept thread.
The accept thread monitors my socket backlog via WSAEventSelect and the FD_ACCEPT parameter. This means that the accept thread only executes when Windows determines that there are incoming connections that require accepting. At this time I enumerate my accepted sockets and query SO_CONNECT_TIME again for each socket. I subtract the timestamp of the connection's last I/O completion from this value, and if the difference is above a specified threshold my code deems the connection as having timed out.
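For illustration, the SO_CONNECT_TIME query itself might look like this (a minimal sketch; SO_CONNECT_TIME is declared in mswsock.h):

#include <winsock2.h>
#include <mswsock.h>   // defines SO_CONNECT_TIME

// Returns the number of seconds the socket has been connected,
// or 0xFFFFFFFF if it is not connected (or on error).
DWORD GetConnectTimeSeconds(SOCKET s)
{
    DWORD seconds = 0xFFFFFFFF;
    int len = sizeof(seconds);
    if (getsockopt(s, SOL_SOCKET, SO_CONNECT_TIME, (char*) &seconds, &len) == SOCKET_ERROR)
        return 0xFFFFFFFF;
    return seconds;  // 0xFFFFFFFF also means "not yet connected"
}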

Related

AVFormatContext: interrupt callback proper usage?

AVFormatContext's interrupt_callback field is a
Custom interrupt callbacks for the I/O layer.
Its type is AVIOInterruptCB, and its documentation comment explains:
Callback for checking whether to abort blocking functions.
AVERROR_EXIT is returned in this case by the interrupted function. During blocking operations, callback is called with opaque as parameter. If the callback returns 1, the blocking operation will be aborted.
No members can be added to this struct without a major bump, if new elements have been added after this struct in AVFormatContext or AVIOContext.
I have 2 questions:
What does the last paragraph mean? Especially "without a major bump"?
If I use this along with an RTSP source, when I close the input with avformat_close_input, the "TEARDOWN" message is sent out; however, it never reaches the RTSP server.
For 2: here is a quick pseudo-code for demo:
int pkts = 0;
volatile bool early_exit = false;

int InterruptCallback(void* opaque) {
    return early_exit ? 1 : 0;
}

void main() {
    AVFormatContext* ctx = avformat_alloc_context();
    ctx->interrupt_callback.callback = InterruptCallback;
    avformat_open_input(&ctx, url, NULL, NULL);
    avformat_find_stream_info(ctx, NULL);
    pkts = 0;
    while (!early_exit) {
        av_read_frame(ctx, &pkt);
        if (pkts++ > 100) early_exit = true;
    }
    avformat_close_input(&ctx);
}
If I don't use the interrupt callback at all, TEARDOWN is sent out and reaches the RTSP server, so it can actually tear down the connection. Otherwise the connection is not torn down, and I have to wait until the TCP socket times out.
What is the proper way of using this interrupt callback?
It means that they are not going to change anything in this structure (AVIOInterruptCB); if they ever do, it will be in a major bump (a major version change, e.g. from 4.4 to 5.0).
You need to pass a meaningful parameter as the void* opaque: anything you like, so you can check it within the static function. For example, a bool that you set to cancel, so you interrupt av_read_frame (which will then return AVERROR_EXIT). Usually you pass a class of your decoder context, or something similar, which also holds all the info required to decide whether to return 1 to interrupt or 0 to continue the request properly. A real example: you open a wrong RTSP URL and then want to open another one (the right one), so you need to cancel the previous request.
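A minimal sketch of that pattern (the DecoderContext struct and its field are illustrative):

extern "C" {
#include <libavformat/avformat.h>
}

// Illustrative context passed through interrupt_callback.opaque.
struct DecoderContext {
    volatile int cancelRequested;  // set from another thread to abort I/O
};

static int InterruptCallback(void* opaque)
{
    DecoderContext* dc = static_cast<DecoderContext*>(opaque);
    return dc->cancelRequested ? 1 : 0;  // returning 1 aborts the blocking call
}

// ...
DecoderContext dc = { 0 };
AVFormatContext* ctx = avformat_alloc_context();
ctx->interrupt_callback.callback = InterruptCallback;
ctx->interrupt_callback.opaque   = &dc;
// A blocked avformat_open_input() / av_read_frame() now returns AVERROR_EXIT
// as soon as dc.cancelRequested is set.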

How does the event loop unblock itself from network I/O?

libuv has a central event loop and allows asynchronous network I/O, timers etc around it.
(The docs present the high-level architecture as a diagram, not reproduced here.)
When the event loop blocks for "ready" sockets (using epoll etc.), how does it unblock itself if none of the sockets become ready? It might miss timers that expire in the meantime.
And if it unblocks immediately when no sockets are ready, and there are no timers to trigger, doesn't the event loop degenerate into "busy waiting" for sockets to become ready?
uv_run makes sure not to busy-wait by passing an additional timeout parameter to the polling function. On Windows, the implementation that calculates the timeout for the polling call basically looks like this:
int uv__next_timeout(const uv_loop_t* loop) {
    const struct heap_node* heap_node;
    const uv_timer_t* handle;
    uint64_t diff;

    /* If there is no timer, block indefinitely */
    heap_node = heap_min(timer_heap(loop));
    if (heap_node == NULL)
        return -1;

    /* Timer should have fired already */
    handle = container_of(heap_node, uv_timer_t, heap_node);
    if (handle->timeout <= loop->time)
        return 0;

    /* Timer fires in the future; compute the timeout until the
       next timer should fire */
    diff = handle->timeout - loop->time;
    if (diff > INT_MAX)
        diff = INT_MAX;
    return (int) diff;
}
Only if no timers are pending will the loop block until a socket / I/O completion port is ready to be consumed; otherwise it blocks for at most the time until the next timer should fire. heap_min makes sure to always return the earliest-expiring timer, so none is missed.
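In other words, the next timer deadline becomes the poll timeout. A simplified sketch of the pattern on Windows (not libuv's actual code; iocp, process_completion and run_due_timers are placeholders):

for (;;) {
    int timeout = uv__next_timeout(loop);    // -1 means block indefinitely
    DWORD bytes; ULONG_PTR key; OVERLAPPED* ov = NULL;
    BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov,
                                        timeout < 0 ? INFINITE : (DWORD) timeout);
    if (ok || ov != NULL)
        process_completion(key, bytes, ov);  // an I/O completion arrived
    run_due_timers(loop);                    // fire timers whose deadline passed
}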

How can an interprocess producer consumer message passing mechanism be protected against corruption due to one side crashing?

I have implemented an interprocess message queue in shared memory for one producer and one consumer on Windows.
I am using one named semaphore to count empty slots, one named semaphore to count full slots and one named mutex to protect the data structure in shared memory.
Consider, for example the consumer side. The producer side is similar.
First it waits on the full semaphore, then (1) it takes a message from the queue under the mutex, and then (2) it signals the empty semaphore.
The problem:
If the consumer process crashes between (1) and (2) then effectively the number of slots in the queue that can be used by the process is reduced by one.
Assume that while the consumer is down, the producer can handle the queue getting filled up. (it can either specify a timeout when waiting on the empty semaphore or even specify 0 for no wait).
When the consumer restarts it can continue to read data from the queue. Data will not have been overrun but even after it empties all full slots, the producer will have one less empty slot to use.
After multiple such restarts the queue will have no slots that can be used and no messages can be sent.
Question:
How can this situation be avoided or recovered from?
Here's an outline of one simple approach, using events rather than semaphores:
DWORD increment_offset(DWORD offset)
{
    offset++;
    if (offset == QUEUE_LENGTH*2) offset = 0;
    return offset;
}

void consumer(void)
{
    for (;;)
    {
        DWORD current_write_offset = InterlockedCompareExchange(write_offset, 0, 0);

        if ((current_write_offset != *read_offset + QUEUE_LENGTH) &&
            (current_write_offset + QUEUE_LENGTH != *read_offset))
        {
            // Queue is not full, make sure producer is awake
            SetEvent(signal_producer_event);
        }

        if (*read_offset == current_write_offset)
        {
            // Queue is empty, wait for producer to add a message
            WaitForSingleObject(signal_consumer_event, INFINITE);
            continue;
        }

        MemoryBarrier();
        _ReadWriteBarrier();

        consume((*read_offset) % QUEUE_LENGTH);

        InterlockedExchange(read_offset, increment_offset(*read_offset));
    }
}
void producer(void)
{
    for (;;)
    {
        DWORD current_read_offset = InterlockedCompareExchange(read_offset, 0, 0);

        if (current_read_offset != *write_offset)
        {
            // Queue is not empty, make sure consumer is awake
            SetEvent(signal_consumer_event);
        }

        if ((*write_offset == current_read_offset + QUEUE_LENGTH) ||
            (*write_offset + QUEUE_LENGTH == current_read_offset))
        {
            // Queue is full, wait for consumer to remove a message
            WaitForSingleObject(signal_producer_event, INFINITE);
            continue;
        }

        produce((*write_offset) % QUEUE_LENGTH);

        MemoryBarrier();
        _ReadWriteBarrier();

        InterlockedExchange(write_offset, increment_offset(*write_offset));
    }
}
Notes:
The code as posted compiles (given the appropriate declarations) but I have not otherwise tested it.
read_offset is a pointer to a DWORD in shared memory, indicating which slot should be read from next. Similarly, write_offset points to a DWORD in shared memory indicating which slot should be written to next.
An offset of QUEUE_LENGTH + x refers to the same slot as an offset of x, so as to disambiguate between a full queue and an empty queue. That's why the increment_offset() function checks for QUEUE_LENGTH*2 rather than just QUEUE_LENGTH, and why we take the offset modulo QUEUE_LENGTH when calling the consume() and produce() functions. For example, with QUEUE_LENGTH = 4, read_offset = 1 and write_offset = 5 describe a full queue, whereas read_offset = write_offset = 1 describe an empty one. (One alternative to this approach would be to modify the producer to never use the last available slot, but that wastes a slot.)
signal_consumer_event and signal_producer_event must be automatic-reset events. Note that setting an event that is already set is a no-op.
The consumer only waits on its event if the queue is actually empty, and the producer only waits on its event if the queue is actually full.
When either process is woken, it must recheck the state of the queue, because there is a race condition that can lead to a spurious wakeup.
Because I use interlocked operations, and because only one process at a time is using any particular slot, there is no need for a mutex. I've included memory barriers to ensure that the changes the producer writes to a slot will be seen by the consumer. If you're not comfortable with lock-free code, you'll find that it is trivial to convert the algorithm shown to use a mutex instead.
Note that InterlockedCompareExchange(pointer, 0, 0); looks a bit complicated but is just a thread-safe equivalent to *pointer, i.e., it reads the value at the pointer. Similarly, InterlockedExchange(pointer, value); is the same as *pointer = value; but thread-safe. Depending on the compiler and target architecture, interlocked operations may not be strictly necessary, but the performance impact is negligible so I recommend programming defensively.
Consider the case when the consumer crashes during (or before) the call to the consume() function. When the consumer is restarted, it will pick up the same message again and process it as normal. As far as the producer is concerned, nothing unusual has happened, except that the message took longer than usual to be processed. An analogous situation occurs if the producer crashes while creating a message; when restarted, the first message generated will overwrite the incomplete one, and the consumer won't be affected.
Obviously, if the crash occurs after the call to InterlockedExchange but before the call to SetEvent in either the producer or consumer, and if the queue was previously empty or full respectively, then the other process will not be woken up at that point. However, it will be woken up as soon as the crashed process is restarted. You cannot lose slots in the queue, and the processes cannot deadlock.
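For completeness, a hypothetical setup sketch showing where the shared offsets and the two named auto-reset events could come from (the mapping name and the QUEUE_LENGTH and SLOT_SIZE constants are assumptions):

// Both processes run this; the mapping holds the two offsets followed by the slots.
HANDLE mapping = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                    0, 2 * sizeof(DWORD) + QUEUE_LENGTH * SLOT_SIZE,
                                    L"Local\\demo_queue");
BYTE* base = (BYTE*) MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
DWORD* read_offset  = (DWORD*) base;                   // next slot to read
DWORD* write_offset = (DWORD*) (base + sizeof(DWORD)); // next slot to write
BYTE*  slots        = base + 2 * sizeof(DWORD);        // QUEUE_LENGTH slots

// Auto-reset, named, shared by both sides; setting an already-set event is a no-op.
HANDLE signal_consumer_event = CreateEventW(NULL, FALSE, FALSE, L"Local\\demo_consumer");
HANDLE signal_producer_event = CreateEventW(NULL, FALSE, FALSE, L"Local\\demo_producer");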
I think the simple multiple-producer single-consumer case would look something like this:
void producer(void)
{
    for (;;)
    {
        DWORD current_read_offset = InterlockedCompareExchange(read_offset, 0, 0);

        if (current_read_offset != *write_offset)
        {
            // Queue is not empty, make sure consumer is awake
            SetEvent(signal_consumer_event);
        }

        produce_in_local_cache();

        claim_mutex();

        // read offset may have changed, re-read it
        current_read_offset = InterlockedCompareExchange(read_offset, 0, 0);

        while ((*write_offset == current_read_offset + QUEUE_LENGTH) ||
               (*write_offset + QUEUE_LENGTH == current_read_offset))
        {
            // Queue is full, wait for consumer to remove a message.
            // Release the mutex while waiting so other producers aren't blocked,
            // then re-claim it and re-read the offset before re-checking.
            release_mutex();
            WaitForSingleObject(signal_producer_event, INFINITE);
            claim_mutex();
            current_read_offset = InterlockedCompareExchange(read_offset, 0, 0);
        }

        copy_from_local_cache_to_shared_memory((*write_offset) % QUEUE_LENGTH);

        MemoryBarrier();
        _ReadWriteBarrier();

        InterlockedExchange(write_offset, increment_offset(*write_offset));

        release_mutex();
    }
}
If the active producer crashes, the mutex will be detected as abandoned; you can treat this case as if the mutex were properly released. If the crashed process got as far as incrementing the write offset, the entry it added will be processed as usual; if not, it will be overwritten by whichever producer next claims the mutex. In neither case is any special action needed.

Potential kind of asynchronous (overlapped) I/O implementation in Windows

I would like to discuss potential kinds of asynchronous (overlapped) I/O implementations in Windows, because there are many ways to implement this.
Overlapped I/O in Windows provides the ability to process data asynchronously, i.e., the execution of the operations is nonblocking.
Edit: The purpose of this question is the discussion of improvements to my own implementation on the one hand, and of alternative implementations on the other. Which asynchronous I/O implementation makes the most sense for heavy parallel I/O, and which makes the most sense for a small, mostly single-threaded application?
I will cite MSDN:
When a function is executed synchronously, it does not return until the operation has been completed. This means that the execution of the calling thread can be blocked for an indefinite period while it waits for a time-consuming operation to finish. Functions called for overlapped operation can return immediately, even though the operation has not been completed. This enables a time-consuming I/O operation to be executed in the background while the calling thread is free to perform other tasks. For example, a single thread can perform simultaneous I/O operations on different handles, or even simultaneous read and write operations on the same handle.
I assume that the reader is familiar with the basic concept of overlapped I/O.
Another solution for asynchronous I/O is completion ports, but that shall not be the subject of this discussion. More information on other I/O concepts can be found on MSDN under "About File Management > Input and Output (I/O) > I/O Concepts".
I would like to present my (C/C++) implementation here and share it for discussion.
This is my extended OVERLAPPED struct called IoOperation:
struct IoOperation : OVERLAPPED {
    HANDLE Handle;
    unsigned int Operation;
    char* Buffer;
    unsigned int BufferSize;
};
This struct is created each time an asynchronous operation like ReadFile or WriteFile is called. The Handle field shall be initialized with the corresponding device/file handle. Operation is a user-defined field that tells which operation was called. The Buffer field is a pointer to a previously allocated chunk of memory of the given size BufferSize. Of course, this struct can be expanded at will. It could contain the operation result, the actually transferred size, etc.
The first thing we need is an (auto reset) event handle to be signaled each time an overlapped I/O is completed.
HANDLE hEvent = CreateEvent(0, FALSE, FALSE, 0);
First I decided to use only one event for all asynchronous operations. Then I decided to register this event with a thread-pool thread via RegisterWaitForSingleObject.
HANDLE hWait = 0;
....
RegisterWaitForSingleObject(
    &hWait,
    hEvent,
    WaitOrTimerCallback,
    this,
    INFINITE,
    WT_EXECUTEINPERSISTENTTHREAD | WT_EXECUTELONGFUNCTION
);
So each time this event is signaled, my callback WaitOrTimerCallback is called.
An asynchronous operation is initialized like this:
IoOperation* Io = new IoOperation(hFile, hEvent, IoOperation::Write, Data, DataSize);
if (IoQueue->Enqueue(Io)) {
    WriteFile(hFile, Io->Buffer, Io->BufferSize, 0, Io);
}
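The IoOperation constructor is not shown here; a plausible sketch, assuming the shared event handle is stored in the OVERLAPPED hEvent field so that every completion signals the one registered event:

IoOperation::IoOperation(HANDLE handle, HANDLE event, unsigned int operation,
                         char* buffer, unsigned int bufferSize)
{
    // Zero the OVERLAPPED part, then store the shared completion event.
    ZeroMemory(static_cast<OVERLAPPED*>(this), sizeof(OVERLAPPED));
    hEvent = event;          // signaled by the kernel when this I/O completes
    Handle = handle;
    Operation = operation;
    Buffer = buffer;
    BufferSize = bufferSize;
}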
Each operation is queued and is removed after a successful GetOverlappedResult call in my WaitOrTimerCallback. Instead of calling new all the time here, we could use a memory pool to avoid memory fragmentation and to make allocation faster.
VOID CALLBACK WaitOrTimerCallback(PVOID Parameter, BOOLEAN TimerOrWaitFired) {
    list<IoOperation*>::iterator it = IoQueue.begin();
    while (it != IoQueue.end()) {
        bool IsComplete = true;
        DWORD Transfered = 0;
        IoOperation* Io = *it;
        if (GetOverlappedResult(Io->Handle, Io, &Transfered, FALSE)) {
            if (Io->Operation == IoOperation::Read) {
                // Handle Read, virtual OnRead(), SetEvent, etc.
            } else if (Io->Operation == IoOperation::Write) {
                // Handle Write, virtual OnWrite(), SetEvent, etc.
            } else {
                // ...
            }
        } else {
            if (GetLastError() == ERROR_IO_INCOMPLETE) {
                IsComplete = false;
            } else {
                // Handle Error
            }
        }
        if (IsComplete) {
            delete Io;
            it = IoQueue.erase(it);
        } else {
            it++;
        }
    }
}
Of course, to be thread-safe, we need lock protection (a critical section) when accessing the I/O queue, for example.
There are advantages but also disadvantage of this kind of implementation.
Advantages:
Execution in persistent thread pool thread, no manual thread creation is required
Only one event is required
Each operation is queued in an I/O queue (CancelIoEx can be called later)
Disadvantages:
I/O queue requires extra memory/cpu time
GetOverlappedResult is called for all queued I/Os, even incomplete ones

C++/Win. Not getting FD_CLOSE

I have an asynchronous socket; connect() plus WSAGetLastError() returns WSAEWOULDBLOCK, as expected. So I start a "receiving/reading" thread and subscribe an event to FD_READ and FD_CLOSE.
The story is: connect will eventually fail, since the server is not up and running. My understanding is that my receiving thread should get FD_CLOSE soon, and I then need to follow up with cleanup.
It does not happen. How soon should I receive FD_CLOSE? Is this the proper approach? Is there any other way to detect that connect() failed? Should I ever receive FD_CLOSE if the socket isn't connected?
I start my receiving thread and subscribe the event after a successful call to DoConnect(), and I am afraid that a race condition prevents me from getting FD_CLOSE.
Here is some code:
int RecvSocketThread::WaitForData()
{
    int retVal = 0;
    while (!retVal)
    {
        // sockets can be added to the pool by other threads.
        // validate that all of them in the pool are connected
        // before doing any reading on them
        retVal = DoWaitForData();
    }
    return retVal;
}
int RecvSocketThread::DoWaitForData()
{
    // before waiting for incoming data, check if all sockets are connected
    WaitForPendingConnection_DoForAllSocketsInThePool();

    // other routine to read (FD_READ) or react to FD_CLOSE
    // create array of events (one per socket) and wait
}
void RecvSocketThread::WaitForPendingConnection_DoForAllSocketsInThePool()
{
    // create an array of events associated with pending-connect sockets
    HANDLE* EventArray = new HANDLE[m_RecvSocketInfoPool.size()];
    int counter = 0;

    // add those events whose associated socket is still not connected
    // and wait for FD_WRITE and FD_CLOSE. At the end of this function
    // don't forget to switch them to FD_READ and FD_CLOSE
    auto it = m_RecvSocketInfoPool.begin();
    while (it != m_RecvSocketInfoPool.end())
    {
        RecvSocketInfo* recvSocketInfo = it->second;
        if (!IsEventSet(recvSocketInfo->m_Connected, &retVal2))
        {
            ::WSAEventSelect(recvSocketInfo->m_WorkerSocket, recvSocketInfo->m_Event, FD_WRITE | FD_CLOSE);
            EventArray[counter++] = recvSocketInfo->m_Event;
        }
        ++it;
    }

    if (counter)
    {
        DWORD indexSignaled = WaitForMultipleObjects(counter, EventArray, WaitAtLeastOneEvent, INFINITE);

        // no matter what comes next, the Wait doesn't return for a socket that failed to connect
        if (WAIT_OBJECT_0 <= indexSignaled &&
            indexSignaled < (WAIT_OBJECT_0 + counter))
        {
            it = m_RecvSocketInfoPool.begin();
            while (it != m_RecvSocketInfoPool.end())
            {
                RecvSocketInfo* recvSocketInfo = it->second;
                if (IsEventSet(recvSocketInfo->m_Event, NULL))
                {
                    WSANETWORKEVENTS networkEvents = {};
                    rc = WSAEnumNetworkEvents(recvSocketInfo->m_WorkerSocket,
                                              recvSocketInfo->m_Event, &networkEvents);

                    // check recvSocketInfo->m_Event using WSAEnumNetworkEvents
                    // for FD_CLOSE using FD_CLOSE_BIT
                    if ((networkEvents.lNetworkEvents & FD_CLOSE))
                    {
                        recvSocketInfo->m_FD_CLOSE_Recieved = 1;
                        *retVal = networkEvents.iErrorCode[FD_CLOSE_BIT];
                    }

                    if ((networkEvents.lNetworkEvents & FD_WRITE))
                    {
                        WSASetEvent(recvSocketInfo->m_Connected);
                        *retVal = networkEvents.iErrorCode[FD_WRITE_BIT];
                    }
                }
                ++it;
            }
        }

        // if error - DoClean; if FD_WRITE (socket is writable), check m_Connected
        // before doing any sending
    }

    delete[] EventArray;
}
You will not receive an FD_CLOSE notification if connect() fails. You must subscribe to FD_CONNECT to detect that. This is clearly stated in the connect() documentation:
With a nonblocking socket, the connection attempt cannot be completed immediately. In this case, connect will return SOCKET_ERROR, and WSAGetLastError will return WSAEWOULDBLOCK. In this case, there are three possible scenarios:
• Use the select function to determine the completion of the connection request by checking to see if the socket is writeable.
• If the application is using WSAAsyncSelect to indicate interest in connection events, then the application will receive an FD_CONNECT notification indicating that the connect operation is complete (successfully or not).
• If the application is using WSAEventSelect to indicate interest in connection events, then the associated event object will be signaled indicating that the connect operation is complete (successfully or not).
With WSAAsyncSelect, the result code of connect() will be in the notification's HIWORD(lParam) value when LOWORD(lParam) is FD_CONNECT; with WSAEventSelect, it is in the iErrorCode[FD_CONNECT_BIT] value reported by WSAEnumNetworkEvents. If the result code is 0, connect() was successful; otherwise it is a WinSock error code.
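For example, a minimal WSAEventSelect-based sketch (CONNECT_TIMEOUT_MS is an assumed constant):

WSAEVENT ev = WSACreateEvent();
WSAEventSelect(s, ev, FD_CONNECT | FD_READ | FD_CLOSE);

if (connect(s, (sockaddr*) &addr, sizeof(addr)) == SOCKET_ERROR &&
    WSAGetLastError() != WSAEWOULDBLOCK) {
    // immediate failure
}

if (WSAWaitForMultipleEvents(1, &ev, FALSE, CONNECT_TIMEOUT_MS, FALSE) == WSA_WAIT_EVENT_0) {
    WSANETWORKEVENTS ne = {};
    WSAEnumNetworkEvents(s, ev, &ne);
    if (ne.lNetworkEvents & FD_CONNECT) {
        int err = ne.iErrorCode[FD_CONNECT_BIT]; // 0 = connected; otherwise a
                                                 // WinSock error, e.g. WSAECONNREFUSED
                                                 // when the server is not running
    }
}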
If you call connect() and get a blocking notification, you have to write more code to monitor for connect() completion (success or failure) via one of the three methods described in the documentation quoted above.
I think I need to start the receiving thread once the socket handle is created, but before connect is called. It is too late to create it after connect has been called on an asynchronous socket.
For a synchronous socket, those two calls, socket() and connect(), were just two consecutive lines. That does not work for non-blocking sockets.
In this case, at the beginning of the receiving thread, I need to check for FD_CONNECT and/or FD_WRITE in order to be informed of the connect attempt's status.