Multithreaded IOCP Client Issue - client

I am writing a multithreaded client that uses an IO Completion Port.
I create and connect the socket that has the WSA_FLAG_OVERLAPPED attribute set.
if ((m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
{
    throw std::exception("Failed to create socket.");
}
if (WSAConnectByName(m_socket, L"server.com", L"80", &localAddressLength, reinterpret_cast<sockaddr*>(&localAddress), &remoteAddressLength, &remoteAddress, NULL, NULL) == FALSE)
{
    throw std::exception("Failed to connect.");
}
I associate the IO Completion Port with the socket.
if ((m_hIOCP = CreateIoCompletionPort(reinterpret_cast<HANDLE>(m_socket), m_hIOCP, NULL, 8)) == NULL)
{
    throw std::exception("Failed to create IOCP object.");
}
All appears to go well until I try to send some data over the socket.
SocketData* socketData = new SocketData;
socketData->hEvent = 0;
DWORD bytesSent = 0;
if (WSASend(m_socket, socketData->SetBuffer(socketData->GenerateLoginRequestHeader()), 1, &bytesSent, NULL, reinterpret_cast<OVERLAPPED*>(socketData), NULL) == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
{
    throw std::exception("Failed to send data.");
}
Instead of returning SOCKET_ERROR with the last error set to WSA_IO_PENDING, WSASend returns 0 (success) immediately.
I need the I/O to pend and its completion to be handled in my thread function, which is also my worker thread.
unsigned int __stdcall MyClass::WorkerThread(void* lpThis)
{
}
I've done this before, but I don't know what is going wrong in this case. I'd greatly appreciate any help in fixing this problem.

It's not a problem unless you make it so.
As long as you're not calling SetFileCompletionNotificationModes() with the flag that skips completion port processing on success, then even if WSASend (or WSARecv, or whatever) returns success immediately, an I/O completion packet is still queued to the IOCP, exactly as if WSA_IO_PENDING had been returned. So you need no special handling for the non-error return case.
See http://support.microsoft.com/default.aspx?scid=kb;en-us;Q192800 for details.
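In other words, just treat a synchronous success return the same as WSA_IO_PENDING and handle the result in the worker thread. A minimal sketch of such a worker loop, assuming the per-operation data is the OVERLAPPED-derived SocketData from the question and that m_hIOCP is reachable from the thread:
unsigned int __stdcall MyClass::WorkerThread(void* lpThis)
{
    MyClass* pThis = static_cast<MyClass*>(lpThis);
    for (;;)
    {
        DWORD bytesTransferred = 0;
        ULONG_PTR completionKey = 0;
        OVERLAPPED* pOverlapped = nullptr;
        BOOL ok = GetQueuedCompletionStatus(pThis->m_hIOCP, &bytesTransferred, &completionKey, &pOverlapped, INFINITE);
        if (pOverlapped == nullptr)
        {
            break; // no packet dequeued (e.g. the IOCP handle was closed) - shut the worker down
        }
        SocketData* socketData = reinterpret_cast<SocketData*>(pOverlapped);
        if (!ok || bytesTransferred == 0)
        {
            delete socketData; // the operation failed, or the connection was closed - release the per-operation data
            continue;
        }
        // ... handle the completed send/receive described by socketData here ...
    }
    return 0;
}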

First of all, break the call up into clearer logic:
int nRet = WSASend(m_socket, socketData->SetBuffer(socketData->GenerateLoginRequestHeader()), 1, NULL, NULL, reinterpret_cast<OVERLAPPED*>(socketData), NULL);
if (nRet == SOCKET_ERROR)
{
    if (WSAGetLastError() == WSA_IO_PENDING)
        nRet = 0; // ok
    else
        throw std::exception("Failed to send data."); // failed
}
Also, as you can see in my code, you should NOT pass the "&bytesSent" parameter; per the WSASend documentation:
Use NULL for this parameter if the
lpOverlapped parameter is not NULL to
avoid potentially erroneous results.
Besides that your call to WSASend() looks fine.

Related

minifilter send message to r3

I'm writing a minifilter that needs to notify the r3 (user-mode) application to pop up a message box in some cases. I use FltSendMessage in the minifilter and FilterGetMessage in r3. In the r3 application, I wrote:
while (INVALID_HANDLE_VALUE == s_portWorker.m_clientPort)
{
    hResult = FilterConnectCommunicationPort(SERVER_PORTNAME_POPUP, 0, NULL, 0, NULL, &s_portWorker.m_clientPort);
    if (IS_ERROR(hResult)) {
        Sleep(1000);
    }
    while (true)
    {
        ZeroMemory(&getStruct, sizeof(GET_STRUCT));
        hResult = FilterGetMessage(s_portWorker.m_clientPort, (PFILTER_MESSAGE_HEADER)&getStruct, sizeof(GET_STRUCT), NULL);
    }
}
It works fine. But when I stop my minifilter, I call FltCloseCommunicationPort() in the driver's unload routine. The port is closed, but the connection is still there, so my r3 process blocks on FilterGetMessage and never returns.
I want to stop waiting for the message when the port closes and then try to reconnect to my minifilter. What should I do? Since the FilterGetMessage() routine doesn't support a timeout mechanism, do I have to create an event to notify r3 when the filter stops?
You can implement a timeout mechanism by using the lpOverlapped parameter.
HANDLE hWait = CreateEvent(NULL, TRUE, FALSE, NULL);
OVERLAPPED op = {0};
op.hEvent = hWait;
HRESULT hResult = FilterGetMessage(s_portWorker.m_clientPort, (PFILTER_MESSAGE_HEADER)&getStruct, sizeof(GET_STRUCT), &op);
if (hResult == HRESULT_FROM_WIN32(ERROR_IO_PENDING))
{
    HANDLE phHandles[2] = { hWait, g_hTerm };
    // bWaitAll = FALSE: wake up as soon as either the message event or the termination event is signaled
    WaitForMultipleObjects(2, phHandles, FALSE, TIME_OUT_VALUE);
}
And you can stop listening by calling SetEvent(g_hTerm).
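A minimal sketch of how the wait result could then be handled, assuming g_hTerm is a manual-reset termination event created at startup and TIME_OUT_VALUE is your chosen wait interval (whether CancelIoEx works on the filter port handle is an assumption here):
DWORD wait = WaitForMultipleObjects(2, phHandles, FALSE, TIME_OUT_VALUE);
if (wait == WAIT_OBJECT_0)           // hWait: FilterGetMessage completed
{
    DWORD returned = 0;
    if (GetOverlappedResult(s_portWorker.m_clientPort, &op, &returned, FALSE))
    {
        // process getStruct here
    }
}
else if (wait == WAIT_OBJECT_0 + 1)  // g_hTerm: the filter is stopping
{
    CancelIoEx(s_portWorker.m_clientPort, &op); // assumption: cancel the pending request before closing the port handle
    // close the port handle and try to reconnect later
}
else                                 // WAIT_TIMEOUT: no message yet, loop around and wait again
{
}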

Understanding DirectoryWatcher

I've been trying to use and understand the DirectoryWatcher class from Microsoft's Cloud Mirror sample. It uses ReadDirectoryChangesW to monitor changes to a directory. I don't think it's reporting all changes, to be honest. In any event, I had a question about the key part of the code, which is as follows:
concurrency::task<void> DirectoryWatcher::ReadChangesAsync()
{
    auto token = _cancellationTokenSource.get_token();
    return concurrency::create_task([this, token]
    {
        while (true)
        {
            DWORD returned;
            winrt::check_bool(ReadDirectoryChangesW(
                _dir.get(),
                _notify.get(),
                c_bufferSize,
                TRUE,
                FILE_NOTIFY_CHANGE_ATTRIBUTES,
                &returned,
                &_overlapped,
                nullptr));
            DWORD transferred;
            if (GetOverlappedResultEx(_dir.get(), &_overlapped, &transferred, 1000, FALSE))
            {
                std::list<std::wstring> result;
                FILE_NOTIFY_INFORMATION* next = _notify.get();
                while (next != nullptr)
                {
                    std::wstring fullPath(_path);
                    fullPath.append(L"\\");
                    fullPath.append(std::wstring_view(next->FileName, next->FileNameLength / sizeof(wchar_t)));
                    result.push_back(fullPath);
                    if (next->NextEntryOffset)
                    {
                        next = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(reinterpret_cast<char*>(next) + next->NextEntryOffset);
                    }
                    else
                    {
                        next = nullptr;
                    }
                }
                _callback(result);
            }
            else if (GetLastError() != WAIT_TIMEOUT)
            {
                throw winrt::hresult_error(HRESULT_FROM_WIN32(GetLastError()));
            }
            else if (token.is_canceled())
            {
                wprintf(L"watcher cancel received\n");
                concurrency::cancel_current_task();
                return;
            }
        }
    }, token);
}
After reviewing an answer to another question, here's what I don't understand about the code above: isn't the code potentially re-calling ReadDirectoryChangesW before the prior call has returned a result? Or is this code indeed correct? Thanks for any input.
Yes, I seem to have confirmed in my testing that there should be another while loop there around the call to GetOverlappedResultEx, similar to the sample code provided in that other answer. I think the notifications are firing properly with it.
Shouldn't there also be a call to CancelIo in there, too? Or is that not necessary for some reason?
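For reference, here is a minimal sketch of the restructuring described above (an illustration of the idea only, not the sample's actual code; names follow the sample): the overlapped read is issued once, and GetOverlappedResultEx is polled in an inner loop, so ReadDirectoryChangesW is only re-issued after the previous request has completed or been cancelled.
while (true)
{
    DWORD returned;
    winrt::check_bool(ReadDirectoryChangesW(
        _dir.get(), _notify.get(), c_bufferSize, TRUE,
        FILE_NOTIFY_CHANGE_ATTRIBUTES, &returned, &_overlapped, nullptr));

    // Inner loop: wait for THIS request to finish before issuing another one.
    DWORD transferred = 0;
    for (;;)
    {
        if (GetOverlappedResultEx(_dir.get(), &_overlapped, &transferred, 1000, FALSE))
        {
            // ... walk the FILE_NOTIFY_INFORMATION list and invoke _callback, as above ...
            break;                                // completed: re-issue ReadDirectoryChangesW
        }
        DWORD error = GetLastError();
        if (error != WAIT_TIMEOUT)
        {
            throw winrt::hresult_error(HRESULT_FROM_WIN32(error));
        }
        if (token.is_canceled())
        {
            CancelIoEx(_dir.get(), &_overlapped); // abandon the outstanding read
            GetOverlappedResult(_dir.get(), &_overlapped, &transferred, TRUE); // wait for the cancellation to finish
            concurrency::cancel_current_task();
        }
    }
}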

Blue screen when rewriting packets at DATAGRAM_DATA layer in WFP

I've been trying to modify outgoing DNS packets via the DATAGRAM_DATA layer in WFP, but I get blue screen errors when rewriting the destination IP in the outgoing packet. What am I doing wrong?
I admit I found the parameters for FwpsInjectTransportSendAsync a bit confusing, and I was unsure exactly what to put in for the sendParams argument, though I think what I have looks right.
RtlIpv4StringToAddressExW(
    L"1.1.1.1", // hard-coding the new (rewritten) dns server for now
    FALSE,
    &sin4.sin_addr,
    &sin4.sin_port);
RtlIpv4StringToAddressExW(
    L"8.8.8.8", // hard-coding the original dns server for now
    FALSE,
    &origSin4.sin_addr,
    &origSin4.sin_port);
if ((Direction == FWP_DIRECTION_OUTBOUND) && (PacketInjectionState == FWPS_PACKET_NOT_INJECTED) && (RemotePort == 53) && (RemoteAddress == origSin4.sin_addr.S_un.S_addr))
{
    UINT32 IpHeaderSize = inMetaValues->ipHeaderSize;
    UINT32 TransportHeaderSize = inMetaValues->transportHeaderSize;
    UINT64 endpointHandle = inMetaValues->transportEndpointHandle;
    PNET_BUFFER NetBuffer = NET_BUFFER_LIST_FIRST_NB((PNET_BUFFER_LIST)layerData);
    NdisRetreatNetBufferDataStart(NetBuffer, IpHeaderSize + TransportHeaderSize, 0, NULL);
    PNET_BUFFER_LIST NetBufferList = NULL;
    NTSTATUS Status = FwpsAllocateCloneNetBufferList(layerData, NULL, NULL, 0, &NetBufferList);
    if (!NT_SUCCESS(Status))
    {
        return;
    }
    NdisAdvanceNetBufferDataStart(NetBuffer, IpHeaderSize + TransportHeaderSize, FALSE, NULL);
    if (!NetBufferList)
    {
        return;
    }
    NetBuffer = NET_BUFFER_LIST_FIRST_NB(NetBufferList);
    PIPV4_HEADER IpHeader = NdisGetDataBuffer(NetBuffer, sizeof(IPV4_HEADER), NULL, 1, 0);
    // Rewriting the dest ip
    IpHeader->DestinationAddress = sin4.sin_addr.S_un.S_addr;
    // Updating the IP checksum
    UpdateIpv4HeaderChecksum(IpHeader, sizeof(IPV4_HEADER));
    // not 100% sure the sendParams argument is setup correctly, the docs are slightly unclear
    FWPS_TRANSPORT_SEND_PARAMS sendParams = {
        .remoteAddress = (UCHAR*)IpHeader->DestinationAddress,
        .remoteScopeId = inMetaValues->remoteScopeId,
        .controlData = inMetaValues->controlData,
        .controlDataLength = inMetaValues->controlDataLength,
        .headerIncludeHeader = inMetaValues->headerIncludeHeader,
        .headerIncludeHeaderLength = inMetaValues->headerIncludeHeaderLength
    };
    Status = FwpsInjectTransportSendAsync(g_InjectionHandle, NULL, endpointHandle, 0, &sendParams, AF_INET, inMetaValues->compartmentId, NetBufferList, DriverDatagramDataInjectComplete, NULL);
    if (!NT_SUCCESS(Status))
    {
        FwpsFreeCloneNetBufferList(NetBufferList, 0);
    }
    classifyOut->actionType = FWP_ACTION_BLOCK;
    classifyOut->rights &= ~FWPS_RIGHT_ACTION_WRITE;
    classifyOut->flags |= FWPS_CLASSIFY_OUT_FLAG_ABSORB;
}
Two things stand out to me, both in the sendParams.
First, remoteAddress is incorrect. It needs to be a pointer to the address, so it should be (UCHAR*)&IpHeader->DestinationAddress.
Second, FwpsInjectTransportSendAsync() is asynchronous, so any parameters you pass to it need to stay valid until it completes, which may be after your calling function returns. Typically you allocate a context structure that contains sendParams and deep copies of the relevant members (remoteAddress and controlData). You pass this as the context to the completion routine, where you free it.
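A minimal sketch of that pattern under those assumptions (INJECT_CONTEXT and the pool tag are made-up names, error handling is trimmed, and the headerIncludeHeader fields are omitted for brevity; copy them the same way if your traffic needs them):
typedef struct _INJECT_CONTEXT
{
    FWPS_TRANSPORT_SEND_PARAMS sendParams;
    UINT32 remoteAddress;   // storage that sendParams.remoteAddress points into
    void* controlData;      // deep copy of inMetaValues->controlData, if present
} INJECT_CONTEXT;

INJECT_CONTEXT* ctx = (INJECT_CONTEXT*)ExAllocatePoolWithTag(NonPagedPool, sizeof(INJECT_CONTEXT), 'xtcD');
if (!ctx)
{
    FwpsFreeCloneNetBufferList(NetBufferList, 0);
    return;
}
RtlZeroMemory(ctx, sizeof(*ctx));
ctx->remoteAddress = IpHeader->DestinationAddress;
ctx->sendParams.remoteAddress = (UCHAR*)&ctx->remoteAddress;   // a pointer to the address, not the value itself
ctx->sendParams.remoteScopeId = inMetaValues->remoteScopeId;
if (inMetaValues->controlData != NULL && inMetaValues->controlDataLength != 0)
{
    ctx->controlData = ExAllocatePoolWithTag(NonPagedPool, inMetaValues->controlDataLength, 'xtcD');
    if (ctx->controlData != NULL)
    {
        RtlCopyMemory(ctx->controlData, inMetaValues->controlData, inMetaValues->controlDataLength);
        ctx->sendParams.controlData = ctx->controlData;
        ctx->sendParams.controlDataLength = inMetaValues->controlDataLength;
    }
}
Status = FwpsInjectTransportSendAsync(g_InjectionHandle, NULL, endpointHandle, 0, &ctx->sendParams, AF_INET, inMetaValues->compartmentId, NetBufferList, DriverDatagramDataInjectComplete, ctx);
if (!NT_SUCCESS(Status))
{
    FwpsFreeCloneNetBufferList(NetBufferList, 0);
    if (ctx->controlData != NULL) ExFreePoolWithTag(ctx->controlData, 'xtcD');
    ExFreePoolWithTag(ctx, 'xtcD');
}

// Completion routine: free the clone and the context once the stack is done with the packet.
VOID NTAPI DriverDatagramDataInjectComplete(void* context, NET_BUFFER_LIST* netBufferList, BOOLEAN dispatchLevel)
{
    UNREFERENCED_PARAMETER(dispatchLevel);
    INJECT_CONTEXT* ctx = (INJECT_CONTEXT*)context;
    FwpsFreeCloneNetBufferList(netBufferList, 0);
    if (ctx != NULL)
    {
        if (ctx->controlData != NULL) ExFreePoolWithTag(ctx->controlData, 'xtcD');
        ExFreePoolWithTag(ctx, 'xtcD');
    }
}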

Aging values in a queue: Best use of Windows timers?

I have an std::set that contains unique values. I have an std::queue that holds the same values
in order to age the values in std::set.
I'd like to use a timer to determine when to pop a value from the queue and then erase the value from the set.
The timer is created/started every time data is added to an empty set/queue.
If data is added to a non-empty set/queue, no change is made to the timer.
The timer would fire every X milliseconds to execute a function.
The function would pop a value from the queue then erase that value from the set.
If the set/queue is now empty the timer would stop.
If the set/queue is not empty, no change is made to the timer.
This program runs in Windows 10.
Does this approach make sense? Is there a better/more efficient/simpler way to age the data?
I've read the docs on Using Timer Queues so I see how the queue and the timers are created and destroyed. What I don't see is a recommendation for starting/stopping timers.
Should I be creating a new TimerQueueTimer that waits for X milliseconds once, runs the function, and then creates a new TimerQueueTimer if the set/queue is not empty?
Should I instead create a single TimerQueueTimer that runs periodically every X milliseconds, but delete it once the set/queue is empty?
Is there a 3rd technique I should use instead?
Here's my example code.
using unsignedIntSet = std::set<std::uint32_t>;
using unsignedIntQ = std::queue<std::uint32_t>;
unsignedIntQ agingQ;
unsignedIntSet agingSet;
HANDLE gDoneEvent = NULL;
HANDLE hTimer = NULL;
HANDLE hTimerQueue = NULL;
VOID CALLBACK ageTimer(PVOID lpParam, BOOLEAN TimerOrWaitFired)
{
    if (!agingQ.empty())
    {
        auto c = agingQ.front();
        agingSet.erase(c);
        agingQ.pop();
        if (!agingQ.empty())
        {
            // rerun CreateTimerQueueTimer() here?
        }
    }
    SetEvent(gDoneEvent);
}
int createTimerForAgingQ()
{
    // create timer if it doesn't already exist
    if (gDoneEvent == NULL)
    {
        gDoneEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
        if (gDoneEvent == NULL)
        {
            // these are Win32 calls, so GetLastError(), not WSAGetLastError()
            std::cerr << "CreateEvent() error: " << GetLastError() << std::endl;
            return -1;
        }
        hTimerQueue = CreateTimerQueue();
        if (hTimerQueue == NULL)
        {
            std::cerr << "CreateTimerQueue() error: " << GetLastError() << std::endl;
            return -1;
        }
        if (!CreateTimerQueueTimer(&hTimer, hTimerQueue, (WAITORTIMERCALLBACK)ageTimer, NULL, 500, 0, WT_EXECUTEONLYONCE))
        {
            std::cerr << "CreateTimerQueueTimer() error: " << GetLastError() << std::endl;
            return -1;
        }
    }
    return 0;
}
void addUnique(unsigned char* buffer, int bufferLen)
{
    // hash value
    auto h = hash(buffer, bufferLen);
    // test insert into set
    auto setResult = agingSet.emplace(h);
    if (setResult.second)
    {
        // enqueue into historyQ
        agingQ.emplace(h);
        if (!gDoneEvent) createTimerForAgingQ();
    }
}
Research shows that CreateTimerQueue/CreateTimerQueueTimer may not be the way to go; the documentation steers new code toward the thread pool API instead.
Use a threadpool timer (CreateThreadpoolTimer/SetThreadpoolTimer).
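A minimal sketch of the threadpool-timer variant, assuming the same agingQ/agingSet globals as in the question (synchronization around the containers is omitted here, but it is required since the callback runs on a thread-pool thread):
PTP_TIMER gAgingTimer = NULL;

VOID CALLBACK AgingTimerCallback(PTP_CALLBACK_INSTANCE instance, PVOID context, PTP_TIMER timer)
{
    UNREFERENCED_PARAMETER(instance);
    UNREFERENCED_PARAMETER(context);
    if (!agingQ.empty())
    {
        auto c = agingQ.front();
        agingQ.pop();
        agingSet.erase(c);
    }
    if (agingQ.empty())
    {
        // Stop the periodic timer; it gets re-armed the next time data arrives.
        SetThreadpoolTimer(timer, NULL, 0, 0);
    }
}

void startAgingTimerIfNeeded()
{
    if (!gAgingTimer)
    {
        gAgingTimer = CreateThreadpoolTimer(AgingTimerCallback, NULL, NULL);
    }
    // Fire 500 ms from now and every 500 ms thereafter.
    ULARGE_INTEGER due;
    due.QuadPart = (ULONGLONG)(-5000000LL); // 100-ns units; negative means relative time
    FILETIME ft;
    ft.dwLowDateTime = due.LowPart;
    ft.dwHighDateTime = due.HighPart;
    SetThreadpoolTimer(gAgingTimer, &ft, 500, 0);
}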

Poco c++ Websocket server connection reset by peer

I am writing a kind of chat server app where a message received from one websocket client is sent out to all other websocket clients. To do this, I keep the connected clients in a list. When a client disconnects, I need to remove it from the list (so that future "sends" do not fail).
However, sometimes when a client disconnects, the server just gets a "connection reset by peer" exception, and the code does not get a chance to remove the client from the list. Is there a way to guarantee a "nice" notification that the connection has been reset?
My code is:
void WsRequestHandler::handleRequest(HTTPServerRequest &req, HTTPServerResponse &resp)
{
    int n;
    Poco::Timespan timeOut(5, 0);
    try
    {
        req.set("Connection", "Upgrade"); // knock out any extra tokens firefox may send such as "keep-alive"
        ws = new WebSocket(req, resp);
        ws->setKeepAlive(false);
        connectedSockets->push_back(this);
        do
        {
            flags = 0;
            if (!ws->poll(timeOut, Poco::Net::Socket::SELECT_READ | Poco::Net::Socket::SELECT_ERROR)) // bitwise OR, not logical ||
            {
                // cout << ".";
            }
            else
            {
                n = ws->receiveFrame(buffer, sizeof(buffer), flags);
                if (n > 0)
                {
                    if ((flags & WebSocket::FRAME_OP_BITMASK) == WebSocket::FRAME_OP_BINARY)
                    {
                        // process and send out to all other clients
                        DoReceived(ws, buffer, n);
                    }
                }
            }
        }
        while ((flags & WebSocket::FRAME_OP_BITMASK) != WebSocket::FRAME_OP_CLOSE);
        // client has closed, so remove from list
        for (vector<WsRequestHandler *>::iterator it = connectedSockets->begin(); it != connectedSockets->end(); ++it)
        {
            if (*it == this)
            {
                connectedSockets->erase(it);
                logger->information("Connection closed %s", ws->peerAddress().toString());
                break;
            }
        }
        delete(ws);
        ws = NULL;
    }
    catch (WebSocketException& exc)
    {
        //never gets called
    }
}
See receiveFrame() documentation:
Returns the number of bytes received. A return value of 0 means that the peer has shut down or closed the connection.
So if the receiveFrame() call returns zero, you can act accordingly.
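Applied to the loop in the question, that means treating a zero return like a CLOSE frame so the cleanup code runs (a small sketch using the question's variable names):
n = ws->receiveFrame(buffer, sizeof(buffer), flags);
if (n == 0)
{
    // peer shut down or closed without sending a CLOSE frame:
    // leave the loop so this client is removed from connectedSockets
    break;
}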
I do not know if this is an answer to the question, but the implementation you have done does not deal with PING frames. This is currently (as of my POCO version, 1.7.5) not done automatically by the POCO framework. I put up a question about that recently. According to the RFC (6455), the ping and pong frames are used (among other things) as a keep-alive mechanism. This may therefore be critical to get right in order to keep your connection stable over time. Much of this is guess-work from my side, as I am experimenting with this now myself.
@Alex, you are a main developer of POCO, I believe; a comment on my answer would be much appreciated.
I extended the catch to do some exception handling for "Connection reset by peer".
catch (Poco::Net::WebSocketException& exc)
{
    // Do something
}
catch (Poco::Exception& e)
{
    // This is where the "Connection reset by peer" lands
}
A bit late to the party here... but I am using Poco and WebSockets as well, and properly handling disconnects was tricky.
I ended up implementing a simple ping functionality myself where the client side sends an ACK message for every WS frame it receives. A separate thread on the server side tries to read the ACK messages, and it now detects when the client has disconnected by checking whether (flags & WebSocket::FRAME_OP_BITMASK) equals WebSocket::FRAME_OP_CLOSE (the catch handler below forces the close flag when an exception occurs).
// Server side - POCO. Start a thread for receiving ACK packages. Needed in order to detect when the websocket is closed!
thread t0([&]() -> void {
    while (!KillFlag && ws != nullptr && (flags & WebSocket::FRAME_OP_BITMASK) != WebSocket::FRAME_OP_CLOSE && machineConnection != nullptr)
    {
        try
        {
            if (ws == nullptr)
            {
                return;
            }
            if (ws->available() > 0)
            {
                int len = ws->receiveFrame(buffer, sizeof(buffer), flags);
            }
            else
            {
                Util::Sleep(10);
            }
        }
        catch (Poco::Exception &pex)
        {
            flags = flags | WebSocket::FRAME_OP_CLOSE;
            return;
        }
        catch (...)
        {
            //log::info(string("Unknown exception in ACK Thread drained"));
            return;
        }
    }
    log::debug("OperatorWebHandler::HttpRequestHandler() Websocket Acking thread DONE");
});
On the client side (JS), I just send a dummy "ACK" message back to the server (POCO) every time I receive a WS frame.
websocket.onmessage = (evt) => {
    _this.receivedData = JSON.parse(evt.data);
    websocket.send("ACK");
};
This is not about disconnect handling, but rather about the stability of the connection.
I had some issues with the POCO WebSocket server in StreamSocket mode and a C# client. Sometimes the client sends Pong messages with a zero-length payload and a disconnect occurs, so I added Ping and Pong handling code.
int WebSocketImpl::receiveBytes(void* buffer, int length, int)
{
    char mask[4];
    bool useMask;
    _frameFlags = 0;
    for (;;) {
        int payloadLength = receiveHeader(mask, useMask);
        int frameOp = _frameFlags & WebSocket::FRAME_OP_BITMASK;
        if (frameOp == WebSocket::FRAME_OP_PONG || frameOp == WebSocket::FRAME_OP_PING) {
            std::vector<char> tmp(payloadLength);
            if (payloadLength != 0) {
                receivePayload(tmp.data(), payloadLength, mask, useMask);
            }
            if (frameOp == WebSocket::FRAME_OP_PING) {
                sendBytes(tmp.data(), payloadLength, WebSocket::FRAME_OP_PONG);
            }
            continue;
        }
        if (payloadLength <= 0)
            return payloadLength;
        if (payloadLength > length)
            throw WebSocketException(Poco::format("Insufficient buffer for payload size %d", payloadLength), WebSocket::WS_ERR_PAYLOAD_TOO_BIG);
        return receivePayload(reinterpret_cast<char*>(buffer), payloadLength, mask, useMask);
    }
}
