How to distinguish which operation was completed in IOCP processing thread? - winapi

My application can simultaneously send data to and receive data from the client using WSASend and WSARecv. So how can I distinguish which operation was completed in the IOCP processing thread (send or receive)?
BOOL bReturn = GetQueuedCompletionStatus(srv.m_hCompPort, &dwBytesTransfered, (LPDWORD)&lpContext, &pOverlapped, INFINITE);
I thought I could use the OVERLAPPED structure for this purpose, but I can't see how. Any ideas?
Thank You!

The solution is very simple:
struct iOverlapped : public OVERLAPPED {
    enum Type {
        Send,
        Receive
    };

    iOverlapped(Type type_) {
        ZeroMemory(this, sizeof(iOverlapped));
        type = type_;
    }

    Type type;
};
And for every connection we have to create two overlapped instances (one per operation type) ...
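In the IOCP processing thread, the pointer returned by GetQueuedCompletionStatus() can then be cast back to the derived type to see which operation finished. A minimal sketch, assuming the iOverlapped structure above and the srv.m_hCompPort handle from the question:
DWORD dwBytesTransfered = 0;
ULONG_PTR completionKey = 0;
LPOVERLAPPED pOverlapped = NULL;
BOOL bReturn = GetQueuedCompletionStatus(srv.m_hCompPort, &dwBytesTransfered,
                                         &completionKey, &pOverlapped, INFINITE);
if (pOverlapped != NULL)
{
    // safe because every OVERLAPPED we post is really an iOverlapped
    iOverlapped* pIo = static_cast<iOverlapped*>(pOverlapped);
    switch (pIo->type)
    {
    case iOverlapped::Send:
        // a WSASend completed; dwBytesTransfered bytes were sent
        break;
    case iOverlapped::Receive:
        // a WSARecv completed; dwBytesTransfered bytes were received
        break;
    }
}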

Related

Generate an RPC code for a client to remotely call the existing server functions?

Before I ask the main question: I have two existing client/server Win32 projects based on sockets, in which the client sends a string request to the server and receives the result as a string using the socket functions send() and recv().
Here is a part of the server code (currently still based on sockets):
struct client_ctx
{
    int socket;
    CHAR buf_recv[MAX_SEND_BUF_SIZE]; // receive buffer
    CHAR buf_send[MAX_SEND_BUF_SIZE]; // send buffer
    unsigned int sz_recv;             // size of received data
    unsigned int sz_send_total;       // total size of data to send
    unsigned int sz_send;             // size of data already sent
    // OVERLAPPED structures for completion notifications
    OVERLAPPED overlap_recv;
    OVERLAPPED overlap_send;
    OVERLAPPED overlap_cancel;
    DWORD flags_recv;                 // flags for WSARecv
};
struct client_ctx g_ctxs[1 + MAX_CLIENTS];
void schedule_write(DWORD idx)
{
    WSABUF buf;
    buf.buf = g_ctxs[idx].buf_send + g_ctxs[idx].sz_send;
    buf.len = g_ctxs[idx].sz_send_total - g_ctxs[idx].sz_send;
    memset(&g_ctxs[idx].overlap_send, 0, sizeof(OVERLAPPED));
    WSASend(g_ctxs[idx].socket, &buf, 1, NULL, 0, &g_ctxs[idx].overlap_send, NULL);
}
Using the above functions I send the requested data to the client.
The data I send from the server comes from this class:
class SystemInfo {
public:
    static std::string GetOSVersion();
    static std::string GetCurrentTimeStr();
    static std::string GetTimeSinceStartStr();
    static std::string GetFreeMemoryStr();
    static std::string GetFreeSpaceStr();
    static std::string CheckAccess();
    static std::string CheckKeyFileDirectoryAccessRights(wchar_t *char_path, wchar_t *char_buf);
    static std::string UserNameFromSid(PSID userSid);
    static BOOL FileOrDirectoryExists(LPCTSTR szPath);
};
And the question is: is there any guide on how I can use the MIDL compiler to represent the methods of the SystemInfo class as procedures that can be called remotely? I can't find any manual on how to connect the existing functions with remote procedure calls (and use them from the client side in my case).
You aren't going to be able to hook the client calls directly to the existing server functions. Even when implicit binding handles are used on the client side, the actual server function gets a binding handle (otherwise the server wouldn't be able to handle multiple clients). Because of that, the signatures simply aren't going to match.
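For illustration, a server-side routine generated from a hypothetical IDL declaration such as error_status_t RpcGetOSVersion([in] handle_t hBinding, [out, string] wchar_t** version); would have to wrap the existing method rather than replace it. This is only a sketch under that assumption; neither the IDL name nor the signature comes from the question:
#include <windows.h>
#include <rpc.h>
#include <cstdlib>
#include <string>

// Hypothetical server-side manager routine wrapping the existing method.
// MIDL_user_allocate is the user-supplied allocator that the MIDL runtime
// requires the server to provide for [out] data.
unsigned long RpcGetOSVersion(      // error_status_t in the generated header
    handle_t hBinding,              // added by RPC; SystemInfo::GetOSVersion has no such parameter
    wchar_t** version)              // the [out, string] result
{
    (void)hBinding;
    std::string os = SystemInfo::GetOSVersion();   // call the existing implementation

    size_t len = os.size() + 1;
    *version = static_cast<wchar_t*>(MIDL_user_allocate(len * sizeof(wchar_t)));
    if (*version == NULL)
        return ERROR_OUTOFMEMORY;

    mbstowcs(*version, os.c_str(), len);           // narrow -> wide copy of the result
    return ERROR_SUCCESS;
}
So the practical approach is to declare RPC-friendly procedures in the IDL and have each one forward to the corresponding SystemInfo method, as sketched above.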

Timeout for ConnectEx() in IOCP mode?

In an IOCP Winsock2 client, after ConnectEx() times out on an unsuccessful connection attempt, the following happens:
1. An "IO completion" is queued to the associated IO Completion Port.
2. GetQueuedCompletionStatus() returns FALSE.
3. WSAGetOverlappedResult() returns WSAETIMEDOUT.
What determines the timeout period between calling ConnectEx() and step 1 above? How can I shorten this timeout period?
I know that it is possible to wait for ConnectEx() by passing it an OVERLAPPED structure with OVERLAPPED.hEvent = WSACreateEvent() and then waiting on this event, e.g. with WaitForSingleObject(Overlapped.hEvent, millisec), to time out after no connection has been made within the millisec period. BUT that solution is outside the scope of this question, because it does not use the IOCP notification model.
Unfortunately, there seems to be no built-in option for setting a socket connect timeout. At least I have not found one, and judging by this question - How to configure socket connect timeout - nobody else has either.
One possible solution is to pass an event handle with the I/O request and, if we get ERROR_IO_PENDING, call RegisterWaitForSingleObject for that event. If this call succeeds, our WaitOrTimerCallback callback function will be called, either because the I/O completed (with whatever final status) - the I/O system sets the event we passed both to the I/O request and to RegisterWaitForSingleObject - or because the timeout (dwMilliseconds) expired, in which case we need to call the CancelIoEx function.
So let's say we have a class IO_IRP : public OVERLAPPED with reference counting (we need to keep the pointer to the OVERLAPPED used in the I/O request in order to pass it to CancelIoEx, and we need to be sure that this OVERLAPPED is not yet reused in another new I/O, i.e. not yet freed). A possible implementation:
class WaitTimeout
{
    IO_IRP* _Irp;
    HANDLE _hEvent, _WaitHandle, _hObject;

    static VOID CALLBACK WaitOrTimerCallback(
        __in WaitTimeout* lpParameter,
        __in BOOLEAN TimerOrWaitFired
        )
    {
        UnregisterWaitEx(lpParameter->_WaitHandle, NULL);
        if (TimerOrWaitFired)
        {
            // the timeout expired; the lpOverlapped is unique here (because we
            // hold a reference on it) and not used in any other I/O
            CancelIoEx(lpParameter->_hObject, lpParameter->_Irp);
        }
        delete lpParameter;
    }

    ~WaitTimeout()
    {
        if (_hEvent) CloseHandle(_hEvent);
        _Irp->Release();
    }

    WaitTimeout(IO_IRP* Irp, HANDLE hObject) : _Irp(Irp), _hEvent(0), _hObject(hObject)
    {
        Irp->AddRef();
    }

    BOOL Create(PHANDLE phEvent)
    {
        if (HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL))
        {
            *phEvent = hEvent;
            _hEvent = hEvent;
            return TRUE;
        }
        return FALSE;
    }

public:
    static WaitTimeout* Create(PHANDLE phEvent, IO_IRP* Irp, HANDLE hObject)
    {
        if (WaitTimeout* p = new WaitTimeout(Irp, hObject))
        {
            if (p->Create(phEvent))
            {
                return p;
            }
            delete p;
        }
        return NULL;
    }

    void Destroy()
    {
        delete this;
    }

    // the object must not be accessed after this call
    void SetTimeout(ULONG dwMilliseconds)
    {
        if (RegisterWaitForSingleObject(&_WaitHandle, _hEvent,
            (WAITORTIMERCALLBACK)WaitOrTimerCallback, this,
            dwMilliseconds, WT_EXECUTEONLYONCE | WT_EXECUTEINWAITTHREAD))
        {
            // WaitOrTimerCallback will be called and will delete this object
            return;
        }
        // failed to register the wait:
        // just cancel the I/O and delete self
        CancelIoEx(_hObject, _Irp);
        delete this;
    }
};
And use it like this:
if (IO_IRP* Irp = new IO_IRP(...))
{
    DWORD err = NOERROR;
    WaitTimeout* p = 0;

    if (dwMilliseconds)
    {
        if (!(p = WaitTimeout::Create(&Irp->hEvent, Irp, (HANDLE)socket)))
        {
            err = ERROR_NO_SYSTEM_RESOURCES;
        }
    }

    if (err == NOERROR)
    {
        DWORD dwBytes;
        err = ConnectEx(socket, RemoteAddress, RemoteAddressLength,
            lpSendBuffer, dwSendDataLength, &dwBytes, Irp) ?
            NOERROR : WSAGetLastError();
    }

    if (p)
    {
        if (err == ERROR_IO_PENDING)
        {
            p->SetTimeout(dwMilliseconds);
        }
        else
        {
            p->Destroy();
        }
    }

    Irp->CheckErrorCode(err);
}
Another possible solution is to set a timer via CreateTimerQueueTimer and, if the timer expires, call CancelIoEx or close the I/O handle from its callback. The difference from the event-based solution: if the I/O completes before the timer expires, the WaitOrTimerCallback callback function is not called automatically. With an event, the I/O subsystem sets the event when the I/O completes (after the initial pending status), and thanks to that (the event being signaled) the callback is called. With a timer there is no way to pass it to the I/O request as a parameter (the I/O accepts only an event handle). As a result we need to store the pointer to the timer object ourselves and free it manually when the I/O completes. So there will be two pointers to the timer object - one owned by the thread pool (saved by CreateTimerQueueTimer) and one owned by our object (socket) class (we need it to dereference the timer when the I/O completes). This requires reference counting on the object that encapsulates the timer as well. On the other hand, we can use one timer not just for a single I/O operation but for several I/Os (because it is not directly bound to a particular I/O). A minimal sketch of this variant follows.
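A minimal sketch of that timer-based variant (the Connection struct and function names here are illustrative; pendingIrp stands for the reference-counted IO_IRP from the code above):
#include <winsock2.h>
#include <windows.h>

struct Connection
{
    SOCKET      s;
    HANDLE      hTimer;      // returned by CreateTimerQueueTimer
    OVERLAPPED* pendingIrp;  // in the code above this is the IO_IRP (derives from OVERLAPPED)
};

static VOID CALLBACK ConnectTimeoutCallback(PVOID Parameter, BOOLEAN /*TimerOrWaitFired*/)
{
    Connection* conn = static_cast<Connection*>(Parameter);
    // If ConnectEx has not completed yet, cancel it; the I/O then completes
    // through the normal IOCP path with ERROR_OPERATION_ABORTED.
    CancelIoEx(reinterpret_cast<HANDLE>(conn->s), conn->pendingIrp);
}

// call this after ConnectEx returned FALSE with WSA_IO_PENDING
bool ArmConnectTimeout(Connection* conn, DWORD dwMilliseconds)
{
    return CreateTimerQueueTimer(&conn->hTimer, NULL,
                                 ConnectTimeoutCallback, conn,
                                 dwMilliseconds, 0,      // fire once, no period
                                 WT_EXECUTEONLYONCE) != FALSE;
}
When the connect I/O completes, the timer must be torn down manually, e.g. with DeleteTimerQueueTimer(NULL, conn->hTimer, INVALID_HANDLE_VALUE), and the Connection / IO_IRP references released - this is exactly the extra bookkeeping described above.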

COM asynchronous call doesn't respect the message filter

I have an STA COM object that implements a custom interface. My custom interface has a custom proxy stub that was built from the code generated by the MIDL-compiler. I would like to be able to asynchronously make calls to the interface from other apartments. I'm finding that the synchronous interface calls respect the OLE message filter on the calling thread, but the asynchronous interface calls do not. This means that COM asynchronous calls cannot be used in a fire-and-forget manner if the calling apartment has a message filter that suggests retrying the call later.
Is this expected? Is there any way around this other than not using a message filter, not using fire-and-forget operations, or having a separate homegrown component just to manage fire-and-forget operations?
For the code below, MessageFilter is a simple, in-module implementation of IMessageFilter that routes calls to lambdas. If I do not use message filters, both the synchronous and asynchronous calls work fine. If I use the message filters shown below, the synchronous call works (after the main STA message filter stops returning SERVERCALL_RETRYLATER) but the asynchronous call immediately fails and never retries.
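For reference, a minimal sketch of what such a MessageFilter might look like (the member names handle_incoming_call and retry_rejected_call match the lambdas assigned below; everything else is illustrative):
#include <windows.h>
#include <functional>

class MessageFilter : public IMessageFilter
{
    LONG m_refCount = 0; // callers AddRef() explicitly after new, as below
public:
    std::function<DWORD(DWORD, HTASK, DWORD, LPINTERFACEINFO)> handle_incoming_call;
    std::function<DWORD(HTASK, DWORD, DWORD)>                  retry_rejected_call;

    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv) override
    {
        if (riid == IID_IUnknown || riid == IID_IMessageFilter)
        {
            *ppv = static_cast<IMessageFilter*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override { return InterlockedIncrement(&m_refCount); }
    STDMETHODIMP_(ULONG) Release() override
    {
        ULONG count = InterlockedDecrement(&m_refCount);
        if (count == 0) delete this;
        return count;
    }

    // IMessageFilter: route to the lambdas if set, otherwise use safe defaults
    STDMETHODIMP_(DWORD) HandleInComingCall(DWORD dwCallType, HTASK htaskCaller,
        DWORD dwTickCount, LPINTERFACEINFO lpInterfaceInfo) override
    {
        return handle_incoming_call
            ? handle_incoming_call(dwCallType, htaskCaller, dwTickCount, lpInterfaceInfo)
            : SERVERCALL_ISHANDLED;
    }
    STDMETHODIMP_(DWORD) RetryRejectedCall(HTASK htaskCallee, DWORD dwTickCount,
        DWORD dwRejectType) override
    {
        return retry_rejected_call
            ? retry_rejected_call(htaskCallee, dwTickCount, dwRejectType)
            : static_cast<DWORD>(-1); // -1 = cancel the call
    }
    STDMETHODIMP_(DWORD) MessagePending(HTASK, DWORD, DWORD) override
    {
        return PENDINGMSG_WAITDEFPROCESS;
    }
};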
The main STA has a message filter that defers for some period of time.
// establish deferral time
chrono::time_point<chrono::system_clock> defer_until = ...;
// create message filter
auto message_filter = new MessageFilter;
message_filter->AddRef();
message_filter->handle_incoming_call
    = [defer_until](DWORD, HTASK, DWORD, LPINTERFACEINFO)
    {
        // compare against the same clock that defer_until uses
        return chrono::system_clock::now() >= defer_until
            ? SERVERCALL_ISHANDLED
            : SERVERCALL_RETRYLATER;
    };
// register message filter
CoRegisterMessageFilter(message_filter, nullptr);
Another STA sets up its own message filter to tell COM to retry.
// create message filter
auto message_filter = new MessageFilter;
message_filter->AddRef();
message_filter->retry_rejected_call
    = [](HTASK, DWORD, DWORD)
    {
        return 0; // retry immediately
    };
// register message filter
CoRegisterMessageFilter(message_filter, nullptr);
In that secondary STA, I get a proxy for the object interface from the main STA.
// get global interface table
IGlobalInterfaceTablePtr global_interface_table;
global_interface_table.CreateInstance(CLSID_StdGlobalInterfaceTable);
// get interface reference
IMyInterfacePtr object_interface;
global_interface_table->GetInterfaceFromGlobal(cookie, __uuidof(IMyInterface), reinterpret_cast<LPVOID*>(&object_interface));
This works:
// execute synchronously
HRESULT hr = object_interface->SomeMethod();
/* final result, after the deferral period: hr == S_OK */
This does not work:
// get call factory
ICallFactoryPtr call_factory;
object_interface->QueryInterface(&call_factory);
// create async call
AsyncIMyInterfacePtr async_call;
call_factory->CreateCall(__uuidof(AsyncIMyInterface), nullptr, __uuidof(AsyncIMyInterface), reinterpret_cast<LPUNKNOWN*>(&async_call));
// begin executing asynchronously
async_call->Begin_SomeMethod();
// end executing asynchronously
HRESULT hr = async_call->Finish_SomeMethod();
/* final result, immediate: hr == RPC_E_SERVERCALL_RETRYLATER */

Implementing an asynchronous delay in C++/CX

I am trying to write a function which, given a number of seconds and a callback, runs the callback after the given number of seconds. The callback does not have to be on the same thread. The target language is C++/CX.
I tried using Windows::System::Threading::ThreadPoolTimer, but the result is a memory access exception. The issue appears to be that the callback implementation (in native C++) can't be accessed from the managed thread that the timer is running its callback on.
ref class TimerDoneCallback {
private:
    function<void(void)> m_callback;
public:
    void EventCallback(ThreadPoolTimer^ timer) {
        m_callback(); // <-- memory exception here
    }
    TimerDoneCallback(function<void(void)> callback) : m_callback(callback) {}
};

void RealTimeDelayCall(const TimeSpan& duration, function<void(void)> callback) {
    auto t = ref new TimerDoneCallback(callback);
    auto e = ref new TimerElapsedHandler(t, &TimerDoneCallback::EventCallback);
    ThreadPoolTimer::CreateTimer(e, duration);
}

void Test() {
    RealTimeDelayCall(duration, [](){}); // after a delay, run 'do nothing'
}
void Test() {
RealTimeDelayCall(duration, [](){}); //after a delay, run 'do nothing'
}
I don't want to create a thread and sleep on it, because there may be many concurrent delays.
The TimerDoneCallback instance is not kept alive - delegates in C++/CX take weak references to the target object (to avoid circular references). You can override this behavior by using the extended overload of the delegate constructor:
auto e = ref new TimerElapsedHandler(t, &TimerDoneCallback::EventCallback, CallbackContext::Any, true);
The final bool parameter should be true for strong references, and false for weak references. (False is the default.)
You could also consider using the timer class in PPL agents to make a delayed callback: http://msdn.microsoft.com/en-us/library/hh873170(v=vs.110).aspx to avoid needing to use ThreadPoolTimer.
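A minimal sketch of that PPL-agents alternative (the helper name is illustrative and, for brevity, the timer and call blocks are leaked; real code must keep them alive until the callback fires and then delete them):
#include <agents.h>
#include <functional>

void RealTimeDelayCallPpl(unsigned int milliseconds, std::function<void()> callback)
{
    // the call block receives the timer's message and runs the callback
    auto* target = new concurrency::call<int>([callback](int) { callback(); });
    // a non-repeating timer that posts a single message to the call block
    auto* timer = new concurrency::timer<int>(milliseconds, 0, target, false);
    timer->start();
}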

Can't catch newConnection() signal from QTcpServer

I am trying to make a simple server thread in Qt to accept a connection; however, although the server is listening (I can connect with my test app), I can't get the newConnection() signal to be acted on.
Any help as to what I'm missing here would be much appreciated!
class CServerThread : public QThread
{
    Q_OBJECT
protected:
    void run();
private:
    QTcpServer* server;
public slots:
    void AcceptConnection();
};

void CServerThread::run()
{
    server = new QTcpServer;
    QObject::connect(server, SIGNAL(newConnection()), this, SLOT(AcceptConnection()));
    server->listen(QHostAddress::Any, 1000); // Any port in a storm
    exec(); // Start event loop
}

void CServerThread::AcceptConnection()
{
    OutputDebugStringA("\n***** INCOMING CONNECTION"); // This is never called!
}
First of all, your QTcpServer lives in the new thread, while the CServerThread instance lives in another thread (the thread in which the instance was created). The signal/slot connection you are creating is therefore indirect and uses thread-safe event delivery between the event loops of two different threads. This can cause exactly this problem if the thread where you create CServerThread doesn't have a Qt event loop running.
I suggest you create a MyServer class which creates a QTcpServer, calls listen(), and connects the QTcpServer::newConnection() signal to its own slot. Then rewrite your server thread's run() method to something like this:
void CServerThread::run() {
    server = new MyServer(host, port);
    exec(); // Start event loop
}
In this approach both the QTcpServer and the object that processes newConnection live in the same thread, which is a much easier situation to handle. A sketch of such a MyServer class is shown below.
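A minimal sketch of such a MyServer class (the class name, host/port parameters, and slot name are illustrative):
#include <QObject>
#include <QTcpServer>
#include <QTcpSocket>
#include <QHostAddress>

class MyServer : public QObject
{
    Q_OBJECT
public:
    MyServer(const QHostAddress& address, quint16 port, QObject* parent = 0)
        : QObject(parent), server(new QTcpServer(this))
    {
        // QTcpServer and this object live in the same thread, so this
        // connection is delivered within that thread's event loop.
        connect(server, SIGNAL(newConnection()), this, SLOT(acceptConnection()));
        server->listen(address, port);
    }

private slots:
    void acceptConnection()
    {
        QTcpSocket* socket = server->nextPendingConnection();
        // handle the new connection here
        Q_UNUSED(socket);
    }

private:
    QTcpServer* server;
};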
I have one really simple working example:
Header: http://qremotesignal.googlecode.com/svn/tags/1.0.0/doc/html/hello_2server_2server_8h-example.html
Source: http://qremotesignal.googlecode.com/svn/tags/1.0.0/doc/html/hello_2server_2server_8cpp-example.html
