I am using a named pipe on Windows and am confused about the difference between FILE_FLAG_OVERLAPPED and PIPE_NOWAIT, both of which are parameters to CreateNamedPipe. I set the parameters like this:
HANDLE hPipe = CreateNamedPipe(
lpszPipename, // pipe name
PIPE_ACCESS_DUPLEX | // read/write access
FILE_FLAG_OVERLAPPED, // overlapped mode
PIPE_TYPE_MESSAGE | // message-type pipe
PIPE_READMODE_MESSAGE | // message read mode
PIPE_WAIT, // blocking mode
PIPE_UNLIMITED_INSTANCES, // unlimited instances
BUFSIZE * sizeof(TCHAR), // output buffer size
BUFSIZE * sizeof(TCHAR), // input buffer size
PIPE_TIMEOUT, // client time-out
NULL); // default security attributes
ConnectNamedPipe returns zero immediately and I get ERROR_IO_PENDING from GetLastError. However, MSDN says:
With a nonblocking-wait handle, the connect operation returns zero immediately, and the GetLastError function returns ERROR_PIPE_LISTENING.
So what does "nonblocking-wait" mean here: PIPE_NOWAIT or FILE_FLAG_OVERLAPPED? Thanks a lot!
PIPE_NOWAIT means that nonblocking mode is enabled on the handle. In this mode, ReadFile, WriteFile, and ConnectNamedPipe always complete immediately.
FILE_FLAG_OVERLAPPED means that asynchronous mode is enabled on the handle. In this mode, every I/O operation that is not inherently synchronous [1] always returns immediately.
So FILE_FLAG_OVERLAPPED vs PIPE_NOWAIT comes down to "returns immediately" vs "completes immediately".
"Completes immediately" (which implies "returns immediately") means the I/O operation has already finished by the time the API returns. The reverse is not true: if an operation returns immediately, that does not mean it has already completed. If the operation is still in progress, the native API returns the status code STATUS_PENDING, and the Win32 API in this situation usually sets the last error to ERROR_IO_PENDING.
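For illustration, a minimal sketch of the difference on the asker's pipe (hPipe is assumed to have been created with FILE_FLAG_OVERLAPPED, and hEvent is a manual-reset event created elsewhere; both names are placeholders):
OVERLAPPED ov = {};
ov.hEvent = hEvent; // signaled by the system when the connect completes
if (ConnectNamedPipe(hPipe, &ov))
{
    // completed immediately: a client is already connected
}
else switch (GetLastError())
{
case ERROR_IO_PENDING:
    // returned immediately, but the operation is still in progress
    // (STATUS_PENDING at the native level); wait on ov.hEvent, an IOCP
    // or an APC before using the pipe
    break;
case ERROR_PIPE_CONNECTED:
    // a client connected between CreateNamedPipe and this call;
    // the operation is already complete
    break;
default:
    // real failure
    break;
}
With PIPE_NOWAIT (and no FILE_FLAG_OVERLAPPED) there is nothing to wait for: if no client is connected yet, ConnectNamedPipe returns zero immediately with ERROR_PIPE_LISTENING, which is the behavior the quoted MSDN sentence describes.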
With an asynchronous handle there are three ways to find out when an I/O operation has completed:

1. Bind the handle to an IOCP (via CreateIoCompletionPort, BindIoCompletionCallback, or CreateThreadpoolIo). When the I/O completes, the pointer to the OVERLAPPED that we passed to the I/O call is queued back to the IOCP. (With BindIoCompletionCallback or CreateThreadpoolIo the system creates and listens on the IOCP itself and calls our registered callback when the OVERLAPPED pointer is queued.)

2. Some Win32 APIs, such as ReadFileEx and WriteFileEx, and all native APIs, let you specify an APC completion routine that is called, once the operation completes, in the context of the thread that started the I/O. The thread must perform an alertable wait in this case. This is not compatible with binding the handle to an IOCP: if the file handle is bound to an IOCP, the system rejects APC-based calls with an invalid-parameter error.

3. Create an event and pass it to the API call (via OVERLAPPED::hEvent). The system resets the event when the I/O operation begins and sets it to the signaled state when the operation completes. Unlike the first two options, no additional context (no pointer to the OVERLAPPED) is delivered back when the I/O completes, so this is usually the worst option.
[1] Some I/O operations are always synchronous APIs, for example GetFileInformationByHandleEx and SetFileInformationByHandle. But most I/O operations are not inherently synchronous, and all of those take a pointer to an OVERLAPPED as a parameter. So if there is no OVERLAPPED pointer in the API signature, it is a synchronous API call; if there is one, the call is usually asynchronous (an exception is CancelIoEx, where the OVERLAPPED pointer refers not to the current operation but to a previous I/O operation that you want to cancel). In particular, ReadFile, WriteFile, DeviceIoControl, and ConnectNamedPipe (which internally calls DeviceIoControl with FSCTL_PIPE_LISTEN) are not synchronous I/O APIs.
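As a minimal illustration of option 3 (a sketch only; hFile stands for some handle opened with FILE_FLAG_OVERLAPPED):
OVERLAPPED ov = {};
ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); // manual-reset event

BYTE buf[512];
DWORD transferred = 0;
if (!ReadFile(hFile, buf, sizeof(buf), NULL, &ov))
{
    if (GetLastError() != ERROR_IO_PENDING)
    {
        // real error
    }
}
// Block until the read completes (GetOverlappedResult with bWait = TRUE
// waits on ov.hEvent for us), then pick up the byte count.
if (GetOverlappedResult(hFile, &ov, &transferred, TRUE))
{
    // buf[0..transferred) is valid here
}
CloseHandle(ov.hEvent);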
Related
When I'm doing an OVERLAPPED read on a file handle I usually handle both cases of completion: ReadFile immediately returns TRUE, or it returns FALSE and GetLastError() returns ERROR_IO_PENDING. But is this really necessary? Will an OVERLAPPED read never complete synchronously? Maybe the data is already in the cache and can be rapidly provided to the ReadFile call synchronously.
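For reference, the pattern described above as a minimal sketch (hFile, buf and ov stand for the handle, buffer and OVERLAPPED already set up for the read):
DWORD transferred = 0;
if (ReadFile(hFile, buf, sizeof(buf), NULL, &ov))
{
    // completed synchronously (for example, the data was already cached);
    // the byte count can be picked up with GetOverlappedResult
}
else if (GetLastError() == ERROR_IO_PENDING)
{
    // will complete later; wait on ov.hEvent, the IOCP or an APC
}
else
{
    // failure
}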
I'm trying to figure out how to use FindFirstChangeNotification in order to do some file monitoring (in this case, for hot-reloading settings). I'm a bit confused about what this function returns. From the docs, it creates a "change notification handle". Ok, sure. But then "A wait on a notification handle succeeds when...". In this context, what is a "wait"?
In this context, the "wait" refers to waiting on the "change notification handle", which is a kind of HANDLE that you can wait on until it reaches the signaled state, using the Wait Functions.
A minimal example would be like this:
static void MyNotifyDirChange(HWND hwnd, LPCWSTR szPath)
{
HANDLE hWaitNotify = ::FindFirstChangeNotificationW(
szPath, TRUE,
FILE_NOTIFY_CHANGE_FILE_NAME |
FILE_NOTIFY_CHANGE_DIR_NAME |
FILE_NOTIFY_CHANGE_ATTRIBUTES |
FILE_NOTIFY_CHANGE_SIZE |
FILE_NOTIFY_CHANGE_LAST_WRITE |
FILE_NOTIFY_CHANGE_LAST_ACCESS |
FILE_NOTIFY_CHANGE_CREATION |
FILE_NOTIFY_CHANGE_SECURITY);
if (hWaitNotify == INVALID_HANDLE_VALUE)
{
::MessageBoxW(hwnd,
L"FindFirstChangeNotificationW failed.",
nullptr, MB_ICONERROR);
return;
}
::WaitForSingleObject(hWaitNotify, INFINITE);
::MessageBoxW(hwnd, L"Dir change notify.",
L"Notify", MB_ICONINFORMATION);
}
WaitForSingleObject waits until the specified object is in the signaled state or the time-out interval elapses. Since I've specified INFINITE, it will stay there forever until the handle becomes signaled. And when the handle becomes signaled, it means something has happened; the files in the directory have changed or whatnot.
From Wait Functions on MSDN:
Wait functions allow a thread to block its own execution. The wait functions do not return until the specified criteria have been met.
Most of the wait functions (the notable exception being WaitOnAddress) accept one or more handles that determine the criteria for returning from the wait. To wait on a handle means to pass the handle to one of these wait functions. It is also common to refer to waiting on an object, which has the same meaning as waiting on a handle to that object.
Synchronization Objects lists the various kinds of objects you can wait on: events, mutexes, semaphores and waitable timers; change and memory resource notifications; jobs, processes and threads; and (subject to some caveats) I/O handles.
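To keep monitoring the directory rather than waiting only once (the hot-reload case in the question), a sketch along these lines could be used; ReloadSettings is a hypothetical callback, not a Windows API:
static void MyWatchDirForChanges(LPCWSTR szPath)
{
    HANDLE hNotify = ::FindFirstChangeNotificationW(
        szPath, FALSE, FILE_NOTIFY_CHANGE_LAST_WRITE);
    if (hNotify == INVALID_HANDLE_VALUE)
        return;
    for (;;)
    {
        if (::WaitForSingleObject(hNotify, INFINITE) != WAIT_OBJECT_0)
            break;
        ReloadSettings(); // something in the directory changed
        // re-arm the notification before waiting again
        if (!::FindNextChangeNotification(hNotify))
            break;
    }
    ::FindCloseChangeNotification(hNotify);
}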
I have a Windows named pipe that I create with CreateFile (the server side was created using CreateNamedPipe). I use IO completion ports to read/write data asynchronously on both ends.
I need to send these handles to other processes after they've been opened. I tried to call CloseHandle on the handle returned from CreateIoCompletionPort, and then in the other process call CreateIoCompletionPort again. However it always fails and GetLastError returns 87 (ERROR_INVALID_PARAMETER).
I can also reproduce this in just one process, see below. Note there are no outstanding reads/writes on the object before I send it.
std::wstring pipe_name = L"\\\\.\\pipe\\test.12345";
HANDLE server = CreateNamedPipeW(
pipe_name.c_str(),
PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
PIPE_TYPE_BYTE | PIPE_READMODE_BYTE,
1,
4096,
4096,
10000,
NULL);
SECURITY_ATTRIBUTES security_attributes = {
sizeof(SECURITY_ATTRIBUTES), NULL, TRUE};
HANDLE client = CreateFileW(
pipe_name.c_str(), GENERIC_READ | GENERIC_WRITE,
0,
&security_attributes,
OPEN_EXISTING,
SECURITY_SQOS_PRESENT | SECURITY_ANONYMOUS | FILE_FLAG_OVERLAPPED,
NULL);
ULONG_PTR key = 1;
HANDLE comp_port = CreateIoCompletionPort(client, NULL, key, 1);
BOOL b1 = CloseHandle(comp_port);
comp_port = CreateIoCompletionPort(client, NULL, key, 1);
if (comp_port == NULL) {
int last_err = GetLastError();
}
Referring to the documentation for CreateIoCompletionPort:
A handle can be associated with only one I/O completion port, and after the association is made, the handle remains associated with that I/O completion port until it [the handle] is closed.
[...] The I/O completion port handle and every file handle associated with that particular I/O completion port are known as references to the I/O completion port. The I/O completion port is released when there are no more references to it.
In other words, closing the I/O completion port handle doesn't achieve anything. The I/O completion port still exists and is permanently associated with the pipe handle. What you're attempting simply isn't possible; you will need to re-architect.
Note also:
It is best not to share a file handle associated with an I/O completion port by using either handle inheritance or a call to the DuplicateHandle function. Operations performed with such duplicate handles generate completion notifications. Careful consideration is advised.
The documentation for CreateIoCompletionPort suggests what you're trying to accomplish isn't possible. All handles associated with an I/O completion port refer to the port and as long one is still open the port remains alive:
The I/O completion port handle and every file handle associated with that particular I/O completion port are known as references to the I/O completion port. The I/O completion port is released when there are no more references to it. Therefore, all of these handles must be properly closed to release the I/O completion port and its associated system resources. After these conditions are satisfied, close the I/O completion port handle by calling the CloseHandle function.
It should work if you create a new handle that's not associated with the I/O completion port with CreateFile and then pass it to the other processes with DuplicateHandle. Or just call CreateFile in the other process directly.
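A hedged sketch of that route, reusing the variables from the question; hTargetProcess stands for a handle to the receiving process obtained elsewhere (e.g. via OpenProcess), and the pipe would need a free instance for the extra CreateFileW to succeed:
// open a fresh client handle that has never been associated with an IOCP
HANDLE fresh = CreateFileW(
    pipe_name.c_str(), GENERIC_READ | GENERIC_WRITE,
    0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);

HANDLE remote = NULL;
if (fresh != INVALID_HANDLE_VALUE &&
    DuplicateHandle(GetCurrentProcess(), fresh,
                    hTargetProcess, &remote,
                    0, FALSE, DUPLICATE_SAME_ACCESS))
{
    // 'remote' is only meaningful inside the target process; its value
    // still has to be sent over some IPC channel so that process can
    // associate it with its own completion port
}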
I have the classic IOCP callback that dequeues pending I/O requests, processes them, and deallocates them, in this way:
struct MyIoRequest { OVERLAPPED o; /* ... other params ... */ };
bool is_iocp_active = true;
DWORD WINAPI WorkerProc(LPVOID lpParam)
{
ULONG_PTR dwKey;
DWORD dwTrans;
LPOVERLAPPED io_req;
while(is_iocp_active)
{
GetQueuedCompletionStatus((HANDLE)lpParam, &dwTrans, &dwKey, (LPOVERLAPPED*)&io_req, WSA_INFINITE);
// NOTE: I could use GetQueuedCompletionStatusEx() here ^ and pass TRUE for its
// alertable parameter, so I can wake up the thread with an APC request from another thread!
printf("dequeued an i/o request\n");
// [ process i/o request ]
...
// [ destroy request ]
destroy_request(io_req);
}
// [ clean up some stuff ]
return 0;
}
Then, in the code I will have somewhere:
MyIoRequest * io_req = allocate_request(...params...);
ReadFile(..., (OVERLAPPED*)io_req);
and this just works perfectly.
Now my question is: what if I want to immediately close the IOCP queue without causing leaks? (e.g. the application must exit)
I mean: if I set is_iocp_active to false, the next I/O request that GetQueuedCompletionStatus() dequeues will be the last one: the function will return, the thread will exit, and when a thread exits all of its pending I/O requests are simply canceled by the system, according to MSDN.
But the structures of type MyIoRequest that I allocated when calling ReadFile() won't be destroyed at all: the system has canceled the pending I/O requests, but I have to destroy those structures manually, or I will leak every pending I/O request when I stop the loop!
So, how could I do this? Am I wrong to stop the IOCP loop by just setting that variable to false? Note that the same thing would happen even if I used APC requests to stop an alertable thread.
The solution that comes to my mind is to add every MyIoRequest structure to a queue/list and then dequeue them when GetQueuedCompletionStatusEx returns, but wouldn't that create a bottleneck, since the enqueue/dequeue of those MyIoRequest structures must be interlocked? Maybe I've misunderstood how to use the IOCP loop. Can someone shed some light on this topic?
The way I normally shut down an IOCP thread is to post my own 'shut down now please' completion. That way you can cleanly shut down and process all of the pending completions and then shut the threads down.
The way to do this is to call PostQueuedCompletionStatus() with 0 for num bytes, completion key and pOverlapped. This will mean that the completion key is a unique value (you won't have a valid file or socket with a zero handle/completion key).
Step one is to close the sources of completions, so close or abort your socket connections, close files, etc. Once all of those are closed you can't be generating any more completion packets so you then post your special '0' completion; post one for each thread you have servicing your IOCP. Once the thread gets a '0' completion key it exits.
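A minimal sketch of that pattern, assuming iocp is the completion port handle and num_worker_threads is however many threads service it (both placeholders):
// shutdown side: one "zero" packet per worker thread
for (int i = 0; i < num_worker_threads; ++i)
    PostQueuedCompletionStatus(iocp, 0, 0, NULL);

// worker side: treat the all-zero packet as the signal to exit
DWORD dwTrans;
ULONG_PTR dwKey;
LPOVERLAPPED io_req;
for (;;)
{
    BOOL ok = GetQueuedCompletionStatus(iocp, &dwTrans, &dwKey, &io_req, INFINITE);
    if (ok && dwKey == 0 && io_req == NULL)
        break;              // our private shutdown packet
    // ... handle real completions (and failed dequeues) as before ...
}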
If you are terminating the app, and there's no overriding reason not to (e.g. DB connections to close, interprocess shared memory issues), call ExitProcess(0).
Failing that, call CancelIo() for all socket handles and process all the cancelled completions as they come in.
Try ExitProcess() first!
I have a simple tunnel program that needs to simultaneously block on standard input and a socket. I currently have a program that looks like this (error handling and boilerplate stuff omitted):
HANDLE host = GetStdHandle(STD_INPUT_HANDLE);
SOCKET peer = ...; // socket(), connect()...
WSAEVENT gate = WSACreateEvent();
OVERLAPPED xfer;
ZeroMemory(&xfer, sizeof(xfer));
xfer.hEvent = gate;
WSABUF pbuf = ...; // allocate memory, set size.
// start an asynchronous transfer.
WSARecv(peer, &pbuf, 1, 0, &xfer, 0);
while ( running )
{
// wait until standard input has available data or the event
// is signaled to inform that socket read operation completed.
HANDLE handles[2] = { host, gate };
const DWORD which = WaitForMultipleObjects
(2, handles, FALSE, INFINITE) - WAIT_OBJECT_0;
if (which == 0)
{
// read stuff from standard input.
ReadFile(host, ...);
// process stuff received from host.
// ...
}
if (which == 1)
{
// process stuff received from peer.
// ...
// start another asynchronous transfer.
WSARecv(peer, &pbuf, 1, 0, &xfer, 0);
}
}
The program works like a charm, I can transfer stuff through this tunnel program without a hitch. The thing is that it has a subtle bug.
If I start this program in interactive mode from cmd.exe and standard input is attached to the keyboard, pressing a key that does not produce input (e.g. the Ctrl key) makes this program block and ignore data received on the socket. I eventually realized that this is because pressing any key signals the standard input handle and WaitForMultipleObjects() returns. As expected, control enters the if (which == 0) block and the call to ReadFile() blocks because there is no input available.
Is there a means to detect how much input is available on a Win32 stream? If so, I could use this to check if any input is available before calling ReadFile() to avoid blocking.
I know of a few solutions for specific types of streams (notably ClearCommError() for serial ports and ioctlsocket(socket,FIONBIO,&count) for sockets), but none that I know of works with the CONIN$ stream.
Use overlapped I/O. Then test the event attached to the I/O operation, instead of the handle.
For CONIN$ specifically, you might also look at the Console Input APIs, such as PeekConsoleInput and GetNumberOfConsoleInputEvents
But I really recommend using OVERLAPPED (background) reads wherever possible and not trying to treat WaitForMultipleObjects like select.
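A hedged sketch of that console-side check, using the host handle from the question; it only illustrates the idea of testing the input queue before committing to a blocking read:
DWORD pending = 0;
if (GetNumberOfConsoleInputEvents(host, &pending) && pending > 0)
{
    // there is at least one input record queued, but it may be a key-up
    // or other event that produces no readable data, so peek first
    INPUT_RECORD rec[16];
    DWORD got = 0;
    if (PeekConsoleInput(host, rec, 16, &got) && got > 0)
    {
        // inspect rec[] and decide whether ReadFile / ReadConsoleInput
        // would actually have something to return
    }
}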
Since the console can't be opened in overlapped mode, your simplest options are to wait on the console handle and use ReadConsoleInput (then you have to process control sequences manually), or to spawn a dedicated worker thread for synchronous ReadFile. If you choose a worker thread, you may then want to connect a pipe between that worker and the main I/O loop, using overlapped pipe reads.
Another possibility, which I've never tried, would be to wait on the console handle and use PeekConsoleInput to find out whether to call ReadFile or ReadConsoleInput. That way you should be able to get non-blocking along with the cooked terminal processing. OTOH, passing control sequences to ReadConsoleInput might inhibit the buffer-manipulation actions they were supposed to take.
If the two streams are processed independently, or nearly so, it may make more sense to start a thread for each one. Then you can use a blocking read from standard input.
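For instance, a sketch of such a dedicated reader thread; StdinContext and the event-based hand-off are hypothetical, and a real program might prefer a pipe or PostQueuedCompletionStatus to deliver the data to the main loop:
struct StdinContext {
    HANDLE hStdin;      // GetStdHandle(STD_INPUT_HANDLE)
    HANDLE hDataReady;  // auto-reset event the main loop waits on
    char   buf[4096];
    DWORD  len;
};

DWORD WINAPI StdinReader(LPVOID param)
{
    StdinContext* ctx = (StdinContext*)param;
    for (;;)
    {
        DWORD read = 0;
        // a blocking read is fine here: only this thread waits on stdin
        if (!ReadFile(ctx->hStdin, ctx->buf, sizeof(ctx->buf), &read, NULL) || read == 0)
            break;      // EOF or error: stop the reader
        ctx->len = read;
        SetEvent(ctx->hDataReady);
        // a real implementation must wait until the main loop has
        // consumed buf before overwriting it on the next iteration
    }
    return 0;
}
The main loop would then wait on ctx->hDataReady (together with the socket event) instead of the raw console handle.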