WriteFile with Overlapped IO and ERROR_DISK_FULL?

Does anyone know the internal design of WriteFile() (Storage Team here?) with overlapped I/O for a file on a disk drive/file system? Clearly, when using the system buffer and a standard synchronous WriteFile(), it checks for a full disk and allocates space before returning, because the actual data held in the system cache is written later (a failure at that point causes a delayed-write error from the OS).
So the question is: would the same be true when using an OVERLAPPED structure for an asynchronous WriteFile() that extends the file beyond the free space? That is, would it return ERROR_DISK_FULL right away, before pending the I/O?
The reason for asking is recovery: freeing disk space, or inserting new media, and then resuming the writes. If the check happens up front, recovery is fairly straightforward. If the error only arrives after the I/O has been pended, there could be a bunch of queued I/Os that then have to be synchronized, with additional information tracked for every queued item so that offsets and such can be adjusted when moving to new media.
TIA!!

Asynchronous file operations (WriteFile() etc.) are only asynchronous for the caller; internally they work the same way as synchronous (blocking) ones. In fact, the implementation of a blocking call invokes the non-blocking one and then waits on an event, just as you would with an OVERLAPPED structure. So, on your question of whether WriteFile() would return ERROR_DISK_FULL before pending the I/O: the answer is no. The rationale for non-blocking calls is not to make disk operations return results faster, but to let a single thread run multiple I/O operations in parallel without creating multiple threads.

If there is not enough disk space to complete the write, you get ERROR_DISK_FULL (STATUS_DISK_FULL) when the I/O operation completes. Whether the file system driver completes your write request immediately with STATUS_DISK_FULL (converted to ERROR_DISK_FULL), or first returns STATUS_PENDING (converted to ERROR_IO_PENDING by Win32) and only then completes the I/O with STATUS_DISK_FULL, is undefined; it can be either. The final status will be ERROR_DISK_FULL, but you cannot assume whether the operation will complete synchronously or asynchronously.
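To handle both possibilities, a caller has to check for ERROR_DISK_FULL on the immediate return from WriteFile() and again on completion via GetOverlappedResult(). A minimal sketch in C (WriteChunk is a hypothetical helper; hFile is assumed to be opened with FILE_FLAG_OVERLAPPED, and error handling for event creation is omitted):

    #include <windows.h>

    /* Issue one overlapped write and return the final Win32 error code,
       whether the request completed synchronously or asynchronously. */
    static DWORD WriteChunk(HANDLE hFile, const void *buf, DWORD cb,
                            ULONGLONG offset)
    {
        DWORD err = ERROR_SUCCESS;
        OVERLAPPED ov;
        ZeroMemory(&ov, sizeof ov);
        ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);
        ov.OffsetHigh = (DWORD)(offset >> 32);
        ov.hEvent     = CreateEvent(NULL, TRUE, FALSE, NULL); /* manual-reset */

        if (!WriteFile(hFile, buf, cb, NULL, &ov))
        {
            err = GetLastError();
            if (err == ERROR_IO_PENDING)
            {
                /* The request was queued; the disk-full check may only
                   happen now. Wait for completion and look at the result. */
                DWORD written;
                err = GetOverlappedResult(hFile, &ov, &written, TRUE)
                          ? ERROR_SUCCESS
                          : GetLastError();   /* may be ERROR_DISK_FULL */
            }
        }
        CloseHandle(ov.hEvent);
        return err;   /* on ERROR_DISK_FULL: free space or swap media, retry */
    }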

Related

Does asynchronous file appending in Windows preserve order?

I call CreateFile with FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH | FILE_FLAG_OVERLAPPED and then issue many WriteFile calls with OVERLAPPED structures whose Offset and OffsetHigh members are both set to 0xFFFFFFFF, to append new data to the file.
Is it guaranteed that the operations will be completed in the same order as requested?
It seems logical to me, but I can find no explicit, unambiguous confirmation of that.
A quote from https://support.microsoft.com/en-us/kb/156932 says the operation is going to be synchronous:
On Windows NT, any write operation to a file that extends its length will be synchronous.
Great. A synchronous operation preserves order. But then:
The FILE_FLAG_NO_BUFFERING flag has the most effect on the behavior of the file system for asynchronous operation. This is the best way to guarantee that I/O requests are actually asynchronous.
The latter raised doubts in me. Could someone clarify this, please?
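For reference, the pattern described in the question looks roughly like this. A sketch in C (file name and buffer size are placeholders; with FILE_FLAG_NO_BUFFERING the buffer address and write length must be sector-aligned, which the page-aligned VirtualAlloc allocation satisfies on typical disks):

    #include <windows.h>

    HANDLE hFile = CreateFileW(L"log.bin", GENERIC_WRITE, 0, NULL,
                               OPEN_ALWAYS,
                               FILE_FLAG_NO_BUFFERING |
                               FILE_FLAG_WRITE_THROUGH |
                               FILE_FLAG_OVERLAPPED, NULL);

    void *chunk = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                               PAGE_READWRITE);

    OVERLAPPED ov = {0};
    ov.Offset     = 0xFFFFFFFF;   /* Offset and OffsetHigh both set to      */
    ov.OffsetHigh = 0xFFFFFFFF;   /* 0xFFFFFFFF mean "append at end of file" */
    ov.hEvent     = CreateEvent(NULL, TRUE, FALSE, NULL);

    BOOL ok = WriteFile(hFile, chunk, 4096, NULL, &ov);
    /* ok == FALSE with GetLastError() == ERROR_IO_PENDING is the normal
       asynchronous path; per the KB article quoted above, a write that
       extends the file may complete synchronously anyway. */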

Implementing blocking syscalls in Linux

I would like to understand how implementing blocking I/O syscalls differs from implementing non-blocking ones. Googling didn't help much; any links or references would be greatly appreciated.
Thanks.
http://faculty.salina.k-state.edu/tim/ossg/Device/blocking.html
A blocking syscall puts the task (the calling thread) to sleep (blocks it from running on the CPU) and returns only after an event (or a timeout). A non-blocking syscall does not block the thread; it just checks the in-kernel state and returns immediately.
More detailed description: http://www.makelinux.net/ldd3/chp-6-sect-2
one important issue: how does a driver respond if it cannot immediately satisfy the request? A call to read may come when no data is available, but more is expected in the future. Or a process could attempt to write, but your device is not ready to accept the data, because your output buffer is full. The calling process usually does not care about such issues; the programmer simply expects to call read or write and have the call return after the necessary work has been done. So, in such cases, your driver should (by default) block the process, putting it to sleep until the request can proceed. ....
There are several forms of the wait_event kernel functions for blocking the calling thread; check include/linux/wait.h. A thread can be woken up in different ways, for example with wake_up/wake_up_interruptible.
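A minimal sketch of that pattern, in the style of the LDD3 chapter linked above (my_read and my_producer are hypothetical names; a real driver would also copy data out with copy_to_user and return the byte count):

    #include <linux/fs.h>
    #include <linux/wait.h>
    #include <linux/sched.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);
    static int data_ready;   /* set by the producer (e.g. an IRQ handler) */

    static ssize_t my_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
    {
        /* Block the calling thread until data_ready becomes true;
           returns -ERESTARTSYS if the sleep is interrupted by a signal. */
        if (wait_event_interruptible(my_wq, data_ready))
            return -ERESTARTSYS;
        data_ready = 0;
        /* ... copy_to_user(buf, ..., ...) and return the byte count ... */
        return 0;
    }

    /* The producer sets the condition and wakes any sleeping readers. */
    static void my_producer(void)
    {
        data_ready = 1;
        wake_up_interruptible(&my_wq);
    }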

CreateFileMapping and MapViewOfFile with interprocess (un)synchronized multithreaded access?

I use a shared memory area to get some data to a second process.
The first process uses CreateFileMapping(INVALID_HANDLE_VALUE, ..., PAGE_READWRITE, ...) and MapViewOfFile( ... FILE_MAP_WRITE).
The second process uses OpenFileMapping(FILE_MAP_WRITE, ...) and MapViewOfFile( ... FILE_MAP_WRITE).
The docs state:
Multiple views of a file mapping object are coherent if they contain identical data at a specified time. This occurs if the file views are derived from any file mapping object that is backed by the same file. (...)
With one important exception, file views derived from any file mapping object that is backed by the same file are coherent or identical at a specific time. Coherency is guaranteed for views within a process and for views that are mapped by different processes.
The exception is related to remote files. (...)
Since I'm just using the shared memory as-is (backed by the paging file), I would have assumed that some synchronization is needed between the processes for one to see a coherent view of what the other has written. I'm unsure, however, exactly what synchronization is needed.
The current pattern I have (simplified) is like this:
Process1                                 | Process2
...                                      | ...
/* write to shared mem, then: */         | ::WaitForSingleObject(hDataReady); // real code has error handling
::SetEvent(hDataReady);                  | /* read from shared mem after wait returns */
...                                      | ...
Is this enough synchronization, even for shared memory?
What sync is needed in general between the two processes?
Note that within a single process, the call to SetEvent would certainly constitute a full memory barrier, but it isn't completely clear to me whether that holds for shared memory across processes.
I have since come to believe that for memory-access synchronization purposes it really does not matter whether the concurrently accessed memory is shared between processes or just within one process between threads.
That is, for shared memory (the kind shared between processes) on Windows, the same restrictions and guidelines apply as for "normal" memory within a process that is merely shared between that process's threads.
The reason I believe this is that processes and threads are somewhat orthogonal on Windows. A process is a "container" for threads, and for the process to do anything it needs at least one thread. So, for memory that is mapped into multiple processes' address spaces, the synchronization requirements on the threads running within those different processes should be the same as for threads running within a single process.
So the answer to my question Is this enough synchronization, even for shared memory? is that shared memory requires the same synchronization as "normal" memory. But of course not all synchronization techniques work across process boundaries, so you are restricted in what you can use. (A CRITICAL_SECTION, for example, cannot be used across processes.)
If both of those code snippets are in a loop then, in addition to the event, you'll need a mutex so that Process1 doesn't start writing again while Process2 is still reading. To be more specific, the mutex must be acquired before reading or writing and released after reading or writing. Make sure the mutex has been released before calling WaitForSingleObject in Process2.
My understanding is that although Windows may guarantee view coherency, it does not guarantee that a write is fully completed before the client reads it.
For example, if you were writing "Hello world!" to the view, the client could read it while it is only partially written, e.g. "Hello w".
Therefore, the view would be byte coherent, but not message coherent.
Personally, I use a mutex to guarantee thread-safe access.
Using a Semaphore should work better than an Event.
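Putting the suggestions above together, a sketch in C of the event-plus-mutex pattern (names such as Local\MyShm are placeholders; error handling omitted):

    #include <windows.h>
    #include <string.h>

    /* Process 1 (writer) */
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, 4096,
                                     L"Local\\MyShm");
    char  *view = (char *)MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, 0);
    HANDLE hMx  = CreateMutexW(NULL, FALSE, L"Local\\MyShmMutex");
    HANDLE hEvt = CreateEventW(NULL, FALSE, FALSE, L"Local\\MyShmReady");

    WaitForSingleObject(hMx, INFINITE);  /* exclude the reader            */
    memcpy(view, "Hello world!", 13);    /* write the whole message       */
    ReleaseMutex(hMx);                   /* release BEFORE signaling      */
    SetEvent(hEvt);                      /* tell the reader data is ready */

    /* Process 2 (reader) opens the same named objects with
       OpenFileMapping/OpenMutexW/OpenEventW, then:
           WaitForSingleObject(hEvt, INFINITE);
           WaitForSingleObject(hMx, INFINITE);
           ... read from its own mapped view ...
           ReleaseMutex(hMx);
       The wait/signal calls also act as full memory barriers, so the
       reader sees the completed write. */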

Handling streamed data via pipes

A Win32 application (the "server") is sending a continuous stream of data over a named pipe. GetNamedPipeInfo() tells me that input and output buffer sizes are allocated automatically as needed. The pipe is operating in byte mode, although it is sending data units larger than one byte (doubles, to be precise).
Now, my question is this: Can I somehow verify that my application (the "client") is not missing any data when reading from the pipe? I know that those read/write operations are buffered, but I suppose the buffers will not grow indefinitely if the client doesn't fetch the data quickly enough. How do I know if I missed something? Does the server (or the pipe?) silently discard data that is not read in time by the client?
BTW, can I rely on proper alignment of the data the client reads using ReadFile()? As far as I understood, ReadFile() may return with less bytes read than specified, i.e. NumberOfBytesRead <= NumberOfBytesToRead. Do I have to check every time that NumberOfBytesRead is a multiple of sizeof(double)?
The write operation will block if there is no more room in the pipe's buffers. This is from my (old) copy of the SDK manual:
When an application uses the WriteFile function to write to a pipe, the write operation may not finish if the pipe buffer is full. The write operation is completed when a read operation (using the ReadFile function) makes more buffer space available.
Sorry, didn't find out how to comment on your post, Neil.
The write operation will block if there is no more room in the pipe's buffers.
I just discovered that Sysinternals' FileMon can also monitor pipe operations. For testing purposes I connected the client to the named pipe and performed no read operations, just waited. The server writes a few hundred kB to the pipe every 4-5 seconds, even though nobody is fetching the data from the pipe on the client side. No blocking write operation, and so far no buffer-size limit seems to have been reached.
This is either a very big buffer, or the server does some magic in addition to just calling WriteFile() and waiting for the client to read.
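On the alignment question above: ReadFile() on a byte-mode pipe can indeed return fewer bytes than requested, so the client should loop until it has a complete unit rather than assume multiples of sizeof(double). A sketch in C (hPipe is an assumed, already-connected synchronous pipe handle):

    #include <windows.h>

    /* Read exactly sizeof(double) bytes from a byte-mode pipe, looping
       over partial reads. Returns FALSE on error or pipe closure. */
    static BOOL ReadDouble(HANDLE hPipe, double *out)
    {
        char  *p    = (char *)out;
        DWORD  left = sizeof *out;
        while (left > 0)
        {
            DWORD got;
            if (!ReadFile(hPipe, p, left, &got, NULL) || got == 0)
                return FALSE;   /* broken pipe / end of stream */
            p    += got;
            left -= got;
        }
        return TRUE;
    }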

What does CancelIo() do with bytes that have already been read?

What happens if I ReadFile() 10 bytes (in overlapped mode without a timeout) but invoke CancelIo() after 5 bytes have been read? The documentation for CancelIo() says that it cancels any pending I/O, but what happens to the 5 bytes already read? Are they lost? Are they re-enqueued so the next time I ReadFile() I'll get them again?
I'm looking for the specification to indicate one way or another. I don't want to rely on empirical evidence.
According to http://groups.google.ca/group/microsoft.public.win32.programmer.kernel/browse_thread/thread/4fded0ac7e4ecfb4?hl=en
It depends on how the driver writer implemented the device. The exact semantics of cancel on an operation are not defined to that level.
Either it doesn't matter because you are using overlapped I/O, or you can just call SetFilePointer manually once you know you've cancelled the I/O.
You don't have to rely on undocumented behavior if you just force the issue.
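A sketch in C of "forcing the issue": after CancelIo(), harvest the request's final status with GetOverlappedResult(), which also reports how many bytes were transferred before the cancellation took effect (hFile and ov are assumed to come from the earlier overlapped ReadFile call):

    #include <windows.h>

    CancelIo(hFile);   /* cancels this thread's pending I/O on hFile */

    DWORD transferred = 0;
    if (!GetOverlappedResult(hFile, &ov, &transferred, TRUE))
    {
        /* GetLastError() == ERROR_OPERATION_ABORTED if the cancel won the
           race; the request may also have completed normally first. */
    }
    /* Either way, 'transferred' bytes are valid in the buffer; reposition
       with SetFilePointer (or adjust the OVERLAPPED offsets) if you want
       to read them again. */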
