What calls best emulate pread/pwrite in MSVC 10?
At the C runtime library level, look at fread, fwrite and fseek.
At the Win32 API level, have a look at ReadFile, WriteFile, and SetFilePointer. MSDN has extensive coverage of the file I/O APIs.
Note that both ReadFile and WriteFile take an OVERLAPPED struct argument, which lets you specify a file offset. The offset is respected for all files that support byte offsets, even when opened for synchronous (i.e. non 'overlapped') I/O.
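For example, a minimal sketch of a pread()-style wrapper built on that (the helper name and signature are mine, not part of Win32):

#include <windows.h>

// Read 'count' bytes at an explicit file offset carried in the OVERLAPPED.
// Caveat: on a handle opened for synchronous I/O, the call still advances
// the file pointer afterwards, unlike POSIX pread().
BOOL ReadFileAt(HANDLE hFile, void* buffer, DWORD count,
                ULONGLONG offset, DWORD* bytesRead)
{
    OVERLAPPED ov = {};
    ov.Offset     = (DWORD)(offset & 0xFFFFFFFFu);
    ov.OffsetHigh = (DWORD)(offset >> 32);
    return ReadFile(hFile, buffer, count, bytesRead, &ov);
}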
Depending on the problem you are trying to solve, file mapping may be a better design choice.
It looks like you just use the lpOverlapped parameter to ReadFile/WriteFile to pass a pointer to an OVERLAPPED structure with the offset specified in Offset and OffsetHigh.
(Note: You don't actually get overlapping IO unless the handle was opened with FILE_FLAG_OVERLAPPED.)
The answer provided by Oren is correct, but it didn't quite cover what I needed. I too came here searching for the answer and couldn't find it, so I will expand a bit here.
As said,
At the C runtime library level, there are fread, fwrite and fseek.
At the Win32 API level, there are two levels of abstraction: a lower one that works with file descriptors, and a higher one that works with Windows-defined types such as Files and Handles.
If you wish to work with Files and Handles, you have ReadFile, WriteFile, and SetFilePointer. But most of the time, C++ developers prefer working with file descriptors; for that, you have _read, _write and _lseek.
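For instance, a pread()-style read over a file descriptor could be sketched like this (the helper name is mine; unlike POSIX pread, this moves the file position and is not atomic with respect to other threads using the same descriptor):

#include <io.h>
#include <stdio.h>  // SEEK_SET

int pread_crt(int fd, void* buf, unsigned count, __int64 offset)
{
    if (_lseeki64(fd, offset, SEEK_SET) == -1)
        return -1;                 // seek failed
    return _read(fd, buf, count);  // bytes read, or -1 on error
}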
Related
Just as the title says, I am writing a networking program where I open a handle to a network driver using CreateFile, and I have been experimenting with the NO_BUFFERING flag.
Most documentation won't even mention this flag being used with communication devices, and the sources that do (the MSDN reference, etc.) simply mention that you can.
Does anyone have any idea how this may affect communication with the device?
It is a device driver implementation detail; the options you specify in the CreateFile() call are passed in the IRP_MJ_CREATE request. The one I linked is the one for file systems, a very fancy one. Click through the IrpSp->Parameters.Create.Options link to IoCreateFileSpecifyDeviceObjectHint()'s Options argument to see FILE_NO_INTERMEDIATE_BUFFERING.
The documentation for the IRP_MJ_CREATE request for serial ports is here. A very simple one, no arguments at all :) In general, the winapi-to-device-driver interface for communication ports is very straightforward. There's an (almost) direct mapping between each documented winapi function and its underlying IOCTL. The winapi function doesn't do much beyond basic error checking, then quickly passes the job to the driver.
So there isn't any way to pass along the FILE_FLAG_NO_BUFFERING option you specify; it simply doesn't get used.
Otherwise the logical conclusion: serial port I/O is interrupt driven, and the driver must buffer in order not to lose bytes and to keep an acceptable transfer rate. You can technically tinker with the buffer sizes through SetupComm() but, as documented, it is only a recommendation, with pretty high odds that the driver simply ignores very low values.
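For reference, a minimal sketch of that SetupComm() call (the device name "COM1" and the queue sizes are just example values):

#include <windows.h>

int main()
{
    HANDLE h = CreateFileW(L"\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;
    // Only a recommendation; the driver may round or ignore these sizes.
    SetupComm(h, 4096, 4096);
    CloseHandle(h);
    return 0;
}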
What is the purpose of this flag (from the OS side)?
Which functions use this flag besides IsDebuggerPresent?
thanks a lot
It's effectively the same, but reading the PEB doesn't require a trip through kernel mode.
More explicitly, the IsDebuggerPresent API is documented and stable; the PEB structure is not, and could, conceivably, change across versions.
Also, the IsDebuggerPresent API (or flag) only checks for user-mode debuggers; kernel debuggers aren't detected via this function.
Why put it in the PEB? It saves some time, which was more important in early versions of NT. (There are a bunch of user-mode functions that check this flag before doing some runtime validation, and will break to the debugger if set.)
If you change the PEB field to 0, then IsDebuggerPresent will also return 0, although I believe that CheckRemoteDebuggerPresent will not.
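To illustrate the relationship, here is a sketch for x64 MSVC only; the gs:[0x60] PEB pointer and the BeingDebugged byte at offset 2 are undocumented layout details that merely happen to hold on current Windows versions:

#include <windows.h>
#include <intrin.h>
#include <stdio.h>

int main()
{
    // On x64, gs:[0x60] holds the PEB pointer; BeingDebugged is at offset 2.
    unsigned char* peb = (unsigned char*)__readgsqword(0x60);
    printf("PEB BeingDebugged: %d\n", peb[2]);
    printf("IsDebuggerPresent: %d\n", IsDebuggerPresent());

    peb[2] = 0;  // zero the flag in our own process space...
    printf("IsDebuggerPresent now: %d\n", IsDebuggerPresent());  // ...0
    return 0;
}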
As you have found, IsDebuggerPresent reads this flag from the PEB. As far as I know, the PEB structure is not an official API, but IsDebuggerPresent is, so you should stick to that layer.
The uses of this method are quite limited if you are after copy protection to prevent debugging of your app. As you have found, it is only a flag in your process space. If somebody debugs your application, all he needs to do is zero out the flag in the PEB and let your app run.
You can raise the bar by using CheckRemoteDebuggerPresent, where you pass in your own process handle to get an answer. This method goes into the kernel and checks for the existence of a special debug structure that is associated with your process if it is being debugged. A user-mode process cannot fake this one, but you know there are always ways around it, such as simply removing your check...
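A minimal sketch of that check:

#include <windows.h>
#include <stdio.h>

int main()
{
    BOOL debugged = FALSE;
    // Goes through the kernel, which looks for a debug object attached
    // to this process; a user-mode process cannot fake the answer.
    if (CheckRemoteDebuggerPresent(GetCurrentProcess(), &debugged))
        printf("being debugged: %d\n", debugged);
    return 0;
}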
How can I increment the refcount of the HMODULE returned by GetModuleHandle? Can I DuplicateHandle, or do I need to jump through hoops and retrieve the module's path, then LoadLibrary on that path? In short, I want to emulate GetModuleHandleEx without using that function (which is XP+).
You cannot use DuplicateHandle() on a HMODULE. The MSDN Library article lists the kinds of handles that DuplicateHandle will accept in its Remarks section; a module handle is not one of them.
One reason for this is that a HMODULE is not actually a handle at all; it is a pseudo handle. There's history behind this: back in the 16-bit versions of Windows they actually were handles, but that disappeared in the 32-bit version. They are now simply the address at which the module is loaded in memory. One pretty standard trick to convert a code address to a module handle is to use VirtualQuery() and cast the returned MEMORY_BASIC_INFORMATION.AllocationBase to (HMODULE). Very handy sometimes.
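A sketch of that trick (the helper name is mine):

#include <windows.h>

// Map a code address to the module that contains it. AllocationBase is
// the base of the whole module mapping, i.e. the HMODULE.
HMODULE ModuleFromAddress(const void* addr)
{
    MEMORY_BASIC_INFORMATION mbi = {};
    if (VirtualQuery(addr, &mbi, sizeof(mbi)) == 0)
        return NULL;
    return (HMODULE)mbi.AllocationBase;
}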
Yes, the only other way to increment the reference count is to use LoadLibrary().
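So the emulation boils down to a round-trip through the module's path, roughly (the helper name is mine):

#include <windows.h>

HMODULE AddModuleReference(HMODULE hMod)
{
    wchar_t path[MAX_PATH];
    if (GetModuleFileNameW(hMod, path, MAX_PATH) == 0)
        return NULL;
    // Increments the module's reference count; pair each success with a
    // later FreeLibrary().
    return LoadLibraryW(path);
}

Note that this is racy: the module could be unloaded between the two calls, which is exactly the gap GetModuleHandleEx was added to close.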
Well, I'm working on a project in which I'm handling potentially big files that I can't load into RAM all at once, so I'm going to treat them like a CHS hard drive and grab the data one 0x800-byte chunk at a time.
My problem is, I cannot find any functions in the WINAPI that allow me to read the data from a file I've opened with CreateFile, starting at an offset.
And yes, it must be a WINAPI function, and no, I do not want to map the whole file into memory.
Thanks much, Bradley.
Use ReadFile with SetFilePointer
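For example, a sketch of reading one 0x800-byte chunk at a given offset (the helper name is mine; SetFilePointerEx is used here for a clean 64-bit offset, but plain SetFilePointer works too):

#include <windows.h>

BOOL ReadChunkAt(HANDLE hFile, LONGLONG offset, void* chunk, DWORD* bytesRead)
{
    LARGE_INTEGER li;
    li.QuadPart = offset;
    if (!SetFilePointerEx(hFile, li, NULL, FILE_BEGIN))
        return FALSE;
    return ReadFile(hFile, chunk, 0x800, bytesRead, NULL);
}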
We have an older, massive C++ application that we have been converting to support Unicode as well as 64 bits. The following strange thing has been happening:
Calls to registry functions and windows creation functions, like the following, have been failing:
hWnd = CreateSysWindowExW(ExStyle, ClassNameW.StringW(), Label2.StringW(), Style,
                          Posn.X(), Posn.Y(),
                          Size.X(), Size.Y(),
                          hParentWnd, (HMENU)Id,
                          AppInstance(), NULL);
ClassNameW and Label2 are instances of our own Text class which essentially uses malloc to allocate the memory used to store the string.
Anyway, when the functions fail and I call GetLastError, it returns the error code for "invalid memory access" (though I can inspect the string arguments fine in the debugger). Yet if I change the code as follows, then it works perfectly fine:
BSTR Label2S = SysAllocString(Label2.StringW());
BSTR ClassNameWS = SysAllocString(ClassNameW.StringW());
hWnd = CreateSysWindowExW(ExStyle, ClassNameWS, Label2S, Style,
                          Posn.X(), Posn.Y(),
                          Size.X(), Size.Y(),
                          hParentWnd, (HMENU)Id,
                          AppInstance(), NULL);
SysFreeString(ClassNameWS); ClassNameWS = 0;
SysFreeString(Label2S); Label2S = 0;
So what gives? Why would the original functions work fine with the arguments in local memory, but once converted to Unicode the registry functions require SysAllocString'd arguments, and once converted to 64-bit the window creation functions require them too? Our window procedure functions have all been converted to be Unicode, always, and yes, we use SetWindowLongW and call the correct default Unicode DefWindowProcW, etc. All of that seems to work fine and handles and draws Unicode properly.
The documentation at http://msdn.microsoft.com/en-us/library/ms632679%28v=vs.85%29.aspx does not say anything about this. While our application is massive, we do use debug heaps and tools like Purify to check for and clean up any memory corruption. Also, at the time of this failure there is still only one main system thread, so it is not a threading issue.
So what is going on? I have read that if string arguments are marshalled anywhere or passed across process boundaries, then you have to use SysAllocString/BSTR, yet we call lots of API functions, and there is lots of code out there that calls these functions with plain local strings.
What am I missing? I have tried Googling this, as someone else must have run into this, but with little luck.
Edit 1: Our StringW function does not create any temporary objects which might go out of scope before the actual API call. The function is as follows:
class Text {
public:
    const wchar_t* StringW() const
    {
        return TextStartW;
    }

    wchar_t* TextStartW;  // pointer to current start of text in DataArea
};
I have been running our application with the debug heap and memory checking and other diagnostic tools, and found no source of memory corruption, and looking at the assembly, there is no sign of temporary objects or invalid memory access.
BUT I finally figured it out:
We compile our code with /Zp1, which means byte-aligned structure packing. SysAllocString (in 64-bit builds) always returns a pointer that is aligned on an 8-byte boundary. Presumably a 32-bit ANSI C++ application goes through a conversion layer to the underlying Unicode Windows DLLs, which would also align the pointer for you.
But if you call the Unicode APIs directly, you do not get that incidental pointer alignment that the conversion layer gives you, and if you build for 64-bit, of course the situation gets even worse.
I added a method to our Text class that shifts the string pointer so that it is aligned on an eight-byte boundary, and voilà, everything runs fine!
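For the record, the rounding involved looks something like this (a hypothetical helper, not our actual Text method):

#include <cstdint>

// Round a pointer up to the next 8-byte boundary. The underlying buffer
// must be over-allocated by at least 7 bytes to leave room for the shift.
inline wchar_t* AlignUp8(wchar_t* p)
{
    return (wchar_t*)(((uintptr_t)p + 7) & ~(uintptr_t)7);
}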
Of course the Microsoft people say it must be memory corruption and that I am jumping to the wrong conclusion, but there is evidence that this is not the case.
Also, if you use /Zp1 and include windows.h in a 64-bit application, the debugger will tell you sizeof(BITMAP) == 28, but calling GetObject on a bitmap will fail and tell you it needs a 32-byte structure. So I suspect that some of Microsoft's APIs are inherently dependent on aligned data, and I also know that some optimized assembly (I have seen some from Fortran compilers) takes advantage of alignment and crashes badly if you ever hand it unaligned pointers.
So the moral of all of this is: don't use "funky" compiler arguments like /Zp1. In our case we have to, for historical reasons, but the number of times this has bitten us...
Using a bit of psychic debugging, I'm going to guess that the strings in your application are pooled in a read-only section.
It's possible that CreateSysWindowExW is attempting to write to the memory passed in for the window class or title. That would explain why the calls work when the strings are allocated on the heap (SysAllocString) but not when they are used as constants.
The easiest way to investigate this is to use a low-level debugger like windbg; it should break in at the point where the access violation occurs, which should help figure out the problem. Don't use Visual Studio, it has a nasty habit of being helpful and hiding first-chance exceptions.
Another thing to try is to enable appverifier on your application - it's possible that it may show something.
Calling a Windows API function does not cross the process boundary, since the various Windows DLLs are loaded into your process.
It sounds like whatever pointer StringW() is returning isn't valid when Windows tries to access it. I would look there: is it possible that the returned pointer goes out of scope and is deleted shortly after the call?
If you share some more details about your string class, that could help diagnose the problem here.