What does ERROR_INVALID_DATA (13) mean from ReadFile?

Our application is failing on some WinCE device. The log indicates that ReadFile failed for a file that is read-only and works fine in the desktop build, and GetLastError returns 13, which is ERROR_INVALID_DATA. What could that mean in this context? I have only seen this error mentioned for Heap32Next.

I've never hit this error myself, but here is a possible cause:
http://support.microsoft.com/kb/967335
"In Windows CE 5.0, the SD bus driver incorrectly calculates the memory capacity of Secure Digital (SD) cards as less than the actual memory capacity. Therefore, functions that read data from files whose positions exceed the incorrectly calculated memory capacity may behave incorrectly.
For example, if you use the ReadFile function to read a file that is stored in this area on an SD high capacity (SDHC) card, the ReadFile function returns 0. Therefore, the GetLastError function returns the following error:
ERROR_INVALID_DATA."
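
If you want to detect this case explicitly, the failure check is just the standard ReadFile pattern (a sketch; hFile is assumed to be an already-opened handle to the file):

#include <windows.h>

BYTE buffer[4096];
DWORD bytesRead = 0;

if (!ReadFile(hFile, buffer, sizeof(buffer), &bytesRead, NULL))
{
    if (GetLastError() == ERROR_INVALID_DATA)   /* error 13 */
    {
        /* on an affected CE 5.0 device this fires once the file
           position passes the miscalculated card capacity */
    }
}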

Related

Using SetFilePointer to change the location to write in the sector doesn't work?

I'm using SetFilePointer to rewrite the second half of the MBR. It's a user-mode application, and I opened a handle to PhysicalDrive.
At first I tried to set the size parameter in WriteFile to 256, but WriteFile failed with ERROR_INVALID_PARAMETER. Based on other questions here, it seems this is because writes must be a multiple of the sector size when the handle refers to a physical drive.
Then I tried to set the file pointer to 256 and write 512 bytes. Neither call returns an error, but for some unknown reason the data is written from the beginning of the sector, as if SetFilePointer had no effect, even though SetFilePointer succeeds and returns 256.
So my questions are:
1. Why does the write size have to be a multiple of the sector size when the handle is a physical drive? Which other device handles behave like this?
2. Why, when I set the file pointer to 256, does WriteFile still write from the start of the sector?
3. Isn't this really redundant? Even to change a single byte I have to read the entire sector, change the byte, and write the whole sector back instead of just writing one byte; that looks like ten times the overhead. Isn't there a faster way to write a few bytes within a sector?
I think you are mixing up the file system and the storage (block device). The file system sits above the storage device stack. If your code obtains a handle to a file-system device, you can write byte by byte. But if you are accessing the storage device stack directly, you can only write sector by sector (or block by block).
Directly writing to a block device is definitely slow, as you discovered. However, in most cases people just talk to file systems, and most file-system drivers maintain a cache and use read and write algorithms to improve performance.
I can't comment on the file-pointer-based offset without seeing the actual code, but my guess is that it is either not sector-aligned or not used at all.
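
To make that concrete, here is a rough read-modify-write sketch in C (error handling omitted; PhysicalDrive0, the 512-byte sector size, and the newBytes buffer are assumptions for illustration):

#include <windows.h>
#include <string.h>

#define SECTOR_SIZE 512   /* assumed; query IOCTL_DISK_GET_DRIVE_GEOMETRY in real code */

HANDLE hDrive = CreateFile(TEXT("\\\\.\\PhysicalDrive0"),
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);

BYTE sector[SECTOR_SIZE];
DWORD n;

/* both the offset and the transfer size must be sector-aligned */
SetFilePointer(hDrive, 0, NULL, FILE_BEGIN);
ReadFile(hDrive, sector, SECTOR_SIZE, &n, NULL);   /* read the whole sector */

memcpy(sector + 256, newBytes, 256);               /* patch the second half */

SetFilePointer(hDrive, 0, NULL, FILE_BEGIN);       /* seek back to the boundary */
WriteFile(hDrive, sector, SECTOR_SIZE, &n, NULL);  /* write the whole sector back */

The overhead you worry about in question 3 is mostly hidden in practice: when you go through a file system instead of the raw device, its cache does this read-modify-write for you.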

PCIe UIO multi-DWORD access issues

I have an Intel FPGA PCIe endpoint. It shows up correctly in lspci, and all of the lspci -vv information looks correct (memory map, IRQ, and BAR0 size all look OK). I want to stream some data over BAR0 and read/write status registers inside my IP. My host machine has an Intel x86_64 CPU and runs Debian.
What I'm currently doing is this:
1. open() /sys/class/uio/uio0/device/resource0, which returns a file descriptor.
2. mmap 4 KB on that file descriptor with PROT_READ | PROT_WRITE protection and the MAP_SHARED flag, offset 0, which returns a pointer.
3. In a loop, set offsets relative to the pointer to random numbers, ~1000 bytes in total, 4 bytes at a time. After each write through the pointer, call msync.
4. Read back through the pointer one DWORD at a time.
5. Read back through the pointer in bulk using memcpy.
The outcome of step #4 is that most of the data looks correct, but some of it is not (which is strange). The outcome of step #5 is that I get 0xffff_ffff for everything, which is even stranger.
If I try to replace step #3 with a single memset / msync sequence, the program hangs for a little while and then returns a bus error. After this, lspci states that BAR0 is disabled and I can no longer interact with it.
Any ideas what I'm doing wrong? It could be a hardware issue, but the hardware is really simple right now: I configured the FPGA to act purely as a slave device, with its read response being the registered write data coming across the BAR0 interface. The IP I am working with only has a read-response-valid line (no write-response-valid, somehow), which I have hard-coded to 1. It seems that burst sizes greater than 1 cause some kind of issue with the PCIe core, as seen in the memcpy/memset problems, but I don't see why that would be the case.
EDIT/UPDATE:
I was able to work around this. Supposedly the MMIO region only supports 32-bit accesses, so writing a loop that accesses the pointer 32 bits at a time is the solution here.
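For reference, a rough sketch of the workaround (pattern and readback are placeholder buffers; the uio0 path and 4 KB size are from my setup):

#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

int fd = open("/sys/class/uio/uio0/device/resource0", O_RDWR | O_SYNC);
volatile uint32_t *bar0 = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);

/* volatile forces the compiler to issue one 32-bit access per
   iteration instead of merging them into wider bursts */
for (size_t i = 0; i < 1024 / 4; i++)
    bar0[i] = pattern[i];      /* one 32-bit write TLP per register */

for (size_t i = 0; i < 1024 / 4; i++)
    readback[i] = bar0[i];     /* one 32-bit read TLP per register */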

Registering interrupt with irq from pci_irq_vector(9) function results in "No irq handler for this function"?

I am writing a device driver that services interrupts from the device. The device has only one MSI interrupt vector, so I look up the irq with pci_irq_vector(dev, 0), receive the irq, and register the interrupt handler. This is shown in the following snippet (equivalent to what I have, minus error handling):
retval = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_MSI); /* allocate exactly one MSI vector */
irq = pci_irq_vector(dev, 0);                           /* Linux irq number for vector 0 */
retval = request_irq(irq, irq_fnc, 0, "name", dev);     /* attach the handler */
This all completes successfully and without warnings (at least in dmesg). Yet when the interrupt comes in, I get this error:
kernel:do_IRQ: 0.xxx No irq handler for this vector (irq -1)
The xxx appears to be an arbitrary number that changes every time the driver is loaded, but it does not match the irq number. Instead, it matches the last two hex digits of the message data sent with the MSI interrupt, as read from the MSI capability structure. Trying to request an irq of this number returns EINVAL, which I think means it is not associated with any PCI device. What does this number mean, anyway?
Something that may be important to note: I am actually triggering this interrupt manually from the host side due to limitations of the device. I read the interrupt address and data from the capability structure, then instruct the device to write that data to that address.
How would I go about further debugging this? Does anything from my description stand out as suspicious? Any help would be appreciated.
Does this particular irq show up when you run cat /proc/interrupts? Maybe you can get the correct irq number from there, as well as other info such as where it is attached and what driver is associated with this interrupt line.
So the problem ended up being the order of operations. To trigger the interrupt manually, I had read the interrupt address and data from the config space before allocating the irq vectors. While obvious in retrospect, allocating the irq vectors for the device is what writes the appropriate data to the config space, so using the preexisting value in the message data field pointed at an irq vector that did not exist.
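In code, the corrected order looks roughly like this (simplified sketch; pdev and irq_fnc are the driver's own, and the data offset assumes a 32-bit MSI capability):

int msi_cap;
u32 addr;
u16 data;

/* 1. Allocate the vector FIRST: this programs the MSI address/data
 *    into the device's MSI capability structure. */
pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);

/* 2. Only now read back what the kernel programmed. */
msi_cap = pci_find_capability(pdev, PCI_CAP_ID_MSI);
pci_read_config_dword(pdev, msi_cap + PCI_MSI_ADDRESS_LO, &addr);
pci_read_config_word(pdev, msi_cap + PCI_MSI_DATA_32, &data);
/* use PCI_MSI_DATA_64 instead if the capability is 64-bit */

/* 3. Register the handler against the vector's Linux irq number. */
request_irq(pci_irq_vector(pdev, 0), irq_fnc, 0, "name", pdev);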

How to limit the memory that an application can allocate

I need a way to limit the amount of memory that a service may allocate in order to prevent the service from starving the system, similar to the way SQL Server allows you to set "Maximum server memory".
I know SetProcessWorkingSetSize doesn't do exactly what I want, but I'm trying to get it to behave the way I believe it should. Regardless of the values I use, my test app's working set is not limited. Furthermore, if I call GetProcessWorkingSetSize immediately afterwards, the values returned are not what I previously specified. Here's the code used by my test app:
var
  MinWorkingSet: SIZE_T;
  MaxWorkingSet: SIZE_T;
begin
  // both sizes are in bytes
  if not SetProcessWorkingSetSize(GetCurrentProcess(), 20, 12800) then
    RaiseLastOSError();
  if GetProcessWorkingSetSize(GetCurrentProcess(), MinWorkingSet, MaxWorkingSet) then
    ShowMessage(Format('%d'#13#10'%d', [MinWorkingSet, MaxWorkingSet]));
end;
No error occurs, but both the Min and Max values returned by GetProcessWorkingSetSize are 81,920.
I tried SetProcessWorkingSetSizeEx with QUOTA_LIMITS_HARDWS_MAX_ENABLE ($00000004) in the Flags parameter. Unfortunately, SetProcessWorkingSetSizeEx fails with "Code 87. The parameter is incorrect" if I pass anything other than $00000000 in Flags.
I've also pursued using Job Objects to accomplish the same goal. I have memory limits working with Job Objects when launching a child process. However, I need the ability for a service to set its own memory limits rather than depending on a "launching" service to do it. So far, I haven't found a way for a single process to create a job object and then add itself to the job object. This always fails with Access Denied.
Any thoughts or suggestions?
The documentation of the SetProcessWorkingSetSize function says:
dwMinimumWorkingSetSize [in]
...
This parameter must be greater than zero but less than or equal to the maximum working set size. The default size is 50 pages (for example, this is 204,800 bytes on systems with a 4K page size). If the value is greater than zero but less than 20 pages, the minimum value is set to 20 pages.
With a 4K page size, the imposed minimum value is 20 * 4096 = 81,920 bytes, which is the value you saw.
The values are specified in bytes.
To actually limit the memory of your service process, I think it's possible to create a new job (CreateJobObject), set the memory limit (SetInformationJobObject), and assign the current process to the job (AssignProcessToJobObject) in the service's startup routine, as sketched below.
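A rough C sketch of that approach (the 256 MB limit is just an example value):

#include <windows.h>

HANDLE job = CreateJobObject(NULL, NULL);     /* anonymous job */

JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {0};
limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
limits.ProcessMemoryLimit = 256 * 1024 * 1024;   /* cap committed memory at 256 MB */
SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                        &limits, sizeof(limits));

/* the process adds itself to its own job */
AssignProcessToJobObject(job, GetCurrentProcess());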
Unfortunately, on Windows versions before Windows 8 and Windows Server 2012, this won't work if the process already belongs to a job:
Windows 7, Windows Server 2008 R2, Windows XP with SP3, Windows Server 2008, Windows Vista and Windows Server 2003: The process must not already be assigned to a job; if it is, the function fails with ERROR_ACCESS_DENIED. This behavior changed starting in Windows 8 and Windows Server 2012.
If this is your case (i.e. you get ERROR_ACCESS_DENIED on older Windows), check whether the process is already assigned to a job (in which case you're out of luck), but also make sure it has the required access rights: PROCESS_SET_QUOTA and PROCESS_TERMINATE.
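IsProcessInJob can do that check; a minimal sketch:

#include <windows.h>

BOOL inJob = FALSE;
/* a NULL job handle asks "is the process in any job?" */
if (IsProcessInJob(GetCurrentProcess(), NULL, &inJob) && inJob)
{
    /* pre-Windows 8: AssignProcessToJobObject would fail with
       ERROR_ACCESS_DENIED, so a fallback is needed here */
}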

How to determine which blocks are allocated for a disk device?

A lot of modern storage devices use thin provisioning to allocate blocks. I need to get the block allocation map for a disk device. There is FSCTL_GET_VOLUME_BITMAP to get the volume bitmap, but it is file-system specific, and I need an approach that is not FS specific.
Starting in Windows 8, Windows sends "TRIM and Unmap" hints to storage media to track allocated blocks.
UNMAP is the SCSI command by which an application or the system can communicate to the storage stack and the disk that a certain sector or range of sectors is currently not in use, including sectors that were previously in use by files that were later deleted.
So this should be possible. Unfortunately, I was unable to find a Disk Management Control Code or Disk Management Function to get it. Maybe someone knows a way to get it?
As gubblebozer hinted, the GET LBA STATUS command introduced in SBC-3 is the way to retrieve the low-level mappings from the device itself. From Thin Provisioning:
The application can call the IOCTL DSM allocation routine to send the SCSI command and retrieve the mapped or unmapped state of each slab in a particular range. If the LBA provisioning status returned does not describe the entire allocation range, the application sends another SCSI command to retrieve the provisioning status of the remaining LBA range.
It looks like this can be done with IOCTL_STORAGE_MANAGE_DATA_SET_ATTRIBUTES; the DEVICE_DATA_SET_LB_PROVISIONING_STATE structure will then contain a bitmap of slab allocations.
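Pieced together from the documentation, the call might look roughly like this (untested sketch; hDisk is assumed to be an open handle to the physical disk, and the 1 GB range is just an example):

#include <windows.h>
#include <winioctl.h>

BYTE inBuf[64] = {0};   /* header plus one 8-byte-aligned range */
DEVICE_MANAGE_DATA_SET_ATTRIBUTES *attrs =
    (DEVICE_MANAGE_DATA_SET_ATTRIBUTES *)inBuf;
DWORD rangesOffset = (sizeof(*attrs) + 7) & ~(DWORD)7;
DEVICE_DATA_SET_RANGE *range =
    (DEVICE_DATA_SET_RANGE *)(inBuf + rangesOffset);

attrs->Size = sizeof(*attrs);
attrs->Action = DeviceDsmAction_Allocation;    /* GET LBA STATUS under the hood */
attrs->DataSetRangesOffset = rangesOffset;
attrs->DataSetRangesLength = sizeof(*range);
range->StartingOffset = 0;
range->LengthInBytes = 1024ULL * 1024 * 1024;  /* first 1 GB: example range */

BYTE outBuf[4096];                             /* output header + bitmap */
DWORD bytes;
DeviceIoControl(hDisk, IOCTL_STORAGE_MANAGE_DATA_SET_ATTRIBUTES,
                inBuf, sizeof(inBuf), outBuf, sizeof(outBuf), &bytes, NULL);

DEVICE_MANAGE_DATA_SET_ATTRIBUTES_OUTPUT *out =
    (DEVICE_MANAGE_DATA_SET_ATTRIBUTES_OUTPUT *)outBuf;
DEVICE_DATA_SET_LB_PROVISIONING_STATE *state =
    (DEVICE_DATA_SET_LB_PROVISIONING_STATE *)(outBuf + out->OutputBlockOffset);
/* state->SlabAllocationBitMap holds one bit per slab (1 = mapped) */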