BITMAPFILEHEADER.bfSize doesn't match the file size? - winapi

The documentation for the BITMAPFILEHEADER structure says bfSize is the size, in bytes, of the bitmap file.
But when I look at a .bmp file here on my drive, bfSize is actually 2 bytes smaller than the file size. Is that just a mistake, or does it not count the bfType field? I've seen other examples where it matches the file size.
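If it helps to check this against other files, here is a minimal sketch (the file name is a placeholder) that reads the header and prints bfSize next to the actual file size:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        HANDLE hFile = CreateFileA("image.bmp", GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile == INVALID_HANDLE_VALUE) return 1;

        BITMAPFILEHEADER header;   // 14 bytes: the header is packed to 2-byte alignment
        DWORD bytesRead = 0;
        ReadFile(hFile, &header, sizeof(header), &bytesRead, NULL);

        LARGE_INTEGER fileSize;
        GetFileSizeEx(hFile, &fileSize);

        printf("bfSize:    %lu\n", header.bfSize);
        printf("file size: %lld\n", fileSize.QuadPart);

        CloseHandle(hFile);
        return 0;
    }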

Related

WinAPI: set file size only and not physical size

When creating a file with zero size, I would like to set the logical size of the file to be bigger than zero.
OneDrive shows dehydrated files with a "Size on disk" of zero bytes, while "Size" shows a value bigger than zero.
Can this behavior be achieved through Windows API functions?
Thanks.
Sample OneDrive properties window for a file:
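OneDrive placeholders are implemented with the Cloud Files API, but on NTFS a sparse file produces the same kind of readout (nonzero Size, zero Size on disk). A minimal sketch, assuming an NTFS volume and a placeholder file name:

    #include <windows.h>
    #include <winioctl.h>

    int main()
    {
        HANDLE hFile = CreateFileA("sparse.dat", GENERIC_WRITE, 0, NULL,
                                   CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile == INVALID_HANDLE_VALUE) return 1;

        // Mark the file as sparse so unwritten ranges allocate no clusters.
        DWORD returned = 0;
        DeviceIoControl(hFile, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &returned, NULL);

        // Move the logical end of file to 1 MiB without writing any data.
        LARGE_INTEGER size;
        size.QuadPart = 1024 * 1024;
        SetFilePointerEx(hFile, size, NULL, FILE_BEGIN);
        SetEndOfFile(hFile);

        CloseHandle(hFile);
        return 0;
    }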

What is the maximum PE file header size?

Does anyone know the maximum PE file header size? I can only find information on the maximum PE file size.
The PE header is usually about 500 bytes (it might be slightly longer if you define a lot of sections, but you would have real trouble going over 1024, I guess).
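One way to see the actual figure for a given binary is to read SizeOfHeaders from the optional header, which covers the DOS stub, the PE headers and the section table, rounded up to FileAlignment. A minimal sketch, with a placeholder path and without the signature checks a robust reader would do:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        FILE* f = fopen("program.exe", "rb");
        if (!f) return 1;

        IMAGE_DOS_HEADER dos;
        fread(&dos, sizeof(dos), 1, f);

        IMAGE_NT_HEADERS nt;
        fseek(f, dos.e_lfanew, SEEK_SET);  // e_lfanew points at the PE signature
        fread(&nt, sizeof(nt), 1, f);

        // SizeOfHeaders = DOS stub + PE headers + section table,
        // rounded up to FileAlignment. Same offset in 32- and 64-bit headers.
        printf("SizeOfHeaders: %lu\n", nt.OptionalHeader.SizeOfHeaders);

        fclose(f);
        return 0;
    }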

Files take up more space on the disk

When viewing details of a file using Finder, different values are shown for how much space the file occupies. For example, a file is 28.8 KB in size but takes up 33 KB on disk. Anyone know the explanation?
Disk space is allocated in blocks. Meaning, in multiples of a "block size".
For example, on my system a 1-byte file is 4096 bytes on disk.
That's 1 byte of content and 4095 bytes of unused space.
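The arithmetic is just rounding up to the next block boundary; a tiny sketch, assuming 4096-byte blocks:

    #include <cstdio>
    #include <cstdint>

    int main()
    {
        const uint64_t blockSize = 4096;  // assumption: 4 KiB allocation blocks
        const uint64_t fileSize  = 1;     // 1 byte of content

        // Size on disk = file size rounded up to the next block boundary.
        uint64_t sizeOnDisk = (fileSize + blockSize - 1) / blockSize * blockSize;

        printf("%llu bytes on disk, %llu bytes unused\n",
               (unsigned long long)sizeOnDisk,
               (unsigned long long)(sizeOnDisk - fileSize));
        return 0;
    }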

Can the USN Journal of the NTFS file system be bigger than its declared size?

Hello fellow programmers.
I'm trying to dump the contents of the USN Journal of an NTFS partition using WinIoCtl functions. I have the *USN_JOURNAL_DATA* structure, which tells me that the journal has a maximum size of 512 MB. I have compared that to what fsutil has to say about it, and it's the same value.
Now I have to read each entry into a *USN_RECORD* structure. I do this in a for loop that starts at 0 and goes to the journal's maximum size in increments of 4096 (the cluster size).
I read each 4096 bytes in a buffer of the same size and read all the USN_RECORD structures from it.
Everything is going great: file names are correct, timestamps as well, reasons, everything, except that I seem to be missing some recent records. I create a new file on the partition, write something to it and then delete it. I run the app again and the record doesn't appear. I find that the record appears only if I keep reading beyond the journal's maximum size. How can that be?
At the moment I'm reading from the start of the journal's data to the maximum size plus the allocation delta (both values are stored in the *USN_JOURNAL_DATA* structure), which I don't believe is correct, and I'm having trouble finding thorough information on this.
Can someone please explain this? Is there a buffer around the USN Journal that's similar to how the MFT works (meaning its size is halved when disk space is needed for other files)?
What am I doing wrong?
That's the expected behaviour, as documented:
MaximumSize
The target maximum size for the change journal, in bytes. The change journal can grow larger than this value, but it is then truncated at the next NTFS file system checkpoint to less than this value.
Instead of trying to predetermine the size, loop until you reach the end of the data.
If you are using the FSCTL_ENUM_USN_DATA control code, you have reached the end of the data when the error code from DeviceIoControl is ERROR_HANDLE_EOF.
If you are using the FSCTL_READ_USN_JOURNAL control code, you have reached the end of the data when the next USN returned by the driver (the DWORDLONG at the beginning of the output buffer) is the USN you requested (the value of StartUsn in the input buffer). You will need to set the input parameter BytesToWaitFor to zero, otherwise the driver will wait for the specified amount of new data to be added to the journal.
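A minimal sketch of that FSCTL_READ_USN_JOURNAL loop, assuming hVolume is an already-opened volume handle and journalId was obtained beforehand via FSCTL_QUERY_USN_JOURNAL:

    #include <windows.h>
    #include <winioctl.h>
    #include <cstdio>

    // journalId comes from USN_JOURNAL_DATA.UsnJournalID.
    void DumpJournal(HANDLE hVolume, DWORDLONG journalId)
    {
        READ_USN_JOURNAL_DATA_V0 in = {};
        in.StartUsn = 0;
        in.ReasonMask = 0xFFFFFFFF;   // report every change reason
        in.BytesToWaitFor = 0;        // return immediately at the end of the data
        in.UsnJournalID = journalId;

        alignas(USN) BYTE buffer[4096];
        DWORD returned = 0;

        while (DeviceIoControl(hVolume, FSCTL_READ_USN_JOURNAL, &in, sizeof(in),
                               buffer, sizeof(buffer), &returned, NULL))
        {
            // The output buffer starts with the next USN to request.
            USN next = *reinterpret_cast<USN*>(buffer);
            if (next == in.StartUsn)
                break;                // end of data: the driver echoed our USN back

            // USN_RECORD entries follow the leading USN.
            DWORD offset = sizeof(USN);
            while (offset < returned) {
                USN_RECORD* rec = reinterpret_cast<USN_RECORD*>(buffer + offset);
                wprintf(L"USN %lld: %.*s\n", rec->Usn,
                        (int)(rec->FileNameLength / sizeof(WCHAR)),
                        (WCHAR*)((BYTE*)rec + rec->FileNameOffset));
                offset += rec->RecordLength;
            }
            in.StartUsn = next;       // continue where the driver left off
        }
    }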

Is there a size limit on a text file? [duplicate]

Duplicate of: Is there an upper limit on .txt file size?
What is the limit to how much you can write to a text file? Any help would be appreciated.
There is no limit other than the size of your disk and your file system's maximum file size.
For example, file size limits:
NTFS: 16 TiB - 64 KiB
ext4: 16 TiB
FAT32: 4 GiB - 1 byte
On disk, there is no difference between a text file and any other type of file. They all just store bytes of data.
The only conceptual difference between writing to a binary file and writing to a text file is that in text mode a \n character may be translated into the \r\n sequence, or some other platform-specific line ending.
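A small sketch of the difference, using the C stdio text/binary modes (the file names are placeholders); on Windows the text-mode file ends up one byte larger per line:

    #include <cstdio>

    int main()
    {
        // Text mode: on Windows the CRT translates '\n' to "\r\n" on write.
        FILE* text = fopen("text.txt", "w");
        fputs("line\n", text);    // 6 bytes on disk on Windows, 5 elsewhere
        fclose(text);

        // Binary mode: bytes are written exactly as given.
        FILE* binary = fopen("binary.txt", "wb");
        fputs("line\n", binary);  // always 5 bytes on disk
        fclose(binary);
        return 0;
    }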
