Creating a virtual disk with arbitrary size - Windows

I want to do some "experimentation" to learn about file systems, starting with FAT16.
The idea is to use a C++ program to manipulate a disk at the byte level and then see how it is read by Windows.
In short: format the disk to FAT16, create files, create directories, rename files, delete files, delete directories, change the properties of files, and see what happens if I tamper with the sector numbers of files, etc. It will all use the Win32 ReadFile and WriteFile functions from C++.
Having a "virtual disk" will make things considerably easier, as no hardware will be corrupted and the disk can be "reset" easily.
Yes, I am an electronics engineer, so I have to work close to the hardware.
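For illustration, a minimal C++ sketch of the idea - creating a fixed-size image file to stand in for the disk, then writing sector 0 through the same WriteFile path the experiments would use (the file name and the 32 MiB size are arbitrary choices, not anything Windows requires):

    #include <windows.h>
    #include <cstdint>

    int main()
    {
        // Create the "virtual disk": an ordinary file of a fixed size.
        HANDLE disk = CreateFileA("fat16.img", GENERIC_READ | GENERIC_WRITE,
                                  0, nullptr, CREATE_ALWAYS,
                                  FILE_ATTRIBUTE_NORMAL, nullptr);
        if (disk == INVALID_HANDLE_VALUE) return 1;

        // Fix the disk size at 32 MiB (65536 sectors of 512 bytes).
        LARGE_INTEGER size; size.QuadPart = 32LL * 1024 * 1024;
        SetFilePointerEx(disk, size, nullptr, FILE_BEGIN);
        SetEndOfFile(disk);

        // Write a 512-byte buffer to sector 0 (the boot sector).
        uint8_t sector[512] = {};
        sector[510] = 0x55; sector[511] = 0xAA;   // boot-sector signature
        LARGE_INTEGER zero = {};
        SetFilePointerEx(disk, zero, nullptr, FILE_BEGIN);
        DWORD written = 0;
        WriteFile(disk, sector, sizeof sector, &written, nullptr);

        CloseHandle(disk);
        return 0;
    }

Once formatted, a raw image like this can be mounted with a third-party tool such as ImDisk or OSFMount; alternatively, use a fixed-size VHD, which Windows can attach natively through Disk Management.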

Related

read/write to a disk without a file system

I would like to know if anybody has experience writing data directly to disk without a file system, in a similar way to how data is written to a magnetic tape. In particular, I would like to know if/how data is written in blocks, whether a certain block size needs to be specified (as it does when writing to tape), and whether there is a disk equivalent of a tape file mark, which separates the archives written to a tape.
We are creating a digital archive for over 1 PB of data, and we want redundancy built in to the system in as many levels as possible (by storing multiple copies using different storage media, and storage formats). Our current system works with tapes, and we have a mechanism for storing the block offset of each archive on each tape so we can restore it.
We'd like to extend the system to work with disk volumes without having to change much of the logic. Another advantage of not having a file system is that the solution would be portable across operating systems.
Note that the ability to browse the files on disk is not important in this application, since we are considering this for an archival copy of data which is not accessed independently. Also note that we would already have an index of the files stored in the application database, which we also write to the end of the tape/disk when it is almost full.
EDIT 27/05/2020: It seems that accessing the disk device as a raw/character device is what I'm looking for.
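As a rough sketch of what that raw access looks like (assuming a Linux block device at /dev/sdb - the path is illustrative): the driver enforces no tape-style file marks, so archive boundaries must be recorded externally, e.g. in the application database mentioned above, and with O_DIRECT the offsets and buffer sizes must be multiples of the device block size.

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstring>

    int main()
    {
        const size_t kBlock = 512;             // device logical block size
        int fd = open("/dev/sdb", O_WRONLY);   // illustrative raw device
        if (fd < 0) return 1;

        // Build one block; the leading tag plays the role of a tape
        // file mark, since the disk itself has no such concept.
        char block[kBlock];
        std::memset(block, 0, sizeof block);
        std::memcpy(block, "ARCHIVE-0001", 12);

        // Write at a block-aligned offset; offsets are the disk
        // analogue of tape positions (use n * kBlock for block n).
        pwrite(fd, block, sizeof block, 0);

        close(fd);
        return 0;
    }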

Can I create a file in Windows that only exists in memory - and if so, how?

This question is not a duplicate of any of these existing questions:
How can I store an object File that only exists in memory as a file inside of my storage system? - This question is not about Java's File API.
Temp file that exists only in RAM? - This is close to what I'm asking, except the OP isn't asking how to create files from memory for the purpose of passing them to child processes.
I'm not asking about Win32's memory-mapped files either - they're essentially the opposite of what I'm after: a memory-mapped file is a file on disk that's mapped into a process's virtual memory space, whereas what I want is a file that exists in the OS's filesystem (but not on the disk's physical filesystem), like a mount point, whose data is mapped to an existing buffer in memory.
I.e., with memory-mapped files, reading/writing a byte at a particular buffer address and offset in memory will read/modify the byte at the same offset from the start of the file - but the file physically exists on disk, which isn't what I want.
To elaborate and to provide context:
I have an ASP.NET Core server-side application that receives request streams sized between 1 and 10MB on a regular basis. This program will run only on Windows / Windows Server, so using Windows-specific functionality is fine.
75% of the time my application just reads through these streams by itself and that's it.
But a minority of the time it needs to have separate applications read the data, which it launches with Process.Start, passing a file name as a command-line argument.
It passes the data to these separate applications by saving the stream to a temporary file on disk and passing that file's name.
Unfortunately it can't write the content to the child process's stdin, because some of those programs expect a file on disk rather than reading from stdin.
Additionally, while the machine it's running on has lots of RAM (so keeping the streams buffered in-memory is fine) it has slow spinning-rust HDDs, which is further reason to avoid temporary files on-disk.
I'd like to avoid unnecessary buffering and copies - ideally I'd like to stream the entire 1-10MB request into a single in-memory buffer, and then expose that same buffer to other processes and use that same buffer as the backing for a temporary file.
If I were on Linux, I could use tmpfs - though it isn't perfect:
To my knowledge, an existing process can't instruct the OS to take an existing region of its virtual memory and map a tmpfs file onto that region; instead, tmpfs still requires that the file be populated by writing (i.e. copying) all of the data through its file descriptor - which is counter to the aim of having a zero-copy system.
Windows' built-in RAM-disk functionality is limited to providing the basis for a RAM-disk implementation via a third-party device driver - I'm surprised that Microsoft never shipped Windows with a built-in RAM-disk GUI or API, especially given their relative simplicity.
The ImDisk program is an implementation of a RAM-disk using Microsoft's RAM-disk driver platform. But as far as I can tell, while it's more like tmpfs in that it can create a file that exists only in memory, it doesn't allow the file's data to be backed by a buffer directly accessible to a running process (or by a shared-memory buffer).
CreateFileMapping with hFile = INVALID_HANDLE_VALUE "creates a file mapping object of a specified size that is backed by the system paging file instead of by a file in the file system".
From Raymond Chen's The source of much confusion: “backed by the system paging file”:
In other words, “backed by the system paging file” just means “handled like regular virtual memory.”
If the memory is freed before it ever gets paged out, then it will never get written to the system paging file.
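To make that concrete, a minimal Win32 sketch of the pagefile-backed mapping (the section name Local\MyRequestBuffer is made up; .NET reaches the same object via MemoryMappedFile.CreateNew). Note that this yields a shared-memory object a cooperating process can open, not a filesystem path that an unmodified child process can hand to CreateFile - which is exactly the gap the question is about:

    #include <windows.h>

    int main()
    {
        const DWORD kSize = 10 * 1024 * 1024;   // 10 MiB upper bound

        // INVALID_HANDLE_VALUE = backed by the paging file, i.e.
        // ordinary virtual memory, per Raymond Chen's article above.
        HANDLE section = CreateFileMappingA(
            INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
            0, kSize, "Local\\MyRequestBuffer");
        if (!section) return 1;

        void* view = MapViewOfFile(section, FILE_MAP_ALL_ACCESS, 0, 0, kSize);
        // ... stream the request body into 'view'; another process can
        // map the same bytes via OpenFileMappingA on the same name ...

        UnmapViewOfFile(view);
        CloseHandle(section);
        return 0;
    }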

Writing to /dev/loop USB image?

I've got an image that I write onto a bootable USB stick, and I need to tweak it. I've managed to mount the stick as /dev/loopX, including allowing for the partition start offset, and I can read files from it. However, writing back 'seems to work' (no errors reported), but after writing the resulting tweaked image to a USB drive, I can no longer read the tweaked files correctly.
The file that fails is large and is also a compressed tar file.
Is writing back in this manner simply a 'no-no', or is there some way to make this work?
If possible, I don't want to reformat the partition and rewrite it from scratch, because that will (I assume) change the UUID, and then I'd need to worry about the boot partition, etc.
I believe I have the answer. When using losetup to create a writable virtual device from the partition on your USB drive, you must specify the --sizelimit parameter as well as the offset parameter. If you don't, the resulting writes can run past the last defined sector of the partition (presumably this requires your USB drive to have extra space), and Linux reports no errors until later, when you try to read. The key hints/evidence: when reads of the (re)written data fail, dmesg shows errors from attempts to read past the end of the drive, and fsck tools such as dosfsck report that the drive claims to be larger than it is.
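In command form that is, e.g., losetup --offset 1048576 --sizelimit 104857600 /dev/loop0 usb.img. For reference, a hedged C++ sketch of what those two flags set under the hood via the Linux loop ioctls (device path, offset, and size are illustrative):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/loop.h>
    #include <cstring>

    int main()
    {
        int backing = open("usb.img", O_RDWR);      // the image file
        int loopfd  = open("/dev/loop0", O_RDWR);   // a free loop device
        if (backing < 0 || loopfd < 0) return 1;

        // Attach the image to the loop device.
        if (ioctl(loopfd, LOOP_SET_FD, backing) < 0) return 1;

        // Confine the device to exactly the partition's byte range;
        // lo_sizelimit is what --sizelimit supplies.
        struct loop_info64 info;
        std::memset(&info, 0, sizeof info);
        info.lo_offset    = 1048576;       // partition start (bytes)
        info.lo_sizelimit = 104857600;     // partition length (bytes)
        if (ioctl(loopfd, LOOP_SET_STATUS64, &info) < 0) return 1;

        // /dev/loop0 now maps only the partition, so writes cannot
        // run past its end.
        close(loopfd);
        close(backing);
        return 0;
    }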

Efficient way to send files across processes

How can I efficiently send a file from my own process to a program such as Photoshop, Word, or Paint?
I do not want to save the whole file to disk and then launch the program with the path in its startup parameters using CreateProcess, ShellExecute, etc.
Maybe the only way out is memory-mapped files?
Or should I look at COM, IPC, or pipes?
You cannot tell these programs that your file data is actually a memory-mapped file. That really doesn't matter: files are already memory-mapped by default, and more efficiently than with an MMF, since the file data is cached in RAM and doesn't occupy space in the paging file.
The file system cache takes care of that. Think of it as a large RAM disk without actually having to pay for the RAM. This works so well that there has never been a need for these programs to do anything other than accept their input from a file.
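A sketch of what this answer implies - just write a temporary file and hand its path to the target program, letting the cache keep the bytes in RAM (the mspaint.exe target is only an example); FILE_ATTRIBUTE_TEMPORARY additionally hints to the cache manager that it should avoid flushing the data to disk if it can:

    #include <windows.h>
    #include <shellapi.h>

    int main()
    {
        char dir[MAX_PATH], path[MAX_PATH];
        GetTempPathA(MAX_PATH, dir);
        GetTempFileNameA(dir, "xfr", 0, path);

        // TEMPORARY attribute: cache aggressively, write back lazily.
        HANDLE f = CreateFileA(path, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY,
                               nullptr);
        const char data[] = "payload";          // your file contents
        DWORD written = 0;
        WriteFile(f, data, sizeof data - 1, &written, nullptr);
        CloseHandle(f);

        // Hand the (cached) file to the viewer by path, as usual.
        ShellExecuteA(nullptr, "open", "mspaint.exe", path,
                      nullptr, SW_SHOWNORMAL);
        return 0;
    }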

How can I find information about a file from logical cluster number in NTFS/FAT32?

I am trying to defragment a single file through the Windows defragmentation API (http://msdn.microsoft.com/en-us/library/aa363911(VS.85).aspx), but if there is no free space block large enough for my file, I would like to move parts of other files to make room for it.
The linked article mentions moving parts of other files, but I can't find any information about how to find out which files to move. From the free space bitmap I can find an almost-large-enough space, and I know the logical cluster numbers surrounding it, but from this I can't find out which files surround it, and a handle to a file is required for FSCTL_MOVE_FILE, which moves parts of files.
Is there any way, through the API or by parsing the MFT, to find out what file a logical cluster number is part of, and what virtual cluster number in the file corresponds to the logical cluster number found through the bitmap?
The slow but compatible method is to recursively scan all directories for files and call FSCTL_GET_RETRIEVAL_POINTERS on each one, then scan the resulting VCN-LCN mapping for the cluster in question.
Another option would be to query the USN Journal of the drive to get the file reference IDs, then use FSCTL_GET_NTFS_FILE_RECORD to get the $MFT file record.
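A sketch of the per-file test used by the slow-but-compatible method (error handling trimmed; a real scan must loop on ERROR_MORE_DATA for heavily fragmented files, and the handle only needs FILE_READ_ATTRIBUTES access):

    #include <windows.h>
    #include <winioctl.h>

    // Returns true if 'targetLcn' lies inside any extent of 'file'.
    bool FileContainsLcn(HANDLE file, LONGLONG targetLcn)
    {
        STARTING_VCN_INPUT_BUFFER in = {};          // start at VCN 0
        alignas(8) BYTE outBuf[4096];
        auto* out = reinterpret_cast<RETRIEVAL_POINTERS_BUFFER*>(outBuf);
        DWORD bytes = 0;

        if (!DeviceIoControl(file, FSCTL_GET_RETRIEVAL_POINTERS,
                             &in, sizeof in, out, sizeof outBuf,
                             &bytes, nullptr))
            return false;   // also handle ERROR_MORE_DATA in real code

        LONGLONG vcn = out->StartingVcn.QuadPart;
        for (DWORD i = 0; i < out->ExtentCount; ++i) {
            LONGLONG nextVcn = out->Extents[i].NextVcn.QuadPart;
            LONGLONG lcn = out->Extents[i].Lcn.QuadPart;  // -1 = hole
            LONGLONG len = nextVcn - vcn;
            if (lcn != -1 && targetLcn >= lcn && targetLcn < lcn + len)
                return true;    // matching VCN: vcn + (targetLcn - lcn)
            vcn = nextVcn;
        }
        return false;
    }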
I'm currently working on a simple Defrag program (written in Java) with the aim to pack files of a directory (e.g. all files of a large game) close together to reduce loading times and loading lags.
I use a faster method to retrieve the file mappings on the NTFS or FAT32 drive.
I parse the $MFT file directly (the format has some pitfalls), or the FAT32 file allocation table along with the directories.
The trick is to open the raw volume (e.g. "\\.\C:") with CreateFile, requesting GENERIC_READ with full sharing. The resulting handle can then be read with ReadFile and positioned with SetFilePointer (volume reads must be sector-aligned). This works only in administrator mode (i.e. elevated).
On NTFS, the $MFT might be fragmented, and it is a bit tricky to locate from the boot sector info alone. I use FSCTL_GET_RETRIEVAL_POINTERS on the C:\$MFT file to get its clusters.
On FAT32, one must parse the boot sector to locate the FAT and the cluster containing the root directory. You then parse the directory entries and recursively locate the clusters of the sub-directories.
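For example, opening the raw volume and pulling the two BPB fields that both FAT32 and NTFS keep at the same offsets in the boot sector (requires elevation; this assumes a 512-byte sector size, so read 4096 bytes on 4Kn disks):

    #include <windows.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        HANDLE vol = CreateFileA("\\\\.\\C:", GENERIC_READ,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 nullptr, OPEN_EXISTING, 0, nullptr);
        if (vol == INVALID_HANDLE_VALUE) return 1;

        // Volume reads must be sector-aligned, sector-sized multiples.
        BYTE boot[512];
        DWORD read = 0;
        if (ReadFile(vol, boot, sizeof boot, &read, nullptr)) {
            WORD bytesPerSector;                   // BPB offset 0x0B
            std::memcpy(&bytesPerSector, boot + 11, sizeof bytesPerSector);
            BYTE sectorsPerCluster = boot[13];     // BPB offset 0x0D
            std::printf("%u bytes/sector, %u sectors/cluster\n",
                        bytesPerSector, sectorsPerCluster);
        }
        CloseHandle(vol);
        return 0;
    }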
There is no O(1) way of mapping from block # to file. You need to walk the entire MFT looking for files that contain that block.
Of course, on a live system, once you've read that data it's already out of date, and you must be prepared for failures in the move-data FSCTL.
