Where does NTFS store file attributes? - windows

I'm currently developing a file system and doing some research on existing ones. In the file system I have in mind, I would like to attach extra metadata (or file attributes) to files beyond what file systems like NTFS generally store: for each file, its filename, type, path, size, owner, and dates of creation and modification.
In NTFS in particular, I found that the $MFT stores attributes for each file, such as the file's name in $FILE_NAME and its timestamps in $STANDARD_INFORMATION, but what about the rest of its attributes, like its owner, location, size, and type?
I ask this in order to understand whether it's possible to complement a FS like NTFS with extra metadata about files, as I said before, but I can't seem to figure out where it stores the metadata it already has...

The owner can be determined via the $SECURITY_DESCRIPTOR attribute. The location (I assume you mean the path on the volume) can only be determined by parsing directories until you come across that particular file (the INDX blocks that make up the B*-trees of the file system store references to file records in the MFT). The file size can be determined accurately from the $DATA attribute.
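If what you ultimately need is to read those values rather than locate them on disk, the usual route is the Win32 API, which resolves them from those same attributes for you. A minimal sketch, assuming a placeholder path C:\example.txt:

```cpp
// Minimal sketch: query a file's owner and size through the Win32 API
// (NTFS serves these from the security data referenced by the file record
// and from the unnamed $DATA stream). "C:\\example.txt" is a placeholder.
#include <windows.h>
#include <aclapi.h>
#include <iostream>

#pragma comment(lib, "advapi32.lib")

int main() {
    wchar_t path[] = L"C:\\example.txt";

    // Owner: ask for the owner SID from the file's security descriptor.
    PSID ownerSid = nullptr;
    PSECURITY_DESCRIPTOR sd = nullptr;
    if (GetNamedSecurityInfoW(path, SE_FILE_OBJECT, OWNER_SECURITY_INFORMATION,
                              &ownerSid, nullptr, nullptr, nullptr, &sd) == ERROR_SUCCESS) {
        wchar_t name[256], domain[256];
        DWORD nameLen = 256, domainLen = 256;
        SID_NAME_USE use;
        if (LookupAccountSidW(nullptr, ownerSid, name, &nameLen, domain, &domainLen, &use))
            std::wcout << L"Owner: " << domain << L"\\" << name << L"\n";
        LocalFree(sd);
    }

    // Size: the length of the unnamed $DATA stream.
    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, 0, nullptr);
    if (h != INVALID_HANDLE_VALUE) {
        LARGE_INTEGER size;
        if (GetFileSizeEx(h, &size))
            std::wcout << L"Size: " << size.QuadPart << L" bytes\n";
        CloseHandle(h);
    }
    return 0;
}
```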
The file type can only be determined either from the file's content (certain file formats have markers) or from the extension in the file name; the file system itself is agnostic about file types. If instead you mean file types as in regular files, directories, links, and so on, these can be determined from the file record itself.
As for adding extra metadata, it would be unwise to add attributes that the NTFS driver doesn't recognize, since you would have to write and distribute your own proprietary driver; machines without that driver would see the volume as corrupt. You should also consider what happens when the attributes exceed the size of the file record (which is fixed at 1024 bytes in newer versions of NTFS, unlike older versions where the record size could vary).
A good way to solve this problem and keep the file system usable for people who don't have your software or driver installed is to add named streams. You can use your own naming convention and store whatever you like, and the NTFS driver will take care of the records for you even if they grow beyond the 1024-byte limit. Users without your software can still use the volume and won't notice that those named streams exist, since applications typically open the unnamed stream handed to them by the NTFS driver by default (unless otherwise specified).
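For illustration, a minimal sketch of writing and reading back such a named (alternate data) stream with plain CreateFile calls; the stream name com.example.metadata is an invented convention, not anything NTFS prescribes:

```cpp
// Minimal sketch: store extra metadata in a named (alternate data) stream.
// "com.example.metadata" is an invented naming convention.
#include <windows.h>
#include <iostream>

int main() {
    // "file.txt:com.example.metadata" addresses a named stream of file.txt.
    HANDLE h = CreateFileW(L"file.txt:com.example.metadata",
                           GENERIC_WRITE, 0, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    const char metadata[] = "{\"rating\": 5, \"tags\": [\"draft\"]}";
    DWORD written = 0;
    WriteFile(h, metadata, sizeof(metadata) - 1, &written, nullptr);
    CloseHandle(h);

    // Reading it back works the same way; applications that open plain
    // "file.txt" still see only the unnamed stream.
    h = CreateFileW(L"file.txt:com.example.metadata",
                    GENERIC_READ, FILE_SHARE_READ, nullptr,
                    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    char buffer[256] = {};
    DWORD read = 0;
    ReadFile(h, buffer, sizeof(buffer) - 1, &read, nullptr);
    CloseHandle(h);
    std::cout << "Metadata stream: " << buffer << "\n";
    return 0;
}
```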

Related

read/write to a disk without a file system

I would like to know if anybody has any experience writing data directly to disk without a file system - in a similar way that data would be written to a magnetic tape. In particular I would like to know if/how data is written in blocks, and whether a certain blocksize needs to be specified (like it does when writing to tape), and if there is a disk equivalent of a tape file mark, which separates the archives written to a tape.
We are creating a digital archive for over 1 PB of data, and we want redundancy built in to the system in as many levels as possible (by storing multiple copies using different storage media, and storage formats). Our current system works with tapes, and we have a mechanism for storing the block offset of each archive on each tape so we can restore it.
We'd like to extend the system to work with disk volumes without having to change much of the logic. Another advantage of not having a file system is that the solution would be portable across Operating Systems.
Note that the ability to browse the files on disk is not important in this application, since we are considering this for an archival copy of data which is not accessed independently. Also note that we would already have an index of the files stored in the application database, which we also write to the end of the tape/disk when it is almost full.
EDIT 27/05/2020: It seems that accessing the disk device as a raw/character device is what I'm looking for.
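For what it's worth, a rough sketch of what raw access looks like on Windows; the device path \\.\PhysicalDrive1, the 4 KiB block size and the "ARCHIVE-MARK" convention are all assumptions of this example, not part of the question:

```cpp
// Rough sketch: write one archive block directly to a raw disk device.
// "\\\\.\\PhysicalDrive1", the block size and the "ARCHIVE-MARK" marker are
// assumptions of this example; writing to the device destroys any file
// system on it, and administrator privileges are required.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main()
{
    HANDLE disk = CreateFileW(L"\\\\.\\PhysicalDrive1",
                              GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                              OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, nullptr);
    if (disk == INVALID_HANDLE_VALUE) return 1;

    // With FILE_FLAG_NO_BUFFERING, offsets, lengths and the buffer itself must
    // be sector-aligned - the disk analogue of a tape block size. VirtualAlloc
    // returns page-aligned memory, which satisfies that.
    const DWORD blockSize = 4096;   // query the real sector size with IOCTL_DISK_GET_DRIVE_GEOMETRY
    char* block = static_cast<char*>(VirtualAlloc(nullptr, blockSize,
                                                  MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE));
    std::memcpy(block, "ARCHIVE-MARK-0001", 17);  // our own equivalent of a tape file mark

    LARGE_INTEGER offset = {};      // block offset tracked in the application database
    SetFilePointerEx(disk, offset, nullptr, FILE_BEGIN);

    DWORD written = 0;
    if (!WriteFile(disk, block, blockSize, &written, nullptr))
        std::fprintf(stderr, "WriteFile failed: %lu\n", GetLastError());

    VirtualFree(block, 0, MEM_RELEASE);
    CloseHandle(disk);
    return 0;
}
```

On Linux the same idea applies to a block device such as /dev/sdb opened with open()/pwrite(), which is what keeps the approach portable.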

How do I create a CFSTR_FILEDESCRIPTOR of unknown size?

I have an email client that allows the user to export a folder of email as an MBOX file. The UX is that they drag the folder from inside the application to an Explorer folder (e.g. the Desktop) and a file copy commences, the application adding CFSTR_FILEDESCRIPTOR and CFSTR_FILECONTENTS to the data object being dropped. The issue is working out how to specify the size of the "file": internally I store the email in a database, and it takes quite a while to fully encode the output MBOX, especially if the folder has many emails. Until that encoding is complete I don't have an exact size, just an estimate.
Currently I return an IStream pointer to Windows and over-specify the size in the file descriptor (estimate * 3 or something). Then, when I hit the end of my data, I return from IStream::Read a length less than the input buffer size, which causes Windows to give up on the copy. On Windows 7 it leaves the "partial" file in the destination folder, which is perfect, but on XP it fails the copy completely, leaving nothing in the destination folder. Other versions may exhibit different behaviour.
Is there a way of dropping a file of unknown size onto explorer that has to be generated by the source application?
Alternatively can I just get the destination folder path and do all the copy progress + output internally to my application? This would be great, I have all the code to do it already. Problem is I'm not the process accepting the drop.
Bonus round: This also needs to work on Linux/GTK and Mac/Carbon so any pointers there would be helpful too.
Windows Explorer uses three methods to detect the size of a stream (in order of priority):
1. The nFileSizeHigh/nFileSizeLow fields of the FILEDESCRIPTOR structure, if the FD_FILESIZE flag is present.
2. Calling IStream.Seek(0, STREAM_SEEK_END, FileSize).
3. Calling IStream.Stat; the cbSize field of the STATSTG structure is used as a MAXIMUM file size only.
To pass Explorer a file of unknown size, it is necessary to do the following (a sketch follows the list):
1. Remove the FD_FILESIZE flag from the FILEDESCRIPTOR structure.
2. IStream.Seek must not be implemented (it must return E_NOTIMPL).
3. IStream.Stat must set the cbSize field to -1 (0xFFFFFFFFFFFFFFFF).
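A minimal sketch of what those pieces can look like in C++; the struct name MboxStream is invented, and the rest of the IStream/IDataObject plumbing (Read, reference counting, and so on) is omitted:

```cpp
// Sketch only: the source-side pieces that make Explorer accept an
// unknown-size virtual file. "MboxStream" is an invented name; all other
// IStream/IDataObject methods are omitted.
#include <windows.h>
#include <shlobj.h>   // FILEDESCRIPTORW, FD_* flags

void FillDescriptor(FILEDESCRIPTORW& fd)
{
    ZeroMemory(&fd, sizeof(fd));
    fd.dwFlags = FD_ATTRIBUTES;             // note: no FD_FILESIZE
    fd.dwFileAttributes = FILE_ATTRIBUTE_NORMAL;
    lstrcpynW(fd.cFileName, L"Folder.mbox", MAX_PATH);
}

// Inside the IStream implementation handed out via CFSTR_FILECONTENTS:
struct MboxStream /* : IStream */
{
    STDMETHODIMP Seek(LARGE_INTEGER, DWORD, ULARGE_INTEGER*)
    {
        return E_NOTIMPL;                   // keep Explorer from seeking to the end
    }

    STDMETHODIMP Stat(STATSTG* stg, DWORD)
    {
        ZeroMemory(stg, sizeof(*stg));
        stg->type = STGTY_STREAM;
        stg->cbSize.QuadPart = ~0ULL;       // -1: size unknown, used only as an upper bound
        return S_OK;
    }
    // ... Read(), QueryInterface()/AddRef()/Release(), etc. omitted ...
};
```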
Is there a way of dropping a file of unknown size onto explorer that has to be generated by the source application?
When providing CFSTR_FILEDESCRIPTOR, you don't have to provide a file size at all if you don't know it ahead of time. The FD_FILESIZE flag in the FILEDESCRIPTOR::dwFlags field is optional. Provide an exact size only if you know it, otherwise don't provide a size at all, not even an estimate. The copy will still proceed, but the target won't know the final size until IStream::Read() returns S_FALSE to indicate the end of the stream has been reached.
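In other words, a Read() roughly like the following sketch; EncodeNextChunk is a hypothetical stand-in for the MBOX encoder, and the surrounding IStream class is omitted:

```cpp
// Sketch: end-of-stream signalling. "EncodeNextChunk" is a hypothetical
// helper standing in for the MBOX encoder.
#include <windows.h>

ULONG EncodeNextChunk(BYTE* dest, ULONG capacity);   // hypothetical encoder

HRESULT StreamRead(void* pv, ULONG cb, ULONG* pcbRead)
{
    ULONG produced = EncodeNextChunk(static_cast<BYTE*>(pv), cb);
    if (pcbRead) *pcbRead = produced;
    // Returning S_FALSE (having filled fewer bytes than requested) tells the
    // caller that the end of the stream has been reached; the copy then
    // completes with whatever total size was actually produced.
    return (produced == cb) ? S_OK : S_FALSE;
}
```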
Alternatively can I just get the destination folder path and do all the copy progress + output internally to my application?
A drop operation does not provide any information about the target at all. And for good reason - the source doesn't need to know. A drop target does not need to know where the source data is coming from, only how to access it. The drag source does not need to know how the drop target will use the data, only how to provide the data to it.
Think of what happens if you drop a file (virtual or otherwise) onto an Explorer folder that is implemented as a Shell Namespace Extension that saves the file to another database, or uploads it to a remote server. The filesystem is not involved, so you wouldn't be able to manually copy your data to the target even if you wanted to. Only the target knows how its data is stored.
That being said, the only way I know to get the path of a filesystem folder being dropped into is to drag&drop a dummy file, and then monitor the filesystem for where the drop target creates/copies the file to. Then you can replace the dummy file with your real file. But this is not very reliable, and not very friendly to the target.

Undo a botched command prompt copy which concatenated all of my files

In a Windows 8 Command Prompt, I had a backup drive plugged in and I navigated to my User directory. I executed the command:
copy Documents G:/Seagate_backup/Documents
What I assumed was that copy would create the Documents directory on my backup drive and then copy the contents of the C: Documents directory into it. That is not what happened!
I proceeded to wipe my hard-drive and re-install the operating system, thinking I had backed up the important files, only to find out that copy seemingly concatenated all the C: Documents files of different types (.doc, .pdf, .txt, etc) into one file called "Documents." This file is of course unreadable but opening it in Notepad reveals what happened. I can see some of my documents which were plain text throughout the massively long file.
How do I undo this!!? It's terrible because I was actually helping a friend and was so sure of myself but now this has happened. The only thing I can think of doing is searching for some common separator amongst the concatenated files and write some sort of script to split the file back apart. But then I would have to guess the extensions of each of the pieces...
Merging files together the way copy does discards important file system information such as file size and file name. The file name may not matter much, but the size does: both are parameters the OS uses to tell files apart.
This problem might sound familiar if you have ever damaged your file allocation table and watched all your files disappear. In both cases you end up with a binary blob (be it an actual disk, or something like your file, which resembles a disk image) that lacks any size or filename information.
Fortunately, this is where many file system recovery tools can help. They specialize in pattern matching: specifically, they look for giveaway clues about what type a file is, where it starts, and what its size is.
This works, for instance, because many file types carry a set of magic numbers, which let a program check whether a file really is of the type its extension claims.
In principle it is possible to undo this process more or less well.
You will need data recovery tools, or other analysis tools like binwalk, to extract the documents from the concatenated binary blob. Essentially, the same tools used to recover deleted files should be able to extract your documents again - without their filenames, of course. I recommend renaming the file to a disk image (.img) and either mounting it from within the operating system as a virtual hard disk (don't worry that it has no file system - it should simply show up as an unformatted drive) or feeding it directly to a data recovery or analysis tool that can read binary files (binwalk, for instance, can do that directly, though it may not find every file type, since it is mainly intended for unpacking firmware images, which happen to be assembled much the same way your files ended up).
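If you want to try the manual route you originally described, a crude carver that only reports candidate start offsets for a few well-known magic numbers might look like this sketch (real recovery tools also work out where each file ends, which this does not attempt):

```cpp
// Crude sketch: scan the concatenated blob for a few well-known magic
// numbers and report candidate start offsets. Real carving tools (binwalk,
// photorec, ...) also determine where each file ends.
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

struct Magic { std::string bytes; const char* ext; };

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: carve <blob>\n"); return 1; }

    const std::vector<Magic> magics = {
        { std::string("%PDF-"), "pdf" },
        { std::string("PK\x03\x04", 4), "zip/docx/xlsx" },               // OOXML documents are ZIP containers
        { std::string("\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1", 8), "doc/xls (OLE2)" },
    };

    std::ifstream in(argv[1], std::ios::binary);
    std::string blob((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());

    for (const Magic& m : magics) {
        for (size_t pos = blob.find(m.bytes); pos != std::string::npos;
             pos = blob.find(m.bytes, pos + 1)) {
            std::printf("offset %zu: possible %s\n", pos, m.ext);
        }
    }
    return 0;
}
```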

Get file offset on disk/cluster number

I need to get any information about where the file is physically located on the NTFS disk. Absolute offset, cluster ID..anything.
I need to scan the disk twice: once to enumerate allocated files, and a second time opening the partition directly in raw mode to try to find the rest of the data (from deleted files). I need a way to tell whether data I find in the raw scan is the same data I have already handled as a file. Since I'm scanning the disk in raw mode, the offset of data I find should somehow be convertible to an offset within a file (given information about the disk geometry). Is there any way to do this? Other solutions are welcome as well.
Now I'm playing with FSCTL_GET_NTFS_FILE_RECORD, but can't make it work at the moment and I'm not really sure it will help.
UPDATE
I found the following function
http://msdn.microsoft.com/en-us/library/windows/desktop/aa364952(v=vs.85).aspx
It returns a structure that contains the nFileIndexHigh and nFileIndexLow members.
The documentation says:
The identifier that is stored in the nFileIndexHigh and nFileIndexLow members is called the file ID. Support for file IDs is file system-specific. File IDs are not guaranteed to be unique over time, because file systems are free to reuse them. In some cases, the file ID for a file can change over time.
I don't really understand what this is. I can't connect it to the physical location of the file. Is it possible to later extract this file ID from the MFT?
UPDATE
Found this:
This identifier and the volume serial number uniquely identify a file. This number can change when the system is restarted or when the file is opened.
This doesn't satisfy my requirements, because I'm going to open the file, and the fact that the ID might change doesn't make me happy.
Any ideas?
Use the Defragmentation IOCTLs. For example, FSCTL_GET_RETRIEVAL_POINTERS will tell you the extents which contain file data.
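A minimal sketch of that IOCTL; the file path is a placeholder and the ERROR_MORE_DATA loop for heavily fragmented files is left out:

```cpp
// Sketch: map a file's virtual clusters (VCNs) to logical clusters (LCNs)
// on the volume with FSCTL_GET_RETRIEVAL_POINTERS.
// "C:\\some\\file.bin" is a placeholder path.
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main()
{
    HANDLE file = CreateFileW(L"C:\\some\\file.bin", FILE_READ_ATTRIBUTES,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                              OPEN_EXISTING, 0, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    STARTING_VCN_INPUT_BUFFER in = {};
    in.StartingVcn.QuadPart = 0;

    // Enough room for a handful of extents; loop on ERROR_MORE_DATA in real code.
    alignas(8) BYTE outBuf[4096];
    DWORD bytes = 0;
    if (DeviceIoControl(file, FSCTL_GET_RETRIEVAL_POINTERS,
                        &in, sizeof(in), outBuf, sizeof(outBuf), &bytes, nullptr)) {
        auto* rp = reinterpret_cast<RETRIEVAL_POINTERS_BUFFER*>(outBuf);
        LONGLONG vcn = rp->StartingVcn.QuadPart;
        for (DWORD i = 0; i < rp->ExtentCount; ++i) {
            LONGLONG nextVcn = rp->Extents[i].NextVcn.QuadPart;
            LONGLONG lcn = rp->Extents[i].Lcn.QuadPart;   // -1 marks a sparse/unallocated run
            std::printf("VCN %lld..%lld -> LCN %lld\n", vcn, nextVcn - 1, lcn);
            vcn = nextVcn;
        }
    }
    CloseHandle(file);
    return 0;
}
```

Multiplying an LCN by the cluster size (e.g. from GetDiskFreeSpaceW) gives the byte offset of that run from the start of the volume, which is what lets you match data found in the raw scan back to a file.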

How can I find information about a file from logical cluster number in NTFS/FAT32?

I am trying to defragment a single file through Windows defragmentation API ( http://msdn.microsoft.com/en-us/library/aa363911(VS.85).aspx ) but if there is no free space block large enough for my file I would like to move other parts of files to make room for it.
The linked article mentions moving parts of other files, but I can't find any information about how to determine which files to move. From the free space bitmap I can find an almost large enough space and I know the logical cluster numbers surrounding it, but from this I can't find out which files surround it, and a handle to those files is required for FSCTL_MOVE_FILE, which moves parts of files.
Is there any way, through the API or by parsing the MFT, to find out what file a logical cluster number is part of, and what virtual cluster number in the file corresponds to the logical cluster number found through the bitmap?
The slow but compatible method is to recursively scan all directories for files and use FSCTL_GET_RETRIEVAL_POINTERS on each one, then scan the resulting VCN-LCN mapping for the cluster in question.
Another option would be to query the USN journal of the drive to get the file reference IDs, then use FSCTL_GET_NTFS_FILE_RECORD to get the $MFT file record.
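For that second route, a hedged sketch of FSCTL_GET_NTFS_FILE_RECORD; the volume path and the file reference number are placeholders:

```cpp
// Sketch: fetch the raw $MFT record for a given file reference number.
// "\\\\.\\C:" and the reference number 12345 are placeholders; requires
// administrator privileges.
#include <windows.h>
#include <winioctl.h>
#include <vector>
#include <cstdio>

int main()
{
    HANDLE volume = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                                FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                                OPEN_EXISTING, 0, nullptr);
    if (volume == INVALID_HANDLE_VALUE) return 1;

    NTFS_FILE_RECORD_INPUT_BUFFER in = {};
    in.FileReferenceNumber.QuadPart = 12345;   // e.g. taken from the USN journal

    // Output is a small header followed by the (typically 1024-byte) record.
    std::vector<BYTE> out(sizeof(NTFS_FILE_RECORD_OUTPUT_BUFFER) + 4096);
    DWORD bytes = 0;
    if (DeviceIoControl(volume, FSCTL_GET_NTFS_FILE_RECORD,
                        &in, sizeof(in), out.data(), (DWORD)out.size(),
                        &bytes, nullptr)) {
        auto* rec = reinterpret_cast<NTFS_FILE_RECORD_OUTPUT_BUFFER*>(out.data());
        std::printf("Record for FRN 0x%llx, %lu bytes\n",
                    (unsigned long long)rec->FileReferenceNumber.QuadPart,
                    rec->FileRecordLength);
        // rec->FileRecord[...] now holds the raw MFT entry (FILE record header,
        // $STANDARD_INFORMATION, $FILE_NAME, $DATA runs, ...).
    }
    CloseHandle(volume);
    return 0;
}
```

The returned record still has to be parsed with the $MFT layout described elsewhere on this page.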
I'm currently working on a simple Defrag program (written in Java) with the aim to pack files of a directory (e.g. all files of a large game) close together to reduce loading times and loading lags.
I use a faster method to retrieve the file mappings on the NTFS or FAT32 drive.
I parse the $MFT file directly (the format has some pitfalls), or the FAT32 file allocation table along with the directories.
The trick is to open the drive (e.g. "c:") with FileCreate for fully shared GENERIC read. The resulting handle can then be read with FileRead and positioned with FileSeek at byte granularity. This works only with administrator (elevated) privileges.
On NTFS, the $MFT might be fragmented, and it is a bit tricky to locate from the boot sector info alone. I use FSCTL_GET_RETRIEVAL_POINTERS on the C:\$MFT file to get its clusters.
On FAT32, one must parse the boot sector to locate the FAT and the cluster containing the root directory, then parse the directory entries and recursively locate the clusters of the subdirectories.
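By way of illustration (a sketch, not the poster's Java code), here is the Win32 form of the FileCreate/FileRead/FileSeek idea, used to open the volume and read the NTFS boot sector; \\.\C: is a placeholder:

```cpp
// Sketch: open the raw volume with full sharing and read its boot sector.
// "\\\\.\\C:" is a placeholder; elevation is required, and reads on a volume
// handle must be sector-aligned in offset and length.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main()
{
    HANDLE volume = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                                FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                                OPEN_EXISTING, 0, nullptr);
    if (volume == INVALID_HANDLE_VALUE) return 1;

    BYTE boot[512];
    DWORD read = 0;
    LARGE_INTEGER zero = {};
    SetFilePointerEx(volume, zero, nullptr, FILE_BEGIN);
    if (ReadFile(volume, boot, sizeof(boot), &read, nullptr) && read == sizeof(boot)) {
        // NTFS boot sector: bytes/sector at offset 0x0B, sectors/cluster at
        // 0x0D, $MFT start cluster (LCN) at offset 0x30.
        WORD bytesPerSector;  std::memcpy(&bytesPerSector, &boot[0x0B], sizeof(bytesPerSector));
        BYTE sectorsPerCluster = boot[0x0D];
        LONGLONG mftLcn;      std::memcpy(&mftLcn, &boot[0x30], sizeof(mftLcn));
        std::printf("bytes/sector=%u sectors/cluster=%u $MFT LCN=%lld\n",
                    (unsigned)bytesPerSector, (unsigned)sectorsPerCluster, mftLcn);
    }
    CloseHandle(volume);
    return 0;
}
```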
There is no O(1) way of mapping from block # to file. You need to walk the entire MFT looking for files that contain that block.
Of course, in a live system, once you've read that data it's out-of-date and you must be prepared for failures in the move data FSCTL.
