Is there a tool that would show me, for a specific file on disk, how fragmented it is? (How many seeks does the physical disk need to make if I were to read that file in a linear fashion?)
The Sysinternals tool contig with the -a parameter can do this for a file, or for all files in a folder and its subfolders.
You can use DeviceIoControl with FSCTL_GET_VOLUME_BITMAP, FSCTL_GET_RETRIEVAL_POINTERS and FSCTL_MOVE_FILE, see Defragmenting Files.
You can also find different code examples if you search for FSCTL_MOVE_FILE.
Here is one in C and another in .NET.
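For reference, here is a minimal C sketch of the FSCTL_GET_RETRIEVAL_POINTERS approach (not taken from the linked examples): it prints each extent's starting LCN and length, so the extent count gives a rough idea of how many separate runs a linear read would touch. Error handling is trimmed to the essentials.

    /* Minimal sketch: list a file's extents with FSCTL_GET_RETRIEVAL_POINTERS.
       A heavily fragmented file needs several DeviceIoControl calls
       (ERROR_MORE_DATA) with an advancing starting VCN. */
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open failed: %lu\n", GetLastError());
            return 1;
        }

        STARTING_VCN_INPUT_BUFFER in = {0};               /* start at VCN 0 */
        union { RETRIEVAL_POINTERS_BUFFER rp; BYTE raw[64 * 1024]; } out;
        DWORD bytes, total = 0;
        BOOL more = TRUE;

        while (more) {
            BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS, &in, sizeof(in),
                                      &out, sizeof(out), &bytes, NULL);
            if (!ok && GetLastError() != ERROR_MORE_DATA) break;
            more = !ok;                                   /* ERROR_MORE_DATA: keep going */
            if (out.rp.ExtentCount == 0) break;

            for (DWORD i = 0; i < out.rp.ExtentCount; i++) {
                LONGLONG prev = i ? out.rp.Extents[i - 1].NextVcn.QuadPart
                                  : out.rp.StartingVcn.QuadPart;
                printf("extent %lu: LCN %lld, %lld clusters\n", total + i,
                       out.rp.Extents[i].Lcn.QuadPart,
                       out.rp.Extents[i].NextVcn.QuadPart - prev);
            }
            total += out.rp.ExtentCount;
            in.StartingVcn = out.rp.Extents[out.rp.ExtentCount - 1].NextVcn;  /* resume */
        }
        printf("%lu extent(s) total\n", total);
        CloseHandle(h);
        return 0;
    }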
filefrag is the tool you're looking for, if you're using Linux.
Use the -v parameter with a filename to get a detailed list of the file's fragments.
http://linux.die.net/man/8/filefrag
And, of course, "fragmentation" is suspect:
The file may be in pieces in the same cylinder: no seek overhead, just rotational latency. Or not even that, if the pieces happen to be in an optimal order (chances are near zero for this one).
The file may be "contiguous" but across several cylinders. Even reading sequentially will result in seeks.
The file may be on a stripe set and you have no idea where the boundaries are. You may skip to another controller, another spindle, or another partition on the same drive.
Be careful about what conclusions you draw.
fsutil file queryallocranges offset=<o> length=<l> <file> will show you the file's extents. You will need admin rights.
I want to make a tool similar to zerofree for Linux. I want to do it by allocating a big file without zeroing it, looking for nonzero blocks, and rewriting them.
With admin privileges it is possible; uTorrent can do this: http://www.netcheif.com/Articles/uTorrent/html/AppendixA_02_12.html#diskio.no_zero , but it's closed source.
I am not sure this answers your question (need), but such a tool already exists. You might have a look at the fsutil.exe command line tool. It has huge potential for discovering the internal structures of NTFS files and can also create a file of any size (without the need to zero it manually). Hope that helps.
I wrote a tool: https://github.com/basinilya/winzerofree . It uses SetFileValidData() as @RaymondChen suggested.
You should try SetFilePointerEx:
Note that it is not an error to set the file pointer to a position beyond the end of the file.
So after you create the file, call SetFilePointerEx, then SetEndOfFile (or WriteFile/WriteFileEx), and close the file; the size should be increased.
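For example, a minimal C sketch of that sequence (the file name and the 1 GB size are just placeholders):

    /* Create a file and extend it to a given size without writing the data yourself. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("bigfile.bin", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        LARGE_INTEGER size;
        size.QuadPart = 1024LL * 1024 * 1024;             /* 1 GB, for example */

        /* Moving the pointer past EOF is not an error; SetEndOfFile makes it stick. */
        if (!SetFilePointerEx(h, size, NULL, FILE_BEGIN) || !SetEndOfFile(h)) {
            fprintf(stderr, "extend failed: %lu\n", GetLastError());
            CloseHandle(h);
            return 1;
        }
        CloseHandle(h);                                   /* the file is now 1 GB */
        return 0;
    }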
EDIT
Raymond's suggested SetFileValidData is also a good solution, but it requires privileges, so it shouldn't be used casually.
My solution is the best on NTFS, because NTFS supports a feature known as the initialized (valid data) size: after using SetFilePointerEx the data won't actually be initialized to zeros on disk, but any attempt to read the uninitialized range will return zeros.
To sum up: on NTFS use SetFilePointerEx; on FAT (not very likely) use SetFileValidData.
I'm rendering millions of tiles which will be displayed as an overlay on Google Maps. The files are created by GMapCreator from the Centre for Advanced Spatial Analysis at University College London. The application renders files into a single folder at a time; in some cases I need to create about 4.2 million tiles. I'm running it on Windows XP using an NTFS filesystem; the disk is 500GB and was formatted using the default operating system options.
I'm finding the rendering of tiles gets slower and slower as the number of rendered tiles increases. I have also seen that if I try to look at the folders in Windows Explorer or using the Command line then the whole machine effectively locks up for a number of minutes before it recovers enough to do something again.
I've been splitting the input shapefiles into smaller pieces, running on different machines and so on, but the issue is still causing me considerable pain. I wondered if the cluster size on my disk might be hindering the thing or whether I should look at using another file system altogether. Does anyone have any ideas how I might be able to overcome this issue?
Thanks,
Barry.
Update:
Thanks to everyone for the suggestions. The eventual solution involved writing a piece of code which monitored the GMapCreator output folder, moving files into a directory hierarchy based upon their filenames; so a file named abcdefg.gif would be moved into \a\b\c\d\e\f\g.gif. Running this at the same time as GMapCreator overcame the filesystem performance problems. The hint about the generation of DOS 8.3 filenames was also very useful - as noted below I was amazed how much of a difference this made. Cheers :-)
There are several things you could/should do:
Disable automatic NTFS short file name generation (google it)
Or restrict file names to use 8.3 pattern (e.g. i0000001.jpg, ...)
In any case try making the first six characters of the filename as unique/different as possible
If you use the same folder over and over (say adding files, removing files, re-adding files, ...):
Use contig to keep the directory's index file as unfragmented as possible (check this for an explanation)
Especially when removing many files, consider using the folder remove trick to reduce the directory index file size
As already posted, consider splitting the files up into multiple directories (a sketch follows the example paths below).
E.g. instead of
directory/abc.jpg
directory/acc.jpg
directory/acd.jpg
directory/adc.jpg
directory/aec.jpg
use
directory/b/c/abc.jpg
directory/c/c/acc.jpg
directory/c/d/acd.jpg
directory/d/c/adc.jpg
directory/e/c/aec.jpg
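A rough C sketch of that scheme, assuming the 2nd and 3rd characters of the name become the two subdirectory levels; the folder name "directory" and the file names are placeholders:

    /* Use the 2nd and 3rd characters of the file name as two nested subdirectory
       levels, create them if needed, and move the file there. */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    static BOOL move_into_split_dirs(const char *root, const char *name)
    {
        if (strlen(name) < 3) return FALSE;               /* need at least 3 characters */

        char level1[MAX_PATH], level2[MAX_PATH], src[MAX_PATH], dst[MAX_PATH];
        snprintf(level1, sizeof(level1), "%s\\%c", root, name[1]);
        snprintf(level2, sizeof(level2), "%s\\%c", level1, name[2]);
        snprintf(src, sizeof(src), "%s\\%s", root, name);
        snprintf(dst, sizeof(dst), "%s\\%s", level2, name);

        /* CreateDirectory fails harmlessly with ERROR_ALREADY_EXISTS. */
        CreateDirectoryA(level1, NULL);
        CreateDirectoryA(level2, NULL);
        return MoveFileA(src, dst);
    }

    int main(void)
    {
        /* e.g. directory\abc.jpg -> directory\b\c\abc.jpg */
        if (!move_into_split_dirs("directory", "abc.jpg"))
            fprintf(stderr, "move failed: %lu\n", GetLastError());
        return 0;
    }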
You could try an SSD....
http://www.crucial.com/promo/index.aspx?prog=ssd
Use more folders and limit the number of entries in any given folder. The time to enumerate the entries in a directory goes up (exponentially? I'm not sure about that) with the number of entries, and if you have millions of small files in the same directory, even doing something like dir folder_with_millions_of_files can take minutes. Switching to another FS or OS will not solve the problem; Linux has the same behavior, last time I checked.
Find a way to group the images into subfolders of no more than a few hundred files each. Make the directory tree as deep as it needs to be in order to support this.
The solution is most likely to restrict the number of files per directory.
I had a very similar problem with financial data held in ~200,000 flat files. We solved it by storing the files in directories based on their name. e.g.
gbp97m.xls
was stored in
g/b/p97m.xls
This works fine provided your files are named appropriately (we had a spread of characters to work with). The resulting tree of directories and files wasn't optimal in terms of distribution, but it worked well enough to reduce each directory to hundreds of files and relieve the disk bottleneck.
One solution is to implement haystacks. This is what Facebook does for photos, as the metadata overhead and random reads required to fetch a file are quite high, and offer no value for a data store.
Haystack presents a generic HTTP-based object store containing needles that map to stored opaque objects. Storing photos as needles in the haystack eliminates the metadata overhead by aggregating hundreds of thousands of images in a single haystack store file. This keeps the metadata overhead very small and allows us to store each needle’s location in the store file in an in-memory index. This allows retrieval of an image’s data in a minimal number of I/O operations, eliminating all unnecessary metadata overhead.
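Purely to illustrate the idea (this is not Facebook's actual format), a small C sketch of a needle index: one big store file plus an in-memory table mapping each name to an (offset, length) pair, so fetching an image costs one seek and one read. All file names, offsets and the "tiles.store" name are invented.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char      name[64];     /* tile name, e.g. "tile_b.gif"          */
        long long offset;       /* byte offset of the image in the store */
        long long length;       /* image size in bytes                   */
    } Needle;

    /* Look up a needle by name and read its bytes from the store file. */
    static unsigned char *read_needle(FILE *store, const Needle *index, size_t count,
                                      const char *name, long long *out_len)
    {
        for (size_t i = 0; i < count; i++) {
            if (strcmp(index[i].name, name) == 0) {
                unsigned char *buf = malloc((size_t)index[i].length);
                if (!buf) return NULL;
                /* one seek, one read (use _fseeki64/fseeko for stores over 2 GB) */
                fseek(store, (long)index[i].offset, SEEK_SET);
                fread(buf, 1, (size_t)index[i].length, store);
                *out_len = index[i].length;
                return buf;
            }
        }
        return NULL;            /* not found */
    }

    int main(void)
    {
        /* Tiny hard-coded index purely for illustration. */
        Needle index[] = {
            { "tile_a.gif", 0,     15210 },
            { "tile_b.gif", 15210,  9876 },
        };
        FILE *store = fopen("tiles.store", "rb");         /* hypothetical store file */
        if (!store) return 1;

        long long len = 0;
        unsigned char *img = read_needle(store, index, 2, "tile_b.gif", &len);
        if (img) { printf("read %lld bytes\n", len); free(img); }
        fclose(store);
        return 0;
    }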
I need to get the list of all the files on a drive. I am using a recursive solution, but it is taking a lot of time. I was wondering: is it possible to get the names and locations of all the files on an NTFS drive from its Master File Table? I think it would be very fast. Any suggestions?
There is a tool that will search the mft directly, it's called ndff. I have used it before and it is very fast.
Presumably it is possible to do what you want - there is another tool called "Everything" which I guess does the same thing; it also uses the USN change journal to keep its index up to date.
When you get a list of all the files on an NTFS-formatted drive using a recursive solution, you are getting them from the MFT. There should be little disk IO outside of the MFT when simply retrieving a list of filenames and directories.
Before going down the path of determining the format of the MFT (which is available from a variety of places on the Internet) and writing code to read it directly, you should probably profile your code and determine that you aren't already CPU or IO bound.
I have the impression you're imagining some kind of list-like structure in the MFT which you can read in one go with no or minimal seeking.
This is not the case. The MFT uses a type of b-tree to store pathnames. When you scan the directory structure on your disk, you are in fact walking the MFT b-tree; you are doing what you would have to do if you accessed the MFT directly.
Yes there is, and the program I just open-sourced does exactly this.
You can read the source to find out how it works, but basically, it just looks for FILE_NAME attributes inside the $MFT and then uses the ParentDirectory field to get the parent of every file.
That way it can completely avoid reading the contents of any directory.
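For completeness, here is a different route to the same goal, sketched in C (this is not how the open-sourced program above works; that one parses $MFT itself): FSCTL_ENUM_USN_DATA asks NTFS to walk the MFT records and hands back each file name with its parent file reference number, so no directory ever has to be enumerated. It needs admin rights, and the drive letter is just an example.

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                 OPEN_EXISTING, 0, NULL);
        if (vol == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open volume failed: %lu\n", GetLastError());
            return 1;
        }

        MFT_ENUM_DATA in = {0};           /* start at file reference 0   */
        in.LowUsn  = 0;                   /* accept records with any USN */
        in.HighUsn = MAXLONGLONG;

        union { DWORDLONG next_frn; BYTE raw[64 * 1024]; } out;
        DWORD bytes;

        while (DeviceIoControl(vol, FSCTL_ENUM_USN_DATA, &in, sizeof(in),
                               &out, sizeof(out), &bytes, NULL)) {
            /* The first 8 bytes of the output are where the next call resumes. */
            in.StartFileReferenceNumber = out.next_frn;

            USN_RECORD *rec = (USN_RECORD *)(out.raw + sizeof(DWORDLONG));
            while ((BYTE *)rec < out.raw + bytes) {
                /* Name + parent reference: enough to rebuild full paths in memory. */
                wprintf(L"%.*s (parent frn %llu)\n",
                        (int)(rec->FileNameLength / sizeof(WCHAR)),
                        (WCHAR *)((BYTE *)rec + rec->FileNameOffset),
                        rec->ParentFileReferenceNumber);
                rec = (USN_RECORD *)((BYTE *)rec + rec->RecordLength);
            }
        }   /* the loop ends (ERROR_HANDLE_EOF) once the whole MFT has been walked */

        CloseHandle(vol);
        return 0;
    }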
Previously, I asked the question.
The problem is the demands of our file structure are very high.
For instance, we're trying to create a container with up to 4500 files and 500 MB of data.
The file structure of this container consists of
SQLite DB (under 1 MB)
A text-based, XML-like file
Images inside a dynamic folder structure that make up the rest of the 4,500ish files
After the initial creation, the image files are read-only, with the exception of deletion.
The small db is used regularly when the container is accessed.
Tar, Zip and the like are all too slow (even with 0 compression). Slow is subjective, I know, but untarring a container of this size takes over 20 seconds.
Any thoughts?
As you seem to be doing arbitrary file system operations on your container (say, creation, deletion of new files in the container, overwriting existing files, appending), I think you should go for some kind of file system. Allocate a large file, then create a file system structure in it.
There are several options for the file system available: for both Berkeley UFS and Linux ext2/ext3, there are user-mode libraries available. It might also be possible that you find a FAT implementation somewhere. Make sure you understand the structure of the file system, and pick one that allows for extending - I know that ext2 is fairly easy to extend (by another block group), and FAT is difficult to extend (need to append to the FAT).
Alternatively, you can put a virtual disk format one level below the file system, allowing arbitrary remapping of blocks. Then "free" blocks of the file system don't need to appear on disk, and you can allocate the virtual disk much larger than the real container file will be.
Three things.
1) What Timothy Walters said is right on, I'll go in to more detail.
2) 4500 files and 500Mb of data is simply a lot of data and disk writes. If you're operating on the entire dataset, it's going to be slow. Just I/O truth.
3) As others have mentioned, there's no detail on the use case.
If we assume a read only, random access scenario, then what Timothy says is pretty much dead on, and implementation is straightforward.
In a nutshell, here is what you do.
You concatenate all of the files in to a single blob. While you are concatenating them, you track their filename, the file length, and the offset that the file starts within the blob. You write that information out in to a block of data, sorted by name. We'll call this the Table of Contents, or TOC block.
Next, you concatenate the two blocks together. In the simple case, you have the TOC block first, then the data block.
When you wish to get data from this format, search the TOC for the file name, grab the offset from the beginning of the data block, add in the TOC block size, and read FILE_LENGTH bytes of data. Simple.
If you want to be clever, you can put the TOC at the END of the blob file. Then, append at the very end, the offset to the start of the TOC. Then you lseek to the end of the file, back up 4 or 8 bytes (depending on your number size), take THAT value, and lseek even farther back to the start of your TOC. Then you're back to square one. You do this so you don't have to rebuild the archive from the beginning just to add the TOC.
If you lay out your TOC in blocks (say 1K byte in size), then you can easily perform a binary search on the TOC. Simply fill each block with the File information entries, and when you run out of room, write a marker, pad with zeroes and advance to the next block. To do the binary search, you already know the size of the TOC, start in the middle, read the first file name, and go from there. Soon, you'll find the block, and then you read in the block and scan it for the file. This makes it efficient for reading without having the entire TOC in RAM. The other benefit is that the blocking requires less disk activity than a chained scheme like TAR (where you have to crawl the archive to find something).
I suggest you pad the files to block sizes as well; disks like to work with regularly sized blocks of data, and this isn't difficult either.
Updating this without rebuilding the entire thing is difficult. If you want an updatable container system, then you may as well look in to some of the simpler file system designs, because that's what you're really looking for in that case.
As for portability, I suggest you store your binary numbers in network order, as most standard libraries have routines to handle those details for you.
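Here is a C sketch of the reading side of the "TOC at the end" variant, under an invented concrete layout (fixed-size entries sorted by name, followed by an 8-byte trailer holding the TOC offset); the struct, file names and sizes are illustrative and error handling is trimmed:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    typedef struct {
        char     name[56];      /* zero-padded file name                 */
        uint64_t offset;        /* offset of the data from start of file */
        uint64_t length;        /* length of the data in bytes           */
    } TocEntry;                 /* 72 bytes per entry in this sketch     */

    /* Seek to the trailer, read the TOC offset, then read all the entries.
       (Use _fseeki64/fseeko instead of fseek for containers over 2 GB.) */
    static TocEntry *load_toc(FILE *f, size_t *count)
    {
        uint64_t toc_off;
        fseek(f, -8, SEEK_END);
        uint64_t trailer_pos = (uint64_t)ftell(f);
        fread(&toc_off, sizeof(toc_off), 1, f);

        *count = (size_t)((trailer_pos - toc_off) / sizeof(TocEntry));
        TocEntry *toc = malloc(*count * sizeof(TocEntry));
        fseek(f, (long)toc_off, SEEK_SET);
        fread(toc, sizeof(TocEntry), *count, f);
        return toc;
    }

    /* Binary search works because the entries were written sorted by name. */
    static const TocEntry *find_entry(const TocEntry *toc, size_t count, const char *name)
    {
        size_t lo = 0, hi = count;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            int cmp = strcmp(name, toc[mid].name);
            if (cmp == 0) return &toc[mid];
            if (cmp < 0) hi = mid; else lo = mid + 1;
        }
        return NULL;
    }

    int main(void)
    {
        FILE *f = fopen("container.bin", "rb");           /* hypothetical container */
        if (!f) return 1;

        size_t n;
        TocEntry *toc = load_toc(f, &n);
        const TocEntry *e = find_entry(toc, n, "images/0001.png");  /* example name */
        if (e) {
            unsigned char *data = malloc((size_t)e->length);
            fseek(f, (long)e->offset, SEEK_SET);          /* one seek, one read */
            fread(data, 1, (size_t)e->length, f);
            /* ... hand data to the caller ... */
            free(data);
        }
        free(toc);
        fclose(f);
        return 0;
    }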
Working on the assumption that you're only going to need read-only access to the files, why not just merge them all together and have a second "index" file (or an index in the header) that tells you the file name, start position and length? All you need to do is seek to the start point and read the correct number of bytes. The method will vary depending on your language, but it's pretty straightforward in most of them.
The hardest part then becomes creating your data file + index, and even that is pretty basic!
An ISO disk image might do the trick. It should be able to hold that many files easily, and is supported by many pieces of software on all the major operating systems.
First, thank you for expanding your question; it helps a lot in providing better answers.
Given that you're going to need a SQLite database anyway, have you looked at the performance of putting it all into the database? My experience is based around SQL Server 2000/2005/2008 so I'm not positive of the capabilities of SQLite but I'm sure it's going to be a pretty fast option for looking up records and getting the data, while still allowing for delete and/or update options.
Usually I would not recommend putting files inside the database, but given that the total size of all images is around 500 MB for 4500 images, you're looking at a little over 100 KB per image, right? If you're using a dynamic path to store the images, then in a slightly more normalized database you could have an "ImagePaths" table that maps each path to an ID; then you can look for images with that PathID and load the data from the BLOB column as needed.
The XML file(s) could also be in the SQLite database, which gives you a single 'data file' for your app that can move between Windows and OS X without issue. You can simply rely on your SQLite engine to provide the performance and compatibility you need.
How you optimize it depends on your usage, for example if you're frequently needing to get all images at a certain path then having a PathID (as an integer for performance) would be fast, but if you're showing all images that start with "A" and simply show the path as a property then an index on the ImageName column would be of more use.
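A rough sketch of that layout with the SQLite C API; the table and column names (ImagePaths, Images, Data, ...) are invented stand-ins for whatever schema you settle on:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("container.db", &db) != SQLITE_OK) return 1;

        /* Create the (example) schema: one row per path, one row per image. */
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS ImagePaths (PathID INTEGER PRIMARY KEY, Path TEXT UNIQUE);"
            "CREATE TABLE IF NOT EXISTS Images (ImageID INTEGER PRIMARY KEY, PathID INTEGER,"
            "                                   ImageName TEXT, Data BLOB);"
            "CREATE INDEX IF NOT EXISTS idx_images_name ON Images(ImageName);",
            NULL, NULL, NULL);

        /* Fetch one image's bytes by name. */
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "SELECT Data FROM Images WHERE ImageName = ?", -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, "abc.jpg", -1, SQLITE_STATIC);   /* example name */
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            const void *blob = sqlite3_column_blob(stmt, 0);
            int size = sqlite3_column_bytes(stmt, 0);
            printf("got %d bytes\n", size);
            (void)blob;       /* write to disk, hand to the viewer, etc. */
        }
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }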
I am a little concerned, though, that this sounds like premature optimization. You really need to find a solution that works 'fast enough', abstract the mechanics of it so your application (or both apps if you have both Mac and PC versions) uses a simple repository or similar, and then you can change the storage/retrieval method at will without any impact on your application.
Check Solid File System - it seems to be what you need.
Is there a way to guarantee that a file on Windows (using the NTFS file system) will use contiguous sectors on the hard disk? In other words, the first chunk of the file will be stored in a certain sector, the second chunk of the file will be stored in the next sector, and so on.
I should add that I want to be able to create this file programmatically, so I'd rather not just ask the user to defrag their hard drive after creating this file. If there is a way to programmatically defrag just the file that I create, then that would be OK too.
I would start here:
http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
and follow Mark's documentation of the defrag stuff:
http://technet.microsoft.com/en-us/sysinternals/bb897427.aspx
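To give a feel for what those articles describe, here is a bare C skeleton of the central call: FSCTL_MOVE_FILE relocates a file's clusters to a run of free clusters that you must locate yourself first (via FSCTL_GET_VOLUME_BITMAP). The paths, target LCN and cluster count below are placeholders, it needs admin rights, and this is a sketch of the call sequence, not a working defragmenter.

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE vol  = CreateFileW(L"\\\\.\\C:", GENERIC_READ | GENERIC_WRITE,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                  OPEN_EXISTING, 0, NULL);
        HANDLE file = CreateFileW(L"C:\\data\\big.bin", GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                  OPEN_EXISTING, 0, NULL);
        if (vol == INVALID_HANDLE_VALUE || file == INVALID_HANDLE_VALUE) return 1;

        MOVE_FILE_DATA mv = {0};
        mv.FileHandle           = file;
        mv.StartingVcn.QuadPart = 0;          /* first cluster of the file           */
        mv.StartingLcn.QuadPart = 123456;     /* placeholder: a free, contiguous run */
        mv.ClusterCount         = 1024;       /* placeholder: clusters to move       */

        DWORD bytes;
        if (!DeviceIoControl(vol, FSCTL_MOVE_FILE, &mv, sizeof(mv),
                             NULL, 0, &bytes, NULL))
            fprintf(stderr, "FSCTL_MOVE_FILE failed: %lu\n", GetLastError());

        CloseHandle(file);
        CloseHandle(vol);
        return 0;
    }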
I know of no such guarantees.
But also keep in mind that NTFS "files" consist of multiple data streams. So you are actually looking for a way to guarantee that a stream is contiguous.
I believe there's no way to achieve that. You can only defragment the file after it's been written.