exFAT vs FAT32 copy speed: exFAT is slower

I'm working on an embedded system that handles file management on removable media. I noticed that, using the same media (USB or SD, formatted several times as both FAT32 and exFAT), copying a file onto it is about twice as fast on FAT32 as on exFAT.
I am wondering why there is such a difference.
I used the basic BusyBox 1.24.1 cp applet, as well as the tar_append_tree function (from libtar), with the same result in both cases. When I compare the copy durations on my Windows machine, I can't see any difference. Do you have any clue that could help me understand why there is such a gap between FAT32 and exFAT copy durations?
Thank you in advance for your reply.

Related

Read NTFS compressed file from disk, skip caches

I am trying to write a small app that starts a timer, reads every file in a folder, and stops the timer when everything is in memory.
For "normal", uncompressed files this works rather well, by opening the file with FILE_FLAG_NO_BUFFERING. Sadly, NTFS compressed files seem to be cached somewhere, as reads get faster and faster the more often I run the program.
I have tried RAMMap and clearing the standby memory, but the results are rather inconsistent, as Windows refills the caches as fast as possible.
How can this be done? How can I ensure that an NTFS compressed file is read from disk and decompressed on the spot?
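For reference, the unbuffered read the question describes looks roughly like this; a minimal sketch, with a placeholder file name and an arbitrary 64 KB chunk size:

```c
/* Minimal sketch of an unbuffered read loop; the file name and chunk size
 * are placeholders and error handling is abbreviated. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("testfile.bin", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* FILE_FLAG_NO_BUFFERING requires sector-aligned buffers, offsets, and
       read sizes; VirtualAlloc memory is page-aligned, which suffices. */
    const DWORD chunk = 64 * 1024;
    BYTE *buf = VirtualAlloc(NULL, chunk, MEM_COMMIT, PAGE_READWRITE);
    if (!buf) {
        CloseHandle(h);
        return 1;
    }

    DWORD got = 0;
    unsigned long long total = 0;
    while (ReadFile(h, buf, chunk, &got, NULL) && got > 0)
        total += got;

    printf("read %llu bytes\n", total);
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}
```

As the question notes, this is apparently not sufficient for NTFS-compressed files, whose decompressed contents still seem to be cached somewhere.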

SAMV71 USB Mass Storage Host extremely slow

I tried the example provided by Atmel's ASF for a USB mass storage host to send/read a file to a USB flash storage device. When reading a file, I'm getting 1.7 MB/s. I have tried a lot of solutions, including:
- Made sure it's running in High Speed mode, and the board is running at 300 MHz.
- Tried increasing the buffer size for the f_read function, which got me up to 2.2 MB/s.
- Tested the file system itself (FAT32) on a virtual memory example and got 30 MB/s on read operations (not sure if that's helpful for speed-debugging purposes).
- Tried the same program reading from an SD card instead, which gave me an output of 1 MB/s.
- Also tested it in Full Speed mode, which gave me an output of 0.66 MB/s.
- One extreme idea I tested was running two boards, one in host mode and the other in device mode, and measuring the raw USB transfer speed: 1.66 MB/s in Bulk mode (running at High Speed).
- Tried the Keil examples, which gave me worse results than Atmel's.
Can someone please suggest solutions? I've read all the documentation regarding USB communication provided by Atmel and Keil.
Atmel's mass storage USB stack lacks multi-sector reads and writes, even though the SCSI layer does implement the proper command to fetch many sectors in a row (see uhi_msc_scsi_read_10). The abstraction layer above the SCSI commands (uhi_msc_mem_read_10_ram and uhi_msc_mem_write_10_ram, for instance) only reads sector by sector, yielding very poor performance.
In order to achieve USB High Speed performance (~35 MB/s) you will have to hack these functions (and all the layers above) to use multi-sector reads/writes.
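To illustrate the difference, here is a minimal sketch; scsi_read10 is a hypothetical stand-in for a READ(10) issuer such as uhi_msc_scsi_read_10 (whose real, callback-based signature differs) and is stubbed so the code is self-contained:

```c
#include <stdbool.h>
#include <stdint.h>

#define SECTOR_SIZE 512u

/* Hypothetical stand-in for a READ(10) issuer such as uhi_msc_scsi_read_10();
 * stubbed here so the sketch compiles on its own. */
static bool scsi_read10(uint8_t lun, uint32_t lba, uint16_t count, uint8_t *buf)
{
    (void)lun; (void)lba; (void)count; (void)buf;
    return true; /* a real implementation builds and sends the SCSI command here */
}

/* Slow pattern (what the stock abstraction layer effectively does):
 * one SCSI command and one USB transaction per sector. */
static bool read_run_sector_by_sector(uint8_t lun, uint32_t lba,
                                      uint16_t count, uint8_t *buf)
{
    for (uint16_t i = 0; i < count; i++)
        if (!scsi_read10(lun, lba + i, 1, buf + (uint32_t)i * SECTOR_SIZE))
            return false;
    return true;
}

/* Fast pattern: a single READ(10) covering the whole contiguous run. */
static bool read_run_multi_sector(uint8_t lun, uint32_t lba,
                                  uint16_t count, uint8_t *buf)
{
    return scsi_read10(lun, lba, count, buf);
}

int main(void)
{
    static uint8_t buf[8 * SECTOR_SIZE];
    /* Read 8 contiguous sectors both ways; the fast path issues 1 command
       instead of 8. */
    bool ok = read_run_sector_by_sector(0, 0, 8, buf) &&
              read_run_multi_sector(0, 0, 8, buf);
    return ok ? 0 : 1;
}
```

The per-command overhead (CBW/CSW handshakes and USB scheduling) dominates when only one sector is fetched per command, which is why batching contiguous sectors recovers most of the High Speed bandwidth.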

How to detect if two files are on the same physical disk?

I want to parallelize some disk I/O, but this will hurt performance badly if my files are on the same disk.
Note that there may be various filter drivers in between the drivers and the actual disk -- e.g. two files might be on different virtual disks (VHDs), but on the same physical disk.
How do I detect if two HANDLEs refer to files (or partitions, or whatever) on the same physical disk?
If this is not possible -- what is the next-best/closest alternative?
This is rather convoluted, but the first step is getting the volume name from the handle. You can do that on Vista or above using the GetFinalPathNameByHandle function. If you need to support earlier Windows versions, you can use this example.
Now that you have the file name, you can parse out the drive letter or volume name, and use the method explained in the accepted answer to this question.
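A minimal sketch of that first step, assuming a placeholder file name:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "test.txt" is a placeholder; in practice this is the handle you
       already have open. */
    HANDLE h = CreateFileA("test.txt", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* VOLUME_NAME_GUID returns a \\?\Volume{...}\ prefix, which names the
       volume unambiguously even when it has no drive letter. */
    char path[MAX_PATH];
    DWORD len = GetFinalPathNameByHandleA(h, path, MAX_PATH,
                                          FILE_NAME_NORMALIZED | VOLUME_NAME_GUID);
    if (len > 0 && len < MAX_PATH)
        printf("%s\n", path);

    CloseHandle(h);
    return 0;
}
```

From the volume name, one common follow-up is IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS, which reports the physical disk number(s) backing a volume, so two handles can be compared by disk number.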

Speeding up compile time using a USB key?

As compilation is mostly reading small files, I wonder if buying a fast USB key to work on could speed up compile times compared to a standard SATA drive, and at a lower price than an SSD (16 GB keys are under $30).
Am I right?
PS: I'm working with .NET, VS and the whole MS toolchain, but I think this would hold for all languages...
[Edit] According to Tom's Hardware, a recent HDD's average read access time is near 14 ms, whereas even the slowest USB key's is 1.5 ms.
That's why I'm wondering if my hypothesis is right.
One year later, SSD prices have fallen considerably.
That is now the simplest solution, as you can host the operating system and all the dev tools on a fast SSD, which costs less than $120 (€100).
The specifications on Wikipedia's SATA page suggest otherwise: SATA revision 3.0 is 50% faster than USB 3.0, according to the spec.

How to read individual sectors/clusters using DeviceIoControl() in Windows?

I dropped my laptop while Windows was preparing to hibernate and, as a result, got a head crash on the hard drive. (Teaches me to get a hard drive and/or laptop with a freefall sensor next time around.) Anyway, running SpinRite to try to recover the data has used up all the spare sectors on the disk for the sectors recovered so far. SpinRite is still going right now, but since there won't be any more spare sectors to use, I think it'll be a fruitless exercise except to tell me where all the bad sectors are.
Anyway, I'm planning on writing an application to try to salvage data from the hard drive. From my past forays into defragging, I know that I can use FSCTL_GET_RETRIEVAL_POINTERS to figure out the logical cluster numbers for any given file.
How do I go about trying to read the sectors for an actual cluster? My digging through MSDN's listing of Disk, File, and Volume device control codes hasn't turned up anything that jumps out as the way to get at the actual cluster data.
Should I not even bother trying to read at that low level? Should I instead be doing SetFilePointer() and ReadFile() calls to seek to the appropriate cluster-sized offsets into the file and read cluster-sized chunks?
If the file I'm trying to read has a bad sector, will NTFS mark the entire file as bad and prevent me from accessing it in the future? If so, how do I tell NTFS not to mark the file as bad or dead? (Remember that the HD is now out of spare sectors to remap to.)
Should I dust off my *nix knowledge and figure out how to read from /dev/ ?
Update: I found the answer to my own question. :-) The solution is doing SetFilePointer() and ReadFile() on the volume handle rather than on the file handle.
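A minimal sketch of that approach (using SetFilePointerEx for the 64-bit offset); the drive letter, cluster size, and cluster number are placeholders (get the real cluster size from GetDiskFreeSpace and the LCNs from FSCTL_GET_RETRIEVAL_POINTERS), and note that opening a volume handle generally requires administrator rights:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the volume, not the file -- note \\.\C: with no trailing slash. */
    HANDLE hVol = CreateFileA("\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE)
        return 1;

    const DWORD clusterSize = 4096;  /* placeholder; query GetDiskFreeSpace */
    const LONGLONG lcn = 12345;      /* placeholder logical cluster number */

    /* Volume reads must be sector-aligned; a cluster-aligned offset and a
       page-aligned buffer from VirtualAlloc satisfy that. */
    BYTE *buf = VirtualAlloc(NULL, clusterSize, MEM_COMMIT, PAGE_READWRITE);
    if (!buf) {
        CloseHandle(hVol);
        return 1;
    }

    LARGE_INTEGER offset;
    offset.QuadPart = lcn * clusterSize;

    DWORD got = 0;
    if (SetFilePointerEx(hVol, offset, NULL, FILE_BEGIN) &&
        ReadFile(hVol, buf, clusterSize, &got, NULL))
        printf("read %lu bytes of cluster %lld\n", got, lcn);
    else
        fprintf(stderr, "read failed: %lu\n", GetLastError());

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(hVol);
    return 0;
}
```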
