Sparse files can significantly reduce the storage requirements for a file that has large "empty" portions. But does the increased bookkeeping for these sparse regions materially impact access performance?
This will, of course, depend on the file system implementation -- I am specifically asking about NTFS and ext4.
What are the disadvantages of using larger cache memories? Could we just use a cache memory large enough that secondary memory wouldn't be needed at all? I understand that the most compelling arguments relate to its cost and its size. But if we assume that building such a cache memory is possible, would it run into any additional problems?
There would be many problems even if it were not expensive
Size will degrade the performance
A cache is fast because it is very small compared to main memory, so it takes very little time to search. If you build a large cache, it will not be able to run at the same speed as its smaller counterpart.
Larger die area
Most DRAM chips need only a capacitor and a transistor to store a bit. SRAM, on the other hand, requires at least six transistors per memory cell, which takes up considerably more area.
High power requirements
Because of the extra transistors, SRAM requires more power to operate, which in turn generates more heat, so you would also have to deal with cooling.
So, as you can see, it's not worth the effort, given that today's computers already achieve cache hit ratios around 90% most of the time.
We're about to buy new hardware to run our analyses and are wondering if we're making the right decisions.
The setting:
We're a bioinformatics lab that will be handling DNA sequencing data. The biggest issue in our field is the amount of data rather than the compute. A single experiment will quickly go into the tens to hundreds of GB, and we would typically run different experiments at the same time. Obviously, mapreduce approaches are interesting (see also http://abhishek-tiwari.com/2010/08/mapreduce-and-hadoop-algorithms-in-bioinformatics-papers.html), but not all of our software uses that paradigm. Also, some software uses ASCII files as input/output while other software works with binary files.
What we might be buying:
The machine we might be buying would be a server with 32 cores and 192 GB of RAM, linked to NAS storage (>20 TB). This seems like a very interesting setup for many of our (non-mapreduce) applications, but will such a configuration prevent us from implementing hadoop/mapreduce/hdfs in a meaningful way?
Many thanks,
jan.
You have an interesting configuration. What would the disk I/O throughput of your NAS storage be?
Make your decision based on the following:
The MapReduce paradigm exists to solve the problem of handling large amounts of data. RAM is more expensive than disk storage, so you cannot hold all of the data in RAM; disks let you store large amounts of data cheaply, but the speed at which you can read data off a single disk is not very high. How does MapReduce solve this? By distributing the data over multiple machines, so that the aggregate speed at which you can read data in parallel is far greater than what a single disk could deliver. Suppose a single disk can be read at 100 Mbps; with 100 machines you can read the data at 100 × 100 Mbps = 10 Gbps.
Typically, processor speed is not the bottleneck; rather, disk I/O is the big bottleneck when processing large amounts of data.
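To make the pattern concrete, here is a minimal, single-machine sketch of map/reduce in Python (the file names are placeholders): each worker summarizes its own chunk of input in parallel (the "map"), and the small partial results are combined at the end (the "reduce"). On a real Hadoop cluster the chunks would live on different machines' disks, which is where the aggregate read bandwidth comes from.

```python
# Hypothetical sketch: map/reduce over a set of input chunks on one machine.
# In a real cluster each "map" task would run on the node that stores its chunk.
from multiprocessing import Pool


def map_count_bytes(path):
    """Map step: read one chunk (here, one file) and return a small partial result."""
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(1024 * 1024)  # read 1 MiB at a time
            if not block:
                break
            total += len(block)
    return total


def reduce_sum(partials):
    """Reduce step: combine the small partial results into the final answer."""
    return sum(partials)


if __name__ == "__main__":
    chunks = ["part-000.dat", "part-001.dat", "part-002.dat"]  # hypothetical file names
    with Pool(processes=len(chunks)) as pool:
        partials = pool.map(map_count_bytes, chunks)  # map tasks run in parallel
    print("total bytes:", reduce_sum(partials))
```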
I have a feeling that such a setup may not be very efficient for Hadoop.
Problem Description
I need to stream large files from disk. Assume the files are larger than will fit in memory. Furthermore, suppose that I'm doing some calculation on the data and that the result is small enough to fit in memory. As a hypothetical example, suppose I need to calculate the md5sum of a 200 GB file and I need to do so with guarantees about how much RAM will be used.
In summary:
Needs to be constant space
Fast as possible
Assume very large files
Result fits in memory
Question
What are the fastest ways to read/stream data from a file using constant space?
Ideas I've had
If the file were small enough to fit in memory, then mmap on POSIX systems would be very fast; unfortunately, that's not the case here. Is there any performance advantage to using mmap with a small window to map successive chunks of the file? Would the system-call overhead of moving the mmap window down the file dominate any advantage? Or should I just use a fixed buffer that I read into with fread?
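For concreteness, here is a rough sketch (in Python; the same pattern applies to C's mmap/munmap) of the "sliding mmap window" idea I have in mind. The window size and file name are placeholders; note that the mapping offset has to be a multiple of mmap.ALLOCATIONGRANULARITY.

```python
# Hypothetical sketch: process a huge file through a sliding mmap window.
import mmap
import os

WINDOW = 64 * mmap.ALLOCATIONGRANULARITY  # window size; offsets stay granularity-aligned

def process(chunk: bytes) -> None:
    pass  # placeholder for the real calculation (e.g. feeding a hash)

path = "huge.dat"  # hypothetical file name
size = os.path.getsize(path)
with open(path, "rb") as f:
    offset = 0
    while offset < size:
        length = min(WINDOW, size - offset)
        with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ, offset=offset) as window:
            process(window[:])  # copy out; the mapping is unmapped when the 'with' exits
        offset += length
```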
I wouldn't be so sure that mmap would be very fast (where very fast is defined as significantly faster than fread).
Grep used to use mmap, but switched back to fread. One of the reasons was stability (strange things happen with mmap if the file shrinks whilst it is mapped or an IO error occurs). This page discusses some of the history about that.
You can compare the performance on your system with the option --mmap to grep. On my system the difference in performance on a 200GB file is negligible, but your mileage might vary!
In short, I'd use fread with a fixed-size buffer. It's simpler to code, makes error handling easier, and will almost certainly be fast enough.
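For illustration, here is a minimal Python sketch of the fixed-buffer approach applied to the md5 example from the question (the 64 KiB buffer size is just an assumption): memory use stays bounded by the buffer no matter how big the file is.

```python
# Minimal sketch: stream a file of any size through a fixed-size buffer,
# so memory use is bounded by BUF_SIZE regardless of file size.
import hashlib
import sys

BUF_SIZE = 64 * 1024  # 64 KiB; an assumed value, tune for your system

def md5_of_file(path: str) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BUF_SIZE)  # never holds more than BUF_SIZE bytes
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(md5_of_file(sys.argv[1]))
```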
Depending on the language you are using, a C-like fread() loop with a buffer of a particular declared size will use exactly that much memory, no more, no less.
We typically choose a buffer size of 4 to 128 KB; there is little gain, if any, with bigger buffers.
If performance were extremely important, one could, for relatively little gain (and at the risk of reinventing something), consider a two-thread implementation in which one thread reads the file into a pair of buffers while the other thread performs the calculations sequentially on one buffer at a time. In this fashion the disk-access delays can be hidden behind the computation.
mjv is right. You can use double buffering and overlapped I/O so that your crunching and the disk reading happen at the same time. Then I would profile or stack-shot the crunching to make it as fast as possible. With luck it will be faster than the I/O, so you will end up running the I/O at top speed without pause. Then things like file fragmentation come into the picture.
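Here is a hedged sketch of that double-buffered pattern in Python (the chunk size and file name are assumptions): a reader thread stays at most one chunk ahead of the worker via a small bounded queue, so reading and crunching overlap while memory stays bounded. In C you would do the equivalent with two buffers and threads or overlapped I/O.

```python
# Sketch of double buffering: a reader thread stays at most one chunk ahead
# of the worker, so disk reads and computation overlap with bounded memory.
import hashlib
import queue
import threading

BUF_SIZE = 1024 * 1024           # 1 MiB chunks; an assumed value
chunks = queue.Queue(maxsize=2)  # at most two chunks in flight ("double buffer")

def reader(path: str) -> None:
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BUF_SIZE)
            chunks.put(chunk)        # blocks if the worker falls behind
            if not chunk:            # empty bytes object signals end of file
                break

def main(path: str) -> None:
    digest = hashlib.md5()
    t = threading.Thread(target=reader, args=(path,), daemon=True)
    t.start()
    while True:
        chunk = chunks.get()
        if not chunk:
            break
        digest.update(chunk)         # compute while the reader fetches the next chunk
    t.join()
    print(digest.hexdigest())

if __name__ == "__main__":
    main("huge.dat")  # hypothetical file name
```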
It used to be that disk compression was used to increase storage space at the expense of efficiency, but back then we were all on single-processor systems.
These days there are extra cores around to potentially do the decompression work in parallel with processing the data.
For I/O-bound applications (particularly read-heavy sequential data processing), it might be possible to increase throughput by reading and writing only compressed data to disk.
Does anyone have any experience to support or reject this conjecture?
Take care not to confuse disk seek times with disk read rates. It takes millions of CPU cycles (5–10 milliseconds, i.e. 5–10 million nanoseconds) to seek to the right track on a hard drive (HDD). Once you're there, you can read tens of megabytes of data per second, assuming low fragmentation. For solid-state drives (SSDs), seek times are much lower (35,000–100,000 ns) than for HDDs.
Whether or not the data is compressed on the disk, you still have to seek. The question becomes: is (disk read time for compressed data + decompression time) < (disk read time for uncompressed data)? Decompression is relatively fast, since it amounts to replacing a short token with a longer one. In the end, it probably boils down to how well the data was compressed and how big it was in the first place. If you're reading a 2 KB compressed file instead of a 5 KB original, it's probably not worth it. If you're reading a 2 MB compressed file instead of a 25 MB original, it likely is.
Measure with a reasonable workload.
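As a starting point for that measurement, here is a rough Python sketch (the file names are placeholders) that times reading an uncompressed file against reading and decompressing a gzip-compressed copy of it. For honest numbers, drop the OS page cache between runs.

```python
# Rough measurement sketch: is reading + decompressing the compressed copy
# faster than reading the uncompressed original? File names are placeholders.
import gzip
import time

BUF_SIZE = 1024 * 1024  # 1 MiB

def drain(f) -> int:
    """Read a file object to the end in fixed-size chunks; return bytes seen."""
    total = 0
    while True:
        chunk = f.read(BUF_SIZE)
        if not chunk:
            return total
        total += len(chunk)

def timed(label, opener, path):
    start = time.perf_counter()
    with opener(path, "rb") as f:
        n = drain(f)
    elapsed = time.perf_counter() - start
    print(f"{label}: {n / 1e6:.1f} MB in {elapsed:.2f} s")

if __name__ == "__main__":
    timed("uncompressed", open, "data.raw")        # hypothetical paths
    timed("compressed  ", gzip.open, "data.raw.gz")
```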
Yes! In fact, processors are so ridiculously fast now that it even makes sense for main memory (IBM does this, I believe), and I believe some of the current big-iron machines even do compression at the CPU cache level.
Yes, this makes perfect sense. On NT-based Windows operating systems, it's widely accepted that enabling NTFS compression can sometimes be faster than leaving it off, for precisely this reason. This has been true for years, and multicore should only make it more so.
I think it also depends on how aggressive your compression is versus how I/O-bound you are.
For example, DB2's row compression feature is targeted at I/O-bound applications: data warehouses, reporting systems, etc. It uses a dictionary-based algorithm and isn't very aggressive, resulting in 50-80% compression of the data (tables and indexes, both in storage and in memory). However, it also tends to speed queries up by around 10%.
They could have gone with much more aggressive compression, but then would have taken a performance hit.
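To illustrate what dictionary-based compression means in principle, here is a toy Python sketch (purely illustrative, not DB2's actual algorithm): repeated values are stored once in a shared dictionary and each row keeps only small integer codes.

```python
# Toy illustration of dictionary encoding (not DB2's actual algorithm):
# frequently repeated values are stored once and rows keep only small codes.
def dict_encode(rows):
    dictionary = {}          # value -> small integer code
    encoded = []
    for row in rows:
        coded_row = []
        for value in row:
            code = dictionary.setdefault(value, len(dictionary))
            coded_row.append(code)
        encoded.append(coded_row)
    return dictionary, encoded

def dict_decode(dictionary, encoded):
    reverse = {code: value for value, code in dictionary.items()}
    return [[reverse[code] for code in row] for row in encoded]

rows = [
    ["WAREHOUSE-01", "SHIPPED", "2010-08-01"],
    ["WAREHOUSE-01", "SHIPPED", "2010-08-02"],
    ["WAREHOUSE-02", "SHIPPED", "2010-08-02"],
]
dictionary, encoded = dict_encode(rows)
assert dict_decode(dictionary, encoded) == rows
print(encoded)   # prints [[0, 1, 2], [0, 1, 3], [4, 1, 3]]
```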
Given that solid-state drives (SSDs) are decreasing in price and will soon become more prevalent as system drives, and given that their access rates are significantly higher than those of rotating magnetic media, what standard algorithms will gain in performance from the use of SSDs for local storage? For example, the high random read speed of SSDs makes something like a disk-based hashtable viable for large hashtables; 4 GB of disk space is readily available, which makes hashing to the entire range of a 32-bit integer viable (more for lookup than for population, though, which would still take a long time); while a hashtable of this size would be prohibitive to work with on rotating media because of the access speed, it shouldn't be as much of an issue with SSDs.
Are there any other areas where the impending transition to SSDs will provide potential gains in algorithmic performance? I'd rather see reasoning as to how one thing will work rather than opinion; I don't want this to turn contentious.
Your example of hashtables is indeed the key database structure that will benefit. Instead of having to load a whole 4GB or more file into memory to probe for values, the SSD can be probed directly. The SSD is still slower than RAM, by orders of magnitude, but it's quite reasonable to have a 50GB hash table on disk, but not in RAM unless you pay big money for big iron.
An example is chess position databases. I have over 50GB of hashed positions. There is complex code to try to group related positions near each other in the hash, so I can page in 10MB of the table at a time and hope to reuse some of it for multiple similar position queries. There's a ton of code and complexity to make this efficient.
Having replaced the disk with an SSD, I was able to drop all the complexity of the clustering and just use really dumb randomized hashes. I also got an increase in performance, since I only fetch the data I need from the disk, not big 10MB chunks. The latency is indeed higher, but the net speedup is significant, and the super-clean code (20 lines, not 800+) is perhaps even nicer.
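For what it's worth, here is a minimal sketch of what such a "dumb" on-disk hash probe can look like in Python once random reads are cheap (the record size, hash choice, and file name are illustrative assumptions): compute the bucket, seek, read one fixed-size record.

```python
# Minimal sketch of probing an on-disk hash table with fixed-size records.
# With an SSD the random seek per probe is cheap, so no clustering/paging
# logic is needed; sizes and the file name are illustrative assumptions.
import os
import zlib

RECORD_SIZE = 64                              # bytes per bucket record

def probe(path: str, key: bytes) -> bytes:
    n_buckets = os.path.getsize(path) // RECORD_SIZE
    bucket = zlib.crc32(key) % n_buckets      # "really dumb" hash
    with open(path, "rb") as f:
        f.seek(bucket * RECORD_SIZE)          # one random read per lookup
        return f.read(RECORD_SIZE)

if __name__ == "__main__":
    record = probe("positions.tbl", b"some-position-key")  # hypothetical table and key
    print(record[:16])
```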
SSDs are only significantly faster for random access. For sequential access they are only about twice as fast as mainstream rotational drives, and many SSDs perform worse in certain scenarios, as described here.
While SSDs do move the needle considerably, they are still much slower than CPU operations and physical memory. For your 4 GB hash table example, you may be able to sustain 250+ MB/s off an SSD when accessing random hash table buckets. For a rotational drive, you'd be lucky to break single-digit MB/s. If you can keep the 4 GB hash table in memory, you could access it on the order of gigabytes per second, much faster than even a very swift SSD.
The referenced article lists several changes Microsoft made for Windows 7 when running on SSDs, which can give you an idea of the sort of changes you could consider making. First, SuperFetch, which prefetches data off disk, is disabled; it's designed to work around slow random access times on disk, which SSDs alleviate. Defrag is disabled, because having files scattered across the disk isn't a performance hit for SSDs.
Ipso facto, any algorithm you can think of that requires lots of random disk I/O will benefit (random being the key word: it throws the principle of locality to the birds, eliminating the usefulness of a lot of the caching that goes on).
I could see certain database systems gaining from this, though: MySQL with the MyISAM storage engine, for instance (where data records are basically glorified CSVs). However, I think very large hashtables are going to be your best bet for good examples.
SSDs are a lot faster for random reads, a bit faster for sequential reads, and probably slower for writes (random or not).
So a disk-based hashtable is probably not that useful with an SSD if you need to update it often, since writes now take significant time, but searching it becomes (compared to a normal HDD) very cheap.
Don't kid yourself. SSDs are still a whole lot slower than system memory. Any algorithm that chooses to use system memory over the hard disk is still going to be much faster, all other things being equal.