Disadvantage of Using Linked Lists in Memory Management - algorithm

I'm kinda confused as to what the primary disadvantage of using a linked list would be in maintaining a list of free disk blocks. My professor said that using a bit map would help solve said problem. Why does using a bit map solve this problem?
To narrow down my questions:
What is the primary disadvantage of using a linked list in maintaining a list of free disk blocks?
Why does using a bit map solve this problem/disadvantage?

Hi,
What is the primary disadvantage of using a linked list in maintaining a list of free disk blocks?
This scheme is not very efficient: to traverse the list, we must read each block, which requires substantial time.
The second disadvantage is the additional memory required to maintain a linked list of all free disk blocks.
Why does using a bit map solve this problem/disadvantage?
Most often the list of free disk space is implemented as a bit map or bit vector. Each block is represented by a single bit: 0 (zero) marks a free block, whereas 1 marks an allocated block. So very little extra memory is needed to track free disk space.
Fast random-access allocation check: checking whether a given block is free is as simple as checking the corresponding bit, which is much faster than traversing a linked list (see the sketch after this list).
Other Advantage of using Bit Map:
Fast deletion: data need not be overwritten on delete; flipping the corresponding bit is sufficient.
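To make the bit-twiddling concrete, here is a minimal sketch in C (my own illustration, not part of the answer above; TOTAL_BLOCKS, block_is_free, etc. are invented names) of how such a bitmap could be queried and updated:

#include <stdint.h>
#include <stdbool.h>

#define TOTAL_BLOCKS 4096               /* example disk size in blocks */

/* Bit i describes disk block i; 0 = free, 1 = allocated, as above. */
static uint8_t bitmap[TOTAL_BLOCKS / 8];

static bool block_is_free(uint32_t block)
{
    return (bitmap[block / 8] & (1u << (block % 8))) == 0;
}

static void mark_allocated(uint32_t block)
{
    bitmap[block / 8] |= (uint8_t)(1u << (block % 8));
}

static void mark_free(uint32_t block)
{
    bitmap[block / 8] &= (uint8_t)~(1u << (block % 8));
}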
Hope this helps you. Feel free to ask for further clarification.
Regards,
Bhavik

The correct solution was given by #FullDecent in the comments to the other answer (he deserves your bounty). To elaborate:
Assuming that the disk drive in question is of the older, conventional type, with a spinning storage surface and a read/write head that physically moves radially across the surface...
In general it is good for files to be stored as contiguously on disk as possible, so that multiple blocks can be read sequentially. If a file is "fragmented" (its blocks are scattered in different places on the disk), the drive head will need to be repositioned several times to read the entire file. Repositioning of the head is one of the most time-consuming operations involved in a disk read (second only to starting the disk spinning after it has been stopped). Hence the procedure known as "defragmentation" or "defragging", which rearranges the used blocks on a disk to make all files contiguous.
With a linked list of free blocks, allocation involves taking blocks from the front of the list, and deallocation involves adding freed blocks to the front of the list. Hence the list can get messy, with blocks that are not adjacent on the disk frequently being adjacent in the list. To find a contiguous stretch of free blocks large enough for a large file, it may be necessary to scan a significant fraction of the list.
With a bitmap, it will still be necessary to scan for a large contiguous free block section, but this is easier since 8, 16, 32, or 64 bits (depending on the hardware's word size) can be checked in a single operation.
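As a rough sketch (again my own, with invented names), checking one machine word at a time might look like this in C, where a 0 bit means "free" as in the first answer:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical bitmap of free blocks packed into 64-bit words; a 0 bit
   means "free".  Returns the index of the first word that is entirely
   free (64 contiguous free blocks), or -1 if there is none. */
long find_free_run(const uint64_t *bitmap, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        if (bitmap[i] == 0)          /* 64 blocks checked in one compare */
            return (long)i;
    }
    return -1;
}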

Related

How can I better understand the impact of modern caching on algorithm performance?

I'm reading the following paper: http://www-db.in.tum.de/~leis/papers/ART.pdf and in it, they say in the abstract:
Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries.
How can I better understand this utilization of on-CPU caches and how it impacts the performance of particular data structures/algorithms?
Just somewhere to get started would be great because this sort of analysis is really opaque to me and I don't know where to go to start understanding.
This is going to be a really basic answer, as it would otherwise be extremely broad. I'm also not an expert on the subject (picking up bits and pieces to help understand how to optimize my hotspots better). But it might help you get started investigating this subject.
The topic reminds me of my university days when computer architecture courses only taught about registers, DRAM, and disk, while glossing over the CPU cache in between. The CPU cache is one of the most dominant factors these days in performance.
The memory of the computer is divided into a hierarchy ranging from the absolute biggest but slowest (disk) to absolute smallest but fastest (registers).
Below disk is DRAM which is still pretty slow. And above registers is the CPU cache which is pretty damned fast (especially the smallest L1 cache).
Accessing One Node
Now let's say you request access to memory from some data structure, say a linked structure like a tree or linked list, and we're just accessing one node.
Note, I'm inverting the view of memory access for simplicity. Typically it begins with an instruction to load something into a register with the process working backwards and forwards, rather than merely forwards.
Virtual to Physical (DRAM)
In this case, unless the memory is already mapped to physical memory, the operating system has to map a page from virtual memory to a physical address in DRAM (this is freaking slow, especially in the worst-case scenario where the page fault involves a disk access). This is often done in pretty hefty chunks (the machine grabs memory by the handful), like aligned 4-kilobyte chunks. So we end up grabbing a big old 4-kilobyte aligned chunk of memory just for this one node.
DRAM to CPU Cache
Now that this 4-kilobyte page is physically mapped, we still want to do something with the node (most instructions have to operate at the register level) so the computer moves it down through the CPU cache hierarchy (this is pretty slow). Typically all levels of CPU cache have the same cache-line size, like 64-byte cache lines on Intel.
To move the memory from DRAM into these CPU caches, we have to grab a chunk of cache-line-sized-and-aligned memory from DRAM and move it into the CPU cache. We might also have to evict some data already in various levels of the CPU cache hierarchy on the way, like the least recently used memory. So now we're grabbing a 64-byte aligned handful of memory for this node.
Maybe at this point the cache line holds the relevant node data, say 42, surrounded by irrelevant memory (call it ???) that is not part of our linked data structure.
CPU Cache to Register
Now we move the memory from CPU cache into a register (this occurs very quickly). And here we're still grabbing memory in sort of a handful, but a pretty small one. For example, we might grab a 64-bit aligned chunk of memory and move it into a general-purpose register. So we grab the memory around "42" here and move it into a register.
Finally we do some operations on the register and store the results, and the results often kind of work their way back up the memory hierarchy.
Accessing One Other Node
When we access the next node in the linked structure, we potentially end up having to do this all over again, just to read one little node's data. Again the cache line will mostly hold irrelevant surrounding memory, with only that one node's data (say, 22) being of interest.
We can see potentially how much wasted effort the hardware and operating system are applying, moving big, aligned chunks of data from slower memory to faster memory only in order to access one little teeny bit of it prior to eviction.
And that's why little objects all allocated separately, as in the case of linked nodes or languages which can't represent user-defined types contiguously, aren't very cache- or page-friendly. They tend to invoke a lot of page faults and cache misses as we traverse them, accessing their data. That is, unless they have help from a memory allocator which allocates these nodes in a more contiguous fashion (in which case the data of two or more nodes might be right next to each other and accessed together).
Contiguity and Spatial Locality
The most cache-friendly data structures tend to be based on contiguous arrays (it doesn't have to be one gigantic array, but perhaps arrays linked together, e.g., as is the case of an unrolled list). When we iterate through an array and access the first element, we might have to go through the motions described above, but once the memory is moved into a cache line, a whole run of adjacent elements comes along with it.
Now we can iterate through the array and access all the elements while they sit in the second-fastest form of memory on the machine, the L1 cache, simply moving data from L1 cache to register after the initial compulsory cache miss/page fault. The first element costs us that compulsory cache miss, but all the subsequent elements in the same cache line can then be accessed without repeating the motions above. This is extremely fast, and the computer can blaze through such data.
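To make the contrast concrete, here is a small hypothetical C sketch (not from the paper or the answer above): both functions do the same arithmetic, but the array version walks memory sequentially and so gets several elements per cache-line fill, while the list version may jump to a different cache line (and possibly a different page) at every node.

#include <stddef.h>

struct node {
    int          value;
    struct node *next;   /* may point anywhere in memory */
};

/* Contiguous: each cache-line fill serves several consecutive elements. */
long sum_array(const int *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Linked: every node dereference can miss the cache if the nodes are
   scattered around by the allocator. */
long sum_list(const struct node *head)
{
    long sum = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        sum += p->value;
    return sum;
}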
So that was what was meant by this part:
Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches.
Note that it is possible to make linked structures like trees and linked lists substantially more cache-friendly than they naturally are by using a custom memory allocator, but they lack this inherent cache-friendliness at the basic data structure level.
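As a rough sketch of what such allocator help might look like (a toy pool with invented names, not any specific library), handing out nodes from one contiguous slab keeps successively allocated nodes on the same cache lines:

#include <stdlib.h>

struct node {
    int          value;
    struct node *next;
};

/* A very small fixed-size pool: nodes allocated one after another come
   from one contiguous block, so traversal touches adjacent memory. */
struct node_pool {
    struct node *slab;
    size_t       used;
    size_t       capacity;
};

static int pool_init(struct node_pool *p, size_t capacity)
{
    p->slab = malloc(capacity * sizeof *p->slab);
    p->used = 0;
    p->capacity = capacity;
    return p->slab ? 0 : -1;
}

static struct node *pool_alloc(struct node_pool *p)
{
    if (p->used == p->capacity)
        return NULL;                 /* a real pool would grow another slab */
    return &p->slab[p->used++];
}

static void pool_destroy(struct node_pool *p)
{
    free(p->slab);                   /* frees every node at once */
}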
Hash tables, on the other hand, tend to be contiguous table structures based on arrays. They might use chaining and linked bucket structures, but those are also easier to make cache-efficient with a little teeny bit of help from the custom allocator (far less than the tree due to the simpler, sequential access patterns within a hash bucket).
So anyway, that's a little brief overview on the subject, a bit oversimplified, but hopefully enough to help get started. If you want to understand this subject at a deeper level, keywords would be cache/memory efficiency/optimization and locality of reference.

Dynamic array - Does it deallocate memory when elements are removed?

Going by the article in wikipedia Dynamic Array
It automatically allocates memory in geometrically growing amounts once the last empty cell is filled up, and then copies the entire contents to the new array. What happens when one removes more elements than the amount by which it was grown? Does it automatically deallocate memory too? Or does it leave it as it is?
For example, in the image at the top right of the Wikipedia article linked above, suppose that after the last step, 2|7|1|3|8|4, one removes all the elements except 2. What happens then? Does it allocate a smaller block of memory and copy the entire contents to the new one?
Side question: How or what decides what would be initial amount of memory allocated to a dynamic array?
The article you cite answers your question:
"Many dynamic arrays also deallocate some of the underlying storage if its size drops below a certain threshold, [...]"
It's really worth reading ;-)
For the cases where you know in advance that you will need a specific size, some implementations provide a specific method ("reserve()", in the C++ Standard Library).
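As an illustration only (real implementations differ in their thresholds), a dynamic array in C might shrink when the element count drops below a quarter of the capacity; shrinking at one quarter rather than one half avoids thrashing when pushes and pops alternate near the boundary:

#include <stdlib.h>

struct dynarray {
    int   *data;
    size_t size;       /* elements in use */
    size_t capacity;   /* elements allocated */
};

/* Remove the last element; shrink when usage falls below 1/4 of capacity. */
void da_pop(struct dynarray *a)
{
    if (a->size == 0)
        return;
    a->size--;
    if (a->capacity > 4 && a->size < a->capacity / 4) {
        size_t new_cap = a->capacity / 2;
        int *p = realloc(a->data, new_cap * sizeof *p);
        if (p != NULL) {             /* keep the old block if realloc fails */
            a->data = p;
            a->capacity = new_cap;
        }
    }
}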

Why would anyone use best fit memory allocation?

I'm reading Modern Operating Systems by Andrew Tanenbaum, and he writes that best fit is a widely used memory allocation algorithm.
He also writes that it's slower than first fit/next fit, since it has to search the entire list of free memory. And that it tends to waste more memory, since it leaves behind a lot of small, useless gaps in memory.
Why is it then widely used? Is there some obvious advantage I have overlooked?
First, it is not that widely used (like all sequential fits), except, perhaps, in homeworks ;). In my opinion, the widely used strategy is segregated fits (which can very closely approximate best fit).
Second, the best-fit strategy can be implemented by using a tree of free lists of various sizes.
Third, it's considered one of the best policies with regard to memory fragmentation.
See
Dynamic Storage Allocation: A Survey and Critical Review
The Memory Fragmentation Problem: Solved?
for information about memory management, not Tanenbaum.
I think it's a mischaracterisation to say that it wastes more memory than first fit. Best fit maximizes available space compared to first fit, particularly when it comes to conserving space available for large allocations. This blog post gives a good example.
Space efficiency and versatility are really the answer. Large blocks can fit unknown future needs better than small blocks, so a best-fit algorithm tries to use the smallest blocks first.
First-fit and next-fit algorithms (which can also cut up blocks) may end up using pieces of the larger blocks first, which increases the risk that a large malloc() will fail. This is essentially external fragmentation harming the availability of large blocks.
A best-fit algorithm will often find fits that are only a few bytes larger, leading to fragmentation of only a few bytes, while also saving the large blocks for when they're needed. Also, leaving the large blocks untouched as long as possible helps cache locality and minimizes the load on the MMU, minimizing costly page faults and saving memory pages for other programs.
A good best-fit algorithm will properly maintain its speed even when it's managing a large number of small fragments, by increasing internal fragmentation (which is hard to reclaim) and/or by using good lookup tables and search trees.
First-fit and next-fit still face their own searching problems. Without good size indexing, they too have to spend time searching through blocks for one that fits. Since their "standards are lower," they may find a fit faster using a straightforward search, but as soon as you add intelligent indexing, the speeds of all the algorithms become much closer.
The one I've been using and tweaking for the last 6 years can find the best-fit block in O(1) time for >90% of all allocs. It utilizes a handful of strategies to jump straight to the right block, or to start very close so searching is minimized. It has, on more than one occasion, replaced existing block-pool or first-fit allocators due to its performance and ability to pack allocations more efficiently.
Best fit is not the best allocation strategy, but it is better than first fit and next fit. The reason is because it suffers from less fragmentation problems than the latter two.
Consider a micro heap of 64 bytes. First we fill it by allocating one 32 and two 16 byte blocks in that order. Then we free all blocks. There are now three free blocks in the heap, one 32 byte and two 16 byte ones.
Using first fit, we allocate one 16-byte block. We do it using the 32-byte block (because it is first in the heap!) and the remaining 16 bytes of that block are split off into a new free block. So there is one allocated 16-byte block at the beginning of the heap, followed by three free 16-byte blocks.
What happens if we now want to allocate a 32-byte block? We can't! There are still 48 bytes free in the heap, but fragmentation has screwed us over.
What would have happened if we had used best fit? When we were searching for a free block to use for our 16 byte allocation, we would have skipped over the 32 byte block at the beginning of the heap and instead picked the 16 byte block after it. That would have preserved the 32 byte block for larger allocations.
I suggest you draw it on paper, that makes it very easy to see what goes on with the heap during allocation and freeing.
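If you would rather play with it in code than on paper, here is a hypothetical sketch of the two search policies over a simple free list (names invented). With the free list from the example above (32, 16, 16) and a 16-byte request, first_fit returns the 32-byte block while best_fit returns one of the 16-byte blocks:

#include <stddef.h>

struct free_block {
    size_t             size;
    struct free_block *next;
};

/* First fit: return the first block large enough. */
struct free_block *first_fit(struct free_block *head, size_t want)
{
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= want)
            return b;
    return NULL;
}

/* Best fit: scan the whole list and return the smallest block that is
   still large enough, preserving big blocks for big requests. */
struct free_block *best_fit(struct free_block *head, size_t want)
{
    struct free_block *best = NULL;
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= want && (best == NULL || b->size < best->size))
            best = b;
    return best;
}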

Heap Type Implementation

I was implementing a heap sort and started wondering about the different implementations of heaps. When you don't need to access the elements by index (like in a heap sort), what are the pros and cons of implementing a heap with an array versus with a linked data structure?
I think it's important to take into account the memory wasted by the nodes and pointers vs the memory wasted by empty spaces in an array, as well as the time it takes to add or remove elements when you have to resize the array.
When I should use each one and why?
As far as space is concerned, there's very little issue with using arrays if you know how much is going into the heap ahead of time -- your values in the heap can always be pointers to the larger structures. This may afford better cache locality on the heap itself, but you're still going to have to go out someplace in memory for the extra data. Ideally, if your comparison is based on a small morsel of data (often just a 4-byte float or integer), you can store that as the key alongside a pointer to the full data and achieve good cache behavior.
Heap sort is already not particularly cache-friendly while traversing the heap structure itself, however. For small heaps that fit entirely in L1/L2 cache, it's not really so bad. However, as you start hitting main memory, performance will dive-bomb. Usually this isn't an issue, but if it is, merge sort is your savior.
The larger problem comes in when you want a heap of undetermined size. However, this still isn't so bad, even with arrays. These days, in non-embedded environments with nice, pretty memory systems, growing an array with some calls (e.g. realloc, please forgive my C background) really isn't all that slow, because the data may not need to physically move in memory -- just some address-pointer magic for most of it. Add to that the fact that if you use an array-size-doubling strategy (array is too small, double the size in a realloc call), you still end up with O(n) total amortized cost for n insertions (i.e. amortized O(1) per insertion) with relatively few reallocs and at most double the wasted space -- but hey, you'd get that with linked lists anyway if you're using a 32-bit key and 32-bit pointer.
So, in short, I'd stick with arrays for the smaller base data structures. When the heap goes away, so do the pointers I don't need anymore with a single deallocation. However, it's easier to read pointer-based code for heaps in my opinion since dealing with the indexing magic isn't quite as straightforward. If performance and memory aren't a concern, I'd recommend that to anyone in a heartbeat.
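For what it's worth, the "indexing magic" is just the standard 0-based array layout of a binary heap, sketched here in C:

#include <stddef.h>

/* 0-based array layout of a binary heap: the children of node i live at
   2*i + 1 and 2*i + 2, and its parent at (i - 1) / 2 (valid for i > 0),
   so no pointers are needed to walk up or down the tree. */
static size_t heap_parent(size_t i) { return (i - 1) / 2; }
static size_t heap_left(size_t i)   { return 2 * i + 1; }
static size_t heap_right(size_t i)  { return 2 * i + 2; }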

Data structure and algorithm for representing/allocating free space in a file

I have a file with "holes" in it and want to fill them with data; I also need to be able to free "used" space and make free space.
I was thinking of using a bi-map that maps offset and length. However, I am not sure that is the best approach if there are really tiny gaps in the file. A bitmap would work, but I don't know how it could easily be switched to a dynamic representation for certain regions of space. Perhaps some sort of radix tree is the way to go?
For what it's worth, I am up to speed on modern file system design (ZFS, HFS+, NTFS, XFS, ext...) and I find their solutions woefully inadequate.
My goals are to have pretty good space savings (hence the concern about small fragments). If I didn't care about that, I would just go for two splay trees... One sorted by offset and the other sorted by length with ties broken by offset. Note that this gives you amortized log(n) for all operations with a working set time of log(m)... Pretty darn good... But, as previously mentioned, does not handle issues concerning high fragmentation.
I have shipped commercial software that does just that. In the latest iteration, we ended up sorting blocks of the file into "type" and "index," so you could read or write "the third block of type foo." The file ended up being structured as:
1) File header. Points at master type list.
2) Data. Each block has a header with type, index, logical size, and padded size.
3) Arrays of (offset, size) tuples for each given type.
4) Array of (type, offset, count) that keeps track of the types.
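As a rough guess at how such on-disk records might be declared in C (field names and types are my assumptions, not the poster's actual format):

#include <stdint.h>

/* Hypothetical on-disk records, roughly matching the layout above. */
struct file_header {
    uint64_t type_list_offset;   /* points at the master type list */
    uint32_t type_count;
};

struct block_header {            /* precedes each data block */
    uint32_t type;
    uint32_t index;
    uint64_t logical_size;
    uint64_t padded_size;
};

struct block_entry {             /* one per block, within a type's index */
    uint64_t offset;
    uint64_t size;
};

struct type_entry {              /* one per type, in the "index index" */
    uint32_t type;
    uint64_t offset;             /* where that type's block_entry array lives */
    uint32_t count;
};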
We defined it so that each block was an atomic unit. You started writing a new block, and finished writing that before starting anything else. You could also "set" the contents of a block. Starting a new block always appended at the end of the file, so you could append as much as you wanted without fragmenting the block. "Setting" a block could re-use an empty block.
When you opened the file, we loaded all the indices into RAM. When you flushed or closed a file, we re-wrote each index that changed, at the end of the file, then re-wrote the index index at the end of the file, then updated the header at the front. This means that changes to the file were all atomic -- either you commit to the point where the header is updated, or you don't. (Some systems use two copies of the header 8 kB apart to preserve headers even if a disk sector goes bad; we didn't take it that far)
One of the block "types" was "free block." When re-writing changed indices, and when replacing the contents of a block, the old space on disk was merged into the free list kept in the array of free blocks. Adjacent free blocks were merged into a single bigger block. Free blocks were re-used when you "set content" or for updated type block indices, but not for the index index, which always was written last.
Because the indices were always kept in memory, working with an open file was really fast -- typically just a single read to get the data of a single block (or get a handle to a block for streaming). Opening and closing was a little more complex, as it needed to load and flush the indices. If it becomes a problem, we could load the secondary type index on demand rather than up-front to amortize that cost, but it never was a problem for us.
Top priority for persistent (on disk) storage: Robustness! Do not lose data even if the computer loses power while you're working with the file!
Second priority for on-disk storage: Do not do more I/O than necessary! Seeks are expensive. On Flash drives, each individual I/O is expensive, and writes are doubly so. Try to align and batch I/O. Using something like malloc() for on-disk storage is generally not great, because it does too many seeks. This is also a reason I don't like memory mapped files much -- people tend to treat them like RAM, and then the I/O pattern becomes very expensive.
For memory management I am a fan of the BiBOP* approach, which is normally efficient at managing fragmentation.
The idea is to segregate data based on their size. This way, within a "bag" you only have "pages" of small blocks with identical sizes:
no need to store the size explicitly, it's known depending on the bag you're in
no "real" fragmentation within a bag
The bag keeps a simple free-list of the available pages. Each page keeps a free-list of available storage units in an overlay over those units.
You need an index to map size to its corresponding bag.
You also need a special treatment for "out-of-norm" requests (ie requests that ask for allocation greater than the page size).
This storage is extremely space efficient, especially for small objects, because the overhead is not per-object; however, there is one drawback: you can end up with "almost empty" pages that still contain one or two occupied storage units.
This can be alleviated if you have the ability to "move" existing objects, which effectively allows pages to be merged.
(*) BiBOP: Big Bag Of Pages
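A minimal sketch of the size-to-bag mapping, assuming power-of-two size classes (all names invented for the illustration; page and free-list management omitted):

#include <stddef.h>

#define PAGE_SIZE   4096
#define NUM_BAGS    9            /* size classes 8, 16, 32, ... 2048 bytes */

struct page {                    /* one BiBOP page of identically sized units */
    struct page *next_free_page;
    void        *free_unit;      /* head of the free list overlaid on the units */
};

struct bag {
    size_t       unit_size;      /* implicit size of every block in the bag */
    struct page *pages;
};

static struct bag bags[NUM_BAGS];

/* Map a requested size to its bag; requests larger than PAGE_SIZE would
   need the separate "out-of-norm" path mentioned above. */
static struct bag *bag_for_size(size_t size)
{
    size_t unit = 8;
    for (int i = 0; i < NUM_BAGS; i++, unit *= 2)
        if (size <= unit)
            return &bags[i];
    return NULL;                 /* out-of-norm request */
}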
I would recommend making a customized file system (it might contain just one file, of course) based on FUSE. There are a lot of FUSE-based solutions you can build on; I recommend choosing the simplest projects, even if unrelated, in order to learn easily.
Which algorithm and data structure to choose depends highly on your needs. It could be a map, a list, or a file split into chunks with on-the-fly compression/decompression.
The data structures you propose are good ideas. As you clearly see, there is a trade-off: fragmentation vs. compaction.
On one side - best compaction, highest fragmentation - splay trees and many other kinds of trees.
On the other side - lowest fragmentation, worst compaction - linked list.
In between there are B-trees and others.
As I understand it, you stated space saving as the priority, while still taking care of performance.
I would recommend a mixed data structure in order to achieve all the requirements:
a kind of list of contiguous blocks of data
a kind of tree for the current "add/remove" operations
when data is required on demand, allocate from the tree. When data is deleted, keep track of what's "deleted" using the tree as well.
mixing -> during each operation (or in idle moments), do "step by step" defragmentation: apply the changes kept in the tree to the contiguous blocks, moving them slowly.
This solution gives you a fast response on demand, while "optimising" things as the file is used (for example, for each read of 10 MB of data, defragment 1 MB) or in idle moments.
The simplest solution is a free list: keep a linked list of free blocks, reusing the free space itself to store the address of the next block in the list.
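In C, that trick looks something like the following minimal sketch (for free space in a file you would store a block number or file offset instead of an in-memory pointer, but the idea is the same):

#include <stddef.h>

/* Each free block's first bytes store the address of the next free block,
   so the list costs no memory beyond the free space itself. */
struct free_block {
    struct free_block *next;
};

static struct free_block *free_head = NULL;

/* Return a block to the free list. */
static void free_block_push(void *block)
{
    struct free_block *b = block;
    b->next = free_head;
    free_head = b;
}

/* Take a block from the free list, or NULL if it is empty. */
static void *free_block_pop(void)
{
    struct free_block *b = free_head;
    if (b != NULL)
        free_head = b->next;
    return b;
}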
