Searching for membership in array of ranges - algorithm

As part of our system simulation, I'm modeling a 64-bit address space with a sparse memory array, plus a list of objects that track the buffers allocated within that space. Buffers are allocated and de-allocated dynamically.
I have a function that checks whether a given address or address range falls within the allocated buffers, so that accesses to the memory model can be vetted as in-allocated-space or not. My first cut, "search through all the buffers until you find a match", is slowing our simulations down by 10%, because our UUT does a lot of memory accesses that all have to be vetted by the simulation.
So, I'm trying to optimize. The memory buffer objects contain a starting address and a length. I'm thinking about sorting the object array by starting address at object creation, and then, when the checking function is called, doing a binary search through the array looking to see if a given address falls within a start/end range.
Are there any better/faster ways to do this? There must be some faster/cooler algorithm out there using heaps or hash signatures or some-such, right?

Binary search through a sorted array works but makes allocation/deallocation slow.
A simple approach is to use an ordered binary tree (red-black tree, AVL tree, etc.) indexed by the starting address, so that insertion (allocation), removal (deallocation), and searching are all O(log n). Most modern languages already provide such a data structure (e.g. C++'s std::map).
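For illustration, here's a minimal sketch of that idea using C++'s std::map; the names and the exact query are mine, not a prescribed implementation:

```cpp
#include <cstdint>
#include <map>

// Map from starting address to buffer length, kept sorted by the tree.
std::map<uint64_t, uint64_t> buffers;

void allocate(uint64_t start, uint64_t length) {
    buffers[start] = length;            // O(log n) insertion
}

void deallocate(uint64_t start) {
    buffers.erase(start);               // O(log n) removal
}

// Returns true if addr lies inside some allocated buffer.
bool is_allocated(uint64_t addr) {
    // Find the first buffer starting *after* addr, then step back to the
    // buffer with the largest start address <= addr (if any).
    auto it = buffers.upper_bound(addr);
    if (it == buffers.begin()) return false;
    --it;
    return addr - it->first < it->second;   // O(log n) query
}
```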

My first thought was also binary search and I think that it is a good idea. You should be able to insert and remove quickly too. Using a hash would just make you put the addresses in buckets (in my opinion) and then you'd get to the right bucket quickly (and then have to search through the bucket).

Basically your problem is that you have defined intervals of "valid" memory, memory outside those intervals is "invalid", and you want to check, for a given address, whether it is inside a valid memory block or not.
You can definitely do this by storing the start addresses of all allocated blocks in a binary tree; then search for the largest start address at or below the queried address, and verify that the queried address falls within that block's length. This gives you O(log n) query time, where n is the number of allocated blocks. The same query can of course also be used to find the block itself, so you can also read the contents of the block at the given address, which I guess you'd need too.
However, this is not the most efficient scheme. Instead, you could additionally use a one-dimensional spatial subdivision tree to mark invalid memory areas. For example, use a tree with a branching factor of 256 (corresponding to 8 bits) that maps every 16kB block containing only invalid addresses to "1" and the others to "0"; the tree will have only two levels and will be very efficient to query. When you see an address, first ask this tree whether it's certainly invalid; only when it's not, query the other one. This will speed things up ONLY IF YOU ACTUALLY GET LOTS OF INVALID MEMORY REFERENCES; if all the memory references are actually valid and you're just asserting, you won't save anything. But you can also flip this idea around and use the tree to mark all those 16kB or 256B blocks that contain only valid addresses; how big the tree grows depends on how your simulated memory allocator works.
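A hedged sketch of the flipped variant (mark regions that may contain valid addresses), flattened to a single-level bitmap over a 32-bit window to keep it short; the region size, names, and the choice to never clear bits on deallocation (which keeps the filter merely conservative) are my own simplifications, not the two-level tree described above:

```cpp
#include <cstdint>
#include <vector>

// One bit per 16 KiB region of a 32-bit simulated window; a clear bit means
// "certainly no allocated buffer touches this region".
constexpr uint64_t kRegionShift = 14;                       // 16 KiB regions
constexpr uint64_t kWindowBits  = 32;                       // simulated window
std::vector<bool> possibly_allocated(1ull << (kWindowBits - kRegionShift), false);

void mark_allocation(uint64_t start, uint64_t length) {
    if (length == 0) return;
    for (uint64_t r = start >> kRegionShift; r <= (start + length - 1) >> kRegionShift; ++r)
        possibly_allocated[r] = true;   // conservative: never cleared on free
}

bool maybe_allocated(uint64_t addr) {
    return possibly_allocated[addr >> kRegionShift];
}

// Fast path: only fall through to the exact O(log n) lookup when the coarse
// filter cannot rule the address out.
bool is_allocated_fast(uint64_t addr) {
    return maybe_allocated(addr) && is_allocated(addr);     // is_allocated from the std::map sketch above
}
```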

Related

Disadvantage of Using Linked Lists in Memory Management

I'm kinda confused as to what the primary disadvantage of using a linked list would be in maintaining a list of free disk blocks. My professor said that using a bit map would help solve said problem. Why does using a bit map solve this problem?
To narrow down my questions:
What is the primary disadvantage of using a linked list in maintaining a list of free disk blocks?
Why does using a bit map solve this problem/disadvantage?
What is the primary disadvantage of using a linked list in maintaining a list of free disk blocks?
This scheme is not very efficient, since traversing the list requires reading each block, which takes substantial time.
The second disadvantage is the additional memory required to maintain the linked list of all free disk blocks.
Why does using a bit map solve this problem/disadvantage?
Most often the list of free disk space is implemented as a bit map or bit vector. Each block is represented by a single bit: 0 (zero) marks a free block, whereas 1 marks an allocated block. So there is no need for extra memory to store the free-space list.
Fast random-access allocation check: checking whether a sector is free is as simple as checking the corresponding bit, so this is faster than traversing a linked list.
Other advantage of using a bit map:
Fast deletion: data need not be overwritten on delete; flipping the corresponding bit is sufficient.
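A minimal sketch of such a bit map in C++ (the 0 = free / 1 = allocated convention is taken from the answer above; sizes and names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kNumBlocks = 1 << 20;                  // example disk size in blocks
std::vector<uint64_t> bitmap((kNumBlocks + 63) / 64, 0);     // 0 = free, 1 = allocated

bool is_free(std::size_t block) {
    return ((bitmap[block / 64] >> (block % 64)) & 1) == 0;  // single bit test
}

void mark_allocated(std::size_t block) {
    bitmap[block / 64] |= uint64_t{1} << (block % 64);
}

void mark_free(std::size_t block) {                          // "fast deletion": just flip the bit back
    bitmap[block / 64] &= ~(uint64_t{1} << (block % 64));
}
```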
The correct solution was given by #FullDecent in the comments to the other answer (he deserves your bounty). To elaborate:
Assuming that the disk drive in question is of the older, conventional type, with a spinning storage surface and a read/write head that physically moves radially across the surface...
In general it is good for files to be stored as contiguously on disk as possible, so that multiple blocks can be read sequentially. If a file is "fragmented" (its blocks are scattered in different places on the disk), the drive head will need to be repositioned several times to read the entire file. Repositioning the head is one of the most time-consuming operations involved in a disk read (second only to starting the disk spinning after it has been stopped). Hence the procedure known as "defragmentation" or "defragging", which rearranges the used blocks on a disk to make all files contiguous.
With a linked list of free blocks, allocation involves taking blocks from the front of the list, and deallocation involves adding freed blocks to the front of the list. Hence the list can get messy, with blocks that are not adjacent on the disk frequently being adjacent in the list. To find a contiguous stretch of free blocks large enough for a large file, it may be necessary to scan a significant fraction of the list.
With a bitmap, it will still be necessary to scan for a large contiguous free block section, but this is easier since 8, 16, 32, or 64 bits (depending on the hardware's word size) can be checked in a single operation.
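For instance, with 0 = free, a whole word of 64 blocks can be ruled in or out with a single comparison (a sketch reusing the bitmap layout above):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns the starting block of the first fully free 64-block run, or -1 if
// there is none. A word equal to 0 means all 64 blocks it covers are free,
// so each comparison checks 64 blocks at once.
long long find_free_run_of_64(const std::vector<uint64_t>& bitmap) {
    for (std::size_t w = 0; w < bitmap.size(); ++w)
        if (bitmap[w] == 0)
            return static_cast<long long>(w) * 64;           // first block of the free run
    return -1;                                               // no fully free 64-block run
}
```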

space efficient algorithm for tracking writes to 2 power 32 elements

This is one of the requirements I came across at work. We have 2^32 (4294967296) contiguous integers allocated as an array in memory, whose function is to provide a mapping into another table. Some of the entries get written much more often than others. We want to track the hot spots and provide an approximate histogram.
The catch is that this is going to be implemented in firmware, so not much memory can be used.
Information:
The mapping is from SCSI LBAs on the host to LBAs on the target, probably drives or flash memory.
Let's say we have 1 MB of space to hold the metadata required to track the hot/cold information. How can we use this efficiently, other than a plain bit map that only shows whether an entry has been written or not? We can extend this and work out mathematically how accurate the collected data is as a function of how much memory is used for tracking.
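To make the space/accuracy trade-off concrete, here is the plain bit-map baseline the question wants to improve on, sketched in C++: with a 1 MB budget over 2^32 entries, each bit has to cover a 512-entry region. The names and region arithmetic are mine, and the std::vector stands in for whatever static buffer the firmware would actually use:

```cpp
#include <cstdint>
#include <vector>

// 1 MB of tracking metadata = 2^23 bits; spread over 2^32 entries that is
// one bit per 512-entry region (2^32 / 2^23 = 2^9).
constexpr uint64_t kEntries       = 1ull << 32;
constexpr uint64_t kBudgetBits    = 1ull << 23;              // 1 MB
constexpr uint64_t kEntriesPerBit = kEntries / kBudgetBits;  // 512

std::vector<uint64_t> written(kBudgetBits / 64, 0);          // exactly 1 MB

void record_write(uint64_t entry) {
    uint64_t bit = entry / kEntriesPerBit;                   // which 512-entry region
    written[bit / 64] |= uint64_t{1} << (bit % 64);          // written-at-least-once only
}
```

Replacing each bit with a small (e.g. saturating) counter would trade coarser regions for an actual hot/cold histogram; that is the kind of accuracy-vs-memory relation the question asks about.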

Reducing calls to main memory, given heap-allocated objects

The OP here mentions in the final post (4th or so para from bottom):
"Now one thing that always bothered me about this is all the child
pointer checking. There are usually a lot of null pointers, and
waiting on memory access to fill the cache with zeros just seems
stupid. Over time I added a byte that contains a 1 or 0 to tell if
each of the pointers is NULL. At first this just reduced the cache
waste. However, I've managed cram 9 distance comparisons, 8 pointer
bits, and 3 direction bits all through some tables and logic to
generate a single switch variable that allows the cases to skip the
pointer checks and only call the relevant children directly. It is in
fact faster than the above, but a lot harder to explain if you haven't
seen this version."
He is referring to octrees as the data structure for real-time volume rendering. These would be allocated on the heap, due to their size. What I am trying to figure out is:
(a) Are his assumptions about waiting on memory access valid? My understanding is that he's referring to waiting on a full run out to main memory to fetch data, since he's assuming it won't be found in the cache due to the generally not-too-good locality of reference when using dynamically-allocated octrees (common for this data structure in this sort of application).
(b) Should (a) prove to be true, I am trying to figure out how this workaround
Over time I added a byte that contains a 1 or 0 to tell if each of the
pointers is NULL.
would be implemented without still using the heap, and thus still incurring the same overhead, since I assume it would need to be stored in the octree node.
(a) Yes, his concerns about memory wait time are valid. In this case, he seems to be worried about the size of the node itself in memory; just the children take up 8 pointers, which is 64 bytes on a 64-bit architecture, or one cache line just for the children.
(b) That bitfield is stored in the node itself, but now takes up only 1 byte (1 bit for each of the 8 pointers). It's not clear to me that this is an advantage, though, as the line(s) containing the children will get loaded anyway when they are searched. However, he's apparently doing some bit tricks that allow him to determine which children to search with very few branches, which may increase performance. I wish he had some benchmarks to show the benefit.
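A hedged sketch of how such a byte of "is this child non-null" bits might sit in the node; the field names and layout are guesses for illustration, not the OP's actual code:

```cpp
#include <cstdint>

struct OctreeNode {
    uint8_t     child_mask;     // bit i set => children[i] is non-null
    // ... other per-node data (distances, direction bits, payload) ...
    OctreeNode* children[8];    // 64 bytes on a 64-bit architecture: one cache line
};

// Visit only the children the mask says exist; the null checks become a
// 1-byte mask test instead of reading all 8 pointer slots.
template <typename Visitor>
void for_each_child(const OctreeNode& node, Visitor visit) {
    for (int i = 0; i < 8; ++i)
        if (node.child_mask & (1u << i))
            visit(node.children[i]);
}
```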

Heap Type Implementation

I was implementing a heap sort and started wondering about the different implementations of heaps. When you don't need to access the elements by index (like in a heap sort), what are the pros and cons of implementing a heap with an array versus building it like any other linked data structure?
I think it's important to take into account the memory wasted by the nodes and pointers vs the memory wasted by empty spaces in an array, as well as the time it takes to add or remove elements when you have to resize the array.
When should I use each one, and why?
As far as space is concerned, there's very little issue with using arrays if you know how much is going into the heap ahead of time -- your values in the heap can always be pointers to the larger structures. This may afford better cache locality on the heap itself, but you're still going to have to go out someplace to memory for the extra data. Ideally, if your comparison is based on a small morsel of data (often just a 4-byte float or integer), you can store that as the key alongside a pointer to the full data and still get good cache locality.
Heap sorts are already not particularly good on cache hits throughout traversing the heap structure itself, however. For small heaps that fit entirely in L1/L2 cache, it's not really so bad. However, as you start hitting main memory performance will dive bomb. Usually this isn't an issue, but if it is, merge sort is your savior.
The larger problem comes in when you want a heap of undetermined size. However, this still isn't so bad, even with arrays. These days, in non-embedded environments with nice, pretty memory systems, growing an array with a call or two (e.g. realloc; please forgive my C background) really isn't all that slow, because the data may not need to physically move in memory -- just some address/pointer magic for most of it. Add to that the fact that if you use an array-size-doubling strategy (array is too small, double the size in a realloc call), you end up with amortized O(1) cost per insertion (O(n) total), relatively few reallocs, and at most double the space wasted -- but hey, you'd get that with linked lists anyway if you're using a 32-bit key and a 32-bit pointer.
So, in short, I'd stick with arrays for the smaller base data structures. When the heap goes away, so do the pointers I don't need anymore with a single deallocation. However, it's easier to read pointer-based code for heaps in my opinion since dealing with the indexing magic isn't quite as straightforward. If performance and memory aren't a concern, I'd recommend that to anyone in a heartbeat.
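For reference, the "indexing magic" the array version relies on is just parent/child arithmetic; a minimal sketch of a 0-based binary min-heap insert (names are mine):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// 0-based array layout: children of node i live at 2i+1 and 2i+2,
// its parent at (i-1)/2 -- no per-node pointers needed.
void sift_up(std::vector<int>& heap, std::size_t i) {
    while (i > 0 && heap[i] < heap[(i - 1) / 2]) {
        std::swap(heap[i], heap[(i - 1) / 2]);
        i = (i - 1) / 2;
    }
}

void push(std::vector<int>& heap, int value) {
    heap.push_back(value);          // amortized O(1) thanks to size doubling
    sift_up(heap, heap.size() - 1);
}
```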

Data structure and algorithm for representing/allocating free space in a file

I have a file with "holes" in it and want to fill them with data; I also need to be able to free "used" space and make free space.
I was thinking of using a bi-map that maps offset and length. However, I am not sure that is the best approach if there are really tiny gaps in the file. A bitmap would work, but I don't know how that could easily be switched on dynamically for certain regions of space. Perhaps some sort of radix tree is the way to go?
For what it's worth, I am up to speed on modern file system design (ZFS, HFS+, NTFS, XFS, ext...) and I find their solutions woefully inadequate.
My goal is pretty good space savings (hence the concern about small fragments). If I didn't care about that, I would just go for two splay trees... one sorted by offset and the other sorted by length, with ties broken by offset. Note that this gives you amortized log(n) for all operations, with a working-set time of log(m)... pretty darn good... but, as previously mentioned, it does not handle high fragmentation well.
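A hedged sketch of that two-tree bookkeeping, substituting std::map/std::set (red-black trees) for the splay trees mentioned above and omitting coalescing of adjacent free extents for brevity; all names are mine:

```cpp
#include <cstdint>
#include <map>
#include <set>
#include <utility>

std::map<uint64_t, uint64_t> by_offset;                 // offset -> length
std::set<std::pair<uint64_t, uint64_t>> by_length;      // (length, offset), ties broken by offset

void add_free(uint64_t offset, uint64_t length) {
    by_offset[offset] = length;
    by_length.insert({length, offset});
}

// Best-fit style: carve `need` bytes out of the smallest free extent that fits.
bool carve(uint64_t need, uint64_t& out_offset) {
    auto it = by_length.lower_bound({need, 0});          // smallest length >= need
    if (it == by_length.end()) return false;
    auto [len, off] = *it;
    by_length.erase(it);
    by_offset.erase(off);
    if (len > need) add_free(off + need, len - need);    // return the tail as a new free extent
    out_offset = off;
    return true;
}
```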
I have shipped commercial software that does just that. In the latest iteration, we ended up sorting blocks of the file into "type" and "index," so you could read or write "the third block of type foo." The file ended up being structured as:
1) File header. Points at master type list.
2) Data. Each block has a header with type, index, logical size, and padded size.
3) Arrays of (offset, size) tuples for each given type.
4) Array of (type, offset, count) that keeps track of the types.
We defined it so that each block was an atomic unit. You started writing a new block, and finished writing that before starting anything else. You could also "set" the contents of a block. Starting a new block always appended at the end of the file, so you could append as much as you wanted without fragmenting the block. "Setting" a block could re-use an empty block.
When you opened the file, we loaded all the indices into RAM. When you flushed or closed a file, we re-wrote each index that changed, at the end of the file, then re-wrote the index index at the end of the file, then updated the header at the front. This means that changes to the file were all atomic -- either you commit to the point where the header is updated, or you don't. (Some systems use two copies of the header 8 kB apart to preserve headers even if a disk sector goes bad; we didn't take it that far)
One of the block "types" was "free block." When re-writing changed indices, and when replacing the contents of a block, the old space on disk was merged into the free list kept in the array of free blocks. Adjacent free blocks were merged into a single bigger block. Free blocks were re-used when you "set content" or for updated type block indices, but not for the index index, which always was written last.
Because the indices were always kept in memory, working with an open file was really fast -- typically just a single read to get the data of a single block (or get a handle to a block for streaming). Opening and closing was a little more complex, as it needed to load and flush the indices. If it becomes a problem, we could load the secondary type index on demand rather than up-front to amortize that cost, but it never was a problem for us.
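A rough sketch of what those on-disk records might look like; the field names and fixed-width types are my guesses (and would need explicit serialization/packing in practice), not the shipped format:

```cpp
#include <cstdint>

struct FileHeader {          // 1) points at the master type list
    uint64_t type_index_offset;
    uint64_t type_index_count;
};

struct BlockHeader {         // 2) precedes each data block
    uint32_t type;
    uint32_t index;
    uint64_t logical_size;
    uint64_t padded_size;
};

struct BlockEntry {          // 3) one per block, in the per-type (offset, size) array
    uint64_t offset;
    uint64_t size;
};

struct TypeEntry {           // 4) one per type, in the "index index"
    uint32_t type;
    uint64_t offset;         // where that type's BlockEntry array lives
    uint64_t count;
};
```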
Top priority for persistent (on disk) storage: Robustness! Do not lose data even if the computer loses power while you're working with the file!
Second priority for on-disk storage: Do not do more I/O than necessary! Seeks are expensive. On Flash drives, each individual I/O is expensive, and writes are doubly so. Try to align and batch I/O. Using something like malloc() for on-disk storage is generally not great, because it does too many seeks. This is also a reason I don't like memory mapped files much -- people tend to treat them like RAM, and then the I/O pattern becomes very expensive.
For memory management I am a fan of the BiBOP* approach, which is normally efficient at managing fragmentation.
The idea is to segregate data based on their size. This way, within a "bag" you only have "pages" of small blocks with identical sizes:
no need to store the size explicitly, it's known depending on the bag you're in
no "real" fragmentation within a bag
The bag keeps a simple free-list of the available pages. Each page keeps a free-list of available storage units in an overlay over those units.
You need an index to map size to its corresponding bag.
You also need special treatment for "out-of-norm" requests (i.e. requests that ask for an allocation greater than the page size).
This storage is extremely space efficient, especially for small objects, because the overhead is not per-object; however, there is one drawback: you can end up with "almost empty" pages that still contain one or two occupied storage units.
This can be alleviated if you have the ability to "move" existing objects, which effectively allows you to merge pages.
(*) BiBOP: Big Bag Of Pages
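A hedged sketch of one bag's page with its overlaid free list (free slots store the index of the next free slot in themselves); all names are illustrative, and it assumes the slot size is at least 4 bytes and a multiple of 4:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One page holds fixed-size slots of a single size class (a "bag" is all
// pages of that class). Free slots form a list overlaid on the slots themselves.
struct Page {
    std::vector<uint8_t> storage;
    uint32_t slot_size;
    uint32_t free_head;                                     // index of first free slot, or ~0u

    Page(uint32_t slot_sz, uint32_t slot_count)
        : storage(static_cast<std::size_t>(slot_sz) * slot_count),
          slot_size(slot_sz), free_head(0) {
        for (uint32_t i = 0; i < slot_count; ++i)           // thread the initial free list
            next_of(i) = (i + 1 < slot_count) ? i + 1 : ~0u;
    }

    uint32_t& next_of(uint32_t slot) {                      // overlay: next index lives in the slot
        return *reinterpret_cast<uint32_t*>(&storage[static_cast<std::size_t>(slot) * slot_size]);
    }

    void* allocate() {
        if (free_head == ~0u) return nullptr;               // page full
        uint32_t slot = free_head;
        free_head = next_of(slot);
        return &storage[static_cast<std::size_t>(slot) * slot_size];
    }

    void release(void* p) {
        uint32_t slot = static_cast<uint32_t>(
            (static_cast<uint8_t*>(p) - storage.data()) / slot_size);
        next_of(slot) = free_head;                          // push back onto the free list
        free_head = slot;
    }
};
```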
I would recommend making a customized file system (it might contain just one file, of course) based on FUSE. There are a lot of existing FUSE projects you can build on - I recommend picking one of the simplest ones, even if it's unrelated, in order to learn easily.
Which algorithm and data structure to choose depends highly on your needs. It could be a map, a list, or a file split into chunks with on-the-fly compression/decompression.
The data structures you propose are good ideas. As you clearly see, there is a trade-off: fragmentation vs. compaction.
On one side - best compaction, highest fragmentation - splay trees and many other kinds of trees.
On the other side - lowest fragmentation, worst compaction - a linked list.
In between there are B-trees and others.
As I understand it, you stated space saving as the priority, while still taking care about performance.
I would recommend a mixed data structure in order to meet all the requirements:
a kind of list of contiguous blocks of data
a kind of tree for the current "add/remove" operations
when data is requested on demand, allocate from the tree; when deleted, keep track of what's "deleted" using the tree as well
mixing -> during each operation (or in idle moments) do "step by step" de-fragmentation, applying the changes kept in the tree to the contiguous blocks while moving them slowly
This solution gives you a fast response on demand, while "optimising" things as the file is used (for example, each read of 10MB of data -> defragmentation of 1MB) or in idle moments.
The simplest solution is a free list: keep a linked list of free blocks, reusing the free space itself to store the address of the next block in the list.
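A minimal in-memory sketch of that free list (an on-disk variant would store a block offset instead of a pointer); the names are mine:

```cpp
#include <cstddef>

// Each free block's first bytes hold a pointer to the next free block, so the
// list costs no extra memory beyond the free space itself.
struct FreeBlock {
    FreeBlock* next;
};

FreeBlock* free_head = nullptr;

void release_block(void* block) {
    auto* b = static_cast<FreeBlock*>(block);
    b->next = free_head;                 // push onto the free list
    free_head = b;
}

void* acquire_block() {
    if (!free_head) return nullptr;      // no free blocks left
    FreeBlock* b = free_head;
    free_head = b->next;                 // pop the head block and hand it out
    return b;
}
```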
