I'm trying to build a Trie but on a mobile phone which has very limited memory capacity.
I figured that it is probably best that the whole structure be stored on disk, and only loaded as necessary since I can tolerate a few disk reads. But, after a few attempts, it seems like this is a very complicated thing to do.
What are some ways to store a Trie on disk (i.e. only partially loaded) and keep the fast lookup property?
Is this even a good idea to begin with?
The paper B-tries for disk-based string management answers your question.
It makes the observation:
To our knowledge, there has yet to be a proposal in literature for a
trie-based data structure, such as the burst trie, that can reside
efficiently on disk to support common string processing tasks.
I've only glanced at it briefly, but Shang's "Trie methods for text and spatial data on secondary storage" discusses paged trie representations, and might be a useful starting point.
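For a rough feel of what "partially loaded" can look like, here is a minimal sketch in C; the fixed node layout, 26-way fan-out, and file name are my own assumptions, not anything taken from those papers. Each node is a fixed-size record of child offsets, so a lookup reads at most one node per character and never needs the whole trie in memory.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical fixed-size node: one file offset per letter, 0 = no child.
       Keys are assumed to be lowercase a-z. */
    #define FANOUT 26
    typedef struct {
        uint64_t child[FANOUT];   /* file offset of each child node */
        uint8_t  is_terminal;     /* 1 if a key ends at this node   */
    } DiskNode;

    /* Follow the key character by character, reading one node per step. */
    static int trie_contains(FILE *f, const char *key)
    {
        uint64_t offset = 0;                  /* root node lives at offset 0 */
        DiskNode node;

        for (;;) {
            if (fseek(f, (long)offset, SEEK_SET) != 0) return 0;
            if (fread(&node, sizeof node, 1, f) != 1) return 0;
            if (*key == '\0') return node.is_terminal;
            if (*key < 'a' || *key > 'z') return 0;
            uint64_t next = node.child[*key - 'a'];
            if (next == 0) return 0;          /* no such child: key absent */
            offset = next;
            key++;
        }
    }

    int main(void)
    {
        FILE *f = fopen("trie.dat", "rb");    /* assumed pre-built trie file */
        if (!f) return 1;
        printf("hello: %d\n", trie_contains(f, "hello"));
        fclose(f);
        return 0;
    }

A real implementation would pack many nodes per page and cache recently used pages, which is the direction the paged representations discussed above take.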
Related
I am working on a project that involves huge objects in physical memory on Windows.
I want to create a really big data structure, but I have run into some problems.
When I try to allocate a huge amount of data, I can only create an object as large as the heap allows (which also depends on the operating system architecture).
I am not sure whether this is restricted by the thread's private heap or by something else.
When I looked into how the operating system places data in memory, I found that the data is stored in a particular order.
This leads to some questions...
If I want to create large objects, should I have one very large heap region to allocate memory from? If so, I would have to fragment the data.
Alternatively, I had the idea of finding the starting addresses of empty regions and then using that unused space to hold the data in some data structure.
If this idea is feasible, how could it be done?
Another question: do you think a list would be the best option for this sort of huge object, or would another data structure be better?
Do you think the chosen data structure could be split across two separate regions of memory while still standing as one object?
Thanks in advance; any answer to my questions could be helpful.
There seems to be some kind of misconception about memory allocation here.
(1) Most operating systems do not allocate memory linearly. There are usually discontinuities in the memory mapped into a process's address space.
(2) If you want to allocate a huge amount of memory, you should do it directly from the operating system, not through a heap.
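On Windows, for example, that means something like VirtualAlloc rather than malloc/new; a minimal sketch (the 1 GB size is purely illustrative):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = (SIZE_T)1 << 30;   /* 1 GB, purely illustrative */

        /* Ask the OS directly for address space and backing pages,
           bypassing the process heap entirely. */
        void *block = VirtualAlloc(NULL, size,
                                   MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE);
        if (block == NULL) {
            printf("VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }

        /* ... use the memory ... */

        VirtualFree(block, 0, MEM_RELEASE);
        return 0;
    }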
What data structure is best to use for file organization? Are B-trees the best, or is there another data structure that provides faster access to files and good organization? Thanks
All file systems are different, so there are a huge number of data structures that actually get used in file systems.
Many file systems use some sort of bit vector (usually referred to as a bitmap) to track which blocks are free, since bitmaps have excellent performance when querying whether a specific disk block is in use and (for disks that aren't overwhelmingly full) support reasonably fast searches for free blocks.
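As a rough illustration (toy code, not any particular file system's on-disk format), a block bitmap is one bit per block, and testing or setting a block is a couple of shifts and masks:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BLOCKS 4096
    static uint8_t bitmap[NUM_BLOCKS / 8];       /* 1 bit per disk block */

    static int  block_in_use(unsigned b) { return (bitmap[b / 8] >> (b % 8)) & 1; }
    static void mark_used(unsigned b)    { bitmap[b / 8] |=  (uint8_t)(1u << (b % 8)); }
    static void mark_free(unsigned b)    { bitmap[b / 8] &= (uint8_t)~(1u << (b % 8)); }

    /* Linear scan for a free block; real file systems keep extra hints
       (e.g. per-group free counts) so the scan rarely starts from zero. */
    static int find_free_block(void)
    {
        for (unsigned b = 0; b < NUM_BLOCKS; b++)
            if (!block_in_use(b)) return (int)b;
        return -1;   /* disk full */
    }

    int main(void)
    {
        mark_used(0);                            /* e.g. the superblock */
        int b = find_free_block();
        printf("first free block: %d\n", b);     /* prints 1 */
        mark_used((unsigned)b);
        mark_free((unsigned)b);
        return 0;
    }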
Many older file systems (ext and ext2) stored directory structures using simple linked lists. Apparently this was actually fast enough for most applications, though some types of applications that used lots of large directories suffered noticeable performance hits.
The XFS file system was famous for using B+-trees for just about everything, including directory structures and its journaling system. From what I remember from my undergrad OS course, the philosophy was that since it took so long to write, debug, and performance tune the implementation of the B+-tree, it made sense to use it as much as possible.
Other file systems (ext3 and ext4) use a variant of the B-tree called the HTree that I'm not very familiar with. Apparently it uses some sort of hashing scheme to keep the branching factor high so that very few disk accesses are used.
I have heard anecdotally that some operating systems tried using splay trees to store their directory structures but ran into trouble with them. Specifically, splay trees prevent multithreaded access to the same directory by multiple readers (since in a splay tree each access reshapes the tree), and there is an edge case where the tree degenerates to a linked list if all of its elements are accessed sequentially. That said, I don't know if this is just an urban legend, since these problems seem like they would have been apparent before anyone tried to code them up.
Microsoft's FAT32 system used a huge array (the file allocation table) that stored which files were stored where and which disk sectors logically follow one another in a file. The main drawback is that the table had to be set up in advance, so there ended up being upper limits on the sizes of files that could be stored on the disk. However, the array-based system was pretty easy to implement.
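The idea is easy to sketch (again toy code, not the real on-disk FAT layout): the table is an array in which entry i names the cluster that follows cluster i, so reading a file is just walking that chain:

    #include <stdio.h>

    #define END_OF_CHAIN 0x0FFFFFFFu   /* sentinel marking the last cluster of a file */

    /* Toy allocation table: fat[i] holds the cluster that follows cluster i. */
    static unsigned fat[16] = {
        [2] = 5, [5] = 9, [9] = END_OF_CHAIN   /* one file occupying clusters 2 -> 5 -> 9 */
    };

    int main(void)
    {
        unsigned cluster = 2;                  /* first cluster, from the directory entry */
        while (cluster != END_OF_CHAIN) {
            printf("read cluster %u\n", cluster);
            cluster = fat[cluster];            /* follow the chain */
        }
        return 0;
    }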
This is not an exhaustive list - I'm sure that other file systems use other data structures. However, I hope it helps give you a push in the right direction.
Why is Least Recently Used better than FIFO in relation to page files?
If you mean in terms of offloading memory pages to disk - if your process is frequently accessing a page, you really don't want it to be paged to disk, even if it was the very first one you accessed. On the other hand, if you haven't accessed a memory page for several days, it's unlikely that you'll do so in the near future.
If that's not what you mean, please edit your question to give more details.
There is no single cache algorithm which will always do well because that requires perfect knowledge of the future. (And if you know where to get that...) The dominance of LRU in VM cache design is the result of a long history of measuring system behavior. Given real workloads, LRU works pretty well a very large fraction of the time. However, it isn't very hard to construct a reference string for which FIFO would have superior performance over LRU.
Consider a linear sweep through an address space much larger than the available pageable real memory. LRU is based on the assumption that "what you've touched recently you're likely to touch again", but the linear sweep completely violates that assumption. This is why some operating systems allow programs to advise the kernel about their reference behavior - one example is "mark and sweep" garbage collection typified by classic LISP interpreters. (And a major driver for work on more modern GCs like "generational".)
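On POSIX systems, for example, a program doing that kind of linear sweep over a mapped file can say so with madvise, and the kernel will read ahead aggressively instead of hoarding already-visited pages; a minimal sketch (the file name is made up):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("big_data.bin", O_RDONLY);   /* hypothetical large file */
        if (fd < 0) return 1;

        struct stat st;
        fstat(fd, &st);

        unsigned char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;

        /* Tell the VM system this region will be swept linearly, so LRU-style
           retention of already-visited pages is pointless. */
        madvise(p, (size_t)st.st_size, MADV_SEQUENTIAL);

        unsigned long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)     /* the linear sweep itself */
            sum += p[i];
        printf("checksum: %lu\n", sum);

        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }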
Another example is the symbol table in a certain antique macro processor (STAGE2). The binary tree is searched from the root for every symbol, and the string evaluation is being done on a stack. It turned out that reducing the available page frames by "wiring down" the root page of the symbol tree and the bottom page of the stack made a huge improvement in the page fault rate. The cache was tiny, and it churned violently, always pushing out the two most frequently referenced pages because the cache was smaller than the inter-reference distance to those pages. So a small cache worked better, but ONLY because those two page frames stolen from the cache were used wisely.
The net of all this is that LRU is the standard answer because it's usually pretty good for real workloads on systems that aren't hideously overloaded (virtual memory many times the real memory available), and that is supported by years of careful measurements. However, you can certainly find cases where alternative behavior will be superior. This is why measuring real systems is important.
Treat the RAM as a cache. In order to be an effective cache, it needs to keep the items most likely to be requested in memory.
LRU keeps the things that were most recently used in memory. FIFO keeps the things that were most recently added. LRU is, in general, more efficient, because there are generally memory items that are added once and never used again, and there are items that are added and used frequently. LRU is much more likely to keep the frequently-used items in memory.
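To make the difference concrete, here is a toy fixed-size cache in C (a sketch, not a real VM implementation); the only difference between the two policies is which timestamp the victim search compares, and on a reference string with one hot page LRU takes fewer faults:

    #include <stdio.h>

    #define SLOTS 3

    typedef struct {
        int  page;        /* which page is in this frame (-1 = empty)     */
        long loaded_at;   /* when it was brought in        -> FIFO victim */
        long used_at;     /* when it was last referenced   -> LRU  victim */
    } Frame;

    static long now = 0;

    /* Reference a page; returns 1 on a fault. use_lru selects the policy. */
    static int touch(Frame f[], int page, int use_lru)
    {
        now++;
        for (int i = 0; i < SLOTS; i++)
            if (f[i].page == page) { f[i].used_at = now; return 0; }   /* hit */

        /* Fault: evict the frame with the smallest relevant timestamp. */
        int victim = 0;
        for (int i = 1; i < SLOTS; i++) {
            long a = use_lru ? f[i].used_at      : f[i].loaded_at;
            long b = use_lru ? f[victim].used_at : f[victim].loaded_at;
            if (a < b) victim = i;
        }
        f[victim].page = page;
        f[victim].loaded_at = f[victim].used_at = now;
        return 1;
    }

    int main(void)
    {
        int refs[] = {1, 2, 3, 1, 1, 1, 4, 1, 5, 1};   /* page 1 is "hot" */
        for (int policy = 0; policy < 2; policy++) {
            Frame f[SLOTS] = {{-1,0,0},{-1,0,0},{-1,0,0}};
            int faults = 0;
            now = 0;
            for (int i = 0; i < 10; i++)
                faults += touch(f, refs[i], policy);
            printf("%s faults: %d\n", policy ? "LRU " : "FIFO", faults);
        }
        return 0;
    }

FIFO evicts the hot page simply because it was loaded first; LRU keeps it resident, so LRU ends up with one fewer fault on this particular reference string.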
Depending on access patterns, FIFO can sometimes beat LRU. An Adaptive Replacement Cache is a hybrid that adapts its strategy based on actual usage patterns.
According to temporal locality of reference, memory that has been accessed recently is more likely to be accessed again soon.
I want to cache data on the client. What is the best algorithm/data structure that can be employed?
Case 1. The data to be stored requires extremely fast string searching capability.
Case 2. The cached data set can be large. I don't want to explode the client's memory usage, and I also don't want to make network and disk access calls, which slow down my processing time on the client side.
Solutions:
Case 1: I think a suffix tree/trie provides a good solution in this case.
Case 2: The two problems to consider here are:
To store large data with minimum memory consumption
Not to make any network calls to access any data which is not available in the cache.
An LRU caching model is one solution I can think of, but that does not prevent the memory from bloating.
Is there any way to write the data to a file and access it without compromising the data (from a security standpoint)?
Let me know if any point is not clear.
EDIT:
Josh, I know my requirements are unrealistic. To narrow them down, I am looking for something that stores data using an LRU algorithm. It would be good if the LRU cache's size could be configured dynamically, with a maximum limit. This would reduce the number of calls going to the network/database and provide good performance as well.
If this LRU algorithm could work on compressed data that can be decompressed with a slight overhead (but still less than a network call), that would be even better.
Check out the available caching frameworks/libraries - I've found Ehcache to be very useful. You can also have it keep just some (most recent) entries in memory and fail over to disk at a specified memory usage. The disk calls will still be a lot faster than network calls, and you avoid taking up all the memory.
Ehcache
Unfortunately, I think your expectations are unrealistic.
Keeping memory usage small while also not making disk access calls means that you have nowhere to store the data.
Furthermore, to answer your question about security, there is no client side data storage (assuming you are talking about a web-application) that is "secure". You could encrypt it, but this will destroy your speed requirements as well as require server-side processing. Everything stored at and sent from the client is suspect.
Perhaps if you could describe the problem in greater detail we can suggest some realistic solutions.
I need to store large amounts of data on-disk in approximately 1k blocks. I will be accessing these objects in a way that is hard to predict, but where patterns probably exist.
Is there an algorithm or heuristic I can use that will rearrange the objects on disk based on my access patterns to try to maximize sequential access, and thus minimize disk seek time?
On modern OSes (Windows, Linux, etc) there is absolutely nothing you can do to optimise seek times! Here's why:
You are in a pre-emptive multitasking system. Your application and all its data can be flushed to disk at any time - the user switches task, the screen saver kicks in, the battery runs out of charge, etc.
You cannot guarantee that the file is contiguous on disk. Doing Aaron's first bullet point will not ensure an unfragmented file. When you start writing the file, the OS doesn't know how big the file is going to be so it could put it in a small space, fragmenting it as you write more data to it.
Memory mapping the file only works as long as the file size is less than the available address range in your application. On Win32, the amount of address space available is about 2 GB minus the memory used by the application. Mapping larger files usually involves un-mapping and re-mapping portions of the file, which won't be the best of things to do.
Putting data in the centre of the file is no help as, for all you know, the central portion of the file could be the most fragmented bit.
To paraphrase Raymond Chen, if you have to ask about OS limits, you're probably doing something wrong. Treat your filesystem as an immutable black box, it just is what it is (I know, you can use RAID and so on to help).
The first step you must take (and must be taken whenever you're optimising) is to measure what you've currently got. Never assume anything. Verify everything with hard data.
From your post, it sounds like you haven't actually written any code yet, or, if you have, there is no performance problem at the moment.
The only real solution is to look at the bigger picture and develop methods to get data off the disk without stalling the application. This would usually be through asynchronous access and speculative loading. If your application is always accessing the disk and doing work with small subsets of the data, you may want to consider reorganising the data to put all the useful stuff in one place and the other data elsewhere. Without knowing the full problem domain it's not possible to be really helpful.
Depending on what you mean by "hard to predict", I can think of a few options:
If you always seek based on the same block field/property, store the records on disk sorted by that field. This lets you use binary search for O(log n) efficiency (see the sketch after this list).
If you seek on different block fields, consider storing an external index for each field. A b-tree gives you O(log n) efficiency. When you seek, grab the appropriate index, search it for your block's data file address and jump to it.
Better yet, if your blocks are homogeneous, consider breaking them down into database records. A database gives you optimized storage, indexing, and the ability to perform advanced queries for free.
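For the first option, here is a rough sketch of binary search over fixed-size records sorted by an integer key, using seeks instead of loading the file (the record layout and file name are assumptions):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical fixed-size record, sorted on disk by `key`. */
    typedef struct {
        uint64_t key;
        char     payload[1016];   /* pad the record out to ~1 KB */
    } Record;

    /* O(log n) lookups: each probe is one seek + one read of a single record. */
    static int find_record(FILE *f, uint64_t key, Record *out)
    {
        fseek(f, 0, SEEK_END);
        long count = ftell(f) / (long)sizeof(Record);
        long lo = 0, hi = count - 1;

        while (lo <= hi) {
            long mid = lo + (hi - lo) / 2;
            fseek(f, mid * (long)sizeof(Record), SEEK_SET);
            if (fread(out, sizeof(Record), 1, f) != 1) return 0;
            if (out->key == key) return 1;
            if (out->key < key)  lo = mid + 1;
            else                 hi = mid - 1;
        }
        return 0;   /* not found */
    }

    int main(void)
    {
        FILE *f = fopen("blocks.dat", "rb");   /* assumed pre-sorted data file */
        if (!f) return 1;
        Record r;
        if (find_record(f, 42, &r))
            printf("found record with key 42\n");
        fclose(f);
        return 0;
    }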
Use memory-mapped file access rather than the usual open-seek-read/write pattern. This technique works on Windows and Unix platforms.
In this way the operating system's virtual memory system will handle the caching for you. Accesses of blocks that are already in memory will result in no disk seek or read time. Writes from memory back to disk are handled automatically and efficiently and without blocking your application.
Aaron's notes are good too as they will affect initial-load time for a chunk that's not in memory. Combine that with the memory-mapped technique -- after all it's easier to reorder chunks using memcpy() than by reading/writing from disk and attempting swapouts etc.
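A minimal POSIX sketch of that pattern (Windows has an equivalent via CreateFileMapping/MapViewOfFile); the file name and the assumption that it holds enough blocks are mine. The block is read and written through ordinary pointers, and the OS handles paging, caching, and write-back:

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLOCK_SIZE 1024   /* ~1 KB objects, per the question */

    int main(void)
    {
        int fd = open("objects.dat", O_RDWR);    /* hypothetical data file */
        if (fd < 0) return 1;

        struct stat st;
        fstat(fd, &st);

        /* Map the whole file; the OS pages blocks in on first touch and
           caches them, so repeat accesses cost no disk I/O at all. */
        char *base = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) return 1;

        size_t n = 17;                           /* access object #17; the file is
                                                    assumed to hold at least 18 blocks */
        char *obj = base + n * BLOCK_SIZE;
        printf("first byte of object %zu: %d\n", n, obj[0]);
        obj[0] = 42;                             /* write-back is automatic */

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }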
The simplest way to solve this is to use an OS that solves it for you under the hood, like Linux. Give it enough RAM to hold 10% of the objects in RAM and it will try to keep as many of them in the cache as possible, reducing the load time to 0. The recent server versions of Windows might work too (some of them didn't for me, which is why I'm mentioning this).
If this is a no go, try this algorithm:
Create a very big file on the hard disk. It is very important that you write this in one go so the OS will allocate a contiguous space on disk.
Write all your objects into that file. Make sure that each object is the same size (or give each the same space in the file and note the length in the first few bytes of each chunk). Use an empty hard disk or a disk which has just been defragmented.
In a data structure, keep the offsets of each data chunk and how often it is accessed. When a chunk is accessed very often, swap its position in the file with a chunk that is closer to the start of the file and has a lower access count (see the sketch after this list).
[EDIT] Access this file with the memory-mapped API of your OS to allow the OS to effectively cache the most used parts to get best performance until you can optimize the file layout next time.
Over time, heavily accessed chunks will bubble to the top. Note that you can collect the access patterns over some time, analyze them, and do the reordering overnight when there is little load on your machine. Or you can do the reordering on a completely different machine and swap in the file (and the offset table) when that's done.
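A rough sketch of that swap step (fixed-size chunks, an in-memory table of offsets and access counts; all names are made up): when a chunk becomes hotter than one stored earlier in the file, the two trade places on disk and in the table. A real version would use a threshold and batch the reordering, as suggested above.

    #include <stdio.h>

    #define CHUNK_SIZE 4096
    #define NUM_CHUNKS 1024

    /* In-memory bookkeeping: where each logical chunk currently lives
       in the file, and how often it has been accessed. */
    static long     offset_of[NUM_CHUNKS];
    static unsigned hits[NUM_CHUNKS];

    static void read_chunk(FILE *f, long off, char *buf)
    {
        fseek(f, off, SEEK_SET);
        fread(buf, CHUNK_SIZE, 1, f);
    }

    static void write_chunk(FILE *f, long off, const char *buf)
    {
        fseek(f, off, SEEK_SET);
        fwrite(buf, CHUNK_SIZE, 1, f);
    }

    /* Swap the on-disk positions of chunks a and b and update the table. */
    static void swap_chunks(FILE *f, int a, int b)
    {
        char bufa[CHUNK_SIZE], bufb[CHUNK_SIZE];
        read_chunk(f, offset_of[a], bufa);
        read_chunk(f, offset_of[b], bufb);
        write_chunk(f, offset_of[a], bufb);
        write_chunk(f, offset_of[b], bufa);
        long tmp = offset_of[a]; offset_of[a] = offset_of[b]; offset_of[b] = tmp;
    }

    /* Called after chunk c has been accessed: bump its count, and if a chunk
       stored closer to the start of the file is colder, trade places with it. */
    static void note_access(FILE *f, int c)
    {
        hits[c]++;
        for (int other = 0; other < NUM_CHUNKS; other++) {
            if (offset_of[other] < offset_of[c] && hits[other] < hits[c]) {
                swap_chunks(f, c, other);
                break;
            }
        }
    }

    int main(void)
    {
        FILE *f = fopen("objects.dat", "r+b");   /* assumed to hold NUM_CHUNKS chunks */
        if (!f) return 1;
        for (int i = 0; i < NUM_CHUNKS; i++)
            offset_of[i] = (long)i * CHUNK_SIZE;

        note_access(f, 500);   /* chunk 500 gets hot and trades places with a colder chunk */

        fclose(f);
        return 0;
    }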
That said, you should really rely on a modern OS where a lot of clever people have thought long and hard to solve these issues for you.
That's an interesting challenge. Unfortunately, I don't know how to solve this out of the box, either. Corbin's approach sounds reasonable to me.
Here's a little optimization suggestion, at least: place the most-accessed items at the center of your disk (or unfragmented file), not at the start or end. That way, seeks to lesser-used data will be closer on average. Err, that's pretty obvious, though.
Please let us know if you figure out a solution yourself.