Heap-like data structure with fast random access?

My situation is the following:
I have a collection of entities, each of which has a "goodness" property.
I wish to grab the entities one at a time, from "best" to "worst."
After a "best" entity is grabbed, the "goodness" properties of several (relatively few) of my other entities change, and this change must be incorporated into my upcoming decision of the next "best" entity to grab.
Some (relatively few) entities may become "worthless" after a grab, and these should be removed from my collection.
It is easy for me to construct, given the entity that I just grabbed, the set of now-"dirty" objects, that is, the set of entities which potentially have a now-different "goodness," or have become "worthless."
So, I need a data structure that allows me to:
Quickly grab the "biggest" of a collection (as in, a max-heap).
Quickly update the underlying ordering of the objects in my collection to accommodate the situation described above. (Easy to do in a heap, if we can access the dirty objects' locations, e.g. array indices, within the underlying heap implementation.)
There is a guarantee that there are no collisions among the entries of my collection. (The entries are references to the entities I described above.)
The idea I have is to use a max-heap together with an unordered map, keyed on the heap entries, and having values equal to, e.g., the objects' respective indices in the underlying array in the heap implementation.
What I'm wondering is whether there may be a data structure which is better for this situation.
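For concreteness, here is a minimal sketch in Java of the structure described above: a binary max-heap stored in an array list, plus a hash map from each entry to its current index in that array, so a dirty entry can be re-sifted or removed in O(log n). The names (entry, goodness) are placeholders; entries are assumed to be unique and usable as hash keys, as the question guarantees. This is an illustration, not a tuned implementation.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class IndexedMaxHeap<E> {
    private static final class Node<E> {
        final E entry;
        double goodness;
        Node(E entry, double goodness) { this.entry = entry; this.goodness = goodness; }
    }

    private final List<Node<E>> heap = new ArrayList<>();   // array-backed binary heap
    private final Map<E, Integer> pos = new HashMap<>();    // entry -> index in heap

    public void push(E entry, double goodness) {
        heap.add(new Node<>(entry, goodness));
        pos.put(entry, heap.size() - 1);
        siftUp(heap.size() - 1);
    }

    public E popBest() {
        if (heap.isEmpty()) throw new IllegalStateException("empty heap");
        swap(0, heap.size() - 1);
        Node<E> best = heap.remove(heap.size() - 1);
        pos.remove(best.entry);
        if (!heap.isEmpty()) siftDown(0);
        return best.entry;
    }

    // Re-order a dirty entry after its goodness has changed.
    public void update(E entry, double newGoodness) {
        int i = pos.get(entry);
        double old = heap.get(i).goodness;
        heap.get(i).goodness = newGoodness;
        if (newGoodness > old) siftUp(i); else siftDown(i);
    }

    // Drop an entry that has become worthless.
    public void remove(E entry) {
        int i = pos.get(entry);
        swap(i, heap.size() - 1);
        pos.remove(heap.remove(heap.size() - 1).entry);
        if (i < heap.size()) { siftDown(i); siftUp(i); }
    }

    private void swap(int i, int j) {
        Node<E> tmp = heap.get(i);
        heap.set(i, heap.get(j));
        heap.set(j, tmp);
        pos.put(heap.get(i).entry, i);
        pos.put(heap.get(j).entry, j);
    }

    private void siftUp(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (heap.get(i).goodness <= heap.get(parent).goodness) break;
            swap(i, parent);
            i = parent;
        }
    }

    private void siftDown(int i) {
        int n = heap.size();
        while (true) {
            int largest = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < n && heap.get(l).goodness > heap.get(largest).goodness) largest = l;
            if (r < n && heap.get(r).goodness > heap.get(largest).goodness) largest = r;
            if (largest == i) break;
            swap(i, largest);
            i = largest;
        }
    }
}

With this, after each grab you look up every dirty entity in the map and call update or remove on it directly.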

If few members are affected when the best entity is grabbed, then you might be able to improve the runtime by using a linked list and an unordered map (each initially holding the original set of entities), together with a max heap. After removing the best entity from the end of the linked list, you use the map to locate the affected entities, removing them from the list and adding the non-worthless ones to the max heap. Thereafter, the next best entity is the greater of the entity at the end of the list and the max entity in the heap. The advantage of this setup is that removal from the linked list is a constant-time operation, and insertion into the max heap is a relatively small (compared to the total number of entities) log-time operation.
Because entities' values can only get worse, you can lazily remove them from the linked list - if the item is worthless then remove it, and if its value has changed then flag it as "changed." Check the "changed" flag on the entity at the end of the linked list, and if it's "true" then remove the entity and add it to the max-heap. The advantage of lazy updates is that you usually won't need to update items that are in the heap (you'll just need to update the value of items in the linked list), and if an item is changed and then later made worthless then you can remove it from the linked list without ever having to add it to the heap.

Related

Are elements of the Hash Table's backing array Linked Lists from the initial point when using Separate Chaining?

For resolving hash collisions in a hash table, one very popular strategy is separate chaining.
I'm aware that in the separate chaining strategy, keys that collide at the same index of the backing array (because they hash to the same value) are stored in linked lists.
I wonder whether the backing array has the type LinkedList<E>[] from the moment the hash table is created (in a separate chaining implementation), or whether it starts out as a plain array and only gets converted to LinkedList<E>[] after the first collision.
I ask because having a linked list as every element of the backing array doesn't seem like the most optimal solution: it means each of those linked lists is a list of entries/buckets, each holding a key-value pair, and all of that consumes a lot of memory and resources, I reckon.
I did quite a bit of research in different books and academic articles, yet I still can't get a clear answer on this.
Yes, separate chaining will cost more memory than probing or re-hashing. But the benefit is that you get more items in the hash table before performance begins to suffer. At some point you still have to re-index: typically when you realize that some bucket is over-represented or when the total number of occupied buckets exceeds some threshold.
Note that the backing array itself isn't a linked list. The backing array for a hash table that uses probing or re-hashing will probably be a dynamically-sized array of entries. Your entry would be something like:
class Entry {
    String key;
    SomeObject value;
}
If you're using separate chaining, the Entry object gets an additional field: a reference to the next item that hashed to the same bucket:
class Entry {
    String key;
    SomeObject value;
    Entry next;
}
The memory difference for the first item really isn't enough to worry about.
It's possible to write the code so that if a bucket has but a single item, it will contain just the key and value, and the bucket is converted to a linked list only on first collision. There is perhaps a small memory win there, and an even smaller performance gain. But the code is more complex and the gains aren't huge unless you know that the majority of your buckets won't have any collisions. Not worth the trouble of implementing, testing, and maintaining two different code paths.
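For concreteness, here is a tiny hand-rolled sketch in Java of how the chained Entry above is typically used (Object stands in for SomeObject; the fixed table size, resizing, and re-indexing are all omitted):

class ChainedHashTable {
    static class Entry {
        final String key;
        Object value;           // stands in for SomeObject
        Entry next;             // next entry that hashed to the same bucket
        Entry(String key, Object value, Entry next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    private final Entry[] buckets = new Entry[16];   // fixed size for brevity

    private int indexFor(String key) {
        return Math.floorMod(key.hashCode(), buckets.length);
    }

    public void put(String key, Object value) {
        int i = indexFor(key);
        for (Entry e = buckets[i]; e != null; e = e.next) {
            if (e.key.equals(key)) { e.value = value; return; }   // key already present
        }
        buckets[i] = new Entry(key, value, buckets[i]);           // prepend to the chain
    }

    public Object get(String key) {
        for (Entry e = buckets[indexFor(key)]; e != null; e = e.next) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}

Note that the bucket slot is simply null until the first entry arrives; the "list" only exists as a chain of next references hanging off whatever Entry happens to sit in the slot.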

Implement an efficient stack without pointers?

So, I'm working in an environment where pointers are non-existent (or at least, inaccessible), and I'm trying to efficiently implement a stack. I have a stack implementation working, but it's O(n), which of course isn't as efficient as the usual O(1) you get with pointer-based stacks. I just can't figure out a better way to implement this.
Some important background of the limitations of this environment: there's a global array of instances of a class called Entity; variables can only store signed integers; and there's no method of using pointers or even creating new arrays. Super limited.
Entities have members for (x,y,z) coordinates, a map of strings to integers for arbitrary data storage (of integers, at least), and a list of strings for arbitrary string storage. The environment provides no way of comparing two strings, except by comparing them to hard-coded values, and it provides no native way of comparing two integers, unless one is hard-coded; so to compare two variable integers, you have to subtract them and compare to 0 (very Assembly-like in that regard).
The implementation I have now adds a new Entity instance to the list for each entry in the stack, storing its value and index in its map with the keys Value and Index (I know, original). Whenever a value is pushed onto the stack, I iterate through the list and increment the Index of each existing Entity, then create a new Entity with an Index of 0. When it's popped, I iterate through the list, find the one with Index=0, and copy that value; I decrement the Index of every non-zero Entity I find on that list.
It works perfectly, but of course that's O(n) for both pushing and popping. Even if I were to track the head Index somewhere, the only way to find the entry with the matching Index would be to subtract the head Index from all the entries first, which is still O(n).
Is there any way to do this more efficiently than O(n) without access to pointers or even additional arrays? Or is this the best that can be done with these restrictions?
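For reference, the bookkeeping described above looks roughly like this if the environment's restrictions are ignored and it is written in plain Java (Entity is just a stand-in with the string-to-integer map mentioned in the question):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class EntityStack {
    static class Entity {
        Map<String, Integer> data = new HashMap<>();   // the arbitrary integer storage
    }

    private final List<Entity> entities = new ArrayList<>();   // the global Entity list

    // O(n): every existing entry moves one position deeper; the new entry gets Index 0.
    void push(int value) {
        for (Entity e : entities) {
            e.data.put("Index", e.data.get("Index") + 1);
        }
        Entity top = new Entity();
        top.data.put("Value", value);
        top.data.put("Index", 0);
        entities.add(top);
    }

    // O(n): find the entry whose Index is 0, remove it, shift the rest back up.
    int pop() {
        int value = 0;
        Entity top = null;
        for (Entity e : entities) {
            if (e.data.get("Index") == 0) {
                value = e.data.get("Value");
                top = e;
            } else {
                e.data.put("Index", e.data.get("Index") - 1);
            }
        }
        entities.remove(top);
        return value;
    }
}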

How to implement a collection that supports real-time filtering?

I want to implement a mutable sequential collection FilteredList that wraps another collection List and filters it based on a predicate.
Both the wrapped List and the exposed FilteredList are mutable and observable, and should be synchronized (so for example, if someone adds an element to List that element should appear in the correct position in FilteredList, and vice versa).
Elements that don't satisfy the predicate can still be added to FilteredList, but they will not be visible (they will still appear in the inner list).
The collections should support:
Insert(index,value) which inserts an element value at position index, pushing elements forward.
Remove(index) which removes the element at position index, moving all subsequent elements back.
Update(index, value), which updates the element at position index to be value.
I'm having trouble coming up with a good synchronization mechanism.
I don't have any strict complexity bounds, but real world efficiency is important.
The best way to avoid synchronization difficulties is to create a data structure that doesn't need them: use a single data structure to present the filtered and unfiltered data.
You should be able to do that with a modified skip list (actually, an indexable skip list), which will give you O(log n) access by index.
What you do is maintain two separate sets of forward pointers for each node, rather than just one set. The one set is for the unfiltered list, as in the normal skip list, and the other set is for the filtered list.
Adding to or removing from the list is the same for the filtered and unfiltered lists. That is, you find the node at index by following the appropriate filtered or unfiltered links, and then add or remove the node, updating both sets of link pointers.
This should be more efficient than a standard sequential list, because insertion and removal don't incur the cost of moving items up or down to make a hole or fill a gap; it's all done with references.
It takes a little more space per node, though. On average, a skip list requires two extra references per node. Since you're building what is in effect two skip lists in one, expect your nodes to require, on average, four extra references per node.
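A rough sketch of what a node in such a dual skip list might hold (in Java; the field names are invented for illustration, and the actual search and insert machinery of an indexable skip list is omitted):

class DualSkipNode<T> {
    T value;

    // One set of links threads every element (the unfiltered view);
    // the other threads only the elements that satisfy the predicate.
    DualSkipNode<T>[] allNext;        // forward pointers, one per level
    DualSkipNode<T>[] filteredNext;

    // For an indexable skip list, each forward pointer also records how many
    // elements of its view it skips over, so index lookups are O(log n).
    int[] allWidth;
    int[] filteredWidth;

    @SuppressWarnings("unchecked")
    DualSkipNode(T value, int levels) {
        this.value = value;
        this.allNext = new DualSkipNode[levels];
        this.filteredNext = new DualSkipNode[levels];
        this.allWidth = new int[levels];
        this.filteredWidth = new int[levels];
    }
}

An element that fails the predicate is simply never linked into (or counted by) the filtered pointers, while remaining reachable through the unfiltered ones.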
Edit after comment
If, as you say, you don't control List, then you still maintain this dual skip list that I described. But the data stored in the skip list is just the index into List. You said that List is observable, so you get notification of all insert and delete operations, so you should be able to maintain an index by reacting to all notifications.
When somebody wants to operate on FilteredList, you use the filtered index links to find the List index of the FilteredList record the user wanted to affect. Then you pass the request onto List, using the translated index. And then you react to the observable notification from List.
Basically, you're just maintaining a secondary index into List, so that you can translate FilteredList indexes into List indexes.

What are the underlying data structures used for Redis?

I'm trying to answer two questions in a definitive list:
What are the underlying data structures used for Redis?
And what are the main advantages/disadvantages/use cases for each type?
So, I've read that Redis lists are actually implemented with linked lists. But for other types, I'm not able to dig up any information. Also, if someone were to stumble upon this question, they'd then have a high-level summary of the pros and cons of modifying or accessing the different data structures, and a complete reference for when it's best to use each specific type.
Specifically, I'm looking to outline all types: string, list, set, zset and hash.
Oh, I've looked at these articles, among others, so far:
http://redis.io/topics/data-types
http://redis.io/topics/data-types-intro
http://redis.io/topics/faq
I'll try to answer your question, but I'll start with something that may look strange at first: if you are not interested in Redis internals you should not care about how data types are implemented internally. This is for a simple reason: for every Redis operation you'll find the time complexity in the documentation and, if you have the set of operations and the time complexity, the only other thing you need is some clue about memory usage (and because we do many optimizations that may vary depending on data, the best way to get these latter figures is to do a few trivial real world tests).
But since you asked, here is the underlying implementation of every Redis data type.
Strings are implemented using a C dynamic string library so that we don't pay (asymptotically speaking) for allocations in append operations. This way we have O(N) appends, for instance, instead of having quadratic behavior.
Lists are implemented with linked lists.
Sets and Hashes are implemented with hash tables.
Sorted sets are implemented with skip lists (a peculiar type of balanced trees).
But when lists, sets, and sorted sets are small in number of items and size of the largest values, a different, much more compact encoding is used. This encoding differs for different types, but has the feature that it is a compact blob of data that often forces an O(N) scan for every operation. Since we use this format only for small objects this is not an issue; scanning a small O(N) blob is very cache friendly, so practically speaking it is very fast, and when there are too many elements the encoding is automatically switched to the native encoding (linked list, hash, and so forth).
But your question was not really just about internals, your point was What type to use to accomplish what?.
Strings
This is the base type of all the types. It's a simple type in its own right, but it is also the base type of the complex types, because a List is a list of strings, a Set is a set of strings, and so forth.
A Redis string is a good idea in all the obvious scenarios where you want to store an HTML page, but also when you want to avoid converting your already encoded data. So for instance, if you have JSON or MessagePack you may just store objects as strings. In Redis 2.6 you can even manipulate this kind of object server side using Lua scripts.
Another interesting usage of strings is bitmaps, and in general random access arrays of bytes, since Redis exports commands to access random ranges of bytes, or even single bits. For instance check this good blog post: Fast Easy real time metrics using Redis.
Lists
Lists are good when you are likely to touch only the extremes of the list: near tail, or near head. Lists are not very good to paginate stuff, because random access is slow, O(N).
So good uses of lists are plain queues and stacks, or processing items in a loop using RPOPLPUSH with same source and destination to "rotate" a ring of items.
Lists are also good when we want just to create a capped collection of N items where usually we access just the top or bottom items, or when N is small.
Sets
Sets are an unordered data collection, so they are good every time you have a collection of items and it is very important to check for existence or size of the collection in a very fast way. Another cool thing about sets is support for peeking or popping random elements (SRANDMEMBER and SPOP commands).
Sets are also good to represent relations, e.g., "What are friends of user X?" and so forth. But other good data structures for this kind of stuff are sorted sets as we'll see.
Sets support complex operations like intersections, unions, and so forth, so this is a good data structure for using Redis in a "computational" manner, when you have data and you want to perform transformations on that data to obtain some output.
Small sets are encoded in a very efficient way.
Hashes
Hashes are the perfect data structure to represent objects, composed of fields and values. Fields of hashes can also be atomically incremented using HINCRBY. When you have objects such as users, blog posts, or some other kind of item, hashes are likely the way to go if you don't want to use your own encoding like JSON or similar.
However, keep in mind that small hashes are encoded very efficiently by Redis, and you can ask Redis to atomically GET, SET or increment individual fields in a very fast fashion.
Hashes can also be used to represent linked data structures, using references. For instance check the lamernews.com implementation of comments.
Sorted Sets
Sorted sets are the only other data structures, besides lists, to maintain ordered elements. You can do a number of cool things with sorted sets. For instance, you can have all kinds of Top Something lists in your web application: top users by score, top posts by pageviews, top whatever, and a single Redis instance will support tons of insertion and get-top-elements operations per second.
Sorted sets, like regular sets, can be used to describe relations, but they also allow you to paginate the list of items and to remember the ordering. For instance, if I remember friends of user X with a sorted set I can easily remember them in order of accepted friendship.
Sorted sets are good for priority queues.
Sorted sets are like more powerful lists where inserting, removing, or getting ranges from the middle of the list is always fast. But they use more memory, and are O(log(N)) data structures.
Conclusion
I hope that I provided some info in this post, but it is far better to download the source code of lamernews from http://github.com/antirez/lamernews and understand how it works. Many data structures from Redis are used inside Lamer News, and there are many clues about what to use to solve a given task.
Sorry for grammar typos, it's midnight here and too tired to review the post ;)
Most of the time, you don't need to understand the underlying data structures used by Redis. But a bit of knowledge helps you make CPU vs. memory trade-offs. It also helps you model your data in an efficient manner.
Internally, Redis uses the following data structures :
String
Dictionary
Doubly Linked List
Skip List
Zip List
Int Sets
Zip Maps (deprecated in favour of zip list since Redis 2.6)
To find the encoding used by a particular key, use the command object encoding <key>.
1. Strings
In Redis, Strings are called Simple Dynamic Strings, or SDS. It's a smallish wrapper over a char * that allows you to store the length of the string and number of free bytes as a prefix.
Because the length of the string is stored, strlen is an O(1) operation. Also, because the length is known, Redis strings are binary safe. It is perfectly legal for a string to contain the null character.
Strings are the most versatile data structure available in Redis. A String is all of the following:
A string of characters that can store text. See SET and GET commands.
A byte array that can store binary data.
A long that can store numbers. See INCR, DECR, INCRBY and DECRBY commands.
An Array (of chars, ints, longs or any other data type) that can allow efficient random access. See SETRANGE and GETRANGE commands.
A bit array that allows you to set or get individual bits. See SETBIT and GETBIT commands.
A block of memory that you can use to build other data structures. This is used internally to build ziplists and intsets, which are compact, memory-efficient data structures for small number of elements. More on this below.
2. Dictionary
Redis uses a Dictionary for the following:
To map a key to its associated value, where value can be a string, hash, set, sorted set or list.
To map a key to its expiry timestamp.
To implement Hash, Set and Sorted Set data types.
To map Redis commands to the functions that handle those commands.
To map a Redis key to a list of clients that are blocked on that key. See BLPOP.
Redis Dictionaries are implemented using Hash Tables. Instead of explaining the implementation, I will just explain the Redis specific things :
Dictionaries use a structure called dictType to extend the behaviour of a hash table. This structure has function pointers, and so the following operations are extendable: a) hash function, b) key comparison, c) key destructor, and d) value destructor.
Dictionaries use the murmurhash2. (Previously they used the djb2 hash function, with seed=5381, but then the hash function was switched to murmur2. See this question for an explanation of the djb2 hash algorithm.)
Redis uses Incremental Hashing, also known as Incremental Resizing. The dictionary has two hash tables. Every time the dictionary is touched, one bucket is migrated from the first (smaller) hash table to the second. This way, Redis prevents an expensive resize operation (a simplified sketch appears at the end of this section).
The Set data structure uses a Dictionary to guarantee there are no duplicates. The Sorted Set uses a dictionary to map an element to its score, which is why ZSCORE is an O(1) operation.
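A much-simplified Java illustration of the incremental rehashing idea described above (this is not Redis's actual C code; chaining, the hash function, and the trigger for resizing are reduced to the bare minimum):

import java.util.LinkedList;
import java.util.List;

// Two bucket arrays: while a rehash is in progress, lookups consult both,
// new entries go to the new table, and every operation migrates one bucket.
class IncrementalDict {
    static final class Entry {
        final String key;
        Object value;
        Entry(String key, Object value) { this.key = key; this.value = value; }
    }

    private List<Entry>[] oldTable;      // non-null only while rehashing
    private List<Entry>[] newTable;
    private int migrateIndex;            // next bucket of oldTable to migrate

    @SuppressWarnings("unchecked")
    IncrementalDict() {
        newTable = new List[8];
    }

    public void put(String key, Object value) {
        rehashStep();
        Entry e = find(key);
        if (e != null) { e.value = value; return; }
        insert(newTable, key, value);    // fresh entries always land in the new table
    }

    public Object get(String key) {
        rehashStep();
        Entry e = find(key);
        return e == null ? null : e.value;
    }

    // Called when the table is judged too full; later operations migrate buckets one at a time.
    @SuppressWarnings("unchecked")
    public void startRehash() {
        if (oldTable != null) return;    // already rehashing
        oldTable = newTable;
        newTable = new List[oldTable.length * 2];
        migrateIndex = 0;
    }

    private void rehashStep() {
        if (oldTable == null) return;
        List<Entry> bucket = oldTable[migrateIndex];
        if (bucket != null) {
            for (Entry e : bucket) insert(newTable, e.key, e.value);
            oldTable[migrateIndex] = null;
        }
        if (++migrateIndex == oldTable.length) oldTable = null;   // rehash finished
    }

    private Entry find(String key) {
        Entry e = findIn(newTable, key);
        if (e == null && oldTable != null) e = findIn(oldTable, key);
        return e;
    }

    private Entry findIn(List<Entry>[] table, String key) {
        List<Entry> bucket = table[Math.floorMod(key.hashCode(), table.length)];
        if (bucket == null) return null;
        for (Entry e : bucket) {
            if (e.key.equals(key)) return e;
        }
        return null;
    }

    private void insert(List<Entry>[] table, String key, Object value) {
        int i = Math.floorMod(key.hashCode(), table.length);
        if (table[i] == null) table[i] = new LinkedList<>();
        table[i].add(new Entry(key, value));
    }
}

Real Redis adds more machinery around this, but the two-table idea is the same.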
3. Doubly Linked Lists
The list data type is implemented using Doubly Linked Lists. Redis' implementation is straight-from-the-algorithm-textbook. The only change is that Redis stores the length in the list data structure. This ensures that LLEN has O(1) complexity.
4. Skip Lists
Redis uses Skip Lists as the underlying data structure for Sorted Sets. Wikipedia has a good introduction. William Pugh's paper Skip Lists: A Probabilistic Alternative to Balanced Trees has more details.
Sorted Sets use both a Skip List and a Dictionary. The dictionary stores the score of each element.
Redis' Skip List implementation is different from the standard implementation in the following ways:
Redis allows duplicate scores. If two nodes have the same score, they are sorted by the lexicographical order.
Each node has a back pointer at level 0. This allows you to traverse elements in reverse order of the score.
5. Zip List
A Zip List is like a doubly linked list, except it does not use pointers and stores the data inline.
Each node in a doubly linked list has at least 3 pointers - one forward pointer, one backward pointer and one pointer to reference the data stored at that node. Pointers require memory (8 bytes on a 64 bit system), and so for small lists, a doubly linked list is very inefficient.
A Zip List stores elements sequentially in a Redis String. Each element has a small header that stores the length and data type of the element, the offset to the next element and the offset to the previous element. These offsets replace the forward and backward pointers. Since the data is stored inline, we don't need a data pointer.
The Zip list is used to store small lists, sorted sets and hashes. Sorted sets are flattened into a list like [element1, score1, element2, score2, element3, score3] and stored in the Zip List. Hashes are flattened into a list like [key1, value1, key2, value2] etc.
With Zip Lists you have the power to make a tradeoff between CPU and Memory. Zip Lists are memory-efficient, but they use more CPU than a linked list (or Hash table/Skip List). Finding an element in the zip list is O(n). Inserting a new element requires reallocating memory. Because of this, Redis uses this encoding only for small lists, hashes and sorted sets. You can tweak this behaviour by altering the values of <datatype>-max-ziplist-entries and <datatype>-max-ziplist-value in redis.conf. See Redis Memory Optimization, section "Special encoding of small aggregate data types" for more information.
The comments on ziplist.c are excellent, and you can understand this data structure completely without having to read the code.
6. Int Sets
Int Sets are a fancy name for "Sorted Integer Arrays".
In Redis, sets are usually implemented using hash tables. For small sets, a hash table is inefficient memory wise. When the set is composed of integers only, an array is often more efficient.
An Int Set is a sorted array of integers. To find an element a binary search algorithm is used. This has a complexity of O(log N). Adding new integers to this array may require a memory reallocation, which can become expensive for large integer arrays.
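A minimal Java illustration of that idea: a sorted int array with binary-search lookup and insertion by shifting (the 16/32/64-bit encoding variants described below are ignored here):

import java.util.Arrays;

// Sorted integer array: O(log N) membership via binary search,
// O(N) insertion because the tail of the array has to be shifted and reallocated.
class IntSetSketch {
    private int[] values = new int[0];

    public boolean contains(int v) {
        return Arrays.binarySearch(values, v) >= 0;
    }

    public void add(int v) {
        int i = Arrays.binarySearch(values, v);
        if (i >= 0) return;                         // already present
        int insertAt = -(i + 1);                    // binarySearch insertion point
        int[] bigger = new int[values.length + 1];  // reallocation on every insert
        System.arraycopy(values, 0, bigger, 0, insertAt);
        bigger[insertAt] = v;
        System.arraycopy(values, insertAt, bigger, insertAt + 1,
                         values.length - insertAt);
        values = bigger;
    }
}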
As a further memory optimization, Int Sets come in 3 variants with different integer sizes: 16 bits, 32 bits and 64 bits. Redis is smart enough to use the right variant depending on the size of the elements. When a new element is added and it exceeds the current size, Redis automatically migrates it to the next size. If a string is added, Redis automatically converts the Int Set to a regular Hash Table based set.
Int Sets are a tradeoff between CPU and Memory. Int Sets are extremely memory efficient, and for small sets they are faster than a hash table. But after a certain number of elements, the O(log N) retrieval time and the cost of reallocating memory become too much. Based on experiments, the optimal threshold to switch over to a regular hash table was found to be 512. However, you can increase this threshold (decreasing it doesn't make sense) based on your application's needs. See set-max-intset-entries in redis.conf.
7. Zip Maps
Zip Maps are dictionaries flattened and stored in a list. They are very similar to Zip Lists.
Zip Maps have been deprecated since Redis 2.6, and small hashes are stored in Zip Lists. To learn more about this encoding, refer to the comments in zipmap.c.
Redis stores keys pointing to values. Keys can be any binary value up to a reasonable size (using short ASCII strings is recommended for readability and debugging purposes). Values are one of five native Redis data types.
1. strings — a sequence of binary safe bytes up to 512 MB
2. hashes — a collection of key value pairs
3. lists — an in-insertion-order collection of strings
4. sets — a collection of unique strings with no ordering
5. sorted sets — a collection of unique strings ordered by user defined scoring
Strings
A Redis string is a sequence of bytes.
Strings in Redis are binary safe (meaning they have a known length not determined by any special terminating characters), so you can store anything up to 512 megabytes in one string.
Strings are the canonical "key value store" concept. You have a key pointing to a value, where both key and value are text or binary strings.
For all possible operations on strings, see the string docs: http://redis.io/commands/#string
Hashes
A Redis hash is a collection of key value pairs.
A Redis hash holds many key value pairs, where each key and value is a string. Redis hashes do not support complex values directly (meaning, you can't have a hash field have a value of a list or set or another hash), but you can use hash fields to point to other top level complex values. The only special operation you can perform on hash field values is atomic increment/decrement of numeric contents.
You can think of a Redis hash in two ways: as a direct object representation and as a way to store many small values compactly.
Direct object representations are simple to understand. Objects have a name (the key of the hash) and a collection of internal keys with values. See the example below for, well, an example.
Storing many small values using a hash is a clever Redis massive data storage technique. When a hash has a small number of fields (~100), Redis optimizes the storage and access efficiency of the entire hash. Redis's small hash storage optimization raises an interesting behavior: it's more efficient to have 100 hashes each with 100 internal keys and values rather than having 10,000 top level keys pointing to string values. Using Redis hashes to optimize your data storage this way does require additional programming overhead for tracking where data ends up, but if your data storage is primarily string based, you can save a lot of memory overhead using this one weird trick.
For all possible operations on hashes, see the hash docs
Lists
Redis lists act like linked lists.
You can insert to, delete from, and traverse lists from either the head or tail of a list.
Use lists when you need to maintain values in the order they were inserted. (Redis does give you the option to insert into any arbitrary list position if you need to, but your insertion performance will degrade if you insert far from your start position.)
Redis lists are often used as producer/consumer queues. Insert items into a list then pop items from the list. What happens if your consumers try to pop from a list with no elements? You can ask Redis to wait for an element to appear and return it to you immediately when it gets added. This turns Redis into a real time message queue/event/job/task/notification system.
You can atomically remove elements off either end of a list, enabling any list to be treated as a stack or a queue.
You can also maintain fixed-length lists (capped collections) by trimming your list to a specific size after every insertion.
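For example, with the Jedis Java client (the key name here is made up), a 100-element capped list can be maintained by trimming after every push:

import redis.clients.jedis.Jedis;

public class CappedListExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Newest element goes to the head; keep only the 100 most recent.
            jedis.lpush("recent:events", "event-payload");
            jedis.ltrim("recent:events", 0, 99);
        }
    }
}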
For all possible operations on lists, see the lists docs
Sets
Redis sets are, well, sets.
A Redis set contains unique unordered Redis strings where each string only exists once per set. If you add the same element ten times to a set, it will only show up once. Sets are great for lazily ensuring something exists at least once without worrying about duplicate elements accumulating and wasting space. You can add the same string as many times as you like without needing to check if it already exists.
Sets are fast for membership checking, insertion, and deletion of members in the set.
Sets have efficient set operations, as you would expect. You can take the union, intersection, and difference of multiple sets at once. Results can either be returned to the caller or results can be stored in a new set for later usage.
Sets have constant time access for membership checks (unlike lists), and Redis even has convenient random member removal and returning ("pop a random element from the set") or random member returning without replacement ("give me 30 random-ish unique users") or with replacement ("give me 7 cards, but after each selection, put the card back so it can potentially be sampled again").
For all possible operations on sets, see the sets docs.
Sorted Sets
Redis sorted sets are sets with a user-defined ordering.
For simplicity, you can think of a sorted set as a binary tree with unique elements. (Redis sorted sets are actually skip lists.) The sort order of elements is defined by each element's score.
Sorted sets are still sets. Elements may only appear once in a set. An element, for uniqueness purposes, is defined by its string contents. Inserting element "apple" with sorting score 3, then inserting element "apple" with sorting score 500 results in one element "apple" with sorting score 500 in your sorted set. Sets are only unique based on Data, not based on (Score, Data) pairs.
Make sure your data model relies on the string contents and not the element's score for uniqueness. Scores are allowed to be repeated (or even zero), but, one last time, set elements can only exist once per sorted set. For example, if you try to store the history of every user login as a sorted set by making the score the epoch of the login and the value the user id, you will end up storing only the last login epoch for all your users. Your set would grow to the size of your userbase and not your desired size of userbase * logins.
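A small illustration of the uniqueness rule, using the Jedis Java client (the key and member names are made up):

import redis.clients.jedis.Jedis;

public class SortedSetUniqueness {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.zadd("fruits", 3, "apple");
            jedis.zadd("fruits", 500, "apple");   // same member: the score is updated, no new element
            System.out.println(jedis.zcard("fruits"));           // 1
            System.out.println(jedis.zscore("fruits", "apple"));  // 500.0
        }
    }
}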
Elements are added to your set with scores. You can update the score of any element at any time, just add the element again with a new score. Scores are represented by floating point doubles, so you can specify granularity of high precision timestamps if needed. Multiple elements may have the same score.
You can retrieve elements in a few different ways. Since everything is sorted, you can ask for elements starting at the lowest scores. You can ask for elements starting at the highest scores ("in reverse"). You can ask for elements by their sort score either in natural or reverse order.
For all possible operations on sorted sets, see the sorted sets docs.

Best way to remove an entry from a hash table

What is the best way to remove an entry from a hashtable that uses linear probing? One way to do this would be to use a flag to indicate deleted elements. Are there any better ways?
An easy technique is to:
Find and remove the desired element
Go to the next bucket
If the bucket is empty, quit
If the bucket is full, delete the element in that bucket and re-add it to the hash table using the normal means. The item must be removed before re-adding, because it is likely that the item could be added back into its original spot.
Repeat step 2.
This technique keeps your table tidy at the expense of slightly slower deletions.
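A sketch of that technique in Java on a bare-bones linear-probing table (fixed capacity, no resizing, and it assumes the table never fills up completely):

class LinearProbingTable {
    private final String[] keys = new String[16];
    private final Object[] values = new Object[16];

    private int indexFor(String key) {
        return Math.floorMod(key.hashCode(), keys.length);
    }

    public void put(String key, Object value) {
        int i = indexFor(key);
        while (keys[i] != null && !keys[i].equals(key)) {
            i = (i + 1) % keys.length;             // probe to the next slot
        }
        keys[i] = key;
        values[i] = value;
    }

    public Object get(String key) {
        int i = indexFor(key);
        while (keys[i] != null) {
            if (keys[i].equals(key)) return values[i];
            i = (i + 1) % keys.length;
        }
        return null;
    }

    // Remove the key, then take out and re-insert every element in the cluster
    // that follows it, so probe sequences stay unbroken without "deleted" markers.
    public void remove(String key) {
        int i = indexFor(key);
        while (keys[i] != null && !keys[i].equals(key)) {
            i = (i + 1) % keys.length;
        }
        if (keys[i] == null) return;               // not present
        keys[i] = null;
        values[i] = null;
        int j = (i + 1) % keys.length;
        while (keys[j] != null) {                   // walk the rest of the cluster
            String k = keys[j];
            Object v = values[j];
            keys[j] = null;                         // remove before re-adding
            values[j] = null;
            put(k, v);                              // re-add through the normal path
            j = (j + 1) % keys.length;
        }
    }
}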
It depends on how you handle overflow and whether (1) the item being removed is in an overflow slot or not, and (2) if there are overflow items beyond the item being removed, whether they have the hash key of the item being removed or possibly some other hash key. [Overlooking that double condition is a common source of bugs in deletion implementations.]
If collisions overflow into a linked list, it is pretty easy. You're either popping up the list (which may have gone empty) or deleting a member from the middle or end of the linked list. Those are fun and not particularly difficult. There can be other optimizations to avoid excessive memory allocations and freeings to make this even more efficient.
For linear probing, Knuth suggests that a simple approach is to have a way to mark a slot as empty, deleted, or occupied. Mark a removed occupant slot as deleted so that overflow by linear probing will skip past it, but if an insertion is needed, you can fill the first deleted slot that you passed over [The Art of Computer Programming, vol.3: Sorting and Searching, section 6.4 Hashing, p. 533 (ed.2)]. This assumes that deletions are rather rare.
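The marker ("tombstone") approach looks roughly like this in Java; DELETED is a sentinel that lookups probe past but that insertion may reuse (resizing and tombstone cleanup are omitted, and at least one truly empty slot is assumed to remain):

class TombstoneTable {
    private static final String DELETED = new String("<deleted>");  // sentinel, compared by identity

    private final String[] keys = new String[16];
    private final Object[] values = new Object[16];

    private int indexFor(String key) {
        return Math.floorMod(key.hashCode(), keys.length);
    }

    public void put(String key, Object value) {
        int i = indexFor(key);
        int firstDeleted = -1;
        while (keys[i] != null) {
            if (keys[i] == DELETED) {
                if (firstDeleted < 0) firstDeleted = i;    // remember first reusable slot
            } else if (keys[i].equals(key)) {
                values[i] = value;                         // overwrite existing key
                return;
            }
            i = (i + 1) % keys.length;
        }
        int target = firstDeleted >= 0 ? firstDeleted : i; // fill a passed-over deleted slot if any
        keys[target] = key;
        values[target] = value;
    }

    public Object get(String key) {
        int i = indexFor(key);
        while (keys[i] != null) {                          // probing skips past DELETED slots
            if (keys[i] != DELETED && keys[i].equals(key)) return values[i];
            i = (i + 1) % keys.length;
        }
        return null;
    }

    public void remove(String key) {
        int i = indexFor(key);
        while (keys[i] != null) {
            if (keys[i] != DELETED && keys[i].equals(key)) {
                keys[i] = DELETED;                          // mark, don't empty
                values[i] = null;
                return;
            }
            i = (i + 1) % keys.length;
        }
    }
}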
Knuth gives a nice refinement as Algorithm R6.4 [pp. 533-534] that instead marks the cell as empty rather than deleted, and then finds ways to move table entries back closer to their initial-probe location by moving the hole that was just made until it ends up next to another hole.
Knuth cautions that this will move existing still-occupied slot entries and is not a good idea if pointers to the slots are being held onto outside of the hash table. [If you have garbage-collected- or other managed-references in the slots, it is all right to move the slot, since it is the reference that is being used outside of the table and it doesn't matter where the slot that references the same object is in the table.]
The Python hash table implementation (arguably very fast) uses dummy elements to mark deletions. As you grow or shrink your table (assuming you're not doing a fixed-size table), you can drop the dummies at the same time.
If you have access to a copy, have a look at the article in Beautiful Code about the implementation.
The best general solutions I can think of include:
If you can use a non-const iterator (a la C++ STL or Java), you should be able to remove them as you encounter them. Presumably, though, you wouldn't be asking this question unless you're using a const iterator or an enumerator which would be invalidated if the underlying collection is modified.
As you said, you could mark a deleted flag within the contained object. This doesn't release any memory or reduce collisions on the key, though, so it's not the best solution. Also requires the addition of a property on the class that probably doesn't really belong there. If this bothers you as much as it would me, or if you simply can't add a flag to the stored object (perhaps you don't control the class), you could store these flags in a separate hash table. This requires the most long-term memory use.
Push the keys of the to-be-removed items into a vector or array list while traversing the hash table. After releasing the enumerator, loop through this secondary list and remove the keys from the hash table. If you have a lot of items to remove and/or the keys are large (which they shouldn't be), this may not be the best solution.
If you're going to end up removing more items from the hash table than you're leaving in there, it may be better to create a new hash table, and as you traverse your original one, add to the new hash table only the items you're going to keep. Then replace your reference(s) to the old hash table with the new one. This saves a secondary list iteration, but it's probably only efficient if the new hash table will have significantly fewer items than the original one, and it definitely only works if you can change all the references to the original hash table, of course.
If your hash table gives you access to its collection of keys, you may be able to iterate through those and remove items from the hash table in one pass.
If your hash table or some helper in your library provides you with predicate-based collection modifiers, you may have a Remove() function to which you can pass a lambda expression or function pointer to identify the items to remove.
A common technique when time is a factor is to have a second table of deleted items, and clean up the main table when you have time. Commonly used in search engines.
How about enhancing the hash table to contain pointers like a linked list?
When you insert, if the bucket is full, create a pointer from this bucket to the bucket where the new item is stored.
While deleting something from the hashtable, the solution will be equivalent to how you write a function to delete a node from a linked list.
