Reducing memory overhead of a simple sorted array - algorithm

I have an array of items that are sorted by a key value; items are retrieved by doing a binary search. A simplified version of the items would look something like this:
struct Item
{
    uint64_t key;
    uint64_t data;
};
I'm looking for ways to reduce the overhead of the key. The key value is not used for anything except searching. Assuming insert cost is not a concern, but retrieval cost is, what alternative data structure could I use to reduce the bookkeeping overhead to something less than 64-bits per item?
The only other "gotcha" is that I need to be able to detect the case where a key isn't present in the set.

One obvious possibility would be to treat your key as 8 individual bytes and build a trie out of them. This combines the common prefixes in your keys, so if you have (for example) a thousand Items with the same first byte, you only store that first byte once instead of a thousand times.
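A rough sketch of that idea, with hypothetical names: a fixed-depth trie indexed by successive key bytes (most significant first), where only the leaf level carries the data. This toy version uses a std::map per node, whose own overhead would swamp any savings; a real implementation would use compact node layouts (sorted byte arrays or bitmaps), and the savings only materialise if your keys actually share prefixes.

#include <cstdint>
#include <map>
#include <memory>

// Fixed-depth trie over the 8 bytes of a 64-bit key, most significant byte first.
// Interior levels share common prefixes; nodes at depth 8 store the data value.
struct TrieNode
{
    std::map<uint8_t, std::unique_ptr<TrieNode>> children;  // only bytes that occur
    uint64_t data = 0;                                      // meaningful at depth 8 only
};

void trie_insert(TrieNode& root, uint64_t key, uint64_t data)
{
    TrieNode* node = &root;
    for (int shift = 56; shift >= 0; shift -= 8) {
        auto& child = node->children[static_cast<uint8_t>(key >> shift)];
        if (!child) child = std::make_unique<TrieNode>();
        node = child.get();
    }
    node->data = data;
}

// Returns true and fills 'data' if the key is present, false otherwise.
bool trie_lookup(const TrieNode& root, uint64_t key, uint64_t& data)
{
    const TrieNode* node = &root;
    for (int shift = 56; shift >= 0; shift -= 8) {
        auto it = node->children.find(static_cast<uint8_t>(key >> shift));
        if (it == node->children.end()) return false;
        node = it->second.get();
    }
    data = node->data;
    return true;
}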

In order to be able to detect the absence of a key from your set, you need to store the keys in one way or another. Since the keys are random, you can't compress them into fewer than 64 bits by using clever data structures. Ergo, the way you're doing it now is optimal in terms of memory consumption.
If there was some structure, or predictability, to the keys it would be a different story.

If the "keys are basically random", then you don't have much option other than what you are using right now. For 64bit integers you cannot even assume a dense set of keys.
Are there anything else about the keys that you can exploit? ... Maybe a lot of keys are near to each other ... or something else? ... In this cases you can build multi-level hash tables or tries for storing your data.

Related

Improving query access performance for unordered maps that are unchanging

I am looking for suggestions for improving the query-time access for unordered maps. My code essentially just consists of 2 steps. In the first step, I populate the unordered map. After the first step, no more entries are ever added to the map. In the second step, the unordered map is only queried. Since the map is essentially unchanging, is there something that can be done to speed up the query time?
For instance, does the STL provide any function that can adjust the internal allocations in the map to improve query-time access? In other words, it is possible that more than one key was mapped to the same bucket in the unordered map. If more memory were allocated to the map, the chances of such a collision occurring would be reduced. In that sense, I am curious whether there is anything that can be done, knowing that the unordered map will remain unchanged.
If measurements show this is important for you, then I'd suggest taking measurements for other hash table implementations outside the Standard Library, e.g. Google's. Using closed hashing aka open addressing may well work better for you, especially if your hash table entries are small enough to store directly in the hash table buckets.
More generally, Marshall suggests finding a good hash function. Be careful though - sometimes a generally "bad" hash function performs better than a "good" one, if it meshes nicely with some of the properties of your keys. For example, if you tend to have incrementing numbers, perhaps with a few gaps, then an identity (aka trivial) hash function that just returns the key can select hash buckets with far fewer collisions than a cryptographic hash that pseudo-randomly (but repeatably) scatters keys with as little as a single bit of difference into uncorrelated buckets. Identity hashing can also help if you're looking up several nearby key values, as their buckets are probably nearby too and you'll get better cache utilisation. But you've told us nothing about your keys, values, number of entries etc., so I'll leave the rest with you.
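For illustration, a minimal identity-hash functor along those lines (assuming 64-bit integer keys; IdentityHash is a made-up name):

#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Trivial/identity hash: just return the key. This works well when keys are
// roughly sequential, and badly when many keys collide in their low bits.
struct IdentityHash
{
    std::size_t operator()(uint64_t key) const noexcept { return static_cast<std::size_t>(key); }
};

std::unordered_map<uint64_t, int, IdentityHash> table;  // example usage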
You have two knobs that you can twist: the hash function and the number of buckets in the map. One is fixed at compile time (the hash function), and the other you can modify (somewhat) at run time.
A good hash function will give you very few collisions (non-equal values that have the same hash value). If you have many collisions, then there's not really much you can do to improve your lookup times. Worst case (all inputs hash to the same value) gives you O(N) lookup times. So that's where you want to focus your effort.
Once you have a good hash function, then you can play games with the number of buckets (via rehash) which can reduce collisions further.
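A small sketch of those two run-time knobs on a std::unordered_map, assuming the map is fully populated before the query phase (the 0.5 load factor is just an example value):

#include <cstdint>
#include <string>
#include <unordered_map>

void tune_for_lookups(std::unordered_map<uint64_t, std::string>& m)
{
    // Lowering the maximum load factor trades memory for fewer collisions.
    m.max_load_factor(0.5f);
    // One final rehash after all inserts: reserve() picks a bucket count large
    // enough for the current size at the new load factor (it rehashes internally).
    m.reserve(m.size());
}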

When to use hash tables?

What are the cases where using a hash table can improve performance, and when does it not? And what are the cases where hash tables are not applicable?
What are the cases where using a hash table can improve performance, and when does it not?
If you have reason to care, implement using hash tables and whatever else you're considering, put your actual data through, and measure which performs better.
That said, if the hash table has the operations you need (i.e. you're not expecting to iterate it in sorted order, or compare it quickly to another hash table), and has millions or more (billions, trillions...) of elements, then it'll probably be your best choice. But a lot depends on the hash table implementation (especially the choice of closed vs. open hashing), object size, hash function quality and calculation cost/runtime, comparison cost, oddities of your computer's memory performance at different cache levels... in short: too many things to make even an educated guess a better choice than measuring, when it matters.
And what are the cases where hash tables are not applicable?
Mainly when:
The input can't be hashed (e.g. you're given binary blobs and don't know which bits in there are significant, but you do have an int cmp(const T&, const T&) function you could use for a std::map), or
the available/possible hash functions are very collision prone, or
you want to avoid worst-case performance hits for:
handling lots of hash-colliding elements (perhaps "engineered" by someone trying to crash or slow down your software)
resizing the hash table: unless presized to be large enough (which can be wasteful and slow when excessive memory's used), the majority of implementations will outgrow the arrays they're using for the hash table every now and then, then allocate a bigger array and copy content across: this can make the specific insertions that cause this rehashing much slower than the normal O(1) behaviour, even though the average is still O(1); if you need more consistent behaviour in all cases, something like a balanced binary tree may serve
your access patterns are quite specialised (e.g. frequently operating on elements with keys that are "nearby" in some specific sort order), such that cache efficiency is better for other storage models that keep them nearby in memory (e.g. bucket sorted elements), even if you're not exactly relying on the sort order for e.g. iteration
We use hash tables to get an access time of O(1). Imagine a dictionary. When you are looking for a word, e.g. "happy", you jump straight to 'H'. Here the hash function is determined by the first letter. Then you look for "happy" within the H bucket (actually the H bucket, then the HA bucket, then the HAP bucket, and so on).
It doesn't make sense to use hash tables when your data is ordered or needs ordering, like sorted numbers. (The alphabet is ordered A, B, C, ..., Z, but it wouldn't matter if you switched A and Z, provided you know it is switched in your dictionary.)

What are the underlying data structures used for Redis?

I'm trying to answer two questions in a definitive list:
What are the underlying data structures used for Redis?
And what are the main advantages/disadvantages/use cases for each type?
So, I've read that Redis lists are actually implemented with linked lists. But for other types, I'm not able to dig up any information. Also, if someone were to stumble upon this question, they'd then have a complete list of when best to use specific types to reference, along with a high-level summary of the pros and cons of modifying or accessing the different data structures.
Specifically, I'm looking to outline all types: string, list, set, zset and hash.
Oh, I've looked at these articles, among others, so far:
http://redis.io/topics/data-types
http://redis.io/topics/data-types-intro
http://redis.io/topics/faq
I'll try to answer your question, but I'll start with something that may look strange at first: if you are not interested in Redis internals, you should not care about how data types are implemented internally. This is for a simple reason: for every Redis operation you'll find the time complexity in the documentation and, if you have the set of operations and the time complexity, the only other thing you need is some clue about memory usage (and because we do many optimizations that may vary depending on data, the best way to get these latter figures is to do a few trivial real-world tests).
But since you asked, here is the underlying implementation of every Redis data type.
Strings are implemented using a C dynamic string library so that we don't pay (asymptotically speaking) for allocations in append operations. This way we have O(N) appends, for instance, instead of having quadratic behavior.
Lists are implemented with linked lists.
Sets and Hashes are implemented with hash tables.
Sorted sets are implemented with skip lists (a peculiar type of balanced trees).
But when lists, sets, and sorted sets are small in number of items and size of the largest values, a different, much more compact encoding is used. This encoding differs for different types, but has the feature that it is a compact blob of data that often forces an O(N) scan for every operation. Since we use this format only for small objects this is not an issue; scanning a small O(N) blob is cache oblivious so practically speaking it is very fast, and when there are too many elements the encoding is automatically switched to the native encoding (linked list, hash, and so forth).
But your question was not really just about internals, your point was What type to use to accomplish what?.
Strings
This is the base type of all the types. It's one of the five types but is also the base type of the complex types, because a List is a list of strings, a Set is a set of strings, and so forth.
A Redis string is a good idea in all the obvious scenarios where you want to store an HTML page, but also when you want to avoid converting your already encoded data. So for instance, if you have JSON or MessagePack you may just store objects as strings. In Redis 2.6 you can even manipulate this kind of object server side using Lua scripts.
Another interesting usage of strings is bitmaps, and in general random access arrays of bytes, since Redis exports commands to access random ranges of bytes, or even single bits. For instance check this good blog post: Fast Easy real time metrics using Redis.
Lists
Lists are good when you are likely to touch only the extremes of the list: near tail, or near head. Lists are not very good to paginate stuff, because random access is slow, O(N).
So good uses of lists are plain queues and stacks, or processing items in a loop using RPOPLPUSH with same source and destination to "rotate" a ring of items.
Lists are also good when we want just to create a capped collection of N items where usually we access just the top or bottom items, or when N is small.
Sets
Sets are an unordered data collection, so they are good every time you have a collection of items and it is very important to check for existence or size of the collection in a very fast way. Another cool thing about sets is support for peeking or popping random elements (SRANDMEMBER and SPOP commands).
Sets are also good to represent relations, e.g., "What are friends of user X?" and so forth. But other good data structures for this kind of stuff are sorted sets as we'll see.
Sets support complex operations like intersections, unions, and so forth, so this is a good data structure for using Redis in a "computational" manner, when you have data and you want to perform transformations on that data to obtain some output.
Small sets are encoded in a very efficient way.
Hashes
Hashes are the perfect data structure to represent objects, composed of fields and values. Fields of hashes can also be atomically incremented using HINCRBY. When you have objects such as users, blog posts, or some other kind of item, hashes are likely the way to go if you don't want to use your own encoding like JSON or similar.
However, keep in mind that small hashes are encoded very efficiently by Redis, and you can ask Redis to atomically GET, SET or increment individual fields in a very fast fashion.
Hashes can also be used to represent linked data structures, using references. For instance check the lamernews.com implementation of comments.
Sorted Sets
Sorted sets are the only other data structures, besides lists, to maintain ordered elements. You can do a number of cool things with sorted sets. For instance, you can have all kinds of Top Something lists in your web application: top users by score, top posts by pageviews, top whatever, and a single Redis instance will support tons of insertion and get-top-elements operations per second.
Sorted sets, like regular sets, can be used to describe relations, but they also allow you to paginate the list of items and to remember the ordering. For instance, if I remember friends of user X with a sorted set I can easily remember them in order of accepted friendship.
Sorted sets are good for priority queues.
Sorted sets are like more powerful lists where inserting, removing, or getting ranges from the middle of the list is always fast. But they use more memory, and are O(log(N)) data structures.
Conclusion
I hope that I provided some info in this post, but it is far better to download the source code of lamernews from http://github.com/antirez/lamernews and understand how it works. Many data structures from Redis are used inside Lamer News, and there are many clues about what to use to solve a given task.
Sorry for grammar typos, it's midnight here and too tired to review the post ;)
Most of the time, you don't need to understand the underlying data structures used by Redis. But a bit of knowledge helps you make CPU vs. memory trade-offs. It also helps you model your data in an efficient manner.
Internally, Redis uses the following data structures :
String
Dictionary
Doubly Linked List
Skip List
Zip List
Int Sets
Zip Maps (deprecated in favour of zip list since Redis 2.6)
To find the encoding used by a particular key, use the command object encoding <key>.
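For example (a hypothetical session; the reported encoding depends on your Redis version and the configured thresholds):

redis> RPUSH mylist a b c
(integer) 3
redis> OBJECT ENCODING mylist
"ziplist"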
1. Strings
In Redis, Strings are called Simple Dynamic Strings, or SDS. It's a smallish wrapper over a char * that allows you to store the length of the string and number of free bytes as a prefix.
Because the length of the string is stored, strlen is an O(1) operation. Also, because the length is known, Redis strings are binary safe. It is perfectly legal for a string to contain the null character.
Strings are the most versatile data structure available in Redis. A String is all of the following:
A string of characters that can store text. See SET and GET commands.
A byte array that can store binary data.
A long that can store numbers. See INCR, DECR, INCRBY and DECRBY commands.
An Array (of chars, ints, longs or any other data type) that can allow efficient random access. See SETRANGE and GETRANGE commands.
A bit array that allows you to set or get individual bits. See SETBIT and GETBIT commands.
A block of memory that you can use to build other data structures. This is used internally to build ziplists and intsets, which are compact, memory-efficient data structures for small number of elements. More on this below.
2. Dictionary
Redis uses a Dictionary for the following:
To map a key to its associated value, where value can be a string, hash, set, sorted set or list.
To map a key to its expiry timestamp.
To implement Hash, Set and Sorted Set data types.
To map Redis commands to the functions that handle those commands.
To map a Redis key to a list of clients that are blocked on that key. See BLPOP.
Redis Dictionaries are implemented using Hash Tables. Instead of explaining the implementation, I will just explain the Redis-specific things:
Dictionaries use a structure called dictType to extend the behaviour of a hash table. This structure has function pointers, and so the following operations are extendable: a) hash function, b) key comparison, c) key destructor, and d) value destructor.
Dictionaries use the murmurhash2. (Previously they used the djb2 hash function, with seed=5381, but then the hash function was switched to murmur2. See this question for an explanation of the djb2 hash algorithm.)
Redis uses Incremental Hashing, also known as Incremental Resizing. The dictionary has two hash tables. Every time the dictionary is touched, one bucket is migrated from the first (smaller) hash table to the second. This way, Redis prevents an expensive resize operation.
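Here is a much-simplified sketch of the incremental-rehash idea; it is not Redis's actual code (Redis migrates whole buckets between two real hash tables, while this toy moves one entry per operation between two std::unordered_maps):

#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Toy incremental rehash: keep an "old" and a "new" table, and migrate a little
// of the old table on every operation instead of rebuilding it all at once.
struct IncrementalDict
{
    std::unordered_map<uint64_t, std::string> oldTable, newTable;

    void step()                            // migrate one entry per call
    {
        if (oldTable.empty()) return;
        auto it = oldTable.begin();
        newTable.insert(*it);
        oldTable.erase(it);
    }

    void set(uint64_t k, const std::string& v)
    {
        step();
        oldTable.erase(k);                 // avoid a stale copy in the old table
        newTable[k] = v;
    }

    std::optional<std::string> get(uint64_t k)
    {
        step();
        if (auto it = newTable.find(k); it != newTable.end()) return it->second;
        if (auto it = oldTable.find(k); it != oldTable.end()) return it->second;
        return std::nullopt;
    }
};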
The Set data structure uses a Dictionary to guarantee there are no duplicates. The Sorted Set uses a dictionary to map an element to its score, which is why ZSCORE is an O(1) operation.
3. Doubly Linked Lists
The list data type is implemented using Doubly Linked Lists. Redis' implementation is straight-from-the-algorithm-textbook. The only change is that Redis stores the length in the list data structure. This ensures that LLEN has O(1) complexity.
4. Skip Lists
Redis uses Skip Lists as the underlying data structure for Sorted Sets. Wikipedia has a good introduction. William Pugh's paper Skip Lists: A Probabilistic Alternative to Balanced Trees has more details.
Sorted Sets use both a Skip List and a Dictionary. The dictionary stores the score of each element.
Redis' Skip List implementation is different from the standard implementation in the following ways:
Redis allows duplicate scores. If two nodes have the same score, they are sorted in lexicographical order.
Each node has a back pointer at level 0. This allows you to traverse elements in reverse order of the score.
5. Zip List
A Zip List is like a doubly linked list, except it does not use pointers and stores the data inline.
Each node in a doubly linked list has at least 3 pointers - one forward pointer, one backward pointer and one pointer to reference the data stored at that node. Pointers require memory (8 bytes on a 64-bit system), and so for small lists, a doubly linked list is very inefficient.
A Zip List stores elements sequentially in a Redis String. Each element has a small header that stores the length and data type of the element, the offset to the next element and the offset to the previous element. These offsets replace the forward and backward pointers. Since the data is stored inline, we don't need a data pointer.
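As a rough illustration of "inline, no pointers" (a toy length-prefixed encoding, not Redis's actual ziplist format):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Toy inline list: each element is stored as a 4-byte length followed by its
// bytes, all in one contiguous buffer. No per-element pointers or allocations.
struct InlineList
{
    std::vector<uint8_t> buf;

    void push_back(const std::string& s)
    {
        uint32_t len = static_cast<uint32_t>(s.size());
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&len);
        buf.insert(buf.end(), p, p + sizeof len);
        buf.insert(buf.end(), s.begin(), s.end());
    }

    // O(n): walk the buffer element by element.
    bool contains(const std::string& s) const
    {
        std::size_t i = 0;
        while (i + sizeof(uint32_t) <= buf.size()) {
            uint32_t len;
            std::memcpy(&len, buf.data() + i, sizeof len);
            i += sizeof len;
            if (len == s.size() && std::memcmp(buf.data() + i, s.data(), len) == 0) return true;
            i += len;
        }
        return false;
    }
};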
The Zip list is used to store small lists, sorted sets and hashes. Sorted sets are flattened into a list like [element1, score1, element2, score2, element3, score3] and stored in the Zip List. Hashes are flattened into a list like [key1, value1, key2, value2] etc.
With Zip Lists you have the power to make a tradeoff between CPU and memory. Zip Lists are memory-efficient, but they use more CPU than a linked list (or hash table/skip list). Finding an element in the zip list is O(n). Inserting a new element requires reallocating memory. Because of this, Redis uses this encoding only for small lists, hashes and sorted sets. You can tweak this behaviour by altering the values of <datatype>-max-ziplist-entries and <datatype>-max-ziplist-value in redis.conf. See Redis Memory Optimization, section "Special encoding of small aggregate data types", for more information.
The comments on ziplist.c are excellent, and you can understand this data structure completely without having to read the code.
6. Int Sets
Int Sets are a fancy name for "Sorted Integer Arrays".
In Redis, sets are usually implemented using hash tables. For small sets, a hash table is inefficient memory wise. When the set is composed of integers only, an array is often more efficient.
An Int Set is a sorted array of integers. To find an element a binary search algorithm is used. This has a complexity of O(log N). Adding new integers to this array may require a memory reallocation, which can become expensive for large integer arrays.
As a further memory optimization, Int Sets come in 3 variants with different integer sizes: 16 bits, 32 bits and 64 bits. Redis is smart enough to use the right variant depending on the size of the elements. When a new element is added and it does not fit in the current integer size, Redis automatically upgrades the whole set to the next size. If a string is added, Redis automatically converts the Int Set to a regular hash-table-based set.
Int Sets are a tradeoff between CPU and Memory. Int Sets are extremely memory efficient, and for small sets they are faster than a hash table. But after a certain number of elements, the O(log N) retrieval time and the cost of reallocating memory become too much. Based on experiments, the optimal threshold to switch over to a regular hash table was found to be 512. However, you can increase this threshold (decreasing it doesn't make sense) based on your application's needs. See set-max-intset-entries in redis.conf.
7. Zip Maps
Zip Maps are dictionaries flattened and stored in a list. They are very similar to Zip Lists.
Zip Maps have been deprecated since Redis 2.6, and small hashes are stored in Zip Lists. To learn more about this encoding, refer to the comments in zipmap.c.
Redis stores keys pointing to values. Keys can be any binary value up to a reasonable size (using short ASCII strings is recommended for readability and debugging purposes). Values are one of five native Redis data types.
1. strings — a sequence of binary-safe bytes up to 512 MB
2. hashes — a collection of key-value pairs
3. lists — an in-insertion-order collection of strings
4. sets — a collection of unique strings with no ordering
5. sorted sets — a collection of unique strings ordered by a user-defined scoring
Strings
A Redis string is a sequence of bytes.
Strings in Redis are binary safe (meaning they have a known length not determined by any special terminating characters), so you can store anything up to 512 megabytes in one string.
Strings are the canonical "key-value store" concept. You have a key pointing to a value, where both key and value are text or binary strings.
For all possible operations on strings, see the string docs: http://redis.io/commands/#string
Hashes
A Redis hash is a collection of key value pairs.
A Redis hash holds many key value pairs, where each key and value is a string. Redis hashes do not support complex values directly (meaning, you can't have a hash field have a value of a list or set or another hash), but you can use hash fields to point to other top level complex values. The only special operation you can perform on hash field values is atomic increment/decrement of numeric contents.
You can think of Redis hashes in two ways: as a direct object representation and as a way to store many small values compactly.
Direct object representations are simple to understand. Objects have a name (the key of the hash) and a collection of internal keys with values. See the example below for, well, an example.
Storing many small values using a hash is a clever Redis massive data storage technique. When a hash has a small number of fields (~100), Redis optimizes the storage and access efficiency of the entire hash. Redis's small-hash storage optimization raises an interesting behavior: it's more efficient to have 100 hashes each with 100 internal keys and values rather than having 10,000 top-level keys pointing to string values. Using Redis hashes to optimize your data storage this way does require additional programming overhead for tracking where data ends up, but if your data storage is primarily string based, you can save a lot of memory overhead using this one weird trick.
For all possible operations on hashes, see the hash docs
Lists
Redis lists act like linked lists.
You can insert to, delete from, and traverse lists from either the head or tail of a list.
Use lists when you need to maintain values in the order they were inserted. (Redis does give you the option to insert into any arbitrary list position if you need to, but your insertion performance will degrade if you insert far from your start position.)
Redis lists are often used as producer/consumer queues. Insert items into a list then pop items from the list. What happens if your consumers try to pop from a list with no elements? You can ask Redis to wait for an element to appear and return it to you immediately when it gets added. This turns Redis into a real time message queue/event/job/task/notification system.
You can atomically remove elements off either end of a list, enabling any list to be treated as a stack or a queue.
You can also maintain fixed-length lists (capped collections) by trimming your list to a specific size after every insertion.
For all possible operations on lists, see the lists docs
Sets
Redis sets are, well, sets.
A Redis set contains unique unordered Redis strings where each string only exists once per set. If you add the same element ten times to a set, it will only show up once. Sets are great for lazily ensuring something exists at least once without worrying about duplicate elements accumulating and wasting space. You can add the same string as many times as you like without needing to check if it already exists.
Sets are fast for membership checking, insertion, and deletion of members in the set.
Sets have efficient set operations, as you would expect. You can take the union, intersection, and difference of multiple sets at once. Results can either be returned to the caller or results can be stored in a new set for later usage.
Sets have constant time access for membership checks (unlike lists), and Redis even has convenient random member removal and returning ("pop a random element from the set") or random member returning without replacement ("give me 30 random-ish unique users") or with replacement ("give me 7 cards, but after each selection, put the card back so it can potentially be sampled again").
For all possible operations on sets, see the sets docs.
Sorted Sets
Redis sorted sets are sets with a user-defined ordering.
For simplicity, you can think of a sorted set as a binary tree with unique elements. (Redis sorted sets are actually skip lists.) The sort order of elements is defined by each element's score.
Sorted sets are still sets. Elements may only appear once in a set. An element, for uniqueness purposes, is defined by its string contents. Inserting element "apple" with sorting score 3, then inserting element "apple" with sorting score 500 results in one element "apple" with sorting score 500 in your sorted set. Sets are only unique based on Data, not based on (Score, Data) pairs.
Make sure your data model relies on the string contents and not the element's score for uniqueness. Scores are allowed to be repeated (or even zero), but, one last time, set elements can only exist once per sorted set. For example, if you try to store the history of every user login as a sorted set by making the score the epoch of the login and the value the user id, you will end up storing only the last login epoch for all your users. Your set would grow to the size of your userbase and not your desired size of userbase * logins.
Elements are added to your set with scores. You can update the score of any element at any time; just add the element again with a new score. Scores are represented by floating-point doubles, so you can specify high-precision timestamps if needed. Multiple elements may have the same score.
You can retrieve elements in a few different ways. Since everything is sorted, you can ask for elements starting at the lowest scores. You can ask for elements starting at the highest scores ("in reverse"). You can ask for elements by their sort score either in natural or reverse order.
For all possible operations on sorted sets, see the sorted sets docs.

Hash table optimized for full iteration + key replacement

I have a hash table where the vast majority of accesses at run-time follow one of the following patterns:
Iterate through all key/value pairs. (The speed of this operation is critical.)
Modify keys (i.e. remove a key/value pair & add another with the same value but a different key. Detect duplicate keys & combine values if necessary.) This is done in a loop, affecting many thousands of keys, but with no other operations intervening.
I would also like it to consume as little memory as possible.
Other standard operations must be available, though they are used less frequently, e.g.
Insert a new key/value pair
Given a key, look up the corresponding value
Change the value associated with an existing key
Of course all "standard" hash table implementations, including standard libraries of most high-level-languages, have all of these capabilities. What I am looking for is an implementation that is optimized for the operations in the first list.
Issues with common implementations:
Most hash table implementations use separate chaining (i.e. a linked list for each bucket.) This works but I am hoping for something that occupies less memory with better locality of reference. Note: my keys are small (13 bytes each, padded to 16 bytes.)
Most open addressing schemes have a major disadvantage for my application: Keys are removed and replaced in large groups. That leaves deletion markers that increase the load factor, requiring the table to be re-built frequently.
Schemes that work, but are less than ideal:
Separate chaining with an array (instead of a linked list) per bucket:
Poor locality of reference, resulting from memory fragmentation as small arrays are reallocated many times
Linear probing/quadratic probing/double hashing (with or without Brent's Variation):
Table quickly fills up with deletion markers
Cuckoo hashing
Only works for <50% load factor, and I want a high LF to save memory and speed up iteration.
Is there a specialized hashing scheme that would work well for this case?
Note: I have a good hash function that works well with both power-of-2 and prime table sizes, and can be used for double hashing, so this shouldn't be an issue.
Would Extendable Hashing help? Iterating through the keys by walking the 'directory' should be fast. Not sure if the "modify key for value" operation is any better with this scheme or not.
Based on how you're accessing the data, does it really make sense to use a hash table at all?
Since your main use cases involve iteration, a sorted list or a B-tree might be a better data structure.
It doesn't seem like you really need the constant-time random access a hash table is built for.
You can do much better than a 50% load factor with cuckoo hashing.
Two hash functions with four items per bucket will get you over 90% with little effort. See this paper:
http://www.ru.is/faculty/ulfar/CuckooHash.pdf
I'm building a pre-computed dictionary using a cuckoo hash and getting a load factor of better than 99% with two hash functions and seven items per bucket.
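A sketch of the lookup side of such a bucketized cuckoo table (two hash functions, four slots per bucket; the insertion kick-out loop is omitted, and the second hash function here is purely illustrative):

#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Each key can live in one of two buckets; each bucket holds SLOTS keys.
// A lookup therefore probes at most 2 * SLOTS contiguous slots.
constexpr std::size_t SLOTS = 4;
constexpr uint64_t EMPTY = UINT64_MAX;   // sentinel; assumes this value never occurs as a key

struct CuckooTable
{
    struct Bucket { std::array<uint64_t, SLOTS> keys; };
    std::vector<Bucket> buckets;

    explicit CuckooTable(std::size_t n) : buckets(n, Bucket{{EMPTY, EMPTY, EMPTY, EMPTY}}) {}

    std::size_t h1(uint64_t k) const { return std::hash<uint64_t>{}(k) % buckets.size(); }
    std::size_t h2(uint64_t k) const { return std::hash<uint64_t>{}(k ^ 0x9E3779B97F4A7C15ull) % buckets.size(); }

    bool contains(uint64_t k) const
    {
        const std::size_t candidates[2] = {h1(k), h2(k)};
        for (std::size_t b : candidates)
            for (uint64_t slot : buckets[b].keys)
                if (slot == k) return true;
        return false;
    }
};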

Choosing a Data structure for very large data

I have x (millions of) positive integers, whose values can be as big as allowed (+2,147,483,647). Assuming they are unique, what is the best way to store them for a lookup-intensive program?
So far I have thought of using a binary AVL tree or a hash table, where the integer is the key to the mapped data (a name). However, I am not too sure whether I can implement such large keys, and in such large quantity, with a hash table (wouldn't that create a >0.8 load factor, in addition to being prone to collisions?)
Could I get some advice on which data structure might be suitable for my situation?
The choice of structure depends heavily on how much memory you have available. I'm assuming based on the description that you need lookup but not to loop over them, find nearest, or other similar operations.
Best is probably a bucketed hash table. By placing hash collisions into buckets and keeping separate arrays in the bucket for keys and values, you can both reduce the size of the table proper and take advantage of CPU cache speedup when searching a bucket. Linear search within a bucket may even end up faster than binary search!
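A sketch of that bucketed layout, with hypothetical names: collisions land in per-bucket arrays, and keys and values sit in parallel arrays so the key scan stays dense in memory (this assumes each key is inserted at most once, as in the question):

#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// One bucket per hash value; within a bucket, keys are scanned linearly.
struct BucketedMap
{
    struct Bucket
    {
        std::vector<uint32_t> keys;       // scanned contiguously
        std::vector<std::string> values;  // values[i] belongs to keys[i]
    };
    std::vector<Bucket> buckets;

    explicit BucketedMap(std::size_t nbuckets) : buckets(nbuckets) {}

    void insert(uint32_t key, std::string value)
    {
        Bucket& b = buckets[key % buckets.size()];
        b.keys.push_back(key);
        b.values.push_back(std::move(value));
    }

    const std::string* find(uint32_t key) const
    {
        const Bucket& b = buckets[key % buckets.size()];
        for (std::size_t i = 0; i < b.keys.size(); ++i)
            if (b.keys[i] == key) return &b.values[i];
        return nullptr;
    }
};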
AVL trees are nice for data sets that are read-intensive but not read-only AND require ordered enumeration, find nearest and similar operations, but they're an annoying amount of work to implement correctly. You may get better performance with a B-tree because of CPU cache behavior, though, especially a cache-oblivious B-tree algorithm.
Have you looked into B-trees? The efficiency runs between log_m(n) and log_(m/2)(n) so if you choose m to be around 8-10 or so you should be able to keep your search depth to below 10.
Bit vector, with the bit at that index set if the number is present. You can tweak it to store the number of occurrences of each number. There is a nice column about bit vectors in Bentley's Programming Pearls.
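A minimal sketch of the bit-vector idea for keys up to 2,147,483,647 (it records presence only, not the mapped name; at that range it needs roughly 256 MB):

#include <cstdint>
#include <vector>

// One bit per possible value, packed into 64-bit words.
struct BitSet
{
    std::vector<uint64_t> words;
    explicit BitSet(uint64_t max_value) : words(max_value / 64 + 1, 0) {}

    void set(uint32_t v)            { words[v / 64] |= (uint64_t{1} << (v % 64)); }
    bool contains(uint32_t v) const { return (words[v / 64] >> (v % 64)) & 1u; }
};

// Usage: BitSet present(2147483647); present.set(42); present.contains(42);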
If memory isn't an issue, a map is probably your best bet. Maps are O(1), meaning that as you scale up the number of items to be looked up, the time it takes to find a value stays the same.
A map where the key is the int, and the value is the name.
Do try hash tables first. There are some variants that can tolerate being very dense without significant slowdown (like Brent's variation).
If you only need to store the 32-bit integers and not any associated record, use a set and not a map, like hash_set in most C++ libraries. It would use only 4-byte records plus some constant overhead and a little slack to avoid being 100% full. In the worst case, to handle 'millions' of numbers you'd need a few tens of megabytes. Big, but nothing unmanageable.
If you need it to be much tighter, just store them sorted in a plain array and use binary search to fetch them. It will be O(log n) instead of O(1), but for 'millions' of records it's still just twentysomething steps to get any one of them. In C you have bsearch(), which is as fast as it can get.
Edit: I just saw in your question that you talk about some "mapped data (a name)". Are those names unique? Do they also have to be in memory? If so, they would definitely dominate the memory requirements. Even so, if the names are typical English words, most would be 10 bytes or less, keeping the total size in the tens of megabytes; maybe up to a hundred megs, still very manageable.
