I understand that Ruby arrays are both dynamically allocated and dynamically resized; however, I can't seem to find any clear information on whether they are true arrays, i.e. whether they are contiguous in memory (more specifically, whether the references they hold are contiguous in memory).
My assumption would be that increasing the size of a Ruby array entails reallocation of the entire array to a larger contiguous memory block where required.
Is this correct, or is "array" a misnomer in this instance?
Having reviewed the source and the article referenced by Darek in the comments, I can confirm that Ruby's arrays are, indeed, genuine arrays and consist of contiguous memory blocks, where the element at a given index can be accessed in O(1) time.
It seems that Ruby over-allocates arrays to improve the efficiency of push and similar operations; however, when the capacity of the array is exceeded, the array is automatically reallocated at a larger size.
This is a fairly important distinction that seems to be largely neglected, so hopefully this information will be useful to others searching for similar enlightenment.
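For anyone who wants a concrete picture of "over-allocation plus reallocation when the capacity is exceeded", here is a minimal C sketch of the general technique. It is not Ruby's actual array.c; the names, initial capacity, and growth factor are mine, and Ruby layers embedded small arrays, shared slices, and GC integration on top of the same basic idea (error handling omitted for brevity):

    #include <stdlib.h>

    /* Minimal sketch of a growable array of object references:
       a contiguous buffer, a length, and spare capacity that is
       grown geometrically (here: doubled) when it runs out. */
    typedef struct {
        void   **ptr;   /* contiguous block of element references */
        size_t   len;   /* number of elements in use              */
        size_t   capa;  /* number of slots allocated              */
    } dyn_array;

    void dyn_array_init(dyn_array *a) {
        a->capa = 16;                    /* over-allocate up front */
        a->len  = 0;
        a->ptr  = malloc(a->capa * sizeof *a->ptr);
    }

    void dyn_array_push(dyn_array *a, void *elem) {
        if (a->len == a->capa) {         /* capacity exceeded: grow and   */
            a->capa *= 2;                /* move the whole block at once  */
            a->ptr   = realloc(a->ptr, a->capa * sizeof *a->ptr);
        }
        a->ptr[a->len++] = elem;         /* usually just a store: amortized O(1) */
    }

    void *dyn_array_get(const dyn_array *a, size_t i) {
        return a->ptr[i];                /* base + i * sizeof(void *): O(1) */
    }

Because the buffer stays contiguous, the indexed read stays O(1) no matter how many times the array has been reallocated.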
The Ruby Language Specification does not prescribe any particular memory representation for Array objects (or any object, actually). That would be too restrictive for implementors. In fact, it doesn't even prescribe that objects have to live in memory at all, which makes possible implementations like MagLev where the Object Memory is a distributed on-disk OO database instead of RAM.
The Ruby Language Specification also does not prescribe any particular performance characteristics for any methods of the Array class.
However, Ruby programmers have come to expect certain performance guarantees from certain Array methods (and any implementation that doesn't meet those guarantees will simply be ignored by the community), e.g.
Array#[] shall have a worst-case step complexity of O(#of items sliced)
Array#<< shall have a worst-case step complexity of O(n) and an amortized worst-case step complexity of O(1)
… and so on.
Basically, the typical performance guarantees you would expect from a dynamic array.
This more or less means that the only way to meet those performance guarantees is for the implementation to use contiguous storage and exponential resizing.
Related
I believe that in some languages other than Ruby, an Array lookup is O(1) because you know where the data starts, and you multiply the index by the size of the data the array is holding, and then access that memory location.
However, in Ruby, an Array can have objects from different classes, so how does it manage to do a lookup of O(1) complexity?
What @Neil Slater said, with a little more detail…
There are basically two plausible approaches to storing an array of heterogeneous objects of differing sizes:
Store the objects as a singly- or doubly-linked list, with the storage space for each individual object preceded by pointer(s) to the preceding and/or following objects. This structure has the advantage of making it very easy to insert new objects at arbitrary points without shifting around the rest of the array, but the huge downside is that looking up an object by its position is generally O(N), since you have to start from one end of the list and jump through it node-by-node until you arrive at the n-th one.
Store a table or array of constant-sized pointers to the individual objects. Since this lookup table contains constant-sized items in a contiguous ordered layout, looking up the address of an individual object is O(1); the table is just a C-style array, in which a lookup takes only one to a few machine instructions, even on RISC CPU architectures.
(The allocation strategies for storing the individual objects are also interesting and complex, but not immediately relevant to your question.)
Dynamic languages like Perl/Python/Ruby pretty much all opt for #2 for their general-purpose list/array types. In other words, they make lookup more efficient than inserting objects at random locations in the list, which is the better choice for many applications.
I'm not familiar with the implementation details for Ruby, but they are likely quite similar to those of Python's list type, whose performance and design is explained in wonderful detail at effbot.org.
Its implementation probably contains an array of memory addresses pointing to the actual objects. Therefore it can still do the lookup without looping through the array.
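To make the pointer-table idea concrete, here is a toy C sketch (the type tag and struct names are made up for illustration and are not Ruby's actual object representation). The array itself stores only constant-sized slots, so indexing is plain pointer arithmetic regardless of what each slot points at:

    #include <stdio.h>

    /* Toy version of approach #2: the array holds fixed-size
       (tag, pointer) slots, not the variable-sized objects themselves. */
    typedef enum { T_INT, T_STR } tag;

    typedef struct {
        tag   kind;
        void *data;                        /* points at an int, a string, anything */
    } ref;

    int main(void) {
        int         n = 42;
        const char *s = "hello";

        ref elements[2] = {                /* contiguous, constant-sized slots */
            { T_INT, &n },
            { T_STR, (void *)s },
        };

        /* elements[1] is just *(elements + 1): one multiply and one load,
           however large or small the underlying objects are. */
        ref r = elements[1];
        if (r.kind == T_STR)
            printf("%s\n", (const char *)r.data);
        return 0;
    }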
I have a question about fundamentals in data structures.
I understand that an array's access time is faster than a linked list's: O(1) for an array vs. O(N) for a linked list.
But a linked list beats an array at removing an element, since no shifting is needed: O(N) for an array vs. O(1) for a linked list.
So my understanding is that if the majority of operations on the data are deletions, then using a linked list is preferable.
But if the use case is:
delete elements but not too frequently
access ALL elements
Is there a clear winner? In the general case I understand that the downside of using a list is that each node I access could be on a separate page, while an array has better locality.
But is this a theoretical or an actual concern that I should have?
And is the mixed type, i.e. creating a linked list from an array (using extra fields), a good idea?
Also, does my question depend on the language? I assume that shifting elements in an array has the same cost in all languages (at least asymptotically).
Singly-linked lists are very useful and can perform better than arrays if you are doing a lot of insertions/deletions, as opposed to pure referencing.
I haven't seen a good use for doubly-linked lists for decades.
I suppose there are some.
In terms of performance, never make decisions without understanding relative performance of your particular situation.
It's fairly common to see people asking about things that, comparatively speaking, are like getting a haircut to lose weight.
Before writing an app, I first ask if it should be compute-bound or IO-bound.
If IO-bound I try to make sure it actually is, by avoiding inefficiencies in IO, and keeping the processing straightforward.
If it should be compute-bound then I look at what its inner loop is likely to be, and try to make that swift.
Regardless, no matter how much I try, there will be (sometimes big) opportunities to make it go faster, and to find them I use this technique.
Whatever you do, don't just try to think it out or go back to your class notes.
Your problem is different from anyone else's, and so is the solution.
The problem with a list is not just the fragmentation, but mostly the data dependency. If you access every Nth element in an array you don't have locality, but the accesses may still go to memory in parallel since you know the addresses. In a list each access depends on the data just retrieved, and therefore traversing a list effectively serializes your memory accesses, making it much slower in practice. This of course is orthogonal to asymptotic complexities, and would harm you regardless of the size.
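To make the data-dependency point concrete, here is a small C sketch of the two access patterns (function names are mine). In the first loop the address of every load is computable up front, so the CPU can keep several loads in flight; in the second, each load's address is the result of the previous load, so the traversal is a serial chain:

    #include <stddef.h>

    /* Strided array access: a[i] lives at a known address (base + i),
       so loads can overlap even though locality is poor. */
    long sum_array_strided(const long *a, size_t n, size_t stride) {
        long sum = 0;
        for (size_t i = 0; i < n; i += stride)
            sum += a[i];
        return sum;
    }

    struct node { long value; struct node *next; };

    /* List traversal: the next address is only known after the previous
       node has been fetched, which serializes the memory accesses. */
    long sum_list(const struct node *head) {
        long sum = 0;
        for (const struct node *p = head; p != NULL; p = p->next)
            sum += p->value;
        return sum;
    }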
I'm receiving "order update" messages from a stock exchange. Each order id is between 1 and 100,000,000, so I can use a 100-million-element array to store 100 million orders, and when an update is received I can look the order up very fast just by accessing it by index, array[orderId]. I will spend several gigabytes of memory, but this is OK.
Alternatively I can use a hashmap, and because at any moment the number of "active" orders is limited (to, very roughly, 100,000), look-up will be pretty fast too, but probably a little bit slower than the array.
The question is: will the hashmap actually be slower? Is it reasonable to create a 100-million-element array?
I need low latency and nothing else; I completely don't care about memory. What should I choose?
Whenever considering performance issues, one experiment is worth a thousand expert opinions. Test it!
That said, I'll take a wild stab in the dark: it's likely that if you can convince your OS to keep your multi-gigabyte array resident in physical memory (this isn't necessarily easy - consider looking at the mlock and munlock syscalls), you'll have relatively better performance. Any such performance gain you notice (should one exist) will likely be by virtue of bypassing the cost of the hashing function, and avoiding the overheads associated with whichever collision-resolution and memory allocation strategies your hashmap implementation uses.
It's also worth cautioning that many hash table implementations have non-constant complexity for some operations (e.g., separate chaining can degrade to O(n) in the worst case). Given that you are attempting to optimize for latency, an array combined with very aggressive signaling to the OS memory manager (e.g., madvise and mlock) is likely to give you the closest thing to constant-latency lookups that you can easily get on a microprocessor.
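As an illustration of what "aggressive signaling to the OS memory manager" could look like, here is a hedged Linux/POSIX sketch that allocates the full-sized table and asks the OS to keep it resident. The payload struct is a placeholder, error handling is omitted, and mlock may require raising RLIMIT_MEMLOCK or extra privileges; whether any of this actually helps is something only a benchmark can tell you:

    #include <stddef.h>
    #include <sys/mman.h>

    #define NUM_ORDERS 100000000UL          /* ids 1..100,000,000 */

    typedef struct { long quantity; double price; } order;   /* placeholder payload */

    int main(void) {
        size_t bytes  = NUM_ORDERS * sizeof(order);           /* ~1.6 GB with this payload */
        order *orders = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        madvise(orders, bytes, MADV_WILLNEED);  /* hint: we'll touch this soon   */
        mlock(orders, bytes);                   /* try to pin it in physical RAM */

        /* ... handle each update via orders[orderId] ... */
        return 0;
    }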
While the only way to objectively answer this question is with performance tests, I will argue for using a Hashtable Map. (Caching and memory access can be so full of surprises; I do not have the expertise to speculate on which one will be faster, and when. Also consider that localized performance differences may be marginalized by other code.)
My first reason for "initially choosing" a hash is based on the observation that there are 100M distinct keys but only 0.1M active records. This means that if using an array, index utilization will only be 0.1% - a very sparse array.
If the data is stored as values in the array then it needs to be relatively small or the array size will balloon. If the data is not stored in the array (e.g. array is of pointers) then the argument for locality of data in the array is partially mitigated. Either way, the simple array approach requires lots of unused space.
Since all the keys are already integers, the distribution (hash) function can be implemented efficiently - there is no need to hash a complex type/sequence, so the "cost" of this function should approach zero.
So, my simple proposed hash:
Use linear probing backed by contiguous memory. It is simple, has good locality (especially during the probe), and avoids needing to do any form of dynamic allocation.
Pick a suitable initial bucket size; say, 2x (or 0.2M buckets, rounded to a prime). Don't even give the hash a chance to resize. Note that this suggested bucket array size is only 0.2% the size of the simple array approach, and it could be reduced further as the size vs. collision rate can be tuned.
Create a good distribution function for the hash. It can also exploit knowledge of the ID range.
While I've presented specialized hashtable rules "optimized" for the given case, I would start with a normal Map implementation (be it a hashtable or a tree) and test it; if a standard implementation works suitably well, why not use it?
Now, test different candidates under expected and extreme loads - and pick the winner.
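A minimal C sketch of the kind of table being proposed, with loud caveats: the bucket count, the integer mixing step, and the inline payload are placeholders, a power-of-two size with a mask is used rather than a prime, and deletion/tombstones and a full-table check are omitted entirely. It is only meant to show how little machinery sits between the key and the data:

    #include <stdint.h>

    #define NBUCKETS (1u << 18)        /* ~262k buckets for ~100k live orders */
    #define EMPTY_ID 0u                /* order ids start at 1, so 0 marks an empty slot */

    typedef struct {
        uint32_t id;                   /* key: the exchange's order id */
        int32_t  quantity;             /* payload kept inline for locality (placeholder) */
        double   price;
    } slot;

    static slot table[NBUCKETS];       /* one contiguous block, never resized */

    static uint32_t hash_id(uint32_t id) {
        id ^= id >> 16;                /* cheap integer mix; with integer keys */
        id *= 0x45d9f3bu;              /* this is close to free                */
        id ^= id >> 16;
        return id & (NBUCKETS - 1);
    }

    slot *lookup_or_insert(uint32_t id) {
        uint32_t i = hash_id(id);
        while (table[i].id != id && table[i].id != EMPTY_ID)
            i = (i + 1) & (NBUCKETS - 1);   /* linear probe through contiguous memory */
        table[i].id = id;                   /* claims the slot on first sight of this id */
        return &table[i];
    }

At well under half load the probe chains stay short, which is what makes the "good locality during the probe" argument above work.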
This seems to depend on the clustering of the IDs.
If the active IDs are clustered suitably already then, without hashing, the OS and/or L2 cache have a fair shot at holding on to the good data and keeping it low-latency.
If they're completely random then you're going to suffer just as soon as the number of active transactions exceeds the number of available cache lines or the size of those transactions exceeds the size of the cache (it's not clear which is likely to happen first in your case).
However, if the active IDs work out to have some unfortunate pattern which causes a high rate of contention (eg., it's a bit-pack of different attributes, and the frequently-varying attribute hits the hardware where it hurts), then you might benefit from using a 1:1 hash of the index to get back to the random case, even though that's usually considered a pretty bad case on its own.
As far as hashing for compaction goes: noting that some people are concerned about worst-case fallback behaviour for a hash collision, you might simply implement a cache in front of the full-sized table kept in contiguous memory, since that has a reasonably constrained worst case. Simply keep the busiest entry in the map, and fall back to the full table on collisions. Move the other entry into the map if it's more active (if you can find a suitable algorithm to decide this).
Even so, it's not clear that the necessary hash table size is sufficient to reduce the working set to being cacheable. How big are your orders?
The overhead of a hashmap vs. an array is almost none. I would bet on a hashmap of 100,000 records over an array of 100,000,000, without a doubt.
Remember also that, while you "don't care about memory", this also means you'd better have the memory to back it up - an array of 100,000,000 integers will take up 400 MB, even if all of them are empty. You run the risk of your data being swapped out. If your data gets swapped out, you will get a performance hit of several orders of magnitude.
You should test and profile, as others have said. My random stab in the dark, though: A high-load-factor hash table will be the way to go here. One huge array is going to cost you a TLB miss and then a last-level cache miss per access. This is expensive. A hash table, given the working set size you mentioned, is probably only going to cost some arithmetic and an L1 miss.
Again, test both alternatives on representative examples. We're all just stabbing in the dark.
I've started reviewing data structures and algorithms before my final year of school starts to make sure I'm on top of everything. One review problem said "Implement a stack using a linked list or dynamic array and explain why you made the best choice".
To me, it seemed more intuitive to use a list with a tail pointer to implement a stack since it may need to be resized often. It seems like for a large amount of data, a list is the better choice since a dynamic array re-size is an expensive operation. Additionally, with a list, you don't need to allocate any more space than you actually need so it's more space efficient.
However, a dynamic array would definitely allow for adding data far quicker (except when it needs to be resized). But I'm not sure whether using an array is quicker overall, or only when it doesn't need to be resized.
The book's solution said "for storing very large objects, a list is a better implementation" but I don't understand why.
Which way is best? What factors should be used to determine which implementation is "best"? Also, is any of my logic here off?
There are many tradeoffs involved here and I don't think that there's a "correct" answer to this question.
If you implement the stack using a linked list with a tail pointer, then the worst-case runtime to push, pop, or peek is O(1). However, each element will have some extra overhead associated with it (namely, the pointer) that means that there is always O(n) overhead for the structure. Additionally, depending on the speed of your memory allocator, the cost of allocating new nodes for the stack might be noticeable. Also, if you were to continuously pop off all the elements from the stack, you might have a performance hit from poor locality, since there is no guarantee that the linked list cells will be stored contiguously in memory.
If you implement the stack with a dynamic array, then the amortized runtime to push or pop is O(1) and the worst-case cost of a peek is O(1). This means that if you care about the cost of any single operation in the stack, this may not be the best approach. That said, allocations are infrequent, so the total cost of adding or removing n elements is likely to be faster than the corresponding cost in the linked-list based approach. Additionally, the memory overhead of this approach is usually better than the memory overhead of the linked list. If your dynamic array just stores pointers to the elements, then the memory overhead in the worst-case occurs when half the elements are filled in, in which case there are n extra pointers (the same as in the case when you were using the linked list), and in the best case when the dynamic array is full there are no empty cells and the extra overhead is O(1). If, on the other hand, your dynamic array directly contains the elements, the memory overhead can be worse in the worst-case. Finally, because the elements are stored contiguously, there is better locality if you want to continuously push or pop elements from the stack, since all the elements are right next to each other in memory.
In short:
The linked-list approach has worst-case O(1) guarantees on each operation; the dynamic array has amortized O(1) guarantees.
The locality of the linked list is not as good as the locality of the dynamic array.
The total overhead of the dynamic array is likely to be smaller than the total overhead of the linked list, assuming both store pointers to their elements.
The total overhead of the dynamic array is likely to be greater than that of the linked list if the elements are stored directly.
Neither of these structures is clearly "better" than the other. It really depends on your use case. The best way to figure out which is faster would be to time both and see which performs better.
Hope this helps!
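If it helps to see the dynamic-array variant spelled out, here is a short C sketch (names and the growth factor are mine, error handling omitted). Pushes are amortized O(1) because the occasional realloc copies everything at once; pops and peeks are worst-case O(1); and the elements stay contiguous, which is where the locality advantage comes from:

    #include <stdlib.h>

    typedef struct {
        int    *data;   /* contiguous storage for the stack elements */
        size_t  len;    /* number of elements currently on the stack */
        size_t  capa;   /* allocated capacity                        */
    } int_stack;

    void stack_init(int_stack *s) {
        s->capa = 8;
        s->len  = 0;
        s->data = malloc(s->capa * sizeof *s->data);
    }

    void stack_push(int_stack *s, int v) {
        if (s->len == s->capa) {                   /* rare: double and copy */
            s->capa *= 2;
            s->data  = realloc(s->data, s->capa * sizeof *s->data);
        }
        s->data[s->len++] = v;                     /* common case: one store */
    }

    int stack_pop(int_stack *s)  { return s->data[--s->len]; }
    int stack_peek(int_stack *s) { return s->data[s->len - 1]; }

A linked-list version would replace the buffer with one heap-allocated node per push, trading the occasional big copy for a per-element allocation and pointer overhead, which is exactly the trade-off described above.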
Well, for the small objects vs. large objects question, consider how much extra space to use for a linked list if you've got small objects on your stack. Then consider how much extra space you'll need if you've got a bunch of large objects on your stack.
Next, consider the same questions, but with an implementation based on dynamic arrays.
What matters is the number of times malloc() gets called in the course of running a task. It could take from hundreds to thousands of instructions to get you a block of memory. (The time in free() or GC should be proportional to that.) Also, keep a sense of perspective. This might be 99% of the total time, or only 1%, depending what else is happening.
I think you answered the question yourself. For a stack with a large number of items, the dynamic array would have excessive overhead costs (copying overhead) when simply adding an extra item to the top of the stack. With a list it's a simple switch of pointers.
Resizing the dynamic array would not be an expensive task if you design your implementation well.
For instance, to grow the array, if it is full, create a new array of twice the size, and copy items.
You will end up with a total cost of ~3N for adding N items, i.e. an amortized cost of ~3 per push.
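One common way to arrive at that figure, assuming the array doubles whenever it is full: each of the N items is written once when it is pushed (N operations), and the doublings copy 1 + 2 + 4 + … + N/2 < N elements in total, each copy being a read plus a write (fewer than 2N operations). That is fewer than 3N elementary operations for N pushes, i.e. about 3 per push amortized, even though any single push can momentarily cost O(N).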
I'm looking for a container that provides fastest unordered iterations through the encapsulated elements. In other words, "add once, iterate many times".
Is there one among OCaml's standard modules that is fast enough (such that further optimization of it would be useless)? Or some kind of third-party GPL-ready ones?
AFAIK there's just one OCaml compiler, so the concept of being fast is more or less clear...
...But after I saw a couple of answers, it appears it's not. Of course, there are plenty of data structures that allow O(n) iteration through a container of size n. But the task I'm solving is one of those where the difference between O(n) and O(2n) matters ;-).
I also see that Arrays and Lists provide unnecessary information about the order of the elements added, which I don't need. Maybe in the "functional world" there exist data structures that can trade this information for a bit of iteration speed.
In C I would outright pick a plain array. The question is, what should I pick in OCaml?
You are unlikely to do better than the built-in arrays and lists, since they are hand-coded in C, unless you bind to your own native implementation of an iterator. An array will behave almost exactly like an array in C (a contiguously allocated block of memory containing a sequence of element values), possibly with some extra pointer indirections due to boxing. Lists are implemented exactly how you would expect: as cells with a value and a "next" pointer. Arrays will give you the best locality for unboxed types (especially floats, which have a super-special unboxed implementation).
For information about the implementation of arrays and lists, see Section 18.3 of the OCaml manual and the files byterun/mlvalues.h, byterun/array.c, and byterun/alloc.c in the OCaml source code.
From the questioner: indeed, Array appeared to be the fastest solution. However, it only outperformed List by 7%. Maybe that was because the type of the array elements was not plain enough: it was an algebraic type. Hashtbl performed 4 times worse, as expected.
So I will pick Array, and I'm accepting this one. Good.
To know for sure, you're going to have to measure. Based on the machine instructions the compiler is likely to generate, I would try an array, then a list.
Access to an array element requires a bounds check, address arithmetic, and a load
Access to the head of a list requires a load, a test for empty list, and a load at a known compile-time offset.
The details of which is faster probably depend on your application and what else is happening on your machine. They also depend on the type of elements; for example, if they are floating-point numbers, ocamlopt may be clever enough to make an unboxed array, which will save you a level of indirection.
Other common data structures like hash tables or balanced trees generally require that you allocate some context somewhere to keep track of where you are. With an array, keeping track requires only an integer index; with a list, keeping track requires a single pointer. I think this is going to be hard to beat in another data structure.
Finally please note that there may be only one OCaml compiler, but it has two back ends: bytecode and native code. Naturally if you care about this level of performance, you are using the native-code ocamlopt version. Right?
Please take measurements and edit the results into your question.
Don't forget about Bigarrays; they are the closest to C arrays (just a flat piece of memory), but they cannot contain arbitrary OCaml values. Also consider switching bounds checking off (unsafe_set/get). And of course you should profile first.
The array - a linear piece of memory with the items visited in sequential order - best utilises the CPU's L1 data cache.
All common data structures are iterable in O(n) time, so the differences between data structures will only be constant (and very probably not significant).
At least lists and arrays allow iteration without significant overhead. I can't think of a situation where that would not be fast enough.