I only need the operations enqueue and dequeue.
From a theoretical standpoint: a singly-linked list with both a head and a tail pointer. Remove from the front, append at the tail. That gives you the theoretical O(1) constant-time complexity (even in the worst case) with less storage than a doubly-linked list.
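A minimal sketch of that idea (the int payload and the names are just placeholders):

```cpp
#include <cassert>

// Singly-linked queue: dequeue at the head, enqueue at the tail.
// Both operations are O(1) even in the worst case.
struct Node {
    int value;
    Node* next;
};

struct ListQueue {
    Node* head = nullptr;  // front of the queue (dequeue side)
    Node* tail = nullptr;  // back of the queue (enqueue side)

    void enqueue(int v) {
        Node* n = new Node{v, nullptr};
        if (tail) tail->next = n; else head = n;  // empty queue: n is also the head
        tail = n;
    }

    int dequeue() {
        assert(head && "dequeue on empty queue");
        Node* n = head;
        int v = n->value;
        head = n->next;
        if (!head) tail = nullptr;  // queue became empty
        delete n;
        return v;
    }
};
```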
From a practical standpoint, a growable array-based contiguous structure can perform better using circular indexing. The hardware excels at dealing with contiguous memory thanks to spatial locality (e.g., cache lines that hold multiple adjacent elements). Such structures have a worse algorithmic complexity: only amortized constant time, with a worst-case linear-time resize (though that case is so infrequent that it generally doesn't matter very much).
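A sketch of the circular-indexing idea, assuming a simple doubling growth policy (the initial capacity of 8 is arbitrary):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Circular (ring-buffer) queue over a growable contiguous buffer.
// Enqueue/dequeue are amortized O(1); the resize is the rare linear-time case.
class RingQueue {
    std::vector<int> buf = std::vector<int>(8);
    size_t head = 0;   // index of the front element
    size_t count = 0;  // number of stored elements

    void grow() {
        std::vector<int> bigger(buf.size() * 2);
        for (size_t i = 0; i < count; ++i)       // unwrap into the new buffer
            bigger[i] = buf[(head + i) % buf.size()];
        buf = std::move(bigger);
        head = 0;
    }

public:
    void enqueue(int v) {
        if (count == buf.size()) grow();         // the infrequent linear-time path
        buf[(head + count) % buf.size()] = v;
        ++count;
    }

    int dequeue() {
        assert(count > 0 && "dequeue on empty queue");
        int v = buf[head];
        head = (head + 1) % buf.size();
        --count;
        return v;
    }
};
```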
Also from a practical standpoint, an unrolled list could work well (basically arrays of multiple elements stored in nodes that are linked together, giving you the locality of reference + guaranteed constant-time enqueues and dequeues).
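A compressed sketch of an unrolled queue, assuming a fixed chunk capacity of 64 (any small power of two would do):

```cpp
#include <cassert>

// Unrolled-list queue: each node holds a small contiguous block of elements,
// so you keep O(1) enqueue/dequeue with far fewer allocations per element
// and better locality than a one-node-per-element list.
struct Chunk {
    static constexpr int kCapacity = 64;
    int items[kCapacity];
    int begin = 0, end = 0;   // valid elements live in [begin, end)
    Chunk* next = nullptr;
};

struct UnrolledQueue {
    Chunk* head = nullptr;    // dequeue from head->items[head->begin]
    Chunk* tail = nullptr;    // enqueue into tail->items[tail->end]

    void enqueue(int v) {
        if (!tail || tail->end == Chunk::kCapacity) {   // need a fresh chunk
            Chunk* c = new Chunk;
            if (tail) tail->next = c; else head = c;
            tail = c;
        }
        tail->items[tail->end++] = v;
    }

    int dequeue() {
        assert(head && "dequeue on empty queue");
        int v = head->items[head->begin++];
        if (head->begin == head->end) {                 // chunk exhausted, drop it
            Chunk* old = head;
            head = head->next;
            if (!head) tail = nullptr;
            delete old;
        }
        return v;
    }
};
```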
"Best" is really hard to say here since it depends on your needs.
For example, the singly-linked list with a tail has the weakness of allocation/deallocation overhead per node and the lost locality of reference, unless you back that with an efficient, contiguous allocator that helps mitigate these weaknesses. It also pays a memory overhead of a list pointer/ref per element (plus potentially some more due to the separate allocations per node). As pointed out in the comments, linked lists are generally nowhere near as good as they sound because they don't line up quite as well (at least without a lot of help from an allocator) with the actual nature of the hardware.
The circular array has a weakness where it needs excess capacity to reduce the number of reallocations (or else that worst-case linear time complexity is going to kick in more frequently) and copies (though they could be shallow in some cases). Also since it's just one big contiguous memory block, if you're working with enormous data sets, you could get out of memory errors even on a machine with virtual addressing (out of memory doesn't necessarily mean all memory has been used up in such cases, it means that a contiguous set of unused pages matching the requested size wasn't found).
The unrolled list mitigates the list pointer and node allocation overhead but stores some excess capacity in nodes which could be quite wasteful if you're, say, using an unrolled list with the capacity to store 64 elements per node and you're just storing 3 elements in your queue.
You could use an array (contiguous space in memory)
You could also use a linked list (not necessarily contiguous)
Arrays and their fancier derivatives (ArrayList, vector, etc.) may be more complicated. They aren't as efficient because if you start adding too many elements, you may run out of contiguous memory space and will have to copy everything in your queue over to a new block of memory.
A linked list to me seems pretty efficient as long as you keep track of the front and back (head and tail, whatever you want to call it).
This may help: https://www.cs.bu.edu/teaching/c/queue/linked-list/types.html
I do not recommend an array (or any data structure implemented on top of an array) because the dequeue operation would lead to shifting all the elements. In this case I would go for a singly linked list where you insert at the end and remove from the beginning. If you instead want to remove from the last node (dequeue from the back), you need a doubly linked list, because removing the last node requires a handle on the penultimate node, and getting that in a singly linked list means scanning the complete list.
Related
I'm working on an avionics OS (thread layer) and I'm looking for an optimal solution regarding the following (simplified) requirement:
"Threads waiting for [various objects] are queued in priority order. For the same priority, threads are also queued in FIFO order".
[various objects] are e.g. semaphores.
I thought I could build such a waiting list using a classical linked list, which makes insertion/sorting relatively fast and easy, and which perfectly fits the expected usage (one thread goes into the waiting state at a time). But I am working on a bare-metal target and I don't have any libc support, so I have no malloc (which is very useful for linked lists!).
For sorting threads by priority I usually use binary heaps (http://en.wikipedia.org/wiki/Binary_heap), which are very efficient, but they can't be used here because "FIFO order" cannot be maintained that way.
Of course, I can do it with more classical sorting algorithms, but they are usually time-consuming, even for one insertion, because a lot of array elements may be moved at each insertion.
So I wonder if an appropriate algorithm exists... maybe a kind of improved binary heap?... Or a “static” linked list?... Or maybe the best thing is an allocator algorithm associated with a linked list?...
For information:
- the total number of threads is limited to 128, so the memory needed is always finite and can be known/reserved at compile time.
- I have a limited quantity of RAM, so I can hardly afford constructions such as a binary heap sorted by priority pointing to FIFOs (naturally ordered by arrival time)…
I'd really appreciate any idea and fresh look regarding this problem.
Thanks !
Probably you need a stable in-place sort - it will maintain relative order of items after sorting by priority, satisfying your FIFO requirement.
Pick anything from the list on the wiki page; for example, in-place merge sort and block sort are both in-place and stable (Timsort is stable as well, though it needs auxiliary memory):
http://en.wikipedia.org/wiki/Sorting_algorithm
Regarding memory allocation and linked lists - maybe you can implement your own malloc?
You can allocate a fixed-size heap (128 * the size of the thread information block), and then use the index of each block
as a pointer. The real pointer to an object is then (heap start address) + index * (block size).
Then implement the sorting as you normally would, but with indexes instead of pointers.
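A sketch of that index-as-pointer idea, assuming a fixed pool of 128 slots and a hypothetical ThreadInfo record; the free list is threaded through the unused slots, so no malloc is ever needed:

```cpp
#include <cstdint>

// All storage is reserved at compile time: 128 slots, with indices used as
// "pointers". kNull plays the role of a null pointer.
constexpr uint8_t kMaxThreads = 128;
constexpr uint8_t kNull = 0xFF;

struct ThreadInfo {          // placeholder for the real thread record
    uint8_t priority;
    uint8_t next;            // index of the next node, or kNull
};

struct ThreadPool {
    ThreadInfo slots[kMaxThreads];
    uint8_t freeHead = 0;

    ThreadPool() {
        for (uint8_t i = 0; i < kMaxThreads; ++i)   // chain all slots into the free list
            slots[i].next = (i + 1 < kMaxThreads) ? uint8_t(i + 1) : kNull;
    }

    uint8_t alloc() {                 // "malloc" of one node: pop from the free list
        uint8_t i = freeHead;
        if (i != kNull) freeHead = slots[i].next;
        return i;                     // kNull means the pool is exhausted
    }

    void release(uint8_t i) {         // "free": push back onto the free list
        slots[i].next = freeHead;
        freeHead = i;
    }
};
```

Insertion into the priority/FIFO waiting list then works exactly like an ordinary linked-list insert, just following next indices instead of pointers.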
Another idea is to separate FIFO requirement from priority queue requirement, and sort containers with queues of same-priority items - but this would require dynamic list allocation and larger heap.
The standard technique for this problem is the Bentley-Saxe priority queue. I would describe it in detail here, along with some tips for how to implement it in-place with minimal memory requirements, but anything I said would just be reiterating Pat Morin's excellent answer to "Is there a stable heap?" over on the CS Theory Stack Exchange (cstheory.SE).
I have a question about fundamentals in data structures.
I understand that an array's access time is faster than a linked list's: O(1) for the array vs O(N) for the linked list.
But a linked list beats an array at removing an element, since there is no shifting needed: O(N) for the array vs O(1) for the linked list.
So my understanding is that if the majority of operations on the data is delete then using a linked list is preferable.
But if the use case is:
delete elements but not too frequently
access ALL elements
Is there a clear winner? In the general case I understand that the downside of using the list is that each node I access could be on a separate page, while an array has better locality.
But is this a theoretical or an actual concern that I should have?
And is the mixed type, i.e. creating a linked list from an array (using extra fields), a good idea?
Also, does my question depend on the language? I assume that shifting elements in an array has the same cost in all languages (at least asymptotically).
Singly-linked lists are very useful and can be better performance-wise relative to arrays if you are doing a lot of insertions/deletions, as opposed to pure referencing.
I haven't seen a good use for doubly-linked lists for decades.
I suppose there are some.
In terms of performance, never make decisions without understanding relative performance of your particular situation.
It's fairly common to see people asking about things that, comparatively speaking, are like getting a haircut to lose weight.
Before writing an app, I first ask if it should be compute-bound or IO-bound.
If IO-bound I try to make sure it actually is, by avoiding inefficiencies in IO, and keeping the processing straightforward.
If it should be compute-bound then I look at what its inner loop is likely to be, and try to make that swift.
Regardless, no matter how much I try, there will be (sometimes big) opportunities to make it go faster, and to find them I use this technique.
Whatever you do, don't just try to think it out or go back to your class notes.
Your problem is different from anyone else's, and so is the solution.
The problem with a list is not just the fragmentation, but mostly the data dependency. If you access every Nth element in an array you don't have locality, but the accesses may still go to memory in parallel since you know the address. In a list, each access depends on the data being retrieved, and therefore traversing a list effectively serializes your memory accesses, making it much slower in practice. This is of course orthogonal to asymptotic complexities, and would hurt you regardless of the size.
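A rough way to see this in practice (a minimal sketch; the element count is arbitrary and the timings are illustrative, not claimed by the answer): sum the same values once through a vector and once through a list. The vector's next address is computable up front, while the list's next address is only known after the previous load completes.

```cpp
#include <chrono>
#include <iostream>
#include <list>
#include <numeric>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<int> v(n, 1);
    std::list<int>   l(n, 1);

    // Small helper: time a callable in milliseconds.
    auto time_ms = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        return std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - t0).count();
    };

    long long s1 = 0, s2 = 0;
    double tv = time_ms([&] { s1 = std::accumulate(v.begin(), v.end(), 0LL); });
    double tl = time_ms([&] { s2 = std::accumulate(l.begin(), l.end(), 0LL); });

    // Both loops are O(n), yet the list traversal is typically several times
    // slower: every step must wait for the previous node's load to finish.
    std::cout << "vector: " << tv << " ms, list: " << tl << " ms "
              << "(sums " << s1 << ", " << s2 << ")\n";
}
```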
Over the past few days I have been preparing for my very first phone interview for a software development job. In researching questions I have come up with this article.
Everything was great until I got to this passage:
"When would you use a linked list vs. a vector? "
Now from experience and research these are two very different data structures, a linked list being a dynamic array and a vector being a 2d point in space. The only correlation I can see between the two is if you use a vector as a linked list, say myVector(my value, pointer to neighbor)
Thoughts?
Vector is another name for dynamic arrays. It is the name used for the dynamic array data structure in C++. If you have experience in Java you may know them with the name ArrayList. (Java also has an old collection class called Vector that is not used nowadays because of problems in how it was designed.)
Vectors are good for random read access and insertion and deletion in the back (takes amortized constant time), but bad for insertions and deletions in the front or any other position (linear time, as items have to be moved). Vectors are usually laid out contiguously in memory, so traversing one is efficient because the CPU memory cache gets used effectively.
Linked lists on the other hand are good for inserting and deleting items in the front or back (constant time), but not particularly good for much else: For example deleting an item at an arbitrary index in the middle of the list takes linear time because you must first find the node. On the other hand, once you have found a particular node you can delete it or insert a new item after it in constant time, something you cannot do with a vector. Linked lists are also very simple to implement, which makes them a popular data structure.
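A small illustration of that difference, assuming the standard C++ containers:

```cpp
#include <iostream>
#include <iterator>
#include <list>
#include <vector>

int main() {
    std::vector<int> vec = {1, 2, 3, 4, 5};
    std::list<int>   lst = {1, 2, 3, 4, 5};

    int third = vec[2];                    // vector: random access is O(1)...
    vec.insert(vec.begin(), 0);            // ...but a front insert shifts everything, O(n)

    auto it = std::next(lst.begin(), 2);   // list: reaching position 2 is an O(n) walk...
    lst.insert(it, 42);                    // ...but inserting before that node is O(1)
    it = lst.erase(it);                    // ...and so is erasing it

    std::cout << third << " " << vec.front() << " " << lst.size() << "\n";
}
```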
I know it's a bit late for this questioner but this is a very insightful video from Bjarne Stroustrup (the inventor of C++) about why you should avoid linked lists with modern hardware.
https://www.youtube.com/watch?v=YQs6IC-vgmo
With the fast memory allocation on computers today, it is much quicker to create a copy of the vector with the items updated.
I don't like the number one answer here, so I figured I'd share some actual research into this conducted by Herb Sutter from Microsoft. The results of the test were with up to 100k items in a container, but he also claimed that the vector would continue to outperform a linked list at even half a million entities. Unless you plan on your container having millions of entities, your default dynamic container should be the vector. I summarized more or less what he says, but will also link the reference at the bottom:
"[Even if] you preallocate the nodes within a linked list, that gives you half the performance back, but it's still worse [than a vector]. Why? First of all it's more space -- The per element overhead (is part of the reason) -- the forward and back pointers involved within a linked list -- but also (and more importantly) the access order. The linked list has to traverse to find an insertion point, doing all this pointer chasing, which is the same thing the vector was doing, but what actually is occurring is that prefetchers are that fast. Performing linear traversals with data that is mapped efficiently within memory (allocating and using say, a vector of pointers that is defined and laid out), it will outperform linked lists in nearly every scenario."
https://youtu.be/TJHgp1ugKGM?t=2948
Use vector unless "data size is big" or "strong safety guarantee is essential".
data size is big
:- Inserting into the middle of a vector takes linear time (because of the need to shuffle things around), but the other operations are constant time (like accessing the nth element). So there isn't much overhead if the data size is small.
As per "C++ coding standards Book by Andrei Alexandrescu and Herb Sutter"
"Using a vector for small lists is almost always superior to using list. Even though insertion in the middle of the sequence is a linear-time operation for vector and a constant-time operation for list, vector usually outperforms list when containers are relatively small because of its better constant factor, and list's Big-Oh advantage doesn't kick in until data sizes get larger."
strong safety guarantee
List provides the strong exception-safety guarantee for insertions.
http://www.cplusplus.com/reference/list/list/insert/
As a correction on the Big O time of insertion and deletion within a linked list: if you have a pointer that holds the position of the current element, and methods used to move it around the list (like .moveToStart(), .moveToEnd(), .next(), etc.), you can remove and insert in constant time.
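One way to picture that, using a std::list iterator as the "current position" cursor (the method names above are from a list ADT, not from the standard library):

```cpp
#include <iostream>
#include <list>

int main() {
    std::list<int> lst = {10, 20, 30, 40};

    // The iterator plays the role of the cursor described above.
    auto cur = lst.begin();       // moveToStart()
    ++cur;                        // next() -- cursor now at 20

    lst.insert(cur, 15);          // O(1): insert before the cursor, no search needed
    cur = lst.erase(cur);         // O(1): remove the element at the cursor

    for (int x : lst) std::cout << x << ' ';   // prints: 10 15 30 40
    std::cout << '\n';
}
```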
I've started reviewing data structures and algorithms before my final year of school starts to make sure I'm on top of everything. One review problem said "Implement a stack using a linked list or dynamic array and explain why you made the best choice".
To me, it seemed more intuitive to use a list with a tail pointer to implement a stack since it may need to be resized often. It seems like for a large amount of data, a list is the better choice since a dynamic array re-size is an expensive operation. Additionally, with a list, you don't need to allocate any more space than you actually need so it's more space efficient.
On the other hand, a dynamic array would definitely allow for adding data far more quickly (except when it needs to be resized). However, I'm not sure whether using an array is quicker overall, or only when it doesn't need to be resized.
The book's solution said "for storing very large objects, a list is a better implementation" but I don't understand why.
Which way is best? What factors should be used to determine which implementation is "best"? Also, is any of my logic here off?
There are many tradeoffs involved here and I don't think that there's a "correct" answer to this question.
If you implement the stack using a linked list with a tail pointer, then the worst-case runtime to push, pop, or peek is O(1). However, each element will have some extra overhead associated with it (namely, the pointer) that means that there is always O(n) overhead for the structure. Additionally, depending on the speed of your memory allocator, the cost of allocating new nodes for the stack might be noticeable. Also, if you were to continuously pop off all the elements from the stack, you might have a performance hit from poor locality, since there is no guarantee that the linked list cells will be stored contiguously in memory.
If you implement the stack with a dynamic array, then the amortized runtime to push or pop is O(1) and the worst-case cost of a peek is O(1). This means that if you care about the cost of any single operation in the stack, this may not be the best approach. That said, allocations are infrequent, so the total cost of adding or removing n elements is likely to be faster than the corresponding cost in the linked-list based approach. Additionally, the memory overhead of this approach is usually better than the memory overhead of the linked list. If your dynamic array just stores pointers to the elements, then the memory overhead in the worst-case occurs when half the elements are filled in, in which case there are n extra pointers (the same as in the case when you were using the linked list), and in the best case when the dynamic array is full there are no empty cells and the extra overhead is O(1). If, on the other hand, your dynamic array directly contains the elements, the memory overhead can be worse in the worst-case. Finally, because the elements are stored contiguously, there is better locality if you want to continuously push or pop elements from the stack, since all the elements are right next to each other in memory.
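For concreteness, here is roughly what the two implementations being compared look like (a minimal sketch; real code would add peek, emptiness checks, and so on, and in practice std::vector or std::deque already gives you the dynamic-array behaviour):

```cpp
#include <cassert>
#include <vector>

// Dynamic-array stack: amortized O(1) push/pop, contiguous storage.
struct ArrayStack {
    std::vector<int> data;
    void push(int v) { data.push_back(v); }   // may occasionally reallocate
    int  pop()       { assert(!data.empty()); int v = data.back(); data.pop_back(); return v; }
};

// Linked-list stack: worst-case O(1) push/pop, one allocation per element.
struct ListStack {
    struct Node { int value; Node* next; };
    Node* top = nullptr;
    void push(int v) { top = new Node{v, top}; }
    int  pop() {
        assert(top);
        Node* n = top; int v = n->value; top = n->next; delete n; return v;
    }
};
```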
In short:
The linked-list approach has worst-case O(1) guarantees on each operation; the dynamic array has amortized O(1) guarantees.
The locality of the linked list is not as good as the locality of the dynamic array.
The total overhead of the dynamic array is likely to be smaller than the total overhead of the linked list, assuming both store pointers to their elements.
The total overhead of the dynamic array is likely to be greater than that of the linked list if the elements are stored directly.
Neither of these structures is clearly "better" than the other. It really depends on your use case. The best way to figure out which is faster would be to time both and see which performs better.
Hope this helps!
Well, for the small objects vs. large objects question, consider how much extra space to use for a linked list if you've got small objects on your stack. Then consider how much extra space you'll need if you've got a bunch of large objects on your stack.
Next, consider the same questions, but with an implementation based on dynamic arrays.
What matters is the number of times malloc() gets called in the course of running a task. It could take from hundreds to thousands of instructions to get you a block of memory. (The time in free() or GC should be proportional to that.) Also, keep a sense of perspective. This might be 99% of the total time, or only 1%, depending what else is happening.
I think you answered the question yourself. For a stack with a large number of items, the dynamic array would have excessive overhead costs (copying overhead) when simply adding an extra item to the top of the stack. With a list it's a simple switch of pointers.
Resizing the dynamic array would not be an expensive task if you design your implementation well.
For instance, to grow the array when it is full, create a new array of twice the size and copy the items over.
You will end up with an amortized cost of ~3N for adding N items (roughly: N writes for the pushes themselves plus about N elements copied across all the doublings, with each copy counted as a read and a write).
I've been working on a problem which I thought people might find interesting (and perhaps someone is aware of a pre-existing solution).
I have a large dataset consisting of a long list of pairs of pointers to objects, something like this:
[
(a8576, b3295),
(a7856, b2365),
(a3566, b5464),
...
]
There are way too many objects to keep in memory at any one time (potentially hundreds of gigabytes), so they need to be stored on disk, but can be cached in memory (probably using an LRU cache).
I need to run through this list processing every pair, which requires that both objects in the pair be loaded into memory (if they aren't already cached there).
So, the question: is there a way to reorder the pairs in the list to maximize the effectiveness of an in-memory cache (in other words: minimize the number of cache misses)?
Notes
Obviously, the re-ordering algorithm should be as fast as possible, and shouldn't depend on being able to have the entire list in memory at once (since we don't have enough RAM for that) - but it could iterate over the list several times if necessary.
If we were dealing with individual objects, not pairs, then the simple answer would be to sort them. This obviously won't work in this situation because you need to consider both elements in the pair.
The problem may be related to that of finding a minimum graph cut, but even if the problems are equivalent, I don't think solutions to min-cut meet the speed and memory constraints described above.
My assumption is that the heuristic would stream the data off the disk, and write it back in chunks in a better order. It may need to iterate over this several times.
Actually it may not just be pairs, it could be triplets, quadruplets, or more. I'm hoping that an algorithm that does this for pairs can be easily generalized.
Your problem is related to a similar one for computer graphics hardware:
When rendering indexed vertices in a triangle mesh, typically the hardware has a cache of most recently transformed vertices (~128 the last time I had to worry about it, but suspect the number is larger these days). Vertices not cached need a relatively expensive transform operation to calculate. "Mesh optimisation" to restructure triangle meshes to optimise cache usage used to be a pretty hot research topic. Googling
vertex cache optimisation
(or optimization :^) might find you some interesting material relevant to your problem. As other posters suggest, I suspect doing this effectively will depend on exploiting any inherent coherence in your data.
Another thing to bear in mind: as an LRU cache becomes overloaded it can be well worth changing to an MRU replacement strategy to at least hold some of the items in memory (rather than turning over the entire cache each pass). I seem to remember John Carmack has written some good material on this subject in connection with Direct3D texture caching strategies.
For start, you could mmap the list. That works if there's enough address space, not memory, e.g. on 64-bit CPUs. This makes it easier to access the elements in order.
You could sort that list according to a distance measure that considers both elements of the pair, which works well if the objects live in a contiguous space. The sorting key could be something like: compare (a, b) to (c, d) by (a - c) + (b - d) (which resembles a Manhattan distance). Then you pull in slices of the object store and process according to the list.
EDIT: fixed a mistake in the distance.
Even though you're not just sorting this list, the general pattern of a multiway merge sort might be applicable - that is, consider some kind of (possibly recursive) breakdown of the set into smaller sets that can be dealt with in memory separately, and then a second phase where small chunks of the previously dealt-with sets can all be combined together. Even not knowing the specific nature of what you're doing with the pairs, it's safe to say that many algorithmic problems are made much more straightforward when you're dealing with sorted data (including graph problems, which might be what you have on your hands here).
I think the answer to this question is going to depend very heavily on exactly the access pattern of the pair of objects. As you said, just sorting the pointers would be best in a simple, non-paired case. In a more complex case it may still make sense to sort by one of the halves of the pair if the pattern is such that locality for those values is more important (if, for example, these are key/value pairs and you are doing a lot of searches, locality for the keys is infinitely more important than for the values).
So, really, my answer is that this question can't be answered in a general case.
For storing your structure, what you actually want is probably a B-tree. These are designed for what you're talking about--keeping track of large collections where you don't want to (or can't) keep the whole thing in memory.