Deque algorithm

Write four O(1)-time procedures to insert elements into and delete elements from both ends of a deque constructed from an array.
In my implementation I maintain four pointers: front1, rear1, front2, rear2.
Do you have another algorithm that uses fewer pointers while keeping O(1) complexity? Please explain.

There are two common ways to implement a deque:
Doubly linked list: You implement a doubly linked list and maintain pointers to the front and the back of the list. It is easy to insert and remove at either end of the list in O(1) time.
A circular dynamic array: Here you have an array that is treated as circular (so the elements at index arr.length-1 and index 0 are regarded as adjacent).
In this implementation you hold the index of the "head" and the "tail". Adding an element at the head is done at index head-1 (moving the head backward), and adding an element at the tail is done by writing it to index tail+1, with both indices wrapping around modulo the array length.
This method is amortized O(1) and has better constants than the linked-list implementation. However, it is not "strict worst case" O(1): if the number of elements exceeds the size of the array, you need to allocate a new array and move the elements from the old one to the new one. This takes O(n) time, but it only has to happen after at least Ω(n) cheap operations, so insertion is O(1) under amortized analysis even though an individual operation can occasionally cost O(n).
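To answer the original question about pointer count: here is a hedged sketch, assuming a fixed capacity, of a circular-array deque that needs only two fields (a head index and a count) to support all four end operations in O(1). The names are illustrative, not from the original posts; a production version would also grow the array when full.

class ArrayDequeSketch {
    private final int[] buf;
    private int head; // index of the current front element
    private int size; // number of stored elements

    ArrayDequeSketch(int capacity) { buf = new int[capacity]; }

    void pushFront(int x) {
        if (size == buf.length) throw new IllegalStateException("full");
        head = (head - 1 + buf.length) % buf.length; // step the head back, wrapping around
        buf[head] = x;
        size++;
    }

    void pushBack(int x) {
        if (size == buf.length) throw new IllegalStateException("full");
        buf[(head + size) % buf.length] = x; // one slot past the current tail
        size++;
    }

    int popFront() {
        if (size == 0) throw new IllegalStateException("empty");
        int x = buf[head];
        head = (head + 1) % buf.length;
        size--;
        return x;
    }

    int popBack() {
        if (size == 0) throw new IllegalStateException("empty");
        size--;
        return buf[(head + size) % buf.length];
    }
}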

Related

Why use linked lists, if in the end the time complexity of deletion/insertion is the same as for arrays?

I have the following question: why use a linked list if the time complexity of deleting an element from an array is O(n), and for a linked list (with the index given) it is also O(n), since I also need to search through the whole list?
While the asymptotic complexity may be the same, the constant factor may be very different. In particular, you might have a collection of "large" things that are expensive to move or copy but cheap to match. With the linked list, you do a (fast) O(n) search to find an element, then an O(1) operation to insert/remove there. With the array you need the same O(n) search and then a slow O(n) move of all the other elements in the array to make/remove space.
It is also possible you may have another connected data structure (such as a hash table) with fast lookup ability giving references into your collection. In such a case, you can find an element in the list in O(1) time, then remove it in O(1) time.
Another advantage is that lists are more amenable to atomic update: a singly linked list can be updated (insert or remove) with a single (atomic) pointer write.
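To make that concrete, here is a minimal sketch of a lock-free push onto a singly linked list in the style of a Treiber stack; in Java the single atomic pointer update is realized as a compare-and-set on the head reference. The class and method names are illustrative, not from the original answer.

import java.util.concurrent.atomic.AtomicReference;

class LockFreeList<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    // The insert commits with a single atomic compare-and-set of the head pointer;
    // if another thread won the race, we simply retry against the new head.
    public void push(T value) {
        Node<T> oldHead;
        do {
            oldHead = head.get();
        } while (!head.compareAndSet(oldHead, new Node<>(value, oldHead)));
    }
}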

Finding time and space complexity

I have been learning about sorting.
Most sorting algorithms (merge sort, quicksort, etc.) operate on arrays.
I was wondering what would happen if I did not sort the array in place.
An algorithm I thought of is
Iterate through each element in the array - O(n).
For each element, compare it with the starting and ending elements of a doubly linked list.
Add the element to the correct position in the linked list. (Start iterating from the start/end of the list based on which one would be faster.)
When all elements in the original array are sorted, create a background thread that copies them back into the array. Until the copy is done, answer index lookups by iterating over the list.
When the copy is done, return elements through the array index.
Now, what would be time complexity of this and how do I calculate it?
Let's go through everything one step at a time.
Iterate through each element in the array - O(n).
Yep!
For each element, compare it with the starting and ending elements of a doubly linked list.
Add the element to the correct position in the linked list. (Start iterating from the start/end of the list based on which one would be faster.)
Let's suppose that the doubly-linked list currently has k elements in it. Unfortunately, just by looking at the front and back element of the list, you won't be able to tell where in the list the element is likely to go. It's quite possible that your element is closer in value to the front element of the list than the back, but would actually belong just before the back element. You also don't have random access in a linked list, so in the worst case you may have to scan all k elements of the linked list trying to find the spot where this element belongs. That means that the work done is in the worst case going to be O(k). Now, each iteration of the algorithm increases k (the number of elements in the list) by one, so the work done is in the worst case 1 + 2 + 3 + ... + n = Θ(n²).
When all elements in the original array are sorted, create a background thread that copies them back into the array. Until the copy is done, answer index lookups by iterating over the list.
When the copy is done, return elements through the array index.
This is an interesting idea and it's hard to measure the complexity. If the background thread gets starved out or is really slow, then the cost of looking up any element will be O(n) in the worst case because you may have to scan over half the elements in the list to find the one you're looking for.
In total, your algorithm runs in time O(n²) and uses Θ(n) memory. It's essentially a variant of insertion sort (as @Yu Hao pointed out) and, in practice, I'd expect that this would be substantially slower than just using a standard O(n log n) sorting algorithm, or even an in-place insertion sort, due to the extra memory overhead and poor locality of reference afforded by linked lists.
The algorithm you describe is basically a variant of insertion sort.
The major reason for using a linked list here is to avoid the extra swaps of elements that arrays require. Comparing elements with both the head and the tail of the doubly linked list provides minor performance improvement, if any.
The time complexity is still O(n²) for random input.
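For reference, a minimal sketch of the algorithm the question describes, using java.util.LinkedList (a doubly linked list) and scanning from the front only; the "start from whichever end is closer" optimization is omitted since it does not change the worst case. Names are illustrative.

import java.util.LinkedList;
import java.util.ListIterator;

public class ListInsertionSort {
    // Insert each array element into a sorted doubly linked list,
    // then copy the list back into an array. Worst case O(n^2).
    static int[] sort(int[] a) {
        LinkedList<Integer> sorted = new LinkedList<>();
        for (int x : a) {
            ListIterator<Integer> it = sorted.listIterator();
            while (it.hasNext()) {
                if (it.next() > x) { // found the first larger element...
                    it.previous();   // ...step back so we insert before it
                    break;
                }
            }
            it.add(x);               // O(1) insert at the cursor position
        }
        int[] out = new int[a.length];
        int i = 0;
        for (int x : sorted) out[i++] = x;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(sort(new int[]{5, 2, 8, 1}))); // [1, 2, 5, 8]
    }
}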

Time complexity/performance of insertion operation for a Queue implementation (in Java)

What is the performance of the insertion operation for a Queue implemented as:
(a) an array, with the items in unsorted order
(b) an array, with the items in sorted order
(c) a linked list, with the items in unsorted order.
For each operation, and each implementation, give the performance in Big Oh notation and explain enough of the algorithm to justify your answer (e.g. it takes O(n) time because in the worst case... the algorithm does such and such...).
Please explain in detail, it'll help me out a lot!
Short answer: it depends on your data structure.
In a naive array-based implementation (assuming a fixed size), I think it's pretty obvious that insertion is a constant-time operation (that is, O(1)), assuming that you don't run off the end of the array. This is similar in a circular array, with similar assumptions.
A dynamic array is a little more complicated. A dynamic array is a fixed-size array that you enlarge once it's filled to a certain point. So for a dynamic array that resizes when it reaches length k, the first k-1 insertions are constant (just like inserting into an ordinary array) and the k-th insertion takes O(k) - the cost of duplicating the contents of the array into a larger container, plus inserting the element. You can show that this works out to O(1) amortized insertion time, but that may be out of scope for your course.
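A hedged sketch of that resizing scheme, with illustrative names; doubling the capacity is what makes the occasional O(k) copy average out to O(1) per append.

class DynamicArray {
    private int[] data = new int[4];
    private int size;

    void append(int x) {
        if (size == data.length) {
            int[] bigger = new int[data.length * 2];    // double the capacity
            System.arraycopy(data, 0, bigger, 0, size); // O(size) copy, but rare
            data = bigger;
        }
        data[size++] = x; // the common case: O(1)
    }
}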
As others have noted, sorted order doesn't affect a standard queue. If you are in fact dealing with a priority queue, then there are lots of possible implementations, which I'll let you research on your own. The best insertion time is O(1), but that implementation has some disadvantages. The standard implementation is O(log n) insertion.
With linked lists, the insertion time will depend on whether the head of the list is the head of the queue (i.e., whether you add onto the head or the tail).
If you're adding onto the head, then it's pretty easy to see that insertion is O(1). If you're adding onto the tail, then it's also easy to see that insertion is O(n) for a list of length n. The main point is that, whichever implementation you choose, insert will always be one of O(1) or O(n), and removal will always be the other.
However, there is a simple trick that will let you get both insert and removal to O(1) in either case. I'll leave it to you to consider how to do that.
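For reference, one common way to pull this off, sketched with illustrative names: keep references to both ends of a singly linked list, enqueue at the tail, and dequeue at the head, so each operation touches only a constant number of pointers.

class LinkedQueue<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head; // dequeue from here
    private Node<T> tail; // enqueue here

    void enqueue(T value) { // O(1)
        Node<T> node = new Node<>(value);
        if (tail == null) { head = tail = node; }
        else { tail.next = node; tail = node; }
    }

    T dequeue() { // O(1)
        if (head == null) throw new IllegalStateException("empty");
        T value = head.value;
        head = head.next;
        if (head == null) tail = null; // queue became empty
        return value;
    }
}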

EDIT: Never mind

EDIT: Wow, I'm so sorry... I somehow confused the LinkedList and ArrayList columns in the second graph >_> I didn't sleep much... sorry... At least one answer did help me in other ways, with a detailed explanation, so this post wasn't a TOTAL waste...
I did find some topics about this but there were contradictions in posts, so I wanted confirmation on who was correct.
This topic here is what I found:
When to use LinkedList over ArrayList?
The most upvoted answer says:
"For LinkedList
get is O(n)
add is O(1)
remove is O(n)
Iterator.remove is O(1)
For ArrayList
get is O(1)
add is O(1) amortized, but O(n) worst-case since the array must be resized and copied
remove is O(n)"
But then someone else posted a link here that says:
http://leepoint.net/notes-java/algorithms/big-oh/bigoh.html
Algorithm          ArrayList   LinkedList
access front       O(1)        O(1)
access back        O(1)        O(1)
access middle      O(1)        O(N)
insert at front    O(N)        O(1)
insert at back     O(1)        O(1)
insert in middle   O(N)        O(1)
There is no contradiction between the two sources cited in the question.
First a few thoughts about LinkedLists:
In a linked list, we need to move a pointer through the list to get access to any particular element, whether to delete it, examine it, or insert a new element before it. Since the java.util.LinkedList implementation contains a reference to both the front and the back of the list, we have immediate access to both ends, and this explains why any operation involving the front or back of the list is O(1). If an operation is done using an Iterator, then the pointer is already where you need it to be. So removing an element from the middle takes O(n) time, but if the Iterator has already spent O(n) operations getting to the middle, then iter.remove() can execute in O(1).
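A small runnable illustration, with hypothetical values; the O(n) cost is paid walking the iterator to the element, and iter.remove() itself only unlinks the node.

import java.util.LinkedList;
import java.util.ListIterator;

public class IterRemoveDemo {
    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>(java.util.List.of(1, 2, 3, 4, 5));
        ListIterator<Integer> iter = list.listIterator();
        while (iter.hasNext()) {        // the O(n) walk
            if (iter.next() == 3) {
                iter.remove();          // O(1): the iterator is already at the node to unlink
                break;
            }
        }
        System.out.println(list);       // [1, 2, 4, 5]
    }
}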
Now consider ArrayList:
Under the hood, ArrayList stores data in a primitive array. So while we can access any element in O(1) time, adding or removing an element requires that all of the elements after it be shifted by one position, and this takes O(n) time. If we are adding or removing the last element, this does not require any shifting, so it can run in O(1).
This means that calling list.add(newItem) takes O(1), but occasionally there is no room at the end of the internal array, so the entire list needs to be copied into new memory before ArrayList can perform the add. However, since ArrayList doubles its capacity every time it resizes itself, this copy operation happens only about log₂ n times when adding n elements. So we still say that add runs in O(1) amortized time. If you know how many elements you will be adding when the ArrayList is created, you can give it an initial capacity to improve performance by avoiding the copy operations.
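A quick usage example of that capacity constructor, with a hypothetical element count:

import java.util.ArrayList;

public class PresizeDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        ArrayList<Integer> list = new ArrayList<>(n); // capacity hint; size is still 0
        for (int i = 0; i < n; i++) {
            list.add(i); // never triggers an internal resize-and-copy
        }
        System.out.println(list.size());
    }
}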

Big O Notation Arrays vs. Linked List insertions

According to academic literature for arrays it is constant O(1) and for Linked Lists it is linear O(n).
Indexing an array takes only one multiplication and one addition.
A linked list, which is not laid out in contiguous memory, requires traversal.
The question is: do O(1) and O(n) accurately describe indexing/search costs for arrays and linked lists respectively?
O(1) accurately describes inserting at the end of an array. However, if you're inserting into the middle of an array, you have to shift all the elements after that position, so the complexity for insertion in that case is O(n) for arrays. Appending at the end also discounts the case where you'd have to resize the array if it's full.
For a linked list, you have to traverse the list to do middle insertions, so that's O(n). You don't have to shift elements down, though.
There's a nice chart on wikipedia with this: http://en.wikipedia.org/wiki/Linked_list#Linked_lists_vs._dynamic_arrays
                             Linked list          Array   Dynamic array    Balanced tree
Indexing                     Θ(n)                 Θ(1)    Θ(1)             Θ(log n)
Insert/delete at beginning   Θ(1)                 N/A     Θ(n)             Θ(log n)
Insert/delete at end         Θ(1)                 N/A     Θ(1) amortized   Θ(log n)
Insert/delete in middle      search time + Θ(1)   N/A     Θ(n)             Θ(log n)
Wasted space (average)       Θ(n)                 0       Θ(n)             Θ(n)
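To make the array column concrete, a hedged sketch of a middle insertion; the second arraycopy is the Θ(n) element movement the table refers to. Names are illustrative.

public class ArrayInsertDemo {
    // Insert value at index by copying into a new, larger array: every element
    // after the insertion point moves one slot to the right.
    static int[] insertAt(int[] a, int index, int value) {
        int[] out = new int[a.length + 1];
        System.arraycopy(a, 0, out, 0, index);                        // prefix unchanged
        out[index] = value;
        System.arraycopy(a, index, out, index + 1, a.length - index); // suffix shifted
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(insertAt(new int[]{1, 2, 4}, 2, 3))); // [1, 2, 3, 4]
    }
}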
Assuming you are talking about an insertion where you already know the insertion point, i.e. this does not take into account the traversal of the list to find the correct position:
Insertions in an array depend on where you are inserting, as you will need to shift the existing values. Worst case (inserting at array[0]) is O(n).
Insertion in a list is O(1) because you only need to modify the next/previous pointers of the adjacent items.
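A sketch of that pointer splice, assuming a hand-rolled doubly linked node type (illustrative, not any particular library):

public class SpliceDemo {
    static class Node<T> {
        T value;
        Node<T> prev, next;
        Node(T value) { this.value = value; }
    }

    // Splice a fresh node in after `position`: only the adjacent
    // next/prev pointers change, so the operation is O(1).
    static <T> void insertAfter(Node<T> position, Node<T> fresh) {
        fresh.prev = position;
        fresh.next = position.next;
        if (position.next != null) position.next.prev = fresh;
        position.next = fresh;
    }

    public static void main(String[] args) {
        Node<String> a = new Node<>("a"), c = new Node<>("c");
        a.next = c;
        c.prev = a;
        insertAfter(a, new Node<>("b")); // list is now a <-> b <-> c
        System.out.println(a.next.value + " " + a.next.next.value); // b c
    }
}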
I'd imagine insertion is slower for arrays. Sure, you have to traverse a linked list, but to insert into an array you may have to allocate new memory, copy the existing contents over, and deallocate the old block.
What literature are you referencing? The size of an array is determined when the array is created and never changes afterwards. Insertion can really only take place in free slots at the end of the array. Any other type of insertion may require resizing, and that is certainly not O(1). The size of a linked list is implementation-dependent, but it must always be at least big enough to store all of its elements. Elements can be inserted anywhere in the list, and finding the appropriate index requires traversing the list.
tl;dr: An unsorted array is analogous to a set. Like a set, elements can be added and removed, iterated over, and read. But, as with a set, it makes no sense to talk about inserting an element at a specific position, because to do so would be an attempt to impose a sort order on what is, by definition, unsorted.
According to academic literature for arrays it is constant O(1) and for Linked Lists it is linear O(n).
It is worth understanding why the academic literature quotes array insert as O(1) for an array. There are several concepts to understand:
An array is defined as being unsorted (unless explicitly stated otherwise).
The length of an array, defined as the number of elements it contains, can be increased or decreased arbitrarily in O(1) time, and there is no limit on the maximum size of an array.
(On a real computer this is not the case, due to factors such as memory size, virtual memory, and swap space. But for the purposes of asymptotic analysis these factors are not important - we care about how the running time of the algorithm changes as the input size increases towards infinity, not how it performs on a particular computer with a particular memory size and operating system.)
Insert and delete are O(1) because the array is an unsorted data structure.
Insert is not assignment
Consider what it actually means to add an element to an unsorted data structure. Since there is no defined sorting order, whatever order actually occurs is arbitrary and does not matter. If you think in terms of an object oriented API, the method signature would be something like:
Array.insert(Element e)
Note that this is the same as the insert methods for other data structures, like a linked list or sorted array:
LinkedList.insert(Element e)
SortedArray.insert(Element e)
In all of these cases, the caller of the insert method does not specify where the value being inserted ends up being stored - it is an internal detail of the data structure. Furthermore, it makes no sense for the caller to try and insert an element at a specific location in the data structure - either for a sorted or unsorted data structure. For an (unsorted) linked list, the list is by definition unsorted and therefore the sort order is irrelevant. For a sorted array, the insert operation will, by definition, insert an element at a specific point of the array.
Thus it makes no sense to define an array insert operation as:
Array.insert(Element e, Index p)
With such a definition, the caller would override an internal property of the data structure and impose an ordering constraint on an unsorted array - a constraint that does not exist in the definition of the array, because an array is unsorted.
Why does this misconception occur with arrays and not other data structures? Probably because programmers are used to dealing with arrays using the assignment operator:
array[0] = 10
array[1] = 20
The assignment operator gives the values of an array an explicit order. The important thing to note here is that assignment is not the same as insert:
insert : store the given value in the data structure without modifying existing elements.
insert in unsorted : store the given value in the data structure without modifying existing elements and the retrieval order is not important.
insert in sorted : store the given value in the data structure without modifying existing elements and the retrieval order is important.
assign a[x] = v : overwrite the existing data in location x with the given value v.
An unsorted array has no sort order, and hence insert does not need to allow overriding of the position. insert is not the same thing as assignment. Array insert is simply defined as:
Array.insert(v):
    array.length = array.length + 1
    // in standard algorithmic notation, arrays are indexed 1..n, not 0..n-1
    array[array.length] = v
Which is O(1).
Long ago, on a system that had more RAM than disk space, I implemented an indexed linked list that was indexed as data was entered by hand or loaded from disk. Each record was appended at the next index in memory; on disk, the file was opened, the record appended to the end, and the file closed.
The program cashiered auction sales on a Model I Radio Shack computer, and the writes to disk were only insurance against power failure and an archival record. To meet time constraints, the data had to be fetched from RAM and printed in reverse order, so the buyer could be asked whether the first item that came up was the last one he purchased. Each buyer and seller was linked to the last item of theirs that sold, and that item was linked to the item before it. It was only a singly linked list, traversed from the bottom up.
Corrections were made with reversing entries. I used the same method for several things, and I never found a faster system when the method fit the job at hand, as long as the index was saved to disk and didn't have to be rebuilt when the file was reloaded into memory, as it might be after a power failure.
Later I wrote a program to edit more conventionally. It could also reorganize the data so related records were grouped together.
