EDIT: I'm so sorry... I somehow confused the LinkedList and ArrayList columns in the second table (I didn't sleep much). At least one answer did help me in other ways, with a detailed explanation, so this post wasn't a total waste.
I did find some topics about this, but there were contradictions between the posts, so I wanted confirmation on who was correct.
This is the topic I found:
When to use LinkedList over ArrayList?
The most upvoted answer says:
"For LinkedList
get is O(n)
add is O(1)
remove is O(n)
Iterator.remove is O(1)
For ArrayList
get is O(1)
add is O(1) amortized, but O(n) worst-case since the array must be resized and copied
remove is O(n)"
But then someone else posted this link, which says:
http://leepoint.net/notes-java/algorithms/big-oh/bigoh.html
Algorithm          ArrayList   LinkedList
access front       O(1)        O(1)
access back        O(1)        O(1)
access middle      O(1)        O(N)
insert at front    O(N)        O(1)
insert at back     O(1)        O(1)
insert in middle   O(N)        O(1)
There is no contradiction between the two sources cited in the question.
First, a few thoughts about LinkedList:
In a linked list, we need to move a pointer through the list to reach any particular element, whether to delete it, examine it, or insert a new element before it. Since the java.util.LinkedList implementation holds references to both the front and the back of the list, we have immediate access to both ends, which is why any operation involving the front or back of the list is O(1). If an operation is done through an Iterator, the pointer is already where you need it to be. So removing an element from the middle takes O(n) time overall, but if the Iterator has already spent O(n) operations getting to the middle, then iter.remove() itself can execute in O(1).
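To make the Iterator point concrete, here is a minimal sketch (the list contents are made up): the walk to the middle costs O(n), but the remove() itself is O(1) because the iterator is already standing on the node.

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class MiddleRemoveDemo {
    public static void main(String[] args) {
        List<String> list = new LinkedList<>(List.of("a", "b", "c", "d"));

        Iterator<String> iter = list.iterator();
        while (iter.hasNext()) {
            if (iter.next().equals("c")) { // O(n): walking node by node
                iter.remove();             // O(1): just unlink the current node
                break;
            }
        }
        System.out.println(list); // [a, b, d]
    }
}
```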
Now consider ArrayList:
Under the hood, ArrayList stores its data in a plain array. We can therefore access any element by index in O(1) time, but adding or removing an element in the middle requires shifting every subsequent element over by one position, which takes O(n) time. Adding or removing the last element requires no shifting, so it runs in O(1).
This means that calling list.add(newItem) usually takes O(1), but occasionally there is no room left in the backing array, so the whole array must be copied into new memory before the ArrayList can perform the add. However, since ArrayList grows its capacity by a constant factor each time it resizes (doubling, in this description), that copy only happens about log2(n) times while adding n elements, so we still say that add runs in O(1) amortized time. If you know how many elements you will be adding, you can give the ArrayList an initial capacity when you create it and avoid the copies, as in the snippet below.
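For example (a minimal sketch; the element count is invented), pre-sizing removes all of the intermediate copies:

```java
import java.util.ArrayList;
import java.util.List;

public class PresizeDemo {
    public static void main(String[] args) {
        int expected = 1_000_000; // hypothetical element count, known up front

        // Without a hint, the backing array is copied each time it fills up.
        List<Integer> grown = new ArrayList<>();

        // With an initial capacity, no intermediate copies are ever needed.
        List<Integer> presized = new ArrayList<>(expected);

        for (int i = 0; i < expected; i++) {
            grown.add(i);    // O(1) amortized; occasional O(n) resize-and-copy
            presized.add(i); // O(1) every time: the array never resizes
        }
    }
}
```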
Related
I have the following question: why use a linked list if deleting an element from an array is O(n), and deleting from a linked list (given an index) is also O(n), since I still need to search through the whole list?
While the asymptotic complexity may be the same, the constant factors can be very different. In particular, you might have a collection of "large" things that are expensive to move or copy, but cheap to match. With the linked list, you do a (fast) O(n) search to find an element, then an O(1) operation to insert or remove there. With the array, you need the same O(n) search and then a slow O(n) move of all the other elements to make or reclaim space.
It is also possible that you have another connected data structure (such as a hash table) with fast lookup, holding references into your collection. In that case, you can find an element of the list in O(1) time and then remove it in O(1) time, as in the sketch below.
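A minimal sketch of that pattern (the class and method names here are invented for illustration): a HashMap from key to node layered over a hand-rolled doubly linked list, so both the lookup and the unlink cost O(1). java.util.LinkedHashMap combines the same two structures internally.

```java
import java.util.HashMap;
import java.util.Map;

/** Doubly linked list indexed by a HashMap: O(1) find + O(1) remove. */
public class IndexedList<K> {
    private static class Node<K> {
        K key;
        Node<K> prev, next;
        Node(K key) { this.key = key; }
    }

    private final Map<K, Node<K>> index = new HashMap<>();
    private Node<K> head, tail;

    /** Append at the tail in O(1). */
    public void add(K key) {
        Node<K> node = new Node<>(key);
        if (tail == null) { head = tail = node; }
        else { tail.next = node; node.prev = tail; tail = node; }
        index.put(key, node);
    }

    /** O(1): hash lookup to find the node, then unlink it with no traversal. */
    public void remove(K key) {
        Node<K> node = index.remove(key);
        if (node == null) return;
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
    }
}
```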
Another advantage is that lists are more amenable to atomic updates -- a singly-linked list can be updated (an insert or a remove) with a single (atomic) pointer write.
I'm having a bit of trouble understanding why the time complexity of linked lists is O(1) according to this website. From what I understand, if you want to delete an element, surely you must traverse the list to find where the element is located (if it even exists at all)? Shouldn't it be O(n), or am I missing something completely?
No, you are not missing something.
If you want to delete a specific element, the time complexity is O(n) (where n is the number of elements) because you have to find the element first.
If you want to delete an element at a specific index i, the time complexity is O(i) because you have to follow the links from the beginning.
The time complexity of insertion is only O(1) if you already have a reference to the node you want to insert after. Removal is only O(1) for a doubly-linked list if you already have a reference to the node you want to remove; for a singly-linked list, it is only O(1) if you have references both to the node to remove and to the one before it. All of this is in contrast to an array-based list, where insertion and removal are O(n) because you have to shift the other elements along.
The advantage of a linked list over an array-based list is that you can efficiently insert or remove elements while iterating over it. This means, for example, that filtering a linked list is more efficient than filtering an array-based list; see the sketch below.
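Here is a small sketch of editing a LinkedList in mid-iteration (the data is invented): ListIterator supports both remove() and add(), and each call is O(1) because the iterator is already positioned at the right node.

```java
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class EditWhileIterating {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>(List.of(3, -1, 4, -1, 5));

        ListIterator<Integer> it = list.listIterator();
        while (it.hasNext()) {
            int value = it.next();
            if (value < 0) {
                it.remove(); // O(1): unlink the current node
                it.add(0);   // O(1): insert in place of the removed value
            }
        }
        System.out.println(list); // [3, 0, 4, 0, 5]
    }
}
```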
You are correct.
Deletion:
1. If a pointer to the node to be deleted is given, the time complexity is O(1).
2. If you don't have a pointer to the node to be deleted (search and delete), the time complexity is O(n).
A sketch of case 1 for a singly-linked list follows.
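For case 1, note that even a singly-linked list can delete in O(1) when given only a pointer to the doomed node, via the classic copy-the-successor trick (my own sketch, not from the answer above; it does not work on the last node):

```java
/** Minimal singly-linked node, for illustration only. */
class ListNode {
    int value;
    ListNode next;
    ListNode(int value) { this.value = value; }

    /**
     * Delete the given node in O(1) without a pointer to its predecessor:
     * copy the successor's value into it, then splice the successor out.
     * Precondition: node is not the last node in the list.
     */
    static void deleteGivenNode(ListNode node) {
        node.value = node.next.value; // overwrite with the successor's value
        node.next = node.next.next;   // unlink the successor
    }
}
```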
Write four O(1)-time procedures to insert elements into and delete elements from both ends of a deque constructed from an array.
In my implementation, I maintained four pointers: front1, rear1, front2, rear2.
Is there another algorithm with fewer pointers and O(1) complexity? Please explain.
There are two common ways to implement a deque:
Doubly linked list: you implement a doubly linked list and maintain pointers to the front and the back of the list. It is easy to both insert and remove at either end of the list in O(1) time.
A circular dynamic array: here you have an array that is treated as circular (so the elements at index arr.length-1 and index 0 are regarded as adjacent).
In this implementation you keep the index of the "head" and of the "tail". Adding an element at the head writes to index head-1 (moving the head backward, wrapping around the array), and adding at the tail writes to index tail+1 (again wrapping around).
This method is amortized O(1) and has better constants than the linked-list implementation. However, it is not worst-case O(1): if the number of elements exceeds the size of the array, you need to allocate a new array and move the elements from the old one into it. That takes O(n) time, but it can only happen after at least O(n) cheap operations, so it is O(1) under amortized analysis even though an individual operation can still cost O(n). A fixed-capacity sketch of the circular-array approach follows.
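Here is a minimal sketch of that circular-array deque (my own illustration, not from the answer above; it uses a fixed capacity so every operation is strictly O(1), and stores only a head index and a size, since the tail position can be derived):

```java
/** Fixed-capacity circular-array deque: all four end operations are O(1). */
public class ArrayDequeSketch {
    private final int[] data;
    private int head = 0; // index of the first element
    private int size = 0;

    public ArrayDequeSketch(int capacity) { data = new int[capacity]; }

    public void addFirst(int x) {
        if (size == data.length) throw new IllegalStateException("full");
        head = (head - 1 + data.length) % data.length; // wrap backward
        data[head] = x;
        size++;
    }

    public void addLast(int x) {
        if (size == data.length) throw new IllegalStateException("full");
        data[(head + size) % data.length] = x; // wrap forward
        size++;
    }

    public int removeFirst() {
        if (size == 0) throw new IllegalStateException("empty");
        int x = data[head];
        head = (head + 1) % data.length;
        size--;
        return x;
    }

    public int removeLast() {
        if (size == 0) throw new IllegalStateException("empty");
        size--;
        return data[(head + size) % data.length];
    }
}
```

Replacing the "full" exceptions with a grow-and-copy step is exactly what turns strict O(1) into amortized O(1).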
What is the performance of the insertion operation for a Queue implemented as:
(a) an array, with the items in unsorted order
(b) an array, with the items in sorted order
(c) a linked list, with the items in unsorted order.
For each operation and each implementation, give the performance in Big-O notation, and explain enough of the algorithm to justify your answer (e.g. "it takes O(n) time because in the worst case ... the algorithm does such and such").
Please explain in detail, it'll help me out a lot!
Short answer: it depends on your data structure.
In a naive array-based implementation (assuming a fixed size), insertion is pretty clearly a constant-time operation, that is, O(1), provided you don't run off the end of the array. The same holds for a cyclic array, under similar assumptions.
A dynamic array is a little more complicated: it is a fixed-size array that you enlarge once it fills up to a certain point. So for a dynamic array that resizes upon reaching length k, the first k-1 insertions are constant time (just like inserting into an ordinary array), and the k-th insertion takes O(k): the cost of copying the contents of the array into a larger one, plus the insertion itself. You can show that this works out to O(1) amortized insertion time, but that may be out of scope for your course. A sketch of the scheme is below.
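A toy version of that resizing scheme (illustrative only; a real implementation such as java.util.ArrayList handles this for you, and the initial capacity of 4 is arbitrary):

```java
/** Toy dynamic array: doubles its backing array whenever it is full. */
public class DynamicArraySketch {
    private int[] data = new int[4]; // small initial capacity, chosen arbitrarily
    private int size = 0;

    public void add(int x) {
        if (size == data.length) {
            // O(size) copy, but performed so rarely that adds still
            // cost O(1) when amortized over the whole sequence.
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = x; // the common case: O(1)
    }

    public int get(int i) { return data[i]; } // O(1) random access
}
```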
As others have noted, sorted order doesn't affect a standard queue. If you are in fact dealing with a priority queue, there are many possible implementations, which I'll let you research on your own. The best insertion time is O(1), but that implementation has some disadvantages; the standard implementation (a binary heap) has O(log n) insertion.
With linked lists, the insertion time depends on whether the head of the list is the head of the queue (i.e., whether you add at the head or at the tail).
If you're adding at the head, it's easy to see that insertion is O(1). If you're adding at the tail, insertion is O(n) for a list of length n, because you must first walk to the end. The main point is that, whichever implementation you choose, insertion will always be one of O(1) or O(n), and removal will always be the other.
However, there is a simple trick that will let you get both insert and removal to O(1) in either case. I'll leave it to you to consider how to do that.
If somebody asked me, "What is the running-time complexity of adding a new item to the back of an array-based list?", how should I answer? It can be treated as O(1), since the slot is accessed directly. But what if the resize() method is called before inserting (resize() doubles the size of the array when it is full)? In that case it will take linear time. So which one is correct: O(1) or O(n)?
Amortized, it is O(1), though it depends on the strategy for increasing the size of the list.
If we just increase the size of the array by one when it is full, it is O(n), since when we perform many inserts, we have to copy the entire list for each insert.
If we double the size of the array each time it fills up, we copy relatively rarely. Amortized, i.e. averaged over the whole sequence of inserts, this becomes O(1); the worked sum below shows why.
This data structure is often called a dynamic array.
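To see why doubling averages out to O(1), here is the standard amortized-analysis arithmetic (my own addition, assuming the capacity grows by a factor of exactly 2):

```latex
% n appends with capacity doubling: copies happen at sizes 1, 2, 4, ..., 2^{floor(log2 n)}
\underbrace{n}_{\text{element writes}}
  + \underbrace{1 + 2 + 4 + \cdots + 2^{\lfloor \log_2 n \rfloor}}_{\text{total copy work}\,<\,2n}
  \;<\; 3n
\qquad\Longrightarrow\qquad O(1) \text{ amortized per append}
```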