I have the following question: why use a linked list if the time complexity of deleting an element from an array is O(n), and from a linked list (given an index) it is also O(n), since I also need to search through the whole list?
While the asymptotic complexity may be the same, the constant factor may be very different. In particular, you might have a collection of "large" things that are expensive to move or copy, but cheap to match. So with the linked list, you do a (fast) O(n) search to find an element, then an O(1) operation to insert or remove there. With the array you need the same O(n) search and then a slow O(n) move of all the other elements in the array to make or remove space.
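To make that difference concrete, here is a minimal sketch in Java (Node, findPrevious, and removeNext are illustrative names, not a standard API): the search is O(n) in both structures, but the linked list finishes with a single pointer rewrite, while an array would have to shift everything after the hole.

    class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    class SinglyLinkedList {
        Node head;

        // O(n): walk the list to find the node *before* the first match.
        // (For brevity, deleting the head itself is not handled here.)
        Node findPrevious(int value) {
            for (Node n = head; n != null && n.next != null; n = n.next) {
                if (n.next.value == value) return n;
            }
            return null;
        }

        // O(1): unlink by rewriting a single pointer; no elements move.
        void removeNext(Node prev) {
            if (prev != null && prev.next != null) prev.next = prev.next.next;
        }
    }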
It is also possible that you have another connected data structure (such as a hash table) with fast lookup giving references into your collection. In such a case, you can find an element in the list in O(1) time, then remove it in O(1) time.
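As a sketch of that idea (the class and field names here are illustrative, not a standard container): a hash map can store direct references to doubly linked list nodes, so locating and unlinking an element are both O(1). This is the same trick an LRU cache typically uses.

    import java.util.HashMap;
    import java.util.Map;

    class IndexedList<K> {
        static class Node<K> {
            K key;
            Node<K> prev, next;
            Node(K key) { this.key = key; }
        }

        private final Map<K, Node<K>> index = new HashMap<>();
        private Node<K> head, tail;

        void add(K key) {                      // O(1) append at the tail
            Node<K> node = new Node<>(key);
            if (tail == null) head = node;
            else { tail.next = node; node.prev = tail; }
            tail = node;
            index.put(key, node);
        }

        void remove(K key) {                   // O(1) lookup + O(1) unlink
            Node<K> node = index.remove(key);
            if (node == null) return;
            if (node.prev != null) node.prev.next = node.next; else head = node.next;
            if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
        }
    }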
Another advantage is that lists are more amenable to atomic updates: a singly linked list can be updated (insert or remove) with a single (atomic) pointer write.
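A minimal sketch of that single-pointer-write property, assuming Java's AtomicReference (this is the classic lock-free push onto a list head; the names are illustrative):

    import java.util.concurrent.atomic.AtomicReference;

    class LockFreeList<T> {
        static class Node<T> {
            final T value;
            Node<T> next;
            Node(T value) { this.value = value; }
        }

        private final AtomicReference<Node<T>> head = new AtomicReference<>();

        void pushFront(T value) {
            Node<T> node = new Node<>(value);
            do {
                node.next = head.get();                      // read the current head
            } while (!head.compareAndSet(node.next, node));  // one atomic pointer write
        }
    }

If the compare-and-set fails because another thread moved the head, the loop simply retries; no lock is ever taken.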
Linked List
A linked list’s insertion is O(1) for the actual operation, but it takes O(n) time to traverse to the proper position. Most online resources list a linked list’s average insertion time as O(1):
https://stackoverflow.com/a/17410009/10426919
https://www.bigocheatsheet.com/
https://www.geeksforgeeks.org/time-complexities-of-different-data-structures/
BST
A binary search tree’s insertion requires the traversal of nodes, taking O(log n) time.
Problem
Am I mistaken to believe that insertion in a BST also takes O(1) time for the actual operation?
Similar to the nodes of a linked list, inserting a node in a BST simply points the current node’s pointer to the inserted node, and the inserted node points to the current node’s child node.
If my thinking is correct, why do most online resources list the average insertion time for a BST as O(log n), as opposed to O(1) like for a linked list?
It seems that for a linked list, the actual insertion time is listed as the insertion time complexity, but for a BST, the traversal time is listed as the insertion time complexity.
It reflects the usage. It's O(1) and O(log n) for the operations you'll actually request from them.
With a BST, you'll likely let it manage itself while you stay out of the implementation details. That is, you'll issue commands like tree.insert(value) or queries like tree.contains(value). And those things take O(log n).
With a linked list, you'll more likely manage it yourself, at least the positioning. You wouldn't issue commands like list.insert(value, index) unless the index is very small or you don't care about performance. You're more likely to issue commands like insertAfter(node, newNode) or insertBeginning(list, newNode), which take only O(1) time. Note that I took these two from Wikipedia's Linked list operations > Singly linked lists section, which doesn't even have an operation for inserting at a certain position given as an index. Because in reality, you'll manage the "position" (in the form of a node) with the algorithm that uses the linked list, and the time to manage the position is attributed to that algorithm instead. That can, by the way, also be O(1). Examples:
You're building a linked list from an array. You'll do this by keeping a variable referencing the last node. To append the next value/node, insert it after that last node (an O(1) operation indeed), and update your variable to reference the new last node instead (also O(1)); see the sketch after these examples.
Imagine you don't find a position with a linear scan but with a hash map that stores references directly to linked list nodes. Then looking up the reference takes O(1), and inserting after the looked-up node again takes only O(1) time.
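Here is a minimal sketch of the first example, in Java (fromArray is an illustrative name): the tracked last reference is what makes every append O(1).

    class Builder {
        static class Node {
            int value;
            Node next;
            Node(int value) { this.value = value; }
        }

        static Node fromArray(int[] values) {
            Node head = null, last = null;        // 'last' is the tracked tail reference
            for (int v : values) {
                Node node = new Node(v);
                if (last == null) head = node;    // first node becomes the head
                else last.next = node;            // O(1) insert after the last node
                last = node;                      // O(1) update of the tail reference
            }
            return head;
        }
    }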
If you had shown us some of those "most online resources [that] list a linked list’s average insertion time as O(1)", we'd likely see that they're indeed showing insertion operations like insertAfterNode, not insertAtIndex.

Edit, now that you've included some links in the question, my thoughts on those sources regarding the O(1) insertion for linked lists: The first one does point out that it's O(1) only if you already have something like an "iterator to the location". The second one in turn refers to the same Wikipedia section I mentioned above, i.e., with insertions after a given node or at the beginning of a list. The third one is, well, the worst site about programming I know of, so I'm not surprised they just say O(1) without any further information.
Put differently, as I like real-world analogies: If you ask me how much it costs to replace part X inside a car motor, I might say $200, even though the part only costs $5. Because I wouldn't do that myself. I'd let a mechanic do that, and I'd have to pay for their work. But if you ask me how much it costs to replace the bell on a bicycle, I might say $5 when the bell costs $5. Because I'd do the replacing myself.
A binary search tree is ordered, and it's typically balanced (to avoid O(n) worst-case search times), which means that when you insert a value, some amount of shuffling has to be done to balance out the tree. That rebalancing takes an average of O(log n) operations, whereas a linked list only needs to update a fixed number of pointers once you've found the place to insert an item between nodes.
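To make the two costs in this discussion concrete, here is a minimal sketch of plain, unbalanced BST insertion in Java (rebalancing omitted; names are illustrative): the final pointer write is indeed O(1), but reaching the right empty slot costs an O(height) descent, which is O(log n) on average in a balanced tree.

    class BinarySearchTree {
        static class Node {
            int key;
            Node left, right;
            Node(int key) { this.key = key; }
        }

        Node root;

        void insert(int key) {
            if (root == null) { root = new Node(key); return; }
            Node cur = root;
            while (true) {                 // O(height) descent to find the empty slot
                if (key < cur.key) {
                    if (cur.left == null) { cur.left = new Node(key); return; }   // O(1) link
                    cur = cur.left;
                } else {
                    if (cur.right == null) { cur.right = new Node(key); return; } // O(1) link
                    cur = cur.right;
                }
            }
        }
    }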
To insert into a linked list, you just need to maintain a reference to the end node of the list (assuming you are inserting at the end).
To insert into a binary search tree (BST) and maintain it after insertion, there is no way to do that in O(1), since the tree might need to re-balance. This operation is not as simple as inserting into a linked list.
Check out some of the examples here.
The insertion time of a linked list actually depends on where you are inserting and on the type of linked list.
For example, consider the following cases:
You are using a singly linked list and you insert at the end or in the middle: you have a running time of O(n) to traverse the list to the end node or middle node.
You are using a doubly linked list (with two pointers: the first pointer points to the head element and the second pointer points to the last element) and you insert in the middle: you still have O(n) time complexity, since you need to traverse to the middle of the list using either the first or the second pointer.
You are using a singly linked list and you insert at the first position of the list: this time the complexity is O(1), since you don't need to traverse any nodes at all. The same is true for a doubly linked list and inserting at the end of the list.
So you can see that in the worst-case scenario a linked list takes O(n) instead of O(1).
Now in the case of a BST, you can get O(log n) time if your BST is balanced and not skewed. If your tree is skewed (where every element is greater than the previous element), you need to traverse all the nodes to find the insertion position. For example, consider the tree 1->2->4->6, into which you are going to insert node 9: you need to visit all the nodes to find the insertion position.
    1
     \
      2
       \
        4
         \
          6 (last position, after which the new node will be inserted)
           \
            9 (new insertion position for the new node)
Therefore you can see that you need to visit all the nodes to find the proper place; if you have n nodes, you get O(n+1) => O(n) running time complexity.
But if your BST is balanced and not skewed, the situation changes dramatically, since with every move down the tree you can eliminate the nodes that do not come under the condition.
PS: What I mean by "do not come under the condition" you can take as homework!
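As a quick demonstration (reusing an unbalanced insert like the BST sketch shown earlier in this section; the names are illustrative), feeding already-sorted keys into a plain BST produces exactly the skewed chain drawn above:

    int[] sorted = {1, 2, 4, 6};
    BinarySearchTree tree = new BinarySearchTree();
    for (int k : sorted) tree.insert(k);   // tree degenerates into the chain 1->2->4->6
    tree.insert(9);                        // must visit every node: O(n), not O(log n)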
If you want to delete node A, then you have to traverse only once and the complexity will be O(1).
If you want to delete node C, then you have to traverse twice and the complexity will be O(n).
If you want to delete node D, then you have to traverse three times and the complexity might be O(n).
However, the deletion complexity of the last node in a doubly linked list is O(1).
I don't understand how this works.
I checked this link, but it did not answer my question:
Link
The complexity isn't in removing the item, but in locating it.
In a doubly-linked list, you typically have a pointer to the last element in the list so that you can append. So if somebody asks you to delete the last element, you can just remove it.
If somebody asks you to delete the kth element of the list, you have to start at the beginning and traverse k links to find the element before you can delete it. That's going to be O(k), which in the worst case (k = n-1) is O(n).
The only case in which deleting the last node of a doubly linked list is O(1) is when you have direct access to that node, via something like a tail pointer. Otherwise you have to traverse the whole list, which takes O(n).
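A minimal sketch of that tail-pointer case in Java (illustrative names): with direct access to the last node, deleting it is a constant number of pointer updates, with no traversal.

    class DoublyLinkedList {
        static class Node {
            int value;
            Node prev, next;
            Node(int value) { this.value = value; }
        }

        Node head, tail;

        void removeLast() {                // O(1): we already hold the tail reference
            if (tail == null) return;      // empty list: nothing to delete
            tail = tail.prev;
            if (tail != null) tail.next = null;
            else head = null;              // the list is now empty
        }
    }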
When we determine time complexity, we always take into account the worst-case scenario. So why do we not assume the worst-case scenario for deletion in a singly linked list (not knowing where the value is, and therefore needing to traverse the entire linked list)?
I'm using this as the "source of truth" http://bigocheatsheet.com/
For example, deletion from an array is considered O(n) because if we delete the first item, we need to move every other item in the array down a slot. If we were to delete just the last item in the array, it would be constant time. But we assume the worst-case scenario, which makes sense.
So why would we not do the same for a linked list and assume the worst-case scenario? In that case, it seems to me it should be O(n), right?
Deletion from a linked list assumes you already know which element you're deleting, and all you do is reassign the reference from the previous element to the next one. Therefore the operation itself is O(1).
And of course search in a linked list is O(n), just as in an array. But in an array, even once you know your element, you MUST shift all the elements after it.
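A one-method sketch of that array cost (the class and method names are illustrative): even when the index is already known, the delete itself is O(n) because of the shift.

    class ArrayDelete {
        // Delete element i from the first 'length' slots of a; returns the new length.
        static int deleteAt(int[] a, int length, int i) {
            System.arraycopy(a, i + 1, a, i, length - i - 1); // shift everything after i left by one: O(n - i)
            return length - 1;
        }
    }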
Is the time complexity for both operations equal to O(log n)?
Remember: the list is ordered, always ordered, and not doubly linked.
Both insertion and deletion in an ordered linked list are O(n), since you first need to find what you want to delete or add (for deletion, the relevant node; for insertion, its correct location), which is O(n) even though the list is ordered, because you have to reach that place by iterating from the head.
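A minimal sketch of that in Java (illustrative names): the splice at the end is O(1), but finding where to splice forces an O(n) walk from the head, because a singly linked list has no random access for a binary search.

    class SortedList {
        static class Node {
            int value;
            Node next;
            Node(int value) { this.value = value; }
        }

        Node head;

        void insert(int value) {
            Node node = new Node(value);
            if (head == null || value < head.value) {  // new smallest value becomes the head
                node.next = head;
                head = node;
                return;
            }
            Node cur = head;
            while (cur.next != null && cur.next.value < value) cur = cur.next; // O(n) scan
            node.next = cur.next;   // O(1) splice
            cur.next = node;
        }
    }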
An efficient special type of list that allows fast insertion, deletion, and lookup is called a skip list; it uses extra nodes to jump quickly between non-adjacent nodes.