LinkedList Operation Cost - data-structures

What is the cost of finding an element in a linked list? I know that the cost of finding an element in a balanced binary search tree is O(log n), but what about a linked list?

If you know nothing about the elements in the linked list and have no pointers into the list, the cost of searching for an element in a linked list is O(1) in the best case and O(n) in the worst case. In the best case, you find the element at the very front; in the worst case, you have to look at all the elements before deciding that the element you're searching for isn't there.
This is much slower than a balanced binary search tree in the worst case, so there are some variations on the linked list designed to speed up access. The skip list, for example, uses multiple parallel linked lists to make it possible to "skip" over elements in the list. This requires the elements to be stored in sorted order, but it does decrease the lookup time to expected O(log n).
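That linear scan can be sketched with a hand-rolled singly linked node (the `Node` and `contains` names here are illustrative, not from the question):

```java
class LinkedSearch {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    // Follow next pointers until the value is found or the list ends:
    // O(1) best case (element at the front), O(n) worst case (not present).
    static boolean contains(Node head, int target) {
        for (Node cur = head; cur != null; cur = cur.next) {
            if (cur.value == target) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Node head = new Node(3, new Node(1, new Node(4, null)));
        System.out.println(contains(head, 4)); // true
        System.out.println(contains(head, 9)); // false
    }
}
```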
Hope this helps!

Related

Time complexity of deletion in a linked list

I'm having a bit of trouble understanding why the time complexity of linked lists is O(1) according to this website. From what I understand, if you want to delete an element, surely you must traverse the list to find out where the element is located (if it even exists at all)? Shouldn't it be O(n), or am I missing something completely?
No, you are not missing something.
If you want to delete a specific element, the time complexity is O(n) (where n is the number of elements) because you have to find the element first.
If you want to delete an element at a specific index i, the time complexity is O(i) because you have to follow the links from the beginning.
The time complexity of insertion is only O(1) if you already have a reference to the node you want to insert after. The time complexity of removal is only O(1) for a doubly-linked list if you already have a reference to the node you want to remove. Removal for a singly-linked list is only O(1) if you already have references to both the node you want to remove and the one before it. All this is in contrast to an array-based list, where insertion and removal are O(n) because you have to shift elements along.
The advantage of using a linked list rather than a list based on an array is that you can efficiently insert or remove elements while iterating over it. This means for example that filtering a linked list is more efficient than filtering a list based on an array.
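One way to see that iteration advantage in Java: `Iterator.remove()` on a `java.util.LinkedList` unlinks the current node in O(1), because the iterator already holds a reference to it, while the same filter on an `ArrayList` shifts elements on every removal. A minimal sketch:

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

class FilterDemo {
    // Remove all odd numbers in one pass; each removal is O(1) on a
    // LinkedList because the iterator points at the node being removed.
    static void removeOdds(List<Integer> list) {
        Iterator<Integer> it = list.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 != 0) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>(List.of(1, 2, 3, 4, 5));
        removeOdds(list);
        System.out.println(list); // [2, 4]
    }
}
```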
You are correct.
Deletion:
1. If a pointer to the node is given, the time complexity is O(1).
2. If you don't have a pointer to the node to be deleted (search and delete), the time complexity is O(n).

binary search tree vs sorted doubly linked list

I'm wondering, is there any difference in performance between these, provided binary search is used for insertion and search in the sorted linked list? And in which situations do they perform differently, or for which purposes would, say, the list be unusable, or vice versa?
You can't do a binary search on a linked list (single or double) simply because there's no way to get to the middle of the list without traversing half of it (from one end).
There's no doubt a form of multi-level skip list that will do that but it seems to me that's just emulating a binary tree with a more complex structure.
A sorted linked list tends to be O(n) for search, insertion and deletion (the actual insertion/deletion itself is O(1), but you still have to find the insertion or deletion point first).
Alternatively, binary trees (balanced ones) are O(log n) for search, insertion and deletion (all these operations are proportional to the height of the tree).

What search tree should I use when I know all the probabilities of accessing each element

If I don't know the probabilities of accessing each element, but I'm sure that some elements will be accessed far more often than the others, I will use a splay tree. What should I use if I already know all the probabilities? I assume that there should be some data structure that is better than splay trees for this case.
I'm trying to imagine all the cases where and when should I use every type of the search trees. Maybe someone can post some links to articles about comparison of all the search trees, and similar structures?
EDIT: I'd like to still have O(log n) as the worst case, but on average it should be faster. Splay trees are a good example, but I'd like to predefine the configuration of the tree.
For example, I have an array of elements to store [a1, a2, .. an], and the probabilities for each element [p1, p2, .. pn], which define how often I will access each element. I can create a splay tree, add each element to it (O(n log n)), and then access them with the given probabilities to shape the desired tree. So if I have probabilities [1/2, 1/4, 1/4], I need to splay the first element to bring it near the top. So I need to order the elements by probability and splay them from the lowest to the highest access probability. That also takes O(n log n). So the overall time for building such a tree is O(n log n) with a big constant. My goal is to lower this number.
I do not mind using something other than a search tree, but I'd like the time to be lower than in the case of a splay tree. And I want search, insert, and delete to be in the range of O(log n) amortized.
Edit: I didn't see that you wanted to update the tree dynamically - the below algorithm requires all elements and probabilities to be known in advance. I'll leave the post up in case someone in such a situation comes along.
If you happen to be in possession of the third edition of Introduction to Algorithms by Cormen et al., it describes a dynamic programming algorithm for creating optimal binary search trees when you know all of the probabilities.
Here is a rough outline of the algorithm: First, sort the elements (on element value, not probability). We don't yet know which element should be the root of the tree, but we know that all elements that will be to the left of the root in the tree will be to the left of that element in the list, and vice versa for the elements to the right of the root. If we choose the element at index k to be the root, we get two subproblems: how to construct an optimal tree for the elements 0 through k-1, and for the elements k+1 through n-1. Solve these problems recursively, so that you know the expected cost for a search in a tree where the root is element k. Do this for all possible choices of k, and you will find which tree is the best one. Use dynamic programming or memoization in order to save computation time.
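A rough sketch of that dynamic program, simplified to successful searches only (it omits the dummy-key probabilities of the full CLRS formulation, and the names are illustrative). Here `e[i][j]` holds the minimum expected cost of a tree over elements `i..j-1`:

```java
class OptimalBST {
    // Minimum expected search cost for sorted elements with access
    // probabilities p[0..n-1] (successful searches only). O(n^3) time.
    static double minExpectedCost(double[] p) {
        int n = p.length;
        double[] prefix = new double[n + 1];          // prefix sums of p
        for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + p[i];

        double[][] e = new double[n + 1][n + 1];      // e[i][j]: cost of i..j-1
        for (int len = 1; len <= n; len++) {
            for (int i = 0; i + len <= n; i++) {
                int j = i + len;
                double best = Double.POSITIVE_INFINITY;
                for (int r = i; r < j; r++) {         // try each root r
                    best = Math.min(best, e[i][r] + e[r + 1][j]);
                }
                // Every element in i..j-1 sits one level deeper than the
                // root, so add the total weight of the range once.
                e[i][j] = best + (prefix[j] - prefix[i]);
            }
        }
        return e[0][n];
    }

    public static void main(String[] args) {
        // Probabilities [1/2, 1/4, 1/4] from the question above.
        System.out.println(minExpectedCost(new double[]{0.5, 0.25, 0.25})); // 1.75
    }
}
```

To recover the actual tree rather than just its cost, you would additionally record the best root `r` for each range and rebuild top-down.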
Use a hash table.
You never mentioned needing ordered iteration, and by sacrificing this you can achieve amortized O(1) insert/access complexity, better than O(log n).
Specifically, use a hash table with linked list buckets, and use the move-to-front optimization. What this means is each time you search a bucket (linked list) with more than one item, you move the item found to the front of that bucket. The next time you access this element, it will already be at the front.
If you know the access probabilities, you can further refine the technique. When inserting a new element into a bucket, don't insert it onto the front, but rather insert such that you maintain most-probable-first order. Note the move-to-front technique will tend to perform this sort implicitly already, but you can help it bootstrap more quickly.
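The move-to-front step on a single bucket's list might look like this (the hash-table plumbing around it is omitted, and the names are illustrative):

```java
class MtfList {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    Node head;

    void pushFront(int value) { head = new Node(value, head); }

    // Search the list; on a hit, splice the node out and move it to the
    // front, so frequently accessed values drift toward the head.
    boolean find(int target) {
        Node prev = null;
        for (Node cur = head; cur != null; prev = cur, cur = cur.next) {
            if (cur.value == target) {
                if (prev != null) {           // not already at the front
                    prev.next = cur.next;
                    cur.next = head;
                    head = cur;
                }
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        MtfList list = new MtfList();
        list.pushFront(1);
        list.pushFront(2);
        list.pushFront(3);                   // list: 3 -> 2 -> 1
        list.find(1);                        // list becomes: 1 -> 3 -> 2
        System.out.println(list.head.value); // 1
    }
}
```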
If your tree is not going to change once created, you probably should use a hash table or tango tree:
http://en.wikipedia.org/wiki/Tango_tree
Hash tables, when not overloaded, are O(1) lookup, degrading to O(n) when overloaded.
Tango trees, once constructed, are O(log log n) lookup. They do not support deletion or insertion.
There's also something known as a "perfect hash" that might be good for your use.

Time complexity for Insertion and deletion of elements from an ordered list

Is the time complexity for both operations equal to O(log n)?
Remember: the list is ordered, always ordered, and not doubly linked.
Both insertion and deletion in an ordered linked list are O(n), since you first need to find what you want to delete/add (for deletion, find the relevant node; for insertion, find its correct location), which is O(n) even if the list is ordered, because you have to reach that place by iterating from the head.
An efficient special type of list that allows fast insertion, deletion and lookup is the skip list, which uses extra nodes to jump quickly between non-adjacent nodes.
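A sketch of that sorted insertion, showing where the O(n) search and the O(1) splice fall (the names are illustrative):

```java
class SortedList {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    // Insert into a sorted singly linked list: O(n) to find the insertion
    // point, O(1) for the splice itself. Returns the (possibly new) head.
    static Node insert(Node head, int value) {
        if (head == null || value <= head.value) {
            return new Node(value, head);
        }
        Node cur = head;
        while (cur.next != null && cur.next.value < value) {
            cur = cur.next;        // walk until the next node is >= value
        }
        cur.next = new Node(value, cur.next);
        return head;
    }

    public static void main(String[] args) {
        Node head = null;
        for (int v : new int[]{5, 1, 3}) head = insert(head, v);
        for (Node n = head; n != null; n = n.next) {
            System.out.print(n.value + " ");   // prints: 1 3 5
        }
    }
}
```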

How to apply binary search O(log n) on a sorted linked list?

Recently I came across an interesting question on linked lists. A sorted singly linked list is given, and we have to search for one element in this list.
The time complexity should not be more than O(log n). It seems that we need to apply binary search to this linked list. How? Since a linked list does not provide random access, if we try to apply the binary search algorithm it will reach O(n), as we need to find the length of the list and go to the middle.
Any ideas?
It is certainly not possible with a plain singly-linked list.
Sketch proof: to examine the last node of a singly-linked list, we must perform n-1 operations of following a "next" pointer [proof by induction on the fact that there is only one reference to the k+1th node, it is in the kth node, and it takes an operation to follow it]. For certain inputs, it is necessary to examine the last node (specifically, if the searched-for element is equal to or greater than its value). Hence for certain inputs, the time required is proportional to n.
You either need more time, or a different data structure.
Note that you can do it in O(log n) comparisons with a binary search. It'll just take more time than that, so this fact is only of interest if comparisons are very much more expensive than list traversal.
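One way to realize that O(log n)-comparisons claim is an index-based binary search that walks to each midpoint from the surviving left end: the comparison count is O(log n), but the pointer-following sums to O(n) (n/2 + n/4 + ... plus the initial length scan), so the overall time stays linear. A sketch with illustrative names:

```java
class ListBinarySearch {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    static Node advance(Node node, int steps) {
        while (steps-- > 0) node = node.next;
        return node;
    }

    // Binary search over a sorted singly linked list: O(log n) comparisons,
    // but O(n) pointer-following, since each probe walks to the midpoint.
    static boolean contains(Node head, int target) {
        int len = 0;
        for (Node n = head; n != null; n = n.next) len++;   // O(n) length scan

        Node lo = head;                    // left end of the remaining range
        while (len > 0) {
            Node mid = advance(lo, len / 2);
            if (mid.value == target) return true;
            if (mid.value < target) {      // discard the left half and mid
                lo = mid.next;
                len = len - len / 2 - 1;
            } else {                       // discard mid and the right half
                len = len / 2;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Node head = new Node(1, new Node(3, new Node(5, new Node(7, null))));
        System.out.println(contains(head, 5)); // true
        System.out.println(contains(head, 4)); // false
    }
}
```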
You need to use skip list. This is not possible with a normal linked list (and I really want to learn if this is possible with normal list).
In a linked list, binary search may not achieve a complexity of O(log n), but the search cost can at least be reduced somewhat by using the double pointer method described in this research work: http://www.ijcsit.com/docs/Volume%205/vol5issue02/ijcsit20140502215.pdf
As noted, this is not in general possible. However, in a language like C, if the list nodes are contiguously allocated, it would be possible to treat the structure as an array of nodes.
Obviously, this is only an answer to a trick question variant of this problem, but the problem is always an impossibility or a trick question.
Yes, it is possible in Java, as below:
Collections.<T>binarySearch(List<T> list, T key)
for binary search on any List. It works on ArrayList, on LinkedList, and on any other List. Note, though, that for a LinkedList (which does not implement RandomAccess) it still has to follow links to reach each probe position, so while it makes few comparisons, the overall time is not logarithmic in the list length.
Use a map to build the linked list.
Map M, with M[first element] = second element, M[second element] = third element, and so on.
It's a linked list, but because it's a map, typically backed by a balanced search tree that locates any key in O(log n), searching for any element will take O(log n).
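For what it's worth, that idea maps onto `java.util.TreeMap`, whose red-black tree gives O(log n) lookups (tree search rather than a literal binary search over an array); each element simply maps to its successor:

```java
import java.util.TreeMap;

class MapAsList {
    public static void main(String[] args) {
        // Each key maps to the next element, emulating the links of a list.
        TreeMap<String, String> next = new TreeMap<>();
        next.put("a", "b");
        next.put("b", "c");

        // Lookup goes through the red-black tree: O(log n), not O(n).
        System.out.println(next.get("b"));         // c
        System.out.println(next.containsKey("z")); // false
    }
}
```

Note that this trades away the O(1) splice that makes a plain linked list attractive in the first place: every link update now costs O(log n).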
