The scenario is as follows:
I want to reverse the direction of a singly linked list. In other words, after the reversal all pointers should point backwards. The algorithm should take linear time.
The solution I have thought of uses another data structure, a stack, with the help of which the singly linked list can easily be reversed, with all pointers pointing backwards. But I am in doubt whether that implementation yields linear time complexity. Please comment on this, and if a more efficient algorithm exists, please discuss it.
Thanks.
You could do it like this: As long as there are nodes in the input list, remove its first node and insert it at the beginning of the output list:
node* reverse(node *in) {
    node *out = NULL;
    while (in) {
        node *cur = in;      // detach the first node of the input list
        in = in->next;
        cur->next = out;     // prepend it to the output list
        out = cur;
    }
    return out;
}
Two times O(N) = O(2N) is still O(N). So first pushing N elements onto a stack and then popping N elements off it is indeed linear in time, as you expected.
See also the section "Multiplication by a Constant" in the Wikipedia entry on Big O notation.
If you put all of the nodes of your linked list in a stack, it will run in linear time, as you simply traverse the nodes on the stack backwards.
However, I don't think you need a stack. All you need to remember is the node you were just at, to reverse the pointer of the current node. Make note of the next node before you reverse the pointer at this node.
The previous answers have already (and rightly) mentioned that the solution using pointer manipulation and the solution using a stack are both O(n).
The remaining question is to compare the real run time (machine cycle complexity) performance of the two different implementations of the reverse() function.
I expect that the following two aspects might be relevant:
The stack implementation. Does it require the maximum stack depth to be specified explicitly? If so, how is that specified? If not, how does the stack manage memory as its size grows arbitrarily large?
I guess that nodes have to be copied from the list to the stack. [Is there a way without copying?] In that case, the copy cost per node needs to be accounted for, because the size of a node can be (arbitrarily) large.
Given these, in-place reversal by manipulating pointers seems more attractive to me.
For a list of size n, you call push n times and pop n times, both of which are O(1) operations, so the whole operation is O(n).
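To make that concrete, here is a minimal sketch of the stack-based reversal the question describes, assuming the same node struct as in the answer above (the function name is illustrative). It performs n pushes followed by n pops, so it is O(n) in time but also O(n) in extra space:

#include <stack>

node* reverse_with_stack(node *head) {
    std::stack<node*> s;
    for (node *cur = head; cur != NULL; cur = cur->next)
        s.push(cur);                      // first pass: n pushes
    node *out = NULL;
    node *tail = NULL;
    while (!s.empty()) {                  // second pass: n pops
        node *cur = s.top();
        s.pop();
        if (tail == NULL) out = cur;      // first popped node becomes the new head
        else tail->next = cur;
        tail = cur;
    }
    if (tail != NULL) tail->next = NULL;  // terminate the reversed list
    return out;
}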
You can use a stack to achieve an O(n) implementation. But the recursive solution IS using a stack (THE stack)! And, like all recursive algorithms, it is equivalent to looping. However, in this case, using recursion or an explicit stack creates a space complexity of O(n) which is completely unnecessary.
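For illustration, a minimal sketch of the recursive version being referred to, again assuming the node struct from above; it is O(n) in time, but the call stack makes it O(n) in space:

node* reverse_recursive(node *head) {
    if (head == NULL || head->next == NULL)
        return head;                              // empty or single-node list
    node *rest = reverse_recursive(head->next);   // reverse the tail first
    head->next->next = head;                      // hook the old head onto the end
    head->next = NULL;
    return rest;
}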
So from what I am seeing, I have been taught, like most people, that the iterative version of DFS is just like iterative BFS apart from two differences: replace the queue with a stack, and mark a node as discovered after POP rather than after PUSH.
Two questions have really been puzzling me recently:
In certain cases it will result in different output, but is it really necessary to mark a node as visited after we POP? Why is it wrong to do it after we PUSH? As far as I can see this serves the same purpose as it does for BFS: avoiding duplicates in our queue/stack.
Now the big one: my impression is that this kind of implementation of iterative DFS is not true DFS at all. If we think about the recursive version, it is quite space efficient, since it doesn't store all the possible neighbours at one level (as we would do in the iterative version); it only selects one and goes with it, and then it backtracks and goes for a second one. As an extreme example, think of a graph with one node in the center connected to 100 leaf nodes. In the recursive implementation, if we start from the middle node, the underlying stack will grow to a maximum of 2: one frame for the middle node and one for whichever leaf it is currently visiting. If we do it as we have been taught with the iterative version, the stack will grow to 100 elements. That doesn't seem right.
So with all those previous details in mind, my question is what would the approach be to have a true iterative DFS implementation?
Question 1: If you check+mark before the push, it uses less space but changes the order in which nodes are visited. You will visit everything, though.
Question 2: You are correct that an iterative DFS will usually put all the children of a node onto the stack at the same time. This increases the space used for some graphs, but it doesn't change the worst case space usage, and it's the easiest way so there's usually no reason to change that.
Occasionally you know that it will save a lot of space if you don't do this, and then you can write an iterative DFS that works more like the recursive one. Instead of pushing the next nodes to visit on the stack, you push a parent and a position in its list of children, or equivalent, which is pretty much what the recursive version has to remember when it recurses. In pseudo-code, it looks like this:
func DFS(start):
    let visited = EMPTY_SET
    let stack = EMPTY_STACK
    visited.add(start)
    visit(start)
    stack.push( (start, 0) )
    while (!stack.isEmpty()):
        let (parent, pos) = stack.pop()
        if (pos < parent.numChildren()):
            let child = parent.child[pos]
            stack.push( (parent, pos+1) )
            if (!visited.contains(child)):
                visited.add(child)
                visit(child)
                stack.push( (child, 0) )
You can see that it's a little more complicated, and the records you push on the stack are tuples, which is annoying in some cases. Often we'll use two stacks in parallel instead of creating tuples to push, or we'll push/pop two records at a time, depending on how nodes and child list positions have to be represented.
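For reference, here is a sketch of the same scheme in C++. The adjacency-list representation and the visit callback are assumptions for illustration, not part of the answer above:

#include <cstddef>
#include <stack>
#include <unordered_set>
#include <utility>
#include <vector>

// adj[v] lists the children of v; visit is whatever work you do per node.
void dfs(const std::vector<std::vector<int>>& adj, int start,
         void (*visit)(int)) {
    std::unordered_set<int> visited;
    std::stack<std::pair<int, std::size_t>> stack;  // (parent, next child index)

    visited.insert(start);
    visit(start);
    stack.push(std::make_pair(start, std::size_t(0)));

    while (!stack.empty()) {
        std::pair<int, std::size_t> top = stack.top();
        stack.pop();
        int parent = top.first;
        std::size_t pos = top.second;
        if (pos < adj[parent].size()) {
            int child = adj[parent][pos];
            stack.push(std::make_pair(parent, pos + 1));   // resume here later
            if (!visited.count(child)) {
                visited.insert(child);
                visit(child);
                stack.push(std::make_pair(child, std::size_t(0)));
            }
        }
    }
}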
What is the most effective way of finding a maximum value in a set of variables?
I have seen solutions, such as
private double findMax(double... vals) {
    double max = Double.NEGATIVE_INFINITY;
    for (double d : vals) {
        if (d > max) max = d;
    }
    return max;
}
But, what would be the most effective algorithm for doing this?
You can't reduce the complexity below O(n) if the list is unsorted... but you can improve the constant factor by a lot. Use SIMD. For example, with SSE you would use the MAXPS instruction to perform four compare+select operations in a single instruction. Unroll the loop a bit to reduce the cost of loop control logic. And then outside the loop, find the max out of the four values trapped in your SSE register.
This gives a benefit for any size of list; using multiple threads also makes sense for really large lists.
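A minimal sketch of the idea with SSE intrinsics, using single-precision floats so that four values fit in one register (the function name is illustrative; with doubles you would use MAXPD, two at a time):

#include <xmmintrin.h>   // SSE intrinsics (_mm_max_ps etc.)
#include <algorithm>
#include <cfloat>
#include <cstddef>

float simd_max(const float* vals, std::size_t n) {
    __m128 best = _mm_set1_ps(-FLT_MAX);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        best = _mm_max_ps(best, _mm_loadu_ps(vals + i));  // 4 compare+selects at once

    float lanes[4];
    _mm_storeu_ps(lanes, best);
    float m = std::max(std::max(lanes[0], lanes[1]),
                       std::max(lanes[2], lanes[3]));     // reduce the register
    for (; i < n; ++i)                                    // scalar tail
        m = std::max(m, vals[i]);
    return m;
}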
Assuming the list does not have its elements in any particular order, the algorithm you mentioned in your question is optimal. It must look at every element once, thus it takes time directly proportional to the size of the list, O(n).
There is no algorithm for finding the maximum that has a lower upper bound than O(n).
Proof: Suppose for a contradiction that there is an algorithm that finds the maximum of a list in less than O(n) time. Then there must be at least one element that it does not examine. If the algorithm selects this element as the maximum, an adversary may choose a value for the element such that it is smaller than one of the examined elements. If the algorithm selects any other element as the maximum, an adversary may choose a value for the element such that it is larger than the other elements. In either case, the algorithm will fail to find the maximum.
EDIT: This was my attempted answer, but please look at the comments where @BenVoigt proposes a better way to optimize the expression.
You need to traverse the whole list at least once,
so it would be a matter of finding a more efficient expression for if (d>max) max=d, if there is one.
Assuming we need the general case where the list is unsorted (if we kept it sorted we would just pick the last item, as @IgnacioVazquez points out in the comments), and after researching a little about branch prediction (Why is it faster to process a sorted array than an unsorted array?, see the 4th answer), it looks like
if (d>max) max=d;
can be more efficiently rewritten as
max=d>max?d:max;
The reason is that the first statement is normally translated into a branch (this is totally compiler and language dependent, but it happens at least in C and C++, and even in a VM-based language like Java), while the second one is translated into a conditional move.
Modern processors pay a big penalty on branches if the prediction goes wrong (the execution pipelines have to be reset), while a conditional move is an atomic operation that doesn't affect the pipelines.
The random nature of the elements in the list (one can be greater or lesser than the current maximum with equal probability) will cause many branch predictions to go wrong.
Please refer to the linked question for a nice discussion of all this, together with benchmarks.
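As a small, self-contained sketch of the branchless form (whether the compiler actually emits a conditional move depends on the compiler, target, and optimization level):

#include <cstddef>

double find_max(const double* vals, std::size_t n) {
    double max = vals[0];                     // assumes n >= 1
    for (std::size_t i = 1; i < n; ++i)
        max = vals[i] > max ? vals[i] : max;  // candidate for a conditional move
    return max;
}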
I am looking for a simple, easily implemented data structure that fulfils my needs in the least possible time (in the worst case):
(1) To pop the nth element (keeping the relative order of the elements intact)
(2) To access the nth element.
I couldn't use an array because it can't pop, and I don't want a gap after deleting the ith element. I tried to remove the gap by exchanging the nth element with the next, and that one with the next, and so on until the last, but that proves time inefficient, though the array's O(1) access is unbeatable.
I tried using a vector and used erase() to pop and .at() for access, but even this is not cheap in terms of time efficiency, though it is better than the array.
What you can try is a skip list; it supports the operations you are requesting in O(log(n)). Another option would be a tiered vector, which is slightly easier to implement and takes O(sqrt(n)). Both structures are quite cool, but alas not very popular.
Well, a tiered vector implemented on top of an array would, I think, best fit your purpose. The tiered vector concept may be new and a little tricky to understand at first, but once you get it, it opens up a lot of possibilities and you get a handy weapon to tackle the data-structure part of many questions very efficiently. So it is recommended that you master the tiered vector implementation.
An array will give you O(1) lookup but O(n) delete of the element.
A list will give you O(n) lookup but O(1) delete of the element.
A binary search tree will give you O(log n) lookup with O(1) delete of the element. But it doesn't preserve the relative order.
A binary search tree used in conjunction with the list will give you the best of both worlds. Insert a node into both the list (to preserve order) and the tree (fast lookup). Delete will be O(1).
struct node {
    node* list_next;
    node* list_prev;
    node* tree_right;
    node* tree_left;
    // node data;
};
Note that if the nodes are inserted into the tree using the index as the sort value, you will end up with another linked list pretending to be a tree. The tree can, however, be balanced in O(n) time once it is built, a cost you only have to incur once.
Update
Thinking about this more this might not be the best approach for you. I'm used to doing lookups on the data itself not its relative position in a set. This is a data centric approach. Using the index as the sort value will break as soon as you remove a node since the "higher" indices will need to change.
Warning: Don't take this answer seriously.
In theory, you can do both in O(1), assuming these are the only operations you want to optimize for. The following solution will need lots of space (and it will leak space), and it will take a long time to create the data structure:
Use an array. In every entry of the array, point to another array which is the same, but with that entry removed.
Here is the problem; it is from Sedgewick's excellent Algorithms in Java (q 3.54):
Given a link to a node in a singly linked list that contains no null links (i.e. each node either links to itself or another node in the list) determine the number of different nodes without modifying any of the nodes and using no more than constant memory space.
How do you do it? Scan through the list once using the tortoise and hare algorithm to work out whether it is circular in any way, then scan through again to work out where the list becomes circular, then scan through again counting the number of nodes up to this position? That sounds a bit brute-force to me; I guess there is a much more elegant solution.
The tortoise and hare algorithm can give you both the cycle length and the number of nodes before the cycle begins (λ and μ respectively).
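A sketch of how those two quantities combine into the answer (the struct and function names are illustrative; every node is assumed to have a non-null next pointer, as the problem states):

struct node { node* next; };

int count_nodes(node* head) {
    // Phase 1: tortoise and hare meet somewhere inside the cycle.
    node *slow = head, *fast = head;
    do {
        slow = slow->next;
        fast = fast->next->next;   // safe: no null links by assumption
    } while (slow != fast);

    // Phase 2: mu = number of nodes before the cycle begins.
    int mu = 0;
    slow = head;
    while (slow != fast) {
        slow = slow->next;
        fast = fast->next;
        ++mu;
    }

    // Phase 3: lambda = length of the cycle.
    int lambda = 1;
    for (fast = slow->next; fast != slow; fast = fast->next)
        ++lambda;

    return mu + lambda;            // total number of distinct nodes
}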
The most elegant solution is Floyd's cycle-finding algorithm: http://en.wikipedia.org/wiki/Cycle_detection#Tortoise_and_hare
It runs in O(N) time, and only a constant amount of memory is required.
Check out this: Puzzle: Loop in a Linked List
Pointer Marking: In practice, linked lists are implemented using C structs with at least a pointer; such a struct in C shall be 4-byte aligned, so the least significant two bits are zeros. While traversing the list, you may 'mark' a pointer as traversed by flipping the least significant bit. A second traversal is for clearing these bits.
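A sketch of that trick (note that it temporarily modifies the nodes, so it only fits if the "no modification" requirement can be relaxed; the helper names are illustrative and nodes must be at least 2-byte aligned):

#include <cstdint>

struct node { node* next; };

// The mark is the least significant bit of the next pointer.
static bool is_marked(node* p) {
    return reinterpret_cast<std::uintptr_t>(p->next) & 1u;
}
static node* real_next(node* p) {
    return reinterpret_cast<node*>(
        reinterpret_cast<std::uintptr_t>(p->next) & ~std::uintptr_t(1));
}
static void mark(node* p) {
    p->next = reinterpret_cast<node*>(
        reinterpret_cast<std::uintptr_t>(p->next) | 1u);
}

int count_by_marking(node* head) {
    int count = 0;
    for (node* cur = head; !is_marked(cur); cur = real_next(cur)) {
        mark(cur);                  // first traversal: mark and count
        ++count;
    }
    node* cur = head;               // second traversal: clear the marks
    for (int i = 0; i < count; ++i) {
        node* next = real_next(cur);
        cur->next = next;
        cur = next;
    }
    return count;
}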
Just remember where you have been, and if you come to the same node again, it is over.
Try storing the visited nodes in a binary tree and you have O(N*log(N)) time and O(N) space complexity.
EDIT
You can get O(log(N)) space complexity if you do not store every node, but only the nodes at exponentially spaced positions. That means you store the 1st, 2nd, 4th, 8th, 16th, ... node, and if you get a hit you have to continue from that point. The time complexity for this one is N*log(N)^2.
Say I have a binary tree with the following definition for a node.
struct node
{
    int key1;
    int key2;
};
The binary search tree is created on the basis of key1. Now, is it possible to rearrange the binary search tree on the basis of key2 in O(1) space? I can do this in extra space using an array of pointers to the nodes.
The actual problem where I require this is "counting the number of occurrences of unique words in a file and displaying the result in decreasing order of frequency."
Here, a BST node is
{
    char *word;
    int freq;
}
The BST is first created on the basis of the alphabetical order of the words, and finally I want it ordered on the basis of freq.
Am I wrong in my choice of data structure, i.e. a BST?
I think you can create a new tree sorted by freq and push all the elements into it while popping them from the old tree.
That could be O(1) extra space, though more likely O(log N), which isn't big anyway.
Also, I don't know what it is called in C#, but in Python you can use a list and sort it by two different keys in place.
A map or BST is good if you need sorted output for your dictionary, and if you need to mix add, remove and lookup operations.
I don't think that is your need here. You load the dictionary, sort it, then only do lookups in it, right?
In this case a sorted array is probably a better container. (See Item 23 of Effective STL by Scott Meyers.)
(Update: simply consider that a map can generate more memory cache misses than a sorted array, as an array keeps its data contiguous in memory, and as each node in a map contains two pointers to other nodes in the map. When your objects are simple and don't take much space in memory, a sorted vector is probably a better option. I warmly recommend you read that item from Meyers' book.)
For the kind of sort you are talking about, you will need this algorithm from the STL:
stable_sort.
The idea is to sort the dictionary by word, then sort it with stable_sort() on the frequency key.
It will give something like this (not actually tested, but you get the idea):
#include <algorithm>
#include <string>
#include <vector>

struct Node
{
    char * word;
    int key;
};

// alphabetical order by word
bool operator < (const Node& l, const Node& r)
{
    return std::string(l.word) < std::string(r.word);
}

// order by frequency
bool freq_comp(const Node& l, const Node& r)
{
    return l.key < r.key;
}

std::vector<Node> my_vector;
... // loading elements
std::sort(my_vector.begin(), my_vector.end());
std::stable_sort(my_vector.begin(), my_vector.end(), freq_comp);
Using a HashTable (Java) or Dictionary (.NET) or equivalent data structure in your language of choice (hash_set or hash_map in STL) will give you O(1) inserts during the counting phase, unlike the binary search tree which would be somewhere from O(log n) to O(n) on insert depending on whether it balances itself. If performance is really that important just make sure you try to initialize your HashTable to a large enough size that it won't need to resize itself dynamically, which can be expensive.
As for listing by frequency, I can't immediately think of a tricky way to do that without involving a sort, which would be O(n log n).
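A minimal sketch of that approach (the function and variable names are illustrative, not part of the answer above): count with a hash map, then do one O(n log n) sort to get decreasing frequency.

#include <algorithm>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

std::vector<std::pair<std::string, int> >
count_and_rank(const std::vector<std::string>& words)
{
    std::unordered_map<std::string, int> freq;
    for (std::size_t i = 0; i < words.size(); ++i)
        ++freq[words[i]];                              // counting phase, O(1) expected per word

    std::vector<std::pair<std::string, int> > ranked(freq.begin(), freq.end());
    std::stable_sort(ranked.begin(), ranked.end(),
                     [](const std::pair<std::string, int>& a,
                        const std::pair<std::string, int>& b) {
                         return a.second > b.second;   // higher frequency first
                     });
    return ranked;
}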
Here is my suggestion for re-balancing the tree based off of the new keys (well, I have 2 suggestions).
The first and more direct one is to somehow adapt heapsort's "bubble-up" function (to use Sedgewick's name for it). Here is a link to Wikipedia; there they call it "sift-up". It is not designed for an entirely unbalanced tree (which is what you'd need), but I believe it demonstrates the basic flow of an in-place reordering of a tree. It may be a bit hard to follow because the tree is in fact stored in an array rather than as a tree (though the logic in a sense treats it as a tree) --- perhaps, though, you'll find such an array-based representation is best! Who knows.
The more crazy-out-there suggestion of mine is to use a splay tree. I think they're nifty, and here's the wiki link. Basically, whichever element you access is "bubbled up" to the top, but it maintains the BST invariants. So you maintain the original Key1 for building the initial tree, but hopefully most of the "higher-frequency" values will also be near the top. This may not be enough (as all it will mean is that higher-frequency words will be "near" the top of the tree, not necessarily ordered in any fashion), but if you do happen to have or find or make a tree-balancing algorithm, it may run a lot faster on such a splay tree.
Hope this helps! And thank you for an interesting riddle, this sounds like a good Haskell project to me..... :)
You can easily do this in O(1) space, but not in O(1) time ;-)
Even though re-arranging a whole tree recursively until it is sorted again seems possible, it is probably not very fast; it may be O(n) at best, probably worse in practice. So you might get a better result by adding all nodes to an array once you are done with the tree and just sorting this array using quicksort on frequency (which will be O(n log n) on average). At least that's what I would do. Even though it takes extra space, it sounds more promising to me than re-arranging the tree in place.
One approach you could consider is to build two trees. One indexed by word, one indexed by freq.
As long as the tree nodes contain a pointer to the data node, you could access it via the word-based tree to update the info, but later access it via the freq-based tree for output.
Although, if speed is really that important, I'd be looking to get rid of the string as a key. String comparisons are notoriously slow.
If speed is not important, I think your best bet is to gather the data based on word and re-sort based on freq as yves has suggested.