Parallel sorting of a singly-linked list - performance

Is there any algorithm that makes parallel sorting of a linked list worth it?
It's well known that Merge Sort is the best algorithm to use for sorting a linked list.
Most merge sorts are explained in terms of arrays, with each half recursively sorted. This would make it trivial to parallelize: sort each half independently then merge the two halves.
But a linked list doesn't have a "half-way" point; a linked list goes until it ends:
Head → [a] → [b] → [c] → [d] → [e] → [f] → [g] → [h] → [i] → [j] → ...
The implementation i have now walks the list once to get a count, then recursively splits the counts until we're comparing a node with its NextNode. The recursion takes care of remembering where the two halves are.
This means the MergeSort of a linked list progresses linearly through the list. Since it seems to demand linear progression through a list, i would think it cannot be parallelized. The only way i could imagine it is by:
walk the list to get a count O(n)
walk half the list to get to the halfway point O(n/2)
then sort each half O(n log n)
But even if we did parallelize sorting (a,b) and (c,d) in separate threads, i would think that the false sharing during NextNode reordering would kill any virtue of parallelization.
Are there any parallel algorithms for sorting a linked list?
Array merge sort algorithm
Here is the standard algorithm for performing a merge sort on an array:
algorithm Merge-Sort
input:
an array, A (the values to be sorted)
an integer, p (the lower bound of the values to be sorted)
an integer, r (the upper bound of the values to be sorted)
define variables:
an integer, q (the midpoint of the values to be sorted)
if p < r then
    q ← ⌊(p+r)/2⌋
    Merge-Sort(A, p, q) //sort the lower half
    Merge-Sort(A, q+1, r) //sort the upper half
    Merge(A, p, q, r)
This algorithm is designed for arrays, with arbitrary index access. To make it suitable for linked lists, it has to be modified.
Linked-list merge sort algorithm
This is the (single-threaded) singly-linked-list merge sort algorithm i currently use to sort the singly linked list. It comes from the Gonnet & Baeza-Yates Handbook of Algorithms and Data Structures:
algorithm sort:
input:
a reference to a list, r (pointer to the first item in the linked list)
an integer, n (the number of items to be sorted)
output:
a reference to a list (pointer to the sorted list)
define variables:
a reference to a list, A (pointer to the sorted top half of the list)
a reference to a list, B (pointer to the sorted bottom half of the list)
a reference to a list, temp (temporary variable used to swap)
if r = nil then
return nil
if n > 1 then
A ← sort(r, ⌊n/2⌋ )
B ← sort(r, ⌊(n+1)/2⌋ )
return merge( A, B )
temp ← r
r ← r.next
temp.next ← nil
return temp
A Pascal implementation would be:
function MergeSort(var r: list; n: integer): list;
begin
if r = nil then
Result := nil
else if n > 1 then
Result := Merge(MergeSort(r, n div 2), MergeSort(r, (n+1) div 2) )
else
begin
Result := r;
r := r.next;
Result.next := nil;
end
end;
And if my transcoding works, here's an on-the-fly C# translation:
list MergeSort(ref list r, Int32 n)
{
if (r == null)
return null;
if (n > 1)
{
list A = MergeSort(ref r, n / 2);
list B = MergeSort(ref r, (n+1) / 2);
return Merge(A, B);
}
else
{
list temp = r;
r = r.next;
temp.next = null;
return temp;
}
}
What i need now is a parallel algorithm to sort a linked list. It doesn't have to be merge sort.
Some have suggested copying the next n items, where n items fit into a single cache-line, and spawning a task with those.
Sample data
algorithm GenerateSampleData
input:
an integer, n (the number of items to generate in the linked list)
output:
a reference to a node (the head of the linked list of random data to be sorted)
define variables:
a reference to a node, head (the returned head)
a reference to a node, item (an item in the linked list)
an integer, i (a counter)
head ← new node
item ← head
for i ← 1 to n do
    item.value ← Random()
    if i < n then
        item.next ← new node
        item ← item.next
    else
        item.next ← nil
return head
So we could generate a list of 300,000 random items by calling:
head := GenerateSampleData(300000);
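In the same on-the-fly spirit, here's a hedged C# sketch of the generator; the node class is an assumption of mine, mirroring the pseudocode's value/next fields:
class node
{
    public int value;
    public node next;
}

static readonly Random rng = new Random(); //System.Random

static node GenerateSampleData(Int32 n)
{
    node head = new node();
    node item = head;
    for (Int32 i = 1; i <= n; i++)
    {
        item.value = rng.Next();
        if (i < n) //stop before creating a trailing, valueless node
        {
            item.next = new node();
            item = item.next;
        }
    }
    return head;
}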
Benchmarks
Time to generate 300,000 items                   568 ms
MergeSort (count splitting variation)          3,888 ms  (baseline)
MergeSort (slow-fast midpoint finding)         3,920 ms  (0.8% slower)
QuickSort (copy linked list to array: 4 ms,
           quicksort the array: 5,609 ms,
           relink the list: 5 ms)              5,625 ms  (44% slower)
Bonus Reading
Stackoverflow: What's the fastest algorithm for sorting a linked list?
Stackoverflow: Merge Sort a Linked List
Mergesort For Linked Lists
Parallel Merge Sort O(log n) pdf, 1986
Stackoverflow: Parallel Merge Sort (Closed, in typical SO nerd-rage fashion)
Parallel Merge Sort Dr. Dobbs, 3/24/2012
Eliminate False Sharing Dr. Dobbs, 3/14/2009
Mergesort For Linked Lists, by Simon Tatham (of Putty fame)

Mergesort is perfect for parallel sorting. Split the list in two halves and sort each of them in parallel, then merge the result. If you need more than two parallel sorting processes, do so recursively. If you don't happen to have infinitely many CPUs, you can omit parallelization at a certain recursion depth (which you will have to determine by testing).
BTW, the usual approach to splitting a list in two halves of roughly the same size is the slow/fast pointer technique from Floyd's cycle-finding algorithm, also known as the hare and tortoise approach:
Node MergeSort(Node head)
{
if ((head == null) || (head.Next == null))
return head; //Oops, don't return null; what if only head.Next was null
Node firstHalf = head;
Node middle = GetMiddle(head);
Node secondHalf = middle.Next;
middle.Next = null; //cut the two halves
//Sort the lower and upper halves
firstHalf = MergeSort(firstHalf);
secondHalf = MergeSort(secondHalf);
//Merge the sorted halves
return Merge(firstHalf, secondHalf);
}
Node GetMiddle(Node head)
{
if (head == null || head.Next == null)
return null;
Node slow = head;
Node fast = head;
while ((fast.Next != null) && (fast.Next.Next != null))
{
slow = slow.Next;
fast = fast.Next.Next;
}
return slow;
}
After that, firstHalf and secondHalf are two lists of roughly the same size. Concatenating them would yield the original list. Of course, the fast.Next.Next step needs the null checks shown in GetMiddle; this is just to demonstrate the general principle.
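To make the parallel half of this answer concrete, here is a minimal sketch using the .NET Task Parallel Library (my assumption, not part of the original answer; it reuses the MergeSort, GetMiddle and Merge routines above, and maxDepth is the recursion depth cut-off you would tune by testing):
Node ParallelMergeSort(Node head, int maxDepth)
{
    if ((head == null) || (head.Next == null))
        return head;
    Node middle = GetMiddle(head);
    Node secondHalf = middle.Next;
    middle.Next = null; //cut the two halves
    if (maxDepth > 0)
    {
        //Sort one half on another thread and the other half on this one
        var task = System.Threading.Tasks.Task.Run(() => ParallelMergeSort(head, maxDepth - 1));
        Node sortedSecond = ParallelMergeSort(secondHalf, maxDepth - 1);
        return Merge(task.Result, sortedSecond);
    }
    //past the cut-off depth: plain sequential merge sort
    return Merge(MergeSort(head), MergeSort(secondHalf));
}
Calling ParallelMergeSort(head, 3) allows up to 8 concurrent sorts, which would suit an 8-core machine.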

Merge-Sort is a Divide and Conquer Algorithm.
Arrays are good at dividing in the Middle.
Linked lists are inefficient to divide at the middle, so instead, let's divide them while we walk through the list.
Take the first element and put it in list 1.
Take the second element and put it in list 2.
Take the third element and put it in list 1.
...
You have now divided the list in half efficiently, and with a bottom-up merge sort you can start the merging steps while you are still walking over the first list dividing it into odds and evens.
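Here is a minimal sketch of that alternating split, assuming the same Node type as the merge-sort code above (my assumption; note the two sublists come out reversed, which is harmless for sorting):
void AlternatingSplit(Node head, out Node list1, out Node list2)
{
    list1 = null;
    list2 = null;
    bool toFirst = true;
    while (head != null)
    {
        Node next = head.Next;
        if (toFirst)
        {
            head.Next = list1; //prepend to list 1
            list1 = head;
        }
        else
        {
            head.Next = list2; //prepend to list 2
            list2 = head;
        }
        toFirst = !toFirst;
        head = next;
    }
}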

You can do merge sort in two ways. First you divide the list in two halves, apply merge sort recursively on both parts, and merge the results. But there is another approach: split the list into pairs and then merge pairs of pairs recursively until you get a single list, which is the result. See for example the implementation of Data.List.sort in GHC Haskell. This algorithm can be made parallel by spawning processes (or threads) for some appropriate number of pairs at the start, and then also for merging their results, until there is one list left.
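A rough C# sketch of that bottom-up shape (assuming the Node type and Merge routine from the answers above; each round of pair-merges is what could be handed out to parallel workers):
Node BottomUpMergeSort(Node head)
{
    //split the list into single-node runs
    var runs = new System.Collections.Generic.Queue<Node>();
    while (head != null)
    {
        Node next = head.Next;
        head.Next = null;
        runs.Enqueue(head);
        head = next;
    }
    if (runs.Count == 0)
        return null;
    //repeatedly merge pairs of runs until one list remains
    while (runs.Count > 1)
        runs.Enqueue(Merge(runs.Dequeue(), runs.Dequeue()));
    return runs.Dequeue();
}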

For an inefficient solution, use the Quicksort algorithm: the first element in the list is used as a pivot, to partition the unsorted list into three (this uses O(n) of time). Then you recursively sort the lower and higher sublists in separate threads. The result is obtained by concatenating the lower sublist with the sublist of keys equal to the pivot and then the upper sublist in O(1) additional steps (instead of the slow merging).
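A hedged sketch of that partition step (the Node type with an int Value is my assumption; the two recursive calls are the independent pieces you would hand to separate threads, and Concat below walks to the tail, so you would keep tail pointers to get the O(1) concatenation mentioned above):
Node QuickSort(Node head)
{
    if (head == null || head.Next == null)
        return head;
    int pivot = head.Value;
    Node less = null, equal = null, greater = null;
    while (head != null)
    {
        Node next = head.Next;
        if (head.Value < pivot)       { head.Next = less;    less = head; }
        else if (head.Value == pivot) { head.Next = equal;   equal = head; }
        else                          { head.Next = greater; greater = head; }
        head = next;
    }
    //these two calls are independent and could run on separate threads
    less = QuickSort(less);
    greater = QuickSort(greater);
    return Concat(less, Concat(equal, greater));
}

Node Concat(Node a, Node b)
{
    if (a == null) return b;
    Node tail = a;
    while (tail.Next != null) //O(n) walk; a tail pointer would make this O(1)
        tail = tail.Next;
    tail.Next = b;
    return a;
}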

i wanted to include a version that actually handles the parallel work (using the native Windows thread pool).
You don't want to push work onto threads all the way down the dividing recursion tree; you only want to schedule as much work as there are CPUs. This means you have to know how many (logical) CPUs there are. For example, if you had 8 cores:
initial call: 1 thread
first recursion: becomes 2 threads
second recursion: becomes 4 threads
third recursion: becomes 8 threads
fourth recursion: perform the work without splitting into more threads
Handle this by querying for the number of processors in the system:
Int32 GetNumberOfProcessors()
{
SYSTEM_INFO systemInfo;
GetSystemInfo(ref systemInfo);
return systemInfo.dwNumberOfProcessors;
}
And then we change the recursive MergeSort function to support a numberOfProcessors argument:
public Node MergeSort(Node head)
{
return MergeSort(head, GetNumberOfProcessors());
}
Each time we recurse, we divide numberOfProcessors by 2. When the recursive function no longer sees at least two processors available, it stops putting work onto the thread pool and sorts on the current thread.
Node MergeSort(Node head, Int32 numberOfProcessors)
{
if ((head == null) || (head.Next == null))
return head;
Node firstHalf = head;
Node middle = GetMiddle(head);
Node secondHalf = middle.Next;
middle.Next = null;
//Sort the lower and upper halves
if (numberOfProcessors >= 2)
{
//Throw the work into the thread pool, since we have CPUs left
MergeSortOnThreadPool(ref firstHalf, ref secondHalf, numberOfProcessors / 2);
//i only split this into a separate function to keep
//the code short and easily readable
}
else
{
firstHalf = MergeSort(firstHalf, numberOfProcessors);
secondHalf = MergeSort(secondHalf, numberOfProcessors);
}
//Merge the sorted halves
return Merge(firstHalf, secondHalf);
}
This parallel work could be done using your favorite language-available mechanism. Since the language i actually use (which isn't the C# this code looks to be written in) doesn't support async-await, the Task Parallel Library, or any other language-integrated parallel system, we do it the old-fashioned way: an Event with Interlocked operations. It's a technique i read in an AMD whitepaper once, complete with their tricks to eliminate the subtle race conditions:
void MergeSortOnThreadPool(ref Node listA, ref Node listB, Int32 numberOfProcessors)
{
    Int32 nActiveThreads = 1; //Yes 1, to stop a race condition
    using (Event doneEvent = new Event())
    {
        //Put everything the first thread will need into a holder object
        Context contextA = new Context();
        contextA.DoneEvent = doneEvent;
        contextA.ActiveThreads = AddressOf(nActiveThreads);
        contextA.List = listA;
        contextA.NumberOfProcessors = numberOfProcessors;
        InterlockedIncrement(nActiveThreads);
        QueueUserWorkItem(MergeSortThreadProc, contextA);
        //Put everything the second thread will need into another holder object
        Context contextB = new Context();
        contextB.DoneEvent = doneEvent;
        contextB.ActiveThreads = AddressOf(nActiveThreads);
        contextB.List = listB;
        contextB.NumberOfProcessors = numberOfProcessors;
        InterlockedIncrement(nActiveThreads);
        QueueUserWorkItem(MergeSortThreadProc, contextB);
        //wait for the threads to finish
        Int32 altDone = InterlockedDecrement(nActiveThreads);
        if (altDone > 0)
            doneEvent.WaitFor(INFINITE);
        listA = contextA.Result; //returned to the caller as ref parameters
        listB = contextB.Result;
    }
}
The thread pool thread procedure has to do some housekeeping as well: watching to see if it's the last thread, and setting the event on its way out:
void MergeSortThreadProc(Pointer data)
{
    Context context = (Context)data;
    Node sorted = MergeSort(context.List, context.NumberOfProcessors);
    context.Result = sorted;
    //Now do lifetime management
    Int32 altDone = InterlockedDecrement(context.ActiveThreads^);
    if (altDone <= 0)
        context.DoneEvent.SetEvent();
}
Note: Any code is released into the public domain. No attribution required.

Related

Merge several pre-sorted lists into a joint list in the most efficient way (and take the top elements of it)

I have the following setup:
many pre-sorted ordered lists, e.g., 1000 elements each, sorted by a criterion (for instance, "last" based on a time)
I need to make a joint list that maintains the sort order and contains the 1000 top elements (so I can discard elements of the original lists that do not fit into the top 1000).
However, selecting the 1000 top can be done separately as well.
Merging needs to be as fast and efficient as possible; re-sorting the full merged list is not an option.
Use any priority-queue-based data structure:
priority queue q = empty
for each list, add its first element to q
create an array next that contains the next element for every list (initially the next element is the second element)
while the result list is not full:
    take the top element from q and add it to the result list
    add the next element of the corresponding list to q (if any)
    update the next element of the corresponding list
This problem is known as Merge k sorted arrays.
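A hedged C# sketch of that loop, assuming .NET 6's PriorityQueue and ascending input lists (limit plays the part of the question's 1000):
static List<int> MergeTop(List<List<int>> lists, int limit)
{
    //each queue entry remembers its source list and that list's next index
    var q = new PriorityQueue<(int list, int next), int>();
    for (int i = 0; i < lists.Count; i++)
        if (lists[i].Count > 0)
            q.Enqueue((i, 1), lists[i][0]);
    var result = new List<int>();
    while (result.Count < limit && q.TryDequeue(out var entry, out int value))
    {
        result.Add(value);
        if (entry.next < lists[entry.list].Count)
            q.Enqueue((entry.list, entry.next + 1), lists[entry.list][entry.next]);
    }
    return result;
}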
In Java, it is quite simple to solve. Just add the elements into a SortedSet (adding to a sorted set is fast) and stop when you reach the top 1000. (Note that a TreeSet discards duplicates.)
SortedSet<Integer> s = new TreeSet<>();
//s1,s2,s3 are the input lists here
int n = Math.max(Math.max(s2.size(), s1.size()), s3.size());
for (int i = 0; i < n && s.size() < limit; i++) {
if (i < s1.size()) {
s.add(s1.get(i));
}
if (i < s2.size()) {
s.add(s2.get(i));
}
if (i < s3.size()) {
s.add(s3.get(i));
}
}

Return the kth element from the tail (or end) of a singly linked list

[Interview Question]
Write a function that would return the 5th element from the tail (or end) of a singly linked list of integers, in one pass, and then provide a set of test cases against that function.
It is similar to the question: How to find nth element from the end of a singly linked list?, but I have an additional requirement that we should traverse the linked list only once.
This is my solution:
struct Lnode
{
int val;
Lnode* next;
Lnode(int val, Lnode* next=NULL) : val(val), next(next) {}
};
Lnode* kthFromTail(Lnode* start, int k)
{
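//note: the static counter below persists across calls, so this function only works once per program run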
static int count =0;
if(!start)
return NULL;
Lnode* end = kthFromTail(start->next, k);
if(count==k) end = start;
count++;
return end;
}
I'm traversing the linked list only once and using the implicit recursion stack. Another way can be to have two pointers: fast and slow, with the fast one being k pointers ahead of the slow one. Which one seems to be better? I think the solution with two pointers will be complicated, with many cases, e.g. odd-length list, even-length list, k > length of list, etc. This one employing recursion is clean and covers all such cases.
The 2-pointer solution doesn't fit your requirements as it traverses the list twice.
Yours uses a lot more memory - O(n) to be exact. You're creating a recursion stack equal to the number of items in the list, which is far from ideal.
To find the kth from last item...
A better (single-traversal) solution - Circular buffer:
Uses O(k) extra memory.
Have an array of length k.
For each element, insert at the next position into the array (with wrap-around).
At the end, just return the item at the next position in the array.
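A hedged C# sketch of the circular buffer (the node class is an assumption, mirroring the Lnode in the question; here k = 1 means the last node, whereas the question's recursive version counts from 0):
class Lnode
{
    public int val;
    public Lnode next;
}

Lnode KthFromTail(Lnode start, int k)
{
    var buffer = new Lnode[k]; //holds the last k nodes seen
    int count = 0;
    for (Lnode n = start; n != null; n = n.next)
        buffer[count++ % k] = n; //insert with wrap-around
    if (count < k)
        return null; //list shorter than k
    //count % k is the slot after the newest entry, i.e. the kth-from-last node
    return buffer[count % k];
}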
2-pointer solution:
Traverses the list twice, but uses only O(1) extra memory.
Start p1 and p2 at the beginning.
Increment p1 k times.
while p1 is not at the end
increment p1 and p2
p2 points to the kth from last element.
'n' is a user-provided value, e.g., 5th from last.
int gap = 0;
myNode *tempNode = currNode; //currNode starts at the head of the list
while (currNode != NULL)
{
    currNode = currNode->next;
    gap = gap + 1;
    if (gap > n) //keep tempNode trailing n nodes behind currNode
        tempNode = tempNode->next;
}
return tempNode; //the nth node from the end

Generalizing the find-min/find-max stack to arbitrary order statistics?

In this earlier question, the OP asked for a data structure similar to a stack supporting the following operations in O(1) time each:
Push, which adds a new element atop the stack,
Pop, which removes the top element from the stack,
Find-Max, which returns (but does not remove) the largest element of the stack, and
Find-Min, which returns (but does not remove) the smallest element of the stack.
A few minutes ago I found this related question asking for a clarification on a similar data structure that instead of allowing for the max and min to be queried, allows for the median element of the stack to be queried. These two data structures seem to be a special case of a more general data structure supporting the following operations:
Push, which pushes an element atop the stack,
Pop, which pops the top of the stack, and
Find-Kth, which for a fixed k determined when the structure is created, returns the kth largest element of the stack.
It is possible to support all of these operations by storing a stack and a balanced binary search tree holding the top k elements, which would enable all these operations to run in O(log k) time. My question is this: is it possible to implement the above data structure faster than this? That is, could we get O(1) for all three operations? Or perhaps O(1) for push and pop and O(log k) for the order statistic lookup?
Since the structure can be used to sort k elements with O(k) push and find-kth operations, every comparison-based implementation must have at least one of these operations cost Ω(log k), even in an amortized sense, even with randomization.
Push can be O(log k) and pop/find-kth can be O(1) (use persistent data structures; push should precompute the order statistic). My gut feeling based on working with lower bounds for comparison-based algorithms is that O(1) push/pop and O(log k) find-kth is doable but requires amortization.
I think what tophat was saying is, implement a purely functional data structure that supports only O(log k) insert and O(1) find-kth (cached by insert), and then make a stack of these structures. Push inserts into the top version and pushes the update, pop pops the top version, and find-kth operates on the top version. This is O(log k)/O(1)/O(1) but super-linear space.
EDIT: I was working on O(1) push/O(1) pop/O(log k) find-kth, and I think it can't be done. The sorting algorithm that tophat referred to can be adapted to get √k evenly spaced order statistics of a length-k array in time O(k + (√k) log k). Problem is, the algorithm must know how each order statistic compares with all other elements (otherwise it might be wrong), which means that it has bucketed everything into one of √k + 1 buckets, which takes Ω(k log (√k + 1)) = Ω(k log k) comparisons on information theoretic grounds. Oops.
Replacing √k by k^eps for any eps > 0, with O(1) push/O(1) pop, I don't think find-kth can be O(k^(1-eps)), even with randomization and amortization.
Whether this is actually faster than your O(log k) implementation depends on which operations are used most frequently. I propose an implementation with O(1) Find-Kth and Pop, and O(n) Push, where n is the stack size. And I also want to share this with SO because it is just a hilarious data structure at first sight, but might even be reasonable.
It's best described as a doubly doubly-linked stack, or perhaps more easily described as a hybrid of a linked stack and a doubly-linked sorted list. Basically each node maintains 4 references to other nodes: the next and previous in stack order, and the next and previous in sorted order by element size. These two linked lists can be implemented using the same nodes, but they work completely separately, i.e. the sorted linked list doesn't have to know about the stack order and vice versa.
Like a normal linked stack, the collection itself will need to maintain a reference to the top node (and to the bottom?). To accommodate the O(1) nature of the Find-Kth method, the collection will also keep a reference to the kth largest element.
The pop method works as follows:
The popped node gets removed from the sorted doubly linked list, just like a removal from a normal sorted linked list. It takes O(1) as the collection has a reference to the top. Depending on whether the popped element was larger or smaller than the kth element, the reference to the kth largest element is set to either the previous or the next. So the method still has O(1) complexity.
The push method works just like a normal addition to a sorted linked list, which is an O(n) operation. It starts with the smallest element and inserts the new node when a larger element is encountered. To maintain the correct reference to the kth largest element, again either the previous or next element to the current kth largest element is selected, depending on whether the pushed node was larger or smaller than the kth largest element.
Of course next to this, the reference to the 'top' of the stack has to be set in both methods. Also there's the problem of k > n, for which you haven't specified what the data structure should do. I hope it is clear how it works, otherwise I could add an example.
But ok, not entirely the complexity you had hoped for, but I find this an interesting 'solution'.
Edit: An implementation of the described structure
A bounty was issued on this question, which indicates my original answer wasn’t good enough:P Perhaps the OP would like to see an implementation?
I have implemented both the median problem and the fixed-k problem, in C#. The implementation of the tracker of the median is just a wrapper around the tracker of the kth element, where k can mutate.
To recap the complexities:
Push takes O(n)
Pop takes O(1)
FindKth takes O(1)
Change k takes O(delta k)
I have already described the algorithm in reasonable detail in my original post. The implementation is then fairly straightforward (but not so trivial to get right, as there are a lot of inequality signs and if statements to consider). I have commented only to indicate what is done, not the details of how, as it would otherwise become too large. The code is already quite lengthy for an SO post.
I do want to provide the contracts of all non-trivial public members:
K is the index of the element in the sorted linked list to keep a reference to. It is mutable, and when set, the structure is immediately corrected for that.
KthValue is the value at that index, unless the structure doesn’t have k elements yet, in which case it returns a default value.
HasKthValue exists to easily distinguish these default values from elements which happened to be the default value of its type.
Constructors: a null enumerable is interpreted as an empty enumerable, and a null comparer is interpreted as the default. This comparer defines the order used when determining the kth value.
So this is the code:
public sealed class KthTrackingStack<T>
{
private readonly Stack<Node> stack;
private readonly IComparer<T> comparer;
private int k;
private Node smallestNode;
private Node kthNode;
public int K
{
get { return this.k; }
set
{
if (value < 0) throw new ArgumentOutOfRangeException();
for (; k < value; k++)
{
if (kthNode.NextInOrder == null)
return;
kthNode = kthNode.NextInOrder;
}
for (; k > value; k--)
{
if (kthNode.PreviousInOrder == null)
return;
kthNode = kthNode.PreviousInOrder;
}
}
}
public T KthValue
{
get { return HasKthValue ? kthNode.Value : default(T); }
}
public bool HasKthValue
{
get { return k < Count; }
}
public int Count
{
get { return this.stack.Count; }
}
public KthTrackingStack(int k, IEnumerable<T> initialElements = null, IComparer<T> comparer = null)
{
if (k < 0) throw new ArgumentOutOfRangeException("k");
this.k = k;
this.comparer = comparer ?? Comparer<T>.Default;
this.stack = new Stack<Node>();
if (initialElements != null)
foreach (T initialElement in initialElements)
this.Push(initialElement);
}
public void Push(T value)
{
//just like in a normal sorted linked list, find the node before the insertion point
Node nodeBeforeNewNode;
if (smallestNode == null || comparer.Compare(value, smallestNode.Value) < 0)
nodeBeforeNewNode = null;
else
{
nodeBeforeNewNode = smallestNode;//untested optimization: nodeBeforeNewNode = comparer.Compare(value, kthNode.Value) < 0 ? smallestNode : kthNode;
while (nodeBeforeNewNode.NextInOrder != null && comparer.Compare(value, nodeBeforeNewNode.NextInOrder.Value) > 0)
nodeBeforeNewNode = nodeBeforeNewNode.NextInOrder;
}
//the following code includes the new node in the ordered linked list
Node newNode = new Node
{
Value = value,
PreviousInOrder = nodeBeforeNewNode,
NextInOrder = nodeBeforeNewNode == null ? smallestNode : nodeBeforeNewNode.NextInOrder
};
if (newNode.NextInOrder != null)
newNode.NextInOrder.PreviousInOrder = newNode;
if (newNode.PreviousInOrder != null)
newNode.PreviousInOrder.NextInOrder = newNode;
else
smallestNode = newNode;
//the following code deals with changes to the kth node due to adding the new node
if (kthNode != null && comparer.Compare(value, kthNode.Value) < 0)
{
if (HasKthValue)
kthNode = kthNode.PreviousInOrder;
}
else if (!HasKthValue)
{
kthNode = newNode;
}
stack.Push(newNode);
}
public T Pop()
{
Node result = stack.Pop();
//the following code deals with changes to the kth node
if (HasKthValue)
{
if (comparer.Compare(result.Value, kthNode.Value) <= 0)
kthNode = kthNode.NextInOrder;
}
else if(kthNode.PreviousInOrder != null || Count == 0)
{
kthNode = kthNode.PreviousInOrder;
}
//the following code maintains the order in the linked list
if (result.NextInOrder != null)
result.NextInOrder.PreviousInOrder = result.PreviousInOrder;
if (result.PreviousInOrder != null)
result.PreviousInOrder.NextInOrder = result.NextInOrder;
else
smallestNode = result.NextInOrder;
return result.Value;
}
public T Peek()
{
return this.stack.Peek().Value;
}
private sealed class Node
{
public T Value { get; set; }
public Node NextInOrder { get; internal set; }
public Node PreviousInOrder { get; internal set; }
}
}
public class MedianTrackingStack<T>
{
private readonly KthTrackingStack<T> stack;
public void Push(T value)
{
stack.Push(value);
stack.K = stack.Count / 2;
}
public T Pop()
{
T result = stack.Pop();
stack.K = stack.Count / 2;
return result;
}
public T Median
{
get { return stack.KthValue; }
}
public MedianTrackingStack(IEnumerable<T> initialElements = null, IComparer<T> comparer = null)
{
stack = new KthTrackingStack<T>(initialElements == null ? 0 : initialElements.Count()/2, initialElements, comparer);
}
}
Of course you're always free to ask any question about this code, as I realize some things may not be obvious from the description and sporadic comments
The only actual working implementation I can wrap my head around is Push/Pop O(log k) and Kth O(1).
Stack (single linked)
Min Heap (size k)
Stack2 (doubly linked)
The value nodes will be shared between the Stack, Heap and Stack2
PUSH:
Push to the stack
If value >= heap root
If heap size < k
Insert value in heap
Else
Remove heap root
Push removed heap root to stack2
Insert value in heap
POP:
Pop from the stack
If popped node has stack2 references
Remove from stack2 (doubly linked list remove)
If popped node has heap references
Remove from the heap (swap with last element, perform heap-up-down)
Pop from stack2
If element popped from stack2 is not null
Insert element popped from stack2 into heap
KTH:
If heap is size k
Return heap root value
You could use a skip list. (I first thought of a linked list, but insertion is O(n), and amit corrected me with skip list. I think this data structure could be pretty interesting in your case.)
With this data structure, inserting/deleting would take O(ln(k))
and finding the maximum O(1)
I would use:
a stack, containing your elements
a stack containing the history of skip lists (each containing the k smallest elements)
(I realised it was the kth largest element, but it's pretty much the same problem)
when pushing (O(ln(k))):
if the element is less than the kth element, delete the kth element (O(ln(k))), put it in the LIFO pile (O(1)), then insert the element into the skip list, O(ln(k))
otherwise it's not in the skip list; just put it on the pile (O(1))
When pushing you add a new skip list to the history; since this is similar to copy-on-write, it wouldn't take more than O(ln(k)).
when popping (O(1)):
you just pop from both stacks
getting kth element O(1):
always take the maximum element in the list (O(1))
All the ln(k) are amortised cost.
Example:
I will take the same example as yours (on Stack with find-min/find-max more efficient than O(n)) :
Suppose that we have a stack and add the values 2, 7, 1, 8, 3, and 9, in that order, and k = 3.
I will represent it this way :
[number in the stack] [ skip list linked with that number]
first I push 2, 7 and 1 (it doesn't make sense to look for the kth element in a list of fewer than k elements)
1 [7,2,1]
7 [7,2,null]
2 [2,null,null]
If I want the kth element I just need to take the max in the linked list: 7
now I push 8,3, 9
on the top of the stack I have :
8 [7,2,1] since 8 > kth element therefore skip list doesn't change
then :
3 [3,2,1] since 3 < kth element, the kth element has changed. I first delete 7 who was the previous kth element (O(ln(k))) then insert 3 O(ln(k)) => total O(ln(k))
then :
9 [3,2,1] since 9 > kth element
Here is the stack I get :
9 [3,2,1]
3 [3,2,1]
8 [7,2,1]
1 [7,2,1]
7 [7,2,null]
2 [2,null,null]
find kth element:
I get 3 in O(1)
now I can pop 9 and 3 (takes O(1)):
8 [7,2,1]
1 [7,2,1]
7 [7,2,null]
2 [2,null,null]
find kth element:
I get 7 in O(1)
and push 0 (takes O(ln(k)) - insertion)
0 [2,1,0]
8 [7,2,1]
1 [7,2,1]
7 [7,2,null]
2 [2,null,null]
#tophat is right - since this structure could be used to implement a sort, it can't have less complexity than an equivalent sort algorithm. So how do you sort with less than O(lg N) cost per operation? Use Radix Sort.
Here is an implementation which makes use of a Binary Trie. Inserting items into a binary trie is essentially the same operation as performing a radix sort. The cost for inserting and deleting is O(m), where m is a constant: the number of bits in the key. Finding the next largest or smallest key is also O(m), accomplished by taking the next step in an in-order depth-first traversal.
So the general idea is to use the values pushed onto the stack as keys in the trie. The data to store is the occurrence count of that item in the stack. For each pushed item: if it exists in the trie, increment its count, else store it with a count of 1. When you pop an item, find it, decrement the count, and remove it if the count is now 0. Both those operations are O(m).
To get O(1) FindKth, keep track of 2 values: The value of the Kth item, and how many instances of that value are in the first K item. (for example, for K=4 and a stack of [1,2,3,2,0,2], the Kth value is 2 and the "iCount" is 2.) Then when you push values < the KthValue, you simply decrement the instance count, and if it is 0, do a FindPrev on the trie to get the next smaller value.
When you pop values greater than the KthValue, increment the instance count if more instances of that value exist, else do a FindNext to get the next larger value.
(The rules are different if there are less than K items. In that case, you can simply track the max inserted value. When there are K items, the max will be the Kth.)
Here is a C implementation. It relies on a BinaryTrie (built using the example at PineWiki as a base) with this interface:
BTrie* BTrieInsert(BTrie* t, Item key, int data);
BTrie* BTrieFind(BTrie* t, Item key);
BTrie* BTrieDelete(BTrie* t, Item key);
BTrie* BTrieNextKey(BTrie* t, Item key);
BTrie* BTriePrevKey(BTrie* t, Item key);
Here is the Push function.
void KSStackPush(KStack* ks, Item val)
{
BTrie* node;
//resize if needed
if (ks->ct == ks->sz) ks->stack = realloc(ks->stack,sizeof(Item)*(ks->sz*=2));
//push val
ks->stack[ks->ct++]=val;
//record count of value instances in trie
node = BTrieFind(ks->trie, val);
if (node) node->data++;
else ks->trie = BTrieInsert(ks->trie, val, 1);
//adjust kth if needed
ksCheckDecreaseKth(ks,val);
}
Here is the helper to track the KthValue
//check if inserted val is in set of K
void ksCheckDecreaseKth(KStack* ks, Item val)
{
//if less than K items, track the max.
if (ks->ct <= ks->K) {
if (ks->ct==1) { ks->kthValue = val; ks->iCount = 1;} //1st item
else if (val == ks->kthValue) { ks->iCount++; }
else if (val > ks->kthValue) { ks->kthValue = val; ks->iCount = 1;}
}
//else if value is one of the K, decrement instance count
else if (val < ks->kthValue && (--ks->iCount<=0)) {
//if that was only instance in set,
//find the previous value, include all its instances
BTrie* node = BTriePrev(ks->trie, ks->kthValue);
ks->kthValue = node->key;
ks->iCount = node->data;
}
}
Here is the Pop function
Item KSStackPop(KStack* ks)
{
//pop val
Item val = ks->stack[--ks->ct];
//find in trie
BTrie* node = BTrieFind(ks->trie, val);
//decrement count, remove if no more instances
if (--node->data == 0)
ks->trie = BTrieDelete(ks->trie, val);
//adjust kth if needed
ksCheckIncreaseKth(ks,val);
return val;
}
And the helper to increase the KthValue
//check if removing val causes Kth to increase
void ksCheckIncreaseKth(KStack* ks, Item val)
{
//if less than K items, track max
if (ks->ct < ks->K)
{ //if removing the max,
if (val==ks->kthValue) {
//find the previous node, and set the instance count.
BTrie* node = BTriePrev(ks->trie, ks->kthValue);
ks->kthValue = node->key;
ks->iCount = node->data;
}
}
//if removed val was among the set of K,add a new item
else if (val <= ks->kthValue)
{
BTrie* node = BTrieFind(ks->trie, ks->kthValue);
//if more instances of kthValue exist, add 1 to set.
if (node && ks->iCount < node->data) ks->iCount++;
//else include 1 instance of next value
else {
BTrie* node = BTrieNext(ks->trie, ks->kthValue);
ks->kthValue = node->key;
ks->iCount = 1;
}
}
}
So this algorithm is O(1) for all 3 operations. It can also support the Median operation: start with KthValue = the first value, and whenever the stack size changes by 2, do an IncreaseKth or DecreaseKth operation. The downside is that the constant is large. It is only a win when m < lg K. However, for small keys and large K, this may be a good choice.
What if you paired the stack with a pair of Fibonacci Heaps? That could give amortized O(1) Push and FindKth, and O(lgN) delete.
The stack stores [value, heapPointer] pairs. The heaps store stack pointers.
Create one MaxHeap, one MinHeap.
On Push:
if MaxHeap has less than K items, insert the stack top into the MaxHeap;
else if the new value is less than the top of the MaxHeap, first insert the result of DeleteMax in the MinHeap, then insert the new item into MaxHeap;
else insert it into the MinHeap. O(1) (or O(lgK) if DeleteMax is needed)
On FindKth, return the top of the MaxHeap. O(1)
On Pop, also do a Delete(node) from the popped item's heap.
If it was in the MinHeap, you are done. O(lgN)
If it was in the MaxHeap, also perform a DeleteMin from the MinHeap and Insert the result in the MaxHeap. O(lgK)+O(lgN)+O(1)
Update:
I realized I wrote it up as K'th smallest, not K'th largest.
I also forgot a step when a new value is less than the current K'th smallest, and that step pushes the worst-case insert back to O(lg K). This may still be ok for uniformly distributed input and small K, as it will only hit that case on K/N insertions.
*moved New Idea to different answer - it got too large.
Use a Trie to store your values. Tries already have an O(1) insert complexity. You only need to worry about two things, popping and searching, but if you tweak your program a little, it would be easy.
When inserting (pushing), have a counter for each path that stores the number of elements inserted there. This will allow each node to keep track of how many elements have been inserted using that path, i.e. the number represents the number of elements that are stored beneath that path. That way, when you try to look for the kth element, it would be a simple comparison at each path.
For popping, you can have a static object that has a link to the last stored object. That object can be accessed from the root object, hence O(1). Of course, you would need to add functions to retrieve the last object inserted, which means the newly pushed node must have a pointer to the previously pushed element (implemented in the push procedure; very simple, also O(1)). You also need to decrement the counter, which means each node must have a pointer to the parent node (also simple).
For finding the kth element (this is for the smallest kth element, but finding the largest is very similar): when you enter each node you pass in k and the minimum index for the branch (for the root it would be 0). Then you do a simple if comparison for each path: if k is between the minimum index and the minimum index + pathCounter, you enter that path, passing in k and the new minimum index as (minimum index + sum of all previous pathCounters, excluding the one you took). I think this is O(1), since increasing the amount of data within a certain range doesn't increase the difficulty of finding k.
I hope this helps, and if anything is not very clear, just let me know.

Find unique common element from 3 arrays

Original Problem:
I have 3 boxes, each containing 200 coins. Only one person has made calls from all three of the boxes, so there is one coin in each box with the same fingerprints, and all the other coins have different fingerprints. You have to find the coin that carries the same fingerprint in all 3 boxes, so that we can find the fingerprint of the person who made calls from all 3 boxes.
Converted problem:
You have 3 arrays containing 200 integers each. There is one and only one common element in these 3 arrays. Find the common element.
Please consider solutions other than the trivial O(1)-space, O(n^3)-time one.
An improvement on Pelkonen's answer:
From the converted problem in the OP:
"Given that there is one and only one common element in these 3 arrays."
We only need to sort 2 arrays to find the common element.
If you sort all the arrays first O(n log n) then it will be pretty easy to find the common element in less than O(n^3) time. You can for example use binary search after sorting them.
Let N = 200, k = 3,
Create a hash table H with capacity ≥ Nk.
For each element X in array 1, set H[X] to 1.
For each element Y in array 2, if Y is in H and H[Y] == 1, set H[Y] = 2.
For each element Z in array 3, if Z is in H and H[Z] == 2, return Z.
throw new InvalidDataGivenByInterviewerException();
O(Nk) time, O(Nk) space complexity.
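A quick C# sketch of those steps (the names are illustrative; null plays the part of the exception above):
static int? FindCommon(int[] array1, int[] array2, int[] array3)
{
    var h = new Dictionary<int, int>(array1.Length * 3);
    foreach (int x in array1)
        h[x] = 1;
    foreach (int y in array2)
        if (h.TryGetValue(y, out int seen) && seen == 1)
            h[y] = 2; //seen in arrays 1 and 2
    foreach (int z in array3)
        if (h.TryGetValue(z, out int seen) && seen == 2)
            return z; //seen in all three
    return null; //the InvalidDataGivenByInterviewerException case
}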
Use a hash table for each integer and encode the entries such that you know which array it's coming from - then check for the slot which has entries from all 3 arrays. O(n)
Use a hashtable mapping objects to frequency counts. Iterate through all three lists, incrementing occurrence counts in the hashtable, until you encounter one with an occurrence count of 3. This is O(n), since no sorting is required. Example in Python:
def find_duplicates(*lists):
    num_lists = len(lists)
    counts = {}
    for l in lists:
        for i in l:
            counts[i] = counts.get(i, 0) + 1
            if counts[i] == num_lists:
                return i
Or an equivalent, using sets:
def find_duplicates(*lists):
    intersection = set(lists[0])
    for l in lists[1:]:
        intersection = intersection.intersection(set(l))
    return intersection.pop()
O(N) solution: use a hash table. H[i] = list of all integers in the three arrays that map to i.
For every bucket H[i] with more than one entry, check if three of its values are the same. If yes, you have your solution. You can do this check even with the naive method and it should still be very fast, or you can sort those H[i] and then it becomes trivial.
If your numbers are relatively small, you can use H[i] = k if i appears k times in the three arrays, then the solution is the i for which H[i] = 3. If your numbers are huge, use a hash table though.
You can extend this to work even if you can have elements that can be common to only two arrays and also if you can have elements repeating elements in one of the arrays. It just becomes a bit more complicated, but you should be able to figure it out on your own.
If you want the fastest* answer:
Sort one array--time is N log N.
For each element in the second array, search the first. If you find it, add 1 to a companion array; otherwise add 0--time is N log N, using N space.
For each non-zero count, copy the corresponding entry into the temporary array, compacting it so it's still sorted--time is N.
For each element in the third array, search the temporary array; when you find a hit, stop. Time is less than N log N.
Here's code in Scala that illustrates this:
import java.util.Arrays
val a = Array(1,5,2,3,14,1,7)
val b = Array(3,9,14,4,2,2,4)
val c = Array(1,9,11,6,8,3,1)
Arrays.sort(a)
val count = new Array[Int](a.length)
for (i <- 0 until b.length) {
val j =Arrays.binarySearch(a,b(i))
if (j >= 0) count(j) += 1
}
var n = 0
for (i <- 0 until count.length) if (count(i)>0) { count(n) = a(i); n+= 1 }
for (i <- 0 until c.length) {
if (Arrays.binarySearch(count,0,n,c(i))>=0) println(c(i))
}
With slightly more complexity, you can either use no extra space at the cost of being even more destructive of your original arrays, or you can avoid touching your original arrays at all at the cost of another N space.
Edit: * as the comments have pointed out, hash tables are faster for non-perverse inputs. This is "fastest worst case". The worst case may not be so unlikely unless you use a really good hashing algorithm, which may well eat up more time than your sort. For example, if you multiply all your values by 2^16, the trivial hashing (i.e. just use the bitmasked integer as an index) will collide every time on lists shorter than 64k....
//Beginner's code using binary search; it's pretty easy
//(assumes all three arrays are sorted)
bool BS(int arr[], int low, int high, int target)
{
    if (low > high)
        return false;
    int mid = low + (high - low) / 2;
    if (target == arr[mid])
        return true;
    else if (target < arr[mid])
        return BS(arr, low, mid - 1, target);
    else
        return BS(arr, mid + 1, high, target);
}

vector<int> commonElements(int A[], int B[], int C[], int n1, int n2, int n3)
{
    vector<int> ans;
    for (int i = 0; i < n2; i++)
    {
        if (i > 0 && B[i - 1] == B[i])
            continue; //skip duplicates in B
        //search each element of array B in both arrays A and C
        if (BS(A, 0, n1 - 1, B[i]) && BS(C, 0, n3 - 1, B[i]))
        {
            ans.push_back(B[i]);
        }
    }
    return ans;
}

Check if two linked lists merge. If so, where?

This question may be old, but I couldn't think of an answer.
Say, there are two lists of different lengths, merging at a point; how do we know where the merging point is?
Conditions:
We don't know the length
We should parse each list only once.
The following is by far the greatest of all I have seen - O(N), no counters. I got it during an interview with a candidate, S.N., at VisionMap.
Make an iterating pointer like this: it goes forward every time till the end, and then jumps to the beginning of the opposite list, and so on.
Create two of these, pointing to two heads.
Advance each of the pointers by 1 every time, until they meet. This will happen after either one or two passes.
I still use this question in the interviews - but to see how long it takes someone to understand why this solution works.
Pavel's answer requires modification of the lists as well as iterating each list twice.
Here's a solution that only requires iterating each list twice (the first time to calculate their length; if the length is given you only need to iterate once).
The idea is to ignore the starting entries of the longer list (merge point can't be there), so that the two pointers are an equal distance from the end of the list. Then move them forwards until they merge.
lenA = count(listA) //iterates list A
lenB = count(listB) //iterates list B
ptrA = listA
ptrB = listB
//now we adjust either ptrA or ptrB so that they are equally far from the end
while(lenA > lenB):
ptrA = ptrA->next
lenA--
while(lenB > lenA):
ptrB = ptrB->next
lenB--
while(ptrA != NULL):
if (ptrA == ptrB):
return ptrA //found merge point
ptrA = ptrA->next
ptrB = ptrB->next
This is asymptotically the same (linear time) as my other answer but probably has smaller constants, so is probably faster. But I think my other answer is cooler.
If
by "modification is not allowed" it was meant "you may change but in the end they should be restored", and
we could iterate the lists exactly twice
the following algorithm would be the solution.
First, the numbers. Assume the first list is of length a+c and the second one is of length b+c, where c is the length of their common "tail" (after the mergepoint). Let's denote them as follows:
x = a+c
y = b+c
Since we don't know the length, we will calculate x and y without additional iterations; you'll see how.
Then, we iterate each list and reverse them while iterating! If both iterators reach the merge point at the same time, then we find it out by mere comparing. Otherwise, one pointer will reach the merge point before the other one.
After that, when the other iterator reaches the merge point, it won't proceed to the common tail. Instead it will go back to the former beginning of the list that had reached the merge point before! So, before it reaches the end of the changed list (i.e. the former beginning of the other list), it will make a+b+1 iterations total. Let's call it z+1.
The pointer that reached the merge-point first, will keep iterating, until reaches the end of the list. The number of iterations it made should be calculated and is equal to x.
Then, this pointer iterates back and reverses the lists again. But now it won't go back to the beginning of the list it originally started from! Instead, it will go to the beginning of the other list! The number of iterations it made should be calculated and is equal to y.
So we know the following numbers:
x = a+c
y = b+c
z = a+b
From which we determine that
a = (+x-y+z)/2
b = (-x+y+z)/2
c = (+x+y-z)/2
Which solves the problem.
Well, if you know that they will merge:
Say you start with:
A-->B-->C
|
V
1-->2-->3-->4-->5
1) Go through the first list setting each next pointer to NULL.
Now you have:
A B C
1-->2-->3 4 5
2) Now go through the second list and wait until you see a NULL, that is your merge point.
If you can't be sure that they merge you can use a sentinel value for the pointer value, but that isn't as elegant.
If we could iterate the lists exactly twice, then I can provide a method for determining the merge point:
iterate both lists and calculate lengths A and B
calculate difference of lengths C = |A-B|;
start iterating both list simultaneously, but make additional C steps on list which was greater
these two pointers will meet each other at the merging point
Here's a solution, computationally quick (iterates each list once) but uses a lot of memory:
for each item in list a
push pointer to item onto stack_a
for each item in list b
push pointer to item onto stack_b
while (stack_a top == stack_b top) // where top is the item to be popped next
pop stack_a
pop stack_b
// values at the top of each stack are the items prior to the merged item
You can use a set of Nodes. Iterate through one list and add each Node to the set. Then iterate through the second list and for every iteration, check if the Node exists in the set. If it does, you've found your merge point :)
This arguably violates the "parse each list only once" condition, but implement the tortoise and hare algorithm (used to find the merge point and cycle length of a cyclic list) so you start at List A, and when you reach the NULL at the end you pretend it's a pointer to the beginning of list B, thus creating the appearance of a cyclic list. The algorithm will then tell you exactly how far down List A the merge is (the variable 'mu' according to the Wikipedia description).
Also, the "lambda" value tells you the length of list B, and if you want, you can work out the length of list A during the algorithm (when you redirect the NULL link).
Maybe I am oversimplifying this, but simply iterate the smallest list and use the last node's Link as the merging point?
So, where Data->Link->Link == NULL is the end point, giving Data->Link as the merging point (at the end of the list).
EDIT:
Okay, from the picture you posted, you parse the two lists, the smallest first. With the smallest list you can maintain the references to the following nodes. Now, when you parse the second list you do a comparison on the references to find where Reference[i] is the reference at LinkedList[i]->Link. This will give the merge point. Time to explain with pictures (superimpose the values on the picture from the OP).
You have a linked list (references shown below):
A->B->C->D->E
You have a second linked list:
1->2->
With the merged list, the references would then go as follows:
1->2->D->E->
Therefore, you map the first "smaller" list (the merged list, which is what we are counting, has a length of 4 while the main list has 5)
Loop through the first list, maintain a reference of references.
The list will contain the following references Pointers { 1, 2, D, E }.
We now go through the second list:
-> A - Contains reference in Pointers? No, move on
-> B - Contains reference in Pointers? No, move on
-> C - Contains reference in Pointers? No, move on
-> D - Contains reference in Pointers? Yes, merge point found, break.
Sure, you maintain a new list of pointers, but that's not outside the specification. However, the first list is parsed exactly once, and the second list will only be fully parsed if there is no merge point. Otherwise, it will end sooner (at the merge point).
I have tested a merge case on my FC9 x86_64, and printed every node address as shown below:
Head A 0x7fffb2f3c4b0
0x214f010
0x214f030
0x214f050
0x214f070
0x214f090
0x214f0f0
0x214f110
0x214f130
0x214f150
0x214f170
Head B 0x7fffb2f3c4a0
0x214f0b0
0x214f0d0
0x214f0f0
0x214f110
0x214f130
0x214f150
0x214f170
Note: because I had aligned the node structure, when malloc() returns a node, the address is aligned to 16 bytes; see the least-significant 4 bits.
The least-significant bits are 0s, i.e., 0x0 or 0000b.
So if you are in the same special case (aligned node addresses) too, you can use these least 4 bits.
For example, when traveling both lists from head to tail, set 1 or 2 of the low 4 bits of the visited node's address, that is, set a flag:
next_node = node->next;
node = (struct node*)((unsigned long)node | 0x1UL);
Note above flags won't affect the real node address but only your SAVED node pointer value.
Once found somebody had set the flag bit(s), then the first found node should be the merge point.
After you're done, you'd restore the node addresses by clearing the flag bits you had set. An important thing is to be careful to do this cleanup while iterating (e.g. node = node->next); remember you had set flag bits, so do it this way:
real_node = (struct node*)(((unsigned long)node) & ~0x1UL);
real_node = real_node->next;
node = real_node;
Because this proposal will restore the modified node addresses, it could be considered as "no modification".
There can be a simple solution, but it will require auxiliary space. The idea is to traverse one list and store each address in a hash map, then traverse the other list and check whether each address is in the hash map. Each list is traversed only once. There's no modification to any list. Length is still unknown. Auxiliary space used: O(n), where 'n' is the length of the first list traversed.
This solution iterates each list only once; no modification of the lists is required either, though you may complain about space.
1) Basically you iterate list1 and store the address of each node in an array (which stores unsigned integer values)
2) Then you iterate list2, and for each node's address, search through the array for a match; if you find one, this is the merging node
//pseudocode
//for the first list
p1=list1;
unsigned long addr[];//to store addresses (size it to the list length)
i=0;
while(p1!=null){
addr[i]=p1;
i++;
p1=p1->next;
}
int len=i;//number of addresses stored
//for the second list
p2=list2;
while(p2!=null){
if(search(addr,len,p2)==1)//match found
{
//this is the merging node
return (p2);
}
p2=p2->next;
}
int search(addr,len,p2){
i=0;
while(i<len){
if(addr[i]==p2)
return 1;
i++;
}
return 0;
}
Hope it is a valid solution...
There is no need to modify any list. There is a solution in which we only have to traverse each list once.
Create two stacks, let's say stck1 and stck2.
Traverse 1st list and push a copy of each node you traverse in stck1.
Same as step two but this time traverse 2nd list and push the copy of nodes in stck2.
Now, pop from both stacks and check whether the two nodes are equal; if yes, keep a reference to them. If not, the previously popped nodes which were equal are the merge point we were looking for.
int FindMergeNode(Node headA, Node headB) {
Node currentA = headA;
Node currentB = headB;
// Do till the two nodes are the same
while (currentA != currentB) {
// If you reached the end of one list start at the beginning of the other
// one currentA
if (currentA.next == null) {
currentA = headB;
} else {
currentA = currentA.next;
}
// currentB
if (currentB.next == null) {
currentB = headA;
} else {
currentB = currentB.next;
}
}
return currentB.data;
}
We can use two pointers and move in a fashion such that if one of the pointers is null we point it to the head of the other list and same for the other, this way if the list lengths are different they will meet in the second pass.
If length of list1 is n and list2 is m, their difference is d=abs(n-m). They will cover this distance and meet at the merge point.
Code:
int findMergeNode(SinglyLinkedListNode* head1, SinglyLinkedListNode* head2) {
SinglyLinkedListNode* start1=head1;
SinglyLinkedListNode* start2=head2;
while (start1!=start2){
start1=start1->next;
start2=start2->next;
if (!start1)
start1=head2;
if (!start2)
start2=head1;
}
return start1->data;
}
Here is a naive solution; no need to traverse the whole lists.
If your node structure has three fields, like
struct node {
int data;
int flag; //initially set the flag to zero for all nodes
struct node *next;
};
say you have two heads (head1 and head2) pointing to the heads of the two lists.
Traverse both lists at the same pace and set flag = 1 (visited flag) for each visited node;
if (node->next->flag == 1) //the longer list will probably get here first
//then node->next is your required node.
How about this:
If you are only allowed to traverse each list only once, you can create a new node, traverse the first list to have every node point to this new node, and traverse the second list to see if any node is pointing to your new node (that's your merge point). If the second traversal doesn't lead to your new node then the original lists don't have a merge point.
If you are allowed to traverse the lists more than once, then you can traverse each list to find out their lengths, and if they are different, omit the "extra" nodes at the beginning of the longer list. Then just traverse both lists one step at a time and find the first merging node.
Steps in Java:
Create a map.
Start traversing both lists and put every traversed node into the Map, using something unique to the node (say, the node id) as the Key, with an initial Value of 1.
Whenever a duplicate key appears, increment the value for that Key (say its value now becomes 2, which is > 1).
The Key whose value is greater than 1 is the node where the two lists merge.
We can solve it efficiently by introducing an "isVisited" field. Traverse the first list and set "isVisited" to "true" for all nodes till the end. Now start from the second list and find the first node where the flag is true; boom, it's your merging point.
Step 1: find the length of both lists
Step 2: find the difference and advance the longer list by that difference
Step 3: now both lists will be at the same position
Step 4: iterate through the lists to find the merge point
#Pseudocode
def findmergepoint(list1, list2):
    lendiff = abs(list1.length() - list2.length())
    biggerlist = list1 if list1.length() > list2.length() else list2 # list with biggest length
    smallerlist = list2 if biggerlist is list1 else list1 # list with smallest length
    # move the bigger list ahead by the difference to level both lists at the same position
    for i in range(0, lendiff):
        biggerlist = biggerlist.next
    # looped only once
    while biggerlist is not None and smallerlist is not None:
        if biggerlist == smallerlist:
            return biggerlist # point of intersection
        biggerlist = biggerlist.next
        smallerlist = smallerlist.next
    return None # no intersection found
int FindMergeNode(Node *headA, Node *headB)
{
    Node *tempB = headB;
    while (headA != NULL)
    {
        while (tempB != NULL)
        {
            if (tempB == headA)
                return tempB->data;
            tempB = tempB->next;
        }
        headA = headA->next;
        tempB = headB;
    }
    return -1; //no merge point found
}
Use a Map or Dictionary to store the address vs. the value of each node. If an address already exists in the Map/Dictionary, then the value of that key is the answer.
I did this:
int FindMergeNode(Node headA, Node headB) {
Map<Object, Integer> map = new HashMap<Object, Integer>();
while(headA != null || headB != null)
{
if(headA != null && map.containsKey(headA))
{
return map.get(headA);
}
if(headA != null)
{
map.put(headA, headA.data);
headA = headA.next;
}
if(headB != null && map.containsKey(headB))
{
return map.get(headB);
}
if(headB != null)
{
map.put(headB, headB.data);
headB = headB.next;
}
}
return 0;
}
An O(n) complexity solution, but based on an assumption.
The assumption is: both lists contain only positive integers.
Logic: negate all the integers of list1. Then walk through list2 until you find a negative integer. Once found, take it, change the sign back to positive, and return it.
static int findMergeNode(SinglyLinkedListNode head1, SinglyLinkedListNode head2) {
SinglyLinkedListNode current = head1; //head1 is give to be not null.
//mark all head1 nodes as negative
while(true){
current.data = -current.data;
current = current.next;
if(current==null) break;
}
current=head2; //given as not null
while(true){
if(current.data<0) return -current.data;
current = current.next;
}
}
You can add the nodes of list1 to a HashSet, then loop through the second list; if any node of list2 is already present in the set, that's the merge node.
static int findMergeNode(SinglyLinkedListNode head1, SinglyLinkedListNode head2) {
    HashSet<SinglyLinkedListNode> set = new HashSet<SinglyLinkedListNode>();
    while (head1 != null) {
        set.add(head1);
        head1 = head1.next;
    }
    while (head2 != null) {
        if (set.contains(head2)) {
            return head2.data;
        }
        head2 = head2.next;
    }
    return -1;
}
Solution using JavaScript
var getIntersectionNode = function(headA, headB) {
if(headA == null || headB == null) return null;
let countA = listCount(headA);
let countB = listCount(headB);
let diff = 0;
if(countA > countB) {
diff = countA - countB;
for(let i = 0; i < diff; i++) {
headA = headA.next;
}
} else if(countA < countB) {
diff = countB - countA;
for(let i = 0; i < diff; i++) {
headB = headB.next;
}
}
return getIntersectValue(headA, headB);
};
function listCount(head) {
let count = 0;
while(head) {
count++;
head = head.next;
}
return count;
}
function getIntersectValue(headA, headB) {
while(headA && headB) {
if(headA === headB) {
return headA;
}
headA = headA.next;
headB = headB.next;
}
return null;
}
If editing the linked list is allowed,
then just set the next pointers of all the nodes of list 2 to null.
Then find the data value of the last node of list 1; since the shared tail's pointers were cut, that last node is the intersecting node.
This gives you the intersecting node in a single traversal of both lists, with "no hi-fi logic".
Follow this simple logic to solve the problem:
Since pointers A and B travel at the same speed, to meet at the same point they must cover the same distance. We can achieve this by appending the length of one list to the other, i.e. by switching each pointer to the other list's head when it reaches the end.
