What is .Info referring to in pseudocode?

I am reading pseudocode for a basic linked list, and some of the methods have ____.Info in them. Here is an example:
Algorithm: ToArray() //Returns an array of items from the list
START
  Let Current be a node;
  Let ItemArray be an array of items in the list;
  Let X be an integer, initialized to 0; //assume array sub-scripting begins at 0
  Current = First;
  While (Current <> NULL)
    Increment X by 1;
    ItemArray[X-1] = Current.Info;
    Current = Current.Next;
  End-While;
  Return ItemArray;
STOP
What does the .Info do/mean?

Current in your pseudocode is an element (node) of the list: it holds the pointer to the Next element and also the actual data at that position. It seems Info is a (somewhat unfortunate) name for exactly that: the actual data at the position in the list represented by Current.
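For illustration, a node of such a list could be declared like this in C++ (the field names mirror the pseudocode; using int for the item is just an assumption):

struct Node {
    int   Info;   // the item stored at this position; Current.Info reads this field
    Node* Next;   // link to the following node; Current.Next follows this field
};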

Related

Divide an odd-sized array into two equal sets of the same size and same sum after deleting any one element from the array

Given an array of odd size, you have to delete any one element from the array and then find whether it is possible to divide the remaining even-sized array into two sets of equal size having the same sum of their elements. It is mandatory to remove one element from the array.
So here I am assuming that it is necessary to remove 1 element from the array.
Please look at the code snippet below.
int solve(int idx, int s, int cntr, int val) {
    if (idx == n) {
        if (cntr != 1)
            return INT_MAX;                // exactly one element must have been deleted
        else
            return abs((sum - val) - 2*s); // 0 means the two halves have equal sums
    }
    int ans = INT_MAX;
    if (cntr == 0)   // option 1: delete this element (only possible if nothing has been deleted yet)
        ans = min(ans, solve(idx+1, s, cntr+1, arr[idx]));
    // option 2: put this element into bucket s; option 3: leave it for the other half
    ans = min(ans, min(solve(idx+1, s+arr[idx], cntr, val), solve(idx+1, s, cntr, val)));
    return ans;
}
Here sum is the total sum of the original array, val is the value of the element you choose to delete, and cntr keeps track of whether any value has been removed from the array yet.
So the algorithm goes like this.
Forget for a moment that you need to delete any value; then the problem becomes whether it is possible to divide the array into 2 equal-sum halves. We can restate this as: divide the array into 2 parts such that abs(sum - 2*sum_of_one_part) is minimized. With this idea, let's say we have a bucket s, which represents the part of the array we are tracking. At each step we can either put the current element into this part or leave it for the other part.
Now, if we introduce the deletion into this problem, only one small change is required: at each step, instead of 2 options you have 3.
Delete this particular element: increase cntr to 1 and set val to the value of the element at that index in the array.
Do nothing with this element, which is the same as putting it into the other bucket/half.
Put this element into bucket s, i.e. increase the value of s by arr[idx].
Now recursively check which option gives the best result.
P.S. Look at the base case in the code snippet to get a better idea.
In the end, if the above solve function gives ans = 0, that means we can divide the array into 2 equal-sum parts after deleting one element.
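For completeness, here is a minimal driver sketch showing how the function could be invoked (the globals n, sum and arr are assumptions implied by the snippet above, and the example values are my own):

#include <iostream>
#include <climits>
#include <algorithm>
using namespace std;

int n, sum, arr[100];   // globals assumed by solve()

// ... solve() as defined above goes here ...

int main() {
    int input[] = {1, 2, 2, 1, 5};   // odd-sized example: delete the 5, split into {1,2} and {2,1}
    n = 5;
    sum = 0;
    for (int i = 0; i < n; i++) { arr[i] = input[i]; sum += arr[i]; }
    cout << ((solve(0, 0, 0, 0) == 0) ? "YES" : "NO") << endl;   // 0 means an equal-sum split exists
    return 0;
}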
Hope this helps.

I had an interview and they asked me this, and I didn't know the answer. Can I get an answer?

The question is: write pseudocode that returns true if a given one-way (singly linked) list reads the same in both directions and false otherwise. In addition, we know the size of the list, stored in a variable n. The expected solution should have computational complexity O(n) and memory complexity O(1).
example : 1->2->3->3->2->1 return true
example : 1->2->3->1->2->3 return false
It is possible to reverse a one-way linked list in a single pass (see the end of this answer). The restriction on additional memory is quite constraining in this task, so the best approach I figured out is a bit hacky.
1) Iterate over the list and reverse the part that comes after its center (namely after position n/2) using the algorithm given at the end of this answer. Save a pointer to the last element in the list - you'll need it for step 2.
2) Simultaneously iterate over the list from the beginning to position n/2 and over the reversed portion (from n down to n/2). The elements that you visit in both portions should match. For this you need two variables - one iterating over the first portion and one for the second (and one to remember how many elements you've already processed).
3) Reverse the second half of the list back so that the list is not changed at the end.
Overall this meets the requirements of the task.
The algorithm to reverse a one-way linked list goes like this (using C++ as an example):
struct node {
    node* next;
    int data;
};

// reverses a one-way list and returns pointer to the new list head
node* reverse(node* c) {
    node* prev = NULL;
    node* cur = c;
    // I assume the list is NULL terminated
    while (cur) {
        node* temp = cur->next;
        cur->next = prev;
        prev = cur;
        cur = temp;
    }
    return prev;
}
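Putting the three steps together, a rough and untested C++ sketch of the whole palindrome check could look like this (it reuses the node struct and reverse function above and assumes n is the known list length):

// returns true if the list of known size n reads the same in both directions
bool isPalindrome(node* head, int n) {
    if (n < 2) return true;
    // advance to the first node of the second half (skip the middle node if n is odd)
    node* second = head;
    for (int i = 0; i < (n + 1) / 2; ++i) second = second->next;
    // step 1: reverse the second half in place
    node* revHead = reverse(second);
    // step 2: compare the first n/2 nodes with the reversed second half
    bool same = true;
    node* a = head;
    node* b = revHead;
    for (int i = 0; i < n / 2; ++i) {
        if (a->data != b->data) { same = false; break; }
        a = a->next;
        b = b->next;
    }
    // step 3: reverse the second half back; the node just before it still points
    // to its original first element, so the list is restored without re-linking
    reverse(revHead);
    return same;
}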

Select an element from a stream with uniform distributed probability

You are given:
A stream (end of the stream is EOF)
A function next() to get the next element in the stream and advance the pointer in the stream
A random generator generating floats between 0 and 1 (inclusively) uniformly
Output:
An element that is proven to be randomly (uniformly distributed) chosen
You can use one or two variables.
You are not allowed to use an array / list, and you cannot simply read all the elements out, store them all, and then pick.
This is an interview question.
My thinking is:
I use a variable cur to store the most recently kept element.
So, when I get a new element, I generate a random 0 or 1 using the generator; if it is 0 then cur = new element, otherwise continue.
When I get EOF, I return cur.
Is my thinking correct? How do I prove it?
Here is a similar question:
How would you pick a uniform random element in linked list with unknown length?
Let the current element's index be i.
Choose to 'remember' the current element with probability 1/i. When EOF is reached, produce the element you remember.
At the end, each element with index i has a probability of 1/n of being chosen, where n is the total number of elements.
A formal proof can be done using induction, following these guidelines.
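To see why this gives 1/n for every element (a sketch of the standard argument): element i is kept at step i with probability 1/i, and it then survives every later step j > i with probability 1 - 1/j = (j-1)/j. Multiplying these factors,
1/i * i/(i+1) * (i+1)/(i+2) * ... * (n-1)/n = 1/n,
which does not depend on i, so the selection is uniform.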
The coin-flip algorithm proposed in the question chooses the last element in the stream with a probability of 1/2, so unless the stream has size 2, it is not a valid solution.
A valid way would be to assign a random float value drawn from a uniform distribution between [0..1] to every element and return the one with the largest (or smallest) value at the end. This can be done in O(1) auxiliary space, you just need to remember the largest value and the associated element.
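A rough sketch of that idea (C++ for illustration; modelling the stream as a std::vector is just a simplification for the example):

#include <random>
#include <vector>

// Tag every element with a uniform float and keep only the element whose tag is
// the largest seen so far. Auxiliary state is O(1): the best tag and its element.
template <typename T>
T pickUniform(const std::vector<T>& stream) {
    std::mt19937 gen((std::random_device())());
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    double bestTag = -1.0;
    T best{};
    for (const T& item : stream) {
        double tag = dist(gen);
        if (tag > bestTag) {   // new largest tag: remember this element instead
            bestTag = tag;
            best = item;
        }
    }
    return best;
}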
This solution doesn't quite fit all of the parameters in the question. However, this solution is based on a real-world need and code.
private static final SecureRandom s_random = new SecureRandom(); // Use SecureRandom for truly random selection without a pattern

public static <V> V randomValue(Iterator<V> values)
{
    V result, item;
    int count;

    result = null;
    for (count = 1; values.hasNext(); count++)
    {
        item = values.next();
        if (count == 1)                        // Always select the first element
            result = item;
        else if (s_random.nextInt(count) == 0) // Replace the previous random item with the new item with 1/count probability
            result = item;
    }
    if (result == null)
        throw new IllegalArgumentException("No value found");
    return result;
}
The above algorithm is taken from here
Here's my python code for this problem (also available with tests on my github):
import random as rnd

def sample(iterator):
    """ Uniform sampling of an element from a stream.
    The i-th element (indexed at 1) is sampled with probability 1/i.
    This can be shown by recurrence to lead to uniform sampling.
    Indeed, at step n we impose p(val==n)=1/n, and all elements i before n
    were sampled with equal probability, which is also equal to 1/n since
    p(val<n) = (n-1)*p(val==i) = 1-1/n = (n-1)/n => p(val==i) = 1/n """
    i = 1
    out = None
    for elm in iterator:
        if rnd.uniform(0, 1) <= 1/i:
            out = elm
        i += 1
    return out

In random draw: how to ensure that a value is not re-drawn too soon

When drawing at random from a set of values in succession, where a drawn value is allowed to be drawn again, a given value has (of course) a small chance of being drawn twice (or more) in immediate succession. That causes an issue for the purposes of a given application, and we would like to eliminate this chance. Any algorithmic ideas on how to do so (simple/efficient)?
Ideally we would like to set a threshold say as a percentage of the size of the data set:
Say the size of the set of values N=100, and the threshold T=10%, then if a given value is drawn in the current draw, it is guaranteed not to show up again in the next N*T=10 draws.
Obviously this restriction introduces bias into the random selection. We don't mind that a proposed algorithm introduces further bias into the randomness of selection; what really matters for this application is that the selection is just random enough to appear so to a human observer.
As an implementation detail, the values are stored as database records, so database table flags/values can be used, or maybe external memory structures. Answers about the abstract case are welcome too.
Edit:
I just hit this other SO question here, which has good overlap with my own. Going through the good points there.
Here's an implementation that does the whole process in O(1) (for a single element) without any bias:
The idea is to treat the last K elements of the array A (which contains all the values) like a queue. We draw a value from the first N-K values in A, which is the random value, and swap it with the element at position N-Pointer, where Pointer represents the head of the queue and resets to 1 after it has crossed K elements.
To eliminate any bias in the first K draws, the random value is drawn between 1 and N-Pointer instead of N-K, so this virtual queue grows at each draw until it reaches size K (e.g. after 3 draws the candidate values appear in A between indexes 1 and N-3, and the suspended values appear in indexes N-2 to N).
All operations are O(1) for drawing a single element, and there is no bias throughout the entire process.
void DrawNumbers(val[] A, int K)
{
    N = A.size;
    random Rnd = new random;
    int Drawn_Index;
    int Count_To_K = 1;
    int Pointer = K;
    while (stop_drawing_condition)
    {
        if (Count_To_K <= K)
        {
            Drawn_Index = Rnd.NextInteger(1, N-Pointer);
            Count_To_K++;
        }
        else
        {
            Drawn_Index = Rnd.NextInteger(1, N-K);
        }
        Print("drawn value is: " + A[Drawn_Index]);
        Swap(A[Drawn_Index], A[N-Pointer]);
        Pointer--;
        if (Pointer < 1) Pointer = K;
    }
}
My previous suggestion, by using a list and an actual queue, is dependent on the remove method of the list, which I believe can be at best O(logN) by using an array to implement a self balancing binary tree, as the list has to have direct access to indexes.
void DrawNumbers(list N, int K)
{
    queue Suspended_Values = new queue;
    random Rnd = new random;
    int Drawn_Index;
    while (stop_drawing_condition)
    {
        if (Suspended_Values.count == K)
            N.add(Suspended_Values.Dequeue());
        Drawn_Index = Rnd.NextInteger(1, N.size); // random integer between 1 and the number of values in N
        Print("drawn value is: " + N[Drawn_Index]);
        Suspended_Values.Enqueue(N[Drawn_Index]);
        N.Remove(Drawn_Index);
    }
}
I assume you have an array, A, that contains the items you want to draw. At each time period you randomly select an item from A.
You want to prevent any given item, i, from being drawn again within some k iterations.
Let's say that your threshold is 10% of A.
So create a queue, call it drawn, that can hold threshold items. Also create a hash table that contains the drawn items. Call the hash table hash.
Then:
do
{
    i = Get random item from A
    if (i in hash)
    {
        // we have drawn this item recently. Don't draw it.
        continue;
    }
    draw(i);
    if (drawn.count == k)
    {
        // remove oldest item from queue
        temp = drawn.dequeue();
        // and from the hash table
        hash.remove(temp);
    }
    // add new item to queue and hash table
    drawn.enqueue(i);
    hash.add(i);
} while (forever);
The hash table exists solely to increase lookup speed. You could do without the hash table if you're willing to do a sequential search of the queue to determine if an item has been drawn recently.
Say you have n items in your list, and you don't want any of the k last items to be selected.
Select at random from an array of size n-k, and use a queue of size k to stick the items you don't want to draw (adding to the front and removing from the back).
All operations are O(1).
---- clarification ----
Give n items, and a goal of not redrawing any of the last k draws, create an array and queue as follows.
Create an array A of size n-k, and put n-k of your items in the array (chosen at random, or seeded however you like).
Create a queue (linked list) Q and populate it with the remaining k items, again in random order or whatever order you like.
Now, each time you want to select an item at random:
Choose a random index from your array, call this i.
Give A[i] to whomever is asking for it, and add it to the front of Q.
Remove the element from the back of Q, and store it in A[i].
Everything is O(1) after the array and linked list are created, which is a one-time O(n) operation.
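A rough C++ sketch of this array-plus-queue scheme (class and member names are my own; std::deque stands in for the linked-list queue Q, and std::vector plays the role of the array A):

#include <deque>
#include <random>
#include <vector>

class NoRecentRepeatDrawer {
public:
    NoRecentRepeatDrawer(const std::vector<int>& values, size_t k)
        : gen_(std::random_device{}()) {
        // the last k values start out suspended; the rest are drawable
        drawable_.assign(values.begin(), values.end() - k);
        suspended_.assign(values.end() - k, values.end());
    }

    int draw() {
        std::uniform_int_distribution<size_t> pick(0, drawable_.size() - 1);
        size_t i = pick(gen_);
        int result = drawable_[i];
        suspended_.push_front(result);      // suspend the value we just drew
        drawable_[i] = suspended_.back();   // recycle the oldest suspended value back into A
        suspended_.pop_back();
        return result;
    }

private:
    std::vector<int> drawable_;   // the array A of size n-k
    std::deque<int> suspended_;   // the queue Q of the last k drawn values
    std::mt19937 gen_;
};

For example, with n = 100 values and a 10% threshold you would construct it with k = 10.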
Now, you might wonder, what do we do if we want to change n (i.e. add or remove an element).
Each time we add an element, we either want to grow the size of A or of Q, depending on our logic for deciding what k is (i.e. fixed value, fixed fraction of n, whatever...).
If Q increases then the result is trivial, we just append the new element to Q. In this case I'd probably append it to the end of Q so that it gets in play ASAP. You could also put it in A, kicking some element out of A and appending it to the end of Q.
If A increases, you can use a standard technique for increasing arrays in amortized constant time. E.g., each time A fills up, we double it in size, and keep track of the number of cells of A that are live. (look up 'Dynamic Arrays' in Wikipedia if this is unfamiliar).
Set-based approach:
If the threshold is low (say below 40%), the suggested approach is:
Have a set and a queue of the last N*T generated values.
When generating a value, keep regenerating it until it's not contained in the set.
When pushing to the queue, pop the oldest value and remove it from the set.
Pseudo-code:
generateNextValue:
// once we're generated more than N*T elements,
// we need to start removing old elements
if queue.size >= N*T
element = queue.pop
set.remove(element)
// keep trying to generate random values until it's not contained in the set
do
value = getRandomValue()
while set.contains(value)
set.add(value)
queue.push(value)
return value
If the threshold is high, you can just turn the above on its head:
Have the set represent all values not in the last N*T generated values.
Invert all set operations (replace all set adds with removes and vice versa and replace the contains with !contains).
Pseudo-code:
generateNextValue:
if queue.size >= N*T
element = queue.pop
set.add(element)
// we can now just get a random value from the set, as it contains all candidates,
// rather than generating random values until we find one that works
value = getRandomValueFromSet()
//do
// value = getRandomValue()
//while !set.contains(value)
set.remove(value)
queue.push(value)
return value
Shuffle-based approach: (somewhat more complicated than the above)
If the threshold is high, the above may take a long time, as it could keep generating values that already exist.
In this case, some shuffle-based approach may be a better idea.
Shuffle the data.
Repeatedly process the first element.
When doing so, remove it and insert it back at a random position in the range [N*T, N].
Example:
Let's say N*T = 5 and all possible values are [1,2,3,4,5,6,7,8,9,10].
Then we first shuffle, giving us, let's say, [4,3,8,9,2,6,7,1,10,5].
Then we remove 4 and insert it back in some index in the range [5,10] (say at index 5).
Then we have [3,8,9,2,4,6,7,1,10,5].
And continue removing the next element and insert it back, as required.
Implementation:
An array is fine if we don't care about efficiency a whole lot - getting one element will cost O(n) time.
To make this efficient we need to use an ordered data structure that supports efficient random position inserts and first position removals. The first thing that comes to mind is a (self-balancing) binary search tree, ordered by index.
We won't be storing the actual index, the index will be implicitly defined by the structure of the tree.
At each node we will have a count of children (+ 1 for itself) (which needs to be updated on insert / remove).
An insert can be done as follows: (ignoring the self-balancing part for the moment)
// calling function
insert(node, value)
    insert(node, N*T, value)

insert(node, offset, value)
    // node.left / node.right can be defined as 0 if the child doesn't exist
    leftCount = node.left.count - offset
    rightCount = node.right.count

    // Since we're here, it means we're inserting in this subtree,
    // thus update the count
    node.count++

    // Nodes to the left are within N*T, so simply go right
    // leftCount is the difference between N*T and the number of nodes on the left,
    // so this needs to be the new offset (and +1 for the current node)
    if leftCount < 0
        insert(node.right, -leftCount+1, value)
    else
        // generate a random number,
        //   on [0, leftCount), insert to the left
        //   on [leftCount, leftCount], insert at the current node
        //   on (leftCount, leftCount + rightCount], insert to the right
        sum = leftCount + rightCount + 1
        random = getRandomNumberInRange(0, sum)
        if random < leftCount
            insert(node.left, offset, value)
        else if random == leftCount
            // we don't actually want to update the count here
            node.count--
            newNode = new Node(value)
            newNode.count = node.count + 1
            // TODO: swap node and newNode's data so that node's parent will now point to newNode
            newNode.right = node
            newNode.left = null
        else
            insert(node.right, -leftCount+1, value)
To visualize inserting at the current node:
If we have something like:
    4
   /
  1
 / \
2   3
And we want to insert 5 where 1 is now, it will do this:
    4
   /
  5
   \
    1
   / \
  2   3
Note that when a red-black tree, for example, performs operations to keep itself balanced, none of these involve comparisons, so it doesn't need to know the order (i.e. index) of any already-inserted elements. But it will have to update the counts appropriately.
The overall efficiency will be O(log n) to get one element.
I'd put all "values" into a "list" of size N, then shuffle the list and retrieve values from the top of the list. Then you "insert" the retrieved value at a random position with any index >= N*T.
Unfortunately I'm not truly a math-guy :( So I simply tried it (in VB, so please take it as pseudocode ;) )
Public Class BiasedRandom
    Private prng As New Random
    Private offset As Integer
    Private l As New List(Of Integer)

    Public Sub New(ByVal size As Integer, ByVal threshold As Double)
        If threshold <= 0 OrElse threshold >= 1 OrElse size < 1 Then Throw New System.ArgumentException("Check your params!")
        offset = size * threshold
        ' initial fill
        For i = 0 To size - 1
            l.Add(i)
        Next
        ' shuffle "Algorithm p"
        For i = size - 1 To 1 Step -1
            Dim j = prng.Next(0, i + 1)
            Dim tmp = l(i)
            l(i) = l(j)
            l(j) = tmp
        Next
    End Sub

    Public Function NextValue() As Integer
        Dim tmp = l(0)
        l.RemoveAt(0)
        l.Insert(prng.Next(offset, l.Count + 1), tmp)
        Return tmp
    End Function
End Class
Then a simple check:
Public Class Form1
    Dim z As Integer = 10
    Dim k As BiasedRandom

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        k = New BiasedRandom(z, 0.5)
    End Sub

    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
        Dim j(z - 1)
        For i = 1 To 10 * 1000 * 1000
            j(k.NextValue) += 1
        Next
        Stop
    End Sub
End Class
And when I check out the distribution it looks okay enough for the naked eye ;)
EDIT:
After thinking about RonTeller's argument, I have to admit that he is right. I don't think there is a performance-friendly way to achieve what is wanted while preserving a good (not more biased than required) random order.
I came to the following idea:
Given a list (array whatever) like this:
0123456789 ' not shuffled to make clear what I mean
We return the first element which is 0. This one must not come up again for 4 (as an example) more draws but we also want to avoid a strong bias. Why not simply put it to the end of the list and then shuffle the "tail" of the list, i.e. the last 6 elements?
1234695807
We now return the 1 and repeat the above steps.
2340519786
And so on and so on. Since removing and inserting is kind of unnecessary work, one could use a simple array and a "pointer" to the actual element. I have changed the code from above to give a sample. It's slower than the first one, but should avoid the mentioned bias.
Public Function NextValue() As Integer
    Static current As Integer = 0
    ' only shuffling a part of the list
    For i = current + l.Count - 1 To current + 1 + offset Step -1
        Dim j = prng.Next(current + offset, i + 1)
        Dim tmp = l(i Mod l.Count)
        l(i Mod l.Count) = l(j Mod l.Count)
        l(j Mod l.Count) = tmp
    Next
    current += 1
    Return l((current - 1) Mod l.Count)
End Function
EDIT 2:
Finally (hopefully), I think the solution is quite simple. The below code assumes that there is an array of N elements called TheArray which contains the elements in random order (could be rewritten to work with sorted array). The value DelaySize determines how long a value should be suspended after it has been drawn.
Public Function NextValue() As Integer
    Static current As Integer = 0
    Dim SelectIndex As Integer = prng.Next(0, TheArray.Count - DelaySize)
    Dim ReturnValue = TheArray(SelectIndex)
    TheArray(SelectIndex) = TheArray(TheArray.Count - 1 - current Mod DelaySize)
    TheArray(TheArray.Count - 1 - current Mod DelaySize) = ReturnValue
    current += 1
    Return ReturnValue
End Function

Check if two linked lists merge. If so, where?

This question may be old, but I couldn't think of an answer.
Say, there are two lists of different lengths, merging at a point; how do we know where the merging point is?
Conditions:
We don't know the length
We should parse each list only once.
The following is by far the greatest of all I have seen - O(N), no counters. I got it from a candidate, S.N., during an interview at VisionMap.
Make an iterating pointer like this: it goes forward every time till the end, and then jumps to the beginning of the opposite list, and so on.
Create two of these, pointing to the two heads.
Advance each of the pointers by 1 every time, until they meet. This will happen after either one or two passes.
I still use this question in the interviews - but to see how long it takes someone to understand why this solution works.
Pavel's answer requires modification of the lists as well as iterating each list twice.
Here's a solution that only requires iterating each list twice (the first time to calculate their length; if the length is given you only need to iterate once).
The idea is to ignore the starting entries of the longer list (merge point can't be there), so that the two pointers are an equal distance from the end of the list. Then move them forwards until they merge.
lenA = count(listA) //iterates list A
lenB = count(listB) //iterates list B

ptrA = listA
ptrB = listB

//now we adjust either ptrA or ptrB so that they are equally far from the end
while (lenA > lenB):
    ptrA = ptrA->next
    lenA--
while (lenB > lenA):
    ptrB = ptrB->next
    lenB--

while (ptrA != NULL):
    if (ptrA == ptrB):
        return ptrA //found merge point
    ptrA = ptrA->next
    ptrB = ptrB->next
This is asymptotically the same (linear time) as my other answer but probably has smaller constants, so is probably faster. But I think my other answer is cooler.
If
by "modification is not allowed" it was meant "you may change but in the end they should be restored", and
we could iterate the lists exactly twice
the following algorithm would be the solution.
First, the numbers. Assume the first list is of length a+c and the second one is of length b+c, where c is the length of their common "tail" (after the mergepoint). Let's denote them as follows:
x = a+c
y = b+c
Since we don't know the length, we will calculate x and y without additional iterations; you'll see how.
Then, we iterate each list and reverse them while iterating! If both iterators reach the merge point at the same time, we find it out by mere comparison. Otherwise, one pointer will reach the merge point before the other one.
After that, when the other iterator reaches the merge point, it won't proceed to the common tail. Instead, it will go back to the former beginning of the list that had reached the merge point before! So, before it reaches the end of the changed list (i.e. the former beginning of the other list), it will make a+b+1 iterations in total. Let's call it z+1.
The pointer that reached the merge point first will keep iterating until it reaches the end of the list. The number of iterations it made should be counted and is equal to x.
Then, this pointer iterates back and reverses the lists again. But now it won't go back to the beginning of the list it originally started from! Instead, it will go to the beginning of the other list! The number of iterations it made should be counted and is equal to y.
So we know the following numbers:
x = a+c
y = b+c
z = a+b
From which we determine that
a = (+x-y+z)/2
b = (-x+y+z)/2
c = (+x+y-z)/2
Which solves the problem.
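As a quick sanity check with made-up numbers: if a = 2, b = 3 and c = 4, then the measured counts are x = 6, y = 7 and z = 5, and the formulas give a = (6-7+5)/2 = 2, b = (-6+7+5)/2 = 3 and c = (6+7-5)/2 = 4, as expected.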
Well, if you know that they will merge:
Say you start with:
A-->B-->C
        |
        V
1-->2-->3-->4-->5
1) Go through the first list setting each next pointer to NULL.
Now you have:
A   B   C
1-->2-->3   4   5
2) Now go through the second list and wait until you see a NULL, that is your merge point.
If you can't be sure that they merge you can use a sentinel value for the pointer value, but that isn't as elegant.
If we could iterate the lists exactly twice, then I can provide a method for determining the merge point:
iterate both lists and calculate the lengths A and B
calculate the difference of the lengths, C = |A-B|
start iterating both lists simultaneously, but make C additional steps on the list which was longer
these two pointers will meet each other at the merging point
Here's a solution, computationally quick (iterates each list once) but uses a lot of memory:
for each item in list a
    push pointer to item onto stack_a
for each item in list b
    push pointer to item onto stack_b
while (stack_a top == stack_b top) // where top is the item to be popped next
    pop stack_a
    pop stack_b
// values at the top of each stack are the items prior to the merged item
You can use a set of Nodes. Iterate through one list and add each Node to the set. Then iterate through the second list and for every iteration, check if the Node exists in the set. If it does, you've found your merge point :)
This arguably violates the "parse each list only once" condition, but implement the tortoise and hare algorithm (used to find the merge point and cycle length of a cyclic list) so you start at List A, and when you reach the NULL at the end you pretend it's a pointer to the beginning of list B, thus creating the appearance of a cyclic list. The algorithm will then tell you exactly how far down List A the merge is (the variable 'mu' according to the Wikipedia description).
Also, the "lambda" value tells you the length of list B, and if you want, you can work out the length of list A during the algorithm (when you redirect the NULL link).
Maybe I am over-simplifying this, but simply iterate the smaller list and use the last node's Link as the merging point?
So, where Data->Link->Link == NULL is the end point, giving Data->Link as the merging point (at the end of the list).
EDIT:
Okay, from the picture you posted, you parse the two lists, the smallest first. With the smallest list you can maintain the references to the following nodes. Now, when you parse the second list, you do a comparison on the references to find where Reference[i] is the reference at LinkedList[i]->Link. This will give the merge point. Time to explain with pictures (superimpose the values onto the picture from the OP).
You have a linked list (references shown below):
A->B->C->D->E
You have a second linked list:
1->2->
With the merged list, the references would then go as follows:
1->2->D->E->
Therefore, you map the first "smaller" list (the merged list, which is what we are counting, has a length of 4 and the main list 5).
Loop through the first list, maintain a reference of references.
The list will contain the following references Pointers { 1, 2, D, E }.
We now go through the second list:
-> A - Contains reference in Pointers? No, move on
-> B - Contains reference in Pointers? No, move on
-> C - Contains reference in Pointers? No, move on
-> D - Contains reference in Pointers? Yes, merge point found, break.
Sure, you maintain a new list of pointers, but that's not outside the specification. However, the first list is parsed exactly once, and the second list will only be fully parsed if there is no merge point. Otherwise, it will end sooner (at the merge point).
I have tested a merge case on my FC9 x86_64, and print every node address as shown below:
Head A 0x7fffb2f3c4b0
0x214f010
0x214f030
0x214f050
0x214f070
0x214f090
0x214f0f0
0x214f110
0x214f130
0x214f150
0x214f170
Head B 0x7fffb2f3c4a0
0x214f0b0
0x214f0d0
0x214f0f0
0x214f110
0x214f130
0x214f150
0x214f170
Note: because I had aligned the node structure, when malloc() returns a node the address is aligned to 16 bytes - see the least-significant 4 bits.
The least bits are 0s, i.e., 0x0 or 0000b.
So if you are in the same special case (aligned node addresses) too, you can use these least 4 bits.
For example, when travelling both lists from head to tail, set 1 or 2 of those 4 bits in the saved pointer to the visited node, that is, set a flag:
next_node = node->next;
node = (struct node*)((unsigned long)node | 0x1UL);
Note that the above flags won't affect the real node address, only your SAVED node pointer value.
Once you find that the flag bit(s) have already been set, the first such node is the merge point.
After you are done, restore the node addresses by clearing the flag bits you set. An important point is to be careful when iterating (e.g. node = node->next) to clean up first; remember you had set flag bits, so do it this way:
real_node = (struct node*)((unsigned long)node & ~0x1UL);
real_node = real_node->next;
node = real_node;
Because this proposal will restore the modified node addresses, it could be considered as "no modification".
There can be a simple solution, but it will require auxiliary space. The idea is to traverse one list and store each address in a hash map, then traverse the other list and check whether each address lies in the hash map or not. Each list is traversed only once. There's no modification to any list. Length is still unknown. Auxiliary space used: O(n), where 'n' is the length of the first list traversed.
This solution iterates each list only once and requires no modification of the lists, though you may complain about space.
1) Basically you iterate list1 and store the address of each node in an array (which stores unsigned int values).
2) Then you iterate list2, and for each node's address you search through the array to see whether there is a match or not; if there is, this is the merging node.
//pseudocode
//for the first list
p1 = list1;
unsigned int addr[];   //to store node addresses
i = 0;
while (p1 != null) {
    addr[i] = p1;      //store the address of the node itself
    p1 = p1->next;
    i++;
}
int len = i;           //number of addresses stored

//for the second list
p2 = list2;
while (p2 != null) {
    if (search(addr, len, p2) == 1)   //match found
    {
        //this is the merging node
        return (p2);
    }
    p2 = p2->next;
}

int search(addr, len, p2) {
    i = 0;
    while (i < len) {
        if (addr[i] == p2)
            return 1;
        i++;
    }
    return 0;
}
Hope it is a valid solution...
There is no need to modify any list. There is a solution in which we only have to traverse each list once.
Create two stacks, let's say stck1 and stck2.
Traverse the 1st list and push a copy of each node you traverse onto stck1.
Same as the step above, but this time traverse the 2nd list and push the copies of its nodes onto stck2.
Now, pop from both stacks and check whether the two nodes are equal; if yes, keep a reference to them and continue. If not, then the previous nodes which were equal are actually the merge point we were looking for.
int FindMergeNode(Node headA, Node headB) {
    Node currentA = headA;
    Node currentB = headB;

    // Do till the two nodes are the same
    while (currentA != currentB) {
        // If currentA reached the end of its list, start over from headA
        if (currentA.next == null) {
            currentA = headA;
        } else {
            currentA = currentA.next;
        }
        // likewise for currentB
        if (currentB.next == null) {
            currentB = headB;
        } else {
            currentB = currentB.next;
        }
    }
    return currentB.data;
}
We can use two pointers and move them in such a fashion that if one of the pointers becomes null, we point it to the head of the other list, and the same for the other; this way, if the list lengths are different, they will meet in the second pass.
If length of list1 is n and list2 is m, their difference is d=abs(n-m). They will cover this distance and meet at the merge point.
Code:
int findMergeNode(SinglyLinkedListNode* head1, SinglyLinkedListNode* head2) {
    SinglyLinkedListNode* start1 = head1;
    SinglyLinkedListNode* start2 = head2;
    while (start1 != start2) {
        start1 = start1->next;
        start2 = start2->next;
        if (!start1)
            start1 = head2;
        if (!start2)
            start2 = head1;
    }
    return start1->data;
}
Here is a naive solution; no need to traverse the whole lists.
If your node structure has three fields, like
struct node {
    int data;
    int flag;  //initially set the flag to zero for all nodes
    struct node *next;
};
say you have two heads (head1 and head2) pointing to the heads of the two lists.
Traverse both lists at the same pace and set flag = 1 (a visited flag) for each node you visit; then
if (node->next->flag == 1) //possibly the longer list will have this opportunity
    //node->next will be your required node.
How about this:
If you are allowed to traverse each list only once, you can create a new node, traverse the first list to have every node point to this new node, and traverse the second list to see if any node is pointing to your new node (that's your merge point). If the second traversal doesn't lead to your new node, then the original lists don't have a merge point.
If you are allowed to traverse the lists more than once, then you can traverse each list to find out their lengths and, if they are different, omit the "extra" nodes at the beginning of the longer list. Then just traverse both lists one step at a time and find the first merging node.
Steps in Java:
Create a map.
Start traversing both lists and put every traversed node into the Map, using something unique to the node (say its node id) as the key, and put 1 as the initial value for all of them.
Whenever the first duplicate key comes up, increment the value for that key (say its value now becomes 2, which is > 1).
Get the key whose value is greater than 1; that should be the node where the two lists merge.
We can efficiently solve it by introducing an "isVisited" field. Traverse the first list and set the "isVisited" value to "true" for all nodes till the end. Now start from the second list and find the first node where the flag is true - and boom, it's your merging point.
Step 1: Find the length of both lists.
Step 2: Find the difference and move a pointer into the longer list by that difference.
Step 3: Now both lists are at the same position (the same distance from the end).
Step 4: Iterate through the lists to find the merge point.
#Pseudocode
def findmergepoint(list1, list2):
    lendiff = abs(list1.length() - list2.length())
    biggerlist = list1 if list1.length() > list2.length() else list2   # list with the bigger length
    smallerlist = list2 if list1.length() > list2.length() else list1  # list with the smaller length
    # move into the bigger list by the difference, to level both lists at the same position
    for i in range(0, lendiff):
        biggerlist = biggerlist.next
    # looped only once
    while biggerlist is not None and smallerlist is not None:
        if biggerlist == smallerlist:
            return biggerlist   # point of intersection
        biggerlist = biggerlist.next
        smallerlist = smallerlist.next
    return None   # no intersection found
int FindMergeNode(Node *headA, Node *headB)
{
    Node *tempB = headB;
    while (headA->next != NULL)
    {
        while (tempB->next != NULL)
        {
            if (tempB == headA)
                return tempB->data;
            tempB = tempB->next;
        }
        headA = headA->next;
        tempB = headB;
    }
    return headA->data;
}
Use a Map or Dictionary to store the address vs the value of each node. If an address already exists in the Map/Dictionary, then the value for that key is the answer.
I did this:
int FindMergeNode(Node headA, Node headB) {
    Map<Object, Integer> map = new HashMap<Object, Integer>();
    while (headA != null || headB != null)
    {
        if (headA != null && map.containsKey(headA.next))
        {
            return map.get(headA.next);
        }
        if (headA != null && headA.next != null)
        {
            map.put(headA.next, headA.next.data);
            headA = headA.next;
        }
        if (headB != null && map.containsKey(headB.next))
        {
            return map.get(headB.next);
        }
        if (headB != null && headB.next != null)
        {
            map.put(headB.next, headB.next.data);
            headB = headB.next;
        }
    }
    return 0;
}
An O(n) complexity solution, but based on an assumption.
The assumption is: both lists hold only positive integers.
Logic: make all the integers of list1 negative. Then walk through list2 till you get a negative integer. Once found, take it, change the sign back to positive and return it.
static int findMergeNode(SinglyLinkedListNode head1, SinglyLinkedListNode head2) {
    SinglyLinkedListNode current = head1; //head1 is given to be not null
    //mark all head1 nodes as negative
    while (true) {
        current.data = -current.data;
        current = current.next;
        if (current == null) break;
    }
    current = head2; //given as not null
    while (true) {
        if (current.data < 0) return -current.data;
        current = current.next;
    }
}
You can add the nodes of list1 to a HashSet and then loop through the second list, checking whether each node of list2 is already present in the set. If yes, then that's the merge node.
static int findMergeNode(SinglyLinkedListNode head1, SinglyLinkedListNode head2) {
    HashSet<SinglyLinkedListNode> set = new HashSet<SinglyLinkedListNode>();
    while (head1 != null)
    {
        set.add(head1);
        head1 = head1.next;
    }
    while (head2 != null) {
        if (set.contains(head2)) {
            return head2.data;
        }
        head2 = head2.next;
    }
    return -1;
}
Solution using JavaScript:
var getIntersectionNode = function(headA, headB) {
    if (headA == null || headB == null) return null;
    let countA = listCount(headA);
    let countB = listCount(headB);
    let diff = 0;
    if (countA > countB) {
        diff = countA - countB;
        for (let i = 0; i < diff; i++) {
            headA = headA.next;
        }
    } else if (countA < countB) {
        diff = countB - countA;
        for (let i = 0; i < diff; i++) {
            headB = headB.next;
        }
    }
    return getIntersectValue(headA, headB);
};

function listCount(head) {
    let count = 0;
    while (head) {
        count++;
        head = head.next;
    }
    return count;
}

function getIntersectValue(headA, headB) {
    while (headA && headB) {
        if (headA === headB) {
            return headA;
        }
        headA = headA.next;
        headB = headB.next;
    }
    return null;
}
If editing the linked list is allowed,
then just make the next pointers of all the nodes of list 2 null.
Find the data value of the last node of list 1.
This will give you the intersecting node in a single traversal of both lists, with "no hi-fi logic".
Follow this simple logic to solve the problem:
Since both pointers A and B travel at the same speed, to meet at the same point they must cover the same distance, and we can achieve this by adding the length of one list to the other (letting each pointer continue onto the other list).
