I have these two Find algorithms, which look the same to me. Can anyone help me understand why they are actually different?
Find(x):
    if x.parent = x then
        return x
    else
        return Find(x.parent)
vs
Find(x):
    if x.parent = x then
        return x
    else
        x.parent <- Find(x.parent)
        return x.parent
I interpret the first one as
int i = 0;
return i++;
while the second one as
int i = 0;
int tmp = i++;
return tmp;
which are exactly the same to me.
This looks like the disjoint-set data structure.
Now to the question:
For the sake of clarity, call the first version FindA and the second FindB.
Suppose you have structure:
0
|
1
|
2
|
...
n
The first call to FindA(n) will return 0 in O(n), the second call will again take O(n), and so on.
If you call FindB(n), it will also return 0 in O(n), but it will additionally modify the structure:
     0
   / | \
  1  2 ... n
Now a second call to FindB(n) will return 0 in O(1). Moreover, FindB(k) will now return 0 in O(1) for any k in the structure.
The second version changes the value of x.parent, as a side effect, to the result of Find; this is known as path compression.
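To make the difference concrete, here is a minimal C++ sketch of both versions (the Node type and field name are my own, just for illustration):

struct Node {
    Node *parent; // a root points to itself
};

// FindA: no path compression; walks the whole chain on every call.
Node *findA(Node *x) {
    if (x->parent == x) return x;
    return findA(x->parent);
}

// FindB: path compression; every node visited is re-pointed
// directly at the root, so later finds on the same path are O(1).
Node *findB(Node *x) {
    if (x->parent == x) return x;
    x->parent = findB(x->parent); // the side effect: flatten the path
    return x->parent;
}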
Related
I am studying the union-find algorithm, but I am unsure which value ends up being the parent.
For example, 0 to 3 are connected in one line, so we draw the line like this:
0 - 1 - 2 - 3
In this case, what is 3's parent? Is 3's parent 2?
int parent[1000001];

int Find(int x){
    if(x == parent[x]){          // x is a root: it is its own parent
        return x;
    }
    else{
        int y = Find(parent[x]); // find the root of x's tree
        parent[x] = y;           // path compression
        return y;
    }
}
void Union(int x, int y){
    x = Find(x);   // root of x's set
    y = Find(y);   // root of y's set
    parent[y] = x; // attach y's root under x's root
}
Which value actually becomes the parent of 3 depends on the order in which you unite pairs of values.
But first of all, your code should initialise parent before connecting any points. When there are no connections yet, you must have, for every i:
parent[i] = i
Then, if for instance your calls are as follows:
Union(0, 1)
Union(1, 2)
Union(2, 3)
...the parent relationship will evolve like this (pseudo code):
Union(0, 1)
    x := 0
    y := 1
    parent[1] := 0
Union(1, 2)
    x := 0 // because Find looks up the "root" of 1
    y := 2
    parent[2] := 0
Union(2, 3)
    x := 0 // because Find looks up the "root" of 2
    y := 3
    parent[3] := 0
Remark
Your implementation of union-find has path compression, but it does not improve tree balancing with a size or rank attribute. The implementation becomes most efficient when it uses both strategies; see the sketch below and Wikipedia on this topic.
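For illustration, here is a sketch of how the same code could add union by rank; the rank array and the early-exit check are my additions, not part of the original code:

int parent[1000001];
int rank_[1000001]; // upper bound on tree height; all zeros initially

int Find(int x) {
    if (x == parent[x]) return x;
    return parent[x] = Find(parent[x]); // path compression, as before
}

void Union(int x, int y) {
    x = Find(x);
    y = Find(y);
    if (x == y) return;          // already in the same set
    if (rank_[x] < rank_[y]) {   // attach the shorter tree
        int t = x; x = y; y = t; // ...under the taller one
    }
    parent[y] = x;
    if (rank_[x] == rank_[y]) rank_[x]++;
}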
Question:
Implement a function to check if a binary tree is balanced (i.e. no
two nodes differ in height from the root by more than 1).
Solution:
int maxDepth(Node *root)
{
    if(!root) return 0;
    return 1 + max(maxDepth(root->left), maxDepth(root->right));
}

int minDepth(Node *root)
{
    if(!root) return 0;
    return 1 + min(minDepth(root->left), minDepth(root->right));
}

bool isBalanced(Node *root)
{
    return maxDepth(root) - minDepth(root) <= 1;
}
Can someone help me understand the intuition behind this solution? I'm struggling to "see" the recursion behind tree algorithms. I know that maxDepth and minDepth are supposed to find the height of the node of maximum depth and minimum depth in the tree, respectively, but I don't understand how the recursion works to do that.
More importantly, I don't quite know how I could come up with this solution on my own. So any tips as to how to approach tree problems in general would be greatly appreciated.
The best way to understand it is to look at an example:
    a
   / \
  b   c
     / \
    d   e
when you call maxDepth on root node 'a', what will the following code do?
return 1 + max(maxDepth(root->left), maxDepth(root->right));
it will return 1 + the max of maxDepth('b') and maxDepth('c')
maxDepth('b') will return 1 because:
1 + max( maxDepth(NULL), maxDepth(NULL) ) = 1 + max(0, 0) = 1 + 0 = 1
(the NULLs above come from 'b'->left and 'b'->right)
so, getting back to maxDepth('a'), we now know that it returns:
maxDepth('a') = 1 + max( 1, maxDepth('c') )
maxDepth('c') will follow the same steps and return 2. Hence:
maxDepth('a') = 1 + max( 1, 2 ) = 1 + 2 = 3
For minDepth() the flow is the same; the only difference is using min() instead of max().
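As a quick self-check, here is a minimal, runnable sketch that builds the example tree above and runs the three functions from the solution (the Node definition is an assumption, since the original snippet does not show it):

#include <algorithm>
#include <iostream>

struct Node { Node *left; Node *right; };

int maxDepth(Node *root) {
    if (!root) return 0;
    return 1 + std::max(maxDepth(root->left), maxDepth(root->right));
}

int minDepth(Node *root) {
    if (!root) return 0;
    return 1 + std::min(minDepth(root->left), minDepth(root->right));
}

bool isBalanced(Node *root) {
    return maxDepth(root) - minDepth(root) <= 1;
}

int main() {
    // a has children b and c; c has children d and e
    Node d{nullptr, nullptr}, e{nullptr, nullptr};
    Node b{nullptr, nullptr}, c{&d, &e};
    Node a{&b, &c};
    std::cout << maxDepth(&a) << ' '          // 3
              << minDepth(&a) << ' '          // 2
              << isBalanced(&a) << std::endl; // 1: balanced by this definition
}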
There are two random functions, f1() and f2().
f1() returns 1 with probability p1, and 0 with probability 1-p1.
f2() returns 1 with probability p2, and 0 with probability 1-p2.
I want to implement a new function f3() which returns 1 with a given probability p3, and 0 with probability 1-p3. In the implementation of f3() we can use f1() and f2(), but no other random functions.
If p3=0.5, an example of an implementation:
int f3()
{
    do
    {
        int a = f1();
        int b = f1();
        if (a == b) continue;
        // when it reaches here:
        // a==1 (and b==0) with probability p1(1-p1)
        // b==1 (and a==0) with probability (1-p1)p1
        if (a == 1) return 1; // now returns 1 with probability 0.5
        if (b == 1) return 0;
    } while (1);
}
This implementation of f3() gives a random function that returns 1 with probability 0.5, and 0 with probability 0.5. But how can I implement f3() with p3=0.4? I have no idea.
I wonder, is that task possible at all? And how would one implement such an f3()?
Thanks in advance.
p1 = 0.77 -- arbitrary value between 0 and 1

function f1()
    if math.random() < p1 then
        return 1
    else
        return 0
    end
end

-- f1() is enough. We don't need f2()

p3 = 0.4 -- arbitrary value between 0 and 1
--------------------------
function f3()
    local left = 0
    local right = 1
    repeat
        local middle = left + (right - left) * p1
        if f1() == 1 then
            right = middle
        else
            left = middle
        end
        if right < p3 then -- completely below
            return 1
        elseif left >= p3 then -- completely above
            return 0
        end
    until false -- loop forever
end
This can be solved if p3 is a rational number.
We should use conditional probabilities for this.
For example, if you want to make this for p3=0.4, the method is the following:
Calculate the fractional form of p3. In our case it is p3=0.4=2/5.
Now generate as many random variables from the same distribution (let's say, from f1, we won't use f2 anyway) as the denominator, call them X1, X2, X3, X4, X5.
We should regenerate all these random X variables until their sum equals the numerator in the fractional form of p3.
Once this is achieved then we just return X1 (or any other Xn, where n was chosen independently of the values of the X variables). Since there are 2 1s among the 5 X variables (because their sum equals the numerator), the probability of X1 being 1 is exactly p3.
For irrational p3 this approach does not work, since there is no fractional form. I'm not sure, but I think it can be solved for p3 of the form p1*q+p2*(1-q), where q is rational, with a similar method: generating the appropriate number of Xs with distribution f1 and Ys with distribution f2 until they reach a specific predefined sum, and returning one of them. This still needs to be worked out in detail.
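Here is a minimal C++ sketch of the rational method above, assuming p3 = a/b with integers 0 < a < b (the helper name sampleRational is mine):

#include <vector>

int f1(); // assumed given: returns 1 with probability p1

// Returns 1 with probability a/b, using only f1().
// Draw b samples until exactly a of them are 1; conditioned on that
// event, each position holds a 1 with probability a/b, so returning
// X1 (index 0) has exactly the desired distribution.
int sampleRational(int a, int b) {
    std::vector<int> x(b);
    while (true) {
        int sum = 0;
        for (int i = 0; i < b; i++) {
            x[i] = f1();
            sum += x[i];
        }
        if (sum == a) return x[0];
    }
}

// For p3 = 0.4 = 2/5: call sampleRational(2, 5)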
First of all, that's a nice problem to tweak one's brain on. I managed to solve the problem for p3 = 0.4, which is just what you asked for! And I think the generalisation of such a problem is not so trivial. :D
Here is how you can solve it for p3 = 0.4:
The intuition comes from your example. If we generate a number from f1() five times in an iteration (see the code below), we can have 32 possible results, like below:
1: 00000
2: 00001
3: 00010
4: 00011
.....
.....
32: 11111
Among these, there are 10 results with exactly two 1's in them! After identifying this, the problem becomes simple: just return 1 for 4 of those combinations and return 0 for the other 6 (as probability 0.4 means getting 1 in 4 cases out of 10). You can do that like below:
int f3()
{
    do{
        int a[5];
        int numberOfOneInA = 0;
        for(int i = 0; i < 5; i++){
            a[i] = f1();
            if(a[i] == 1){
                numberOfOneInA++;
            }
        }
        if (numberOfOneInA != 2) continue;
        else return a[0]; // out of the 10 cases, a[0] is 1 in exactly 4!
    } while(1);
}
Waiting to see a generalised solution.
Cheers!
Here is an idea that will work when p3 is of the form a/2^n (a rational number whose denominator is a power of 2).
Generate n random bits, each being 1 with probability 0.5 (a fair bit can be extracted from f1 with the trick shown in the question):
x1, x2, ..., xn
Interpret them as a binary number in the range 0...2^n-1; each number in this range has equal probability. If this number is less than a, return 1, else return 0.
Now, since this question is in a computer-science context, it seems reasonable to assume that p3 is of the form a/2^n (this is a common representation of numbers in computers).
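A minimal sketch of this idea, with the fair bit obtained via the trick from the question (the helper names fairBit and sampleDyadic are mine; the next answer gives a fuller implementation):

int f1(); // assumed given: returns 1 with probability p1

// A fair bit from a biased source (von Neumann): draw pairs until they differ.
int fairBit() {
    while (true) {
        int a = f1(), b = f1();
        if (a != b) return a; // P(a=1,b=0) == P(a=0,b=1)
    }
}

// Returns 1 with probability a / 2^n.
int sampleDyadic(unsigned a, int n) {
    unsigned x = 0;
    for (int i = 0; i < n; i++)
        x = (x << 1) | fairBit(); // x becomes uniform on 0..2^n-1
    return x < a ? 1 : 0;
}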
I implemented the ideas of anatolyg and Egor:
#include <cstdlib>

long long mystep = 0, dashenstep = 0; // iteration counters, for comparing costs

inline double random01(void) // uniform in [0,1]
{
    return static_cast<double>(rand()) / static_cast<double>(RAND_MAX);
}

const double p1 = 0.8;

int rand_P1(void)
{
    return random01() < p1;
}

int rand_P2(void) // returns a fair bit (probability 0.5) via the von Neumann trick
{
    int x, y;
    while (1)
    {
        mystep++;
        x = rand_P1(); y = rand_P1();
        if (x ^ y) return x;
    }
}

double p3 = random01();

int rand_P3(void) // anatolyg's idea: compare fair bits against the binary expansion of p3
{
    double tp = p3; int bit, x;
    while (1)
    {
        if (tp * 2 >= 1) { bit = 1; tp = tp * 2 - 1; }
        else             { bit = 0; tp = tp * 2; }
        x = rand_P2();
        if (bit ^ x) return bit;
    }
}

int rand2_P3(void) // Egor's idea: narrow an interval using the biased bit directly
{
    double left = 0, right = 1, mid;
    while (1)
    {
        dashenstep++;
        mid = left + (right - left) * p1;
        int x = rand_P1();
        if (x) right = mid; else left = mid;
        if (right < p3) return 1;
        if (left > p3) return 0;
    }
}
With some heavy math, I get the following (assuming p3 is uniformly distributed in [0,1)): the expected number of iterations for Egor's method is (1-p1^2-(1-p1)^2)^(-1), while for anatolyg's it is 2(1-p1^2-(1-p1)^2)^(-1).
Algorithmically speaking, yes, this task can be done. Programmatically it is also possible, but it is a complex problem.
Let's take an example. Let
F1(1) = .5, which means F1(0) = .5
F2(1) = .8, which means F2(0) = .2
Suppose you need an F3 such that F3(1) = .128.
Let's try decomposing it:
.128
= (2^7)*(10^-3) // decompose this into known values
= (8/10)*(8/10)*(2/10)
= F2(1)&F2(1)*(20/100) // as no Fi(1) == 2/10
= F2(1)&F2(1)*(5/10)*(4/10)
= F2(1)&F2(1)&F1(1)*(40/100)
= F2(1)&F2(1)&F1(1)*(8/10)*(5/10)
= F2(1)&F2(1)&F1(1)&F2(1)&F1(1)
So F3(1) = .128 if we define F3() = F2() & F2() & F1() & F2() & F1()
Similarly, if you want F4(1) = .9:
Define F4() = F1() | F2(). Then F4(0) = F1(0)*F2(0) = .5*.2 = .1, which means F4(1) = 1 - 0.1 = 0.9.
This works because F4 is zero only when both components are zero.
So using these operations (& and |, plus not (!) and xor (^) if you want) on a combination of f1 and f2 calls will surely give you an F3 built purely out of f1 and f2.
Finding the combination which gives you the exact probability may, however, be an NP-hard search problem.
So, finally, the answer to your question of whether it is possible: YES, and this is one way of doing it; no doubt many tweaks can be made to optimise this approach.
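A minimal sketch of the two decompositions above, assuming f1() and f2() are given (logical && and || make the intent explicit; short-circuiting does not change the resulting probabilities because the calls are independent):

int f1(); // assumed given: returns 1 with probability .5
int f2(); // assumed given: returns 1 with probability .8

// F3(1) = .8 * .8 * .5 * .8 * .5 = .128 (AND of independent calls)
int f3() {
    return f2() && f2() && f1() && f2() && f1();
}

// F4(0) = F1(0) * F2(0) = .5 * .2 = .1, so F4(1) = .9 (OR of independent calls)
int f4() {
    return f1() || f2();
}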
I'm supposed to use recursion to output the total number of unique north-east paths ne(x, y) to get from point A to point B, where B is x rows north and y columns east of A. In addition, I am required to print the possible unique NE paths.
I know how to use recursion to get the total number of unique paths. However, I am stuck with using recursion to print all the NE paths correctly.
This is the given output of some test cases:
[image of output]
Anyway, here's a screenshot of my faulty recursive code.
Please advise me on where I went wrong. I have been burning a lot of time on this, but I still can't reach a solution.
I think you should print when rows == 0 && cols == 0, because that is the case when you've reached point B.
Also, why are you using path+="N" in the first ne call in the return statement? That compound assignment adds "N" to the original path, so you then pass path+"N"+"E" in the second call.
Try the following:

public static int ne( int rows, int cols, String path )
{
    if( rows == 0 && cols == 0 )
    {
        System.out.println(path);
        return 1;
    }
    int npaths = 0, wpaths = 0;
    if( rows != 0 )
        npaths = ne( rows-1, cols, path+"N" );
    if( cols != 0 )
        wpaths = ne( rows, cols-1, path+"E" );
    return npaths + wpaths;
}
This is an interview question: "How would you build a distributed algorithm to compute the balance of parentheses?"
Usually the balance algorithm scans a string from left to right and uses a stack to make sure that the number of open parentheses is always >= the number of close parentheses and that, finally, the number of open parentheses == the number of close parentheses.
How would you make it distributed?
You can break the string into chunks and process each separately, assuming you can read and send to the other machines in parallel. You need two numbers for each chunk:
The minimum nesting depth achieved relative to the start of the string.
The total gain or loss in nesting depth across the whole string.
With these values, you can compute the values for the concatenation of many chunks as follows:
minNest = 0
totGain = 0
for p in chunkResults
    minNest = min(minNest, totGain + p.minNest)
    totGain += p.totGain
return new ChunkResult(minNest, totGain)
The parentheses are matched if the final values of totGain and minNest are zero.
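A minimal sketch of this scheme (the ChunkResult type and function names are mine; in a real system summarize would run on each machine and the merge loop on a coordinator):

#include <string>
#include <vector>
#include <algorithm>

struct ChunkResult {
    int minNest; // minimum nesting depth relative to the chunk's start
    int totGain; // net change in nesting depth over the chunk
};

// Computed independently, in parallel, for each chunk.
ChunkResult summarize(const std::string &chunk) {
    int nest = 0, minNest = 0;
    for (char c : chunk) {
        if (c == '(') nest++;
        else if (c == ')') nest--;
        minNest = std::min(minNest, nest);
    }
    return {minNest, nest};
}

// Combine the chunk summaries in string order.
bool balanced(const std::vector<ChunkResult> &chunkResults) {
    int minNest = 0, totGain = 0;
    for (const ChunkResult &p : chunkResults) {
        minNest = std::min(minNest, totGain + p.minNest);
        totGain += p.totGain;
    }
    return minNest == 0 && totGain == 0;
}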
I would apply the map-reduce pattern, in which the map function processes a part of the string and returns either an empty string, if its parentheses are balanced, or a string with the remaining unmatched parentheses.
The reduce function would then concatenate the results of two map calls and process the concatenation again, returning the same kind of result as map. At the end of all the computations you'd obtain either an empty string or a string containing the unbalanced parentheses. A sketch follows.
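A hedged sketch of that idea (function names are mine): after cancelling matched pairs, every chunk reduces to a string of the form ")))(((", and two remainders can be merged by cancelling across the boundary:

#include <string>

// Map step: reduce a chunk to its unmatched parentheses, always ")...)(...(".
std::string residue(const std::string &s) {
    int closes = 0, opens = 0;
    for (char c : s) {
        if (c == '(') opens++;
        else if (c == ')') {
            if (opens > 0) opens--; // this ')' matches an earlier '('
            else closes++;          // unmatched ')'
        }
    }
    return std::string(closes, ')') + std::string(opens, '(');
}

// Reduce step: concatenate two residues and reduce again.
std::string merge(const std::string &a, const std::string &b) {
    return residue(a + b);
}

// The whole string is balanced iff the final residue is empty.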
I'll try to give a more detailed explanation of @jonderry's answer. Code first, in Scala:
def parBalance(chars: Array[Char], chunkSize: Int): Boolean = {
  require(chunkSize > 0, "chunkSize must be greater than 0")

  def traverse(from: Int, until: Int): (Int, Int) = {
    var count = 0 // net gain in nesting depth
    var stack = 0 // currently unmatched '(' within this chunk
    var nest = 0  // minimum nesting depth, always <= 0
    for (n <- from until until) {
      val cur = chars(n)
      if (cur == '(') {
        count += 1
        stack += 1
      } else if (cur == ')') {
        count -= 1
        if (stack > 0) stack -= 1
        else nest -= 1
      }
    }
    (nest, count)
  }

  def reduce(from: Int, until: Int): (Int, Int) = {
    val m = (until + from) / 2
    if (until - from <= chunkSize) {
      traverse(from, until)
    } else {
      // parallel(...) runs its two arguments concurrently
      parallel(reduce(from, m), reduce(m, until)) match {
        case ((minNestL, totGainL), (minNestR, totGainR)) =>
          ((minNestL min (minNestR + totGainL)), (totGainL + totGainR))
      }
    }
  }

  reduce(0, chars.length) == (0, 0)
}
Given a string, if we remove the balanced parentheses, what's left is of the form ")))(((". Let n be minus the number of ')' and m the number of '(' (so m >= 0 and n <= 0, for easier calculation). Here n is minNest and m+n is totGain. To have a truly balanced string we need m+n == 0 && n == 0.
In a parallel reduction, how do we derive those values for a node from its left and right children? For totGain we just need to add them up. When calculating n for a node, it is either n(left), if n(right) does not contribute, or n(right) + left.totGain, whichever is smaller.
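As a tiny worked example of that merge (values computed with traverse above), split "(())()" into "(()" and ")()":

left  = "(()"  -> (minNest, totGain) = ( 0, +1)
right = ")()"  -> (minNest, totGain) = (-1, -1)

merged: minNest = min(0, +1 + (-1)) = 0
        totGain = +1 + (-1)         = 0

The result (0, 0) means the full string is balanced.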