Reorganizing a formula containing Modulo - algorithm

I have a formula that looks like
a = (b + 1 + c)%d
I want to express c in terms of the rest, i.e. have c on the LHS.
Any suggestions?

a = (b + 1 + c)%d
a + n*d = b + 1 + c
a - 1 - b + n*d = c
For any integer n.
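If you also need a concrete value, choosing the n that puts c into the range 0..d-1 gives the canonical solution. A minimal Haskell sketch (my own illustration; solveC is a made-up name, and it assumes 0 <= a < d):
solveC :: Int -> Int -> Int -> Int
solveC a b d = (a - 1 - b) `mod` d
-- sanity check: (b + 1 + solveC a b d) `mod` d == a  whenever 0 <= a < d
Every other solution for c differs from this one by a multiple of d.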


Is it possible to define an efficient hyper-operation sequence as a recursive squaring function over 2-adics in Haskell?

Let's define a 2-adic in Haskell as its infinite binary expansion:
data Adic = O Adic | I Adic deriving Show
We can represent finite numbers as follows:
zero = O zero
one = I zero
neg1 = I neg1
And we can easily define Inc and Add for Adic:
inc :: Adic -> Adic
inc (O a) = I a
inc (I a) = O (inc a)
add :: Adic -> Adic -> Adic
add (O a) (O b) = O (add a b)
add (O a) (I b) = I (add a b)
add (I a) (O b) = I (add a b)
add (I a) (I b) = O (inc (add a b))   -- 1 + 1 = 0, carry 1
We can convert adics to ints as follows:
a2i :: Int -> Adic -> Int
a2i 0 (O _) = 0                      -- out of bits: treat the rest as all zeros
a2i 0 (I _) = -1                     -- out of bits: treat the rest as all ones
a2i s (O x) = 2 * a2i (s - 1) x
a2i s (I x) = 2 * a2i (s - 1) x + 1
This allows us to do basic arithmetic. For example:
print $ a2i 64 (add one neg1)
which would print 0, after computing 1 + (-1) on 2-adics.
Now, I'm interested in the hyper-operation sequence function, i.e.:
hyp :: Int -> Adic -> Adic -> Adic
hyp 0 a b = add a b
hyp 1 a b = mul a b
hyp 2 a b = exp a b
hyp 3 a b = tet a b
...
An efficient way to implement it is by generalizing the exponentiation by squaring algorithm, as follows:
hyp :: Int -> Adic -> Adic -> Adic
hyp 0 a b = add a b
hyp s a 1 = a
hyp s a (O b) = let r = hyp s a b in hyp (s - 1) r r
hyp s a (I b) = let r = hyp s a (inc b) in hyp (s - 1) r r
To visualize the function above, below is the evaluation of hyp 2 4 4, i.e., 4^4:
4^4 =
4^2 * 4^2 =
4^1 * 4^1 * 4^1 * 4^1 =
4 * 4 * 4 * 4 =
4*4 * 4*4 =
(4*2 + 4*2) * (4*2 + 4*2) =
(4*1 + 4*1 + 4*1 + 4*1) * (4*1 + 4*1 + 4*1 + 4*1) =
(4 + 4 + 4 + 4) * (4 + 4 + 4 + 4) =
16*16 =
16*8 + 16*8 =
16*4 + 16*4 + 16*4 + 16*4 =
16*2 + 16*2 + 16*2 + 16*2 + 16*2 + 16*2 + 16*2 + 16*2 =
16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 + 16*1 =
16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 + 16 =
256
Sadly, the function above doesn't actually work, because the hyp s a 1 = a line attempts to match an Adic against the Int 1, which makes no sense. What it should do, instead, is match against the 2-adic number 1. The problem is: 2-adics are infinite, so we can't pattern-match on them completely. We can't even write an is-one function. As such, while the function above seems elegant and efficient, as far as I can tell it can't be constructed. My question is: am I right in concluding hyp can't be implemented that way, or am I missing something? Is there an alternative approach that would let this work?
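To make the obstacle concrete, the closest thing I can see is a bounded check that inspects only finitely many bits, something like the following sketch (my own illustration, not a real solution; the names isOneUpTo and allZeroUpTo are made up):
isOneUpTo :: Int -> Adic -> Bool
isOneUpTo k (I x) = allZeroUpTo (k - 1) x
isOneUpTo _ (O _) = False

allZeroUpTo :: Int -> Adic -> Bool
allZeroUpTo k _ | k <= 0 = True
allZeroUpTo k (O x)      = allZeroUpTo (k - 1) x
allZeroUpTo _ (I _)      = False
Such a check can never distinguish one from, say, one plus 2^k for a large enough k, so it cannot serve as a genuine base case.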

Sum of series AP GP, CLRS appendix A.1-4

I am trying to prove an equation given in the CLRS exercise book. The equation is:
Sigma from k=0 to infinity of (k-1)/2^k = 0
I solved the LHS but my answer is 1, whereas the RHS should be 0.
Following is my solution:
Let's say S = Sigma k/2^k = 1/2 + 2/2^2 + 3/2^3 + 4/2^4 ....
2S = 1 + 2/2 + 3/2^2 + 4/2^3 ...
2S - S = 1 + ( 2/2 - 1/2) + (3/2^2 - 2/2^2) + (4/2^3 - 3/2^3)..
S = 1+ 1/2 + 1/2^2 + 1/2^3 + 1/2^4..
S = 2 -- eq 1
Now let's say S1 = Sigma (k-1)/2^k = 0/2 + 1/2^2 + 2/2^3 + 3/2^4...
S - S1 = 1/2 + (2/2^2 - 1/2^2) + (3/2^3 - 2/2^3) + (4/2^4 - 3/2^4)....
S - S1 = 1/2 + 1/2^2 + 1/2^3 + 1/2^4...
= 1
From eq 1
2 - S1 = 1
S1 = 1
But the required RHS is 0. Is there anything wrong with my solution? Thanks.
Yes, there is an issue in your solution.
Your calculation of S is correct, but you have calculated S1 incorrectly: you missed the k=0 term of S1. For S this doesn't matter, because the k=0 term is 0 anyway.
Therefore,
S1 = Sigma (k-1)/2^k = -1 + 0/2 + 1/2^2 + 2/2^3 + 3/2^4...
// you missed -1 here because you started substituting values from k=1
S - S1 = -(-1) + 1/2 + (2/2^2 - 1/2^2) + (3/2^3 - 2/2^3) + (4/2^4 - 3/2^4)....
S - S1 = 1 + (1/2 + 1/2^2 + 1/2^3 + 1/2^4...)
= 1 + 1
= 2.
From eq 1
2 - S1 = 2
S1 = 0.
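As a quick numeric sanity check (a small sketch of my own, not part of the original answer; partialS1 is a made-up name), the partial sums of the series indeed approach 0:
partialS1 :: Int -> Double
partialS1 n = sum [ (fromIntegral k - 1) / 2 ^^ k | k <- [0 .. n] ]
-- partialS1 50 is about 0.0 (up to floating-point rounding)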

Convert boolean expression to 3 input NOR

I was taking a look at this link http://lizarum.com/assignments/boolean_algebra/chapter3.html to try to solve an equation I have. The original equation is:
H = MC + MC' + CRD + M'CD'
I simplified it to
H = M + CRD + M'CD'
Here is my attempt:
H = ((M + CRD + M'CD')')'
H = ((M)' * (CRD)' * (M'CD')')'
H = (((M)')' + ((CRD)')' + ((M'CD')')')'
H = ((M')' + (C'+ R' + D')' + (M + C' + D)')'
Is that final equation a 3-input NOR equation? I have a feeling that I'm missing a step that turns the first parenthesized term into three variables.
As a first step, notice that MC + MC' = M.
This simplifies your equation to
H = M + CRD + M'CD'
You can then leave out the M' in the last term: if M' were false, M would be true, and thus H would be true anyway.
H = M + CRD + CD'
This allows you to factor-out C:
H = M + C(RD + D')
The D in RD can be left out (same argument as above):
H = M + C(R + D')
The final result is:
H = M + CR + CD'
You could have arrived at this result using a Karnaugh-Veitch map.
You can convince yourself by asking WolframAlpha.
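If you prefer a mechanical check to WolframAlpha, here is a small exhaustive test (a sketch of my own; original and simplified are made-up names) confirming that H = M + CR + CD' agrees with the original expression for all 16 input combinations:
check :: Bool
check = and [ original m c r d == simplified m c r d
            | m <- [False, True], c <- [False, True]
            , r <- [False, True], d <- [False, True] ]
  where
    original   m c r d = (m && c) || (m && not c) || (c && r && d) || (not m && c && not d)
    simplified m c r d = m || (c && r) || (c && not d)
check evaluates to True, so the two forms are equivalent.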

Recurrences using Substitution Method

Determine the positive constants c and n0 for the following recurrences (using the substitution method):
T(n) = T(ceiling(n/2)) + 1 ... Guess is Big-Oh(log base 2 of n)
T(n) = 3T(floor(n/3)) + n ... Guess is Big-Omega (n * log base 3 of n)
T(n) = 2T(floor(n/2) + 17) + n ... Guess is Big-Oh(n * log base 2 of n).
Here is my solution for Problem 1:
Our guess is: T(n) = O(log_2(n)).
By the induction hypothesis, assume T(k) <= c*log_2(k) for all k < n, where c is a constant and c > 0.
T(n) = T(ceiling(n/2)) + 1
<=> T(n) <= c*log_2(ceiling(n/2)) + 1
<=> " <= c*{log_2(n/2) + 1} + 1
<=> " = c*log_2(n/2) + c + 1
<=> " = c*{log_2(n) - log_2(2)} + c + 1
<=> " = c*log_2(n) - c + c + 1
<=> " = c*log_2(n) + 1
<=> T(n) not_<= c*log_2(n) because c*log_2(n) + 1 not_<= c*log_2(n).
To remedy this, I used the following trick:
T(n) = T(ceiling(n/2)) + 1
<=> " <= c*log(ceiling(n/2)) + 1
<=> " <= c*{log_2 (n/2) + b} + 1 where 0 <= b < 1
<=> " <= c*{log_2 (n) - log_2(2) + b) + 1
<=> " = c*{log_2(n) - 1 + b} + 1
<=> " = c*log_2(n) - c + bc + 1
<=> " = c*log_2(n) - (c - bc - 1) if c - bc -1 >= 0
c >= 1 / (1 - b)
<=> T(n) <= c*log_2(n) for c >= {1 / (1 - b)}
so T(n) = O(log_2(n)).
This solution seems correct to me. My question is: is this the proper approach?
Thanks to all of you.
For the first exercise:
We want to show by induction that T(n) <= ceiling(log(n)) + 1.
Let's assume that T(1) = 1; then T(1) = 1 <= ceiling(log(1)) + 1 = 1 and the base of the induction is proved.
Now, we assume that for every 1 <= i < n it holds that T(i) <= ceiling(log(i)) + 1.
For the inductive step we have to distinguish the cases when n is even and when n is odd.
If n is even: T(n) = T(ceiling(n/2)) + 1 = T(n/2) + 1 <= ceiling(log(n/2)) + 1 + 1 = ceiling(log(n) - 1) + 1 + 1 = ceiling(log(n)) + 1.
If n is odd: T(n) = T(ceiling(n/2)) + 1 = T((n+1)/2) + 1 <= ceiling(log((n+1)/2)) + 1 + 1 = ceiling(log(n+1) - 1) + 1 + 1 = ceiling(log(n+1)) + 1 = ceiling(log(n)) + 1
The last step is tricky, but it is possible because n is odd and therefore cannot be a power of 2.
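For what it's worth, a quick brute-force check of this bound in Haskell (a sketch of my own, using T(1) = 1 as above; tRec and boundHolds are made-up names):
tRec :: Int -> Int
tRec 1 = 1
tRec n = tRec ((n + 1) `div` 2) + 1   -- T(n) = T(ceiling(n/2)) + 1

boundHolds :: Bool
boundHolds = and [ tRec n <= ceiling (logBase 2 (fromIntegral n)) + 1 | n <- [1 .. 1000] ]
boundHolds evaluates to True, matching the induction above.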
Problem #1:
T(1) = t0
T(2) = T(1) + 1 = t0 + 1
T(4) = T(2) + 1 = t0 + 2
T(8) = T(4) + 1 = t0 + 3
...
T(2^(m+1)) = T(2^m) + 1 = t0 + (m + 1)
Letting n = 2^(m+1), we get that T(n) = t0 + log_2(n) = O(log_2(n))
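A tiny numeric check of this closed form (my own sketch; t0 = 5 is an arbitrary starting value and t1 is a made-up name):
t1 :: Int -> Int
t1 1 = 5                      -- t0
t1 n = t1 (n `div` 2) + 1     -- evaluated at powers of 2, where n/2 is exact
-- t1 (2 ^ m) == 5 + m   for every m >= 0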
Problem #2:
T(1) = t0
T(3) = 3T(1) + 3 = 3t0 + 3
T(9) = 3T(3) + 9 = 3(3t0 + 3) + 9 = 9t0 + 18
T(27) = 3T(9) + 27 = 3(9t0 + 18) + 27 = 27t0 + 81
...
T(3^(m+1)) = 3T(3^m) + 3^(m+1) = (3^(m+1))t0 + (3^(m+1))(m+1)
Letting n = 3^(m+1), we get that T(n) = nt0 + nlog_3(n) = O(nlog_3(n)).
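The same kind of check works here (again my own sketch, with t0 = 5 and the made-up name t2):
t2 :: Int -> Int
t2 1 = 5                      -- t0
t2 n = 3 * t2 (n `div` 3) + n -- evaluated at powers of 3
-- t2 (3 ^ m) == 3 ^ m * 5 + 3 ^ m * m   for every m >= 0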
Problem #3:
Consider n = 34. T(34) = 2T(17+17) + 34 = 2T(34) + 34. We can solve this to find that T(34) = -34. We can also see that for odd n, T(n) = 1 + T(n - 1). We continue to find what values are fixed:
T(0) = 2T(17) + 0 = 2T(17)
T(17) = 1 + T(16)
T(16) = 2T(25) + 16
T(25) = T(24) + 1
T(24) = 2T(29) + 24
T(29) = T(28) + 1
T(28) = 2T(31) + 28
T(31) = T(30) + 1
T(30) = 2T(32) + 30
T(32) = 2T(33) + 32
T(33) = T(32) + 1
We get T(32) = 2T(33) + 32 = 2T(32) + 34, meaning that T(32) = -34. Working backward, we get
T(32) = -34
T(33) = -33
T(30) = -38
T(31) = -37
T(28) = -46
T(29) = -45
T(24) = -66
T(25) = -65
T(16) = -114
T(17) = -113
T(0) = -226
As you can see, this recurrence is a little more complicated than the others, and as such, you should probably take a hard look at this one. If I get any other ideas, I'll come back; otherwise, you're on your own.
EDIT:
After looking at #3 some more, it looks like you're right in your assessment that it's O(nlog_2(n)). To see why, you can try listing a bunch of values - I did it from n=0 to n=45. You notice a pattern: it goes from negative numbers to positive numbers around n=43,44. To get the next even-index element of the sequence, you add powers of two, in the following order: 4, 8, 4, 16, 4, 8, 4, 32, 4, 8, 4, 16, 4, 8, 4, 64, 4, 8, 4, 16, 4, 8, 4, 32, ...
These numbers are essentially where you'd mark an arbitrary-length ruler... quarters, halves, eighths, sixteenths, etc. As such, we can solve the equivalent problem of finding the order of the sum 1 + 2 + 1 + 4 + 1 + 2 + 1 + 8 + ... (same as ours, divided by 4, and ours is shifted, but the order will still work). Among the first n terms the value 2^k appears about n/2^(k+1) times, so the sum of the first n terms is about the sum over k = 0 to log_2(n) of (n/2^(k+1))*2^k = (n/2)*log_2(n). Multiply by 4 to get ours, and shift by 34 and perhaps add a constant value to the result. So we're playing around with y = 2n*log_2(n) + k' for some constant k'.
Phew. That was a tricky one. Note that this recurrence does not admit arbitrary "initial conditions"; in other words, it does not describe a family of sequences, but one specific sequence, with no parameterization.
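To make the fixed values above easy to reproduce, here is a small recursive sketch (my own; t3 is a made-up name, and it only covers n >= 34, reusing the value T(34) = -34 derived earlier, since the smaller arguments were resolved by hand above):
t3 :: Int -> Int
t3 34 = -34                                    -- the fixed value found above
t3 n | n > 34    = 2 * t3 (n `div` 2 + 17) + n
     | otherwise = error "handled by the hand analysis above"
For n > 34 the argument n `div` 2 + 17 is strictly smaller than n and never drops below 34, so the recursion terminates.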

Simple equations solving

Think of a system of equations like the following:
a* = b + f + g
b* = a + c + f + g + h
c* = b + d + g + h + i
d* = c + e + h + i + j
e* = d + i + j
f* = a + b + g + k + l
g* = a + b + c + f + h + k + l + m
h* = b + c + d + g + i + l + m + n
...
a, b, c, ... element of { 0, 1 }
a*, b*, c*, ... element of { 0, 1, 2, 3, 4, 5, 6, 7, 8 }
+ ... a normal integer addition
Some of the variables a, b, c, ..., a*, b*, c*, ... are given. I want to determine as many of the other variables (a, b, c, ... but not a*, b*, c*, ...) as logically possible.
Example:
given: a = 0; b = 0; c = 0;
given: a* = 1; b* = 2; c* = 1;
a* = b + f + g ==> 1 = 0 + f + g ==> 1 = f + g
b* = a + c + f + g + h ==> 2 = 0 + 0 + f + g + h ==> 2 = f + g + h
c* = b + d + g + h + i ==> 1 = 0 + d + g + h + i ==> 1 = d + g + h + i
1 = f + g
2 = f + g + h ==> 2 = 1 + h ==> h = 1
1 = d + g + h + i ==> 1 = d + g + 1 + i ==> d = 0; g = 0; i = 0;
1 = f + g ==> 1 = f + 0 ==> f = 1
other variables calculated: d = 0; f = 1; g = 0; h = 1; i = 0;
Can anybody think of a way to perform these operations automatically?
Brute force may be possible in this example, but later there are about 400 a, b, c... variables and 400 a*, b*, c*... variables.
This sounds a little like constraint propagation. You might find "Solving every Sudoku Puzzle" a good read to get the general idea.
The problem is NP-complete. Look at the system of equations:
2 = a + c + d1
2 = b + c + d2
2 = a + b + c + d3
Assume that d1, d2, d3 are dummy variables that are only used once and hence add no constraints other than di = 0 or di = 1. Then from the first equation it follows that c = 1 if a = 0. From the second equation it follows that c = 1 if b = 0, and from the third one we get c = 0 if a = 1 and b = 1. Hence we get the relation
c = a NAND b.
Thus we can express any Boolean circuit using such a system of equations, and therefore the Boolean satisfiability problem can be reduced to solving such a system of equations.
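As a quick illustration of this gadget (a small sketch of my own, not part of the original answer; gadget is a made-up name), enumerating all 0/1 assignments that satisfy the three equations shows that c is forced to equal a NAND b:
gadget :: [(Int, Int, Int)]
gadget = [ (a, b, c)
         | a <- [0, 1], b <- [0, 1], c <- [0, 1]
         , d1 <- [0, 1], d2 <- [0, 1], d3 <- [0, 1]
         , a + c + d1 == 2
         , b + c + d2 == 2
         , a + b + c + d3 == 2 ]
gadget evaluates to [(0,0,1),(0,1,1),(1,0,1),(1,1,0)], i.e. c = a NAND b in every satisfying assignment.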
