I have code for calculating factorial, shown below:
fact(1,1).
fact(X,R):- X1 is X-1, fact(X1,R1), R is R1*X.
In my mind this code shouldn't work, but it does! Where is my reasoning wrong? I think that when we call fact(3,R), it first calculates "X1 is X-1". Then it moves on to the next goal, fact(X1,R1). This calls the clause again, so execution goes back to the head fact(X,R), and this continues until we reach fact(1,1). That would mean it never gets to the "R is R1*X" part. So it seems I am thinking about this wrongly.
Can anyone tell me, step by step, the order in which this code executes?
Thanks
Once we "reach" fact(1,1), execution "returns" to the calling recursive level and proceeds to the R is R1*X part of that level, with R1=1. Then it returns again to the previous level, and so on. Let's look at a non-trivial call:
fact(3,R):
    X <- 3,
    X1 <- 3-1 = 2,
    fact(2,R1):
        X' <- 2,
        X1' <- 2-1 = 1,
        fact(1,R1') => R1' = 1 (matched by fact(1,1))
        R' <- R1' * X' = 2
    R1 = R' = 2
R <- R1*X = 2*3 = 6.
Here the variables marked with ' denote the variables of the nested fact(2,R1) call. The variables without ' belong to the topmost call.
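If it helps, here is a minimal Python sketch of the same shape (just an analogy, not Prolog's actual resolution mechanism): the multiplication happens only after the recursive call has returned, exactly as R is R1*X runs only after the goal fact(X1,R1) has succeeded.

def fact(x):
    # Base case, analogous to the Prolog fact(1,1).
    if x == 1:
        return 1
    # First recurse (analogous to calling the goal fact(X1, R1))...
    r1 = fact(x - 1)
    # ...then multiply on the way back up (analogous to R is R1*X).
    return r1 * x

print(fact(3))  # prints 6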
I have been away from Mathematica for quite a while and am trying to fix some old notebooks from v4 that are no longer working under v11. I'm also a tad rusty.
I am attempting to use functional minimization to fit a polynomial of variable degree to an arbitrary function (F) given a starting guess (ao) and domain of interest (d). Note that while F is arbitrary, its nature is such that the integral of the product of F and a polynomial (or F^2) can always be evaluated algebraically.
For the sake of example, I'll use the following inputs:
ao = { 1, 2, 3, 4 }
d = { -1, 1 }
F[x_] = Sin[x]
To do so, I create an array of 'indexed' variables
polyCoeff = Array[a, Length[ao], 0]
Result: polyCoeff = {a[0], a[1], a[2], a[3]}
I then create the polynomial itself using the following
genPoly[{},x_] := 0
genPoly[a_List,x_] := First[a] + x genPoly[Rest[a],x]
poly = genPoly[polyCoeff,x]
Result: poly = a[0] + x (a[1] + x (a[2] + x a[3]))
I then define my objective function as the integral of the squared difference between this polynomial and the function I am attempting to fit:
Q = Integrate[ (poly - F[x])^2, {x, d[[1]],d[[2]]} ]
Result: Q = 0.545351 - 2. a[0.]^2 + 0.66667 a[1.]^2 + .....
And this is where things break down. poly looks just as I expected: a polynomial in x with coefficients that look like a[0], a[1], a[2], ... But Q is not exactly what I expected. I expected and got a new polynomial, but now the coefficients contain a[0.], a[1.], a[2.], ...
The next step is to create the initial guess for FindMinimum
init = Transpose[{polyCoeff,ao}]
Result: {{a[0],1},{a[1],2},{a[2],3},{a[3],4}}
This looks fine.
But when I make the call to FindMinimum, I get an error because the coefficients passed in the objective (a[0.],a[1.],...) do not match those passed in the initial guess (a[0],a[1],...).
S = FindMinimum[Q,init]
So I think my question is how do I keep Integrate from changing the arguments to my coefficients? But, I am open to other approaches as well. Keep in mind though that this is "legacy" work that I really don't want to have to completely revamp.
Thanks much for any/all help.
I'm currently stuck on a loop invariant proof in my homework assignment. The algorithm whose correctness I need to prove is:
Multiply(a,b)
  x=a
  y=0
  WHILE x>=b DO
    x=x-b
    y=y+1
  IF x=0 THEN
    RETURN(y)
  ELSE
    RETURN(-1)
I've looked at several examples of loop invariants, and I have some idea of how it's supposed to work out. However, the algorithm above has two exit conditions, and I'm a bit lost on how to approach this in a loop invariant proof. In particular, it's the termination part I'm struggling with, around the IF and ELSE statements.
So far, all I have comes from looking at the termination of the algorithm: if x = 0, it returns the value of y, which holds n (the number of iterations of the while loop), whereas if x is not 0 and x < b, it returns -1. I just have a feeling I need to prove this somehow.
I hope someone can help shed some light on this for me, as the similar cases I've found here have not been sufficient.
Thanks a lot in advance for your time.
Provided that the algorithm terminates (for this let's assume a>0 and b>0, which is sufficient), one invariant is that at every iteration of your while loop, you have x + by = a.
Proof:
at first, x = a and y = 0 so that's ok
If x + by = a, then (x - b) + (y + 1)b = a, which are the values of x and y for your next iteration
Illustration:
Multiply(a,b)
  x=a
  y=0
  // x + by = a, is true
  WHILE x>=b DO
    // x + by = a, is true
    x=x-b // X = x - b
    y=y+1 // Y = y + 1
    // x + by = a
    // x - b + by + b = a
    // (x-b) + (y+1)b = a
    // X + bY = a, is still true
  // x + by = a, will remain true when you exit the loop
  // since we exited the loop, x < b
  IF x=0 THEN
    // 0 + by = a, and 0 < b
    // y = a/b
    RETURN(y)
  ELSE
    RETURN(-1)
This algorithm returns a/b when b divides a, and -1 otherwise. Multiply does not quite sound like an appropriate name for it...
We can't prove correctness without a specification of exactly what the function is supposed to do, which I can't find in your question. Even the name of the function doesn't help: as noted already, your function returns a/b most of the time when b divides a, and -1 otherwise. Multiply is an inappropriate name for it.
Furthermore, if b=0 and a>=b the "algorithm" doesn't terminate so it isn't even an algorithm.
As Alex M noted, a loop invariant for the loop is x + by = a. At the moment the loop exits, we also have x < b. There are no other guarantees on x, because (presumably) a could be negative. If we had a guarantee that a and b are positive, then we could also guarantee 0 <= x < b at the moment the loop exits, which would mean the loop implements division with remainder (at the end of the loop, y is the quotient and x is the remainder), and that it terminates by an "infinite descent" type argument: a strictly decreasing sequence of non-negative integers x must be finite. You could then conclude that if x = 0, b divides a evenly and the quotient is returned; otherwise -1 is returned.
But that is not a proof, because we are lacking a specification of what the algorithm is supposed to do and of the restrictions on its inputs. (Are a and b arbitrary positive integers? Are negative values and 0 not allowed?)
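A quick way to convince yourself of the invariant claim, whatever the final specification turns out to be, is to translate the pseudocode into Python and assert x + b*y == a around every iteration (a minimal sketch, assuming a >= 0 and b > 0):

def multiply(a, b):
    """Direct transcription of the pseudocode; assumes a >= 0 and b > 0."""
    x, y = a, 0
    assert x + b * y == a              # invariant holds on entry
    while x >= b:
        x, y = x - b, y + 1
        assert x + b * y == a          # invariant is preserved by each iteration
    # On exit: x + b*y == a and 0 <= x < b, so y == a // b and x == a % b.
    return y if x == 0 else -1

print(multiply(15, 5))   # 3  (5 divides 15, quotient returned)
print(multiply(16, 5))   # -1 (5 does not divide 16)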
Can you please help me understand what ends up in r if x = 0, 1, 2, 3?
y <-- 0
z <-- 1
r <-- z
while y < x {
    Multiply z by 2;
    Add z to r;
    Increase y;
}
In every loop iteration z is multiplied by 2, so it takes the values 2, 4, 8, 16, ... (or generally 2^n).
r is initially 1, and each time you add z you get 3, 7, 15, 31, ... (generally 2^(n+1) - 1).
For x = 0 the loop will be skipped, so r stays 1
For x = 1 the loop will... uhm... loop one time, so you get 3
etc.
Apparently, the algorithm computes the sum of the powers of two from 0 to x and uses r as an accumulator for this. On termination, r holds the value 2^(x+1)-1.
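If you want to check this, here is a minimal Python transcription of the loop (the helper name accumulate is my own) that prints r for x = 0, 1, 2, 3:

def accumulate(x):
    # Direct transcription of the pseudocode above.
    y, z = 0, 1
    r = z
    while y < x:
        z *= 2     # Multiply z by 2
        r += z     # Add z to r
        y += 1     # Increase y
    return r

for x in range(4):
    print(x, accumulate(x))   # 0 1, 1 3, 2 7, 3 15  ->  r == 2**(x + 1) - 1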
I've just started a Design and Analysis of Algorithms course, and we've begun with simple algorithms.
There is a division algorithm which I can't make any sense of.
function divide(x,)
Input: 2 integers x and y where y>=1
Output: quotient and remainder of x divided by y
if x=0: return (q,r)=(0,0)
(q,r)=divide(floor (x/2), y)
q=2q, r=2r
if x is odd: r=r+1
if r>=y: r=r-y, q=q+1
return(q,r)
* floor means rounding down to the nearest integer
We were supposed to try this algorithm on 110011 % 101 (binary values). I tried something and got a weird answer; converted into decimal, it was wrong.
So I tried it using simple decimal values instead of binary first.
x=25, y=5
This is what I'm doing
1st: q=x,r= 12,5
2nd: q=x,r= 6,5
3rd: q=x,r= 3,5
4th: q=x,r= 1,5
5th: q=x,r= 0,5
How will this thing work? Every time I run it, the last value of x ends up being 0 (the stopping condition), so it stops and returns q=0, r=0.
Can someone point out where I'm going wrong...
Thanks
I implemented your algorithm (with obvious correction in the arg list) in Ruby:
$ irb
irb(main):001:0> def div(x,y)
irb(main):002:1> return [0,0] if x == 0
irb(main):003:1> q,r = div(x >> 1, y)
irb(main):004:1> q *= 2
irb(main):005:1> r *= 2
irb(main):006:1> r += 1 if x & 1 == 1
irb(main):007:1> if r >= y
irb(main):008:2> r -= y
irb(main):009:2> q += 1
irb(main):010:2> end
irb(main):011:1> [q,r]
irb(main):012:1> end
=> nil
irb(main):013:0> div(25, 5)
=> [5, 0]
irb(main):014:0> div(25, 2)
=> [12, 1]
irb(main):015:0> div(144,12)
=> [12, 0]
irb(main):016:0> div(144,11)
=> [13, 1]
It's working, so you must not be tracking the recursion properly when you try to hand-trace it. I find it helpful to write the logic out on a new sheet of paper for each recursive call and to place the old sheet on top of a stack of prior calls. When I reach a return statement on the current sheet, I wad it up, throw it away, and write the return value in place of the recursive call on the sheet now on top of the stack. I then carry on with the logic on that sheet until I get to another recursive call or a return. Keep repeating this until you run out of sheets on the stack; the return value from the last sheet is the final answer.
The function has a recursive structure, which might be why it's a bit tricky. I'm assuming there's a typo in your function declaration and that divide(x,) should be divide(x, y). Given that the desired result is x/y with the remainder, let's continue. The first line of the function body says that IF the numerator is 0, return 0 with a remainder of 0. This makes sense: whenever b != 0 and a = 0, a / b = 0. Then we set the result to a recursive call with half the original numerator and the same denominator. At some point "half the original numerator" becomes 0 and the base case is reached. After each recursive call returns, there is a bit of computation as the recursion unwinds: because we divided by 2 on the way down, we multiply q and r by 2 on the way back up, and add 1 to the remainder if x is odd. It's hard to visualize in text alone, so step through it on paper with a concrete problem.
Mathematically, the division algorithm (that is its name) states that the remainder must be less than the divisor, i.e. less than 5 when you input 25, 5.
Your trace gives 0, 5. This might mean you should NOT consider the remainder when the quotient is 0, or that a check on the size of the remainder is needed.
function divide(x,)
Input: 2 integers x and y where y>=1
Output: quotient and remainder of x divided by y
if x=0: return (q,r)=(0,0)
(q,r)=divide(floor (x/2), y)
q=2q, r=2r
if x is odd: r=r+1
if r>=y: r=r-y, q=q+1
return(q,r)
* floor means rounding down to the nearest integer
If I remember correctly, this is one of the most basic ways of doing integer division in a simple ALU. It's nice because you can run all the recursive divisions in parallel, since each division is based on looking at just one less bit of the binary representation.
To understand what this does, simply walk through it on paper, as Chris Zhang suggested. Here's what divide(25,5) looks like:
(x,y) = (25,5)
  divide(12,5)
    divide(6,5)
      divide(3,5)
        divide(1,5)
          divide(0,5) // x = 0!
          return (0,0)
        (q,r) = (2*0, 2*0)
        x is odd, so (q,r) = (0,1)
        r < y
        return (0,1)
      (q,r) = (2*0, 2*1)
      x is odd, so (q,r) = (0,3)
      r < y
      return (0,3)
    (q,r) = (2*0, 2*3)
    x is even
    r >= y, so (q,r) = (1,1)
    return (1,1)
  (q,r) = (2*1, 2*1)
  x is even
  r < y
  return (2,2)
(q,r) = (2*2, 2*2)
x is odd, so (q,r) = (4,5)
r >= y, so (q,r) = (5,0)
return (5,0)
As you can see, it works: it gives you a q of 5 and an r of 0. The part you noticed, that you always eventually reach x = 0, is what Chris properly calls "the base case": the case that makes the recursion stop and then unwind.
This algorithm works in any base for the division and the multiplication. It uses the same principle as "123 / 5 = (100 + 20 + 3) / 5 = 20 + 4 + r3 = 24 r 3", just done in binary.
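For completeness, here is a small Python transcription of the same recursion (essentially what the Ruby snippet above does), with each call's result printed so you can check it against the trace; it assumes x >= 0 and y >= 1:

def divide(x, y):
    """Recursive binary division; assumes x >= 0 and y >= 1."""
    if x == 0:
        return (0, 0)                 # base case
    q, r = divide(x // 2, y)          # (q,r) = divide(floor(x/2), y)
    q, r = 2 * q, 2 * r               # undo the halving
    if x % 2 == 1:                    # if x is odd
        r = r + 1
    if r >= y:                        # keep the remainder below y
        r, q = r - y, q + 1
    print(f"divide({x}, {y}) -> (q, r) = ({q}, {r})")
    return (q, r)

divide(25, 5)   # last line printed: divide(25, 5) -> (q, r) = (5, 0)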
I have a sine wave whose parameters I can determine (they are user input). It is of the form y = a*sin(m*x + t).
I'd like to know whether anyone knows an efficient algorithm to figure out the range of y over a given interval [0, x] (x is, again, another input).
For example:
for y = sin(x) (i.e. a=1, t=0, m=1), for the interval [0, 4] I'd like an output like [1, -0.756802]
Please keep in mind, m and t can be anything. Thus, the y-curve does not have to start (or end) at 0 (or 1). It could start anywhere.
Also, please note that x will be discrete.
Any ideas?
PS: I'll use python for implementing the algorithm.
Since the function y(x) = a*sin(m*x + t) is continuous, its maximum over the interval is attained either at one of the interval's ends or at an interior extremum, where dy/dx equals zero.
So:
1. Find the values of y(x) at the ends of the interval.
2. Find out whether dy/dx = a*m*cos(m*x + t) has zero(s) in the interval, and find the values of y(x) at those zero(s).
3. Choose the point where y(x) has the maximum value (and, for the range, the point where it has the minimum value).
If you have more than one full period in the interval then the result is just +/- a.
For less than one period you can evaluate y at the start and end points and then find any maxima or minima between them by solving for y' = 0, i.e. cos(m*x + t) = 0.
All the answers are more or less the same. Thanks guys=)
I think I'd go with something like the following; a rough Python sketch of it is below the steps. (Note that I am renaming the variable I called "x" to "end"; I had used "x" at the beginning to denote the end of my interval on the X-axis.)
1) Evaluate y at 0 and at "end", and use an if-block to assign the two values to the correct PRELIMINARY "min" and "max" of the range.
2) Evaluate the number of evolutions: evolNr = (m*end)/(2*Pi). If evolNr > 1, return [-a, a].
3) If evolNr < 1: first find the root of the derivative, which is at firstRoot = (1/2m)*Pi - phase + q*(1/m)*Pi, where q = ceil( (m/Pi) * ((1/2m)*Pi - phase) ). This gives me the first root at some position x > 0. From then on I know that all the other extrema lie between firstRoot and "end"; there is a new root every (1/m)*Pi.
In code: for (a = firstRoot; a < end; a += (1/m)*Pi) { evaluate y at a; if it's > 0 it's a maximum, so update "max", otherwise update "min" }
return [min, max]
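Here is a minimal Python sketch of that plan (the names a, m, t, end match y = a*sin(m*x + t) and the renaming above; for simplicity it collects the critical points directly from cos(m*x + t) = 0 rather than from the firstRoot formula, and it assumes m != 0 and end > 0):

import math

def sine_range(a, m, t, end):
    """Range of y = a*sin(m*x + t) on [0, end]; assumes m != 0 and end > 0."""
    # Candidate points: the two endpoints of the interval...
    candidates = [0.0, end]
    # ...plus every critical point, i.e. every solution of cos(m*x + t) = 0:
    #   m*x + t = pi/2 + k*pi   =>   x = (pi/2 + k*pi - t) / m
    k_lo = math.floor((t - math.pi / 2) / math.pi)
    k_hi = math.ceil((m * end + t - math.pi / 2) / math.pi)
    for k in range(min(k_lo, k_hi) - 1, max(k_lo, k_hi) + 2):
        x = (math.pi / 2 + k * math.pi - t) / m
        if 0.0 <= x <= end:
            candidates.append(x)
    ys = [a * math.sin(m * x + t) for x in candidates]
    return [min(ys), max(ys)]

print(sine_range(1, 1, 0, 4))   # [-0.756802..., 1.0]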