This is an algorithm question that I've been struggling with. I figured I could get some insight here. I need to make the following function in Haskell:
Declare the type and define a function that takes two numbers as input and finds their product by addition. That is, add the first number to itself as many times as the second number indicates.
My problem is that this is basically just multiplying two numbers together, but it says that I need to do it with addition. Does anyone have any clue on how to do this?
This is all I can come up with (it's not right): (x + x) * y
Thank you
If a is the first number and b the second,
sum $ take a $ cycle [b]
should do it.
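For completeness, here is that one-liner wrapped in a function with a declared type, since the exercise asks for one (the name mulBySum is just for illustration, and non-negative inputs are assumed):

    -- Product by repeated addition: sum a copies of b.
    -- (Equivalently, sum (replicate a b).)
    mulBySum :: Int -> Int -> Int
    mulBySum a b = sum $ take a $ cycle [b]

    main :: IO ()
    main = print (mulBySum 5 7)  -- prints 35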
mult(x, y):
    sum = 0
    for 1 to y:
        sum = sum + x
    return sum
This is just the algorithm; I do not know Haskell, so the lambda expression in the other answer may be more appropriate. Also, I use an intermediate variable.
PS: forget the previous embarrassing recursive algorithm.
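For reference, here is a rough Haskell rendering of the loop above (just a sketch, with a made-up name), accumulating the sum with a fold:

    -- Add x to a running sum once per step from 1 to y,
    -- mirroring the imperative loop.
    multIter :: Int -> Int -> Int
    multIter x y = foldl (\acc _ -> acc + x) 0 [1 .. y]

    -- multIter 5 7 == 35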
Work it out by induction.
We know the answer to one simple (the simplest) problem: multiplying anything by 0 yields 0. So we write:
mul x 0 = 0
Now, the inductive step: we can build a solution to a bigger problem if we know a solution to the smaller problem; that way we can always reduce any big problem to the smallest problem, for which we know the solution. So, for any y, the solution for y+1 can be found by adding x to the solution for y: mul x (y+1) = x + (mul x y). In Haskell we can't write (y+1) on the left-hand side, so we write, equivalently:
mul x y = x + (mul x (y-1))
This equation will keep adding x until y reaches zero, at which point the base case takes over.
Try this as well:
multiply :: (Num a, Eq a) => a -> a -> a
multiply a 0 = 0
multiply a b = a + multiply a (b - 1)
main = print $ multiply 5 7
I am using a Mathematica notebook to find an analytical solution to the following constrained optimization problem:
Max_{x,y}  y^(1-b) (x^b (1 - a (x/(x+1))))   s.t.   M = P x + q y
I have tried the following code:
Maximize[{y^(1-b)(x^b(1-a(x/(x+1)))), M==Px+qy}, {x,y}]
and it returns the same function as output. In the function, a, b, M, P, and q are all parameters. I have also tried assigning the parameters arbitrary values to test whether Mathematica is unsure how to deal with the parameters. I used the following code:
Maximize[{y^(1-0.5)(x^0.5(1-0.75(x/(x+1)))), 1000=5x+5y},{x,y}]
and it returns the same function. However, if I remove the constraint it will solve the optimization problem.
Maximize[{y^(1-0.5)(x^0.5(1-0.75(x/(x+1))))},{x,y}]
{7.2912*^59,{x->2.89727*^60,y->2.93582*^60}}
I am not sure what to do. After reading about constrained optimization problems, the syntax appears to be correct. Sorry if this question is really basic; I am very new to Mathematica. Also, since I am using a notebook, I could not paste the output from the first two lines in.
The constraint is incorrectly specified; it should be 1000 == 5 x + 5 y. Maximize works better with exact numbers.
Maximize[{Rationalize[y^(1 - 0.5) (x^0.5 (1 - 0.75 (x/(x + 1))))],
1000 == 5 x + 5 y}, {x, y}] // N
(* {25.7537, {x -> 96.97, y -> 103.03}} *)
Prove the correctness of the following recursive algorithm to multiply two natural numbers, for all integer constants c ≥ 2.
function multiply(y,z) comment Return the product yz.
1. if z = 0 then return(0) else
2. return(multiply(cy, z/c) + y · (z mod c))
I saw this algorithm in the “Algorithm Design Manual”.
I know why it works correctly, but I want to know how this algorithm came to be. Is this a good way to think about multiplying two natural numbers with a constant c?
(multiply(cy, z/c) + y · (z mod c))
When c is the base of your representation (like decimal), then this is how multiplication can be done "manually". It's the "shift and add" method.
In base c, cy is a single shift of y to the left (i.e. appending a zero at the right), and z/c is a single shift of z to the right: the rightmost digit is lost.
That lost digit is actually z mod c, which is multiplied with y separately.
Here is an example with c = 10, where the apostrophe signifies the values of the variables in the recursive call.
We perform the multiplication with y for each separate digit of z (retrieved with z mod c). Each next product found in this way is written shifted one more place to the left. Usually the 0 is not written at the right of this shifted product; it is silently assumed:
     354        y
  x   29        z
  ------
    3186        y(z mod c) = 354·9 = 3186
+   708         y'(z' mod c) = yc(z/c mod c) = 3540·2 = 7080
  ------
   10266
So the algorithm just relies on the mathematical basis for this "shift and add" method in a given c-base.
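As an illustration, the recursion can be written out directly in Haskell (a sketch only; the function name and the use of div/mod for the integer division are my own choices):

    -- Shift-and-add multiplication in base c (c >= 2):
    -- multiply(cy, z div c) + y * (z mod c), with multiply(_, 0) = 0.
    multiplyBase :: Integer -> Integer -> Integer -> Integer
    multiplyBase _ _ 0 = 0
    multiplyBase c y z = multiplyBase c (c * y) (z `div` c) + y * (z `mod` c)

    -- multiplyBase 10 354 29 == 10266, matching the worked example above.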
I need to check whether the first given term (for example s(s(nul)), i.e. 2) is divisible by the second term (for example s(nul), i.e. 1).
What I want to do is multiply the given term by two and then check whether that term is smaller than or equal to the other term (if it is equal, the problem is solved).
So far I got this:
checkingIfDividable(X,X).
checkingIfDividable(X,Y) :-
    X > Y,
    multiplication(X,Y).

/* multiplication by two should occur here.
   I can't figure it out. This solution does not work! */

multiplication(Y) :-
    YY is Y * 2,
    checkingIfDividable(X,YY).
I can't seem to figure out how to multiply a term by 2. Any ideas?
If a = n*b with n > 0, then also a = n*b = (1+m)*b = b + m*b with m >= 0.
So if a is divisible by b, and a = b+x, then x is also divisible by b.
In Peano encoding, n = 1+m is written n = s(m).
Take it from here.
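If it helps, here is the same repeated-subtraction idea sketched in Haskell rather than Prolog (the names are mine; the Prolog version stays the exercise):

    -- Peano naturals: Z plays the role of nul.
    data Nat = Z | S Nat

    -- a minus b, if b <= a.
    minus :: Nat -> Nat -> Maybe Nat
    minus a     Z     = Just a
    minus Z     (S _) = Nothing
    minus (S a) (S b) = minus a b

    -- a is divisible by b iff a == 0, or a >= b and (a - b) is divisible by b.
    divisible :: Nat -> Nat -> Bool
    divisible Z _ = True
    divisible _ Z = False      -- a positive number is never divisible by zero
    divisible a b = case minus a b of
                      Just r  -> divisible r b
                      Nothing -> False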
I have written code to solve an equation with like terms (e.g. x^2+5*x+6=0). Here 'x' has two values. I can get the two values by entering ';', but I need to get all possible answers at once when I run the program. Is that possible in Prolog?
Well, for a quadratic equation, if the discriminant is zero, then there is only one solution, so you can directly compute one or two solutions and return them in a list.
The discriminant is the expression under the square root. So the classical Prolog code for a real-number solution reads as follows:
solve(A*_^2+B*_+C=0, L) :-
    D is B^2-4*A*C,
    (   D < 0 -> L = []
    ;   D =:= 0 -> X is (-B)/(2*A), L = [X]
    ;   S is sqrt(D),
        X1 is (-B-S)/(2*A),
        X2 is (-B+S)/(2*A),
        L = [X1,X2]
    ).
Here is an example run:
Welcome to SWI-Prolog (threaded, 64 bits, version 8.1.0)
?- solve(1*x^2+5*x+6=0,L).
L = [-3.0, -2.0].
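For comparison, the same case analysis on the discriminant written in Haskell, returning the roots as a list (a sketch; the name solveQuadratic is mine):

    -- Real roots of a*x^2 + b*x + c = 0: zero, one, or two of them.
    solveQuadratic :: Double -> Double -> Double -> [Double]
    solveQuadratic a b c
      | d < 0     = []
      | d == 0    = [(-b) / (2 * a)]
      | otherwise = [(-b - s) / (2 * a), (-b + s) / (2 * a)]
      where
        d = b * b - 4 * a * c
        s = sqrt d

    -- solveQuadratic 1 5 6 == [-3.0, -2.0], like the Prolog query above.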
In chapter 1 on fixed points, the book says we can find fixed points of certain functions using
f(x) = f(f(x)) = f(f(f(x))) ....
What are those functions?
It doesn't work for y = 2y; when I rewrite it as y = y/2, it works.
Does y need to get smaller every time? Or are there general properties a function has to have so that fixed points can be found by that method?
What conditions should it satisfy for this to work?
According to the Banach fixed-point theorem, if the mapping (function) is a contraction, then a unique fixed point exists and repeated application converges to it. That means, for example, that the iteration is not guaranteed to work for y = 2x (not a contraction), while it does work for y = 0.999·x. In general, if f maps [a,b] to [a,b], then |f(x) - f(y)| should be at most c * |x - y| for some 0 <= c < 1 (for all x, y from [a, b]).
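To make the iteration concrete, here is a minimal Haskell sketch of applying f repeatedly until the value settles (the tolerance and the names are my own assumptions):

    -- Apply f over and over until successive values differ by less than a tolerance.
    fixedPoint :: (Double -> Double) -> Double -> Double
    fixedPoint f = go
      where
        tolerance = 1e-9
        go x
          | abs (next - x) < tolerance = next
          | otherwise                  = go next
          where
            next = f x

    -- fixedPoint (/ 2) 1.0  converges to ~0      (contraction)
    -- fixedPoint cos   1.0  converges to ~0.739  (contraction)
    -- fixedPoint (* 2) 1.0  never terminates     (not a contraction)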
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin; only 0 is.
Different functions have different fixed points, if they have any at all. You can find more on fixed points of functions at Wikipedia.