In chapter 1 on fixed points, the book says we can find fixed points of certain functions by repeatedly applying the function:
f(x), f(f(x)), f(f(f(x))), ...
What are those functions?
It doesn't work for f(y) = 2y, but when I rewrite it as f(y) = y/2 it works.
Does the value need to get smaller every time? Or are there general attributes a function has to have so that its fixed point can be found by this method?
What conditions should it satisfy for this to work?
According to the Banach fixed-point theorem, this iteration is guaranteed to converge to a (unique) fixed point when the mapping (function) is a contraction. That means that, for example, the iteration diverges for y = 2x but converges for y = 0.999x, even though both have the fixed point 0. In general, if f maps [a, b] to [a, b], then |f(x) - f(y)| should be at most c * |x - y| for some 0 <= c < 1 (for all x, y from [a, b]).
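To see the contraction condition in action, here is a small Python sketch of the iterate-until-stable idea (my own illustration, not from the book; the function name and tolerance are arbitrary choices):

# Illustrative fixed-point iteration: keep applying f until the value stops changing.
def fixed_point(f, guess, tolerance=1e-6, max_steps=1000):
    x = guess
    for _ in range(max_steps):
        nxt = f(x)
        if abs(nxt - x) < tolerance:
            return nxt
        x = nxt
    raise ValueError("iteration did not converge")

print(fixed_point(lambda x: x / 2, 1.0))   # contraction (c = 0.5): converges toward the fixed point 0
# fixed_point(lambda x: 2 * x, 1.0)        # slope 2, not a contraction: the iterates diverge

With f(x) = x/2 the distance between successive iterates halves at each step; with f(x) = 2x it doubles, which is exactly the failure of the c < 1 condition.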
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin, only 0 is.
Different functions have different fixed points, if they have any at all. You can find more on fixed points of functions at Wikipedia.
Say that x and y are real numbers and y > 0, and say that I want to find the values of A for which (A + x + y > 0) and (A + x - y > 0) always hold, as long as x and y are in that domain.
How would I specify that on Wolfram Alpha? (Note: obviously no such A exists here, but I just used these inequalities as an example.)
Or, if not on Wolfram, what software/website could I use?
I tried to write: solve for A: [input my first equation], y>0
but that didn't work: it only gave integer solutions with A, x, and y all varying, instead of finding values of A such that the inequalities hold no matter what x and y are.
https://www.wolframalpha.com/input?i=%28A+%2B+x+%2B+y+%3E+0%29+and+%28A+%2B+x+-+y+%3E+0%29+
The result it returns, [x > -A, -A - x < y < A + x], describes the region of (x, y) where both inequalities hold for a given A; it does not quantify over all x and y.
Prove the correctness of the following recursive algorithm to multiply two natural numbers, for all integer constants c ≥ 2.
function multiply(y,z) comment Return the product yz.
1. if z = 0 then return(0) else
2. return(multiply(cy, z/c) + y · (z mod c))
I saw this algorithm in “Algorithm Design Manual”.
I know why it works correctly, but I want to know how this algorithm came to be. Is this a good way to think about multiplying two natural numbers with a constant c?
(multiply(cy, z/c) + y · (z mod c))
When c is the base of your representation (like decimal), then this is how multiplication can be done "manually". It's the "shift and add" method.
In base c, cy is a single shift of y to the left (i.e. appending a zero at the right), and z/c (integer division) is a single shift of z to the right: the rightmost digit is lost.
That lost digit is actually z mod c, which is multiplied by y separately.
Here is an example with c = 10, where the apostrophe signifies the value of variables in a recursive call.
We perform the multiplication with y for each separate digit of z (retrieved with z mod c). Each next product found in this way is written shifted one more place to the left. Usually the 0 is not padded at the right of this shifted product, but it is silently assumed:
    354        y
  x  29        z
  -----
   3186        y·(z mod c)   = 354·9  = 3186
+  708         y'·(z' mod c) = cy·((z/c) mod c) = 3540·2 = 7080
  -----
  10266
So the algorithm just relies on the mathematical basis for this "shift and add" method in a given c-base.
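As a side note, here is a small Python transcription of the algorithm (my own sketch; // stands in for the integer division z/c). With c = 10 it reproduces the decimal shift-and-add above:

def multiply(y, z, c=10):
    # Peel off the lowest base-c digit of z (z mod c), multiply it by y,
    # and recurse on the remaining digits with y shifted one place left.
    if z == 0:
        return 0
    return multiply(c * y, z // c, c) + y * (z % c)

print(multiply(354, 29))   # 10266, matching the worked example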
I need to check if the first given term (for example s(s(nul)), i.e. 2) is divisible by the second term (for example s(nul), i.e. 1).
What I want to do is multiply the given term by two and then check if that term is smaller than or equal to the other term (if it is equal, the problem is solved).
So far I got this:
checkingIfDividable(X, X).
checkingIfDividable(X, Y) :-
    X > Y,
    multiplication(X, Y).

/* multiplication by two should occur here.
   I can't figure it out. This solution does not work! */

multiplication(Y) :-
    YY is Y * 2,
    checkingIfDividable(X, YY).
I can't seem to figure out how to multiply a term by 2. Any ideas?
If a = n*b with n > 0, then also a = n*b = (1+m)*b = b + m*b with m >= 0.
So if a is divisible by b and a = b + x, then x is also divisible by b.
In Peano encoding, n = 1+m is written n = s(m).
Take it from here.
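Not Prolog, but here is a plain Python sketch of the recursion this hint leads to (repeated subtraction of b, assuming b > 0; my own illustration):

def divisible(a, b):
    # a is divisible by b exactly when a == 0, or a >= b and a - b is divisible by b.
    if a == 0:
        return True
    if a < b:
        return False
    return divisible(a - b, b)

print(divisible(6, 3))   # True
print(divisible(7, 3))   # False

In Peano terms, the subtraction a - b corresponds to peeling s(...) constructors off both terms.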
This is an algorithm question that I've been struggling with. I figured I could get some insight here. I need to make the following function in Haskell:
Declare the type and define a function that takes two numbers as input and finds their product by addition. That is, add the first number, as many times as the second number, to itself.
My problem is that this is basically just multiplying two numbers together, but it says that I need to do it with addition. Does anyone have any clue on how to do this?
This is all I can come up with (it's not right): (x + x) * y
Thank you
If a is the first number and b the second,
sum $ take a $ cycle [b]
should do it.
mult(x, y):
    sum = 0
    for 1 to y:
        sum = sum + x
    return sum
This is just the algorithm. I do not know Haskell. So the lambda expression in the other answer may be more appropriate. Also, I use an intermediate variable.
Work it out by induction.
We know the answer to one simple (the simplest) problem: multiplying anything by 0 yields 0. So we write:
mul x 0 = 0
Now, the inductive step: we can build a solution to a bigger problem, if we know a solution to the smaller problem; that way we can always reduce any big problem to the smallest problem, for which we know the solution. So, for any y, the solution for y+1 can be found by adding x to the solution for y: mul x (y+1) = x + (mul x y). In Haskell we can't write (y+1) on the left hand side, so we write equivalently:
mul x y = x + (mul x (y-1))
This function will keep adding x until y is zero.
Try this also
multiply :: (Num a, Eq a) => a -> a -> a
multiply _ 0 = 0
multiply a b = a + multiply a (b - 1)
main = print $ multiply 5 7
I have a vector X of 20 real numbers and a vector Y of 20 real numbers.
I want to model them as
y = ax^2 + bx + c.
How do I find the values of 'a', 'b' and 'c', and the best-fit quadratic equation?
Given Values
X = (x1,x2,...,x20)
Y = (y1,y2,...,y20)
I need a formula or procedure to find the following values:
a = ???
b = ???
c = ???
Thanks in advance.
Everything @Bartoss said is right, +1. I figured I'd just add a practical implementation here, without QR decomposition. You want to find the values of a, b, c such that the distance between the measured and the fitted data is minimal. As a measure you can pick
sum((ax^2 + bx + c - y)^2)
where the sum is over the elements of vectors x,y.
Then, a minimum implies that the derivative of the quantity with respect to each of a,b,c is zero:
d(sum((ax^2 + bx + c - y)^2))/da = 0
d(sum((ax^2 + bx + c - y)^2))/db = 0
d(sum((ax^2 + bx + c - y)^2))/dc = 0
These equations are
2*sum((ax^2 + bx + c - y)*x^2) = 0
2*sum((ax^2 + bx + c - y)*x)   = 0
2*sum((ax^2 + bx + c - y))     = 0
Dividing by 2, the above can be rewritten as
a*sum(x^4) + b*sum(x^3) + c*sum(x^2) = sum(y*x^2)
a*sum(x^3) + b*sum(x^2) + c*sum(x)   = sum(y*x)
a*sum(x^2) + b*sum(x)   + c*N        = sum(y)
where N = 20 in your case. A simple Python script showing how to do this follows.
from numpy import random, array
from scipy.linalg import solve
import matplotlib.pyplot as plt

# True coefficients used to generate synthetic data
a, b, c = 6., 3., 4.
N = 20
x = random.rand(N)
y = a * x ** 2 + b * x + c
y += random.rand(N)  # add a bit of noise to make things more realistic

# Build the 3x3 normal-equation system derived above
x4 = (x ** 4).sum()
x3 = (x ** 3).sum()
x2 = (x ** 2).sum()
M = array([[x4, x3, x2], [x3, x2, x.sum()], [x2, x.sum(), N]])
K = array([(y * x ** 2).sum(), (y * x).sum(), y.sum()])
A, B, C = solve(M, K)

print('exact values     ', a, b, c)
print('calculated values', A, B, C)

fig, ax = plt.subplots()
ax.plot(x, y, 'b.', label='data')
ax.plot(x, A * x ** 2 + B * x + C, 'r.', label='estimate')
ax.legend()
plt.show()
A much faster way to implement a solution is to use a nonlinear least-squares algorithm. It will be faster to write, but not faster to run. Using the one provided by scipy,
from scipy.optimize import leastsq

def f(arg):
    a, b, c = arg
    return a * x ** 2 + b * x + c - y   # residuals, using the x, y arrays from above

(A, B, C), _ = leastsq(f, [1, 1, 1])  # you must provide a first guess to start with in this case
That is a linear least-squares problem. I think the easiest method that gives accurate results is QR decomposition using Householder reflections. It is not something to be explained in a Stack Overflow answer, but I hope you will find all that is needed with these links.
If you have never heard about these before and don't know how they connect with your problem:
A = [[x1^2, x1, 1]; [x2^2, x2, 1]; ...]
Y = [y1; y2; ...]
Now you want to find v = [a; b; c] such that A*v is as close as possible to Y, which is exactly what the least-squares problem is all about.
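As a concrete illustration of that formulation (my own sketch, not part of the answer above), numpy's built-in least-squares solver can be fed the design matrix A directly:

import numpy as np

# Example data in the spirit of the question: 20 x values and 20 y values.
x = np.random.rand(20)
y = 6. * x ** 2 + 3. * x + 4. + 0.1 * np.random.rand(20)

# Design matrix A: one row [x_i^2, x_i, 1] per data point.
A = np.column_stack([x ** 2, x, np.ones_like(x)])

# Solve min ||A*v - y|| in the least-squares sense; v = [a, b, c].
v, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
a, b, c = v
print(a, b, c)

Internally this routine uses an SVD-based solver rather than Householder QR, but it solves the same linear least-squares problem.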