How do I do this reduction? - lambda-calculus

I'm stuck on how to do this reduction. I have read this post and this pdf but I can't seem to find a solution:
(λx.yx)((λy.λt.yt)zx) => (λx.yx)(λt.zxt) => y(λt.zxt)
but the solution should be yx according to online solvers.
Could someone explain which steps I am doing wrong, and what steps I should follow to do it right?

applicative order:

β y := z:   (λx.yx)((λy.λt.yt)zx)  =>  (λx.yx)((λt.zt)x)
β t := x:   (λx.yx)((λt.zt)x)      =>  (λx.yx)(zx)
β x := zx:  (λx.yx)(zx)            =>  y(zx)

A friend of mine had this solution, which apparently corresponds to the real answer:
(λx.yx)((λy.λt.yt)zx) => y(((λy.λt.yt)z)x) => y((λt.zt)x) => y(zx) => yzx
My error was that I reduced the term as if it were (λx.yx)((λy.λt.yt)(zx)): I treated zx as a single block, not knowing that application is left-associative by default and that you need parentheses to group it.
The only question that remains is why the professor's answer, yzx, differs from the online answer, yx.
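The associativity point can be checked mechanically. A small sketch in plain Python (my own illustration, with strings standing in for the free variables z and x, and a tagged tuple standing in for an application node) mirrors the curried term and shows that (λy.λt.yt)zx parses as ((λy.λt.yt)z)x, not (λy.λt.yt)(zx):

```python
# Application in the lambda calculus is left-associative:
# (λy.λt.y t) z x  parses as  ((λy.λt.y t) z) x.
# Model the residual term y t as a tuple ('app', y, t) so we can inspect it.
curried = lambda y: (lambda t: ('app', y, t))   # plays the role of λy.λt.y t

step1 = curried('z')    # (λy.λt.y t) z  =>  λt.z t
step2 = step1('x')      # (λt.z t) x     =>  z x
print(step2)            # ('app', 'z', 'x'), i.e. the term z x
```

Applying the remaining outer lambda then gives y(zx), matching the professor's derivation.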

Related

Solution of symbolic integration in Wolfram-Mathematica

I am trying to get the solution of the following symbolic integral using Wolfram-Mathematica:
Integrate[1/(w^2 + l^2 + 2*l*w*Sin[t*w]), w]
but it does not return any solutions. Any ideas how to fix this?
When you encounter trigonometric functions (sin, cos), it is often useful to go to the exponential form (see the Help on TrigToExp).
This solves your integral easily (note the integrand is the reciprocal, as in your question):
solutionExp = Integrate[TrigToExp[1/(w^2 + l^2 + 2 l w Sin[t w])], w]
This solution can be brought back to trigonometric form with ExpToTrig:
solutionTrig = ExpToTrig[solutionExp]
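For readers without a Mathematica license, the same trick exists in SymPy: `rewrite(exp)` converts trigonometric functions to complex-exponential form, playing the role of TrigToExp. This is an analogue, not the Mathematica code itself:

```python
import sympy as sp

# Analogue of Mathematica's TrigToExp: rewrite Sin as complex exponentials
# before attempting the integral.
w, l, t = sp.symbols('w l t', positive=True)
integrand = 1 / (w**2 + l**2 + 2*l*w*sp.sin(t*w))

exp_form = integrand.rewrite(sp.exp)   # no sin left, only exponentials
print(exp_form)
```

The result can be converted back with `sympy.exptrigsimp` or `simplify`, the rough counterpart of ExpToTrig.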

Performance w/ calculating Hessian

[edit] The part about "f" is solved. Here is what I did:
Instead of using:
X = (F * W' - Y);
f = X' * X;
I'm now using (W is a row vector here, so the transpose is needed):
X = F * W';
A = X' * X;
B = -2 * X' * Y;
Y1 = Y' * Y;
f = A + B + Y1;
This will give a massive speed up. Still, the problem with the Hessian of f remains.
[/edit]
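The algebra behind that edit is just expanding the quadratic: f = (FW' - Y)'(FW' - Y) = W F'F W' - 2 Y'F W' + Y'Y. A quick NumPy check (my own small random stand-ins for the real matrices) confirms the two forms agree:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((100, 8))   # small stand-in for the 10000x192 data
w = rng.standard_normal(8)          # the weight vector (W' in the Matlab code)
Y = rng.standard_normal(100)

X = F @ w - Y
f_direct = X @ X                                       # (Fw - Y)'(Fw - Y)
f_expanded = w @ F.T @ F @ w - 2 * (Y @ F @ w) + Y @ Y # expanded form

print(abs(f_direct - f_expanded))   # ~0, up to floating-point error
```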
So, I'm having some serious performance problems with a quadratic optimization problem I'm trying to solve in Matlab. The problem is not the optimization per se, but the calculation of the target function and the Hessian. Right now it looks like this (F and Y aren't random at all and will hold real data; also it is not necessarily unconstrained, because then the solution would of course be (F'F)^-1 * F'*Y):
W_a = sym('w_a_%d', [1 96]);
W_b = sym('w_b_%d', [1 96]);
for i = 1:96
    W(1,2*(i-1)+1) = W_a(1,i);
    W(1,2*i) = W_b(1,i);
end
F = rand(10000,192);
Y = rand(10000,1);
q = [];
for i = 1:192
    q = [q sum(-Y(:).*F(:,i))];
end
q = 2*q;
q = double(q);
X = (F * W' - Y);
f = X' * X;
H = hessian(f);
H = double(H);
A=[]; b=[];
Aeq=[]; beq=[];
lb=[]; ub=[];
options=optimset('Algorithm', 'active-set', 'Display', 'off');
[xsol,~,exitflag,output]=quadprog(H, q, A, b, Aeq, beq, lb, ub, [], options);
The thing is: calculating f and H takes like forever.
I'm not expecting that there are ways to significantly speed this up, since Matlab is optimized for stuff like this. But maybe someone knows some open license software, that's almost as fast as Matlab, so that I could calculate f and H with that software on a faster machine (which unfortunately has no Matlab license ...) and then let Matlab do the optimization.
Right now I'm kinda lost in this :/
Thank you very much in advance. Even some keywords could help me here like "Look for software xy"
If speed is your concern, symbolic methods are usually the wrong approach (especially for large systems, or if you need to run something repeatedly). You'll need to calculate your Hessian numerically. There's an excellent utility on the MathWorks File Exchange that can do this for you: the DERIVEST suite. It includes a numeric Hessian function. You'll need to formulate your f as a function of X.
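In fact, for this particular f no differentiation is needed at all: f(W) = (FW - Y)'(FW - Y) is quadratic in W, so its Hessian is the constant matrix 2 F'F and the linear term is q = -2 F'Y (which is exactly what the q loop in the question computes). A NumPy sketch of that closed form, with my own small stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((1000, 19))   # stand-in for the 10000x192 design matrix
Y = rng.standard_normal(1000)

# f(W) = (F W - Y)'(F W - Y) is quadratic in W, so its derivatives are exact:
H = 2 * F.T @ F          # constant Hessian, no symbolic math needed
q = -2 * F.T @ Y         # linear term, matching the q loop in the question

W0 = rng.standard_normal(19)
grad = H @ W0 + q        # gradient of f at an arbitrary point W0
```

These H and q can be passed straight to quadprog (or any QP solver), skipping the symbolic hessian call entirely.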

SICP - Which functions converge to fixed points?

In the chapter 1 material on fixed points, the book says we can find fixed points of certain functions by repeatedly applying them:
x, f(x), f(f(x)), f(f(f(x))), ...
Which functions does this work for?
It doesn't work for y = 2y, but when I rewrite it as y = y/2 it works.
Does y need to get smaller every time? Or are there any general attributes that a function has to have for its fixed points to be found by this method? What conditions should it satisfy for this to work?
The Banach fixed-point theorem gives a sufficient condition: if the mapping (function) is a contraction, a unique fixed point exists and the iteration converges to it. That means, for example, that iterating y = 2x diverges (its only fixed point, 0, is repelling), while iterating y = 0.999x converges to 0. In general, if f maps [a,b] to [a,b], then |f(x) - f(y)| should be at most c * |x - y| for some 0 <= c < 1 (for all x, y from [a, b]).
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin; only 0 is. Different functions have different fixed points, if any at all. You can find more on fixed points of functions at Wikipedia.
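A minimal sketch of the iteration SICP describes (my own Python, not the book's Scheme): keep applying f until successive values agree to within a tolerance. cos is a contraction near its fixed point, so it converges; the asker's y/2 example converges to 0 with c = 1/2, while 2y would diverge from any nonzero start.

```python
import math

def fixed_point(f, guess, tol=1e-9, max_iter=1000):
    """Iterate x, f(x), f(f(x)), ... until successive values agree to tol."""
    x = guess
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("did not converge")

print(fixed_point(math.cos, 1.0))        # ~0.7390851 (the Dottie number)
print(fixed_point(lambda y: y / 2, 1.0)) # ~0
```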

Exponentiation of real numbers

I've come across an interesting exercise and it says: Implement a function x^y using standard functions of Turbo Pascal
For integer variables I can use for loop but I cannot understand how to work with real variables in this case.
I've been thinking about how to do this using Taylor series (can't understand how to use it for exponentiation) and I also found out that x^y = exp(y*log(x)) but there is only ln (natural logarithm) in standard functions...
PS: I'm not asking you to write code; give me advice or a link or something that will help to solve this problem, please.
log(x) in your formula is the natural logarithm, so you can use
x^y = exp(y*ln(x))
directly, for x > 0. Both exp and ln are standard Turbo Pascal functions.
(The general formula is x^y = b^(y * log_b(x)) for any base b.)
log x base y = ln(x) / ln(y) = (log x base 10) / (log y base 10)
The following link has more information regarding logarithms; check out the "Changing the Base" section:
http://en.wikipedia.org/wiki/List_of_logarithmic_identities
You can change your base to the natural logarithm and compute accordingly.
For x = 3.2, y = 2.5, say 3.2^2.5 = m. Then
ln(m) = 2.5 * ln(3.2)
hence m = exp(2.5 * ln(3.2)).
Actually, for the above you do not even need to change bases.
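The identity translates directly into code. Here is a sketch in Python rather than Turbo Pascal (exp and ln are the corresponding standard functions in both); note it only handles x > 0, since the logarithm is undefined otherwise:

```python
import math

def power(x: float, y: float) -> float:
    """Compute x^y via the identity x^y = exp(y * ln(x)); requires x > 0."""
    if x <= 0:
        raise ValueError("x must be positive for the exp/ln identity")
    return math.exp(y * math.log(x))

print(power(3.2, 2.5))   # same value as 3.2 ** 2.5
```

Negative x with an integer y would need separate sign handling on top of this.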

Reduction from Atm to A (of my choice) , and from A to Atm

Many-one reduction is not symmetric. I'm trying to prove this, but it isn't working out so well.
Given two languages A and B, where A is defined as
A = {w | |w| is even}, i.e. `w` has an even length,
and B = A_TM, where A_TM is undecidable but Turing-recognizable!
Given the following reduction:
f(w) = (P(x):{accept;}), epsilon    if |w| is even
f(w) = (P(x):{reject;}), epsilon    otherwise
(Please forgive me for not using LaTeX; I have no experience with it.)
As I can see, a reduction A <= B (from A to A_TM) is possible, and works great.
However, I don't understand why B <= A is not possible.
Can you please clarify and explain?
Thanks
Ron
Assume for a moment that B <= A. Then there is a computable function f: Sigma* -> Sigma* such that:
f(w) = x in A        if w is in B
f(w) = x not in A    if w is not in B
Therefore, we can describe the following algorithm [Turing machine] M on input w:
1. w' <- f(w)
2. if |w'| is even return true
3. return false
It is easy to prove that M accepts w if and only if w is in B [left as an exercise to the reader], thus L(M) = B.
Also, M halts on every input w [by its construction, since f is computable]. Thus L(M) is decidable.
But then L(M) = B is decidable - a contradiction, because B = A_TM is undecidable!
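The contradiction can be phrased as a short program. Assuming, for contradiction, that a computable many-one reduction f from A_TM to A existed, the machine M above is just this (a sketch in Python; `f` is a hypothetical stand-in supplied by the caller, since no real such reduction can exist):

```python
def decider_M(w: str, f) -> bool:
    """The machine M from the argument above: accept iff |f(w)| is even.

    If f were a genuine many-one reduction from A_TM to A = {w : |w| even},
    this function would decide A_TM -- the contradiction.
    """
    w_prime = f(w)                  # step 1: w' <- f(w)
    return len(w_prime) % 2 == 0    # steps 2-3: accept iff |w'| is even

# Illustration only: a toy "reduction" for some decidable stand-in language.
toy_f = lambda w: "aa" if w.startswith("1") else "a"
print(decider_M("101", toy_f))   # True
print(decider_M("011", toy_f))   # False
```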