Say I am given the expression (I will write l for lambda):
lx.f1 f2 x
where f1 and f2 are functions and x is supposed to be some number.
How do you interpret this expression? Is lx.(f1 f2) x the same as lx.f1 (f2 x)?
As an example, what will be the difference in the result of lx.(not eq0) x and lx.not (eq0 x)?
(eq0 is a function that returns true if its parameter equals 0, and not is the well-known not function.)
More formally: T = lx.ly.x, F = lx.ly.y, not = lx.x F T and eq0 = lx.x (ly.F) T.
f1 f2 x is the same as (f1 f2) x. Function application is left-associative.
Is lx.(f1 f2) x the same as lx.f1 (f2 x)?
No, not at all. (f1 f2) x calls f1 with f2 as its argument and then calls the resulting function with x as its argument. f1 (f2 x) calls f2 with x as its argument and then calls f1 with the result of f2 x as its argument.
What about lx.(not eq0) x and lx.not (eq0 x)?
If we're talking about a typed lambda calculus and not expects a boolean as an argument, the former will simply cause a type error (because eq0 is a function and not a boolean). If we're talking about the untyped lambda calculus and true and false are represented as functions, it depends on how not is defined and how true and false are represented.
If true and false are Church booleans, i.e. true is a two-argument function that returns its first argument and false is a two-argument function that returns its second argument, then not is equivalent to the flip function, i.e. it takes a two-argument function and returns a two-argument function whose arguments have been reversed. So (not eq0) x will return a function that, when applied to two other arguments y and z, will evaluate to ((eq0 y) x) z. So if y is 0, it will return x, otherwise z.
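The difference between the two parses can be checked concretely. Here is a sketch in Python of the Church encodings from the question, written as curried lambdas (the names T, F, NOT, eq0, zero, one, and to_bool are my own; Church numerals stand in for the numbers):

```python
# Church encodings from the question, as curried Python lambdas.
T = lambda x: lambda y: x            # T   = lx.ly.x
F = lambda x: lambda y: y            # F   = lx.ly.y
NOT = lambda p: p(F)(T)              # not = lx.x F T
eq0 = lambda n: n(lambda y: F)(T)    # eq0 = lx.x (ly.F) T

zero = lambda f: lambda x: x         # Church numeral 0
one = lambda f: lambda x: f(x)       # Church numeral 1

def to_bool(b):
    """Decode a Church boolean into a Python bool."""
    return b(True)(False)

# lx.not (eq0 x): the negated zero test, as intended.
print(to_bool(NOT(eq0(zero))))   # False
print(to_bool(NOT(eq0(one))))    # True

# lx.(not eq0) x: first reduces (not eq0) = eq0 F T, which is no longer
# a zero test at all; applied to zero it decodes to True, not False.
print(to_bool(NOT(eq0)(zero)))   # True
```

So for x = zero the two parses decode to different booleans, which makes the grouping very much not a matter of taste.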
(λy.x z)c
I think the answer to this problem is x z.
If that is correct, why is (λy.x z)c = x c incorrect?
I reasoned that (λy.x z) = (λy.x)z = x, calculating inside the parentheses first.
(λy.x z) c is not a problem; it is a λ-term.
You rewrite λy.x z as (λy.x) z, but there is no way to move the parentheses like that; if there were, the parentheses would be useless.
λy. x z
Means the function which takes y as argument and returns x applied to z.
While (λy.x) z means the function which takes y as argument and returns x, the whole thing applied to z. Why would those two things be the same?
(They are not.)
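The distinction can be made concrete with ordinary functions standing in for the free variables (x, z, and c below are my own placeholders):

```python
# Hypothetical stand-ins for the free variables x, z and the argument c.
x = lambda arg: ("x", arg)   # x as a function, so we can see what it is applied to
z = "z"
c = "c"

# (λy.x z) c : the body is (x z); y is ignored, so the result is x z.
term1 = (lambda y: x(z))(c)

# ((λy.x) z) c : (λy.x) z reduces to x, and then x is applied to c.
term2 = (lambda y: x)(z)(c)

print(term1)   # ('x', 'z')  -- the term x z
print(term2)   # ('x', 'c')  -- the term x c: moving the parentheses changed the result
```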
So I have this boolean logic case.
three variables x, y, and z. The (ternary) parity function p(x, y, z) is a boolean function with value
• F, if an even number of the inputs x, y, and z have truth value T
• T, if an odd number of inputs have truth value T
This is my truth table (I'll show just a few cases) below:
x y z - p(x, y, z)
F F F (?)
F F T evaluates to T because there is one T, which is odd
F T T evaluates to F because there are two Ts, which is even
My question is: what if all three inputs evaluate to F? Then there are zero Ts, which seems to fit neither case. So what should it evaluate to?
The definition of an even number x is that it is divisible by 2 with no remainder. 0/2 = 0 with remainder 0, so zero is even: if there are zero Ts, the function evaluates to F.
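This generalizes nicely: ternary parity is just exclusive-or of the three inputs. A small Python sketch (function name is mine):

```python
def parity(x: bool, y: bool, z: bool) -> bool:
    """Ternary parity: True iff an odd number of the inputs are True."""
    return x ^ y ^ z

assert parity(False, False, False) is False   # zero Ts: zero is even, so F
assert parity(False, False, True) is True     # one T: odd, so T
assert parity(False, True, True) is False     # two Ts: even, so F
assert parity(True, True, True) is True       # three Ts: odd, so T
```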
I have defined a function name in the form
(define (name x y z) (function...))
I call name with the parameters int1 int2 int3 on a new line like this
(define (name int1 int2 int3))
and for some reason, I get the error message:
define: expected a variable, but found a number.
I am new to the language (Racket/Scheme), so I am wondering what made DrRacket expect a variable. I have used this exact form many times with integers and had no problem with it.
Here is an example of how to define a function and how to use it after the definition.
(define (add-them x y z) ; note x, y, and z must be names
(+ x y z))
(add-them 1 2 3) ; no define when add-them is used.
The result is 6.
A bare symbol in code, like i, cons, or +, is a variable. Variables evaluate to values; +, when not shadowed by a lexical binding, evaluates to the procedure for addition.
(+ a b) is code with 3 variables. The variable + needs to evaluate to a procedure, and a and b need to evaluate to values that #<procedure:+> expects.
If you put the parentheses as in C, +(1 2), then that is two expressions: first +, which evaluates to a procedure whose value is then discarded, and then the expression (1 2), which is clearly an error since 1 is not a procedure.
Even though this is a post from a month ago, here is what I believe is an answer.
First you've defined your function name:
(define (name x y z) (function...))
Now you've tried to call name:
(define (name int1 int2 int3))
What happens here is that Racket sees define first and thinks you are defining something. This could be a function or a variable (they have their similarities in Racket). Then it moves on to (name int1 int2 int3). Note that this is a function call, and it does whatever you have defined above; in this case, I assume your function name returns a number. So now we are looking at (define some_number). What does this mean? Racket isn't sure, because it was expecting a variable name.
Either one of the two below should work:
(define some_number (name int1 int2 int3)) ; defines `some_number` as the result of the function call
(name int1 int2 int3)                      ; simply calls the function with the arguments int1 int2 int3
I have a simple mini program here that simplifies addition expressions. I can't seem to figure out how to finish it off. When I query the following:
sim(sum(sum(x,1),5),Val,[x:X]).
My result is Val = X+1+5. I would like it to simplify all the way to X+6.
Here is the code:
sim(Var, Value, Lst) :- member(Var:Value, Lst).
sim(Num, Num, _) :- number(Num).
sim(sum(Left, Right), Value, Lst) :-
    sim(Left, LeftVal, Lst),
    sim(Right, RightVal, Lst),
    so(Value, LeftVal, RightVal).
so(Result, X, Y) :-
    number(X),
    number(Y), !,
    Result is X + Y.
so(Result, X, Y) :-        % debugging variant: so(Result, _, Y) :-
    Result = X + Y.        %     write(Y), Result = Y.
What I do know is that my program tries to simplify X+1 before adding X+1 and 5. When I change the last clause of so to bind only Y to Result, I get Val = 6. Before that line I write Y to the screen for debugging, and it prints 1 5 because of the recursion. Which means X must be an unbound variable? Is there a case missing here that would let me simplify addition all the way down?
What I am noticing is that so never adds 1 and 5, because they are never arguments together in the so clause that checks whether X and Y are numbers. X and 1 are the first arguments; then, upon recursion, X+1 and 5 are the arguments, and that clause doesn't fire because number(X) fails when X is X+1.
Expanding on my comment above: here is an example of an expression simplifier that separates 'symbols' from 'values' using two lists.
Notice how it uses the fact, in parsing and unparsing, that the only operator joining symbols and values is +.
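The answer's simplifier isn't reproduced above, but the two-list idea can be sketched (in Python rather than Prolog, with names of my own choosing): walk the sum, collect symbols into one list and numbers into another, total the numbers, and rebuild.

```python
# Hypothetical sketch: simplify nested sums like sum(sum(x, 1), 5) to x + 6
# by collecting symbols and numeric values into two separate lists.

def collect(expr, symbols, values):
    """Walk a term; numbers go into `values`, anything else into `symbols`."""
    if isinstance(expr, tuple) and expr[0] == "sum":
        collect(expr[1], symbols, values)
        collect(expr[2], symbols, values)
    elif isinstance(expr, (int, float)):
        values.append(expr)
    else:
        symbols.append(expr)

def simplify(expr):
    symbols, values = [], []
    collect(expr, symbols, values)
    total = sum(values)
    # Drop a zero total when there is at least one symbol to show.
    parts = symbols + ([total] if total != 0 or not symbols else [])
    return " + ".join(str(p) for p in parts)

print(simplify(("sum", ("sum", "x", 1), 5)))   # x + 6
```

Because the numeric values are summed in one place, nesting order no longer matters: the constants 1 and 5 always meet, no matter where the unbound symbol sits.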
I am wondering if there is a way to generate a key based on the relationship between two entities in a way that the key for relationship a->b is the same as the key for relationship b->a.
Desirably this would be a hash function which takes either relationship member but generates the same output regardless of the order the members are presented in.
Obviously you could do this with numbers (e.g. add(2,3) is equivalent to add(3,2)). The problem for me is that I do not want add(1,4) to equal add(2,3). Obviously any hash function has collisions, but I mean uniqueness in a weak sense.
My naive (and performance undesirable) thought is:
function orderIndifferentHash(string val1, string val2)
{
    return stringMerge(hash(val1), hash(val2));
    /* stringMerge will 'add' each character (with wrapping).
       The pre-hash is to lengthen the strings to at least 32 characters. */
}
In your function orderIndifferentHash you could first order val1 and val2 by some criteria and then apply any hash function you like to get the result.
function orderIndifferentHash( val1, val2 ) {
    if( val1 < val2 ) {
        first = val1
        second = val2
    }
    else {
        first = val2
        second = val1
    }
    hashInput = concat( first, second )
    return someHash( hashInput )
    // or as an alternative:
    // return concat( someHash( first ), someHash( second ) )
}
With numbers, one way to achieve this is, for two numbers x and y, to take the x-th prime and the y-th prime and calculate their product. That guarantees the uniqueness of the product for each distinct pair of x and y, and independence from the argument order. Of course, to do this with any practical efficiency you'll need to keep a prime table covering all possible values of x and y. If x and y are chosen from a relatively small range, this will work. But if the range is large, the table itself becomes prohibitively impractical, and you'll have no choice but to accept some probability of collision (e.g. keep a reasonably sized table of N primes and select the (x % N)-th prime for a given value of x).
Alternative solution, already mentioned in the other answers is to build a perfect hash function that works on your x and y values and then simply concatenate the hashes for x and y. The order independence is achieved by pre-sorting x and y. Of course, building a perfect hash is only possible for a set of arguments from a reasonably small range.
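A sketch of the primes-based pairing for a small range (helper names are mine; simple trial division keeps it dependency-free):

```python
# Map x and y to the x-th and y-th primes and multiply. By unique
# factorization, distinct unordered pairs get distinct products, and
# multiplication is commutative, so the order of x and y doesn't matter.

def first_n_primes(n):
    """Return the first n primes via trial division (fine for small n)."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

PRIMES = first_n_primes(100)   # supports x, y in range(100)

def pair_key(x, y):
    return PRIMES[x] * PRIMES[y]

assert pair_key(2, 3) == pair_key(3, 2)   # order-independent
assert pair_key(1, 4) != pair_key(2, 3)   # 3*11 = 33  vs  5*7 = 35
```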
At first I suspected the primes-based approach would give the shortest possible hash that satisfies the required conditions, but on reflection that is not true.
You are after:
Some function f(x, y) such that
f(x, y) == f(y, x)
f(x, y) == f(a, b) => (x == a and y == b) or (x == b and y == a)
There are going to be loads of these; offhand, the one I can think of is "sorted concatenation":
Sort (x, y) by any ordering
Apply a hash function u(a) to x and y individually (where u(a) == u(b) implies a == b, and the length of u(a) is constant)
Concatenate u(x) and u(y).
In this case:
If x == y then the two hashes are trivially the same, so assume without loss of generality that x < y; hence:
f(y, x) = u(x) + u(y) = f(x, y)
Also, if f(x, y) == f(a, b), this means that either:
u(x) == u(a) and u(y) == u(b) => x == a and y == b, or
u(y) == u(a) and u(x) == u(b) => y == a and x == b
Short version:
Sort x and y, and then apply any hash function where the resulting hash length is constant.
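A minimal sketch of the sorted-concatenation scheme, assuming string keys and using SHA-256 as the fixed-length per-element hash u (injective only in the practical sense, since SHA-256 can in principle collide):

```python
import hashlib

def symmetric_hash(x: str, y: str) -> str:
    """Sort the pair, hash each element to a fixed length, concatenate."""
    a, b = sorted([x, y])
    u = lambda s: hashlib.sha256(s.encode()).hexdigest()   # constant-length u
    return u(a) + u(b)

assert symmetric_hash("alice", "bob") == symmetric_hash("bob", "alice")
assert symmetric_hash("alice", "bob") != symmetric_hash("alice", "carol")
```

The constant length of each digest matters: it guarantees the concatenation can be split back into u(x) and u(y) unambiguously, which is what the uniqueness argument above relies on.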
Suppose you have any hash h(x,y). Then define f(x,y) = h(x,y) + h(y,x). Now you have a symmetric hash.
(If you use a trivial additive "hash" like h(x,y) = x + y, then the pairs (1,3) and (2,2) hash to the same value; but even something like h(x,y) = x*y*y avoids that. Just make sure there is some nonlinearity in at least one argument of the hash function.)
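A sketch of this h(x,y) + h(y,x) construction with the nonlinear h suggested above:

```python
def h(x, y):
    return x * y * y          # some asymmetric, nonlinear base hash

def f(x, y):
    return h(x, y) + h(y, x)  # symmetric by construction

assert f(2, 5) == f(5, 2)     # order-independent
assert f(1, 3) != f(2, 2)     # 9 + 3 = 12  vs  8 + 8 = 16
```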