What's the point of not (!) logic? It seems that you can do everything not can do with all the other logical operators. Is there something that not can do that I am missing?
You won't deny that the NOT-operator is very convenient in a programming language, even if the other operators and built-in constants available in that language render it strictly redundant. Convenience is an adequate justification - in fact it is the justification - for almost all features of general-purpose programming languages. If we didn't care about convenience - which, in programming, means productivity - we could write all programs with a Turing-complete set of op-codes far smaller than any assembly language.
The degree of inconvenience you would face in doing without the NOT-operator depends on the programming language you are considering, and specifically on the other operators and built-in constants that the language provides and their semantics.
In C, for example, the equality operator == exists, but there are no built-in constants representing truth and falsity: any integral value all of whose bits are 0 behaves as falsity in boolean operations, and all other integral values behave as truth. !cond evaluates to 0 if cond evaluates to non-zero and otherwise evaluates to 1. Thus to say that cond is not true without coding !cond you have to code cond == 0, taking at least 2 keystrokes more.
Like C, C++ has equality and inequality operators, but unlike C it represents the boolean truth values by the built-in constants true and false. Thus to say that cond is not true in C++ without coding !cond you must code either cond != true or cond == false, taking at least 5 keystrokes more.
And the cost of doing without the NOT-operator can compound beyond minor inconvenience. Which of the following can you understand first?
!(p && !q) == (!p || q)
or:
((p && (q == 0)) == 0) == ((p == 0) || q)
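Incidentally, the identity itself is easy to sanity-check by brute force. Here is a quick sketch in Python (not C, purely for illustration) that tries all four combinations of p and q:

# Check that not (p and not q) is the same as (not p) or q
# for every boolean combination of p and q.
for p in (False, True):
    for q in (False, True):
        assert (not (p and not q)) == ((not p) or q)
print("identity holds for all p, q")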
You can implement all logical operators solely with the NAND operator. The NOT operator is there for convenience, just like all the others are. In fact, digital logic can be built entirely out of NAND gates alone, or out of NOR gates alone; all the other operators are abstractions put in place for convenience.
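As a rough sketch of that claim (in Python rather than gate-level logic), here is NOT, AND, and OR derived from a single nand function:

def nand(a, b):
    # The single primitive: true unless both inputs are true.
    return not (a and b)

def not_(a):
    return nand(a, a)               # NOT a  ==  a NAND a

def and_(a, b):
    return not_(nand(a, b))         # AND    ==  NOT of NAND

def or_(a, b):
    return nand(not_(a), not_(b))   # OR     ==  NAND of the negations

# Quick check against Python's built-in boolean operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)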
It is convenient, though. Since you mention the "!" operator, I assume you mean boolean operators in general programming languages, and there the not operator is very convenient. Imagine you wanted to express something like "print all names except 'Bob'". You could do that with the != operator, which is itself a short-form of !(expression1 == expression2):
if( !(name == 'Bob') ) {
    print name
}
Two Booleans are equal if they're the same value, and similarly for two numbers. Two sets are equal if they have the same elements. To check two sets for equality we can use the following Scheme/Racket function:
(define (same-set? l1 l2)
  (and (subset? l1 l2) (subset? l2 l1)))
How would such a function be generated automatically? Can it be generated for an arbitrary data type?
The basic properties of an equivalence relation are:
Substitution property: For any quantities a and b and any expression F(x), if a = b, then F(a) = F(b) (if both sides make sense, i.e. are well-formed).
Some specific examples of this are:
For any real numbers a, b, and c, if a = b, then a + c = b + c (here F(x) is x + c);
For any real numbers a, b, and c, if a = b, then a − c = b − c (here F(x) is x − c);
For any real numbers a, b, and c, if a = b, then ac = bc (here F(x) is xc);
For any real numbers a, b, and c, if a = b and c is not zero, then a/c = b/c (here F(x) is x/c).
Reflexive property: For any quantity a, a = a.
Symmetric property: For any quantities a and b, if a = b, then b = a.
Transitive property: For any quantities a, b, and c, if a = b and b = c, then a = c.
Is it possible to generate a function that obeys the above properties? Would that be enough? Could knowing the type of data help?
If you have any ideas on how to improve this question or its tags, please comment.
I just want to expand on @Sorawee Porncharoenwase's answer a bit. They mentioned two kinds of equality: referential equality with eq?, and structural equality with equal?.
These different notions of equality should all satisfy the basic requirements of reflexivity, symmetry, and transitivity. What sets them apart from each other are the guarantees they give when they return true or false.
Some useful classes of equality to keep in mind are reference equality, structural equality for all time, structural equality for the current time, and domain-specific equivalences.
Reference equality
The eq? function implements reference equality, and it has the strongest guarantees when it returns true, but when it returns false you haven't learned much.
(eq? x y) implies that x and y are literally the same object, and that any operation on x could be replaced with the same on y, including mutation. One thing that helped explain this to me was in the book Realm of Racket, saying that if you shave x, then y will also be shaved because it's the same object.
However, when (eq? x y) returns false that's pretty weak sauce. On the many data structures that involve allocating memory, eq? can return false simply because the pointers are different, even if they're immutable and everything else is the same.
This can be provided automatically by the programming language because it's really not much more than pointer-equality, and it doesn't have to generate any new behavior for new data structures.
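As a rough analogy outside Racket: Python draws a similar line between object identity and structural equality, which may help if you know that language. This is only an analogy, not a description of eq? itself.

a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a is c)   # True:  c is literally the same object as a, like (eq? a c)
print(a is b)   # False: a and b are distinct objects, even though their contents match
print(a == b)   # True:  structurally equal, closer in spirit to (equal? a b)

a.append(4)     # "shaving" a ...
print(c)        # [1, 2, 3, 4] ... also "shaves" c, because they are one and the same object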
Structural Equality for All-Time
This notion of equality is not currently well-supported by base Racket or standard Scheme, although libraries such as Rackjure can provide limited versions of this with functions like egal?. It implements reference equality on mutable data structures, but structural equality on immutable data structures.
This is meant to provide the guarantee that if (egal? x y) returns true now, then it has been true in the past and will continue to be true in the future as long as x and y both exist.
This can be provided automatically by the programming language as long as the language allows you to specify which data structures are immutable vs mutable, and enforces the immutability.
I'm not sure, but chaperone-of? may also be an example of following the ideas of "Structural Equality for All-Time", except that chaperone-of? isn't symmetric (and a naive symmetric-closure would lose transitivity).
If you want to read more, see Types of Equality in Pyret or Equal Rights for Functional Objects.
Structural Equality for the Current Time
The equal? function implements structural equality for the current time. This means two mutable data structures can be equal now if they currently have all equal components, even if they weren't equal in the past or won't be in the future due to mutation.
This can be provided automatically by the programming language as long as it always knows all the sub-parts of data contained within the data-structures.
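The "current time" caveat is easy to see with mutable data. Here is a small sketch in Python rather than Racket, with == playing roughly the role of equal?:

x = [1, 2, 3]
y = [1, 2, 3]
print(x == y)   # True: the two lists are structurally equal right now

y.append(4)     # mutate one of them
print(x == y)   # False: the answer changed because the structure changed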
Domain-specific Equivalences
For example for the domain of numbers and math, you might want the inexact number 2.0 to be equivalent to the exact integer 2. For the domain of string search, you might want case-insensitive equivalence for strings and characters so that A and a are equivalent. For the domain of sets, you might want order to be irrelevant so that (a b) and (b a) are equivalent.
Each domain is different, so this requires more effort on each domain. The programming language can't read your mind.
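As a concrete illustration (a sketch in Python, not Racket), here is a case-insensitive string equivalence. It is reflexive, symmetric, and transitive, but deciding that "Apple" and "aPPLE" should count as equal is a domain decision the language cannot make for you:

def string_ci_equal(s1, s2):
    # Domain-specific equivalence: compare strings ignoring case.
    return s1.casefold() == s2.casefold()

print(string_ci_equal("Apple", "aPPLE"))   # True
print(string_ci_equal("Apple", "Pear"))    # False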
Two Booleans are equal if they're the same value, and similarly for two numbers. Two sets are equal if they have the same elements.
These are useful equalities, but they are not the only equalities that you can create. For instance, you can consider two numbers to be equal when their parities (odd/even) are the same. Or you can consider every number to be equal to each other.
How would such a function be generated automatically?
In general, it's not possible, because it depends on your intention. And no one can read your mind.
Is it possible to generate a function that obeys the above properties?
The answer is trivially yes. At the very least, you have (lambda (x y) #t), which says that every object is equal to every other object. It satisfies the equivalence relation properties, though it's totally useless.
For a non-trivial equality that works on all kinds of values, you have referential equality, eq?, which obeys the equivalence relation properties (it could give you a weird result if you are using the unsafe API, IIUC, but that's off-topic).
equal? can be used for structural equality on many values, such as lists and instances of default transparent structs, and it also cooperates with custom equality that users provide. This is usually what you want to use in Racket.
Yes, it is definitely possible. Some programming languages allow for automatic equality function synthesis; Swift is one such example.
Without automatic synthesis, the developer has to write the equality code by hand, e.g., consider a struct:
struct Country: Equatable {
    let name: String
    let capital: String
    var visited: Bool

    static func == (lhs: Country, rhs: Country) -> Bool {
        return lhs.name == rhs.name &&
            lhs.capital == rhs.capital &&
            lhs.visited == rhs.visited
    }
}
With Swift 4.1 and higher, this is no longer necessary. The compiler generates the equality function for you:
struct Country: Equatable { // It's enough to declare that the type is `Equatable`; the compiler does the rest
    let name: String
    let capital: String
    var visited: Bool
}
Let's test it:
let france = Country(name: "France", capital: "Paris", visited: true)
let spain = Country(name: "Spain", capital: "Madrid", visited: true)
if france == spain { ... } // false
Update:
Even after Swift 4.1, it's possible to override the default implementation with your own custom logic. For example:
struct Country: Equatable {
    let name: String
    let countryCode: String
    let capital: String
    var visited: Bool

    static func == (lhs: Country, rhs: Country) -> Bool {
        return lhs.countryCode == rhs.countryCode
    }
}
So, the developer is always in control. Equality won't be synthesised unless the developer adds Equatable to the struct declaration, and if they're then not satisfied with the default implementation, or if it can't be synthesised, there is always the option to override the compiler's default and provide a customized variant.
I understand that you can get the same behavior by using <= or !(x > y), but I usually prefer to think in terms of not greater rather than less than or equal, so having something like !> and !< would be really neat to have and would match the != operator perfectly.
The !(x > y) syntax requires more characters and reads as "not: x is greater than y", which is clumsy and very unlike natural speech.
I have never seen !< or !> operators anywhere, but have been wondering ever since I started programming why they are not supported. Are there any reasons why not?
Using the smallest number of logical and comparison operators, and those closest to classic math and logic, makes it simpler to reason about conditions.
There were programming languages like Natural and COBOL with operators like NOT LESS THAN, but reasoning about (a !< b) inside a complex condition is much harder than reasoning about !(a < b), which is equivalent to (a >= b).
An example:
!(!(a < b) && !(a > c))
Is equivalent to:
!((a >= b) && (a <= c))
which translates to:
(a < b) || (a > c)
There is nothing to gain from writing it as !((a !< b) && (a !> c)).
Suppose I've defined a few values for a function:
+(value[1] == "cats")
+(value[2] == "mice")
Is it possible to define a function like the following?
(undefined[X] == False) <= (value[X] == Y)
(undefined[X] == True) <= (value[X] does not exist)
My guess is that it can't, for two reasons:
(1) Queries are guaranteed to terminate in Datalog, and you could query for undefined[X] == True.
(2) According to Wikipedia, one of the ways Datalog differs from Prolog is that Datalog "requires that every variable appearing in a negative literal in the body of a clause also appears in some positive literal in the body of the clause".
But I'm not sure, because the terms involved ("terminate", "literal", "negative") have so many uses. (For instance: Does negative literal mean f[X] == not Y or does it mean not (f[X] == Y)? Does termination mean that it can evaluate a single expression like undefined[3] == True, or does it mean it would have found all X for which undefined[X] == True?)
Here is another definition of "safe":
A safety condition says that every variable in the body of a rule must occur in at least one positive (i.e., not negated) atom.
Source: Datalog and Recursive Query Processing
And an atom (or goal) is a predicate symbol (function) along with a list of terms as arguments. (Note that “term” and “atom” are used differently here than they are in Prolog.)
The safety problem is to decide whether the result of a given Datalog program can be guaranteed to be finite even when some source relations are infinite.
For example, the following rule is not safe because the Y variable appears only in a negative atom (i.e. not predicate2(Z,Y)).
rule(X,Y) :- predicate1(X,Z), not predicate2(Z,Y) .
To meet the condition of safety the Y variable should appear in a positive predicate too:
rule(X,Y) :- predicate1(X,Z), not predicate2(Z,Y), predicate3(Y) .
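To make the condition concrete, here is a small sketch in Python (the is_safe name and the representation are made up for illustration): a rule's body is given as the sets of variables occurring in its positive and negated atoms, and the function checks the safety condition quoted above.

def is_safe(positive_atoms, negative_atoms):
    # Safety: every variable appearing in a negated body atom must also
    # appear in at least one positive body atom.
    positive_vars = set().union(*positive_atoms)
    negative_vars = set().union(*negative_atoms)
    return negative_vars <= positive_vars

# rule(X,Y) :- predicate1(X,Z), not predicate2(Z,Y).
print(is_safe([{"X", "Z"}], [{"Z", "Y"}]))              # False: Y occurs only in a negated atom

# rule(X,Y) :- predicate1(X,Z), not predicate2(Z,Y), predicate3(Y).
print(is_safe([{"X", "Z"}, {"Y"}], [{"Z", "Y"}]))       # True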
How is the 'is/2' Prolog predicate implemented?
I know that
X is 3*4
is equivalent with
is(X, 3*4)
But is the predicate implemented using imperative programming?
In other words, is the implementation equivalent with the following C code?
if (uninstantiated(X))
{
    X = 3*4;
}
else
{
    // signal an error
}
Or is it implemented using declarative programming and other predicates?
Depends on your Prolog, obviously, but any practical implementation will do its dirty work in C or another imperative language. Part of is/2 can be simulated in pure Prolog:
is(X, Expr) :-
    evaluate(Expr, Value),
    (var(X) ->
        X = Value
    ;
        X =:= Value
    ).
Where evaluate is a huge predicate that knows about arithmetic expressions. There are ways to implement large parts of it in pure Prolog too, but that will be both slow and painful. E.g. if you have a predicate that adds integers, then you can multiply them as well using the following (stupid) algorithm:
evaluate(X + Y, Value) :-
    % even this can be done in Prolog using an increment predicate,
    % but it would take O(n) time to do n/2 + n/2.
    add(X, Y, Value).
evaluate(X * Y, Value) :-
    (X == 0 ->
        Value = 0
    ;
        evaluate(X + -1, X1),
        evaluate(X1 * Y, Value1),
        evaluate(Y + Value1, Value)
    ).
None of this is guaranteed to be either practical or correct; I'm just showing how arithmetic could be implemented in Prolog.
Would depend on the version of Prolog; for example, CProlog is (unsurprisingly) written in C, so all built-in predicates are implemented in an imperative language.
Prolog was developed for language parsing. So an arithmetic expression like
3 + - ( 4 * 12 ) / 2 + 7
after parsing is just a Prolog term (representing the parse tree), with operator declarations (op/3) providing the semantics that guide the parser's operation. For basic arithmetic expressions, the terms are
'-'/1. Negation (unary minus)
'*'/2, '/'/2. Multiplication, division
'+'/2, '-'/2. Addition, subtraction
The sample expression above is parsed as
'+'( '+'( 3 , '/'( '-'( '*'(4,12) ) , 2 ) ) , 7 )
'is'/2 simply does a recursive walk of the parse tree representing the right hand side, evaluating each term in pretty much the same way an RPN (reverse polish notation) calculator does. Once that expression is evaluated, the result is unified with the left hand side.
Each basic operation — add, subtract, multiply, divide, etc. — has to be done in machine code, so at the end of the day, some machine code routine is being invoked to compute the result of each elemental operation.
Whether is/2 is written entirely in native code or written mostly in prolog, with just the leaf operations written in native code, is pretty much an implementation choice.
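To illustrate what such a recursive walk looks like (outside Prolog), here is a short sketch in Python that evaluates a nested-tuple parse tree in the way described above, dispatching on the operator at each node; the tuple representation is just an assumption for the example.

def evaluate(term):
    # Leaves are plain numbers; interior nodes are (operator, operand...) tuples.
    if isinstance(term, (int, float)):
        return term
    op, *args = term
    if op == '-' and len(args) == 1:             # unary minus
        return -evaluate(args[0])
    left, right = (evaluate(a) for a in args)    # binary operators
    if op == '+': return left + right
    if op == '-': return left - right
    if op == '*': return left * right
    if op == '/': return left / right
    raise ValueError("unknown operator: " + op)

# '+'('+'(3, '/'('-'('*'(4,12)), 2)), 7), i.e. 3 + - (4 * 12) / 2 + 7
tree = ('+', ('+', 3, ('/', ('-', ('*', 4, 12)), 2)), 7)
print(evaluate(tree))   # -14.0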
I am aware of how to evaluate an expression after converting it into Polish notation. However, I would like to know how I can evaluate something like this:
If a < b Then a + b Else a - b
a + b is evaluated if the condition a < b is true; otherwise, if it is false, a - b is computed.
The grammar is not an issue here, since I only need the algorithm to solve this problem. I am able to evaluate boolean and algebraic expressions. But how can I go about solving the above problem?
Do you need to assign a+b or a-b to something?
You can do this:
int c = a < b ? a+b : a-b;
Or
int sign = a < b ? 1 : -1;
int c = a + (sign * b);
Refer to the LISP language for S-expressions, e.g.:
(if (> a b) ; if-part
(+ a b) ; then-part
(- a b)) ; else-part
Actually, if you want to evaluate just this simple if statement, tokenize it and evaluate it. But if you want to evaluate more complicated things, like nested if-then-else, if with expressions, multiple elses, variable assignments, types, and so on, you need to use a parser, such as an LR parser. You can use e.g. Lex & Yacc to write a good parser for your own language; they support fairly complicated grammars. But if you want to know how an LR parser (or similar) works, you should read up on them and see how they use their tables to read tokens and parse them, e.g. take a look at the wiki page and see how the LR parser table works (it's something more than a simple stack and is not easy to describe here).
If your problem really is just parsing the if statement, you can borrow a trick from parser techniques: add an empty production after a < b that stands for an action, and another after else that also stands for an action. Once you have parsed the condition, depending on whether it is true or false you run one of the actions. By the way, if you want to parse expressions inside the if statement you need a conditional stack, i.e. something like an SLR table.
Basically, you need to build in support for a ternary operator. That is, where currently you pop an operator and then wait for 2 sequential values before resolving it, you need to wait for 3 if your current operation is IF, and 2 for the other operations.
To handle the if statement, you can consider the if statement in terms of C++'s ternary operator. Which formats you want your grammar to support is up to you.
a < b ? a + b : a - b
You should be able to evaluate boolean operators on your stack the way you currently evaluate arithmetic operations, so a < b should be pushed as
< a b
The if can be represented by its own symbol on the stack, we can stick with '?'.
? < a b
and the 2 possible conditions to evaluate need to separated by another operator, might as well use ':'
? < a b : + a b - a b
So now when you pop '?', you see it is the operator that needs 3 values, so put it aside as you normally would and continue to evaluate the stack until you have 3 values. The ':' operator should be a binary operator that simply pushes both of its values back onto the stack.
Once you have 3 values on the stack, you evaluate ? as:
If the first value is 1, push the 2nd value, throw away the third.
If the first value is 0, throw away the 2nd and push the 3rd.
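Putting that description together, here is a small sketch in Python of a prefix evaluator with the three-operand '?' (the eval_prefix name and the token representation are just assumptions for the example). Note that it evaluates both branches eagerly, which is fine for pure arithmetic but differs from a short-circuiting if:

def eval_prefix(tokens, env):
    # Evaluate a prefix (Polish notation) token list by scanning right to left
    # with a value stack. '?' consumes three values: the condition and the two
    # branch results. ':' only separates the branches; its values stay on the stack.
    stack = []
    for tok in reversed(tokens):
        if tok == ':':
            continue
        elif tok == '?':
            cond, then_val, else_val = stack.pop(), stack.pop(), stack.pop()
            stack.append(then_val if cond else else_val)
        elif tok == '<':
            x, y = stack.pop(), stack.pop()
            stack.append(x < y)
        elif tok == '+':
            x, y = stack.pop(), stack.pop()
            stack.append(x + y)
        elif tok == '-':
            x, y = stack.pop(), stack.pop()
            stack.append(x - y)
        else:
            stack.append(env.get(tok, tok))   # look up variables; anything else is pushed as-is
    return stack.pop()

# ? < a b : + a b - a b   is   a < b ? a + b : a - b
tokens = ['?', '<', 'a', 'b', ':', '+', 'a', 'b', '-', 'a', 'b']
print(eval_prefix(tokens, {'a': 2, 'b': 5}))   # 7  (2 < 5, so a + b)
print(eval_prefix(tokens, {'a': 9, 'b': 5}))   # 4  (9 is not < 5, so a - b)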