What language is this expression and what does it mean?
x = (x << 13) ^x;
It could be any number of languages. In C and several other languages, << is a left-shift operator, and ^ is a bitwise XOR operator.
Both << and ^ (left-shift and XOR, respectively) are bitwise operators, and many languages such as C, C++, and Java have them:
http://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Bitwise_operators
In C, this would be "left shift x by 13 binary places, and take the XOR of this and x".
It could be any C-derived language.
It means that the author only knows part of C. Otherwise they would have written
x ^= x << 13;
to XOR a value with itself multiplied by 2¹³.
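To put concrete numbers on that "multiplied by 2¹³" reading, here is the same computation written as a Prolog arithmetic goal (purely illustrative; SWI-Prolog happens to provide << and xor as integer functions, and 123 is an arbitrary example value):
?- X0 = 123, X is X0 xor (X0 << 13).
X0 = 123,
X = 1007739.
% 123 << 13 is 123 * 8192 = 1007616; since the shifted copy shares no set bits
% with the original 123, XORing them here equals adding them: 123 * 8193.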
What language is this expression
That is C syntax, so it could be any C-family programming language (C, C++, C#, Java, JavaScript). However, it is not PHP or Perl, because variables in those languages carry sigils.
what does it mean?
I actually can't read that code either; C-style syntax is very hard for me to read. From what I understand from the other answers, it is equivalent to:
(bit-xor (bit-shift-left x 13) x)
Is there some way in SWI-Prolog to write predicates with three variables, for example union(A,B,C), in the form C = A ∪ B? For predicates with two variables I know there are operators to do that, but I am not sure whether there is something similar in this case.
No.
Not directly. Prolog only supports defining unary operators (prefix or postfix operators such as -- 32 or 32 ++, which correspond to '--'/1 and '++'/1 respectively) and binary infix operators (e.g. X is Y, which corresponds to is/2); there is no three-argument operator syntax.
If you look at the standard operator definitions and precedences, you would need to define your union operator as an infix operator with a precedence of less than 700 (the precedence of =/2), so that ∪ binds more tightly than =.
Then, reading a term like x = y ∪ z would yield '='(x, '∪'(y,z)).
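A minimal sketch of that approach (the precedence 600, the wrapper name holds/1, and the use of library(lists)'s union/3 are my own choices, not anything built in; quote the operator as '∪' if your Prolog does not accept the bare symbol):
% Declare ∪ as an infix operator that binds more tightly than =/2.
:- op(600, yfx, ∪).

% holds/1 is an illustrative wrapper: it maps the C = A ∪ B notation
% back onto the ordinary union/3 predicate.
holds(C = A ∪ B) :-
    union(A, B, C).

% ?- holds(X = [1,2] ∪ [2,3]).
% X = [1, 2, 3].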
Another way to do it would be to write a DCG (definite clause grammar) to parse the text as desired. See this tutorial: https://www.metalevel.at/prolog/dcg
I understand that you can get the same behavior by using <= or !(x > y), but I usually prefer to think in terms of "not greater" rather than "less than or equal", so having something like !> and !< would be really neat and would match the != operator perfectly.
The !(x > y) syntax requires more characters and reads as "not: x is greater than y", which is clumsy and very unlike natural speech.
I have never seen !< or !> operators anywhere, but have been wondering ever since I started programming why they are not supported. Are there any reasons why not?
Using the smallest number of logical and comparison operators, and ones closest to classical math and logic, makes it simpler to reason about conditions.
There have been programming languages, such as Natural and COBOL, with operators like NOT LESS THAN, but reasoning about (a !< b) while thinking through a complex condition is much more difficult than reasoning about !(a < b), which is equivalent to (a >= b).
An example:
!(!(a < b) && !(a > c))
Is equivalent to:
!((a >= b) && (a <= c))
which, by De Morgan's laws, simplifies to:
(a < b) || (a > c)
There is nothing to be gained from writing !((a !< b) && (a !> c)) instead.
Most programming languages use ~ to represent a unary bitwise-not operation. Go, by contrast, uses ^:
fmt.Println(^1) // Prints -2
Why did the Go designers decide to break with convention here?
Because ^x is equivalent to m ^ x, with m = "all bits set to 1" for unsigned x and m = -1 for signed x; the spec says so. For example, -1 has every bit set, so -1 ^ 1 flips only the lowest bit, giving -2, which matches ^1 above.
It's similar to how -x is defined as 0 - x.
How is the 'is/2' Prolog predicate implemented?
I know that
X is 3*4
is equivalent to
is(X, 3*4)
But is the predicate implemented using imperative programming?
In other words, is the implementation equivalent to the following C code?
if (uninstantiated(X))
{
    X = 3*4;
}
else
{
    // signal an error
}
Or is it implemented using declarative programming and other predicates?
Depends on your Prolog, obviously, but any practical implementation will do its dirty work in C or another imperative language. Part of is/2 can be simulated in pure Prolog:
is(X, Expr) :-
    evaluate(Expr, Value),
    (   var(X)
    ->  X = Value       % X is unbound: bind it to the computed value
    ;   X =:= Value     % X is already bound: compare numerically
    ).
Where evaluate is a huge predicate that knows about arithmetic expressions. There are ways to implement large parts of it in pure Prolog too, but that will be both slow and painful. E.g. if you have a predicate that adds integers, then you can multiply them as well using the following (stupid) algorithm:
evaluate(X + Y, Value) :-
    % even this can be done in Prolog using an increment predicate,
    % but it would take O(n) time to do n/2 + n/2.
    add(X, Y, Value).
evaluate(X * Y, Value) :-
    (   X == 0
    ->  Value = 0
    ;   evaluate(X + -1, X1),        % X1 = X - 1
        evaluate(X1 * Y, Value1),    % Value1 = (X - 1) * Y
        evaluate(Y + Value1, Value)  % X * Y = Y + (X - 1) * Y
    ).
None of this is guaranteed to be either practical or correct; I'm just showing how arithmetic could be implemented in Prolog.
That would depend on the version of Prolog; for example, CProlog is (unsurprisingly) written in C, so all of its built-in predicates are implemented in an imperative language.
Prolog was developed for language parsing. So an arithmetic expression like
3 + - ( 4 * 12 ) / 2 + 7
after parsing is just a Prolog term (representing the parse tree), with op/3 declarations guiding how the parser builds it. For basic arithmetic expressions, the terms are
'-'/1. Negation (unary minus)
'*'/2, '/'/2. Multiplication, division
'+'/2, '-'/2. Addition, subtraction
The sample expression above is parsed as
'+'( '+'( 3 , '/'( '-'( '*'(4,12) ) , 2 ) ) , 7 )
'is'/2 simply does a recursive walk of the parse tree representing the right hand side, evaluating each term in pretty much the same way an RPN (reverse polish notation) calculator does. Once that expression is evaluated, the result is unified with the left hand side.
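A sketch of that walk in Prolog (my_is/2 and eval/2 are names made up for illustration; a real is/2 is built in, handles many more cases, and, as noted below, delegates each elemental operation to native arithmetic):
my_is(X, Expr) :-
    eval(Expr, Value),   % recursively evaluate the right-hand side
    X = Value.           % then unify the result with the left-hand side

eval(N, N) :- number(N).                                   % leaves are numbers
eval(- A, V)   :- eval(A, VA), V is -VA.                   % '-'/1
eval(A + B, V) :- eval(A, VA), eval(B, VB), V is VA + VB.  % '+'/2
eval(A - B, V) :- eval(A, VA), eval(B, VB), V is VA - VB.  % '-'/2
eval(A * B, V) :- eval(A, VA), eval(B, VB), V is VA * VB.  % '*'/2
eval(A / B, V) :- eval(A, VA), eval(B, VB), V is VA / VB.  % '/'/2

% ?- my_is(X, 3 + - ( 4 * 12 ) / 2 + 7).
% X = -14.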
Each basic operation — add, subtract, multiply, divide, etc. — has to be done in machine code, so at the end of the day, some machine code routine is being invoked to compute the result of each elemental operation.
Whether is/2 is implemented entirely in native code, or mostly in Prolog with just the leaf operations in native code, is pretty much an implementation choice.
I've been investigating functional programming, and it occurred to me that there could be a functional language which has (immutable) objects with methods, and which therefore supports method chaining (where chainable methods would return new instances rather than mutating the instance the method is called on and returning it).
This would have readability advantages as...
o.f().g().h()
... is arguably more readable than:
h(g(f(o)))
It would also allow you to associate particular functions with particular types of object, by making them methods of those types (which I understand to be one advantage of object-oriented languages).
Are there any languages which behave like this? Are there any reasons to believe that this would be a bad idea?
(I know that you can program like this in e.g. JavaScript, but JavaScript doesn't enforce immutability.)
Yes. For example, F# uses the forward-pipe operator (|>), which makes the code very readable:
[1 .. 20]
|> Seq.map functionFoo
|> Seq.map functionBoo
and so on...
Frege has this; it is known as TDNR (type-directed name resolution).
Specifically, if x has type T, and y occurs in the namespace of T, then x.y is the same as (T.y x), which in plain English is: y from the namespace T applied to x.
Practical applications of this are convenient syntax for record field access and for access to native methods (i.e. Java methods, since Frege compiles to Java).
Scala sounds like a good fit: it's a hybrid functional/object-oriented language.
You don't need objects for that: just define your own reverse-apply infix operator, which most functional languages allow you to do. Currying then does the rest. For example, in OCaml:
let (>>) x f = f x
Demo:
let f x y z = z * (x - y)
let g x = x + 1
let h x y = y * x
5 >> f 6 2 >> g >> h 2 (* = h 2 (g (f 6 2 5)) *)
(Or choose whatever operator name you prefer; others use |> for example.)