I'd like to be able to reason about code on paper better than just writing boxes or pseudocode.
The key thing here is paper. On a machine I can most likely use a high-level language with a linter/compiler very quickly, and a keyboard somewhat restricts what can be written anyway.
A case study is APL, a language that we semi-jokingly describe as "write-only". Here is an example:
m ← +/3+⍳4
(Explanation: ⍳4 creates the array [1,2,3,4]; 3 is added to each element; the elements are then summed; and the result is stored in the variable m.)
Look how concise that is! Imagine having to type those symbols in your day job! But writing iotas and arrows on a whiteboard is fine, and it saves time and ink.
Here's its Haskell equivalent:
m = foldl (+) 0 (map (+3) [1..4])
And Python:
from functools import reduce; from operator import add
m = reduce(add, map(lambda x: x + 3, range(1, 5)))
But the principle behind these concise programming languages is different: they use words and punctuation to describe high-level actions (such as fold), whereas I want to write symbols for these common actions.
Does such a formalised pseudocode exist?
Not to be snarky, but you could use APL. It was after all originally invented as a mathematical notation before it was turned into a programming language. I seem to remember that there was something like what I think you are talking about in Backus' Turing Award lecture. Finally, maybe Z Notation is what you want: https://en.m.wikipedia.org/wiki/Z_notation
I'm reading a research paper, High Performance Dynamic Lock-Free Hash Tables and List-Based Sets (Maged M. Michael), and I don't understand the pseudo-code syntax used in its examples.
Specifically these parts:
〈pmark,cur,ptag〉: MarkPtrType;
〈cmark,next,ctag〉: MarkPtrType;
nodeˆ.〈Mark,Next〉←〈0,cur〉;
if CAS(prev,〈0,cur,ptag〉,〈0,node,ptag+1〉)
E.g. (page 5, Section 3)
UPDATE:
The ˆ. seems to be a Pascal notation for dereferencing the pointer and accessing the variable in the record (https://stackoverflow.com/a/1814936/8524584).
The ← arrow seems to be like the Haskell do-notation operator for assignment, which assigns the result of an operation to a variable. The notation itself predates Haskell, though; it's probably an older convention (assignment in APL and in standard algorithm pseudocode) that Haskell also borrowed. (https://en.wikibooks.org/wiki/Haskell/do_notation#Translating_the_bind_operator)
The 〈a, b〉 looks like the mathematical angle-bracket notation for the inner product of two vectors (https://mathworld.wolfram.com/InnerProduct.html)
The wide angle bracket notation seems to be either an ad-hoc list notation or a way of manipulating multiple variables on a single line (thanks @graybeard for pointing it out). It might even be some kind of tuple.
This is what 〈pmark,cur,ptag〉: MarkPtrType; would look like in a C like language:
MarkPtrType pmark;
MarkPtrType cur;
MarkPtrType ptag;
// or some list assignment notation
// or a tuple
The ˆ. seems to be a Pascal notation for dereferencing the pointer and accessing the variable in the record (https://stackoverflow.com/a/1814936/8524584).
The ← arrow is APL assignment notation, also similar to Haskell's do-notation operator for assignment.
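To make the notation concrete, here is a rough Python sketch (the names and structure are my own assumptions, and the CAS is only simulated, not atomic) of one way to read the triples: a MarkPtrType value has three components, a declaration like 〈pmark,cur,ptag〉: MarkPtrType names all three at once, the ← line assigns several components in one step, and CAS succeeds only if all three components still equal the expected triple.

from dataclasses import dataclass

@dataclass
class MarkPtr:          # hypothetical stand-in for MarkPtrType
    mark: int = 0       # 1 = node is logically deleted
    ptr: object = None  # pointer to the next node
    tag: int = 0        # version counter, used to avoid the ABA problem

def cas(cell, expected, new):
    # Simulated compare-and-swap on the whole 〈mark, ptr, tag〉 triple;
    # a real implementation would perform this atomically in hardware.
    if (cell.mark, cell.ptr, cell.tag) == expected:
        cell.mark, cell.ptr, cell.tag = new
        return True
    return False

# nodeˆ.〈Mark,Next〉 ← 〈0,cur〉  reads roughly as:
#     node.mark, node.ptr = 0, cur
# if CAS(prev, 〈0,cur,ptag〉, 〈0,node,ptag+1〉)  reads roughly as:
#     if cas(prev, (0, cur, ptag), (0, node, ptag + 1)): ...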
Recently I have been thinking about an algorithm I constructed myself. I call it Replacement Compiling.
It works as follows:
Define a language as well as its operators' precedence, such as
(1) store <value> as <id>, replace with: var <id> = <value>, precedence: 1
(2) add <num> to <num>, replace with: <num> + <num>, precedence: 2
Accept a line of input, such as store add 1 to 2 as a;
Tokenize it: <kw,store><kw,add><num,1><kw,to><num,2><kw,as><id,a><EOF>;
Then scan through all the tokens until reach the end-of-file, find the operation with highest precedence, and "pack" the operation:
<kw,store>(<kw,add><num,1><kw,to><num,2>)<kw,as><id,a><EOF>
Replace the "sub-statement", the expression in parenthesis, with the defined replacement:
<kw,store>(1 + 2)<kw,as><id,a><EOF>
Repeat until there are no more statements left:
(<kw,store>(1 + 2)<kw,as><id,a>)<EOF>
(var a = (1 + 2))
Then evaluate the code with the built-in function, eval().
eval("var a = (1 + 2)")
Then my question is: would this algorithm work, and what are its limitations? Does this algorithm work better on simple languages?
This won't work as-is, because there's no way of deciding the precedence of operations and keywords, but you have essentially defined parsing (and thrown in an interpretation step at the end). This looks pretty close to operator-precedence parsing, though I could be wrong about the details of your vision. The real keys to what makes a parsing algorithm are: the direction/precedence in which it reads the code; whether decisions are made top-down (figure out what kind of statement it is and apply the rules) or bottom-up (assemble small pieces into larger components until the types of statements are apparent); and whether the grammar is encoded as code or as data for a generic parser. (I'm probably overlooking something, but this should give you a starting point to make sense of further reading.)
More typically, code is parsed using an LR technique (LL if it's top-down) driven by a state machine with look-ahead and next-step information, but you'll also find the occasional recursive descent parser. Since they're all doing very similar things (only implemented differently), your rough algorithm could probably be refined to look a lot like any of them.
For most people learning about parsing, recursive descent is the way to go, since everything is in the code instead of building what amounts to an interpreter for a state machine definition. But most parser generators build an LL or LR parser.
And I'm obviously over-simplifying the field, since you can see at the bottom of the Wikipedia pages that there's a smattering of related systems that partly revolve around the kind of grammar you have available. But for most languages, those are the big-three algorithms.
What you've defined is a rewriting system: https://en.wikipedia.org/wiki/Rewriting
You can make a compiler like that, but it's hard work and runs slowly, and if you do a really good job of optimizing it then you'll end up with a conventional table-driven parser. It would be better in the end to learn about those first and just start there.
If you really don't want to use a parser generating tool, then the easiest way to write a parser for a simple language by hand is usually recursive descent: https://en.wikipedia.org/wiki/Recursive_descent_parser
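As an illustration of that suggestion, here is a minimal recursive-descent parser in Python for the toy language from the question (the grammar and function names are my own guesses):

def parse_statement(tokens):
    # statement ::= "store" expr "as" IDENT
    expect(tokens, "store")
    value = parse_expr(tokens)
    expect(tokens, "as")
    name = tokens.pop(0)
    return f"var {name} = {value}"

def parse_expr(tokens):
    # expr ::= "add" expr "to" expr | NUMBER
    if tokens and tokens[0] == "add":
        tokens.pop(0)
        left = parse_expr(tokens)
        expect(tokens, "to")
        right = parse_expr(tokens)
        return f"({left} + {right})"
    return tokens.pop(0)  # a number literal

def expect(tokens, keyword):
    if not tokens or tokens.pop(0) != keyword:
        raise SyntaxError(f"expected {keyword!r}")

print(parse_statement("store add 1 to 2 as a".split()))   # var a = (1 + 2)

Each grammar rule becomes one function, and the look-ahead decision ("is the next token add?") replaces the precedence scan from the question.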
Eta abstraction in lambda calculus means the following:
A function f can be written as \x -> f x
Is Eta abstraction of any use while reducing lambda expressions?
Is it only an alternate way of writing certain expressions?
Practical use cases would be appreciated.
The eta reduction/expansion is just a consequence of the law that says that given
f = g
it must be, that for any x
f x = g x
and vice versa.
Hence given:
f x = (\y -> f y) x
we get, by beta reducing the right hand side
f x = f x
which must be true. Thus we can conclude
f = \y -> f y
First, to clarify the terminology, paraphrasing a quote from the Eta conversion article in the Haskell wiki (also incorporating Will Ness' comment above):
Converting from \x -> f x to f would
constitute an eta reduction, and moving in the opposite way
would be an eta abstraction or expansion. The term eta conversion can refer to the process in either direction.
Extensive use of η-reduction can lead to Pointfree programming.
It is also typically used in certain compile-time optimisations.
Summary of the use cases found:
Point-free (style of) programming
Allow lazy evaluation in languages using strict/eager evaluation strategies
Compile-time optimizations
Extensionality
1. Point-free (style of) programming
From the Tacit programming Wikipedia article:
Tacit programming, also called point-free style, is a programming
paradigm in which function definitions do not identify the arguments
(or "points") on which they operate. Instead the definitions merely
compose other functions
Borrowing a Haskell example from sth's answer (which also shows composition that I chose to ignore here):
inc x = x + 1
can be rewritten as
inc = (+) 1
This is because (following yatima2975's reasoning) inc x = x + 1 is just syntactic sugar for inc = \x -> (+) 1 x, so
\x -> f x => f
\x -> ((+) 1) x => (+) 1
(Check Ingo's answer for the full proof.)
There is a good thread on Stackoverflow on its usage. (See also this repl.it snippet.)
2. Allow lazy evaluation in languages using strict/eager evaluation strategies
Makes it possible to use lazy evaluation in eager/strict languages.
Paraphrasing from the MLton documentation on Eta Expansion:
Eta expansion delays the evaluation of f until the surrounding function/lambda is applied, and will re-evaluate f each time the function/lambda is applied.
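A tiny illustration of that point in Python (which is strict; make_adder is a made-up name):

def make_adder(n):
    print("doing expensive setup...")   # shows when evaluation actually happens
    return lambda x: x + n

# Strict binding: the setup would run immediately.
#   add10 = make_adder(10)
# Eta-expanded binding: the setup runs only when add10 is applied,
# and runs again on every application (as the MLton page notes).
add10 = lambda x: make_adder(10)(x)

print(add10(5))   # "doing expensive setup..." is printed here, then 15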
Interesting Stackoverflow thread: Can every functional language be lazy?
2.1 Thunks
I could be wrong, but I think the notion of thunking or thunks belongs here. From the wikipedia article on thunks:
In computer programming, a thunk is a subroutine used to inject an
additional calculation into another subroutine. Thunks are primarily
used to delay a calculation until its result is needed, or to insert
operations at the beginning or end of the other subroutine.
Section 4.2 Variations on a Scheme — Lazy Evaluation of Structure and Interpretation of Computer Programs (pdf) has a very detailed introduction to thunks (and even though it contains not a single occurrence of the phrase "lambda calculus", it is worth reading).
(This paper also seemed interesting, but I haven't had the time to look into it yet: Thunks and the λ-Calculus.)
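In that sense a thunk is just a delayed computation, optionally memoized so that it runs at most once; a rough Python sketch (the helper name is mine):

def make_thunk(compute):
    # Delay `compute` and memoize its result (call-by-need, as in SICP 4.2).
    result, forced = None, False
    def force():
        nonlocal result, forced
        if not forced:
            result, forced = compute(), True
        return result
    return force

t = make_thunk(lambda: print("computing...") or 42)
print(t())   # prints "computing..." and then 42
print(t())   # prints only 42; the computation is not repeated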
3. Compile-time optimizations
Completely ignorant on this topic, therefore just presenting sources:
From Georg P. Loczewski's The Lambda Calculus:
In 'lazy' languages like Lambda Calculus, A++, SML, Haskell, Miranda etc., eta conversion, abstraction and reduction alike, are mainly used within compilers. (See [Jon87] page 22.)
where [Jon87] expands to
Simon L. Peyton Jones
The Implementation of Functional Programming Languages
Prentice Hall International, Hertfordshire, HP2 7EZ, 1987.
ISBN 0 13 453325 9.
search results for "eta" reduction abstraction expansion conversion "compiler" optimization
4. Extensionality
This is another topic I know little about, and it is more theoretical, so here goes:
From the Lambda calculus wikipedia article:
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments.
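A rough way to see the same idea from the programming side (a Python illustration, not lambda calculus proper): an eta-expanded function is a different object but extensionally equal to the original.

def f(x):
    return x + 1

g = lambda x: f(x)   # eta-expansion of f

print(f is g)                                   # False: different objects
print(all(f(x) == g(x) for x in range(100)))    # True: same result for every tested argument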
Some other sources:
nLab entry on Eta-conversion that goes deeper into its connection with extensionality, and its relationship with beta-conversion
a ton of info in What's the point of η-conversion in lambda calculus? on the Theoretical Computer Science Stack Exchange (but beware: the author of the accepted answer seems to have a beef with the commonly held belief about the relationship between eta reduction and extensionality, so make sure to read the entire page. Most of it was over my head, so I have no opinion.)
The question above has been cross-posted to the Mathematics Stack Exchange as well.
Speaking of "over my head" stuff: here's Conor McBride's take; the only thing I understood was that eta conversions can be controversial in certain contexts, and reading his reply felt like trying to figure out an alien language (couldn't resist).
I saved this page recursively in the Internet Archive, so if any of the links are no longer live, that snapshot may have captured them too.
Mathematica 6 added TakeWhile, which has the syntax:
TakeWhile[list, crit]
gives elements ei from the beginning of list, continuing so long as crit[ei] is True.
There is however no corresponding "DropWhile" function. One can construct DropWhile using LengthWhile and Drop, but it almost seems as though one is discouraged from using DropWhile. Why is this?
To clarify, I am not asking for a way to implement this function. Rather: why is it not already present? It seems to me that there must be a reason for its absence other than an oversight, or it would have been corrected by now. Is there something inefficient, undesirable, or superfluous about DropWhile?
There appears to be some ambiguity about the function of DropWhile, so here is an example:
DropWhile = Drop[#, LengthWhile[#, #2]] &;
DropWhile[{1,2,3,4,5}, # <= 3 &]
Out= {4, 5}
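(For comparison, the same pair of operations exists elsewhere; for example, Python's itertools ships both takewhile and dropwhile:)

from itertools import dropwhile
print(list(dropwhile(lambda x: x <= 3, [1, 2, 3, 4, 5])))   # [4, 5]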
Just a blind guess.
There are a lot of list operations that could take a While criterion. For example:
Total..While
Accumulate..While
Mean..While
Map..While
Etc..While
They are not difficult to construct, anyway.
I think those are not included just because the list of "primitive" functions is already growing too long, and the criterion of "is it frequently needed and difficult for the user to implement with good performance?" prevails in those cases.
The ubiquitous lists in Mathematica are fixed-length vectors, and when they consist of machine numbers they are stored as packed arrays.
Thus the natural functions for a recursively defined linked list (e.g. in Lisp or Haskell) are not the primary tools in Mathematica.
So I am inclined to think this explains why Wolfram did not fill out its repertoire of manipulation functions.
Well, I know it might sound a bit strange, but yes, my question is: "What is a unification algorithm?"
Well, I am trying to develop an application in F# to act like Prolog. It should take a series of facts and process them when making queries.
It was suggested that I get started by implementing a good unification algorithm, but I did not have a clue about this.
Please refer to this question if you want to get a bit deeper to what I want to do.
Thank you very much and Merry Christmas.
If you have two expressions with variables, then a unification algorithm tries to match the two expressions and gives you an assignment for the variables that makes the two expressions the same.
For example, if you represented expressions in F#:
type Expr =
  | Var of string               // Represents a variable
  | Call of string * Expr list  // Call named function with arguments
And had two expressions like this:
Call("foo", [ Var("x"), Call("bar", []) ])
Call("foo", [ Call("woo", [ Var("z") ], Call("bar", []) ])
Then the unification algorithm should give you an assignment:
"x" -> Call("woo", [ Var("z") ]
This means that if you replace all occurrences of the "x" variable in the two expressions, the results of the two replacements will be the same expression. If you had expressions calling different functions (e.g. Call("foo", ...) and Call("bar", ...)) then the algorithm will tell you that they are not unifiable.
There is also some explanation on Wikipedia, and if you search the internet you'll surely find some useful descriptions (and perhaps even an implementation in some functional language similar to F#).
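If it helps to see the overall shape, here is a minimal Python sketch of syntactic (Robinson-style) unification over terms like the Expr type above; the term representation is my own choice, and the occurs check is omitted for brevity:

# A variable is represented as a plain string, a call as (name, [arguments]).
def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    # Follow bindings already recorded in the substitution.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b                      # no occurs check, for brevity
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    (fa, args_a), (fb, args_b) = a, b
    if fa != fb or len(args_a) != len(args_b):
        return None                       # different functions or arities: not unifiable
    for x, y in zip(args_a, args_b):
        subst = unify(x, y, subst)
        if subst is None:
            return None
    return subst

# The example from above: unify foo(X, bar()) with foo(woo(Z), bar())
print(unify(("foo", ["X", ("bar", [])]),
            ("foo", [("woo", ["Z"]), ("bar", [])])))   # {'X': ('woo', ['Z'])}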
I found Baader and Snyder's work to be most informative. In particular, they describe several unification algorithms (including Martelli and Montanari's near-linear algorithm using union-find), and describe both syntactic unification and various kinds of semantic unification.
Once you have unification, you'll also need backtracking. Kiselyov/Shan/Friedman's LogicT framework will help here.
Obviously, destructive unification would be much more efficient than a purely functional one, but much less F-sharpish as well. If it's performance you're after, you will probably end up implementing a subset of the WAM anyway:
https://en.wikipedia.org/wiki/Warren_Abstract_Machine
And this could probably help: Andre Marien, Bart Demoen: A new Scheme for Unification in WAM.