I'm reading the research paper "High Performance Dynamic Lock-Free Hash Tables and List-Based Sets" (Maged M. Michael) and I don't understand the pseudo-code syntax used in its examples.
Specifically these parts:
〈pmark,cur,ptag〉: MarkPtrType;
〈cmark,next,ctag〉: MarkPtrType;
node^.〈Mark,Next〉←〈0,cur〉;
if CAS(prev,〈0,cur,ptag〉,〈0,node,ptag+1〉)
E.g. page 5, section 3 of the paper.
UPDATE:
The ^. seems to be the Pascal notation for dereferencing a pointer and accessing a field of the record it points to (https://stackoverflow.com/a/1814936/8524584).
The ← arrow resembles the assignment operator in Haskell's do notation, but the arrow as assignment notation long predates Haskell (it appears in APL and in classic algorithm pseudocode), so it is probably notation that Haskell also borrowed from (https://en.wikibooks.org/wiki/Haskell/do_notation#Translating_the_bind_operator).
The 〈a, b〉 looks like the mathematical angle-bracket notation for the inner product of two vectors (https://mathworld.wolfram.com/InnerProduct.html), but that reading doesn't fit this context.
The wide angle brackets instead seem to be either an ad-hoc list notation or manipulation of multiple variables on a single line (thanks #graybeard for pointing that out). It might even be some kind of tuple.
This is what 〈pmark,cur,ptag〉: MarkPtrType; might look like in a C-like language:
MarkPtrType pmark;
MarkPtrType cur;
MarkPtrType ptag;
// or some list-assignment notation
// or a tuple: a single MarkPtrType value with three named fields
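Putting the pieces together: MarkPtrType appears to be one CAS-able word that packs the mark bit, the pointer, and the version tag, and 〈pmark,cur,ptag〉: MarkPtrType declares a single value of that type with names for its three fields. Here is a minimal C11 sketch of the idea; the packing scheme and names are my assumptions, not the paper's exact layout, and the tag is omitted for brevity:

#include <stdatomic.h>
#include <stdint.h>

/* 〈mark,next〉 packed into one word: the mark bit sits in the low bit of
   an aligned pointer, so a single CAS updates both fields atomically.
   (The version tag from the paper is left out to keep the sketch short.) */
typedef struct Node {
    _Atomic uintptr_t mark_next;       /* the paper's 〈Mark,Next〉 field */
    int key;
} Node;

static uintptr_t pack(unsigned mark, Node *next) {
    return (uintptr_t)next | (uintptr_t)mark;
}

/* Roughly node^.〈Mark,Next〉 ← 〈0,cur〉 followed by
   if CAS(prev, 〈0,cur〉, 〈0,node〉), i.e. the question's lines minus the tag. */
static int try_link(_Atomic uintptr_t *prev, Node *cur, Node *node) {
    uintptr_t expected = pack(0, cur);
    atomic_store(&node->mark_next, pack(0, cur));
    return atomic_compare_exchange_strong(prev, &expected, pack(0, node));
}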
Related
I'd like to be able to reason about code on paper better than just writing boxes or pseudocode.
The key thing here is paper. On a machine I can most likely use a high-level language with a linter/compiler very quickly, and in any case a keyboard somewhat restricts which symbols can be entered.
A case study is APL, a language that we semi-jokingly describe as "write-only". Here is an example:
m ← +/3+⍳4
(Explanation: ⍳4 creates an array, [1,2,3,4], then 3 is added to each component, which are then summed together and the result stored in variable m.)
Look how concise that is! Imagine having to type those symbols in your day job! But writing iota and arrows on a whiteboard is fine, and it saves time and ink.
Here's its Haskell equivalent:
m = foldl (+) 0 (map (+3) [1..4])
And Python:
from functools import reduce
from operator import add

m = reduce(add, map(lambda x: x + 3, range(1, 5)))
But the principle behind these concise programming languages is different: they use words and punctuation to describe high-level actions (such as fold), whereas I want to write symbols for these common actions.
Does such a formalised pseudocode exist?
Not to be snarky, but you could use APL. It was, after all, originally invented as a mathematical notation before it was turned into a programming language. I seem to remember there was something like what you are describing in Backus' Turing Award lecture. Finally, maybe Z notation is what you want: https://en.m.wikipedia.org/wiki/Z_notation
Sometimes the value of a variable accessed within the control flow of a program cannot possibly have any effect on its output. For example:
global var_1
global var_2

start program hello(var_3, var_4)
    if (var_2 < 0) then
        save-log-to-disk (var_1, var_3, var_4)
    end-if
    return ("Hello " + var_3 + ", my name is " + var_1)
end program
Here only var_1 and var_3 have any influence on the output, while var_2 and var_4 are only used for side effects.
Do variables such as var_1 and var_3 have a name in dataflow-theory/compiler-theory?
Which static dataflow analysis techniques can be used to discover them?
References to academic literature on the subject would be particularly appreciated.
The problem that you stated is undecidable in general,
even for the following very narrow special case:
Given a single routine P(x), where x is a parameter of type integer. Is the output of P(x) independent of the value of x, i.e., does
P(0) = P(1) = P(2) = ...?
We can reduce the following still-undecidable version of the halting problem to the question above: given a Turing machine M(), does M() never stop on the empty input?
I assume that we use a (Turing-complete) language in which we can build a "Turing machine simulator":
Given the program M(), construct this routine:
P(x):
    if x == 0:
        return 0
    run M() for x steps
    if M() has terminated:
        return 1
    else:
        return 0
Now:

P(0) = P(1) = P(2) = ...
  => M() does not terminate

because, taking the contrapositive:

M() does terminate
  => P(x) = 1 for a sufficiently large x
  => P(x) != P(0) = 0
So it is impossible in general (not merely difficult) for a compiler to decide whether a variable influences the return value of a routine; in your example, the "side effect routine" might manipulate one of its values (or even loop infinitely, which would most definitely change the return value of the routine ;-)
Of course, overapproximations are still possible. For example, one might conclude that a variable does not influence the return value if it does not appear in the routine body at all. Classical compiler analyses (like expression simplification and constant propagation) also have the side effect of eliminating occurrences of such redundant variables.
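For instance, in this hypothetical C fragment, once a constant propagator learns that k is always 0, the branch folds away and var_2 no longer appears in the body at all, so even the crude "does it occur?" test classifies it as having no influence:

/* Hypothetical illustration: after constant propagation (k == 0),
   the branch is dead code and var_2 vanishes from the routine body. */
int example(int var_2, int var_3) {
    int k = 0;
    if (k > 0) {
        var_3 += var_2;   /* eliminated as dead code */
    }
    return var_3;
}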
Pachelbel has discussed the fact that you cannot do this perfectly. OK; I'm an engineer, so I'm willing to accept some dirt in my answer.
The classic way to answer your question is to trace dataflows from program outputs back to program inputs. A dataflow is the connection from a program assignment (or side effect) that produces a variable's value to a place in the application that consumes that value.
If there is a (transitive) dataflow from a program output that you care about (in your example, the printed text stream) back to an input you supplied (var_2), then that input "affects" the output. An input whose value does not flow to your desired output is useless from your point of view.
If you focus your attention on only the computations involved in those dataflows, and display them, you get what is generally called a "program slice". There are (very few) commercial tools that can show this to you.
Grammatech has a good reputation here for C and C++.
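To make the slice concrete for the question's program: a backward slice from the returned string keeps only the statements its value transitively depends on, namely

global var_1

start program hello(var_3, var_4)
    return ("Hello " + var_3 + ", my name is " + var_1)
end program

The if-block around save-log-to-disk drops out because nothing it computes flows into the returned string, which is precisely why var_2 and var_4 are useless for this particular output.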
There are standard compiler algorithms for constructing such dataflow graphs; see any competent compiler book.
They all suffer from some limitation due to the impossibility proofs pointed out by Pachelbel. When you implement such a dataflow algorithm, there will be places where it cannot know the right answer; it must simply pick one.
If your algorithm chooses to answer "there is no dataflow" in places where it is not sure, then it may miss a valid dataflow and incorrectly report that a variable does not affect the answer (a "false negative"). This occasional error may be acceptable if the algorithm has some other nice properties, e.g., it runs really fast on millions of lines of code. (The trivial algorithm simply says "no dataflow" everywhere, and it is really fast :)
If your algorithm chooses to answer "yes, there is a dataflow", then it may claim that some variable affects the answer when it does not (a "false positive").
You get to decide which is more important; many people prefer false positives when looking for a problem, because then you at least have to look at the possibilities detected by the tool. A false negative means the tool didn't report something you might care about. YMMV.
Here's a starting reference: http://en.wikipedia.org/wiki/Data-flow_analysis
Any of the books on that page will be pretty good. I have Muchnick's book and like it a lot. See also the page on program slicing: http://en.wikipedia.org/wiki/Program_slicing
You will discover that implementing this is a pretty big effort for any real language. You are probably better off finding a tool framework that already does most or all of this for you.
I use the following algorithm: a variable is used if it is a parameter or if it occurs anywhere in an expression, other than as the LHS of an assignment. First, count the number of uses of all variables. Then delete unused variables and assignments to unused variables. Repeat until no variables are deleted.
This algorithm only implements a subset of the OP's requirement, and it is horribly inefficient because it requires multiple passes. A garbage-collection-style algorithm might be faster but is harder to write: my algorithm only requires a list of variables with usage counts. Each pass is linear in the size of the program. The algorithm effectively does a limited kind of dataflow analysis by eliminating the tail of a flow that ends in an assignment.
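A tiny worked example (hypothetical code) of why the passes must repeat: deleting one dead assignment can drive another variable's usage count to zero.

a = read()
b = a + 1
c = b * 2
return a

Pass 1 finds no uses of c and deletes c = b * 2; that deletion removes the only use of b, so pass 2 deletes b = a + 1; pass 3 deletes nothing and the algorithm stops.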
For my language, the elimination of side effects in the RHS of an assignment to an unused variable is mandated by the language specification; this may not be suitable for other languages. Effectiveness is improved by running the pass before inlining, to reduce the cost of inlining unused function applications, and then running it again afterwards, which eliminates parameters of inlined functions.
Just as an example of the utility of this specification rule: the library constructs a thread pool and assigns a pointer to it to a global variable. If the thread pool is not used, the assignment is deleted, and hence the construction of the thread pool is elided.
IMHO compiler optimisations are almost invariably heuristics whose performance matters more than their effectiveness at achieving a theoretical goal (like removing unused variables). Simple reductions are useful not only because they're fast and easy to write, but also because a programmer who understands the basics of how the compiler operates can leverage that knowledge to help it. The best-known example of this is probably the refactoring of recursive functions to place the recursion in tail position: a pointless exercise unless the programmer knows the compiler can do tail-recursion optimisation.
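For example (a standard illustration in C, not specific to my language): moving the pending multiplication into an accumulator parameter puts the recursive call in tail position, so a compiler that performs the optimisation can turn it into a loop.

/* Not a tail call: the multiply still happens after the call returns. */
unsigned long fact(unsigned n) {
    return n <= 1 ? 1 : n * fact(n - 1);
}

/* Tail call: the recursive call is the last thing evaluated, so it can
   be compiled to a jump instead of a new stack frame. */
unsigned long fact_acc(unsigned n, unsigned long acc) {
    return n <= 1 ? acc : fact_acc(n - 1, n * acc);
}
/* Usage: fact_acc(n, 1) computes the same value as fact(n). */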
For example, if you try (+ 3 4), how is it broken down and calculated in the source, specifically? Does it use recursion with add1?
The implementation of + is actually much more complicated than you might expect, because arithmetic is generic in Racket: it works on integers, rational numbers, complex numbers, and so on. You can even mix and match these kinds of numbers and it'll do the right thing. In the end, it's ultimately going to use arithmetic in C, which is what the runtime system is written in.
If you're curious, you can find more of the guts of the numeric tower here: https://github.com/plt/racket/blob/master/src/racket/src/numarith.c
Other pointers: Bignum arithmetic, the Scheme numeric tower, the Racket reference on numbers.
The + operator is a primitive operation, part of the core language. For efficiency reasons, it wouldn't make much sense to implement it as a recursive procedure.
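To make "arithmetic is generic" concrete, here is a hedged C sketch of the general shape such dispatch takes inside a runtime. This is an illustration of the idea only, not Racket's actual code; see numarith.c for the real thing.

#include <stdio.h>

/* Toy two-level numeric tower: the runtime inspects representation
   tags and picks the right addition, widening on mixed operands. */
typedef enum { FIXNUM, FLONUM } Tag;
typedef struct { Tag tag; union { long fx; double fl; } v; } Num;

static Num num_add(Num a, Num b) {
    Num r;
    if (a.tag == FIXNUM && b.tag == FIXNUM) {
        r.tag = FIXNUM;
        r.v.fx = a.v.fx + b.v.fx;              /* plain machine '+' in C */
    } else {                                   /* mixed kinds: widen to float */
        double x = (a.tag == FIXNUM) ? (double)a.v.fx : a.v.fl;
        double y = (b.tag == FIXNUM) ? (double)b.v.fx : b.v.fl;
        r.tag = FLONUM;
        r.v.fl = x + y;
    }
    return r;
}

int main(void) {
    Num a = { FIXNUM, { .fx = 3 } };
    Num b = { FIXNUM, { .fx = 4 } };
    printf("%ld\n", num_add(a, b).v.fx);       /* (+ 3 4) => prints 7 */
    return 0;
}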
Is it possible to obtain the maximum difference between two columns (for example starting and ending weights)?
Right now I'm leaning towards no, as this would require a new column with the difference between the two columns for each row, and then taking the max of that. Doing it the way I originally intended doesn't work either, since arithmetic operations are not allowed in the conditions of select operations (e.g. SIGMA (c1 - c2 < c3 - c4)(Table) is not allowed).
Disclosure: this is part of a homework question.
It can be done, exactly in the way you planned, but you need generalized projection for that. The generalized projection is the operator
Π(E1, E2, ..., En)(R)
where R is a relation, and E1, ..., En are expressions of the form a ⊕ b, where a and b are attributes of R or constants, and ⊕ is an arbitrary binary operator between them. The result is a relation with attributes E1, ..., En.
This would allow you to project the differences into a new relation (R' := Π(x-y)R), then find the maximum on that, just as you planned.
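Concretely, with hypothetical attributes start and end in a relation Weights, and writing G for the usual grouping/aggregation operator (notation varies by textbook):

R' := Π(start - end)(Weights)    -- a one-attribute relation of per-row differences; call the attribute d
Answer := G max(d) (R')          -- aggregation yields the maximum difference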
If we're not allowed to use generalized projection, then I think we have no means to actually subtract one attribute from another, or to calculate anything from them at all: the definition of projection allows only attribute names, and the definition of selection allows only expressions of the form aθb, where a and b are attributes or constants and θ is a binary relational operator. (This is logical, in its way: if we have a relation R(X,Y), we have no idea about the type of X or Y, which makes operations on them quite meaningless.)
I think generalized projection is a great extension to relational algebra. It's obviously immensely useful in real life, and it can be defended even from a more scientific point of view: if we allow binary conditional operators on the values, like "X > 50", then we have already made assumptions about the type, rendering that point kind of moot. Your instructor may disagree, though.
If you're looking to do this in the real world, you should be able to do this with a subquery (or a view, which amounts to much the same thing), something like:
select max (diff) from (
select high - low as diff from blah blah blah
)
Whether this applies to the abstract world of relational algebra, I couldn't say. I'm too busy fixing those damn real-world problems :-)
Mathematica 6 added TakeWhile, which has the syntax:
TakeWhile[list, crit]
gives elements e_i from the beginning of list, continuing for as long as crit[e_i] is True.
There is however no corresponding "DropWhile" function. One can construct DropWhile using LengthWhile and Drop, but it almost seems as though one is discouraged from using DropWhile. Why is this?
To clarify, I am not asking for a way to implement this function. Rather: why is it not already present? It seems to me that there must be a reason for its absence other than an oversight, or it would have been corrected by now. Is there something inefficient, undesirable, or superfluous about DropWhile?
There appears to be some ambiguity about the function of DropWhile, so here is an example:
DropWhile = Drop[#, LengthWhile[#, #2]] &;
DropWhile[{1,2,3,4,5}, # <= 3 &]
Out= {4, 5}
Just a blind guess.
There are a lot of list operations that could take a While criterion. For example:
Total..While
Accumulate..While
Mean..While
Map..While
Etc..While
They are not difficult to construct, anyway.
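For instance, a one-line sketch of the first of these, built on the existing TakeWhile (the name is the hypothetical from the list above, not a built-in):

TotalWhile[list_, crit_] := Total[TakeWhile[list, crit]]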
I think those are not included simply because the list of "primitive" functions is already very long, and the criterion "is it frequently needed and difficult for the user to implement with good performance?" prevails in those cases.
The ubiquitous lists in Mathematica are fixed-length vectors, and when they hold machine numbers they are stored as packed arrays.
Thus the natural functions for a recursively defined linked list (e.g. in Lisp or Haskell) are not the primary tools in Mathematica.
So I am inclined to think this explains why Wolfram did not fill out its repertoire of manipulation functions.