I am having problems converting the following algorithm to OCaml. To implement it I used the Set.Make(String) functor; in and out are actually two functions. Can anyone give me precise OCaml code for the following?
This is the algorithm for Live Variables [PDF]. Help would be greatly appreciated.
for all n, in[n] = out[n] = Ø
w = { set of all nodes }
repeat until w empty
    n = w.pop()
    out[n] = ∪_{n' ∈ succ[n]} in[n']
    in[n] = use[n] ∪ (out[n] − def[n])
    if change to in[n],
        for all predecessors m of n, w.push(m)
end
It's hard for me to tell exactly what is going on here. I think there are some alignment issues in your text: repeat until w empty should be repeating the next 5 lines, right? And how are in and out functions? They look like arrays to me. Aside from those deficiencies, I'll tackle some general rules I have followed.
I've had to translate a number of numerical methods in C and Fortran algorithms to functional languages and I have some suggestions for you.
0) Define the datatypes being used. This will really help you with the next step (spoiler: looking for functional constructs). Once you know the datatypes you can more accurately define the functional constructs you will eventually apply.
1) Look for functional constructs: folds, recursion, and maps, and when to use them. For example, the for all predecessors m is a fold (I'm unsure whether it would fold over a tree or a list, but nonetheless, a fold). While loops are a good place for recursion, but don't worry about making it a tail call; you can modify the parameters later to adhere to those requirements. Don't worry about being 100% pure. Remove enough impure constructs (references or arrays) to get a feel for the algorithm in a functional way.
2) Write any part of the algorithm that you can. Leave functions blank, add dummy values, and just implement what you know; then you can ask SO better, more informed questions.
3) Re-write it. There is a good chance you missed some functional constructs, or used an array or reference where you now realize you can use a list, a set, or an accumulator you pass along. You may have defined a list but later realize you cannot randomly access it (well, doing so would be pretty inefficient), or that it needs to be traversed forward and back (a good place for a zipper). Either way, when you finally get it you'll know, and you should have a huge ear-to-ear grin on your face.
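To make this concrete, here is a rough OCaml sketch of the worklist algorithm above, using Set.Make(String) as you mention. It is deliberately impure (hash tables and a queue) as a first pass, per point 1. It assumes nodes are ints and that succ, pred, use and def are given as functions; those names, and live_variables itself, are placeholders you would adapt to your own program representation:

module VarSet = Set.Make (String)

let live_variables ~nodes ~succ ~pred ~use ~def =
  (* in_ and out map each node to its current variable set *)
  let in_ = Hashtbl.create 16 and out = Hashtbl.create 16 in
  let get tbl n = try Hashtbl.find tbl n with Not_found -> VarSet.empty in
  (* w = set of all nodes, kept as a mutable worklist *)
  let w = Queue.create () in
  List.iter (fun n -> Queue.add n w) nodes;
  while not (Queue.is_empty w) do
    let n = Queue.pop w in
    (* out[n] = union of in[n'] over all successors n' of n *)
    let out_n =
      List.fold_left
        (fun acc n' -> VarSet.union acc (get in_ n'))
        VarSet.empty (succ n)
    in
    Hashtbl.replace out n out_n;
    (* in[n] = use[n] ∪ (out[n] − def[n]) *)
    let in_n = VarSet.union (use n) (VarSet.diff out_n (def n)) in
    (* if in[n] changed, re-examine all predecessors of n *)
    if not (VarSet.equal in_n (get in_ n)) then begin
      Hashtbl.replace in_ n in_n;
      List.iter (fun m -> Queue.add m w) (pred n)
    end
  done;
  (in_, out)

Note that the fold over succ n is exactly the kind of functional construct mentioned in point 1, and the hash tables stand in for the in/out arrays until you settle on a purer representation.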
Context
I started teaching myself lambda calculus last night and I am trying to determine if what I understand so far is correct.
Understanding
SKK is equivalent to the Identity combinator, I.
Where L stands for lambda:
S = LxLyLz((xz)(yz))
K = LxLy(x)
K essentially takes the next 2 (lambda) terms and gives back the first of those. S seems a little more complicated in the untyped lambda calculus.
My Interpretation
SK(any-lambda-term) is also equivalent to I.
I.e. the application of the application of S to K to Any-lambda-term is equivalent to the Identity combinator:
((S K)(Any)) = I = S K K = ((S K)(K))
I am using the convention of “left-association” in my above notation, if that helps (And I tried to make that clear in the 4th term above with parentheses. Everything I have read so far seems to use this convention).
Reasoning
S K = LyLz((K z)(y z))
The next lambda term will be substituted for y, let the term be Y.
S K Y = Lz((K z)(Y z))
(Y z) is the application of Y to z, also a lambda term.
(K z) returns the constant function that returns z when given another term as input, here (Y z).
Is my interpretation true? If not, can you provide an explanation? I would greatly appreciate it, particularly if a sort of order of operations can be explained; I regularly find myself confused about when to evaluate. Perhaps that will be refined with practice.
Your intuition is correct, but an intuition proves nothing (alas...)
So, how can we prove your statement? Simply by showing that SKK and SKS have the same behaviour. "Behaviour" is an informal notion, which is formally captured by "semantics": if SKK and SKS are equal, then they should always reduce to the same term, according to the SKI-calculus semantics.
Now, there is a deeper question: what is the SKI calculus? Actually, there is more than one way to answer that. What you implicitly do in your question is express SKI terms as λ-terms and rely on the semantics of the λ-calculus. This is absolutely correct. Another way would have been to define the SKI semantics directly. For instance, if you look at the Wikipedia page, you can see that the semantics are not defined with lambda terms (and the fact that they correspond to lambda terms is a (nice and expected) side effect). In the rest of this answer, I'll take the same approach as you do and convert SKI terms into λ-terms. A good exercise for you is to redo the proof using the proper SKI semantics.
So, let's formalize your question: you are asking whether, for any SKI term t, SKKt = SKSt. Well... let's see.
SKKt is encoded as (λx.λy.λz.(xz)(yz))(λx.λy.x)(λx.λy.x)t in the λ-calculus. We now just have to reduce it to a normal form (I detail every step, each time reducing the leftmost λ, even though it is not the fastest strategy):
(λx.λy.λz.(xz)(yz))(λx.λy.x)(λx.λy.x)t
= (λy.λz.((λx.λy.x)z)(yz))(λx.λy.x)t
= (λz.((λx.λy.x)z)((λx.λy.x)z))t
= ((λx.λy.x)t)((λx.λy.x)t)
= (λy.t)((λx.λy.x)t)
= t
So, the encoding of SKKt in the λ-calculus reduces to t (as a side note, we have just proved that SKK is equivalent to I here). To conclude our proof, we have to reduce SKSt and see whether it also reduces to t.
SKSt is encoded as (λx.λy.λz.(xz)(yz))(λx.λy.x)(λx.λy.λz.(xz)(yz))t. Let's reduce it (I won't detail it as much this time):
(λx.λy.λz.(xz)(yz))(λx.λy.x)(λx.λy.λz.(xz)(yz))t
= ((λx.λy.x) t)((λx.λy.λz.(xz)(yz)) t)
= (λy.t)((λx.λy.λz.(xz)(yz)) t)
= t
Hurrah! It also reduces to t, so indeed SKS and SKK are equivalent. It seems that the third combinator does not matter: as soon as you have SK?, it is equivalent to I. As an exercise, you can easily prove it (same strategy: if it is the case, then for any terms t and s, SKts = s). As mentioned above, another good exercise is to redo the proof without using the λ semantics, but the proper SKI semantics.
Finally, my answer should raise a new question for you: we have two semantics, one that encodes SKI terms into λ-terms, and one that does not. The question you may have is: are the two semantics equivalent? What does it mean for two semantics to be equivalent? If you are only starting to teach yourself the λ-calculus, it may be a bit early to try to answer those questions right now, but you can keep them in a corner of your head for when you get more familiar with formal languages.
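If you want to see this "experimentally" rather than by reduction, one cheap trick (not a proof, just intuition) is to encode S and K directly as OCaml functions and watch both SKK and SKS behave as the identity:

let s x y z = (x z) (y z)
let k x y = x

let () =
  (* s k k is the identity on any value *)
  assert (s k k 42 = 42);
  (* s k s also returns its argument unchanged; here the argument is a
     function, which we then apply to check we got it back intact *)
  assert (s k s (fun a b -> a + b) 1 2 = 3)

(Because of OCaml's rank-1 polymorphism, s k s only typechecks when applied to function-shaped arguments, but the behaviour is the identity all the same.)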
I have been asked to prove whether the following set is decidable, semi-decidable, or not semi-decidable:
L = { p | ∃y. M_y(p) = p }
In other words, it is the set of inputs p such that there exists a Turing machine, encoded by the natural number y, that returns its input when run on p.
Consider the set K of naturals x such that the Turing machine encoded by x halts on input x. This is known to be an undecidable set.
I think what I need is to find a reduction of K to L, but I don't know how to prove whether L is decidable, semi-decidable, or not semi-decidable.
L may not look decidable at first glance, because of the nasty unbounded quantifier, which seems to require a possibly infinite search when you look for a y satisfying the condition for a specific p.
However, the answer is much simpler: there is a Turing machine M which always returns its input, i.e. M(p) = p holds for all p in the considered language. Let y be a code of M. Then you can use this same y for all p, showing that L contains all words of the language. Hence L is of course decidable.
In fact, this is an example to demonstrate the principle of extensionality (if two sets have the same elements and one is decidable, then the other is decidable too, even if it doesn't look so).
I would like to know if there is a straightforward algorithm for rearranging simple symbolic algebraic expressions. Ideally I would like to be able to rewrite any such expression with one variable alone on the left hand side. For example, given the input:
m = (x + y) / 2
... I would like to be able to ask about x in terms of m and y, or y in terms of x and m, and get these:
x = 2*m - y
y = 2*m - x
Of course we've all done this algorithm on paper for years. But I was wondering if there was a name for it. It seems simple enough but if somebody has already cataloged the various "gotchas" it would make life easier.
For my purposes I won't need it to handle quadratics.
(And yes, CAS systems do this, and yes I know I could just use them as a library. I would like to avoid such a dependency in my application. I really would just like to know if there are named algorithms for approaching this problem.)
What you want is an equation-solving algorithm. But I bet this is a huge topic. In the general case, there may be:
systems of equations
equations that are non-linear, so additional algorithms such as equation factorization are needed
knowledge of how to invert functions, for example:
sin(x) + 10 = z; solving for x, we invert sin(), which gives arcsin(): x = arcsin(z - 10). (Not all functions are invertible!)
finally, some equations that are hard to solve even for a CAS, such as
sin(x) + x = y, solve for x.
The hard answer: your best bet is to take the source code of some CAS. For example, you can look at the Maxima CAS source code, which is written in Lisp, and find the code responsible for equation solving.
The easy answer: if all you need is to solve equations that are linear and composed only of the basic operators + - * /, then you already know the answer: use the good old paper method. Think of what rules we used on paper, and rewrite those rules as a symbolic algorithm that manipulates the equation string.
Good luck!
It seems like what you're interested in doing is maintaining a system of linear equations and then, at any time, being able to solve for one variable in terms of all the others. If you encode the relationships as a matrix, it seems like you could then reduce the matrix to some nice form (for example, reduced row echelon form) to get the "simplest" dependencies amongst the variables (for some nice definition of "simplest.") Once you have the data like this, you should be able to read off all the dependencies by just looking at some row that has a nonzero entry for the variable in question, then normalizing it so that the variable has coefficient one.
A note - in general, you won't always get a unique solution for each variable. For example, given the trivial equations
x = y
x = z
Then solving for z could yield either "z = x" or "z = y," depending on how much simplification you want. Or alternatively, in a setup like
x = 2y + 3w
x = 9z
Returning a value for x could hand back either expression, or their sum over two, or a whole bunch of other things that are all technically true but not necessarily useful. I'm not sure how you'd handle this, but depending on the form of your equations you can probably find a way to handle it.
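If you want to experiment with that idea, here is a rough OCaml sketch of row reduction (the name rref and the representation are mine, not a library API): each equation a1*x1 + ... + an*xn = c becomes one row [| a1; ...; an; c |], and the matrix is reduced in place.

let rref m =
  let rows = Array.length m in
  let cols = if rows = 0 then 0 else Array.length m.(0) in
  let r = ref 0 in  (* next pivot row *)
  for c = 0 to cols - 1 do
    if !r < rows then begin
      (* find a pivot for column c at or below row !r *)
      let i = ref !r in
      while !i < rows && abs_float m.(!i).(c) < 1e-12 do incr i done;
      if !i < rows then begin
        (* swap the pivot row into position and normalize it *)
        let tmp = m.(!i) in m.(!i) <- m.(!r); m.(!r) <- tmp;
        let p = m.(!r).(c) in
        for j = 0 to cols - 1 do m.(!r).(j) <- m.(!r).(j) /. p done;
        (* eliminate column c from every other row *)
        for k = 0 to rows - 1 do
          if k <> !r then begin
            let f = m.(k).(c) in
            for j = 0 to cols - 1 do
              m.(k).(j) <- m.(k).(j) -. f *. m.(!r).(j)
            done
          end
        done;
        incr r
      end
    end
  done

For the running example m = (x + y) / 2, rewritten as x + y - 2m = 0 with columns (x, y, m, constant), the single row [| 1.; 1.; -2.; 0. |] is already reduced, and reading it off for x gives x = -y + 2m, i.e. x = 2*m - y.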
Convert your expression into a data structure (a tree) in Polish (prefix) notation. Your tree is made up of nodes; each node has an operation, a left, and a right. Each of the left and right can be a symbol (e.g. "x") or another node. For example:
(x + (a + b))
Would become:
(+ x (+ a b))
Or in JSON:
["+", "x", ["+", "a", "b"]]
Your original expression m = (x + y) / 2 would look like this:
m = ["/", ["+", "x", "y"], "2"]
One of your desired expressions (solving for x) looks like this:
x = ["-", ["*", "m", "2"], "y"]
Can you see that the tree of expressions has been turned inside out and each operator has been reversed? The "-" is the reverse of the "+" and now wraps the "*" which is a reverse of the "/". The:
["+", "x", "y"]
Becomes:
["-", (something), "y"]
Where (something) is a reversal of the outer expression, built recursively. To try to explain the process: a) recursively descend the expression tree until you find the node containing the symbol you want to solve for, b) make a new node containing a reverse of this node's operation, c) replace the symbol you want to solve for with the reversed outer expression, recursively doing this as you work back outwards.
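Here is a small OCaml sketch of that process for the four basic operators. All the names (expr, solve_for, contains) are mine, purely for illustration, and it assumes the variable occurs exactly once, on the left-hand side:

type expr =
  | Var of string
  | Const of float
  | Add of expr * expr
  | Sub of expr * expr
  | Mul of expr * expr
  | Div of expr * expr

(* does the expression mention variable v? *)
let rec contains v = function
  | Var x -> x = v
  | Const _ -> false
  | Add (a, b) | Sub (a, b) | Mul (a, b) | Div (a, b) ->
      contains v a || contains v b

(* solve [lhs = rhs] for v by peeling off and reversing the outermost
   operation at each step, exactly as described above *)
let rec solve_for v lhs rhs =
  match lhs with
  | Var x when x = v -> rhs
  | Add (a, b) when contains v a -> solve_for v a (Sub (rhs, b))
  | Add (a, b) -> solve_for v b (Sub (rhs, a))
  | Sub (a, b) when contains v a -> solve_for v a (Add (rhs, b))
  | Sub (a, b) -> solve_for v b (Sub (a, rhs))
  | Mul (a, b) when contains v a -> solve_for v a (Div (rhs, b))
  | Mul (a, b) -> solve_for v b (Div (rhs, a))
  | Div (a, b) when contains v a -> solve_for v a (Mul (rhs, b))
  | Div (a, b) -> solve_for v b (Div (a, rhs))
  | _ -> failwith "variable not found"

With the original equation m = (x + y) / 2:

solve_for "x" (Div (Add (Var "x", Var "y"), Const 2.)) (Var "m")

returns Sub (Mul (Var "m", Const 2.), Var "y"), i.e. x = m*2 - y.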
There are various simple ways in which the initial equation can be modified, and performing the correct modifications in the correct order will lead to the correct solution. So how about seeing this as a search, or even a pathfinding, problem?
I read in an algorithms book that the Ackermann function cannot be made tail-recursive (what they say is "it can't be transformed into an iteration"). I'm pretty perplexed about this, so I tried and came up with this:
let ackb m n =
  (* CPS: cont receives the result of the current computation *)
  let rec rAck cont m n =
    match (m, n) with
    | 0, n -> cont (n + 1)
    | m, 0 -> rAck cont (m - 1) 1
    | m, n -> rAck (fun x -> rAck cont (m - 1) x) m (n - 1)
  in
  rAck (fun x -> x) m n
;;
(it's OCaml / F# code).
My problem is, I'm not sure that this is actually tail-recursive. Could you confirm that it is, and if not, explain why? And finally, what does it mean when people say that the Ackermann function is not primitive recursive?
Thanks!
Yes, it is tail-recursive. Every function can be made tail-recursive by an explicit transformation to continuation-passing style (CPS).
This does not mean that the function will execute in constant memory: you build stacks of continuations that must be allocated. It may be more efficient to defunctionalize the continuations, representing that data as a simple algebraic datatype.
Being primitive recursive is a very different notion, related to the expressiveness of a certain form of recursive definition used in mathematical theory, but probably not very relevant to computer science as you know it: primitive recursive functions are of very reduced expressiveness, and systems with function composition (starting with Gödel's System T), such as all current programming languages, are much more powerful.
In terms of programming languages, primitive recursive functions roughly correspond to programs without general recursion, where all loops/iterations are statically bounded (the number of possible repetitions is known in advance).
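For instance, the continuations built in your code have a very regular shape: each one just remembers an m - 1 to recurse on later. So they defunctionalize into a plain list of pending integers, making the hidden stack explicit (a sketch; the same caveat about memory applies):

let ack m n =
  (* [stack] is the defunctionalized continuation: the list of pending
     (m - 1) values that fun x -> rAck cont (m - 1) x used to capture *)
  let rec go stack m n =
    match m, n with
    | 0, n ->
        (* cont (n + 1): apply the topmost saved frame, if any *)
        (match stack with
         | [] -> n + 1
         | m' :: rest -> go rest m' (n + 1))
    | m, 0 -> go stack (m - 1) 1
    | m, n -> go ((m - 1) :: stack) m (n - 1)
  in
  go [] m n

This version is tail-recursive with no closures at all, but the list plays exactly the role of the call stack, which is the point: the space usage has moved, not disappeared.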
Yes.
By definition, any recursive function can be transformed into an iteration as long as it has access to an unbounded stack-like construct. The interesting question is whether it can be done without a stack or any other unbounded data storage.
A tail-recursive function can be turned into such an iteration only if the size of its arguments is bounded. In your example (and almost any recursive function that uses continuations), the cont parameter is, for all intents and purposes, a stack that can grow to any size. Indeed, the entire point of continuation-passing style is to store data usually present on the call stack ("what to do after I return?") in a continuation parameter instead.
I am studying membership algorithms and I am working on this particular problem which says the following:
Exhibit an algorithm that, given any regular language L, determines whether or not L = L*
So my first thought was: we have L*, which is the Kleene star of L. To determine whether L = L*, couldn't we just say that since L is regular, L* is regular too, by the theorem stating that the family of regular languages is closed under star-closure?
Therefore L will always be equal to L*?
I feel like there is definitely a lot more to it, there is probably something I am missing. Any help would be appreciated. Thanks again.
since L is regular, L* is regular too, by the theorem stating that the family of regular languages is closed under star-closure. Therefore L will always be equal to L*?
No. Regular(L) --> Regular(L*), but that does not mean that L == L*. Just because two languages are both regular does not mean that they are the same regular language. For instance, a* and b* are both regular languages, but this does not make them the same language.
An example of L != L* would be the language L = a*b*, for which L* = (a*b*)*. The string abab is in L* but not in L.
As far as an algorithm goes, let me remind you that a regular language is one that can be recognized by a DFA, and that every DFA has a unique minimal equivalent automaton.
The implication you stated is wrong. Closure under the Kleene star means only that L* is again regular if L is regular.
One possibility for checking whether L = L* is to compute the minimal automaton for both and then check them for equivalence.
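As a sketch of that last step, here is a toy equivalence check in OCaml, using the product construction directly instead of minimization (two DFAs are equivalent iff no reachable pair of states in the product disagrees on acceptance). The dfa type and all names are mine, for illustration only; building the DFA for L* from the DFA for L (Kleene-star NFA plus subset construction) is left out:

type dfa = {
  start : int;
  accept : int -> bool;
  step : int -> char -> int;
}

let equivalent alphabet a b =
  let seen = Hashtbl.create 16 in
  let rec bfs = function
    | [] -> true
    | (p, q) :: rest ->
        if Hashtbl.mem seen (p, q) then bfs rest
        else begin
          Hashtbl.add seen (p, q) ();
          (* a reachable pair disagreeing on acceptance witnesses a
             string on which the two automata differ *)
          if a.accept p <> b.accept q then false
          else
            let next =
              List.map (fun c -> (a.step p c, b.step q c)) alphabet
            in
            bfs (next @ rest)
        end
  in
  bfs [ (a.start, b.start) ]

Since L ⊆ L* always holds, it is in fact enough to check the single inclusion L* ⊆ L, but the two-sided equivalence check above is just as easy.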