L System Node Rewriting example - algorithm

This is my first post on Stack Overflow.
Recently I started reading the book "The Algorithmic Beauty of Plants", where chapter 1 explains L-systems. (You can read the chapter here.)
As I understand it, there are two types of L-systems: edge rewriting and node rewriting.
Edge rewriting is relatively simple. There is an initial polygon and a generator, and each edge (side) of the initial polygon is replaced with the generator.
But node rewriting is very confusing. From what I gathered, there are two or more rules, and with each iteration the variables in the rules are replaced with their expansions.
With the turtle interpretation, these are the standard symbols:
F : move the turtle forward in the current direction (the initial direction is up)
+ : rotate the turtle clockwise
- : rotate the turtle anticlockwise
[ : push the current state of the turtle onto a pushdown stack.
The information saved on the stack contains the turtle's position and orientation,
and possibly other attributes such as the color and width of the lines being drawn.
] : pop a state from the stack and make it the current state of the turtle
So consider the example shown on this website: http://www.selcukergen.net/ncca_lsystems_research/lsystems.html
Axiom : FX
Rule : X= +F-F-F+FX
Angle : 45
So at n=0 (ignoring the X in the axiom),
it's just F, which means a straight line pointing up.
At n=1,
replacing X in the axiom with the rule gives
F+F-F-F+F (again ignoring the X at the end).
The output is this:
http://www.selcukergen.net/ncca_lsystems_research/images/noderewrite.jpg
A simple example with one rule is fine, but in "The Algorithmic Beauty of Plants", on page 25, there are some rules I'm not sure how to interpret:
X
X = F[+X]F[-X]+X
F = FF
See this image.
https://lh6.googleusercontent.com/g3aPb1SQpvnzvDttsiiBgiUflrj7R2V29-D60IDahJs=w195-h344-no
At n=0:
just 'X'; I'm not sure what this means.
At n=1:
applying rule 1 (X -> F[+X]F[-X]+X): F[+]F[-]+ ignoring all X's. This is just a straight line.
then applying rule 2 (F -> FF): FF[+]FF[-]+. This is just a straight line.
So as I understand it, the final output should be the turtle moving four times in the up direction; or at most, the final output should contain just four lines.
I found an online L-system generator which I thought would help me understand this better, so I entered the same values. Here is how the output looks at n=1:
https://lh6.googleusercontent.com/-mj7x0OzoPk4/VK-oMHJsCMI/AAAAAAAAD3o/Qlk_02_goAU/w526-h851-no/Capture%2B2.PNG
The output is definitely not a straight line and, worse, it has 5 lines, which means there should be 5 F's in the final output string.
Please help me understand node rewriting; without understanding it, I can't read further into the book.
Sorry for the long post, and for the links in pre tags; I can't post more than 2 links.
Thanks for having the patience to read this from top to bottom.

L-systems are very simple and rely on text substitutions.
With this starting information:
Axiom : FX
Rule : X= +F-F-F+FX
Then basically, to produce the next generation of the system you take the previous generation and for each character in it you apply the substitutions.
You can use this algorithm to produce a generation:
For each character in the previous generation:
Check if we have a substitution rule for that character
YES: Append the substitution
NO: Append the original character
Thus:
n(0) = FX
+-- from the X
|
v---+---v
n(1) = F+F-F-F+FX
^
+- the original F
If you had this start instead:
Axiom : ABA
Rule : A = AB
Then you would have this:
+--------+
| |
n(0) = ABA |
| | |
| ++ |
| | |
vv vv |
n(1) = ABBAB |
^ |
+-------+
Basically:
For every A in generation X, when producing generation X+1, output AB
For every other character without a rule, just output that character (this handles all the B's)
This would be a system that doubles in length for every generation:
Axiom : A
Rule : A = AA
would create:
n(0) = A
n(1) = AA
n(2) = AAAA
n(3) = AAAAAAAA
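The substitution algorithm above is only a few lines of code. Here is a sketch in Python (function names are mine); the important detail is that every character of the previous generation is rewritten in the same pass:

```python
def step(state, rules):
    # Apply every substitution rule simultaneously: each character either
    # expands via its rule or is copied through unchanged.
    return "".join(rules.get(ch, ch) for ch in state)

def generation(axiom, rules, n):
    state = axiom
    for _ in range(n):
        state = step(state, rules)
    return state

print(generation("FX", {"X": "+F-F-F+FX"}, 1))  # F+F-F-F+FX
print(generation("ABA", {"A": "AB"}, 1))        # ABBAB
print(generation("A", {"A": "AA"}, 3))          # AAAAAAAA
```

Because all rules fire in one parallel pass, a system with two rules (like X = F[+X]F[-X]+X together with F = FF) rewrites every X and every F of the previous generation at the same time, not one rule after another.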

Related

A specific push down automaton for a language (PDA)

I'm wondering how I could design a pushdown automaton for this specific language. I can't solve it.
L2 = { u ∈ {a, b}∗ : 3 ∗ |u|a = 2 ∗ |u|b + 1 }
So the number of 'a's multiplied by 3 equals the number of 'b's multiplied by 2, plus 1.
The grammar corresponding to that language is something like:
S -> ab | ba |B
B -> abB1 | baB1 | aB1b | bB1a | B1ab | B1ba
B1 -> aabbbB1 | baabbB1 | [...] | aabbb | baabb | [...]
S generates the base case (basically strings with #a = 1 = #b) or B.
B generates the base case + B1 (in every permutation).
B1 adds 2 'a's and 3 'b's to the base case (in fact, if you keep adding this number of 'a's and 'b's, the equation 3#a = 2#b + 1 will always hold!). I didn't finish writing B1; basically you need to add every permutation of 2 'a's and 3 'b's. I think you'll be able to do it on your own :)
When you're finished with the grammar, designing the PDA is simple. More info here.
3|u|a = 2|u|b + 1 <=> 3|u|a - 2|u|b = 1
The easiest way to design a PDA for this is to implement this equation directly.
For any string x, let f(x) = 3|x|a - 2|x|b. Then design a PDA such that, after processing any string x:
The stack depth is always equal to abs( floor( f(x)/3 ) );
The symbol on top of the stack (if any) reflects the sign of floor( f(x)/3 ) — so you only need 2 kinds of stack symbols;
The current state number is f(x) mod 3 — so you only need 3 states.
From the state number and the symbol on top of the stack, you can detect when f(x) = 1, and at that condition the PDA accepts x as a string in the language.
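As a sanity check of the invariants above, here is a small Python sketch. It is not a literal transition table; it just maintains the (state, stack) encoding of f(x) and tests the acceptance condition:

```python
def accepts(u):
    """Check membership in L2 = { u in {a,b}* : 3|u|_a = 2|u|_b + 1 }
    via the encoding: state = f(x) mod 3, stack depth = |floor(f(x)/3)|,
    with a stack symbol marking the sign of floor(f(x)/3)."""
    f = 0                 # f(x) = 3*|x|_a - 2*|x|_b
    state, stack = 0, []
    for ch in u:
        f += 3 if ch == 'a' else -2
        state = f % 3                     # only 3 control states needed
        q = f // 3                        # floor division, may be negative
        stack = ['+'] * q if q >= 0 else ['-'] * (-q)  # 2 stack symbols
    # f(x) = 1  <=>  empty stack (floor = 0) and state 1
    return not stack and state == 1

print(accepts("ab"), accepts("ba"))   # True True
print(accepts("a"), accepts(""))      # False False
print(accepts("aabbbab"))             # True  (3*3 = 2*4 + 1)
```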

Lazy Folding of Infinite Depth & Infinite Breadth Rose Tree to its Edge Paths

This question haskell fold rose tree paths delved into the code for folding a rose tree to its paths. I was experimenting with infinite rose trees, and I found that the provided solution was not lazy enough to work on infinite rose trees with infinity in both depth and breadth.
Consider a rose tree like:
data Rose a = Rose a [Rose a] deriving (Show, Functor)
Here's a finite rose tree:
finiteTree = Rose "root" [
    Rose "a" [
      Rose "d" [],
      Rose "e" []
    ],
    Rose "b" [
      Rose "f" []
    ],
    Rose "c" []
  ]
The output of the edge path list should be:
[["root","a","d"],["root","a","e"],["root","b","f"],["root","c"]]
Here is an infinite Rose tree in both dimensions:
infiniteRoseTree :: [[a]] -> Rose a
infiniteRoseTree ((root:_):breadthGens) = Rose root (infiniteRoseForest breadthGens)
infiniteRoseForest :: [[a]] -> [Rose a]
infiniteRoseForest (breadthGen:breadthGens) = [ Rose x (infiniteRoseForest breadthGens) | x <- breadthGen ]
infiniteTree = infiniteRoseTree depthIndexedBreadths where
depthIndexedBreadths = iterate (map (+1)) [0..]
The tree looks like this (it's just an excerpt, there's infinite depth and infinite breadth):
0
|
|
[1,2..]
/ \
/ \
/ \
[2,3..] [2,3..]
The paths would look like:
[[0,1,2..]..[0,2,2..]..]
Here was my latest attempt (doing it on GHCi causes an infinite loop, no streaming output):
rosePathsLazy (Rose x []) = [[x]]
rosePathsLazy (Rose x children) =
concat [ map (x:) (rosePathsLazy child) | child <- children ]
rosePathsLazy infiniteTree
The provided solution in the other answer also did not produce any output:
foldRose f z (Rose x []) = [f x z]
foldRose f z (Rose x ns) = [f x y | n <- ns, y <- foldRose f z n]
foldRose (:) [] infiniteTree
Both of the above work for the finite rose tree.
I tried a number of variations, but I can't figure out how to make the edge folding operation lazy for an infinite 2-dimensional rose tree. I feel like it has something to do with the infinite amount of concatenation.
Since the output is a 2-dimensional list, I could run a 2-dimensional take and project with a depth limit or a breadth limit, or both at the same time!
Any help is appreciated!
After reviewing the answers here and thinking about it a bit more, I came to the realisation that this is not doable, because the resulting list is uncountably infinite. This is because an infinite depth and breadth rose tree is not a 2-dimensional data structure, but an infinite-dimensional data structure: each depth level confers an extra dimension. In other words, it is somewhat equivalent to an infinite-dimensional matrix; imagine a matrix where each field is another matrix, ad infinitum. The cardinality of such a structure is infinity ^ infinity, which (I think) has been proven to be uncountably infinite. This means any infinite-dimensional data structure is not really computable in a useful sense.
To apply this to the rose tree, if we have infinite depth, then the paths never enumerate past the far left of the rose tree. That is this tree:
0
|
|
[1,2..]
/ \
/ \
/ \
[2,3..] [2,3..]
would produce paths like [[0,1,2..], [0,1,2..], [0,1,2..]..], and we'd never get past [0,1,2..].
Put another way: if we have a list containing lists ad infinitum, we can never enumerate it either, as there is an infinite number of dimensions for the code to descend into.
This also relates to the real numbers being uncountably infinite: a lazy list of all real numbers would just produce 0.000.. forever and never enumerate past it.
I'm not sure how to formalise the above explanation, but that's my intuition. (For reference see: https://en.wikipedia.org/wiki/Uncountable_set) It'd be cool to see someone expand on applying https://en.wikipedia.org/wiki/Cantor's_diagonal_argument to this problem.
This book seems to expand on it: https://books.google.com.au/books?id=OPFoJZeI8MEC&pg=PA140&lpg=PA140&dq=haskell+uncountably+infinite&source=bl&ots=Z5hM-mFT6A&sig=ovzWV3AEO16M4scVPCDD-gyFgII&hl=en&sa=X&redir_esc=y#v=onepage&q=haskell%20uncountably%20infinite&f=false
For some reason, dfeuer has deleted his answer, which included a very nice insight and only a minor, easily-fixed problem. Below I discuss his nice insight, and fix the easily-fixed problem.
His insight is that the reason the original code hangs is because it is not obvious to concat that any of the elements of its argument list are non-empty. Since we can prove this (outside of Haskell, with paper and pencil), we can cheat just a little bit to convince the compiler that it's so.
Unfortunately, concat isn't quite good enough: if you give concat a list like [[1..], foo], it will never draw elements from foo. The universe collection of packages can help here with its diagonal function, which does draw elements from all sublists.
Together, these two insights lead to the following code:
import Data.Tree
import Data.Universe.Helpers
paths (Node x []) = [[x]]
paths (Node x children) = map (x:) (p:ps) where
  p:ps = diagonal (map paths children)
If we define a particular infinite tree:
infTree x = Node x [infTree (x+i) | i <- [1..]]
We can look at how it behaves in ghci:
> let v = paths (infTree 0)
> take 5 (head v)
[0,1,2,3,4]
> take 5 (map head v)
[0,0,0,0,0]
Looks pretty good! Of course, as observed by ErikR, we cannot have all paths in here. However, given any finite prefix p of an infinite path through t, there is a finite index in paths t whose element starts with prefix p.
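For readers unfamiliar with diagonal from Data.Universe.Helpers: it interleaves elements fairly from an infinite list of (possibly infinite) lists, so no single list starves the rest. Here is a rough Python analogue of that fair interleaving (a sketch using generators, not the package's exact Cantor ordering, and intended for an infinite outer sequence):

```python
from itertools import count, islice

def diagonal(gen_of_gens):
    # Round k pulls in the k-th inner generator, then takes one element
    # from every generator seen so far, so every inner generator
    # contributes within finitely many steps.
    active = []
    for new in gen_of_gens:
        active.append(new)
        for g in list(active):
            try:
                yield next(g)
            except StopIteration:
                active.remove(g)

# Infinitely many infinite rows: row i is i, i+1, i+2, ...
mix = diagonal(count(i) for i in count())
print(list(islice(mix, 10)))  # [0, 1, 1, 2, 2, 2, 3, 3, 3, 3]
```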
Not a complete answer, but you might be interested in this detailed answer on how Haskell's permutations function is written so that it works on infinite lists:
What does this list permutations implementation in Haskell exactly do?
Update
Here's a simpler way to create an infinite Rose tree:
iRose x = Rose x [ iRose (x+i) | i <- [1..] ]
rindex (Rose a rs) [] = a
rindex (Rose _ rs) (x:xs) = rindex (rs !! x) xs
Examples:
rindex (iRose 0) [0,1,2,3,4,5,6] -- returns: 28
rindex infiniteTree [0,1,2,3,4,5,6] -- returns: 13
Infinite Depth
If a Rose tree has infinite depth and non-trivial width (> 1) there can't be an algorithm to list all of the paths just using a counting argument - the number of total paths is uncountable.
Finite Depth & Infinite Breadth
If the Rose tree has finite depth the number of paths is countable even if the trees have infinite breadth, and there is an algorithm which can produce all possible paths. Watch this space for updates.
ErikR has explained why you can't produce a list that necessarily contains all the paths, but it is possible to list paths lazily from the left. The simplest trick, albeit a dirty one, is to recognize that the result is never empty and force that fact on Haskell.
paths (Rose x []) = [[x]]
paths (Rose x children) = map (x :) (a : as)
where
a : as = concatMap paths children
-- Note that we know here that children is non-empty, and therefore
-- the result will not be empty.
For making very infinite rose trees, consider
infTree labels = Rose labels (infForest labels)
infForest labels = [Rose labels' (infForest labels')
| labels' <- map (: labels) [0..]]
As chi points out, while this definition of paths is productive, it will in some cases repeat the leftmost path forever, and never reach any more. Oops! So some attempt at fairness or diagonal traversal is necessary to give interesting/useful results.

algorithm for ant movement in 2 row table

Problem:
There is a table with the following constraints:
a) It has only 2 rows.
b) It has n columns, so it is basically a 2xN table, where N is a power of two.
c) Its short ends are joined together: you can move from the last element of a row to the first element of that row, if the first element has not been visited.
Now you are given two initial positions i1 and i2 for the ants, and final destinations f1 and f2. The ants have to reach f1 and f2, but either ant can reach either point; for example, i1 may reach f2 while i2 reaches f1.
Allowed moves:
1) Ants can move only horizontally and vertically; no diagonal movement.
2) Each cell can be visited by at most one ant, and all cells must be visited in the end.
Output: the paths traced by the two ants if all cells are marked visited, else -1. The complexity of the algorithm is also needed.
Max flow can be used to compute two disjoint paths, but it is not possible to express the constraint of visiting all squares in a generic fashion (it's possible that there's a one-off trick). This dynamic programming solution isn't the slickest, but the ideas behind it can be applied to many similar problems.
The first step is to decompose the instance recursively, bottoming out with little pieces. (The constraints on these pieces will become apparent shortly.) For this problem, the root piece is the whole 2-by-n array. The two child pieces of the root are 2-by-n/2 arrays. The four children of those pieces are 2-by-n/4 arrays. We bottom out with 2-by-1 arrays.
Now, consider what a solution looks like from the point of view of a piece.
+---+-- ... --+---+
A | B | | C | D
+---+-- ... --+---+
E | F | | G | H
+---+-- ... --+---+
Squares B, C, F, G are inside the piece. Squares A, E are the left boundary. Squares D, H are the right boundary. If an ant enters this piece, it does so from one of the four boundary squares (or its initial position, if that's inside the piece). Similarly, if an ant leaves this piece, it does so to one of the four boundary squares (or its final position, if that's inside the piece). Since each square can be visited at most once, there is a small number of possible permutations for the comings and goings of both ants:
Ant 1: enters A->B, leaves C->D
Ant 2: enters E->F, leaves G->H
Ant 1: enters A->B, leaves G->H
Ant 2: does not enter
Ant 1: enters A->B, leaves C->D, enters H->G, leaves F->E
Ant 2: does not enter
Ant 1: enters A->B, leaves F->E, enters H->G, leaves C->D
Ant 2: does not enter
...
The key fact is that what the ants do strictly outside of this piece has no bearing on what happens strictly inside. For each piece in the recursive decomposition, we can compute the set of comings and goings that are consistent with all squares in the piece being covered. For the 2-by-1 arrays, this can be accomplished with brute force.
+---+
A | B | C
+---+
D | E | F
+---+
In general, neither ant starts nor ends inside this piece. Then some of the possibilities are:
Ant 1: A->B, B->E, E->F; Ant 2: none
Ant 1: A->B, B->C, F->E, E->D; Ant 2: none
Ant 1: A->B, B->C; Ant 2: D->E, E->F
Ant 1: A->B, B->C, D->E, E->F; Ant 2: none
Now, suppose we have computed the sets (DP tables hereafter) for two adjacent pieces that are the children of one parent piece.
1|2
+---+-- ... --+---+---+-- ... --+---+
A | B | | C | D | | E | F
+---+-- ... --+---+---+-- ... --+---+
G | H | | I | J | | K | L
+---+-- ... --+---+---+-- ... --+---+
1|2
Piece 1 is to the left of the dividing line. Piece 2 is to the right of the dividing line. Once again, there is a small number of possibilities for traffic on the named squares. The DP table for the parent piece is indexed by the traffic at A, B, E, F, G, H, K, L. For each entry, we try all possibilities for the traffic at C, D, I, J, using the children's DP tables to determine whether the combined comings and goings are feasible.
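The recursive decomposition itself is straightforward. Here is a sketch in Python of just the piece structure (the DP tables over boundary traffic are omitted):

```python
def decompose(lo, hi):
    """Split the column range [lo, hi) of the 2-by-n array into halves,
    bottoming out at 2-by-1 pieces; n must be a power of two."""
    pieces = [(lo, hi)]
    if hi - lo > 1:
        mid = (lo + hi) // 2
        pieces += decompose(lo, mid)
        pieces += decompose(mid, hi)
    return pieces

# For n = 4: the root, its two 2-by-2 children, and four 2-by-1 leaves.
print(decompose(0, 4))
# [(0, 4), (0, 2), (0, 1), (1, 2), (2, 4), (2, 3), (3, 4)]
```

Each piece would then carry a DP table indexed by the traffic pattern on its boundary squares; a parent's table is filled by combining its two children's tables over all possibilities for the traffic across the dividing line.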

Accessing first element of a matrix in Isabelle

I want to write a proof about a trivial case of the determinant of a matrix, where the matrix consists of just a single element (i.e., the cardinality of 'n is one).
Thus the determinant (det A) is the single element of the matrix.
However, it is not clear to me how to reference the single element of the matrix. I tried A $ zero $ zero, which did not work.
My current way to state the problem is to write ∀a∈(UNIV :: 'n set). det A = A $ a $ a, under the assumption that the cardinality of the index type is one.
What is the correct way to write this trivial proof about determinants?
Here is my current code:
theory Notepad
imports
Main
"~~/src/HOL/Library/Polynomial"
"~~/src/HOL/Algebra/Ring"
"~~/src/HOL/Library/Numeral_Type"
"~~/src/HOL/Library/Permutations"
"~~/src/HOL/Multivariate_Analysis/Determinants"
"~~/src/HOL/Multivariate_Analysis/L2_Norm"
"~~/src/HOL/Library/Numeral_Type"
begin
lemma det_one_element_matrix:
fixes A :: "('a::comm_ring_1)^'n∷finite^'n∷finite"
assumes "card(UNIV :: 'n set) = 1"
shows "∀a∈(UNIV :: 'n set). det A = A $ a $ a"
proof-
(*sledgehammer proof of 1, 2 and ?thesis *)
have 1: "∀a∈(UNIV :: 'n set). UNIV = {a}"
by (metis (full_types) Set.set_insert UNIV_I assms card_1_exists ex_in_conv)
have 2:
"det A = (∏i∈UNIV. A $ i $ i)"
by (metis (mono_tags, lifting) "1" UNIV_I det_diagonal singletonD)
from 1 2 show ?thesis by (metis setprod_singleton)
qed
UPDATE:
Unfortunately, this is part of a larger theorem which has already been proven for cardinality of 'n∷finite > 1. In this theorem the type of the matrix A is
fixed as A :: "('a::comm_ring_1)^'n∷finite^'n∷finite", and the definition of the determinant is used in this larger theorem.
Therefore, I don't think I can change the type of my matrix A to ('a::comm_ring_1)^1^1 in order to solve my larger theorem.
I feel that my previous answer is the better solution in general if it is possible to use, so I will leave it as-is. In your case where you are not able to use this approach, things get a little harder, unfortunately.
What you need to show is that:
There can only be a single element in your type 'n, and thus any two elements of it are equal;
Additionally, the definition of det also references permutations, so we need to show that there only exists a single function of type 'n ⇒ 'n, which happens to be equal to the function id.
With these in place, we can carry out the proof as follows:
lemma det_one_element_matrix:
fixes A :: "('a::comm_ring_1)^'n∷finite^'n∷finite"
assumes "card(UNIV :: 'n set) = 1"
shows "det A = A $ x $ x"
proof-
have 0: "⋀x y. (x :: 'n) = y"
by (metis (full_types) UNIV_I assms card_1_exists)
hence 1: "(UNIV :: 'n set) = {x}"
by auto
have 2: "(UNIV :: ('n ⇒ 'n) set) = {id}"
by (auto intro!: ext simp: 0)
thus ?thesis
by (auto simp: det_def permutes_def 0 1 2 sign_id)
qed
Using A $ zero $ zero (or A $ 0 $ 0) wouldn't have worked, because the vectors are indexed from 1: A $ 0 $ 0 is undefined, which makes it hard to prove anything about.
Playing a little myself, I came up with the following lemma:
lemma det_one_element_matrix:
"det (A :: ('a::comm_ring_1)^1^1) = A $ 1 $ 1"
by (clarsimp simp: det_def sign_def)
Instead of using a type 'a :: finite and assuming it has cardinality 1, I used the standard Isabelle 1 type which encodes both these facts into the type itself. (Similar types exist for all numerals, so you can write things like 'a ^ 23 ^ 72)
Incidentally, after typing in the lemma above, auto solve_direct quickly informed me that something already exists in the library stating the same result, a lemma named det_1.

symbolic computation

My problem: symbolic expression manipulation.
A symbolic expression is built from integer constants and variables with the help of operators like +, -, *, /, min, and max. More exactly, I would represent an expression in the following way (OCaml code):
type sym_expr_t =
| PlusInf
| MinusInf
| Const of int
| Var of var_t
| Add of sym_expr_t * sym_expr_t
| Sub of sym_expr_t * sym_expr_t
| Mul of sym_expr_t * sym_expr_t
| Div of sym_expr_t * sym_expr_t
| Min of sym_expr_t * sym_expr_t
| Max of sym_expr_t * sym_expr_t
I imagine that in order to perform useful and efficient computation (e.g. a + b - a = 0 or a + 1 > a) I need to have some sort of normal form and operate on it. The above representation will probably not work too well.
Can someone point out how I should approach this? I don't necessarily need code; that can be written easily once I know how. Links to papers that present representations for normal forms and/or algorithms for construction/simplification/comparison would also help.
Also, if you know of an OCaml library that does this, let me know.
If you drop Min and Max, normal forms are easy: they're elements of the field of fractions over your variables, I mean P[Vars]/Q[Vars] where P, Q are polynomials. For Min and Max, I don't know; I suppose the simplest way is to consider them as if/then/else tests and make them float to the top of your expressions (duplicating stuff in the process). For example, P(Max(Q,R)) would first be rewritten into P(if Q>R then Q else R), and then into if Q>R then P(Q) else P(R).
I know of two different ways to find normal forms for your expressions expr:
Define rewrite rules expr -> expr that correspond to your intuition, and show that they are normalizing. That can be done by orienting the equations that you know are true: from Add(a,Add(b,c)) = Add(Add(a,b),c) you will derive either Add(a,Add(b,c)) -> Add(Add(a,b),c) or the other way around. But then you have a rewrite system for which you need to show confluence (Church-Rosser) and termination; dirty business indeed.
Take a more semantic approach: give a "semantics" to your values. An element of expr is really a notation for a mathematical object that lives in the type sem. Find a suitable (unique) representation for objects of sem, then an evaluation function expr -> sem, then finally (if you wish to, but you don't need to for equality checking, for example) a reification sem -> expr. The composition of the two transformations will naturally give you a normalization procedure, without your having to worry, for example, about the direction of the Add rewriting (some arbitrary choice will arise naturally from your reification function). For example, for polynomial fractions, the semantic space would be something like:
type sem = poly * poly
and poly = (multiplicity * var * degree) list
and multiplicity = int
and degree = int
Of course, this is not always so easy. I don't see right now what representation to give to a semantic space with Min and Max functions.
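To make the semantic approach concrete for the polynomial part (ignoring Div, Min, and Max), here is a small sketch in Python rather than OCaml: a polynomial in normal form is a map from monomials to nonzero coefficients, and two expressions are equal exactly when they normalize to the same map.

```python
from collections import Counter

def var(name):
    # The polynomial consisting of a single variable.
    return Counter({(name,): 1})

def const(n):
    return Counter({(): n}) if n else Counter()

def add(p, q):
    r = Counter(p)
    r.update(q)  # Counter.update adds coefficients monomial-wise
    return Counter({m: c for m, c in r.items() if c != 0})

def neg(p):
    return Counter({m: -c for m, c in p.items()})

def mul(p, q):
    r = Counter()
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            # A monomial is a sorted tuple of variable names, repeated
            # for powers; sorting makes x*y and y*x the same monomial.
            r[tuple(sorted(m1 + m2))] += c1 * c2
    return Counter({m: c for m, c in r.items() if c != 0})

a, b = var("a"), var("b")
# a + b - a normalizes to b, so equality is plain map comparison:
print(add(add(a, b), neg(a)) == b)   # True
print(mul(a, b) == mul(b, a))        # True
```

Dropping zero coefficients after every operation is what keeps the representation canonical, so equality checking never needs any rewriting.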
Edit: Regarding external libraries, I don't know of any and I'm not sure there are. You should maybe look for bindings to other symbolic algebra software, but I haven't heard of any (there was a Jane Street Summer Project about that a few years ago, but I'm not sure any deliverable was produced).
If you need this for a production application, maybe you should consider writing the binding yourself, e.g. to Sage or Maxima. I don't know what that would be like.
The usual approach to such a problem is:
Start with a string, such as "a + 1 > a"
Run it through a lexer, and separate your input into distinct tokens: [Variable('a'); Plus; Number(1); GreaterThan; Variable('a')]
Parse the tokens into a syntax tree (what you have now). This is where you use the operator precedence rules: Max( Add( Var('a'), Const(1)), Var('a'))
Write a function that can interpret the syntax tree to obtain your final result:
let rec eval_expr expr = match expr with
| Const n -> n
| Add (a, b) -> eval_expr a + eval_expr b
(* ... and similarly for the other constructors *)
Pardon the syntax, I haven't used Ocaml in a while.
About libraries, I don't remember any off the top of my head, but there are certainly good ones easily available; this is the kind of task that the FP community loves doing.
