How to translate box representation to parenthesis representation in Scheme

I was working through box-and-pointer diagrams, and they were clear for a while, but at some point I stopped being able to follow them.
So I know that this image represents (a b c x d):
But what I don't get is what happens for this one:
Since there is one box with nothing in it that points to two things, how do I write with parentheses that there are two branches, one with b and x and one with d and a?
I would write it (c((b(x))(d(a)))), but I'm not really sure about it.
Thanks for the answer, I appreciate it.

I think the useful way of thinking about these is to realise two things:
all boxes are cons cells, aka pairs;
there is a slightly unhelpful shorthand being used, and if you redraw the diagram without that shorthand it is easier to translate.
The unhelpful shorthand is that a box which is drawn like this:
is, in fact, simply this:
Where () is the unique 'empty-box' object, traditionally called nil in Lisp; I don't think it has a standard name in Scheme.
So knowing this shorthand you can take the picture you have, which is the same as this one
and unfold it to this one
So, OK, now you have this unfolded picture, you can simply write down the representation of this as text, remembering that the representation of a cons cell is simply (l . r), say, where l and r are the left and right elements, and the representation of the empty-box object is ().
I'm not going to write down the textual representation of your problem since I don't want to do your homework – I want to explain how to do your homework! – but I will write down the representation of this, say:
Well, just following the boxes we get a representation which is
((x . ()) . (y . ()))
Which, oops, isn't anything like the sort of answer you need to get.
But all is not lost. Now you need to know the final thing: there are three special rules which apply to printing (or reading) trees (graphs) of conses like this.
Rule 1. A cons which is of the form (<anything> . ()) can be written as (<anything>).
Rule 2. A cons whose right hand entry (cdr) is itself a cons can have the dot elided and the right hand cons spliced in. So (x . (y . ...)) can be written as (x y . ...), for instance.
Rule 3. When writing cons trees, you usually try and apply rules 1 and 2 to minimise the number of dots in the printed representation.
So let's apply these rules to the above structure.
we start with ((x . ()) . (y . ()));
we can apply rule 1 twice, once to each half, to get ((x) . (y));
we can now apply rule 2 once, since the right-hand side of the top object is a cons, to get ((x) y). And this is the sort of representation you are expected to provide.
As a guide, it is best to apply these rules from the inside out, applying all the rule 1s you can, followed by the rule 2s.
So in summary the approach I would suggest is:
redraw the diagram with the shorthand for () removed;
read off the cons-tree structure from it as (... . ...);
use the rules above to minimize the number of dots in the cons-tree you have written down;
profit.
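The three rules above can be applied mechanically. Below is a small Python sketch (the modelling is my own: cons cells as two-tuples, nil as the empty tuple) that prints a cons tree with as few dots as rules 1 and 2 allow:

```python
NIL = ()  # model of the empty-box object

def cons(l, r):
    return (l, r)

def to_string(x):
    if x == NIL:
        return "()"
    if not isinstance(x, tuple):
        return str(x)           # an atom prints as itself
    car, cdr = x
    if cdr == NIL:              # rule 1: (l . ()) prints as (l)
        return "(" + to_string(car) + ")"
    if isinstance(cdr, tuple):  # rule 2: splice in a cons cdr, eliding the dot
        return "(" + to_string(car) + " " + to_string(cdr)[1:]
    return "(" + to_string(car) + " . " + to_string(cdr) + ")"

# the worked example ((x . ()) . (y . ())):
print(to_string(cons(cons("x", NIL), cons("y", NIL))))  # ((x) y)
```

The rules end up applied from the inside out here simply because to_string recurses before assembling each result.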

Start with pairs. Every box is (x . y), with x and y being the parts the box points to. E.g. for the first one it's:
(a . (b . (c . (x . (d . ())))))
Now most Lisps will remove the . and the parentheses around the cdr if they can. E.g. (d . ()) => (d), and you can apply this simplification all the way so that the above can be displayed as:
(a b c x d)
In my head, a missing dot before the end of a list means . (...), so even though the dots aren't printed, I imagine the dot notation when looking at it.
The second box diagram clearly means this:
(c . ((b . (x . ())) . (d . (a . ()))))
Now you can do the trick of removing dots wherever the cdr is itself parenthesized. I leave that for you to do, since you need to be able to perform this transformation in your mind both forward and backward. E.g. you should be able to look at ((e f) x (i j)) and read it as ((e . (f . ())) . (x . ((i . (j . ())) . ()))), and to say that accessing j is cadaddr (which might not exist, since that is 5 operations and the usual requirement is 4, so you can split it up into (car (cdaddr '((e . (f . ())) . (x . ((i . (j . ())) . ()))))) ; ==> j).
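To make the accessor walk concrete, here is the same structure in Python (two-tuple cons cells and car/cdr helpers are my own modelling, not Scheme):

```python
NIL = ()
cons = lambda l, r: (l, r)
car = lambda p: p[0]
cdr = lambda p: p[1]

# ((e f) x (i j)) written out as dotted pairs:
tree = cons(cons("e", cons("f", NIL)),
            cons("x", cons(cons("i", cons("j", NIL)), NIL)))

# cadaddr spells car(cdr(car(cdr(cdr x)))), reading the a's and d's
# right to left; (car (cdaddr tree)) performs the same five steps:
print(car(cdr(car(cdr(cdr(tree))))))  # j
```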

You can play with https://github.com/cbaggers/draw-cons-tree
CL-USER> (ql:quickload "draw-cons-tree")
CL-USER> (draw-cons-tree:draw-tree '(c ((b (x)) (d (a)))))
[o|o]---[o|/]
 |       |
 C      [o|o]---[o|/]
         |       |
         |      [o|o]---[o|/]
         |       |       |
         |       D      [o|/]
         |               |
         |               A
         |
        [o|o]---[o|/]
         |       |
         B      [o|/]
                 |
                 X
NIL
CL-USER> (draw-cons-tree:draw-tree '(c ((b (x)) d a)))
[o|o]---[o|/]
 |       |
 C      [o|o]---[o|o]---[o|/]
         |       |       |
         |       D       A
         |
        [o|o]---[o|/]
         |       |
         B      [o|/]
                 |
                 X
NIL
CL-USER>

Transform any box diagram by recursively applying the same transformation:
[ * | * ] ---> B
  |
  |
  v
  A
becomes
( {A} . {B} )
where {A} means the diagram A transformed by the same procedure.
A special case is that [ A | / ] (writing it as a shorthand for
[ * | / ]
  |
  |
  v
  A
) is the same as
[ A | * ] ---> NIL
and a base case is {NIL} === ().
The other base case is that anything not inside a box is just as it is.
It doesn't matter which way the arrow is drawn pointing, what matters is whether it starts from the first field of a box, or from the second field.
Your last diagram thus becomes
(c . { ... })
(c . ( {...} . { ... }))
(c . ( {...} . (d . { ... })))
(c . ( {...} . (d . (a . {NIL}))))
(c . ( (b . {...}) . (d . (a . {NIL}))))
(c . ( (b . (x . {NIL})) . (d . (a . {NIL}))))
(c . ( (b . (x . ( ))) . (d . (a . ( )))))
Having transformed your box-and-pointers diagram in full, you're left with a textual string representation of it. To simplify it by removing all the excessive dots and pairs of parentheses, finally apply the rule
(A B ... . (C ...)) === (A B ... C ...) ;; NB nothing after (C ...)!
in a purely syntactical manner, repeatedly, anywhere inside the string that you can, until it can't be applied any longer. Just remove both the dot and the following pair of the matching parentheses around the last subexpression in a list together, at once. Both B ... and C ... sequences can be empty, or non-empty sequences:
(c . ( (b . (x . ( ))) . (d . (a . ( )))))
(c . ( (b . (x )) . (d . (a . ( )))))
(c . ( (b . (x )) . (d . (a ))))
(c . ( (b . (x )) . (d a )))
(c . ( (b x ) . (d a )))
(c . ( (b x ) d a ))
Or you could perform it in different order, as
(c . ( (b . (x . ( ))) . (d . (a . ( )))))
(c (b . (x . ( ))) . (d . (a . ( ))) )
(c (b . (x . ( ))) d . (a . ( )) )
(c (b . (x . ( ))) d a . ( ) )
(c (b . (x )) d a )
I've left one more step for you to perform, in both cases. The end result will be the same.


Example input and output for Racket

I don't want help solving this question; however, I would like to know exactly what it's asking for. In order to better understand it, I'm asking if anyone could provide me with an example input and its corresponding output.
Write and certify a recursive procedure check which inputs an sexp s
and a list varlst
of identifiers and decides whether s belongs to the class of fully
parenthesized infix +-expressions fpip defined as follows:
var ::= a | b | c | d | e | f | g
fpip ::= var | (fpip + fpip)
Example of valid "fpip" expressions:
a
(a + b)
((a + b) + (c + d))
Explanation:
The first definition, "var", tells you that it can be one of the symbols a ... g.
The second definition, "fpip", tells you that you have either a "var" or the compound expression (fpip + fpip). That means a is a valid "fpip", since a is a valid "var". It also means (a + b) is a valid "fpip". What you get in addition by using "fpip" in place of "var" in the compound expression is nesting, like the valid "fpip" ((a + b) + (c + d)).
As a hint. Your procedure would mirror the definition. It will check if the argument is a var and if not it needs to check if it's like the second definition, which includes two recursive calls to check each part is also valid.
What is not explained very well is the purpose of varlst. I imagine that it represents the allocated variables, and that for a "var" to be valid it must not only be one of a ... g but the identifier must also exist in varlst. This is an educated guess, since I've made my share of interpreters, but I think it should have been specified more clearly. E.g. perhaps:
(fpip? 'c '(b a q)) ; ==> #f (c is in "var" definition but not in varlist)
(fpip? 'a '(b a q)) ; ==> #t (a is in "var" definition and in varlist)
(fpip? 'q '(b a q)) ; ==> #f (q is not in "var" definition)
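A minimal sketch in Python of how check could mirror the grammar, under the varlst interpretation guessed above (modelling sexps as nested Python lists is my own choice, not part of the assignment):

```python
VARS = {'a', 'b', 'c', 'd', 'e', 'f', 'g'}

def fpip(s, varlst):
    # var case: must be one of a..g AND appear in varlst
    if not isinstance(s, list):
        return s in VARS and s in varlst
    # compound case: (fpip + fpip), modelled as a 3-element list
    return (len(s) == 3 and s[1] == '+'
            and fpip(s[0], varlst) and fpip(s[2], varlst))

print(fpip('c', ['b', 'a', 'q']))                     # False
print(fpip('a', ['b', 'a', 'q']))                     # True
print(fpip([['a', '+', 'b'], '+', 'a'], ['a', 'b']))  # True
```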

Query on lambda calculus addition

How do I add two numbers in lambda calculus using below given arithmetic representation of addition?
m + n = λx.λy.(m x) (n x) y
2 = λa.λb.a (a b)
3 = λa.λb.a (a (a b))
You know what 2 is, what 3 is, and what addition is. Take the values and just stick 'em into the operation!
2 + 3 = (λx.λy.(m x) (n x) y) (λa.λb.a (a b)) (λa.λb.a (a (a b)))
        |-------- + --------| |----- 2 -----| |------- 3 -------|
This is an application with a lambda on the left. Such a term is called a redex, and it can be β-reduced. The actual reduction is left as an exercise for the reader.
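If you want to check the β-reduction, here is the same calculation transcribed into Python, using functions as Church numerals and reading the right-hand side of the addition rule as (m x) ((n x) y) (the names two, three, plus, to_int are mine):

```python
two   = lambda a: lambda b: a(a(b))     # 2 = λa.λb.a (a b)
three = lambda a: lambda b: a(a(a(b)))  # 3 = λa.λb.a (a (a b))

# m + n = λx.λy.(m x) ((n x) y)
plus = lambda m: lambda n: lambda x: lambda y: m(x)(n(x)(y))

five = plus(two)(three)

# convert a Church numeral to a plain int by counting applications
to_int = lambda church: church(lambda k: k + 1)(0)
print(to_int(five))  # 5
```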

Mutator elisp functions

How can I define mutator elisp functions? That is, how can I pass parameters to an elisp function that can be modified inside the function for use outside it (similar to non-const reference variables or pointers in C++)? For example, suppose I had a function foo defined like
(defun foo (a b c d)
;do some stuff to b, c, and d
.
.
.
)
I might like to call it, say, as follows
(defun bar (x)
(let ((a) (b) (c) (y))
.
.
.
;a, b and c are nil at this point
(foo x a b c)
(setq y (some-other-function-of a b c x and-other-variables))
.
.
.
)) ... )
y)
I know that I could throw all my parameters local to some function into one big old list, evaluate the list at the end of the function and then go fetch these variables from some other list set to be the return value of that function (a list of stuff), i.e.
(setq return-list (foo read-only-x read-only-y))
(setq v_1 (car return-list))
(setq v_2 (cadr return-list))
.
.
but are there any better ways? All I have accomplished so far in my attempts to solve this is exiting the function with variables no different from how they were passed in.
As for why I want to be able to do this: I am simply trying to refactor some large function F in such a way that all collections of expressions related to some nameable concept live in their own little modules c_1, c_2, c_3, ... c_n, which I can call from within F with whatever arguments I need updated along the way. That is to say, I would like F to look something like:
(defun F ( ... )
(let ((a_1) (a_2) ... )
(c_1 a_1 ... a_m)
(c_2 a_h ... a_i)
.
.
.
(c_n a_j ... a_k)
.
.
.
))...))
Two ways I can think of:
make the "function" foo a macro and not a function (if possible)
pass a newly created cons (or more of them) into the function, and replace the car and cdr of them via setcar/setcdr
In case the function is too complex, you can also combine both approaches - have a macro foo that creates a cons of a and b and calls a function foo0 with that cons, and later unpacks the car and cdr again.
In case you need more than two args, just use more than one cons as a parameter.
Just to show you how it can be done, but please don't do it, it's bad style.
(defun set-to (in-x out-y)
(set out-y in-x))
(let (x)
(set-to 10 'x)
x)
There's a case where this won't work, though: the symbol you pass can be captured by set-to's own argument name. Here 'in-x names set-to's parameter, so set changes the local binding instead of the caller's variable:
(let (in-x)
(set-to 10 'in-x)
in-x)
It's a bit like this C++ code
void set_to(int x, int* y) {
*y = x;
}
int y;
set_to(10, &y);
Actually I wish there were no non-const references in C++, so that
each mutator would have to be called with a pointer like above.
Again, don't do it unless it's really necessary.
Use instead multiple-value-bind or cl-flet.
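For comparison, the multiple-values style looks like this transliterated into Python (an illustration of the pattern, not elisp; the helper names are hypothetical): the helper returns everything it computed and the caller unpacks, so no argument is ever mutated:

```python
def foo(x):
    # hypothetical helper: computes several related values at once
    a = x + 1
    b = x * 2
    c = x ** 2
    return a, b, c

def bar(x):
    a, b, c = foo(x)  # unpack instead of using "out" parameters
    return a + b + c

print(bar(3))  # 19
```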

Why are difference lists more efficient than regular concatenation in Haskell?

I am currently working my way through the Learn you a Haskell book online, and have come to a chapter where the author is explaining that some list concatenations can be inefficient: For example
((((a ++ b) ++ c) ++ d) ++ e) ++ f
is supposedly inefficient. The solution the author comes up with is to use 'difference lists' defined as
newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }

instance Monoid (DiffList a) where
    mempty = DiffList (\xs -> [] ++ xs)
    (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))
I am struggling to understand why DiffList is more computationally efficient than a simple concatenation in some cases. Could someone explain to me in simple terms why the above example is so inefficient, and in what way the DiffList solves this problem?
The problem in
((((a ++ b) ++ c) ++ d) ++ e) ++ f
is the nesting. The applications of (++) are left-nested, and that's bad; right-nesting
a ++ (b ++ (c ++ (d ++ (e ++ f))))
would not be a problem. That is because (++) is defined as
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)
so to find which equation to use, the implementation must dive into the expression tree
                    (++)
                   /    \
                (++)     f
               /    \
            (++)     e
           /    \
        (++)     d
       /    \
    (++)     c
   /    \
  a      b
until it finds out whether the left operand is empty or not. If it's not empty, its head is taken and bubbled to the top, but the tail of the left operand is left untouched, so when the next element of the concatenation is demanded, the same procedure starts again.
When the concatenations are right-nested, the left operand of (++) is always at the top, and checking for emptiness/bubbling up the head are O(1).
But when the concatenations are left-nested, n layers deep, to reach the first element, n nodes of the tree must be traversed, for each element of the result (coming from the first list, n-1 for those coming from the second etc.).
Let us consider a = "hello" in
hi = ((((a ++ b) ++ c) ++ d) ++ e) ++ f
and we want to evaluate take 5 hi. So first, it must be checked whether
(((a ++ b) ++ c) ++ d) ++ e
is empty. For that, it must be checked whether
((a ++ b) ++ c) ++ d
is empty. For that, it must be checked whether
(a ++ b) ++ c
is empty. For that, it must be checked whether
a ++ b
is empty. For that, it must be checked whether
a
is empty. Phew. It isn't, so we can bubble up again, assembling
a ++ b = 'h':("ello" ++ b)
(a ++ b) ++ c = 'h':(("ello" ++ b) ++ c)
((a ++ b) ++ c) ++ d = 'h':((("ello" ++ b) ++ c) ++ d)
(((a ++ b) ++ c) ++ d) ++ e = 'h':(((("ello" ++ b) ++ c) ++ d) ++ e)
((((a ++ b) ++ c) ++ d) ++ e) ++ f = 'h':((((("ello" ++ b) ++ c) ++ d) ++ e) ++ f)
and for the 'e', we must repeat, and for the 'l's too...
Drawing a part of the tree, the bubbling up goes like this:
        (++)
       /    \
    (++)     c
   /    \
'h':"ello"  b
becomes first
     (++)
    /    \
  (:)     c
  / \
'h'  (++)
     /  \
"ello"   b
and then
  (:)
  / \
'h'  (++)
    /    \
  (++)    c
  /   \
"ello" b
all the way back to the top. The structure of the tree that becomes the right child of the top-level (:) finally, is exactly the same as the structure of the original tree, unless the leftmost list is empty, when the
  (++)
 /    \
[]     b
nodes is collapsed to just b.
So if you have left-nested concatenations of short lists, the concatenation becomes quadratic because to get the head of the concatenation is an O(nesting-depth) operation. In general, the concatenation of a left-nested
(...((a_d ++ a_{d-1}) ++ a_{d-2}) ...) ++ a_2) ++ a_1
is O(sum [i * length a_i | i <- [1 .. d]]) to evaluate fully.
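A toy cost model of this in Python (the model and names are mine, not from the text): represent each ++ node as a pair and count how many nodes are visited to deliver every element, charging each left-operand element once per ++ node above it:

```python
def size(tree):
    if isinstance(tree, list):          # a leaf is a plain list
        return len(tree)
    return size(tree[0]) + size(tree[1])

def cost(tree):
    if isinstance(tree, list):
        return len(tree)                # each element delivered directly
    left, right = tree
    # every element of the left operand passes through this ++ node
    return cost(left) + size(left) + cost(right)

a = b = c = d = [1, 2, 3]
left_nested  = (((a, b), c), d)   # ((a ++ b) ++ c) ++ d
right_nested = (a, (b, (c, d)))   # a ++ (b ++ (c ++ d))
print(cost(left_nested), cost(right_nested))  # 30 21
```

With more and longer lists the gap grows quadratically for the left-nested shape while staying linear for the right-nested one.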
With difference lists (sans the newtype wrapper for simplicity of exposition), it's not important whether the compositions are left-nested
((((a ++) . (b ++)) . (c ++)) . (d ++)) . (e ++)
or right-nested. Once you have traversed the nesting to reach the (a ++), that (++) is hoisted to the top of the expression tree, so getting at each element of a is again O(1).
In fact, the whole composition is reassociated with difference lists, as soon as you require the first element,
((((a ++) . (b ++)) . (c ++)) . (d ++)) . (e ++) $ f
becomes
((((a ++) . (b ++)) . (c ++)) . (d ++)) $ (e ++) f
(((a ++) . (b ++)) . (c ++)) $ (d ++) ((e ++) f)
((a ++) . (b ++)) $ (c ++) ((d ++) ((e ++) f))
(a ++) $ (b ++) ((c ++) ((d ++) ((e ++) f)))
a ++ (b ++ (c ++ (d ++ (e ++ f))))
and after that, each list is the immediate left operand of the top-level (++) after the preceding list has been consumed.
The important thing in that is that the prepending function (a ++) can start producing its result without inspecting its argument, so that the reassociation from
              ($)
             /   \
           (.)    f
          /   \
        (.)   (e ++)
       /   \
     (.)   (d ++)
    /   \
  (.)   (c ++)
  /  \
(a ++) (b ++)
via
          ($)---------
         /            \
       (.)             ($)
      /   \           /   \
    (.)  (d ++)   (e ++)   f
   /   \
 (.)  (c ++)
 /  \
(a ++) (b ++)
to
   ($)
  /   \
(a ++)  ($)
       /   \
   (b ++)   ($)
           /   \
       (c ++)   ($)
               /   \
           (d ++)   ($)
                   /   \
               (e ++)   f
doesn't need to know anything about the composed functions of the final list f, so it's just an O(depth) rewriting. Then the top-level
   ($)
  /   \
(a ++) stuff
becomes
  (++)
  /  \
 a   stuff
and all elements of a can be obtained in one step. In this example, where we had pure left-nesting, only one rewriting is necessary. If instead of (for example) (d ++) the function in that place had been a left-nested composition, (((g ++) . (h ++)) . (i ++)) . (j ++), the top-level reassociation would leave that untouched and this would be reassociated when it becomes the left operand of the top-level ($) after all previous lists have been consumed.
The total work needed for all reassociations is O(number of lists), so the overall cost for the concatenation is O(number of lists + sum (map length lists)). (That means you can bring bad performance to this too, by inserting a lot of deeply left-nested ([] ++).)
The
newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }

instance Monoid (DiffList a) where
    mempty = DiffList (\xs -> [] ++ xs)
    (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))
just wraps that so that it is more convenient to handle abstractly.
DiffList (a ++) `mappend` DiffList (b ++) ~> DiffList ((a ++) . (b ++))
Note that it is only efficient for functions that don't need to inspect their argument to start producing output; if arbitrary functions are wrapped in DiffLists, you have no such efficiency guarantees. In particular, appending ((++ a), wrapped or not) can create left-nested trees of (++) when composed right-nested, so you can recreate the O(n²) concatenation behaviour that way if the DiffList constructor is exposed.
It might help to look at the definition of concatenation:
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)
As you can see, in order to concatenate two lists you need to go over the left list and create a "copy" of it, just so you can change its end (this is because you can't directly change the end of the old list, due to immutability).
If you do your concatenations in the right associative way, there is no problem. Once inserted, a list will never have to be touched again (notice how ++'s definition never inspects the list on the right) so each list element is only inserted "once" for a total time of O(N).
--This is O(n)
(a ++ (b ++ (c ++ (d ++ (e ++ f)))))
However, if you do concatenation in a left-associative way, then the "current" list has to be "torn down" and "rebuilt" every time you add another list fragment to the end. Each list element is iterated over when it's inserted and again whenever future fragments are appended! It's like the problem you get in C if you naïvely call strcat multiple times in a row.
As for difference lists, the trick is that they kind of keep an explicit "hole" at the end. When you convert a DList back to a normal list you pass it what you want to be in the hole and it will be ready to go. Normal lists, on the other hand, always plug up the hole in the end with [] so if you want to change it (when concatenating) then you need to rip open the list to get to that point.
The definition of difference lists as functions can look intimidating at first, but it's actually pretty simple. You can view them from an object-oriented point of view by thinking of them as opaque objects with a "toList" method that receives the list to insert into the hole at the end, and returns the DL's internal prefix plus the tail that was provided. It's efficient because you only plug the "hole" at the end once, after you are done converting everything.
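The hole-at-the-end idea can be sketched in Python, with closures standing in for Haskell's partially applied (++). This illustrates the shape of the trick, not Python-level performance:

```python
def diff(xs):
    # a "difference list" holding xs: a function tail -> xs ++ tail
    return lambda tail: list(xs) + tail

def append_d(f, g):
    # appending is just composition; no list is copied yet
    return lambda tail: f(g(tail))

d = append_d(append_d(diff([1]), diff([2, 3])), diff([4]))
print(d([]))  # [1, 2, 3, 4]  (the hole is plugged once, at the end)
```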

Converting a function from AND, NOT and OR gates to just NOR

I have a circuit made from AND, NOT and OR gates. I need to convert it so that it only has NOR gates. This isn't working too well for me, so any tips would be much appreciated.
This is the original function to convert:
~a~b~cd + ~a~bc~d + ~ab~c~d + ~abcd + a~b~c~d + a~bcd + ab~cd + abc~d
I'm assuming cd means c AND d. The rules are:
~a = a NOR a
a^b = (a NOR a) NOR (b NOR b)
a+b = (a NOR b) NOR (a NOR b)
From that, it's purely mechanical. I'll do the first part as an example:
~a~b~cd + ~a~bc~d
(a NOR a)(b NOR b)(c NOR c)d + (a NOR a)(b NOR b)c(d NOR d)
(((a NOR a) NOR (a NOR a)) NOR ((b NOR b) NOR (b NOR b)))(c NOR c)d + (a NOR a)(b NOR b)c(d NOR d)
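The three rewrite rules can be checked exhaustively with a quick truth table, here in Python (a sanity check of the rules, not part of the mechanical conversion):

```python
from itertools import product

def NOR(a, b):
    return not (a or b)

for a, b in product([False, True], repeat=2):
    assert (not a)   == NOR(a, a)                  # ~a  = a NOR a
    assert (a and b) == NOR(NOR(a, a), NOR(b, b))  # a^b = (a NOR a) NOR (b NOR b)
    assert (a or b)  == NOR(NOR(a, b), NOR(a, b))  # a+b = (a NOR b) NOR (a NOR b)
print("all identities hold")
```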
