Scheme Infix to Postfix

Let me establish up front that this is part of a class assignment, so I'm definitely not looking for a complete code answer. Essentially we need to write a converter in Scheme that takes a list representing a mathematical equation in infix format and outputs a list with the equation in postfix format.
We've been provided with the algorithm to do so, which is simple enough. The issue is that there is a restriction against using any of the available imperative language features; I can't figure out how to do this in a purely functional manner. This is our first introduction to functional programming in my program.
I know I'm going to be using recursion to iterate over the list of items in the infix expression, like so:
(define (itp ifExpr)
  ;; do some processing using a cond statement
  (itp (cdr ifExpr)))
I have all of the processing implemented (at least as best I can without knowing how to do the rest) but the algorithm I'm using to implement this requires that operators be pushed onto a stack and used later. My question is how do I implement a stack in this function that is available to all of the recursive calls as well?

(Updated in response to the OP's comment; see the new section below the original answer.)
Use a list for the stack and make it one of the loop variables. E.g.
(let loop ((stack (list))
           ...   ; other loop variables here, like e.g.
                 ; what remains of the infix expression
           )
  ...   ; loop body
  )
Then whenever you want to change what's on the stack at the next iteration, well, basically just do so.
(loop (cons 'foo stack) ...)
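To make the pattern concrete, here's a minimal sketch (the name, the operator set, and the body are made up, and it deliberately stops short of the full infix-to-postfix conversion): it walks an expression, pushing operators onto the stack and collecting everything else.
(define (split-operators ifExpr)
  (let loop ((stack '())        ; operators pushed so far
             (out '())          ; non-operators seen so far (reversed)
             (rest ifExpr))     ; what remains of the infix expression
    (cond ((null? rest) (list (reverse out) stack))
          ((memq (car rest) '(+ - * /))
           (loop (cons (car rest) stack) out (cdr rest)))   ; "push"
          (else
           (loop stack (cons (car rest) out) (cdr rest))))))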
Also note that if you need to make a bunch of "updates" in sequence, you can often model that with a let* form. This doesn't really work with vectors in Scheme (though it does work with Clojure's persistent vectors, if you care to look into them), but it does with scalar values and lists, as well as SRFI 40/41 streams.
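For example, a tiny sketch of such sequential "updates" with let* (each binding simply shadows the previous one; nothing is mutated):
(let* ((stack '())
       (stack (cons '+ stack))   ; "push" +
       (stack (cons '* stack)))  ; "push" *
  stack)                         ; => (* +)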
In response to your comment about loops being ruled out as an "imperative" feature:
(let loop ((foo foo-val)
           (bar bar-val))
  (do-stuff))
is syntactic sugar for
(letrec ((loop (lambda (foo bar) (do-stuff))))
  (loop foo-val bar-val))
letrec then expands to a form of let which is likely to use something equivalent to a set! or local define internally, but is considered perfectly functional. You are free to use some other symbol in place of loop, by the way. Also, this kind of let is called 'named let' (or sometimes 'tagged').
You will likely remember that the basic form of let:
(let ((foo foo-val)
      (bar bar-val))
  (do-stuff))
is also syntactic sugar over a clever use of lambda:
((lambda (foo bar) (do-stuff)) foo-val bar-val)
so it all boils down to procedure application, as is usual in Scheme.
Named let makes self-recursion prettier, that's all; and as I'm sure you already know, (self-) recursion with tail calls is the way to go when modelling iterative computational processes in a functional way.
Clearly this particular "loopy" construct lends itself pretty well to imperative programming too -- just use set! or data structure mutators in the loop's body if that's what you want to do -- but if you stay away from destructive function calls, there's nothing inherently imperative about looping through recursion or the tagged let itself at all. In fact, looping through recursion is one of the most basic techniques in functional programming and the whole point of this kind of homework would have to be teaching precisely that... :-)
If you really feel uncertain about whether it's ok to use it (or whether it will be clear enough that you understand the pattern involved if you just use a named let), then you could just desugar it as explained above (possibly using a local define rather than letrec).
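For instance, a sketch of what that desugaring might look like with a local define (the name and body here are placeholders that just push every element; it is not the actual conversion):
(define (itp-sketch ifExpr)
  (define (loop stack rest)
    (if (null? rest)
        stack
        (loop (cons (car rest) stack) (cdr rest))))
  (loop '() ifExpr))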

I'm not sure I understand this all correctly, but what's wrong with this simpler solution:
First:
You test if your argument is indeed a list:
If yes: append the map of the function over the tail, (map postfixer (cdr lst)), to a list containing only the head. The map just applies postfixer again to each element of the tail.
If not, just return the argument unchanged.
It's three lines of Scheme in my implementation, and it translates:
(postfixer '(= 7 (/ (+ 10 4) 2)))
To:
(7 ((10 4 +) 2 /) =)
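One possible rendering of those three lines (a sketch of what this answer describes, not the answerer's actual code):
(define (postfixer expr)
  (if (pair? expr)
      (append (map postfixer (cdr expr)) (list (car expr)))
      expr))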
The recursion via map needs no looping, not even tail looping, no mutation and shows the functional style by applying map. Unless I'm totally misunderstanding your point here, I don't see the need for all that complexity above.
Edit: Oh, I now see it's infix, not prefix, to postfix. Well, the same general idea applies, except you take the second element rather than the first.

Related

Clojure comp doesn't tail-call-optimize (can create StackOverflow exception)

I got stuck on a Clojure program handling a very large amount of data (image data). When the image was larger than 128x128, the program would crash with a StackOverflow exception. Because it worked for smaller images, I knew it wasn't an infinite loop.
There were lots of possible causes of high memory usage, so I spent time digging around. Making sure I was using lazy sequences correctly, making sure to use recur as appropriate, etc. The turning point came when I realized that this:
at clojure.core$comp$fn__5792.invoke(core.clj:2569)
at clojure.core$comp$fn__5792.invoke(core.clj:2569)
at clojure.core$comp$fn__5792.invoke(core.clj:2569)
referred to the comp function.
So I looked at where I was using it:
(defn pipe [val funcs]
  ((apply comp funcs) val))

(pipe the-image-vec
      (map
       (fn [index] (fn [image-vec] ( ... )))
       (range image-size)))
I was doing per-pixel operations, mapping a function to each pixel to process it. Interestingly, comp doesn't appear to benefit from tail-call optimization, or do any kind of sequential application of functions. It seems that it was just composing things in the basic way, which, when there are 65k functions, understandably overflows the stack. Here's the fixed version:
(defn pipe [val funcs]
  (cond
    (= (count funcs) 0) val
    :else (recur ((first funcs) val) (rest funcs))))
recur ensures the recursion gets tail-call optimized, preventing a stack buildup.
If anybody can explain why comp works this way (or rather, doesn't work this way), I'd love to be enlightened.
First, a more straightforward MCVE:
(def fs (repeat 1e6 identity))
((apply comp fs) 99)
#<StackOverflowError...
But why does this happen? If you look at the (abridged) comp source:
(defn comp
  ([f g]
   (fn
     ([x] (f (g x)))))
  ([f g & fs]
   (reduce1 comp (list* f g fs))))
You can see that the whole thing is basically just 2 parts:
The first argument overload that does the main work of wrapping each composed function call in another function.
Reducing over the functions using comp.
I believe the first point is the problem. comp works by taking the list of functions and continually wrapping each set of calls in functions. Eventually, this will exhaust the stack space if you try to compose too many functions, as it ends up creating a massive function that's wrapping many other functions.
So, why can TCO not help here? Because unlike most StackOverflowErrors, recursion is not the problem here. The recursive calls only ever reach one frame deep in the variadic case at the bottom. The problem is the building up of a massive function, which can't simply be optimized away.
Why were you able to "fix" it though? Because you have access to val, so you're able to evaluate the functions as you go instead of building up one function to call later. comp was written using a simple implementation that works fine for most cases, but fails for extreme cases like the one you presented. It's fairly trivial to write a specialized version that handles massive collections though:
(defn safe-comp [& fs]
  (fn [value]
    (reduce (fn [acc f]
              (f acc))
            value
            (reverse fs))))
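As a quick sanity check (a sketch, not from the original answer), the earlier MCVE now completes:
((apply safe-comp (repeat 1e6 identity)) 99)
;; => 99 (no StackOverflowError, since the functions are applied
;;        one at a time instead of being wrapped into one giant fn)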
Note, though, that this doesn't handle multiple arities like the core version does.
Honestly though, in 3 and a bit years of using Clojure, I've never once written (apply comp ...). While it's certainly possible you have experienced a case I've never needed to deal with, it's more likely that you're using the wrong tool for the job here. When this code is complete, post it on Code Review and we may be able to suggest better ways of accomplishing what you're trying to do.

Scheme : How to quote result of function call?

Is it possible to quote result of function call?
For example
(quoteresult (+ 1 1)) => '2
At first blush, your question doesn’t really make any sense. “Quoting” is a thing that can be performed on datums in a piece of source code. “Quoting” a runtime value is at best a no-op and at worst nonsensical.
The example in your question illustrates why it doesn’t make any sense. Your so-called quoteresult form would evaluate (+ 1 1) to produce '2, but '2 evaluates to 2, the same thing (+ 1 1) evaluates to. How would the result of quoteresult ever be different from ordinary evaluation?
If, however, you want to actually produce a quote expression to be handed off to some use of dynamic evaluation (with the usual disclaimer that that is probably a bad idea), then you need only generate a list of two elements: the symbol quote and your function’s result. If that’s the case, you can implement quoteresult quite simply:
(define (quoteresult x)
  (list 'quote x))
This is, however, of limited usefulness for most programs.
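For illustration, a small sketch (eval and its environment argument are only loosely specified in R5RS, so the exact spelling may vary by implementation):
(quoteresult (+ 1 1))
; => (quote 2), which some printers display as '2

(eval (quoteresult (+ 1 1)) (interaction-environment))
; => 2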
For more information on what quoting is and how it works, see What is the difference between quote and list?.

Common lisp macro syntax keywords: what do I even call this?

I've looked through On Lisp, Practical Common Lisp and the SO archives in order to answer this on my own, but those attempts were frustrated by my inability to name the concept I'm interested in. I would be grateful if anyone could just tell me the canonical term for this sort of thing.
This question is probably best explained by an example. Let's say I want to implement Python-style list comprehensions in Common Lisp. In Python I would write:
[x*2 for x in range(1,10) if x > 3]
So I begin by writing down:
(listc (* 2 x) x (range 1 10) (> x 3))
and then defining a macro that transforms the above into the correct comprehension. So far so good.
The interpretation of that expression, however, would be opaque to a reader not already familiar with Python list comprehensions. What I'd really like to be able to write is the following:
(listc (* 2 x) for x in (range 1 10) if (> x 3))
but I haven't been able to track down the Common Lisp terminology for this. It seems that the loop macro does exactly this sort of thing. What is it called, and how can I implement it? I tried macro-expanding a sample loop expression to see how it's put together, but the resulting code was unintelligible. Could anyone guide me in the right direction?
Thanks in advance.
Well, what such a macro essentially does is parse the forms supplied as its body. For example:
(defmacro listc (expr &rest forms)
  ;;
  ;; (listc EXP for VAR in GENERATOR [if CONDITION])
  ;;
  (labels ((keyword-p (thing name)
             (and (symbolp thing)
                  (string= name thing))))
    (destructuring-bind (for* variable in* generator &rest tail) forms
      (unless (and (keyword-p for* "FOR") (keyword-p in* "IN"))
        (error "malformed comprehension"))
      (let ((guard (if (null tail) 't
                       (destructuring-bind (if* condition) tail
                         (unless (keyword-p if* "IF") (error "malformed comprehension"))
                         condition))))
        `(loop
           :for ,variable :in ,generator
           :when ,guard
           :collecting ,expr)))))
(defun range (start end &optional (by 1))
  (loop
     :for k :upfrom start :below end :by by
     :collecting k))
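A quick check (a sketch), mirroring the Python comprehension from the question:
(listc (* 2 x) for x in (range 1 10) if (> x 3))
;; => (8 10 12 14 16 18)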
Apart from the hackish "parser" I used, this solution has a disadvantage which is not easily solved in Common Lisp, namely the construction of intermediate lists if you want to chain your comprehensions:
(listc x for x in (listc ...) if (evenp x))
Since there is no moral equivalent of yield in Common Lisp, it is hard to create a facility which does not require intermediate results to be fully materialized. One way out of this might be to encode the knowledge of possible "generator" forms in the expander of listc, so the expander can optimize/inline the generation of the base sequence without having to construct the entire intermediate list at run-time.
Another way might be to introduce "lazy lists" (the link points to Scheme, since there is no equivalent facility in Common Lisp -- you would have to build that first, though it's not particularly hard).
Also, you can always have a look at other people's code, in particular if they try to solve the same or a similar problem, for example:
Iterate
Loop in SBCL
Pipes (which does the lazy list thing)
Macros are code transformers.
There are several ways of implementing the syntax of a macro:
destructuring
Common Lisp provides macro argument lists that support a form of destructuring. When a macro is used, the source form is destructured according to the argument list.
This limits what macro syntax can look like, but for many uses of macros it provides enough machinery.
See Macro Lambda Lists in Common Lisp.
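As a small illustration (WITH-POINT is a made-up example, not from the original answer): the macro lambda list destructures the call form directly, so no hand-written parsing is needed.
(defmacro with-point ((x y) point-form &body body)
  "Bind X and Y to the car and cdr of POINT-FORM around BODY."
  (let ((p (gensym)))
    `(let* ((,p ,point-form)
            (,x (car ,p))
            (,y (cdr ,p)))
       ,@body)))

;; (with-point (a b) (cons 1 2) (+ a b))  => 3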
parsing
Common Lisp also gives the macro access to the whole macro call form. The macro is then responsible for parsing the form. The parser needs to be provided by the macro author, or is part of the macro implementation written by the author.
An example would be an INFIX macro:
(infix (2 + x) * (3 + sin (y)))
The macro implementation needs to implement an infix parser and return a prefix expression:
(* (+ 2 x) (+ 3 (sin y)))
rule-based
Some Lisps provide syntax rules, which are matched against the macro call form. For a matching syntax rule the corresponding transformer will be used to create the new source form. One can easily implement this in Common Lisp, but it is not a mechanism provided by default.
See syntax case in Scheme.
LOOP
For the implementation of a LOOP-like syntax one needs to write a parser which is called in the macro to parse the source expression. Note that the parser does not work on text, but on interned Lisp data.
In the past (1970s) this was used in Interlisp in the so-called 'Conversational Lisp', a Lisp syntax with a more natural-language-like surface. Iteration was a part of this, and the iteration idea was then brought to other Lisps (like Maclisp's LOOP, from where it was later brought to Common Lisp).
See the PDF on 'Conversational Lisp' by Warren Teitelman from the 1970s.
The syntax for the LOOP macro is a bit complicated and it is not easy to see the boundaries between individual sub-statements.
See the extended syntax for LOOP in Common Lisp.
(loop for i from 0 when (oddp i) collect i)
same as:
(loop
   for i from 0
   when (oddp i)
   collect i)
One problem the LOOP macro has is that symbols like FOR, FROM, WHEN and COLLECT are not taken from the "COMMON-LISP" package (a namespace). When I then use LOOP in source code that lives in a different package (namespace), this leads to new symbols being interned in that source namespace. For that reason some like to write:
(loop
   :for i :from 0
   :when (oddp i)
   :collect i)
In the above code the identifiers for the LOOP-relevant symbols are in the KEYWORD namespace.
To make both parsing and reading easier it has been proposed to bring parentheses back.
An example for such a macro usage might look like this:
(iter (for i from 0) (when (oddp i) (collect i)))
same as:
(iter
  (for i from 0)
  (when (oddp i)
    (collect i)))
In the above version it is easier to find the sub-expressions and to traverse them.
The ITERATE macro for Common Lisp uses this approach.
But in both examples, one needs to traverse the source code with custom code.
To complement Dirk's answer a little:
Writing your own macros for this is entirely doable, and perhaps a nice exercise.
However, there are several high-quality facilities for this kind of thing out there (albeit in a Lisp-idiomatic way), such as
Loop
Iterate
Series
Loop is very expressive, but has a syntax that doesn't resemble the rest of Common Lisp. Some editors don't like it and will indent it poorly. However, loop is defined in the standard. It's usually not possible to write extensions to loop.
Iterate is even more expressive, and has a familiar lispy syntax. This doesn't require any special indentation rules, so all editors indenting lisp properly will also indent iterate nicely. Iterate isn't in the standard, so you'll have to get it yourself (use quicklisp).
Series is a framework for working on sequences. In most cases series will make it possible not to store intermediate values.

reduce, or explicit recursion?

I recently started reading through Paul Graham's On Lisp with a friend, and we realized that we have very different opinions of reduce: I think it expresses a certain kind of recursive form very clearly and concisely; he prefers to write out the recursion very explicitly.
I suspect we're each right in some context and wrong in another, but we don't know where the line is. When do you choose one form over the other, and what do you think about when making that choice?
To be clear about what I mean by reduce vs. explicit recursion, here's the same function implemented twice:
(defun my-remove-if (pred lst)
  (fold (lambda (left right)
          (if (funcall pred left)
              right
              (cons left right)))
        lst :from-end t))
(defun my-remove-if (pred lst)
  (if lst
      (if (funcall pred (car lst))
          (my-remove-if pred (cdr lst))
          (cons (car lst) (my-remove-if pred (cdr lst))))
      '()))
I'm afraid I started out a Schemer (now we're Racketeers?) so please let me know if I've botched the Common Lisp syntax. Hopefully the point will be clear even if the code is incorrect.
If you have a choice, you should always express your computational intent in the most abstract terms possible. This makes it easier for a reader to figure out your intentions, and it makes it easier for the compiler to optimize your code. In your example, when the compiler trivially knows you are doing a fold operation by virtue of you naming it, it also trivially knows that it could possibly parallelize the leaf operations. It would be much harder for a compiler to figure that out when you write extremely low level operations.
I'm going to take a slightly-subjective question and give a highly-subjective answer, since Ira already gave a perfectly pragmatic and logical one. :-)
I know writing things out explicitly is highly valued in some circles (the Python guys make it part of their "zen"), but even when I was writing Python I never understood it. I want to write at the highest level possible, all the time. When I want to write things out explicitly, I use assembly language. The point of using a computer (and a HLL) is to get it to do these things for me!
For your my-remove-if example, the reduce one looks fine to me (apart from the Scheme-isms like fold and lst :-)). I'm familiar with the concept of reduce, so all I need to do to understand it is figure out your f(x,y) -> z. For the explicit variant, I had to think about it for a second: I have to figure out the loop myself. Recursion isn't the hardest concept out there, but I think it is harder than "a function of two arguments".
I also don't care for a whole line being repeated -- (my-remove-if pred (cdr lst)). I think I like Lisp in part because I'm absolutely ruthless at DRY, and Lisp allows me to be DRY on axes that other languages don't. (You could put in another LET at the top to avoid this, but then it's longer and more complex, which I think is another reason to prefer the reduction, though at this point I might just be rationalizing.)
I think maybe the contexts in which the Python guys, at least, dislike implicit functionality would be:
when no-one could be expected to guess the behavior (like frobnicate("hello, world", True) -- what does True mean?), or:
cases when it's reasonable for implicit behavior to change (like when the True argument gets moved, or removed, or replaced with something else, since there's no compile-time error in most dynamic languages)
But reduce in Lisp fails both of these criteria: it's a well-understood abstraction that everybody knows, and that isn't going to change, at least not on any timescale I care about.
Now, I absolutely believe there are some cases where it'd be easier for me to read an explicit function call, but I think you'd have to be pretty creative to come up with them. I can't think of any offhand, because reduce and mapcar and friends are really good abstractions.
In Common Lisp one prefers higher-order functions over recursion for data structure traversal, filtering, and other related operations. That can also be seen from the many provided functions like REDUCE, REMOVE-IF, MAP and others.
Tail recursion is a) not guaranteed by the standard, b) may be handled differently by different CL compilers, and c) using tail recursion may have side effects on the generated machine code for surrounding code.
Often, for certain data structures, many of the above operations are implemented with LOOP or ITERATE and provided as higher-order functions. There is a tendency to prefer new language extensions (like LOOP and ITERATE) for iterative code over using recursion for iteration.
(defun my-remove-if (pred list)
  (loop for item in list
        unless (funcall pred item)
        collect item))
Here is also a version that uses the Common Lisp function REDUCE:
(defun my-remove-if (pred list)
  (reduce (lambda (left right)
            (if (funcall pred left)
                right
                (cons left right)))
          list
          :from-end t
          :initial-value nil))
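As a quick sanity check (a sketch), the LOOP and REDUCE versions above should agree:
(my-remove-if #'oddp '(1 2 3 4 5))
;; => (2 4)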

Append! in Scheme?

I'm learning R5RS Scheme at the moment (from PocketScheme) and I find that I could use a function that is built into some variants of Scheme but not all: Append!
In other words - destructively changing a list.
I am not so much interested in the actual code as an answer as in understanding the process by which one could pass a list to a function (or a vector or string) and then mutate it.
example:
(define (append! lst var)
  (cons (lst var)))
When I use the approach above, I have to do something like (define list (append! foo (bar))), whereas I would like something more generic.
Mutation, though allowed, is strongly discouraged in Scheme. PLT even went so far as to remove set-car! and set-cdr! (though they "replaced" them with set-mcar! and set-mcdr!). However, a spec for append! appeared in SRFI-1. This append! is a little different than yours. In the SRFI, the implementation may, but is not required to modify the cons cells to append the lists.
If you want to have an append! that is guaranteed to change the structure of the list that's being appended to, you'll probably have to write it yourself. It's not hard:
(define (my-append! a b)
  (if (null? (cdr a))
      (set-cdr! a b)
      (my-append! (cdr a) b)))
To keep the definition simple, there is no error checking here, but it's clear that you will need to pass in a list of length at least 1 as a, and (preferably) a list (of any length) as b. The reason a must be at least length 1 is because you can't set-cdr! on an empty list.
Since you're interested in how this works, I'll see if I can explain. Basically, what we want to do is go down the list a until we get to the last cons pair, which is (<last element> . null). So we first see if a is already the last element in the list by checking for null in the cdr. If it is, we use set-cdr! to set it to the list we're appending, and we're done. If not, we have to call my-append! on the cdr of a. Each time we do this we get closer to the end of a. Since this is a mutation operation, we're not going to return anything, so we don't need to worry about forming our modified list as the return value.
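A small sketch of how it would be used; note that the first list is modified in place, so no re-binding is needed:
(define xs (list 1 2 3))
(my-append! xs (list 4 5))
xs  ; => (1 2 3 4 5)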
Better late than never for putting in a couple 2-3 cents on this topic...
(1) There's nothing wrong with using the destructive procedures in Scheme while there is a single reference to the structure being modified. So for example, building a large list efficiently, piecemeal via a single reference - and when complete, making that (now presumably not-to-be-modified) list known and referred to from various referents.
(2) I think APPEND! should behave like APPEND, only (potentially) destructively. And so APPEND! should expect any number of lists as arguments. Each list but the last would presumably be SET-CDR!'d to the next.
(3) The above definition of APPEND! is essentially NCONC from Mac Lisp and Common Lisp. (And other lisps).
