I'm playing around with writing some code in scheme. Here's an example of doing a fibonacci:
(define (fib n)
; if n == 0:
(if (= n 0)
; return 0
0
; else if n==1
(if (= n 1)
; return 1
1
; else return fib(n-1) + fib(n-2)
(+ (fib (- n 1)) (fib (- n 2)))))
)
(fib 4) ; => 3
(fib 3) ; => 2
(fib 2) ; => 1
(fib 1) ; => 1
(fib 0) ; => 0
My question is more about general advice. How in the world do you keep track of all the parentheses? It's not so much a matter of legibility (I don't care about that) but more just a matter of getting things right and not having to do trial-and-error in ten different places, adding in extra parens to see if things work, when it's all said and done.
Is there a good way to get a better handle on the parens, such as a good IDE or web environment or what-not, or is the answer just "get used to it"?
The usual non-IDE Lisp answer to this is indentation.
You let it guide you in reading the code. You take it on faith that the parens are balanced to follow the indentation structure of the code that you see.
While writing, you usually keep mental track of what's going on locally, and when you close a paren, you know which expression it is ending.
Keep these expressions short; start each new one on a new line, under the previous one of the same semantic level, indented to the same depth. (Rainbow paren coloring, for me personally, is not helpful in the slightest; it's more distracting and confusing.)
As an example, your code, properly formatted under these guidelines, becomes:
(define (fib n)
  (if (= n 0)
      0
      (if (= n 1)
          1
          (+ (fib (- n 1))
             (fib (- n 2))))))
which we can read without paying the slightest attention to the parentheses.
And legibility is the whole point to writing the code in the first place, is it not?
Use either Emacs with Geiser, or alternatively DrRacket.
DrRacket will let you set the language to R5RS (or R7RS) if you wish to use standard Scheme rather than Racket, which is a close relative.
The summation procedure of section 1.3.1 of SICP produces a linear recursive process with O(n) space and time complexity. The code for this procedure is:
(define (sum-integers a b)
  (if (< a b)
      0
      (+ a (sum-integers (+ a 1) b))))
What I would like to know is, if I decided that I want to sum a range of Fibonacci numbers using the analogous procedure:
(define (sum-fib a b)
  (if (< a b)
      0
      (+ (fib a) (sum-fib (+ a 1) b))))
with fib defined as:
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
How would I analyse the space and time complexity of sum-fib? Would I ignore the linear recursive flavor of the overall procedure and prioritize the tree recursion of fib within it as a worst case scenario? Would I have to somehow combine the space/time complexities of fib and sum-fib, and if so, how? Also, say I got sum-fib from another programmer and I was using it as a component in a larger system. If my program slowed down because of how fib was implemented, how would I know?
This is my first question on this platform so please also advise on how to better post and find answers to questions. Your contribution is appreciated.
There is a slight error in your code. After checking SICP, I am assuming you meant to use a > instead of a < in both sum-integers and sum-fib. That is the only modification I made, please correct me if it was done erroneously.
Note: I do not have a formal background, but this question has been unanswered for quite a while, so I thought I would share my thoughts for anyone else who happens across it.
Time
When dealing with the time complexity, we care about how many iterations are performed as n grows larger. Here, we can assume n to be the distance between a and b (inclusive) in sum-fib. The function sum-fib itself will only recurse n times in this case. If a was 0 and b was 9, then the function will run 10 times. This is completely linear, or O(n), but it isn't so simple: the next question to ask is what happens for each of these iterations?
We know that the summation part is linear, so all that's left is the Fibonacci function. Inside, you see that it either immediately terminates (O(1)) or branches off into two recursive calls to itself. Big-O notation is concerned with the worst case, which here is the branching: 1 call turns into 2, which turn into 4, then 8, and so on, n times. This behavior is O(2^n) (the tight bound is actually O(φ^n), with φ ≈ 1.618, but 2^n is a valid upper bound).
Don't forget that this is called n times as part of the overarching O(n) summation loop, so the total function will be O(n(2^n)).
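You can watch this blow-up directly by instrumenting fib with a call counter (a quick sketch; calls is a name introduced here, not something from the question):

```scheme
(define calls 0) ; incremented on every invocation of fib

(define (fib n)
  (set! calls (+ calls 1))
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

(fib 10) ; => 55
calls    ; => 177 invocations for this single top-level call
```

Doubling n roughly squares the number of invocations, which is exactly the exponential behavior described above.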
Space
The space requirements of a function are a bit different. By writing out what's going on by hand, you can start to see the shape of the function form. This is what is shown early on in SICP, where a "pyramid" function is compared to a linear one.
One thing to keep in mind is that Scheme requires proper tail calls (often described as tail-call optimization). This means that if a recursive call is in tail position (no work remains to be done after the call returns), the current frame can be reused, and no extra space is required. For example:
(define (loop n)
  (if (> n 2)
      0
      (loop (+ n 1))))
Drawing out (loop 0) would be:
(loop 0)
(loop 1)
(loop 2)
(loop 3)
0
You can see that the space required stays constant: each tail call reuses the frame of the one before it. Compare this to:
(define (loop n)
  (if (> n 2)
      0
      (+ n (loop (+ n 1)))))
With (loop 0):
(loop 0)
(+ 0 (loop 1))
(+ 0 (+ 1 (loop 2)))
(+ 0 (+ 1 (+ 2 (loop 3))))
(+ 0 (+ 1 (+ 2 0)))
(+ 0 (+ 1 2))
(+ 0 3)
3
You can see that the space required grows as the number of iterations required grows in this case.
In your case, neither sum-fib nor fib is tail-recursive, so both consume stack as n increases.
Still, space is proportional to the maximum depth of pending calls, not to their total number. fib explores its call tree depth-first, so although it makes O(2^n) calls, only O(n) of them are pending at any moment; a call to (fib i) therefore needs O(i) space. sum-fib keeps at most n frames of its own pending, and the deepest fib beneath them needs at most O(n) more, so the overall space requirement is O(n), even though the running time is exponential.
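If you want the summation loop itself to run in constant stack space, sum-fib can be rewritten with an accumulator so that its recursive call is in tail position (a sketch; fib is repeated so the snippet stands alone):

```scheme
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

;; The call to loop is the last thing evaluated, so each iteration
;; reuses its frame; only the fib calls consume extra stack.
(define (sum-fib a b)
  (let loop ((a a) (acc 0))
    (if (> a b)
        acc
        (loop (+ a 1) (+ acc (fib a))))))

(sum-fib 0 9) ; => 88
```

This changes only the space behavior of the outer loop; the exponential time of fib is untouched.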
How to Test for Slow Functions
What you are looking for is called a profiler. It will watch your code while it runs, and report back to you with information on which functions took the most time, which functions were called most often, etc. For Scheme, DrRacket is an IDE which has a built-in profiler.
A word of advice: Get your software working first, then worry about profiling and optimizations. Many programmers get stuck hyper-optimizing their code without first finishing it to see where the true bottlenecks lie. You can spend weeks gaining a 1% performance boost utilizing arcane algorithms when it turns out that a 5-minute tweak could net you a 50% boost.
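Even before reaching for a full profiler, the time form (available in Racket and several other Schemes) gives a quick first read on where the milliseconds go; a sketch using the fib from the question:

```scheme
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

;; time evaluates the expression, prints cpu/real/gc timings,
;; and returns the expression's value.
(time (fib 25))
```

Comparing timings for a few values of n makes the exponential growth obvious long before a profiler is needed.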
In Racket (and other Schemes, from what I can tell), the only way I know of to check whether two things are not equal is to explicitly apply not to the test:
(not (= num1 num2))
(not (equal? string1 string2))
It's obviously (not (that-big-of-deal?)), but it's such a common construction that I feel like I must be overlooking a reason why it's not built in.
One possible reason, I suppose, is that you can frequently get rid of the not by using unless instead of when, or by switching the order of the true/false branches in an if statement. But sometimes that just doesn't mimic the reasoning that you're trying to convey.
Also, I know the negated functions are easy to define, but so is <=, for example, and that is built in.
What are the design decisions for not having things like not-equal?, not-eqv?, not-eq? and != in the standard library?
First, you are correct that it is (not (that-big-of-a-deal?))[1]
The reason Racket doesn't include it out of the box is likely just that it would add a lot of extra primitives without much benefit. I will admit that a lot of languages do have != for not-equal, but even in Java, if you want to do a deep equality check using equals() (analogous to equal? in Racket), you have to invert the result with ! yourself.
Having both <= and > (as well as >= and <) was almost certainly just convenient enough to cause the original designers of the language to include it.
So no, there isn't any deep reason why no not-eq? shortcut is built into Racket. It would just add more primitives without adding much benefit, especially as not still needs to exist on its own anyway.
[1] I love that pun, by the way. Have some imaginary internet points.
I do miss having a not= procedure (or ≠, as mentioned in @soegaard's comment), but not for the reasons you might think.
All the numeric comparison operators are variadic. For example, (< a b c d) is the same as (and (< a b) (< b c) (< c d)). In the case of =, it checks whether all arguments are numerically equal. But there is no procedure to check whether all arguments are all unequal—and that is a different question from whether not all arguments are equal (which is what (not (= a b c d)) checks).
Yes, you can simulate that procedure using a fold. But still, meh.
Edit: Actually, I just answered my own question in this regard: the reason for the lack of a variadic ≠ procedure is that you can't just implement it using n-1 pairwise comparisons, unlike all the other numeric comparison operators. The straightforward approach of doing n-1 pairwise comparisons would mean that (≠ 1 2 1 2) would return true, and that's not really helpful.
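To make that concrete, a correct variadic inequality has to compare every argument against every other one, not just adjacent pairs; a sketch (all-distinct? is my own name; it uses member, so it works for exact numbers and other equal?-comparable values):

```scheme
(define (all-distinct? . xs)
  (cond ((null? xs) #t)                       ; no arguments: vacuously distinct
        ((member (car xs) (cdr xs)) #f)       ; first argument repeats later
        (else (apply all-distinct? (cdr xs)))))

(all-distinct? 1 2 1 2) ; => #f, where naive pairwise checks would say #t
(all-distinct? 1 2 3 4) ; => #t
```

Note the cost: this performs up to n(n-1)/2 comparisons, unlike the n-1 comparisons that suffice for <, <=, and friends.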
I'll leave my original musings in place for context, and for others who wonder similar things.
Almost all of the predicates are inherited from Scheme, the standard that #!racket originally followed. Scheme's designers kept the number of procedures to a minimum as a design principle and left it to the user to build more complex structures and code. Feel free to define the ones you'd like:
(define not-equal? (compose1 not equal?))
(define != (compose1 not =))
; and so on
You can put these in a module and require it. Stick to this convention, so that people who read your code know after a minute that everything named not-<known-predicate> or !<known-predicate> is (compose not <known-predicate>).
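For instance, a tiny module along these lines could collect them (negations.rkt is a hypothetical file name):

```scheme
#lang racket
;; negations.rkt -- hypothetical module collecting negated predicates
(provide not-equal? not-eqv? !=)

(define not-equal? (compose1 not equal?))
(define not-eqv?   (compose1 not eqv?))
(define !=         (compose1 not =))
```

Then (require "negations.rkt") makes them available wherever you need them.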
If you want less work, and you are not going to use the result in filter or the like, then a special if-not form might suffice:
(define-syntax-rule (if-not p c a) (if p a c))

(define (leafs tree)
  (let aux ((tree tree) (acc 0))
    (if-not (pair? tree)
        (+ acc 1) ; base case first
        (aux (cdr tree)
             (aux (car tree) acc)))))
But that is a micro-optimization compared to what I would have written:
(define (leafs tree)
  (let aux ((tree tree) (acc 0))
    (if (not (pair? tree))
        (+ acc 1) ; base case first
        (aux (cdr tree)
             (aux (car tree) acc)))))
To be honest, if I were trying to squeeze out a not, I would just switch the branches manually, since at that point optimizing for speed trumps optimal readability.
I find that one can easily define != (for numbers) using the following macro:
(define-syntax-rule (!= a b)
  (not (= a b)))
(define x 25)
(!= x 25)
(!= x 26)
Output:
#f
#t
That may be the reason why it is not defined in the language; it can easily be created, if needed.
I made a simple factorial program in Clojure.
(defn fac [x y]
  (if (= x 1) y (recur (- x 1) (* y x))))

(defn fact [n] (fac n 1))
How can it be made faster, if it can be done some faster way?
You can find many fast factorial algorithms here: http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
As commented above, Clojure is not the best language for that. Consider using C, C++, or Fortran.
Be careful with the data structures that you use, because factorials grow really fast.
Here is my favorite:
(defn fact [n] (reduce *' (range 1 (inc n))))
The ' tells Clojure to use BigInteger transparently so as to avoid overflow.
With the help of your own fact function (or any other), we can define this extremely fast version:
(def fact* (mapv fact (cons 1 (range 1 21))))
This will give the right results for arguments in the range from 1 to 20 in constant time. Beyond that range, your version doesn't give correct results either (i.e. there's an integer overflow with (fact 21)).
EDIT: Here's an improved implementation that doesn't need another fact implementation, does not overflow and should be much faster during definition because it doesn't compute each entry in its lookup table from scratch:
(def fact (persistent! (reduce (fn [v n] (conj! v (*' (v n) (inc n))))
                               (transient [1])
                               (range 1000))))
EDIT 2: For a different fast solution, i.e. without building up a lookup table, it's probably best to use a library that's already highly optimized. Google's general utility library Guava includes a factorial implementation.
Add it to your project by adding this Leiningen dependency: [com.google.guava/guava "15.0"]. Then you need to (import com.google.common.math.BigIntegerMath) and can then call it with (BigIntegerMath/factorial n).
In Scheme, the function (map fn list0 [list1 .. listN]) comes with the restriction that the lists must have the same number of elements. Coming from Python, I'm missing the freedom of Python list comprehensions, which look a lot like map above, but without this restriction.
I'm tempted to implement an alternative "my-map", which allows for lists of differing size, iterating through the first N elements of all lists, where N is the length of the shortest list.
For example, let num be 10 and lst be (1 2 3). With my-map, I hope to write expressions like:
(my-map + (circular-list num) lst)
And get:
(11 12 13)
I have an easier time reading this than the more conventional
(map (lambda (arg) (+ num arg)) lst)
or
(map + (make-list (length lst) num) lst)
Two questions:
As a Scheme newbie, have I overlooked important reasons for the restriction on `map`?
Does something like `my-map` already exist in Scheme or in the SRFIs? I did take a look at srfi-42, but either it's not what I'm looking for, or it was, and it wasn't obvious.
First, note that map does allow empty lists, but of course if there's one empty list then all of them should be empty.
Second, have a look at the srfi-1 version of map -- it is specifically different from the R5RS version as follows:
This procedure is extended from its R5RS specification to allow the arguments to be of unequal length; it terminates when the shortest list runs out.
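With SRFI-1 loaded, the expression from the question works as written (a sketch for Racket, where (require srfi/1) shadows the base map; other Schemes load SRFI-1 differently):

```scheme
#lang racket
(require srfi/1) ; SRFI-1 list library: its map stops at the shortest list

(define num 10)
(define lst '(1 2 3))

(map + (circular-list num) lst) ; => '(11 12 13)
```

circular-list is also part of SRFI-1, so no helper definitions are needed.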
Third, most Scheme programmers would very much prefer
(map (lambda (arg) (+ num arg)) lst)
My guess is that Scheme is different from Python in a way that makes lambda expressions become more and more readable as you get used to the language.
And finally, there are some implementations that come with some form of a list comprehension. For example, in Racket you can write:
(for/list ([arg lst]) (+ num arg))
Finally taking the plunge to learn a Lisp dialect (Scheme), I have encountered two definitions of a list -
"Either the empty list or a pair whose cdr is a list".
"A collection of S-Expressions enclosed by parentheses".
Are these definitions equivalent?
They're as equivalent as {'a','b','c'} and "abc"
The former is the machine's logical representation of a list, the latter is how you represent it in your code.
And in scheme, you can pretty much treat everything as a list :) (Someone's going to downvote me for that, but I found it to be true when trying to think scheme-esque.)
I'm going to bring out my favourite dog-and-pony show!
[box-and-pointer diagram of a list: a chain of pairs whose final cdr is the empty-list object] (source: hedgee.com)
This corresponds to the following:
(let ((s5 (sqrt 5)))
  (/ (- (expt (/ (+ 1 s5) 2) n)
        (expt (/ (- 1 s5) 2) n))
     s5))
The diagram illustrates your first statement (the empty-list object is denoted as a black box). The code snippet illustrates your second. :-)
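Wrapped in a definition (with round, since sqrt produces an inexact number), the snippet becomes a closed-form fib; a sketch, assuming floating-point precision, which holds only for modest n:

```scheme
(define (fib n)
  (let ((s5 (sqrt 5)))
    (round (/ (- (expt (/ (+ 1 s5) 2) n)
                 (expt (/ (- 1 s5) 2) n))
              s5))))

(fib 10) ; => 55.0 (inexact, but numerically equal to 55)
```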