Calculate Sums with accumulate - scheme

The procedure accumulate is defined like this:
(define (accumulate combiner null-value term a next b)
  (if (> a b)
      null-value
      (combiner (term a)
                (accumulate combiner null-value term (next a) next b))))
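For example, with this definition, summing or multiplying the integers from 1 to 5 looks like this (a small usage illustration; the lambdas are just the identity and increment procedures):
(accumulate + 0 (lambda (i) i) 1 (lambda (i) (+ i 1)) 5)  ; => 1+2+3+4+5 = 15
(accumulate * 1 (lambda (i) i) 1 (lambda (i) (+ i 1)) 5)  ; => 5! = 120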
Problem 1: x^n
;Solution: recursive without accumulate
(define (expon x n)
  (if (> n 0)
      (* x (expon x (- n 1)))
      1))
Problem 2: x + x^2 + x^4 + x^6 + ...; calculate, for a given n, the sum of the first n elements of the sequence.
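For problem 2, one possible solution in terms of accumulate could look like this (a sketch, assuming the intended series is x + x^2 + x^4 + x^6 + ..., i.e. the first element is x and each later element is the next even power of x; sum-powers is a name introduced here for illustration):
(define (sum-powers x n)
  (define (term i)
    (if (= i 1)
        x                          ; first element of the series is x
        (expon x (* 2 (- i 1))))) ; later elements: x^2, x^4, x^6, ...
  (accumulate + 0 term 1 (lambda (i) (+ i 1)) n))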
Problem 3: 1 + x/1! + x^2/2! + ... + x^n/n!; calculate the sum for given x and n.
Possibly incorrect solution for problem 3:
(define (exp1 x n)
  (define (term i)
    (define (term1 k) (/ x k))
    (accumulate * 1 term1 1 1+ i))
  (accumulate + 0 term 1 1+ n))
Why the previous code is incorrect:
(exp1 0 3) -> 0 ; It should be 1
(exp1 1 1) -> 1 ; It should be 2

First off, I would say that your EXP1 procedure is operating at too low a level in being defined directly in terms of ACCUMULATE; for the sake of clarity, rewrite it instead in terms of sums and factorials:
(define (sum term a b)
  (accumulate + 0 term a 1+ b))
(define (product term a b)
  (accumulate * 1 term a 1+ b))
(define (identity x) x)
(define (fact n)
  (if (= n 0)
      1
      (product identity 1 n)))
(define (exp1 x n)
  (define (term i)
    (/ (expon x i) (fact i)))
  (sum term 1 n))
Now to your question: the reason you are getting (EXP1 0 3) → 0 is no more than that you forgot to add the 1 at the start of the series, and are just computing x/1! + x^2/2! + ... + x^n/n!
Changing EXP1 to include the missing term works as expected:
(define (exp1 x n)
  (define (term i)
    (/ (expon x i) (fact i)))
  (+ 1 (sum term 1 n)))
=> (exp1 0 3)
1
=> (exp1 1 1)
2

Related

Returning the sum of positive squares

I'm trying to edit the current program I have
(define (sumofnumber n)
  (if (= n 0)
      1
      (+ n (sumofnumber (modulo n 2)))))
so that it returns the sum of the first n positive squares. For example, if you input 3, the program would do 1+4+9 to get 14. I have tried using modulo and other methods, but it always goes into an infinite loop.
The base case is incorrect (the square of zero is zero), and so is the recursive step (why are you taking the modulo?) and the actual operation (where are you squaring the value?). This is how the procedure should look:
(define (sum-of-squares n)
  (if (= n 0)
      0
      (+ (* n n)
         (sum-of-squares (- n 1)))))
A definition using composition rather than recursion. Read the comments from bottom to top for the procedural logic:
(define (sum-of-squares n)
  (foldl +                                ; sum the list
         0
         (map (lambda (x) (* x x))        ; square each number in list
              (map (lambda (x) (+ x 1))   ; correct for range yielding 0...(n - 1)
                   (range n)))))          ; get a list of numbers bounded by n
I provide this because you are well on your way to understanding the idiom of recursion. Composition is another of Racket's idioms worth exploring and often covered after recursion in educational contexts.
Sometimes I find composition easier to apply to a problem than recursion. Other times, I don't.
You're not squaring anything, so there's no reason to expect that to be a sum of squares.
Write down how you got 1 + 4 + 9 with n = 3 (^ is exponentiation):
1^2 + 2^2 + 3^2
This is
(sum-of-squares 2) + 3^2
or
(sum-of-squares (- 3 1)) + 3^2
that is,
(sum-of-squares (- n 1)) + n^2
Notice that modulo does not occur anywhere, nor do you add n to anything.
(And the square of 0 is 0 , not 1.)
You can break the problem into small chunks.
1. Create a list of numbers from 1 to n
2. Map a square function over list to square each number
3. Apply + to add all the numbers in squared list
(define (sum-of-number n)
  (apply + (map (lambda (x) (* x x)) (sequence->list (in-range 1 (+ n 1))))))
> (sum-of-number 3)
14
This is the perfect opportunity for using the transducers technique.
Calculating the sum of a list is a fold. Map and filter are folds, too. Composing several folds together in a nested fashion, as in (sum...(filter...(map...sqr...))), leads to multiple (here, three) list traversals.
But when the nested folds are fused, their reducing functions combine in a nested fashion, giving us a one-traversal fold instead, with the one combined reducer function:
(define (((mapping f) kons) x acc) (kons (f x) acc)) ; the "mapping" transducer
(define (((filtering p) kons) x acc) (if (p x) (kons x acc) acc)) ; the "filtering" one
(define (sum-of-positive-squares n)
  (foldl ((compose (mapping sqr)                      ; ((mapping sqr)
                   (filtering (lambda (x) (> x 0))))  ;  ((filtering {> _ 0})
          +) 0 (range (+ 1 n))))                      ;   +))
; > (sum-of-positive-squares 3)
; 14
Of course ((compose f g) x) is the same as (f (g x)). The combined / "composed" (pun intended) reducer function is created just by substituting the arguments into the definitions, as
((mapping sqr) ((filtering {> _ 0}) +))
=
( (lambda (kons)
(lambda (x acc) (kons (sqr x) acc)))
((filtering {> _ 0}) +))
=
(lambda (x acc)
( ((filtering {> _ 0}) +)
(sqr x) acc))
=
(lambda (x acc)
( ( (lambda (kons)
(lambda (x acc) (if ({> _ 0} x) (kons x acc) acc)))
+)
(sqr x) acc))
=
(lambda (x acc)
( (lambda (x acc) (if (> x 0) (+ x acc) acc))
(sqr x) acc))
=
(lambda (x acc)
(let ([x (sqr x)] [acc acc])
(if (> x 0) (+ x acc) acc)))
which looks almost as something a programmer would write. As an exercise,
((filtering {> _ 0}) ((mapping sqr) +))
=
( (lambda (kons)
(lambda (x acc) (if ({> _ 0} x) (kons x acc) acc)))
((mapping sqr) +))
=
(lambda (x acc)
(if (> x 0) (((mapping sqr) +) x acc) acc))
=
(lambda (x acc)
(if (> x 0) (+ (sqr x) acc) acc))
So instead of writing the fused reducer function definitions ourselves, which, like every human activity, is error-prone, we can compose these reducer functions from more atomic "transformations", nay, transducers.
Works in DrRacket.

Count Fibonacci "cost" in scheme

I have to make a function that finds the "cost" of a Fibonacci number. My Fibonacci code is
(define fib
  (lambda (n)
    (cond ((< n 0) 'Error)
          ((= n 0) 0)
          ((= n 1) 1)
          (else (+ (fib (- n 1)) (fib (- n 2)))))))
Each + or - that is used to evaluate a fib number is worth $1. Each < or > is worth $0.01. For example, 1 is worth $0.01, 2 is worth $3.03, etc. I don't know how to count the number of +, -, <, and >. Do I need the fib code in my fibCost code?
I'm not sure whether you wanted the solution to include the original code or not. There are direct ways of computing the cost, but I think it's interesting to look at ways that are similar to instrumenting the existing code. That is, what can we change so that something very much like the original code will compute what we want?
First, we can replace the arithmetic operators with a bit of indirection. That is, instead of calling (+ x y), you can call (op + 100 x y), which adds the given cost to the total-cost variable (costs are counted in cents: 100 for $1, 1 for $0.01).
(define (fib n)
  (let* ((total-cost 0)
         (op (lambda (fn cost . args)
               (set! total-cost (+ total-cost cost))
               (apply fn args))))
    (let fib ((n n))
      (cond
        ((op < 1 n 0) 'error)
        ((= n 0) 0)
        ((= n 1) 1)
        (else (op + 100
                  (fib (op - 100 n 1))
                  (fib (op - 100 n 2))))))
    total-cost))
That doesn't let us keep the original code, though. We can do better by defining local versions of the arithmetic operators, and then using the original code:
(define (fib n)
  (let* ((total-cost 0)
         (op (lambda (fn cost)
               (lambda args
                 (set! total-cost (+ total-cost cost))
                 (apply fn args))))
         (< (op < 1))
         (+ (op + 100))
         (- (op - 100)))
    (let fib ((n n))
      (cond
        ((< n 0) 'error)
        ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
    total-cost))
> (fib 1)
1
> (fib 2)
303
> (fib 3)
605
> (fib 4)
1209
What's nice about this approach is that if you start using macros to do some source code manipulation, you could actually use this as a sort of poor-man's profiler, or tracing system. (I'd suggest sticking with the more robust tools provided by the implementation, of course, but there are times when a technique like this can be useful.)
Additionally, this doesn't even have to compute the Fibonacci number anymore. It's still computed because we do (apply fn args), but if we remove that, then we never even call the original arithmetic operation.
The quick and dirty solution would be to define a counter variable each time the cost procedure is started, and update it with the corresponding value at each branch of the recursion. For example:
(define (fib-cost n)
  (let ((counter 0))         ; counter initialized with 0 at the beginning
    (let fib ((n n))         ; inner fibonacci procedure
      ; update counter with the corresponding cost
      (set! counter (+ counter 0.01))
      (when (> n 1)
        (set! counter (+ counter 3)))
      (cond ((< n 0) 'Error)
            ((= n 0) 0)
            ((= n 1) 1)
            (else (+ (fib (- n 1)) (fib (- n 2))))))
    counter))                ; return the counter at the end
Answering your second question - no, we don't need the whole fib code; given that we're not interested in the actual value of fibonacci, the above can be further simplified to just make the required calls and ignore the returned values:
(define (fib-cost n)
  (let ((counter 0))         ; counter initialized with 0 at the beginning
    (let fib ((n n))         ; inner fibonacci procedure
      ; update counter with the corresponding cost
      (set! counter (+ counter 0.01))
      (when (> n 1)
        (fib (- n 1))
        (fib (- n 2))
        (set! counter (+ counter 3))))
    counter))                ; return the counter at the end
You only use + and - when you make the recursive calls in the else part, so each time you enter the else part you should count three of them: one for f(n-1), one for f(n-2), and one for f(n-1) + f(n-2).
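Putting that observation directly into code, the cost can be computed without computing any Fibonacci values at all (a minimal sketch; costs are in dollars, as in the question):
;; Each call pays $0.01 for the (< n 0) test; entering the else branch
;; additionally pays $3 for one + and two -.
(define (fib-cost n)
  (if (<= n 1)
      0.01
      (+ 0.01 3 (fib-cost (- n 1)) (fib-cost (- n 2)))))
;; (fib-cost 1) => 0.01
;; (fib-cost 2) => 3.03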
Just for fun, a solution using syntactic extensions (aka "macros").
Let's define the following:
(define-syntax-rule (define-cost newf oldf thiscost totalcost)
  (define (newf . parms)
    (set! totalcost (+ totalcost thiscost))
    (apply oldf parms)))
Now we create procedures based on the original procedures you want to have a cost:
(define-cost +$ + 100 cost)
(define-cost -$ - 100 cost)
(define-cost <$ < 1 cost)
so using +$ will do an addition and increase the cost counter by 100 (i.e. $1, counted in cents), and so on.
Now we adapt your initial procedure to use the newly defined ones:
(define fib
  (lambda (n)
    (cond
      ((<$ n 0) 'Error)
      ((= n 0) 0)
      ((= n 1) 1)
      (else
       (+$ (fib (-$ n 1)) (fib (-$ n 2)))))))
For convenience, we create a macro to return both the result of a procedure and its cost:
(define-syntax-rule (howmuch f . args)
  (begin
    (set! cost 0)
    (cons (apply f 'args) cost)))
then a cost variable
(define cost #f)
and off we go
> (howmuch fib 1)
'(1 . 1)
> (howmuch fib 2)
'(1 . 303)
> (howmuch fib 10)
'(55 . 26577)
> (howmuch fib 1)
'(1 . 1)

cosine function calculating scheme

I'm making a Scheme program that calculates
cos(x) = 1-(x^2/2!)+(x^4/4!)-(x^6/6!).......
What's the most efficient way to finish the program, and how would you do the alternating addition and subtraction? That's what I used the modulo for, but it doesn't work for 0 and 1 (the first 2 terms). x is the initial value of x and num is the number of terms.
(define cosine-taylor
  (lambda (x num)
    (do ((i 0 (+ i 1)))
        ((= i num))
      (if (= 0 (modulo i 2))
          (+ x (/ (pow-tr2 x (* i 2)) (factorial (* 2 i))))
          (- x (/ (pow-tr2 x (* i 2)) (factorial (* 2 i))))))
    x))
Your questions:
What's the most efficient way to finish the program? Assuming you want to use the Taylor series expansion and simply sum up the terms n times, your iterative approach is fine; I've refined it below. Others have pointed out possible loss-of-precision issues; see below for my approach.
How would you do the alternating addition and subtraction? Use another argument/local variable odd?, a boolean, and have it alternate by using not. When odd? is true, subtract; when it is not, add.
(define (cosine-taylor x n)
  (let computing ((result 1) (i 1) (odd? #t))
    (if (> i n)
        result
        (computing ((if odd? - +) result (/ (expt x (* 2 i)) (factorial (* 2 i))))
                   (+ i 1)
                   (not odd?)))))
> (cos 1)
0.5403023058681398
> (cosine-taylor 1.0 100)
0.5403023058681397
Not bad?
The above is the Scheme-ish way of performing a 'do' loop. You should easily be able to see the correspondence to a do with three locals for i, result and odd?.
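For comparison, here is roughly how the same computation might look written with do (a sketch, assuming the same factorial procedure the question already relies on):
(define (cosine-taylor x n)
  (do ((i 1 (+ i 1))
       (result 1 ((if odd? - +) result (/ (expt x (* 2 i)) (factorial (* 2 i)))))
       (odd? #t (not odd?)))
      ((> i n) result)))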
Regarding loss of numeric precision - if you really want to solve the precision problem, then convert x to an 'exact' number and do all computation using exact numbers. By doing that, you get a natural, Scheme-ly algorithm with 'perfect' precision.
> (cosine-taylor (exact 1.0) 100)
3982370694189213112257449588574354368421083585745317294214591570720658797345712348245607951726273112140707569917666955767676493702079041143086577901788489963764057368985531760218072253884896510810027045608931163026924711871107650567429563045077012372870953594171353825520131544591426035218450395194640007965562952702049286379961461862576998942257714483441812954797016455243/7370634274437294425723020690955000582197532501749282834530304049012705139844891055329946579551258167328758991952519989067828437291987262664130155373390933935639839787577227263900906438728247155340669759254710591512748889975965372460537609742126858908788049134631584753833888148637105832358427110829870831048811117978541096960000000000000000000000000000000000000000000000000
> (inexact (cosine-taylor (exact 1.0) 100))
0.5403023058681398
We should calculate the terms in an iterative fashion to prevent the loss of precision that comes from dividing very large numbers:
(define (cosine-taylor-term x)
  (let ((t 1.0) (k 0))
    (lambda (msg)
      (case msg
        ((peek) t)
        ((pull)
         (let ((p t))
           (set! k (+ k 2))
           (set! t (* (- t) (/ x (- k 1)) (/ x k)))
           p))))))
Then it should be easy to build a function to produce an n-th term, or to sum the terms up until a term is smaller than a pre-set precision value:
(define t (cosine-taylor-term (atan 1)))
;Value: t
(reduce + 0 (map (lambda(x)(t 'pull)) '(1 2 3 4 5)))
;Value: .7071068056832942
(cos (atan 1))
;Value: .7071067811865476
(t 'peek)
;Value: -2.4611369504941985e-8
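For instance, summing terms until they drop below a tolerance could look like this (a sketch built on the cosine-taylor-term generator above; cosine-approx is a name introduced here for illustration):
(define (cosine-approx x tolerance)
  (let ((gen (cosine-taylor-term x)))
    (let loop ((sum 0))
      (if (< (abs (gen 'peek)) tolerance)
          sum
          (loop (+ sum (gen 'pull)))))))
;; (cosine-approx (atan 1) 1e-10) should agree with (cos (atan 1))
;; to within roughly the given tolerance.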
A few suggestions:
reduce your input modulo 2pi - most polynomial expansions converge very slowly with large numbers
Keep track of your factorials rather than computing them from scratch each time (once you have 4!, you get 5! by multiplying by 5, etc)
Similarly, all your powers are powers of x^2. Compute x^2 just once, then multiply the "x power so far" by this number (x2), rather than taking x to the n'th power
Here is some Python code that implements this - it converges with very few terms (and you can control the precision with the while(abs(delta) > precision): statement)
from math import *

def myCos(x):
    precision = 1e-5  # pick whatever you need
    xr = (x + pi/2) % (2*pi)
    if xr > pi:
        sign = -1
    else:
        sign = 1
    xr = (xr % pi) - pi/2
    x2 = xr * xr
    xp = 1
    f = 1
    c = 0
    ans = 1
    temp = 0
    delta = 1
    while(abs(delta) > precision):
        c += 1
        f *= c
        c += 1
        f *= c
        xp *= x2
        temp = xp / f
        c += 1
        f *= c
        c += 1
        f *= c
        xp *= x2
        delta = xp/f - temp
        ans += delta
    return sign * ans
Other than that I can't help you much, as I am not familiar with Scheme...
For your general enjoyment, here is a stream implementation. The stream returns an infinite sequence of Taylor terms based on the provided func. The func is called with the current index.
(define (stream-taylor func)
  (stream-map func (stream-from 0)))
(define (stream-cosine x)
  (stream-taylor (lambda (n)
                   (if (zero? n)
                       1
                       (let ((odd? (= 1 (modulo n 2))))
                         ;; Use `exact` if desired...
                         ;; and see #WillNess above; save 'last'; use for next; avoid expt/factorial
                         ((if odd? - +) (/ (expt x (* 2 n)) (factorial (* 2 n)))))))))
> (stream-fold + 0 (stream-take 10 (stream-cosine 1.0)))
0.5403023058681397
Here's the most streamlined function I could come up with.
It takes advantage of the fact that every term is multiplied by (-x^2) and divided by (i+1)*(i+2) to come up with the next term.
It also takes advantage of the fact that we are computing factorials of 2, 4, 6, etc., so it increments the position counter by 2 and compares it with 2*N to stop the iteration.
(define (cosine-taylor x num)
  (let ((mult (* x x -1))
        (twice-num (* 2 num)))
    (define (helper iter prev-term prev-out)
      (if (= iter twice-num)
          (+ prev-term prev-out)
          (helper (+ iter 2)
                  (/ (* prev-term mult) (+ iter 1) (+ iter 2))
                  (+ prev-term prev-out))))
    (helper 0 1 0)))
Tested at repl.it.
Here are some answers:
(cosine-taylor 1.0 2)
=> 0.5416666666666666
(cosine-taylor 1.0 4)
=> 0.5403025793650793
(cosine-taylor 1.0 6)
=> 0.5403023058795627
(cosine-taylor 1.0 8)
=> 0.5403023058681398
(cosine-taylor 1.0 10)
=> 0.5403023058681397
(cosine-taylor 1.0 20)
=> 0.5403023058681397

Implementation of Simpson's Rule (SICP Exercise 1.29)

Following is my code for SICP exercise 1.29. The exercise asks us to implement
Simpson's Rule using the higher-order procedure sum. It's supposed to be more
accurate than the original integral procedure, but I don't know why that's not
the case in my code:
(define (simpson-integral f a b n)
  (define h (/ (- b a) n))
  (define (next x) (+ x (* 2 h)))
  (* (/ h 3) (+ (f a)
                (* 4 (sum f (+ a h) next (- b h)))
                (* 2 (sum f (+ a (* 2 h)) next (- b (* 2 h))))
                (f b))))
Some explanations of my code: As
h/3 * (y_{0} + 4*y_{1} + 2*y_{2} + 4*y_{3} + 2*y_{4} + ... + 2*y_{n-2} + 4*y_{n-1} + y_{n})
equals
h/3 * (y_{0}
+ 4 * (y_{1} + y_{3} + ... + y_{n-1})
+ 2 * (y_{2} + y_{4} + ... + y_{n-2})
+ y_{n})
I just use sum to compute y_{1} + y_{3} + ... + y_{n-1} and y_{2} +
y_{4} + ... + y_{n-2}.
Complete code here:
#lang racket
(define (cube x) (* x x x))
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))
(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b)
     dx))
(define (simpson-integral f a b n)
  (define h (/ (- b a) n))
  (define (next x) (+ x (* 2 h)))
  (* (/ h 3) (+ (f a)
                (* 4 (sum f (+ a h) next (- b h)))
                (* 2 (sum f (+ a (* 2 h)) next (- b (* 2 h))))
                (f b))))
Some tests (the exact value should be 0.25):
> (integral cube 0 1 0.01)
0.24998750000000042
> (integral cube 0 1 0.001)
0.249999875000001
> (simpson-integral cube 0 1.0 100)
0.23078806666666699
> (simpson-integral cube 0 1.0 1000)
0.24800798800666748
> (simpson-integral cube 0 1.0 10000)
0.2499999999999509
In your solution the x-values are computed as follows:
h = (b-a)/n
x1 = a + h
x3 = x1 + 2*h
x5 = x3 + 2*h
...
This means rounding errors slowly accumulate.
This happens when (b-a)/n is not exactly representable as a floating-point number.
If we instead compute xi as a + (i*(b-a))/n, you will get more accurate results.
This variant of your solution uses the above method to compute the xi.
(define (simpson-integral3 f a b n)
  (define h (/ (- b a) n))
  (define (next i) (+ i 2))
  (define (f* i) (f (+ a (/ (* i (- b a)) n))))
  (* (/ h 3)
     (+ (f a)
        (* 4 (sum f* 1 next n))
        (* 2 (sum f* 2 next (- n 1)))
        (f b))))
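As a quick sanity check (a sketch; the exact digits depend on floating-point rounding), Simpson's rule is exact for cubic polynomials, so this version should essentially reproduce 1/4:
(simpson-integral3 cube 0 1.0 100)  ; => ~0.25, up to floating-point rounding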
There's a problem in how you're constructing the terms: the way you're alternating between even terms (multiplied by 2) and odd terms (multiplied by 4) is not correct. I solved this problem by passing an additional parameter to sum to keep track of the current term's even-or-odd nature; there are other ways, but this worked for me, and the accuracy improved:
(define (sum term a next b i)
  (if (> a b)
      0
      (+ (term a i)
         (sum term (next a) next b (+ i 1)))))
(define (simpson-integral f a b n)
  (let* ((h (/ (- b a) n))
         (term (lambda (x i)
                 (if (even? i)
                     (* 2.0 (f x))
                     (* 4.0 (f x)))))
         (next (lambda (x) (+ x h))))
    (* (+ (f a)
          (sum term a next b 1)
          (f b))
       (/ h 3.0))))
(simpson-integral cube 0 1 1000)
=> 0.2510004999999994

multiplicative inverse of modulo m in scheme

I've written the code for the multiplicative inverse modulo m. It works for most of the initial cases but not for some. The code is below:
(define (inverse x m)
  (let loop ((x (modulo x m)) (a 1))
    (cond ((zero? x) #f)
          ((= x 1) a)
          (else (let ((q (- (quotient m x))))
                  (loop (+ m (* q x)) (modulo (* q a) m)))))))
For example, it gives correct values for (inverse 5 11) -> 9, (inverse 9 11) -> 5, (inverse 7 11) -> 8, and (inverse 8 12) -> #f, but when I give (inverse 5 12) it produces #f while it should have been 5. Can you see where the bug is?
Thanks for any help.
The algorithm you quoted is Algorithm 9.4.4 from the book Prime Numbers by Richard Crandall and Carl Pomerance. In the text of the book they state that the algorithm works for both prime and composite moduli, but in the errata to their book they correctly state that the algorithm works always for prime moduli and mostly, but not always, for composite moduli. Hence the failure that you found.
Like you, I used Algorithm 9.4.4 and was mystified at some of my results until I discovered the problem.
Here's the modular inverse function that I use now, which works with both prime and composite moduli, as long as its two arguments are coprime to one another. It is essentially the extended Euclidean algorithm that #OscarLopez uses, but with some redundant calculations stripped out. If you like, you can change the function to return #f instead of throwing an error.
(define (inverse x m)
  (let loop ((x x) (b m) (a 0) (u 1))
    (if (zero? x)
        (if (= b 1)
            (modulo a m)
            (error 'inverse "must be coprime"))
        (let* ((q (quotient b x)))
          (loop (modulo b x) x u (- a (* u q)))))))
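A quick check against the failing case from the question (assuming the definition above; the error output format depends on the implementation):
> (inverse 5 12)
5
> (inverse 8 12)
; inverse: must be coprime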
Does it have to be precisely that algorithm? If not, try this one, taken from wikibooks:
(define (egcd a b)
  (if (zero? a)
      (values b 0 1)
      (let-values (((g y x) (egcd (modulo b a) a)))
        (values g (- x (* (quotient b a) y)) y))))
(define (modinv a m)
  (let-values (((g x y) (egcd a m)))
    (if (not (= g 1))
        #f
        (modulo x m))))
It works as expected:
(modinv 5 11) ; 9
(modinv 9 11) ; 5
(modinv 7 11) ; 8
(modinv 8 12) ; #f
(modinv 5 12) ; 5
I think this is the Haskell code on that page translated directly into Scheme:
(define (inverse p q)
  (cond ((= p 0) #f)
        ((= p 1) 1)
        (else
         (let ((recurse (inverse (mod q p) p)))
           (and recurse
                (let ((n (- p recurse)))
                  (div (+ (* n q) 1) p)))))))
It looks like you're trying to convert it from recursive to tail-recursive, which is why things don't match up so well.
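Checking it against the cases from the question (assuming a Scheme where div and mod are available, e.g. an R6RS implementation):
(inverse 5 11) ; => 9
(inverse 5 12) ; => 5
(inverse 8 12) ; => #f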
These two functions below can help you as well.
Theory
Here's how we find the multiplicative inverse d. We want e*d = 1 (mod n), which means that ed + nk = 1 for some integer k. So we'll write a procedure that solves the general equation ax + by = 1, where a and b are given, x and y are variables, and all of these values are integers. We'll use this procedure to solve ed + nk = 1 for d and k. Then we can throw away k and simply return d.
(define (ax+by=1 a b)
  (if (= b 0)
      (cons 1 0)
      (let* ((q (quotient a b))
             (r (remainder a b))
             (e (ax+by=1 b r))
             (s (car e))
             (t (cdr e)))
        (cons t (- s (* q t))))))
This function is a general solution to an equation of the form ax+by=1, where a and b are given. The inverse-mod function simply uses this solution and returns the inverse.
(define inverse-mod
  (lambda (a m)
    (if (not (= 1 (gcd a m)))
        (display "**Error** No inverse exists.")
        (if (> 0 (car (ax+by=1 a m)))
            (+ (car (ax+by=1 a m)) m)
            (car (ax+by=1 a m))))))
Some test cases are :
(inverse-mod 5 11) ; -> 9 5*9 = 45 = 1 (mod 11)
(inverse-mod 9 11) ; -> 5
(inverse-mod 7 11) ; -> 8 7*8 = 56 = 1 (mod 11)
(inverse-mod 5 12) ; -> 5 5*5 = 25 = 1 (mod 12)
(inverse-mod 8 12) ; -> error no inverse exists
