In SICP, there is a problem (exercise 1.15) that says
Exercise 1.15. The sine of an angle (specified in radians) can be
computed by making use of the approximation sin x ≈ x if x is
sufficiently small, and the trigonometric identity

sin r = 3 sin(r/3) - 4 sin^3(r/3)

to reduce the size of the argument of sin. (For purposes of this
exercise an angle is considered "sufficiently small" if its
magnitude is not greater than 0.1 radians.) These ideas are incorporated
in the following procedures:
(define (cube x) (* x x x))

(define (p x) (- (* 3 x) (* 4 (cube x))))

(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
a. How many times is the procedure p applied when (sine 12.15) is evaluated?
b. What is the order of growth in space and number of steps
(as a function of a) used by the process generated by the
sine procedure when (sine a) is evaluated?
You can analyze it by running it and see that it is O(log a), where a is the input angle in radians.
However, this isn't sufficient; it should be provable via a recurrence relation. I can set up a recurrence relation as such:
T(a) = 3T(a/3) - 4T(a/3)^3
Which is homogeneous:
3T(a/3) - 4T(a/3)^3 - T(a) = 0
However, it is non-linear. I am unsure how to derive the characteristic equation for this so that I can solve it and prove to myself that O(log a) is more than just "intuitively true". No tutorial on the internet seems to cover this analysis, and the only thing I saw said conclusively that non-linear recurrences are pretty much impossible to solve.
Can anyone help?
Thank you!
You're confusing the computation with the amount of time the computation takes.
If θ ≤ 0.1, then T(θ) is 1. Otherwise, it is T(θ/3) + k, where k is the time it takes to do four multiplications, a subtraction, and some miscellaneous bookkeeping.
It is evident that the argument for the i-th recursion will be θ/3^i, and therefore that the recursion will continue until θ/3^i ≤ 0.1. Since the smallest value of i for which that inequality is true is ⌈log₃(θ/0.1)⌉, we can see that T(θ) = k·⌈log₃(θ/0.1)⌉, which is O(log θ). (I left out the small constant factor which differentiates the last recursion from the other ones, since it makes no difference.)
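A quick empirical check (the counter is my own instrumentation, not part of the exercise):

(define p-count 0)   ; added helper: counts applications of p

(define (cube x) (* x x x))

(define (p x)
  (set! p-count (+ p-count 1))   ; count each application of p
  (- (* 3 x) (* 4 (cube x))))

(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))

(sine 12.15)
p-count   ; => 5, and ⌈log₃(12.15/0.1)⌉ = ⌈log₃ 121.5⌉ = 5, as predicted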
I'm new to programming and have a question about defining a function that is a fraction and involves exponents.
This is the question:
Define a function tanh which, given a positive number x, returns the hyperbolic tangent of x, defined as

tanh(x) = (e^(2x) - 1)/(e^(2x) + 1)
You may use the built-in Scheme function (expt b e) to complete this problem.
(expt b e) computes b^e
I understand that (expt b e) is how you write exponents, but I don't understand how to write "e^x".
expt is the power function, whereas exp is the natural exponential function.
(exp x) gives you e^x.
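Putting it together, a possible definition (a sketch; the name tanh and the local e2x are my own choices) is:

(define (tanh x)
  (let ((e2x (exp (* 2 x))))    ; e^(2x)
    (/ (- e2x 1) (+ e2x 1))))

(tanh 1)   ; => approximately 0.7615941559557649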
I have a simple Lisp program here that computes an approximation of the average distance between two points chosen uniformly at random on the unit interval. If I run the program, I get the rational number 16666671666667/50000000000000, but when I (naively) try to format the rational number to 20 places, some of the precision is thrown away: 0.33333343000000000000.

I think that, under the hood, SBCL is casting the rational to a floating-point number before formatting it, but I'm not really sure how to tell. I'm just using the expression (format t "~20$~%" (scale-all-line-contributions 10000000 1)). Is there a way to convert a rational number to decimal notation keeping as much precision as possible? I understand the format system is powerful and expansive, but I'm having trouble finding documentation about it specifically in relation to rational numbers.
Here's the code below for completeness' sake, since it isn't very long.
(defun number-of-pairs (n i)
  "Get the number of pairs with distance I.
For non-zero distances we have to consider two cases."
  (cond
    ((= i 0) n)
    ((> i n) 0)                  ; out of range: check this before the general case
    ((> i 0) (* 2 (- n i)))))
(defun line-contribution (n i power)
  "Get the number of segments of length I in a line of N segments and their weight combined."
  (let ((number-of-pairs (number-of-pairs n i))
        (weight-of-pair (expt i power)))
    (* number-of-pairs weight-of-pair)))
(defun all-line-contributions (n power)
  "Sum the line contributions for each i in [1 .. n-1]."
  (loop for i from 1 upto (- n 1)
        summing (line-contribution n i power)))
(defun normalized-all-line-contributions (n power)
  "Normalize line contributions by the number of pairs."
  (let ((pair-count (expt n 2)))
    (/ (all-line-contributions n power) pair-count)))

(defun scale-all-line-contributions (n power)
  "Scale the line contributions by the distance n;
this will guarantee convergence."
  (/ (normalized-all-line-contributions n power) (expt n power)))
(print (scale-all-line-contributions 10000000 1))
(format t "~20$~%" (scale-all-line-contributions 10000000 1))
Edit: fixed a logic error in the code. The new rational/float pair is 33333333333333/100000000000000 and 0.33333334000000000000.
You can use either coerce or float. For instance:
(format t "~20$" (coerce 16666671666667/50000000000000 'long-float))
; prints 0.33333343333334000000
(format t "~a" (float 16666671666667/50000000000000 1.0l0))
; prints 0.33333343333334d0
Note that a coercion to long-float can produce different results in different implementations of Common Lisp (in particular in CLISP).
The second parameter to float is a prototype: you should provide any float literal and the first parameter will be converted to the same kind of float.
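If you want to sidestep floats entirely, one option (a sketch of my own, not a built-in format feature; it assumes a non-negative rational) is to compute the digits with exact integer arithmetic:

(defun print-rational-decimal (r places &optional (stream t))
  "Print non-negative rational R with PLACES digits after the decimal point,
using exact integer arithmetic instead of float coercion."
  (multiple-value-bind (whole frac) (truncate r)
    ;; note: no carry handling if rounding overflows to 10^PLACES
    (format stream "~d.~v,'0d~%" whole places
            (round (* frac (expt 10 places))))))

(print-rational-decimal 16666671666667/50000000000000 20)
; prints 0.33333343333334000000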
I'm having trouble with a homework exercise.
I need to describe an efficient algorithm which solves the polynomial interpolation problem:
Let P[i,j] be the polynomial interpolation of the points (x_i, y_i), ..., (x_j, y_j). Find three simple polynomials q(x), r(x), s(x) of degree 0 or 1 such that:

P[i,j+1](x) = (q(x)·P[i,j](x) - r(x)·P[i+1,j+1](x)) / s(x)

Given the points (x_1, y_1), ..., (x_n, y_n), describe an efficient dynamic programming algorithm based on the recurrence relation which you found in section 1 for computing the coefficients a_0, ..., a_{n-1} of the polynomial interpolation.
Well, I know how to solve the polynomial interpolation problem using Newton polynomials, which look quite similar to the above recurrence relation, but I don't see how that helps me find q(x), r(x), s(x) of degree 0 or 1. And assuming I have the correct q(x), r(x), s(x), how do I solve this problem using dynamic programming?
Any help will be much appreciated.
q(x) = x_{j+1} - x
r(x) = x_i - x
s(x) = x_{j+1} - x_i

Here x_i and x_{j+1} denote the x-coordinates of the i-th and (j+1)-th points in the ordered list of input points.
Some explanations:
First we need to understand what P[i,j](x) means.
Put all your initial (x,y) pairs in the main diagonal of an n x n matrix.
Now you can extract P[0,0](x) to be the y value of the point in your matrix at (0,0).
P[0,1] is the linear interpolation of the points in your matrix at (0,0) and (1,1). This will be a straight-line function:

((x_1 - x)·y_0 - (x_0 - x)·y_1)
-------------------------------
          (x_1 - x_0)
P[0,2] is the linear interpolation of the two previous linear interpolations, which means that your y-values are now the linear functions you calculated in the previous step.
This is also the dynamic-programming algorithm that builds the full polynomial.
I highly recommend you have a look at this very good lecture, and the full lecture notes.
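To make the table structure concrete, here is a sketch of my own (the name neville is mine; it assumes an R7RS Scheme with vector-copy) that fills the P[i,j] table bottom-up and evaluates P[0,n-1] at a point x. Computing the coefficients a_0, ..., a_{n-1} follows the same fill order, just with polynomial arithmetic instead of numbers:

(define (neville xs ys x)
  ;; p starts as the main diagonal: p[i] = P[i,i] = y_i
  (let* ((n (vector-length xs))
         (p (vector-copy ys)))
    (let loop ((len 1))                 ; len = j - i, the "width" of a table entry
      (if (= len n)
          (vector-ref p 0)              ; P[0,n-1](x)
          (begin
            (do ((i 0 (+ i 1)))
                ((= i (- n len)))
              ;; P[i,j+1] = (q·P[i,j] - r·P[i+1,j+1]) / s, with q, r, s as above
              (vector-set! p i
                (/ (- (* (- (vector-ref xs (+ i len)) x) (vector-ref p i))
                      (* (- (vector-ref xs i) x) (vector-ref p (+ i 1))))
                   (- (vector-ref xs (+ i len)) (vector-ref xs i)))))
            (loop (+ len 1)))))))

(neville (vector 0 1 2) (vector 1 2 5) 3)  ; the points fit x^2 + 1, so this gives 10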
An iterative version of odd? for non-negative integer arguments can be written using and, or, and not. To do so, you have to take advantage of the fact that and and or are special forms that evaluate their arguments in order from left to right, exiting as soon as the value is determined. Write (boolean-odd? x) without using if or cond, but using and, or, not (boolean) instead. You may use + and -, but do not use quotient, remainder, /, etc.
A number is even if two divides it evenly, and odd if there is a remainder of one. In general, when you divide a number k by a number n, the remainder is one element of the set {0, 1, …, n-1}. You can generalize your question by asking whether, when k is divided by n, the remainder is in some privileged set of remainder values. Since this is almost certainly homework, I do not want to provide a direct answer to your question, but I'll answer this more general version, without sticking to the constraints of using only and and or.
(define (special-remainder? k n special-remainders)
  (if (< k n)
      (member k special-remainders)
      (special-remainder? (- k n) n special-remainders)))  ; n must be passed along
This special-remainder? repeatedly subtracts n from k until a remainder less than n is found. Then that remainder is tested for membership in special-remainders. In the case that you're considering, you'll be able to eliminate special-remainders, because you don't need (member k special-remainders). Since you only have one special remainder, you can just check whether k is that special remainder.
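For example (a usage sketch; for oddness the divisor is 2 and the privileged set is '(1)):

(special-remainder? 17 2 '(1))   ; => (1), which is truthy: 17 is odd
(special-remainder? 16 2 '(1))   ; => #f: 16 is even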
A positive odd number can be defined as 1 + 2n. Thus x is odd if:
x is 1, or
x is greater than 1 and x - 2 is odd.
Thus one* solution that is tail-recursive/iterative looks like this:

(define (odd? x)
  (or (= ...)           ; #t if 1
      (and ...          ; #f if less than 1
           (odd? ...)))) ; recurse with 2 less
*Having played around with it, there are many ways to do this and still have it iterative and without if/cond.
I have been trying to find a tight time-complexity bound for this function with respect to just one of the arguments. I thought it was O(p^2) (or rather big-theta), but I am not sure anymore.
(define (acc p n)
  (define (iter p n result)
    (if (< p 1)
        result
        (iter (/ p 2) (- n 1) (+ result n))))
  (iter p n 1))
@sarahamedani, why would this be O(p^2)? It looks like O(log p) to me. The runtime should be insensitive to the value of n.
You are summing a series of numbers, counting down from n. The number of times iter will iterate depends on how many times p can be halved before it drops below 1. In other words, for a positive integer p, the number of iterations is the position of the leftmost '1' bit in p, counting the lowest bit as position one; that is, ⌊log₂ p⌋ + 1. So the number of times iter runs is proportional to log p.
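You can check this empirically (acc-steps is a helper I've added, not part of the original code):

(define (acc-steps p)
  ;; Count how many times iter would run for a given p.
  (let loop ((p p) (steps 0))
    (if (< p 1)
        steps
        (loop (/ p 2) (+ steps 1)))))

(acc-steps 8)      ; => 4,  which is (floor (log 8 2)) + 1
(acc-steps 1000)   ; => 10, which is (floor (log 1000 2)) + 1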
You might try to eyeball it, or go from it more systematically. Assuming we're doing this from scratch, we should try build a recurrence relation from the function definition.
We can assume, for the moment, a very simple machine model where arithmetic operations and variable lookups are constant time.
Let iter-cost be the name of the function that counts how many steps it takes to compute iter, and let it be a function of p, since iter's termination depends only on p. Then you should be able to write an expression for iter-cost(0). Can you do that for iter-cost(1), iter-cost(2), iter-cost(3), and iter-cost(4)?
More generally, given a p greater than zero, can you express iter-cost(p)? It will be in terms of constants and a recurrent call to iter-cost. If you can express it as a recurrence, then you're in a better position to express it in a closed form.
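For instance, the recurrence might come out like this (a sketch; c0 and c1 are assumed constants for the base case and the per-call work):

iter-cost(p) = c0                    if p < 1
iter-cost(p) = iter-cost(p/2) + c1   if p >= 1

Unrolling gives iter-cost(p) = c1·(⌊log₂ p⌋ + 1) + c0, which is O(log p), in agreement with the bit-position argument above.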