More Efficient Runtime in Scheme - AVL Trees

So I basically have the function avl? that runs in O(n^2). This is because every time I'm recursing, I'm calling height, which is an O(n) function (where n is the number of nodes in the tree).
(define (height t)
  (cond
    [(empty? t) 0]
    [else (+ 1 (max (height (BST-left t)) (height (BST-right t))))]))
(define (avl? t)
  (cond
    [(empty? t) #t]
    [else (and (avl? (BST-left t))
               (avl? (BST-right t))
               (>= 1 (abs (- (height (BST-left t))
                             (height (BST-right t))))))]))
My problem is that I want to make avl? run in O(n) time. I was given the hint: "You should try to limit calling your height function to constant time no matter how large the BST it is applied to. In this way, you can get O(n) running time overall." ... I'm not sure how to make height run in constant time, though. Any suggestions to make avl? run in O(n) rather than O(n^2)?

If you are not allowed to store the height in the tree, you can avoid recomputing it by having a worker function that tells you both the height of a tree and whether it's an AVL tree. Then each node is looked at exactly once, giving an O(n) algorithm. Call the worker from a wrapper that discards the height part of the worker's result. You should of course short-circuit: if some subtree is determined to violate the balancing condition, don't bother checking any more subtrees; return #f and a bogus height.
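A sketch of that idea, assuming the empty?/BST-left/BST-right accessors from the question, with the worker returning two values (the height, and whether the tree is balanced):

```scheme
;; Returns two values: the height of t, and whether t is an AVL tree.
(define (avl-worker t)
  (if (empty? t)
      (values 0 #t)
      (let-values ([(lh l-ok?) (avl-worker (BST-left t))])
        (if (not l-ok?)
            (values 0 #f)            ; short-circuit: bogus height
            (let-values ([(rh r-ok?) (avl-worker (BST-right t))])
              (if (and r-ok? (<= (abs (- lh rh)) 1))
                  (values (+ 1 (max lh rh)) #t)
                  (values 0 #f)))))))

;; Wrapper that forgets the height part of the worker's result:
(define (avl? t)
  (let-values ([(h ok?) (avl-worker t)])
    ok?))
```

Each node is visited exactly once, so the whole check is O(n).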

Another option would be storing the height in every node, where the value represents the height of the subtree rooted in that node. Clearly, with this approach returning the height of a subtree would be an O(1) operation.
That implies that all the operations that modify the tree (insertion, deletion, etc.) must keep the height up to date whenever there's a structural change in the tree.
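A minimal sketch of this approach in Racket, assuming a BST struct extended with a cached height field (names hypothetical):

```scheme
(struct BST (value left right height))

;; O(1): just read the cached field.
(define (height t)
  (if (empty? t) 0 (BST-height t)))

;; Every insertion/deletion builds nodes through make-node, which keeps
;; the cached height consistent with the subtrees:
(define (make-node v l r)
  (BST v l r (+ 1 (max (height l) (height r)))))
```

With height now O(1), the original avl? does O(1) work per node and runs in O(n).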

Related

Creating and solving a recurrence relation for sine approximation

In SICP, there is a problem (exercise 1.15) that says
Exercise 1.15. The sine of an angle (specified in radians) can be computed by making use of the approximation sin x ≈ x if x is sufficiently small, and the trigonometric identity

sin r = 3 sin(r/3) - 4 sin^3(r/3)

to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered "sufficiently small" if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following procedures:
(define (cube x) (* x x x))
(define (p x) (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
a. How many times is the procedure p applied when (sine 12.15) is evaluated?
b. What is the order of growth in space and number of steps
(as a function of a) used by the process generated by the
sine procedure when (sine a) is evaluated?
You can analyze it by running it, and see that it becomes O(log a), where a is the input angle in radians.
However, this isn't sufficient. This should be provable via a recurrence relation. I can set up a recurrence relation as such:
T(a) = 3T(a/3) - 4T(a/3)^3
which is homogeneous:
3T(a/3) - 4T(a/3)^3 - T(a) = 0
However, it is non-linear. I am unsure how to derive the characteristic equation so that I can solve it and prove to myself that O(log a) is more than just "intuitively true". No tutorial on the internet seems to cover this analysis, and the only thing I found conclusively said that non-linear recurrences are pretty much impossible to solve.
Can anyone help?
Thank you!
You're confusing the computation with the amount of time the computation takes.
If θ ≤ 0.1, then T(θ) is 1. Otherwise, it is T(θ/3) + k, where k is the time it takes to do four multiplications, a subtraction, and some miscellaneous bookkeeping.
It is evident that the argument for the i-th recursion will be θ/3^i, and therefore that the recursion will continue until θ/3^i ≤ 0.1. Since the smallest value of i for which that inequality is true is ⌈log_3(θ/0.1)⌉, we can see that T(θ) = k·⌈log_3(θ/0.1)⌉, which is O(log θ). (I left out the small constant factor which differentiates the last recursion from the other ones, since it makes no difference.)
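Part (a) can also be sanity-checked empirically. A minimal sketch, redefining p with a counter and reusing the question's definitions of cube and sine:

```scheme
(define (cube x) (* x x x))

(define p-count 0)                  ; counts applications of p
(define (p x)
  (set! p-count (+ p-count 1))
  (- (* 3 x) (* 4 (cube x))))

(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))

(sine 12.15)
p-count  ; ⇒ 5, since 12.15/3^5 ≈ 0.05 is the first value ≤ 0.1
```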

calculate a running time of a function

I have trouble with coming up the running time of a function which calls other functions. For example, here is a function that convert the binary tree to a list:
(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (append (tree->list-1 (left-branch tree))
              (cons (entry tree)
                    (tree->list-1 (right-branch tree))))))
The explanation is T(n) = 2T(n/2) + O(n/2), because the procedure append takes linear time.
Solving the above equation, we get T(n) = O(n log n).
However, cons is also a procedure that combines two elements. In this case it is applied to every entry node, so why don't we add another O(n) to the solution?
Thank you for any help.
Consider O(n^2), which is clearly quadratic.
Now consider O(n^2 + n). This is still quadratic, so we can reduce it to O(n^2), since the + n term is not significant (it does not change the order of growth).
The same applies here, so we can reduce O(n log n + n) to O(n log n). However, we may not reduce this to O(log n), as that would be logarithmic, which this is not.
If I understand correctly, you are asking about the difference between append and cons.
The time used by (cons a b) does not depend on the values of a and b. The call allocates some memory, tags it with a type tag ("pair") and stores pointers to the values a and b in the pair.
Compare this to (append xs ys). Here append needs to make a new list consisting of the elements in both xs and ys. This means that if xs is a list of n elements, then append needs to allocate n new pairs to hold the elements of xs.
In short: append needs to copy the elements of xs, and thus the time is proportional to the length of xs. The function cons uses the same, constant amount of time no matter what arguments it is called with.
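This asymmetry is exactly why SICP's second version, tree->list-2, is faster: it replaces every append with a single cons by threading an accumulator through the traversal, using the same entry/left-branch/right-branch selectors:

```scheme
(define (tree->list-2 tree)
  (define (copy-to-list tree result)
    (if (null? tree)
        result
        ;; left subtree is copied onto (entry . right-subtree-list . result)
        (copy-to-list (left-branch tree)
                      (cons (entry tree)
                            (copy-to-list (right-branch tree)
                                          result)))))
  (copy-to-list tree '()))
```

Each node is consed onto the result exactly once, so the whole conversion is O(n).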

scheme procedure path that takes an integer n and a BST that contains integer n

Write a Scheme procedure path that takes an integer n and a binary search tree bst that contains the integer n, and returns a string of ones and zeroes. A move left corresponds to a character zero ('0') and a move right corresponds to a character one ('1').
For example:
(path 17 '(14 (7 () (12 () ()))
              (26 (20 (17 () ()) ()) (31 () ()))))
⇒ "100"
In the above example, we get the string "100" as the path to 17. I have tried, but my path procedure is incorrect.
You didn't provide the code you've written so far, so I'll sketch the answer so you can fill-in the blanks with your own solution. Assuming that n is in the tree:
(define (path n bst)
  (cond ((< <???> n)                 ; if the current element is less than n
         (string-append "1" <???>))  ; append "1", recur on the right subtree
        ((> <???> n)                 ; if the current element is greater than n
         (string-append "0" <???>))  ; append "0", recur on the left subtree
        (else                        ; we found n
         <???>)))                    ; base case: end recursion with the empty string
The trick is to traverse the tree and accumulate the answer at the same time; given that the output is a string we build it along the way using string-append.
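For completeness, here is one way the blanks might be filled in, assuming the list representation from the question, where (car bst) is the entry, (cadr bst) the left subtree, and (caddr bst) the right subtree:

```scheme
(define (path n bst)
  (cond ((< (car bst) n)                        ; current entry less than n
         (string-append "1" (path n (caddr bst)))) ; go right
        ((> (car bst) n)                        ; current entry greater than n
         (string-append "0" (path n (cadr bst))))  ; go left
        (else "")))                             ; found n: empty string

(path 17 '(14 (7 () (12 () ()))
              (26 (20 (17 () ()) ()) (31 () ()))))  ; ⇒ "100"
```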

DrRacket procedure body help (boolean-odd? x)

An iterative version of odd? for non-negative integer arguments can be written using and, or, and not. To do so, you have to take advantage of the fact that and and or are special forms that evaluate their arguments in order from left to right, exiting as soon as the value is determined. Write (boolean-odd? x) without using if or cond, but using and, or, not (boolean) instead. You may use + and -, but do not use quotient, remainder, /, etc.
A number is even if two divides it evenly, and odd if there is a remainder of one. In general, when you divide a number k by a number n, the remainder is one element of the set {0, 1, …, n-1}. You can generalize your question by asking whether, when k is divided by n, the remainder is in some privileged set of remainder values. Since this is almost certainly homework, I do not want to provide a direct answer to your question, but I'll answer this more general version, without sticking to the constraints of using only and and or.
(define (special-remainder? k n special-remainders)
  (if (< k n)
      (member k special-remainders)
      (special-remainder? (- k n) n special-remainders)))
This special-remainder? recursively subtracts n from k until a remainder less than n is found. Then that remainder is tested for membership. In the case that you're considering, you'll be able to eliminate special-remainders, because you don't need (member k special-remainders). Since you only have one special remainder, you can just check whether k is that special remainder.
A positive odd number can be defined as 1 + 2n. Thus x is odd if:
x is 1, or
x is greater than 1 and x-2 is odd.
Thus one* solution that is tail recursive/iterative looks like this:
(define (odd? x)
  (or (= ...)        ; #t if 1
      (and ...       ; #f if less than 1
           (odd? ...)))) ; recurse with 2 less
*Having played around with it, there are many ways to do this and still have it iterative and without if/cond.
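For reference, one way the blanks in the template might be filled in (a sketch; the name boolean-odd? comes from the exercise statement):

```scheme
(define (boolean-odd? x)
  (or (= x 1)                        ; #t if exactly 1
      (and (> x 1)                   ; #f if 0 (or less)
           (boolean-odd? (- x 2))))) ; recurse with 2 less, in tail position
```

Because and and or evaluate left to right and the recursive call sits in tail position, the process is iterative.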

time complexity of the acc function in scheme?

I have been trying to find a tight bound on the time complexity of this function with respect to just one of the arguments. I thought it was O(p^2) (or rather big theta), but I am not sure anymore.
(define (acc p n)
  (define (iter p n result)
    (if (< p 1)
        result
        (iter (/ p 2) (- n 1) (+ result n))))
  (iter p n 1))
#sarahamedani, why would this be O(p^2)? It looks like O(log p) to me. The runtime should be insensitive to the value of n.
You are summing a series of numbers, counting down from n. The number of times iter will iterate depends on how many times p can be halved without becoming less than 1. In other words, for an integer p, the position of the leftmost '1' bit in p (counting from one at the least-significant bit) is the number of times iter will iterate. That means the number of times iter runs is proportional to log p.
You might try to eyeball it, or go from it more systematically. Assuming we're doing this from scratch, we should try build a recurrence relation from the function definition.
We can assume, for the moment, a very simple machine model where arithmetic operations and variable lookups are constant time.
Let iter-cost be the name of the function that counts how many steps it takes to compute iter, and let it be a function of p, since iter's termination depends only on p. Then you should be able to write expressions for iter-cost(0). Can you do that for iter-cost(1), iter-cost(2), iter-cost(3), and iter-cost(4)?
More generally, given a p greater than zero, can you express iter-cost(p)? It will be in terms of constants and a recursive call to iter-cost. If you can express it as a recurrence, then you're in a better position to express it in closed form.
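Empirically, a sketch that counts the iterations (a hypothetical acc-steps, mirroring iter with an extra counter) makes the log p behavior visible:

```scheme
(define (acc-steps p n)
  (define (iter p n result steps)
    (if (< p 1)
        steps
        (iter (/ p 2) (- n 1) (+ result n) (+ steps 1))))
  (iter p n 1 0))

(acc-steps 8 100)    ; ⇒ 4   (8 → 4 → 2 → 1, then 1/2 < 1 stops)
(acc-steps 1024 100) ; ⇒ 11  (one more step each time p doubles)
```

Doubling p adds exactly one iteration, which is the signature of O(log p) growth, and n never affects the count.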
