DrRacket procedure body help (boolean-odd? x) - scheme

An iterative version of odd? for non-negative integer arguments can be written using and, or, and not. To do so, you have to take advantage of the fact that and and or are special forms that evaluate their arguments in order from left to right, exiting as soon as the value is determined. Write (boolean-odd? x) without using if or cond, but using and, or, not (boolean) instead. You may use + and -, but do not use quotient, remainder, /, etc.

A number is even if two divides it evenly, and odd if there is a remainder of one. In general, when you divide a number k by a number n, the remainder is one element of the set {0, 1, …, n-1}. You can generalize your question by asking whether, when k is divided by n, the remainder is in some privileged set of remainder values. Since this is almost certainly homework, I do not want to provide a direct answer to your question, but I'll answer this more general version, without sticking to the constraints of using only and and or.
(define (special-remainder? k n special-remainders)
  (if (< k n)
      (member k special-remainders)
      (special-remainder? (- k n) n special-remainders)))
This special-remainder? recursively subtracts n from k until a remainder less than n is found. Then that remainder is tested for its specialness. In the case that you're considering, you'll be able to eliminate special-remainders, because you don't need (member k special-remainders). Since you only have one special remainder, you can just check whether k is that special remainder.
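For comparison, the same repeated-subtraction idea can be sketched in Python (the function name and loop form are mine, not part of the original answer):

```python
def special_remainder(k, n, special_remainders):
    # Repeatedly subtract n from k until the remainder is less than n,
    # then test whether that remainder is in the privileged set.
    while k >= n:
        k -= n
    return k in special_remainders

# Oddness is the special case: n = 2 with privileged set {1}.
print(special_remainder(7, 2, {1}))   # True: 7 is odd
print(special_remainder(8, 2, {1}))   # False: 8 is even
```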

A positive odd number can be defined as 1 + 2n. Thus x is odd:
if x is 1, or
if x is greater than 1 and x-2 is odd.
Thus one* solution that is tail recursive/iterative looks like this:
(define (odd? x)
  (or (= ...)         ; #t if 1
      (and ...        ; #f if less than 1
           (odd? ...)))) ; recurse with 2 less
*Having played around with it, there are many ways to do this and still have it iterative and without if/cond.

Related

Lucas numbers streaming on Scheme

To show how to check primality with the Lucas sequence, let’s look at
an example. Consider the first few numbers in the Lucas sequence:
1,3,4,7,11,18,29,47,76,123,...
If we want to see whether a particular number, say 7, is prime,
we take the 7th number in the Lucas sequence and subtract 1. The 7th
Lucas number is 29. Subtract 1 and we have 28. Now, if this result is
evenly divisible by 7, 7 is probably prime. In our example, 28 is
divisible by 7, so it is probably prime. We call these numbers Lucas
pseudoprimes.
If we were interested in another number, say 8, we would take the 8th
Lucas number and subtract 1. In this case, we get 46, which is not
evenly divisible by 8. So, 8 is definitely not prime. This method is
useful for filtering out composite numbers.
So, the general procedure to determine if an integer p could possibly
be prime is as follows:
Find the pth Lucas number
Subtract 1
Check to see if p divides the result evenly
Define a SCHEME function, named (lucas-pseudoprime p) which uses
this approach to determine if an integer p is a Lucas pseudoprime.
I have this so far:
(define (lucas-pseudoprime p)
  (define (helper-lucas goal current next)
    (if (= goal current)
        (head next)
        (helper-lucas goal (+ current 1) next))
  (let ((solution (- p 1))
    (if (= (module solution p) 0) #t
        #f)))))
Not sure what's going wrong with this.
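For reference, the three steps of the assignment can be sketched in Python (a hedged illustration, assuming the 1-indexed sequence 1, 3, 4, 7, ... from the question; the helper name lucas is mine):

```python
def lucas(n):
    # nth Lucas number, 1-indexed: 1, 3, 4, 7, 11, 18, 29, 47, ...
    a, b = 1, 3
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def lucas_pseudoprime(p):
    # p passes the test when p evenly divides (pth Lucas number) - 1
    return (lucas(p) - 1) % p == 0

print(lucas(7))              # 29, as in the example
print(lucas_pseudoprime(7))  # True: 28 is divisible by 7
print(lucas_pseudoprime(8))  # False: 46 is not divisible by 8
```

Comparing this to the posted code suggests where to look: the helper never advances to the next pair of sequence values, solution subtracts 1 from p rather than from the pth Lucas number, and the standard Scheme name is modulo, not module.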

Nth Digit From the Right

How do you create a function that will allow you to find the Nth digit from the right in a large number? For example, (f 2345 2) will yield 4. I am a beginning student so I am programming with DrRacket and I'm hoping that the code can be usable in Racket.
Remember that the rightmost digit is the least significant: you can cut it off by taking the quotient by 10, and you can read it by taking the remainder by 10.
For the nth digit you first take the remainder of a division by (expt 10 n), and then the quotient of that by (expt 10 (- n 1)).
Imagine your function taking a base such that you can pick the 3rd octal digit like this:
(f #o143 #o3 #o10) ; ==> 1
; same in decimal notation
(f 99 3 8) ; ==> 1
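A minimal Python sketch of that remainder-then-quotient recipe (the function name and the base parameter are mine; #o143 is 99 in decimal):

```python
def nth_digit(num, n, base=10):
    # nth digit from the right, 1-indexed:
    # drop everything above position n, then shift the lower digits away.
    return (num % base ** n) // base ** (n - 1)

print(nth_digit(2345, 2))      # 4
print(nth_digit(0o143, 3, 8))  # 1, the 3rd octal digit of #o143
```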

How do i find Fibonacci numbers in Scheme?

(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
Just having some fun with SICP.
I completely understand the idea of the Fibonacci algorithm, but this code got me stuck.
What exactly does the last line do compared to the imperative style? Is this just basic recursion, or something else?
The procedure is implementing the Fibonacci series as an iterative process. In this case, fib is the main procedure that calls fib-iter, which does the actual work by means of an iteration. Notice that count is used to control the number of iterations we want, whereas a and b are used to store the results of the Fibonacci series for n-1 and n-2 respectively. The line (fib-iter (+ a b) a (- count 1)) is advancing the iteration to the next values.
Please take the time to read about iterative vs. recursive processes in the book, also read about tail recursion - these are the concepts you need to grasp for really understanding what's happening in the example.
For comparison, let's see how the same procedures would look using a more conventional syntax (Python's):
def fib(n):
    return fib_iter(1, 0, n)

def fib_iter(a, b, count):
    while count != 0:      # same as asking `(if (= count 0) ...)`
        a, b = a + b, a    # same as passing `(+ a b) a` to the recursive call
        count -= 1         # same as `(- count 1)`
    return b               # same as returning `b` at the end of the recursion
As you see, the fib_iter procedure is simply iterating over a range of values controlled by the count variable, assigning a and b to the next values in the series, up until a number of iterations is completed; at this point the result is in b and is returned.
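To make "advancing the iteration" concrete, here is a small Python trace of the state in (fib 5), printed before each step:

```python
a, b, count = 1, 0, 5  # same starting state as (fib-iter 1 0 5)
while count != 0:
    print(a, b, count)
    a, b, count = a + b, a, count - 1  # one step of fib-iter
print("result:", b)  # b now holds fib(5) = 5
```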

Analysing the runtime efficiency of a Haskell function

I have the following solution in Haskell to Problem 3:
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
  where n = 600851475143
However, it takes more than the minute limit given by Project Euler. So how do I analyze the time complexity of my code to determine where I need to make changes?
Note: Please do not post alternative algorithms. I want to figure those out on my own. For now I just want to analyse the code I have and look for ways to improve it. Thanks!
Two things:
Any time you see a list comprehension (as you have in divisors), or equivalently, some series of map and/or filter functions over a list (as you have in main), treat its complexity as Θ(n) just the same as you would treat a for-loop in an imperative language.
This is probably not quite the sort of advice you were expecting, but I hope it will be more helpful: Part of the purpose of Project Euler is to encourage you to think about the definitions of various mathematical concepts, and about the many different algorithms that might correctly satisfy those definitions.
Okay, that second suggestion was a bit too nebulous... What I mean is, for example, the way you've implemented isPrime is really a textbook definition:
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
-- p is prime if its only divisors are 1 and p.
Likewise, your implementation of divisors is straightforward:
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
-- the divisors of n are the numbers between 1 and n that divide evenly into n.
These definitions both read very nicely! Algorithmically, on the other hand, they are too naïve. Let's take a simple example: what are the divisors of the number 10? [1, 2, 5, 10]. On inspection, you probably notice a couple things:
1 and 10 form a pair, and 2 and 5 form a pair; each pair multiplies back to 10.
Aside from 10 itself, there can't be any divisors of 10 that are greater than 5.
You can probably exploit properties like these to optimize your algorithm, right? So, without looking at your code -- just using pencil and paper -- try sketching out a faster algorithm for divisors. If you've understood my hint, divisors n should run in sqrt n time. You'll find more opportunities along these lines as you continue. You might decide to redefine everything differently, in a way that doesn't use your divisors function at all...
Hope this helps give you the right mindset for tackling these problems!
Let's start from the top.
divisors :: Integer -> [Integer]
divisors n = [d | d <- [1..n], n `mod` d == 0]
For now, let's assume that certain things are cheap: incrementing numbers is O(1), doing mod operations is O(1), and comparisons with 0 are O(1). (These are false assumptions, but what the heck.) The divisors function loops over all numbers from 1 to n, and does an O(1) operation on each number, so computing the complete output is O(n). Notice that here when we say O(n), n is the input number, not the size of the input! Since it takes m=log(n) bits to store n, this function takes O(2^m) time in the size of the input to produce a complete answer. I'll use n and m consistently to mean the input number and input size below.
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
In the worst case, p is prime, which forces divisors to produce its whole output. Comparison to a list of statically-known length is O(1), so this is dominated by the call to divisors. O(n), O(2^m)
Your main function does a bunch of things at once, so let's break down subexpressions a bit.
filter ((==0) . (n `mod`))
This loops over a list, and does an O(1) operation on each element, so it is linear in the length of the list that gets forced.
filter isPrime
Loops over a list, doing O(n) work on each element, where here n is the largest number in the list. If the list happens to be n elements long (as it is in your case), this means this is O(n*n) work, or O(2^m*2^m) = O(4^m) work (as above, this analysis is for the case where it produces its entire list).
print . head
Tiny bits of work. Let's call it O(m) for the printing part.
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
Considering all the subexpressions above, the filter isPrime bit is clearly the dominating factor. O(4^m), O(n^2)
Now, there's one final subtlety to consider: throughout the analysis above, I've consistently made the assumption that each function/subexpression was forced to produce its entire output. As we can see in main, this probably isn't true: we call head, which only forces a little bit of the list. However, if the input number itself isn't prime, we know for sure that we must look through at least half the list: there will certainly be no divisors between n/2 and n. So, at best, we cut our work in half -- which has no effect on the asymptotic cost.
Daniel Wagner's answer explains the general strategy of deriving bounds for the runtime complexity rather well. However, as is usually the case for general strategies, it yields too conservative bounds.
So, just for the heck of it, let's investigate this example in some more detail.
main = print (head (filter isPrime (filter ((==0) . (n `mod`)) [n-1,n-2..])))
  where n = 600851475143
(Aside: if n were prime, this would cause a runtime error when checking n `mod` 0 == 0, thus I change the list to [n, n-1 .. 2] so that the algorithm works for all n > 1.)
Let's split up the expression into its parts, so we can see and analyse each part more easily
main = print answer
  where
    n             = 600851475143
    candidates    = [n, n-1 .. 2]
    divisorsOfN   = filter ((== 0) . (n `mod`)) candidates
    primeDivisors = filter isPrime divisorsOfN
    answer        = head primeDivisors
Like Daniel, I work with the assumption that arithmetic operations, comparisons etc. are O(1) - although not true, that's a good enough approximation for all remotely reasonable inputs.
So, of the list candidates, the elements from n down to answer have to be generated, n - answer + 1 elements, for a total cost of O(n - answer + 1). For composite n, we have answer <= n/2, so that's Θ(n).
Generating the list of divisors as far as needed is then Θ(n - answer + 1) too.
For the number d(n) of divisors of n, we can use the coarse estimate d(n) <= 2√n.
All divisors >= answer of n have to be checked for primality, that's at least half of all divisors.
Since the list of divisors is lazily generated, the complexity of
isPrime :: Integer -> Bool
isPrime p = (divisors p) == [1, p]
is O(smallest prime factor of p), because as soon as the first divisor > 1 is found, the equality test is determined. For composite p, the smallest prime factor is <= √p.
We have < 2√n primality checks of complexity at worst O(√n), and one check of complexity Θ(answer), so the combined work of all prime tests carried out is O(n).
Summing up, the total work needed is O(n), since the cost of each step is O(n) at worst.
In fact, the total work done in this algorithm is Θ(n). If n is prime, generating the list of divisors as far as needed is done in O(1), but the prime test is Θ(n). If n is composite, answer <= n/2, and generating the list of divisors as far as needed is Θ(n).
If we don't consider the arithmetic operations to be O(1), we have to multiply with the complexity of an arithmetic operation on numbers the size of n, that is O(log n) bits, which, depending on the algorithms used, usually gives a factor slightly above log n and below (log n)^2.
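To get a concrete feel for the Θ(n) behavior, the pipeline can be transcribed to Python (generators standing in for lazy lists; this is a sketch run on small inputs, since 600851475143 is far out of reach at this complexity):

```python
def largest_prime_factor(n):
    # mirror of the Haskell pipeline: scan n, n-1, ..., 2,
    # keep the divisors of n, keep the primes, take the head
    def divisors(k):
        return [d for d in range(1, k + 1) if k % d == 0]

    def is_prime(p):
        return divisors(p) == [1, p]

    candidates = (d for d in range(n, 1, -1) if n % d == 0)
    return next(d for d in candidates if is_prime(d))

print(largest_prime_factor(28))     # 7
print(largest_prime_factor(13195))  # 29, the Project Euler sample case
```

One caveat: is_prime here builds the full divisor list, while the lazy Haskell version stops at the first divisor greater than 1, so the constants differ even though the overall Θ(n) picture is the same.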

time complexity of the acc function in scheme?

I have been trying to find a tight bound time complexity for this function with respect to just one of the arguments. I thought it was O(p^2) (or rather big theta) but I am not sure anymore.
(define (acc p n)
  (define (iter p n result)
    (if (< p 1)
        result
        (iter (/ p 2) (- n 1) (+ result n))))
  (iter p n 1))
#sarahamedani, why would this be O(p^2)? It looks like O(log p) to me. The runtime should be insensitive to the value of n.
You are summing a series of numbers, counting down from n. The number of times iter will iterate depends on how many times p can be halved before becoming less than 1. In other words, the one-indexed position of the leftmost '1' bit in p is the number of times iter will iterate. That means the number of times iter runs is proportional to log p.
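That claim is easy to check empirically. A small Python sketch (using float division to mirror (/ p 2); Scheme would keep an exact rational, but the step count is the same for these inputs):

```python
import math

def acc_steps(p):
    # count how many times iter would run before p drops below 1
    steps = 0
    while p >= 1:
        p = p / 2  # mirrors (iter (/ p 2) ...)
        steps += 1
    return steps

for p in (1, 8, 1000, 1024):
    print(p, acc_steps(p), math.floor(math.log2(p)) + 1)
# the step count matches floor(log2 p) + 1: logarithmic in p, independent of n
```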
You might try to eyeball it, or go from it more systematically. Assuming we're doing this from scratch, we should try build a recurrence relation from the function definition.
We can assume, for the moment, a very simple machine model where arithmetic operations and variable lookups are constant time.
Let iter-cost be the name of the function that counts how many steps it takes to compute iter, and let it be a function of p, since iter's termination depends only on p. Then you should be able to write expressions for iter-cost(0). Can you do that for iter-cost(1), iter-cost(2), iter-cost(3), and iter-cost(4)?
More generally, given a p greater than zero, can you express iter-cost(p)? It will be in terms of constants and a recurrent call to iter-cost. If you can express it as a recurrence, then you're in a better position to express it in a closed form.
