Approximation of growth of a function - algorithm

I am trying to learn about analysis of algorithms, but I find running times a little hard to understand. I have a problem where I am supposed to find an exact approximation of the growth of a function and its big-O notation.
I am wondering if the order of growth of functionD is N, because functionD only executes step() at most n times?
I have the function
void step() {
    count++;
}

void functionD(int n) {
    if (n % 2 == 0) {
        step();
    }
    else
        step();
}

It seems that you've made a typo, since your claim
because functionD is only executing step() at most n times?
sounds strange. Just have a look. Both if branches do the same:
void functionD(int n) {
    if (n % 2 == 0) { // either n is even
        step();       // ... do the step()
    }
    else              // or n is odd
        step();       // ... do exactly the same - step()
}
so we can simplify the code into
void functionD(int n) {
    // whether n is even or odd, do the step
    step();
}
Now, since step does nothing but increment a counter
void step() {
    count++;
}
functionD increments a counter and that's all:
void functionD(int n) {
    count++;
}
Finally, we have for functionD: ignore n, increment a counter. This operation has evident O(1) time complexity (functionD never even uses n).

Since the body of the if statement is the same as the body of the else statement, the step function will always be executed.
Since there is no loop or recursion of any kind, the algorithm runs in constant time, O(1).

Related

How can I optimize Ackermann Function?

I am required to find an optimization for the Ackermann function and explain the problem with the Ackermann function itself. However, I'm not really sure where I should start. I understand that Ackermann's function grows faster than any primitive recursive function. Maybe using a BigInteger to store the result could help? Or maybe memoization?
For example, I was thinking of using something like a bottom-up Fibonacci solution: we know that A(0, 1) = 1 + 1, A(1, 0) = A(0, 1), A(1, 1) = A(0, A(1, 0)), and I can build from there depending on n.
Does that sound reasonable, or is it unachievable? What is the actual problem that makes it grow this fast even for small inputs?
class Ackermann
{
    static int ack(int m, int n)
    {
        if (m == 0) {
            return n + 1;
        } else if ((m > 0) && (n == 0)) {
            return ack(m - 1, 1);
        } else if ((m > 0) && (n > 0)) {
            return ack(m - 1, ack(m, n - 1));
        } else {
            return n + 1;
        }
    }

    public static void main(String args[])
    {
        System.out.println(ack(1, 2));
    }
}
From this post, you can find the following:
A real gain of execution time can only be achieved by not recalculating subresults over and over again. Memoization is an optimization technique where the results of function calls are cached and returned when the same inputs occur again. See for instance Ward (1993). Grossman & Zeitman (1988) published a cunning algorithm which computes A(i, n) within O(i x A(i,n)) time and within O(i) space.
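As a concrete illustration of that memoization idea, here is a minimal sketch (my own code, not from the cited post — the class name MemoAckermann and the string cache key are my choices):

```java
import java.util.HashMap;
import java.util.Map;

class MemoAckermann {
    // Cache of already-computed (m, n) results, keyed as "m,n".
    private static final Map<String, Integer> cache = new HashMap<>();

    static int ack(int m, int n) {
        if (m == 0) return n + 1;          // base case needs no caching
        String key = m + "," + n;
        Integer hit = cache.get(key);
        if (hit != null) return hit;       // reuse a previously computed subresult
        int result = (n == 0)
                ? ack(m - 1, 1)
                : ack(m - 1, ack(m, n - 1));
        cache.put(key, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(ack(1, 2)); // 4
        System.out.println(ack(2, 3)); // 9
        System.out.println(ack(3, 3)); // 61
    }
}
```

The cache turns repeated identical subcalls into O(1) lookups, but it does not change the fundamental growth: for m >= 4 the results overflow int almost immediately, which is where BigInteger would come in.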

How to write recurrence of any recursive function?

I have seen many examples, but I couldn't figure out the right procedure for writing the recurrence of an arbitrary recursive function. For example, if I see these functions, how can I write an accurate recurrence?
void print(int n)
{
    if (n > 0) {
        cout << n << endl;
        print(n - 1);
    }
}
and
int power(int x, int n)
{
    if (n == 0)
        return 1;
    else
        return x * power(x, n - 1);
}
and
int fib(int n)
{
    if (n == 0 || n == 1)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}
and
int numberofDigits(int n) {
    if ((-10 < n) && (n < 10))
        return 1;
    else
        return 1 + numberofDigits(n / 10);
}
Those are all recursive functions. Some of them (Fibonacci, factorial, etc.) have well-known recurrences you can find on the internet.
But how do we write a recurrence ourselves?
There are generally two cases to a recursive function:
The recursive case
The "base" case
For example, in the Fibonacci case the base case is "if n is 0 or 1 then the answer is 1", while the recursive case is the sum of the two previous numbers.
The way to create your own recursive function is to isolate the recursive and base cases, and then implement. So the canonical recursive function would look like this:
type my_recursion(type var)
{
    if (/* base case */)
        return /* base case value */;
    else
        return my_recursion(/* pass something in here */);
}
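Applying that template to the four functions in the question, and writing $c$ for the constant cost of one call's non-recursive work, the recurrences come out as:

```latex
T_{\mathrm{print}}(n)  = T(n-1) + c            \implies O(n) \\
T_{\mathrm{power}}(n)  = T(n-1) + c            \implies O(n) \\
T_{\mathrm{fib}}(n)    = T(n-1) + T(n-2) + c   \implies O(\varphi^{n}),\ \varphi \approx 1.618 \\
T_{\mathrm{digits}}(n) = T(n/10) + c           \implies O(\log n)
```

In each case the recurrence simply records what the recursive call does to n, plus the cost of everything else in the function body.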

Big O notation of recursive functions

I have been practicing big O notation and I seem to understand it, except when it comes to recursive functions. I can deal with the simple ones (like when it is O(n) or O(1)), but with anything else I usually just get lost.
Below are three practice problems. I would really appreciate it if someone explained the process of finding the answer.
public static int method2(int n)
{
    if (n < 1) throw new IllegalArgumentException();
    if (n == 1)
        return 2;
    else
        return method2(n - 1) * (2 * n);
}

public static void method3(int n)
{
    if (n < 1) throw new IllegalArgumentException();
    if (n == 1)
        System.out.print("1");
    else {
        method3(n - 1);
        System.out.print(", " + n);
    }
}

public static void method4(int n)
{
    if (n < 1) throw new IllegalArgumentException();
    if (n == 1) System.out.print(1);
    else if (n == 2) System.out.print("1 1");
    else {
        System.out.print((n + 1) / 2 + " ");
        method4(n - 2);
        System.out.print(" " + (n + 1) / 2);
    }
}
These examples are quite straightforward.
In method2, you start with n and go down by 1 until you reach 1, so you will have n calls of method2, and that is O(n).
method3 is the same as method2, just with a different operation. You call method3, decreasing n by 1, until it reaches 1.
In method4 you decrease n by 2 until it is 1 or 2, so you will do n/2 steps, which is still O(n).
These are not quite the best examples for understanding the running time of recursive functions. You should play around with examples that have more than one recursive call.
All of these can be transformed into for loops with the same complexity, so try to think of them that way if it helps.
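To make that loop correspondence concrete, here is method2 rewritten iteratively (my own sketch; the helper name method2Loop is mine). The recursion multiplies by 2·n at each of its n levels, so the loop does the same:

```java
class IterativeMethod2 {
    // Same computation as the recursive method2, unrolled into a loop.
    static int method2Loop(int n) {
        if (n < 1) throw new IllegalArgumentException();
        int result = 2;                 // corresponds to the base case n == 1
        for (int i = 2; i <= n; i++) {  // one iteration per recursive call
            result *= 2 * i;
        }
        return result;
    }

    // The original recursive version, for comparison.
    static int method2(int n) {
        if (n < 1) throw new IllegalArgumentException();
        if (n == 1) return 2;
        return method2(n - 1) * (2 * n);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 6; n++) {
            System.out.println(n + ": " + method2Loop(n) + " == " + method2(n));
        }
    }
}
```

Both versions do n constant-time multiplications, so both are O(n); the loop just makes the count of steps visible.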

Base and recursive cases in Recursive Functions

I am new to recursion and want to know if the code below can be called a recursive function. Although the function calls itself, I'm not sure whether it satisfies the condition of having base and recursive cases.
The function below sorts flight tickets based on the starting point of the trip (the trip is assumed to have multiple legs and the tickets are shuffled, like ABC -> EFG, XYZ -> MNO, EFG -> KLM, KLM -> XYZ).
public List<Ticket> sortTickets(List<Ticket> shuffled, String source, int n) {
    if (sortedList.size() != shuffled.size()) {
        n = (n < 0) ? shuffled.size() - 1 : n;
        if (shuffled.get(n).source.equals(source)) { // equals(), not ==, to compare String values
            sortedList.add(shuffled.get(n));
            source = shuffled.get(n).destination;
            return sortTickets(shuffled, source, n - 1);
        } else {
            return sortTickets(shuffled, source, n - 1);
        }
    } else {
        return sortedList;
    }
}
Every function that calls itself is recursive (regardless of having base cases).
A recursive function without base cases will always recurse infinitely. (Note that having base cases does not necessarily prevent infinite recursion either — it might be that one of the execution paths of the function never reaches a base case.)
Your function has a base case - when sortedList.size() == shuffled.size().
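To illustrate that parenthetical point — a base case alone does not guarantee termination — here is a minimal hypothetical example (not related to the ticket code):

```java
class UnreachableBase {
    // Base case exists (n == 0), but odd positive n skips right over it:
    // f(3) -> f(1) -> f(-1) -> f(-3) -> ... never terminates.
    static int f(int n) {
        if (n == 0) return 0;  // base case
        return f(n - 2);       // recursive case
    }

    public static void main(String[] args) {
        System.out.println(f(4));  // terminates: 4 -> 2 -> 0, prints 0
        // f(3) would recurse until a StackOverflowError
    }
}
```

So when checking a recursive function, the question is not just "is there a base case?" but "does every execution path eventually reach one?"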

Keeping track of the median of an expanding array

Interview Question:
Edited Below
You are given an array. You make 2 heaps out of it, one a min-heap and the other a max-heap. Now find the median of the array using these 2 provided heaps in O(n log n) time.
Corrected Question
Numbers are randomly generated and stored into an (expanding) array. How would you keep track of the median?
Solution
This problem can be solved using 2 heaps and the median can always be accessed in O(1) time.
Here's how you use both heaps. Note that I'm assuming you don't know the number of elements, and this is why we must pop until we pop something from the min heap that is larger than or equal to what we pop from the max heap. Note that we return the average because in the case of a set like {1, 2, 3, 4} the median is actually 2.5 (the average of the two "middle" values). I'm assuming double as the value type, but this can obviously be anything. Here:
double min = minheap.pop();
double max = maxheap.pop();
while (min < max) {
    min = minheap.pop();
    max = maxheap.pop();
}
return (min + max) / 2;
Since popping is O(log n) and we have to pop O(n / 2) values, this is O(n log n).
A working implementation in Java, using 2 heaps, O(n log n). At any time I keep the two heaps balanced in size (i.e. if we have entered n elements with n % 2 == 1, they differ in size by at most 1). Getting the median is O(1). Adding a new element is O(log n).
public class MedianOfStream {

    private int count;
    private PriorityQueue<Integer> highs, lows;

    public MedianOfStream() {
        highs = new PriorityQueue<Integer>(11, new Comparator<Integer>() {
            @Override
            public int compare(Integer arg0, Integer arg1) {
                return arg0.compareTo(arg1);
            }
        });
        lows = new PriorityQueue<Integer>(11, new Comparator<Integer>() {
            @Override
            public int compare(Integer arg0, Integer arg1) {
                return arg1.compareTo(arg0);
            }
        });
    }

    private int getMedian() {
        if (count == 0)
            return 0;
        if (lows.size() == highs.size()) {
            return (lows.peek() + highs.peek()) / 2;
        } else if (lows.size() < highs.size()) {
            return highs.peek();
        }
        return lows.peek();
    }

    private void swap() {
        int h = highs.poll();
        int l = lows.poll();
        highs.add(l);
        lows.add(h);
    }

    public int updateMedian(int n) {
        count++;
        if (count == 1)
            lows.add(n);
        else if (count == 2) {
            highs.add(n);
            if (highs.peek() < lows.peek()) {
                swap(); // O(log n)
            }
        } else {
            if (n > highs.peek()) {
                lows.add(highs.poll()); // O(log n)
                highs.add(n); // O(log n)
            } else {
                highs.add(lows.poll()); // O(log n)
                lows.add(n); // O(log n)
            }
            if (highs.peek() < lows.peek()) {
                swap(); // O(log n)
            }
        }
        // if we added an even number of items,
        // the heaps must be exactly the same size;
        // otherwise we tolerate a 1-off difference
        if (Math.abs(lows.size() - highs.size()) > (count % 2)) {
            if (lows.size() < highs.size()) {
                lows.add(highs.poll()); // O(log n)
            } else {
                highs.add(lows.poll()); // O(log n)
            }
        }
        return getMedian(); // O(1)
    }
}
Popping from a heap is an O(log N) operation, so you can achieve O(N log N) by popping half the elements from one of the heaps and taking the last popped value (you'd have to handle edge cases). This doesn't take advantage of the other heap though.
You can achieve O(N) using the selection algorithm, but the constant factor is very high. The former suggestion is probably better if you already have a heap.
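For completeness, the selection-algorithm route mentioned above can be sketched with randomized quickselect, which gives O(N) expected time (the deterministic median-of-medians variant is what carries the large constant factor). This is a generic illustration, not code from the original answers:

```java
import java.util.Random;

class Quickselect {
    private static final Random rnd = new Random();

    // Returns the k-th smallest element (0-based) of a, in O(n) expected time.
    // Note: reorders the array in place.
    static double select(double[] a, int k) {
        int lo = 0, hi = a.length - 1;
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p == k) return a[p];
            if (p < k) lo = p + 1;
            else hi = p - 1;
        }
        return a[lo];
    }

    private static int partition(double[] a, int lo, int hi) {
        swap(a, lo + rnd.nextInt(hi - lo + 1), hi); // random pivot keeps the expectation O(n)
        double pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) swap(a, i++, j);
        }
        swap(a, i, hi);
        return i;
    }

    private static void swap(double[] a, int i, int j) {
        double t = a[i]; a[i] = a[j]; a[j] = t;
    }

    // Median of an array: one select for odd length, two for even length.
    static double median(double[] a) {
        double[] copy = a.clone();
        int n = copy.length;
        if (n % 2 == 1) return select(copy, n / 2);
        double upper = select(copy.clone(), n / 2);
        double lower = select(copy, n / 2 - 1);
        return (lower + upper) / 2;
    }

    public static void main(String[] args) {
        System.out.println(median(new double[]{3, 1, 4, 1, 5})); // 3.0
        System.out.println(median(new double[]{1, 2, 3, 4}));    // 2.5
    }
}
```

Unlike the two-heap approach, this computes the median once over a fixed array; it does not support the streaming "keep track as numbers arrive" use case, which is exactly why the heaps are preferable there.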
JavaScript solution using two heaps:
function addNewNumber(minHeap, maxHeap, randomNumber) {
    if (maxHeap.size() === minHeap.size()) {
        if (minHeap.peek() && randomNumber > minHeap.peek()) {
            maxHeap.insert(minHeap.remove());
            minHeap.insert(randomNumber);
        } else {
            maxHeap.insert(randomNumber);
        }
    } else {
        if (randomNumber < maxHeap.peek()) {
            minHeap.insert(maxHeap.remove());
            maxHeap.insert(randomNumber);
        } else {
            minHeap.insert(randomNumber);
        }
    }
}

function getMedian(minHeap, maxHeap) {
    if (!maxHeap.size()) {
        return 0;
    }
    if (minHeap.size() === maxHeap.size()) {
        return (minHeap.peek() + maxHeap.peek()) / 2;
    } else {
        return maxHeap.peek();
    }
}
