What is the bound or T(n) of this algorithm?

def unknownsort(A[], x, y):
    if x == y+1:
        if A[x] > A[y]:
            switch A[x] and A[y]
    elif y > x+1:
        z = (y-x+1)/3
        unknownsort(A[], x, y-z)
        unknownsort(A[], x+z, y)
        unknownsort(A[], x, y-z)
Is there a name for this equation? For T(n), what I have is
T(n) = 3T(n) + Theta(n). Is this right? I was planning to use the Master Theorem, but I'm not sure exactly whether this is right. Also, what do you call this process of finding T(n)?
I was thinking unknownsort is called three times, so T(n) = 3T(n), but it has a base case depending on the size of the input, so T(n) = 3T(n) + Theta(n). Now I was wondering whether this equation would be wrong because of "z", since z manipulates the size of my array.
Somehow I've come up with this: T(n) = 3T(n/3) + 1. Is this correct now?

Ok, homework, let's stick to hints then.
In 3T(n), the 3 is correct since there are 3 recursive calls, but the T(n) is not - the n should be (in the form n/c) the size that the next recursive calls work with; currently you're saying the size stays the same.
The Theta(n) is incorrect - apart from the recursive calls, how much work is done in the function? Does the amount of work depend on x and y (you should probably assume that any arithmetic operation always takes a constant amount of time, although this isn't strictly true)?
Did you give us the whole function? If so, I'm not convinced that the algorithm of yours does anything particularly useful (while it looks like it is sorting, I'm not convinced it is, but I could be wrong), and thus it probably doesn't have a name.
The T(n) equation is called a recurrence relation, so the process of finding it would simply be called the process of finding the recurrence relation (I'm not aware of a single term to denote this).
(The updated equation you edited into your question is correct)
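One way to sanity-check a guessed recurrence, if it helps: implement just the call structure and count how many times the function is entered as the input size grows, then compare the growth against your candidate T(n). Below is a minimal Python sketch of that idea (my own reading of the pseudocode above, not part of the original question; the comparisons and swaps are omitted because they don't affect the count).

def count_calls(n):
    # Count how many times the recursion is entered for a range of size n;
    # the array contents are irrelevant to the call count.
    def rec(x, y):
        calls = 1
        if y > x + 1:                  # the recursive branch of the pseudocode
            z = (y - x + 1) // 3
            calls += rec(x, y - z)
            calls += rec(x + z, y)
            calls += rec(x, y - z)
        return calls
    return rec(0, n - 1)

for n in (3, 9, 27, 81, 243):
    print(n, count_calls(n))

Looking at how the count grows when n triples tells you whether the number of subproblems and their sizes in your T(n) match what the code actually does.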


Time-complexity derivation procedure in generic way for algorithms

I have been reading a lot of articles on data structures and algorithms, and everyone only gives the most generic way of calculating time complexity, usually defined as the time taken for execution considering variations in the input. For example, to iterate over an array of n elements, let the code be as below; its Big-O complexity is O(n).
for (int i = 0; i < a.length; i++)
    System.out.println(a[i]);
Agreed, that's the way of calculating time complexity, but what about recursive algorithms, and how does one arrive at logarithmic expressions and the like when calculating time complexity? There is no standard I have come across or am aware of so far for deriving those complexities. If there is one, can someone please throw some light on it or point me to where to start?
Thanks in advance. Please don't mark this as a duplicate, as there could be many who face the same issue of understanding and deriving time complexities after getting tired of the different tutorials on the web.
Unfortunately, there's no general-purpose algorithm you can follow that, given an arbitrary piece of code, will tell you its time complexity. This is due, in part, to the fact that there's no general way to determine whether an arbitrary piece of code will even halt in the first place. If we could take an arbitrary piece of code and work out its time complexity - assuming it even has one - we could potentially use that to determine whether it would terminate, and that's not something we can do.
As an example of why this is hard, consider this piece of code:
int n = /* get user input */
while (n > 1) {
    if (n % 2 == 0) n /= 2;
    else n = 3*n + 1;
}
This code traces out the "hailstone sequence" starting at the user's number n. Surprisingly, no one knows whether this process always terminates, and so no one currently has any upper bound at all on how many steps this loop is going to take to terminate.
In practice, working out how long a piece of code takes to run requires a mix of different techniques. For example, the Master Theorem is helpful in determining how long it takes for many recursive functions to terminate. For other, more complex recursive functions, we can often write out a recurrence relation for the runtime, then use a battery of techniques to solve those recurrences. Sometimes it's helpful to work from the inside out, replacing inner loops with simpler expressions and seeing what comes out. Sometimes, it's important to know useful summations like 1/1 + 1/2 + 1/3 + ... + 1/n = Θ(log n), or that 2^0 + 2^1 + ... + 2^k = Θ(2^k). Sometimes, you work out the runtime by thinking about how the code works and what each step does. And sometimes, it takes years to work out just how fast a piece of code is.
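As a small, concrete illustration of where logarithms come from (a toy example, not part of the original answer): the loop below halves its argument on every iteration, so counting iterations as n grows shows the log2(n) pattern directly.

import math

def halving_steps(n):
    # Number of times n can be halved before reaching 1 - the loop
    # structure behind many O(log n) bounds (e.g. binary search).
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (16, 1024, 1_000_000):
    print(n, halving_steps(n), math.floor(math.log2(n)))

The same instrumentation trick (count the basic operations, compare against a candidate formula) is a handy first step before attempting a formal proof.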

Big-O complexity problem - Linear cumulative power

Background
I am trying to work through some problems I found from Stanford's "Design & Analysis of Algorithms" course from 2013. In particular, problem 3 from problem set 1 here.
In summary it states:
You are stuck on a desert island with a radio that can transmit a distress signal at integer power levels (i.e. 1W, 2W, 3W, etc.).
If you transmit a signal with high enough power, you will receive a response and be rescued.
Unfortunately you do not know how much power, n, is needed.
The problem asks you to design an algorithm that uses Θ(n) W of total power.
Being a 5pt question from problem set 1, I presume this is easier than I am finding it.
My question is
...what is this algorithm?....or how can I change my thinking to find one?
Where I'm stuck
The question states that the strategy "just increase power by 1 watt each time" will result in an Θ(n^2)W total power. Indeed, this is true, as the total power used by any n is n * (n+1) / 2.
However, I can't think of any strategy that isn't:
greater than linear; or
a strategy where I cheat by "not doing anything for a few consecutive n's".
Also, if I ignore the discreteness of the radio for a minute and analyse the problem as a continuous linear function, the total power should be generalisable to a function g(n) of the form g(n) = Kn + B (where K and B are constants). This linear function would represent the integral of the function we need to use for controlling the radio.
Then, if I take the derivative of this function, dg(n)/dn, I'm left with K. That is, if I want linear total power, I should just drive the radio at a constant power n times... but this would only result in a rescue if I happened to guess K correctly the first time.
EDIT
Yes, I had already thought of doubling etc....but the answers here pointed out the error in my thinking. I had been trying to solve the question "design an algorithm that has linear cumulative power consumption"...which I think is impossible. As pointed out by the answers, I should have thought about it as "for a given n, design an algorithm that will consume Kn"...i.e. what the question posed.
I've read the assignment...
It states that the radio is capable of transmitting at integer power levels, but that doesn't mean you should try them one by one and go over all the integers up to n.
Well, I could give you the answer, but I'll just try to lead you to think about it on your own:
Notice that you need to transmit a signal with power equal to or greater than n, so there is no way you are "going too far". Now, in complexity terms, if you go over all the power levels you get the series 1+2+3+...+n, which is Θ(n^2); try to think of a pattern that lets you skip some of those levels and gives a series whose sum is Θ(n).
This task is similar to searching, where the naive algorithm costs far more than necessary but smarter algorithms reduce the cost substantially - you should go and explore how they work :)
If you want an approach for an answer:
You can start with 1W and double the power for the next transmission at each step. That way you will make log(n) attempts, and each attempt costs exactly the power it transmits at. So the cumulative power will be something like 1+2+4+8+16+...+n, which equals 2n-1 and fits the requirement of Θ(n).
Well here is a simple algorithm and complexity analysis:
Initially, try with power = 1W.
If it wasn't received, try with power = 2*previous_power, until it is received.
Complexity:
Basically, we stop when the power p >= n, where n is the desired threshold.
We know that:
p >= n and p/2 < n => n <= p < 2n
In order to reach p W (i.e. the level at which the signal is received), you must previously have tried p/2, before that p/4, ... and initially 1, so let's sum up all the steps:
1 + 2 + 4 + ... + p/2 + p = 2p - 1 ~ Θ(p) = Θ(n)
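For what it's worth, a few lines of Python simulating the doubling strategy described above make the constant factor visible. (The threshold n and the "received" check here are stand-ins for the unknown rescue power; this is just an illustration of the analysis, not part of the original answer.)

def rescue_cost(n):
    # Transmit at 1W, 2W, 4W, ... until the power reaches the unknown
    # threshold n; return the total power spent over all attempts.
    power, total = 1, 0
    while True:
        total += power            # this attempt costs `power` watts
        if power >= n:            # strong enough - a response arrives
            return total
        power *= 2                # otherwise double and try again

for n in (10, 100, 10_000, 1_000_000):
    print(n, rescue_cost(n), round(rescue_cost(n) / n, 2))

The ratio of total power to n stays below a small constant (at most 4), which is exactly the Θ(n) claim.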

Summations - Closed Form - Where to Start

I am struggling to understand the basics of forming a closed-form expression from a summation. I understand the goal at hand, but I do not understand the process to follow in order to accomplish it.
Find a closed form for the sum k + 2k + 3k + ... + k^2. Prove your claim.
My first approach was to turn it into a recurrence relation, which did not work cleanly. After that I would attempt to turn the recurrence relation into a closed form, but I have been unsuccessful in getting there.
Does anyone know of a good approach for solving such problems? Or any simple tutorials that could be provided? The material I find online does not help, and causes further confusion.
Thanks
No one gave the mathematical approach, so I am adding the mathematical approach to this AP (arithmetic progression) problem.
The given series is 1k + 2k + 3k + ... + k*k (i.e., up to k^2).
This means there are altogether k terms in the given series.
Next, each consecutive term is greater than the previous term by a constant common difference, namely k.
So, this is an Arithmetic Progression.
Now, the general summation formula is:
S(n) = (n/2) * (a(1) + a(n))
where S(n) is the summation of the series up to n terms,
n is the number of terms in the series,
a(1) is the first term of the series, and
a(n) is the last (n-th) term of the series.
Here, fitting the terms of the given series into the summation formula, we get:
S(k) = (k/2) * (1k + k*k) = (k/2) * (k + k^2) = (k^2)/2 + (k^3)/2.
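A quick brute-force check of that closed form in Python (purely to validate the algebra; not part of the original answer):

def closed_form(k):
    return k**2 / 2 + k**3 / 2                    # the (k^2)/2 + (k^3)/2 derived above

def direct_sum(k):
    return sum(i * k for i in range(1, k + 1))    # k + 2k + 3k + ... + k*k

for k in range(1, 20):
    assert direct_sum(k) == closed_form(k)
print("closed form matches the direct sum for k = 1..19")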
If you are interested in a general algorithm to compute sums like these (and more complicated ones) I can't recommend the book A=B enough.
The authors have been so kind to make the pdf freely available:
http://www.math.upenn.edu/~wilf/AeqB.html
Enjoy!
Asad has explained a mathematical approach to solving this in the comments.
If you are interested in a programming approach that works for more complicated expressions, then you can use Sympy in Python.
For example:
import sympy

x, k = sympy.symbols('x k')
print(sympy.summation(x*k, (x, 1, k)))
prints:
k*(k/2 + k**2/2)
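If you prefer a more compact result, you can factor that output (continuing from the snippet above); this should give the usual k^2*(k + 1)/2 form, which matches the arithmetic-progression answer:

print(sympy.factor(sympy.summation(x*k, (x, 1, k))))   # k**2*(k + 1)/2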

Algorithms and Recursion Help?

I have the following recurrence: T(n) = 2*T(n/4) + T(n/2) + n, and I need to know its exact solution. I know the Master Theorem won't help me, and the iteration method seems to go wrong...
Please tell me how to do this in general for such recurrences.
Thanks in advance.
Hey all, thanks for replying. What I need is the complexity; I need to understand how to solve such problems.
T(n) = O(n log n) and Ω(n log n)
To prove that, by definition of O, we need to find constants n0 and c such that:
for every n >= n0, T(n) <= cn log n.
We will use induction on n to prove that T(n) <= cn log n for all n >= n0.
Let's skip the base case for now... (we'll return to it later)
Hypothesis: we assume that for every k < n, T(k) <= ck log k.
Thesis: we want to prove that T(n) <= cn log n.
But T(n) = 2T(n/4) + T(n/2) + n.
Using the hypothesis we get:
T(n) <= 2(c(n/4)log(n/4)) + c(n/2)log(n/2) + n = cn log n + n(1 - 3c/2)
So, taking c >= 2/3 proves our thesis, because then T(n) <= cn log n.
Now we need to prove the base case:
We will take n0 = 2, because if we took n0 = 1, log n would be 0 and that wouldn't work with our thesis. So our base cases are n = 2, 3, 4. We need the following propositions to be true:
T(2) <= 2c log 2
T(3) <= 3c log 3
T(4) <= 4c log 4
So, by taking c = max{2/3, T(2)/2, T(3)/(3 log 3), T(4)/8} and n0 = 2, we have found constants c and n0 such that for every natural n >= n0, T(n) <= cn log n.
The proof that T(n) = Ω(n log n) is analogous.
So basically, in these cases where you can't use the Master Theorem, you need to 'guess' the result and prove it by induction.
For more information on these kinds of proofs, refer to 'Introduction to Algorithms'.
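If you want a quick way to convince yourself of the n log n guess before attempting the induction, you can evaluate the recurrence numerically and watch T(n)/(n log n) settle toward a constant. A rough Python sketch (it uses integer division and the arbitrary base case T(n) = n for n < 4, which only affects lower-order terms):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2T(n/4) + T(n/2) + n, with T(n) = n as an arbitrary base case
    if n < 4:
        return n
    return 2 * T(n // 4) + T(n // 2) + n

for n in (2**10, 2**14, 2**18, 2**22):
    print(n, round(T(n) / (n * math.log2(n)), 3))

Seeing the ratio flatten out is not a proof, but it tells you the cn log n guess is worth carrying through the induction above.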
First of all, you need to define some limits on this, otherwise it won't ever end and you will end up with an OverflowException.
Something like: n is an integer and the minimal value is 0.
Could you please add more details to your question along these lines?
This won't help you figure out how to do it necessarily, but apparently Wolfram Alpha can get the right answer. Perhaps you can look for documentation or have Mathematica show you the steps it takes in solving this:
Wolfram Alpha: T(n)=2*T(n/4)+T(n/2)+n
To put crude upper and lower bounds on the answer, you could have recognized that your T(n) is bounded above by 3T(n/2) + n and below by 2T(n/4) + n... so O(n^(log2 3)) ≈ O(n^1.585) and Ω(n), by the Master Theorem.
In general, solving recurrence relations is a hard problem.

How to implement this O(1) algorithm for this question?

I have a variable x and functions f1(x), f2(x), ..., fn(x) (n can be up to 1 million). The value of each function is 1 or 0. So, how do I write an algorithm that can quickly pick out the functions that return 1? Thanks.
Here I present mine. It has O(n) time complexity, which is not efficient enough.
List funHaveTrueValues = new ArrayList();
for (int i = 1; i <= n; ++i) {
    if (fi(x) == true) {
        funHaveTrueValues.add(fi);
    }
}
Could anybody propose an O(1) algorithm? Thanks!
Unless you know a bit more about the functions than you are telling us, there cannot be an O(1) algorithm for that. You have to look at every function's output at least once, making every algorithm for this problem run in Ω(n).
There is Grover's Algorithm which does it in O(sqrt(n)) but it requires a quantum computer.
If you can assume that each f is O(1), then making at most 1.000.000 calls to them still has a constant upper bound. Thus I believe your sketched approach is O(1), if you limit it to 1.000.000 calls.
Edit
As I got a few downvotes on this, I'll try to clarify the reasoning. Given the information at hand, there is no faster way to solve this than to evaluate all f. If the question is really "Is there a faster/more clever way to do this?" then the answer is (as many have answered) no.
If the question however is in the style of "I got this question on a complexity theory test" (or similar) then it might be a "gotcha!". This is the case I aimed for with my answer. In the generalized problem (with n functions, no limits) the time complexity is O(n) granted that each function behaves as an O(1) oracle. By introducing the roof of 1.000.000 functions, time complexity gets a constant upper bound of O(1000000 * 1) = O(1).
If x does change, you'd need to evaluate every function anyway, so it would still be O(n). You might, however, determine for which x the result is 0 or 1 (if it's possible to get something like: x <= y always results in 0, x > y always results in 1) and store those thresholds. You'd then only have to evaluate the functions once and later just check x against the calculated thresholds. Note that this highly depends on what your fn(x) really does.
Thus the key to something O(1) like might be caching, as long as the fn(x) results are cacheable with reasonable effort.
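To make the caching idea concrete, here is a small Python sketch. The list fs of predicate functions is made up purely for illustration, and this only helps if each fn(x) is deterministic and the same x is queried more than once:

from functools import lru_cache

# Stand-in for the n functions f1..fn; each returns 0 or 1 for a given x.
fs = [lambda x, k=k: 1 if x % (k + 2) == 0 else 0 for k in range(1_000)]

@lru_cache(maxsize=None)
def ones_for(x):
    # The first call for a given x pays the unavoidable O(n) evaluation cost;
    # any repeated query with the same x is then answered from the cache.
    return tuple(i for i, f in enumerate(fs) if f(x) == 1)

print(len(ones_for(720)))   # pays O(n) once for x = 720
print(len(ones_for(720)))   # same x again - served from the cache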
You must evaluate each function at least once, and there are n functions. Therefore you cannot do better than O(n) (unless of course you precompute the output for all possible inputs and store it in a table!).
This is not possible; you have to run your functions for all n elements anyway, which means n function evaluations.
