Way to check complexity of algorithm

I recently had a friend challenge me to solve a question that he saw in a job interview:
N elements are split into k arrays. Find an algorithm that returns whether any of the elements are identical, in O(k log N) time.
Please do not provide an answer, I'd really like to solve it myself.
The question I have is: Is there a website similar to regexpal to test the complexity of my algorithm? If not, does anyone have any suggestions on how to actually find the complexity?
I have a general idea, but it's been a while since I last tried to do a problem like this.
Edit: New question to add to this: how does k log N compare to N log N? Obviously when k is 1 it's just log N, which is more efficient than O(N), but if k >= N it's even worse than N log N, correct?

You usually find the complexity of an algorithm by reasoning and mathematical proof. Running the algorithm on some data won't give you the true complexity (Big-O or Theta); it'll just give you an estimate for the data that was provided.
Quicksort, for instance, is Big-O n log n on average but Big-O n^2 in the worst case; the worst case happens if you keep selecting a bad pivot.
You need to figure out the worst cases yourself and do the analysis, I'm afraid. If you need help (or a reminder) doing the analysis, a good place to look is YouTube or iTunes U, in the computer science category of major US universities (for instance the MIT Introduction to Algorithms course).
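If you still want a rough empirical sanity check to go with the pen-and-paper analysis, one common trick is to time the algorithm at doubling input sizes and look at the ratio of successive timings. This is only a sketch, assuming you substitute your own routine for the placeholder my_algorithm, and it only estimates behaviour on the inputs you feed it, never the true worst case:

    import random
    import time

    def my_algorithm(data):
        # placeholder: swap in the algorithm you are analysing
        data.sort()

    def estimate_growth():
        # Time the algorithm at doubling sizes; a timing ratio near 2 hints at
        # roughly linear growth, near 4 at roughly quadratic, and so on (very rough).
        previous = None
        for p in range(10, 18):
            n = 2 ** p
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            my_algorithm(data)
            elapsed = time.perf_counter() - start
            ratio = elapsed / previous if previous else float("nan")
            print(f"n = {n:>7}  time = {elapsed:.5f}s  ratio = {ratio:.2f}")
            previous = elapsed

    estimate_growth()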

Edit: New question to add to this: how does k log N compare to N log N? Obviously when k is 1 it's just log N, which is more efficient than O(N), but if k >= N it's even worse than N log N, correct?
To answer your edit: you could use the k log n algorithm when k < n and the n log n algorithm when n < k.
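To make that concrete, a small Python sketch of the dispatch might look like the following. The O(n log n) baseline (flatten, sort, scan neighbours) is filled in; the O(k log n) routine is deliberately left as a stub so as not to spoil the puzzle, and all of the names are illustrative rather than anything from the original question:

    def n_log_n_duplicates(arrays):
        # Baseline: flatten everything, sort, and compare neighbours -> O(n log n).
        merged = sorted(x for a in arrays for x in a)
        return any(merged[i] == merged[i + 1] for i in range(len(merged) - 1))

    def k_log_n_duplicates(arrays):
        # Stub for the O(k log n) approach the interview question asks for.
        raise NotImplementedError

    def has_duplicate(arrays):
        # Dispatch on the relationship between k and n: use whichever bound is smaller.
        k = len(arrays)
        n = sum(len(a) for a in arrays)
        return k_log_n_duplicates(arrays) if k < n else n_log_n_duplicates(arrays)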

Related

How to write an algorithm given a runtime

I'm currently taking an algorithm analysis course. One of the questions on a quiz was to write an algorithm with the runtime T(n) = 4T(3n/4) + n^2, where the algorithm does not have to do anything meaningful.
I couldn't find any similar examples, so I'm unsure how to proceed.
To simplify how to think about this kind of problem, just use an array of n elements to represent a problem of size n.
Then, the running time T(n) represents the algorithm run on the array.
The 4T(3n/4) term represents running the algorithm four times on three quarters of the array.
The n^2 term represents some quadratic operation on the array (for example, a bubble sort).
    def silly_algo(arr, n):
        if n == 0:
            return
        for _ in range(4):                     # four recursive calls: the 4T(3n/4) term
            silly_algo(arr, 3 * n // 4)
        for i in range(n):                     # bubble sort of the first n elements:
            for j in range(n - 1 - i):         # the quadratic n^2 term
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]

Compute size N that can be solved in certain amount of time

I am working on an exercise (note: not a homework question) where the number of steps a computer can execute is given, and one is asked to compute N in relation to certain time intervals for several functions.
I have no problem doing this for functions such as f(n) = n, n^2, n^3 and the like.
But when it comes to f(n) = lg n, sqrt(n), n log n, 2^n, and n!, I run into problems.
It is clear to me that I have to set up an equation of the form func(n) = interval and then solve for n.
But how do I do this for the functions above?
Can somebody please give me an example, or name the inverse functions so that I can look them up on Wikipedia or elsewhere.
Your question isn't so much about algorithms, or complexity, but about inversions of math formulas.
It's easy to solve n^k = N for n in closed form. Unfortunately, for most other functions a closed-form inverse is either not known, or known not to exist. In particular, for n log(n) the solution involves the Lambert W function, which doesn't help you much.
In most cases, you will have to solve this kind of stuff numerically.
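For example, to find the largest n with f(n) within a given step budget, you can binary search over n as long as f is increasing. This is just a sketch; the 10^9 step budget at the end is a made-up example value:

    import math

    def largest_n(cost, budget):
        # Largest integer n with cost(n) <= budget, assuming cost is increasing
        # and cost(1) <= budget. Works for lg n, sqrt(n), n log n, 2^n, n!, ...
        lo, hi = 1, 1
        while cost(hi) <= budget:      # grow an upper bound exponentially
            hi *= 2
        while lo < hi:                 # binary search for the largest feasible n
            mid = (lo + hi + 1) // 2
            if cost(mid) <= budget:
                lo = mid
            else:
                hi = mid - 1
        return lo

    print(largest_n(lambda n: n * math.log2(n), 1e9))   # f(n) = n lg n
    print(largest_n(math.factorial, 1e9))               # f(n) = n!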

How much time (Big-O) will an algorithm take which can rule out one third of possible numbers from 1 to N in each step?

I am abstracting the problem out (it has nothing to do with prime numbers).
How much time (in terms of Big-O) will it take to determine if n is the solution?
Suppose I was able to design an algorithm which can rule out one third of the numbers from the possible answers {1, 2, ..., n} in the first step, and then successively rules out one third of the "remaining" numbers until all numbers are tested.
I have thought a lot about it, but I can't figure out whether it will be O(n log₃(n)) or O(log₃(n)).
It depends on the algorithm, and on the value of N. You should be able to figure out and program an algorithm that takes O(sqrt(N)) rather easily, and it's not difficult to go down to O(sqrt(N) / log N). Anything better requires some rather deep mathematics, but there are algorithms that are a lot faster for large N.
Now when you say O(N log N), please don't guess these things. O(N log N) is ridiculous. The most naive algorithm, which uses nothing but the definition of a prime number, is O(N).
Theoretically, the best effort is O(log^3 N), but the corresponding algorithm is not something you could figure out easily. See http://en.wikipedia.org/wiki/AKS_primality_test
There are more practical probabilistic algorithms though.
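For reference, the O(sqrt(N)) algorithm mentioned above is essentially trial division by candidates up to sqrt(N); a minimal sketch (not one of the faster methods just mentioned):

    def is_prime(n):
        # Trial division: test divisors up to sqrt(n), so about sqrt(n) divisions.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True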
BTW, about 'ruling out one third' and so on: it does not matter whether it is log base 3, log base 10, or any other base. O(log N) effectively means 'a logarithm of any base', because logarithms of different bases differ only by a constant factor. So the complexity of such an algorithm will be log N times the complexity of the reduction step. The problem is that a single step will hardly take constant time, and if it doesn't, you will not achieve O(log N).
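As a quick sanity check on the base-of-the-logarithm point, discarding a fixed fraction of the candidates each round takes Θ(log N) rounds whatever the fraction; a small sketch:

    import math

    def rounds_to_one(n, keep=2/3):
        # Count 'rule out one third' rounds until a single candidate remains.
        rounds = 0
        while n > 1:
            n = math.floor(n * keep)
            rounds += 1
        return rounds

    # Roughly log base 3/2 of n, which is only a constant factor away from log2(n).
    print(rounds_to_one(10**6), math.log(10**6, 3/2), math.log2(10**6))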

Slowest Computational Complexity (Big-O)

Out of these algorithms, I know Alg1 is the fastest, since it is n squared. Next would be Alg4, since it is n cubed, and then Alg2 is probably the slowest, since it is 2^n (which is supposed to have very poor performance).
However Alg3 and Alg5 are something I have yet to come across in my reading in terms of speed. How do these two algorithms rank up to the other 3 in terms of which is faster and slower? Thanks for any help.
Edit: Now that I think about it, is Alg3 referring to O(n log n)? If the ln inside of it means 'log', then that would make it the fastest.
The ascending order would be: n·log(n) < n^2 < n^3 < 2^n < n! for n ≥ 10.
Also have a look at the Big-O Algorithm Complexity Cheat Sheet.
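You can check that ordering numerically at, say, n = 10, which is roughly where 2^n overtakes n^3:

    import math

    n = 10
    values = {
        "n log n": n * math.log2(n),   # ~33
        "n^2": n ** 2,                 # 100
        "n^3": n ** 3,                 # 1000
        "2^n": 2 ** n,                 # 1024
        "n!": math.factorial(n),       # 3628800
    }
    for name, value in sorted(values.items(), key=lambda item: item[1]):
        print(f"{name:8} {value:>12,.0f}")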

All Possible Sorting algorithm analysis

The “APS” (All Possible Sorting) algorithm sorts an array A of size n by generating all possible sequences of its n elements and, for each sequence, checking whether the elements are in sorted (ascending) order.
a) What is the worst-case time complexity of APS? Explain your logic / show your work.
My answer:
Worst case is O(n!) because it generates all possible sequences and then checks if sorted.
Ideally, I would like someone to tell me whether I'm right or wrong, and how to get to the answer. This Big-O stuff confuses me.
APS generates all possible permutations of the n elements, which gives you n! different possible orderings, so you are correct.
Proving that it is O(n!) just requires you to show that the runtime is asymptotically upper-bounded by n!, which basically means you have to prove that:
f(n) = O(n!) if, for some constants m and c, |f(n)| ≤ m·n! for all n > c.
Proving this is easier if you have the actual algorithm written out, but if you walk through your logic it should do the trick.
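For reference, here is roughly what an APS-style algorithm could look like in Python (a sketch, not necessarily the exact formulation from the assignment). It walks the n! permutations and does an O(n) sortedness check on each, which is where the factorial factor comes from:

    from itertools import permutations

    def aps_sort(arr):
        # All Possible Sorting: try every permutation until one is in ascending order.
        # Worst case it examines all n! permutations, checking each one in O(n).
        for candidate in permutations(arr):
            if all(candidate[i] <= candidate[i + 1] for i in range(len(candidate) - 1)):
                return list(candidate)

    print(aps_sort([3, 1, 2]))   # [1, 2, 3]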
