Examining an algorithm on a sorted array - algorithm

I have a sorted array of length n and I am using linear search to compare my value to every element in the array; then I perform a linear search on an array of size n/2, then on one of size n/4, n/8, etc., until I do a linear search on an array of length 1. In this case n is a power of 2. What is the number of comparisons performed?
Not sure exactly if this is correct, but I thought that the number of comparisons would be
T(2n) = (n/2) + (n/4) + ... + 1.
My reasoning was that you have to go through every element and then keep adding the smaller searches, but I am still not sure. If someone could walk me through this I would appreciate it.

The recurrence you have set up in your question is a bit off, since if n is the length of your input, then you wouldn't denote the length of the input by 2n. Instead, you'd write it as n = 2^k for some choice of k. Once you have this, then you can do the math like this:
The size of half the array is 2^k / 2 = 2^(k-1)
The size of one quarter of the array is 2^k / 4 = 2^(k-2)
...
If you sum up all of these values, you get the following:
2^k + 2^(k-1) + 2^(k-2) + ... + 2 + 1 = 2^(k+1) - 1
You can prove this in several ways: you can use induction, or use the formula for the sum of a geometric series, etc. This arises frequently in computer science, so it's worth committing to memory.
This means that if n = 2^k, your algorithm runs in time
2^(k+1) - 1 = 2(2^k) - 1 = 2n - 1
So the runtime is 2n - 1, which is Θ(n).
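If you want to check this empirically, here is a minimal Python sketch (my own, not from the answer) that charges one comparison per element scanned and verifies the 2n - 1 total:

def comparisons_for_halving_search(n):
    """Count comparisons when linearly scanning arrays of size n, n/2, n/4, ..., 1."""
    total = 0
    size = n
    while size >= 1:
        total += size   # a linear search over `size` elements costs `size` comparisons
        size //= 2
    return total

for k in range(1, 8):
    n = 2 ** k
    assert comparisons_for_halving_search(n) == 2 * n - 1
    print(n, comparisons_for_halving_search(n))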
Hope this helps!

Related

What is the time complexity of this BFS algorithm?

I looked at LeetCode question 270. Perfect Squares:
Given an integer n, return the least number of perfect square numbers that sum to n.
A perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. For example, 1, 4, 9, and 16 are perfect squares while 3 and 11 are not.
Example 1:
Input: n = 12
Output: 3
Explanation: 12 = 4 + 4 + 4.
I solved it using the following algorithm:
def numSquares(n):
    # All candidate perfect squares that can be subtracted, in increasing order.
    squares = [i**2 for i in range(1, int(n**0.5)+1)]
    step = 1
    queue = {n}
    while queue:                        # one BFS level per iteration of the while loop
        tempQueue = set()
        for node in queue:
            for square in squares:
                if node-square == 0:
                    return step         # reached 0: `step` squares are enough
                if node < square:
                    break               # remaining squares are all too large
                tempQueue.add(node-square)
        queue = tempQueue
        step += 1
It basically tries to go from the goal number down to 0 by subtracting each possible square, i.e. [1, 4, 9, ..., ⌊√n⌋²], and then does the same work for each of the numbers obtained.
Question
What is the time complexity of this algorithm? The branching in every level is sqrt(n) times, but some branches are destined to end early... which makes me wonder how to derive the time complexity.
If you think about what you're doing, you can imagine that you're doing a breadth-first search over a graph with n + 1 nodes (all the natural numbers between 0 and n, inclusive) and some number of edges m, which we'll determine later on. Your graph is essentially represented as an adjacency list, since at each point you iterate over all the outgoing edges (squares less than or equal to your number) and stop as soon as you consider a square that's too large. As a result, the runtime will be O(n + m), and all we have to do now is work out what m is.
(There's another cost here in computing all the perfect squares up to and including n, but that takes time O(n^(1/2)), which is dominated by the O(n) term.)
If you think about it, the number of outgoing edges from each number k will be given by the number of perfect squares less than or equal to k. That value is equal to ⌊√k⌋ (check this for a few examples - it works!). This means that the total number of edges is upper-bounded by
√0 + √1 + √2 + ... + √n
We can show that this sum is Θ(n^(3/2)). First, we'll upper-bound this sum by O(n^(3/2)), which we can do by noting that
√0 + √1 + √2 + ... + √n
≤ √n + √n + √n + ... + √n    (n+1 times)
= (n + 1)√n
= O(n^(3/2)).
To lower-bound this at Ω(n^(3/2)), notice that
√0 + √1 + √2 + ... + √n
≥ √(n/2) + √(n/2 + 1) + ... + √n    (drop the first half of the terms)
≥ √(n/2) + √(n/2) + ... + √(n/2)
= (n / 2)√(n / 2)
= Ω(n^(3/2)).
So overall, the number of edges is Θ(n^(3/2)), so using a regular analysis of breadth-first search we can see that the runtime will be O(n^(3/2)).
This bound is likely not tight, because this assumes that you visit every single node and every single edge, which isn't going to happen. However, I'm not sure how to tighten things much beyond this.
As a note - this would be a great place to use A* search instead of breadth-first search, since you can fairly easily come up with heuristics to underestimate the remaining total distance (say, take the number and divide it by the largest perfect square less than it). That would cause the search to focus on extremely promising paths that jump rapidly toward 0 before less-good paths, like, say, always taking steps of size one.
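If you want to experiment with that idea, here is a rough A*-style sketch (my own illustration, not code from the question or a known library routine) using Python's heapq, with the heuristic described above: the current number divided by the largest perfect square not exceeding it.

import heapq
from math import isqrt

def num_squares_astar(n):
    """A*-style search from n down to 0, where each move subtracts a perfect square.
    This is a sketch of the heuristic idea above, not the original poster's code."""

    def heuristic(m):
        # Optimistic estimate of the remaining steps: m divided by the largest
        # perfect square <= m, rounded up. It never overestimates, because each
        # later step removes at most that square.
        if m == 0:
            return 0
        largest = isqrt(m) ** 2
        return -(-m // largest)                  # ceiling division

    best = {n: 0}                                # best known number of steps to each value
    frontier = [(heuristic(n), 0, n)]            # (estimated total, steps so far, value)
    while frontier:
        _, steps, value = heapq.heappop(frontier)
        if value == 0:
            return steps
        if steps > best[value]:
            continue                             # stale queue entry
        for root in range(isqrt(value), 0, -1):
            nxt = value - root * root
            if steps + 1 < best.get(nxt, float("inf")):
                best[nxt] = steps + 1
                heapq.heappush(frontier, (steps + 1 + heuristic(nxt), steps + 1, nxt))

print(num_squares_astar(12))   # 3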
Hope this helps!
Some observations:
The number of perfect squares up to n is ⌊√n⌋ (√n rounded down to an integer)
After the first iteration of the while loop, tempQueue will have √n entries
tempQueue can never have more than n entries, since all these values are positive, less than n and unique.
Every natural number can be written as the sum of four integer squares. So that means your BFS algorithm's while loop will iterate at the most 4 times. If the return statement did not get executed during any of the first 3 iterations, it is guaranteed it will in the 4th.
Every statement (except for the initialisation of squares) runs in constant time, even the call to .add().
The initialisation of squares has a list comprehension loop that has √n iterations, and range runs in constant time, so that initialisation has a time complexity of O(√n).
Now we can set a ceiling to the number of times the if node-square == 0 statement is executed (or any other statement in the innermost loop's body):
1⋅√n + √n⋅√n + n⋅√n + n⋅√n
Each of the 4 terms corresponds to an iteration of the while loop. The left factor of each product corresponds to the maximum size of queue in that particular iteration, and the factor at the right corresponds to the size of squares (always the same). This simplifies to:
√n + n + 2n^(3/2)
In terms of time complexity this is:
O(n^(3/2))
This is the worst case time complexity. When the while loop only has to iterate twice, it is O(n), and when only once (when n is a square), it is O(√n).
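A quick way to convince yourself of the four-iteration bound empirically (my own test harness; it assumes the numSquares function from the question is defined):

# Assumes numSquares from the question above is defined in the same module.
for n in range(1, 2000):
    steps = numSquares(n)
    assert 1 <= steps <= 4        # Lagrange's four-square theorem: at most 4 BFS levels
    if steps == 4:
        print(n)                  # 7, 15, 23, 28, ... (numbers of the form 4^a(8b+7))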

Resizing dynamic array with the size of Fibonacci numbers

We have a dynamic array whose sizes are Fibonacci numbers. Assume that F(k) is the array's current size (F(k) is the k-th Fibonacci number). We have two rules here:
1) If, after inserting an element into the array, the number of array elements is F(k-1), we create a new array of size F(k+1) and copy the previous elements to the new array.
2) If, after deleting an element from the array, the number of array elements is F(k-3), we create a new array of size F(k-1) and copy the previous elements to the new array.
At first, the array is empty and has size 2. We want to show that for every sequence of actions (insert or delete), every action has amortized time complexity of O(1).
For solving this problem, I realized that there are at least F(k-1) - F(k-2) actions taken between two array-growing actions, and copying the elements takes O(F(k-1)) time. Also, there are at least F(k-2) + F(k-3) actions taken between two array-shrinking actions, and copying the elements takes O(F(k-3)) time.
Can you help me solve this problem?
The amortized analysis sums the cost of every copy, which is T(n) = F(1) + F(2) + ... + F(k) if we suppose n = F(k). We know that F(1) + F(2) + ... + F(k) = F(k+2) - 1.
As T(n) = F(k+2) - 1 = F(k+1) + F(k) - 1 = 2F(k) + F(k-1) - 1 = 2n + F(k-1) - 1 < 3n - 1, the amortized cost is T(n)/n < 3, which means each operation is Θ(1) in the amortized sense.
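A quick numeric check of that identity and the bound (my own sketch, using 1-indexed Fibonacci numbers with F(1) = F(2) = 1):

def fib(k):
    """Return F(k) with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

for k in range(3, 20):
    n = fib(k)
    total_copy_cost = sum(fib(i) for i in range(1, k + 1))   # F(1) + ... + F(k)
    assert total_copy_cost == fib(k + 2) - 1                  # identity used above
    assert total_copy_cost / n < 3                            # amortized cost per action < 3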

Why is iterative k-way merge O(nk^2)?

k-way merge is the algorithm that takes as input k sorted arrays, each of size n. It outputs a single sorted array of all the elements.
It does so by using the "merge" routine central to the merge sort algorithm to merge array 1 to array 2, and then array 3 to this merged array, and so on until all k arrays have merged.
I had thought that this algorithm is O(kn) because the algorithm traverses each of the k arrays (each of length n) once. Why is it O(nk^2)?
Because it doesn't traverse each of the k arrays once. The first array is traversed k-1 times: the first time as merge(array-1, array-2), the second time as merge(merge(array-1, array-2), array-3), and so on.
The result is k-1 merges with an average size of n*(k+2)/2, giving a complexity of O(n*(k^2+k-2)/2), which is O(nk^2).
The mistake you made was forgetting that the merges are done serially rather than in parallel, so the intermediate arrays are not all of size n.
Actually, in the worst-case scenario there will be n comparisons for the first array, 2n for the second, 3n for the third, and so on up to (k - 1)n.
So now the complexity becomes simply
n + 2n + 3n + 4n + ... + (k - 1)n
= n(1 + 2 + 3 + 4 + ... + (k - 1))
= n((k - 1)*k) / 2
= n(k^2 - k) / 2
= O(nk^2)
:-)
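To see that quadratic growth concretely, here is a small sketch (mine, not from the answers) that performs the k-1 sequential merges and counts the elements touched during the merges:

def merge(a, b):
    """Standard two-way merge; returns the merged list and the elements touched."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out, len(a) + len(b)

def sequential_k_way(arrays):
    """Merge the arrays one after another, as in the question, counting total work."""
    result, work = arrays[0], 0
    for arr in arrays[1:]:
        result, cost = merge(result, arr)
        work += cost
    return result, work

n = 100
for k in (2, 4, 8, 16, 32):
    arrays = [list(range(n)) for _ in range(k)]
    _, work = sequential_k_way(arrays)
    # Work is n*(2 + 3 + ... + k) = n*(k^2 + k - 2)/2, i.e. Theta(n*k^2)
    print(k, work, n * (k * k + k - 2) // 2)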
How about this:
Step 1:
Merge arrays (1 and 2), arrays (3 and 4), and so on. (k/2 array merges of 2n, total work kn).
Step 2:
Merge arrays (1,2 and 3,4), arrays (5,6 and 7,8), and so on (k/4 merges of size 4n, total work kn).
Step 3:
Repeat...
There will be log(k) such "Steps", each with kn work. Hence the total work done = O(k·n·log(k)).
Even otherwise, if we were to just sort all the elements of the arrays, we could still merge everything in O(k·n·log(k·n)) time.
k-way merge is the algorithm that takes as input k sorted arrays, each of size n. It outputs a single sorted array of all the elements.
I had thought that this algorithm is O(kn)
We can disprove that by contradiction. Define a sorting algorithm for m items that uses your algorithm with k = m and n = 1. By the hypothesis, the sorting algorithm succeeds in O(m) time. Contradiction: it's known that any comparison-based sorting algorithm has a worst case of at least Ω(m log m).
You don't have to compare items one by one each time.
You should simply maintain the current k candidates (one head per array) in a sorted set or min-heap.
You remove the smallest and replace it by the next element from its array. This should be O(n·k·log(k)) overall: n·k output elements, each costing a log(k) operation.
Relevant article. Disclaimer: I participated in writing it
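For reference, a minimal heap-based sketch of that idea (my own illustration, not from the linked article):

import heapq

def k_way_merge(arrays):
    """Merge k sorted arrays using a min-heap of their current heads."""
    heap = [(arr[0], idx, 0) for idx, arr in enumerate(arrays) if arr]
    heapq.heapify(heap)                          # O(k)
    out = []
    while heap:
        value, idx, pos = heapq.heappop(heap)    # O(log k) per output element
        out.append(value)
        if pos + 1 < len(arrays[idx]):
            heapq.heappush(heap, (arrays[idx][pos + 1], idx, pos + 1))   # O(log k)
    return out

print(k_way_merge([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]

(The standard library's heapq.merge does essentially the same thing.)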
1) You have k sorted arrays, each of size n. Therefore total number of elements = k * n
2) Take the first element of all k arrays and create a sequence. Then find the minimum of this sequence. This min value is stored in the output array. Number of comparisons to find the minimum of k elements is k - 1.
3) Therefore the total number of comparisons
= (comparisons/element) * number of elements
= (k - 1) * k * n
= k^2 * n // approximately
A common implementation keeps an array of indexes, one for each of the k sorted arrays: {i_1, i_2, ..., i_k}. On each iteration the algorithm finds the minimum next element among all k arrays and stores it in the output array. Since you are doing kn iterations and scanning k arrays per iteration, the total complexity is O(k^2 * n).
Here's some pseudo-code:
Input:  A[j], j = 1..k : k sorted arrays, each of length n
Output: B : sorted array of length kn

// Initialize the array of indexes
I[j] = 0 for j = 1..k
q = 0
while (q < kn):
    // Get the array whose next unprocessed element is minimal
    // (ignoring arrays for which I[j] >= n)
    p = argmin({A[j][I[j]] : j = 1..k})
    B[q] = A[p][I[p]]
    I[p] = I[p] + 1
    q = q + 1
You have k arrays, each with n elements. This means k*n elements in total.
Consider it a k*n matrix. To add the first element to the merged/final array, you need to compare the heads of all k arrays. This means that for each element in the final array you need to do k comparisons.
So, combining the two points above, for kn elements the total time taken is O(k*k*n) = O(k^2*n).
For those who want to know the details or need some help with this, I'm going to expand on Recurse's answer and follow-up comment.
We only need k-1 merges because the last array is not merged with anything
The formula for summing the terms of an arithmetic sequence is helpful: S_n = n(a_1 + a_n)/2
Stepping through the first 4 merges of k arrays with n elements
+-------+-------------------+-------------+
| Merge | Size of new array | Note        |
+-------+-------------------+-------------+
|   1   | n+n = 2n          | first merge |
|   2   | 2n+n = 3n         |             |
|   3   | 3n+n = 4n         |             |
|   4   | 4n+n = 5n         |             |
|  ...  | ...               |             |
|  k-1  | (k-1)n+n = kn     | last merge  |
+-------+-------------------+-------------+
To find the average size, we need to sum all the sizes and divide by the number of merges (k-1). Using the formula for summing the first n terms, S_n = n(a_1 + a_n)/2, we only need the first and last terms:
a_1 = 2n (first term)
a_n = kn (last term)
We want to sum all the terms, so the formula's n (the number of terms) is k-1 here. Plugging in the numbers, we get a formula for the sum of all terms:
S = ((k-1)(2n + kn))/2
However, to find the average size we must divide by the number of terms (k-1). This cancels out the k-1 in the numerator and we're left with an average size of
(2n + kn)/2
Now that we have the average size, we can multiply it by the number of merges, which is k-1. To make the multiplication easier, ignore the /2 and just multiply the numerators:
(k-1)(2n+kn)
= (k^2)n + kn - 2n
At this point you could reintroduce the /2, but there shouldn't be any need since it's clear the dominant term is (k^2)*n
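A quick numeric check of that arithmetic (my own snippet):

n = 10
for k in (3, 5, 8, 13):
    sizes = [j * n for j in range(2, k + 1)]                  # 2n, 3n, ..., kn
    assert sum(sizes) == (k - 1) * (2 * n + k * n) // 2        # sum formula above
    assert 2 * sum(sizes) == (k ** 2) * n + k * n - 2 * n      # twice the sum, as in the text
    print(k, sum(sizes) / (k - 1), (2 * n + k * n) / 2)        # average merge size = (2n + kn)/2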

Selection i'th smallest number algorithm

I'm reading Introduction to Algorithms book, second edition, the chapter about Medians and Order statistics. And I have a few questions about randomized and non-randomized selection algorithms.
The problem:
Given an unordered array of integers, find i'th smallest element in the array
a. The Randomized_Select algorithm is simple, but I cannot understand the math that explains its running time. Is it possible to explain that without deep math, in a more intuitive way? As for me, I'd think that it should run in O(n log n), and in the worst case it should be O(n^2), just like quicksort. On average, randomizedPartition returns a pivot near the middle of the array, the array is divided in two on each call, and the next recursive call processes only half of the array. The randomizedPartition costs (p - r + 1) <= n, so we have O(n log n). In the worst case it would choose the maximum element of the array every time, and divide the array into two parts of sizes (n-1) and 0 at each step. That's O(n^2).
The next one (the Select algorithm) is even more incomprehensible to me than the previous one:
b. What is its difference compared to the previous one? Is it faster on average?
c. The algorithm consists of five steps. In the first one we divide the array into n/5 parts, each with 5 elements (except possibly the last one). Then each part is sorted using insertion sort and we select the 3rd element (the median) of each. Because we have sorted these elements, we can be sure that the previous two are <= this pivot element and the last two are >= it. Then we need to select the median among these medians. The book states that we recursively call the Select algorithm on these medians. How can we do that? In the Select algorithm we are using insertion sort, and if we are swapping two medians, do we need to swap all four (or even more, at a deeper step) elements that are "children" of each median? Or do we create a new array that contains only the previously selected medians and search for the median among them? If so, how can we put them back into the original array, since we changed their order?
The other steps are pretty simple and look like the ones in the randomized_partition algorithm.
The randomized select runs in O(n) expected time; look at this analysis.
Algorithm (see the code sketch after these steps):
Randomly choose an element xj
Split the set into a "lower than" set L and a "bigger than" set B
If the size of the "lower than" set is j-1, we found it
If its size is at least j, then look up rank j in L
Otherwise, look up rank j - |L| - 1 in B
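Here is a rough Python sketch of those steps (my own illustration; it uses a three-way split so duplicates are handled, and j is the 1-based rank being searched for):

import random

def randomized_select(items, j):
    """Return the j-th smallest element (1-based) in expected O(n) time."""
    pivot = random.choice(items)
    lower  = [x for x in items if x < pivot]     # the "lower than" set L
    equal  = [x for x in items if x == pivot]
    bigger = [x for x in items if x > pivot]     # the "bigger than" set B
    if j <= len(lower):
        return randomized_select(lower, j)
    if j <= len(lower) + len(equal):
        return pivot                             # found it
    return randomized_select(bigger, j - len(lower) - len(equal))

data = [7, 1, 9, 3, 3, 8, 2]
print(randomized_select(data, 4))   # 4th smallest -> 3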
The total cost is the sum of:
The cost of splitting the array of size n
The cost of lookup in L or the cost of looking up in B
Edited: I tried to restructure my post.
You can notice that:
We always go next into the set with the greater number of elements
The number of elements in this set is n - rank(xj)
1 <= rank(xj) <= n, so 0 <= n - rank(xj) <= n - 1
The randomness of the element xj directly affects the number of elements which are greater than xj (and the number which are smaller than xj)
if xj is the element chosen, then you know that the cost is O(n) + cost(n - rank(xj)). Let's call rank(xj) = rj.
To give a good estimate we need to take the expected value of the total cost, which is
T(n) = E(cost) = sum over each possible xj of p(xj)·(O(n) + T(n - rank(xj)))
xj is random. After this it is pure math.
We obtain:
T(n) = 1/n · ( O(n) + sum over all possible values of rj when we continue of (O(n) + T(n - rj)) )
T(n) = 1/n · ( O(n) + sum over {1 <= rj <= n, rj != i} of (O(n) + T(n - rj)) )
Here you can change variables, vj = n - rj:
T(n) = 1/n · ( O(n) + sum over {0 <= vj <= n-1, vj != n-i} of (O(n) + T(vj)) )
We pull the O(n) out of the sum; since the sum has about n terms, this gains a factor of n and becomes O(n^2):
T(n) = 1/n · ( O(n) + O(n^2) + sum over {0 <= vj <= n-1, vj != n-i} of T(vj) )
Distributing the 1/n over the first two terms loses that factor again:
T(n) = O(1) + O(n) + 1/n · ( sum over {0 <= vj <= n-1, vj != n-i} of T(vj) )
Check the link on how this is computed.
For the non-randomized version:
You say yourself:
In avg randomizedPartition returns near middle of the array.
That is exactly why the randomized algorithm works, and that is exactly what is used to construct the deterministic algorithm. Ideally you want to pick the pivot deterministically such that it produces a good split, but the best value for a good split is already the solution! So at each step they want a value which is good enough: "at least 3/10 of the array below the pivot and at least 3/10 of the array above". To achieve this they split the original array into groups of 5 at each step, and again it is a mathematical choice.
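To make the group-of-5 step concrete, here is a rough sketch of the deterministic select (my own illustration, not the book's pseudocode). Note that the recursive call on the medians works on a new, smaller list of medians, so nothing in the original array needs to be reordered, which also answers question c above:

def select(items, i):
    """Return the i-th smallest element (1-based) using median of medians."""
    if len(items) <= 5:
        return sorted(items)[i - 1]

    # Steps 1-2: split into groups of 5 and take the median of each group.
    medians = [sorted(items[j:j + 5])[len(items[j:j + 5]) // 2]
               for j in range(0, len(items), 5)]

    # Step 3: recursively select the median of the medians as the pivot.
    pivot = select(medians, (len(medians) + 1) // 2)

    # Steps 4-5: partition around the pivot and recurse into one side only.
    lower  = [x for x in items if x < pivot]
    equal  = [x for x in items if x == pivot]
    bigger = [x for x in items if x > pivot]
    if i <= len(lower):
        return select(lower, i)
    if i <= len(lower) + len(equal):
        return pivot
    return select(bigger, i - len(lower) - len(equal))

print(select([9, 1, 8, 2, 7, 3, 6, 4, 5, 0], 3))   # -> 2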
I once created an explanation for this (with diagram) on the Wikipedia page for it... http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm

What does Logn actually mean?

I am just studying for my class in Algorithms and have been looking over QuickSort. I understand the algorithm and how it works, but not how to get the number of comparisons it does, or what logn actually means, at the end of the day.
I understand the basics, to the extent of :
x = log_b(Y) means b^x = Y
But what does this mean in terms of algorithm performance? It's the number of comparisons you need to do, I understand that... the whole idea just seems so unintelligible though. Like, for QuickSort, each level-k invocation involves 2^k invocations, each on a sublist of length n/2^k.
So, summing to find the number of comparisons :
Σ_{k=0}^{log n} 2^k · 2(n/2^k) = 2n(1 + log n)
Why are we summing up to log n ? Where did 2n(1+logn) come from? Sorry for the vagueness of my descriptions, I am just so confused.
If you consider a full, balanced binary tree, then layer by layer you have 1 + 2 + 4 + 8 + ... vertices. If the total number of vertices in the tree is 2^n - 1, then you have 1 + 2 + 4 + 8 + ... + 2^(n-1) vertices, counting layer by layer. Now let N = 2^n (the size of the tree); then the height of the tree is n, and n = log2(N). That's what the log(n) means in these Big-O expressions.
below is a sample tree:
        1
      /   \
     2     3
    / \   / \
   4   5 6   7
The number of nodes in the tree is 7, but the height of the tree is about log₂ 7 ≈ 3 (counting levels). The log shows up whenever you have divide-and-conquer methods. In quicksort you divide the list into 2 sublists and continue until you reach small lists; the dividing takes log n levels (in the average case), because the height of the division tree is log n. Partitioning at each level takes O(n), because on average you partition n numbers per level (there may be many lists to partition, but the total number of elements across them at each level is n). So, as a simple observation, if you have a balanced partition tree you have log n levels of partitioning, which is the height of the tree.
1. Forget about trees for a second.
Here's the math: log2 N = k is the same as 2^k = N — that's the definition of log.
It could also be the natural log: ln N = k, i.e. e^k = N, or the decimal log: log10 N = k, i.e. 10^k = N.
2. Now look at a perfect, balanced binary tree:
1
1 + 1
1 + 1 + 1 + 1
8 ones
16 ones
etc.
How many elements? 1 + 2 + 4 + 8 + ... and so on; a 2-level tree has 2^2 - 1 elements, a 3-level tree has 2^3 - 1, and so on. SO HERE'S THE MAGIC FORMULA: number_of_tree_elements = 2^number_of_levels - 1, or, using the definition of log: number_of_levels = log2(number_of_tree_elements) (you can forget about the -1).
3. Let's say there's a task to find an element in a binary tree of N elements with K levels (a.k.a. its height),
where, by how the tree is constructed, K = log2(number_of_tree_elements).
MOST IMPORTANT
By how the tree is constructed, you need no more than "height" operations to find an element among all N elements, or fewer. And what is the height? It equals log2(number_of_tree_elements).
So you need log2(N_number_of_tree_elements) operations, or log(N) for short.
To understand what O(log(n)) means you might want to read up on Big-O notation. In short it means that if your data set gets 1024 times bigger, your runtime will only be roughly 10 times longer (or less) (for base 2).
MergeSort runs in O(n*log(n)), which means it will take roughly 10 240 times longer. Bubble sort runs in O(n^2), which means it will take 1024^2 = 1 048 576 times longer. So there is really some time to save :)
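To get a feel for how slowly the logarithm grows, here is a tiny illustration (mine, not from the answer):

import math

for n in (1024, 1024**2, 1024**3):
    print(n, math.log2(n))
# 1024 -> 10.0, 1024**2 -> 20.0, 1024**3 -> 30.0:
# the data grows by a factor of a million, yet the log factor barely moves.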
To understand your sum, you must look at the mergesort algorithm as a tree:
                sort(3,1,2,4)
               /             \
       sort(3,1)             sort(2,4)
       /       \             /       \
  sort(3)   sort(1)     sort(2)   sort(4)
The sum iterates over each level of the tree. k=0 is the top, k=log(n) is the bottom. The tree will always be of height log2(n) (as it is a balanced binary tree).
To do a little math:
Σ 2^k * 2(n/2^k) =
2 * Σ 2^k * (n/2^k) =
2 * Σ n*2^k/2^k =
2 * Σ n =
2 * n * (1+log(n)) //As there are log(n)+1 steps from 0 to log(n) inclusive
This is of course a lot of work to do, especially if you have more complex algorithms. In those situations you will be really happy to have the Master Theorem, but for the moment it might just get you more confused. It's very theoretical, so don't worry if you don't understand it right away.
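As a quick sanity check of the sum derived above, you can evaluate both sides for a few powers of two (my own snippet):

import math

for n in (2, 8, 64, 1024):
    log_n = int(math.log2(n))
    lhs = sum(2**k * 2 * (n // 2**k) for k in range(log_n + 1))   # each term is 2n
    assert lhs == 2 * n * (1 + log_n)                             # log(n)+1 terms in total
    print(n, lhs)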
For me, this is a good way to think about and understand issues like this.
