Can anyone tell me the worst-case time complexity of the code below?
Is it linear or worse?
void fun(int[] nums) {
    int min = min(nums);
    int max = max(nums);
    for (int i = min; i <= max; i++) {
        print(i); // constant-time print
    }
}

int min(int[] nums); // returns the minimum of nums in linear time
int max(int[] nums); // returns the maximum of nums in linear time
where
0 <= nums.length <= 10^4 and -10^9 <= nums[i] <= 10^9
Can I say that the time complexity of this code is O(max(nums[i]) - min(nums[i])), and can I call that linear time complexity?
Since the complexity is linear in the range R = max - min of the data, I would call it pseudo-linear complexity: O(N + R).
This is detailed in this Wikipedia entry: Pseudo-polynomial time
As mentioned in the introduction of this article:
In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input)—but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial time algorithms.
Generally, when analysing the complexity of a given algorithm, we don't make any assumption about the inherent range limits of a particular target language, unless of course this is explicitly mentioned in the problem.
If the range of the numbers is constant (i.e. -10^9 <= nums[i] <= 10^9), then
for (int i = min; i <= max; i++) {
    print(i); // constant-time print
}
is in O(1), i.e. constant, because it iterates over at most 2 * 10^9 + 1 numbers, regardless of how many numbers there are in the nums[] array. Thus it does not depend on the size of the input array.
Consider the following input arrays
nums = [-10^9, 10^9]; //size 2
nums = [-10^9, -10^9 + 1, -10^9 + 2, ..., 10^9 - 2, 10^9 - 1, 10^9] //size 2 * 10^9 + 1
For both, min and max will have the same values, -10^9 and 10^9 respectively. Thus your loop will iterate over all numbers from -10^9 to 10^9. And even if there were 10^100000 numbers in the original array, the for loop would still iterate at most from -10^9 to 10^9.
And you say min() and max() are in O(n), thus your overall algorithm would also be in O(n). But if you take into account that the given maximum length of the array (10^4) is orders of magnitude smaller than the limit on the numbers, you can even neglect the cost of calling min and max.
And as for your comment
For example, array = [1, 200, 2, 6, 4, 100]. In this case we can find min and max in linear time (O(n), where n is the length of the array). Now, my for loop's complexity is O(200) or O(n^3), which is much more than the length of the array. Can I still say it's linear complexity?
The size of the array and the values in the array are completely independent of each other. Thus you cannot express the complexity of the for loop in terms of n (as explained above). If you really want to also take the range of the numbers into account, you have to express it as something like O(n + r), where n is the size of the array and r is the range of the numbers.
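To make the two cost terms of O(n + r) concrete, here is a minimal runnable C++ sketch of the code in question (the C++ translation and the names mn/mx are mine; fun and nums come from the question):

#include <algorithm>
#include <cstdio>
#include <vector>

// A runnable version of fun: two O(n) passes for min and max, then an O(r)
// loop over the value range r = mx - mn, which is independent of nums.size().
void fun(const std::vector<int>& nums) {
    if (nums.empty()) return;
    int mn = *std::min_element(nums.begin(), nums.end());   // O(n)
    int mx = *std::max_element(nums.begin(), nums.end());   // O(n)
    for (int i = mn; i <= mx; ++i) {                         // O(r)
        std::printf("%d\n", i);                              // assumed constant-time print
    }
}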
Given an array nums
Count the number of pairs (two elements) whose bitwise AND is greater than K.
Brute force
res = 0
for i in range(0, n):
    for j in range(i + 1, n):
        if a[i] & a[j] > k:
            res += 1
Better version: preprocess to remove all elements ≤ k, and then brute force (sketched below).
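Removing elements ≤ k is safe because x & y <= min(x, y), so a value that is itself ≤ k can never appear in a qualifying pair. A rough sketch of this pruned brute force (my own illustration, not from the original post):

#include <cstddef>
#include <vector>

// Counts pairs (i, j), i < j, with b[i] & b[j] > k, after discarding all
// elements <= k (they can never be part of a qualifying pair).
long long countPairsBruteForce(const std::vector<int>& a, int k) {
    std::vector<int> b;
    for (int x : a)
        if (x > k) b.push_back(x);           // preprocessing step
    long long res = 0;
    for (std::size_t i = 0; i < b.size(); ++i)
        for (std::size_t j = i + 1; j < b.size(); ++j)
            if ((b[i] & b[j]) > k) ++res;
    return res;
}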
But I was wondering, what would be the limit on complexity here?
Can we do better with a trie or hashmap approach, like two-sum?
(I did not find this problem on LeetCode, so I thought of asking here.)
Let size_of_input_array = N, and let the input array consist of B-bit numbers.
Here is an easy-to-understand, easy-to-implement solution.
Eliminate all values <= k.
(The image in the original answer shows an example input of 5 ten-bit numbers.)
Step 1: Adjacency Graph
For each bit position, store the list of indices whose number has that bit set. In our example, the 7th bit is set for the numbers at indices 0, 1, 2, 3 in the input array.
Step 2: The challenge is to avoid counting the same pairs again.
To solve this challenge, we use a union-find (disjoint-set) data structure, as shown in the code below.
// unordered_map<int, vector<int>> adjacency_graph;
// adjacency_graph has been filled in step 1: bit position -> indices of numbers with that bit set

vector<int> parent;
for (int i = 0; i < (int)input_array.size(); i++)
    parent.push_back(i);                           // each index starts as its own component

int result = 0;
for (int b = 0; b < B; b++) {                      // loop 1: one iteration per bit position
    auto& v = adjacency_graph[b];
    if (v.size() > 1) {
        int different_parents = 1;
        for (int j = 1; j < (int)v.size(); j++) {  // loop 2: merge indices sharing this bit
            int x = find(parent, v[j]);
            int y = find(parent, v[j - 1]);
            if (x != y) {
                different_parents++;
                union_sets(parent, x, y);          // "union" is a reserved word in C++, hence the name
            }
        }
        result += (different_parents * (different_parents - 1)) / 2;
    }
}
return result;
In the above code, find and union_sets are the standard operations of the union-find data structure.
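The answer doesn't define them; a minimal sketch of what they might look like (my own naming and implementation; add union by rank or size if you want the inverse-Ackermann bound quoted below):

#include <vector>
using namespace std;

// Minimal union-find helpers assumed by the code above (an illustrative
// sketch, not the answer author's implementation).
int find(vector<int>& parent, int i) {
    if (parent[i] != i)
        parent[i] = find(parent, parent[i]);   // path compression
    return parent[i];
}

void union_sets(vector<int>& parent, int rootX, int rootY) {
    parent[rootX] = rootY;                     // merge the two components
}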
Time Complexity:
Step 1:
Build Adjacency Graph: O(BN)
Step 2:
Loop 1: O(B)
Loop 2: O(N * α(N)), where α is the inverse Ackermann function, an extremely slow-growing function
Overall Time Complexity
= O(BN)
Space Complexity
Overall space complexity = O(BN)
First, prune everything <= k. Also sort the value list.
Going from the most significant bit to the least significant, we keep track of the current set of numbers we are working with (initially all of them: start = 0, end = n).
Let p be the first position in the current set whose number has a 1 at the current bit.
If the bit in k is 0, then every pair that yields a 1 here is definitely good, and we only need to keep investigating the ones that get a 0. We have (end - p) * (end - p - 1) / 2 pairs in the current range, plus (end - p) * <total 1s at this bit position at index end or above> combinations with larger, previously good numbers; we can add both to the solution. To continue, we update end = p. We count the 1s in all the numbers above because we previously only counted those numbers in pairs with each other, not with the numbers this low in the set.
If the bit in k is 1, then we can't count any wins yet, but we need to eliminate everything below p, so we update start = p.
You can stop once you have gone through all the bits, or when start == end.
Details:
Since at each step we eliminate either everything that has a 0 or everything that has a 1, everything between start and end shares the same bit prefix. Since the values are sorted, we can do a binary search to find p.
For <total 1s at this bit position at or above a given index>: we already have the values sorted, so we can precompute partial sums, storing for every position in the sorted list the number of 1s at each bit position among all numbers above it.
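A minimal sketch of that precomputation (illustrative; the names are mine, not from the answer):

#include <vector>
using namespace std;

// ones_above[i][b] = how many values at index i or higher in the sorted array
// a have bit b set. Built from the back, so each row extends the next one.
// ones_above has n + 1 rows; row n is all zeros.
vector<vector<int>> buildOnesAbove(const vector<int>& a, int B) {
    int n = a.size();
    vector<vector<int>> ones_above(n + 1, vector<int>(B, 0));
    for (int i = n - 1; i >= 0; --i)
        for (int b = 0; b < B; ++b)
            ones_above[i][b] = ones_above[i + 1][b] + ((a[i] >> b) & 1);
    return ones_above;
}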
Complexity:
We go bit by bit, so L steps (L being the bit length of the numbers); at each step we do a binary search (log N), and the lookups and updates are O(1), so this part is O(L log N).
We have to sort: O(N log N).
We have to compute the partial bit-wise sums: O(L * N).
Total: O(L log N + N log N + L * N).
Since N >> L, L log N is subsumed by N log N. Since L >> log N (probably: you have 32-bit numbers, but you don't have 4 billion of them), N log N is subsumed by L * N. So the time complexity is O(L * N). Since we also need to keep the partial sums around, the space complexity is also O(L * N).
I made up my own interview-style problem, and have a question on the big O of my solution. I will state the problem and my solution below, but first let me say that the obvious solution involves a nested loop and is O(n^2). I believe I found an O(n) solution, but then I realized it depends not only on the size of the input, but also on the largest value of the input. It seems like my running time of O(n) is only a technicality, and that it could easily run in O(n^2) time or worse in real life.
The problem is:
For each item in a given array of positive integers, print all the other items in the array that are multiples of the current item.
Example Input:
[2 9 6 8 3]
Example Output:
2: 6 8
9:
6:
8:
3: 9 6
My solution (in C#):
private static void PrintAllDivisibleBy(int[] arr)
{
    Dictionary<int, bool> dic = new Dictionary<int, bool>();
    if (arr == null || arr.Length < 2)
        return;

    // Single pass: record every value and track the maximum.
    int max = arr[0];
    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i] > max)
            max = arr[i];
        dic[arr[i]] = true;
    }

    // For each item, walk its multiples up to max and print the ones present.
    for (int i = 0; i < arr.Length; i++)
    {
        Console.Write("{0}: ", arr[i]);
        int multiplier = 2;
        while (true)
        {
            int product = multiplier * arr[i];
            if (dic.ContainsKey(product))
                Console.Write("{0} ", product);
            if (product >= max)
                break;
            multiplier++;
        }
        Console.WriteLine();
    }
}
So, if 2 of the array items are 1 and n, where n is the array length, the inner while loop will run n times, making this equivalent to O(n^2). But, since the performance is dependent on the size of the input values, not the length of the list, that makes it O(n), right?
Would you consider this a true O(n) solution? Is it only O(n) due to technicalities, but slower in real life?
Good question! The answer is that, no, n is not always the size of the input: You can't really talk about O(n) without defining what the n means, but often people use imprecise language and imply that n is "the most obvious thing that scales here". Technically we should usually say things like "This sort algorithm performs a number of comparisons that is O(n) in the number of elements in the list": being specific about both what n is, and what quantity we are measuring (comparisons).
If you have an algorithm that depends on the product of two different things (here, the length of the list and the largest element in it), the proper way to express that is in the form O(m*n), and then define what m and n are for your context. So, we could say that your algorithm performs O(m*n) multiplications, where m is the length of the list and n is the largest item in the list.
An algorithm is O(n) when you have to iterate over n elements and perform some constant-time operation in each iteration. The inner while loop of your algorithm is not constant time, as it depends on the magnitude of the biggest number in your array.
Your algorithm's best-case run-time is O(n). This is the case when all n numbers are the same.
Your algorithm's worst-case run-time is O(k*n), where k = the maximum possible value of int on your machine, if you really insist on putting an upper bound on k's value. For a 32-bit int the maximum value is 2,147,483,647. You can argue that this k is a constant, but this constant is clearly
not fixed for every input array; and,
not negligible.
Would you consider this a true O(n) solution?
The runtime actually is O(n*m), where m is the maximum element in arr. If the elements in your array are bounded by a constant, you can consider the algorithm to be O(n).
Can you improve the runtime? Here's what else you can do. First notice that you can ensure the elements are distinct (compress the array into a hashmap that stores how many times each element occurs). Then your runtime would be max/a[0] + max/a[1] + max/a[2] + ... <= max/1 + max/2 + ... + max/max = O(max * log(max)) (assuming arr is sorted), by the harmonic series bound. If you combine this with the obvious O(n^2) algorithm, you get an O(min(n^2, max * log(max))) algorithm.
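Here's a sketch of that idea in C++ (my own illustration of the approach just described; the names are assumptions, not from the original post):

#include <iostream>
#include <map>
#include <vector>
using namespace std;

// Sketch of the O(max * log(max)) idea: compress the array into a value -> count
// map, then for each distinct value v walk its multiples 2v, 3v, ... up to the
// maximum and print the ones that occur. Over all distinct values this takes at
// most max/1 + max/2 + ... + max/max = O(max * log(max)) steps.
void printAllDivisibleBy(const vector<int>& arr) {
    if (arr.size() < 2) return;
    map<int, int> count;                   // value -> number of occurrences
    for (int x : arr) count[x]++;
    int maxVal = count.rbegin()->first;    // largest value (map is ordered)
    for (const auto& kv : count) {         // one output line per distinct value
        int v = kv.first;
        cout << v << ":";
        for (long long product = 2LL * v; product <= maxVal; product += v)
            if (count.count((int)product)) cout << " " << product;
        cout << "\n";
    }
}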
Say you're given a large set of numbers (size n) and asked to compute the average of the data. You only have enough space and memory for c numbers at one time. What is the run-time complexity of this computation?
To compute an average for the whole dataset, the complexity would be O(n). Consider the following algorithm:
set sum = 0;
for (i = 0; i < n; i++) { // loop n times
    add the i-th value to sum;
}
set average = sum / n;
Since we can disregard the two constant-time operations, the main operation (adding a value to sum) occurs n times.
In this particular example, you only have data for 'c' numbers at a time. Each individual group takes O(c) time to process. However, this does not change your overall complexity, because ultimately you still make n passes through the loop.
To provide a concrete example, consider the case n = 100 and c = 40, with your values passed in an array. Your first loop would make 40 passes, the second another 40, and the third only 20. Regardless, you have made 100 passes through the loop in total.
This also assumes that it is a constant-time operation to fetch each next set of numbers.
It is O(n).
A basic (though not particularly numerically stable) algorithm computes it iteratively as follows:
mean = 0
for n = 0, 1, 2, ..., length(arr) - 1
    mean = (mean*n + arr[n]) / (n+1)
A variant of this algorithm can be used to parse the data from the array in sets of c numbers, but it is still linear in n.
To spell out the chunked version explicitly:
mean = 0
for m = 0, c, 2c, ..., length(arr) - 1
    sub_arr = request_sub_arr_between(m, min(m + c - 1, length(arr) - 1))
    for i = 0, 1, ..., length(sub_arr) - 1
        n = m + i
        mean = (mean*n + sub_arr[i]) / (n+1)
This is still O(n), as we are only doing a bounded number of things for each n. In fact, the algorithm given at the top of this answer is a variant of this with c = 1. If sub_arr is not kept in local memory, but sub_arr[i] is read at each step, then we are only storing 3 numbers at any step.
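For concreteness, here is a runnable C++ sketch of the chunked running mean (my own illustration; request_sub_arr_between is replaced by reading the next chunk of at most c values from an input stream):

#include <cstddef>
#include <iostream>
#include <vector>

// Computes the mean while holding at most c values in memory at a time.
// Follows the pseudocode above, with the data arriving through a stream.
double chunkedMean(std::istream& in, std::size_t c) {
    double mean = 0.0;
    long long n = 0;                           // values seen so far
    std::vector<double> chunk;
    chunk.reserve(c);
    double x;
    while (true) {
        chunk.clear();
        while (chunk.size() < c && (in >> x)) chunk.push_back(x);
        if (chunk.empty()) break;              // no more data
        for (double v : chunk) {
            mean = (mean * n + v) / (n + 1);   // same update as above
            ++n;
        }
    }
    return mean;
}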
How can I check whether n can be written as a sum of a sequence of consecutive prime numbers?
For example, 12 equals 5 + 7, where 5 and 7 are consecutive primes, but 20 equals 3 + 17, where 3 and 17 are not consecutive.
Note that repetition is not allowed.
My idea is to find and list all primes below n, then use two loops to sum over all windows of primes: the first 2 numbers, the second 2 numbers, the third 2 numbers, etc., then the first 3 numbers, the second 3 numbers, and so on. But it takes a lot of time and memory.
Realize that a consecutive list of primes is defined only by two pieces of information, the starting and the ending prime number. You just have to find these two numbers.
I assume that you have all the primes at your disposal, sorted in an array called primes. Keep three variables in memory: sum, which initially is 2 (the smallest prime), and first_index and last_index, which are initially 0 (the index of the smallest prime in primes).
Now you have to "tweak" these two indices, and "travel" the array along the way in the loop:
If sum == n then finish. You have found your sequence of primes.
If sum < n then enlarge the list by adding the next available prime. Increment last_index by one, and then increase sum by the value of the new prime, primes[last_index]. Repeat the loop. But if primes[last_index] is larger than n, then there is no solution and you must finish.
If sum > n then reduce the list by removing the smallest prime from the list. Decrement sum by that value, which is primes[first_index], and then increment first_index by one. Repeat the loop.
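A minimal C++ sketch of this loop (my own code, assuming primes holds all primes up to n in increasing order; not Dialecticus's exact implementation):

#include <cstddef>
#include <vector>

// Returns true if n is a sum of one or more consecutive primes, using the
// sliding-window procedure described above.
bool isSumOfConsecutivePrimes(long long n, const std::vector<long long>& primes) {
    std::size_t first_index = 0, last_index = 0;
    long long sum = primes.empty() ? 0 : primes[0];   // start with the smallest prime
    while (first_index < primes.size() && primes[first_index] <= n) {
        if (sum == n) return true;                    // found a sequence
        if (sum < n) {
            ++last_index;                             // enlarge: add the next prime
            if (last_index >= primes.size() || primes[last_index] > n)
                return false;                         // no solution
            sum += primes[last_index];
        } else {
            sum -= primes[first_index];               // shrink: drop the smallest prime
            ++first_index;
        }
    }
    return false;
}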
Dialecticus's algorithm is the classic O(m)-time, O(1)-space way to solve this type of problem (here I'll use m to represent the number of prime numbers less than n). It doesn't depend on any mysterious properties of prime numbers. (Interestingly, for the particular case of prime numbers, AlexAlvarez's algorithm is also linear time!) Dialecticus gives a clear and correct description, but seems at a loss to explain why it is correct, so I'll try to do this here. I really think it's valuable to take the time to understand this particular algorithm's proof of correctness: although I had to read a number of explanations before it finally "sank in", it was a real "Aha!" moment when it did! :) (Also, problems that can be efficiently solved in the same manner crop up quite a lot.)
The candidate solutions this algorithm tries can be represented as number ranges (i, j), where i and j are just the indexes of the first and last prime number in a list of prime numbers. The algorithm gets its efficiency by ruling out (that is, not considering) sets of number ranges in two different ways. To prove that it always gives the right answer, we need to show that it never rules out the only range with the right sum. To that end, it suffices to prove that it never rules out the first (leftmost) range with the right sum, which is what we'll do here.
The first rule it applies is that whenever we find a range (i, j) with sum(i, j) > n, we rule out all ranges (i, k) having k > j. It's easy to see why this is justified: the sum can only get bigger as we add more terms, and we have determined that it's already too big.
The second, trickier rule, crucial to the linear time complexity, is that whenever we advance the starting point of a range (i, j) from i to i+1, instead of "starting again" from (i+1, i+1), we start from (i+1, j) -- that is, we avoid considering (i+1, k) for all i+1 <= k < j. Why is it OK to do this? (To put the question the other way: Couldn't it be that doing this causes us to skip over some range with the right sum?)
[EDIT: The original version of the next paragraph glossed over a subtlety: we might have advanced the range end point to j on any previous step.]
To see that it never skips a valid range, we need to think about the range (i, j-1). For the algorithm to advance the starting point of the current range, so that it changes from (i, j) to (i+1, j), it must have been that sum(i, j) > n; and as we will see, to get to a program state in which the range (i, j) is being considered in the first place, it must have been that sum(i, j-1) < n. That second claim is subtle, because there are two different ways to arrive in such a program state: either we just incremented the end point, meaning that the previous range was (i, j-1) and this range was found to be too small (in which case our desired property sum(i, j-1) < n obviously holds); or we just incremented the start point after considering (i-1, j) and finding it to be too large (in which case it's not obvious that the property still holds).
What we do know, however, is that regardless of whether the end point was increased from j-1 to j on the previous step, it was definitely increased at some time before the current step -- so let's call the range that triggered this end point increase (k, j-1). Clearly sum(k, j-1) < n, since this was (by definition) the range that caused us to increase the end point from j-1 to j; and just as clearly k <= i, since we only process ranges in increasing order of their start points. Since i >= k, sum(i, j-1) is just the same as sum(k, j-1) but with zero or more terms removed from the left end, and all of these terms are positive, so it must be that sum(i, j-1) <= sum(k, j-1) < n.
So we have established that whenever we increase i to i+1, we know that sum(i, j-1) < n. To finish the analysis of this rule, what we (again) need to make use of is that dropping terms from either end of this sum can't make it any bigger. Removing the first term leaves us with sum(i+1, j-1) <= sum(i, j-1) < n. Starting from that sum and successively removing terms from the other end leaves us with sum(i+1, j-2), sum(i+1, j-3), ..., sum(i+1, i+1), all of which we know must be less than n -- that is, none of the ranges corresponding to these sums can be valid solutions. Therefore we can safely avoid considering them in the first place, and that's exactly what the algorithm does.
One final potential stumbling block is that it might seem that, since we are advancing two loop indexes, the time complexity should be O(m^2). But notice that every time through the loop body, we advance one of the indexes (i or j) by one, and we never move either of them backwards, so if we are still running after 2m loop iterations we must have i + j = 2m. Since neither index can ever exceed m, the only way for this to hold is if i = j = m, which means that we have reached the end: i.e. we are guaranteed to terminate after at most 2m iterations.
The fact that the primes have to be consecutive allows us to solve this problem quite efficiently in terms of n. Suppose that we have previously computed all the primes less than or equal to n. Then we can easily compute sum(i), the sum of the first i primes.
With this function precomputed, we can loop over the primes less than or equal to n and ask whether there is a length such that, starting with that prime, we can sum up to exactly n. Notice that for a fixed starting prime the sequence of sums is monotone, so we can binary search over the length.
Thus, let k be the number of primes less than or equal to n. Precomputing the sums costs O(k), and the loop costs O(k log k), which dominates. By the prime number theorem, k = O(n / log n), so the whole algorithm costs O((n / log n) * log(n / log n)) = O(n).
Let me put the code in C++ to make it clearer; I hope there are no bugs:
#include <iostream>
#include <vector>
using namespace std;

typedef long long ll;

int main() {
    // Get the limit for the numbers
    int MAX_N;
    cin >> MAX_N;

    // Compute the primes less than or equal to MAX_N (sieve of Eratosthenes)
    vector<bool> is_prime(MAX_N + 1, true);
    for (int i = 2; i*i <= MAX_N; ++i) {
        if (is_prime[i]) {
            for (int j = i*i; j <= MAX_N; j += i) is_prime[j] = false;
        }
    }
    vector<int> prime;
    for (int i = 2; i <= MAX_N; ++i) if (is_prime[i]) prime.push_back(i);

    // Compute the prefix sums: sum[i] = sum of the first i primes
    vector<ll> sum(prime.size() + 1, 0);
    for (int i = 0; i < (int)prime.size(); ++i) sum[i + 1] = sum[i] + prime[i];

    // Get the number of queries
    int n_queries;
    cin >> n_queries;
    for (int z = 1; z <= n_queries; ++z) {
        int n;
        cin >> n;
        // Solve the query
        bool found = false;
        for (int i = 0; i < (int)prime.size() and prime[i] <= n and not found; ++i) {
            // Do binary search over the length of the sum:
            // for all x < ini, the range [i, x] sums to <= n
            int ini = i, fin = int(prime.size()) - 1;
            while (ini <= fin) {
                int mid = (ini + fin)/2;
                ll value = sum[mid + 1] - sum[i];
                if (value <= n) ini = mid + 1;
                else fin = mid - 1;
            }
            // Check the candidate given by the binary search
            int candidate = ini - 1;
            if (candidate >= i and sum[candidate + 1] - sum[i] == n) {
                found = true;
                cout << n << " =";
                for (int j = i; j <= candidate; ++j) {
                    cout << " ";
                    if (j > i) cout << "+ ";
                    cout << prime[j];
                }
                cout << endl;
            }
        }
        if (not found) cout << "No solution" << endl;
    }
}
Sample input:
1000
5
12
20
28
17
29
Sample output:
12 = 5 + 7
No solution
28 = 2 + 3 + 5 + 7 + 11
17 = 2 + 3 + 5 + 7
29 = 29
I'd start by noting that for a pair of consecutive primes to sum to the number, one of the primes must be less than N/2, and the other prime must be greater than N/2. For them to be consecutive primes, they must be the primes closest to N/2, one smaller and the other larger.
If you're starting with a table of prime numbers, you basically do a binary search for N/2. Look at the primes immediately larger and smaller than that. Add those numbers together and see if they sum to your target number. If they don't, then it can't be the sum of two consecutive primes.
If you don't start with a table of primes, it works out pretty much the same way: you still start from N/2 and find the next larger prime (we'll call that prime1). Then you compute N - prime1 to get a candidate for prime2. Check whether that's prime, and if it is, search the range prime2...N/2 for other primes to see whether there is a prime in between. If there is a prime in between, your number is a sum of non-consecutive primes. If there is no other prime in that range, then it is a sum of consecutive primes.
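With a precomputed, sorted table of primes, the two-prime case reduces to one binary search around N/2. A sketch under that assumption (my own code, not from the answer):

#include <algorithm>
#include <vector>

// Returns true if n is the sum of two consecutive primes, given a sorted
// table of all primes up to n. The only possible pair is the prime just
// above n/2 together with the prime immediately before it.
bool isSumOfTwoConsecutivePrimes(long long n, const std::vector<long long>& primes) {
    auto hi = std::upper_bound(primes.begin(), primes.end(), n / 2);
    if (hi == primes.begin() || hi == primes.end()) return false;
    auto lo = hi - 1;                        // the prime immediately below
    return *lo + *hi == n;
}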
The same basic idea applies for sequences of 3 or more primes, except that (of course) your search starts from N/3 (or whatever number of primes you want to sum to get to the number).
So, for three consecutive primes to sum to N, two of the three must be the first prime smaller than N/3 and the first prime larger than N/3. So, we start by finding those, then compute N - (prime1 + prime2). That gives us our third candidate. We know these three numbers sum to N. We still need to prove that this third number is a prime. If it is prime, we need to verify that it's consecutive to the other two.
To give a concrete example, for 10 we'd start from 3.333. The next smaller prime is 3 and the next larger is 5. Those add to 8. 10-8 = 2. 2 is prime and consecutive to 3, so we've found the three consecutive primes that add to 10.
There are some other refinements you can make as well. The most obvious would be based on the fact that all primes (other than 2) are odd numbers. Therefore (assuming we can ignore 2), an even number can only be the sum of an even number of primes, and an odd number can only be a sum of an odd number of primes. So, given 123456789, we know immediately that it can't possibly be the sum of 2 (or 4, 6, 8, 10, ...) consecutive primes, so the only candidates to consider are 3, 5, 7, 9, ... primes. Of course, the opposite works as well: given, say, 12345678, the simple fact that it's even lets us immediately rule out the possibility that it could be the sum of 3, 5, 7 or 9 consecutive primes; we only need to consider sequences of 2, 4, 6, 8, ... primes. We violate this basic rule only when we get to a large enough number of primes that we could include 2 as part of the sequence.
I haven't worked through the math to figure out exactly how many that would be for a given number, but I'm pretty sure it should be fairly easy, and it's something we want to know anyway (because it's the upper limit on the number of consecutive primes to look for, for a given number). If we use M for the number of primes, the limit should be approximately M <= sqrt(N), but that's definitely only an approximation.
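For what it's worth, a rough version of that bound (my own sketch, not part of the original answer): the smallest possible sum of M consecutive primes is the sum of the first M primes, and since the i-th prime is at least i + 1,
N >= 2 + 3 + 5 + ... >= 2 + 3 + ... + (M+1) = M(M+3)/2 > M^2/2,
so M < sqrt(2N), consistent with the approximation above.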
I know that this question is a little old, but I cannot refrain from replying to the analysis made in the previous answers. Indeed, it has been emphasized that all three proposed algorithms have a run-time that is essentially linear in n. In fact, it is not difficult to produce an algorithm that runs in time proportional to a strictly smaller power of n.
To see how, let us choose a parameter K between 1 and n and suppose that the primes we need are already tabulated (if they must be computed from scratch, see below). Then here is what we do to search for a representation of n as a sum of k consecutive primes:
First, for each k < K, we use the idea in Jerry Coffin's answer; that is, we search for k primes located around n/k.
Then, to explore sums of k >= K primes, we use the algorithm explained in Dialecticus's answer; that is, we begin with a sum whose first element is 2, and then advance the first element one step at a time.
The first part, which concerns short sums of big primes, requires O(log n) operations to binary search for one prime close to n/k, and then O(k) operations to find the other k primes (there are a few simple possible implementations). Summed over k < K, this gives a running time of
R_1 = O(K^2) + O(K log n).
The second part, which is about long sums of small primes, requires us to consider sums of consecutive primes p_1 < ... < p_k whose first element is at most n/K.
Thus, it requires visiting at most n/K + K primes (one can actually save a log factor by a weak version of the prime number theorem). Since the algorithm visits every prime at most O(1) times, the running time is
R_2 = O(n/K) + O(K).
Now, if log n < K < \sqrt n, the first part runs in O(K^2) operations and the second part runs in O(n/K). We optimize by balancing the two terms, K^2 = n/K, i.e. K = n^{1/3}, so that the overall running time is
R_1 + R_2 = O(n^{2/3}).
If the primes are not tabulated
If we also have to find the primes, here is how we do it.
First we use the sieve of Eratosthenes, which in C_2 = O(T log log T) operations finds all the primes up to T, where T = O(n/K) is the upper bound on the small primes visited in the second part of the algorithm.
In order to perform the first part of the algorithm we need, for every k < K, to find O(k) primes located around n/k. The Riemann hypothesis implies that there are at least k primes in the interval [x, x+y] if y > c log x (k + \sqrt x) for some constant c > 0. Therefore, a priori, we need to find the primes contained in an interval I_k centered at n/k with width |I_k| = O(k log n) + O(\sqrt{n/k} log n).
Using the sieve of Eratosthenes to sieve the interval I_k requires O(|I_k| log log n) + O(\sqrt n) operations. If k < K < \sqrt n we get a time complexity C_1 = O(\sqrt n log n log log n) for every k < K.
Summing up, the total time complexity C_1 + C_2 + R_1 + R_2 is minimized when
K = n^{1/4} / (log n \sqrt{log log n}).
With this choice we have the sublinear time complexity
R_1 + R_2 + C_1 + C_2 = O(n^{3/4} \sqrt{log log n}).
If we do not assume the Riemann hypothesis we will have to search over larger intervals, but in the end we still get a sublinear time complexity. If instead we assume stronger conjectures on prime gaps, we may only need to search over intervals I_k of width |I_k| = k (log n)^A for some A > 0. Then, instead of the sieve of Eratosthenes, we can use other deterministic primality tests. For example, suppose that you can test a single number for primality in O((log n)^B) operations, for some B > 0.
Then you can search the interval I_k in O(k (log n)^{A+B}) operations. In this case the optimal K is still K \approx n^{1/3}, up to logarithmic factors, and so the total complexity is O(n^{2/3} (log n)^D) for some D > 0.