Finding Θ for an algorithm

I have the pseudocode below, which takes an unsorted array of length size and finds its range by locating the max and min values in the array. I'm just learning about the various ways of measuring time efficiency, but I think the code below is Θ(n), since a longer array adds only a fixed number of extra actions (3).
For example, ignoring the actual assignments to max and min (as the unsorted array is arbitrary and these assignments are unknown in advance), an array of length 2 would only require 5 actions total (including the final range calculation). An array of length 4 uses only 9 actions total, again including the final range calculation. An array of length 12 uses 25 actions.
This all points me to Θ(n), as it is a linear relationship. Is this correct?
Pseudocode:
// Traverse each element of the array, storing the max and min values
// Assuming int size exists that is size of array a[]
// Assuming array is a[]
min = a[0];
max = a[0];
for(i = 0; i < size; i++) {
    if(min > a[i]) {      // If the current min is greater than a[i],
        min = a[i];       // replace min with a[i]
    }
    if(max < a[i]) {      // If the current max is smaller than a[i],
        max = a[i];       // replace max with a[i]
    }
}
range = max - min;        // range is the largest value minus the smallest

You're right. It's O(n).
An easy way to tell in simple code (like the code above) is to look at how many for() loops are nested, if any. Each "normal" loop (running from i = 0 to n) contributes a factor of n: nested loops multiply those factors, while loops that run one after another only add them.
[Edit2]: That is, if you have code like this:
array a[n];                        //Array with n elements.
for(int i = 0; i < n; ++i){        //Happens n times.
    for(int j = 0; j < n; ++j){    //Happens n*n times.
        //something                //Happens n*n times.
    }
}
//Overall complexity is O(n^2)
Whereas
array a[n];                        //Array with n elements.
for(int i = 0; i < n; ++i){        //Happens n times.
    //something                    //Happens n times.
}
for(int j = 0; j < n; ++j){        //Happens n times.
    //something                    //Happens n times.
}
//Overall complexity is O(2n) = O(n)
This is pretty rudimentary, but useful if you haven't taken an algorithms course.
The procedures inside your for() loop are irrelevant to the complexity here, as long as each of them takes constant time.
[Edit]: This assumes that size actually means the size of array a.

Yes, this would be Θ(n). Your reasoning is a little skewed though.
You have to look at every item in your loop so you're bounded above by a linear function. Conversely, you are also bounded below by a linear function (the same one in fact), because you can't avoid looking at every element.
O(n) only requires that you bound above; Ω(n) requires that you bound below.
Θ(n) says you're bounded on both sides.

Let size be n; then it's clear that you always have 2n comparisons, plus the single assignment at the end. So you always have 2n + 1 operations in this algorithm.
In the worst-case scenario, you have 2n assignments, thus 2n + 1 + 2n = 4n + 1 = O(n).
In the best-case scenario, you have 0 assignments, thus 2n + 1 + 0 = 2n + 1 = Ω(n).
Therefore, both the best and worst case run in linear time. Hence, Θ(n).
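To see those counts in practice, here is a minimal Java sketch (my own instrumentation, not from the original post) that runs the same min/max scan while tallying comparisons and assignments; for an array of length n it always reports 2n comparisons, matching the 2n + 1 analysis above.

// Hypothetical instrumentation sketch: count the comparisons and
// assignments performed by the min/max scan described above.
public class RangeCount {
    public static void main(String[] args) {
        int[] a = {7, 2, 9, 4, 11, 1};
        int comparisons = 0, assignments = 0;

        int min = a[0], max = a[0];
        for (int i = 0; i < a.length; i++) {
            comparisons++;                          // min > a[i]
            if (min > a[i]) { min = a[i]; assignments++; }
            comparisons++;                          // max < a[i]
            if (max < a[i]) { max = a[i]; assignments++; }
        }
        int range = max - min;                      // the single final operation

        // For length n the comparison count is always 2n; the assignment
        // count varies with the input order (the worst case above is 2n).
        System.out.printf("n=%d comparisons=%d assignments=%d range=%d%n",
                a.length, comparisons, assignments, range);
    }
}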

Yeah, this is surely an O(n) algorithm. I don't think you really need to drill down into the exact number of comparisons to conclude the complexity. Just look at how the number of comparisons changes as the input grows. For O(n) the comparisons should increase linearly with the input size; for O(n^2) they grow quadratically, and so on.

Related

Q: Count array pairs with bitwise AND > k ~ better than O(N^2) possible?

Given an array nums, count the number of pairs (two elements) whose bitwise AND is greater than k.
Brute force
res = 0
n = len(nums)
for i in range(0, n):
    for j in range(i + 1, n):
        if (nums[i] & nums[j]) > k:
            res += 1
Better version:
preprocess to remove all elements ≤k
and then brute force
But I was wondering: what would be the limit on complexity here?
Can we do better with a trie or hashmap approach, like two-sum?
( I did not find this problem on Leetcode so I thought of asking here )
Let size_of_input_array = N, and let the input array consist of B-bit numbers.
Here is an easy-to-understand and easy-to-implement solution.
Eliminate all values <= k.
(The original answer included an image here showing five 10-bit numbers as a running example.)
Step 1: Adjacency Graph
For each bit position, store the list of indices whose numbers have that bit set. In our example, the 7th bit is set for the numbers at indices 0, 1, 2, 3 in the input array.
Step 2: The challenge is to avoid counting the same pair twice.
To solve this, we use a union-find (disjoint set) data structure, as shown in the code below.
// unordered_map<int, vector<int>> adjacency_graph;
// adjacency_graph has been filled in step 1 (bit position -> indices with that bit set)
vector<int> parent;
for (int i = 0; i < input_array.size(); i++)
    parent.push_back(i);

int result = 0;
for (auto &entry : adjacency_graph) {            // loop 1: one pass per bit position
    auto &v = entry.second;
    if (v.size() > 1) {
        int different_parents = 1;
        for (int j = 1; j < v.size(); j++) {     // loop 2
            int x = find(parent, v[j]);
            int y = find(parent, v[j - 1]);
            if (x != y) {
                different_parents++;
                union_sets(parent, x, y);        // "union" is a C++ keyword, so the merge is renamed
            }
        }
        result += (different_parents * (different_parents - 1)) / 2;
    }
}
return result;
In the above code, find and union_sets are the standard find and union operations of a union-find (disjoint set) data structure (union is renamed only because union is a reserved word in C++).
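For readers who have not seen the structure before, here is a minimal, self-contained union-find sketch in Java (the class name, array-based layout, and method signatures are my own illustration; the answer above uses free functions over a parent vector instead). With path compression and union by rank, each find/union runs in amortized time proportional to the inverse Ackermann function, which is where the bound quoted for loop 2 comes from.

// Minimal union-find (disjoint set) sketch with path compression and union by rank.
public class DisjointSet {
    private final int[] parent;
    private final int[] rank;

    public DisjointSet(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    // Returns the representative of x's set, flattening the path as it goes.
    public int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    // Merges the sets containing x and y; returns true if they were separate.
    public boolean union(int x, int y) {
        int rx = find(x), ry = find(y);
        if (rx == ry) return false;
        if (rank[rx] < rank[ry]) { int t = rx; rx = ry; ry = t; }
        parent[ry] = rx;
        if (rank[rx] == rank[ry]) rank[rx]++;
        return true;
    }
}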
Time complexity:
Step 1 (build the adjacency graph): O(BN)
Step 2, loop 1: O(B)
Step 2, loop 2: O(N * α(N)), where α is the inverse Ackermann function, an extremely slow-growing function
Overall time complexity: O(BN)
Space complexity:
Overall space complexity: O(BN)
First, prune everything <= k. Also sort the value list.
Going from the most significant bit to the least significant, we keep track of the range of numbers we are working with (initially all of them: start = 0, end = n).
Let p be the first position in the current range whose number has a 1 at the current bit position.
If the bit of k is 0, then every pair that yields a 1 at this position is definitely good, and we only need to keep investigating the ones that yield a 0. We can add (end - p) * (end - p - 1) / 2 pairs from the current range, plus (end - p) * <total 1s at this position at index end or above> combinations with larger, previously good numbers. To continue, we update end = p. We count the 1s in all the numbers above because they were previously only counted in pairs with each other, not with the numbers this low in the set.
If the bit in k is 1, then we can't count any wins yet, but we need to eliminate everything below p, so we update start = p.
You can stop once you have gone through all the bits or once start == end.
Details:
Since at each step we eliminate either everything that has a 0 or everything that has a 1, everything between start and end will share the same bit prefix. Since the values are sorted, we can binary-search to find p.
For <total 1s in this position larger than p>: we already have the values sorted, so we can compute partial sums, storing for every position in the sorted list the number of 1s at each bit position among all numbers above it.
Complexity:
We go bit by bit, so L steps (L is the bit length of the numbers); at each step we do a binary search (log N) plus O(1) lookups and updates, so this part is O(L log N).
We have to sort: O(N log N).
We have to compute the partial bit-wise sums: O(L*N).
Total: O(L log N + N log N + L*N).
Since N >> L, L log N is subsumed by N log N. Since L >> log N (probably, as in you have 32-bit numbers but you don't have 4 billion of them), N log N is subsumed by L*N. So the time complexity is O(L * N). Since we also need to keep the partial sums around, the space complexity is also O(L * N).

I'm confused about space complexity

I'm a little confused about the space complexity.
int fn_sum(int a[], int n){
    int result = 0;
    for(int i = 0; i < n; i++){
        result += a[i];
    }
    return result;
}
In this case, is the space complexity O(n) or O(1)?
I think it uses only the result and i variables, so it is O(1). What's the answer?
(1) Space complexity: how much memory does your algorithm allocate as a function of the input size?
int fn_sum(int a[], int n){
    int result = 0;             // here you have 1 variable allocated
    for(int i = 0; i < n; i++){
        result += a[i];
    }
    return result;
}
Since the variable you created (result) holds a single value (it's not a list, an array, etc.), your space complexity is O(1): the space usage is constant, meaning it doesn't change with the size of the input.
(2) Time complexity: how does the number of operations of your algorithm relate to the size of the input?
int fn_sum(int a[], int n){      // the input is an array of size n
    int result = 0;              // 1 variable definition operation = O(1)
    for(int i = 0; i < n; i++){  // loop that runs n times, whatever is inside
        result += a[i];          // 1 sum operation = O(1), run n times = n * O(1) = O(n)
    }
    return result;               // 1 return operation = O(1)
}
All the operations together take O(1) + O(n) + O(1) = O(n + 2) = O(n) time, following the rule of dropping additive and multiplicative constants from the function.
I'll answer a bit differently:
Since the memory consumed by int fn_sum(int a[], int n) doesn't depend on the number of input items, its space complexity is O(1).
However, the runtime complexity is O(N), since it iterates over N items.
And yes, there are algorithms that consume more memory to get faster. A classic example is caching.
https://en.wikipedia.org/wiki/Space_complexity
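To make the space-for-time trade-off mentioned above concrete, here is a small Java sketch (my own illustration, not part of the question): caching intermediate results of a recursive Fibonacci computation costs O(n) extra space but cuts the running time from exponential to linear.

import java.util.HashMap;
import java.util.Map;

// Illustration of trading space for time: caching (memoizing)
// intermediate results of a recursive computation.
public class MemoFib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Exponential time, O(n) stack space: recomputes the same values over and over.
    static long fibSlow(int n) {
        return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2);
    }

    // O(n) time, O(n) extra space: each value is computed once and cached.
    static long fibCached(int n) {
        if (n < 2) return n;
        Long hit = cache.get(n);
        if (hit != null) return hit;
        long value = fibCached(n - 1) + fibCached(n - 2);
        cache.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fibCached(50)); // fast; fibSlow(50) would take far longer
    }
}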
If int means the 32-bit signed integer type, the space complexity is O(1) since you always allocate, use and return the same number of bits.
If this is just pseudocode and int means integers represented in their binary representations with no leading zeroes and maybe an extra sign bit (imagine doing this algorithm by hand), the analysis is more complicated.
If negatives are allowed, the best case is alternating positive and negative numbers so that the result never grows beyond a constant size - O(1) space.
If zero is allowed, an equally good case is to put zero in the whole array. This is also O(1).
If only positive numbers are allowed, the best case is more complicated. I expect the best case will see some number repeated n times. For the best case, we'll want the smallest representable number for the number of bits involved; so, I expect the number to be a power of 2. We can work out the sum in terms of n and the repeated number:
result = n * val
result size = log(result) = log(n * val) = log(n) + log(val)
input size = n*log(val) + log(n)
As val grows without bound, the log(val) term dominates in result size, and the n*log(val) term dominates in the input size; the best-case is thus like the multiplicative inverse of the input size, so also O(1).
The worst case should be had by choosing val to be as small as possible (we choose val = 1) and letting n grow without bound. In that case:
result = n
result size = log(n)
input size = 2 * log(n)
This time, the result size grows like half the input size as n grows. The worst-case space complexity is linear.
Another way to calculate space complexity is to analyze whether the memory required by your code scales/increases according to the input given.
Your input is int a[] with size being n. The only variable you have declared is result.
No matter what the size of n is, result is declared only once. It does not depend on the size of your input n.
Hence you can conclude your space complexity to be O(1).

Is the time complexity for this function O(1)?

I was reviewing some old notes on algorithms today, and this got me thinking.
Complexity O(1) means the execution time of the function is independent of the data.
So let's suppose we have a function to add all elements in array.
int add(int[] array){
    int sum = 0;
    for (int i = 0; i < ARRAY_MAX_SIZE; i++){
        sum = sum + (i < array.length ? array[i] : 0);
    }
    return sum;
}
where ARRAY_MAX_SIZE is the maximum possible size of the array. I know this code is not efficient, and I don't want to discuss that. But the + operator is called the same number of times regardless of the size of the data.
Does that mean the complexity of this function is O(1)?
Yes. O(1) means constant time, not fast/efficient/optimal.
Big-O complexity ignores the complexity of constant steps. A division (slow) is just as "complex" as an increment (fast).
The actual answer is "it depends".
There are two different sets of things happening here:
ARRAY_MAX_SIZE times, you:
    increment and test the for loop counter
    add to the total
array.length times, you:
    access array[i]
ARRAY_MAX_SIZE - array.length times, you:
    load the constant zero
So the total runtime is
t = k_1 * ARRAY_MAX_SIZE + k_2 * n + k_3 * (ARRAY_MAX_SIZE - n)
So you look at how k_2 and k_3 compare. Are they basically equal? Then it's O(1). Is k_2 >> k_3? Then it's O(n).
Why might k_2 >> k_3? Because array[i] is accessing memory, and memory is comparatively very slow:
The only interesting part is that array[i] is used only n times. This means you add an operation to dereference the array and get the ith element only n times. I wouldn't count this normally, but wouldn't this maybe make it O(n)? Just playing devil's advocate.
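One rough way to probe how k_2 and k_3 compare in practice is to time the add method for different array lengths while ARRAY_MAX_SIZE stays fixed. The sketch below is my own and only indicative; JIT warm-up, caching, and measurement noise easily swamp small differences.

// Rough timing sketch: does the running time of add() grow with array.length
// while ARRAY_MAX_SIZE stays fixed? (Indicative only.)
public class AddTiming {
    static final int ARRAY_MAX_SIZE = 1_000_000;

    static int add(int[] array) {
        int sum = 0;
        for (int i = 0; i < ARRAY_MAX_SIZE; i++) {
            sum = sum + (i < array.length ? array[i] : 0);
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1_000, 100_000, 1_000_000}) {
            int[] data = new int[n];      // contents don't matter for the timing
            add(data);                    // warm-up call
            long start = System.nanoTime();
            add(data);
            long elapsed = System.nanoTime() - start;
            System.out.printf("n=%d  time=%d ns%n", n, elapsed);
        }
    }
}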
I would suppose this would be the true O(1) equivalent.
int add(int[] array){
    int sum = 0;
    int len = array.length;              // assumes the array is non-empty
    for (int i = 0; i < ARRAY_MAX_SIZE; i++){
        // Parenthesize the mask: & binds more loosely than +, so without the
        // parentheses the running sum itself would be zeroed out once i >= len.
        sum = sum + (array[i % len] & (i < len ? 0xFFFFFFFF : 0));
    }
    return sum;
}
If you had a maximum array size, then the complexity would be O(1). But this has other consequences. array.length would need to be less than ARRAY_MAX_SIZE, so array.length is bounded by a constant, making the following O(1) as well:
for(int i = 0; i < array.length; i++) {
    sum = sum + array[i];
}
So we'd usually just ignore any limits on array sizes to get useful results for algorithm complexities.
This is obviously assuming that ARRAY_MAX_SIZE is maximum possible size of array (as it was defined in the question), and not some other value.

Choosing k out of n

I want to choose k elements uniformly at random out of a possible n without choosing the same number twice. There are two trivial approaches to this.
Make a list of all n possibilities. Shuffle them (you don't need to shuffle all n numbers, just k of them, by performing the first k steps of Fisher-Yates). Choose the first k. This approach takes O(k) time (assuming allocating an array of size n takes O(1) time) and O(n) space. This is a problem if k is very small relative to n.
Store a set of seen elements. Choose a number at random from [0, n-1]. While the element is in the set, choose a new number. This approach takes O(k) space. The run-time is a little more complicated to analyze. If k = Θ(n) then the run-time is O(k*lg(k)) = O(n*lg(n)) because it is the coupon collector's problem. If k is small relative to n then it takes slightly more than O(k) because of the (albeit low) probability of choosing the same number twice. This is better than the above solution in terms of space but worse in terms of run-time.
My question:
is there an O(k) time, O(k) space algorithm for all k and n?
With a hash table that has O(1) operations, the partial Fisher-Yates method can be made to run in O(k) time and space. The trick is simply to store only the changed elements of the array in the hash table.
Here's a simple example in Java:
public static int[] getRandomSelection (int k, int n, Random rng) {
    if (k > n) throw new IllegalArgumentException(
        "Cannot choose " + k + " elements out of " + n + "."
    );

    HashMap<Integer, Integer> hash = new HashMap<Integer, Integer>(2*k);
    int[] output = new int[k];

    for (int i = 0; i < k; i++) {
        int j = i + rng.nextInt(n - i);
        output[i] = (hash.containsKey(j) ? hash.remove(j) : j);
        if (j > i) hash.put(j, (hash.containsKey(i) ? hash.remove(i) : i));
    }
    return output;
}
This code allocates a HashMap of 2×k buckets to store the modified elements (which should be enough to ensure that the hash table is never rehashed), and just runs a partial Fisher-Yates shuffle on it.
Here's a quick test on Ideone; it picks two elements out of three 30,000 times and counts the number of times each pair of elements gets chosen. For an unbiased shuffle, each ordered pair should appear approximately 5,000 (±100 or so) times, except for the impossible cases where both elements would be equal.
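A small usage sketch (the method from the answer is repeated verbatim here only so the example compiles on its own; the class and variable names are mine):

import java.util.Arrays;
import java.util.HashMap;
import java.util.Random;

// Usage sketch: pick 5 distinct values out of 0..999 using the
// hash-backed partial Fisher-Yates method from the answer above.
public class SelectionDemo {
    public static int[] getRandomSelection(int k, int n, Random rng) {
        if (k > n) throw new IllegalArgumentException(
                "Cannot choose " + k + " elements out of " + n + ".");
        HashMap<Integer, Integer> hash = new HashMap<>(2 * k);
        int[] output = new int[k];
        for (int i = 0; i < k; i++) {
            int j = i + rng.nextInt(n - i);
            output[i] = (hash.containsKey(j) ? hash.remove(j) : j);
            if (j > i) hash.put(j, (hash.containsKey(i) ? hash.remove(i) : i));
        }
        return output;
    }

    public static void main(String[] args) {
        int[] picked = getRandomSelection(5, 1000, new Random());
        System.out.println(Arrays.toString(picked)); // five distinct values in 0..999
    }
}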
Your second approach does not take Θ(k log k) time on average; it takes about n/(n-k+1) + n/(n-k+2) + ... + n/n operations, which is less than k * n/(n-k) since you have k terms, each smaller than n/(n-k). For k <= n/2, it takes under 2k operations on average. For k > n/2, you can choose a random subset of size n-k and take the complement. So this is already an O(k) average-time and O(k) space algorithm.
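A minimal Java sketch of that argument (the names and the set-based interface are my own; it returns an unordered set rather than a random ordering, since it is only meant to illustrate the O(k) time/space claim):

import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Rejection sampling with the complement trick described above: draw until
// k distinct values are found; if k > n/2, draw the n-k excluded values
// instead and return everything else (still O(k) work, since k > n/2).
public class RejectionSample {
    public static Set<Integer> sample(int k, int n, Random rng) {
        if (k > n) throw new IllegalArgumentException("k must be <= n");
        if (2 * k > n) {
            // Few values are excluded: pick the n-k values to leave out.
            Set<Integer> excluded = sample(n - k, n, rng);
            Set<Integer> result = new HashSet<>();
            for (int v = 0; v < n; v++) {
                if (!excluded.contains(v)) result.add(v);
            }
            return result;
        }
        Set<Integer> seen = new HashSet<>();
        while (seen.size() < k) {
            seen.add(rng.nextInt(n));   // duplicates are simply ignored
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(sample(3, 10, new Random()));
    }
}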
What you could use is the following algorithm (using javascript instead of pseudocode):
var k = 3;
var n = [1, 2, 3, 4, 5, 6];

// O(k) iterations
for (var i = 0, tmp; i < k; ++i) {
    // Random index, O(1)
    var index = Math.floor(Math.random() * (n.length - i));
    // Output, O(1)
    console.log(n[index]);
    // Swap and lookup, O(1)
    tmp = n[index];
    n[index] = n[n.length - i - 1];
    n[n.length - i - 1] = tmp;
}
In short, you swap the selected value with the last item and in the next iteration sample from the reduced subset. This assumes your original set is wholly unique.
The storage is O(n); if you wish to retrieve the numbers as a set, just refer to the last k entries of n.

Time Complexity - Calculating Worst Case For Algorithms

I am reading some information on time complexity, and I'm quite confused as to how the following time complexities are derived, and whether there is a particular set of rules or methods for working this out.
1)
Input: int n
for(int i = 0; i < n; i++){
    print("Hello World, ");
}
for(int j = n; j > 0; j--){
    print("Hello World");
}
Tight: 6n + 5
Big O: O(n)
2)
Input: l = array of comparable items
Output: l = array of sorted items
Sort:
for(int i = 0; i < l.length; i++){
    for(int j = 0; j < l.length; j++){
        if(l[i] > l[j]){
            Swap(l[i], l[j]);
        }
    }
}
return l;
Worst-case time complexity: 4n^2 + 3n + 2 = O(n^2)
For a given algorithm, time complexity (Big-O) is a way to give a fair estimate of the "total elementary operations performed by the algorithm" in relation to the given input size n.
Type-1
Let's say you have an algorithm like this:
a=n+1;
b=a*n;
There are 2 elementary operations in the above code; no matter how big n is, the computer will always perform 2 operations. Since the operation count does not depend on the size of the input, the Big-O of the above code is O(1).
Type-2
For this code:
for(int i = 0; i < n; i++){
    a = a + i;
}
I hope you can see that the Big-O here is O(n), as the elementary operation count directly depends on the size of n.
Type-3
Now what about this code:
//Loop-1
for(int i = 0; i < n; i++){
    print("Hello World, ");
}

//Loop-2
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++) {
        x = x + j;
    }
}
As you can see, Loop-1 is O(n) and Loop-2 is O(n^2). So it feels like the total complexity should be O(n) + O(n^2). But no, the time complexity of the above code is O(n^2). Why? Because we are trying to give a fair estimate of the number of elementary operations performed by the algorithm for a given input size n, one that another person can easily understand. With this logic, O(n) + O(n^2) becomes O(n^2), and O(n^2) + O(n^3) + O(n^4) becomes O(n^4)!
Again, you may ask: but how? How do the lower powers of Big-O become so insignificant when added to a higher power that we can completely omit them when describing the complexity of our algorithm to another human?
I will try to show the reason for this case: O(n) + O(n^2) = O(n^2).
Let's say n = 1000. Then the exact count for the O(n) part is 1,000 operations and the exact count for the O(n^2) part is 1000 * 1000 = 1,000,000, so the O(n^2) part is 1,000 times bigger than the O(n) part. That means your program will spend most of its execution time in the O(n^2) part, so it is not worth mentioning that your algorithm also does some O(n) work.
PS. Pardon my English :)
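To see the dominance argument above in running code, here is a tiny Java sketch (my own illustration): for n = 1000 the linear loop contributes 1,000 operations while the nested loop contributes 1,000,000, so the quadratic part accounts for almost all of the work.

// Tiny counting sketch for the O(n) + O(n^2) = O(n^2) argument above.
public class DominanceDemo {
    public static void main(String[] args) {
        int n = 1000;
        long linearOps = 0, quadraticOps = 0;

        for (int i = 0; i < n; i++) {          // Loop-1: O(n)
            linearOps++;
        }
        for (int i = 0; i < n; i++) {          // Loop-2: O(n^2)
            for (int j = 0; j < n; j++) {
                quadraticOps++;
            }
        }

        System.out.println("linear:    " + linearOps);     // 1000
        System.out.println("quadratic: " + quadraticOps);  // 1000000
        System.out.println("share done by the nested loop: "
                + (100.0 * quadraticOps / (linearOps + quadraticOps)) + "%");
    }
}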
In the first example, you iterate over n twice: the first loop runs i from 0 up to n, and the second runs j from n down to 0. So, to simplify, we can say it takes you 2n steps. When dealing with Big-O notation, keep in mind that we care about the bounds:
As a result, O(2n) = O(n)
and O(an + b) = O(n)
Input: int n                     // operation 1
for(int i = 0; i < n; i++){      // operation 2
    print("Hello World, ");      // operation 3
}
for(int j = n; j > 0; j--){      // operation 4
    print("Hello World");        // operation 5
}
As you can see, we have a total of 5 operations outside the loops.
Inside the first loop, we do three internal operations: checking whether i is less than n, printing "Hello World, ", and incrementing i.
Inside the second loop, we also have three internal operations.
So, the total number of operations we need is 3n (for the first loop) + 3n (for the second loop) + 5 (operations outside the loops). As a result, the total number of steps required is 6n + 5 (that is your tight bound).
As I mentioned before, O(an + b) = O(n), because once an algorithm is linear, a and b do not have much impact when n is very large.
So, your time complexity becomes O(6n + 5) = O(n).
You can use the same logic for the second example keeping in mind that two nested loops take n² instead of n.
I will slightly modify John's answer. Defining n is one constant operation; defining integer i and assigning it 0 is 2 constant operations; defining integer j and assigning it n is another 2 constant operations. Checking the loop conditions for i and j, the increments, and the print statements all depend on n, so the total is 3n + 3n + 5, which equals 6n + 5. Here we cannot skip any of the statements during execution, so the average-case running time will also be the worst-case running time, which is O(n).
