I am currently working on a program and want to convert an ArrayList to an array in less than O(n) time.
for (int i = 0; i < list.size(); i++) {
    if (list.get(i) != null) {
        arr[i] = list.get(i);
    }
}
If n is the length of the list, then O(n) means in this case that you look at each element of the list once and copy it.
Now you say you want to convert it in less than O(n). That means you have to ignore some elements of the list; otherwise it would be O(n) again. But which ones do you ignore? Remember, you are not allowed to look at every element, or you are back at O(n).
Let's say you know the list contains booleans, where n/2 are true and the rest are false. In the best case all true values would be in the first half of the list.
Now you could stop iterating at n/2 of the list, but you still need to append the false values to your array, so you are at O(n) again.
Let's make another assumption: you can always ignore the last value of the list. Then you only iterate n-1 times, which is O(n-1), but big-O notation drops constant terms, so it is O(n) again.
It is not possible to copy all n elements of a list into an array in less than O(n).
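For comparison, here is a minimal Java sketch (not from the question) using the standard toArray call; it also has to visit and copy every element, so it is Θ(n) as well:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
        // toArray copies all n elements internally, so the conversion is Theta(n) too.
        Integer[] arr = list.toArray(new Integer[0]);
        System.out.println(arr.length);   // prints 3
    }
}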
I have a question about the time complexity of a solution to the "two number sum" problem.
You get an array of integers and should return whether any two distinct numbers in the array sum up to a given target.
The problem can be found here: https://leetcode.com/problems/two-sum/
Here is one possible solution:
vector<int> twoSum(vector<int> &numbers, int target)
{
    //Key is the number and value is its index in the vector.
    unordered_map<int, int> hash;
    vector<int> result;
    for (int i = 0; i < numbers.size(); i++) {
        int numberToFind = target - numbers[i];
        //if numberToFind is found in map, return them
        if (hash.find(numberToFind) != hash.end()) {
            //+1 because indices are NOT zero based
            result.push_back(hash[numberToFind] + 1);
            result.push_back(i + 1);
            return result;
        }
        //number was not found. Put it in the map.
        hash[numbers[i]] = i;
    }
    return result;
}
The "for loop" time complexity is O(n) because it goes through n array elements.
The find() function's average time complexity is O(1); you can look that up here: http://www.cplusplus.com/reference/unordered_map/unordered_map/find/
Since 1 * n is still n, the average case time complexity is still just O(n).
I thought when we talk about time complexity we always mean the worst-case time complexity.
Operations on a hash map would then be O(n) in the worst case (or O(log n) at best).
In a coding interview would you say the algorithm above runs in O(n^2) or O(n)?
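For reference, here is a minimal Java sketch of the same single-pass hash-map idea (the names are chosen here, not taken from the post). Each get/put is expected O(1), which is where the average-case O(n) comes from; the worst-case discussion above is about what happens when the hashing degenerates:
import java.util.HashMap;
import java.util.Map;

public class TwoSumSketch {
    // Returns 1-based indices of two numbers summing to target, or null if none exist.
    static int[] twoSum(int[] numbers, int target) {
        Map<Integer, Integer> seen = new HashMap<>();   // value -> index
        for (int i = 0; i < numbers.length; i++) {
            Integer j = seen.get(target - numbers[i]);  // expected O(1) lookup
            if (j != null) {
                return new int[] { j + 1, i + 1 };
            }
            seen.put(numbers[i], i);                    // expected O(1) insert
        }
        return null;
    }
}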
Currently I have this sorting algorithm:
public static void sort(int[] A, int i, int j) {
    if (i == j) return;
    if (j == i + 1) {
        if (A[i] > A[j]) {
            int temp = A[i];
            A[i] = A[j];
            A[j] = temp;
        }
    } else {
        int k = (int) Math.floor((j - i + 1) / 3);
        sort(A, i, j - k);
        sort(A, i + k, j);
        sort(A, i, j - k);
    }
}
It's sorting correctly; however, the asymptotic complexity is quite high: the recurrence is T(n) = 3T(n - floor(n/3)) + O(1), i.e. T(n) = 3T(2n/3) + O(1), which solves to Theta(n^(log_{3/2} 3)) ≈ Theta(n^2.71).
Therefore, I'm currently thinking of replacing the third recursive call sort(A, i, j-k) with a newly written, iterative method to optimize the algorithm. However, I'm not really sure how to approach the problem and would love to gather some ideas. Thank you!
If I understand this correctly, you first sort the first 2/3 of the list, then the last 2/3, then the first 2/3 again. This actually works, since any misplaced items (items in the first or last 2/3 that actually belong in the last or first 1/3) are shifted and then correctly sorted in the next pass of the algorithm.
There are certainly two points that can be optimized:
- In the last step, the first 1/3 and the second 1/3 (and thus the first and second half of the region to sort) are already in sorted order, so instead of doing a full sort you could just use a merge algorithm, which runs in O(n) (see the sketch after this answer).
- Instead of sorting the first and last 2/3 and then merging the elements from the overlap into the first 1/3, as explained above, you could sort the first 1/2 and last 1/2 and then merge those parts, without overlap; the total length of array processed is the same (2/3 + 2/3 + 2/3 vs. 1/2 + 1/2 + 2/2), but the merging part will be faster than the sorting part.
However, after the second "optimization" you will in fact have more or less re-invented Merge Sort.
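A minimal sketch of the merge step mentioned in the first point above (illustrative only, not part of the original answer): it merges the two already-sorted runs A[i..mid] and A[mid+1..j] in O(n) using a temporary buffer. Assuming, as argued above, that A[i..i+k-1] and A[i+k..j-k] are each sorted after the first two recursive calls, the third call sort(A, i, j-k) could be replaced by merge(A, i, i+k-1, j-k):
// Merges the sorted runs A[i..mid] and A[mid+1..j] in O(n) time and extra space.
static void merge(int[] A, int i, int mid, int j) {
    int[] tmp = new int[j - i + 1];
    int left = i, right = mid + 1, t = 0;
    while (left <= mid && right <= j) {
        tmp[t++] = (A[left] <= A[right]) ? A[left++] : A[right++];
    }
    while (left <= mid) tmp[t++] = A[left++];
    while (right <= j)  tmp[t++] = A[right++];
    System.arraycopy(tmp, 0, A, i, tmp.length);
}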
Suppose we have an m*n matrix in which each row is sorted. I only know that the order of the best algorithm for this problem is O(m(log m + log n)).
(It was a test question and this is the stated order.)
But I don't know how this algorithm works.
One idea could be like this.
If I ask you what the rank of a given number x is in the original matrix, how do you answer that question?
One answer could be:
Binary-search each row for the first occurrence of x (or the first greater element), and then add up the individual ranks.
int rank = 1;
for (int i = 0; i < m; ++i) {
    // lower_bound returns an iterator; its distance from begin() is the
    // number of elements in row i that are smaller than x.
    rank += std::lower_bound(matrix[i].begin(), matrix[i].end(), x) - matrix[i].begin();
}
This can be done in O(m * log n) time (m binary searches on arrays of size n).
Now we just need to do a binary search on x (between 0 and INT_MAX, or matrix[0][k]) to find the k-th rank. Since log(INT_MAX) is a constant (about 31), the overall time complexity is theoretically still O(m * log n). One optimization that can be done is to use intelligent ranges in place of matrix[i].begin(), matrix[i].end().
PS: Still wondering about the O(m * (log m + log n)) or O(m * log(mn)) solution.
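Here is a minimal Java sketch of the approach described above, assuming the task is to find the k-th smallest element of the row-sorted matrix (the method names are mine, not from the post). The inner count does m binary searches, O(m log n), and the outer binary search over the value range multiplies that by about log(INT_MAX) ≈ 31:
public class RowSortedMatrixSelect {

    // Counts how many matrix elements are <= x. Each row is sorted,
    // so one binary search (upper bound) per row suffices: O(m log n) total.
    static long countLessOrEqual(int[][] matrix, int x) {
        long count = 0;
        for (int[] row : matrix) {
            int lo = 0, hi = row.length;            // first index with row[index] > x
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (row[mid] <= x) lo = mid + 1; else hi = mid;
            }
            count += lo;
        }
        return count;
    }

    // Binary search over the value range: the smallest value v with
    // countLessOrEqual(v) >= k is the k-th smallest element (k is 1-based).
    static int kthSmallest(int[][] matrix, long k) {
        int lo = Integer.MIN_VALUE, hi = Integer.MAX_VALUE;
        while (lo < hi) {
            int mid = (int) (((long) lo + hi) >> 1);   // avoids int overflow
            if (countLessOrEqual(matrix, mid) < k) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        int[][] m = { { 1, 4, 7 }, { 2, 5, 8 }, { 3, 6, 9 } };
        System.out.println(kthSmallest(m, 5));   // prints 5
    }
}
Since the value-range factor is constant for 32-bit integers, this matches the O(m * log n) analysis above, but it does not by itself explain the O(m(log m + log n)) bound from the question.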
I have this algorithm here:
sumup(int n) {
    int s = ???, k = 0;
    while (k != n) {
        k = s * (2 * k - 1) * (2 * k - 1);
        s = k;
    }
    return s;
}
And I need to find out what its purpose is. It doesn't even seem to work with most numbers, and once it's done it just returns n again anyway.
Does anybody have any idea what this algorithm is used for?
I assumed it was for square roots, but it doesn't really seem to work either way.
At the end of each loop iteration, s and k are equal. Before the next iteration, k != n is checked, which is therefore equivalent to checking s != n. So the loop runs until s == n holds, and then n is returned. In other words, the function gets the input n, runs for some time, and returns n at the end.
The questions are:
Does it terminate? Under what conditions?
Only if s and n fit together. E.g., if 0 < n < s holds, the algorithm will not terminate.
How long does it take, if it terminates?
k is initialized with 0 and becomes the value of s after the first iteration (because (2*0 - 1)^2 = 1). From then on the value is roughly cubed every iteration, since k_new = k*(2k - 1)^2 ≈ 4k^3. Solving s^(3^x) = n for the number of iterations x leads to a complexity of Θ(log log n).
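As a hypothetical trace (assuming the "???" is replaced by s = 2; that starting value is purely an assumption for illustration, not from the question), the following Java snippet shows how fast the value grows:
public class SumupTrace {
    public static void main(String[] args) {
        long s = 2, k = 0;                       // long instead of int, to delay overflow
        for (int iter = 1; iter <= 4; iter++) {  // four passes of the original loop body
            k = s * (2 * k - 1) * (2 * k - 1);
            s = k;
            System.out.println("after iteration " + iter + ": s = " + s);
        }
        // Prints 2, 18, 22050, 42881115712050: the value is roughly cubed each step,
        // which is why only about log log n iterations can occur before reaching n.
    }
}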
An array of integers contains elements such that each element is 1 more or 1 less than its preceding element. Given a number, we need to determine the index of that number's first occurrence in the array.
I need to optimize beyond a linear search. It's not homework.
My algorithm would be like this:
1. p = 0
2. if A[p] == x then idx = p and the algorithm is finished, else go to the next step
3. set p += |x - A[p]|
4. go to step 2
Say A[p] > x. Then, since adjacent items of A differ by exactly 1, the first occurrence of x is at least (A[p] - x) indices away from p, so those positions can be skipped. The same principle applies when A[p] < x.
int p = 0, idx = -1;
while (p < len && p >= 0) {
    if (A[p] == x) {
        idx = p;            // first occurrence found, stop searching
        break;
    }
    p += abs(x - A[p]);     // safe to skip this many positions
}
Time complexity: the worst case is O(n). I expect the average case to be better than O(n) (I think it is O(log n), but I am not sure about that).
Running time: it is never slower than a plain linear search, since every comparison lets you skip ahead by at least one position.
You can't be faster than linear in the worst case. The following code should be about as fast as you can go:
int findFirstIndex(int num, int[] array) {
    int i = 0;
    while (i < array.length) {
        if (array[i] == num) return i;
        i += Math.abs(array[i] - num);    // skip positions that cannot contain num
    }
    return -1;
}
But it's still O(array.length) in the worst case. Think, for example, of searching for 2 in an array that alternates 0, 1, 0, 1, …: each probe lets you skip at most two positions, so you still visit about half of the array.
Start from the first position and consider the difference between the searched number N and the first number: if array[0] == N we are finished; otherwise we can jump abs(array[0] - N) positions ahead. Just repeat this until the end of the array is reached.