I have a question: my solution to the problem of finding a common item in two arrays was this one:
func commonItem2(list1 []string, list2 []string) bool {
    list1 = append(list1, list2...)
    sort.Strings(list1)
    for i := 0; i < len(list1)-1; i++ {
        if list1[i] == list1[i+1] {
            return true
        }
    }
    return false
} //O(n)
Would that really be O(n)?
Appending one array to the other is, as far as I know, O(n), but I don't know the complexity of the sort.Strings function in Go.
Any ideas?
The time complexity of the sort.Strings function in Go is O(n*log(n)), where n is the length of the slice.
As of today, internally, the sort function makes one call to data.Len to determine n, and O(n*log(n)) calls to data.Less and data.Swap. It uses quicksort for slices with more than 12 elements and shell sort for slices with 12 or fewer elements.
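So the solution above is dominated by the sort and is O(n*log(n)) overall, not O(n). If a genuinely O(n) check is the goal, a map used as a set avoids sorting entirely; here is a minimal sketch (commonItemSet is an illustrative name, not from the original post):

func commonItemSet(list1 []string, list2 []string) bool {
    // Record every string from list1 in a set (amortized O(1) per insert).
    seen := make(map[string]struct{}, len(list1))
    for _, s := range list1 {
        seen[s] = struct{}{}
    }
    // A single pass over list2 checks membership in O(1) per lookup.
    for _, s := range list2 {
        if _, ok := seen[s]; ok {
            return true
        }
    }
    return false
}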
It's mentioned in many places that bubble sort's worst case occurs when the array is sorted in reverse order. But we can get $\Theta(n^2)$ complexity in other cases too.
I have written the following code for bubble sort (C):
#include <stdbool.h>

// Small helper, defined here so the snippet compiles on its own.
static void swap(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}

void bubbleSort(int *arr, int size)
{
    for (int i = size; i > 1; i--) // i is the size of the unsorted prefix
    {
        bool swapped = false;
        for (int j = 0; j <= i - 2; j++)
        {
            if (arr[j] > arr[j + 1])
            {
                swap(arr + j, arr + j + 1);
                swapped = true;
            }
        }
        if (swapped == false)
        {
            break; // no swaps in this pass, so the array is sorted
        }
    }
}
Now, for the worst case, I was thinking that we should never hit the break, which means there must be at least one swap in each pass. So I started constructing example arrays, beginning with size = 2. One array satisfying the condition is {2,3,4,5,1}.
For any such array I get $\Theta(n^2)$ complexity, which is the same as the worst-case complexity described elsewhere.
But is this array really the worst case? For a descending-ordered array we have one swap for each comparison, whereas in my example we have a single swap in each pass. So the actual run time would be lower for my example (though both examples have the same number of comparisons).
I get that big theta is only an asymptotic approximation, but is the "worst case" defined with respect to the asymptotic approximation or the actual run time?
Edit: In the book "Data Structures with C" (Schaum's Outline Series), on page 2.15, it is written:
The time is measured by counting the number of key operations - in sorting and searching algorithms, the number of comparisons.
Both of the examples above have the same number of comparisons, so should they both be considered worst cases?
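To make the comparison-versus-swap distinction concrete, here is a small instrumented sketch (written in Go, to match the other snippets in this thread, and purely illustrative) that counts both operations for a reversed array and for the {2,3,4,5,1} example:

package main

import "fmt"

// bubbleSortCounts mirrors the C code above and reports how many
// comparisons and swaps it performed.
func bubbleSortCounts(arr []int) (comparisons, swaps int) {
    for i := len(arr); i > 1; i-- {
        swapped := false
        for j := 0; j <= i-2; j++ {
            comparisons++
            if arr[j] > arr[j+1] {
                arr[j], arr[j+1] = arr[j+1], arr[j]
                swaps++
                swapped = true
            }
        }
        if !swapped {
            break
        }
    }
    return comparisons, swaps
}

func main() {
    fmt.Println(bubbleSortCounts([]int{5, 4, 3, 2, 1})) // reversed: 10 comparisons, 10 swaps
    fmt.Println(bubbleSortCounts([]int{2, 3, 4, 5, 1})) // one swap per pass: 10 comparisons, 4 swaps
}

Both inputs cost exactly 10 comparisons for n = 5, so by the comparison-counting measure quoted from the book they are equally bad; they differ only in the number of swaps.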
https://leetcode.com/problems/k-closest-points-to-origin
When I try to solve this LeetCode problem, I am curious about how to find the best parameters for my algorithm.
(Runtime screenshot: https://i.stack.imgur.com/enVe3.png)
Here is my code:
func kClosest(points [][]int, k int) [][]int {
    res := make([][]int, 0, k)
    max := 0
    for i, v := range points {
        p := v[0]*v[0] + v[1]*v[1]
        if len(res) < k {
            if p > max {
                max = p
            }
            res = append(res, v)
            if i == k-1 {
                sort.Slice(res, func(i, j int) bool {
                    return res[i][0]*res[i][0]+res[i][1]*res[i][1] < res[j][0]*res[j][0]+res[j][1]*res[j][1]
                })
            }
            continue
        }
        if p > max {
            continue
        }
        res = append(res, v)
        // the 50 here is the parameter I want to optimize
        if len(res) > 50*k {
            sort.Slice(res, func(i, j int) bool {
                return res[i][0]*res[i][0]+res[i][1]*res[i][1] < res[j][0]*res[j][0]+res[j][1]*res[j][1]
            })
            res = res[:k]
            max = res[k-1][0]*res[k-1][0] + res[k-1][1]*res[k-1][1]
        }
    }
    sort.Slice(res, func(i, j int) bool {
        return res[i][0]*res[i][0]+res[i][1]*res[i][1] < res[j][0]*res[j][0]+res[j][1]*res[j][1]
    })
    res = res[:k]
    return res
}
I think you're using essentially the wrong algorithm -- you're repeatedly sorting the slice and truncating it when it gets too long, to try to avoid the O(n log n) runtime. This gives you O(n log k) performance overall (each sort is O(k log k), and you sort approximately n/k times). You can more easily get O(n log k) performance by maintaining a max-heap of size k to which you insert elements one by one (removing the current max first whenever the heap is already at size k and the new element is smaller than that max).
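A minimal sketch of that heap approach (the item and maxHeap types are illustrative names, not from the question; the import is Go's standard container/heap):

import "container/heap"

type item struct {
    dist  int
    point []int
}

type maxHeap []item

func (h maxHeap) Len() int            { return len(h) }
func (h maxHeap) Less(i, j int) bool  { return h[i].dist > h[j].dist } // largest distance at the root
func (h maxHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *maxHeap) Push(x interface{}) { *h = append(*h, x.(item)) }
func (h *maxHeap) Pop() interface{} {
    old := *h
    x := old[len(old)-1]
    *h = old[:len(old)-1]
    return x
}

func kClosestHeap(points [][]int, k int) [][]int {
    h := &maxHeap{}
    for _, p := range points {
        d := p[0]*p[0] + p[1]*p[1]
        switch {
        case h.Len() < k:
            heap.Push(h, item{d, p})
        case d < (*h)[0].dist:
            heap.Pop(h) // drop the current farthest point
            heap.Push(h, item{d, p})
        }
    }
    res := make([][]int, 0, k)
    for _, it := range *h {
        res = append(res, it.point)
    }
    return res
}

Each point costs at most O(log k) heap work, so the whole pass is O(n log k) regardless of how the input is ordered.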
But best is to use QuickSelect to select the smallest k elements. It's O(n) time, and the question is obviously geared towards this as a solution because it doesn't require the answer to be in any particular order. It's not in the standard library, but you can easily find implementations online if you can't write it yourself. As a matter of optimization, it's probably better to precompute a slice of the vector lengths coupled with indexes into the original slice and quickselect that rather than the original slice, but it's hard to know without profiling.
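A quickselect sketch along those lines (Lomuto partition with the last element as pivot, so O(n) expected time but O(n^2) worst case; kClosestQuickSelect is an illustrative name):

func kClosestQuickSelect(points [][]int, k int) [][]int {
    dist := func(p []int) int { return p[0]*p[0] + p[1]*p[1] }
    lo, hi := 0, len(points)-1
    for lo < hi {
        // Partition [lo, hi] around the last element; i ends at the pivot's final position.
        pivot := dist(points[hi])
        i := lo
        for j := lo; j < hi; j++ {
            if dist(points[j]) < pivot {
                points[i], points[j] = points[j], points[i]
                i++
            }
        }
        points[i], points[hi] = points[hi], points[i]
        switch {
        case i == k-1 || i == k:
            return points[:k] // the first k entries are now the k closest
        case i < k:
            lo = i + 1 // the k-th closest point lies to the right of the pivot
        default:
            hi = i - 1 // the k-th closest point lies to the left of the pivot
        }
    }
    return points[:k]
}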
I am currently solving coding problems on LeetCode and ran into a problem solution I don't really understand. We are meant to return the index of a target inside a non-decreasing ordered integer array with distinct numbers. If the array does not include the target, we return the index it should be at. The constraint is that the function should run in O(log n) time. So I kept thinking about some divide-and-conquer way to do it, but then saw this solution:
var searchInsert = function(nums, target) {
    for (let i = 0; i < nums.length; i++) {
        if (nums[i] >= target) {
            return i
        }
    }
    return nums.length
};
How is this O(log n)? Isn't the worst case that the target is the array maximum, and we therefore iterate over the whole array?
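For reference, the loop above is a plain linear scan, so its worst case is O(n), exactly as the question suspects. The divide-and-conquer version it hints at is an ordinary binary search; a minimal sketch, written in Go to match the other snippets in this thread:

func searchInsert(nums []int, target int) int {
    lo, hi := 0, len(nums) // hi is exclusive
    for lo < hi {
        mid := lo + (hi-lo)/2
        if nums[mid] < target {
            lo = mid + 1
        } else {
            hi = mid
        }
    }
    return lo // first index with nums[lo] >= target, or len(nums) if none
}

Each iteration halves the search range, which is what gives the O(log n) bound.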
The basic approach to deleting multiple keys from an array of maps in Go would be to use nested loops: an outer loop iterating over the array of maps and an inner loop over the slice of keys to be deleted. Is there a way to do this without nested loops? I'm just trying to figure out a way to get better time complexity than O(n^2).
Removing a slice of keys within a Map with time complexity better than O(n^2) in Go
What is n? Why is the time complexity O(n^2)?
Consider real code and real cases:
package main

func mapsDeleteUniqueKeys(maps []map[string]int, keys []string) {
    // iterations = len(keys) × len(maps)
    for _, k := range keys {
        for _, m := range maps {
            delete(m, k)
        }
    }
}

func mapsDeleteDuplicateKeys(maps []map[string]int, keys []string) {
    // iterations = len(keys) + (len(unique) × len(maps))
    unique := make(map[string]struct{}, len(keys))
    for _, k := range keys {
        unique[k] = struct{}{}
    }
    for k := range unique {
        for _, m := range maps {
            delete(m, k)
        }
    }
}

func main() {}
What is the growth function for the number of keys? What is the growth function for the number of maps? What is the growth function for the number of keys times the number of maps?
Is the time spent on each iteration significant?
In your case: What do the keys represent? In your case: What do the maps represent?
What worst-case time complexity do you expect to encounter in the real world?
What Go benchmarks have you run? What were the results?
The time complexity appears to be O(len(keys) × len(maps)), or O(k × m).
In practice len(maps), the number of maps, is likely small relative to len(keys), the number of keys. The number of maps may also be constant. Therefore, the time complexity is likely near O(len(keys)) or O(n).
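To make the benchmark question concrete, here is a minimal sketch of a Go benchmark for mapsDeleteUniqueKeys (the sizes are illustrative assumptions; it would live alongside the code above in a _test.go file):

package main

import (
    "strconv"
    "testing"
)

func BenchmarkMapsDeleteUniqueKeys(b *testing.B) {
    const nMaps, nKeys = 8, 1000 // illustrative sizes
    keys := make([]string, nKeys)
    for i := range keys {
        keys[i] = "k" + strconv.Itoa(i)
    }
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        b.StopTimer()
        // Rebuild the maps each iteration so every run deletes real entries.
        maps := make([]map[string]int, nMaps)
        for m := range maps {
            maps[m] = make(map[string]int, nKeys)
            for _, k := range keys {
                maps[m][k] = 1
            }
        }
        b.StartTimer()
        mapsDeleteUniqueKeys(maps, keys)
    }
}

Varying nMaps and nKeys shows which of the two factors actually dominates for your workload.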
I have calculated the answer to be n^(2/3). Could anyone tell me what the worst-case Big O is?
Whether or not it is the worst case depends on the algorithm you are using.
But in general, if the number of operations is n^(2/3), then the complexity in O-notation is O(n^(2/3)).
Let me explain this in a bit more detail (so that we can drop the word "general" and give a definitive answer).
Consider a simple algorithm that finds a specific number in an array of n elements.
If your algorithm is something like this:
find(arr, number) {
    boolean found = false;
    for (i = 0; i < arr.length; i++) {
        if (arr[i] == number) {
            found = true;
        }
    }
    return found;
}
The time complexity of finding a number in an array using the above algorithm is always O(n). By always I mean that whatever the input is, the above algorithm performs n iterations (where n is the length of the array).
Now compare and contrast this algorithm with the following:
find(arr, number) {
    for (i = 0; i < arr.length; i++) {
        if (arr[i] == number) {
            return true;
        }
    }
    return false;
}
Now the time complexity depends on the input. (If you have an array with 10^8 elements and the first element matches the one you are searching for, you are done and can return immediately without iterating over the whole array.) So the worst-case complexity here is still O(n), but the best case is O(1).
So it basically depends on how your algorithm works. I assume that when you say "exact" you mean the first version of find described above. If that is the case, then yes, the worst-case time complexity is O(n^(2/3)).