The question: given an array of size n with values in the range [1, 100], create a sorting algorithm and discuss its time, space, and optimality. I don't have a decent understanding of asymptotic analysis and wasn't sure how to answer this.
Algorithm:
void sort(int a[], int n) {
    int temp[100] = {0};                      // one counter per possible value 1..100
    for (int i = 0; i < n; i++)
        temp[a[i] - 1]++;                     // count occurrences of each value
    int c = 0;
    for (int i = 0; i < 100; i++)             // write each value back, in order,
        for (int j = 0; j < temp[i]; j++) {   // as many times as it occurred
            a[c] = i + 1;
            c++;
        }
}
It's O(n) time, more precisely O(n + k) with k = 100 possible values, and it uses O(k) extra space; see Counting Sort. It can beat the Ω(n log n) lower bound for comparison sorts because it never compares elements.
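A minimal driver to try it out (the sample values are arbitrary; it assumes the sort() above is in the same file):

#include <cstdio>

// Assumes the sort() above (counting sort for values in [1, 100]) is in scope.
int main() {
    int a[] = {42, 7, 100, 7, 1, 55};
    int n = sizeof(a) / sizeof(a[0]);
    sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);   // prints: 1 7 7 42 55 100
    printf("\n");
    return 0;
}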
Related
I have this code and cannot work out its Big-O... Thanks
for (i = 0; i < n; i++) {
    for (j = i; j < n; j++) {
        if (arr[j] % 2 != 0) {
            if (minodd > arr[j]) {
            }
        }
    }
}
One of the best ways to approach this problem is to break it down into smaller parts.
First, let's look at your inner loop:
for (j = i; j < n; j++) {
    if (arr[j] % 2 != 0) {        // O(1)
        if (minodd > arr[j]) {    // O(1)
        }
    }
}
The if-statements are O(1), i.e. constant time, so we can ignore them and we are left with just the inner for loop:
for (j = i; j < n; j++) {
    ...   // O(1) + O(1)
}
Since in the worst case it loops n times, we have n iterations of O(1) + O(1) work, which simplifies to O(n), known as linear time.
Next, let's zoom out and replace the inner loop with our new information:
for (i = 0; i < n; i++) {
    for (j = i; j < n; j++) {
        if (arr[j] % 2 != 0) {
            if (minodd > arr[j]) {
            }
        }
    }
}
becomes:
for (i = 0; i < n; i++) {
    O(n)
}
Since the outer for loop cycles n times in the worst case, and the inner for loop cycles up to n times in the worst case, we get O(n × n), or O(n²), which is known as quadratic time. (Strictly, the inner loop runs n − i times, but the total n + (n − 1) + … + 1 = n(n + 1)/2 is still O(n²).)
Doesn't this just go on forever?
You have i < n in your inner loop, so I think it's O(inf).
Now that you've updated the loop, I think #e2-e4 is right:
#include <stdio.h>

/* eqn(n) computes n + (n - 1) + ... + 1 = n * (n + 1) / 2 */
int eqn(int n)
{
    return n > 0 ? n + eqn(n - 1) : 0;
}

int main(int argc, char **argv)
{
    int i, j, n, v;
    v = 0;
    n = 5;
    for (i = 0; i < n; i++) {
        for (j = i; j < n; j++) {
            v++;   /* count inner-loop executions */
        }
    }
    /* prints: v = 15 ? 15 */
    printf("v = %d ? %d\n", v, eqn(n));
    return 0;
}
Can someone give me a good explanation of this Big-O problem?
void fun(int n) {
    for (int i = 2; i <= n; i++)
        for (int j = 2; j < 2 * n; j += i)
            System.out.println("Ok");
}
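For this loop, the inner loop runs roughly 2n/i times for each value of i, so the total work is about 2n(1/2 + 1/3 + … + 1/n), a harmonic sum, which gives O(n log n) rather than O(n²). A minimal C++ sketch (countOps is a made-up name) that counts the inner-loop executions to check this:

#include <cstdio>

// Count how many times the inner-loop body runs for a given n.
long long countOps(int n) {
    long long count = 0;
    for (int i = 2; i <= n; i++)
        for (int j = 2; j < 2 * n; j += i)
            count++;   // stands in for the println
    return count;
}

int main() {
    // ops/n grows slowly (roughly like log n), not linearly in n.
    for (int n = 1000; n <= 16000; n *= 2)
        printf("n = %6d  ops = %lld  ops/n = %.1f\n",
               n, countOps(n), (double)countOps(n) / n);
    return 0;
}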
I have the following code. I need to calculate this algorithm's complexity, but I have no idea where to start. The algorithm has 3 nested loops, so I guess its complexity is n^3, or am I wrong?
public static void RadixSort(DataArray data)
{
    IList<IList<int>> digits = new List<IList<int>>();
    for (int i = 0; i < 10; i++)
    {
        digits.Add(new List<int>());
    }
    for (int i = 0; i < data.Length; i++)
    {
        for (int j = 0; j < data.Length; j++)
        {
            int digit = (int)((data[j] % Math.Pow(10, i + 1)) / Math.Pow(10, i));
            digits[digit].Add((int)data[j]);
        }
        int index = 0;
        for (int k = 0; k < digits.Count; k++)
        {
            IList<int> selDigit = digits[k];
            for (int l = 0; l < selDigit.Count; l++)
            {
                data.Swap(index++, selDigit[l]);
                //data[index++] = selDigit[l];
            }
        }
        for (int k = 0; k < digits.Count; k++)
        {
            digits[k].Clear();
        }
    }
}
Calculating complexity is more involved than just looking at the number of nested loops. If you have a triple nested loop like this:
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        for (int k = 0; k < n; k++)
it will be O(n³), assuming n is not changing in the loop. However, if you consider your case:
for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++)
        for (int k = 0; k < m; k++)
the time complexity will instead be O(m²n).
And even the simplest sorting algorithms, like bubble sort, selection sort and insertion sort, are O(n²), so if your implementation is worse than that you're doing something wrong. The time complexity of radix sort is O(wn), where w is a measure of the size of the elements (the number of digits).
When uncertain about complexity, a reasonable approach is to add counters to the inner-loop code and at the end of the routine print out the counts. Next, vary the size of the input to see how the results change. The empirical results can immediately confirm or deny your analytic or intuited results.
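For example, a minimal sketch that applies this to the i/j loop analysed earlier in the thread, counting inner-loop executions while doubling n (the count roughly quadruples each time, which points to O(n²)):

#include <cstdio>

int main() {
    // Count inner-loop executions for a few input sizes.
    for (int n = 1000; n <= 8000; n *= 2) {
        long long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i; j < n; j++)
                count++;   // counter in the innermost body
        printf("n = %5d  count = %lld\n", n, count);
    }
    return 0;
}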
How can we sort an array A of positive integers in range [0,n^2/2] in O(n) time?
How can we sort the array if integers are in range [0,n^3/2] in O(n)?
I am new to algorithms. I would appreciate it if someone explained the significance of the range along with the solution.
#include <bits/stdc++.h>
using namespace std;

int main()
{
    unordered_map<int, int> map;
    int n;
    cin >> n;
    int *arr = new int[n];          // allocate only after n has been read
    for (int i = 0; i < n; i++)
        cin >> arr[i];

    int max = arr[0];               // find the largest value
    for (int i = 0; i < n; i++)
        if (arr[i] > max)
            max = arr[i];

    for (int i = 0; i < n; i++)     // mark every value that occurs
        map[arr[i]] = 1;

    int j = 0;
    for (int i = 0; i <= max; i++)  // sweep 0..max and write back marked values
        if (map[i] == 1)
            arr[j++] = i;

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}
Use a hash map and mark each value arr[i] by setting its key to 1. After that, loop from 0 up to the maximum value of the array found at the beginning, and whenever a key is marked, write it back into the array at index j. The resulting array is sorted.
Note: this only works on positive, non-repeating integers.
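On the significance of the range: every value below n²/2 can be written with at most two digits in base n, and every value below n³/2 with at most three. An LSD radix sort that does one stable counting-sort pass per base-n digit therefore runs in O(2n) or O(3n), i.e. O(n). A minimal sketch for the [0, n²/2) case (countingPass is a made-up helper; plain vectors assumed):

#include <cstdio>
#include <vector>
using namespace std;

// Stable counting sort of 'a' by the given base-n digit (0 = least significant).
static void countingPass(vector<long long> &a, long long n, int digit) {
    long long div = 1;
    for (int d = 0; d < digit; d++) div *= n;

    vector<long long> count(n, 0), out(a.size());
    for (long long x : a) count[(x / div) % n]++;
    for (long long b = 1; b < n; b++) count[b] += count[b - 1];   // prefix sums
    for (long long i = (long long)a.size() - 1; i >= 0; i--)      // stable, back to front
        out[--count[(a[i] / div) % n]] = a[i];
    a = out;
}

int main() {
    // Values in [0, n*n/2): at most two base-n digits, so two passes suffice.
    vector<long long> a = {12, 0, 7, 11, 3, 9, 17, 5};   // n = 8, range [0, 32)
    long long n = a.size();
    countingPass(a, n, 0);   // sort by least significant base-n digit
    countingPass(a, n, 1);   // then by most significant digit (stable)
    for (long long x : a) printf("%lld ", x);             // 0 3 5 7 9 11 12 17
    printf("\n");
    return 0;
}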
int max(int[] a, int m)
{
    int n = a.length;
    int t = a[0];
    for (int i = 1; i < n - m; i++)
        if (a[i] > t) t = a[i];
    return t;
}

int[] unknownSort(int[] a)
{
    int n = a.length;
    for (int i = n - 1, j = 0; i > 0; i--, j++)
        a[i] = max(a, j);
    return a;
}
It is much faster than bubble sort, but a little slower than insertion sort.
That's not a sorting algorithm; it doesn't work.
Looks like a clumsy implementation of selection sort: http://en.wikipedia.org/wiki/Selection_sort
It will be a Selection Sort if you write it correctly. Your algorithm will just corrupt the data in the array.
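For comparison, a correct selection sort swaps the chosen element into place instead of overwriting it (a minimal C++ sketch rather than a direct fix of the Java above):

#include <cstdio>
#include <utility>   // std::swap

// Selection sort: repeatedly select the largest remaining element
// and swap it to the end of the unsorted prefix.
void selectionSort(int a[], int n) {
    for (int i = n - 1; i > 0; i--) {
        int maxIdx = 0;
        for (int k = 1; k <= i; k++)
            if (a[k] > a[maxIdx]) maxIdx = k;
        std::swap(a[maxIdx], a[i]);   // swap, don't overwrite
    }
}

int main() {
    int a[] = {5, 2, 9, 1, 7};
    int n = sizeof(a) / sizeof(a[0]);
    selectionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   // prints: 1 2 5 7 9
    printf("\n");
    return 0;
}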