Consider the above procedure, which finds the location LOC1 of the largest element and the location LOC2 of the second largest element in an array DATA with n>1 elements. Let C(n) denote the number of comparisons during the execution of the procedure.
I was unable to work out the following points related to it:
Find C(n) for the best case.
Find C(n) for the worst case.
Find C(n) for the average case for n=4, assuming all arrangements of the given elements in DATA are equally likely.
#include <iostream>
using namespace std;

// Finds the location of the largest (loc1) and second largest (loc2)
// elements in arr, assuming n > 1.
void findd(int arr[], int n, int loc1, int loc2)
{
    int first = arr[0], second = arr[1];
    if (first < second)
    {
        int temp = first;
        first = second;
        second = temp;
        loc2 = 0, loc1 = 1;
    }
    for (int i = 2; i < n; i++)
    {
        if (first < arr[i])
        {
            second = first;
            first = arr[i];
            loc2 = loc1;
            loc1 = i;
        }
        else if (second < arr[i])
        {
            second = arr[i];
            loc2 = i;
        }
    }
    cout << "index of largest element: " << loc1 + 1
         << ", index of second largest element: " << loc2 + 1 << "\n";
}

int main()
{
    int n;
    cin >> n;
    int arr[n]; // variable-length arrays are a compiler extension; prefer std::vector
    for (int i = 0; i < n; i++)
    {
        cin >> arr[i];
    }
    findd(arr, n, 0, 1);
}
So, according to the solution and this algorithm, for n elements the code has:
O(n) worst-case complexity, as it will iterate n times if the max number is at the end;
O(n) best-case complexity, as even if the max number is at the beginning it still has to be compared with all the elements in the array;
O(n) average-case complexity;
O(1) space complexity.
Hope you like the answer. :)
I am trying to sort strings alphabetically in linear time and thought about using tries for this. My question is: what is the time complexity of running a pre-order traversal on a trie? Is it O(n)?
You have to be a little careful with the way you measure complexity in this case. A lot of times, people pretend that sorting N strings with a comparison-based sort takes O(N log N) time, but that is not really true in the worst case unless the length of the strings is bounded. It is the expected time if the strings are randomized, however, so it's not a bad approximation for many use cases.
If you want to account for possible long strings with long common prefixes, then you change the meaning of N to refer to the total size of the input, including all the strings. With this new definition, you can sort a list of strings in O(N) time.
Inserting the strings into a trie, or better a radix tree (https://en.wikipedia.org/wiki/Radix_tree) and then doing a preorder traversal is one way, and yes that works in O(N) time, where N is the total size of the input.
But it's faster and easier to do a radix sort: https://en.wikipedia.org/wiki/Radix_sort The Most-Significant-Digit-First variant works best with variable-length inputs.
Radix sort can be applied in this case to sort the strings in O(n). Refer to the following code, implemented in C++:
#include <iostream>
#include <string>
using namespace std;

class RadixSort {
public:
    static char charAt(string s, int n) {
        return s[n];
    }

    // Stable counting sort of arr by the character at position `index`.
    // Strings shorter than index+1 map to slot 0, a sentinel smaller
    // than `lower`, so they sort first.
    static void countingSort(string arr[], int n, int index, char lower, char upper) {
        int range = (upper - lower) + 2;
        int countArray[range];
        string tempArray[n];
        for (int i = 0; i < range; i++)
            countArray[i] = 0;
        // Increase the count for the char at `index` of each string.
        for (int i = 0; i < n; i++) {
            int charIndex = ((int)arr[i].length() <= index) ? 0 : (charAt(arr[i], index) - lower + 1);
            countArray[charIndex]++;
        }
        // Prefix sums: countArray[c] now holds one past the last output
        // index for character c.
        for (int i = 1; i < range; i++)
            countArray[i] += countArray[i - 1];
        // Place elements back to front to keep the sort stable.
        for (int i = n - 1; i >= 0; i--) {
            int charIndex = ((int)arr[i].length() <= index) ? 0 : (charAt(arr[i], index) - lower + 1);
            tempArray[countArray[charIndex] - 1] = arr[i];
            countArray[charIndex]--;
        }
        for (int i = 0; i < n; i++)
            arr[i] = tempArray[i];
    }

    // LSD radix sort: counting-sort by each character position, starting
    // from the last position of the longest string.
    static void radixSort(string arr[], int n, char lower, char upper) {
        int maxIndex = 0;
        for (int i = 0; i < n; i++) {
            if ((int)arr[i].length() - 1 > maxIndex)
                maxIndex = arr[i].length() - 1;
        }
        for (int i = maxIndex; i >= 0; i--)
            countingSort(arr, n, i, lower, upper);
    }
};

int main() {
    string arr[] = {"a", "aa", "aaa", "kinga", "bishoy", "computer", "az"};
    int n = sizeof(arr) / sizeof(arr[0]);
    RadixSort::radixSort(arr, n, 'a', 'z');
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}
No, it is not O(n); it is Omega(k log(k) n).
Without any other restriction (and this is the case, as I understand your question), it is just a comparison-based sorting algorithm.
Sorting an array of length k is in Omega(k log(k)), and doing it n times, with no connection between the runs, leads to Omega(k log(k) n).
You can read more here:
https://www.geeksforgeeks.org/lower-bound-on-comparison-based-sorting-algorithms/
If you regard k as bounded, because there is no English word longer than 10^1000000 characters (which is probably larger than the number of atoms on Earth), then sorting an array of bounded length is O(1), and doing it n times leads to O(n).
You gain a lot from dealing with infinity, but sometimes you have to pay it back...
I got the following question:
Assume you are given an array A with n distinct numbers, and assume you can store the n elements in a new data structure (one that could help you solve the question below), where the time to store them is bounded by O(n).
Write an algorithm for a function max(i, j) that gets as input two indices i and j with i less than j, and returns as output the maximum among A[i], A[i+1], ..., A[j]. max(i, j) should be bounded by O(log(n)).
I thought about a binary tree but could not think of a way to store the numbers. One option I did think of, which takes O(n) store time, is creating a 'tournament tree', but I failed to find an algorithm for max using this kind of data structure.
This is a homework question, but I couldn't find a tag for it.
This is the most typical application of a segment tree.
Given an array of numbers, you can build a segment tree on top of it in O(n) time and perform queries on intervals/ranges in O(log n) time.
Some common example applications: finding the sum of elements from index i to j where 0 <= i <= j <= n - 1, finding the maximum/minimum of elements from index i to j where 0 <= i <= j <= n - 1, etc.
You can solve it using priority queues.
#include <iostream>
using namespace std;

int deq[100], length = 0;

// Sift the value at index i up until the max-heap property holds
// (deq is 1-indexed: the parent of i is i/2).
void increase_value(int arr[], int i, int val)
{
    arr[i] = val;
    while (i > 1 && arr[i / 2] < arr[i])
    {
        int temp = arr[i];
        arr[i] = arr[i / 2];
        arr[i / 2] = temp;
        i = i / 2;
    }
}

void insert_element(int arr[], int val)
{
    length = length + 1;
    increase_value(arr, length, val);
}

int main()
{
    int arr[10000];
    int size, lw, up;
    cin >> size;
    for (int i = 1; i <= size; i++)
    {
        cin >> arr[i];
    }
    cin >> lw >> up;
    // Build a max-heap from the elements of the query range;
    // the maximum ends up at the root.
    for (int i = lw; i <= up; i++)
    {
        insert_element(deq, arr[i]);
    }
    cout << deq[1] << endl;
}
A series of k blocks is given (k1, k2, ..., kk). Each block starts at position ai, ends at position bi, and has height 1. The blocks are placed consecutively. If a block overlaps another one, it is attached on top of it. My task is to calculate the height of the highest tower of blocks.
I have created an algorithm whose time complexity is about O(n^2), but I know there is a faster solution using a skip list.
#include <iostream>

struct Brick
{
    int begin;
    int end;
    int height = 1;
};

bool DoOverlap(Brick a, Brick b)
{
    return (a.end > b.begin && a.begin < b.end);
}

// For each brick, look for the highest earlier brick it overlaps and
// stack on top of it; O(n^2) overall.
int theHighest(Brick bricks[], int n)
{
    int height = 1;
    for (int i = 1; i < n; i++)
    {
        for (int j = 0; j < i; j++)
        {
            if (bricks[i].height <= bricks[j].height && DoOverlap(bricks[i], bricks[j]))
            {
                bricks[i].height = bricks[j].height + 1;
                if (bricks[i].height > height)
                    height = bricks[i].height;
            }
        }
    }
    return height;
}
You can simply use two pointers after sorting the blocks by their starting positions; if starting positions match, sort by ending positions. Then use the two pointers to find the maximum height.
Time complexity: O(N log N)
#include <bits/stdc++.h>
using namespace std;
#define ii pair<int,int>

// Sort by starting position; break ties by ending position.
bool modified_sort(const pair<int,int> &a, const pair<int,int> &b)
{
    if (a.first == b.first) {
        return (a.second < b.second);
    }
    return (a.first < b.first);
}

int main() {
    vector<ii> blocks;
    int n; // number of blocks
    int a, b;
    cin >> n;
    for (int i = 0; i < n; i++) {
        cin >> a >> b;
        blocks.push_back(ii(a, b));
    }
    sort(blocks.begin(), blocks.end(), modified_sort);
    // Sliding window: drop blocks from the front that end before the
    // current block starts; the window size bounds the tower height.
    int start = 0, end = 0;
    int max_height = 0;
    while (end < n) {
        while (start < end && blocks[start].second <= blocks[end].first)
        {
            start++;
        }
        max_height = max(max_height, (end - start + 1));
        end++;
    }
    cout << max_height << endl;
    return 0;
}
Here is a straightforward solution (without skip lists):
Create an array heights.
Iterate through the blocks. For every block:
Check the existing entries in the heights array for the positions the current block occupies, iterating over them, and determine their maximum.
Increase the values in the heights array for the current block's positions to that maximum + 1.
Keep track of the highest tower you have built during the scan.
This problem is isomorphic to a graph traversal. Each interval (block) is a node of the graph. Two blocks are connected by an edge iff their intervals overlap (a stacking possibility). The example you give has graph edges
1 2
1 3
2 3
2 5
and node 4 has no edges
Your highest stack is isomorphic to the longest cycle-free path in the graph. This problem has well-known solutions.
BTW, I don't think your n^2 algorithm works for all orderings of blocks. Try a set of six blocks with one overlap each, such as the intervals [n, n+3] for n in {2, 4, 6, 8, 10, 12}. Feed all permutations of these blocks to your algorithm, and see whether it comes up with a height of 6 for each.
Complexity
I think the highest complexity is likely to be sorting the intervals to accelerate marking the edges. The sort will be O(n log n). Adding edges is O(n d) where d is the mean degree of the graph (and n*d is the number of edges).
I don't have the graph traversal algorithm solidly in mind, but I expect that it's O(d log n).
It looks like you can store your already processed blocks in a skip list, ordered by starting position. Then, to find overlapping blocks at each step, you search this skip list, which is O(log n) on average. You find the first overlapping block, then iterate to the next, and so on, until you meet the first non-overlapping block.
So on average you get O(n * (log(n) + m)), where m is the mean number of overlapping blocks. In the worst case you still get O(n^2).
This is a practice question for the understanding of Divide and conquer algorithms.
You are given an array of N sorted integers. All the elements are distinct, except that one element is repeated twice. Design an O(log N) algorithm to find that element.
I get that the array needs to be divided and checked to see whether an equal counterpart is found at the next index, some variant of binary search, I believe. But I can't find any solution or guidance on this.
You cannot do it in O(log n) time, because at any step, even if you divide the array into two parts, you cannot decide which part to keep for further processing and which to discard.
On the other hand, if the array contains consecutive numbers, then by looking at an index and the value at that index we can decide whether the duplicate is in the left or the right half of the array.
D&C should look something like this
int Twice(int a[], int i, int j) {
    if (i >= j)
        return -1;
    int k = (i + j) / 2;
    if (k + 1 <= j && a[k] == a[k + 1])
        return k;
    if (k - 1 >= i && a[k] == a[k - 1])
        return k - 1;
    int m = Twice(a, i, k - 1);
    int n = Twice(a, k + 1, j);
    return m != -1 ? m : n;
}

int Twice(int a[], int n) {
    return Twice(a, 0, n - 1);
}
But it has complexity O(n). As said above, it is not possible to find an O(lg n) algorithm for this problem.
I am working on finding the kth smallest element in a min-heap. I have code for this whose complexity is O(k log k), and I tried to improve it to O(k).
Below is the code.
#include <climits>

struct heap {
    int *array;
    int count;
    int capacity;
};

int kthsmallestelement(struct heap *h, int i, int k) {
    if (i < 0 || i >= h->count)
        return INT_MIN;
    if (k == 1)
        return h->array[i];
    k--;
    int j = 2 * i + 1; // left child
    int m = 2 * i + 2; // right child
    // Guard m before reading it; descend into the smaller (or only) child first.
    if (m >= h->count || h->array[j] < h->array[m])
    {
        int x = kthsmallestelement(h, j, k);
        if (x == INT_MIN)
            return kthsmallestelement(h, m, k);
        return x;
    }
    else
    {
        int x = kthsmallestelement(h, m, k);
        if (x == INT_MIN)
            return kthsmallestelement(h, j, k);
        return x;
    }
}
My code is traversing k elements in heap and thus complexity is O(k).
Is it correct?
Your code, and in fact its entire approach, are completely wrong, IIUC.
In a classic min-heap, the only thing you know is that the values are non-decreasing along each path from the root down to a leaf. There are no other constraints; in particular, there are no constraints between different paths.
It follows that the k-th smallest element can be anywhere among the first 2^k elements. If you are just using the entire heap's array, built and maintained using the classic heap algorithm, any solution will necessarily be Ω(min(n, 2^k)). Anything below that requires additional requirements on the array's structure, an additional data structure, or both.