Hey guys, I need some help with this piece of code. Working out its time complexity has become a problem because I don't know the exact method for analyzing it. Any help would be appreciated.
int fib(int n)
{
    int prev = -1;
    int result = 1;
    int sum = 0;
    for (int i = 0; i <= n; ++i)
    {
        sum = result + prev;
        prev = result;
        result = sum;
    }
    return result;
}
I am not sure exactly what you are asking; maybe you can clarify.
Assuming you want the time complexity: it is O(n). The loop body executes n + 1 times, since i starts at zero and increments by 1 each iteration until it exceeds n, and each iteration does a constant amount of work.
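If it helps to see it concretely, here is a small self-contained driver (my own sketch, not from the question) that prints the first few values; each call fib(n) performs exactly n + 1 loop iterations:
#include <iostream>

int fib(int n)
{
    int prev = -1;
    int result = 1;
    int sum = 0;
    for (int i = 0; i <= n; ++i)
    {
        sum = result + prev;
        prev = result;
        result = sum;
    }
    return result;
}

int main()
{
    // Expected output: 0 1 1 2 3 5 8
    for (int n = 0; n <= 6; ++n)
        std::cout << fib(n) << ' ';
    std::cout << '\n';
}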
I hope this helps
What is the time complexity of the following algorithm?
I am aware that generating permutations without checking for duplicates takes O(n!), but I am specifically interested in the shouldSwap function, which is shown below:
// Returns true if str[curr] does not match any of the
// characters in str[start..curr-1]
bool shouldSwap(char str[], int start, int curr)
{
    for (int i = start; i < curr; i++)
        if (str[i] == str[curr])
            return false;
    return true;
}
If n is the size of the char array str[], then the time complexity of shouldSwap() is O(n): the loop runs curr - start times, which is at most n, and each iteration does a constant amount of work.
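As a concrete illustration (the driver and the string "ABA" are made up, not part of the original question), checking index 2 scans at most curr - start characters:
#include <iostream>

// Returns true if str[curr] does not match any of the
// characters in str[start..curr-1]
bool shouldSwap(char str[], int start, int curr)
{
    for (int i = start; i < curr; i++)
        if (str[i] == str[curr])
            return false;
    return true;
}

int main()
{
    char str[] = "ABA";
    // str[2] == 'A' also appears at index 0, so the swap should be skipped.
    std::cout << std::boolalpha << shouldSwap(str, 0, 2) << '\n'; // prints false
}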
Here is the Happy Number question on LeetCode.
This is one of the solutions, using Floyd's cycle detection algorithm:
int digitSquareSum(int n) {
    int sum = 0, tmp;
    while (n) {
        tmp = n % 10;
        sum += tmp * tmp;
        n /= 10;
    }
    return sum;
}

bool isHappy(int n) {
    int slow, fast;
    slow = fast = n;
    do {
        slow = digitSquareSum(slow);
        fast = digitSquareSum(fast);
        fast = digitSquareSum(fast);
    } while (slow != fast);
    return slow == 1;
}
Is there a chance of an infinite loop?
There would only be an infinite loop if repeatedly applying digitSquareSum could grow without bound. It cannot: when called with an n-digit number, the result is at most 81n (each digit contributes at most 9^2 = 81), which is smaller than 100n. For example, a 4-digit input is at least 1000 but maps to at most 4 * 81 = 324, so for every number with 4 or more digits the result is strictly smaller than the input. The iteration therefore falls into a bounded range, must eventually repeat a value, and Floyd's algorithm detects that cycle and terminates.
All of that ignores the fact that integers in most languages cannot be arbitrarily large: if the result could grow without bound mathematically, you would get an integer overflow. The result would then likely be wrong, but there would still not be an infinite loop.
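As a quick illustration (the driver below is my own sketch, not part of the original answer), an unhappy number such as 4 falls into the cycle 4 -> 16 -> 37 -> 58 -> 89 -> 145 -> 42 -> 20 -> 4, yet the slow and fast pointers still meet, so the loop terminates:
#include <iostream>

int digitSquareSum(int n) {
    int sum = 0, tmp;
    while (n) {
        tmp = n % 10;
        sum += tmp * tmp;
        n /= 10;
    }
    return sum;
}

bool isHappy(int n) {
    int slow, fast;
    slow = fast = n;
    do {
        slow = digitSquareSum(slow);
        fast = digitSquareSum(fast);
        fast = digitSquareSum(fast);
    } while (slow != fast);
    return slow == 1;
}

int main() {
    std::cout << std::boolalpha
              << isHappy(19) << '\n'  // true: 19 -> 82 -> 68 -> 100 -> 1
              << isHappy(4) << '\n';  // false: 4 enters the cycle above
}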
As per problem statement:
Write a solution with O(n) time complexity and O(1) additional space
complexity. Given an array a that contains only numbers in the range
from 1 to a.length, find the first duplicate number for which the
second occurrence has the minimal index. In other words, if there is
more than one duplicated number, return the one whose second occurrence
has a smaller index than the second occurrence of any other duplicated
number. If there are no such elements, return -1.
I wrote my code according to the constraints, but I'm still getting a time complexity error. Here's my solution:
int firstDuplicate(std::vector<int> a)
{
    long long int n = a.size();
    int cnt = 0;
    for (long long int i = 0; i < n; i++)
    {
        //cout << a[i] << " " << cnt << endl;
        if (a[i] == n || a[i] == -n)
        {
            cnt++;
            if (cnt > 1)
                return n;
        }
        else if (a[abs(a[i])] < 0)
            return -a[i];
        else
            a[a[i]] = -a[a[i]];
    }
    return -1;
}
Can anyone suggest a better algorithm, or tell me what's wrong with this one?
The algorithm for this problem is as follows:
While iterating through a, for each element we negate a[abs(a[i]) - 1] to mark the value abs(a[i]) as seen. If at some point we find that a[abs(a[i]) - 1] is already negative, that value has occurred before, so we return abs(a[i]). If we reach the end of the array without hitting an already-negative entry, we return -1.
I feel like this is what you were trying to get at, but you might have overcomplicated things. In code this is:
int firstDuplicate(std::vector<int> a)
{
    for (int i = 0; i < a.size(); i += 1)
    {
        if (a[abs(a[i]) - 1] < 0)
            return abs(a[i]);
        else
            a[abs(a[i]) - 1] = -a[abs(a[i]) - 1];
    }
    return -1;
}
This runs in O(n) time, with O(1) space complexity.
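For example (the input below is made up, not from the problem statement), in {2, 1, 3, 5, 3, 2} the second occurrence of 3 comes before the second occurrence of 2, so the answer is 3; a minimal self-contained driver:
#include <cstdlib>
#include <iostream>
#include <vector>

int firstDuplicate(std::vector<int> a)
{
    for (std::size_t i = 0; i < a.size(); i += 1)
    {
        if (a[std::abs(a[i]) - 1] < 0)                  // value abs(a[i]) seen before
            return std::abs(a[i]);
        a[std::abs(a[i]) - 1] = -a[std::abs(a[i]) - 1]; // mark value abs(a[i]) as seen
    }
    return -1;
}

int main()
{
    std::vector<int> a = {2, 1, 3, 5, 3, 2};
    std::cout << firstDuplicate(a) << '\n'; // prints 3
}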
You can use the indexes to mark whether an element has occurred before or not: if the value stored at an element's index is negative, then that element has already occurred.
int firstDuplicate(std::vector<int> a)
{
    long long int n = a.size();
    for (long long int i = 0; i < n; i++)
    {
        int idx = abs(a[i]) - 1;   // value abs(a[i]) maps to index abs(a[i]) - 1
        if (a[idx] < 0)            // already marked, so abs(a[i]) is the first repeated value
            return abs(a[i]);
        a[idx] *= -1;              // mark the value as seen
    }
    return -1;
}
For example: given the sequence 1 3 2 4, I have to find the number of increasing subsequences.
I came to know about the BIT (Binary Indexed Tree) approach, which gives an O(n log2 n) solution compared to O(n^2).
The code is as follows:
void update(int idx, int val){
    while (idx <= MaxVal){
        tree[idx] += val;
        idx += (idx & -idx);
    }
}
To read
int read(int idx){
    int sum = 0;
    while (idx > 0){
        sum += tree[idx];
        idx -= (idx & -idx);
    }
    return sum;
}
I can't understand how they are using the BIT here. Can you please help me?
The binary indexed tree's read(idx) function returns the sum of everything stored at positions 1 through idx.
So we insert the elements one by one, from index 0 to n - 1 (n is the number of elements). The BIT stores, at position data[i], the number of increasing subsequences that end with the value data[i].
For each element, before inserting it, read(data[i] - 1) tells us how many increasing subsequences already added end in a value smaller than data[i]; call this x. The current element can extend any of those x subsequences or stand on its own, so the number of increasing subsequences ending at this element is x + 1.
After calculating that count, we add it into the BIT at position data[i] and accumulate it into the result.
Pseudocode:
long result = 0;
BIT tree = //initialize BIT tree
for(int i = 0; i < n; i++){
    long x = tree.read(data[i] - 1); // subsequences already added that end in a value < data[i]
    long endingHere = x + 1;         // extend any of them, or start a new one at data[i]
    result += endingHere;
    tree.update(data[i], endingHere);
}
As the update and read functions have O(log n) time complexity, the above algorithm has O(n log n) time complexity.
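Here is a small self-contained C++ sketch of the same idea; the names BIT, update, read, and countIncreasingSubsequences are my own, and it assumes the values are a permutation of 1..n (in general you would coordinate-compress the values first):
#include <iostream>
#include <vector>

struct BIT {
    std::vector<long long> tree;
    explicit BIT(int maxVal) : tree(maxVal + 1, 0) {}

    // add val at position idx (1-based)
    void update(int idx, long long val) {
        for (; idx < (int)tree.size(); idx += idx & -idx)
            tree[idx] += val;
    }

    // sum of everything stored at positions 1..idx
    long long read(int idx) const {
        long long sum = 0;
        for (; idx > 0; idx -= idx & -idx)
            sum += tree[idx];
        return sum;
    }
};

long long countIncreasingSubsequences(const std::vector<int>& data) {
    BIT tree(data.size());                           // values assumed to be 1..n
    long long result = 0;
    for (int v : data) {
        long long endingHere = tree.read(v - 1) + 1; // extend a smaller-ending one, or start new
        result += endingHere;
        tree.update(v, endingHere);
    }
    return result;
}

int main() {
    std::vector<int> data = {1, 3, 2, 4};
    std::cout << countIncreasingSubsequences(data) << '\n'; // prints 11
}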
I have an algorithm to compute the power set of a set by using each integer from 0 to 2^n - 1 as a bit mask:
public static <T> void findPowerSetsBitwise(Set<T> set, Set<Set<T>> results){
    T[] arr = (T[]) set.toArray();
    int length = arr.length;
    for(int i = 0; i < 1<<length; i++){
        int k = i;
        Set<T> newSubset = new HashSet<T>();
        int index = arr.length - 1;
        while(k > 0){
            if((k & 1) == 1){
                newSubset.add(arr[index]);
            }
            k >>= 1;
            index--;
        }
        results.add(newSubset);
    }
}
My question is: what is the running time of this algorithm? The outer loop runs 2^n times, and in iteration i the inner while loop runs about lg(i) times. So I think the running time is
T(n) = sum from i = 1 to 2^n of lg(i)
But I don't know how to simplify this further. I know this can be solved recursively in O(2^n) time (not space), so I'm wondering whether the method above is better or worse time-wise, given that it is better in space.
sigma(lg(i)) for i in (1, 2^n)
= lg(1) + lg(2) + ... + lg(2^n)
= lg(1 * 2 * ... * 2^n)
= lg((2^n)!)
> lg(2^(2^n))
= 2^n
Thus, the suggested solution is worse in terms of time complexity than the recursive O(2^n) one.
EDIT:
To be exact, we know that for every k, log(k!) is in Theta(k log k); thus for k = 2^n we get that lg((2^n)!) is in Theta(2^n * log(2^n)) = Theta(n * 2^n).
Without attempting to solve or simulate, it is easy to see that this is worse than O(2^n), because the sum is 2^n multiplied by an average term that is greater than one (for all i > 2) and grows as n increases, so it is clearly not a constant factor.
Applying Stirling's formula, it is actually O(n * 2^n).
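For completeness, the Stirling step spelled out (this is just the standard approximation lg(k!) ≈ k lg k - k lg e applied with k = 2^n):
lg((2^n)!) ≈ 2^n * lg(2^n) - 2^n * lg(e)
           = n * 2^n - 2^n * lg(e)
           = Theta(n * 2^n)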