What is the running time of this powerset algorithm?

I have an algorithm to compute the power set of a set by iterating over all bit patterns from 0 to 2^n - 1:
public static <T> void findPowerSetsBitwise(Set<T> set, Set<Set<T>> results) {
    T[] arr = (T[]) set.toArray();
    int length = arr.length;
    for (int i = 0; i < 1 << length; i++) {
        int k = i;
        Set<T> newSubset = new HashSet<T>();
        int index = arr.length - 1;
        // each set bit of i selects one element for this subset
        while (k > 0) {
            if ((k & 1) == 1) {
                newSubset.add(arr[index]);
            }
            k >>= 1;
            index--;
        }
        results.add(newSubset);
    }
}
My question is: what is the running time of this algorithm? The outer loop runs 2^n times, and in each iteration the while loop runs about lg(i) times, so I think the running time is
T(n) = sum over i from 1 to 2^n of lg(i)
I don't know how to simplify this further. I know the power set can be computed in O(2^n) time (not space) recursively, so I'm wondering whether the method above is better or worse time-wise, since it is better in space.

sum of lg(i) for i in [1, 2^n]
  = lg(1) + lg(2) + ... + lg(2^n)
  = lg(1 * 2 * ... * 2^n)
  = lg((2^n)!)
  > lg(2^(2^n))
  = 2^n
Thus, the suggested solution is worse in terms of time complexity than the recursive O(2^n) one.
EDIT:
To be exact, we know that for every k, log(k!) is in Theta(k log k); thus for k = 2^n we get that lg((2^n)!) is in Theta(2^n * log(2^n)) = Theta(n * 2^n).

Without attempting to solve or simulate it, it is easy to see that this is worse than O(2^n), because it is 2^n multiplied by a factor that is greater than one (for all i > 2) and that increases as n increases, so it is clearly not a constant.

Applying Stirling's approximation, it is actually O(n * 2^n).
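To see the Theta(n * 2^n) figure concretely, here is a small standalone check (my own sketch, not part of the original post) that counts the total number of inner while-loop iterations the bitwise method performs for small n and compares it with n * 2^n:

public class PowersetWorkCount {
    public static void main(String[] args) {
        for (int n = 1; n <= 20; n++) {
            long iterations = 0;
            for (long i = 0; i < (1L << n); i++) {
                long k = i;
                while (k > 0) {          // same shape as the inner loop in the question
                    k >>= 1;
                    iterations++;
                }
            }
            System.out.printf("n=%2d  inner iterations=%10d  n*2^n=%10d%n",
                    n, iterations, (long) n << n);
        }
    }
}

The iteration count grows in proportion to n * 2^n (it comes out to roughly (n-1) * 2^n), matching the Theta(n * 2^n) bound derived above.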

Related

Is there an infinite loop in this happy number solution on LeetCode?

Here is the Happy Number question on LeetCode.
This is one of the solutions, using Floyd's cycle detection algorithm.
int digitSquareSum(int n) {
    int sum = 0, tmp;
    while (n) {
        tmp = n % 10;
        sum += tmp * tmp;
        n /= 10;
    }
    return sum;
}

bool isHappy(int n) {
    int slow, fast;
    slow = fast = n;
    do {
        slow = digitSquareSum(slow);    // tortoise: one step per round
        fast = digitSquareSum(fast);    // hare: two steps per round
        fast = digitSquareSum(fast);
    } while (slow != fast);
    if (slow == 1) return 1;
    else return 0;
}
Is there a chance of an infinite loop?
There would only be an infinite loop if iterating digitSquareSum could grow without bound. But when it is called with an n-digit number the result is always smaller than 100n, so for inputs with 4 or more digits the result is always smaller than the input, and the values cannot run off to infinity.
All of that ignores the fact that integers in most languages cannot be arbitrarily large; you would get an integer overflow if the result could grow mathematically towards infinity. The result would then likely be wrong, but there would still not be an infinite loop.
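As a quick sanity check (my own sketch, in Java rather than the C++ above), the snippet below verifies the contraction numerically: for a 4-digit number the digit-square sum is at most 4 * 81 = 324, which is already below 1000, so repeated application can never escape upwards once the value has four or more digits.

public class DigitSquareSumBound {
    // Same digit-square-sum computation as in the question, ported to Java.
    static int digitSquareSum(int n) {
        int sum = 0;
        while (n > 0) {
            int d = n % 10;
            sum += d * d;
            n /= 10;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Every 4-digit input maps strictly below 1000, hence strictly below itself.
        int worst = 0;
        for (int n = 1000; n <= 9999; n++) {
            worst = Math.max(worst, digitSquareSum(n));
        }
        System.out.println("max digitSquareSum over [1000, 9999] = " + worst);  // prints 324
    }
}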

Find the greatest prime number with 7 as the last digit in {1, ..., n}

Let's suppose n is an integer around 250000. Using Java, I need to find the greatest prime number that ends with 7 and belongs to {1, ..., n}. Also, I need to keep an eye on computational complexity and try to lower it as much as I can.
So I was thinking of using the Sieve of Eratosthenes for n, and then just checking my array of boolean values:
int start = (n % 10 < 7 ? n - (n % 10 + 3) : n - (n % 10 - 7));  // largest number <= n ending in 7
for (int i = start; i >= 0; i -= 10) {
    if (primes[i])
        return i;
}
It would keep the whole thing simple, I guess, but I was wondering what a more efficient approach would be. Unless there is a way to easily avoid having an array, but I couldn't think of any.
Below you will find my implementation of the Sieve of Eratosthenes for finding the prime numbers between 1 and 250000, and how I use it to filter out all the prime numbers ending in 7.
The overall time complexity is O(N log log N), the standard bound for the sieve, since all the work happens inside the sieve itself.
import java.io.*;
import java.util.*;

public class Main {
    public static void main(String[] args) {
        int N = 250000;
        ArrayList<Integer> primeWithEnding7 = new ArrayList<Integer>();
        int maxPrimeNum7 = 0;
        // Note: the flag is inverted here -- isPrime[i] == false means "i is still
        // considered prime", and composites get marked true in the inner loop below.
        boolean[] isPrime = new boolean[N + 1];
        for (int i = 2; i <= N; i++) {
            isPrime[i] = false;
        }
        for (int i = 2; i <= N; i++) {
            if (!isPrime[i]) {
                int rem = i % 10;
                if (rem == 7) {
                    maxPrimeNum7 = Math.max(maxPrimeNum7, i);
                    primeWithEnding7.add(i);
                }
                for (int j = i + i; j <= N; j += i) {
                    isPrime[j] = true;    // mark multiples of i as composite
                }
            }
        }
        // Print all the prime numbers ending in 7
        for (int i : primeWithEnding7) {
            System.out.print(i + " ");
        }
        System.out.println();
        System.out.println("Max number is " + maxPrimeNum7);
    }
}
Now let's take an example to understand why this algorithm works.
Suppose N = 30. When the loop starts from 2, if 7 were not prime it would already have been marked as non-prime by the inner loop over j; the fact that i reaches 7 still unmarked proves that it is prime. So I keep an ArrayList to collect only those primes that end in 7, and because I use the % operator to compute the last digit, that step is O(1). The total time complexity therefore stays that of the sieve itself, O(N log log N).
Let me know if I have made any mistake in the algorithm, and I will fix it.
Hope this helps!
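For comparison, here is a minimal sketch (my own illustration, not from the post above) of the descending scan the question describes: sieve once, then step down by 10 from the largest candidate ending in 7. The sieve here uses the conventional orientation, with composite[i] == true meaning i is not prime.

public class LargestPrimeEndingIn7 {
    // Returns the largest prime <= n whose last digit is 7, or -1 if none exists.
    static int largestPrimeEndingIn7(int n) {
        boolean[] composite = new boolean[n + 1];        // false = still considered prime
        for (int i = 2; (long) i * i <= n; i++) {
            if (!composite[i]) {
                for (int j = i * i; j <= n; j += i) {
                    composite[j] = true;
                }
            }
        }
        // Largest number <= n ending in 7 (same expression as in the question).
        int start = (n % 10 < 7) ? n - (n % 10 + 3) : n - (n % 10 - 7);
        for (int i = start; i >= 7; i -= 10) {
            if (!composite[i]) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(largestPrimeEndingIn7(250000));
    }
}

This keeps the sieve's O(N log log N) time and answers the question directly without storing the whole list of primes that end in 7.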

Interview question - find the kth smallest element in an unsorted array

This problem asks to find the kth smallest element in an unsorted array of non-negative integers.
The main problem here is the memory limit: only constant extra space may be used.
First I tried an O(n^2) method [without any extra memory], which gave me TLE.
Then I tried to use a priority queue [extra memory], which gave me MLE.
Any idea how to solve the problem with constant extra space and within the time limit?
You can use the O(n^2) method with some pruning, which makes the program behave more like O(n log n) in practice.
Declare two variables: low = the largest value known to sit at a position before k, and high = the smallest value known to sit at a position after k.
Keep track of the low and high values you have already processed.
Whenever a new value comes, check whether it lies in the [low, high] range. If yes, process it; otherwise skip it.
That's it. I think it will pass both the TLE and the MLE.
Have a look at my code:
int low = 0, high = 1e9;
for (int i = 0; i < n; i++)           // n is the total number of elements
{
    if (!(A[i] >= low && A[i] <= high))   // A is the array in which the elements are saved
        continue;
    int cnt = 0, cnt1 = 0;   // cnt counts strictly smaller values, cnt1 counts equal values (duplicates are possible)
    for (int j = 0; j < n; j++)
    {
        if (i != j && A[i] > A[j])
            cnt++;
        if (A[i] == A[j])
            cnt1++;
        if (cnt > k)
            break;
    }
    if (cnt + cnt1 < k)
        low = A[i] + 1;
    else if (cnt >= k)
        high = A[i] - 1;
    if (cnt < k && (cnt + cnt1) >= k)
    {
        return A[i];
    }
}
You can use an in-place selection algorithm (quickselect).
The idea is similar to quicksort, but you recurse only into the relevant part of the array, not all of it. Note that the algorithm can be implemented with O(1) extra space quite easily, since its recursive call is a tail call.
This gives an O(n) solution in the average case (just be sure to pick the pivot at random so you don't fall into pre-designed edge cases such as a sorted list). It can be improved to worst-case O(n) using the median-of-medians technique, but with significantly worse constants.
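A minimal iterative quickselect sketch (my own, not from the answer above), using a random pivot and a loop in place of the tail call so the extra space stays O(1) apart from the input array; k is taken as 1-based.

import java.util.Random;

public class QuickSelect {
    private static final Random RNG = new Random();

    // Returns the kth smallest element (k is 1-based); the array is reordered in place.
    static int kthSmallest(int[] a, int k) {
        int lo = 0, hi = a.length - 1;
        int target = k - 1;                     // index the kth smallest has in sorted order
        while (lo < hi) {
            int p = partition(a, lo, hi);       // a[p] is now in its final sorted position
            if (p == target) return a[p];
            if (p < target) lo = p + 1;         // continue only in the right part ...
            else hi = p - 1;                    // ... or only in the left part
        }
        return a[lo];
    }

    // Lomuto partition with a random pivot; returns the pivot's final index.
    private static int partition(int[] a, int lo, int hi) {
        int r = lo + RNG.nextInt(hi - lo + 1);
        swap(a, r, hi);                         // move the random pivot to the end
        int pivot = a[hi], store = lo;
        for (int i = lo; i < hi; i++) {
            if (a[i] < pivot) {
                swap(a, i, store++);
            }
        }
        swap(a, store, hi);
        return store;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] data = {7, 10, 4, 3, 20, 15};
        System.out.println(kthSmallest(data, 3));   // prints 7
    }
}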
Binary search on the answer value for this problem.
Two major observations here:
Given that all values in the array are non-negative ints, their range is [0, 2^31 - 1]. That is your search space.
Given a value x, I can always tell in O(n) whether the kth smallest element is at most x or greater than x, by counting how many elements are less than or equal to x.
A rough pseudocode:
start = 0, end = 2^31 - 1
while start <= end
    x = (start + end) / 2
    less = number of elements less than or equal to x
    if less > k
        ans = x          // x can still be the answer when duplicates push the count past k
        end = x - 1
    elif less < k
        start = x + 1
    else
        ans = x
        end = x - 1
return ans
Hope this helps.
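A direct Java translation of that pseudocode (my own sketch, with the two branches that set ans merged into one), assuming non-negative int values and a 1-based k:

public class KthSmallestBinarySearch {
    // Returns the kth smallest element (k is 1-based) via binary search on the value range.
    static int kthSmallest(int[] a, int k) {
        long start = 0, end = Integer.MAX_VALUE, ans = -1;
        while (start <= end) {
            long x = start + (end - start) / 2;   // avoids overflow of (start + end)
            int less = 0;                         // number of elements <= x
            for (int v : a) {
                if (v <= x) less++;
            }
            if (less >= k) {                      // x is a candidate; look for a smaller one
                ans = x;
                end = x - 1;
            } else {
                start = x + 1;
            }
        }
        return (int) ans;
    }

    public static void main(String[] args) {
        int[] data = {7, 10, 4, 3, 20, 15};
        System.out.println(kthSmallest(data, 3));  // prints 7
    }
}

Each counting pass is O(n) and the value range is halved about 31 times, so this runs in O(n log(max value)) with O(1) extra space.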
I believe I found a solution that is similar to that of #AliAkber but is slightly easier to understand (I keep track of fewer variables).
It passed all tests on InterviewBit
Here's the code (Java):
public int kthsmallest(final List<Integer> a, int k) {
    int lo = Integer.MIN_VALUE;
    int hi = Integer.MAX_VALUE;
    int champ = -1;
    for (int i = 0; i < a.size(); i++) {
        int iVal = a.get(i);
        int count = 0;
        if (!(iVal > lo && iVal < hi)) continue;
        for (int j = 0; j < a.size(); j++) {
            if (a.get(j) <= iVal) count++;
            if (count > k) break;
        }
        if (count > k && iVal < hi) hi = iVal;
        if (count < k && iVal > lo) lo = iVal;
        if (count >= k && (champ == -1 || iVal < champ))
            champ = iVal;
    }
    return champ;
}

Fibonacci time complexity for a non-recursive function

Hey guys, I need some help with this piece of code. Working out its time complexity has become a problem because I don't know the exact way to analyse it. Any help would do.
int fib(int n)
{
    int prev = -1;
    int result = 1;
    int sum = 0;
    for (int i = 0; i <= n; ++i)
    {
        sum = result + prev;
        prev = result;
        result = sum;
    }
    return result;
}
I am not sure exactly what you are asking; maybe you can clarify.
The time complexity of this algorithm is O(n). The loop body executes n + 1 times: i starts at zero and increments by 1 on every iteration until it exceeds n.
I hope this helps.

Time complexity for code and an order-of-magnitude improvement

I have the following problem:
For the following code, with reason, give the time complexity of the function.
Write a function which performs the same task but which is an order-of-magnitude improvement in time complexity. A function with greater (time or space) complexity will not get credit.
Code:
int something(int[] a) {
    // n denotes a.length and temp is an int; both are taken as given in the exercise
    for (int i = 0; i < n; i++)
        if (a[i] % 2 == 0) {
            temp = a[i];
            for (int j = i; j > 0; j--)
                a[j] = a[j - 1];
            a[0] = temp;
        }
}
I'm thinking that since the temp = a[i] assignment is done n times in the worst case, it contributes n; and a[j] = a[j-1] runs n(n+1)/2 times, contributing (n^2 + n)/2. Summing them gives n + 0.5n^2 + 0.5n; removing the constant factors leaves 2n + n^2, so the complexity is O(n^2).
For the order-of-magnitude improvement:
int something(int[] a) {
    String answer = "";
    for (int i = 0; i < n; i++) {
        if (a[i] % 2 == 0) answer = a[i] + answer;
        else answer = answer + a[i];
    }
    for (int i = 0; i < n; i++)
        a[i] = answer.charAt(i);
}
The code inside the first for-loop executes n times and the code inside the second for-loop n times; summing gives 2n, i.e. O(n).
Is this correct? Or am I doing something wrong?
I suppose your function is meant to rearrange the list so that all the even numbers come first, followed by the odd numbers.
For the first function the complexity is O(n^2), as you have worked out.
For the second function the complexity is O(n) if the + operator used for appending runs in constant time. Note, however, that in Java String concatenation with + copies the whole string each time, so to genuinely reach O(n) you would build the answer with a StringBuilder or collect the even and odd values separately.
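A minimal sketch (my own, not part of the original question or answer) of an O(n) rewrite that avoids strings entirely; it reproduces the behaviour of the original shifting code (evens moved to the front, most recently encountered first, odds keeping their relative order) using an O(n) temporary array, comparable to the extra space of the string-based version:

public class EvensFirst {
    // Moves even values to the front (in reverse order of appearance, like the
    // original shifting version) and keeps the odd values in their original order.
    static void something(int[] a) {
        int n = a.length;
        int[] result = new int[n];
        int idx = 0;
        for (int i = n - 1; i >= 0; i--) {       // evens, most recently encountered first
            if (a[i] % 2 == 0) result[idx++] = a[i];
        }
        for (int i = 0; i < n; i++) {            // odds, original order
            if (a[i] % 2 != 0) result[idx++] = a[i];
        }
        System.arraycopy(result, 0, a, 0, n);
    }

    public static void main(String[] args) {
        int[] a = {3, 8, 5, 2, 4, 7};
        something(a);
        System.out.println(java.util.Arrays.toString(a));  // [4, 2, 8, 3, 5, 7]
    }
}

Both passes and the final copy are linear, so this is O(n) time with O(n) extra space, matching the space used by the string-building version.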
