Is there an infinite loop in this happy number solution on LeetCode? - algorithm

Here is the Happy Number question on LeetCode.
This is one of the solutions, using Floyd's cycle detection algorithm.
int digitSquareSum(int n) {
    int sum = 0, tmp;
    while (n) {
        tmp = n % 10;
        sum += tmp * tmp;
        n /= 10;
    }
    return sum;
}

bool isHappy(int n) {
    int slow, fast;
    slow = fast = n;
    do {
        slow = digitSquareSum(slow);
        fast = digitSquareSum(fast);
        fast = digitSquareSum(fast);
    } while (slow != fast);
    return slow == 1;
}
Is there a chance of an infinite loop?

There would only be an infinite loop if iterating digitSquareSum could grow without bound. But when it is called with an n-digit number, the result is always smaller than 100n (each digit contributes at most 9^2 = 81), so this does not happen: for n >= 4 digits, the result is always smaller than the number used as input.
All of that ignores the fact that integers in most languages cannot be arbitrarily large; you would get an integer overflow if the result could grow mathematically to infinity. The result would then likely be wrong, but there would still not be an infinite loop.
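The boundedness argument can be made concrete: every unhappy number eventually falls into the known cycle 4 → 16 → 37 → 58 → 89 → 145 → 42 → 20 → 4, so Floyd's algorithm always meets itself and terminates. A small Java sketch (the helper mirrors the C++ code above; class and method names are mine):

```java
public class HappyCycle {
    // sum of the squares of the decimal digits, same as the C++ helper
    static int digitSquareSum(int n) {
        int sum = 0;
        while (n > 0) {
            int d = n % 10;
            sum += d * d;
            n /= 10;
        }
        return sum;
    }

    public static void main(String[] args) {
        // follow the unhappy cycle starting from 4; after exactly 8 steps
        // it returns to 4: 16, 37, 58, 89, 145, 42, 20, 4
        int x = 4;
        for (int i = 0; i < 8; i++) {
            x = digitSquareSum(x);
            System.out.print(x + " ");
        }
        System.out.println(); // x is 4 again: the sequence is bounded and cyclic
    }
}
```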

Related

Interview question - find the kth smallest element in an unsorted array

This problem asks to find the kth smallest element in an unsorted array of non-negative integers.
The main problem here is the memory limit: only constant extra space may be used.
First I tried an O(n^2) method [without any extra memory], which gave me TLE.
Then I tried to use a priority queue [extra memory], which gave me MLE.
Any idea how to solve the problem with constant extra space and within the time limit?
You can use an O(n^2) method with some pruning, which will make the program behave more like O(n log n) in practice.
Declare two variables: low = the largest value known to lie at a position less than k, and high = the smallest value known to lie at a position greater than k.
Keep updating low and high as you process values.
Whenever a new value comes, check whether it is inside the [low, high] boundary. If yes, process it; otherwise skip it.
That's it. I think it will pass both the TLE and the MLE.
Have a look at my code:
int low = 0, high = 1e9;
for (int i = 0; i < n; i++) // n is the total number of elements
{
    if (!(A[i] >= low && A[i] <= high)) // A is the array in which the elements are stored
        continue;
    int cnt = 0, cnt1 = 0; // cnt counts strictly smaller values, cnt1 equal values (values can be duplicated)
    for (int j = 0; j < n; j++)
    {
        if (i != j && A[i] > A[j])
            cnt++;
        if (A[i] == A[j])
            cnt1++;
        if (cnt > k)
            break;
    }
    if (cnt + cnt1 < k)
        low = A[i] + 1;
    else if (cnt >= k)
        high = A[i] - 1;
    if (cnt < k && (cnt + cnt1) >= k)
    {
        return A[i];
    }
}
You can do an in-place selection algorithm (quickselect).
The idea is similar to quicksort, but you recurse only on the relevant part of the array, not all of it. Note that the algorithm can be implemented with O(1) extra space pretty easily, since its recursive call is a tail call.
This leads to an O(n) solution in the average case (just be sure to pick the pivot at random, so you don't fall into pre-designed edge cases such as a sorted list). That can be improved to worst-case O(n) using the median-of-medians technique, but with significantly worse constants.
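A minimal sketch of that in-place quickselect in Java (my own illustration, not code from the answer; k is 1-based). The loop takes the place of the tail-recursive call, so only O(1) extra space is used:

```java
import java.util.Random;

public class Quickselect {
    // returns the kth smallest element (k is 1-based), partitioning in place
    public static int kthSmallest(int[] a, int k) {
        Random rnd = new Random();
        int lo = 0, hi = a.length - 1;
        while (true) {
            // random pivot guards against adversarial inputs like sorted arrays
            int p = lo + rnd.nextInt(hi - lo + 1);
            swap(a, p, hi);
            // Lomuto partition: elements strictly smaller than the pivot go left
            int store = lo;
            for (int i = lo; i < hi; i++) {
                if (a[i] < a[hi]) swap(a, i, store++);
            }
            swap(a, store, hi); // pivot now sits at its final sorted index
            if (store == k - 1) return a[store];
            else if (store < k - 1) lo = store + 1; // answer is to the right
            else hi = store - 1;                    // answer is to the left
        }
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```

Note that the array is reordered as a side effect, which is exactly why no extra memory is needed.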
Binary search on the answer for the problem.
Two major observations here:
Given that all values in the array are non-negative ints, their range is [0, 2^31 - 1]. That is your search space.
Given a value x, you can always tell in O(n) whether the kth smallest element is at most x, by counting the elements less than or equal to x.
A rough pseudocode:
start = 0, end = 2^31 - 1
while start <= end
    x = start + (end - start) / 2   // avoids overflow of (start + end)
    less = number of elements less than or equal to x
    if less >= k
        ans = x        // x is feasible; try to shrink it
        end = x - 1
    else
        start = x + 1
return ans
Hope this helps.
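The binary-search-on-the-answer idea translates to Java roughly as follows (my own sketch; k is 1-based and values are assumed to be non-negative ints, per the problem statement). It finds the smallest x for which at least k elements are less than or equal to x; that x must itself occur in the array and is the kth smallest element:

```java
public class KthByBinarySearch {
    public static int kthSmallest(int[] a, int k) {
        int start = 0, end = Integer.MAX_VALUE, ans = -1;
        while (start <= end) {
            int x = start + (end - start) / 2; // overflow-safe midpoint
            int less = 0;                       // elements <= x, counted in O(n)
            for (int v : a) {
                if (v <= x) less++;
            }
            if (less >= k) {
                ans = x;        // x is feasible; look for a smaller one
                end = x - 1;
            } else {
                start = x + 1;  // too few elements <= x
            }
        }
        return ans;
    }
}
```

The outer loop runs at most 31 times, so the total cost is O(n log(max value)) with O(1) extra space.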
I believe I found a solution that is similar to that of #AliAkber but is slightly easier to understand (I keep track of fewer variables).
It passed all tests on InterviewBit.
Here's the code (Java):
public int kthsmallest(final List<Integer> a, int k) {
    int lo = Integer.MIN_VALUE;
    int hi = Integer.MAX_VALUE;
    int champ = -1;
    for (int i = 0; i < a.size(); i++) {
        int iVal = a.get(i);
        int count = 0;
        if (!(iVal > lo && iVal < hi)) continue;
        for (int j = 0; j < a.size(); j++) {
            if (a.get(j) <= iVal) count++;
            if (count > k) break;
        }
        if (count > k && iVal < hi) hi = iVal;
        if (count < k && iVal > lo) lo = iVal;
        if (count >= k && (champ == -1 || iVal < champ))
            champ = iVal;
    }
    return champ;
}

Fibonacci time complexity for a non-recursive function

Hey guys, I need some help with this piece of code. The analysis has become a problem because I don't know the exact way to work out its complexity. Any help would do.
int fib(int n)
{
    int prev = -1;
    int result = 1;
    int sum = 0;
    for (int i = 0; i <= n; ++i)
    {
        sum = result + prev;
        prev = result;
        result = sum;
    }
    return result;
}
I am not sure exactly what you are asking; maybe you can clarify.
The time complexity of this algorithm is O(n). The loop body executes n + 1 times: i starts at zero and increments by 1 on every iteration until it exceeds n. The extra constant doesn't change the order, so the running time is linear in n.
I hope this helps.
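To make the bound concrete, here is the same loop in Java with an iteration counter added (the counter and class are mine, for illustration): the body runs exactly n + 1 times, which is still O(n), and the function computes the nth Fibonacci number with F(0) = 0 and F(1) = 1.

```java
public class FibCount {
    static int iterations; // records how many times the loop body ran

    // same loop as in the question, plus the counter
    static int fib(int n) {
        int prev = -1;
        int result = 1;
        int sum = 0;
        iterations = 0;
        for (int i = 0; i <= n; ++i) {
            sum = result + prev;
            prev = result;
            result = sum;
            iterations++;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(10));     // 55
        System.out.println(iterations);  // 11, i.e. n + 1
    }
}
```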

Big O - O(N^2) or O(N^2 + 1)?

I'm reading this Big O article (and some other book references) trying to figure out which changes affect my algorithm's complexity.
So, given the following O(N^2) code:
bool ContainsDuplicates(String[] strings)
{
    for (int i = 0; i < strings.Length; i++)
    {
        for (int j = 0; j < strings.Length; j++)
        {
            if (i == j) // Don't compare with self
            {
                continue;
            }
            if (strings[i] == strings[j])
            {
                return true;
            }
        }
    }
    return false;
}
I made the following change:
bool ContainsDuplicates(String[] strings)
{
    for (int i = 0; i < strings.Length; i++)
    {
        for (int j = 0; j < strings.Length; j++)
        {
            if (i != j) // Don't compare with self
            {
                if (strings[i] == strings[j])
                {
                    return true;
                }
            }
        }
    }
    return false;
}
Now both ifs are nested and the 'continue' is removed. Did this algorithm really become O(N^2 + 1)? And why?
As far as I can see, the if check was there before regardless, so I initially thought it would still be O(N^2).
Big O describes how execution time grows as a chosen parameter becomes large.
In your example, if we wanted to be exact, the formula would be:
Time taken = Time(start) + Time(outer loop) * N + Time(continue) * N + Time(no continue) * N^2
which can be rewritten as
Time taken = a + b * N + c * N^2
Now, as N becomes larger and larger, it's clear that overall this is shaped like a parabola. The order-zero and order-one terms become irrelevant as N grows to infinity:
Time taken (large N) ~= c * N^2
Finally, since we are interested in a qualitative rather than quantitative description, we simply describe the algorithm as N^2.
O(N^2) means that the algorithm will behave approximately like c * N^2 for large values of N. In particular, O(N^2 + 1) describes exactly the same set of functions as O(N^2): additive constants disappear in the limit.
It is a similar concept to o(x) in calculus (with the difference that small-o is for parameters going to zero).
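One way to convince yourself that the rewrite changed nothing asymptotically: count the equality checks both versions perform on an input with no duplicates. A small Java sketch of my own (counters stand in for real string comparisons) shows both come out to exactly N^2 - N:

```java
public class CompareCount {
    // comparisons done by the version that uses 'continue'
    public static long withContinue(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                if (i == j) continue; // skipped pair: no comparison happens
                count++;
            }
        return count;
    }

    // comparisons done by the version with the nested if
    public static long withNestedIf(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j)
                    count++;
        return count;
    }

    public static void main(String[] args) {
        // both perform exactly n*n - n comparisons: the same O(N^2) curve
        System.out.println(withContinue(100));  // 9900
        System.out.println(withNestedIf(100));  // 9900
    }
}
```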

What is the running time of this powerset algorithm

I have an algorithm to compute the power set of a set, using all of the bit patterns between 0 and 2^n:
public static <T> void findPowerSetsBitwise(Set<T> set, Set<Set<T>> results) {
    T[] arr = (T[]) set.toArray();
    int length = arr.length;
    for (int i = 0; i < 1 << length; i++) {
        int k = i;
        Set<T> newSubset = new HashSet<T>();
        int index = arr.length - 1;
        while (k > 0) {
            if ((k & 1) == 1) {
                newSubset.add(arr[index]);
            }
            k >>= 1;
            index--;
        }
        results.add(newSubset);
    }
}
My question is: what is the running time of this algorithm? The outer loop runs 2^n times, and in each iteration the while loop runs about lg(i) times. So I think the running time is
T(n) = the sum from i = 1 to 2^n of lg(i)
but I don't know how to simplify this further. I know this can be solved in O(2^n) time (not space) recursively, so I'm wondering whether the method above is better or worse time-wise, as it's better in space.
sigma(lg(i)) for i in (1, 2^n)
= lg(1) + lg(2) + ... + lg(2^n)
= lg(1 * 2 * ... * 2^n)
= lg((2^n)!)
> lg(2^(2^n))
= 2^n
Thus, the suggested solution is worse in terms of time complexity than the recursive O(2^n) one.
EDIT:
To be exact: for every k, log(k!) is in Theta(k log k), so for k = 2^n we get that lg((2^n)!) is in Theta(2^n * log(2^n)) = Theta(n * 2^n).
Without attempting to solve or simulate, it is easy to see that this is worse than O(2^n), because it is 2^n * $value, where $value is greater than one (for all i > 2) and increases as n increases, so it is obviously not a constant.
Applying Stirling's formula, it is actually O(n * 2^n).
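As a sanity check on what the loop enumerates: for an n-element set, the 2^n bit patterns produce exactly 2^n distinct subsets. Here is a compact, self-contained Java version of the same bitmask idea (class and variable names are mine, for illustration):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PowerSet {
    public static <T> Set<Set<T>> powerSet(Set<T> set) {
        List<T> arr = new ArrayList<>(set);
        Set<Set<T>> results = new HashSet<>();
        for (int i = 0; i < (1 << arr.size()); i++) {
            Set<T> subset = new HashSet<>();
            // each set bit of i selects one element for this subset
            for (int k = i, index = 0; k > 0; k >>= 1, index++) {
                if ((k & 1) == 1) subset.add(arr.get(index));
            }
            results.add(subset);
        }
        return results;
    }
}
```

The inner loop is what contributes the extra lg(i) factor per subset, giving the Theta(n * 2^n) total derived above.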

Return all prime numbers smaller than M

Given an integer M, return all prime numbers smaller than M.
Give as good an algorithm as you can; you need to consider both time and space complexity.
The Sieve of Eratosthenes is a good place to start.
http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
A couple of additional performance hints:
You only need to sieve with primes up to the square root of M, since every composite number has at least one prime factor less than or equal to its square root.
You can cache known primes as you generate them and test subsequent numbers against only the numbers in this list (instead of every number below sqrt(M)).
You can obviously skip even numbers (except for 2, of course).
The usual answer is to implement the Sieve of Eratosthenes, but this is really only a solution for finding the list of all prime numbers smaller than N. If you want primality tests for specific numbers, there are better choices for large numbers.
Sieve of Eratosthenes is good.
I'm a novice programmer in C# (and new to S.O.), so this may be a bit verbose. Nevertheless, I've tested this, and it works.
This is what I've come up with (note that it prints the prime factors of n, with multiplicity, rather than all primes below n):
for (int i = 2; i <= n; i++)
{
    while (n % i == 0)
    {
        Console.WriteLine(i.ToString());
        n /= i;
    }
}
Console.ReadLine();
Let π(n) count the primes less than or equal to n. Pafnuty Chebyshev showed that if
lim (n → ∞) π(n) / (n / ln n)
exists, it is 1. There are many expressions that are approximately equal to π(n).
These give a good approximation of how many primes lie below a given number. I hope this will be helpful.
You can do it using a bottom-up dynamic programming approach: the Sieve of Eratosthenes.
Basically, you create a boolean cache of all numbers up to n and mark the multiples of each number as not prime.
A further optimization is to sieve only with numbers up to sqrt(n), since any composite number below n has at least one divisor no greater than sqrt(n).
public int countPrimes(int n) {
    if (n == 0) {
        return 0;
    } else {
        boolean[] isPrime = new boolean[n];
        for (int i = 2; i < n; i++) {
            isPrime[i] = true;
        }
        /* Using i*i < n instead of i < Math.sqrt(n)
           to avoid the expensive sqrt operation */
        for (int i = 2; i * i < n; i++) {
            if (!isPrime[i]) {
                continue;
            }
            for (int j = i * i; j < n; j += i) {
                isPrime[j] = false;
            }
        }
        int counter = 0;
        for (int i = 2; i < n; i++) {
            if (isPrime[i]) {
                counter++;
            }
        }
        return counter;
    }
}
This is what I developed for the Sieve of Eratosthenes. There would be better implementations, of course.
// counts the prime numbers less than 'length'
private static int findNumberOfPrimes(int length) {
    if (length < 3) {
        return 0;
    }
    int[] arr = new int[length];
    // arr[i] holds the number i itself; composites get replaced by -1
    for (int i = 0; i < arr.length; i++) {
        arr[i] = i;
    }
    int numberOfPrimes = 0;
    // starting with the first prime 2, every multiple of each surviving
    // number is replaced with -1
    for (int i = 2; i < arr.length; i++) {
        if (arr[i] == -1) {
            continue; // already marked as composite
        }
        numberOfPrimes += 1;
        for (int j = 2 * i; j < arr.length; j += i) {
            arr[j] = -1;
        }
    }
    return numberOfPrimes;
}
The Sieve of Atkin is also worth considering here; it takes O(N) operations and O(N) space. Please refer to https://en.wikipedia.org/wiki/Sieve_of_Atkin for a detailed explanation of the algorithm and pseudocode.
