Is there any heap based on the Pell sequence (Pell numbers) instead of the Fibonacci numbers, the way the Fibonacci heap is?
One thing to note is that the Fibonacci heap is not really "based" on the Fibonacci numbers (its structure doesn't look at all like it's related to them); it's in the analysis of the Fibonacci heap that the Fibonacci numbers appear. Specifically, you can show that any tree in the heap whose root has degree d must contain at least F(d + 2) nodes, which bounds the maximum degree of any node in a heap of n elements by O(log n) and thus demonstrates that the worst-case behavior of some of the operations can't be worse than O(log n).
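To make that concrete, here is a small Python sketch of the counting argument (just an illustration of the bound, not part of any heap implementation):

# min_nodes(d) is the minimum number of nodes in a Fibonacci-heap tree
# whose root has degree d; it satisfies min_nodes(d) = F(d + 2).
def min_nodes(d):
    a, b = 1, 2  # minimum sizes for degrees 0 and 1
    for _ in range(d):
        a, b = b, a + b
    return a

def max_degree(n):
    # The largest degree d whose minimum tree size still fits in n nodes.
    d = 0
    while min_nodes(d + 1) <= n:
        d += 1
    return d

print(max_degree(1000))  # 14, roughly log base phi of 1000

Because min_nodes(d) grows like phi^d, max_degree(n) is O(log n).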
As for your question about Pell numbers (defined by the recurrence P(n) = 2P(n-1) + P(n-2)), I am not aware of any data structures whose analysis relies on that sequence (I actually hadn't encountered it before!). The Fibonacci sequence arises so much more often than other, similar recurrence sequences because of a number of interesting properties that aren't necessarily true of other recurrences; I wrote about this in my answer to this question. I would assume that Pell numbers could be used in some data structures or analyses, but the structure required to satisfy their recurrence doesn't seem to arise in any data structure or algorithm I have encountered.
EDIT: I did find an interesting paper using Pell numbers in the analysis of certain sequences of values, which you can find here.
Hope this helps!
# Pell number using Python, without any functions.
# The sequence is defined by P(1) = 0, P(2) = 1, P(n) = 2*P(n-1) + P(n-2).
import sys

num = int(input("Enter a positive number: "))
if num <= 0:
    sys.exit("Invalid input, please try again")

a = 0  # P(1)
b = 1  # P(2)
if num == 1:
    print("Pell number is {}".format(a))
elif num == 2:
    print("Pell number is {}".format(b))
elif num >= 3:
    counter = 3
    while counter <= num:
        answer = a + (b * 2)  # P(k) = 2*P(k-1) + P(k-2)
        a = b
        b = answer
        counter += 1
    print("Pell number is {}".format(answer))
Interview question: how many Fibonacci numbers exist below a given number k? Can you find a function in terms of k that gives the count of Fibonacci numbers less than k?
Example: k = 6
Answer: 6, as (0, 1, 1, 2, 3, 5) are all less than 6.
Easy enough, write a loop or use the recursive definition of Fibonacci. However, that sounds too easy... is there a way to do this using the closed-form definition? (https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression)
Here is a closed-form Python solution which is O(1). It uses Binet's formula (from the Wikipedia article that you linked to):
>>> from math import sqrt,log
>>> def numFibs(n): return int(log(sqrt(5)*n)/log((1+sqrt(5))/2))
>>> numFibs(10)
6
Which tracks with 1, 1, 2, 3, 5, 8.
The point is that the second term in Binet's formula is negligible and it is easy enough to invert the result of neglecting it.
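To spell out the inversion: dropping the ψ term gives F(n) ≈ φ^n / √5, so F(n) ≤ x whenever φ^n ≤ √5·x, i.e. n ≤ log(√5·x) / log(φ), which is exactly what numFibs computes before truncating to an integer.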
The above formula counts the number of Fibonacci numbers which are less than or equal to n. It jumps by 1 with each new Fibonacci number. So, for example, numFibs(12) = 6 and numFibs(13) = 7. Since 13 is the 7th Fibonacci number, if you want the number of Fibonacci numbers which are strictly smaller than n, you have to introduce a lag. Something like:
def smallerFibs(n):
    if n <= 1:
        return 0
    else:
        return min(numFibs(n - 1), numFibs(n))
Now smallerFibs(13) is still 6 but then smallerFibs(14) = 7. This is of course still O(1).
I think it's fairly easy to see the growth of this number, at least. By the Binet / de Moivre formula,
f_n = (φ^n − ψ^n) / √5
Since |ψ| < 1 < φ,
f_n ∼ φ^n / √5
From this it follows that the number of Fibonacci numbers smaller than x grows like log_φ(√5·x).
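As a sanity check: for x = 100, log_φ(√5·100) = ln(223.6...)/ln(1.618...) ≈ 11.2, and there are indeed 11 Fibonacci numbers up to 100: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89.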
I have a conceptual question regarding dynamic programming:
In a dynamic programming solution, the space requirement is always at least as big as the number of unique subproblems.
I thought about it in terms of Fibonacci numbers:
f(n) = f(n-1) + f(n-2)
Here there are about n unique subproblems, so for input n the space required should be at least O(n).
Right?
But, the answer is False.
Can someone explain this?
The answer is indeed false.
For example, for your Fibonacci series you can use dynamic programming with O(1) space, by remembering only the last 2 numbers:
def fib(n):
    prev = current = 1
    i = 2
    while i < n:
        nxt = prev + current
        prev = current
        current = nxt
        i += 1
    return current
This is common practice: when you don't need all of the smaller subproblems to solve the bigger one, you can discard most of them and save some space.
If you implement the Fibonacci calculation using bottom-up DP, you can discard the earlier results you no longer need. This is an example:
fib = [0, 1]
for i in range(n):
    fib = [fib[1], fib[0] + fib[1]]
print(fib[1])
As this example shows, you only need to memorize the last two elements of the array.
This statement is not correct. But it's almost correct.
Generally, a dynamic programming solution needs O(number of subproblems) space. In other words, if there is a dynamic programming solution to a problem, it can be implemented using O(number of subproblems) memory.
In your particular problem, calculating Fibonacci numbers, if you write down the straightforward dynamic programming solution:
Integer F(Integer n) {
    if (n == 0 || n == 1) return 1;
    if (memoized[n]) return memoized_value[n];
    memoized_value[n] = F(n - 1) + F(n - 2);
    memoized[n] = true;
    return memoized_value[n];
}
it will use O(number of subproblems) memory. But, as you mentioned, by analyzing the recurrence you can come up with a better solution that uses O(1) memory.
P.S. The recurrence for Fibonacci numbers that you've mentioned has n + 1 subproblems. Usually, by "subproblems" people mean all the f values you need to compute in order to obtain a particular f value. Here you need to calculate f(0), f(1), f(2), ..., f(n) in order to compute f(n).
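For completeness, here is a runnable Python version of the memoized recursion above, a minimal sketch using a dict in place of the two arrays:

# Memoized Fibonacci: O(n) subproblems, O(n) memory.
memo = {0: 1, 1: 1}  # same base cases as the pseudocode above

def F(n):
    if n not in memo:
        memo[n] = F(n - 1) + F(n - 2)
    return memo[n]

print(F(10))  # 89 with these base cases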
I wrote this prime factorization function; can someone explain its runtime to me? It seems fast to me, since it continuously decomposes a number into primes without having to check whether the factors are prime, and in the worst case it runs from 2 up to the number.
I know that no known algorithm can factor integers in polynomial time. Also, how does the runtime relate asymptotically to factoring numbers with large prime factors?
function getPrimeFactors(num) {
  var factors = [];
  for (var i = 2; i <= num; i++) {
    if (num % i === 0) {
      num = num / i;
      factors.push(i);
      i--; // try the same factor again
    }
  }
  return factors;
}
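For example, getPrimeFactors(12) returns [2, 2, 3].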
In your example, if num is prime then it takes exactly num - 1 steps, which would mean the algorithm's runtime is O(num) (where O describes the worst case). But with algorithms that operate on numbers, things get a little more tricky (thanks for noticing, thegreatcontini and Chris)! We always describe complexity as a function of the input size. In this case the input is the number num, and it is represented with log(num) bits, so the input size is log(num). Because num = 2^(log(num)), your algorithm has complexity O(2^k), where k = log(num) is the size of your input.
This is what makes this problem hard: the input is very, very small, and any runtime polynomial in num is exponential in the input size.
On a side note, @rici is right: you only need to check up to sqrt(num), which easily reduces the runtime to O(sqrt(num)), or more precisely O(sqrt(2)^k).
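For reference, a quick Python sketch of that sqrt-bounded version (the same trial-division idea as the JavaScript above, just stopping at sqrt(num)):

# Trial division up to sqrt(num): O(sqrt(num)) divisions,
# i.e. O(sqrt(2)^k) for a k-bit input.
def get_prime_factors(num):
    factors = []
    f = 2
    while f * f <= num:      # only test candidate factors up to sqrt(num)
        while num % f == 0:
            num //= f
            factors.append(f)
        f += 1
    if num > 1:              # whatever remains is itself prime
        factors.append(num)
    return factors

print(get_prime_factors(12))  # [2, 2, 3]
print(get_prime_factors(97))  # [97], so 97 is prime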
From Programming Pearls: Column 12: A Sample Problem:
The input consists of two integers m and n, with m < n. The output is a sorted list of m random integers in the range 0..n-1 in which no integer occurs more than once. For probability buffs, we desire a sorted selection without replacement in which each selection occurs with equal probability.
The author provides one solution:
initialize set S to empty
size = 0
while size < m do
    t = bigrand() % n
    if t is not in S
        insert t into S
        size++
print the elements of S in sorted order
In the above pseudocode, bigrand() is a function that returns a large random integer (much larger than m and n).
Can anyone help me prove the correctness of the above algorithm?
According to my understanding, every output should occur with probability 1/C(n, m).
How can one prove that the above algorithm guarantees each output occurs with probability 1/C(n, m)?
Every solution this algorithm yields is valid.
How many solutions are there?
Up to the last line (the sorting), there are n·(n-1)·(n-2)·...·(n-m+1) = n!/(n-m)! different ordered sequences the algorithm can produce, and by symmetry each of them occurs with the same probability.
Sorting maps exactly m! of those sequences onto each sorted output, so the number of possible outputs is
n!/((n-m)!·m!) = C(n,m),
and each occurs with probability 1/C(n,m), which is what you asked for.
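A minimal Python sketch of the algorithm, using random.randrange(n) in place of bigrand() % n:

import random

def sample_sorted(m, n):
    # Selection without replacement: draw until we have m distinct values,
    # then sort. Each of the C(n, m) outcomes is equally likely.
    s = set()
    while len(s) < m:
        t = random.randrange(n)  # uniform over 0 .. n-1
        s.add(t)                 # duplicates are simply drawn again
    return sorted(s)

print(sample_sorted(3, 10))  # e.g. [1, 4, 8]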
I have developed an algorithm to find the factors of a given number; it therefore also tells whether the given number is prime. I feel this is the fastest algorithm for finding factors or testing primality.
This algorithm finds whether a given number is prime within a time frame of 5*N (where N is the input number), so I hope I can call it a linear-time algorithm.
How can I verify whether this is the fastest algorithm available (faster than GNFS and the other known methods)? Can anybody help with this?
The algorithm is given below.
Input: a number N (whose factors are to be found)
Output: two factors of N. If one of the factors found is 1, it can be concluded that N is prime.
Integer N, mL, mR, r;
Integer temp1; // used for temporary data storage

mR = mL = square root of (N);

/* Check if N is a perfect square */
temp1 = mL * mR;
if temp1 equals N then
{
    r = 0; // answer is found
    End;
}

mR = N / mL; // so that mL <= mR
r = N % mL;
while r not equals 0 do
{
    mL = mL - 1;
    r = r + mR;
    temp1 = r / mL;
    mR = mR + temp1;
    r = r % mL;
}
End; // mL and mR hold the answer
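For anyone who wants to try it, here is a direct Python transcription of the pseudocode above (math.isqrt gives the integer square root):

from math import isqrt

def find_factors(N):
    mL = mR = isqrt(N)     # start at the integer square root
    if mL * mR == N:       # perfect square: done immediately
        return mL, mR
    mR = N // mL           # keep mL <= mR
    r = N % mL             # invariant: mL * mR + r == N
    while r != 0:
        mL -= 1
        r += mR
        mR += r // mL
        r %= mL
    return mL, mR          # mL == 1 means N is prime

print(find_factors(21))  # (3, 7)
print(find_factors(13))  # (1, 13)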
Please provide your comments, and don't hesitate to contact me for more information.
Thanks,
Harish
http://randomoneness.blogspot.com/2011/09/algorithm-to-find-factors-or-primes-in.html
"Linear time" means time proportional to the length of the input data: the number of bits in the number you're trying to factorize, in this case. Your algorithm does not run in linear time, or anything close to it, and I'm afraid it's much slower than many existing factoring algorithms. (Including, e.g., GNFS.)
The size of the input in this case is not n itself but the number of bits in n, so the running time of your algorithm is exponential in the size of the input. This is known as pseudo-polynomial time. For example, going from a 32-bit to a 64-bit input doesn't double the running time of an O(n) algorithm; it multiplies it by roughly 2^32.
I haven't looked closely at your algorithm, but primality tests are usually faster than O(n) (where n is the input number). Take for example this simple one:
from math import sqrt

def isprime(n):
    for f in range(2, int(sqrt(n)) + 1):  # test all candidate factors up to sqrt(n)
        if n % f == 0:
            return "not prime"
    return "prime"
Here it is determined in O(sqrt(n)) time whether n is prime, simply by checking all possible factors up to sqrt(n).
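For example, isprime(9) returns "not prime" (its only nontrivial factor, 3, is exactly sqrt(9)), and isprime(97) returns "prime".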