I understand the algorithm but can't find a way to determine its complexity. The only thing I know is that it has something to do with the second parameter, because if it were smaller there would be fewer steps. Do you have any idea how I can do this? And is there any general way to determine the time complexity of a given algorithm?
Egyptian multiplication algorithm:
def egMul(x, y):
    res = 0
    while y > 0:
        if y % 2 == 0:
            x = x * 2        # double x ...
            y = y // 2       # ... while halving y (integer division)
        else:
            y = y - 1        # clear the lowest set bit of y ...
            res = res + x    # ... and add the current x to the result
    return res
This code performs Theta(log(y)) arithmetic operations. By considering the binary representation of y, you can see it performs the else branch for each 1 that appears in the binary representation, and it performs the first branch of the if (the one that divides y by 2) floor(log_2(y)) times.
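As a quick sanity check (my own instrumented copy of the function, not part of the original answer), you can count the loop iterations and compare them with the bit length of y:

def egMulSteps(x, y):
    # same algorithm, but also counts loop iterations
    res, steps = 0, 0
    while y > 0:
        if y % 2 == 0:
            x, y = x * 2, y // 2     # one halving per 0 shifted out of y
        else:
            y, res = y - 1, res + x  # one subtraction per 1 bit of y
        steps += 1
    return res, steps

# for y = 1000 (binary 1111101000): 9 halvings + 6 subtractions = 15 steps,
# versus y.bit_length() == 10
print(egMulSteps(7, 1000))  # (7000, 15)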
Let A be a sorted array containing n distinct positive integers. Let x be a positive integer such that both x and 2x are not in A.
Describe an efficient algorithm to find the number of integers in A that are larger than x and smaller than 2x
What is the complexity of the algorithm? Can someone write pseudo-code without using libraries?
I know this can be done with linear time complexity, but binary search could be modified to achieve this.
The following is the linear-time solution I came up with:
def find_integers(A, x):
    integers = 0
    for element in A:
        if element > x and element < 2*x:
            integers += 1
    return integers
As you already found out, you can use binary search. Search for x and 2x and note the positions in the list; you can calculate the number of integers from the difference between the two positions.
Since you are using Python, the bisect module can help you with the binary search.
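For example, a minimal sketch using bisect (the function name is mine):

from bisect import bisect_left, bisect_right

def count_between_x_and_2x(A, x):
    # first index with A[i] > x (x itself is guaranteed not to be in A)
    lo = bisect_right(A, x)
    # first index with A[i] >= 2x, so everything before it is < 2x
    hi = bisect_left(A, 2 * x)
    return hi - lo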
I'm on a bit of a mission to improve everybody's default binary search. As I described in this answer: How can I simplify this working Binary Search code in C? ...
...pretty much all of my binary searches search for positions instead of elements, since it's faster and easier to get right.
It's also easily adaptable to situations like this:
def countBetweenXand2X(A, x):
    if x <= 0:
        return 0

    # find position of first element > x
    minpos = 0
    maxpos = len(A)
    while minpos < maxpos:
        testpos = minpos + (maxpos - minpos) // 2
        if A[testpos] > x:
            maxpos = testpos
        else:
            minpos = testpos + 1
    start = minpos

    # find position of first element >= 2x
    maxpos = len(A)
    while minpos < maxpos:
        testpos = minpos + (maxpos - minpos) // 2
        if A[testpos] >= x * 2:
            maxpos = testpos
        else:
            minpos = testpos + 1
    return minpos - start
This is just 2 binary searches, so the complexity remains O(log N). Also note that the second search begins at the position found in the first search, since we know the second position must be >= the first one. We accomplish that just by leaving minpos alone instead of resetting it to zero.
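A quick usage example (my own test data, chosen so that neither x nor 2x is in the array):

A = [1, 3, 5, 9, 11, 17, 25]
print(countBetweenXand2X(A, 10))  # 2: only 11 and 17 lie strictly between 10 and 20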
I need to derive an algorithm in C++ to calculate integer powers m^n that uses the loop invariant r = y^n and the loop condition y != m.
I tried using the instruction y = y + 1 to advance, but I don't know how to obtain (y+1)^n from y^n, and it shouldn't be difficult to find. So, probably, this isn't the correct path to follow.
Could you help me to derive the program?
EDIT: this is a problem from the subject Data Structures and Algorithms. The difficulty (if there is any) shouldn't be mathematical.
EDIT2: Just to clarify, the difficulty of the problem is using the invariant r = y^n and the loop condition y != m. If I vary n instead, I don't maintain that invariant.
Given w and P such that 2^w > m, P > 2^(wn), and 2^((P-1)/2) = -1 mod P,
then 2 is a generator mod P, and there will be some x such that 2^x = m mod P, so:
if (m <= 1 || n == 1)
    return m;
if (n == 0)
    return 1;

let y = 2;
let r = 1 << n;
while (y != m)
{
    y = (y * 2) % P;
    r = (r * (1 << n)) % P;
}
return r;
Unless your function needs to produce bignum results, you can just pick the largest P that fits into an integer in your language.
There is no useful relation between (y+1)^n and y^n (you can write (y+1)^n = ((y^n)^(1/n) + 1)^n or (y+1)^n = (1 + 1/y)^n y^n, but this leads you nowhere).
If y were factored, you could exploit (a.b)^n = (a^n).(b^n), but you would need a table of the nth powers of the primes.
I can't see an answer that makes sense.
You can also think of the Binomial theorem,
(y+1)^n = y^n + n y^(n-1) + n(n-1)/2 y^(n-2) + ... + 1
but this is worse than anything: you need to compute n binomial coefficients, and update all powers of y from 0 to n. The total cost of the computation would be ridiculously high.
I am reading an algorithms textbook and I am stumped by this question:
Suppose we want to compute the value x^y, where x and y are positive integers with m and n bits, respectively. One way to solve the problem is to perform y - 1 multiplications by x. Can you give a more efficient algorithm that uses only O(n) multiplication steps?
Would this be a divide-and-conquer algorithm? Would y - 1 multiplications by x run in Theta(n)? I don't know where to start with this question.
I understand this better in an iterative way:
You can compute x^z for all powers of two z = 2^0, 2^1, 2^2, ..., 2^(n-1), simply by going from i = 0 to n - 2 and applying x^(2^(i+1)) = x^(2^i) * x^(2^i).
Now you can use these n values to compute x^y:
result = 1
for i = 0 to n-1:
    if the i'th bit in y is on:
        result *= x^(2^i)
return result
All of this is done in O(n) multiplications.
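A runnable Python sketch of this idea (my own code, combining the squaring and the bit test in a single loop; the name power_iter is mine):

def power_iter(x, y):
    # multiply the result by x^(2^i) exactly when bit i of y is set
    result = 1
    square = x            # invariant: square == x^(2^i) at the start of iteration i
    while y > 0:
        if y & 1:
            result *= square
        square *= square
        y >>= 1
    return result

print(power_iter(3, 13))  # 1594323 == 3**13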
Apply a simple recursion for divide and conquer.
Here I am posting something more like pseudo-code.
x^y :=
    base case: if y == 1, return x
    if y % 2 == 0:
        then (x^2)^(y/2)
    else:
        x.(x^2)^((y-1)/2)
The y-1 multiplications solution is based on the identity x^y = x * x^(y-1). By repeated application of the identity, you know that you will decrease y down to 1 in y-1 steps.
A better idea is to decrease y more "energetically". Assuming an even y, we have x^y = x^(2*y/2) = (x^2)^(y/2). Assuming an odd y, we have x^y = x^(2*y/2+1) = x * (x^2)^(y/2).
You see that you can halve y, provided you continue the power computation with x^2 instead of x.
Recursively:
Power(x, y) =
    1                        if y = 0
    x                        if y = 1
    Power(x * x, y / 2)      if y even
    x * Power(x * x, y / 2)  if y odd
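In Python, that recursive definition could be sketched as (my own transcription; the name power_rec is mine):

def power_rec(x, y):
    if y == 0:
        return 1
    if y == 1:
        return x
    half = power_rec(x * x, y // 2)   # (x^2)^(y // 2)
    return half if y % 2 == 0 else x * half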
Another way to view it is to read y as a sum of weighted bits. y = b0 + 2.b1 + 4.b2 + 8.b3...
The properties of exponentiation imply:
x^y = x^b0 . x^(2.b1) . x^(4.b2) . x^(8.b3)...
= x^b0 . (x^2)^b1 . (x^4)^b2 . (x^8)^b3...
You can obtain the desired powers of x by squaring, and the binary decomposition of y tells you which powers to multiply.
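For example, with y = 13 = 1 + 4 + 8 (binary 1101), the decomposition gives x^13 = x . x^4 . x^8, and the factors x, x^4 and x^8 all come from the repeated-squaring table x, x^2, x^4, x^8.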
I read about a method to calculate the square root of any number, and the algorithm is as follows:
double findSquareRoot(int n) {
    double x = n;
    double y = 1;
    double e = 0.00001;
    while (x - y >= e) {
        x = (x + y) / 2;
        y = n / x;
    }
    return x;
}
My questions regarding this method are:
How does it calculate the square root? I don't understand the mathematics behind it. How do x = (x+y)/2 and y = n/x converge to the square root of n? Please explain the mathematics.
What is the complexity of this algorithm?
It is easy to see if you do some runs and print the successive values of x and y. For example for 100:
50.5 1.9801980198019802
26.24009900990099 3.8109612300726345
15.025530119986813 6.655339226067038
10.840434673026925 9.224722348894286
10.032578510960604 9.96752728032478
10.000052895642693 9.999947104637101
10.000000000139897 9.999999999860103
See, the trick is that if x is not the square root of n, then it is above or below the real root, and n/x is always on the other side. So if you calculate the midpoint of x and n/x it will be somewhat nearer to the real root.
And about the complexity: it is actually unbounded, because the real root will never be reached. That's why you have the e parameter.
This is a typical application of Newton's method for calculating the square root of n. You're calculating the limit of the sequence:
x_0 = n
x_{i+1} = (x_i + n / x_i) / 2
Your variable x is the current term x_i and your variable y is n / x_i.
To understand why you have to calculate this limit, you need to think of the function:
f(x) = x^2 - n
You want to find the root of this function. Its derivative is
f'(x) = 2 * x
and Newton's method gives you the formula:
x_{i+1} = x_i - f(x_i) / f'(x_i) = ... = (x_i + n / x_i) / 2
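A minimal sketch of that iteration in Python (my own code, stopping on an explicit tolerance for x^2 - n instead of the x - y test from the question; the name newton_sqrt is mine):

def newton_sqrt(n, eps=1e-10):
    x = float(n)                # x_0 = n
    while abs(x * x - n) > eps:
        x = (x + n / x) / 2     # x_{i+1} = (x_i + n / x_i) / 2
    return x

print(newton_sqrt(100))  # 10.0 (to within the tolerance)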
For completeness, I'm copying here the rationale from #rodrigo's answer, combined with my comment to it. This is helpful if you want to forget about Newton's method and try to understand this algorithm alone.
The trick is that if x is not the square root of n, then it is an approximation which lies either above or below the real root, and y = n/x is always on the other side. So if you calculate the midpoint (x+y)/2, it will be nearer to the real root than the worst of these two approximations (x or y). When x and y are close enough, you're done.
This will also help you find the complexity of the algorithm. Say that d is the distance of the worst of the two approximations to the real root r. Then the distance between the midpoint (x+y)/2 and r is at most d/2 (it will help if you draw a line to visualize this). This means that, with each iteration, the distance is halved. Therefore, the worst-case complexity is logarithmic in the ratio between the distance of the initial approximation and the precision that is sought. For the given program, it is
log(|n-sqrt(n)|/epsilon)
I think all the information can be found on Wikipedia.
The basic idea is that if x is an overestimate of the square root of a non-negative real number S, then S/x will be an underestimate, and so the average of these two numbers may reasonably be expected to provide a better approximation.
With each iteration this algorithm doubles the number of correct digits in the answer, so its complexity is linear in the logarithm of the desired accuracy.
Why does it work? As stated here, if you do infinitely many iterations you'll get some value; let's call it L. L has to satisfy the equation L = (L + N/L)/2 (as in the algorithm), so L = sqrt(N). If you're worried about convergence, you may calculate the squared relative error for each iteration (E_k is the error, A_k is the computed value):
E_k = (A_k/sqrt(N) - 1)²
if:
A_k = (A_{k-1} + N/A_{k-1})/2 and A_k = sqrt(N)(sqrt(E_k) + 1)
you may derive a recurrence relation for E_k:
E_k = E_{k-1}²/[4(sqrt(E_{k-1}) + 1)²]
and its limit is 0, so the limit of the sequence A_1, A_2, ... is sqrt(N).
The mathematical explanation is that, over a small range, the arithmetic mean is a reasonable approximation to the geometric mean, which is used to calculate the square root. As the iterations get closer to the true square root, the difference between the arithmetic mean and the geometric mean vanishes, and the approximation gets very close. Here is my favorite version of Heron's algorithm, which first normalizes the input n to the range 1 ≤ n < 4, then unrolls the loop for a fixed number of iterations that is guaranteed to converge.
def root(n):
    if n < 1: return root(n*4) / 2
    if 4 <= n: return root(n/4) * 2
    x = (n+1) / 2
    x = (x + n/x) / 2
    x = (x + n/x) / 2
    x = (x + n/x) / 2
    x = (x + n/x) / 2
    x = (x + n/x) / 2
    return x
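A quick check of this function (my own usage example, assuming Python 3 so that / is float division):

print(root(2))    # approximately 1.41421356, from five Heron steps after normalization
print(root(100))  # approximately 10.0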
I discuss several programs to calculate the square root at my blog.
Is there any known algorithm that can generate a shuffled range [0..n) in linear time and constant space (when output produced iteratively), given an arbitrary seed value?
Assume n may be large, e.g. in the many millions, so the ability to produce every possible permutation is not required, not least because it's infeasible (the seed value space would need to be huge). This is also the reason for the constant-space requirement. (So I'm specifically not looking for an array-shuffling algorithm, as that requires the range to be stored in an array of length n, and would therefore use linear space.)
I'm aware of question 162606, but it doesn't present an answer to this particular question - the mappings from permutation indexes to permutations given in that question would require a huge seed value space.
Ideally, it would act like an LCG with a period and range of n, but the art of selecting a and c for an LCG is subtle. Simply satisfying the constraints for a and c in a full-period LCG may satisfy my requirements, but I am wondering if there are any better ideas out there.
Based on Jason's answer, I've made a simple, straightforward implementation in C#. Find the next power of two greater than N and call it M. This makes it trivial to generate a and c: c needs to be relatively prime to M (i.e., odd), and (a - 1) needs to be divisible by every prime factor of M (just 2) and also by 4. Statistically, it should take 1-2 applications of the congruence to generate the next number (since 2N >= M >= N).
using System;
using System.Collections.Generic;

class Program
{
    IEnumerable<int> GenerateSequence(int N)
    {
        Random r = new Random();
        int M = NextLargestPowerOfTwo(N);
        int c = r.Next(M / 2) * 2 + 1; // make c any odd number between 0 and M
        int a = r.Next(M / 4) * 4 + 1; // M = 2^m, so make (a-1) divisible by all prime factors, and 4
        int start = r.Next(M);
        int x = start;
        do
        {
            x = (a * x + c) % M;
            if (x < N)
                yield return x;
        } while (x != start);
    }

    int NextLargestPowerOfTwo(int n)
    {
        n |= (n >> 1);
        n |= (n >> 2);
        n |= (n >> 4);
        n |= (n >> 8);
        n |= (n >> 16);
        return (n + 1);
    }

    static void Main(string[] args)
    {
        Program p = new Program();
        foreach (int n in p.GenerateSequence(1000))
        {
            Console.WriteLine(n);
        }
        Console.ReadKey();
    }
}
Here is a Python implementation of the Linear Congruential Generator from FryGuy's answer. Because I needed to write it anyway and thought it might be useful for others.
import random
import math
def lcg(start, stop):
    N = stop - start
    # M is the next largest power of 2
    M = int(math.pow(2, math.ceil(math.log(N + 1, 2))))
    # c is any odd number between 0 and M
    c = random.randint(0, M // 2 - 1) * 2 + 1
    # M = 2^m, so make (a-1) divisible by all prime factors and 4
    a = random.randint(0, M // 4 - 1) * 4 + 1
    first = random.randint(0, M - 1)
    x = first
    while True:
        x = (a * x + c) % M
        if x < N:
            yield start + x
        if x == first:
            break

if __name__ == "__main__":
    for x in lcg(100, 200):
        print(x, end=" ")
Sounds like you want an algorithm which is guaranteed to produce a cycle from 0 to n-1 without any repeats. There are almost certainly a whole bunch of these depending on your requirements; group theory would be the most helpful branch of mathematics if you want to delve into the theory behind it.
If you want fast and don't care about predictability/security/statistical patterns, an LCG is probably the simplest approach. The Wikipedia page you linked to contains this (fairly simple) set of requirements:
The period of a general LCG is at most m, and for some choices of a much less than that. The LCG will have a full period if and only if:
1. c and m are relatively prime,
2. a - 1 is divisible by all prime factors of m,
3. a - 1 is a multiple of 4 if m is a multiple of 4.
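For instance, here is a tiny check (with example values m = 16, a = 5, c = 3 of my own choosing, not from the quote) that parameters satisfying these conditions really do give a full period:

# m = 16, a = 5, c = 3: c is odd (so coprime to 16), and a - 1 = 4 is divisible by 2 and by 4
m, a, c = 16, 5, 3
x, seen = 0, []
for _ in range(m):
    x = (a * x + c) % m
    seen.append(x)
print(sorted(seen) == list(range(m)))  # True: every residue mod 16 appears exactly once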
Alternatively, you could choose a period N >= n, where N is the smallest value that has convenient numerical properties, and just discard any values produced between n and N-1. For example, the lowest N = 2^k - 1 >= n would let you use linear feedback shift registers (LFSR). Or find your favorite cryptographic algorithm (RSA, AES, DES, whatever) and, given a particular key, figure out the space N of numbers it permutes, and for each step apply encryption once.
If n is small but you want the security to be high, that's probably the trickiest case: any sequence S is likely to have a period N much higher than n, but it is also nontrivial to derive a nonrepeating sequence of numbers with a shorter period than N. (E.g., if you could take the output of S mod n and guarantee a nonrepeating sequence of numbers, that would give information about S that an attacker might use.)
See my article on secure permutations with block ciphers for one way to do it.
Look into linear feedback shift registers (LFSRs); they can be used for exactly this.
The short way of explaining them is that you start with a seed and then iterate using the formula
x = (x << 1) | f(x)
where f(x) can only return 0 or 1.
If you choose a good function f, x will cycle through all values between 1 and 2^n - 1 (where n is the number of bits in the register), in a good, pseudo-random way.
Example functions can be found here, e.g. for 63 values you can use
f(x) = ((x >> 6) & 1) ^ ((x >> 5) & 1)
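For illustration, here is a sketch of a 6-bit Fibonacci LFSR in Python (my own code; the tap positions come from a known maximal-length configuration and are not necessarily the same as in the linked table):

def lfsr6(seed):
    # 6-bit Fibonacci LFSR; these taps give the maximal period of 2^6 - 1 = 63
    x = seed & 0x3F
    assert x != 0, "an all-zero state stays at zero forever"
    while True:
        bit = ((x >> 5) ^ (x >> 4)) & 1   # feedback from the two high bits
        x = ((x << 1) | bit) & 0x3F       # shift left, insert feedback, keep 6 bits
        yield x

gen = lfsr6(1)
states = [next(gen) for _ in range(63)]
print(len(set(states)))  # 63: every non-zero 6-bit value appears once per cycle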