efficient X within limit algorithm - performance

I determine the limits as limit(0)=0; limit(y)=2*1.08^(y-1), y∈{1,2,3,...,50}, or if you prefer a recursive definition:
limit(0)=0
limit(1)=2
limit(y)=limit(y-1)*1.08, y∈{2,3,4,...,50}
Examples:
limit(1) = 2*1.08^0 = 2
limit(2) = 2*1.08^1 = 2.16
limit(3) = 2*1.08^2 = 2.3328
...
For a given x∈[0,infinity) I want an efficient formula to calculate y such that limit(y)>x and limit(y-1)≤x, or 50 if there is no such y.
Any ideas?
Or is pre-calculating the 50 limits and using a couple of ifs the best solution?
I am using Erlang, but I don't think the language makes much of a difference.

limit(y) = 2 * 1.08^(y-1)
limit(y) > x >= limit(y - 1)
Now if I haven't made a mistake,
2 * 1.08^(y - 1) > x >= 2 * 1.08^(y - 2)
1.08^(y - 1) > x / 2 >= 1.08^(y - 2)
y - 1 > log[1.08](x / 2) >= y - 2
y + 1 > 2 + ln(x / 2) / ln(1.08) >= y
y <= 2 + ln(x / 2) / ln(1.08) < y + 1
Which gives you
y = floor(2 + ln(x / 2) / ln(1.08))
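For concreteness, here is a minimal Python sketch of that formula (the function name bracket and the clamping to 1..50 are my own additions); beware of floating-point rounding when x is exactly equal to one of the limits:
import math

def bracket(x, max_y=50):
    # smallest y with limit(y) > x >= limit(y-1), capped at max_y
    if x < 2:                 # limit(1) = 2 already exceeds x
        return 1
    y = math.floor(2 + math.log(x / 2) / math.log(1.08))
    return min(y, max_y)

For example, bracket(2.1) returns 2, since limit(2) = 2.16 > 2.1 >= limit(1) = 2.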

Algorithm for expressing given number as a sum of two squares

My problem is as follows:
I'm given a natural number n and I want to find all natural numbers x and y such that
n = x² + y²
Since this is addition, order does not matter, so I count (x,y) and (y,x) as one solution.
My initial algorithm assumes y > x; for each x it computes y² = n - x² and checks whether y² is a perfect square using binary search.
for (x = 1; 2 * x * x < n; x++)
{
    y_squared = n - x * x;
    if (isSquare(y_squared) == false)
        continue;
    // rest of the code
}
Is there any improvement for my algorithm? I already checked whether n can have solutions using the sum of two squares theorem, but I want to know how many there are.
My algorithm is O(sqrt(n) * log(n)).
Thanks in advance.
You can reduce this to O(sqrt(n)) this way:
all_solutions(n):
    x = 0
    y = floor(sqrt(n))
    while x <= y
        if x * x + y * y < n
            x++
        else if x * x + y * y > n
            y--
        else
            // found a match
            print(x, y)
            x++
            y--
This algorithm finds and prints all possible solutions and always terminates: x never grows past about sqrt(n / 2) and y never drops below about sqrt(n / 2), so at most sqrt(n / 2) + (sqrt(n) - sqrt(n / 2)) = sqrt(n) iterations are performed.
A variation of Paul's, keeping track of the sum of squares and adjusting it just with additions/subtractions:
Pseudo-code: (evaluate x++ + x and y-- + y left-to-right, or do it like in the Python code below)
x = 0
y = floor(sqrt(n))
sum = y * y
while x <= y
    if sum < n
        sum += x++ + x
    else if sum > n
        sum -= y-- + y
    else
        print(x, y)
        sum += 2 * (++x - y--)
Java:
static void allSolutions(int n) {
    int x = 0, y = (int) Math.sqrt(n), sum = y * y;
    while (x <= y) {
        if (sum < n) {
            sum += x++ + x;
        } else if (sum > n) {
            sum -= y-- + y;
        } else {
            System.out.println(x + " " + y);
            sum += 2 * (++x - y--);
        }
    }
}
Python:
from math import isqrt

def all_solutions(n):
    x = 0
    y = isqrt(n)
    sum = y ** 2
    while x <= y:
        if sum < n:
            x += 1
            sum += 2 * x - 1
        elif sum > n:
            sum -= 2 * y - 1
            y -= 1
        else:
            # found a match
            print(x, y)
            x += 1
            sum += 2 * (x - y)
            y -= 1
Demo:
>>> all_solutions(5525)
7 74
14 73
22 71
25 70
41 62
50 55

The sum from 1 to n in theta(log n)

Is there anyway to calculate the sum of 1 to n in Theta(log n)?
Of course, the obvious way to do it is sum = n*(n+1)/2.
However, for practicing, I want to calculate in Theta(log n).
For example,
sum = 0; for (int i = 1; i <= n; i++) { sum += i; }
this code will calculate in Theta(n).
A fair way (without using math formulas) requires directly summing all n values, so there is no way to avoid O(n) behavior.
If you want some artificial approach intended to give O(log(N)) time, consider, for example, using powers of two (knowing that Sum(1..2^k) = 2^(k-1) + 2^(2*k-1), for example Sum(8) = 4 + 32). Pseudocode:
function Sum(n)
    if n < 2
        return n
    p = 1    // 2^(k-1)
    p2 = 2   // 2^(2*k-1)
    while p * 4 < n:
        p = p * 2
        p2 = p2 * 4
    return p + p2 +               // sum of 1..2^k
           2 * p * (n - 2 * p) +  // each of the (n - 2*p) summands above 2^k includes 2^k
           Sum(n - 2 * p)         // sum of the rest above 2^k
Here 2*p = 2^k is the largest power of two strictly less than n. Example:
Sum(7) = Sum(4) + 5 + 6 + 7 =
Sum(4) + (4 + 1) + (4 + 2) + (4 + 3) =
Sum(4) + 3 * 4 + Sum(3) =
Sum(4) + 3 * 4 + Sum(2) + 1 * 2 + Sum(1) =
2 + 8 + 12 + 1 + 2 + 2 + 1 = 28
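For reference, a direct Python translation of the pseudocode above (a sketch; the function name is mine):
def sum_1_to_n(n):
    # recursive sum of 1..n using powers of two, without the closed-form formula
    if n < 2:
        return n
    p = 1    # 2^(k-1)
    p2 = 2   # 2^(2*k-1)
    while p * 4 < n:
        p *= 2
        p2 *= 4
    # Sum(2p) = p + p2; the (n - 2p) terms above 2p each contribute 2p plus a smaller sum
    return p + p2 + 2 * p * (n - 2 * p) + sum_1_to_n(n - 2 * p)

For example, sum_1_to_n(7) == 28 and sum_1_to_n(100) == 5050.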

Approach and Code for o(log n) solution

f(N) = 0^0 + 1^1 + 2^2 + 3^3 + 4^4 + ... + N^N.
I want to calculate (f(N) mod M).
These are the constraints.
1 ≤ N ≤ 10^9
1 ≤ M ≤ 10^3
Here is my code
test = int(input())
ans = 0
for cases in range(test):
    arr = [int(x) for x in input().split()]
    N = arr[0]
    mod = arr[1]
    #ret = sum([int(y**y) for y in range(N+1)])
    #ans = ret
    for i in range(1, N+1):
        ans = (ans + pow(i, i, mod)) % mod
    print(ans)
I tried another approach but in vain.
Here is code for that
from functools import reduce
test = int(input())
answer = 0
for cases in range(test):
    arr = [int(x) for x in input().split()]
    N = arr[0]
    mod = arr[1]
    answer = reduce(lambda K, N: x + pow(N, N), range(1, N+1)) % M
    print(answer)
Four suggestions:
1. Take 0^0 = 1; that seems like the most sensible way to handle that term here.
2. Compute k^k by multiplying and taking the modulus as you go.
3. Do an initial pass where every base k (not the exponents) is changed to k mod M before doing anything else.
4. While computing (k mod M)^k, if an intermediate result is one you've already seen, you have found a cycle and can skip ahead by whole cycles, so that at most one partial cycle of iterations remains.
Example: let N = 5 and M = 3. We want to calculate 0^0 + 1^1 + 2^2 + 3^3 + 4^4 + 5^5 (mod 3).
First, we apply suggestion 3. Now we want to calculate 0^0 + 1^1 + 2^2 + 0^3 + 1^4 + 2^5 (mod 3).
Next, we begin evaluating and use suggestion 1 immediately to get 1 + 1 + 2^2 + 0^3 + 1^4 + 2^5 (mod 3). 2^2 is 4 = 1 (mod 3) of which we make a note (2^2 = 1 (mod 3)). Next, we find 0^1 = 0, 0^2 = 0 so we have a cycle of size 1 meaning no further multiplication is needed to tell 0^3 = 0 (mod 3). Note taken. Similar process for 1^4 (we tell on the second iteration that we have a cycle of size 1, so 1^4 is 1, which we note). Finally, we get 2^1 = 2 (mod 3), 2^2 = 1(mod 3), 2^3 = 2(mod 3), a cycle of length 2, so we can skip ahead an even number which exhausts 2^5 and without checking again we know that 2^5 = 2 (mod 3).
Our sum is now 1 + 1 + 1 + 0 + 1 + 2 (mod 3) = 2 + 1 + 0 + 1 + 2 (mod 3) = 0 + 0 + 1 + 2 (mod 3) = 0 + 1 + 2 (mod 3) = 1 + 2 (mod 3) = 0 (mod 3).
These rules will be helpful to you since your cases see N much larger than M. If this were reversed - if N were much smaller than M - you'd get no benefit from my method (and taking the modulus w.r.t. M would affect the outcome less).
Pseudocode:
Compute(N, M)
    sum = 0
    for i = 0 to N do
        term = SelfPower(i, M)
        sum = (sum + term) % M
    return sum

SelfPower(k, M)
    selfPower = 1
    iterations = new HashTable
    for i = 1 to k do
        selfPower = (selfPower * (k % M)) % M
        if iterations[selfPower] is defined
            i = k - (k - i) % (i - iterations[selfPower])
            clear out iterations
        else
            iterations[selfPower] = i
    return selfPower
Example execution:
result = Compute(5, 3)
sum = 0
i = 0
term = SelfPower(0, 3)
selfPower = 1
iterations = []
// does not enter loop
return 1
sum = (0 + 1) % 3 = 1
i = 1
term = SelfPower(1, 3)
selfPower = 1
iterations = []
i = 1
selfPower = (1 * 1 % 3) % 3 = 1
iterations[1] is not defined
iterations[1] = 1
return 1
sum = (1 + 1) % 3 = 2
i = 2
term = SelfPower(2, 3)
selfPower = 1
iterations = []
i = 1
selfPower = (1 * 2 % 3) % 3 = 2
iterations[2] is not defined
iterations[2] = 1
i = 2
selfPower = (2 * 2 % 3) % 3 = 1
iterations[1] is not defined
iterations[1] = 2
return 1
sum = (2 + 1) % 3 = 0
i = 3
term = SelfPower(3, 3)
selfPower = 1
iterations = []
i = 1
selfPower = (1 * 3 % 3) % 3 = 0
iterations[0] is not defined
iterations[0] = 1
i = 2
selfPower = (0 * 3 % 3) % 3 = 0
iterations[0] is defined as 1
i = 3 - (3 - 2) % (2 - 1) = 3
iterations is blank
return 0
sum = (0 + 0) % 3 = 0
i = 4
term = SelfPower(4, 3)
selfPower = 1
iterations = []
i = 1
selfPower = (1 * 4 % 3) % 3 = 1
iterations[1] is not defined
iterations[1] = 1
i = 2
selfPower = (1 * 4 % 3) % 3 = 1
iterations[1] is defined as 1
i = 4 - (4 - 2) % (2 - 1) = 4
iterations is blank
return 1
sum = (0 + 1) % 3 = 1
i = 5
term = SelfPower(5, 3)
selfPower = 1
iterations = []
i = 1
selfPower = (1 * 5 % 3) % 3 = 2
iterations[2] is not defined
iterations[2] = 1
i = 2
selfPower = (2 * 5 % 3) % 3 = 1
iterations[1] is not defined
iterations[1] = 2
i = 3
selfPower = (1 * 5 % 3) % 3 = 2
iterations[2] is defined as 1
i = 5 - (5 - 3) % (3 - 1) = 5
iterations is blank
return 2
sum = (1 + 2) % 3 = 0
return 0
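For completeness, here is a hedged Python sketch of the pseudocode above (the names self_power and compute are my own; same cycle-skipping idea):
def self_power(k, m):
    # (k ** k) % m by repeated modular multiplication, skipping ahead once the
    # sequence of partial products starts to cycle
    value = 1 % m
    seen = {}
    base = k % m
    i = 1
    while i <= k:
        value = (value * base) % m
        if value in seen:
            cycle = i - seen[value]
            i = k - (k - i) % cycle   # jump forward a whole number of cycles
            seen = {}
        else:
            seen[value] = i
        i += 1
    return value

def compute(n, m):
    total = 0
    for i in range(n + 1):   # i = 0 contributes 0^0 = 1 (self_power(0, m) returns 1 % m)
        total = (total + self_power(i, m)) % m
    return total

For example, compute(5, 3) returns 0, matching the walk-through above.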
Why not just use simple recursion to find the sum of the powers?
def find_powersum(s):
    if s == 0:
        return 1  # 0^0 is taken to be 1
    return s**s + find_powersum(s - 1)

def find_mod(s, m):
    print(find_powersum(s) % m)

find_mod(4, 4)
1

Number of ways to divide a number

Given a number N, print in how many ways it can be represented as
N = a + b + c + d
with
1 <= a <= b <= c <= d; 1 <= N <= M
My observation:
For N = 4: Only 1 way - 1 + 1 + 1 + 1
For N = 5: Only 1 way - 1 + 1 + 1 + 2
For N = 6: 2 ways - 1 + 1 + 1 + 3
1 + 1 + 2 + 2
For N = 7: 3 ways - 1 + 1 + 1 + 4
1 + 1 + 2 + 3
1 + 2 + 2 + 2
For N = 8: 5 ways - 1 + 1 + 1 + 5
1 + 1 + 2 + 4
1 + 1 + 3 + 3
1 + 2 + 2 + 3
2 + 2 + 2 + 2
So I have reduced it to a DP solution as follows:
DP[4] = 1, DP[5] = 1;
for (int i = 6; i <= M; i++)
    DP[i] = DP[i-1] + DP[i-2];
Is my observation correct or am I missing anything? I don't have any test cases to run on, so please let me know whether the approach is correct or wrong.
It's not correct. Here is the correct one:
Let DP[n,k] be the number of ways to represent n as a sum of k numbers.
Then you are looking for DP[n,4].
DP[n,1] = 1
DP[n,2] = DP[n-2, 2] + DP[n-1,1] = n / 2
DP[n,3] = DP[n-3, 3] + DP[n-1,2]
DP[n,4] = DP[n-4, 4] + DP[n-1,3]
I will only explain the last line; you can see right away why the others are true.
Let's take one case of n=a+b+c+d.
If a > 1, then n-4 = (a-1)+(b-1)+(c-1)+(d-1) is a valid sum for DP[n-4,4].
If a = 1, then n-1 = b+c+d is a valid sum for DP[n-1,3].
Also in reverse:
For each valid n-4 = x+y+z+t we have a valid n=(x+1)+(y+1)+(z+1)+(t+1).
For each valid n-1 = x+y+z we have a valid n=1+x+y+z.
Unfortunately, your recurrence is wrong, because for n = 9, the solution is 6, not 8.
If p(n,k) is the number of ways to partition n into k non-zero integer parts, then we have
p(0,0) = 1
p(n,k) = 0 if k > n or (n > 0 and k = 0)
p(n,k) = p(n-k, k) + p(n-1, k-1)
Because either there is a part equal to 1 (in which case taking this part away yields a partition of n-1 into k-1 parts), or every part is at least 2 and you can subtract 1 from each part, yielding a partition of n-k into k parts. It's easy to show that this process is a bijection, hence the recurrence.
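A minimal bottom-up sketch of this recurrence in Python (the function name is my own); it applies to both answers above, since they describe the same table:
def count_partitions_into_4(n):
    # p[m][k] = number of ways to write m as a sum of exactly k parts, each >= 1,
    # where order does not matter; p(m,k) = p(m-k,k) + p(m-1,k-1), p(0,0) = 1
    p = [[0] * 5 for _ in range(n + 1)]
    p[0][0] = 1
    for m in range(1, n + 1):
        for k in range(1, 5):
            p[m][k] = p[m - 1][k - 1]
            if m >= k:
                p[m][k] += p[m - k][k]
    return p[n][4]

For example, count_partitions_into_4(8) == 5 and count_partitions_into_4(9) == 6.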
UPDATE:
For the specific case k = 4, OEIS tells us that there is another linear recurrence that depends only on n:
a(n) = 1 + a(n-2) + a(n-3) + a(n-4) - a(n-5) - a(n-6) - a(n-7) + a(n-9)
This recurrence can be solved via standard methods to get an explicit formula. I wrote a small SAGE script to solve it and got the following formula:
a(n) = 1/144*n^3 + 1/32*(-1)^n*n + 1/48*n^2 - 1/54*(1/2*I*sqrt(3) - 1/2)^n*(I*sqrt(3) + 3) - 1/54*(-1/2*I*sqrt(3) - 1/2)^n*(-I*sqrt(3) + 3) + 1/16*I^n + 1/16*(-I)^n + 1/32*(-1)^n - 1/32*n - 13/288
OEIS also gives the following simplification:
a(n) = round((n^3 + 3*n^2 -9*n*(n % 2))/144)
Which I have not verified.
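One way to check it, assuming the brute-force counter below is an acceptable reference, is to compare the rounded formula against a direct enumeration for small n:
def count4_bruteforce(n):
    # count 1 <= a <= b <= c <= d with a + b + c + d = n by direct enumeration
    return sum(1 for a in range(1, n + 1)
                 for b in range(a, n + 1)
                 for c in range(b, n + 1)
                 if n - a - b - c >= c)

def closed_form(n):
    # the OEIS simplification quoted above
    return round((n**3 + 3 * n**2 - 9 * n * (n % 2)) / 144)

assert all(closed_form(n) == count4_bruteforce(n) for n in range(1, 60))

The assertion holds for every n below 60, which at least supports the simplification for small inputs.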
#include <iostream>
using namespace std;

int func_count(int n, int m)
{
    if (n == m)
        return 1;
    if (n < m)
        return 0;
    if (m == 1)
        return 1;
    if (m == 2)
        return (func_count(n - 2, 2) + func_count(n - 1, 1));
    if (m == 3)
        return (func_count(n - 3, 3) + func_count(n - 1, 2));
    return (func_count(n - 1, 3) + func_count(n - 4, 4));
}

int main()
{
    int t;
    cin >> t;
    cout << func_count(t, 4);
    return 0;
}
I think that the definition of a function f(N,m,n) where N is the sum we want to produce, m is the maximum value for each term in the sum and n is the number of terms in the sum should work.
For n = 1, f(N,m,1) is defined to be 0 if N > m or N < 1, and 1 otherwise.
For n > 1, f(N,m,n) is the sum, for t from 1 to min(m, N), of f(N-t, t, n-1).
This corresponds to setting the terms from right to left, largest term first.
You can then solve the problem using this relationship, probably using memoization.
For maximum n=4, and N=5000, (and implementing cleverly to quickly work out when there are 0 possibilities), I think that this is probably computable quickly enough for most purposes.
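A memoized Python sketch of this idea (the function name f and the use of functools.lru_cache are my own choices):
from functools import lru_cache

@lru_cache(maxsize=None)
def f(N, m, n):
    # number of ways to write N as t1 + ... + tn with 1 <= t1 <= ... <= tn <= m
    if n == 1:
        return 1 if 1 <= N <= m else 0
    total = 0
    for t in range(1, min(m, N) + 1):   # t is the largest (rightmost) term
        total += f(N - t, t, n - 1)
    return total

For example, f(9, 9, 4) returns 6, the number of ways to write 9 as a + b + c + d with 1 <= a <= b <= c <= d.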

Can integer division be rounded up, instead of down?

Is there a way to round the result of integer division up to the nearest integer, rather than down?
For example, I would like to change the default behavior:
irb(main):001:0> 5 / 2
=> 2
To the following behavior:
irb(main):001:0> 5 / 2
=> 3
The function you are looking for is ceil.
ceil returns the smallest integer greater than or equal to a floating point number, i.e. it rounds upwards.
4/3 = 1
4.0/3.0 = 1.3333...3
(4.0/3.0).ceil = 2
Also, note that this rounds in the positive direction, so
(-4.0/3.0).ceil = -1, NOT -2
Also, there is the corresponding floor function which rounds downwards.
This is more of an algorithm question than a Ruby-specific question.
Try (a + b - 1) / b. For example
(5 + 2 - 1) / 2 #=> 3
(10 + 3 - 1) / 3 #=> 4
(6 + 3 - 1) / 3 #=> 2
You can define an instance method, say divide_by, in the Integer class (monkey patch):
class Integer
  def divide_by(divisor)
    (self + divisor - 1) / divisor
  end
end
According to my benchmark, it's about 50% faster than the to_f-then-ceil solution.
CORRECTION
The method shown above gives wrong results when the divisor is negative.
Here's a method that handles the sign combinations in the examples below: (a * 2 + b) / (b * 2). (Caveat: as the proof further down shows, this expression rounds a / b to the nearest integer, halves upwards, so it only equals the ceiling when the remainder is zero or at least half the divisor in absolute value. That always holds for |b| <= 2, which is why every example below works, but for instance (5 * 2 + 4) / (4 * 2) #=> 1, while the ceiling of 5 / 4 is 2.)
a = 5
b = 2
(a * 2 + b) / (b * 2) #=> 3
a = 6
b = 2
(a * 2 + b) / (b * 2) #=> 3
a = 5
b = 1
(a * 2 + b) / (b * 2) #=> 5
a = -5
b = 2
(a * 2 + b) / (b * 2) #=> -2 (-2.5 rounded up to -2)
a = 5
b = -2
(a * 2 + b) / (b * 2) #=> -2 (-2.5 rounded up to -2)
a = -5
b = -2
(a * 2 + b) / (b * 2) #=> 3
a = 10
b = 0
(a * 2 + b) / (b * 2) #=> raises ZeroDivisionError
The monkey patch should be
class Integer
  def divide_by(divisor)
    (self * 2 + divisor) / (divisor * 2)
  end
end
Mathematical Proof:
The dividend a and the divisor b satisfy the equation a = kb + m, where a, b, k, m are all integers, b is not zero, and m lies between b and 0 (it can be 0).
For example, when a is 5 and b is 2, then a = 2b + 1, thus in this case, k = 2 and m = 1.
Another example for negative divisor, a is 5, b is -2, then a = -3b + (-1), thus k = -3 and m = -1.
(2a + b) / 2b
= (2(kb + m) + b) / 2b
= (2kb + b + 2m) / 2b
When m = 0
(2kb + b + 2m) / 2b
= (2k + 1)b / 2b
= k + (1 / 2)
= k + 0 # in Ruby
= k # in Ruby
and since k = a / b, we got the correct answer.
When m is not 0,
(2kb + b + 2m) / 2b
= ((2k + 2)b - b + 2m) / 2b
= (k + 1) + (2m - b) / 2b
If b > 0, then 0 < m < b, so -b < 2m - b < b and -1/2 < (2m - b) / 2b < 1/2. Because Ruby floors the quotient, the second term is 0 when 2m - b >= 0 (that is, m >= b / 2), but it is -1 when 2m - b < 0.
If b < 0 the same thing happens: the second term is 0 when |m| >= |b| / 2 and -1 otherwise.
So when m is not 0, (2a + b) / 2b evaluates to k + 1 only if |m| >= |b| / 2; otherwise it evaluates to k. In other words the expression rounds a / b to the nearest integer (halves upwards) rather than always rounding up, which coincides with the ceiling whenever |b| <= 2.
If what you actually want is integer division that rounds up whenever there is a remainder, just do it the straightforward way, as the logic would dictate on paper, using a second line with the modulo operator (%) to check the remainder of the division:
a = 5
b = 2
result = a / b #=> 2
result += 1 if (a % b).positive?
#=> 3
a = 6
b = 3
result = a / b #=> 2
result += 1 if (a % b).positive?
#=> 2
