Counting numbers co-prime to n which are less than m
I thought of computing this as (phi(n)/n)*m, but that always has some small error.
One way is to use the inclusion-exclusion principle, but I am looking for a better algorithm than that.
e.g.
n = 20, m = 10
{1, 3, 7, 9}
Ans = 4
First, find all x < m such that x is prime and x divides n; store them in a list x. Then initialize a boolean array s[1..m-1] to true and cross out every multiple of each such prime:
i = 1;
while i <= x.count do
{
    j = 1;
    while x[i] * j < m
    {
        s[x[i] * j] = false;
        j++;
    }
    i++;
}
Now count all k for which s[k] = true.
This takes O(m).
So you can do all steps in O(m * x.count).
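For reference, here is a rough Java sketch of this sieve (the identifiers are my own):

// Count integers in [1, m) co-prime to n by crossing out multiples of
// every prime factor of n that is smaller than m. Assumes m >= 2.
static int countCoprimesBelow(int n, int m) {
    boolean[] coprime = new boolean[m];
    java.util.Arrays.fill(coprime, 1, m, true);   // coprime[k] for 1 <= k < m
    for (int p = 2; (long) p * p <= n; p++) {     // prime factors by trial division
        if (n % p == 0) {
            for (int k = p; k < m; k += p) coprime[k] = false;
            while (n % p == 0) n /= p;
        }
    }
    if (n > 1) {                                   // one prime factor may remain
        for (int k = n; k < m; k += n) coprime[k] = false;
    }
    int count = 0;
    for (int k = 1; k < m; k++) if (coprime[k]) count++;
    return count;                                  // n = 20, m = 10 gives 4
}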
I've been staring at this for a while and it's not sinking in. I think I understand at a basic level what's going on. E.g. A = {1, 2, 3, 4}
Sum = A[0] + [A[0] + A[1]] + [A[0] + A[1] + A[2]] + [A[0] + A[1] + A[2] + A[3]]
However, I'm not able to understand the steps via the explanation/notation below - or at least, it's a little fuzzy. Could someone please explain the steps/walk through what's happening.
Example 1.4 (Sums of subarrays). The problem is to compute, for each subarray a[j..j+m−1] of size m in an array a of size n, the partial sum of its elements s[j] = ∑_{k=0}^{m−1} a[j+k], for j = 0, ..., n−m. The total number of these subarrays is n−m+1.
At first glance, we need to compute n−m+1 sums, each of m items, so that the running time is proportional to m(n−m+1). If m is fixed, the time still depends linearly on n. But if m grows with n as a fraction of n, such as m = n/2, then T(n) = c*(n/2)*(n/2 + 1) = 0.25cn^2 + 0.5cn. The relative weight of the linear part, 0.5cn, decreases quickly with respect to the quadratic one as n increases.
Well, the expansion you provided does not match the problem. I think your Example 1.4 is really about the following.
A = {1, 2, 3, 4}, m = 3.
Sum = (A[0] + A[1] + A[2]) + (A[1] + A[2] + A[3]).
Here you have n−m+1 (4−3+1=2) subsums of m (3) elements each. The described algorithm can be performed in code like this:
function SumOfSubarrays(A, n, m) {
    s = new Array(n - m + 1);
    // loop over subarrays
    for (j = 0; j <= n - m; j++) {
        s[j] = 0;
        // loop over elements of each subarray
        for (k = 0; k <= m - 1; k++) {
            s[j] += A[j + k];
        }
    }
    return s;
}
For fixed m, the time complexity of this algorithm depends linearly on n. But, as Example 1.4 says, if m grows as a fraction of n, then the time complexity becomes quadratic.
In total you need m(n−m+1) operations: n−m+1 iterations of the outer loop (the number of subarrays) times m iterations of the inner loop (the number of elements in each subarray). If m depends on n then you have, for example:
m = 0.5 * n
m(n-m+1) = 0.5n(n - 0.5n + 1) = 0.5n(0.5n + 1) = 0.25n^2 + 0.5n
where the quadratic part grows faster and dominates as n increases.
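To double-check that arithmetic, here is a tiny Java sketch (my own) that simply counts the inner-loop operations of the double loop:

// Counts the operations of the nested loops, confirming m * (n - m + 1).
static long operationCount(int n, int m) {
    long ops = 0;
    for (int j = 0; j <= n - m; j++)      // n - m + 1 subarrays
        for (int k = 0; k <= m - 1; k++)  // m elements each
            ops++;
    return ops;                           // equals (long) m * (n - m + 1)
}
// e.g. operationCount(100, 50) == 50 * 51 == 2550, matching 0.25n^2 + 0.5n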
I am looking for an efficient algorithm for the following problem: for any N, find all i and j such that N = i^j.
I can solve it in O(N^2) as follows:
for i=1 to N
{
    for j=1 to N
    {
        if (Power(i,j) == N)
            print(i,j)
    }
}
I am looking for a better algorithm (or a program in any language) if possible.
Given that i^j=N, you can solve the equation for j by taking the log of both sides:
j log(i) = log(N), or j = log(N) / log(i). So the algorithm becomes:
for i=2 to N
{
    j = log(N) / log(i)
    if (Power(i,j) == N)
        print(i,j)
}
Note that due to rounding errors with floating point calculations, you might want to check j-1 and j+1 as well, but even so, this is an O(n) solution.
Also, you need to skip i=1 since log(1) = 0 and that would result in a divide-by-zero error. In other words, N=1 needs to be treated as a special case. Or not allowed, since the solution for N=1 is i=1 and j=any value.
As M Oehm pointed out in the comments, an even better solution is to iterate over j, and compute i with pow(n,1.0/j). That reduces the time complexity to O(logN), since the maximum value for j is log2(N).
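For illustration, here is a rough Java sketch of that j-iteration (the helper names are mine, and the ±1 loop guards against floating-point rounding):

// Find all pairs (i, j), j >= 2, with i^j == N by iterating over the exponent.
static void findPowers(long N) {
    int maxJ = 63 - Long.numberOfLeadingZeros(Math.max(2, N));  // floor(log2(N))
    for (int j = 2; j <= Math.max(2, maxJ); j++) {
        long i = Math.round(Math.pow(N, 1.0 / j));
        // rounding can be off by one, so try i-1, i, i+1
        for (long c = Math.max(2, i - 1); c <= i + 1; c++) {
            if (intPow(c, j) == N) System.out.println(c + "^" + j + " = " + N);
        }
    }
}

// overflow-safe integer power: returns -1 if base^exp exceeds the long range
static long intPow(long base, int exp) {
    long r = 1;
    for (int e = 0; e < exp; e++) {
        if (r > Long.MAX_VALUE / base) return -1;
        r *= base;
    }
    return r;
}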
Here is a method you can use.
Let's say you have to solve the equation:
a^b = n // b and n are known
You can find a using binary search. If you reach a state where
x^b < n and (x+1)^b > n
then no integer a exists such that a^b = n for this b.
If you apply this method for each b in the range 1..log2(n), you get all possible pairs.
So the complexity of this method is O(log n * log n).
Follow these steps:
function ifPower(n, b)
    min = 1, max = n
    while min <= max
        mid = min + (max - min) / 2
        k = mid^b, l = (mid + 1)^b
        if k == n
            return mid
        if l == n
            return mid + 1
        if k < n && l > n
            return -1
        if l < n
            min = mid + 2   // mid + 2, as we have already checked mid + 1
        else                // k > n
            max = mid - 1
    return -1

function findAll(n)
    s = log2(n)
    for i in range 2 to s   // starting from 2 to ignore base cases, powers 0,1... you can handle them if required
        p = ifPower(n, i)
        if p != -1
            print(p, i)
Here, in the algorithm, a^b means a raised to the power of b, not a xor b (it's obvious, but just saying).
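A Java sketch of the same idea (my own code, slightly simplified to a plain binary search per exponent b):

// Binary search for an integer a with a^b == n; returns -1 if none exists.
static long integerRoot(long n, int b) {
    long lo = 1, hi = n;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;
        long p = intPow(mid, b);                 // -1 signals overflow
        if (p == n) return mid;
        if (p == -1 || p > n) hi = mid - 1;      // mid too big
        else lo = mid + 1;                       // mid too small
    }
    return -1;
}

static void findAll(long n) {
    int maxB = 63 - Long.numberOfLeadingZeros(n);  // floor(log2(n)) for n >= 2
    for (int b = 2; b <= Math.max(2, maxB); b++) {
        long a = integerRoot(n, b);
        if (a != -1) System.out.println(a + "^" + b + " = " + n);
    }
}

// overflow-safe integer power: returns -1 if base^exp exceeds the long range
static long intPow(long base, int exp) {
    long r = 1;
    for (int e = 0; e < exp; e++) {
        if (r > Long.MAX_VALUE / base) return -1;
        r *= base;
    }
    return r;
}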
for i = 1 to n do
j = 2
while j < i do
j = j * j
I think its time complexity is: log(n!) = n * log(n).
But the solution said that it is n * loglog(n), and I didn't understand why.
In the explanation below, I assume that all arithmetic and comparison operations are O(1).
for i = 1 to n do
The below is repeated n times, which makes the n * part in the solution.
j = 2
while j < i do
j = j * j
The above calculates the first number of the following sequence that's >= i :
2 = 2^(2^0)
4 = 2^(2^1)
16 = 2^(2^2)
256 = 2^(2^3)
65536 = 2^(2^4)
...
So the only thing you need to do is find how many steps this sequence needs to reach i. The k-th term is 2^(2^k), and log(log(2^(2^k))) = log(2^k) = k, so the inner loop runs about log(log(i)) times.
Let's break it down and work from the inside out.
Imagine the following:
j = 2
while j < n do
j = j * 2
j goes 2, 4, 8, 16..., so if n doubles in size, it only takes roughly one more iteration for j to surpass it. That's basically the definition of logarithmic.
The inner loop in your case is a bit different:
j = 2
while j < n do
j = j * j
Now j goes 2, 4, 16, 256, 65536... and surpasses n even more easily. In the first case, j was growing exponentially per iteration; now it's growing doubly exponentially. But we're interested in the inverse: j surpasses n in log(log(n)) steps.
Then the outer loop just means you do that n times.
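A quick empirical check (my own demo) makes the log(log(n)) growth visible:

// Count how many times j = j * j runs before j reaches n.
static int squaringSteps(long n) {
    long j = 2;
    int steps = 0;
    while (j < n) {
        j = j * j;
        steps++;
    }
    return steps;
}
// squaringSteps(10) == 2, squaringSteps(100) == 3,
// squaringSteps(10_000) == 4, squaringSteps(100_000_000L) == 5,
// i.e. roughly log2(log2(n)) iterations.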
I think it is log(log n) because the cycle repeats log(log n) times...
j=1;
i=2;
while (i <= n) do {
B[j] = A[i];
j = j + 1;
i = i * i;
}
You are right, it is O(lg(lg n)) where lg stands for base 2 logarithm.
The reason being that the sequence of values of i is subject to the rule i = prev(i) * prev(i), which turns out to be 2, 2^2, 2^4, 2^8, ... for steps 1, 2, 3, 4, .... In other words, the value of i after k iterations is 2^{2^k}.
Thus, the loop will stop as soon as 2^{2^k} > n, i.e. as soon as k > lg(lg(n)). (Just take lg twice on both sides of the inequality; it remains valid because lg is an increasing function.)
Here is a problem tagged as dynamic-programming: given a number N, find the number of ways to write it as a sum of two or more consecutive integers. For example, 15 = 7+8 = 1+2+3+4+5 = 4+5+6.
I solved it with math, like this:
a + (a + 1) + (a + 2) + (a + 3) + ... + (a + k) = N
(k + 1)*a + (1 + 2 + 3 + ... + k) = N
(k + 1)a + k(k+1)/2 = N
(k + 1)*(2*a + k)/2 = N
Then, iterating over k, I check whether 2*N factors as (k+1)*(2*a+k) with a a positive integer; since k(k+1)/2 < N forces k = O(sqrt(N)), I can find the answer in O(sqrt(N)) time.
Here is my question: how can you solve this by dynamic programming? And what is the complexity (big O)?
P.S.: excuse me if this is a duplicate question. I searched but couldn't find one.
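For reference, a minimal Java version of the O(sqrt(N)) math approach sketched in the question (the code is mine):

// a + (a+1) + ... + (a+k) = N  <=>  (k+1)*a = N - k*(k+1)/2
static int countWays(long N) {
    int count = 0;
    for (long k = 1; k * (k + 1) / 2 < N; k++) {   // k = O(sqrt(N))
        long rest = N - k * (k + 1) / 2;           // must equal (k+1) * a, a >= 1
        if (rest % (k + 1) == 0) count++;
    }
    return count;                                  // countWays(15) == 3
}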
The accepted answer was great but the better approach wasn't clearly presented. Posting my Java code below for reference. It might be quite verbose, but it explains the idea more clearly. This assumes that the consecutive integers are all positive.
private static int count(int n) {
int i = 1, j = 1, count = 0, sum = 1;
while (j<n) {
if (sum == n) { // matched, move sub-array section forward by 1
count++;
sum -= i;
i++;
j++;
sum +=j;
} else if (sum < n) { // not matched yet, extend sub-array at end
j++;
sum += j;
} else { // exceeded, reduce sub-array at start
sum -= i;
i++;
}
}
return count;
}
We can use dynamic programming to calculate the sums of 1+2+3+...+K for all K up to N. sum[i] below represents the sum 1+2+3+...+i.
sum = [0]
for i in 1..N:
append sum[i-1] + i to sum
With these sums we can quickly find all sequences of consecutive integers summing to N. The sum (i+1) + (i+2) + ... + j is equal to sum[j] - sum[i]. If the sum is less than N, we increment j. If the sum is greater than N, we increment i. If the sum is equal to N and the run contains at least two numbers (j - i >= 2), we increment our counter and both i and j.
i = 0
j = 0
count = 0
while j <= N:
    cur_sum = sum[j] - sum[i]
    if cur_sum == N and j - i >= 2:
        count++
    if cur_sum <= N:
        j++
    if cur_sum >= N:
        i++
There are better alternatives than this dynamic programming solution, though. The sum array can be calculated mathematically using the formula k(k+1)/2, so we could compute it on the fly without the need for additional storage. Even better, since we only ever shift the end-points of the sum we're working with by at most 1 in each iteration, we can maintain the current sum directly by adding/subtracting the added/removed values.
i = 0
j = 0
sum = 0   // current value of (i+1) + (i+2) + ... + j
count = 0
while j <= N:
    cur_sum = sum
    if cur_sum == N and j - i >= 2:
        count++
    if cur_sum <= N:
        j++
        sum += j
    if cur_sum >= N:
        i++
        sum -= i
For odd N, this problem is equivalent to finding the number of divisors of N greater than 1. (For even N, there is a couple of twists: only the odd divisors count.) That task takes O(sqrt(N)/ln(N)) if you have access to a list of primes, O(sqrt(N)) otherwise.
I don't see how dynamic programming can help here.
In order to solve the problem we will try all sums of two or more consecutive integers starting at i in [1, M], where M = floor(N/2); a run of two or more consecutive integers summing to N cannot start above N/2, because already i + (i+1) > N there.

int count = 0
for i in [1, M]
    for j in [i + 1, N]
        s = sum(i, j) // s = i + (i+1) + ... + (j-1) + j
        if s == N
            count++
        if s >= N
            break
return count
Since we do not want to calculate sum(i, j) from scratch in every iteration, we'll use a technique known as "memoization". Let's create a table of integers sum[i][j] and set sum[i][j] to i + (i+1) + ... + (j-1) + j:

for i in [1, M]
    sum[i][i] = i
int count = 0
for i in [1, M]
    for j in [i + 1, N]
        sum[i][j] = sum[i][j-1] + j
        if sum[i][j] == N
            count++
        if sum[i][j] >= N
            break
return count
Thanks to the break, the inner loop for a given start i runs only until the sum reaches N, which is O(N/i) iterations, so the overall complexity is O(N log N).
1) For an integer n >= 0, the sum of the integers from 0 to n is n*(n+1)/2. This is classic: write this sum first like this:
S = 0 + 1 + ... + n
and then like this :
S = n + (n-1) + ... + 0
You see that 2*S is equal to (0+n) + (1+(n-1)) + ... + (n+0) = (n+1)*n, since there are n+1 terms each equal to n, so that S = n(n+1)/2 indeed. (Well known, but I prefer my answer to be self-contained.)
2) From 1), if we write cons(m,n) for the sum m + (m+1) + ... + (n-1) + n of consecutive positive integers, with 1 <= m <= n, we see that:
cons(m,n) = (0+1+...+n) - (0+1+...+(m-1)), which gives, from 1):
cons(m,n) = n*(n+1)/2 - m*(m-1)/2
3) The question is then recast into the following: in how many ways can we write N in the form N = cons(m,n) with integers 1 <= m <= n? If we have N = cons(m,n), this is equivalent to m^2 - m + (2*N - n^2 - n) = 0, that is, the real polynomial T^2 - T + (2*N - n^2 - n) has a real root m; for m to be an integer, its discriminant delta must be a perfect square. But we have:
delta = 1 - 4*(2*N - n^2 - n)
And this delta is an integer which must be a square. There exists therefore an integer M such that:
delta = 1 - 4*(2*N - n^2 - n) = M^2
that is
M^2 = 1 - 8*N + 4*n*(n+1)
n(n+1) is always divisible by 2 (among two consecutive integers, one must be even), so 4*n*(n+1) and 8*N are both even, and therefore M^2 is odd, implying that M must be odd.
4) Now consider the same equation N = cons(m,n) as a quadratic in n:
n^2 + n - (2*N + m*(m-1)) = 0
This shows that the real polynomial X^2 + X - (2*N + m*(m-1)) has a real zero n; for n to be an integer, its discriminant gamma must likewise be a perfect square, but:
gamma = 1 + 4*(2*N + m*(m-1))
so there exists an integer G such that
G^2 = 1 + 4*(2*N + m*(m-1))
which shows that, like M, G is odd (1 plus an even number).
5) Subtracting M^2 = 1 - 4*(2*N - n*(n+1)) from G^2 = 1 + 4*(2*N + m*(m-1)) yields:
G^2 - M^2 = 4*(2*N + m*(m-1)) + 4*(2*N -n*(n+1))
= 16*N + 4*( m*(m-1) - n*(n+1) )
= 16*N - 8*N (because N = cons(m,n))
= 8*N
And finally this can be rewritten as :
(G-M)*(G+M) = 8*N, that is
[(G-M)/2]*[(G+M)/2] = 2*N
where (G-M)/2 and (G+M)/2 are integers (G-M and G+M are even since G and M are odd)
6) Thus, to each way of writing N as cons(m,n) we can associate one and only one way (since M and G are uniquely determined) of factoring 2*N into a product x*y, with x = (G-M)/2 and y = (G+M)/2, where G and M are two odd integers. Since G = x + y and M = y - x are odd, x and y must have opposite parities: among x and y, one is even and the other is odd. Let c be the odd one and d the even one. Then 2*N = c*d, thus N = c*(d/2). So c is an odd number dividing N, uniquely determined by the pair (m,n). Conversely, as soon as N has an odd divisor, one can reverse-engineer all this to recover n and m.
7) *Conclusion: there is a one-to-one correspondence between the ways of writing N = cons(m,n) (that is, the ways of writing N as a sum of consecutive integers, as we have seen) and the odd divisors of N.*
8) Finally, the number we are looking for is the number of odd divisors of N, minus 1 to discard the trivial single-term writing N = cons(N,N), since the problem asks for two or more integers. I guess that counting odd divisors by DP or whatever is easier than solving the original problem.
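A small Java sketch of that conclusion (my own code), counting odd divisors and discarding the trivial single-term sum:

static int countWaysViaOddDivisors(long N) {
    while (N % 2 == 0) N /= 2;   // the odd part has the same odd divisors
    int oddDivisors = 0;
    for (long d = 1; d * d <= N; d++) {
        if (N % d == 0) {
            oddDivisors += (d * d == N) ? 1 : 2;   // count d and N/d
        }
    }
    return oddDivisors - 1;      // e.g. N = 15 has 4 odd divisors, so 3 ways
}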
When you think of it upside down (Swift)...
func cal(num: Int) -> Int {
    var noInt = 0                                   // count of runs found
    // a run of two or more consecutive integers summing to num
    // cannot end above ceil(num / 2)
    let halfInt = Int((Double(num) / 2.0).rounded(.up))
    guard halfInt >= 2 else { return 0 }
    for obj in (2...halfInt).reversed() {
        var sum = 0
        // walk downwards: obj + (obj - 1) + ... until we hit or pass num
        for subVal in (1...obj).reversed() {
            sum = sum + subVal
            if sum > num {
                break
            }
            if sum == num {
                noInt += 1
                break
            }
        }
    }
    return noInt
}