I have an array A along with 3 variables k, x and y.
I have to find the number of unordered pairs (i, j) such that the sum of the two elements mod k equals x and the product of the same two elements mod k equals y. The value pairs need not be distinct (equal values at different indices count separately). In other words, the number of (i, j) so that
(A[i]+A[j])%k == x and (A[i]*A[j])%k == y where 0 <= i < j < size of A.
For example, let A={1,2,3,2,1}, k=2, x=1, y=0. Then the answer is 6, because the pairs are: (1,2), (1,2), (2,3), (2,1), (3,2), and (2,1).
I used a brute force approach, but obviously this is not acceptable.
Modular arithmetic has the following two rules:
((a mod k) * (b mod k)) mod k = (a * b) mod k
((a mod k) + (b mod k)) mod k = (a + b) mod k
Thus we can distribute all values into a hash table with separate chaining and k buckets, one per congruence class.
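A quick sketch of just the bucketing step for the running example (k = 2); the full implementation below additionally keeps track of indices:

    A = [1, 2, 3, 2, 1]
    k = 2
    buckets = [[] for _ in range(k)]
    for v in A:
        buckets[v % k].append(v)
    print(buckets)  # [[2, 2], [1, 3, 1]]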
Addition
Find m < k, such that for a given n < k: (n + m) mod k = x.
There is exactly one solution to this problem:
if n < x: m ≤ x must hold. Thus m = x - n
if n == x: m = 0
if n > x: we need to find m such that n + m = x + k. Thus m = x + k - n
This way, for each congruence class we can easily determine the matching class such that for every pair (a, b) from the cross product of the two classes, (a + b) mod k = x holds.
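For what it's worth, the three cases collapse into the single expression m = (x - n) % k, which is essentially what the counting version in EDIT 2 below uses. A quick sanity check (assuming 0 <= n, x < k):

    k = 7
    for x in range(k):
        for n in range(k):
            m = x - n if n < x else (0 if n == x else x + k - n)
            assert m == (x - n) % k
    print("all cases agree")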
Multiplication
Multiplication is a bit trickier. Luckily we've already been given the matching congruence class for addition (see above), which must also be the matching congruence class for multiplication, since both constraints need to hold. To verify that the given congruence class matches, we only need to check that (n * m) mod k = y (with n and m defined as above). If this holds, we can build pairs; otherwise no matching elements exist.
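As a small illustration (my own sketch, not part of the implementation below), this is the per-class filter for the running example k = 2, x = 1, y = 0; only the class pair 0/1 (and its mirror) survives both checks:

    k, x, y = 2, 1, 0
    for n in range(k):
        m = (x - n) % k
        if (n * m) % k == y:
            print(n, m)
    # prints:
    # 0 1
    # 1 0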
Implementation
This is working Python code for the above example:
def modmuladd(ls, x, y, k):
    result = []
    # create tuples of (value, index)
    indices = zip(ls, range(0, len(ls)))
    # split up into congruence classes
    congruence_cls = [[] for i in range(0, k)]
    for p in indices:
        congruence_cls[p[0] % k].append(p)
    for n in range(0, k):
        # congruence class to match addition
        if n < x:
            m = x - n
        elif n == x:
            m = 0
        else:
            m = x + k - n
        # check if congruence class matches for multiplication
        if (n * m) % k != y or len(congruence_cls[m]) == 0:
            continue  # no matching congruence class
        # add matching tuples to result; strict < on the indices enforces i < j
        result += [(a, b) for a in congruence_cls[n] for b in congruence_cls[m] if a[1] < b[1]]
        result += [(a, b) for a in congruence_cls[m] for b in congruence_cls[n] if a[1] < b[1]]
    # sort result according to indices of first and second element, remove duplicates
    sorted_res = sorted(sorted(set(result), key=lambda p: p[1][1]), key=lambda p: p[0][1])
    # remove indices from result set
    return [(p[0][0], p[1][0]) for p in sorted_res]
Note that the sorting and the elimination of duplicates are only required because this code concentrates on demonstrating the use of congruence classes rather than on perfect optimization. With minor modifications the example can be tweaked to produce the ordering without the sorting step.
Test run
print(modmuladd([1, 2, 3, 2, 1], 1, 0, 2))
Output:
[(1, 2), (1, 2), (2, 3), (2, 1), (3, 2), (2, 1)]
EDIT:
Worst-case complexity of this algorithm is still O(n^2), since building all possible pairs of a list of size n is O(n^2). With this approach, however, the search for matching congruence classes can be cut down to O(k) after O(n) preprocessing, so counting the resulting pairs can be done in O(n + k). Assuming the numbers are distributed evenly over the congruence classes, the pairs belonging to one matching pair of classes can be built in O(n^2/k^2).
EDIT 2:
An implementation that only counts would work like this:
def modmuladdct(ls, x, y, k):
    result = 0
    # split up into congruence classes
    congruence_class = {}
    for v in ls:
        if v % k not in congruence_class:
            congruence_class[v % k] = [v]
        else:
            congruence_class[v % k].append(v)
    for n in congruence_class.keys():
        # congruence class to match addition
        m = (x - n + k) % k
        # check if congruence class matches for multiplication and actually exists
        if n * m % k != y or m not in congruence_class:
            continue  # no matching congruence class
        if n == m:
            # pairs within a single class: exclude pairing an element with itself
            result += len(congruence_class[n]) * (len(congruence_class[n]) - 1)
        else:
            # total number of ordered pairs that will be built
            result += len(congruence_class[n]) * len(congruence_class[m])
    # divide by two since each pair would otherwise be counted twice
    return result // 2
Each pair appears exactly twice in the count: once in order and once reversed; dividing by two corrects for this. Runtime is O(n + k) (assuming dictionary operations are O(1)).
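As a quick usage example (assuming the function above), the counting version applied to the original input yields the expected 6:

    print(modmuladdct([1, 2, 3, 2, 1], 1, 0, 2))  # 6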
The number of loops is C(n, 2) = 5!/(2!(5-2)!) = 10 in your case, and there is nothing magic that would drastically reduce the number of loops.
In JS you can do:
const A = [1, 2, 3, 2, 1];
const k = 2;
const x = 1;
const y = 0;

for (let i = 0; i < A.length; i++) {
    for (let j = i + 1; j < A.length; j++) {
        if ((A[i] + A[j]) % k !== x) {
            continue;
        }
        if ((A[i] * A[j]) % k !== y) {
            continue;
        }
        console.log('(' + A[i] + ', ' + A[j] + ')');
    }
}
Ignoring A, we can find all solutions of n * (x - n) == y mod k for 0 <= n < k. That's a simple O(k) algorithm -- check each such n in turn.
We can count, for each n, how often A[i] == n, and then reconstruct the counts of pairs. If cs is an array of these counts, and n is a solution of n * (x - n) == y mod k, then there are cs[n] * cs[(x - n) % k] pairs of elements of A that solve our equations for this n. To avoid double counting we only count n such that n < (x - n) % k (the case n == (x - n) % k, i.e. pairs within a single class, would need separate handling).
def count_pairs(A, k, x, y):
    cs = [0] * k
    for a in A:
        cs[a % k] += 1
    pairs = ((i, (x - i) % k) for i in range(k) if i * (x - i) % k == y)
    return sum(cs[i] * cs[j] for i, j in pairs if i < j)

print(count_pairs([1, 2, 3, 2, 1], 2, 1, 0))
Overall, this constructs the counts in O(|A|) time, and the remaining code runs in O(k) time. It uses O(k) space.
B is a subsequence of A if and only if we can turn A into B by removing zero or more elements.
A = [1,2,3,4]
B = [1,4] is a subsequence of A. (Just remove 2 and 3.)
B = [4,1] is not a subsequence of A.
Count all subsequences B of A that satisfy this condition: B[i] % i == 0
Note that i starts from 1 not 0.
Example :
Input :
5
2 2 1 22 14
Output:
13
All of these 13 subsequences satisfy the B[i] % i = 0 condition.
{2},{2,2},{2,22},{2,14},{2},{2,22},{2,14},{1},{1,22},{1,14},{22},{22,14},{14}
My attempt:
The only solution that I could come up with has O(n^2) complexity.
Assuming the maximum element in A is C, the following is an algorithm with time complexity O(n * sqrt(C)):
For every element x in A, find all divisors of x.
For every i from 1 to n, find every j such that A[j] is a multiple of i, using the result of step 1.
For every i from 1 to n and every j such that A[j] is a multiple of i (using the result of step 2), find the number of subsequences B that have i elements and whose last element is A[j] (dynamic programming).
def find_factors(x):
    """Returns all factors of x"""
    for i in range(1, int(x ** 0.5) + 1):
        if x % i == 0:
            yield i
            if i != x // i:
                yield x // i

def solve(a):
    """Returns the answer for a"""
    n = len(a)
    # b[i] contains every j such that a[j] is a multiple of i+1.
    b = [[] for i in range(n)]
    for i, x in enumerate(a):
        for factor in find_factors(x):
            if factor <= n:
                b[factor - 1].append(i)
    # There are dp[i][j] subsequences of a of length (i+1) ending at b[i][j]
    dp = [[] for i in range(n)]
    dp[0] = [1] * n
    for i in range(1, n):
        k = x = 0
        for j in b[i]:
            while k < len(b[i - 1]) and b[i - 1][k] < j:
                x += dp[i - 1][k]
                k += 1
            dp[i].append(x)
    return sum(sum(dpi) for dpi in dp)
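A quick test against the example from the question (expected answer 13):

    print(solve([2, 2, 1, 22, 14]))  # 13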
For every divisor d of A[i] with d at most i+1, A[i] can serve as the d-th element of each of the subsequences of length d-1 counted so far (with d = 1 corresponding to starting a new subsequence).
JavaScript code:
function getDivisors(n, max){
    let m = 1;
    const left = [];
    const right = [];
    while (m*m <= n && m <= max){
        if (n % m == 0){
            left.push(m);
            const l = n / m;
            if (l != m && l <= max)
                right.push(l);
        }
        m += 1;
    }
    return right.concat(left.reverse());
}

function f(A){
    const dp = [1, ...new Array(A.length).fill(0)];
    let result = 0;
    for (let i=0; i<A.length; i++){
        for (const d of getDivisors(A[i], i+1)){
            result += dp[d-1];
            dp[d] += dp[d-1];
        }
    }
    return result;
}
var A = [2, 2, 1, 22, 14];
console.log(JSON.stringify(A));
console.log(f(A));
I believe that for the general case we cannot find an algorithm with complexity provably less than O(n^2).
First, an intuitive explanation:
Let's indicate the elements of the array by a1, a2, a3, ..., a_n.
If the element a1 appears in a subarray, it must be element no. 1.
If the element a2 appears in a subarray, it can be element no. 1 or 2.
If the element a3 appears in a subarray, it can be element no. 1, 2 or 3.
...
If the element a_n appears in a subarray, it can be element no. 1, 2, 3, ..., n.
So to take all the possibilities into account, we have to perform the following tests:
Check if a1 is divisible by 1 (trivial, of course)
Check if a2 is divisible by 1 or 2
Check if a3 is divisible by 1, 2 or 3
...
Check if a_n is divisible by 1, 2, 3, ..., n
All in all we have to perform 1 + 2 + 3 + ... + n = n(n + 1)/2 tests, which gives a complexity of O(n^2).
Note that the above is somewhat inaccurate, because not all the tests are strictly necessary. For example, if a_i is divisible by 2 and 3 then it must be divisible by 6. Nevertheless, I think this gives a good intuition.
Now for a more formal argument:
Define an array like so:
a1 = 1
a2 = 1 × 2
a3 = 1 × 2 × 3
...
a_n = 1 × 2 × 3 × ... × n
By the definition, every subsequence is valid.
Now let (m, p) be such that m <= n and p <= n, and change a_m to a_m / p. We can now choose one of two paths:
If we restrict p to be prime, then each tuple (m, p) represents a mandatory test, because the corresponding change in the value of a_m changes the number of valid subsequences. But that requires prime factorization of each number between 1 and n. With the known methods, I don't think we can get a complexity below O(n^2) here.
If we omit the above restriction, then we clearly perform n(n - 1) / 2 tests, which gives a complexity of O(n^2).
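The claim above that every subsequence of this construction is valid is easy to brute-force check for small n; here is a minimal sketch (my own, not part of the argument) for n = 4:

    from itertools import combinations
    from math import factorial

    # with a_i = i!, every non-empty subsequence B satisfies B[i] % i == 0 (1-based i),
    # so all 2**4 - 1 = 15 non-empty subsequences are counted for n = 4
    a = [factorial(i) for i in range(1, 5)]  # [1, 2, 6, 24]
    count = sum(
        1
        for r in range(1, len(a) + 1)
        for sub in combinations(a, r)
        if all(v % (i + 1) == 0 for i, v in enumerate(sub))
    )
    print(count)  # 15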
Here is the problem statement from an old contest on Codeforces:
A sequence of l integers b_1, b_2, ..., b_l (1 ≤ b_1 ≤ b_2 ≤ ... ≤ b_l ≤ n) is called good if each number divides (without a remainder) the next number in the sequence. More formally, b_i | b_(i+1) for all i (1 ≤ i ≤ l - 1).
Given n and k find the number of good sequences of length k. As the answer can be rather large print it modulo 1000000007 (10^9 + 7).
I have formulated my dp[i][j] as the number of good sequences of length i which end with the jth number, and the transition as the following pseudocode:
dp[k][n] =
    for each factor of n as i do
        for j from 1 to k - 1
            dp[k][n] += dp[j][i]
        end
    end
But in the editorial it is given as
Let's define dp[i][j] as the number of good sequences of length i that end in j.
Let's denote the divisors of j by x_1, x_2, ..., x_l. Then dp[i][j] = sum over r of dp[i - 1][x_r].
But in my understanding, we need two sigmas, one for the divisors and the other for length. Please help me correct my understanding.
My code ->
MOD = 10 ** 9 + 7
N, K = map(int, input().split())
dp = [[0 for _ in range(N + 1)] for _ in range(K + 1)]
for k in range(1, K + 1):
    for n in range(1, N + 1):
        c = 1
        for i in range(1, n):
            if n % i != 0:
                continue
            for j in range(1, k):
                c += dp[j][i]
        dp[k][n] = c
c = 0
for i in range(1, N + 1):
    c = (c + dp[K][i]) % MOD
print(c)
Link to the problem: https://codeforces.com/problemset/problem/414/B
So let's define dp[i][j] as the number of good sequences of length exactly i and which ends with a value j as its last element.
Now, dp[i][j] = Sum(dp[i-1][x]) for all x s.t. x is a divisor of j. Note that x can be equal to j itself.
This is true because if there is some sequence of length i-1 which we have already found that ends with some value x, then we can simply add j to its end and form a new sequence which satisfies all the conditions.
I guess your confusion is with the length. Since the current length is i, we can only append j to sequences of length exactly i-1; we do not iterate over the other lengths.
Hope this is clear.
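For reference, here is a minimal sketch of the editorial recurrence (my own code with a hypothetical function name, not the editorial's); it iterates over each value x and its multiples, so each state is filled by a single sum over divisors:

    MOD = 10 ** 9 + 7

    def count_good_sequences(n, k):
        # dp[i][j]: number of good sequences of length i ending with value j
        dp = [[0] * (n + 1) for _ in range(k + 1)]
        for j in range(1, n + 1):
            dp[1][j] = 1  # every single value is a good sequence of length 1
        for i in range(2, k + 1):
            for x in range(1, n + 1):
                # x may be followed by any of its multiples j (x divides j)
                for j in range(x, n + 1, x):
                    dp[i][j] = (dp[i][j] + dp[i - 1][x]) % MOD
        return sum(dp[k][1:]) % MOD

    print(count_good_sequences(3, 2))  # 5: (1,1), (1,2), (1,3), (2,2), (3,3)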
I have come across this algorithmic problem that I was not able to solve: https://prologin.org/train/2017/semifinal/collection_de_feuilles (in French).
N, K, and M[] are given as the input. N is the number of piles of items, and M[i] is the number of items in the i-th pile. You can only merge the i-th pile of M[i] items into the j-th pile of M[j] items if j > i, and the cost of this merge is defined to be M[i] * (j - i). The output is the minimum cost of merging the initial N piles into K piles.
My idea was to use a function min_rearrange(x, num_piles) which calculates the minimum cost to rearrange the piles from M[x] to M[N - 1] into the specified number of piles. When num_piles is equal to 1, this function returns the sum of the costs to move M[j] into M[N - 1] for x ≤ j < N. Otherwise, since there must exist an i with x ≤ i ≤ N - num_piles such that all the piles from M[x] to M[i - 1] are moved into M[i], we compute that cost and recursively call min_rearrange(i + 1, num_piles - 1) to find the minimum.
I have also tried to memoize the solutions:
# https://prologin.org/train/2017/semifinal/collection_de_feuilles
n, k = map(int, input().split())
piles = list(map(int, input().split()))
memory = {}

def min_rearrange(x, num_piles):
    """Min cost to rearrange piles[x:] into num_piles"""
    if (x, num_piles) in memory:
        return memory[x, num_piles]
    if num_piles == 1:
        memory[x, num_piles] = sum([(n - 1 - i) * piles[i] for i in range(x, n)])
        return memory[x, num_piles]
    min_cost = float('inf')
    for i in range(x, n - num_piles + 1):
        cost = sum([(i - j) * piles[j] for j in range(x, i)])
        min_cost = min(min_cost, cost + min_rearrange(i + 1, num_piles - 1))
    memory[x, num_piles] = min_cost
    return min_cost

print(min_rearrange(0, k))
But it takes too much time for large input sizes. I'd like to know how the problem can be solved more efficiently.
Assume we have two sorted arrays of integers with sizes n and m. What is the best way to find the median of all m + n numbers?
It's easy to do this with log(n) * log(m) complexity, but I want to solve this problem in log(n) + log(m) time. Is there any suggestion for how to do that?
Explanation
The key point of this problem is to recursively discard half of A or B at each step, by comparing the medians of the remaining parts of A and B:
if (aMid < bMid) Keep [aMid +1 ... n] and [bLeft ... m]
else Keep [bMid + 1 ... m] and [aLeft ... n]
// where n and m are the length of array A and B
With this, the time complexity of the following is O(log(m + n)):
public double findMedianSortedArrays(int[] A, int[] B) {
    int m = A.length, n = B.length;
    int l = (m + n + 1) / 2;
    int r = (m + n + 2) / 2;
    return (getkth(A, 0, B, 0, l) + getkth(A, 0, B, 0, r)) / 2.0;
}

public double getkth(int[] A, int aStart, int[] B, int bStart, int k) {
    if (aStart > A.length - 1) return B[bStart + k - 1];
    if (bStart > B.length - 1) return A[aStart + k - 1];
    if (k == 1) return Math.min(A[aStart], B[bStart]);

    int aMid = Integer.MAX_VALUE, bMid = Integer.MAX_VALUE;
    if (aStart + k/2 - 1 < A.length) aMid = A[aStart + k/2 - 1];
    if (bStart + k/2 - 1 < B.length) bMid = B[bStart + k/2 - 1];

    if (aMid < bMid)
        return getkth(A, aStart + k / 2, B, bStart, k - k / 2); // Check: aRight + bLeft
    else
        return getkth(A, aStart, B, bStart + k / 2, k - k / 2); // Check: bRight + aLeft
}
Hope it helps! Let me know if you need more explanation on any part.
Here's a very good solution I found in Java on Stack Overflow. It's a method of finding the Kth and (K+1)th smallest items in the two arrays, where K is the center of the merged array.
If you have a function for finding the Kth item of two arrays then finding the median of the two is easy;
Calculate the weighted average of the Kth and (K+1)th items of X and Y
But then you'll need a way to find the Kth item of two lists; (remember we're one indexing now)
If X contains zero items then the Kth smallest item of X and Y is the Kth smallest item of Y
Otherwise if K == 2 then the second smallest item of X and Y is the smallest of the smallest items of X and Y (min(X[0], Y[0]))
Otherwise;
i. Let A be min(length(X), K / 2)
ii. Let B be min(length(Y), K / 2)
iii. If the X[A] > Y[B] then recurse from step 1. with X, Y' with all elements of Y from B to the end of Y and K' = K - B, otherwise recurse with X' with all elements of X from A to the end of X, Y and K' = K - A
If I find the time tomorrow I will verify that this algorithm works in Python as stated and provide the example source code, it may have some off-by-one errors as-is.
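In the meantime, here is a rough Python sketch of this kind of kth-element recursion (my own code, 1-indexed k; the details differ slightly from the steps above, and the list slicing keeps it short at the cost of some copying):

    def kth_smallest(x, y, k):
        """k-th smallest (1-indexed) element of the union of sorted lists x and y."""
        if not x:
            return y[k - 1]
        if not y:
            return x[k - 1]
        if k == 1:
            return min(x[0], y[0])
        a = min(len(x), k // 2)
        b = min(len(y), k // 2)
        if x[a - 1] > y[b - 1]:
            # the first b elements of y cannot contain the k-th smallest
            return kth_smallest(x, y[b:], k - b)
        # the first a elements of x cannot contain the k-th smallest
        return kth_smallest(x[a:], y, k - a)

    def median_of_two(x, y):
        n = len(x) + len(y)
        if n % 2 == 1:
            return kth_smallest(x, y, n // 2 + 1)
        return (kth_smallest(x, y, n // 2) + kth_smallest(x, y, n // 2 + 1)) / 2

    print(median_of_two([1, 3, 5], [2, 4]))  # 3
    print(median_of_two([1, 2], [3, 4]))     # 2.5

Passing start indices instead of slices would keep each step O(1) and the whole search logarithmic.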
Take the median element in list A and call it a. Compare a to the center elements in list B; let's call them b1 and b2 (if B has odd length then exactly where you split B depends on your definition of the median of an even-length list, but the procedure is almost identical regardless). If b1 ≤ a ≤ b2 then a is the median of the merged array. This can be done in constant time since it requires exactly two comparisons.
If a is greater than b2 then we add the top half of A to the top of B and repeat. B will no longer be sorted, but it doesn't matter. If a is less than b1 then we add the bottom half of A to the bottom of B and repeat. These will iterate log(n) times at most (if the median is found sooner then stop, of course).
It is possible that this will not find the median. If this is the case then the median is in B. If so, perform the same algorithm with A and B reversed. This will require log(m) iterations. In total you will have performed at most 2*(log(n)+log(m)) iterations of a constant time operation, so you have solved the problem in order log(n)+log(m) time.
This is essentially the same answer as was given by iehrlich, but written out more explicitly.
Yes, this can be done. Given two arrays, A and B, in the worst-case scenario you have to first perform a binary search in A, and then, if it fails, binary search in B looking for the median. On each step of a binary search, you check if the current element is actually a median of a merged A+B array. Such check takes constant time.
Let's see why such check is constant. For simplicity, let's assume that |A| + |B| is an odd number, and that all numbers in both arrays are different. You can remove these restrictions later by applying the usual median definition approach (i.e., how to calculate the median of an array containing duplicates, or of an array with even length). Anyway, given that, we know for sure, that in the merged array there will be (|A| + |B| - 1) / 2 elements to the right and to the left of an actual median. In the process of a binary search in A, we know the index of current element x in array A (let it be i). Now, if x satisfies the condition B[j] < x < B[j+1], where i + j == (|A| + |B| - 1) / 2, then x is your median.
The overall complexity is O(log(max(|A|, |B|)) time and O(1) memory.
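A sketch of that constant-time check (my own code, zero-based indices, under the same assumptions: |A| + |B| odd and all elements distinct):

    def is_median(A, B, i):
        """Check in O(1) whether A[i] is the median of the merged arrays."""
        # number of elements of B that must lie to the left of A[i]
        j = (len(A) + len(B) - 1) // 2 - i
        if j < 0 or j > len(B):
            return False
        left_ok = (j == 0) or (B[j - 1] < A[i])
        right_ok = (j == len(B)) or (A[i] < B[j])
        return left_ok and right_ok

    print(is_median([1, 3, 5], [2, 4], 1))  # True: 3 is the median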
I am given a number A, where 1 <= A <= 10^6, and a number k. I have to find all the numbers i with 1 <= i <= A such that A % i == k. Is there any better solution than looping?
Simple Solution
for(int i=1;i<=A;i++)
    if(A%i==k) count++;
Is there any better solution than iterating over all the numbers from 1 to A?
The expression A % i == k is equivalent to A == n * i + k for some non-negative integer n (which requires i > k).
This can be rearranged as n * i = A - k, so the valid values of i are exactly the factors of A - k that are greater than k (with k < i <= A).
Here are a couple of examples:
A = 100, k = 10
F = factor_list(A-k) = factor_list(90) = [1,2,3,5,6,9,10,15,18,30,45,90]
(discard all factors less than or equal to k)
Result: [15,18,30,45,90]
A = 288, k = 32
F = factor_list(A-k) = factor_list(256) = [1,2,4,8,16,32,64,128,256]
Result: [64,128,256]
If A - k is prime, then there is either one solution (A-k) or none (if A-k <= k).
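A short sketch of this divisor-based search (the function name is mine), reproducing the two examples above:

    def remainder_divisors(A, k):
        """All i with 1 <= i <= A and A % i == k (assumes 0 <= k < A)."""
        d = A - k
        result = []
        for i in range(1, int(d ** 0.5) + 1):
            if d % i == 0:
                if i > k:
                    result.append(i)          # small factor
                if d // i != i and d // i > k:
                    result.append(d // i)     # paired large factor
        return sorted(result)

    print(remainder_divisors(100, 10))  # [15, 18, 30, 45, 90]
    print(remainder_divisors(288, 32))  # [64, 128, 256]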