How can we develop a dynamic programming algorithm that calculates the minimum number of different primes that sum to x?
Assume the dynamic program calculates, for each pair (x, p), the minimum number of different primes summing to x amongst which the largest is p. Can someone help?
If we assume the Goldbach conjecture is true, then every even integer > 2 is the sum of two primes.
So we know the answer if x is even (1 if x==2, or 2 otherwise).
If x is odd, then there are 3 cases:
x is prime (answer is 1)
x-2 is prime (answer is 2)
otherwise x-3 is an even number bigger than 2 (answer is 3)
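In Python, this whole case analysis is just a few lines (a sketch with a hypothetical trial-division is_prime helper; note that for some even x, such as 4 = 2 + 2, the two primes are not distinct, an edge the Goldbach argument glosses over):

import_needed = None  # no imports needed

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def min_primes(x):
    # assumes x >= 2; relies on Goldbach for even x > 2
    if is_prime(x):           # covers x == 2 and all odd primes
        return 1
    if x % 2 == 0:            # even, not prime: two primes by Goldbach
        return 2
    if is_prime(x - 2):       # odd: x = 2 + prime
        return 2
    return 3                  # odd: x = 3 + (even number > 2)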
First of all, you need a list of primes up to x. Let's call this array of integers primes.
Now we want to populate the array answer[x][p], where x is the sum of the primes and p is an upper bound on the primes in the set (the set may, but need not, actually include p).
There are 3 possibilities for answer[x][p] after all calculations:
1) if p=x and p is prime => answer[x][p] contains 1
2) if it's not possible to solve problem for given x, p => answer[x][p] contains -1
3) if it's possible to solve problem for given x, p => answer[x][p] contains number of primes
There is one more possible value for answer[x][p] during calculations:
4) we did not yet solve the problem for given x, p => answer[x][p] contains 0
It's quite obvious that 0 is not the answer for anything but x=0, so we are safe initializing the array with 0 (and treating x=0 specially).
To calculate answer[x][p] we can iterate (let q be the prime value we are iterating on) through all primes up to and including p, and take the minimum over 1 + answer[x-q][q-1] (skipping the cases where answer[x-q][q-1] = -1). Here the 1 accounts for q, and answer[x-q][q-1] is computed in a recursive call or earlier in the calculation.
Now there's a small optimization: iterate the primes from higher to lower, and if x/q is at least the current answer, we can stop, because to make the sum x we would need at least x/q primes anyway. For example, we will not even consider q=2 for x=10, as we'd already have answer=3 (that solution actually includes 2 as one of its 3 primes, 2+3+5, but we've already found it through the recursive call answer(10-5, 4)); since 10/2=5, we'd get 5 as the answer at best (in fact no answer exists for q=2 at all, as the primes must be different).
package ru.tieto.test;

import java.util.ArrayList;

public class Primers {
    static final int MAX_P = 10;
    static final int MAX_X = 10;

    public ArrayList<Integer> primes = new ArrayList<>();
    public int answer[][] = new int[MAX_X + 1][MAX_P + 1];

    public int answer(int x, int p) {
        if (x < 0)
            return -1;
        if (x == 0)
            return 0;
        if (answer[x][p] != 0)
            return answer[x][p];
        // index of the largest prime <= min(p, x)
        int max_prime_idx = -1;
        for (int i = 0;
             i < primes.size() && primes.get(i) <= p && primes.get(i) <= x;
             i++)
            max_prime_idx = i;
        if (max_prime_idx < 0) {
            answer[x][p] = -1;
            return -1;
        }
        int cur_answer = x + 1;
        for (int i = max_prime_idx; i >= 0; i--) {
            int q = primes.get(i);
            // at least x/q primes would be needed from here on
            if (x / q >= cur_answer)
                break;
            if (x == q) {
                cur_answer = 1;
                break;
            }
            int candidate = answer(x - q, q - 1);
            if (candidate == -1)
                continue;
            if (candidate + 1 < cur_answer)
                cur_answer = candidate + 1;
        }
        if (cur_answer > x)
            answer[x][p] = -1;
        else
            answer[x][p] = cur_answer;
        return answer[x][p];
    }

    private void make_primes() {
        primes.add(2);
        for (int p = 3; p <= MAX_P; p = p + 2) {
            boolean isPrime = true;
            for (Integer q : primes) {
                if (q * q > p)
                    break;
                if (p % q == 0) {
                    isPrime = false;
                    break;
                }
            }
            if (isPrime)
                primes.add(p);
        }
        // for (Integer q : primes)
        //     System.out.print(q + ",");
        // System.out.println("<<");
    }

    private void init() {
        make_primes();
        for (int p = 0; p <= MAX_P; p++) {
            answer[0][p] = 0;
            answer[1][p] = -1;
        }
        for (int x = 2; x <= MAX_X; x++) {
            for (int p = 0; p <= MAX_P; p++)
                answer[x][p] = 0;
        }
        for (Integer p : primes)
            answer[p][p] = 1;
    }

    void run() {
        init();
        for (int x = 0; x <= MAX_X; x++)
            for (int p = 0; p <= MAX_P; p++)
                answer(x, p);
    }

    public static void main(String[] args) {
        Primers me = new Primers();
        me.run();
        // for (int x = 0; x <= MAX_X; x++) {
        //     System.out.print("x=" + x + ": {");
        //     for (int p = 0; p <= MAX_P; p++) {
        //         System.out.print(String.format("%2d=%-3d,", p, me.answer[x][p]));
        //     }
        //     System.out.println("}");
        // }
    }
}
Start with a list of all primes lower than x.
Take the largest, pmax. Now we need to solve the problem for (x - pmax); at this stage that will be easy, as x - pmax will be low. Mark the primes you used and store that as solution 1. Now take the largest prime still in the list and repeat until all the primes are either used or rejected. If (x - pmax) is high, the problem gets more complex.
That's your first-pass, brute-force algorithm (a sketch follows). Get that working first before considering how to speed things up.
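A minimal Python sketch of that brute force (hypothetical names; it tries the largest prime first and recurses on the remainder, restricting the recursion to smaller primes so the chosen primes stay distinct):

def min_distinct_primes(x, primes):
    # primes: ascending list of all primes <= x
    # returns the size of the smallest set of distinct primes summing to x,
    # or None if no such set exists
    if x == 0:
        return 0
    best = None
    for i in range(len(primes) - 1, -1, -1):   # largest first
        p = primes[i]
        if p > x:
            continue
        sub = min_distinct_primes(x - p, primes[:i])  # only smaller primes: keeps them distinct
        if sub is not None and (best is None or sub + 1 < best):
            best = sub + 1
    return best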
Assuming you're not using the Goldbach conjecture (otherwise see Peter de Rivaz's excellent answer):
Dynamic programming generally takes advantage of overlapping subproblems. Usually you go top-down, but in this case bottom-up may be simpler.
I suggest you sum various combinations of primes.
from itertools import combinations_with_replacement

lookup = {}
for r in range(1, 3):
    for primes in combinations_with_replacement(all_primes, r):
        s = sum(primes)
        lookup.setdefault(s, r)  # r is increasing, so only set it if it's not already there
This will start getting slow very quickly if you have a large number of primes. In that case, change the maximum r to something like 1 or 2, whatever the maximum is that is fast enough for you. You will then be left with some numbers that aren't found; to solve for a number that doesn't have a solution in lookup, try breaking that number into sums of numbers that are found in lookup (you may need to store the prime combos in lookup and dedupe those combinations), as in the sketch below.
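A rough sketch of that split idea in Python (hypothetical names; it stores one prime combo per sum so that a missing number can be split into two disjoint combos; it remembers only the first, minimal-r combo per sum, so some splits can be missed; combinations is used instead of combinations_with_replacement so the stored sets really contain distinct primes):

from itertools import combinations

lookup_sets = {}  # sum -> frozenset of distinct primes achieving it with minimal r
for r in range(1, 3):
    for primes in combinations(all_primes, r):
        lookup_sets.setdefault(sum(primes), frozenset(primes))

def solve_by_split(s):
    # try s = u + v with both parts in the table and disjoint prime sets
    best = None
    for u, pu in lookup_sets.items():
        pv = lookup_sets.get(s - u)
        if pv is not None and pu.isdisjoint(pv):
            k = len(pu) + len(pv)
            if best is None or k < best:
                best = k
    return best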
I have to find the best algorithm to define a pairing between the items from two lists, as in the figure. A pair is valid only if the value of the node in list A is lower than the value of the node in list B and there are no crossings between links. The quality of the matching algorithm is determined by the total number of links.
I first tried a very simple algorithm: take a node in list A and then look for the first node in list B that is higher than it. The second figure shows a test case where this algorithm is not the best one.
Simple back-tracking can work (it may not be optimal, but it will certainly work).
For each legal pairing A[i], B[j], there are two choices:
take it, and make it illegal to try to pair any A[x], B[y] with x>i and y<j
not take it, and look at other possible pairs
By incrementally adding legal pairs to a bunch of pairs, you will eventually exhaust all legal pairings down a path. The number of valid pairings in a path is what you seek to maximize, and this algorithm will look at all possible answers and is guaranteed to work.
Pseudocode:
function search(currentPairs):
    bestPairing = currentPairs
    for each currently legal pair:
        nextPairing = search(copyOf(currentPairs) + this pair)
        if length of nextPairing > length of bestPairing:
            bestPairing = nextPairing
    return bestPairing
Initially, you will pass an empty currentPairs. Searching for legal pairs is the tricky part. You can use 3 nested loops that look at all A[x], B[y] and, if A[x] < B[y], check against all currentPairs to see if there is a crossing line (the cost of this is roughly O(n^3)); or you can use a boolean matrix of valid pairings, which you update at each level (less computation time, down to O(n^2), but more expensive in terms of memory).
Here is a Java implementation.
For convenience I first build a map with the valid choices for each entry of list (array) a to b.
Then I loop through the list, trying both making no choice and each valid choice for a connection to b.
Since you can't go back without crossing the existing connections, I keep track of the maximum index assigned in b.
Works at least for the two examples...
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ListMatcher {
    private int[] a;
    private int[] b;
    private Map<Integer, List<Integer>> choicesMap;

    public ListMatcher(int[] a, int[] b) {
        this.a = a;
        this.b = b;
        choicesMap = makeMap(a, b);
    }

    public Map<Integer, Integer> solve() {
        Map<Integer, Integer> solution = new HashMap<>();
        return solve(solution, 0, -1);
    }

    private Map<Integer, Integer> solve(Map<Integer, Integer> soFar, int current, int max) {
        // done
        if (current >= a.length) {
            return soFar;
        }
        // make no choice from this entry
        Map<Integer, Integer> solution = solve(new HashMap<>(soFar), current + 1, max);
        for (Integer choice : choicesMap.get(current)) {
            if (choice > max) { // can't go back
                Map<Integer, Integer> next = new HashMap<>(soFar);
                next.put(current, choice);
                next = solve(next, current + 1, choice);
                if (next.size() > solution.size()) {
                    solution = next;
                }
            }
        }
        return solution;
    }

    // init possible choices
    private Map<Integer, List<Integer>> makeMap(int[] a, int[] b) {
        Map<Integer, List<Integer>> possibleMap = new HashMap<>();
        for (int i = 0; i < a.length; i++) {
            List<Integer> possible = new ArrayList<>();
            for (int j = 0; j < b.length; j++) {
                if (a[i] < b[j]) {
                    possible.add(j);
                }
            }
            possibleMap.put(i, possible);
        }
        return possibleMap;
    }

    public static void main(String[] args) {
        ListMatcher matcher = new ListMatcher(new int[]{3, 7, 2, 1, 5, 9, 2, 2}, new int[]{4, 5, 10, 1, 12, 3, 6, 7});
        System.out.println(matcher.solve());
        matcher = new ListMatcher(new int[]{10, 1, 1, 1, 1, 1, 1, 1}, new int[]{2, 2, 2, 2, 2, 2, 2, 101});
        System.out.println(matcher.solve());
    }
}
Output
(format: zero-based index_in_a=index_in_b)
{2=0, 3=1, 4=2, 5=4, 6=5, 7=6}
{1=0, 2=1, 3=2, 4=3, 5=4, 6=5, 7=6}
Your solution isn't picked because the solutions making no choice are picked first.
You can change this by processing the loop first...
Thanks to David's suggestion, I finally found the algorithm. It is an LCS approach, replacing the '=' with a '>'.
Recursive approach
The recursive approach is very straightforward. G and V are the two vectors with sizes n and m (adding a 0 at the beginning of both). Starting from the end, if the last element of G is larger than the last element of V, then return 1 + the function evaluated without the last item of each; otherwise return the max of the function removing the last of G or the last of V.
#include <algorithm>
#include <vector>
using namespace std;

int evaluateMaxRecursive(const vector<int>& V, const vector<int>& G, int n, int m) {
    if ((n == 0) || (m == 0)) {
        return 0;
    } else {
        if (V[n] < G[m]) {
            return 1 + evaluateMaxRecursive(V, G, n - 1, m - 1);
        } else {
            return max(evaluateMaxRecursive(V, G, n - 1, m), evaluateMaxRecursive(V, G, n, m - 1));
        }
    }
}
The recursive approach is only practical for a small number of items, due to the re-evaluation of the same subproblems that occurs during the recursion.
Non recursive approach
The non-recursive approach goes in the opposite direction and works with a table that is filled in after having cleared the first row and first column to 0. The max value is the value in the bottom-right corner of the table.
int evaluateMax(const vector<int>& V, const vector<int>& G, int n, int m) {
    // (n + 1) x (m + 1) table, zero-initialized
    vector<vector<int>> table(n + 1, vector<int>(m + 1, 0));
    for (int i = 1; i < m + 1; i++)
        for (int t = 1; t < n + 1; t++) {
            if (G[i - 1] > V[t - 1]) {
                table[t][i] = 1 + table[t - 1][i - 1];
            } else {
                table[t][i] = max(table[t][i - 1], table[t - 1][i]);
            }
        }
    return table[n][m];
}
You can find more details here: LCS - Wikipedia
I came across this problem in a challenge.
There are two arrays A and B, both of size N, and we need to return the count of pairs (A[i], B[j]) where gcd(A[i], B[j]) == 1 and A[i] != B[j].
I could only think of a brute force approach, which exceeded the time limit for a few test cases:
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        if (__gcd(a[i], b[j]) == 1) {
            printf("%d %d\n", a[i], b[j]);
        }
    }
}
Can you advise a time-efficient algorithm to solve this?
Edit: I'm not able to share the question link as this was from a hiring challenge. Adding the constraints and input/output format as I remember them.
Input -
First line will contain N, the number of elements present in both arrays.
Second line will contain N space separated integers, elements of array A.
Third line will contain N space separated integers, elements of array B.
Output -
The count of pairs (A[i], B[j]) as per the conditions.
Constraints -
1 <= N <= 10^5
1 < A[i],B[j] <= 10^9 where i,j < N
The first step is to use Eratosthenes sieve to calculate the prime numbers up to sqrt(10^9). This sieve can then be used to quickly find all prime factors of any number less than 10^9 (see the getPrimeFactors(...) function in the code sample below).
Next, for each A[i] with prime factors p0, p1, ..., pk, we compute all possible sub-products X (p0, p1, p0p1, p2, p0p2, p1p2, p0p1p2, p3, p0p3, ..., p0p1p2...pk) and count them in the map cntp[X]. Effectively, the map cntp[X] tells us the number of elements A[i] divisible by X, where X is a product of prime numbers to the power of 0 or 1. So for example, for the number A[i] = 12, the prime factors are 2, 3. We will count cntp[2]++, cntp[3]++ and cntp[6]++.
Finally, for each B[j] with prime factors p0, p1, ..., pk, we again compute all possible sub-products X and use the Inclusion-exclusion principle to count all non-coprime pairs C_j (i.e. the number of A[i]s that share at least one prime factor with B[j]). The numbers C_j are then subtracted from the total number of pairs - N*N to get the final answer.
Note: the Inclusion-exclusion principle looks like this:
C_j =   (cntp[p0] + cntp[p1] + ... + cntp[pk])
      - (cntp[p0p1] + cntp[p0p2] + ... + cntp[p(k-1)pk])
      + (cntp[p0p1p2] + cntp[p0p1p3] + ... + cntp[p(k-2)p(k-1)pk])
      - ...
and accounts for the fact that in cntp[X] and cntp[Y] we could have counted the same number A[i] twice, given that it is divisible by both X and Y.
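As a tiny sanity check of the formula (toy values, reusing the A[i] = 12 example from above as the only element of A):

cntp = {2: 1, 3: 1, 6: 1}              # products built from A = [12]
C_j = cntp[2] + cntp[3] - cntp[6]      # B_j = 12 has prime factors 2 and 3
print(C_j)                             # 1: exactly one A_i shares a factor with B_j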
Here is a possible C++ implementation of the algorithm, which produces the same results as the naive O(n^2) algorithm by OP:
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

// get prime factors of a using pre-generated sieve
std::vector<int> getPrimeFactors(int a, const std::vector<int>& primes) {
    std::vector<int> f;
    for (auto p : primes) {
        if (p > a) break;
        if (a % p == 0) {
            f.push_back(p);
            do {
                a /= p;
            } while (a % p == 0);
        }
    }
    if (a > 1) f.push_back(a);
    return f;
}

// find coprime pairs A_i and B_j
// A_i and B_i <= 1e9
void solution(const std::vector<int>& A, const std::vector<int>& B) {
    // generate prime sieve up to sqrt(1e9)
    std::vector<int> primes;
    primes.push_back(2);
    for (int i = 3; i * i <= 1e9; ++i) {
        bool isPrime = true;
        for (auto p : primes) {
            if (i % p == 0) {
                isPrime = false;
                break;
            }
        }
        if (isPrime) {
            primes.push_back(i);
        }
    }

    int N = A.size();

    struct Entry {
        int n = 0;       // number of prime factors in the product
        int64_t p = 0;   // the product itself
    };

    // cntp[X] - number of times the product X can be expressed
    // with prime factors of A_i
    std::map<int64_t, int64_t> cntp;
    for (int i = 0; i < N; i++) {
        auto f = getPrimeFactors(A[i], primes);
        // count possible products using non-repeating prime factors of A_i
        std::vector<Entry> x;
        x.push_back({ 0, 1 });
        for (auto p : f) {
            int k = x.size();
            for (int j = 0; j < k; ++j) {
                int nn = x[j].n + 1;
                int64_t pp = x[j].p * p;
                ++cntp[pp];
                x.push_back({ nn, pp });
            }
        }
    }

    // use the inclusion-exclusion principle to count non-coprime pairs
    // and subtract them from the total number of pairs N*N
    int64_t cnt = N; cnt *= N;
    for (int i = 0; i < N; i++) {
        auto f = getPrimeFactors(B[i], primes);
        std::vector<Entry> x;
        x.push_back({ 0, 1 });
        for (auto p : f) {
            int k = x.size();
            for (int j = 0; j < k; ++j) {
                int nn = x[j].n + 1;
                int64_t pp = x[j].p * p;
                x.push_back({ nn, pp });
                if (nn % 2 == 1) {
                    cnt -= cntp[pp];
                } else {
                    cnt += cntp[pp];
                }
            }
        }
    }

    printf("cnt = %d\n", (int) cnt);
}
I cannot estimate the complexity analytically, but here are some profiling results on my laptop for different N and uniformly random A[i] and B[j]:
For N = 1e2, takes ~0.02 sec
For N = 1e3, takes ~0.05 sec
For N = 1e4, takes ~0.38 sec
For N = 1e5, takes ~3.80 sec
For comparison, the O(n^2) approach takes:
For N = 1e2, takes ~0.00 sec
For N = 1e3, takes ~0.15 sec
For N = 1e4, takes ~15.1 sec
For N = 1e5, takes too long, didn't wait to finish
Python Implementation:
import math
from collections import defaultdict

def sieve(MAXN):
    # spf[x] = smallest prime factor of x
    spf = [0 for i in range(MAXN)]
    spf[1] = 1
    for i in range(2, MAXN):
        spf[i] = i
    for i in range(4, MAXN, 2):
        spf[i] = 2
    for i in range(3, math.ceil(math.sqrt(MAXN))):
        if spf[i] == i:
            for j in range(i * i, MAXN, i):
                if spf[j] == j:
                    spf[j] = i
    return spf

def getFactorization(x, spf):
    ret = list()
    while x != 1:
        ret.append(spf[x])
        x = x // spf[x]
    return list(set(ret))

def coprime_pairs(N, A, B):
    MAXN = max(max(A), max(B)) + 1
    spf = sieve(MAXN)
    cntp = defaultdict(int)
    for i in range(N):
        f = getFactorization(A[i], spf)
        x = [[0, 1]]
        for p in f:
            k = len(x)
            for j in range(k):  # renamed from i to avoid shadowing the outer loop
                nn = x[j][0] + 1
                pp = x[j][1] * p
                cntp[pp] += 1
                x.append([nn, pp])
    cnt = 0
    for i in range(N):
        f = getFactorization(B[i], spf)
        x = [[0, 1]]
        for p in f:
            k = len(x)
            for j in range(k):
                nn = x[j][0] + 1
                pp = x[j][1] * p
                x.append([nn, pp])
                if nn % 2 == 1:
                    cnt += cntp[pp]
                else:
                    cnt -= cntp[pp]
    return N * N - cnt

import random
N = 10001
A = [random.randint(1, N) for _ in range(N)]
B = [random.randint(1, N) for _ in range(N)]
print(coprime_pairs(N, A, B))
Most of us are familiar with the maximum sum subarray problem. I came across a variant of this problem which asks the programmer to output the maximum of all subarray sums modulo some number M.
The naive approach to solve this variant would be to find all possible subarray sums (which would be of the order of N^2 where N is the size of the array). Of course, this is not good enough. The question is - how can we do better?
Example: Let us consider the following array:
6 6 11 15 12 1
Let M = 13. In this case, subarray 6 6 (or 12 or 6 6 11 15 or 11 15 12) will yield maximum sum ( = 12 ).
We can do this as follows:
Maintain an array sum in which index i contains the prefix sum modulo M of the elements from 0 to i.
For each index i, we need to find the maximum sub-sum that ends at this index:
For each subarray (start + 1, i), we know that the mod sum of this subarray is
int a = (sum[i] - sum[start] + M) % M
So, we can only achieve a sub-sum larger than sum[i] if sum[start] is larger than sum[i] and as close to sum[i] as possible.
This can be done easily if you use a binary search tree.
Pseudo code:
int[] sum;
sum[0] = A[0];
Tree tree;
tree.add(sum[0]);
int result = sum[0];

for (int i = 1; i < n; i++) {
    sum[i] = sum[i - 1] + A[i];
    sum[i] %= M;
    int a = tree.getMinimumValueLargerThan(sum[i]);
    result = max((sum[i] - a + M) % M, result);
    tree.add(sum[i]);
}
print result;
Time complexity: O(n log n)
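For reference, here is a direct Python translation of that pseudocode (a sketch: bisect over a sorted list stands in for the balanced BST, and the prefix sum itself is also taken as a candidate when no larger prefix exists; list insertion is O(n), so a real BST is still preferable for large n):

import bisect

def max_sum_modulo(A, M):
    sums = []      # sorted prefix sums seen so far
    s = 0
    result = 0
    for value in A:
        s = (s + value) % M
        result = max(result, s)                        # subarray starting at index 0
        i = bisect.bisect_right(sums, s)               # smallest prefix strictly greater than s
        if i < len(sums):
            result = max(result, (s - sums[i] + M) % M)
        bisect.insort(sums, s)
    return result

print(max_sum_modulo([6, 6, 11, 15, 12, 1], 13))       # 12, matching the example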
Let A be our input array with zero-based indexing. We can reduce A modulo M without changing the result.
First of all, let's reduce the problem to a slightly easier one by computing an array P representing the prefix sums of A, modulo M:
A = 6 6 11 2 12 1
P = 6 12 10 12 11 12
Now let's process the possible left borders of our solution subarrays in decreasing order. This means that we will first determine the optimal solution that starts at index n - 1, then the one that starts at index n - 2 etc.
In our example, if we chose i = 3 as our left border, the possible subarray sums are represented by the suffix P[3..n-1] plus a constant a = A[i] - P[i]:
a = A[3] - P[3] = 2 - 12 = 3 (mod 13)
P + a = * * * 2 1 2
The global maximum will occur for one of these left borders. Since we can insert the suffix values from right to left, we have now reduced the problem to the following:
Given a set of values S and integers x and M, find the maximum of (s + x) modulo M over all s in S
This one is easy: just use a balanced binary search tree to manage the elements of S. Given a query x, we want to find the largest value in S that is smaller than M - x (that is the case where no overflow occurs when adding x). If there is no such value, just use the largest value of S. Both can be done in O(log |S|) time.
Total runtime of this solution: O(n log n)
Here's some C++ code to compute the maximum sum. It would need some minor adaptations to also return the borders of the optimal subarray:
#include <bits/stdc++.h>
using namespace std;

int max_mod_sum(const vector<int>& A, int M) {
    vector<int> P(A.size());
    for (int i = 0; i < A.size(); ++i)
        P[i] = (A[i] + (i > 0 ? P[i-1] : 0)) % M;
    set<int> S;
    int res = 0;
    for (int i = A.size() - 1; i >= 0; --i) {
        S.insert(P[i]);
        int a = (A[i] - P[i] + M) % M;
        auto it = S.lower_bound(M - a);
        if (it != begin(S))
            res = max(res, *prev(it) + a);
        res = max(res, (*prev(end(S)) + a) % M);
    }
    return res;
}

int main() {
    // random testing to the rescue
    for (int i = 0; i < 1000; ++i) {
        int M = rand() % 1000 + 1, n = rand() % 1000 + 1;
        vector<int> A(n);
        for (int i = 0; i < n; ++i)
            A[i] = rand() % M;
        int should_be = 0;
        for (int i = 0; i < n; ++i) {
            int sum = 0;
            for (int j = i; j < n; ++j) {
                sum = (sum + A[j]) % M;
                should_be = max(should_be, sum);
            }
        }
        assert(should_be == max_mod_sum(A, M));
    }
}
For me, all explanations here were awful, since I didn't get the searching/sorting part. How we search/sort was unclear.
We all know that we need to build a prefixSum, meaning the sum of all elements from 0 to i with modulo m.
I guess, what we are looking for is clear.
Knowing that subarray[i][j] = (prefix[i] - prefix[j] + m) % m (indicating the modulo sum from index j + 1 to i), the maximum for a given prefix[i] is always achieved by the prefix[j] which is as close as possible to prefix[i], but slightly bigger.
E.g. for m = 8 and prefix[i] = 5, we are looking for the next value after 5 which is in our prefix array.
For efficient search (binary search) we sort the prefixes.
What we cannot do is build the prefixSum first, then iterate again from 0 to n and look for the index in the sorted prefix array, because we can find an endIndex which is smaller than our startIndex, which is no good.
Therefore, what we do is iterate from 0 to n with endIndex indicating the end of our potential max subarray sum, and then look in our sorted prefix array (which is empty at the beginning), which contains the sorted prefixes between 0 and endIndex.
import bisect

def maximumSum(coll, m):
    n = len(coll)
    maxSum, prefixSum = 0, 0
    sortedPrefixes = []
    for endIndex in range(n):
        prefixSum = (prefixSum + coll[endIndex]) % m
        maxSum = max(maxSum, prefixSum)
        # smallest prefix strictly greater than prefixSum, if any
        startIndex = bisect.bisect_right(sortedPrefixes, prefixSum)
        if startIndex < len(sortedPrefixes):
            maxSum = max(maxSum, prefixSum - sortedPrefixes[startIndex] + m)
        bisect.insort(sortedPrefixes, prefixSum)
    return maxSum
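A quick check on the example from the question (expected maximum 12):

print(maximumSum([6, 6, 11, 15, 12, 1], 13))  # 12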
From your question, it seems that you have created an array to store the cumulative sums (a prefix sum array), and are calculating the sum of the sub-array arr[i:j] as (sum[j] - sum[i] + M) % M (arr and sum denote the given array and the prefix sum array respectively).
Calculating the sum of every sub-array results in an O(n*n) algorithm.
The question that arises is -
Do we really need to consider the sum of every sub-array to reach the desired maximum?
No!
For a given value of j, the value (sum[j] - sum[i] + M) % M will be maximum when sum[i] is just greater than sum[j], making the difference approach M - 1.
This reduces the algorithm to O(n log n).
You can take a look at this explanation! https://www.youtube.com/watch?v=u_ft5jCDZXk
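To see the claim with concrete numbers (toy values: M = 13, current prefix sum[j] = 10, earlier prefixes 6 and 12, as in the example above):

M, sum_j = 13, 10
for sum_i in (6, 12):
    print(sum_i, (sum_j - sum_i + M) % M)  # 6 -> 4, 12 -> 11: the prefix just greater than sum[j] wins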
There are already a bunch of great solutions listed here, but I wanted to add one that has O(n log n) runtime without using a balanced binary tree, which isn't in the Python standard library. This solution isn't my idea, but I had to think a bit as to why it works. Here's the code, explanation below:
def maximumSum(a, m):
    prefixSums = [(0, -1)]
    for idx, el in enumerate(a):
        prefixSums.append(((prefixSums[-1][0] + el) % m, idx))
    prefixSums = sorted(prefixSums)
    maxSeen = prefixSums[-1][0]
    for (a, a_idx), (b, b_idx) in zip(prefixSums[:-1], prefixSums[1:]):
        if a_idx > b_idx and b > a:
            maxSeen = max((a - b) % m, maxSeen)
    return maxSeen
As with the other solutions, we first calculate the prefix sums, but this time we also keep track of the index of the prefix sum. We then sort the prefix sums, as we want to find the smallest difference between prefix sums modulo m - sorting lets us just look at adjacent elements as they have the smallest difference.
At this point you might think we're neglecting an essential part of the problem - we want the smallest difference between prefix sums, but the larger prefix sum needs to appear before the smaller prefix sum (meaning it has a smaller index). In the solutions using trees, we ensure that by adding prefix sums one by one and recalculating the best solution.
However, it turns out that we can look at adjacent elements and just ignore ones that don't satisfy our index requirement. This confused me for some time, but the key realization is that the optimal solution will always come from two adjacent elements. I'll prove this via a contradiction. Let's say that the optimal solution comes from two non-adjacent prefix sums x and z, where z > x (it's sorted!), with indices k and i respectively, where k > i:
value: x ... z
index: k ... i
Let's consider one of the numbers between x and z, and let's call it y with index j. Since the list is sorted, x < y < z.
value: x ... y ... z
index: k ... j ... i
The prefix sum y must have index j < i, otherwise it would be part of a better solution with z. But if j < i, then j < k and y and x form a better solution than z and x! So any elements between x and z must form a better solution with one of the two, which contradicts our original assumption. Therefore the optimal solution must come from adjacent prefix sums in the sorted list.
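A quick way to gain confidence in this adjacency argument is to test the function above against a brute force over all subarrays, in the spirit of the random testing in the C++ answer earlier (a test sketch):

import random

def brute_force(a, m):
    best = 0
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s = (s + a[j]) % m
            best = max(best, s)
    return best

for _ in range(1000):
    m = random.randint(1, 50)
    a = [random.randint(0, m - 1) for _ in range(random.randint(1, 30))]
    assert maximumSum(a, m) == brute_force(a, m)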
Here is Java code for the maximum subarray sum modulo. We handle the case where we cannot find a least element in the tree strictly greater than s[i].
public static long maxModulo(long[] a, final long k) {
    long[] s = new long[a.length];
    TreeSet<Long> tree = new TreeSet<>();
    s[0] = a[0] % k;
    tree.add(s[0]);
    long result = s[0];
    for (int i = 1; i < a.length; i++) {
        s[i] = (s[i - 1] + a[i]) % k;
        // find least element in the tree strictly greater than s[i]
        Long v = tree.higher(s[i]);
        if (v == null) {
            // no such element: the prefix sum s[i] itself is the best candidate here
            result = Math.max(s[i], result);
        } else {
            result = Math.max((s[i] - v + k) % k, result);
        }
        tree.add(s[i]);
    }
    return result;
}
A few points from my side that might hopefully help someone understand the problem better.
You do not need to add +M to the modulo calculation: as mentioned, in Python the % operator handles negative numbers well, so a % M == (a + M) % M.
As mentioned, the trick is to build the proxy sum table such that
proxy[n] = (a[1] + ... a[n]) % M
This then allows one to represent the maxSubarraySum[i, j] as
maxSubarraySum[i, j] = (proxy[j] - proxy[i]) % M
The implementation trick is to build the proxy table as we iterate through the elements, instead of first pre-building it and then using it. This is because for each new element in the array a[i] we want to compute proxy[i] and find a proxy[j] that is bigger than, but as close as possible to, proxy[i] (ideally bigger by 1, because this results in a remainder of M - 1). For this we need a clever data structure that keeps the proxy table sorted as we build it and
allows us to quickly find the closest bigger element to proxy[i]. bisect.bisect_right is a good choice in Python.
See my Python implementation below (hope this helps but I am aware this might not necessarily be as concise as others' solutions):
import bisect

def maximumSum(a, m):
    prefix_sum = [a[0] % m]
    prefix_sum_sorted = [a[0] % m]
    current_max = prefix_sum_sorted[0]
    for elem in a[1:]:
        prefix_sum_next = (prefix_sum[-1] + elem) % m
        prefix_sum.append(prefix_sum_next)
        idx_closest_bigger = bisect.bisect_right(prefix_sum_sorted, prefix_sum_next)
        if idx_closest_bigger >= len(prefix_sum_sorted):
            current_max = max(current_max, prefix_sum_next)
            bisect.insort_right(prefix_sum_sorted, prefix_sum_next)
            continue
        if prefix_sum_sorted[idx_closest_bigger] > prefix_sum_next:
            current_max = max(current_max, (prefix_sum_next - prefix_sum_sorted[idx_closest_bigger]) % m)
            bisect.insort_right(prefix_sum_sorted, prefix_sum_next)
    return current_max
Complete Java implementation with O(n log n):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.TreeSet;
import java.util.stream.Stream;

public class MaximizeSumMod {

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        Long times = Long.valueOf(in.readLine());
        while (times --> 0) {
            long[] pair = Stream.of(in.readLine().split(" ")).mapToLong(Long::parseLong).toArray();
            long mod = pair[1];
            long[] numbers = Stream.of(in.readLine().split(" ")).mapToLong(Long::parseLong).toArray();
            printMaxMod(numbers, mod);
        }
    }

    private static void printMaxMod(long[] numbers, Long mod) {
        Long maxSoFar = (numbers[numbers.length - 1] + numbers[numbers.length - 2]) % mod;
        maxSoFar = (maxSoFar > (numbers[0] % mod)) ? maxSoFar : numbers[0] % mod;
        numbers[0] %= mod;
        for (Long i = 1L; i < numbers.length; i++) {
            long currentNumber = numbers[i.intValue()] % mod;
            maxSoFar = maxSoFar > currentNumber ? maxSoFar : currentNumber;
            numbers[i.intValue()] = (currentNumber + numbers[i.intValue() - 1]) % mod;
            maxSoFar = maxSoFar > numbers[i.intValue()] ? maxSoFar : numbers[i.intValue()];
        }
        if (mod.equals(maxSoFar + 1) || numbers.length == 2) {
            System.out.println(maxSoFar);
            return;
        }
        long previousNumber = numbers[0];
        TreeSet<Long> set = new TreeSet<>();
        set.add(previousNumber);
        for (Long i = 2L; i < numbers.length; i++) {
            Long currentNumber = numbers[i.intValue()];
            Long ceiling = set.ceiling(currentNumber);
            if (ceiling == null) {
                set.add(numbers[i.intValue() - 1]);
                continue;
            }
            if (ceiling.equals(currentNumber)) {
                set.remove(ceiling);
                Long greaterCeiling = set.ceiling(currentNumber);
                if (greaterCeiling == null) {
                    set.add(ceiling);
                    set.add(numbers[i.intValue() - 1]);
                    continue;
                }
                set.add(ceiling);
                ceiling = greaterCeiling;
            }
            Long newMax = (currentNumber - ceiling + mod);
            maxSoFar = maxSoFar > newMax ? maxSoFar : newMax;
            set.add(numbers[i.intValue() - 1]);
        }
        System.out.println(maxSoFar);
    }
}
Adding STL C++11 code based on the solution suggested by @Pham Trung. Might be handy.
#include <iostream>
#include <set>

int main() {
    int N;
    std::cin >> N;
    for (int nn = 0; nn < N; nn++) {
        long long n, m;
        std::set<long long> mSet;
        long long maxVal = 0; // positive input values
        long long sumVal = 0;
        std::cin >> n >> m;
        mSet.insert(m); // sentinel, so upper_bound always finds an element
        for (long long q = 0; q < n; q++) {
            long long tmp;
            std::cin >> tmp;
            sumVal = (sumVal + tmp) % m;
            auto itSub = mSet.upper_bound(sumVal);
            maxVal = std::max(maxVal, (m + sumVal - *itSub) % m);
            mSet.insert(sumVal);
        }
        std::cout << maxVal << "\n";
    }
}
As you can read on Wikipedia, there is a solution called Kadane's algorithm, which computes the maximum subarray sum by watching the maximum subarray ending at position i, for all positions i, iterating once over the array. This solves the problem with runtime complexity O(n).
Unfortunately, I think that Kadane's algorithm isn't able to find all possible solutions when more than one solution exists.
An implementation in Java (I haven't tested it):
public int[] kadanesAlgorithm(int[] array) {
    int start_old = 0;
    int start = 0;
    int end = 0;
    int found_max = 0;
    int max = 0; // start at 0 so array[0] is not added twice on the first iteration
    for (int i = 0; i < array.length; i++) {
        max = Math.max(array[i], max + array[i]);
        found_max = Math.max(found_max, max);
        if (max < 0)
            start = i + 1;
        else if (max == found_max) {
            start_old = start;
            end = i;
        }
    }
    return Arrays.copyOfRange(array, start_old, end + 1);
}
I feel my thoughts are aligned with what has already been posted, but just in case - Kotlin O(n log n) solution:
fun maximumSum(a: LongArray, m: Long): Long {
    val seen = sortedSetOf(0L) // a TreeSet; the 0 allows subarrays starting at index 0
    var prev = 0L
    return a.map { x ->
        val z = (prev + x) % m
        prev = z
        seen.add(z)
        seen.higher(z)?.let { y ->
            (z - y + m) % m
        } ?: z
    }.maxOrNull()!!
}
Implementation in Java using a TreeSet...
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.TreeSet;

public class Main {

    public static void main(String[] args) throws IOException {
        BufferedReader read = new BufferedReader(new InputStreamReader(System.in));
        String[] str = read.readLine().trim().split(" ");
        int n = Integer.parseInt(str[0]);
        long m = Long.parseLong(str[1]);

        str = read.readLine().trim().split(" ");
        long[] arr = new long[n];
        for (int i = 0; i < n; i++) {
            arr[i] = Long.parseLong(str[i]);
        }

        long maxCount = 0L;
        TreeSet<Long> tree = new TreeSet<>();
        tree.add(0L);
        long prefix = 0L;

        for (int i = 0; i < n; i++) {
            prefix = (prefix + arr[i]) % m;
            maxCount = Math.max(prefix, maxCount);
            // least prefix strictly greater than the current one
            Long temp = tree.higher(prefix);
            if (temp != null) {
                maxCount = Math.max((prefix - temp + m) % m, maxCount);
            }
            tree.add(prefix);
        }
        System.out.println(maxCount);
    }
}
Here is one Java implementation for this problem, using a TreeSet for an optimized solution!
public static long maximumSum2(long[] arr, long n, long m) {
    long prefix = 0;
    long maxim = 0;

    TreeSet<Long> S = new TreeSet<Long>();
    S.add((long) 0);

    // Traversing the array.
    for (int i = 0; i < n; i++) {
        // Finding prefix sum.
        prefix = (prefix + arr[i]) % m;
        // Finding maximum of prefix sum.
        maxim = Math.max(maxim, prefix);
        // Finding the first element that is not less than
        // "prefix + 1", i.e., strictly greater than prefix.
        Long higher = S.higher(prefix);
        long it = (higher != null) ? higher : 0;
        // (equivalent linear search over the set:)
        // boolean isFound = false;
        // for (long j : S) {
        //     if (j >= prefix + 1) {
        //         if (isFound == false) {
        //             it = j;
        //             isFound = true;
        //         } else if (j < it) {
        //             it = j;
        //         }
        //     }
        // }
        if (it != 0) {
            maxim = Math.max(maxim, prefix - it + m);
        }
        // adding prefix to the set.
        S.add(prefix);
    }
    return maxim;
}
public static int MaxSequence(int[] arr)
{
    int maxSum = 0;
    int partialSum = 0;
    int negative = 0;

    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i] < 0)
        {
            negative++;
        }
    }
    if (negative == arr.Length)
    {
        return 0;
    }

    foreach (int item in arr)
    {
        partialSum += item;
        maxSum = Math.Max(maxSum, partialSum);
        if (partialSum < 0)
        {
            partialSum = 0;
        }
    }
    return maxSum;
}
Modify Kadane's algorithm to keep track of the number of occurrences. Below is the code.
# python3
# source: https://github.com/harishvc/challenges/blob/master/dp-largest-sum-sublist-modulo.py
# Time complexity: O(n)
# Space complexity: O(n)
def maxContiguousSum(a, K):
    sum_so_far = 0
    max_sum = 0
    count = {}  # keep track of occurrences
    for i in range(0, len(a)):
        sum_so_far += a[i]
        sum_so_far = sum_so_far % K
        if sum_so_far > 0:
            max_sum = max(max_sum, sum_so_far)
            if sum_so_far in count.keys():
                count[sum_so_far] += 1
            else:
                count[sum_so_far] = 1
        else:
            # after % K the running sum is never negative, so it must be 0 here
            assert sum_so_far == 0, "Logic error"
            # IMPORTANT: reset sum_so_far
            sum_so_far = 0
    return max_sum, count.get(max_sum, 0)

a = [6, 6, 11, 15, 12, 1]
K = 13
max_sum, count = maxContiguousSum(a, K)
print("input >>> %s max sum=%d #occurrence=%d" % (a, max_sum, count))
I am given N numbers and I want to calculate the sum of their factorials modulo m.
For example:
4 100
12 18 2 11
Ans = (12! + 18! + 2! + 11!) % 100
Since 1 < N < 10^5 and the numbers range over 1 < Ni < 10^17, how can I calculate it in efficient time?
The naive recursive approach will fail, i.e.:
int fact(int n) {
    if (n == 1) return 1;
    return n * fact(n - 1) % m;
}
If you precalculate the factorials, applying % m at every operation, and use the hint from the comments about factorials of numbers bigger than m (for n >= m, n! is divisible by m, so those terms contribute 0), you will get something like this:
fact = new int[m];
f = fact[0] = 1;
for (int i = 1; i < m; i++)
{
    f = (f * i) % m;
    fact[i] = f;
}

sum = 0
for each (n in numbers)
{
    if (n < m)
    {
        sum = (sum + fact[n]) % m
    }
    // else: n >= m, so n! % m == 0 and the term contributes nothing
}
I'm not sure if it's best but it should work in a reasonable amount of time.
Update: the code can be optimized using the fact that if for some number j we have j! % m == 0, then for every n > j we also have n! % m == 0. So in some cases (usually when m is not a prime number) it's not necessary to precalculate factorials for all numbers less than m; see the sketch below.
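A sketch of that optimized precalculation in Python (hypothetical function name; the table stops at the first factorial that is 0 mod m, and any n beyond the table contributes nothing, since n! % m == 0 for n >= m):

def sum_of_factorials_mod(numbers, m):
    fact_mod = [1 % m]    # fact_mod[i] = i! % m
    f = 1 % m
    i = 1
    while f != 0 and i < m:
        f = (f * i) % m
        fact_mod.append(f)
        i += 1
    total = 0
    for n in numbers:
        if n < len(fact_mod):
            total = (total + fact_mod[n]) % m
        # else: n! % m == 0, nothing to add
    return total

print(sum_of_factorials_mod([12, 18, 2, 11], 100))  # 2, the example from the question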
try this:
var numbers = [12, 18, 2, 11];

function fact(n) {
    if (n == 1) return 1;
    return n * fact(n - 1);
}

var accumulator = 0;
$.each(numbers, function(index, value) {
    accumulator += fact(value);
});

var answer = accumulator % 100;
alert(accumulator);
alert(answer);
you can see it running here:
http://jsfiddle.net/orw4gztf/1/
Given
f(n) = 1 + x + x^2 + x^3 + ... + x^n, (n >= 0 and n is an integer)
Given x and n as input, how can we work out the result with greater efficiency?
It's a geometric progression. Noting that
(x - 1) f(n) = x^{n+1} - 1
you get
f(n) = (x^{n+1} - 1) / (x - 1)  (for x != 1; when x = 1, f(n) = n + 1).
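In code, the closed form is a one-liner, with the x = 1 case handled separately (a sketch; note that x^{n+1} can be computed with O(log n) multiplies by repeated squaring, which Python's ** already does for integer exponents):

def geometric_sum(x, n):
    if x == 1:
        return n + 1                       # n + 1 terms, each equal to 1
    return (x ** (n + 1) - 1) / (x - 1)

print(geometric_sum(2, 10))                # 2047.0 = 1 + 2 + ... + 1024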
This does n multiplies and n increments. It's easy to put the sum into closed form, but computing the closed form requires evaluating x^{n+1}, which could also end up doing n multiplies, but doesn't require a divide.
Although this is actually valid C, think of it as pseudocode. A real implementation would check for negative n rather than looping through half the int number space. If you needed to apply this to an integer x rather than a floating-point x, this would definitely be the way to go.
double polysum(int n, double x) {
    double a = 1;
    // Horner evaluation: repeating a = x*a + 1 builds 1 + x + ... + x^n
    while (n--) a = x * a + 1;
    return a;
}
public class Test {
    public static void main(String[] args) {
        int x = 2, n = 10;
        double sum = 0;
        for (int i = 0; i <= n; i++) {
            sum = sum + Math.pow(x, i);
        }
        System.out.println(sum);
    }
}