Minimum compression time for a range of files - algorithm

This is a bit of an algorithmic problem, or maybe an optimization one, or dynamic programming.
Let's say we have N files to compress.
The average compression ratio is L.
The compression time of a file depends on two factors:
1. the size of the file currently being processed, and
2. the memory space left in the system (total = M, occupied = the combined size of the already-compressed outputs and the not-yet-compressed files).
So
t(i) = K * s(i) / (M - L*(s(1)+s(2)+...+s(i)) - (s(i+1) + s(i+2) + ... + s(n)))
where s(i) is the size of the ith file and t(i) is the time taken to compress the ith file.
What I have to do is calculate the optimal order in which to compress the files so that the total time required is minimum. How do I compute that order?

It seems that the best approach is to sort the files by size and process them in that order. This greedy approach may be explained as "compress the small files first, to avoid compressing them after the big ones".
A possible proof is:
if we have two adjacent files A, B such that size(A) <= size(B), we can prove that
t(A,B) <= t(B,A)
A/M + B/(M - L*A) <= B/M + A/(M - L*B)
A*(1/M - 1/(M - L*B)) <= B*(1/M - 1/(M - L*A))
Since 1/M - 1/(M - L*X) = -L*X/(M*(M - L*X)), this is equivalent to
-L*A*B/(M*(M - L*B)) <= -L*A*B/(M*(M - L*A))
1/(M - L*B) >= 1/(M - L*A)
M - L*B <= M - L*A
B >= A
so the assumption size(A) <= size(B) gives exactly the first inequality (if I didn't fail somewhere :D).
Sorting gives us the guarantee that A <= B for every adjacent pair of files.
I wrote an O(N!) brute force for N <= 10, and it produces the sorted order for every test I could think of.
test : N, L, M, K and N files
8 0.5 80.0 1.0
7 1 6 3 4 5 6 5
result :
0.515769
1 3 4 5 5 6 6 7
#include <iostream>
#include <algorithm>
using namespace std;

// will be too slow for cnt > 10 because 10! = 3628800
int perm[] = {0,1,2,3,4,5,6,7,8,9};
int bestPerm[10];
double sizes[10];

double calc(int cnt, double L, double M, double K, double T) {
    double res = 0.0, usedMemory = 0.0;
    for (int i = 0; i < cnt; i++) {
        int ind = perm[i];
        res += K * sizes[ind] / (M - L * usedMemory - (T - usedMemory));
        usedMemory += sizes[ind];
    }
    return res;
}

int main() {
    int cnt;
    double L, M, K, T = 0.0;
    cin >> cnt >> L >> M >> K;
    for (int i = 0; i < cnt; i++)
        cin >> sizes[i], T += sizes[i];

    double bruteRes = 1e16;
    int bruteCnt = 1;
    for (int i = 2; i <= cnt; i++)
        bruteCnt *= i;

    for (int i = 0; i < bruteCnt; i++) {
        double curRes = calc(cnt, L, M, K, T);
        if (bruteRes > curRes) {
            bruteRes = curRes;
            for (int j = 0; j < cnt; j++)
                bestPerm[j] = perm[j];
        }
        next_permutation(perm, perm + cnt);
    }

    cout << bruteRes << "\n";
    for (int i = 0; i < cnt; i++)
        cout << sizes[bestPerm[i]] << " ";
    cout << "\n";
    return 0;
}
Updated implementation for the case when L is different for each file: pastebin (it seems that the brute force prefers to sort them in descending order of compression ratio L[i], and to use the smaller files first when L is equal).

Suppose you have a schedule that claims to be optimal. Consider any file and the one processed just after it. If you could improve the schedule by swapping them, it couldn't be optimal. So if you can show that it is always best to process a small file before a large one when the two are side by side, then you can show that the best schedule is in sorted order with the smallest files first, because you can improve any other schedule.
Because you are just swapping two adjacent files, the times taken to process the files before and after these two are not changed - the same amount of memory is available before and after. You might as well scale the problem so that one of the files is of size one unit. Supposing that you have a total of K units of memory free before the first file, and supposing the second file is of size x units with a compression ratio of 1:L, you end up with something like 1/K + x/(K - L) - x/K - 1/(K - xL) as the difference in compression times due to this pair of files - my algebra is horribly error-prone, but I think this boils down to L^2x(1-x) over something complicated but positive, which shows that for a pair of files you always want to compress the short one first. So, by what I said earlier, the best schedule is in sorted order with the shortest file first.
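Putting the two answers together: a minimal sketch (my own code, using the same memory model and input format as the brute force above) that just sorts by size and evaluates the total time, so the whole thing is O(N log N):
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>
using namespace std;

int main() {
    int n; double L, M, K;
    cin >> n >> L >> M >> K;
    vector<double> s(n);
    for (double &x : s) cin >> x;

    sort(s.begin(), s.end());                      // smallest file first
    double total = accumulate(s.begin(), s.end(), 0.0);

    double used = 0.0, time = 0.0;                 // "used" = sizes already compressed
    for (double sz : s) {
        // same denominator as in the question: free = M - L*compressed - uncompressed
        time += K * sz / (M - L * used - (total - used));
        used += sz;
    }
    cout << time << "\n";                          // e.g. 0.515769 for the sample input above
    return 0;
}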

Related

Finding kth element in the nth order of Farey Sequence

The Farey sequence of order n is the sequence of completely reduced fractions between 0 and 1 which, in lowest terms, have denominators less than or equal to n, arranged in order of increasing size. Detailed explanation here.
Problem
The problem is, given n and k, where n = order of the sequence and k = element index, can we find that particular element of the sequence? For example, the answer for (n=5, k=6) is 1/2.
Lead
There are many less-than-optimal solutions available, but I am looking for a near-optimal one. One such algorithm is discussed here, but I am unable to understand its logic and hence unable to apply it to the examples.
Question
Can someone please explain the solution in more detail, preferably with an example?
Thank you.
I've read the method provided in your link, and the accepted C++ solution to it. Let me post them, for reference:
Editorial Explanation
Several less-than-optimal solutions exist. Using a priority queue, one can iterate through the fractions (generating them one by one) in O(K log N) time. Using a fancier math relation, this can be reduced to O(K). However, neither of these solutions obtains many points, because the number of fractions (and thus K) is quadratic in N.
The “good” solution is based on meta-binary search. To construct this solution, we need the following subroutine: given a fraction A/B (which is not necessarily irreducible), find how many fractions from the Farey sequence are less than this fraction. Suppose we had this subroutine; then the algorithm works as follows:
1. Determine a number X such that the answer is between X/N and (X+1)/N; such a number can be determined by binary searching the range 1...N, thus calling the subroutine O(log N) times.
2. Make a list of all fractions A/B in the range X/N...(X+1)/N. For any given B, there is at most one A in this range, and it can be determined trivially in O(1).
3. Determine the appropriate order statistic in this list (doing this in O(N log N) by sorting is good enough).
It remains to show how we can construct the desired subroutine. We will show how it can be implemented in O(N log N), thus giving an O(N log^2 N) algorithm overall. Let us denote by C[j] the number of irreducible fractions i/j which are less than X/N. The algorithm is based on the following observation: C[j] = floor(X*j/N) - Sum(C[D], where D divides j). A direct implementation, which tests whether any D is a divisor, yields a quadratic algorithm. A better approach, inspired by Eratosthenes' sieve, is the following: at step j, we know C[j], and we subtract it from all multiples of j. The running time of the subroutine becomes O(N log N).
Relevant Code
#include <cassert>
#include <algorithm>
#include <fstream>
#include <iostream>
#include <vector>
using namespace std;

const int kMaxN = 2e5;
typedef int int32;
typedef long long int64_x;
// #define int __int128_t
// #define int64 __int128_t
typedef long long int64;

int64 count_less(int a, int n) {
    vector<int> counter(n + 1, 0);
    for (int i = 2; i <= n; i += 1) {
        counter[i] = min(1LL * (i - 1), 1LL * i * a / n);
    }
    int64 result = 0;
    for (int i = 2; i <= n; i += 1) {
        for (int j = 2 * i; j <= n; j += i) {
            counter[j] -= counter[i];
        }
        result += counter[i];
    }
    return result;
}

int32 main() {
    // ifstream cin("farey.in");
    // ofstream cout("farey.out");
    int64_x n, k; cin >> n >> k;
    assert(1 <= n);
    assert(n <= kMaxN);
    assert(1 <= k);
    assert(k <= count_less(n, n));

    int up = 0;
    for (int p = 29; p >= 0; p -= 1) {
        if ((1 << p) + up > n)
            continue;
        if (count_less((1 << p) + up, n) < k) {
            up += (1 << p);
        }
    }
    k -= count_less(up, n);

    vector<pair<int, int>> elements;
    for (int i = 1; i <= n; i += 1) {
        int b = i;
        // find a such that up/n < a / b and a / b <= (up+1) / n
        int a = 1LL * (up + 1) * b / n;
        if (!(1LL * up * b < 1LL * a * n)) {
            continue;
        }
        if (!(1LL * a * n <= 1LL * (up + 1) * b)) {
            continue;
        }
        if (__gcd(a, b) != 1) {
            continue;
        }
        elements.push_back({a, b});
    }
    sort(elements.begin(), elements.end(),
         [](const pair<int, int>& lhs, const pair<int, int>& rhs) -> bool {
             return 1LL * lhs.first * rhs.second < 1LL * rhs.first * lhs.second;
         });

    cout << (int64_x)elements[k - 1].first << ' ' << (int64_x)elements[k - 1].second << '\n';
    return 0;
}
Basic Methodology
The above editorial explanation results in the following simplified version. Let me start with an example.
Let's say, we want to find 7th element of Farey Sequence with N = 5.
We start by writing a subroutine, as described in the explanation, that gives us the "k" value (how many reduced fractions of the Farey sequence exist before a given fraction; the given fraction itself may or may not be reduced).
So, take your F5 sequence:
k = 0, 0/1
k = 1, 1/5
k = 2, 1/4
k = 3, 1/3
k = 4, 2/5
k = 5, 1/2
k = 6, 3/5
k = 7, 2/3
k = 8, 3/4
k = 9, 4/5
k = 10, 1/1
If we can find a function that finds the count of the previous reduced fractions in Farey Sequence, we can do the following:
int64 k_count_2 = count_less(2, 5); // result = 4
int64 k_count_3 = count_less(3, 5); // result = 6
int64 k_count_4 = count_less(4, 5); // result = 9
This function is written in the accepted solution. It uses the exact methodology explained in the last paragraph of the editorial.
As you can see, the count_less() function generates the same k values as in our hand written list.
We know the values of the reduced fractions for k = 4, 6, 9 using that function. What about k = 7? As explained in the editorial, we will list all the reduced fractions in the range X/N to (X+1)/N; here X = 3 and N = 5.
Using the function in the accepted solution (it's near the bottom), we list and sort the reduced fractions.
After that we remap our k values to fit the new array, like so:
k = -, 0/1
k = -, 1/5
k = -, 1/4
k = -, 1/3
k = -, 2/5
k = -, 1/2
k = -, 3/5 <-|
k = 0, 2/3 | We list and sort the possible reduced fractions
k = 1, 3/4 | in between these numbers
k = -, 4/5 <-|
k = -, 1/1
(That's why there is this piece of code: k -= count_less(up, n);, it basically remaps the k values)
(And we also subtract one more during indexing, i.e.: cout << (int64_x)elements[k - 1].first << ' ' << (int64_x)elements[k - 1].second << '\n';. This is just to basically call the right position in the generated array.)
So, for our new re-mapped k values, for N = 5 and k = 7 (original k), our result is 2/3.
(We select the value k = 0, in our new map)
If you compile and run the accepted solution, it will give you this:
Input: 5 7 (Enter)
Output: 2 3
I believe this is the basic point of the editorial and accepted solution.

What is the maximum water collected between two histograms?

I recently came across this problem:
You are given the heights of n histograms, each of width 1. You have to choose any two histograms such that if it starts raining and all other histograms (except the two you have selected) are removed, then the water collected between the two histograms is maximised.
Input:
9
3 2 5 9 7 8 1 4 6
Output:
25
Between third and last histogram.
This is a variant of the Trapping Rain Water problem.
I tried two solutions, but both had a worst-case complexity of O(N^2). How can we optimise further?
Sol1: Brute force for every pair.
int maxWaterCollected(vector<int> hist, int n) {
    int ans = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            ans = max(ans, min(hist[i], hist[j]) * (j - i - 1));
        }
    }
    return ans;
}
Sol2: Keep a sequence of histograms in increasing order of height. For every histogram, find its best histogram in this sequence. Now, if all histograms are in increasing order, then this solution also becomes O(N^2).
int maxWaterCollected(vector<int> hist, int n) {
    vector< pair<int, int> > increasingSeq(1, make_pair(hist[0], 0)); // initialised with 1st element.
    int ans = 0;
    for (int i = 1; i < n; i++) {
        // compute best result from current increasing sequence
        for (int j = 0; j < increasingSeq.size(); j++) {
            ans = max(ans, min(hist[i], increasingSeq[j].first) * (i - increasingSeq[j].second - 1));
        }
        // add this histogram to sequence
        if (hist[i] > increasingSeq.back().first) {
            increasingSeq.push_back(make_pair(hist[i], i));
        }
    }
    return ans;
}
Use 2 iterators, one from begin() and one from end() - 1.
Until the 2 iterators are equal:
Compare the current result with the max, and keep the max.
Move the iterator with the smaller value (begin -> end or end -> begin).
Complexity: O(n).
Jarod42 has the right idea, but it's unclear from his terse post why his algorithm, described below in Python, is correct:
def candidates(hist):
    l = 0
    r = len(hist) - 1
    while l < r:
        yield (r - l - 1) * min(hist[l], hist[r])
        if hist[l] <= hist[r]:
            l += 1
        else:
            r -= 1

def maxwater(hist):
    return max(candidates(hist))
The proof of correctness is by induction: the optimal solution either (1) belongs to the candidates yielded so far or (2) chooses histograms inside [l, r]. The base case is simple, because all histograms are inside [0, len(hist) - 1].
Inductively, suppose that we're about to advance either l or r. These cases are symmetric, so let's assume that we're about to advance l. We know that hist[l] <= hist[r], so the value is (r - l - 1) * hist[l]. Given any other right endpoint r1 < r, the value is (r1 - l - 1) * min(hist[l], hist[r1]), which is less because r - l - 1 > r1 - l - 1 and hist[l] >= min(hist[l], hist[r1]). We can rule out all of these solutions as suboptimal, so it's safe to advance l.
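For reference, here is the same two-pointer idea as a C++ sketch (my own code, reusing the maxWaterCollected signature from the earlier snippets):
#include <algorithm>
#include <vector>
using namespace std;

int maxWaterCollected(const vector<int>& hist, int n) {
    int l = 0, r = n - 1, best = 0;
    while (l < r) {
        best = max(best, (r - l - 1) * min(hist[l], hist[r]));
        if (hist[l] <= hist[r])
            l++;   // the smaller side cannot do better with any closer partner
        else
            r--;
    }
    return best;
}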

Dynamic programming based zigzag puzzle

I found this interesting dynamic programming problem where it's required to re-order a sequence of integers in order to maximize the output.
Steve has got N liquor bottles. Alcohol quantity of ith bottle is given by A[i]. Now he wants to have one drink from each of the bottles, in such a way that the total hangover is maximised.
Total hangover is calculated as follows (assume the 'alcohol quantity' array uses 1-based indexing):
int hangover = 0;
for (int i = 2; i <= N; i++) {
    hangover += i * abs(A[i] - A[i-1]);
}
So, obviously the order in which he drinks from each bottle changes the Total hangover. He can drink the liquors in any order but not more than one drink from each bottle. Also once he starts drinking a liquor he will finish that drink before moving to some other liquor.
Steve is confused about the order in which he should drink so that the hangover is maximized. Help him find the maximum hangover he can have, if he can drink the liquors in any order.
Input Format :
The first line contains the number of test cases T. The first line of each test case contains N, denoting the number of bottles. The next line contains N space-separated integers denoting the alcohol quantity of each bottle.
2
7
83 133 410 637 665 744 986
4
1 5 9 11
I tried everything that I could, but I wasn't able to achieve an O(n^2) solution. Simply calculating the total hangover over all the permutations has O(n!) time complexity. Can this problem be solved more efficiently?
Thanks!
My hunch: use a sort of "greedy chaining algorithm" instead of DP.
1) find the pair with the greatest difference (O(n^2))
2) starting from either, find successively the next element with the greatest difference, forming a sort of "chain" (2 x O(n^2))
3) once you've done it for both you'll have two "sums". Return the largest one as your optimal answer.
This greedy strategy should work because the nature of the problem itself is greedy: choose the largest difference for the last bottle, because this has the largest index, so the result will always be larger than some "compromising" alternative (one that distributes smaller but roughly uniform differences to the indices).
Complexity: O(3n^2). You can probably reduce it to O(3/2 n^2) if you use linked lists instead of a static array plus a boolean flag array.
Pseudo-ish code:
// 0-based indexing; F[] marks bottles already used
int hang_recurse(int* A, int N, int I, int K, bool* F)
{
    // Greedily extend the chain: repeatedly pick the unused bottle whose
    // difference to the current bottle K is greatest, weighted by the index I.
    int sum = 0;
    for (int j = 2; j <= N; j++, I--)
    {
        int maxdiff = 0, maxidx = -1;
        for (int i = 0; i < N; i++)
        {
            if (F[i] == false)
            {
                int diff = abs(A[K] - A[i]);
                if (diff > maxdiff)
                {
                    maxdiff = diff;
                    maxidx = i;
                }
            }
        }
        if (maxidx == -1)   // no unused bottle left
            break;
        K = maxidx;
        F[K] = true;
        sum += maxdiff * I;
    }
    return sum;
}

int hangover(int* A, int N)
{
    // Step 1: find the pair with the greatest difference; it gets the largest weight N.
    int maxdiff = 0;
    int maxidx_i = 0, maxidx_j = 0;
    for (int j = 0; j < N; j++)
    {
        for (int i = 0; i < N; i++)
        {
            int diff = abs(A[j] - A[i]);
            if (diff > maxdiff)
            {
                maxdiff = diff;
                maxidx_i = i;
                maxidx_j = j;
            }
        }
    }
    // Steps 2-3: grow a chain from either endpoint of that pair and keep the better sum.
    bool* F1 = new bool[N]();
    bool* F2 = new bool[N]();
    F1[maxidx_i] = F1[maxidx_j] = F2[maxidx_i] = F2[maxidx_j] = true;
    int maxsum = max(hang_recurse(A, N, N - 1, maxidx_i, F1),
                     hang_recurse(A, N, N - 1, maxidx_j, F2));
    delete [] F1;
    delete [] F2;
    return maxdiff * N + maxsum;
}

select a group of pairs in order to minimize rms of group

Simplified problem
I have ~40 resistors (all the same value +-5%) and I need to select 12 of them so that they are as similar as possible.
Solution: I list them in order and take the 12 consecutive ones with the smallest RMS.
The actual problem
I have ~40 resistors (all the same value +-5%) and I have to choose 12 pairs of them so that the resistance of the pairs is as similar as possible.
Notes
The resistance of the pair (R1,R2) is R1+R2.
I do not really care about the programming language, but let's say that I'm looking for a solution in C++ or Python, the two languages I'm most familiar with.
This gives reasonably good results (in MATLAB)
a = ones(40,1) + rand(40,1)*0.1 - 0.05;   % The resistors
vec = zeros(40,2);                        % Initialize matrix
indices = zeros(40,2);                    % Initialize matrix
a = sort(a);                              % Sort vector of resistors
for ii = 1:length(a)
    vec(ii,:) = [a(ii) a(ii)];            % Assign resistor values to row ii of vec
    indices(ii,:) = [ii,ii];              % Corresponding resistor number (index)
    for jj = 1:length(a)
        if abs((a(ii)+a(jj)) - 2*mean(a)) < abs(sum(vec(ii,:)) - 2*mean(a))
            vec(ii,:) = [a(ii) a(jj)];    % Check if the new pair is better than the
            indices(ii,:) = [ii, jj];     % previous, and update vec and indices if true.
        end
    end
end
[x, idx] = sort(sum(vec')');              % Sort the sum of the pairs
final_list = indices(idx,:);              % The indices of the sorted pairs
This is the result when I plot it:
This is not optimal but should give somewhat decent results. It's very fast though so if you ever need to choose 1000 pairs out of 10000 resistors...
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>

#define GROUPS 12
#define N 40

int compare (const void * a, const void * b)
{
    // compare the float values themselves (casting to int only happens to work for positive floats)
    float fa = *(const float*)a, fb = *(const float*)b;
    return (fa > fb) - (fa < fb);
}

int main ()
{
    // generate random numbers
    float *values = (float *)malloc(sizeof(float) * N);
    srand(time(0));
    for (int i = 0; i < N; i++)
        values[i] = 950 + rand() % 101;
    qsort(values, N, sizeof(float), compare);

    // find "best" pairing
    float bestrms = -1;
    int beststart = -1;
    float bestmean = -1;
    for (int start = 0; start <= N - 2 * GROUPS; start++)
    {
        float sum = 0;
        for (int i = start; i < start + 2 * GROUPS; i++)
            sum += values[i];
        float mean = sum / GROUPS;   // mean pair sum for this window
        float square = 0;
        for (int i = 0; i < GROUPS; i++)
        {
            float first = values[start + i];
            // in a sorted sequence of 24 resistors, always pair 1st with 24th, 2nd with 23rd, etc
            float second = values[start + 2 * GROUPS - 1 - i];
            float err = mean - (first + second);
            square += err * err;
        }
        float rms = sqrt(square / GROUPS);
        if (bestrms == -1 || rms < bestrms)
        {
            bestrms = rms;
            beststart = start;
            bestmean = mean;
        }
    }

    for (int i = 0; i < GROUPS; i++)
    {
        float first = values[beststart + i];
        float second = values[beststart + 2 * GROUPS - 1 - i];
        float err = bestmean - (first + second);
        printf("(%f, %f) %f %f\n", first, second, first + second, err);
    }
    printf("mean %f rms %f\n", bestmean, bestrms);
    free(values);
}
Sort them and then pair 1 with 2, 3 with 4, 5 with 6 and so on. Find the difference between each pair and sort again, choosing the 12 with the least difference.
Sort them by resistance.
Pair 1 with 40, 2 with 39, etc., compute R1+R2 for each pair and pick the best set of 12 pairs (needs another sorting step). Compute the mean of all selected (R1+R2).
Try to refine this initial solution successively by trying to plug in one of the remaining 16 resistors for one of the 24 chosen ones. An attempt is successful if the combined resistance of the new pair is closer to the mean than the combined resistance of the old pair. Repeat this step until you can't find any further improvement (a rough sketch follows below).
This solution will definitely not always compute the optimal solution, but it might be good enough. Another idea would be simulated annealing, but that would be a lot more work and still not guarantee finding the best solution.
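A rough C++ sketch of this suggestion (my own interpretation of the steps above; for simplicity it uses twice the overall mean as the target instead of recomputing the mean of the selected pairs):
#include <algorithm>
#include <cmath>
#include <numeric>
#include <utility>
#include <vector>
using namespace std;

// Returns `groups` index pairs into the sorted resistor values.
vector<pair<int,int>> choosePairs(vector<double> r, int groups = 12) {
    sort(r.begin(), r.end());
    int n = r.size();
    double target = 2.0 * accumulate(r.begin(), r.end(), 0.0) / n;

    // Steps 1-2: pair 1 with n, 2 with n-1, ... and keep the `groups` pairs
    // whose sums are closest to the target.
    vector<pair<int,int>> pairs;
    for (int i = 0; i < n / 2; i++) pairs.push_back({i, n - 1 - i});
    sort(pairs.begin(), pairs.end(), [&](auto a, auto b) {
        return fabs(r[a.first] + r[a.second] - target) <
               fabs(r[b.first] + r[b.second] - target);
    });
    pairs.resize(groups);

    // Step 3: try to swap in one of the unused resistors whenever it brings a pair
    // closer to the target; repeat until no swap helps any more.
    vector<bool> used(n, false);
    for (auto& p : pairs) used[p.first] = used[p.second] = true;
    bool improved = true;
    while (improved) {
        improved = false;
        for (auto& p : pairs)
            for (int j = 0; j < n; j++) {
                if (used[j]) continue;
                double cur = fabs(r[p.first] + r[p.second] - target);
                if (fabs(r[j] + r[p.second] - target) < cur) {
                    used[p.first] = false; used[j] = true; p.first = j; improved = true;
                } else if (fabs(r[p.first] + r[j] - target) < cur) {
                    used[p.second] = false; used[j] = true; p.second = j; improved = true;
                }
            }
    }
    return pairs;
}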

3-PARTITION problem

Here is another dynamic programming question (Vazirani ch6):
Consider the following 3-PARTITION problem. Given integers a1...an, we want to determine whether it is possible to partition {1...n} into three disjoint subsets I, J, K such that
sum(I) = sum(J) = sum(K) = 1/3*sum(ALL)
For example, for input (1; 2; 3; 4; 4; 5; 8) the answer is yes, because there is the partition (1; 8), (4; 5), (2; 3; 4). On the other hand, for input (2; 2; 3; 5) the answer is no. Devise and analyze a dynamic programming algorithm for 3-PARTITION that runs in time polynomial in n and Sum(a_i).
How can I solve this problem? I know the 2-partition case, but I still can't solve this one.
It's easy to generalize the 2-set solution to the 3-set case.
In the original version, you create an array of booleans sums where sums[i] tells whether sum i can be reached with numbers from the set or not. Then, once the array is created, you just check whether sums[TOTAL/2] is true.
Since you said you already know the old version, I'll describe only the difference between them.
In the 3-partition case, you keep an array of booleans sums, where sums[i][j] tells whether the first set can have sum i and the second sum j. Then, once the array is created, you just check whether sums[TOTAL/3][TOTAL/3] is true.
If the original complexity is O(TOTAL*n), here it's O(TOTAL^2*n).
It may not be polynomial in the strictest sense of the word, but then the original version isn't strictly polynomial either :)
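A minimal sketch of the sums[i][j] table described above (my own code, essentially the same idea as the fuller implementations further down):
#include <numeric>
#include <vector>
using namespace std;

bool canThreePartition(const vector<int>& a) {
    int total = accumulate(a.begin(), a.end(), 0);
    if (total % 3 != 0) return false;
    // sums[i][j] == true: the numbers processed so far can be split so that
    // the first set sums to i and the second to j (the rest go to the third set).
    vector<vector<bool>> sums(total + 1, vector<bool>(total + 1, false));
    sums[0][0] = true;
    for (int x : a)                                  // process the numbers one by one
        for (int i = total; i >= 0; --i)
            for (int j = total; j >= 0; --j)
                if (sums[i][j]) {
                    if (i + x <= total) sums[i + x][j] = true;   // put x in the first set
                    if (j + x <= total) sums[i][j + x] = true;   // put x in the second set
                }
    return sums[total / 3][total / 3];
}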
I think by reduction it goes like this:
Reducing 2-partition to 3-partition:
Let S be the original set and A its total sum; then let S' = union({A/2}, S).
Performing a 3-partition on the set S' yields three sets X, Y, Z.
Among X, Y, Z, one of them must be {A/2}; say it is Z. Then X and Y form a 2-partition of S.
The witnesses of the 3-partition on S' are the witnesses of the 2-partition on S; thus 2-partition reduces to 3-partition.
If this problem is to be solvable, then sum(ALL)/3 must be an integer. Any solution must have SUM(J) + SUM(K) = SUM(I) + sum(ALL)/3. This represents a solution to the 2-partition problem over concat(ALL, {sum(ALL)/3}).
You say you have a 2-partition implementation: use it to solve that problem. Then (at least) one of the two partitions will contain the number sum(ALL)/3 - remove that number from the partition, and you've found I. For the other partition, run 2-partition again to split J from K; after all, J and K must be equal in sum themselves.
Edit: This solution is probably incorrect - the 2-partition of the concatenated set will have several solutions (at least one for each of I, J, K) - however, if there are other solutions, then the "other side" may not consist of the union of two of I, J, K, and may not be splittable at all. You'll need to actually think, I fear :-).
Try 2: Iterate over the multiset, maintaining the following map: R(i,j,k) :: Boolean, which represents whether, up to the current iteration, the numbers permit division into three multisets that have sums i, j, k. I.e., for any R(i,j,k) and the next number n, in the next state R' it holds that R'(i+n,j,k) and R'(i,j+n,k) and R'(i,j,k+n). Note that the complexity (as per the exercise) depends on the magnitude of the input numbers; this is a pseudo-polynomial-time algorithm. Nikita's solution is conceptually similar but more efficient than this one, since it doesn't track the third set's sum: that's unnecessary since you can trivially compute it.
As I have answered in another similar question, the C++ implementation would look something like this:
int partition3(vector<int> &A)
{
    int sum = accumulate(A.begin(), A.end(), 0);
    if (sum % 3 != 0)
    {
        return false;
    }
    int size = A.size();
    vector<vector<int>> dp(sum + 1, vector<int>(sum + 1, 0));
    dp[0][0] = true;
    // process the numbers one by one
    for (int i = 0; i < size; i++)
    {
        for (int j = sum; j >= 0; --j)
        {
            for (int k = sum; k >= 0; --k)
            {
                if (dp[j][k])
                {
                    dp[j + A[i]][k] = true;
                    dp[j][k + A[i]] = true;
                }
            }
        }
    }
    return dp[sum / 3][sum / 3];
}
Let's say you want to partition the set $X = \{x_1, ..., x_n\}$ into $k$ partitions.
Create an $n \times k$ table. Let the cost $M[i,j]$ be the minimal possible maximum sum when the first $i$ elements are split into $j$ partitions. Just recursively use the following optimality criterion to fill it:
M[n,k] = \min_{i \leq n} \max\left( M[i, k-1], \sum_{j=i+1}^{n} x_j \right)
Using these initial values for the table:
M[i,1] = \sum_{j=1}^{i} x_j \quad \text{and} \quad M[1,j] = x_1
The running time is $O(kn^2)$ (polynomial).
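A direct sketch of this recurrence (my own code; note that, as stated, the prefix-style recurrence splits the elements into contiguous groups in the given order):
#include <algorithm>
#include <climits>
#include <vector>
using namespace std;

// Minimal possible maximum group sum when splitting x (in order) into k contiguous groups.
long long minMaxSplit(const vector<long long>& x, int k) {
    int n = x.size();
    vector<long long> prefix(n + 1, 0);                 // prefix[i] = x_1 + ... + x_i
    for (int i = 1; i <= n; i++) prefix[i] = prefix[i - 1] + x[i - 1];

    // M[i][j] = minimal maximum sum over splits of the first i elements into j groups
    vector<vector<long long>> M(n + 1, vector<long long>(k + 1, LLONG_MAX));
    for (int i = 1; i <= n; i++) M[i][1] = prefix[i];   // one group takes everything
    for (int j = 1; j <= k; j++) M[1][j] = x[0];        // a single element

    for (int i = 2; i <= n; i++)
        for (int j = 2; j <= k; j++)
            for (int p = 1; p < i; p++)                 // last group is elements p+1..i
                M[i][j] = min(M[i][j], max(M[p][j - 1], prefix[i] - prefix[p]));

    return M[n][k];
}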
Create a three-dimensional array, where size is the count of elements and part is equal to the sum of all elements divided by 3. Each cell array[seq][sum1][sum2] tells whether you can create sum1 and sum2 using at most the first seq elements from the given array A[]. Compute all values of the array; the result will be in the cell array[using all elements][sum of all elements / 3][sum of all elements / 3]: if you can create two non-overlapping sets each equal to sum/3, the remaining elements form the third set.
Logic of checking: exclude element A[seq] to the third sum (not stored) and check the cell without this element, with the same two sums; OR include it in sum1 - check whether it is possible to get two sets without element seq, where sum1 is smaller by the value of A[seq] and sum2 is unchanged; OR include it in sum2, checked like the previous case.
int partition3(vector<int> &A)
{
    int part = 0;
    for (int a : A)
        part += a;
    if (part % 3)
        return 0;
    int size = A.size() + 1;
    part = part / 3 + 1;
    bool array[size][part][part];
    // sequence from 0 integers inside to all inside
    for (int seq = 0; seq < size; seq++)
        for (int sum1 = 0; sum1 < part; sum1++)
            for (int sum2 = 0; sum2 < part; sum2++) {
                bool curRes;
                if (seq == 0)
                    if (sum1 == 0 && sum2 == 0)
                        curRes = true;
                    else
                        curRes = false;
                else {
                    int curInSeq = seq - 1;
                    bool excludeFrom = array[seq-1][sum1][sum2];
                    bool includeToSum1 = (sum1 >= A[curInSeq]
                                          && array[seq-1][sum1 - A[curInSeq]][sum2]);
                    bool includeToSum2 = (sum2 >= A[curInSeq]
                                          && array[seq-1][sum1][sum2 - A[curInSeq]]);
                    curRes = excludeFrom || includeToSum1 || includeToSum2;
                }
                array[seq][sum1][sum2] = curRes;
            }
    int result = array[size-1][part-1][part-1];
    return result;
}
Another example in C++ (based on the previous answers):
bool partition3(vector<int> const &A) {
    int sum = 0;
    for (int i = 0; i < A.size(); i++) {
        sum += A[i];
    }
    if (sum % 3 != 0) {
        return false;
    }
    vector<vector<vector<int>>> E(A.size() + 1, vector<vector<int>>(sum / 3 + 1, vector<int>(sum / 3 + 1, 0)));
    for (int i = 1; i <= A.size(); i++) {
        for (int j = 0; j <= sum / 3; j++) {
            for (int k = 0; k <= sum / 3; k++) {
                E[i][j][k] = E[i - 1][j][k];
                if (A[i - 1] <= k) {
                    E[i][j][k] = max(E[i][j][k], E[i - 1][j][k - A[i - 1]] + A[i - 1]);
                }
                if (A[i - 1] <= j) {
                    E[i][j][k] = max(E[i][j][k], E[i - 1][j - A[i - 1]][k] + A[i - 1]);
                }
            }
        }
    }
    return (E.back().back().back() / 2 == sum / 3);
}
You really want Korf's Complete Karmarkar-Karp algorithm (http://ac.els-cdn.com/S0004370298000861/1-s2.0-S0004370298000861-main.pdf, http://ijcai.org/papers09/Papers/IJCAI09-096.pdf). A generalization to three-partitioning is given. The algorithm is surprisingly fast given the complexity of the problem, but requires some implementation.
The essential idea of KK is to ensure that large blocks of similar size end up in different partitions. One groups pairs of blocks; such a pair can then be treated as a single smaller block, of size equal to the difference of the two sizes, and placed as normal. By doing this recursively, one ends up with small blocks that are easy to place. One then two-colors the block groups to ensure that the opposite placements are handled. The extension to 3-partition is a bit complicated. The Korf extension is to use depth-first search in KK order to find all possible solutions, or to find a solution quickly.
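For a flavour of the differencing idea, here is the basic two-way Karmarkar-Karp heuristic as a sketch (my own code; this is not Korf's complete search and not the 3-way generalization described in the papers): repeatedly replace the two largest numbers by their difference; the last number left is the difference between the two subset sums that the heuristic achieves.
#include <queue>
#include <vector>
using namespace std;

// Basic (two-way) Karmarkar-Karp differencing heuristic.
// Returns the achieved difference between the two subset sums, not the sets themselves.
long long karmarkarKarp(const vector<long long>& nums) {
    priority_queue<long long> pq(nums.begin(), nums.end());
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();   // largest
        long long b = pq.top(); pq.pop();   // second largest
        pq.push(a - b);                     // commit them to opposite sets
    }
    return pq.empty() ? 0 : pq.top();
}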
