Suppose I have an array of integers int a[] = {0, 1, ..., N-1}, where N is the size of a. Now I need to generate all permutations of a such that a[i] != i for all 0 <= i < N. How would you do that?
Here's some C++ implementing an algorithm based on a bijective proof of the recurrence
!n = (n-1) * (!(n-1) + !(n-2)),
where !n is the number of derangements of n items.
#include <algorithm>
#include <ctime>
#include <iostream>
#include <vector>

static const int N = 12;
static int count;

template<class RAI>
void derange(RAI p, RAI a, RAI b, int n) {
    if (n < 2) {
        if (n == 0) {  // n == 1 would leave a fixed point, so only n == 0 counts
            for (int i = 0; i < N; ++i) p[b[i]] = a[i];
            if (false) {  // flip to true to print each derangement
                for (int i = 0; i < N; ++i) std::cout << ' ' << p[i];
                std::cout << '\n';
            } else {
                ++count;
            }
        }
        return;
    }
    for (int i = 0; i < n - 1; ++i) {
        // first recursive branch, contributing the !(n-1) term of the recurrence
        std::swap(a[i], a[n - 1]);
        derange(p, a, b, n - 1);
        std::swap(a[i], a[n - 1]);
        // second branch, contributing the !(n-2) term; rotate b to drop two slots
        int j = b[i];
        b[i] = b[n - 2];
        b[n - 2] = b[n - 1];
        b[n - 1] = j;
        std::swap(a[i], a[n - 2]);
        derange(p, a, b, n - 2);
        std::swap(a[i], a[n - 2]);
        j = b[n - 1];
        b[n - 1] = b[n - 2];
        b[n - 2] = b[i];
        b[i] = j;
    }
}

int main() {
    std::vector<int> p(N);
    clock_t begin = clock();
    std::vector<int> a(N);
    std::vector<int> b(N);
    for (int i = 0; i < N; ++i) a[i] = b[i] = i;
    derange(p.begin(), a.begin(), b.begin(), N);
    std::cout << count << " permutations in " << clock() - begin << " clocks for derange()\n";
    count = 0;
    begin = clock();
    for (int i = 0; i < N; ++i) p[i] = i;
    while (std::next_permutation(p.begin(), p.end())) {
        for (int i = 0; i < N; ++i) {
            if (p[i] == i) goto bad;
        }
        ++count;
bad:
        ;
    }
    std::cout << count << " permutations in " << clock() - begin << " clocks for next_permutation()\n";
}
On my machine, I get
176214841 permutations in 13741305 clocks for derange()
176214841 permutations in 14106430 clocks for next_permutation()
which IMHO is a wash. Probably there are improvements to be made on both sides (e.g., reimplement next_permutation with the derangement test that scans only the elements that changed); that's left as an exercise to the reader.
If you have access to C++ STL, use next_permutation, and do an additional check of a[i] != i in a do-while loop.
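For concreteness, here is a minimal sketch of that filter loop; N = 4 and printing each derangement are illustrative choices, not part of the original answer:
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    const int N = 4;
    std::vector<int> a(N);
    for (int i = 0; i < N; ++i) a[i] = i;
    // the do-while sees the identity first (rejected), then every other permutation
    do {
        bool fixedPoint = false;
        for (int i = 0; i < N; ++i)
            if (a[i] == i) { fixedPoint = true; break; }
        if (!fixedPoint) {
            for (int i = 0; i < N; ++i) std::cout << a[i] << ' ';
            std::cout << '\n';
        }
    } while (std::next_permutation(a.begin(), a.end()));
    return 0;
}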
If you want to avoid the filter approach that others have suggested (generate the permutations in lexicographic order and skip those with fixed points), then you should generate them based on cycle notation rather than one-line notation (discussion of notation).
The cycle-type of a permutation of n is a partition of n, that is, a weakly decreasing sequence of positive integers that sums to n. The condition that a permutation has no fixed points is equivalent to its cycle-type containing no 1s. For example, if n=5, then the possible cycle-types are
5
4,1
3,2
3,1,1
2,2,1
2,1,1,1
1,1,1,1,1
Of those, only 5 and 3,2 are valid for this problem since all others contain a 1. Therefore the strategy is to generate partitions with smallest part at least 2, then for each such partition, generate all permutations with that cycle-type.
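As a sketch of the first step only, here is one way to generate those partitions (weakly decreasing parts, every part at least 2); the recursion and names are mine, and turning each cycle-type into actual permutations is not shown:
#include <algorithm>
#include <iostream>
#include <vector>

// Print all partitions of n into weakly decreasing parts, each part >= 2.
void partitions(int n, int maxPart, std::vector<int>& parts) {
    if (n == 0) {
        for (size_t i = 0; i < parts.size(); ++i)
            std::cout << (i ? "," : "") << parts[i];
        std::cout << '\n';
        return;
    }
    for (int p = std::min(n, maxPart); p >= 2; --p) {
        if (n - p == 1) continue;  // would force a final part of 1
        parts.push_back(p);
        partitions(n - p, p, parts);
        parts.pop_back();
    }
}

int main() {
    std::vector<int> parts;
    partitions(5, 5, parts);  // prints: 5 and 3,2
}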
The permutations you are looking for are called derangements. As others have observed, uniformly randomly distributed derangements can be generated by generating uniformly randomly distributed permutations and then rejecting permutations that have fixed points (where a[i] == i). The rejection method runs in time e*n + o(n), where e ≈ 2.71828 is the base of the natural logarithm. An alternative algorithm similar to @Per's runs in time 2*n + O(log^2 n). However, the fastest algorithm I've been able to find, an early-rejection algorithm, runs in time (e-1)*(n-1). Instead of waiting for the permutation to be generated and then rejecting it (or not), the permutation is tested for fixed points while it is being constructed, allowing for rejection at the earliest possible moment. Here's my implementation of the early-rejection method for derangements in Java.
import java.util.Random;

// shared source of randomness for the method below
private static final Random rand = new Random();

public static int[] randomDerangement(int n)
        throws IllegalArgumentException {
    if (n < 2)
        throw new IllegalArgumentException("argument must be >= 2 but was " + n);
    int[] result = new int[n];
    boolean found = false;
    while (!found) {
        for (int i = 0; i < n; i++) result[i] = i;
        boolean fixed = false;
        for (int i = n - 1; i >= 0; i--) {
            int j = rand.nextInt(i + 1);
            if (i == result[j]) {  // would create a fixed point: reject early
                fixed = true;
                break;
            } else {
                int temp = result[i];
                result[i] = result[j];
                result[j] = temp;
            }
        }
        if (!fixed) found = true;
    }
    return result;
}
For an alternative approach, see my post at Shuffle list, ensuring that no item remains in same position.
Just a hunch: I think lexicographic permutation might be possible to modify to solve this.
Re-arrange the array 1,2,3,4,5,6,... by swapping pairs of odd and even elements into 2,1,4,3,6,5,... to construct the permutation with lowest lexicographic order. Then use the standard algorithm, with the additional constraint that you cannot swap element i into position i.
If the array has an odd number of elements, you will have to make another swap at the end to ensure that element N-1 is not in position N-1.
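To illustrate just the construction of that starting derangement (not the modified next-permutation step), a small sketch; lowestDerangement is a hypothetical helper name:
#include <algorithm>
#include <iostream>
#include <vector>

// Build the lexicographically smallest derangement of 0..n-1 by swapping
// adjacent pairs; for odd n the last element needs one extra swap.
std::vector<int> lowestDerangement(int n) {
    std::vector<int> p(n);
    for (int i = 0; i < n; ++i) p[i] = i;
    for (int i = 0; i + 1 < n; i += 2) std::swap(p[i], p[i + 1]);
    if (n % 2 == 1) std::swap(p[n - 2], p[n - 1]);  // fix the trailing fixed point
    return p;
}

int main() {
    std::vector<int> p = lowestDerangement(7);
    for (int x : p) std::cout << x << ' ';  // prints: 1 0 3 2 5 6 4
    std::cout << '\n';
}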
Here's a small recursive approach in Python:
def perm(array, permutation=[], i=1):
    if len(array) > 0:
        for element in array:
            if element != i:
                newarray = list(array)
                newarray.remove(element)
                newpermutation = list(permutation)
                newpermutation.append(element)
                perm(newarray, newpermutation, i + 1)
    else:
        print permutation
Running perm(range(1,5)) will give the following output:
[2, 1, 4, 3]
[2, 3, 4, 1]
[2, 4, 1, 3]
[3, 1, 4, 2]
[3, 4, 1, 2]
[3, 4, 2, 1]
[4, 1, 2, 3]
[4, 3, 1, 2]
[4, 3, 2, 1]
I'm going through an exercise to partition a set into K subsets with equal sum.
Let's say
Input : arr = [2, 1, 4, 5, 6], K = 3
Output : Yes
we can divide above array into 3 parts with equal
sum as [[2, 4], [1, 5], [6]]
I found a solution here,
http://www.geeksforgeeks.org/partition-set-k-subsets-equal-sum/
// C++ program to check whether an array can be
// partitioned into K subsets of equal sum
#include <bits/stdc++.h>
using namespace std;

// Recursive utility method to check the K equal-sum
// partition of the array
/**
 array     - given input array
 subsetSum - array storing the current sum of each subset
 taken     - boolean array recording whether an element has
             been taken into some partition
 subset    - the required sum (sum / K) of each subset
 K         - number of partitions needed
 N         - total number of elements in the array
 curIdx    - current subsetSum index
 limitIdx  - last index from which array elements may be taken */
bool isKPartitionPossibleRec(int arr[], int subsetSum[], bool taken[],
                             int subset, int K, int N, int curIdx, int limitIdx)
{
    if (subsetSum[curIdx] == subset)
    {
        /* current index (K - 2) represents (K - 1) subsets of equal
           sum; the last partition is already left with sum 'subset' */
        if (curIdx == K - 2)
            return true;
        // recursive call for the next partition
        return isKPartitionPossibleRec(arr, subsetSum, taken, subset,
                                       K, N, curIdx + 1, N - 1);
    }
    // start from limitIdx and include elements into the current partition
    for (int i = limitIdx; i >= 0; i--)
    {
        // if already taken, continue
        if (taken[i])
            continue;
        int tmp = subsetSum[curIdx] + arr[i];
        // include the element and recurse only if tmp does not
        // exceed the subset sum
        if (tmp <= subset)
        {
            // mark the element and include it in the current partition sum
            taken[i] = true;
            subsetSum[curIdx] += arr[i];
            bool nxt = isKPartitionPossibleRec(arr, subsetSum, taken,
                                               subset, K, N, curIdx, i - 1);
            // after the recursive call, unmark the element and remove it
            // from the partition sum
            taken[i] = false;
            subsetSum[curIdx] -= arr[i];
            if (nxt)
                return true;
        }
    }
    return false;
}

// Method returns true if arr can be partitioned into K subsets
// with equal sum
bool isKPartitionPossible(int arr[], int N, int K)
{
    // If K is 1, then the complete array is our answer
    if (K == 1)
        return true;
    // If the total number of partitions is more than N, then
    // division is not possible
    if (N < K)
        return false;
    // if the array sum is not divisible by K then we can't divide
    // the array into K partitions
    int sum = 0;
    for (int i = 0; i < N; i++)
        sum += arr[i];
    if (sum % K != 0)
        return false;
    // the sum of each subset should be subset (= sum / K)
    int subset = sum / K;
    int subsetSum[K];
    bool taken[N];
    // initialize the sum of each subset to 0
    for (int i = 0; i < K; i++)
        subsetSum[i] = 0;
    // mark all elements as not taken
    for (int i = 0; i < N; i++)
        taken[i] = false;
    // initialize the first subset sum as the last element of
    // the array and mark it as taken
    subsetSum[0] = arr[N - 1];
    taken[N - 1] = true;
    // call the recursive method to check the K-partition condition
    return isKPartitionPossibleRec(arr, subsetSum, taken,
                                   subset, K, N, 0, N - 1);
}

// Driver code to test the above methods
int main()
{
    int arr[] = {2, 1, 4, 5, 3, 3};
    int N = sizeof(arr) / sizeof(arr[0]);
    int K = 3;
    if (isKPartitionPossible(arr, N, K))
        cout << "Partitions into equal sum is possible.\n";
    else
        cout << "Partitions into equal sum is not possible.\n";
}
This works in all scenarios.
Question
Let's say I pass arr = [4, 4, 1, 3, 2, 3, 2, 1] and K = 4. The algorithm first tries adding 1+2+2, then 3+3 or 3+1, and so on. Those attempts don't yield a partition, and it finally settles on [[4,1], [4,1], [3,2], [3,2]]. I am not sure how this algorithm finds the alternative; I'm not able to follow the recursion.
What are the ways to solve it? Is backtracking the only way?
Thanks!
So, I don't have any clue how to solve this problem. The problem statement is:
Given a set S of N integers, the task is to decide if it is possible to
divide them into K non-empty subsets such that the sum of elements in
each of the K subsets is equal.
N can be at max 20. K can be at max 8
The problem is to be solved specifically using DP+Bitmasks!
I cannot understand where to start! As there are K sets to be maintained, I cannot keep K states, one for each of them!
If I try taking the whole set as one state and K as the other, I have issues creating a recurrence relation!
Can you help?
The link to the original problem: Problem
You can solve the problem in O(N * 2^N), so the K is meaningless for the complexity.
First let me warn you about the corner case N < K with all the numbers being zero, in which the answer is "no".
The idea of my algorithm is the following. Assume we have computed the sum of each of the masks (that can be done in O(2^N)). We know that for each of the groups, the sum should be the total sum divided by K.
We can do a DP with masks in which the state is just a binary mask telling which numbers have been used. The key idea in removing the K from the algorithm complexity is noticing that if we know which numbers have been used, we know the sum so far, so we also know which group we are filling now (current sum / group sum). Then just try to select the next number for the group: it will be valid if we do not exceed the group expected sum.
You can check my C++ code:
#include <iostream>
#include <vector>
#include <cstring>
using namespace std;

typedef long long ll;

ll v[21 + 5];
ll sum[(1 << 21) + 5];
ll group_sum;
int n, k;

void compute_sums(int position, ll current_sum, int mask)
{
    if (position == -1)
    {
        sum[mask] = current_sum;
        return;
    }
    compute_sums(position - 1, current_sum, mask << 1);
    compute_sums(position - 1, current_sum + v[position], (mask << 1) + 1);
}

void solve_case()
{
    cin >> n >> k;
    for (int i = 0; i < n; ++i)
        cin >> v[i];
    memset(sum, 0, sizeof(sum));
    compute_sums(n - 1, 0, 0);
    group_sum = sum[(1 << n) - 1];
    if (group_sum % k != 0)
    {
        cout << "no" << endl;
        return;
    }
    if (group_sum == 0)
    {
        if (n >= k)
            cout << "yes" << endl;
        else
            cout << "no" << endl;
        return;
    }
    group_sum /= k;
    vector<int> M(1 << n, 0);
    M[0] = 1;
    for (int mask = 0; mask < (1 << n); ++mask)
    {
        if (M[mask])
        {
            int current_group = sum[mask] / group_sum;
            for (int i = 0; i < n; ++i)
            {
                if ((mask >> i) & 1)
                    continue;
                if (sum[mask | (1 << i)] <= group_sum * (current_group + 1))
                    M[mask | (1 << i)] = 1;
            }
        }
    }
    if (M[(1 << n) - 1])
        cout << "yes" << endl;
    else
        cout << "no" << endl;
}

int main()
{
    int cases;
    cin >> cases;
    for (int z = 1; z <= cases; ++z)
        solve_case();
}
Here's a working O(K*2^N*N) implementation in JavaScript, from the pseudocode at https://discuss.codechef.com/questions/58420/sanskar-editorial
http://jsfiddle.net/d7q4o0nj/
function equality(set, size, count) {
    if (size < count) { return false; }
    var total = set.reduce(function(p, c) { return p + c; }, 0);
    if ((total % count) !== 0) { return false; }
    var subsetTotal = total / count;
    var search = {0: true};
    var nextSearch = {};
    for (var i = 0; i < count; i++) {
        for (var bits = 0; bits < (1 << size); bits++) {
            if (search[bits] !== true) { continue; }
            var sum = 0;
            for (var j = 0; j < size; j++) {
                if ((bits & (1 << j)) !== 0) { sum += set[j]; }
            }
            sum -= i * subsetTotal;
            for (var j = 0; j < size; j++) {
                if ((bits & (1 << j)) !== 0) { continue; }
                var testBits = bits | (1 << j);
                var tmpTotal = sum + set[j];
                if (tmpTotal == subsetTotal) { nextSearch[testBits] = true; }
                else if (tmpTotal < subsetTotal) { search[testBits] = true; }
            }
        }
        search = nextSearch;
        nextSearch = {};
    }
    if (search[(1 << size) - 1] === true) {
        return true;
    }
    return false;
}
console.log(true, equality([1,2,3,1,2,3], 6, 2));
console.log(true, equality([1, 2, 4, 5, 6], 5, 3));
console.log(true, equality([10,20,10,20,10,20,10,20,10,20], 10, 5));
console.log(false, equality([1,2,4,5,7], 5, 3));
EDIT: The algorithm finds all of the bitmasks (which represent subsets) that meet the criteria (having a sum tmpTotal less than or equal to the ideal subset sum subsetTotal). Repeating this process count times (the number of subsets required), you either end with a bitmask in which all size bits are set, which means success, or the test fails.
EXAMPLE
set = [1, 2, 1, 2]
size = 4
count = 2, we want to try to partition the set into 2 subsets
subsetTotal = (1+2+1+2) / 2 = 3
Iteration 1:
search = {0b: true, 1b: true, 10b: true, 100b: true, 1000b: true, 101b: true}
nextSearch = {11b: true, 1100b: true, 110b: true, 1001b: true }
Iteration 2:
search = {11b: true, 1100b: true, 110b: true, 1001b: true, 111b: true, 1101b: true }
nextSearch = {1111b: true}
Final Check
(1 << size) == 10000b, (1 << size) - 1 == 1111b
Since nextSearch[ 1111b ] exists we return success.
UPD: I confused N and K with each other; my idea is correct but not efficient. An efficient idea is added at the end.
Assume that so far you've created k-1 subsets, and now you want to create the k-th subset. For creating the k-th subset, you need to be able to answer these two questions:
1- What should be the sum of elements of k-th subset?
2- Which elements have been used so far ?
Answering the first question is easy, the sum should be equal to sum of all elements divided by K, let's name it subSum.
For second question, we need to have the state of each element, used or not. Here we need to use bitmask idea.
Here's the dp recurrence:
dp[i][mask] means: is it possible to create i subsets, each with sum equal to subSum, using the elements which are 1 (not used) in mask (in its bit representation)? So dp[i][mask] is a boolean.
dp[i][mask] = OR(dp[i-1][mask2]) over all possible mask2 states. mask2 is produced by converting some 1's of mask to 0's, i.e. those 1's that we want to be the elements of the i-th subset.
For checking all possible mask2, you need to check all 2^n possible subsets of the available 1 bits. Therefore, in total, the time complexity will be O(N*(2^n)*(2^n)). In your problem this is 20*2^8*2^8 = 10*2^17 < 10^7, which can pass the time limit.
Obviously, for the base case you have to handle dp[0][mask] on your own, without using the recurrence. The final answer is whether dp[K][2^N-1] is true or not.
UPD: To get better performance, before getting into the DP, you could preprocess all subsets with sum equal to subSum. Then, for calculating mask2, you just need to iterate over the preprocessed list and check whether ANDing each entry with mask yields that entry again (i.e. the entry is a subset of mask).
UPD2:
For an efficient solution, instead of searching for a proper mask2, we can use the fact that at each step we know the sum of the elements used so far. So we can add elements one by one into the mask, and whenever the sum so far is divisible by subSum, we can move on to creating the next subset:
if (sum of used elements of mask is divisible by subSum)
    dp[i][mask] = dp[i+1][mask];
else
    dp[i][mask] |= dp[i][mask ^ (1<<j)], provided that the j-th item is used in mask and does not push the current sum past i*subSum.
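Here is a compact C++ sketch of that UPD2 idea, collapsing dp[i][mask] into a single reachability array over masks, since the group index follows from the mask's sum. It assumes non-negative values; the function and variable names are mine:
#include <cstdio>
#include <vector>
using namespace std;

// reachable[mask] marks element subsets that fill some number of
// complete groups plus (possibly) one partial group.
bool canPartition(const vector<long long>& v, int k) {
    int n = v.size();
    long long total = 0;
    for (long long x : v) total += x;
    if (total % k != 0) return false;
    long long target = total / k;          // required sum of each group
    if (target == 0) return n >= k;        // all-zero corner case
    vector<long long> sum(1 << n, 0);
    vector<char> reachable(1 << n, 0);
    reachable[0] = 1;
    for (int mask = 0; mask < (1 << n); ++mask) {
        if (!reachable[mask]) continue;
        long long filled = sum[mask] % target;  // sum of the current partial group
        for (int i = 0; i < n; ++i) {
            if (mask & (1 << i)) continue;
            if (filled + v[i] <= target) {      // element fits in the current group
                sum[mask | (1 << i)] = sum[mask] + v[i];
                reachable[mask | (1 << i)] = 1;
            }
        }
    }
    return reachable[(1 << n) - 1];
}

int main() {
    vector<long long> v = {2, 1, 4, 5, 6};
    printf(canPartition(v, 3) ? "yes\n" : "no\n");  // yes: {2,4}, {1,5}, {6}
}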
There's an array A containing (positive and negative) integers. Find a (contiguous) subarray whose elements' absolute sum is minimal, e.g.:
A = [2, -4, 6, -3, 9]
|(−4) + 6 + (−3)| = 1 <- minimal absolute sum
I've started by implementing a brute-force algorithm which was O(N^2) or O(N^3), though it produced correct results. But the task specifies:
complexity:
- expected worst-case time complexity is O(N*log(N))
- expected worst-case space complexity is O(N)
After some searching I thought that maybe Kadane's algorithm can be modified to fit this problem but I failed to do it.
My question is - is Kadane's algorithm the right way to go? If not, could you point me in the right direction (or name an algorithm that could help me here)? I don't want a ready-made code, I just need help in finding the right algorithm.
If you compute the partial sums
such as
2, 2 +(-4), 2 + (-4) + 6, 2 + (-4) + 6 + (-3)...
Then the sum of any contiguous subarray is the difference of two of the partial sums. So to find the contiguous subarray whose absolute value is minimal, I suggest that you sort the partial sums and then find the two values which are closest together, and use the positions of these two partial sums in the original sequence to find the start and end of the sub-array with smallest absolute value.
The expensive bit here is the sort, so I think this runs in time O(n * log(n)).
This is a C++ implementation of Saksow's algorithm.
#include <algorithm>
#include <climits>
#include <vector>
using namespace std;

int solution(vector<int> &A) {
    vector<int> P;
    int min = INT_MAX;  // safer than a hard-coded sentinel like 20000
    int dif = 0;
    P.resize(A.size() + 1);
    P[0] = 0;
    // prefix sums: P[i] = A[0] + ... + A[i-1]
    for (int i = 1; i < P.size(); i++)
    {
        P[i] = P[i - 1] + A[i - 1];
    }
    sort(P.begin(), P.end());
    // after sorting, the closest pair of prefix sums are adjacent
    for (int i = 1; i < P.size(); i++)
    {
        dif = P[i] - P[i - 1];
        if (dif < min)
        {
            min = dif;
        }
    }
    return min;
}
I was doing this test on Codility and I found mcdowella's answer quite helpful, but not sufficient, I have to say: so here is a 2015 answer, guys!
We need to build the prefix sums of array A (called P here) like: P[0] = 0, P[1] = P[0] + A[0], P[2] = P[1] + A[1], ..., P[N] = P[N-1] + A[N-1]
The "min abs sum" of A will be the minimum absolute difference between 2 elements in P. So we just have to .sort() P and loop through it, taking 2 successive elements each time. This way we have O(N + N*log(N) + N), which equals O(N*log(N)).
That's it!
The answer is yes, Kadane's algorithm is definitely the way to go for solving your problem.
http://en.wikipedia.org/wiki/Maximum_subarray_problem
Source - I've closely worked with a PhD student whose entire PhD thesis was devoted to the maximum subarray problem.
def min_abs_subarray(a):
    s = [a[0]]
    for e in a[1:]:
        s.append(s[-1] + e)
    s = sorted(s)
    min = abs(s[0])
    t = s[0]
    for x in s[1:]:
        cur = abs(x)
        min = cur if cur < min else min
        cur = abs(t - x)
        min = cur if cur < min else min
        t = x
    return min
You can run Kadane's algorithm twice (or do it in one go) to find the minimum and maximum sums, where finding the minimum works the same way as the maximum with reversed signs, and then calculate the new maximum by comparing their absolute values.
Source: someone's comment on this site (I don't remember whose).
Here is an iterative solution in Python. It's 100% correct.
def solution(A):
    memo = []
    if not len(A):
        return 0
    for ind, val in enumerate(A):
        if ind == 0:
            memo.append([val, -1 * val])
        else:
            newElem = []
            for i in memo[ind - 1]:
                newElem.append(i + val)
                newElem.append(i - val)
            memo.append(newElem)
    return min(abs(n) for n in memo.pop())
Short, sweet, and works like a charm. A JavaScript / NodeJS solution:
function solution(A, i = 0, sum = 0) {
    // Edge case: the array is empty
    if (A.length == 0) return 0;
    // Base case: for the last array element, add to and subtract from sum
    // and find the min of their absolute values
    if (A.length - 1 === i) {
        return Math.min(Math.abs(sum + A[i]), Math.abs(sum - A[i]));
    }
    // Absolute value when adding the element to the sum,
    // recursively moving to the next element
    let plus = Math.abs(solution(A, i + 1, sum + A[i]));
    // Absolute value when subtracting the element from the sum
    let minus = Math.abs(solution(A, i + 1, sum - A[i]));
    return Math.min(plus, minus);
}
console.log(solution([-100, 3, 2, 4]))
Here is a C solution based on Kadane's algorithm.
Hopefully it's helpful.
#include <stdio.h>

int min(int a, int b)
{
    return (a >= b) ? b : a;
}

int min_slice(int A[], int N) {
    if (N == 0 || N > 1000000)
        return 0;
    int minTillHere = A[0];
    int minSoFar = A[0];
    int i;
    for (i = 1; i < N; i++) {
        minTillHere = min(A[i], minTillHere + A[i]);
        minSoFar = min(minSoFar, minTillHere);
    }
    return minSoFar;
}

int main() {
    int A[] = {3, 2, -6, 4, 0}, N = 5;
    //int A[] = {3, 2, 6, 4, 0}, N = 5;
    //int A[] = {-4, -8, -3, -2, -4, -10}, N = 6;
    printf("Minimum slice = %d \n", min_slice(A, N));
    return 0;
}
public static int solution(int[] A) {
    int minTillHere = A[0];
    int absMinTillHere = A[0];
    int minSoFar = A[0];
    int i;
    for (i = 1; i < A.length; i++) {
        absMinTillHere = Math.min(Math.abs(A[i]), Math.abs(minTillHere + A[i]));
        minTillHere = Math.min(A[i], minTillHere + A[i]);
        minSoFar = Math.min(Math.abs(minSoFar), absMinTillHere);
    }
    return minSoFar;
}
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n; cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++) cin >> a[i];
    long long local_min = 0, global_min = LLONG_MAX;
    for (int i = 0; i < n; i++)
    {
        if (abs(local_min + a[i]) > abs(a[i]))
        {
            local_min = a[i];
        }
        else local_min += a[i];
        global_min = min(global_min, abs(local_min));
    }
    cout << global_min << endl;
}
I've just done the following Codility Peaks problem. The problem is as follows:
A non-empty zero-indexed array A consisting of N integers is given.
A peak is an array element which is larger than its neighbors. More precisely, it is an index P such that 0 < P < N − 1, A[P − 1] < A[P] and A[P] > A[P + 1].
For example, the following array A:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
has exactly three peaks: 3, 5, 10.
We want to divide this array into blocks containing the same number of elements. More precisely, we want to choose a number K that will yield the following blocks:
A[0], A[1], ..., A[K − 1],
A[K], A[K + 1], ..., A[2K − 1],
...
A[N − K], A[N − K + 1], ..., A[N − 1].
What's more, every block should contain at least one peak. Notice that extreme elements of the blocks (for example A[K − 1] or A[K]) can also be peaks, but only if they have both neighbors (including one in an adjacent block).
The goal is to find the maximum number of blocks into which the array A can be divided.
Array A can be divided into blocks as follows:
one block (1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2). This block contains three peaks.
two blocks (1, 2, 3, 4, 3, 4) and (1, 2, 3, 4, 6, 2). Every block has a peak.
three blocks (1, 2, 3, 4), (3, 4, 1, 2), (3, 4, 6, 2). Every block has a peak.
Notice in particular that the first block (1, 2, 3, 4) has a peak at A[3], because A[2] < A[3] > A[4], even though A[4] is in the adjacent block.
However, array A cannot be divided into four blocks, (1, 2, 3), (4, 3, 4), (1, 2, 3) and (4, 6, 2), because the (1, 2, 3) blocks do not contain a peak. Notice in particular that the (4, 3, 4) block contains two peaks: A[3] and A[5].
The maximum number of blocks that array A can be divided into is three.
Write a function:
class Solution { public int solution(int[] A); }
that, given a non-empty zero-indexed array A consisting of N integers, returns the maximum number of blocks into which A can be divided.
If A cannot be divided into some number of blocks, the function should return 0.
For example, given:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [0..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N*log(log(N)))
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
My Question
So I solved this with what appears to me to be the brute-force solution: go through every group size from 1..N, and check whether every group has at least one peak. For the first 15 minutes I was trying to figure out some more optimal way, since the required complexity is O(N*log(log(N))).
This is my "brute-force" code that passes all the tests, including the large ones, for a score of 100/100:
public int solution(int[] A) {
    int N = A.length;
    ArrayList<Integer> peaks = new ArrayList<Integer>();
    for (int i = 1; i < N - 1; i++) {
        if (A[i] > A[i - 1] && A[i] > A[i + 1]) peaks.add(i);
    }
    for (int size = 1; size <= N; size++) {
        if (N % size != 0) continue;
        int find = 0;
        int groups = N / size;
        boolean ok = true;
        for (int peakIdx : peaks) {
            if (peakIdx / size > find) {
                ok = false;
                break;
            }
            if (peakIdx / size == find) find++;
        }
        if (find != groups) ok = false;
        if (ok) return groups;
    }
    return 0;
}
My question is how do I deduce that this is in fact O(N*log(log(N))), as it's not at all obvious to me, and I was surprised I pass the test cases. I'm looking for even the simplest complexity proof sketch that would convince me of this runtime. I would assume that a log(log(N)) factor means some kind of reduction of a problem by a square root on each iteration, but I have no idea how this applies to my problem. Thanks a lot for any help
You're completely right: to get the log log performance the problem needs to be reduced.
An n*log(log(n)) solution in Python is below. Codility no longer tests 'performance' on this problem (!) but the Python solution scores 100% for accuracy.
As you've already surmised:
Outer loop will be O(n) since it is testing whether each size of block is a clean divisor
Inner loop must be O(log(log(n))) to give O(n log(log(n))) overall.
We can get good inner-loop performance because the inner loop only runs for block sizes that divide n, i.e. d(n) times, where d(n) is the number of divisors of n. We can store a prefix sum of peaks-so-far, which uses the O(n) space allowed by the problem specification. Checking whether a peak has occurred in each 'group' is then an O(1) lookup operation using the group start and end indices.
Following this logic, when the candidate block size is 3 the loop needs to perform n / 3 peak checks. The complexity becomes a sum: n/a + n/b + ... + n/n where the denominators (a, b, ...) are the factors of n.
Short story: that sum is the sum-of-divisors function σ(n), and σ(n) is O(n*log(log(n))).
Longer version:
If you've been doing the Codility lessons, you'll remember from Lesson 8: Prime and composite numbers that summing a harmonic series of operations gives O(log(n)) complexity. We have a reduced set here, because we're only looking at factor denominators. Lesson 9: Sieve of Eratosthenes shows how the sum of reciprocals of primes is O(log(log(n))) and claims that 'the proof is non-trivial'. In this case Wikipedia tells us that the sum of divisors σ(n) has an upper bound (see Robin's inequality, about halfway down the page).
Does that completely answer your question? Suggestions on how to improve my python code are also very welcome!
def solution(data):
    length = len(data)
    # array ends can't be peaks, length < 3 must return 0
    if length < 3:
        return 0
    peaks = [0] * length
    # compute a list of 'peaks to the left' in O(n) time
    for index in range(2, length):
        peaks[index] = peaks[index - 1]
        # check if there was a peak to the left, add it to the count
        if data[index - 1] > data[index - 2] and data[index - 1] > data[index]:
            peaks[index] += 1
    # candidate is the block size we're going to test
    for candidate in range(3, length + 1):
        # skip if not a factor
        if length % candidate != 0:
            continue
        # test at each point n / block
        valid = True
        index = candidate
        while index != length:
            # if no peak in this block, break
            if peaks[index] == peaks[index - candidate]:
                valid = False
                break
            index += candidate
        # one additional check since peaks[length] is outside of array
        if index == length and peaks[index - 1] == peaks[index - candidate]:
            valid = False
        if valid:
            return length / candidate
    return 0
Credits:
Major kudos to @tmyklebu for his SO answer, which helped me a lot.
I don't think the time complexity of your algorithm is O(N*log(logN)).
However, it is certainly much less than O(N^2). This is because your inner loop is entered only k times, where k is the number of factors of N. The number of factors of an integer is discussed at this link: http://www.cut-the-knot.org/blue/NumberOfFactors.shtml
I may be inaccurate but from the link it seems,
k ~ logN * logN * logN ...
Also, the inner loop has a complexity of O(N) since the number of peaks can be N/2 in the worst case.
Hence, in my opinion, the complexity of your algorithm is more like O(N*logN), but that should still be sufficient to clear all test cases.
@radicality
There's at least one point where you can optimize the number of passes in the second loop to O(sqrt(N)) -- collect divisors of N and iterate through them only.
That will make your algo a little less "brute force".
Problem definition allows for O(N) space complexity. You can store divisors without violating this condition.
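For reference, a small sketch of collecting the divisors of N in O(sqrt(N)):
#include <algorithm>
#include <vector>

// Collect all divisors of n in O(sqrt(n)).
std::vector<int> divisors(int n) {
    std::vector<int> d;
    for (int i = 1; 1LL * i * i <= n; ++i) {
        if (n % i == 0) {
            d.push_back(i);
            if (i != n / i) d.push_back(n / i);
        }
    }
    std::sort(d.begin(), d.end());
    return d;
}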
This is my solution based on prefix sums. Hope it helps:
class Solution {
    public int solution(int[] A) {
        int n = A.length;
        int result = 1;
        if (n < 3)
            return 0;
        int[] prefixSums = new int[n];
        for (int i = 1; i < n - 1; i++)
            if (A[i] > A[i - 1] && A[i] > A[i + 1])
                prefixSums[i] = prefixSums[i - 1] + 1;
            else
                prefixSums[i] = prefixSums[i - 1];
        prefixSums[n - 1] = prefixSums[n - 2];
        if (prefixSums[n - 1] <= 1)
            return prefixSums[n - 1];
        for (int i = 2; i <= prefixSums[n - 2]; i++) {
            if (n % i != 0)
                continue;
            int prev = 0;
            boolean containsPeak = true;
            for (int j = n / i - 1; j < n; j += n / i) {
                if (prefixSums[j] == prev) {
                    containsPeak = false;
                    break;
                }
                prev = prefixSums[j];
            }
            if (containsPeak)
                result = i;
        }
        return result;
    }
}
def solution(A):
    length = len(A)
    if length <= 2:
        return 0
    peak_indexes = []
    for index in range(1, length - 1):
        if A[index] > A[index - 1] and A[index] > A[index + 1]:
            peak_indexes.append(index)
    for block in range(3, int((length / 2) + 1)):
        if length % block == 0:
            index_to_check = 0
            temp_blocks = 0
            for peak_index in peak_indexes:
                if peak_index >= index_to_check and peak_index < index_to_check + block:
                    temp_blocks += 1
                    index_to_check = index_to_check + block
            if length / block == temp_blocks:
                return temp_blocks
    if len(peak_indexes) > 0:
        return 1
    else:
        return 0

print(solution([1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2, 1, 2, 5, 2]))
I just found the factors first, then iterated over A and tested all numbers of blocks to see which gives the greatest block division.
This is the code that got 100 (in Java):
https://app.codility.com/demo/results/training9593YB-39H/
A javascript solution with complexity of O(N * log(log(N))).
function solution(A) {
    let N = A.length;
    if (N < 3) return 0;
    let peaks = 0;
    let peaksTillNow = [ 0 ];
    let dividers = [];
    for (let i = 1; i < N - 1; i++) {
        if (A[i - 1] < A[i] && A[i] > A[i + 1]) peaks++;
        peaksTillNow.push(peaks);
        if (N % i === 0) dividers.push(i);
    }
    peaksTillNow.push(peaks);
    if (peaks === 0) return 0;
    let blocks;
    let result = 1;
    for (blocks of dividers) {
        let K = N / blocks;
        let prevPeaks = 0;
        let OK = true;
        for (let i = 1; i <= blocks; i++) {
            if (peaksTillNow[i * K - 1] > prevPeaks) {
                prevPeaks = peaksTillNow[i * K - 1];
            } else {
                OK = false;
                break;
            }
        }
        if (OK) result = blocks;
    }
    return result;
}
Solution with C# code
public int GetPeaks(int[] InputArray)
{
    List<int> lstPeaks = new List<int>();
    lstPeaks.Add(0);
    for (int Index = 1; Index < (InputArray.Length - 1); Index++)
    {
        if (InputArray[Index - 1] < InputArray[Index] && InputArray[Index] > InputArray[Index + 1])
        {
            lstPeaks.Add(1);
        }
        else
        {
            lstPeaks.Add(0);
        }
    }
    lstPeaks.Add(0);
    int totalEqBlocksWithPeaks = 0;
    for (int factor = 1; factor <= InputArray.Length; factor++)
    {
        if (InputArray.Length % factor == 0)
        {
            int BlockLength = InputArray.Length / factor;
            int BlockCount = factor;
            bool isAllBlocksHasPeak = true;
            for (int CountIndex = 1; CountIndex <= BlockCount; CountIndex++)
            {
                int BlockStartIndex = CountIndex == 1 ? 0 : (CountIndex - 1) * BlockLength;
                int BlockEndIndex = (CountIndex * BlockLength) - 1;
                if (!(lstPeaks.GetRange(BlockStartIndex, BlockLength).Sum() > 0))
                {
                    isAllBlocksHasPeak = false;
                }
            }
            if (isAllBlocksHasPeak)
                totalEqBlocksWithPeaks++;
        }
    }
    return totalEqBlocksWithPeaks;
}
There is actually an O(n) runtime complexity solution for this task, so this is a humble attempt to share that.
The trick to go from the proposed O(n * loglogn) solutions to O(n) is to calculate the maximum gap between any two peaks (or a leading or trailing peak to the corresponding endpoint).
This can be done while building the peak hash in the first O(n) loop.
Then, if the gap is 'g' between two consecutive peaks, then the minimum group size must be 'g/2'. It will simply be 'g' between start and first peak, or last peak and end. Also, there will be at least one peak in any group from group size 'g', so the range to check for is: g/2, 1+g/2, 2+g/2, ... g.
Therefore, the runtime is the sum of n/d over the block sizes d = g/2, 1 + g/2, ..., g:
n/(g/2) + n/(1 + g/2) + ... + n/g
If g = 10, this is n/5 + n/6 + n/7 + n/8 + n/9 + n/10 = n*(1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10).
If you replace each term with the largest one, you get sum <= 6 * (n/5) < 2n.
Now, generalising this, every term is at most n/(g/2).
The number of terms from g/2 to g is 1 + g/2, since there are (g - g/2 + 1) of them.
So the whole sum is at most: (n/(g/2)) * (g/2 + 1) = n + 2n/g < 3n.
Therefore, the bound on the total number of operations is O(n).
The code, implementing this in C++, is here:
#include <bits/stdc++.h>
using namespace std;

int solution(vector<int> &A)
{
    int sizeA = A.size();
    vector<bool> hash(sizeA, false);
    int min_group_size = 2;
    int pi = 0;  // index of the previous peak (0 until the first peak is seen)
    for (int i = 1; i < sizeA - 1; ++i) {
        const int e = A[i];
        if (e > A[i - 1] && e > A[i + 1]) {
            hash[i] = true;
            int diff = i - pi;
            if (pi) diff /= 2;
            if (diff > min_group_size) min_group_size = diff;
            pi = i;
        }
    }
    min_group_size = min(min_group_size, sizeA - pi);
    vector<int> hash_next(sizeA, 0);
    for (int i = sizeA - 2; i >= 0; --i) {
        hash_next[i] = hash[i] ? i : hash_next[i + 1];
    }
    for (int group_size = min_group_size; group_size <= sizeA; ++group_size) {
        if (sizeA % group_size != 0) continue;
        int number_of_groups = sizeA / group_size;
        int group_index = 0;
        for (int peak_index = 0; peak_index < sizeA; peak_index = group_index * group_size) {
            peak_index = hash_next[peak_index];
            if (!peak_index) break;
            int lower_range = group_index * group_size;
            int upper_range = lower_range + group_size - 1;
            if (peak_index > upper_range) {
                break;
            }
            ++group_index;
        }
        if (number_of_groups == group_index) return number_of_groups;
    }
    return 0;
}
// wrapped in a function, with prev/curr initialised, so the snippet runs
function solution(A) {
    var prev = 0, curr = 0, total = 0;
    for (var i = 1; i < A.length; i++) {
        if (curr == 0) {
            curr = A[i];
        } else {
            if (A[i] != curr) {
                if (prev != 0) {
                    if ((prev < curr && A[i] < curr) || (prev > curr && A[i] > curr)) {
                        total += 1;
                    }
                } else {
                    prev = curr;
                    total += 1;
                }
                prev = curr;
                curr = A[i];
            }
        }
    }
    if (prev != curr) {
        total += 1;
    }
    return total;
}
I agree with GnomeDePlume's answer... the piece on looking for the divisors in the proposed solution is O(N), and that could be decreased to O(sqrt(N)) by using the algorithm provided in the lesson text.
So, just adding on, here is my solution using Java that solves the problem with the required complexity.
Be aware, it has way more code than yours; some cleanup (debug sysouts and comments) would always be possible :-)
public int solution(int[] A) {
    int result = 0;
    int N = A.length;
    // mark accumulated peaks
    int[] peaks = new int[N];
    int count = 0;
    for (int i = 1; i < N - 1; i++) {
        if (A[i - 1] < A[i] && A[i + 1] < A[i])
            count++;
        peaks[i] = count;
    }
    // set peaks count on last elem as it will be needed during div checks
    peaks[N - 1] = count;
    // check count
    if (count > 0) {
        // if only one peak, will need the whole array
        if (count == 1)
            result = 1;
        else {
            // at this point (peaks > 1) we know at least the single group will satisfy the criteria
            // so set result to 1, then check for bigger numbers of groups
            result = 1;
            // for each divisor of N, check if that number of groups work
            Integer[] divisors = getDivisors(N);
            // result will be at least 1 at this point
            boolean candidate;
            int divisor, startIdx, endIdx;
            // check from top value to bottom - stop when one is found
            // for div 1 we know num groups is 1, and we already know that is the minimum. No need to check.
            // for div = N we know it's impossible, as all elements would have to be peaks (impossible by definition)
            for (int i = divisors.length - 2; i > 0; i--) {
                candidate = true;
                divisor = divisors[i];
                for (int j = 0; j < N; j += N / divisor) {
                    startIdx = (j == 0 ? j : j - 1);
                    endIdx = j + N / divisor - 1;
                    if (peaks[startIdx] == peaks[endIdx]) {
                        candidate = false;
                        break;
                    }
                }
                // if all groups had at least 1 peak, this is the result!
                if (candidate) {
                    result = divisor;
                    break;
                }
            }
        }
    }
    return result;
}

// returns ordered array of all divisors of N
private Integer[] getDivisors(int N) {
    Set<Integer> set = new TreeSet<Integer>();
    double sqrt = Math.sqrt(N);
    int i = 1;
    for (; i < sqrt; i++) {
        if (N % i == 0) {
            set.add(i);
            set.add(N / i);
        }
    }
    if (i * i == N)
        set.add(i);
    return set.toArray(new Integer[]{});
}
Thanks,
Davi
Can you please help me solve this one?
You have an unordered array X of n integers. Find the array M containing n elements where Mi is the product of all integers in X except for Xi. You may not use division. You can use extra memory. (Hint: There are solutions faster than O(n^2).)
The basic ones, O(n^2) and the one using division, are easy. But I just can't find another solution that is faster than O(n^2).
Let left[i] be the product of all elements in X from 1..i. Let right[i] be the product of all elements in X from i..N. You can compute both in O(n) without division in the following way: left[i] = left[i - 1] * X[i] and right[i] = right[i + 1] * X[i];
Now we will compute M: M[i] = left[i - 1] * right[i + 1]
Note: left and right are arrays.
Hope it is clear:)
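A short C++ sketch of that idea, shifted to 0-indexed arrays so that left[i] holds the product of X[0..i-1] and right[i] the product of X[i+1..n-1], with empty products taken as 1:
#include <iostream>
#include <vector>
using namespace std;

vector<long long> productsExceptSelf(const vector<long long>& x) {
    int n = x.size();
    vector<long long> left(n, 1), right(n, 1), m(n, 1);
    for (int i = 1; i < n; ++i)
        left[i] = left[i - 1] * x[i - 1];       // product of x[0..i-1]
    for (int i = n - 2; i >= 0; --i)
        right[i] = right[i + 1] * x[i + 1];     // product of x[i+1..n-1]
    for (int i = 0; i < n; ++i)
        m[i] = left[i] * right[i];              // everything except x[i]
    return m;
}

int main() {
    vector<long long> m = productsExceptSelf({2, 1, 3, 5, 4});
    for (long long v : m) cout << v << ' ';  // prints: 60 120 40 24 30
    cout << '\n';
}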
Here's a solution in Python. I did the easy way with division to compare against the hard way without. Do I get the job?
L = [2, 1, 3, 5, 4]

prod = 1
for i in L: prod *= i
easy = map(lambda x: prod / x, L)
print easy

hard = [1] * len(L)
hmm = 1
for i in range(len(L) - 1):
    hmm *= L[i]
    hard[i + 1] *= hmm
huh = 1
for i in range(len(L) - 1, 0, -1):
    huh *= L[i]
    hard[i - 1] *= huh
print hard
O(n) - http://nbl.cewit.stonybrook.edu:60128/mediawiki/index.php/TADM2E_3.28
two passes -
int main(int argc, char **argv) {
    int array[] = {2, 5, 3, 4};
    int fwdprod[] = {1, 1, 1, 1};
    int backprod[] = {1, 1, 1, 1};
    int mi[] = {1, 1, 1, 1};
    int i, n = 4;
    for (i = 1; i <= n - 1; i++) {
        fwdprod[i] = fwdprod[i - 1] * array[i - 1];
    }
    for (i = n - 2; i >= 0; i--) {
        backprod[i] = backprod[i + 1] * array[i + 1];
    }
    for (i = 0; i <= n - 1; i++) {
        mi[i] = fwdprod[i] * backprod[i];
    }
    return 0;
}
Old but very cool. I've been asked this at an interview myself and have seen several solutions since, but this is my favorite, taken from
http://www.polygenelubricants.com/2010/04/on-all-other-products-no-division.html
static int[] products(int... nums) {
    final int N = nums.length;
    int[] prods = new int[N];
    java.util.Arrays.fill(prods, 1);
    for (int // pi----> * <----pj
            i = 0, pi = 1, j = N - 1, pj = 1;
            (i < N) & (j >= 0);
            pi *= nums[i++], pj *= nums[j--]) {
        prods[i] *= pi;
        prods[j] *= pj;
        System.out.println("pi up to this point is " + pi + "\n");
        System.out.println("pj up to this point is " + pj + "\n");
        System.out.println("prods[i]: " + prods[i] + " prods[j]: " + prods[j] + "\n");
    }
    return prods;
}
Here's what's going on: if you write out prods[i] for all the iterations, you'll see the following being calculated:
prods[0], prods[n-1]
prods[1], prods[n-2]
prods[2], prods[n-3]
prods[3], prods[n-4]
.
.
.
prods[n-3], prods[2]
prods[n-2], prods[1]
prods[n-1], prods[0]
so each prods[i] gets hit twice, once going from head to tail and once from tail to head, and both of these iterations accumulate the product as they traverse towards the center, so it's easy to see we'll get exactly what we need. We just need to be careful that each pass misses the element itself, and that's where it gets tricky. The key lies in the
pi *= nums[i++], pj *= nums[j--]
in the for loop's update expression rather than in the body; the update does not execute until the end of each iteration. So for
prods[0],
it starts at 1*1, and only afterwards does pi get set to 120, so prods[0] misses the first element.
For prods[1], it's 1 * 120 = 120, and afterwards pi gets set to 120*60,
and so on.
O(nlogn) approach:
int multiply(int arr[], int start, int end) {
int mid;
if (start > end) {
return 1;
}
if (start == end) {
return arr[start];
}
mid = (start+end)/2;
return (multiply(arr, start, mid)*multiply(arr, mid+1, end));
}
int compute_mi(int arr[], int i, int n) {
if ((i >= n) || (i < 0)) {
return 0;
}
return (multiply(arr, 0, i-1)*multiply(arr, i+1, n-1));
}
Here is my solution in Python: the easy way, but maybe with a high computational cost?
def product_list(x):
    ans = [p for p in range(len(x))]
    for i in range(0, len(x)):
        a = 1
        for j in range(0, len(x)):
            if i != j:
                a = a * x[j]
        ans[i] = a
    return ans