Suppose I was given an array of positive integers, like [2,19,6,16,5,10,7,4,11,6], and I wish to find
the biggest subset sum attainable from the array such that the sum is divisible by 3. I tried to solve it using dynamic programming:
let dp[i][j] be the biggest sum attainable up to index i in the array with remainder j, where j is
0, 1, or 2 since I am looking for a sum divisible by 3.
And I have the two implementations below:
// Version 1
int n = nums.length;
int[][] dp = new int[n+1][3];
dp[0][0] = 0;
dp[0][1] = Integer.MIN_VALUE;
dp[0][2] = Integer.MIN_VALUE;
for (int i = 1; i <= n; i++) {
    for (int j = 0; j < 3; j++) {
        int remain = nums[i-1] % 3;
        int remainder = (j + 3 - remain) % 3;
        dp[i][j] = Math.max(dp[i-1][remainder] + nums[i-1], dp[i-1][j]);
    }
}
return dp[n][0];
// Version 2
int n = nums.length;
int[][] dp = new int[n+1][3];
dp[0][0] = nums[0] % 3 == 0 ? nums[0] : Integer.MIN_VALUE;
dp[0][1] = nums[0] % 3 == 1 ? nums[0] : Integer.MIN_VALUE;
dp[0][2] = nums[0] % 3 == 2 ? nums[0] : Integer.MIN_VALUE;
for (int i = 1; i < n; i++) {
    for (int j = 0; j < 3; j++) {
        int remain = nums[i] % 3;
        int remainder = (j + 3 - remain) % 3;
        dp[i][j] = Math.max(dp[i-1][remainder] + nums[i], dp[i-1][j]);
    }
}
return dp[n-1][0] == Integer.MIN_VALUE ? 0 : dp[n-1][0];
Both implementations above are based on the fact that I either add nums[i] or not, and I add nums[i] to the table entry with the corresponding remainder before/after adding it, like knapsack DP. But the first version passes all test cases, while the second fails some of them. For example, on [2,19,6,16,5,10,7,4,11,6] it gives 81 instead of the correct answer 84. Can anyone explain why the second version is wrong?
The first version is calculating the largest subset sum divisible by 3; the second version calculates the largest sum divisible by 3 of a subset that includes nums[0], the first element.
The sole difference in the two versions is the base case for dynamic programming. The first version has the correct base cases: after processing zero elements, the only subset sum possible is zero. In the second version, the base case starts at 1, and implies that after processing one element, the only subset sum possible is the one containing that first element. All future subset sums are forced to use that element.
Try running the code on the array [1, 3]. The second version will return zero, because it never considers subsets that exclude the 1.
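If you want to keep the second version's 0-based layout, a minimal sketch of a fix (mine, not from the original post) is to let the base row also admit the empty subset, which has sum 0 and remainder 0:

int[][] dp = new int[n][3];
// remainder 0 is reachable even without nums[0], via the empty subset
dp[0][0] = nums[0] % 3 == 0 ? nums[0] : 0;
dp[0][1] = nums[0] % 3 == 1 ? nums[0] : Integer.MIN_VALUE;
dp[0][2] = nums[0] % 3 == 2 ? nums[0] : Integer.MIN_VALUE;
// the loops and the final return stay exactly as in the second version

With that base row, [1, 3] yields 3, and [2,19,6,16,5,10,7,4,11,6] yields 84 (drop the 2, since 86 mod 3 = 2 and 2 mod 3 = 2).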
How do I calculate the number of unordered pairs in an array whose bitwise AND is a power of 2? For example, if the array is [10,7,2,8,3], the answer is 6.
Explanation (0-based indexes):
a[0]&a[1] = 2
a[0]&a[2] = 2
a[0]&a[3] = 8
a[0]&a[4] = 2
a[1]&a[2] = 2
a[2]&a[4] = 2
The only approach that comes to my mind is brute force. How do I optimize it to run in O(n) or O(n*log(n))?
The array can have at most 10^5 elements, and the values can be up to 10^12.
Here is the brute-force code that I tried:
int ans = 0;
for (int i = 0; i < a.length; i++) {
    for (int j = i + 1; j < a.length; j++) {
        long and = a[i] & a[j];
        if ((and & (and - 1)) == 0 && and != 0)
            ans++;
    }
}
System.out.println(ans);
Although this answer is for a smaller range constraint (possibly suited up to about 2^20), I thought I'd include it since it may contain some useful information.
We can adapt the bit-subset dynamic programming idea to have a solution with O(2^N * N^2 + n * N) complexity, where N is the number of bits in the range, and n is the number of elements in the list. (So if the integers were restricted to [1, 1048576] or 2^20, with n at 100,000, we would have on the order of 2^20 * 20^2 + 100000*20 = 421,430,400 iterations.)
The idea is that we want to count instances for which we have overlapping bit subsets, with the twist of adding a fixed set bit. Given Ai -- for simplicity, take 6 = b110 -- if we were to find all partners that AND to zero, we'd take Ai's negation,
110 -> ~110 -> 001
Now we can build a dynamic program that takes a diminishing mask, starting with the full number and diminishing the mask towards the left
001
^^^
001
^^
001
^
Each set bit in the negation of Ai represents a zero in Ai, which can be ANDed with either 1 or 0 to the same effect. Each unset bit in the negation of Ai represents a set bit in Ai, which we'd like to pair only with zeros, except for a single set bit.
We construct this set bit by examining each possibility separately. So, whereas to count pairs that AND with Ai to zero we'd do something like
001 ->
001
000
we now want to enumerate
011 ->
011
010
101 ->
101
100
fixing a single bit each time.
We can achieve this by adding a dimension to the inner iteration. When the mask does have a set bit at the end, we "fix" the relevant bit by counting only the result for the previous DP cell that would have the bit set, and not the usual union of subsets that could either have that bit set or not.
Here is some JavaScript code to demonstrate, with testing at the end that compares against the brute-force solution.
var debug = 0;

function bruteForce(a) {
    let answer = 0;
    for (let i = 0; i < a.length; i++) {
        for (let j = i + 1; j < a.length; j++) {
            let and = a[i] & a[j];
            if ((and & (and - 1)) == 0 && and != 0) {
                answer++;
                if (debug)
                    console.log(a[i], a[j], a[i].toString(2), a[j].toString(2));
            }
        }
    }
    return answer;
}

function f(A, N) {
    const n = A.length;
    const hash = {};
    const dp = new Array(1 << N);

    for (let i = 0; i < 1 << N; i++) {
        dp[i] = new Array(N + 1);
        for (let j = 0; j < N + 1; j++)
            dp[i][j] = new Array(N + 1).fill(0);
    }

    for (let i = 0; i < n; i++) {
        if (hash.hasOwnProperty(A[i]))
            hash[A[i]] = hash[A[i]] + 1;
        else
            hash[A[i]] = 1;
    }

    for (let mask = 0; mask < 1 << N; mask++) {
        // j is an index where we fix a 1
        for (let j = 0; j <= N; j++) {
            if (mask & 1) {
                if (j == 0)
                    dp[mask][j][0] = hash[mask] || 0;
                else
                    dp[mask][j][0] = (hash[mask] || 0) + (hash[mask ^ 1] || 0);
            } else {
                dp[mask][j][0] = hash[mask] || 0;
            }

            for (let i = 1; i <= N; i++) {
                if (mask & (1 << i)) {
                    if (j == i)
                        dp[mask][j][i] = dp[mask][j][i-1];
                    else
                        dp[mask][j][i] = dp[mask][j][i-1] + dp[mask ^ (1 << i)][j][i - 1];
                } else {
                    dp[mask][j][i] = dp[mask][j][i-1];
                }
            }
        }
    }

    let answer = 0;

    for (let i = 0; i < n; i++) {
        for (let j = 0; j < N; j++)
            if (A[i] & (1 << j))
                answer += dp[((1 << N) - 1) ^ A[i] | (1 << j)][j][N];
    }

    for (let i = 0; i < N + 1; i++)
        if (hash[1 << i])
            answer = answer - hash[1 << i];

    return answer / 2;
}

var As = [
    [5, 4, 1, 6], // 4
    [10, 7, 2, 8, 3], // 6
    [2, 3, 4, 5, 6, 7, 8, 9, 10],
    [1, 6, 7, 8, 9]
];

for (let A of As) {
    console.log(JSON.stringify(A));
    console.log(`DP, brute force: ${ f(A, 4) }, ${ bruteForce(A) }`);
    console.log('');
}

var numTests = 1000;

for (let i = 0; i < numTests; i++) {
    const N = 6;
    const A = [];
    const n = 10;
    for (let j = 0; j < n; j++) {
        const num = Math.floor(Math.random() * (1 << N));
        A.push(num);
    }
    const fA = f(A, N);
    const brute = bruteForce(A);
    if (fA != brute) {
        console.log('Mismatch:');
        console.log(A);
        console.log(fA, brute);
        console.log('');
    }
}

console.log("Done testing.");
Transform your array of values into an array of index sets, where each set corresponds to a particular bit and contains the indexes of the values in the original array that have that bit set. For example, your example array A = [10,7,2,8,3] becomes B = [{1,4}, {0,1,2,4}, {1}, {0,3}]. A fixed-size array of bitvectors is an ideal data structure for this, as it makes set union/intersection/set-minus relatively easy and efficient.
Once you have that array of sets B (which takes O(nm) time, where m is the size of your integers in bits), iterate over every element i of A again and compute, for each bit j with i in B[j], the size of

    B[j] \ {i} \ (union of B[k] over all k != j with i in B[k])

Add those all together and divide by 2, and that should be the number of pairs (the "divide by 2" is because this counts each pair twice: what it actually counts is, for each number, how many numbers it pairs with). It should take only O(nm^2), assuming you count the set-minus operations as O(1) -- if you count them as O(n), then you're back to O(n^2), but at least your constant factor should be small if you have efficient bitsets.
Pseudocode:
foreach A[i] in A:
    foreach bit in A[i]:
        B[bit] += {i}

pairs = 0
foreach A[i] in A:
    foreach B[j] in B:
        if i in B[j]:
            tmp = B[j] - {i}
            foreach B[k] in B:
                if k != j && i in B[k]:
                    tmp -= B[k]
            pairs += |tmp|
return pairs/2
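For concreteness, here is a runnable Java rendering of that pseudocode (my sketch, not the answerer's code; java.util.BitSet plays the role of the bitvectors, and the 41-bit width is my choice to cover values up to 10^12):

import java.util.BitSet;

public class PowerOfTwoAndPairs {
    static long countPairs(long[] a) {
        final int M = 41; // enough bits for values up to 10^12
        BitSet[] B = new BitSet[M];
        for (int b = 0; b < M; b++) B[b] = new BitSet(a.length);
        for (int i = 0; i < a.length; i++)
            for (int b = 0; b < M; b++)
                if (((a[i] >> b) & 1) == 1) B[b].set(i);

        long pairs = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < M; j++) {
                if (!B[j].get(i)) continue;          // bit j is not set in a[i]
                BitSet tmp = (BitSet) B[j].clone();  // partners sharing bit j
                tmp.clear(i);                        // a number can't pair with itself
                for (int k = 0; k < M; k++)          // drop partners sharing any other bit of a[i]
                    if (k != j && B[k].get(i)) tmp.andNot(B[k]);
                pairs += tmp.cardinality();
            }
        }
        return pairs / 2; // each unordered pair was counted from both endpoints
    }

    public static void main(String[] args) {
        System.out.println(countPairs(new long[]{10, 7, 2, 8, 3})); // prints 6
    }
}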
I am implementing the approach described in this question for the same problem, but I don't think it is working.
For those who don't want to go through the mathematics there, here is the algebra in gist:
Average = Sum(S1)/n(S1) = Sum(S2)/n(S2) = Sum(Total)/n(Total)
where n() stands for the number of elements in the array and Sum() stands for the cumulative sum.
S1 and S2 are mutually exclusive subsets of the array Total. Thus, to find the required subset where this condition holds, we look for Sum(S1) = Sum(Total) * n(S1)/n(Total).
My approach:
#include <bits/stdc++.h>
using namespace std;

bool SubsetSum(vector<int> &A, int Sum)
{
    bool dp[Sum+1][A.size()+1];
    int i, j;
    for(i=0; i<= A.size(); i++)
        dp[0][i] = false; // When sum = 0
    for(i=0; i<=Sum; i++)
        dp[i][0] = 1; // When num of elements = 0
    for(i = 1; i <= A.size(); i++)
    {
        for(j=1; j<= Sum; j++)
        {
            dp[i][j] = dp[i-1][j];
            if(j-A[i-1] >= 0)
                dp[i][j] = dp[i][j] || dp[i-1][j-A[i-1]];
        }
    }
    return dp[Sum][A.size()];
}

void avgset(vector<int> &A) {
    int total = accumulate(A.begin(), A.end(), 0); // Cumulative sum of the vector A
    int ntotal = A.size(); // Total number of elements
    int i;
    for(i=1; i<=ntotal; i++) // Subset size can be anything between 1 and the number of elements
    {
        if((total * i) % ntotal == 0)
        {
            if(SubsetSum(A, (total * i)/ntotal)) // Required subset sum = (total * i)/ntotal
                cout<<"Array can be broken into 2 arrays each with equal average of "<<(total * i)/ntotal<<endl;
        }
    }
}

int main()
{
    vector<int> A = {1, 7, 15, 29, 11, 9};
    avgset(A);
    return 0;
}
This code outputs:
Array can be broken into 2 arrays each with equal average of 12
Array can be broken into 2 arrays each with equal average of 36
Array can be broken into 2 arrays each with equal average of 60
But these answers are wrong.
For example, when subset sum = 12, the corresponding elements will be {11, 1}. Then:
(11 + 1)/2 != (7 + 15 + 29 + 9)/4
Have I misunderstood something here?
"Have I misunderstood something here?"
Seems you did.
For the given array, every average involved must equal 12: the total average, the first subset's average, and the second subset's average.
So you have to check:
a 1-element subset with sum 12 - does not exist
a 2-element subset with sum 24 - does exist: 9 + 15
a 3-element subset with sum 36 - does not exist
and there is no need to check subsets with more than n/2 elements (their complements are smaller and have already been checked).
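To see that concretely, here is a tiny brute-force check (my illustrative Java, not part of the original answer) over all subsets of {1, 7, 15, 29, 11, 9}:

public class AvgSetCheck {
    public static void main(String[] args) {
        int[] a = {1, 7, 15, 29, 11, 9};
        for (int mask = 1; mask < (1 << a.length); mask++) {
            int sum = 0, cnt = Integer.bitCount(mask);
            for (int i = 0; i < a.length; i++)
                if ((mask & (1 << i)) != 0) sum += a[i];
            // a k-element part with the overall average 12 must sum to 12 * k
            if (cnt <= 3 && sum == 12 * cnt)
                System.out.println("size " + cnt + " works with sum " + sum);
        }
        // prints exactly once: "size 2 works with sum 24" (the subset {9, 15})
    }
}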
The number of elements in the subset needs to be tracked as well, and checking subset sizes up to n/2 is enough. There are other errors too. Code as below:
bool SubsetSum(vector<int> &A, int number, int Sum)
{
    bool dp[Sum+1][A.size()+1];
    int i, j;
    for (i = 0; i <= A.size(); i++)
        for (j = 0; j <= Sum; j++)
            dp[j][i] = false;        // nothing is reachable yet
    dp[0][0] = true;                 // sum 0 is achievable with 0 elements
    for (i = 1; i <= A.size(); i++)
    {
        for (j = Sum; j >= A[i-1]; j--)
        {
            for (int k = A.size(); k > 0; k--)
                dp[j][k] = dp[j][k] || dp[j-A[i-1]][k-1];
        }
    }
    return dp[Sum][number];
}

void avgset(vector<int> &A) {
    int total = accumulate(A.begin(), A.end(), 0); // Cumulative sum of the vector A
    int ntotal = A.size();                         // Total number of elements
    int i;
    for (i = 1; i <= ntotal/2; i++)  // Subset size need only range from 1 to half the number of elements
    {
        if ((total * i) % ntotal == 0)
        {
            if (SubsetSum(A, i, (total * i)/ntotal)) // Required subset sum = (total * i)/ntotal
                cout << "Array can be broken into 2 arrays each with equal average of " << (total * i)/ntotal << endl;
        }
    }
}
void avgset(vector<int> &A) {
int total = accumulate(A.begin(), A.end(), 0); // Cumulative sum of the vector A
int ntotal = A.size(); // Total number of elements
int i;
for(i=1; i<=ntotal/2; i++) // Subset size can be anything between 1 to the number of elements in the total subset
{
if((total * i) % ntotal == 0)
{
if(SubsetSum(A, i, (total * i)/ntotal)) // Required subset sum = total * i)/ntotal
cout<<"Array can be broken into 2 arrays each with equal average of "<<(total * i)/ntotal<<endl;
}
}
}
output:
Array can be broken into 2 arrays each with equal average of 24
I've just done the following Codility Peaks problem. The problem is as follows:
A non-empty zero-indexed array A consisting of N integers is given.
A peak is an array element which is larger than its neighbors. More precisely, it is an index P such that 0 < P < N − 1, A[P − 1] < A[P] and A[P] > A[P + 1].
For example, the following array A:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
has exactly three peaks: 3, 5, 10.
We want to divide this array into blocks containing the same number of elements. More precisely, we want to choose a number K that will yield the following blocks:
A[0], A[1], ..., A[K − 1],
A[K], A[K + 1], ..., A[2K − 1],
...
A[N − K], A[N − K + 1], ..., A[N − 1].
What's more, every block should contain at least one peak. Notice that extreme elements of the blocks (for example A[K − 1] or A[K]) can also be peaks, but only if they have both neighbors (including one in an adjacent block).
The goal is to find the maximum number of blocks into which the array A can be divided.
Array A can be divided into blocks as follows:
one block (1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2). This block contains three peaks.
two blocks (1, 2, 3, 4, 3, 4) and (1, 2, 3, 4, 6, 2). Every block has a peak.
three blocks (1, 2, 3, 4), (3, 4, 1, 2), (3, 4, 6, 2). Every block has a peak.
Notice in particular that the first block (1, 2, 3, 4) has a peak at A[3], because A[2] < A[3] > A[4], even though A[4] is in the adjacent block.
However, array A cannot be divided into four blocks, (1, 2, 3), (4, 3, 4), (1, 2, 3) and (4, 6, 2), because the (1, 2, 3) blocks do not contain a peak. Notice in particular that the (4, 3, 4) block contains two peaks: A[3] and A[5].
The maximum number of blocks that array A can be divided into is three.
Write a function:
class Solution { public int solution(int[] A); }
that, given a non-empty zero-indexed array A consisting of N integers, returns the maximum number of blocks into which A can be divided.
If A cannot be divided into some number of blocks, the function should return 0.
For example, given:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [0..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N*log(log(N)))
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
My Question
So I solved this with what appears to me to be the brute-force solution: go through every group size from 1..N and check whether every group has at least one peak. For the first 15 minutes I tried to figure out some more optimal way, since the required complexity is O(N*log(log(N))).
This is my "brute-force" code that passes all the tests, including the large ones, for a score of 100/100:
public int solution(int[] A) {
    int N = A.length;
    ArrayList<Integer> peaks = new ArrayList<Integer>();
    for (int i = 1; i < N-1; i++) {
        if (A[i] > A[i-1] && A[i] > A[i+1]) peaks.add(i);
    }
    for (int size = 1; size <= N; size++) {
        if (N % size != 0) continue;
        int find = 0;
        int groups = N/size;
        boolean ok = true;
        for (int peakIdx : peaks) {
            if (peakIdx/size > find) {
                ok = false;
                break;
            }
            if (peakIdx/size == find) find++;
        }
        if (find != groups) ok = false;
        if (ok) return groups;
    }
    return 0;
}
My question is: how do I deduce that this is in fact O(N*log(log(N)))? It's not at all obvious to me, and I was surprised to pass the test cases. I'm looking for even the simplest complexity proof sketch that would convince me of this runtime. I would assume that a log(log(N)) factor means some kind of reduction of the problem by a square root on each iteration, but I have no idea how that applies here. Thanks a lot for any help!
You're completely right: to get the log log performance the problem needs to be reduced.
An O(N*log(log(N))) solution in Python is below. Codility no longer tests 'performance' on this problem (!), but the Python solution scores 100% for accuracy.
As you've already surmised:
Outer loop will be O(n) since it is testing whether each size of block is a clean divisor
Inner loop must be O(log(log(n))) to give O(n log(log(n))) overall.
We can get good inner-loop performance because we only need to perform the check for d(n) candidate sizes, where d(n) is the number of divisors of n. We can store a prefix sum of peaks-so-far, which uses the O(n) space allowed by the problem specification. Checking whether a peak has occurred in each 'group' is then an O(1) lookup operation using the group start and end indices.
Following this logic, when the candidate block size is 3 the loop needs to perform n / 3 peak checks. The complexity becomes a sum: n/a + n/b + ... + n/n where the denominators (a, b, ...) are the factors of n.
Short story: the total number of operations, summed over all d(n) divisor checks, is O(n*log(log(n))).
Longer version:
If you've been doing the Codility lessons, you'll remember from Lesson 8: Prime and composite numbers that the sum of harmonic-number operations gives O(log(n)) complexity. We have a reduced set here, because we're only looking at factor denominators. Lesson 9: Sieve of Eratosthenes shows how the sum of reciprocals of primes is O(log(log(n))) and notes that 'the proof is non-trivial'. In this case Wikipedia tells us that the sum of divisors, sigma(n), has an upper bound of O(n*log(log(n))) (see Robin's inequality, about halfway down the page).
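Spelled out (my sketch of the bound the answer appeals to), using the pairing d <-> n/d between divisors of n, the total number of peak checks is

    sum over d | n of n/d  =  sum over d | n of d  =  sigma(n)  <  e^gamma * n * ln(ln(n))   for n > 5040,

which is the claimed O(n*log(log(n))); the last inequality is Robin's.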
Does that completely answer your question? Suggestions on how to improve my python code are also very welcome!
def solution(data):
    length = len(data)

    # array ends can't be peaks, length < 3 must return 0
    if length < 3:
        return 0

    peaks = [0] * length

    # compute a list of 'peaks to the left' in O(n) time
    for index in range(2, length):
        peaks[index] = peaks[index - 1]
        # check if there was a peak to the left, add it to the count
        if data[index - 1] > data[index - 2] and data[index - 1] > data[index]:
            peaks[index] += 1

    # candidate is the block size we're going to test
    for candidate in range(3, length + 1):
        # skip if not a factor
        if length % candidate != 0:
            continue

        # test at each point n / block
        valid = True
        index = candidate
        while index != length:
            # if no peak in this block, break
            if peaks[index] == peaks[index - candidate]:
                valid = False
                break
            index += candidate

        # one additional check since peaks[length] is outside of array
        if index == length and peaks[index - 1] == peaks[index - candidate]:
            valid = False

        if valid:
            return length // candidate

    return 0
Credits:
Major kudos to #tmyklebu for his SO answer which helped me a lot.
I don't think that the time complexity of your algorithm is O(N*log(log(N))).
However, it is certainly much less than O(N^2). This is because your inner loop is entered only k times, where k is the number of factors of N. The number of factors of an integer is discussed at this link: http://www.cut-the-knot.org/blue/NumberOfFactors.shtml
I may be inaccurate, but from the link it seems that
k ~ logN * logN * logN ...
Also, the inner loop has a complexity of O(N), since the number of peaks can be N/2 in the worst case.
Hence, in my opinion, the complexity of your algorithm is O(N*logN) at best, but it should be sufficient to clear all the test cases.
@radicality
There's at least one point where you can optimize the number of passes in the second loop to O(sqrt(N)) -- collect the divisors of N and iterate through them only.
That will make your algo a little less "brute force".
The problem definition allows for O(N) space complexity, so you can store the divisors without violating that condition.
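For illustration, collecting the divisors in O(sqrt(N)) could look like the sketch below (my code, not the poster's; the class and method names are made up):

import java.util.ArrayList;
import java.util.List;

class Divisors {
    // All divisors of n, in ascending order, in O(sqrt(n)) time.
    static List<Integer> divisorsOf(int n) {
        List<Integer> small = new ArrayList<>();
        List<Integer> large = new ArrayList<>();
        for (int d = 1; (long) d * d <= n; d++) {
            if (n % d == 0) {
                small.add(d);                      // d <= sqrt(n)
                if (d != n / d) large.add(n / d);  // its partner > sqrt(n)
            }
        }
        for (int i = large.size() - 1; i >= 0; i--)
            small.add(large.get(i));
        return small;
    }
}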
This is my solution based on prefix sums. Hope it helps:
class Solution {
    public int solution(int[] A) {
        int n = A.length;
        int result = 1;
        if (n < 3)
            return 0;
        int[] prefixSums = new int[n];
        for (int i = 1; i < n-1; i++)
            if (A[i] > A[i-1] && A[i] > A[i+1])
                prefixSums[i] = prefixSums[i-1] + 1;
            else
                prefixSums[i] = prefixSums[i-1];
        prefixSums[n-1] = prefixSums[n-2];
        if (prefixSums[n-1] <= 1)
            return prefixSums[n-1];
        for (int i = 2; i <= prefixSums[n-2]; i++) {
            if (n % i != 0)
                continue;
            int prev = 0;
            boolean containsPeak = true;
            for (int j = n/i - 1; j < n; j += n/i) {
                if (prefixSums[j] == prev) {
                    containsPeak = false;
                    break;
                }
                prev = prefixSums[j];
            }
            if (containsPeak)
                result = i;
        }
        return result;
    }
}
def solution(A):
    length = len(A)
    if length <= 2:
        return 0
    peak_indexes = []
    for index in range(1, length - 1):
        if A[index] > A[index - 1] and A[index] > A[index + 1]:
            peak_indexes.append(index)
    for block in range(3, length // 2 + 1):
        if length % block == 0:
            index_to_check = 0
            temp_blocks = 0
            for peak_index in peak_indexes:
                if peak_index >= index_to_check and peak_index < index_to_check + block:
                    temp_blocks += 1
                    index_to_check = index_to_check + block
            if length // block == temp_blocks:
                return temp_blocks
    if len(peak_indexes) > 0:
        return 1
    else:
        return 0

print(solution([1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2, 1, 2, 5, 2]))
I just found the factors first, then iterated over A and tested each number of blocks to see which gives the greatest block division.
This is the code that got 100 (in Java):
https://app.codility.com/demo/results/training9593YB-39H/
A JavaScript solution with complexity O(N * log(log(N))).
function solution(A) {
    let N = A.length;
    if (N < 3) return 0;
    let peaks = 0;
    let peaksTillNow = [ 0 ];
    let dividers = [];
    for (let i = 1; i < N - 1; i++) {
        if (A[i - 1] < A[i] && A[i] > A[i + 1]) peaks++;
        peaksTillNow.push(peaks);
        if (N % i === 0) dividers.push(i);
    }
    peaksTillNow.push(peaks);
    if (peaks === 0) return 0;
    let blocks;
    let result = 1;
    for (blocks of dividers) {
        let K = N / blocks;
        let prevPeaks = 0;
        let OK = true;
        for (let i = 1; i <= blocks; i++) {
            if (peaksTillNow[i * K - 1] > prevPeaks) {
                prevPeaks = peaksTillNow[i * K - 1];
            } else {
                OK = false;
                break;
            }
        }
        if (OK) result = blocks;
    }
    return result;
}
Solution with C# code
public int GetPeaks(int[] InputArray)
{
    List<int> lstPeaks = new List<int>();
    lstPeaks.Add(0);
    for (int Index = 1; Index < (InputArray.Length - 1); Index++)
    {
        if (InputArray[Index - 1] < InputArray[Index] && InputArray[Index] > InputArray[Index + 1])
        {
            lstPeaks.Add(1);
        }
        else
        {
            lstPeaks.Add(0);
        }
    }
    lstPeaks.Add(0);
    int maxBlocksWithPeaks = 0;
    for (int factor = 1; factor <= InputArray.Length; factor++)
    {
        if (InputArray.Length % factor == 0)
        {
            int BlockLength = InputArray.Length / factor;
            int BlockCount = factor;
            bool isAllBlocksHasPeak = true;
            for (int CountIndex = 1; CountIndex <= BlockCount; CountIndex++)
            {
                int BlockStartIndex = (CountIndex - 1) * BlockLength;
                if (!(lstPeaks.GetRange(BlockStartIndex, BlockLength).Sum() > 0))
                {
                    isAllBlocksHasPeak = false;
                    break;
                }
            }
            // factor increases, so assigning (rather than counting) keeps the
            // maximum number of blocks for which every block contains a peak
            if (isAllBlocksHasPeak)
                maxBlocksWithPeaks = factor;
        }
    }
    return maxBlocksWithPeaks;
}
There is actually an O(n) runtime complexity solution for this task, so this is a humble attempt to share that.
The trick to go from the proposed O(n * loglogn) solutions to O(n) is to calculate the maximum gap between any two peaks (or a leading or trailing peak to the corresponding endpoint).
This can be done while building the peak hash in the first O(n) loop.
Then, if the gap between two consecutive peaks is 'g', the minimum group size must be 'g/2'. It will simply be 'g' between the start and the first peak, or between the last peak and the end. Also, any group of size at least 'g' will contain a peak, so the range of sizes to check is: g/2, 1 + g/2, 2 + g/2, ..., g.
Therefore, the runtime is the sum of n/d over d = g/2, g/2 + 1, ..., g, where d is the candidate group size:

n/(g/2) + n/(g/2 + 1) + ... + n/g

For example, if g/2 = 5 (that is, g = 10), this is n/5 + n/6 + n/7 + n/8 + n/9 + n/10 = n(1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10).

If you replace each term with the largest one, 1/(g/2), the sum in this example is at most 6 * n/5.

Generalising, there are g - g/2 + 1 = g/2 + 1 terms from g/2 to g, each at most n/(g/2).

So the whole sum is at most: n/(g/2) * (g/2 + 1) = n + 2n/g < 3n.
Therefore, the bound on the total number of operations is O(n).
The code, implementing this in C++, is here:
int solution(vector<int> &A)
{
    int sizeA = A.size();
    vector<bool> hash(sizeA, false);
    int min_group_size = 2;
    int pi = 0; // index of the previous peak seen
    for (int i = 1; i < sizeA - 1; ++i) {
        const int e = A[i];
        if (e > A[i - 1] && e > A[i + 1]) {
            hash[i] = true;
            int diff = i - pi;
            if (pi) diff /= 2; // between two peaks, half the gap suffices
            if (diff > min_group_size) min_group_size = diff;
            pi = i;
        }
    }
    // the last group must reach back to the final peak, so the full trailing
    // gap is a lower bound on the group size (note: max, not min)
    min_group_size = max(min_group_size, sizeA - pi);

    vector<int> hash_next(sizeA, 0);
    for (int i = sizeA - 2; i >= 0; --i) {
        hash_next[i] = hash[i] ? i : hash_next[i + 1];
    }

    for (int group_size = min_group_size; group_size <= sizeA; ++group_size) {
        if (sizeA % group_size != 0) continue;
        int number_of_groups = sizeA / group_size;
        int group_index = 0;
        for (int peak_index = 0; peak_index < sizeA; peak_index = group_index * group_size) {
            peak_index = hash_next[peak_index];
            if (!peak_index) break; // no peak at or after this position
            int lower_range = group_index * group_size;
            int upper_range = lower_range + group_size - 1;
            if (peak_index > upper_range) {
                break; // this group contains no peak
            }
            ++group_index;
        }
        if (number_of_groups == group_index) return number_of_groups;
    }
    return 0;
}
function solution(A) {
    var prev = 0, curr = 0, total = 0;
    for (var i = 1; i < A.length; i++) {
        if (curr == 0) {
            curr = A[i];
        } else {
            if (A[i] != curr) {
                if (prev != 0) {
                    if ((prev < curr && A[i] < curr) || (prev > curr && A[i] > curr)) {
                        total += 1;
                    }
                } else {
                    prev = curr;
                    total += 1;
                }
                prev = curr;
                curr = A[i];
            }
        }
    }
    if (prev != curr) {
        total += 1;
    }
    return total;
}
I agree with GnomeDePlume's answer... the divisor search in the proposed solution is O(N), and it could be decreased to O(sqrt(N)) by using the algorithm provided in the lesson text.
So, just to add to it, here is my solution in Java that solves the problem within the required complexity.
Be aware, it has way more code than yours - some cleanup (debug sysouts and comments) would always be possible :-)
public int solution(int[] A) {
    int result = 0;
    int N = A.length;
    // mark accumulated peaks
    int[] peaks = new int[N];
    int count = 0;
    for (int i = 1; i < N - 1; i++) {
        if (A[i-1] < A[i] && A[i+1] < A[i])
            count++;
        peaks[i] = count;
    }
    // set peaks count on last elem as it will be needed during div checks
    peaks[N-1] = count;
    // check count
    if (count > 0) {
        // if only one peak, will need the whole array
        if (count == 1)
            result = 1;
        else {
            // at this point (peaks > 1) we know at least the single group will satisfy the criteria
            // so set result to 1, then check for bigger numbers of groups
            result = 1;
            // for each divisor of N, check if that number of groups work
            Integer[] divisors = getDivisors(N);
            // result will be at least 1 at this point
            boolean candidate;
            int divisor, startIdx, endIdx;
            // check from top value to bottom - stop when one is found
            // for div 1 we know num groups is 1, and we already know that is the minimum. No need to check.
            // for div = N we know it's impossible, as all elements would have to be peaks (impossible by definition)
            for (int i = divisors.length - 2; i > 0; i--) {
                candidate = true;
                divisor = divisors[i];
                for (int j = 0; j < N; j += N/divisor) {
                    startIdx = (j == 0 ? j : j-1);
                    endIdx = j + N/divisor - 1;
                    if (peaks[startIdx] == peaks[endIdx]) {
                        candidate = false;
                        break;
                    }
                }
                // if all groups had at least 1 peak, this is the result!
                if (candidate) {
                    result = divisor;
                    break;
                }
            }
        }
    }
    return result;
}

// returns ordered array of all divisors of N
private Integer[] getDivisors(int N) {
    Set<Integer> set = new TreeSet<Integer>();
    double sqrt = Math.sqrt(N);
    int i = 1;
    for (; i < sqrt; i++) {
        if (N % i == 0) {
            set.add(i);
            set.add(N/i);
        }
    }
    if (i * i == N)
        set.add(i);
    return set.toArray(new Integer[]{});
}
Thanks,
Davi
Given an array of length N, how will you find the minimum-length
contiguous subarray whose sum is S and whose product is P?
For example, with [5, 6, 1, 4, 6, 2, 9, 7]: for S = 17 the answer is [6, 2, 9], and for P = 24 the answer is [4, 6].
Just go from left to right, summing the numbers; whenever the sum exceeds S, throw away elements from the left.
import java.util.Arrays;

public class test {
    public static void main(String[] args) {
        int[] array = {5, 6, 1, 4, 6, 2, 9, 7};
        int length = array.length;
        int S = 17;
        int sum = 0;                      // current sum of sub array, assume all positive
        int start = 0;                    // current start of sub array
        int minLength = array.length + 1; // length of minimum sub array found
        int minStart = 0;                 // start of minimum sub array found
        for (int index = 0; index < length; index++) {
            sum = sum + array[index];
            // found by extending to the right
            if (sum == S && index - start + 1 < minLength) {
                minLength = index - start + 1;
                minStart = start;
            }
            while (sum >= S) {
                sum = sum - array[start];
                start++;
                // found by shrinking from the left
                if (sum == S && index - start + 1 < minLength) {
                    minLength = index - start + 1;
                    minStart = start;
                }
            }
        }
        // found
        if (minLength != length + 1) {
            System.out.println(Arrays.toString(Arrays.copyOfRange(array, minStart, minStart + minLength)));
        }
    }
}
For your example, I think the condition is an OR (sum equal to S, or product equal to P).
Product is no different from sum, except for the arithmetic.
Pseudocode:
subStart = 0;
Sum = 0
for (i = 0; i < array.Length; i++)
    Sum = Sum + array[i];
    if (Sum < targetSum) continue;
    if (Sum == targetSum) result = min(result, i - subStart + 1);
    while (Sum >= targetSum)
        Sum = Sum - array[subStart];
        subStart++;
I think that will find the result in one pass through the array. There's a bit of detail missing around the result value; it needs a bit more bookkeeping to return the actual subarray if needed.
To find the product subarray, just substitute multiplication/division for addition/subtraction in the above algorithm.
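For example, the product version might look like this (my sketch, assuming positive integers as in the example; the extra step of dropping leading 1s, which never change the product, is needed to find the shortest window):

import java.util.Arrays;

public class ProductWindow {
    public static void main(String[] args) {
        int[] array = {5, 6, 1, 4, 6, 2, 9, 7};
        long P = 24;
        long product = 1;
        int start = 0, minLength = array.length + 1, minStart = 0;
        for (int index = 0; index < array.length; index++) {
            product *= array[index];
            // shrink while the product is too large (elements >= 1 keep it monotone)
            while (product > P)
                product /= array[start++];
            // leading 1s never change the product, so dropping them only shortens the window
            while (start < index && array[start] == 1)
                start++;
            if (product == P && index - start + 1 < minLength) {
                minLength = index - start + 1;
                minStart = start;
            }
        }
        if (minLength != array.length + 1)
            System.out.println(Arrays.toString(Arrays.copyOfRange(array, minStart, minStart + minLength))); // [4, 6]
    }
}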
Put two indices on the array; let's call them i and j. Initially j = 1 and i = 0. If the product between i and j is less than P, increment j. If it is greater than P, increment i. If we get something equal to P, sum up the elements (instead of summing every time, maintain a prefix-sum array where S(i) is the sum of everything to the left of i; the sum from i to j is then S(j+1) - S(i)) and see whether you get S. Stop when j falls off the end of the array.
This is O(n).
You can use a hashmap to find the answer for product in O(N) time with extra space.
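One way to realize that (my sketch, assuming strictly positive elements and products that fit in a long; otherwise you'd need BigInteger or sums of logarithms): store each prefix product's rightmost index in a map, and for each position look up the prefix that would complete a window with product P.

import java.util.HashMap;
import java.util.Map;

public class ProductHash {
    // Returns the minimal length of a subarray with product p, or -1 if none
    // (recovering the subarray itself would just need the matching index too).
    public static int minLengthWithProduct(int[] a, long p) {
        Map<Long, Integer> lastIndex = new HashMap<>();
        long prefix = 1;
        lastIndex.put(1L, 0);              // empty prefix before index 0
        int best = Integer.MAX_VALUE;
        for (int j = 0; j < a.length; j++) {
            prefix *= a[j];
            if (prefix % p == 0) {
                Integer i = lastIndex.get(prefix / p);
                if (i != null) best = Math.min(best, j + 1 - i);
            }
            lastIndex.put(prefix, j + 1);  // keep the rightmost occurrence
        }
        return best == Integer.MAX_VALUE ? -1 : best;
    }

    public static void main(String[] args) {
        System.out.println(minLengthWithProduct(new int[]{5, 6, 1, 4, 6, 2, 9, 7}, 24)); // 2, i.e. [4, 6]
    }
}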