Unable to find error in segment tree: minimum in subarray - C++11

I am new to data structures and algorithms, and I am unable to find the error in my code for the following question:
Range Minimum Query
Given an array A of size N, there are two types of queries on this array.
q l r: In this query you need to print the minimum in the sub-array A[l:r].
u x y: In this query you need to update A[x]=y.
Input: First line of the test case contains two integers, N and Q, size of array A and number of queries.
Second line contains N space separated integers, elements of A.
Next Q lines contain one of the two queries.
Output:
For each q query, print the minimum element in the sub-array A[l:r].
Constraints:
1 ≤ N,Q,y ≤ 10^5
1 ≤ l,r,x≤N
#include<bits/stdc++.h>
using namespace std;

long a[100001];     //global array to store input
long tree[400004];  //global array to store tree

// FUNCTION TO BUILD SEGMENT TREE //////////
void build(long i, long start, long end) //i = tree node
{
    if (start == end)
    {
        tree[i] = a[start];
        return;
    }
    long mid = (start + end) / 2;
    build(i*2, start, mid);
    build(i*2+1, mid+1, end);
    tree[i] = min(tree[i*2], tree[i*2+1]);
}

// FUNCTION TO UPDATE SEGMENT TREE //////////
void update(long i, long start, long end, long idx, long val)
//idx = index to be updated
//val = new value to be given at that index
{
    if (start == end)
        tree[i] = a[idx] = val;
    else
    {
        int mid = (start + end) / 2;
        if (start <= idx and idx <= mid)
            update(i*2, start, mid, idx, val);
        else
            update(i*2+1, mid+1, end, idx, val);
        tree[i] = min(tree[i*2], tree[i*2+1]);
    }
}

// FUNCTION FOR QUERY
long query(long i, long start, long end, long l, long r)
{
    if (start > r || end < l || start > end)
        return INT_MAX;
    else if (start >= l && end <= r)
        return tree[i];
    long mid = (start + end) / 2;
    long ans1 = query(i*2, start, mid, l, r);
    long ans2 = query(i*2+1, mid+1, end, l, r);
    return min(ans1, ans2);
}

int main()
{
    long n, q;
    cin >> n >> q;
    for (int i = 0; i < n; i++)
        cin >> a[i];
    //for(int i=1 ; i<2*n ; i++) cout<<tree[i]<<" "; cout<<endl;
    build(1, 0, n-1);
    //for(int i=1 ; i<2*n ; i++) cout<<tree[i]<<" "; cout<<endl;
    while (q--)
    {
        long l, r;
        char ch;
        cin >> ch >> l >> r;
        if (ch == 'q')
            cout << query(1, 0, n-1, l-1, r-1) << endl;
        else
            update(1, 0, n-1, l, r);
    }
    return 0;
}
Example :input
5 15
1 5 2 4 3
q 1 5
q 1 3
q 3 5
q 1 5
q 1 2
q 2 4
q 4 5
u 3 1
u 3 100
u 3 6
q 1 5
q 1 5
q 1 2
q 2 4
q 4 5
Expected output:
1
1
2
1
1
2
3
1
1
1
4
3

It appears that all given values assume 1-based indexing: 1 ≤ l,r,x ≤ N
You chose to build your segment tree with 0-based indexing, so all queries and updates should also use the same indexing.
So this part is wrong: you need to set A[x]=y, but because you pass the 1-based x straight into your 0-based tree, your code actually sets A[x+1]=y
update(1,0,n-1,l,r);
To fix change it to this:
update(1,0,n-1,l-1,r);

Related

How would one solve the staircase problem recursively with a variable number of steps?

The problem of determining the number of ways to climb a staircase, given that you can take 1 or 2 steps, is well known, with the Fibonacci-style solution being very clear. However, how exactly could one solve this recursively if you also assume that you can take a variable M number of steps?
I tried to make a quick mock-up of this algorithm in TypeScript:
function counter(n: number, h: number){
    console.log(`counter(n=${n},h=${h})`);
    let sum = 0
    if (h < 1) return 0;
    sum = 1
    if (n > h) {
        n = h;
    }
    if (n == h) {
        sum = Math.pow(2, h-1)
        console.log(`return sum=${sum}, pow(2,${h-1}) `)
        return sum
    }
    for (let c = 1; c <= h; c++) {
        console.log(`c=${c}`)
        sum += counter(n, h-c);
        console.log(`sum=${sum}`)
    }
    console.log(`return sum=${sum}`)
    return sum;
}
let result = counter (2, 4);
console.log(`result=${result}`)
but unfortunately this doesn't seem to work for most cases where the height is not equal to the number of steps one could take.
I think this could be solved with recursive DP.
#include <bits/stdc++.h>
using namespace std;

int n, m;                 // stair count and maximum number of jumps (read in main)
vector<vector<int>> dp2;  // [stair count][number of jumps]

int stair(int c, int p) {
    int& ret = dp2[c][p];
    if (ret != -1) return ret; // If you've already done the same search, return the saved result
    if (c == n) {              // If you hit the last stair, return 1
        return ret = 1;
    }
    int s1 = 0, s2 = 0;
    if (p < m) {               // If you can do more jumps, make recursive calls
        s1 = stair(c + 1, p + 1);
        if (c + 2 <= n) {      // A +2 jump must not overshoot the last stair.
            s2 = stair(c + 2, p + 1);
        }
    }
    return ret = s1 + s2;      // Final result is the sum of the +1-stair and +2-stair methods
}

int main()
{
    ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
    cin >> n >> m;
    dp2 = vector<vector<int>>(n + 1, vector<int>(m + 1, -1));
    for (int i = 1; i <= m; i++) {
        dp2[n][i] = 1;         // Every path that reaches the last stair counts as 1, no matter how many jumps it used.
    }
    cout << stair(0, 0) << "\n";
    return 0;
}
Example IO 1
5 5
8
// 1 1 1 1 1
// 1 1 1 2
// 1 1 2 1
// 1 2 1 1
// 2 1 1 1
// 1 2 2
// 2 1 2
// 2 2 1
Example IO 2
5 4
7
// 1 1 1 2
// 1 1 2 1
// 1 2 1 1
// 2 1 1 1
// 1 2 2
// 2 1 2
// 2 2 1
Example IO 3
5 3
3
// 1 2 2
// 2 1 2
// 2 2 1
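As a cross-check on the example IOs (my note, not part of the answer): with steps of size 1 or 2, a route using j two-steps takes n - j moves and there are C(n - j, j) such routes, so the count is the sum of C(n - j, j) over all j with n - j <= m. A small sketch (countWays and choose are made-up names):
#include <iostream>
using namespace std;

// Closed-form cross-check for the example IOs above.
long long choose(long long n, long long k) {
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (long long i = 1; i <= k; ++i)
        r = r * (n - k + i) / i;          // multiplicative formula, exact at each step
    return r;
}

long long countWays(int n, int m) {
    long long total = 0;
    for (int j = 0; 2 * j <= n; ++j)      // j two-steps, n - 2j one-steps, n - j moves total
        if (n - j <= m) total += choose(n - j, j);
    return total;
}

int main() {
    cout << countWays(5, 5) << "\n";  // 8
    cout << countWays(5, 4) << "\n";  // 7
    cout << countWays(5, 3) << "\n";  // 3
    return 0;
}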

Maximum number achievable by converting two adjacent x to one (x+1)

Given a sequence of N integers, where 1 <= N <= 500 and the numbers are between 1 and 50. In one step, any two adjacent equal numbers x x can be replaced with a single x + 1. What is the maximum number achievable by such steps?
For example if given 2 3 1 1 2 2 then the maximum possible is 4:
2 3 1 1 2 2 ---> 2 3 2 2 2 ---> 2 3 3 2 ---> 2 4 2.
It is evident that I should try to do better than the maximum number available in the sequence. But I can't figure out a good algorithm.
Each substring of the input can be reduced to at most one possible single number (the invariant: a step preserves the sum of 2^entry over the substring, so a substring can only collapse to x if that sum equals 2^x; the target value is the log base two of that sum). For every x, the set of substrings that can make x is (1) every single occurrence of x, and (2) the union of two adjacent contiguous substrings that can each make x - 1. The resulting algorithm is O(N^2)-time.
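To make that description concrete, here is a rough C++ sketch of the idea (my code, not the answer author's; best_value is a made-up name). For every value x it stores, per start index, the unique end index of the substring starting there that reduces to x, then joins adjacent entries:
#include <algorithm>
#include <iostream>
#include <map>
#include <vector>
using namespace std;

// ends[x][i] = j means the substring a[i..j) can be reduced to the single value x.
// By the sum-of-powers-of-two invariant, at most one such j exists per (x, i).
int best_value(const vector<int>& a) {
    map<int, map<int, int>> ends;
    for (int i = 0; i < (int)a.size(); ++i)
        ends[a[i]][i] = i + 1;                       // rule (1): single occurrences of x
    int best = 0;
    // Inserting ends[x + 1] while iterating is fine: std::map iterators stay valid,
    // and the new (larger) key is visited later in the same pass.
    for (auto it = ends.begin(); it != ends.end(); ++it) {
        int x = it->first;
        best = max(best, x);
        for (const auto& p : it->second) {           // rule (2): two adjacent substrings
            auto jt = it->second.find(p.second);     // that each make x ...
            if (jt != it->second.end())
                ends[x + 1][p.first] = jt->second;   // ... together make x + 1
        }
    }
    return best;                                     // empty input gives 0
}

int main() {
    cout << best_value({2, 3, 1, 1, 2, 2}) << "\n";  // prints 4, as in the example above
    return 0;
}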
An algorithm could work like this:
Convert the input to an array where every element has a frequency attribute, collapsing repeated consecutive values in the input into one single node. For example, this input:
1 2 2 4 3 3 3 3
Would be represented like this:
{val: 1, freq: 1} {val: 2, freq: 2} {val: 4, freq: 1} {val: 3, freq: 4}
Then find local minima nodes, like the node (3 3 3 3) in 1 (2 2) 4 (3 3 3 3) 4, i.e. nodes whose neighbours both have higher values. For those local minima that have an even frequency, "lift" those by applying the step. Repeat this until no such local minima (with even frequency) exist any more.
Start of the recursive part of the algorithm:
At both ends of the array, work inwards to "lift" values as long as the more inner neighbour has a higher value. With this rule, the following:
1 2 2 3 5 4 3 3 3 1 1
will completely resolve. First from the left side inward:
1 4 5 4 3 3 3 1 1
Then from the right side:
1 4 6 3 2
Note that when there is an odd frequency (like for the 3s above), there will be a "remainder" that cannot be incremented. The remainder should in this rule always be left on the outward side, so to maximise the potential towards the inner part of the array.
At this point the remaining local minima have odd frequencies. Applying the step to such a node will always leave a "remainder" (like above) with the original value. This remaining node can appear anywhere, but it only makes sense to look at solutions where this remainder is on the left side or the right side of the lift (not in the middle). So for example:
4 1 1 1 1 1 2 3 4
Can resolve to one of these:
4 2 2 1 2 3 4
Or:
4 1 2 2 2 3 4
The 1 in either second or fourth position, is the above mentioned "remainder". Obviously, the second way of resolving is more promising in this example. In general, the choice is obvious when on one side there is a value that is too high to merge with, like the left-most 4 is too high for five 1 values to get to. The 4 is like a wall.
When the frequency of the local minimum is one, there is nothing we can do with it. It actually separates the array in a left and right side that do not influence each other. The same is true for the remainder element discussed above: it separates the array into two parts that do not influence each other.
So the next step in the algorithm is to find such minima (where the choice is obvious), apply that kind of step and separate the problem into two distinct problems which should be solved recursively (from the top). So in the last example, the following two problems would be solved separately:
4
2 2 3 4
Then the best of both solutions will count as the overall solution. In this case that is 5.
The most challenging part of the algorithm is to deal with those local minima for which the choice of where to put the remainder is not obvious. For instance;
3 3 1 1 1 1 1 2 3
This can go to either:
3 3 2 2 1 2 3
3 3 1 2 2 2 3
In this example the end result is the same for both options, but in bigger arrays it would be less and less obvious. So here both options have to be investigated. In general you can have many of them, like 2 in this example:
3 1 1 1 2 3 1 1 1 1 1 3
Each of these two minima has two options. This seems like it would explode into too many possibilities for larger arrays. But it is not that bad. The algorithm can take opposite choices in neighbouring minima, and go alternating like this through the whole array. This way alternating sections are favoured, and get the most possible value drawn into them, while the other sections are deprived of value. Now the algorithm turns the tables, and toggles all choices so that the sections that were previously favoured are now deprived, and vice versa. The solution of both these alternatives is derived by resolving each section recursively, and then comparing the two "grand" solutions to pick the best one.
Snippet
Here is a live JavaScript implementation of the above algorithm.
Comments are provided which hopefully should make it readable.
"use strict";
function Node(val, freq) {
// Immutable plain object
return Object.freeze({
val: val,
freq: freq || 1, // Default frequency is 1.
// Max attainable value when merged:
reduced: val + (freq || 1).toString(2).length - 1
});
}
function compress(a) {
// Put repeated elements in a single node
var result = [], i, j;
for (i = 0; i < a.length; i = j) {
for (j = i + 1; j < a.length && a[j] == a[i]; j++);
result.push(Node(a[i], j - i));
}
return result;
}
function decompress(a) {
// Expand nodes into separate, repeated elements
var result = [], i, j;
for (i = 0; i < a.length; i++) {
for (j = 0; j < a[i].freq; j++) {
result.push(a[i].val);
}
}
return result;
}
function str(a) {
return decompress(a).join(' ');
}
function unstr(s) {
s = s.replace(/\D+/g, ' ').trim();
return s.length ? compress(s.split(/\s+/).map(Number)) : [];
}
/*
The function merge modifies an array in-place, performing a "step" on
the indicated element.
The array will get an element with an incremented value
and decreased frequency, unless a join occurs with neighboring
elements with the same value: then the frequencies are accumulated
into one element. When the original frequency was odd there will
be a "remainder" element in the modified array as well.
*/
function merge(a, i, leftWards, stats) {
var val = a[i].val+1,
odd = a[i].freq % 2,
newFreq = a[i].freq >> 1,
last = i;
// Merge with neighbouring nodes of same value:
if ((!odd || !leftWards) && a[i+1] && a[i+1].val === val) {
newFreq += a[++last].freq;
}
if ((!odd || leftWards) && i && a[i-1].val === val) {
newFreq += a[--i].freq;
}
// Replace nodes
a.splice(i, last-i+1, Node(val, newFreq));
if (odd) a.splice(i+leftWards, 0, Node(val-1));
// Update statistics and trace: this is not essential to the algorithm
if (stats) {
stats.total_applied_merges++;
if (stats.trace) stats.trace.push(str(a));
}
return i;
}
/* Function Solve
Parameters:
a: The compressed array to be reduced via merges. It is changed in-place
and should not be relied on after the call.
stats: Optional plain object that will be populated with execution statistics.
Return value:
The array after the best merges were applied to achieve the highest
value, which is stored in the maxValue custom property of the array.
*/
function solve(a, stats) {
var maxValue, i, j, traceOrig, skipLeft, skipRight, sections, goLeft,
b, choice, alternate;
if (!a.length) return a;
if (stats && stats.trace) {
traceOrig = stats.trace;
traceOrig.push(stats.trace = [str(a)]);
}
// Look for valleys of even size, and "lift" them
for (i = 1; i < a.length - 1; i++) {
if (a[i-1].val > a[i].val && a[i].val < a[i+1].val && (a[i].freq % 2) < 1) {
// Found an even valley
i = merge(a, i, false, stats);
if (i) i--;
}
}
// Check left-side elements with always increasing values
for (i = 0; i < a.length-1 && a[i].val < a[i+1].val; i++) {
if (a[i].freq > 1) i = merge(a, i, false, stats) - 1;
};
// Check right-side elements with always increasing values, right-to-left
for (j = a.length-1; j > 0 && a[j-1].val > a[j].val; j--) {
if (a[j].freq > 1) j = merge(a, j, true, stats) + 1;
};
// All resolved?
if (i == j) {
while (a[i].freq > 1) merge(a, i, true, stats);
a.maxValue = a[i].val;
} else {
skipLeft = i;
skipRight = a.length - 1 - j;
// Look for other valleys (odd sized): they will lead to a split into sections
sections = [];
for (i = a.length - 2 - skipRight; i > skipLeft; i--) {
if (a[i-1].val > a[i].val && a[i].val < a[i+1].val) {
// Odd number of elements: if more than one, there
// are two ways to merge them, but maybe
// one of both possibilities can be excluded.
goLeft = a[i+1].val > a[i].reduced;
if (a[i-1].val > a[i].reduced || goLeft) {
if (a[i].freq > 1) i = merge(a, i, goLeft, stats) + goLeft;
// i is the index of the element which has become a 1-sized valley
// Split off the right part of the array, and store the solution
sections.push(solve(a.splice(i--), stats));
}
}
}
if (sections.length) {
// Solve last remaining section
sections.push(solve(a, stats));
sections.reverse();
// Combine the solutions of all sections into one
maxValue = sections[0].maxValue;
for (i = sections.length - 1; i >= 0; i--) {
maxValue = Math.max(sections[i].maxValue, maxValue);
}
} else {
// There is no more valley that can be resolved without branching into two
// directions. Look for the remaining valleys.
sections = [];
b = a.slice(0); // take copy
for (choice = 0; choice < 2; choice++) {
if (choice) a = b; // restore from copy on second iteration
alternate = choice;
for (i = a.length - 2 - skipRight; i > skipLeft; i--) {
if (a[i-1].val > a[i].val && a[i].val < a[i+1].val) {
// Odd number of elements
alternate = !alternate
i = merge(a, i, alternate, stats) + alternate;
sections.push(solve(a.splice(i--), stats));
}
}
// Solve last remaining section
sections.push(solve(a, stats));
}
sections.reverse(); // put in logical order
// Find best section:
maxValue = sections[0].maxValue;
for (i = sections.length - 1; i >= 0; i--) {
maxValue = Math.max(sections[i].maxValue, maxValue);
}
for (i = sections.length - 1; i >= 0 && sections[i].maxValue < maxValue; i--);
// Which choice led to the highest value (choice = 0 or 1)?
choice = (i >= sections.length / 2)
// Discard the not-chosen version
sections = sections.slice(choice * sections.length/2);
}
// Reconstruct the solution from the sections.
a = [].concat.apply([], sections);
a.maxValue = maxValue;
}
if (traceOrig) stats.trace = traceOrig;
return a;
}
function randomValues(len) {
var a = [];
for (var i = 0; i < len; i++) {
// 50% chance for a 1, 25% for a 2, ... etc.
a.push(Math.min(/\.1*/.exec(Math.random().toString(2))[0].length,5));
}
return a;
}
// I/O
var inputEl = document.querySelector('#inp');
var randEl = document.querySelector('#rand');
var lenEl = document.querySelector('#len');
var goEl = document.querySelector('#go');
var outEl = document.querySelector('#out');
goEl.onclick = function() {
// Get the input and structure it
var a = unstr(inputEl.value),
stats = {
total_applied_merges: 0,
trace: a.length < 100 ? [] : undefined
};
// Apply algorithm
a = solve(a, stats);
// Output results
var output = {
value: a.maxValue,
compact: str(a),
total_applied_merges: stats.total_applied_merges,
trace: stats.trace || 'no trace produced (input too large)'
};
outEl.textContent = JSON.stringify(output, null, 4);
}
randEl.onclick = function() {
// Get input (count of numbers to generate):
len = lenEl.value;
// Generate
var a = randomValues(len);
// Output
inputEl.value = a.join(' ');
// Simulate click to find the solution immediately.
goEl.click();
}
// Tests
var tests = [
' ', '',
'1', '1',
'1 1', '2',
'2 2 1 2 2', '3 1 3',
'3 2 1 1 2 2 3', '5',
'3 2 1 1 2 2 3 1 1 1 1 3 2 2 1 1 2', '6',
'3 1 1 1 3', '3 2 1 3',
'2 1 1 1 2 1 1 1 2 1 1 1 1 1 2', '3 1 2 1 4 1 2',
'3 1 1 2 1 1 1 2 3', '4 2 1 2 3',
'1 4 2 1 1 1 1 1 1 1', '1 5 1',
];
var res;
for (var i = 0; i < tests.length; i+=2) {
var res = str(solve(unstr(tests[i])));
if (res !== tests[i+1]) throw 'Test failed: ' + tests[i] + ' returned ' + res + ' instead of ' + tests[i+1];
}
Enter series (space separated):<br>
<input id="inp" size="60" value="2 3 1 1 2 2"><button id="go">Solve</button>
<br>
<input id="len" size="4" value="30"><button id="rand">Produce random series of this size and solve</button>
<pre id="out"></pre>
As you can see the program produces a reduced array with the maximum value included. In general there can be many derived arrays that have this maximum; only one is given.
An O(n*m) time and space algorithm is possible, where, according to your stated limits, n <= 500 and m <= 58 (consider that even for a billion elements, m need only be about 60, the largest element plus about log2(n)). Here m represents the range of possible result values, 50 + floor(log2(500)):
Consider the condensed sequence, s = {[x, number of x's]}.
If M[i][j] = [num_j,start_idx] where num_j represents the maximum number of contiguous js ending at index i of the condensed sequence; start_idx, the index where the sequence starts or -1 if it cannot join earlier sequences; then we have the following relationship:
M[i][j] = [s[i][1] + M[i-1][j][0], M[i-1][j][1]]
when j equals s[i][0]
j's greater than s[i][0] but smaller than or equal to s[i][0] + floor(log2(s[i][1])), represent converting pairs and merging with an earlier sequence if applicable, with a special case after the new count is odd:
When M[i][j][0] is odd, we do two things: first calculate the best so far by looking back in the matrix to a sequence that could merge with M[i][j] or its paired descendants, and then set a lower bound in the next applicable cells in the row (meaning a merge with an earlier sequence cannot happen via this cell). The reason this works is that:
if s[i + 1][0] > s[i][0], then s[i + 1] could only possibly pair with the new split section of s[i]; and
if s[i + 1][0] < s[i][0], then s[i + 1] might generate a lower j that would combine with the odd j from M[i], potentially making a longer sequence.
At the end, return the largest entry in the matrix, max(j + floor(log2(num_j))), for all j.
JavaScript code (counterexamples would be welcome; the limit on the answer is set at 7 for convenient visualization of the matrix):
function f(str){
var arr = str.split(/\s+/).map(Number);
var s = [,[arr[0],0]];
for (var i=0; i<arr.length; i++){
if (s[s.length - 1][0] == arr[i]){
s[s.length - 1][1]++;
} else {
s.push([arr[i],1]);
}
}
var M = [new Array(8).fill([0,0])],
best = 0;
for (var i=1; i<s.length; i++){
M[i] = new Array(8).fill([0,i]);
var temp = s[i][1],
temp_odd,
temp_start,
odd = false;
for (var j=s[i][0]; temp>0; j++){
var start_idx = odd ? temp_start : M[i][j-1][1];
if (start_idx != -1 && M[start_idx - 1][j][0]){
temp += M[start_idx - 1][j][0];
start_idx = M[start_idx - 1][j][1];
}
if (!odd){
M[i][j] = [temp,start_idx];
temp_odd = temp;
} else {
M[i][j] = [temp_odd,-1];
temp_start = start_idx;
}
if (!odd && temp & 1 && temp > 1){
odd = true;
temp_start = start_idx;
}
best = Math.max(best,j + Math.floor(Math.log2(temp)));
temp >>= 1;
temp_odd >>= 1;
}
}
return [arr, s, best, M];
}
// I/O
var button = document.querySelector('button');
var input = document.querySelector('input');
var pre = document.querySelector('pre');
button.onclick = function() {
var val = input.value;
var result = f(val);
var text = '';
for (var i=0; i<3; i++){
text += JSON.stringify(result[i]) + '\n\n';
}
for (var i in result[3]){
text += JSON.stringify(result[3][i]) + '\n';
}
pre.textContent = text;
}
<input value ="2 2 3 3 2 2 3 3 5">
<button>Solve</button>
<pre></pre>
Here's a brute force solution:
function findMax(array A, int currentMax)
for each pair (i, i+1) of indices for which A[i]==A[i+1] do
currentMax = max(A[i]+1, currentMax)
replace A[i],A[i+1] by a single number A[i]+1
currentMax = max(currentMax, findMax(A, currentMax))
end for
return currentMax
Given the array A, let currentMax=max(A[0], ..., A[n])
print findMax(A, currentMax)
The algorithm terminates because in each recursive call the array shrinks by 1.
It's also clear that it is correct: we try out all possible replacement sequences.
The code is extremely slow when the array is large and there are lots of options regarding replacements, but it actually works reasonably fast on arrays with a small number of replaceable pairs. (I'll try to quantify the running time in terms of the number of replaceable pairs.)
A naive working code in Python:
def findMax(L, currMax):
    for i in range(len(L)-1):
        if L[i] == L[i+1]:
            L[i] += 1
            del L[i+1]
            currMax = max(currMax, L[i])
            currMax = max(currMax, findMax(L, currMax))
            L[i] -= 1
            L.insert(i+1, L[i])
    return currMax

# entry point
if __name__ == '__main__':
    L1 = [2, 3, 1, 1, 2, 2]
    L2 = [2, 3, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2]
    print findMax(L1, max(L1))
    print findMax(L2, max(L2))
The result of the first call is 4, as expected.
The result of the second call is 5 as expected; the sequence that gives the result: 2,3,1,1,2,2,2,2,2,2,2,2, -> 2,3,1,1,3,2,2,2,2,2,2 -> 2,3,1,1,3,3,2,2,2,2, -> 2,3,1,1,3,3,3,2,2 -> 2,3,1,1,3,3,3,3 -> 2,3,1,1,4,3, -> 2,3,1,1,4,4 -> 2,3,1,1,5

Maximum length of zigzag sequence

A sequence of integers is called zigzag sequence if each of its elements is either strictly less or strictly greater than its neighbors.
Example : The sequence 4 2 3 1 5 3 forms a zigzag, but 7 3 5 5 2 and 3 8 6 4 5 don't.
For a given array of integers we need to find the length of its largest (contiguous) sub-array that forms a zigzag sequence.
Can this be done in O(N) ?
Currently my solution is O(N^2) which is just simply taking every two points and checking each possible sub-array if it satisfies the condition or not.
I claim that the length of the overlapping sequence of any 2 zigzag sub-sequences is at most 1.
Proof by contradiction:
Assume a_i .. a_j is the longest zigzag sub-sequence, and there is another zigzag sub-sequence b_m...b_n overlapping it.
Without loss of generality, let's say the overlapping part is
a_i ... a_k...a_j
--------b_m...b_k'...b_n
a_k = b_m, a_k+1 = b_m+1....a_j = b_k' where k'-m = j-k > 0 (at least 2 elements are overlapping)
Then they can merge to form a longer zig-zag sequence, contradiction.
This means the only way they can overlap each other is like this:
3 5 3 2 3 2 3
3 5 3 and 3 2 3 2 3 overlap in 1 element.
This can still be solved in O(N), I believe: just greedily increase the zig-zag length whenever possible. If that fails, move the iterator 1 element back and treat it as a new zig-zag starting point.
Keep a record of the latest and longest zig-zag length you have found.
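A rough C++ sketch of that greedy idea (my code, not the answer author's; longest_zigzag is a made-up name):
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// Greedily extend the current zigzag while the direction keeps alternating;
// on failure, restart one element back so the new run can reuse the last pair.
int longest_zigzag(const vector<int>& a) {
    int n = a.size();
    if (n < 2) return n;
    int best = 1, len = 1, prev_dir = 0;                  // prev_dir: -1, 0 (none/equal), +1
    for (int i = 1; i < n; ++i) {
        int dir = (a[i] > a[i - 1]) - (a[i] < a[i - 1]);
        if (dir != 0 && dir != prev_dir)
            ++len;                                        // zigzag continues
        else
            len = (dir != 0) ? 2 : 1;                     // restart at a[i-1], a[i] (or just a[i])
        prev_dir = dir;
        best = max(best, len);
    }
    return best;
}

int main() {
    cout << longest_zigzag({4, 2, 3, 1, 5, 3}) << "\n";   // 6: the whole example sequence
    cout << longest_zigzag({7, 3, 5, 5, 2}) << "\n";      // 3: the sub-array 7 3 5
    return 0;
}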
Walk along the array and see if the current item belongs to (fits the definition of) a zigzag. Remember the last zigzag start, which is either the array's start or the first zigzag element after the most recent non-zigzag element. This and the current item define some zigzag subarray. When it appears longer than the previously found one, store the new longest zigzag length. Proceed till the end of the array and you should complete the task in O(N).
Sorry, I used Perl to write this.
#!/usr/bin/perl
@a = ( 5, 4, 2, 3, 1, 5, 3, 7, 3, 5, 5, 2, 3, 8, 6, 4, 5 );
$n = scalar @a;
$best_start = 0;
$best_end = 1;
$best_length = 2;
$start = 0;
$end = 1;
$direction = ($a[0] > $a[1]) ? 1 : ($a[0] < $a[1]) ? -1 : 0;
for($i=2; $i<$n; $i++) {
# a trick here: the same value makes $new_direction = $direction
$new_direction = ($a[$i-1] > $a[$i]) ? 1 : ($a[$i-1] < $a[$i]) ? -1 : $direction;
print "$a[$i-1] > $a[$i] : direction $new_direction Vs $direction\n";
if ($direction != $new_direction) {
$end = $i;
} else {
$this_length = $end - $start + 1;
if ($this_length > $best_length) {
$best_start = $start;
$best_end = $end;
$best_length = $this_length;
}
$start = $i-1;
$end = $i;
}
$direction = $new_direction;
}
$this_length = $end - $start + 1;
if ($this_length > $best_length) {
$best_start = $start;
$best_end = $end;
$best_length = $this_length;
}
print "BEST $best_start to $best_end length $best_length\n";
for ($i=$best_start; $i <= $best_end; $i++) {
print $a[$i], " ";
}
print "\n";
For each index i, you can find the smallest j such that the subarray with index j,j+1,...,i-1,i is a zigzag. This can be done in two phases:
Find the longest "increasing" zig zag (starts with a[1]>a[0]):
start = 0
increasing[0] = 0
sign = true
for (int i = 1; i < n; i++) {
    if ((arr[i] > arr[i-1] && sign) || (arr[i] < arr[i-1] && !sign)) {
        increasing[i] = start
        sign = !sign
    } else if (arr[i-1] < arr[i]) { //increasing and started last element
        start = i-1
        sign = false
        increasing[i] = i-1
    } else { //started this element
        start = i
        sign = true
        increasing[i] = i
    }
}
Do similarly for "decreasing" zig-zag, and you can find for each index the "earliest" possible start for a zig-zag subarray.
From there, finding the maximal possible zig-zag is easy.
Since all operations are done in O(n), and you basically do one after the other, this is your complexity.
You can combine the both "increasing" and "decreasing" to one go:
start = 0
maxZigZagStart[0] = 0
sign = true
for (int i = 1; i < n; i++) {
    if ((arr[i] > arr[i-1] && sign) || (arr[i] < arr[i-1] && !sign)) {
        maxZigZagStart[i] = start
        sign = !sign
    } else if (arr[i-1] > arr[i]) { //decreasing:
        start = i-1
        sign = false
        maxZigZagStart[i] = i-1
    } else if (arr[i-1] < arr[i]) { //increasing:
        start = i-1
        sign = true
        maxZigZagStart[i] = i-1
    } else { //equality
        start = i
        //guess it is increasing; if it is not, it will be taken care of in the next iteration
        sign = true
        maxZigZagStart[i] = i
    }
}
You can see that you can actually even drop the maxZigZagStart aux array and store a local maximal length instead.
A sketch of a simple one-pass algorithm. Cmp compares neighbouring elements, returning -1, 0, 1 for the less, equal and greater cases.
Zigzag ends for cases of Cmp transitions:
0 0
-1 0
1 0
Zigzag ends and new series starts:
0 -1
0 1
-1 -1
1 1
Zigzag series continues for transitions
-1 1
1 -1
Algo:
Start = 0
LastCmp = - Compare(A[1], A[0]) //prepare to use the first element individually
MaxLen = 0
for i = 1 to N - 1 do
Cmp = Compare(A[i], A[i - 1]) //returns -1, 0, 1 for less, equal and greater cases
if Abs(Cmp - LastCmp) <> 2 then
//zigzag condition is violated, series ends, new series starts
MaxLen = Max(MaxLen, i - 1 - Start)
Start = i
//else series continues, nothing to do
LastCmp = Cmp
//check for ending zigzag
if LastCmp <> 0 then
MaxLen = Max(MaxLen, N - Start)
examples of output:
2 6 7 1 7 0 7 3 1 1 7 4
5 (7 1 7 0 7)
8 0 0 3 5 8
1
0 0 7 0
2
1 2 0 7 9
3
8 3 5 2
4
1 3 7 1 6 6
2
1 4 0 6 6 3 4 3 8 0 9 9
5
Let's consider the sequence 5 9 3 6 5 7 2 3 6 5 2 1 3 as an example. You have a condition which every internal element of a subsequence should satisfy (the element is strictly less or strictly greater than its neighbors). Let's compute this condition for every element of the whole sequence:
5 9 3 6 5 7 2 3 6 5 2 1 3
0 1 1 1 1 1 1 0 1 0 0 1 0
The condition is undefined for outermost elements because they have only one neighbor each. But I defined it as 0 for convenience.
The longest subsequence of 1's (9 3 6 5 7 2) is the internal part of the longest zigzag subsequence (5 9 3 6 5 7 2 3). So the algorithm is:
Find the longest subsequence of elements satisfying the condition.
Add one element to it on each side.
The first step can be done in O(n) by the following algorithm:
max_length = 0
current_length = 0
for i from 2 to len(a) - 1:
    if a[i - 1] < a[i] > a[i + 1] or a[i - 1] > a[i] < a[i + 1]:
        current_length += 1
    else:
        max_length = max(max_length, current_length)
        current_length = 0
max_length = max(max_length, current_length)
The only special case is if the sequence total length is 0 or 1. Then the whole sequence would be the longest zigzag subsequence.
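Putting the whole recipe together, here is a C++ sketch (mine, not the answer author's; longestZigzagSubarray is a made-up name) that also handles the boundary case where no internal element satisfies the condition:
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// Step 1: longest run of internal elements satisfying the peak/valley condition.
// Step 2: add one element on each side of that run.
int longestZigzagSubarray(const vector<int>& a) {
    int n = a.size();
    if (n <= 2) {
        // A whole sequence of length 0, 1, or 2 is a zigzag, except two equal elements.
        return (n == 2 && a[0] == a[1]) ? 1 : n;
    }
    int maxLen = 0, curLen = 0;
    for (int i = 1; i + 1 < n; ++i) {
        bool peak   = a[i - 1] < a[i] && a[i] > a[i + 1];
        bool valley = a[i - 1] > a[i] && a[i] < a[i + 1];
        if (peak || valley) ++curLen;
        else { maxLen = max(maxLen, curLen); curLen = 0; }
    }
    maxLen = max(maxLen, curLen);
    if (maxLen > 0) return maxLen + 2;            // one extra element on each side
    // No internal element qualifies: the best is any pair of unequal neighbours.
    for (int i = 1; i < n; ++i)
        if (a[i] != a[i - 1]) return 2;
    return 1;
}

int main() {
    cout << longestZigzagSubarray({4, 2, 3, 1, 5, 3}) << "\n";                       // 6
    cout << longestZigzagSubarray({5, 9, 3, 6, 5, 7, 2, 3, 6, 5, 2, 1, 3}) << "\n";  // 8
    return 0;
}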
#include "iostream"
using namespace std ;
int main(){
int t ; scanf("%d",&t) ;
while(t--){
int n ; scanf("%d",&n) ;
int size1 = 1 , size2 = 1 , seq1 , seq2 , x ;
bool flag1 = true , flag2 = true ;
for(int i=1 ; i<=n ; i++){
scanf("%d",&x) ;
if( i== 1 )seq1 = seq2 = x ;
else {
if( flag1 ){
if( x>seq1){
size1++ ;
seq1 = x ;
flag1 = !flag1 ;
}
else if( x < seq1 )
seq1 = x ;
}
else{
if( x<seq1){
size1++ ;
seq1=x ;
flag1 = !flag1 ;
}
else if( x > seq1 )
seq1 = x ;
}
if( flag2 ){
if( x < seq2 ){
size2++ ;
seq2=x ;
flag2 = !flag2 ;
}
else if( x > seq2 )
seq2 = x ;
}
else {
if( x > seq2 ){
size2++ ;
seq2 = x ;
flag2 = !flag2 ;
}
else if( x < seq2 )
seq2 = x ;
}
}
}
printf("%d\n",max(size1,size2)) ;
}
return 0 ;
}

Partitioning a no. N into M partitions

I'm trying a problem in which I have to break a number N into M parts in all possible ways.
Example:
N=1 M=3 , break 1 into 3 parts
0 0 1
0 1 0
1 0 0
N=3 M=2 , break 3 into 2 parts
2 1
1 2
3 0
0 3
N=4 M=4 , break 4 into 4 parts
0 0 0 4
0 0 4 0
0 4 0 0
4 0 0 0
0 0 1 3
0 1 0 3
0 1 3 0
.
.
.
and so on.
I did code a backtracking algorithm which produces all the possible compositions step by step, but it chokes for some larger inputs, because many compositions are the same, differing only in the ordering of parts. I want to reduce that. Can anybody help in providing a more efficient method?
My method:
void backt(int* part, int pos, int n) //break N into M parts
{
    if (pos == M-1)
    {
        part[pos] = n;
        ppart(part); //print part array
        return;
    }
    if (n == 0)
    {
        part[pos] = 0;
        backt(part, pos+1, 0);
        return;
    }
    for (int i = 0; i <= n; i++)
    {
        part[pos] = i;
        backt(part, pos+1, n-i);
    }
}
In my algorithm, n is N and it fills the array part[] for every possible partition of N.
What I want to know is: once a composition is generated, how many times will that composition occur with a different ordering? For example, for N=1, M=3 there is only one composition, <0,0,1>, but it occurs 3 times. That's what I want to know for every possible unique composition.
For another example, N=4, M=4:
the composition <0 0 0 4> is repeated 4 times. Similarly, for every unique composition I want to know exactly how many times it will occur.
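For what it's worth, that count is the multinomial coefficient M! / (c_0! * c_1! * ...), where c_v is how many parts equal v. A small C++ sketch (mine, not from the question; orderings is a made-up name):
#include <iostream>
#include <map>
#include <vector>
using namespace std;

// Number of distinct orderings of one composition: M! divided by the product of
// the factorials of the multiplicities of its parts.
long long orderings(const vector<int>& part) {
    map<int, int> count;
    for (int v : part) ++count[v];
    long long result = 1;
    int placed = 0;
    for (auto& kv : count) {
        // Multiply by C(placed + c, c) for each distinct value; the product is M!/(prod c_v!).
        for (int i = 1; i <= kv.second; ++i)
            result = result * (placed + i) / i;   // exact at every step
        placed += kv.second;
    }
    return result;
}

int main() {
    cout << orderings({0, 0, 1}) << "\n";     // 3, as in the N=1, M=3 example
    cout << orderings({0, 0, 0, 4}) << "\n";  // 4, as in the N=4, M=4 example
    return 0;
}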
Looks like I'm also getting it just by explaining it here. Thinking.
Thanks.
You can convert an int to a partitioning as follows:
vector<int> part(int i, int n, int m)
{
    int r = n;                    // r is num items remaining to be allocated
    vector<int> result(m, 0);     // m entries inited to 0
    for (int j = 0; j < m-1; j++)
    {
        if (r == 0)               // if none left stop
            break;
        int k = i % r;            // mod out next bucket
        i /= r;                   // divide out bucket
        result[j] = k;            // assign bucket
        r -= k;                   // remove assigned items from remaining
    }
    result[m-1] = r;              // put remainder in last bucket
    return result;
}
So you can use this as follows:
for (int i = 0; true; i++)
{
    vector<int> p = part(i, 3, 4);
    if (i != 0 && p.back() == 3)  // last part
        break;
    ...                           // use p
};
It should be clear from this how to make an incremental version of part too.
A much simpler and mathematical approach:
This problem is equivalent to finding the coefficient of x^N in the expression f(x) = (1+x+x^2+x^3+....+x^N)^M
f(x) = ((x^(N+1) - 1)/(x - 1))^M
Differentiate it N times, and the coefficient will be (1/N!)*(d^N f(x)/dx^N) at x = 0.
Differentiation can be done using any numerical differentiation technique, so the complexity of the algorithm is O(N * complexity_of_differentiation).
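As a side note (mine, not the answer's): the coefficient of x^N in (1 + x + ... + x^N)^M is just the number of weak compositions of N into M parts, C(N + M - 1, M - 1), so it can also be evaluated directly, for example:
#include <iostream>
using namespace std;

// Number of weak compositions of n into m parts: C(n + m - 1, m - 1),
// computed with the multiplicative formula (exact at every step).
long long compositions(long long n, long long m) {
    long long result = 1;
    for (long long i = 1; i <= m - 1; ++i)
        result = result * (n + i) / i;
    return result;
}

int main() {
    cout << compositions(1, 3) << "\n";  // 3, matching the N=1, M=3 example
    cout << compositions(3, 2) << "\n";  // 4, matching the N=3, M=2 example
    cout << compositions(4, 4) << "\n";  // 35
    return 0;
}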

How to calculate the index (lexicographical order) when the combination is given

I know that there is an algorithm that, given a combination of numbers (no repetitions, order ignored), calculates its index in the lexicographic order.
It would be very useful for my application to speed things up...
For example:
combination(10, 5)
1 - 1 2 3 4 5
2 - 1 2 3 4 6
3 - 1 2 3 4 7
....
251 - 5 7 8 9 10
252 - 6 7 8 9 10
I need that the algorithm returns the index of the given combination.
e.g.: index( 2, 5, 7, 8, 10 ) --> index
EDIT: actually I'm using a Java application that generates all C(53, 5) combinations and inserts them into a TreeMap.
My idea is to create an array that contains all combinations (and related data) that I can index with this algorithm.
All of this is to speed up combination searching.
However, I tried some (not all) of your solutions, and the algorithms you proposed are slower than a get() from the TreeMap.
If it helps: my need is for combinations of 5 out of 53, with values ranging from 0 to 52.
Thank you again to all :-)
Here is a snippet that will do the work.
#include <iostream>

int main()
{
    const int n = 10;
    const int k = 5;
    int combination[k] = {2, 5, 7, 8, 10};
    int index = 0;
    int j = 0;
    for (int i = 0; i != k; ++i)
    {
        for (++j; j != combination[i]; ++j)
        {
            index += c(n - j, k - i - 1);
        }
    }
    std::cout << index + 1 << std::endl;
    return 0;
}
It assumes you have a function
int c(int n, int k);
that will return the number of combinations of choosing k elements out of n elements.
The loop calculates the number of combinations preceding the given combination.
By adding one at the end we get the actual index.
For the given combination there are
c(9, 4) = 126 combinations containing 1 and hence preceding it in lexicographic order.
Of the combinations containing 2 as the smallest number there are
c(7, 3) = 35 combinations having 3 as the second smallest number
c(6, 3) = 20 combinations having 4 as the second smallest number
All of these are preceding the given combination.
Of the combinations containing 2 and 5 as the two smallest numbers there are
c(4, 2) = 6 combinations having 6 as the third smallest number.
All of these are preceding the given combination.
Etc.
If you put a print statement in the inner loop you will get the numbers
126, 35, 20, 6, 1.
Hope that explains the code.
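For completeness, here is one way the assumed helper int c(int n, int k) could look (my sketch, not part of the original answer), using the multiplicative formula so that intermediate values stay small:
#include <cstdint>
#include <iostream>

// One possible implementation of the assumed helper c(n, k) = "n choose k".
// Returns 0 when k < 0 or k > n.
int c(int n, int k)
{
    if (k < 0 || k > n) return 0;
    if (k > n - k) k = n - k;                    // exploit symmetry
    std::int64_t result = 1;
    for (int i = 1; i <= k; ++i)
        result = result * (n - k + i) / i;       // division is always exact here
    return static_cast<int>(result);
}

int main()
{
    // The values quoted in the explanation above:
    std::cout << c(9, 4) << " " << c(7, 3) << " " << c(6, 3) << " "
              << c(4, 2) << " " << c(0, 0) << "\n";   // 126 35 20 6 1
    return 0;
}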
Convert your number selections to a factorial base number. This number will be the index you want. Technically this calculates the lexicographical index of all permutations, but if you only give it combinations, the indexes will still be well ordered, just with some large gaps for all the permutations that come in between each combination.
Edit: pseudocode removed, it was incorrect, but the method above should work. Too tired to come up with correct pseudocode at the moment.
Edit 2: Here's an example. Say we were choosing a combination of 5 elements from a set of 10 elements, like in your example above. If the combination was 2 3 4 6 8, you would get the related factorial base number like so:
Take the unselected elements and count how many you have to pass by to get to the one you are selecting.
1 2 3 4 5 6 7 8 9 10
2 -> 1
1 3 4 5 6 7 8 9 10
3 -> 1
1 4 5 6 7 8 9 10
4 -> 1
1 5 6 7 8 9 10
6 -> 2
1 5 7 8 9 10
8 -> 3
So the index in factorial base is 1112300000
In decimal base, it's
1*9! + 1*8! + 1*7! + 2*6! + 3*5! = 410040
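This is not Null Set's removed pseudocode, but a small C++ sketch (mine; perm_style_index is a made-up name) of the bookkeeping described above, reproducing the worked example:
#include <cstdint>
#include <iostream>
#include <vector>
using namespace std;

int64_t factorial(int k) {
    int64_t f = 1;
    for (int i = 2; i <= k; ++i) f *= i;
    return f;
}

// For each selected element, count the unselected elements passed over, then
// weight those digits by (n-1)!, (n-2)!, ...
int64_t perm_style_index(const vector<int>& comb, int n) {
    vector<int> remaining;
    for (int v = 1; v <= n; ++v) remaining.push_back(v);
    int64_t index = 0;
    for (size_t pos = 0; pos < comb.size(); ++pos) {
        size_t skipped = 0;
        while (remaining[skipped] != comb[pos]) ++skipped;    // elements passed over
        index += (int64_t)skipped * factorial(n - 1 - (int)pos);
        remaining.erase(remaining.begin() + skipped);         // element is now "used"
    }
    return index;
}

int main() {
    cout << perm_style_index({2, 3, 4, 6, 8}, 10) << "\n";    // 410040, as computed above
    return 0;
}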
This is Algorithm 2.7 kSubsetLexRank on page 44 of Combinatorial Algorithms by Kreher and Stinson.
r = 0
t[0] = 0
for i from 1 to k
    if t[i - 1] + 1 <= t[i] - 1
        for j from t[i - 1] + 1 to t[i] - 1
            r = r + choose(n - j, k - i)
return r
The array t holds your values, for example [5 7 8 9 10]. The function choose(n, k) calculates the number "n choose k". The result value r is the zero-based rank, 250 for the example; add 1 to get the 1-based index 251 used in the question. The other inputs are n and k; for the example they would be 10 and 5.
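For reference, a direct C++ transcription of that pseudocode (my code; choose is implemented with the usual multiplicative formula):
#include <iostream>
#include <vector>
using namespace std;

long long choose(long long n, long long k) {
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (long long i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

// t holds the 1-based k-subset in increasing order, e.g. {5, 7, 8, 9, 10}.
long long kSubsetLexRank(const vector<int>& t, int k, int n) {
    long long r = 0;
    int prev = 0;                                   // plays the role of t[0] = 0
    for (int i = 1; i <= k; ++i) {
        for (int j = prev + 1; j <= t[i - 1] - 1; ++j)
            r += choose(n - j, k - i);
        prev = t[i - 1];
    }
    return r;                                       // zero-based rank
}

int main() {
    vector<int> t = {5, 7, 8, 9, 10};
    cout << kSubsetLexRank(t, 5, 10) << "\n";       // 250; add 1 for the 1-based index 251
    return 0;
}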
Zero-based:
# v: array of length k consisting of numbers between 0 and n-1 (ascending)
# combi[i, j] is assumed to be a precomputed table of binomial coefficients C(i, j)
def index_of_combination(n, k, v):
    idx = 0
    for p in range(k - 1):
        if p == 0: arrg = range(1, v[p] + 1)
        else: arrg = range(v[p - 1] + 2, v[p] + 1)
        for a in arrg:
            idx += combi[n - a, k - 1 - p]
    idx += v[k - 1] - v[k - 2] - 1
    return idx
Null Set has the right approach. The index corresponds to the factorial-base number of the sequence. You build a factorial-base number just like any other base number, except that the base decreases for each digit.
Now, the value of each digit in the factorial-base number is the number of elements less than it that have not yet been used. So, for combination(10, 5):
(1 2 3 4 5) == 0*9!/5! + 0*8!/5! + 0*7!/5! + 0*6!/5! + 0*5!/5!
== 0*3024 + 0*336 + 0*42 + 0*6 + 0*1
== 0
(10 9 8 7 6) == 9*3024 + 8*336 + 7*42 + 6*6 + 5*1
== 30239
It should be pretty easy to calculate the index incrementally.
If you have a set of non-negative integers 0 <= x_1 < x_2 < ... < x_k, then you could use something called the squashed order:
I = sum(j=1..k) Choose(x_j,j)
The beauty of the squashed order is that it works independent of the largest value in the parent set.
The squashed order is not the order you are looking for, but it is related.
The way to use the squashed order to get the lexicographic order on the set of k-subsets of {1,...,n} is to take
1 <= x1 < ... < x_k <=n
compute
0 <= n-x_k < n-x_(k-1) ... < n-x_1
Then compute the squashed order index of (n-x_k, ..., n-x_1).
Then subtract the squashed order index from Choose(n,k) to get your result, which is the lexicographic index.
If you have relatively small values of n and k, you can cache all the values Choose(a,b) with a <= n and b <= k.
See Anderson, Combinatorics on Finite Sets, pp 112-119
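A short C++ sketch of that recipe (mine; lexIndexViaSquashed and choose are made-up names), checked against the question's example:
#include <iostream>
#include <vector>
using namespace std;

long long choose(long long n, long long k) {
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (long long i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

// x is the subset in increasing order, values in 1..n; returns the 1-based lex index.
long long lexIndexViaSquashed(const vector<int>& x, int n) {
    int k = x.size();
    long long squashed = 0;
    for (int j = 1; j <= k; ++j)
        squashed += choose(n - x[k - j], j);   // squashed index of (n - x_k, ..., n - x_1)
    return choose(n, k) - squashed;
}

int main() {
    cout << lexIndexViaSquashed({5, 7, 8, 9, 10}, 10) << "\n";  // 251, matching the question
    return 0;
}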
I also needed the same thing for a project of mine, and the fastest solution I found was this (Python):
import math

def nCr(n, r):
    f = math.factorial
    return f(n) / f(r) / f(n-r)

def index(comb, n, k):
    r = nCr(n, k)
    for i in range(k):
        if n - comb[i] < k - i: continue
        r = r - nCr(n - comb[i], k - i)
    return r
My input "comb" contained elements in increasing order You can test the code with for example:
import itertools
k = 3
t = [1, 2, 3, 4, 5]
for x in itertools.combinations(t, k):
    print x, index(x, len(t), k)
It is not hard to prove that if comb=(a1,a2,a3...,ak) (in increasing order) then:
index=[nCk-(n-a1+1)Ck] + [(n-a1)C(k-1)-(n-a2+1)C(k-1)] + ... =
nCk -(n-a1)Ck -(n-a2)C(k-1) - .... -(n-ak)C1
There's another way to do all this. You could generate all possible combinations and write them into a binary file, where each comb is represented by its index starting from zero. Then, when you need to find an index and the combination is given, you apply a binary search on the file. Here's the function. It's written in VB.NET 2010 for my lotto program; it works with the Israeli lottery system, so there's a bonus (7th) number; just ignore it.
Public Function Comb2Index( _
ByVal gAr() As Byte) As UInt32
Dim mxPntr As UInt32 = WHL.AMT.WHL_SYS_00 '(16.273.488)
Dim mdPntr As UInt32 = mxPntr \ 2
Dim eqCntr As Byte
Dim rdAr() As Byte
modBinary.OpenFile(WHL.WHL_SYS_00, _
FileMode.Open, FileAccess.Read)
Do
modBinary.ReadBlock(mdPntr, rdAr)
RP: If eqCntr = 7 Then GoTo EX
If gAr(eqCntr) = rdAr(eqCntr) Then
eqCntr += 1
GoTo RP
ElseIf gAr(eqCntr) < rdAr(eqCntr) Then
If eqCntr > 0 Then eqCntr = 0
mxPntr = mdPntr
mdPntr \= 2
ElseIf gAr(eqCntr) > rdAr(eqCntr) Then
If eqCntr > 0 Then eqCntr = 0
mdPntr += (mxPntr - mdPntr) \ 2
End If
Loop Until eqCntr = 7
EX: modBinary.CloseFile()
Return mdPntr
End Function
P.S. It takes 5 to 10 mins to generate 16 million combs on a Core 2 Duo. To find the index using binary search on file takes 397 milliseconds on a SATA drive.
Assuming the maximum setSize is not too large, you can simply generate a lookup table, where the inputs are encoded this way:
int index(a,b,c,...)
{
int key = 0;
key |= 1<<a;
key |= 1<<b;
key |= 1<<c;
//repeat for all arguments
return Lookup[key];
}
To generate the lookup table, look at this "banker's order" algorithm. Generate all the combinations, and also store the base index for each nItems. (For the example on p6, this would be [0,1,5,11,15]). Note that by storing the answers in the opposite order from the example (LSBs set first), you will only need one table, sized for the largest possible set.
Populate the lookup table by walking through the combinations doing Lookup[combination[i]]=i-baseIdx[nItems]
EDIT: Never mind. This is completely wrong.
Let your combination be (a1, a2, ..., ak-1, ak) where a1 < a2 < ... < ak. Let choose(a,b) = a!/(b!*(a-b)!) if a >= b and 0 otherwise. Then, the index you are looking for is
choose(ak-1, k) + choose(ak-1-1, k-1) + choose(ak-2-1, k-2) + ... + choose (a2-1, 2) + choose (a1-1, 1) + 1
The first term counts the number of k-element combinations such that the largest element is less than ak. The second term counts the number of (k-1)-element combinations such that the largest element is less than ak-1. And, so on.
Notice that the size of the universe of elements to be chosen from (10 in your example) does not play a role in the computation of the index. Can you see why?
Sample solution:
class Program
{
static void Main(string[] args)
{
// The input
var n = 5;
var t = new[] { 2, 4, 5 };
// Helping transformations
ComputeDistances(t);
CorrectDistances(t);
// The algorithm
var r = CalculateRank(t, n);
Console.WriteLine("n = 5");
Console.WriteLine("t = {2, 4, 5}");
Console.WriteLine("r = {0}", r);
Console.ReadKey();
}
static void ComputeDistances(int[] t)
{
var k = t.Length;
while (--k >= 0)
t[k] -= (k + 1);
}
static void CorrectDistances(int[] t)
{
var k = t.Length;
while (--k > 0)
t[k] -= t[k - 1];
}
static int CalculateRank(int[] t, int n)
{
int k = t.Length - 1, r = 0;
for (var i = 0; i < t.Length; i++)
{
if (t[i] == 0)
{
n--;
k--;
continue;
}
for (var j = 0; j < t[i]; j++)
{
n--;
r += CalculateBinomialCoefficient(n, k);
}
n--;
k--;
}
return r;
}
static int CalculateBinomialCoefficient(int n, int k)
{
int i, l = 1, m, x, y;
if (n - k < k)
{
x = k;
y = n - k;
}
else
{
x = n - k;
y = k;
}
for (i = x + 1; i <= n; i++)
l *= i;
m = CalculateFactorial(y);
return l/m;
}
static int CalculateFactorial(int n)
{
int i, w = 1;
for (i = 1; i <= n; i++)
w *= i;
return w;
}
}
The idea behind the scenes is to associate a k-subset with the operation of drawing k elements from the n-sized set. It is a combination, so the overall count of possible items will be C(n, k). That is a clue that we could seek the solution in Pascal's triangle. After a while of comparing manually written examples with the corresponding numbers from Pascal's triangle, we will find the pattern and hence the algorithm.
I used user515430's answer and converted it to Python 3. This also supports non-contiguous values, so you could pass in [1,3,5,7,9] as your pool instead of range(1,11).
from itertools import combinations
from scipy.special import comb
from pandas import Index

debugcombinations = False

class IndexedCombination:

    def __init__(self, _setsize, _poolvalues):
        self.setsize = _setsize
        self.poolvals = Index(_poolvalues)
        self.poolsize = len(self.poolvals)
        self.totalcombinations = 1
        fast_k = min(self.setsize, self.poolsize - self.setsize)
        for i in range(1, fast_k + 1):
            self.totalcombinations = self.totalcombinations * (self.poolsize - fast_k + i) // i
        #fill the nCr cache
        self.choose_cache = {}
        n = self.poolsize
        k = self.setsize
        for i in range(k + 1):
            for j in range(n + 1):
                if n - j >= k - i:
                    self.choose_cache[n - j, k - i] = comb(n - j, k - i, exact=True)
        if debugcombinations:
            print('testnth = ' + str(self.testnth()))

    def get_nth_combination(self, index):
        n = self.poolsize
        r = self.setsize
        c = self.totalcombinations
        #if index < 0 or index >= c:
        #    raise IndexError
        result = []
        while r:
            c, n, r = c*r//n, n-1, r-1
            while index >= c:
                index -= c
                c, n = c*(n-r)//n, n-1
            result.append(self.poolvals[-1 - n])
        return tuple(result)

    def get_n_from_combination(self, someset):
        n = self.poolsize
        k = self.setsize
        index = 0
        j = 0
        for i in range(k):
            setidx = self.poolvals.get_loc(someset[i])
            for j in range(j + 1, setidx + 1):
                index += self.choose_cache[n - j, k - i - 1]
            j += 1
        return index

    #just used to test whether nth_combination from the internet actually works
    def testnth(self):
        n = 0
        _setsize = self.setsize
        mainset = self.poolvals
        for someset in combinations(mainset, _setsize):
            nthset = self.get_nth_combination(n)
            n2 = self.get_n_from_combination(nthset)
            if debugcombinations:
                print(str(n) + ': ' + str(someset) + ' vs ' + str(n2) + ': ' + str(nthset))
            if n != n2:
                return False
            for x in range(_setsize):
                if someset[x] != nthset[x]:
                    return False
            n += 1
        return True

setcombination = IndexedCombination(5, list(range(1, 10 + 1)))
print(str(setcombination.get_n_from_combination([2, 5, 7, 8, 10])))
returns 188
