std::sort makes std::vector invalid - sorting

I found a bug in std::sort, and in some implementations of QuickSort in particular; I do not know whether the problem is in the algorithm in general.
The essence:
When there are fewer than 16 elements, everything is fine, because std::sort uses an insertion sort.
When there are 17 or more elements, quicksort is used, with the recursion depth limited by the logarithm of the number of elements, but the vector manages to get corrupted during the first __introsort_loop iteration.
The vector gets corrupted when there are many identical elements. The corruption shows up as valid iterators being replaced with invalid ones.
Other containers may break too; I did not check.
Here is an example, kept simple with a vector of int; for more complex objects it crashes during sorting, because an invalid object is passed to the comparison function:
#include <iostream>
#include <vector>
#include <algorithm>
void quickSort(int arr[], int left, int right) {
    int i = left, j = right;
    int tmp;
    int pivot = arr[(left + right) / 2];
    /* partition */
    while (i <= j) {
        while (arr[i] < pivot)
            i++;
        while (arr[j] > pivot)
            j--;
        if (i <= j) {
            tmp = arr[i];
            arr[i] = arr[j];
            arr[j] = tmp;
            i++;
            j--;
        }
    }
    /* recursion */
    if (left < j)
        quickSort(arr, left, j);
    if (i < right)
        quickSort(arr, i, right);
}
int main()
{
    for (int i = 0; i < 1; i++)
    {
        //std::vector<int> v({5, 6, 1, 6, 2, 6, 3, 6, 4, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6}); // reproducible with this
        std::vector<int> v(19, 6); // reproducible with this also
        std::sort(std::begin(v), std::end(v), [&v](const int & left, const int & right)
            {
                // std::cout << " left=" << left << ", right=" << right << std::endl;
                bool b = left <= right;
                return b;
            }
        );
        // quickSort(v.data(), 0, v.size() - 1); // note: right is an inclusive index
        for (const auto & result : v)
        {
            std::cout << "results: " << result << std::endl;
        }
    }
    std::cout << "Hello World!\n";
}
Has anyone else encountered this behavior with quicksort?

You must save the pivot value at the beginning, like this:
void quickSort(int arr[], int left, int right) {
    int i = left, j = right;
    int pivot = arr[left];
After that it will work.

I tried out your code, and it seems that the problem is with vectors created with the fill constructor, vector(n, val). Vectors manually filled with 16, 17, 18, and 19 random elements show no problems.


Integer Iterator is less than Returned size of Vector (C++)

I am currently writing a function to return Pascal's Triangle represented as vectors. When writing the nested for loop within the function, I noticed that the function was returning empty vectors. Going through the debugger, I realized that the inner for loop never runs.
The code is as follows:
vector<vector<int>> generate(int numRows) {
    vector<vector<int>> res = { {1} };
    int k;
    for (int i = 0; i < numRows; i++)
    {
        vector<int> c = {};
        cout << res[i].size() << endl;
        for (k = -1; k < res[i].size(); k++)
        {
            if (k == -1 || k == res[i].size() - 1)
            {
                c.push_back(1);
            }
            else
            {
                c.push_back(res[i][k] + res[i][k + 1]);
            }
        }
        res.push_back(c);
    }
    return res;
}
I have changed the iterator variable name multiple times, and have switched the iterator type to size_t. However, the inner for loop still does not run.
I tried printing out the iterator k (revealed to be -1) and the size of the first element in the res vector (revealed to be 1). However, when running:
cout << (k < res[i].size()) << endl;
the output was 0.
Take extra care when mixing signed and unsigned types.
As explained by @YahavBoneh in the comment above, signed types are converted to unsigned when both are used in a comparison. In this case, a k value of -1, when converted to unsigned, turns into quite a big number (demo).
If possible, let your compiler warn you about it (e.g. in gcc, using -Wall -Wextra; demo).
Since you seem to be working only with signed types, a good way to avoid introducing unsigned types into play is to use std::ssize (since C++20).
[Demo]
#include <fmt/ranges.h>
#include <iostream>
#include <vector>

std::vector<std::vector<int>> generate(int numRows) {
    std::vector<std::vector<int>> res{{1}};
    for (auto i = 0; i < numRows; i++) {
        std::vector<int> c{};
        auto width{ std::ssize(res[i]) };
        for (auto k = -1; k < width; k++) {
            if (k == -1 || k == width - 1) {
                c.push_back(1);
            } else {
                c.push_back(res[i][k] + res[i][k + 1]);
            }
        }
        res.push_back(c);
    }
    return res;
}

int main() {
    fmt::print("{}", fmt::join(generate(3), "\n"));
}
// Outputs:
//
// [1]
// [1, 1]
// [1, 2, 1]
// [1, 3, 3, 1]

Longest increasing subarray after adding to or subtracting from some elements an amount of at most K

Given an array, we can add to or subtract from some elements an amount of at most K, to make the longest increasing (non-decreasing) subarray.
Example: for the array a = [6, 4, 3, 2] and K = 1, we can subtract 1 from a[2] and add 1 to a[4] (1-based), so the array becomes a = [6, 3, 3, 3] and the longest such subarray is [3, 3, 3].
An algorithm of complexity O(n) is possible, using a "state" approach.
For each index i, the state corresponds to the three values that we can get: A[i]-K, A[i], A[i]+K.
Then, for a given index, for each state s = 0, 1, 2, we can calculate the maximum length of an increasing sequence terminating at this state:
length[i+1][s] = 1 + max(length[i][s'] : val[i][s'] <= val[i+1][s], for s' = 0, 1, 2)
We can use the fact that length[i][s] is increasing with s.
In practice, if we are only interested in the final maximum length, we don't need to memorize all the length values.
Here is a simple C++ implementation to illustrate this algorithm. It only provides the maximum length.
#include <iostream>
#include <vector>
#include <array>
#include <string>

struct Status {
    std::array<int, 3> val;
    std::array<int, 3> l_seq; // sequence lengths
};

int longest_ascending_seq (const std::vector<int>& A, int K) {
    int max_length = 0;
    int n = A.size();
    if (n == 0) return 0;
    Status previous, current;
    previous = {{A[0]-K, A[0]-K, A[0]-K}, {0, 0, 0}};
    for (int i = 0; i < n; ++i) {
        current.val = {A[i]-K, A[i], A[i]+K};
        for (int j = 0; j < 3; ++j) {
            int x = current.val[j];
            if (x >= previous.val[2]) {
                current.l_seq[j] = previous.l_seq[2] + 1;
            } else if (x >= previous.val[1]) {
                current.l_seq[j] = previous.l_seq[1] + 1;
            } else if (x >= previous.val[0]) {
                current.l_seq[j] = previous.l_seq[0] + 1;
            } else {
                current.l_seq[j] = 1;
            }
        }
        if (current.l_seq[2] > max_length) max_length = current.l_seq[2];
        std::swap (previous, current);
    }
    return max_length;
}

int main() {
    std::vector<int> A = {6, 4, 3, 2, 0};
    int K = 1;
    auto ans = longest_ascending_seq (A, K);
    std::cout << ans << std::endl;
    return 0;
}

Find the missing number between [0, n] (n and numbers from 0 to n-1 are given by the user) using DAC

As a homework, I have to find the missing number from 0 to n using a divide and conquer (DAC) algorithm.
As an input, I get n-1 numbers from [0, n] and n.
I can easily do this with a quicksort and then just see which number is missing, but that would mean the complexity of my algorithm will be O(n*log n).
I'm wondering if there is any way I can do lower than that.
I was thinking that I might get the sum of the input (somehow) using DAC, and then the missing number will be n*(n+1)/2 - sum. This would be O(n) complexity.
Is there any other way to get a complexity lower than O(n) (without using any space) and also, is my idea a good one? If not, can you give me other ideas for this problem, please?
Thanks.
Edit:
I know I should post another question, but I can post only once every 90 minutes (as I recall) and I want to finish this problem now if possible.
How can I calculate the sum of an array using DAC?
int DAC(int low, int high, int a[], int& s)
{
    if (low <= high)
    {
        int pivot = (low + high)/2;
        s += DAC(low, pivot - 1, a, s);
        s += DAC(pivot+1, high, a, s);
        return a[pivot];
    }
}
for this call
cout << DAC(0, n-1, a, s);
Input:
7
1 2 3 4 5 6 7
I get 4 and I don't understand why. I didn't expect it to return only 4.
Edit 2:
I was getting that because I had to cout << s, not DAC; I'm sorry.
Now I get 52 for the following code, with input: n=7, a=1 2 3 4 5 6 7
#include <iostream>
#include <algorithm>
using namespace std;

void citire(int& n, int a[])
{
    cin >> n;
    for (int i = 0; i < n; i++)
    {
        cin >> a[i];
    }
}

int DAC(int low, int high, int a[], int& s)
{
    if (low <= high)
    {
        int pivot = (low + high)/2;
        s += DAC(low, pivot - 1, a, s);
        s += DAC(pivot+1, high, a, s);
        return a[pivot];
    }
}

int main() {
    int a[100], n, s = 0;
    citire(n, a);
    DAC(0, n-1, a, s);
    cout << s;
    return 0;
}
As s is modified internally, the function DAC doesn't have to return anything.
#include <iostream>
#include <algorithm>

void citire(int& n, int a[])
{
    std::cin >> n;
    for (int i = 0; i < n; i++)
    {
        std::cin >> a[i];
    }
}

void DAC(int low, int high, int a[], int& s)
{
    if (low <= high)
    {
        int pivot = (low + high)/2;
        DAC(low, pivot - 1, a, s);
        DAC(pivot+1, high, a, s);
        s += a[pivot];
    }
}

int main() {
    int a[100], n, s = 0;
    citire(n, a);
    DAC(0, n-1, a, s);
    std::cout << s << "\n";
    return 0;
}
But std::accumulate would be much simpler.

What is a computationally efficient way to generate a list of a large number of combinations with certain restraints?

We have a set of objects indexed by integers and need to generate a list of pairs of possible combinations of these objects (of any length up to the number of objects) with a constraint. The constraint is that if one combination in a pair contains an object, then the other combination in that pair cannot also contain that object.
As an example, if we have only 3 objects { 0, 1, 2}, the list should look like
{ {0}, {1} }
{ {0}, {2} }
{ {1}, {2} }
{ {0,1}, {2} }
{ {0}, {1,2} }
{ {0,2}, {1} }
What is a computationally efficient way of generating this list for as many as 20 objects in C++?
In each pair, every object is either not used, or it's in the left set, or it's in the right set.
If you have N objects, you can easily iterate through the 3^N possibilities, skipping the ones that result in empty sets:
#include <iostream>
#include <vector>
using namespace std;

int main() {
    unsigned N = 5; // number of objects
    vector<unsigned> left, right;
    for (unsigned index = 0;; ++index) {
        left.clear();
        right.clear();
        // treat the index as a number in base 3;
        // each digit determines the fate of one object
        unsigned digits = index;
        for (unsigned obj = 1; obj <= N; ++obj) {
            unsigned pos = digits % 3;
            digits /= 3;
            if (pos == 1)
                left.push_back(obj);
            else if (pos == 2)
                right.push_back(obj);
        }
        if (digits) {
            // done all possibilities
            break;
        }
        if (left.empty() || right.empty()) {
            // don't want empty left or right
            continue;
        }
        // got one -- print it
        cout << "{ {" << left[0];
        for (size_t i = 1; i < left.size(); ++i)
            cout << "," << left[i];
        cout << "}, {" << right[0];
        for (size_t i = 1; i < right.size(); ++i)
            cout << "," << right[i];
        cout << "} }" << endl;
    }
    return 0;
}
If unsigned is 32 bits, this will work for up to 20 objects. Note that it will print about 3.5 billion pairs in that case, though.
Try it here: https://ideone.com/KIeas7
First, we can decide which elements will be in our pairs.
For example, if the number of elements is 3, consider the binary representations of the numbers from 0 to 2^3 - 1.
0=000
1=001
2=010
3=011
4=100
5=101
6=110
7=111
Now, we make a pair from each number from 0 to 2^n - 1 by keeping the elements at the positions where the number has a 1. For 3 = 011, the first and second positions have a 1, so we make a pair with the first and second elements. For 6 = 110, we make the pair with the second and third elements.
So we can decide which elements to take in each pair in 2^n steps, where n is the number of elements.
Now we know which elements will be in each pair.
For example, suppose that for one pair we selected m elements. Now we need to divide them between the two sides. We can do this in a similar way, by considering the binary representations of the numbers from 0 to 2^m - 1.
If m=3,
0=000
1=001
2=010
3=011
4=100
5=101
6=110
7=111
So from each number from 0 to 2^m - 1, we can make a pair. To make a pair from a number, we keep in the first set the elements whose index has a 0 in that number, and in the second set those whose index has a 1.
C++ code:
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    cin >> n;
    vector<int> object(n); // std::vector instead of a non-standard VLA
    for (int i = 0; i < n; i++) cin >> object[i];
    for (int i = 0; i < (1 << n); i++) {
        // From each i, we decide which elements will be in our set.
        int c = 0;
        vector<int> one_pair(n);
        // Count how many elements are in the current pair.
        for (int j = 0; j < n; j++)
            if (i & (1 << j)) one_pair[c++] = object[j];
        if (c >= 2) {
            // So we have c elements in this pair.
            for (int k = 1; k < (1 << (c - 1)); k++) {
                // Divide the c elements between the two sides.
                cout << "{ {";
                bool fl = 0;
                for (int divider = 0; divider < c; divider++) {
                    if (k & (1 << divider)) {
                        if (fl) cout << ",";
                        fl = 1;
                        cout << one_pair[divider];
                    }
                }
                cout << "}, ";
                cout << "{";
                fl = 0;
                for (int divider = 0; divider < c; divider++) {
                    if ((k & (1 << divider)) == 0) {
                        if (fl) cout << ",";
                        fl = 1;
                        cout << one_pair[divider];
                    }
                }
                cout << "} }" << endl;
            }
        }
    }
    return 0;
}
Input:
3
0 1 2
Output:
{ {0}, {1} }
{ {0}, {2} }
{ {1}, {2} }
{ {0}, {1,2} }
{ {1}, {0,2} }
{ {0,1}, {2} }
Here's a recursive version that takes each combination in the powerset with more than one element and runs a "put in one bag or the other" routine. (It's pretty much my first time trying to code anything more than trivial in C++ so I imagine there may be room for improvement.)
Powerset:
{{0, 1, 2}, {0, 1}, {0, 2}, {0}, {1, 2}, {1}, {2}, {}}
{{0, 1}, {2}}
{{0, 2}, {1}}
{{0}, {1, 2}}
{{0}, {1}}
{{0}, {2}}
{{1}, {2}}
Code (also here):
#include <iostream>
#include <vector>
using namespace std;

void print(std::vector<int> const &input) {
    std::cout << '{';
    for (size_t i = 0; i < input.size(); i++) {
        std::cout << input.at(i);
        if (i < input.size() - 1)
            std::cout << ", ";
    }
    std::cout << '}';
}

void printMany(std::vector< std::vector<int> > const &input)
{
    std::cout << '{';
    for (size_t i = 0; i < input.size(); i++) {
        print(input.at(i));
        if (i < input.size() - 1)
            std::cout << ", ";
    }
    std::cout << '}';
    std::cout << '\n';
}

void printPairs(std::vector< std::vector<int> > const &input)
{
    for (size_t i = 0; i < input.size(); i += 2) {
        cout << '{';
        print(input.at(i));
        cout << ", ";
        print(input.at(i + 1));
        cout << "}\n";
    }
}

std::vector< std::vector<int> > f(std::vector<int> const &A, size_t i, const std::vector<int> &left, const std::vector<int> &right) {
    if (i == A.size() - 1 && right.empty())
        return std::vector< std::vector<int> >{left, std::vector<int> {A[i]}};
    if (i == A.size())
        return std::vector< std::vector<int> > {left, right};
    std::vector<int> _left{ left };
    _left.emplace_back(A[i]);
    std::vector< std::vector<int> > result = f(A, i + 1, _left, right);
    std::vector<int> _right{ right };
    _right.emplace_back(A[i]);
    std::vector< std::vector<int> > result1 = f(A, i + 1, left, _right);
    result.insert(result.end(), result1.begin(), result1.end());
    return result;
}

std::vector< std::vector<int> > powerset(std::vector<int> const &A, const vector<int>& prefix = vector<int>(), size_t i = 0) {
    if (i == A.size())
        return std::vector< std::vector<int> > {prefix};
    std::vector<int> _prefix(prefix);
    _prefix.emplace_back(A[i]);
    std::vector< std::vector<int> > result = powerset(A, _prefix, i + 1);
    std::vector< std::vector<int> > result1 = powerset(A, prefix, i + 1);
    result.insert(result.end(), result1.begin(), result1.end());
    return result;
}

int main() {
    std::vector<int> A{0, 1, 2};
    std::vector< std::vector<int> > ps = powerset(A);
    cout << "Powerset:\n";
    printMany(ps);
    cout << "\nResult:\n";
    for (size_t i = 0; i < ps.size(); i++) {
        if (ps.at(i).size() > 1) {
            std::vector<int> left{ps.at(i)[0]};
            std::vector<int> right;
            printPairs(f(ps.at(i), 1, left, right));
        }
    }
    return 0;
}

How to find the subarray that has sum closest to zero or a certain value t in O(nlogn)

This is actually problem #10 of chapter 8 of Programming Pearls, 2nd edition. It asks two questions: given an array A[] of integers (positive and non-positive), how can you find a continuous subarray of A[] whose sum is closest to 0? Or closest to a certain value t?
I can think of a way to solve the closest-to-0 problem: calculate the prefix sum array S[], where S[i] = A[0] + A[1] + ... + A[i], then sort S by element value, keeping the original index information. To find the subarray sum closest to 0, just iterate over the sorted S, take the difference of each pair of neighboring values, and update the minimum absolute difference.
The question is: what is the best way to solve the second problem, closest to a certain value t? Can anyone give code, or at least an algorithm? (If anyone has a better solution to the closest-to-zero problem, answers are welcome too.)
To solve this problem, you can build an interval tree yourself, or a balanced binary search tree, or even benefit from the STL map, in O(n log n).
The following uses std::map with lower_bound().
#include <map>
#include <iostream>
#include <algorithm>
using namespace std;

int A[] = {10, 20, 30, 30, 20, 10, 10, 20};

// return (i, j) s.t. A[i] + ... + A[j] is nearest to value c
pair<int, int> nearest_to_c(int c, int n, int A[]) {
    map<int, int> bst;
    bst[0] = -1;
    // barriers
    bst[-int(1e9)] = -2;
    bst[int(1e9)] = n;
    int sum = 0, start, end, ret = c;
    for (int i = 0; i < n; ++i) {
        sum += A[i];
        // it->first >= sum-c, and with the minimal value in bst
        map<int, int>::iterator it = bst.lower_bound(sum - c);
        int tmp = -(sum - c - it->first);
        if (tmp < ret) {
            ret = tmp;
            start = it->second + 1;
            end = i;
        }
        --it;
        // it->first < sum-c, and with the maximal value in bst
        tmp = sum - c - it->first;
        if (tmp < ret) {
            ret = tmp;
            start = it->second + 1;
            end = i;
        }
        bst[sum] = i;
    }
    return make_pair(start, end);
}

// demo
int main() {
    int c;
    cin >> c;
    pair<int, int> ans = nearest_to_c(c, 8, A);
    cout << ans.first << ' ' << ans.second << endl;
    return 0;
}
You can adapt your method. Assume you have an array S of prefix sums, as you wrote, already sorted in increasing order of sum value. The key concept is to not only examine consecutive prefix sums, but instead use two pointers to indicate two positions in the array S. Written in (slightly pythonic) pseudocode:
left = 0    # Initialize window of length 0 ...
right = 0   # ... at the beginning of the array
best = ∞    # Keep track of best solution so far
while right < length(S):        # Iterate until window reaches the end of the array
    diff = S[right] - S[left]
    if diff < t:                # Window is getting too small
        if t - diff < best:     # We have a new best subarray
            best = t - diff
            # remember left and right as well
        right = right + 1       # Make window bigger
    else:                       # Window getting too big
        if diff - t < best:     # We have a new best subarray
            best = diff - t
            # remember left and right as well
        left = left + 1         # Make window smaller
The complexity is bound by the sorting. The above search will take at most 2n=O(n) iterations of the loop, each with computation time bound by a constant. Note that the above code was conceived for positive t.
The code was conceived for positive elements in S, and positive t. If any negative integers crop up, you might end up with a situation where the original index of right is smaller than that of left. So you'd end up with a subsequence sum of -t. You can check this condition in the `if ... < best` checks, but if you only suppress such cases there, I believe you might be missing some relevant cases. Bottom line: take this idea, think it through, and adapt it for negative numbers.
Note: I think that this is the same general idea which Boris Strandjev wanted to express in his solution. However, I found that solution somewhat hard to read and harder to understand, so I'm offering my own formulation of this.
Your solution for the 0 case seems OK to me. Here is my solution for the second case:
You again calculate the prefix sums and sort.
You initialize two indices: start to 0 (the first index in the sorted prefix array) and end to last (the last index of the prefix array).
You iterate start over 0...last, and for each start you find the corresponding end - the last index at which the prefix sum is such that prefix[start] + prefix[end] > t. When you find that end, the best solution for start is either prefix[start] + prefix[end] or prefix[start] + prefix[end - 1] (the latter taken only if end > 0).
Most importantly, you do not search for end from scratch for each start - prefix[start] increases in value as you iterate over all possible values of start, which means that in each iteration you are interested only in values <= the previous value of end.
You can stop iterating when start > end.
You take the best of all values obtained for all start positions.
It can easily be proved that this gives O(n log n) complexity for the entire algorithm.
I found this question by accident. Although it's been a while, I'll just post it. O(n log n) time, O(n) space algorithm. This is running Java code. Hope this helps people.
import java.util.*;

public class FindSubarrayClosestToZero {
    void findSubarrayClosestToZero(int[] A) {
        int curSum = 0;
        List<Pair> list = new ArrayList<Pair>();

        // 1. create prefix array: curSum array
        for (int i = 0; i < A.length; i++) {
            curSum += A[i];
            Pair pair = new Pair(curSum, i);
            list.add(pair);
        }

        // 2. sort the prefix array by value
        Collections.sort(list, valueComparator);
        // printPairList(list);
        System.out.println();

        // 3. compute pair-wise value diff: Triple<diff, i, i+1>
        List<Triple> tList = new ArrayList<Triple>();
        for (int i = 0; i < A.length - 1; i++) {
            Pair p1 = list.get(i);
            Pair p2 = list.get(i + 1);
            int valueDiff = p2.value - p1.value;
            Triple triple = new Triple(valueDiff, p1.index, p2.index);
            tList.add(triple);
        }
        // printTripleList(tList);
        System.out.println();

        // 4. sort by min diff
        Collections.sort(tList, valueDiffComparator);
        // printTripleList(tList);

        Triple res = tList.get(0);
        int startIndex = Math.min(res.index1 + 1, res.index2);
        int endIndex = Math.max(res.index1 + 1, res.index2);
        System.out.println("\n\nThe subarray whose sum is closest to 0 is: ");
        for (int i = startIndex; i <= endIndex; i++) {
            System.out.print(" " + A[i]);
        }
    }

    class Pair {
        int value;
        int index;
        public Pair(int value, int index) {
            this.value = value;
            this.index = index;
        }
    }

    class Triple {
        int valueDiff;
        int index1;
        int index2;
        public Triple(int valueDiff, int index1, int index2) {
            this.valueDiff = valueDiff;
            this.index1 = index1;
            this.index2 = index2;
        }
    }

    public static Comparator<Pair> valueComparator = new Comparator<Pair>() {
        public int compare(Pair p1, Pair p2) {
            return p1.value - p2.value;
        }
    };

    public static Comparator<Triple> valueDiffComparator = new Comparator<Triple>() {
        public int compare(Triple t1, Triple t2) {
            return t1.valueDiff - t2.valueDiff;
        }
    };

    void printPairList(List<Pair> list) {
        for (Pair pair : list) {
            System.out.println("<" + pair.value + " : " + pair.index + ">");
        }
    }

    void printTripleList(List<Triple> list) {
        for (Triple t : list) {
            System.out.println("<" + t.valueDiff + " : " + t.index1 + " , " + t.index2 + ">");
        }
    }

    public static void main(String[] args) {
        int A1[] = {8, -3, 2, 1, -4, 10, -5}; // -3, 2, 1
        int A2[] = {-3, 2, 4, -6, -8, 10, 11}; // 2, 4, 6
        int A3[] = {10, -2, -7}; // 10, -2, -7
        FindSubarrayClosestToZero f = new FindSubarrayClosestToZero();
        f.findSubarrayClosestToZero(A1);
        f.findSubarrayClosestToZero(A2);
        f.findSubarrayClosestToZero(A3);
    }
}
Solution time complexity : O(NlogN)
Solution space complexity : O(N)
[Note this problem can't be solved in O(N) as some have claimed]
Algorithm:
Compute the cumulative array (here, cum[]) of the given array [Line 10]
Sort the cumulative array [Line 11]
The answer is the minimum among C[i+1] - C[i], for all i in [1, n-1] (1-based index) [Line 12]
C++ code:
#include <bits/stdc++.h>
#define M 1000010
#define REP(i,n) for (int i = 1; i <= n; i++)
using namespace std;
typedef long long ll;

ll a[M], n, cum[M], ans = numeric_limits<ll>::max(); // cum -> cumulative array

int main() {
    ios::sync_with_stdio(false); cin.tie(0); cout.tie(0);
    cin >> n; REP(i, n) cin >> a[i], cum[i] = cum[i-1] + a[i];
    sort(cum + 1, cum + n + 1);
    REP(i, n - 1) ans = min(ans, cum[i+1] - cum[i]);
    cout << ans; // min +ve difference from 0 we can get
}
After more thinking on this problem, I found that @frankyym's solution is the right solution. I have made some refinements on the original solution; here is my code:
#include <map>
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <limits.h>
using namespace std;

#define IDX_LOW_BOUND -2

// Return [i..j] range of A
pair<int, int> nearest_to_c(int A[], int n, int t) {
    map<int, int> bst;
    int presum, subsum, closest, i, j, start, end;
    bool unset;
    map<int, int>::iterator it;

    bst[0] = -1;
    // Barriers. Assume that no prefix sum is equal to INT_MAX or INT_MIN.
    bst[INT_MIN] = IDX_LOW_BOUND;
    bst[INT_MAX] = n;
    unset = true;
    // This initial value is always overwritten afterwards.
    closest = 0;
    presum = 0;
    for (i = 0; i < n; ++i) {
        presum += A[i];
        for (it = bst.lower_bound(presum - t), j = 0; j < 2; --it, j++) {
            if (it->first == INT_MAX || it->first == INT_MIN)
                continue;
            subsum = presum - it->first;
            if (unset || abs(closest - t) > abs(subsum - t)) {
                closest = subsum;
                start = it->second + 1;
                end = i;
                if (closest - t == 0)
                    goto ret;
                unset = false;
            }
        }
        bst[presum] = i;
    }
ret:
    return make_pair(start, end);
}

int main() {
    int A[] = {10, 20, 30, 30, 20, 10, 10, 20};
    int t;
    scanf("%d", &t);
    pair<int, int> ans = nearest_to_c(A, 8, t);
    printf("[%d:%d]\n", ans.first, ans.second);
    return 0;
}
As a side note: I agree with the algorithms provided in other answers here. Another algorithm came to mind recently: make another copy of A[], call it B[], where each element is A[i] - t/n, i.e. B[0] = A[0] - t/n, B[1] = A[1] - t/n, ..., B[n-1] = A[n-1] - t/n. Then the second problem is transformed into the first: once the subarray of B[] with sum closest to 0 is found, the subarray of A[] closest to t is found at the same time. (It is kind of tricky if t is not divisible by n; the precision has to be chosen appropriately. Also, the transformation itself runs in O(n).)
I think there is a little bug in the closest-to-0 solution: at the last step we should inspect not only the differences between neighboring elements, but also between elements not near each other, if one of them is bigger than 0 and the other is smaller than 0.
Sorry - I thought I was supposed to get all answers for the problem. I didn't see that it only requires one.
Can't we use dynamic programming to solve this question, similar to Kadane's algorithm? Here is my solution to this problem. Please comment if this approach is wrong.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int test;
    cin >> test;
    while (test--) {
        int n;
        cin >> n;
        vector<int> A(n);
        for (int i = 0; i < n; i++)
            cin >> A[i];
        int closest_so_far = A[0];
        int closest_end_here = A[0];
        int start = 0;
        int end = 0;
        int lstart = 0;
        int lend = 0;
        for (int i = 1; i < n; i++) {
            if (abs(A[i] - 0) < abs(A[i] + closest_end_here - 0)) {
                closest_end_here = A[i] - 0;
                lstart = i;
                lend = i;
            } else {
                closest_end_here = A[i] + closest_end_here - 0;
                lend = i;
            }
            if (abs(closest_end_here - 0) < abs(closest_so_far - 0)) {
                closest_so_far = closest_end_here;
                start = lstart;
                end = lend;
            }
        }
        for (int i = start; i <= end; i++)
            cout << A[i] << " ";
        cout << endl;
        cout << closest_so_far << endl;
    }
    return 0;
}
Here is a code implementation in Java:
public class Solution {
    /**
     * @param nums: A list of integers
     * @return: A list of integers including the index of the first number
     *          and the index of the last number
     */
    public ArrayList<Integer> subarraySumClosest(int[] nums) {
        int len = nums.length;
        ArrayList<Integer> result = new ArrayList<Integer>();
        int[] sum = new int[len];
        HashMap<Integer, Integer> mapHelper = new HashMap<Integer, Integer>();
        int min = Integer.MAX_VALUE;
        int curr1 = 0;
        int curr2 = 0;
        sum[0] = nums[0];
        if (nums == null || len < 2) {
            result.add(0);
            result.add(0);
            return result;
        }
        for (int i = 1; i < len; i++) {
            sum[i] = sum[i-1] + nums[i];
        }
        for (int i = 0; i < len; i++) {
            if (mapHelper.containsKey(sum[i])) {
                result.add(mapHelper.get(sum[i]) + 1);
                result.add(i);
                return result;
            } else {
                mapHelper.put(sum[i], i);
            }
        }
        Arrays.sort(sum);
        for (int i = 0; i < len - 1; i++) {
            if (Math.abs(sum[i] - sum[i+1]) < min) {
                min = Math.abs(sum[i] - sum[i+1]);
                curr1 = sum[i];
                curr2 = sum[i+1];
            }
        }
        if (mapHelper.get(curr1) < mapHelper.get(curr2)) {
            result.add(mapHelper.get(curr1) + 1);
            result.add(mapHelper.get(curr2));
        } else {
            result.add(mapHelper.get(curr2) + 1);
            result.add(mapHelper.get(curr1));
        }
        return result;
    }
}
