Interview - Find magnitude pole in an array - algorithm

Magnitude Pole: an element in an array such that all elements to its left are less than or equal to it and all elements to its right are greater than or equal to it.
example input
3,1,4,5,9,7,6,11
desired output
4,5,11
I was asked this question in an interview; I had to return the index of the element, and only the first element that meets the condition.
My logic
Take two multisets (so that duplicates are handled as well), one for the elements to the right of the current element (the pole) and one for the elements to its left.
Start with the 0th element and put all remaining elements in the "right set".
Base condition: if this 0th element is less than or equal to every element in the "right set", return its index.
Else put it into the "left set" and move on to the element at index 1.
Traverse the array; at each step pick the maximum value from the "left set" and the minimum value from the "right set" and compare.
At any instant, for the current element, all values to its left are in the "left set" and all values to its right are in the "right set".
Code
int magnitudePole (const vector<int> &A) {
    multiset<int> left, right;
    int left_max, right_min;
    int size = A.size();
    for (int i = 1; i < size; ++i)
        right.insert(A[i]);
    right_min = *(right.begin());
    if (A[0] <= right_min)
        return 0;
    left.insert(A[0]);
    for (int i = 1; i < size; ++i) {
        right.erase(right.find(A[i]));
        left_max = *(--left.end());
        if (right.size() > 0)
            right_min = *(right.begin());
        if (A[i] > left_max && A[i] <= right_min)
            return i;
        else
            left.insert(A[i]);
    }
    return -1;
}
My questions
I was told that my logic is incorrect, but I am not able to understand why (though I have checked it on several cases and it returns the right index).
Out of curiosity, how can this be done in O(n) time without using any set/multiset?

For an O(n) algorithm:
Compute the largest element among n[0]..n[k] for every k in [0, length(n)), saving the answers in an array maxOnTheLeft. This costs O(n).
Compute the smallest element among n[k]..n[length(n)-1] for every k in [0, length(n)), saving the answers in an array minOnTheRight. This costs O(n).
Loop through the whole thing and find any n[k] with maxOnTheLeft[k] <= n[k] <= minOnTheRight[k]. This costs O(n).
And your code is (at least) wrong here:
if (A[i] > left_max && A[i] <= right_min) // <-- should be >= and <=

Create two bool[N] called NorthPole and SouthPole (just to be humorous).
Step forward through A[], tracking the maximum element found so far, and set SouthPole[i] true if A[i] > Max(A[0..i-1]).
Step backward through A[] and set NorthPole[i] true if A[i] < Min(A[i+1..N-1]).
Step forward through NorthPole and SouthPole to find the first element with both set true.
Each step above is O(N), since it visits each element once, so it is O(N) overall.
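For illustration, here is a minimal Python sketch of that bool-array idea (the function name and the use of >= / <= instead of strict comparisons are my own choices, so that duplicates are allowed as in the problem statement):
def magnitude_pole(a):
    n = len(a)
    south = [False] * n  # True if a[i] is >= every element to its left
    north = [False] * n  # True if a[i] is <= every element to its right
    running_max = float('-inf')
    for i in range(n):
        south[i] = a[i] >= running_max
        running_max = max(running_max, a[i])
    running_min = float('inf')
    for i in range(n - 1, -1, -1):
        north[i] = a[i] <= running_min
        running_min = min(running_min, a[i])
    for i in range(n):
        if south[i] and north[i]:
            return i  # first index that is a magnitude pole
    return -1

# magnitude_pole([3, 1, 4, 5, 9, 7, 6, 11]) returns 2 (the element 4)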

Java implementation:
Collection<Integer> magnitudes(int[] A) {
    int length = A.length;
    // what's the maximum number from the beginning of the array till the current position
    int[] maxes = new int[A.length];
    // what's the minimum number from the current position till the end of the array
    int[] mins = new int[A.length];
    // build mins
    int min = mins[length - 1] = A[length - 1];
    for (int i = length - 2; i >= 0; i--) {
        if (A[i] < min) {
            min = A[i];
        }
        mins[i] = min;
    }
    // build maxes
    int max = maxes[0] = A[0];
    for (int i = 1; i < length; i++) {
        if (A[i] > max) {
            max = A[i];
        }
        maxes[i] = max;
    }
    Collection<Integer> result = new ArrayList<>();
    // use them to find the magnitudes, if any exist
    for (int i = 0; i < length; i++) {
        if (A[i] >= maxes[i] && A[i] <= mins[i]) {
            // return here if only the first one is needed
            result.add(A[i]);
        }
    }
    return result;
}

Your logic seems perfectly correct (didn't check the implementation, though) and can be implemented to give an O(n) time algorithm! Nice job thinking in terms of sets.
Your right set can be implemented as a stack which supports min, and the left set as a stack which supports max; this gives an O(n) time algorithm.
Having a stack which supports max/min is a well-known interview question, and it can be done so that each operation (push/pop/min/max) is O(1).
To use this for your logic, the pseudocode will look something like this:
foreach elem in a[n-1 to 0]
    right_set.push(elem)
while (right_set.has_elements()) {
    candidate = right_set.pop();
    if (left_set.has_elements() && left_set.max() <= candidate && candidate <= right_set.min()) {
        break;
    } else if (!left_set.has_elements() && candidate <= right_set.min()) {
        break;
    }
    left_set.push(candidate);
}
return candidate
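For reference, a rough Python sketch of the min-supporting stack building block used above (class and method names are mine; the max-supporting stack for the left set is symmetric):
class MinStack:
    def __init__(self):
        self._data = []  # stores (value, minimum of the stack up to this entry)

    def push(self, x):
        current_min = x if not self._data else min(x, self._data[-1][1])
        self._data.append((x, current_min))

    def pop(self):
        return self._data.pop()[0]

    def min(self):
        return self._data[-1][1]

    def has_elements(self):
        return len(self._data) > 0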

I saw this problem on Codility, solved it with Perl:
sub solution {
    my (@A) = @_;
    my ($max, $min) = ($A[0], $A[-1]);
    my %candidates;
    for my $i (0..$#A) {
        if ($A[$i] >= $max) {
            $max = $A[$i];
            $candidates{$i}++;
        }
    }
    for my $i (reverse 0..$#A) {
        if ($A[$i] <= $min) {
            $min = $A[$i];
            return $i if $candidates{$i};
        }
    }
    return -1;
}

How about the following code? I think its worst-case efficiency is not good, but its expected efficiency should be good.
int getFirstPole(int* a, int n)
{
    int leftPole = a[0];
    for (int i = 1; i < n; i++)
    {
        if (a[i] >= leftPole)
        {
            int j = i;
            for (; j < n; j++)
            {
                if (a[j] < a[i])
                {
                    i = j + 1; // jump the elements between i and j
                    break;
                }
                else if (a[j] > a[i])
                    leftPole = a[j];
            }
            if (j == n) // if no element is less than a[i] then return i
                return i;
        }
    }
    return 0;
}

Create an array of ints called mags and an int variable called maxMag.
For each element in the source array, check whether the element is greater than or equal to maxMag.
If it is: add the element to the mags array and set maxMag = element.
If it isn't: loop through the mags array and remove all candidates greater than the current element.
Result: an array of magnitude poles.
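A rough Python sketch of that candidate-list idea (the function name is mine, and it keeps (index, value) pairs so both the index and the value of each pole are available). Because candidates are appended in non-decreasing order of value, the removal step can simply pop from the end:
def magnitude_poles(a):
    mags = []  # candidate (index, value) pairs, values are non-decreasing
    max_mag = float('-inf')
    for i, x in enumerate(a):
        if x >= max_mag:
            mags.append((i, x))  # x is >= everything to its left
            max_mag = x
        else:
            # x is smaller, so it invalidates every candidate greater than it
            while mags and mags[-1][1] > x:
                mags.pop()
    return mags  # the surviving candidates are the magnitude poles

# magnitude_poles([3, 1, 4, 5, 9, 7, 6, 11]) returns [(2, 4), (3, 5), (7, 11)]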

Interesting question. I have my own solution in C#, given below; read the comments to understand my approach.
public int MagnitudePoleFinder(int[] A)
{
    //Create a variable to store the maximum-valued item, i.e. maxOfUp
    int maxOfUp = A[0];
    //if the list has only one value, its index is the answer
    if (A.Length <= 1) return 0;
    //create a collection for all candidates for magnitude pole that will be found during the iteration
    var magnitudeCandidates = new List<KeyValuePair<int, int>>();
    //add the first element as the first candidate
    var a = A[0];
    magnitudeCandidates.Add(new KeyValuePair<int, int>(0, a));
    //let's iterate
    for (int i = 1; i < A.Length; i++)
    {
        a = A[i];
        //if this item is greater than or equal to all the items above (maxOfUp holds the max of all the items above)
        if (a >= maxOfUp)
        {
            //add it to the candidate list
            magnitudeCandidates.Add(new KeyValuePair<int, int>(i, a));
            maxOfUp = a;
        }
        else
        {
            //remove all the candidates having values greater than this item
            magnitudeCandidates = magnitudeCandidates.Except(magnitudeCandidates.Where(c => c.Value > a)).ToList();
        }
    }
    //if no candidate, return -1
    if (magnitudeCandidates.Count == 0) return -1;
    else
        //return the index of the first candidate
        return magnitudeCandidates.First().Key;
}

Related

I think I've found a good sorting algorithm. Seems to be faster than quickSort?

This works without comparing elements against each other.
First, it finds the largest number in the array and saves it in a variable called "max". Then it creates a temporary array of length max + 1. After that, each "tempArray[i]" counts how often the number "i" occurs in the input array. In the end, it converts "tempArray" back and writes the result into the input array. See for yourself.
static int[] nSort(int[] array) {
int max = array[0];
for(int i = 1; i < array.length; i++) {
max = Math.max(max, array[i]);
}
Integer[] tempArray = new Integer[max+1];
for(int i = 0; i < array.length; i++) {
if(tempArray[array[i]] == null) {
tempArray[array[i]] = 0;
}
tempArray[array[i]]++;
}
for(int[] i = new int[2]; i[0] < max + 1; i[0]++) {
if(tempArray[i[0]] != null) {
while(tempArray[i[0]] > 0) {
array[i[1]] = i[0];
i[1]++;
tempArray[i[0]]--;
}
}
}
return array;
}
I've charted the measured runtime in a graph below, green being my algorithm and red being quicksort.
I've used this quicksort GitHub implementation and measured runtime in the same way as implemented there.
Runtime graph:

Selection Sort algorithm

Selection Sort:
I have created a selection sorting algorithm, but someone told me it is not a proper selection sort.
If it's not, then what type of sort is it, and how does it differ from selection sort?
Code:
void selection_Sort(int arr[], int size) {
    int temp, length = size;
    for (int i = 0; i < size; i++) {
        for (int j = i + 1; j < size; j++) {
            if (arr[i] > arr[j]) {
                temp = arr[j];
                arr[j] = arr[i];
                arr[i] = temp;
            }
        }
    }
}
Please tell me how I can improve it.
To transform this code into selection sort, you have to find the index of the minimal element in the inner cycle, and exchange the element at this index with the i-th element after the inner cycle finishes.
That way the overall number of swaps does not exceed N (while your current code can produce about N^2/2 swaps).
You have implemented Bubble sort.
Selection sort means you should find the lowest (or biggest) element in the inner cycle and then switch it with the element at the edge of the already-sorted part.
There are three similar sorting algorithms - selection sort, insertion sort and bubble sort; you can watch how they behave here: http://i.imgur.com/fq0A8hx.gif
You have to find the minimum element in the inner loop and swap it into place inside the outer loop.
Here is the code:
def selectionSort(arr):
    for i in range(len(arr)):
        # Find the minimum element in the remaining
        # unsorted array
        min_idx = i
        for j in range(i+1, len(arr)):
            if arr[min_idx] > arr[j]:
                min_idx = j
        # Swap the found minimum element with
        # the first element
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

arr = [7,4,5,9,8,2,1]
print(selectionSort(arr))
How does Selection Sort work?
1. Starting from the first element, we search for the smallest element in the array and swap it with the element in the first position.
2. We then move on to the second position and look for the smallest element in the subarray that starts at index 1 and runs to the last index.
3. We replace the element at the second position in the original array (i.e. the first position of that subarray) with the second smallest element.
4. This is repeated until the array is completely sorted.
Selection Sort in Javascript
Comparing each with the rest and swapping with the smallest from the rest
Try this code here: https://repl.it/#VinitKhandelwal/selection-sort-javascript
function selectionSort(arr){
let min;
let i;
let j;
let temp;
console.log("Input Array");
console.log(arr);
for (i = 0; i < arr.length-1; i++) {
min = i;
for (j = i+1; j < arr.length; j++) {
console.log(arr[i], arr[j]);
if (arr[j] < arr[min]) {
console.log(arr[j]);
min = j;
}
}
if (min !== i) {
temp = arr[min];
arr[min] = arr[i];
arr[i] = temp;
console.log(arr);
}
}
console.log("Sorted using Selection Sort");
return arr
}
console.log(selectionSort([5,7,6,9,8,2,1,4,3]));
// console.log(selectionSort([1,2,3,4,5,6,7,8,9])); // uncomment to try best case, i.e. sorted
var Selectionsort = function (A) {
    for (var i = 0; i < A.length; i++) {
        var imin = i;
        for (var j = i + 1; j < A.length; j++) {
            if (A[j] < A[imin])
                imin = j;
        }
        var tmp = A[i];
        A[i] = A[imin];
        A[imin] = tmp;
    }
    return A;
};

var A = [10, 20, 30, 40, 50, 60, 70, 80];
var Aftersorted = Selectionsort(A);
console.log(Aftersorted);
You can improve it this way:
void selectionSort(double array[], int size) {
    int min;
    double temp;
    for (int step = 0; step < size - 1; step++) {
        min = step;
        for (int i = step + 1; i < size; i++) {
            if (array[i] < array[min]) {
                min = i;
            }
        }
        temp = array[step];
        array[step] = array[min];
        array[min] = temp;
    }
}
Selection Sort in C
Selection sort works by selecting the very first element of the unsorted sub-array as the minimum, comparing it with the other elements of the sub-array to find the actual minimum, and then swapping that minimum element with the first element of the sub-array. That's all!
Here goes my code...
#include <stdio.h>
void selectionSort(int n){
int arr[n],i,j,minIndex;
printf("\nInsert %d elements:\n",n);
for(i=0;i<n;i++){
scanf("%d",&arr[i]);
}
printf("Insert complete.\n\n");
printf("Your array looks like:\n");
for(i=0;i<n;i++){
printf("%d ",arr[i]);
}
//Selection Sort Algorithm
for(i=0;i<n-1;i++){
minIndex = i;
for(j=i+1;j<n;j++){
if(arr[j] < arr[minIndex]){
minIndex = j;
}
}
//Swapping elements
int temp = arr[i];
arr[i] = arr[minIndex];
arr[minIndex] = temp;
}
printf("\n\nAfter sorting your array looks like:\n");
for(i=0;i<n;i++){
printf("%d ",arr[i]);
}
}
int main(){
int n;
printf("Enter number of array elements: ");
scanf("%d",&n);
selectionSort(n);
return 0;
}
Result: -

Return true if there is a place to split the array so that the sum of the numbers on one side is equal to the sum of the numbers on the other side

Given an array of numbers return true if there is a place to split the array so that the sum of the numbers on one side is equal to the sum of the numbers on the other side.
Here's as far as I got. Please help:
function splitSum(arr) {
if(arr.length % 2 === 0) {
if ()
}
}
This can be solved in a pretty simple manner:
Just iterate over all possible positions for splitting the array, starting with an empty left chunk and a right chunk equal to the whole input array, and keep the total sum of both chunks. At each step move the first element of the right chunk to the left chunk. The sums change in a simple way: if we move n from the right chunk, subtract n from the right sum and add it to the left sum.
boolean equilibrium(int[] a) {
    int left = 0;
    int right = 0;
    for (int v : a) right += v; // total sum of the array
    for (int splitBefore = 0; splitBefore < a.length; splitBefore++) {
        if (left == right)
            return true;
        left += a[splitBefore];
        right -= a[splitBefore];
    }
    return left == right;
}
public boolean canBalance(int[] nums) {
int left = 0;
int right=0;
for(int k=0;k<nums.length; k++) {
right+=nums[k];
}
for(int i=0; i<nums.length-1; i++) {
if(left!=right) {
left+=nums[i];
right-=nums[i];
}
}
return left==right;
}
Here is a commented answer, hope it explains everything :)
public boolean canBalance(int[] nums) {
int p1 = 0; // a pointer to start of array
int p2 = nums.length-1; // a pointer to end of array
int sum1=0;// sum1 for left side elements sum taken care by p1
int sum2=0;// sum2 for right side elements sum taken care by p2
for(int i=0;i<nums.length;i++){
//run upto the length of array
sum1+=nums[p1]; // summing left side elements
sum2+=nums[p2];//adding right side elements
if(sum1==sum2 && Math.abs(p1-p2) == 1){
//if two sums become equal and the pointers differ by only one position
//then we got the answer
return true;
}
if(sum1 == sum2){
//two sums are equal means elements are equal on both sides
//hence move both pointers
p1++;
p2--;
}
else if(sum1 > sum2){
// sum1 is greater then need to make sum2 bigger hence move p2
p2--;
sum1-= nums[p1];//removing repeated addition when p2 is changed
}
else{
// sum2 is greater then need to make sum1 bigger hence move p1
p1++;
sum2-=nums[p2];//removing repeated addition when p1 is changed
}
}
return false;
}
public boolean canBalance(int[] nums) {
int leftSum = 0;
int rightSum = 0;
for (int i = 0; i < nums.length; i++){
leftSum += nums[i];
for (int j = i+1; j < nums.length; j++){
rightSum += nums[j];
}
if (leftSum == rightSum)
return true;
rightSum = 0;
}
return false;
}
I solved this in a different way. If we want to split the array into two halves with equal sums, the total must be divisible by 2. When you start adding the elements from the left you will reach a point where the left side is equal to half of the total.
public boolean canBalance(int[] nums) {
int total = 0, half = 0;
for(int i = 0; i < nums.length; i++)
{
total = nums[i] + total;
}
if(total % 2 == 1)
{
return false;
}
for(int i = 0; i < nums.length; i++)
{
half = nums[i] + half;
if(half == total / 2)
{
return true;
}
}
return false;
}
A solution in Python:
a = [2,1,1,2,1]
for i in range(len(a)):
    a1 = a[:i+1]
    a2 = a[i+1:]
    if sum(a1) == sum(a2):
        print(a1)
        print(a2)

Linear time algorithm for 2-SUM

Given an integer x and a sorted array a of N distinct integers, design a linear-time algorithm to determine if there exist two distinct indices i and j such that a[i] + a[j] == x.
This is a type of subset-sum problem.
Here is my solution. I don't know if it was known earlier or not. Imagine a 3D plot of a function of two variables i and j:
sum(i,j) = a[i] + a[j]
For every i there is a j such that a[i] + a[j] is closest to x. All these (i,j) pairs form a closest-to-x line. We just need to walk along this line and look for a[i] + a[j] == x:
int i = 0;
int j = lower_bound(a.begin(), a.end(), x) - a.begin();
if (j == (int)a.size()) j--; // clamp: if x exceeds every element, start from the last one
while (j >= 0 && j < (int)a.size() && i < (int)a.size()) {
    int sum = a[i] + a[j];
    if (sum == x) {
        cout << "found: " << i << " " << j << endl;
        return;
    }
    if (sum > x) j--;
    else i++;
    if (i > j) break;
}
cout << " not found\n";
Complexity: O(n)
Think in terms of complements.
Iterate over the list and, for each item, figure out which number is needed to reach x. Keep the numbers seen so far in a hash; while iterating, check whether the complement of the current number is already in the hash. If so, a pair is found.
Edit: and as I have some time, some pseudo-ish code.
boolean find(int[] array, int x) {
    HashSet<Integer> s = new HashSet<Integer>();
    for (int i = 0; i < array.length; i++) {
        // is there an earlier element that complements array[i]?
        if (s.contains(x - array[i])) {
            return true;
        }
        s.add(array[i]);
    }
    return false;
}
Given that the array is sorted (WLOG in ascending order), we can do the following:
Algorithm A_1:
We are given (a_1, ..., a_n, m) with a_1 < ... < a_n.
Put a pointer at the top of the list and one at the bottom.
Compute the sum where both pointers are.
If the sum is greater than m, move the upper pointer down.
If the sum is less than m, move the lower pointer up.
If the pointers cross (here we assume each number can be used only once), report unsat.
Otherwise (a sum equal to m will be found), report sat.
It is clear that this is O(n) since the maximum number of sums computed is exactly n. The proof of correctness is left as an exercise.
This is merely a subroutine of the Horowitz and Sahni (1974) algorithm for SUBSET-SUM. (However, note that almost all general-purpose SS algorithms contain such a routine: Schroeppel-Shamir (1981), Howgrave-Graham-Joux (2010), Becker-Joux (2011).)
If we were given an unordered list, implementing this algorithm would be O(n log n), since we could sort the list using Mergesort and then apply A_1.
First pass: search for the first value that is > ceil(x/2). Let's call this value L.
From the index of L, search backwards until you find the other operand that completes the sum.
That is 2*n ~ O(n).
We can extend this to binary search.
Search for an element using binary search such that we find L, where L is the smallest element in a greater than ceil(x/2).
Do the same for R, but now restricting the search to the elements below L in the array.
This approach is 2*log(n).
Here's a Python version using a dictionary and the number's complement. It has linear running time, O(N):
def twoSum(N, x):
    seen = {}
    for i in range(len(N)):
        complement = x - N[i]
        if complement in seen:
            return True
        seen[N[i]] = i
    return False

# Test
print(twoSum([2, 7, 11, 15], 9))  # True
print(twoSum([2, 7, 11, 15], 3))  # False
Iterate over the array and save each number and its index into a map; then, for each number, look up its complement. The time complexity of this algorithm is O(n).
vector<int> twoSum(vector<int> &numbers, int target) {
    map<int, int> summap;
    vector<int> result;
    for (int i = 0; i < numbers.size(); i++) {
        summap[numbers[i]] = i;
    }
    for (int i = 0; i < numbers.size(); i++) {
        int searched = target - numbers[i];
        // require a distinct index so an element is not paired with itself
        if (summap.find(searched) != summap.end() && summap[searched] != i) {
            result.push_back(i + 1);
            result.push_back(summap[searched] + 1);
            break;
        }
    }
    return result;
}
I would just add the difference to a HashSet<T> like this:
public static bool Find(int[] array, int toReach)
{
HashSet<int> hashSet = new HashSet<int>();
foreach (int current in array)
{
if (hashSet.Contains(current))
{
return true;
}
hashSet.Add(toReach - current);
}
return false;
}
Note: The code is mine but the test file was not. Also, this idea for the hash function comes from various readings on the net.
An implementation in Scala. It uses a HashMap and a custom (yet simple) bucketing of the values. I agree that it does not make use of the sorted nature of the initial array.
The hash function
I fix the bucket size by dividing each value by 10000. That number could vary, depending on the size you want for the buckets, which can be made optimal depending on the input range.
So, for example, key 0 is responsible for all the integers from 0 to 9999.
Impact on search scope
What that means is that for a current value n, for which you're looking for a complement c such that n + c = x (x being the value you're trying to reach with a 2-SUM), there are only 3 possible buckets in which the complement can be:
-key
-key + 1
-key - 1
Let's say that your numbers are in a file of the following form:
0
1
10
10
-10
10000
-10000
10001
9999
-10001
-9999
10000
5000
5000
-5000
-1
1000
2000
-1000
-2000
Here's the implementation in Scala
import scala.collection.mutable
import scala.io.Source
object TwoSumRed {
val usage = """
Usage: scala TwoSumRed.scala [filename]
"""
def main(args: Array[String]) {
val carte = createMap(args) match {
case None => return
case Some(m) => m
}
var t: Int = 1
carte.foreach {
case (bucket, values) => {
var toCheck: Array[Long] = Array[Long]()
if (carte.contains(-bucket)) {
toCheck = toCheck ++: carte(-bucket)
}
if (carte.contains(-bucket - 1)) {
toCheck = toCheck ++: carte(-bucket - 1)
}
if (carte.contains(-bucket + 1)) {
toCheck = toCheck ++: carte(-bucket + 1)
}
values.foreach { v =>
toCheck.foreach { c =>
if ((c + v) == t) {
println(s"$c and $v forms a 2-sum for $t")
return
}
}
}
}
}
}
def createMap(args: Array[String]): Option[mutable.HashMap[Int, Array[Long]]] = {
var carte: mutable.HashMap[Int,Array[Long]] = mutable.HashMap[Int,Array[Long]]()
if (args.length == 1) {
val filename = args.toList(0)
val lines: List[Long] = Source.fromFile(filename).getLines().map(_.toLong).toList
lines.foreach { l =>
val idx: Int = math.floor(l / 10000).toInt
if (carte.contains(idx)) {
carte(idx) = carte(idx) :+ l
} else {
carte += (idx -> Array[Long](l))
}
}
Some(carte)
} else {
println(usage)
None
}
}
}
vector<int> b(N);
for (int i = 0; i < N; i++)
{
    b[i] = x - a[N - 1 - i];
}
for (int i = 0, j = 0; i < N && j < N;)
    if (a[i] == b[j])
    {
        cout << "found";
        return;
    } else if (a[i] < b[j])
        i++;
    else
        j++;
cout << "not found";
Here is a linear time complexity solution O(n) time O(1) space
public void twoSum(int[] arr){
if(arr.length < 2) return;
int max = arr[0] + arr[1];
int bigger = Math.max(arr[0], arr[1]);
int smaller = Math.min(arr[0], arr[1]);
int biggerIndex = 0;
int smallerIndex = 0;
for(int i = 2 ; i < arr.length ; i++){
if(arr[i] + bigger <= max){ continue;}
else{
if(arr[i] > bigger){
smaller = bigger;
bigger = arr[i];
biggerIndex = i;
}else if(arr[i] > smaller)
{
smaller = arr[i];
smallerIndex = i;
}
max = bigger + smaller;
}
}
System.out.println("Biggest sum is: " + max + "with indices ["+biggerIndex+","+smallerIndex+"]");
}
Solution
We need an array to store the indices.
Check if the array is empty or contains less than 2 elements.
Define the start and the end pointers of the array.
Iterate till the condition is met.
Check if the sum is equal to the target. If yes, get the indices.
If the condition is not met, then traverse left or right based on the sum value:
Traverse to the right (move the start pointer) when the sum is smaller than the target.
Traverse to the left (move the end pointer) when the sum is larger than the target.
For more info: http://www.prathapkudupublog.com/2017/05/two-sum-ii-input-array-is-sorted.html
Credit to leonid.
Here is his solution in Java, if you want to give it a shot.
I removed the return, so if the array is sorted but does allow duplicates, it still prints all pairs.
static boolean cpp(int[] a, int x) {
int i = 0;
int j = a.length - 1;
while (j >= 0 && j < a.length && i < a.length) {
int sum = a[i] + a[j];
if (sum == x) {
System.out.printf("found %s, %s \n", i, j);
// return true;
}
if (sum > x) j--;
else i++;
if (i > j) break;
}
System.out.println("not found");
return false;
}
The classic linear-time two-pointer solution does not require hashing, so it can also solve related problems such as approximate sum (find the closest pair sum to a target).
First, a simple O(n log n) solution: walk through the array elements a[i], and use binary search to find the best a[j].
To get rid of the log factor, use the following observation: as the list is sorted, iterating through indices i gives increasing a[i], so the corresponding best a[j] is decreasing in value and in index j. This gives the two-pointer solution: start with indices lo = 0, hi = N-1 (pointing to a[0] and a[N-1]). For a[0], find the best a[hi] by decreasing hi. Then increment lo, and for each a[lo], decrease hi until a[lo] + a[hi] is the best. The algorithm can stop when it reaches lo == hi.
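A small Python sketch of that two-pointer walk, assuming a is sorted in ascending order (the function name and return convention are mine):
def two_sum_sorted(a, x):
    lo, hi = 0, len(a) - 1
    while lo < hi:
        s = a[lo] + a[hi]
        if s == x:
            return lo, hi  # two distinct indices whose values sum to x
        if s < x:
            lo += 1  # need a larger sum
        else:
            hi -= 1  # need a smaller sum
    return None

# two_sum_sorted([1, 3, 5, 8, 11], 13) returns (2, 3) since 5 + 8 == 13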

Max sum in an array with constraints

I have this problem: given an array of positive numbers, I have to find the maximum sum of elements such that no two adjacent elements are picked. The maximum has to be less than a certain given K. I tried thinking along the lines of the similar problem without the K, but I have failed so far. I have the following DP-ish solution for the latter problem:
int sum1, sum2 = 0;
int sum = sum1 = a[0];
for (int i = 1; i < n; i++)
{
    sum = max(sum2 + a[i], sum1);
    sum2 = sum1;
    sum1 = sum;
}
Could someone give me tips on how to proceed with my present problem??
The best I can think of off the top of my head is an O(n*K) dp:
int sums[n][K+1] = {{0}};
int i, j;
for(j = a[0]; j <= K; ++j) {
sums[0][j] = a[0];
}
if (a[1] > a[0]) {
for(j = a[0]; j < a[1]; ++j) {
sums[1][j] = a[0];
}
for(j = a[1]; j <= K; ++j) {
sums[1][j] = a[1];
}
} else {
for(j = a[1]; j < a[0]; ++j) {
sums[1][j] = a[1];
}
for(j = a[0]; j <= K; ++j) {
sums[1][j] = a[0];
}
}
for(i = 2; i < n; ++i) {
for(j = 0; j <= K && j < a[i]; ++j) {
sums[i][j] = max(sums[i-1][j],sums[i-2][j]);
}
for(j = a[i]; j <= K; ++j) {
sums[i][j] = max(sums[i-1][j],a[i] + sums[i-2][j-a[i]]);
}
}
sums[i][j] contains the maximal sum of non-adjacent elements of a[0..i] not exceeding j. The solution is then sums[n-1][K] at the end.
0. Make a copy (A2) of the original array (A1).
1. Find the largest value in the copy (A2).
2. Extract all values before its preceding neighbour and all values after its next neighbour into a new array (A3).
3. Find the largest value in the new array (A3).
4. Check that the sum of the two values is not larger than k. If it passes the check you are done.
5. If not, go back to the copied array (A2), remove the second largest value (the one found in step 3) and start over from step 3.
6. Once there are no combinations of numbers that can be used with the largest number (i.e. the number found in step 1 plus any other number in the array is larger than k), remove it from the original array (A1) and start over from step 0.
7. If for some reason there are no valid combinations (e.g. the array has only three numbers or no combination of numbers is lower than k), throw an exception or return null if that seems more appropriate.
First idea: brute force.
Iterate over all legal combinations of indexes and build the sum on the fly.
Stop with a sequence as soon as it gets over K.
Keep the best sequence until you find a larger one that is still smaller than K.
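A quick exponential-time Python sketch of that brute-force idea, assuming "less than K" means strictly below K and that an empty selection (sum 0) is allowed; a branch is abandoned as soon as its running sum reaches K:
def best_non_adjacent_sum(a, k):
    best = 0  # assuming the empty selection is allowed

    def pick(i, running):
        nonlocal best
        if running >= k:  # over the limit: abandon this sequence
            return
        best = max(best, running)
        for j in range(i, len(a)):
            pick(j + 2, running + a[j])  # j + 2 skips the adjacent element

    pick(0, 0)
    return best

# best_non_adjacent_sum([5, 5, 10, 100, 10, 5], 100) returns 25 (5 + 10 + 10)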
Second idea: maybe one can force this into a divide and conquer thing ...
Here is a solution to the problem without the "k" constraint which you set out to do as the first step: https://stackoverflow.com/a/13022021/1110808
The above solution can in my view be easily extended to have the k constraint by simply amending the if condition in the following for loop to include the constraint: possibleMax < k
// Subproblem solutions, DP
for (int i = start; i <= end; i++) {
int possibleMaxSub1 = maxSum(a, i + 2, end);
int possibleMaxSub2 = maxSum(a, start, i - 2);
int possibleMax = possibleMaxSub1 + possibleMaxSub2 + a[i];
/*
if (possibleMax > maxSum) {
maxSum = possibleMax;
}
*/
if (possibleMax > maxSum && possibleMax < k) {
maxSum = possibleMax;
}
}
As posted in the original link, this approach can be improved by adding memoization so that solutions to repeating subproblems are not recomputed, or by using a bottom-up dynamic programming approach (the current approach is a recursive top-down one).
You can refer to a bottom-up approach here: https://stackoverflow.com/a/4487594/1110808
