Related link: https://leetcode.com/problems/first-bad-version/discuss/71386/An-clear-way-to-use-binary-search
I am doing a question wherein, given a string like this "FFTTTT", I have to find either the rightmost F or the leftmost T.
The following is the code:
To find the leftmost T
public int firstBadVersionLeft(int n) {
int i = 1;
int j = n;
while (i < j) {
int mid = i + (j - i) / 2;
if (isBadVersion(mid)) {
j = mid;
} else {
i = mid + 1;
}
}
return i;
}
I have the following doubts:
I am unable to understand the intuition behind returning i. I mean, why didn't we return j? I did a trial run of the code in my mind, and it works out, but how do we know we have to return i?
Why did we use while (i < j) and not while (i <= j)? How do we determine this?
You stop when i==j as far as I can tell. So you can return either of them since they have the same value.
The difference between the two conditions is the i == j case. In that case mid == i + 0/2 == i. So isBadVersion can return true or false. If it returns true then you do j = mid, but we already had i == j == mid, so you don't do anything and you have an infinite loop. If isBadVersion returns false then you will make i == j + 1 and the loop will end. Since the boundaries are inclusive (they include the i'th and j'th element) this can only happen if there's no 'T' in the string.
So you do (i < j) to avoid that infinite loop case.
P.S. This code will return n if there's no 'T' in the string or the last character is a 'T'. Not sure if that's intended or not.
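For completeness, here is a sketch of the mirrored search for the rightmost F (the last good version), assuming the same 1-indexed isBadVersion API as above; the method name is mine. The midpoint is biased upward so that i = mid always shrinks the range.
public int lastGoodVersion(int n) {
    int i = 1;
    int j = n;
    while (i < j) {
        int mid = i + (j - i + 1) / 2; // bias mid upward so that i = mid makes progress
        if (isBadVersion(mid)) {
            j = mid - 1;               // mid is bad; the last good version is strictly to its left
        } else {
            i = mid;                   // mid is good; it may itself be the answer
        }
    }
    return i; // i == j; if every version could be bad, the caller should also verify !isBadVersion(i)
}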
How can we develop a dynamic programming algorithm that calculates the minimum number of different primes that sum to x?
Assume the dynamic programming calculates, for each pair (x, p), the minimum number of different primes among which the largest is p. Can someone help?
If we assume the Goldbach conjecture is true, then every even integer > 2 is the sum of two primes.
So we know the answer if x is even (1 if x==2, or 2 otherwise).
If x is odd, then there are 3 cases:
x is prime (answer is 1)
x-2 is prime (answer is 2)
otherwise x-3 is an even number bigger than 2 (answer is 3)
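A minimal sketch of this case analysis (my own illustration): it assumes x >= 2 and, like the cases above, allows the two Goldbach primes to coincide (e.g. 6 = 3 + 3), so it does not force the primes to be distinct.
static boolean isPrime(int n) {
    if (n < 2) return false;
    for (int d = 2; (long) d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

static int minPrimesSummingTo(int x) { // assumes x >= 2 and the Goldbach conjecture
    if (isPrime(x)) return 1;          // covers x == 2 and every odd prime
    if (x % 2 == 0) return 2;          // even and > 2: Goldbach gives two primes
    if (isPrime(x - 2)) return 2;      // odd: 2 + (x - 2)
    return 3;                          // odd: 3 + (x - 3), where x - 3 is even and > 2
}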
First of all, you need a list of primes up to x. Let's call this array of integers primes.
Now we want to populate the array answer[x][p], where x is the target sum and p is an upper bound on the primes allowed in the set (the set may, but need not, contain p itself).
There are 3 possibilities for answer[x][p] after all calculations:
1) if p=x and p is prime => answer[x][p] contains 1
2) if it's not possible to solve problem for given x, p => answer[x][p] contains -1
3) if it's possible to solve problem for given x, p => answer[x][p] contains number of primes
There is one more possible value for answer[x][p] during calculations:
4) we did not yet solve the problem for given x, p => answer[x][p] contains 0
It's quite obvious that 0 is not the answer for anything but x=0, so we are safe initializing array with 0 (and making special treatment for x=0).
To calculate answer[x][p] we can iterate (let q be the prime value we are iterating on) through all primes up to (and including) p and take the minimum of 1+answer[x-q][q-1] (skipping the cases where answer[x-q][q-1] = -1). Here the 1 accounts for q itself, and answer[x-q][q-1] is computed in a recursive call or before this calculation.
Now there's a small optimization: iterate the primes from higher to lower, and if x/q is at least the current answer, we can stop, because to make the sum x we would need at least x/q primes anyway. For example, we will not even consider q=2 for x=10, as we'd already have answer=3 (the optimum does include 2 as one of the 3 primes - 2+3+5 - but we've already got it through the recursive call answer(10-5, 4)); since 10/2=5, we'd get 5 as the answer at best with q=2 (in fact no solution with largest prime 2 exists).
package ru.tieto.test;
import java.util.ArrayList;
public class Primers {
static final int MAX_P = 10;
static final int MAX_X = 10;
public ArrayList<Integer> primes= new ArrayList<>();
public int answer[][] = new int[MAX_X+1][MAX_P+1];
public int answer(int x, int p) {
if (x < 0)
return -1;
if (x == 0)
return 0;
if (answer[x][p] != 0)
return answer[x][p];
int max_prime_idx = -1;
for (int i = 0;
i < primes.size() && primes.get(i) <= p && primes.get(i) <= x;
i++)
max_prime_idx = i;
if (max_prime_idx < 0) {
answer[x][p] = -1;
return -1;
}
int cur_answer = x+1;
for (int i = max_prime_idx; i >= 0; i--) {
int q = primes.get(i);
if (x / q >= cur_answer)
break;
if (x == q) {
cur_answer = 1;
break;
}
int candidate = answer(x-q, q-1);
if (candidate == -1)
continue;
if (candidate+1 < cur_answer)
cur_answer = candidate+1;
}
if (cur_answer > x)
answer[x][p] = -1;
else
answer[x][p] = cur_answer;
return answer[x][p];
}
private void make_primes() {
primes.add(2);
for (int p = 3; p <= MAX_P; p=p+2) {
boolean isPrime = true;
for (Integer q : primes) {
if (q*q > p)
break;
if (p % q == 0) {
isPrime = false;
break;
}
}
if (isPrime)
primes.add(p);
}
// for (Integer q : primes)
// System.out.print(q+",");
// System.out.println("<<");
}
private void init() {
make_primes();
for (int p = 0; p <= MAX_P; p++) {
answer[0][p] = 0;
answer[1][p] = -1;
}
for (int x = 2; x <= MAX_X; x++) {
for (int p = 0; p <= MAX_P; p++)
answer[x][p] = 0;
}
for (Integer p: primes)
answer[p][p] = 1;
}
void run() {
init();
for (int x = 0; x <= MAX_X; x++)
for (int p = 0; p <= MAX_P; p++)
answer(x, p);
}
public static void main(String[] args) {
Primers me = new Primers();
me.run();
// for (int x = 0; x <= MAX_X; x++) {
// System.out.print("x="+x+": {");
// for (int p = 0; p <= MAX_P; p++) {
// System.out.print(String.format("%2d=%-3d,",p, me.answer[x][p]));
// }
// System.out.println("}");
// }
}
}
Start with a list of all primes lower than x.
Take the largest. Now we need to solve the problem for (x - pmax). At this stage that will be easy: x - pmax will be low. Mark the primes you have taken as "used" and store that as solution 1. Now take the largest prime still in the list and repeat until all the primes are either used or rejected. If (x - pmax) is high, the problem gets more complex.
That's your first pass, brute force algorithm. Get that working first before considering how to speed things up.
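Here is one possible reading of that brute-force idea as a recursive search (my interpretation; the names bruteForce and primesDesc are mine): try primes from largest to smallest and recurse on the remainder, keeping the chosen primes distinct.
// primesDesc is assumed to hold all primes <= x in descending order.
// Returns the minimum count of distinct primes summing to x, or Integer.MAX_VALUE if impossible.
static int bruteForce(int x, int[] primesDesc, int startIdx) {
    if (x == 0) return 0;
    int best = Integer.MAX_VALUE;
    for (int i = startIdx; i < primesDesc.length; i++) {
        int p = primesDesc[i];
        if (p > x) continue;                             // too large for the remaining sum
        int rest = bruteForce(x - p, primesDesc, i + 1); // i + 1 keeps the chosen primes distinct
        if (rest != Integer.MAX_VALUE) best = Math.min(best, rest + 1);
    }
    return best;
}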
Assuming you're not using the Goldbach conjecture (otherwise see Peter de Rivaz's excellent answer):
Dynamic programming generally takes advantage of overlapping subproblems. Usually you go top down, but in this case bottom up may be simpler.
I suggest you sum various combinations of primes.
from itertools import combinations_with_replacement

lookup = {}
for r in range(1, 3):
    for primes in combinations_with_replacement(all_primes, r):
        s = sum(primes)
        lookup[s] = lookup.get(s, r)  # r is increasing, so only set it if it's not already there
This will start getting slow very quickly if you have a large number of primes. In that case, cap the maximum r at something like 1 or 2, whatever is still fast enough for you; you will then be left with some numbers that aren't found. To solve for a number that doesn't have a solution in lookup, try breaking that number into sums of numbers that are found in lookup (you may need to store the prime combos in lookup and dedupe those combinations).
I always have the hardest time with this and I have yet to see a definitive explanation for something that is supposedly so common and highly-used.
We already know the standard binary search. Given starting lower and upper bounds, find the middle point at (lower + higher)/2, and then compare it against your array, and then re-set the bounds accordingly, etc.
However, what are the needed differences to adjust the search to find (for a list in ascending order):
Smallest value >= target
Smallest value > target
Largest value <= target
Largest value < target
It seems like each of these cases requires very small tweaks to the algorithm, but I can never get them to work right. I try changing inequalities and return conditions, and I change how the bounds are updated, but nothing seems consistent.
What are the definitive ways to handle these four cases?
I had exactly the same issue until I figured out that loop invariants along with predicates are the best and most consistent way of approaching all binary search problems.
Point 1: Think of predicates
In general for all these 4 cases (and also the normal binary search for equality), imagine them as a predicate. What this means is that some of the values meet the predicate and some fail it. So consider, for example, this array with a target of 5:
[1, 2, 3, 4, 6, 7, 8]. Finding the first number greater than 5 is basically equivalent to finding the first 1 in this array: [0, 0, 0, 0, 1, 1, 1].
Point 2: Search boundaries inclusive
I like to have both ends always inclusive. But I can see some people like the start to be inclusive and the end exclusive (ending at len instead of len - 1). I like to have all the elements inside of the array, so when referring to a[mid] I don't have to think about whether that will give me an array index out of bounds. So my preference: go inclusive!!!
Point 3: While loop condition <=
So we even want to process the subarray of size 1 in the while loop, and when the while loop finishes there should be no unprocessed element. I really like this logic. It's always solid as a rock. Initially all the elements are uninspected, basically unknown; meaning that everything in the range [start = 0, end = len - 1] is uninspected. Then, when the while loop finishes, the range of uninspected elements should be of size 0!
Point 4: Loop invariants
Since we defined start = 0, end = len - 1, invariants will be like this:
Anything left of start is smaller than target.
Anything right of end is greater than or equal to the target.
Point 5: The answer
Once the loop finishes, based on the loop invariants anything to the left of start is smaller. So that means start is the index of the first element greater than or equal to the target.
Equivalently, anything to the right of end is greater than or equal to the target. So that means the answer is also equal to end + 1.
The code:
public int find(int a[], int target){
int start = 0;
int end = a.length - 1;
while (start <= end){
int mid = (start + end) / 2; // or for no overflow start + (end - start) / 2
if (a[mid] < target)
start = mid + 1;
else // a[mid] >= target
end = mid - 1;
}
return start; // or end + 1;
}
variations:
<
It's equivalent to finding the last 0. So basically only the return changes.
return end; // or return start - 1;
>
Change the if condition to a[mid] <= target (the else branch then handles a[mid] > target). No other change.
<=
Same condition as >, but return end; // or return start - 1;
So in general, with this model, for all 5 variations (<=, <, >, >=, normal binary search) only the condition in the if and the return statement change. And figuring out those small changes is super easy when you consider the invariants (point 4) and the answer (point 5).
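To make that concrete, here is a small sketch of the four variants built on the same inclusive-bounds loop (the method names are mine); each returns an index, where a.length or -1 means no such element exists.
static int firstAtLeast(int[] a, int t) {  // smallest value >= t
    int start = 0, end = a.length - 1;
    while (start <= end) {
        int mid = start + (end - start) / 2;
        if (a[mid] < t) start = mid + 1; else end = mid - 1;
    }
    return start;                          // a.length if every element is < t
}

static int firstGreater(int[] a, int t) {  // smallest value > t: only the condition flips to <=
    int start = 0, end = a.length - 1;
    while (start <= end) {
        int mid = start + (end - start) / 2;
        if (a[mid] <= t) start = mid + 1; else end = mid - 1;
    }
    return start;
}

static int lastAtMost(int[] a, int t) { return firstGreater(a, t) - 1; } // largest value <= t (-1 if none)
static int lastLess(int[] a, int t)   { return firstAtLeast(a, t) - 1; } // largest value < t  (-1 if none)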
Hope this clarifies it for whoever reads this. If anything is unclear or feels like magic, please ping me to explain. After understanding this method, everything for binary search should be as clear as day!
Extra point: It would be good practice to also try including the start but excluding the end. So the range would initially be [0, len). If you can write the invariants, the new condition for the while loop, the answer, and then clear code, it means you have learnt the concept.
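For reference, one possible sketch of that half-open exercise (my own): the uninspected range is [start, end), the invariants are "everything left of start is < target" and "everything from end onward is >= target", and the loop runs while the range is non-empty.
public int findHalfOpen(int[] a, int target) {
    int start = 0, end = a.length;   // the uninspected range is [start, end)
    while (start < end) {            // stop when the range is empty
        int mid = start + (end - start) / 2;
        if (a[mid] < target)
            start = mid + 1;         // a[mid] joins the "< target" region on the left
        else
            end = mid;               // a[mid] joins the ">= target" region on the right
    }
    return start;                    // first index with a[index] >= target (a.length if none)
}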
Binary search (at least the way I implement it) relies on a simple property - a predicate holds true for one end of the interval and does not hold true for the other end. I always consider my interval to be closed at one end and open at the other. So let's take a look at this code snippet:
int beg = 0; // pred(beg) should hold true
int end = n;// length of an array or a value that is guaranteed to be out of the interval that we are interested in
while (end - beg > 1) {
int mid = (end + beg) / 2;
if (pred(a[mid])) {
beg = mid;
} else {
end = mid;
}
}
// answer is at a[beg]
This will work for any of the comparisons you define. Simply replace pred with <=target or >=target or <target or >target.
After the cycle exits, a[beg] will be the last element for which the given inequality holds.
So let's assume (as suggested in the comments) that we want to find the largest number for which a[i] <= target. Then if we use the predicate a[i] <= target, the code will look like:
int beg = 0; // pred(beg) should hold true
int end = n;// length of an array or a value that is guaranteed to be out of the interval that we are interested in
while (end - beg > 1) {
int mid = (end + beg) / 2;
if (a[mid] <= target) {
beg = mid;
} else {
end = mid;
}
}
And after the cycle exits, the index that you are searching for will be beg.
Also, depending on the comparison, you may have to start from the right end of the array. E.g. if you are searching for the smallest value >= target, you will do something of the sort of:
beg = -1;
end = n - 1;
while (end - beg > 1) {
int mid = (end + beg) / 2;
if (a[mid] >= target) {
end = mid;
} else {
beg = mid;
}
}
And the value that you are searching for will be with index end. Note that in this case I consider the interval (beg, end] and thus I've slightly modified the starting interval.
The basic binary search looks for the position/value that equals the target key. It can be extended to find the minimal position/value that satisfies some condition, or the maximal position/value that satisfies some condition.
Suppose the array is in ascending order; if no satisfying position/value is found, return -1.
Code sample:
// find the minimal position which satisfy some condition
private static int getMinPosition(int[] arr, int target) {
int l = 0, r = arr.length - 1;
int ans = -1;
while(l <= r) {
int m = (l + r) >> 1;
// feel free to replace the condition
// here it means find the minimal position that the element not smaller than target
if(arr[m] >= target) {
ans = m;
r = m - 1;
} else {
l = m + 1;
}
}
return ans;
}
// find the maximal position which satisfy some condition
private static int getMaxPosition(int[] arr, int target) {
int l = 0, r = arr.length - 1;
int ans = -1;
while(l <= r) {
int m = (l + r) >> 1;
// feel free to replace the condition
// here it means find the maximal position that the element less than target
if(arr[m] < target) {
ans = m;
l = m + 1;
} else {
r = m - 1;
}
}
return ans;
}
int[] a = {3, 5, 5, 7, 10, 15};
System.out.println(BinarySearchTool.getMinPosition(a, 5)); // 1: arr[1] == 5 is the first element >= 5
System.out.println(BinarySearchTool.getMinPosition(a, 6)); // 3: arr[3] == 7 is the first element >= 6
System.out.println(BinarySearchTool.getMaxPosition(a, 8)); // 3: arr[3] == 7 is the last element < 8
What you need is a binary search that lets you participate in the process at the last step. The typical binary search receives (array, element) and produces a value (normally the index, or not found). But if you have a modified binary search that accepts a function to be invoked at the end of the search, you can cover all the cases.
For example, in JavaScript (to make it easy to test), the following binary search
function binarySearch(array, el, fn) {
function aux(left, right) {
if (left > right) {
return fn(array, null, left, right);
}
var middle = Math.floor((left + right) / 2);
var value = array[middle];
if (value > el) {
return aux(left, middle - 1);
} if (value < el) {
return aux(middle + 1, right);
} else {
return fn(array, middle, left, right);
}
}
return aux(0, array.length - 1);
}
would allow you to cover each case with a particular return function.
default
function(a, m) { return m; }
Smallest value >= target
function(a, m, l, r) { return m != null ? a[m] : r + 1 >= a.length ? null : a[r + 1]; }
Smallest value > target
function(a, m, l, r) { var i = (m != null ? m : r) + 1; return i < a.length ? a[i] : null; } // note: can't use (m || r) here because m may be 0
Largest value <= target
function(a, m, l, r) { return m != null ? a[m] : l - 1 >= 0 ? a[l - 1] : null; }
Largest value < target
function(a, m, l, r) { return (m || l) - 1 < 0 ? null : a[(m || l) - 1]; }
I came across this interview question. It says we have to do a binary search on a sorted array. The following is the code for that. The code has a bug, so it doesn't give the right answer. You have to change the code to give the correct output.
Condition: you are not allowed to add lines, and you can change only three lines in the code.
int solution(int[] A, int X) {
int N = A.length;
if (N == 0) {
return -1;
}
int l = 0;
int r = N;
while (l < r) {
int m = (l + r) / 2;
if (A[m] > X) {
r = m - 1;
} else {
l = m+1;
}
}
if (A[r] == X) {
return r;
}
return -1;
}
I tried a lot on my own but kept failing some test cases.
I hate this question, it's one of those "unnecessary constraint" questions. As others have mentioned, the problem is that you're not returning the value if you find it. Since the stupid instructions say you can't add any code, you can hack it like this:
if (A[m] >= X) {
    r = m;
} else {
    l = m + 1;
}
That is only two changed lines (the comparison and r = m), so the third allowed change can go into the final check - for example if (r < N && A[r] == X) - since r can end up equal to N when X is larger than every element. The loop still halves the range, so it stays O(log n).
You need to check for the searched value inside the loop, to exit early if it's found.
Sample Code:
int solution(int[] A, int X) {
int N = A.length;
if (N == 0) {
return -1;
}
int l = 0;
int r = N;
while (l <= r) { // change here, need to check for the element if l == r
// this is the principal problem of your code
int m = (l + r) / 2;
if (A[m] == X) { // new code, on every iteration check if the middle element
    return m;    // is the searched element, for an early exit
} else if (A[m] > X) {
r = m - 1;
} else {
l = m + 1;
}
}
return -1;
}
Another problem is that you are testing more elements than you need when the element is in the array (and with r = N, m can even reach N and read out of bounds).
Try this:
int l = 0;
int r = N - 1; // changed
while (l <= r) { // changed
You have to understand the method that is used. You are looking for the first element >= X.
You want k with i < k <=> A[i] < X.
L is for left. It is the lower limit for k. You have i < l => A[i] < X.
R is for right. It is the upper limit for k. You have i >= r => A[i] >= X.
Your target is to reduce the range and have l = r. To do so you check the value in the middle, at m = (r+l)/2.
If A[m] >= X then m satisfies the conditions for r. You can set r = m.
If A[m] < X then A[m] belongs to the part left of l. So you can set l to the right of m, l = m+1.
Each loop reduces the range between l and r. When you reach l==r, you have found the point I called k. A[k] is the smallest number >= X. You only need to check if it is == X or > X.
From there you should be able to fix the code.
PS: Note that the k (aka l or r) can be >= A.length. You need to verify that.
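Assembling the pieces described above, one sketch of the repaired search (ignoring the three-changed-lines constraint): find k, the first index with A[k] >= X, then verify it, guarding against k == A.length.
static int solution(int[] A, int X) {
    int l = 0;                  // invariant: A[i] < X for all i < l
    int r = A.length;           // invariant: A[i] >= X for all i >= r
    while (l < r) {
        int m = l + (r - l) / 2;
        if (A[m] >= X) r = m;   // m satisfies the condition for r
        else l = m + 1;         // A[m] < X, so k is strictly to the right of m
    }
    // l == r == k; k can be A.length, so bounds-check before reading A[k]
    return (r < A.length && A[r] == X) ? r : -1;
}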
Given an integer x and a sorted array a of N distinct integers, design a linear-time algorithm to determine if there exist two distinct indices i and j such that a[i] + a[j] == x.
This is a type of subset-sum problem.
Here is my solution. I don't know if it was known earlier or not. Imagine a 3D plot of a function of two variables i and j:
sum(i,j) = a[i]+a[j]
For every i there is a j such that a[i]+a[j] is closest to x. All these (i, j) pairs form a closest-to-x line. We just need to walk along this line and look for a[i]+a[j] == x:
int i = 0;
int j = (int)a.size() - 1; // start at the far right; starting at lower_bound(x) can miss pairs (e.g. with negative values, or when every element is < x)
while (i < j) { // i < j keeps the two indices distinct
int sum = a[i]+a[j];
if (sum == x) {
cout << "found: " << i << " " << j << endl;
return;
}
if (sum > x) j--;
else i++;
if (i > j) break;
}
cout << " not found\n";
Complexity: O(n)
Think in terms of complements.
Iterate over the list and, for each item, figure out what number is needed to get to X. Stick both the number and its complement into a hash set. While iterating, check whether the current number or its complement is already in the set; if so, found.
Edit: and as I have some time, some pseudo-ish code.
boolean find(int[] array, int x) {
HashSet<Integer> s = new HashSet<Integer>();
for(int i = 0; i < array.length; i++) {
if (s.contains(array[i]) || s.contains(x-array[i])) {
return true;
}
s.add(array[i]);
s.add(x-array[i]);
}
return false;
}
Given that the array is sorted (WLOG in ascending order), we can do the following:
Algorithm A_1:
We are given (a_1, ..., a_n, m), with a_1 < ... < a_n.
Put a pointer at the top of the list and one at the bottom.
Compute the sum where both pointers are.
If the sum is greater than m, move the upper pointer down.
If the sum is less than m, move the lower pointer up.
If the pointers meet (here we assume each number can be used only once), report unsat.
Otherwise (a sum equal to m will be found), report sat.
It is clear that this is O(n) since the maximum number of sums computed is exactly n. The proof of correctness is left as an exercise.
This is merely a subroutine of the Horowitz and Sahni (1974) algorithm for SUBSET-SUM. (However, note that almost all general-purpose subset-sum algorithms contain such a routine: Schroeppel-Shamir (1981), Howgrave-Graham-Joux (2010), Becker-Joux (2011).)
If we were given an unordered list, implementing this algorithm would be O(n log n), since we could sort the list using mergesort and then apply A_1.
First pass: search for the first value that is > ceil(x/2). Let's call this value L.
From the index of L, search backwards till you find the other operand that completes the sum.
That is 2*n ~ O(n).
We can extend this to binary search.
Use binary search to find L, such that L is min(elements in a > ceil(x/2)).
Do the same for R, but now with L as the upper limit of the searchable elements in the array.
This approach is 2*log(n).
Here's a Python version using a dictionary and the number's complement. This has linear running time (order of N: O(N)):
def twoSum(N, x):
    seen = {}  # values seen so far, mapped to their index
    for i in range(len(N)):
        complement = x - N[i]
        if complement in seen:
            return True
        seen[N[i]] = i
    return False

# Test
print(twoSum([2, 7, 11, 15], 9))  # True
print(twoSum([2, 7, 11, 15], 3))  # False
Iterate over the array and save the numbers and their indices into a hash map. With an unordered_map, the time complexity of this algorithm is O(n) expected.
vector<int> twoSum(vector<int> &numbers, int target) {
unordered_map<int, int> summap; // hash map, for O(1) expected lookups
vector<int> result;
for (int i = 0; i < numbers.size(); i++) {
summap[numbers[i]] = i;
}
for (int i = 0; i < numbers.size(); i++) {
int searched = target - numbers[i];
if (summap.find(searched) != summap.end() && summap[searched] != i) { // don't pair an element with itself
result.push_back(i + 1);
result.push_back(summap[searched] + 1);
break;
}
}
return result;
}
I would just add the difference to a HashSet<T> like this:
public static bool Find(int[] array, int toReach)
{
HashSet<int> hashSet = new HashSet<int>();
foreach (int current in array)
{
if (hashSet.Contains(current))
{
return true;
}
hashSet.Add(toReach - current);
}
return false;
}
Note: The code is mine but the test file was not. Also, this idea for the hash function comes from various readings on the net.
An implementation in Scala. It uses a HashMap and a custom (yet simple) mapping for the values. I agree that it does not make use of the sorted nature of the initial array.
The hash function
I fix the bucket size by dividing each value by 10000. That number could vary, depending on the size you want for the buckets, which can be made optimal depending on the input range.
So, for example, key 1 is responsible for all the integers from 10000 to 19999.
Impact on search scope
What that means is that for a current value n, for which you're looking for a complement c such that n + c = x (x being the sum you're trying to find a 2-SUM for), there are only 3 possible buckets in which the complement can be:
-key
-key + 1
-key - 1
Let's say that your numbers are in a file of the following form:
0
1
10
10
-10
10000
-10000
10001
9999
-10001
-9999
10000
5000
5000
-5000
-1
1000
2000
-1000
-2000
Here's the implementation in Scala
import scala.collection.mutable
import scala.io.Source
object TwoSumRed {
val usage = """
Usage: scala TwoSumRed.scala [filename]
"""
def main(args: Array[String]) {
val carte = createMap(args) match {
case None => return
case Some(m) => m
}
var t: Int = 1
carte.foreach {
case (bucket, values) => {
var toCheck: Array[Long] = Array[Long]()
if (carte.contains(-bucket)) {
toCheck = toCheck ++: carte(-bucket)
}
if (carte.contains(-bucket - 1)) {
toCheck = toCheck ++: carte(-bucket - 1)
}
if (carte.contains(-bucket + 1)) {
toCheck = toCheck ++: carte(-bucket + 1)
}
values.foreach { v =>
toCheck.foreach { c =>
if ((c + v) == t) {
println(s"$c and $v forms a 2-sum for $t")
return
}
}
}
}
}
}
def createMap(args: Array[String]): Option[mutable.HashMap[Int, Array[Long]]] = {
var carte: mutable.HashMap[Int,Array[Long]] = mutable.HashMap[Int,Array[Long]]()
if (args.length == 1) {
val filename = args.toList(0)
val lines: List[Long] = Source.fromFile(filename).getLines().map(_.toLong).toList
lines.foreach { l =>
val idx: Int = math.floor(l / 10000).toInt
if (carte.contains(idx)) {
carte(idx) = carte(idx) :+ l
} else {
carte += (idx -> Array[Long](l))
}
}
Some(carte)
} else {
println(usage)
None
}
}
}
// Build b[i] = x - a[N-1-i]; since a is ascending, b is ascending as well.
// Any value common to a and b gives a pair summing to x, so scan both
// arrays with a merge-style pass.
vector<int> b(N);
for (int i = 0; i < N; i++)
{
    b[i] = x - a[N - 1 - i];
}
for (int i = 0, j = 0; i < N && j < N;)
    if (a[i] == b[j] && i + j != N - 1) // i + j == N - 1 would pair a[i] with itself
    {
        cout << "found";
        return;
    } else if (a[i] < b[j])
        i++;
    else
        j++;
cout << "not found";
Here is a linear time complexity solution: O(n) time, O(1) space.
public void twoSum(int[] arr){
if(arr.length < 2) return;
int max = arr[0] + arr[1];
int bigger = Math.max(arr[0], arr[1]);
int smaller = Math.min(arr[0], arr[1]);
int biggerIndex = 0;
int smallerIndex = 0;
for(int i = 2 ; i < arr.length ; i++){
if(arr[i] + bigger <= max){ continue;}
else{
if(arr[i] > bigger){
smaller = bigger;
bigger = arr[i];
biggerIndex = i;
}else if(arr[i] > smaller)
{
smaller = arr[i];
smallerIndex = i;
}
max = bigger + smaller;
}
}
System.out.println("Biggest sum is: " + max + "with indices ["+biggerIndex+","+smallerIndex+"]");
}
Solution
We need an array to store the indices.
Check if the array is empty or contains less than 2 elements.
Define the start and the end pointers of the array.
Iterate till the condition is met.
Check if the sum is equal to the target. If yes, get the indices.
If the condition is not met, then traverse left or right based on the sum value:
Traverse to the right (move the start pointer) if the sum is less than the target.
Traverse to the left (move the end pointer) if the sum is greater than the target.
For more info: http://www.prathapkudupublog.com/2017/05/two-sum-ii-input-array-is-sorted.html
Credit to leonid.
His solution in Java, if you want to give it a shot.
I removed the return, so if the array is sorted but DOES allow duplicates, it still prints all the pairs.
static boolean cpp(int[] a, int x) {
int i = 0;
int j = a.length - 1;
while (i < j) { // i < j keeps the two indices distinct, so a[i] is never paired with itself
int sum = a[i] + a[j];
if (sum == x) {
System.out.printf("found %s, %s \n", i, j);
// return true;
}
if (sum > x) j--;
else i++;
if (i > j) break;
}
System.out.println("not found");
return false;
}
The classic linear-time two-pointer solution does not require hashing, so it can also solve related problems such as approximate sum (finding the closest pair sum to a target).
First, a simple n log n solution: walk through array elements a[i], and use binary search to find the best a[j].
To get rid of the log factor, use the following observation: as the list is sorted, iterating through indices i gives a[i] is increasing, so any corresponding a[j] is decreasing in value and in index j. This gives the two-pointer solution: start with indices lo = 0, hi = N-1 (pointing to a[0] and a[N-1]). For a[0], find the best a[hi] by decreasing hi. Then increment lo and for each a[lo], decrease hi until a[lo] + a[hi] is the best. The algorithm can stop when it reaches lo == hi.
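The two-pointer version already appears in code in the answers above; for completeness, here is a minimal sketch of the n log n variant just described (the method name is mine), assuming the array is sorted ascending with distinct values.
static boolean hasPairSum(int[] a, int x) {
    for (int i = 0; i < a.length; i++) {
        int j = java.util.Arrays.binarySearch(a, x - a[i]); // binary search for the exact complement
        if (j >= 0 && j != i) return true;                  // j != i enforces two distinct indices
    }
    return false;
}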