Guards and demand - algorithm

You have N guards in a line, each demanding some number of coins. You can skip paying a guard only if his demand is less than the total you have already paid before reaching him. Find the least number of coins you must spend to cross all guards.
I think it's a DP problem but I can't come up with a formula. Another approach would be to binary search on the answer, but how do I verify whether a given number of coins is a feasible answer?

This is indeed a dynamic programming problem.
Consider the function f(i, j), which is true (one) if there is an assignment for the first i guards that gives you total cost j. You can arrange f(i, j) in a table of size n x S, where S is the sum of all the guards' demands.
Let us denote d_i as the demand of guard i.
You can easily compute column f(i+1) if you have column f(i): scan f(i), and for every j with f(i, j) true, set f(i+1, j + d_{i+1}) to one (pay guard i+1), and also set f(i+1, j) to one if j > d_{i+1} (skip him, which is allowed since you have already paid more than his demand).
This runs in O(nS) time and O(S) space (you only need to keep two columns at a time), which is only pseudopolynomial (and quadratic-like if the demands are somehow bounded and do not grow with n).
A common trick to reduce the complexity of a DP problem is to get an upper bound B on the value of the optimal solution. This way, you can prune unnecessary rows, obtaining a time complexity of O(nB) (well, even S is an upper-bound, but a very naïve one).
It turns out that, in our case, B = 2M, where M is the maximum demand of a guard.
In fact, consider the function best_assignment(i), which gives you the minimum amount of coins to pass the first i guards.
Let j be the guard with demand M. If best_assignment(j - 1) > M, then obviously the best assignment for the whole sequence is to pay according to the best assignment of the first j-1 guards and skip all the remaining ones; otherwise, the upper bound is given by best_assignment(j - 1) + M <= 2M.
But how large can best_assignment(j - 1) be in the first case? It cannot be more than 2M.
This can be proven by contradiction. Suppose that best_assignment(j - 1) > 2M. Is guard j-1 paid in this assignment? No: the amount paid before reaching him would exceed 2M - d_{j-1} >= d_{j-1}, so paying him is unnecessary. The same argument holds for guards j-2, j-3, ..., 1, so no guard is paid at all, which is absurd unless M = 0 (a trivial case to be checked separately).
Since the upper bound is proved to be 2M, the DP illustrated above with n columns and 2M rows solves the problem, with time complexity O(nM) and space complexity O(M).
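For concreteness, here is a minimal C++ sketch of that table DP (my own illustration; the function name minCoins and the layout are made up, and demands are assumed to be non-negative ints):

#include <vector>
#include <algorithm>
#include <cstdint>

// cur[j] == true means: for the guards processed so far, some choice of whom
// to pay makes the total paid exactly j. Rows are capped at the bound 2M.
int64_t minCoins(const std::vector<int>& demands) {
    if (demands.empty()) return 0;
    int M = *std::max_element(demands.begin(), demands.end());
    if (M == 0) return 0;                           // the trivial case M = 0
    const int bound = 2 * M;                        // proven upper bound on the answer
    std::vector<bool> cur(bound + 1, false);
    cur[0] = true;                                  // nothing paid before guard 0
    for (int d : demands) {
        std::vector<bool> nxt(bound + 1, false);
        for (int j = 0; j <= bound; ++j) {
            if (!cur[j]) continue;
            if (j + d <= bound) nxt[j + d] = true;  // pay this guard
            if (j > d) nxt[j] = true;               // skip: we already paid more than d
        }
        cur.swap(nxt);
    }
    for (int j = 0; j <= bound; ++j)
        if (cur[j]) return j;                       // smallest reachable total
    return -1;                                      // defensive; unreachable given the bound
}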

// Minimum total paid to get past all guards, given that we have already
// paid amtPaidAlready and now stand before guard curIdx.
long long crossCost(long long amtPaidAlready, size_t curIdx, const vector<int>& demands){
    // base case: we are at the end of the line
    if (curIdx >= demands.size()){
        return amtPaidAlready;
    }
    long long costIfWePay = crossCost(amtPaidAlready + demands[curIdx], curIdx+1, demands);
    // can we skip paying the guard?
    if (demands[curIdx] < amtPaidAlready){
        long long costIfWeDontPay = crossCost(amtPaidAlready, curIdx+1, demands);
        return min(costIfWePay, costIfWeDontPay);
    }
    // can't skip paying
    else{
        return costIfWePay;
    }
}
This runs in O(2^N) time because it may call itself twice per call. It's a good candidate for memoization, because it is a pure function with no side effects.
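For instance, a hypothetical memoized variant (my own sketch; it caches results keyed on the pair (curIdx, amtPaidAlready), and the cache must be cleared between independent inputs):

#include <map>
#include <utility>
#include <vector>
#include <algorithm>

std::map<std::pair<size_t, long long>, long long> memo;

long long crossCostMemo(long long amtPaidAlready, size_t curIdx, const std::vector<int>& demands) {
    if (curIdx >= demands.size()) return amtPaidAlready;
    auto key = std::make_pair(curIdx, amtPaidAlready);
    auto it = memo.find(key);
    if (it != memo.end()) return it->second;        // answer already computed
    long long best = crossCostMemo(amtPaidAlready + demands[curIdx], curIdx + 1, demands);
    if (demands[curIdx] < amtPaidAlready)           // skipping is allowed here
        best = std::min(best, crossCostMemo(amtPaidAlready, curIdx + 1, demands));
    return memo[key] = best;
}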

Here's my approach:
int guards[N];
int minSpent;
void func(int pos, int current_spent){
    if(pos >= N){ // past the last guard: record the best total seen
        if(current_spent < minSpent)
            minSpent = current_spent;
        return;
    }
    if(guards[pos] < current_spent) // if the current guard can be skipped,
        func(pos+1, current_spent); // try skipping him
    func(pos+1, current_spent + guards[pos]); // in either case, try paying the current guard
}
Used in this way:
minSpent = MAX_NUM;
func(1,guards[0]);
This will try all possibilities; it's O(2^N). Hope this helps.

Related

Search a word in a matrix runtime complexity

Trying to analyze the runtime complexity of the following algorithm:
Problem: We have an m * n array A consisting of lower-case letters and a target string s. The goal is to determine whether the target string appears in A or not.
algorithm:
for(int i = 0; i < m; i++){
for(int j = 0; j < n; j++){
if(A[i][j] is equal to the starting character in s) search(i, j, s)
}
}
boolean search(int i, int j, target s){
if(the current position relative to s is the length of s) then we find the target
looping through the four possible directions starting from i, j: {p,q} = {i+1, j} or {i-1, j} or {i, j+1} or {i, j-1}, if the coordinate is never visited before
search(p, q, target s)
}
One runtime complexity analysis that I read is the following:
At each position in the array A, we are first presented with 4 possible directions to explore. After the first round, we are only given 3 possible choices because we can never go back. So the worst runtime complexity is O(m * n * 3**len(s))
However, I disagree with this analysis: even though we are only presented with 3 possible choices each round, we still need to spend one operation to check whether a direction has been visited before. For instance, in Java you would probably use a boolean array to track whether a spot has been visited, so knowing whether a spot has been visited requires a conditional check, and that costs one operation. The analysis above does not seem to take this into account.
What should be the runtime complexity?
update:
Let us suppose that the length of the target string is l and the runtime complexity at a given position in the matrix is T(l). Then we have:
T(l) = 4T(l-1) + 4 = 4(3T(l-2) + 4) + 4 = 4(3(3T(l-3) + 4) + 4) + 4 = ... = 4 * 3**(l-1) + 4 + 4*4 + 4*3*4 + ...
the +4 comes from the fact that we loop over four directions in each round, besides the three recursive calls.
What should be the runtime complexity?
The mentioned analysis is correct and the complexity is indeed O(m * n * 3**len(s)).
For instance, in java you probably just use a boolean array to track whether one spot has been visited before, so in order to know whether a spot has been visited or not, one needs a conditional check, and that costs one operation.
That is correct and does not contradict the analysis.
The worst case we can construct is a matrix filled with a single letter a and a string aaaa....aaaax (many letters a and one x at the end). If m, n and len(s) are large enough, almost every call of the search function will generate 3 recursive calls of itself. Each of those calls will generate another 3 calls (giving a total of 9 calls of depth 2), each of which will generate another 3 calls (27 calls of depth 3), and so on. Checking the current string character, the conditional checks, and spawning a recursion are all O(1), so the complexity of the whole search function is O(3**len(s)).
The solution is brute force. We have to touch each point on the board, which makes O(m*n) operations.
Now for each point, we have to run dfs() to check if the word exists. So we get
O(m * n * timeComplexityOf dfs)
Here is a dfs written in Python; let's examine its time complexity:
def dfs(r, c, i):
    # O(1)
    if i == len(word):
        return True
    # O(1): `path` is a set, implemented as a hash table,
    # so membership lookup is O(1)
    if r < 0 or c < 0 or r >= ROWS or c >= COLS or word[i] != board[r][c] or (r, c) in path:
        return False
    # O(1)
    path.add((r, c))
    # four recursive calls
    res = (dfs(r + 1, c, i + 1) or
           dfs(r - 1, c, i + 1) or
           dfs(r, c + 1, i + 1) or
           dfs(r, c - 1, i + 1))
    # O(1)
    path.remove((r, c))
    return res
Since dfs recursively calls itself, think about how many dfs calls will be on the call stack: in the worst case it will be the length of the word. That's why
O(m * n * word.length)

Finding the asymptotic number of comparisons performed by an algorithm

I have been given the following algorithm:
Supersort(A, i, j):
if(j = i): return
if(j = i + 1):
if(A[i] > A[j]):
swap(A[i], A[j])
else:
k = floor of ( (j-i+1)/3 )
Supersort(A, i, j-k) // sort first two thirds
Supersort(A, i+k, j) // sort last two thirds
Supersort(A, i, j-k) // sort first two thirds again
And I really am not sure how to analyze how many comparisons this algorithm makes in the worst case. I don't want the answer given to me; I just don't know how to get started on this problem. Thanks for any help.
Typically when you have a recursive function, the first thing you do is obtain a recurrence relation. In your case, let T(n) be the cost of Supersort when the input is of size n. What is it equal to? The first two if statements cost only constants, and the else branch costs T(2n/3) + T(n/3) + T(2n/3), so
T(n)=2T(2n/3)+T(n/3)+C
Then you solve that recurrence.
Correction:
In all three recursive calls you use 2/3 of the range; I thought one of them was 1/3. In that case the recurrence is even simpler and can be solved using the master theorem:
T(n)=3T(2n/3)+C
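To sanity-check that last step (my own addition): the master theorem with a = 3 subproblems, each of size n/b for b = 3/2, gives T(n) = Θ(n^(log_{3/2} 3)) ≈ Θ(n^2.71), so Supersort performs polynomially many comparisons, though many more than an ordinary O(n log n) sort.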

Example of Big O of 2^n

So I can picture an algorithm that has a complexity of n^c: it's just the number of nested for loops.
for (var i = 0; i < dataset.len; i++) {
for (var j = 0; j < dataset.len; j++) {
//do stuff with i and j
}
}
Log is something that splits the data set in half every time; binary search does this (I'm not entirely sure what the code for that looks like).
But what is a simple example of an algorithm that is c^n, or more specifically 2^n? Is O(2^n) based on loops through data? Or on how the data is split? Or something else entirely?
Algorithms with running time O(2^N) are often recursive algorithms that solve a problem of size N by recursively solving two smaller problems of size N-1.
This program, for instance, prints out in pseudo-code all the moves necessary to solve the famous "Towers of Hanoi" problem for N disks:
void solve_hanoi(int N, string from_peg, string to_peg, string spare_peg)
{
if (N<1) {
return;
}
if (N>1) {
solve_hanoi(N-1, from_peg, spare_peg, to_peg);
}
print "move from " + from_peg + " to " + to_peg;
if (N>1) {
solve_hanoi(N-1, spare_peg, to_peg, from_peg);
}
}
Let T(N) be the time it takes for N disks.
We have:
T(1) = O(1)
and
T(N) = O(1) + 2*T(N-1) when N>1
If you repeatedly expand the last term, you get:
T(N) = 3*O(1) + 4*T(N-2)
T(N) = 7*O(1) + 8*T(N-3)
...
T(N) = (2^(N-1)-1)*O(1) + (2^(N-1))*T(1)
T(N) = (2^N - 1)*O(1)
T(N) = O(2^N)
To actually figure this out, you just have to know that certain patterns in the recurrence relation lead to exponential results. Generally, T(N) = ... + C*T(N-1) with C > 1 means O(C^N). See:
https://en.wikipedia.org/wiki/Recurrence_relation
Think about, e.g., iterating over all possible subsets of a set. This kind of algorithm is used, for instance, for a generalized knapsack problem.
If you find it hard to understand how iterating over subsets translates to O(2^n), imagine a set of n switches, each of them corresponding to one element of the set. Now, each of the switches can be turned on or off. Think of "on" as being in the subset. Note how many combinations are possible: 2^n.
If you want to see an example in code, it's usually easier to think about recursion here, but I can't think of any other nice and understandable example right now.
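As one small illustration of the switch analogy (my own sketch, in C++): walk all 2^n subsets of {0, ..., n-1} with an n-bit counter, where bit i of mask is the "switch" for element i.

#include <iostream>

int main() {
    int n = 3;  // example size
    for (unsigned mask = 0; mask < (1u << n); ++mask) {
        std::cout << "{ ";
        for (int i = 0; i < n; ++i)
            if (mask & (1u << i)) std::cout << i << ' ';  // bit i on: element i is in
        std::cout << "}\n";
    }
}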
Consider that you want to guess the PIN of a smartphone, and this PIN is a 4-digit integer. You know that the maximum number of bits needed to hold a 4-digit number is 14 bits. So you would have to guess the correct 14-bit combination of this PIN out of the 2^14 = 16384 possible values!
The only way is brute force. So, for simplicity, consider a simple 2-bit word that you want to guess; each bit has 2 possible values, 0 or 1. So all the possibilities are:
00
01
10
11
We know that all possibilities of an n-bit word will be 2^n possible combinations. So, 2^2 is 4 possible combinations as we saw earlier.
The same applies to the 14-bit PIN: guessing it would require you to search all 2^14 possible outcomes, hence an algorithm of time complexity O(2^n).
So, those types of problems, where you have to try all possible combinations of the elements of a set S, have this O(2^n) time complexity. But the base of the exponent does not have to be 2; in the example above it is 2 because each element, each bit, has two possible values, which will not be the case in other problems.
Another good example of an O(2^n) algorithm is the recursive knapsack, where you have to try different combinations to maximize the value, and each element in the set has two possible states: either we take it or we don't.
The Edit Distance problem has O(3^n) time complexity, since for each of the n characters of the string you have 3 decisions to choose from: deletion, insertion, or replacement.
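A sketch of that naive three-way recursion in C++ (my own illustration; the function name is made up, and no memoization is applied, which is exactly what makes it exponential):

#include <string>
#include <algorithm>

// Edit distance between a[i..] and b[j..]: each mismatch branches three ways.
int editDistance(const std::string& a, const std::string& b, std::size_t i = 0, std::size_t j = 0) {
    if (i == a.size()) return int(b.size() - j);   // insert the rest of b
    if (j == b.size()) return int(a.size() - i);   // delete the rest of a
    if (a[i] == b[j]) return editDistance(a, b, i + 1, j + 1);
    return 1 + std::min({editDistance(a, b, i + 1, j),        // delete a[i]
                         editDistance(a, b, i, j + 1),        // insert b[j]
                         editDistance(a, b, i + 1, j + 1)});  // replace a[i]
}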
int Fibonacci(int number)
{
    if (number <= 1) return number;
    return Fibonacci(number - 2) + Fibonacci(number - 1);
}
Growth doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential: starting off very shallow, then rising meteorically.
My example of big O(2^n), but a much better one, is this:
public void solve(int n, String start, String auxiliary, String end) {
    if (n == 1) {
        System.out.println(start + " -> " + end);
    } else {
        solve(n - 1, start, end, auxiliary);
        System.out.println(start + " -> " + end);
        solve(n - 1, auxiliary, start, end);
    }
}
This method prints all the moves needed to solve the "Tower of Hanoi" problem. Both examples use recursion to solve the problem and have O(2^n) running time.
c^N = all combinations of N elements from an alphabet of size c.
More specifically, 2^N is all numbers representable with N bits.
The common cases are implemented recursively, something like:
vector<int> bits;
int N;
void find_solution(int pos) {
    if (pos == N) {
        check_solution();  // inspect the full assignment in bits[0..N-1]
        return;
    }
    bits[pos] = 0;
    find_solution(pos + 1);
    bits[pos] = 1;
    find_solution(pos + 1);
}
Here is a code clip that computes the value sum of every combination of items in a goods array (values is a global array variable):
fun boom(idx: Int, pre: Int, include: Boolean) {
    if (idx < 0) return
    boom(idx - 1, pre + if (include) values[idx] else 0, true)
    boom(idx - 1, pre + if (include) values[idx] else 0, false)
    println(pre + if (include) values[idx] else 0)
}
As you can see, it's recursive. We can nest loops to get polynomial complexity, and use recursion to get exponential complexity.
Here are two simple examples in python with Big O/Landau (2^N):
# Fibonacci
def fib(num):
    if num == 0 or num == 1:
        return num
    else:
        return fib(num - 1) + fib(num - 2)

num = 10
for i in range(0, num):
    print(fib(i))
# Tower of Hanoi
def move(disk, from_rod, to_rod, aux_rod):
    if disk >= 1:
        # move the top disks out of the way, via the target rod
        move(disk - 1, from_rod, aux_rod, to_rod)
        print("Move disk", disk, "from rod", from_rod, "to rod", to_rod)
        move(disk - 1, aux_rod, to_rod, from_rod)

n = 3
move(n, 'A', 'B', 'C')
Given that a set is a subset of itself, there are 2ⁿ possible subsets of a set with n elements.
Think of it this way: to make a subset, let's take one element. This element has two possibilities in the subset you're creating: present or absent. The same applies to all the other elements of the set. Multiplying all these possibilities, you arrive at 2ⁿ.

Dynamic Programming : True or False

I have a conceptual doubt regarding Dynamic Programming:
In a dynamic programming solution, the space requirement is always at least as big as the number of unique sub problems.
I thought about it in terms of Fibonacci numbers:
f(n) = f(n-1) + f(n-2)
Here each value depends on two subproblems, so the space required will be at least O(n) if the input is n.
Right?
But, the answer is False.
Can someone explain this?
The answer is indeed false.
For example, in your Fibonacci series, you can use dynamic programming with O(1) space by remembering only the last 2 numbers:
fib(n):
    prev = current = 1
    i = 2
    while i < n:
        next = prev + current
        prev = current
        current = next
        i = i + 1
    return current
This is a common practice where you don't need all smaller subproblems to solve the bigger one, and you can discard most of them and save some space.
If you implement Fibonacci calculation using bottom-up DP, you can discard earlier results which you don't need. This is an example:
fib = [0, 1]
for i in xrange(n):
    fib = [fib[1], fib[0] + fib[1]]
print fib[1]
As this example shows, you only need to memorize the last two elements in the array.
This statement is not correct. But it's almost correct.
Generally, a dynamic programming solution needs O(number of subproblems) space; in other words, if there is a dynamic programming solution to a problem, it can be implemented using O(number of subproblems) memory.
In your particular problem, "calculation of Fibonacci numbers", if you write down the straightforward dynamic programming solution:
Integer F(Integer n) {
    if (n == 0 || n == 1) return 1;
    if (memorized[n]) return memorized_value[n];
    memorized_value[n] = F(n - 1) + F(n - 2);
    memorized[n] = true;
    return memorized_value[n];
}
it will use O(number of subproblems) memory. But, as you mentioned, by analyzing the recurrence you can come up with a better solution that uses O(1) memory.
P.S. The recurrence for Fibonacci numbers that you've mentioned has n + 1 subproblems. Usually, by "subproblems" people mean all the f values that have to be calculated in order to compute a particular f value. Here you need to calculate f(0), f(1), f(2), ..., f(n) in order to compute f(n).

Write a number as a sum of consecutive primes

How can I check whether n can be partitioned into a sum of a sequence of consecutive prime numbers?
For example, 12 is equal to 5+7, and 5 and 7 are consecutive primes, but 20 is equal to 3+17, and 3 and 17 are not consecutive.
Note that repetition is not allowed.
My idea is to find and list all primes below n, then use 2 loops to sum the primes: the first 2 numbers, the second 2 numbers, the third 2 numbers, etc., and then the first 3 numbers, the second 3 numbers, and so forth. But this takes a lot of time and memory.
Realize that a consecutive list of primes is defined by only two pieces of information: the starting and the ending prime number. You just have to find these two numbers.
I assume that you have all the primes at your disposal, sorted in an array called primes. Keep three variables in memory: sum, which is initially 2 (the smallest prime), and first_index and last_index, which are initially 0 (the index of the smallest prime in primes).
Now you have to "tweak" these two indices and "travel" the array along the way in a loop (a code sketch follows the three cases below):
If sum == n then finish. You have found your sequence of primes.
If sum < n then enlarge the list by adding the next available prime. Increment last_index by one, and then increment sum by the value of the new prime, which is primes[last_index]. Repeat the loop. But if primes[last_index] is larger than n, then there is no solution, and you must finish.
If sum > n then reduce the list by removing the smallest prime from the list. Decrement sum by that value, which is primes[first_index], and then increment first_index by one. Repeat the loop.
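Here is a minimal C++ sketch of this loop (my own illustration; it assumes n >= 2 and that primes already holds all primes <= n in increasing order):

#include <vector>

bool isConsecutivePrimeSum(long long n, const std::vector<int>& primes) {
    std::size_t first = 0, last = 0;
    long long sum = primes[0];                 // start with the smallest prime
    while (true) {
        if (sum == n) return true;             // found the sequence
        if (sum < n) {                         // too small: add the next prime
            ++last;
            if (last == primes.size() || primes[last] > n) return false;
            sum += primes[last];
        } else {                               // too big: drop the smallest prime
            sum -= primes[first++];
        }
    }
}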
Dialecticus's algorithm is the classic O(m)-time, O(1)-space way to solve this type of problem (here I'll use m to represent the number of prime numbers less than n). It doesn't depend on any mysterious properties of prime numbers. (Interestingly, for the particular case of prime numbers, AlexAlvarez's algorithm is also linear time!) Dialecticus gives a clear and correct description, but seems at a loss to explain why it is correct, so I'll try to do this here. I really think it's valuable to take the time to understand this particular algorithm's proof of correctness: although I had to read a number of explanations before it finally "sank in", it was a real "Aha!" moment when it did! :) (Also, problems that can be efficiently solved in the same manner crop up quite a lot.)
The candidate solutions this algorithm tries can be represented as number ranges (i, j), where i and j are just the indexes of the first and last prime number in a list of prime numbers. The algorithm gets its efficiency by ruling out (that is, not considering) sets of number ranges in two different ways. To prove that it always gives the right answer, we need to show that it never rules out the only range with the right sum. To that end, it suffices to prove that it never rules out the first (leftmost) range with the right sum, which is what we'll do here.
The first rule it applies is that whenever we find a range (i, j) with sum(i, j) > n, we rule out all ranges (i, k) having k > j. It's easy to see why this is justified: the sum can only get bigger as we add more terms, and we have determined that it's already too big.
The second, trickier rule, crucial to the linear time complexity, is that whenever we advance the starting point of a range (i, j) from i to i+1, instead of "starting again" from (i+1, i+1), we start from (i+1, j) -- that is, we avoid considering (i+1, k) for all i+1 <= k < j. Why is it OK to do this? (To put the question the other way: Couldn't it be that doing this causes us to skip over some range with the right sum?)
[EDIT: The original version of the next paragraph glossed over a subtlety: we might have advanced the range end point to j on any previous step.]
To see that it never skips a valid range, we need to think about the range (i, j-1). For the algorithm to advance the starting point of the current range, so that it changes from (i, j) to (i+1, j), it must have been that sum(i, j) > n; and as we will see, to get to a program state in which the range (i, j) is being considered in the first place, it must have been that sum(i, j-1) < n. That second claim is subtle, because there are two different ways to arrive in such a program state: either we just incremented the end point, meaning that the previous range was (i, j-1) and this range was found to be too small (in which case our desired property sum(i, j-1) < n obviously holds); or we just incremented the start point after considering (i-1, j) and finding it to be too large (in which case it's not obvious that the property still holds).
What we do know, however, is that regardless of whether the end point was increased from j-1 to j on the previous step, it was definitely increased at some time before the current step -- so let's call the range that triggered this end point increase (k, j-1). Clearly sum(k, j-1) < n, since this was (by definition) the range that caused us to increase the end point from j-1 to j; and just as clearly k <= i, since we only process ranges in increasing order of their start points. Since i >= k, sum(i, j-1) is just the same as sum(k, j-1) but with zero or more terms removed from the left end, and all of these terms are positive, so it must be that sum(i, j-1) <= sum(k, j-1) < n.
So we have established that whenever we increase i to i+1, we know that sum(i, j-1) < n. To finish the analysis of this rule, what we (again) need to make use of is that dropping terms from either end of this sum can't make it any bigger. Removing the first term leaves us with sum(i+1, j-1) <= sum(i, j-1) < n. Starting from that sum and successively removing terms from the other end leaves us with sum(i+1, j-2), sum(i+1, j-3), ..., sum(i+1, i+1), all of which we know must be less than n -- that is, none of the ranges corresponding to these sums can be valid solutions. Therefore we can safely avoid considering them in the first place, and that's exactly what the algorithm does.
One final potential stumbling block is that it might seem that, since we are advancing two loop indexes, the time complexity should be O(m^2). But notice that every time through the loop body, we advance one of the indexes (i or j) by one, and we never move either of them backwards, so if we are still running after 2m loop iterations we must have i + j = 2m. Since neither index can ever exceed m, the only way for this to hold is if i = j = m, which means that we have reached the end: i.e. we are guaranteed to terminate after at most 2m iterations.
The fact that the primes have to be consecutive allows us to solve this problem quite efficiently in terms of n. Suppose that we have previously computed all the primes less than or equal to n. We can then easily compute sum(i), the sum of the first i primes.
Having this function precomputed, we can loop over the primes less than or equal to n and check whether there exists a length such that, starting with that prime, we can sum up to exactly n. Notice that for a fixed starting prime the sequence of sums is monotone, so we can binary search over the length.
Thus, let k be the number of primes less than or equal to n. Precomputing the sums costs O(k), and the loop costs O(k log k), which dominates. Using the prime number theorem, we know that k = O(n/log n), and then the whole algorithm costs O((n/log n) log(n/log n)) = O(n).
Let me put the code in C++ to make it clearer; hope there are no bugs:
#include <iostream>
#include <vector>
using namespace std;

typedef long long ll;

int main() {
    //Get the limit for the numbers
    int MAX_N;
    cin >> MAX_N;
    //Compute the primes less than or equal to MAX_N
    vector<bool> is_prime(MAX_N + 1, true);
    for (int i = 2; i*i <= MAX_N; ++i) {
        if (is_prime[i]) {
            for (int j = i*i; j <= MAX_N; j += i) is_prime[j] = false;
        }
    }
    vector<int> prime;
    for (int i = 2; i <= MAX_N; ++i) if (is_prime[i]) prime.push_back(i);
    //Compute the prefix sums
    vector<ll> sum(prime.size() + 1, 0);
    for (int i = 0; i < (int)prime.size(); ++i) sum[i + 1] = sum[i] + prime[i];
    //Get the number of queries
    int n_queries;
    cin >> n_queries;
    for (int z = 1; z <= n_queries; ++z) {
        int n;
        cin >> n;
        //Solve the query
        bool found = false;
        for (int i = 0; i < (int)prime.size() and prime[i] <= n and not found; ++i) {
            //Do binary search over the length of the sum:
            //for all x < ini, the range [i, x] sums to <= n
            int ini = i, fin = int(prime.size()) - 1;
            while (ini <= fin) {
                int mid = (ini + fin)/2;
                ll value = sum[mid + 1] - sum[i];
                if (value <= n) ini = mid + 1;
                else fin = mid - 1;
            }
            //Check the candidate given by the binary search
            int candidate = ini - 1;
            if (candidate >= i and sum[candidate + 1] - sum[i] == n) {
                found = true;
                cout << n << " =";
                for (int j = i; j <= candidate; ++j) {
                    cout << " ";
                    if (j > i) cout << "+ ";
                    cout << prime[j];
                }
                cout << endl;
            }
        }
        if (not found) cout << "No solution" << endl;
    }
}
Sample input:
1000
5
12
20
28
17
29
Sample output:
12 = 5 + 7
No solution
28 = 2 + 3 + 5 + 7 + 11
17 = 2 + 3 + 5 + 7
29 = 29
I'd start by noting that for a pair of consecutive primes to sum to the number, one of the primes must be less than N/2, and the other prime must be greater than N/2. For them to be consecutive primes, they must be the primes closest to N/2, one smaller and the other larger.
If you're starting with a table of prime numbers, you basically do a binary search for N/2. Look at the primes immediately larger and smaller than that. Add those numbers together and see if they sum to your target number. If they don't, then it can't be the sum of two consecutive primes.
If you don't start with a table of primes, it works out pretty much the same way: you still start from N/2 and find the next larger prime (we'll call it prime1). Then you compute N - prime1 to get a candidate for prime2. Check whether that's prime, and if it is, search the range prime2...N/2 to see if there is a prime in between. If there is, your number is a sum of non-consecutive primes; if there is no other prime in that range, then it is a sum of consecutive primes.
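A minimal sketch of the two-prime case in C++ (my own illustration, assuming a sorted table primes of all primes <= n):

#include <vector>
#include <algorithm>

bool sumOfTwoConsecutivePrimes(int n, const std::vector<int>& primes) {
    // the only candidates: the first prime above n/2 ...
    auto hi = std::upper_bound(primes.begin(), primes.end(), n / 2);
    if (hi == primes.begin() || hi == primes.end()) return false;
    auto lo = hi - 1;  // ... and the prime immediately below it
    return *lo + *hi == n;
}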
The same basic idea applies for sequences of 3 or more primes, except that (of course) your search starts from N/3 (or whatever number of primes you want to sum to get to the number).
So, for three consecutive primes to sum to N, two of the three must be the first prime smaller than N/3 and the first prime larger than N/3. So we start by finding those, then compute N - (prime1 + prime2). That gives us our third candidate. We know these three numbers sum to N; we still need to prove that this third number is prime, and if it is, verify that it's consecutive to the other two.
To give a concrete example, for 10 we'd start from 3.333. The next smaller prime is 3 and the next larger is 5. Those add to 8. 10-8 = 2. 2 is prime and consecutive to 3, so we've found the three consecutive primes that add to 10.
There are some other refinements you can make as well. The most obvious would be based on the fact that all primes (other than 2) are odd numbers. Therefore (assuming we can ignore 2), an even number can only be the sum of an even number of primes, and an odd number can only be a sum of an odd number of primes. So, given 123456789, we know immediately that it can't possibly be the sum of 2 (or 4, 6, 8, 10, ...) consecutive primes, so the only candidates to consider are 3, 5, 7, 9, ... primes. Of course, the opposite works as well: given, say, 12345678, the simple fact that it's even lets us immediately rule out the possibility that it could be the sum of 3, 5, 7 or 9 consecutive primes; we only need to consider sequences of 2, 4, 6, 8, ... primes. We violate this basic rule only when we get to a large enough number of primes that we could include 2 as part of the sequence.
I haven't worked through the math to figure out exactly how many that would be for a given number, but I'm pretty sure it should be fairly easy, and it's something we want to know anyway (because it's the upper limit on the number of consecutive primes to look for). If we use M for the number of primes, the limit should be approximately M <= sqrt(N), but that's definitely only an approximation.
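One quick way to justify a bound of that shape (my own addition): the i-th prime is at least i + 1, so the sum of the first M primes is at least 2 + 3 + ... + (M + 1) > M^2/2; hence N > M^2/2 and M < sqrt(2N). Sharper estimates via the prime number theorem give M = O(sqrt(N / log N)).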
I know that this question is a little old, but I cannot refrain from replying to the analysis made in the previous answers. Indeed, it has been emphasized that all three proposed algorithms have a running time that is essentially linear in n. But in fact, it is not difficult to produce an algorithm that runs in a strictly smaller power of n.
To see how, let us choose a parameter K between 1 and n, and suppose that the primes we need are already tabulated (if they must be computed from scratch, see below). Then, here is what we are going to do to search for a representation of n as a sum of k consecutive primes:
First, we search for k < K using the idea present in the answer of Jerry Coffin; that is, we search for k primes located around n/k.
Then, to explore the sums of k >= K primes, we use the algorithm explained in the answer of Dialecticus; that is, we begin with a sum whose first element is 2, and then we advance the first element one step at a time.
The first part, which concerns short sums of big primes, requires O(log n) operations to binary search for one prime close to n/k, and then O(k) operations to find the other k primes (there are a few simple possible implementations). In total this makes a running time of
R_1 = O(K^2) + O(K log n).
The second part, about long sums of small primes, requires us to consider sums of consecutive primes p_1 < ... < p_k where the first element is at most n/K.
Thus, it requires visiting at most n/K + K primes (one can actually save a log factor by a weak version of the prime number theorem). Since in the algorithm every prime is visited at most O(1) times, the running time is
R_2 = O(n/K) + O(K).
Now, if log n < K < sqrt(n), the first part runs in O(K^2) operations and the second part in O(n/K). We optimize with the choice K = n^(1/3), so that the overall running time is
R_1 + R_2 = O(n^(2/3)).
If the primes are not tabulated
If we also have to find the primes, here is how we do it.
First, we use the sieve of Eratosthenes, which in C_2 = O(T log log T) operations finds all the primes up to T, where T = O(n/K) is the upper bound on the small primes visited in the second part of the algorithm.
In order to perform the first part of the algorithm we need, for every k < K, to find O(k) primes located around n/k. The Riemann hypothesis implies that there are at least k primes in the interval [x, x+y] if y > c log(x) (k + sqrt(x)) for some constant c > 0. Therefore, a priori, we need to find the primes contained in an interval I_k centered at n/k with width |I_k| = O(k log n) + O(sqrt(n/k) log n).
Using the sieve of Eratosthenes to sieve the interval I_k requires O(|I_k| log log n) + O(sqrt(n)) operations. If k < K < sqrt(n), we get a time complexity of C_1 = O(sqrt(n) log n log log n) for every k < K.
Summing up, the total time complexity C_1 + C_2 + R_1 + R_2 is minimized when
K = n^(1/4) / (log n sqrt(log log n)).
With this choice we have the sublinear time complexity
R_1 + R_2 + C_1 + C_2 = O(n^(3/4) sqrt(log log n)).
If we do not assume the Riemann hypothesis, we have to search larger intervals, but we still end up with a sublinear time complexity. If instead we assume stronger conjectures on prime gaps, we may only need to search intervals I_k of width |I_k| = k (log n)^A for some A > 0. Then, instead of Eratosthenes, we can use other deterministic primality tests. For example, suppose that we can test a single number for primality in O((log n)^B) operations, for some B > 0.
Then we can search the interval I_k in O(k (log n)^(A+B)) operations. In this case the optimal K is still K ≈ n^(1/3), up to logarithmic factors, and so the total complexity is O(n^(2/3) (log n)^D) for some D > 0.
