An unparenthesized arithmetic expression - algorithm

An unparenthesized arithmetic expression can evaluate to many different values depending on how it is parenthesized. How do I find its maximum possible value? Can someone help me?

There is a dynamic programming solution.
For an expression, define its "outermost split point" to be the first operator that is not within any parentheses. After this split, if it is a +, you need to maximize both the left and the right sub-expression; if it is a -, maximize the left side and minimize the right side.
You can use either dynamic programming or memoization to implement this algorithm. Memoization is straightforward: search over each split point and save the answer in another data structure (two 2D matrices, with M[x][y] storing the max/min value of the sub-expression beginning at x and ending at y); when the data is already in the matrices, use it instead of recomputing.
Using dynamic programming is a bit trickier, but you can think of it this way:
first, loop through the expression, finding the max/min for each pair of consecutive values with the operator between them (this is just a fancy way of saying: compute each one directly);
next, loop through the expression, finding the max/min for each run of three consecutive values with the operators between them (for a ? b ? c, this is computed by assuming the split point is between a and b, then assuming it is between b and c, and storing the max/min over the two choices);
once you know the max/min for all k-length sequences, compute the (k+1)-length ones using the same method as in step 2, until k is the length of the array; then return the max value for length k.
This is almost the same as the Matrix Chain Multiplication algorithm, which has O(N^3) complexity. The complexity can be argued crudely: the outer loop runs N - 1 times, each iteration considers at most N - 1 subsequences, and each subsequence tries at most N - 1 split points, giving O(N^3) time.
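As a rough sketch of the memoization variant in Python (the function name, the tokenized input format, and the example are my own, hypothetical choices):
from functools import lru_cache

def max_expression_value(nums, ops):
    # nums: operand values; ops: the '+'/'-' operators between them,
    # e.g. nums=[5, 3, 2], ops=['-', '+'] represents 5-3+2.
    @lru_cache(maxsize=None)
    def solve(i, j):
        # Returns (min, max) value of the sub-expression nums[i..j].
        if i == j:
            return (nums[i], nums[i])
        lo, hi = float('inf'), float('-inf')
        for k in range(i, j):                 # split at operator ops[k]
            lmin, lmax = solve(i, k)
            rmin, rmax = solve(k + 1, j)
            if ops[k] == '+':
                lo, hi = min(lo, lmin + rmin), max(hi, lmax + rmax)
            else:                             # '-': the max needs the min of the right side
                lo, hi = min(lo, lmin - rmax), max(hi, lmax - rmin)
        return (lo, hi)
    return solve(0, len(nums) - 1)[1]

print(max_expression_value([5, 3, 2], ['-', '+']))   # (5-3)+2 = 4 beats 5-(3+2) = 0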


Minimal non-contiguous sequence of exactly k elements

The problem I'm having can be reduced to:
Given an array of N positive numbers, find the non-contiguous sequence of exactly K elements with the minimal sum.
OK-ish: report the sum only. Bonus: the picked elements can be identified (at least one set of indices, if many realize the same sum).
(In layman's terms: pick any K non-neighbouring elements from N values so that their sum is minimal.)
Of course, 2*K <= N+1 (otherwise no solution is possible). The problem is insensitive to positive/negative values (just shift the array values by MIN = min(A...), then add K*MIN back to the answer).
What I got so far (the naive approach):
select the K+2 indexes of the values closest to the minimum. I'm not sure about this: for K=2 it seems to be what's required to cover all the particular cases, but I don't know if it is necessary/sufficient for K>2**
brute-force the minimal sum over the values at the indices from the previous step, respecting the non-contiguity criterion. If I'm right and K+2 is enough, I can live with brute-forcing a (K+1)*(K+2) solution space; but, as I said, I'm not sure K+2 is enough for K>2 (if in fact 2*K points are necessary, then brute-forcing goes out the window - the binomial coefficient C(2*K, K) grows prohibitively fast)
Any clever idea of how this can be done with minimal time/space complexity?
** For K=2, a non-trivial example where the 4 values closest to the absolute minimum are necessary to reach the objective sum: [4,1,0,1,4,3,4]. One cannot use the 0 value for building the minimal sum, as it breaks the non-contiguity criterion.
PS - if you feel like showing code snippets, C/C++ and/or Java will be appreciated, but any language with decent syntax or pseudo-code will do (I reckon "decent syntax" excludes Perl, doesn't it?)
Let's assume input numbers are stored in array a[N]
The generic approach is DP: let f(n, k) be the minimal sum of k non-neighbouring elements chosen from the first n elements; then f(n, k) = min(f(n-1, k), f(n-2, k-1) + a[n])
It takes O(N*K) time, and there are two options for space:
O(N*K) space, keeping the whole table, for a lazy backtracking solution that can also recover the chosen indexes;
O(K) space for a forward cycle that keeps only the last two rows and reports just the sum.
In the special case of big K there is another possibility:
use recursive backtracking;
instead of a helper array of N*K size, use map(n, map(k, pair(answer, list(answer indexes))));
save the answer and the list of indexes for that answer;
instantly return MAX_INT if k > N/2.
This way you'll have lower time than O(N*K) for K ~= N/2, something like O(N*log(N)). It will increase up to O(N*log(N)*K*log(K)) for small K, so the decision between the general approach and the special-case algorithm is important.
There should be a dynamic programming approach to this.
Work along the array from left to right. At each point i, for each value of j from 1..k, work out the best answer for picking j non-contiguous elements from elements 1..i. You can compute the answers at i from the answers at i-1 and i-2 together with the value of array[i]: either you skip array[i] (giving answer[i-1][j]) or you select it (giving answer[i-2][j-1] + array[i], since the neighbour at i-1 is then forbidden). The answer you want is the answer at n for an array of length n. After you have done this you can work out which elements were picked by back-tracking along the array, checking at each point whether the best decision selected the array element there, i.e. whether it came from answer[i-1][j] or from answer[i-2][j-1] + array[i].
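A minimal sketch of this DP in Python (the function name is mine; it returns infinity when no valid selection of K non-adjacent elements exists):
def min_sum_k_noncontiguous(a, k):
    # f[i][j]: minimal sum choosing j pairwise non-adjacent elements from
    # the first i-1 elements; rows 0 and 1 are sentinels for the empty
    # prefix, which avoids special cases at the left border.
    n = len(a)
    INF = float('inf')
    f = [[INF] * (k + 1) for _ in range(n + 2)]
    for row in f:
        row[0] = 0
    for i in range(2, n + 2):
        for j in range(1, k + 1):
            f[i][j] = min(f[i - 1][j],                  # skip a[i-2]
                          f[i - 2][j - 1] + a[i - 2])   # take it, forbid its neighbour
    return f[n + 1][k]

print(min_sum_k_noncontiguous([4, 1, 0, 1, 4, 3, 4], 2))   # 2 (the two 1s)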

Guidance on Algorithmic Thinking (4 fours equation)

I recently saw a logic/math problem called 4 Fours, where you need to use four 4s and a range of operators to create equations that equal each of the integers from 0 to N.
How would you go about writing an elegant algorithm to come up with, say, the first 100...
I started by creating base calculations like 4-4, 4+4, 4x4, 4/4, 4!, Sqrt(4) and treated these values as integers.
However, I realized that this was going to be a brute-force method, testing the combinations to see if they equal 0, then 1, then 2, then 3, etc...
I then thought of finding all possible combinations of the above values, checking that each result was less than 100, filling an array, and then sorting it... again inefficient, because it may find thousands of numbers over 100.
Any help on how to approach a problem like this would be appreciated... not actual code, but how to think through this problem.
Thanks!!
This is an interesting problem. There are a couple of different things going on here. One issue is how to describe the sequence of operations and operands that go into an arithmetic expression. Using parentheses to establish order of operations is quite messy, so instead I suggest thinking of an expression as a stack of operations and operands, like - 4 4 for 4-4, + 4 * 4 4 for (4*4)+4, * 4 + 4 4 for (4+4)*4, etc. It's essentially Polish (prefix) notation, the mirror image of the Reverse Polish Notation on an HP calculator. Then you don't have to worry about parentheses, and having a data structure for expressions will help below when we build up larger and larger expressions.
Now we turn to the algorithm for building expressions. Dynamic programming doesn't work in this situation, in my opinion, because (for example) to construct some numbers in the range from 0 to 100 you might have to go outside of that range temporarily.
A better way to conceptualize the problem, I think, is as breadth first search (BFS) on a graph. Technically, the graph would be infinite (all positive integers, or all integers, or all rational numbers, depending on how elaborate you want to get) but at any time you'd only have a finite portion of the graph. A sparse graph data structure would be appropriate.
Each node (number) on the graph would have a weight associated with it, the minimum number of 4's needed to reach that node, and also the expression which achieves that result. Initially, you would start with just the node (4), with the number 1 associated with it (it takes one 4 to make 4) and the simple expression "4". You can also throw in (44) with weight 2, (444) with weight 3, and (4444) with weight 4.
To build up larger expressions, apply all the different operations you have to those initial nodes. For example, unary negation, factorial, square root; binary operations like * 4 at the bottom of your stack for multiply by 4, + 4, - 4, / 4, ^ 4 for exponentiation, and also + 44, etc. The weight of an operation is the number of 4s it requires: unary operations have weight 0, + 4 has weight 1, * 44 has weight 2, etc. You add the weight of the operation to the weight of the node on which it operates to get the new weight; for example, + 4 acting on node (44) with weight 2 and expression "44" results in a new node (48) with weight 3 and expression "+ 4 44". If the new result for 48 has a better weight than the existing result for 48, substitute the new node for (48).
You will have to use some sense when applying functions. factorial(4444) would be a very large number; it would be wise to set a domain for your factorial function which would prevent the result from getting too big or going out of bounds. The same with functions like / 4; if you don't want to deal with fractions, say that non-multiples of 4 are outside of the domain of / 4 and don't apply the operator in that case.
The resulting algorithm is very much like Dijkstra's algorithm for calculating distance in a graph, though not exactly the same.
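Below is a sketch of such a Dijkstra-style search in Python, under simplifying assumptions: integer values only, a reduced operator set, and binary operations that only combine the current node with a single literal 4 (so expressions like 44 + 44 are omitted). All names are mine:
import heapq
import math

def four_fours(limit=100, max_fours=4, max_abs=10**6):
    # A node is a value; its weight is the number of 4s used to reach it.
    best = {}                                  # value -> (weight, expression)
    heap = [(n, int("4" * n), "4" * n) for n in (1, 2, 3, 4)]
    heapq.heapify(heap)
    while heap:
        w, val, expr = heapq.heappop(heap)
        if val in best and best[val][0] <= w:
            continue                           # already reached at least as cheaply
        best[val] = (w, expr)
        moves = []
        if 0 <= val <= 10:                     # domain-restricted factorial
            moves.append((w, math.factorial(val), "! " + expr))
        if val >= 0 and math.isqrt(val) ** 2 == val:
            moves.append((w, math.isqrt(val), "sqrt " + expr))
        moves.append((w + 1, val + 4, "+ 4 " + expr))
        moves.append((w + 1, val - 4, "- " + expr + " 4"))
        moves.append((w + 1, val * 4, "* 4 " + expr))
        if val % 4 == 0:                       # stay within the integers
            moves.append((w + 1, val // 4, "/ " + expr + " 4"))
        for nw, nv, ne in moves:
            if nw <= max_fours and abs(nv) <= max_abs:
                heapq.heappush(heap, (nw, nv, ne))
    return {v: we for v, we in best.items() if 0 <= v <= limit}

print(four_fours().get(100))   # e.g. (3, '+ 4 * 4 ! 4'), i.e. 4!*4+4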
I think that the brute force solution here is the only way to go.
The reasoning behind this is that each number has a different way to get to it, and getting to a certain x might have nothing to do with getting to x+1.
Having said that, you might be able to make the brute force solution a bit quicker by using obvious moves where possible.
For instance, if I got to 20 using three 4s (4*4+4), it is obvious how to get to 16, 24 and 80 with the fourth. Holding an array of 100 bits and marking the numbers already reached avoids revisiting them.
Similar to the subset sum problem, this can be solved using Dynamic Programming (DP) via the following recursive formulas:
D(0,0) = true
D(x,0) = false for x != 0
D(x,i) = D(x-4,i-1) OR D(x+4,i-1) OR D(x*4,i-1) OR D(x/4,i-1)
By computing the above using the DP technique, it is easy to find out which numbers can be produced using these 4's, and by walking back through the solution you can recover how each number was built.
The advantage of this method (when implemented with DP) is that you do not recalculate the same values multiple times. I am not sure it will actually be effective for 4 fours, but I believe theoretically it could be a significant improvement for a less restricted generalization of this problem.
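A forward-running sketch of this recurrence in Python (the function name and the value clamp are my own additions; the clamp keeps the sets finite):
def buildable(limit, fours=4):
    # D[i] = values reachable using i fours, applying one 4 per step, as in
    # D(x,i) = D(x-4,i-1) OR D(x+4,i-1) OR D(x*4,i-1) OR D(x/4,i-1).
    lo, hi = -4 * limit, 4 * limit
    D = [set() for _ in range(fours + 1)]
    D[1] = {4}
    for i in range(2, fours + 1):
        for x in D[i - 1]:
            for y in (x + 4, x - 4, x * 4):
                if lo <= y <= hi:
                    D[i].add(y)
            if x % 4 == 0:                     # integer division only
                D[i].add(x // 4)
    return sorted(v for v in D[fours] if 0 <= v <= limit)

print(buildable(100))   # the targets in 0..100 reachable with exactly four 4s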
This answer is just an extension of Amit's.
Essentially, your operations are:
Apply a unary operator to an existing expression to get a new expression (this does not use any additional 4s)
Apply a binary operator to two existing expressions to get a new expression (the new expression uses a number of 4s equal to the sum of those used by the two input expressions)
For each n from 1..4, calculate Expressions(n) - a List of (Expression, Value) pairs as follows:
(For a fixed n, only store 1 expression in the list that evaluates to any given value)
Initialise the list with the concatenation of n 4s (i.e. 4, 44, 444, 4444)
For i from 1 to n-1, and each permitted binary operator op, add an expression (and value) e1 op e2 where e1 is in Expressions(i) and e2 is in Expressions(n-i)
Repeatedly apply unary operators to the expressions/values calculated so far in steps 1-3. When to stop (applying this step recursively) is a little vague; certainly stop if an iteration produces no new values. Potentially limit the magnitude of the values you allow, or the size of the expressions.
Example unary operators are !, Sqrt, -, etc. Example binary operators are +-*/^ etc. You can easily extend this approach to operators with more arguments if permitted.
You could do something a bit cleverer in terms of step 3 never ending for any given n. The simple way (described above) does not start calculating Expressions(i) until Expressions(j) is complete for all j < i. This requires that we know when to stop. The alternative is to build Expressions of a certain maximum length for each n, then if you need to (because you haven't found certain values), extend the maximum length in an outer loop.
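A compact sketch of this scheme in Python (all names are mine; unary operators are applied for a single round with crude domain limits, in line with the "when to stop is vague" caveat above):
import math

def expressions(max_fours=4, max_abs=10**6):
    # exprs[n] maps value -> one expression string using exactly n fours.
    exprs = [None] + [dict() for _ in range(max_fours)]
    for n in range(1, max_fours + 1):
        exprs[n][int("4" * n)] = "4" * n               # step 1: 4, 44, 444, 4444
        for i in range(1, n):                          # step 2: binary operators
            for v1, e1 in list(exprs[i].items()):
                for v2, e2 in list(exprs[n - i].items()):
                    for v, e in ((v1 + v2, f"({e1}+{e2})"),
                                 (v1 - v2, f"({e1}-{e2})"),
                                 (v1 * v2, f"({e1}*{e2})")):
                        if abs(v) <= max_abs:
                            exprs[n].setdefault(v, e)
                    if v2 != 0 and v1 % v2 == 0:
                        exprs[n].setdefault(v1 // v2, f"({e1}/{e2})")
        for v, e in list(exprs[n].items()):            # step 3: one unary round
            if v >= 0 and math.isqrt(v) ** 2 == v:
                exprs[n].setdefault(math.isqrt(v), f"sqrt{e}")
            if 0 <= v <= 10:
                exprs[n].setdefault(math.factorial(v), f"{e}!")
    return exprs

print(expressions()[4].get(100))   # one four-4s expression for 100, if found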

Is there a way to normalize a set of values with only one iteration?

So say I have a big list of floats, and I'd like to normalize them all to values between min and max. E.g. {2, 5, 10} with min = 0 and max = 2 results in a set of {0.4, 1, 2}.
My current algorithm for this is
Iterate through all elements and find the sum of the set
Iterate through all elements again, and divide by the sum (accounting for min and max)
My question is a general algorithmic one - is it possible to do this operation in O(n) instead of O(2n)?
First, O(n) = O(2n); asymptotically they are the same.
Back to your problem, you can do the following:
Iterate through all elements once and find the min/max of the set.
Use a lazy evaluation technique: don't iterate through all the elements again. When you are asked for an object i from the set, return it through the normalization function you derived in step 1.
Notice that you can't avoid reading each used element twice: once to determine the normalization and once to apply it.
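A tiny sketch of the lazy approach in Python (names are mine). Following the question's example, values are scaled so that the largest element maps to max:
def make_normalizer(values, new_max=2.0):
    # One pass over the data to find the scale factor...
    hi = max(values)
    # ...then each element is normalized only when it is accessed.
    return lambda i: values[i] * new_max / hi

norm = make_normalizer([2, 5, 10])
print(norm(0), norm(1), norm(2))   # 0.4 1.0 2.0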
This question is ill-stated, as O(N) and O(2N) are strictly equivalent.
Assuming that what the OP is asking for is a single-pass algorithm, the answer is a clear no. The normalization function, whatever it is, depends on all the values in the list, so every value must be read at least once before normalization can start; at least one full pass is needed. A single-pass algorithm would mean you could normalize the first element immediately, without looking at the others, which is impossible.
The only thing that can be gained is to normalize the values only when they are used (see #avim's answer), instead of once and for all.
Technically speaking, this brings the asymptotic complexity to O(N + M) where M is the number of uses of a normalized value. If M exceeds N, this approach is more costly.

possible combinations without zeros

So I am looking for a way to get possible combinations of two integers from an array, say I have
v = [0, 1, 2, 0, 4]
At the end I would like, conceptually, a matrix like C = v^T v, where v^T is the transpose of the vector, so you get a matrix whose entries are the combinations of two integers (some of them involving zeros). For row 1, for instance,
(0,0) (1,1) (1,2) (0,0) (0,4)
but I only need (1,1) and (1,2); similar reasoning holds for the other rows in my conceptual matrix visualisation. I can do this with two nested loops, checking whether each pair includes a 0 or not. The question is: are there algorithms for these kinds of combinatorial tasks that do better than nested loops?
There is no way to generate this output "matrix" without a 2D nested loop (or something directly equivalent). So given you'll have the loops anyway, it's trivial to add the conditional checking.
I suppose you could pre-sort the array, and then start your loop counters at the first non-zero value...
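A small variation on that idea, sketched in Python (the function name is mine): collect the indices of the nonzero entries in one linear pass, then run the nested loop over those only:
def nonzero_pairs(v):
    # O(m^2) work for m nonzero entries, instead of O(n^2) for all n.
    idx = [i for i, x in enumerate(v) if x != 0]
    return [(v[i], v[j]) for i in idx for j in idx]

print(nonzero_pairs([0, 1, 2, 0, 4]))
# [(1, 1), (1, 2), (1, 4), (2, 1), (2, 2), (2, 4), (4, 1), (4, 2), (4, 4)]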
Depending on how many zeroes you have, you could use a sparse matrix data structure and that may reduce the problem from O(n^2) to something slightly smaller (depending on how many zeroes you have).
If you ignore the zeroes for a second, the number of 2-element combinations, n choose 2, equals n^2/2 - n/2, and similarly there are n*(n-1) 2-permutations of n. So no matter whether you are doing combinations or permutations, you still need an algorithm that is worst-case O(n^2). Note your matrix would possibly be symmetric depending on certain assumptions, but that would not be enough to change the bound: exploiting symmetry at best halves the work, which is still O(n^2).

Generate all subset sums within a range faster than O((k+N) * 2^(N/2))?

Is there a way to generate all of the subset sums s1, s2, ..., sk that fall in a range [A,B] faster than O((k+N)*2^(N/2)), where k is the number of sums there are in [A,B]? Note that k is only known after we have enumerated all the subset sums within [A,B].
I'm currently using a modified Horowitz-Sahni algorithm. For example, I first call it for the smallest sum greater than or equal to A, giving me s1. Then I call it again for the next smallest sum greater than s1, giving me s2. I repeat this until I find a sum sk+1 greater than B. A lot of computation is repeated between iterations, even without rebuilding the initial two 2^(N/2) lists, so is there a way to do better?
In my problem, N is about 15, and the magnitude of the numbers is on the order of millions, so I haven't considered the dynamic programming route.
Check the subset sum problem on Wikipedia. As far as I know, it describes the fastest known algorithm, which operates in O(2^(N/2)) time.
Edit:
If you're looking for multiple possible sums, instead of just 0, you can save the end arrays and just iterate through them again (which is roughly an O(2^(n/2)) operation) and save re-computing them. The values of all the possible subsets don't change with the target.
Edit again:
I'm not wholly sure what you want. Are we running K searches for one independent value each, or looking for any subset that has a value in a specific range that is K wide? Or are you trying to approximate the second by using the first?
Edit in response:
Yes, you do get a lot of duplicate work even without rebuilding the list. But if you don't rebuild the list, that's not O(k * N * 2^(N/2)). Building the list is O(N * 2^(N/2)).
If you know A and B right now, you could begin iteration, and then simply not stop when you find the right answer (the bottom bound), but keep going until it goes out of range. That should be roughly the same as solving subset sum for just one solution, involving only +k more ops, and when you're done, you can ditch the list.
More edit:
You have a range of sums, from A to B. First, you solve subset sum problem for A. Then, you just keep iterating and storing the results, until you find the solution for B, at which point you stop. Now you have every sum between A and B in a single run, and it will only cost you one subset sum problem solve plus K operations for K values in the range A to B, which is linear and nice and fast.
s = *i + *j; if s > B then ++i; else if s < A then ++j; else { print s; ... what_goes_here? ... }
No, no, no. I get the source of your confusion now (I misread something), but it's still not as complex as what you had originally. If you want to find ALL combinations within the range, instead of one, you will just have to iterate over all combinations of both lists, which isn't too bad.
Excuse my use of auto. C++0x compiler.
#include <algorithm>
#include <vector>

std::vector<int> sums;
std::vector<int> firstlist;
std::vector<int> secondlist;
// Fill in first/secondlist.
std::sort(firstlist.begin(), firstlist.end());
std::sort(secondlist.begin(), secondlist.end());
// Since we want all sums in a range, rather than just the first, we need to
// check all combinations. Horowitz/Sahni is only designed to find one.
for (auto firstit = firstlist.begin(); firstit != firstlist.end(); ++firstit) {
    // Reset the inner iterator on every pass (and compare with !=, not =).
    for (auto secondit = secondlist.begin(); secondit != secondlist.end(); ++secondit) {
        int sum = *firstit + *secondit;
        if (sum >= A && sum <= B)   // [A,B] inclusive, per the question
            sums.push_back(sum);
    }
}
It's still not great, but it could be optimized if you know in advance that N is very large: for example, by mapping or hash-mapping sums to iterators, so that any given firstit can find its suitable partners in secondlist, reducing the running time.
It is possible to do this in O(N*2^(N/2)), using ideas similar to Horowitz-Sahni, but with some optimizations to reduce the constants hidden in the big-O.
We do the following:
Step 1: Split the input into two halves of N/2 elements each, and generate all 2^(N/2) subsets (and their sums) for each half. Call them S1 and S2. This we can do in O(2^(N/2)) (note: the N factor is missing here due to an optimization we can do).
Step 2: Next sort the larger of S1 and S2 (say S1) in O(N*2^(N/2)) time (we optimize here by not sorting both).
Step 3: Find the subset sums in range [A,B] in S1 using binary search (as it is sorted).
Step 4: Next, for each sum in S2, use binary search to find the sets in S1 whose union with it gives a sum in range [A,B]. This is O(N*2^(N/2)). At the same time, check whether the sum from S2 alone is in the range [A,B]. The optimization here is to combine the loops. Note: this gives you a representation of the sets (as a pair of indexes into S1 and S2), not the sets themselves. If you want all the sets explicitly, this becomes O(K + N*2^(N/2)), where K is the number of sets.
Further optimizations might be possible; for instance, when the sum from S2 is negative, we don't consider sums < A, etc.
Since Steps 2, 3 and 4 should be pretty clear, I will elaborate further on how to get Step 1 done in O(2^(N/2)) time.
For this, we use the concept of Gray Codes. Gray codes are a sequence of binary bit patterns in which each pattern differs from the previous pattern in exactly one bit.
Example: 00 -> 01 -> 11 -> 10 is a gray code with 2 bits.
There are Gray codes which go through all possible N/2-bit numbers, and these can be generated iteratively (see the wiki page I linked to) in O(1) time per step, for O(2^(N/2)) steps in total: given the current bit pattern, we can generate the next bit pattern in O(1) time.
This enables us to form all the subset sums, by using the previous sum and changing that by just adding or subtracting one number (corresponding to the differing bit position) to get the next sum.
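Here is a sketch of Step 1 in Python (names are mine): the standard reflected Gray code flips exactly one bit between consecutive patterns, so each subset sum is one addition or subtraction away from the previous one:
def all_subset_sums(nums):
    # Visit subsets in Gray-code order; consecutive subsets differ in a
    # single element, so each new sum costs O(1) instead of O(N).
    n = len(nums)
    sums = [0]                            # the empty subset
    s = 0
    included = [False] * n
    for i in range(1, 2 ** n):
        bit = (i & -i).bit_length() - 1   # the bit that flips at step i
        if included[bit]:
            s -= nums[bit]
        else:
            s += nums[bit]
        included[bit] = not included[bit]
        sums.append(s)
    return sums                           # 2^n sums in O(2^n) total work

print(all_subset_sums([1, 2, 4]))   # [0, 1, 3, 2, 6, 7, 5, 4]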
If you modify the Horowitz-Sahni algorithm in the right way, then it's hardly slower than the original Horowitz-Sahni. Recall that Horowitz-Sahni works with two lists of subset sums: sums of subsets in the left half of the original list, and sums of subsets in the right half. Call these two lists of sums L and R. To obtain subsets that sum to some fixed value A, you can sort R, and then look up a number in R that matches each number in L using a binary search. However, the algorithm is asymmetric only to save a constant factor in space and time. For this problem it's a good idea to sort both L and R.
In my code below I also reverse L. Then you can keep two pointers into R, updated for each entry in L: A pointer to the last entry in R that's too low, and a pointer to the first entry in R that's too high. When you advance to the next entry in L, each pointer might either move forward or stay put, but they won't have to move backwards. Thus, the second stage of the Horowitz-Sahni algorithm only takes linear time in the data generated in the first stage, plus linear time in the length of the output. Up to a constant factor, you can't do better than that (once you have committed to this meet-in-the-middle algorithm).
Here is Python code with example input:
# Input
terms = [29371, 108810, 124019, 267363, 298330, 368607,
         438140, 453243, 515250, 575143, 695146, 840979, 868052, 999760]
(A, B) = (500000, 600000)
# Subset iterator stolen from Sage
def subsets(X):
    yield []
    pairs = []
    for x in X:
        pairs.append((2**len(pairs), x))
        for w in range(2**(len(pairs)-1), 2**len(pairs)):
            yield [x for m, x in pairs if m & w]
# Modified Horowitz-Sahni with toolow and toohigh indices
L = sorted([(sum(S), S) for S in subsets(terms[:len(terms)//2])])
R = sorted([(sum(S), S) for S in subsets(terms[len(terms)//2:])])
(toolow, toohigh) = (-1, 0)
for (Lsum, S) in reversed(L):
    while toolow < len(R)-1 and R[toolow+1][0] < A-Lsum:
        toolow += 1
    while toohigh < len(R) and R[toohigh][0] <= B-Lsum:
        toohigh += 1
    for n in range(toolow+1, toohigh):
        print('+'.join(map(str, S + R[n][1])), '=', sum(S + R[n][1]))
"Moron" (I think he should change his user name) raises the reasonable issue of optimizing the algorithm a little further by skipping one of the sorts. Actually, because each list L and R is a list of sizes of subsets, you can do a combined generate and sort of each one in linear time! (That is, linear in the lengths of the lists.) L is the union of two lists of sums, those that include the first term, term[0], and those that don't. So actually you should just make one of these halves in sorted form, add a constant, and then do a merge of the two sorted lists. If you apply this idea recursively, you save a logarithmic factor in the time to make a sorted L, i.e., a factor of N in the original variable of the problem. This gives a good reason to sort both lists as you generate them. If you only sort one list, you have some binary searches that could reintroduce that factor of N; at best you have to optimize them somehow.
At first glance, a factor of O(N) could still be there for a different reason: If you want not just the subset sum, but the subset that makes the sum, then it looks like O(N) time and space to store each subset in L and in R. However, there is a data-sharing trick that also gets rid of that factor of O(N). The first step of the trick is to store each subset of the left or right half as a linked list of bits (1 if a term is included, 0 if it is not included). Then, when the list L is doubled in size as in the previous paragraph, the two linked lists for a subset and its partner can be shared, except at the head:
     0
     |
     v
1 -> 1 -> 0 -> ...
Actually, this linked list trick is an artifact of the cost model and never truly helpful. Because, in order to have pointers in a RAM architecture with O(1) cost, you have to define data words with O(log(memory)) bits. But if you have data words of this size, you might as well store each word as a single bit vector rather than with this pointer structure. I.e., if you need less than a gigaword of memory, then you can store each subset in a 32-bit word. If you need more than a gigaword, then you have a 64-bit architecture or an emulation of it (or maybe 48 bits), and you can still store each subset in one word. If you patch the RAM cost model to take account of word size, then this factor of N was never really there anyway.
So, interestingly, the time complexity for the original Horowitz-Sahni algorithm isn't O(N*2^(N/2)), it's O(2^(N/2)). Likewise the time complexity for this problem is O(K+2^(N/2)), where K is the length of the output.
