I've been self-studying Expectation Maximization lately, and grabbed some simple examples in the process:
http://cs.dartmouth.edu/~cs104/CS104_11.04.22.pdf
There are three coins 0, 1 and 2 with probabilities P0, P1 and P2 of landing on Heads when tossed. Toss coin 0; if the result is Heads, toss coin 1 three times, otherwise toss coin 2 three times. The observed data produced by coins 1 and 2 looks like this: HHH, TTT, HHH, TTT, HHH. The hidden data is coin 0's results. Estimate P0, P1 and P2.
http://ai.stanford.edu/~chuongdo/papers/em_tutorial.pdf
There are two coins A and B with PA and PB being their probabilities of landing on Heads when tossed. Each round, select one coin at random, toss it 10 times and record the results. The observed data is the toss results from these two coins; however, we don't know which coin was selected for a particular round. Estimate PA and PB.
While I can get the calculations, I can't relate the ways they are solved to the original EM theory. Specifically, during the M-Step of both examples, I don't see how they're maximizing anything. It just seems they are recalculating the parameters and somehow, the new parameters are better than the old ones. Moreover, the two E-Steps don't even look similar to each other, not to mention the original theory's E-Step.
So how exactly do these examples work?
The second PDF won't download for me, but I also visited the wikipedia page http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm which has more information. http://melodi.ee.washington.edu/people/bilmes/mypapers/em.pdf (which claims to be a gentle introduction) might be worth a look too.
The whole point of the EM algorithm is to find parameters which maximize the likelihood of the observed data. This is the only bullet point on page 8 of the first PDF, the equation for capital Theta subscript ML.
The EM algorithm comes in handy where there is hidden data which would make the problem easy if you knew it. In the three-coins example this is the result of tossing coin 0. If you knew the outcome of that you could (of course) produce an estimate for the probability of coin 0 turning up heads. You would also know whether coin 1 or coin 2 was tossed three times in the next stage, which would allow you to make estimates for the probabilities of coin 1 and coin 2 turning up heads. These estimates would be justified by saying that they maximized the likelihood of the complete data, which would include not only the results that you are given, but also the hidden data that you are not - the results from coin 0. For a coin that gets A heads and B tails you find that the maximum-likelihood estimate of the probability of heads is A/(A+B) - it might be worth working this out in detail, because it is the building block for the M step.
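To spell out that building block in my own notation (not the PDF's): with A heads and B tails,

\log L(p) = A \log p + B \log(1-p), \qquad
\frac{d}{dp}\log L(p) = \frac{A}{p} - \frac{B}{1-p} = 0
\;\Rightarrow\; \hat{p} = \frac{A}{A+B}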
In the EM algorithm you say that although you don't know the hidden data, you come in with probability estimates which allow you to write down a probability distribution for it. For each possible value of the hidden data you could find the parameter values which would optimize the log likelihood of the data including the hidden data, and this almost always turns out to mean calculating some sort of weighted average (if it doesn't the EM step may be too difficult to be practical).
What the EM algorithm asks you to do is to find the parameters maximizing the weighted sum of log likelihoods given by all the possible hidden data values, where the weights are given by the probability of the associated hidden data given the observations using the parameters at the start of the EM step. This is what almost everybody, including the Wikipedia algorithm, calls the Q-function. The proof behind the EM algorithm, given in the Wikipedia article, says that if you change the parameters so as to increase the Q-function (which is only a means to an end), you will also have changed them so as to increase the likelihood of the observed data (which you do care about). What you tend to find in practice is that you can maximize the Q-function using a variation of what you would do if you know the hidden data, but using the probabilities of the hidden data, given the estimates at the start of the EM-step, to weight the observations in some way.
In your example it means totting up the number of heads and tails produced by each coin. In the PDF they work out P(Y=H|X=HHH) = 0.6967. This means that you use weight 0.6967 for the case Y=H, which means that you increment the counts for Y=H by 0.6967 and increment the counts for X=H in coin 1 by 3*0.6967, and you increment the counts for Y=T by 0.3033 and increment the counts for X=H in coin 2 by 3*0.3033. If you have a detailed justification for why A/(A+B) is the maximum-likelihood estimate of a coin's probability in the standard case, you should be ready to turn it into a justification for why this weighted updating scheme maximizes the Q-function.
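Here is a minimal sketch of that weighted counting for one EM step of the three-coins example. The starting values of p0, p1 and p2 below are placeholders of my own, not the PDF's; plug in theirs to reproduce their weights.

observations = ["HHH", "TTT", "HHH", "TTT", "HHH"]
p0, p1, p2 = 0.3, 0.3, 0.6   # hypothetical starting guesses

def weight_coin1(obs, p0, p1, p2):
    """P(Y=H | X=obs), the posterior weight that coin 1 was used."""
    h, t = obs.count("H"), obs.count("T")
    a = p0 * p1**h * (1 - p1)**t          # coin 0 came up heads, coin 1 was tossed
    b = (1 - p0) * p2**h * (1 - p2)**t    # coin 0 came up tails, coin 2 was tossed
    return a / (a + b)

# E step: one weight per observed block of three tosses
w = [weight_coin1(obs, p0, p1, p2) for obs in observations]

# M step: A/(A+B) applied to the weighted head/tail counts
heads = [obs.count("H") for obs in observations]
tails = [obs.count("T") for obs in observations]
p0 = sum(w) / len(w)
p1 = sum(wi * h for wi, h in zip(w, heads)) / sum(wi * (h + t) for wi, h, t in zip(w, heads, tails))
p2 = sum((1 - wi) * h for wi, h in zip(w, heads)) / sum((1 - wi) * (h + t) for wi, h, t in zip(w, heads, tails))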
Finally, the log likelihood of the observed data (the thing you are maximizing) gives you a very useful check. It should increase with every EM step, at least until you get so close to convergence that rounding error comes in, in which case you may have a very small decrease, signalling convergence. If it decreases dramatically, you have a bug in your program or your maths.
As luck would have it, I have been struggling with this material recently as well. Here is how I have come to think of it:
Consider a related, but distinct, algorithm called the classify-maximize algorithm, which we might use as a solution technique for a mixture model problem. A mixture model problem is one where we have a sequence of data that may be produced by any of N different processes, of which we know the general form (e.g., Gaussian) but not the parameters (e.g., the means and/or variances), and we may not even know the relative likelihoods of the processes. (Typically we do at least know the number of processes. Without that, we are into so-called "non-parametric" territory.) In a sense, the process which generates each data point is the "missing" or "hidden" data of the problem.
Now, what this related classify-maximize algorithm does is start with some arbitrary guesses at the process parameters. Each data point is evaluated according to each one of those parameter processes, and a set of probabilities is generated-- the probability that the data point was generated by the first process, the second process, etc, up to the final Nth process. Then each data point is classified according to the most likely process.
At this point, we have our data separated into N different classes. So, for each class of data, we can, with some relatively simple calculus, optimize the parameters of that cluster with a maximum likelihood technique. (If we tried to do this on the whole data set prior to classifying, it is usually analytically intractable.)
Then we update our parameter guesses, re-classify, update our parameters, re-classify, etc, until convergence.
What the expectation-maximization algorithm does is similar, but more general: Instead of a hard classification of data points into class 1, class 2, ... through class N, we are now using a soft classification, where each data point belongs to each process with some probability. (Obviously, the probabilities for each point need to sum to one, so there is some normalization going on.) I think we might also think of this as each process/guess having a certain amount of "explanatory power" for each of the data points.
So now, instead of optimizing the guesses with respect to points that absolutely belong to each class (ignoring the points that absolutely do not), we re-optimize the guesses in the context of those soft classifications, or those explanatory powers. And it so happens that, if you write the expressions in the correct way, what you're maximizing is a function that is an expectation in its form.
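Here is a minimal sketch of that soft-assignment idea for a 1D mixture of two Gaussians; the data and starting guesses are hypothetical, and the variances are fixed at 1 for brevity.

import math

data = [0.2, 0.5, 0.9, 4.1, 4.4, 5.0]    # made-up observations
mu = [0.0, 3.0]                          # current guesses for the two means
pi = [0.5, 0.5]                          # current guesses for the mixing weights

def gauss(x, m):
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

# E step: the "explanatory power" (responsibility) of each process for each point
resp = []
for x in data:
    p = [pi[k] * gauss(x, mu[k]) for k in range(2)]
    s = sum(p)
    resp.append([pk / s for pk in p])    # normalized so each row sums to one

# M step: re-optimize the guesses using the soft assignments as weights
for k in range(2):
    total = sum(r[k] for r in resp)
    mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / total
    pi[k] = total / len(data)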
With that said, there are some caveats:
1) This sounds easy. It is not, at least to me. The literature is littered with a hodge-podge of special tricks and techniques-- using likelihood expressions instead of probability expressions, transforming to log-likelihoods, using indicator variables, putting them in basis vector form and putting them in the exponents, etc.
These are probably more helpful once you have the general idea, but they can also obfuscate the core ideas.
2) Whatever constraints you have on the problem can be tricky to incorporate into the framework. In particular, if you know the probabilities of each of the processes, you're probably in good shape. If not, you're also estimating those, and the sum of the probabilities of the processes must be one; they must live on a probability simplex. It is not always obvious how to keep those constraints intact.
3) This is a sufficiently general technique that I don't know how I would go about writing code that is general. The applications go far beyond simple clustering and extend to many situations where you are actually missing data, or where the assumption of missing data may help you. There is a fiendish ingenuity at work here, for many applications.
4) This technique is proven to converge, but the convergence is not necessarily to the global maximum; be wary.
I found the following link helpful in coming up with the interpretation above: Statistical learning slides
And the following write-up goes into great detail of some painful mathematical details: Michael Collins' write-up
I wrote the code below in Python; it implements the example given in your second paper, by Do and Batzoglou.
I recommend that you read this link first for a clear explanation of how and why the 'weightA' and 'weightB' in the code below are obtained.
Disclaimer: The code does work, but I am certain that it is not coded optimally. I am not normally a Python coder and only started using it two weeks ago.
import math

import numpy as np

#### E-M Coin Toss Example as given in the EM tutorial paper by Do and Batzoglou ####

def get_mn_log_likelihood(obs, probs):
    """Return the log likelihood of obs, given the probs"""
    # Multinomial distribution log PMF:
    # ln[f(x|n, p)] = [ln(n!) - (ln(x1!)+ln(x2!)+...+ln(xk!))] + [x1*ln(p1)+x2*ln(p2)+...+xk*ln(pk)]
    multinomial_coeff_denom = 0
    prod_probs = 0
    for x in range(len(obs)):  # loop through state counts in each observation
        multinomial_coeff_denom += math.log(math.factorial(obs[x]))
        prod_probs += obs[x] * math.log(probs[x])

    multinomial_coeff = math.log(math.factorial(sum(obs))) - multinomial_coeff_denom
    likelihood = multinomial_coeff + prod_probs
    return likelihood

# 1st: Coin B, {HTTTHHTHTH}, 5H,5T
# 2nd: Coin A, {HHHHTHHHHH}, 9H,1T
# 3rd: Coin A, {HTHHHHHTHH}, 8H,2T
# 4th: Coin B, {HTHTTTHHTT}, 4H,6T
# 5th: Coin A, {THHHTHHHTH}, 7H,3T
# so, from MLE: pA(heads) = 0.80 and pB(heads) = 0.45

# represent the experiments
head_counts = np.array([5, 9, 8, 4, 7])
tail_counts = 10 - head_counts
experiments = list(zip(head_counts, tail_counts))

# initialise pA(heads) and pB(heads)
pA_heads = np.zeros(100); pA_heads[0] = 0.60
pB_heads = np.zeros(100); pB_heads[0] = 0.50

# E-M begins!
delta = 0.001
j = 0  # iteration counter
improvement = float('inf')
while improvement > delta:
    expectation_A = np.zeros((5, 2), dtype=float)
    expectation_B = np.zeros((5, 2), dtype=float)
    for i in range(len(experiments)):
        e = experiments[i]  # i'th experiment
        ll_A = get_mn_log_likelihood(e, np.array([pA_heads[j], 1 - pA_heads[j]]))  # log likelihood of e given coin A
        ll_B = get_mn_log_likelihood(e, np.array([pB_heads[j], 1 - pB_heads[j]]))  # log likelihood of e given coin B

        weightA = math.exp(ll_A) / (math.exp(ll_A) + math.exp(ll_B))  # weight of A, proportional to likelihood of A
        weightB = math.exp(ll_B) / (math.exp(ll_A) + math.exp(ll_B))  # weight of B, proportional to likelihood of B

        expectation_A[i] = np.dot(weightA, e)  # weighted (heads, tails) counts credited to coin A
        expectation_B[i] = np.dot(weightB, e)  # weighted (heads, tails) counts credited to coin B

    pA_heads[j + 1] = sum(expectation_A)[0] / sum(sum(expectation_A))
    pB_heads[j + 1] = sum(expectation_B)[0] / sum(sum(expectation_B))

    improvement = max(abs(np.array([pA_heads[j + 1], pB_heads[j + 1]]) - np.array([pA_heads[j], pB_heads[j]])))
    j = j + 1
The key to understanding this is knowing what the auxiliary variables are that make estimation trivial. I will explain the first example quickly, the second follows a similar pattern.
Augment each sequence of heads/tails with two binary variables which indicate whether coin 1 or coin 2 was used. Now our data looks like the following:
c_11 c_12
c_21 c_22
c_31 c_32
...
For each i, either c_i1=1 or c_i2=1, with the other being 0. If we knew the values these variables took in our sample, estimation of the parameters would be trivial: p1 would be the proportion of heads in samples where c_i1=1, p2 likewise for samples where c_i2=1, and \lambda would be the mean of the c_i1s.
However, we don't know the values of these binary variables. So, what we basically do is guess them (in reality, take their expectation), and then update the parameters in our model assuming our guesses were correct. So the E step is to take the expectation of the c_i1s and c_i2s. The M step is to take maximum likelihood estimates of p_1, p_2 and \lambda given these cs.
Does that make a bit more sense? I can write out the updates for the E and M step if you prefer. EM then just guarantees that by following this procedure, likelihood will never decrease as iterations increase.
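For concreteness, here is a sketch of those updates, writing h_i and t_i for the number of heads and tails in sequence i (my notation, not the post's), with current parameters \lambda, p_1, p_2:

E step:
E[c_{i1}] = \frac{\lambda\, p_1^{h_i} (1-p_1)^{t_i}}{\lambda\, p_1^{h_i} (1-p_1)^{t_i} + (1-\lambda)\, p_2^{h_i} (1-p_2)^{t_i}}, \qquad E[c_{i2}] = 1 - E[c_{i1}]

M step:
\lambda = \frac{1}{N} \sum_i E[c_{i1}], \qquad
p_1 = \frac{\sum_i E[c_{i1}]\, h_i}{\sum_i E[c_{i1}]\,(h_i + t_i)}, \qquad
p_2 = \frac{\sum_i E[c_{i2}]\, h_i}{\sum_i E[c_{i2}]\,(h_i + t_i)}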
Related
I have a priority queue of "door numbers". I get the next door number from the priority queue (i.e. the door with the lowest corresponding priority value), and then open the door. Behind the door, there may be a gift or not. Based on the presence / absence of a gift, I update the priority for this door number, and put it back into the priority queue. I then repeat, getting the next door number to open, and so on.
Assuming every door has a different gift-replenishment rate (i.e. some may get a new gift daily, others never at all), how should I update the priority values in order to maximize the number of gifts I find? That is, I want to maximize the ratio of doors I open with gifts to doors I open without gifts.
I should note that replenishment rates are not guaranteed to be fixed over time / there is random variation. But I'm okay with simplifying assumptions here.
This almost seems like a Monte-Carlo problem to me, except that the more often I explore a node (door), the lower its expected value. (And of course there's no tree to build; we only need to figure out the value of depth-1 nodes.)
The most trivial way is to keep track of last priority (LP) and current priority (CP), with delta = CP - LP. If we find a gift, set the next priority NP = CP + delta - 1; otherwise set NP = CP + delta + 1. This works I guess, but seems rather slow in its optimization.
Or we could have a multiplicative value instead: NP = CP + delta * shrink or NP = CP + delta * grow, where shrink < 1 and grow > 1. This is what I currently have, and it seemed to work fine for months, but now I'm getting the situation where some doors are being opened back-to-back (i.e. open door D, found gift, put back on priority queue, D is now best choice again, no gift found of course, now put back on queue with worse priority) which seems pretty bad. For reference, I used shrink = 0.9 and grow = 1.3.
Is there a math formula (as with Monte-Carlo) expressing the optimal way to explore doors?
Multi-armed bandit theory runs deep and is not my specialty, so there's probably a reference that I don't know about. That being said, my first instinct is:
Simplify the math with the spherical-cow assumption that, for each door, the replenishment time is exponentially distributed with some unknown rate that stays constant over time.
Separate out our estimate of the replenishment rate from the history.
Set the priority of each door to 1 − exp(−λx) where λ is the estimated replenishment rate and x is the time since we last opened the door. (Higher is better.)
Multi-armed bandits typically have to balance exploration with exploitation, but my hunch here is that we'll get this naturally from the replenishment process.
Most of the technical detail is in doing the estimate. We have a bunch of examples (x, b) where x is the time since we last opened the door and b is whether there was a gift. For a given rate λ, the formula above for the priority gives the expected value of b. I'll suggest a maximum likelihood estimator for λ. This means maximizing the sum of log(exp(−λx)) = −λx over all (x, 0) examples plus the sum of log(1 − exp(−λx)) over all (x, 1) examples. This function can be optimized directly, but there are two issues:
The more times we open a door, the more expensive the optimization gets.
If there are no positive or negative examples, then the solution is degenerate. Probably we should require λ be at least monthly or something to avoid giving up on a door entirely.
What I would actually recommend is picking a small set of λ values to make this a discrete optimization problem.
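Here is a minimal sketch of that discrete estimate under the exponential-replenishment assumption above; the history and the candidate rates are made up.

import math

# made-up (x, b) history for one door: x = days since we last opened it, b = gift found?
history = [(1.0, 0), (2.5, 1), (0.5, 0), (4.0, 1), (3.0, 0)]
candidate_rates = [1/30, 1/14, 1/7, 1/3, 1.0]   # roughly monthly up to daily

def log_likelihood(lam, history):
    ll = 0.0
    for x, b in history:
        p_gift = 1 - math.exp(-lam * x)            # P(replenished within time x)
        ll += math.log(p_gift) if b else -lam * x  # log(1 - p_gift) = -lam * x
    return ll

lam_hat = max(candidate_rates, key=lambda lam: log_likelihood(lam, history))

def priority(lam, x):
    """Priority of a door x days after we last opened it (higher is better)."""
    return 1 - math.exp(-lam * x)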
(Another potential problem is that the priority formula could be inefficient for many doors. What you could do instead is pick a target threshold for priority and then calculate when the priority will exceed that threshold.)
I have an engagement between two fleets of n and m ships, each ship in the friendly fleet with its own salvo damage, and each ship in the enemy fleet with its own hp amount. The goal of this algorithm is to find the optimal solution (if such a solution exists) for how to assign targets to your ships (e.g. ship 1 in my fleet targets ship 3 in your fleet) in such a way that the salvo will maximize the amount of damage done to the enemy fleet.
Important. By damage, I mean the amount of damage/hp value of an enemy ship destroyed. If an enemy ship has 100hp and deals 20dmg, its "value" is 100/20 = 5. So destroying that ship incurs a score of 5. And lastly, only the score of destroyed ships is taken into account. If it is impossible to destroy any ships with a single salvo, the score will then include the damaged ships.
I have attempted a greedy method, an iterative improvement method, and a hill climbing method too, but none of them are capable of reaching an optimal solution. I have also tried another method, where a large number of randomized target choice sets are made and evaluated, and the best one is picked out of all of them. This is the one that has produced the best results, but it is incredibly inefficient and almost never produces the optimal result.
I believe there has to be a way of calculating an optimal solution that does not require checking every single possible targeting choice, but I cannot find one. It also seems like this problem is a weird form of the multiple knapsack problem, with the knapsacks being the enemy hp pools and the items the damage values of the shots, except that here the last item placed into a knapsack can exceed the size limit of the knapsack, but only the fraction of the item's value that fits into the knapsack is useful.
Even if it is not a solution to the problem, any thoughts or help are very much appreciated!
Linear programming will do the job perfectly here. In this case, the decision variables are integers, so we are dealing with ILP.
Here is a small description of how to model your problem as a linear program.
Data:
F_dmg[n] // an array containing the damage of the friendly ships
E_hp[m] // an array containing the hp of the enemy ships
M // a constant, the highest hp among all ships
V[m] // the 'value' of the enemy ships
Decision variables:
X[n][m] // a matrix of booleans (0 or 1)
// X[i][j] = 1 if the ship i attacks the ship j, 0 otherwise
Dmg[m] // an array of integers, representing the total damage taken by each enemy ship
IsAlive[m] // an array of booleans, representing whether each enemy ship survives (0 if dead, 1 if alive)
Constraints:
// a friendly ship can attack at most one enemy ship
for all i in 1..n, sum(j in 1..m) X[i][j] <= 1
// the damage credited to a ship cannot exceed its hp
for all j in 1..m, Dmg[j] <= E_hp[j]
// the damage sustained by a ship has to be coherent with the attacks it receives
for all j in 1..m, Dmg[j] <= sum(i in 1..n) X[i][j] * F_dmg[i]
// a ship can only be counted as destroyed if the damage sustained reaches its hp
for all j in 1..m, M * IsAlive[j] >= E_hp[j] - Dmg[j]
Objective function
maximize sum(j in 1..m) (1 - IsAlive[j]) * V[j]
Write that in OPL, feed it to an ILP solver and you'll get an optimal answer real fast if your input is not absolutely gigantic.
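For illustration, here is a sketch of the same model in Python using the PuLP library instead of OPL; the data values are made up and the variable names are mine.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# made-up example data
F_dmg = [20, 20, 35]   # salvo damage of each friendly ship
E_hp  = [40, 55]       # hp of each enemy ship
V     = [2.0, 5.0]     # 'value' of each enemy ship
n, m  = len(F_dmg), len(E_hp)
M     = max(E_hp)      # big-M constant

prob = LpProblem("salvo_assignment", LpMaximize)
x     = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(m)] for i in range(n)]
dmg   = [LpVariable(f"dmg_{j}", lowBound=0) for j in range(m)]
alive = [LpVariable(f"alive_{j}", cat=LpBinary) for j in range(m)]

for i in range(n):
    prob += lpSum(x[i][j] for j in range(m)) <= 1                  # at most one target per friendly ship
for j in range(m):
    prob += dmg[j] <= E_hp[j]                                      # damage credited is capped by hp
    prob += dmg[j] <= lpSum(x[i][j] * F_dmg[i] for i in range(n))  # ... and by the attacks received
    prob += M * alive[j] >= E_hp[j] - dmg[j]                       # a ship is only dead if damage reaches its hp

prob += lpSum((1 - alive[j]) * V[j] for j in range(m))             # maximize the value of destroyed ships
prob.solve()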
This either is, or is very similar to, the Weapon Target Assignment Problem.
Unfortunately that problem is NP-hard, and according to the 2003 paper "Exact and Heuristic Algorithms for the Weapon Target Assignment Problem" (Ahuja, Kumar et al.), even instances as small as 20 weapons and 20 targets can't be solved to provable optimality. (I only read the abstract.)
It must be quite similar to a depth-first search that tries to find the optimal assignment; it would then return an array of targets, one for each of your ships.
The source data for the subject is an m-by-n binary matrix (only 0s and 1s are allowed).
Rows (m) represent observations and columns (n) represent features. Some observations are marked as targets, which need to be separated from the rest.
While it looks like a typical NN, SVM, etc. problem, I don't need generalization. What I need is an efficient algorithm to find as many combinations of columns (features) as possible that completely separate the targets from the other observations, i.e. classify them.
For example:
   f1 f2 f3
o1  1  1  0
t1  1  0  1
o2  0  1  1
Here {f1, f3} is an acceptable combo which separates target t1 from the rest (o1, o2) (btw, {f2} is NOT, since by the task definition a feature MUST be present in a target). In other words,
t1(f1) & t1(f3) = 1 and o1(f1) & o1(f3) = 0, o2(f1) & o2(f3) = 0
where '&' represents logical conjunction (AND).
m is about 100,000 and n is about 1,000. Currently the data is packed into 128-bit words along m and the search is optimized with SSE4 and whatnot. Yet it takes way too long to obtain those feature combos.
After 2 billion calls to the tree descent routine it has covered about 15% of root nodes. And found about 8,000 combos which is a decent result for my particular application.
I use some empirical criteria to cut off less promising descent paths, with limited success, but is there something radically better? I'm pretty sure there has to be. Any help, in whatever form, reference or suggestion, would be appreciated.
I believe the problem you describe is NP-hard, so you shouldn't expect to find the optimum solution in a reasonable time. I do not understand your current algorithm, but here are some suggestions off the top of my head:
1) Construct a decision tree. Label targets as A and non-targets as B and let the decision tree learn the categorization. At each node select the feature such that a function of P(target | feature) and P(target' | feature') is maximum. (i.e. as many targets as possible fall to positive side and as many non-targets as possible fall to negative side)
2) Use a greedy algorithm. Start from the empty set and at each time step add the feature that kills the most non-target rows (see the sketch after this list).
3) Use a randomized algorithm. Start from a small subset of positive features of some target, use the set as the seed for the greedy algorithm. Repeat many times. Pick the best solution. Greedy algorithm will be fast so it will be ok.
4) Use a genetic algorithm. Generate random seeds for the greedy algorithm as in 3 to generate good solutions and cross them (bitwise-AND, probably) to generate new candidate seeds. Remember the best solution. Keep good solutions as the current population. Repeat for many generations.
You will need to find the answer "how many of the given rows have the given feature f" fast so probably you'll need specialized data structures, perhaps using a BitArray for each feature.
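A small sketch of suggestion 2, using plain Python ints as the bit arrays; the rows are the toy example from the question, and the helper names are mine.

rows = ["110", "101", "011"]   # o1, t1, o2 from the example above
target_idx = 1                 # t1 is the target
n_features = len(rows[0])

# one bitmask per feature: bit r is set if row r has that feature
feature_masks = []
for f in range(n_features):
    mask = 0
    for r, row in enumerate(rows):
        if row[f] == "1":
            mask |= 1 << r
    feature_masks.append(mask)

target_bit = 1 << target_idx
remaining = ((1 << len(rows)) - 1) & ~target_bit   # non-target rows not yet separated

# only features present in the target are candidates (task definition)
candidates = [f for f in range(n_features) if feature_masks[f] & target_bit]
chosen = []
while remaining and candidates:
    # pick the candidate that keeps the fewest non-target rows
    best = min(candidates, key=lambda f: bin(remaining & feature_masks[f]).count("1"))
    if remaining & feature_masks[best] == remaining:
        break                                      # no candidate removes anything
    chosen.append(best)
    remaining &= feature_masks[best]
    candidates.remove(best)

print("combo:", ["f%d" % (f + 1) for f in chosen], "separates target:", remaining == 0)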
I've implemented a simple Bayesian classifier, but I'm running into some overflow problems when using it on non-trivial amounts of data.
One strategy I tried in order to keep the numbers small, but still exact, was to keep reducing the numerator and denominator with the greatest common divisor for every part of the equation. This, however, only works when they have a common divisor...
Note, the problem goes both ways, when I keep the denominators and numerators separate for most of the calculation I struggle with integer overflow, when I do most calculations on the fly, using double arithmetic, I'm met with the various problems/limits that really small double values have (as defined by IEEE 754).
As I'm sure some of you here have implemented this algorithm before, how did you deal with these issues? I'd prefer not to pull in arbitrary precision types as they cost too much and I'm sure there exists a solution which doesn't require them.
Thanks.
Usually the way you handle this is by taking logs and adding, and then doing an exp if you want to get back into probability space.
p1 * p2 * p3 * ... * pn = exp(log(p1) + log(p2) + log(p3) + ... log(pn))
You avoid underflow by working in log space.
If you're classifying between two categories you can introduce the log ratio of probabilities for each category. So if:
log(Pr(cat1) / Pr(cat2)) <=> 0 # positive would favor cat1 and negative cat2
That is equal to:
log(Pr(cat1)) - log(Pr(cat2)) <=> 0
And if (as in Bayesian classifiers) the category probabilities are themselves products of probabilities given conditions:
log(Pr(cat1|cond1)) + ... <=> log(Pr(cat2|cond1)) + ...
Thus you are dealing with summation rather than multiplication, and you will need a massive data set to run into the same problem.
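A minimal sketch of that comparison in log space; the per-feature conditional probabilities are made up.

import math

# made-up per-feature conditional probabilities for two categories
p_cat1 = [1e-4, 3e-5, 2e-4, 5e-5]
p_cat2 = [2e-4, 1e-5, 1e-4, 8e-5]

# with many features the raw products would underflow to 0.0;
# the sums of logs stay well inside double range
score1 = sum(math.log(p) for p in p_cat1)
score2 = sum(math.log(p) for p in p_cat2)

print("favor cat1" if score1 > score2 else "favor cat2")
# if an actual ratio is needed, exponentiate the difference:
# Pr(cat1 ...) / Pr(cat2 ...) = exp(score1 - score2)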
Let's say I have a parabola. Now I also have a bunch of sticks that are all of the same width (yes, my drawing skills are amazing!). How can I stack these sticks within the parabola so that the space used is minimized as much as possible? I believe this falls under the category of knapsack problems, but this Wikipedia page doesn't appear to bring me closer to a real-world solution. Is this an NP-hard problem?
In this problem we are trying to minimize the amount of area consumed (eg: Integral), which includes vertical area.
I cooked up a solution in JavaScript using processing.js and HTML5 canvas.
This project should be a good starting point if you want to create your own solution. I added two algorithms. One that sorts the input blocks from largest to smallest and another that shuffles the list randomly. Each item is then attempted to be placed in the bucket starting from the bottom (smallest bucket) and moving up until it has enough space to fit.
Depending on the type of input the sort algorithm can give good results in O(n^2). Here's an example of the sorted output.
Here's the insert in order algorithm.
function solve(buckets, input) {
var buckets_length = buckets.length,
results = [];
for (var b = 0; b < buckets_length; b++) {
results[b] = [];
}
input.sort(function(a, b) {return b - a});
input.forEach(function(blockSize) {
var b = buckets_length - 1;
while (b > 0) {
if (blockSize <= buckets[b]) {
results[b].push(blockSize);
buckets[b] -= blockSize;
break;
}
b--;
}
});
return results;
}
Project on github - https://github.com/gradbot/Parabolic-Knapsack
It's a public repo so feel free to branch and add other algorithms. I'll probably add more in the future as it's an interesting problem.
Simplifying
First I want to simplify the problem, to do that:
I switch the axes and add them to each other; this results in x2 growth
I assume it is a parabola on a closed interval [a, b], where a = 0 and, for this example, b = 3
Let's say you are given b (the second endpoint of the interval) and w (the width of a segment); then you can find the total number of segments by n = Floor[b/w]. In this case there is a trivial way to maximize the Riemann sum, and the function to get the i'th segment height is f(b - (b*i)/(n+1)). Actually this is an assumption and I'm not 100% sure.
A maximized example for 17 segments on the closed interval [0, 3] for the function Sqrt[x] (real values):
And the segment heights function in this case is Re[Sqrt[3-3*Range[1,17]/18]], and values are:
Exact form:
{Sqrt[17/6], 2 Sqrt[2/3], Sqrt[5/2],
Sqrt[7/3], Sqrt[13/6], Sqrt[2],
Sqrt[11/6], Sqrt[5/3], Sqrt[3/2],
2/Sqrt[3], Sqrt[7/6], 1, Sqrt[5/6],
Sqrt[2/3], 1/Sqrt[2], 1/Sqrt[3],
1/Sqrt[6]}
Approximated form:
{1.6832508230603465, 1.632993161855452, 1.5811388300841898, 1.5275252316519468, 1.4719601443879744, 1.4142135623730951, 1.35400640077266, 1.2909944487358056, 1.224744871391589, 1.1547005383792517, 1.0801234497346435, 1, 0.9128709291752769, 0.816496580927726, 0.7071067811865475, 0.5773502691896258, 0.4082482904638631}
What you arrive at is a bin-packing problem with a partially filled bin.
Finding b
If b is unknown, or our task is to find the smallest possible b under which all sticks from the initial bunch fit, then we can at least bound the values of b:
lower limit: the b at which the sum of the segment heights equals the sum of the stick heights
upper limit: the b at which the number of segments equals the number of sticks and the longest stick is shorter than the longest segment height
One of the simplest ways to find b is to take a pivot at the midpoint (lower limit + upper limit)/2 and check whether a solution exists. The pivot then becomes the new upper or lower limit, and you repeat the process until the required precision is met.
When you are looking for b you do not need an exact result, only a suboptimal one, and it is much faster if you use an efficient algorithm to find a pivot point relatively close to the actual b.
For example:
sort the sticks by length, largest to smallest
start putting the largest items into the first bin they fit (a small sketch of the bisection follows below)
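A minimal sketch of that bisection, assuming a fits(b, sticks) feasibility check such as the first-fit-decreasing heuristic above; all names are mine.

def find_b(sticks, fits, lo, hi, eps=1e-3):
    """Shrink [lo, hi] until it is eps wide; fits(b, sticks) says whether all sticks fit under b."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if fits(mid, sticks):
            hi = mid      # everything fits: try a smaller b
        else:
            lo = mid      # does not fit: b must be larger
    return hi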
This is equivalent to having multiple knapsacks (assuming these blocks are the same 'height', this means there's one knapsack for each 'line'), and is thus an instance of the bin packing problem.
See http://en.wikipedia.org/wiki/Bin_packing
How can I stack these sticks within the parabola such that I am minimizing the (vertical) space it uses as much as possible?
Just deal with it like any other Bin Packing problem. I'd throw meta-heuristics on it (such as tabu search, simulated annealing, ...) since those algorithms aren't problem specific.
For example, I'd start from my Cloud Balance problem (a form of bin packing) in Drools Planner. If all the sticks have the same height and there's no vertical space between 2 sticks on top of each other, there's not much I'd have to change:
Rename Computer to ParabolicRow. Remove its properties (cpu, memory, bandwidth). Give it a unique level (where 0 is the lowest row). Create a number of ParabolicRows.
Rename Process to Stick
Rename ProcessAssignement to StickAssignment
Rewrite the hard constraints so it checks if there's enough room for the sum of all Sticks assigned to a ParabolicRow.
Rewrite the soft constraints to minimize the highest level of all ParabolicRows.
I'm very sure it is equivalent to bin-packing:
informal reduction
Let x be the width of the widest row, make the bins 2x big and create for every row a placeholder element which is 2x - rowWidth big, so that two placeholder elements cannot be packed into one bin.
To reduce bin-packing to parabolic knapsack you just create placeholder elements, of size rowWidth - binsize, for all rows that are bigger than the needed bin size. Furthermore, add placeholders that fill the whole row for all rows that are smaller than the bin size.
This would obviously mean your problem is NP-hard.
For other ideas look here maybe: http://en.wikipedia.org/wiki/Cutting_stock_problem
Most likely this is the 0-1 knapsack problem or a bin-packing problem. It is NP-hard, so I can't give you an exact solution, but you can get good results with greedy algorithms. Here is a useful article about it, http://www.developerfusion.com/article/5540/bin-packing, which I used to write my bin-packing PHP class at phpclasses.org.
Props to those who mentioned the fact that the levels could be at varying heights (e.g. assuming the sticks are 1 unit 'thick', level 1 could go from 0.1 to 1.1 units, or from 0.2 to 1.2 units instead).
You could of course expand the "multiple bin packing" methodology and test arbitrarily small increments (e.g. run the multiple bin-packing methodology with levels starting at 0.0, 0.1, 0.2, ..., 0.9) and then choose the best result, but it seems like you would get stuck calculating for an infinite amount of time unless you had some methodology to verify that you had gotten it 'right' (or, more precisely, that you had all the 'rows' correct as to what they contained, at which point you could shift them down until they met the edge of the parabola).
Also, the OP did not specify that the sticks had to be laid horizontally - although perhaps the OP implied it with those sweet drawings.
I have no idea how to optimally solve such an issue, but I bet there are certain cases where you could randomly place sticks and then test if they are 'inside' the parabola, and it would beat out any of the methodologies relying only on horizontal rows.
(Consider the case of a narrow parabola that we are trying to fill with 1 long stick.)
I say just throw them all in there and shake them ;)