This is a semi-broad question, but it's one that I feel on some level is answerable or at least approachable.
I've spent the last month or so building a fairly extensive simulation. To protect my employer's interests I won't say specifically what it does, but an analogy for what it does is a high school dance.
A girl or boy enters the dance floor, and based on the selection of free dance partners, an optimal choice is made. After a period of time, two dancers finish dancing and are now free for a new partnership.
I've been making partner selection algorithms designed to maximize average match outcome while not sacrificing wait time for a partner too much.
I want a way to gauge and compare versions of my algorithms so I can select the optimal algorithm for any situation. This is difficult, however, since the inputs to my simulation are extremely large matrices of input parameters (2-5 per dancer), and the simulation takes several minutes to run, which makes it hard to test a large number of simulation inputs. I have a few output metrics, but linking them to the large number of inputs is extremely hard. I'm also interested in finding which algorithms completely fail under certain input conditions...
Any pro tips / online resources which might help me in defining input constraints / output variables which might give clarity on an optimal algorithm?
I might not understand what you exactly want. But here is my suggestion. Let me know if my solution is inaccurate/irrelevant and I will edit/delete accordingly.
Assume you have a certain metric (say compatibility of the pairs, or waiting time). If you just have the average or total of this metric over all the users, it is of limited use. Instead you might want to find the distribution of this metric over all users. If nothing else, you should always keep track of the variance. Once you have the distribution, you can calculate the probability that a particular algorithm A is better than B for a certain metric.
If you do not have the distribution of the metric within an experiment, you can always run multiple experiments, and the number of experiments you need to run depends on the variance of the metric and difference between two algorithms.
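To make that concrete, here is a minimal sketch (Python is my choice of language, and the metric arrays and their values are hypothetical) of a bootstrap estimate of the probability that algorithm A beats algorithm B on a per-dancer metric:

```python
import numpy as np

def prob_a_beats_b(metric_a, metric_b, n_boot=10_000, rng=None):
    """Bootstrap estimate of P(mean metric of A > mean metric of B).

    metric_a, metric_b: 1-D arrays of the per-dancer metric collected
    from simulation runs of algorithm A and algorithm B.
    """
    rng = rng or np.random.default_rng(0)
    a = np.asarray(metric_a)
    b = np.asarray(metric_b)
    wins = 0
    for _ in range(n_boot):
        # Resample each metric set with replacement and compare means.
        mean_a = rng.choice(a, size=a.size, replace=True).mean()
        mean_b = rng.choice(b, size=b.size, replace=True).mean()
        wins += mean_a > mean_b
    return wins / n_boot

# Hypothetical metric samples (e.g. match quality per dancer):
a = np.random.normal(0.70, 0.10, 500)
b = np.random.normal(0.68, 0.15, 500)
print(prob_a_beats_b(a, b))  # ~ probability that A outperforms B
```

The same resampling works for any metric whose distribution you have collected, not just the mean.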
I would like to know the difference between uncertainty and randomness in a mathematical sense. I tried to find out, but I got confused: some people say they are the same. Can anyone provide the logical reasoning behind this? If they are not the same, please explain why.
Don't get too hung up on it.
People use different words in different situations.
It's not so much that they have different meanings, as that their meanings are situation-dependent.
Randomness is just a fuzzy general term meaning something is random.
In statistics, uncertainty is used to mean that some property of a distribution, such as its mean, is itself unknown but can be given a distribution.
For example, suppose you want to know the average weight of all people.
You could find it out exactly if you could go around to all people, get their weight, add it all up, and divide by the number of people.
But that's too hard to do, so suppose you just pick 10 people at random and get their average weight, and pretend it's the same as the average of everybody.
That's called the sample mean, but you know it isn't accurate.
It has what is called a standard error, meaning it has uncertainty.
In fact, if you were to do that experiment many times over with different people, you would get a different sample mean every time, and those sample means would themselves form a bell-shaped distribution, the standard deviation of which would be called the standard error, representing its uncertainty.
In general, if you increase the number of people you look at by a factor of 100, you reduce the standard error, the uncertainty, by a factor of 10.
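A quick simulation illustrates this scaling (a sketch with made-up weight data; the numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(70.0, 15.0, 1_000_000)  # hypothetical weights (kg)

for n in (10, 1000):  # 100x more samples -> ~10x smaller standard error
    sample_means = [rng.choice(population, n).mean() for _ in range(2000)]
    print(n, np.std(sample_means))  # empirical standard error of the mean
```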
I bet you can tell that people who take polls for a living care about this stuff very much.
EDIT for the downvoter: In case the downvote is because this doesn't look like a stackoverflow question or answer,
I've made a point of advocating the random pausing method of profiling.
Profiling in large part is perceived to be about measuring (statistically) the time that programming constructs are responsible for.
Often people are inhibited from using that method because they are afraid the results have too much uncertainty.
This post gets very specific about what that uncertainty actually is.
It shows that the bogey-man fear of uncertainty has the effect of preventing people from finding really substantial speedups in their code.
So naiveté about statistics is definitely a serious programming problem.
My view looks at a scenario using three different coloured balls:
I love some of the answers given here. My own view, based on my current research, is that these are two distinct terms. Uncertainty refers to not knowing in advance which ball could be selected when a person, for instance, is given a chance to select one ball from three differently coloured balls.
This remains true when each ball has an equal chance of being selected, i.e. equal probabilities. However, things soon get complex when each ball has its own distinct probability: chances are that the one with the highest probability will be selected. This seems especially true in algorithm development, which would almost always select the highest probability, compromising the meaning of randomness.
Having said all of this, I believe these concepts remain confusing, which has just made me realise how much time I need to dedicate to clearly distinguishing between the two, to make sure my current research is not confusing. My own predicament is that I need to work on stochastic vs deterministic views. On my current view, stochastic would be more uncertain than random, whereas deterministic would be more probability-based, i.e. knowing for certain that the highest probability would be chosen; but this seems very far from the truth.
It seems as if uncertainty holds until just before a ball is selected/touched and loses its meaning as soon as the ball is picked, at which point its probability should be revised. I personally think the terms have theoretical differences, which perhaps allows them to be used interchangeably.
Uncertainty in math and science typically means there is a lack of facts, or the facts are unobtainable. Weather forecasting is a great example of uncertainty.
Randomness has many definitions. Commonly it's used in probability / statistics as a measure or quantification of uncertainty. So in my weather example, a 30% chance of rain is a measure of uncertainty. The more general definition (which also applies to math / science) is unpredictable, or lack of order.
There is definitely a fuzzy distinction between the two.
According to the Bayesian interpretation of probability, uncertainty and randomness are just two names for the same thing.
If an experiment is random, then it is uncertain to you. If something is uncertain to you, then it has the randomness property.
I have a genetic algorithm with individuals composed of 2000 bits, where I try to optimize 4 variables. Is there any rule of thumb (preferably a relatively straightforward one) to set parameters such as population size, number of generations, and mutation probability?
Simply put: no, there is no simple way to choose these numbers. Everything depends on your domain and required outcome.
Population size can be determined relatively quickly with an experiment: try 100, 1000, 10K, 100K and a million; whichever gives you a better result, go with that.
The number of generations is the hardest one to determine. Usually improvement of the best result skyrockets at the start of processing, then slows almost to a halt. That is usually the time to either stop and take the best result, or to change some parameters, like the mutation rate. So it is up to you to decide when the result is good enough: usually it is a balance between time spent and rate of improvement.
In my experiments, and confirmed by the scientific literature, it is recommended to keep the mutation rate at a minimum (like 0.01%) at the start of processing. Once your rate of improvement slows down, introduce more mutation to explore a wider range of solutions. At one point, I increased the mutation rate to something ridiculous, like 50%. This helped to disturb the stable state of the system, but the system returned to the stable state pretty fast, and the final result was not much better than the one I had before the "nuclear bomb". I came to the conclusion that the highest mutation rate (in my domain) should be no more than 5%, and only when the rate of improvement is almost zero.
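As a rough illustration of that schedule, here is a sketch (Python; the thresholds and scaling factor are my assumptions, not values from these experiments) of a mutation rate that rises when the best fitness stops improving:

```python
def adapt_mutation_rate(rate, history, window=20,
                        min_rate=0.0001, max_rate=0.05, factor=2.0):
    """Raise the mutation rate when the best fitness stops improving.

    history: list of best-fitness values, one per generation.
    Starts near min_rate (0.01%) and is capped at max_rate (5%),
    mirroring the schedule described above.
    """
    if len(history) < window:
        return rate
    recent_gain = history[-1] - history[-window]
    if recent_gain <= 1e-9:          # improvement has stalled
        return min(rate * factor, max_rate)
    return max(rate / factor, min_rate)  # decay back while improving
```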
Hopefully this helps a bit, but what you ask is not trivial, and people write dissertations on each of these topics separately. I also recommend reading through a couple of articles on the topics; this will help you significantly.
I did a little GP (note: very little) work in college and have been playing around with it recently. My question is in regard to the initial run settings (population size, number of generations, min/max depth of trees, min/max depth of initial trees, percentages to use for different reproduction operations, etc.). What is the normal practice for setting these parameters? What papers/sites do people use as a good guide?
You'll find that this depends very much on your problem domain - in particular the nature of the fitness function, your implementation DSL etc.
Some personal experience:
Large population sizes seem to work better when you have a noisy fitness function; I think this is because the growth of sub-groups in the population over successive generations gives more sampling of the fitness function. I typically use 100 for less noisy/deterministic functions, 1000+ for noisy ones.
For the number of generations, it is best to measure improvements in the fitness function and stop when it meets your target criteria. I normally run a few hundred generations and see what kind of answers are coming out; if it shows no improvement, you probably have an issue elsewhere.
Tree depth requirements really depend on your DSL. I sometimes try to do an implementation without explicit limits but penalise or eliminate programs that run too long (which is probably what you really care about...). I've also found total node counts of ~1000 to be a quite useful hard limit.
Percentages for the different mutation/recombination operators don't seem to matter all that much. As long as you have a comprehensive set of mutations, any reasonably balanced distribution will usually work. I think the reason for this is that you are basically doing a search for favourable improvements, so the main objective is just to make sure the trial improvements are reasonably well distributed across all the possibilities.
Why don't you try using a genetic algorithm to optimise these parameters for you? :)
Any problem in computer science can be solved with another layer of indirection (except for too many layers of indirection).
- David J. Wheeler
When I started looking into Genetic Algorithms I had the same question.
I wanted to collect data by varying parameters on a very simple problem, and link given operators and parameter values (such as mutation rates, etc.) to given results as a function of population size etc.
Once I started getting into GA a bit more I then realized that given the enormous number of variables this is a huge task, and generalization is extremely difficult.
Speaking from my (limited) experience: if you decide to simplify the problem, use a fixed way to implement crossover and selection, and just play with population size and mutation rate (implemented in a given way) while trying to come up with general results, you'll soon realize that too many variables are still in play. At the end of the day, the number of generations after which you statistically get a decent result (however you want to define decent) still obviously depends primarily on the problem you're solving, and consequently on the genome size (representing the same problem in different ways will obviously lead to different results in terms of the effect of given GA parameters!).
It is certainly possible to draft a set of guidelines, as the (rare but good) literature proves, but you will be able to generalize the results effectively in statistical terms only when the problem at hand can be encoded in the exact same way and the fitness is evaluated in a somehow equivalent way (which more often than not means you're dealing with a very similar problem).
Take a look at Koza's voluminous tomes on these matters.
There are very different schools of thought even within the GP community -
Some regard populations in the (low) thousands as sufficient, whereas Koza and others often don't deem it worthy to start a GP run with less than a million individuals in the GP population ;-)
As mentioned before it depends on your personal taste and experiences, resources and probably the GP system used!
Cheers,
Jan
How would you mathematically model the distribution of repeated real life performance measurements - "Real life" meaning you are not just looping over the code in question, but it is just a short snippet within a large application running in a typical user scenario?
My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers - often with a multiple of the average time. (The behavior is understandable considering the factors contributing to first execution penalty).
My goal is to model aggregate values that reasonably reflect this, and that can be calculated from aggregate values (like for the Gaussian, calculating mu and sigma from N, the sum of values and the sum of squares). In other words, the number of repetitions is unlimited, but memory and calculation requirements should be minimized.
A normal Gaussian distribution can't model the long tail appropriately and will have the average biased strongly even by a very small percentage of outliers.
I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distributions models, and I think I could work out something, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;)
Other aspects / ideas: Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately.
Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - haven't found anything, but with some assumptions about the distribution it should be possible in principle.
Why (since it was asked) -
For a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc.
This can be solved trivially by accumulating all data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a Gaussian distribution plus a first-run penalty is OK. That doesn't work anymore for the distributions found above.
[edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
Often when you have a random value that can only be positive, a log-normal distribution is a good way to model it. That is, you take the log of each measurement, and assume that is normally distributed.
If you want, you can consider that to have multiple humps, i.e. to be the sum of two normals having different means. Those are a bit tricky to estimate the parameters of, because you may have to estimate, for each measurement, its probability of belonging to each hump. That may be more than you want to bother with.
Log-normal distributions are very convenient and well-behaved. For example, you don't deal with its average, you deal with its geometric mean, which is the same as its median.
BTW, in pharmacometric modeling, log-normal distributions are ubiquitous, modeling such things as blood volume, absorption and elimination rates, body mass, etc.
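Since the question asks for parameters computable from running aggregates, here is a sketch (Python; the class name is mine) that keeps only N, the sum of logs, and the sum of squared logs, the log-normal analogue of the N / sum / sum-of-squares accumulation described in the question:

```python
import math

class LogNormalAccumulator:
    """Streaming estimate of log-normal parameters.

    Keeps only N, sum(log x) and sum(log x)^2, so memory use is
    constant no matter how many measurements arrive.
    """
    def __init__(self):
        self.n = 0
        self.sum_log = 0.0
        self.sum_log_sq = 0.0

    def add(self, x):
        lx = math.log(x)  # measurements must be positive
        self.n += 1
        self.sum_log += lx
        self.sum_log_sq += lx * lx

    @property
    def mu(self):
        return self.sum_log / self.n

    @property
    def sigma(self):
        var = self.sum_log_sq / self.n - self.mu ** 2
        return math.sqrt(max(var, 0.0))

    @property
    def geometric_mean(self):  # equals the distribution's median
        return math.exp(self.mu)
```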
ADDED: If you want what you call a floating distribution, that's called an empirical or non-parametric distribution. To model that, typically you save the measurements in a sorted array. Then it's easy to pick off the percentiles. For example the median is the "middle number". If you have too many measurements to save, you can go to some kind of binning after you have enough measurements to get the general shape.
ADDED: There's an easy way to tell if a distribution is normal (or log-normal). Take the logs of the measurements and put them in a sorted array. Then generate a QQ plot (quantile-quantile). To do that, generate as many normal random numbers as you have samples, and sort them. Then just plot the points, where X is the normal distribution point, and Y is the log-sample point. The results should be a straight line. (A really simple way to generate a normal random number is to just add together 12 uniform random numbers in the range +/- 0.5.)
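A sketch of that check (Python with numpy/matplotlib, my choice of tools; the timing data is simulated as a stand-in for real measurements):

```python
import numpy as np
import matplotlib.pyplot as plt

timings = np.random.lognormal(1.0, 0.4, 500)  # stand-in for real data

log_sample = np.sort(np.log(timings))
normal_ref = np.sort(np.random.normal(size=log_sample.size))

plt.scatter(normal_ref, log_sample, s=8)
plt.xlabel("normal quantiles")
plt.ylabel("log-sample quantiles")
plt.show()  # a straight line indicates the timings are log-normal
```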
The problem you describe is called "distribution fitting" and is not specific to performance measurements; it is the generic problem of fitting a suitable distribution to any gathered/measured data sample.
The standard process is something like this:
1. Guess the best-fitting distribution.
2. Estimate its parameters from the gathered data.
3. Run hypothesis tests to check how well it describes the gathered data.
4. Repeat 1-3 if the fit is not good enough.
You can find an interesting article describing how this can be done with the open-source R software system here. I think the function fitdistr may be especially useful to you.
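If you prefer Python over R, here is an analogous sketch with scipy.stats (my choice of library; fitdistr itself is the R route the article covers). The data is a hypothetical placeholder:

```python
import numpy as np
from scipy import stats

data = np.random.lognormal(1.0, 0.4, 1000)  # hypothetical timing data

# Step 1: guess a candidate distribution; step 2: fit its parameters.
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# Step 3: hypothesis test of the fit (Kolmogorov-Smirnov).
d_stat, p_value = stats.kstest(data, "lognorm", args=(shape, loc, scale))
print(d_stat, p_value)  # a large p-value means the fit is not rejected
```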
In addition to the answers already given, consider empirical distributions. I have had successful experience using empirical distributions for performance analysis of several distributed systems. The idea is very straightforward: you build a histogram of performance measurements, discretized with a given accuracy. Once you have the histogram, you can do several useful things:
calculate the probability of any given value (you are bound by accuracy only);
build PDF and CDF functions for the performance measurements;
generate a sequence of response times according to the distribution. This one is very useful for performance modeling.
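A minimal sketch of these steps (Python; the bin count and placeholder data are my assumptions):

```python
import numpy as np

measurements = np.random.lognormal(1.0, 0.4, 10_000)  # placeholder data

# Discretize into a histogram with the desired accuracy.
counts, edges = np.histogram(measurements, bins=200)
pdf = counts / counts.sum()   # probability per bin
cdf = np.cumsum(pdf)

# Probability of landing in the bin that contains a given value:
value = 3.0
bin_idx = np.searchsorted(edges, value) - 1
print(pdf[bin_idx])

# Generate synthetic response times via inverse-transform sampling:
u = np.random.rand(5)
synthetic = edges[np.searchsorted(cdf, u)]
print(synthetic)
```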
Try the gamma distribution: http://en.wikipedia.org/wiki/Gamma_distribution
From Wikipedia:
The gamma distribution is frequently a probability model for waiting times; for instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution.
The standard for randomized arrival times in performance modelling is the exponential distribution for the gaps between arrivals (equivalently, a Poisson distribution for the number of arrivals in a fixed interval; the sum of several exponential inter-arrival times follows an Erlang distribution).
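For example, a sketch of generating randomized arrival times from exponential inter-arrival gaps (the rate is a made-up example):

```python
import numpy as np

rate = 2.0  # hypothetical arrival rate: 2 requests per second
inter_arrival = np.random.exponential(1.0 / rate, size=10)
arrival_times = np.cumsum(inter_arrival)  # simulated request timestamps
print(arrival_times)
```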
Not exactly answering your question, but relevant still: Mor Harchol-Balter did a very nice analysis of the size of jobs submitted to a scheduler, The effect of heavy-tailed job size distributions on computer systems design (1999). She found that the size of jobs submitted to her distributed task assignment system took a power-law distribution, which meant that certain pieces of conventional wisdom she had assumed in the construction of her task assignment system, most importantly that the jobs should be well load balanced, had awful consequences for submitters of jobs. She has done good follow-up work on this issue.
The broader point is, you need to ask such questions as:
What happens if reasonable-seeming assumptions about the distribution of performance, such as that they take a normal distribution, break down?
Are the data sets I'm looking at really representative of the problem I'm trying to solve?
I'm developing an application that optimally assigns shifts to nurses in a hospital. I believe this is a linear programming problem with discrete variables, and therefore probably NP-hard:
For each day, each nurse (ca. 15-20) is assigned a shift
There is a small number (ca. 6) of different shifts
There is a considerable number of constraints and optimization criteria, either concerning a day or concerning an employee, e.g.:
There must be a minimum number of people assigned to each shift every day
Some shifts overlap so that it's OK to have one less person in early shift if there's someone doing intermediate shift
Some people prefer early shift, some prefer late shift, but a minimum of shift changes is required to still get the higher shift-work pay.
It's not allowed for one person to work late shift one day and early shift the next day (due to minimum resting time regulations)
Meeting assigned working week lengths (different for different people)
...
So basically there is a large number (about 20*30 = 600) of variables that can each take a small number of discrete values.
Currently, my plan is to use a modified Min-conflicts algorithm (see the sketch after this list):
start with random assignments
have a fitness function for each person and each day
select the person or day with the worst fitness value
select at random one of the assignments for that day/person and set it to the value that results in the optimal fitness value
repeat until either a maximum number of iterations is reached or no improvement can be found for the selected day/person
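Here is the sketch referenced above, a loose Python rendering of the plan (the fitness and values helpers are hypothetical placeholders; lower fitness is taken to mean fewer conflicts):

```python
def min_conflicts(schedule, fitness, values, max_iters=10_000):
    """Loose sketch of the plan above; the helpers are hypothetical.

    schedule: dict mapping (person, day) -> shift
    fitness:  callable(schedule, key) -> conflict score, lower is better
    values:   the possible shifts
    """
    for _ in range(max_iters):
        # select the assignment with the worst (highest) fitness value
        worst = max(schedule, key=lambda k: fitness(schedule, k))
        if fitness(schedule, worst) == 0:
            return schedule  # nothing left to improve
        # set it to the value giving the best fitness for that slot
        schedule[worst] = min(
            values, key=lambda v: fitness({**schedule, worst: v}, worst))
    return schedule
```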
Any better ideas? I am somewhat worried that it will get stuck in a local optimum. Should I use some form of simulated annealing? Or consider not only changes in one variable at a time, but specifically switches of shifts between two people (the main component in the current manual algorithm)? I want to avoid tailoring the algorithm to the current constraints since those might change.
Edit: it's not necessary to find a strictly optimal solution; the roster is currently done manually, and I'm pretty sure the result is considerably sub-optimal most of the time, so it shouldn't be hard to beat that. Short-term adjustments and manual overrides will also definitely be necessary, but I don't believe this will be a problem; marking past and manual assignments as "fixed" should actually simplify the task by reducing the solution space.
This is a difficult problem to solve well. There have been many academic papers on this subject, particularly in the Operations Research field; see for example nurse rostering papers 2007-2008, or just google "nurse rostering operations research". The complexity also depends on aspects such as: how many days to solve; what type of "requests" the nurses can make; whether the roster is "cyclic"; and whether it is a long-term plan or needs to handle short-term rostering "repair" such as sickness and swaps, etc.
The algorithm you describe is a heuristic approach.
You may find you can tweak it to work well for one particular instance of the problem but as soon as "something" is changed it may not work so well (e.g. local optima, poor convergence).
However, such an approach may be adequate depending on your particular business needs, e.g. how important it is to get the optimal solution, whether the problem outline you describe is expected to stay the same, what the potential savings are (money and resources), how important the nurses' perception of the quality of their rosters is, what the budget for this work is, etc.
Umm, did you know that some ILP solvers do quite a good job? Try AIMMS, Mathematica or the GNU Linear Programming Kit! 600 variables is of course a lot more than Lenstra's theorem will solve easily, but sometimes these ILP solvers have a good handle on it, and in AIMMS you can modify the branching strategy a little. Plus, there's a really fast 100%-approximation for ILPs.
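For a feel of what an ILP formulation looks like, here is a minimal sketch with the PuLP library (my choice, not named above; the sizes and coverage numbers are placeholders), modeling just one-shift-per-day and minimum-coverage constraints:

```python
import pulp

nurses, days, shifts = range(15), range(30), range(6)
min_cover = 2  # assumed minimum staffing per shift per day

prob = pulp.LpProblem("roster", pulp.LpMinimize)
# x[n][d][s] = 1 if nurse n works shift s on day d
x = pulp.LpVariable.dicts("x", (nurses, days, shifts), cat="Binary")

for n in nurses:
    for d in days:
        # each nurse works at most one shift per day
        prob += pulp.lpSum(x[n][d][s] for s in shifts) <= 1

for d in days:
    for s in shifts:
        # minimum number of people per shift per day
        prob += pulp.lpSum(x[n][d][s] for n in nurses) >= min_cover

# placeholder objective: minimize total shifts assigned
prob += pulp.lpSum(x[n][d][s] for n in nurses for d in days for s in shifts)
prob.solve()
```

The real model would add the resting-time, preference and working-week constraints from the question as further linear inequalities or objective terms.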
I solved a shift assignment problem for a large manufacturing plant recently. First we tried generating purely random schedules and returning any one which passed the is_schedule_valid test - the fallback algorithm. This was, of course, slow and indeterminate.
Next we tried genetic algorithms (as you suggested), but couldn't find a good fitness function that closed on any viable solution (because the smallest change can make the entire schedule RIGHT or WRONG - no points for almost).
Finally we chose the following method (which worked great!):
Randomize the input set (i.e. jobs, shift, staff, etc.).
Create a valid tuple and add it to your tentative schedule.
If no valid tuple can be created, roll back (and increment) the last tuple added.
Pass the partial schedule to a function that tests could_schedule_be_valid, i.e. could this schedule still be valid if the remaining tuples were filled in some possible way?
If !could_schedule_be_valid, simply rollback (and increment) the tuple added in (2).
If schedule_is_complete, return schedule
Goto (2)
You incrementally build a partial schedule this way. The benefit is that some tests for a valid schedule can easily be done in Step 2 (pre-tests), while others must remain in Step 5 (post-tests).
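A loose sketch of this build-with-rollback loop (Python, not our PHP implementation; the helper callables are hypothetical):

```python
def build_schedule(slots, candidates, could_be_valid, schedule=None):
    """Incremental build with rollback, as described above.

    slots: randomized list of (job, shift, ...) positions to fill
    candidates: callable(slot) -> iterable of possible assignments
    could_be_valid: callable(partial_schedule) -> bool (the key pre-test)
    """
    schedule = schedule or []
    if len(schedule) == len(slots):
        return schedule  # schedule_is_complete
    slot = slots[len(schedule)]
    for tuple_ in candidates(slot):  # "increment" = try next candidate
        schedule.append(tuple_)
        if could_be_valid(schedule):
            result = build_schedule(slots, candidates,
                                    could_be_valid, schedule)
            if result is not None:
                return result
        schedule.pop()  # roll back the last tuple added
    return None  # no valid tuple can be created at this depth
```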
Good luck. We wasted days trying the first two algorithms, but got the recommended algorithm generating valid schedules instantly in under 5 hours of development.
Also, we supported pre-fixing and post-fixing of assignments that the algorithm would respect; you simply don't randomize those slots in Step 1. You'll find that the solutions don't have to be anywhere near optimal. Our solution is O(N*M) at a minimum but executes in PHP(!) in less than half a second for an entire manufacturing plant. The beauty is in ruling out bad schedules quickly using a good could_schedule_be_valid test.
The people that are used to doing it manually don't care if it takes an hour - they just know they don't have to do it manually any more.
Mike,
Don't know if you ever got a good answer to this, but I'm pretty sure that constraint programming is the ticket. While a GA might give you an answer, CP is designed to give you many answers or tell you if there is no feasible solution. A search on "constraint programming" and scheduling should bring up lots of info. It's a relatively new area and CP methods work well on many types of problems where traditional optimization methods bog down.
Dynamic programming a la Bell? Kinda sounds like there's a place for it: overlapping subproblems, optimal substructures.
One thing you can do is to try to look for symmetries in the problem. E.g. can you treat all nurses as equivalent for the purposes of the problem? If so, then you only need to consider nurses in some arbitrary order -- you can avoid considering solutions such that any nurse i is scheduled before any nurse j where i > j. (You did say that individual nurses have preferred shift times, which contradicts this example, although perhaps that's a less important goal?)
I think you should use a genetic algorithm because:
It is well suited to large problem instances.
It yields reduced time complexity at the price of an inexact answer (not the ultimate best).
You can specify constraints and preferences easily by adjusting fitness punishments for those not met (see the sketch after this list).
You can specify a time limit for program execution.
The quality of the solution depends on how much time you intend to spend solving the problem.
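As a sketch of that fitness-punishment idea (Python; the penalty weights and helper callables are hypothetical):

```python
def roster_fitness(roster, hard_constraints, soft_preferences,
                   hard_penalty=1000.0, soft_penalty=1.0):
    """Penalty-based fitness: each unmet hard constraint costs far more
    than an unmet soft preference, so the GA first satisfies the rules,
    then optimizes the preferences. Higher scores are better."""
    score = 0.0
    score -= hard_penalty * sum(1 for c in hard_constraints if not c(roster))
    score -= soft_penalty * sum(1 for p in soft_preferences if not p(roster))
    return score
```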
Genetic Algorithms Definition
Genetic Algorithms Tutorial
Class scheduling project with GA
Also take a look at: a similar question and another one.
Using CSP programming I made programs for automatic shift rostering, e.g.:
2-shift system - tested for 100+ nurses, 30-day time horizon, 10+ rules
3-shift system - tested for 80+ nurses, 30-day time horizon, 10+ rules
3-shift system, 4 teams - tested for a 365-day horizon, 10+ rules
and a couple of similar systems. All of them were tested on my home PC (1.8 GHz, dual-core). Execution times were always acceptable, e.g. for the third system it took around 5 min and 300 MB of RAM.
The hardest part of this problem was selecting the proper solver and the proper solving strategy.
Metaheuristics did very well at the International Nurse Rostering Competition 2010.
For an implementation, see this video of continuous nurse rostering (Java).