Efficient node traffic allocation - algorithm

Users can be assigned to one experiment on my site. I have an API that developers use to trigger logic for each experiment. They call ExperimentEngine.run() to trigger the code logic for the experiment.
I would like to allocate traffic to each experiment at exposure, at the point where a user might be exposed to the logic for that experiment. I would like to allocate traffic so that experiments that users usually see last don't get starved.
For example, if user A is exposed to experiment A at login and then goes to page B and gets exposed to experiment B, user A should be assigned to either experiment A or experiment B at the point of exposure. That means they will only ever see one of the two experiments, or neither, but never both. I would like to figure out the right algorithm so that experiment B (which is downstream and shown after the user has already passed experiment A) does not get starved of traffic; I don't want all traffic going to experiment A.
So the flow is as follows:
User visits page A where experiment A is implemented
We decide whether to assign the user to experiment A. If the user is assigned to A, they will be able to see experiment A.
User visits page B where experiment B is implemented, and we decide whether to assign the user to experiment B.
Users can only see experiments that they are assigned to.
I want to come up with an algorithm that lets me assign traffic to experiments regardless of where they are implemented, so that the traffic distribution is efficient and experiments implemented downstream don't get starved (even though the user sees experiment B last, they still get a good chance of being assigned to B).
Can someone please point me in the right direction to an algorithm I can use to allocate traffic efficiently, so that experiments reach sample size and statistical significance in good time, in a system where experiments are allocated traffic at the point of exposure, where experiments are "exposed" to the user at different points of the flow (early or later on), and in a way that ensures experiments exposed later on are not starved of traffic?
A possible algorithm:
For each experiment, we make a decision of whether to assign based on the experiment's location, using a coin flip.
If we get heads, a list of experiments that match the user's criteria and that are implemented for that location are selected.
An experiment is chosen from that list based on priority system.
At every location, a % of users are assigned to one of the experiments implemented at that location.
When we decide to assign or not to assign to any experiments at that location, that decision is not made again for the user.
What I am struggling with is what that priority system algorithm should be,
and also whether this is the most efficient way to assign users to experiments that are implemented at different points of the flow.
How do we decide whether to assign users to an experiment at a specific location? Right now we use a coin flip, but that means 50% of users will be assigned to an experiment at each location, which does not work.

If you can collect lists of page visits per user, then given the probability of running each experiment when a user visits its page, you can work out the probability with which each experiment ends up being run.
Given this, you need to work out what collection of probability settings will achieve the desired result. If you have a user track that visits pages A, B, C, each running a different experiment with probabilities p, q, r, then the probability of running A is p, the probability of running B is q(1-p), and the probability of running C is r(1-p)(1-q); the overall probabilities are the sums over all of the user tracks. So you can work out not only the probabilities as a function of p, q, r but also the derivatives of these probabilities with respect to p, q, r.
This means that you should be able to find some numerical analysis optimization routine that will find values of p,q,r... to minimize the sum of the squared differences between the probabilities of running particular experiments from those values and whatever target values for those probabilities you have.
(Actually the maths might be nicer if you optimize some linear function of the probabilities of the user running the various experiments, probably varying the linear function until you get a result that appeals to you.)
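A hedged sketch of that idea (all class and method names here are made up, not part of any existing API): given the observed user tracks and a per-experiment assignment probability p[i], compute the traffic each experiment actually receives, then nudge the probabilities toward target exposure rates. A gradient-based optimizer would converge much faster, since the derivatives are available as noted above; the crude coordinate search below is just to keep the sketch short.

import java.util.List;

// Hypothetical sketch: tracks are ordered lists of experiment indices, one per user.
public class TrafficAllocator {

    // Probability of each experiment being run, averaged over all tracks.
    static double[] exposureProbabilities(List<int[]> tracks, double[] p) {
        double[] prob = new double[p.length];
        for (int[] track : tracks) {
            double notAssignedYet = 1.0;              // P(user still unassigned)
            for (int exp : track) {
                prob[exp] += notAssignedYet * p[exp]; // P(assigned here)
                notAssignedYet *= (1.0 - p[exp]);
            }
        }
        for (int i = 0; i < prob.length; i++) prob[i] /= tracks.size();
        return prob;
    }

    // Crude coordinate search: adjust each p[i] to reduce squared error to targets.
    static void fitToTargets(List<int[]> tracks, double[] p, double[] target) {
        for (int iter = 0; iter < 200; iter++) {
            for (int i = 0; i < p.length; i++) {
                double bestP = p[i], bestErr = Double.POSITIVE_INFINITY;
                for (double candidate = 0.0; candidate <= 1.0; candidate += 0.01) {
                    p[i] = candidate;
                    double[] actual = exposureProbabilities(tracks, p);
                    double err = 0;
                    for (int j = 0; j < actual.length; j++) {
                        double d = actual[j] - target[j];
                        err += d * d;
                    }
                    if (err < bestErr) { bestErr = err; bestP = candidate; }
                }
                p[i] = bestP;
            }
        }
    }
}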

Related

How to determine if a current set of data values represent or relate to previous historic data values?

I am trying to develop a method to identify the browsing pattern of a user on the basis of page requests.
In a simple example I have created 8 pages, and for each user request I store that page's request frequency in the database.
Now, my hypothesis is to identify differences in the page request pattern; my assumption is that if the pattern differs from the pre-existing one, then it's a different (fraudulent) user. I am trying to develop this method as part of a multi-factor authentication system.
Now when a user logs in and browses with a different pattern from the ones observed previously, the system should be able to identify it as a change in pattern.
Question is how to utilize these data values to check if current pattern relates to pre-existing patterns or not.
OK, here's a pretty simple idea. Basically, what you're looking to do is generate a set of features, then identify whether the current session behaviour is different from the previously observed behaviour. I like to think of these one-class problems (only normal behaviour to train on, and you want to detect significant departures) as density estimation problems, so here's a simple probability model which will let you compute the probability of a current request pattern. When this gets too low (and how low that is will be something you need to tune for the desired behaviour), something is going on.
Our observations consist of counts for each of the pages. Let their sum, the total number of requests, be equal to c_total, and counts for each page i be p_i. Then I'd propose:
c_total ~ Poisson(\lambda)
p|c_total ~ Multinomial(\theta, c_total)
This allows you to assign probability to a new observation given learned user-specific parameters \lambda (uni-variate) and \theta (vector of same dimension as p). To do this, calculate the probability of seeing that many requests from the pmf of the Poisson distribution, then calculate the probability of seeing the page counts from the multinomial, and multiply them together. You probably then want to normalise by c_total so that you can compare sessions with different numbers of requests (since the more requests, the more numbers < 1 you're multiplying together).
So, all that's left is to get the parameters from previous, "good" sessions from that user. The simplest thing is maximum likelihood, where \lambda is the mean total number of requests in previous sessions, and \theta_i is the proportion of all page views which were p_i (for that particular user). This may work for you: however, given that you want to be learning from very small numbers of observations, I'd be tempted to go with a full Bayesian model. This will also let you neatly update parameters after each non-suspicious observation. Inference in these distributions is very easy, with conjugate priors for \lambda and \theta and analytic predictive distributions, so it won't be difficult if you're familiar with these kinds of model at all.
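As a rough illustration of the model above (a sketch only, with illustrative names, assuming the maximum-likelihood parameters rather than the full Bayesian treatment): score a session's per-page counts with the Poisson total-count term times the multinomial page-split term, normalised per request.

// Hedged sketch of the proposed scoring, not a drop-in implementation.
public class SessionScorer {
    private final double lambda;   // mean total requests per session
    private final double[] theta;  // per-page proportions, sums to 1

    public SessionScorer(double lambda, double[] theta) {
        this.lambda = lambda;
        this.theta = theta;
    }

    // Log-probability of observing the given per-page counts in one session.
    public double logProbability(int[] counts) {
        int cTotal = 0;
        for (int c : counts) cTotal += c;
        if (cTotal == 0) return 0.0;

        // log Poisson pmf: c*log(lambda) - lambda - log(c!)
        double logP = cTotal * Math.log(lambda) - lambda - logFactorial(cTotal);

        // log Multinomial pmf: log(c!) - sum log(c_i!) + sum c_i*log(theta_i)
        logP += logFactorial(cTotal);
        for (int i = 0; i < counts.length; i++) {
            logP -= logFactorial(counts[i]);
            if (counts[i] > 0) logP += counts[i] * Math.log(theta[i]);
        }
        // Normalise per request so sessions of different length are comparable.
        return logP / cTotal;
    }

    private static double logFactorial(int n) {
        double s = 0;
        for (int k = 2; k <= n; k++) s += Math.log(k);
        return s;
    }
}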
One approach would be to use an unsupervised learning method such as a Self-Organizing Map (SOM, http://en.wikipedia.org/wiki/Self-organizing_map). Train the SOM on data representing expected/normal user behavior and then see how well the candidate data set fits the trained map. Keywords to search for in conjunction with "Self-organizing maps" might be "novelty/anomaly/intrusion detection" (turns up e.g. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.55.2616&rep=rep1&type=pdf)
You should think about whether fraudulent use-cases can be modeled in advance (in which case you can train detectors specifically for them) or whether only deviations from normal behavior are of interest.
If you want to start simple, implement a cosine similarity measure. This would allow you to define a set of "good" vectors. The current user's activity could be compared to the good vectors. If you cannot retrieve a good vector, then the activity is flagged.
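A minimal sketch of that idea, assuming each session is summarised as a vector of per-page request counts; the threshold is something you would tune.

public class CosineCheck {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static boolean isSuspicious(double[] current, double[][] goodVectors, double threshold) {
        for (double[] good : goodVectors) {
            if (cosine(current, good) >= threshold) return false; // matches a known pattern
        }
        return true; // no good vector is similar enough, so flag the activity
    }
}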

Dividing the world in a thousand or so locations

Background: I want to create a weather service, and since most available APIs limit the number of daily calls, I want to divide the planet in a thousand or so areas.
Obviously, internet users are not uniformly distributed, so the sampling should be finer around densely populated regions.
How should I go about implementing this?
Where can I find data regarding geographical internet user density?
The algorithm will probably be something similar to k-means. However, implementing it on a sphere with oceans may be a bit tricky. Any insight?
Finally, maybe there is a way I can avoid doing all of this?
Very similar to k-means is the centroidal Voronoi diagram (it is the continuous version of k-means). However, this would produce a uniform tessellation of your sphere that does not account for user density as you wish.
So a similar solution is the same technique but used with a Power Diagram: a Power Diagram is a Voronoi diagram that accounts for a density (by assigning a weight to each Voronoi seed). Such a diagram can be computed using an embedding in a 3D space (instead of 2D) that consists of the first two (x, y) coordinates plus a third one which is the square root of [any large positive constant minus the weight for the given point].
Using that, you can obtain a tessellation of your domain that accounts for the user density.
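A hedged sketch of that embedding (names are illustrative): each seed with weight w is lifted to height sqrt(C - w) for a constant C larger than every weight, and a query point, left at height 0, is assigned to its nearest lifted seed under ordinary squared Euclidean distance, which reproduces the power-diagram assignment (the distance equals the power distance plus C).

public class PowerDiagram {
    static int nearestSeed(double qx, double qy,
                           double[][] seeds, double[] weights, double C) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < seeds.length; i++) {
            double dx = qx - seeds[i][0];
            double dy = qy - seeds[i][1];
            double dz = Math.sqrt(C - weights[i]);     // lifted third coordinate
            double dist = dx * dx + dy * dy + dz * dz; // = power distance + C
            if (dist < bestDist) { bestDist = dist; best = i; }
        }
        return best;
    }
}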
You don't care about internet user density in general. You care about the density of users using your service - and you don't care where those users are, you care where they ask about. So once your site has been going for more than a day you can use the locations people ask about the previous day to work out what the areas should be for the next day.
Dynamic programming on a tree is easy. What I would do for an algorithm is to build a tree of successively more finely divided cells. More cells mean a smaller error, because people get predictions for points closer to them, and you can work out the error, or at least the relative error, between more cells and fewer cells. Starting from the bottom up, work out the smallest possible total error contributed by each subtree when it is allowed to be divided into up to 1, 2, 3, ... N cells. You can work out the best possible division and smallest possible error for each k = 1..N for a node by looking at the smallest possible error you have already calculated for each of its descendants, and working out how best to share the available k divisions between them.
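A hedged sketch of that bottom-up computation (the Cell interface and its error function are placeholders, not anything from an existing library): best[k] is the smallest total error when a subtree is allowed exactly k cells, and children are folded in one at a time with a small knapsack over the cell budget.

import java.util.Arrays;
import java.util.List;

public class CellTree {
    interface Cell {
        boolean isLeaf();
        List<Cell> children();
        double errorAsSingleCell(); // error when this whole region stays one undivided cell
    }

    // best[k] = least total error when this subtree may use exactly k cells.
    static double[] bestErrors(Cell node, int maxCells) {
        double[] best = new double[maxCells + 1];
        Arrays.fill(best, Double.POSITIVE_INFINITY);
        best[1] = node.errorAsSingleCell();
        if (node.isLeaf()) return best;

        // Fold children in one at a time: combined[k] = least error when the
        // children seen so far share exactly k cells between them.
        double[] combined = null;
        for (Cell child : node.children()) {
            double[] childBest = bestErrors(child, maxCells);
            if (combined == null) { combined = childBest.clone(); continue; }
            double[] next = new double[maxCells + 1];
            Arrays.fill(next, Double.POSITIVE_INFINITY);
            for (int total = 2; total <= maxCells; total++)
                for (int give = 1; give < total; give++)
                    next[total] = Math.min(next[total],
                            combined[total - give] + childBest[give]);
            combined = next;
        }
        // For budgets above 1, take the better of "stay one cell" and "split among children".
        for (int k = 2; k <= maxCells; k++) best[k] = Math.min(best[k], combined[k]);
        return best;
    }
}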
I would try to avoid doing this by thinking of a different idea. Depending on the way you look at life, there are at least two disadvantages of this:
1) You don't seem to be adding anything to the party. It looks like you are interposing yourself between organizations that actually make weather forecasts and their clients. Organizations lose direct contact with their clients, which might for instance lose them advertising revenue. Customers get a poorer weather forecast.
2) Most sites have legal terms of service, which most clients can ignore without worrying. My guess is that you would be breaking those terms of service, and if your service gets popular enough to be noticed, they will be enforced against you.

How to check user choice algorithm

I have an algorithm that chooses a list of items that should fit the user's likings.
I'll skip the algorithm's details because of confidentiality issues...
Now, I'm trying to think of a way to check it statistically, with a group of people.
The way I'm checking it now is:
Algorithm gets the best results per user.
Shuffle the top 5 results with the lowest 5 results.
Have the person rank the results in order of preference (0 = liked best, 9 = liked least).
Compare the user's ranking to the algorithm's ranking.
I'm doing this because I figured that to show the algorithm chooses good results, I need to put in some bad results and show that the algorithm recognizes them as bad as well.
So, what I'm asking is:
Is shuffling top results with low results a good idea?
And if not, do you have an idea on how to get good statistics on how well an algorithm matches user preferences (we have users that can choose stuff)?
First ask yourself:
What am I trying to measure?
Not to rag on the other submissions here, but while mjv and Sjoerd's answers offer some plausible heuristic reasons for why what you are trying to do may not work as you expect; they are not constructive in the sense that they do not explain why your experiment is flawed, and what you can do to improve it. Before either of these issues can be addressed, what you need to do is define what you hope to measure, and only then should you go about trying to devise an experiment.
Now, I can't say for certain what would constitute a good metric for your purposes, but I can offer you some suggestions. As a starting point, you could try using a precision vs. recall graph:
http://en.wikipedia.org/wiki/Precision_and_recall
This is a standard technique for assessing the performance of ranking and classification algorithms in machine learning and information retrieval (ie web searching). If you have an engineering background, it could be helpful to understand that precision/recall generalizes the notion of precision/accuracy:
http://en.wikipedia.org/wiki/Accuracy_and_precision
Now let us suppose that your algorithm does something like this; it takes as input some prior data about a user, then returns a ranked list of other items that user might like. For example, your algorithm is a web search engine and the items are pages, or you have a movie recommender and the items are movies. This sounds pretty close to what you are trying to do now, so let us continue with this analogy.
Then the precision of your algorithm's results at n is the number of items the user actually liked among your top n recommendations, divided by n:
precision = #(items user actually liked out of top n) / n
And the recall is the number of items you correctly marked as liked, divided by the total number of items the user actually likes:
recall = #(items correctly marked as liked) / #(items user actually likes)
Ideally, one would want to maximize both of these quantities, but they are in a certain sense competing objectives. To illustrate this, consider a few extremal situations: For example, you could have a recommender that returns everything, which would have perfect recall, but very low precision. A second possibility is to have a recommender that returns nothing or only one sure-fire hit, which would have (in a limiting sense) perfect precision, but almost no recall.
As a result, to understand the performance of a ranking algorithm, people typically look at its precision vs. recall graph. These are just plots of the precision vs the recall as the number of items returned are varied:
Image taken from the following tutorial (which is worth reading):
http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-ranked-retrieval-results-1.html
Now to approximate a precision vs recall curve for your algorithm, here is what you can do. First, return a large set of, say, n results as ranked by your algorithm. Next, get the user to mark which items they actually liked out of those n results. This trivially gives us enough information to compute the precision at every cutoff up to n (since we know the counts at each cutoff). We can also compute the recall (as restricted to this set of documents) by taking the total number of items liked by the user in the entire set. Thus, we can plot a precision/recall curve for this data. There are fancier statistical techniques for estimating this using less work, but I have already written enough. For more information please check out the links in the body of my answer.
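A minimal sketch of that computation, assuming you have the ranked list (best first) and the user's marks of which items they actually liked:

public class PrecisionRecall {
    // liked[i] is true if the user liked the item ranked at position i.
    static double[][] curve(boolean[] liked) {
        int totalLiked = 0;
        for (boolean l : liked) if (l) totalLiked++;

        double[][] points = new double[liked.length][2]; // {precision, recall} at cutoff n
        int likedSoFar = 0;
        for (int n = 1; n <= liked.length; n++) {
            if (liked[n - 1]) likedSoFar++;
            points[n - 1][0] = (double) likedSoFar / n;               // precision at n
            points[n - 1][1] = totalLiked == 0 ? 0.0
                    : (double) likedSoFar / totalLiked;               // recall at n
        }
        return points;
    }
}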
Your method is biased. If you use the top 5 and bottom 5 results, it is very likely that the user will order them according to your algorithm. Let's say we have an algorithm which rates music, and I present the top 1 and bottom 1 to the user:
Queen
The Cheeky Girls
Of course the user will mark it exactly like your algorithm, because the difference between the top and bottom is so big. You need to make the user rate randomly selected items.
Independently of the question of mixing top and bottom guesses, an implicit drawback of the experimental process, as described, is that the data related to the user's choice can only be exploited in the context of one particular version of the algorithm:
When / if the algorithm or its parameters are ever slightly tuned, the record of past user's choices cannot be reused to validate the changes to the algorithm.
On mixing high and low results:
The main drawback of producing sets of items by mixing the algorithm's top and bottom guesses is that it may further complicate the choice of the error/distance function used to measure how well the algorithm performed. Unless the two subsets of items (topmost choices, bottom most choices) are kept separately for the purpose of computing distinct measurements, typical statistical measures of the error (say RMSE) will not be a good measurement of the effective algorithm's quality.
For example, an algorithm which frequently suggests low-ranked items that end up being picked as top choices by the user may have the same averaged error rate as an algorithm which never confuses highs with lows but where the user tends to reorder items more within each subset.
A second drawback is that the evaluation method may merely qualify the algorithm's ability to rank the relative like/dislike of users for items it [the algorithm] chooses, rather than its ability to produce the user's actual top choices.
In other words, the user's actual top choices may never be offered to them; so yes, the algorithm does a good job of guessing that the user will like, say, Rock-and-Roll before Rap, but it never guesses that in fact the user prefers Classical Baroque music above all.

Clustering Algorithm for Paper Boys

I need help selecting or creating a clustering algorithm according to certain criteria.
Imagine you are managing newspaper delivery persons.
You have a set of street addresses, each of which is geocoded.
You want to cluster the addresses so that each cluster is assigned to a delivery person.
The number of delivery persons, or clusters, is not fixed. If needed, I can always hire more delivery persons, or lay them off.
Each cluster should have about the same number of addresses. However, a cluster may have less addresses if a cluster's addresses are more spread out. (Worded another way: minimum number of clusters where each cluster contains a maximum number of addresses, and any address within cluster must be separated by a maximum distance.)
For bonus points, when the data set is altered (address added or removed), and the algorithm is re-run, it would be nice if the clusters remained as unchanged as possible (ie. this rules out simple k-means clustering which is random in nature). Otherwise the delivery persons will go crazy.
So... ideas?
UPDATE
The street network graph, as described in Arachnid's answer, is not available.
I've written an inefficient but simple algorithm in Java to see how close I could get to doing some basic clustering on a set of points, more or less as described in the question.
The algorithm works on a list of (x, y) coords ps that are specified as ints. It takes three other parameters as well:
radius (r): given a point, what is the radius for scanning for nearby points
max addresses (maxA): what are the maximum number of addresses (points) per cluster?
min addresses (minA): minimum addresses per cluster
Set limitA=maxA.
Main iteration:
Initialize empty list possibleSolutions.
Outer iteration: for every point p in ps.
Initialize empty list pclusters.
A worklist of points wps=copy(ps) is defined.
Workpoint wp=p.
Inner iteration: while wps is not empty.
Remove the point wp in wps. Determine all the points wpsInRadius in wps that are at a distance < r from wp. Sort wpsInRadius ascendingly according to the distance from wp. Keep the first min(limitA, sizeOf(wpsInRadius)) points in wpsInRadius. These points form a new cluster (list of points) pcluster. Add pcluster to pclusters. Remove points in pcluster from wps. If wps is not empty, wp=wps[0] and continue inner iteration.
End inner iteration.
A list of clusters pclusters is obtained. Add this to possibleSolutions.
End outer iteration.
We have for each p in ps a list of clusters pclusters in possibleSolutions. Every pclusters is then weighted. If avgPC is the average number of points per cluster in possibleSolutions (global) and avgCSize is the average number of clusters per pclusters (global), then this is the function that uses both these variables to determine the weight:
private static WeightedPClusters weigh(List<Cluster> pclusters, double avgPC, double avgCSize)
{
    double weight = 0;
    for (Cluster cluster : pclusters)
    {
        int ps = cluster.getPoints().size();
        double psAvgPC = ps - avgPC;            // deviation from the average cluster size
        weight += psAvgPC * psAvgPC / avgCSize; // penalize uneven cluster sizes
        weight += cluster.getSurface() / ps;    // penalize spread-out clusters
    }
    return new WeightedPClusters(pclusters, weight);
}
The best solution is now the pclusters with the least weight. We repeat the main iteration as long as we can find a better solution (less weight) than the previous best one with limitA=max(minA,(int)avgPC). End main iteration.
Note that for the same input data this algorithm will always produce the same results. Lists are used to preserve order and there is no random involved.
To see how this algorithm behaves, this is an image of the result on a test pattern of 32 points. If maxA=minA=16, then we find 2 clusters of 16 addresses.
(source: paperboyalgorithm at sites.google.com)
Next, if we decrease the minimum number of addresses per cluster by setting minA=12, we find 3 clusters of 12/12/8 points.
(source: paperboyalgorithm at sites.google.com)
And to demonstrate that the algorithm is far from perfect, here is the output with maxA=7, yet we get 6 clusters, some of them small. So you still have to guess too much when determining the parameters. Note that r here is only 5.
(source: paperboyalgorithm at sites.google.com)
Just out of curiosity, I tried the algorithm on a larger set of randomly chosen points. I added the images below.
Conclusion? This took me half a day, it is inefficient, the code looks ugly, and it is relatively slow. But it shows that it is possible to produce some result in a short period of time. Of course, this was just for fun; turning this into something that is actually useful is the hard part.
(source: paperboyalgorithm at sites.google.com)
(source: paperboyalgorithm at sites.google.com)
What you are describing is a (Multi)-Vehicle-Routing-Problem (VRP). There's quite a lot of academic literature on different variants of this problem, using a large variety of techniques (heuristics, off-the-shelf solvers etc.). Usually the authors try to find good or optimal solutions for a concrete instance, which then also implies a clustering of the sites (all sites on the route of one vehicle).
However, the clusters may be subject to major changes with only slightly different instances, which is what you want to avoid. Still, something in the VRP-Papers may inspire you...
If you decide to stick with the explicit clustering step, don't forget to include your distribution in all clusters, as it is part of each route.
For evaluating the clusters, using a graph representation of the street grid will probably yield more realistic results than connecting the dots on a white map (although both are TSP variants). If a graph model is not available, you can use the taxicab metric (|x_1 - x_2| + |y_1 - y_2|) as an approximation for the distances.
I think you want a hierarchical agglomeration technique rather than k-means. If you get your algorithm right you can stop it when you have the right number of clusters. As someone else mentioned, you can seed subsequent clusterings with previous solutions, which may give you a significant performance improvement.
You may want to look closely at the distance function you use, especially if your problem has high dimension. Euclidean distance is the easiest to understand but may not be the best, look at alternatives such as Mahalanobis.
I'm presuming that your real problem has nothing to do with delivering newspapers...
Have you thought about using an economic/market based solution? Divide the set by an arbitrary (but constant, to avoid randomness effects) split into even subsets (as determined by the number of delivery persons).
Assign a cost function to each point by how much it adds to the graph, and give each extra point an economic value.
Iterate allowing each person in turn to auction their worst point, and give each person a maximum budget.
This probably matches fairly well how the delivery people would think in real life, as people will find swaps, or will say "my life would be so much easier if I didn't do this one or two". It is also pretty flexible (for example, it would allow one point miles away from any others to be given a premium fairly easily).
I would approach it differently: Considering the street network as a graph, with an edge for each side of each street, find a partitioning of the graph into n segments, each no more than a given length, such that each paperboy can ride a single continuous path from the start to the end of their route. This way, you avoid giving people routes that require them to ride the same segments repeatedly (eg, when asked to cover both sides of a street without covering all the surrounding streets).
This is a very quick and dirty method of discovering where your "clusters" lie. This was inspired by the game "Minesweeper."
Divide your entire delivery space up into a grid of squares. Note - it will take some tweaking of the size of the grid before this will work nicely. My intuition tells me that a square size roughly the size of a physical neighbourhood block will be a good starting point.
Loop through each square and store the number of delivery locations (houses) within each block. Use a second loop (or some clever method on the first pass) to store the number of delivery points for each neighbouring block.
Now you can operate on this grid in a similar way to photo manipulation software. You can detect the edges of clusters by finding blocks where some neighbouring blocks have no delivery points in them.
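A hedged sketch of the grid step (names and the choice of cell size are illustrative): bin each geocoded address into a square cell, count deliveries per cell, and treat non-empty cells with an empty neighbour as cluster edges.

public class DeliveryGrid {
    static int[][] countPerCell(double[][] addresses, double minX, double minY,
                                double cellSize, int cols, int rows) {
        int[][] counts = new int[rows][cols];
        for (double[] a : addresses) {
            int col = (int) ((a[0] - minX) / cellSize);
            int row = (int) ((a[1] - minY) / cellSize);
            if (col >= 0 && col < cols && row >= 0 && row < rows) counts[row][col]++;
        }
        return counts;
    }

    // A non-empty cell is on a cluster edge if any 4-neighbour is empty or off-grid.
    static boolean isClusterEdge(int[][] counts, int row, int col) {
        if (counts[row][col] == 0) return false;
        int[][] deltas = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] d : deltas) {
            int r = row + d[0], c = col + d[1];
            if (r < 0 || r >= counts.length || c < 0 || c >= counts[0].length
                    || counts[r][c] == 0) return true;
        }
        return false;
    }
}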
Finally you need a system that combines number of deliveries made as well as total distance travelled to create and assign routes. There may be some isolated clusters with just a few deliveries to be made, and one or two super clusters with many homes very close to each other, requiring multiple delivery people in the same cluster. Every home must be visited, so that is your first constraint.
Derive a maximum allowable distance to be travelled by any one delivery person on a single run. Next do the same for the number of deliveries made per person.
The first ever run of the routing algorithm would assign a single delivery person, send them to any random cluster with not all deliveries completed, let them deliver until they hit their delivery limit or they have delivered to all the homes in the cluster. If they have hit the delivery limit, end the route by sending them back to home base. If they could safely travel to the nearest cluster and then home without hitting their max travel distance, do so and repeat as above.
Once the route is finished for the current delivery person, check if there are homes that have not yet had a delivery. If so, assign another delivery person, and repeat the above algorithm.
This will generate initial routes. I would store all the info - the location and dimensions of each square, the number of homes within a square and all of its direct neighbours, the cluster to which each square belongs, the delivery people and their routes - I would store all of these in a database.
I'll leave the recalc procedure up to you - but having all the current routes, clusters, etc in a database will enable you to keep all historic routes, and also try various scenarios to see how to best to adapt to changes creating the least possible changes to existing routes.
This is a classic example of a problem that deserves an optimized solution rather than trying to solve for "The OPTIMUM". It's similar in some ways to the "Travelling Salesman Problem", but you also need to segment the locations during the optimization.
I've used three different optimization algorithms to good effect on problems like this:
Simulated Annealing
Great Deluge Algorithm
Genetic Algorithms
Using an optimization algorithm, I think you've described the following "goals":
1. The geographic area for each paper boy should be minimized.
2. The number of subscribers served by each should be approximately equal.
3. The distance travelled by each should be about equal.
4. (And one you didn't state, but might matter) The route should end where it began.
Hope this gets you started!
* Edit *
If you don't care about the routes themselves, that eliminates goals 3 and 4 above, and perhaps allows the problem to be more tailored to your bonus requirements.
If you take demographic information into account (such as population density, subscription adoption rate and subscription cancellation rate) you could probably use the optimization techniques above to eliminate the need to rerun the algorithm at all as subscribers adopted or dropped your service. Once the clusters were optimized, they would stay in balance because the rates of each for an individual cluster matched the rates for the other clusters.
The only time you'd have to rerun the algorithm would be when an external factor (such as a recession/depression) caused changes in the behavior of a demographic group.
Rather than a clustering model, I think you really want some variant of the Set Covering location model, with an additional constraint to cover the number of addresses covered by each facility. I can't really find a good explanation of it online. You can take a look at this page, but they're solving it using areal units and you probably want to solve it in either euclidean or network space. If you're willing to dig up something in dead tree format, check out chapter 4 of Network and Discrete Location by Daskin.
Good survey of simple clustering algos. There is more though:
http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/index.html
Perhaps a minimum spanning tree of the customers, broken into sets based on locality to the paper boy. Prim's or Kruskal's algorithm gets the MST, with the distance between houses as the weight.
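A hedged sketch of that suggestion, using Kruskal's algorithm with straight-line distance as the edge weight; cutting the k-1 longest tree edges afterwards would split the houses into k clusters.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class HouseMst {
    static class Edge {
        final int a, b;
        final double weight;
        Edge(int a, int b, double weight) { this.a = a; this.b = b; this.weight = weight; }
    }

    static List<Edge> minimumSpanningTree(double[][] houses) {
        List<Edge> edges = new ArrayList<>();
        for (int i = 0; i < houses.length; i++)
            for (int j = i + 1; j < houses.length; j++)
                edges.add(new Edge(i, j, Math.hypot(
                        houses[i][0] - houses[j][0], houses[i][1] - houses[j][1])));
        edges.sort(Comparator.comparingDouble(e -> e.weight));

        int[] parent = new int[houses.length];
        for (int i = 0; i < parent.length; i++) parent[i] = i;

        List<Edge> tree = new ArrayList<>();
        for (Edge e : edges) {
            int ra = find(parent, e.a), rb = find(parent, e.b);
            if (ra != rb) {            // edge joins two different components, so keep it
                parent[ra] = rb;
                tree.add(e);
            }
        }
        return tree;
    }

    private static int find(int[] parent, int x) {
        while (parent[x] != x) x = parent[x] = parent[parent[x]]; // path halving
        return x;
    }
}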
I know of a pretty novel approach to this problem that I have seen applied to Bioinformatics, though it is valid for any sort of clustering problem. It's certainly not the simplest solution, but one that I think is very interesting. The basic premise is that clustering involves multiple objectives. For one, you want to minimise the number of clusters, the trivial solution being a single cluster with all the data. The second standard objective is to minimise the amount of variance within a cluster, the trivial solution being many clusters, each with only a single data point. The interesting solutions come about when you try to include both of these objectives and optimise the trade-off.
At the core of the proposed approach is something called a memetic algorithm, which is a little like a genetic algorithm (which Steve mentioned); however, it not only explores the solution space well but also has the ability to focus in on interesting regions, i.e. solutions. At the very least I recommend reading some of the papers on this subject, as memetic algorithms are an unusual approach, though a word of warning: it may lead you to read The Selfish Gene, and I still haven't decided whether that was a good thing... If algorithms don't interest you then maybe you can just try to express your problem in the format required and use the source code provided. Related papers and code can be found here: Multi Objective Clustering
This is not directly related to the problem, but something I've heard and which should be considered if this is truly a route-planning problem you have. This would affect the ordering of the addresses within the set assigned to each driver.
UPS has software which generates optimum routes for their delivery people to follow. The software tries to maximize the number of right turns that are taken during the route. This saves them a lot of time on deliveries.
For people that don't live in the USA the reason for doing this may not be immediately obvious. In the US people drive on the right side of the road, so when making a right turn you don't have to wait for oncoming traffic if the light is green. Also, in the US, when turning right at a red light you (usually) don't have to wait for green before you can go. If you're always turning right then you never have to wait for lights.
There's an article about it here:
http://abcnews.go.com/wnt/story?id=3005890
You can have k-means or expectation maximization remain as unchanged as possible by using the previous clusters as a clustering feature. Getting each cluster to have the same number of items seems a bit trickier. I can think of how to do it as a post-clustering step by doing k-means and then shuffling some points until things balance, but that doesn't seem very efficient.
A trivial answer which does not get any bonus points:
One delivery person for each address.
You have a set of street addresses, each of which is geocoded. You want to cluster the addresses so that each cluster is assigned to a delivery person.
The number of delivery persons, or clusters, is not fixed. If needed, I can always hire more delivery persons, or lay them off.
Each cluster should have about the same number of addresses. However, a cluster may have less addresses if a cluster's addresses are more spread out. (Worded another way: minimum number of clusters where each cluster contains a maximum number of addresses, and any address within cluster must be separated by a maximum distance.)
For bonus points, when the data set is altered (address added or removed), and the algorithm is re-run, it would be nice if the clusters remained as unchanged as possible (ie. this rules out simple k-means clustering which is random in nature). Otherwise the delivery persons will go crazy.
As has been mentioned, a Vehicle Routing Problem is probably better suited... Although it strictly isn't designed with clustering in mind, it will optimize the assignment based on the nearest addresses. Therefore your clusters will actually be the recommended routes.
If you provide a maximum number of deliverers and try to reach the optimal solution, this should tell you the minimum number that you require. This deals with point 2.
The same number of addresses can be obtained by providing a limit on the number of addresses to be visited, basically assigning a stock value (now it's a capacitated vehicle routing problem).
Adding time windows or the hours that the delivery persons work helps reduce the load if addresses are more spread out (now a capacitated vehicle routing problem with time windows).
If you use a nearest neighbour algorithm then you can get identical results each time; removing a single address shouldn't have too much impact on your final result, so this should deal with the last point.
I'm actually working on a C# class library to achieve something like this, and I think it's probably the best route to go down, although not necessarily easy to implement.
I acknowledge that this will not necessarily provide clusters of roughly equal size:
One of the best current techniques in data clustering is Evidence Accumulation. (Fred and Jain, 2005)
What you do is:
Given a data set with n patterns:
1. Use an algorithm like k-means over a range of k, or use a set of different algorithms; the goal is to produce an ensemble of partitions.
2. Create a co-association matrix C of size n x n.
3. For each partition p in the ensemble:
3.1 Update the co-association matrix: for each pattern pair (i, j) that belongs to the same cluster in p, set C(i, j) = C(i, j) + 1/N, where N is the number of partitions in the ensemble.
4. Use a clustering algorithm such as Single Link and apply the matrix C as the proximity measure. Single Link gives a dendrogram as a result, in which we choose the clustering with the longest lifetime.
I'll provide descriptions of SL and k-means if you're interested.
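A hedged sketch of the co-association step above (step 3), assuming the ensemble is given as per-partition cluster labels; the resulting matrix is then used as the similarity for single link.

public class EvidenceAccumulation {
    // labels[p][i] is the cluster id of pattern i in partition p.
    // C(i, j) ends up as the fraction of partitions that put i and j together.
    static double[][] coAssociationMatrix(int[][] labels, int n) {
        double[][] c = new double[n][n];
        int ensembleSize = labels.length;
        for (int[] partition : labels) {
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    if (partition[i] == partition[j]) {
                        c[i][j] += 1.0 / ensembleSize;
                        c[j][i] = c[i][j];
                    }
        }
        return c;
    }
}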
I would use a basic algorithm to create a first set of paperboy routes according to where they live, and current locations of subscribers, then:
when paperboys are:
Added: They take locations from one or more paperboys working in the same general area from where the new guy lives.
Removed: His locations are given to the other paperboys, using the closest locations to their routes.
when locations are:
Added : Same thing, the location is added to the closest route.
Removed: just removed from that boy's route.
Once a quarter, you could re-calculate the whole thing and change all the routes.

Strategy to find your best route via Public Transportation only?

Finding routes for a car is pretty easy: you store a weighted graph of all the roads and you can use Dijkstra's algorithm [1]. A bus route is less obvious. With a bus you have to represent things like "wait 10 minutes for the next bus" or "walk one block to another bus stop" and feed those into your pathfinding algorithm.
It's not even always simple for cars. In some cities, some roads are one-way-only into the city in the morning, and one-way-only out of the city in the evening. Some advanced GPSs know how to avoid busy routes during rush hour.
How would you efficiently represent this kind of time-dependent graph and find a route? There is no need for a provably optimal solution; if the traveler wanted to be on time, they would buy a car. ;-)
[1] A wonderful algorithm to mention in an example because everyone's heard of it, though A* is a likelier choice for this application.
I have been involved in the development of one journey planner system for Stockholm Public Transportation in Sweden. It was based on Dijkstra's algorithm, but with termination before every node in the system was visited. Today, when there are reliable coordinates available for each stop, I guess the A* algorithm would be the choice.
Data about upcoming traffic was extracted from several databases regularly and compiled into large tables loaded into the memory of our search server cluster.
One key to a successful algorithm was using a path cost function based on travel and waiting time multiplied by different weights. Known in Swedish as "kresu" time, these weighted times reflect the fact that, for example, one minute's waiting time is typically equivalent in "inconvenience" to two minutes of travelling time.
KRESU Weight table
x1 - Travel time
x2 - Walking between stops
x2 - Waiting at a stop during the journey. Stops under a roof, with shops, etc. can get a slightly lower weight, and crowded stations a higher one, to tune the algorithm.
The weight for the waiting time at the first stop is a function of traffic intensity and can be between 0.5 and 3.
Data structure
Area
A named area where your journey can start or end. A Bus Stop could be an area with two Stops. A larger Station with several platforms could be one area with one stop for each platform.
Data: Name, Stops in area
Stops
An array with all bus stops, train and underground stations. Note that you usually need two stops, one for each direction, because it takes some time to cross the street or walk to the other platform.
Data: Name, Links, Nodes
Links
A list with other stops you can reach by walking from this stop.
Data: Other Stop, Time to walk to other Stop
Lines/Tours
You have a number on the bus and a destination. The bus starts at one stop and passes several stops on its way to the destination.
Data: Number, Name, Destination
Nodes
Usually you have a timetable with at least the times when it should be at the first and last stop in a Tour. Each time a bus/train passes a stop, you add a new node to the array. This table can have millions of values per day.
Data: Line/Tour, Stop, Arrival Time, Departure Time, Error margin, Next Node in Tour
Search
Array with the same size as the Nodes array used to store how you got there and the path cost.
Data: Back-link to the previous Node (not set if the Node is unvisited), Path Cost (infinite for unvisited)
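A hedged translation of those structures into plain Java classes (field names are illustrative, not the original system's):

import java.util.List;

class Area {
    String name;
    List<Stop> stops;           // stops belonging to this named area
}

class Stop {
    String name;
    List<WalkLink> links;       // stops reachable on foot
    List<TimetableNode> nodes;  // arrivals/departures at this stop
}

class WalkLink {
    Stop otherStop;
    int walkMinutes;            // time to walk to the other stop
}

class Tour {
    String number;              // e.g. bus line number
    String name;
    String destination;
}

class TimetableNode {
    Tour tour;
    Stop stop;
    int arrivalTime;            // minutes from midnight
    int departureTime;
    int errorMarginMinutes;
    TimetableNode nextInTour;   // next stop served by the same vehicle
}

class SearchEntry {             // one entry per TimetableNode during the search
    TimetableNode previous;     // back-link; null if unvisited
    double pathCost = Double.POSITIVE_INFINITY;
}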
What you're talking about is more complicated than something like the mathematical models that can be described with simple data structures like graphs and with "simple" algorithms like Dijkstra's. What you are asking for is a more complex problem like those encountered in the world of automated logistics management.
One way to think about it is that you are asking a multi-dimensional problem, you need to be able to calculate:
Distance optimization
Time optimization
Route optimization
"Time horizon" optimization (if it's 5:25 and the bus only shows up at 7:00, pick another route.)
Given all of these circumstances you can attempt to do deterministic modeling using complex multi-layered data structures. For example, you could still use a weighted di-graph to represent the existing potential routes, wherein each node also contained a finite state automata which added a weight bias to a route depending on time values (so by crossing a node at 5:25 you get a different value than if your simulation crossed it at 7:00.)
However, I think that at this point you are going to find yourself with a simulation that is more and more complex, which most likely does not provide a "great" approximation of optimal routes when the advice is transferred into the real world. It turns out that software and mathematical modeling and simulation are at best weak tools when encountering real-world chaotic behaviors and dynamism.
My suggestion would go to use an alternate strategy. I would attempt to use a genetic algorithm in which the DNA for an individual calculated a potential route, I would then create a fitness function which encoded costs and weights in a more "easy to maintain" lookup table fashion. Then I would let the Genetic Algorithm attempt to converge on a near optimal solution for a public transport route finder. On modern computers a GA such as this is probably going to perform reasonably well, and it should be at least relatively robust to real world dynamism.
I think that most systems that do this sort of thing take the "easy way out" and simply do something like an A* search algorithm, or something similar to a greedy costed weighted digraph walk. The thing to remember is that the users of the public transport don't themselves know what the optimal route would be, so a 90% optimal solution is still going to be a great solution for the average case.
Some data points to be aware of from the public transportation arena:
Each transfer incurs a 10-minute penalty (unless it is a timed transfer) in the rider's mind. That is to say, mentally, a trip involving a single bus that takes 40 minutes is roughly equivalent to a 30-minute trip that requires a transfer.
Maximum distance that most people are willing to walk to a bus stop is 1/4 mile. Train station / Light rail about 1/2 mile.
Distance is irrelevant to the public transportation rider. (Only time is important)
Frequency matters (if a connection is missed how long until the next bus). Riders will prefer more frequent service options if the alternative is being stranded for an hour for the next express.
Rail has a higher preference than bus ( more confidence that the train will come and be going in the right direction)
Having to pay a new fare is a big hit. (add about a 15-20min penalty)
Total trip time matters as well (with above penalties)
How seamless is the connection? Does the rider have to exit a train station and cross a busy street? Or is it just stepping off a train and walking four steps to a bus?
Crossing busy streets -- another big penalty on transfers -- may miss connection because can't get across street fast enough.
If the cost of each leg of the trip is measured in time, then the only complication is factoring in the schedule, which just changes the cost at each node to a function of the current time t, where t is the total trip time so far (assuming schedules are normalized to start at t = 0).
So instead of Node A having a cost of 10 minutes, it has a cost of f(t) defined as:
t1 = nextScheduledStop(t)  // the next stop time at or after time t
baseTime = 10              // base travel time for the leg; for example, a 10-minute trip
return (t1 - t) + baseTime
Wait time is thus included dynamically in the cost of each leg, and walks between bus stops are just arcs with a constant time cost.
With this representation you should be able to apply A* or Dijkstra's algorithm directly.
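A minimal sketch of that time-dependent leg cost, assuming the departures for the leg are given as a sorted array of minutes from t = 0:

public class LegCost {
    // Cost of taking this leg at time t: wait until the next departure, then ride.
    static double cost(double t, double[] departures, double baseTravelTime) {
        for (double departure : departures) {
            if (departure >= t) {
                return (departure - t) + baseTravelTime; // wait + ride
            }
        }
        return Double.POSITIVE_INFINITY; // no more departures today
    }
}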
Finding routes for a car is pretty easy: you store a weighted graph of all the roads and you could use Dijkstra's algorithm. A bus route is less obvious.
It may be less obvious, but the reality is that it's merely another dimension to the car problem, with the addition of infinite cost calculation.
For instance, you mark the buses whose time is past as having infinite cost - they then aren't included in the calculation.
You then get to decide how to weight each aspect.
Transit Time might get weighted by 1
Waiting time might get weighted by 1
Transfers might get weighted by 0.5 (since I'd rather get there sooner and have an extra transfer)
Then you calculate all the routes in the graph using any usual cost algorithm with the addition of infinite cost:
Each time you move along an edge you have to keep track of the 'current' time (add up the transit time), and if you arrive at a vertex you have to assign infinite cost to any buses that leave prior to your current time. The current time is incremented by the waiting time at that vertex until the next bus leaves, and then you're free to move along another edge and find the new cost.
In other words, there's a new constraint, "current time", which is the time of the first bus starting, summed with all the transit and waiting times of buses and stops traveled.
It complicates the algorithm only a little bit, but the algorithm is still the same. You can see that most algorithms can be applied to this; some might require multiple passes, and a few won't work because you can't add the time-to-infinite-cost calculation inline. But most should work just fine.
You can simplify it further by simply assuming that the buses are on a schedule, and there's ALWAYS another bus, but it increases the waiting time. Do the algorithm only adding up the transit costs, then go through the tree again and add waiting costs depending on when the next bus is coming. It will sometimes result in less efficient versions, but the total graph of even a large city is actually pretty small, so it's not really an issue. In most cases one or two routes will be the obvious winners.
Google has this, but also includes additional edges for walking from one bus stop to another so you might find a slightly more optimal route if you're willing to walk in cities with large bus systems.
-Adam
The way I think of this problem is that ultimately you are trying to optimize your average speed from your starting point to your ending point. In particular, you don't care at all about total distance traveled if going [well] out of your way saves time. So, a basic part of the solution space is going to need to be identifying efficient routes available that cover non-trivial parts of the total distance at relatively high speeds between start and finish.
To your original point, the typical automotive route algorithms used by GPS navigation units to make the trip by car are a good bound for a target optimal total time and for optimal route evaluations. In other words, your bus-based trip would be doing really well to approach a car-based solution. Clearly, the bus-based system is going to have many more constraints than the car-based solutions, but having the car solution as a reference (time and distance) gives the bus algorithm a framework to optimize against*. So, put loosely, you want to morph the car solution towards the set of possible bus solutions in an iterative fashion, or perhaps more likely take possible bus solutions and score them against your car-based solution to know whether you are doing "good" or not.
Making this somewhat more concrete, for a specific departure time there are only going to be a limited number of buses available within any reasonable period of time that can cover a significant percentage of your total distance. Based on the straight automotive analysis reasonable period of time and significant percentage of distance become quantifiable using some mildly subjective metrics. Certainly, it becomes easier to score each possibility relative to the other in a more absolute sense.
Once you have a set of possible major segment(s) available as possible answers within the solution, you then need to hook them together with other possible walking and waiting paths....or if sufficiently far apart recursive selection of additional short bus runs. Intuitively, it doesn't seem that there is really going to be a prohibitive set of choices here because of the Constraints Paradox (see footnote below). Even if you can't brute force all possible combinations from there, what remains should be able to be optimized using a simulated annealing (SA) type algorithm. A Monte Carlo method would be another option.
The way we've broken the problem down to this point leaves us something that is quite analogous to how SA algorithms are applied to the automated layout and routing of ASIC chips, FPGA's and also the placement and routing of printed circuit boards of which there is quite a bit of published work on optimizing that type of problem form.
* Note: I usually refer to this as "The Constraints Paradox" - my term. While people can naturally think of more constrained problems as harder to solve, the constraints reduce choices and less choices means easier to brute force. When you can brute force, then even the optimal solution is available.
Basically, a node in your graph should not only represent a location, but also the earliest time you can get there. You can think of it as graph exploration in the (place,time) space. Additionally, if you have (place, t1) and (place,t2) where t1<t2, discard (place,t2).
Theoretically, this will get the earliest arrival time for all possible destinations from your starting node. In practice, you need some heuristic to prune roads that take you too far away from your destination.
You also need some heuristic to consider the promising routes before the less promising ones - if a route leads away from your destination, it is less likely (but not totally unlikely) to be good.
I think your problem is more complicated than you expect. A recent COST action is focused on solving this problem: http://www.cost.esf.org/domains_actions/tud/Actions/TU1004 : "Modelling Public Transport Passenger Flows in the Era of Intelligent Transport Systems".
From my point of view, regular shortest-path (SPS) algorithms are not suitable for this. You have a dynamic network state, where certain options to travel forward are discontinuous (a route is always "open" for a car, bike, or pedestrian, while a transit connection is available only at certain dwell times).
I think a new multi-criteria (time, reliability, cost, comfort, and more) approach is desired here. It needs to be computed in real time to 1) publish information to the end user within a short time and 2) be able to adjust the path in real time (based on real-time traffic conditions from ITS).
I'm about to think about this problem for the next several months (maybe even throughout a PhD thesis).
Regards
Rafal
I don't think there is any special data structure that would cater for these specific needs, but you can still use normal data structures like a linked list and then make route calculations per given factor. You are going to need some kind of input into your app of the variables that affect the result, and then make the calculations accordingly, i.e. depending on the input.
As for the waiting and so on, these are factors associated with a particular node, right? You can translate this factor into the weight of each of the branches attached to the node. For example, you can say that for every branch from Node X, if there is a wait of, say, m minutes at Node X, then scale up the weight of the branch by
[m / (some base value) * 100]% (just an example). In this way, you have factored in the other factors uniformly while at the same time maintaining a simple representation of the problem you want to solve.
If I was tackling this problem, I'd probably start with an annotated graph. Each node on the graph would represent every intersection in the city, whether or not the public transit system stops there - this helps account for the need to walk, etc. On intersections with transit service, you annotate these with stop labels - the labels allowing you to lookup the service schedule for the stop.
Then you have a choice to make. Do you need the best possible route, or merely a route? Are you displaying the routes in real time, or can solutions be calculated and cached?
If you need "real time" calculation, you'll probably want to go with a greedy algorithm of sorts, I think an A* algorithm would probably fit this problem domain fairly nicely.
If you need optimal solutions, you should look at dynamic programming solutions to the graph... optimal solutions will likely take much longer to calculate, but you only need to find them once, then they can be cached. Perhaps your A* algorithm could use pre-calculated optimal paths to inform its decisions about "similar" routes.
A horribly inefficient way that might work would be to store a copy of each intersection in the city for each minute of the day. A bus route from Elm St. and 2nd to Main St. and 25th would be represented as, say,
elm_st_and_2nd[12][30].edges :
    elm_st_and_1st[12][35]    # 5 minute walk to the next intersection
        time = 5 minutes
        transport = foot
    main_st_and_25th[1][15]   # 40 minute bus ride
        time = 40 minutes
        transport = bus
    elm_st_and_1st[12][36]    # stay in one place for one minute
        time = 1 minute
        transport = stand still
Run your favorite pathfinding algorithm on this graph and pray for a good virtual memory implementation.
You're answering the question yourself. Using A* or Dijkstra's algorithm, all you need to do is decide on a good cost per part of each route.
For the bus route, you're implying that you don't want the shortest, but the fastest route. The cost of each part of the route must therefore include the average travel speed of a bus in that part, and any waits at bus stops.
The algorithm for finding the most suitable route is then still the same as before. With A*, all the magic happens in the cost function...
You need to weight the legs differently. For example - on a rainy day I think someone might prefer to travel longer in a vehicle than walk in the rain. Additionally, someone who detests walking or is unable to walk might make a different/longer trip than someone who would not mind walking.
These edges are costs, but I think you can expand the notion/concept of costs and they can have different relative values.
The algorithm remains the same, you just increase the weight of each graph edge according to different scenarios (Bus schedules etc).
I put together a subway route finder as an exercise in graph path finding some time ago:
http://gfilter.net/code/pathfinderDemo.aspx
