Vehicle routing problem with pickup and delivery - algorithm

I have been digging through Google for 2 days looking for an explanation of an algorithm that solves the pickup-and-delivery variant of the vehicle routing problem, but I could not find one, so please either give me an example or point me to some resources, preferably with worked examples in them.
I came across something called the savings algorithm, but could not find any resource on how to use it to solve the pickup-and-delivery variant.

There are many variants of the vehicle routing problem, so I wouldn't be surprised if there's no tutorial material for this particular one.
If you're not comfortable with constraint programming, I'd suggest the Biased Random Key Genetic Algorithm (BRKGA) framework, which has tutorial material and several implementations. A BRKGA genome is a vector of numbers between zero and one, and the framework defines all operations on genomes except for the one that decodes a genome into a feasible solution, which can be composed with the objective function to compute fitness.
In general there is an art to choosing a good decoder. For this problem I would try this first. Define the vector length to be the number of stops (pickup or delivery). To determine the order of stops, initialize a priority queue with all of the pickups, where the genome determines the priorities. Until the queue is empty, pop the max priority stop and schedule it next. If it's a pickup, add the corresponding delivery to the queue.
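To make that decoder concrete, here is a minimal Python sketch of the priority-queue decode, assuming an instance with n_pairs pickup/delivery pairs; the function name, gene layout, and example are illustrative, not part of any BRKGA library:

import heapq

def decode(genome, n_pairs):
    # Decode a BRKGA genome (2*n_pairs numbers in [0,1)) into a stop
    # sequence in which every pickup precedes its matching delivery.
    # Pickup i uses gene i; delivery i uses gene n_pairs + i.
    heap = [(-genome[i], 'P', i) for i in range(n_pairs)]
    heapq.heapify(heap)                      # all pickups start available
    order = []
    while heap:
        _, kind, i = heapq.heappop(heap)     # pop the max-priority stop
        order.append((kind, i))
        if kind == 'P':                      # its delivery is now available
            heapq.heappush(heap, (-genome[n_pairs + i], 'D', i))
    return order

# decode([0.9, 0.1, 0.5, 0.8], 2) -> [('P',0), ('D',0), ('P',1), ('D',1)]

By construction every pickup precedes its own delivery, so the GA only ever searches over feasible stop orders; splitting the sequence into vehicle routes and costing them would complete the fitness function.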

Related

Algorithms for Minimum resource requirements

I have a question for which I have made some solutions, but I am not happy with the scalability. I'm looking for input on some different approaches/algorithms for solving it.
Problem:
Software runs on electronic control units (ECUs) and requires different resources to run a given feature. It may require a given amount of storage or RAM, or a digital or analog input or output, for instance. If we have multiple features and multiple controller options, we want to find the combination that minimizes the hardware requirements (cost). I'll reduce the resources to letters to make the examples easier to follow.
Example 1:
Feature1(A)
ECU1(A,B,C)
First, a trivial example. Let's assume that a feature requires 1 unit of resource A, and an ECU has 1 unit each of resources A, B, and C available. It is obvious that the feature will fit in the ECU, with resources B and C left over.
Example 2:
Feature2(A,B)
ECU2(A|B,B,C)
In this example, Feature 2 requires resources A and B, and the ECU has 3 resources, the first of which can be A or B. In this case, you can again see that the feature will fit in the ECU, but only if you check in a certain order. If you assign F(A) to E(A|B), then F(B) to E(B), it works; but if you assign F(B) to E(A|B), then there is no resource left on the ECU for F(A), so the feature doesn't appear to fit. This would lead one to the observation that we should prefer non-OR'd resources first to avoid such a conflict.
A real-world example of the above: an analog input that can also be used as a digital input, for instance.
Example 3
Feature3(A,B,C)
ECU3(A|B|C, B|C, A|C)
Now things are a little bit more complicated, but it is still quite obvious to a person that the feature will fit into the ECU.
My problems are simply more scaled-up versions of these examples (i.e. multiple features per ECU, with more ECUs to choose from).
Algorithms
GA
My first approach to this was to use a genetic algorithm. For a given set of features, i.e. F(A,B,C,D), and a list of currently available ECUs, find which single ECU or combination of ECUs fits the requirements.
ECUs were initially selected at random, and features were checked for fit and added to them. If a feature didn't fit, another ECU was added to the architecture. A population of these architectures was created and ranked based on the lowest cost of housing all the features. Architectures could then be mated in successive generations, with mutations and such, to improve fitness.
This approach worked quite well, but tended to get stuck in local minima (not the cheapest option) based on a golden example I had worked by hand.
Combinatorial / Permutations
My next approach was to work out all of the possible permutations (the ORs from above) for an ECU to see if the features fit.
If we go back to example 2 and expand the ORs, we get 2 permutations:
Feature2(A,B)
ECU2(A|B,B,C) = (A,B,C), (B,B,C)
From here it is trivial to check that the feature fits in the first permutation, but not the second.
...and for example 3 there are 12 permutations
Feature3(A,B,C)
ECU3(A|B|C, B|C, A|C) = (A,B,A), (B,B,A), (C,B,A), (A,C,A), (B,C,A), (C,C,A), (A,B,C), (B,B,C), (C,B,C), (A,C,C), (B,C,C), (C,C,C)
Again it is trivial to check that feature 3 fits in at least one of the permutations (3rd, 5th & 7th).
Based on this approach I was able to get a solution as well, but I have ECUs with so many OR'd inputs that I end up with millions of ECU permutations, which drastically increased the run time (to minutes). I can live with this, but first wanted to see if there was a better way to skin this cat, apart from parallelizing this approach.
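For reference, a small Python sketch of this brute-force expansion, assuming each ECU is written as a list of slot strings whose characters are the OR'd alternatives (the helper name is illustrative, and this is exactly the exponential check described above, not an improvement on it):

from itertools import product
from collections import Counter

def fits(feature, ecu):
    # Check whether a feature's required resources (a string of letters)
    # fit an ECU whose slots may be OR'd, e.g. ECU3 = ['ABC', 'BC', 'AC'].
    # Expands every slot assignment, so it is exponential in OR'd slots.
    need = Counter(feature)
    for perm in product(*ecu):              # one concrete letter per slot
        if not (need - Counter(perm)):      # every requirement covered?
            return True
    return False

print(fits('AB',  ['AB', 'B', 'C']))        # True: A from the A|B slot
print(fits('ABC', ['ABC', 'BC', 'AC']))     # True: e.g. C, B, A

The bipartite-matching answer below sidesteps this blow-up entirely.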
So that is the problem...
I have more ideas on how to approach it, but assume that there is a fancy name for such a problem, or an algorithm that has been around for 20+ years that I'm not familiar with, and I was hoping someone could point me in that direction, to either some papers or the names of relevant algorithms.
The obvious remark of simply summing the feature resource requirements and creating a new monolithic ECU is not an option. Lastly, no, this is not in any way associated with any assignment or problem given by a school or university.
Sorry for the long question, but hopefully I've sufficiently described what I am trying to do and this piques the interest of someone out there.
Sincerely, Paul.
Looks like plugging in an individual feature can be solved as bipartite matching.
You make a bipartite graph:
left side corresponds to feature requirements
right side corresponds to ECU subnodes
edges connect left- and right-side vertices with common letters
Let me explain by example 2:
Feature2(A,B)
ECU2(A|B,B,C)
The graph looks like this:
2 left vertices: L1 (A), L2 (B)
3 right vertices: R1 (A|B), R2 (B), R3 (C)
3 edges: L1-R1 (A-A|B), L2-R1 (B-A|B), L2-R2 (B-B)
Then you find a maximum matching in the bipartite graph (a small sketch follows at the end of this answer). There are a few well-known algorithms for it:
https://en.wikipedia.org/wiki/Matching_(graph_theory)
If the maximum matching covers every feature vertex, we can use it to plug in the feature.
If it does not cover every feature vertex, we are short of resources.
Unfortunately, this approach works like a greedy algorithm. It does not know about upcoming features and does not tweak the solution to fit more features later. Partial optimization for simple cases can work as you described in the question, but in general it's a dead end: only an algorithm that accounts for every feature in the whole feature set can produce an effective overall solution.
You can try to add several features to one ECU simultaneously. If you want to add a new feature to a given ECU, you can try matching all the already-assigned features plus the candidate feature at once. In this case a locally optimal solution will be found for the given feature set (if it's possible to plug them all into one ECU).
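Here is a minimal Python sketch of the matching step (Kuhn's augmenting-path algorithm); the adjacency encoding of example 2 is illustrative:

def max_matching(adj, n_right):
    # Kuhn's algorithm for maximum bipartite matching.
    # adj[l] lists the right-side slots that can serve requirement l.
    match_r = [-1] * n_right                 # right slot -> left requirement

    def try_assign(l, seen):
        for r in adj[l]:
            if r not in seen:
                seen.add(r)
                # free slot, or its current user can be moved elsewhere
                if match_r[r] == -1 or try_assign(match_r[r], seen):
                    match_r[r] = l
                    return True
        return False

    return sum(try_assign(l, set()) for l in range(len(adj)))

# Example 2: requirements L1(A), L2(B); slots R1(A|B), R2(B), R3(C)
print(max_matching([[0], [0, 1]], 3))        # 2 -> the feature fits

Hopcroft-Karp does the same job in O(E*sqrt(V)) if the instances grow large.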
I don't have enough reputation to comment, so here's what I wanted to propose for your problem:
Like GA, there are some other randomized approaches too, e.g. Bayesian approaches, decision trees, etc.
In my opinion a decision tree would suit your problem, as it shows, for some input dataset/attributes, a path to each class (in your case, ECUs) that helps to select the right class/ECU. Train your system with some sample data sets so that it can decide the right ECU for your actual data set/features.
Check Decision Trees - Machine Learning for more information. Hope it helps!

Identifying common route segments from GPS tracks

Say I've got a bunch of recorded GPS tracks. Some are from repeated trips over the same route, some are from completely unique routes, and some are distinct routes yet have some segments in common.
Given all this data, I want to:
identify the repeated trips over the same route
identify segments which are shared by multiple routes
I suppose 1 is really a special case of 2.
To give a concrete example: suppose you had daily GPS tracks of a large number of bicycle commuters. It would be interesting to extract from this data the most popular bicycle commuting corridors based on actual riding rather than from the cycling maps that are produced by local governments.
Are there published algorithms for doing this? How do they work? Pointers to papers and/or code greatly appreciated.
You can use a 3D histogram to find the most visited points on the map. Using that you can derive the most used paths.
Detail: keep a 2D matrix of counts, initialized to 0: X[i,j]=0. For each track, increment X[i,j] for every cell on the path. Once you have processed all the tracks, threshold this matrix at a minimum count (what is the minimum number of tracks for something to count as a repeated trip?).
Some practical details: assuming you have the set of points through which a path goes, you can find the cells on the path between two such points with Bresenham's line algorithm: http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm . You might want to draw a "thicker line" to account for the noisy nature of the data.
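A short Python sketch of this counting scheme, assuming the tracks are already snapped to integer grid coordinates (function and parameter names are illustrative):

import numpy as np

def bresenham(x0, y0, x1, y1):
    # Integer grid cells on the segment (x0,y0)-(x1,y1).
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def popular_cells(tracks, w, h, min_tracks):
    # tracks: list of (x, y) point lists on a w x h grid.
    counts = np.zeros((w, h), dtype=int)
    for track in tracks:
        cells = set()                        # count each track once per cell
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            cells.update(bresenham(x0, y0, x1, y1))
        for x, y in cells:
            counts[x, y] += 1
    return counts >= min_tracks              # mask of repeated-trip cells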
This seems a very general question that pertains to many GIS problems.
After some search, it seems you would need to apply some algorithms for computing the similarity between any two routes.
Wang et al. (2013) analyze a number of such measures. However, all these measures are implemented by dynamic programming with time complexity O(N1*N2), where N1 and N2 are the numbers of points in the two routes.
A more efficient solution is provided by Mariescu-Istodor et al. (2017) in which each route is transformed into a set of cells in a predefined grid system. The paper also defines the measure of inclusion as the amount of one route contained inside the other, which seems to relate to the second point in your question.

Clustering Algorithm for Paper Boys

I need help selecting or creating a clustering algorithm according to certain criteria.
Imagine you are managing newspaper delivery persons.
You have a set of street addresses, each of which is geocoded.
You want to cluster the addresses so that each cluster is assigned to a delivery person.
The number of delivery persons, or clusters, is not fixed. If needed, I can always hire more delivery persons, or lay them off.
Each cluster should have about the same number of addresses. However, a cluster may have fewer addresses if its addresses are more spread out. (Worded another way: the minimum number of clusters where each cluster contains a maximum number of addresses, and any addresses within a cluster must be separated by at most a maximum distance.)
For bonus points, when the data set is altered (an address added or removed) and the algorithm is re-run, it would be nice if the clusters remained as unchanged as possible (i.e. this rules out simple k-means clustering, which is random in nature). Otherwise the delivery persons will go crazy.
So... ideas?
UPDATE
The street network graph, as described in Arachnid's answer, is not available.
I've written an inefficient but simple algorithm in Java to see how close I could get to doing some basic clustering on a set of points, more or less as described in the question.
The algorithm works on a list of (x,y) coords ps that are specified as ints. It takes three other parameters as well:
radius (r): given a point, what is the radius for scanning for nearby points
max addresses (maxA): what are the maximum number of addresses (points) per cluster?
min addresses (minA): minimum addresses per cluster
Set limitA=maxA.
Main iteration:
Initialize empty list possibleSolutions.
Outer iteration: for every point p in ps.
Initialize empty list pclusters.
A worklist of points wps=copy(ps) is defined.
Workpoint wp=p.
Inner iteration: while wps is not empty.
Remove the point wp from wps. Determine all the points wpsInRadius in wps that are at a distance < r from wp. Sort wpsInRadius ascending by distance from wp. Keep the first min(limitA, sizeOf(wpsInRadius)) points in wpsInRadius. These points form a new cluster (list of points) pcluster. Add pcluster to pclusters. Remove the points in pcluster from wps. If wps is not empty, set wp=wps[0] and continue the inner iteration.
End inner iteration.
A list of clusters pclusters is obtained. Add this to possibleSolutions.
End outer iteration.
We have for each p in ps a list of clusters pclusters in possibleSolutions. Every pclusters is then weighted. If avgPC is the average number of points per cluster in possibleSolutions (global) and avgCSize is the average number of clusters per pclusters (global), then this is the function that uses both these variables to determine the weight:
private static WeightedPClusters weigh(List<Cluster> pclusters, double avgPC, double avgCSize)
{
    double weight = 0;
    for (Cluster cluster : pclusters)
    {
        int ps = cluster.getPoints().size();
        double psAvgPC = ps - avgPC;              // deviation from average cluster size
        weight += psAvgPC * psAvgPC / avgCSize;   // penalize uneven cluster sizes
        weight += cluster.getSurface() / ps;      // penalize sparse, spread-out clusters
    }
    return new WeightedPClusters(pclusters, weight);
}
The best solution is now the pclusters with the least weight. We repeat the main iteration as long as we can find a better solution (less weight) than the previous best one with limitA=max(minA,(int)avgPC). End main iteration.
Note that for the same input data this algorithm will always produce the same results. Lists are used to preserve order and there is no random involved.
To see how this algorithm behaves, this is an image of the result on a test pattern of 32 points. If maxA=minA=16, then we find 2 clusters of 16 addresses.
Next, if we decrease the minimum number of addresses per cluster by setting minA=12, we find 3 clusters of 12/12/8 points.
And to demonstrate that the algorithm is far from perfect, here is the output with maxA=7, yet we get 6 clusters, some of them small. So you still have to guess too much when determining the parameters. Note that r here is only 5.
Just out of curiosity, I tried the algorithm on a larger set of randomly chosen points. I added the images below.
Conclusion? This took me half a day, it is inefficient, the code looks ugly, and it is relatively slow. But it shows that it is possible to produce some result in a short period of time. Of course, this was just for fun; turning this into something that is actually useful is the hard part.
What you are describing is a (Multi)-Vehicle-Routing-Problem (VRP). There's quite a lot of academic literature on different variants of this problem, using a large variety of techniques (heuristics, off-the-shelf solvers etc.). Usually the authors try to find good or optimal solutions for a concrete instance, which then also implies a clustering of the sites (all sites on the route of one vehicle).
However, the clusters may be subject to major changes with only slightly different instances, which is what you want to avoid. Still, something in the VRP-Papers may inspire you...
If you decide to stick with the explicit clustering step, don't forget to include your distribution in all clusters, as it is part of each route.
For evaluating the clusters, using a graph representation of the street grid will probably yield more realistic results than connecting the dots on a white map (although both are TSP variants). If a graph model is not available, you can use the taxicab metric (|x_1 - x_2| + |y_1 - y_2|) as an approximation for the distances.
I think you want a hierarchical agglomerative technique rather than k-means. If you get your algorithm right, you can stop it when you have the right number of clusters. As someone else mentioned, you can seed subsequent clusterings with previous solutions, which may give you a significant performance improvement.
You may want to look closely at the distance function you use, especially if your problem has high dimension. Euclidean distance is the easiest to understand but may not be the best, look at alternatives such as Mahalanobis.
I'm presuming that your real problem has nothing to do with delivering newspapers...
Have you thought about using an economic/market-based solution? Divide the set up-front by an arbitrary (but deterministic, to avoid randomness effects) split into even subsets (as determined by the number of delivery persons).
Assign a cost function to each point by how much it adds to the graph, and give each extra point an economic value.
Iterate allowing each person in turn to auction their worst point, and give each person a maximum budget.
This probably matches fairly well how the delivery people would think in real life, as people will find swaps, or will say "my life would be so much easier if I didn't do this one or two." It is also pretty flexible (for example, it would allow one point miles away from any others to be given a premium fairly easily).
I would approach it differently: Considering the street network as a graph, with an edge for each side of each street, find a partitioning of the graph into n segments, each no more than a given length, such that each paperboy can ride a single continuous path from the start to the end of their route. This way, you avoid giving people routes that require them to ride the same segments repeatedly (eg, when asked to cover both sides of a street without covering all the surrounding streets).
This is a very quick and dirty method of discovering where your "clusters" lie. This was inspired by the game "Minesweeper."
Divide your entire delivery space up into a grid of squares. Note - it will take some tweaking of the size of the grid before this will work nicely. My intuition tells me that a square size roughly the size of a physical neighbourhood block will be a good starting point.
Loop through each square and store the number of delivery locations (houses) within each block. Use a second loop (or some clever method on the first pass) to store the number of delivery points for each neighbouring block.
Now you can operate on this grid in a similar way to photo manipulation software. You can detect the edges of clusters by finding blocks where some neighbouring blocks have no delivery points in them.
Finally you need a system that combines number of deliveries made as well as total distance travelled to create and assign routes. There may be some isolated clusters with just a few deliveries to be made, and one or two super clusters with many homes very close to each other, requiring multiple delivery people in the same cluster. Every home must be visited, so that is your first constraint.
Derive a maximum allowable distance to be travelled by any one delivery person on a single run. Next do the same for the number of deliveries made per person.
The first ever run of the routing algorithm would assign a single delivery person, send them to any random cluster with not all deliveries completed, let them deliver until they hit their delivery limit or they have delivered to all the homes in the cluster. If they have hit the delivery limit, end the route by sending them back to home base. If they could safely travel to the nearest cluster and then home without hitting their max travel distance, do so and repeat as above.
Once the route is finished for the current delivery person, check if there are homes that have not yet had a delivery. If so, assign another delivery person, and repeat the above algorithm.
This will generate initial routes. I would store all the info - the location and dimensions of each square, the number of homes within a square and all of its direct neighbours, the cluster to which each square belongs, the delivery people and their routes - I would store all of these in a database.
I'll leave the recalc procedure up to you - but having all the current routes, clusters, etc in a database will enable you to keep all historic routes, and also try various scenarios to see how to best to adapt to changes creating the least possible changes to existing routes.
This is a classic example of a problem that deserves an optimized solution rather than trying to solve for "The OPTIMUM". It's similar in some ways to the "Travelling Salesman Problem", but you also need to segment the locations during the optimization.
I've used three different optimization algorithms to good effect on problems like this:
Simulated Annealing
Great Deluge Algorithm
Genetic Algorithms
Using an optimization algorithm, I think you've described the following "goals":
1. The geographic area for each paper boy should be minimized.
2. The number of subscribers served by each should be approximately equal.
3. The distance travelled by each should be about equal.
4. (And one you didn't state, but might matter) The route should end where it began.
Hope this gets you started!
* Edit *
If you don't care about the routes themselves, that eliminates goals 3 and 4 above, and perhaps allows the problem to be more tailored to your bonus requirements.
If you take demographic information into account (such as population density, subscription adoption rate and subscription cancellation rate) you could probably use the optimization techniques above to eliminate the need to rerun the algorithm at all as subscribers adopted or dropped your service. Once the clusters were optimized, they would stay in balance because the rates of each for an individual cluster matched the rates for the other clusters.
The only time you'd have to rerun the algorithm was when an external factor (such as a recession/depression) caused changes in the behavior of a demographic group.
Rather than a clustering model, I think you really want some variant of the Set Covering location model, with an additional constraint to cover the number of addresses covered by each facility. I can't really find a good explanation of it online. You can take a look at this page, but they're solving it using areal units and you probably want to solve it in either euclidean or network space. If you're willing to dig up something in dead tree format, check out chapter 4 of Network and Discrete Location by Daskin.
A good survey of simple clustering algorithms; there is more, though:
http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/index.html
Perhaps a minimum spanning tree of the customers, broken into sets based on locality to the paper boy. Prim's or Kruskal's to get the MST, with the distance between houses as the weight.
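A rough Python sketch of that idea: build the MST with Prim's algorithm, then cut the k-1 longest edges so the remaining components are the clusters (this amounts to single-linkage clustering; names are illustrative):

import math

def mst_clusters(points, k):
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree, edges = {0}, []
    best = {i: (dist(0, i), 0) for i in range(1, n)}   # cheapest link into tree
    while len(in_tree) < n:                            # Prim's algorithm
        j = min(best, key=lambda i: best[i][0])
        d, pa = best.pop(j)
        edges.append((d, pa, j))
        in_tree.add(j)
        for i in best:
            if dist(j, i) < best[i][0]:
                best[i] = (dist(j, i), j)
    edges.sort()                                       # cut the k-1 longest
    parent = list(range(n))                            # union-find the rest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, a, b in edges[:n - k]:
        parent[find(a)] = find(b)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

The procedure is deterministic, unlike randomly seeded k-means, which helps with the bonus requirement.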
I know of a pretty novel approach to this problem that I have seen applied to bioinformatics, though it is valid for any sort of clustering problem. It's certainly not the simplest solution, but one that I think is very interesting. The basic premise is that clustering involves multiple objectives. For one, you want to minimise the number of clusters, the trivial solution being a single cluster with all the data. The second standard objective is to minimise the amount of variance within a cluster, the trivial solution being many clusters, each with only a single data point. The interesting solutions come about when you try to include both of these objectives and optimise the trade-off.
At the core of the proposed approach is something called a memetic algorithm, which is a little like a genetic algorithm (which steve mentioned); however, it not only explores the solution space well but also has the ability to focus in on interesting regions, i.e. solutions. At the very least I recommend reading some of the papers on this subject, as memetic algorithms are an unusual approach, though a word of warning: it may lead you to read The Selfish Gene, and I still haven't decided whether that was a good thing... If algorithms don't interest you then maybe you can just try and express your problem as the format requires and use the source code provided. Related papers and code can be found here: Multi Objective Clustering
This is not directly related to the problem, but something I've heard and which should be considered if this is truly a route-planning problem you have. This would affect the ordering of the addresses within the set assigned to each driver.
UPS has software which generates optimum routes for their delivery people to follow. The software tries to maximize the number of right turns that are taken during the route. This saves them a lot of time on deliveries.
For people that don't live in the USA the reason for doing this may not be immediately obvious. In the US people drive on the right side of the road, so when making a right turn you don't have to wait for oncoming traffic if the light is green. Also, in the US, when turning right at a red light you (usually) don't have to wait for green before you can go. If you're always turning right then you never have to wait for lights.
There's an article about it here:
http://abcnews.go.com/wnt/story?id=3005890
You can have k-means or expectation maximization remain as unchanged as possible by using the previous clusters as a clustering feature. Getting each cluster to have the same number of items seems a bit trickier. I can think of how to do it as a post-clustering step: do k-means and then shuffle some points until things balance, but that doesn't seem very efficient.
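One cheap way to get that stability, sketched with scikit-learn: warm-start each re-run from the previous run's centroids instead of random seeds (the wrapper is a hypothetical illustration):

from sklearn.cluster import KMeans

def stable_kmeans(points, k, prev_centroids=None):
    # Hypothetical stability trick: seeding with last run's centroids
    # keeps clusters nearly fixed when a few addresses change.
    if prev_centroids is None:
        km = KMeans(n_clusters=k, n_init=10, random_state=0)
    else:
        km = KMeans(n_clusters=k, init=prev_centroids, n_init=1)
    km.fit(points)
    return km.labels_, km.cluster_centers_

Balancing the cluster sizes afterwards would still need the shuffling step described above.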
A trivial answer which does not get any bonus points:
One delivery person for each address.
As has been mentioned, a vehicle routing problem is probably better suited... Although it strictly isn't designed with clustering in mind, it will optimize assignments based on the nearest addresses. Therefore your clusters will actually be the recommended routes.
If you provide a maximum number of deliverers and try to reach the optimal solution, this should tell you the minimum number that you require. This deals with point 2.
The same number of addresses can be obtained by providing a limit on the number of addresses to be visited, basically assigning a stock value (now it's a capacitated vehicle routing problem).
Adding time windows or hours that the delivery persons work helps reduce the load if addresses are more spread out (now a capacitated vehicle routing problem with time windows).
If you use a nearest-neighbour algorithm then you can get identical results each time, and removing a single address shouldn't have too much impact on your final result, so that should deal with the last point.
I'm actually working on a C# class library to achieve something like this, and think it's probably the best route to go down, although not necessarily easy to implement.
I acknowledge that this will not necessarily provide clusters of roughly equal size:
One of the best current techniques in data clustering is Evidence Accumulation. (Fred and Jain, 2005)
What you do is:
Given a data set with n patterns.
Use an algorithm like k-means over a range of k, or use a set of different algorithms; the goal is to produce an ensemble of partitions.
Create a co-association matrix C of size n x n.
For each partition p in the ensemble:
3.1 Update the co-association matrix: for each pattern pair (i, j) that belongs to the same cluster in p, set C(i, j) = C(i, j) + 1/N.
Use a clustering algorithm such as Single Link and apply the matrix C as the proximity measure. Single Link gives a dendrogram as a result, in which we choose the clustering with the longest lifetime.
I'll provide descriptions of SL and k-means if you're interested.
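A compact Python sketch of that recipe, using scikit-learn for the k-means ensemble and SciPy for the single-link step; for simplicity it cuts the dendrogram at a fixed cluster count rather than picking the longest-lifetime cut from the paper:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

def evidence_accumulation(X, ks, final_k):
    n = len(X)
    C = np.zeros((n, n))
    for k in ks:                                  # build the ensemble
        labels = KMeans(n_clusters=k, n_init=5).fit_predict(X)
        C += labels[:, None] == labels[None, :]   # co-association votes
    C /= len(ks)                                  # fraction of co-assignments
    iu = np.triu_indices(n, 1)                    # condensed distance form
    Z = linkage(1.0 - C[iu], method='single')     # single link on 1 - C
    return fcluster(Z, t=final_k, criterion='maxclust')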
I would use a basic algorithm to create a first set of paperboy routes according to where they live, and current locations of subscribers, then:
when paperboys are:
Added: they take locations from one or more paperboys working in the same general area as where the new guy lives.
Removed: his locations are given to the other paperboys, using the locations closest to their routes.
when locations are:
Added: same thing, the location is added to the closest route.
Removed: just removed from that boy's route.
Once a quarter, you could re-calculate the whole thing and change all the routes.

Teacher time schedule algorithm

This is a problem I've had on my mind for a long time. Being the son of a teacher and a programmer, it occurred to me early on... but I still haven't found a solution for it.
So this is the problem. One needs to create a time schedule for a school, using some constraints. These are generally divided in two categories:
Sanity Checks
A teacher cannot teach two classes at the same time
A student cannot follow two lessons at the same time
Some teachers must have at least one day off during the week
All the days of the week should be covered by the time table
Subject X must have exactly so-and-so hours each week
...
Preferences
Each teacher's schedule should be as compact as possible (i.e. the teacher should work all hours for the day in a row with no pauses if possible)
Teachers that have days off should be able to express a preference on which day
Teachers that work part-time should be able to express a preference whether to work in the beginning or the end of the school day.
...
Now, after a few years of not finding a solution (and learning a thing or two in the meanwhile...), I realized that this smells like an NP-hard problem.
Is it proven as NP-hard?
Does anyone have an idea on how to crack this thing?
Looking at this question made me think about this problem, and whether genetic algorithms would be usable in this case. However it would be pretty hard to mutate possibilities while maintaining the sanity check rules. Also it's not clear to me how to distinguish incompatible requirements.
A small addendum to better specify the problem. This is applied to Italian school style classrooms where all students are associated in different classes (for example: year 1 section A) and the teachers move between classes. All students of the same class have the same schedule, and have no choice over which lessons to attend.
I am one of the developers who work on the scheduler part of a student information system.
During our original approach to the scheduling problem, we researched genetic algorithms to solve constraint satisfaction problems, and even though we were successful initially, we realized that there was a less complicated solution to the problem (after attending a school scheduling workshop).
Our current implementation works great, and uses brute force with smart heuristics to get a valid schedule in a short amount of time. The master schedule (assignment of the classes to the teachers) is built first, taking into consideration all the constraints that each teacher has, while minimizing the possibility of conflicts for the students (based on their course requests). The students are then scheduled into the classes using the same method.
Doing this allows you to have the machine build a master schedule first, and then have a human tweak it if needed.
The scheduler's current implementation is written in Perl, but other options we visited early on were Prolog and CLIPS (an expert system).
I think you might be missing some constraints.
One would prefer where possible to have teachers scheduled to classes for which they are certified.
One would suspect that the classes that are requested, and the expected headcount in each would be significant.
I think the place to start would be to list all of your constraints, figure out a data structure to represent them.
Then create some sort of engine that builds a trial solution, then evaluates its fitness according to the constraints.
You could then throw the fun genetic algorithms part at it, and see if you can get the fitness to increase over time as the genes mix.
Start with a small set of constraints, and increase them as you see success (if you see success)
There might be a way to take the constraints and shoehorn them together with something like a linear programming algorithm.
I agree. It sounds like a fun challenge
This is a mapping problem:
you need to map every hour in the week and every teacher to an activity (teaching a certain class, or a free hour).
Split the problem:
Create the list of teachers, classes and preferences then let the user populate some of the preferences on a map to have as a starting point.
Randomly take one element from the list and put it at a random free position on the map if it doesn't violate any sanity checks; repeat until the list is empty. If at some iteration you can't place an element on the map without violating a sanity check, shift two positions on the map and try again.
When the map is filled, try shifting positions on the map to optimize the result.
In steps 2 and 3, show each iteration to the user: items left in the list, positions on the map, and the next computed move; and let the user intervene.
I did not try this, but this would be my initial approach.
I've tackled similar planning/scheduling problems in the past and the AI technique that is often best suited for this class of problem is "Constraint Based Reasoning".
It's basically a brute force method like Laurenty suggested, but the approach involves applying constraints in an efficient order to cause the infeasible solutions to fail fast - to minimise the computation required.
Googling "Constraint Based Reasoning" brings up a lot of resources on the technique and its application to scheduling problems.
Answering my own question:
The FET project mentioned by gnud uses this algorithm:
Some words about the algorithm: FET uses a heuristical algorithm, placing the activities in turn, starting with the most difficult ones. If it cannot find a solution it points you to the potential impossible activities, so you can correct errors. The algorithm swaps activities recursively if that is possible in order to make space for a new activity, or, in extreme cases, backtracks and switches order of evaluation. The important code is in src/engine/generate.cpp. Please e-mail me for details or join the mailing list. The algorithm mimics the operation of a human timetabler, I think.
Following up the "Constraint Based Reasoning" lead by Stringent Software on Wikipedia led me to these pages, which have an interesting paragraph:
Solving a constraint satisfaction problem on a finite domain is an NP-complete problem in general. Research has shown a number of polynomial-time subcases, mostly obtained by restricting either the allowed domains or constraints or the way constraints can be placed over the variables. Research has also established relationship of the constraint satisfaction problem with problems in other areas such as finite model theory and databases.
This reminds me of this blog post about scheduling a conference, with a video explanation here.
How I would do it:
Have the population include two things:
Who teaches what class (I expect the teachers to teach one subject).
What a class takes on a specific time slot.
This way we can't have conflicts (a teacher in 2 places, or a class having two subjects at the same time).
The fitness function would include:
How many time slots each teacher gives per week.
How many time slots a teacher has on the same day (They can't have a full day of teaching, this too must be balanced).
How many time slots of the same subject a class has on the same day (They can't have a full day of math!).
Maybe take the standard deviation for all of them since they should be balanced.
Looking at this question made me think about this problem, and whether genetic algorithms would be usable in this case. However it would be pretty hard to mutate possibilities while maintaining the sanity check rules. Also it's not clear to me how to distinguish incompatible requirements.
Genetic Algorithms are very well suited to problems such as this. Once you come up with a decent representation of the chromosome (in this case, probably a vector representing all of the available class slots) you're most of the way there.
Don't worry about maintaining sanity checks during the mutation phase. Mutation is random. Sanity and preference checks both belong in the selection phase. A failed sanity check would drastically lower the fitness of an individual, while a failed preference would only mildly lower the fitness.
Incompatible requirements are a different problem altogether. If they're completely incompatible, you'll get a population that doesn't converge on anything useful.
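A small Python sketch of that selection-phase weighting: sanity violations dominate the score while preference violations only nudge it (the check functions are hypothetical placeholders):

HARD_PENALTY = 1000   # sanity checks: drastic fitness loss
SOFT_PENALTY = 1      # preferences: mild fitness loss

def fitness(timetable, hard_checks, soft_checks):
    # each check returns how many violations it finds in the timetable
    hard = sum(check(timetable) for check in hard_checks)
    soft = sum(check(timetable) for check in soft_checks)
    return -(HARD_PENALTY * hard + SOFT_PENALTY * soft)   # higher is fitter

# e.g. hard_checks = [teacher_clashes, class_clashes]
#      soft_checks = [gaps_in_teacher_day, day_off_mismatches]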
Good luck. Being the son of a father with this sort of problem is what took me to the research group that I ended up in ...
When I was a kid my father scheduled match officials in a local sports league, this had a similarly long list of constraints and I tried to write something to help. When I got to University I even used it as my final year project eventually settling on a Monte Carlo implementation (using a Simulated Annealing model).
The basic idea is that if it's not NP, it's pretty close, so rather than assuming there is a solution, I would set out to find the best within a given timeframe. I would weight all the constraints with costs for breaking them: sanity checks would have huge costs, the preferences would have lower costs (but increasing for more breaks, so breaking it once would be less than half the cost of breaking it twice).
The basic idea is that I started with a 'random' solution and costed it; then made changes by swapping a small number of assignments, re-valued it, and then probabilistically accepted or declined the change.
After thousands of iterations you inch closer to an acceptable solution.
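The accept/decline rule described here is the standard Metropolis criterion; a minimal Python sketch, where cost and swap stand in for the problem-specific costing and assignment-swapping just described:

import math, random

def anneal(schedule, cost, swap, t0=100.0, cooling=0.9995, steps=200000):
    best = current = schedule
    t = t0
    for _ in range(steps):
        candidate = swap(current)             # perturb a few assignments
        delta = cost(candidate) - cost(current)
        # always accept improvements; sometimes accept worsenings
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling                          # slowly reduce the temperature
    return best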
Believe me, though, that this class of problems has research groups churning out PhDs so you're in very good company.
You might also find some interest in the Linear Programming arena, e.g. simplex and so on.
Yes, I think this is NP-complete - or at least finding the optimal solution is NP-complete.
I worked on a similar problem in college when I told a friend's father (who was a teacher) that I could solve his scheduling problems for him if he did not find a suitable program for it (this was back in 1990 or so).
I had no idea what I got myself into. Luckily for us all I had to do was find one solution that fit the constraints. But in my testing I was always worried about determining IF there was a solution at all. He never had too many constraints and the program used different heuristics and back tracking. It was a lot of fun.
I think Bill Gates also worked on a system like this in high school or college for his high school. Not sure though.
Good luck. All my notes are gone and I never got around to implementing a solution that I could market. It was a specialty project that I re-coded as I learned new languages (Basic, Scheme, C, VB, C++)
Have fun with it
I see that this problem can be solved by a Prolog program connected to a database; the program can make the schedule given a set of constraints.
Read about "constraint satisfaction problem Prolog".

Strategy to find your best route via Public Transportation only?

Finding routes for a car is pretty easy: you store a weighted graph of all the roads and you could use Dijkstra's algorithm [1]. A bus route is less obvious. With a bus you have to represent things like "wait 10 minutes for the next bus" or "walk one block to another bus stop" and feed those into your pathfinding algorithm.
It's not even always simple for cars. In some cities, some roads are one-way-only into the city in the morning, and one-way-only out of the city in the evening. Some advanced GPSs know how to avoid busy routes during rush hour.
How would you efficiently represent this kind of time-dependent graph and find a route? There is no need for a provably optimal solution; if the traveler wanted to be on time, they would buy a car. ;-)
[1] A wonderful algorithm to mention in an example because everyone's heard of it, though A* is a likelier choice for this application.
I have been involved in the development of a journey planner system for Stockholm Public Transport in Sweden. It was based on Dijkstra's algorithm, but with termination before every node in the system was visited. Today, when there are reliable coordinates available for each stop, I guess the A* algorithm would be the choice.
Data about upcoming traffic was extracted from several databases regularly and compiled into large tables loaded into the memory of our search server cluster.
One key to a successful algorithm was using a path cost function based on travel and waiting time multiplied by different weights. Known in Swedish as "kresu" time, these weighted times reflect the fact that, for example, one minute's waiting time is typically equivalent in "inconvenience" to two minutes of travelling time.
KRESU weight table:
x1 - travel time
x2 - walking between stops
x2 - waiting at a stop during the journey (stops under a roof, with shops, etc. can get a slightly lower weight, and crowded stations a higher one, to tune the algorithm)
The weight for the waiting time at the first stop is a function of traffic intensity and can be between 0.5 and 3.
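As a sketch, a leg cost under this scheme might look like the following; the function and argument names are illustrative, and the weights are the ones quoted above:

def kresu_cost(travel_min, walk_min, wait_min,
               first_wait_min=0.0, first_wait_weight=1.0):
    # first_wait_weight is tuned per departure stop (0.5 to 3,
    # depending on traffic intensity, as described above)
    return (1.0 * travel_min          # riding
            + 2.0 * walk_min          # walking between stops
            + 2.0 * wait_min          # waiting during the journey
            + first_wait_weight * first_wait_min)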
Data structure
Area
A named area where your journey can start or end. A bus stop could be an area with two stops. A larger station with several platforms could be one area with one stop for each platform.
Data: Name, Stops in area
Stops
An array with all bus stops, train and underground stations. Note that you usually need two stops, one for each direction, because it takes some time to cross the street or walk to the other platform.
Data: Name, Links, Nodes
Links
A list with other stops you can reach by walking from this stop.
Data: Other Stop, Time to walk to other Stop
Lines/Tours
You have a number on the bus and a destination. The bus starts at one stop and passes several stops on its way to the destination.
Data: Number, Name, Destination
Nodes
Usually you have a timetable with at least the times when a tour should be at its first and last stop. Each time a bus/train passes a stop, you add a new node to the array. This table can have millions of values per day.
Data: Line/Tour, Stop, Arrival Time, Departure Time, Error margin, Next Node in Tour
Search
An array the same size as the Nodes array, used to store how you got there and the path cost.
Data: back-link to the previous node (not set if the node is unvisited), path cost (infinite for unvisited)
What you're talking about is more complicated than something like the mathematical models that can be described with simple data structures like graphs and with "simple" algorithms like Dijkstra's. What you are asking about is a more complex problem, like those encountered in the world of automated logistics management.
One way to think about it is that you are asking a multi-dimensional problem, you need to be able to calculate:
Distance optimization
Time optimization
Route optimization
"Time horizon" optimization (if it's 5:25 and the bus only shows up at 7:00, pick another route.)
Given all of these circumstances you can attempt to do deterministic modeling using complex multi-layered data structures. For example, you could still use a weighted digraph to represent the existing potential routes, wherein each node also contains a finite state automaton which adds a weight bias to a route depending on time values (so by crossing a node at 5:25 you get a different value than if your simulation crossed it at 7:00).
However, I think that at this point you are going to find yourself with a simulation that is more and more complex, which most likely does not provide a "great" approximation of optimal routes when the advice is transferred into the real world. It turns out that software and mathematical modeling and simulation are at best weak tools when encountering real-world chaotic behaviors and dynamism.
My suggestion would go to use an alternate strategy. I would attempt to use a genetic algorithm in which the DNA for an individual calculated a potential route, I would then create a fitness function which encoded costs and weights in a more "easy to maintain" lookup table fashion. Then I would let the Genetic Algorithm attempt to converge on a near optimal solution for a public transport route finder. On modern computers a GA such as this is probably going to perform reasonably well, and it should be at least relatively robust to real world dynamism.
I think that most systems that do this sort of thing take the "easy way out" and simply do something like an A* search algorithm, or something similar to a greedy costed weighted digraph walk. The thing to remember is that the users of the public transport don't themselves know what the optimal route would be, so a 90% optimal solution is still going to be a great solution for the average case.
Some data points to be aware of from the public transportation arena:
Each transfer incurs a 10-minute penalty (unless it is a timed transfer) in the rider's mind. That is to say, mentally a trip involving a single bus that takes 40 minutes is roughly equivalent to a 30-minute trip that requires a transfer.
Maximum distance that most people are willing to walk to a bus stop is 1/4 mile. Train station / Light rail about 1/2 mile.
Distance is irrelevant to the public transportation rider. (Only time is important)
Frequency matters (if a connection is missed how long until the next bus). Riders will prefer more frequent service options if the alternative is being stranded for an hour for the next express.
Rail has a higher preference than bus ( more confidence that the train will come and be going in the right direction)
Having to pay a new fare is a big hit. (add about a 15-20min penalty)
Total trip time matters as well (with above penalties)
How seamless is the connection? Does the rider have to exit a train station and cross a busy street? Or is it just stepping off a train and walking 4 steps to a bus?
Crossing busy streets -- another big penalty on transfers -- riders may miss the connection because they can't get across the street fast enough.
If the cost of each leg of the trip is measured in time, then the only complication is factoring in the schedule - which just changes the cost at each node to a function of the current time t, where t is the total trip time so far (assuming schedules are normalized to start at t=0).
So instead of Node A having a fixed cost of 10 minutes, it has a cost f(t) defined as:
def f(t):
    t1 = next_scheduled_stop(t)   # the next departure at or after time t
    base_time = 10                # for example, a 10-minute trip
    return (t1 - t) + base_time   # waiting time plus travel time
wait-time is thus included dynamically in the cost of each leg, and walks between bus stops are just arcs with a constant time cost
with this representation you should be able to apply A* or Dijkstra's algorithm directly
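Putting that together, a minimal Python sketch of Dijkstra's algorithm with time-dependent edge costs; the graph encoding, depart_fn, and minute-based units are illustrative assumptions:

import heapq

def earliest_arrival(graph, start, t_start):
    # graph[u] is a list of (v, depart_fn, travel_min), where depart_fn(t)
    # gives the next departure at or after t; walking legs use
    # depart_fn = lambda t: t (no waiting).
    best = {start: t_start}
    heap = [(t_start, start)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > best.get(u, float('inf')):
            continue                          # stale queue entry
        for v, depart_fn, travel in graph.get(u, ()):
            arrive = depart_fn(t) + travel    # wait + ride, as in f(t) above
            if arrive < best.get(v, float('inf')):
                best[v] = arrive
                heapq.heappush(heap, (arrive, v))
    return best                               # earliest arrival per stop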
Finding routes for a car is pretty easy: you store a weighted graph of all the roads and you could use Dijkstra's algorithm. A bus route is less obvious.
It may be less obvious, but the reality is that it's merely another dimension to the car problem, with the addition of infinite cost calculation.
For instance, you mark the buses whose time is past as having infinite cost - they then aren't included in the calculation.
You then get to decide how to weight each aspect.
Transit Time might get weighted by 1
Waiting time might get weighted by 1
Transfers might get weighted by 0.5 (since I'd rather get there sooner and have an extra transfer)
Then you calculate all the routes in the graph using any usual cost algorithm with the addition of infinite cost:
Each time you move along an edge you have to keep track of the 'current' time (add up the transit time), and if you arrive at a vertex you have to assign infinite cost to any buses that depart prior to your current time. The current time is incremented by the waiting time at that vertex until the next bus leaves; then you're free to move along another edge and find the new cost.
In other words, there's a new constraint, "current time", which is the time of the first bus starting, summed with all the transit and waiting times of the buses and stops traveled.
It complicates the algorithm only a little bit, but the algorithm is still the same. You can see that most algorithms can be applied to this, some might require multiple passes, and a few won't work because you can't add the time-->infinite cost calculation inline. But most should work just fine.
You can simplify it further by simply assuming that the buses are on a schedule and there's ALWAYS another bus, but that it increases the waiting time. Run the algorithm adding up only the transit costs, then go through the tree again and add waiting costs depending on when the next bus is coming. It will sometimes result in less efficient routes, but the total graph of even a large city is actually pretty small, so it's not really an issue. In most cases one or two routes will be the obvious winners.
Google has this, but also includes additional edges for walking from one bus stop to another so you might find a slightly more optimal route if you're willing to walk in cities with large bus systems.
-Adam
The way I think of this problem is that ultimately you are trying to optimize your average speed from your starting point to your ending point. In particular, you don't care at all about total distance traveled if going [well] out of your way saves time. So, a basic part of the solution space is going to need to be identifying efficient routes available that cover non-trivial parts of the total distance at relatively high speeds between start and finish.
To your original point, the typical automotive route algorithms used by GPS navigation units give a good bound for a target optimal total time and for route evaluations. In other words, your bus-based trip would be doing really well to approach a car-based solution. Clearly, the bus-route-based system is going to have many more constraints than the car-based solutions, but having the car solution as a reference (time and distance) gives the bus algorithm a framework to optimize against*. So, put loosely, you want to morph the car solution towards the set of possible bus solutions in an iterative fashion, or perhaps more likely take possible bus solutions and score them against your car-based solution to know whether you are doing "good" or not.
Making this somewhat more concrete: for a specific departure time there are only going to be a limited number of buses available within any reasonable period of time that can cover a significant percentage of your total distance. Based on the straight automotive analysis, "reasonable period of time" and "significant percentage of distance" become quantifiable using some mildly subjective metrics. Certainly, it becomes easier to score each possibility relative to the others in a more absolute sense.
Once you have a set of possible major segments available as possible answers within the solution, you then need to hook them together with other possible walking and waiting paths....or, if they are sufficiently far apart, recursive selection of additional short bus runs. Intuitively, it doesn't seem that there is really going to be a prohibitive set of choices here, because of the Constraints Paradox (see footnote below). Even if you can't brute force all possible combinations from there, what remains should be able to be optimized using a simulated annealing (SA) type algorithm. A Monte Carlo method would be another option.
The way we've broken the problem down to this point leaves us with something that is quite analogous to how SA algorithms are applied to the automated layout and routing of ASIC chips and FPGAs, and also to the placement and routing of printed circuit boards, on which there is quite a bit of published work.
* Note: I usually refer to this as "The Constraints Paradox" - my term. While people can naturally think of more constrained problems as harder to solve, the constraints reduce choices and less choices means easier to brute force. When you can brute force, then even the optimal solution is available.
Basically, a node in your graph should not only represent a location, but also the earliest time you can get there. You can think of it as graph exploration in the (place,time) space. Additionally, if you have (place, t1) and (place,t2) where t1<t2, discard (place,t2).
Theoretically, this will get the earliest arrival time for all possible destinations from your starting node. In practice, you need some heuristic to prune roads that take you too far away from your destination.
You also need some heuristic to consider the promising routes before the less promising ones - if a route leads away from your destination, it is less likely (but not totally unlikely) to be good.
I think your problem is more complicated than you expect. A recent COST action is focused on solving this problem: http://www.cost.esf.org/domains_actions/tud/Actions/TU1004 : "Modelling Public Transport Passenger Flows in the Era of Intelligent Transport Systems".
From my point of view, regular SPS algorithms are not suitable for this. You have a dynamic network state, where certain options to travel forward are discontinuous (a route is always "open" for a car, bike, or pedestrian, while a transit connection is available only at certain dwell times).
I think a new polycriterial approach (time, reliability, cost, comfort, and more criteria) is desired here. It needs to be computed in real time to 1) publish information to the end user within a short time and 2) be able to adjust the path in real time (based on real-time traffic conditions, from ITS).
I'm about to think about this problem for the next several months (maybe even throughout a PhD thesis).
Regards
Rafal
I don't think there is any special data structure that would cater for these specific needs, but you can still use normal data structures like a linked list and then make route calculations per given factor. You are going to need some kind of input into your app of the variables that affect the result, and then make the calculations accordingly, i.e. depending on the input.
As for the waiting and such, these are factors associated with a particular node, right? You can fold such a factor into each of the branches attached to the node. For example, you can say: for every branch from node X, if there is a wait of, say, m minutes at node X, then scale up the weight of the branch by
[m / some base value * 100]% (just an example). In this way you have factored in the other factors uniformly, while at the same time maintaining a simple representation of the problem you want to solve.
If I were tackling this problem, I'd probably start with an annotated graph. Each node in the graph would represent every intersection in the city, whether or not the public transit system stops there - this helps account for the need to walk, etc. At intersections with transit service, you annotate these with stop labels - the labels allowing you to look up the service schedule for the stop.
Then you have a choice to make. Do you need the best possible route, or merely a route? Are you displaying the routes in real time, or can solutions be calculated and cached?
If you need "real time" calculation, you'll probably want to go with a greedy algorithm of sorts, I think an A* algorithm would probably fit this problem domain fairly nicely.
If you need optimal solutions, you should look at dynamic programming solutions to the graph... optimal solutions will likely take much longer to calculate, but you only need to find them once, then they can be cached. Perhaps your A* algorithm could use pre-calculated optimal paths to inform its decisions about "similar" routes.
A horribly inefficient way that might work would be to store a copy of each intersection in the city for each minute of the day. A bus route from Elm St. and 2nd to Main St. and 25th would be represented as, say,
elm_st_and_2nd[12][30].edges:
    elm_st_and_1st[12][35]    # 5 minute walk to the next intersection
        time = 5 minutes
        transport = foot
    main_st_and_25th[1][15]   # 40 minute bus ride
        time = 40 minutes
        transport = bus
    elm_st_and_1st[12][36]    # stay in one place for one minute
        time = 1 minute
        transport = stand still
Run your favorite pathfinding algorithm on this graph and pray for a good virtual memory implementation.
You're answering the question yourself. Using A* or Dijkstra's algorithm, all you need to do is decide on a good cost per part of each route.
For the bus route, you're implying that you don't want the shortest, but the fastest route. The cost of each part of the route must therefore include the average travel speed of a bus in that part, and any waits at bus stops.
The algorithm for finding the most suitable route is then still the same as before. With A*, all the magic happens in the cost function...
You need to weight the legs differently. For example - on a rainy day I think someone might prefer to travel longer in a vehicle than walk in the rain. Additionally, someone who detests walking or is unable to walk might make a different/longer trip than someone who would not mind walking.
These edges are costs, but I think you can expand the notion/concept of costs and they can have different relative values.
The algorithm remains the same, you just increase the weight of each graph edge according to different scenarios (Bus schedules etc).
I put together a subway route finder as an exercise in graph path finding some time ago:
http://gfilter.net/code/pathfinderDemo.aspx
