Algorithm to identify "fuzzily-connected" subgraphs

I have a problem which looks like the connected-subgraphs problem from a mile high, but is quite distinct in that it does not fall under the strict definitions.
I face a graph with a few million nodes and links (manual analysis is not possible). Among those millions of nodes, there are known to be 2 or 3 "sets".
Each of the "sets" comprises hundreds of thousands of nodes and tens of thousands of sub-graphs, and is not strongly connected. The sets should theoretically not be linked to each other... but there are (guesstimating) a dozen erroneous links that end up connecting them.
The problem is to find those sets and the erroneous links, or at least get a human-manageable list of erroneous links candidates that can be verified manually.
My current "best idea" is to randomly pick two nodes, find the shortest path between them, then mark the links on that shortest path. Rinse & repeat millions of times, and the erroneous links eventually end up as the most marked ones, as they are "chokepoints" between the sets.
However, this is quite slow, and when one set is much larger than the others and has internal chokepoints, it ends up dominating the "most marked" list, making it meaningless.
Are there better algorithms/approaches for that?
Edit: a refinement of the path marking is to mark proportionally to the length of the path, which helps with the "internal chokepoints of a large set" issue but does not entirely eliminate it, as some sets can have distant "outliers" while other sets have lots of tightly connected nodes (short internal distances).
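A minimal sketch of this sampling heuristic, assuming a networkx graph that fits in memory (function and parameter names are illustrative, and the length-proportional marking from the edit is included):

import random
from collections import Counter
import networkx as nx

def mark_random_paths(G, samples=100000, seed=0):
    # Sample random node pairs and mark every edge on the shortest path
    # between them; marks are weighted by path length, per the edit above,
    # so long inter-set paths count for more than short intra-set ones.
    rng = random.Random(seed)
    nodes = list(G.nodes)
    marks = Counter()
    for _ in range(samples):
        s, t = rng.sample(nodes, 2)
        try:
            path = nx.shortest_path(G, s, t)
        except nx.NetworkXNoPath:
            continue  # pair fell into different components
        for u, v in zip(path, path[1:]):
            marks[frozenset((u, v))] += len(path)  # length-proportional mark
    return marks.most_common(50)  # candidate erroneous links, for manual review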

My idea is an ant colony algorithm. I was inspired by your approach of choosing two random nodes, but thought it would be useful to do something more than just computing the shortest path.
Start n ants at n random nodes. You will need to tune n by trial and error. Ants leave pheromone on the edges they travel; pheromone evaporates over time. Each ant chooses one of the adjacent edges with probability proportional to the pheromone on it: the more pheromone an edge has, the more likely an ant is to choose that edge.
In the beginning the ants move totally at random, since there is no pheromone and all edges have equal probability. Over time, however, the most popular edges, the bridges between two "fuzzily-connected" components, should accumulate more and more pheromone.
So: release n ants, simulate for m turns, and return the edges with the highest amount of pheromone. You can visualize this process to see clearly what is going on.
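A toy sketch of that simulation, assuming a networkx graph (parameter names are illustrative; as the update below shows, this did not reliably single out the bridges):

import random
import networkx as nx

def ant_pheromone(G, n_ants=1000, n_steps=1000, seed=0):
    rng = random.Random(seed)
    pher = {frozenset(e): 1.0 for e in G.edges}       # 1 unit initially
    ants = [rng.choice(list(G.nodes)) for _ in range(n_ants)]
    for _ in range(n_steps):
        for i, u in enumerate(ants):
            nbrs = list(G.neighbors(u))
            if not nbrs:
                continue                               # isolated node
            weights = [pher[frozenset((u, v))] for v in nbrs]
            v = rng.choices(nbrs, weights=weights)[0]  # pheromone-biased step
            pher[frozenset((u, v))] += 1.0             # deposit, no evaporation
            ants[i] = v
    return sorted(pher.items(), key=lambda kv: -kv[1])[:20]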
Update: I realized that the sentence "However, over time, the most popular edges, bridges between two "fuzzily-connected" components will have more and more pheromone on them" is wrong. I implemented it, and it turns out that bridges do not necessarily attract ants:
There were n = 1000 ants and m = 1000 steps. Initially every edge had 1 unit of pheromone, and it was increased by 1 whenever an ant traveled over it. There was no evaporation, though I don't think it would have improved the situation. The bridge had 49845 units of pheromone, but three other edges had more than 100k.
As suggested by Peter de Rivaz in the comments, I tried (source code) repeating a min-cut between 2 random nodes, and it works much better:
Graphs generated with python-igraph library.
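A rough sketch of the repeated min-cut idea (shown here with networkx rather than igraph; the number of trials and the cut-size threshold are illustrative):

import random
from collections import Counter
import networkx as nx

def bridge_candidates(G, trials=200, max_cut_size=20, seed=0):
    # Repeat s-t min-cut between random node pairs; small cuts that keep
    # recurring are good candidates for the erroneous bridge links.
    rng = random.Random(seed)
    nodes = list(G.nodes)
    votes = Counter()
    for _ in range(trials):
        s, t = rng.sample(nodes, 2)
        if not nx.has_path(G, s, t):
            continue                      # pair is already separated
        cut = nx.minimum_edge_cut(G, s, t)
        if len(cut) <= max_cut_size:      # a tiny cut suggests a bridge
            for e in cut:
                votes[frozenset(e)] += 1
    return votes.most_common(20)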

Related

Looking for algorithms: Minimum cut to produce bipartite graph

Given an undirected weighted graph (or a single connected component of a larger disjoint graph) which will typically contain numerous odd and even cycles, I am searching for algorithms to remove the smallest possible number of edges necessary to produce one or more bipartite subgraphs. Are there any standard algorithms in the literature, as there are for minimum cut, etc.?
The problem I am trying to solve looks like this in the real world:
Presentations of about 1 hour each are given to students about different subjects in one or two time blocks. Students can sign up for one presentation of their choice, or two, or three (the 3rd choice is an alternative in case one of the others isn't going to be presented); the choices must all be different. If there are fewer than three sign-ups for a given presentation, it will not be given. If there are 18 or more, it will be given twice, once in each block. I have to schedule the presentations such that the maximum number of sign-ups is satisfied.
Scheduling is trivial in the following cases:
Sign-ups for only one presentation can always be satisfied if the presentation is given (i.e. sign-ups >= 3);
Sign-ups for two given presentations are always satisfiable if at least one of them is given twice.
First, all sign-ups are aggregated to determine which ones are given once and which are given twice. If a student has signed up for a presentation with too few other sign-ups, the alternative presentation is chosen if it will also be given.
At the end of the day, I am left with an undirected weighted graph where the vertices are the presentations and the edges represent students who have signed up for that combination of presentations, each of which is only presented once. The weight corresponds to the number of sign-ups for the unique combination of presentations (thus avoiding parallel edges).
If the number of vertices, or presentations, is around 20 or less, I have come up with a brute force solution which finishes in acceptable time. However, each additional vertex will double the runtime of that solution. After 28 or so, it rapidly becomes unmanageable.
This year we had 37 presentations, thirty of which were only given once and thus ended up in the graph. What I am trying right now for larger graphs is the following:
1) Find all discrete components and solve each component individually;
2) For each component, remove leaf nodes and bridge edges recursively;
3) Generate a spanning tree (I am using Kruskal's algorithm, which works very well), saving the removed edges;
4) Generate the fundamental cycle set by adding one removed edge back into the tree at a time and stripping off the rest of the tree;
5) Using the Gibbs-Welch algorithm, generate the complete set of all elementary cycles starting with the fundamental set obtained in step 4;
6) Count the number of odd and even cycles to which each edge belongs;
7) Create a priority queue of edges (ordering discussed below) and remove edges successively from their connected component until the resulting component is bipartite.
I cannot find an ordering of the priority queue for which I can prove that the result would be as acceptable as a solution obtained using the brute force method (it is probably NP-hard). However, I am trying something along these lines:
a. If the edge belongs only to odd cycles, remove it first;
b. If the edge belongs to more odd than even cycles, remove it before any other edges which belong to more even cycles than odd;
c. Edges with the smallest weight should be removed first.
If an edge belongs to both an odd and an even cycle, removing it would leave a larger odd cycle behind. That is why I order them like this: obviously, the more odd cycles an edge belongs to, the higher its priority, but only if fewer even cycles are affected.
There are additional criteria which exist but need to be considered outside of the graph problem; for example, removing an edge effectively removes one of the sign-ups for one of the presentations, so an eye has to be kept on not letting the number of sign-ups get too small.
(EDIT: there is also the possibility of splitting presentations that have almost enough sign-ups, e.g. 15-16 instead of 18, into two blocks. But this means that whoever is giving the presentation would have to do it twice, so it is a trade-off.)
Thanks in advance for any suggestions!
This problem is equivalent to the NP-hard weighted max-cut problem, which asks for a partition of the vertices into two parts such that the total weight of the edges going between the parts is maximized.
I think the easiest way to solve a problem size such as you have would be to formulate it as a quadratic integer program and then apply an off the shelf solver. The formulation looks like
maximize (1/2) sum_{ij} w_{ij} (1 - y_i y_j)
subject to
y_i in {±1} for all i
where w_{ij} is the weight of the undirected edge ij if present and zero otherwise (so the corresponding term of the sum can be omitted).
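For components small enough for the brute-force approach from the question (around 20 vertices), the objective can also be evaluated exhaustively; a sketch assuming a networkx graph with a 'weight' attribute on each edge:

import networkx as nx

def max_cut_bruteforce(G):
    # Try every 2-partition of the vertices (fixing the last vertex's side,
    # since the cut is symmetric) and keep the heaviest cut.
    nodes = list(G.nodes)
    n = len(nodes)
    if n < 2:
        return 0, set()
    best_val, best_side = -1, set()
    for mask in range(1 << (n - 1)):
        side = {nodes[i] for i in range(n - 1) if mask >> i & 1}
        val = sum(d.get("weight", 1)
                  for u, v, d in G.edges(data=True)
                  if (u in side) != (v in side))
        if val > best_val:
            best_val, best_side = val, side
    return best_val, best_side

The minimum-weight edge set to delete for bipartiteness is then the complement: the total edge weight minus the max-cut value.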

Solving a TSP-related task

I have a problem similar to the basic TSP but not quite the same.
I have a starting position for a player character, and he has to pick up n objects in the shortest time possible. He doesn't need to return to the original position and the order in which he picks up the objects does not matter.
In other words, the problem is to find the minimum-weight (distance) Hamiltonian path with a given (fixed) start vertex.
What I have currently is an algorithm like this:

best_total_weight_so_far = Inf
foreach possible end vertex v:
    add a dummy vertex with 0-weight edges to the start vertex and to v
    current_solution = solve TSP for this graph
    remove the dummy vertex
    total_weight = Weight(current_solution)
    if total_weight < best_total_weight_so_far:
        best_solution = current_solution
        best_total_weight_so_far = total_weight
However, this algorithm seems rather time-consuming, since it has to solve the TSP n-1 times. Is there a better approach to solving the original problem?
It is a rather minor variation of TSP and clearly NP-hard. Any heuristic algorithm for TSP (and you really shouldn't try to do anything better than a heuristic for a game, IMHO) should be easily modifiable to your situation. Even nearest neighbor probably wouldn't be bad; in fact, for your situation it would probably work better than it does for TSP, since in nearest neighbor the return edge is often the worst one. Perhaps you can use NN + 2-opt to eliminate edge crossings.
On edit: your problem can easily be reduced to the TSP for directed graphs. Double all the existing edges so that each is replaced by a pair of opposite arcs. The cost of each arc is simply the cost of the corresponding edge, except for the arcs that go into the start node, which get cost 0 (no cost in returning at the end of the day). If you have code that solves the TSP for directed graphs, you can thus use it for your case as well.
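A sketch of nearest neighbor plus a path-flavored 2-opt for the fixed-start variant (dist is assumed to be a dict-of-dicts of symmetric pairwise distances; helper names are illustrative):

def nearest_neighbor_path(dist, start):
    # Greedy: always walk to the closest unvisited object; no return edge.
    unvisited = set(dist) - {start}
    path, cur = [start], start
    while unvisited:
        cur = min(unvisited, key=lambda v: dist[cur][v])
        path.append(cur)
        unvisited.remove(cur)
    return path

def two_opt_path(path, dist):
    # Reverse segments while doing so shortens the path; unlike tour 2-opt,
    # the final endpoint is free, so a reversal may also move the endpoint.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(path) - 1):
            for j in range(i + 1, len(path)):
                delta = dist[path[i - 1]][path[j]] - dist[path[i - 1]][path[i]]
                if j + 1 < len(path):
                    delta += dist[path[i]][path[j + 1]] - dist[path[j]][path[j + 1]]
                if delta < -1e-12:
                    path[i:j + 1] = path[i:j + 1][::-1]
                    improved = True
    return path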
At the risk of it getting slow (20 points should be fine), you can use the good old exact TSP algorithms in the way John describes. 20 points is really easy for TSP - instances with thousands of points are routinely solved and instances with tens of thousands of points have been solved.
For example, use linear programming and branch & bound.
Make an LP problem with one variable per arc (there are more of them now because the graph is directed). The variables are between 0 and 1, where 0 means "don't take this arc in the solution", 1 means "take it", and fractional values sort of mean "take it... a bit" (whatever that means).
The costs are obviously the distances, except for returning to the start. See John's answer.
Then you need constraints: for each node, the sum of its incoming arcs is 1, and the sum of its outgoing arcs is 1. Also, the sum of the pair of arcs that was previously one edge must be less than or equal to one. The solution will now consist of disconnected triangles, which is the smallest way to connect the nodes such that each has both an incoming and an outgoing arc, and those arcs are not both "the same edge".

So the sub-tours must be eliminated. The simplest way to do that (probably strong enough for 20 points) is to decompose the solution into connected components, and then for each connected component require that the sum of arcs entering it be at least 1 (it can be more than 1), and the same for the arcs leaving it. Solve the LP problem again and repeat until there is only one component. There are more cuts you can add, such as the obvious Gomory cuts, but also fancy special TSP cuts (comb cuts, blossom cuts, crown cuts... there are whole books about this), but you won't need any of that for 20 points.
What this gives you is, sometimes, directly the solution. Usually, to begin with, it will contain fractional edges. In that case it still gives you a good underestimate of how long the tour will be, and you can use that within branch & bound to determine the actual best tour. The idea is to pick an edge that was fractional in the result and force it to either 0 or 1 (this often turns edges that were previously 0/1 fractional, so you have to keep all "chosen edges" fixed in the whole sub-tree in order to guarantee termination). Now you have two sub-problems; solve each recursively. Whenever the estimate from the LP solution becomes longer than the best path you have found so far, you can prune the sub-tree (since it's an underestimate, all integral solutions in that part of the tree can only be worse). You can initialize the "best so far" solution with a heuristic one, but for 20 points it doesn't really matter: the techniques described here are already enough to solve 100-point problems.
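A compact sketch of the LP relaxation with the subtour-elimination loop described above, using PuLP and networkx (assumed available; the branch & bound layer on fractional edges is omitted, and node labels are assumed comparable, e.g. ints):

import pulp
import networkx as nx

def lp_lower_bound(nodes, cost, start):
    # Directed arcs; arcs entering the start are free (the "no return" trick).
    arcs = [(i, j) for i in nodes for j in nodes if i != j]
    x = pulp.LpVariable.dicts("x", arcs, lowBound=0, upBound=1)
    prob = pulp.LpProblem("path_tsp_lp", pulp.LpMinimize)
    prob += pulp.lpSum((0 if j == start else cost[i][j]) * x[i, j]
                       for i, j in arcs)
    for v in nodes:  # one arc in and one arc out of every node
        prob += pulp.lpSum(x[i, j] for i, j in arcs if j == v) == 1
        prob += pulp.lpSum(x[i, j] for i, j in arcs if i == v) == 1
    for i, j in arcs:  # the two arcs of one original edge exclude each other
        if i < j:
            prob += x[i, j] + x[j, i] <= 1
    while True:
        prob.solve(pulp.PULP_CBC_CMD(msg=0))
        H = nx.Graph()
        H.add_nodes_from(nodes)
        H.add_edges_from((i, j) for i, j in arcs
                         if pulp.value(x[i, j]) > 1e-6)
        comps = list(nx.connected_components(H))
        if len(comps) == 1:
            return pulp.value(prob.objective)  # lower bound for branch & bound
        for comp in comps:  # subtour elimination: each component needs an exit
            prob += pulp.lpSum(x[i, j] for i, j in arcs
                               if i in comp and j not in comp) >= 1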

Graph algorithm to detect even cycles

I have an undirected graph. One edge in that graph is special. I want to find all other edges that are part of an even cycle containing the first edge.
I don't need to enumerate all the cycles; that would be inherently exponential, I think. I just need to know, for each edge, whether it satisfies the condition above.
A brute force search works of course but is too slow, and I'm struggling to come up with anything better. Any help appreciated.
I think we have an answer (I must credit my colleague with the idea). Essentially, his idea is to do a flood fill through the space of even cycles. This works because if you have a large even cycle formed by merging two smaller cycles, then the smaller cycles must have been either both even or both odd. Similarly, merging an odd and an even cycle always forms a larger odd cycle.
This is only a practical option: I can imagine pathological cases consisting of alternating even and odd cycles, in which we would never find two adjacent even cycles and the algorithm would be slow. But I'm confident that such cases don't arise in real chemistry, at least in chemistry as it's currently known; 30 years ago we'd never heard of fullerenes.
If your graph has a small node degree, you might consider a different graph representation:
Let u, v, w be three atoms, and e=(u,v), k=(v,w) two chemical bonds. A typical way of representing such data is to store u, v, w as nodes and e, k as edges of a graph.
However, one may instead represent e and k as nodes of the graph, with edges like f=(e,k), where f represents a 2-step link from u to w. Running any cycle-finding algorithm on such a graph will return all even cycles of the original graph.
Of course, this is efficient only if the original graph has a small node degree. When a user performs an edit, you can easily update the alternative representation accordingly.
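A sketch of building that bond-as-node representation with networkx (this is the line graph of the molecule; networkx can also build it directly with nx.line_graph):

import networkx as nx

def bond_graph(G):
    # Bonds (edges of G) become nodes; two bonds sharing an atom become
    # adjacent. Equivalent to nx.line_graph(G).
    H = nx.Graph()
    H.add_nodes_from(frozenset(e) for e in G.edges)
    for v in G.nodes:
        incident = [frozenset(e) for e in G.edges(v)]
        for i in range(len(incident)):
            for j in range(i + 1, len(incident)):
                H.add_edge(incident[i], incident[j])
    return H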

Generating a tower defense maze (longest maze with limited walls) - near-optimal heuristic?

In a tower defense game, you have an NxM grid with a start, a finish, and a number of walls.
Enemies take the shortest path from start to finish without passing through any walls (they aren't usually constrained to the grid, but for simplicity's sake let's say they are. In either case, they can't move through diagonal "holes")
The problem (for this question at least) is to place up to K additional walls so as to maximize the length of the path the enemies have to take (for example, for K=14).
My intuition tells me this problem is NP-hard if (as I'm hoping to do) we generalize this to include waypoints that must be visited before moving to the finish, and possibly also without waypoints.
But, are there any decent heuristics out there for near-optimal solutions?
[Edit] I have posted a related question here.
I present a greedy approach that may be close to optimal (but I couldn't find an approximation factor). The idea is simple: we should block the cells in critical places of the maze, and these places can be found by measuring the maze's connectivity. We consider vertex connectivity and find a minimum vertex cut that disconnects the start and finish (s, f); after that we remove some critical cells.
To turn the maze into a graph, take the dual of the maze. Find a minimum (s,f) vertex cut in this graph, then examine each vertex in the cut: remove a vertex if its deletion increases the length of all s-f paths, or if it lies on the minimum-length path from s to f. After eliminating a vertex, recursively repeat the above process, k times in total.
But there is an issue: we might remove a vertex that cuts every path from s to f. To prevent this, we can weight such cut vertices as highly as possible: first compute the minimum (s,f) cut; if the cut consists of just one node, make the graph weighted and give that vertex a very large weight, like n^3; then compute the minimum (s,f) cut again. The single cutting vertex from the previous calculation will not belong to the new cut because of its weight.
But if there is just one path between s and f (after some iterations), we can't improve it this way. In that case we can fall back to a normal greedy step, like removing a node on one of the shortest paths from s to f that doesn't belong to any cut; after that we can return to the minimum-vertex-cut step.
The running time of each step is: min-cut + path finding for all nodes in the min-cut, i.e. O(min-cut) + O(n^2) * O(number of nodes in the min-cut). Since the number of nodes in the min-cut cannot exceed O(n^2), in a very pessimistic situation the algorithm is O(kn^4), but normally it shouldn't take more than O(kn^3): the min-cut computation normally dominates the path finding, and path finding normally doesn't take O(n^2).
I guess the greedy choice is a good starting point for simulated-annealing-type algorithms.
P.S.: minimum vertex cut is similar to minimum edge cut, and a max-flow/min-cut-style approach can be applied to it: just split each vertex into two vertices, Vi and Vo (input and output); converting the undirected graph to a directed one is not hard either.
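A sketch of that vertex-splitting reduction on top of networkx max-flow (networkx also ships nx.minimum_node_cut, which does this internally; the explicit construction is shown for clarity, and s and t are assumed non-adjacent, since no vertex cut separates adjacent nodes):

import networkx as nx

def min_vertex_cut_size(G, s, t):
    # Split each vertex v into (v, 'in') -> (v, 'out') with capacity 1;
    # original undirected edges become infinite-capacity arcs both ways.
    D = nx.DiGraph()
    for v in G.nodes:
        cap = float("inf") if v in (s, t) else 1
        D.add_edge((v, "in"), (v, "out"), capacity=cap)
    for u, v in G.edges:
        D.add_edge((u, "out"), (v, "in"), capacity=float("inf"))
        D.add_edge((v, "out"), (u, "in"), capacity=float("inf"))
    value, _ = nx.minimum_cut(D, (s, "out"), (t, "in"))
    return value  # size of a minimum s-t vertex cut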
It can easily be shown (proof left as an exercise to the reader) that it is enough to search for solutions in which every one of the K blockades is put on the current minimum-length route. Note that if there are multiple minimum-length routes, then all of them have to be considered. The reason is that if you don't put any of the remaining blockades on the current minimum-length route, then that route does not change; hence you can put the first available blockade on it immediately during the search. This speeds up even a brute-force search.
But there are more optimizations. You can also always decide to place the next blockade so that it becomes the FIRST blockade on the current minimum-length route, i.e. if you place the blockade on the 10th square of the route, you mark squares 1..9 as "permanently open" until you backtrack. This again saves an exponential number of placements to search during the backtracking.
You can then apply heuristics to cut down the search space or to reorder it, e.g. first try those blockade placements that increase the length of the current minimum-length route the most. You can then run the backtracking algorithm for a limited amount of real-time and pick the best solution found thus far.
I believe we can reduce the contained maximum manifold problem to boolean satisfiability and show NP-completeness through a dependency on this subproblem. Because of this, the algorithms spinning_plate provided are reasonable as heuristics, precomputing and machine learning are reasonable, and the trick becomes finding the best heuristic solution if we wish to blunder forward here.
Consider a board like the following:
..S........
#.#..#..###
...........
...........
..........F
This has many of the problems that cause greedy and gate-bound solutions to fail. If we look at that second row:
#.#..#..###
Our logic gates are, in 0-based 2D array ordered as [row][column]:
[1][4], [1][5], [1][6], [1][7], [1][8]
We can re-render this as an equation to satisfy the block:
if ([1][9] AND ([1][10] AND [1][11]) AND ([1][12] AND [1][13])):
traversal_cost = INFINITY; longest = False # Infinity does not qualify
Excepting infinity as an unsatisfiable case, we backtrack and re-render this as:
if ([1][14] AND ([1][15] AND [1][16]) AND [1][17]):
traversal_cost = 6; longest = True
And our hidden boolean relationship falls amongst all of these gates. You can also show that geometric proofs can't recurse fractally, because we can always create a wall that's exactly N-1 squares wide or high, and this represents a critical part of the solution in all cases (therefore, divide and conquer won't help you).
Furthermore, because perturbations across different rows are significant:
..S........
#.#........
...#..#....
.......#..#
..........F
We can show that, without a complete set of computable geometric identities, the complete search space reduces itself to N-SAT.
By extension, we can also show that this is trivial to verify and non-polynomial to solve as the number of gates approaches infinity. Unsurprisingly, this is why tower defense games remain so fun for humans to play. Obviously, a more rigorous proof is desirable, but this is a skeletal start.
Do note that you can significantly reduce the n term in your n-choose-k relation. Because we can recursively show that each perturbation must lie on the critical path, and because the critical path is always computable in O(V+E) time (with a few optimizations to speed things up for each perturbation), you can significantly reduce your search space at a cost of a breadth-first search for each additional tower added to the board.
Because we may tolerably assume O(n^k) for a deterministic solution, a heuristical approach is reasonable. My advice thus falls somewhere between spinning_plate's answer and Soravux's, with an eye towards machine learning techniques applicable to the problem.
The 0th solution: Use a tolerable but suboptimal AI, in which spinning_plate provided two usable algorithms. Indeed, these approximate how many naive players approach the game, and this should be sufficient for simple play, albeit with a high degree of exploitability.
The 1st-order solution: Use a database. Given the problem formulation, you haven't quite demonstrated the need to compute the optimal solution on the fly. Therefore, if we relax the constraint of approaching a random board with no information, we can simply precompute the optimum for all K tolerable for each board. Obviously, this only works for a small number of boards: with V! potential board states for each configuration, we cannot tolerably precompute all optimums as V becomes very large.
The 2nd-order solution: Use a machine-learning step. Promote each step as you close a gap that results in a very high traversal cost, running until your algorithm converges or no more optimal solution can be found than greedy. A plethora of algorithms are applicable here, so I recommend chasing the classics and the literature for selecting the correct one that works within the constraints of your program.
The best heuristic may be a simple heat map generated by a locally state-aware, recursive depth-first traversal, sorting the results by most to least commonly traversed after the O(V^2) traversal. Proceeding through this output greedily identifies all bottlenecks, and doing so without making pathing impossible is entirely possible (checking this is O(V+E)).
Putting it all together, I'd try an intersection of these approaches, combining the heat map and critical path identities. I'd assume there's enough here to come up with a good, functional geometric proof that satisfies all of the constraints of the problem.
At the risk of stating the obvious, here's one algorithm (a quick sketch of it follows below):
1) Find the shortest path.
2) Test blocking every node on that path and see which block results in the longest path.
3) Repeat K times.
Naively, this will take O(K*(V + E log E)^2), but with a little work you could improve step 2 by only recalculating partial paths.
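A quick sketch of this loop on a networkx graph of open cells (cell adjacency as edges; blocking a cell removes its node; names are illustrative):

import networkx as nx

def greedy_walls(G, start, finish, K):
    # Repeatedly block the shortest-path cell whose removal lengthens
    # the enemies' path the most, without ever sealing the maze.
    walls = []
    for _ in range(K):
        path = nx.shortest_path(G, start, finish)
        best_cell, best_len = None, -1
        for cell in path[1:-1]:                  # never block start/finish
            H = G.copy()
            H.remove_node(cell)
            if not nx.has_path(H, start, finish):
                continue                          # this block would seal it
            length = nx.shortest_path_length(H, start, finish)
            if length > best_len:
                best_cell, best_len = cell, length
        if best_cell is None:
            break                                 # no legal block remains
        G.remove_node(best_cell)
        walls.append(best_cell)
    return walls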
As you mention, simply trying to break the path is difficult, because most blocks simply add a length of 1 (or 2); it's hard to find the choke points that lead to big gains.
If you take the minimum vertex cut between the start and the end, you will find the choke points for the entire graph. One possible algorithm is this:
1) Find the shortest path.
2) Find the min-cut of the whole graph.
3) Find the maximal contiguous node set that intersects one point on the path; block those nodes.
4) Wash, rinse, repeat.
Step 3) is the big part, and also why this algorithm may perform badly. You could also try:
- the smallest node set that connects with other existing blocks;
- finding all groupings of contiguous vertices in the vertex cut and testing each of them for the longest path, a la the first algorithm.
The last one is probably the most promising: if you find a min vertex cut on the whole graph, you're going to find the choke points for the whole graph.
Here is a thought: in your grid, group adjacent walls into islands and treat every island as a graph node. The distance between nodes is the minimal number of walls needed to connect them (i.e., to block the enemy).
In that case you can start maximizing the path length by blocking the cheapest arcs.
I have no idea if this would work, because you could create new islands with the walls you place, but it could help work out where to put walls.
I suggest using a modified breadth first search with a K-length priority queue tracking the best K paths between each island.
I would, for every island of connected walls, pretend that it is a light (a special light that can only send out horizontal and vertical rays).
Use ray-tracing to see which other islands the light can hit. Say island 1 (i1) hits i2, i3, i4, i5 but doesn't hit i6, i7, ...; then you would have line(i1,i2), line(i1,i3), line(i1,i4) and line(i1,i5).
Mark the distance of all grid points as infinity, and set the start point to 0. Now run a breadth-first search from the start: at every grid point, set its distance to the minimum distance of its neighbors plus one.
But here is the catch: every time you get to a grid point that lies on a line() between two islands, instead of recording the distance as the minimum of its neighbors, make it a priority queue of length K and record the K shortest paths to that line() from any of the other line()s.
This priority queue then stays the same until you get to the next line(), where it aggregates all the priority queues going into that point.
You haven't shown the need for this algorithm to be real-time, but I may be wrong about that premise. If so, you could precalculate the block positions.
If you can do this beforehand and then simply make the AI build the maze rock by rock as if it were a kind of tree, you could use genetic algorithms to ease your need for heuristics. You would load a genetic algorithm framework of some kind and start with a population of non-movable blocks (your map) plus randomly placed movable blocks (blocks that the AI would place). Then you evolve the population by applying crossovers and mutations over the movable blocks, and evaluate the individuals by rewarding the longest resulting path. You then simply have to write a resource-efficient path calculator, without needing heuristics in your code. In the last generation of the evolution, you take the highest-ranking individual, which is your solution: the desired block pattern for this map.
Genetic algorithms are proven to take you, under ideal conditions, to a local maximum (or minimum) in reasonable time, which may be impossible to reach with analytic solutions on a sufficiently large data set (i.e. a big enough map in your situation).
You haven't stated the language in which you are going to develop this algorithm, so I can't propose frameworks that may perfectly suit your needs.
Note that if your map is dynamic, meaning that the map may change over tower defense iterations, you may want to avoid this technique since it may be too intensive to re-evolve an entire new population every wave.
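A toy sketch of that evolutionary loop with networkx as the path calculator (population size, mutation rate, and crossover scheme are arbitrary illustrative choices, not from the answer):

import random
import networkx as nx

def evolve_walls(G, start, finish, K, free_cells, pop=50, gens=100, seed=0):
    # free_cells: list of blockable cells (excluding start and finish).
    rng = random.Random(seed)

    def fitness(walls):
        H = G.copy()
        H.remove_nodes_from(walls)
        try:
            return nx.shortest_path_length(H, start, finish)
        except nx.NetworkXNoPath:
            return 0                              # sealed mazes score worst

    population = [rng.sample(free_cells, K) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]        # keep the fitter half
        children = []
        while len(children) < pop - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = list(dict.fromkeys(a + b))[:K]        # crossover
            if rng.random() < 0.3:                        # mutation
                child[rng.randrange(K)] = rng.choice(free_cells)
            if len(set(child)) == K:                      # reject duplicates
                children.append(child)
        population = survivors + children
    return max(population, key=fitness)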
I'm not at all an algorithms expert, but looking at the grid makes me wonder if Conway's game of life might somehow be useful for this. With a reasonable initial seed and well-chosen rules about birth and death of towers, you could try many seeds and subsequent generations thereof in a short period of time.
You already have a measure of fitness in the length of the creeps' path, so you could pick the best one accordingly. I don't know how well (if at all) it would approximate the best path, but it would be an interesting thing to use in a solution.

How to find minimum number of transfers for a metro or railway network?

I am aware that Dijkstra's algorithm can find the minimum distance between two nodes (or, in the case of a metro, stations). My question, though, concerns finding the minimum number of transfers between two stations. Moreover, out of all the minimum-transfer paths, I want the one with the shortest time.
Now in order to find a minimum-transfer path I utilize a specialized BFS applied to metro lines, but it does not guarantee that the path found is the shortest among all other minimum-transfer paths.
I was thinking that perhaps modifying Dijkstra's algorithm might help - by heuristically adding weight (time) for each transfer, such that it would deter the algorithm from making transfer to a different line. But in this case I would need to find the transfer weights empirically.
Addition to the question:
I have been recommended to add a "penalty" to each time the algorithm wants to transfer to a different subway line. Here I explain some of my concerns about that.
I have put off this problem for a few days and got back to it today. After looking at the problem again, it seems that running Dijkstra's algorithm on stations and figuring out where the transfers occur is hard; it's not as obvious as one might think.
Here's an example:
Suppose I have a partial graph (just 4 stations) and their metro lines: A (red), B (red, blue), C (red), D (blue). Let station A be the source.
And the connections are :
---- D(blue) - B (blue, red) - A (red) - C (red) -----
If I follow Dijkstra's algorithm: initially I place A into the queue, then dequeue A in the 1st iteration and look at its neighbors, B and C, and update their distances according to the weights A-B and A-C. Now, even though B connects two lines, at this point I don't know if I need to make a transfer at B, so I do not add the "penalty" for a transfer.
Let's say that the distance A-B < A-C, which causes B to be dequeued on the next iteration. Its neighbor is D, and only at this point do I see that the transfer had to be made at B. But B has already been processed (dequeued).
So I am not sure how this "delay" in determining the need for a transfer would affect the integrity of the algorithm.
Any thoughts?
You can make each of your weights a pair: (# of transfers, time). You can add these weights in the obvious way, and compare them in lexicographic order (compare # of transfers first, use time as the tiebreaker).
Of course, as others have mentioned, using K * (# of transfers) + time for some large enough K produces the same effect, as long as you know the maximum time a priori and you don't run out of bits in your weight storage.
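A sketch of Dijkstra with (# of transfers, time) pair weights compared lexicographically (this assumes the network is modeled so that each edge already carries a transfer count, e.g. by using (station, line) pairs as nodes with transfer edges between lines at one station):

import heapq
from itertools import count

def fewest_transfers(graph, source, target):
    # graph[u] -> iterable of (v, d_transfers, d_minutes) edge triples.
    INF = (float("inf"), float("inf"))
    best = {source: (0, 0)}
    tie = count()                 # tiebreaker so nodes are never compared
    heap = [((0, 0), next(tie), source)]
    while heap:
        w, _, u = heapq.heappop(heap)
        if u == target:
            return w              # (transfers, minutes), lexicographically best
        if w > best.get(u, INF):
            continue              # stale queue entry
        for v, d_tr, d_tm in graph[u]:
            cand = (w[0] + d_tr, w[1] + d_tm)
            if cand < best.get(v, INF):
                best[v] = cand
                heapq.heappush(heap, (cand, next(tie), v))
    return None                   # target unreachable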
I'm going to be describing my solution using the A* Algorithm, which I consider to be an extension (and an improvement -- please don't shoot me) of Dijkstra's Algorithm that is easier to intuitively understand. The basics of it goes like this:
Add the starting path to the priority queue, weighted by distance-so-far + minimum distance to goal
Every iteration, take the lowest weighted path and explode it into every path that is one step from it (discarding paths that wrap around themselves) and put it back into the queue. Stop if you find a path that ends in the goal.
Instead of making your weight simply distance-so-far + minimum-distance-to-goal, you could use two weights: Stops and Distance/Time, compared this way:
Basically, to compare:
Compare stops first, and report this comparison if possible (i.e., if they aren't the same)
If stops are equal, compare distance traveled
And sort your queue this way.
If you've ever played Mario Party, think of stops as Stars and distance as Coins. In the middle of the game, a person with two stars and ten coins is going to be above someone with one star and fifty coins.
Doing this guarantees that the first path you take out of your priority queue will be the one with the fewest possible stops.
You have the right idea, but you don't really need to find the transfer weights empirically -- you just have to ensure that the weight for a single transfer is greater than the weight for the longest possible travel time. You should be pretty safe if you give a transfer a weight equivalent to, say, a year of travel time.
As Amadan noted in a comment, it's all about creating right graph. I'll just describe it in more details.
Consider two vertices (stations) to be connected by an edge if they are on a single line. With this graph (and all weights equal to 1), Dijkstra will find the minimum number of transitions.
Now, let's assume that the maximum travel time is always less than 10000 (use your own constant). Then the weight of edge AB (where A and B are on one line) is time_to_travel_between(A, B) + 10000.
Running Dijkstra on such a graph guarantees that the minimal number of transitions is used, and that the minimum time is achieved as a secondary criterion.
Update (on a comment): let's "prove" it. Suppose there are two solutions: one with 2 transfers and 40 minutes of travel time, and one with 3 transfers and 25 minutes of travel time. In the first case you travel on 3 lines, so the path weight will be 3*10000 + 40; in the second, 4*10000 + 25. The first solution will be chosen.
I had the same problem as you until now. I was using Dijkstra. The penalty for transfers is a very good idea indeed, and I've been using it for a while now. The main problem is that you cannot use it directly in the weight, because you first have to identify the transfer, and I didn't want to modify the algorithm.
So what I've been doing is: each time you find a transfer, delete the node, add it back with the penalty weight, and rerun the graph.
But this way I found out that Dijkstra won't work. And this is where I tried Floyd-Warshall, which, contrary to Dijkstra, compares all possible paths through the graph between each pair of vertices.
Switching to Floyd-Warshall helped me with my problem; I hope it helps you as well. It's easier to code and a lot easier to implement.
