I want to know some more applications of the widest path problem.
It seems like something that can be used in a multitude of places, but I couldn't get anything constructive from searching on the internet.
Can someone please share as to where else this might be used?
Thanks in advance.
(What I searched for included uses in P2P networks and CDNs, but I couldn't find exactly how it is used, and the papers were too long for me to scout.)
The widest path problem has a variety of applications in areas such as network routing problems, digital compositing and voting theory. Some specific applications include:
Finding the route with maximum transmission speed between two nodes.
This comes almost directly from the widest-path problem definition: we want to find the path between two nodes that maximizes the minimum-weight edge on the path (a short sketch follows this list).
Computing the strongest path strengths in Schulze’s method.
Schulze's method is a system in voting theory for finding a single winner among multiple candidates. Each voter provides an ordered preference list. We then construct a weighted graph where vertices represent candidates and the weight of an edge (u, v) represents the number of voters who prefer candidate u over candidate v. Next, we want to find the strength of the strongest path between each pair of candidates. This is the part of Schulze's method that can be solved as a widest-path problem: we simply run a widest-path algorithm for each pair of vertices.
Mosaicking of digital photographic maps. This is a technique for merging two maps into a single bigger map. The challenge is that the two original photos might have different light intensity, colors, etc. One way to do mosaicking is to produce seams where each pixel in the resulting picture comes entirely from one photo or the other. We want the seam to appear invisible in the final product. The problem of finding the optimal seam can be modeled as a widest-path problem. Details of the modeling are found in the original paper.
Metabolic path analysis for living organisms. The objective of this type of analysis is to identify critical reactions in living organisms. A network is constructed based on the stoichiometry of the reactions. We wish to find the path that is energetically favored in the production of a particular metabolite, i.e. the path whose bottleneck (its weakest step) is as large as possible. This corresponds to the widest-path problem.
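To make the first application concrete, here is a minimal sketch of a Dijkstra-style widest-path search (the graph representation, names and toy capacities are placeholders, not from any particular paper). Schulze's method applies the same idea to every pair of candidates, typically with a Floyd-Warshall-style all-pairs variant.

```python
import heapq
from collections import defaultdict


def widest_path(graph, source, target):
    """Maximum-bottleneck (widest) path via a Dijkstra-style search.

    graph: dict mapping node -> list of (neighbour, capacity) pairs.
    Returns the largest achievable bottleneck capacity from source to
    target, or 0 if the target is unreachable.
    """
    best = defaultdict(int)           # best[v] = widest bottleneck found so far to v
    best[source] = float("inf")
    heap = [(-best[source], source)]  # max-heap via negated widths
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u == target:
            return width
        if width < best[u]:
            continue  # stale heap entry
        for v, cap in graph[u]:
            new_width = min(width, cap)  # bottleneck of the extended path
            if new_width > best[v]:
                best[v] = new_width
                heapq.heappush(heap, (-new_width, v))
    return 0


# Toy example: maximum transmission speed between "A" and "D".
links = {
    "A": [("B", 10), ("C", 4)],
    "B": [("D", 6)],
    "C": [("D", 9)],
    "D": [],
}
print(widest_path(links, "A", "D"))  # 6, via A -> B -> D
```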
I'm trying to develop a way of measuring how disruptive the removal of two nodes is to a graph. So far I'm computing a collection of measures such as several centralities, degrees, PageRank, etc.
It's obvious that this can be done by actually removing the two nodes and then analyzing the resulting graph (or collection of graphs), but that is time-consuming when there are O(N^2) combinations of two nodes.
Any help to steer me in the right direction would be appreciated.
I think what you are looking for is the KPP-Neg problem (Key Players Problem).
It is defined in terms of the extent to which the network depends on its key players to maintain its cohesiveness. It is a “negative” problem because it measures the amount of reduction in the cohesiveness of the network that would occur if the nodes were not present.
(As opposed to the KPP-Pos problem where you are looking for a set of network nodes that are optimally positioned to quickly diffuse information, attitudes, behaviors or goods).
Both KPP problems were defined in The key player problem [Borgatti, 2003] and Identifying sets of key players in a network [Borgatti, 2006]. See also "Key players" - a discussion by Yves Zenou here.
Many more approaches have been suggested since these papers were published. Just google "key players social networks".
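If you do end up needing the brute-force baseline for small graphs, a sketch of the pairwise KPP-Neg scoring could look like the following. This assumes networkx and uses a Borgatti-style fragmentation measure (the share of node pairs that can no longer reach each other); all names are placeholders, and for large graphs you would prune candidate pairs rather than score all O(N^2) of them.

```python
from itertools import combinations

import networkx as nx


def fragmentation(g):
    """1 minus the fraction of node pairs that can still reach each other."""
    n = g.number_of_nodes()
    if n < 2:
        return 0.0
    reachable_pairs = sum(len(c) * (len(c) - 1) for c in nx.connected_components(g))
    return 1.0 - reachable_pairs / (n * (n - 1))


def most_disruptive_pair(g):
    """Score every pair of nodes by the fragmentation increase caused by
    removing both of them (the KPP-Neg objective, restricted to pairs)."""
    baseline = fragmentation(g)
    best_pair, best_gain = None, -1.0
    for u, v in combinations(g.nodes(), 2):
        h = g.copy()
        h.remove_nodes_from([u, v])
        gain = fragmentation(h) - baseline
        if gain > best_gain:
            best_pair, best_gain = (u, v), gain
    return best_pair, best_gain


# Example: two 5-cliques joined through a single bridge node.
g = nx.barbell_graph(5, 1)
print(most_disruptive_pair(g))
```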
I am currently working on a school project for multiagent systems and I'm looking for recommendations for pathfinding and exploration algorithms. The relevant problem description is as follows:
a) The agent is in a 2D rectangular grid of known, fixed but arbitrary dimensions.
b) The agent can only move in one of four directions, one square per simulation time step.
c) The agent has a fixed sensor range. It is able to see, for example, a 2-square radius around itself - a 5x5 grid centred on the agent.
d) Cells may contain obstacles which block the agent's movement but not vision.
Obstacles are created at initialisation, are non-perishable, and no new obstacle will spawn mid-simulation.
e) All cells can be accessed from any adjacent cell.
f) Other in-world objects relevant to the agent's goals exist. They do not block the agent, but they spawn and despawn with a lifetime determined by a Normal/Gaussian distribution (parameters unknown to the agent, but it is allowed to estimate them from observations).
g) For a sense of the project's scale, the agent will be assessed in three scenarios: 50x50, 100x100, and a third whose dimensions are not disclosed to the developer.
There are three different types of scenarios for which I need to generate a path, and I've tried to identify a possible algorithm for each:
1) Shortest path to a known refuelling point. The map may not be fully explored yet. I'm considering using A* (a rough sketch of what I have in mind is at the end of this question).
2) Map exploration. The agent may, at any arbitrary position, choose to initiate exploration mode. I need to get the shortest path to SENSE all cells in one pass. Known obstacle cells can be ignored. Revisiting cells is allowed but should be minimised, as implied by shortest path. I'm considering BFS for this but I honestly have no idea. I tried Googling but the results are all about real robots with hardware sensors. I'm not dealing with hardware here.
3) A variation of the travelling salesman problem. The agent maintains two sets of waypoints, let's say A and B. When it chooses to do so, it will alternate between waypoints from each set. The agent needs to visit every complete A-B pair of waypoints along the shortest path or, if that is not possible due to situational limits on path length, take the shortest path that earns the highest score possible in that particular pass. The scoring function is the percentage of all spawned pairs that are visited.
Time does not pass and the world does not change while the agent is thinking and planning. Also, no penalty or constraint is imposed on the agent's execution time, so I can afford to look for optimal algorithms as opposed to greedy ones, and pruning of the plan's length is not necessary.
I am capable of tweaking general algorithms to fit my specific contexts and internal logic, but I need help to set me in the correct direction, especially for the latter two scenarios which seem to be common problems in my encounters but aren't mentioned in textbook scenarios. Thanks!
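For reference, this is a rough sketch of the plain grid A* I have in mind for scenario 1 (the grid representation and names are placeholders; Manhattan distance is the heuristic since movement is 4-way):

```python
import heapq


def astar_grid(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] is True where there is an obstacle.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for 4-way movement with unit step cost.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # (f, g, cell)
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        if g > g_cost[cell]:
            continue  # stale heap entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cell
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```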
Imagine that a graph has two relatively densely connected components that are only connected to each other by relatively few edges. How can I identify the components? I don't know the proper terminology, but the intuition is that a relatively densely connected subgraph is hanging onto another subgraph by a few threads. I want to identify these clumps that are only loosely connected to the rest of the graph.
If your graph represents a real-world system, this task is called community detection. You'll find many articles about that, starting with Fortunato's review (2010). He describes, amongst others, the min-cut based methods mentioned in the earlier answers.
There are also many posts on SO, such as:
Are there implementations of algorithms for community detection in graphs?
What are the differences between community detection algorithms in igraph?
Community detection in Networkx
People on Cross Validated also talk about community detection:
How to do community detection in a weighted social network/graph?
How to find communities?
Finally, there's a proposal in Area 51 for a new Network Science site, which would be more directly related to this problem.
You probably want sparsest cut instead of min cut -- unless you can identify several nodes in a component, min cuts have a tendency to be very unbalanced when the degrees are small. One of the common approaches to sparsest cut is to compute an eigenvector of the graph's Laplacian and threshold it.
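A rough sketch of that spectral idea, assuming a dense NumPy adjacency matrix (for large graphs you would switch to sparse eigensolvers; the example graph is made up):

```python
import numpy as np
from scipy.sparse import csgraph


def spectral_bisection(adjacency):
    """Split the nodes into two groups by thresholding the Fiedler vector."""
    laplacian = csgraph.laplacian(adjacency.astype(float), normed=True)
    # eigh returns eigenpairs in ascending order; column 1 is the Fiedler vector.
    _, eigenvectors = np.linalg.eigh(laplacian)
    fiedler = eigenvectors[:, 1]
    # Thresholding at the median gives a balanced split; zero gives the classic sign cut.
    return fiedler >= np.median(fiedler)


# Example: two triangles joined by a single edge; one side is {0, 1, 2}, the other {3, 4, 5}.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1
print(spectral_bisection(adj))
```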
The answer might be somewhat general, but you could try to model your problem as a flow problem and generate a minimum cut; see here. The edges could be bidirectional with capacity 1, and a resulting cut might well yield the desired partition.
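A minimal sketch of that flow formulation with networkx (node labels and the choice of source/sink are placeholders; you need one node you believe sits in each clump):

```python
import networkx as nx

g = nx.Graph()
# Two triangles joined by a single "thread" edge between nodes 2 and 3;
# every edge gets capacity 1 so the cut value counts crossing edges.
g.add_edges_from([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)], capacity=1)

cut_value, (side_a, side_b) = nx.minimum_cut(g, 0, 5)
print(cut_value, side_a, side_b)  # 1 {0, 1, 2} {3, 4, 5}
```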
The problem is finding an optimal route for a plane through four-dimensional winds (winds at differing heights that change as you travel: a predictive wind model).
I've used the traditional A* search algorithm and hacked it to get it to work in 3 dimensions and with wind vectors.
It works in a lot of the cases but is extremely slow (I'm dealing with huge numbers of data nodes) and doesn't work for some edge cases.
I feel like I've got it working "well", but it feels very hacked together.
Is there a better, more efficient way of path finding through data like this (maybe a genetic algorithm or neural network), or something I haven't even considered? Maybe fluid dynamics? I don't know.
Edit: further details.
Data is wind vectors (direction, magnitude).
Data is spaced 15x15km at 25 different elevation levels.
By "doesnt always work" I mean it will pick a stupid path for an aircraft because the path weight is the same as another path. Its fine for path finding but sub-optimal for a plane.
I take many things into account for each node change:
Cost of climbing versus descending.
Wind resistance.
Ignoring nodes with too high of a resistance.
Cost of diagonal travel vs. straight, etc.
I use Euclidean distance as my heuristic or H value.
I use various factors for my weight or G value (the list above).
Thanks!
You can always trade off time against optimality by using weighted A*.
Weighted A* (or A*-epsilon) is expected to find a path faster than A*, but the path won't be optimal. However, it gives you a bound on how far from optimal the path can be, as a function of your epsilon/weight.
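A minimal sketch of the weighted-A* idea, since the only change from plain A* is inflating the heuristic by a factor eps >= 1 (everything here is a placeholder; plug in your own neighbour, cost and heuristic functions):

```python
import heapq


def weighted_astar(start, goal, neighbours, cost, heuristic, eps=1.5):
    """neighbours(n) yields successors; cost(a, b) is the edge cost.
    The returned path costs at most eps times the optimal cost."""
    open_heap = [(eps * heuristic(start, goal), 0.0, start)]
    best_g = {start: 0.0}
    came_from = {start: None}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        if g > best_g[node]:
            continue  # stale heap entry
        for nxt in neighbours(node):
            ng = g + cost(node, nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                came_from[nxt] = node
                # f = g + eps * h -- the weighted-A* twist; eps = 1 is plain A*.
                heapq.heappush(open_heap, (ng + eps * heuristic(nxt, goal), ng, nxt))
    return None
```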
A* isn't advertised to be the fastest search algorithm; it does guarantee that the first solution it finds will be the best (assuming you provide an admissible heuristic). If yours isn't working for some cases, then something is wrong with some aspect of your implementation (maybe with the mechanics of A*, maybe the domain-specific parts; given you haven't provided any details, it's hard to say more than that).
If it is too slow, you might want to reconsider the heuristic you are using.
If you don't need an optimal solution, then some other technique might be more appropriate. Again, given how little you have provided about the problem, hard to say more than that.
Are you planning offline or online?
Typically for these problems you don't know what the winds are until you're actually flying through them. If this really is an online problem, you may want to consider trying to construct a near-optimal policy. There is quite a lot of research in this area already; one of the best references is "Autonomous Control of Soaring Aircraft by Reinforcement Learning" by John Wharington.
I'm working on a small house design project, and one of its most important parts is a section where the user can give some info about how he wants his rooms (for example, a 10 x 10 meter house with a 3x3 living room, a 3x3 kitchen, two 4x5 bedrooms, and a 4x2 bathroom), and then the program generates a map of the house according to the requirements given.
For now, I'm not worried about drawing the map, just arranging the rooms so that they don't overlap (yes, the output can be pretty ugly). I've already done some searching and found that what I want is very similar to the packing problem, which has some algorithms that handle it pretty well (although it's an NP-complete problem).
But then I had one more restriction: the user can specify "links" between rooms. For example, he may wish that a room must have a "door" to a bathroom, that the living room has direct access to the kitchen, etc. (that is, the rooms must be placed side by side), and this is where things get complicated.
I'm pretty sure that what I want is an NP-hard problem, so I'm asking for tips to construct a good, but not necessarily optimal, implementation. The idea I have is to use graphs to represent the relationships between rooms, but I can't figure out how to adapt the existing packing algorithms to fit this new restriction. Can anyone help me?
I don't have a full answer for you, but I do have a hint: Your connectivity constraints will form what is known as a planar graph (if they don't, the solution is impossible with a single-story house). Rooms in the final solution will correspond to areas enclosed by edges in the dual of the constraint graph, so all you need to do then is take said dual, and adjust the shape of its edges, without introducing intersections, to fit sizing constraints. Note that you will need to introduce a vertex to represent 'outside' in the constraint graph, and ensure it is not surrounded in the dual. You may also need to introduce additional edges in the constraint graph to ensure all the rooms are connected (and add rooms for hallways, etc).
You might find this interesting.
It's a grammar for constructing Palladian villas.
To apply something like that to your problem, I would have a way to construct one at random, and then be able to make random changes to it, and use a simulated annealing algorithm.
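A rough simulated-annealing skeleton for that idea (everything here is a placeholder: random_layout, perturb and cost stand in for your own "construct one at random", "make random changes" and a score that penalises overlaps and missing links between rooms):

```python
import math
import random


def anneal(random_layout, perturb, cost, steps=20000, t_start=1.0, t_end=1e-3):
    """Generic simulated annealing: minimise cost(layout)."""
    current = random_layout()
    current_cost = cost(current)
    best, best_cost = current, current_cost
    for i in range(steps):
        # Exponential cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = perturb(current)
        candidate_cost = cost(candidate)
        # Always accept improvements; accept worse layouts with a probability
        # that shrinks as the temperature drops.
        if candidate_cost <= current_cost or random.random() < math.exp(
            (current_cost - candidate_cost) / t
        ):
            current, current_cost = candidate, candidate_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
    return best, best_cost
```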