A graph problem - algorithm

I am struggling to solve the following problem
http://uva.onlinejudge.org/external/1/193.html
However, I'm not able to come up with a fast solution.
Judging by other solvers' times, there should be a solution of at most O(n^2) complexity:
http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=3&page=show_problem&problemid=129&page=problem_stats
Can I get some help?

You can only solve this in exponential time in the worst case, but that's not as bad as it sounds, because in practice you'll be able to prune a lot of bad choices and thus reduce the running time significantly.
In short, run a depth-first search from a node and try to color as many nodes black as you can. If a node has a neighboring black node, that node can only be white. Keep doing this for both possible colorings of each node.
If you can't figure it out, check these two code snippets I found by googling for the problem name: one and two. The authors say they got AC, but I haven't tested them; they look correct, though.
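In case it helps, here's a minimal Python sketch of that backtracking idea (the function name and the adjacency-dict input format are my own, not taken from those snippets):

    # Sketch: largest "black" set via backtracking. `adj` maps each node
    # to the set of its neighbours (assumed input format).
    def max_black(adj):
        nodes = list(adj)

        def dfs(i, black):
            if i == len(nodes):
                return len(black)
            v = nodes[i]
            best = dfs(i + 1, black)                       # colour v white
            if not (adj[v] & black):                       # no black neighbour yet
                best = max(best, dfs(i + 1, black | {v}))  # colour v black
            return best

        return dfs(0, frozenset())

For example, max_black({0: {1}, 1: {0, 2}, 2: {1}}) returns 2 (nodes 0 and 2). Still exponential in the worst case, but the black-neighbour check prunes heavily in practice.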

This is the maximum independent set problem, also called maximum stable set (equivalently, maximum clique on the complement graph). It is NP-complete. The fastest code I know of for small graphs is Cliquer: http://users.tkk.fi/pat/cliquer.html
If you are writing your own for educational purposes, it is probably most efficient to do a depth-first search, coloring nodes black one at a time and backtracking up the DFS whenever two black nodes touch.
The easiest solution to code is a binary counter that tries all 2^n possibilities, as sketched below.
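A brute-force sketch of that counter, under my own assumptions (nodes numbered 0..n-1, edges given as pairs):

    # Try all 2^n subsets; keep the largest with no edge inside it.
    # Only feasible for small n (roughly n <= 20).
    def brute_force(n, edges):
        best = 0
        for mask in range(1 << n):
            if all(not (mask >> u & 1 and mask >> v & 1) for u, v in edges):
                best = max(best, bin(mask).count("1"))
        return best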

Bron–Kerbosch algorithm
I solved a similar problem in Facebook Puzzles; I used the Bron–Kerbosch algorithm for that.
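For reference, a minimal Bron–Kerbosch sketch (without pivoting) that enumerates maximal cliques; since independent sets are exactly the cliques of the complement graph, you would run it on the complement for this kind of problem. The adjacency-dict format is my own assumption:

    # Yields every maximal clique. `adj` maps each vertex to the set of
    # its neighbours.
    def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset()):
        if p is None:
            p = frozenset(adj)
        if not p and not x:
            yield r                    # r cannot be extended: maximal clique
            return
        for v in list(p):
            yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}

The maximum clique is then max(bron_kerbosch(adj), key=len).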

Related

How to divide a connected weighted graph into N semi-equal subgraphs

I have a graph of several hundred nodes that are mostly connected with each other. I can run my processing on the entire graph, but it takes a lot of time, so I would like to divide it into smaller subgraphs of approximately similar size.
In other words: I have a collection of aerial images and I do pairwise image matching on all of them. As a result I get a set of matches for each pair (a pixel from the first image matched with a pixel in the second image). The number of matches is taken as the weight of this (undirected) edge. These edges then form the graph mentioned above.
I'm not that familiar with graph theory (it is a very broad topic). What is the best algorithm for this job?
Thank you.
Edit:
This problem has a perfect analogy which I think is easier to understand. Imagine you have a set of people and their connections/friendships, like a social network. Each friendship has a numeric value/weight representing how good friends they are. So in a large group of people, I want to get the k most interconnected sub-groups.
Unfortunately, the problem you're describing is almost certainly NP-hard. From a graph perspective, you have a graph where each edge has a weight on it, and you're trying to split the graph into pieces while minimizing the total cost of the edges cut. This is the minimum k-cut problem, which is NP-hard when k is part of the input. If you add the constraint that the pieces should be roughly even in size, you get the balanced k-cut (graph partitioning) problem, which is also NP-hard.
The good news is that there are nice approximation algorithms for these problems, so if you're looking for solutions that are just "good enough," then you can probably find a library somewhere that implements them. There are also other techniques like spectral clustering which work well in practice and are really fast, but which don't have any guarantees on how well they'll do.
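As an illustration, a minimal sketch using scikit-learn's spectral clustering on a precomputed similarity matrix (assuming your match counts are collected into a symmetric non-negative matrix W; note this gives no guarantee that the k groups come out balanced):

    # Hypothetical sketch: W[i][j] = number of matches between images
    # i and j (symmetric, non-negative); k = number of desired groups.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def split_graph(W, k):
        model = SpectralClustering(n_clusters=k, affinity="precomputed")
        return model.fit_predict(W)        # one group label per node

    W = np.array([[0, 9, 8, 1],
                  [9, 0, 7, 1],
                  [8, 7, 0, 2],
                  [1, 1, 2, 0]])
    print(split_graph(W, 2))               # e.g. [0 0 0 1]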

Random spanning trees of bipartite graphs

I'm working on making some code using metaheuristics for finding good solutions to the Fixed Charge Transportation Problem (FCTP).
The problem I'm having is generating a starting solution, based on finding a spanning tree of the underlying bipartite graph.
I want it to be a random spanning tree, so that I can run the procedure on the same problem multiple times, possibly getting different solutions.
I'm having some difficulties doing this. The approach I've gone for so far is to make a random permutation of the arcs and then iterate through this list, sequentially putting each arc into the basis if it won't create a cycle.
I need to find a fast method to check if including an arc will create a cycle. I don't want to "brute force" it, since this approach could take a large amount of time, given big problem instances.
Given that A is an array with a random permutation of the arcs, how would you go about making a selection procedure?
I've been working on this for a couple of hours now, and nothing I've tried has worked, most of it being nonsensical when it came to application...
Kruskal's algorithm is used for finding a minimum spanning tree. Fast cycle detection is not actually part of Kruskal's algorithm: the algorithm works with a data structure that finds cycles quickly just as well as with a slow naive attempt (although the overall complexity will differ).
However, Kruskal's algorithm is on the right track here, since it usually uses a so-called union-find or disjoint-set data structure for fast cycle detection. That is the part of the Wikipedia page on Kruskal's algorithm that you will need for your algorithm; it is also linked from Wikipedia: http://en.wikipedia.org/wiki/Disjoint-set_data_structure
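A minimal Python sketch of that combination, assuming nodes are numbered 0..n-1 and arcs are (u, v) pairs (the names are mine, not from any particular library):

    import random

    # Disjoint-set (union-find) with path halving and union by size.
    class DisjointSet:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return False      # already connected: arc would close a cycle
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]
            return True

    # Shuffle the arcs, then keep every arc that joins two components.
    def random_spanning_tree(n, arcs):
        ds = DisjointSet(n)
        arcs = list(arcs)
        random.shuffle(arcs)
        return [(u, v) for (u, v) in arcs if ds.union(u, v)]

Each find/union is near-constant amortized, so after the shuffle the whole selection pass is essentially linear in the number of arcs.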
I found Kruskal's algorithm after long hours of research. I only needed to randomize the order in which I investigated the arcs of the graph.

Fastest path to walk over all given nodes

I'm coding a simple game and am currently doing the AI part. The NPC gets a list of 'interest points' which he needs to visit. Each point has a coordinate on the map. I need to find the fastest path for the character to visit all of the given points.
As far as I understand it, the task could be described as 'finding the fastest traversal path in a strongly connected weighted undirected graph'.
I'd like to get either the name of some algorithm that calculates this or, if there is no name, some key points for programming it myself.
Thanks in advance.
This is very similar to the Travelling Salesman Problem, although I'm not going to try to prove equivalence offhand. TSP is NP-complete, which means that solving the problem exactly may be impractical, depending on the number of interest points. There are approximation algorithms that you may find more useful; one is sketched below.
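For example, the nearest-neighbour heuristic is trivial to implement and often good enough for game AI. A sketch under my own assumptions (points as (x, y) tuples, Euclidean cost, Python 3.8+ for math.dist):

    import math

    # Always walk to the closest unvisited interest point. Fast, but the
    # resulting tour can be noticeably longer than the optimal one.
    def visit_order(start, points):
        order, rest = [start], set(points)
        rest.discard(start)
        while rest:
            here = order[-1]
            nxt = min(rest, key=lambda p: math.dist(here, p))
            order.append(nxt)
            rest.remove(nxt)
        return order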
See previous post regarding tree traversals:
Tree traversal algorithm for directory structures with a lot of files
I would use an algorithm like the ant colony optimization algorithm.
Not directly on point but what I did in an MMO emulator was to store waypoint indices along with the rest of the pathing data. If your requirement is to demonstrate solutions to TSP then ignore this. If not, it's worth consideration IMO.
In my case it was the best solution, as otherwise the server could have potentially hundreds of mobs (re)spawning and, along with all the other AI logic, would have to burn cycles computing route logic.

Algorithm to compute the optimal layout of an n-ary tree?

I am looking for an algorithm that will automatically arrange all the nodes in an n-ary tree so that no nodes overlap and not too much space is wasted. The user will be able to add nodes at runtime and the tree must auto-arrange itself. Also note that the trees could get fairly large (a few thousand nodes).
The algorithm has to work in real time, meaning the user cannot notice any pausing.
I have tried Google but I haven't found any substantial resources, any help is appreciated!
I took a look at this problem a while back and ultimately decided to change my goal from a directed acyclic graph (DAG) to a general graph, due to the complexities I encountered.
That being said, have you looked at the Sugiyama algorithm for graph layout?
If you're not looking to roll your own, I came across yFiles, which did the job quite nicely (a bit on the pricey side though, so I ended up doing exactly that: rolling my own).
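If you do roll your own, a very simple layered layout is enough for a first version: leaves take consecutive horizontal slots and each internal node is centred over its children. A sketch under my own assumptions about the node structure (each node has a .children list and is hashable); Reingold–Tilford or Sugiyama give tighter layouts:

    # O(n) layout: y is the depth, x comes from leaf slots. Nodes at the
    # same depth never overlap because subtrees occupy disjoint slot ranges.
    def layout(root):
        pos = {}                           # node -> (x, depth)
        next_x = 0

        def place(node, depth):
            nonlocal next_x
            if not node.children:          # leaf: take the next free slot
                x = next_x
                next_x += 1
            else:
                xs = [place(child, depth + 1) for child in node.children]
                x = (xs[0] + xs[-1]) / 2   # centre over the children
            pos[node] = (x, depth)
            return x

        place(root, 0)
        return pos

Because it is linear, re-running it on every insertion should stay well within real-time limits for a few thousand nodes.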

Calculating "Kevin Bacon" Numbers

I've been playing around with some things and thought up the idea of trying to figure out Kevin Bacon numbers. I have data for a site that for this purpose we can consider a social network. Let's pretend that it's Facebook (for simplification of discussion). I have people and I have a list of their friends, so I have the connections between them. How can I calculate the distance from one person to another (basically, a Kevin Bacon number)?
My best idea is a Bidirectional search, with a depth limit (to limit computational complexity and avoid the problem of people who simply can't be connected in the graph), but I realize this is rather brute force.
Could it be better to make little sub-graphs (say something equivalent to groups on Facebook), calculate the shortest distances between them (ahead of time, perhaps) and then try to use THOSE to find a link? While this requires pre-calculation, it could make it possible to search many fewer nodes (nodes could be groups instead of individuals, making the graph much smaller). This would still be a bidirectional search though.
I could also pre-calculate the number of people an individual is connected to, searching the nodes for "popular" people first since they could have the best chance of connecting to the given destination individual. I realize this would be a trade-off of speed for possible shortest path. I'd think I'd also want to use a depth-first search instead of the breadth-first search I'd plan to use in the other cases.
Can someone think of a simpler/faster way of doing this? I'd like to be able to find the shortest length between two people, so it's not as easy as always having the same end point (such as in the Kevin Bacon problem).
I realize that there are problems, like the fact that I could get chains of 200 people, but that can be solved by having a limit to the depth I'm willing to search.
This is a standard shortest path problem. There are lots of solutions, including Dijkstra's algorithm and Bellman-Ford. You may be particularly interested in looking at the A* algorithm and seeing how it would perform with the cost function relative to the inverse of any particular node's degree. The idea would be to visit more popular nodes (those with higher degree) first.
Sounds like a job for Dijkstra's algorithm.
Edit: Eh, I shouldn't have pulled the trigger so fast. Dijkstra's (and Bellman-Ford) reduce to a breadth-first search when all the weights are 1, so this isn't too useful. Oh well.
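Which is fine, because with unit weights plain BFS already gives shortest paths. A minimal sketch, with `friends` as a dict from each person to an iterable of their friends (the names are mine):

    from collections import deque

    # Breadth-first search distance in an unweighted friendship graph;
    # returns None if the two people are not connected.
    def bacon_number(friends, src, dst):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            person = queue.popleft()
            if person == dst:
                return dist[person]
            for friend in friends.get(person, ()):
                if friend not in dist:
                    dist[friend] = dist[person] + 1
                    queue.append(friend)
        return None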
The A* algorithm, mentioned by tvanfosson, may be ideal for this. The idea is that instead of searching and recursing in whatever order the elements appear in each level of the tree (rooted at your start- or end-point), you use some heuristic to determine which element to try first. In your case a good bet would probably be a node's degree (number of "friends"), but you could also use the number of people within some number of degrees of a given person (i.e., the guy who has three friends who each have 100 friends is likely to be a better node than the guy who has 20 friends in a clique that shuns outsiders). There are all sorts of other things you could use as a heuristic (friends get 2 points, friends-of-friends get 1 point; whatever, experiment).
Combine this with a depth limit (cut off after 6 degrees of separation, or whatever), and you can vastly improve your average case (worst case is still the same as basic BFS).
Run a breadth-first search in both directions (from each endpoint) and stop when you have a connection or reach your depth limit.
This one might be better overall: Floyd–Warshall, which computes all-pairs shortest distances; a sketch follows.
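A hedged sketch, assuming nodes are numbered 0..n-1 and friendships are unweighted (u, v) pairs; at O(n^3) time and O(n^2) memory it only pays off for small networks with many distance queries:

    INF = float("inf")

    # Floyd-Warshall: dist[i][j] = shortest distance between i and j.
    def floyd_warshall(n, edges):
        dist = [[INF] * n for _ in range(n)]
        for i in range(n):
            dist[i][i] = 0
        for u, v in edges:                 # undirected, unit weight
            dist[u][v] = dist[v][u] = 1
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist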
