I'm trying to implement an "intelligent scissors" tool for interactive image segmentation. For that, I have to create a directed graph from an image where each vertex represents a single pixel. Each vertex is then connected to each of its neighbours by two edges: one outgoing and one incoming edge, since the cost of an edge (a,b) may differ from the cost of (b,a). I'm using images with a size of 512*512 pixels, so I need to create a graph with 262144 vertices and 2091012 edges. Currently, I'm using the following graph:
typedef property<vertex_index_t, int,
property<vertex_distance_t, double,
property<x_t, int,
property<y_t, int
>>>> VertexProperty;
typedef property<edge_weight_t, double> EdgeProperty;
// define MyGraph
typedef adjacency_list<
vecS, // container used for the out-edges (vecS = vector)
vecS, // container used for the vertices (vector)
directedS, // directed edges (not sure if this is the right choice for incidenceGraph)
VertexProperty,
EdgeProperty
> MyGraph;
I'm using an additional class Graph (sorry for the uninspired naming) which handles the graph:
class Graph
{
private:
MyGraph *graph;
property_map<MyGraph, vertex_index_t>::type indexmap;
property_map<MyGraph, vertex_distance_t>::type distancemap;
property_map<MyGraph, edge_weight_t>::type weightmap;
property_map<MyGraph, x_t>::type xmap;
property_map<MyGraph, y_t>::type ymap;
std::vector<MyGraph::vertex_descriptor> predecessors;
public:
Graph();
~Graph();
};
Creating a new graph with 262144 vertices is pretty fast, but inserting the edges takes up to 10 seconds, which is way too slow for the desired application. Right now, I'm inserting the edges the following way:
tie(vertexIt, vertexEnd) = vertices(*graph);
for(; vertexIt != vertexEnd; vertexIt++){
vertexID = *vertexIt;
x = vertexID % 512;
y = (vertexID - x) / 512;
xmap[vertexID] = x;
ymap[vertexID] = y;
if(y > 0){
if(x > 0){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y-1)+(x-1)], *graph); // upper left neighbour
}
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y-1)+(x)], *graph); // upper
if(x < 511){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y-1)+(x+1)], *graph); // upper right
}
}
if(x < 511){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y)+(x+1)], *graph); // right
}
if(y < 511){
if(x > 0){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y+1)+(x-1)], *graph); // lower left
}
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y+1)+(x)], *graph); // lower
if(x < 511){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y+1)+(x+1)], *graph); // lower right
}
}
if(x > 0){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y)+(x-1)], *graph); // left
}
}
Is there anything I can do to improve the speed of the program? I'm using Microsoft Visual C++ 2010 Express in release mode with optimization enabled (as recommended by Boost). I thought I could use a listS container for the vertices or edges, but the vertices are no problem, and if I use listS for the edges, it gets even slower.
adjacency_list is very general purpose; unfortunately it's never going to be as efficient as a solution exploiting the regularity of your particular use-case could be. BGL isn't magic.
Your best bet is probably to come up with the efficient graph representation you'd use in the absence of BGL (hint: for a graph of an image's neighbouring pixels, this is not going to explicitly allocate all those node and edge objects) and then fit BGL to it (example), or equivalently just directly implement a counterpart to the existing adjacency_list / adjacency_matrix templates (concept guidelines) tuned to the regularities of your system.
By an optimised representation, I of course mean one in which you don't actually store all the nodes and edges explicitly but just have some way of iterating over enumerations of the implicit nodes and edges arising from the fact that the image is a particular size. The only thing you should really need to store is an array of edge weights.
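To make that concrete, here is a rough sketch of such an implicit representation (names and layout are my own, not from BGL or any intelligent-scissors code): an 8-connected W×H grid where the only storage is one weight per (pixel, direction), and neighbours are enumerated on the fly from the image dimensions. Wrapping these loops in iterator types is essentially what fitting the BGL IncidenceGraph concept to it would amount to.

#include <cstddef>
#include <vector>

// Sketch of a hypothetical implicit 8-connected grid "graph" (illustration
// only, not a BGL type): no vertex or edge objects are stored, just one
// weight per (pixel, direction). A vertex is simply the pixel index y*W + x.
class GridGraph {
public:
    GridGraph(int w, int h)
        : W(w), H(h), weight(static_cast<std::size_t>(w) * h * 8, 0.0) {}

    int index(int x, int y) const { return y * W + x; }

    // Weight of the directed edge leaving pixel v in direction d (0..7).
    double& edge_weight(int v, int d) {
        return weight[static_cast<std::size_t>(v) * 8 + d];
    }

    // Enumerate the out-edges of pixel v on the fly instead of storing them.
    template <class F>
    void for_each_out_edge(int v, F f) const {
        static const int dx[8] = { -1, 0, 1, -1, 1, -1, 0, 1 };
        static const int dy[8] = { -1, -1, -1, 0, 0, 1, 1, 1 };
        const int x = v % W, y = v / W;
        for (int d = 0; d < 8; ++d) {
            const int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            f(index(nx, ny), weight[static_cast<std::size_t>(v) * 8 + d]);
        }
    }

private:
    int W, H;
    std::vector<double> weight;
};

A Dijkstra run over a structure like this never allocates per-edge objects; the 10-second edge-insertion phase disappears entirely because there is nothing to insert.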
A path, for the purposes of this question, is a collection of points with integer coordinates v1, v2, v3 ... vn such that v1 is connected to v2, v2 is connected to v3, and so on. The path is non-cyclic and does not have any branches. (Two points u and v are connected if the absolute difference of either their x or their y coordinates is equal to 1.)
We say there is a possible segment between vi and vj if they satisfy some criteria that are irrelevant to this question.
ci represents the farthest point on the path in the forward direction such that there is a possible segment between vi and ci. (ci lies ahead of vi)
di represents the farthest point on the path in the backward direction such that there is a possible segment between vi and di. (vi lies ahead of di)
Note: If there is a possible segment between u and v then there is a possible segment between any of its sub segments.
The values of ci and di are already calculated for each i.
For each pair vi and vj there is an associated penalty, which has also been calculated for each i and j.
A sequence in a path is a collection of points of the path u1, u2, u3 ... um (not necessarily connected) such that u1 = v1, um = vn and there is a possible segment between each ui and ui+1.
The number of segments in such a sequence is (m-1).
The problem is to find the optimal sequence: one that has the minimum possible number of segments and, among all sequences with that many segments, the minimum sum of penalties between consecutive points.
This problem is solved in a program called potrace, which I am trying to adapt, but that implementation uses cyclic paths while mine are non-cyclic.
I also cannot understand how the potrace implementation below works in the first place.
In the implementation below clip0[i] represents ci and clip1[i] represents di.
In potrace implementation cyclic means v1 and vn are also connected in the path.
Source Line 575
Documentation 2.2.4
/* calculate seg0[j] = longest path from 0 with j segments */
i = 0;
for (j=0; i<n; j++) {
seg0[j] = i;
i = clip0[i];
}
seg0[j] = n;
m = j;
/* calculate seg1[j] = longest path to n with m-j segments */
i = n;
for (j=m; j>0; j--) {
seg1[j] = i;
i = clip1[i];
}
seg1[0] = 0;
/* now find the shortest path with m segments, based on penalty3 */
/* note: the outer 2 loops jointly have at most n iterations, thus
the worst-case behavior here is quadratic. In practice, it is
close to linear since the inner loop tends to be short. */
pen[0]=0;
for (j=1; j<=m; j++) {
for (i=seg1[j]; i<=seg0[j]; i++) {
best = -1;
for (k=seg0[j-1]; k>=clip1[i]; k--) {
thispen = penalty3(pp, k, i) + pen[k];
if (best < 0 || thispen < best) {
prev[i] = k;
best = thispen;
}
}
pen[i] = best;
}
}
pp->m = m;
SAFE_CALLOC(pp->po, m, int); // output
/* read off shortest path */
for (i=n, j=m-1; i>0; j--) {
i = prev[i];
pp->po[j] = i;
}
A sample input can be this.
EDIT 1:
So when I implemented the same code for my case, the last loop broke: the index j either became negative (without self-looping), or i = prev[i] kept self-looping.
The penalty values are positive.
EDIT 2:
I coded a rough version of Dijkstra's algorithm and it seems to be working.
I am providing my relevant bit of code below.
using Weight = std::pair<int, float>;
std::vector<std::vector<std::pair<int, Weight>>> graph;
graph.resize(n);
/*This takes O(n^2).*/
for (int i = 0; i < n; ++i) {
for (int j = clip1[i]; j <= clip0[i]; ++j) {
float pen = calculatePenalty(index, i, j);
graph[i].emplace_back(j, Weight(1, pen));
graph[j].emplace_back(i, Weight(1, pen));
}
}
std::vector<bool> vis(n, false);
std::vector<Weight> dist(n, {10e5 + 1, 0.0f});
std::vector<int> prev(n, 0);
dist[0] = {0, 0.0f};
std::multiset<std::pair<Weight, int>> set;
set.insert({{0, 0.0f}, 0});
while (!set.empty()) {
auto p = *set.begin();
set.erase(set.begin());
int x = p.second;
Weight w0 = p.first;
if (vis[x]) continue;
vis[x] = true;
for (auto v : graph[x]) {
int e = v.first;
Weight w = v.second;
Weight w_ = {dist[x].first + w.first, dist[x].second + w.second};
if (w_ < dist[e]) {
prev[e] = x;
dist[e] = w_;
set.insert({dist[e], e});
}
}
}
for (int i = n - 1; i > 0;) {
seq.push_back(i);
i = prev[i];
}
seq.push_back(0);
If there are any errors in the above code, please correct them.
I think a number of improvements can be made in the above code.
The initialization of the graph itself has O(n^2) complexity. There should be an alternative way to do this part, or to avoid it entirely.
It's also not as compact as the potrace counterpart. A more compact implementation with better time complexity seems possible; if someone could provide some pseudocode in that direction, that would be appreciated.
Also, in the potrace implementation the number of segments seems to be precisely m. But when I compute m in my case and compare it with seq.size() - 1, they are not equal. (It is sometimes greater and sometimes less in different cases, but never by a large margin.)
The problem you're describing is the (single-source single-destination) shortest-path problem, where an edge's weight is (1, penalty) (and weights are summed elementwise and ordered lexicographically, so minimizing the number of edges is the first priority and minimizing the total penalty is the second). You can solve this problem in near-linear time with Dijkstra's algorithm if all your penalties are positive (or zero). In this case, you can prove that the shortest path will never repeat any vertices.
potrace's implementation looks roughly like the Bellman-Ford algorithm (in its dynamic-programming interpretation), which is a good approach if you have a mixture of positive and negative penalties (but unnecessarily slow if you have only positive penalties). In that case, the shortest path might repeat vertices, but when that happens, the path will actually repeat some vertices (a negative-weight cycle) infinitely many times, which is probably not what you want.
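For what it's worth, here is a rough sketch of that Dijkstra variant in the spirit of your Edit 2 (clip0[i] and a penalty function standing in for your calculatePenalty are assumed to come from your surrounding code; the other names are mine). Like the potrace formulation it only generates forward edges i -> j with i < j <= clip0[i], so nothing quadratic has to be stored up front.

#include <algorithm>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Weight = (number of segments, summed penalty). std::pair's operator<
// compares lexicographically, so fewer segments always wins and the summed
// penalty only breaks ties.
typedef std::pair<int, double> Weight;

std::vector<int> shortestSequence(int n, const std::vector<int>& clip0,
                                  const std::function<double(int, int)>& penalty) {
    const Weight INF(n + 1, 0.0);
    std::vector<Weight> dist(n, INF);
    std::vector<int> prev(n, -1);
    typedef std::pair<Weight, int> State;               // (distance, vertex)
    std::priority_queue<State, std::vector<State>, std::greater<State> > pq;

    dist[0] = Weight(0, 0.0);
    pq.push(State(dist[0], 0));
    while (!pq.empty()) {
        State top = pq.top(); pq.pop();
        int i = top.second;
        if (dist[i] < top.first) continue;              // stale queue entry
        for (int j = i + 1; j < n && j <= clip0[i]; ++j) {  // forward edges only
            Weight cand(dist[i].first + 1, dist[i].second + penalty(i, j));
            if (cand < dist[j]) {
                dist[j] = cand;
                prev[j] = i;
                pq.push(State(cand, j));
            }
        }
    }
    std::vector<int> seq;                               // read off 0 .. n-1
    for (int i = n - 1; i != -1; i = prev[i]) seq.push_back(i);
    std::reverse(seq.begin(), seq.end());
    return seq;
}

Enumerating the forward edges per vertex replaces the O(n^2) adjacency-list construction from Edit 2; the worst case is still quadratic in time (as potrace's own comment notes), but nothing quadratic is kept in memory.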
I have used std::set to implement a line sweep algorithm for vertical and horizontal lines. But the final range search between the lower bound and upper bound of the 'status' set takes a lot of time. Is there some way to avoid this? I chose std::set because it is based on a balanced BST, so insertion, deletion and search take O(log n) time. Is there a better data structure for this?
// Before this I initialize the events set with segments in increasing x order. The segment struct has two point members and one type member identifying a vertical segment (1), the start of a horizontal segment (0) and its end (2).
for(auto iter = events.begin(); iter != events.end(); iter++)
{
segment temp = *iter;
if(temp.type == 0)
status.insert(temp.p1);
else if(temp.type == 2)
status.erase(temp.p2);
else
{
auto lower = status.lower_bound(std::make_pair(temp.p1.x, temp.p1.y));
auto upper = status.upper_bound(std::make_pair(temp.p2.x, temp.p2.y));
// Can the no of elements in the interval be found without this for loop
for(;lower != upper; lower++)
{
count++;
}
}
}
Here events and status are sets of segment structs and points, respectively.
typedef std::pair<int, int> point;
struct segment
{
point p1, p2;
int type;
segment(point a, point b, int t)
:p1(a), p2(b), type(t){}
};
std::set<segment, segCompare> events;
...
std::set<point, pointCompare> status;
In order to compute the distance efficiently, the tree would need to maintain size counts for each sub-tree. Since that service is not needed in most cases, it is not too surprising that std::set does not incur its cost for everyone.
I haven't found anything in the C++ standard library that will do this off the shelf. I think you may need to roll your own in this case, or find someone else who has.
If you do batch insertions of the events, use a std::vector that is always sorted. There is no difference in asymptotic runtime, which is O(n log n) for both, for a batch of n insertions.
This lets you do iterator arithmetic among other things.
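A sketch of that idea (identifiers are mine; pointCompare is the comparator from the question, whose definition isn't shown): keep the status in a vector that stays sorted under the same ordering, and answer the range-count query with two binary searches plus an O(1) iterator subtraction instead of walking from lower to upper.

#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

typedef std::pair<int, int> point;       // as in the question

std::vector<point> status;               // kept sorted under pointCompare

// Insert a point, keeping the vector sorted (O(n) worst case per insertion
// because of the shift, but contiguous memory keeps the constant small).
void insertPoint(const point& p) {
    status.insert(std::lower_bound(status.begin(), status.end(), p, pointCompare()), p);
}

void erasePoint(const point& p) {
    std::vector<point>::iterator it =
        std::lower_bound(status.begin(), status.end(), p, pointCompare());
    if (it != status.end() && !pointCompare()(p, *it))   // *it is equivalent to p
        status.erase(it);
}

// Number of active points in [lo, hi]: no per-element loop as with std::set.
std::ptrdiff_t countInRange(const point& lo, const point& hi) {
    return std::upper_bound(status.begin(), status.end(), hi, pointCompare()) -
           std::lower_bound(status.begin(), status.end(), lo, pointCompare());
}

If you are compiling with GCC, another option is the non-standard __gnu_pbds::tree with tree_order_statistics_node_update, whose order_of_key gives the same count in O(log n) while keeping O(log n) insert and erase.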
I have a table of vertices and edges, and from these tables I created a Boost graph. Each of the vertices and edges has its id assigned to it, and the edges also contain a length. Now I want to prune the graph by removing nodes. My algorithm works by creating a num_vertices x num_vertices matrix. My problem is how to associate my matrix with the boost::vertices, that is, how do I know which matrix column corresponds to which vertex in the graph, since the matrix has no id? I hope I am not overcomplicating this.
void Nodekiller::build_matrix(){
int ndsize=num_vertices(graph);
double matrixtb[ndsize][ndsize];
for(int i=0; i<ndsize;i++){
for (int j=0;j<ndsize; j++){
if(i==j) {matrixtb[i][j]=0;}
else {
matrixtb[i][j]=addEdgeValue(); //if none add random value
}
}
}
}
// I want to sum each column and then prioritize the columns based on the resulting values.
So I don't know how to associate the boost::vertices(graph) with the matrix in order to be able to prune the graph.
The question is not very clear. Do I understand right:
You have a boost graph
You create a matrix from that graph?
So a first trivial question (maybe outside of the scope): do you really need two representations of the same graph? One as a boost::graph, and another as your matrix?
You can add and remove edges from a boost::graph easily. The easiest representation is the adjacency list: http://www.boost.org/doc/libs/1_55_0/libs/graph/doc/adjacency_list.html
Maybe a starting point could be this answer: adding custom vertices to a boost graph
You can create all your nodes, iterate over every pair of nodes, and add an edge only if the two nodes are different. Something like:
boost::graph_traits<Graph>::vertex_iterator vi, vi_end, vi2, vi2_end;
for (boost::tie(vi, vi_end) = boost::vertices(g); vi != vi_end; ++vi) {
    for (boost::tie(vi2, vi2_end) = boost::vertices(g); vi2 != vi2_end; ++vi2) {
        if (*vi != *vi2) {
            boost::graph_traits<Graph>::edge_descriptor e;
            bool b;
            boost::tie(e, b) = boost::add_edge(*vi, *vi2, g);
            if (b)
                g[e] = addEdgeValue(); // assumes a bundled edge property
        }
    }
}
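As for associating the matrix with the vertices: a sketch of the usual approach (names are mine), using the vertex_index property map. With vecS vertex storage the index of a vertex is just its descriptor, an integer in 0 .. num_vertices(g)-1, so it can be used directly as the matrix row/column:

#include <boost/graph/adjacency_list.hpp>
#include <cstddef>
#include <vector>

// Sum each column of a num_vertices x num_vertices matrix, indexing the
// columns by the vertices' indices so the results map back to vertices.
template <class G>
std::vector<double> columnSums(const G& g,
                               const std::vector<std::vector<double> >& matrixtb) {
    typename boost::property_map<G, boost::vertex_index_t>::const_type
        index = boost::get(boost::vertex_index, g);
    std::vector<double> sums(boost::num_vertices(g), 0.0);

    typename boost::graph_traits<G>::vertex_iterator vi, vi_end;
    for (boost::tie(vi, vi_end) = boost::vertices(g); vi != vi_end; ++vi) {
        std::size_t col = boost::get(index, *vi);        // this vertex's column
        for (std::size_t row = 0; row < matrixtb.size(); ++row)
            sums[col] += matrixtb[row][col];
    }
    return sums;
}

The prioritised column sums can then be mapped straight back to vertices to remove, simply by using each sum's position as the vertex index (with listS storage you would keep an explicit index map instead).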
Okay, so here's my algorithm for finding a cut in a graph (I'm not talking about a min cut here).
Say we're given an adjacency list of a non-directed graph.
Choose any vertex of the graph (let this be denoted by pivot).
Choose any other vertex of the graph at random (denote this by x).
If the two vertices have an edge between them, remove that edge from the graph and dump all the vertices that x is connected to onto pivot. (If not, go back to step 2.)
If any other vertices were connected to x, change the adjacency list so that x is replaced by pivot, i.e. they're now connected to pivot.
If the number of vertices is greater than 2, go back to step 2.
If it is equal to 2, just count the number of entries in the adjacency list of either of the two remaining vertices. This gives the cut.
My question is, is this algorithm correct?
That is a nice explanation of Karger's Min-Cut Algorithm for undirected graphs.
I think there might be one detail you missed, or perhaps I just misread your description.
You want to remove all self-loops.
For instance, after you remove a vertex and run through your algorithm, Vertex A may now have an edge that goes from Vertex A to Vertex A. This is called a self-loop, and they are generated frequently in the process of contracting two vertices. As a first step, you can simply check the whole graph for self-loops, though there are some more sophisticated approaches.
Does that make sense?
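For illustration, a minimal sketch of one contraction step on a multigraph stored as plain adjacency lists (my own naming, not code from the question): merge x into pivot, rename x everywhere, and then drop the self-loops the merge created.

#include <algorithm>
#include <cstddef>
#include <vector>

// adj[v] lists the neighbours of v; parallel edges are kept as duplicates,
// as Karger's contraction requires.
typedef std::vector<std::vector<int> > MultiGraph;

// Contract the edge (pivot, x): x's edges are re-attached to pivot, every
// occurrence of x elsewhere is renamed to pivot, and self-loops are removed.
void contract(MultiGraph& adj, int pivot, int x) {
    adj[pivot].insert(adj[pivot].end(), adj[x].begin(), adj[x].end());
    adj[x].clear();

    for (std::size_t v = 0; v < adj.size(); ++v)
        std::replace(adj[v].begin(), adj[v].end(), x, pivot);

    // Edges that ran between pivot and x are now pivot-pivot self-loops.
    adj[pivot].erase(std::remove(adj[pivot].begin(), adj[pivot].end(), pivot),
                     adj[pivot].end());
}

The caller still has to track which vertices are alive (x's list is simply left empty here) and repeat the random contraction until only two live vertices remain.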
I'll only change your randomization.
After choosing the first vertex, choose another one from its adjacency list. Now you are sure the two vertices have an edge between them. The next step is then just finding that vertex in the adjacency list.
Agreed that you should definitely remove self-loops.
Another point I want to add: after you randomly choose the first vertex, you don't have to keep choosing random nodes until you hit one that is connected to it. You can simply choose from the nodes connected to the first vertex, because you know how many nodes the first one connects to, so it becomes a second random selection within a smaller range. This is effectively just choosing a random edge (determined by its two endpoints). I have some C# code implementing Karger's algorithm you can play around with. It's not the most efficient code (in particular, a more efficient data structure could be used); on a 200-node graph, 10000 iterations take about 30 seconds to run.
using System;
using System.Collections.Generic;
using System.Linq;
namespace MinCut
{
internal struct Graph
{
public int N { get; private set; }
public readonly List<int> Connections;
public Graph(int n) : this()
{
N = n;
Connections = new List<int>();
}
public override bool Equals(object obj)
{
return Equals((Graph)obj);
}
public override int GetHashCode()
{
return base.GetHashCode();
}
private bool Equals(Graph g)
{
return N == g.N;
}
}
internal sealed class GraphContraction
{
public static void Run(IList<Graph> graphs, int i)
{
var liveGraphs = graphs.Count;
if (i >= liveGraphs)
{
throw new Exception("Wrong random index generation; index cannot be larger than the number of nodes");
}
var leftV = graphs[i];
var r = new Random();
var index = r.Next(0, leftV.Connections.Count);
var rightV = graphs.Where(x=>x.N == leftV.Connections[index]).Single();
foreach (var v in graphs.Where(x => !x.Equals(leftV) && x.Connections.Contains(leftV.N)))
{
v.Connections.RemoveAll(x => x == leftV.N);
}
foreach (var c in leftV.Connections)
{
if (c != rightV.N)
{
rightV.Connections.Add(c);
int c1 = c;
graphs.Where(x=> x.N == c1).First().Connections.Add(rightV.N);
}
}
graphs.Remove(leftV);
}
}
}
In short, I need a fast algorithm to count how many acyclic paths are there in a simple directed graph.
By simple graph I mean one without self loops or multiple edges.
A path can start from any node and must end on a node that has no outgoing edges. A path is acyclic if no edge occurs twice in it.
My graphs (empirical datasets) have only between 20-160 nodes; however, some of them have many cycles, therefore there will be a very large number of paths, and my naive approach is simply not fast enough for some of the graphs I have.
What I'm doing currently is "descending" along all possible edges using a recursive function, while keeping track of which nodes I have already visited (and avoiding them). The fastest solution I have so far is written in C++ and uses a std::bitset argument in the recursive function to keep track of which nodes have already been visited (visited nodes are marked by bit 1). This program runs on the sample dataset in 1-2 minutes (depending on computer speed). With other datasets it takes more than a day to run, or apparently much longer.
The sample dataset: http://pastie.org/1763781
(each line is an edge-pair)
Solution for the sample dataset (first number is the node I'm starting from, second number is the path-count starting from that node, last number is the total path count):
http://pastie.org/1763790
Please let me know if you have ideas about algorithms with a better complexity. I'm also interested in approximate solutions (estimating the number of paths with some Monte Carlo approach). Eventually I'll also want to measure the average path length.
Edit: also posted on MathOverflow under same title, as it might be more relevant there. Hope this is not against the rules. Can't link as site won't allow more than 2 links ...
This is #P-complete, it seems (ref: http://www.maths.uq.edu.au/~kroese/ps/robkro_rev.pdf). The linked paper also has an approximation.
If you can relax the simple path requirement, you can efficiently count the number of paths using a modified version of Floyd-Warshall or graph exponentiation as well. See All pairs all paths on a graph
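For example, if counting walks of bounded length (paths that may revisit vertices) is acceptable, the counts come straight out of powers of the adjacency matrix, since (A^k)[i][j] is the number of walks of length k from i to j. A rough sketch under that relaxation:

#include <cstddef>
#include <vector>

typedef std::vector<std::vector<unsigned long long> > Matrix;

// Multiply two n x n matrices of walk counts.
Matrix multiply(const Matrix& a, const Matrix& b) {
    std::size_t n = a.size();
    Matrix c(n, std::vector<unsigned long long>(n, 0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k)
            if (a[i][k])
                for (std::size_t j = 0; j < n; ++j)
                    c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Total number of walks of length 1..maxLen from `start` to every vertex,
// obtained by accumulating successive powers of the adjacency matrix A.
std::vector<unsigned long long> countWalks(const Matrix& A, std::size_t start, int maxLen) {
    std::size_t n = A.size();
    std::vector<unsigned long long> total(n, 0);
    Matrix P = A;                               // P = A^1
    for (int len = 1; len <= maxLen; ++len) {
        for (std::size_t j = 0; j < n; ++j)
            total[j] += P[start][j];
        if (len < maxLen) P = multiply(P, A);   // P = A^(len+1)
    }
    return total;
}

This counts walks rather than the acyclic paths asked about, and the counts overflow 64-bit integers quickly on graphs with many cycles, so it is only a fallback for the relaxed version of the problem.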
As mentioned by spinning_plate, this problem is #P-complete, so start looking for approximations :). I really like the #P-completeness proof for this problem, so I think it would be nice to share it:
Let N be the number of paths (starting at s) in the graph and p_k be the number of paths of length k. We have:
N = p_1 + p_2 + ... + p_n
Now build a second graph by changing every edge into a pair of parallel edges. For each path of length k there will now be 2^k paths, so:
N_2 = p_1*2 + p_2*4 + ... + p_n*(2^n)
Repeating this process with i parallel edges instead of 2, for i up to n, gives a linear system (with a Vandermonde matrix) that allows us to find p_1, ..., p_n.
N_i = p_1*i + p_2*(i^2) + ...
Therefore, finding the number of paths in the graph is just as hard as finding the number of paths of a certain length. In particular, p_n is the number of Hamiltonian Paths (starting at s), a bona-fide #P-complete problem.
I haven't done the math, but I'd guess that a similar process could prove that just calculating the average path length is also hard.
Note: most times this problem is discussed, the paths start from a single node and stop wherever. This is the opposite of your problem, but they should be equivalent, by just reversing all the edges.
Importance of Problem Statement
It is unclear what is being counted.
Is the starting node set all nodes for which there is at least one outgoing edge, or is there a particular starting node criteria?
Is the ending node set the set of all nodes for which there are zero outgoing edges, or can any node for which there is at least one incoming edge be a possible ending node?
Define your problem so that there are no ambiguities.
Estimation
Estimations can be off by orders of magnitude when designed for randomly constructed directed graphs and the graph is very statistically skewed or systematic in its construction. This is typical of all estimation processes, but particularly pronounced in graphs because of their exponential pattern complexity potential.
Two Optimizing Points
The std::bitset model will be slower than bool values for most processor architectures because of the instruction set mechanics of testing a bit at a particular bit offset. The bitset is more useful when memory footprint, not speed is the critical factor.
Eliminating cases or reducing via deductions is important. For instance, if a node has only one outgoing edge, one can calculate the number of paths without it and add the number of paths from the node it points to in the sub-graph.
Resorting to Clusters
The problem can be executed on a cluster by distributing according to starting node. Some problems simply require super-computing. If you have 1,000,000 starting nodes and 10 processors, you can place 100,000 starting node cases on each processor. The above case eliminations and reductions should be done prior to distributing cases.
A Typical Depth First Recursion and How to Optimize It
Here is a small program that provides a basic depth first, acyclic traversal from any node to any node, which can be altered, placed in a loop, or distributed. The list can be placed into a static native array by using a template with a size as one parameter if the maximum data set size is known, which reduces iteration and indexing times.
#include <iostream>
#include <list>
class DirectedGraph {
private:
int miNodes;
std::list<int> * mnpEdges;
bool * mpVisitedFlags;
private:
void initAlreadyVisited() {
for (int i = 0; i < miNodes; ++ i)
mpVisitedFlags[i] = false;
}
void recurse(int iCurrent, int iDestination,
int path[], int index,
std::list<std::list<int> *> * pnai) {
mpVisitedFlags[iCurrent] = true;
path[index ++] = iCurrent;
if (iCurrent == iDestination) {
auto pni = new std::list<int>;
for (int i = 0; i < index; ++ i)
pni->push_back(path[i]);
pnai->push_back(pni);
} else {
auto it = mnpEdges[iCurrent].begin();
auto itBeyond = mnpEdges[iCurrent].end();
while (it != itBeyond) {
if (! mpVisitedFlags[* it])
recurse(* it, iDestination,
path, index, pnai);
++ it;
}
}
-- index;
mpVisitedFlags[iCurrent] = false;
}
public:
DirectedGraph(int iNodes) {
miNodes = iNodes;
mnpEdges = new std::list<int>[iNodes];
mpVisitedFlags = new bool[iNodes];
}
~DirectedGraph() {
    delete [] mpVisitedFlags;   // arrays allocated with new[] need delete[]
    delete [] mnpEdges;         // the edge-list array also has to be released
}
void addEdge(int u, int v) {
mnpEdges[u].push_back(v);
}
std::list<std::list<int> *> * findPaths(int iStart,
int iDestination) {
initAlreadyVisited();
auto path = new int[miNodes];
auto pnpi = new std::list<std::list<int> *>();
recurse(iStart, iDestination, path, 0, pnpi);
delete [] path;
return pnpi;
}
};
int main() {
DirectedGraph dg(5);
dg.addEdge(0, 1);
dg.addEdge(0, 2);
dg.addEdge(0, 3);
dg.addEdge(1, 3);
dg.addEdge(1, 4);
dg.addEdge(2, 0);
dg.addEdge(2, 1);
dg.addEdge(4, 1);
dg.addEdge(4, 3);
int startingNode = 0;
int destinationNode = 1;
auto pnai = dg.findPaths(startingNode, destinationNode);
std::cout
<< "Unique paths from "
<< startingNode
<< " to "
<< destinationNode
<< std::endl
<< std::endl;
bool bFirst;
std::list<int> * pi;
auto it = pnai->begin();
auto itBeyond = pnai->end();
std::list<int>::iterator itInner;
std::list<int>::iterator itInnerBeyond;
while (it != itBeyond) {
bFirst = true;
pi = * it ++;
itInner = pi->begin();
itInnerBeyond = pi->end();
while (itInner != itInnerBeyond) {
if (bFirst)
bFirst = false;
else
std::cout << ' ';
std::cout << (* itInner ++);
}
std::cout << std::endl;
delete pi;
}
delete pnai;
return 0;
}