Incremental graph algorithms

There are many basic graph algorithms such as topological sort, strongly/weakly connected components, all-pairs/single-source shortest paths, reachability and so on. Incremental variants of these algorithms have a variety of important practical applications. By "incremental" I mean graph algorithms that can compute small changes to their outputs given small changes (e.g. edge insertions and deletions) to the input graph, without having to recompute everything. For example, a garbage collector incrementally maintains the subgraph of heap-allocated blocks reachable from the global roots. However, I do not recall seeing the subject of incremental graph algorithms discussed outside domain-specific literature (e.g. Richard Jones' new book on GC).
Where can I find information on incremental graph algorithms or, for that matter, incremental algorithms in general?

There's a survey article by Eppstein, Galil, and Italiano from 1999. They would describe what you're looking for as "fully dynamic algorithms"; "partially dynamic algorithms" are divided into "incremental algorithms", which allow only insertions, and "decremental algorithms", which allow only deletions.
Beyond that, I suspect you're going to have to read the research literature – there are only a handful of researchers who work on dynamic graph algorithms. You should be able to find articles by examining the papers that cite the survey.
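To make the "incremental" (insertion-only) case concrete, here is a minimal sketch of my own (an illustration, not taken from the survey) of single-source reachability under edge insertions, in the spirit of the garbage-collector example from the question; all names are mine:

    class IncrementalReachability:
        # Maintain the set of vertices reachable from a fixed root
        # while edges are inserted (insertions only, no deletions).
        def __init__(self, root):
            self.out = {}              # adjacency: u -> list of successors
            self.reachable = {root}

        def add_edge(self, u, v):
            self.out.setdefault(u, []).append(v)
            # work is only needed if this edge extends the reachable region
            if u in self.reachable and v not in self.reachable:
                stack = [v]
                while stack:
                    w = stack.pop()
                    if w in self.reachable:
                        continue
                    self.reachable.add(w)
                    stack.extend(self.out.get(w, []))

Each vertex is "activated" at most once, so the total work over any sequence of insertions is linear in the final graph size. Supporting deletions as well (the fully dynamic case) is much harder, which is what a large part of the literature behind the survey is about.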

Related

Algorithms for Tree Decomposition

I want to understand the optimal algorithm for computing a tree decomposition of an arbitrary graph. Are there any good sites where I can look this up? I cannot find proper materials to understand the logic behind tree decomposition.
PACE (the Parameterized Algorithms and Computational Experiments Challenge) is a competition for implementing fast algorithms (typically with a worst-case exponential running time). In 2016 and 2017, one of the challenges was to compute tree decompositions. See here for the reports and (inside the reports) the links to the implementations of the submitted solutions.
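To get a feel for what such solvers compute, here is a minimal sketch (my own illustration, not a PACE submission) of the classic min-degree elimination heuristic, which produces the bags of a valid tree decomposition whose width upper-bounds the treewidth:

    def min_degree_decomposition(adj):
        # adj: dict vertex -> set of neighbours (a local copy is made).
        # Returns the bags of a tree decomposition obtained from the
        # min-degree elimination ordering, plus its width.
        adj = {v: set(ns) for v, ns in adj.items()}
        bags = []
        while adj:
            v = min(adj, key=lambda u: len(adj[u]))   # min-degree vertex
            bags.append({v} | adj[v])                 # bag = v + neighbours
            for a in adj[v]:                          # eliminate v: make its
                adj[a] |= adj[v] - {a}                # neighbourhood a clique
                adj[a].discard(v)
            del adj[v]
        width = max(len(b) for b in bags) - 1         # assumes a non-empty graph
        return bags, width

To recover the tree itself, connect the bag created for v to the bag of the neighbour of v that is eliminated next. The exact solvers submitted to the challenge go well beyond this heuristic.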

Graph Search algorithms vs. graph optimization

I have been looking at a lot of graph searches, both informed and uninformed. I have also looked at and coded some optimization problems such as hill climbing. My question is: how can I relate the two types of algorithms?
To be a little more clear, here is an example:
Let's say I run a graph algorithm like depth-first iterative deepening. I run it to one depth and get a goal node, then I run it again to a different depth and find another goal node, and so on. Now I have a set of possible goal nodes. Could I run an optimization algorithm like hill climbing to find which one is "optimal" according to different limitations?
One of the notions you encounter in graph search algorithms is optimality. Note that here optimality is a property of the algorithm, not of the goal: an algorithm is optimal if it is guaranteed to return the minimum-cost solution.
Optimizing something is a different problem entirely. There, the quality of the solution is what matters most, not the way you reach it.
Genetic algorithms, particle swarm optimization (PSO), and many other optimization methods exist to address this issue.
By the way, a graph search method like A* is sometimes combined with an optimization algorithm to get better results out of the latter.
Note: the term "graph optimization" is not related to graph search, which I take to be the main topic of your question.
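To illustrate the distinction, here is a minimal generic hill-climbing sketch (all names are my own): it only cares about the score of the state it ends in, not about the path that led there, which is the opposite emphasis from an optimal graph search like A*:

    def hill_climb(initial, neighbours, score, max_steps=1000):
        # Greedy local search: move to the best-scoring neighbour until
        # no neighbour improves on the current state (a local optimum).
        current = initial
        for _ in range(max_steps):
            candidates = list(neighbours(current))
            if not candidates:
                break
            best = max(candidates, key=score)
            if score(best) <= score(current):
                break
            current = best
        return current

You could indeed feed the goal nodes found by iterative deepening into such a loop, provided you define a neighbourhood and a scoring function over them; whether that helps depends entirely on those definitions.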

Significance of various graph types

There are a lot of named graph types. I am wondering what criteria lie behind this categorization. Are different types applicable in different contexts? Moreover, can a business application (from a design and programming perspective) benefit in any way from these categorizations? Is this analogous to design patterns?
We've given names to common families of graphs for several reasons:
Certain families of graphs have nice, simple properties. For example, trees have numerous useful properties (there's exactly one path between any pair of nodes, they're maximally acyclic, they're minimally connected, etc.) that don't hold for arbitrary graphs. Directed acyclic graphs (DAGs) can be topologically sorted, which graphs containing cycles cannot. If you can model a problem in terms of one of these types of graphs, you can use specialized algorithms on them to extract properties that can't necessarily be obtained from an arbitrary graph.
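As a concrete illustration of that DAG property, here is a minimal sketch of Kahn's topological sort algorithm (the naming is mine):

    from collections import deque

    def topological_sort(succ):
        # succ: dict mapping each node to an iterable of its successors.
        indegree = {}
        for u, vs in succ.items():
            indegree.setdefault(u, 0)
            for v in vs:
                indegree[v] = indegree.get(v, 0) + 1
        queue = deque(u for u, d in indegree.items() if d == 0)
        order = []
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in succ.get(u, ()):
                indegree[v] -= 1
                if indegree[v] == 0:
                    queue.append(v)
        if len(order) != len(indegree):
            raise ValueError("graph has a cycle, so it is not a DAG")
        return order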
Certain algorithms run faster on certain types of graphs. Many NP-hard problems on graphs, which as of now have no known polynomial-time algorithms, can be solved efficiently on restricted graph classes. For example, the maximum independent set problem (choose the largest collection of nodes where no two nodes are connected by an edge) is NP-hard in general, but can be solved in polynomial time for trees and bipartite graphs. The 4-coloring problem (determine whether the nodes of a graph can each be colored one of four colors without assigning the same color to adjacent nodes) is NP-hard in general, but for planar graphs the answer is always yes (this is the famous four-color theorem).
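For instance, here is a minimal sketch of the standard two-state dynamic program that solves maximum independent set on a rooted tree in linear time (the names are my own):

    def max_independent_set_size(children, root):
        # children: dict node -> list of child nodes of a rooted tree.
        # incl[v] = best size if v is in the set; excl[v] = best size if not.
        incl, excl = {}, {}
        order, stack = [], [root]            # iterative post-order traversal
        while stack:
            v = stack.pop()
            order.append(v)
            stack.extend(children.get(v, ()))
        for v in reversed(order):
            incl[v] = 1 + sum(excl[c] for c in children.get(v, ()))
            excl[v] = sum(max(incl[c], excl[c]) for c in children.get(v, ()))
        return max(incl[root], excl[root])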
Certain algorithms are easier on certain types of graphs. A matching in a graph is a collection of edges in the graph where no two edges share an endpoint. Maximum matchings can be used to represent ways of pairing people up into groups. In a bipartite graph, a maximum matching can be used to represent a way of assigning people to tasks such that no person is assigned two tasks and no task is assigned to two people. There are several fast algorithms for finding maximum matchings in bipartite graphs that are easy to understand. The corresponding algorithms for general graphs are significantly more complicated and slightly less efficient.
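As an illustration, here is a minimal sketch of Kuhn's augmenting-path algorithm for bipartite matching (naming is mine); it runs in O(V·E), and faster algorithms such as Hopcroft-Karp exist:

    def max_bipartite_matching(left_adj):
        # left_adj: dict left vertex -> list of adjacent right vertices.
        match_right = {}                 # right vertex -> matched left vertex

        def try_augment(u, visited):
            for v in left_adj[u]:
                if v not in visited:
                    visited.add(v)
                    # v is free, or its current partner can be re-matched
                    if v not in match_right or try_augment(match_right[v], visited):
                        match_right[v] = u
                        return True
            return False

        return sum(1 for u in left_adj if try_augment(u, set()))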
Certain graphs are historically significant. Many named graphs are named after someone who used the graph to disprove a conjecture about properties of arbitrary graphs. The Petersen graph, for example, is a counterexample to many theorems that seem true about graphs but are actually not.
Certain graphs are useful in theoretical computer science. An expander graph is a graph where, intuitively, any collection of nodes must be connected to a proportionally larger collection of nodes in the graph. Not all graphs are expander graphs. Expander graphs are used in many results in theoretical computer science, such as one proof of the PCP theorem and in the proof that SL = L.
This is not an exhaustive list of why we care about different graph families, but hopefully it helps motivate their usage and study.
Hope this helps!

computing vertex connectivity of graph

Is there an algorithm that, when given a graph, computes the vertex connectivity of that graph (the minimum number of vertices whose removal disconnects the graph)? Note that the graph may already be disconnected, in which case its connectivity is 0. Thanks!
See:
Determining if a graph is K-vertex-connected
k-vertex connectivity of a graph
Combine such a k-connectivity test with binary search over k and you have the vertex connectivity.
This book chapter should have everything you need to get started; it is basically a survey of algorithms for determining the edge connectivity and vertex connectivity of graphs, with pseudocode for the algorithms it describes. Page 12 has an overview of the available algorithms alongside complexity analyses and references. Most of the solutions for the general case are flow-based, with the exception of one randomized algorithm. The different general solutions optimize for different properties of the graph, so you can choose the asymptotically most efficient one beforehand. Also, for some classes of graphs there exist specialized algorithms with better complexity than the general solutions provide.
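To make the flow-based approach concrete, here is a minimal sketch of my own (a slow but simple baseline, not one of the optimized algorithms from the chapter). By Menger's theorem, the minimum number of vertices separating a non-adjacent pair (s, t) equals the maximum s-t flow after splitting every vertex into an "in" and an "out" copy joined by a unit-capacity edge; the global connectivity is the minimum over all non-adjacent pairs (or n - 1 for a complete graph):

    from collections import deque
    from itertools import combinations

    def max_flow(capacity, adj, s, t):
        # Edmonds-Karp: repeatedly augment along BFS paths in the residual graph.
        flow = 0
        while True:
            parent = {s: None}
            queue = deque([s])
            while queue and t not in parent:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in parent and capacity[(u, v)] > 0:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return flow
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            aug = min(capacity[e] for e in path)
            for u, v in path:
                capacity[(u, v)] -= aug
                capacity[(v, u)] += aug
            flow += aug

    def min_vertex_cut(n, edges, s, t):
        # Split vertex v into v_in (= v) and v_out (= v + n) joined by a
        # unit-capacity edge; original edges get capacity n ("infinite").
        capacity, adj = {}, {u: set() for u in range(2 * n)}

        def add(u, v, c):
            capacity[(u, v)] = capacity.get((u, v), 0) + c
            capacity.setdefault((v, u), 0)
            adj[u].add(v)
            adj[v].add(u)

        for v in range(n):
            add(v, v + n, 1)
        for u, v in edges:
            add(u + n, v, n)
            add(v + n, u, n)
        return max_flow(capacity, adj, s + n, t)

    def vertex_connectivity(n, edges):
        neighbours = {u: set() for u in range(n)}
        for u, v in edges:
            neighbours[u].add(v)
            neighbours[v].add(u)
        best = n - 1                     # complete graph: connectivity n - 1
        for s, t in combinations(range(n), 2):
            if t not in neighbours[s]:
                best = min(best, min_vertex_cut(n, edges, s, t))
        return best

This uses O(n^2) max-flow computations; the algorithms surveyed in the chapter reduce that number substantially (for example Even and Tarjan's approach).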

Use of Optimization Algorithms for finding shortest path in network

I am new to the subject of algorithm design and graph theory. I am simulating a large content-based network consisting of thousands of routers, using "learning by reverse path" for routing. Requested content names and contents are propagated through the network by flooding. Routers check for matching names in their routing tables and either reply or populate their routing tables with the unmatched requested content names and contents. Will using an optimization algorithm like ant colony optimization or hill climbing, instead of learning by reverse path, improve routing efficiency?
If your graph satisfies the triangle inequality, i.e. is a metric as in Euclidean space, then I suggest the Christofides approximation algorithm, because it is guaranteed to be within 3/2 of the optimum. Other heuristics like ant colony optimization are very fast and efficient but carry no such guarantee. A good example of ant colony optimization (alongside brute-force and DP solutions) is the Google Maps TSP solver in JavaScript. I believe a space-filling curve is a good approximation too and comes with a certain guarantee; you may look into a Z-order curve or a Hilbert curve. You can find a good article about the Hilbert curve on Nick's spatial index quadtree hilbert curve blog or in the book Hacker's Delight. I also suggest looking into the construction of monotonic n-ary Gray codes and Hamiltonian paths.
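Christofides requires a minimum-weight perfect matching, which is a substantial implementation effort. Under the same triangle-inequality assumption, the simpler double-tree heuristic (a minimum spanning tree followed by a preorder walk, shortcutting repeated vertices) guarantees a factor of 2 instead of 3/2. A minimal sketch of my own for points in the plane:

    import heapq
    import math

    def double_tree_tour(points):
        # points: list of (x, y) coordinates; Euclidean distances satisfy
        # the triangle inequality, which is what the factor-2 bound needs.
        n = len(points)
        if n == 0:
            return []
        def dist(a, b):
            return math.hypot(points[a][0] - points[b][0],
                              points[a][1] - points[b][1])
        in_tree = [False] * n
        children = [[] for _ in range(n)]
        heap = [(0.0, 0, 0)]                 # (edge cost, vertex, parent)
        while heap:                          # Prim's MST with lazy deletion
            _, v, p = heapq.heappop(heap)
            if in_tree[v]:
                continue
            in_tree[v] = True
            if v != p:
                children[p].append(v)
            for u in range(n):
                if not in_tree[u]:
                    heapq.heappush(heap, (dist(v, u), u, v))
        # preorder walk of the MST; skipping repeated vertices keeps the
        # tour within twice the optimum by the triangle inequality
        tour, stack = [], [0]
        while stack:
            v = stack.pop()
            tour.append(v)
            stack.extend(children[v])
        return tour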
