Space Complexity in Breadth First Search (BFS) Algorithm

According to Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig (4th edition), the space complexity of BFS is O(b^d), where 'b' is the branching factor and 'd' is the depth.
This complexity comes from the assumption that we store all nodes until we reach the target node; in other words: 1 + b + b^2 + b^3 + ... + b^d => O(b^d).
But why should we store all nodes? Don't we use a queue for the implementation?
If we use a queue, we don't need to store all nodes, because we enqueue and dequeue nodes as we go; by the time we find the target node(s), only some of the nodes are in the queue (not all of them).
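For example, here is a minimal sketch of the queue-based implementation I have in mind (is_goal and successors are just placeholder callbacks):

```python
from collections import deque

def bfs(start, is_goal, successors):
    queue = deque([start])
    while queue:
        node = queue.popleft()          # dequeue the oldest frontier node
        if is_goal(node):
            return node
        queue.extend(successors(node))  # enqueue its children
    return None
```

At any point this queue holds only a fraction of the nodes generated so far.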
Is my understanding wrong?

The problem in BFS is that you essentially pursue a number of paths in parallel. In depth-first search, you take one branch only, and once that branch has been explored, all the nodes on it can be discarded. So you never need more than one branch's worth of nodes in memory.
But in BFS you have to keep every branch up to the current depth; you cannot discard any of them, as they haven't been fully explored yet. So you need to keep track of the 'history' of the current 'head' node of each path. In DFS there is only ever one path at a time, but in BFS there are many, their number depending on the branching factor and the current depth.

At any moment while BFS runs, the queue holds at most two levels of nodes. For example, if we have just started searching depth d, the queue contains all nodes at depth d; as we proceed, it finishes off the nodes at depth d while filling up with the nodes at depth d+1. So at any moment we use O(b^d) space.
Also 1+b+b^2+...+b^d = (b^(d+1)-1)/(b-1).
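A quick way to see this empirically (a minimal sketch on an implicit complete b-ary tree; the numbers are only illustrative):

```python
from collections import deque

def peak_queue_size(b, d):
    """BFS over an implicit complete b-ary tree in which every node
    above depth d has exactly b children; report the largest queue
    length ever observed."""
    queue = deque([0])               # each node represented by its depth
    peak = len(queue)
    while queue:
        depth = queue.popleft()
        if depth < d:                # expand: enqueue b children
            queue.extend([depth + 1] * b)
        peak = max(peak, len(queue))
    return peak

print(peak_queue_size(3, 5))  # 243 = 3^5, i.e. b^d
```

The peak coincides with the last level being fully enqueued, which is exactly the b^d term.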

Related

Time/Space Complexity of Depth First Search

I've looked at various other StackOverflow answers and they all differ from what my lecturer has written in his slides.
Depth First Search has a time complexity of O(b^m), where b is the maximum branching factor of the search tree and m is the maximum depth of the state space. Terrible if m is much larger than d, but if the search tree is "bushy", it may be much faster than Breadth First Search.
He goes on to say...
The space complexity is O(bm), i.e. space linear in the length of the action sequence! You need only store a single path from the root to the leaf node, along with the remaining unexpanded sibling nodes for each node on the path.
Another answer on StackOverflow states that it is O(n + m).
Time Complexity: If you can access each node in O(1) time, then with a branching factor of b and max depth of m, the total number of nodes in this tree would in the worst case be 1 + b + b^2 + … + b^(m-1). Using the formula for summing a geometric sequence (or even solving it ourselves) tells us that this sums to (b^m - 1)/(b - 1), resulting in total time to visit each node proportional to b^m. Hence the complexity is O(b^m).
On the other hand, if instead of using the branching factor and max depth you have the number of nodes n, then you can directly say that the complexity will be proportional to n or equal to O(n).
The other answers that you have linked in your question are similarly using different terminologies. The idea is same everywhere. Some solutions have added the edge count too to make the answer more precise, but in general, node count is sufficient to describe the complexity.
Space Complexity: The length of longest path = m. For each node, you have to store its siblings so that when you have visited all the children, and you come back to a parent node, you can know which sibling to explore next. For m nodes down the path, you will have to store b nodes extra for each of the m nodes. That’s how you get an O(bm) space complexity.
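A minimal iterative sketch of this bookkeeping, with is_goal and children as placeholder callbacks: the explicit stack holds, for every node on the current path, its unexplored siblings, so it never exceeds roughly b entries per level.

```python
def dfs(root, is_goal, children, max_depth):
    # Explicit stack of (node, depth) pairs. Each expansion pushes up to
    # b siblings per level; with at most max_depth (= m) levels on the
    # current path, the stack stays within O(b * m) entries.
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        if is_goal(node):
            return node
        if depth < max_depth:
            for child in children(node):
                stack.append((child, depth + 1))
    return None
```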
The complexity is O(n + m) where n is the number of nodes in your tree, and m is the number of edges.
The reason why your teacher represents the complexity as O(b ^ m), is probably because he wants to stress the difference between Depth First Search and Breadth First Search.
When using BFS, if your tree has a very large amount of spread compared to its depth, and you're expecting results to be found at the leaves, then clearly DFS would make much more sense here as it reaches leaves faster than BFS, even though they both reach the last node in the same amount of time (work).
When a tree is very deep, and non-leaves can give information about deeper nodes, BFS can detect ways to prune the search tree in order to reduce the amount of nodes necessary to find your goal. Clearly, the higher up the tree you discover you can prune a sub tree, the more nodes you can skip.
This is harder when you're using DFS, because you prioritize reaching a leaf over exploring nodes that are closer to the root.
I suppose this DFS time/space complexity is taught in an AI class but not in an algorithms class.
The DFS Search Tree here has slightly different meaning:
A node is a bookkeeping data structure used to represent the search tree. A state corresponds to a configuration of the world. ... Furthermore, two different nodes can contain the same world state if that state is generated via two different search paths.
Quoted from the book 'Artificial Intelligence: A Modern Approach'.
So the time/space complexity here focuses on how many nodes you visit while checking whether each one is the goal state. #displayName has already given a very clear explanation.
O(m+n), on the other hand, is what you see in an algorithms class, where the focus is the algorithm itself: we store the graph as an adjacency list and count how nodes are discovered.

Why are there two listed time complexities for breadth-first search?

The Wikipedia article on breadth-first search lists two time complexities for breadth-first search over a graph: O(|E|) and O(b^d). Later on the page, though, it only lists O(|E|).
Why are there two different runtimes? Which one is correct?
Both time complexities are correct, but are used in different circumstances to measure the time complexity relative to two different quantities. This has to do with the fact that the breadth-first search algorithm typically refers to two different related algorithms used in different contexts.
In one context, BFS is an algorithm that, given a graph and a start node in the graph, visits every node reachable from the start node by first visiting all nodes at distance 0, then distance 1, then distance 2, etc., until all nodes are visited. This will visit every node in the graph, and in the process of doing so it explores each node once and each edge at most once (in the directed case) or twice (in the undirected case). By using a queue to keep track of which nodes to explore next and using appropriate bookkeeping, the runtime will be O(|E| + |V|) (and with further optimizations, it will be O(|E|)).
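As a rough sketch of this first version (assuming the graph is a dict mapping each node to a list of its neighbours):

```python
from collections import deque

def bfs_traverse(graph, start):
    # Each node is enqueued exactly once (guarded by 'visited') and each
    # adjacency list is scanned once, giving O(|V| + |E|) total work.
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order
```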
In a different context, BFS is a search algorithm used to find the shortest path from some start node in a graph to a destination node in the graph. In this case, the algorithm stops running as soon as it discovers the destination node. This means that the runtime depends on how far away the destination node is from the source node. That distance in turn depends on the structure of the graph. If the graph is densely connected, the node can't be that far away, and if the graph is sparse, the node might be extremely distant. In this context, it's common to add in a parameter called the "branching factor" b, which describes the average or maximum number of edges adjacent to any node. This means that
There is one node at distance 0 from the start node.
There are at most b nodes at distance 1 from the start node.
There are at most b^2 nodes at distance 2 from the start node.
...
There are at most b^k nodes at distance k from the start node.
If we assume that the destination node is at distance d from the start node, then BFS will visit at most b^0 + b^1 + ... + b^d = O(b^d) nodes during its search, spending O(b) time on each of them. Accordingly, the total runtime will be O(b^d).
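A sketch of this second version, with successors as a hypothetical callback, differs only in that it stops as soon as the goal is dequeued:

```python
from collections import deque

def bfs_find(start, goal, successors):
    # With branching factor b and the goal at distance d, at most
    # 1 + b + ... + b^d = O(b^d) nodes ever enter the queue, no matter
    # how large the rest of the graph is.
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return node
        for nxt in successors(node):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return None
```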
In summary:
The runtime of O(|E|) is typically used when discussing the algorithm as a way to explore an entire graph.
The runtime of O(b^d) is typically used when discussing the algorithm as a way to find a specific node in the graph.
Hope this helps!

Breadth first search branching factor

The run time of BFS is O(b^d)
b is the branching factor
d is the depth (number of levels) of the graph from the starting node.
I googled for a while, but I still don't see anyone mention how they figure out this "b".
So I know the branching factor means the number of children each node has.
E.g., the branching factor for a binary tree is 2.
So for a graph in BFS, is b the average of the branching factors of all nodes in our graph,
or b = max(over the branching factors of all nodes)?
Also, no matter which way we pick b, the run time still seems ambiguous.
For example, if our graph has 30000 nodes, only 5 nodes have 10000 branches, the remaining 29995 nodes have just 10 branches each, and we set the depth to 100.
O(b^d) does not seem to make sense in this case.
Can someone explain a little bit? Thank you!
The runtime that is more often quoted is that BFS is O(m + n) where m is the number of edges and n the number of nodes. This is because each vertex is processed once and each edge at most twice.
I think O(b^d) is used when using BFS for, say, brute-forcing a game of chess, where each position has a relatively constant branching factor and your engine needs to search a certain number of positions deep. For example, b is about 35 for chess, and Deep Blue had a search depth of 6-8 (going up to 20).
In such cases, because the graph is relatively acyclic, b^d is roughly the same as m + n (they are equal for trees). O(b^d) is more useful as b is fixed and d is something you control.
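To get a feel for those chess numbers (b = 35, d = 6, taken from the answer above; this is just the geometric sum, nothing engine-specific):

```python
b, d = 35, 6
nodes = sum(b ** i for i in range(d + 1))        # 1 + b + b^2 + ... + b^d
print(nodes)                                     # 1892332261, ~1.9 billion
print(nodes == (b ** (d + 1) - 1) // (b - 1))    # closed form agrees: True
```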
In O(b^d) for graphs, b = max, since we consider the worst case. Check this link from Princeton: http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Breadth-first_search.html - go to the time complexity portion.
To quote from Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig:
Time and space complexity are always considered with respect to some measure of the problem difficulty. In theoretical computer science, the typical measure is the size of the state space graph, |V| + |E|, where V is the set of vertices (nodes) of the graph and E is the set of edges (links). This is appropriate when the graph is an explicit data structure that is input to the search program. (The map of Romania is an example of this.) In AI, the graph is often represented implicitly by the initial state, actions, and transition model and is frequently infinite. For these reasons, complexity is expressed in terms of three quantities: b, the branching factor or maximum number of successors of any node; d, the depth of the shallowest goal node (i.e., the number of steps along the path from the root); and m, the maximum length of any path in the state space. Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory. For the most part, we describe time and space complexity for search on a tree; for a graph, the answer depends on how "redundant" the paths in the state space are.
This should give you a clear insight into the difference between O(|V|+|E|) and O(b^d).

What is the worst-case time and space complexity of a uniform-cost search algorithm?

My book here (Artificial Intelligence: A Modern Approach) says that the worst-case time and space complexity of a uniform-cost search algorithm would be O(b^(C*/e)), where b is the branching factor, C* is the cost of the optimal solution, and every action costs at least e. But why is this so?
First, the complexity is O(B^(C/e)) [exponential in C/e].
To understand it, think of a simple example case first:
Let G=(V,E) be a graph with branching factor B. The graph is unweighted (w(e) = 1 for each e).
Consider finding the shortest path from S to T.
In this case, the algorithm is actually a BFS, and it will discover all nodes on paths up to length SOL, where SOL is the length of the shortest path; that is O(B^SOL) nodes.
For the general case the same idea holds: you need to discover all nodes up to cost C. So you discover nodes up to depth C/e, giving O(B^(C/e)) total nodes that need to be explored.
The exponential factor arises because the first level (the root) has B^0 = 1 nodes and the second level has B nodes; from each of these you discover B more nodes, giving you B^2, and so on.
EDIT:
I missed it so far, but the title asks for space complexity, not time complexity. However, the answer remains the same, since uniform-cost search keeps a visited set for already-visited nodes. Since each node you discover is also added to it, the answer remains O(B^(C/e)).
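For concreteness, here is a sketch of uniform-cost search along these lines; successors is a hypothetical callback yielding (child, step_cost) pairs with every step_cost >= e > 0:

```python
import heapq
from itertools import count

def uniform_cost_search(start, goal, successors):
    tie = count()                        # tie-breaker so heap entries never
    frontier = [(0, next(tie), start)]   # compare nodes directly
    visited = set()
    while frontier:
        cost, _, node = heapq.heappop(frontier)  # cheapest path first
        if node == goal:
            return cost
        if node in visited:
            continue
        visited.add(node)
        for child, step_cost in successors(node):
            if child not in visited:
                heapq.heappush(frontier, (cost + step_cost, next(tie), child))
    return None
```

Both frontier and visited can grow to O(B^(C/e)) entries, which is where the space bound comes from.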
C*/e bounds the number of levels that must be explored during the search, and at each level every node can have up to b successors, so you check on the order of b^(C*/e) nodes in your search. That is your search time complexity, assuming processing each node takes O(1).
P.S.: It's Ω(b^(C*/e)) in the worst case.

Split a tree into equal parts by deleting an edge

I am looking for an algorithm to split a tree with N nodes (where the maximum degree of each node is 3) by removing one edge from it, so that the two trees that result each have as close as possible to N/2 nodes. How do I find the edge that is "the most centered"?
The tree comes as an input from a previous stage of the algorithm and is given as a graph, so it is not balanced, nor is it clear which node is the root.
My idea is to find the longest path in the tree and then select the edge in the middle of the longest path. Does it work?
Optimally, I am looking for a solution that can ensure that neither of the trees has more than 2N / 3 nodes.
Thanks for your answers.
I don't believe that your initial algorithm works for the reason I mentioned in the comments. However, I think that you can solve this in O(n) time and space using a modified DFS.
Begin by walking the graph to count how many total nodes there are; call this n. Now, choose an arbitrary node and root the tree at it. We will now recursively explore the tree starting from the root, computing for each subtree how many nodes it contains. This can be done using a simple recursion:
If the current node is null, return 0.
Otherwise:
For each child, compute the number of nodes in the subtree rooted at that child.
Return 1 + the total number of nodes in all child subtrees.
At this point, we know for each edge what split we will get by removing that edge, since if the subtree below that edge has k nodes in it, the split will be (k, n - k). You can thus find the best cut to make by iterating across all nodes and looking for the one that balances (k, n - k) most evenly.
Counting the nodes takes O(n) time, and running the recursion visits each node and edge at most O(1) times, so that takes O(n) time as well. Finding the best cut takes an additional O(n) time, for a net runtime of O(n). Since we need to store the subtree node counts, we need O(n) memory as well.
Hope this helps!
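Here is a sketch of that recipe, assuming the tree arrives as a dict mapping each node to the list of its neighbours (iterative, to avoid recursion-depth limits on deep trees):

```python
def best_edge_to_cut(adj):
    # adj: dict mapping each node to the list of its neighbours.
    n = len(adj)
    root = next(iter(adj))              # root the tree arbitrarily
    parent, order = {root: None}, []
    stack = [root]
    while stack:                        # pre-order walk recording parents
        node = stack.pop()
        order.append(node)
        for child in adj[node]:
            if child not in parent:
                parent[child] = node
                stack.append(child)
    size = {node: 1 for node in adj}
    for node in reversed(order):        # children are processed before
        if parent[node] is not None:    # their parents here
            size[parent[node]] += size[node]
    # Removing edge (parent[v], v) yields parts of size[v] and n - size[v];
    # pick the v minimising the imbalance |(n - size[v]) - size[v]|.
    best = min((v for v in adj if parent[v] is not None),
               key=lambda v: abs(n - 2 * size[v]))
    return parent[best], best

print(best_edge_to_cut({1: [2], 2: [1, 3], 3: [2]}))  # (1, 2): parts of sizes 1 and 2
```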
If you look at my answer to Divide-And-Conquer Algorithm for Trees, you can see that I find a node that partitions the tree into two nearly equal-sized trees (a bottom-up algorithm); now you just need to choose one of that node's edges to do what you want.
Your current approach does not work. Assume you have a complete binary tree, and now add a path of length 3*log n to one of the leaves (call it the bad leaf). Your longest path will run from one of the other leaves to the end of the path attached to this bad leaf, so your middle edge will lie within this attached path (in fact, past the bad leaf), and if you partition on this edge you get one part of size O(log n) and another part of size O(n).
