I am reading the book Artificial Intelligence: A Modern Approach. I came across this sentence describing the time complexity of uniform cost search:
Uniform-cost search is guided by path costs rather than depths, so its
complexity is not easily characterized in terms of b and d. Instead,
let C be the cost of the optimal solution, and assume that every
action costs at least ε. Then the algorithm’s worst-case time and
space complexity is O(b^(1+C/ε)), which can be much greater than b^d.
As I understand it, C is the cost of the optimal solution, and every action costs at least ε, so C/ε is roughly the number of steps taken to the destination. But I don't see how the complexity is derived.
If the branching factor is b, every time you expand a node, you will encounter b more nodes. Therefore, there are
1 node at level 0,
b nodes at level 1,
b^2 nodes at level 2,
b^3 nodes at level 3,
...
b^k nodes at level k.
So let's suppose that the search stops after you reach level k. When this happens, the total number of nodes you'll have visited will be
1 + b + b^2 + ... + b^k = (b^(k+1) - 1) / (b - 1)
That equality follows from the sum of a geometric series. It happens to be the case that b^(k+1) / (b - 1) = O(b^k), so if your goal node is in layer k, then you have to expand O(b^k) total nodes to find the one you want.
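As a quick sanity check of that identity (my own throwaway numbers, not from the book):

b, k = 3, 5
total = sum(b**i for i in range(k + 1))        # 1 + b + b^2 + ... + b^k
assert total == (b**(k + 1) - 1) // (b - 1)    # geometric-series closed form
print(total, b**k)                             # 364 vs 243: within the constant factor b/(b-1)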
If C is your destination cost and each step gets you ε closer to the goal, the number of layers you pass through is 1 + C / ε. The reason for the +1 is that you start at distance 0 and end at distance C, so you pass through the distances
0, ε, 2ε, 3ε, ..., (C / ε)ε
and there are 1 + C / ε values here. Therefore, there are 1 + C / ε layers, and so the total number of states you need to expand is O(b^(1 + C / ε)).
Hope this helps!
templatetypedef's answer is somewhat incorrect. The +1 has nothing to do with the fact that the starting depth is 0. If every step cost is at least ε > 0 and the cost of the optimal solution is C, then the maximum depth of the optimal solution is floor(C / ε), yet the worst-case time and space complexity is in fact O(b^(1 + floor(C / ε))). The +1 arises because in UCS we only check whether a node is a goal when we select it for expansion, not when we generate it (this is what ensures optimality). So in the worst case, we can generate the entire level of nodes one beyond the goal node's level, which explains the +1. By comparison, BFS applies the goal test when nodes are generated, so there is no corresponding +1 factor. This is a very important point that he missed.
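To make the late goal test concrete, here is a minimal uniform-cost-search sketch of my own (the function and its neighbors interface are hypothetical, not from AIMA). Testing the goal only when a node is popped from the priority queue is exactly what produces the extra generated level:

import heapq

def uniform_cost_search(start, goal, neighbors):
    """neighbors(state) yields (next_state, step_cost) pairs, each cost >= eps > 0."""
    frontier = [(0, start)]              # priority queue ordered by path cost g
    explored = set()
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == goal:                # goal test at *expansion* time
            return cost
        if state in explored:
            continue
        explored.add(state)
        for nxt, step in neighbors(state):
            if nxt not in explored:      # generated nodes are only queued, never tested here
                heapq.heappush(frontier, (cost + step, nxt))
    return None                          # no path found

Testing at generation time instead would be BFS-like and could return a suboptimal path when step costs differ.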
This was a problem from CLR (Introduction to Algorithms). The question goes as follows:
Suppose that the splits at every level of quicksort are in the proportion 1 - α to α, where 0 < α ≤ 1/2 is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately -lg n / lg α and the maximum depth is approximately -lg n / lg(1 - α). (Don't worry about integer round-off.)
http://integrator-crimea.com/ddu0043.html
I'm not getting how to reach this solution. As per the link, they show that for a ratio of 1:9 the maximum depth is log n / log(10/9) and the minimum is log n / log 10. So how can the formula above be proved? Please help me see where I'm going wrong, as I'm new to the Algorithms and Data Structures course.
First, let us consider this simple problem. Assume you have a number n and a fraction p (between 0 and 1). How many times do you need to multiply n by p so that the resulting number is less than or equal to 1?
n*p^k <= 1
log(n)+k*log(p) <= 0
log(n) <= -k*log(p)
k >= -log(n)/log(p)    (dividing by -log(p), which is positive since p < 1)
Now, let us consider your problem. Assume you send the shorter of the two segments to the left child and the longer to the right child. For the left-most chain, the length is given by substituting α for p in the above equation. For the right-most chain, the length is calculated by substituting 1 - α for p. That is why you have those numbers as answers.
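A rough empirical check of my own (throwaway numbers, not from CLRS): repeatedly shrink n by the factor p until one element remains, and compare the chain length against -log(n)/log(p). The two agree up to integer round-off:

import math

def chain_length(n, p):
    depth = 0
    while n > 1:                 # shrink until one element remains
        n *= p
        depth += 1
    return depth

n, alpha = 10**6, 1/3
print(chain_length(n, alpha), -math.log(n) / math.log(alpha))          # min depth vs formula
print(chain_length(n, 1 - alpha), -math.log(n) / math.log(1 - alpha))  # max depth vs formula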
The general question and the answer:
Suppose that the splits at every level of quicksort are in proportion 1 - α to α, where 0 < α ≤ 1/2 is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately -lg n / lg α and the maximum depth is approximately -lg n / lg(1 - α). (Don't worry about integer round-off.)
Answer:
The minimum depth follows a path that always takes the smaller part of the partition, i.e., one that multiplies the number of elements by α. One iteration reduces the number of elements from n to αn, and i iterations reduce it to (α^i)n. At a leaf there is just one remaining element, so at a minimum-depth leaf of depth m we have (α^m)n = 1. Thus α^m = 1/n. Taking logs, we get m·lg α = -lg n, or m = -lg n / lg α.
Similarly, the maximum depth corresponds to always taking the larger part of the partition, i.e., keeping a fraction 1 - α of the elements each time. The maximum depth M is reached when there is one element left, that is, when ((1 - α)^M)n = 1. Thus, M = -lg n / lg(1 - α).
All these equations are approximate because we are ignoring floors and ceilings.
This is part of a job interview question which got harder in its second part.
Given two 2-3 trees T1 and T2 such that for each tree h is known (h for height), and m and M are known too (m for the minimum key and M for the maximum key), plus the guarantee that every key in T1 < every key in T2.
I was asked to find an algorithm to join both of them into one tree in O(|h1-h2|+1).
This one was quite easy, and I should point out that this algorithm may result in a tree with h bigger than both of the previous two.
Now, I was given k 2-3 trees (T1, T2, T3, ..., Tk) with the exact same conditions, plus knowing that h_1 <= h_2 <= ... <= h_k and that no three trees share the same height, and was asked to join them in O(h_k - h_1 + k).
At first I thought about using the previous algorithm to join the first two together, then joining the third to the result and so on, but I felt that something was going wrong here since I didn't utilise the fact that "no three trees share the same height".
What am I missing here?
Your solution is correct, but it would not be if you had more than 2 trees of the same height. For example, if you have k trees of identical height, then the first two would indeed be merged in O(|h_1 - h_1| + 1) = O(1) time, but the resulting height can become h_1 + 1. The height only might grow, so let me show that it is possible for everything to go wrong.
The maximum number of keys we can have inside a tree of height n is 3^(n+1) - 1. That's because each vertex has at most 3 subtrees, so the i-th level has at most 3^i vertices, and summing levels 0 through n gives (3^(n+1) - 1)/2 vertices. Because each vertex holds 2 keys in such a scenario, the total number of keys is 3^(n+1) - 1.
Therefore, if we merge 4 such maximum trees, we are sure to get a tree whose height increased by 2; merging 16 such trees increases the height by 3, and so on. Thus, while the first 3 merges are done in constant time, the next 12 are twice as slow, the next 48 are three times as slow, and so on: you do Ω(i) operations Ω(3^(i+1) - 3^i) times for each i starting from 1 and up to log(k).
Because Ω(3^log(k)) = Ω(k), this sum is definitely Ω(k log k), and therefore breaks the required asymptotic bound.
When no 3 trees share the same height, this problem does not occur, because whenever you merge two trees the resulting height is at most max(h_i, h_(i+1)) + 1 = h_(i+1) + 1, and h_(i+3) >= h_(i+1) + 1, so the height of the merged part never goes more than one above the next tree. That is where the +k part of the asymptotic bound comes from.
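To see the blow-up numerically, here is a toy Python simulation of my own that only tracks heights, under the pessimistic assumptions that each join costs |h1 - h2| + 1 and always grows the result by one level (real 2-3 tree joins may behave better):

def join_cost(heights):
    total, h = 0, heights[0]
    for h_next in heights[1:]:
        total += abs(h_next - h) + 1     # cost model: O(|h1 - h2| + 1) per join
        h = max(h, h_next) + 1           # pessimistic: every join adds a level
    return total

print(join_cost([5] * 16))               # equal heights: cost grows quadratically
print(join_cost(list(range(5, 37, 2))))  # strictly increasing heights: stays O(h_k - h_1 + k)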
I looked at LeetCode question 279. Perfect Squares:
Given an integer n, return the least number of perfect square numbers that sum to n.
A perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. For example, 1, 4, 9, and 16 are perfect squares while 3 and 11 are not.
Example 1:
Input: n = 12
Output: 3
Explanation: 12 = 4 + 4 + 4.
I solved it using the following algorithm:
def numSquares(n):
    squares = [i**2 for i in range(1, int(n**0.5)+1)]
    step = 1
    queue = {n}
    while queue:
        tempQueue = set()
        for node in queue:
            for square in squares:
                if node-square == 0:
                    return step
                if node < square:
                    break
                tempQueue.add(node-square)
        queue = tempQueue
        step += 1
It basically tries to go from the goal number down to 0 by subtracting each possible square (1, 4, 9, ..., up to n), and then does the same work for each of the numbers obtained.
Question
What is the time complexity of this algorithm? The branching factor at every level is about sqrt(n), but some branches are destined to end early... which makes me wonder how to derive the time complexity.
If you think about what you're doing, you can imagine that you're doing a breadth-first search over a graph with n + 1 nodes (all the natural numbers between 0 and n, inclusive) and some number of edges m, which we'll determine later on. Your graph is essentially represented as an adjacency list, since at each point you iterate over all the outgoing edges (squares less than or equal to your number) and stop as soon as you consider a square that's too large. As a result, the runtime will be O(n + m), and all we have to do now is work out what m is.
(There's another cost here in computing all the perfect squares up to and including n, but that takes time O(n^(1/2)), which is dominated by the O(n) term.)
If you think about it, the number of outgoing edges from each number k will be given by the number of perfect squares less than or equal to k. That value is equal to ⌊√k⌋ (check this for a few examples - it works!). This means that the total number of edges is upper-bounded by
√0 + √1 + √2 + ... + √n
We can show that this sum is Θ(n^(3/2)). First, we'll upper-bound the sum by O(n^(3/2)), which we can do by noting that
√0 + √1 + √2 + ... + √n
≤ √n + √n + √n + ... + √n    ((n+1) times)
= (n + 1)√n
= O(n^(3/2)).
To lower-bound this at Ω(n^(3/2)), notice that
√0 + √1 + √2 + ... + √n
≥ √(n/2) + √(n/2 + 1) + ... + √n    (drop the first half of the terms)
≥ √(n/2) + √(n/2) + ... + √(n/2)
= (n/2)√(n/2)
= Ω(n^(3/2)).
So overall, the number of edges is Θ(n^(3/2)), so using a regular analysis of breadth-first search we can see that the runtime will be O(n^(3/2)).
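A quick numeric check of that edge count (my own, using math.isqrt for ⌊√k⌋) shows the ratio to n^(3/2) settling near a constant of about 2/3:

import math

for n in (10**3, 10**4, 10**5):
    edges = sum(math.isqrt(k) for k in range(n + 1))  # total outgoing edges
    print(n, edges / n**1.5)                          # ratio approaches roughly 2/3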
This bound is likely not tight, because this assumes that you visit every single node and every single edge, which isn't going to happen. However, I'm not sure how to tighten things much beyond this.
As a note - this would be a great place to use A* search instead of breadth-first search, since you can fairly easily come up with heuristics to underestimate the remaining total distance (say, take the number and divide it by the largest perfect square less than it). That would cause the search to focus on extremely promising paths that jump rapidly toward 0 before less-good paths, like, say, always taking steps of size one.
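For what it's worth, here is a hedged sketch of that A* idea (my own code, all names hypothetical). The heuristic ceil(m / s^2), where s^2 is the largest square not exceeding m, never overestimates the number of squares still needed, so it is admissible:

import heapq, math

def num_squares_astar(n):
    def h(m):                                # admissible: any sum needs >= ceil(m / s^2) terms
        return 0 if m == 0 else -(-m // math.isqrt(m) ** 2)
    best = {n: 0}
    frontier = [(h(n), 0, n)]                # (g + h, g, remainder)
    while frontier:
        _, steps, m = heapq.heappop(frontier)
        if m == 0:
            return steps
        for i in range(1, math.isqrt(m) + 1):
            nxt = m - i * i
            if steps + 1 < best.get(nxt, float("inf")):
                best[nxt] = steps + 1
                heapq.heappush(frontier, (steps + 1 + h(nxt), steps + 1, nxt))

print(num_squares_astar(12))                 # 3, since 12 = 4 + 4 + 4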
Hope this helps!
Some observations:
The number of squares up to n is √n (floored to the nearest integer)
After the first iteration of the while loop, tempQueue will have √n entries
tempQueue can never have more than n entries, since all these values are positive, less than n and unique.
Every natural number can be written as the sum of four integer squares. So that means your BFS algorithm's while loop will iterate at the most 4 times. If the return statement did not get executed during any of the first 3 iterations, it is guaranteed it will in the 4th.
Every statement (except for the initialisation of squares) runs in constant time, even the call to .add().
The initialisation of squares has a list comprehension loop that has √n iterations, and range runs in constant time, so that initialisation has a time complexity of O(√n).
Now we can set a ceiling to the number of times the if node-square == 0 statement is executed (or any other statement in the innermost loop's body):
1⋅√n + √n⋅√n + n⋅√n + n⋅√n
Each of the 4 terms corresponds to an iteration of the while loop. The left factor of each product corresponds to the maximum size of queue in that particular iteration, and the factor at the right corresponds to the size of squares (always the same). This simplifies to:
√n + n + 2n^(3/2)
In terms of time complexity this is:
O(n^(3/2))
This is the worst case time complexity. When the while loop only has to iterate twice, it is O(n), and when only once (when n is a square), it is O(√n).
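To see how loose that ceiling is in practice, here is an instrumented variant of my own that counts executions of the innermost loop body and prints them next to the derived bound √n + n + 2n^(3/2):

import math

def num_squares_counted(n):
    squares = [i**2 for i in range(1, math.isqrt(n) + 1)]
    step, queue, ops = 1, {n}, 0
    while queue:
        tempQueue = set()
        for node in queue:
            for square in squares:
                ops += 1                       # one execution of the innermost body
                if node - square == 0:
                    return step, ops
                if node < square:
                    break
                tempQueue.add(node - square)
        queue = tempQueue
        step += 1

n = 9999
steps, ops = num_squares_counted(n)
r = math.isqrt(n)
print(steps, ops, r + n + 2 * n * r)           # observed ops vs the derived ceiling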
You are given a complete undirected graph with N vertices. All but K edges have a cost of A. Those K edges have a cost of B and you know them (as a list of pairs). What's the minimum cost from node 0 to node N - 1?
2 <= N <= 500k
0 <= K <= 500k
1 <= A, B <= 500k
The problem is, obviously, when those K edges cost more than the other ones and node 0 and node N - 1 are connected by a K-edge.
Dijkstra doesn't work here: the complete graph has Θ(N^2) edges, which is far too many. I've even tried something very similar with a BFS.
Step 1: Let G(0) be the set of "good" adjacent nodes of node 0.
Step 2: For each node in G(0):
    compute G(node)
    if G(node) contains N - 1:
        return step
    else:
        add node to some queue
    repeat step 2 and increment step
The problem is that this uses up a lot of time, because for every node you have to loop from 0 to N - 1 in order to find the "good" adjacent nodes.
Does anyone have any better ideas? Thank you.
Edit: Here is a link from the ACM contest: http://acm.ro/prob/probleme/B.pdf
This is laborious casework:
A < B and 0 and N-1 are joined by A -> trivial.
B < A and 0 and N-1 are joined by B -> trivial.
B < A and 0 and N-1 are joined by A ->
Do BFS on graph with only K edges.
A < B and 0 and N-1 are joined by B ->
You can check in O(N) time whether there is a path of cost 2*A (try every vertex as the middle).
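A sketch of that O(N) check (my own; I represent the K expensive edges as a set of frozensets, so an edge is cheap exactly when its pair is absent):

def has_two_cheap_edge_path(n, expensive):
    # expensive: set of frozenset({u, v}) pairs for the K costly edges
    return any(frozenset((0, v)) not in expensive and
               frozenset((v, n - 1)) not in expensive
               for v in range(1, n - 1))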
To check other path lengths, the following algorithm should do the trick:
Let X(d) be the set of nodes reachable from 0 by using d cheap edges. You can find X(d) using the following algorithm: take each vertex v with unknown distance and iteratively check the edges between v and the vertices of X(d-1). If you find a cheap edge, then v is in X(d); otherwise you stepped on an expensive edge. Since there are at most K expensive edges, you can step on them at most K times. So you find the distance of every vertex in at most O(N + K) total time.
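Here is how that might look in Python (my own sketch of this idea, with the same frozenset edge representation as above). Every failed probe in the inner loop consumes one of the K expensive edges, and every success settles one vertex, which gives the O(N + K) total:

def cheap_bfs(n, expensive):
    dist = [None] * n
    dist[0] = 0
    frontier, unreached, d = [0], set(range(1, n)), 0
    while frontier and unreached:
        d += 1
        next_frontier = []
        for v in list(unreached):
            # probe edges to the previous layer; each failed probe burns an expensive edge
            for u in frontier:
                if frozenset((u, v)) not in expensive:   # cheap edge found: v is in X(d)
                    dist[v] = d
                    next_frontier.append(v)
                    unreached.discard(v)
                    break
        frontier = next_frontier
    return dist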
I propose a solution to a somewhat more general problem where you might have more than two types of edges and the edge weights are not bounded. For your scenario the idea is probably a bit overkill, but the implementation is quite simple, so it might be a good way to go about the problem.
You can use a segment tree to make Dijkstra more efficient. You will need two operations:
set an upper bound in a range: given U, L, R, for all x[i] with L <= i <= R, set x[i] = min(x[i], U)
find a global minimum
The upper bounds can be pushed down the tree lazily, so both operations can be implemented in O(log n).
When relaxing outgoing edges, look for the edges with cost B, sort them, and update the ranges in between all at once.
The runtime should be O(n log n + m log m) if you sort all the edges upfront (by outgoing vertex).
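To make the relaxation step concrete, here is a sketch of my own (tree stands for a segment tree over tentative distances; range_chmin and point_chmin are assumed operations, not a real library API). With u's expensive neighbours sorted, the cheap edges out of u are exactly the gaps between them, so each gap is relaxed with a single range update:

def relax(u, dist_u, expensive_neighbours, tree, A, B, n):
    # expensive_neighbours: the endpoints v with cost(u, v) == B
    prev = 0
    for v in sorted(expensive_neighbours):
        if prev <= v - 1:
            tree.range_chmin(prev, v - 1, dist_u + A)  # cheap edges fill the gap
        tree.point_chmin(v, dist_u + B)                # the expensive edge itself
        prev = v + 1
    if prev <= n - 1:
        tree.range_chmin(prev, n - 1, dist_u + A)      # tail gap after the last expensive edge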
EDIT: Got accepted with this approach. The good thing about it is that it avoids any kind of special casing. It's still ~80 lines of code.
In the case when A < B, I would go with a kind of BFS where you check which nodes you can't reach instead of which you can. Here's the pseudocode:
G(k) is the set of nodes reachable by k cheap edges and no fewer. We start with G(0) = {v0}.

while G(k) isn't empty and G(k) doesn't contain vN-1 and k*A < B:
    cnt = array[N] of zeroes
    for every node n in G(k):
        for every expensive edge (n, m):
            cnt[m]++
    # now cnt[m] == |G(k)| iff m can't be reached by a cheap edge from any node of G(k)
    set G(k+1) to {m : cnt[m] < |G(k)|} except {n : n is in G(0), ..., G(k)}
    k++
This way you avoid iterating through the (many) cheap edges and only iterate through the relatively few expensive edges.
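A direct Python translation of my own (here expensive is assumed to map each node to the set of its expensive neighbours):

def cheap_layers(n, expensive, A, B):
    seen, frontier, k = {0}, {0}, 0
    while frontier and (n - 1) not in frontier and k * A < B:
        cnt = [0] * n
        for u in frontier:
            for v in expensive.get(u, ()):
                cnt[v] += 1                      # count expensive edges into v
        frontier = {v for v in range(n)
                    if cnt[v] < len(frontier) and v not in seen}
        seen |= frontier
        k += 1
    return k, frontier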
As you have correctly noted, the problem comes when A > B and the edge from 0 to N-1 has a cost of A.
In this case you can simply delete all edges in the graph that have a cost of A. This is because any route that improves on that direct edge can only use edges of cost B.
Then you can perform a simple BFS since the costs of all edges are the same. It will give you optimal performance as pointed out by this link: Finding shortest path for equal weighted graph
Moreover, you can stop your BFS when the total cost exceeds A.
I have a tree data structure that is L levels deep, and each node has about N child nodes. I want to work out the total number of nodes in the tree. To do this (I think) I need to know what percentage of the nodes will have children.
What is the correct term for this ratio of leaf nodes to non-leaf nodes?
What is the formula for working out the total number of nodes in the tree?
Update: someone mentioned the branching factor in one of the answers, but it then disappeared. I think this was the term I was looking for. So shouldn't a formula take the branching factor into account?
Update: I should have said I want an estimate for a hypothetical data structure, not the exact figure!
Ok, each node has about N subnodes and the tree is L levels deep.
With 1 level, the tree has 1 node.
With 2 levels, the tree has 1 + N nodes.
With 3 levels, the tree has 1 + N + N^2 nodes.
With L levels, the tree has 1 + N + N^2 + ... + N^(L-1) nodes.
The total number of nodes is (N^L-1) / (N-1).
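A one-line check of that closed form (my own numbers):

N, L = 3, 4
assert sum(N**i for i in range(L)) == (N**L - 1) // (N - 1)  # 1 + 3 + 9 + 27 = 40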
Ok, just a small example of why it is exponential:
[NODE]
|
/|\
/ | \
/ | \
/ | \
[NODE] [NODE] [NODE]
|
/|\
/ | \
Just to correct a typo in the first answer: the total number of nodes for a tree of depth L (i.e., with L+1 levels) is (N^(L+1)-1) / (N-1), that is, to the power L+1 rather than just L.
This can be shown as follows. First, take our theorem:
1 + N^1 + N^2 + ... + N^L = (N^(L+1)-1)/(N-1)
Multiply both sides by (N-1):
(N-1)(1 + N^1 + N^2 + ... + N^L) = N^(L+1)-1.
Expand the left side:
N^1 + N^2 + N^3 + ... + N^(L+1) - 1 - N^1 - N^2 - ... - N^L.
All terms N^1 to N^L are cancelled out, which leaves N^(L+1) - 1. This is our right hand side, so the initial equality is true.
If your tree is approximately full, that is every level has its full complement of children except for the last two, then you have between N^(L-2) and N^(L-1) leaf nodes and between N^(L-1) and N^L nodes total.
If your tree is not full, then knowing the number of leaf nodes doesn't help as a totally unbalanced tree will have one leaf node but arbitrarily many parents.
I wonder how precise your statement 'each node has about N nodes' is - if you know the average branching factor, perhaps you can compute the expected size of the tree.
If you are able to find the ratio of leaves to internal nodes, and you know the average number of children, you can approximate this as (n*ratio)^N = n. This won't give you your answer, but I wonder if someone with better maths than me can figure out a way to interpose L into this equation and give you something soluble.
Still, if you want to know precisely, you must iterate over the structure of the tree and count nodes as you go.
The formula for calculating the number of nodes at depth L is (given that there are N root nodes):
N^L
To calculate the total number of nodes, one needs to do this for every layer:
nodeCount = 0
for depth in range(1, L + 1):
    nodeCount += N ** depth
If there's only 1 root node, subtract 1 from L and add 1 to the total node count.
Be aware that if the number of subnodes in one node differs from the average case, this can have a big impact on your total; the further up in the tree, the bigger the impact.
* * * N ** 1
*** *** *** N ** 2
*** *** *** *** *** *** *** *** *** N ** 3
This is community wiki, so feel free to alter my appalling algebra.
Knuth's estimator [1],[2] is a point estimate that targets the number of nodes in an arbitrary finite tree without needing to go through all of the nodes and even if the tree is not balanced. Knuth's estimator is an example of an unbiased estimator; the expected value of Knuth's estimator will be the number of nodes in the tree. With that being said, Knuth's estimator may have a large variance if the tree in question is unbalanced, but in your case, since each node will have around N children, I do not think the variance of Knuth's estimator should be too large. This estimator is especially helpful when one is trying to measure the amount of time it will take to perform a brute force search.
For the following functions, we shall assume all trees are represented as lists of lists.
For example, [] denotes the tree with the single node, and [[],[[],[]]] will denote a tree with 5 nodes and 3 leaves (the nodes in the tree are in a one-to-one correspondence with the left brackets). The following functions are written in the language GAP.
The function simpleestimate outputs an estimate of the number of nodes in the tree tree. The idea behind simpleestimate is that we randomly choose a path x_0, x_1, ..., x_n from the root x_0 of the tree to a leaf x_n. Suppose that x_i has a_i successors. Then simpleestimate will return 1 + a_1 + a_1*a_2 + ... + a_1*a_2*...*a_n.
simpleestimate:=function(tree) local point,prod,count;
point:=tree; prod:=1; count:=1;
while Length(point)>0 do prod:=prod*Length(point); count:=count+prod; point:=Random(point); od;
return count; end;
The function estimate simply gives the arithmetic mean of the estimates obtained by applying simpleestimate(tree) samplesize many times.
estimate:=function(samplesize,tree) local count,i;
count:=0;
for i in [1..samplesize] do count:=count+simpleestimate(tree); od;
return Float(count/samplesize); end;
Example: simpleestimate([[[],[[],[]]],[[[],[]],[]]]); returns 15 while
estimate(10000,[[[],[[],[]]],[[[],[]],[]]]); returns 10.9608 (and the tree actually does have 11 nodes).
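For readers more comfortable with Python, here is a direct port of simpleestimate (my own translation of the GAP code above, using the same nested-list tree representation):

import random

def simple_estimate(tree):
    point, prod, count = tree, 1, 1
    while point:                      # descend a random root-to-leaf path
        prod *= len(point)            # product of the branch counts so far
        count += prod
        point = random.choice(point)
    return count

tree = [[[], [[], []]], [[[], []], []]]    # the 11-node example above
samples = 10000
print(sum(simple_estimate(tree) for _ in range(samples)) / samples)  # close to 11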
[1] Estimating Search Tree Size. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.129.5569&rep=rep1&type=pdf
[2] Donald E. Knuth. Estimating the Efficiency of Backtrack Programs. http://www.ams.org/journals/mcom/1975-29-129/S0025-5718-1975-0373371-6/S0025-5718-1975-0373371-6.pdf
If you know nothing else but the depth of the tree then your only option for working out the total size is to go through and count them.