How to calculate the height of a red black tree? [closed] - data-structures

I'm almost done implementing a red-black tree, but I'm stuck on how to calculate the height (not the black height). Can anyone give me a hint or explain the concept behind computing the height? I know the formula, but it's not much help.
I thought of traversing each node and incrementing a counter, but this gets complicated as the red-black tree grows.
Basically, how do I know when the traversal has reached the end of the longest path?
I'm not really concerned with the time complexity of the solution, but I would like to avoid O(n²).

There is a simple recursive approach for computing the height of a tree that runs in O(n) time, where n is the number of nodes. The idea is to compute the height of each of a node's children, take the maximum of those values, and add one. Here it is in Python (any language works; the structure is the same):
def tree_height(node):
    # By convention, an empty tree has height -1 and a single node has height 0.
    if node is None:
        return -1
    return max(tree_height(node.left), tree_height(node.right)) + 1
This visits every node exactly once and does O(1) work per node, so the total time complexity is O(n).
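For example, with a minimal node class (the class and field names here are just for illustration):

class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

root = Node(Node(), Node())   # a root with one child on each side
print(tree_height(root))      # prints 1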
Hope this helps!

Related

Best search algorithm for solving Towers of Hanoi? [closed]

Which search algorithm is most appropriate for solving the Towers of Hanoi? More specifically, of the uninformed search algorithms (breadth-first search, depth-first search, and iterative deepening), which would be best?
I'd argue that none of these algorithms would be the best way to solve Towers of Hanoi. The Towers of Hanoi puzzle is well-studied and we have some great solutions to it, some that work recursively and others that work by looking at patterns in the bits of numbers.
Any of the uninformed search strategies given above will run into trouble fairly quickly on Towers of Hanoi because the search space is so big. With n disks, there are 3^n possible states of the puzzle (each disk can be on one of three pegs), and the shortest possible solution has length 2^n - 1. You're likely to end up exploring the full search space with any of the uninformed strategies, and for any reasonable value of n the cost of holding 3^n states in memory (which you'd need for BFS) or a path of up to 2^n - 1 states in memory (for DFS or IDDFS) would be prohibitive. Contrast this with the binary counting solution to Towers of Hanoi or the standard recursive algorithm, each of which uses only O(n) total memory.
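For contrast, here is a sketch of the standard recursive solution, which uses only O(n) memory for the call stack (the function and peg names are my own):

def hanoi(n, source, target, spare):
    # Move n disks from source to target, using spare as scratch space.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # move the top n-1 disks out of the way
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # stack them back on top of disk n

hanoi(3, "A", "C", "B")   # prints the 2^3 - 1 = 7 moves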

Google Interview Question: Assign People and Cars optimally on 2D array [closed]

My friend recently got this question in a Google interview, and I think it's a tricky one.
Given people and cars on a 2D grid, find an optimal assignment of
people to cars, where you define what "optimal" means.
If we define an optimal assignment as the one with the least total Manhattan distance over all pairs (minimum sum of each pair's distance), what would be a good way to solve this, and which algorithm should I use?
Is there a better way to solve this problem more efficiently under a different definition of "optimal"?
If you use total Manhattan distance (or the total of pretty much anything else), then this is an instance of the "assignment problem", also known as "minimum-cost bipartite matching".
You can solve it using the Hungarian algorithm or other methods: https://en.wikipedia.org/wiki/Assignment_problem
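As a sketch (assuming people and cars are given as lists of (x, y) coordinates; the variable names are my own), SciPy's linear_sum_assignment solves the minimum-cost matching directly:

import numpy as np
from scipy.optimize import linear_sum_assignment

people = [(0, 0), (2, 3), (5, 1)]   # made-up example coordinates
cars = [(1, 1), (4, 4), (0, 2)]

# cost[i][j] = Manhattan distance from person i to car j
cost = np.array([[abs(px - cx) + abs(py - cy) for (cx, cy) in cars]
                 for (px, py) in people])

rows, cols = linear_sum_assignment(cost)   # optimal assignment (Hungarian-style)
for i, j in zip(rows, cols):
    print(f"person {i} -> car {j} (distance {cost[i][j]})")
print("total distance:", cost[rows, cols].sum())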
I believe another condition is that, for any person, the distances to any two cars should not be the same, and similarly, for any car, the distances to any two people should not be the same. And ultimately you want to give everyone a car.
A greedy solution works like this: compute the distance of every (car, person) pair and sort the pairs by distance. Walk through the sorted array and match a (car, person) pair if neither one is already matched. This takes O(mn log(mn)) time for m cars and n people, but can be improved by (1) parallelizing the distance computation and (2) keeping only an O(n) priority queue per person; once a person is paired, we no longer add their distances to the queue. With that, the algorithm takes about O(mn log(n)). The optimality can be proved by induction.
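A minimal sketch of that greedy pairing (without the priority-queue refinement), assuming people and cars are lists of (x, y) tuples:

def greedy_pairing(people, cars):
    # All (distance, person index, car index) triples, sorted by distance.
    triples = sorted(
        (abs(px - cx) + abs(py - cy), i, j)
        for i, (px, py) in enumerate(people)
        for j, (cx, cy) in enumerate(cars)
    )
    paired_people, paired_cars, pairs = set(), set(), {}
    for _, i, j in triples:
        if i not in paired_people and j not in paired_cars:
            paired_people.add(i)
            paired_cars.add(j)
            pairs[i] = j
    return pairs   # maps person index -> car index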

Finding max element in first N elements of dynamic array [closed]

I'm looking for an efficient algorithm or data structure to find the largest element (by second parameter) among the first N elements of a multiset into which I'll make many insertions and deletions, so I can't use a segment tree. Any ideas?
Note: I have a multiset of pairs.
You can use any balanced binary search tree implementation you are familiar with. Arguably the best known are the AVL tree and the red-black tree.
A binary search tree is usually described as storing a key-value pair in each node, with the keys ordered from left to right. Insert, delete, and find run in O(log(n)) time because the tree is balanced; balance is typically maintained via tree rotations.
To be able to find the maximum value over a range of elements, you have to store and maintain additional information in each tree node, namely maxValue over the node's subtree and the size of that subtree. Define a recursive function on a node that finds the maximum value among the first N nodes of its subtree. If N equals size, you already have the answer in the node's maxValue. Otherwise, recurse into the left and/or right child, depending on which subtrees contain some of the first N elements.
F(node, N) =
    if N == size[node] :
        maxValue[node]
    else if N <= size[leftChild[node]] :
        F(leftChild[node], N)
    else if N == size[leftChild[node]] + 1 :
        MAX(maxValue[leftChild[node]], value[node])
    else :
        MAX(maxValue[leftChild[node]],
            value[node],
            F(rightChild[node], N - size[leftChild[node]] - 1))
(Here size[null] is taken to be 0 and maxValue[null] to be negative infinity.)
If you are familiar with segment trees, you will not encounter any problems with this implementation.
I can also suggest using a treap. This is a randomised binary search tree: because of that randomised nature, the tree stays balanced (in expectation), providing O(log(n)) time complexity for the basic operations. A treap has two basic operations, split and merge, and all other operations are implemented via them. An advantage of the treap is that you don't have to deal with rotations.
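A minimal sketch of those two treap primitives in Python (key-only nodes; the names are my own choosing):

import random

class TreapNode:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()   # random heap priority keeps the tree balanced
        self.left = None
        self.right = None

def split(node, key):
    # Split into two treaps: (keys < key, keys >= key).
    if node is None:
        return None, None
    if node.key < key:
        node.right, right = split(node.right, key)
        return node, right
    else:
        left, node.left = split(node.left, key)
        return left, node

def merge(a, b):
    # Precondition: every key in a is <= every key in b.
    if a is None:
        return b
    if b is None:
        return a
    if a.priority > b.priority:
        a.right = merge(a.right, b)
        return a
    else:
        b.left = merge(a, b.left)
        return b

def insert(root, key):
    # Insert via split + merge, no rotations needed.
    left, right = split(root, key)
    return merge(merge(left, TreapNode(key)), right)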
EDIT: There is no way to maintain maxKey/minKey in each node explicitly in O(log(n)).

For TSP, how does Held–Karp algorithm reduce the time complexity from Brute-force's O(n!) to O(2^n*n^2)? [closed]

I have a hard time grasping the key idea of the Held–Karp algorithm: how does it reduce the time complexity?
Is it because it uses dynamic programming, so that time is saved by fetching intermediate results from a cache, or because it removes some paths earlier in the calculation?
Also, is it possible to use a 2-dimensional table to show the calculation for
a simple TSP instance (3 or 4 cities)?
The dynamic programming procedure of the Held–Karp algorithm takes advantage of the following property of the TSP problem: Every subpath of a path of minimum distance is itself of minimum distance.
So essentially, instead of checking all solutions in a naive "top-down", brute-force approach (trying every possible permutation), we instead use a "bottom-up" approach where all the intermediate information required to solve the problem is computed once and only once. The initial step handles the very smallest subpaths. Every time we move up to solve a larger subpath, we can look up the solutions to all the smaller subpath problems that have already been computed. The time savings come because the smaller subproblems have already been solved, and these savings compound exponentially (at each larger subpath level). But no "paths are removed" from the calculation: at the end of the procedure, all of the subproblems will have been solved. The obvious drawback is that a very large amount of memory may be required to store all the intermediate results.
In summary, the time savings of the Held–Karp algorithm follow from the fact that it never solves the subproblem for any subset (combination) of the cities more than once. The brute-force approach, by contrast, will recompute the solution to any given subset many times (albeit not necessarily consecutively within a given overall permutation).
The Wikipedia article on the Held–Karp algorithm contains a 2D distance matrix example and pseudocode.
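To make the table idea concrete, here is a compact Held–Karp sketch in Python (the variable names are my own; dist is a full n-by-n distance matrix and city 0 is the fixed starting city):

from itertools import combinations

def held_karp(dist):
    n = len(dist)
    # best[(mask, j)] = length of the shortest path that starts at city 0,
    # visits exactly the cities in the bitmask, and ends at city j.
    best = {}
    for j in range(1, n):
        best[(1 << j, j)] = dist[0][j]
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = sum(1 << j for j in subset)
            for j in subset:
                prev = mask ^ (1 << j)
                best[(mask, j)] = min(best[(prev, k)] + dist[k][j]
                                      for k in subset if k != j)
    full = (1 << n) - 2   # every city except city 0
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

There are O(2^n * n) table entries and each takes O(n) time to fill, which is where the O(2^n * n^2) bound comes from.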

How should I go about checking if my graph has at least X Minimum Spanning Trees? [closed]

I am looking for an efficient algorithm for determining whether at least X MSTs exist in a graph. Any pointers?
This doesn't flesh out a full algorithm, but the accepted answer to "An algorithm to see if there are exactly two MSTs in a graph?" (by j_random_hacker) brings up a point that will probably help you a lot. Taken from that answer:
Furthermore, every MST can be produced by choosing some particular way
to order every set of equal-weight edges, and then running the Kruskal
algorithm.
You could probably write an algorithm that takes advantage of this to count MSTs, as sketched below. Using this fact alone probably doesn't reach "efficient algorithm" territory, though I imagine any efficient algorithm would take advantage of a couple of similar facts. I'll add more results if I find any.
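As a brute-force starting point (exponential in the sizes of the equal-weight groups, so only viable for small graphs), you could enumerate the orderings that permute edges only within equal-weight groups, run Kruskal's algorithm on each, and stop as soon as X distinct MSTs have been seen. A hedged sketch (assuming a connected graph, with edges given as (weight, u, v) triples):

from itertools import groupby, permutations, product

def find(parent, x):
    # Union-find lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal(n, ordered_edges):
    parent = list(range(n))
    tree = set()
    for w, u, v in ordered_edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            tree.add((w, u, v))
    return frozenset(tree)

def has_at_least_x_msts(n, edges, x):
    # Every MST arises from running Kruskal on some ordering that
    # permutes edges only within groups of equal weight.
    edges = sorted(edges)
    groups = [list(g) for _, g in groupby(edges, key=lambda e: e[0])]
    seen = set()
    for choice in product(*(permutations(g) for g in groups)):
        ordering = [e for group in choice for e in group]
        seen.add(kruskal(n, ordering))
        if len(seen) >= x:
            return True
    return False

# A triangle with all weights equal has 3 distinct MSTs.
print(has_at_least_x_msts(3, [(1, 0, 1), (1, 1, 2), (1, 0, 2)], 3))   # True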
