What is the use of Fibonacci Heaps and B-Tree [closed]

Technically I know what Fibonacci heaps and B-trees are, but I want to know what these data structures are used for. How useful are they, and where can we find real-world uses of them? Thanks.

B-trees are commonly used to store large sets of data that need to be accessed quickly and updated often.
Perhaps the most pervasive use is indexing tables in the commonly used relational databases.

I'm not sure about Fibonacci heaps, but a B-tree is often used in database indexing and is good for range queries. I will try to find something on Fibonacci heaps.
Edit: Fibonacci heaps are essentially heaps with better amortized bounds: insert and decrease-key run in O(1) amortized time, which is why they are used to speed up algorithms like Dijkstra's and Prim's. Heaps in general are very good for finding the minimum (or maximum) element, and they are also used to implement priority queues.
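Python's built-in heapq (a binary heap, not a Fibonacci heap) is enough to sketch the priority-queue use case; the task names below are made up for illustration:

```python
import heapq

# A binary-heap priority queue: pop always returns the smallest priority.
# (A Fibonacci heap offers the same interface with faster amortized
# decrease-key, which matters inside algorithms like Dijkstra's.)
tasks = []
heapq.heappush(tasks, (3, "send report"))
heapq.heappush(tasks, (1, "fix outage"))
heapq.heappush(tasks, (2, "review PR"))

while tasks:
    priority, task = heapq.heappop(tasks)
    print(priority, task)  # 1 fix outage, then 2 review PR, then 3 send report
```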

Related

Best data structure to implement a voting system [closed]

I need to design a voting system on a high level that associates voters with their decision in sorted order by name.
I understand that I should implement a sorted map, and it seems like we need a map that performs best with random insertions. So I was wondering which of the above data structures would work best.
If you are sorting the objects based purely by name, I would say that a Binary Search Tree would work well.
If you are particularly worried about search time complexity, you can implement a balanced tree, such as an AVL tree or a splay tree. Doing this keeps your search time complexity logarithmic, which is what you're after!
A heap or a BST. A trie, a linked list, and an array would all have higher search complexity.
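A minimal sketch of the BST approach, assuming voters are keyed by name; the names and decisions below are illustrative:

```python
class Node:
    def __init__(self, name, decision):
        self.name, self.decision = name, decision
        self.left = self.right = None

def insert(root, name, decision):
    # Standard (unbalanced) BST insert, keyed on voter name.
    if root is None:
        return Node(name, decision)
    if name < root.name:
        root.left = insert(root.left, name, decision)
    else:
        root.right = insert(root.right, name, decision)
    return root

def in_order(root):
    # Yields (name, decision) pairs sorted by name.
    if root:
        yield from in_order(root.left)
        yield (root.name, root.decision)
        yield from in_order(root.right)

root = None
for name, decision in [("Carol", "yes"), ("Alice", "no"), ("Bob", "yes")]:
    root = insert(root, name, decision)
print(list(in_order(root)))  # [('Alice', 'no'), ('Bob', 'yes'), ('Carol', 'yes')]
```

An AVL or splay tree would use the same interface but rebalance on insert to guarantee logarithmic search.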

What's the most optimized algorithm (approach) for traversing a tree, ever? [closed]

As stated on Wikipedia, there are multiple algorithms for traversing a tree data structure. Some of them are combinations of others, like bidirectional search, which is mostly useful for general graphs rather than trees, since with a tree we usually have no idea where it ends and can only start from the root or from its children.
In such a case we might be able to incorporate multiprocessing or multithreading into the search process, but I couldn't find any comprehensive approach that describes this.
Now my question is: what is the most optimized way of traversing a tree when we don't have access to the whole data structure up front (to be able to index it, etc., as with a file directory)?
The most optimized algorithm is usually the one optimized for a specific use case and platform.
It does not matter whether you do inorder, preorder or postorder. Or whether you do DFS or BFS.
What matters is:
How big is the tree? Does it fit into memory?
How deep is the tree? Can you use recursion, or do you have to use an explicit stack (see the sketch after this list)?
How do you find the children of a node? Do you have to access the hard drive or the network?
What do you want to do with each node once the traversal reaches it? If this operation is long enough, optimizing the traversal is not worth it.
How do you share data between threads?
How are the nodes in the tree distributed? Does it resemble even distribution, or are there some very long and some very short branches?
How big are the node keys (this influences data locality and how much data you can fit into one L1/L2 cache line)?
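As a minimal sketch of the recursion-versus-explicit-stack point above (the Node structure is assumed, not from the question):

```python
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def dfs_iterative(root):
    # Pre-order DFS with an explicit stack instead of recursion,
    # so very deep trees cannot overflow the call stack.
    if root is None:
        return
    stack = [root]
    while stack:
        node = stack.pop()
        print(node.value)  # per-node work goes here
        # Push children in reverse so the leftmost child is visited first.
        stack.extend(reversed(node.children))

tree = Node(1, [Node(2, [Node(4)]), Node(3)])
dfs_iterative(tree)  # prints 1, 2, 4, 3
```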
Try an in-order traversal of a binary search tree. A single search costs O(log n) in a balanced tree and a full traversal is O(n), which is considered the best you can do.
Ex: http://www.geeksforgeeks.org/binary-search-tree-set-1-search-and-insertion/

Quicksort or Selectionsort? [closed]

I have a question about quicksort and selection sort. I have read many posts here, but none of them answers my question.
Take a look:
We have 10 GB of numbers and we have to sort them. However, we have only 800 MB of memory available, so mergesort is out of the question. Now, because of the huge size of the input, bubblesort is also out of the question.
Personally, I think both sorting algorithms could do this job, but I have to choose only one of them, the one that works better.
Quicksort: usually O(N log N), worst case O(N^2).
Selection sort: usual and worst case O(N^2).
Quicksort seems better, but from my experience I think that selection sort is slightly better than quicksort for huge data structures. What do you think? Thank you!
"Selection sort is slightly better than quicksort for huge data structures"! Where did you get this from? The algorithm takes quadratic time, so it's obviously much worse than quicksort. Also, how are you going to fit 10 GB in RAM? You can't run any in-memory algorithm on an array that isn't in RAM. You need an external sorting algorithm, or you might store the data in a DB and let the DB engine sort it for you.
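A minimal sketch of the external-sort approach that answer points to, assuming one integer per newline-terminated line; the file paths and chunk size are illustrative:

```python
import heapq
import itertools
import tempfile

def external_sort(in_path, out_path, lines_per_chunk=50_000_000):
    runs = []
    with open(in_path) as src:
        while True:
            # Phase 1: read a chunk that fits in RAM, sort it, spill it to disk.
            chunk = list(itertools.islice(src, lines_per_chunk))
            if not chunk:
                break
            chunk.sort(key=int)
            run = tempfile.TemporaryFile("w+")
            run.writelines(chunk)
            run.seek(0)
            runs.append(run)
    # Phase 2: k-way merge of the sorted runs; heapq.merge keeps only
    # one line per run in memory at a time.
    with open(out_path, "w") as dst:
        dst.writelines(heapq.merge(*runs, key=int))
```

With 800 MB of RAM you would pick lines_per_chunk so that each sorted chunk stays comfortably under that limit.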
Quicksort is better than selection sort for such huge data. Selection sort is sometimes said to perform better when the data contains large already-sorted runs, but selection sort does the same number of comparisons regardless of input order; it is insertion sort that benefits from presorted runs. Either way, your main problem in this case is how to sort data that cannot be held in memory all at once.
Quicksort should be used for this situation, as it is among the fastest general-purpose sorting algorithms in practice. Because selection sort scans every remaining element to find the smallest one and puts it at the front, it will take much longer (especially on a huge data set as mentioned), even with a limited amount of memory.

Difference between greedy and Dynamic and divide and conquer algorithms [closed]

I want to know the difference between these three. I know that both divide and conquer and dynamic programming split the problem into small parts; the difference between those two is that in dynamic programming the subproblems overlap and depend on each other, whereas in divide and conquer they are independent. But what about greedy?
A simplified view outlining the gist of both schemes:
Greedy algorithms neither postpone nor revise their decisions (i.e., no backtracking).
D&C algorithms merge the results of the very same algorithm applied to subsets of the data.
Examples:
Greedy: Kruskal's minimum spanning tree:
select an edge from a sorted list, check, decide, never visit it again.
D&C: merge sort:
split the data set into two halves,
merge sort each of them,
combine the results by skimming through both partial results in parallel, stopping, choosing, or advancing as appropriate.
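A minimal merge sort sketch matching the D&C description above:

```python
def merge_sort(xs):
    # Divide: split into halves. Conquer: sort each half recursively.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Combine: skim through both sorted halves in parallel, always
    # advancing whichever side has the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```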

Rebuilding a BST into AVL [closed]

How would you rebuild a given BST into an AVL tree that contains exactly the same keys?
The algorithm's running time should be O(n), and it's allowed to use O(n) additional space. Any ideas?
Full pseudo-code is not necessary; any idea or suggestion would be appreciated!
Thanks!
Extract all keys into a sorted array (O(n) space) with a suitable traversal method (O(n) time).
Build a perfectly balanced tree from the sorted array (O(n) time), simultaneously filling in the AVL balance factors for all nodes.
I've omitted the details for your own research, but a rough sketch follows.
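A minimal sketch of those two steps, assuming a plain node structure; the balance factor is derived from subtree heights computed on the way back up:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.balance = 0  # AVL balance factor: height(right) - height(left)

def keys_in_order(root, out):
    # Step 1: in-order traversal of the BST yields the keys sorted. O(n).
    if root:
        keys_in_order(root.left, out)
        out.append(root.key)
        keys_in_order(root.right, out)

def build_balanced(keys, lo, hi):
    # Step 2: the middle key becomes the root, the halves become subtrees.
    # Returns (node, height) so balance factors can be filled in bottom-up. O(n).
    if lo > hi:
        return None, 0
    mid = (lo + hi) // 2
    node = Node(keys[mid])
    node.left, lh = build_balanced(keys, lo, mid - 1)
    node.right, rh = build_balanced(keys, mid + 1, hi)
    node.balance = rh - lh  # always -1, 0, or 1 for this construction
    return node, 1 + max(lh, rh)

def bst_to_avl(root):
    keys = []
    keys_in_order(root, keys)
    avl, _ = build_balanced(keys, 0, len(keys) - 1)
    return avl
```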
