Difference between greedy, dynamic programming, and divide and conquer algorithms [closed]

I want to know the difference between these three. I know that divide and conquer and dynamic programming both divide the problem into smaller subproblems, but in D&C the subproblems are independent of one another, whereas in dynamic programming they overlap and depend on each other. But what about greedy?

A simplified view outlining the gist of both schemes:
Greedy algorithms neither postpone nor revise their decisions (i.e., no backtracking).
D&C algorithms merge the results of the very same algorithm applied to subsets of the data.
Examples:
Greedy: Kruskal's minimum spanning tree. Select an edge from a sorted list, check it, decide, and never visit it again.
D&C: merge sort. Split the data set into two halves, merge sort each of them, then combine the results by skimming through both partial results in parallel, stopping, choosing, or advancing as appropriate.
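A minimal sketch of the merge-sort example above, in Python (the code and names are illustrative, not from the original answer):

```python
def merge_sort(data):
    """Sort a list by divide and conquer: split, recurse, merge."""
    if len(data) <= 1:                   # base case: nothing left to split
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])        # the very same algorithm on each half
    right = merge_sort(data[mid:])
    # combine: skim through both partial results in parallel
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # one of these is already empty
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]
```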

Related

The space complexity is always the lower bound of the time complexity [closed]

My book states that for an algorithm with time complexity T(n) and space complexity S(n), the following statement holds:
T(n) = Ω(S(n)).
My question is: Why does this statement hold?
We are speaking of sequential algorithms.
Then a space complexity of S(n) means that the algorithm inspects each of S(n) different memory locations at least once. In order to visit this many memory locations, a sequential algorithm needs Ω(S(n)) time.
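One way to spell out the counting step, assuming each elementary operation touches at most a constant c = O(1) memory cells (an assumption I am adding, though it holds in the usual RAM model):

```latex
% Each of T(n) steps touches at most c = O(1) cells,
% yet all S(n) cells must be touched at least once, so
c \cdot T(n) \;\ge\; S(n)
\quad\Longrightarrow\quad
T(n) \;\ge\; \tfrac{1}{c}\, S(n) \;=\; \Omega\bigl(S(n)\bigr).
```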

When to Use Sorting Algorithms [closed]

I'm a mostly self-taught programmer in my freshman year of college, going towards a BS in CompSci. Last year I would do some of the homework for the AP CompSci kids, and when they got to sorting algorithms, I understood what they did, but my question was: what is a case where one is used? I know this may seem like a horrible or ridiculous question, but other than a few cases I can think of, I don't understand when one would use a sorting algorithm. I understand that they are essential to know, and that they are foundational algorithms. But in the day to day, when are they used?
A sorting algorithm is an algorithm that arranges a list of elements in a certain order. You use such algorithms whenever you want the elements in some order.
For example:
Sorting strings in lexicographical order. This makes several computations easier (like searching, insertion, and deletion, provided an appropriate data structure is used).
Sorting integers as a preprocessing step for other algorithms. Suppose you have many queries against a database to find an integer; you will want to apply binary search, and for it to be applicable, the input must be sorted (see the sketch below).
In many computational geometry algorithms (like convex hull), sorting the coordinates is the first step.
So, basically, if you want some ordering, you resort to sorting algorithms!
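A minimal sketch of the sort-then-binary-search pattern from the second example, using Python's standard library (the data is made up for illustration):

```python
import bisect

ids = [42, 7, 19, 88, 3, 56]    # hypothetical integers to be queried
ids.sort()                       # one-time O(n log n) preprocessing

def contains(sorted_ids, key):
    """Binary search in O(log n); requires sorted input."""
    i = bisect.bisect_left(sorted_ids, key)
    return i < len(sorted_ids) and sorted_ids[i] == key

print(contains(ids, 19))   # True
print(contains(ids, 20))   # False
```

After the single sort, every query costs O(log n) instead of the O(n) a linear scan of the unsorted list would need.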

Do all divide and conquer approaches use recursive functions, or not necessarily? [closed]

Do all algorithms that use the divide and conquer approach use recursive functions, or not necessarily?
Binary search is an application of the D&C paradigm. (As follows: split in two halves and continue into the half that may contain the key.)
It can be implemented either recursively or non-recursively.
Recursion is handy when you need to keep both "halves" of a split and queue them for later processing. A common special case is tail recursion, where you only queue one of the halves and process the other immediately. In binary search, you simply drop one of the halves.
In a very broad sense, D&C is the father of all algorithms when stated as "break the problem into easier subproblems of the same kind". This definition also encompasses iterative solutions, often implemented without recursion.
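A minimal sketch of both variants in Python (illustrative code, not from the original answer); the iterative version shows that dropping one half needs no call stack:

```python
def bsearch_recursive(a, key, lo=0, hi=None):
    """D&C binary search, recursive form."""
    if hi is None:
        hi = len(a)
    if lo >= hi:
        return -1                                    # empty half: not found
    mid = (lo + hi) // 2
    if a[mid] == key:
        return mid
    if key < a[mid]:
        return bsearch_recursive(a, key, lo, mid)    # continue into left half
    return bsearch_recursive(a, key, mid + 1, hi)    # continue into right half

def bsearch_iterative(a, key):
    """Same algorithm; the dropped half never needs to be remembered."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if key < a[mid]:
            hi = mid
        else:
            lo = mid + 1
    return -1

a = [3, 7, 19, 42, 56, 88]
assert bsearch_recursive(a, 42) == bsearch_iterative(a, 42) == 3
```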

How should I go about checking if my graph has at least X Minimum Spanning Trees? [closed]

I am looking for an efficient algorithm for determining whether at least X MSTs exist in a graph. Any pointers?
This doesn't flesh out a full algorithm, but the accepted answer to "An algorithm to see if there are exactly two MSTs in a graph?" (by @j_random_hacker) brings up a point that will probably help you a lot. Taken from his answer:
Furthermore, every MST can be produced by choosing some particular way to order every set of equal-weight edges, and then running the Kruskal algorithm.
You could probably write an algorithm that takes advantage of this to count MSTs. Using this fact alone probably doesn't reach "efficient algorithm" territory, though I imagine any efficient algorithm will exploit a couple of similar facts. I'll add more results if I find any.
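A brute-force sketch built directly on that quoted fact (all names are my own, and this is exponential in the sizes of the equal-weight groups, so it is only usable on small graphs): run Kruskal once per tie-breaking order and collect the distinct trees until X of them have been seen.

```python
from itertools import permutations, product

def find(parent, x):
    """Union-find: find with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal(n, ordered_edges):
    """Run Kruskal on edges already in (tie-broken) sorted order."""
    parent = list(range(n))
    tree = []
    for u, v, w in ordered_edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return frozenset(tree)

def has_at_least_x_msts(n, edges, x):
    """Brute force: try every ordering of each equal-weight group."""
    groups = {}
    for e in edges:
        groups.setdefault(e[2], []).append(e)
    weights = sorted(groups)
    seen = set()
    # every MST arises from some permutation of the tie groups
    for combo in product(*(permutations(groups[w]) for w in weights)):
        ordered = [e for group in combo for e in group]
        seen.add(kruskal(n, ordered))
        if len(seen) >= x:
            return True
    return False

# a 4-cycle with equal weights: 4 spanning trees, all minimal
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
print(has_at_least_x_msts(4, edges, 4))   # True
print(has_at_least_x_msts(4, edges, 5))   # False
```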

Are any of the state of the art Maximum Flow algorithms practical? [closed]

For the maximum flow problem, there seem to be a number of very sophisticated algorithms, with at least one developed as recently as last year. Orlin's "Max flows in O(mn) time, or better" gives an algorithm that runs in O(VE).
On the other hand, the algorithms I most commonly see implemented are (I don't claim to have done an exhaustive search; this is just from casual observation):
Edmonds-Karp, O(VE^2)
Push-relabel, O(V^2 E), or O(V^3) using FIFO vertex selection
Dinic's Algorithm, O(V^2 E)
Are the algorithms with better asymptotic running time just not practical for the problem sizes in the real world? Also, I see "Dynamic Trees" are involved in quite a few algorithms; are these ever used in practice?
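For reference, a minimal sketch of Edmonds-Karp, the O(VE^2) entry from the list above, in Python (the graph encoding and all names are my own; chosen here only because it is the simplest of the listed algorithms to implement):

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Max flow via shortest (BFS) augmenting paths, O(V E^2).
    capacity: dict of dicts, capacity[u][v] = residual capacity of u->v.
    Mutated in place into the final residual network."""
    flow = 0
    while True:
        # BFS for the shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                       # no augmenting path remains
        # reconstruct the path and find its bottleneck capacity
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        # augment: push flow forward, add residual capacity backward
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity[v].setdefault(u, 0)
            capacity[v][u] += bottleneck
        flow += bottleneck

graph = {
    's': {'a': 3, 'b': 2},
    'a': {'b': 1, 't': 2},
    'b': {'t': 3},
    't': {},
}
print(edmonds_karp(graph, 's', 't'))  # 5
```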
