Recently I was learning sorting algorithms, and a book I was reading said that the numbers of comparisons, swaps, and the amount of extra space determine an algorithm's performance. That confused me: why does the number of swaps hurt performance?
By "performance" I mean the running time of the code. I have seen another post that mentioned swapping elements is vastly more expensive than comparing them:
in practice, swapping elements is vastly more expensive than comparing. This is even more pronounced when elements are far apart, due to caching. So, on modern hardware, algorithms that tend to swap less - and when they do swap, move elements the furthest toward their final destination - tend to win out.
I want to understand the effect of swaps on the performance of sorting algorithms. I'm new to this, so please keep it simple.
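To make that concrete, here is a rough sketch (the function names and counters are my own, just for illustration) that counts comparisons and swaps in bubble sort and selection sort on the same random input. Both perform roughly the same number of comparisons, but selection sort does at most one swap per position, which is the kind of difference the quoted post argues matters on modern hardware:

```python
import random

def bubble_sort(values):
    """Bubble sort: repeatedly swaps adjacent out-of-order pairs."""
    a = list(values)
    comparisons = swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return comparisons, swaps

def selection_sort(values):
    """Selection sort: at most one swap per position."""
    a = list(values)
    comparisons = swaps = 0
    for i in range(len(a) - 1):
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1
            if a[j] < a[smallest]:
                smallest = j
        if smallest != i:
            a[i], a[smallest] = a[smallest], a[i]
            swaps += 1
    return comparisons, swaps

data = [random.randrange(1000) for _ in range(500)]
print("bubble sort    (comparisons, swaps):", bubble_sort(data))
print("selection sort (comparisons, swaps):", selection_sort(data))
```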
I am solving interesting questions that are frequently asked in programming interviews, such as the following:
1. Compute the sum of digits in all numbers from 1 to n.
2. Count the perfect squares between two given numbers.
3. Count the numbers from 1 to n that have 4 as a digit.
I am wondering what the real-world applications of the above are. Could anyone please share their views?
I think these questions have multiple solutions. Questions 1 and 3 are interesting because you can solve these problems without iteration in very clever ways, but you can also solve them in very long-winded ways. As someone who does a lot of interviewing, I would want to use this type of question to gauge how sophisticated a candidate is at solving problems. On that basis, I don't think giving you a clever answer to these questions is going to be in your best interests for succeeding at interviews. How you tackle a problem and how far you can push the boundaries is what is likely being tested.
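For reference, the long-winded route alluded to above is just brute force over every number; a rough sketch might look like this (the function names are my own, and the clever closed-form solutions, which avoid iterating over every number, are deliberately left out):

```python
import math

def sum_of_digits_1_to_n(n):
    """Problem 1: sum the digits of every number from 1 to n (brute force)."""
    return sum(int(d) for k in range(1, n + 1) for d in str(k))

def count_perfect_squares(a, b):
    """Problem 2: count the perfect squares between a and b, inclusive (brute force)."""
    return sum(1 for k in range(a, b + 1) if math.isqrt(k) ** 2 == k)

def count_numbers_with_digit_4(n):
    """Problem 3: count the numbers from 1 to n that contain the digit 4 (brute force)."""
    return sum(1 for k in range(1, n + 1) if '4' in str(k))

print(sum_of_digits_1_to_n(328))       # 3241
print(count_perfect_squares(1, 100))   # 10
print(count_numbers_with_digit_4(50))  # 14
```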
What is the optimal and usual value of selective pressure in tournament selection? What percent of the best members of the current generation should propagate to the next generation?
Unfortunately, there isn't a great answer to this question. The optimal parameters will vary from problem to problem, and people use a wide range of them. Selecting the right tournament selection parameters is currently more of an art than a science. Stronger selective pressure (a larger tournament) will generally result in the population converging on a solution faster, at the cost of that solution potentially not being as good. This is called the exploration vs. exploitation tradeoff, and it underlies most algorithms for searching a large space of possible solutions - you're not going to get away from it.
I know that's not very helpful, though - you want a starting place, and that's completely reasonable. So here's the best one I know of (and I know a number of others who use it as a go-to default tournament configuration as well): a tournament size of two. Basically, this means you just keep picking random pairs of solutions, choosing the best one, and sending it to the next generation (with mutation and crossover as desired), until the next generation is the desired size. This has the nice property that any member of the population besides the absolute worst has a chance of getting to the next generation, but better ones have a better chance.
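As a minimal sketch of that size-two default (the function names are invented here, and mutation and crossover are omitted since, as noted above, they are applied as desired after selection):

```python
import random

def tournament_select(population, fitness, tournament_size=2):
    """Pick `tournament_size` random individuals and return the fittest one."""
    contenders = random.sample(population, tournament_size)
    return max(contenders, key=fitness)

def next_generation(population, fitness, tournament_size=2):
    """Fill the next generation by repeated tournaments (mutation/crossover omitted)."""
    return [tournament_select(population, fitness, tournament_size)
            for _ in range(len(population))]

# Toy example: individuals are numbers and fitness is the value itself.
population = [random.uniform(0, 1) for _ in range(20)]
new_population = next_generation(population, fitness=lambda x: x)
print(sorted(new_population, reverse=True)[:5])
```

Every individual except the very worst has a chance of surviving a tournament, but fitter individuals win more often, which is the gentle selective pressure described above.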
I currently see two ways to code the next step of my program and there are probably more, but the two routes I have are as follows.
1. I take the factors of the lowest number and loop through the other numbers to see if they share those common factors.
2. I find the factors of the lowest number and add them to a list. I then find the factors of the other numbers that do not exceed the lowest and add them to the same list. I then run through the list to check which is the highest number that appears x times.
I am leaning towards 1, but I'm not sure.
Sorry if this is too ambiguous, thanks.
Well, given the ambiguity, as stated: the first route requires fewer steps and avoids allocating a data structure.
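Assuming the goal is the common factors (for instance, the greatest common factor) of a set of numbers, a rough sketch of the first route might look like this (the names are illustrative, not from the original post):

```python
def factors(n):
    """All positive factors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def common_factors(numbers):
    """Route 1: keep only the factors of the smallest number that divide every other number."""
    return [f for f in factors(min(numbers))
            if all(n % f == 0 for n in numbers)]

nums = [12, 18, 30]
print(common_factors(nums))       # [1, 2, 3, 6]
print(max(common_factors(nums)))  # 6, the greatest common factor
```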
I am getting ahead on next semester's classes and just had a question about big-O notation. What is the time factor measured in? Is it measured in milliseconds or nanoseconds, or is it just an arbitrary measure based on the number of inputs, n, used to compare different versions of algorithms?
It kind of depends on how exactly you define the notation (there are many different definitions that ultimately describe the same thing). We defined it on Turing machines, where time is the number of computation steps performed. On real machines it's similar - for instance, the number of atomic instructions executed. As some of the comments have pointed out, the unit of time doesn't really matter anyway, because what's measured is the asymptotic performance, that is, how the performance changes with increasing input size.
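As a small illustration of counting steps rather than seconds, the sketch below (the function name is my own) counts the comparisons a worst-case linear search performs; the count grows in direct proportion to n, and that growth, not any unit of time, is what the big-O expression describes:

```python
def linear_search_steps(a, target):
    """Count the comparisons a linear search performs before stopping."""
    steps = 0
    for x in a:
        steps += 1
        if x == target:
            break
    return steps

for n in (10, 100, 1000, 10000):
    data = list(range(n))
    # Worst case: the target is absent, so every element is compared once.
    print(n, linear_search_steps(data, -1))
```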
Note that this isn't really a programming question and is probably not a good fit for the site. It's more of a computer science thing, though I think the Computer Science Stack Exchange site is meant for postgraduates.
I want to understand the main reason why a sorting algorithm is stable or unstable. I understand that every unstable algorithm can be made stable by adding each element's original position as an extra key (though that can affect speed and memory usage). I also understand that during unstable sorting, elements with equal keys can change their relative order.
But what is the main reason for it? Is it because in some cases we use a divide-and-conquer strategy?
It depends on the comparisons and swapping that the algorithm uses. Generally if comparisons and swaps occur between far-flung objects in an array, without having looked at the elements in between first, the sort will not be stable.
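To make that concrete, here is a small sketch (the key/label pairs and the function name are invented for illustration) showing how selection sort's long-distance swap breaks stability, while Python's built-in sorted, which is stable, preserves the original order of equal keys:

```python
def selection_sort_by_key(pairs):
    """Selection sort on (key, label) pairs; the long-distance swap below
    can jump an element over others with an equal key, which is what
    makes the result unstable."""
    a = list(pairs)
    for i in range(len(a) - 1):
        smallest = min(range(i, len(a)), key=lambda j: a[j][0])
        a[i], a[smallest] = a[smallest], a[i]
    return a

pairs = [(2, 'a'), (2, 'b'), (1, 'c')]
print(selection_sort_by_key(pairs))       # [(1, 'c'), (2, 'b'), (2, 'a')] - unstable
print(sorted(pairs, key=lambda p: p[0]))  # [(1, 'c'), (2, 'a'), (2, 'b')] - stable
```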
There is no single reason. Stability is just a property that an algorithm either has or lacks. Sometimes an algorithm is stable by sheer chance (the creator did not have stability in mind); most of the time an algorithm is stable precisely because the creator wanted it to be.