I am having trouble working out the running time of the following algorithm.
Now first my question: does the case (best/worst/average) really matter here? I cannot come up with two different inputs of the same size that behave differently.
Second, I think this algorithm runs in O(n^2). Am I right?
The comment you wrote under #OBu's answer is only about a quarter right:
1*n + 2*(n-1) + 3*(n-2) + ... +n*1
That sums to:
Sum(i=1..n, i*(n-i+1)) = n*Sum(i) - Sum(i^2) + Sum(i) = n*[n(n+1)/2] - [n(n+1)(2n+1)/6] + n(n+1)/2 = n(n+1)(n+2)/6
If you want, feel free to compute the exact formula, but the overall complexity is O(n^3).
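If you want to sanity-check this numerically, here is a tiny Python snippet (my own, just for illustration) comparing the sum above with the closed form n(n+1)(n+2)/6:

    # Compare the operation count 1*n + 2*(n-1) + ... + n*1 with the closed
    # form n*(n+1)*(n+2)/6, which grows as Theta(n^3).
    for n in (10, 100, 1000):
        total = sum(i * (n - i + 1) for i in range(1, n + 1))
        assert total == n * (n + 1) * (n + 2) // 6
        print(n, total)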
As a rule of thumb (more of a back-of-the-envelope trick I've picked up, just to give you a quick idea): if you are unsure about an algorithm with multiple for loops (of different lengths, but all related to n, as you have above), try to compute how many operations are performed around the middle of the algorithm (i = n/2). This usually gives you an idea of what the running-time complexity of the whole thing looks like - you are essentially computing the largest term of the sum, so the overall complexity is always >= the value you compute (in most cases it is the same, though).
Just to give you some hints:
How many loops do you have and how are they nested?
How often does each loop run? (Work this out from the outer loop to the inner loop.)
If in doubt, try it with n=4 or 5 and calculate each step. After this, you'll have a good feeling for what's going on.
I want to write a program that calculates the best combination of materials to use, based on parameters such as tensile strength, elastic modulus, stiffness, and the results of certain tests run on those materials. Each of those factors will be weighted differently in a WDM. Is there an algorithm that would allow me to find the best combination without actually going through all the combinations and doing each individual calculation? I will be working with a lot of data, so efficiency is important.
I tried researching algorithms like Kruskal's and other things, but I'm not very familiar with them.
The first step is to write down an equation for the number that you want to optimize.
If you can do that, and the equation has no squares or other nonlinear terms, then this is the classical linear programming problem: https://en.wikipedia.org/wiki/Linear_programming
Your equation needs to look something like this:
max O = n1 * p1 + n2 * p2 - n3 * p3 ...
If so, then your best bet is to choose a linear programming package (ask Google) with a good introductory tutorial and plug your problem into that. After a day or so on a steep learning curve, your problem will become almost trivial.
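For example, with Python's SciPy the whole thing is only a few lines; the weights and bound below are made-up placeholders, just to show the shape of a linear program:

    from scipy.optimize import linprog

    # Maximize O = 3*x1 + 2*x2 + 4*x3 subject to x1 + x2 + x3 <= 10 and x >= 0.
    # linprog minimizes, so negate the objective coefficients.
    c = [-3, -2, -4]
    A_ub = [[1, 1, 1]]
    b_ub = [10]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x, -res.fun)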
If you cannot do that, then you will need to use some sort of hill climbing algorithm - probably best to hire an expert to help with that.
Is it possible to show that a task has been done in the minimum required number of commands or lines of code in a language? It is obvious that if you can do a task in one command, that is the shortest way to do it, but this only holds for tasks like addition. If, say, I created a sorting algorithm, how would I know whether or not a faster way to carry out that task exists?
First off, the minimum number of lines of code does not necessarily mean the minimum number of commands (i.e. processor instructions). Since the former is not really significant in an algorithmic sense, I am assuming that you are trying to find the latter.
On that note, there are a variety of techniques for proving the minimum number of steps (not commands) needed for some complex tasks. The minimal number of steps necessary to achieve a task does not directly correspond to the minimum number of commands, but it should be relatively straightforward to adapt these techniques to bound the number of commands essential to solve the problem. Note that these techniques do not necessarily yield a lower bound for every complex task; whether a lower bound can be found depends on the specific task.
Incidentally, (comparison-based) sorting, which you mentioned in your question, is one of the tasks for which such a proof method exists, namely decision trees. You can find a more detailed description of the method in many sources, including here, but the idea is simply to find the least number of comparisons that must be made in order to sort an array. It is a well-known technique lying at the heart of the proof that comparison-based sorting algorithms have a time complexity lower bound of Ω(N log N).
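As a quick illustration of the decision-tree argument: a comparison sort must be able to distinguish all N! possible input orderings, and a binary decision tree that makes h comparisons has at most 2^h leaves, so 2^h >= N! and h >= log2(N!), which is Θ(N log N). A tiny Python check of that bound (my own snippet, just for illustration):

    import math

    # Information-theoretic lower bound on comparisons: ceil(log2(n!)).
    for n in (4, 8, 16, 64):
        print(n, math.ceil(math.log2(math.factorial(n))))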
Call every subunitary ratio whose denominator is a power of 2 a perplex.
Number 1 can be written in many ways as a sum of perplexes.
Call every sum of perplexes a zeta.
Two zetas are distinct if and only if one of them has at least one perplex that the other does not have. In the image shown above, the last two zetas are considered to be the same.
Find the number of ways in which 1 can be written as a zeta with N perplexes. Because this number can be big, calculate it modulo 100003.
Please don't post the code, but rather the algorithm. Be as precise as you can.
This problem was given at a contest, and the official solution, written in Romanian, has been uploaded as a docx file at https://www.dropbox.com/s/ulvp9of5b3bfgm0/1112_descr_P2_fractii2.docx?dl=0 (you can use Google Translate).
I do not understand what the author of the solution meant to say there.
Well, this reminds me of BFS (breadth-first search) algorithms, where you radiate out from a single point to find multiple solutions with different permutations.
Here you can use recursion, and set the base case as the point where N perplexes have been reached in one call chain of the recursive function.
So you can say:
function(int N <-- remaining perplexes, ArrayList<Double> currentNumbers, double dividedNum)
    if N == 0, then you're done - enter the currentNumbers array into a hashtable
    clone the currentNumbers ArrayList as cloneNumbers
    remove dividedNum from cloneNumbers and add two copies of dividedNum/2
    iterate through the indices of cloneNumbers
    for every number x in cloneNumbers, call function(N - 1, cloneNumbers, x)
This is a rough, very inefficient but short way to do it. There are obviously a lot of ways you can prune the algorithm (reduce the number of duplicates going into the hashtable, avoid cloning as much as possible, etc.), but because this generates every permutation of every sequence and then enters each sequence into a hashtable, the hashtable will use its equals() comparison to see that a sequence already exists (such as your last two zetas) and reject the duplicate. That way, you'll be left with the answer you want.
The efficiency of the current algorithm is O(|E|^N), where |E| is the number of values that can appear in the array after all insertions and N is the number of insertions (or, as you said, the number of perplexes). Obviously this isn't the most optimal speed, but it definitely works.
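For reference, here is a minimal, deliberately unoptimized Python sketch of that splitting idea; starting from the single term 1 and deduplicating with sorted tuples (instead of a hashtable of lists) are my own assumptions about how to flesh out the pseudocode:

    # Each term is stored as an exponent k, standing for the perplex 1/2**k.
    # Splitting a term replaces k with two copies of k + 1 (i.e. x -> x/2 + x/2).
    def count_zetas(n):
        seen = set()
        def split(exps):
            if len(exps) == n:
                seen.add(tuple(sorted(exps)))   # dedupe, like the hashtable above
                return
            for i, k in enumerate(exps):
                split(exps[:i] + [k + 1, k + 1] + exps[i + 1:])
        split([0])   # start from the single term 1 = 1/2**0
        return len(seen)

    print(count_zetas(4))   # small N only; this brute force blows up quickly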
Hope this helps!
I am trying to find a way to solve an Optimization problem as follows:
I have 22 different objects that can be selected more than once. I have an evaluation function f that takes the multiplicities and calculates the total value.
f is a product of fractions of linear (affine) terms and, as such, is differentiable and even smooth in the allowed region.
I want to optimize f with respect to the 22 variables, with the additional conditions that certain sums may not exceed certain values (for example, if a,...,v are my variables, a + e + i + m + q + s <= 9). By this, all of the variables are bounded.
If f were strictly monotonic, this could be solved optimally by a (minimally modified) knapsack solution. However, the function isn't convex. That means it is not even safe to assume that if taking an object A is better than taking B on an empty knapsack, that choice still holds after adding a third object C (as C could change B's benefit so that it becomes better than A's). This means that a greedy algorithm cannot be used.
Are there similar algorithms that solve such a problem optimally (or at least nearly optimally)?
EDIT: As requested, an example of what the problem looks like (I chose 5 variables a, b, c, d, e for simplicity):
for example,
f(a,b,c,d,e) = e*(a*0.45+b*1.2-1)/(c+d)
(Every variable only appears once, if this helps at all)
Also, for example, a+b+c=4, d+e=3
The problem is to optimize this with respect to a, b, c, d, e as integers. There are plenty of optimization algorithms that work for convex functions, but very few for non-convex ones...
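For concreteness, a brute-force baseline over this 5-variable example might look like the snippet below (treating the variables as non-negative integers satisfying the two sum constraints is my assumption); this is exactly the exhaustive search I would like to avoid for the full 22-variable problem:

    from itertools import product

    def f(a, b, c, d, e):
        # Guard against division by zero when c + d == 0.
        return e * (a * 0.45 + b * 1.2 - 1) / (c + d) if c + d else float('-inf')

    candidates = [
        (a, b, c, d, e)
        for a, b, c in product(range(5), repeat=3) if a + b + c == 4
        for d, e in product(range(4), repeat=2) if d + e == 3
    ]
    best = max(candidates, key=lambda v: f(*v))
    print(best, f(*best))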
Two algorithms A and B solve the same algorithmic problem, A taking n^3 seconds and B taking n days.
(i) Which algorithm is asymptotically preferable?
(ii) How large does n need to be before B takes one-quarter of the time taken by A?
How do I go about solving these?
My answer for (i) is that B is preferable, since n^3 grows faster than n asymptotically; the days and seconds here are just constant factors and therefore do not matter as n approaches infinity.
For (ii) my guess is 2 days, but I wondered whether others got the same.
I could be completely wrong here, but I think you want this:
24*60*60*n = n^3 * 1/4
which, when plugged into Wolfram Alpha, gives
587.87....
or
-587.87 in some alternate universe ;)
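You can also get there by hand: for n > 0, dividing both sides by n gives n^2 = 4 * 24 * 60 * 60 = 345600, so n = sqrt(345600) ≈ 587.88. A quick check in Python (my own snippet):

    import math
    print(math.sqrt(4 * 24 * 60 * 60))   # 587.877...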
This looks like a problem where a brute-force approach would give you a better feel for what is going on. I would look at the total time at which the two functions are equal and at what happens above and below that threshold. Also, for good practice, it may be worth looking for multiple solutions.
y = n^3 seconds
y = n days = n * (60 seconds * 60 minutes * 24 hours) = 86400*n seconds
Then step through increasing values of n, looking for the points where the two functions give the same value of y.
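A minimal sketch of that idea (my own, just to illustrate): step through n and report the first point at which n^3 seconds is no longer less than n days; plugging in the factor of 4 from part (ii) of the question instead would stop around n = 588.

    SECONDS_PER_DAY = 60 * 60 * 24

    # Find the first integer n where algorithm A (n^3 seconds) stops being
    # faster than algorithm B (n days).
    n = 1
    while n ** 3 < SECONDS_PER_DAY * n:
        n += 1
    print(n)   # 294, since sqrt(86400) is about 293.9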