Subset product & quantum computers: is an instance solvable? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
Suppose you have a quantum computer that can run Shor's algorithm for factorization of integers.
Is it then possible to produce an oracle that determines if no solution exists for an instance of the Subset Product problem, with 100% confidence, in sub-exponential time?
So, the oracle is given a sequence x1, ... xn, as the description of a subset product problem.
It responds either Yes, a solution to this instance does not exist, or No, a solution to this instance may or may not exist.
If we take the prime factors of all elements in the sequence and then check whether all of the target product's prime factors are present among them, this should tell us if a solution is not at all possible. A solution may exist if and only if all the prime factors are accounted for. On quantum computers, prime factorization is sub-exponential.
I would like some feedback on whether this logic is correct, whether it works, and whether the complexity of this oracle/algorithm really differs between classical and quantum systems. I would also appreciate an explanation of reductions: can Subset Product be reduced to 3SAT without consequence?

Your algorithm, if I understood it correctly, will fail for the elements [6, 15] and the target 10. It will determine that 6*15 = 90 = 2*3*3*5, which contains all of the factors used in 10 = 2*5, and incorrectly conclude that this means you can make 10 by multiplying a subset of [6, 15].
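For concreteness, here is a small sketch of the proposed factor test next to an exhaustive subset check, showing the two disagreeing on exactly this counterexample (the function names are mine, purely illustrative):

```python
from functools import reduce
from itertools import combinations

def prime_factors(n):
    """Trial-division factorization; returns the multiset of prime factors."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def factor_check(elements, target):
    """The proposed test: every prime factor of the target appears
    somewhere among the prime factors of the elements."""
    available = set()
    for x in elements:
        available.update(prime_factors(x))
    return all(p in available for p in prime_factors(target))

def has_subset_product(elements, target):
    """Ground truth by exhaustive (exponential) search over all subsets."""
    for r in range(1, len(elements) + 1):
        for combo in combinations(elements, r):
            if reduce(lambda a, b: a * b, combo) == target:
                return True
    return False

print(factor_check([6, 15], 10))        # True: 2 and 5 both appear among 6 and 15
print(has_subset_product([6, 15], 10))  # False: the subset products are 6, 15, 90
```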
There are two reasons that it's unlikely you'll be able to fix this algorithm:
Subset Product is NP-Complete. Finding a polynomial time quantum algorithm for it, or showing that no such algorithm exists, is probably as hard as determining if P=NP. That is to say, very very hard.
You don't want the prime factors, you want the "no-need-to-reduce" factors. For example, if every time a number in the problem has a prime factor of 13 it's accompanied by a factor of 17 then there's no need to break 221 into 13*17. You can apply Euclid's gcd algorithm to various combinations of elements to find these no-need-to-reduce factors, no quantum-ness required.
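A minimal sketch of that gcd-based refinement (my own illustrative implementation, not a standard routine): it repeatedly splits any two entries that share a common factor, and stops once everything is pairwise coprime, all without any primality testing.

```python
from math import gcd

def coprime_basis(numbers):
    """Refine the inputs into a set of pairwise-coprime factors using
    only gcds: whenever two entries share a common factor g > 1,
    replace them with g and their cofactors."""
    basis = {n for n in numbers if n > 1}
    changed = True
    while changed:
        changed = False
        items = sorted(basis)
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                a, b = items[i], items[j]
                if a not in basis or b not in basis:
                    continue  # already split in this pass
                g = gcd(a, b)
                if g > 1:
                    basis -= {a, b}
                    basis |= {x for x in (g, a // g, b // g) if x > 1}
                    changed = True
    return basis

print(coprime_basis([221, 442, 3]))  # {2, 3, 221}
```

Note that 221 = 13*17 survives unsplit, exactly because 13 and 17 always travel together in these inputs: there is no need to reduce it further.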


how to prove a task is done in minimum required commands [closed]

Is it possible to show that a task is done in the minimum number of required commands or lines of code in a language? It is obvious that if you can do a task in one command, that is the shortest way to do it, but this will only be true of tasks like addition. If, say, I created an algorithm for sorting, how would I know whether or not there exists a faster way to carry out this task?
First off, the minimum number of lines of code does not necessarily mean the minimum number of commands (i.e. processor commands). As the former is not really significant in an algorithmic sense, I am assuming that you are trying to find out the latter.
On that note, there are a variety of techniques to prove the minimum number of steps (not commands) needed to do some complex tasks. The minimal number of steps does not directly correspond to the minimum number of commands, but it should be relatively trivial to modify these techniques to find the minimum number of commands essential to solve the problem. Note that these techniques do not necessarily yield a lower bound for every complex task; whether a lower bound can be found depends on the specific task.
Incidentally, (comparison-based) sorting, which was mentioned in your question, is one of the tasks for which there is such a proof method, namely decision trees. You may find a more detailed description of the method in many sources, including here, but in essence the method finds the least number of comparisons that must be made in order to sort an array. It is a well-known technique lying at the heart of proving why comparison-based sorting algorithms have a time-complexity lower bound of n log n.
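As a quick worked instance of the decision-tree bound: a comparison sort on n elements must distinguish all n! possible input orderings, so the tree needs at least n! leaves and hence depth at least ⌈log2(n!)⌉, which is Θ(n log n) by Stirling's approximation. A few values:

```python
from math import ceil, factorial, log2

def comparison_lower_bound(n):
    """Minimum comparisons any comparison sort needs on n elements:
    the decision tree must have at least n! leaves, so its depth
    (worst-case comparison count) is at least ceil(log2(n!))."""
    return ceil(log2(factorial(n)))

for n in (3, 5, 10):
    print(n, comparison_lower_bound(n))  # 3 -> 3, 5 -> 7, 10 -> 22
```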

How to select the number of cluster centroid in K means [closed]

I am going through a list of algorithms that I found and trying to implement them for learning purposes. Right now I am coding k-means and am confused about the following:
How do you know how many clusters there are in the original data set?
Is there any particular format I have to follow in choosing the initial cluster centroids, besides all centroids having to be different? For example, does the algorithm converge if I choose cluster centroids that are different but close together?
Any advice would be appreciated.
Thanks
With k-means you are minimizing a sum of squared distances. One approach is to try all plausible values of k. As k increases the sum of squared distances should decrease, but if you plot the result you may see that the sum of squared distances decreases quite sharply up to some value of k, and then much more slowly after that. The last value that gave you a sharp decrease is then the most plausible value of k.
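A minimal sketch of that elbow procedure on 1-D data (a toy Lloyd's-algorithm implementation of my own, not a library call): run k-means for several values of k and watch the sum of squared distances.

```python
import random

def kmeans_1d(data, k, iters=20, seed=0):
    """Minimal 1-D Lloyd's algorithm; returns the final within-cluster
    sum of squared distances (the quantity the elbow plot tracks)."""
    rng = random.Random(seed)
    centroids = rng.sample(data, k)  # k distinct points as initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: (x - centroids[i]) ** 2)
            clusters[nearest].append(x)
        # Recompute each centroid as its cluster mean; keep it if the cluster is empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sum(min((x - c) ** 2 for c in centroids) for x in data)

# Two well-separated groups: the elbow should appear at k = 2.
data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
for k in (1, 2, 3):
    print(k, kmeans_1d(data, k))
```

On this data the sum drops from about 121.6 at k = 1 to about 0.1 at k = 2, and barely improves afterwards, so k = 2 is the elbow.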
k-means isn't guaranteed to find the best possible answer each run, and it is sensitive to the starting values you give it. One way to reduce problems from this is to start it many times, with different starting values, and pick the best answer. It looks a bit odd if an answer for larger k is actually larger than an answer for smaller k. One way to avoid this is to use the best answer found for k clusters as the basis (with slight modifications) for one of the starting points for k+1 clusters.
In the standard k-means, the value of K is chosen by you, sometimes based on the problem itself (when you know how many classes exist, or how many classes you want to exist), other times a more or less arbitrary value. Typically the first iteration consists of randomly selecting K points from the dataset to serve as centroids. In the following iterations the centroids are adjusted.
After checking the k-means algorithm, I suggest you also look at k-means++, which improves on the standard initialization: it spreads out the initial centroids, which helps avoid the sometimes poor clusterings found by the standard k-means algorithm. (Note that k-means++ does not choose K for you; K is still an input.)
If you need more specific details on implementation of some machine learning algorithm, please let me know.

How will greedy work in this coin change case [closed]

The problem is here
Say I have just 25-cent, 10-cent and 4-cent coins and my total amount is 41. Using greedy, I'll pick 25-cent and then 10-cent, and then the remaining 6 cents cannot be made.
So my question is, will greedy in this case tell me that there is no solution?
It looks like your problem was answered right in the Greedy algorithm wiki: http://en.wikipedia.org/wiki/Greedy_algorithm#Cases_of_failure
Imagine the coin example with only 25-cent, 10-cent, and 4-cent coins. The greedy algorithm would not be able to make change for 41 cents, since after committing to use one 25-cent coin and one 10-cent coin it would be impossible to use 4-cent coins for the balance of 6 cents, whereas a person or a more sophisticated algorithm could make change for 41 cents with one 25-cent coin and four 4-cent coins.
The greedy algorithm mentioned in your link assumes the existence of a unit coin. Otherwise there are some integer amounts it can't handle at all.
Regarding optimality - as stated there, it depends on the available coins. For {10,5,1} for example the greedy algorithm is always optimal (i.e. returns the minimum number of coins to use). For {1,3,4} the greedy algorithm is not guaranteed to be optimal (it returns 6=4+1+1 instead of 6=3+3).
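A short sketch of the greedy coin changer makes both failure modes concrete (illustrative code, not a canonical implementation): it can get stuck entirely, or terminate with a suboptimal answer.

```python
def greedy_change(coins, amount):
    """Greedy: repeatedly take the largest coin that still fits.
    Returns the list of coins used, or None if it gets stuck."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result if amount == 0 else None

print(greedy_change([25, 10, 4], 41))  # None: after 25 + 10 + 4, the remainder 2 is unreachable
print(greedy_change([1, 3, 4], 6))     # [4, 1, 1]: three coins, though 3 + 3 needs only two
```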
It seems that the greedy algorithm is not always the best, and this case is used as an example to illustrate when it doesn't work.
See the example in Wikipedia.

Optimization similar to Knapsack [closed]

I am trying to find a way to solve an Optimization problem as follows:
I have 22 different objects that can be selected more than once. I have an evaluation function f that takes the multiplicities and calculates the total value.
f is a product over fractions of linear (affine) terms and as such, differentiable and even smooth in the allowed region.
I want to optimize f with respect to the 22 variables, with the additional conditions that certain sums may not exceed certain values (for example, if a,...,v are my variables, a + e + i + m + q + s <= 9). By this, all of the variables are bounded.
If f were strictly monotonic, this could be solved optimally by a (minimally modified) knapsack solution. However, the function isn't convex. That means it is not even safe to assume that if taking object A is better than taking object B on an empty knapsack, that choice still holds after adding a third object C (as C could modify B's benefit to be better than A's). This means that a greedy algorithm cannot be used.
Are there similar algorithms that solve such a problem in a optimal (or at least, nearly optimal) way?
EDIT: As requested, an example of what the problem is (I chose 5 variables a,b,c,d,e for simplicity)
for example,
f(a,b,c,d,e) = e*(a*0.45+b*1.2-1)/(c+d)
(Every variable only appears once, if this helps at all)
Also, for example, a+b+c=4, d+e=3
The problem is to optimize that with respect to a, b, c, d, e as integers. There are plenty of optimization algorithms for convex functions, but very few for non-convex ones...

How many fewer questions can I ask to guess a number? [closed]

Suppose I ask a person to select a number between 1 and 1200 in his mind, and I can only ask questions to which he will reply with YES or NO. How many questions will I need to ask before I arrive at the number he selected?
I am looking for the least possible number of questions. Any proven solution would be appreciated.
To determine which of the numbers between 1 and n was chosen, you will need to ask at least log2 n questions. There is no possible way to do better.
The intuition for this answer is as follows. Suppose that you ask a total of k questions. The maximum number of different possible answers you can receive to those questions, even if they're dependent on one another, is 2^k. Since there are n possible numbers that could be picked, you need to choose k such that
2^k ≥ n
which happens precisely when
k ≥ log2 n
In other words, you have to ask at least log2 n questions to even have enough different possible outcomes to associate each possible number with some outcome. Since the number of questions must always be a natural number, the minimum number of questions you can ask must be at least ⌈log2 n⌉.
This is purely a lower bound on the answer. At this point, we can't rule out the possibility that maybe you need far more questions than this to get the answer. However, the fact that we know about the binary search algorithm means that we know that you never need more than ⌈log2 n⌉ questions to get the answer, since this is the number of questions you'd ask if you were doing a binary search. This means that the binary search algorithm has to be optimal, since there is no possible way of asking a smaller number of questions.
Hope this helps!
The log base 2 of 1200, rounded up to an integer: that's 11. Basically, every question cuts the possible range in half, so you just continue with a binary search until the possible range has length 1.
Ask for all the bits in the number. 11 questions are enough for that.
Edit: I would argue that it's impossible to do better, due to the bijection between a number and its binary representation, at least for the worst case.
This is also a classic example of the adversary-argument method, which is used to find lower bounds on complexity. In our case, the person who knows the number is the adversary, so he will wisely change his real answer whenever you ask a new question. How does he determine his answer at each step? Suppose, for example, the number is between 1 and 100.
You ask: is n >= 50? He may say YES or NO; both are equally good for him, since the intervals are equal. Assume he says yes.
Then you pick a number between 50 <= N <= 100; say you ask: is n >= 80? He should then say NO, even if the number he picked is larger than 80, because 50 <= n <= 80 is the larger interval. Now the number may be between 50 and 80.
Proceeding this way, he guarantees the maximum number of questions, which is log n, since the interval size decreases as in binary search.
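The adversary lower bound and the binary-search upper bound meet at ⌈log2 1200⌉ = 11, which a short simulation confirms (illustrative code):

```python
from math import ceil, log2

def questions_needed(n):
    """Worst-case yes/no questions required to pin down a number in 1..n."""
    return ceil(log2(n))

def guess(secret, lo=1, hi=1200):
    """Binary search; returns (guessed number, questions asked)."""
    asked = 0
    while lo < hi:
        mid = (lo + hi) // 2
        asked += 1
        if secret <= mid:  # the question: "Is your number <= mid?"
            hi = mid
        else:
            lo = mid + 1
    return lo, asked

print(questions_needed(1200))                    # 11
print(max(guess(s)[1] for s in range(1, 1201)))  # 11: no secret needs more
```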
