Coin Change is a pretty popular problem which asks in how many ways you can make a sum of N cents using coins C[0], C[1], ..., C[K-1]. The DP solution uses the recurrence
s(N, K) = s(N, K-1) + s(N-C[K-1], K), where s(N, K) is the number of ways to make a sum of N with the first K coins (sorted in ascending order).
It means the number of ways to make N cents using the first K coins is the number of ways to make the same sum without using the Kth coin, plus the number of ways to make N - C[K-1] while still allowing the Kth coin. I really don't understand how you would come up with this recurrence, or why it makes sense logically. Could someone explain?
The most important thing when solving a DP problem is to reduce it to a set of simpler subproblems. Most of the time "simpler" means smaller argument(s); in this case a subproblem is simpler if either the target sum is smaller or fewer coin denominations are available.
My way of thinking about the problem is: okay, I have a set of coins and I need to count the number of ways I can form a given sum. This sounds complicated, but if I had one less coin it would be a bit easier.
It also helps to think of the base case. Here you know in how many ways you can form a given sum if all you had was a single coin. This suggests that the reduction to simpler subproblems will probably reduce the number of different coins.
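To see the recurrence in action, here is a minimal memoized sketch in Python. The names (count_ways, s) are just for illustration, and it assumes a non-negative target and positive coin values:

    from functools import lru_cache

    def count_ways(n, coins):
        # s(total, k) = number of ways to make `total` with the first k coins
        @lru_cache(maxsize=None)
        def s(total, k):
            if total == 0:            # exactly one way to make 0: take no coins
                return 1
            if total < 0 or k == 0:   # overshot, or no coins left
                return 0
            # ways that never use the k-th coin + ways that use it at least once
            return s(total, k - 1) + s(total - coins[k - 1], k)
        return s(n, len(coins))

    print(count_ways(4, (1, 2, 3)))   # 4 -> {1,1,1,1}, {1,1,2}, {2,2}, {1,3}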
I was going over both solutions to Two Sum on LeetCode and I noticed that the n^2 solution is basically to test all combinations of two numbers and see if they sum to the target.
I understand the naive solution iterates over each element of the array (or more precisely n-1 times, because we can't compare the last element to itself) to grab the first addend, and then uses another loop to grab all of the following elements. This second loop iterates n-1-i times, where i is the index of the first addend. Summing n-1-i over all i gives roughly n(n-1)/2, which I can see is O(n^2).
The problem came when I googled "algorithm for finding combinations" and it led to this thread, where the accepted answer talks about Gray codes, which go way above my head.
Now I'm unsure whether my assumption was correct, whether the naive solution is a version of a Gray code, or something else.
If Two Sum is a combinations problem then its time complexity would be O(n!/((n-k)! k!)), i.e. O(nCk), but I don't see how that reduces to O(n^2).
I read the Two Sum question and it states that:
Given an array of integers nums and an integer target, return indices
of the two numbers such that they add up to target.
It is a combinations problem. However, on closer inspection you will find that here the value of k is fixed.
You need to find two numbers from a list of given numbers that
add up to a particular target.
Any two numbers from n numbers can be selected in nC2 ways.
nC2 = n!/((n-2)! * 2!)
= n*(n-1)*(n-2)!/((n-2)!*2)
= n*(n-1)/2
= (n^2 - n)/2
Ignoring the lower-order term n and the constant factor 1/2, as they hardly matter when n tends to infinity, the expression finally results in a complexity of O(n^2).
Hence, a naïve solution of Two Sum has a complexity of O(n^2). Check this article for more information on your question.
https://www.geeksforgeeks.org/given-an-array-a-and-a-number-x-check-for-pair-in-a-with-sum-as-x/
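For reference, here is a short sketch of that naive nested-loop approach; the name two_sum_naive is just illustrative. Every pair (i, j) with i < j is tested exactly once, i.e. n(n-1)/2 comparisons, which is where the O(n^2) comes from.

    def two_sum_naive(nums, target):
        n = len(nums)
        for i in range(n - 1):              # first addend
            for j in range(i + 1, n):       # every later element as the second addend
                if nums[i] + nums[j] == target:
                    return [i, j]
        return None                          # no pair sums to the target

    print(two_sum_naive([2, 7, 11, 15], 9))  # [0, 1]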
In two algorithms I've been working with, I use the two functions:
pi(n) := the number of primes <= n, and
R(n) := r, where p_1 * p_2 * ... * p_r <= n but n < p_1 * p_2 * ... * p_(r+1), and p_i is the i-th prime.
Basically pi(n) is the famous prime-counting function, and R(n) just calculates the product of consecutive primes until you reach the bound n and returns the amount of primes used, so for example:
R(12)=2 because 2*3<=12 but 2*3*5>12 and for example
R(100)=3 because 2*3*5<=100 but 2*3*5*7>100.
My professor and I have been talking about the running time of calculating these functions. I know that pi(n) is asymptotically approximated by n/ln(n), but I have my doubts about some things:
Can R(n) be calculated in polynomial time? From my point of view, by using dynamic programming we can calculate the product 2*3*5*...*p_i from 2*3*5*...*p_(i-1), so the problem reduces to getting the next prime, which as far as I know can be done in polynomial time (PRIMES is in P).
Also because we know that we can determine if a number is prime in polynomial time, shouldn't that mean that pi(n) can be calculated in polynomial time? (Also using dynamic programming can be helpful).
If anyone can help me to clarify these questions or point me on the right direction, I would really appreciate it! Thank you!
There are methods to compute pi(n) in sub-linear time. Google for "legendre phi" or for "lehmer prime counting function", or for more recent work "lagarias miller odlyzko prime counting function". Lehmer's method isn't difficult to program; I discuss it at my blog.
For any n, you can easily determine whether it's prime in O(n^(1/2)) time (check for divisibility by 2, 3, 4, ..., sqrt(n)), so you could just iterate up to n and keep a counter as you go. If you store your primes in a list, you can speed up each check by testing divisibility only by the primes up to sqrt(n). So this algorithm for finding pi(n) should be O(n^(3/2)).
So let's say you run that algorithm and store the primes in a list. Then for R(n), you just iterate through them, keeping a cumulative product, and return as soon as the next multiplication would exceed n. Only a handful of primes are needed (their product grows very fast), so this step costs something like O(log n) multiplications, far less than building the list. Put the two together and the O(n^(3/2)) prime-listing step dominates.
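A rough Python sketch of that approach (names are just illustrative, and it builds the full prime list up to n even though R(n) only needs the first few primes):

    def primes_up_to(n):
        primes = []
        for candidate in range(2, n + 1):
            is_prime = True
            for p in primes:                 # trial-divide by the primes found so far
                if p * p > candidate:
                    break
                if candidate % p == 0:
                    is_prime = False
                    break
            if is_prime:
                primes.append(candidate)
        return primes

    def pi(n):
        return len(primes_up_to(n))

    def R(n):
        product, r = 1, 0
        for p in primes_up_to(n):            # cumulative product of consecutive primes
            if product * p > n:
                break
            product *= p
            r += 1
        return r

    print(pi(100))        # 25
    print(R(12), R(100))  # 2 3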
I know the algorithm for the coin change problem when there is an unlimited supply of each denomination, but is there a DP algorithm for the case where only a finite number of coins of each denomination is available?
Yes. Modify the unbounded algorithm so that, whenever it is about to add a coin that would exceed the number of available coins of that denomination, it skips that branch instead. Then it will only produce the valid combinations.
Another, simpler way is to run the algorithm without bounds and then filter out the invalid combinations from its output. Thinking of it this way makes it obvious that the problem is indeed solvable.
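To make the first suggestion concrete, here is a minimal sketch of the bounded version. The names (count_ways_bounded, counts) are assumptions for illustration; counts[i] is the number of available coins of denomination coins[i].

    from functools import lru_cache

    def count_ways_bounded(target, coins, counts):
        # s(total, k) = ways to make `total` using only the first k denominations,
        # with at most counts[i] copies of coins[i]
        @lru_cache(maxsize=None)
        def s(total, k):
            if total == 0:
                return 1
            if k == 0:
                return 0
            ways = 0
            for used in range(counts[k - 1] + 1):   # how many copies of the k-th coin
                remainder = total - used * coins[k - 1]
                if remainder < 0:
                    break
                ways += s(remainder, k - 1)
            return ways
        return s(target, len(coins))

    # sum 6 from denominations 2 and 3 with at most two coins of each:
    print(count_ways_bounded(6, (2, 3), (2, 2)))    # 1 -> only {3, 3}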
I have n integers, both positive and negative values included. What is a good algorithm to find m integers from that list such that the absolute value of the sum of those m integers is as small as possible?
The problem is NP-hard, since solving it efficiently would solve the subset-sum decision problem efficiently.
Given that, you're not going to find an efficient algorithm to solve it unless you believe that P=NP.
You can always come up with some heuristics to direct your search but in the worst case you'll have to check every subset of m integers.
If "good" means "correct", then just try every possibility. This will take you about n choose m time. Very slow. Unfortunately, this is the best you can do in general, because for any set of integers you can always add one more that is the negative of a sum of m-1 other ones--and those others could all have the same sign, so you have no way to search.
If "good" means "fast and usually works okay", then there are various ways to proceed. E.g.:
Suppose you can solve the problem for m=2, and suppose further you can solve it for both the positive and the negative answer (and then take the smaller of the two). Now suppose you want to solve m=4. Solve for m=2, then throw those two numbers out and solve again...should be obvious what to do next! Now, what about m=6?
Now suppose you can solve the problem for m=3 and m=2. Think you can get a decent answer for m=5?
Finally, note that if you sort the numbers, you can solve the m=2 case in one pass, and for m=3 you have an annoying quadratic search to do, but at least you only have to run it on about a quarter of the list twice (the small-magnitude halves of the positive and of the negative numbers), looking for a number of opposite sign that nearly cancels the pair.
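Here is a sketch of that m=2 step after sorting: a two-pointer sweep that tracks the pair whose sum is closest to zero. The name closest_pair_to_zero is just illustrative.

    def closest_pair_to_zero(nums):
        nums = sorted(nums)
        lo, hi = 0, len(nums) - 1
        best = (nums[lo], nums[hi])
        while lo < hi:
            s = nums[lo] + nums[hi]
            if abs(s) < abs(best[0] + best[1]):
                best = (nums[lo], nums[hi])
            if s < 0:
                lo += 1        # sum too negative: move the left pointer right
            elif s > 0:
                hi -= 1        # sum too positive: move the right pointer left
            else:
                return best    # exact zero, can't do better
        return best

    print(closest_pair_to_zero([7, -5, 2, 13, -9]))  # (-9, 7), sum -2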
Problem
Given a number n, 2<=n<=2^63. n could be prime itself. Find the prime p that is closest to n.
Using the fact that every prime p > 3 is odd and of the form 6k+1 or 6k+5, one could write a loop from n-1 down to 2 checking whether each number is prime; instead of checking every number, I only need to check numbers of the two forms above. However, I wonder if there is a faster algorithm to solve this problem, i.e. some constraint that restricts the range of numbers that need to be checked? Any idea would be greatly appreciated.
In reality, the odds of finding a prime number are "high" so brute force checking while skipping "trivial" numbers (numbers divisible by small primes) is going to be your best approach given what we know about number theory to date.
[update] A mild optimization you might do is similar to the Sieve of Eratosthenes: pick some small smooth bound, mark all numbers in a range around n that are divisible by those small primes as composite, and only test the numbers relatively prime to your smooth base. You will need to keep the range and the smoothness bound small enough that the sieving does not eclipse the runtime of the comparatively expensive primality test.
The biggest optimization you can do is to use a fast probabilistic primality check before doing a full test. For instance, see http://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test for a commonly used test that quickly eliminates most numbers as "probably not prime". Only after you have good reason to believe a number is prime should you attempt to properly prove primality. (For many purposes people are happy to accept that if a number passes a fixed number of rounds of the Miller-Rabin test, it is so likely to be prime that you can just accept that fact.)
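As an illustration, here is a sketch that scans outward from n using a Miller-Rabin test; with this fixed witness set the test is deterministic for all 64-bit inputs. Whether n itself counts as the answer when it is prime is an assumption here, and the names are just for illustration.

    def is_prime(n):
        if n < 2:
            return False
        witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
        for p in witnesses:
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:               # write n-1 as d * 2^s with d odd
            d //= 2
            s += 1
        for a in witnesses:             # this witness set suffices below ~3.3e24
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False
        return True

    def closest_prime(n):
        if is_prime(n):
            return n
        for delta in range(1, n):       # check n-1, n+1, n-2, n+2, ...
            if n - delta >= 2 and is_prime(n - delta):
                return n - delta
            if is_prime(n + delta):
                return n + delta

    print(closest_prime(100))           # 101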