We are given the numbers 1, 2, ..., b-1, where each number i can be used up to a[i] times.
From them, the biggest possible number has to be concatenated, while the sum of its digits (the partial numbers used) has to be divisible by b. The base b can be any integer greater than 2.
So basically the biggest base-b number has to be created by concatenating the numbers 1, ..., b-1, up to a[1], ..., a[b-1] times each, while the sum of all used digits has to be divisible by b.
For example:
There are 5 copies of 1, 10 copies of 2, 4 copies of 3 and 2 copies of 4. As stated above, we have to concatenate the biggest number whose digit sum is divisible by b (here b = 5).
They would give:
44333322222222221111.
Concatenating from the biggest digit to the smallest gives the required number, since the digit sum 5·1 + 10·2 + 4·3 + 2·4 = 45 is divisible by 5.
For 1 times 1 (so b = 2) it is:
0
because the digit sum 1 is not divisible by 2, so no digits can be used at all.
What algorithms or similar problems correspond to this? How can it be approached?
At first, we can simply arrange the digits from the biggest to the smallest, so the concatenated number will naturally be the biggest. Then we have to remove the smallest possible set of these digits so that the sum of the remaining ones is divisible by b. When different sets of the same size could be removed, the one whose biggest digit is smallest should be chosen (ties broken by the second biggest digit, and so on).
For example:
If either (3, 3, 2) or (4, 2, 2) could be removed, then the first one should be cut out of the number.
This really looks like the change-making problem, but with a finite number of coins of each denomination, and at the end we need the actual combination, not just the minimal number of coins. In addition, with a dynamic-programming approach, two different combinations of the same length (like 332 and 442 above) can't easily be compared in the middle of the DP table, as in later steps they can lead to quite different values.
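For small inputs, a brute-force sketch of this idea might look as follows (the function name `biggest_number` and the exhaustive search over removal sets are my own choices; a DP over residues modulo b would scale better, but this is enough to check the tie-breaking rule):

```python
from itertools import combinations_with_replacement
from collections import Counter

def biggest_number(b, counts):
    """counts[d] = available copies of digit d (1 <= d <= b-1).
    Returns the largest concatenation whose digit sum is divisible by b,
    or "0" if no non-empty concatenation works. Assumes b <= 10 so that
    every digit prints as a single character."""
    r = sum(d * c for d, c in counts.items()) % b
    n_digits = sum(counts.values())
    best = None
    # Remove as few digits as possible; among removal sets of equal size,
    # prefer the one whose largest digit is smallest (then the second
    # largest, and so on), exactly as described above.
    for k in range(n_digits + 1):
        candidates = []
        for combo in combinations_with_replacement(sorted(counts), k):
            need = Counter(combo)
            if all(need[d] <= counts[d] for d in need) and sum(combo) % b == r:
                candidates.append(tuple(sorted(combo, reverse=True)))
        if candidates:
            best = Counter(min(candidates))
            break
    if best is None or sum(best.values()) == n_digits:
        return "0"
    remaining = Counter(counts) - best
    return "".join(str(d) * remaining[d] for d in sorted(remaining, reverse=True))

print(biggest_number(5, {1: 5, 2: 10, 3: 4, 4: 2}))  # 44333322222222221111
print(biggest_number(2, {1: 1}))                      # 0
```

The search stops at the first removal size k for which some removal set fixes the digit sum modulo b; `min(candidates)` implements the "smallest biggest digit" tie-break by comparing the removal sets sorted in descending order.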
Related
I need help understanding the following paragraph from a book on algorithms -
Search spaces for natural combinatorial problems tend to grow
exponentially in the size N of the input; if the input size increases
by one, the number of possibilities increases multiplicatively. We’d
like a good algorithm for such a problem to have a better scaling
property: when the input size increases by a constant factor—say, a
factor of 2—the algorithm should only slow down by some constant
factor C.
I don't really get why one is better than the other. If anyone can formulate any examples to aid my understanding, it's greatly appreciated.
Let's consider the following problem: you're given a list of numbers, and you want to find the longest subsequence of that list where the numbers are in ascending order. For example, given the sequence
2 7 1 8 3 9 4 5 0 6
you could form the subsequence [2, 7, 8, 9] as follows:
2 7 1 8 3 9 4 5 0 6
^ ^ ^ ^
but there's an even longer one, [1, 3, 4, 5, 6] available here:
2 7 1 8 3 9 4 5 0 6
^ ^ ^ ^ ^
That one happens to be the longest subsequence that's in increasing order, I believe, though please let me know if I'm mistaken.
Now that we have this problem, how would we go about solving it in the general case where you have a list of n numbers? Let's start with a not so great option. One possibility would be to list off all the subsequences of the original list of numbers, then filter out everything that isn't in increasing order, and then take the longest one out of all the ones we find. For example, given this short list:
2 7 1 8
we'd form all the possible subsequences, which are shown here:
[]
[8]
[1]
[1, 8]
[7]
[7, 8]
[7, 1]
[7, 1, 8]
[2]
[2, 8]
[2, 1]
[2, 1, 8]
[2, 7]
[2, 7, 8]
[2, 7, 1]
[2, 7, 1, 8]
Yikes, that list is pretty long. But by looking at it, we can see that the longest increasing subsequence here is [2, 7, 8], which has length three.
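As a sketch, here is what that exhaustive strategy looks like in code (my own toy implementation; `itertools.combinations` enumerates subsequences because it preserves the original order):

```python
from itertools import combinations

def lis_brute_force(nums):
    """Try every subsequence (all 2^n of them) and keep the longest
    strictly increasing one. Hopeless for large n, as argued below."""
    best = []
    for k in range(1, len(nums) + 1):
        for sub in combinations(nums, k):
            if all(a < b for a, b in zip(sub, sub[1:])) and k > len(best):
                best = list(sub)
    return best

print(lis_brute_force([2, 7, 1, 8]))                         # [2, 7, 8]
print(len(lis_brute_force([2, 7, 1, 8, 3, 9, 4, 5, 0, 6])))  # 5
```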
Now, how well is this going to scale as our input list gets longer and longer? Here's something to think about - how many subsequences are there of this new list, which I made by adding 3 to the end of the existing list?
2 7 1 8 3
Well, every existing subsequence is still a perfectly valid subsequence here. But on top of that, we can form a bunch of new subsequences. In fact, we could take any existing subsequence and then tack a 3 onto the end of it. That means that if we had S subsequences for our length-four list, we'll have 2S subsequences for our length-five list.
More generally, you can see that if you take a list and add one more element onto the end of it, you'll double the number of subsequences available. That's a mathematical fact, and it's neither good nor bad by itself, but if we're in the business of listing all those subsequences and checking each one of them to see whether it has some property, we're going to be in trouble because that means there's going to be a ton of subsequences. We already saw that there are 16 subsequences of a four-element list. That means there are 32 subsequences of a five-element list, 64 subsequences of a six-element list, and, more generally, 2^n subsequences of an n-element list.
With that insight, let's make a quick calculation. How many subsequences are we going to have to check if we have, say, a 300-element list? We'd have to potentially check 2^300 of them - a number that's bigger than the number of atoms in the observable universe! Oops. That's going to take way more time than we have.
On the other hand, there's a beautiful algorithm called patience sorting that will always find the longest increasing subsequence, and which does so quite easily. You can do this by playing a little game. You'll place each of the items in the list into one of many piles. To determine what pile to pick, look for the first pile whose top number is bigger than the number in question and place it on top. If you can't find a pile this way, put the number into its own pile on the far right.
For example, given this original list:
2 7 1 8 3 9 4 5 0 6
after playing the game we'd end up with these piles:
0
1 3 4 5
2 7 8 9 6
And here's an amazing fact: the number of piles used equals the length of the longest increasing subsequence. Moreover, you can find that subsequence in the following way: every time you place a number on top of a pile, make a note of the number that was on top of the pile to its left. If we do this with the above numbers, here's what we'll find; the parenthesized number tells us what was on top of the stack to the left at the time we put the number down:
0
1 3 (1) 4 (3) 5 (4)
2 7 (2) 8 (7) 9 (8) 6 (5)
To find the subsequence we want, start with the top of the rightmost pile. Write that number down, then find the number in parentheses and repeat this process. Doing that here gives us 6, 5, 4, 3, 1, which, if reversed, is 1, 3, 4, 5, 6, the longest increasing subsequence! (Wow!) You can prove that this works in all cases, and it's a really beautiful exercise to actually go and do this.
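Here is a short sketch of this pile game in code (my own implementation; it uses the linear pile scan described above, plus the back-pointer trick with the pile to the left):

```python
def longest_increasing_subsequence(nums):
    """Patience game: returns one longest strictly increasing subsequence."""
    pile_top = []                  # value currently on top of each pile
    pile_top_idx = []              # index into nums of each pile's top card
    prev = [None] * len(nums)      # the "number in parentheses" for each card
    for i, x in enumerate(nums):
        # first pile whose top number is bigger than x; a binary search
        # would also work here, since the pile tops stay in sorted order
        p = next((j for j, t in enumerate(pile_top) if t > x), len(pile_top))
        if p == len(pile_top):     # no such pile: start a new one on the right
            pile_top.append(x)
            pile_top_idx.append(i)
        else:                      # place the card on top of pile p
            pile_top[p] = x
            pile_top_idx[p] = i
        if p > 0:                  # note the card on top of the pile to the left
            prev[i] = pile_top_idx[p - 1]
    if not pile_top:
        return []
    # walk back from the top of the rightmost pile, then reverse
    seq, i = [], pile_top_idx[-1]
    while i is not None:
        seq.append(nums[i])
        i = prev[i]
    return seq[::-1]

print(longest_increasing_subsequence([2, 7, 1, 8, 3, 9, 4, 5, 0, 6]))
# [1, 3, 4, 5, 6]
```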
So now the question is how fast this process is. Placing the first number down takes one unit of work - just place it in its own pile. Placing the second number down takes at most two units of work - we have to look at the top of the first pile, and optionally put the number into a second pile. Placing the third number takes at most three units of work - we have to look at up to two piles, and possibly place the number into its own third pile. More generally, placing the kth number down takes k units of work. Overall, this means that the work we're doing is roughly
1 + 2 + 3 + ... + n
if we have n total elements. That's a famous sum called Gauss's sum, and it simplifies to approximately n^2 / 2. So we can say that we'll need to do roughly n^2 / 2 units of work to solve things this way.
How does that compare to our 2^n solution from before? Well, unlike 2^n, which grows stupidly fast as a function of n, n^2 / 2 is actually a pretty nice function. If we plug in n = 300, which previously in 2^n land gave back "the number of atoms in the universe," we get back a more modest 45,000. If that's a number of nanoseconds, that's nothing; that'll take a computer under a second to do. In fact, you have to plug in a pretty big value of n before you're looking at something that's going to take the computer quite a while to complete.
The function n^2 / 2 has an interesting property compared with 2^n. With 2^n, if you increase n by one, as we saw earlier, 2^n will double. On the other hand, if you take n^2 / 2 and increase n by one, then n^2 / 2 will get bigger, but not by much (specifically, by n + 1/2).
By contrast, if you take 2^n and then double n, then 2^n squares in size - yikes! But if you take n^2 / 2 and double n, then n^2 / 2 goes up only by a factor of four - not that bad, actually, given that we doubled our input size!
This gets at the heart of what the quote you mentioned is talking about. Algorithms with runtimes like 2^n, n!, etc. scale terribly as a function of n, since increasing n by one causes a huge jump in the runtime. On the other hand, functions like n, n log n, n^2, etc. have the property that if you double n, the runtime only goes up by some constant factor. They therefore scale much more nicely as a function of input.
I am having difficulty trying to solve the following problem:
For Q queries, Q <= 1e6, where each query is a positive integer N, N <= 1e18, find the number of integers in [1,N] that cannot be
divided by integers in [2,10] for each query.
I thought of using a sieve method to filter out the numbers in [1,1e18] for each query (similar to the sieve of Eratosthenes). However, the value of N can be very large, so there is no way I can use this method. The most useful observation I could make is that numbers ending in 0, 2, 4, 5, 6, 8 are invalid, but that alone does not solve the problem.
I saw a solution for a similar problem that uses a smaller number of queries (Q <= 200). But it doesn't work for this problem (and I don't understand that solution).
Could someone please advise me on how to solve this problem?
The only numbers in [2,10] that matter are the primes among them: 2, 3, 5, 7. (Every other number in that range is a product of these, so anything divisible by it is already divisible by one of the primes.)
So let's say: a number that cannot be divided by any integer in [2,10] is a number that cannot be divided by any of {2,3,5,7}.
That is also equal to the total count of numbers in [1,n] minus the count of numbers divisible by at least one of {2,3,5,7}.
So, this is the fun part: in [1,n], how many numbers are divisible by 2?
The answer is floor(n/2) (why? simple: in every 2 consecutive numbers, there is exactly one divisible by 2).
Similarly, how many numbers are divisible by 5? The answer is floor(n/5).
...
So, do we have our answer yet? No: we have double-counted the numbers divisible by both 2 and 5, or both 2 and 7, and so on, so now we need to subtract them.
But wait, now we have subtracted the numbers divisible by all of {2,5,7} too many times, so we need to add them back.
...
Keep doing this until all combinations are taken care of;
there are 2^4 = 16 combinations in total (including the empty one), which is small enough to deal with.
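A minimal sketch of that counting step (the function name is my own choice; `prod` of the empty subset is 1, which contributes the +n term):

```python
from itertools import combinations
from math import prod

PRIMES = (2, 3, 5, 7)

def count_not_divisible(n):
    """Inclusion-exclusion: integers in [1, n] divisible by none of 2..10."""
    total = 0
    for k in range(len(PRIMES) + 1):
        for subset in combinations(PRIMES, k):
            # subsets of odd size are subtracted, even size added
            total += (-1) ** k * (n // prod(subset))
    return total

print(count_not_divisible(100))  # 22 (1, 11, 13, 17, 19, 23, ..., 97)
```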
Take a look at the Inclusion-Exclusion principle for a good understanding.
Good luck!
Here is an approach for handling this.
The place to start is to think about how you can split the problem into pieces. For such a problem, a natural starting point is the least common multiple (LCM) of the divisors -- in this case 2,520, the smallest number divisible by all the integers from 2 through 10.
The idea is that if x is not divisible by any number from 2-10, then x + 2,520 is also not divisible.
Hence, you can divide the problem into two pieces:
How many numbers between 1 and 2,520 are "relatively prime" to the numbers from 2-10?
How many times does 2,520 go into your target number? You need to take the remainder into account as well.
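Putting the two pieces together might look like this (a sketch; the survivors per block of 2,520 are precomputed once, after which each of the up-to-1e6 queries is answered in O(1)):

```python
PERIOD = 2520  # lcm of the numbers 2 through 10

# prefix[r] = how many numbers in [1, r] are divisible by none of 2..10;
# divisibility by 2, 3, 5, 7 repeats with period 2,520, so this table
# is all we ever need.
prefix = [0] * (PERIOD + 1)
for r in range(1, PERIOD + 1):
    prefix[r] = prefix[r - 1] + all(r % p for p in (2, 3, 5, 7))

def count_not_divisible(n):
    full_blocks, remainder = divmod(n, PERIOD)
    return full_blocks * prefix[PERIOD] + prefix[remainder]

print(prefix[PERIOD])            # 576 survivors in every block of 2,520
print(count_not_divisible(100))  # 22, matching the inclusion-exclusion count
```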
Really stuck on the complexity analysis of this problem.
Given the digits 0-9, we need to find all the numbers of length at most k whose digits are in non-decreasing order.
For example, if k = 3, the numbers can be 0, 00, 000, 01, 02, 03, 04, ..., 1, 11, 111, 12, ...
So the question basically is: if repetition of digits is allowed, how many numbers of at most k digits are there whose digits, read from left to right, are in non-decreasing order?
Numbers with at most k digits that are weakly increasing are in 1-1 correspondence with binary strings of length k+10 that contain exactly ten 1s. The number of consecutive 0s between the i-th and the (i+1)-th 1 in the binary string is the number of copies of digit i in the original number; the 0s before the first 1 count the digit 0, and the 0s after the tenth 1 are unused slack. For example, if k=7, then 001119 maps to 00100011111111010 (2 zeros, 3 ones, 0 twos, 0 threes, ..., 0 eights, 1 nine, and 1 slack position to make the number of digits up to 7).
These binary strings are easy to count: there are choose(k+10, 10) - 1 of them (one less because the empty number is disallowed). This can be computed in O(1) arithmetic operations (actually 10 additions, 18 multiplications and one division).
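In code, with a brute-force cross-check for small k (both helpers are my own; the brute force treats 0, 00 and 000 as distinct numbers, exactly as the question does):

```python
from math import comb

def count_weakly_increasing(k):
    """Numbers of length 1..k whose digits are in non-decreasing order."""
    return comb(k + 10, 10) - 1

def brute_force(k):
    # enumerate every digit string of each length, leading zeros included
    return sum(1 for length in range(1, k + 1)
               for n in range(10 ** length)
               if sorted(str(n).zfill(length)) == list(str(n).zfill(length)))

print(count_weakly_increasing(2), brute_force(2))  # 65 65
print(count_weakly_increasing(3), brute_force(3))  # 285 285
```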
I don't have enough reputation either, so I cannot comment on Paul's or Globe's answers.
Globe's answer choose(k+9, 9) is not quite right, because it only counts the solutions where the numbers have exactly k digits. But the original problem allows numbers with fewer digits too.
Paul's answer choose(k+10, 10) counts these shorter numbers too, but it also allows the number with zero digits. Say k=7: then the following binary string describes a number with no digits at all: 11111111110000000. We have to exclude this one.
So the solution is: choose(k+10,10)-1
I don't have enough reputation to comment on Paul's answer, so I'm adding another answer. The formula isn't choose(k+10, 10) as specified by Paul, it's choose(k+9, 9).
For instance if we have k=2, choose(2+10, 10) gives us 66, when there are only 55 numbers that satisfy the property.
We pick stars and separators, where the separators divide our digits into buckets from 0 to 9, and stars tell us how many digits to pick from a bucket. (E.g. **|**||*||||||* corresponding to 001139)
The reasoning behind it being k+9 and not k+10 is as follows:
we have to pick 9 separators between 10 digits, so while we have k choices for the stars, we only have 9 choices for the separators.
Consider the decimal representation of a natural number N. Find the greatest common divisor (GCD) of all numbers that can be obtained by permuting the digits in the given number. Leading zeroes are allowed
I don't want the code, just the logic on how to approach the problem
http://www.spoj.com/problems/GCD/
Here is the pseudo code that I was trying:
if sum of digits is divisible by 3 then k = 3
if sum of digits is divisible by 9 then k = 9
else k = 1
if all digits are divisible by 2 then o = 2
if all digits are divisible by 4 then o = 4
if all digits are divisible by 8 then o = 8
if all digits are divisible by 5 then o = 5
if all digits are divisible by 7 then o = 7
else o = 1
if all digits are the same, print the number itself
else print o * k
But I am getting Wrong Answer every time.
Let me think...
If for example your huge number contains the digits two and three, then one permutation ends in 23, another permutation is identical except that it ends in 32. The difference is 9. Therefore the gcd of all permutations is a factor of 9, which means it is 9, 3 or 1.
Can you go from there?
To give you a stronger hint: Every common divisor of x and y is also a divisor of x-y. That's why you don't need to find all the divisors of 250 digit numbers. You find the divisors of some differences of such numbers.
If you have the two digits 5 and 7 in your number (because that is what you asked), there is one permutation (248 digits)57 and another permutation (same 248 digits)75. The difference is 18. The gcd of all permutations therefore divides 18.
Now it's your turn. What can you conclude if you have the two digits 2 and 9, by taking one permutation ending in 29 and one ending in 92? And if you have more than two different digits, what can you conclude?
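If you want to experiment with these hints, a tiny brute-force checker (my own helper, only feasible for short numbers) lets you test conjectures before tackling the 250-digit case:

```python
from functools import reduce
from itertools import permutations
from math import gcd

def gcd_of_permutations(digits):
    """gcd over every permutation of the digit string (leading zeros kept)."""
    return reduce(gcd, (int("".join(p)) for p in set(permutations(digits))))

print(gcd_of_permutations("29"))   # 1   (92 - 29 = 63, but gcd(29, 92) is 1)
print(gcd_of_permutations("123"))  # 3   (digit sum 6: divisible by 3, not 9)
print(gcd_of_permutations("777"))  # 777 (all digits equal: one permutation)
```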
I am looking for ideas on how to deal with this specific problem (I believe it is a knapsack-like problem, although I might be mistaken).
As input we get a sequence of numbers, and each one can end up positive or negative - we decide that.
We have to find the minimum possible absolute value of the sum of some of these numbers.
We don't have to use all the numbers. We have to do the additions (or subtractions) in the same order in which the numbers are given, and we have to start with the first number (then add or subtract the following ones), so the used numbers form a prefix of the sequence.
Example would be:
4 11 5 5 => 2
because 4 - 11 + 5 = -2
10 3 9 4 100 => 0
because 10 + 3 - 9 - 4 = 0
In both examples we skipped trailing numbers - because adding or subtracting the remaining ones wouldn't give us a smaller absolute value.
The amount of numbers can be up to 5,000, and their sum won't exceed 10,000.
They are integers.
If you were to explore all combinations of addition and subtraction of 5,000 numbers, you would have to go through 2^5000 - 1 ≈ 1.4·10^1505 alternatives. That's obviously not reasonable. However, since the sum of the numbers is at most 10,000, we know that every partial sum (including subtractions) must lie between -10,000 and 10,000, so there are at most 20,001 different sums. If you only keep distinct sums as you work through the 5,000 positions, you have roughly 100 million sums to consider, which is not that much work for a computer.
Example: suppose the first three numbers are 5,1,1. The possible sums that include exactly three numbers are
5+1+1=7
5+1-1=5
5-1+1=5
5-1-1=3
Before adding the fourth number it is important to recognize that you have only three unique results from the four computations.
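A minimal sketch of that set-of-sums idea (my own implementation; the set `reachable` keeps only distinct partial sums, which is what caps the work at roughly 5,000 × 20,001 operations):

```python
def min_abs_sum(nums):
    """Minimum |sum| over non-empty prefixes, where the first number is
    taken as-is and every later number is either added or subtracted."""
    if not nums:
        return 0
    reachable = {nums[0]}              # distinct sums over the first number
    best = abs(nums[0])
    for x in nums[1:]:                 # extend every distinct sum by +x and -x
        reachable = {s + x for s in reachable} | {s - x for s in reachable}
        best = min(best, min(abs(s) for s in reachable))
    return best

print(min_abs_sum([4, 11, 5, 5]))       # 2  (4 - 11 + 5)
print(min_abs_sum([10, 3, 9, 4, 100]))  # 0  (10 + 3 - 9 - 4)
```

Recovering the actual sign choices, rather than just the optimal value, only requires storing a back-pointer for each (position, sum) pair reached.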