Let us call a number "steady" if the sum of the digits in odd positions equals the sum of the digits in even positions, for example 132 or 4059. Given a number N, the program should output the smallest "steady" number greater than N. For example, if N = 4 the answer is 11, and if N = 123123 the answer is 123134.
The constraint is that N can be very large: it can have up to 100 digits, and the time limit is 1 second.
My approach was to read N as a string, store each digit in an int array, add 1 using long arithmetic, then test whether the number is steady. If yes, output it; if not, add 1 again and test again, repeating until the answer is found.
It works on many tests, but when the difference between oddSum and evenSum is very large, as in 9090909090, the program exceeds the time limit. I could not come up with another algorithm. Intuitively I think there might be some pattern, such as swapping the last several digits and, if necessary, adding or subtracting something from them, but I don't know. I would prefer a good HINT rather than the answer, because I want to solve it myself.
Use the algorithm that you would use by hand. It goes like this:
Input: 9090909090
Input: 9090909090 Odd:0 Even:45
Input: 909090909? Odd:0 Even:45
Clearly no digit will work, we can make the odd at most 9
Input: 90909090?? Odd:0 Even:36
Clearly no digit will work, we removed a 9 and there is no larger digit (we have to make the number larger)
Input: 9090909??? Odd:0 Even:36
Clearly no digit will work. Even is bigger than odd, we can only raise odd to 18
Input: 909090???? Odd:0 Even:27
Clearly no digit will work, we removed a 9
Input: 90909????? Odd:0 Even:27
Perhaps a 9 will work.
Input: 909099???? Odd:9 Even:27
Zero is the smallest number that might work
Input: 9090990??? Odd:9 Even:27
We need 18 more and only have two digits, so 9 is the smallest number that can work
Input: 90909909?? Odd:18 Even:27
Zero is the smallest number that can work.
Input: 909099090? Odd:18 Even:27
9 is the only number that can work
Input: 9090990909 Odd:27 Even:27
Success
Do you see the method? Remove digits while a solution is impossible, then add them back until you have the solution. First, remove digits until a solution becomes possible; in the position where you stop removing, only a digit larger than the one you removed can be used (the result has to be larger than N). Then add digits back, using the smallest one possible at each stage, until you have the solution.
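For what it's worth, here is a minimal C++ sketch (names are mine, not from the answer) of the feasibility test that each "Clearly no digit will work" step above applies implicitly: given the digit sums fixed so far and how many odd/even positions are still open, can the two sums still be made equal?

// Hedged sketch: can the suffix still balance the two sums?
// oddSum/evenSum are the sums of the digits already fixed; remOdd/remEven
// count how many odd/even positions are still open. Each open position can
// contribute 0..9 to its own side, so the suffix can shift the balance by
// any amount in [-9*remEven, +9*remOdd].
bool canStillBalance(int oddSum, int evenSum, int remOdd, int remEven) {
    int need = evenSum - oddSum;           // how much the odd side must gain on the even side
    return -9 * remEven <= need && need <= 9 * remOdd;
}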
You can try the Digit DP technique.
Your parameters can be recur(pos, oddsum, evensum, str)
Your state transitions will be like this:
bool ans = false;
for (int i = 0; i < 10; i++)
{
    ans |= recur(pos+1, oddsum + (pos%2 ? i : 0), evensum + (pos%2 ? 0 : i), str + (char)(i+'0'));
    if (ans) return true;
}
Base case :
if(pos>=n) return oddsum==evensum;
Memoization: you only need to store pos, oddsum, and evensum in your DP array, so the DP array would be DP[100][100*10][100*10]. That is 10^8 entries and will cause MLE; you have to prune some memory.
Since |oddsum - evensum| <= 9*100, we can keep just one parameter SUM, adding to it on odd positions and subtracting on even ones. So our new recursion will look like this: recur(pos, sum, str)
State transitions will be like this:
bool ans = false;
for (int i = 0; i < 10; i++)
{
    ans |= recur(pos+1, SUM + (pos%2 ? i : -i), str + (char)(i+'0'));
    if (ans) return true;
}
Base case :
if(pos>=n) return SUM==0;
Memoization: now our DP array will be 2-D, indexed by [pos][sum]; roughly DP[100][10*100] (the sum can go negative, so in practice it needs an offset).
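To make that concrete, here is a hedged C++ sketch of the 2-D memoisation (the names, the OFFSET shift for negative sums, and the char-coded memo table are my own choices; the answer only prescribes the state (pos, sum)). It only answers feasibility; building the actual digit string on top of it, and enforcing that the result is greater than N, are left out.

const int MAXN = 100;
const int OFFSET = 9 * MAXN;               // shift so negative sums index >= 0
char memo[MAXN + 1][2 * 9 * MAXN + 1];     // 0 = unknown, 1 = possible, 2 = impossible
int n;                                     // number of digits

// Can positions pos..n-1 be filled so that the running (odd minus even)
// difference 'sum' ends at exactly 0?
bool recur(int pos, int sum) {
    if (pos >= n) return sum == 0;
    char &m = memo[pos][sum + OFFSET];
    if (m) return m == 1;
    bool ok = false;
    for (int d = 0; d <= 9 && !ok; ++d)
        ok = recur(pos + 1, sum + (pos % 2 ? d : -d));
    m = ok ? 1 : 2;
    return ok;
}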
Find the parity with the smaller sum. Starting from the smallest digit of that parity, increase digits of that parity to the min of 9 and the remaining increase needed.
This gets you a larger steady number, but it may be too big.
E.g., 107 gets us 187, but 110 would do.
Next, repeatedly decrement the value of the nonzero digit in the largest position of each parity in our steady number where doing so doesn't reduce us below our target.
187,176,165,154,143,132,121,110
This last step as written is linear in the number of decrements. That's fast enough since there are at most 9*digits of them, but it can be optimized.
Related
Can anyone help me with some algorithm for this problem?
We have a big number (19 digits) and, in a loop, we subtract one of the digits of that number from the number itself.
We continue to do this until the number reaches zero. We want to calculate the minimum number of subtractions that makes a given number reach zero.
The algorithm must respond fast: for a 19-digit number (up to 10^19), within two seconds. As an example, an input of 36 gives 7:
1. 36 - 6 = 30
2. 30 - 3 = 27
3. 27 - 7 = 20
4. 20 - 2 = 18
5. 18 - 8 = 10
6. 10 - 1 = 9
7. 9 - 9 = 0
Thank you.
The minimum number of subtractions to reach zero makes this, I suspect, a very thorny problem, one that will require a great deal of backtracking through potential solutions, making it possibly too expensive for your time limitations.
But the first thing you should do is a sanity check. Since the largest digit is a 9, a 19-digit number will require about 10^18 subtractions to reach zero. Code up a simple program to continuously subtract 9 from 10^19 until it becomes less than ten. If you can't do that within the two seconds, you're in trouble.
By way of example, the following program (a):
#include <stdio.h>
#include <stdlib.h>   /* strtoull */

int main (int argc, char *argv[]) {
    unsigned long long x = strtoull(argv[1], NULL, 10);
    x /= 1000000000;
    while (x > 9)
        x -= 9;
    return x;
}
when run with the argument 10000000000000000000 (10^19), takes a second and a half clock time (and CPU time since it's all calculation) even at gcc insane optimisation level of -O3:
real 0m1.531s
user 0m1.528s
sys 0m0.000s
And that's with the one-billion divisor just before the while loop, meaning the full number of iterations would take about 48 years.
So a brute force method isn't going to help here, what you need is some serious mathematical analysis which probably means you should post a similar question over at https://math.stackexchange.com/ and let the math geniuses have a shot.
(a) If you're wondering why I'm getting the value from the user rather than using a constant of 10000000000000000000ULL, it's to prevent gcc from calculating it at compile time and turning it into something like:
mov $1, %eax
Ditto for the return x, which prevents it from noticing I don't use the final value of x and hence optimising the loop out of existence altogether.
I don't have a solution that can solve 19 digit numbers in 2 seconds. Not even close. But I did implement a couple of algorithms (including a dynamic programming algorithm that solves for the optimum), and gained some insight that I believe is interesting.
Greedy Algorithm
As a baseline, I implemented a greedy algorithm that simply picks the largest digit in each step:
#include <stdint.h>

uint64_t countGreedy(uint64_t inputVal) {
    uint64_t remVal = inputVal;
    uint64_t nStep = 0;
    while (remVal > 0) {
        uint64_t digitVal = remVal;
        uint_fast8_t maxDigit = 0;
        while (digitVal > 0) {
            uint64_t nextDigitVal = digitVal / 10;
            uint_fast8_t digit = digitVal - nextDigitVal * 10;
            if (digit > maxDigit) {
                maxDigit = digit;
            }
            digitVal = nextDigitVal;
        }
        remVal -= maxDigit;
        ++nStep;
    }
    return nStep;
}
Dynamic Programming Algorithm
The idea for this is that we can calculate the optimum incrementally. For a given value, we pick a digit, which adds one step to the optimum number of steps for the value with the digit subtracted.
With the target function (optimum number of steps) for a given value named optSteps(val), and the digits of the value named d_i, the following relationship holds:
optSteps(val) = 1 + min(optSteps(val - d_i))
This can be implemented with a dynamic programming algorithm. Since d_i is at most 9, we only need the previous 9 values to build on. In my implementation, I keep a circular buffer of 10 values:
#include <stdbool.h>
#include <stdint.h>

/* note: assumes inputVal >= 1 (it returns 1, not 0, for inputVal == 0) */
static uint64_t countDynamic(uint64_t inputVal) {
    uint64_t minSteps[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
    uint_fast8_t digit0 = 0;
    for (uint64_t val = 10; val <= inputVal; ++val) {
        digit0 = val % 10;
        uint64_t digitVal = val;
        uint64_t minPrevStep = 0;
        bool prevStepSet = false;
        while (digitVal > 0) {
            uint64_t nextDigitVal = digitVal / 10;
            uint_fast8_t digit = digitVal - nextDigitVal * 10;
            if (digit > 0) {
                uint64_t prevStep = 0;
                if (digit > digit0) {
                    prevStep = minSteps[10 + digit0 - digit];
                } else {
                    prevStep = minSteps[digit0 - digit];
                }
                if (!prevStepSet || prevStep < minPrevStep) {
                    minPrevStep = prevStep;
                    prevStepSet = true;
                }
            }
            digitVal = nextDigitVal;
        }
        minSteps[digit0] = minPrevStep + 1;
    }
    return minSteps[digit0];
}
Comparison of Results
This may come as a surprise: I ran both algorithms on all values up to 1,000,000. The results are absolutely identical. This suggests that the greedy algorithm actually calculates the optimum.
I don't have a formal proof that this is indeed true for all possible values. It intuitively kind of makes sense to me. If in any given step, you choose a smaller digit than the maximum, you compromise the immediate progress with the goal of getting into a more favorable situation that allows you to catch up and pass the greedy approach. But in all the scenarios I thought about, the situation after taking a sub-optimal step just does not get significantly more favorable. It might make the next step bigger, but that is at most enough to get even again.
Complexity
While both algorithms look linear in the size of the value, they also loop over all digits in the value. Since the number of digits corresponds to log(n), I believe the complexity is O(n * log(n)).
I think it's possible to make it linear by keeping counts of the frequency of each digit, and modifying them incrementally. But I doubt it would actually be faster. It requires more logic, and turns a loop over all digits in the value (which is in the range of 2-19 for the values we are looking at) into a fixed loop over 10 possible digits.
Runtimes
Not surprisingly, the greedy algorithm is faster to calculate a single value. For example, for value 1,000,000,000, the runtimes on my MacBook Pro are:
greedy: 3 seconds
dynamic: 36 seconds
On the other hand, the dynamic programming approach is obviously much faster at calculating all the values, since its incremental approach needs them as intermediate results anyway. For calculating all values from 10 to 1,000,000:
greedy: 19 minutes
dynamic: 0.03 seconds
As already shown in the runtimes above, the greedy algorithm gets about as high as 9 digit input values within the targeted runtime of 2 seconds. The implementations aren't really tuned, and it's certainly possible to squeeze out some more time, but it would be fractional improvements.
Ideas
As already explored in another answer, there's no chance of getting the result for 19 digit numbers in 2 seconds by subtracting digits one by one. Since we subtract at most 9 in each step, completing this for a value of 10^19 needs more than 10^18 steps. We mostly use computers that perform in the rough range of 10^9 operations/second, which suggests that it would take about 10^9 seconds.
Therefore, we need something that can take shortcuts. I can think of scenarios where that's possible, but haven't been able to generalize it to a full strategy so far.
For example, if your current value is 9999, you know that you can subtract 9 until you reach 9000. So you can calculate that you will make 112 steps ((9999 - 9000) / 9 + 1) where you subtract 9, which can be done in a few operations.
As said in comments already, and agreeing with #paxdiablo’s other answer, I’m not sure if there is an algorithm to find the ideal solution without some backtracking; and the size of the number and the time constraint might be tough as well.
A general consideration though: You might want to find a way to decide between always subtracting the highest digit (which will decrease your current number by the largest possible amount, obviously), and looking at your current digits and subtracting whichever of them will give you the largest “new” digit.
Say, your current number only consists of digits between 0 and 5 – then you might be tempted to subtract the 5 to decrease your number by the highest possible value, and continue with the next step. If the last digit of your current number is 3 however, then you might want to subtract 4 instead – since that will give you 9 as new digit at the end of the number, instead of “only” 8 you would be getting if you subtracted 5.
Whereas if you have a 2 and two 9 in your digits already, and the last digit is a 1 – then you might want to subtract the 9 anyway, since you will be left with the second 9 in the result (at least in most cases; in some edge cases it might get obliterated from the result as well), so subtracting the 2 instead would not have the advantage of giving you a “high” 9 that you would otherwise not have in the next step, and would have the disadvantage of not lowering your number by as high an amount as subtracting the 9 would …
But every digit you subtract will not only affect the next step directly, but the following steps indirectly – so again, I doubt there is a way to always chose the ideal digit for the current step without any backtracking or similar measures.
To clarify, as input I have 'n' (n1, n2, n3,...) numbers (integers) such that each number is unique within this set.
I would like to generate a number out of this set (let's call the generated number big 'N') that is also unique, and that allows me to verify that a number 'n1' belongs to the set 'n' just by using 'N'.
Is that possible?
Edit:
Thanks for the answers guys, I am looking into them atm. For those requesting an example, here is a simple one:
imagine I have these paths (a bidirectional graph) with random unique values (let's call them identifiers):
P1 (N1): A----1----B----2----C----3----D
P2 (N2): A----4----E----5----D
So I want to get the full path (unique path, not all paths) from A knowing N1 and this path as a result should be P1.
Mind you that 1,2,...are just unique numbers in this graph, not weights or distances, I just use them for my heuristic.
If you are dealing with small numbers, no problem. You are doing the same thing with digits every time you compose a number: a digit is a number from 0 to 9 and a full number is a combination of them that:
is itself a number
is unique for given digits
allows you to easily verify if a digit is inside
The gotcha is that the numbers must have an upper limit, like 10 is for digits. Let's say 1000 here for simplicity; the analogous composed number could be:
n1*1000^k + n2*1000^(k-1) + n3*1000^(k-2) ... + nk*1000^(0)
So if you have numbers 33, 44 and 27 you will get:
33*1000000 + 44*1000 + 27, and that is number N: 33044027
Of course you can do the same with bigger limits, and binary like 256,1024 or 65535, but it grows big fast.
A better idea, if possible, is to convert it into a string (a string is still a number!) with some separator (a number in base 11, that is 10 normal digits + 1 separator digit). This is more flexible as there are no upper limits. Imagine using digits 0-9 + a separator digit 'a'. You can obtain the number 33a44a27 in base 11. By translating this to base 10 or base 16 you can get an ordinary computer number (65451833 if I got it right). Then, converting 65451833 back to undecimal (base 11) gives 33a44a27, and splitting on the digit 'a' you can get the original numbers back to test against.
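A small hedged C++ sketch of the separator idea (my own names; the answer only describes it in prose): encode the set as a string with 'a' as the 11th digit, and check membership by splitting on it again. Converting that string to and from an ordinary integer via base 11 is a separate, mechanical step.

#include <sstream>
#include <string>
#include <vector>

std::string encode(const std::vector<long long>& nums) {
    std::ostringstream out;
    for (size_t i = 0; i < nums.size(); ++i) {
        if (i) out << 'a';                 // 'a' acts as the separator digit
        out << nums[i];
    }
    return out.str();                      // {33, 44, 27} -> "33a44a27"
}

bool contains(const std::string& encoded, long long x) {
    std::stringstream in(encoded);
    std::string tok;
    while (std::getline(in, tok, 'a'))     // split on the separator
        if (std::stoll(tok) == x) return true;
    return false;
}

For example, contains(encode({33, 44, 27}), 44) is true and contains(encode({33, 44, 27}), 45) is false.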
EDIT: A VARIABLE BASE NUMBER?
Of course this would work better in binary terms with base 17 (16 digits + a separator). But I suspect there are more optimal ways: for example, if the numbers are unique in the path, the more numbers you add, the fewer remain, so the base could keep shrinking. Can you imagine a number in which the first digit is in base 20, the second in base 19, the third in base 18, and so on? Can this be done? Meh?
In this varying-base world (in a 10-node graph), the path n0-n1-n2-n3-n4-n5-n6-n7-n8-n9 would be
n0*10^0 + (n1*9^1)+(offset:1) + n2*8^2+(offset:18) + n3*7^3+(offset:170)+...
offset1: 10-9=1
offset2: 9*9^1-1*8^2+1=81-64+1=18
offset3: 8*8^2-1*7^3+1=512-343+1=170
If I got it right, in this fiddle: http://jsfiddle.net/Hx5Aq/ the biggest number path would be: 102411
var path="9-8-7-6-5-4-3-2-1-0"; // biggest number
o2=(Math.pow(10,1)-Math.pow(9,1)+1); // offsets so digits do not overlap
o3=(Math.pow(9,2)-Math.pow(8,2)+1);
o4=(Math.pow(8,3)-Math.pow(7,3)+1);
o5=(Math.pow(7,4)-Math.pow(6,4)+1);
o6=(Math.pow(6,5)-Math.pow(5,5)+1);
o7=(Math.pow(5,6)-Math.pow(4,6)+1);
o8=(Math.pow(4,7)-Math.pow(3,7)+1);
o9=(Math.pow(3,8)-Math.pow(2,8)+1);
o10=(Math.pow(2,9)-Math.pow(1,9)+1);
o11=(Math.pow(1,10)-Math.pow(0,10)+1);
var n=path.split("-");
var res;
res=
n[9]*Math.pow(10,0) +
n[8]*Math.pow(9,1) + o2 +
n[7]*Math.pow(8,2) + o3 +
n[6]*Math.pow(7,3) + o4 +
n[5]*Math.pow(6,4) + o5 +
n[4]*Math.pow(5,5) + o6 +
n[3]*Math.pow(4,6) + o7 +
n[2]*Math.pow(3,7) + o8 +
n[1]*Math.pow(2,8) + o9 +
n[0]*Math.pow(1,9) + o10;
alert(res);
So N<=102411 would represent any path of ten nodes? Just a trial. You have to find a way of naming them; for instance, if they are 1,2,3,4,5,6... and you use 5, you will have to compact the remaining 1,2,3,4,6->5,7->6... => 1,2,3,4,5,6... (that is reversible and unique if you start from the first).
Theoretically, yes it is.
By defining p_i as the i-th prime number, you can generate N = p_(n1) * p_(n2) * .... Now, all you have to do is check whether N % p_(ni) == 0 or not.
However, note that N will grow to huge numbers very fast, so I am not sure this is a very practical solution.
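A hedged C++ sketch of that idea (helper names are mine, and nthPrime is a naive trial-division helper added just for the illustration); as noted, with 64-bit arithmetic this only stays practical for very small sets before N overflows.

#include <cstdint>
#include <vector>

uint64_t nthPrime(int n) {                 // 1-based: nthPrime(1) == 2
    int count = 0;
    for (uint64_t cand = 2; ; ++cand) {
        bool prime = true;
        for (uint64_t d = 2; d * d <= cand; ++d)
            if (cand % d == 0) { prime = false; break; }
        if (prime && ++count == n) return cand;
    }
}

uint64_t encodeSet(const std::vector<int>& elems) {    // elements assumed >= 1
    uint64_t N = 1;
    for (int e : elems) N *= nthPrime(e);  // N = p_(n1) * p_(n2) * ...
    return N;
}

bool inSet(uint64_t N, int e) {
    return N % nthPrime(e) == 0;           // e is in the set iff p_(e) divides N
}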
One very practical probabilistic solution is using Bloom filters. Note that a Bloom filter is a set of bits, which can easily be translated into a number N.
Bloom filters have no false negatives (if you say a number is not in the set, it really isn't), but do suffer from false positives with a given expected probability (which depends on the size of the set, the number of hash functions used and the number of bits used).
As a side note, to get a result that is 100% accurate, you are going to need at the very least 2^k bits (where k is the number of bits needed to represent an element) to represent the number N, by viewing this number as a bitset where each bit indicates the existence or non-existence of a number in the set. You can show that there is no 100% accurate solution that uses fewer bits (pigeonhole principle). Note that for 32-bit integers, for example, this means you are going to need an N with 2^32 bits, which is impractical.
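As a tiny hedged sketch of that exact bitset view (restricted to elements 0..63 so that N fits in one 64-bit word; names are mine):

#include <cstdint>

uint64_t addToSet(uint64_t N, unsigned e) { return N | (1ULL << e); }   // set bit e
bool     inSet(uint64_t N, unsigned e)    { return (N >> e) & 1ULL; }   // test bit e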
Let's assume we consider binary numbers of length 2n, where n might be about 1000. We are looking for the kth number (k is limited by 10^9) which has the following properties:
The amount of 1's is equal to the amount of 0's, which can be written as: #(1) = #(0)
Every prefix of this number has to contain at least as many 0's as 1's. It might be easier to understand after negating the sentence: there is no prefix which would contain more 1's than 0's.
And basically that's it.
So to make it clear let's do some example:
n=2, k=2
we have to take binary number of length 2n:
0000
0001
0010
0011
0100
0101
0110
0111
1000
and so on...
And now we have to find the 2nd number which fulfills those two requirements. So we see 0011 is the first one, and 0101 is the second one.
If we change k to 3, then the answer doesn't exist: there are more numbers which have equal amounts of both bits, but for 0110 there is the prefix 011, so that number doesn't fulfill the second constraint, and the same goes for all numbers which have 1 as the most significant bit.
So what have I done so far to find an algorithm?
Well, my first idea was to generate all possible bit settings and check whether each has those two properties, but generating them all would take O(2^(2n)), which is not an option for n=1000.
Additionally, I realized there is no need to check any number smaller than 0011 for n=2, 000111 for n=3, and so on (frankly speaking, any number whose most significant half remains "untouched"), because those numbers have no possibility of fulfilling the #(1) = #(0) condition. Using that I can reduce n by half, but it doesn't help much: instead of 2 * forever I still have a forever-running algorithm. It's still O(2^n) complexity, which is way too big.
Any idea for an algorithm?
Conclusion
This text was created as a result of my thoughts after reading Andy Jones's post.
First of all, I won't post the code I used, since it is point 6 in the document from Andy's post, Kasa 2009. All you have to do is treat nr as what I described as k. The unranking Dyck words algorithm helps us find the answer much faster. However, it has one bottleneck.
while (k >= C(n-i,j))
Considering that n <= 1000, the Catalan numbers can be quite huge, even C(999,999). We could use some big-number arithmetic, but on the other hand I came up with a little trick to get around it and use a standard integer.
We don't need to know how big a Catalan number actually is, as long as we know it's bigger than k. So we will create the Catalan numbers, caching partial sums, in an n x n table.
... ...
5 | 42 ...
4 | 14 42 ...
3 | 5 14 28 ...
2 | 2 5 9 14 ...
1 | 1 2 3 4 5 ...
0 | 1 1 1 1 1 1 ...
---------------------------------- ...
0 1 2 3 4 5 ...
To generate it is quite trivial:
C(x,0) = 1
C(x,y) = C(x,y-1) + C(x-1,y) where y > 0 && y < x
C(x,y) = C(x,y-1) where x == y
So we can see that only this:
C(x,y) = C(x,y-1) + C(x-1,y) where y > 0 && y < x
can cause overflow.
Let's stop at this point and provide a definition.
k-flow - it's not a real integer overflow but rather the information that the value of C(x,y) is bigger than k.
My idea is to check, after each application of the above formula, whether C(x,y) is greater than k or whether any of the summands is -1. If so, we store -1 instead, which acts as a marker that k-flow has happened. I think it's quite obvious that if a k-flowed number is summed with any non-negative number, the result is still k-flowed; in particular, the sum of two k-flowed numbers is k-flowed.
The last thing we have to prove is that there is no possibility of creating a real overflow. A real overflow could only happen if we summed up a + b where neither of them is k-flowed but their sum overflows.
Of course that's impossible, since the maximum value can be bounded as a + b <= 2 * k <= 2*10^9 <= 2,147,483,647, where the last value in this inequality is the maximum of a signed int. I also assume that int has 32 bits, as in my case.
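Here is a hedged C++ sketch of that capped table (layout and names are mine; the recurrence is the one given above). Entries bigger than k, or built from a capped entry, are stored as -1; in the comparison while (k >= C(n-i,j)) a -1 entry is simply treated as "bigger than k".

#include <vector>

std::vector<std::vector<long long>> buildCappedCatalan(int n, long long k) {
    std::vector<std::vector<long long>> C(n + 1, std::vector<long long>(n + 1, 0));
    for (int x = 0; x <= n; ++x) {
        C[x][0] = 1;                                   // C(x,0) = 1
        for (int y = 1; y <= x; ++y) {
            long long a = C[x][y - 1];
            long long b = (y < x) ? C[x - 1][y] : 0;   // C(x,x) = C(x,x-1)
            if (a == -1 || b == -1) { C[x][y] = -1; continue; }
            long long s = a + b;
            C[x][y] = (s > k) ? -1 : s;                // mark the k-flow
        }
    }
    return C;
}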
The numbers you are describing correspond to Dyck words. Pt 2 of Kasa 2009 gives a simple algorithm for enumerating them in lexicographic order. Its references should be helpful if you want to do any further reading.
As an aside (and be warned I'm half asleep as I write this, so it might be wrong), the Wikipedia article notes that the number of Dyck words of length 2n is the nth Catalan number, C(n). You might want to find the smallest n such that C(n) is larger than the k you're looking for, and then enumerate Dyck words starting from X^n Y^n.
I'm sorry I misunderstood this problem last time, so I have edited my answer and now I can promise it is correct; you can test the code first. The complexity is O(n^2). The detailed answer follows.
First, we can transform the problem into the following one:
We are looking for the kth largest number (k is limited by 10^9) which has the following properties:
The amount of 1's is equal to the amount of 0's, which can be written as: #(1) = #(0)
Every prefix of this number has to contain at least as many 1's as 0's, which means: there is no prefix which would contain more 0's than 1's.
Let's give an example to explain it: let n=3 and k=4. The amount of satisfying numbers is 5, and the picture below explains what we should determine in the previous problem and in the new problem:
| 000111 ------> 111000 ^
| 001011 ------> 110100 |
| 001101 ------> 110010 |
| previous 4th number 010011 ------> 101100 new 4th largest number |
v 010101 ------> 101010 |
So after we solve the new problem, we just need to bitwise-NOT the result.
Now the main problem is how to solve the new problem. First, let A be the array, so A[m] (1<=m<=2n) can only be 1 or 0. Let DP[v][q] be the amount of numbers which satisfy condition 2 and the condition #(1)=q in {A[2n-v+1]~A[2n]}; then DP[2n][n] is the amount of satisfying numbers.
A[1] can only be 1 or 0. If A[1]=1, the amount of such numbers is DP[2n-1][n-1]; if A[1]=0, the amount is DP[2n-1][n]. Now we want to find the kth largest number: if k<=DP[2n-1][n-1], the kth largest number's A[1] must be 1, and then we can decide A[2] with DP[2n-2][n-2]; if k>DP[2n-1][n-1], the kth largest number's A[1] must be 0, we set k=k-DP[2n-1][n-1], and then we can decide A[2] with DP[2n-2][n-1]. By the same reasoning, we can decide A[j] one by one until there is no number left to compare. Now we can give an example to aid understanding (n=3, k=4).
(We use dynamic programming to determine the DP matrix; the DP equation is DP[v][q]=DP[v-1][q-1]+DP[v-1][q])
Note: we need the numbers in the leftmost column to be comparable, so we add an extra column to the left of DP, but it is not included in the DP matrix; in that column, every number is 1.
The numbers enclosed in brackets are initialized by ourselves; the way they are initialized simply follows the meaning of the DP matrix.
DP matrix = (1) (0) (0) (0) 4<=DP[5][2]=5 --> A[1]=1
(1) (1) (0) (0) 4>DP[4][1]=3 --> A[2]=0, k=4-3=1
(1) (2) (0) (0) 1<=DP[3][1]=3 --> A[3]=1
(1) (3) 2 (0) 1<=1 --> A[4]=1
(1) (4) 5 (0) no number to compare, A[5]~A[6]=0
(1) (5) 9 5 so the number is 101100
If you have not understood it clearly, you can use the code to understand it.
Note: DP[2n][n] grows very fast, so the code only works when n<=19; in the problem n<1000, so you would need big-number arithmetic, and the code can be optimized with bit operations, so the code is just a reference.
/*--------------------------------------------------
Environment: X86 Ubuntu GCC
Author: Cong Yu
Blog: aimager.com
Mail: funcemail#gmail.com
Build_Date: Mon Dec 16 21:52:49 CST 2013
Function:
--------------------------------------------------*/
#include <stdio.h>
int DP[2000][1000];
// kth is the result
int kth[1000];
void Oper(int n, int k){
    int i,j,h;
    // temp is the compare number
    // jishu is the index of the next digit to write into kth[]
    int temp,jishu=0;
    // initialize
    for(i=1;i<=2*n;i++)
        DP[i-1][0]=i-1;
    for(j=2;j<=n;j++)
        for(i=1;i<=2*j-1;i++)
            DP[i-1][j-1]=0;
    for(i=1;i<=2*n;i++)
        kth[i-1]=0;
    // operate DP matrix with dynamic programming
    for(j=2;j<=n;j++)
        for(i=2*j;i<=2*n;i++)
            DP[i-1][j-1]=DP[i-2][j-2]+DP[i-2][j-1];
    // the main thought
    if(k>DP[2*n-1][n-1])
        printf("nothing\n");
    else{
        i=2*n;
        j=n;
        for(;j>=1;i--,jishu++){
            if(j==1)
                temp=1;
            else
                temp=DP[i-2][j-2];
            if(k<=temp){
                kth[jishu]=1;
                j--;
            }
            else{
                kth[jishu]=0;
                if(j==1)
                    k-=1;
                else
                    k-=DP[i-2][j-2];
            }
        }
        for(i=1;i<=2*n;i++){
            kth[i-1]=1-kth[i-1];
            printf("%d",kth[i-1]);
        }
        printf("\n");
    }
}
int main(){
    int n,k;
    scanf("%d",&n);
    scanf("%d",&k);
    Oper(n,k);
    return 0;
}
How would you implement a random number generator that, given an interval, (randomly) generates all numbers in that interval, without any repetition?
It should consume as little time and memory as possible.
Example in a just-invented C#-ruby-ish pseudocode:
interval = new Interval(0,9)
rg = new RandomGenerator(interval);
count = interval.Count // equals 10
count.times.do{
print rg.GetNext() + " "
}
This should output something like :
1 4 3 2 7 5 0 9 8 6
Fill an array with the interval, and then shuffle it.
The standard way to shuffle an array of N elements is to pick a random number R between 0 and N-1, and swap item[R] with item[N-1]. Then subtract one from N, and repeat until you reach N = 1.
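A minimal hedged C++ sketch of exactly that (fill, then Fisher-Yates shuffle; names are mine):

#include <numeric>
#include <random>
#include <vector>

std::vector<int> shuffledInterval(int lo, int hi) {
    std::vector<int> v(hi - lo + 1);
    std::iota(v.begin(), v.end(), lo);          // fill with lo, lo+1, ..., hi
    std::mt19937 gen(std::random_device{}());
    for (int n = (int)v.size(); n > 1; --n) {
        std::uniform_int_distribution<int> dist(0, n - 1);
        std::swap(v[dist(gen)], v[n - 1]);      // move a random item into the last open slot
    }
    return v;                                   // read off in order: no repeats
}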
This has come up before. Try using a linear feedback shift register.
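To illustrate the idea (this is not from the linked answer, just a textbook example): a maximal-length 16-bit Fibonacci LFSR with taps 16, 14, 13, 11 visits every value 1..65535 exactly once per period. To cover an interval such as 0..9 you would skip out-of-range outputs and handle 0 separately, at the cost of wasted iterations.

#include <cstdint>

// One step of a 16-bit maximal-length Fibonacci LFSR (taps 16, 14, 13, 11).
// Seed with any non-zero value; the state never becomes 0.
uint16_t lfsrNext(uint16_t s) {
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}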
One suggestion, but it's memory intensive:
The generator builds a list of all numbers in the interval, then shuffles it.
A very efficient way to shuffle an array of numbers in which each element is unique comes from image processing and is used when applying techniques like pixel-dissolve.
Basically you start with an ordered 2D array and then shift columns and rows. Those permutations are, by the way, easy to implement; you can even have one exact method that will yield the resulting value at x,y after n permutations.
The basic technique, described on a 3x3 grid:
1) Start with an ordered list, each number may exist only once
0 1 2
3 4 5
6 7 8
2) Pick a row/column you want to shuffle, advance it one step. In this case, I am shifting the second row one to the right.
0 1 2
5 3 4
6 7 8
3) Pick a row/column you want to shuffle... I shift the second column one down.
0 7 2
5 1 4
6 3 8
4) Pick ... For instance, first row, one to the left.
2 0 7
5 1 4
6 3 8
You can repeat those steps as often as you want. You can always do this kind of transformation also on a 1D array. So your result would be now [2, 0, 7, 5, 1, 4, 6, 3, 8].
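A hedged C++ sketch of those two moves on a flattened n x n grid (names are mine). Rotating row 1 right and then column 1 down on {0,...,8} with n=3 reproduces steps 2) and 3) above; each call only permutes the values, so nothing is duplicated or lost.

#include <algorithm>
#include <vector>

void rotateRowRight(std::vector<int>& a, int n, int r) {      // shift row r right by one
    std::rotate(a.begin() + r * n, a.begin() + r * n + (n - 1), a.begin() + (r + 1) * n);
}

void rotateColDown(std::vector<int>& a, int n, int c) {       // shift column c down by one
    int last = a[(n - 1) * n + c];
    for (int row = n - 1; row > 0; --row) a[row * n + c] = a[(row - 1) * n + c];
    a[c] = last;
}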
An occasionally useful alternative to the shuffle approach is to use a subscriptable set container. At each step, choose a random number 0 <= n < count. Extract the nth item from the set.
The main problem is that typical containers can't handle this efficiently. I have used it with bit-vectors, but it only works well if the largest possible member is reasonably small, due to the linear scanning of the bitvector needed to find the nth set bit.
99% of the time, the best approach is to shuffle as others have suggested.
EDIT
I missed the fact that a simple array is a good "set" data structure - don't ask me why, I've used it before. The "trick" is that you don't care whether the items in the array are sorted or not. At each step, you choose one randomly and extract it. To fill the empty slot (without having to shift an average half of your items one step down) you just move the current end item into the empty slot in constant time, then reduce the size of the array by one.
For example...
#include <vector>

class remaining_items_queue
{
private:
    std::vector<int> m_Items;
public:
    ...
    bool Extract (int &p_Item);  // return false if items already exhausted
};

bool remaining_items_queue::Extract (int &p_Item)
{
    if (m_Items.size () == 0) return false;
    int l_Random = Random_Num (m_Items.size ());
        // Random_Num written to give 0 <= result < parameter
    p_Item = m_Items [l_Random];
    m_Items [l_Random] = m_Items.back ();
    m_Items.pop_back ();
    return true;  // an item was extracted successfully
}
The trick is to get a random number generator that gives (with a reasonably even distribution) numbers in the range 0 to n-1 where n is potentially different each time. Most standard random generators give a fixed range. Although the following DOESN'T give an even distribution, it is often good enough...
int Random_Num (int p)
{
    return (std::rand () % p);
}
std::rand returns random values in the range 0 <= x <= RAND_MAX, where RAND_MAX is implementation defined.
Take all numbers in the interval, put them to list/array
Shuffle the list/array
Loop over the list/array
One way is to generate an ordered list (0-9 in your example).
Then use the random function to select an item from the list. Remove the item from the original list and add it to the tail of the new one.
The process is finished when the original list is empty.
Output the new list.
You can use a linear congruential generator with parameters chosen randomly, but constrained so that it generates the full period. You need to be careful, because the quality of the random numbers may be bad, depending on the parameters.
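A hedged C++ sketch of that (parameter choices are mine, taken from commonly published LCG constants): by the Hull-Dobell theorem, x -> (a*x + c) mod m has full period when c is coprime to m, a-1 is divisible by every prime factor of m, and also by 4 if m is. Choosing m as a power of two at least as large as the interval and skipping out-of-range outputs yields a non-repeating sequence, though not a statistically strong one.

#include <cstdint>

struct FullPeriodLcg {
    uint32_t size, m, x;
    explicit FullPeriodLcg(uint32_t count) : size(count), m(1), x(0) {
        while (m < size) m <<= 1;              // smallest power of two >= size
    }
    // values in [0, size), no repeats within one period of length m
    uint32_t next() {
        const uint32_t a = 1664525u;           // a % 4 == 1, so a-1 is divisible by 4 (and 2)
        const uint32_t c = 1013904223u;        // odd, hence coprime to any power of two
        do { x = (a * x + c) & (m - 1); } while (x >= size);
        return x;
    }
};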
Problem
I need to create 32-bit numbers (signed or unsigned doesn't matter, the highest bit will never be set anyway) and each number must have a given number of bits set.
Naive Solution
The easiest solution is of course to start with the number zero. Within a loop the number is increased by one and its set bits are counted; if the count has the desired value, the number is stored in a list, if not the loop just repeats. The loop stops once enough numbers have been found. Of course this works just fine, but it's awfully slow once the number of desired bits gets very high.
A Better Solution
The simplest number having (let's say) 5 bits set is the number where the lowest 5 bits are set. This number can be easily created: within a loop the lowest bit is set and the number is shifted to the left by one; this loop runs 5 times and I have found the first number with 5 bits set. The next couple of numbers are easy to create as well. We now pretend the number to be 6 bits wide with the highest one not set. Now we start shifting the zero bit to the right, so we get 101111, 110111, 111011, 111101, 111110. We could repeat this by adding another 0 to the front and repeating the process: 0111110, 1011110, 1101110, etc. However, that way the numbers grow much faster than necessary, as this simple approach leaves out numbers like 1010111.
So is there a better way to create all possible permutations, a generic approach that can be used regardless of how many bits wide the next number will be and regardless of how many set bits we need?
You can use the bit-twiddling hack from hackersdelight.org.
In his book he has code to get the next higher number with the same number of one-bits set.
If you use this as a primitive to increase your number, all you have to do is find a starting point. Getting the first number with N bits set is easy: it's just 2^N - 1.
You will iterate through all possible numbers very fast that way.
#include <stdio.h>

unsigned next_set_of_n_elements(unsigned x)
{
    unsigned smallest, ripple, new_smallest, ones;

    if (x == 0) return 0;
    smallest     = (x & -x);
    ripple       = x + smallest;
    new_smallest = (ripple & -ripple);
    ones         = ((new_smallest/smallest) >> 1) - 1;
    return ripple | ones;
}

// test code (shown for numbers with two bits set)
void test (void)
{
    int bits = 2;
    int a = (1 << bits) - 1;   // first number with 'bits' bits set
    int i;
    for (i=0; i<100; i++)
    {
        printf ("next number is %d\n", a);
        a = next_set_of_n_elements(a);
    }
}
Try approaching the problem from the opposite way round - what you're trying to do is equivalent to "find n numbers in the range 0-31".
Suppose you're trying to find 4 numbers. You start with [0,1,2,3] and then increase the last number each time (getting [0,1,2,4], [0,1,2,5] ...) until you hit the limit [0,1,2,31]. Then increase the penultimate number, and set the last number to one higher: [0,1,3,4]. Go back to increasing the last number: [0,1,3,5], [0,1,3,6]... etc. Once you hit the end of this, you go back to [0,1,4,5] - eventually you reach [0,1,30,31] at which point you have to backtrack one step further: [0,2,3,4] and off you go again. Keep going until you finally end up with [28,29,30,31].
Given a set of numbers, it's obviously easy to convert it into the corresponding 32-bit number.
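A hedged C++ sketch of that enumeration (names are mine): walk through the k-element index combinations of {0..31} in exactly the order described, and convert each one into a 32-bit mask.

#include <cstdint>
#include <vector>

bool nextCombination(std::vector<int>& idx, int n) {    // n = 32 for this problem
    int k = (int)idx.size();
    int i = k - 1;
    while (i >= 0 && idx[i] == n - k + i) --i;          // rightmost index that can still grow
    if (i < 0) return false;                            // [n-k .. n-1] was the last combination
    ++idx[i];
    for (int j = i + 1; j < k; ++j) idx[j] = idx[j - 1] + 1;
    return true;
}

uint32_t toMask(const std::vector<int>& idx) {
    uint32_t m = 0;
    for (int b : idx) m |= (1u << b);
    return m;
}

Starting from idx = {0,1,2,3} and calling nextCombination(idx, 32) until it returns false visits every 32-bit value with exactly four bits set.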
You want to generate combinations, see this Wikipedia article.
You either need factoradic permutations (Google that) or one of the algorithms on the Wiki.