I have a resource scheduling issue in Java where things need to be sequenced, but there are restrictions on what resources can be next to each other. A good analogy is a string of "digits", where only certain digits can be next to each other. My solution was recursive, and works fine for small strings, but run time is O(X^N), where X is the number of possible digits (the base), and N is the length of the string. It quickly becomes unmanageable.
Using the compatibility matrix below, here are a few examples of allowed strings
Length of 1: 0, 1, 2, 3, 4
Length of 2: 02, 03, 14, 20, 30, 41
Length of 3: 020, 030, 141, 202, 203, 302, 303, 414
   0 1 2 3 4
   ---------
0| 0 0 1 1 0
1| 0 0 0 0 1
2| 1 0 0 0 0
3| 1 0 0 0 0
4| 0 1 0 0 0
My solution for counting all strings of length N was to start with an empty string, permute the first digit, and make a recursive call for all strings of length N-1. The recursive calls check the last digit that was added and try all permutations that can be next to that digit. There are some optimizations made so that I don't try to permute 00, 01, 04 every time, for example - only 02, 03 - but performance is still poor as it scales from base 5 (the example) to base 4000.
Any thoughts on a better way to count the permutations other than trying to enumerate all of them?
If you just want the number of strings of a certain length, you could just multiply the compatibility matrix with itself a few times, and sum its values.
n = length of string
A = compatibility matrix
number of possible strings = sum of all entries of A^(n-1)
A few examples:
n = 1
| 1 0 0 0 0 |
| 0 1 0 0 0 |
| 0 0 1 0 0 |
| 0 0 0 1 0 |
| 0 0 0 0 1 |
sum: 5
n = 3
| 2 0 0 0 0 |
| 0 1 0 0 0 |
| 0 0 1 1 0 |
| 0 0 1 1 0 |
| 0 0 0 0 1 |
sum: 8
n = 8
| 0 0 8 8 0 |
| 0 0 0 0 1 |
| 8 0 0 0 0 |
| 8 0 0 0 0 |
| 0 1 0 0 0 |
sum: 34
The original matrix (row i, column j) could be thought of as the number of strings that start with symbol i and whose next symbol is symbol j. Alternatively, you could see it as the number of strings of length 2 which start with symbol i and end with symbol j.
Matrix multiplication preserves this invariant, so after exponentiation, A^(n-1) contains the number of strings that start with symbol i, have length n, and end in symbol j.
See Wikipedia: Exponentiation by squaring for an algorithm for faster calculation of matrix powers.
(Thanks stefan.ciobaca)
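To make this concrete, here is a minimal Python sketch of the counting idea, using plain repeated multiplication (for large n you would substitute exponentiation by squaring as linked above):

def count_strings(A, n):
    # sum of all entries of A^(n-1) = number of allowed strings of length n
    size = len(A)
    power = [[int(i == j) for j in range(size)] for i in range(size)]  # A^0 = identity
    for _ in range(n - 1):
        power = [[sum(power[i][k] * A[k][j] for k in range(size))
                  for j in range(size)] for i in range(size)]
    return sum(sum(row) for row in power)

A = [[0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0]]

for n in (1, 3, 8):
    print(n, count_strings(A, n))   # 5, 8, 34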
This specific case reduces to the formula:
number of possible strings = f(n) = 4 + Σ_{k=1..n} 2^⌊(k-1)/2⌋ = f(n-1) + 2^⌊(n-1)/2⌋
n f(n)
---- ----
1 5
2 6
3 8
4 10
5 14
6 18
7 26
8 34
9 50
10 66
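A quick check of the closed form in Python (just the recurrence written out, nothing more):

def f(n):
    # f(n) = f(n-1) + 2^floor((n-1)/2), with f(1) = 5
    total = 4
    for k in range(1, n + 1):
        total += 2 ** ((k - 1) // 2)
    return total

print([f(n) for n in range(1, 11)])   # [5, 6, 8, 10, 14, 18, 26, 34, 50, 66]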
Do you just want to know how many strings of a given length you can build with the rules in the given matrix? If so, an approach like this should work:
n = 5
maxlen = 100
combine = [
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0]
]

# counts of strings starting with 0,1,...,4, initially for strings of length one:
counts = [1, 1, 1, 1, 1]

for size in range(2, maxlen + 1):
    # calculate counts for size from counts for (size-1)
    newcount = []
    for next in range(n):
        total = 0
        for head in range(n):
            if combine[next][head]:
                # |next| can be before |head|, so add the counts for |head|
                total += counts[head]
        # append, so that newcount[next] == total
        newcount.append(total)
    counts = newcount
    print "length %i: %i items" % (size, sum(counts))
Your algorithm seems to be optimal.
How are you using these permutations? Are you accumulating them in one list, or using them one by one? Since there are a huge number of such permutations, the poor performance may be due to large memory usage (if you are collecting all of them), or it may simply take that long. You can't do billions of loops in trivial time.
Reply to comment:
If you just want to count them, then you can use dynamic programming:
Let count be a 2-D array where count[l][j] is the number of such permutations of length l that end with digit j.
Then count[l][i] = count[l-1][i1] + count[l-1][i2] + ..., where i1, i2, ... are the digits that can precede i (this can be saved in a pre-calculated array).
Every cell of count can be filled by summing K numbers (K depends on the compatibility matrix), so the complexity is O(KMN), where M is the length of the permutation and N is the total number of digits.
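As a rough sketch of that DP (in Python rather than Java; compat is the compatibility matrix from the question, and only the counts for the previous length are kept):

def count_permutations(compat, length):
    n = len(compat)
    # precede[j] = digits that may appear immediately before digit j
    precede = [[i for i in range(n) if compat[i][j]] for j in range(n)]
    count = [1] * n                 # one string of length 1 ending in each digit
    for _ in range(length - 1):
        count = [sum(count[i] for i in precede[j]) for j in range(n)]
    return sum(count)

compat = [[0, 0, 1, 1, 0],
          [0, 0, 0, 0, 1],
          [1, 0, 0, 0, 0],
          [1, 0, 0, 0, 0],
          [0, 1, 0, 0, 0]]
print(count_permutations(compat, 3))   # 8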
Maybe I don't understand this, but wouldn't this be served by having a table of lists that, for each digit, gives the list of valid digits that could follow it?
Then your routine to generate will take an accumulated result, the digit number, and the current digit. Something like:
// not really Java - and you probably don't want chars, but you'll fix it
void GenerateDigits(char[] result, int currIndex, char currDigit)
{
    if (currIndex == kMaxIndex) {
        NotifyComplete(result);
        return;
    }
    char[] validFollows = GetValidFollows(currDigit); // table lookup
    for (char c : validFollows) {
        result[currIndex] = c;
        GenerateDigits(result, currIndex + 1, c);
    }
}
The complexity increases as a function of the number of digits to generate, but that function depends on the total number of valid follows for any one digit. If the total number of follows is the same for every digit, let's say, k, then the time to generate all possible permutations is going to be O(k^n) where n is the number of digits. Sorry, I can't change math. The time to generate n digits in base 10 is 10^n.
I'm not exactly sure what you're asking, but since there are potentially n! permutations of a string of n digits, you're not going to be able to list them faster than n!. I'm not exactly sure how you think you got a runtime of O(n^2).
I'm interested in efficiently calculating the probability distribution over possible secret numbers given what one can observe of the opponents' hand (and your own hand) in the board game Da Vinci Code. A link to the game here: https://boardgamegeek.com/boardgame/8946/da-vinci-code
I have abstracted the problem into the following:
You are given an array A of length N and a finite set of numbers Si for each index i of the array. Now,
we are to place a number from Si at each index i to fill the entire array A;
while ensuring that the number is unique across the entire array A;
and for 3 disjoint subarrays A1, A2, A3 of A such that concat(A1, A2, A3) = A, the numbers in each subarray must follow a strictly increasing order;
given all the possible numbers to form A that satisfy the above constraints, what is the probability distribution over each number at each index?
Here I provide an example below:
Assuming we have the following array of length 5 with each column representing Si at the index of the column
| 6 6 | 6 6 | 6 |
| 5 | 5 | |
| 4 4 | | 4 |
| | 3 3 | |
| 2 | 2 2 | |
| 1 1 | | |
| ___ | __ | _ |
| A1 | A2 | A3|
The set of all possible arrays are:
14236
14256
14356
15234
15236
15264
15364
16234
16254
16354
24356
25364
26354
45236
Therefore the probability distribution over each number [1-6] at each index is:
6 0 4/14 0 3/14 6/14
5 0 6/14 0 6/14 0
4 1/14 4/14 0 0 8/14
3 0 0 6/14 5/14 0
2 3/14 0 8/14 0 0
1 10/14 0 0 0 0
___________ __________ ______
A1 A2 A3
Brute forcing this problem is obviously doable but I have a gut feeling that there must be some more efficient algorithms for this.
The reason why I think so is that one can derive the probability distribution from the set of all possibilities but not the other way around, so the distribution itself must contain less information than the set of all possibilities. Therefore, I believe that we do not need to generate all possibilities just to obtain the probability distribution.
Hence, I am wondering if there is any smart matrix operation we could use for this problem or even fixed-point iteration/density evolution to approximate the end probability distribution? Some other potentially more efficient approaches to this problem are also appreciated.
Edit: By brute-force, I mean specifically enumerating all possibilities with constraint propagation like in sudoku. My hope is to obtain an accurate solution, or an approximate solution that approximates well (better than plain Monte Carlo), that works better than CP in terms of running time.
Edit2: The better solution I desire should have the characteristic that it does not need to generate all possibilities to obtain or approximate the probability distribution.
Did you consider Constraint Propagation?
When you assign a number to a position, that number cannot appear in any other position, so exclude that number from the remaining positions
When you assign a number in the first column of a subarray, the second column must contain a larger value, so exclude all values that are lower or equal
With a BF approach in your example the code would generate and check 4 * 4 * 3 * 4 * 2 = 384 possibilities; with the CP approach we only generate 65 possibilities.
Here is a sample Python implementation:
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DaVinci:
    grid : List[List[int]]
    top : int
    lastcol : int = 0
    solved : List = field(default_factory=list)
    count : int = 0
    distrib : List[Dict[int,int]] = field(init=False)

    def __post_init__(self):
        self.lastcol = len(self.grid)-1
        self.distrib = [{x:0 for x in range(1,self.top+1)} for y in range(len(self.grid))]
        self.solve_next(current = 0, even = True, blocked = [], minval = 0, solving = [])
        self.count = len(self.solved)

    def solve_next(self, current, even, blocked, minval, solving):
        for n in self.grid[current]:
            if n not in blocked and n > minval:
                if current != self.lastcol:
                    self.solve_next(current + 1, not even, blocked + [n], n * even, solving + [n])
                else:
                    for col in range(self.lastcol):
                        self.distrib[col][solving[col]] += 1
                    self.distrib[self.lastcol][n] += 1
                    self.solved.append(solving + [n])

    def show_solved(self):
        for sol in self.solved:
            print(''.join(map(str,sol)))

    def show_distrib(self):
        for i in range(1, self.top+1):
            print(i, end = ' ')
            for col in range(len(self.grid)):
                print(f'{self.distrib[col][i]:2d}/{self.count}', end = ' ')
            print()
dv = DaVinci([[1,2,4,6],[1,4,5,6],[2,3,6],[2,3,5,6],[4,6]], 6)
dv.show_solved()
14236
14256
14356
15234
15236
15264
15364
16234
16254
16354
24356
25364
26354
45236
dv.show_distrib()
1 10/14 0/14 0/14 0/14 0/14
2 3/14 0/14 8/14 0/14 0/14
3 0/14 0/14 6/14 5/14 0/14
4 1/14 4/14 0/14 0/14 8/14
5 0/14 6/14 0/14 6/14 0/14
6 0/14 4/14 0/14 3/14 6/14
A simple idea to get an approximation for the distribution is to use a Monte Carlo approach.
Set a variable total := 0 and a matrix M[N][Q] with all entries initially set to zero (Q is the total number of allowed values).
Fix a positive integer K. Perform K iterations. At each iteration, for each i in [1..N], take a random element from Si and fill the array A. When the array A is all filled, verify in O(N) if it satisfies your conditions. If so, increment by one the variable total and iterate through the array, incrementing the matrix entries M[i][A[i]] by one, for i in [1..N].
In the end, iterate through all the elements of the matrix M in O(N Q) and divide its elements by total to get an approximation for the distribution.
Total time complexity is O(N (K + Q)).
You can also precalculate stuff to make the approximation more precise. For example, you can precalculate all increasing sequences in the groups A1, A2 and A3. Put them in arrays I1, I2, I3. Then, at each iteration, instead of taking random elements from each Si, you take random sequences from I1, I2 and I3 and verify if the concatenation has no repeated elements (in O(N)). If so, proceed as before. The total time complexity (apart from the expensive precalculation) remains O(N (K + Q)).
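A minimal sketch of the basic (unoptimized) sampler in Python, assuming S is the list of candidate sets per index and subarrays holds the half-open index ranges that must be strictly increasing (the names here are illustrative, not from the question):

import random

def approx_distribution(S, subarrays, K=200000):
    N = len(S)
    hits = [dict() for _ in range(N)]           # hits[i][value] = count over valid samples
    total = 0
    for _ in range(K):
        A = [random.choice(s) for s in S]
        if len(set(A)) != N:                    # values must be unique across A
            continue
        if any(A[i] >= A[i + 1] for lo, hi in subarrays for i in range(lo, hi - 1)):
            continue                            # each subarray must be strictly increasing
        total += 1
        for i, v in enumerate(A):
            hits[i][v] = hits[i].get(v, 0) + 1
    return [{v: c / total for v, c in d.items()} for d in hits] if total else hits

# the example from the question: A1 = indices 0..1, A2 = 2..3, A3 = index 4
S = [[1, 2, 4, 6], [1, 4, 5, 6], [2, 3, 6], [2, 3, 5, 6], [4, 6]]
print(approx_distribution(S, [(0, 2), (2, 4), (4, 5)]))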
Start by converting all legal subarray selections into bitvectors.
E.g., for A2 we have [2,3], [2,5], [2,6], [3,5], [3,6]
[2,3] as a bitvector is 000110
[3,5] is 010100
Next, arrange your three subarrays by the number of bitvectors they have.
Next, put these in a hash for each subarray/member combination except the smallest subarray. Use the smallest set bit as the key.
E.g. For [2,3] in A2, we'd have {2 => 000110}
Note that the values of the map need to be stored in an array, since there will be multiple bitvectors for each index/element combo.
Finally,
For every bitvec of subarray_small:
    For every non-set bit of that bitvec:
        Find the list that has that bit as a key in subarray_medium
        For every bitvec in this list:
            Check if the inverse of (bitvec_small | bitvec_medium) is in the hash for subarray_large.
            If it is, we have a valid arrangement; update your frequency counts.
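Here is a simplified Python sketch of this idea: it enumerates each subarray's strictly increasing choices as (values, bitmask) pairs and combines subarrays with bitmask disjointness tests, but it skips the smallest-set-bit hashing described above, so treat it as an illustration rather than the full scheme:

from itertools import product
from fractions import Fraction

def distribution(S, blocks):
    # blocks = lists of indices of A, e.g. [[0, 1], [2, 3], [4]]
    def choices(block):
        result = []
        for vals in product(*(S[i] for i in block)):
            if all(a < b for a, b in zip(vals, vals[1:])):   # strictly increasing
                result.append((vals, sum(1 << v for v in vals)))
        return result

    per_block = [choices(b) for b in blocks]
    counts = [dict() for _ in S]
    total = 0
    for picks in product(*per_block):            # one legal selection per subarray
        used, ok = 0, True
        for _, mask in picks:
            if used & mask:                       # a value appears in two subarrays
                ok = False
                break
            used |= mask
        if ok:
            total += 1
            for block, (vals, _) in zip(blocks, picks):
                for idx, v in zip(block, vals):
                    counts[idx][v] = counts[idx].get(v, 0) + 1
    return [{v: Fraction(c, total) for v, c in d.items()} for d in counts]

S = [[1, 2, 4, 6], [1, 4, 5, 6], [2, 3, 6], [2, 3, 5, 6], [4, 6]]
print(distribution(S, [[0, 1], [2, 3], [4]]))     # 14 valid arrangements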
I am interested in writing a function generate(n,m) which exhaustively generates strings of length n(n-1)/2 consisting solely of +/- characters. These strings will then be transformed into an n × n symmetric (-1,0,1)-matrix in the following way:
toTriangle["+--+-+-++-"]
{{1, -1, -1, 1}, {-1, 1, -1}, {1, 1}, {-1}}
toMatrix[%, 0] // MatrixForm
| 0 1 -1 -1 1 |
| 1 0 -1 1 -1 |
matrixForm = |-1 -1 0 1 1 |
|-1 1 1 0 -1 |
| 1 -1 1 -1 0 |
Thus the given string represents the upper-right triangle of the matrix, which is then reflected to generate the rest of it.
Question: How can I generate all +/- strings such that the resulting matrix has precisely m -1's per row?
For example, generate(5,3) will give all strings of length 5(5-1)/2 = 10 such that each row contains precisely three -1's.
I'd appreciate any help with constructing such an algorithm.
This is the logic to generate every matrix for a given n and m. It's a bit convoluted, so I'm not sure how much faster than brute force an implementation would be; I assume the difference will become more pronounced for larger values.
(The following will generate an output of zeros and ones for convenience, where zero represents a plus and a one represents a minus.)
A square matrix where each row has m ones translates to a triangular matrix where these folded row/columns have m ones:
x 0 1 0 1      x 0 1 0 1      0 1 0 1
0 x 1 1 0        x 1 1 0        1 1 0
1 1 x 0 0          x 0 0          0 0
0 1 0 x 1            x 1            1
1 0 0 1 x              x
Each of these groups overlaps with all the other groups; choosing values for the first k groups means that the vertical part of group k+1 is already determined.
We start by putting the number of ones required per row on the diagonal; e.g. for (5,2) that is:
2 . . . .
2 . . .
2 . .
2 .
2
Then we generate every bit pattern with m ones for the first group; there are (n-1 choose m) of these, and they can be efficiently generated, e.g. with Gosper's hack.
(4,2) -> 0011 0101 0110 1001 1010 1100
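For reference, a small Python version of Gosper's hack (the answer's own code below uses a reverse-lexicographic generator instead):

def bit_patterns(n, m):
    # yield all n-bit integers with exactly m bits set, in increasing order
    if m == 0:
        yield 0
        return
    v = (1 << m) - 1
    while v < (1 << n):
        yield v
        c = v & -v                       # lowest set bit
        r = v + c                        # ripple the carry upwards
        v = (((r ^ v) >> 2) // c) | r    # put the remaining ones back at the bottom

print([format(p, '04b') for p in bit_patterns(4, 2)])
# ['0011', '0101', '0110', '1001', '1010', '1100']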
For each of these, we fill it into the matrix and subtract it from the numbers of required ones:
X 0 0 1 1
2 . . .
2 . .
1 .
1
and then recurse with the smaller triangle:
2 . . .
2 . .
1 .
1
If we come to a point where some of the numbers of required ones on the diagonal are zero, e.g.:
2 . . .
1 . .
0 .
1
then we can already put a zero in this column, and generate the possible bit patterns for fewer columns; in the example that would be (2,2) instead of (3,2), so there's only one possible bit pattern: 11. Then we distribute the bit pattern over the columns that have a non-zero required count under them:
2 . 0 .      X 1 0 1
  1 . .        0 . .
    0 .          0 .
      1            0
However, not all possible bit patterns will lead to valid solutions; take this example:
2 . . . .    X 0 0 1 1
  2 . . .      2 . . .    2 . . .    X 0 1 1
    2 . .        2 . .      2 . .      2 . .    2 . .
      2 .          1 .        1 .        0 .      0 .
        2            1          1          0        0
where we end up with a row that requires another 2 ones while both columns can no longer take any ones. The way to spot this situation is by looking at the list of required ones per column that is created by each option in the penultimate step:
pattern required
0 1 1 -> 2 0 0
1 0 1 -> 1 1 0
1 1 0 -> 1 0 1
If the first value in the list is x, then there must be at least x non-zero values after it; which is false for the first of the three options.
(There is room for optimization here: in a count list like 1,1,0,6,0,2,1,1 there are only 2 non-zero values before the 6, which means that the 6 will be decremented at most 2 times, so its minimum value when it becomes the first element will be 4; however, there are only 3 non-zero values after it, so at this stage you already know this list will not lead to any valid solutions. Checking this would add to the code complexity, so I'm not sure whether that would lead to an improvement in execution speed.)
So the complete algorithm for (n,m) starts with:
Create an n-sized list with all values set to m (count of ones required per group).
Generate all bit patterns of size n-1 with m ones; for each of these:
Subtract the pattern from a copy of the count list (without the first element).
Recurse with the pattern and the copy of the count list.
and the recursive steps after that are:
Receive the sequence so far, and a count list.
The length of the count list is n, and its first element is m.
Let k be the number of non-zero values in the count list (without the first element).
Generate all bit patterns of size k with m ones; for each of these:
Create a 0-filled list sized n-1.
Distribute the bit pattern over it, skipping the columns with a zero count.
Add the value list to the sequence so far.
Subtract the value list from a copy of the count list (without the first element).
If the first value in the copy of the count list is greater than the number of non-zeros after it, skip this pattern.
At the deepest recursion level, store the sequence, or else:
Recurse with the sequence so far, and the copy of the count list.
Here's a code snippet as a proof of concept; in a serious language, and using integers instead of arrays for the bitmaps, this should be much faster:
function generate(n, m) {
    // if ((n % 2) && (m % 2)) return; // to catch (3,1)
    var counts = [], pattern = [];
    for (var i = 0; i < n - 1; i++) {
        counts.push(m);
        pattern.push(i < m ? 1 : 0);
    }
    do {
        var c_copy = counts.slice();
        for (var i = 0; i < n - 1; i++) c_copy[i] -= pattern[i];
        recurse(pattern, c_copy);
    }
    while (revLexi(pattern));
}

function recurse(sequence, counts) {
    var n = counts.length, m = counts.shift(), k = 0;
    for (var i = 0; i < n - 1; i++) if (counts[i]) ++k;
    var pattern = [];
    for (var i = 0; i < k; i++) pattern.push(i < m ? 1 : 0);
    do {
        var values = [], pos = 0;
        for (var i = 0; i < n - 1; i++) {
            if (counts[i]) values.push(pattern[pos++]);
            else values.push(0);
        }
        var s_copy = sequence.concat(values);
        var c_copy = counts.slice();
        var nonzero = 0;
        for (var i = 0; i < n - 1; i++) {
            c_copy[i] -= values[i];
            if (i && c_copy[i]) ++nonzero;
        }
        if (c_copy[0] > nonzero) continue;
        if (n == 2) {
            for (var i = 0; i < s_copy.length; i++) {
                document.write(["+ ", "− "][s_copy[i]]);
            }
            document.write("<br>");
        }
        else recurse(s_copy, c_copy);
    }
    while (revLexi(pattern));
}

function revLexi(seq) { // reverse lexicographical because I had this lying around
    var max = true, pos = seq.length, set = 1;
    while (pos-- && (max || !seq[pos])) if (seq[pos]) ++set; else max = false;
    if (pos < 0) return false;
    seq[pos] = 0;
    while (++pos < seq.length) seq[pos] = set-- > 0 ? 1 : 0;
    return true;
}

generate(5, 2);
Here are the number of results and the number of recursions for values of n up to 10, so you can compare them to check correctness. When n and m are both odd numbers, there are no valid results; this is calculated correctly, except in the case of (3,1); it is of course easy to catch these cases and return immediately.
(n,m) results number of recursions
(4,0) (4,3) 1 2 2
(4,1) (4,2) 3 6 7
(5,0) (5,4) 1 3 3
(5,1) (5,3) 0 12 20
(5,2) 12 36
(6,0) (6,5) 1 4 4
(6,1) (6,4) 15 48 76
(6,2) (6,3) 70 226 269
(7,0) (7,6) 1 5 5
(7,1) (7,5) 0 99 257
(7,2) (7,4) 465 1,627 2,313
(7,3) 0 3,413
(8,0) (8,7) 1 6 6
(8,1) (8,6) 105 422 1,041
(8,2) (8,5) 3,507 13,180 23,302
(8,3) (8,4) 19,355 77,466 93,441
(9,0) (9,8) 1 7 7
(9,1) (9,7) 0 948 4,192
(9,2) (9,6) 30,016 119,896 270,707
(9,3) (9,5) 0 1,427,457 2,405,396
(9,4) 1,024,380 4,851,650
(10,0) (10,9) 1 8 8
(10,1) (10,8) 945 4440 18930
(10,2) (10,7) 286,884 1,210,612 3,574,257
(10,3) (10,6) 11,180,820 47,559,340 88,725,087
(10,4) (10,5) 66,462,606 313,129,003 383,079,169
I doubt that you really want all variants for large n,m values - the number of them is tremendously large.
This problem is equivalent to the generation of m-regular graphs (note that if we replace all 1's by zeros and all -1's by 1's, we get the adjacency matrix of a graph; a regular graph is one where the degrees of all vertices equal m).
Here we can see that the number of (18,4) regular graphs is about 10^9 and rises fast with n/m values. The article contains a link to the program genreg, intended for generating such graphs. The FTP links to the code and executable don't work for me - perhaps too old.
Upd: Here is another link to the source (though from 1996 instead of the paper's 1999).
A simple approach to generating one instance of a regular graph is described here.
For small n/m values you can also try brute force: fill the first row with m ones (there are C(n,m) variants), and for every variant fill the free places in the second row, and so on.
Written in Wolfram Mathematica.
generate[n_, m_] := Module[{},
  x = Table[StringJoin["i", ToString[i], "j", ToString[j]],
    {j, 1, n}, {i, 2, n}];
  y = Transpose[x];
  MapThread[(x[[#, ;; #2]] = y[[#, ;; #2]]) &,
    {-Range[n - 1], Reverse@Range[n - 1]}];
  Clear @@ Names["i*"];
  z = ToExpression[x];
  Clear[s];
  s = Reduce[Join[Total@# == m & /@ z,
      0 <= # <= 1 & /@ Union[Flatten@z]],
    Union@Flatten[z], Integers];
  Clear[t, u, v];
  Array[(t[#] =
      Partition[Flatten[z] /.
          ToRules[s[[#]]], n - 1] /.
        {1 -> -1, 0 -> 1}) &, Length[s]];
  Array[Function[a,
    (u[a] = StringJoin[Flatten[MapThread[
          Take[#, 1 - #2] &,
          {t[a], Reverse[Range[n]]}]] /.
        {1 -> "+", -1 -> "-"}])], Length[s]];
  Array[Function[a,
    (v[a] = MapThread[Insert[#, 0, #2] &,
      {t[a], Range[n]}])], Length[s]]]
Timing[generate[9, 4];]
Length[s]
{202.208, Null}
1024380
The program takes 202 seconds to generate 1,024,380 solutions. E.g. the last one
u[1024380]
----++++---++++-+-+++++-++++--------
v[1024380]
0 -1 -1 -1 -1 1 1 1 1
-1 0 -1 -1 -1 1 1 1 1
-1 -1 0 -1 1 -1 1 1 1
-1 -1 -1 0 1 1 -1 1 1
-1 -1 1 1 0 1 1 -1 -1
1 1 -1 1 1 0 -1 -1 -1
1 1 1 -1 1 -1 0 -1 -1
1 1 1 1 -1 -1 -1 0 -1
1 1 1 1 -1 -1 -1 -1 0
and the first ten strings
u /@ Range[10]
++++----+++----+-+-----+----++++++++
++++----+++----+-+------+--+-+++++++
++++----+++----+-+-------+-++-++++++
++++----+++----+--+---+-----++++++++
++++----+++----+---+--+----+-+++++++
++++----+++----+----+-+----++-++++++
++++----+++----+--+-----+-+--+++++++
++++----+++----+--+------++-+-++++++
++++----+++----+---+---+--+--+++++++
I saw a post on the site about this and I didn't understand the answer; can I get an explanation please?
question:
Write code to determine if a number is divisible by 3. The input to the function is a single bit, 0 or 1, and the output should be 1 if the number received so far is the binary representation of a number divisible by 3, otherwise zero.
Examples:
input "0": (0) output 1
inputs "1,0,0": (4) output 0
inputs "1,1,0,0": (6) output 1
This is based on an interview question. I ask for a drawing of logic gates but since this is stackoverflow I'll accept any coding language. Bonus points for a hardware implementation (verilog etc).
Part a (easy): First input is the MSB.
Part b (a little harder): First input is the LSB.
Part c (difficult): Which one is faster and smaller, (a) or (b)? (Not theoretically in the Big-O sense, but practically faster/smaller.) Now take the slower/bigger one and make it as fast/small as the faster/smaller one.
answer:
State table for LSB:
S I S' O
0 0 0 1
0 1 1 0
1 0 2 0
1 1 0 1
2 0 1 0
2 1 2 0
Explanation: 0 is divisible by three. 0 << 1 + 0 = 0. Repeat using S = (S << 1 + I) % 3 and O = 1 if S == 0.
State table for MSB:
S I S' O
0 0 0 1
0 1 2 0
1 0 1 0
1 1 0 1
2 0 2 0
2 1 1 0
Explanation: 0 is divisible by three. 0 >> 1 + 0 = 0. Repeat using S = (S >> 1 + I) % 3 and O = 1 if S == 0.
S' is different from above, but O works the same, since S' is 0 for the same cases (00 and 11). Since O is the same in both cases, O_LSB = O_MSB, so to make MSB as short as LSB, or vice-versa, just use the shortest of both.
Thanks for the answers in advance.
Well, I suppose the question isn't entirely off-topic, since you asked about logic design, but you'll have to do the coding yourself.
You have 3 states in the S column. These track the value of the current full input mod 3. So, S0 means the current input mod 3 is 0, and so is divisible by 3 (remember also that 0 is divisible by 3). S1 means the remainder is 1, S2 means that the remainder is 2.
The I column gives the current input (0 or 1), and S' gives the next state (in other words, the new number mod 3).
For 'LSB', the new number is the old number << 1, plus either 0 or 1. Write out the table. For starters, if the old modulo was 0, then the new modulo will be 0 if the input bit was 0, and will be 1 if the new input was 1. This gives you the first 2 rows in the first table. Filling in the rest is left as an exercise for you.
Note that the O column is just 1 if the next state is 0, as expected.
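To make the table concrete, here is a small Python sketch of the S = (S << 1 + I) % 3 recurrence as a streaming function (just an illustration of the state machine, not a gate-level design):

def make_div3_checker():
    state = 0                            # value of the bits received so far, mod 3
    def step(bit):
        nonlocal state
        state = (state * 2 + bit) % 3    # S' = (S << 1 + I) % 3
        return 1 if state == 0 else 0    # O = 1 when the number so far is divisible by 3
    return step

step = make_div3_checker()
print([step(b) for b in [1, 1, 0, 0]])   # prefixes 1, 3, 6, 12 -> [0, 1, 1, 1]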
This is a puzzle I have been thinking about since last night. I have come up with a solution but it's not efficient, so I want to see if there is a better idea.
The puzzle is this:
given positive integers N and T, you will need to have:
for i in [1, T], A[i] from { -1, 0, 1 }, such that SUM(A) == N
additionally, every prefix sum of A shall be in [0, N], and once the prefix sum PSUM[A, t] == N, it's necessary to have A[i] == 0 for all i in [t + 1, T]
here prefix sum PSUM is defined to be: PSUM[A, t] = SUM(A[i] for i in [1, t])
the puzzle asks how many such A's exist given fixed N and T
for example, when N = 2, T = 4, the following As work:
1 1 0 0
1 -1 1 1
0 1 1 0
but the following don't:
-1 1 1 1 # prefix sum -1
1 1 -1 1 # non-0 following a prefix sum == N
1 1 1 -1 # prefix sum > N
The following Python code can verify this rule, given N as expect and an instance of A as seq (some people may find it easier to read code than a literal description):
def verify(expect, seq):
    s = 0
    for j, i in enumerate(seq):
        s += i
        if s < 0:
            return False
        if s == expect:
            break
    else:
        return s == expect
    for k in range(j + 1, len(seq)):
        if seq[k] != 0:
            return False
    return True
I have coded up my solution, but it's too slow. The following is mine:
I decompose the problem into two parts: a part without -1 in it (only {0, 1}) and a part with -1.
So if SOLVE(N, T) is the correct answer, I define a function SOLVE'(N, T, B), where a positive B allows me to extend the prefix sum to lie in the interval [-B, N] instead of [0, N],
so in fact SOLVE(N, T) == SOLVE'(N, T, 0).
so I soon realized the solution is actually:
have the prefix of A be some valid {0, 1} combination with positive length l, and with o 1s in it
at position l + 1, start to add one or more -1s and use B to track their number; the maximum will be B + o or the number of slots remaining in A, whichever is less
recursively call SOLVE'(N, T, B)
in the previous N = 2, T = 4 example, one of the search cases goes like this:
let the prefix of A be [1], so we have A = [1, -, -, -].
start adding -1s; here I will add only one: A = [1, -1, -, -].
recursively call SOLVE'; here I will call SOLVE'(2, 2, 0) to solve the last two spots. It will return [1, 1] only, so one of the combinations yields [1, -1, 1, 1].
but this algorithm is too slow.
I am wondering how I can optimize it, or whether there is a different way to look at this problem that can boost the performance. (I will just need the idea, not an implementation.)
EDIT:
some sample values are:
T    N    SOLVE(N, T)
3 2 3
4 2 7
5 2 15
6 2 31
7 2 63
8 2 127
9 2 255
10 2 511
11 2 1023
12 2 2047
13 2 4095
3 3 1
4 3 4
5 3 12
6 3 32
7 3 81
8 3 200
9 3 488
10 3 1184
11 3 2865
12 3 6924
13 3 16724
4 4 1
5 4 5
6 4 18
An exponential-time solution would in general be the following (in Python):
import itertools
choices = [-1, 0, 1]
print len([l for l in itertools.product(*([choices] * t)) if verify(n, l)])
An observation: assuming that n is at least 1, every solution to your stated problem ends in something of the form [1, 0, ..., 0]: i.e., a single 1 followed by zero or more 0s. The portion of the solution prior to that point is a walk that lies entirely in [0, n-1], starts at 0, ends at n-1, and takes fewer than t steps.
Therefore you can reduce your original problem to a slightly simpler one, namely that of determining how many t-step walks there are in [0, n] that start at 0 and end at n (where each step can be 0, +1 or -1, as before).
The following code solves the simpler problem. It uses the lru_cache decorator to cache intermediate results; this is in the standard library in Python 3, or there's a recipe you can download for Python 2.
from functools import lru_cache

@lru_cache()
def walks(k, n, t):
    """
    Return the number of length-t walks in [0, n]
    that start at 0 and end at k. Each step
    in the walk adds -1, 0 or 1 to the current total.

    Inputs should satisfy 0 <= k <= n and 0 <= t.
    """
    if t == 0:
        # If no steps allowed, we can only get to 0,
        # and then only in one way.
        return k == 0
    else:
        # Count the walks ending in 0.
        total = walks(k, n, t-1)
        if 0 < k:
            # ... plus the walks ending in 1.
            total += walks(k-1, n, t-1)
        if k < n:
            # ... plus the walks ending in -1.
            total += walks(k+1, n, t-1)
        return total
Now we can use this function to solve your problem.
def solve(n, t):
    """
    Find number of solutions to the original problem.
    """
    # All solutions stick at n once they get there.
    # Therefore it's enough to find all walks
    # that lie in [0, n-1] and take us to n-1 in
    # fewer than t steps.
    return sum(walks(n-1, n-1, i) for i in range(t))
Result and timings on my machine for solve(10, 100):
In [1]: solve(10, 100)
Out[1]: 250639233987229485923025924628548154758061157
In [2]: %timeit solve(10, 100)
1000 loops, best of 3: 964 µs per loop
I have an algorithm that can simulate converting a binary number to a decimal number by hand. What I mean by this is that each number is represented as an array of digits (from least-to-most-significant) rather than using a language's int or bigint type.
For example, 42 in base-10 would be represented as [2, 4], and 10111 in base-2 would be [1, 1, 1, 0, 1].
Here it is in Python.
from math import floor

def double(decimal):
    result = []
    carry = 0
    for i in range(len(decimal)):
        result.append((2 * decimal[i] + carry) % 10)
        carry = floor((2 * decimal[i] + carry) / 10)
    if carry != 0:
        result.append(carry)
    return result

def to_decimal(binary):
    decimal = []
    for i in reversed(range(len(binary))):
        decimal = double(decimal)
        if binary[i]:
            if decimal == []:
                decimal = [1]
            else:
                decimal[0] += 1
    return decimal
This was part of an assignment I had in an algorithms class a couple of semesters ago, and the professor gave us a challenge in his notes claiming that we should be able to derive from this algorithm a new one that could convert a number from base-2^k to binary. I dug this up today and it's been bothering me (read: making me feel really rusty), so I was hoping someone would be able to explain how I would write a to_binary(number, k) function based on this algorithm.
Base 2^k has digits 0, 1, ..., 2^k - 1.
For example, in base 2^4 = 16, we'd have the digits 0, 1, 2, ..., 10, 11, 12, 13, 14, 15. For convenience, we use letters for the bigger digits: 0, 1, ..., A, B, C, D, E, F.
So let's say you want to convert AB to binary. The trivial thing to do is convert it to decimal first, since we know how to convert decimal to binary:
AB = B*16^0 + A*16^1
= 11*16^0 + 10*16^1
= 171
If you convert 171 to binary, you'll get:
10101011
Now, is there a shortcut we can use, so we don't go through base 10? There is.
Let's stop at this part:
AB = B*16^0 + A*16^1
= 11*16^0 + 10*16^1
And recall what it takes to convert from decimal to binary: do integer division by 2, write down the remainders, write the remainders in reverse order in the end:
number | remainder after integer division by 2
----------------------------------------------
5 | 1
2 | 0
1 | 1
0 |
=> 5 = reverse(101) = 101 in binary
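In code, this divide-and-collect-remainders step might look like the following small helper (added here for illustration):

def to_bits(x):
    # repeated integer division by 2; the remainders, reversed, are the binary digits
    bits = []
    while x:
        bits.append(x % 2)
        x //= 2
    return bits[::-1] or [0]

print(to_bits(5))     # [1, 0, 1]
print(to_bits(171))   # [1, 0, 1, 0, 1, 0, 1, 1]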
Let's apply that to this part:
11*16^0 + 10*16^1
First of all, for the first 4 (because 16^1 = 2^4) divisions, the remainder of division by 2 will only depend on 11, because 16 % 2 == 0.
11 | 1
5 | 1
2 | 0
1 | 1
0 |
So the last part of our number in binary will be:
1011
By the time we've done this, we will have gotten rid of the 16^1, since we've done 4 divisions so far. So now we only depend on 10:
10 | 0
5 | 1
2 | 0
1 | 1
0 |
So our final result will be:
10101011
Which is what we got with the classic approach!
As we can notice, we only need to convert the digits to binary individually, because they are what will, individually and sequentially, affect the result:
A = 10 = 1010
B = 11 = 1011
=> AB in binary = 10101011
For your base 2^k, do the same: convert each individual digit to binary, from most significant to least, and concatenate the results in order.
Example implementation:
def to_binary(number, k):
result = []
for x in number:
# convert x to binary
binary_x = []
t = x
while t != 0:
binary_x.append(t % 2)
t //= 2
result.extend(binary_x[::-1])
return result
#10 and 11 are digits here, so this is like AB.
print(to_binary([10, 11], 2**4))
print(to_binary([101, 51, 89], 2**7))
Prints:
[1, 0, 1, 0, 1, 0, 1, 1]
[1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1]
Note: there is actually a bug in the above code. For example, 2 in base 2**7 will get converted to 10 in binary. But digits in base 2**7 should have 7 bits, so you need to pad it to that many bits: 0000010. I'll leave this as an exercise.