Prolog change for 100 cents

I am new to Prolog and I want to write a function that returns all different ways to make change for a dollar (100 cents). We have 2-cent coins, 11-cent coins, 38-cent coins, and, interestingly, -8-cent coins (a coin worth -8 cents). Also, we only have 10 of the -8-cent coins in total (there is no upper bound for the other kinds of coins).
Here is my try:
change100([P2, P11, P38, Pn8]):-
Pn8 =< 10,
Pn8 >= 0,
P2 >= 0, P11 >= 0, P38 >= 0,
D is 2 * P2 + 11 * P11 + 38 * P38 - 8 * Pn8,
D = 100.
But it doesn't work. When I run it and query
?- change100(A).
I get the message
ERROR:
=< /2: Arguments are not sufficiently instantiated.
Why is this? How can I fix it?
The original problem statement:
There are 4 kinds of coins: a 2-cent piece, an 11-cent piece, a 38-cent piece, and, interestingly, a -8-cent piece (a coin worth -8 cents). Even more interesting, a total of only 10 -8-cent pieces have ever been created, so you never need to worry about situations with more than 10 -8-cent pieces.
How many different ways can you make change for a dollar (100 cents)?
For example, one way to make change for 100 cents is to use 4 2-cent pieces, 8 11-cent pieces, 2 38-cent pieces, and 9 -8-cent pieces.
It’s possible to have 0 of some coins, e.g. 50 2-cent pieces is one way to make change for a dollar.
Write a Prolog function called change100(Coins) that begins like this:
change100([P2, P11, P38, Pn8]) :-
% ...
P2 is the number of 2-cent coins, P11 is the number of 11-cent coins, and so on. Keep in mind that, as specified in the problem description, Pn8 is at most 10.

To be used in arithmetic comparisons, variables in Prolog must be instantiated.
Your code tries to arithmetically compare uninstantiated variables, which standard Prolog arithmetic cannot do.
But you can use so-called "constraint logic programming", a Prolog extension that solves exactly this kind of problem.
Here is the code in SWI-Prolog (almost the same as your original code):
:- use_module(library(clpfd)).
change100([P2, P11, P38, Pn8]):-
Pn8 #=< 10,
Pn8 #>= 0,
P2 #>= 0, P11 #>= 0, P38 #>= 0,
D #= 2 * P2 + 11 * P11 + 38 * P38 - 8 * Pn8,
D #= 100,
label([P2, P11, P38, Pn8]).
Update.
It can be verified that you get 195 different solutions with this program.
You mentioned that there should be 108 different solutions. That number is obtained under the incorrect assumption that the number of coins of a kind times that coin's value must be at most 100 (i.e. no more than 100/2 = 50 coins of value 2). The assumption would be correct if we only had positive coin values, but for this problem it omits solutions such as 90 * 2 + 0 * 11 + 0 * 38 - 10 * 8.
To emulate this incorrect logic and get 108 solutions, you can add the line 2 * P2 #=< 100, 11 * P11 #=< 100, 38 * P38 #=< 100, to the program. But there really are 195 solutions.
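If you want to double-check that count without Prolog, a quick brute-force enumeration gives the same number (a sketch in Python; the bounds follow from the fact that with at most ten -8-cent coins the positive coins must add up to at most 100 + 80 = 180 cents):
count = 0
for pn8 in range(11):                      # 0..10 coins of value -8
    target = 100 + 8 * pn8                 # the positive coins must add up to this
    for p38 in range(target // 38 + 1):
        for p11 in range((target - 38 * p38) // 11 + 1):
            rest = target - 38 * p38 - 11 * p11
            if rest % 2 == 0:              # the rest is covered by 2-cent coins
                count += 1
print(count)                               # 195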

How to approach and understand a math related DSA question

I found this question online and I really have no idea what the question is even asking. I would really appreciate some help in first understanding the question, and a solution if possible. Thanks!
To see if a number is divisible by 3, you need to add up the digits of its decimal notation, and check if the sum is divisible by 3.
To see if a number is divisible by 11, you need to split its decimal notation into pairs of digits (starting from the right end), add up corresponding numbers and check if the sum is divisible by 11.
For any prime p (except for 2 and 5) there exists an integer r such that a similar divisibility test exists: to check if a number is divisible by p, you need to split its decimal notation into r-tuples of digits (starting from the right end), add up these r-tuples and check whether their sum is divisible by p.
Given a prime int p, find the minimal r for which such divisibility test is valid and output it.
The input consists of a single integer p - a prime between 3 and 999983, inclusive, not equal to 5.
Example
input
3
output
1
input
11
output
2
This is a very cool problem! It uses modular arithmetic and some basic number theory to devise the solution.
Let's say we have p = 11. What divisibility rule applies here? How many digits at once do we need to take, to have a divisibility rule?
Well, let's try a single digit at a time. That would mean, that if we have 121 and we sum its digits 1 + 2 + 1, then we get 4. However we see, that although 121 is divisible by 11, 4 isn't and so the rule doesn't work.
What if we take two digits at a time? With 121 we get 1 + 21 = 22. We see that 22 IS divisible by 11, so the rule might work here. And in fact, it does. For p = 11, we have r = 2.
This requires a bit of intuition which I am unable to convey in text (I really have tried) but it can be proven that for a given prime p other than 2 and 5, the divisibility rule works for tuples of digits of length r if and only if the number 99...9 (with r nines) is divisible by p. And indeed, for p = 3 we have 9 % 3 = 0, while for p = 11 we have 9 % 11 = 9 (this is bad) and 99 % 11 = 0 (this is what we want).
If we want to find such an r, we start with r = 1. We check if 9 is divisible by p. If it is, then we found the r. Otherwise, we go further and we check if 99 is divisible by p. If it is, then we return r = 2. Then, we check if 999 is divisible by p and if so, return r = 3 and so on. However, the 99...9 numbers can get very large. Thankfully, to check divisibility by p we only need to store the remainder modulo p, which we know is small (at least smaller than 999983). So the code in C++ would look something like this:
int r(int p) {
    int result = 1;
    int remainder = 9 % p;                     // start with a single nine: 9 mod p
    while (remainder != 0) {
        // append another 9 (99...9 -> 99...99), keeping only the value mod p
        remainder = (remainder * 10 + 9) % p;
        result++;
    }
    return result;                             // number of nines needed
}
I have no idea how they expect a random programmer with no background to figure out the answer from this.
But here is a brief introduction to modular arithmetic that should make this doable.
In programming, n % k is the modulo operator. It refers to taking the remainder of n / k. It satisfies the following two important properties:
(n + m) % k = ((n % k) + (m % k)) % k
(n * m) % k = ((n % k) * (m % k)) % k
Because of this, for any k we can think of all numbers with the same remainder as somehow being the same. The result is something called "the integers modulo k". And it satisfies most of the rules of algebra that you're used to. You have the associative property, the commutative property, distributive law, addition by 0, and multiplication by 1.
However if k is a composite number like 10, you have the unfortunate fact that 2 * 5 = 10 which means that modulo 10, 2 * 5 = 0. That's kind of a problem for division.
BUT if k = p, a prime, then things become massively easier. If (a*m) % p = (b*m) % p then ((a-b) * m) % p = 0 so (a-b) * m is divisible by p. And therefore either (a-b) or m is divisible by p.
For any non-zero remainder m, let's look at the sequence m % p, m^2 % p, m^3 % p, .... This sequence is infinitely long but can only take on p values, so we must have a repeat with a < b and m^a % p = m^b % p. So (1 * m^a) % p = (m^(b-a) * m^a) % p. Since p doesn't divide m, it doesn't divide m^a either, and therefore (by the cancellation fact above) m^(b-a) % p = 1. Furthermore m^(b-a-1) % p acts just like m^(-1) = 1/m. (If you take enough math, you'll find that the non-zero remainders under multiplication form a finite group, and all the remainders form a field. But let's ignore that.)
(I'm going to drop the % p everywhere. Just assume it is there in any calculation.)
Now let's let a be the smallest positive number such that m^a = 1. Then 1, m, m^2, ..., m^(a-1) forms a cycle of length a. For any n in 1, ..., p-1 we can form a cycle (possibly the same, possibly different) n, n*m, n*m^2, ..., n*m^(a-1). It can be shown that these cycles partition 1, 2, ..., p-1 where every number is in a cycle, and each cycle has length a. THEREFORE, a divides p-1. As a side note, since a divides p-1, we easily get Fermat's little theorem that m^(p-1) has remainder 1 and therefore m^p = m.
OK, enough theory. Now to your problem. Suppose we have a base b = 10^i. The divisibility test they are discussing is that a_0 + a_1 * b + a_2 * b^2 + ... + a_k * b^k is divisible by a prime p if and only if a_0 + a_1 + ... + a_k is divisible by p. Looking at the number (p-1) + b (its digit sum is p, which is divisible by p, but its value is b - 1 mod p), this can only happen if b % p is 1. And if b % p is 1, then in modulo arithmetic b to any power is 1, and the test works.
So we're looking for the smallest i such that 10^i % p is 1. From what I showed above, such an i always exists and divides p-1. So you just need to factor p-1 and try 10 raised to each divisor of p-1 until you find the smallest i that works.
Note that you should % p at every step you can to keep those powers from getting too big. And with repeated squaring you can speed up the calculation. So, for example, calculating 10^20 % p could be done by calculating each of the following in turn.
10 % p
10^2 % p
10^4 % p
10^5 % p
10^10 % p
10^20 % p
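A sketch of that approach in Python (the helper name is mine; pow(10, d, p) does the repeated-squaring modular exponentiation for us):
def multiplicative_order_of_10(p):
    n = p - 1
    # collect all divisors of p - 1
    divisors = sorted(d for i in range(1, int(n ** 0.5) + 1) if n % i == 0
                      for d in (i, n // i))
    # the answer is the smallest divisor d of p - 1 with 10^d = 1 (mod p)
    return next(d for d in divisors if pow(10, d, p) == 1)

print(multiplicative_order_of_10(3))    # 1
print(multiplicative_order_of_10(11))   # 2
print(multiplicative_order_of_10(19))   # 18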
This is an almost direct application of Fermat's little theorem.
First, you have to reformulate the "split decimal notation into tuples [...]"-condition into something you can work with:
to check if a number is divisible by p, you need to split its decimal notation into r-tuples of digits (starting from the right end), add up these r-tuples and check whether their sum is divisible by p
When you translate it from prose into a formula, what it essentially says is that you want
b_0 + b_1 * 10^r + b_2 * 10^(2r) + ... ≡ b_0 + b_1 + b_2 + ... (mod p)
for any choice of "r-tuples of digits" b_i from { 0, ..., 10^r - 1 } (with only finitely many b_i being non-zero).
Taking b_1 = 1 and all other b_i = 0, it's easy to see that it is necessary that
10^r ≡ 1 (mod p).
It's even easier to see that this is also sufficient (all the 10^(r*i) factors on the left-hand side simply turn into 1 and do nothing).
Now, if p is neither 2 nor 5, then 10 will not be divisible by p, so Fermat's little theorem guarantees us that
10^(p-1) ≡ 1 (mod p),
that is, at least the solution r = p - 1 exists. This might not be the smallest such r though, and computing the smallest one is hard if you don't have a quantum computer handy.
Despite it being hard in general, for very small p, you can simply use an algorithm that is linear in p (you simply look at the sequence
10 mod p
100 mod p
1000 mod p
10000 mod p
...
and stop as soon as you find something that equals 1 mod p).
Written out as code, for example, in Scala:
def blockSize(p: Int, n: Int = 10, r: Int = 1): Int =
if n % p == 1 then r else blockSize(p, n * 10 % p, r + 1)
println(blockSize(3)) // 1
println(blockSize(11)) // 2
println(blockSize(19)) // 18
or in Python:
def blockSize(p: int, n: int = 10, r: int = 1) -> int:
    # note: for p close to the upper limit this recursion gets roughly p levels deep,
    # which exceeds Python's default recursion limit; an equivalent loop avoids that
    return r if n % p == 1 else blockSize(p, n * 10 % p, r + 1)
print(blockSize(3)) # 1
print(blockSize(11)) # 2
print(blockSize(19)) # 18
A wall of numbers, just in case someone else wants to sanity-check alternative approaches:
11 -> 2
13 -> 6
17 -> 16
19 -> 18
23 -> 22
29 -> 28
31 -> 15
37 -> 3
41 -> 5
43 -> 21
47 -> 46
53 -> 13
59 -> 58
61 -> 60
67 -> 33
71 -> 35
73 -> 8
79 -> 13
83 -> 41
89 -> 44
97 -> 96
101 -> 4
103 -> 34
107 -> 53
109 -> 108
113 -> 112
127 -> 42
131 -> 130
137 -> 8
139 -> 46
149 -> 148
151 -> 75
157 -> 78
163 -> 81
167 -> 166
173 -> 43
179 -> 178
181 -> 180
191 -> 95
193 -> 192
197 -> 98
199 -> 99
Thank you andrey tyukin.
Simple facts to remember:
1. If x % y = z, then (x % y) % y is again z.
2. (x + y) % z = (x % z + y % z) % z
Keep these in mind.
You break the number into groups of r digits at a time. E.g. with r = 6, 3456733 breaks into 3 * 10^(6*1) + 456733 * 10^(6*0),
and 12536382626373 breaks into 12 * 10^(6*2) + 536382 * 10^(6*1) + 626373 * 10^(6*0).
Observe that here r is 6.
So when we say "take r digits at a time, sum the groups and apply the modulo", we are applying the modulo to the coefficients of the breakdown above.
Why does the sum of the coefficients give the whole number's remainder? Whenever 10^(6 * anything) is 1 modulo p, that term's remainder equals its coefficient's remainder; the factor 10^(r * anything) has no effect. You can check why it has no effect using facts 1 and 2 above.
And all the similar factors 10^(r * anything) also have remainder 1: if you can prove that (10^r) mod p is 1, then (10^(r * anything)) mod p is also 1.
So the important thing is that 10^r is 1 modulo p. Then every 10^(r * anything) is 1, which makes the remainder of the number equal to the remainder of the sum of its r-digit groups.
Conclusion: find the smallest r such that 10^r leaves remainder 1 when divided by the given prime.
Equivalently, the smallest 9...9 (r nines) that is divisible by the given prime determines r.
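A quick numeric check of this in Python (the example number and the modulus 13 are my own choices; 13 has r = 6, i.e. 10^6 mod 13 = 1):
n = 12536382626373
groups = [12, 536382, 626373]   # the 6-digit breakdown used above
print(n % 13)                   # same remainder ...
print(sum(groups) % 13)         # ... as the sum of the groups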

Prolog Extended Euclidian Algorithm

I have been struggling with some prolog code for several days and I couldn't find a way out of it. I am trying to write the extended Euclidean algorithm and find values p and s in :
a*p + b*s = gcd(a,b)
equation, and here is what I have tried:
common(X,X,X,_,_,_,_,_,_).
common(0,Y,Y,_,_,_,_,_,_).
common(X,0,X,_,_,_,_,_,_).
common(X,Y,_,1,0,L1,L2,SF,TF):-
append(L1,1,[H]),
append(L2,0,[A]),
SF is H ,
TF is A,
common(X,Y,_,0,1,[H],[A],SF,TF).
common(X,Y,_,0,1,L1,L2,SF,TF):-
append(L1,0,[_,S2]),
append(L2,1,[_,T2]),
Q is truncate(X/Y),
S is 1-Q*0,T is 0-Q*1 ,
common(X,Y,_,S,T,[S2,S],
[T2,T],SF,TF).
common(X,Y,N,S,T,[S1,S2],[T1,T2],SF,TF):-
Q is truncate(X/Y),
K is X-(Y*Q),
si_finder(S1,S2,Q,SF),
ti_finder(T1,T2,Q,TF),
common(Y,K,N,S,T,[S2,S],[T2,T],SF,TF).
si_finder(PP,P,Q,C):- C is PP - Q*P.
ti_finder(P2,P1,QA,C2):- C2 is P2 - QA*P1.
After a little searching I found that the s and p coefficients start from 1 and 0, and their second values are 0 and 1 respectively. Then the sequence continues in a pattern, which is what I compute in the si_finder and ti_finder predicates. The common predicates are where I tried to drive the pattern recursively. However, the common predicates keep returning false on every call. Can anyone help me implement this algorithm in Prolog?
Thanks in advance.
First let's think about the arity of the predicate. Obviously you want to have the numbers A and B as well as the Bézout coefficients P and S as arguments. Since the algorithm is calculating the GCD anyway, it is opportune to have that as an argument as well. That leaves us with arity 5. As we're talking about the extended Euclidean algorithm, let's call the predicate eeuclid/5. Next, consider an example: Let's use the algorithm to calculate P, S and GCD for A=242 and B=69:
quotient (Q) | remainder (B1)  |   P |   S
-------------+-----------------+-----+-----
             | 242             |   1 |   0
             | 69              |   0 |   1
242/69 = 3   | 242 − 3*69 = 35 |   1 |  -3
69/35 = 1    | 69 − 1*35 = 34  |  -1 |   4
35/34 = 1    | 35 − 1*34 = 1   |   2 |  -7
34/1 = 34    | 34 − 34*1 = 0   | -69 | 242
We can observe the following:
The algorithm stops if the remainder becomes 0
The line before the last row contains the GCD in the remainder column (in this example 1) and the Bézout coefficients in the P and S columns respectively (in this example 2 and -7)
The quotient is calculated from the previous two remainders. So in the next iteration A becomes B and B becomes B1.
P and S are calculated from their respective predecessors. For example: P3 = P1 - 3*P2 = 1 - 3*0 = 1 and S3 = S1 - 3*S2 = 0 - 3*1 = -3. And since it's sufficient to have the previous two P's and S's, we might as well pass them on as pairs, e.g. P1-P2 and S1-S2.
The algorithm starts with the pairs P: 1-0 and S: 0-1
The algorithm starts with the bigger number
Putting all this together, the calling predicate has to ensure that A is the bigger number and, in addition to its five arguments, it has to pass along the starting pairs 1-0 and 0-1 to the predicate describing the actual relation, here a_b_p_s_/7:
:- use_module(library(clpfd)).
eeuclid(A,B,P,S,GCD) :-
A #>= B,
GCD #= A*P + B*S, % <- new
a_b_p_s_(A,B,P,S,1-0,0-1,GCD).
eeuclid(A,B,P,S,GCD) :-
A #< B,
GCD #= A*P + B*S, % <- new
a_b_p_s_(B,A,S,P,1-0,0-1,GCD).
The first rule of a_b_p_s_/7 describes the base case, where B=0 and the algorithm stops. Then A is the GCD and P1, S1 are the Bézout coefficients. Otherwise the quotient Q, the remainder B1 and the new values for P and S are calculated and a_b_p_s_/7 is called with those new values:
a_b_p_s_(A,0,P1,S1,P1-_P2,S1-_S2,A).
a_b_p_s_(A,B,P,S,P1-P2,S1-S2,GCD) :-
B #> 0,
A #> B, % <- new
Q #= A/B,
B1 #= A mod B,
P3 #= P1-(Q*P2),
S3 #= S1-(Q*S2),
a_b_p_s_(B,B1,P,S,P2-P3,S2-S3,GCD).
Querying this with the above example yields the desired result:
?- eeuclid(242,69,P,S,GCD).
P = 2,
S = -7,
GCD = 1 ;
false.
And indeed: gcd(242,69) = 1 = 2*242 − 7*69
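For readers who want to cross-check the table logic outside Prolog, here is the same iteration as a small Python sketch (the function and variable names are mine; it assumes non-negative inputs):
def eeuclid(a, b):
    p1, p2, s1, s2 = 1, 0, 0, 1          # the two starting pairs from the table
    while b != 0:
        q = a // b
        a, b = b, a - q * b              # remainder step
        p1, p2 = p2, p1 - q * p2         # P is calculated from its two predecessors
        s1, s2 = s2, s1 - q * s2         # same for S
    return p1, s1, a                     # Bezout coefficients and gcd

print(eeuclid(242, 69))                  # (2, -7, 1), i.e. 2*242 - 7*69 = 1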
EDIT: On second thought I would suggest adding two constraints: firstly Bézout's identity before calling a_b_p_s_/7, and secondly A #> B after the first goal of a_b_p_s_/7. I edited the predicates above and marked the new goals. These additions make eeuclid/5 more versatile. For example, you could ask which numbers A and B have the Bézout coefficients 2 and -7 and 1 as the gcd. There is no unique answer to this query, and Prolog will give you residual goals for every potential solution. However, you can ask for a limited range for A and B, say between 0 and 50, and then use label/1 to get actual numbers:
?- [A,B] ins 0..50, eeuclid(A,B,2,-7,1), label([A,B]).
A = 18,
B = 5 ;
A = 25,
B = 7 ;
A = 32,
B = 9 ;
A = 39,
B = 11 ;
A = 46,
B = 13 ;
false. % <- previously loop here
Without the newly added constraints the query would not terminate after the fifth solution. However, with the new constraints Prolog is able to determine, that there are no more solutions between 0 and 50.

Algorithm in hardware to find out if number is divisible by five

I am trying to think of an algorithm to implement this for a given n bit binary number. I tried out many examples, but am unable to find out any pattern. So how shall I proceed?
How about this:
Convert the number to base 4 (this is trivial by simply combining pairs of bits). 5 in base 4 is 11. The values base 4 that are divisible by 11 are somewhat familiar: 11, 22, 33, 110, 121, 132, 203, ...
The rule for divisibility by 11 is that you add all the odd digits and all the even digits and subtract one from the other. If the result is divisible by 11 (which remember is 5), then it's divisible by 11 (which remember is 5).
For example:
123456d = 1 1110 0010 0100 0000b = 132021000_4
The even digits are 1 2 2 0 0 : sum = 5d
The odd digits are 3 0 1 0 : sum = 4d
Difference is 1, which is not divisible by 5
Or another one:
123455d = 1 1110 0010 0011 1111b = 132020333_4
The even digits are 1 2 2 3 3 : sum = 11d
The odd digits are 3 0 0 3 : sum = 6d
Difference is 5, which is divisible by 5
This should have a fairly efficient HW implementation because it's mostly bit-slicing, followed by N/2 adders, where N is the number of bits in the number you're interested in.
Note that after adding the digits and subtracting, the maximum value is 3/4 * N, so if you have 16-bit numbers max, you can get at most 12 as a result, so you only need to check for 0, ±5 and ±10 explicitly. If you're using 32-bit numbers then you can get at most 24 as a result, so you need to also check if the result is ±15 or ±20.
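Before committing to hardware, the rule is easy to sanity-check in software. A small Python model of the base-4 alternating-sum idea (the helper name is mine):
def divisible_by_5(n):
    digits = []
    while n:
        digits.append(n & 3)    # low two bits = one base-4 digit
        n >>= 2
    # alternating sum of base-4 digits, like the decimal rule for 11
    diff = sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))
    return diff % 5 == 0

print([k for k in range(30) if divisible_by_5(k)])   # [0, 5, 10, 15, 20, 25]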
Make a Deterministic Finite Automaton (DFA) to implement the divisibility check and implement the DFA in hardware.
Creating a DFA for divisibility by 5 is easy. You just need to notice the remainders and check what 2r (mod 5) and 2r + 1(mod 5) map to. There are many websites that discuss this. For example this one.
There are well-known examples to convert DFA to a hardware representation as well.
Well, I just figured it out...
number mod 5 = (a0 * 2^0 + a1 * 2^1 + a2 * 2^2 + a3 * 2^3 + a4 * 2^4 + ...) mod 5
= (a0 * (1) + a1 * (2) + a2 * (-1) + a3 * (-2) + a4 * (1) + ...) mod 5, where the weights 1, 2, -1, -2 repeat.
Hence (alternating difference of the bits a0, a2, a4, ...) + 2 * (alternating difference of the bits a1, a3, a5, ...) must be divisible by 5.
For example, consider 110010:
difference over a0, a2, a4: 0 - 0 + 1 = 1, or 01 in binary
difference over a1, a3, a5: 1 - 0 + 1 = 2, or 10 in binary
first difference + 2 * second difference = 01 + 2*(10) = 01 + 100 = 101, i.e. 5, which is divisible by 5.
The contribution of each bit toward divisibility by five follows the repeating four-bit weight pattern 3, 4, 2, 1 (these are 2^3, 2^2, 2^1, 2^0 mod 5; the pattern repeats every four bits because 2^4 mod 5 = 1).
You can shift through any binary number 4 bits at a time, adding the corresponding weight for each set bit.
Example:
100011
take the low four bits 0011
apply the weights: 2 + 1
sum 3
next four bits 0010
apply the weights: 2
running sum = 5, which is divisible by 5 (100011b = 35)
We can design a Deterministic Finite Automaton (DFA) for the same. The DFA, then can be implemented in Hardware. This is similar to this answer.
We will simulate a Deterministic Finite Automaton (DFA) that accepts Binary Representation of Integers which are divisible by 5
Now, by accept, we mean that when we are done with scanning string, we should be in one of the multiple possible Final States.
Approach to Design DFA : Essentially, we need to divide the Binary Representation of Integer by 5, and track the remainder. If after consuming/scanning [From Left to Right] the entire string, remainder is Zero, then we should end up in Final State, and if remainder isn't zero we should be in Non-Final States.
Now, DFA is defined by Quintuple/5-Tuple (Q,q₀,F,Σ,δ). We will obtain these five components step-by-step.
Q : Finite Set of States
We need to track remainder. On dividing any integer by 5, we can get remainder as 0,1, 2, 3 or 4. Hence, we will have Five States Z, O, T, Th and F for each possible remainder.
Q={Z, O, T, Th, F}
If after scanning certain part of Binary String, we are in state Z, this means that integer defined from Left to this part will give remainder Zero when divided by 5. Similarly, O for remainder One, and so on.
Now, we can write these five states by the Euclidean division algorithm as
Z : 5m
O : 5m+1
T : 5m+2
Th : 5m+3
F : 5m+4
where m is Integer.
q₀ : an initial/start state from set Q
Now, start state can be thought in terms of empty string (ɛ). An ɛ directly gets into q₀.
What remainder does ɛ give when divided by 5?
We can append as many 0s in left hand side of a Binary Number. In the similar fashion, we can append ɛ in left hand side of a Binary String. Thus, ɛ in left can be thought of as 0. And 0 when divided by 5 gives remainder 0. Hence, ɛ should end in State Z. But ɛ ends up in q₀.
Thus, q₀=Z
F : a set of accept states
Now we want all strings which are divisible by 5, or which gives remainder 0 when divided by 5, or which after complete scanning should end up in state Z, and gets accepted.
Hence,
F={Z}
Σ : Alphabet (a finite set of input symbols)
Since we are scanning/reading a Binary String. Hence,
Σ={0,1}
δ : Transition Function (δ : Q × Σ → Q)
Now this δ tells us that if we are in state x (in Q) and next input to be scanned is y (in Σ), then at which state z (in Q) should we go.
For example: if the string up to this point gives remainder 3 (state Th) when divided by 5, and we append a 1, what remainder does the resulting string give?
This can be analyzed by observing how the magnitude of a binary string changes on appending 0 and 1.
a.
In decimal (base 10), if we add/append 0, the magnitude gets multiplied by 10. Example: 53, on appending 0, becomes 530.
Also, if we append the digit 8, the magnitude gets multiplied by 10 and then 8 is added to the multiplied magnitude.
b.
In Binary (Base-2), if we add/append 0, then magnitude gets multiplied by 2 (The Positional Weight of each Bit get multiplied by 2)
Example : (1010)2 [which is (10)10], on appending 0 it becomes (10100)2 [which is (20)10]
Similarly, In Binary, if we append 1, then Magnitude gets multiplied by 2, and then we add 1.
Example : (10)2 [which is (2)10], on appending 1 it becomes (101)2 [which is (5)10]
Thus, we can say that for a binary string x (writing |x| for its value),
|x0| = 2|x|
|x1| = 2|x| + 1
We will use these relation to analyze Five States
Any string in Z can be written as 5m
- On 0, it becomes 2(5m), which is 5(2m), nothing but state Z.
- On 1, it becomes 2(5m)+1, which is 5(2m)+1, that is O. [This can be read as if a Binary String is presently divisible by 5, and we append 1, then resultant string will give remainder as 1]
Any string in O can be written as 5m+1
- On 0, it becomes 2(5m+1) = 10m+2, which is 5(2m)+2, state T.
- On 1, it becomes 2(5m+1)+1 = 10m+3, which is 5(2m)+3, that is state Th.
Any string in T can be written as 5m+2
- On 0, it becomes 2(5m+2) = 10m+4, which is 5(2m)+4, state F.
- On 1, it becomes 2(5m+2)+1 = 10m+5, which is 5(2m+1), state Z. [If m is integer, so is (2m+1)]
Any string in Th can be written as 5m+3
- On 0, it becomes 2(5m+3) = 10m+6, which is 5(2m+1)+1, state O.
- On 1, it becomes 2(5m+3)+1 = 10m+7, which is 5(2m+1)+2, that is state T.
Any string in F can be written as 5m+4
- On 0, it becomes 2(5m+4) = 10m+8, which is 5(2m+1)+3, state Th.
- On 1, it becomes 2(5m+4)+1 = 10m+9, which is 5(2m+1)+4, that is state F.
Hence, the final DFA combines everything above (the state diagram, originally drawn with a DFA tool, is omitted here).
We can even write code [in High Level Language] for the same. But it would go beyond main aim of this question. If readers wish to see the same, they can check here.
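For readers who just want to sanity-check the transition table before building hardware, here is a minimal simulation of the DFA (a Python sketch of my own, not the code linked above):
# state = remainder so far; reading bit b maps remainder r to (2*r + b) mod 5
DELTA = {(r, b): (2 * r + b) % 5 for r in range(5) for b in (0, 1)}

def accepts(bits):
    state = 0                    # q0 = Z (remainder 0)
    for ch in bits:
        state = DELTA[(state, int(ch))]
    return state == 0            # F = {Z}

print([n for n in range(30) if accepts(format(n, "b"))])   # [0, 5, 10, 15, 20, 25]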
As any assignment this would have been an answer for is bound to be way overdue a year later:
a natural number is divisible by five exactly when (count of set bits at positions 4n) minus (count of set bits at positions 4n+2), plus twice the difference (count at positions 4n+1) minus (count at positions 4n+3), is divisible by five.
(This is entirely equivalent to the answers of JoshG79, notsogeek, or james: 4 ≡ -1 (mod 5), 8 ≡ 3 ≡ -2 (mod 5), with less hand-waving about recursion in the argumentation, and no dispensable handling of carries in circuitry.)

Tennis match scheduling

There are a limited number of players and a limited number of tennis courts. At each round, there can be at most as many matches as there are courts.
Nobody plays 2 rounds without a break. Everyone plays a match against everyone else.
Produce the schedule that takes as few rounds as possible. (Because of the rule that there must be a break between rounds for everyone, there can be a round without matches.)
The output for 5 players and 2 courts could be:
| 1 2 3 4 5
-|-------------------
2| 1 -
3| 5 3 -
4| 7 9 1 -
5| 3 7 9 5 -
In this output the columns and rows are the player numbers, and the numbers inside the matrix are the rounds in which these two players compete.
The problem is to find an algorithm which can do this for larger instances in a feasible time. We were asked to do this in Prolog, but (pseudo-) code in any language would be useful.
My first try was a greedy algorithm, but that gives results with too many rounds.
Then I suggested an iterative deepening depth-first search, which a friend of mine implemented, but that still took too much time on instances as small as 7 players.
(This is from an old exam question. No one I spoke to had any solution.)
Preface
In Prolog, CLP(FD) constraints are the right choice for solving such scheduling tasks.
See clpfd for more information.
In this case, I suggest using the powerful global_cardinality/2 constraint to restrict the number of occurrences of each round, depending on the number of available courts. We can use iterative deepening to find the minimal number of admissible rounds.
Freely available Prolog systems suffice to solve the task satisfactorily. Commercial-grade systems will run dozens of times faster.
Variant 1: Solution with SWI-Prolog
:- use_module(library(clpfd)).
tennis(N, Courts, Rows) :-
length(Rows, N),
maplist(same_length(Rows), Rows),
transpose(Rows, Rows),
Rows = [[_|First]|_],
chain(First, #<),
length(_, MaxRounds),
numlist(1, MaxRounds, Rounds),
pairs_keys_values(Pairs, Rounds, Counts),
Counts ins 0..Courts,
foldl(triangle, Rows, Vss, Dss, 0, _),
append(Vss, Vs),
global_cardinality(Vs, Pairs),
maplist(breaks, Dss),
labeling([ff], Vs).
triangle(Row, Vs, Ds, N0, N) :-
length(Prefix, N0),
append(Prefix, [-|Vs], Row),
append(Prefix, Vs, Ds),
N #= N0 + 1.
breaks([]).
breaks([P|Ps]) :- maplist(breaks_(P), Ps), breaks(Ps).
breaks_(P0, P) :- abs(P0-P) #> 1.
Sample query: 5 players on 2 courts:
?- time(tennis(5, 2, Rows)), maplist(writeln, Rows).
% 827,838 inferences, 0.257 CPU in 0.270 seconds (95% CPU, 3223518 Lips)
[-,1,3,5,7]
[1,-,5,7,9]
[3,5,-,9,1]
[5,7,9,-,3]
[7,9,1,3,-]
The specified task, 6 players on 2 courts, solved well within the time limit of 1 minute:
?- time(tennis(6, 2, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% 6,675,665 inferences, 0.970 CPU in 0.977 seconds (99% CPU, 6884940 Lips)
- 1 3 5 7 10
1 - 6 9 11 3
3 6 - 11 9 1
5 9 11 - 2 7
7 11 9 2 - 5
10 3 1 7 5 -
Further example: 7 players on 5 courts:
?- time(tennis(7, 5, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% 125,581,090 inferences, 17.476 CPU in 18.208 seconds (96% CPU, 7185927 Lips)
- 1 3 5 7 9 11
1 - 5 3 11 13 9
3 5 - 9 1 7 13
5 3 9 - 13 11 7
7 11 1 13 - 5 3
9 13 7 11 5 - 1
11 9 13 7 3 1 -
Variant 2: Solution with SICStus Prolog
With the following additional definitions for compatibility, the same program also runs in SICStus Prolog:
:- use_module(library(lists)).
:- use_module(library(between)).
:- op(700, xfx, ins).
Vs ins D :- maplist(in_(D), Vs).
in_(D, V) :- V in D.
chain([], _).
chain([L|Ls], Pred) :-
chain_(Ls, L, Pred).
chain_([], _, _).
chain_([L|Ls], Prev, Pred) :-
call(Pred, Prev, L),
chain_(Ls, L, Pred).
pairs_keys_values(Ps, Ks, Vs) :- keys_and_values(Ps, Ks, Vs).
foldl(Pred, Ls1, Ls2, Ls3, S0, S) :-
foldl_(Ls1, Ls2, Ls3, Pred, S0, S).
foldl_([], [], [], _, S, S).
foldl_([L1|Ls1], [L2|Ls2], [L3|Ls3], Pred, S0, S) :-
call(Pred, L1, L2, L3, S0, S1),
foldl_(Ls1, Ls2, Ls3, Pred, S1, S).
time(Goal) :-
statistics(runtime, [T0|_]),
call(Goal),
statistics(runtime, [T1|_]),
T #= T1 - T0,
format("% Runtime: ~Dms\n", [T]).
Major difference: SICStus, being a commercial-grade Prolog that ships with a serious CLP(FD) system, is much faster than SWI-Prolog in this use case and others like it.
The specified task, 6 players on 2 courts:
?- time(tennis(6, 2, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% Runtime: 34ms (!)
- 1 3 5 7 10
1 - 6 11 9 3
3 6 - 9 11 1
5 11 9 - 2 7
7 9 11 2 - 5
10 3 1 7 5 -
The larger example:
| ?- time(tennis(7, 5, Rows)),
maplist(format("~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+~t~w~3+\n"), Rows).
% Runtime: 884ms
- 1 3 5 7 9 11
1 - 5 3 9 7 13
3 5 - 1 11 13 7
5 3 1 - 13 11 9
7 9 11 13 - 3 1
9 7 13 11 3 - 5
11 13 7 9 1 5 -
Closing remarks
In both systems, global_cardinality/3 allows you to specify options that alter the propagation strength of the global cardinality constraint, enabling weaker and potentially more efficient filtering. Choosing the right options for a specific example may have an even larger impact than the choice of Prolog system.
This is very similar to the Traveling Tournament Problem, which is about scheduling football teams. In the TTP, they can find the optimal solution only up to 8 teams. Anyone who breaks an ongoing record of 10 or more teams has a much easier time getting published in a research journal.
It is NP-hard and the trick is to use meta-heuristics, such as tabu search, simulated annealing, ..., instead of brute force or branch and bound.
Take a look at my implementation with Drools Planner (open source, Java).
Here are the constraints; it should be straightforward to replace them with constraints such as "Nobody plays 2 rounds without a break".
Each player must play exactly n - 1 matches, where n is the number of players. Since every player needs to rest between their own matches, those matches span at least 2(n - 1) - 1 rounds. The number of rounds is also bounded from below by the total number of matches, n(n - 1)/2, divided by the number of courts (rounded up). The larger of these two values is a lower bound on the length of the optimal solution. Then it's a matter of coming up with a good admissible estimate, something like (matches + rests remaining) / courts, and running an A* search.
As Geoffrey said, I believe the problem is NP-hard, but heuristic search such as A* is very applicable.
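For what it's worth, the two lower bounds mentioned above are easy to compute (a Python sketch with my own function name; note the bound is not always attained, e.g. the 5-player/2-court schedule shown earlier uses 9 rounds):
def round_lower_bound(players, courts):
    by_rest = 2 * (players - 1) - 1        # each player rests between own matches
    matches = players * (players - 1) // 2
    by_courts = -(-matches // courts)      # ceil(total matches / courts)
    return max(by_rest, by_courts)

print(round_lower_bound(5, 2))   # 7 (the schedule shown earlier needs 9 rounds)
print(round_lower_bound(6, 2))   # 9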
Python Solution:
import itertools
def subsets(items, count = None):
if count is None:
count = len(items)
for idx in range(count + 1):
for group in itertools.combinations(items, idx):
yield frozenset(group)
def to_players(games):
return [game[0] for game in games] + [game[1] for game in games]
def rounds(games, court_count):
for round in subsets(games, court_count):
players = to_players(round)
if len(set(players)) == len(players):
yield round
def is_canonical(player_count, games_played):
played = [0] * player_count
for players in games_played:
for player in players:
played[player] += 1
return sorted(played) == played
def solve(court_count, player_count):
courts = range(court_count)
players = range(player_count)
games = list( itertools.combinations(players, 2) )
possible_rounds = list( rounds(games, court_count) )
rounds_last = {}
rounds_all = {}
choices_last = {}
choices_all = {}
def update(target, choices, name, value, choice):
try:
current = target[name]
except KeyError:
target[name] = value
choices[name] = choice
else:
if current > value:
target[name] = value
choices[name] = choice
def solution(games_played, players, score, choice, last_players):
games_played = frozenset(games_played)
players = frozenset(players)
choice = (choice, last_players)
update(rounds_last.setdefault(games_played, {}),
choices_last.setdefault(games_played, {}),
players, score, choice)
update(rounds_all, choices_all, games_played, score, choice)
solution( [], [], 0, None, None)
for games_played in subsets(games):
if is_canonical(player_count, games_played):
try:
best = rounds_all[games_played]
except KeyError:
pass
else:
for next_round in possible_rounds:
next_games_played = games_played.union(next_round)
solution(
next_games_played, to_players(next_round), best + 2,
next_round, [])
for last_players, score in rounds_last[games_played].items():
for next_round in possible_rounds:
if not last_players.intersection( to_players(next_round) ):
next_games_played = games_played.union(next_round)
solution( next_games_played, to_players(next_round), score + 1,
next_round, last_players)
all_games = frozenset(games)
print rounds_all[ all_games ]
round, prev = choices_all[ frozenset(games) ]
while all_games:
print "X ", list(round)
all_games = all_games - round
if not all_games:
break
round, prev = choices_last[all_games][ frozenset(prev) ]
solve(2, 6)
Output:
11
X [(1, 2), (0, 3)]
X [(4, 5)]
X [(1, 3), (0, 2)]
X []
X [(0, 5), (1, 4)]
X [(2, 3)]
X [(1, 5), (0, 4)]
X []
X [(2, 5), (3, 4)]
X [(0, 1)]
X [(2, 4), (3, 5)]
This means it will take 11 rounds. The list shows the games to be played in the rounds in reverse order. (Although I think the same schedule works forwards and backwards.)
I'll come back and explain why when I have the chance.
Gets incorrect answers for one court, five players.
Some thoughts, perhaps a solution...
Expanding the problem to X players and Y courts, I think we can safely say that, when given the choice, we must select the players with the fewest completed matches; otherwise we run the risk of ending up with one player left who can only play every other round, and we end up with many empty rounds in between. Picture the case with 20 players and 3 courts. We can see that during round 1 players 1-6 meet, then in round 2 players 7-12 meet, and in round 3 we could re-use players 1-6, leaving players 13-20 until later. Therefore, I think our solution cannot be greedy and must balance the players.
With that assumption, here is a first attempt at a solution:
1. Create master-list of all matches ([12][13][14][15][16][23][24]...[45][56].)
2. While (master-list > 0) {
3. Create sub-list containing only eligible players (eliminate all players who played the previous round.)
4. While (available-courts > 0) {
5. Select match from sub-list where player1.games_remaining plus player2.games_remaining is maximized.
6. Place selected match in next available court, and
7. decrement available-courts.
8. Remove selected match from master-list.
9. Remove all matches that contain either player1 or player2 from sub-list.
10. } Next available-court
11. Print schedule for ++Round.
12. } Next master-list
I can't prove that this will produce a schedule with the fewest rounds, but it should be close. The step that may cause problems is #5 (select match that maximizes player's games remaining.) I can imagine that there might be a case where it's better to pick a match that almost maximizes 'games_remaining' in order to leave more options in the following round.
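A rough Python transcription of this greedy procedure (my own sketch, following steps 1-12 above; as noted, it is not guaranteed to be optimal):
import itertools

def greedy_schedule(players, courts):
    remaining = {frozenset(m) for m in itertools.combinations(range(1, players + 1), 2)}
    games_left = {p: players - 1 for p in range(1, players + 1)}
    last_round_players = set()
    schedule = []
    while remaining:
        busy = set()
        this_round = []
        # step 3: only matches whose players did not play the previous round
        eligible = [m for m in remaining if not (m & last_round_players)]
        while len(this_round) < courts and eligible:
            # step 5: pick the match maximizing the players' remaining game counts
            match = max(eligible, key=lambda m: sum(games_left[p] for p in m))
            this_round.append(sorted(match))
            remaining.discard(match)
            busy |= match
            for p in match:
                games_left[p] -= 1
            # step 9: drop matches sharing a player with this round
            eligible = [m for m in eligible if not (m & busy)]
        schedule.append(this_round)          # may be an empty round
        last_round_players = busy
    return schedule

for rnd, matches in enumerate(greedy_schedule(5, 2), 1):
    print(rnd, matches)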
The output from this algorithm would look something like:
Round Court1 Court2
1 [12] [34]
2 [56] --
3 [13] [24]
4 -- --
5 [15] [26]
6 -- --
7 [35] [46]
. . .
Close inspection will show that in round 5, if the match on Court2 had been [23] then match [46] could have been played during round 6. That, however, doesn't guarantee that there won't be a similar issue in a later round.
I'm working on another solution, but that will have to wait for later.
I don't know if this matters, the "5 Players and 2 Courts" example data is missing three other matches: [1,3], [2,4] and [3,5]. Based on the instruction: "Everyone plays a match against everyone else."

How can I take the modulus of two very large numbers?

I need an algorithm for A mod B with
A is a very big integer and it contains digit 1 only (ex: 1111, 1111111111111111)
B is a very big integer (ex: 1231, 1231231823127312918923)
By big, I mean 1000 digits.
To compute a number mod n, given a function that returns the quotient and remainder when dividing by (n+1), start by adding one to the number. Then, as long as the number is bigger than n, iterate:
number = (number div (n+1)) + (number mod (n+1))
Finally, at the end, subtract one. An alternative to adding one at the beginning and subtracting one at the end is checking whether the result equals n and returning zero if so.
For example, given a function to divide by ten, one can compute 12345678 mod 9 thusly:
12345679 -> 1234567 + 9
1234576 -> 123457 + 6
123463 -> 12346 + 3
12349 -> 1234 + 9
1243 -> 124 + 3
127 -> 12 + 7
19 -> 1 + 9
10 -> 1
Subtract 1, and the result is zero.
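A direct transcription of this trick in Python (illustrative only; a big-integer language's built-in % would of course do the job directly):
def mod_via_div(a, n):
    a += 1                         # shift so a result of exactly n can't get stuck
    while a > n:
        q, r = divmod(a, n + 1)    # n+1 = 1 (mod n), so q + r preserves a mod n
        a = q + r
    return a - 1

print(mod_via_div(12345678, 9))     # 0
print(mod_via_div(11111111, 1231))  # same as 11111111 % 1231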
1000 digits isn't really big, use any big integer library to get rather fast results.
If you really worry about performance, A can be written as 111...1 = (10^n - 1)/9 for some n, so computing A mod B can be reduced to computing ((10^n - 1) mod (9*B)) / 9, and you can do that faster.
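A sketch of that reduction in Python (n is the number of 1-digits in A; pow does the fast modular exponentiation):
def repunit_mod(n, b):
    m = pow(10, n, 9 * b) - 1     # (10^n - 1) mod 9B; this is a multiple of 9
    return m // 9                 # so dividing by 9 gives A mod B

print(repunit_mod(16, 1231))      # same as int("1" * 16) % 1231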
Try Montgomery reduction on how to find modulo on large numbers - http://en.wikipedia.org/wiki/Montgomery_reduction
1) Just find a language or package that does arbitrary precision arithmetic - in my case I'd try java.math.BigDecimal.
2) If you are doing this yourself, you can avoid having to do division by using doubling and subtraction. E.g. 10 mod 3 = 10 - 3 - 3 - 3 = 1 (repeatedly subtracting 3 until you can't any more) - which is incredibly slow, so double 3 until it is just smaller than 10 (e.g. to 6), subtract to leave 4, and repeat.
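A sketch of the doubling-and-subtracting idea in Python (essentially binary long division where we keep only the remainder):
def mod_by_doubling(a, b):
    while a >= b:
        d = b
        while d * 2 <= a:    # double the divisor while it still fits
            d *= 2
        a -= d               # subtract the largest doubled copy
    return a

print(mod_by_doubling(10, 3))                   # 1
print(mod_by_doubling(1111111111111111, 1231))  # same as 1111111111111111 % 1231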
