Prolog Extended Euclidean Algorithm - prolog

I have been struggling with some Prolog code for several days and I couldn't find a way out of it. I am trying to write the extended Euclidean algorithm and find the values p and s in the equation:
a*p + b*s = gcd(a,b)
Here is what I have tried:
common(X,X,X,_,_,_,_,_,_).
common(0,Y,Y,_,_,_,_,_,_).
common(X,0,X,_,_,_,_,_,_).
common(X,Y,_,1,0,L1,L2,SF,TF):-
    append(L1,1,[H]),
    append(L2,0,[A]),
    SF is H,
    TF is A,
    common(X,Y,_,0,1,[H],[A],SF,TF).
common(X,Y,_,0,1,L1,L2,SF,TF):-
    append(L1,0,[_,S2]),
    append(L2,1,[_,T2]),
    Q is truncate(X/Y),
    S is 1-Q*0, T is 0-Q*1,
    common(X,Y,_,S,T,[S2,S],[T2,T],SF,TF).
common(X,Y,N,S,T,[S1,S2],[T1,T2],SF,TF):-
    Q is truncate(X/Y),
    K is X-(Y*Q),
    si_finder(S1,S2,Q,SF),
    ti_finder(T1,T2,Q,TF),
    common(Y,K,N,S,T,[S2,S],[T2,T],SF,TF).

si_finder(PP,P,Q,C):- C is PP - Q*P.
ti_finder(P2,P1,QA,C2):- C2 is P2 - QA*P1.
After a little searching I found that the s and p coefficients start from 1 and 0, and their second values are 0 and 1 respectively. Then they continue in a pattern, which is what I have computed in the si_finder and ti_finder predicates. The common predicates are where I tried to control the pattern recursively. However, the common predicate keeps returning false on every call. Can anyone help me implement this algorithm in Prolog?
Thanks in advance.

First let's think about the arity of the predicate. Obviously you want to have the numbers A and B as well as the Bézout coefficients P and S as arguments. Since the algorithm is calculating the GCD anyway, it is opportune to have that as an argument as well. That leaves us with arity 5. As we're talking about the extended Euclidean algorithm, let's call the predicate eeuclid/5. Next, consider an example: let's use the algorithm to calculate P, S and GCD for A=242 and B=69:
quotient (Q)   | remainder (B1)   |   P |   S
---------------+------------------+-----+-----
               | 242              |   1 |   0
               | 69               |   0 |   1
242/69 = 3     | 242 − 3*69 = 35  |   1 |  -3
69/35 = 1      | 69 − 1*35 = 34   |  -1 |   4
35/34 = 1      | 35 − 1*34 = 1    |   2 |  -7
34/1 = 34      | 34 − 34*1 = 0    | -69 | 242
We can observe the following (see the Python sketch after this list):
- The algorithm stops when the remainder becomes 0.
- The line before the last row contains the GCD in the remainder column (in this example 1) and the Bézout coefficients in the P and S columns respectively (in this example 2 and -7).
- The quotient is calculated from the previous two remainders. So in the next iteration A becomes B and B becomes B1.
- P and S are calculated from their respective predecessors. For example: P3 = P1 - 3*P2 = 1 - 3*0 = 1 and S3 = S1 - 3*S2 = 0 - 3*1 = -3. And since it's sufficient to have the previous two P's and S's, we might as well pass them along as pairs, e.g. P1-P2 and S1-S2.
- The algorithm starts with the pairs P: 1-0 and S: 0-1.
- The algorithm starts with the bigger number.
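Before moving on to the Prolog code, here is a minimal Python sketch of the same table-driven iteration, just to make these observations concrete (the function name and layout are mine, not part of the answer):

def ext_euclid(a, b):
    # Returns (gcd, p, s) with a*p + b*s == gcd, assuming a >= b >= 0.
    p1, p2 = 1, 0   # the pair P: 1-0
    s1, s2 = 0, 1   # the pair S: 0-1
    while b != 0:
        q = a // b                  # quotient column
        a, b = b, a - q * b         # remainder column
        p1, p2 = p2, p1 - q * p2    # P column
        s1, s2 = s2, s1 - q * s2    # S column
    return a, p1, s1

print(ext_euclid(242, 69))  # (1, 2, -7)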
Putting all this together, the calling predicate has to ensure that A is the bigger number and, in addition to its five arguments, it has to pass along the starting pairs 1-0 and 0-1 to the predicate describing the actual relation, here a_b_p_s_/7:
:- use_module(library(clpfd)).

eeuclid(A,B,P,S,GCD) :-
   A #>= B,
   GCD #= A*P + B*S,   % <- new
   a_b_p_s_(A,B,P,S,1-0,0-1,GCD).
eeuclid(A,B,P,S,GCD) :-
   A #< B,
   GCD #= A*P + B*S,   % <- new
   a_b_p_s_(B,A,S,P,1-0,0-1,GCD).
The first rule of a_b_p_s_/7 describes the base case, where B=0 and the algorithm stops. Then A is the GCD and P1, S1 are the Bézout coefficients. Otherwise the quotient Q, the remainder B1 and the new values for P and S are calculated and a_b_p_s_/7 is called with those new values:
a_b_p_s_(A,0,P1,S1,P1-_P2,S1-_S2,A).
a_b_p_s_(A,B,P,S,P1-P2,S1-S2,GCD) :-
   B #> 0,
   A #> B,             % <- new
   Q #= A/B,
   B1 #= A mod B,
   P3 #= P1-(Q*P2),
   S3 #= S1-(Q*S2),
   a_b_p_s_(B,B1,P,S,P2-P3,S2-S3,GCD).
Querying this with the above example yields the desired result:
?- eeuclid(242,69,P,S,GCD).
P = 2,
S = -7,
GCD = 1 ;
false.
And indeed: gcd(242,69) = 1 = 2*242 − 7*69
EDIT: On second thought, I would suggest adding two constraints: firstly Bézout's identity before calling a_b_p_s_/7, and secondly A #> B after the first goal of a_b_p_s_/7. I have edited the predicates above and marked the new goals. These additions make eeuclid/5 more versatile. For example, you could ask which numbers A and B have the Bézout coefficients 2 and -7 and 1 as the GCD. There is no unique answer to this query and Prolog will give you residual goals for every potential solution. However, you can ask for a limited range for A and B, say between 0 and 50, and then use label/1 to get actual numbers:
?- [A,B] ins 0..50, eeuclid(A,B,2,-7,1), label([A,B]).
A = 18,
B = 5 ;
A = 25,
B = 7 ;
A = 32,
B = 9 ;
A = 39,
B = 11 ;
A = 46,
B = 13 ;
false. % <- previously loop here
Without the newly added constraints the query would not terminate after the fifth solution. However, with the new constraints Prolog is able to determine, that there are no more solutions between 0 and 50.

Related

Fast calculation of probability distribution in board game Da Vinci Code

I'm interested in efficiently calculating the probability distribution over possible secret numbers given what one can observe of the opponents' hand (and your own hand) in the board game Da Vinci Code. A link to the game here: https://boardgamegeek.com/boardgame/8946/da-vinci-code
I have abstracted the problem into the following:
You are given an array A of length N and a finite set of numbers Si for each index i of the array. Now,
we are to place a number from Si at each index i to fill the entire array A;
while ensuring that the number is unique across the entire array A;
and for 3 disjoint subarrays A1, A2, A3 of A such that concat(A1, A2, A3) = A, the numbers in each subarray must follow a strictly increasing order;
given all the possible numbers to form A that satisfy the above constraints, what is the probability distribution over each number at each index?
Here I provide an example below:
Assuming we have the following array of length 5 with each column representing Si at the index of the column
| 6 6 | 6 6 | 6 |
|   5 |   5 |   |
| 4 4 |     | 4 |
|     | 3 3 |   |
| 2   | 2 2 |   |
| 1 1 |     |   |
| ___ | ___ | _ |
|  A1 |  A2 | A3|
The set of all possible arrays are:
14236
14256
14356
15234
15236
15264
15364
16234
16254
16354
24356
25364
26354
45236
Therefore the probability distribution over each number [1-6] at each index is:
6    0      4/14    0      3/14   6/14
5    0      6/14    0      6/14    0
4   1/14    4/14    0       0     8/14
3    0       0     6/14    5/14    0
2   3/14     0     8/14     0      0
1  10/14     0      0       0      0
   _____________   _____________  _____
         A1              A2        A3
Brute forcing this problem is obviously doable but I have a gut feeling that there must be some more efficient algorithms for this.
The reason why I think so is due to the fact that one can derive the probability distribution from the set of all possibilities but not the other way around, so the distribution itself must contain less information than the set of all possibilities have. Therefore, I believe that we do not need to generate all possibilites just to obtain the probability distribution.
Hence, I am wondering if there is any smart matrix operation we could use for this problem or even fixed-point iteration/density evolution to approximate the end probability distribution? Some other potentially more efficient approaches to this problem are also appreciated.
Edit: By brute force, I mean specifically enumerating all possibilities with constraint propagation like in Sudoku. My hope is to obtain an accurate solution, or an approximate solution that approximates well (better than plain Monte Carlo), and that works better than CP in terms of running time.
Edit2: The better solution I desire should have the characteristic that it does not need to generate all possibilities to obtain or approximate the probability distribution.
Did you consider Constraint Propagation?
When you assign a number to a position, that number cannot appear in any other position, so exclude that number from the remaining positions
When you assign a number in the first column of a subarray, the second column must contain a larger value, so exclude all values that are lower or equal
With a brute-force approach in your example the code would generate and check 4 * 4 * 3 * 4 * 2 = 384 possibilities; with the CP approach we only generate 65 possibilities.
Here is a sample Python implementation:
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DaVinci:
    grid : List[List[int]]                  # candidate sets Si, one list per index
    top : int                               # largest allowed number
    lastcol : int = 0
    solved : List = field(default_factory=list)
    count : int = 0
    distrib : List[Dict[int,int]] = field(init=False)

    def __post_init__(self):
        self.lastcol = len(self.grid) - 1
        self.distrib = [{x: 0 for x in range(1, self.top + 1)} for y in range(len(self.grid))]
        self.solve_next(current=0, even=True, blocked=[], minval=0, solving=[])
        self.count = len(self.solved)

    def solve_next(self, current, even, blocked, minval, solving):
        # blocked: numbers already used; minval: lower bound inside a subarray
        for n in self.grid[current]:
            if n not in blocked and n > minval:
                if current != self.lastcol:
                    self.solve_next(current + 1, not even, blocked + [n], n * even, solving + [n])
                else:
                    for col in range(self.lastcol):
                        self.distrib[col][solving[col]] += 1
                    self.distrib[self.lastcol][n] += 1
                    self.solved.append(solving + [n])

    def show_solved(self):
        for sol in self.solved:
            print(''.join(map(str, sol)))

    def show_distrib(self):
        for i in range(1, self.top + 1):
            print(i, end=' ')
            for col in range(len(self.grid)):
                print(f'{self.distrib[col][i]:2d}/{self.count}', end=' ')
            print()

dv = DaVinci([[1,2,4,6],[1,4,5,6],[2,3,6],[2,3,5,6],[4,6]], 6)
dv.show_solved()
14236
14256
14356
15234
15236
15264
15364
16234
16254
16354
24356
25364
26354
45236
dv.show_distrib()
1 10/14 0/14 0/14 0/14 0/14
2 3/14 0/14 8/14 0/14 0/14
3 0/14 0/14 6/14 5/14 0/14
4 1/14 4/14 0/14 0/14 8/14
5 0/14 6/14 0/14 6/14 0/14
6 0/14 4/14 0/14 3/14 6/14
A simple idea to get an approximation for the distribution is to use a Monte Carlo approach.
Set a variable total := 0 and a matrix M[N][Q] with all entries initially set to zero (Q is the count of allowed numbers).
Fix a positive integer K. Perform K iterations. At each iteration, for each i in [1..N], take a random element from Si and fill the array A. When the array A is completely filled, verify in O(N) whether it satisfies your conditions. If so, increment the variable total by one and iterate through the array, incrementing the matrix entries M[i][A[i]] by one, for i in [1..N].
In the end, iterate through all the elements of the matrix M in O(N Q) and divide its elements by total to get an approximation for the distribution.
Total time complexity is O(N (K + Q)).
You can also precalculate stuff to make the approximation more precise. For example, you can precalculate all increasing sequences in the groups A1, A2 and A3. Put them in arrays I1, I2, I3. Then, at each iteration, instead of taking random elements from each Si, you take random sequences from I1, I2 and I3 and verify if the concatenation has no repeated elements (in O(N)). If so, proceed as before. The total time complexity (apart from the expensive precalculation) remains O(N (K + Q)).
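For concreteness, here is a rough Python sketch of this Monte Carlo scheme on the example instance (names, structure and the choice of K are mine, purely illustrative):

import random

S = [[1,2,4,6], [1,4,5,6], [2,3,6], [2,3,5,6], [4,6]]   # the example's candidate sets Si
SUBARRAYS = [(0, 2), (2, 4), (4, 5)]                    # A1, A2, A3 as index ranges
Q = 6

def valid(a):
    if len(set(a)) != len(a):            # uniqueness across the whole array
        return False
    return all(a[i] < a[i + 1]           # strictly increasing inside each subarray
               for lo, hi in SUBARRAYS for i in range(lo, hi - 1))

def estimate(K=100000):
    total = 0
    M = [[0] * (Q + 1) for _ in S]
    for _ in range(K):
        a = [random.choice(s) for s in S]
        if valid(a):
            total += 1
            for i, v in enumerate(a):
                M[i][v] += 1
    return total, M    # divide the entries of M by total for the approximation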
Start by converting all legal subarray selections into bitvectors.
E.g., for A2 we have [2,3], [2,5], [2,6], [3,5], [3,6]
[2,3] as a bitvector is 000110
[3,5] is 010100
Next, arrange your three subarrays by the number of bitvectors they have.
Next, put these in a hash for each subarray/member combination except the smallest subarray. Use the smallest set bit as the key.
E.g. For [2,3] in A2, we'd have {2 => 000110}
Note that the values of the map need to be in an array, since there will be multiple bitvectors for each index/element combo.
Finally,

For every bitvec of subarray_small:
    For every non-set bit of that bitvec:
        Find the list that has that bit as a key in subarray_medium
        For every bitvec in this list:
            Check if the inverse of (bitvec_small | bitvec_medium) is in the hash for subarray_large.
            If it is, we have a valid arrangement; update your frequency counts.
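Below is a simplified Python sketch of the bitvector idea; it skips the smallest-set-bit hash and just tests mask disjointness directly, so it illustrates the filtering rather than the full scheme (all names are mine):

SETS = [[1,2,4,6], [1,4,5,6], [2,3,6], [2,3,5,6], [4,6]]
SUBARRAYS = [(0, 2), (2, 4), (4, 5)]   # A1, A2, A3

def selections(lo, hi):
    # All strictly increasing tuples for one subarray, each paired with its bitvector.
    cols = SETS[lo:hi]
    out = []
    def rec(i, prev, picked, mask):
        if i == len(cols):
            out.append((picked, mask))
            return
        for v in cols[i]:
            if v > prev and not mask & (1 << v):
                rec(i + 1, v, picked + (v,), mask | (1 << v))
    rec(0, 0, (), 0)
    return out

A1, A2, A3 = (selections(lo, hi) for lo, hi in SUBARRAYS)
count = sum(1 for _, m1 in A1 for _, m2 in A2 if not m1 & m2
              for _, m3 in A3 if not m3 & (m1 | m2))
print(count)   # 14, matching the 14 solutions listed above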

How to approach and understand a math related DSA question

I found this question online and I really have no idea what the question is even asking. I would really appreciate some help in first understanding the question, and a solution if possible. Thanks!
To see if a number is divisible by 3, you need to add up the digits of its decimal notation, and check if the sum is divisible by 3.
To see if a number is divisible by 11, you need to split its decimal notation into pairs of digits (starting from the right end), add up corresponding numbers and check if the sum is divisible by 11.
For any prime p (except for 2 and 5) there exists an integer r such that a similar divisibility test exists: to check if a number is divisible by p, you need to split its decimal notation into r-tuples of digits (starting from the right end), add up these r-tuples and check whether their sum is divisible by p.
Given a prime int p, find the minimal r for which such divisibility test is valid and output it.
The input consists of a single integer p - a prime between 3 and 999983, inclusive, not equal to 5.
Example
input
3
output
1
input
11
output
2
This is a very cool problem! It uses modular arithmetic and some basic number theory to devise the solution.
Let's say we have p = 11. What divisibility rule applies here? How many digits at once do we need to take, to have a divisibility rule?
Well, let's try a single digit at a time. That would mean that if we have 121 and we sum its digits 1 + 2 + 1, then we get 4. However, we see that although 121 is divisible by 11, 4 isn't, and so the rule doesn't work.
What if we take two digits at a time? With 121 we get 1 + 21 = 22. We see that 22 IS divisible by 11, so the rule might work here. And in fact, it does. For p = 11, we have r = 2.
This requires a bit of intuition which I am unable to convey in text (I really have tried) but it can be proven that for a given prime p other than 2 and 5, the divisibility rule works for tuples of digits of length r if and only if the number 99...9 (with r nines) is divisible by p. And indeed, for p = 3 we have 9 % 3 = 0, while for p = 11 we have 9 % 11 = 9 (this is bad) and 99 % 11 = 0 (this is what we want).
If we want to find such an r, we start with r = 1. We check if 9 is divisible by p. If it is, then we found the r. Otherwise, we go further and we check if 99 is divisible by p. If it is, then we return r = 2. Then, we check if 999 is divisible by p and if so, return r = 3 and so on. However, the 99...9 numbers can get very large. Thankfully, to check divisibility by p we only need to store the remainder modulo p, which we know is small (at least smaller than 999983). So the code in C++ would look something like this:
int r(int p) {
    int result = 1;
    int remainder = 9 % p;
    while (remainder != 0) {
        remainder = (remainder * 10 + 9) % p;
        result++;
    }
    return result;
}
I have no idea how they expect a random programmer with no background to figure out the answer from this.
But here is the brief introduction to modulo arithmetic that should make this doable.
In programming, n % k is the modulo operator. It refers to taking the remainder of n / k. It satisfies the following two important properties:
(n + m) % k = ((n % k) + (m % k)) % k
(n * m) % k = ((n % k) * (m % k)) % k
Because of this, for any k we can think of all numbers with the same remainder as somehow being the same. The result is something called "the integers modulo k". And it satisfies most of the rules of algebra that you're used to. You have the associative property, the commutative property, distributive law, addition by 0, and multiplication by 1.
However if k is a composite number like 10, you have the unfortunate fact that 2 * 5 = 10 which means that modulo 10, 2 * 5 = 0. That's kind of a problem for division.
BUT if k = p, a prime, then things become massively easier. If (a*m) % p = (b*m) % p then ((a-b) * m) % p = 0 so (a-b) * m is divisible by p. And therefore either (a-b) or m is divisible by p.
For any non-zero remainder m, let's look at the sequence m % p, m^2 % p, m^3 % p, .... This sequence is infinitely long and can only take on p values. So we must have a repeat where a < b and m^a % p = m^b % p. So (1 * m^a) % p = (m^(b-a) * m^a) % p. Since p doesn't divide m, it doesn't divide m^a either, and therefore m^(b-a) % p = 1. Furthermore m^(b-a-1) % p acts just like m^(-1) = 1/m. (If you take enough math, you'll find that the non-zero remainders under multiplication form a finite group, and all the remainders form a field. But let's ignore that.)
(I'm going to drop the % p everywhere. Just assume it is there in any calculation.)
Now let's let a be the smallest positive number such that m^a = 1. Then 1, m, m^2, ..., m^(a-1) forms a cycle of length a. For any n in 1, ..., p-1 we can form a cycle (possibly the same, possibly different) n, n*m, n*m^2, ..., n*m^(a-1). It can be shown that these cycles partition 1, 2, ..., p-1 where every number is in a cycle, and each cycle has length a. THEREFORE, a divides p-1. As a side note, since a divides p-1, we easily get Fermat's little theorem that m^(p-1) has remainder 1 and therefore m^p = m.
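A quick numeric illustration of the cycle claim (a Python snippet of my own, not part of the answer):

for p, m in [(7, 10), (11, 10)]:
    # Powers m^1 .. m^(p-1) modulo p; the cycle length divides p-1.
    print(p, [pow(m, e, p) for e in range(1, p)])
# 7  [3, 2, 6, 4, 5, 1]                     -> order 6, which divides p-1 = 6
# 11 [10, 1, 10, 1, 10, 1, 10, 1, 10, 1]    -> order 2, which divides p-1 = 10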
OK, enough theory. Now to your problem. Suppose we have a base b = 10^i. The divisibility test that they are discussing is that a_0 + a_1*b + a_2*b^2 + ... + a_k*b^k is divisible by a prime p if and only if a_0 + a_1 + ... + a_k is divisible by p. Looking at the number (p-1) + b (whose digit-group sum is p), this can only happen if b % p is 1. And if b % p is 1, then in modular arithmetic b to any power is 1, and the test works.
So we're looking for the smallest i such that 10^i % p is 1. From what I showed above, i always exists and divides p-1. So you just need to factor p-1 and try 10 to the power of each divisor until you find the smallest i that works.
Note that you should % p at every step you can to keep those powers from getting too big. And with repeated squaring you can speed up the calculation. So, for example, calculating 10^20 % p could be done by calculating each of the following in turn.
10 % p
10^2 % p
10^4 % p
10^5 % p
10^10 % p
10^20 % p
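As an illustration, Python's built-in three-argument pow performs exactly this modular exponentiation (with repeated squaring under the hood), so a sketch of the divisor-scanning search is short (my own framing of the approach above):

def smallest_order(p):
    # Scan the divisors of p-1 in increasing order and return the first i
    # with 10^i % p == 1; pow(10, i, p) computes 10^i mod p efficiently.
    divisors = sorted(d for k in range(1, int((p - 1) ** 0.5) + 1)
                        if (p - 1) % k == 0 for d in {k, (p - 1) // k})
    return next(i for i in divisors if pow(10, i, p) == 1)

print(smallest_order(11))  # 2
print(smallest_order(19))  # 18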
This is an almost direct application of Fermat's little theorem.
First, you have to reformulate the "split decimal notation into tuples [...]"-condition into something you can work with:
to check if a number is divisible by p, you need to split its decimal notation into r-tuples of digits (starting from the right end), add up these r-tuples and check whether their sum is divisible by p
When you translate it from prose into a formula, what it essentially says is that you want

sum_i b_i * 10^(r*i) ≡ sum_i b_i (mod p)

for any choice of "r-tuples of digits" b_i from { 0, ..., 10^r - 1 } (with only finitely many b_i being non-zero).
Taking b_1 = 1 and all other b_i = 0, it's easy to see that it is necessary that

10^r ≡ 1 (mod p).

It's even easier to see that this is also sufficient (all the 10^(r*i) factors on the left-hand side simply turn into factors of 1 that do nothing).
Now, if p is neither 2 nor 5, then 10 will not be divisible by p, so Fermat's little theorem guarantees us that

10^(p-1) ≡ 1 (mod p),

that is, at least the solution r = p - 1 exists. This might not be the smallest such r though, and computing the smallest one is hard if you don't have a quantum computer handy.
Despite it being hard in general, for very small p, you can simply use an algorithm that is linear in p (you simply look at the sequence
10 mod p
100 mod p
1000 mod p
10000 mod p
...
and stop as soon as you find something that equals 1 mod p).
Written out as code, for example, in Scala:
def blockSize(p: Int, n: Int = 10, r: Int = 1): Int =
  if n % p == 1 then r else blockSize(p, n * 10 % p, r + 1)
println(blockSize(3)) // 1
println(blockSize(11)) // 2
println(blockSize(19)) // 18
or in Python:
def blockSize(p: int, n: int = 10, r: int = 1) -> int:
    return r if n % p == 1 else blockSize(p, n * 10 % p, r + 1)
print(blockSize(3)) # 1
print(blockSize(11)) # 2
print(blockSize(19)) # 18
A wall of numbers, just in case someone else wants to sanity-check alternative approaches:
11 -> 2
13 -> 6
17 -> 16
19 -> 18
23 -> 22
29 -> 28
31 -> 15
37 -> 3
41 -> 5
43 -> 21
47 -> 46
53 -> 13
59 -> 58
61 -> 60
67 -> 33
71 -> 35
73 -> 8
79 -> 13
83 -> 41
89 -> 44
97 -> 96
101 -> 4
103 -> 34
107 -> 53
109 -> 108
113 -> 112
127 -> 42
131 -> 130
137 -> 8
139 -> 46
149 -> 148
151 -> 75
157 -> 78
163 -> 81
167 -> 166
173 -> 43
179 -> 178
181 -> 180
191 -> 95
193 -> 192
197 -> 98
199 -> 99
Thank you andrey tyukin.
Simple terms to remember:
1. If x % y = z, then (x % y) % y is again z (taking the modulo a second time changes nothing).
2. (x + y) % z = ((x % z) + (y % z)) % z
Keep this in mind.
So you break any number into groups of r digits at a time. I.e. break 3456733, when r = 6, into 3 * 10^(6*1) + 456733 * 10^(6*0).
And you can break 12536382626373 into 12 * 10^(6*2) + 536382 * 10^(6*1) + 626373 * 10^(6*0).
Observe that here r is 6.
So when we say we take the r-digit groups, sum them together and apply the modulo, we are saying we apply the modulo to the coefficients of the above breakdown.
So how come the sum of the coefficients represents the whole number's remainder?
When the modulo of 10^(r * anything) in the above breakdown becomes 1, then that particular term's modulo equals the coefficient's modulo. That means the factor 10^(r * anything) has no effect; you can check why using formulas 1 and 2 above.
And if you can prove that 10^r modulo p is 1, then 10^(r * anything) modulo p is also 1 for every term.
So the important thing is to have 10^r equal to 1 modulo p. Then every 10^(r * anything) is 1, which makes the remainder of the number equal to the remainder of the sum of its r-digit groups.
Conclusion: find r such that 10^r leaves remainder 1 when divided by the given prime number.
That also means the smallest 9...9 that is divisible by the given prime number decides r.
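A quick sanity check of this in Python (my own snippet), splitting a number into r-digit groups and comparing remainders:

def group_sum(n, r):
    # Sum the r-digit groups of n, taken from the right end.
    total = 0
    while n:
        total += n % 10**r
        n //= 10**r
    return total

print(group_sum(121, 2) % 11, 121 % 11)   # 0 0 -> the r = 2 rule works for 11
print(group_sum(3456733, 6))              # 456736, i.e. 3 + 456733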

Finding the largest power of a number that divides a factorial in haskell

So I am writing a Haskell program to calculate the largest power of a number that divides a factorial.
largestPower :: Int -> Int -> Int
Here largestPower a b has to find the largest power of b that divides a!.
Now I understand the math behind it, the way to find the answer is to repeatedly divide a (just a) by b, ignore the remainder and finally add all the quotients. So if we have something like
largestPower 10 2
we should get 8, because 10/2 = 5, 5/2 = 2, 2/2 = 1, and we add 5 + 2 + 1 = 8.
However, I am unable to figure out how to implement this as a function: do I use arrays or just a simple recursive function?
I am gravitating towards it being just a normal function, though I guess it could be done by storing the quotients in an array and adding them.
Recursion without an accumulator
You can simply write a recursive algorithm and sum up the result of each call. Here we have two cases:
a is less than b, in which case the largest power is 0. So:
largestPower a b | a < b = 0
a is greater than or equal to b, in that case we divide a by b, calculate largestPower for that division, and add the division to the result. Like:
| otherwise = d + largestPower d b
where d = (div a b)
Or putting it together:
largestPower a b | a < b = 0
                 | otherwise = d + largestPower d b
    where d = div a b
Recursion with an accumulator
You can also use recursion with an accumulator: a variable you pass through the recursion, and update accordingly. At the end, you return that accumulator (or a function called on that accumulator).
Here the accumulator would of course be the running sum of the quotients, so:
largestPower = largestPower' 0
So we will define a function largestPower' (mind the accent) with an accumulator as first argument that is initialized as 0.
Now in the recursion, there are two cases:
a is less than b, we simply return the accumulator:
largestPower' r a b | a < b = r
otherwise we add the quotient d to our accumulator, and pass the division to largestPower' with a recursive call:
| otherwise = largestPower' (d+r) d b
where d = (div a b)
Or the full version:
largestPower = largestPower' 0

largestPower' r a b | a < b = r
                    | otherwise = largestPower' (d+r) d b
    where d = div a b
Naive correct algorithm
The algorithm is not correct. A "naive" algorithm would be to simply divide every item and keep decrementing until you reach 1, like:
largestPower 1 _ = 0
largestPower a b = sumPower a + largestPower (a-1) b
    where sumPower n | n `mod` b == 0 = 1 + sumPower (div n b)
                     | otherwise = 0
So this means that for the largestPower 4 2, this can be written as:
largestPower 4 2 = sumPower 4 + sumPower 3 + sumPower 2
and:
sumPower 4 = 1 + sumPower 2
           = 1 + 1 + sumPower 1
           = 1 + 1 + 0
           = 2
sumPower 3 = 0
sumPower 2 = 1 + sumPower 1
           = 1 + 0
           = 1
So 3.
The algorithm as stated can be implemented quite simply:
largestPower :: Int -> Int -> Int
largestPower 0 b = 0
largestPower a b = d + largestPower d b where d = a `div` b
However, the algorithm is not correct for composite b. For example, largestPower 10 6 with this algorithm yields 1, but in fact the correct answer is 4. The problem is that this algorithm ignores multiples of 2 and 3 that are not multiples of 6. How you fix the algorithm is a completely separate question, though.
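As a quick cross-check (a Python snippet of my own, not part of the answer above), computing the exponent directly from the factorial confirms both numbers:

from math import factorial

def largest_power(a, b):
    # Largest k such that b^k divides a!, by direct division (fine for small a).
    f, k = factorial(a), 0
    while f % b == 0:
        f //= b
        k += 1
    return k

print(largest_power(10, 2))  # 8
print(largest_power(10, 6))  # 4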

Calculate Combination based on position

I'm having trouble solving this problem:
Create a function that, given a character set C, can generate the Nth combination, OR return the series of combinations given a starting position (Ns), an ending position (Ne), and the maximum length of the combination (Mx).
A concrete example:
Let C = [A,B,C]
We know that different combinations would look like the following assuming Mx = 3 (the combination would be different for different lengths):
1. AAA
2. AAB
3. AAC
4. ABA
5. ABB
6. ABC
N. ... Etc
If we were to pass the following parameters:
C = [A,B,C] Mx = 3 Ns = 3 Ne = 3
we would expect the following result:
AAC
If we were to pass the following parameters:
C = [A,B,C] Mx = 3 Ns = 4 Ne = 6
we would expect the following result:
4. ABA
5. ABB
6. ABC
For the solution, the programming language is not relevant; however, C# would be preferred. Also, most important would be an explanation of how it's solved.
I look forward to the amazing Gurus of Stack Overflow...
Given an index N (0-based) into the sequence of combinations of n symbols, you can get the i'th symbol by calculating N / n^i % n (using integer division and remainder).
For example:
C = {A, B, C} (giving n = 3)
N = 6
i = 0 => 6 / 3^0 % 3 = 0 (symbol 0 = A)
i = 1 => 6 / 3^1 % 3 = 2 (symbol 2 = C)
i = 2 => 6 / 3^2 % 3 = 0 (symbol 0 = A)
Resulting sequence: ACA
The sequence is treated as a base-n number, and the individual digits are calculated.
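A small Python sketch of this decoding (the function name is mine; the most significant digit is emitted first so that, with N reduced by one to make it 0-based, the output lines up with the question's enumeration):

def nth_combination(chars, mx, n):
    # Treat the 0-based index n as an mx-digit number in base len(chars).
    k = len(chars)
    digits = [(n // k**i) % k for i in range(mx)]        # i = 0 is the rightmost symbol
    return ''.join(chars[d] for d in reversed(digits))

print(nth_combination("ABC", 3, 2))                          # AAC (the question's Ns = 3)
print([nth_combination("ABC", 3, i) for i in range(3, 6)])   # ['ABA', 'ABB', 'ABC']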

Number of Comparisons using merge sort

If you have 5 distinct numbers, how many comparisons at most do you need to sort this using merge sort?
What is stopping you from coding a merge sort, keeping a counter for the number of comparisons in it, and trying it out on all permutations of [0,1,2,3,4]?
I find the question interesting, so I decided to explore it thoroughly (with a little experimentation in Python).
I downloaded mergesort.py from here and modified it to add a cmp argument for a comparator function. Then:
import collections
import itertools
import mergesort
import sys

class CountingComparator(object):
    def __init__(self):
        self.count = 0
    def __call__(self, a, b):
        self.count += 1
        return cmp(a, b)

ms_histo = collections.defaultdict(int)

for perm in itertools.permutations(range(int(sys.argv[1]))):
    cc = CountingComparator()
    lperm = list(perm)
    mergesort.mergesort(lperm, cmp=cc)
    ms_histo[cc.count] += 1

for c in sorted(ms_histo):
    print "%d %2d" % (c, ms_histo[c])
The resulting simple histogram (starting with a length of 4, as I did for developing and debugging this) is:
4 8
5 16
For the problem as posted, with a length of 5 instead of 4, I get:
5 4
6 20
7 48
8 48
and with a length of 6 (and a wider format;-):
7 8
8 56
9 176
10 288
11 192
Finally, with a length of 7 (and even wider format;-):
9 16
10 128
11 480
12 1216
13 1920
14 1280
Surely some perfectly regular combinatorial formula lurks here, but I'm finding it difficult to gauge what it might be, either analytically or by poring over the numbers. Anybody's got suggestions?
When merge-sorting two lists of length L1 and L2, I suppose the worst case number of comparisons is L1+L2-1.
Initially you have five 1-long lists.
You can merge two pairs of lists with 2 comparisons, resulting in lists of length 2,2 and 1.
Then you can merge a 2 and 1 long list with at most another 1+2-1 = 2 comparisons, yielding a 2 and 3 long list.
Finally you merge these lists with at most 2+3-1 = 4 comparisons.
So I guess the answer is 8.
This sequence of numbers results in the above:
[2], [4], [1], [3], [5] -> [2,4], [1,3], [5] -> [2,4], [1,3,5] -> [1,2,3,4,5]
Edit:
Here is a naive Erlang implementation. Based on this, the number of comparisons is 5,6,7 or 8 for permutations of 1..5.
-module(mergesort).
-compile(export_all).

test() ->
    lists:sort([{sort(L), L} || L <- permutations()]).

sort([]) -> {0, []};
sort([_] = L) -> {0, L};
sort(L) ->
    {L1, L2} = lists:split(length(L) div 2, L),
    {C1, SL1} = sort(L1), {C2, SL2} = sort(L2),
    {C3, RL} = merge(SL1, SL2, [], 0),
    {C1+C2+C3, RL}.

merge([], L2, Merged, Comps) -> {Comps, Merged ++ L2};
merge(L1, [], Merged, Comps) -> {Comps, Merged ++ L1};
merge([H1|T1], [H2|_] = L2, Merged, Comps) when H1 < H2 -> merge(T1, L2, Merged ++ [H1], Comps + 1);
merge(L1, [H2|T2], Merged, Comps) -> merge(L1, T2, Merged ++ [H2], Comps + 1).

permutations() ->
    L = lists:seq(1,5),
    [[A,B,C,D,E] || A <- L, B <- L, C <- L, D <- L, E <- L,
                    A =/= B, A =/= C, A =/= D, A =/= E,
                    B =/= C, B =/= D, B =/= E,
                    C =/= D, C =/= E, D =/= E].
http://www.sorting-algorithms.com/
According to Wikipedia, in the worst case merge sort does a number of comparisons equal to or slightly smaller than (n ⌈lg n⌉ - 2^⌈lg n⌉ + 1); for n = 5 that is 5*3 - 2^3 + 1 = 8, matching the experiments above.
For just five distinct numbers to sort, the maximum number of comparisons you need is 8. Here's why:
Suppose the array is a,b,c,d,e
divide recursively: a,b,c and d,e
divide recursively: a,b and c, and d,e
divide recursively: a and b, and c, and d,e
Now, merging, which will require comparisons:
a & b : one comparison to form a,b
a,b & c : two comparisons to form a,b,c
d & e : one comparison to form d,e
a,b,c & d,e : four comparisons in the worst case, or three comparisons if d is the largest element of the array, to form a,b,c,d,e
So, the total number of comparisons will be eight in the worst case and seven when the final merge needs only three.
