Evaluate all possible interpretations in OCaml - logic

I need to evaluate whether two formulas are equivalent or not. Here I use a simple definition of a formula, written in prefix form.
For example, And(Atom("b"), True) means b and true, while And(Atom("b"), Or(Atom("c"), Not(Atom("c")))) means (b and (c or not c))
My idea is simple: get all the atoms, then apply every combination of truth values (in my case there are two atoms, so I will have 4 combinations: true-true, true-false, false-true, and false-false). The thing is, I don't know how to create these combinations.
For now I know how to get all the atoms involved, so if there are 5 atoms, I should create 32 combinations. How do I do that in OCaml?

Ok, so what you need is a function combinations n that will produce all the boolean combinations of length n; let's represent them as lists of lists of booleans (i.e. a single assignment of variables will be a list of booleans). Then this function would do the job:
let rec combinations = function
  | 0 -> [[]]
  | n ->
    let rest = combinations (n - 1) in
    let comb_f = List.map (fun l -> false :: l) rest in
    let comb_t = List.map (fun l -> true :: l) rest in
    comb_t @ comb_f
There is only one empty combination of length 0 and for n > 0 we produce combinations of n-1 and prefix them with false and with true to produce all possible combinations of length n.
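For example, evaluating the function in the toplevel gives something like:
# combinations 2;;
- : bool list list =
[[true; true]; [true; false]; [false; true]; [false; false]]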
You could write a function to print such combinations, let's say:
let rec combinations_to_string = function
  | [] -> ""
  | x :: xs ->
    let rec bools_to_str = function
      | [] -> ""
      | b :: bs ->
        Printf.sprintf "%s%s" (if b then "T" else "F") (bools_to_str bs)
    in
    Printf.sprintf "[%s]%s" (bools_to_str x) (combinations_to_string xs)
and then test it all with:
let _ =
  let n = int_of_string Sys.argv.(1) in
  let combs = combinations n in
  Printf.eprintf "combinations(%d) = %s\n" n (combinations_to_string combs)
to get:
> ./combinations 3
combinations(3) = [TTT][TTF][TFT][TFF][FTT][FTF][FFT][FFF]
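To connect this back to the original question: once you have the list of atoms and one combination, you can evaluate both formulas under that assignment and compare the results for every combination. A rough sketch (the formula type below is only an assumption based on the constructors mentioned in the question, and atoms is assumed to be the deduplicated list of atom names occurring in either formula):

type formula =
  | True
  | Atom of string
  | Not of formula
  | And of formula * formula
  | Or of formula * formula

(* evaluate a formula under an assignment pairing each atom name with a bool *)
let rec eval assignment = function
  | True -> true
  | Atom a -> List.assoc a assignment
  | Not f -> not (eval assignment f)
  | And (f, g) -> eval assignment f && eval assignment g
  | Or (f, g) -> eval assignment f || eval assignment g

(* two formulas are equivalent iff they agree on every combination of their atoms *)
let equivalent atoms f g =
  List.for_all
    (fun combo ->
      let assignment = List.combine atoms combo in
      eval assignment f = eval assignment g)
    (combinations (List.length atoms))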

If you think of a list of booleans as a list of bits of fixed length, there is a very simple solution: Count!
If you want to have all combinations of 4 booleans, count from 0 to 15 (2^4 - 1) -- then interpret each bit as one of the booleans. For simplicity I'll use a for-loop, but you can also do it with a recursion:
let size = 4 in
(* '1 lsl size' computes 2^size *)
for i = 0 to (1 lsl size) - 1 do
  (* from: is the least significant bit '1'? *)
  let b0 = 1 = ((i / 1) mod 2) in
  let b1 = 1 = ((i / 2) mod 2) in
  let b2 = 1 = ((i / 4) mod 2) in
  (* to: is the most significant bit '1'? *)
  let b3 = 1 = ((i / 8) mod 2) in
  (* do your thing *)
  compute b0 b1 b2 b3
done
Of course you can make the body of the loop more general so that it, e.g., creates a list or array of booleans depending on the size given above.
The point is that you can solve this problem by enumerating all values you are searching for. If this is the case, compute all integers up to your problem size. Write a function that generates a value of your original problem from an integer. Put it all together.
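A small sketch of such a decoding function (an illustration only, using List.init from the standard library):

(* decode the integer i into a list of [size] booleans, least significant bit first:
   bit k of i becomes element k of the list *)
let bools_of_int size i =
  List.init size (fun k -> (i lsr k) land 1 = 1)

Calling bools_of_int 4 i inside the loop above gives you the same assignment as b0..b3.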
This method has the advantage that you do not need to create all combinations up front before starting your computation. For large problems this might well save you: even at the rather small size = 16 you would already need 65536 * sizeof(type) of memory for the precomputed combinations -- and this grows exponentially with the size. The solution above only requires a constant amount of memory, sizeof(type).
And for science's sake: deciding equivalence of Boolean formulas is coNP-complete, so if you want the exact answer, expect exponential time in the worst case.

Related

F#: Two Dimensional Array - Generate all possible binary combinations

What approach would you use to generate the set of NxN matrices containing only zeros and ones which represent all possible distinct combinations?
let matrix = Array2D.init N N (fun x y -> something)
If you don't know F# then pseudocode will be a contribution as well.
So what I want is a list/array of all the distinct matrix combinations.
So, I think the hard part is generating the list of elements. We can do it recursively.
The base case is easy. For a 1x1 matrix, you have 1 element which can only have two combinations: [|[|0|]; [|1|]|].
For a 2x2 matrix, we have 2^2 = 4 elements. Each one of these can be either 1 or 0, so there are 2^4 = 16 combinations possible. To get all the combinations possible for this 2x2 matrix, we can think of it as an array of length 4.
But first, let's think about an array of length 2. Then we have to find all the combinations between [|[|0|]; [|1|]|] and [|[|0|]; [|1|]|]. This would be [|[|0; 0|]; [|0;1|]; [|1;0|]; [|1; 1|]|]. Luckily, there's a function called Array.allPairs which will generate the array of all possible combinations between two arrays, which already does this for us!
So, we can apply Array.allPairs to each element of our array of length 4 sequentially to get all the possible combinations for the entire matrix using Array.reduce. I make a function called pairsToArray to basically flatten the data structure.
let pairsToArray x = Array.concat [|fst x; snd x|]
let rec binary N =
    match N with
    | 0 -> [||]
    | 1 -> [|[|0|]; [|1|]|]
    | n ->
        let elements = n * n
        let combinations = Array.init elements (fun _ -> binary 1)
        let result = Array.reduce (fun acc i -> Array.allPairs acc i |> Array.map pairsToArray) combinations
        result
Now, all that remains is converting this to an Array2D.
Something like this should do the trick
let c = binary 2
c |> Array.map (fun i -> Array2D.init 2 2 (fun j k -> i.[j+k*2]))
for the 2x2 case
Maybe something like this
// flip the cell at (i,j); on carry, move on to the next cell;
// return false once we have wrapped past the last cell
let rec addOne (N1: int, N2: int) (M: int[,]) (i: int, j: int) =
    if M.[i,j] = 0
    then
        M.[i,j] <- 1
        true
    else
        M.[i,j] <- 0
        let newi, newj =
            if i < N1-1
            then (i+1, j)
            else (0, j+1)
        if newj = N2
        then false
        else addOne (N1, N2) M (newi, newj)
combined with this
let N = 3
let M: int[,] = Array2D.zeroCreate N N
let mylist =
    [ yield Array2D.copy M      // copy the initial all-zero matrix too, since addOne mutates M in place
      while addOne (N, N) M (0, 0) do
          yield Array2D.copy M ]
I don't know if it makes sense.
It is a method to find the "next" matrix, and then make a list of all the matrices that we encounter that way.
edit: replaced bool with int (0 and 1) to better fit the original question.

Random number generation in OCaml

When using strict functional languages you are bound to a particular way of writing programs. I have the problem of generating a large quantity of pseudo-random numbers with OCaml, and I'm not sure I'm using the best way to generate them in such a language.
What I did was create a module with a function (gen) that takes an integer as the size and an empty list and returns a list of pseudo-random numbers of size size. The problem is that when the size is too large, it raises a stack overflow, which is what is expected.
Should I use tail recursion? Should I use a better method that I'm not aware of?
module RNG =
  struct
    (* Append a number n at the end of the list l *)
    let rec append l n =
      match l with
      | [] -> [n]
      | h :: t -> h :: (append t n)

    (* Generate a list l with size random numbers *)
    let rec gen size l =
      if size = 0 then
        l
      else
        let n = Random.int 1000000 in
        let list = append l n in
        gen (size - 1) list
  end
Testing the code to generate a billion pseudo random numbers returns:
# let l = RNG.gen 1000000000 [];;
Stack overflow during evaluation (looping recursion?).
The problem is that the append function is not tail recursive. Each recursive call uses up a bit of stack space to store its state, and as the list gets longer the append function takes more and more stack space. At some point the stack simply isn't big enough and the code fails.
As you suggested in the question the way to avoid that is using tail recursion. When working with lists that usually means constructing the lists in reverse order. The append function then becomes simply ::.
If the order of the resulting list is important, the list needs to be reversed at the end. So it is not uncommon to see code returning List.rev acc. This takes O(n) time but constant stack space and is tail recursive, so the stack is not a limiting factor there.
So your code would become:
let rec gen size l =
  if size = 0 then
    List.rev l
  else
    let n = Random.int 1000000 in
    let list = n :: l in
    gen (size - 1) list
A few more things to optimize:
When building a result bit by bit through recursion, the accumulated result is usually named acc, short for accumulator, and passed first:
let rec gen acc size =
  if size = 0 then
    List.rev acc
  else
    let n = Random.int 1000000 in
    let list = n :: acc in
    gen list (size - 1)
This then allows the use of function and pattern matching instead of the size argument and if construct:
let rec gen acc = function
  | 0 -> List.rev acc
  | size ->
    let n = Random.int 1000000 in
    let list = n :: acc in
    gen list (size - 1)
A list of random numbers is usually just as good reversed. Unless you want lists of different sizes that begin with the same sequence of numbers (from the same seed), you can skip the List.rev. And n :: acc is such a simple construct that one usually doesn't bind it to a variable.
let rec gen acc = function
  | 0 -> acc
  | size ->
    let n = Random.int 1000000 in
    gen (n :: acc) (size - 1)
And last, you can take advantage of optional arguments. While that makes the code a bit more complex to read, it greatly simplifies its use:
let rec gen ?(acc=[]) = function
  | 0 -> acc
  | size ->
    let n = Random.int 1000000 in
    gen ~acc:(n :: acc) (size - 1)
# gen 5;;
- : int list = [180439; 831641; 180182; 326685; 809344]
You no longer need to specify the empty list to generate a list of random number.
Note: An alternative way is to use a wrapper function:
let gen size =
  let rec loop acc = function
    | 0 -> acc
    | size ->
      let n = Random.int 1000000 in
      loop (n :: acc) (size - 1)
  in loop [] size
It would be a big improvement to generate your list in reverse order, then reverse it once at the end. Adding successive values to the end of a list is very slow. Adding to the front of a list can be done in constant time.
Even better, just generate the list in reverse order and return it that way. Do you care that the list is in the same order that the values were generated?
Why do you need to compute the full list explicitly? Another option might be to generate the elements lazily (and deterministically) using the standard Seq module:
let rec random_seq state () =
  let state' = Random.State.copy state in
  Seq.Cons(Random.State.int state' 10, random_seq state')
Then the random sequence random_seq state is fully determined by the initial state state: it can be reused without trouble, and it only generates new elements as needed.
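For example (an illustrative use, assuming an OCaml version where Seq.take and List.of_seq are available), you can materialize a prefix of the sequence while keeping the sequence itself reusable:

let () =
  let state = Random.State.make [| 42 |] in
  let seq = random_seq state in
  (* taking the first five elements twice yields the same numbers,
     since the original state is never mutated *)
  let five () = List.of_seq (Seq.take 5 seq) in
  assert (five () = five ());
  List.iter (Printf.printf "%d ") (five ())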
The standard List module has an init function you can use to write all this in one line:
let upperbound = 10

let gen size =
  List.init size (fun _ -> Random.int upperbound)

Is a list comprehension or a sequential filter more optimized?

Let's say you have to return the sum of all the multiples of 2 and 3 in a set of integers from 1-100. In Haskell, the code I would write would look something like this:
sum ([x*2 | x<-[1..100], x*2 < 100] `union` [x*3 | x<-[1..100], x*3 < 100])
This uses 2 list comprehensions with a union. Another solution would be to step through each item in the list and evaluate it (using a modulus), then add it to a separate list, which you would later add together.
Both of these solutions come out with the same answer, but which one is more optimized if you had to do the same for, say, a list from 1..1000000?
The answer to the original question is 3317 if you want to create your own algorithm.
If you are looking for performance, you can simplify this problem to the point where you don't even need a computer....
Numbers divisible by 2 or 3 fall into a pattern
0 (1) 2 3 4 (5).... 6 (7) 8 9 10 (11).... etc
or
TFTTTF.... TFTTTF....
Assume that the max bound is divisible by 6 (if not, you can just choose the highest such value below the real bound and add the remaining few values by hand). Let maxBound = 6*N.
Each block of six consecutive values starting at 6*n (for 0 <= n < N) contributes
6*n, 6*n+2, 6*n+3, 6*n+4
(the values 6*n+1 and 6*n+5 are skipped), which sums to
24*n + 9
so all you need to do is sum up
sum from n=0 to N-1 of (24*n + 9)
= 24*(sum from n=0 to N-1 of n) + 9*N
= 24*N*(N-1)/2 + 9*N
= 12*N^2 - 3*N
so a very fast Haskell program that would solve this problem would look something like this
f maxBound = 12*n^2 - 3*n + remainingStuff
  where
    n = maxBound `quot` 6
    remainingStuff = sum $ filter (<= maxBound) [6*n, 6*n+2, 6*n+3, 6*n+4]
The union function is a "quadratic" algorithm, so using one list comprehension will be faster.
A better way which is useful for generating these kinds of sequences is to take advantage of the fact that they are ordered and merge them together with a function like:
merge :: [Int] -> [Int] -> [Int]
merge as [] = as
merge [] bs = bs
merge as@(a:at) bs@(b:bt) =
  case compare a b of
    LT -> a : merge at bs
    EQ -> a : merge at bt
    GT -> b : merge as bt
and then generate your sequence with:
[ x | x <- merge [2,4..100] [3,6..100] ]
One last tip for writing combinatorial loops: replace expressions like x <- [1..100], 2*x < 100 with x <- [1..49], or, if you can't compute the upper bound explicitly, use x <- takeWhile (\x -> 2*x < 100) [1..100]. The latter form only generates as many items as needed.

Recursion confusion in Haskell again - subsets with an inclusion test

I'm testing a simple program to generate subsets with an inclusion test. For example, given
*Main Data.List> factorsets 7
[([2],2),([2,3],1),([3],1),([5],1),([7],1)]
calling chooseP 3 (factorsets 7), I would like to get (read from right to left, a la cons)
[[([5],1),([3],1),([2],2)]
,[([7],1),([3],1),([2],2)]
,[([7],1),([5],1),([2],2)]
,[([7],1),([5],1),([2,3],1)]
,[([7],1),([5],1),([3],1)]]
But my program is returning an extra [([7],1),([5],1),([3],1)] (and missing a [([7],1),([5],1),([2],2)]):
[[([5],1),([3],1),([2],2)]
,[([7],1),([3],1),([2],2)]
,[([7],1),([5],1),([3],1)]
,[([7],1),([5],1),([2,3],1)]
,[([7],1),([5],1),([3],1)]]
The inclusion test is: the first parts of the members' tuples must have pairwise empty intersections.
Once tested as working, the plan is to sum the internal products of each subset's snds, rather than accumulate them.
Since I've asked a similar question before, I imagine that an extra branch is generated since when the recursion splits at [2,3], the second branch runs over the same possibilities once it passes the skipped section. Any pointers on how to resolve that would be appreciated; and if you'd like to share ideas about how to enumerate and sum such product combinations more efficiently, that would be great, too.
Haskell code:
chooseP k xs = chooseP' xs [] 0 where
  chooseP' [] product count = if count == k then [product] else []
  chooseP' yys product count
    | count == k = [product]
    | null yys   = []
    | otherwise  = f ++ g
    where
      (y:ys) = yys
      (factorsY, numY) = y
      f = let zzs = dropWhile (\(fs,ns) -> not . and . map (null . intersect fs . fst) $ product) yys
          in if null zzs
             then chooseP' [] product count
             else let (z:zs) = zzs in chooseP' zs (z:product) (count + 1)
      g = if and . map (null . intersect factorsY . fst) $ product
          then chooseP' ys product count
          else chooseP' ys [] 0
Your code is complicated enough that I might recommend starting over. Here's how I would proceed.
Write a specification. Let it be as stupidly inefficient as necessary -- for example, the spec I choose below will build all combinations of k elements from the list, then filter out the bad ones. Even the filter will be stupidly slow.
sorted xs = sort xs == xs
unique xs = nub xs == xs

disjoint xs = and $ liftM2 go xs xs where
  go x1 x2 = x1 == x2 || null (intersect x1 x2)

-- check that x is valid according to all the validation functions in fs
-- (there are other fun ways to spell this, but this is particularly
-- readable and clearly correct -- just what we want from a spec)
allFuns fs x = all ($x) fs

choosePSpec k = filter good . replicateM k where
  good pairs = allFuns [unique, disjoint, sorted] (map fst pairs)
Just to make sure it's right, we can test it at the prompt:
*Main> mapM_ print $ choosePSpec 3 [([2],2),([2,3],1),([3],1),([5],1),([7],1)]
[([2],2),([3],1),([5],1)]
[([2],2),([3],1),([7],1)]
[([2],2),([5],1),([7],1)]
[([2,3],1),([5],1),([7],1)]
[([3],1),([5],1),([7],1)]
Looks good.
Now that we have a spec, we can try to improve the speed one refactoring at a time, always checking that it matches the spec. The first thing I'd want to do is notice that we can ensure uniqueness and sortedness just by sorting the input and picking things "in an increasing way". To do this, we can define a function which chooses subsequences of a given length. It piggy-backs on the tails function, which you can think of as nondeterministically choosing a place to split its input list.
subseq 0 xs = [[]]
subseq n xs = do
  x':xt <- tails xs
  xs' <- subseq (n-1) xt
  return (x':xs')
Here's an example of this function in action:
*Main> subseq 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]
Now we can write a slightly faster chooseP by replacing replicateM with subseq. Recall that we're assuming the inputs are already sorted and unique, though.
choosePSlow k = filter good . subseq k where
  good pairs = disjoint $ map fst pairs
We can sanity-check that it's working by running it on the particular input we have from above:
*Main> let i = [([2],2),([2,3],1),([3],1),([5],1),([7],1)]
*Main> choosePSlow 3 i == choosePSpec 3 i
True
Or, better yet, we can stress-test it with QuickCheck. We'll need a tiny bit more code. The condition k < 5 is just because the spec is so hopelessly slow that bigger values of k take forever.
propSlowMatchesSpec :: NonNegative Int -> OrderedList ([Int], Int) -> Property
propSlowMatchesSpec (NonNegative k) (Ordered xs)
= k < 5 && unique (map fst xs)
==> choosePSlow k xs == choosePSpec k xs
*Main> quickCheck propSlowMatchesSpec
+++ OK, passed 100 tests.
There are several more opportunities to make things faster. For instance, the disjoint test could be sped up using choose 2 instead of liftM2; or we might be able to ensure disjointness during element selection and prune the search even earlier; etc. How you want to improve it from here I leave to you -- but the basic technique (start with stupid and slow, then make it smarter, testing as you go) should be helpful to you.

Minimizing chunks in a matrix

Suppose I have the following matrix:
The matrix can be broken down into chunks such that each chunk must, for all rows, have the same number of columns where the value is marked true for that row.
For example, the following chunk is valid:
This means that rows do not have to be contiguous.
Columns do not have to be contiguous either, as the following is a valid chunk:
However, the following is invalid:
That said, what is an algorithm that can be used to select chunks such that the minimal number of chunks will be used when finding all the chunks?
Given the example, above, the proper solution is (items with the same color represent a valid chunk):
In the above example, three is the minimal number of chunks that this can be broken down into.
Note that the following is also a valid solution:
There's not a preference to the solutions, really, just to get the least number of chunks.
I thought of counting using adjacent cells, but that doesn't account for the fact that the column values don't have to be contiguous.
I believe the key lies in finding the chunks with the largest area given the constraints, removing those items, and then repeating.
Taking that approach, the solution is:
But how to traverse the matrix and find the largest area is eluding me.
Also note that reshuffling the rows and/or columns during the operations is valid (in order to find the largest area), but I'd imagine you can only do it after you remove the largest areas from the matrix (after one area is found and before moving on to the next).
You are doing circuit minimization on a truth table. For 4x4 truth tables, you can use a K map. The Quine-McCluskey algorithm is a generalization that can handle larger truth tables.
Keep in mind the problem is NP-Hard, so depending on the size of your truth tables, this problem can quickly grow to a size that is intractable.
This problem is strongly related to Biclustering, for which there are many efficient algorithms (and freely available implementations). Usually you will have to specify the number K of clusters you expect to find; if you don't have a good idea what K should be, you can proceed by binary search on K.
In case the biclusters don't overlap, you are done, otherwise you need to do some geometry to cut them into "blocks".
The solution I propose is fairly straightforward, but very time consuming.
It can be decomposed into 4 major steps:
find all the existing patterns in the matrix,
find all the possible combinations of these patterns,
remove all the incomplete pattern sets,
scan the remaining list to get the set with the minimum number of elements
First off, the algorithm below works on either column-major or row-major matrices. I chose columns for the explanations, but you may swap them for rows at your convenience, as long as it remains consistent across the whole process.
The sample code accompanying the answer is in OCaml, but doesn't use any specific feature of the language, so it should be easy to port to other ML dialects.
Step 1:
Each column can be seen as a bit vector. Observe that a pattern (what you call a chunk in your question) can be constructed by intersecting (i.e. and-ing) all the columns, or all the rows, composing it, or even a combination of both. So the first step is really about producing all the combinations of rows and columns (the powerset of the matrix's rows and columns, if you will), intersecting them at the same time, and filtering out the duplicates.
We consider the following interface for a matrix datatype:
module type MATRIX = sig
type t
val w : int (* the width of the matrix *)
val h : int (* the height ........ *)
val get : t -> int -> int -> bool (* cell value getter *)
end
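(As an aside, a minimal implementation of this interface, not part of the original answer, could simply wrap a bool array array; the fixed 4x4 size below is only an illustration.)

module M4 : MATRIX with type t = bool array array = struct
  type t = bool array array
  let w = 4
  let h = 4
  let get m i j = m.(i).(j)   (* the indexing convention is up to the implementer *)
end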
Now let's have a look at this step's code:
let clength = M.h
let rlength = M.w

(* the vector datatype used throughout the algorithm;
   operators on this type are in the module V *)
type vector = V.t

(* a pattern description and comparison operators *)
module Pattern = struct
  type t = {
    w : int;        (* width of the pattern *)
    h : int;        (* height of the pattern *)
    rows : vector;  (* which rows of the matrix are used *)
    cols : vector;  (* which columns... *)
  }
  let compare a b = Pervasives.compare a b
  let equal a b = compare a b = 0
end
(* pattern set: lets us store patterns without duplicates *)
module PS = Set.Make(Pattern)

(* a simple recursive loop applying f k times *)
let rec fold f acc k =
  if k < 0
  then acc
  else fold f (f acc k) (pred k)

(* extract a column/row of the given matrix *)
let cr_extract mget len =
  fold (fun v j -> if mget j then V.set v j else v) (V.null len) (pred len)

let col_extract m i = cr_extract (fun j -> M.get m i j) clength
let row_extract m i = cr_extract (fun j -> M.get m j i) rlength
(* encode a single column (resp. row) as a pattern *)
let col_encode c i =
  { w = 1; h = V.count c; rows = V.set (V.null clength) i; cols = c }
let row_encode r i =
  { h = 1; w = V.count r; cols = V.set (V.null rlength) i; rows = r }

(* try to add a column to a pattern *)
let col_intersect p c i =
  let col = V.l_and p.cols c in
  let h = V.count col in
  if h > 0
  then
    let row = V.set (V.copy p.rows) i in
    Some { w = V.count row; h = h; rows = row; cols = col }
  else None

let row_intersect p r i =
  let row = V.l_and p.rows r in
  let w = V.count row in
  if w > 0
  then
    let col = V.set (V.copy p.cols) i in
    Some { w = w; h = V.count col; rows = row; cols = col }
  else None
(* build the set of all patterns: start from single columns (resp. rows) and
   intersect each new column (resp. row) with the patterns found so far *)
let build_patterns m =
  let bp k ps extract encode intersect =
    let build (l, k) =
      let c = extract m k in
      let u = encode c k in
      let fld p acc =
        match intersect p c k with
        | None -> acc
        | Some npc -> PS.add npc acc
      in
      PS.fold fld l (PS.add u l), succ k
    in
    fst (fold (fun res _ -> build res) (ps, 0) k)
  in
  let ps = bp (pred rlength) PS.empty col_extract col_encode col_intersect in
  let ps = bp (pred clength) ps row_extract row_encode row_intersect in
  PS.elements ps
The V module must comply with the following signature for the whole algorithm:
module type V = sig
type t
val null : int -> t (* the null vector, ie. with all entries equal to false *)
val copy : t -> t (* copy operator *)
val get : t -> int -> bool (* get the nth element *)
val set : t -> int -> t (* set the nth element to true *)
val l_and : t -> t -> t (* intersection operator, ie. logical and *)
val l_or : t -> t -> t (* logical or *)
val count : t -> int (* number of elements set to true *)
val equal : t -> t -> bool (* equality predicate *)
end
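(A simple way to satisfy this signature, offered here only as an illustration, is to back the vectors with bool arrays; a production version would probably pack the bits into integers.)

module BoolVec : V = struct
  type t = bool array
  let null n = Array.make n false
  let copy = Array.copy
  let get v i = v.(i)
  let set v i = let v' = Array.copy v in v'.(i) <- true; v'
  let l_and a b = Array.mapi (fun i x -> x && b.(i)) a
  let l_or a b = Array.mapi (fun i x -> x || b.(i)) a
  let count v = Array.fold_left (fun n x -> if x then n + 1 else n) 0 v
  let equal a b = a = b
end

You would then use it as the V module referenced in the code above (e.g. module V = BoolVec).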
Step 2:
Combining the patterns can also be seen as a powerset construction, with a restriction: a valid pattern set may only contain patterns which don't overlap. Overlap holds for two patterns if both contain at least one common matrix cell.
With the pattern data structure used above, the overlap predicate is quite simple:
let overlap p1 p2 =
  let nullc = V.null h
  and nullr = V.null w in
  let o v1 v2 n = not (V.equal (V.l_and v1 v2) n) in
  o p1.rows p2.rows nullr && o p1.cols p2.cols nullc
The cols and rows of the pattern record indicate which coordinates in the matrix are included in the pattern. Thus a logical and on both fields will tell us if the patterns overlap.
For including a pattern in a pattern set, we must ensure that it does not overlap with any pattern of the set.
type pset = {
  n : int;                (* number of patterns in the set *)
  pats : Pattern.t list;
}

let overlap sp p =
  List.exists (fun x -> overlap x p) sp.pats

let scombine sp p =
  if overlap sp p
  then None
  else Some {
    n = sp.n + 1;
    pats = p :: sp.pats;
  }
let build_pattern_sets l =
  let pset l p =
    let sp = { n = 1; pats = [p] } in
    List.fold_left (fun l spx ->
      match scombine spx p with
      | None -> l
      | Some nsp -> nsp :: l
    ) (sp :: l) l
  in List.fold_left pset [] l
This step produces a lot of sets, and thus is very memory and computation intensive. It's certainly the weak point of this solution, but I don't see yet how to reduce the fold.
Step 3:
A pattern set is incomplete if, when rebuilding the matrix from it, we do not obtain the original one. So the process is rather simple:
let build_matrix ps =
  let add m p =
    let rec add_col p i = function
      | [] -> []
      | c :: cs ->
        let c =
          if V.get p.rows i
          then V.l_or c p.cols
          else c
        in c :: (add_col p (succ i) cs)
    in add_col p 0 m
  in
  (* null matrix as a list of null vectors *)
  let m = fold (fun l _ -> V.null clength :: l) [] (pred rlength) in
  List.fold_left add m ps.pats

let drop_incomplete_sets m l =
  (* convert the matrix to a list of columns *)
  let m' = fold (fun l k -> col_extract m k :: l) [] (pred rlength) in
  let complete m sp =
    let rebuilt = build_matrix sp in
    m = rebuilt
  in List.filter (fun x -> complete m' x) l
Step 4:
The last step is just selecting the set with the smallest number of elements:
let smallest_set l =
  let smallest ps1 ps2 = if ps1.n < ps2.n then ps1 else ps2 in
  match l with
  | [] -> assert false (* there should be at least 1 solution *)
  | h :: t -> List.fold_left smallest h t
The whole computation is then just the chaining of each steps:
let compute m =
  let (|>) f g = g f in
  build_patterns m |> build_pattern_sets |> drop_incomplete_sets m |> smallest_set
Notes
The algorithm above constructs a powerset of a powerset, with some limited filtering. There isn't, as far as I know, a way to reduce the search (as mentioned in a comment, if this is an NP-hard problem, there isn't any).
This algorithm checks all the possible solutions, and correctly returns an optimal one (tested with many matrices, including the one given in the problem description).
One quick remark regarding the heuristic you propose in your question:
it could easily be implemented using the first step, removing the largest pattern found, and recursing (a rough sketch is given after the example below). That would yield a solution much more rapidly than my algorithm. However, the solution found may not be optimal.
For instance, consider the following matrix:
.x...
.xxx
xxx.
...x.
The central 4-cell chunk is the largest which may be found, but the set using it would comprise 5 patterns in total.
.1...
.223
422.
...5.
Yet this solution uses only 4:
.1...
.122
334.
...4.
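A rough sketch of that greedy heuristic (this is only an illustration: is_empty and mask_pattern are hypothetical helpers, the latter returning a copy of the matrix with the pattern's cells cleared):

let rec greedy m acc =
  if is_empty m then List.rev acc
  else
    match build_patterns m with
    | [] -> List.rev acc
    | p :: ps ->
      (* keep the pattern with the largest area, remove it, and recurse *)
      let area p = p.w * p.h in
      let largest =
        List.fold_left (fun best q -> if area q > area best then q else best) p ps
      in
      greedy (mask_pattern m largest) (largest :: acc)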
Update:
Link to the full code I wrote for this answer.
