How to show a repeating decimal in a different base in Mathematica

I read this great question about showing repeating decimals and doing simple arithmetic on them, but I am wondering, given that (or simply starting from scratch), how to display them and do similar arithmetic on them in a different base?
For example,
(1/3)_2 = 0.010101... means the fraction 1/3, written in binary, repeats the binary digits 01.
Thank you.
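For intuition, independent of Mathematica: a minimal Python sketch (the name repeating_digits is mine, not from any answer below) that does base-b long division and splits the fractional digits into a non-repeating prefix and a repeating cycle:

from fractions import Fraction

def repeating_digits(q, base=10):
    # Long division on the fractional part, recording where each remainder
    # first appeared; a repeated remainder marks the start of the cycle.
    q = Fraction(q)
    int_part, rem = divmod(q.numerator, q.denominator)
    digits, seen = [], {}
    while rem and rem not in seen:
        seen[rem] = len(digits)
        rem *= base
        digit, rem = divmod(rem, q.denominator)
        digits.append(digit)
    if rem:  # a remainder repeated, so the tail from its first occurrence cycles
        start = seen[rem]
        return int_part, digits[:start], digits[start:]
    return int_part, digits, []  # terminating expansion

print(repeating_digits(Fraction(1, 3), 2))  # (0, [], [0, 1]), i.e. 0.(01) in base 2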

Here's an attempt. Mr.Wizard did much of the heavy lifting, especially in the base-preserving arithmetic.
rd[n_] := rd[n, 10]
rd[rd[n_, _], b_] := rd[n, b]
Format[rd[n_Integer | n_Real, base_]] := BaseForm[n, base]
Format[rd[q_Rational, base_]] :=
 Subscript[Row @ Flatten[{
     IntegerString[IntegerPart @ q, base], ".",
     RealDigits[FractionalPart @ q, base] /.
       {{nr___, r_List : {}}, pt_} :> {0 ~Table~ {-pt}, nr, OverBar /@ r}
   }], base /. 10 -> ""]
Base-preserving arithmetic can be implemented using this:
Scan[
(#[rd[q1_, b1_], rd[q2_, b2_] | tail___] ^:=
rd[ #[q1, q2, tail], If[b1 === b2, b1, 10] ]) &,
{Plus, Times, Power}
]
Checking to see that conversions to repeating decimals in several bases work. Also checking routines for adding, multiplying, and dividing:
Grid[{{"n", "value", "decimal", "rd[n,10]", "rd[n,2]", "rd[n,3]", "rd[n,7]"},
{"a", a = 14/3, N[a], rd[a, 10], rd[a, 2], rd[a, 3], rd[a, 7]},
{"b", b = 2/5, N[b], rd[b, 10], rd[b, 2], rd[b, 3], rd[b, 7]},
{"c", c = 1/2, N[c], rd[c, 10], rd[c, 2], rd[c, 3], rd[c, 7]},
{"a + b", a + b, N[a + b], rd[a, 10] + rd[b, 10],
rd[a, 2] + rd[b, 2], rd[a, 3] + rd[b, 3], rd[a, 7] + rd[b, 7]},
{"a + b + c", a + b + c, N[a + b + c],
rd[a, 10] + rd[b, 10] + rd[c, 10], rd[a, 2] + rd[b, 2] + rd[c, 2],
rd[a, 3] + rd[b, 3] + rd[c, 3],
rd[a, 7] + rd[b, 7] + rd[c, 7]},
{"a \[Times] b ", a*b, N[a*b],
rd[a, 10]*rd[b, 10], rd[a, 2]*rd[b, 2], rd[a, 3]*rd[b, 3],
rd[a, 7]*rd[b, 7]}, {"a \[Times] b \[Times] c ", a*b*c, N[a*b*c],
rd[a, 10]*rd[b, 10]*rd[c, 10], rd[a, 2]*rd[b, 2]*rd[c, 2],
rd[a, 3]*rd[b, 3]*rd[c, 3], rd[a, 7]*rd[b, 7]*rd[c, 7]},
{"a / b",
a/b, N[a/b], rd[a, 10]/rd[b, 10], rd[a, 2]/rd[b, 2],
rd[a, 3]/rd[b, 3], rd[a, 7]/rd[b, 7]}}, Dividers -> All]
Edit
The latest refinements (credit, once again, to Mr.Wizard) support nesting:
ClearAll[f, x, y]
f := (x/(x + 3 + 2 y) + y)/7 x; f
f // FullForm
x = 14/3; y = 1/3; f
BaseForm[N[f], 10]
x = rd[14/3, 10]; y = rd[1/3, 10]; f
x = rd[14/3, 2]; y = rd[1/3, 2]; f
x = rd[14/3, 5]; y = rd[1/3, 5]; f

Simple: BaseForm[1./12, 3] will show you 1/12 (the decimal point after the 1 is there to force a numerical approximation) in base 3 as a repeating decimal.
Extra: converting from base x to base ten is even simpler: x^^<NUMBER>

'RealDigits' is able to handle all kinds of bases, so for instance
RealDigits[1/3, 2]
{{{1, 0}}, -1}
Here the block {1, 0} repeats and the exponent is -1, i.e. 1/3 = 0.101010..._2 × 2^(-1) = 0.010101..._2 with 01 recurring. Refer to the documentation for the precise output format you may get; it can be rather complex.

Related

Maximizing a Boolean function of AND and XOR on partitions of a set

Given a set of distinct positive integers S, we need to partition the set in such a way that the following function is maximized. Let S1, S2, ..., Sn be the partition; n is at least 2.
F(S1, S2, ..., Sn) = AND(XOR(S1), XOR(S2), ..., XOR(Sn))
where AND is the bitwise AND operation and XOR is the bitwise XOR operation.
We need to print all such partitions.
I have tried the exponential approach as shown below.
I am looking for a solution with lesser complexity.
This problem is part of a homework, so only provide the hint.
from functools import reduce
from collections import defaultdict

def partition(collection):
    if len(collection) == 1:
        yield [ collection ]
        return
    first = collection[0]
    for smaller in partition(collection[1:]):
        # insert `first` in each of the subpartition's subsets
        for n, subset in enumerate(smaller):
            yield smaller[:n] + [[ first ] + subset] + smaller[n+1:]
        # put `first` in its own subset
        yield [ [ first ] ] + smaller
        # print("END OF THE LOOP")

x = 4
initialAssum = [2, 4, 8, 16]
something = initialAssum

def andxor(xs):
    def xorlist(xs):
        return reduce(lambda i, j: i ^ j, xs)
    tmp = [xorlist(x) for x in xs]
    return reduce(lambda i, j: i & j, tmp)

ans = defaultdict(list)
for n, p in enumerate(partition(something), 1):
    r = andxor(p)
    if len(p) > 1:
        ans[r].append(p)

m = max(ans.keys())
for a in ans[m]:
    print(a)
    # print(a, len(a))
print(f'{m}')
INPUT: the set itself
OUTPUT: the maximum value possible, followed by all the partitions producing it
INPUT:
[2, 3, 4]
OUTPUT:
2
[[4, 2], [3]]
[[2], [4, 3]]
INPUT:
[2, 4, 8, 16]
OUTPUT:
0
[[2], [4, 8, 16]]
[[2, 4], [8, 16]]
[[4], [2, 8, 16]]
[[2], [4], [8, 16]]
[[2, 4, 8], [16]]
[[4, 8], [2, 16]]
[[2], [4, 8], [16]]
[[2, 8], [4, 16]]
[[8], [2, 4, 16]]
[[2], [8], [4, 16]]
[[2, 4], [8], [16]]
[[4], [2, 8], [16]]
[[4], [8], [2, 16]]
[[2], [4], [8], [16]]
Hint: if we consider each bit separately from the highest to lowest, we can consider bit k as being set in the result if and only if that bit can appear an odd number of times in each part of the partition.
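To make the hint concrete: for a fixed partition, bit k is set in F exactly when every part contains an odd number of elements with bit k set, since that is when every part's XOR has bit k set. A minimal Python check (bit_set_in_result is my own name, not part of the code above):

def bit_set_in_result(parts, k):
    # Bit k survives the AND of the per-part XORs iff every part's XOR has
    # bit k set, i.e. each part holds an odd count of elements with bit k set.
    return all(sum((e >> k) & 1 for e in part) % 2 == 1 for part in parts)

# For the partition [[4, 2], [3]] of {2, 3, 4}: F = (4 ^ 2) & 3 = 6 & 3 = 2,
# so only bit 1 is set, which the check confirms.
print([k for k in range(4) if bit_set_in_result([[4, 2], [3]], k)])  # [1]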

Algorithmic Optimization [duplicate]

The question I'm working on is:
Find which numbers' sums of squared factors are perfect squares, given a specific range.
So if the range was (1..10) you would get each number's factors (all factors of 1, all factors of 2, all factors of 3, etc.), square those factors, then add them together. Finally, check if that sum is a perfect square.
I am stuck on refactoring/optimization because my solution is too slow.
Here is what I came up with:
def list_squared(m, n)
  ans = []
  range = (m..n)
  range.each do |i|
    factors = (1..i).select { |j| i % j == 0 }
    squares = factors.map { |k| k ** 2 }
    sum = squares.inject { |sum, x| sum + x }
    if sum == Math.sqrt(sum).floor ** 2
      all = []
      all += [i, sum]
      ans << all
    end
  end
  ans
end
This is an example of what I would put in the method:
list_squared(1, 250)
And then the desired output would be an array of arrays with each array containing the number whose sum of squared factors was a perfect square and the sum of those squared factors:
[[1, 1], [42, 2500], [246, 84100]]
I would start by introducing some helper methods (factors and square?) to make your code more readable.
Furthermore, I would reduce the number of ranges and arrays to improve memory usage.
require 'prime'

def factors(number)
  [1].tap do |factors|
    primes = number.prime_division.flat_map { |p, e| Array.new(e, p) }
    (1..primes.size).each do |i|
      primes.combination(i).each do |combination|
        factor = combination.inject(:*)
        factors << factor unless factors.include?(factor)
      end
    end
  end
end

def square?(number)
  square = Math.sqrt(number)
  square == square.floor
end

def list_squared(m, n)
  (m..n).map do |number|
    sum = factors(number).inject { |sum, x| sum + x ** 2 }
    [number, sum] if square?(sum)
  end.compact
end
list_squared(1, 250)
A benchmark with a narrow range (up to 250) shows only a minor improvement:
require 'benchmark'
n = 1_000
Benchmark.bmbm(15) do |x|
  x.report("original_list_squared :") { n.times do; original_list_squared(1, 250); end }
  x.report("improved_list_squared :") { n.times do; improved_list_squared(1, 250); end }
end
# Rehearsal -----------------------------------------------------------
# original_list_squared : 2.720000 0.010000 2.730000 ( 2.741434)
# improved_list_squared : 2.590000 0.000000 2.590000 ( 2.604415)
# -------------------------------------------------- total: 5.320000sec
# user system total real
# original_list_squared : 2.710000 0.000000 2.710000 ( 2.721530)
# improved_list_squared : 2.620000 0.010000 2.630000 ( 2.638833)
But a benchmark with a wider range (up to 10000) shows a much better performance than the original implementation:
require 'benchmark'
n = 10
Benchmark.bmbm(15) do |x|
  x.report("original_list_squared :") { n.times do; original_list_squared(1, 10000); end }
  x.report("improved_list_squared :") { n.times do; improved_list_squared(1, 10000); end }
end
# Rehearsal -----------------------------------------------------------
# original_list_squared : 36.400000 0.160000 36.560000 ( 36.860889)
# improved_list_squared : 2.530000 0.000000 2.530000 ( 2.540743)
# ------------------------------------------------- total: 39.090000sec
# user system total real
# original_list_squared : 36.370000 0.120000 36.490000 ( 36.594130)
# improved_list_squared : 2.560000 0.010000 2.570000 ( 2.581622)
tl;dr: The bigger the N the better my code performs compared to the original implementation...
One way to make it more efficient is to use Ruby's built-in method Prime::prime_division.
For any number n, if prime_division returns an array containing a single element, that element will be [n,1] and n will have been shown to be prime. That prime number has factors n and 1, so must be treated differently than numbers that are not prime.
require 'prime'
def list_squared(range)
  range.each_with_object({}) do |i, h|
    facs = Prime.prime_division(i)
    ssq =
      case facs.size
      when 1 then facs.first.first**2 + 1
      else facs.inject(0) { |tot, (a, b)| tot + b * (a**2) }
      end
    h[i] = facs if (Math.sqrt(ssq).to_i)**2 == ssq
  end
end
list_squared(1..10_000)
#=> { 1=>[], 48=>[[2, 4], [3, 1]], 320=>[[2, 6], [5, 1]], 351=>[[3, 3], [13, 1]],
# 486=>[[2, 1], [3, 5]], 1080=>[[2, 3], [3, 3], [5, 1]],
# 1260=>[[2, 2], [3, 2], [5, 1], [7, 1]], 1350=>[[2, 1], [3, 3], [5, 2]],
# 1375=>[[5, 3], [11, 1]], 1792=>[[2, 8], [7, 1]], 1836=>[[2, 2], [3, 3], [17, 1]],
# 2070=>[[2, 1], [3, 2], [5, 1], [23, 1]], 2145=>[[3, 1], [5, 1], [11, 1], [13, 1]],
# 2175=>[[3, 1], [5, 2], [29, 1]], 2730=>[[2, 1], [3, 1], [5, 1], [7, 1], [13, 1]],
# 2772=>[[2, 2], [3, 2], [7, 1], [11, 1]], 3072=>[[2, 10], [3, 1]],
# 3150=>[[2, 1], [3, 2], [5, 2], [7, 1]], 3510=>[[2, 1], [3, 3], [5, 1], [13, 1]],
# 4104=>[[2, 3], [3, 3], [19, 1]], 4305=>[[3, 1], [5, 1], [7, 1], [41, 1]],
# 4625=>[[5, 3], [37, 1]], 4650=>[[2, 1], [3, 1], [5, 2], [31, 1]],
# 4655=>[[5, 1], [7, 2], [19, 1]], 4998=>[[2, 1], [3, 1], [7, 2], [17, 1]],
# 5880=>[[2, 3], [3, 1], [5, 1], [7, 2]], 6000=>[[2, 4], [3, 1], [5, 3]],
# 6174=>[[2, 1], [3, 2], [7, 3]], 6545=>[[5, 1], [7, 1], [11, 1], [17, 1]],
# 7098=>[[2, 1], [3, 1], [7, 1], [13, 2]], 7128=>[[2, 3], [3, 4], [11, 1]],
# 7182=>[[2, 1], [3, 3], [7, 1], [19, 1]], 7650=>[[2, 1], [3, 2], [5, 2], [17, 1]],
# 7791=>[[3, 1], [7, 2], [53, 1]], 7889=>[[7, 3], [23, 1]],
# 7956=>[[2, 2], [3, 2], [13, 1], [17, 1]],
# 9030=>[[2, 1], [3, 1], [5, 1], [7, 1], [43, 1]],
# 9108=>[[2, 2], [3, 2], [11, 1], [23, 1]], 9295=>[[5, 1], [11, 1], [13, 2]],
# 9324=>[[2, 2], [3, 2], [7, 1], [37, 1]]}
This calculation took approximately 0.15 seconds.
For i = 6174
(2**1) * (3**2) * (7**3) #=> 6174
and
1*(2**2) + 2*(3**2) + 3*(7**2) #=> 169 == 13*13
The trick that frequently solves questions like this is to switch from trial division to a sieve. In Python (sorry):
def list_squared(m, n):
    factor_squared_sum = {i: 0 for i in range(m, n + 1)}
    for factor in range(1, n + 1):
        i = n - n % factor  # greatest multiple of factor less than or equal to n
        while i >= m:
            factor_squared_sum[i] += factor ** 2
            i -= factor
    return {i for (i, fss) in factor_squared_sum.items() if isqrt(fss) ** 2 == fss}

def isqrt(n):
    # from http://stackoverflow.com/a/15391420
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x
The next optimization is to step factor only to isqrt(n), adding the factor squares in pairs (e.g., 2 and i // 2).
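A sketch of what that next step could look like, reusing isqrt from above (list_squared_paired is a hypothetical name, not from the original answer):

def list_squared_paired(m, n):
    # Same sieve as above, but the factor loop only runs up to isqrt(n); each
    # factor f is added together with its cofactor i // f, and pairs already
    # handled when the smaller member was the loop variable are skipped.
    factor_squared_sum = {i: 0 for i in range(m, n + 1)}
    for f in range(1, isqrt(n) + 1):
        i = n - n % f  # greatest multiple of f less than or equal to n
        while i >= m:
            cofactor = i // f
            if cofactor >= f:  # otherwise this pair was counted when f was smaller
                factor_squared_sum[i] += f ** 2
                if cofactor > f:
                    factor_squared_sum[i] += cofactor ** 2
            i -= f
    return {i for (i, fss) in factor_squared_sum.items() if isqrt(fss) ** 2 == fss}

print(sorted(list_squared_paired(1, 250)))  # [1, 42, 246], the numbers from the expected output above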

How to sum two matrices?

How to write a method that accepts two square matrices (n x n two-dimensional arrays) and returns their sum? Both matrices passed into the method will be of size n x n (square), containing only integers.
How to sum two matrices:
Take each cell [n][m] from the first matrix and add it to the [n][m] cell from the second matrix. This will be cell [n][m] in the solution matrix.
like:
|1 2 3|
|3 2 1|
|1 1 1|
+
|2 2 1|
|3 2 3|
|1 1 3|
=
|3 4 4|
|6 4 4|
|2 2 4|
matrix_addition( [ [1, 2, 3], [3, 2, 1,], [1, 1, 1] ], [ [2, 2, 1], [3, 2, 3], [1, 1, 3] ] )
returns [ [3, 4, 4], [6, 4, 4], [2, 2, 4] ]
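That cell-by-cell rule translates directly into code. For comparison, a minimal sketch in Python (the Ruby answers below do the equivalent):

def matrix_addition(a, b):
    # Add corresponding cells of two equally sized square matrices.
    return [[a[i][j] + b[i][j] for j in range(len(a))] for i in range(len(a))]

print(matrix_addition([[1, 2, 3], [3, 2, 1], [1, 1, 1]],
                      [[2, 2, 1], [3, 2, 3], [1, 1, 3]]))
# [[3, 4, 4], [6, 4, 4], [2, 2, 4]]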
Even though it is possible to define a method to do this, it is much easier to use Ruby's built-in Matrix library:
require 'matrix'
m1 = Matrix[ [1, 2, 3], [3, 2, 1], [1, 1, 1] ]
m2 = Matrix[ [2, 2, 1], [3, 2, 3], [1, 1, 3] ]
sum = m1 + m2
Yes, certainly, use the Matrix class methods, but here is a way using recursion that might be of interest.
Code
def sum_arrays(a1, a2)
  t = a1.zip(a2)
  t.map { |e1, e2| (e1.is_a? Array) ? sum_arrays(e1, e2) : e1 + e2 }
end
Examples
a1 = [1,2,3]
a2 = [4,5,6]
sum_arrays(a1, a2)
#=> [5, 7, 9]
a1 = [[1,2,3], [4,5]]
a2 = [[6,7,8], [9,10]]
sum_arrays(a1, a2)
#=> [[7, 9, 11], [13, 15]]
a1 = [[[ 1, 2, 3], [ 4, 5]],
[[ 6, 7], [ 8, 9, 10]]]
a2 = [[[11, 12, 13], [14, 15]],
[[16, 17], [18, 19, 20]]]
sum_arrays(a1, a2)
#=> [[[12, 14, 16], [18, 20]],
# [[22, 24], [26, 28, 30]]]
Generalization
You could make greater use of this method by passing an operator.
Code
def op_arrays(a1, a2, op)
  t = a1.zip(a2)
  t.map { |e1, e2| (e1.is_a? Array) ? op_arrays(e1, e2, op) : e1.send(op, e2) }
end
Examples
a1 = [[1,2,3], [4,5]]
a2 = [[6,7,8], [9,10]]
op_arrays(a1, a2, '+') #=> [[7, 9, 11], [13, 15]]
op_arrays(a1, a2, '-') #=> [[-5, -5, -5], [-5, -5]]
op_arrays(a1, a2, '*') #=> [[6, 14, 24], [36, 50]]
You could alternatively pass the operator as a symbol:
op_arrays(a1, a2, :+)
#=> [[7, 9, 11], [13, 15]]
Have you used Ruby's Matrix class?
It has a #+ operator method.

Counting subsets with given sizes of a set

Given a set C with n elements (duplicates allowed) and a partition P of n,
P = {i1, i2, ...} with i1 + i2 + ... = n,
how many different decompositions of C into subsets of sizes i1, i2, ... are there?
Example :
C = {2 2 2 3}
P = {2 2}
C = {2 2} U {2 3}
P = {1 1 2}
C = {2} U {2} U {2 3}
C = {2} U {3} U {2 2}
P = {1 3}
C = {2} U {2 2 3}
C = {3} U {2 2 2}
I have a solution, but it is inefficient when C has more than a dozen elements.
Thanks in advance
Philippe
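For reference, a brute-force sketch in Python (names are mine; only practical for very small C) that reproduces the counts in the examples above, and is essentially the kind of approach that becomes too slow:

from itertools import permutations

def count_decompositions(C, sizes):
    # Try every ordering of C, cut it into consecutive blocks of the given
    # sizes, and count distinct unordered decompositions (each block is a
    # multiset, and the collection of blocks is unordered as well).
    seen = set()
    for perm in set(permutations(C)):
        blocks, pos = [], 0
        for s in sizes:
            blocks.append(tuple(sorted(perm[pos:pos + s])))
            pos += s
        seen.add(tuple(sorted(blocks)))
    return len(seen)

print(count_decompositions([2, 2, 2, 3], [2, 2]))     # 1
print(count_decompositions([2, 2, 2, 3], [1, 1, 2]))  # 2
print(count_decompositions([2, 2, 2, 3], [1, 3]))     # 2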
The fact that the order of decomposition does not matter to you makes it much harder. That is, you are viewing {2 2} U {2 3} as the same as {2 3} U {2 2}. Still I have an algorithm that is better than what you have, but is not great.
Let me start it with a realistically complicated example. Our set will be A B C D E F F F F G G G G. The partition will be 1 1 1 1 2 2 5.
My first simplification will be to represent the information we care about in the set with the data structure [[2, 4], [5, 1]], meaning 2 elements are repeated 4 times, and 5 are repeated once.
My second apparent complication will be to represent the partition with [[5, 1, 1], [2, 2, 1], [1, 4, 1]]. The pattern may not be obvious. Each entry is of the form [size, count, frequency]. So the one distinct instance of 2 partitions of size 2 turns into [2, 2, 1]. We're not using frequency yet, but it is counting distinguishable piles of the same size and commonness.
Now we're going to recurse as follows. We'll take the most common element, and find all of the ways to use it up. So in our case we take one of the piles of size 4, and find that we can divide it as follows, rearranging each remaining partition strategy in lexicographic order:
[4] leaving [[1, 1, 1], [2, 2, 1], [1, 4, 1]] = [[2, 2, 1], [1, 4, 1], [1, 1, 1]].
[3, [1, 0], 0] leaving [[2, 1, 1], [1, 1, 1], [2, 1, 1], [1, 4, 1]] = [[2, 1, 2], [1, 4, 1], [1, 1, 1]]. (Note that we're now using frequency.)
[3, 0, 1] leaving [[2, 1, 1], [2, 2, 1], [0, 1, 1], [1, 3, 1]] = [[2, 2, 1], [2, 1, 1], [1, 3, 1]]
[2, [2, 0], 0] leaving [[3, 1, 1], [0, 1, 1], [2, 1, 1], [1, 4, 1]] = [[3, 1, 1], [2, 1, 1], [1, 4, 1]]
[2, [1, 1], 0] leaving [[3, 1, 1], [1, 2, 1], [1, 4, 1]] = [[3, 1, 1], [1, 4, 1], [1, 2, 1]]
[2, [1, 0], [1]] leaving [[3, 1, 1], [1, 1, 1], [2, 1, 1], [0, 1, 1], [1, 3, 1]] = [[3, 1, 1], [2, 1, 1], [1, 4, 1], [1, 1, 1]]
[2, 0, [1, 1]] leaving [[3, 1, 1], [2, 2, 1], [0, 2, 1], [1, 2, 1]] = [[3, 1, 1], [2, 2, 1], [1, 2, 1]]
[1, [2, 1]] leaving [[4, 1, 1], [0, 1, 1], [1, 1, 1], [1, 4, 1]] = [[4, 1, 1], [1, 4, 1], [1, 1, 1]]
[1, [2, 0], [1]] leaving [[4, 1, 1], [0, 1, 1], [2, 1, 1], [0, 1, 1], [1, 3, 1]] = [[4, 1, 1], [2, 1, 1], [1, 3, 1]]
[1, [1, 0], [1, 1]] leaving [[4, 1, 1], [1, 1, 1], [2, 1, 1], [0, 2, 1], [1, 2, 1]] = [[4, 1, 1], [2, 1, 1], [1, 2, 1], [1, 1, 1]]
[1, 0, [1, 1, 1]] leaving [[4, 1, 1], [2, 2, 1], [0, 3, 1], [1, 1, 1]] = [[4, 1, 1], [2, 2, 1], [1, 1, 1]]
[0, [2, 2]] leaving [[5, 1, 1], [0, 2, 1], [1, 4, 1]] = [[5, 1, 1], [1, 4, 1]]
[0, [2, 1], [1]] leaving [[5, 1, 1], [0, 1, 1], [1, 1, 1], [0, 1, 1], [1, 3, 1]] = [[5, 1, 1], [1, 3, 1], [1, 1, 1]]
[0, [2, 0], [1, 1]] leaving [[5, 1, 1], [0, 2, 1], [2, 1, 1], [0, 2, 1], [1, 2, 1]] = [[5, 1, 1], [2, 1, 1], [1, 2, 1]]
[0, [1, 1], [1, 1]] leaving [[5, 1, 1], [1, 2, 1], [0, 2, 1], [1, 2, 1]] = [[5, 1, 1], [1, 2, 2]]
[0, [1, 0], [1, 1, 1]] leaving [[5, 1, 1], [1, 1, 1], [2, 1, 1], [0, 3, 1], [1, 1, 1]] = [[5, 1, 1], [2, 1, 1], [1, 1, 2]]
[0, 0, [1, 1, 1, 1]] leaving [[5, 1, 1], [2, 2, 1], [0, 4, 1]] = [[5, 1, 1], [2, 2, 1]]
Now each of those subproblems can be solved recursively. This may feel like we're on the way to constructing them all, but we aren't, because we memoize the recursive steps. It turns out that there are a lot of ways that the first two groups of 8 can wind up with the same 5 left overs. With memoization we don't need to repeatedly recalculate those solutions.
That said, we'll do better. Groups of 12 elements should not pose a problem. But we're not doing that much better. I wouldn't be surprised if it starts breaking down somewhere around groups of 30 or so elements with interesting sets of partitions. (I haven't coded it. It may be fine at 30 and break down at 50. I don't know where it will break down. But given that you're iterating over sets of partitions, at some fairly small point it will break down.)
All partitions can be found in 2 stages.
First: from P, make a new ordered partition of n, P_S = {P_i1, P_i2, ..., P_ip}, by summing identical i's.
P = {1, 1, 1, 1, 2, 2, 5}
P_S = (4, 4, 5)
Make partitions {C_i1, C_i2, ..., C_ip} of C with respect to P_S. Note that each C_ix is a multi-set, like C; this step partitions C into multi-sets by the sizes of the final parts.
Second: for each {C_i1, C_i2, ..., C_ip} and for each ix, x = {1, 2, ..., p}, find the number of partitions of C_ix into t (the number of ix's in P) sets with ix elements. Call this number N(C_ix, ix, t).
The total number of partitions is:
sum over all {C_i1, C_i2, ..., C_ip} of ( product of N(C_ix, ix, t) over x = {1, 2, ..., p} )
The first part can be done recursively quite simply. The second is more complicated. Partitioning a multi-set M into n parts with k elements each is the same as finding all partially sorted lists with elements from M. A partially ordered list is of the form:
a_1_1, a_1_2, ..., a_1_k, a_2_1, a_2_2, ..., a_2_k, ....
where a_i_x <= a_i_y if x < y, and (a_x_1, a_x_2, ..., a_x_k) is lexicographically <= (a_y_1, a_y_2, ..., a_y_k) if x < y. With these 2 conditions it is possible to generate all partitions counted by N(C_ix, ix, t) recursively.
For some cases N(C_ix, ix, t) is easy to calculate. Define |C_ix| as the number of different elements in the multi-set C_ix:
if t = 1 then 1
if |C_ix| = 1 then 1
if |C_ix| = 2 then (let m = the minimal number of occurrences of an element in C_ix) floor(m/2) + 1
in general, if |C_ix| = 2, then the number of partitions of m into numbers <= t.

Prolog findall existential quantifier

I'm stuck on a problem in Prolog.
Here's some of the code I use.
has_same_elements([X|_], Y) :-
    permutation(X, Xnew),
    member(Xnew, Y), !.
has_same_elements([_|Tail], Y) :-
    has_same_elements(Tail, Y).
This gets two lists of lists as input, and decides whether or not they contain lists with the same elements. E.g. [[1,2],[3,4]] has the same elements as [[2,1],[4,3]]. This works fine.
Now my findall:
findall(V, (verdeling2(S,Perm,V), \+X^(X\=V,verdeling2(S,Perm,X),has_same_elements(X,V))) ,Verd).
All that's important to know is that verdeling2/3 is a predicate that yields different lists of lists (as mentioned above), constructed from a permutation of [1,2,3,4,...].
Some different outputs of verdeling2/3 (according to the permutation as input) are:
V = [[[1, 2], [3, 4]]] ;
V = [[[2, 1], [3, 4]]] ;
V = [[[2, 3], [1, 4]]] ;
V = [[[2, 3], [4, 1]]] ;
V = [[[1, 3], [2, 4]]] ;
V = [[[3, 1], [2, 4]]] ;
V = [[[3, 2], [1, 4]]] ;
V = [[[3, 2], [4, 1]]] ;
V = [[[1, 3], [4, 2]]] ;
V = [[[3, 1], [4, 2]]] ;
V = [[[3, 4], [1, 2]]] ;
V = [[[3, 4], [2, 1]]] ;
V = [[[1, 2], [4, 3]]] ;
V = [[[2, 1], [4, 3]]] ;
V = [[[2, 4], [1, 3]]] ;
V = [[[2, 4], [3, 1]]] ;
V = [[[1, 4], [2, 3]]] ;
V = [[[4, 1], [2, 3]]] ;
V = [[[4, 2], [1, 3]]] ;
V = [[[4, 2], [3, 1]]] ;
V = [[[1, 4], [3, 2]]] ;
V = [[[4, 1], [3, 2]]] ;
V = [[[4, 3], [1, 2]]] ;
V = [[[4, 3], [2, 1]]] ;
Now I'd want something that gives me an overview of all lists that don't contain the same elements (using has_same_elements). I thought my use of findall should do the trick, but it returns the full set of solutions instead of filtering out the ones I don't want.
I am assuming that you are not using a constraint logic programming language
or some such, in which A\=B would do more than just \+ A=B.
I guess the X\=V always fails, so the goals behind it are never executed
and the negation \+ is always true.
X\=V always fails because X=V always succeeds, since X is a fresh
variable in your context.
Probably some reordering would help.
Best Regards
