Propagating recursive property if all predecessors satisfy it - datalog

I am working on a directed acyclic graph where a node can satisfy a property. I want to propagate this property recursively on other nodes. The rule is that a node satisfies this property if and only if all of its direct predecessors satisfy it as well.
The following of course does not work, since it would be enough if ONE parent satisfied the property, while I need ALL parents to satisfy it in order to propagate.
satisfies(node) :- isParent(parent, node), satisfies(parent).
Things I tried:
writing a doesNotSatisfy rule instead, which would work, but by doing that, I ran into the exact same issue elsewhere in the program
negating the thing with a !, but that gave me cyclic negations
What helped me in the past with datalog was to solve the problem naturally as a human on paper. But here I don't see any other way than to try to satisfy the "for all" rule.
Is that somehow possible in datalog? And if yes, how?

Okay, what you describe can certainly be achieved in standard Datalog without introducing cyclic negative dependencies. You need two things:
forall x: p(x) is the same as ! exists x: ! p(x), which it seems you already figured out
separation of the base property from the derived property
Here's a very generic example that you can easily adapt.
Here's a simple graph (I'll propagate along successors here; the predecessor case from your question is symmetric). The nodes with + possess the base property (base case). The ones with * are those that end up satisfying the property, either directly or because all of their successors satisfy it.
1 -- 2 -- 3 * --- 5 + *
     |\    \----- 6 + *
     | \-- 4 * -- 8 + *
     \---- 7
First, encode the graph.
transition(1,2).
transition(2,3).
transition(2,4).
transition(3,5).
transition(3,6).
transition(2,7).
transition(4,8).
Auxiliary to enumerate all nodes.
node(Node) :- transition(Node, _).
node(Node) :- transition(_, Node).
Mark the base case (+ nodes).
property(5).
property(6).
property(8).
Find the nodes that do not have the property:
notProperty(Node) :- node(Node), ! property(Node).
Now we figure out the * nodes. Either we know it trivially via property(Node), or the node has successors and none of them lacks the property. Following the forall-as-double-negation idea, we first name the counterexample and then require its absence:
hasFailingSuccessor(Node) :- transition(Node, Next), notProperty(Next).
propertyDerived(Node) :- property(Node).
propertyDerived(Node) :- transition(Node, _), ! hasFailingSuccessor(Node).
The program avoids cyclic negation because propertyDerived is kept separate from property; your original rule conflates the two.
Now if you query, you'd get what you expect!
?- propertyDerived(Node).
3
4
5
6
8
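By the way, if the propagation has to run through several levels, as in your original recursive formulation, the same separation idea still works: propagate the negative information positively, so negation is only ever applied to non-recursive predicates and the program stays stratified. A sketch, back in your isParent(Parent, Node) direction (assuming, on my part, that a node with neither the base property nor any parent does not satisfy):
hasParent(Node) :- isParent(_, Node).
notSatisfies(Node) :- node(Node), ! property(Node), ! hasParent(Node).
notSatisfies(Node) :- node(Node), ! property(Node), isParent(Parent, Node), notSatisfies(Parent).
satisfies(Node) :- node(Node), ! notSatisfies(Node).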

Related

How could I remove backtracking from this code?

The goal is to select shapes that don't touch each other using constraints (clpfd). Calling start(Pairs,4) would return Pairs = [1,3,5,7].
One problem I noticed is that if I print Final before labeling, it already prints [1,3,5,7], which means labeling isn't doing anything.
What could I change/add to this code in order to fix that and also remove possible backtracking?
:- use_module(library(clpfd)).
:- use_module(library(lists)).

% init initialises Pairs and Max
% Pairs - the elements inside the Nth list in Pairs
%         represent the index of the shapes that shape N can touch
init([[3,5,6,7],[4,5,7],[1,4,5,7],[2,3,7],[1,2,3,7],[1],[1,2,3,4,5]], 7).

start(Final, N) :-
    init(Pairs, Max),
    length(Final, N),
    domain(Final, 1, Max),
    ascending(Final),
    all_different(Final),
    rules(Pairs, Final),
    labeling([], Final).

rules(_, []).
rules(Pairs, [H|T]) :-
    nth1(H, Pairs, PairH),
    secondrule(PairH, T),
    rules(Pairs, T).

secondrule(_, []).
secondrule(PairH, [H|T]) :-
    element(_, PairH, H),
    secondrule(PairH, T).

ascending([_|[]]).
ascending([H|[T1|T2]]) :-
    H #< T1,
    ascending([T1|T2]).
This is the Independent Set problem, which is NP-hard. It is therefore unlikely that anybody will ever find a way to solve general instances without search (backtracking).
Regarding your code: labeling/2 does nothing, because your rules/2 is in fact a search procedure that returns the solution if it can find it. all_different/1 is useless too, because it is implied by ascending/1.
Presumably, your goal is a program that sets up constraints (without any search) and then searches for a solution with labeling/2. For that, you need to rethink your constraint model. Read up a bit on independent sets.
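For illustration only, here is a sketch of what such a model could look like in your SICStus-style setup: one Boolean per shape, a conflict constraint posted up-front for every pair of shapes that may not be combined (reading init/2's lists, as your secondrule/2 does, as the shapes each shape may share the solution with), and search confined to labeling/2. All predicate names below are made up:

% B_I = 1 iff shape I is selected; conflicting shapes exclude each other.
independent_set(Selected, N) :-
    init(Compatible, Max),
    length(Bs, Max),
    domain(Bs, 0, 1),
    sum(Bs, #=, N),
    post_conflicts(1, Max, Compatible, Bs),
    labeling([], Bs),
    bools_to_indices(Bs, 1, Selected).

% For every pair I < J that is not compatible: BI + BJ #=< 1.
post_conflicts(I, Max, Compatible, Bs) :-
    I =< Max, !,
    nth1(I, Compatible, CompI),
    nth1(I, Bs, BI),
    J is I + 1,
    post_row(J, Max, CompI, BI, Bs),
    I1 is I + 1,
    post_conflicts(I1, Max, Compatible, Bs).
post_conflicts(_, _, _, _).

post_row(J, Max, CompI, BI, Bs) :-
    J =< Max, !,
    (   memberchk(J, CompI)
    ->  true                  % compatible pair, no constraint needed
    ;   nth1(J, Bs, BJ),
        BI + BJ #=< 1         % selecting both is forbidden
    ),
    J1 is J + 1,
    post_row(J1, Max, CompI, BI, Bs).
post_row(_, _, _, _, _).

% Translate the 0/1 vector back into a list of shape indices.
bools_to_indices([], _, []).
bools_to_indices([1|Bs], I, [I|Is]) :- I1 is I + 1, bools_to_indices(Bs, I1, Is).
bools_to_indices([0|Bs], I, Is)    :- I1 is I + 1, bools_to_indices(Bs, I1, Is).

With this model, ?- independent_set(S, 4). should yield S = [1,3,5,7], with all constraints posted before the search and the search itself done entirely by labeling/2.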

extracting algorithm for building prolog rules

I have to create some "list of predicates" in Prolog, but I don't fully understand the way of thinking I have to learn in order to create working predicates.
I've seen some popular tutorials (maybe I've not been searching precisely enough), but I cannot find any tutorial that teaches how to plan an algorithm using REALLY elementary steps.
For example...
Task:
Write a concat(X,Y,Z) predicate that takes the elements from the lists X and Y and concatenates them into the list Z.
My analysing algorithm:
Firstly I define a domain of the number of elements I'll be concatenating (the lengths of the lists X and Y) as non-negative integers (XCount >= 0 and YCount >= 0). Then I create a predicate for the first case, which is XCount = 0 and YCount = 0:
concat([],[],[]).
... then test it and find that it is working for the first case.
Then, I create a predicate for the second case, where XCount = 1 and YCount = 0, as:
concat(X,[],X).
... and again test it, and find that it works, with some unexpected positive results.
Results:
I can see that this clause works not only for XCount = 1 but also for XCount = 0. So I can delete concat([],[],[]). and keep only concat(X,[],X)., because X = [] in concat(X,[],X). covers the case concat([],[],[]).
The second unexpected result is that the clause works not only for XCount in {0,1} but for all XCount >= 0.
Then I analyse the domain and search for elements that weren't handled yet, and find that the simplest way is to create a second predicate for YCount > 0.
Remembering that using just X as the first argument can cover all XCount >= 0, I create a case for YCount = 1 and all Xes, which is:
concat(X,[Y|_],[Y|X]).
And this is the place where my algorithm gets a brain-buffer overflow.
Respecting stackoverflow rules, I'm asking precisely.
Questions:
Is there any way to find the answer by myself? By which I mean: not an answer to the problem, but to the algorithm I've shown for solving it.
In other words, the algorithm behind my algorithm.
If you can answer question 1: how can I find this type of hint in the future? Is there a specific name for my problem?
How precise do I have to be, i.e. how many cases, and in what language can I try to implement my algorithm, one that is not just "doing" things but "thinking" about how to plan and create other algorithms?
Lists are not defined by the count of elements in them. Lists are defined recursively: a list is either empty, or a pair of an element and the rest of the elements:
list([]).
list([_A|B]) :- list(B).
Lists can be the same:
same_lists([], []).
same_lists([A|B], [A|C]) :- same_lists(B, C).
Or one can be shorter than the other, i.e. its prefix:
list_prefix([], L):- list(L).
list_prefix([A|B], [A|C]):- list_prefix(B, C).
Where the prefix ends, the suffix begins:
list_split([], L, L):- list(L).
list_split([A|B], Sfx, [A|C]):- list_split(B, Sfx, C).
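Note that list_split/3 is exactly the concat/3 the task asks for: it is the textbook append/3, with the arguments arranged as prefix, suffix, whole list.
concat(X, Y, Z) :- list_split(X, Y, Z).

% ?- concat([a,b], [c,d], Z).
% Z = [a,b,c,d]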
So, the general advice is: follow the types, see how they are constructed, and analyze the situation according to all possible cases. With lists, that means either the empty or the non-empty list.

Example channelling constraints ECLiPSe

Can someone provide a simple example of channelling constraints?
Channelling constraints are used to combine viewpoints of a constraint problem. The Handbook of Constraint Programming gives a good explanation of how this works and why it can be useful:
The search variables can be the variables of one of the viewpoints, say X1 (this is discussed further below). As search proceeds, propagating the constraints C1 removes values from the domains of the variables in X1. The channelling constraints may then allow values to be removed from the domains of the variables in X2. Propagating these value deletions using the constraints of the second model, C2, may remove further values from these variables, and again these removals can be translated back into the first viewpoint by the channelling constraints. The net result can be that more values are removed within viewpoint V1 than by the constraints C1 alone, leading to reduced search.
I do not understand how this is implemented. What are these constraints exactly, how do they look like in a real problem? A simple example would be very helpful.
As stated in Dual Viewpoint Heuristics for Binary Constraint Satisfaction Problems (P.A. Geelen):
Channelling constraints between two different models allow for the expression of a relationship between two sets of variables, one from each model.
This implies that assignments in one of the viewpoints can be translated into assignments in the other and vice versa and, once search starts, values excluded from one model can be excluded from the other as well.
Let me throw in an example I implemented a while ago while writing a Sudoku solver.
Classic viewpoint
Here we interpret the problem the same way a human would: using the integers between 1 and 9 and the rule that all rows, columns and blocks must contain every integer exactly once.
We can easily state this in ECLiPSe using something like:
% Domain
dim(Sudoku,[N,N]),
Sudoku[1..N,1..N] :: 1..N
% For X = rows, cols, blocks
alldifferent(X)
And this is already sufficient to solve the Sudoku puzzle.
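For concreteness, spelled out for a 9x9 board, the whole classic model is just a few lines (a sketch following the standard ECLiPSe Sudoku example):
:- lib(ic).

sudoku(Board) :-
    dim(Board, [9,9]),
    Board[1..9,1..9] :: 1..9,
    ( for(I,1,9), param(Board) do
        Row is Board[I,1..9],        % each row all different
        alldifferent(Row),
        Col is Board[1..9,I],        % each column all different
        alldifferent(Col)
    ),
    ( multifor([I,J],1,9,3), param(Board) do
        ( multifor([K,L],0,2), param(Board,I,J), foreach(X,SubSquare) do
            X is Board[I+K,J+L]      % collect one 3x3 block
        ),
        alldifferent(SubSquare)
    ).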
Binary boolean viewpoint
One could however choose to represent integers by their binary boolean arrays (shown in the answer by #jschimpf). In case it's not clear what this does, consider the small example below (this is built-in functionality!):
?- ic_global:bool_channeling(Digit, [0,0,0,1,0], 1).
Digit = 4
Yes (0.00s cpu)
?- ic_global:bool_channeling(Digit, [A,B,C,D], 1), C = 1.
Digit = 3
A = 0
B = 0
C = 1
D = 0
Yes (0.00s cpu)
If we use this model to represent a Sudoku, every number is replaced by its binary boolean array and the corresponding constraints can be written. To keep this answer short I will not include all of the constraint code, but a single sum constraint is already enough to solve a Sudoku puzzle in its binary boolean representation.
Channelling
Having these two viewpoints, each with its own constrained model, now gives us the opportunity to channel between them and see if any improvements can be made.
Since both models are still just an NxN board, the representations do not differ in dimension and channelling becomes really easy.
Let me first show what a block filled with the integers 1..9 looks like in both of our models:
% Classic viewpoint
1 2 3
4 5 6
7 8 9
% Binary Boolean Viewpoint
[](1,0,0,0,0,0,0,0,0) [](0,1,0,0,0,0,0,0,0) [](0,0,1,0,0,0,0,0,0)
[](0,0,0,1,0,0,0,0,0) [](0,0,0,0,1,0,0,0,0) [](0,0,0,0,0,1,0,0,0)
[](0,0,0,0,0,0,1,0,0) [](0,0,0,0,0,0,0,1,0) [](0,0,0,0,0,0,0,0,1)
We now clearly see the link between the models and simply write the code to channel our decision variables. Using Sudoku and BinBools as our boards, the code would look something like:
( multifor([Row,Col], 1, N), param(Sudoku, BinBools, N)
do
    Value is Sudoku[Row,Col],
    ValueBools is BinBools[Row,Col,1..N],
    ic_global:bool_channeling(Value, ValueBools, 1)
).
At this point, we have a channelled model where, during search, if values are pruned in one of the models, its impact will also occur in the other model. This can then of course lead to further overall constraint propagation.
Reasoning
To explain the usefulness of the binary boolean model for the Sudoku puzzle, we must first distinguish between some of the alldifferent/1 implementations provided by ECLiPSe:
alldifferent/1 from the ic library, which performs forward checking
alldifferent/1 from the ic_global library, which maintains bounds consistency
What this means in practice can be shown as follows:
?- [A, B, C] :: [0..1], ic:alldifferent([A, B, C]).
A = A{[0, 1]}
B = B{[0, 1]}
C = C{[0, 1]}
There are 3 delayed goals.
Yes (0.00s cpu)
?- [A, B, C] :: [0..1], ic_global:alldifferent([A, B, C]).
No (0.00s cpu)
Since no assignment has occurred yet, the forward checking version (ic library) does not detect the invalidity of the query, whereas the bounds consistent version notices it immediately. This behaviour can lead to considerable differences in constraint propagation while searching and backtracking through highly constrained models.
On top of these two libraries there is the ic_global_gac library intended for global constraints for which generalized arc consistency (also called hyper arc consistency or domain consistency) is maintained. This alldifferent/1 constraint provides even more pruning opportunities than the bounds consistent one, but preserving full domain consistency has its cost and using this library in highly constrained models generally leads to a loss in running performance.
Because of this, I found it interesting for the Sudoku puzzle to try and work with the bounds consistent (ic_global) implementation of alldifferent to maximise performance and subsequently try to approach domain consistency myself by channelling the binary boolean model.
Experiment results
Below are the backtrack results for the 'platinumblonde' Sudoku puzzle (referenced as the hardest, most chaotic Sudoku puzzle to solve in The Chaos Within Sudoku, M. Ercsey-Ravasz and Z. Toroczkai), using respectively forward checking, bounds consistency, domain consistency, the standalone binary boolean model and, finally, the channelled model:
      (ic)    (ic_global)    (ic_global_gac)    (bin_bools)    (channelled)
BT    6582    43             29                 143            30
As we can see, our channelled model (using bounds consistency (ic_global)) still needs one backtrack more than the domain consistent implementation, but it definitely performs better than the standalone bounds consistent version.
Now let us look at the running times (each result is an average over multiple executions, to avoid outliers), excluding the forward checking implementation as it is clearly no longer interesting for solving Sudoku puzzles:
           (ic_global)    (ic_global_gac)    (bin_bools)    (channelled)
Time (ms)  180            510                100            220
Looking at these results, I think we can successfully conclude the experiment (these results were confirmed by 20+ other Sudoku puzzle instances):
Channelling the binary boolean viewpoint into the bounds consistent standalone implementation produces slightly weaker constraint propagation than the domain consistent standalone implementation, but with running times ranging from just as long to notably faster.
EDIT: attempt to clarify
Imagine some domain variable representing a cell on a Sudoku board with a remaining domain of 4..9. Using bounds consistency, it is guaranteed that for both bounds, 4 and 9, supporting values can be found in the other domains which satisfy all constraints. However, no such consistency is explicitly guaranteed for the other values in the domain (guaranteeing it for every value is precisely what 'domain consistency' means).
Using a binary boolean model, we define the following two sum constraints:
The sum of every binary boolean array is always equal to 1
The sum of every Nth element of every array in every row/col/block is always equal to 1
The extra constraint strength is enforced by the second constraint which, apart from constraining rows, columns and blocks, also implicitly says: "every cell can contain each digit only once". This behaviour is not actively expressed when using just the bounds consistent alldifferent/1 constraint!
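As a sketch of how these could be posted (assuming the same BinBools board as above, with the ic library loaded; the column and block variants of the second constraint are analogous):
% 1. every cell's boolean array sums to 1: exactly one digit per cell
( multifor([Row,Col], 1, N), param(BinBools, N)
do
    sum(BinBools[Row,Col,1..N]) #= 1
),
% 2. in every row, each digit D is placed in exactly one column
( multifor([Row,D], 1, N), param(BinBools, N)
do
    sum(BinBools[Row,1..N,D]) #= 1
)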
Conclusion
It is clear that channelling can be very useful to improve a standalone constrained model. However, if the new model's constraint strength is weaker than that of the current model, obviously no improvement will be made. Also note that a more constrained model doesn't necessarily mean an overall better performance! Adding more constraints will indeed decrease the number of backtracks required to solve a problem, but it may also increase the running time of your program if more constraint checks have to occur.
Channeling constraints are used when, in a model, aspects of a problem are represented in more than one way. They are then necessary to synchronize these multiple representations, even though they do not themselves model an aspect of the problem.
Typically, when modelling a problem with constraints, you have several ways of choosing your variables. For example, in a scheduling problem, you could choose to have
an integer variable for each job (indicating which machine does the job)
an integer variable for each machine (indicating which job it performs)
a matrix of Booleans (indicating which job runs on which machine)
or something more exotic
In a simple enough problem, you choose the representation that makes it easiest to formulate the constraints of the problem. However, in real life problems with many heterogeneous constraints it is often impossible to find such a single best representation: some constraints are best represented with one type of variable, others with another.
In such cases, you can use multiple sets of variables, and formulate each individual problem constraint over the most convenient variable set. Of course, you then end up with multiple independent subproblems, and solving these in isolation will not give you a solution for the whole problem. But by adding channeling constraints, the variable sets can be synchronized, and the subproblems thus re-connected. The result is then a valid model for the whole problem.
As hinted in the quote from the handbook, in such a formulation it is sufficient to perform search on only one of the variable sets ("viewpoints"), because the values of the others are implied by the channeling constraints.
Some common examples for channeling between two representations are:
Integer variable and Array of Booleans:
Consider an integer variable T indicating the time slot 1..N when an event takes place, and an array of Booleans Bs[N] such that Bs[T] = 1 iff an event takes place in time slot T. In ECLiPSe:
T #:: 1..N,
dim(Bs, [N]), Bs #:: 0..1,
Channeling between the two representations can then be set up with
( for(I,1,N), param(T,Bs) do Bs[I] #= (T#=I) )
which will propagate information both ways between T and Bs. Another way of implementing this channeling is the special purpose bool_channeling/3 constraint.
Start/End integer variables and Array of Booleans (timetable):
We have integer variables S,E indicating the start and end time of an activity. On the other side an array of Booleans Bs[N] such that Bs[T] = 1 iff the activity takes place at time T. In ECLiPSe:
[S,E] #:: 1..N,
dim(Bs, [N]), Bs #:: 0..1,
Channeling can be achieved via
( for(I,1,N), param(S,E,Bs) do Bs[I] #= (S#=<I and I#=<E) ).
Dual representation Job/Machine integer variables:
Here, Js[J] = M means that job J is executed on machine M, while the dual formulation Ms[M] = J means that machine M executes job J:
dim(Js, [NJobs]), Js #:: 0..NMach,
dim(Ms, [NMach]), Ms #:: 1..NJobs,
And channeling is achieved via
( multifor([J,M], 1, [NJobs,NMach]), param(Js,Ms) do
    (Js[J] #= M) #= (Ms[M] #= J)
).
Set variable and Array of Booleans:
If you use a solver (such as library(ic_sets)) that can directly handle set-variables, these can be reflected into an array of booleans indicating membership of elements in the set. The library provides a dedicated constraint membership_booleans/2 for this purpose.
Here is a simple example. It works in SWI-Prolog, but should also work in ECLiPSe Prolog (in the latter you have to use (::)/2 instead of (in)/2):
Constraint C1:
?- Y in 0..100.
Y in 0..100.
Constraint C2:
?- X in 0..100.
X in 0..100.
Channelling Constraint:
?- 2*X #= 3*Y+5.
2*X#=3*Y+5.
All together:
?- Y in 0..100, X in 0..100, 2*X #= 3*Y+5.
Y in 1..65,
2*X#=3*Y+5,
X in 4..100.
So the channelling constraint works in both directions: it reduces the domain of Y in C1 as well as the domain of X in C2.
Some systems use iterative methods, with the result that this channelling can take quite some time. Here is an example which needs around 1 minute to compute in SWI-Prolog:
?- time(([U,V] ins 0..1_000_000_000, 36_641*U-24 #= 394_479_375*V)).
% 9,883,559 inferences, 53.616 CPU in 53.721 seconds
(100% CPU, 184341 Lips)
U in 346688814..741168189,
36641*U#=394479375*V+24,
V in 32202..68843.
On the other hand ECLiPSe Prolog does it in a blink:
[eclipse]: U::0..1000000000, V::0..1000000000,
36641*U-24 #= 394479375*V.
U = U{346688814 .. 741168189}
V = V{32202 .. 68843}
Delayed goals:
-394479375 * V{32202 .. 68843} +
36641 * U{346688814 .. 741168189} #= 24
Yes (0.11s cpu)

Prolog, number of sections between points

I have a task to do in Prolog. It seems easy, but I am a real novice in this kind of language and I can't get used to it.
I have to write a predicate path_L(a,z,N), where a and z are the ends of a "road" and N is the variable that I am looking for. There are many edges defining one-way directions: edge(a,b), edge(b,c), edge(b,d), edge(e,f), etc. The goal of path_L is to produce the number of sections between a and z, in the form N = result. So, for example:
path_L(a,b,N). -> N=1
path_L(a,c,N). -> N=2
I have already defined another predicate that determines whether a path between (X,Y) exists:
path(X,Y):-edge(X,Y).
path(X,Y):-edge(X,Z),path(Z,Y).
Assuming, like #lurker said, that you meant your predicates for finding a path are:
path(X,Y):- edge(X,Y).
path(X,Y):- edge(X,Z), path(Z,Y).
And given a few clauses stating the existence of edges, such as:
edge(a,b).
edge(b,c).
edge(c,d).
You got the right idea that path_L should have an additional argument for the 'count' of edges between nodes. A simple version of what you want (with some caveats) is below:
path_L(X,Y,1):- edge(X,Y).
path_L(X,Y,R):- edge(X,Z), path_L(Z,Y,N), R is N+1.
Notice how the first case simply unifies the third argument with 1, so an objective clause like path_L(a,b,2) ("the distance between a and b is 2") correctly fails, while path_L(a,b,R) (notice R is a variable) succeeds with R unifying with 1. It's a neat feature of the paradigm that the same definition works 'both ways'.
Another example is the objective path_L(a,B,2) (notice B is a variable), which succeeds with B unifying with c - because c is the node with a distance of 2 from a.
Finally, assume instead a graph given by:
edge(a,b).
edge(b,c).
edge(c,d).
edge(d,x).
edge(a,x).
An objective clause like path_L(a,x,R) should first succeed with R = 1, and then, if requested (by pressing ; in your terminal), with R = 4. This is because there are two valid paths (with lengths 1 and 4) from a to x. The order of the clauses matters; if you defined path_L as:
path_L(X,Y,R):- edge(X,A), path_L(A,Y,N), R is N+1.
path_L(X,Y,1):- edge(X,Y).
That same query would result first in R = 4, and then in R = 1, because the order in which clauses are defined matters. In fact, the mechanisms through which Prolog chooses which clauses to try when solving a problem are well defined, and you should definitely look them up if you get into logic programming.
Hope this helps.
P.S.: about those caveats: e.g., none of the above allows a zero-length path from a node to itself. Depending on what you want, that could be intended or a mistake.
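If zero-length paths are wanted, a single extra clause is one possible sketch of a fix (note that, as written, it succeeds for any term, not just for known nodes; guard it with a node check if that matters):
path_L(X, X, 0).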

Find best result without findall and a filter

I'm in a bit of a pickle in Prolog.
I have a collection of objects. These objects have a certain dimension, hence a weight.
I want to split these objects into 2 sets (which together form the entire collection) in such a way that their difference in total weight is minimal.
The first thing I tried was the following (pseudo-code):
-> findall with predicate createSets(List, set(A, B))
-> iterate over the results:
---> calculate the weight of both sets
---> calculate the difference
---> recurse with the new difference, comparing it to the current best
till the end of the list of sets
This is pretty straightforward. The issue here is that I have a list of +/- 30 objects. Creating all possible sets causes a stack overflow.
Helper predicates:
sublist([], []).
sublist(X, [_|RestY]) :-
    sublist(X, RestY).
sublist([Item|RestX], [Item|RestY]) :-
    sublist(RestX, RestY).

subtract([], _, []) :-
    !.
subtract([Head|Tail], ToSubstractList, Result) :-
    memberchk(Head, ToSubstractList),
    !,
    subtract(Tail, ToSubstractList, Result).
subtract([Head|Tail], ToSubstractList, [Head|ResultTail]) :-
    !,
    subtract(Tail, ToSubstractList, ResultTail).

generateAllPossibleSubsets(ListToSplit, sets(Sublist, SecondPart)) :-
    sublist(Sublist, ListToSplit),
    subtract(ListToSplit, Sublist, SecondPart).
These can then be used as follows:
:- findall(Set, generateAllPossibleSubsets(ObjectList,Set), ListOfSets ),
findMinimalDifference(ListOfSets,Set).
So because I think this is a wrong way to do it, I figured I'd try it in an iterative way. This is what I have so far:
totalWeightOfSet([], 0).
totalWeightOfSet([Head|RestOfSet], Weight) :-
    objectWeight(Head, HeadWeight),
    totalWeightOfSet(RestOfSet, RestWeight),
    Weight is HeadWeight + RestWeight.

findBestBalancedSet(ListOfObjects, Sets) :-
    generateAllPossibleSubsets(ListOfObjects, sets(A,B)),
    totalWeightOfSet(A, WeightA),
    totalWeightOfSet(B, WeightB),
    Temp is WeightA - WeightB,
    abs(Temp, Difference),
    betterSets(ListOfObjects, Difference, Sets).

betterSets(ListOfObjects, OriginalDifference, sets(A,B)) :-
    generateAllPossibleSubsets(ListOfObjects, sets(A,B)),
    totalWeightOfSet(A, WeightA),
    totalWeightOfSet(B, WeightB),
    Temp is WeightA - WeightB,
    abs(Temp, Difference),
    OriginalDifference > Difference,
    !,
    betterSets(ListOfObjects, Difference, sets(A,B)).
betterSets(_, Difference, sets(A,B)) :-
    write_ln(Difference).
The issue here is that it returns a better result, but it hasn't traversed the entire solution tree. I have a feeling this is a default Prolog scheme I'm missing here.
So basically I want it to tell me "these two sets have the minimal difference".
Edit:
What are the pros and cons of using manual list iteration vs recursion through fail
This (recursion through fail) would be a possible solution, except that here it cannot simply fail, since failing won't return the best set.
I would take the 30-object list, sort it descending by weight, then pop objects off the sorted list one by one and put each into one or the other of the two sets, such that the difference between the two sets' weights stays minimal at each step. Each time we add an element to a set, we add its weight to that set's running total. Start with two empty sets, each with a total weight of 0.
It won't be the best partition probably, but might come close to it.
A very straightforward implementation:
pair(A, B, A-B).

near_balanced_partition(L, S1, S2) :-
    maplist(weight, L, W),    %// user-supplied predicate weight(+E, ?W)
    maplist(pair, W, L, WL),
    keysort(WL, SL),
    reverse(SL, SLR),
    partition(SLR, 0, [], 0, [], S1, S2).

partition([], _, A, _, B, A, B).
partition([N-E|R], N1, L1, N2, L2, S1, S2) :-
    (   abs(N2-N1-N) < abs(N1-N2-N)
    ->  N3 is N1+N,
        partition(R, N3, [E|L1], N2, L2, S1, S2)
    ;   N3 is N2+N,
        partition(R, N1, L1, N3, [E|L2], S1, S2)
    ).
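For example, assuming facts weight(a,5)., weight(b,3). and weight(c,4)., the heaviest object a goes into one set first, and the greedy balancing then puts c and b into the other:
?- near_balanced_partition([a,b,c], S1, S2).
S1 = [b,c],
S2 = [a].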
If you insist on finding the precise answer, you will have to generate all the partitions of your list into two sets. Then while generating, you'd keep the current best.
The most important thing left is to find the way to generate them iteratively.
A given object is either included in the first subset, or the second (you don't mention whether they're all different; let's assume they are). We thus have a 30-bit number that represents the partition. This allows us to enumerate them independently, so our state is minimal. For 30 objects there will be 2^30 ~= 10^9 generated partitions.
exact_partition(L, S1, S2) :-
    maplist(weight, L, W),    %// user-supplied predicate weight(+E, ?W)
    maplist(pair, W, L, WL),
    keysort(WL, SL),          %// not necessary here, except for the aesthetics
    length(L, Len), length(Num, Len), maplist(=(0), Num),
    .....
You will have to implement the binary arithmetic to add 1 to Num on each step, and generate the two subsets from SL according to the new Num, possibly in one fused operation. For each freshly generated subset it is easy to calculate its weight (this calculation, too, can be fused into the same generating operation):
maplist(pair,Ws,_,Subset1),
sumlist(Ws,Weight1),
.....
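The increment itself can be sketched over Num as a little-endian list of bits; its failure on all-ones conveniently signals that the enumeration is complete:
inc([0|T], [1|T]).                   % flip a leading 0 to 1
inc([1|T], [0|T2]) :- inc(T, T2).    % carry into the higher bits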
This binary number, Num, is all that represents our current position in the search space, together with the unchanging list SL. Thus the search will be iterative, i.e. running in constant space.
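For the "keep the current best while generating" part, one classic scheme is a failure-driven loop over non-backtrackable state. A sketch in SWI-Prolog (best_balanced_sets/2 is a made-up name; it reuses the question's generateAllPossibleSubsets/2 and totalWeightOfSet/2, and stores only one candidate at a time instead of findall-ing them all):
best_balanced_sets(ListOfObjects, Best) :-
    nb_setval(best, none-inf),
    (   generateAllPossibleSubsets(ListOfObjects, sets(A,B)),
        totalWeightOfSet(A, WeightA),
        totalWeightOfSet(B, WeightB),
        Difference is abs(WeightA - WeightB),
        nb_getval(best, _-BestSoFar),
        Difference < BestSoFar,
        nb_setval(best, sets(A,B)-Difference),
        fail                      % drive backtracking into the next candidate
    ;   true
    ),
    nb_getval(best, Best-_),
    Best \== none.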
