Reversible numerical calculations in Prolog
While reading SICP I came across the logic programming chapter, 4.4. I then started looking into the Prolog programming language and tried to understand some simple assignments in Prolog. I found that Prolog seems to have trouble with numerical calculations.
Here is the computation of a factorial in standard Prolog:
f(0, 1).
f(A, B) :- A > 0, C is A-1, f(C, D), B is A*D.
The issues I find are that I need to introduce two auxiliary variables (C and D) and a new syntax (is), and that the problem is non-reversible (i.e., f(5,X) works as expected, but f(X,120) does not).
Naively, I expect that at the very least C is A-1, f(C, D) above may be replaced by f(A-1,D), but even that does not work.
My question is: Why do I need to do this extra "stuff" in numerical calculations but not in other queries?
I do understand (and SICP is quite clear about it) that in general information on "what to do" is insufficient to answer the question of "how to do it". So the declarative knowledge in (at least some) math problems is insufficient to actually solve these problems. But that begs the next question: How does this extra "stuff" in Prolog help me to restrict the formulation to just those problems where "what to do" is sufficient to answer "how to do it"?
is/2 is very low-level and limited. As you correctly observe, it cannot be used in all directions and is therefore not a true relation.
For reversible arithmetic, use your Prolog system's constraint solvers.
For example, SWI-Prolog's CLP(FD) manual contains the following definition of n_factorial/2:
:- use_module(library(clpfd)).
n_factorial(0, 1).
n_factorial(N, F) :- N #> 0, N1 #= N - 1, F #= N * F1, n_factorial(N1, F1).
The following example queries show that it can be used in all directions:
?- n_factorial(47, F).
F = 258623241511168180642964355153611979969197632389120000000000 ;
false.
?- n_factorial(N, 1).
N = 0 ;
N = 1 ;
false.
?- n_factorial(N, 3).
false.
Of course, this definition still relies on unification, and you therefore cannot plug in arbitrary integer expressions. A term like 2-2 (which is -(2,2) in prefix notation) does not unify with 0. But you can easily allow this if you rewrite the definition to:
:- use_module(library(clpfd)).
n_factorial(N, F) :- N #= 0, F #= 1.
n_factorial(N, F) :- N #> 0, N1 #= N - 1, F #= N * F1, n_factorial(N1, F1).
Example query and its result:
?- n_factorial(2-2, -4+5).
true .
Forget about variables and think of A and B as just names for values which can be plugged into a clause (X :- Y) to make it provable. Think of X = (2 + (3 * 4)) as a data structure which represents a mathematical expression. If you ask Prolog to prove the goal f(A-1, B), it will try to find a fact f(A-1,B). or a rule (f(A-1,B) :- Z) whose body Z can be satisfied.
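For instance, an arithmetic term can be taken apart by unification just like any other compound term (a small illustrative query, not from the original answer):

?- X = 2 + (3 * 4), X = A + B.
X = 2+3*4,
A = 2,
B = 3*4.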
is/2 tries to unify its first argument with the result of interpreting its second argument as an arithmetic expression. Consider eval/2 as a toy variant of is/2:
eval(0, 1-1). eval(0, 2-2). eval(1,2-1).
eval(Y, X-0):- eval(Y, X).
eval(Y, A+B):- eval(ValA, A), eval(ValB, B), eval(Y, ValA + ValB).
eval(4, 2*2).
eval(0, 0*_). eval(0, _*0).
eval(Y, X*1):- eval(Y, X).
eval(Y, 1*X):- eval(Y, X).
eval(Y, A*B):- eval(ValA, A), eval(ValB, B), eval(Y, ValA * ValB).
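Note that this toy eval/2 has no base case for plain numbers, so deeply composed expressions do not always reduce. A minimal sketch of a fuller evaluator (my own illustration, not part of the original answer, falling back on is/2 for the leaf arithmetic) could look like this:

% Sketch only: a toy evaluator with a base case for integers,
% so nested expressions reduce all the way down.
eval2(N, N)   :- integer(N).                                  % a number evaluates to itself
eval2(V, A+B) :- eval2(VA, A), eval2(VB, B), V is VA + VB.
eval2(V, A-B) :- eval2(VA, A), eval2(VB, B), V is VA - VB.
eval2(V, A*B) :- eval2(VA, A), eval2(VB, B), V is VA * VB.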
The reason why f(X,120) doesn't work is simple: >/2 works only when its arguments are bound (i.e., you can't compare something not yet defined, like X, with anything else). To fix that, you have to split the rule into:
f(A,B) :- nonvar(A), A > 0, C is A-1, f(C, D), B is A*D.
f(A,B) :- nonvar(B), f_rev(A, B, 1, 1).
% f_rev/4 - only first argument is unbound.
f_rev(A, B, A, B). % solution
f_rev(A, B, N, C):- C < B, NextN is (N+1), NextC is (C*NextN), f_rev(A, B, NextN, NextC).
Update: (fixed f_rev/4)
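With this split definition, the reverse query from the question should now succeed (a sample query, assuming the original f(0,1) fact is kept; output may vary slightly between systems):

?- f(X, 120).
X = 5 ;
false.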
You may be interested in a finite-domain solver. There was a question about using such things. Using #>/2 and #=/2 you can describe a formula and its restrictions and then solve them. These predicates use a special ability of some Prolog systems to associate a name with attributes, which can help narrow the set of possible values by intersecting restrictions. Some other systems (usually the same ones) also allow you to reorder the sequence in which goals are processed ("suspend").
Also, member(X,[1,2,3,4,5,6,7]), f(X, 120) is probably doing the same thing as your "other queries" do.
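That generate-and-test idea can be packaged up directly; here is a minimal sketch (the predicate name and the bound of 100 are my own choices for illustration), reusing f/2 from the question:

% Sketch: make the "reverse" query answerable by enumerating candidates
% for X from a finite range and checking each with the original f/2.
f_inverse(X, F) :-
    between(0, 100, X),
    f(X, F0),
    F0 =:= F.

?- f_inverse(X, 120).
X = 5 ;
false.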
If you are interested in logical languages in general, you may also look at the Curry language (there, all non-pure functions are "suspended" until the not-yet-defined value is unified).
In this answer we use clpfd, just like this previous answer did.
:- use_module(library(clpfd)).
For easy head-to-head comparison (later on), we call the predicate presented here n_fac/2:
n_fac(N_expr,F_expr) :-
N #= N_expr, % eval arith expr
F #= F_expr, % eval arith expr
n_facAux(N,F).
Like in this previous answer, n_fac/2 admits the use of arithmetic expressions.
n_facAux(0,1). % 0! = 1
n_facAux(1,1). % 1! = 1
n_facAux(2,2). % 2! = 2
n_facAux(N,F) :-
N #> 2,
F #> N, % redundant constraint
% to help `n_fac(N,N)` terminate
n0_n_fac0_fac(3,N,6,F). % general case starts with "3! = 6"
The helper predicate n_facAux/2 delegates any "real" work to n0_n_fac0_fac/4:
n0_n_fac0_fac(N ,N,F ,F).
n0_n_fac0_fac(N0,N,F0,F) :-
N0 #< N,
N1 #= N0+1, % count "up", not "down"
F1 #= F0*N1, % calc `1*2*...*N`, not `N*(N-1)*...*2*1`
F1 #=< F, % enforce redundant constraint
n0_n_fac0_fac(N1,N,F1,F).
Let's compare n_fac/2 and n_factorial/2!
?- n_factorial(47,F).
F = 258623241511168180642964355153611979969197632389120000000000
; false.
?- n_fac(47,F).
F = 258623241511168180642964355153611979969197632389120000000000
; false.
?- n_factorial(N,1).
N = 0
; N = 1
; false.
?- n_fac(N,1).
N = 0
; N = 1
; false.
?- member(F,[3,1_000_000]), ( n_factorial(N,F) ; n_fac(N,F) ).
false. % both predicates agree
OK! Identical, so far... Why not do a little brute-force testing?
?- time((F1 #\= F2,n_factorial(N,F1),n_fac(N,F2))).
% 57,739,784 inferences, 6.415 CPU in 7.112 seconds (90% CPU, 9001245 Lips)
% Execution Aborted
?- time((F1 #\= F2,n_fac(N,F2),n_factorial(N,F1))).
% 52,815,182 inferences, 5.942 CPU in 6.631 seconds (90% CPU, 8888423 Lips)
% Execution Aborted
?- time((N1 #> 1,N2 #> 1,N1 #\= N2,n_fac(N1,F),n_factorial(N2,F))).
% 99,463,654 inferences, 15.767 CPU in 16.575 seconds (95% CPU, 6308401 Lips)
% Execution Aborted
?- time((N1 #> 1,N2 #> 1,N1 #\= N2,n_factorial(N2,F),n_fac(N1,F))).
% 187,621,733 inferences, 17.192 CPU in 18.232 seconds (94% CPU, 10913552 Lips)
% Execution Aborted
No differences for the first few hundred values of N in 2..sup... Good!
Moving on: How about the following (suggested in a comment to this answer)?
?- n_factorial(N,N), false.
false.
?- n_fac(N,N), false.
false.
Doing fine! Identical termination behaviour... More?
?- N #< 5, n_factorial(N,_), false.
false.
?- N #< 5, n_fac(N,_), false.
false.
?- F in 10..100, n_factorial(_,F), false.
false.
?- F in 10..100, n_fac(_,F), false.
false.
Alright! Still identical termination properties! Let's dig a little deeper! How about the following?
?- F in inf..10, n_factorial(_,F), false.
... % Execution Aborted % does not terminate universally
?- F in inf..10, n_fac(_,F), false.
false. % terminates universally
D'oh! The first query does not terminate, the second does.
What a speedup! :)
Let's do some empirical runtime measurements!
?- member(Exp,[6,7,8,9]), F #= 10^Exp, time(n_factorial(N,F)) ; true.
% 328,700 inferences, 0.043 CPU in 0.043 seconds (100% CPU, 7660054 Lips)
% 1,027,296 inferences, 0.153 CPU in 0.153 seconds (100% CPU, 6735634 Lips)
% 5,759,864 inferences, 1.967 CPU in 1.967 seconds (100% CPU, 2927658 Lips)
% 22,795,694 inferences, 23.911 CPU in 23.908 seconds (100% CPU, 953351 Lips)
true.
?- member(Exp,[6,7,8,9]), F #= 10^Exp, time(n_fac(N,F)) ; true.
% 1,340 inferences, 0.000 CPU in 0.000 seconds ( 99% CPU, 3793262 Lips)
% 1,479 inferences, 0.000 CPU in 0.000 seconds (100% CPU, 6253673 Lips)
% 1,618 inferences, 0.000 CPU in 0.000 seconds (100% CPU, 5129994 Lips)
% 1,757 inferences, 0.000 CPU in 0.000 seconds (100% CPU, 5044792 Lips)
true.
Wow! Some more?
?- member(U,[10,100,1000]), time((N in 1..U,n_factorial(N,_),false)) ; true.
% 34,511 inferences, 0.004 CPU in 0.004 seconds (100% CPU, 9591041 Lips)
% 3,091,271 inferences, 0.322 CPU in 0.322 seconds (100% CPU, 9589264 Lips)
% 305,413,871 inferences, 90.732 CPU in 90.721 seconds (100% CPU, 3366116 Lips)
true.
?- member(U,[10,100,1000]), time((N in 1..U,n_fac(N,_),false)) ; true.
% 3,729 inferences, 0.001 CPU in 0.001 seconds (100% CPU, 2973653 Lips)
% 36,369 inferences, 0.004 CPU in 0.004 seconds (100% CPU, 10309784 Lips)
% 362,471 inferences, 0.036 CPU in 0.036 seconds (100% CPU, 9979610 Lips)
true.
The bottom line?
The code presented in this answer is as low-level as you should go: Forget is/2!
Redundant constraints can and do pay off.
The order of arithmetic operations (counting "up" vs "down") can make quite a difference, too.
If you want to calculate the factorial of some "large" N, consider using a different approach.
Use clpfd!
There are some things which you must remember when looking at Prolog:
There is no implicit return value when you call a predicate. If you want to get a value out of a call you need to add extra arguments which can be used to "return" values, the second argument in your f/2 predicate. While being more verbose it does have the benefit of being easy to return many values.
This means that automatically "evaluating" arguments in a call is really quite meaningless, as there is no value to return, and so it is not done. There are no nested calls; in this respect Prolog is flat. So when you call f(A-1, D) the first argument to f/2 is the structure A-1, or really -(A, 1), as - is an infix operator. So if you want to get the value from a call to foo into a call to bar, you have to explicitly use a variable to do it, like:
foo(..., X), bar(X, ...),
So you need a special predicate which forces arithmetic evaluation, is/2. Its second argument is a structure representing an arithmetic expression, which it interprets and evaluates, unifying the result with its first argument, which can be either a variable or a numerical value.
While in principle you can run things backwards, with most things you can't. Usually it is only possible for simple predicates working on structures, though there are some very useful cases where it works. is/2 doesn't work backwards; it would be exceptional if it did.
This is why you need the extra variables C and D and can't replace C is A-1, f(C, D) by f(A-1,D).
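To make the distinction concrete, compare plain unification with arithmetic evaluation (sample queries, assuming SWI-Prolog):

?- X = 5 - 1.     % X is bound to the term -(5,1); nothing is evaluated
X = 5-1.

?- X is 5 - 1.    % is/2 interprets the term as an arithmetic expression
X = 4.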
(Yes I know you don't make calls in Prolog, but evaluate goals, but we were starting from a functional viewpoint here)
Related
Complexity of counting matching elements in two sequences using `library(aggregate)`
We want to count the correspondences between two (possibly long) strings which happen to represent DNA sequences. The sequences are lists-of-chars where each char is taken from a,c,t,g,'_', with '_' a "don't know" placeholder which never corresponds to anything, not even itself. In this case, we employ library(aggregate) (thanks to CapelliC for the idea):

match(Seq1, Seq2, Count) :-
   aggregate_all(count,
      ( nth1(Pos, Seq1, X), nth1(Pos, Seq2, X), memberchk(X, [a,c,g,t]) ),
      Count).

This approach can be compared to a "straightforward" approach where one would set up a (tail-recursive) recursion that just walks down both sequences in tandem and compares elements pairwise, counting as it goes (a sketch of this is given after the test code below).

As the sequences can be very large, algorithmic complexity becomes of some interest. One would expect, with n = length(sequence) and both sequences the same length:

Straightforward approach: complexity is O(n)
Aggregation approach: complexity is O(n²)

What is the (time and maybe space) complexity of the above algorithm and why?

Test code

To complement the above, an SWI-Prolog based plunit test code block:

:- begin_tests(atcg).

wrap_match(String1, String2, Count) :-
   atom_chars(String1, Seq1),
   atom_chars(String2, Seq2),
   match(Seq1, Seq2, Count).

test("string 1 empty", nondet) :-
   wrap_match("atcg", "", Count),
   assertion(Count == 0).
test("string 2 empty") :-
   wrap_match("", "atcg", Count),
   assertion(Count == 0).
test("both strings empty") :-
   wrap_match("", "", Count),
   assertion(Count == 0).
test("both strings match, 1 char only") :-
   wrap_match("a", "a", Count),
   assertion(Count == 1).
test("both strings match") :-
   wrap_match("atcgatcgatcg", "atcgatcgatcg", Count),
   assertion(Count == 12).
test("both strings match with underscores") :-
   wrap_match("_TC_ATCG_TCG", "_TC_ATCG_TCG", Count),
   assertion(Count == 9).
test("various mismatches 1") :-
   wrap_match("atcgatcgatcg", "atcgatcgatcg", Count),
   assertion(Count == 8).
test("various mismatches with underscores") :-
   wrap_match("at_ga_cg__cg", "atcgatcgatcg", Count),
   assertion(Count == 8).

:- end_tests(atcg).

And so:

?- run_tests.
% PL-Unit: atcg ........ done
% All 8 tests passed
true.
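For reference, a minimal sketch of the "straightforward" tail-recursive walk mentioned above (predicate names are mine, for illustration only; it assumes, as stated in the question, that both sequences have the same length):

% Sketch of the O(n) walk: step down both lists in tandem,
% counting positions where the elements are equal and in {a,c,g,t}.
match_walk(Seq1, Seq2, Count) :-
    match_walk_(Seq1, Seq2, 0, Count).

match_walk_([], [], Count, Count).
match_walk_([X|Xs], [Y|Ys], Count0, Count) :-
    (   X == Y, memberchk(X, [a,c,g,t])
    ->  Count1 is Count0 + 1
    ;   Count1 = Count0
    ),
    match_walk_(Xs, Ys, Count1, Count).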
Empirical info

After some manual data collection (something that cries out for automatization) using the code below, which outputs time elapsed and number of inferences made to the console:

gimme_random_sequence(Length, Seq) :-
   length(Seq, Length),
   maplist([E]>>(random_between(0, 3, Ix), nth0(Ix, [a,t,c,g], E)), Seq).

how_fast(Length) :-
   gimme_random_sequence(Length, Seq1),
   gimme_random_sequence(Length, Seq2),
   time(match(Seq1, Seq2, _)).

... and a bit of graph fumbling in LibreOffice Calc (my ggplot skills are rusty), we have empirical data that this algorithm's cost is O((length(sequence))²).

Count,Inferences,Seconds,milliseconds,megainferences
1000,171179,0.039,39,0.171179
2000,675661,0.097,97,0.675661
3000,1513436,0.186,186,1.513436
4000,2684639,0.327,327,2.684639
5000,4189172,0.502,502,4.189172
6000,6027056,0.722,722,6.027056
7000,8198103,1.002,1002,8.198103
8000,10702603,1.304,1304,10.702603
9000,13540531,1.677,1677,13.540531
10000,16711607,2.062,2062,16.711607
11000,20216119,2.449,2449,20.216119
20000,66756619,8.091,8091,66.756619
30000,150134731,17.907,17907,150.134731
40000,266846773,32.012,32012,266.846773
50000,416891749,52.942,52942,416.891749
60000,600269907,74.103,74103,600.269907
Never ever use functional programming idioms in Prolog that avoid backtracking, like maplist/4. This here, pair_member/4 and match3/3, should be a tick faster.

match2(Seq1, Seq2, Count) :-
   (  maplist([X,Y,X-Y]>>true, Seq1, Seq2, Seq3)
   -> aggregate_all(count, (member(X-X, Seq3), X \= '_'), Count)
   ;  Count = 0
   ).

pair_member(X, Y, [X|_], [Y|_]).
pair_member(X, Y, [_|L], [_|R]) :-
   pair_member(X, Y, L, R).

match3(Seq1, Seq2, Count) :-
   aggregate_all(count, (pair_member(X, X, Seq1, Seq2), X \= '_'), Count).

gimme_random_sequence(Length, Seq) :-
   length(Seq, Length),
   maplist([E]>>(random_between(0, 3, Ix), nth0(Ix, [a,t,c,g], E)), Seq).

test(N) :-
   gimme_random_sequence(N, S1),
   gimme_random_sequence(N, S2),
   time(match2(S1, S2, Count)),
   time(match3(S1, S2, Count)).

Woa! It's 10x faster! Thanks to the genius of SWI-Prolog and how it compiles the tail recursion in pair_member/4:

/* SWI-Prolog 8.3.21, MacBook Air 2019 */
?- set_prolog_flag(double_quotes, chars).
true.

?- X = "abc".
X = [a, b, c].

?- match2("_TC_ATCG_TCG","_TC_ATCG_TCG",Count).
Count = 9.

?- match3("_TC_ATCG_TCG","_TC_ATCG_TCG",Count).
Count = 9.

?- test(100000).
% 1,575,520 inferences, 0.186 CPU in 0.190 seconds (98% CPU, 8465031 Lips)
% 175,519 inferences, 0.018 CPU in 0.019 seconds (98% CPU, 9577595 Lips)
true.

Edit 29.04.2021: Oh the irony, bifurcation backtracking is nevertheless challenging. After fixing a misuse of library(apply_macros), I get:

?- test(100000).
% 374,146 inferences, 0.019 CPU in 0.019 seconds (99% CPU, 19379778 Lips)
% 174,145 inferences, 0.014 CPU in 0.014 seconds (99% CPU, 12400840 Lips)
true.

Does native member/2 contribute to the good maplist solution performance? But I should do a better measurement, with longer durations.

Open Source: Sequence Match Problem
https://gist.github.com/jburse/9fd22e8c3e8de6148fbd341817538ef6#file-sequence-pl
I think that it is interesting to observe that the O(n²) complexity is not due to the aggregation approach itself, but to the fact that the subgoal nth1(Pos,Seq1,X), nth1(Pos,Seq2,X) behaves as a "nested loop" (in the size n of the sequences). Thus, it should be possible to create another algorithm that, even using aggregation, has complexity O(n), as long as the "nested loop" is eliminated.

Algorithms to compare

% Original algorithm: Complexity O(n²)
match1(Seq1, Seq2, Count) :-
   aggregate_all(count,
      ( nth1(Pos, Seq1, X), nth1(Pos, Seq2, X), memberchk(X, [a,c,g,t]) ),
      Count).

% Proposed algorithm: Complexity O(n)
match2(Seq1, Seq2, Count) :-
   (  maplist([X,Y,X-Y]>>true, Seq1, Seq2, Seq3)
   -> aggregate_all(count, (member(X-X, Seq3), X \= '_'), Count)
   ;  Count = 0
   ).

gimme_random_sequence(Length, Seq) :-
   length(Seq, Length),
   maplist([E]>>(random_between(0, 3, Ix), nth0(Ix, [a,t,c,g], E)), Seq).

test(N) :-
   gimme_random_sequence(N, S1),
   gimme_random_sequence(N, S2),
   time(match1(S1, S2, Count)),
   time(match2(S1, S2, Count)).

Simple empirical results

?- test(10000).
% 16,714,057 inferences, 1.156 CPU in 1.156 seconds (100% CPU, 14455401 Lips)
% 39,858 inferences, 0.000 CPU in 0.000 seconds (?% CPU, Infinite Lips)
true.

?- test(20000).
% 66,761,535 inferences, 4.594 CPU in 4.593 seconds (100% CPU, 14533123 Lips)
% 79,826 inferences, 0.016 CPU in 0.016 seconds (100% CPU, 5108864 Lips)
true.

?- test(40000).
% 266,856,213 inferences, 19.734 CPU in 19.841 seconds (99% CPU, 13522405 Lips)
% 159,398 inferences, 0.016 CPU in 0.015 seconds (104% CPU, 10201472 Lips)
true.

?- test(80000).
% 1,067,046,835 inferences, 77.203 CPU in 77.493 seconds (100% CPU, 13821291 Lips)
% 320,226 inferences, 0.047 CPU in 0.047 seconds (100% CPU, 6831488 Lips)
true.

Edit 30/04/2021: Does nth1(I,S,X), nth1(I,S,X) really work as a nested loop?

To see that the answer to this question is yes, consider the following simple implementation of nth/3, which counts the number of rounds needed to find each solution, using a global flag:

nth(Index, List, Item) :-
   (  var(Index)
   -> nth_nondet(1, Index, List, Item)
   ;  integer(Index)
   -> nth_det(Index, List, Item)
   ).

nth_det(1, [Item|_], Item) :- !.
nth_det(Index, [_|Rest], Item) :-
   flag(rounds, Rounds, Rounds+1),
   Index1 is Index - 1,
   nth_det(Index1, Rest, Item).

nth_nondet(Index, Index, [Item|_], Item).
nth_nondet(Acc, Index, [_|Rest], Item) :-
   flag(rounds, Rounds, Rounds+1),
   Acc1 is Acc + 1,
   nth_nondet(Acc1, Index, Rest, Item).

To get the number of rounds, you can ask:

?- flag(rounds,_,0), nth(5,[a,b,c,d,e],X), flag(rounds,Rounds,Rounds).
X = e,
Rounds = 4.

Now, using this predicate, we can create a predicate to count the number of rounds of the goal nth(I,L,X), nth(I,L,X), for lists of different lengths:

count_rounds :-
   forall(between(1, 10, N),
          (  Length is 10*N,
             count_rounds(Length, Rounds),
             writeln(rounds(Length) = Rounds)
          )).

count_rounds(Length, _) :-
   numlist(1, Length, List),
   flag(rounds, _, 0),
   nth(Index, List, Item),
   nth(Index, List, Item),
   fail.
count_rounds(_, Rounds) :-
   flag(rounds, Rounds, Rounds).

Empirical results:

?- count_rounds.
rounds(10) = 55
rounds(20) = 210
rounds(30) = 465
rounds(40) = 820
rounds(50) = 1275
rounds(60) = 1830
rounds(70) = 2485
rounds(80) = 3240
rounds(90) = 4095
rounds(100) = 5050

As we can see, the goal nth(I,L,X), nth(I,L,X) computes half of a square matrix of order n (including its diagonal). Thus, the number of rounds for a list of length n is rounds(n) = (n² + n)/2. Hence, the time complexity of this goal is O(n²).
Remark

The implementation of the library predicate nth1/3 is a little more efficient than that of the predicate nth/3 considered for this experiment. Nevertheless, the time complexity of the goal nth1(I,S,X), nth1(I,S,X) is still O(n²).
This is a follow-up to MostowskiCollapse's answer, where I have applied the same optimization that Gertjan van Noord provided for member/2 to pair_member/4, but I have renamed it to member/4.

member(X, Y, [XH|XT], [YH|YT]) :-
   member_(XT, YT, X, Y, XH, YH).

member_(_, _, X, Y, X, Y).
member_([XH|XT], [YH|YT], X, Y, _, _) :-
   member_(XT, YT, X, Y, XH, YH).

match4(Seq1, Seq2, Count) :-
   aggregate_all(count, (member(X, X, Seq1, Seq2), X \= '_'), Count).

test(N) :-
   gimme_random_sequence(N, S1),
   gimme_random_sequence(N, S2),
   %time(match2(S1, S2, Count)),
   time(match3(S1, S2, Count)),
   time(match4(S1, S2, Count)).

... with lists of length 1,000,000 I get

% 1,751,758 inferences, 0.835 CPU in 0.835 seconds (100% CPU, 2098841 Lips)
% 1,751,757 inferences, 0.637 CPU in 0.637 seconds (100% CPU, 2751198 Lips)

that is, a gain of about 25%.
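A quick sanity check against the earlier examples (a sample query, assuming the double_quotes flag is set to chars as above):

?- match4("_TC_ATCG_TCG","_TC_ATCG_TCG",Count).
Count = 9.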
On solving Project Euler #303 with Prolog / clpfd
Here comes Project Euler Problem 303, "Multiples with small digits".

For a positive integer n, define f(n) as the least positive multiple of n that, written in base 10, uses only digits ≤ 2. Thus f(2)=2, f(3)=12, f(7)=21, f(42)=210, f(89)=1121222. Find sum(n=1 to 10000)(f(n)/n).

This is the code I have already written / that I want to improve:

:- use_module(library(clpfd)).

n_fn(N,FN) :-
   F #> 0,
   FN #= F*N,
   length(Ds, _),
   digits_number(Ds, FN),
   Ds ins 0..2,
   labeling([min(FN)], Ds).

That code already works for solving a small number of small problem instances:

?- n_fn(2,X).
X = 2
?- n_fn(3,X).
X = 12
?- n_fn(7,X).
X = 21
?- n_fn(42,X).
X = 210
?- n_fn(89,X).
X = 1121222

What can I do to tackle the above challenge "find: sum(n=1 to 10000)(f(n)/n)"?

How can I solve more and bigger instances in reasonable time?

Please share your ideas with me! Thank you in advance!
It is slow on 9's and there is a pattern, so:

n_fn(9,12222) :- !.
n_fn(99,1122222222) :- !.
n_fn(999,111222222222222) :- !.
n_fn(9999,11112222222222222222) :- !.

But I'm sure it would be nicer to have Prolog find this pattern and adapt the search... not sure how you would do that though! In general it must be recalculating a lot of results.
I cannot spot a recurrence relation for this problem. So, initially I was thinking that memoizing could speed it up. Not really...

This code, clp(fd) based, is marginally faster than yours:

n_fn_d(N,FN) :-
   F #> 0,
   FN #= F*N,
   digits_number_d([D|Ds], Ts),
   D in 1..2,
   Ds ins 0..2,
   scalar_product(Ts, [D|Ds], #=, FN),
   labeling([min(FN)], [D|Ds]).

digits_number_d([_], [1]).
digits_number_d([_|Ds], [T,H|Ts]) :-
   digits_number_d(Ds, [H|Ts]),
   T #= H*10.

When I used clp(fd) to solve problems from Euler, I stumbled into poor performance... sometimes the simpler 'generate and test', paired with native arithmetic, makes a difference. This simpler, 'native' based one:

n_fn_e(N,FN) :-
   digits_e(FN),
   0 =:= FN mod N.

digits_e(N) :-
   length([X|Xs], _),
   maplist(code_e, [X|Xs]),
   X \= 0'0,
   number_codes(N, [X|Xs]).

code_e(0'0).
code_e(0'1).
code_e(0'2).

is way faster:

test(N) :-
   time(n_fn(N,A)),
   time(n_fn_d(N,B)),
   time(n_fn_e(N,C)),
   writeln([A,B,C]).

?- test(999).
% 473,671,146 inferences, 175.006 CPU in 182.242 seconds (96% CPU, 2706593 Lips)
% 473,405,175 inferences, 173.842 CPU in 178.071 seconds (98% CPU, 2723188 Lips)
% 58,724,230 inferences, 25.749 CPU in 26.039 seconds (99% CPU, 2280636 Lips)
[111222222222222,111222222222222,111222222222222]
true
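As a quick sanity check against a value given in the problem statement (a sample query, assuming the definitions above; further solutions exist on backtracking):

?- n_fn_e(42, X).
X = 210 .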
Best way to generate integer numbers in Prolog
I want to generate integer numbers and I'm looking for the best way to do this. Example:

?- number2(N).
N = 0;
N = 1;
N = 2;
... (and so on)

Now I'm simply using length/2:

number2(N) :- length(_, N).

But I think that there should be some better way (without creating a temporary list). I could probably write some code myself based on the code of length/2, but I'm looking for a solution that employs already existing, built-in predicates. Is there any built-in predicate that would work better than length/2? I couldn't find anything like that.
It is hard to top your solution ; and probably it is not worth the effort. After all, there are now three suggestions that all are incorrect for one case or another:

?- time( (number2_gk(N), N == 10000) ).    % your original
% 20,002 inferences, 0.007 CPU in 0.007 seconds (99% CPU, 3006132 Lips)
N = 10000

?- time( (number2_cc(N), N == 10000) ).    % quadratic overhead
% 50,025,001 inferences, 28.073 CPU in 28.196 seconds (100% CPU, 1781945 Lips)
N = 10000

?- time( (next_integer(N), N == 10000) ).
% 20,002 inferences, 0.011 CPU in 0.011 seconds (100% CPU, 1822247 Lips)
N = 10000

However, number2_cc(-1) and next_integer(-1) simply loop, whereas length/2 actually should produce a domain error, like SICStus and many other systems do.

As you can see, CC's solution is worse than your original one. Also, the suggestion by mat produces different behavior in the following situation:

goal_expansion(length(Ls,L), between(0,infinite,L)) :-
   var_property(Ls, fresh(true)).

as(N) :-
   length(L,N),
   phrase(a, L).

a --> [a], a.
a --> [].

The goal as(N) now loops instead of enumerating all N.

If you really insist on an improvement, consider the following tail-recursive solution using library(clpfd):

nat(N) :-
   nat(N, 0).

nat(N, N0) :-
   N #>= N0,
   (  N = N0
   ;  N1 is N0+1,
      nat(N, N1)
   ).

?- time( (nat(N), N == 10000) ).
% 1,850,152 inferences, 0.544 CPU in 0.545 seconds (100% CPU, 3399793 Lips)

which is only an improvement for queries like the following. Otherwise it is just a waste of resources.

?- N in 1..2, nat(N).
To keep between/3 pure, i.e. only with integer arguments, I have started providing the following predicate above/2 in a library (for the source code see here):

/**
 * above(L, X):
 * The predicate succeeds for every integer X above the integer L.
 */
% above(+Integer, -Integer)

So if you really want to generate integer numbers, and not natural numbers, you can use:

gen_int(X) :- above(0, Y), (X is Y; X is -Y-1).

The above will give 0, -1, 1, -2, etc.

If you want to generate natural numbers including zero, you can use:

gen_nat(X) :- above(0, X).

The above will give 0, 1, 2, etc.

The names gen_int/1 and gen_nat/1 are inspired by SICStus Prolog, see here.

Hope this helps. Bye
A tail-recursive alternative to Carlo's solution is:

next_integer(I) :-
   next_integer(0, I).

next_integer(I, I).
next_integer(I, J) :-
   I2 is I + 1,
   next_integer(I2, J).

A sample query:

?- next_integer(I).
I = 0 ;
I = 1 ;
I = 2 ;
I = 3 ;
...

You can also easily start from an integer other than zero. For example:

?- next_integer(-5, I).
I = -5 ;
I = -4 ;
I = -3 ;
I = -2 ;
I = -1 ;
I = 0 ;
I = 1 ;
I = 2 ;
I = 3 ;
...
Prolog DCG: find last element
I am trying to understand the use of DCGs better. In order to do this, I tried to translate some exercises in the LearnPrologNow book to DCG notation. However, I am failing miserably.

What I tried was to write a program that simply names the last element in a list. That's all. I just can't think of the right DCG syntax to do this. I think I figured out the 'base case', which should be:

last --> [X|[]].

where X is the last element. How do I make Prolog go down the list recursively? Or am I thinking about DCGs in the wrong way?
... --> [] | [_], ... .

list_last(Xs, X) :-
   phrase((..., [X]), Xs).

This is clearly the most "graphical" definition. You can describe a lot of patterns with ... //0.

Grammars are a way to describe a language. So your question about how to make Prolog go down is malposed. Grammars don't do anything. They, if you insist, "generate" sentences. For the procedural details, you need to understand termination, but no more than that.

Edit: And if you really care about performance, then measure it first. With SWI, I obtain the following. Note the usage of an extra library to remove the calling overheads for phrase/2.

?- use_module(library(apply_macros)).
% library(pairs) compiled into pairs 0.00 sec, 22 clauses
% library(lists) compiled into lists 0.01 sec, 122 clauses
% library(occurs) compiled into occurs 0.00 sec, 14 clauses
% library(apply_macros) compiled into apply_macros 0.01 sec, 168 clauses
true.

?- [user].
**omitted**

?- listing.
dcg_last(B, A) :-
   last(A, B, []).

list_last(A, C) :-
   ...(A, B),
   B=[C].

...(A, B) :-
   (   A=B
   ;   A=[_|C],
       ...(C, B)
   ).

last(A, [_|B], C) :-
   last(A, B, C).
last(A, [A|B], B).

:- thread_local thread_message_hook/3.
:- dynamic thread_message_hook/3.
:- volatile thread_message_hook/3.

true.

?- length(L,100000), time(list_last(L,E)).
% 100,000 inferences, 0.018 CPU in 0.030 seconds (60% CPU, 5482960 Lips)
L = [_G351, _G354, _G357, _G360, _G363, _G366, _G369, _G372, _G375|...] ;
% 5 inferences, 0.000 CPU in 0.000 seconds (94% CPU, 294066 Lips)
false.

?- length(L,100000), time(dcg_last(L,E)).
% 100,001 inferences, 0.033 CPU in 0.057 seconds (58% CPU, 3061609 Lips)
L = [_G19, _G22, _G25, _G28, _G31, _G34, _G37, _G40, _G43|...] ;
% 2 inferences, 0.011 CPU in 0.023 seconds (49% CPU, 175 Lips)
false.

So both are performing roughly the same number of inferences, but dcg_last/2 is slower, since it has to pile up all those useless choicepoints. list_last/2 creates the same number of choicepoints, however, they are almost immediately removed. So we have 0.018s vs. 0.033s+0.011s.
You are missing the recursive step, and making the base clause more complex than needed.

dcg_last(List, E) :-
   phrase(last(E), List).

last(E) --> [_], last(E).
last(E) --> [E].

last//1 just skips any element, until the last one. The key, however, is how phrase/2 translates productions. phrase(last(E), List) is equivalent to phrase(last(E), List, []), that is, the grammar must consume all the input.
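A sample query (assuming SWI-Prolog):

?- dcg_last([a,b,c], X).
X = c ;
false.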
This isn't an answer! CapelliC explains it. It's just that the comments are useless for formatted code, and this comment belongs below his answer:

If you use the listing/0 predicate on his answer after consulting it, this is what Prolog has rewritten it to, and will execute:

last(A, [_|B], C) :-
   last(A, B, C).
last(A, [A|B], B).

dcg_last(B, A) :-
   phrase(last(A), B).

So DCGs are just syntactic sugar on top of regular Prolog clauses - a recursive loop as explained - you have to go through the list ('consume all input') to reach the end.
Prolog, triangular numbers, accumulators and tail recursion
I'm working on a homework assignment consisting of 2 parts. The first is to write a Prolog program that checks if a certain pair X, Y belongs to the triangular numbers (http://en.wikipedia.org/wiki/Triangular_number). For example: (2, 3) = true; (4, 10) = true, et cetera. The first solution uses 'ordinary' recursion, and I've solved it like this:

triangle(0, 0).
triangle(X, Y) :-
   X > 0,
   Y > 0,
   A is X - 1,
   B is Y - X,
   triangle(A, B).

The second part is to solve this using tail recursion/an accumulator, using a triangle/3 predicate. Though I have used an accumulator in another assignment, in which the use was quite obvious, so I have a general idea of how to use one, I'm quite puzzled as to how to use it in this context. So, I'm not looking for an algorithm, I'd much rather solve that myself, but rather for practical advice on how to apply an accumulator in this context.
The beginning is always the same, i.e. the first three lines are basically what you write for each and every tail-recursive predicate (with a [] instead of a 0 for list predicates). From there you can go on without many changes:

triangle_t(X, Y) :-
   triangle_t(X, 0, Y).

triangle_t(0, Y, Y).
triangle_t(X, Acc, Y) :-
   X > 0,
   A is X - 1,
   AccX is Acc + X,
   triangle_t(A, AccX, Y).

Here are some statistics for large X:

64 ?- time(triangle(1000000,500000500000)).
% 4,000,000 inferences, 0.50 CPU in 0.52 seconds (96% CPU, 8012769 Lips)
true.

65 ?- time(triangle_t(1000000,500000500000)).
% 3,000,001 inferences, 0.41 CPU in 0.44 seconds (92% CPU, 7396405 Lips)
true.

So, while your own predicate is basically already tail recursive (because the recursive call is the last thing to do), the version with an accumulator still saves some time because you do not need the check for Y > 0. If you introduce this line in the triangle_t predicate, they have exactly the same runtime characteristics again:

67 ?- time(triangle_t(1000000,500000500000)).
% 4,000,001 inferences, 0.53 CPU in 0.53 seconds (100% CPU, 7541432 Lips)
true.

Also note that you can now use the predicate to generate the n'th triangular number.
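For example (a sample query against the triangle_t/2 definition above; output may vary slightly between systems):

?- triangle_t(10, T).
T = 55 ;
false.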