Why is Prolog's search depth-first instead of breadth-first? - prolog

I just started learning about Prolog and I was wondering why it uses DFS instead of BFS, and why there isn't an easy way to change it.
Does ISO Prolog mandate it?

First of all, it is fairly easy to change. Most Prolog texts explain both how to write a predicate that performs a BFS and how to create a meta-interpreter that does it with arbitrary goals. The truth is that students who get a taste of Prolog at university typically get through (basically) only the first week or two of using it. Writing such a meta-interpreter isn't exactly a basic Prolog task, but it isn't an advanced technique either; after two months of Prolog it would not be an intimidating thing to do. That sounds like a lot of Prolog, but compared to (say) Java it really isn't much. For some reason we expect to reach the finish line with Prolog much faster than we do with systems that are actually much less interesting.
I believe the search strategy mandated by ISO is called SLD resolution, and depth-first search arises from that resolution mechanism. I have not read the ISO standard, so perhaps someone better informed than I am will comment. I think it would be difficult to manage Prolog standardization if the resolution method (and thus depth-first versus breadth-first) were not mandated, since computations that succeed one way may enter an infinite loop the other way. A language standard that does not specify the behavior of ordinary programs would be a rather poor standard. That said, there is no reason there couldn't be a built-in for selecting an alternate search strategy.
I don't know the reason for mandating DFS in particular. Having used Prolog for a while, the idea of not-DFS seems obviously inefficient to me. For instance, if I add some code to handle an edge case, with BFS I'm going to pay for it every time, but with DFS only in the cases where it is necessary. I suspect DFS is more memory efficient: I don't have to keep track of a bunch of possibly-useless code paths. I also suspect DFS is easier to control, because I can easily prune the search tree. But these are just feelings; maybe my sense of what is natural is entirely a product of what I've used. The absence of a BFS-based Prolog competitor does suggest it may not be a great idea. On the other hand, what was inefficient in 1980 still informs Prolog implementations today, even though things are very different now.
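To make "fairly easy to change" concrete, here is a minimal breadth-first meta-interpreter sketch. The clause representation (mi_clause/2 with list bodies) and the nat/1 example program are assumptions chosen for this illustration, not the only way to write it:

```prolog
% Object program, represented with list bodies for the meta-interpreter.
mi_clause(nat(0), []).
mi_clause(nat(s(N)), [nat(N)]).

% mi_bfs(Goal): prove Goal breadth-first. Each frontier element pairs a
% copy of the original query with its remaining goals, so that the
% bindings produced inside findall/3 are kept together with the query.
mi_bfs(Goal) :-
    bfs_([Goal-[Goal]], Goal).

bfs_([Done-[]|_], Goal) :-
    Goal = Done.                          % no goals left: found a proof
bfs_([_-[]|Rest], Goal) :-
    bfs_(Rest, Goal).                     % on backtracking, keep searching
bfs_([Q-[G|Gs]|Rest], Goal) :-
    findall(Q-Gs1,
            ( mi_clause(G, Body),
              append(Body, Gs, Gs1) ),
            Children),
    append(Rest, Children, Frontier),     % enqueue children at the back
    bfs_(Frontier, Goal).
```

?- mi_bfs(nat(X)). enumerates X = 0, X = s(0), X = s(s(0)), and so on fairly, even if the mi_clause/2 facts are reordered so that a plain depth-first interpreter would recurse forever before producing an answer.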

Related

examples of prolog meta-interpreter uses?

I'm reading several texts and online guides to understand the possibilities of Prolog meta-interpreters.
The following seem like solid use cases:
proof explainers / tracers
changing the proof search strategy, e.g. breadth-first vs. depth-first
domain specific languages
Question - what other compelling use-cases are there?
Quoting from A Couple of Meta-interpreters in Prolog which is a part of the book "The Power of Prolog":
Further extensions
Other possible extensions are module systems, delayed goals, checking for various kinds of infinite loops, profiling, debugging, type systems, constraint solving etc. The overhead incurred by implementing these things using MIs can be compiled away using partial evaluation techniques. [...]
This considerably extends your proposed uses, e.g., by
changing the search of p(X) :- p(s(X)). to detect loops (including "obvious" ones like this one),
hinting at where most compute time is spent ("profiling"),
or by reducing a program to a simpler fragment that is easier to analyse—but still has the property of interest: unexpected non-termination (explained via failure-slice), unexpected failure, or unexpected success.
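As one tiny concrete instance of "checking for various kinds of infinite loops", here is a vanilla meta-interpreter extended with a depth bound. The name mi_limit/2 is made up for this sketch, and p/1 is declared dynamic so that clause/2 may inspect it portably:

```prolog
:- dynamic p/1.
p(X) :- p(s(X)).          % loops forever under plain depth-first resolution

% mi_limit(Goal, Depth): prove Goal, but fail (finitely) once a
% resolution chain exceeds Depth steps.
mi_limit(true, _) :- !.
mi_limit((A, B), N) :- !,
    mi_limit(A, N),
    mi_limit(B, N).
mi_limit(G, N) :-
    N > 0,
    N1 is N - 1,
    clause(G, Body),
    mi_limit(Body, N1).
```

?- mi_limit(p(0), 10). fails in finite time, whereas ?- p(0). does not terminate. Iterating over increasing depth bounds turns this into iterative deepening, another way of changing the search strategy.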

O(1) term look up

I wish to be able to look up the existence of a term as fast as possible in my current Prolog program, without the Prolog engine traversing all the terms until it finally reaches the matching one.
I have not found any proof of it... but I assume that given
animal(lion).
animal(zebra).
...
% thousands of other animals
...
animal(tiger).
The SWI-Prolog engine will have to go through thousands of animals trying to unify with tiger in order to confirm that animal(tiger) is in my Prolog database.
In other languages I believe a HashSet would solve this problem, enabling O(1) lookup... However, I cannot seem to find any hashsets or hashtables in the SWI-Prolog documentation.
Is there a SWI-Prolog library for hashsets, or can I somehow build one myself using term_hash/2?
Bonus info: I will most likely have to do the lookup on some dynamically added data, either added to a hashset data structure or using assertz.
All serious Prolog systems perform this O(1) lookup via hashing automatically and implicitly for you, so you do not have to do it yourself.
It is called argument indexing, and you will find it explained in all good Prolog books. See also "JIT (just-in-time) indexing" in recent versions of many Prolog systems, including SWI-Prolog. Indexing is applied to dynamically added clauses too; the cost of maintaining the index is one reason why assertz/1 is comparatively slow and therefore not a good choice for data that changes more often than it is read.
You can also easily test this yourself by creating databases with increasingly more facts and seeing that the lookup time remains roughly constant when argument indexing applies.
When the built-in first argument indexing is not enough (note that some Prolog systems also provide multi-argument indexing), depending on the system, you can construct your own indexing scheme using a built-in or library term hashing predicate. In the case of ECLiPSe, GNU Prolog, SICStus Prolog, SWI-Prolog, and YAP, look into the documentation of the term_hash/4 predicate.
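The "test this yourself" suggestion can be carried out along these lines in SWI-Prolog (the predicate names here are invented for the demonstration):

```prolog
:- dynamic animal/1.

% populate(N): assert N animal/1 facts with distinct atoms a1 ... aN.
populate(N) :-
    retractall(animal(_)),
    forall(between(1, N, I),
           ( atom_concat(a, I, A),
             assertz(animal(A)) )).

% lookup_time(N): time a single lookup of the last asserted fact.
% With first-argument (JIT) indexing the reported time stays roughly
% constant as N grows, e.g. from 10^4 to 10^6 facts.
lookup_time(N) :-
    populate(N),
    atom_concat(a, N, Last),
    time(animal(Last)).
```

Running ?- lookup_time(10000). and ?- lookup_time(1000000). and comparing the inference counts reported by time/1 shows whether the lookup scales with the database size or not.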

Partial Fraction in prolog

I want to write a Prolog program to do partial fraction.
e.g.: input 2/((x+1)(x+2)), output 2/(x+1) - 2/(x+2).
Is this possible in Prolog and what can I refer to write this program or are there any example programs I can use?
Prolog implementations usually come with numeric constraint packages, but those are very limited and can't handle a lot of basic computer algebra problems. I've never seen one that can handle polynomials.
So basically, you'd have to implement enough of a computer algebra package to solve those problems yourself. If you're good at Prolog, it wouldn't be any harder than doing it in other languages, and it might be easier if you can leverage Prolog's built-in pattern matching and search without getting tripped up by their limitations. If you need some kind of search other than depth-first, you'll have to do some work to implement it; many books on Prolog give examples. Similarly, if you need to save non-logical information about a search, you'll need to do some work beyond what's natural in the language.
20 or 30 years ago someone was endlessly hawking a naively written computer algebra system as a Prolog library. As far as I could tell (and I didn't know much), the library was useless.
You can take a look at this algebra package. Some time ago I tried to reuse it in SWI-Prolog, but it was not fun at all...
Anyway, I cite from simpsv.pro:
Example of how to use the program:
to simplify the expression
(2*1)*(x^(2-1))
one can
a) simply enter
s( (2*1)*(x^(2-1)), Z).
after the Prolog prompt,
or
b) enter
Y = (2*1)*(x^(2-1)),
s( Y, Z).
after the prompt.
In both cases a two pass simplification is performed.
edit: if you are interested, I've cleaned up the syntax to make it acceptable to SWI-Prolog; let me know.
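For a flavor of how such a two-pass simplifier leans on Prolog's pattern matching, here is a toy rewrite predicate (simp/2 is my own sketch, not part of the cited package):

```prolog
% simp(E0, E): simplify subexpressions first, then apply local rules.
simp(E, E) :- atomic(E), !.
simp(A+B, R) :- !, simp(A, A1), simp(B, B1), rule(A1+B1, R).
simp(A-B, R) :- !, simp(A, A1), simp(B, B1), rule(A1-B1, R).
simp(A*B, R) :- !, simp(A, A1), simp(B, B1), rule(A1*B1, R).
simp(A^B, R) :- !, simp(A, A1), simp(B, B1), rule(A1^B1, R).
simp(E, E).

% Local rules: unit/zero elimination and constant folding.
rule(X+0, X) :- !.
rule(0+X, X) :- !.
rule(X*1, X) :- !.
rule(1*X, X) :- !.
rule(_*0, 0) :- !.
rule(0*_, 0) :- !.
rule(X^1, X) :- !.
rule(A+B, C) :- number(A), number(B), !, C is A+B.
rule(A-B, C) :- number(A), number(B), !, C is A-B.
rule(A*B, C) :- number(A), number(B), !, C is A*B.
rule(E, E).
```

With the example from the cited file, ?- simp((2*1)*(x^(2-1)), Z). gives Z = 2*x. A real package repeats such passes to a fixed point and normalizes term order; this sketch does a single bottom-up pass.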

General approach to constraint solving w/optimization over large finite domains?

I have a constraint problem I've been working on, which has a couple "fun" properties:
The domain is massive; basic constraints bring it down to somewhere around 2^30 to 2^40, but it's hard to bring it down further without...
Optimization for the solution. There is no single constrained solution; I'm looking for the best fit in the domain based on some complex predicates.
In searching for a way to handle this problem, I've brushed up on my Erlang, Haskell, and Prolog, but these languages don't already have the advanced predicates I'm looking for. I know that some of my optimizations could bring down the search space, and humans can peruse the domain fairly quickly and make really good guesses about optimal answers. (The domain is parameterized on a dozen variables; it's really easy to pick outliers as probable candidates for being close to the best in the domain.)
What I'm looking for in this question isn't a magical algorithm to handle this search, but an answer to the question: Since Prolog and Haskell aren't the right tools for this, which language or library might be a better answer? I have written this up in Haskell, but on a trivial restricted search of 6 million items, it couldn't even reach ten thousand comparisons per second, and perhaps that is because Haskell is not a good fit for expressing these kinds of problems.
If I remember correctly, Coq has nice support for computations with constraints. At least, if your domain can be described as a formal system, Coq will help you write it down as code and perform basic computations.

How to calculate indefinite integral programmatically

I remember solving a lot of indefinite integration problems. There are certain standard methods of solving them, but nevertheless there are problems which take a combination of approaches to arrive at a solution.
But how can we achieve the solution programmatically?
For instance, look at the online Integrator app of Mathematica. How do we approach writing such a program, one which accepts a function as an argument and returns its indefinite integral?
PS: The input function can be assumed to be continuous (i.e. it is not, for instance, sin(x)/x).
You have Risch's algorithm, which is subtly undecidable (since it requires deciding whether two expressions are equal, akin to the ubiquitous halting problem) and very long to implement.
If you're into complicated stuff, solving an ordinary differential equation is actually not harder (and computing an indefinite integral is equivalent to solving y' = f(x)). There exists a Galois differential theory which mimics Galois theory for polynomial equations (but with Lie groups of symmetries of solutions instead of finite groups of permutations of roots). Risch's algorithm is based on it.
The algorithm you are looking for is Risch's algorithm:
http://en.wikipedia.org/wiki/Risch_algorithm
I believe it is a bit tricky to use. This book:
http://www.amazon.com/Algorithms-Computer-Algebra-Keith-Geddes/dp/0792392590
has a description of it. A 100-page description.
You keep a table of basic forms you know the integrals of (polynomials, elementary trigonometric functions, etc.) and you match them against the form of the input. This is doable if you don't need much generality: it's very easy to write a program that integrates polynomials, for example.
If you want to do it in the most general case possible, you'll have to do much of the work that computer algebra systems do. It is a lifetime's work for some people, e.g. if you look at Risch's "algorithm" posted in other answers, or symbolic integration, you can see that there are entire multi-volume books ("Manuel Bronstein, Symbolic Integration Volume I: Springer") that have been written on the topic, and very few existing computer algebra systems implement it in maximum generality.
If you really want to code it yourself, you can look at the source code of Sage or the several projects listed among its components. Of course, it's easier to use one of these programs, or, if you're writing something bigger, use one of these as libraries.
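The table-of-basic-forms approach is easy to try in Prolog. This int/3 relation is a sketch covering only a handful of forms plus linearity; the predicate name and its coverage are my own choices, and anything outside the table simply fails:

```prolog
% int(F, X, I): I is an antiderivative of F with respect to X,
% for a small table of known forms, closed under + , - and a
% constant factor. Results are left unsimplified.
int(0, _, 0) :- !.
int(C, X, C*X)          :- number(C), !.
int(X, X, X^2 / 2)      :- !.
int(X^N, X, X^N1 / N1)  :- number(N), N =\= -1, !, N1 is N + 1.
int(sin(X), X, -cos(X)) :- !.
int(cos(X), X, sin(X))  :- !.
int(exp(X), X, exp(X))  :- !.
int(A+B, X, IA+IB)      :- !, int(A, X, IA), int(B, X, IB).
int(A-B, X, IA-IB)      :- !, int(A, X, IA), int(B, X, IB).
int(C*F, X, C*I)        :- number(C), !, int(F, X, I).
```

For example, ?- int(x^2 + cos(x), x, I). yields I = x^3/3 + sin(x). The hard part, which Risch's algorithm addresses, is everything this table misses: products, compositions, and deciding when no elementary antiderivative exists at all.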
These expert systems usually have a huge collection of techniques and simply try one after another.
I'm not sure about WolframMath, but in Maple there's a command that displays all intermediate steps. If you use it, you get as output all the techniques that were tried.
Edit:
Transforming the input should not be the really tricky part: you need to write a lexer and a parser that transform the textual input into an internal representation.
Good luck. Mathematica is a very complex piece of software, and symbolic manipulation is something it does best. If you are interested in the topic, take a look at these books:
http://www.amazon.com/Computer-Algebra-Symbolic-Computation-Elementary/dp/1568811586/ref=sr_1_3?ie=UTF8&s=books&qid=1279039619&sr=8-3-spell
Also, going to the source wouldn't hurt either. This book actually explains the inner workings of Mathematica:
http://www.amazon.com/Mathematica-Book-Fourth-Stephen-Wolfram/dp/0521643147/ref=sr_1_7?ie=UTF8&s=books&qid=1279039687&sr=1-7
