logic programming - find synonyms of a given word - algorithm

The task is to write a program that can decide whether two words are synonyms or not.
I have pairs of synonymous words, e.g.:
[big large]
[large huge]
[small little]
We can derive the synonymous relationship indirectly: if big is a synonym for large and large is a synonym for huge then big is a synonym for huge.
Being synonyms doesn’t depend on order, e.g. if big is a synonym for large then large is a synonym for big.
The program has to answer whether given two words are synonyms or not.
This seems to me like a good candidate for logic programming, e.g. with clojure.core.logic.
a. How to state the input pairs of synonyms as logical statements/constraints?
b. How to perform a query to find all synonyms of a given word?
Or perhaps I am yak shaving? What would be a simpler way to solve this?

In general, this just looks like you are interested in the transitive closure (of a graph)?
For undirected graphs ("Being synonyms doesn’t depend on order"), this is related to connected components (and here we see a direct connection to logic: 2-SAT algorithms also exploit connected components).
Maybe see also: Algowiki
Basically:
preprocessing-time: calculate the transitive closure of your input
query-time -> synonym(a,b): check if edge (a,b) exists
Or without preprocessing: just look for a path on-demand:
query-time -> synonym(a,b): check if there is a path between a and b (BFS/DFS)
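If you want the on-demand variant in the question's own language, here is a minimal sketch in plain Clojure (no core.logic; the names adjacency and synonym? are just for illustration): build an undirected adjacency map from the pairs, then breadth-first search for a path between the two words.

(defn adjacency
  "Build an undirected adjacency map from synonym pairs."
  [pairs]
  (reduce (fn [g [a b]]
            (-> g
                (update a (fnil conj #{}) b)
                (update b (fnil conj #{}) a)))
          {}
          pairs))

(defn synonym?
  "True if a path exists between w1 and w2 in the synonym graph g."
  [g w1 w2]
  (loop [frontier [w1] seen #{w1}]
    (cond
      (some #{w2} frontier) true
      (empty? frontier)     false
      :else (let [nbrs (distinct (remove seen (mapcat g frontier)))]
              (recur (vec nbrs) (into seen nbrs))))))

;; (def g (adjacency [["big" "large"] ["large" "huge"] ["small" "little"]]))
;; (synonym? g "big" "huge")  ;=> true
;; (synonym? g "big" "small") ;=> false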

Clojure's data structures make this kind of thing super-fun.
The idea here is that each set of synonyms is itself shared (as the same set) by all of the words in that set.
So, we produce a map from words to their set of synonyms:
(require '[clojure.set :as set])

(let [pairs [["big" "large"]
             ["large" "huge"]
             ["small" "little"]]
      ;; m maps every word to the (shared) set of all its synonyms
      m (reduce (fn [eax [w1 w2]]
                  (let [s (set/union #{w1 w2} (get eax w1) (get eax w2))]
                    (merge eax (into {} (map #(vector % s) s)))))
                {}
                pairs)
      ;; two words are synonyms iff one is a member of the other's set
      syn? (fn [w1 w2] (boolean ((m w1 #{}) w2)))]
  (println "big, large:" (syn? "big" "large"))
  (println "big, huge:" (syn? "big" "huge"))
  (println "huge, big:" (syn? "huge" "big"))
  (println "big, small:" (syn? "big" "small"))
  (println "code, large:" (syn? "code" "large")))
Additional fun Clojure exercises from here:
What is the time and space complexity of building m?
What is the time complexity of syn?
Describe (and then confirm with measurement) the structural sharing going on inside m.

Related

What is fundamentally wrong in the choice of data structure in this matrix addition code?

Section 2.2.4 here contains the following:
2.2.4 Totally Inappropriate Data Structures
Some might find this example hard to believe. This really occurred in some code I’ve seen:
(defun make-matrix (n m)
  (let ((matrix ()))
    (dotimes (i n matrix)
      (push (make-list m) matrix))))

(defun add-matrix (m1 m2)
  (let ((l1 (length m1))
        (l2 (length m2)))
    (let ((matrix (make-matrix l1 l2)))
      (dotimes (i l1 matrix)
        (dotimes (j l2)
          (setf (nth i (nth j matrix))
                (+ (nth i (nth j m1))
                   (nth i (nth j m2)))))))))
What’s worse is that in the particular application, the matrices were all fixed size, and matrix arithmetic would have been just as fast in Lisp as in FORTRAN.
This example is bitterly sad: The code is absolutely beautiful, but it adds matrices slowly. Therefore it is excellent prototype code and lousy production code. You know, you cannot write production code as bad as this in C.
Clearly, the author thinks that something is fundamentally wrong with the data structures used in this code. On a technical level, what has gone wrong? I worry that this question might be opinion-based, but the author's phrasing suggests that there is an objectively correct and very obvious answer.
Lisp lists are singly-linked. Random access to an element (via nth) requires traversing all predecessors. The storage is likewise wasteful for this purpose. Working with matrices this way is very inefficient.
Lisp has built-in multidimensional arrays, so a natural fit for this problem would be a two-dimensional array, which can access elements directly instead of traversing links.
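This answer is about Common Lisp arrays, but the same point can be sketched in Clojure (the language used in the first question on this page): use an indexed structure, here nested vectors, so that element access does not have to walk a chain of links. This is only an illustration, not the code the author had in mind.

(defn make-matrix [n m]
  ;; n rows of m columns, all zero; vectors give indexed access
  (vec (repeat n (vec (repeat m 0)))))

(defn add-matrix [m1 m2]
  ;; elementwise sum, row by row
  (mapv (fn [row1 row2] (mapv + row1 row2)) m1 m2))

;; (add-matrix [[1 2] [3 4]] [[10 20] [30 40]]) ;=> [[11 22] [33 44]]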
There's a strong assumption in numerical code that access to elements of matrices, or more generally arrays, is approximately constant-time. The time taken for a[n, m] should not depend on n and m. That's hopelessly untrue in this case, since, given the obvious definition of matrix-ref:
(defun matrix-ref (matrix n m)
  (nth m (nth n matrix)))
then, since nth takes time proportional to its first argument (more generally, accessing the nth element of a Lisp list takes time proportional to n+1, counting from zero), the time taken by matrix-ref is proportional to the sum of the two indices (strictly, to the sum of the two (indices + 1), but this does not matter).
This means that, for instance, almost any algorithm involving matrices will move up a time-complexity class. That's bad.
A list-based matrix is slow for products, as described above. However, it's good for teaching: you can build a matrix library with very little knowledge of Lisp and with few bugs. I built such a basic matrix library when I read "Neural Network Design"; see this code on GitHub: https://github.com/hxzrx/nnd/blob/master/linear-algebra.lisp.

An example of where normal order has less steps than applicative order?

I can't seem to come up with an example of this and am wondering if there is such a case. I know that if I have an expression where applicative order doesn't terminate, normal order may still terminate. I'm wondering, though, if there is an example where both orders terminate but normal order takes fewer steps.
(λ p. λ q. q) ((λ x. λ y. λ z. ((x y) z)) (λ w. λ v. w))
With some whitespace:
(λ p.
  λ q.
    q
)
(
  (λ x.
    λ y.
      λ z.
        ((x y) z)
  )
  (λ w.
    λ v.
      w
  )
)
In normal order, the outermost reduction can be performed first, reducing directly to the identity combinator in one step. Applicative order will get there too, but it takes much longer since the x-y-z-w-v expression needs to be evaluated first.
Note that the x-y-z-w-v expression isn't even used. You can think of normal order as a sort of lazy evaluation: expressions are only evaluated or reduced when they are used. So you just build a formula that doesn't use one of its arguments and you immediately have an example of this kind of efficiency win.
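Spelled out, the normal-order reduction is a single β-step; the argument is thrown away before it is ever looked at:

(λ p. λ q. q) ((λ x. λ y. λ z. ((x y) z)) (λ w. λ v. w))
→ λ q. q

Applicative order must first spend at least one reduction on that argument before the outer application can discard it.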
In a lambda expression, any variable bound within an abstraction can be used zero or more times in the body of the abstraction.
Normal order evaluates the argument n times, where n is the number of times it is used in the body.
Applicative order evaluates the argument exactly once, irrespective of the number of times it is used in the body.
Comparison
If argument is used exactly once, both normal order and applicative order will have same performance.
If the argument is used more than once, applicative order will be faster (see the worked example below).
If the argument is used zero times, normal order would be faster.
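As a worked example of the trade-off, take a term whose argument is used twice:

(λ x. (x x)) ((λ y. y) z)

Applicative order (reduce the argument once, then substitute):
(λ x. (x x)) ((λ y. y) z)  →  (λ x. (x x)) z  →  (z z)    — 2 steps

Normal order (substitute the unreduced argument, then reduce each copy):
(λ x. (x x)) ((λ y. y) z)  →  ((λ y. y) z) ((λ y. y) z)  →  z ((λ y. y) z)  →  (z z)    — 3 steps

With zero uses of the argument, the situation reverses, as in the example earlier in this question.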

MicroKanren - what are the terms?

Having a little trouble understanding the core terms of the MicroKanren DSL. Section 4 says:
Terms of the language are defined by the unify operator. Here, terms of the language consist of variables, objects deemed identical under eqv?, and pairs of the foregoing.
But they never describe what the "pairs" actually mean. Are the pairs supposed to represent equality of two subterms, like so:
type 'a ukanren = KVar of int | KVal of 'a | KEq of 'a ukanren * 'a ukanren
So a term like:
(call/fresh (λ (a) (≡ a 7)))
Generates a pair for (≡ a 7)?
Edit: upon further thought, I don't think this is it. The mentions of "pair" in the paper seem to come much later, with extensions and refinements to the basic system, which would mean the pairs have no meaning in the terms for the basic intro. Is this correct?
In this context, "pair" just means a cons pair, such as (5 . 6) or (foo . #t). An example unification between two pairs is:
(call/fresh
  (λ (a)
    (call/fresh
      (λ (b)
        (≡ (cons a b) (cons 5 6))))))
which associates a with 5 and b with 6.
Sorry for the confusion and difficulty!! And thank you for the question!
You can think of the (typical) Kanren term language as having a single binary functor tag cons/2, and an infinite quantity of constants (the exact makeup changes from embedding to embedding).
Under the assumption that cons/2 is the only (n > 0)-ary functor tag, every compound term will have been built with it. If you look up standard presentations of unification (e.g. Martelli-Montanari) you'll usually see a step f(t0,...,tn) = g(s0,...,sm) => fail if f =/= g or n =/= m. We handle clash failure in unification when comparing ground atomic terms; for us, compound terms must necessarily be of the same arity and tag.
pair? recognizes conses. In Racket, in fact, they provide a cons? alias, to make this clearer!
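To make the clash-failure step concrete, here is a minimal unification sketch. It is written in Clojure rather than the paper's Scheme, and the names (lvar?, walk, unify) are illustrative, not microKanren's: variables are ?-prefixed symbols, pairs are represented as two-element vectors, and everything else is an atomic constant compared with =.

(defn lvar? [t]
  ;; logic variables are ?-prefixed symbols in this sketch
  (and (symbol? t) (.startsWith (name t) "?")))

(defn walk [t s]
  ;; follow a variable through the substitution map s
  (if (and (lvar? t) (contains? s t))
    (recur (get s t) s)
    t))

(defn unify [u v s]
  ;; returns an extended substitution, or nil on failure
  (let [u (walk u s) v (walk v s)]
    (cond
      (= u v)   s                           ; identical constants or the same variable
      (lvar? u) (assoc s u v)               ; bind a variable
      (lvar? v) (assoc s v u)
      (and (vector? u) (vector? v)          ; the only compound term is the pair,
           (= 2 (count u)) (= 2 (count v))) ; so arity and tag always agree
      (when-let [s' (unify (first u) (first v) s)]
        (unify (second u) (second v) s'))
      :else nil)))                          ; clash: e.g. unifying 5 with 6

;; (unify '[?a ?b] [5 6] {})   ;=> {?a 5, ?b 6}
;; (unify [5 '?a] [6 '?b] {})  ;=> nil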
Once again, thank you!

Purely functional set

Is there an algorithm that implements a purely functional set?
Expected operations would be union, intersection, difference, element?, empty? and adjoin.
Those are not hard requirements though and I would be happy to learn an algorithm that only implements a subset of them.
You can use a purely functional map implementation, where you just ignore the values.
See http://hackage.haskell.org/packages/archive/containers/0.1.0.1/doc/html/Data-IntMap.html (linked to from https://cstheory.stackexchange.com/questions/1539/whats-new-in-purely-functional-data-structures-since-okasaki ).
(Side note: for more information on functional data structures, see http://www.amazon.com/Purely-Functional-Structures-Chris-Okasaki/dp/0521663504 )
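In the same spirit, here is a minimal sketch in Clojure: keep the members as the keys of an ordinary persistent map (which is already purely functional) and ignore the values. The names are just for illustration.

(def empty-set {})                           ; the members are the keys

(defn adjoin [s x] (assoc s x true))         ; the value is a dummy
(defn element? [s x] (contains? s x))
(defn set-union [a b] (merge a b))
(defn intersection [a b] (select-keys a (keys b)))
(defn difference [a b] (apply dissoc a (keys b)))
(defn set-empty? [s] (empty? s))

;; (element? (adjoin empty-set :x) :x)                    ;=> true
;; (keys (intersection {1 true 2 true} {2 true 3 true}))  ;=> (2)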
A purely functional implementation exists for almost any data structure. In the case of sets or maps, you typically use some form of search tree, e.g. red/black trees or AVL trees. The standard reference for functional data structures is the book by Okasaki:
http://www.cambridge.org/gb/knowledge/isbn/item1161740/
Significant parts of it are available for free via his thesis:
http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf
The links from the answer by #ninjagecko are good. What I've been following recently are the Persistent Data Structures used in Clojure, which are functional, immutable and persistent.
A description of the implementation of the persistent hash map can be found in this two-part blog post:
http://blog.higher-order.net/2009/09/08/understanding-clojures-persistenthashmap-deftwice/
http://blog.higher-order.net/2010/08/16/assoc-and-clojures-persistenthashmap-part-ii/
These are implementations of some of the ideas (see the first answer, first entry) found in this reference request question.
The sets that come out of these structures support the functions you need:
http://clojure.org/data_structures#Data Structures-Sets
All that's left is to browse the source code and try to wrap your head around it.
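Concretely, Clojure's built-in persistent sets already expose the operations asked for; a quick usage example at the REPL:

(require '[clojure.set :as set])

(def a (conj #{} 1 2 3))        ; adjoin
(def b #{3 4})

(set/union a b)                 ;=> #{1 2 3 4}   (printed order may vary)
(set/intersection a b)          ;=> #{3}
(set/difference a b)            ;=> #{1 2}
(contains? a 2)                 ;=> true         (element?)
(empty? (disj b 3 4))           ;=> true         (empty?)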
Here is an implementation of a purely functional set in OCaml (it is part of the OCaml standard library).
Is there an algorithm that implements a purely functional set?
You can implement set operations using many different purely functional data structures. Some have better complexity than others.
Examples include:
Lists
Where we have:
List Difference:
(\\) :: Eq a => [a] -> [a] -> [a]
The \\ function is list difference (non-associative). In the result of xs \\ ys, the first occurrence of each element of ys in turn (if any) has been removed from xs. Thus (xs ++ ys) \\ xs == ys.
union :: Eq a => [a] -> [a] -> [a]
The union function returns the list union of the two lists. For example,
"dog" `union` "cow" == "dogcw"
Duplicates, and elements of the first list, are removed from the second list, but if the first list contains duplicates, so will the result. It is a special case of unionBy, which allows the programmer to supply their own equality test.
intersect :: Eq a => [a] -> [a] -> [a]
The intersect function takes the list intersection of two lists. For example,
[1,2,3,4] `intersect` [2,4,6,8] == [2,4]
If the first list contains duplicates, so will the result.
Immutable Sets
More efficient data structures can be designed to improve the complexity of set operations. For example, the standard Data.Set library in Haskell implements sets as size-balanced binary trees:
Stephen Adams, "Efficient sets: a balancing act", Journal of Functional Programming 3(4):553-562, October 1993, http://www.swiss.ai.mit.edu/~adams/BB/.
Which is this data structure:
data Set a = Bin !Size !a !(Set a) !(Set a)
           | Tip

type Size = Int
Yielding complexity of:
union, intersection, difference: O(n+m)

Is this implementation tail-recursive

I read in an algorithms book that the Ackermann function cannot be made tail-recursive (what they say is "it can't be transformed into an iteration"). I'm pretty perplexed about this, so I tried and came up with this:
let ackb m n =
  let rec rAck cont m n =
    match (m, n) with
    | 0, n -> cont (n+1)
    | m, 0 -> rAck cont (m-1) 1
    | m, n -> rAck (fun x -> rAck cont (m-1) x) m (n-1)
  in rAck (fun x -> x) m n
;;
(it's OCaml / F# code).
My problem is, I'm not sure that this is actually tail-recursive. Could you confirm that it is? If not, why? And finally, what does it mean when people say that the Ackermann function is not primitive recursive?
Thanks!
Yes, it is tail-recursive. Every function can be made tail-recursive by an explicit transformation to continuation-passing style (CPS).
This does not mean that the function will execute in constant memory: you build stacks of continuations that must be allocated. It may be more efficient to defunctionalize the continuations, representing that data as a simple algebraic datatype.
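As an illustration of that defunctionalization remark (a sketch in Clojure rather than the question's OCaml): each continuation (fun x -> rAck cont (m-1) x) only needs to remember the number m-1, so the whole chain of closures collapses into an explicit stack of integers.

(defn ack [m n]
  ;; Ackermann with an explicit stack standing in for the continuations
  (loop [stack [] m m n n]
    (cond
      (zero? m) (if (empty? stack)
                  (inc n)                                    ; identity continuation
                  (recur (pop stack) (peek stack) (inc n)))  ; resume a pending (m-1)
      (zero? n) (recur stack (dec m) 1)
      :else     (recur (conj stack (dec m)) m (dec n)))))    ; remember (m-1) for later

;; (ack 2 3) ;=> 9
;; (ack 3 3) ;=> 61

The loop itself runs in constant call-stack space, but the stack vector still grows, which is exactly the point above: tail recursion here does not mean constant memory.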
Being primitive recursive is a very different notion, related to the expressiveness of a certain form of recursive definition used in mathematical theory, but probably not particularly relevant to computer science as you know it: primitive recursive functions are of very reduced expressiveness, and systems with function composition (starting with Gödel's System T), such as all current programming languages, are much more powerful.
In terms of programming languages, primitive recursive functions roughly correspond to programs without general recursion, where all loops/iterations are statically bounded (the number of possible repetitions is known in advance).
Yes.
By definition, any recursive function can be transformed into an iteration as long as it has access to an unbounded stack-like construct. The interesting question is whether it can be done without a stack or any other unbounded data storage.
A tail-recursive function can be turned into such an iteration only if the size of its arguments is bounded. In your example (and almost any recursive function that uses continuations), the cont parameter is for all intents and purposes a stack that can grow to any size. Indeed, the entire point of continuation-passing style is to store data usually present on the call stack ("what to do after I return?") in a continuation parameter instead.
