what is a vivid knowledge base - knowledge-management

I read in an exam about knowledge representation the question:
What is a vivid knowledge base?
And I wonder about the answer. Google only gives me some links to books which I can buy about it or some CiteSeerX articles. Wikipedia also doesn't know anything about it.
Is there any good online article?
Is "vivid knowledge" a common term or is another term more common?

Another definition:
a first-order KB is vivid :<=>
for some finite set of positive function-free ground literals KB+ :
KB = KB+ ∪ 'Negations' ∪ 'Domain closure' ∪ 'Unique names'
And another one:
a KB is vivid :<=>
KB is a complete and consistent set of literals (for some language)
To specify complete:
a KB is complete :<=> there is no formula α such that KB ⊭ α and KB ⊭ ¬α
To specify consistent:
a KB is not consistent :<=> its negation is a tautology (equivalently, the KB is unsatisfiable)

Vivid Knowledge Representation and Reasoning

This article explains it a bit better:
Vivid Knowledge and Tractable Reasoning: Preliminary Report
As far as I understand it, a vivid knowledge base is a knowledge base with a closed world assumption.
Please correct me if I am wrong.
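To make the closed-world reading concrete, here is a minimal sketch (my own illustration, not taken from the cited papers): a vivid KB modeled as a finite set of positive, function-free ground facts, where a negated literal holds exactly when its atom is absent from the set.

```python
# Hypothetical example: facts are tuples (predicate, arg1, arg2).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def holds(literal, facts):
    """Atoms are true iff listed; negated atoms are true iff the atom
    is absent -- the closed-world assumption."""
    negated = literal[0] == "not"
    atom = literal[1] if negated else literal
    return (atom not in facts) if negated else (atom in facts)

print(holds(("parent", "alice", "bob"), facts))             # True
print(holds(("not", ("parent", "carol", "alice")), facts))  # True: absent, so false
```

Every query is decided one way or the other, which matches the "complete and consistent set of literals" definition above.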

Generally a collection of facts and rules is called a
knowledge base

What is Eta abstraction in lambda calculus used for?

Eta abstraction in lambda calculus means the following:
A function f can be written as \x -> f x
Is Eta abstraction of any use while reducing lambda expressions?
Is it only an alternate way of writing certain expressions?
Practical use cases would be appreciated.
The eta reduction/expansion is just a consequence of the law that says that given
f = g
it must be, that for any x
f x = g x
and vice versa.
Hence given:
f x = (\y -> f y) x
we get, by beta reducing the right hand side
f x = f x
which must be true. Thus we can conclude
f = \y -> f y
First, to clarify the terminology, paraphrasing a quote from the Eta conversion article in the Haskell wiki (also incorporating Will Ness' comment above):
Converting from \x -> f x to f would
constitute an eta reduction, and moving in the opposite way
would be an eta abstraction or expansion. The term eta conversion can refer to the process in either direction.
Extensive use of η-reduction can lead to Pointfree programming.
It is also typically used in certain compile-time optimisations.
Summary of the use cases found:
Point-free (style of) programming
Allow lazy evaluation in languages using strict/eager evaluation strategies
Compile-time optimizations
Extensionality
1. Point-free (style of) programming
From the Tacit programming Wikipedia article:
Tacit programming, also called point-free style, is a programming
paradigm in which function definitions do not identify the arguments
(or "points") on which they operate. Instead the definitions merely
compose other functions
Borrowing a Haskell example from sth's answer (which also shows composition that I chose to ignore here):
inc x = x + 1
can be rewritten as
inc = (+) 1
This is because (following yatima2975's reasoning) inc x = x + 1 is just syntactic sugar for \x -> (+) 1 x so
\x -> f x => f
\x -> ((+) 1) x => (+) 1
(Check Ingo's answer for the full proof.)
There is a good thread on Stackoverflow on its usage. (See also this repl.it snippet.)
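The same point-free idea carries over to Python (my analogy, not from the thread): wrapping a function in a lambda just to pass the argument along is an eta expansion, and dropping the wrapper is the corresponding reduction.

```python
import operator
from functools import partial

# Python analogue of Haskell's  inc = (+) 1
inc = partial(operator.add, 1)                # point-free: no named argument
inc_expanded = lambda x: operator.add(1, x)   # eta-expanded version

print(inc(41), inc_expanded(41))  # 42 42

# Same idea with map: the lambda wrapper adds nothing.
words = ["foo", "bar"]
assert list(map(lambda s: str.upper(s), words)) == list(map(str.upper, words))
```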
2. Allow lazy evaluation in languages using strict/eager evaluation strategies
Makes it possible to use lazy evaluation in eager/strict languages.
Paraphrasing from the MLton documentation on Eta Expansion:
Eta expansion delays the evaluation of f until the surrounding function/lambda is applied, and will re-evaluate f each time the function/lambda is applied.
Interesting Stackoverflow thread: Can every functional language be lazy?
2.1 Thunks
I could be wrong, but I think the notion of thunking or thunks belongs here. From the wikipedia article on thunks:
In computer programming, a thunk is a subroutine used to inject an
additional calculation into another subroutine. Thunks are primarily
used to delay a calculation until its result is needed, or to insert
operations at the beginning or end of the other subroutine.
Section 4.2, "Variations on a Scheme — Lazy Evaluation", of Structure and Interpretation of Computer Programs (pdf) has a very detailed introduction to thunks (and even though it never mentions the phrase "lambda calculus", it is worth reading).
(This paper also seemed interesting, but I haven't had time to look into it yet: Thunks and the λ-Calculus.)
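A quick sketch of the connection (my illustration): in an eager language, eta-expanding an expression into a zero-argument lambda is exactly a thunk. Evaluation is delayed until the thunk is called, and happens again on every call, matching the MLton description above.

```python
calls = []

def expensive():
    """Stand-in for a costly computation; records each evaluation."""
    calls.append("evaluated")
    return 42

thunk = lambda: expensive()   # nothing is evaluated yet
assert calls == []
assert thunk() == 42          # evaluated here, on demand
assert thunk() == 42          # and re-evaluated on each call
assert calls == ["evaluated", "evaluated"]
```

Memoizing the result on first call would turn this call-by-name behaviour into call-by-need, which is what SICP's lazy evaluator does.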
3. Compile-time optimizations
Completely ignorant on this topic, therefore just presenting sources:
From Georg P. Loczewski's The Lambda Calculus:
In 'lazy' languages like Lambda Calculus, A++, SML, Haskell, Miranda etc., eta conversion, abstraction and reduction alike, are mainly used within compilers. (See [Jon87] page 22.)
where [Jon87] expands to
Simon L. Peyton Jones
The Implementation of Functional Programming Languages
Prentice Hall International, Hertfordshire,HP2 7EZ, 1987.
ISBN 0 13 453325 9.
search results for "eta" reduction abstraction expansion conversion "compiler" optimization
4. Extensionality
This is another topic that I know little about, and this is more theoretical, so here it goes:
From the Lambda calculus wikipedia article:
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments.
Some other sources:
nLab entry on Eta-conversion that goes deeper into its connection with extensionality, and its relationship with beta-conversion
a ton of info in the thread What's the point of η-conversion in lambda calculus? on the Theoretical Computer Science Stackexchange (but beware: the author of the accepted answer seems to have a beef with the commonly held belief about the relationship between eta reduction and extensionality, so make sure to read the entire page; most of it was over my head, so I have no opinion)
The question above has been cross-posted to Math Exchange as well
Speaking of "over my head" stuff: here's Conor McBride's take. The only thing I understood was that eta conversions can be controversial in certain contexts; reading his reply felt like trying to decipher an alien language (couldn't resist).
Saved this page recursively in Internet Archive so if any of the links are not live anymore then that snapshot may have saved those too.

Where might I find a method to convert an arbitrary boolean expression into conjunctive or disjunctive normal form?

I've written a little app that parses expressions into abstract syntax trees. Right now, I use a bunch of heuristics against the expression in order to decide how to best evaluate the query. Unfortunately, there are examples which make the query plan extremely bad.
I've found a way to provably make better guesses as to how queries should be evaluated, but I need to put my expression into CNF or DNF first in order to get provably correct answers. I know this could result in potentially exponential time and space, but for typical queries my users run this is not a problem.
Now, converting to CNF or DNF is something I do by hand all the time in order to simplify complicated expressions. (Well, maybe not all the time, but I do know how that's done using e.g. demorgan's laws, distributive laws, etc.) However, I'm not sure how to begin translating that into a method that is implementable as an algorithm. I've looked at papers on query optimization, and several start with "well first we put things into CNF" or "first we put things into DNF", and they never seem to explain their method for accomplishing that.
Where should I start?
Look at https://github.com/bastikr/boolean.py
Example:
def test(self):
    expr = parse("a*(b+~c*d)")
    print(expr)
    dnf_expr = normalize(boolean.OR, expr)
    print(list(map(str, dnf_expr)))
    cnf_expr = normalize(boolean.AND, expr)
    print(list(map(str, cnf_expr)))
Output is:
a*(b+(~c*d))
['a*b', 'a*~c*d']
['a', 'b+~c', 'b+d']
UPDATE: Now I prefer this sympy logic package:
>>> from sympy.logic.boolalg import to_dnf
>>> from sympy.abc import A, B, C
>>> to_dnf(B & (A | C))
Or(And(A, B), And(B, C))
>>> to_dnf((A & B) | (A & ~B) | (B & C) | (~B & C), True)
Or(A, C)
The naive vanilla algorithm, for quantifier-free formulae, is:
for CNF, convert to negation normal form with De Morgan laws then distribute OR over AND
for DNF, convert to negation normal form with De Morgan laws then distribute AND over OR
It's unclear to me if your formulae are quantified. But even if they aren't, the end of the Wikipedia article on conjunctive normal form, and its roughly equivalent alter ego in the automated theorem prover world, clausal normal form, outline a usable algorithm (and point to references if you want to make this transformation a bit more clever). If you need more than that, please do tell us more about where you encounter a difficulty.
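The naive algorithm is short enough to sketch directly. This is my own illustration, for quantifier-free formulas represented as nested tuples: a variable is a string, and compound formulas are ("not", p), ("and", p, q), or ("or", p, q).

```python
def nnf(f):
    """Push negations inward with De Morgan's laws; drop double negations."""
    if isinstance(f, str):
        return f
    if f[0] == "not":
        g = f[1]
        if isinstance(g, str):
            return f                                   # negated atom: done
        if g[0] == "not":
            return nnf(g[1])                           # ~~p  =>  p
        if g[0] == "and":
            return ("or", nnf(("not", g[1])), nnf(("not", g[2])))
        return ("and", nnf(("not", g[1])), nnf(("not", g[2])))
    return (f[0], nnf(f[1]), nnf(f[2]))

def cnf(f):
    """Distribute OR over AND, assuming f is already in NNF."""
    if isinstance(f, str) or f[0] == "not":
        return f
    a, b = cnf(f[1]), cnf(f[2])
    if f[0] == "and":
        return ("and", a, b)
    if not isinstance(a, str) and a[0] == "and":       # (p&q)|r => (p|r)&(q|r)
        return ("and", cnf(("or", a[1], b)), cnf(("or", a[2], b)))
    if not isinstance(b, str) and b[0] == "and":
        return ("and", cnf(("or", a, b[1])), cnf(("or", a, b[2])))
    return ("or", a, b)

# (a and b) or c  =>  (a or c) and (b or c)
print(cnf(nnf(("or", ("and", "a", "b"), "c"))))
```

For DNF, swap the roles: distribute AND over OR instead. The distribution step is where the potential exponential blow-up mentioned in the question comes from.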
I came across this page: How to Convert a Formula to CNF. It shows the algorithm for converting a Boolean expression to CNF in pseudo code. It helped me get started on this topic.

Word Suggestion program

Suggest a program or way to handle a word correction/suggestion system.
- Let's say the input is 'Suggset'; it should suggest 'Suggest'.
Thanks in advance. I'm using Python and AJAX. Please don't suggest any jQuery modules, because I need the algorithmic part.
The algorithm that solves your problem is called "edit distance". Given a list of words in some language and a mistyped/incomplete word, you need to build a list of words from the given dictionary closest to it. For example, the distance between "suggest" and "suggset" is equal to 2: you need one deletion and one insertion. As an optimization, you can assign different weights to each operation. For example, you can say that substitution is cheaper than deletion, and that substitution between two letters that lie close together on the keyboard (for example 'v' and 'b') is cheaper than between those that are far apart (for example 'q' and 'l').
The first description of an algorithm for spelling correction appeared in 1964. In 1974, an efficient algorithm based on dynamic programming appeared in the paper "The String-to-String Correction Problem" by Robert A. Wagner and Michael J. Fischer. Any algorithms book has a more or less detailed treatment of it.
For Python there is a library to do that: the Levenshtein distance library
Also check this earlier discussion on Stack Overflow
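The Wagner–Fischer dynamic program mentioned above is only a few lines; here is a sketch of the unweighted version (unit cost for insertion, deletion, and substitution), keeping just two rows of the DP table.

```python
def edit_distance(a, b):
    """Levenshtein distance via Wagner-Fischer DP, O(len(a) * len(b)) time."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))        # distances from a[:0] to each prefix of b
    for i in range(1, m + 1):
        curr = [i] + [0] * n         # distance from a[:i] to b[:0] is i deletions
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution or match
        prev = curr
    return prev[n]

print(edit_distance("suggest", "suggset"))  # 2
```

To get the weighted variant described in the answer, replace the three `+ 1` / `cost` terms with per-operation (or per-letter-pair) weights.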
It will take a lot of work to make one of those yourself. There is a really nice spell checker library written in python called PyEnchant that I've found to be quite nice. Here's an example from their website:
>>> import enchant
>>> d = enchant.Dict("en_US")
>>> d.check("Hello")
True
>>> d.check("Helo")
False
>>> d.suggest("Helo")
['He lo', 'He-lo', 'Hello', 'Helot', 'Help', 'Halo', 'Hell', 'Held', 'Helm', 'Hero', "He'll"]
>>>

Query on Lambda calculus

Continuing on exercises in book Lambda Calculus, the question is as follows:
Suppose a symbol of the λ-calculus alphabet is always 0.5 cm wide. Write down a λ-term with length less than 20 cm having a normal form with length at least (10^10)^10 lightyears. The speed of light is c = 3 * (10^10) cm/sec.
I have absolutely no idea as to what needs to be done in this question. Can anyone please give me some pointers to help understand the question and what needs to be done here? Please do not solve or mention the final answer.
Hoping for a reply.
Regards,
darkie
Not knowing anything about lambda calculus, I understand the question as follows:
You have to write a λ-term in less than 20 cm, where a symbol is 0.5 cm, meaning you are allowed fewer than 40 symbols. This λ-term should expand to a normal form with a length of at least (10^10)^10 = 10^100 lightyears, which comes to (10^100) * 2 * 3 * (10^10) * 365 * 24 * 60 * 60 symbols (2 symbols per cm, times the number of cm in a lightyear). Basically a very long recursive function.
Here's another hint: in lambda calculus, the typical way to represent an integer is by its Church encoding, which is a unary representation. So if you convert the distances into numbers, one thing that would do the trick would be a small function which, when applied to a small number, terminates and produces a very large number.
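To make the Church-encoding hint concrete without giving the answer away, here is a small Python rendering (my own illustration) of Church numerals. The key observation is that applying one numeral to another iterates it, so very short terms can denote enormous numbers.

```python
# Church numerals: the numeral n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))         # 3

# Applying numeral n to numeral m computes m ** n:
print(to_int(three(three)))  # 27 -- two occurrences of "three" yield 3 ** 3
```

Each extra application multiplies the exponent, which is the kind of growth the exercise is after.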

Standards for pseudo code? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I need to translate some python and java routines into pseudo code for my master thesis but have trouble coming up with a syntax/style that is:
consistent
easy to understand
not too verbose
not too close to natural language
not too close to some concrete programming language.
How do you write pseudo code? Are there any standard recommendations?
I recommend looking at the "Introduction to Algorithms" book (by Cormen, Leiserson and Rivest). I've always found its pseudo-code description of algorithms very clear and consistent.
An example:
DIJKSTRA(G, w, s)
1  INITIALIZE-SINGLE-SOURCE(G, s)
2  S ← Ø
3  Q ← V[G]
4  while Q ≠ Ø
5      do u ← EXTRACT-MIN(Q)
6         S ← S ∪ {u}
7         for each vertex v ∈ Adj[u]
8             do RELAX(u, v, w)
Answering my own question, I just wanted to draw attention to the TeX FAQ entry Typesetting pseudocode in LaTeX. It describes a number of different styles, listing advantages and drawbacks. Incidentally, there happen to exist two stylesheets for writing pseudo code in the manner used in "Introduction to Algorithms" by Cormen, as recommended above: newalg and clrscode. The latter was written by Cormen himself.
I suggest you take a look at the Fortress Programming Language.
This is an actual programming language, and not pseudocode, but it was designed to be as close to executable pseudocode as possible. In particular, for designing the syntax, they read and analyzed hundreds of CS and math papers, courses, books and journals to find common usage patterns for pseudocode and other computational/mathematical notations.
You can leverage all that research by just looking at Fortress source code and abstracting out the things you don't need, since your target audience is human, whereas Fortress's is a compiler.
Here is an actual example of running Fortress code from the NAS (NASA Advanced Supercomputing) Conjugate Gradient Parallel Benchmark. For a fun experience, compare the specification of the benchmark with the implementation in Fortress and notice how there is almost a 1:1 correspondence. Also compare the implementation in a couple of other languages, like C or Fortran, and notice how they have absolutely nothing to do with the specification (and are also often an order of magnitude longer than the spec).
I must stress: this is not pseudocode, this is actual working Fortress code! From https://umbilicus.wordpress.com/2009/10/16/fortress-parallel-by-default/
Note that Fortress is written in ASCII characters; the special characters are rendered with a formatter.
If the code is procedural, normal pseudo-code is probably easy (Wikipedia has some examples).
Object-oriented pseudo-code might be more difficult. Consider:
using UML class diagrams to depict the classes/inheritence
using UML sequence diagrams to depict the sequence of code
I don't understand your requirement of "not too close to some concrete programming language".
Python is generally considered as a good candidate for writing pseudo-code. Perhaps a slightly simplified version of python would work for you.
Pascal has traditionally been the most similar to pseudocode when it comes to mathematical and technical fields. I don't know why; it was just always so.
I have some (oh, I don't know, maybe 10) books on a shelf which confirm this theory.
Python, as suggested, can be nice code, but it can also be so unreadable that it's a wonder by itself. Older languages are harder to make unreadable, being "simpler" (take that with caution) than today's ones. They may make it harder to understand what's going on overall, but they are easier to read (less syntax/language knowledge is needed to understand what the program does).
This post is old, but hopefully this will help others.
"Introduction to Algorithms" (by Cormen, Leiserson and Rivest) is a good book to read about algorithms, but its "pseudo-code" is terrible. Things like Q[1...n] are nonsense when one needs to understand what Q[1...n] is supposed to mean, which then has to be explained outside of the "pseudo-code." Moreover, books like "Introduction to Algorithms" like to use mathematical syntax, which violates one purpose of pseudo-code.
Pseudo-code should do two things: abstract away from syntax and be easy to read. If actual code is more descriptive than the pseudo-code, then it is not pseudo-code.
Say you were writing a simple program.
Screen design:
Welcome to the Consumer Discount Program!
Please enter the customers subtotal: 9999.99
The customer receives a 10 percent discount
The customer receives a 20 percent discount
The customer does not receive a discount
The customer's total is: 9999.99
Variable List:
TOTAL: double
SUB_TOTAL: double
DISCOUNT: double
Pseudo-code:
DISCOUNT_PROGRAM
    Print "Welcome to the Consumer Discount Program!"
    Print "Please enter the customers subtotal:"
    Input SUB_TOTAL
    Select the case for SUB_TOTAL
        SUB_TOTAL > 10000 AND SUB_TOTAL <= 50000
            DISCOUNT = 0.1
            Print "The customer receives a 10 percent discount"
        SUB_TOTAL > 50000
            DISCOUNT = 0.2
            Print "The customer receives a 20 percent discount"
        Otherwise
            DISCOUNT = 0
            Print "The customer does not receive a discount"
    TOTAL = SUB_TOTAL - (SUB_TOTAL * DISCOUNT)
    Print "The customer's total is:", TOTAL
Notice that this is very easy to read and does not reference any syntax. This supports all three of Bohm and Jacopini's control structures.
Sequence:
Print "Some stuff"
VALUE = 2 + 1
SOME_FUNCTION(SOME_VARIABLE)
Selection:
if condition
    Do one extra thing

if condition
    do one extra thing
else
    do one extra thing

if condition
    do one extra thing
else if condition
    do one extra thing
else
    do one extra thing

Select the case for SYSTEM_NAME
    condition 1
        statement 1
    condition 2
        statement 2
    condition 3
        statement 3
    otherwise
        statement 4
Repetition:
while condition
    do stuff

for SOME_VALUE TO ANOTHER_VALUE
    do stuff
compare that to this N-Queens "pseudo-code" (https://en.wikipedia.org/wiki/Eight_queens_puzzle):
PlaceQueens(Q[1 .. n], r)
    if r = n + 1
        print Q
    else
        for j ← 1 to n
            legal ← True
            for i ← 1 to r − 1
                if (Q[i] = j) or (Q[i] = j + r − i) or (Q[i] = j − r + i)
                    legal ← False
            if legal
                Q[r] ← j
                PlaceQueens(Q[1 .. n], r + 1)
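For contrast, the discount pseudo-code above translates almost line-for-line into real Python, which is one way to sanity-check that the pseudo-code is complete (the function name and the sample value are mine):

```python
def discount_program(sub_total):
    """Direct rendering of the DISCOUNT_PROGRAM pseudo-code."""
    if 10000 < sub_total <= 50000:
        discount = 0.1
        print("The customer receives a 10 percent discount")
    elif sub_total > 50000:
        discount = 0.2
        print("The customer receives a 20 percent discount")
    else:
        discount = 0
        print("The customer does not receive a discount")
    total = sub_total - (sub_total * discount)
    print("The customer's total is:", total)
    return total

discount_program(20000)  # 10 percent discount, total 18000.0
```

That the mapping is mechanical is the point: the pseudo-code already named every decision, so the syntax is the only thing the translation adds.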
If you can't explain it simply, you don't understand it well enough.
- Albert Einstein
