I need to develop a business application that includes a logic-based system like Prolog. Basically, I need to develop a business application and show that a logic-based system is feasible for it.
This is an academic exercise.
I could only think of puzzles that can be solved using Prolog, but I need a business application where I can use Prolog.
Can anyone please give some suggestions for simple business applications where I could use a Prolog-based logic system?
Thanks & Regards.
How about some kind of resource scheduling like, say, conference rooms, labs, classrooms, etc.? You'd have to keep track of locations, available facilities, events, event priorities, times, what facilities are required for which events, etc. and try to balance these "fairly" in some way. That would be a major challenge for a conventional programming environment and would be immediately useful to boot.
Edited to add:
I found the paper I wanted to reference earlier. You'll have to pay for a copy, but it's worth it if you decide to go this route: School time table scheduling in Prolog
Abstract:
The school Time-Table Scheduling task is a very hard operations research and engineering problem, especially when implemented in a conventional language. That is due to the imperative, deterministic nature of most conventional languages, such as BASIC and PASCAL, and to the long series of constraints and goals inside the problem. The descriptive, logic-based and nondeterministic nature of the Prolog language, and its ability to backtrack, allows one to easily obtain a deductive database, mixing the facts, rules, and constraints of the time-table. Two systems, one nonmonotonic, and one monotonic with a non-monotonic reasoning structure, are compared and their performances in a significant test are discussed. The approach may be easily generalized to other analogous engineering, scheduling and operations research problems.
How about a business "contact management" application? I'm thinking it could be prototyped around a couple of features: sending notes to thank customers for recent purchases, and perhaps a birthday recognition of some kind.
I have been asked to compare the programming models used by two different OSs for wireless sensor networks, TinyOS (which uses an event-based model) and Contiki (which uses events internally, but offers a protothread model for application programmers). I have developed the same application in both systems, and I can present a qualitative analysis of the pros and cons of both models, and give my subjective impression.
However, I have been asked to put forward metrics for comparing them. Apart from the time spent to write the programs (which is roughly equal), I'm not sure what other metrics are applicable. Can you suggest some?
Time to understand the programs? The number of questions asked on the net about deadlocks (normalized by user base)?
I ended up using lines of code and cyclomatic complexity to show how different models impact code organization. I also estimated the difficulty of understanding the two programs by asking another programmer to read them.
I am looking to integrate with a 'Generic Rules Engine' based on the request of a customer.
I think the objective is to allow business stakeholders to add 'Rules' and have those incorporated into an overall metric calculated on a dataset. So far, the Rules I have heard about seem like straightforward snippets of logic in the code. I suppose the drawback is that, even though simple, these would still need to be coded (as opposed to some kind of runtime or data-driven rule specification that is automatically used in the analysis).
Hopefully this is not too vague, but has anyone had any success with such a thing? Which open source projects have the most promise?
Thanks
I have played around with DROOLS, a rule engine from JBoss. I have seen it used in large-scale production systems. It offers representation of rules in various formats, such as flat rule files written in Java or MVEL, DROOLS rule flow, and decision tables composed in Excel.
Rules are executed using the Rete algorithm, which is supposedly faster due to rule memoization and variable sharing. As pointed out by Doug, there is a lot of information on Wikipedia.
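For a rough idea of what wiring it up looks like in code, here is a minimal sketch using the Drools 6+ KIE API; the Order fact class is hypothetical, the actual rule logic would live in a separate .drl file on the classpath, and the exact setup varies by Drools version:

    import org.kie.api.KieServices;
    import org.kie.api.runtime.KieContainer;
    import org.kie.api.runtime.KieSession;

    public class DiscountExample {

        // Hypothetical fact class; Drools rules pattern-match on plain objects like this one.
        public static class Order {
            private final double total;
            private double discount;
            public Order(double total) { this.total = total; }
            public double getTotal()          { return total; }
            public double getDiscount()       { return discount; }
            public void setDiscount(double d) { this.discount = d; }
        }

        public static void main(String[] args) {
            // Load the rule base packaged on the classpath (a .drl file plus kmodule.xml are assumed).
            KieServices ks = KieServices.Factory.get();
            KieContainer container = ks.getKieClasspathContainer();
            KieSession session = container.newKieSession();  // default session from kmodule.xml

            // Insert facts into working memory and let the Rete-based engine fire matching rules.
            Order order = new Order(1500.0);
            session.insert(order);
            session.fireAllRules();
            session.dispose();

            System.out.println("Discount applied: " + order.getDiscount());
        }
    }

The matching rule itself would be written declaratively in DRL (roughly: when an Order with total > 1000 is in working memory, then set its discount), and the engine, not the calling code, decides which rules fire and in what order.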
Expert Systems were the AI rage in the 80s.
There's lots of info on Wikipedia on the Rete Algorithm
See also Inference Engine
One well-regarded toolkit is CLIPS
I am brainstorming an idea for developing high-level software to manipulate matrix algebra equations (tensor manipulations, to be exact) and produce optimized C++ code based on several criteria, such as the sizes of dimensions, available memory on the system, etc.
Something similar in spirit to the Tensor Contraction Engine (TCE), but specifically oriented towards producing optimized rather than general code.
The desired end result is software which is expert at producing parallel programs in my domain.
Does this sort of development fall into the category of expert systems?
What other projects out there work in the same area of producing code given the constraints?
What you are describing is more like a Domain-Specific Language.
http://en.wikipedia.org/wiki/Domain-specific_language
It wouldn't be called an expert system, at least not in the traditional sense of this concept.
Expert systems are rule-based inference engines, whereby the expertise in question is clearly encapsulated in the rules. The system you suggest, while possibly encapsulating insight about the nature of the problem domain inside a linear algebra model of sorts, would act more as a black box than an expert system. One of the characteristics of expert systems is that they can produce an "explanation" of their reasoning, and such a feature is possible in part because the knowledge representation, while formalized, remains close to simple statements in a natural language; matrices and operations on them, while possibly derived from similar observations of reality, are a lot less transparent...
It is unclear from the description in the question whether the system you propose would optimize existing code (possibly in a limited domain), or whether it would produce optimized code, in that case driven by some external goal/function...
Well, production systems (rule systems) are one of four general approaches to computation (Turing machines, Church's recursive functions, Post production systems and Markov algorithms [and several more have been added to that list]), which more or less have these respective realizations: imperative programming, functional programming, and rule-based programming - as far as I know, Markov algorithms don't have an independent implementation. These are all Turing equivalent.
So rule-based programming can be used to write anything at all. Also, early mathematical/symbolic manipulation programs generally did use rule-based programming until the problem was sufficiently well understood (whereupon the approach was changed to imperative or constraint programming - see MACSYMA - hmmm, MACSYMA was written in Lisp, so perhaps I have a different program in mind, or perhaps they originally implemented a rule system in Lisp for this).
You could easily write a rule system to perform the matrix manipulations. You could keep a trace, depending on logical support, to record the actual rules fired that contributed to a solution (some rules that fire might not contribute directly to a solution, after all). Then for every rule you have a mapping to a set of C++ instructions (these don't have to be "complete" - they act more like a semi-executable requirement) which are output as an intermediate language. That is then read by a parser to link it to the required input data and do any fix-up needed. You might find it easier to generate functional code - for one thing, after the fix-up you could more easily optimize the output code in functional source.
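A very rough sketch of the shape this could take (all the rule names, patterns and emitted fragments below are made up for illustration): a tiny rewrite-rule loop that records which rules fired and then maps that trace to C++-like intermediate output.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.UnaryOperator;

    // Toy rewrite-rule engine: expressions are plain strings only to keep the sketch short;
    // a real system would rewrite a proper term/AST representation of the tensor expression.
    public class MatrixRuleSketch {

        record Rule(String name, UnaryOperator<String> rewrite, String cppTemplate) {}

        public static void main(String[] args) {
            List<Rule> rules = List.of(
                new Rule("distribute-mul-over-add",
                         e -> e.replace("A*(B+C)", "A*B + A*C"),
                         "// tmp1 = A*B; tmp2 = A*C; result = tmp1 + tmp2;"),
                new Rule("fuse-into-gemm",
                         e -> e.replace("A*B + A*C", "gemm_fused(A, B, C)"),
                         "gemm_fused(A, B, C, result);"));

            String expr = "A*(B+C)";
            List<String> trace = new ArrayList<>();   // rules that actually fired

            boolean changed = true;
            while (changed) {                          // rewrite until a fixed point is reached
                changed = false;
                for (Rule r : rules) {
                    String next = r.rewrite().apply(expr);
                    if (!next.equals(expr)) {          // the rule "fired"
                        expr = next;
                        trace.add(r.name());
                        changed = true;
                    }
                }
            }

            // The trace of fired rules becomes the emitted intermediate code.
            System.out.println("Final form: " + expr);
            for (String name : trace) {
                rules.stream()
                     .filter(r -> r.name().equals(name))
                     .findFirst()
                     .ifPresent(r -> System.out.println(r.cppTemplate()));
            }
        }
    }

The parser/fix-up stage described above would then take this intermediate output and bind it to the actual operand sizes and memory layout.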
Having said that, other contributors have outlined a domain-specific language approach, and that is what the TCE people did too (my suggestion amounts to the same thing, just using rules).
I was reading Code Complete (2nd Edition), and came across a quote in the margin on page 87 by Bertrand Meyer.
Ask not first what the system does; ask WHAT it does it to!
What exactly is the point Mr. Meyer is trying to get across here? I have some rough ideas, but I would like to make sure I really understand.
... So this is the second fallacy of teleology - to attribute goal-directed behavior to things that are not goal-directed, perhaps without even thinking of the things as alive and spirit-inhabited, but only thinking, X happens in order to Y. "In order to" is mentalistic language, even though it doesn't seem to name a blatantly mental property like "fearful" or "thinks it can fly".
— Eliezer Yudkowsky, artificial intelligence theorist concerned with self-improving AIs with stable goal systems
Bertrand Meyer's homily suggests that sound reasoning about systems is grounded in knowing what concrete entities are altered by the system; the purpose of the alterations is an emergent property.
I believe the focus here is not on what the system does, but on the data it operates on and what those operations are.
This provides two major thinking shifts:
You think of the data and concepts first
You think of operations on that data
With those two "baselines" you will be better prepared to organize a system to achieve your goals, so that operations on data are well understood and make sense.
In effect, he is laying the ground work to be able to write the "contracts" on the code you write.
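As a toy illustration of that progression (data first, then the operations on it, with their contracts spelled out), here is a sketch; the class and its conditions are invented for the example, and plain Java assertions stand in for Meyer's contract clauses:

    // Data/concept first: an employee record that the system operates on.
    public class Employee {
        private final String id;
        private double hoursWorked;

        public Employee(String id) {
            this.id = id;
        }

        // An operation on that data, with its "contract" made explicit:
        // precondition  - hours must be positive
        // postcondition - the running total never decreases
        public void logHours(double hours) {
            assert hours > 0 : "precondition violated: hours must be positive";
            double before = hoursWorked;
            hoursWorked += hours;
            assert hoursWorked >= before : "postcondition violated: total hours decreased";
        }

        public String getId() { return id; }
        public double getHoursWorked() { return hoursWorked; }
    }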
A Google search turned up this passage from Art Gittleman's Computing With C# and the .Net Framework:
Bertrand Meyer gives an example of a payroll program, which produces paychecks from timecards. Management may later want to extend this program to produce statistics or tax information. The payroll function itself may need to be changed to produce weekly checks instead of biweekly checks, for example. The procedures used to implement the original payroll program would need to be changed to make any of these modifications. Meyer notes that any of these payroll programs will manipulate the same sort of data: employee records, company regulations, and so forth.
Focusing on the more stable aspect of such systems, Meyer states a principle: "Ask not first what the system does: Ask WHAT it does to!"; and a definition: "Object-oriented design is the method which leads to software architectures based on the objects every system or subsystem manipulates (rather than "the" function it is meant to ensure)."
Today we take UML's class diagrams and other OOAD approaches for granted, but these were things that were "discovered" along the way.
Also see Object-Oriented Design.
My opinion is that the quote is meant as a method for finding good abstractions in your software. The text next to this quote deals with finding real-world objects around which to design your classes.
A simple example would be something like this:
You are making software for a bank. Because your software works with bank accounts, it should have a class for an account. Then you start thinking about what properties accounts have and what interactions you can have with them.
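A first pass at that account abstraction might look something like the sketch below (the fields and operations are just illustrative):

    // The data the account holds, and the operations the rest of the system performs on it.
    public class Account {
        private final String accountNumber;
        private long balanceInCents;

        public Account(String accountNumber, long openingBalanceInCents) {
            this.accountNumber = accountNumber;
            this.balanceInCents = openingBalanceInCents;
        }

        public void deposit(long amountInCents) {
            if (amountInCents <= 0) {
                throw new IllegalArgumentException("deposit must be positive");
            }
            balanceInCents += amountInCents;
        }

        public void withdraw(long amountInCents) {
            if (amountInCents <= 0 || amountInCents > balanceInCents) {
                throw new IllegalArgumentException("invalid withdrawal amount");
            }
            balanceInCents -= amountInCents;
        }

        public String getAccountNumber() { return accountNumber; }
        public long getBalanceInCents()  { return balanceInCents; }
    }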
Of course, this quote makes more sense if the objects you are trying to model aren't as clear as this case.
Fred Brooks stated it this way:
"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious."
Domain-Driven Design... Understand the problem the software is designed to solve. What "domain" entities (data abstractions) does the system manipulate? And what does it do to those domain entities?
I was wondering how common it is to find genetic algorithm approaches in commercial code.
It always seemed to me that some kinds of schedulers could benefit from a GA engine, as a supplement to the main algorithm.
Genetic Algorithms have been widely used commercially. Optimizing train routing was an early application. More recently fighter planes have used GAs to optimize wing designs. I have used GAs extensively at work to generate solutions to problems that have an extremely large search space.
Many problems are unlikely to benefit from GAs. I disagree with Thomas that they are too hard to understand; a GA is actually very simple. We found that there is a huge amount of knowledge to be gained from tuning the GA to a particular problem, which can be difficult, and, as always, managing large amounts of parallel computation continues to be a problem for many programmers.
A problem that would benefit from a GA is going to have the following characteristics (see the sketch after the list):
A good way to encode potential solutions
A way to compute a numerical score to evaluate the quality of a solution
A large multi-dimensional search space where the answer is non-obvious
A good solution is good enough and a perfect solution is not required
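To make the list above concrete, here is a minimal GA skeleton; the bit-string encoding and the toy "count the ones" fitness function are stand-ins for a real problem, and the parameter values are arbitrary:

    import java.util.Arrays;
    import java.util.Random;

    // Minimal GA sketch: a bit-string encoding, a numerical fitness score, and a search
    // space where a "good enough" answer is acceptable.
    public class SimpleGa {

        static final int GENOME_LENGTH = 64;
        static final int POPULATION_SIZE = 50;
        static final int GENERATIONS = 200;
        static final double MUTATION_RATE = 0.01;
        static final Random RNG = new Random(42);

        // Toy fitness: count the 1-bits. A real problem supplies its own scoring function.
        static int fitness(boolean[] genome) {
            int score = 0;
            for (boolean gene : genome) if (gene) score++;
            return score;
        }

        static boolean[] randomGenome() {
            boolean[] g = new boolean[GENOME_LENGTH];
            for (int i = 0; i < g.length; i++) g[i] = RNG.nextBoolean();
            return g;
        }

        // Tournament selection: pick two random individuals, keep the fitter one.
        static boolean[] select(boolean[][] population) {
            boolean[] a = population[RNG.nextInt(population.length)];
            boolean[] b = population[RNG.nextInt(population.length)];
            return fitness(a) >= fitness(b) ? a : b;
        }

        // Single-point crossover followed by per-bit mutation.
        static boolean[] breed(boolean[] mum, boolean[] dad) {
            boolean[] child = new boolean[GENOME_LENGTH];
            int cut = RNG.nextInt(GENOME_LENGTH);
            for (int i = 0; i < GENOME_LENGTH; i++) {
                child[i] = (i < cut ? mum[i] : dad[i]);
                if (RNG.nextDouble() < MUTATION_RATE) child[i] = !child[i];
            }
            return child;
        }

        public static void main(String[] args) {
            boolean[][] population = new boolean[POPULATION_SIZE][];
            for (int i = 0; i < POPULATION_SIZE; i++) population[i] = randomGenome();

            for (int gen = 0; gen < GENERATIONS; gen++) {
                boolean[][] next = new boolean[POPULATION_SIZE][];
                for (int i = 0; i < POPULATION_SIZE; i++) {
                    next[i] = breed(select(population), select(population));
                }
                population = next;
            }

            boolean[] best = Arrays.stream(population)
                                   .max((x, y) -> Integer.compare(fitness(x), fitness(y)))
                                   .orElseThrow();
            System.out.println("Best fitness after " + GENERATIONS + " generations: " + fitness(best));
        }
    }

In practice only the encoding and the fitness function change from problem to problem; selection, crossover and mutation stay generic.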
There are many problems that could probably benefit from GAs, and in the future they will probably be more widely deployed. I believe that GAs are used in cutting-edge engineering more than people think; however, most companies (like mine) guard those secrets extremely closely. It is only long after the fact that it is revealed that GAs were used.
Most people that deal with "normal" applications probably don't have much use for them though.
If you want to find an example, look at Postgres's Query Planner. It uses many techniques, and one just so happens to be genetic.
http://developer.postgresql.org/pgdocs/postgres/geqo-pg-intro.html
I used a GA in my Master's thesis, but since then I haven't found anything in my daily work that a GA could solve that I couldn't solve faster with some other algorithm.
I don't think it is particularly common to find genetic algorithms in everyday commercial code. They are more commonly found in academic/research code, where the need to find the "best algorithm" is less important than the need to just find a good solution to a problem.
Nonetheless, I have consulted on a couple of commercial projects that do use GAs (chiefly as a result of my involvement with GAUL). I think the most interesting example was at a Biotech company. They used the GA to optimise scoring functions that were used for virtual screening, as part of their drug discovery application.
Earlier this year, with my current company, I added a new feature to one of our products that uses another GA. I think we might be marketing this from next month. Basically, the GA is used to explore molecules that have the potential for binding to a protein, and could therefore be further investigated as drugs targeting that protein. A competing product that also uses a GA is EA inventor.
As part of my thesis I wrote a generic Java framework for the multi-objective optimisation algorithm mPOEMS (Multiobjective prototype optimization with evolved improvement steps), which is a GA using evolutionary concepts. It is generic in the sense that all problem-independent parts have been separated from the problem-dependent parts, and an interface is provided so that the framework can be used by adding only the problem-dependent parts. Thus anyone who wants to use the algorithm does not have to begin from zero, and it facilitates the work a lot.
You can find the code here.
The solutions which you can find with this algorithm have been compared in a scientific work with the state-of-the-art algorithms SPEA-2 and NSGA, and it has been shown that the algorithm performs comparably or even better, depending on the metrics you use to measure performance, and especially depending on the optimization problem you are looking at.
You can find it here.
Also, as part of my thesis and as proof of work, I applied this framework to the project selection problem found in portfolio management. It is about selecting the projects which add the most value to the company, best support the company's strategy, or support any other arbitrary goal - e.g. selecting a certain number of projects from a specific category, or maximizing project synergies, ...
My thesis which applies this framework to the project selection problem:
http://www.ub.tuwien.ac.at/dipl/2008/AC05038968.pdf
After that I worked in a portfolio management department at one of the Fortune 500 companies, where they used commercial software which also applied a GA to the project selection problem / portfolio optimization.
Further resources:
The documentation of the framework:
http://thomaskremmel.com/mpoems/mpoems_in_java_documentation.pdf
mPOEMS presentation paper:
http://portal.acm.org/citation.cfm?id=1792634.1792653
Actually, with a bit of enthusiasm, anybody could easily adapt the code of the generic framework to an arbitrary multi-objective optimisation problem.
I haven't, but I've heard from a friend of mine about a company (can't remember their name) which uses mutating genetic algorithms to calculate placements and lengths of antennas (or something). And they're supposed to (according to my friend) have had huge success with this. I guess GAs are just too complex for the "average Joe developer" to become mainstream. Kind of like MapReduce - spectacularly cool, but WAY too advanced to hit the "mainstream"...
LibreOffice Calc uses it in its Solver module.