For example, if I have I = V / R as input, I want V = I * R and R = V / I as output. I know this can be a broad question, but how should I get started? Should I use a stack/tree, as when building a postfix-notation interpreter?
You need to be able to represent the formulas symbolically, and apply algebraic rules to manipulate those formulas.
The easiest way to do this is to define a syntax that will accept your formulas, ideally defined explicitly as a BNF grammar. With that you can build a parser for such formulas; done appropriately, your parser can build an abstract syntax tree (AST) representing the formula. You can do this with tools like lex and yacc or ANTLR. Here's my advice on how to do this with a custom recursive descent parser: Is there an alternative for flex/bison that is usable on 8-bit embedded systems?.
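For illustration, a minimal BNF-style grammar for such formulas might look like the following (a sketch of my own, not the grammar from the linked answer):

    formula := IDENT '=' expr
    expr    := term ( ('+' | '-') term )*
    term    := factor ( ('*' | '/') factor )*
    factor  := IDENT | NUMBER | '(' expr ')'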
Once you have trees encoding your formulas, you can implement procedures to modify the trees according to algebraic laws, such as:
X=Y/Z => X*Z = Y if Z ~= 0
Now you can implement such a rule by writing procedural code that climbs over the tree, finds a match to the pattern, and then smashes the tree to produce the result. This is pretty straightforward compiler technology. If you are enthusiastic, you can probably code a half dozen algebraic laws fairly quickly. You'll discover the code that does this is pretty grotty, what with climbing up and down the tree, matching nodes, and smashing links between nodes to produce the result.
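A minimal sketch of that idea in Python (my own illustration, using nested tuples as a stand-in AST; the Z ~= 0 side condition is ignored here):

    # AST nodes as tuples: ("=", lhs, rhs), ("/", num, den), ("*", a, b)
    def apply_div_rule(node):
        # match  X = Y/Z  and rewrite to  X*Z = Y
        if (isinstance(node, tuple) and node[0] == "="
                and isinstance(node[2], tuple) and node[2][0] == "/"):
            x, (_, y, z) = node[1], node[2]
            return ("=", ("*", x, z), y)
        return node  # no match: leave the tree alone

    print(apply_div_rule(("=", "I", ("/", "V", "R"))))
    # ('=', ('*', 'I', 'R'), 'V')  --  i.e.  I*R = V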
Another way to do this is to use a program transformation system that will let you
define a grammar for your formulas directly,
define (tree) rewrite rules directly in terms of your grammar (e.g., essentially you provide the algebra rule above directly),
apply the rewrite rules on demand for you, and
regenerate the symbolic formula from the AST.
My company's DMS Software Reengineering Toolkit can do this. You can see a fully worked example (too large to copy here) of algebra and calculus at Algebra Defined By Transformation Rules.
I was reading through a paper published by the University of Washington with tips about Machine Learning algorithms.
The way they break down an ML algorithm is into 3 parts: representation, evaluation, and optimization. I would like to understand how these three parts work together; what is the process like throughout a typical machine learning algorithm?
I understand that my question is very abstract and each algorithm will be different, but if you know of a way to explain it abstractly please do. If not feel free to use a specific algorithm to explain.
Paper: http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf
See Table 1.
Sebastian Raschka has provided a very nice flowchart of the (supervised) machine learning process.
(I am familiar with text classification, so I will use this as an example)
In short:
Representation: Your data needs to be brought into a suitable algorithmic form. For text classification you would, for example, extract features from your full-text documents (input data) and bring them into a bag-of-words representation.
Application of the respective algorithm: Here your algorithm is executed. After training you have to do some kind of evaluation to measure your success (the evaluation criteria depend on the task). (The evaluation is the final part of this step.)
Optimization: Learning algorithms usually have one or more parameters to tune. Having reference pairs of parameter configuration / evaluation result (from the previous step), you can now tweak the parameters and evaluate the effect on your results.
(In the flowchart this is the edge where we go back to the step "Learning algorithm training". A minimal end-to-end sketch of the three parts follows below.)
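Here is that sketch in scikit-learn (my own illustration; the answer names no particular library), with the three parts marked:

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

    pipe = Pipeline([
        ("bow", CountVectorizer()),                   # representation: bag of words
        ("clf", LogisticRegression(max_iter=1000)),   # learning algorithm
    ])

    # optimization: tune a parameter; cross-validated accuracy is the evaluation
    search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
    search.fit(train.data, train.target)
    print(search.best_params_, search.best_score_)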
In the context of the Multi-Class Classification (MCC) problem, a common approach is to build the final solution from multiple binary classifiers. Two composition strategies typically mentioned are one-vs-all and one-vs-one.
To distinguish the approaches, it is clearest to look at what each binary classifier attempts to do. One-vs-all's primitive classifier attempts to separate just one class from the rest, whereas one-vs-one's primitive attempts to separate one class from one other class. One-vs-one is also, quite confusingly, called all-vs-all and all-pairs.
I want to investigate this rather simple idea of building an MCC classifier by composing binary classifiers in a binary-decision-tree-like manner.
For an illustrative example:

          has wings?
         /          \
     quack?        nyan?
     /    \        /    \
  duck    bird   cat    dog
As you can see, the has wings? node does a 2-vs-2 classification, so I am calling the approach many-vs-many. A sketch of the idea in code follows below.
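For concreteness, here is a minimal sketch of the composition I have in mind (assuming scikit-learn; logistic regression stands in for the primitive binary classifier, and X, y are assumed given as a feature matrix and an array of string labels):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    class MvMNode:
        def __init__(self, left, right):
            # left/right: a nested MvMNode or a single class label
            self.left, self.right = left, right
            self.clf = LogisticRegression()

        @staticmethod
        def classes_of(child):
            if isinstance(child, MvMNode):
                return MvMNode.classes_of(child.left) | MvMNode.classes_of(child.right)
            return {child}

        def fit(self, X, y):
            mine = sorted(MvMNode.classes_of(self))
            right = sorted(MvMNode.classes_of(self.right))
            mask = np.isin(y, mine)                     # train only on my classes
            self.clf.fit(X[mask], np.isin(y[mask], right).astype(int))
            for child in (self.left, self.right):
                if isinstance(child, MvMNode):
                    child.fit(X, y)
            return self

        def predict_one(self, x):
            side = self.clf.predict(x.reshape(1, -1))[0]
            child = self.right if side == 1 else self.left
            return child.predict_one(x) if isinstance(child, MvMNode) else child

    # the tree above: "has wings?" splits {duck, bird} from {cat, dog}
    tree = MvMNode(MvMNode("duck", "bird"), MvMNode("cat", "dog"))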
The problem is, I don't know where to start reading.
Is there a good paper you would recommend?
To give a bit more context,
I'm considering using a multilevel evolutionary algorithm (MLEA) to build the tree.
So if there is an even more direct answer, it would be most welcomed.
Edit: For more context (and perhaps you might find it useful), I read this paper, which is one of the GECCO 2011 best paper winners; it uses an MLEA to compose an MCC classifier in a one-vs-all manner. This is what inspired me to look for a way to modify it into a decision tree builder.
What you want looks very much like Decision Trees.
From wiki:
Decision tree learning, used in statistics, data mining and machine learning, uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value. More descriptive names for such tree models are classification trees or regression trees. In these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels.
Sailesh's answer is correct in that what you intend to build is a decision tree. There are already many algorithms for learning such trees, e.g. Random Forests. You could try weka, for example, and see what is available there.
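As a quick illustration of how little code such an off-the-shelf learner needs (using scikit-learn here rather than weka, purely as an example):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)  # a 3-class toy problem
    scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
    print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))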
If you're more interested in evolutionary algorithms, I want to mention Genetic Programming. You can try, for example, our implementation in HeuristicLab. It can deal with numeric classes and attempts to find a formula (tree) that maps each row to its respective class, using e.g. mean squared error (MSE) as the fitness function.
There are also instance-based classification methods like nearest neighbor, and kernel-based methods like support vector machines. Instance-based methods also support multiple classes, but with kernel methods you have to use one of the approaches you mentioned.
I am interested in persisting individual directed graphs. This question is not asking for a full-scale graph database solution, but for a document format that I can use to save an individual arbitrary directed graph. I don't know what notation and file format would be the smartest choice.
My primary concerns are:
Expressiveness/Flexibility - I want the ability to express graphs of different types. While the standard use case would be a simple directed graph, it should be possible to express trees, cyclical graphs, multi-graphs. As a bare minimum, I would expect support for labeling and weighting of edges and nodes. Notations for describing higraphs and edge composition/hyper-edges would also be highly desirable, although I am aware that such solutions may not exist.
Type System-Independence - I am interested in representing the structural qualities of graphs. Some solutions include an extensible type system for typed edges and nodes (e.g. RDF/OWL). I would only be interested in such a representation, if there were a clearly defined canonical decomposition of typed elements into primitives (nodes/edges/attributes). What I am trying to avoid here is the ability for multiple representations of equivalent graphs, where the equivalence is not discernible.
Canonical Representation - There should be a mechanism that allows the graph to be represented canonically (in such a way that lexical equivalence of canonical-representations could be used to determine equivalence).
Presentation Independent - I would prefer a notation that is not dependent upon the presentation of the graph. This would include spatial orientation, colors, font, etc. I am only interested in representing the data. One of the features I don't like about DOT language, DGML or SVG (at least for this particular purpose) is the focus on visual representation.
Standardized / Open / Compatible - The less implementation work that I have to do, the better. If the format is standardized and reliable tools already exist for working with the format, then it is more preferable. Accompanying this requirement is another, that the format should be highly-compatible. The proprietary nature of Microsoft's DGML is a reason for my aversion, despite the Visual Studio tooling and the fact that I work primarily with .NET (now). The fact that W3C publishes RDF standards is a motivation for considering a limited subset of RDF as a representational tool. I also appreciate GXL and GraphML, because they have well documented xml schemas, thereby facilitating the ability to integrate their data with any xml-compatible software package.
Simplicity / Readability - I appreciate human-readable syntax and ease of interpretation. I also appreciate representation that simplifies parsing. For this reason, I like GML, but I am concerned it is not mainstream enough to be a realistic choice. I would also consider JSON or YAML for readability, if they were not so limited in their respective abilities to represent complex (non-DAG) structures.
Efficiency / Concise Representation - It's worth considering that whatever format I end up choosing will inevitably have to be persisted and transferred over some network. Therefore, file size is a relevant consideration.
Overview
I recognize that I will most likely be unable to find a solution that satisfies every criteria on my wishlist. I am simply asking for the file format that is closest to what I want and that doesn't limit extensibility for unsupported use cases.
ObWindyPreamble: in the RDF world, there are a gazillion different surface syntax formats to choose from. RDF itself is an abstract metamodel for data, not directly a "graph syntax". You can of course directly represent a graph in RDF (since RDF models are graphs), but given that you want to represent different kinds of graphs, you may end up having to abstract away and actually create an RDF vocabulary for representing different types of graphs.
All in all, I'm not convinced that RDF is the best way to go for you, but if you'd choose one, I'd say that RDF's Turtle syntax is something worth looking into. It certainly ticks the readability and simplicity boxes, as well as being a standard (well, almost... W3C is working on standardizing it) and having wide (open-source) tool support.
RDF models roughly follow set semantics, which means that a canonical syntax representation cannot really be enforced: two files can have information in a different order without it affecting the actual model, or can even contain duplicate information. However, if you enforce a simple sorting algorithm when producing files (something for which most RDF parsers/writers have support), you should be able to get away with doing line-based comparisons and determining graph equivalence based on the surface syntax.
Just as a simple example, let's assume we have a very simple, directed, labeled graph:
A ---r1---> B ---r2---> C
You could represent this directly in RDF, as follows (using Turtle syntax):
@prefix : <http://example.org/> .
:A :r1 :B .
:B :r2 :C .
In a more abstract modeling, you could do something like this:
@prefix g: <http://example.org/graph-model/> .
@prefix : <http://example.org/> .

:A a g:Vertex .
:B a g:Vertex .
:C a g:Vertex .

:r1 a g:DirectedEdge ;
    g:from :A ;
    g:to :B .

:r2 a g:DirectedEdge ;
    g:from :B ;
    g:to :C .
The above is just a simplistic example of course, but hopefully it illustrates that this potentially meets quite a few of the things on your wish list.
By the way, if you want even simpler, N-Triples is also an RDF syntax, which is line-based and therefore easy to process in a streaming fashion. It's slightly more verbose than Turtle but it may make file comparison easier.
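A small sketch of the line-based comparison idea (assuming rdflib, which is my choice for illustration; note that blank nodes get arbitrary identifiers and would defeat this simple scheme, so it only works when all nodes are named):

    from rdflib import Graph

    def canonical_ntriples(path):
        g = Graph()
        g.parse(path, format="turtle")
        # one triple per line; sorting gives an order-independent form
        lines = g.serialize(format="nt").splitlines()
        return "\n".join(sorted(line for line in lines if line.strip()))

    # two Turtle files describe the same graph iff their sorted N-Triples match
    same = canonical_ntriples("a.ttl") == canonical_ntriples("b.ttl")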
My thoughts:
What I'm missing is your particular practical purpose/domain.
You mention the generic JSON format next to specific formats (e.g. GraphML, which is an application of XML). So I'm left with the question whether you would consider making your own format.
Wouldn't having a 'canonical representation that can be used to determine equivalence' solve the graph isomorphism problem?
GraphML seems to cover a lot of your theoretical requirements, so I'd suggest you create a JSON version of it; this would then also cover requirement 6. You could then create a converter between the JSON format and GraphML (and possibly other formats), along the lines sketched below.
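Such a JSON encoding of the GraphML node/edge model could look roughly like this (a hypothetical format of my own, just to make the idea concrete):

    {
      "directed": true,
      "nodes": [
        { "id": "A", "label": "start" },
        { "id": "B" }
      ],
      "edges": [
        { "source": "A", "target": "B", "label": "r1", "weight": 1.0 }
      ]
    }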
For your requirement 7 it again all depends on the practical graph sizes. I mean, nowadays sending up to a few MB to a friggin mobile device is not considered much. A graph of a few MB in (about) any format you mention, is already a relatively large beast with tens of thousands of nodes & edges.
What about the Trivial Graph Format?
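For reference, the A ---r1---> B ---r2---> C example from above would look like this in TGF (a node list, a "#" separator, then an edge list):

    1 A
    2 B
    3 C
    #
    1 2 r1
    2 3 r2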
My current project is an advanced tag database with boolean retrieval features. Records are queried with boolean expressions such as this (e.g. in a music database):
funky-music and not (live or cover)
which should yield all funky music in the music database but not live or cover versions of the songs.
When it comes to caching, the problem is that there exist queries which are equivalent but different in structure. For example, applying De Morgan's rule, the above query could be written like this:
funky-music and not live and not cover
which would yield exactly the same records, but would of course break caching if caching were implemented by hashing the query string, for example.
Therefore, my first intention was to create a truth table of the query which could then be used as a caching key as equivalent expressions form the same truth table. Unfortunately, this is not practicable as the truth table grows exponentially with the number of inputs (tags) and I do not want to limit the number of tags used in one query.
Another approach could be traversing the syntax tree applying rules defined by the boolean algebra to form a (minimal) normalized representation which seems to be tricky too.
Thus the overall question is: is there a practicable way to implement recognition of equivalent queries without the need for circuit minimization or truth tables (edit: or any other algorithm which is NP-hard)?
The ne plus ultra would be recognizing already cached subqueries, but that is not the primary target.
A general and efficient algorithm to determine whether a query is equivalent to "False" could be used to solve NP-complete problems efficiently, so you are unlikely to find one.
You could try transforming your queries into a canonical form. Because of the above, there will always be queries that are very expensive to transform into any given form, but you might find that, in practice, some form works pretty well most of the time; you can always give up halfway through a transformation if it is becoming too hard.
You could look at http://en.wikipedia.org/wiki/Conjunctive_normal_form, http://en.wikipedia.org/wiki/Disjunctive_normal_form, http://en.wikipedia.org/wiki/Binary_decision_diagram.
You can convert the queries into conjunctive normal form (CNF). It is a canonical, simple representation of boolean formulae that is normally the basis for SAT solvers.
Most likely "large" queries are going to have lots of conjunctions (rather than lots of disjunctions) so CNF should work well.
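A small sketch of that idea using sympy (my choice for illustration; the answer names no library). Two structurally different but equivalent queries normalize to the same expression, which can then serve as the cache key; note that the simplification step can still blow up on adversarial queries, per the NP-hardness caveat above:

    from sympy import symbols
    from sympy.logic.boolalg import to_cnf

    funky, live, cover = symbols("funky live cover")

    q1 = funky & ~(live | cover)   # funky-music and not (live or cover)
    q2 = funky & ~live & ~cover    # the De Morgan variant

    # identical canonical forms -> same cache key
    print(to_cnf(q1, simplify=True) == to_cnf(q2, simplify=True))  # True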
The Quine-McCluskey algorithm should achieve what you are looking for. It is similar to Karnaugh maps, but easier to implement in software.
The graph is arguably the most versatile and valuable data structure of all. I can store single variables, lists, hashes etc., and of course graphs, with it.
Given this, are there any languages that offer inline / native graph support and syntax? I can create variables, arrays, lists and hashes inline in Ruby, Python and Javascript, but if I want a graph, I have to either manage the representation myself with a matrix / list, or select a library, and use the graph through method calls.
Why on earth is this still the case in 2010? And, practically, are there any languages out there which offer inline graph support and syntax?
The main problem with what you are asking is that a more general solution is not the best one for any specific problem; it is average for all of them, but the best for none.
OK, you can store a list in a graph (as a degenerate case), but why would you do something like that? And how would you store a hashmap inside a graph? Why would you need such a structure?
And do not forget that a graph implementation must be chosen according to the operations you are going to perform on it; otherwise it would be like using a hashtable to store a list of values, or a list instead of a tree to store an ordered collection. You can use an adjacency matrix, an edge list, or adjacency lists: each implementation has its own strengths and weaknesses (see the sketch below).
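For instance (a minimal illustration of that trade-off):

    # the same directed graph in three representations
    edges = [("A", "B"), ("B", "C"), ("A", "C")]       # edge list: cheap to build
    vertices = ["A", "B", "C"]

    adj_list = {v: [] for v in vertices}               # adjacency lists: fast
    for u, v in edges:                                 # iteration over neighbors
        adj_list[u].append(v)

    index = {v: i for i, v in enumerate(vertices)}     # adjacency matrix: O(1) edge
    matrix = [[0] * len(vertices) for _ in vertices]   # lookup, O(V^2) space
    for u, v in edges:
        matrix[index[u]][index[v]] = 1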
Then, graphs can have many more properties than other collections of data: cyclic, acyclic, directed, undirected, bipartite, and so on. For any specific case you can implement them in a different way (making some assumptions about the graph you need), so having them in native syntax would be overkill, since you would need to configure them anyway (and the language would have to provide many implementations/optimizations).
If everything is already made, you remove the fun of developing :)
By the way, just look for a language that allows you to write your own graph DSL, and live with it!
Gremlin, a graph-based programming language: https://github.com/tinkerpop/gremlin/wiki
GrGen.NET (www.grgen.net) is a programming language for graph transformation plus an environment including a graphical debugger. You can define your graph model, the rewrite rules, and rule control with some nice special purpose languages and use the generated assemblies/C# code from any .NET language you like or from the supplied shell.
To understand why normal languages don't offer such a convenient/built-in interface to graphs, just take a look at the amount of code written for that project: the compiler alone is several man-years of work. That's a price tag too hefty for a feature/data structure only a minority of programmers ever need - so it's not included in general purpose programming languages.