I am working on a project that has many XSLT transformations.
The transformations have to be as fast as possible.
For readability I wrote many of them separating the "business logic" from
the "output". For example:
<!-- Business Logic -->
<xsl:variable name="myLocalVar">
<xsl:value-of select="func:whateverComputation(params)" />
</xsl:variable>
<!-- more business logic here -->
<!-- Output -->
<xsl:element name="mytag">
<xsl:value-of select="$myLocalVar" />
</xsl:element>
Of course this can be written in a more compact form:
<xsl:element name="mytag">
<xsl:value-of select="func:whateverComputation(params)" />
</xsl:element>
Is the first form slower than the second one?
From a section of the XSLT FAQ:
Few Points related to XSLT Performance:
xsl:variables are dynamic values. These variables are not in cache, and run every time that they are referenced in XSL. Explicit type casting of xsl:variable improves the performance. You can do type casting with string() and boolean() functions.
For example:
<xsl:variable name="_attr" select="string( /node/child[ #attr ] )">
Instead of using sub-elements, use attributes wherever possible. Using attributes instead of elements improves the performance. When performing XPath matches, attributes are faster because they are loosely typed. This makes validation of the schema easier.
When you match against attribute values, use enumerator attributes. Use multiple attribute names as bits, and set their values to true or false.
Eight tips for how to use XSLT efficiently:
Keep the source documents small. If necessary split the document first.
Keep the XSLT processor (and Java VM) loaded in memory between runs.
If you use the same stylesheet repeatedly, compile it first.
If you use the same source document repeatedly, keep it in memory.
If you perform the same transformation repeatedly, don't. Store the result instead.
Keep the output document small. For example, if you're generating HTML, use CSS.
Never validate the same source document more than once.
Split complex transformations into several stages.
Eight tips for how to write efficient XSLT:
Avoid repeated use of "//item".
Don't evaluate the same node-set more than once; save it in a variable.
Avoid <xsl:number> if you can. For example, by using position().
Use <xsl:key>, for example to solve grouping problems (see the sketch after this list).
Avoid complex patterns in template rules. Instead, use <xsl:choose> within the rule.
Be careful when using the preceding[-sibling] or following[-sibling] axes. This often indicates an algorithm with n-squared performance.
Don't sort the same node-set more than once. If necessary, save it as a result tree fragment and access it using the node-set() extension function.
To output the text value of a simple #PCDATA element, use <xsl:value-of> in preference to <xsl:apply-templates>.
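To make the <xsl:key> tip above concrete, here is a minimal Muenchian-grouping sketch. The element name item, the attribute category, the key name, and the assumption that the items are children of the document element are all invented for illustration:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Index every item by its category attribute -->
  <xsl:key name="items-by-category" match="item" use="@category"/>

  <xsl:template match="/">
    <!-- Visit only the first item of each category: one indexed key lookup
         per item instead of rescanning the document for every group -->
    <xsl:for-each select="/*/item[generate-id() =
                                  generate-id(key('items-by-category', @category)[1])]">
      <group category="{@category}">
        <xsl:copy-of select="key('items-by-category', @category)"/>
      </group>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>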
Saving the result of a function application to a variable isn't going to have any significant impact on performance in the general case (and some XSLT processors, such as Saxon, use lazy evaluation, so the function will not be evaluated until the variable is actually needed).
On the other hand, if the function must be evaluated more than once with the same parameters, saving the result in a variable can in some cases bring a significant increase in efficiency.
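To tie that back to the original example, a minimal sketch of the caching pattern (func:whateverComputation and its parameters are taken from the question; the second output element is invented just to show the value being reused):

<!-- Computed once; note that select="..." binds the value directly, whereas
     the content form used in the question wraps it in a result tree fragment -->
<xsl:variable name="myLocalVar" select="func:whateverComputation(params)"/>

<xsl:element name="mytag">
  <xsl:value-of select="$myLocalVar"/>
</xsl:element>
<xsl:element name="myothertag">
  <!-- Without the variable, this would evaluate the function a second time -->
  <xsl:value-of select="$myLocalVar"/>
</xsl:element>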
The correct way to improve performance is:
1. Profile/measure to identify the real bottlenecks.
2. Optimize only the biggest bottlenecks.
3. If there is still a need for increased performance, start a new iteration, going back to step 1.
To quote Donald Knuth: "Premature optimization is the root of all evil" -- which is actually a paraphrase of the well-known saying: "The road to hell is paved with good intentions."
A little late to the game, but I thought I'd share this link: Techniques to Improve Performance of XSL Transformations.
Recently I have been thinking about an algorithm I constructed myself. I call it Replacement Compiling.
It works as follows:
Define a language as well as its operators' precedence, such as
(1) store <value> as <id>, replace with: var <id> = <value>, precedence: 1
(2) add <num> to <num>, replace with: <num> + <num>, precedence: 2
Accept a line of input, such as store add 1 to 2 as a;
Tokenize it: <kw,store><kw,add><num,1><kw,to><num,2><kw,as><id,a><EOF>;
Then scan through all the tokens until reaching the end-of-file, find the operation with the highest precedence, and "pack" that operation:
<kw,store>(<kw,add><num,1><kw,to><num,2>)<kw,as><id,a><EOF>
Replace the "sub-statement", the expression in parenthesis, with the defined replacement:
<kw,store>(1 + 2)<kw,as><id,a><EOF>
Repeat until no more statements are left:
(<kw,store>(1 + 2)<kw,as><id,a>)<EOF>
(var a = (1 + 2))
Then evaluate the code with the built-in function, eval().
eval("var a = (1 + 2)")
Then my question is: would this algorithm work, and what are its limitations? Does this algorithm work better on simple languages?
This won't work as-is, because there's no way of deciding the precedence of operations and keywords, but you have essentially defined parsing (and thrown in an interpretation step at the end). This looks pretty close to operator-precedence parsing, but I could be wrong in the details of your vision. The real keys to what makes a parsing algorithm are the direction/precedence it reads the code, whether the decisions are made top-down (figure out what kind of statement and apply the rules) or bottom-up (assemble small pieces into larger components until the types of statements are apparent), and whether the grammar is encoded as code or data for a generic parser. (I'm probably overlooking something, but this should give you a starting point to make sense out of further reading.)
More typically, code is generally parsed using an LR technique (LL if it's top-down) that's driven from a state machine with look-ahead and next-step information, but you'll also find the occasional recursive descent. Since they're all doing very similar things (only implemented differently), your rough algorithm could probably be refined to look a lot like any of them.
For most people learning about parsing, recursive-descent is the way to go, since everything is in the code instead of building what amounts to an interpreter for the state machine definition. But most parser generators build an LL or LR compiler.
And I'm obviously over-simplifying the field, since you can see at the bottom of the Wikipedia pages that there's a smattering of related systems that partly revolve around the kind of grammar you have available. But for most languages, those are the big-three algorithms.
What you've defined is a rewriting system: https://en.wikipedia.org/wiki/Rewriting
You can make a compiler like that, but it's hard work and runs slowly, and if you do a really good job of optimizing it then you'll end up with a conventional table-driven parser. It would be better in the end to learn about those first and just start there.
If you really don't want to use a parser generating tool, then the easiest way to write a parser for a simple language by hand is usually recursive descent: https://en.wikipedia.org/wiki/Recursive_descent_parser
I'm writing a program that's manipulating polynomials. I'm defining polynomials recursively as either a term (base case) or a sum or product of polynomials (recursive cases).
Sums and products are completely identical as far as their contents are concerned. They just contain a sequence of polynomials. But they need to be processed very differently. So to distinguish them I have to somehow tag my sequences of polynomials.
Currently I have two records - Sum and Product - defined. But this is causing my code to be littered with the line (:polynomials sum-or-product) to extract the contents of polynomials. Also printing out even small polynomials in the REPL produces so much boilerplate that I have to run everything through a dedicated prettyprinting routine if I want to make sense of it.
Alternatives I have considered are tagging my sums and products using metadata instead, or putting a + or * symbol at the head of the sequence. But I'm not convinced that either of these approaches is good style, and I'm wondering if there's perhaps another option I haven't considered yet.
Putting a + or * symbol at the head of the sequence sounds like it would print out nicely. I would try implementing the processing of these two different "types" via multimethods, which keeps the calling convention neat and extensible. The multimethods documentation starts from an object-oriented programming point of view, but the "area of a shape" example is a very neat illustration of what this approach can accomplish.
In your case you'd use the first element of the seq to determine whether you are dealing with a sum or a product of polynomials, and the multimethod would automagically use the correct implementation.
This question is specifically about using XPath in XSLT 2.0 and Saxon.
XPaths ending with [1]
For XPaths like
following-sibling::foo[1]
descendant::bar[1]
I take it for granted that Saxon will not iterate over the entire axis but stop when it finds the first matching node - crucial in situations like:
following-sibling::foo[some:expensivePredicate(.)][1]
I assume that this is also the case for XPaths like this:
(following-sibling::foo/descendant::bar)[1]
I.e. Saxon will not build the entire set of nodes matching following-sibling::foo/descendant::bar before picking the first one in the set. Rather, it will (even for chained axes) stop at the first matching node.
XPaths ending with [last()]
Now it gets interesting. When going "backwards" in the tree, I assume that XPaths like
preceding-sibling::foo[1]
work just as efficiently as their following-sibling equivalents. But what happens when chaining axes, e.g.
(preceding-sibling::foo/descendant::bar)[last()]
As we need to use [last()] here instead of [1],
will Saxon build the entire node set just to count it and get a numeric value for last()?
Or will it be smart and stop iterating the preceding-sibling axis when it finds a matching descendant?
Or will it be even more clever and iterate the descendant axis in reverse to more efficiently find the last descendant?
Saxon has a variety of strategies for evaluating last(). When used as a predicate, meaning [position()=last()], it is generally translated to an internal function [isLast()] which can be evaluated by a single-item lookahead. (So in your example of (preceding-sibling::foo/descendant::bar)[last()], it doesn't build the node-set in memory; rather, it reads the nodes one by one and, when it hits the end, returns the last one it found.)
In other cases, particularly when used in XSLT match patterns, Saxon will convert child::x[last()] to child::x[not(following-sibling::x)].
When none of these approaches work, for many years Saxon had two strategies for evaluating last() depending on the expression it was applied to: (a) sometimes it would evaluate the expression twice, counting nodes the first time and returning them the second time; (b) in other cases it would read all the nodes into memory. We've recently encountered cases where strategy (a) fails: see https://saxonica.plan.io/issues/3122, and so we're always doing (b).
The last() expression is potentially expensive and should be avoided where possible. For example, the classic "insert a separator between adjacent items", which is often written
xx
if (position() != last()) sep
is much better written as
if (position() != 1) sep
xx
i.e. instead of inserting the separator after every item except the last, insert it before every item except the first. Or use string-join(), or the separator attribute of xsl:value-of.
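A hedged sketch of that advice, with item standing in for whatever element is being listed and ", " as the separator:

<!-- Separator before every item except the first: no call to last() needed -->
<xsl:for-each select="item">
  <xsl:if test="position() != 1">
    <xsl:text>, </xsl:text>
  </xsl:if>
  <xsl:value-of select="."/>
</xsl:for-each>

<!-- Or, in XSLT 2.0, let the processor insert the separators -->
<xsl:value-of select="string-join(item, ', ')"/>
<xsl:value-of select="item" separator=", "/>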
XPath 2.0 has some new functions and syntax, relative to 1.0, that work with sequences. Some of these don't really add to what the language could already do in 1.0 (with node-sets), but they make it easier to express the desired logic in ways that are more readable. This increases the chances of the programmer getting the code correct -- and keeping it that way. For example,
empty(s) is equivalent to not(s), but its intent is much clearer when you want to test whether a sequence is empty.
Correction: the effective boolean value of a sequence is in general more complicated than that. E.g. empty((0)) != not((0)). This applies to exists(s) vs. s in a boolean context as well. However, there are domains of s where empty(s) is equivalent to not(s), so the two could be used interchangeably within those domains. But this goes to show that the use of empty() can make a non-trivial difference in making code easier to understand.
Similarly, exists(s) is equivalent to boolean(s) that already existed in XPath 1.0 (or just s in a boolean context), but again is much clearer about the intent.
Quantified expressions; e.g. "some $x in expression satisfies test($x)" would be equivalent to boolean(expression[test(.)]) (although the new syntax is more flexible, in that you don't need to worry about losing the context item because you have the variable to refer to it by).
Similarly, "every $x in expression satisfies test($x)" would be equivalent to not(expression[not(test(.))]) but is more readable.
These functions and syntax were evidently added at no small cost, solely to serve the goal of writing XPath that is easier to map to how humans think. This implies, as experienced developers know, that understandable code is significantly superior to code that is difficult to understand.
Given all that ... what would be a clear and readable way to write an XPath test expression that asks
Does value X occur in sequence S?
Some ways to do it: (Note: I used X and S notation here to indicate the value and the sequence, but I don't mean to imply that these subexpressions are element name tests, nor that they are simple expressions. They could be complicated.)
X = S: This would be one of the most unreadable, since it requires the reader to
think about which of X and S are sequences vs. single values
understand general comparisons, which are not obvious from the syntax
However, one advantage of this form is that it allows us to put the topic (X) before the comment ("is a member of S"), which, I think, helps in readability.
See also CMS's good point about readability, when the syntax or names make the "cardinality" of X and S obvious.
index-of(S, X): This one is clear about what's intended as a value and what as a sequence (if you remember the order of arguments to index-of()). But it expresses more than we need to: it asks for the index, when all we really want to know is whether X occurs in S. This is somewhat misleading to the reader. An experienced developer will figure out what's intended, with some effort and with understanding of the context. But the more we rely on context to understand the intent of each line, the more understanding the code becomes a circular (spiral) and potentially Sisyphean task! Also, since index-of() is designed to return a list of all the indexes of occurrences of X, it could be more expensive than necessary: a smart processor, in order to evaluate X = S, wouldn't necessarily have to find all the contents of S, nor enumerate them in order; but for index-of(S, X), correct order would have to be determined, and all contents of S must be compared to X. One other drawback of using index-of() is that it's limited to using eq for comparison; you can't, for example, use it to ask whether a node is identical to any node in a given sequence.
Correction: This form, used as a conditional test, can result in a runtime error: Effective boolean value is not defined for a sequence of two or more items starting with a numeric value. (But at least we won't get wrong boolean values, since index-of() can't return a zero.) If S can have multiple instances of X, this is another good reason to prefer form 3 or 6.
exists(index-of(S, X)): makes the intent clearer, and would help the processor eliminate the performance penalty, if the processor is smart enough.
some $m in S satisfies $m eq X: This one is very clear, and matches our intent exactly. It seems long-winded compared to 1, and that in itself can reduce readability. But maybe that's an acceptable price for clarity. Keep in mind that X and S could potentially be complex expressions themselves -- they're not necessarily just variable references. An advantage is that since the eq operator is explicit, you can replace it with is or any other comparison operator.
S[. eq X]: clearer than 1, but shares the semantic drawbacks of 2: it computes all members of S that are equal to X. Actually, this could return a false negative (incorrect effective boolean value), if X is falsy. E.g. (0, 1)[. eq 0] returns 0 which is falsy, even though 0 occurs in (0, 1).
exists(S[. eq X]): Clearer than 1, 2, 3, and 5. Not as clear as 4, but shorter. Avoids the drawbacks of 5 (or at least most of them, depending on the processor smarts).
I'm kind of leaning toward the last one, at this point: exists(S[. eq X])
What about you... As a developer coming to a complex, unfamiliar XSLT or XQuery or other program that uses XPath 2.0, and wanting to figure out what that program is doing, which would you find easiest to read?
Apologies for the long question. Thanks for reading this far.
Edit: I changed = to eq wherever possible in the above discussion, to make it easier to see where a "value comparison" (as opposed to a general comparison) was intended.
For what it's worth, if names or context make clear that X is a singleton, I'm happy to use your first form, X = S -- for example when I want to check an attribute value against a set of possible values:
<xsl:when test="#type = ('A', 'A+', 'A-', 'B+')" />
or
<xsl:when test="#type = $magic-types"/>
If I think there is a risk of confusion, then I like your sixth formulation. The less frequently I have to remember the rules for calculating an effective boolean value, the less frequently I make a mistake with them.
I prefer this one:
count(distinct-values($seq)) eq count(distinct-values(($x, $seq)))
When $x is itself a sequence, this expression implements the (value-based) subset-of relation between two sets of values that are represented as sequences. This implementation of subset-of has just linear time complexity -- versus many other ways of expressing it, which have O(N^2) time complexity.
To summarize, the question whether a single value belongs to a set of values is a special case of the question whether one set of values is a subset of another. If we have a good implementation of the latter, we can simply use it for answering the former.
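If readability is a concern, the test can be wrapped in an XSLT 2.0 user-defined function; the function name my:contains-value and its namespace URI are invented for this sketch:

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:my="urn:example:local">

  <!-- True iff adding $x introduces no new distinct value, i.e. $x occurs in $seq -->
  <xsl:function name="my:contains-value" as="xs:boolean">
    <xsl:param name="seq" as="xs:anyAtomicType*"/>
    <xsl:param name="x" as="xs:anyAtomicType"/>
    <xsl:sequence
        select="count(distinct-values($seq)) eq count(distinct-values(($x, $seq)))"/>
  </xsl:function>

</xsl:stylesheet>

A call such as my:contains-value($magic-types, @type) then reads almost like the English question being asked.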
The functx library has a nice implementation of this function, so you can use
functx:is-node-in-sequence($X, $Y)
(this particular function can be found at http://www.xqueryfunctions.com/xq/functx_is-node-in-sequence.html)
The whole functx library is available for both XQuery (http://www.xqueryfunctions.com/) and XSLT (http://www.xsltfunctions.com/)
MarkLogic ships the functx library with their core product; other vendors may do so as well.
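In XSLT, using the library amounts to importing a local copy of the stylesheet and declaring the functx namespace. In this sketch the file name functx.xsl is simply whatever you saved the downloaded library as, and the item elements are placeholders:

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:functx="http://www.functx.com">

  <!-- Local copy of the functx library downloaded from xsltfunctions.com -->
  <xsl:import href="functx.xsl"/>

  <xsl:template match="/">
    <xsl:variable name="needle" select="(//item)[1]"/>
    <xsl:variable name="haystack" select="//item"/>
    <!-- True if $needle is identical to some node in $haystack -->
    <xsl:if test="functx:is-node-in-sequence($needle, $haystack)">found</xsl:if>
  </xsl:template>

</xsl:stylesheet>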
Another possibility, when you want to know whether node X occurs in sequence S, is
exists((X) intersect S)
I think that's pretty readable, and concise. But it only works when X and the values in S are nodes; if you try to ask
exists(('bob') intersect ('alice', 'bob'))
you'll get a runtime error.
In the program I'm working on now, I need to compare strings, so this isn't an option.
As Dimitri notes, the occurrence of a node in a sequence is a question of identity, not of value comparison.
Background: I'm writing a toy Lisp (Scheme) interpreter in Haskell. I'm at the point where I would like to be able to compile code using LLVM. I've spent a couple days dreaming up various ways of feeding untyped Lisp values into compiled functions that expect to know the format of the data coming at them. It occurs to me that I am not the first person to need to solve this problem.
Question: What are some historically successful ways of mapping untyped data into an efficient binary format.
Addendum: In point of fact, I do know which of about a dozen different types the data is, I just don't know which one might be sent to the function at compile time. The function itself needs a way to determine what it got.
Do you mean, "I just don't know which [type] might be sent to the function at runtime"? It's not that the data isn't typed; certainly 1 and '() have different types. Rather, the data is not statically typed, i.e., it's not known at compile time what the type of a given variable will be. This is called dynamic typing.
You're right that you're not the first person to need to solve this problem. The canonical solution is to tag each runtime value with its type. For example, if you have a dozen types, number them like so:
0 = integer
1 = cons pair
2 = vector
etc.
Once you've done this, reserve the first four bits of each word for the tag. Then, every time two objects get passed in to +, first you perform a simple bit mask to verify that both objects' first four bits are 0b0000, i.e., that they are both integers. If they are not, you jump to an error message; otherwise, you proceed with the addition, and make sure that the result is also tagged accordingly.
This technique essentially makes each runtime value a manually-tagged union, which should be familiar to you if you've used C. In fact, it's also just like a Haskell data type, except that in Haskell the taggedness is much more abstract.
I'm guessing that you're familiar with pointers if you're trying to write a Scheme compiler. To avoid limiting your usable memory space, it may make more sense to use the bottom (least significant) four bits rather than the top ones. Better yet, because aligned pointers already have a few meaningless zero bits at the bottom (two with 4-byte alignment, three with 8-byte alignment), you can simply co-opt those bits for your tag, as long as you dereference the actual address rather than the tagged one.
Does that help?
Your default solution should be a simple tagged union. If you want to narrow your typing down to more specific types, you can do it - but it won't be that "toy" any more. A thing to look at is called abstract interpretation.
There are a few successful implementations of such an optimisation, with V8 probably being the most widespread. In the Scheme world, the most aggressively optimising implementation is Stalin.