I have an XML document and associated schema that defines several attributes as having the xs:boolean type. The lexical values for xs:boolean are true, false, 1, and 0, so it seems that to correctly select attributes with a particular boolean value I'd have to write something like:
@attribute='true' or @attribute='1'
or
@attribute='false' or @attribute='0'
This seems verbose.
You might expect something like boolean(@attribute) to work, but the boolean function has different semantics: applied to a node-set it simply tests whether the set is non-empty, regardless of the value.
Is there a better way? Is this something that a schema-aware processor would help with?
In addition to the solutions proposed by Phil and Tomalak, I discovered that XPath 2.0 provides a few alternatives:
@attribute=('true','1')
string(@attribute) cast as xs:boolean
And finally, XPath 2.0 does provide schema-aware processing, which means that if everything is in alignment, you should be able to write:
data(@attribute)
But schema-aware processors seem hard to come by. The most popular seems to be the non-free commercial edition of Saxon, which costs £300. So for now I'm using @attribute=('true','1').
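For illustration, here is a minimal sketch of the sequence-comparison form inside an XSLT 2.0 stylesheet; the feature element and enabled attribute are invented for the example:

<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Given input like <features><feature enabled="1"/><feature enabled="false"/></features>,
       this copies only the features whose attribute is lexically true -->
  <xsl:template match="/features">
    <xsl:copy-of select="feature[@enabled = ('true', '1')]"/>
    <!-- a cast-based variant (raises an error if the attribute is absent or not a
         valid xs:boolean lexical form): feature[@enabled cast as xs:boolean] -->
  </xsl:template>
</xsl:stylesheet>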
Try the following XPath expression:
[@attribute=true()]
Your options may depend upon which XPath processor you're using. For example, the Microsoft stuff allows you to define JavaScript functions that you could use to implement the "boolean(@attribute)" pattern. I don't know whether other engines have similar support available.
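As a rough sketch of what that might look like with MSXML's msxsl:script extension (the helper function name and the user namespace are invented for illustration):

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:msxsl="urn:schemas-microsoft-com:xslt"
                xmlns:user="urn:my-scripts">
  <!-- JScript helper that accepts both xs:boolean lexical forms for true -->
  <msxsl:script language="JScript" implements-prefix="user">
    function isTrue(value) {
      return value == "true" || value == "1";
    }
  </msxsl:script>
  <xsl:template match="/">
    <xsl:copy-of select="//feature[user:isTrue(string(@enabled))]"/>
  </xsl:template>
</xsl:stylesheet>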
Otherwise I suspect you are stuck with having to check for both 'true' and '1'.
Hopefully someone else has a better answer.
Using XPath alone, the expression you propose is as close as it gets. There are different ways of doing the same check, but all of them would lead to a longer, more complicated expression and conceal the intent more.
I don't think there is a schema-aware processor; XML Schema is not connected to XSLT, and it would not make much sense to build support for it into a processor, IMHO. The language is defined, and extensions are the accepted standard way of adding functionality.
If the library you use supports the EXSLT extension functions, you could write a semi-portable mechanism to abstract the boolean check into a function, or you could use vendor-specific extensions like MSXSL to do the same.
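If EXSLT's functions module is available, that abstraction might look roughly like this (the my prefix, function name, and element names are made up for the sketch):

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:func="http://exslt.org/functions"
                xmlns:my="urn:example:bool"
                extension-element-prefixes="func">
  <!-- returns true() for the xs:boolean lexical forms "true" and "1" -->
  <func:function name="my:is-true">
    <xsl:param name="val"/>
    <func:result select="$val = 'true' or $val = '1'"/>
  </func:function>
  <xsl:template match="/">
    <xsl:copy-of select="//feature[my:is-true(string(@enabled))]"/>
  </xsl:template>
</xsl:stylesheet>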
I think it is best to stay with the expression you have. It is not convoluted, only a bit verbose.
I use a simple named template that does the verbose check and dumps the value into a variable, tweaking it as needed to handle legacy systems (e.g. some fields may use "Y"/"N", some "True"/"False", others 1/0). It's not the most elegant approach, but if you're using XPath 1.0, don't have EXSLT support, and can't (or don't want to) use outside script calls, it works consistently and can be adjusted as needed.
<xsl:template name="CleanupBool">
  <xsl:param name="val" />
  <xsl:choose>
    <xsl:when test="$val='1'">1</xsl:when>
    <xsl:when test="$val='true'">1</xsl:when>
    <xsl:when test="$val='True'">1</xsl:when>
    <xsl:when test="$val='TRUE'">1</xsl:when>
    <xsl:when test="$val='T'">1</xsl:when>
    <xsl:when test="$val='Y'">1</xsl:when>
    <xsl:otherwise>0</xsl:otherwise>
  </xsl:choose>
</xsl:template>
Usage:
<xsl:variable name="enabled">
  <xsl:call-template name="CleanupBool">
    <xsl:with-param name="val" select="IS_ENABLED"/>
  </xsl:call-template>
</xsl:variable>
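The variable then holds "1" or "0", so tests elsewhere in the stylesheet stay short; for example (a minimal sketch, reusing the $enabled variable from the usage above):

<!-- $enabled was produced by the CleanupBool call above -->
<xsl:if test="$enabled = '1'">
  <xsl:text>enabled</xsl:text>
</xsl:if>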
@attribute cast as xs:boolean is a good way to go about it.
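For example, in a predicate (a minimal sketch; the element and attribute names are invented):

//feature[@enabled cast as xs:boolean]

This requires XPath 2.0 and assumes the attribute is always present with one of the four xs:boolean lexical forms; otherwise the cast raises an error.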
When I evaluate this XPath expression: //superhero[n0:name="Superman"]/n1:name on this xml:
<n0:rootElement xmlns:n0='http://example.com' xmlns:n1='http://example.com'>
  <superheroes>
    <superhero>
      <n0:name>Superman</n0:name>
      <n1:name>Clark</n1:name>
    </superhero>
    <superhero>
      <n0:name>Spiderman</n0:name>
      <n1:name>Peter</n1:name>
    </superhero>
  </superheroes>
</n0:rootElement>
using an XPath evaluator, I get the expected result.
But when I send it to an XQuery processor, I get an error message saying that
Namespace prefix 'n0' has not been declared. Weird, huh?
It's always the prefix in the brackets (is it called a filter, maybe?) that gets the complaint.
I've used http://www.xpathtester.com to verify the difference between XPath and XQuery interpretations.
It works fine with https://codebeautify.org/Xpath-Tester which is XPath only.
If I replace n0: or n1: with *: it works for XQuery processors, but not for XPath testers.
This is of course a toy example I've written up to clarify my issue. In production I'm calling an external service which I believe is driven by Saxon-HE. I know it accepts XQuery so I'm guessing it is in "XQuery-mode" for XPath expressions.
There isn't much I can do to the xml file since I receive it from another source. Is there a better XQuery expression I can use?
Is this a bug, or by design?
Different XPath engines provide different ways of binding the namespace prefixes used in the expression. Some, I believe, pick up the namespace bindings from the source document. So it's not non-conformance with the standard; the standard leaves it up to the particular processor how the original context is established.
The underlying problem is that you probably want your query to work regardless what namespace prefixes are used in the source document. Picking up the namespace bindings from the source document is handy for ad-hoc queries, but it means that a query that does the right thing with one document will fail with a different one.
In XQuery you can declare any namespaces you want to use in your query:
declare namespace n0 = 'http://example.com';
declare namespace n1 = 'http://example.com';
//superhero[n0:name="Superman"]/n1:name
https://xqueryfiddle.liberty-development.net/bdxZ8S
See the spec at https://www.w3.org/TR/xquery-31/#id-namespace-declaration
I'm maintaining a legacy tool of the company I work for, written in C#, and I'm converting it to .NET Standard 2.0. It uses the Saxon-HE processor to process some XPaths and replace some configurations in files.
Its NuGet package has dependencies that prevent it from running on all .NET Standard 2.0 compliant platforms (in my case both .NET Framework and .NET Core), so I need to replace it with another tool, ideally the standard .NET XPath library.
The problem is that the tool uses some XPaths that perform complex operations, such as concatenating strings and selecting an array item, and I don't know whether the syntax is Saxon-specific or part of a standard.
It is important to know this because if the XPaths conform to some XPath standard, I could find another way to process them.
Here are some examples:
First:
for $row in /Item/SubItem[*]/SubSubItem return(concat($row, \"/ConcatValue\"))
Second:
/Item/SubItem[*]/SubSubItem/(add[@key=\"TheKey\"]/@value/string(), '')[1]
Do you know something about this XPath syntax?
Thank you
The XPath expressions you have given as examples require an XPath 2.0 processor but they are not specific to Saxon.
The expression
for $row in /Item/SubItem[*]/SubSubItem return(concat($row, \"/ConcatValue\"))
is a ForExpression, which is specific to XPath 2.0, and is not easily converted to XPath 1.0 because its result is a sequence of strings, and there is no such data type in XPath 1.0.
The expression
/Item/SubItem[*]/SubSubItem/(add[@key=\"TheKey\"]/@value/string(), '')[1]
is specific to XPath 2.0 because it uses a parenthesized expression on the RHS of the "/" operator; and again, because it returns a sequence of strings.
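For what it's worth, the (expr, '')[1] part is a common XPath 2.0 idiom for "the value if present, otherwise an empty string"; a minimal illustration with made-up data:

(: given <SubSubItem><add key="TheKey" value="42"/></SubSubItem>, this returns "42";
   if no matching add element exists, it returns "" instead of an empty sequence :)
(add[@key="TheKey"]/@value/string(), '')[1]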
I'm afraid I can't advise you whether there exist XPath 2.0 libraries that run on .NET Core, which I assume is your requirement. Saxon cannot be made to run on .NET Core because of its dependency on IKVM, which doesn't support that platform and which (I gather) cannot readily be adapted to do so.
Note that XPath 2.0 is a subset of XQuery 1.0, so you could extend your search to XQuery 1.0 processors as well as XPath 2.0 processors.
Thanks to this comment I was able to test XPath2.Net, and now everything works. I needed to change only one type of XPath definition.
This one:
/Item/SubItem[*]/SubSubItem/(add[@key=\"TheKey\"]/@value/string(), '')[1]
Changes to
/Item/SubItem[*]/SubSubItem/(add[@key=\"TheKey\"]/@value/string(.), '')[1]
Please note the additional dot argument of the string() function.
This is strange, as it should not require the dot; in fact, per the standard:
In the zero-argument version of the function, $arg defaults to the
context item. That is, calling fn:string() is equivalent to calling
fn:string(.)
but XPath2 complains with this error
{"The function 'string'/0 was not found in namespace 'http://www.w3.org/2003/11/xpath-functions'"}
UPDATE:
After updating the XPath2.Net library to version 1.0.8 the string() syntax works.
I've got some XML documents which conform to a known schema which include geometries in GML format.
I'm looking to perform validation on the XML using XSD and Schematron validation, but I'll need some way of performing spatial queries within the XPath language (I presume via extension functions).
I was wondering if anyone is aware of a standard or an implementation I can use, or indeed if someone has already done this - I've come up empty on Google.
As an example (representative only; it's just meant to demonstrate the XPath part of the question, which is the real question here - the fact that I'm aiming to use it in Schematron is moot).
My XML:
<Things>
  <Thing type="A">
    <Geometry>...GML...</Geometry>
  </Thing>
  <Thing type="B">
    <Geometry>...GML...</Geometry>
  </Thing>
</Things>
XPath to return things of type A which spatially intersect with things of type B (again, I'm making up a function extension namespace and a (pretty dumb) function to give an example of what I'm trying to accomplish):
/Things/Thing[@type='A' and geo:has-intersection(Geometry, /Things/Thing[@type='B']/Geometry)]
As this seems to sit somewhere between development and GIS, I've cross-posted on GIS and Stack Overflow.
The EXPath Geo Module defines functions on simple OGC geometries. I believe there are several implementations but the only one I'm familiar with is BaseX.
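As a hedged sketch only: assuming BaseX's implementation of the EXPath Geo Module and its geo:intersects function over GML geometry elements, the example query might look roughly like this (the namespace binding, import requirements, and exact function signature may differ by processor and module version):

declare namespace geo = "http://expath.org/ns/geo";

(: type-A Things whose geometry intersects the geometry of at least one type-B Thing :)
/Things/Thing[@type = 'A']
  [some $b in /Things/Thing[@type = 'B']/Geometry/*
   satisfies geo:intersects(Geometry/*, $b)]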
I have an XSLT 1.0 stylesheet which needs to be converted to XSLT 2.0.
I found this question here: Convert XSLT 1.0 to 2.0, which deals with the same issue.
According to that, changing the version attribute to 2.0 would do the trick. But is that the only thing that needs to be done?
Thanks in advance
I think the choice of strategy for conversion depends on how good a set of regression tests you have.
If you have a good set of regression tests, or if the consequences of introducing an error are not severe, then I would recommend the following steps:
(a) change the version attribute to 2.0
(b) run your test cases using an XSLT 2.0 processor and see if they work
(c) examine any test discrepancies and identify their cause (perhaps 80% of the time it will work correctly first time with no discrepancies).
If you don't have good tests or if you can't afford to take any risks, then you might need a more cautious strategy. (The ultimate in caution, of course, is the "don't change anything" strategy - stick with 1.0). Perhaps the best advice in this case is to start the conversion project by writing more test cases. At the very least, collect together a sample of the source documents you are currently processing, and the output that is generated for these source documents, and then use a file comparison tool to compare the output you get after conversion.
There are a few incompatibilities between 1.0 and 2.0; the one you are most likely to encounter is that xsl:value-of (and many other constructs) in 1.0 ignore all nodes in the supplied input sequence after the first, whereas XSLT 2.0 outputs all the nodes in the supplied sequence. There are two ways of dealing with this problem. Either (my recommendation) identify the places where this problem occurs, and fix them, usually by changing select="X" to select="X[1]"; or change the version attribute on the xsl:stylesheet back to version="1.0", which causes the XSLT 2.0 processor to run in backwards compatibility mode. The disadvantage of relying on backwards compatibility mode is that you lose the benefits of the stronger type-checking in XSLT 2.0, which makes complex stylesheet code much easier to debug.
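A minimal illustration of that first-node difference (element names invented):

<!-- Given <list><x>a</x><x>b</x></list>, with the context on the list element:
     <xsl:value-of select="x"/>    outputs "a" under XSLT 1.0, but "a b" under XSLT 2.0
     <xsl:value-of select="x[1]"/> outputs "a" under both -->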
In my experience the problems you encounter in conversion are more likely to depend on processor/implementation changes than on W3C language changes. Your code might be using vendor-defined extension functions that aren't supported in the 2.0 processor, or it might be relying on implementation-defined behaviour such as collating sequences that varies from one processor to another. I have seen code, for example, that relied on the specific format of the output produced by generate-id(), which is completely implementation-dependent.
"XSL Transformations (XSLT) Version 2.0", §J, "Changes from XSLT 1.0 (Non-Normative)" lists most the differences between XSLT 1.0 and XSLT 2.0 that you need to be aware of.
Currently, I'm writing something to do unit testing for XSLT2 functions; the idea is very simple:
Create a custom-library.xsl, which contains some custom XSLT2 functions.
Create a data XML that contains the test cases, following the XML Schema xslunit.xsd:
(schema structure diagram: http://xml.bodz.net/schema/xslunit/xslunit.png)
Run the test cases by transforming them with xslunit-xslt2.xsl, and get the test result HTML.
Now, the question is: the test cases contain function calls, and I have to evaluate them in the XSLT (file xslunit-xslt2.xsl). But I can't find a way to evaluate an XPath expression.
It may be easy using some kind of Java extension, but I really don't want to bring in more trouble. I hope everything can just work within XSLT2 only.
No, pure XSLT 2.0 does not have support to evaluate an XPath expression found in your XML data. However, Saxon 9 (in its commercial editions) has an extension function: http://www.saxonica.com/documentation/extensions/functions/evaluate.xml. And AltovaXML Tools has a similar one: http://manual.altova.com/AltovaXML/altovaxmlcommunity/index.html?xextaltova_general.htm
Update a decade later: XSLT 3.0 has an instruction <xsl:evaluate> which evaluates an XPath expression supplied dynamically as a string.
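A minimal sketch of that XSLT 3.0 instruction (the variable holding the expression string is invented, and note that xsl:evaluate is an optional feature the processor must support):

<!-- $xpath-string holds the XPath expression read from the test-case XML -->
<xsl:evaluate xpath="$xpath-string" context-item="."/>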