Can the HAPI validator be made to check for ambiguous references? - hl7-fhir

In 2.36.4.1 "Resolving references in Bundles" the standard says that applications MAY deem ambiguous references an error, or not, as they choose.
However, ambiguous references are by their nature unreliable to resolve: the outcome tends to change with incidental factors such as the relative order of bundle entries. Hence some applications may need ambiguous references to be detected and rejected.
Can the HAPI validator be made to warn about ambiguous references, or do applications need to duplicate its reference resolution logic in order to do the checking themselves?
MINIMAL EXAMPLE
<Bundle xmlns="http://hl7.org/fhir">
  <type value="collection"/>
  <entry>
    <fullUrl value="urn:uuid:016e8556-ce1c-40cb-aa3d-8b7a3e32e3b1"/>
    <resource>
      <Composition>
        <status value="final"/>
        <type>
          <coding>
            <system value="http://loinc.org"/>
            <code value="11502-2"/>
          </coding>
        </type>
        <date value="2022-05-25"/>
        <author>
          <reference value="Patient/1"/>
        </author>
        <title value="regular resolution not possible (no base URL); fallback method finds two matching entries"/>
      </Composition>
    </resource>
  </entry>
  <entry>
    <fullUrl value="http://zrbj.example/path-A/Patient/1"/>
    <resource>
      <Patient>
        <id value="1"/>
      </Patient>
    </resource>
  </entry>
  <entry>
    <fullUrl value="http://zrbj.example/path-B/Patient/1"/>
    <resource>
      <Patient>
        <id value="1"/>
      </Patient>
    </resource>
  </entry>
</Bundle>
The two Patient resources have the same logical id but different base URLs. That in itself is no problem, and they could (but don't have to) be referenced unambiguously by their fullUrl.
The entry containing the reference has no base URL, so regular resolution of the relative reference (prefixing it with the base URL) is not possible. Fallback resolution, i.e. scanning the bundle for entries with matching resource type and id, finds two entries, and hence the result is ambiguous.
Note: the example was constructed to be minimal but syntactically valid. It is intended for testing the validator and not meant to represent any actual use case in production.
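For comparison, a reference that targets one of the fullUrl values directly resolves by the regular rules and is unambiguous, e.g. (path-A chosen purely for illustration):
  <author>
    <reference value="http://zrbj.example/path-A/Patient/1"/>
  </author>
The point of the test case, however, is the relative Patient/1 form, which the validator would ideally flag rather than silently resolve one way or the other.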

Related

Is there a core resource type without mandatory fields (for coding test cases with extension profiles etc.)?

Currently I am using the Basic resource type for coding test cases like this:
<!-- https://simplifier.net/validate?scope=eRezeptAbrechnungsdaten#current&fhirVersion=R4 -->
<Basic xmlns="http://hl7.org/fhir">
  <code>
    <coding>
      <system value="http://terminology.hl7.org/CodeSystem/basic-resource-type"/>
      <code value="study"/>
    </coding>
  </code>
  <!-- "http://fhir.de/CodeSystem/ifa/pzn" is 'preferred' ... -->
  <extension url="https://fhir.gkvsv.de/StructureDefinition/GKVSV_EX_ERP_Import_PZN">
    <valueCoding>
      <system value="http://fhir.de/CodeSystem/ifa/pzn"/>
      <code value="."/>
    </valueCoding>
  </extension>
  <!-- ... but not mandatory -->
  <extension url="https://fhir.gkvsv.de/StructureDefinition/GKVSV_EX_ERP_Import_PZN">
    <valueCoding>
      <system value="http://fhir.de/NamingSystem/arge-ik/iknr"/>
      <code value="."/>
    </valueCoding>
  </extension>
</Basic>
These are test cases for validating FHIR validators, in case anyone wondered. ;-)
The block for the mandatory Basic.code element detracts a bit from the actual payload. Is there some other resource type that could be used in a similar fashion and which does not have any mandatory fields?
If you want an actual resource type and don't want to be limited by Parameters, you could look at using Patient. It has no mandatory fields and can be persisted, etc.
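For illustration only, the test case above might look roughly like this on Patient, keeping just the extensions (whether the GKVSV extensions' declared contexts actually permit Patient is a separate question):
<Patient xmlns="http://hl7.org/fhir">
  <!-- same test payload as above, minus the mandatory Basic.code block -->
  <extension url="https://fhir.gkvsv.de/StructureDefinition/GKVSV_EX_ERP_Import_PZN">
    <valueCoding>
      <system value="http://fhir.de/CodeSystem/ifa/pzn"/>
      <code value="."/>
    </valueCoding>
  </extension>
  <extension url="https://fhir.gkvsv.de/StructureDefinition/GKVSV_EX_ERP_Import_PZN">
    <valueCoding>
      <system value="http://fhir.de/NamingSystem/arge-ik/iknr"/>
      <code value="."/>
    </valueCoding>
  </extension>
</Patient>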
The Parameters resource type has no required top-level fields if you just want to use it for extensions.
The purpose of Basic.code is to allow tools to disambiguate instances of Basic, given that Basic can be used for anything. Basic.code is the only element required and it allows non-customized implementations to still be able to filter and search Basic instances to a limited extent without needing to support custom search parameters. In many systems, it determines whether they can accept the Basic resource at all and/or where it gets mapped within their system.
Basic is the only resource that is intended to be created, updated, searched and otherwise handled that is 'generic' and can be used for any purpose.
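For example, the code used in the instance above already lets a plain server filter Basic resources with a standard search, without any custom search parameters (sketch; [base] stands for the server's base URL):
GET [base]/Basic?code=http://terminology.hl7.org/CodeSystem/basic-resource-type|study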

JAXB: use element name instead of type name when generating POJOs

With JAXB I want to generate POJOs from an XSD.
But the XSD is provided by an external vendor, where the elements have meaningful names
but the types are weird. Just an example:
<xs:element name="PersonAddress" type="PerAdr"/>
<xs:complexType name="PerAdr">
  <xs:sequence>
    <xs:element name="street" type="xs:string" minOccurs="1" maxOccurs="unbounded"/>
    <xs:element name="house" type="xs:string" minOccurs="1" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
So the generated class is called PerAdr.
How can I make it generate classes whose names are the element names rather than the type names, so that in this case it would generate the class PersonAddress?
I have a huge XSD, so I'm looking for a clever way of doing this rather than writing hundreds of lines in a .xjb file.
I am not exactly a JAXB professional, but I've looked at the JAXB specification (here: http://download.oracle.com/otn-pub/jcp/jaxb-2.0-fr-oth-JSpec/jaxb-2_0-fr-spec.pdf) and found the following:
The characteristics of the schema-derived Element class are derived in
terms of the properties of the “Element Declaration Schema Component” on
page 349 as follows:
The name of the generated Java Element class is derived from the
element declaration {name} using the XML Name to Java identifier
mapping algorithm for class names.
Each generated Element class must extend the Java value class javax.xml.bind.JAXBElement<T>. The next bullet specifies the
schema-derived Java class name to use for generic parameter T.
If the element declaration’s {type definition} is
Anonymous: Generic parameter T from the second bullet is set to the schema-derived class representing the anonymous type definition generated as specified
in Section 6.7.3.
Named: Generic parameter T from the second bullet is set to the Java class representing the element declaration’s {type definition}.
So, one may conclude from that: Once you have an XSD element with a named XSD type, you've got to deal with a Java class representing that type and named after it. That's logical. After all, you may have different XSD elements with the same global type. That's the default mapping.
However, JAXB allows for customizations (of the XML schema), with which you can override certain things. For instance, you can modify the name of the Java class generated by the XSD type, e.g:
<xs:complexType name="USAddress">
  <xs:annotation>
    <xs:appinfo>
      <jaxb:class name="MyAddress"/>
    </xs:appinfo>
  </xs:annotation>
  <xs:sequence>...</xs:sequence>
  <xs:attribute name="country" type="xs:string"/>
</xs:complexType>
So, instead of USAddress, the result Java class will be named MyAddress. This looks like a solution to your problem, but to take advantage of it, you will need to modify every type definition in your XSD, which sounds daunting because your schema is huge.
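For reference, the same kind of customization can also be kept in an external bindings file rather than in the schema itself (though doing this by hand for every type is exactly the hundreds-of-lines problem you want to avoid); a sketch, where vendor.xsd is a placeholder for your actual schema file:
<jaxb:bindings xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               version="2.1">
  <!-- Rename the class generated for the complex type PerAdr -->
  <jaxb:bindings schemaLocation="vendor.xsd"
                 node="//xs:complexType[@name='PerAdr']">
    <jaxb:class name="PersonAddress"/>
  </jaxb:bindings>
</jaxb:bindings>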
So, what can you do?
First of all, you need to make sure that each XSD element in your schema and its (globally defined) type correspond to each other uniquely. If there happen to be several different XSD elements with the same type, the type name obviously cannot equal all of them. In that case, if you don't like the original type names, you just need to edit the schema manually and give those types whatever different names suit you best.
Any automation is only possible when the relation XSD element <-> XSD type is one-to-one. In that case, you can derive the type name from the element name: make it the same, or add, for instance, a T prefix: TPersonAddress.
That is typically called refactoring and can be done automatically. The question is how?
Well, since XSD is XML, you can write an XSLT script that does the necessary transformation. That may not be entirely straightforward, because you will have to parse the schema a bit: recognize every XSD element, find the corresponding XSD type, and then change the type name in both locations. Alternatively, you can insert the customization directives (<jaxb:...> elements) into the definition of each XSD type, as mentioned above. I don't know exactly how much effort that would take to program, but it will definitely come down to creating an index (with the <xsl:key> construct) and iterating over it; see the sketch below.
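A very rough XSLT 1.0 sketch of the second variant (injecting <jaxb:class> customizations), assuming unprefixed type references and at most one global element per global type, might look like this:
<?xml version="1.0"?>
<!-- Sketch: copy the schema unchanged and inject a jaxb:class customization into each
     global complexType that is referenced by exactly one global element.
     Assumes unprefixed type references; types that already contain an xs:annotation
     would need extra handling. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                xmlns:jaxb="http://java.sun.com/xml/ns/jaxb">

  <!-- Index of global element declarations, keyed by their type reference -->
  <xsl:key name="el-by-type" match="/xs:schema/xs:element[@type]" use="@type"/>

  <!-- Identity transform: copy everything unchanged by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- Inline customizations require jaxb:version on the schema root -->
  <xsl:template match="/xs:schema">
    <xsl:copy>
      <xsl:apply-templates select="@*"/>
      <xsl:attribute name="jaxb:version">2.1</xsl:attribute>
      <xsl:apply-templates select="node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Name the generated class after the (single) element that uses this type -->
  <xsl:template match="/xs:schema/xs:complexType">
    <xsl:copy>
      <xsl:apply-templates select="@*"/>
      <xsl:if test="count(key('el-by-type', @name)) = 1">
        <xs:annotation>
          <xs:appinfo>
            <jaxb:class name="{key('el-by-type', @name)/@name}"/>
          </xs:appinfo>
        </xs:annotation>
      </xsl:if>
      <xsl:apply-templates select="node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
Running the vendor schema through such a stylesheet before feeding it to XJC would avoid hand-writing the customizations; handling prefixed type references and pre-existing xs:annotation elements would need a little more work.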
Alternatively, I can suggest a somewhat unorthodox approach. We have developed a tool called FlexDoc/XML. Essentially, it is a transformer of XML files into anything; the transformation is programmed using templates that work similarly to XSLT.
The original idea was to extend that XSLT-like approach to any kind of Java-based data source provided via various APIs. For instance, we have a similar product called FlexDoc/Javadoc that mimics standard Javadoc. But then we realized that XML itself is also a field full of heavy-lifting tasks for which XSLT is too lightweight, for instance the generation of easily navigable single documentation from hundreds of XSD and WSDL files, for which we now have two template sets: XSDDoc and WSDLDoc. (We are also working on a similar thing for JSON Schemas.)
Using FlexDoc/XML it is possible to create a template that does what you need (renaming those XSD types). That can be accomplished in a matter of an hour, and we will do it for you if you eventually purchase a "FlexDoc/XML SDK" license. (People typically buy the SDK license to customize the XSDDoc/WSDLDoc templates, but it can equally be used as a separate tool for tasks like yours.)

JAXB bindings file: namespace-aware node selection

I tend to use external JAXB bindings files for my Schema-to-Java compilation. This works just fine, but I did notice one thing that I've started wondering about. It's not really JAXB-specific, more like an XPath question, but the context helps.
Suppose we have this schema:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:test="www.acme.com"
    targetNamespace="www.acme.com"
    elementFormDefault="qualified" attributeFormDefault="unqualified">
  <xs:element name="el1"/>
  <xs:complexType name="testType">
    <xs:sequence>
      <xs:element ref="test:el1"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
The element reference in the complex type requires the prefix "test", bound to our target namespace, to find the element definition. If we omitted the prefix, a schema processor would complain that it can't find the element it refers to. So obviously the reference is a qualified name, and a schema processor is aware of this.
Now take the following bindings file extract for XJC:
<bindings node="//xs:complexType[@name='testType']">
  <bindings node=".//xs:element[@ref='test:el1']">
    <property name="element1"/>
  </bindings>
</bindings>
The binding for the complex type is clear. We select it by name, and the xs prefix is bound in the bindings file's root (not shown here). It might as well be xsd.
What bugs me is the nested binding. In the context of our complex type node, we select an xs:element node whose ref attribute has the value test:el1. But that value is simply treated as text: the XML processor has no knowledge that it is supposed to be a qualified name, and that test is actually a prefix bound to a namespace.
Now I know I'm nitpicking, but the actual prefix string should have no importance, only the namespace URI itself. Someone could change the test prefix in the schema to acme and it would still be the same schema semantically. However, my bindings file would no longer work.
So, is there any way to construct the XPath expression without relying on knowledge of the prefix, only the namespace URI? It's obviously not a big problem, but I'm curious about this.
Is there any way to construct the XPath expression without relying on knowledge of the prefix, only the namespace URI?
If you're talking about an attribute value, this is XPath 1.0:
.//xs:element[
    namespace::*[
        . = 'www.acme.com'
    ][
        substring-before(../@ref, ':') = name()
    ]
    and
    substring(concat(':', @ref), string-length(@ref) - 1) = 'el1'
]
In XPath 2.0 it is much simpler:
.//xs:element[resolve-QName(@ref, .) eq QName('www.acme.com', 'el1')]

CruiseControl.NET Preprocessor 'include' Anomaly

Here's a strange one relating to a combination of the 'define' and 'include' functionality that the CC.NET preprocessor exposes. We're running CCNet 1.4.4.83, and our ccnet.config file is structured and split to make best use of common element blocks stored in subfiles that are included by the main config file; we also split out the core project definitions into their own include files too, leaving ccnet.config as essentially a series of includes thus:
<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
  <!-- Configuration root folder - defined so we can use it as a symbol instead of using slightly confusing relative paths -->
  <cb:define ccConfigRootFolder="C:\CruiseControl.NET\Config"/>
  <!-- Globals - standard or shared build elements common to all projects -->
  <cb:include href="$(ccConfigRootFolder)\Globals\globals.xml" xmlns:cb="urn:ccnet.config.builder"/>
  <!-- CruiseControl.NET Configuration - refresh configuration if changed -->
  <cb:include href="$(ccConfigRootFolder)\CCNet Configuration\ccnet_configuration.xml" xmlns:cb="urn:ccnet.config.builder"/>
  <!-- Project #1 -->
  <cb:include href="$(ccConfigRootFolder)\Project1\project1.xml" xmlns:cb="urn:ccnet.config.builder"/>
  <!-- Project #2 -->
  <cb:include href="$(ccConfigRootFolder)\Project2\project2.xml" xmlns:cb="urn:ccnet.config.builder"/>
</cruisecontrol>
This works a treat - the preprocessor correctly includes and parses the <define> elements in globals.xml (and recursively parses further files included from globals.xml too) and the projects included afterwards (which contain references to those defined elements) are parsed correctly.
To further refine ccnet.config in an attempt to reduce the possibility of mistakes breaking the build process, we altered it to look like this:
<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
  <!-- Configuration root folder -->
  <cb:define ccConfigRootFolder="C:\CruiseControl.NET\Config"/>
  <!-- Project 'include' element definition -->
  <cb:define name="ProjectInclude">
    <cb:include href="$(ccConfigRootFolder)$(ccIncludePath)" xmlns:cb="urn:ccnet.config.builder"/>
  </cb:define>
  <!-- Include common global elements -->
  <cb:ProjectInclude ccIncludePath="\Globals\globals.xml"/>
  <!-- Project #1 -->
  <cb:ProjectInclude ccIncludePath="\Project1\project1.xml"/>
  <!-- Project #2 -->
  <cb:ProjectInclude ccIncludePath="\Project2\project2.xml"/>
</cruisecontrol>
As you can see, we embedded the common, repeated part of the 'include' definition in its own defined block, and then use that to effect each include, passing the path as a parameter. The idea is that future modifiers of the file won't have the opportunity to accidentally forget something on their new included-project lines (such as the preprocessor URN); as long as their XML file exists and they get the path to it right, the rest is taken care of by the common include definition.
The only problem is that this doesn't work - for some reason, it looks like the globals.xml file isn't being parsed properly (or maybe even included) because the projects included after it complain of not having elements defined; that is, references to elements defined in the globals file don't appear to have been 'registered', because the projects don't recognise them.
We've tried taking out the nested includes from globals.xml and including them directly at the top level, to no avail. Commenting-out the first troublesome element reference in the project just causes Validator to complain about the next one, with message "Preprocessing failed loading the XML: Reference to unknown symbol XXXXX". If we embed the body of globals.xml into ccnet.config, however, this works. Bizarre though it may sound, it's as though the preprocessor is utterly failing to parse globals.xml but then happily chewing through the project files, only to then fail because the global references aren't defined.
Validator is failing silently if this is the case, however. And of course because it can't parse the project XML properly, we get nothing in the 'Original' or 'Processed' tabs either. The CruiseControl.NET Service itself fails to start with an unhelpful exception:
Service cannot be started. System.Runtime.Serialization.SerializationException:
Type 'ThoughtWorks.CruiseControl.Core.Config.Preprocessor.EvaluationException'
in Assembly 'ThoughtWorks.CruiseControl.Core, Version=1.4.4.83, Culture=neutral,
PublicKeyToken=null' is not marked as serializable.
Server stack trace:
   at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter)
   at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.Serialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter)
   at System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, Header[] inHeaders, __BinaryWriter serWriter, Boolean fCheck)
   at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Ser...
All the documentation says this should work, and there's no mention of any incompatibility or inconsistency when using 'include' inside a 'define'. So I'm stumped, and any insight or advice would, at this stage, be highly regarded.
We're working on solving this issue for the 1.5 release (hopefully by the end of this month):
http://jira.public.thoughtworks.org/browse/CCNET-1865
With kind regards,
Ruben Willems
CCNet 1.5 Final has been released, so this gives you an excuse to test it. :-)
If the problem still persists, you can also contact me via the ccnet lists.
http://groups.google.com.ag/group/ccnet-user or
http://groups.google.com.ag/group/ccnet-devel

In Visual Studio, is there a way to generate unique method names automatically?

I'm writing a bunch of tests for a class, and frankly I don't want to go to the effort of naming each test intelligently: "Compare2002to2002forLessThanOrEqual"
I'm cool with TestMethod1 through TestMethod120 - but I've got to edit each name, and that gets annoying. Is there any plugin that will generate unique names for all the methods in a class tagged with the [TestMethod] Attribute?
If you are writing distinct test methods from scratch then the naming of the method should not be a great overhead. This suggests that you may be copy-and-pasting the test method and changing some values and perhaps violating the DRY principle. If this is the case would it not be better to refactor the tests with an abstraction, leaving you with fewer methods (perhaps even one) and then provide a set of test conditions that the method could iterate over?
A benefit of this could be that if the interface or functionality of the module you are testing changes, you need only change the single test method instead of many.
It's not entirely clear what feature you want here. How would you envision this name being generated? Are you looking for the editor to spit out a method name as soon as you type [TestMethod]?
If so, such a feature does not exist in Visual Studio today.
You could create a code snippet that fills out most of the name. Something like this:
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title></Title>
      <Description></Description>
      <Author></Author>
      <Shortcut>test</Shortcut>
    </Header>
    <Snippet>
      <!-- Add additional Snippet information here -->
      <Declarations>
        <Literal>
          <ID>val</ID>
          <ToolTip></ToolTip>
          <Default>val</Default>
        </Literal>
        <Literal>
          <ID>condition</ID>
          <ToolTip></ToolTip>
          <Default>condition</Default>
        </Literal>
      </Declarations>
      <Code Language="CSharp">
        <![CDATA[[TestMethod]
public void Compare$val$to$val$for$condition$()
{
    $end$
}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
