In FHIR StructureDefinitions (profiles) how do elements aggregate into a snapshot?

How does one "roll up" or aggregate differential elements with "base" elements to create a snapshot?

Snapshot generation combines attributes from base elements (elements defined in the base profiles) with the differential elements in the constraint profile. Matching of elements from constraint to base is by name (if present in both) and path. A slice in a constraint ("homePhone") uses the bare path ("Patient.telecom") as the base element. Re-slices use the most complete slice from the base that matches.
Base elements can be sourced from either:
A snapshot of the StructureDefinition identified by the constraint profile's StructureDefinition.base value.
A recursive application of these rules up the StructureDefinition "tree" (base to base to base ad nauseam).
If the constraint profile includes elements from within a complex type (e.g. path=Patient.telecom.system), the base element will be found in the first of:
The profile identified in type.profile for the element (if any).
The HL7-provided datatype profile for the type (e.g. ContactPoint in this example).
Elements are brought into the snapshot using one of these methods (a sketch of the merge logic follows the element-by-element list below):
K - key: must be present in both base and constraint profiles and is used to match elements between the two
F - fixed from the base and cannot be overridden. If present in the differential, this value must match the base exactly
I - inherited from the base if not present in the constraint
N - not inherited and can be set in the constraint. If blank/missing in the differential, will be blank/missing in the snapshot.
F/N - if present in the base, the constraint must match. If not, a value may be set.
A - aggregated from the base - base instances are added to differential instances
R - restricted from base - the differential must be some subset of the base instances
Element by element:
path (K) - required in both base and constraint for matching.
representation (F)
name (K) - required in both base and constraint if appropriate
label (I)
code (A)
slicing (F/N) - if the base is sliced, the constraint must match. If not, a slicing can be introduced. Also see reslicing.
short (I)
definition (I)
comments (I)
alias (A)
min (I) - min in the constraint must be greater than or equal to the base's min. Slices are exempt from this rule (a slice may be min=0 when the base is min=1, because other slices may satisfy the base's min constraint).
max (I) - similar to min.
base (F)
type (R) - each type must have a code present in the base element. The constraint can add profiles and/or repeat a type code with different profiles.
nameReference (F)
defaultValue[x] (F)
meaningWhenMissing (F)
fixed[x] (F/N)
pattern[x] (F/N) - the exception is that a pattern can be refined to a fixed value.
example[x] (I)
minValue[x] (I)
maxValue[x] (I)
condition (A)
constraint (A)
mustSupport (F/N)
isModifier (F)
isSummary (F)
binding (A)
mapping (A)
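For illustration, here is a minimal sketch of the merge logic in Python, assuming base and differential elements are plain dicts that have already been matched by path/name. The reduced rule table, the dict representation and the function name are all illustrative, not the official FHIR algorithm:

# Sketch only: a reduced illustration of the K/F/I/A codes above.
INHERITED  = ["label", "short", "definition", "comments", "min", "max",
              "example", "minValue", "maxValue"]                      # I
FIXED      = ["representation", "base", "nameReference", "defaultValue",
              "meaningWhenMissing", "isModifier", "isSummary"]        # F
AGGREGATED = ["code", "alias", "condition", "constraint",
              "binding", "mapping"]                                   # A (list-valued)

def merge_element(base, diff):
    """Combine one base element with its matching differential element."""
    snap = {"path": base["path"]}                 # K: matching key, carried over
    if "name" in base or "name" in diff:
        snap["name"] = diff.get("name", base.get("name"))
    for attr in FIXED:                            # F: base wins; diff must agree
        if attr in diff and diff[attr] != base.get(attr):
            raise ValueError(attr + " is fixed by the base and cannot change")
        if attr in base:
            snap[attr] = base[attr]
    for attr in INHERITED:                        # I: diff overrides, else inherit
        if attr in diff:
            snap[attr] = diff[attr]
        elif attr in base:
            snap[attr] = base[attr]
    for attr in AGGREGATED:                       # A: base instances + diff instances
        merged = base.get(attr, []) + diff.get(attr, [])
        if merged:
            snap[attr] = merged
    return snap                                   # N, R and F/N codes omitted for brevity

Note the I loop does not check the extra rules on min/max (a constraint's min must be greater than or equal to the base's, and so on); a real generator would validate those too.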

Chris, I have started working on this, but it is really a work in progress. Would love to cooperate with you:
https://github.com/ewoutkramer/strucdefdoc/wiki/SD's-expressing-constraints
(it's part of a wiki describing the StructureDefinition more fully)

Related

How to query using ← / back arrow / assignment / naming / renaming / declaration

Student(id, name, address, city, birthYear)
Course(code, courseName, lengthWeeks, institution°)
Institution(instNr, instName, instCity)
Takes(id°, code°)
List the ids and names of students who do not take the Databases course.
t1 ← σ courseName=Databases (Course)
t2 ← t1 ⋈ Takes
t3 ← t2 ⋈ Student
t4 ← π id, name (t3)
t5 ← π id, name (Student)
R ← t5 \ t4
I don't understand the logic behind t5. How can you pick ids and names that are already removed from the relation? Could you do it another way with the ¬?
RA (relational algebra) operators return values given input relation values. (As with everyday addition for integers.) They don't change or establish the values associated with names of variables or constants. That's outside an algebra. The only way a name might get associated with a new value is if your ← is defined to be variable assignment rather than constant declaration. You have to tell us that. Either way it's not a relational algebra operator. It is a symbol in a language whose expressions include more than just names denoting relations (terminals) and nested algebra operator calls (non-terminals). We can in a certain sense consider it an operator of that language but it is not an operator of the algebra. And either way your code never associates a second value with the same name.
(Some RAs have operators that input names as well as relation values in order to disambiguate inputs. But using such input names to associate variable/constant names with values on output is not a RA operation.)
(A RA could define its notion of relation as containing a name and a set of tuples, so it could have an algebraic renaming operator that takes a name and a relation and returns a relation with the given name in its name part. But again that would not affect the value associated with any variable/constant name used to identify the input relation.)
Re "do it another way": Usually we assume some RA operators & some way to name relation values, and then there is an obvious set of query expressions: We take an expression that is a name to denote its associated value and we take an expression that is an operator call to denote the value of calling the operator on the values of its argument expressions. (As with everyday integer arithmetic expressions.)
But a particular query language, although defined in terms of such expressions, might not actually allow you to write any or all such expressions. (E.g. relational tuple calculus & domain calculus.) And/or it might define additional expressions in terms of expression rearrangement without explicitly giving a corresponding operator.
(Some presentations of relational algebras confuse RA operators with expressions of a query language inspired by RA. Then there might be query expressions whose results depend on the variable/constant names that appear as arguments, rather than just the values named by the names that appear as arguments. Then there can be query expressions that you might not be able to write via a nesting of operator calls and that might need you to associate a value with a name via assignment/declaration.)
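To see the logic concretely: t4 holds the students who do take Databases, while t5 is computed from Student itself, which no earlier step ever modified, so t5 \ t4 is exactly the students who do not take it. Here is the same sequence of steps sketched in Python sets with invented sample data (only the id and name columns of Student are shown, since both sides are projected to them anyway):

# Invented sample data; each relation is a set of tuples.
Student = {(1, "Ann"), (2, "Bob"), (3, "Cid")}           # (id, name)
Takes   = {(1, "DB1"), (2, "AL1")}                        # (id, code)
Course  = {("DB1", "Databases"), ("AL1", "Algorithms")}   # (code, courseName)

# t1 ← σ courseName=Databases (Course)
t1 = {(code, cname) for (code, cname) in Course if cname == "Databases"}
# t2 ← t1 ⋈ Takes (join on code)
t2 = {(sid, code) for (sid, code) in Takes for (c, _) in t1 if code == c}
# t4 ← π id,name (t2 ⋈ Student): students who DO take Databases
t4 = {(sid, sname) for (sid, sname) in Student for (tid, _) in t2 if sid == tid}
# t5 ← π id,name (Student): ALL students; Student was never changed
t5 = Student
R = t5 - t4          # set difference removes the Databases takers
print(R)             # {(2, 'Bob'), (3, 'Cid')} (order may vary)

No value was ever "removed from the relation": each step built a new value, and Student still denotes its original value when t5 is computed.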

What is the base of a refined element?

Consider the logical fragment:
Patient (Profile A)
identifier (sliced on system) 0..*
myclinicnbr (slice 1) 0..1
yourclinicnbr (slice 2) 0..*
And then:
Patient (Profile B, base is A)
identifier (sliced on system) 0..2
myclinicnbr (slice 1) (no diff)
yourclinicnbr (slice 2) 0..*
In B, the effective cardinalities are:
identifier 0..2 (explicit)
myclinicnbr 0..1 (constrained by A::myclinicnbr)
yourclinicnbr 0..2 (constrained by B::identifier)
Questions are:
Should B validate with B::yourclinicnbr having a cardinality incompatible with B::identifier?
Must B::yourclinicnbr override A::yourclinicnbr to bring it into compliance with B::identifier, or could it make no statement?
For each part in B, what is the correct snapshot cardinality?
I don't think we've said that cardinalities for slices must be proper subsets of the parent. Doing that would require a lot of awkward math each time a new slice got introduced - adding a minOccurs = 1 slice could easily force decreasing the maximum of a bunch of other 0..n slices. The expectation is that the instance must meet the constraints of both the base element and the slices, so if you have a 0..3 element with a bunch of 0..* slices, you can't have an instance that contains more than 3 repetitions, regardless of the fact that the slices might indicate 0..*. So the implementation behavior isn't confusing. It could however be confusing from a documentation perspective, and might in some cases cause confusion for software.
My leaning is to leave it up to the profile designer as to whether the slice cardinalities are strict mathematical subsets of the element cardinality. In some cases it'll be easy and worth it. In other cases, it might be more pain than the effort justifies. If you think we need to be tighter on this, a change proposal with rationale would be welcome.
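A small sketch of that behavior in Python (the tuple representation and the names are invented for illustration): an instance's repetitions are checked against the parent element's cardinality and against each slice's own cardinality independently, with no requirement that the slice ranges be strict subsets of the parent range:

import math

# Invented representation: cardinality is (min, max), math.inf for "*".
parent = (0, 2)                                # identifier 0..2 in profile B
slices = {"myclinicnbr":   (0, 1),             # from A
          "yourclinicnbr": (0, math.inf)}      # declared 0..*

def validate(instance_counts):
    """instance_counts maps slice name -> repetitions in the instance."""
    total = sum(instance_counts.values())
    if not (parent[0] <= total <= parent[1]):  # parent bounds the total
        return False
    return all(slices[name][0] <= n <= slices[name][1]   # each slice bounds its own
               for name, n in instance_counts.items())

print(validate({"myclinicnbr": 1, "yourclinicnbr": 1}))  # True  (total 2 <= 2)
print(validate({"myclinicnbr": 1, "yourclinicnbr": 2}))  # False (total 3 > 2)

So a 0..* slice is effectively capped at 2 by B::identifier even if its own max is never rewritten in the snapshot.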

Build Heap function

In my university notes the pseudocode of Build Heap is written almost like this (the only difference was parentheses where I have brackets), and I found several versions like it on the internet; they all start with heap_size[A] = length[A]. But shouldn't it be something like this?
BuildHeap(A) {
heapsize <- length[A]
for i <- floor(length[A]/2) downto 1
Heapify(A,i)
}
Why are they writing heap_size[A] = length[A]?
If you have many heaps A, B, and C, but only one variable heap-size, how will you remember the sizes of all the heaps? You need a heap-size attribute for each heap.
In much pseudocode the attributes of an object O are written as Attribute[O] or Attribute(O); sometimes they are also written as O.attribute.
The first example assumes that you are storing the heap size of a particular heap as an attribute of the heap.
The second example might be storing the heap size in a local variable which gets its value from the length attribute (Length[A]) of the heap.
Here is a passage about pseudocode from Introduction to Algorithms:
Compound data are typically organized into objects, which are comprised of attributes or fields. A particular field is accessed using the field name followed by the name of its object in square brackets. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write length[A]. Although we use square brackets for both array indexing and object attributes, it will usually be clear from the context which interpretation is intended.
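To see both notations side by side in a real language, here is an illustrative Python version of the CLRS-style BuildHeap for a max-heap; the Heap class and names are mine, not from the book. The heap size lives on the heap object itself (the role of heap_size[A]), so several heaps can each remember their own size:

class Heap:
    def __init__(self, items):
        self.a = list(items)          # 1-based indexing is simulated below
        self.heap_size = 0            # the heap_size[A] attribute

def heapify(h, i):
    """Sift element i down until the max-heap property holds (i is 1-based)."""
    left, right, largest = 2 * i, 2 * i + 1, i
    if left <= h.heap_size and h.a[left - 1] > h.a[largest - 1]:
        largest = left
    if right <= h.heap_size and h.a[right - 1] > h.a[largest - 1]:
        largest = right
    if largest != i:
        h.a[i - 1], h.a[largest - 1] = h.a[largest - 1], h.a[i - 1]
        heapify(h, largest)

def build_heap(h):
    h.heap_size = len(h.a)                   # heap_size[A] <- length[A]
    for i in range(len(h.a) // 2, 0, -1):    # floor(length[A]/2) downto 1
        heapify(h, i)

h = Heap([4, 1, 3, 2, 16, 9, 10, 14, 8, 7])
build_heap(h)
print(h.a)    # [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]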

Function to detect conflicting mathematical operators in VB6

I've developed a program which generates insurance quotes using different types of coverage based on state criteria. Now I want to add the ability to specify 'rules'. For example, we may have three types of coverage (we'll call them UM, BI, and PD). Some states don't allow PD to be greater than BI, and other states don't allow UM to exist without BI. So I've added the ability for the user to create these rules so that when the quote is generated the rule will be followed and thus no state regulations will be violated.
The Problem
I don't want the user to be able to select conflicting rules. The user can select any of the VB mathematical operators (>, <, >=, <=, =, <>) and set a coverage on either side. They can do this multiple times (but only one at a time) so they might end up with a list of rules like this:
A > B
B > C
C > A
As you can see, the last rule conflicts with the previously set rules. My solution to this was to validate the list each time the user clicks 'Add rule to list'.
Pretend the 3rd list item is not yet in the list but the user has clicked 'add rule' to put it in the list. The validation process first checks to see if both incoming variables have already been used on the same line. If not, it searches for the left-side incoming variable (in this case 'C') in the already created list. If it finds it, it sets tmp1 equal to the variable across from the match (tmp1 = 'B'). It then does the same for the incoming variable on the right side (in this case 'A'), so tmp2 is set equal to the variable across from A (tmp2 = 'B'). If tmp1 and tmp2 are equal, then the incoming rule is either conflicting OR irrelevant, regardless of the operators used. I'm pretty sure this is solid logic given 3 variables. However, I found that adding any additional variables could easily bypass my validation. There could be upwards of 10 coverage types in any given state, so it is important to be able to validate more than just 3.
Is there any uniform way to do a sound validation given any number of variables? Any ideas or thoughts are appreciated. I hope my explanation makes sense.
Thanks
My best bet is some sort of hierarchical tree of rules. When the user adds the first rule (say A > B), the application could create a data structure like this (lowerValues is a Map in which each key leads to a list of values):
lowerValues['A'] = ['B']
Now when the user adds the next rule (B > C), the application could check if B is already in any lowerValues list (in this case, A's). If it is, C is added to lowerValues['A'], and lowerValues['B'] is also created:
lowerValues['A'] = ['B', 'C']
lowerValues['B'] = ['C']
Finally, when the last rule is provided by the user (C > A), the application checks if C is in any lowerValues list. Since it is in both A's and B's lists, the rule is invalid.
Hope that helps. I don't remember if there's some sort of mapping in VB. I think you should try the Dictionary object.
For this idea to work out, all the operations must be internally translated to a single canonical form. So, for example:
A > B
could be translated as
B < A
Good luck
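Here is a sketch of this idea in Python (all names invented): each strict rule is canonicalized into a "lower is below higher" edge, and a new rule is rejected when the other side is already reachable below it, i.e. when adding the edge would close a cycle. The non-strict operators (=, <=, >=, <>) would need extra handling and are omitted:

def canonicalize(left, op, right):
    """Rewrite a strict inequality as (lower, higher); None otherwise."""
    if op == ">":
        return (right, left)
    if op == "<":
        return (left, right)
    return None                    # =, <=, >=, <> not handled in this sketch

def reachable(lower_values, start, goal):
    """Is `goal` somewhere below `start` in the rule graph?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(lower_values.get(node, []))
    return False

def add_rule(lower_values, left, op, right):
    edge = canonicalize(left, op, right)
    if edge is None:
        return                                  # not handled in this sketch
    low, high = edge
    if reachable(lower_values, low, high):      # `high` is already below `low`
        raise ValueError("rule %s %s %s conflicts with earlier rules"
                         % (left, op, right))
    lower_values.setdefault(high, []).append(low)

rules = {}
add_rule(rules, "A", ">", "B")    # ok: lowerValues['A'] = ['B']
add_rule(rules, "B", ">", "C")    # ok: lowerValues['B'] = ['C']
add_rule(rules, "C", ">", "A")    # raises: cycle A > B > C > A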
In general this is a pretty hard problem. What you in fact want to know is whether a set of propositional equations over (apparently) some set of arithmetic is true. To do this you need what amounts to constraint solvers that "know" arithmetic. You're not likely to find that in VB6, but you might be able to invoke one as a subprocess.
If the rules are propositional equations only over inequalities, you can take a simpler approach. First, canonicalize the inequalities (for "A<B" and "B>A", write them only one way).
Second, try testing the propositions for tautology (see Wang's algorithm, which you can likely implement, if awkwardly, in VB6).
If the propositions are not a tautology, you then want to build chains of inequalities (e.g., A > B > C) as a graph and look for cycles. The place this fails is when your propositions have disjunctions, e.g., "A>B or B>Q"; you'll have to generate an inequality chain for each combination of disjuncts, and discard the inconsistent ones. If you discard all of them, the set is inconsistent. Watch out for negated conjunctions: by De Morgan's theorem, "not (A and B)" is equivalent to "not A or not B", e.g., "not (A>B and B>Q)" is the same as "A<=B or B<=Q". You might want to reduce the conditions to disjunctive normal form to avoid getting surprised.
There are apparently decision procedures for such inequalities. They're likely hard to implement.

implementing a basic search engine with prefix tree

The problem is implementing a prefix tree (trie) in a functional language without using any mutable storage or iterative methods.
I am trying to solve this problem. How should I approach it? Can you give me an exact algorithm, or a link to an existing implementation in any functional language?
What I am trying to do => create a simple search engine with the features of:
adding a word to the tree
searching for a word in the tree
deleting a word from the tree
Why I want to use a functional language => I want to improve my problem-solving ability a bit further.
NOTE: Since it is my hobby project, I will first implement basic features.
EDIT:
i.) What I mean by "without using storage" => I don't want to use variable storage (e.g. int a), references to variables, or arrays. I want to compute the result recursively and then show the result on the screen.
ii.) I had written some lines, but I erased them because what I wrote made me angry. Sorry for not showing my effort.
Take a look at Haskell's Data.IntMap. It is a purely functional implementation of a Patricia trie and its source is quite readable.
The bytestring-trie package extends this approach to ByteStrings.
There is an accompanying paper, Fast Mergeable Integer Maps, which is also readable and thorough. It describes the implementation step by step: from binary tries to big-endian Patricia trees.
Here is a little extract from the paper.
At its simplest, a binary trie is a complete binary tree of depth
equal to the number of bits in the keys, where each leaf is either
empty, indicating that the corresponding key is unbound, or full, in
which case it contains the data to which the corresponding key is
bound. This style of trie might be represented in Standard ML as
datatype 'a Dict =
Empty
| Lf of 'a
| Br of 'a Dict * 'a Dict
To lookup a value in a binary trie, we simply read the bits of the
key, going left or right as directed, until we reach a leaf.
fun lookup (k, Empty) = NONE
| lookup (k, Lf x) = SOME x
| lookup (k, Br (t0,t1)) =
if even k then lookup (k div 2, t0)
else lookup (k div 2, t1)
The key point in immutable data structure implementations is sharing of both data and structure. To update an object you create a new version of it that shares as many nodes as possible with the old one. Concretely, for tries the following approach may be used.
Consider such a trie (the example from Wikipedia, which stores the words "A", "to", "tea", "ted", "ten", "i", "in" and "inn"):
Imagine that you haven't added the word "inn" yet, but you already have the word "in". To add "inn" you have to create a new instance of the whole trie with "inn" added. However, you are not forced to copy the whole thing - you can create new instances of only the root node (the one without a label) and the right branch. The new root node will point to the new right branch, but to the old versions of the other branches, so with each update most of the structure is shared with the previous state.
However, your keys may be quite long, so recreating a whole branch each time is still both time and space consuming. To lessen this effect, you may share structure inside one node too. Normally each node is a vector or map of all possible outcomes (e.g. in the picture, the node with label "te" has 3 outcomes - "a", "d" and "n"). There are plenty of implementations of immutable maps (Scala, Clojure - see their repositories for more examples), and Clojure also has an excellent implementation of an immutable vector (which is actually a tree).
All the operations for creating, updating and searching such tries may be implemented recursively, without any mutable state.
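As an illustration of that recursive, mutation-free style, here is a minimal persistent trie in Python (the representation and names are mine): insert never modifies a node, it returns a new root that shares every untouched subtree with the previous version, exactly as in the "inn" example above:

from types import MappingProxyType

# A node is (is_word, children); children is a read-only map char -> node.
EMPTY = (False, MappingProxyType({}))

def insert(node, word):
    """Return a NEW trie containing `word`; `node` itself is untouched."""
    is_word, children = node
    if not word:
        return (True, children)                   # mark end of word
    head, rest = word[0], word[1:]
    new_children = dict(children)                 # copy only this level
    new_children[head] = insert(children.get(head, EMPTY), rest)
    return (is_word, MappingProxyType(new_children))   # siblings are shared

def contains(node, word):
    is_word, children = node
    if not word:
        return is_word
    child = children.get(word[0])
    return contains(child, word[1:]) if child else False

t1 = insert(EMPTY, "in")
t2 = insert(t1, "inn")             # t1 is still a valid, unchanged trie
print(contains(t1, "inn"), contains(t2, "inn"))    # False True

Deletion works the same way: rebuild the nodes on the path to the removed word and share everything else.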
