How to add a Coding extension in FHIR resources

I'd like to add an extension Coding to DSTU2 ClaimResponse.item.adjudication.code, which has an extensible binding strength. I have three candidate formats below; which one is proper, or if none of them is, what is the suggested format? Thanks.
a. Use FHIR code "system" with a new code value
"adjudication":[
{
"code":{
"system":"http://hl7.org/fhir/ValueSet/adjudication",
"code":"allowed"
},
"amount":{
"value":21,
"system":"urn:std:iso:4217",
"code":"USD"
}
}
]
b. Use custom code "system" with a new code value
"adjudication":[
{
"code":{
"system":"http://myhealth.com/ClaimResponse/adjudication#allowed",
"code":"allowed"
},
"amount":{
"value":21,
"system":"urn:std:iso:4217",
"code":"USD"
}
}
]
c. Use extension
"adjudication":[
{
"code":{
"extension":[
{
"url":"http://myhealth.com/ClaimResponse/adjudication#allowed",
"valueCode":"allowed"
}
]
},
"amount":{
"value":234,
"system":"urn:std:iso:4217",
"code":"USD"
}
}
]

Option b is the closest, but the system URL looks a little funky. Something like this would be better: "system":"http://myhealth.com/CodeSystem/adjudication-code"
The system should ideally be a URL that resolves to the code system definition (though it doesn't have to) and should apply to a set of codes, not the single code you're conveying. (While it's possible to have one-code code systems, it's more than a little unusual.)
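Putting that together, a corrected option b would look like this (the CodeSystem URL is illustrative, following the suggestion above; the amount block is copied from the question):
"adjudication": [
  {
    "code": {
      "system": "http://myhealth.com/CodeSystem/adjudication-code",
      "code": "allowed"
    },
    "amount": {
      "value": 21,
      "system": "urn:std:iso:4217",
      "code": "USD"
    }
  }
]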
Option a is wrong because we never send the value set URL as the Coding.system. Option c is unnecessary - with an extensible binding, you're free to use any codes that aren't already covered by the defined value set.
All that said, it's not clear that "allowed" makes sense as a value for "code" given the other options in the extensible value set. You might also look at the draft STU 3 version, which eliminates "code" altogether. See if that design meets your needs better, and if not, provide feedback when it goes to ballot this August.

Query on YANG deviation

I am new to YANG deviations. I wrote a deviation like the one below, but I am not sure whether the deviation is effective. Is it possible to print text (the value of an XPath expression) in a must statement for debugging purposes? Please help.
deviation "/ns:direction" {
description "Deviation to restrict if the direction is left.";
deviate add
{
must "(<function to print current()>) " {
error-message "Direction is not left.";
description "Direction is not left." ;
}
}
}
A typical way to check whether your deviation works would be to feed your module set to a YANG schema-aware validator and simply validate an instance document, which ensures that your expression evaluates the way you want. If you tailor the document so that your particular constraint fails, you'd expect the "Direction is not left." error message during validation of said document. I guess you could call that a YANG instance document test case for your YANG schema.
module b {
  yang-version 1.1;
  namespace "b:uri";
  prefix b;

  container top {
    leaf direction {
      type enumeration {
        enum left;
        enum right;
      }
    }
  }
}
module c {
  yang-version 1.1;
  namespace "c:uri";
  prefix "c";

  import b {
    prefix b;
  }

  deviation "/b:top/b:direction" {
    deviate add {
      must ". = 'left'" {
        error-message "Direction is not left.";
        description "Direction is not left.";
      }
    }
  }
}
<?xml version="1.0" encoding="utf-8"?>
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<b:top xmlns:b="b:uri">
<b:direction>right</b:direction>
</b:top>
</config>
Error at (4:5): failed assert at "/nc:config/b:top/b:direction": Direction is not left.
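For contrast, an instance document that satisfies the constraint validates without complaint:
<?xml version="1.0" encoding="utf-8"?>
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <b:top xmlns:b="b:uri">
    <b:direction>left</b:direction>
  </b:top>
</config>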
Must statements are assertions about instantiated data nodes so the only way to "debug" those is to throw them against an instance document, configuration + operational state of a device, datastore contents, etc.
It should not matter whether the actual constraint is introduced in the original module or later via a deviation. Deviations are just quick and dirty patches of an existing model; you won't find any in modules published by the IETF. You usually resort to them if certain hardware cannot support the requirements of a published model (hard resource limits) or if you implement a big model in several stages, tagging some things as "not-supported" (yet). There are several caveats to them: the order in which they are applied matters, for example; you should avoid deviating the same object in several places; you should only define them in specialized modules that contain nothing but deviations (deviation modules); and you should consider them to be temporary.
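For example, here is a minimal deviation module that marks the direction leaf from module b above as unimplemented (the module name d is hypothetical):
module d {
  yang-version 1.1;
  namespace "d:uri";
  prefix d;

  import b {
    prefix b;
  }

  deviation "/b:top/b:direction" {
    deviate not-supported;
  }
}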

Different message types with the same unique ID in protobuf

I'm trying to reverse-engineer and document several protobufs for which I don't have the descriptor metadata, and to create .proto files for them. It's been going great until I encountered a protobuf where two completely differently structured messages share the same unique ID. The top level is simple:
message main
{
  string user = 1;
  repeated Section sections = 2;
}
Looking at the Section type, there are some that look like this:
message Section
{
  string name = 1;
  string fulldescription = 2;
  string briefdescription = 3;
  int32 level = 4;
  ...
}
...and some that look like this:
message Section
{
  int32 cost = 1;
  int32 tier = 2;
  int64 timestamp = 3;
  ...
}
It would make perfect sense if one of these had the ID of 2 and the other of, say, 3, but no: both types show up with the unique ID of 2. The protobuf documentation very clearly states that each field in a message definition must have a unique number, and yet here is a perfectly valid (well, working) protobuf that does the exact opposite. I don't understand how this is possible or, more importantly, how to re-create it in the .proto file.
This does not seem to be a "oneof" situation either, since both message types are present in the same protobuf at the same time, and at any rate the "oneof" alternatives would still have different identifiers for fields of different types.
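For illustration, even if the two shapes were modeled as oneof alternatives (the type and field names here are hypothetical), each alternative would still need its own distinct field number:
message Section
{
  oneof body
  {
    DescriptiveSection descriptive = 1;
    CostSection cost = 2;
  }
}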
For reference, here's an example excerpt from the output generated by protoc --decode-raw, which I'm trying to document:
1 {
  1: "User123"
  2 {
    1 {
      1: "JohnDoe"
      2: "This is the full description of the user, with a lot of details"
      3: "A brief summary of the above"
      4: 13
    }
  }
  2 {
    1 {
      1: 135
      2: 2
      3: 1653606400
    }
  }
}
(This post seems to be asking the same thing, but it's old and doesn't have an actual answer: Can you assign multiple different value types to one field in a repeated Protobuf message?)
(This is my absolutely very first StackOverflow question, so apologies if the quality of the post is not up to snuff; please let me know what I need to add to make it clearer).

How should a Terraform provider handle default values applied on the server side?

Context: I am implementing (my first) Terraform plugin/provider as a wrapper around an existing public API.
One of the create operations in the API specifies an integer field which takes positive values, or -1 serving as a default marker. If you specify -1 in the create API call, the value gets replaced by some default on the server side (say, field = 1000) and is stored as 1000 from then on.
If I present this to my Terraform plugin (terraform apply):
resource "something" "mysomething" {
name = "someName"
field = -1
}
the call is not idempotent: Terraform continues to see drift and subsequently offers:
  # something.mysomething will be updated in-place
  ~ resource "something" "mysomething" {
        id    = "165-1567498530352"
        name  = "someName"
      ~ field = 1000 -> -1
    }

Plan: 0 to add, 1 to change, 0 to destroy.
How should one deal with such an API?
The Terraform SDK includes a special schema flag Computed, which means "if there is no value given in the configuration then a default value will be selected at apply time".
That seems to match with your use-case here. If you unset Default and set Computed: true instead -- retaining Optional: true to indicate that the user can optionally set it -- then you can activate that behavior.
If you're able to predict the final "computed" value during the plan step, before creating or updating anything, then you should implement CustomizeDiff for the resource and use d.SetNew to provide the value, and then Terraform can take it into account to produce a more complete plan.
If not, then you can leave it unset during planning (in Terraform terms, its value will be "unknown") and then call d.Set in the Create and Update functions instead, with the value appearing as (known after apply) in the plan.
When using this mechanism, it's important to be self-consistent: if you provide a known value during planning using CustomizeDiff then that must exactly match a final value selected during Create or Update. If you aren't consistent then any references to this attribute in other expressions will lead to errors during apply when Terraform Core verifies that the final changes are consistent with what was planned.
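A minimal sketch of that pattern, reusing the resource and import path from the answers below; the hard-coded 1000 is an assumption taken from the question, standing in for however the provider would actually predict the server's default:

package something

import (
    "github.com/hashicorp/terraform/helper/schema"
)

func somethingSomething() *schema.Resource {
    return &schema.Resource{
        // ...
        Schema: map[string]*schema.Schema{
            // ...
            "field": {
                Type:     schema.TypeInt,
                Optional: true,
                Computed: true,
            },
        },
        // Fill in the predicted server-side default during planning so the
        // plan shows a concrete value instead of "(known after apply)".
        CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
            // GetOk also treats the zero value as unset, which is fine
            // here because valid values are positive.
            if _, ok := d.GetOk("field"); !ok {
                // Must match exactly what Create/Update will store.
                return d.SetNew("field", 1000)
            }
            return nil
        },
    }
}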
There is currently a caveat with this approach: due to API design limitations of the Terraform SDK today, provider code can't tell when a value that was previously set in configuration is no longer set. Or, to put it another way, the SDK can't tell whether the value already stored was selected by explicit configuration or was populated by the provider as a default during apply.
For this reason, the last value set in the configuration will be "sticky" if the user unsets it, and the provider will not be able to adjust that value back to the server-provided default automatically.
That caveat is often not a big deal in practice, but is worth noting in case it does matter in your specific situation. A subsequent version of the SDK is likely to provide a mechanism to ask if a particular value is set in configuration, separately from what is stored in the state.
You can use the DiffSuppressFunc flag on schema attributes to conditionally suppress a diff so that Terraform doesn't act on it.
Something like this should work for you:
package something

import (
    "github.com/hashicorp/terraform/helper/schema"
)

func somethingSomething() *schema.Resource {
    return &schema.Resource{
        // ...
        Schema: map[string]*schema.Schema{
            // ...
            "field": {
                Type:     schema.TypeInt,
                Optional: true,
                Default:  -1,
                // Suppress the diff whenever the configured value is the
                // -1 sentinel, so the server-chosen default never shows
                // up as drift.
                DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
                    return new == "-1"
                },
            },
        },
    }
}
Martin's answer probably provides a better alternative, using the Computed flag while keeping the attribute Optional. For this to work nicely, you'd want to prevent people from specifying -1 as a value, which you can do with a ValidateFunc, using the IntAtLeast validator from the list of predefined validations in the core SDK:
package something

import (
    "github.com/hashicorp/terraform/helper/schema"
    "github.com/hashicorp/terraform/helper/validation"
)

func somethingSomething() *schema.Resource {
    return &schema.Resource{
        // ...
        Schema: map[string]*schema.Schema{
            // ...
            "field": {
                Type:     schema.TypeInt,
                Optional: true,
                Computed: true,
                // Reject the -1 sentinel (and any other non-positive
                // value) so the server default is reached via Computed.
                ValidateFunc: validation.IntAtLeast(1),
            },
        },
    }
}

How can we differentiate between the include and skip directives in GraphQL?

Both directives can hide a field. When include is false, it works the same as when skip is true, so what makes them different?
According to the spec, there is no real difference -- both directives can be used to prevent a field from being resolved. Under the hood, the only difference is that if both skip and include exist on a field, the skip logic will be evaluated first (i.e. if skip is true, the field will always be omitted regardless of the value of include).
There is no preference between the two. Having both directives allows you to reuse the same variable for both cases where you want to include or exclude different fields. It also makes queries easier to read and reason about.
For example, if you had a schema like this:
type Query {
  pet: Pet
}

type Pet {
  # other fields
  numberLitterBoxes: Int
  numberDogHouses: Int
}
Having both directives allows you to reduce the number of variables that have to be included with your request. For example, you can query:
query ExampleQuery($isCat: Boolean!) {
  pet {
    numberLitterBoxes @include(if: $isCat)
    numberDogHouses @skip(if: $isCat)
  }
}
If you only had one directive or the other, the above query would require you to pass in two variables (isCat and isNotCat).
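For comparison, with @include alone the same query would need both variables:
query ExampleQuery($isCat: Boolean!, $isNotCat: Boolean!) {
  pet {
    numberLitterBoxes @include(if: $isCat)
    numberDogHouses @include(if: $isNotCat)
  }
}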
The only difference shows up when you apply both directives at the same time: skip has the higher priority. Otherwise, the code behind the two directives looks quite similar.

Sorting CouchDB Views By Value

I'm testing out CouchDB to see how it could handle logging some search results. What I'd like to do is produce a view where I can produce the top queries from the results. At the moment I have something like this:
Example document portion
{
  "query": "+dangerous +dogs",
  "hits": "123"
}
Map function
(Not exactly what I need/want but it's good enough for testing)
function(doc) {
  if (doc.query) {
    var split = doc.query.split(" ");
    for (var i in split) {
      emit(split[i], 1);
    }
  }
}
Reduce Function
function(key, values, rereduce) {
  return sum(values);
}
Now this will get me results where a query term is the key and the count for that term is the value, which is great. But I'd like it ordered by the value, not the key. From the sounds of it, this is not yet possible with CouchDB.
So does anyone have any ideas of how I can get a view where I have an ordered version of the query terms & their related counts? I'm very new to CouchDB and I just can't think of how I'd write the functions needed.
It is true that there is no dead-simple answer. There are several patterns however.
1. http://wiki.apache.org/couchdb/View_Snippets#Retrieve_the_top_N_tags. I do not personally like this because they acknowledge that it is a brittle solution, and the code is not relaxing-looking.
2. Avi's answer, which is to sort in-memory in your application.
3. couchdb-lucene, which it seems everybody finds themselves needing eventually!
What I like is what Chris said in Avi's quote. Relax. In CouchDB, databases are lightweight and excel at giving you a unique perspective of your data. These days, the buzz is all about filtered replication, which is all about slicing out subsets of your data to put in a separate DB.
Anyway, the basics are simple. You take the .rows from the view output and insert them into a separate DB whose view simply emits keyed on the count. An additional trick is to write a very simple _list function. Lists "render" the raw couch output into different formats. Your _list function should output
{ "docs":
[ {..view row1...},
{..view row2...},
{..etc...}
]
}
What that will do is format the view output exactly the way the _bulk_docs API requires it. Now you can pipe curl directly into another curl (note the POST needs -d @- to read the piped body, and the target is the new database's _bulk_docs endpoint):
curl host:5984/db/_design/myapp/_list/bulkdocs_formatter/query_popularity \
| curl -X POST -H "Content-Type: application/json" -d @- host:5984/popularity_sorter/_bulk_docs
In fact, if your list function can handle all the docs, you may just have it sort them itself and return them to the client sorted.
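A minimal sketch of such a _list function, assuming the bulkdocs_formatter name from the curl example and using the standard getRow/send list API (the term/count field names are illustrative):

function(head, req) {
  start({ headers: { "Content-Type": "application/json" } });
  var docs = [];
  var row;
  while ((row = getRow())) {
    // Re-key each view row on its count so the target DB's view
    // can emit and sort by it.
    docs.push({ term: row.key, count: row.value });
  }
  send(JSON.stringify({ docs: docs }));
}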
This came up on the CouchDB-user mailing list, and Chris Anderson, one of the primary developers, wrote:
"This is a common request, but not supported directly by CouchDB's views -- to do this you'll need to copy the group-reduce query to another database, and build a view to sort by value. This is a tradeoff we make in favor of dynamic range queries and incremental indexes."
I needed to do this recently as well, and I ended up doing it in my app tier. This is easy to do in JavaScript:
db.view('mydesigndoc', 'myview', { 'group': true }, function(err, data) {
  if (err) throw new Error(JSON.stringify(err));
  data.rows.sort(function(a, b) {
    return a.value - b.value;
  });
  data.rows.reverse(); // optional, depending on your needs
  // do something with the data…
});
This example runs in Node.js and uses node-couchdb, but it could easily be adapted to run in a browser or another JavaScript environment. And of course the concept is portable to any programming language/environment.
HTH!
This is an old question, but I feel it still deserves a decent answer (I spent at least 20 minutes searching for the correct one...)
I disagree with the other suggestions in the answers here and feel that they are unsatisfactory. In particular, I don't like the suggestion to sort the rows in the application layer, as it doesn't scale well and doesn't deal with the case where you need to limit the result set in the DB.
The better approach that I came across is suggested in this thread. It posits that if you need to sort by a value, you should add it into the key set and then query the key using a range: specifying the desired key part and leaving the value part of the range open. For example, if your key is composed of country, state and city:
emit([doc.address.country,doc.address.state, doc.address.city], doc);
Then you query just the country and get free sorting on the rest of the key components:
startkey=["US"]&endkey=["US",{}]
In case you also need to reverse the order, note that simply defining descending: true will not suffice. You actually need to reverse the start and end key order as well, i.e.:
startkey=["US",{}]&endkey=["US"]&descending=true
See this great source for more detail.
I'm unsure about the 1 you have as your returned result, but I'm positive this should do the trick:
emit([doc.hits, split[i]], 1);
The rules of sorting are defined in the docs.
Based on Avi's answer, I came up with this CouchDB list function that worked for my needs: simply a report of the most popular events (key = event name, value = attendees).
ddoc.lists.eventPopularity = function(req, res) {
  start({ headers: { "Content-type": "text/plain" } });
  var data = [];
  var row;
  while ((row = getRow())) {
    data.push(row);
  }
  data.sort(function(a, b) {
    return a.value - b.value;
  }).reverse();
  for (var i in data) {
    send(data[i].value + ': ' + data[i].key + "\n");
  }
}
For reference, here's the corresponding view function:
ddoc.views.eventPopularity = {
  map: function(doc) {
    if (doc.type == 'user') {
      for (var i in doc.events) {
        emit(doc.events[i].event_name, 1);
      }
    }
  },
  reduce: '_count'
}
And the output of the list function (snipped):
165: Design-Driven Innovation: How Designers Facilitate the Dialog
165: Are Your Customers a Crowd or a Community?
164: Social Media Mythbusters
163: Don't Be Afraid Of Creativity! Anything Can Happen
159: Do Agencies Need to Think Like Software Companies?
158: Customer Experience: Future Trends & Insights
156: The Accidental Writer: Great Web Copy for Everyone
155: Why Everything is Amazing But Nobody is Happy
I think every solution above will hurt CouchDB performance. I am very new to this database. As I understand it, CouchDB views prepare their results before they are queried, so it seems we need to prepare sorted results manually. For example, each search term would reside in the database with a hit count, and when somebody searches, its search terms are looked up and their hit counts incremented. When we want to see search term popularity, a view would emit (hitcount, searchterm) pairs.
The Retrieve_the_top_N_tags link above seems to be broken, but I found another solution here.
Quoting the dev who wrote that solution:
rather than returning the results keyed by the tag in the map step, I would emit every occurrence of every tag instead. Then in the reduce step, I would calculate the aggregation values grouped by tag using a hash, transform it into an array, sort it, and choose the top 3.
As stated in the comments, the only problem would be in case of a long tail:
Problem is that you have to be careful with the number of tags you obtain; if the result is bigger than 500 bytes, you'll have couchdb complaining about it, since "reduce has to effectively reduce". 3 or 6 or even 20 tags shouldn't be a problem, though.
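A sketch of that reduce step, assuming each map step emits (tag, 1) pairs and recalling that on the first pass CouchDB supplies keys as [key, docid] pairs:

function(keys, values, rereduce) {
  // Aggregate counts per tag, then keep only the top 3 so the
  // output stays small ("reduce has to effectively reduce").
  var counts = {};
  var tag, i;
  if (rereduce) {
    // values are partial {tag: count} maps from earlier passes.
    for (i = 0; i < values.length; i++) {
      for (tag in values[i]) {
        counts[tag] = (counts[tag] || 0) + values[i][tag];
      }
    }
  } else {
    // values are the raw 1s emitted by the map function.
    for (i = 0; i < keys.length; i++) {
      tag = keys[i][0];
      counts[tag] = (counts[tag] || 0) + values[i];
    }
  }
  var pairs = [];
  for (tag in counts) {
    pairs.push([tag, counts[tag]]);
  }
  pairs.sort(function(a, b) { return b[1] - a[1]; });
  var top = {};
  for (i = 0; i < pairs.length && i < 3; i++) {
    top[pairs[i][0]] = pairs[i][1];
  }
  return top;
}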
It worked perfectly for me; check the link to see the full code!
