Custom Search Parameter for Extensions Resource on FHIR - hl7-fhir

Does FHIR support search based on Extension values?
I have added this extension under the ImagingStudy resource:
{
  "extension": [
    {
      "url": "http://hl7.org/fhir/SearchParameter/institution-name",
      "valueString": "Apollo"
    }
  ]
}
Is it possible to add a custom search parameter for this extension so that it can be searched on? If so, how can I register it?

It's definitely possible to define search parameters that look at extensions. There's an example of one here: http://hl7.org/fhir/searchparameter-example-extension.html
However, the process of getting a given server to support those search parameters depends on what server you're using. Some of the reference implementation servers have an ability to generically support any 'normal' SearchParameter that is appropriately registered. Other servers will require custom coding to support new parameters.
Note that having an extension with a canonical URL that looks like a SearchParameter is going to be confusing to most implementers. If you're using a FHIR-based canonical URL, it should be a StructureDefinition.
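For illustration only, a SearchParameter for the extension above might look roughly like this (the URLs are placeholders, and note the point above that the extension itself should use a StructureDefinition-style canonical URL):
{
  "resourceType": "SearchParameter",
  "url": "http://example.org/fhir/SearchParameter/imagingstudy-institution-name",
  "name": "institution-name",
  "status": "active",
  "description": "Search ImagingStudy by the institution-name extension",
  "code": "institution-name",
  "base": ["ImagingStudy"],
  "type": "string",
  "expression": "ImagingStudy.extension.where(url = 'http://example.org/fhir/StructureDefinition/institution-name').value"
}
How you register it is server-specific: servers that support dynamic parameters typically let you POST the SearchParameter resource (possibly followed by a reindex), while others need custom code, as noted above.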

Related

xmldoc <see> with cref that refers to a class under a separate csproj

We have an SDK project that includes a test engine. These live in two different csproj files, SDK.csproj and TestEngine.csproj, where TestEngine.csproj has a ProjectReference to SDK.csproj.
I have a DocFx project set up that builds metadata for these two separately, with docfx.json metadata that looks like this:
{
  "src": [
    {
      "src": "../../sdk",
      "files": ["csharp/SDK/**/*.cs"]
    }
  ],
  "dest": "reference/SDK"
},
{
  "src": [
    {
      "src": "../../sdk",
      "files": ["csharp/TestEngine/**/*.cs"]
    }
  ],
  "dest": "reference/TestEngine"
}
This way I can set up TOCs to put these documentation trees under separate tabs.
However, I cannot use a cref in TestEngine xml docs that refers to a class from SDK. I get an error like this from DocFX build:
Warning:[MetadataCommand.ExtractMetadata]Invalid cref value "!:SDK.SDKClass" found in triple-slash-comments for TestEngineClass, ignored.
I can imagine why this fails - DocFX is generating the metadata for TestEngine alone so it doesn't know about the SDK classes or how to link to them. Is there a way I can change the configuration so that I can keep these two projects separate (under separate TOC's) in the final website but still link from TestEngine to SDK classes?
I realized that using # and/or xref tags as described in the DocFX documentation does resolve to links properly in the generated web pages. So that gets me a lot of what I want. However, it's not a complete solution, as other generated references do not resolve to links. For example, if a TestEngine method has a parameter of type SDK.SDKClass, the generated docs won't make a link for SDKClass where it appears in the parameter documentation. So I'm still wondering if there is another solution.
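One configuration variant that might be worth experimenting with (I haven't verified it fixes the cref warning, and it collapses the two trees into one, so it may not satisfy the separate-TOC requirement) is a single metadata entry that extracts both projects together, so the SDK types are known while TestEngine is processed; the shape would be roughly:
{
  "src": [
    {
      "src": "../../sdk",
      "files": ["csharp/SDK/**/*.cs", "csharp/TestEngine/**/*.cs"]
    }
  ],
  "dest": "reference/Combined"
}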

Extract Graphql queries sent by a browser application with Apollo client

I am trying to simplify the process of exporting GraphQL queries sent by my application for documentation purposes. For the record, I want to be able to paste those queries into Postman collections.
Here are my different approaches:
Relying on .graphql files: first, it's still very difficult to set up with a full-fledged TypeScript + Webpack + Babel setup (using Next.js). More importantly, it does not provide the variables, so you only have half the query.
Relying on the network tab: from there, we can copy the query content and import it into Postman. Combined with Cypress it could provide an awesome workflow. It works OK, but Apollo Client sends queries as JSON objects, which are difficult to interpret.
I tried to use the "application/graphql" content type. It's way more readable and available in Postman, but it is non-standard and thus not available in Apollo Client.
So my question is rather open: what are the possibilities for extracting the GraphQL queries (and variables) sent by my browser and injecting them into Postman?
The most promising solutions are enabling "application/graphql" client-side, or converting the JSON representation back to a string representation, but I could explore other possibilities (e.g. using Apollo Engine as an intermediate).
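For the conversion idea, a minimal sketch would be something like the following (assuming the request body copied from the network tab is saved as body.json and has the usual Apollo POST shape with "query" and "variables" keys; the file name is illustrative):
// Print the query as readable GraphQL and the variables as pretty JSON for Postman.
const fs = require("fs");
const body = JSON.parse(fs.readFileSync("body.json", "utf8"));
console.log(body.query);
console.log(JSON.stringify(body.variables, null, 2));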
A way to do this is to use the apollo CLI tool. It includes a client:extract command that extracts all of the GraphQL operations/documents in your application into a file. You run the tool against your JS(X)/TS(X) files, and it produces a file that looks like this (this output is the result of pointing the tool at a single file containing a single query):
{
  "version": 2,
  "operations": [
    {
      "signature": "b4f318e6aebcc3631bc88eedef09c6001bb8c310917e97ee6df4a99e17c3c056",
      "document": "query BootstrapQuery{user:viewer{__typename delivery_time_1 delivery_time_2 devices{__typename fcm_token id notification{__typename enabled}}has_password id label location name next_delivery_string oauths{__typename email id name picture provider}plan plan_billing_service plan_expires plan_since plan_will_renew profile_picture recovery_email timezone username}}",
      "metadata": {
        "engineSignature": ""
      }
    }
  ]
}
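For reference, the invocation is roughly the following (the output file name is up to you, and the include options vary by CLI version and apollo.config.js setup, so check apollo client:extract --help):
npx apollo client:extract extracted-queries.json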
You can then use that file however you want.
In my case, I use this tool to publish an allow-list of operations to Hasura. I'm not sure what you mean by injecting queries into Postman, but I think this tool may provide you with an automated start that would be an improvement over manual copy/pasting.

Vizrt API JSON Data integration with Graphics

I want to integrate a JSON response from an API into Vizrt software. Can anyone help me understand how to read the JSON response and display it on graphics or transitions?
Thank you in advance.
The best and easiest way would be to use the DataPool plugins already provided in most Vizrt installations. These plugins don't require licensing and are supported in most versions of the software. There is a plugin titled DataReader which allows you to specify a file or web address to pull Excel, SQL, XML, or JSON information from, and it can do this at a regular frequency (every 10 seconds, etc.).
You can get a lot of info about these plugins on the documentation site.
Also, when installing make sure to do a Complete installation instead of Typical. That will make sure that all the DataPool plugins are installed.
Firstly, I would suggest reading the relevant sections of the Vizrt documentation (it will point you to where you can find example projects, etc.).
There are many different ways of getting a JSON feed into Viz graphics, but it all comes down to your requirements.
If you would like to use Viz Trio, you could talk to the Media Sequencer Engine (MSE), by default on port 6100 for preview and 6800 for program.
Or you could communicate directly with the Viz Engine from an external application, preferably using .NET. For example:
using System.Xml;
using System.Xml.XPath;

// nameSpaceManager_ is an XmlNamespaceManager with the "vdf" prefix registered
// for the Viz data format namespace used in the payload document.
private void SetValueToDocumentByXPath(XmlDocument doc, string xpath, string value)
{
    var nav = doc.CreateNavigator();
    var it = nav.Select(xpath, nameSpaceManager_);
    if (it.MoveNext())
    {
        it.Current.SetValue(value);
    }
}

// Target tab field "01" and set its value to "My new headline"
// (note the attribute axis in XPath is "@name", not "#name").
SetValueToDocumentByXPath(elementDoc, "//vdf:payload/vdf:field[@name='01']/vdf:value", "My new headline");
The line above targets tab field 01, setting its value to "My new headline".
The XmlDocument can be fetched from the MSE.

In Strapi, what are API templates for?

I've started playing with strapi to see what it can do.
When I generate an API through Strapi Studio, it generates a set of base files to handle the model and API calls.
In the entity folder (e.g. article), there's a templates/default folder created with a default template. For an article entity, I get an ArticleDefault.template.json file with this:
{
  "default": {
    "attributes": {
      "title": {},
      "content": {}
    },
    "displayedAttribute": "title"
  }
}
In Strapi Studio I can also add additional templates for each entity, giving it multiple templates.
The command-line API generator does not create the templates folder.
I couldn't find anything about it in the documentation I read.
What are the generated templates for?
When would I use them, and how would I choose a particular template if I have multiple?
I'm one of the authors of Strapi.
A template is like a schema of data. Let's take a simple example. You have an API called Post; sometimes your post has a title and a content attribute, but other times your post has a title, a subtitle, a cover and a content attribute. In both cases, we're talking about the same API Post, but your schema of data is different. That's why we implemented templates: your needs can differ for the same content.
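Following the shape of the generated ArticleDefault.template.json shown above, a second template for that Post example might look roughly like this (the file and key names here are illustrative; the Studio generates them for you):
{
  "extended": {
    "attributes": {
      "title": {},
      "subtitle": {},
      "cover": {},
      "content": {}
    },
    "displayedAttribute": "title"
  }
}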
Then, as you said, the CLI doesn't generate a templates folder in the project. The Studio doesn't use the same generator as the CLI, but the behavior of your API is the same.

Firefox-Addon: Add search engine with varying URL and suggestions

My Firefox addon shall add a search engine that:
provides suggestions
gets its search template URL specified at runtime (i.e. the template URL depends on the preferences of the user)
And I don't see a way to do both at the same time.
I see two options to add a search engine:
addEngineWithDetails
addEngine
addEngineWithDetails() allows me to add a search engine with the template URL, but it does not (apparently?) allow providing a suggestions URL.
addEngine() allows me to add a search engine that is specified in an XML file. But if I have that file saved locally in my addon directory (e.g. chrome://example-engine/content/search.xml), how can I change the template URL at runtime? And using an online XML file is an unsafe option, since the internet connection could be broken or bad during the addon install.
First of all, you're right: addEngineWithDetails does not support suggestions.
The way to go would be to use addEngine (and removeEngine).
As for the "dynamic" part of your question: While I didn't test it, the implementation seems to happily accept data: URIs. So you could:
Construct a data URI using whatever methods you like (even constructing a full XML DOM and serializing it).
Call addEngine with the data URI.
When the user changes a pref, remove the old engine, and construct a new one.
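A rough sketch of the first two steps, assuming an OpenSearch description document; the pref name and the suggestion URL shape are illustrative, and the exact addEngine signature differs between Firefox versions, so treat that call as pseudocode:
// Assumption: "prefs" is an nsIPrefBranch for this addon; the pref name is made up.
var templateUrl = prefs.getCharPref("searchTemplateUrl");
var xml =
  '<?xml version="1.0" encoding="UTF-8"?>' +
  '<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">' +
  '  <ShortName>Example Engine</ShortName>' +
  '  <Url type="text/html" template="' + templateUrl + '?q={searchTerms}"/>' +
  '  <Url type="application/x-suggestions+json" template="' + templateUrl + '/suggest?q={searchTerms}"/>' +
  '</OpenSearchDescription>';
var dataUri = "data:application/opensearchdescription+xml;charset=utf-8," + encodeURIComponent(xml);
// Hand dataUri to the search service's addEngine(...); when the pref changes,
// removeEngine(...) the old engine and add a new one built the same way.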
