I have been using the HL7.org tool org.hl7.fhir.validator.jar to validate my messages, but I would like to add this functionality to my .NET project. Once I parse the message, is there a class I can call to validate the structure?
Is there a validation class in fhir-net-api that will display the same results as org.hl7.fhir.validator.jar?
string HL7FilePath = string.Format("{0}\\{1}", System.IO.Directory.GetCurrentDirectory(), "Sample.xml");
string HL7FileData = File.ReadAllText(HL7FilePath);
var b = new FhirXmlParser().Parse<PlanDefinition>(HL7FileData);
FHIR Validator Build ??
Arguments: C:\HL7Tools\validator\REC78_1.xml -version 3.0
.. connect to tx server @ http://tx.fhir.org
.. definitions from hl7.fhir.core#3.0.1
(v3.0.1-null)
.. validate [C:\HL7Tools\validator\Sample.xml]
Terminology server: Check for supported code systems for http://www.nlm.nih.gov/research/umls/rxnorm
Success.
Yes, there is. You need to add the Hl7.Fhir.Specification.STU3 package, and can then use the validation methods like this:
using Hl7.Fhir.Specification.Source;
using Hl7.Fhir.Validation;

... your code, reading the PlanDefinition from file and parsing it ...

// setup the resolver to use specification.zip, and a folder with custom profiles
var source = new CachedResolver(new MultiResolver(
    new DirectorySource(@"<path_to_profile_folder>"),
    ZipSource.CreateValidationSource()));

// prepare the settings for the validator
var ctx = new ValidationSettings()
{
    ResourceResolver = source,
    GenerateSnapshot = true,
    Trace = false,
    EnableXsdValidation = true,
    ResolveExteralReferences = false
};

var validator = new Validator(ctx);

// validate the resource; optionally enter a custom profile url as 2nd parameter
var result = validator.Validate(b);
The result will be an OperationOutcome resource containing the details of the validation.
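If you also want console output comparable to the Java validator's report, you can walk the issues on that OperationOutcome yourself. A minimal sketch (the message formatting below is my own, not the Java tool's exact output):

// Print each validation issue, similar in spirit to the command-line validator's report.
if (result.Success)
{
    Console.WriteLine("Success.");
}
else
{
    foreach (var issue in result.Issue)
    {
        Console.WriteLine($"{issue.Severity}: {issue.Details?.Text} (at {string.Join(", ", issue.Location)})");
    }
}

Each issue carries a severity, a human-readable detail text and the location of the offending element, which is roughly what the Java validator prints per finding.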
I am completely new to FHIR and have stumbled upon this NuGet package "Hl7.Fhir.STU3" and want to use it to search for Healthcare Services as defined here: https://digital.nhs.uk/developer/api-catalogue/e-referral-service-fhir#api-Default-a010-patient-service-search.
So far I have this limited code and understand that I need to pass some form of search criteria, but I have no idea how to proceed. All I ever get back from the NHS client is:
"Root object has no type indication (resourceType) and therefore cannot be used to construct an FhirJsonNode. Alternatively, specify a nodeName using the parameter."
My code is:
var settings = new FhirClientSettings
{
    Timeout = 10,
    PreferredFormat = ResourceFormat.Json,
    PreferredReturn = Prefer.ReturnMinimal,
};
var client = new FhirClient("https://sandbox.api.service.nhs.uk/referrals/FHIR/STU3/HealthcareService/$ers.searchHealthcareServicesForPatient", settings);
client.RequestHeaders.Add("Authorization", "Bearer g1112R_ccQ1Ebbb4gtHBP1aaaNM");
client.RequestHeaders.Add("nhsd-ers-ods-code", "R69");
client.RequestHeaders.Add("nhsd-ers-business-function", "REFERRING_CLINICIAN");
client.RequestHeaders.Add("X-Correlation-Id", Guid.NewGuid().ToString());
var services = client.Search<HealthcareService>();
I would really appreciate any assistance.
The URL you have set as your FHIR server endpoint is actually the URL for the operation call, so that will not work. If you set the server URL to "https://sandbox.api.service.nhs.uk/referrals/FHIR/STU3/", you should be able to use the FhirClient to do an operation call:
// Point the FhirClient at the server base URL rather than the operation URL.
var client = new FhirClient("https://sandbox.api.service.nhs.uk/referrals/FHIR/STU3/", settings);
// Note that you have to send parameters in with your request, so set them up first
// ("params" is a reserved word in C#, so use another variable name):
var parameters = new Parameters();
parameters.Add("requestType", new Coding("https://fhir.nhs.uk/STU3/CodeSystem/eRS-RequestType-1", "APPOINTMENT_REQUEST"));
// etc...
var result = client.TypeOperation<HealthcareService>("ers.searchHealthcareServicesForPatient", parameters);
The $ sign in the original URL is not part of the operation name, so I have omitted it in the request; the FhirClient adds the $ to the outgoing request itself.
I'm currently trying to get a CCD from Cerner's CODE using the FHIR DocumentReference resource. In order to get a CCD, the $docref operation has to be passed in the URL (example below), but the FHIR library we are using doesn't let us use this operation.
http://fhir.cerner.com/millennium/dstu2/infrastructure/document-reference/#operation-docref
Any ideas? Has anyone been able to do this in C#?
You can use the official FHIR library for .NET, see https://github.com/FirelyTeam/fhir-net-api for the source, or add the Hl7.Fhir.Dstu2 library to your project through NuGet.
This library has a FhirClient that you can point at your endpoint and use to call the operation.
Here's how that could be achieved - values taken from the documentation you linked to:
var c = new FhirClient("https://fhir-open.cerner.com/dstu2/ec2458f2-1e24-41c8-b71b-0e701af7583d/");
var p = new Parameters();
p.Parameter.Add(new Parameters.ParameterComponent() { Name = "patient", Value = new FhirString("12724066") });
p.Parameter.Add(new Parameters.ParameterComponent() { Name = "type", Value = new CodeableConcept("http://loinc.org", "34133-9") });
var result = c.TypeOperation<DocumentReference>("docref", p, useGet: true);
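As a follow-up, the $docref operation is defined to return a searchset Bundle of DocumentReference resources, so the generic result usually needs to be cast before you can read the entries. A rough sketch of what that might look like, assuming the response matches the operation definition you linked:

// The operation result comes back as a generic Resource; per the operation
// definition it should be a Bundle whose entries are DocumentReference resources.
if (result is Bundle bundle)
{
    foreach (var entry in bundle.Entry)
    {
        if (entry.Resource is DocumentReference docRef)
        {
            // The generated CCD is referenced via the attachment in the content element.
            var attachment = docRef.Content.Count > 0 ? docRef.Content[0].Attachment : null;
            Console.WriteLine($"{docRef.Type?.Text}: {attachment?.Url}");
        }
    }
}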
I have implemented Swagger using Swashbuckle and MultipleApiVersions and it works like a charm. But I find it a bit ugly that the current setup requires an api-version request parameter. I assumed the version could be determined from the URL, e.g. /api/V1/Test.
How do I remove the api-version parameter and instruct swagger to base the version on the URL?
private static void SetupApiVersioningAndSwagger(IAppBuilder builder, AutofacWebApiDependencyResolver resolver)
{
    // we only need to change the default constraint resolver for services that want urls with versioning like: ~/v{version}/{controller}
    var constraintResolver = new DefaultInlineConstraintResolver() { ConstraintMap = { ["apiVersion"] = typeof(ApiVersionRouteConstraint) } };
    var configuration = new HttpConfiguration();
    configuration.DependencyResolver = resolver;
    var httpServer = new HttpServer(configuration);

    // reporting api versions will return the headers "api-supported-versions" and "api-deprecated-versions"
    configuration.AddApiVersioning(o =>
    {
        o.ReportApiVersions = true;
        o.DefaultApiVersion = new ApiVersion(1, 0);
        o.AssumeDefaultVersionWhenUnspecified = true;
    });

    configuration.MapHttpAttributeRoutes(constraintResolver);

    // add the versioned IApiExplorer and capture the strongly-typed implementation (e.g. VersionedApiExplorer vs IApiExplorer)
    // note: the specified format code will format the version as "'v'major[.minor][-status]"
    var apiExplorer = configuration.AddVersionedApiExplorer(
        options =>
        {
            options.GroupNameFormat = "'v'VVV";

            // note: this option is only necessary when versioning by url segment. the SubstitutionFormat
            // can also be used to control the format of the API version in route templates
            options.SubstituteApiVersionInUrl = true;
        });

    configuration.EnableSwagger(
        "{apiVersion}/swagger",
        swagger =>
        {
            // build a swagger document and endpoint for each discovered API version
            swagger.MultipleApiVersions(
                (apiDescription, version) => apiDescription.GetGroupName() == version,
                info =>
                {
                    foreach (var group in apiExplorer.ApiDescriptions)
                    {
                        var description = string.Empty;

                        if (@group.IsDeprecated)
                        {
                            description += "This API version has been deprecated.";
                        }

                        info.Version(@group.Name, $"Force Search API v{@group.ApiVersion}")
                            .Description(description);
                    }
                });

            swagger.UseFullTypeNameInSchemaIds();
        })
        .EnableSwaggerUi(swagger => swagger.EnableDiscoveryUrlSelector());

    builder.UseWebApi(httpServer);
}
The reason this is happening is because the default IApiVersionReader is a composition of both the query string and URL segment methods. This allows you to use either approach without any additional configuration. The reader implementations also describe where and how an API version is consumed so that it can be reported as a parameter. Since there are 2 readers configured at different locations, the result is 2 parameters. This is generally not a problem until you integrate OpenAPI/Swagger.
The default configuration looks like this:
configuration.AddApiVersioning(
    options => options.ApiVersionReader = ApiVersionReader.Combine(
        new QueryStringApiVersionReader(),
        new UrlSegmentApiVersionReader()));
Solution
Update your configuration as follows:
configuration.AddApiVersioning(
    options =>
    {
        options.ReportApiVersions = true;
        options.DefaultApiVersion = new ApiVersion(1, 0); // NOTE: already the default
        options.ApiVersionReader = new UrlSegmentApiVersionReader();
        options.AssumeDefaultVersionWhenUnspecified = true;
    });
Afterward, there will be only a single parameter. Since you've configured options.SubstituteApiVersionInUrl = true, the net result will be zero API version parameters because the value is baked directly into the generated API description URL.
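For completeness, URL segment versioning only takes effect when your route templates actually contain the version segment. A hedged sketch of what such a controller could look like; the controller, route and action names here are made up for illustration:

using Microsoft.Web.Http;   // ApiVersion attribute from the ASP.NET Web API versioning package
using System.Web.Http;      // ApiController, RoutePrefix, Route, IHttpActionResult

[ApiVersion("1.0")]
[RoutePrefix("api/v{version:apiVersion}/Test")]
public class TestController : ApiController
{
    // GET api/v1/Test
    [Route("")]
    public IHttpActionResult Get() => Ok("Hello from v1");
}

With SubstituteApiVersionInUrl = true, the generated Swagger document then lists the endpoint as api/v1/Test with no separate api-version parameter.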
How can I run a GlideRecord query from within a widget (Service Portal)?
This code runs fine in the script-background editor, but it doesn't work in the Server-script section of my widget:
var grTask = new GlideRecord('task');
grTask.get('number', "REQ0323232"); // hardcoded good sample
destination_sys_id = grTask.sys_id;
When I run the code in Scripts - Background, I get:
*** Script: sys_id: 0f4d[...]905
When I run it in the widget, I get: {}
To elaborate on my Widget code:
Body HTML template
data.destination_sys_id = {{data.destination_sys_id }}
Server script
(function(){
var destination_sys_id = "initialized";
var grTask = new GlideRecord('task');
grTask.get('number', "REQ0323232");
destination_sys_id = grTask.sys_id;
data.destination_sys_id = destination_sys_id;
})()
You can access it as c.data.destination_sys_id, and when assigning a GlideRecord field value to the data object, always use .toString() to stringify the field value.
I needed a .toString(); in my server script:
(function(){
var destination_sys_id = "initialized";
var grTask = new GlideRecord('task');
grTask.get('number', "REQ0323232");
destination_sys_id = grTask.sys_id.toString(); // <- note the addition of .toString()
data.destination_sys_id = destination_sys_id;
})()
Another way is to use grTask.getUniqueValue() instead of grTask.sys_id.toString().
Also, getters and setters are recommended with GlideRecord, so use grTask.getValue('sys_id') instead of grTask.sys_id.toString().
I am trying to import attachments/annotations into Dynamics CRM using the SDK.
I am not using the Data Import Wizard.
I am not individually creating Annotation entities; instead I am using the Data Import feature programmatically.
I mostly leveraged the DataImport sample from the SDK sample code (SDK\SampleCode\CS\DataManagement\DataImport).
Import import = new Import()
{
    ModeCode = new OptionSetValue((int)ImportModeCode.Create),
    Name = "Data Import"
};
Guid importId = _serviceProxy.Create(import);

_serviceProxy.Create(
    new ColumnMapping()
    {
        ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
        ProcessCode = new OptionSetValue((int)ColumnMappingProcessCode.Process),
        SourceEntityName = sourceEntityName,
        SourceAttributeName = sourceAttributeName,
        TargetEntityName = targetEntityName,
        TargetAttributeName = targetAttributeName
    });
I am getting the error "The reference to the attachment could not be found".
The documentation says the CRM async service will find the physical file on disk and upload it. My question is: where does the async service look for the attachment files?
I tried mapping the documentbody field to the full path of the attachment on disk, but that still didn't work.
The answer below was provided before the question was edited to clarify the use of the import wizard instead of the SDK; it is specific to using the SDK.
When you are attaching files to an Annotation (Note) record in CRM via the SDK, you do use the documentbody attribute (along with mimetype), but you first have to convert the file contents to Base64.
Something like this:
var myFile = @"C:\Path\To\My\File.pdf";
// Do checks to make sure file exists...
// Convert to Base64.
var base64Data = Convert.ToBase64String(System.IO.File.ReadAllBytes(myFile));
var newNote = new Entity("annotation");
// Set subject, regarding object, etc.
// Add the data required for a file attachment.
newNote.Attributes.Add("documentbody", base64Data);
newNote.Attributes.Add("mimetype", "text/plain"); // This mime type seems to work for all file types.
orgService.Create(newNote);
I found the solution in an obscure blog post. I think the documentation is misleading or unclear; given the way this actually works, the suggestion of having the files available on the server's disk for the async service to process is odd.
Following the same principle as the CSV file itself, all attachment contents have to be sent as import files and linked to the same import.
To solve this, we need to create an individual, special Internal ImportFile for each physical attachment and link it to the import that holds the attachment record details.
As shown below, by linking the attachment ImportFile via ImportId and then setting the two properties ProcessCode and FileTypeCode, it all worked in the end.
Suffice it to say, this method is much more efficient and quicker than individually creating Annotation records.
var idx = 0; // count of processed attachment rows
foreach (var line in File.ReadLines(csvFilesPath + "Attachment.csv").Skip(1))
{
    var fileName = line.Split(',')[0].Replace("\"", null);

    using (FileStream stream = File.OpenRead(attachmentsPath + fileName))
    {
        byte[] byteData = new byte[stream.Length];
        stream.Read(byteData, 0, byteData.Length);
        stream.Close();

        string encodedAttachmentData = System.Convert.ToBase64String(byteData);

        ImportFile importFileAttachment = new ImportFile()
        {
            Content = encodedAttachmentData,
            Name = fileName,
            ImportMapId = new EntityReference(ImportMap.EntityLogicalName, importMapId),
            UseSystemMap = true,
            ImportId = new EntityReference(Import.EntityLogicalName, importId),
            ProcessCode = new OptionSetValue((int)ImportFileProcessCode.Internal),
            FileTypeCode = new OptionSetValue((int)ImportFileFileTypeCode.Attachment),
            RecordsOwnerId = currentUserRef
        };

        _serviceProxy.Create(importFileAttachment);
    }

    idx++;
}