HAPI FHIR JSON encoder not handling contained resources

I have a FHIR Device resource that contains a FHIR DeviceComponent resource. I use the following HAPI FHIR code to 'insert' one resource into the other:
protected static void insertResourceInResouce(BaseResource resource, BaseResource resourceToInsert)
{
    ContainedDt containedDt = new ContainedDt();
    ArrayList<IResource> resourceList = new ArrayList<IResource>();
    resourceList.add(resourceToInsert);
    containedDt.setContainedResources(resourceList);
    resource.setContained(containedDt);
}
According to the Eclipse debugger, the insertion works fine. The resource, together with its contained resource, is then added to a bundle. When all the work is done, the Eclipse debugger shows the resource with the contained resource properly placed in the bundle. However, when generating a JSON string, the contained resources are not there. The encoding operation is as follows:
return fhirContext.newJsonParser().setPrettyPrint(true)
        .encodeResourceToString(bundle);
Any ideas what I am doing wrong?

It turns out that one must reference the contained resource from the parent resource, using "#" as a prefix on the reference. If one does that, the contained resource will be present in the XML and JSON output.
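For reference, here is a minimal sketch of that pattern using HAPI's DSTU2 model classes. Patient/Organization simply stand in for any parent/contained pair; with a Device and DeviceComponent you would set the local id on the DeviceComponent and point whichever reference element (or extension) you use at it:

// Give the contained resource a local id, add it to the parent's contained
// list, and reference it with the "#" prefix so the encoder emits it.
Organization org = new Organization();
org.setName("Contained Test Organization");
org.setId("#localOrganization");

Patient patient = new Patient();
patient.getContained().getContainedResources().add(org);
patient.getManagingOrganization().setReference("#localOrganization");

Alternatively, calling setResource(org) on the reference lets HAPI contain the resource and assign the local id automatically when encoding.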
Admittedly this requirement makes no sense to me. Why would I include a resource INSIDE another scoping resource if I did not think it was important?

Related

Cloudinary Error: {"error":{"message":"Missing required parameter - timestamp"}}

I am trying to use Cloudinary's .downloadMulti(String tag, Map options) to generate a URL to download multiple images as a zip with the same tag. I am generating the URL seemingly just fine, but when I go to the URL, I am met with {"error":{"message":"Missing required parameter - timestamp"}}.
I've researched a bit and saw that I need to sign the request, but the error isn't saying the signature is missing - just the timestamp. I believe the request is already being signed; I just need a proper timestamp. I believe it needs to be set in the constructor, but when I call Util.timestamp() it isn't recognized as a reference.
My Cloudinary initializer:
private final Cloudinary cloudinary = new Cloudinary(ObjectUtils.asMap(
        "cloud_name", "dxoa7bbix",
        "api_key", "161649288458746",
        "api_secret", "..."));
My upload method:
public Photo uploadOrderImage(String imageURL, String publicId, Order order, String photoType) throws IOException {
    Map result = cloudinary.uploader().upload(new File(imageURL), ObjectUtils.asMap(
            "public_id", publicId,
            "tags", order.getId().toString()));
    Photo sellOrderPhoto = new Photo(
            result.get("secure_url").toString(),
            photoType,
            order
    );
    return photoRepository.save(sellOrderPhoto);
}
This is my download method:
public String downloadPhotos(String tag) throws IOException {
    return cloudinary.downloadMulti(tag, ObjectUtils.asMap(
            "tags", tag
    ));
}
An example URL my download method returns: Generated URL: https://api.cloudinary.com/v1_1/dxoa7bbix/image/multi?mode=download&async=false&signature=5f5da549fc78ea3fd50f034cdc76e2cce3089d48&api_key=161649288458746&tag=137×tamp=1638583257.
Overall, I think the problem is the lack of a timestamp. If you have any ideas, that would be great!
The reason for the error is that the parameter is expected to be called timestamp but based on the URL you shared, it is actually ×tamp.
If you want to generate a URL to a ZIP file containing assets that share a particular tag, then you will want to use the generate_archive method rather than multi, which provides different functionality.
If you replace the following code:
return cloudinary.downloadMulti(tag, ObjectUtils.asMap(
        "tags", tag
));
With:
return cloudinary.downloadZip(ObjectUtils.asMap(
        "tags", tag,
        "resource_type", "image"));
That will generate and return a URL to a ZIP file; the archive is created when the URL is accessed and contains the images from your cloud that carry the tag you specified.
The Cloudinary Java SDK handles signature and timestamp generation automatically for all of its built-in methods, so you don't need to change the SDK code or calculate the signature yourself. Signature generation is only necessary if you're not using one of the SDKs and are instead integrating with the Cloudinary API through your own custom code - for example, in a language for which a Cloudinary SDK doesn't yet exist. In such cases, if you want to perform authenticated API calls, you will need to generate the authentication signatures yourself.
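For completeness, here is a rough sketch of that manual signing scheme, based on Cloudinary's documented approach (the helper class and parameter map are illustrative): sort the parameters to be signed alphabetically, serialize them as key=value pairs joined by &, append the api_secret, and hex-encode the SHA-1 digest. The timestamp (Unix time in seconds) must be among the signed parameters.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

public class CloudinarySignature {

    // Returns the hex-encoded SHA-1 of "key1=value1&key2=value2...<api_secret>",
    // with the keys sorted alphabetically (the scheme Cloudinary documents for
    // manual, non-SDK integrations).
    public static String sign(Map<String, String> paramsToSign, String apiSecret) throws Exception {
        StringBuilder toSign = new StringBuilder();
        for (Map.Entry<String, String> entry : new TreeMap<>(paramsToSign).entrySet()) {
            if (toSign.length() > 0) {
                toSign.append('&');
            }
            toSign.append(entry.getKey()).append('=').append(entry.getValue());
        }
        toSign.append(apiSecret);

        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest(toSign.toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}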

How to retrieve the XPC service of a file provider extension on macOS?

I have extended my example project from my previous question with an attempt to establish an XPC connection.
In a different project we have successfully implemented a File Provider extension for iOS. The exposed service must be resolved through the URLs it is responsible for; on iOS that is the only possibility, and on macOS it appears to be the same. Because on macOS the system takes care of managing the files, there is no URL available other than the one that can be resolved through NSFileProviderItemIdentifier.rootContainer.
In the AppDelegate.didFinishLaunching() method I try to retrieve the service like this (see the linked code for the full reference; I do not want to unnecessarily bloat this question for now):
let fileManager = FileManager.default
let fileProviderManager = NSFileProviderManager(for: domain)!
fileProviderManager.getUserVisibleURL(for: NSFileProviderItemIdentifier.rootContainer) { url, error in
    // [...]
    fileManager.getFileProviderServicesForItem(at: url) { list, error in
        // list always contains 0 items!
    }
}
The delivered list is always empty. However, on initialization the extension creates a service source, which creates an NSXPCListener whose NSXPCListenerDelegate exports the NSFileProviderReplicatedExtension object on new connections. What am I missing?
func listener(_ listener: NSXPCListener, shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
    os_log("XPC listener delegate should accept new connection...")
    newConnection.exportedObject = fileProviderExtension
    newConnection.exportedInterface = NSXPCInterface(with: SomeProviderServiceInterface.self)
    newConnection.remoteObjectInterface = NSXPCInterface(with: SomeProductServiceInterface.self)
    newConnection.resume()
    return true
}
Suspicious: the serviceName of the FileProviderServiceSource is never queried. We are out of ideas as to why this is not working.
There is a protocol, NSFileProviderServicing, which your extension's principal class can implement:
https://developer.apple.com/documentation/fileprovider/nsfileproviderservicing
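A minimal sketch of adopting it, assuming your principal class is the NSFileProviderReplicatedExtension shown above and that it keeps a reference to the service source it creates on initialization (the names FileProviderExtension and serviceSource are illustrative):

import FileProvider
import Foundation

// Sketch: let the system ask the extension for its service sources directly,
// instead of relying on a URL-based getFileProviderServicesForItem lookup.
extension FileProviderExtension: NSFileProviderServicing {
    func supportedServiceSources(for itemIdentifier: NSFileProviderItemIdentifier,
                                 completionHandler: @escaping ([NSFileProviderServiceSource]?, Error?) -> Void) -> Progress {
        // Hand back the service source created in init; the system then queries
        // its serviceName and calls makeListenerEndpoint() when the app requests
        // the service for an item.
        completionHandler([serviceSource], nil)
        let progress = Progress(totalUnitCount: 1)
        progress.completedUnitCount = 1
        return progress
    }
}

With this in place, the app-side getFileProviderServicesForItem(at:) call should start returning the service for items in the domain.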

How to handle weird API flow with implicit create step in custom terraform provider

Most Terraform providers follow a predefined flow: Create/Read/Update/Delete/Exists.
I am in a weird situation developing a provider against an API where this behavior diverges a bit.
There are two kinds of resources, Host and Scope. A host can have many scopes. Scopes are updated with configurations.
This generally fits well into the Terraform flow; a full CRUDE lifecycle is possible - except in one instance.
When a new Host is made, it automatically has a default scope attached to it. That scope is always there and cannot be deleted.
I can't figure out how to have my provider handle this gracefully. I would want Terraform to treat it like any other resource, but it has no explicit CREATE/DELETE, only READ/UPDATE/EXISTS - while every other scope attached to the host would have CREATE/DELETE.
Importing is not an option due to density; requiring an import for every host would render the entire thing pointless.
I originally was going to attempt to split Scopes and Configurations into separate resources, so one could be fulfilled by the Host (the host providing the scope ID for a configuration, and the other configurations getting their scope IDs from a scope resource).
However, this approach falls apart because the API for both is the same, unless I wanted to add the abstraction of creating an empty scope and then applying a configuration against it, which may not be fully supported. It would essentially be two resources controlling one resource, which could lead to dramatic conflicts.
A paraphrased example of an execution I thought about implementing:
resource "host" "test_integrations" {
name = "test.integrations.domain.com"
account_hash = "${local.integrationAccountHash}"
services = [40]
}
resource "configuration" "test_integrations_root_configuration" {
name = "root"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations.root_scope_id}"
hostnames = ["test.integrations.domain.com"]
}
resource "scope" "test_integrations_other" {
account_hash = "${local.integrationAccountHash}"
host_hash = "${host.test_integrations.id}"
path = "/non/root/path"
name = "Some Other URI Path"
}
resource "configuration" "test_integrations_other_configuration" {
name = "other"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations_other.id}"
}
In this example flow, a configuration resource and a scope resource unfortunately point to the same underlying object, which I worry would cause conflicts or confusion over who is responsible for what, and it dramatically confuses the create/delete lifecycle.
I also can't figure out how the Terraform lifecycle would allow for a resource that would only UPDATE/READ/EXISTS if, say, a flag was given (and how state would handle that).
An alternative would be to have just a Configuration resource, but then if it were the root configuration it would need to skip create/delete, as it is inherently tied to the host.
Ideally I'd be able to handle this situation gracefully. I am trying to avoid including the root scope/configuration in the host definition as it would create a split in how they are written and handled.
The documentation for providers implies you can use a resource AS a schema object in a resource, but does not explain how or why. If it works the way I imagine, it might be possible to create a resource that is only used to inject into the host - but I don't know whether that is how it works, or, if it is, how to accomplish it.
I believe I have tentatively found a solution after asking some folks on the Gophers Slack.
Using the AWS Provider's Default VPC resource as a reference, I can "clone" the resource into one with a custom Create/Delete lifecycle.
Loose Example:
func defaultResourceConfiguration() *schema.Resource {
    drc := resourceConfiguration()
    drc.Create = resourceDefaultConfigurationCreate
    drc.Delete = resourceDefaultConfigurationDelete
    return drc
}

func resourceDefaultConfigurationCreate(d *schema.ResourceData, m interface{}) error {
    // double check it exists and update the resource instead
    return resourceConfigurationUpdate(d, m)
}

func resourceDefaultConfigurationDelete(d *schema.ResourceData, m interface{}) error {
    log.Printf("[WARN] Cannot destroy Default Scope Configuration. Terraform will remove this resource from the state file, however resources may remain.")
    return nil
}
This should allow me to provide an identical resource that is designed to interact with the already existing one created by its parent host.
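To complete the picture, a loose sketch of how the cloned resource could be registered alongside the regular one in the provider, mirroring the way the AWS provider exposes aws_default_vpc next to aws_vpc (the resource names and surrounding functions are illustrative, and the schema package is the same one used in the example above):

// Import path depends on your SDK version, e.g.
// "github.com/hashicorp/terraform-plugin-sdk/helper/schema".
func Provider() *schema.Provider {
    return &schema.Provider{
        ResourcesMap: map[string]*schema.Resource{
            "example_host":                  resourceHost(),
            "example_scope":                 resourceScope(),
            "example_configuration":         resourceConfiguration(),
            "example_default_configuration": defaultResourceConfiguration(),
        },
    }
}

Users would then declare an example_default_configuration block for the scope the host creates implicitly, and ordinary example_scope/example_configuration blocks for everything else.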

Watch for updated properties in Wicket

In my current project we need to implement a way for the texters to manage the Wicket messages/internationalization by uploading property files.
Also see this question: Administrating internationalized wicket applications
As suggested there, I've implemented a custom IStringResourceLoader and added it at the beginning of the StringResourceLoader list to override any properties already in place:
getResourceSettings().getStringResourceLoaders().add(0, new CustomStringResourceLoader());
This, however, is not enough, because updates can happen at runtime and need to be loaded on the fly. String resources are cached by Wicket and updated only when the resource watcher is triggered.
I found where Wicket adds the string resources to the watcher: the PropertiesFactory in the settings. The method that adds a resource to the watcher is addToWatcher(...). However, this method is protected, and the whole setup suggests it is intended for development purposes rather than production.
I managed to use this method by extending PropertiesFactory and effectively creating a custom version to add to settings:
getResourceSettings().setPropertiesFactory(new CustomPropertiesFactory(getResourceSettings()));
getResourceSettings().setResourcePollFrequency(Duration.seconds(1));
So my question is: this feels like quite a circuitous solution. Is there another way to watch for changing properties files?
My solution to the problem:
getResourceSettings().getStringResourceLoaders().add(0, new CustomResourceLoader());
getResourceSettings().getResourceFinders().add(new Path("/pathToResources"));
getResourceSettings().setResourcePollFrequency(Duration.seconds(1));
This inserts my CustomResourceLoader at the beginning of the list so all properties are first checked there.
The added Path tells the PropertiesFactory to look for resources in a given arbitrary directory outside of the Wicket application.
I needed custom names for my resource files as well, which I realized in the CustomResourceLoader:
public String loadStringResource(final Class<?> clazz, final String key, final Locale locale, final String style, final String variation) {
    final String myResourceFilename = createCustomResourceFileName(locale);
    final IPropertiesFactory pF = Application.get().getResourceSettings().getPropertiesFactory();
    final org.apache.wicket.resource.Properties props = pF.load(clazz, myResourceFilename);
    ...
}
When using the PropertiesFactory to load the files, it adds them to the internal IModificationWatcher automatically.
It turns out that part of the problem was that the resource files are in a non-standard encoding. This can be fixed by adding a special IPropertiesLoader to the PropertiesFactory in the settings:
((PropertiesFactory) getResourceSettings().getPropertiesFactory()).getPropertiesLoaders().add(0,
        new UtfPropertiesFilePropertiesLoader("properties", "your-favorite-encoding"));

Extending a class makes an Ajax call

I am very new to ExtJS 4 and I have a problem extending a class. I have these files:
UsersWindow.js
Ext.define('MyDesktop.UserModel', {
    extend: 'Ext.data.Model',
    /* ... */
});

Ext.define('MyDesktop.UsersWindow', {
    /* ... */
});
DealersWindow.js
Ext.define('MyDesktop.DealerModel', {
    extend: 'MyDesktop.UserModel',
    /* ... */
});

Ext.define('MyDesktop.DealersWindow', {
    extend: 'MyDesktop.UsersWindow',
    /* ... */
});
When I run the application, everything works as expected; however, I see this Ajax call:
/UserModel.js?_dc=1379135132790
It returns a 404, followed by an error parsing the JavaScript. I want to understand why ExtJS is making this call. Does every class have to be in its own file? Can I configure ExtJS not to look for this file?
Thank you for your help.
Yes, every class has to be in its own file for the Ext Loader to work properly. Furthermore, the name of the file should be the same as the class, and its path should match the namespace.
What happens here is that the extend: 'MyDesktop.UserModel' line prompts the Ext.Loader to load the UserModel class, which it expects to find in a UserModel.js file under the root path for the MyDesktop namespace - a path which seems to be configured as the root directory of the application...
You can configure root source directories for different namespaces with the paths property of the Loader, or the one on the Application.
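A minimal sketch of that Loader configuration, assuming the MyDesktop sources live in an app/ directory (adjust the path to your project layout):

// Map the MyDesktop namespace to a source directory so the Loader finds
// MyDesktop.UserModel in app/UserModel.js instead of the application root.
Ext.Loader.setConfig({
    enabled: true,
    paths: {
        'MyDesktop': 'app'
    }
});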
If you want to prevent Ext from trying to load this file, you will have to disable the Loader, and then you will have to include all your source files as <script> tags in your HTML. I think this will also make it impossible to compile a production build of the application later.
Alternatively, you could put your definition of MyDesktop.UserModel above the definition of MyDesktop.DealerModel, either in the same file or in a file loaded before it in the script tags. Note that even this may not work, because the requires option in your class definitions may change the order in which the class definitions are actually executed. Then again, that will probably break the build process eventually...
In short, you should not try to work against the framework's expectations. Especially when you consider that the 1:1 mapping between class names and the file system is standard practice in just about every OO language out there...
