In Kedro, how to specify a layer in parameters.yml?

Currently, I'm using kedro and kedro-viz.
I can specify the layer of a dataset in catalog.yml:
hoge:
  type: MemoryDataSet
  layer: raw
but I don't know how to do the same in parameters.yml:
step_size: 1
learning_rate: 0.01
If it can't be done in parameters.yml but can be done in run.py, I'd like to see example code.

At the moment layers can only be specified for datasets, not for nodes or parameters.
If you have a specific use case for adding layers to nodes/parameters, please let us know by opening a feature request in the Kedro repo: https://github.com/quantumblacklabs/kedro/issues

Related

Detect Spec update in the reconcile function

I am just getting started with Kubernetes and the Operator SDK, I am trying to build my first operator, and I probably have a simple question.
Question
How do I detect a configuration change in the custom resource YAML in the reconcile loop, and take action according to the change?
I have some config properties specified in my CR Spec:
apiVersion: my.example.com/v1alpha1
kind: StoreApp
metadata:
  name: mystoreapp
spec:
  username: technicalUser
  password: abcd1234
  catalogs:
    - name: Bikes
      description: Bikes_description
    - name: Cars
      description: Cars_description
When I add a new custom resource of this kind, I want my controller to create a new pod with my app image running inside (in a webserver). When my app is up and running for the first time, I want to configure it (add the catalogs from the spec) via an HTTP request from the operator.
So far so good, but I also want to change these catalogs while my app is up and running.
For example, I want to add a new catalog to the spec (through kubectl patch). My operator's reconcile method will be called, but how can I tell that the spec has changed? I am not sure it's a good idea to make HTTP calls to my app to get all catalogs and compare them with the catalogs from the spec. Is this the correct way to detect a change?
I am thinking about two other ways to find out whether something was updated, but I am not sure they will work properly or that they are the best way to do this:
1. Request the instance of StoreApp with client.Get(...). As far as I understand, this calls the API server and returns the updated version of mystoreapp. I read about a local index which acts as a cache for these objects, so I could check whether there is a difference between the cached object and the object returned from the API server. But I did not find out how to get the object from this local index, so I was not able to compare the two objects.
2. Keep a map storing the hash of the whole spec object and compare that hash every time with the hash of the object obtained with client.Get(...). I think this would work, but there should be a better way to do it.
I also read some Java operators for K8s, which had methods like onAdd, onUpdate, and onDelete. I couldn't find anything similar in the Operator SDK. Is there anything like this in the Operator SDK?
Every answer will be helpful. Thank you in advance!
Best Regards,
Hristiyan
The recommended practice is to look at the spec you received and compare it to the state of the world/cluster, so retrieving the catalogs and comparing them to the spec is indeed the proper way to do it.
The reasoning for this recommendation is that the order of the events you get from Kubernetes is not guaranteed to be consistent, it is not guaranteed that you will receive every event in a reasonable amount of time, and it is not even guaranteed that you will receive each event only once. So it is best to base your decision making on what was requested as compared to what is, rather than on which specific event triggered the reconciliation.
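As a rough sketch of that level-based pattern in Go (assuming a recent controller-runtime style Reconcile signature; the storeclient package and its ListCatalogs/CreateCatalog helpers are hypothetical stand-ins for your app's HTTP API, not part of the Operator SDK):

package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	myv1alpha1 "my.example.com/storeapp/api/v1alpha1"  // hypothetical API package
	"my.example.com/storeapp/internal/storeclient"     // hypothetical HTTP client for the app
)

type StoreAppReconciler struct {
	client.Client
}

func (r *StoreAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Desired state: fetch the StoreApp spec as it is right now,
	// regardless of which event woke us up.
	app := &myv1alpha1.StoreApp{}
	if err := r.Get(ctx, req.NamespacedName, app); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Observed state: ask the running app which catalogs it actually has.
	actual, err := storeclient.ListCatalogs(ctx, app) // hypothetical HTTP call
	if err != nil {
		return ctrl.Result{}, err
	}

	// Converge: create whatever the spec wants that the app lacks.
	for _, want := range app.Spec.Catalogs {
		if !hasCatalog(actual, want.Name) {
			if err := storeclient.CreateCatalog(ctx, app, want); err != nil {
				return ctrl.Result{}, err
			}
		}
	}
	return ctrl.Result{}, nil
}

// hasCatalog reports whether a catalog with the given name is present.
func hasCatalog(names []string, name string) bool {
	for _, n := range names {
		if n == name {
			return true
		}
	}
	return false
}

The same comparison runs no matter which event (or how many coalesced events) triggered the reconciliation, which is exactly why dropped, reordered, or duplicated events don't matter.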

Extracting image dimensions in the background with Shrine

I have set up direct uploads to S3 with Shrine. This works great. Among others, I have the following plugins enabled:
Shrine.plugin :backgrounding
Shrine.plugin :store_dimensions
Shrine.plugin :restore_cached_data
Correct me if I'm wrong, but image dimension extraction appears to be done synchronously. If I let the user bulk-upload images via Uppy and then persist them all, this seems to take a long time.
What I'd like to do is perform image dimensions extraction asynchronously - I don't need the dimensions available for the cached file. If possible, I'd like to do that in the background when the file gets promoted to the store. Is there a way to do it?
The way I got this to work is by using the :refresh_metadata plugin instead of :restore_cached_data, which I used originally. Thanks to Janko for pointing me in the right direction.
Reading the source code provided some useful insights. The :store_dimensions plugin by itself doesn't extract dimensions - it adds width and height to the metadata hash so that when Shrine's base class requests metadata, they get extracted too.
With :restore_cached_data, this was happening on every assignment. Since :restore_cached_data uses :refresh_metadata internally, we can use that knowledge to call it only when the file is promoted to the store.
I have :backgrounding and :store_dimensions set up in the initializer so the final uploader can be simplified to this:
class ImageUploader < Shrine
  plugin :refresh_metadata
  plugin :processing

  process(:store) do |io, context|
    io.refresh_metadata!(context)
    io
  end
end
This way persisting data we get from Uppy is super fast and we let the background job extract dimensions when the file is promoted to the store, so they can be used later.
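For completeness, the initializer wiring mentioned above might look roughly like this (a sketch against the Shrine 2.x backgrounding API; PromoteJob is an assumed Sidekiq-style job class, not something Shrine provides):

# config/initializers/shrine.rb (sketch)
Shrine.plugin :backgrounding
Shrine.plugin :store_dimensions

# PromoteJob is a hypothetical background worker whose perform method
# calls Shrine::Attacher.promote(data) to move the file to the store.
Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }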
Finally, should you have questions related to Shrine, I highly recommend its dedicated Google Group. Kudos to Janko for not only creating an amazing piece of software (seriously, go read the source), but also for his dedication to supporting the community.

How to add our own metric expression in Kibana (5.1.1)?

In the metric aggregations we have Sum, Count, Average, Min, Max, Unique Count, etc. by default. I want to add my own customized function, for example sum/unique_count. How can I implement it?
I guess it will require code changes. Currently, all Kibana metrics are located here: https://github.com/elastic/kibana/tree/master/src/ui/public/agg_types/metrics. So you need to clone Kibana first, then add your_own_metric.js, which will be similar to the other built-in metrics. After that, you need to register your metric in index.js, under https://github.com/elastic/kibana/blob/master/src/ui/public/agg_types/index.js, and hopefully after you build Kibana you will be able to use your custom version of it.
Some additional information - https://discuss.elastic.co/t/custom-metric-aggregation-plugin/70072/8
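As a rough illustration of the shape such a file takes (a sketch modeled on the built-in metric modules in the Kibana 5.x source tree; the exact import path, export style, and provider naming may differ between minor versions, and your_own_metric is a made-up name):

// src/ui/public/agg_types/metrics/your_own_metric.js (sketch only)
// Modeled on built-ins like avg.js; verify the import path and export
// style against your Kibana 5.1.1 checkout.
import AggTypesMetricsMetricAggTypeProvider from 'ui/agg_types/metrics/metric_agg_type';

export default function AggTypesMetricsYourOwnMetricProvider(Private) {
  const MetricAggType = Private(AggTypesMetricsMetricAggTypeProvider);

  return new MetricAggType({
    name: 'your_own_metric',        // id stored in saved visualizations
    title: 'Sum per Unique Count',  // label shown in the metric dropdown
    makeLabel: function (aggConfig) {
      return 'Sum per unique count of ' + aggConfig.getFieldDisplayName();
    },
    params: [
      { name: 'field' }             // let the user pick the field to aggregate
    ]
  });
}

You would then import this provider in index.js and register it alongside the built-in metrics so the visualization editor picks it up.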

FHIR Search by Referencing Resources

Is there a way to search for a resource by its referencing resources? For example, is there a way to find all Observations of code = X with Provenance by agent Y?
GET [base]/Observation?code=X&???
One could:
GET [base]/Provenance?userid=Y&_include=Provenance:target:Observation
but that prevents any kind of filtering on Observation (which may create a volume problem in the response!). Also, I don't need the provenance resource - I just need to make sure that the Observations I'm using have a certain provenance.
Right now, to the best of my knowledge, there's no way to apply filters to multiple resources in one query unless you're using _filter or a custom OperationDefinition.

EPiServer - make a content area specific to a block type

I would like to achieve the following:
Build a page type which has 3 different ContentAreas, such that the user can put only a specific block type in each of these areas.
For example, ContentArea1 can only accept blocks of type "BlockType1", ContentArea2 can only accept "BlockType2", and so on. (It doesn't need to be generic; I can specify, hard-coded, which type should fit in each content area.)
Is this possible to achieve?
Maybe there is another way?
(I know you can create a property with the block type, but I want to use the same block in different places.)
P.S. I'm using EPiServer 8.
From version 8.0 of EPiServer there is better support for AllowedTypes.
The feature was also available before version 8, but was more limited.
In short, you decorate your ContentArea property with the AllowedTypes attribute and EPiServer takes care of the rest.
Read more about it here:
http://world.episerver.com/blogs/Ben-McKernan/Dates/2015/2/the-new-and-improved-allowed-types/
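A minimal sketch of what this can look like on the page type (the page/block type names and GUID below are placeholders, not anything EPiServer ships):

using EPiServer.Core;
using EPiServer.DataAnnotations;

[ContentType(DisplayName = "My Page", GUID = "11111111-2222-3333-4444-555555555555")] // placeholder GUID
public class MyPage : PageData
{
    // Editors can only drop BlockType1 blocks into this area.
    [AllowedTypes(typeof(BlockType1))]
    public virtual ContentArea ContentArea1 { get; set; }

    [AllowedTypes(typeof(BlockType2))]
    public virtual ContentArea ContentArea2 { get; set; }

    // Several types can be allowed at once if needed:
    [AllowedTypes(typeof(BlockType3), typeof(BlockType4))]
    public virtual ContentArea ContentArea3 { get; set; }
}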
