How to add our own metric expression in Kibana (5.1.1)? - elasticsearch

In the Metric aggregations we have Sum, Count, Avg, Min, Max, Unique Count, etc. by default. I want to add my own customized function, for example Sum/Unique Count. How can I implement it?

I guess it will require code changes. Currently all Kibana metrics are located here - https://github.com/elastic/kibana/tree/master/src/ui/public/agg_types/metrics. So you need to clone Kibana first, then add your_own_metric.js, which will be similar to the other built-in metrics. Later on, you need to register your metric in index.js, under https://github.com/elastic/kibana/blob/master/src/ui/public/agg_types/index.js, and hopefully, after you build Kibana, you will be able to use your custom version of it.
Some additional information - https://discuss.elastic.co/t/custom-metric-aggregation-plugin/70072/8

Related

Dynamic addition of fields in Vespa

In Elasticsearch, to add new fields while the application is running, we have to provide
"dynamic":true
More info about the same: https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic.html
Is there any functionality that replicates the same behaviour in Vespa? I was not able to find it in the Vespa documentation.
Kindly help me in this regard. Thank you.
https://docs.vespa.ai/en/schemas.html#schema-modifications is the best place to start - just modify the schema with new fields and redeploy the application. The new fields cannot have a default value; they start out empty. It is not necessary to restart Vespa; this can be done on a running instance.
Dynamic fields automatically created from data are not supported in Vespa. You should not use this anyway; it's a misfeature.
If the data in question is structured, you can often achieve what you need here by using a map. Otherwise, it's easy and safe to add new fields to the schema and redeploy.
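For illustration, here is a rough sketch of what that could look like in a schema file before redeploying the application package (the schema name and field names below are made up, not taken from the question):

schema storeapp {
    document storeapp {
        field title type string {
            indexing: summary | index
        }
        # a field added later - existing documents simply have it empty
        field category type string {
            indexing: attribute | summary
        }
        # a map can cover many "dynamic field" use cases for structured data
        field properties type map<string, string> {
            indexing: summary
            struct-field key   { indexing: attribute }
            struct-field value { indexing: attribute }
        }
    }
}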

Detect Spec update in the reconcile function

I am just starting out with Kubernetes and the Operator SDK, I am trying to build my first operator, and I probably have a simple question.
Question
How do I detect a configuration change in the custom resource YAML in the reconcile loop and take an action according to the change?
I have some config properties specified in my CR spec:
apiVersion: my.example.com/v1alpha1
kind: StoreApp
metadata:
  name: mystoreapp
spec:
  username: technicalUser
  password: abcd1234
  catalogs:
    - name: Bikes
      description: Bikes_description
    - name: Cars
      description: Cars_description
When I add a new custom resource of this kind, I want my controller to create a new pod with my app image running inside (in a webserver). When my app is up and running for the first time, I want to configure it (to add the catalogs from the spec) via an HTTP request from the operator.
So far so good, but I also want to change these catalogs while my app is up and running.
For example, I want to add a new catalog to the spec (through kubectl patch). My operator's reconcile method will be called, but how can I tell that the spec has changed? I am not sure it's a good idea to make HTTP calls to my app to get all the catalogs and compare them with the catalogs from the spec. Is this the correct way to detect that there is a change?
I am thinking about two other ways to find out that something was updated, but I am not sure whether they will work properly or whether they are the best way to do this.
The first idea is to request the StoreApp instance with client.Get(...), but as far as I understand this will call the API server and return the updated version of mystoreapp. I read about some local index which acts like a cache for these objects, and I could check whether there is a difference between the cached object and the object returned from the API server. But I did not find out how to get the object from this local index, so I was not able to compare the two objects.
The second idea is to create a map in which I store the hash of the whole spec object and to compare that hash every time with the hash of the object returned by client.Get(...). I think this will work, but there should be a better way to do it.
I looked at some Java operators for K8s and they had methods like onAdd, onUpdate, and onDelete. I couldn't find anything similar in the Operator SDK. Is there something like this in the Operator SDK?
Every answer will be helpful. Thank you in advance!
Best Regards,
Hristiyan
The recommended practice is to look at the spec you received, and compare it to the state of the world/cluster, so retrieving the catalogs and comparing them to the spec is indeed the proper way to do it.
The reasoning behind this recommendation is that the order of the events you get from Kubernetes is not guaranteed to be consistent, it's not guaranteed that you'll receive every event within a reasonable amount of time, and it's not even guaranteed that you'll receive each event only once. So it's best to base your decisions on the requested state (the spec) compared to the actual state, rather than on which specific event triggered the reconciliation.
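As a rough sketch of what that pattern looks like with controller-runtime (the StoreApp and Catalog types, the module path, and the fetchCatalogs/applyCatalogs helpers below are hypothetical placeholders, not part of the Operator SDK):

package controllers

import (
	"context"
	"reflect"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// hypothetical generated API package containing the StoreApp and Catalog types
	myv1alpha1 "github.com/example/storeapp-operator/api/v1alpha1"
)

type StoreAppReconciler struct {
	client.Client
}

func (r *StoreAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Re-read the desired state on every reconcile; which event triggered us is irrelevant.
	var app myv1alpha1.StoreApp
	if err := r.Get(ctx, req.NamespacedName, &app); err != nil {
		// The resource may have been deleted in the meantime - nothing to do then.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Observe the actual state: ask the running app which catalogs it currently has.
	actual, err := r.fetchCatalogs(ctx, &app)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Compare desired (spec) with actual and only act on the difference.
	if !reflect.DeepEqual(app.Spec.Catalogs, actual) {
		if err := r.applyCatalogs(ctx, &app); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}

// fetchCatalogs would call the app's HTTP API and return the catalogs it currently serves.
func (r *StoreAppReconciler) fetchCatalogs(ctx context.Context, app *myv1alpha1.StoreApp) ([]myv1alpha1.Catalog, error) {
	// ... GET http://<app-service>/catalogs and decode the response ...
	return nil, nil
}

// applyCatalogs would create/update/delete catalogs in the app until it matches app.Spec.Catalogs.
func (r *StoreAppReconciler) applyCatalogs(ctx context.Context, app *myv1alpha1.StoreApp) error {
	// ... POST/DELETE against the app's HTTP API ...
	return nil
}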

Is it possible to change ElasticSearch settings at runtime?

I want to set http.max_content_length at runtime. Is it possible, and how can one do it? And if it can be changed at runtime, can one also change publishing_port/host?
No, you can't. As per the Elasticsearch documentation, http.max_content_length is not dynamically updatable. For more details refer to this link.
Changes made in the YAML file (elasticsearch.yml) only get reflected on the node once you restart it.
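For comparison, a static setting like this one has to go into elasticsearch.yml and needs a node restart, while settings marked as dynamic can be changed on a live cluster via the cluster settings API (the values below are just examples):

# elasticsearch.yml - static, picked up only after restarting the node
http.max_content_length: 200mb

# a dynamic setting, by contrast, can be changed at runtime:
PUT _cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "50mb"
  }
}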

Deleting labels in Prometheus

I'm using Prometheus to do some monitoring, but I can't seem to find a way to delete labels I no longer want. I tried using the DELETE /api/v1/series endpoint, but it doesn't remove them from the dropdown list on the main Prometheus Graph page. Is there a way to remove them from the dropdown without restarting from scratch?
Thanks
This happens to me too; try including the metric name when querying for label values, like this:
label_values(node_load1, instance)
ref: http://docs.grafana.org/features/datasources/prometheus/
If you delete every relevant timeseries then it should no longer be returned. If this is not the case, please file a bug.
Prometheus doesn't provide the ability to delete particular labels, because this may result in duplicate time series with identical labelsets. For example, suppose Prometheus contains the following time series:
http_requests_total{instance="host1",job="foobar"}
http_requests_total{instance="host2",job="foobar"}
If the instance label is removed, these two time series become identical:
http_requests_total{job="foobar"}
http_requests_total{job="foobar"}
Now neither Prometheus nor the user can differentiate between these two time series.
Prometheus only provides an API for deleting whole time series matching a given series selector - see these docs for details.
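For reference, in Prometheus 2.x deleting whole series looks roughly like this (it requires starting Prometheus with --web.enable-admin-api; the label values are just examples):

curl -X POST -g 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]=http_requests_total{instance="host1",job="foobar"}'

# optionally reclaim the disk space afterwards
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'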

FHIR Search by Referencing Resources

Is there a way to search for a resource by its referencing resources? For example, is there a way to find all Observations of code = X with Provenance by agent Y?
GET [base]/Observation?code=X&???
One could:
GET [base]/Provenance?userid=Y&_include=Provenance:target:Observation
but that prevents any kind of filtering on Observation (which may create a volume problem in the response!). Also, I don't need the provenance resource - I just need to make sure that the Observations I'm using have a certain provenance.
Right now, to the best of my knowledge, there's no way to apply filters to multiple resources unless you're using _filter or using a custom OperationDefinition.