I want to automate the creation of a set of visualizations for new Kibana/Elasticsearch installations.
So I need to know whether this can be automated, independent of the programming language.
There are no APIs yet in Kibana to manage the searches, visualizations and dashboards. Some feature requests have been suggested (here and here) but they are still being discussed.
However, since Kibana visualizations are stored in the .kibana index with the visualization mapping type, you can definitely GET them, learn how they are built, modify them and PUT them again.
For a visualization named "Top consumers by country", you can get the visualization spec using
curl -XGET http://localhost:9200/.kibana/visualization/Top-consumers-by-country
You'll get a document containing the title of your visualization, a field called visState containing the specification of your visualization (obviously different for each visualization), and finally a field named kibanaSavedObjectMeta which contains the Elasticsearch query and index details.
You can also view/edit/export the same data in Settings > Objects > Visualizations.
Mostly I assemble the mapping by hand, choosing the correct types myself.
Is there any tool that facilitates this?
For example, one that reads a class (C#, Java, etc.) and chooses the closest Elasticsearch types accordingly.
I've never seen such a tool; however, Elasticsearch has a REST API over HTTP.
So you can create a simple HTTP request with a JSON body that depicts your object with its fields: field names plus types (strings, numbers, booleans), pretty much like the Java/C# class you described in the question.
Then you can ask Elasticsearch to store that document in a non-existing index (to "index" your document, in ES terms). It will index the document, but it will also create the index and, most importantly for your question, create a mapping for you "dynamically", so that later you will be able to query the mapping structure (again via REST).
Here is the link to the relevant chapter about dynamically created mappings in the ES documentation.
And here you can find the API for querying the mapping structure.
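As a rough sketch of the tool the question asks for, here is a minimal Python example that derives an Elasticsearch mapping from a class's type annotations. The type choices (e.g. str mapped to "keyword") are assumptions to adjust per use case:

```python
from dataclasses import dataclass, fields
from datetime import datetime

# Rough correspondence between Python types and Elasticsearch field types.
# These choices are assumptions; e.g. use "text" instead of "keyword"
# for fields you want full-text analyzed.
ES_TYPES = {
    str: "keyword",
    int: "long",
    float: "double",
    bool: "boolean",
    datetime: "date",
}

def class_to_mapping(cls):
    """Build an Elasticsearch mapping dict from a dataclass's annotations."""
    props = {}
    for f in fields(cls):
        props[f.name] = {"type": ES_TYPES.get(f.type, "keyword")}
    return {"properties": props}

# Hypothetical class, analogous to the Java/C# class in the question.
@dataclass
class Consumer:
    name: str
    age: int
    active: bool

mapping = class_to_mapping(Consumer)
```

The resulting dict can be serialized with `json.dumps` and PUT to the index's mapping endpoint.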
At the end of the day you'd still want to retain some control over how your mapping is generated. I'd recommend:
1. syncing some sample documents without a mapping,
2. investigating what mapping was auto-generated, and
3. dropping the index and using dynamic_templates to pseudo-auto-generate / update the mapping as new documents come in.
This GUI could help too.
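For the dynamic_templates step, here is a sketch of a mapping that forces auto-detected strings to not_analyzed. The index, type, and template names are illustrative, and the exact syntax depends on your Elasticsearch version:

```
PUT /my_index
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}
```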
Currently, there is no such tool available to generate the mapping for Elasticsearch.
It is similar to having to design a database schema in MySQL.
If you want to avoid a predefined schema altogether, MongoDB is often chosen because it requires none.
But Elasticsearch comes with a very dynamic feature of its own: one of its most important features is that it tries to get out of your way and let you start exploring your data as quickly as possible, much like a Mongo schema, which can be manipulated dynamically.
To index a document, you don't need to first define a mapping or schema and declare your fields along with their data types.
You can just index a document, and the index, type, and fields will be created automatically.
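For illustration (index and field names here are invented), indexing a single document into a fresh index:

```
POST /products/product/1
{ "name": "laptop", "price": 799.99, "in_stock": true }

# Elasticsearch then auto-generates a mapping along these lines
# (exact types vary by version):
# "product": {
#   "properties": {
#     "name":     { "type": "string"  },
#     "price":    { "type": "double"  },
#     "in_stock": { "type": "boolean" }
#   }
# }
```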
For further details you can go through the below documentation:
Elastic Dynamic Mapping
Kibana's UI allows the user to create a scripted field which is stored as part of the index (screenshot below). How can that be done programmatically? In particular, using either the NEST client or the Elasticsearch low-level client.
Kibana UI for the Indice with the Scripted Fields tab highlighted
Note that I am not asking how to add an expression/script field as part of a query; I'm specifically looking for how to add it as part of the index when the mapping is created, so that queries can reference it without having to explicitly include it.
Kibana dashboards are stored in the .kibana index. To export dashboards, you can query the Kibana index as you would any other index. For example, curl -XGET 'http://localhost:9200/.kibana/dashboard/_search?pretty' shows the JSON for your dashboards (note the quotes around the URL so the shell doesn't mangle it). You could export the template, add the scripted field to the JSON, and then POST it again. Since Kibana uses a standard Elasticsearch index, the normal Elasticsearch API applies to modifying Kibana dashboards. This may provide a little more clarification.
At the time of writing, the current version (5.2) does not have an official way to do this.
This is how I do it:
Get index fields: GET /.kibana/index-pattern/YOUR_INDEX
Add your scripted field to _source.fields (as a string; note the escaped quotation marks)
"fields":"[...,{\"name\":\"test\",\"type\":\"number\",\"count\":0,\"scripted\":true,\"script\":\"doc['area_id'].value\",\"lang\":\"painless\",\"indexed\":false,\"analyzed\":false,\"doc_values\":false,\"searchable\":true,\"aggregatable\":true}]"
Post back _source json to /.kibana/index-pattern/YOUR_INDEX
{
"title":"YOUR_INDEX",
"timeFieldName":"time",
"fields":"[...,{\"name\":\"test\",...}]"
}
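The edit step above can be scripted. Here is a minimal Python sketch that appends a scripted field to the stringified fields array (the field definition mirrors the example above; posting the result back to /.kibana/index-pattern/YOUR_INDEX is left to your HTTP client of choice):

```python
import json

def add_scripted_field(fields_json, name, script, es_type="number"):
    """Append a Kibana scripted field to an index pattern's 'fields' value.

    `fields_json` is the string stored in _source.fields: a JSON array
    encoded as a string, which is why the raw document shows escaped quotes.
    """
    fields = json.loads(fields_json)
    fields.append({
        "name": name,
        "type": es_type,
        "count": 0,
        "scripted": True,
        "script": script,
        "lang": "painless",
        "indexed": False,
        "analyzed": False,
        "doc_values": False,
        "searchable": True,
        "aggregatable": True,
    })
    return json.dumps(fields)

# Example: add the "test" field from the answer above.
updated = add_scripted_field('[{"name":"time","type":"date"}]',
                             "test", "doc['area_id'].value")
```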
I was going to post a question about the automatic (i.e. scripted) creation of visualizations in Kibana, BUT I found the solution while writing it ;) Let me explain :
I was somewhat blocked because I couldn't find anything on the web about it, and the Elasticsearch documentation doesn't explain it.
In a few words, the overall project I'm trying to build is a script that automatically creates a new dashboard for a specific user (only the search changes from user to user).
More precisely, since a dashboard is made of visualizations, we need to create the visualizations first. I was blocked at this step: creating the visualization directly via a curl -XPOST request to Elasticsearch.
Because in order to build the final dashboard, we need to :
Write the search(es)
Create the visualization(s)
Create the dashboard made of visualizations
Schematically :
Input : Username --> Myscript --> Dashboard of the user
To make things clear, a visualization is just a JSON document stored in Elasticsearch at the path (in my case, but surely in yours too) :
'http://localhost:9200/.kibana/visualization/*'
So it's simple: add a new visualization the same way you would add any other document through the Elasticsearch API.
To find out what the visualization you want to create should look like, you can first build it in the Kibana web interface; once you validate it and it's added to Elasticsearch, you can inspect the resulting Elasticsearch document.
Let's say we created a Pie Chart named "Test1" using Kibana : we request it from Elasticsearch to see the document.
curl -XGET 'http://localhost:9200/.kibana/visualization/Test1'
You should have this kind of result :
{"_index":".kibana","_type":"visualization","_id":"Test1","_version":1,"found":true,"_source":{"title":"Test1","visState":"{\"aggs\":[{\"id\":\"1\",\"params\":{},\"schema\":\"metric\",\"type\":\"count\"},{\"id\":\"2\",\"params\":{\"field\":\"type.raw\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":10},\"schema\":\"segment\",\"type\":\"terms\"}],\"listeners\":{},\"params\":{\"addLegend\":true,\"addTooltip\":true,\"isDonut\":false,\"shareYAxis\":true},\"title\":\"New Visualization\",\"type\":\"pie\"}","uiStateJSON":"{}","description":"","version":1,"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"index\":\"logstash-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"YOUR KIBANA DISCOVER SEARCH HERE\"}},\"filter\":[]}"}}}
So, then, we want to create a similar document directly in Elasticsearch, without using Kibana, as we would in a script. The query looks like this :
curl -XPOST 'http://localhost:9200/.kibana/visualization/Test2' -d '{"title":"Test2","visState":"{\"aggs\":[{\"id\":\"1\",\"params\":{},\"schema\":\"metric\",\"type\":\"count\"},{\"id\":\"2\",\"params\":{\"field\":\"type.raw\",\"order\":\"desc\",\"orderBy\":\"1\",\"size\":10},\"schema\":\"segment\",\"type\":\"terms\"}],\"listeners\":{},\"params\":{\"addLegend\":true,\"addTooltip\":true,\"isDonut\":false,\"shareYAxis\":true},\"title\":\"New Visualization\",\"type\":\"pie\"}","uiStateJSON":"{}","description":"","version":1,"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"index\":\"logstash-*\",\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"YOUR KIBANA DISCOVER SEARCH HERE\"}},\"filter\":[]}"}}'
That's it ! You can check the Pie Chart has been well created in Kibana ;)
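If you script this, the only fiddly part is that visState and searchSourceJSON are themselves JSON encoded as strings inside the outer JSON. A minimal Python sketch that builds such a document (the pie-chart parameters mirror the example above; the per-user query value is illustrative):

```python
import json

# Base pie-chart definition, matching the visState from the curl example.
BASE_VIS_STATE = {
    "title": "",
    "type": "pie",
    "params": {"addLegend": True, "addTooltip": True,
               "isDonut": False, "shareYAxis": True},
    "aggs": [
        {"id": "1", "type": "count", "schema": "metric", "params": {}},
        {"id": "2", "type": "terms", "schema": "segment",
         "params": {"field": "type.raw", "order": "desc",
                    "orderBy": "1", "size": 10}},
    ],
    "listeners": {},
}

def build_visualization(title, query, index="logstash-*"):
    """Build the _source body for a .kibana visualization document.

    Note the double encoding: visState and searchSourceJSON are
    serialized to strings before being embedded in the outer document.
    """
    vis_state = dict(BASE_VIS_STATE, title=title)
    search_source = {
        "index": index,
        "query": {"query_string": {"analyze_wildcard": True,
                                   "query": query}},
        "filter": [],
    }
    return {
        "title": title,
        "visState": json.dumps(vis_state),
        "uiStateJSON": "{}",
        "description": "",
        "version": 1,
        "kibanaSavedObjectMeta": {
            "searchSourceJSON": json.dumps(search_source)},
    }

# POST json.dumps(doc) to http://localhost:9200/.kibana/visualization/Test2
doc = build_visualization("Test2", "user:alice")
```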
Have a good day, I hope this has been useful.
Tony
I have a requirement where I need to create different visualizations for different users, which will differ only slightly in a query param. So I am considering creating a script that will enable me to do this. Has anyone done this on Kibana 4? Some pointers on how to create a visualization using a query would be of great help.
I would also like to create Dashboards on the fly but that can wait till I get this one sorted out.
If you want to go ahead with Java plugin (as mentioned in comments), here are the steps:
Create different visualizations with different X-axis parameters. Visualizations are basically JSON strings, so you can write Java code which changes the value of the X aggregation based on the mapping that you have. Each chart will then have a different id.
While you are creating a custom dashboard based on the user, check the mapping between the user and the visualization and use the following command to add the visualization:
client.prepareIndex(".kibana", "visualization", visualizationId).setSource(visualizationJson).execute(); // id and source variable names are placeholders
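The same idea works outside a Java plugin: since a visualization is just JSON, you can clone an existing one and swap the terms-aggregation (X-axis) field before re-indexing it. A sketch in Python (the new field and title values are illustrative):

```python
import json

def retarget_visualization(vis_source, new_field, new_title):
    """Return a copy of a visualization _source with the terms
    aggregation (the X axis / segment) pointed at a new field."""
    doc = dict(vis_source)
    vis_state = json.loads(doc["visState"])
    for agg in vis_state["aggs"]:
        if agg.get("type") == "terms":
            agg["params"]["field"] = new_field
    vis_state["title"] = new_title
    doc["visState"] = json.dumps(vis_state)
    doc["title"] = new_title
    return doc

# Minimal stand-in for a document fetched from .kibana/visualization/.
base = {
    "title": "Test1",
    "visState": json.dumps({
        "title": "Test1",
        "type": "pie",
        "aggs": [
            {"id": "1", "type": "count", "schema": "metric", "params": {}},
            {"id": "2", "type": "terms", "schema": "segment",
             "params": {"field": "type.raw", "order": "desc",
                        "orderBy": "1", "size": 10}},
        ],
    }),
}

per_user = retarget_visualization(base, "user.raw", "Top types for alice")
```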
I'm looking for a list of commands required to export and then import all Kibana 4 saved Searches, Visualizations and Dashboards.
I'd also like to have the default Kibana 4 index pattern created automatically for logstash.
I've tried using elasticdump as outlined here http://air.ghost.io/kibana-4-export-and-import-visualizations-and-dashboards/ but the default Kibana index pattern isn't created and the saved searches don't seem to get exported.
You can export saved visualizations, dashboards and searches from Settings >> Objects as shown in image below
You also have to export the associated visualizations and searches along with the dashboard; clicking Export on a dashboard will not include its dependent objects.
All information pertaining to saved objects like saved searches, index patterns, dashboards and visualizations is saved in the .kibana index in Elasticsearch.
The GitHub project elastic/beats-dashboards contains a Python script for dumping Kibana definitions (to JSON, one file per definition), and a shell script for loading those exported definitions into an Elasticsearch instance.
The Python script dumps all Kibana definitions, which, in my case, is more than I want.
I want to distribute only some definitions: specifically, the definitions for a few dashboards (and their visualizations and searches), rather than all of the dashboards on my Elasticsearch instance.
I considered various options, including writing scripts to get a specific dashboard definition, and then parse that definition, and get the cited visualization and search definitions, but for now, I've gone with the following solution (inelegant but pragmatic).
In Kibana, I edited each definition, and inserted a string into the Description field that identifies the definition as being one that I want to export. For example, "#exportme".
In the Python script (from beats-dashboards) that dumps the definitions, I introduced a query parameter into the search function call, limiting it to definitions with that identifying string. For example:
res = es.search(
index='.kibana',
doc_type=doc_type,
size=1000,
q='description:"#exportme"')
(In practice, rather than hardcoding the "hashtag", it's better practice to specify it via a command-line argument.)
One aspect of the dump'n'load scripts provided with elastic/beats-dashboards that I particularly like is their granularity: one JSON file per definition. I find this useful for version control.
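A minimal sketch of that one-file-per-definition layout, assuming hits in the shape returned by the Python Elasticsearch client:

```python
import json
import os

def dump_definitions(hits, out_dir):
    """Write each .kibana search hit to its own JSON file,
    named <type>-<id>.json, which keeps diffs small under
    version control."""
    for hit in hits:
        fname = "{}-{}.json".format(hit["_type"], hit["_id"])
        with open(os.path.join(out_dir, fname), "w") as f:
            json.dump(hit["_source"], f, indent=2, sort_keys=True)

# `hits` would come from res["hits"]["hits"] of the es.search call above.
```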
You can get searches using elasticdump like so:
elasticdump --input=http://localhost:9200/.kibana --output=$ --type=data --searchBody='{"filter": {"type": {"value": "search"}} }'