How do I import a Kibana 6 visualization into elasticsearch 6 without using the Kibana UI? - elasticsearch

I am trying to import a Kibana 6 visualization into Elasticsearch 6, to be viewed in Kibana. I want to do this with a curl command, essentially a script, without going through the Kibana UI. This is the command I'm using:
curl -XPUT http://localhost:9200/.kibana/doc/visualization:vis1 -H
'Content-Type: application/json' -d @visual1.json
And this is visual1.json:
{
"type": "visualization",
"visualization": {
"title": "Logins",
"visState": "{\"title\":\"Logins\",\"type\":\"histogram\",\"params\":{\"type\":\"histogram\",\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"type\":\"category\",\"position\":\"bottom\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\"},\"labels\":{\"show\":true,\"truncate\":100},\"title\":{}}],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"name\":\"LeftAxis-1\",\"type\":\"value\",\"position\":\"left\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\",\"mode\":\"normal\"},\"labels\":{\"show\":true,\"rotate\":0,\"filter\":false,\"truncate\":100},\"title\":{\"text\":\"Count\"}}],\"seriesParams\":[{\"show\":\"true\",\"type\":\"histogram\",\"mode\":\"stacked\",\"data\":{\"label\":\"Count\",\"id\":\"1\"},\"valueAxis\":\"ValueAxis-1\",\"drawLinesBetweenPoints\":true,\"showCircles\":true}],\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"times\":[],\"addTimeMarker\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"principal.keyword\",\"otherBucket\":true,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"def097e0-550f-11e8-9266-93ce640e5839\”,\”filter\":[{\"meta\":{\"index\":\"def097e0-550f-11e8-9266-93ce640e5839\”,\”negate\":false,\"disabled\":false,\"alias\":null,\"type\":\"phrase\",\"key\":\"requestType.keyword\",\"value\":\"ALOG\”,\”params\":{\"query\":\"AUTH_LOGIN\",\"type\":\"phrase\"}},\"query\":{\"match\":{\"requestType.keyword\":{\"query\":\"AUTH_LOGIN\",\"type\":\"phrase\"}}},\"$state\":{\"store\":\"appState\"}}],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
}
}
}
Now a couple of things to note about the curl command and this JSON file. The index I push the visualization to is .kibana. I found that when I pushed these to other indices, such as "test", my data would not show up as a stored object in Kibana, and thus wouldn't show up on the visualization tab. When I PUT to .kibana with the syntax '.kibana/doc/visualization:vis1', my object shows up on the visualization tab.
Now concerning the json file. Note that when you export a visualization from Kibana 6, it doesn’t look like this. It looks like:
{
"_id": "vis1",
"_type": "visualization",
"_source": {
"title": "Logins",
"visState": "{\"title\":\"Logins\",\"type\":\"histogram\",\"params\":{\"type\":\"histogram\",\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"type\":\"category\",\"position\":\"bottom\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\"},\"labels\":{\"show\":true,\"truncate\":100},\"title\":{}}],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"name\":\"LeftAxis-1\",\"type\":\"value\",\"position\":\"left\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\",\"mode\":\"normal\"},\"labels\":{\"show\":true,\"rotate\":0,\"filter\":false,\"truncate\":100},\"title\":{\"text\":\"Count\"}}],\"seriesParams\":[{\"show\":\"true\",\"type\":\"histogram\",\"mode\":\"stacked\",\"data\":{\"label\":\"Count\",\"id\":\"1\"},\"valueAxis\":\"ValueAxis-1\",\"drawLinesBetweenPoints\":true,\"showCircles\":true}],\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"times\":[],\"addTimeMarker\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"principal.keyword\",\"otherBucket\":true,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"def097e0-550f-11e8-9266-93ce640e5839\",\"filter\":[{\"meta\":{\"index\":\"def097e0-550f-11e8-9266-93ce640e5839\",\"negate\":false,\"disabled\":false,\"alias\":null,\"type\":\"phrase\",\"key\":\"requestType.keyword\",\"value\":\"LOG\",\"params\":{\"query\":\"LOG\",\"type\":\"phrase\"}},\"query\":{\"match\":{\"requestType.keyword\":{\"query\":\"LOG\",\"type\":\"phrase\"}}},\"$state\":{\"store\":\"appState\"}}],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
}
}
}
Note the first few lines. I found from this link, Unable to create visualization using curl command in elasticsearch, that you have to modify the JSON export in order to import it. Seems strange, right?
Anyway, I have since hit two errors on the actual visualization object in Kibana. The first was "The index pattern associated with this object no longer exists." I was able to get around this by creating an index pattern with the id referenced in the searchSourceJSON of my visualization. I had to do this within the Kibana UI, so technically this solution would not work for me. In any case, I created an index with a document in it by calling
curl -X PUT "localhost:9200/test57/_doc/1" -H 'Content-Type: application/json' -d'
{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elasticsearch"
}
'
And then, in the Kibana UI, I created an index pattern and gave it the custom index pattern ID def097e0-550f-11e8-9266-93ce640e5839.
Now when I go to view my visualization, I get a new error: "A field associated with this object no longer exists in the index pattern."
I am guessing this has something to do with me pushing a random object into the index, but even with debug settings on for elastic and kibana, I don’t really get enough information to fix this problem.
If anyone could point me in the right direction that would be great! Thanks in advance.

You need to make sure that the fields you reference in your visualization definition are also present in the Kibana index pattern (Kibana main screen > Management > Index Patterns). The easiest way to do that would be to include said fields in the dummy index you created and then 'refresh field list' in the Kibana Index Patterns screen.
You can do this via CLI by creating a document of _type index-pattern in the .kibana index.
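A minimal sketch of that, assuming Kibana 6's saved-object layout (the id matches the one referenced in the searchSourceJSON above; the fields list below is illustrative, not exhaustive):
curl -XPUT "http://localhost:9200/.kibana/doc/index-pattern:def097e0-550f-11e8-9266-93ce640e5839" -H 'Content-Type: application/json' -d'
{
"type": "index-pattern",
"index-pattern": {
"title": "test57*",
"fields": "[{\"name\":\"requestType.keyword\",\"type\":\"string\",\"searchable\":true,\"aggregatable\":true},{\"name\":\"principal.keyword\",\"type\":\"string\",\"searchable\":true,\"aggregatable\":true}]"
}
}'
After indexing it, refresh the field list in Management > Index Patterns so the visualization's fields resolve.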

It is possible to import through the Kibana endpoint using the saved_objects API.
This requires modifying the exported JSON, wrapping it inside {"attributes": ...}.
Based on your example, it should be something like:
curl -XPOST "http://localhost:5601/api/saved_objects/visualization/myvisualisation?overwrite=true" -H "kbn-xsrf: reporting" -H 'Content-Type: application/json' -d'
{"attributes":{
"title": "Logins",
"visState": "{\"title\":\"Logins\",\"type\":\"histogram\",\"params\":{\"type\":\"histogram\",\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"type\":\"category\",\"position\":\"bottom\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\"},\"labels\":{\"show\":true,\"truncate\":100},\"title\":{}}],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"name\":\"LeftAxis-1\",\"type\":\"value\",\"position\":\"left\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\",\"mode\":\"normal\"},\"labels\":{\"show\":true,\"rotate\":0,\"filter\":false,\"truncate\":100},\"title\":{\"text\":\"Count\"}}],\"seriesParams\":[{\"show\":\"true\",\"type\":\"histogram\",\"mode\":\"stacked\",\"data\":{\"label\":\"Count\",\"id\":\"1\"},\"valueAxis\":\"ValueAxis-1\",\"drawLinesBetweenPoints\":true,\"showCircles\":true}],\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"times\":[],\"addTimeMarker\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"principal.keyword\",\"otherBucket\":true,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"def097e0-550f-11e8-9266-93ce640e5839\",\"filter\":[{\"meta\":{\"index\":\"def097e0-550f-11e8-9266-93ce640e5839\",\"negate\":false,\"disabled\":false,\"alias\":null,\"type\":\"phrase\",\"key\":\"requestType.keyword\",\"value\":\"LOG\",\"params\":{\"query\":\"LOG\",\"type\":\"phrase\"}},\"query\":{\"match\":{\"requestType.keyword\":{\"query\":\"LOG\",\"type\":\"phrase\"}}},\"$state\":{\"store\":\"appState\"}}],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
}
}
}
'
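The referenced index pattern can be created through the same API (a sketch; the title test57* is an assumption based on the dummy index created above):
curl -XPOST "http://localhost:5601/api/saved_objects/index-pattern/def097e0-550f-11e8-9266-93ce640e5839?overwrite=true" -H "kbn-xsrf: reporting" -H 'Content-Type: application/json' -d'
{"attributes":{"title":"test57*"}}
'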

Related

How to create document in elasticsearch to save data and to search it?

Here is my requirement. Below are the three levels of data I am getting from the DB. When I search for Developer, I should get all of its values, such as GEO and GRAPH, from data2 in a list; when it comes to Support, the values should contain SERVER and Data in a list. Then, based on the selection from data1, data3 should be searchable; for example, when we select Developer, then GeoPos and GraphPos...
The logic I need to use here is Elasticsearch's.
data1 data2 data3
Developer GEO GeoPos
Developer GRAPH GraphPos
Support SERVER ServerPos
Support Data DataPos
This is what I have done to create the index and the mappings:
curl -X PUT "http://localhost:9200/mapping_log" -H 'Content-Type: application/json' -d'
{"mappings": {"properties": {"data1": {"type": "text", "fields": {"keyword": {"type": "keyword"}}}, "data2": {"type": "text", "fields": {"keyword": {"type": "keyword"}}}, "data3": {"type": "text", "fields": {"keyword": {"type": "keyword"}}}}}}'
When searching, I am not sure what I am going to get. Can you please help with the search DSL query too?
curl -X GET "localhost:9200/mapping_log/_search?pretty" -H 'Content-Type: application/json' -d'
{
"query": {
"match": {
"data1.data2": "product"
}
}
}
'
How do I create documents for this type of data? Can we create the JSON and post it through Postman or curl?
If your documents are not yet indexed in Elasticsearch, you first need to ingest them into an existing index, e.g. with the aid of Logstash; you can find many configuration files for your input database.
Before transferring your documents, create an index in Elasticsearch with multi-field mappings. You could also use dynamic mapping (Elasticsearch's default) and adjust your DSL query, but I recommend multi-field mappings, as follows:
PUT /mapping
{
"mappings": {
"properties": {
"rating": {"type": "float"},
"content": {"type": "text"},
"author": {
"properties": {
"name": {"type": "text"},
"email": {"type": "keyword"}
}
}
}
}
}
The resulting mapping will reflect this structure.
Then you can query the fields in the Kibana Dev Tools with a DSL query like the one below:
GET /mapping/_search
{
"query": {"match": {"author.email": "SOMEMAIL"}}
}
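To answer the "post it through Postman or curl" part: yes, each row of your table becomes its own JSON document. A sketch against the mapping_log index from the question (the document id 1 is arbitrary):
curl -X POST "http://localhost:9200/mapping_log/_doc/1" -H 'Content-Type: application/json' -d'
{"data1": "Developer", "data2": "GEO", "data3": "GeoPos"}'
Repeat for the other rows, then fetch all Developer rows with an exact match on the keyword sub-field:
curl -X GET "http://localhost:9200/mapping_log/_search?pretty" -H 'Content-Type: application/json' -d'
{"query": {"term": {"data1.keyword": "Developer"}}}'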

Elasticsearch query with wildcards

Using the data from the Elasticsearch tutorials as an example, the following URI search hits 9 records,
curl -XGET 'remotehost:9200/bank/_search?q=city:R*d&_source_include=city&pretty'
while the following request body search hits 0 records,
curl -XGET 'remotehost:9200/bank/_search?pretty' -H 'Content-Type: application/json'
-d'{"query": {"wildcard": {"city": "R*d"} },
"_source": ["city"]
}
'
But the two methods should be equivalent to each other. Any idea why this is happening? I use Elasticsearch 5.5.1 in Docker.
You can get your expected result with the command below. It adds .keyword to the city field of your query.
curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'{"query": {"wildcard": {"city.keyword": "R*d"} }, "_source": ["city"]}'
Why .keyword is added
When you insert data into Elasticsearch, you will notice a .keyword sub-field, and that field is not_analyzed. By default, the field you insert data into is analyzed with the standard analyzer, and a .keyword multi-field is added alongside it. If you create a field city with data, Elasticsearch creates city with the standard analyzer and adds a multi-field city.keyword which is not_analyzed.
For a wildcard query on an exact value you need a not_analyzed field, so your query should target city.keyword, which is not_analyzed by default.
In the first case, you sent a GET request to Elasticsearch with the q query parameter. That runs a query_string query, which by default lowercases wildcard terms (R*d becomes r*d), so it matches the lowercased tokens of the analyzed city field. The explicit wildcard query does not analyze its pattern, so the uppercase R matches nothing.
For a reliable source, you can follow the official docs:
The string field has been split into two new types: text, which should be
used for full-text search, and keyword, which should be used for
keyword search.
To make things better, Elasticsearch decided to borrow an idea that
initially stemmed from Logstash: strings will now be mapped both as
text and keyword by default. For instance, if you index the
following simple document:
{
"foo": "bar"
}
Then the following dynamic mappings will be created:
{
"foo": {
"type" "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
As a consequence, it will both be possible to perform full-text search
on foo, and keyword search and aggregations using the foo.keyword
field.
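In other words, the same value can be queried both ways (a sketch; myindex is a placeholder name):
GET /myindex/_search
{"query": {"match": {"foo": "bar"}}}
performs full-text search on the analyzed field, while
GET /myindex/_search
{"query": {"term": {"foo.keyword": "bar"}}}
matches the exact, not_analyzed value, which is what a wildcard pattern such as R*d needs.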

How to copy some ElasticSearch data to a new index

Let's say I have movie data in my Elasticsearch and I created it like this:
curl -XPUT "http://192.168.0.2:9200/movies/movie/1" -d'
{
"title": "The Godfather",
"director": "Francis Ford Coppola",
"year": 1972
}'
And I have a bunch of movies from different years. I want to copy all the movies from a particular year (so, 1972) and copy them to a new index of "70sMovies", but I couldn't see how to do that.
Since ElasticSearch 2.3 you can now use the built in _reindex API
for example:
POST /_reindex
{
"source": {
"index": "twitter"
},
"dest": {
"index": "new_twitter"
}
}
Or copy only a specific part by adding a filter/query:
POST /_reindex
{
"source": {
"index": "twitter",
"query": {
"term": {
"user": "kimchy"
}
}
},
"dest": {
"index": "new_twitter"
}
}
Read more: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html
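Applied to the movies example, a sketch filtering on year 1972 (the destination name is lowercased here, since index names cannot contain uppercase letters):
POST /_reindex
{
"source": {
"index": "movies",
"query": {
"term": {
"year": 1972
}
}
},
"dest": {
"index": "70smovies"
}
}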
The best approach would be to use the elasticsearch-dump tool (https://github.com/taskrabbit/elasticsearch-dump).
A real-world example I used:
elasticdump \
--input=http://localhost:9700/.kibana \
--output=http://localhost:9700/.kibana_read_only \
--type=mapping
elasticdump \
--input=http://localhost:9700/.kibana \
--output=http://localhost:9700/.kibana_read_only \
--type=data
Check out knapsack:
https://github.com/jprante/elasticsearch-knapsack
Once you have the plugin installed and working, you could export part of your index via query. For example:
curl -XPOST 'localhost:9200/test/test/_export' -d '{
"query" : {
"match" : {
"myfield" : "myvalue"
}
},
"fields" : [ "_parent", "_source" ]
}'
This will create a tarball with only your query results, which you can then import into another index.
To reindex a specific type from a source index to a destination index type, the syntax is:
POST _reindex/
{
"source": {
"index": "source_index",
"type": "source_type",
"query": {
// add filter criteria
}
},
"dest": {
"index": "dest_index",
"type": "dest_type"
}
}
If the intent is to copy some portion of the data, or all of it, to an index with the same settings/mappings as the original index, one could use the clone API. Something like below:
POST /<index>/_clone/<target-index>
OR
PUT /<index>/_clone/<target-index>
However, if the intent is to copy the data to a new index with different settings/mappings than the original index, one could use the reindex API. Something like below:
POST _reindex/
{
"source": {
"index": "source_index",
"type": "source_type",
"query": {
// add filter criteria
}
},
"dest": {
"index": "dest_index",
"type": "dest_type"
}
}
*Note: with the reindex API the destination index does not inherit the source's settings/mappings, so create it with the desired settings/mappings prior to the actual API call.
For further reading on difference between clone and reindex refer What's the difference between cloning and reindexing an index in Elasticsearch?
You can do it easily with elasticsearch-dump (https://github.com/taskrabbit/elasticsearch-dump) in three steps. In the following example I copy the index "thor" to "thor2"
elasticdump --input=http://localhost:9200/thor --output=http://localhost:9200/thor2 --type=analyzer
elasticdump --input=http://localhost:9200/thor --output=http://localhost:9200/thor2 --type=mapping
elasticdump --input=http://localhost:9200/thor --output=http://localhost:9200/thor2 --type=data
Well, the straightforward way to do this is to write code, with the API of your choice, querying for "year": 1972 and then indexing that data into a new index. You would use the Search API or the Scan and Scroll API to get all the documents and then either index them one by one or use the Bulk API:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-search.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-scroll.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html
Assuming you don't want to do this via code but are looking for a direct way of doing this, I suggest the Elasticsearch Snapshot and Restore. Basically you would take a snapshot of your existing index, restore it into a new index and then use the Delete command to delete all documents with a year other than 1972.
Snapshot And Restore
The snapshot and restore module allows to create snapshots of
individual indices or an entire cluster into a remote repository. At
the time of the initial release only shared file system repository was
supported, but now a range of backends are available via officially
supported repository plugins.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html
Delete By Query API
The delete by query API allows to delete documents from one or more
indices and one or more types based on a query. The query can either
be provided using a simple query string as a parameter, or using the
Query DSL defined within the request body.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
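On current versions the cleanup step could be a single _delete_by_query call (a sketch; the index name 70smovies is an assumption, and the syntax differs on the old versions linked above):
POST /70smovies/_delete_by_query
{
"query": {
"bool": {
"must_not": {
"term": {
"year": 1972
}
}
}
}
}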
Since v7.4 the _clone api was introduced and can easily satisfy your need: (read for the relevant prerequisites and monitoring involved)
POST /<index>/_clone/<target-index>
Or:
PUT /<index>/_clone/<target-index>
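One of those prerequisites: the source index must be made read-only before cloning, e.g. (movies-1972 is just an example target name):
PUT /movies/_settings
{
"index.blocks.write": true
}
POST /movies/_clone/movies-1972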
You can use elasticdump --searchBody:
# Copy documents from movies to 70sMovies (filtering using query)
elasticdump \
--input=http://localhost:9200/movies \
--output=http://localhost:9200/70sMovies \
--type=data \
--searchBody="{\"query\":{\"term\":{\"username\": \"admin\"}}}" # <--- Your query here
more on elasticdump options here.

How to insert data into Elasticsearch without an id

I insert data into Elasticsearch with id 123:
localhost:9200/index/type/123
But I do not know what the next id to insert will be. How do I insert data into Elasticsearch without an id, at localhost:9200/index/type?
The index operation can be executed without specifying the id. In such a case, an id will be generated automatically. In addition, the op_type will automatically be set to create. Here is an example (note the POST used instead of PUT):
$ curl -XPOST 'http://localhost:9200/twitter/tweet/' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elasticsearch"
}'
In my case, using Node.js and the elasticsearch package, I did it this way with the client.index() method:
var elasticsearch = require('elasticsearch');
let client = new elasticsearch.Client({
host: '127.0.0.1:9200'
});
client.index({
index: 'myindex',
type: 'mytype',
body: {
property1: 'val 1',
property2: ['y', 'z'],
property3: true
}
}, function (error, response) {
if (error) {
console.log("error: ", error);
} else {
console.log("response: ", response);
}
});
If an id is not specified, Elasticsearch will generate one automatically.
In my case, I was trying to add a document directly to an index, e.g. localhost:9200/messages, as opposed to localhost:9200/someIndex/messages.
I had to append /_doc to the URL for my POST to succeed: localhost:9200/messages/_doc. Otherwise, I was getting an HTTP 405:
{"error":"Incorrect HTTP method for uri [/messages] and method [POST], allowed: [GET, PUT, HEAD, DELETE]","status":405}
Here's my full cURL request:
$ curl -X POST "localhost:9200/messages/_doc" -H 'Content-Type: application/json' -d'
{
"user": "Jimmy Doe",
"text": "Actually, my only brother!",
"timestamp": "something"
}
'
{"_index":"messages","_type":"_doc","_id":"AIRF8GYBjAnm5hquWm61","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2,"_primary_term":3}
You can use a POST request to create a new document or data object without specifying the id property in the path.
curl -XPOST 'http://localhost:9200/stackoverflow/question' -H 'Content-Type: application/json' -d'
{
"title": "How to insert data to elasticsearch without id in the path?"
}'
If our data doesn’t have a natural ID, we can let Elasticsearch autogenerate one for us. The structure of the request changes: instead of using the PUT verb ("store this document at this URL"), we use the POST verb ("store this document under this URL").
The URL now contains just the _index and the _type:
curl -X POST "localhost:9200/website/blog/" -H 'Content-Type: application/json' -d'
{
"title": "My second blog entry",
"text": "Still trying this out...",
"date": "2014/01/01"
}
'
The response is similar to what we saw before, except that the _id field has been generated for us:
{
"_index": "website",
"_type": "blog",
"_id": "AVFgSgVHUP18jI2wRx0w",
"_version": 1,
"created": true
}
Autogenerated IDs are 20 characters long, URL-safe, Base64-encoded GUID strings. These GUIDs are generated from a modified FlakeID scheme which allows multiple nodes to be generating unique IDs in parallel with essentially zero chance of collision.
https://www.elastic.co/guide/en/elasticsearch/guide/current/index-doc.html
It's possible to leave the ID field blank, and Elasticsearch will assign one. For example, a _bulk insert will look like:
{"create":{"_index":"products","_type":"product"}}\n
{JSON document 1}\n
{"create":{"_index":"products","_type":"product"}}\n
{JSON document 2}\n
{"create":{"_index":"products","_type":"product"}}\n
{JSON document 3}\n
...and so on
The IDs will look something like 'AUvGyJMOOA8IPUB04vbF'
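To submit such a file (the filename products.ndjson is an assumption; note the newline-delimited body and the x-ndjson content type the _bulk endpoint expects):
curl -XPOST 'localhost:9200/_bulk' -H 'Content-Type: application/x-ndjson' --data-binary @products.ndjson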

ElasticSearch MapperParsingException object mapping

I am following an article about Elasticsearch and I am trying to run this example on my engine.
example:
curl -XPUT 'elasticsearch:9200/twitter/tweet/1' -d '{
"user": "david",
"message": "C'est mon premier message de la journée !",
"postDate": "2010-03-15T15:23:56",
"priority": 2,
"rank": 10.2
}'
I try to send this information via a bash file (I use PuTTY), but I get this error:
{"error":"MapperParsingException[object mapping for [tweet] tried to parse as object,
but got EOF, has a concrete value been provided to it?]","status":400}
I also tried to spot an error with "cat -e tweet.sh", but I don't understand why I get this error.
Thanks in advance.
It's a type mismatch; I'm facing such an issue too. It looks like you are trying to index a plain value into a field that is mapped as an object, i.e. you first indexed something like this:
{
"obj1": {
"field1": "value1"
}
}
and then indexed this:
{
"obj1": "value"
}
Check your existing mapping via elasticsearch:9200/twitter/_mapping and you will see whether one of the fields was indexed as an object.
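For example:
curl -XGET 'http://elasticsearch:9200/twitter/_mapping?pretty'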
