Unable to delete index pattern in Kibana - elasticsearch

I moved from ELK 7.9 to ELK 7.15 in an attempt to solve this problem, and it looks like all that effort was of no use. I am still unable to delete the index pattern in Kibana, either through the console or through the GUI.
Trying to delete it from the Saved Objects section simply gets stuck forever on the confirmation screen.
Deleting it from the index pattern management page temporarily shows that the index pattern has been deleted, but once you reload the page, it comes back again.
The Elasticsearch index itself does get deleted successfully, however.
Trying it all from Dev Tools also does not seem to work. Some of the attempts and their respective outputs are shown below:
GET .kibana/_search
{
  "_source": ["index-pattern.title"],
  "query": {
    "term": {
      "type": "index-pattern"
    }
  }
}
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 4.0535226,
    "hits" : [
      {
        "_index" : ".kibana_7.15.0_001",
        "_type" : "_doc",
        "_id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
        "_score" : 4.0535226,
        "_source" : {
          "index-pattern" : {
            "title" : "all-security-bugs-from-jira"
          }
        }
      }
    ]
  }
}
DELETE /index_patterns/index_pattern/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [index_patterns]",
        "resource.type" : "index_expression",
        "resource.id" : "index_patterns",
        "index_uuid" : "_na_",
        "index" : "index_patterns"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [index_patterns]",
    "resource.type" : "index_expression",
    "resource.id" : "index_patterns",
    "index_uuid" : "_na_",
    "index" : "index_patterns"
  },
  "status" : 404
}
DELETE index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
        "resource.type" : "index_or_alias",
        "resource.id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
        "index_uuid" : "_na_",
        "index" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
    "resource.type" : "index_or_alias",
    "resource.id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
    "index_uuid" : "_na_",
    "index" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30"
  },
  "status" : 404
}
DELETE 822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
        "resource.type" : "index_or_alias",
        "resource.id" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30",
        "index_uuid" : "_na_",
        "index" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [822d7f50-7cd9-11ec-95d5-a3730f55fd30]",
    "resource.type" : "index_or_alias",
    "resource.id" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30",
    "index_uuid" : "_na_",
    "index" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30"
  },
  "status" : 404
}
GET /.kibana?pretty
does not show any document with the index pattern concerned. This was also confirmed by the two queries below:
GET .kibana/index-pattern/822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
  "_index" : ".kibana_7.15.0_001",
  "_type" : "index-pattern",
  "_id" : "822d7f50-7cd9-11ec-95d5-a3730f55fd30",
  "found" : false
}
GET .kibana/index-pattern/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
{
  "_index" : ".kibana_7.15.0_001",
  "_type" : "index-pattern",
  "_id" : "index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30",
  "found" : false
}
I have been trying to follow the suggestions on https://discuss.elastic.co/t/cant-delete-index-pattern-in-kibana/148341/5
Any help understanding what I could be doing wrong here is greatly appreciated.

You are not deleting from the right index. Do the following to delete:
DELETE .kibana/_doc/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
To fetch the document, you were using the wrong type; try the following:
GET .kibana/_doc/index-pattern:822d7f50-7cd9-11ec-95d5-a3730f55fd30
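If writing directly to the .kibana system index is a concern, the same cleanup can also be done through Kibana's Saved Objects HTTP API, which routes the delete through Kibana itself. A minimal sketch, assuming Kibana listens on localhost:5601 (add credentials if security is enabled):
curl -X DELETE "localhost:5601/api/saved_objects/index-pattern/822d7f50-7cd9-11ec-95d5-a3730f55fd30" \
  -H "kbn-xsrf: true"
Note that the saved object id here is the part after the index-pattern: prefix of the .kibana document id.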

Related

Elasticsearch - get docs with array field size greater than 1

I want to get all docs with a field, which is an array, with size greater than 1.
I'm using the following command:
curl -H "Content-Type: application/json" -XGET '127.0.0.1:9200/poly/_search?pretty' -d '
{ "query": { "bool": { "filter": { "script" : { "script" : "doc['emoji'].length > 1" } } } } }
'
but I get the following error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "script_exception",
        "reason" : "compile error",
        "script_stack" : [
          "doc[emoji].length > 1",
          "    ^---- HERE"
        ],
        "script" : "doc[emoji].length > 1",
        "lang" : "painless",
        "position" : {
          "offset" : 4,
          "start" : 0,
          "end" : 21
        }
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "poly",
        "node" : "Dnp4BI4YSgCz-C-p8NmcTg",
        "reason" : {
          "type" : "query_shard_exception",
          "reason" : "failed to create query: compile error",
          "index_uuid" : "HlWRuJb5TY-L_2_9iyuVmg",
          "index" : "poly",
          "caused_by" : {
            "type" : "script_exception",
            "reason" : "compile error",
            "script_stack" : [
              "doc[emoji].length > 1",
              "    ^---- HERE"
            ],
            "script" : "doc[emoji].length > 1",
            "lang" : "painless",
            "position" : {
              "offset" : 4,
              "start" : 0,
              "end" : 21
            },
            "caused_by" : {
              "type" : "illegal_argument_exception",
              "reason" : "cannot resolve symbol [emoji]"
            }
          }
        }
      }
    ],
    "caused_by" : {
      "type" : "script_exception",
      "reason" : "compile error",
      "script_stack" : [
        "doc[emoji].length > 1",
        "    ^---- HERE"
      ],
      "script" : "doc[emoji].length > 1",
      "lang" : "painless",
      "position" : {
        "offset" : 4,
        "start" : 0,
        "end" : 21
      },
      "caused_by" : {
        "type" : "illegal_argument_exception",
        "reason" : "cannot resolve symbol [emoji]"
      }
    }
  },
  "status" : 400
}
Nevertheless, the field "emoji" exists in my Elasticsearch docs, as you can see in the result of the following command:
curl -H "Content-Type: application/json" -XGET '127.0.0.1:9200/poly/_search?pretty' -d '
{
  "_source": ["emoji"],
  "query" : {
    "constant_score" : {
      "filter" : {
        "exists" : {
          "field" : "emoji"
        }
      }
    }
  }
}
'
Here is the result for the previous command:
"took" : 28,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 10000,
"relation" : "gte"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1307256887860174848",
"_score" : 1.0,
"_source" : {
"emoji" : [
"❤️"
]
}
},
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1278766523134414848",
"_score" : 1.0,
"_source" : {
"emoji" : [
"⏩"
]
}
},
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1298632385605431296",
"_score" : 1.0,
"_source" : {
"emoji" : [
"\uD83D\uDC47\uD83C\uDFFF",
"\uD83D\uDC47\uD83C\uDFFF",
"\uD83D\uDC47\uD83C\uDFFF",
"\uD83D\uDC47\uD83C\uDFFF"
]
}
},
{
"_index" : "poly",
"_type" : "_doc",
"_id" : "1300563120184651776",
"_score" : 1.0,
"_source" : {
"emoji" : [
"\uD83D\uDC4D",
"\uD83D\uDE00",
"\uD83D\uDE80"
]
}
},
]
}
}
Can someone tell me why I'm getting that error above?
You need to escape the quote characters in your command; check the command below:
Script:
"script": {"script": "doc['\''emoji'\''].length > 1"}
Command:
curl -H "Content-Type: application/json" -XGET '127.0.0.1:9200/poly/_search?pretty' -d '{ "query": {"bool": {"filter": {"script": {"script": "doc['\''emoji'\''].length > 1"}}}}}'
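If the '\'' escaping gets hard to read, a sketch of an alternative that sidesteps shell quoting entirely: keep the request body in a file (query.json is just a hypothetical name) and pass it with -d @:
cat > query.json <<'EOF'
{ "query": { "bool": { "filter": { "script": { "script": "doc['emoji'].length > 1" } } } } }
EOF
curl -H "Content-Type: application/json" -XGET '127.0.0.1:9200/poly/_search?pretty' -d @query.json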

Query with match to get all values for a given field! ElasticSearch

I'm pretty new to Elasticsearch and would like to write a query for all of the values of a specific field. Say I have the fields "Number" and "change_manager_group"; is there a query to list all the Numbers for which "change_manager_group" = "Change Managers - 2"? The data looks like this:
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 10,
    "successful" : 10,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1700,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "test-tem-changes",
        "_type" : "_doc",
        "_id" : "CHG0393073_1554800400000",
        "_score" : 1.0,
        "_source" : {
          "work_notes" : "",
          "priority" : "4 - Low",
          "planned_start" : 1554800400000,
          "Updated_by" : "system",
          "Updated" : 1554819333000,
          "phase" : "Requested",
          "Number" : "CHG0312373",
          "change_manager_group" : "Change Managers - 1",
          "approval" : "Approved",
          "downtime" : "false",
          "close_notes" : "",
          "Standard_template_version" : "",
          "close_code" : null,
          "actual_start" : 1554819333000,
          "closed_by" : "",
          "Type" : "Normal"
        }
      },
      {
        "_index" : "test-tem-changes",
        "_type" : "_doc",
        "_id" : "CHG0406522_0",
        "_score" : 1.0,
        "_source" : {
          "work_notes" : "",
          "priority" : "4 - Low",
          "planned_start" : 0,
          "Updated_by" : "svcmdeploy_automation",
          "Updated" : 1553320559000,
          "phase" : "Requested",
          "Number" : "CHG041232",
          "change_manager_group" : "Change Managers - 2",
          "approval" : "Approved",
          "downtime" : "false",
          "close_notes" : "Change Installed",
          "Standard_template_version" : "",
          "close_code" : "Successful",
          "actual_start" : 1553338188000,
          "closed_by" : "",
          "Type" : "Automated"
        }
      },
      {
        "_index" : "test-tem-changes",
        "_type" : "_doc",
        "_id" : "CHG0406526_0",
        "_score" : 1.0,
        "_source" : {
          "work_notes" : "",
          "priority" : "4 - Low",
          "planned_start" : 0,
          "Updated_by" : "svcmdeploy_automation",
          "Updated" : 1553321854000,
          "phase" : "Requested",
          "Number" : "CHG0412326",
          "change_manager_group" : "Change Managers - 2",
          "approval" : "Approved",
          "downtime" : "false",
          "close_notes" : "Change Installed",
          "Standard_template_version" : "",
          "close_code" : "Successful",
          "actual_start" : 1553339629000,
          "closed_by" : "",
          "Type" : "Automated"
        }
      },
I tried this after a bit of googling, but it errors out:
curl -XGET "http://localhost:9200/test-tem-changes/_search?pretty=true" -H 'Content-Type: application/json' -d '
{
  "query" : { "Number" : {"query" : "*"} }
}
'
What am I missing here?
To get all the documents where change_manager_group == "Change Managers - 2", you want to use a term query. Below I am wrapping it in a filter context so that it is faster (it does not score relevance).
If change_manager_group is not a keyword-mapped field, you may have to use change_manager_group.keyword, depending on your mapping.
GET test-tem-changes/_search
{
  "query": {
    "bool": {
      "filter": {
        "term": {
          "change_manager_group": "Change Managers - 2"
        }
      }
    }
  }
}
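For reference, a curl equivalent of the query above, written against the .keyword sub-field (assuming the default dynamic mapping created one):
curl -XGET "http://localhost:9200/test-tem-changes/_search?pretty" -H 'Content-Type: application/json' -d '
{
  "query": {
    "bool": {
      "filter": {
        "term": {
          "change_manager_group.keyword": "Change Managers - 2"
        }
      }
    }
  }
}'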

How to perform arithmetic operations on data from Elasticsearch

I need the average of CpuAverageLoad for a specific NodeType. For example, if I give the NodeType as tpt, it should give the average CpuAverageLoad of all the tpt nodes available. I tried different methods, but in vain...
My data in Elasticsearch is below:
{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 4,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "kpi",
        "_type" : "kpi",
        "_id" : "\u0003",
        "_score" : 1.0,
        "_source" : {
          "kpi" : {
            "CpuAverageLoad" : 13,
            "NodeId" : "kishan",
            "NodeType" : "Tpt",
            "State" : "online",
            "Static_limit" : 0
          }
        }
      },
      {
        "_index" : "kpi",
        "_type" : "kpi",
        "_id" : "\u0005",
        "_score" : 1.0,
        "_source" : {
          "kpi" : {
            "CpuAverageLoad" : 15,
            "NodeId" : "kishan1",
            "NodeType" : "tpt",
            "State" : "online",
            "Static_limit" : 0
          }
        }
      },
      {
        "_index" : "kpi",
        "_type" : "kpi",
        "_id" : "\u0004",
        "_score" : 1.0,
        "_source" : {
          "kpi" : {
            "MaxLbCapacity" : "700000",
            "NodeId" : "kishan2",
            "NodeType" : "bang",
            "OnlineCSCF" : [
              "001",
              "002"
            ],
            "State" : "Online",
            "TdbGroup" : 1,
            "TdGroup" : 0
          }
        }
      },
      {
        "_index" : "kpi",
        "_type" : "kpi",
        "_id" : "\u0002",
        "_score" : 1.0,
        "_source" : {
          "kpi" : {
            "MaxLbCapacity" : "700000",
            "NodeId" : "kishan3",
            "NodeType" : "bang",
            "OnlineCSCF" : [
              "001",
              "002"
            ],
            "State" : "Online",
            "TdLGroup" : 1,
            "TGroup" : 0
          }
        }
      }
    ]
  }
}
And my query is
curl -XGET 'localhost:9200/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool" : {
      "must" : {
        "script" : {
          "script" : {
            "source" : "kpi[CpuAverageLoad].value > params.param1",
            "lang" : "painless",
            "params" : {
              "param1" : 5
            }
          }
        }
      }
    }
  }
}'
but it is failing, as it is unable to parse the script source:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "[script] unknown field [source], parser not found"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "[script] unknown field [source], parser not found"
  },
  "status" : 400
}
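The "[script] unknown field [source]" error usually indicates a cluster older than Elasticsearch 5.6, where the script body key was "inline" rather than "source"; doc values are also addressed as doc['kpi.CpuAverageLoad'], not kpi[CpuAverageLoad]. For the stated goal of averaging CpuAverageLoad over one NodeType, a term-filtered avg aggregation is the more direct tool. A minimal sketch, assuming the index is named kpi; depending on the mapping, the term filter may need a .keyword sub-field (on an analyzed string field, "Tpt" and "tpt" both index as the lowercase term "tpt"):
curl -XGET 'localhost:9200/kpi/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "query": {
    "term": { "kpi.NodeType": "tpt" }
  },
  "aggs": {
    "avg_cpu_load": {
      "avg": { "field": "kpi.CpuAverageLoad" }
    }
  }
}'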

Why are my completion suggester options empty?

I'm currently trying to set up my suggestion implementation.
My index settings / mappings:
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "trigrams" : {
          "tokenizer" : "mesh_default_ngram_tokenizer",
          "filter" : [ "lowercase" ]
        },
        "suggestor" : {
          "type" : "custom",
          "tokenizer" : "standard",
          "char_filter" : [ "html_strip" ],
          "filter" : [ "lowercase" ]
        }
      },
      "tokenizer" : {
        "mesh_default_ngram_tokenizer" : {
          "type" : "nGram",
          "min_gram" : "3",
          "max_gram" : "3"
        }
      }
    }
  },
  "mappings" : {
    "default" : {
      "properties" : {
        "uuid" : {
          "type" : "string",
          "index" : "not_analyzed"
        },
        "language" : {
          "type" : "string",
          "index" : "not_analyzed"
        },
        "fields" : {
          "properties" : {
            "content" : {
              "type" : "string",
              "index" : "analyzed",
              "analyzer" : "trigrams",
              "fields" : {
                "suggest" : {
                  "type" : "completion",
                  "analyzer" : "suggestor"
                }
              }
            }
          }
        }
      }
    }
  }
}
My query:
{
  "suggest": {
    "query-suggest" : {
      "text" : "som",
      "completion" : {
        "field" : "fields.content.suggest"
      }
    }
  },
  "_source": ["fields.content", "uuid", "language"]
}
The query result:
{
  "took" : 44,
  "timed_out" : false,
  "_shards" : {
    "total" : 20,
    "successful" : 20,
    "failed" : 0
  },
  "hits" : {
    "total" : 5,
    "max_score" : 0.0,
    "hits" : [ {
      "_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
      "_type" : "default",
      "_id" : "c6b7391075cc437ab7391075cc637a05-en",
      "_score" : 0.0,
      "_source" : {
        "language" : "en",
        "fields" : {
          "content" : "This is<pre>another set of <strong>important</strong>content s<b>om</b>e text with more content you can poke a stick at"
        },
        "uuid" : "c6b7391075cc437ab7391075cc637a05"
      }
    }, {
      "_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
      "_type" : "default",
      "_id" : "96e2c6765b6841fea2c6765b6871fe36-en",
      "_score" : 0.0,
      "_source" : {
        "language" : "en",
        "fields" : {
          "content" : "This is<pre>another set of <strong>important</strong>content no text with more content you can poke a stick at"
        },
        "uuid" : "96e2c6765b6841fea2c6765b6871fe36"
      }
    }, {
      "_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
      "_type" : "default",
      "_id" : "fd1472555e9d4d039472555e9d5d0386-en",
      "_score" : 0.0,
      "_source" : {
        "language" : "en",
        "fields" : {
          "content" : "This is<pre>another set of <strong>important</strong>content someth<strong>ing</strong> completely different"
        },
        "uuid" : "fd1472555e9d4d039472555e9d5d0386"
      }
    }, {
      "_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
      "_type" : "default",
      "_id" : "5a3727b134064de4b727b134063de4c4-en",
      "_score" : 0.0,
      "_source" : {
        "language" : "en",
        "fields" : {
          "content" : "This is<pre>another set of <strong>important</strong>content some<strong>what</strong> strange content"
        },
        "uuid" : "5a3727b134064de4b727b134063de4c4"
      }
    }, {
      "_index" : "node-08c5d084d4e842b385d084d4e8a2b301-fe6212a62ad94590a212a62ad9759026-44874a2a8d2e4483874a2a8d2e44830c-draft",
      "_type" : "default",
      "_id" : "865257b6be4340c69257b6be4340c603-en",
      "_score" : 0.0,
      "_source" : {
        "language" : "en",
        "fields" : {
          "content" : "This is<pre>another set of <strong>important</strong>content some <strong>more</strong> content you can poke a stick at too"
        },
        "uuid" : "865257b6be4340c69257b6be4340c603"
      }
    } ]
  },
  "suggest" : {
    "query-suggest" : [ {
      "text" : "som",
      "offset" : 0,
      "length" : 3,
      "options" : [ ]
    } ]
  }
}
I'm currently using Elasticsearch 2.4.6 and I can't upgrade.
There are 5 documents in my index and only 4 contain the word "some".
Why do I see 5 hits but no options?
The options are not empty if I start my suggest text with the first word of the field string (e.g. "this").
Is my usage of the suggest feature valid when dealing with fields that contain full HTML pages? I'm not sure whether the feature was meant to handle many tokens per document.
I already tried to use an ngram tokenizer for my suggestor analyzer, but that did not change the situation. Any hint or feedback would be appreciated.
It seems that the issue I'm seeing is a restriction of completion suggesters:
Matching always starts at the beginning of the text. So, for example, “Smi” will match “Smith, Fed” but not “Fed Smith”. However, you could list both “Smith, Fed” and “Fed Smith” as two different inputs for the one output.
http://rea.tech/implementing-autosuggest-in-elasticsearch/
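A sketch of that workaround, assuming ES 2.x and a dedicated completion field (the index and field names here are hypothetical): a multi-field fed by the parent string gets the whole string as its single input, which is why only the leading token matches. Indexing explicit inputs makes each phrase matchable:
PUT myindex/default/1
{
  "content_suggest" : {
    "input" : [ "Smith, Fed", "Fed Smith" ],
    "output" : "Smith, Fed"
  }
}
Here content_suggest would be mapped as a completion field, and both "Smi" and "Fed" would then complete to "Smith, Fed".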

How to get Elasticsearch boolean match working for multiple fields

I need some expert guidance on trying to get a bool match working. I'd like the query to return a successful search result only if both 'message' matches 'Failed password for' and 'path' matches '/var/log/secure'.
This is my query:
curl -s -XGET 'http://localhost:9200/logstash-2015.05.07/syslog/_search?pretty=true' -d '{
  "filter" : { "range" : { "@timestamp" : { "gte" : "now-1h" } } },
  "query" : {
    "bool" : {
      "must" : [
        { "match_phrase" : { "message" : "Failed password for" } },
        { "match_phrase" : { "path" : "/var/log/secure" } }
      ]
    }
  }
} '
Here is the start of the output from the search:
{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 46,
    "max_score" : 13.308596,
    "hits" : [ {
      "_index" : "logstash-2015.05.07",
      "_type" : "syslog",
      "_id" : "AU0wzLEqqCKq_IPSp_8k",
      "_score" : 13.308596,
      "_source":{"message":"May 7 16:53:50 s_local@logstash-02 sshd[17970]: Failed password for fred from 172.28.111.200 port 43487 ssh2","@version":"1","@timestamp":"2015-05-07T16:53:50.554-07:00","type":"syslog","host":"logstash-02","path":"/var/log/secure"}
    }, ...
The problem is, if I change '/var/log/secure' to just 'var', say, and run the query, I still get a result, just with a lower score. I understood the bool/must construct to mean that both match clauses would need to succeed. What I'm after is no result if 'path' doesn't exactly match '/var/log/secure'...
{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 46,
    "max_score" : 10.354593,
    "hits" : [ {
      "_index" : "logstash-2015.05.07",
      "_type" : "syslog",
      "_id" : "AU0wzLEqqCKq_IPSp_8k",
      "_score" : 10.354593,
      "_source":{"message":"May 7 16:53:50 s_local@logstash-02 sshd[17970]: Failed password for fred from 172.28.111.200 port 43487 ssh2","@version":"1","@timestamp":"2015-05-07T16:53:50.554-07:00","type":"syslog","host":"logstash-02","path":"/var/log/secure"}
    },...
I checked the mappings for these fields to confirm that they are not analyzed:
curl -X GET 'http://localhost:9200/logstash-2015.05.07/_mapping?pretty=true'
I think these fields are non-analyzed, and so I believe the search will not be analyzed either (based on some training documentation I read recently from Elasticsearch). Here is a snippet of the _mapping output for this index:
....
  "message" : {
    "type" : "string",
    "norms" : {
      "enabled" : false
    },
    "fields" : {
      "raw" : {
        "type" : "string",
        "index" : "not_analyzed",
        "ignore_above" : 256
      }
    }
  },
  "path" : {
    "type" : "string",
    "norms" : {
      "enabled" : false
    },
    "fields" : {
      "raw" : {
        "type" : "string",
        "index" : "not_analyzed",
        "ignore_above" : 256
      }
    }
  },
....
Where am I going wrong, or what am I misunderstanding here?
As mentioned in the OP, you would need to use the not_analyzed view of the fields; as per the OP's mapping, the non-analyzed versions of the fields are message.raw and path.raw.
Example:
{
  "filter" : { "range" : { "@timestamp" : { "gte" : "now-1h" } } },
  "query" : {
    "bool" : {
      "must" : [
        { "match_phrase" : { "message.raw" : "Failed password for" } },
        { "match_phrase" : { "path.raw" : "/var/log/secure" } }
      ]
    }
  }
}
The Elasticsearch multi-fields documentation gives more insight into this. To expand further:
The mapping in the OP for path is as follows:
"path" : {
"type" : "string",
"norms" : {
"enabled" : false
},
"fields" : {
"raw" : {
"type" : "string",
"index" : "not_analyzed",
"ignore_above" : 256
}
}
}
This specifies that the path field uses the default analyzer and that path.raw is not analyzed.
If you instead want the path field itself to be not_analyzed, with raw carrying the analyzed version, it would be something along these lines:
"path" : {
  "type" : "string",
  "index" : "not_analyzed",
  "norms" : {
    "enabled" : false
  },
  "fields" : {
    "raw" : {
      "type" : "string",
      "index" : "analyzed",
      "analyzer" : "<whatever analyzer you want>",
      "ignore_above" : 256
    }
  }
}
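A further sketch of the same intent: the phrase match works best on the analyzed message field, while the exact path match can be a term clause on the not_analyzed path.raw. Wrapping the term in a filter clause (bool filter requires Elasticsearch 2.x; on 1.x a filtered query plays this role) skips scoring for it:
{
  "query" : {
    "bool" : {
      "must" : [
        { "match_phrase" : { "message" : "Failed password for" } }
      ],
      "filter" : [
        { "term" : { "path.raw" : "/var/log/secure" } }
      ]
    }
  }
}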
