So this is my index template:
{
"net-stat-template" : {
"order" : 0,
"index_patterns" : [
"net-stat-*"
],
"settings" : {
"index" : {
"lifecycle" : {
"name" : "net-stat",
"rollover_alias" : "net-stat"
},
"routing" : {
"allocation" : {
"require" : {
"data" : "hot"
}
}
},
"refresh_interval" : "15s",
"number_of_shards" : "1",
"number_of_replicas" : "0"
}
},
"mappings" : { },
"aliases" : { }
}
}
and this is my ILM policy:
"net-stat" : {
"version" : 1,
"modified_date" : "2020-05-10T19:20:18.979Z",
"policy" : {
"phases" : {
"hot" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_size" : "50gb",
"max_age" : "5d"
},
"set_priority" : {
"priority" : 50
}
}
},
"delete" : {
"min_age" : "10d",
"actions" : {
"delete" : { }
}
},
"warm" : {
"min_age" : "0ms",
"actions" : {
"allocate" : {
"number_of_replicas" : 0,
"include" : { },
"exclude" : { },
"require" : {
"data" : "warm"
}
},
"set_priority" : {
"priority" : 50
}
}
}
}
}
}
but it doesn't delete indices that are more than 10 days old, and when I try GET net-stat-2020.04.20/_ilm/explain it returns:
{
"indices" : {
"net-stat-2020.04.20" : {
"index" : "net-stat-2020.04.20",
"managed" : true,
"policy" : "netstat",
"step_info" : {
"type" : "illegal_argument_exception",
"reason" : "policy [netstat] does not exist"
}
}
}
}
I'm not sure where this netstat came from, and when I try POST /net-stat-2020.04.20/_ilm/retry it returns this error:
"type": "illegal_argument_exception",
"reason": "cannot retry an action for an index [net-stat-2020.04.20] that has not encountered an error when running a Lifecycle Policy"
Is there something I'm missing, or are my settings somehow wrong?
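For reference, a quick way to check which policy name the index is actually carrying, and a sketch of how it could be repointed at the existing net-stat policy if the stray netstat name is only stored in that index's settings (adjust the index name as needed):
GET net-stat-2020.04.20/_settings
PUT net-stat-2020.04.20/_settings
{
  "index.lifecycle.name" : "net-stat"
}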
I am tired of reindexing; every 2-3 weeks I have to reindex.
{
"winlogbeat_sysmon" : {
"order" : 0,
"index_patterns" : [
"log-wlb-sysmon-*"
],
"settings" : {
"index" : {
"lifecycle" : {
"name" : "winlogbeat_sysmon_policy",
"rollover_alias" : "log-wlb-sysmon"
},
"refresh_interval" : "1s",
"number_of_shards" : "1",
"number_of_replicas" : "1"
}
},
"mappings" : {
"properties" : {
"thread_id" : {
"type" : "long"
},
"z_elastic_ecs.event.code" : {
"type" : "long"
},
"geoip" : {
"type" : "object",
"properties" : {
"ip" : {
"type" : "ip"
},
"latitude" : {
"type" : "half_float"
},
"location" : {
"type" : "geo_point"
},
"longitude" : {
"type" : "half_float"
}
}
},
"dst_ip_addr" : {
"type" : "ip"
}
}
},
"aliases" : { }
}
}
This is the template I set earlier; I haven't changed anything since then.
The current and previous indices of log-wlb-sysmon have dst_ip_addr mapped as an ip field, but older log-wlb-sysmon indices have it as a text field. In Logstash I didn't see any warning about this issue.
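For reference, a hedged way to confirm which concrete indices ended up with dst_ip_addr as ip and which as text is the standard field-mapping API:
GET log-wlb-sysmon-*/_mapping/field/dst_ip_addr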
I have different indices that contain different fields, and I'm trying to figure out how to get suggestions from all indices and all fields. I know that with GET /_all/_search I can search across all indices, but how can I get suggestions from all indices and all fields? I want a feature like Google's "Did you mean:" suggestions.
So, I tried this out:
GET /_all/_search
{
"query" : {
"multi_match" : {
"query" : "berlin"
}
},
"suggest" : {
"text" : "berlin",
"my-suggest-1" : {
"term" : {
"field" : "street"
}
},
"my-suggest-2" : {
"term" : {
"field" : "city"
}
},
"my-suggest-3" : {
"term" : {
"field" : "description"
}
}
}
}
"my-suggest-1" and "-2" belongs to Index address (see below) and "my-suggest-3" belongs to Index product. I get the following error:
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [street]"
},
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [city]"
},
{
"type" : "illegal_argument_exception",
"reason" : "no mapping found for field [description]"
}
]
}
But if I use only the fields of one index, I get suggestions:
GET /_all/_search
{
"query" : {
"multi_match" : {
"query" : "berlin"
}
},
"suggest" : {
"text" : "berlin",
"my-suggest-1" : {
"term" : {
"field" : "street"
}
},
"my-suggest-2" : {
"term" : {
"field" : "city"
}
}
}
}
Response
...
"failures" : {
...
},
"hits" : {
...
}
"suggest" : {
"my-suggest-1" : [
{
"text" : "berlin",
"offset" : 0,
"length" : 10,
"options" : [
{
"text" : "berliner",
"score" : 0.9,
"freq" : 12
},
{
"text" : "berlinger",
"score" : 0.9,
"freq" : 1
}
]
}
],
"my-suggest-2" : [
{
"text" : "berlin",
"offset" : 0,
"length" : 10,
"options" : []
}
]
...
I don't know how I can get suggestions from both the address and product indices. I would be happy if someone could help me.
Index 1 - Address:
"address" : {
"aliases" : {
....
},
"mappings" : {
"dynamic" : "strict",
"properties" : {
"_entity_type" : {
"type" : "keyword",
"index" : false
},
"street" : {
"type" : "text"
},
"city" : {
"type" : "text"
}
}
},
"settings" : {
...
}
}
Index 2 - Product:
"product" : {
"aliases" : {
...
},
"mappings" : {
"dynamic" : "strict",
"properties" : {
"_entity_type" : {
"type" : "keyword",
"index" : false
},
"description" : {
"type" : "text"
}
}
},
"settings" : {
...
}
}
You can add multiple indices to your search, but then you need to search over fields that exist in all of those indices. So in your case, you need to define all three fields in both indices: "street" and "city" are filled only in the first index, and "description" is filled only in the second. This would be your mapping for the address index; in it, the "description" field exists but holds no data. In the second index, "street" and "city" exist but hold no data.
"address" : {
"aliases" : {
....
},
"mappings" : {
"dynamic" : "strict",
"properties" : {
"_entity_type" : {
"type" : "keyword",
"index" : false
},
"street" : {
"type" : "text"
},
"city" : {
"type" : "text"
},
"description" : {
"type" : "text"
}
}
},
"settings" : {
...
}
}
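Correspondingly, a sketch of the product mapping with the empty street and city fields added (field definitions copied from the address index above):
"product" : {
  "aliases" : {
    ...
  },
  "mappings" : {
    "dynamic" : "strict",
    "properties" : {
      "_entity_type" : {
        "type" : "keyword",
        "index" : false
      },
      "description" : {
        "type" : "text"
      },
      "street" : {
        "type" : "text"
      },
      "city" : {
        "type" : "text"
      }
    }
  },
  "settings" : {
    ...
  }
}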
I am new to Elasticsearch and running version 2.3.5.
I am running this query:
{
"query" : {
"multi_match" : {
"type" : "cross_fields",
"query" : "John Schmidt Sankt Boulevard 118b 2554 Island",
"minimum_should_match" : "50%",
"operator" : "and",
"fields" : ["*Name", "*Street.*hasStringValue", "*hasStreetNumber", "*hasPostCode", "*PostalLocality.*hasStringValue"]
}
}
}
However, it does not return any results. If I remove the 'b' after 118 from the query, it returns the document.
All other fields are a match, so how can I make Elasticsearch return the document?
Here is the mapping:
{
"my_index" : {
"mappings" : {
"datasubject" : {
"properties" : {
"#context" : {
"properties" : {
"con" : {
"type" : "string"
},
"cor" : {
"type" : "string"
},
"geo" : {
"type" : "string"
},
"per" : {
"type" : "string"
}
}
},
"cor:Person" : {
"properties" : {
"con:hasContactPoint" : {
"properties" : {
"con:Mobile" : {
"properties" : {
"con:hasAreaCode" : {
"type" : "string"
},
"con:hasCompleteTelephoneNumberString" : {
"type" : "string"
},
"con:hasCountryCode" : {
"type" : "string"
}
}
},
"con:PostalAddress" : {
"properties" : {
"con:hasAddressPoint" : {
"properties" : {
"geo:StreetAddress" : {
"properties" : {
"con:hasPostCode" : {
"type" : "string"
},
"con:hasPostalLocality" : {
"properties" : {
"geo:PostalLocality" : {
"properties" : {
"cor:hasStringValue" : {
"type" : "string"
}
}
}
}
},
"geo:hasStreet" : {
"properties" : {
"geo:Street" : {
"properties" : {
"cor:hasStringValue" : {
"type" : "string"
}
}
}
}
},
"geo:hasStreetNumber" : {
"type" : "string"
}
}
}
}
}
}
}
}
},
"cor:hasBirthDate" : {
"properties" : {
"cor:Date" : {
"properties" : {
"cor:hasDateValue" : {
"type" : "date",
"format" : "strict_date_optional_time||epoch_millis"
}
}
}
}
},
"cor:hasName" : {
"properties" : {
"per:Name" : {
"properties" : {
"per:familyName" : {
"type" : "string"
},
"per:givenName" : {
"type" : "string"
}
}
}
}
},
"cor:isIdentifiedBy" : {
"properties" : {
"cor:GEDIvA" : {
"properties" : {
"cor:hasCompleteIdentifierValue" : {
"type" : "string"
}
}
},
"dataset/pdi:IndividualId" : {
"properties" : {
"cor:hasCompleteIdentifierValue" : {
"type" : "string"
}
}
}
}
}
}
}
}
}
}
}
}
And here are the index settings:
{
"gdprui" : {
"settings" : {
"index" : {
"creation_date" : "1525442279108",
"analysis" : {
"filter" : {
"my_ascii_folding" : {
"type" : "asciifolding",
"preserve_original" : "true"
},
"substring" : {
"type" : "edgeNGram",
"min_gram" : "1",
"max_gram" : "10"
}
},
"analyzer" : {
"default" : {
"filter" : [ "standard", "my_ascii_folding", "lowercase", "substring", "reverse" ],
"tokenizer" : "standard"
}
}
},
"number_of_shards" : "5",
"number_of_replicas" : "2",
"uuid" : "EMqhJwGWRKi1F5gFwuSKTQ",
"version" : {
"created" : "2030599"
}
}
}
}
}
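Not a full answer, but a hedged way to see what the default analyzer defined above (edge n-grams followed by reverse) actually produces for a token like 118b is the _analyze API; substitute your real index name, since the samples above show both my_index and gdprui:
curl -XGET 'http://localhost:9200/my_index/_analyze?pretty' -d '
{
  "analyzer" : "default",
  "text" : "118b"
}'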
I have this mapping for the fuas type:
curl -XGET 'http://localhost:9201/living_team/_mapping/fuas?pretty'
{
"living_v1" : {
"mappings" : {
"fuas" : {
"properties" : {
"backlogStatus" : {
"type" : "long"
},
"comment" : {
"type" : "string"
},
"dueTimestamp" : {
"type" : "date",
"format" : "strict_date_optional_time||epoch_millis"
},
"matter" : {
"type" : "string"
},
"metainfos" : {
"properties" : {
"category 1" : {
"type" : "string"
},
"key" : {
"type" : "string"
},
"null" : {
"type" : "string"
},
"processos" : {
"type" : "string"
}
}
},
"resources" : {
"properties" : {
"noteId" : {
"type" : "string"
},
"resourceId" : {
"type" : "string"
}
}
},
"status" : {
"type" : "long"
},
"timestamp" : {
"type" : "date",
"format" : "strict_date_optional_time||epoch_millis"
},
"user" : {
"type" : "string",
"index" : "not_analyzed"
}
}
}
}
}
}
I'm trying to perform this aggregation:
curl -XGET 'http://ESNode01:9201/living_team/fuas/_search?pretty' -d '
{
"aggs" : {
"demo" : {
"nested" : {
"path" : "metainfos"
},
"aggs" : {
"key" : { "terms" : { "field" : "metainfos.key" } }
}
}
}
}
'
Elasticsearch gives me:
"error" : {
"root_cause" : [ {
"type" : "aggregation_execution_exception",
"reason" : "[nested] nested path [metainfos] is not nested"
} ],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query_fetch",
"grouped" : true,
"failed_shards" : [ {
"shard" : 3,
"index" : "living_v1",
"node" : "HfaFBiZ0QceW1dpqAnv-SA",
"reason" : {
"type" : "aggregation_execution_exception",
"reason" : "[nested] nested path [metainfos] is not nested"
}
} ]
},
"status" : 500
}
Any ideas?
You're missing "type":"nested" from your metainfos mapping.
It should have been:
"metainfos" : {
"type":"nested",
"properties" : {
"category 1" : {
"type" : "string"
},
"key" : {
"type" : "string"
},
"null" : {
"type" : "string"
},
"processos" : {
"type" : "string"
}
}
}
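Note that an existing object field usually cannot be switched to nested in place, so the corrected mapping typically goes into a new index which the old data is then reindexed into. A sketch, with living_v2 as a placeholder name, only the changed field shown (carry over the rest of the fuas properties as well), and the _reindex API assumed to be available (it exists from 2.3 on):
curl -XPUT 'http://localhost:9201/living_v2' -d '
{
  "mappings" : {
    "fuas" : {
      "properties" : {
        "metainfos" : {
          "type" : "nested",
          "properties" : {
            "category 1" : { "type" : "string" },
            "key" : { "type" : "string" },
            "null" : { "type" : "string" },
            "processos" : { "type" : "string" }
          }
        }
      }
    }
  }
}'
curl -XPOST 'http://localhost:9201/_reindex' -d '
{
  "source" : { "index" : "living_v1" },
  "dest" : { "index" : "living_v2" }
}'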
I am using the elasticsearch Ruby gem to connect to an ES server and currently have an index with the mapping below. I am trying to understand the proper syntax for querying these nested objects. I've been experimenting with queries such as the following, but keep getting errors. Could someone get me started on the proper syntax for querying a structure like this? Thanks!
client = Elasticsearch::Client.new log:true
client.search index: 'injuries', nested: { path: { week: {id: '1' } } }
Returns:
Elasticsearch::Transport::Transport::Errors::BadRequest: [400] {"error":"SearchPhaseExecutionException[Failed to execute phase [query
Sample Mapping:
{
"injuries" : {
"mappings" : {
"tbd" : {
"properties" : {
"injuries" : {
"properties" : {
"timestamp" : {
"properties" : {
"__content__" : {
"type" : "string"
},
"timeZone" : {
"type" : "string"
}
}
}
}
}
}
},
"football" : {
"properties" : {
"injuries" : {
"properties" : {
"timestamp" : {
"properties" : {
"__content__" : {
"type" : "string"
},
"timeZone" : {
"type" : "string"
}
}
},
"week" : {
"properties" : {
"id" : {
"type" : "string"
},
"inactivePlayers" : {
"properties" : {
"inactivePlayer" : {
"properties" : {
"firstName" : {
"type" : "string"
},
"lastName" : {
"type" : "string"
},
"playerId" : {
"type" : "string"
},
"position" : {
"type" : "string"
},
"status" : {
"type" : "string"
},
"teamId" : {
"type" : "string"
}
}
}
}
},
"injuredPlayers" : {
"properties" : {
"injuredPlayer" : {
"properties" : {
"displayName" : {
"type" : "string"
},
"firstName" : {
"type" : "string"
},
"gameStatus" : {
"type" : "string"
},
"injury" : {
"type" : "string"
},
"lastName" : {
"type" : "string"
},
"playerId" : {
"type" : "string"
},
"position" : {
"type" : "string"
},
"practiceStatus" : {
"type" : "string"
},
"teamId" : {
"type" : "string"
}
}
}
}
},
"season" : {
"type" : "string"
},
"seasonType" : {
"type" : "string"
}
}
}
}
}
}
}
}
}
}
Your nested query doesn't appear to have a query defined. I think it should be something like:
"nested" : {
"path" : "week",
"query" : {
"match" : {"week.id" : "1"}
}
}
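Wrapped in a full request body, and assuming week is actually mapped with "type": "nested" (the sample mapping above doesn't show that, so the mapping may need updating first, and the path may need to be the full injuries.week), it would look something like:
GET injuries/_search
{
  "query" : {
    "nested" : {
      "path" : "week",
      "query" : {
        "match" : { "week.id" : "1" }
      }
    }
  }
}
With the Ruby client, that hash is passed via the body: option, e.g. client.search index: 'injuries', body: { ... }, rather than as top-level keyword arguments.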