How can I create "duplicated index patterns" in Kibana - elasticsearch

I'm using Kibana 7.10.1.
I need a different 'time field' for each index pattern. Is it possible to set multiple time fields for the same index?

You can pick any date (or date_nanos) field as the primary time field in an index pattern; it is selected on the second step of the index pattern creation flow.
@timestamp is just a convention. You will, however, need to create a separate index pattern for each combination of index(es) and primary time field.
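Since Kibana allows index patterns with duplicate titles, one way to script this is through the Kibana saved objects API. A minimal sketch, assuming a tickets-* index with two hypothetical date fields, created_at and resolved_at (these requests go to the Kibana server, not Elasticsearch, and need the kbn-xsrf header when sent over plain HTTP):

    POST /api/saved_objects/index-pattern
    {
      "attributes": {
        "title": "tickets-*",
        "timeFieldName": "created_at"
      }
    }

    POST /api/saved_objects/index-pattern
    {
      "attributes": {
        "title": "tickets-*",
        "timeFieldName": "resolved_at"
      }
    }

The result is two "duplicated" index patterns over the same indices, differing only in which field drives the time picker.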

Related

How can I mark a field which represents the time that events occurred?

I am using Kibana to search Elasticsearch documents, and I found that Kibana marks a field which represents the time that the event occurred.
When I search an index with such documents, I can make use of the datetime picker.
I noticed that for documents (in another index) without such a field, the datetime picker is missing. So how can I select a field and mark it as the event time?
This is handled at the index pattern level:
When creating your index pattern you should be able to choose the "Time filter field name".
There you can choose the date field, and the datetime picker will then be available.
If your current index pattern doesn't seem to have one, create a new index pattern and use it instead.
When you declare your index mapping, apply the null_value parameter. It could simply have the value 0 (the 0th epoch second). That way, when you select the maximum date range, it will pull in all your docs.
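A minimal mapping sketch of that idea, assuming a 7.x index; the index name my-events and the field event_time are hypothetical, and the epoch_millis format lets "0" stand for the epoch start:

    PUT my-events
    {
      "mappings": {
        "properties": {
          "event_time": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis",
            "null_value": "0"
          }
        }
      }
    }

Note that null_value only substitutes for explicit nulls; documents that omit the field entirely still have no value to filter on.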

elasticsearch - Tag data with lookup table values

I'm trying to tag my data according to a lookup table.
The lookup table has these fields:
• Key - represents the field name in the data I want to tag.
In the real data the field is a subfield of the "Headers" field.
An example for the "Key" field:
"*Server*" (* is a wildcard).
• Value - represents the wanted value of the field mentioned above.
The value in the lookup table is only part of the string in the real data's value.
An example for the "Value" field:
"Avtech".
• Vendor - the value I want to add to the real data if a combination of field and value is found in a document.
An example of a combination in the real data:
"Headers.Server : Linux/2.x UPnP/1.0 Avtech/1.0"
A match with that document in the lookup table would be:
Key = Server (with wildcards on both sides)
Value = Avtech (with wildcards on both sides)
Vendor = Avtech
So basically I'll need to add a field to that document with the value "Avtech".
The subfields in "Headers" are dynamic fields that change from document to document.
If a match is not found, I'll need to set the tag field to the value "Unknown".
I've tried to use the enrich processor, using the lookup table as the source data; the match field would be "Value" and the enrich field "Vendor".
In the enrich processor I didn't know how to refer to the field, since it's dynamic, and I wanted to search for the value anywhere in the "Headers" subfields.
Also, I don't think there will be a match between the "Value" in the lookup table and the value of the Headers subfield, since the "Value" field in the lookup table is a substring with wildcards on both sides.
I could use some help accomplishing what I'm trying to do, and with how to search with wildcards inside an enrich processor.
Or, if you have another idea besides the enrich processor, such as a parent-child or lookup-terms mechanism.
Thanks!
Adi.
There are two ways to accomplish this:
Using the combination of Logstash & Elasticsearch
Using only the Elasticsearch ingest node
Constraint: You need to know the position of the Vendor term occurring in the Header field.
Approach 1
If so, you can use the GROK filter to extract the term, and based on the term found, do a lookup to get the corresponding value.
Reference
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html
Approach 2
Create an index consisting of KV pairs. In the ingest node, create a pipeline which consists of a Grok processor followed by an Enrich processor. The Grok part works the same way as in Approach 1, and you seem to have already got the Enrich part working.
Reference
https://www.elastic.co/guide/en/elasticsearch/reference/current/grok-processor.html
If you are able to isolate the subfield within the Header where the term of interest is present, it will make things easier for you (see the sketch below).
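A minimal sketch of Approach 2, assuming the vendor term sits in Headers.Server just before a "/version" suffix, and assuming a hypothetical lookup index vendor-lookup with fields value and vendor:

    PUT _enrich/policy/vendor-policy
    {
      "match": {
        "indices": "vendor-lookup",
        "match_field": "value",
        "enrich_fields": ["vendor"]
      }
    }

    POST _enrich/policy/vendor-policy/_execute

    PUT _ingest/pipeline/tag-vendor
    {
      "processors": [
        {
          "grok": {
            "field": "Headers.Server",
            "patterns": ["%{GREEDYDATA} %{WORD:vendor_term}/%{NUMBER}$"],
            "ignore_failure": true
          }
        },
        {
          "enrich": {
            "policy_name": "vendor-policy",
            "field": "vendor_term",
            "target_field": "vendor_info",
            "ignore_missing": true
          }
        },
        {
          "set": {
            "if": "ctx.vendor_info == null",
            "field": "Vendor",
            "value": "Unknown"
          }
        }
      ]
    }

Note that the enrich match policy does exact matching, so the grok step has to extract exactly the token stored in the lookup index; wildcard matching on both sides is not something the enrich processor supports.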

Elastic Search - Find document with a conflicting field type

I'm using Elasticsearch 5.6.2 with Kibana and I'm currently facing a problem.
My documents are indexed on the field timestamp, which is normally an integer; however, recently somebody logged a document with a timestamp that is not an integer, and Kibana complains of a conflicting type.
The discover panel displays nothing and the following errors pop up:
Saved "field" parameter is now invalid. Please select a new field.
Discover: "field" is a required parameter
How can I look for the document(s) causing these conflicts, so as to find the service creating the bad logs?
The field type (either integer or text/keyword) is not defined on a per-document basis but rather per index (in the mappings). I guess you are manipulating time series data, and you probably have an index per day (or per month, or ...).
In Kibana Dev Tools:
List the created indices with GET _cat/indices
For each index (logstash-2017.09.28 in my example) do a GET logstash-2017.09.28/_mapping and check the type of the @timestamp field
The field type is probably different between indices.
You won't be able to change the field type on already-created indices, and deleting documents won't solve your problem. The only solutions are to drop the index, or to reindex the whole index with a new field type (in a specific mapping).
To avoid this problem on future indices, the solution is to create an index template with a mapping declaring that the @timestamp field is of type date (or whatever fits).
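A minimal sketch of such a template for the 5.x line (the template name and pattern are placeholders; 5.x uses the legacy "template" field and still requires a mapping type, _default_ here):

    PUT _template/logstash-dates
    {
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "properties": {
            "@timestamp": { "type": "date" }
          }
        }
      }
    }

New daily indices then pick up the date type regardless of what the first document of the day looks like.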

kibana unique count seeing names with - as different entery

I have a problem with the unique count feature.
I get data from Elasticsearch, for example a computer name (PC-01) in a field.
When I use a unique count visualisation, Kibana splits "DESKTOP-2D562R2" into "DESKTOP" and "2D562R2" as separate entries.
The problem with this is that "2d562r2" and "desktop" show up as two different entries in a Kibana table or in the unique count.
Your field is being analyzed (split into tokens). Change the mapping (or the template, depending on how you're creating the indexes) to make this field not_analyzed.
Note that, as a hack, logstash's default template creates a ".raw" version of string fields that is not analyzed. You could refer to yourfield.raw instead.
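A minimal mapping sketch using the pre-5.x string syntax the answer implies; the index, type, and field names are placeholders (on Elasticsearch 5.x and later, "type": "keyword" achieves the same thing):

    PUT computers
    {
      "mappings": {
        "doc": {
          "properties": {
            "computer_name": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }

With this mapping, "DESKTOP-2D562R2" is stored as a single token, so unique count treats it as one value.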

elastic search primary key and secondary key

I have an index in Elasticsearch with products. Every product has an article number in the form of a GUID. To show these products in a webshop I don't want to show a GUID (too long); I want an integer number.
So now I have two keys: one to look up the web request (the integer) and one to update the product (the GUID).
I know I can search on a field in Elasticsearch. But is an exact-match search on a field slower than an exact match on a key (_id)? I don't want to do a mapping lookup from one key to the other, because that is another operation.
The _id field is just a primary key for documents. It is stored separately. Yes, there will be some lag, but you'll find it's not much. If you want a field to be searched as fast as the _id field, store the field separately in the mapping; refer to the store attribute of a field.
Like other fields, _id is also stored in ES, and by default it is not analyzed. If you define a field as not_analyzed, it's also as fast as the _id field. ES indexes each and every field the same way.
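For comparison, a sketch of the two lookups; the products index, product type, and webId field are hypothetical, and webId is assumed to be mapped as a non-analyzed (integer or keyword) field:

    # Exact match on the primary key (_id)
    GET products/product/123e4567-e89b-12d3-a456-426655440000

    # Exact match on a secondary key field
    GET products/_search
    {
      "query": {
        "term": { "webId": 10042 }
      }
    }

Both resolve a single term against an index structure; the term query just adds the small overhead of the search phase.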
