elastic stack: I need to set Time Filter field name with another field - elasticsearch

I need to read messages (whose content is logs) from RabbitMQ with Logstash and then send them to Elasticsearch to build monitoring visualizations in Kibana. So I wrote the Logstash input for reading from RabbitMQ like this:
input {
  rabbitmq {
    queue => "testLogstash"
    host => "localhost"
  }
}
and I wrote the Logstash output configuration for storing in Elasticsearch like this:
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "d13-%{+YYYY.MM.dd}"
  }
}
Both of them are placed in myConf.conf.
The content of each message is a JSON document that contains fields like this:
{
  "mDate": "MMMM dd YYYY, HH:mm:ss.SSS",
  "name": "test name"
}
But there are two problems. First, there is no date field offered when creating the new index pattern (Time Filter field name). Second, when I use this field in place of the default @timestamp, it cannot be used to build those types of graphs. I think the reason is the data type of the field: it should be of type date, but it is treated as a string.
I tried to convert the value of the field to a date with mutate in the Logstash config like this:
filter {
  mutate {
    convert => { "mdate" => "date" }
  }
}
Now, two questions arise:
1- Is this the problem? If yes, what is the right solution to fix it?
2- My main need is to use the time when messages are put into the queue, not when Logstash picks them up. What is the best solution?

If you don't specify a value for @timestamp, you should get the current system time when Elasticsearch indexes the document. With that, you should be able to see items in Kibana.
If I understand you correctly, you'd rather use your mDate field for @timestamp. For this, use the date{} filter in Logstash.
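A minimal sketch of that date{} filter, assuming mDate really arrives as text in the "MMMM dd YYYY, HH:mm:ss.SSS" shape shown above (the exact pattern string and timezone are assumptions you would need to adapt):
filter {
  date {
    # parse the mDate string, e.g. "May 24 2016, 10:20:15.123"; the Joda pattern
    # below is an assumption based on the sample shape and must match the real text
    match    => [ "mDate", "MMMM dd yyyy, HH:mm:ss.SSS" ]
    # overwrite @timestamp with the parsed value so Kibana can use it as the time field
    target   => "@timestamp"
    timezone => "UTC"
  }
}
This also addresses the second question: if mDate is set by the producer when the message enters the queue, @timestamp then reflects that moment rather than the time Logstash picked the message up.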

Related

Searching or filtering on a multi value field

I have a MySQL database with a table that contains two important fields: title and age_range.
That table stores the range as a string like '45;60' for documents aimed at users between 45 and 60 years old, '18;70' for users between 18 and 70 years old, and so on.
Now I would like to run the query 'test' on the field title with the filter '18;50' on the field age_range, returning all documents matching 'test' whose age range overlaps this interval (which would include the two example rows above).
For context, I use Logstash to index my data.
How can I achieve this?
Is any treatment needed while indexing my data with Logstash?
Is there any filter or tokenizer to use while indexing with an ES analyzer?
Thank you in advance
You can split the data into two fields with a grok filter. To ship the data to Elasticsearch, you can use the Logstash jdbc input (jdbc_streaming is a filter plugin, not an input) together with the elasticsearch output. Configure your pipeline like below:
input {
  jdbc {
    # Configuration of the jdbc input
    # https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html
  }
}
filter {
  # Split the field into two separate fields
  grok {
    match => { "age_range" => "%{NUMBER:age_range_bottom};%{NUMBER:age_range_top}" }
  }
}
output {
  elasticsearch {
    # elasticsearch output configuration
  }
}
Analysis depends on how you want to search these fields. If all you need on the range is a range filter, numeric fields with the default mapping are enough (see the conversion sketch below), but you should do some work on title; for example, you can follow this example to handle autocomplete.
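As a minimal sketch of that conversion step (assuming the grok above has already produced age_range_bottom and age_range_top), convert the two bounds to integers so Elasticsearch maps them as numbers and range filters work on them directly:
filter {
  # after the grok filter shown above has extracted the two bounds as strings
  mutate {
    convert => {
      "age_range_bottom" => "integer"
      "age_range_top"    => "integer"
    }
  }
}
With numeric mappings you can then filter, for example, for documents whose age_range_bottom is <= 50 and age_range_top is >= 18 to find ranges that overlap '18;50'.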

logstash add_field conversion issue

I am using Logstash version 5.0.2 and parsing a file with the grok filter; the filename is one of the parsed fields. For visualization I needed a file number to identify each file, so I added a new field through the mutate filter's add_field, checking for the filename in [message]:
if 'filename_1' in [message] {
  mutate { add_field => { "file_no" => "13" } }
  mutate { convert => [ "file_no", "float" ] }
}
If I check the parsing through stdin/stdout (with the rubydebug codec), it shows the file_no field is converted properly, but if I send the Logstash output to Elasticsearch, Kibana shows a conflict in the data type of that field: I can see file_no.keyword (as string) and file_no (as conflict), with the error:
Mapping conflict! A field is defined as several types (string, integer,
etc) across the indices that match this pattern. You may still be able to use
these conflict fields in parts of Kibana, but they will be unavailable for
functions that require Kibana to know their type. Correcting this issue will
require reindexing your data
I have converted the added field, so I am not sure why it is still being sent to Elasticsearch as a string.
Any help would be great.
When I tried converting the field in Kibana, there was no option for number. The source logfile being monitored doesn't contain this number, so I can't parse it directly as an integer with %{PATTERN_FOR_NUMBER:number_variable:int}, otherwise this would have been easier.
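For what it's worth, the Kibana message above points at the likely cause: the conflict is across the indices matching the pattern, so indices written before the convert was in place presumably still map file_no as a string, while newer ones map it as a number, and the index pattern reports a conflict until those older indices are reindexed or dropped. The filter itself can stay as in the question; here it is only rewritten with the newer hash syntax for convert:
filter {
  if 'filename_1' in [message] {
    # two separate mutate blocks on purpose: add_field is applied after the other
    # mutations inside a single mutate, so the convert needs its own block
    mutate { add_field => { "file_no" => "13" } }
    mutate { convert   => { "file_no" => "float" } }
  }
}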

Logstash Filter for a custom message

I am trying to parse a bunch of strings in Logstash and output is set as ElasticSearch.
Sample input string is: 2016 May 24 10:20:15 User1 CREATE "Create a new folder"
The grok filter is:
match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{WORD:user} %{WORD:action_performed} %{WORD:action_description} "}
In Elasticsearch, I am not able to see separate columns for the different fields such as timestamp, user, action_performed, etc.
Instead the whole string is under a single column "message".
I would like to store the information in separate fields instead of just a single column.
Not sure what to change in logstash filter to achieve as desired.
Thanks!
You need to change your grok pattern to this, i.e. use QUOTEDSTRING instead of WORD, and it will work!
match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{WORD:user} %{WORD:action_performed} %{QUOTEDSTRING:action_description}"}

Logstash Dynamic Index From Document Field Fails

I still can't figure out how to tell Logstash to send documents to a dynamic index based on a document field. Furthermore, this field must be transformed in order to get the "real" index name in the end.
Given that there is a field "time" (a UNIX timestamp), this field is already transformed with a "date" filter into a DateTime object for Elastic.
Additionally, it should serve as the index name (YYYYMM). The index should NOT be derived from @timestamp, which is not touched.
Example:
{...,"time":1453412341,...}
Shall go to the Index: 201601
I use the following Config:
filter {
  date {
    match => [ "time", "UNIX" ]
    target => "time"
    timezone => "Europe/Berlin"
  }
}
output {
  elasticsearch {
    index => "%{time}%{+YYYYMM}"
    document_type => "..."
    document_id => "%{ID}"
    hosts => "..."
  }
}
Sadly, it's not working. Any idea how to achieve that?
Thanks a lot!
The "%{+YYYYMM}" says to use the date values from #timestamp. If you want an index named after the YYYYMM in %{time}, you need to make a string out of that date field and then reference that string in the output stanza. There might be a mutate{} that would do it, or drop into ruby{}.
In most installations, you want to set #timestamp to the event's value. The default of logstash's own time is not very useful (imagine if your events were delayed by an hour during processing). If you did that, then %{+YYYYMM}" would work just fine.
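A sketch of the ruby{} route, keeping @timestamp untouched as required; the field name index_prefix is made up here, and the event.get/event.set calls assume a Logstash 5+ event API (older versions use event['field']):
filter {
  date {
    match => [ "time", "UNIX" ]
    target => "time"
    timezone => "Europe/Berlin"
  }
  ruby {
    # build a plain string like "201601" from the parsed date field; assumes the
    # date filter above succeeded, and formats the UTC value of the timestamp
    code => "event.set('index_prefix', event.get('time').time.strftime('%Y%m'))"
  }
}
output {
  elasticsearch {
    index => "%{index_prefix}"
    hosts => "..."
  }
}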
This happens because the index name is created based on UTC time by default.

Elasticsearch converting a string to number

I am new to Elasticsearch and am just starting up with the ELK stack. I am collecting key-value type logs in my Logstash and passing them to an index in Elasticsearch. I am using the kv filter plugin in Logstash. Because of this, all the fields are of string type by default.
When I try to perform aggregation like avg or sum on a numeric field in Elasticsearch, I am getting an Exception: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]
When I check the mappings in the index, all the fields except the timestamp ones are marked as string.
Please tell me how to overcome this issue as I have many numeric fields in my log events for aggregation.
Thanks,
Keerthana
You could set explicit mappings for those fields (see e.g. Change default mapping of string to "not analyzed" in Elasticsearch for some guidance), but it's easier to just convert those fields to integers in Logstash using the mutate filter:
mutate {
  convert => ["name-of-field", "integer"]
}
Then Elasticsearch will do a better job at guessing the best data type for your field(s).
(See also Data type conversion using logstash grok.)
In the latest Logstash the syntax is as follows:
filter {
  mutate {
    convert => { "fieldname" => "integer" }
  }
}
You can visit this link for more detail: https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-convert
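Putting it together with the kv filter mentioned in the question, a minimal sketch could look like this; the field name response_time is made up purely for illustration, substitute your own numeric fields:
filter {
  kv {
    # split key=value pairs out of the message into individual fields
    # (all of them arrive as strings at this point)
  }
  mutate {
    # convert the fields you want to aggregate on (avg, sum, ...) to numbers
    convert => { "response_time" => "integer" }
  }
}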
