How to generate a custom Auditbeat field? - elasticsearch

I need to add a field to each event in the Auditbeat data.
That is, each document should contain a field "environment": "production".
Note: I need a solution that does not involve Logstash.
How can I do this?

You can do this using Logstash and the mutate filter plugin. Something like this:
filter {
  mutate {
    add_field => { "environment" => "production" }
  }
}
EDIT: without Logstash. Since the Beats are open source, you could modify the beat itself to match your specification, but that is clearly a bad solution. Another thing you can check is processors, but processors are mainly for keeping/dropping fields and similar tasks; I did not find a processor-based solution for your case.
Finally, the configuration file (yml) has an option called fields.
Optional fields that you can specify to add additional information to the output. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default, the fields that you specify here will be grouped under a fields sub-dictionary in the output document. To store the custom fields as top-level fields, set the fields_under_root option to true.
fields_under_root: true
fields:
  environment: production
  another_field: 1234
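For illustration only (the timestamp and event contents below are made up, only the placement of the custom fields matters), the resulting documents would differ roughly like this:

With fields_under_root: true the custom fields are top-level:
{
  "@timestamp": "2021-01-01T00:00:00.000Z",
  "event": { "module": "auditd" },
  "environment": "production",
  "another_field": 1234
}
With the default (fields_under_root: false) they are nested under a fields sub-dictionary:
{
  "@timestamp": "2021-01-01T00:00:00.000Z",
  "event": { "module": "auditd" },
  "fields": {
    "environment": "production",
    "another_field": 1234
  }
}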

Related

Elasticsearch + Logstash: How to add a fields based on existing data at importing time

Currently, I'm importing data into Elasticsearch through Logstash, at the moment by reading CSV files.
Now let's say I have two numeric fields in the CSV: age and weight.
I need to add a third field on the fly, computed from the age, the weight and some external data (or a function result), and that third field has to be created when the data is imported.
Is there any way to do this?
What could be the best practice?
In all Logstash filter sections you can add fields via add_field, but that's typically static data.
Math calculations need a separate plugin.
As mentioned there, the ruby filter plugin would probably be your best option. Here is an example snippet for your pipeline:
filter {
  # add a calculated field, for example BMI, from height (in metres) and weight (in kg)
  ruby {
    code => "event.set('bmi', event.get('weight').to_f / (event.get('height').to_f ** 2))"
  }
}
Alternatively, Kibana has scripted fields, which are meant for visualization but cannot be queried.
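As a sketch only (the field names weight and height are assumptions, with values taken as kg and metres), such a Kibana scripted field of type number could use a Painless expression along these lines:
// computes weight / height^2 per document at query time, using doc values
doc['weight'].value / (doc['height'].value * doc['height'].value)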

Filtering Elasticsearch fields from index/store

I was wondering what the recommended approach is to keep some of the fields that are sent to Elasticsearch out of the store and index?
I want to prevent some fields from getting indexed in Elasticsearch. You may ask why I am sending them to Elasticsearch in the first place. Unfortunately, they are sent by another application that doesn't offer any filtering mechanism, so the filtering has to happen at indexing time. Here is what we have done, but I am not sure what the consequences of these steps would be:
1- Disable dynamic mapping ("dynamic": "false" ) in ES templates.
2- Including only the required fields in _source and excluding the rest.
According to the ES documentation, some ES functionality is disabled when the _source field is disabled. Given that I don't need the filtered fields at all, I was wondering whether the described approach will break anything for the remaining fields or not?
There are a few mapping parameters that allow you to do what you want:
index: true/false: if true the field value is indexed in order to be searched later on (default: true)
store: true/false: if true the field values are stored in addition to being indexed. Usually, the field values are stored in the source already, but you can choose to not store the source but store the field value itself (default: false)
enabled: true/false: applies only to the mapping type as a whole or to object fields; if false, the values are kept in _source but not indexed at all
So you can use any combination of the above parameters if you don't want to modify the source documents and simply let ES do it for you.
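A minimal mapping sketch combining these parameters (index and field names are hypothetical; ES 7+ typeless syntax is used here):
PUT /my-index
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "message": { "type": "text", "index": false },
      "payload": { "type": "object", "enabled": false },
      "user_id": { "type": "keyword", "store": true }
    }
  }
}
In this sketch, message stays in _source but is not searchable, payload is not indexed at all, and user_id is both indexed and stored separately from _source.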

How to easily change a field from analyzed to non_analyzed

I have a hostname field that comes in via Filebeat to my Logstash instance and is passed on to Elasticsearch, where it is treated as an analyzed field. That's causing issues, because the field needs to be reported on in its entirety.
Example: Knowing how many requests come to "prd-awshst-x-01" rather than splitting those out into prd, awshst, x, 01.
Does anyone have a lightweight way of doing this that can be used with visualizations?
Thanks,
We have to update the mapping from analyzed to not_analyzed for the specific field, for example:
PUT /your-index/_mapping/your-type
{
  "properties": {
    "hostname": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}
Note that Elasticsearch will reject this change with a merge conflict if the field already exists as analyzed; in that case you have to reindex into a new index with the correct mapping.
After updating the property, please check whether it is reflected in the mapping using a GET on the mapping URL.
Based on the title of your post, you already know that you need to change the mapping of the field to not_analyzed.
You should setup a template so that future indexes contain this mapping.
If you want to keep the existing data, you'll have to reindex it into a new index with the new mapping.
If you're using the default Logstash template, it might already be creating a not_analyzed ".raw" sub-field that you can use in visualizations in Kibana.
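A rough sketch of the template-plus-reindex route (index and field names are assumptions, pre-6.x syntax since not_analyzed only exists there; _reindex requires ES 2.3 or later):
PUT /_template/hostname_not_analyzed
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "hostname": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "logstash-2016.01.01" },
  "dest":   { "index": "logstash-2016.01.01-reindexed" }
}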
The index template that is provided with Filebeat configures the hostname field as not_analyzed.
You should manually install the index template provided with Filebeat and then configure Logstash to write data to the Filebeat index as described in the docs.
This is what the elasticsearch output would look like. If you are processing other data through Logstash, then you might want to add a conditional around this output so that only beat events are sent via this output.
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
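For the conditional mentioned above, a sketch that only routes events carrying beat metadata through this output (everything else about the pipeline is assumed) could look like:
output {
  if [@metadata][beat] {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}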

Elastic search versioning using last updated timestamp?

Our data model doesn't have a separate version field. One of the ways we version the data model is by the id and the last-updated timestamp: the version is incremented when a new record with the same id but a later last-updated timestamp is received.
However, in Elasticsearch there is no way to derive the value of the _id field, and multi-fields cannot be applied to the _id field.
Our system is reactive and message driven, so we can't rely on the order in which we receive the messages.
Is there any way we can solve versioning in a performant way?
The _version field in Elasticsearch is not meant for versioning your data. It's there to ensure you are working on the expected document (e.g. if you read a doc and decide to delete it, then it would be wise to add the version number of the read doc to the delete command).
You can set the _id field to "[your_id]_[timestamp]" and add two additional fields "my_id" and "timestamp".
How to set the _id to "[your_id]_[timestamp]"? If you use Logstash, then you can use the mutate filter:
mutate { add_field => { "id" => "%{your_id}_%{timestamp}" } }
should work. If you don't use Logstash, you have to build the id field in a similar way yourself.
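To actually index with that combined value as the document _id, a sketch of the Logstash elasticsearch output (host and index name are assumptions) could be:
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "my-index"
    document_id => "%{id}"  # use the "[your_id]_[timestamp]" value built above as the _id
  }
}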

Elasticsearch enum field

Say I have a field that can only have a finite set of values.
Would it not be more efficient (index-wise, and/or storage-wise) to store it as some kind of ENUM?
Is there some such possibility in elasticsearch?
An example would be the names of the states in a state machine.
Yes it would. When you index full text fields, Elasticsearch also indexes information like the length of the field, and the position and frequency of each term in the field.
These are irrelevant to ENUM values, and can be excluded completely.
In fact, if you map your field as {"index": "not_analyzed"} then, besides storing the exact value that you provide without trying to analyze it, it also disables storage of the extra info that I mentioned above.
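For example, a not_analyzed mapping for a state-machine field could look like this (index, type and field names are hypothetical, pre-5.x string syntax; on ES 5+ the keyword type behaves the same way):
PUT /machines/_mapping/machine
{
  "properties": {
    "state": { "type": "string", "index": "not_analyzed" }
  }
}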
In your app, use a hash map like { "enumVal1" => 1, "enumVal2" => 2, "enumValX" => 3 } and then store only the mapped values in ES; this can save space.
