Disable state management history in Elasticsearch with Open Distro

I have Elasticsearch on AWS, which uses Open Distro rather than Elastic's ILM.
When you apply index state management policies to indices, a huge number of audit-history indices get created. I would like to disable this completely.
https://opendistro.github.io/for-elasticsearch-docs/docs/ism/settings/
According to the docs, it should just be a matter of setting opendistro.index_state_management.history.enabled to false, but when I apply it to _cluster/settings it doesn't appear to work:
PUT _cluster/settings
{
  "opendistro.index_state_management.history.enabled": false
}
Results in:
{
  "Message": "Your request: '/_cluster/settings' payload is not allowed."
}
The setting is also not valid on an index template so I cannot set it there.
How can I disable this audit history?

I asked on GitHub and got an answer:
PUT _cluster/settings
{
  "persistent" : {
    "opendistro.index_state_management.history.enabled": false
  }
}
The payload needs to be wrapped in a persistent (or transient) block.
https://opendistro.github.io/for-elasticsearch-docs/docs/elasticsearch/configuration/
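For scripted setups, the required shape of the request body can be sketched in Python (a minimal sketch; the helper function name is hypothetical):

```python
import json

# Dynamic cluster settings must be nested under "persistent" (survives
# restarts) or "transient" (cleared on restart); a bare top-level setting
# is rejected, as in the question above.
def cluster_settings_body(setting, value, scope="persistent"):
    if scope not in ("persistent", "transient"):
        raise ValueError("scope must be 'persistent' or 'transient'")
    return json.dumps({scope: {setting: value}})

body = cluster_settings_body(
    "opendistro.index_state_management.history.enabled", False)
print(body)  # send as the payload of PUT /_cluster/settings
```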

Related

Parsing syslog with a Logstash grok filter isn't working with Kibana

I have created a very basic grok filter to parse Cisco syslogs:
input {
  udp {
    port => 5140
    type => syslog
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:Timestamp_Local} %{IPV4:IP_Address} %{GREEDYDATA:Event}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    index => "ciscologs-%{+YYYY.MM.dd}"
  }
}
After reloading Logstash and verifying that the logs showed no major issues, I reloaded Kibana and refreshed the indices.
In the Discover section I saw that the index was indeed created, but its fields were the default ones, not the ones defined in the grok filter.
The logs received after adding the filter show a grok parse failure tag in Kibana, even though, before adding the filter, I made sure the pattern works using Kibana's Grok debugger.
The tag states that there was a problem parsing the logs, but at this point I'm not sure where the issue might be, so any help would be appreciated.
Running versions are 7.7.1 (Kibana/Elasticsearch) and 7.13.3 (Logstash).
I found the problem: I was trying to match the logs in the order sent by the Cisco devices rather than the contents of the "message" field. Once I modified that, the filter started working as expected.
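The distinction matters because grok matches against the text of the "message" field. Roughly, the pattern above corresponds to this Python regex (a simplified sketch; grok's TIMESTAMP_ISO8601 and IPV4 patterns are more permissive, and the sample log line is made up):

```python
import re

# Simplified equivalents of %{TIMESTAMP_ISO8601}, %{IPV4}, %{GREEDYDATA}.
pattern = re.compile(
    r"(?P<Timestamp_Local>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:?\d{2})?)"
    r"\s+(?P<IP_Address>\d{1,3}(?:\.\d{1,3}){3})"
    r"\s+(?P<Event>.*)"
)

line = "2021-07-15T10:22:33.123Z 10.0.0.1 %LINK-3-UPDOWN: Interface GigabitEthernet0/1, changed state to up"
m = pattern.match(line)
print(m.group("Timestamp_Local"))  # 2021-07-15T10:22:33.123Z
print(m.group("IP_Address"))       # 10.0.0.1
```

If the raw line doesn't start with the timestamp (as with the Cisco devices here), the match fails and Logstash adds the parse-failure tag.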

Managing the output of Logstash pipelines

We're trying to add a field to all pipelines on our Logstash servers (we have 6 on-premise Logstash instances, 3 per country).
Specifically, we want to use an environment variable to mark each pipeline's output with a suffix in the index name, for example (us, eu). But we have many pipelines (approximately 145 per country), so the idea is not to add this environment variable to every output plugin by hand; it also isn't enforced, so if someone forgets to add the environment variable we'll have serious problems.
So we're looking for a way to add this field automatically to each output without declaring the environment variable everywhere. In your experience, is it possible in the Logstash "world" to attach a suffix to the index name in an output plugin?
Example:
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}_${COUNTRY_VARIABLE}"
  }
}
I want to add ${COUNTRY_VARIABLE} automatically before sending the document.
Doing this on the Elasticsearch side isn't possible for us, because it's hosted on AWS and the traffic required to inspect all possible Logstash hosts/inputs is a cost we don't want to take on.
Sure, this will work. If you add a fallback value to the env var, you're fine in the case where someone forgot to define one: ${COUNTRY_VARIABLE:XX}
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}_${COUNTRY_VARIABLE:XX}"
  }
}
See the Logstash documentation on using environment variables for more background.
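Logstash's ${VAR:default} syntax is an environment lookup with a fallback; the behavior can be sketched in Python (the variable and fallback names are the ones from the example above):

```python
import os

# ${COUNTRY_VARIABLE:XX} -> use the env var if it is set, else the literal "XX".
def resolve(name, default):
    return os.environ.get(name, default)

suffix = resolve("COUNTRY_VARIABLE", "XX")
# The date part normally comes from Logstash's %{+YYYY.MM.dd}; fixed here.
index_name = f"index-2021.07.15_{suffix}"
print(index_name)
```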

Elasticsearch: How to delete unsupported static index setting created by a previous release?

How do you delete a static setting from an index if the setting is no longer supported by, or known to, the running ES version?
Indices created with ES 5.2.2 or 5.3.0 were shrunk as part of a hot-warm strategy in order to lower the number of shards.
This shrinking created two static index settings, shrink.source.name and shrink.source.uuid, in each newly created index.
The new indices work as expected.
In the meantime I upgraded to ES 6.8.1 and am preparing the cluster for ES 7.0, since indices created with older versions are no longer supported in ES 7.0.
Kibana offers a nice UI for the required reindexing, but it fails due to these two unsupported settings.
As I have no need for these settings anyway (they are just informational for me), I want to delete them from the indices.
Deleting a static setting from an index requires the following steps:
close the index
set the setting to null
reopen the index
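The steps above can be sketched as a request sequence (the index name and setting are the ones from the failing example below; for a setting the running version still recognizes, the null value removes it):

```python
import json

# close -> null the setting -> reopen: the documented way to change or
# remove a static index setting (only works for recognized settings).
index = "logstash-20160915"
steps = [
    ("POST", f"/{index}/_close", None),
    ("PUT", f"/{index}/_settings", {"index": {"shrink.source.uuid": None}}),
    ("POST", f"/{index}/_open", None),
]
for method, path, body in steps:
    print(method, path, json.dumps(body) if body is not None else "")
```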
Unfortunately this does not work for settings that are no longer supported by the current version of ES:
curl -X PUT "elk29:9200/logstash-20160915/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index" : {
    "shrink.source.uuid" : null
  }
}'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "remote_transport_exception",
        "reason" : "[elk24][10.21.15.24:9300][indices:admin/settings/update]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "unknown setting [index.shrink.source.uuid] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
  },
  "status" : 400
}
I expected the setting to simply be removed.
Obviously ES emulates the removal of a setting by setting its value to null. Unfortunately this only works with explicitly supported settings, not with outdated, unsupported ones.
The question remains: how do you remove index settings that are no longer supported by the current version of ES?

Filebeat - how to override Elasticsearch field mapping?

We're ingesting data into Elasticsearch through Filebeat and have hit a configuration problem.
I'm trying to specify a date format for a particular field (the standard @timestamp field holds the indexing time, and we need the actual event time). So far I've been unable to do so: I've tried fields.yml, a separate JSON template file, and specifying it inline in filebeat.yml. That last option is just a guess; I haven't found any example of this particular configuration combo.
What am I missing here? I was sure this would work:
filebeat.yml
# rest of the file
template:
  # Template name. By default the template name is filebeat.
  #name: "filebeat"
  # Path to template file
  path: "custom-template.json"
and in custom-template.json
{
  "mappings": {
    "doc": {
      "properties": {
        "eventTime": {
          "type": "date",
          "format": "YYYY-MM-dd HH:mm:ss.SSSS"
        }
      }
    }
  }
}
but it didn't.
We're using Filebeat 6.2.4 and Elasticsearch 6.x.
I couldn't get the Filebeat configuration to work, so in the end I changed the time field format in our service and it worked instantly.
I found the official Filebeat documentation to be lacking complete examples. Maybe that's just my problem.
EDIT: it turns out you can actually specify a list of allowed formats in your mapping.
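A sketch of what such a corrected mapping could look like (the field name is the one from the question; formats are separated by || and tried in order; note that in Joda/Java date patterns lowercase yyyy is the calendar year, while uppercase YYYY is week-based and can shift dates near year boundaries):

```python
import json

# Hypothetical corrected template: the date field accepts several formats,
# separated by "||"; the first matching format wins when parsing.
mapping = {
    "mappings": {
        "doc": {
            "properties": {
                "eventTime": {
                    "type": "date",
                    "format": "yyyy-MM-dd HH:mm:ss.SSSS||yyyy-MM-dd||epoch_millis",
                }
            }
        }
    }
}
print(json.dumps(mapping, indent=2))
```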

Unable to update Indices Recovery settings dynamically in elasticsearch

As per this article in the Elasticsearch reference, we can update the following settings dynamically on a live cluster with the cluster-update-settings API:
indices.recovery.file_chunk_size
indices.recovery.translog_ops
indices.recovery.translog_size
But when I try to update any of the above I am getting the following error:
PUT /_cluster/settings
{
  "transient" : {
    "indices.recovery.file_chunk_size" : "5mb"
  }
}
Response:
"type": "illegal_argument_exception",
"reason": "transient setting [indices.recovery.file_chunk_size], not dynamically updateable"
Have they changed this and not updated the reference article, or am I missing something? I am using Elasticsearch 5.0.2.
They have been removed in this pull request:
indices.recovery.file_chunk_size - now fixed to 512kb
indices.recovery.translog_ops - removed without replacement
indices.recovery.translog_size - now fixed to 512kb
indices.recovery.compress - removed; file chunks are not compressed (Lucene already compresses them), but translog operations are.
But I'm surprised it is not reflected in the documentation.
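For comparison, recovery throttling remained dynamically updateable via indices.recovery.max_bytes_per_sec, so a transient update shaped like the following is still accepted (a sketch of the request body only):

```python
import json

# Unlike the removed settings above, indices.recovery.max_bytes_per_sec
# stayed dynamic, so this body is valid for PUT /_cluster/settings.
body = {"transient": {"indices.recovery.max_bytes_per_sec": "40mb"}}
print(json.dumps(body))
```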
