Elasticsearch best_compression is not working

I am parsing an Apache access log with Logstash and indexing it into an Elasticsearch index; I have also indexed the geoip and agent fields. While indexing I observed that the Elasticsearch index is 6.7x bigger than the actual file size (space on disk). I just want to understand whether this is the expected behavior or I am doing something wrong here. I am using Elasticsearch 5.0, Logstash 5.0 and Kibana 5.0. I also tried best_compression, but it takes the same disk space. Here is the complete observation, with the configuration files I have tried so far.
My Observations:
Use Case 1:
Logstash Conf
Template File
Apache Log file Size : 211 MB
Total number of lines: 1,000,000
Index Size: 1.5 GB
Observation: Index is 6.7x bigger than the file size.
Use Case 2:
Logstash Conf
Template File
I found a few suggestions for compressing an Elasticsearch index, so I tried those as well:
- Disable the `_all` field
- Remove unwanted fields created by the `geoip` and `agent` parsing
- Enable `best_compression` (`"index.codec": "best_compression"`), roughly as in the sketch after this list
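For illustration, pushing those settings through an index template looks roughly like this. This is only a hedged sketch using the Python client, not my actual template file; it assumes the Elasticsearch 5.x legacy template API, a cluster on localhost:9200, and apache_access_logs as a placeholder index/template name.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Hedged sketch: an ES 5.x legacy index template that disables _all and enables best_compression.
es.indices.put_template(
    name="apache_access_logs",
    body={
        "template": "apache_access_logs",        # index pattern the template applies to
        "settings": {
            "index.codec": "best_compression",   # DEFLATE instead of LZ4 for stored fields
            "number_of_replicas": 0,
        },
        "mappings": {
            "_default_": {
                "_all": {"enabled": False}       # drop the catch-all _all field
            }
        },
    },
)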
Apache Log file Size : 211 MB
Total number of lines: 1,000,000
Index Size: 1.3 GB
Observation: Index is 6.16x bigger than the file size
Log File Format:
127.0.0.1 - - [24/Nov/2016:02:03:08 -0800] "GET /wp-admin HTTP/1.0" 200 4916 "http://trujillo-carpenter.com/" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 5.01; Trident/5.1)"
I found the Logstash + Elasticsearch Storage Experiments, where they report reducing the index size from 6.23x to 1.57x of the raw data, but those are pretty old solutions and they no longer work in Elasticsearch 5.0.
Some more references I have already tried:
- Part 2.0: The true story behind Elasticsearch storage requirements
- https://github.com/elastic/elk-index-size-tests
Is there any better way to optimize the Elasticsearch index size when the only purpose is to show visualizations in Kibana?

I was facing this issue because the index settings were not being applied to the index: my index name and template name were different. After using the same template name and index name, the compression is applied properly.
In the example below the index name is apache_access_logs; the template name (and template pattern) was previously elk_workshop.
Sharing the corrected template and Logstash configuration.
Logstash.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache_access_logs"
    template => "apache_sizing_2.json"
    template_name => "apache_access_logs"   # was "elk_workshop"
    template_overwrite => true
  }
}
Template:
{
  "template": "apache_access_logs",
  "settings": {
    "index.refresh_interval": "30s",
    "number_of_shards": 5,
    "number_of_replicas": 0
  },
  ..
}
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html#indices-templates
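To double-check that the template actually matched, you can read the live index settings back after the index has been (re)created. A minimal sketch with the Python client, assuming Elasticsearch on localhost:9200 as in the Logstash output above:
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Templates only apply to indices created after the template is installed,
# so check the live index settings rather than the template itself.
settings = es.indices.get_settings(index="apache_access_logs")
index_settings = settings["apache_access_logs"]["settings"]["index"]
print("index.codec:", index_settings.get("codec"))  # expect "best_compression"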

Related

Conditional indexing not working in ingest node pipelines

I am trying to implement an index template with data stream enabled and then set conditions in ingest node pipelines, so that I get metrics with the index format mentioned below:
.ds-metrics-kubernetesnamespace
I had tried this some time back, doing the things mentioned above, and it was giving metrics in that format; but now when I implement the same, nothing changes in my index. I cannot see anything in the OpenShift cluster logs, so ingest seems to be working fine (when I add a doc and test it, it works fine).
PUT _ingest/pipeline/metrics-index
{
  "processors": [
    {
      "set": {
        "field": "_index",
        "value": "metrics-{{kubernetes.namespace}}",
        "if": "ctx.kubernetes?.namespace == \"dev\""
      }
    }
  ]
}
This is the ingest node condition I have used for indexing.
metricbeatConfig:
  metricbeat.yml: |
    metricbeat.modules:
      - module: kubernetes
        enabled: true
        metricsets:
          - state_node
          - state_daemonset
          - state_deployment
          - state_replicaset
          - state_statefulset
          - state_pod
          - state_container
          - state_job
          - state_cronjob
          - state_resourcequota
          - state_service
          - state_persistentvolume
          - state_persistentvolumeclaim
          - state_storageclass
          - event
Since you're using Metricbeat, you have another way to do this which is much better.
Simply configure your elasticsearch output like this:
output.elasticsearch:
  hosts: ["http://<host>:<port>"]
  indices:
    - index: "%{[kubernetes.namespace]}"
      mappings:
        dev: "metrics-dev"
      default: "metrics-default"
or like this:
output.elasticsearch:
  hosts: ["http://<host>:<port>"]
  indices:
    - index: "metrics-%{[kubernetes.namespace]}"
      when.equals:
        kubernetes.namespace: "dev"
      default: "metrics-default"
or simply like this, which also works if you have plenty of different namespaces and don't want to manage different mappings:
output.elasticsearch:
  hosts: ["http://<host>:<port>"]
  index: "metrics-%{[kubernetes.namespace]}"
Steps to create data streams in the Elastic Stack:
Create an ILM policy.
Create an index template whose index pattern matches the index pattern of your metrics/logs; set the number of primary/replica shards and the mapping in the index template (see the sketch after these steps).
Set the condition in the ingest pipeline (make sure no index with that name already exists).
If these conditions are met, a data stream is created and the logs/metrics get a backing index whose name starts with .ds- and is hidden in index management.
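For step 2, a composable index template with a data stream enabled looks roughly like the sketch below. This is a hedged example with the Python client, assuming Elasticsearch 7.9+, a cluster reachable on localhost:9200, and an already-created ILM policy; the template name metrics-namespaces and the policy name metrics-ilm are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.put_index_template(
    name="metrics-namespaces",            # placeholder template name
    body={
        "index_patterns": ["metrics-*"],  # must match the rerouted index names
        "data_stream": {},                # writes to matching names create a data stream (.ds-* backing indices)
        "template": {
            "settings": {
                "number_of_shards": 1,
                "number_of_replicas": 1,
                "index.lifecycle.name": "metrics-ilm",  # placeholder ILM policy name
            }
        },
    },
)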
In my case the issue was that I did not have enough permissions to create a custom index. When I checked my OpenShift logs I found that Metricbeat was complaining about the missing privilege, so I gave it superuser permission and then used an ingest pipeline to set up the conditional indexing:
PUT _ingest/pipeline/metrics-index
{
  "processors": [
    {
      "set": {
        "field": "_index",
        "value": "metrics-{{kubernetes.namespace}}",
        "if": "ctx.kubernetes?.namespace == \"dev\""
      }
    }
  ]
}
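To sanity-check that the condition only reroutes documents from the dev namespace, the pipeline can be dry-run with the simulate API. A small sketch with the Python client (the localhost address is a placeholder for your cluster):
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

result = es.ingest.simulate(
    id="metrics-index",
    body={
        "docs": [
            {"_source": {"kubernetes": {"namespace": "dev"}}},
            {"_source": {"kubernetes": {"namespace": "prod"}}},
        ]
    },
)
for doc in result["docs"]:
    # the dev document is rerouted to "metrics-dev"; the prod document keeps the default _index
    print(doc["doc"]["_index"])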

ElasticSearch BulkShardRequest failed due to org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor

I am storing logs into Elasticsearch from my reactive Spring application, and I am getting the following error in Elasticsearch:
Elasticsearch exception [type=es_rejected_execution_exception, reason=rejected execution of processing of [129010665][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logs-dev-2020.11.05][1]] containing [index {[logs-dev-2020.11.05][_doc][0d1478f0-6367-4228-9553-7d16d2993bc2], source[n/a, actual length: [4.1kb], max length: 2kb]}] and a refresh, target allocation id: WwkZtUbPSAapC3C-Jg2z2g, primary term: 1 on EsThreadPoolExecutor[name = 10-110-23-125-common-elasticsearch-apps-dev-v1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6599247a[Running, pool size = 2, active threads = 2, queued tasks = 221, completed tasks = 689547]]]
My index settings:
{
  "logs-dev-2020.11.05": {
    "settings": {
      "index": {
        "highlight": {
          "max_analyzed_offset": "5000000"
        },
        "number_of_shards": "3",
        "provided_name": "logs-dev-2020.11.05",
        "creation_date": "1604558592095",
        "number_of_replicas": "2",
        "uuid": "wjIOSfZOSLyBFTt1cT-whQ",
        "version": {
          "created": "7020199"
        }
      }
    }
  }
}
I have gone through this site:
https://www.elastic.co/blog/why-am-i-seeing-bulk-rejections-in-my-elasticsearch-cluster
I thought adjusting the "write" size of the thread pool would resolve it, but the article mentions that this is not recommended:
Adjusting the queue sizes is therefore strongly discouraged, as it is like putting a temporary band-aid on the problem rather than actually fixing the underlying issue.
So what else can we do to improve the situation?
Other info:
Elasticsearch version 7.2.1
Cluster health is good and there are 3 nodes in the cluster
Indices are created on a daily basis, with 3 shards per index
While you are right that increasing the thread pool queue size is not a permanent solution, you will be glad to know that Elasticsearch itself increased the queue size of the write thread pool (used by your bulk requests) from 200 to 10k in just a minor version upgrade: the queue size is 200 in ES 7.8 and 10k in ES 7.9.
If you are on an ES 7.x version, you can also increase the queue size yourself, if not to 10k then at least to 1k, to avoid rejecting the requests.
If you want a proper fix, you need to do the following things:
Find out whether the rejections are consistent or just a short burst of write requests that clears up after some time; a quick way to watch this is sketched after this list.
If they are consistent, figure out whether all the write optimizations are in place; please refer to my short tips to improve indexing speed.
Check whether you have reached the full capacity of your data nodes, and if yes, scale your cluster to handle the increased (legitimate) load.
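For the first point, a quick way to see whether the rejected count keeps growing is to poll the _cat/thread_pool API. A hedged monitoring sketch with the Python client (the localhost address is a placeholder for your cluster):
import time
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder host

for _ in range(10):
    stats = es.cat.thread_pool(
        thread_pool_patterns="write",
        h="node_name,name,active,queue,rejected,completed",
        v=True,
    )
    print(stats)
    time.sleep(30)  # a steadily growing "rejected" count points to sustained overload rather than a burst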

For ELK, sometimes Logstash says “no such index”; how to make ES create the index automatically when there is “no such index”?

I have found a problem with ELK; can anyone help me?
logstash 2.4.0
elasticsearch 2.4.0
3 Elasticsearch instances in a cluster
Sometimes Logstash warns:
“ "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", ...”,
and then it stops working. When I curl -XGET the ES indices, the index truly does not exist.
When this happens, I must kill -9 Logstash and start it again; then it creates the index in ES and works OK again.
So my question is: how do I set ES to automatically create the index when it reports “no such index”?
My logstash conf is:
input {
  tcp {
    port => 10514
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => [ "9200.xxxxxx.com:9200" ]
    index => "log001-%{+YYYY.MM.dd}"
  }
}
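For reference, automatic index creation is controlled by the action.auto_create_index setting, which defaults to true. A hedged sketch of setting it explicitly with the Python client (on recent Elasticsearch versions this is a dynamic cluster setting; on 2.4 it may need to be set as action.auto_create_index: true in elasticsearch.yml instead):
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://9200.xxxxxx.com:9200"])

# Allow indices to be created automatically on first write;
# a pattern list such as "log001-*" can be used instead of "true".
es.cluster.put_settings(body={
    "persistent": {
        "action.auto_create_index": "true"
    }
})
print(es.cluster.get_settings()["persistent"])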

Elasticsearch Bulk Write is slow using Scan and Scroll

I am currently running into an issue on which I am really stuck.
I am trying to work on a problem where I have to output Elasticsearch documents and write them to CSV. The documents range from 50,000 to 5 million.
I am experiencing serious performance issues, and I get the feeling that I am missing something here.
Right now I have a dataset of 400,000 documents that I am trying to scan and scroll over, and which would ultimately be formatted and written to CSV. But the time taken just to output them is 20 minutes! That is insane.
Here is my script:
import elasticsearch
import elasticsearch.exceptions
import elasticsearch.helpers as helpers
import time

es = elasticsearch.Elasticsearch(['http://XX.XXX.XX.XXX:9200'], retry_on_timeout=True)
scanResp = helpers.scan(client=es, scroll="5m", index='MyDoc', doc_type='MyDoc', timeout="50m", size=1000)

resp = {}
start_time = time.time()
for resp in scanResp:
    data = resp
    print data.values()[3]
print("--- %s seconds ---" % (time.time() - start_time))
I am using a hosted AWS m3.medium server for Elasticsearch.
Can anyone please tell me what I might be doing wrong here?
A simple solution to output ES data to CSV is to use Logstash with an elasticsearch input and a csv output with the following es2csv.conf config:
input {
  elasticsearch {
    host => "localhost"
    port => 9200
    index => "MyDoc"
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
}
output {
  csv {
    fields => ["field1", "field2", "field3"]   # specify the field names you want
    path => "/path/to/your/file.csv"
  }
}
You can then export your data easily with bin/logstash -f es2csv.conf
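If you would rather stay in Python, roughly the same export can be done with the scan helper and the standard csv module. This is only a sketch: the host, index, field names and output path are placeholders taken from the examples above.
import csv

import elasticsearch
import elasticsearch.helpers as helpers

es = elasticsearch.Elasticsearch(["http://XX.XXX.XX.XXX:9200"], retry_on_timeout=True)

fields = ["field1", "field2", "field3"]  # placeholder column names

with open("/path/to/your/file.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for hit in helpers.scan(
        client=es,
        index="MyDoc",
        query={"_source": fields},  # fetch only the columns you need
        scroll="5m",
        size=1000,
    ):
        writer.writerow(hit["_source"])  # missing fields are written as empty cells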

Can anyone give a list of REST APIs to query elasticsearch?

I am trying to push my logs to Elasticsearch through Logstash.
My logstash.conf has 2 log files as input, elasticsearch as output, and grok as filter. Here is my grok match:
grok {
  match => [ "message", "(?<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}) (?:\[%{GREEDYDATA:caller_thread}\]) (?:%{LOGLEVEL:level}) (?:%{DATA:caller_class})(?:\-%{GREEDYDATA:message})" ]
}
When Elasticsearch is started, all my logs are added to the Elasticsearch server under separate index names, as mentioned in logstash.conf.
My doubt is: how are my logs stored in Elasticsearch? I only know that they are stored under the index names mentioned in Logstash.
The 'http://164.99.178.18:9200/_cat/indices?v' API gives me the following:
health status index      pri rep docs.count docs.deleted store.size pri.store.size
yellow open   tomcat-log   5   1       6478            0      1.9mb          1.9mb
yellow open   apache-log   5   1        212            0      137kb          137kb
But how are 'documents' and 'fields' created in Elasticsearch for my logs?
I read that Elasticsearch is a REST-based search engine. So, are there any REST APIs that I could use to analyze my data in Elasticsearch?
Indeed.
curl localhost:9200/tomcat-log/_search
Will give you back the first 10 documents but also the total number of docs in your index.
curl localhost:9200/tomcat-log/_search -d '{
  "query": {
    "match": {
      "level": "error"
    }
  }
}'
will give you all docs in tomcat-log which have level equal to error.
Have a look at this section of the book. It will help.
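To see how the documents and fields were actually created for your logs, the mapping API is the most direct answer. A small sketch with the Python client, equivalent to GET /tomcat-log/_mapping and GET /tomcat-log/_count, assuming the same localhost:9200 cluster as in the curl examples:
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

print(es.indices.get_mapping(index="tomcat-log"))  # the fields Logstash created, with their types
print(es.count(index="tomcat-log"))                # total number of documents in the index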
