A few days ago I was testing Kibana and exploring its features. When I opened http://localhost/_cat/indices?v I saw some strange indices, and when I deleted them they were recreated automatically.
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-6-2018.11.17 23zMdQcuSeeZRmQ8yzD-Qg 1 0 86420 192 33.9mb 33.9mb
green open .monitoring-es-6-2018.11.16 Dn7WCVBUTZSSaBlKKy8hxA 1 0 12283 69 5.8mb 5.8mb
green open .monitoring-es-6-2018.11.18 YaFbgQIiTVGZ1kjOB_wWpA 1 0 95069 250 36.6mb 36.6mb
green open .monitoring-es-6-2018.11.19 3bvTjlk0SNy2UR21C1muVA 1 0 62104 208 32.4mb 32.4mb
green open .monitoring-kibana-6-2018.11.16 MXwi2p83S46tEglvViIUUQ 1 0 12 0 32.6kb 32.6kb
green open .kibana MZXJrrajQvqAL9h1rKuxWg 1 0 1 0 4kb 4kb
... my other indices
How can I prevent these indices from being created?
You can disable monitoring in your elasticsearch.yml configuration file:
xpack.monitoring.enabled: false
The same thing in kibana.yml:
xpack.monitoring.enabled: false
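Once monitoring is disabled, nothing will recreate those indices, so the existing ones can be deleted in one go (assuming wildcard deletes are still allowed, i.e. action.destructive_requires_name has not been set to true):
DELETE /.monitoring-*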
Related
I have 11 Elasticsearch nodes: 3 master nodes, 6 data nodes, and 2 coordinating nodes. We are running the latest version of Elasticsearch, 7.13.2.
We have installed and configured Metricbeat on all Elasticsearch nodes to monitor our ELK stack, and we have observed that the .monitoring-es-* indices hold between 100GB and 200GB each, while the .monitoring-logstash-* and .monitoring-kibana-* indices are much smaller.
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-7-mb-2021.07.15 NPdkPbofRde5YWd50oCzAA 1 1 95287036 0 141.2gb 70.6gb
green open .monitoring-es-7-mb-2021.07.16 F2oy_3WVRY6tSdhaMp7ZEg 1 1 16711910 0 25.1gb 12.4gb
green open .monitoring-es-7-mb-2021.07.11 d1JChmtgTGmnFoORnIcA1Q 1 1 93133543 0 135.9gb 67.9gb
green open .monitoring-es-7-mb-2021.07.12 MYu5ozjiQGGjGFBI5fjcvQ 1 1 94136537 0 137.9gb 68.9gb
green open .monitoring-es-7-mb-2021.07.13 7eLRyUWgTS-dSFE3ad669A 1 1 95323641 0 139.9gb 69.9gb
green open .monitoring-es-7-mb-2021.07.14 w2RB_A1TS1SeUBebLUURkA 1 1 95287470 0 140.7gb 70.3gb
green open .monitoring-es-7-mb-2021.07.10 llAWKQJwQ_-2FZg4Dbc3iA 1 1 92770558 0 135gb 67.5gb
We have enabled the elasticsearch-xpack module in Metricbeat.
elasticsearch-xpack.yml:
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  metricsets:
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - node
    - node_stats
    - pending_tasks
    - shard
  hosts:
    - "https://xx.xx.xx.xx:9200" #em1
    - "https://xx.xx.xx.xx:9200" #em2
    - "https://xx.xx.xx.xx:9200" #em3
    - "https://xx.xx.xx.xx:9200" #ec1
    - "https://xx.xx.xx.xx:9200" #ec2
    - "https://xx.xx.xx.xx:9200" #ed1
    - "https://xx.xx.xx.xx:9200" #ed2
    - "https://xx.xx.xx.xx:9200" #ed3
    - "https://xx.xx.xx.xx:9200" #ed4
    - "https://xx.xx.xx.xx:9200" #ed5
    - "https://xx.xx.xx.xx:9200" #ed6
  scope: cluster
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
  username: "xxxx"
  password: "********"
How can I reduce their size? Which metricsets should I monitor? Is this normal behaviour?
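Whether this is normal depends on cluster size, but one knob that directly affects the volume (a sketch based on general Metricbeat behaviour, not a fix confirmed in this thread): the number of monitoring documents scales with the collection frequency, so raising the period trades monitoring granularity for index size:
- module: elasticsearch
  xpack.enabled: true
  # Collecting every 30s instead of 10s roughly divides the document count,
  # and therefore the daily .monitoring-es-7-mb-* index size, by three.
  period: 30s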
I want to fetch all the indexed log files from Elasticsearch in descending order of date, from Kibana.
Right now when I do:
GET _cat/indices
it gives me the indices in random order, as shown below:
yellow open index-7.10.1-front-ui-2021.02.02 AbT9OEM-RP6OYOvY1xE1PQ 1 1 6045 0 2.8mb 2.8mb
yellow open index-7.10.1-front-ui-2021.02.03 TXxJUyXdRiSK6S0RZtc3eQ 1 1 6057 0 2.7mb 2.7mb
yellow open index-7.10.1-front-ui-2021.01.31
But I want them sorted by date, like this:
yellow open index-7.10.1-front-ui-2021.02.03 TXxJUyXdRiSK6S0RZtc3eQ 1 1 6057 0 2.7mb 2.7mb
yellow open index-7.10.1-front-ui-2021.02.02 AbT9OEM-RP6OYOvY1xE1PQ 1 1 6045 0 2.8mb 2.8mb
yellow open index-7.10.1-front-ui-2021.01.31
Please let me know if this is possible.
Sure, you can sort the lines using the s query string parameter with the column name on which you'd like to sort:
GET _cat/indices?v&s=index:desc
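Your index names happen to sort chronologically because the date suffix is zero-padded; if yours ever don't, you can sort on the index creation date instead (creation.date and creation.date.string are standard _cat/indices columns):
GET _cat/indices?v&h=index,creation.date.string&s=creation.date:desc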
I tried GET _cat/indices first, which gave me all of my indices.
I then added the v flag (GET _cat/indices?v) to get the column names, so that I could sort by column name like this: GET _cat/indices?v&s=store.size
Now I just want to reverse the sort order.
The Elasticsearch guide had no information about this.
You can append the suffix :desc to the column name to get there:
http://localhost:9200/_cat/indices/?s=store.size:desc
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open tmdb DVLEul7bT4yGyiq34h4nCQ 1 1 27760 0 119.8mb 119.8mb
green open .kibana_2 6WJ3UFpOSj2O8zi-iLuw6w 1 0 10 0 35.3kb 35.3kb
green open .kibana_task_manager_1 WSwYXmMOSpyQOQ8ZIRniwg 1 0 2 0 31.6kb 31.6kb
green open .tasks _SsI5VWNSwO5Yfps3K12Qg 1 0 1 0 6.3kb 6.3kb
green open .kibana_1 MukHGTHfTkKS1HcYodfNqA 1 0 1 0 4kb 4kb
yellow open my_index 7FgIWDJOSQesKbJI-HKRoA 1 1 1 0 3.8kb 3.8kb
green open .ltrstore -Xh6WnJYSsWIoPGP8fhgGw 1 0 0 0 283b 283b
green open .apm-agent-configuration 6FUF8T5oTDGcLBqcU7ymJg 1 0 0 0 283b 283b
This ability is mildly buried and only briefly described in this section of the docs.
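Note that the s parameter also accepts a comma-separated list of columns, so you can break ties with a secondary sort key, e.g. by size and then by name:
http://localhost:9200/_cat/indices/?v&s=store.size:desc,index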
I am running a 2-node cluster on version 5.6.12.
I followed the following rolling upgrade guide: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/rolling-upgrades.html
After reconnecting the last upgraded node to the cluster, the health status remained yellow due to unassigned shards.
Re-enabling shard allocation seemed to have no effect:
PUT _cluster/settings
{
"transient": {
"cluster.routing.allocation.enable": "all"
}
}
My query results when checking cluster health:
GET _cat/health:
1541522454 16:40:54 elastic-upgrade-test yellow 2 2 84 84 0 0 84 0 - 50.0%
GET _cat/shards:
v2_session-prod-2018.11.05 3 p STARTED 6000 1016kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 3 r UNASSIGNED
v2_session-prod-2018.11.05 1 p STARTED 6000 963.3kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 1 r UNASSIGNED
v2_session-prod-2018.11.05 4 p STARTED 6000 1020.4kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 4 r UNASSIGNED
v2_session-prod-2018.11.05 2 p STARTED 6000 951.4kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 2 r UNASSIGNED
v2_session-prod-2018.11.05 0 p STARTED 6000 972.2kb xx.xxx.xx.xxx node-25
v2_session-prod-2018.11.05 0 r UNASSIGNED
v2_status-prod-2018.11.05 3 p STARTED 6000 910.2kb xx.xxx.xx.xxx node-25
v2_status-prod-2018.11.05 3 r UNASSIGNED
Is there another way to get shard allocation working again so I can get my cluster health back to green?
The other node within my cluster had a "high disk watermark [90%] exceeded" warning message so shards were "relocated away from this node".
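The cluster allocation explain API (available since 5.0) surfaces the same reason without digging through the logs; a minimal sketch for one of the unassigned replicas above:
GET _cluster/allocation/explain
{
  "index": "v2_session-prod-2018.11.05",
  "shard": 3,
  "primary": false
}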
I updated the config to:
cluster.routing.allocation.disk.watermark.high: 95%
After restarting the node, shards began to allocate again.
This is only a quick fix, though; I will also attempt to increase the disk space on this node to ensure I don't lose reliability.
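Note that this watermark setting is dynamic, so the same change can be applied at runtime via the cluster settings API, without restarting the node:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}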
I am having problems when I look up a record in an index; the error message is the following:
TransportError: TransportError(503, u'NoShardAvailableActionException[[new_gompute_history][2] null]; nested: IllegalIndexShardStateException[[new_gompute_history][2] CurrentState[POST_RECOVERY] operations only allowed when started/relocated]; ')
This happens when I search by ID within an index.
The health of my cluster is green:
GET _cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign
1438678496 10:54:56 juan green 5 4 212 106 0 0 0
GET _cat/allocation?v
shards disk.used disk.avail disk.total disk.percent host ip node
53 3.1gb 16.8gb 20gb 15 bc10-05 10.8.5.15 Anomaloco
53 6.4gb 80.8gb 87.3gb 7 bc10-03 10.8.5.13 Algrim the Strong
0 0b l8a 10.8.0.231 logstash-l8a-5920-4018
53 6.4gb 80.8gb 87.3gb 7 bc10-03 10.8.5.13 Harry Leland
53 3.1gb 16.8gb 20gb 15 bc10-05 10.8.5.15 Hypnotia
I solved it by putting a sleep between consecutive PUTs, but I do not like this solution.
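A less arbitrary alternative (an assumption on my part, not something tried in this thread) is to poll the shard states and only continue once every shard reports STARTED rather than POST_RECOVERY or INITIALIZING:
GET _cat/shards/new_gompute_history?v&h=index,shard,prirep,state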