I am new to the DevOps side of things for Elasticsearch and have a few questions regarding effective monitoring of an Elasticsearch cluster using Grafana.
What I tried
Run Elasticsearch locally:
curl http://localhost:9200/
{
  "name" : "hnsKXlb",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "IsSAzHcZTDSA40Lfy0PKcw",
  "version" : {
    "number" : "5.5.2",
    "build_hash" : "b2f0c09",
    "build_date" : "2017-08-14T12:33:14.154Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
Run Grafana locally:
docker run -p 3000:3000 --net network_name \
-e "GF_SECURITY_ADMIN_PASSWORD=xxx" \
grafana/grafana
Added an Elasticsearch datasource
Imported the Grafana dashboard
https://grafana.com/grafana/dashboards/878
Question
I don't seem to get any metrics.
I suspect that the datasource only allows Grafana to query that specific index. How can I make it more generic?
You'll need an Elasticsearch exporter (such as elasticsearch_exporter) to export metrics to Prometheus, then use Prometheus as the datasource in Grafana.
Take a look at tools like Prometheus/Graphite/Logstash/Beats, which will collect the metrics from Elasticsearch and store them back into ES. First, we need to collect the metrics and store them in Elasticsearch; then we can use a tool like Grafana to visualize the data. Kibana also has a built-in dashboard to visualize cluster health.
Related
I was under the impression the GCP Elasticsearch service comes with automated snapshots/backups; that's what I find in the documentation. It suggests they happen once a day and are stored somewhere, but I do not see any backups in any of my GCP storage. How do you get access to the automated snapshots?
Try the below command in Dev Tools:
GET _cat/snapshots/cs-automated?v
Output error message:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_missing_exception",
        "reason" : "[cs-automated] missing"
      }
    ],
    "type" : "repository_missing_exception",
    "reason" : "[cs-automated] missing"
  },
  "status" : 404
}
Try:
GET /_snapshot/found-snapshots
Or
GET _cat/snapshots/found-snapshots?v
Currently I am getting these alerts:
Upgrade Required Your version of Elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or above.
Can someone tell me if there is a way I can find the exact installed version of Elasticsearch?
From the Chrome REST client, make a GET request, or run the following in a console:
curl -XGET 'http://localhost:9200'
REST client: http://localhost:9200
{
  "name": "node",
  "cluster_name": "elasticsearch-cluster",
  "version": {
    "number": "2.3.4",
    "build_hash": "dcxbgvzdfbbhfxbhx",
    "build_timestamp": "2016-06-30T11:24:31Z",
    "build_snapshot": false,
    "lucene_version": "5.5.0"
  },
  "tagline": "You Know, for Search"
}
The number field denotes the Elasticsearch version; here the Elasticsearch version is 2.3.4.
I would like to add something that isn't mentioned in the above answers.
From Kibana's Dev Tools console, run the following command:
GET /
This is similar to accessing localhost:9200 from the browser.
Hope this will help someone.
You can check the version of Elasticsearch with the following command. It returns some other information as well:
curl -XGET 'localhost:9200'
{
  "name" : "Forgotten One",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.4",
    "build_hash" : "e455fd0c13dceca8dbbdbb1665d068ae55dabe3f",
    "build_timestamp" : "2016-06-30T11:24:31Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
Here you can see the version number: 2.3.4
Typically Kibana is installed in /opt/kibana/bin/kibana, so you can get the Kibana version as follows:
/opt/kibana/bin/kibana --version
Navigate to the folder where you have installed Kibana. If you have used yum to install Kibana, it will be placed in the following location by default:
/usr/share/kibana
Then use the following command:
bin/kibana --version
To check the version of your running Kibana, try this:
Step 1. Start your Kibana service.
Step 2. Open a browser and type the below line:
localhost:5601
Step 3. Go to Settings -> About.
You can see the version of your running Kibana.
Another way to do it on Ubuntu 18.04:
sudo /usr/share/kibana/bin/kibana --version
You can use the Dev Tools console in Kibana to obtain version information about Elasticsearch.
Click "Dev Tools" to open the console.
In the Dev Tools console, run the query below:
GET /
You will see the version number, along with other details:
{
  "version" : {
    "number" : "6.5.1",
    ...
  }
}
You can try this:
After starting the Elasticsearch service, type the below line in your browser:
localhost:9200
It will give output something like this:
{
  "status" : 200,
  "name" : "Hypnotia",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
If you have installed X-Pack to secure Elasticsearch, the request should contain valid credentials:
curl -XGET -u "elastic:passwordForElasticUser" 'localhost:9200'
In fact, if security is enabled, all subsequent requests should follow the same pattern (inline credentials must be provided).
If you are logged into your Kibana, you can click on the Management tab and that will show your Kibana version. Alternatively, you can click on the small tube-like icon and that will show the version number.
From the Kibana host, a request to http://localhost:9200/ will not be answered unless Elasticsearch is also running on the same node. Kibana listens on port 5601, not 9200.
In most cases, except for dev environments, Elasticsearch will not be on the same node as Kibana, for a number of reasons.
Therefore, to get information about your Elasticsearch from Kibana, select the "Dev Tools" tab on the left and issue the following command in the console: GET /
If you are looking for the version in the Kibana UI, it is shown on the Management page.
I have tried to set up Kibana 3 with Elasticsearch and Logstash.
When I go to 127.0.0.1/kibana I get the following error:
Error Could not contact Elasticsearch at http://127.0.0.1:9200. Please ensure that Elasticsearch is reachable from your system.
And when I check the console log, I get the following:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://127.0.0.1:9200/_nodes. (Reason: CORS header 'Access-Control-Allow-Origin' missing).
When I go to the URL http://127.0.0.1:9200 I get the following JSON text:
{
  "name" : "Meteor Man",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.1",
    "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp" : "2015-12-15T13:05:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
and at http://127.0.0.1:9200/_nodes I get the following:
{"cluster_name":"elasticsearch","nodes":{"BKXqqrymQw6lShg5P7_-eA":{"name":"Meteor Man","transport_address":"127.0.0.1:9300","host":"127.0.0.1","ip":"127.0.0.1","version":"2.1.1","build":"40e2c53","http_address":"127.0.0.1:9200","settings":{"client":{"type":"node"},"name":"Meteor Man","pidfile":"/var/run/elasticsearch/elasticsearch.pid","path":{"data":"/var/lib/elasticsearch","home":"/usr/share/elasticsearch","conf":"/etc/elasticsearch","logs":"/var/log/elasticsearch"},"config":{"ignore_system_properties":"true"},"cluster":{"name":"elasticsearch"},"foreground":"false"},"os":{"refresh_interval_in_millis":1000,"name":"Linux","arch":"amd64","version":"3.19.0-25-generic","available_processors":4,"allocated_processors":4},"process":{"refresh_interval_in_millis":1000,"id":10545,"mlockall":false},"jvm":{"pid":10545,"version":"1.7.0_91","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"24.91-b01","vm_vendor":"Oracle Corporation","start_time_in_millis":1453983811248,"mem":{"heap_init_in_bytes":268435456,"heap_max_in_bytes":1038876672,"non_heap_init_in_bytes":24313856,"non_heap_max_in_bytes":224395264,"direct_max_in_bytes":1038876672},"gc_collectors":["ParNew","ConcurrentMarkSweep"],"memory_pools":["Code Cache","Par Eden Space","Par Survivor Space","CMS Old Gen","CMS Perm Gen"]},"thread_pool":{"generic":{"type":"cached","keep_alive":"30s","queue_size":-1},"index":{"type":"fixed","min":4,"max":4,"queue_size":200},"fetch_shard_store":{"type":"scaling","min":1,"max":8,"keep_alive":"5m","queue_size":-1},"get":{"type":"fixed","min":4,"max":4,"queue_size":1000},"snapshot":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"force_merge":{"type":"fixed","min":1,"max":1,"queue_size":-1},"suggest":{"type":"fixed","min":4,"max":4,"queue_size":1000},"bulk":{"type":"fixed","min":4,"max":4,"queue_size":50},"warmer":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"flush":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"search":{"type":"fixed","min":7,"max":7,"queue_size":1000},"fetch_shard_started":{"type":"scaling","min":1,"max":8,"keep_alive":"5m","queue_size":-1},"listener":{"type":"fixed","min":2,"max":2,"queue_size":-1},"percolate":{"type":"fixed","min":4,"max":4,"queue_size":1000},"refresh":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"management":{"type":"scaling","min":1,"max":5,"keep_alive":"5m","queue_size":-1}},"transport":{"bound_address":["127.0.0.1:9300","[::1]:9300"],"publish_address":"127.0.0.1:9300","profiles":{}},"http":{"bound_address":["127.0.0.1:9200","[::1]:9200"],"publish_address":"127.0.0.1:9200","max_content_length_in_bytes":104857600},"plugins":[]}}}
You simply need to enable CORS in your elasticsearch.yml configuration file and restart ES; that setting is disabled by default:
http.cors.enabled: true
However, I'm not certain that Kibana 3 will work with ES 2.1.1; you might need to upgrade your Kibana for this to work. Try changing the above settings and see if it helps. If not, upgrade Kibana to the latest release.
I have successfully installed both the license plugin and the Shield plugin on my client nodes. The logs show it starting correctly, and I am able to authenticate using the credentials I supplied. However, when I connect I am getting a 503 error. I went back through the docs to see if I missed something, but I don't see anything about configuring the data nodes after enabling Shield. What am I missing?
{
  "status" : 503,
  "name" : "Vertigo",
  "cluster_name" : "cluster01",
  "version" : {
    "number" : "1.7.2",
    "build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
    "build_timestamp" : "2015-09-14T09:49:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
From the client logs:
[2015-10-28 03:14:52,235][INFO ][io.fabric8.elasticsearch.discovery.k8s.K8sDiscovery] [Vertigo] failed to send join request to master [[Abominatrix][T6zFRQO7RG-thZmOWVk2Xw][es-master-e6mj9][inet[/10.244.85.2:9300]]{data=false, master=true}], reason [RemoteTransportException[[Abominatrix][inet[/10.244.85.2:9300]][internal:discovery/zen/join]]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: InvalidClassException[failed to read class descriptor]; nested: ClassNotFoundException[org.elasticsearch.shield.authc.AuthenticationException]; ]
Andrei,
I figured it out. Since I am running containers that separate the master, data, and client nodes, I had only installed the plugin on the client nodes. Once I installed the plugin on the master and data nodes, uploaded the image to Docker Hub, and rebuilt the cluster, it all started working.
Thanks
-winn
I want to see all queries executed against an Elasticsearch instance. Is it possible to run Elasticsearch in a debug mode, or to tell it to store all queries executed against it?
The purpose is to see which queries are launched from software that uses Elasticsearch, for analysis.
In versions of Elasticsearch prior to 5, you can accomplish this by changing the elasticsearch.yml configuration file. At the very bottom of this file, you can adjust the slow log thresholds to record queries:
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms
index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms
index.indexing.slowlog.threshold.index.warn: 10s
index.indexing.slowlog.threshold.index.info: 5s
index.indexing.slowlog.threshold.index.debug: 2s
index.indexing.slowlog.threshold.index.trace: 500ms
Adjust the settings and restart your node, then consult the logs to view the queries executed against it. Note that in production the log files will rapidly increase in size.
In version 5.x, you have to configure the slow log per index.
Command line:
curl -XPUT 'http://localhost:9200/myindexname/_settings' -d '{
  "index.indexing.slowlog.threshold.index.debug" : "0s",
  "index.search.slowlog.threshold.fetch.debug" : "0s",
  "index.search.slowlog.threshold.query.debug" : "0s"
}'
Or, if you are using Kibana, go to Dev Tools and enter:
PUT /myindexname/_settings
{"index.indexing.slowlog.threshold.index.debug": "0s",
"index.search.slowlog.threshold.fetch.debug" : "0s",
"index.search.slowlog.threshold.query.debug": "0s"}
#1: Apply to ALL indices
You can apply the setting to ALL indices with the following command:
PUT /_all/_settings
{"index.indexing.slowlog.threshold.index.debug": "0s",
"index.search.slowlog.threshold.fetch.debug" : "0s",
"index.search.slowlog.threshold.query.debug": "0s"}
#2: Preserve existing settings
If you don't want to overwrite existing settings, but just add new ones, add preserve_existing=true after _settings, like this:
PUT /_all/_settings?preserve_existing=true
{"index.indexing.slowlog.threshold.index.debug": "0s",
"index.search.slowlog.threshold.fetch.debug" : "0s",
"index.search.slowlog.threshold.query.debug": "0s"}
The above request will ONLY add the settings if they don't already exist. It will not change settings that are already there.
#3: All available log settings
All available slow log settings are listed below for your reference:
PUT /test_index/_settings
{
  "index.search.slowlog.threshold.query.warn": "60s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.query.debug": "1s",
  "index.search.slowlog.threshold.query.trace": "0.1s",
  "index.search.slowlog.threshold.fetch.warn": "30s",
  "index.search.slowlog.threshold.fetch.info": "5s",
  "index.search.slowlog.threshold.fetch.debug": "1s",
  "index.search.slowlog.threshold.fetch.trace": "0.1s",
  "index.indexing.slowlog.threshold.index.warn": "6s",
  "index.indexing.slowlog.threshold.index.info": "5s",
  "index.indexing.slowlog.threshold.index.debug": "1s",
  "index.indexing.slowlog.threshold.index.trace": "0.1s",
  "index.indexing.slowlog.level": "info",
  "index.indexing.slowlog.source": "1000"
}
Starting with version 5, Elastic charges money for this functionality. It's called "audit logging" and is now part of X-Pack. There is a free basic license available, but it only gives you simplistic monitoring functionality. Authentication, query logging, and all these rather basic things cost money now.
Yes, it's possible to tell Elasticsearch to log all queries executed against it, and you can configure logging levels such as DEBUG. You can change this in ES 7.13.x using curl:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.discovery": "DEBUG"
  }
}
'
On macOS, log files are stored under $ES_HOME by default. Please check the docs about Elasticsearch logging.