I am new to the DevOps side of Elasticsearch and have a few questions about effectively monitoring an Elasticsearch cluster with Grafana.
What I tried
Run Elasticsearch locally:
curl http://localhost:9200/
{
"name" : "hnsKXlb",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "IsSAzHcZTDSA40Lfy0PKcw",
"version" : {
"number" : "5.5.2",
"build_hash" : "b2f0c09",
"build_date" : "2017-08-14T12:33:14.154Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}
Run Grafana locally:
docker run -p 3000:3000 --net network_name \
-e "GF_SECURITY_ADMIN_PASSWORD=xxx" \
grafana/grafana
Added an Elasticsearch datasource
Imported the Grafana dashboard
https://grafana.com/grafana/dashboards/878
Question
I don't seem to get any metrics.
I suspect that the datasource is only allowing Grafana to query that specific index. How can I make it more generic?
You'll need an Elasticsearch exporter (e.g. elasticsearch_exporter) to export the metrics to Prometheus, then use Prometheus as the datasource in Grafana.
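A minimal sketch of that setup, assuming Docker and the prometheus-community exporter image (the image tag, Docker network, and hostnames below are assumptions; adjust them to your environment):

```shell
# Run elasticsearch_exporter next to the cluster (9114 is its default listen port).
docker run -d -p 9114:9114 --net network_name \
  quay.io/prometheuscommunity/elasticsearch-exporter \
  --es.uri=http://elasticsearch:9200

# In prometheus.yml, add a scrape job for the exporter:
#   scrape_configs:
#     - job_name: 'elasticsearch'
#       static_configs:
#         - targets: ['localhost:9114']

# Sanity-check that metrics are being exposed:
curl http://localhost:9114/metrics | head
```

Once Prometheus is scraping, add Prometheus as a datasource in Grafana and pick a dashboard built for the exporter's metric names; dashboard 878 expects an Elasticsearch datasource, so it won't light up with Prometheus data.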
Take a look at tools like Prometheus, Graphite, Logstash, or Beats, which can collect the metrics from Elasticsearch and store them. First we need to collect the metrics and store them somewhere (for example back into Elasticsearch). Then we can have a tool like Grafana visualize the data. Kibana also has a built-in dashboard to visualize cluster health. You can check here.
I'm trying to install elasticsearch-kopf
When I run:
plugin -install lmenezes/elasticsearch-kopf
I get:
-> Installing lmenezes/elasticsearch-kopf...
Failed to install lmenezes/elasticsearch-kopf, reason: plugin directory /usr/local/var/lib/elasticsearch/plugins/kopf already exists. To update the plugin, uninstall it first using --remove lmenezes/elasticsearch-kopf command
But when I try to access Kopf at
http://localhost:9200/_plugin/kopf/
I get this:
This localhost page can’t be found
But when I access elasticsearch at:
http://localhost:9200/
I get:
{
"name" : "Rachel van Helsing",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.3.1",
"build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
"build_timestamp" : "2016-04-04T12:25:05Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
},
"tagline" : "You Know, for Search"
}
The error says it all
Failed to install lmenezes/elasticsearch-kopf, reason: plugin directory /usr/local/var/lib/elasticsearch/plugins/kopf already exists. To update the plugin, uninstall it first using --remove lmenezes/elasticsearch-kopf command
So either remove it first with
plugin --remove lmenezes/elasticsearch-kopf
Or simply delete the kopf folder
rm -rf /usr/local/var/lib/elasticsearch/plugins/kopf
Then you should be able to install it properly.
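For completeness, the full remove-and-reinstall cycle looks roughly like this. Note the flag style differs between ES 1.x (`--install`/`--remove`) and ES 2.x (`install`/`remove`); the 1.x style matching the question is shown:

```shell
# Remove the stale plugin directory, then reinstall.
plugin --remove lmenezes/elasticsearch-kopf
plugin --install lmenezes/elasticsearch-kopf

# kopf is a site plugin served by the node itself -- verify it responds:
curl -I http://localhost:9200/_plugin/kopf/
```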
I have tried to set up Kibana 3 with Elasticsearch and Logstash.
When I go to 127.0.0.1/kibana I get the following error:
Error Could not contact Elasticsearch at http://127.0.0.1:9200. Please ensure that Elasticsearch is reachable from your system.
And when I check the console log i get the following:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://127.0.0.1:9200/_nodes. (Reason: CORS header 'Access-Control-Allow-Origin' missing).
When I go to the URL http://127.0.0.1:9200 I get the following JSON:
{
"name" : "Meteor Man",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.1.1",
"build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
"build_timestamp" : "2015-12-15T13:05:55Z",
"build_snapshot" : false,
"lucene_version" : "5.3.1"
},
"tagline" : "You Know, for Search"
}
and in http://127.0.0.1:9200/_nodes I get the following:
{"cluster_name":"elasticsearch","nodes":{"BKXqqrymQw6lShg5P7_-eA":{"name":"Meteor Man","transport_address":"127.0.0.1:9300","host":"127.0.0.1","ip":"127.0.0.1","version":"2.1.1","build":"40e2c53","http_address":"127.0.0.1:9200","settings":{"client":{"type":"node"},"name":"Meteor Man","pidfile":"/var/run/elasticsearch/elasticsearch.pid","path":{"data":"/var/lib/elasticsearch","home":"/usr/share/elasticsearch","conf":"/etc/elasticsearch","logs":"/var/log/elasticsearch"},"config":{"ignore_system_properties":"true"},"cluster":{"name":"elasticsearch"},"foreground":"false"},"os":{"refresh_interval_in_millis":1000,"name":"Linux","arch":"amd64","version":"3.19.0-25-generic","available_processors":4,"allocated_processors":4},"process":{"refresh_interval_in_millis":1000,"id":10545,"mlockall":false},"jvm":{"pid":10545,"version":"1.7.0_91","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"24.91-b01","vm_vendor":"Oracle Corporation","start_time_in_millis":1453983811248,"mem":{"heap_init_in_bytes":268435456,"heap_max_in_bytes":1038876672,"non_heap_init_in_bytes":24313856,"non_heap_max_in_bytes":224395264,"direct_max_in_bytes":1038876672},"gc_collectors":["ParNew","ConcurrentMarkSweep"],"memory_pools":["Code Cache","Par Eden Space","Par Survivor Space","CMS Old Gen","CMS Perm 
Gen"]},"thread_pool":{"generic":{"type":"cached","keep_alive":"30s","queue_size":-1},"index":{"type":"fixed","min":4,"max":4,"queue_size":200},"fetch_shard_store":{"type":"scaling","min":1,"max":8,"keep_alive":"5m","queue_size":-1},"get":{"type":"fixed","min":4,"max":4,"queue_size":1000},"snapshot":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"force_merge":{"type":"fixed","min":1,"max":1,"queue_size":-1},"suggest":{"type":"fixed","min":4,"max":4,"queue_size":1000},"bulk":{"type":"fixed","min":4,"max":4,"queue_size":50},"warmer":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"flush":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"search":{"type":"fixed","min":7,"max":7,"queue_size":1000},"fetch_shard_started":{"type":"scaling","min":1,"max":8,"keep_alive":"5m","queue_size":-1},"listener":{"type":"fixed","min":2,"max":2,"queue_size":-1},"percolate":{"type":"fixed","min":4,"max":4,"queue_size":1000},"refresh":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"management":{"type":"scaling","min":1,"max":5,"keep_alive":"5m","queue_size":-1}},"transport":{"bound_address":["127.0.0.1:9300","[::1]:9300"],"publish_address":"127.0.0.1:9300","profiles":{}},"http":{"bound_address":["127.0.0.1:9200","[::1]:9200"],"publish_address":"127.0.0.1:9200","max_content_length_in_bytes":104857600},"plugins":[]}}}
You simply need to enable CORS in your elasticsearch.yml configuration file and restart ES; that setting is disabled by default.
http.cors.enabled: true
However, I'm not certain that Kibana 3 will work with ES 2.1.1. You might need to upgrade Kibana for this to work. Try changing the above setting and see if it helps. If not, upgrade Kibana to the latest release.
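One caveat: on ES 2.x, enabling CORS alone is usually not enough, because the list of allowed origins defaults to empty, so you also need `http.cors.allow-origin`. A sketch, assuming the `/etc/elasticsearch` config path shown in your `_nodes` output (service name and origin value may differ on your setup):

```shell
# Append the CORS settings to the node's config and restart it.
sudo tee -a /etc/elasticsearch/elasticsearch.yml <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "http://127.0.0.1"
EOF
sudo service elasticsearch restart
```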
I have successfully installed both the license plugin and the Shield plugin on my client nodes. The logs show it starting correctly, and I am able to authenticate using the credentials I supplied. However, when I connect I am getting a 503 error. I went back through the docs to see if I missed something, but I don't see anything about configuring the data nodes after enabling Shield. What am I missing?
{
"status" : 503,
"name" : "Vertigo",
"cluster_name" : "cluster01",
"version" : {
"number" : "1.7.2",
"build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
"build_timestamp" : "2015-09-14T09:49:53Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
From the client logs
2015-10-28 03:14:52,235][INFO ][io.fabric8.elasticsearch.discovery.k8s.K8sDiscovery] [Vertigo] failed to send join request to master [[Abominatrix][T6zFRQO7RG-thZmOWVk2Xw][es-master-e6mj9][inet[/10.244.85.2:9300]]{data=false, master=true}], reason [RemoteTransportException[[Abominatrix][inet[/10.244.85.2:9300]][internal:discovery/zen/join]]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: InvalidClassException[failed to read class descriptor]; nested: ClassNotFoundException[org.elasticsearch.shield.authc.AuthenticationException]; ]
Andrei,
I figured it out. Since I am running containers that separate the master, data, and client nodes, I had only installed the plugin on the client nodes. Once I installed the plugin on the master and data nodes, uploaded the image to Docker Hub, and rebuilt the cluster, it all started working.
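For anyone hitting the same thing: on ES 1.7.x the install steps that have to run in every node image (master, data, and client, e.g. in each Dockerfile) look roughly like this:

```shell
# Shield 1.x install commands, run from the Elasticsearch home directory.
# The license plugin must be installed before Shield.
bin/plugin -i elasticsearch/license/latest
bin/plugin -i elasticsearch/shield/latest
```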
Thanks
-winn
I'm trying to make the New Relic Elasticsearch plugin work; the following is the author's website:
New Relic Plug in Github
I've followed all the instructions and can even see that the plugin has been installed successfully when I run the following command on my server:
curl -XGET http://localhost:9200/_newrelic?pretty
I even get the response that should confirm everything was installed just fine:
{
"configuration" : {
"agents" : {
"http" : true,
"pool" : true,
"transport" : true,
"fs" : true,
"indices" : true,
"network" : true
},
"refreshInterval" : 10
}
}
However, when I log in to my New Relic account, not a single statistic is shown, neither for the JVM nor anything else, and I don't see the indices being monitored.
Has anyone encountered this problem before? PS: Elasticsearch is not running as a service, and I DO have the New Relic Java agent properly installed. I have also configured bin/elasticsearch.in.sh as it should be.
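For reference, the agent hookup in bin/elasticsearch.in.sh is roughly like this (the jar path below is a placeholder for wherever the agent actually lives):

```shell
# Added to bin/elasticsearch.in.sh -- attaches the New Relic Java agent to the ES JVM.
JAVA_OPTS="$JAVA_OPTS -javaagent:/path/to/newrelic.jar"
```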
Thanks in advance!
JM.