Options for displaying metrics stored in Cassandra to Grafana - elasticsearch

I installed graphite-web and graphite-cyanite, and configured Cyanite. Then I created the metric keyspace and table in my Cassandra cluster as suggested in the Cyanite documentation, and I am able to successfully insert data into the metric table. Then I went ahead and installed Grafana and Elasticsearch, and configured them by adding the index below to cyanite.yaml:
index:
  use: "io.cyanite.es_path/es-rest"
  index: "my_paths"  # defaults to "cyanite_paths"
  url: "http://myhost:9200"
  chan_size: 1000    # defaults to 1000
  batch_size: 1000   # defaults to 1000
Now I am unsure how to proceed: how do I retrieve the metric data from my Cassandra cluster/Elasticsearch so the graphs can be displayed in Grafana? Please advise.

Grafana integrates with Graphite via the Graphite web API.
Since you mentioned that you installed graphite-web, configure a Grafana Graphite data source and specify the URL of your graphite-web instance:
http://docs.grafana.org/installation/#graphite-elasticsearch-setup-example
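If you prefer to script this, the data source can also be created through Grafana's HTTP API; a minimal sketch, assuming Grafana listens on localhost:3000 with the default admin credentials and graphite-web is reachable at my-graphite-web (both placeholders):

curl -s -X POST http://localhost:3000/api/datasources \
  --user admin:admin \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "graphite",
        "type": "graphite",
        "url": "http://my-graphite-web:80",
        "access": "proxy"
      }'

Once the data source is saved, Grafana issues the usual Graphite /render and /metrics/find calls against graphite-web, and graphite-cyanite answers them from your Cassandra/Elasticsearch backend.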

Related

How to load data from Cassandra into ELK

I have installed Cassandra 3.11.3 on my Ubuntu virtual machine. I have also installed the ELK stack (Elasticsearch, Logstash, Kibana).
How can I visualize the Cassandra data in Kibana using ELK? Please let me know the detailed configuration I will need in order to get data from the Cassandra database into a Kibana dashboard.
I did a similar thing using Kafka, with the following structure:
Cassandra -> Confluent Kafka -> Elasticsearch.
It's pretty easy to do, as the connectors are provided by Confluent.
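For illustration, here is roughly what the Elasticsearch half of that pipeline looks like when posted to Kafka Connect; the connector name, topic and URL are placeholders, and the Cassandra source connector on the other side is configured analogously:

{
  "name": "cassandra-to-es-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "cassandra-metrics",
    "connection.url": "http://localhost:9200",
    "type.name": "_doc",
    "key.ignore": "true"
  }
}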
But if you only need to visualize the data, you can try Banana, which gels well with Cassandra.
Note: Banana is a forked version of Kibana.

How to monitor connection in local network

I have a ton of services: Node(s), MySQL(s), Redis(s), Elastic(s)...
I want to monitor how they connect to each other: connection rate, number of alive connections... (e.g. Node1 creates 30 connections per second to Node2/MySQL/Redis), like the HAProxy stats image attached below.
Currently I have two options:
HAProxy (proxy): I want to use a single HAProxy service to achieve this, but it seems very hard to use ACLs to detect which connections need to be forwarded to which service.
ELK (log center): I would need to create log files on each service (Node, MySQL, Redis...) and then show them in the log center. That looks like a ton of work without a built-in feature like the HAProxy stats page.
How should I do this? Is a log center good in this case?
The problem
I think your problem is not collecting and pipelining the statistics to Elasticsearch, but rather the ton of work involved in extracting metrics from your services, because most of them do not produce metric files/logs.
You'd need to export the metrics with some custom script, log them, capture them with Filebeat, stream them to Logstash for text processing and metric extraction so they are indexed in a way that allows some sort of analytics, and then send them to Elasticsearch.
My take on the answer
At least for the three services you've referenced, there are Prometheus exporters readily available, and you can find them here. The exporters are simple processes that query your services' native statistics APIs and expose a Prometheus metrics endpoint for Prometheus to scrape (poll).
After you have Prometheus scraping the metrics, you can display them in dashboards via Grafana (which is the de facto visualization layer for Prometheus) or bulk-export your metrics to wherever you want (Elasticsearch, etc.) for visualization and exploration.
Conclusion
The benefits of this approach:
- Prometheus can auto-discover new nodes you add to your networks
- Readily available exporters for HAProxy, Redis and MySQL for Prometheus
- No code needed: each exporter requires only minimal configuration specific to the monitored technology; it can easily be containerized and deployed if your environment is container oriented, and otherwise you just need to run each exporter on the correct machines
- Prometheus is very, very easy to deploy
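On the Prometheus side this amounts to a few scrape jobs in prometheus.yml; a sketch with placeholder hostnames, using the documented default ports of redis_exporter, mysqld_exporter and haproxy_exporter:

scrape_configs:
  - job_name: 'redis'
    static_configs:
      - targets: ['redis-host:9121']    # redis_exporter default port
  - job_name: 'mysql'
    static_configs:
      - targets: ['mysql-host:9104']    # mysqld_exporter default port
  - job_name: 'haproxy'
    static_configs:
      - targets: ['haproxy-host:9101']  # haproxy_exporter default port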
Use ELK - the Elasticsearch, Logstash and Kibana stack - with Filebeat:
Filebeat - ships the log file content to Logstash.
Logstash - scans, filters and forwards the needed content to Elasticsearch.
Elasticsearch - works as a DB, storing the content from Logstash as JSON documents.
Kibana - lets you search the required info; you can also plot graphs and other visuals from the relevant data.
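A minimal sketch of the Logstash side of that chain; the Beats port, the grok pattern and the Elasticsearch host are placeholders to adapt to your log format:

# logstash.conf
input {
  beats {
    port => 5044    # must match the hosts entry in Filebeat's output.logstash section
  }
}
filter {
  grok {
    # replace GREEDYDATA with a pattern that extracts your actual metrics
    match => { "message" => "%{GREEDYDATA:event}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}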

Kibana as Elasticsearch monitoring solution

The objective is to create a dashboard in Kibana that includes visualizations based on some special queries for monitoring Elasticsearch health and status, like GET /_cluster/settings?include_defaults=true&filter_path=defaults. The problem is that this query is not based on any index. How can I work around that?
Please install the free version of X-Pack; cluster monitoring is free.
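If X-Pack is in place, turning collection on is a single setting; a sketch assuming Elasticsearch 6.3 or later, where monitoring ships in the default distribution (the setting name is version-dependent):

# elasticsearch.yml
xpack.monitoring.collection.enabled: true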
I am using that already.

Grafana with elasticsearch data source

I am currently running the latest (master) build of Grafana, which supports Elasticsearch as a data source. I am able to connect to Elasticsearch, but cannot find docs on the structure for storing metrics in Elasticsearch.
I know it's not officially released yet, but since I am already running Elasticsearch it would be nice not to set up another data source like InfluxDB.
Does anybody have experience with this setup?
OK, found it. Basically you can use whatever structure you want as long as there is an @timestamp attribute. Example:
{
  "@timestamp": "2015-10-22T12:00:00.000+02:00",
  "name": "my event",
  "load": 0.5,
  "cpu": 50
}
Now you can filter, group or search these attributes in Grafana.
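For reference, indexing such a document is one HTTP call; a sketch assuming Elasticsearch on localhost:9200 and a hypothetical index/type named metrics/event:

curl -XPOST 'http://localhost:9200/metrics/event' -d '{
  "@timestamp": "2015-10-22T12:00:00.000+02:00",
  "name": "my event",
  "load": 0.5,
  "cpu": 50
}'

In the Grafana data source settings you then point the index name at metrics and keep @timestamp as the time field.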

elasticsearch and kibana setup on different machines

I would like to know the best setup for having an Elasticsearch data node and a Kibana server on separate machines. I have set up multiple Elasticsearch data nodes and would like to show all dashboards on one server, but I'm not sure how to do that. I do not want different URLs for viewing different dashboards. I set up the data nodes with a Logstash shipper on each machine, so all I need now is to have Kibana get data from each of the different data nodes. Is that possible?
I edited the config file for Kibana as follows:
elasticsearch: "http://192.168.xx.xxx:9200"
So far, Kibana 3 does not support what you need. You can refer to this.
As that page says, you may want to set up a proxy or a redirect page.
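For example, a small nginx proxy in front of the nodes gives Kibana a single URL to talk to; a sketch with hypothetical node addresses:

upstream es_cluster {
    server es-node-1:9200;   # hypothetical node addresses
    server es-node-2:9200;
}
server {
    listen 9200;
    location / {
        proxy_pass http://es_cluster;
    }
}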
