What is the difference between viewing AWS custom metrics in the AWS console vs. configuring them in Grafana?

What advantages do we get from viewing custom metrics in Grafana instead of the AWS CloudWatch console itself?

The main advantage is that if you are already using Grafana for other monitoring, you can have your CloudWatch metrics together with your other metrics.
If you don't use Grafana, then in my opinion it would be overkill to install it just for CloudWatch.
Other advantages:
Templating (dropdown lists). See this dashboard for an example.
You can put them on a TV in your office. We have stats for S3 downloads on our TV, together with other stats for our website.
And soon there will be support for alerting on CloudWatch metrics in Grafana.

Related

Customised IPFS dashboard

I have a customised IPFS setup (created and maintained by someone else). I want to design a dashboard for this customised IPFS private cluster (similar to IPFS Desktop's node information view). I am researching Prometheus and Grafana for this. What are the ways to achieve this task? I am new to IPFS, so please guide me.
Edit: Recently I tried to get IPFS metrics using Prometheus.
http://localhost:5001/debug/metrics/prometheus gives some metric information, but I am not sure it has complete information, such as peers, files, etc.
Are there any Prometheus exporters for IPFS? Or how could I use data from the https://docs.ipfs.io/reference/http/api/#getting-started API in Grafana?
You may need to export custom metrics, but the Prometheus endpoint seems like a reasonable place to start.
Some additional reading:
https://github.com/ipfs/go-ipfs/pull/6688
https://github.com/ipfs/go-metrics-prometheus
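To start from the built-in endpoint mentioned above, a minimal Prometheus scrape config might look like the sketch below. This is an assumption-laden example: the job name is arbitrary, and the target assumes the go-ipfs API is on the default port 5001 on the same host.

```yaml
# prometheus.yml -- minimal sketch for scraping a local go-ipfs node
# (job name and target address are placeholders; adjust for your cluster)
scrape_configs:
  - job_name: ipfs
    metrics_path: /debug/metrics/prometheus   # go-ipfs exposes metrics here
    static_configs:
      - targets: ['localhost:5001']           # the go-ipfs API address
```

With this in place you can add Prometheus as a Grafana data source and build panels from whatever series the node exposes; anything missing (peer counts, file listings) would have to come from a custom exporter built on the HTTP API.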

Send logs from AWS to Elastic Cloud

I am using Elastic Cloud (hosted Elasticsearch) to index my app data. Now I want to start streaming logs from my AWS Lambda functions to my Elastic Cloud account. I have googled and I can see that there are a couple of ways to do this:
Functionbeat
CloudWatch → Elasticsearch subscription filter
CloudWatch → Lambda subscription filter
My questions are:
Which is the most cost-efficient and performant way to stream logs from AWS CloudWatch to Elastic Cloud?
For Functionbeat, is it necessary to first send logs to an S3 bucket? (I am referring to https://www.elastic.co/guide/en/beats/functionbeat/current/configuration-functionbeat-options.html)
First question:
Since Functionbeat is deployed to Lambda in the case of AWS, no. 1 and no. 3 cost the same. No. 1 is faster to deploy, because with no. 3 you need to create the Lambda yourself.
As for performance, it of course depends on the implementation, but I would guess there is no big difference between the two methods unless millisecond latency matters to you.
If you are using Elastic Cloud you can't use no. 2, which works with Amazon Elasticsearch Service. These are two completely different services. (See this page; I know it's a bit confusing!)
Second question:
No, you don't have to. Functionbeat gets logs directly from CloudWatch.
The S3 bucket is used to store the Functionbeat module itself before it is deployed to Lambda.
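To make the split concrete, here is a minimal `functionbeat.yml` sketch for the direct CloudWatch-to-Elastic-Cloud path. All names are placeholders (the deploy bucket, function name, log group, and Cloud ID are assumptions you would replace with your own values); note that the bucket only holds the deployment package, never log data.

```yaml
# functionbeat.yml -- minimal sketch, placeholder names throughout
functionbeat.provider.aws.deploy_bucket: "my-functionbeat-deploy"  # holds the deploy package only
functionbeat.provider.aws.functions:
  - name: fn-cloudwatch-logs
    enabled: true
    type: cloudwatch_logs            # reads directly from CloudWatch; no log data in S3
    triggers:
      - log_group_name: /aws/lambda/my-app   # placeholder log group

# Elastic Cloud credentials from the deployment console (placeholders)
cloud.id: "my-deployment:..."
cloud.auth: "elastic:password"
```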

How to configure X-Ray traces for Amazon Neptune?

I have an API going through Lambda (Node.js) to AWS Neptune. X-Ray shows the traces from API Gateway → Lambda and stops there. Has anyone enabled deeper tracing all the way into Neptune?
Thanks!
You can use the AWS X-Ray SDK for Node.js to instrument your Lambda function so that the calls to Neptune are traced: https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-nodejs-awssdkclients.html
Please let me know if you need further help.
As of now, the most you can do is to use the X-Ray clients and explicitly trace [1] the requests that you make from your Lambda. Neptune's AWS SDK currently only traces management API calls, not queries to the database. So unlike the DynamoDB example called out in the X-Ray docs, you cannot get granular insights (e.g. the query that was executed, latency breakdowns, etc.) from Neptune via X-Ray at the moment.
It does sound like a useful feature, so I would recommend making a feature request for it, or building something custom for the client you are using. Just curious, which client are you using from within the Lambda? (i.e. a Gremlin GLV? Raw HTTP requests? Jena? etc.) For example, if you're using a Gremlin GLV, then maybe all you need is to build a custom Netty handler that can do the tracing on your behalf.
[1] https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-nodejs-httpclients.html
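As a sketch of approach [1] for the raw-HTTP case: you can capture Node's `https` module with the X-Ray SDK so every outgoing call your Lambda makes to the Neptune endpoint shows up as a subsegment. This assumes the `aws-xray-sdk-core` package and a Gremlin HTTP endpoint; the cluster hostname and query are placeholders.

```javascript
// Sketch: trace raw HTTPS calls to Neptune from a Lambda handler.
// Assumes aws-xray-sdk-core is installed; the endpoint is a placeholder.
const AWSXRay = require('aws-xray-sdk-core');
AWSXRay.captureHTTPsGlobal(require('https')); // patch https so calls become X-Ray subsegments
const https = require('https');

exports.handler = async () => {
  // Any request made through the patched module is recorded as a
  // subsegment under the Lambda function's X-Ray segment.
  const body = JSON.stringify({ gremlin: 'g.V().limit(1)' }); // placeholder query
  return new Promise((resolve, reject) => {
    const req = https.request({
      host: 'my-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com', // placeholder
      port: 8182,
      path: '/gremlin',
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => (data += chunk));
      res.on('end', () => resolve(data));
    });
    req.on('error', reject);
    req.write(body);
    req.end();
  });
};
```

This gives you timing per request, but not query-level insight; that would need the custom client-side tracing described above.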

How to set up an alert system for Graphite+Grafana server

I have a server, stats.hostname, with Graphite + Grafana. It is receiving some stats about geolocation from several clients. I want to know if there is some plugin/extension/external tool to alert (by email) when these stats pass some threshold.
I tried worldPing, but I think that tool only checks whether a site is reachable or not.
Can you suggest a solution?
Thanks!
Alerting is probably one of the most requested features of Grafana. The team at Raintank is building an alerting system on top of Grafana. You can follow the progress and the discussion here: https://github.com/grafana/grafana/issues/2209
In the meantime, you can use Bosun for your alerting needs: https://bosun.org/quickstart#graphite
It has Graphite querying capabilities, and there's a Bosun datasource for Grafana as well.
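For a sense of what the Bosun route looks like, here is a rough sketch of a threshold alert against a Graphite series. The metric path, thresholds, and alert name are all placeholders, and a real config would also need a notification and template wired up to actually send the email.

```
# bosun.conf sketch -- metric path and thresholds are placeholders;
# email delivery additionally requires a template and notification block.
alert geo.stats.threshold {
    $q = avg(graphite("stats.hostname.geo.count", "5m", "", ""))
    warn = $q > 80
    crit = $q > 100
}
```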
Alerting in Grafana has been available since release 4.0 from December 2016:
http://grafana.org/blog/2016/12/12/grafana-4.0-stable-release/
Currently v4.0.2 is available for download: http://grafana.org/download/

What is the best way to send email reports from Kibana dashboard?

I've set up an ELK (Elasticsearch, Logstash and Kibana) stack and created some Kibana dashboard widgets. So far everything has gone amazingly well. Now I want to send daily and weekly emails with the generated reports.
What is the best way to do that? Do I need to install a plugin, or can I send it right from Kibana?
You can use ElastAlert. You will be able to email a link to the Kibana dashboard with only the data for the period you want. The period parameter in the top right corner will be set automatically in Kibana.
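As a rough sketch, an ElastAlert rule that emails when matching events cross a threshold looks like the following. The index pattern, filter, threshold, and address are placeholders; you would tune them to the data behind your dashboard.

```yaml
# ElastAlert rule sketch -- placeholders throughout
name: dashboard-error-threshold
type: frequency          # fire when more than num_events match within timeframe
index: app-logs-*
num_events: 100
timeframe:
  hours: 1
filter:
  - term:
      level: "error"
alert:
  - email
email:
  - "ops@example.com"
```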
There are some workarounds, such as PhantomJS, but they are not straightforward to implement. For specific events and Kibana queries there are alerting mechanisms available (Watcher, Logz.io), but I'm guessing you're looking to receive the entire dashboard by email.
There are two out-of-the-box options for sending email reports from a Kibana dashboard:
Skedler, which allows you to schedule and send automated email reports based on your Kibana dashboard or search.
If you have an Elasticsearch license/subscription, you can use the reporting plugin.
Hope it helps.
You can use Sentinl, which extends Kibana with alerting and reporting functionality to monitor, notify, and report on data series changes using standard queries, programmable validators, and a variety of configurable actions. Think of it as a free and independent "Watcher" which also has scheduled "Reporting" capabilities (PNG/PDF snapshots).
The greatest thing about Sentinl is that you can easily configure alerts through its native app interface integrated in Kibana.
