carbon-tagger does not translate received metrics to Graphite

I have configured my monitoring system as the following chain:
my_app -> pystatsd -> statsdaemon -> carbon-tagger -> graphite (via carbon-cache) -> graph-explorer
But it looks like carbon-tagger only dumps metrics to ElasticSearch, not to Graphite. At the same time, carbon-tagger successfully sends its own internal metrics to carbon-cache, and they show up in Graph Explorer just fine. I have looked at the carbon-tagger source code and could not find the place where it forwards the metrics received from statsdaemon to Graphite. So now I'm confused! How should I configure my monitoring system to dump metrics to both ElasticSearch and Graphite?

In a nutshell, the correct configuration of the described system should look like this:
my_app -> pystatsd -> statsdaemon -> carbon-relay (or carbon-relay-ng) -> carbon-tagger + carbon-cache -> graphite -> graph-explorer
That is, statsd/statsdaemon should send its data to carbon-relay (or carbon-relay-ng), not directly to carbon-cache, and carbon-relay will then broadcast the data to both carbon-tagger and carbon-cache. Also, keep in mind that carbon-tagger does not understand the pickle format, while the original carbon-relay emits data only over the pickle protocol.
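For illustration, a minimal carbon-relay-ng configuration that fans every incoming plain-text metric out to both carbon-tagger and carbon-cache could look roughly like the sketch below. The addresses and ports (2003 for the relay listener, 2006 for carbon-tagger, 2005 for carbon-cache) are placeholders, and the exact route syntax depends on your carbon-relay-ng version, so check its README before copying this:

instance = "default"
listen_addr = "0.0.0.0:2003"    # statsdaemon sends its graphite output here
spool_dir = "spool"

init = [
    # one route that duplicates every metric to both destinations; pickle=false
    # keeps the plain-text protocol that carbon-tagger understands
    'addRoute sendAllMatch default  127.0.0.1:2006 spool=false pickle=false  127.0.0.1:2005 spool=true pickle=false',
]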

Related

clear prometheus metrics from collector

I'm trying to modify the Prometheus Mesos exporter to expose framework states:
https://github.com/mesos/mesos_exporter/pull/97/files
A bit about the Mesos exporter: it collects data from both the Mesos /metrics/snapshot endpoint and the /state endpoint.
The issue with the latter, both with the changes in my PR and with the existing metrics reported on slaves, is that the metrics created last forever (until the exporter is restarted).
So if, for example, a framework has completed, the metrics reported for this framework become stale (e.g. they still show the framework using CPU).
So I'm trying to figure out how I can clear those stale metrics. If I could just clear the entire mesosStateCollector each time before Collect runs, it would be awesome.
There is a delete method on the different Prometheus vectors (e.g. GaugeVec), but in order to delete a metric, I need not only the label name but also the label value of the relevant metric.
OK, it seems it was easier than I thought (if only I had been familiar with Go before approaching this task).
I just needed to type-assert the collector to *prometheus.GaugeVec and reset it:
prometheus.NewGaugeVec(prometheus.GaugeOpts{
    Help:      "Total slave CPUs (fractional)",
    Namespace: "mesos",
    Subsystem: "slave",
    Name:      "cpus",
}, labels): func(st *state, c prometheus.Collector) {
    c.(*prometheus.GaugeVec).Reset() // <-- added this for each GaugeVec
    for _, s := range st.Slaves {
        c.(*prometheus.GaugeVec).WithLabelValues(s.PID).Set(s.Total.CPUs)
    }
},
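Outside the exporter's collector map, the same pattern can be shown as a small self-contained sketch: reset the GaugeVec before repopulating it, so series for objects that no longer exist (e.g. a finished framework) disappear instead of going stale. The state/slave structs and the slave_pid label below are simplified stand-ins for illustration, not the exporter's real types:

package main

import "github.com/prometheus/client_golang/prometheus"

type slave struct {
    PID  string
    CPUs float64
}

type state struct {
    Slaves []slave
}

var slaveCPUs = prometheus.NewGaugeVec(prometheus.GaugeOpts{
    Namespace: "mesos",
    Subsystem: "slave",
    Name:      "cpus",
    Help:      "Total slave CPUs (fractional)",
}, []string{"slave_pid"})

// update drops all previously exported series and re-creates one series per
// slave present in the current state snapshot.
func update(st *state) {
    slaveCPUs.Reset()
    for _, s := range st.Slaves {
        slaveCPUs.WithLabelValues(s.PID).Set(s.CPUs)
    }
}

func main() {
    prometheus.MustRegister(slaveCPUs)
    update(&state{Slaves: []slave{{PID: "slave(1)@10.0.0.1:5051", CPUs: 4}}})
}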

How do I monitor ClickHouse instances effectively (GCP)?

I want to build some sort of materialized view on top of system.merges, system.metrics, and system.asynchronous_metrics, so I get a time-series view of system health (memory consumption, etc.).
Is this possible? I tried it for system.merges, but all I get are the currently running merges.
You can use the following approaches:
Export metrics via the Graphite protocol to ClickHouse itself:
- turn on Graphite export: https://clickhouse.tech/docs/en/operations/server_settings/settings/#server_settings-graphite (a config sketch is at the end of this answer)
- use https://github.com/lomik/graphite-clickhouse to store the exported data back in ClickHouse
- a complete Vagrant demo stand is here: https://github.com/Slach/clickhouse-metrics-grafana/
Use the undocumented system.metric_log table:
- look at https://github.com/ClickHouse/ClickHouse/issues/6363, https://github.com/ClickHouse/ClickHouse/search?q=metric_log and https://github.com/ClickHouse/ClickHouse/blob/master/dbms/programs/server/config.d/metric_log.xml
- turn on system.metric_log in /etc/clickhouse-server/config.d/metric_log.xml:
<yandex>
    <metric_log>
        <database>system</database>
        <table>metric_log</table>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <collect_interval_milliseconds>1000</collect_interval_milliseconds>
    </metric_log>
</yandex>
Be careful: according to https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Interpreters/MetricLog.cpp#L18, system.asynchronous_metrics is not flushed into system.metric_log.
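For reference, the <graphite> export section mentioned in the first approach also goes into the server config, inside <yandex>. A minimal sketch might look like the following; the host, port, and interval values are placeholders for your carbon / graphite-clickhouse endpoint, and the exact set of supported child elements depends on your ClickHouse version:

<yandex>
    <graphite>
        <host>localhost</host>
        <port>2003</port>
        <timeout>0.1</timeout>
        <interval>60</interval>
        <root_path>one_min</root_path>
        <metrics>true</metrics>
        <events>true</events>
        <asynchronous_metrics>true</asynchronous_metrics>
    </graphite>
</yandex>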

How to get Prometheus Node Exporter metrics with JSON format

I deployed a Prometheus Node Exporter pod on k8s. It worked fine.
But when I try to get system metrics by calling the Node Exporter metrics API from my custom Go application:
curl -X GET "http://[my Host]:9100/metrics"
The result format was like this
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.7636e-05
go_gc_duration_seconds{quantile="0.25"} 2.466e-05
go_gc_duration_seconds{quantile="0.5"} 5.7992e-05
go_gc_duration_seconds{quantile="0.75"} 9.1109e-05
go_gc_duration_seconds{quantile="1"} 0.004852894
go_gc_duration_seconds_sum 1.291217651
go_gc_duration_seconds_count 11338
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 8
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.12.5"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.577128e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.0073577064e+10
.
.
.
something like this
This long text output is hard to parse, and I want to get the results in JSON format so I can parse them easily.
I checked the Prometheus Node Exporter GitHub issues (https://github.com/prometheus/node_exporter/issues/1062) and someone recommended prom2json.
But this is not what I'm looking for, because I would have to run an extra process to execute prom2json to get the results. I want to get Node Exporter's system metrics by simply making an HTTP request or using some kind of native Go package in my code.
How can I get those Node Exporter metrics in JSON format?
You already mentioned prom2json, and you can pull the package into your Go file by importing github.com/prometheus/prom2json.
The sample executable in the repo has all the building blocks you need. First, open the URL, then use the prom2json package to read the data and store the result.
However, you should also have a look at expfmt.TextParser, as that is the native way to ingest Prometheus-formatted metrics.
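If you prefer to stay with the standard Prometheus packages, a minimal sketch using expfmt.TextParser could look like the following; the URL is a placeholder for your node_exporter address, and the flattened JSON shape (name/labels/value) is just one possible layout:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"

    dto "github.com/prometheus/client_model/go"
    "github.com/prometheus/common/expfmt"
)

func main() {
    // Fetch the plain-text exposition format from node_exporter.
    resp, err := http.Get("http://localhost:9100/metrics")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Parse it into metric families (map of name -> *dto.MetricFamily).
    var parser expfmt.TextParser
    families, err := parser.TextToMetricFamilies(resp.Body)
    if err != nil {
        log.Fatal(err)
    }

    // Flatten counters/gauges/untyped metrics into simple records for JSON.
    type sample struct {
        Name   string            `json:"name"`
        Labels map[string]string `json:"labels,omitempty"`
        Value  float64           `json:"value"`
    }
    var out []sample
    for name, mf := range families {
        for _, m := range mf.GetMetric() {
            labels := map[string]string{}
            for _, lp := range m.GetLabel() {
                labels[lp.GetName()] = lp.GetValue()
            }
            var v float64
            switch mf.GetType() {
            case dto.MetricType_COUNTER:
                v = m.GetCounter().GetValue()
            case dto.MetricType_GAUGE:
                v = m.GetGauge().GetValue()
            case dto.MetricType_UNTYPED:
                v = m.GetUntyped().GetValue()
            default:
                continue // summaries and histograms need more fields; skipped here
            }
            out = append(out, sample{Name: name, Labels: labels, Value: v})
        }
    }

    b, err := json.MarshalIndent(out, "", "  ")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(b))
}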

How to get JStorm tuple lifecycle time

My question: I want to monitor the processing duration of each piece of data (tuple) in Storm. How should I do this? Are there any demos? I can only measure the processing duration of each bolt.
In both apache-storm and jstorm, you can get the monitoring data and export it to wherever you want.
In Storm, use a Metrics Consumer (see http://storm.apache.org/releases/1.0.6/Metrics.html).
In JStorm, use the metrics uploader (see http://jstorm.io/Maintenance/JStormMetrics.html).
There is a JStorm metricsUploader example here: https://github.com/lcy362/StormTrooper/blob/master/src/main/java/com/trooper/storm/monitor/MetricUploaderTest.java
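On the Storm side, a metrics consumer can be registered globally in storm.yaml; a minimal sketch using the built-in LoggingMetricsConsumer (class name as in Storm 1.x, parallelism value just an example) would be:

topology.metrics.consumer.register:
  - class: "org.apache.storm.metric.LoggingMetricsConsumer"
    parallelism.hint: 1

The same registration can also be done per topology in code via Config.registerMetricsConsumer(...).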

FIWARE Cygnus sink to Elasticsearch/Kibana?

I currently have a working flow:
Fiware Orion -> Fiware Cygnus -> Kafka -> Logstash -> Elasticsearch -> Kibana
I would like to push data directly from Cygnus to Elasticsearch; is there a sink available already?
An Apache Flume/Elasticsearch sink already exists: https://flume.apache.org/releases/content/1.3.0/apidocs/org/apache/flume/sink/elasticsearch/ElasticSearchSink.html
I was wondering if it would be easy to use it with Cygnus.
Up to Cygnus 1.5.0 (included), such a sink could be used perfectly well (as any other Flume sink) in a Cygnus agent configuration.
From 1.6.0 (included; this is the latest version) you will not be able to, since we internally replaced the usage of native Event objects with custom NGSIEvent ones. Why?
An Event is a set of headers and an array of raw bytes for the body.
An NGSIEvent inherits from Event and is a set of headers, an already parsed version of the body (as an object), and an array of raw bytes for the body pointing to null (this last part is what breaks compatibility with native Flume sinks).
Anyway, this is "easy" to fix: in a new version, NGSIEvent will contain both the parsed version of the body and the body itself as raw bytes.

Resources