Use LogQL to determine when a function is running? - grafana-loki

I'm trying to figure out whether I can use Grafana Loki to determine when a function is running. Specifically, I have log lines like:
event=checkpoint_start, timestamp=123
event=checkpoint_end, timestamp=234
I'm trying to figure out whether I can construct a gauge from LogQL that is:
1 whenever the time is between 123 and 234
0 outside of this range
If that's possible, I can make a graph out of the gauge and use it to visualize when the function is running.
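For what it's worth, a rough sketch of the kind of query that could serve as a starting point, assuming a hypothetical stream selector {job="myapp"} and simple line filters. This only approximates the desired 0/1 gauge, since a LogQL metric query counts events per step rather than carrying state from the start event to the end event:

sum(count_over_time({job="myapp"} |= "event=checkpoint_start" [$__interval]))
-
sum(count_over_time({job="myapp"} |= "event=checkpoint_end" [$__interval]))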

Related

Using grafana counter to visualize weather data

I'm trying to visualize my weather data using Grafana. I've already done the Prometheus part, and now I face an issue that has been haunting me for quite a while.
I created a counter that adds the indoor temperature every five minutes:
var tempIn = prometheus.NewCounter(prometheus.CounterOpts{
    Name: "tempin",
    Help: "Temperature indoor",
})

for {
    tempIn.Add(station.Body.Devices[0].DashboardData.Temperature)
    time.Sleep(time.Second * 300)
}
How can I visualize this data so that it shows the current temperature and stores it indefinitely, so I can still look at it a year later like a normal graph?
tempin{instance="localhost:9999"} will only display the accumulated temperature, so it's useless for me. I need the current temperature, not the added-up one. I also tried rate(tempin{instance="localhost:9999"}[5m]).
How can I solve this issue?
Although a counter is not the best fit for this use case, you can use the increase() function:
increase(tempin{instance="localhost:9999"}[5m])
This will tell you how much the counter increased in the last five minutes.
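Since a temperature goes up as well as down, a gauge is the more natural metric type here. A minimal, self-contained sketch of that approach; readTemperature is a hypothetical stand-in for the station.Body.Devices[0].DashboardData.Temperature value from the question, and port 9999 matches the instance label there:

package main

import (
    "log"
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// tempIn is a gauge, so every reading replaces the previous value instead of accumulating.
var tempIn = promauto.NewGauge(prometheus.GaugeOpts{
    Name: "tempin",
    Help: "Temperature indoor",
})

// readTemperature is a hypothetical stand-in for
// station.Body.Devices[0].DashboardData.Temperature from the question.
func readTemperature() float64 { return 21.5 }

func main() {
    go func() {
        for {
            // Set overwrites the value, so the metric always reflects the latest reading.
            tempIn.Set(readTemperature())
            time.Sleep(5 * time.Minute)
        }
    }()
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":9999", nil))
}

With this, tempin{instance="localhost:9999"} returns the current temperature directly, and Prometheus keeps the history for graphing for as long as its configured retention allows.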

How to get Prometheus Node Exporter metrics with JSON format

I deployed a Prometheus Node Exporter pod on k8s. It worked fine.
But when I try to get system metrics by calling the Node Exporter metrics API from my custom Go application:
curl -X GET "http://[my Host]:9100/metrics"
the result format is like this:
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.7636e-05
go_gc_duration_seconds{quantile="0.25"} 2.466e-05
go_gc_duration_seconds{quantile="0.5"} 5.7992e-05
go_gc_duration_seconds{quantile="0.75"} 9.1109e-05
go_gc_duration_seconds{quantile="1"} 0.004852894
go_gc_duration_seconds_sum 1.291217651
go_gc_duration_seconds_count 11338
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 8
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.12.5"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.577128e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.0073577064e+10
...and so on.
This long text output is hard to parse, and I want to get the results in JSON format so I can parse them easily.
I checked the Prometheus Node Exporter GitHub issues (https://github.com/prometheus/node_exporter/issues/1062) and someone recommended prom2json.
But that is not what I'm looking for, because I would have to run an extra process to execute prom2json to get the results. I want to get Node Exporter's system metrics by simply making an HTTP request, or via some kind of native Go package, in my code.
How can I get those Node Exporter metrics in JSON format?
You already mentioned prom2json and you can pull the package into your Go file by importing github.com/prometheus/prom2json.
The sample executable in the repo has all the building blocks you need. First, open the URL and then use the prom2json package to read the data and store the result.
However, you should also have a look at expfmt.TextParser, as that is the native way to ingest Prometheus-formatted metrics.
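For illustration, a minimal sketch of the expfmt.TextParser route; the localhost:9100 address and the go_goroutines metric family are placeholders:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"

    "github.com/prometheus/common/expfmt"
)

func main() {
    // Fetch the plain-text exposition format from the exporter (placeholder address).
    resp, err := http.Get("http://localhost:9100/metrics")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Parse the text format into MetricFamily structs keyed by metric name.
    var parser expfmt.TextParser
    families, err := parser.TextToMetricFamilies(resp.Body)
    if err != nil {
        log.Fatal(err)
    }

    // Marshal one family to JSON as an example.
    if mf, ok := families["go_goroutines"]; ok {
        b, err := json.Marshal(mf)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(b))
    }
}

If you prefer the flatter structure prom2json produces, the parsed families can also be fed to that package instead of marshalling them directly.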

Calculate Percentage From Duration Data in Graphite/Grafana

I have state-change duration data, in milliseconds, for transitions between my object's states. I am sending this data to Graphite. I want to create a Single Stat panel which shows me the percentage of durations that are less than 20 seconds. How can I create it? Any idea or any similar scenario example would be useful.
myProjectName.FromStateToState.duration 10000ms
myProjectName.FromStateToState.duration 15000ms
myProjectName.FromStateToState.duration 21000ms
myProjectName.FromStateToState.duration 25000ms
myProjectName.FromStateToState.duration 30000ms
Assume that for the above scenario I expect the percentage to be 40%, because I have five duration values and two of them are less than 20 seconds. I am using Graphite as the data source and Grafana for visualization.
Temporary Solution
Because I couldn't get enough attention or any answer, I will add my temporary solution here. If I learn the exact solution in the future, I will post it as an answer too.
Basically, I created two counters, counterSuccess and counterFail. If the state-change duration is less than 20 seconds I increase counterSuccess, otherwise I increase counterFail. Then I get the success rate as a percentage via the basic formula counterSuccess/(counterSuccess + counterFail).
Graphite commands at Grafana Panel:
A : sumSeries(myProjectName.FromStateToState.counterSuccess.count)
B : sumSeries(myProjectName.FromStateToState.counterFail.count)
C : sumSeries(#A, #B)
D : divideSeries(#A,#C)
I defined a Single Stat panel in Grafana to show it as a single percentage.
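If the panel expects a value between 0 and 100 rather than a 0-1 ratio, one option (a suggestion only, using Graphite's standard scale function) is to add one more query on top of the series above:
E : scale(#D, 100)
Alternatively, setting the Single Stat unit to "Percent (0.0-1.0)" in Grafana displays #D as a percentage without an extra series.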

Getting value of linkDistance in d3

I want to print the value that my linkDistance function returns to the console, but when I do that I only get 20, which is the default value. I read that if linkDistance is a function then it should be called each time the layout starts, so I should see the two different values that I am returning in the console, but that is not the case.
Any idea?

Can Cube (js metrics framework) return more than 1000 events?

The Cube software (https://github.com/square/cube) allows you to retrieve events.
I want to retrieve a lot of events, but it appears that I am capped at 1000. There are well over 9000 events in MongoDB in the collection and time range I am querying.
Example HTTP GET queries I issue:
# 1000 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type
# 1000 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type&start=2012-02-02&stop=2013-07-03
# 7 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type&limit=7
# 1000 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type&limit=9999
It appears that the limit is pinned at line 166 of
https://github.com/square/cube/blob/28dad4af27a6680deb46077b16952590f2c21cad/lib/cube/event.js
based on the batchSize=1000 there.
Is it possible that you can 'page' through the data in some way? Or is this just a hard limit?
Looks like there is a hard cap on results in three places that need to be updated for large domains:
event.js - line 166
metric.js - line 11
metric.js - line 12
In addition, I was unable to find any query-string APIs for these parameters. Ideally, the cap could be left at 1000 (to avoid server bloat for people not tuning their queries correctly) while allowing the consumer to override it.
