I'm trying to visualize my weather data using Grafana. I've already built the Prometheus part, and now I face an issue that has haunted me for quite a while.
I created a counter that adds the indoor temperature every five minutes:
var tempIn = prometheus.NewCounter(prometheus.CounterOpts{
    Name: "tempin",
    Help: "Temperature indoor",
})

for {
    tempIn.Add(station.Body.Devices[0].DashboardData.Temperature)
    time.Sleep(time.Second * 300)
}
How can I visualize this data so that it shows the current temperature, and store it indefinitely so I can still look at it a year later like a normal graph?
tempin{instance="localhost:9999"} will only display the summed-up temperature, so it's useless for me. I need the current temperature, not the accumulated one. I also tried rate(tempin{instance="localhost:9999"}[5m]).
How can I solve this?
Although a counter is not the best fit for this use case, you can use the increase() function:
increase(tempin{instance="localhost:9999"}[5m])
This will tell you how much the counter increased over the last five minutes.
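Since what you care about is the current reading rather than a running total, a gauge is the more natural metric type: its Set() records the latest value instead of accumulating. A minimal sketch of the exporter side, reusing the station value from the question:

var tempIn = prometheus.NewGauge(prometheus.GaugeOpts{
    Name: "tempin",
    Help: "Temperature indoor",
})

for {
    // Set overwrites the value rather than adding to it,
    // so the series always holds the latest reading.
    tempIn.Set(station.Body.Devices[0].DashboardData.Temperature)
    time.Sleep(5 * time.Minute)
}

With a gauge, plain tempin{instance="localhost:9999"} shows the current temperature directly, and Prometheus keeps the history for as long as its configured retention allows.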
I have state-change duration data (in milliseconds) for transitions between my object's states, and I am sending this data to Graphite. I want to create a Single Stat panel that shows me the percentage of durations less than 20 seconds. How can I create it? Any idea or any similar scenario example would be useful.
myProjectName.FromStateToState.duration 10000ms
myProjectName.FromStateToState.duration 15000ms
myProjectName.FromStateToState.duration 21000ms
myProjectName.FromStateToState.duration 25000ms
myProjectName.FromStateToState.duration 30000ms
For the above scenario I expect the percentage to be 40%, because I have 5 duration values and 2 of them are less than 20 seconds. I am using Graphite as the data source and Grafana for visualization.
Temporary Solution
Since this didn't get enough attention or any answers, I will add my temporary solution here. If I learn the exact solution in the future, I will post it as an answer too.
Basically, I created two counters, counterSuccess and counterFail. If the state-change duration is less than 20 seconds, I increase counterSuccess; otherwise I increase counterFail. Then I get the success-rate percentage via the basic formula counterSuccess / (counterSuccess + counterFail).
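For illustration, here is a rough sketch of the reporting side in Go. It assumes a statsd daemon feeding Graphite and uses the cactus go-statsd-client; any statsd client with an Inc call works the same way, and the metric names simply mirror the series used in the panel queries below:

package main

import (
    "time"

    "github.com/cactus/go-statsd-client/v5/statsd"
)

// reportDuration buckets each state change into a success or failure counter.
func reportDuration(s statsd.Statter, d time.Duration) {
    if d < 20*time.Second {
        s.Inc("FromStateToState.counterSuccess", 1, 1.0)
    } else {
        s.Inc("FromStateToState.counterFail", 1, 1.0)
    }
}

func main() {
    client, err := statsd.NewClientWithConfig(&statsd.ClientConfig{
        Address: "127.0.0.1:8125", // statsd address is an assumption
        Prefix:  "myProjectName",
    })
    if err != nil {
        panic(err)
    }
    defer client.Close()
    reportDuration(client, 15*time.Second) // lands in counterSuccess
}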
Graphite commands at Grafana Panel:
A : sumSeries(myProjectName.FromStateToState.counterSuccess.count)
B : sumSeries(myProjectName.FromStateToState.counterFail.count)
C : sumSeries(#A, #B)
D : divideSeries(#A,#C)
I defined a Single Stat panel in Grafana to show #D as a percentage.
I didn't find a 'moving average' feature and I'm wondering if there's a workaround.
I'm using InfluxDB as the backend.
Grafana supports adding a movingAverage(). I also had a hard time finding it in the docs, but you can (somewhat hilariously) see its usage on the feature intro page:
As usual, click on the graph title, choose edit, and add the movingAverage() metric as described in the Graphite documentation:
movingAverage(seriesList, windowSize)
Graphs the moving average of a metric (or metrics) over a fixed number of past points, or a time interval.
Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like '1hour' or '5min' (see from / until in the render API for examples of time formats). Graphs the average of the preceding datapoints for each point on the graph. All previous datapoints are set to None at the beginning of the graph.
Example:
&target=movingAverage(Server.instance01.threads.busy,10)
&target=movingAverage(Server.instance*.threads.idle,'5min')
Grafana does no calculations itself; it just queries a backend and draws nice charts. So the aggregation abilities depend solely on your backend. While Graphite supports windowing functions such as moving average, InfluxDB currently doesn't support it.
There are quite a lot of requests for a moving average in InfluxDB on the web. You can leave your "+1" and track progress in this ticket: https://github.com/influxdb/influxdb/issues/77
A possible (yet not so easy) workaround is to create a custom script (cron, daemon, whatever) that pre-calculates the moving average and saves it in a separate InfluxDB series.
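The calculation itself is the easy part of that script. A minimal sketch in Go, assuming you have already fetched the raw points and will write the averaged series back to InfluxDB yourself:

package main

import "fmt"

// movingAverage returns the n-point moving average of values; the first
// n-1 points have no complete window and are omitted from the output.
func movingAverage(values []float64, n int) []float64 {
    out := make([]float64, 0, len(values))
    sum := 0.0
    for i, v := range values {
        sum += v
        if i >= n {
            sum -= values[i-n] // drop the value that slid out of the window
        }
        if i >= n-1 {
            out = append(out, sum/float64(n))
        }
    }
    return out
}

func main() {
    fmt.Println(movingAverage([]float64{1, 2, 3, 4, 5}, 3)) // [2 3 4]
}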
I found myself here trying to do a moving average in Grafana with a PostgreSQL database, so I'll just add a way to do it with a SQL query:
SELECT
  date AS time,
  AVG(daily_average_column)
    OVER (ORDER BY date ROWS BETWEEN 4 PRECEDING AND CURRENT ROW)
    AS value,
  '5 Day Moving Average' AS metric
FROM daily_average_table
ORDER BY time ASC;
This uses a "window" function to average the last 4 rows (plus the current row).
I'm sure there are ways to do this with MySQL as well.
The method and capability for this depend on your data source.
You specified InfluxDB, so your query will need to wrap an aggregation function [such as mean($field)] within the moving_average($aggregation_function, $num_of_points) transformation function.
In the 'Metrics' tab, you will find both functions in the SELECT portion of the menu.
Craft your query with the aggregation function (mean, min, max, etc.) first, so you can make sure the data looks as you expect.
After that, just click the '+' button next to the aggregation function, and under the 'Transformations' menu, select 'moving_average'.
The number in brackets is the number of points you want the average taken over.
For Prometheus as a data source, try avg_over_time(mymetric[5m]).
InfluxDB 2 allows you to calculate the moving average in the query, e.g.:
from(bucket: "iot")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "PoolWeather")
|> filter(fn: (r) => r["_field"] == "batteryvoltage")
|> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
|> movingAverage(n: 10)
|> yield(name: "average")
Another option is to report the data as "timing" metrics rather than counts.
This is easy to do, especially with statsd in your stack.
Plotting timing data (coming from statsd) as the average of the reported data points is already built in.
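A rough sketch of that reporting in Go, assuming the cactus go-statsd-client (the metric name is hypothetical; any statsd client with a timing call works the same way):

package metrics

import (
    "time"

    "github.com/cactus/go-statsd-client/v5/statsd"
)

// reportTiming sends a duration as a statsd timing metric; statsd then
// derives mean, upper, percentile, etc. series in Graphite automatically.
func reportTiming(s statsd.Statter, d time.Duration) {
    s.TimingDuration("myapp.state_change.duration", d, 1.0)
}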
The Cube software (https://github.com/square/cube) allows you to retrieve events.
I want to retrieve a lot of events, but it appears that I am capped at 1000. There are well over 9000 in MongoDB in the collection and time range I am querying.
Example http GET queries I issue:
# 1000 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type
# 1000 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type&start=2012-02-02&stop=2013-07-03
# 7 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type&limit=7
# 1000 results
http://1.2.3.4:1081/1.0/event?expression=my_event_type&limit=9999
It appears that the limit is pinned at line 166, based on batchSize = 1000:
https://github.com/square/cube/blob/28dad4af27a6680deb46077b16952590f2c21cad/lib/cube/event.js
Is it possible to 'page' through the data in some way, or is this just a hard limit?
Looks like there is a hard cap on results in three places that need to be updated for large domains:
event.js - line 166
metric.js - line 11
metric.js - line 12
In addition, I was unable to find any query-string APIs for these parameters. Ideally, we can leave the cap at 1000 (to avoid server bloat for people not tuning their queries correctly) and allow the consumer to define override behavior.
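Until an override exists, one client-side workaround is to page by time window using the start/stop parameters shown in the question. A sketch in Go, under two assumptions worth verifying against your Cube version: each event carries a time field, and batches come back in ascending time order:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
    "time"
)

type event struct {
    Time time.Time              `json:"time"`
    Data map[string]interface{} `json:"data"`
}

// fetchAll pulls events in batches of up to 1000, advancing the start of
// the window past the last event seen until a partial batch comes back.
func fetchAll(base, expr string, start, stop time.Time) ([]event, error) {
    var all []event
    for {
        q := fmt.Sprintf("%s/1.0/event?expression=%s&start=%s&stop=%s&limit=1000",
            base, url.QueryEscape(expr),
            start.Format(time.RFC3339), stop.Format(time.RFC3339))
        resp, err := http.Get(q)
        if err != nil {
            return nil, err
        }
        var batch []event
        err = json.NewDecoder(resp.Body).Decode(&batch)
        resp.Body.Close()
        if err != nil {
            return nil, err
        }
        all = append(all, batch...)
        if len(batch) < 1000 {
            return all, nil // final, partial page
        }
        // move the window just past the newest event we received
        start = batch[len(batch)-1].Time.Add(time.Millisecond)
    }
}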
AcaniUsers loads the first 20 users in MongoDB (on Heroku via Sinatra) closest to me from my iPhone. I want to add a Load More button that will load the next 20 users closest to me. Keep in mind, my location and the locations of the users on my phone may have changed. I was thinking of switching from Sinatra to Node.js and opening a WebSocket so I could have realtime updates of the presence and locations of the users on my phone, but I think I should save that challenge for a later iteration. Basically, how should I implement the load-more functionality?
To paginate queries in MongoDB you can use a combination of limit() and skip().
So, the first query will be:
your_query.limit(20)
Then, if you want to load the next 20 (you will have to remember the first query somewhere):
your_query.skip(20).limit(20)
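If you were doing this server-side in Go, the same skip/limit pattern with the official mongo-go-driver would look like the sketch below (the collection name and empty filter are placeholders):

package main

import (
    "context"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// nextPage fetches users 20 at a time: page 0 is the first 20 results,
// page 1 the next 20, and so on.
func nextPage(ctx context.Context, users *mongo.Collection, page int64) (*mongo.Cursor, error) {
    opts := options.Find().
        SetSkip(page * 20). // skip the pages already shown
        SetLimit(20)
    return users.Find(ctx, bson.M{}, opts)
}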
By the way, I suggest running the first query with a limit higher than 20 and caching the results you don't display. When they're requested, just serve them from the cache (you can store it in the user session). If the position changes, start from scratch: re-query the db and invalidate the cache.
Think of it more as a client-side question: use subscriptions based on the current group.
- encode the group into a geo-square if possible (more efficient than a circle, I think?)
- periodically (every t) execute an operation that checks the location of each user and simply sends it out with a group id to match the subscriptions
Actually, to build your subscription groups, just use the geoNear command on all of your subscribers:
- build a hash of your subscribers and their groups
- each subscriber is subscribed to one group and themselves (for targeted communication => indicating that a specific subscriber should change their subscription)
- iterate through the results i times, where i is the number of individuals in an update group
- execute an action that checks the current value of j, the group number for a specific subscriber, against the new j value; if there is a change, notify the subscriber on the subscriber's private channel
- notifications synchronously follow subscriber adjustments
something like:
var pageSize = 20; // assign pageSize in method call
var total = collection.Find(query).CountDocuments();

for (var skipped = 0; skipped < total; skipped += pageSize)
{
    // fetch the next page of matching documents
    var page = collection.Find(query)
                         .Skip(skipped)
                         .Limit(pageSize)
                         .ToList();
    // ... display this page ...
}
:)