QlikView: average duration of the most-used subscription modality - business-intelligence

Consider a subscription video streaming service. There are three subscription modalities: A, B, and C. I want to calculate the average duration of the modality that has the highest number of subscribers per month.
The chart below shows the number of users by subscription modality.
I need the expression to generate a chart that shows the duration of the modality with the highest number of users per month.
I have the following fictitious data:

If you are using Month as a dimension on a chart, you can use the expression:
=FirstSortedValue(
    Aggr(Avg(Duration), Msuscription, Month),
    -Aggr(Count(user), Msuscription, Month)
)
This sorts by the same expression you used to build the chart in your question, and the leading - sorts in descending order, so the highest value is returned.
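Read as pseudocode, the expression builds a per-(modality, month) user count and average duration, then, for each month, returns the average duration of the modality with the most users. A minimal Python sketch of that logic, using hypothetical sample rows:

```python
from collections import defaultdict

# Hypothetical rows: (Month, Msuscription, user, Duration)
rows = [
    ("Jan", "A", "u1", 10), ("Jan", "A", "u2", 20),
    ("Jan", "B", "u3", 50),
    ("Feb", "B", "u4", 30), ("Feb", "B", "u5", 40),
    ("Feb", "C", "u6", 10),
]

def top_modality_avg_duration(rows):
    counts = defaultdict(int)    # (month, modality) -> number of users
    totals = defaultdict(float)  # (month, modality) -> summed duration
    for month, modality, _user, duration in rows:
        counts[(month, modality)] += 1
        totals[(month, modality)] += duration
    result = {}
    for month in {m for m, _ in counts}:
        # FirstSortedValue with a negative sort weight: pick the modality
        # with the highest user count for this month ...
        top = max((mod for m, mod in counts if m == month),
                  key=lambda mod: counts[(month, mod)])
        # ... and return its average duration
        result[month] = totals[(month, top)] / counts[(month, top)]
    return result
```

With these sample rows, Jan's most-used modality is A (2 users, average duration 15) and Feb's is B (average duration 35).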

Related

Average sales per month per client Quicksight

I'm fairly new to QuickSight and need help with the following:
I want to calculate the average sales in $ per month for each of my clients (Monthly Average by Client) and then categorize those clients into "A", "B", and "C" categories, where A > 250,000 monthly purchases, 100,000 < B <= 250,000, and C <= 100,000.
The table I have is very similar to the following:
I tried using the following function:
avgOver(
    sum(values_movement),
    [contact, extract('MM', fecha)]
)
This function lets me compute the average, but it won't let me visualize it without selecting a month. Since I want to categorize each client by their total monthly average, I can't base the categorization on a function that won't produce a client's monthly average unless a month is selected.
I know that once I get this function right, I can use ifelse() to categorize each client based on their average monthly transactions.
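For the categorization step, a sketch of what that ifelse() could look like, assuming the monthly-average expression has been saved as a calculated field that resolves per client (monthly_avg here is a hypothetical field name standing in for the avgOver(...) result):

```
/* monthly_avg is a hypothetical calculated field holding the
   client's monthly average from avgOver(...) */
ifelse(
    {monthly_avg} > 250000, 'A',
    {monthly_avg} > 100000, 'B',
    'C'
)
```

ifelse() in QuickSight accepts cascading condition/value pairs followed by a final else value, so the thresholds are checked top to bottom.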

How to get average value over last N hours in Prometheus

In Prometheus I want to calculate the total average of a specific power metric (dell_hw_chassis_power_reading) over a period of time, for example the last 2 hours.
I currently am using this query below:
sum(avg_over_time(dell_hw_chassis_power_reading[2h])) by (node,instance)
This query seems to give me only the latest value within the 2-hour interval specified in the query, not the total average over all scrapes of the metric within those 2 hours.
The query avg_over_time(dell_hw_chassis_power_reading[2h]) returns the average of the raw samples stored in Prometheus over the last 2 hours. If Prometheus contains multiple time series with the name dell_hw_chassis_power_reading, the query returns an independent average per matching time series.
If you want to calculate the average over all the time series with the name dell_hw_chassis_power_reading, use the following query:
sum(sum_over_time(dell_hw_chassis_power_reading[2h]))
/
sum(count_over_time(dell_hw_chassis_power_reading[2h]))
If you need the average per label or per set of labels, use the following query:
sum(sum_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
/
sum(count_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
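The distinction matters because averaging per-series averages weights every series equally, whereas the sum_over_time / count_over_time form weights each series by its number of raw samples. A small Python sketch with hypothetical sample values, assuming equal scrape intervals:

```python
# Two hypothetical series of raw samples within the 2h window
series_a = [100.0, 100.0, 100.0, 100.0]  # 4 samples
series_b = [300.0, 300.0]                # 2 samples

# Averaging per-series averages: each series weighted equally
per_series_avgs = [sum(s) / len(s) for s in (series_a, series_b)]
avg_of_avgs = sum(per_series_avgs) / len(per_series_avgs)
# -> (100 + 300) / 2 = 200.0

# sum(sum_over_time(...)) / sum(count_over_time(...)):
# each series weighted by its sample count
overall = (sum(series_a) + sum(series_b)) / (len(series_a) + len(series_b))
# -> (400 + 600) / 6, about 166.67
```

The second form is the true average over all raw samples, which is what the question is asking for.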

Datadog: METRIC.as_rate() vs. per_second(METRIC)

I'm trying to figure out the difference between the in-application modifier as_rate() and the rollup function per_second().
I want a table with two columns: the left column shows the total number of events submitted to a Distribution (in query-speak: count:METRIC{*} by {tag}), and the right column shows the average rate of events per second. The table visualization applies a sum rollup on left column, and an average rollup on the right column, so that the left column should equal the right column multiplied by the total number of seconds in the selected time period.
From reading the docs I expected either of these queries to work for the right column:
count:DISTRIBUTION_METRIC{*} by {tag}.as_rate()
per_second(count:DISTRIBUTION_METRIC{*} by {tag})
But it turns out these two queries are not the same: as_rate() is the only one that produces the expected average rate where left = right * num_seconds. In fact, the per_second() rollup does something odd, where metrics with fewer total events end up with higher average rates.
Is someone able to clarify why these two functions are not synonymous and what per_second() does differently?

Display count for a day using counter metrics in data dog

We have a counter metric in one of our microservices that pushes data to DataDog. I want to display the total count for a given time frame, and also the count per day (the X axis would show the date and the Y axis the count). How do we achieve this?
I tried using sum by and diff with the Query Value visualization. That gives the total count for the given time frame, but I would like a bar graph with the date on the X axis and the count on the Y axis. Is this possible in DataDog?
It seems like there are 2 main questions here:
display the total count for a given time frame.
the count per day.
I think the rollup method is going to be your friend for both questions.
For #1 you need to pass in the time frame you want a total over: sum:<metric_name>.rollup(sum, <time_frame>) and the single value can be displayed using the Query Value visualization.
For #2, the Datadog docs say you can get metrics per day
graphed using a day-long rollup with .rollup(avg,86400)
So this would look something like sum:<metric_name>.rollup(sum, 86400), and it can be displayed as a Timeseries with bars.

How to create timeline chart with average using Kibana?

I am ingesting data into Elasticsearch using Flume, and I want to create a time-series graph in Kibana to show the events collected over time. But I also want to show the average per time unit, so the user knows whether the current flow is around the average or not.
To create a timeline I am using a line graph with @timestamp as the X-axis and count as the Y-axis.
The question is how to create the average line, and how to make this average dynamic, e.g. so that as we zoom in the average changes from average per day to average per hour.
While creating a visualization you can choose the type of Y-axis metric. The default is "count". You can click the icon to choose another metric type; it offers various options like average, sum, percentile, etc.
As for the time range of the average calculation: in the X-axis metrics, under buckets, when you choose a date histogram the default interval is auto. This means the time range of the average changes automatically depending on the overall time range selected.
You can change it to a fixed interval such as per second, per minute, hourly, daily, etc.
It's a bit odd; you would expect count to appear alongside field as something you can average. In reality you have to do it another way:
For the Y axis, instead of selecting count, select "Average Bucket",
then set up the bucket aggregation you would like, e.g. Date Histogram with a second interval.
Below this you have another box for the metric, i.e. the thing you're averaging; set this to count.
