Throughput Volume Query in Application Insights - performance

I am trying to get Throughput (Volume) metrics using the query below:
requests
// additional filters can be applied here
| where timestamp > ago(24h)
| where client_Type != "Browser"
| summarize count() by bin(timestamp, 5m)
| extend request='Volume'
// render result in a chart
| render timechart
So my question is: for Volume, do we use count() or sum(itemCount)? Which of these is more accurate for getting Volume (Throughput) details per interval?

The right way is to use sum(itemCount). That way the metric remains correct for sampled applications (by default, adaptive sampling kicks in when the number of telemetry items exceeds 5/sec).
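Applying that advice, the original query can be adjusted by swapping count() for sum(itemCount) (a sketch; the columns used are the standard Application Insights requests schema):

```kusto
requests
| where timestamp > ago(24h)
| where client_Type != "Browser"
// sum(itemCount) reconstructs the pre-sampling request count,
// so Volume stays accurate even when adaptive sampling is active
| summarize Volume = sum(itemCount) by bin(timestamp, 5m)
| render timechart
```

When sampling is off, itemCount is 1 for every row, so both aggregations agree; sum(itemCount) is simply the version that stays correct either way.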

Related

Running Total with Additional Field Filter in Power BI

I have created a running total measure in Power BI using DAX; however, the total does not respect the column filter when it is placed in the table. The running total should sum the balance but then break out into the individual maturity buckets within the table.
Here is the measure and the resulting table.
I have tried adding extra filters using the FILTER/ALL functions to break out the maturity buckets and got either the same result or errors. Not sure what else I can do?
Here is a fake sample of the data. In the comments I have included the definition of the measure.
|Date |Tenor|Balance |
|-------------|-----|-----------------|
|December 2022|18m |0.196072326627487|
|December 2022|2y |0.149643186475954|
|December 2022|3y |0.180522608363889|
|December 2022|4y |0.780540306321475|
|December 2022|5y |0.156029893270158|
|January 2022|18m |0.512496934496972|
|January 2023|2y |0.068123785829084|
|January 2023|3y |0.349971677118287|
Here is my solution! I am sorry if I kept you waiting too long!
First, I need to say that your design is not very efficient. You need a full date table with complete date values (not only month & year parts).
Here is your calendar table:
Here is the DAX Code you need to write to obtain correct value:
Total_Correct =
CALCULATE (
    SUM ( 'Callable'[Balance] ),
    FILTER ( ALL ( 'Callable'[Date] ), 'Callable'[Date] <= MAX ( 'Callable'[Date] ) ),
    ALL ( 'Calendar' )
)
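The answer refers to a calendar table whose screenshot didn't survive. A minimal sketch of such a table (assuming the fact table is named 'Callable', as in the measure, and that a full-granularity date range is wanted) could be:

```dax
Calendar =
ADDCOLUMNS (
    CALENDARAUTO (),                          // one row per date across the model's date range
    "Year", YEAR ( [Date] ),
    "Month", FORMAT ( [Date], "MMMM YYYY" )   // e.g. "December 2022", matching the sample data
)
```

Mark this table as a date table and relate Calendar[Date] to 'Callable'[Date] so the ALL('Calendar') in the measure removes the month filter as intended.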

Alert on Absent Data for Combined Metric in GCP Monitoring

I have created an alert policy in GCP Monitoring which notifies me when a certain kind of log message stops appearing (a dead man's switch). I have created a logs-based metric with a label, "client", which I use to group the metric and get a timeseries per client. I have been using "absence of data" as the trigger for the alert. This has all been working well, until...
After a recent change, the logs now also come from different resources, so there is a need to combine the metric across those resources. I can achieve this using MQL:
{ fetch gce_instance::logging.googleapis.com/user/ping
| group_by [metric.client], sum(val())
| every 30m
; fetch global::logging.googleapis.com/user/ping
| group_by [metric.client], sum(val())
| every 30m }
| union
Notice that I need to align the two series with the same bucket size (30m) to be able to join them, which makes sense. By downloading a CSV of the query, I can see that the value for a timeseries is "undefined" in those buckets where the metric data was absent.
To create an alert using this query, I tried something like this:
{ fetch gce_instance::logging.googleapis.com/user/ping
| group_by [metric.client], sum(val())
| every 30m
; fetch global::logging.googleapis.com/user/ping
| group_by [metric.client], sum(val())
| every 30m }
| union
| absent_for 1h
If I look at the CSV output for this query it doesn't reflect the absence of metric data for a timeseries, and this is presumably because a value of "undefined" doesn't qualify as absent data.
Is there a way to detect for absence of data for a "unioned" metric (and therefore aligned) across multiple resources?
Update 1
I have tried this, which seems to get me some of the way there. I'd really appreciate comments on this approach.
{
fetch gce_instance::logging.googleapis.com/user/ping
| group_by [metric.client], sum(val())
;
fetch global::logging.googleapis.com/user/ping
| group_by [metric.client], sum(val())
}
| union
| absent_for 1h
I have settled on a solution as follows:
{
fetch gce_instance::logging.googleapis.com/user/ping
| group_by [metric.client]
;
fetch global::logging.googleapis.com/user/ping
| group_by [metric.client]
}
| union
| absent_for 1h
| every 30m
Note:
group_by [metric.client] conforms the tables from the different resources, which allows the union to work
absent_for does align input timeseries using the default period or one specified by a following every
I found it really hard to debug these MQL queries, in particular to confirm that absent_for was going to trigger an alert. I realised that I could use value [active] to show a plot of the active column (which absent_for produces) and that gave me confidence that my alert was actually going to work.
{
fetch gce_instance::logging.googleapis.com/user/ping
| group_by [metric.client]
;
fetch global::logging.googleapis.com/user/ping
| group_by [metric.client]
}
| union
| absent_for 1h
| value [active]

Write a calculated field formula in AWS QuickSight using ifelse

I am trying to create a waterfall chart in QuickSight with the dataset format below.
I want to multiply the values of specific metrics; for instance, if Type = Metric 1 and Metric 2, then multiply their respective values. Is that feasible using the calculated field option? Thanks
|Type    |Value|
|--------|-----|
|Metric 1|10   |
|Metric 2|20   |
|Metric 3|30   |
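One possible approach (a sketch, not tested against this dataset): QuickSight's sumIf aggregation can isolate each metric's value so the two can be multiplied inside a single calculated field. The field names Type and Value are taken from the sample above:

```
sumIf(Value, Type = 'Metric 1') * sumIf(Value, Type = 'Metric 2')
```

Each sumIf sums Value only over the rows matching its condition (here yielding 10 and 20 respectively), so the calculated field evaluates to their product. An ifelse-based variant such as sum(ifelse(Type = 'Metric 1', Value, 0)) should behave equivalently.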

Kusto\KQL - Render timechart for simple count value

I have a KQL query which calculates the number of BLOBs uploaded to Azure Storage in the last 24 hours.
The query below returns a number as expected when run in Azure Log Analytics.
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
| distinct Uri
| summarize count()
I now want to visualise this information in a timechart to get a more detailed view. I have tried adding "render timechart" to the query chain as follows:
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
| distinct Uri
| summarize count()
| render timechart
When executing the query, however, I am getting the error message:
Failed to create visualization
The Stacked bar chart can't be created as you are missing a column of one of the following types: int, long, decimal or real
Any tips to how this can be accomplished?
If you wish to look at the data aggregated at an hourly resolution (for example) and rendered as a timechart, you could try this:
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
| summarize dcount(Uri) by bin(TimeGenerated, 1h)
| render timechart
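If hours with no uploads should appear as zero rather than as gaps in the chart, a make-series variant of the same aggregation could be used (a sketch against the same assumed schema):

```kusto
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
// make-series emits every 1h step and fills empty buckets with the default value 0
| make-series Uploads = dcount(Uri) default = 0 on TimeGenerated from ago(1d) to now() step 1h
| render timechart
```

This makes quiet periods visible as an explicit zero line instead of missing points.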

LUIS Utterance and response tracking

What is the best way to track what requests were made to LUIS and QnA Maker, and what the responses were?
I don't want to log the utterances and responses in any DB, I need something like AppInsights.
It's certainly possible to do this, and there are a lot of built-in pieces to do so. You don't mention what language/environment you're using (.net or node), but here are some starting points to look at:
Add telemetry to your bot (this link will take you straight to the section on LUIS & QnA Maker)
Add telemetry to your QnAMaker bot
The equivalent is possible in node, I'd imagine, if that's your language of choice. If so, this example might be useful.
For QnA Maker, as long as you enabled Application Insights when you created your QnA Maker service, all you need to do is create and run the queries in Logs. This page gives many examples of analytics you can run.
For LUIS, the link Hilton provided works for C#, but for node you need a different approach. This example will show you how to create a LUIS App Insights helper in node and send the traces to Application insights. It doesn't give you an example queries, but here are a few I use and have found useful. If there's some specific metric you're looking for, let me know.
Pie chart of intents over the last 30 days
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, duration, performanceBucket, resultCode, url, id
| parse kind = regex url with *"(?i)http://"botName".azurewebsites.net/api/messages"
| join kind= inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend topIntent = tostring(customDimensions.LUIS_luisResponse_luisResult_topScoringIntent_intent)
| where topIntent != "None"
| where topIntent != ""
| summarize count() by topIntent
| order by count_ desc
| render piechart
List of queries with results below 0.5 confidence score
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, duration, performanceBucket, resultCode, url, id
| parse kind = regex url with *"(?i)http://"botName".azurewebsites.net/api/messages"
| join kind= inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend topIntent = tostring(customDimensions.LUIS_luisResponse_luisResult_topScoringIntent_intent)
| extend score = todouble(customDimensions.LUIS_luisResponse_luisResult_topScoringIntent_score)
| extend utterance = tostring(customDimensions.LUIS_luisResponse_text)
| order by timestamp desc nulls last
| project timestamp, botName, topIntent, score, utterance, performanceBucket, duration, resultCode
| where score < 0.5
Average number of messages per conversation
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, url, id
| parse kind = regex url with *"(?i)http://"botName".azurewebsites.net/api/messages"
| join kind= inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend convID = tostring(customDimensions.LUIS_botContext_conversation_id)
| order by timestamp desc nulls last
| project timestamp, botName, convID
| summarize messages=count() by conversation=convID
| summarize conversations=count(), messageAverage=avg(messages)