What is the best way to track what requests were made to LUIS and QnA Maker and what the responses were?
I don't want to log the utterances and responses in any DB, I need something like AppInsights.
It's certainly possible to do this, and there are a lot of built-in pieces to do so. You don't mention what language/environment you're using (.net or node), but here are some starting points to look at:
Add telemetry to your bot (this link will take you straight to the section on LUIS & QnA Maker)
Add telemetry to your QnAMaker bot
The equivalent is possible in node, I'd imagine, if that's your language of choice. If so, this example might be useful.
For QnA Maker, as long as you enabled Application Insights when you created your QnA Maker service, all you need to do is write and run queries in Logs. This page gives many examples of analytics you can run.
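For illustration, one such query returns the most frequently served answers over the last 30 days. This is only a sketch based on the documented pattern; the generateAnswer URL segment, the "QnAMaker GenerateAnswer" trace message, and the Question/Answer/Score custom dimensions are assumptions you should verify against your own logs:
// Sketch: most frequently returned QnA Maker answers in the last 30 days.
requests
| where url endswith "generateAnswer"
| where timestamp > ago(30d)
| project timestamp, id, url, resultCode, duration
| parse kind=regex url with * "(?i)knowledgebases/" KbId "/generateAnswer"
| join kind=inner (
traces | extend id = operation_ParentId
) on id
| where message == "QnAMaker GenerateAnswer"
// question and score are extracted here in case you want to filter or group by them
| extend question = tostring(customDimensions['Question'])
| extend answer = tostring(customDimensions['Answer'])
| extend score = todouble(customDimensions['Score'])
| summarize count() by answer
| order by count_ desc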
For LUIS, the link Hilton provided works for C#, but for Node you need a different approach. This example will show you how to create a LUIS App Insights helper in Node and send the traces to Application Insights. It doesn't give you example queries, but here are a few I use and have found useful. If there's a specific metric you're looking for, let me know.
Pie chart of intents over the last 30 days
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, duration, performanceBucket, resultCode, url, id
| parse kind=regex url with * "(?i)http://" botName ".azurewebsites.net/api/messages"
| join kind=inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend topIntent = tostring(customDimensions.LUIS_luisResponse_luisResult_topScoringIntent_intent)
| where topIntent != "None"
| where topIntent != ""
| summarize count() by topIntent
| order by count_ desc
| render piechart
List of queries with results below 0.5 confidence score
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, duration, performanceBucket, resultCode, url, id
| parse kind=regex url with * "(?i)http://" botName ".azurewebsites.net/api/messages"
| join kind=inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend topIntent = tostring(customDimensions.LUIS_luisResponse_luisResult_topScoringIntent_intent)
| extend score = todouble(customDimensions.LUIS_luisResponse_luisResult_topScoringIntent_score)
| extend utterance = tostring(customDimensions.LUIS_luisResponse_text)
| order by timestamp desc nulls last
| project timestamp, botName, topIntent, score, utterance, performanceBucket, duration, resultCode
| where score < 0.5
Average number of messages per conversation
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, url, id
| parse kind=regex url with * "(?i)http://" botName ".azurewebsites.net/api/messages"
| join kind=inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend convID = tostring(customDimensions.LUIS_botContext_conversation_id)
| order by timestamp desc nulls last
| project timestamp, botName, convID
| summarize messages=count() by conversation=convID
| summarize conversations=count(), messageAverage=avg(messages)
Related
I'm currently evaluating a use case in Azure Application Insights, but I'm open to using any other framework or infrastructure that would fit better.
So basically I have a desktop application that logs some events or traces (I don't know exactly which one it should be). Examples of events (or traces?):
| timestamp | state | user |
------------------------------------------
| yyyy-mm-dd 12:00 | is_at_home | John |
| yyyy-mm-dd 15:00 | is_at_work | John |
| yyyy-mm-dd 18:00 | is_outside | John |
Users are considered to be in the last state received until a new event comes.
I need to extract data to answer questions like this:
I want to see whether the total time John spends at home is growing or shrinking.
I want to know in which states the users spend the most time.
I want the average duration of the state "is_at_work", and whether it's going up or down over time.
So, can Application Insights output this kind of analysis? If not, which architecture/platform should I use? Am I using the right keywords to describe what I want?
Thank you
The AI/Log Analytics query language (KQL) supports all kinds of things like that. The trick is getting your queries exactly right; here you'll have to figure out exactly how to calculate the times between rows as the "state" changes.
Here's my first attempt:
let fakeevents = datatable (timestamp: datetime, state: string, user: string ) [
datetime(2021-08-02 12:00), "is_at_home" , "John" ,
datetime(2021-08-02 15:00), "is_at_work" , "John",
datetime(2021-08-02 18:00), "is_outside" , "John",
datetime(2021-08-02 11:00), "is_at_home" , "Jim" ,
datetime(2021-08-02 12:00), "is_at_work" , "Jim",
datetime(2021-08-02 13:00), "is_outside" , "Jim",
];
fakeevents | partition by user (
order by user, timestamp desc |
extend duration = prev(timestamp, 1, now()) - timestamp
)
gets me:
| timestamp            | state      | user | duration         |
| 2021-08-02T18:00:00Z | is_outside | John | 06:20:23.1748874 |
| 2021-08-02T15:00:00Z | is_at_work | John | 03:00:00         |
| 2021-08-02T12:00:00Z | is_at_home | John | 03:00:00         |
| 2021-08-02T13:00:00Z | is_outside | Jim  | 11:25:14.6912472 |
| 2021-08-02T12:00:00Z | is_at_work | Jim  | 01:00:00         |
| 2021-08-02T11:00:00Z | is_at_home | Jim  | 01:00:00         |
Before you send any real data, you can create "fake" data by using the datatable operator to make a table full of test rows.
You can then apply things like summarize to calculate which state had the maximum duration, etc. Note the use of partition by user to make sure each user is treated separately. As an assumption, I use now() to end the duration of the most recent event; you'll want to do something there, otherwise you'll have blank cells.
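As a concrete example, here's a minimal sketch (reusing the fakeevents table and the same partition/prev trick, and again using now() to close the most recent state) that tackles the first question, whether the total time John spends at home is trending up or down per day:
// Sketch: total hours spent "is_at_home" per user per day, rendered as a timechart.
fakeevents
| partition by user (
order by user, timestamp desc |
extend duration = prev(timestamp, 1, now()) - timestamp
)
| where state == "is_at_home"
| summarize total_at_home_hours = sum(duration / 1h) by user, day = bin(timestamp, 1d)
| render timechart
The same shape works for the other questions: swap the where clause for a different state, or summarize avg(duration / 1h) by state to see where users spend the most time.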
I have a KQL query which calculates the number of blobs uploaded to Azure Storage in the last 24 hours.
The query below returns a number as expected when run in Azure Log Analytics.
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
| distinct Uri
| summarize count()
I now want to visualise this information in a timechart to get a more detailed view. I have tried adding "render timechart" to the query chain as follows:
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
| distinct Uri
| summarize count()
| render timechart
When executing the query, however, I am getting the error message:
Failed to create visualization
The Stacked bar chart can't be created as you are missing a column of one of the following types: int, long, decimal or real
Any tips to how this can be accomplished?
Your query returns only a single aggregated number with no datetime column, so there is no time axis to chart. If you wish to look at the data aggregated at an hourly resolution (for example) and rendered as a timechart, you could try this:
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
| summarize dcount(Uri) by bin(TimeGenerated, 1h)
| render timechart
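If you also want hours with no uploads to show up as zeros rather than gaps, a make-series variant of the same query is a possible sketch (same table and column names assumed):
StorageBlobLogs
| where TimeGenerated > ago(1d) and OperationName has "PutBlob" and StatusText contains "success"
// make-series produces a point for every 1h bin, filling empty bins with 0
| make-series uploads = dcount(Uri) default = 0 on TimeGenerated from ago(1d) to now() step 1h
| render timechart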
I am a newbie in UIPath.
I have a DataTable with these headers:
1.) Date
2.) Error
I want to extract a Distinct Date for every error, and use this code:
dtQuery = ExtractDataTable.DefaultView.ToTable(True,{"Date","Error"})
With this, I get my desired result. My problem is: how can I append a new column, "Count", containing the count of each distinct Date/Error combination? For example:
DATE | ERROR | COUNT
2/27/2019 | Admin Query String |
2/27/2019 | 404 Shield |
2/26/2019 | 404 Shield |
2/25/2019 | 404 Shield |
2/25/2019 | Admin Query String |
I tried to use ADD DATA COLUMN ACTIVITY with these properties:
Column Name = "COUNT"
Data Table = dtQuery
DefaultValue = ExtractDataTable.DefaultView.ToTable(True,{"Date","Error"}).Rows.Count
But by using this, it gives me this:
DATE | ERROR | COUNT
2/27/2019 | Admin Query String | 5
2/27/2019 | 404 Shield | 5
2/26/2019 | 404 Shield | 5
2/25/2019 | 404 Shield | 5
2/25/2019 | Admin Query String | 5
Thanks in advance! Happy coding!
After hours of research, here is what I learned.
I can iterate on each item of the datatable by using FOR EACH ROW Activity.
So for every row item of my dtQuery, I add ASSIGN Activity that looks like this:
row(2) = [item i want to add]
But that doesn't answer my question. I want to know the count of each unique item based on two criteria: the same DATE and ERROR.
Maybe I can code directly on the Excel File?
So I researched Excel formulas that work like "Select Distinct Col1...", etc.
I found this video tutorial, which might help: Countif
But it's only for a single criterion, so I found this: Countifs
So to wrap it up,
(screenshot: For Each Row activity)
1.) I loop inside dtQuery using For Each Row Activity
2.) Inside loop, I add Assign Activity with this code
row(2) = "=COUNTIFS('LookUp Sheet'!B:B,'Result Sheet'!A" & indexerRow + 2 & ",'LookUp Sheet'!D:D,'Result Sheet'!B" & indexerRow + 2 & ")"
Hope this helps others who stumble upon the same problem. Happy Automating! ^_^
Having the following tables:
-----------       -----------------       ---------------
| PROJECT |       | ACCESSES      |       | ENVIRONMENT |
-----------       -----------------       ---------------
| id      |       | id            |       | id          |
| title   |       | project_id    |       | title       |
-----------       | environment_id|       ---------------
                  | username      |
                  | password      |
                  -----------------
My goal is to get all the environments used by a project through the accesses table.
In my Project model:
public function environments() {
    return $this->belongsToMany('App\Models\Environment', "accesses");
}
My problem is that if I have multiple rows with the same project_id and environment_id values in the accesses table, it will fetch the same environment multiple times.
How may I force it to retrieve each environment only once?
This is an old question, but for the benefit of future travellers:
The distinct() method can help in this situation:
public function environments() {
return $this
->belongsToMany('App\Models\Environment', "accesses")
->distinct();
}
From the docs:
The distinct method allows you to force the query to return distinct results:
$users = DB::table('users')->distinct()->get();
As far as I can tell, this works at least as far back as Laravel 4.x, so it should be fine for all currently supported versions.
You can achieve this using the sync method or the toggle method, depending on your use case.
From the docs:
The many-to-many relationship also provides a toggle method which "toggles" the attachment status of the given IDs. If the given ID is currently attached, it will be detached. Likewise, if it is currently detached, it will be attached:
$project->environments()->toggle([1, 2, 3]);
You may also use the sync method to construct many-to-many associations. The sync method accepts an array of IDs to place on the intermediate table. Any IDs that are not in the given array will be removed from the intermediate table. So, after this operation is complete, only the IDs in the given array will exist in the intermediate table
$project->environments()->sync([1, 2, 3]);
For more information please have a look in the docs.
https://laravel.com/docs/5.6/eloquent-relationships#updating-many-to-many-relationships
Does https://crate.io support facets (for faceted search)?
I didn't find anything in the docs. ElasticSearch replaced facets with aggregations in 2014, but the aggregation section in the crate docs only talks about SQL aggregation functions.
My use case:
I've got a list of web sites; each record has a domain and a language field. When displaying the search results, I want to get a list of all domains that the search results appear in, as well as a list of all languages, ordered by number of occurrences so the search results can be narrowed down. The number of results for each individual facet value should also be given.
Screenshot with facets:
There is no way to get the facets I want from Crate itself.
Instead, we're now enabling the ElasticSearch REST API in crate.yml:
es.api.enabled: true
.. and can use the ElasticSearch aggregation API.
Crate doesn't support facets or Elasticsearch aggregations directly. Like you suggested, you can always turn on the Elasticsearch API. However, there are other ways to get these aggregations.
1) Have you considered issuing multiple queries to the cluster? For example, if you load your page dynamically with JavaScript, you can first return the search results and load the facets later. This should also decrease the overall response time of the application.
2) In CrateDB 2.1.x, there will be support for subqueries, which allow you to include the facets within your query:
select q1.id, q1.domain, q1.tag, q2.d_count, q3.t_count from websites q1,
(select domain, count(*) as d_count from websites where text like '%query%' group by domain) q2,
(select tag, count(*) as t_count from websites where text like '%query%' group by tag) q3
where q1.domain = q2.domain and q1.tag = q3.tag and q1.text like '%query%'
order by q1.id
limit 5;
This gives you a result table like this where you have the search results alongside with the domain and tag count for the query:
+----+-------------+-------------+---------+---------+
| id | domain      | tag         | d_count | t_count |
+----+-------------+-------------+---------+---------+
|  1 | example.com | example     |       2 |       3 |
| 14 | crate.io    | software    |       1 |       4 |
| 17 | google.com  | search      |       5 |       2 |
| 29 | github.com  | open-source |       3 |       3 |
| 47 | linux.org   | software    |       2 |       4 |
+----+-------------+-------------+---------+---------+
Disclaimer: I'm new to Crate :)