Experiments Feature stuck on collecting data - dialogflow-cx

I am trying to split traffic from a given flow into different versions to measure statistical performance over time using the Experiment feature. However, it always shows the state "Collecting Data".
Here are the steps to reproduce the issue:
1. Create an Experiment on a flow and select different versions.
2. Select Auto rollout and choose the Steps option.
3. Add steps for a gradual traffic increase and a minimum duration.
4. Save and start the Experiment.
5. Send queries to the chatbot that trigger the flow configured for the Experiment.
The experiment should show some results in the Status tab and compare the performance of the flow versions. However, it does not produce any results: the status always shows "Collecting Data" and Auto Rollout shows "Not Started".
The only prerequisite for the Experiments feature to work is to enable Interaction logging, which is already enabled on my virtual agent.
About 2.5K sessions (~4K interactions) were created in the last 48 hours. Are there any minimum requirements for it to generate results, such as a minimum number of sessions?
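For reference, the steps above can be sketched as the payload sent when creating the Experiment. This is a minimal sketch: the field names mirror the Dialogflow CX v3 Experiment resource as I understand it, and the project/flow/version IDs are made up, so verify everything against the official API reference before relying on it.

```python
# Hypothetical sketch of a Dialogflow CX Experiment payload. Field names are
# assumed from the v3 Experiment resource; IDs below are placeholders.
def make_experiment(display_name, flow, variants):
    """variants: list of (version_id, traffic_fraction) pairs."""
    total = sum(frac for _, frac in variants)
    if abs(total - 1.0) > 1e-9:
        raise ValueError("traffic allocations must sum to 1.0")
    return {
        "displayName": display_name,
        "definition": {
            "versionVariants": {
                "variants": [
                    {"version": f"{flow}/versions/{vid}",
                     "trafficAllocation": frac}
                    for vid, frac in variants
                ]
            }
        },
    }

exp = make_experiment(
    "v1-vs-v2",
    "projects/p/locations/l/agents/a/flows/f",
    [("1", 0.5), ("2", 0.5)],
)
```

The traffic allocations must cover the full flow traffic, which is why the helper validates that the fractions sum to 1.0 before building the payload.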

Related

How to make a Spotfire link open faster?

I've published a Spotfire file with 70 '.txt' files linked to it. The total size of the files is around 2 GB. When users open it in their web browser, it takes roughly 27 minutes to load the linked tables.
I need an option that improves opening performance. The issue seems to be the amount of data and the way the files are linked to Spotfire.
This runs in a server and the users open the BI in their browser.
I've tried embedding the data, which lowers the load time, but it forces me to interact with the software every time I want to update the data. The solution is supposed to run automatically.
I need to open this in less than 5 minutes.
Update:
- I need the data to be updated at least twice a day.
- The embedded link is acceptable from a load-time perspective, but the system needs to run without my intervention.
- I've never used Spotfire automation services.
Schedule the report to cache twice a day on the Spotfire server by setting up a rule under Scheduling and Routing. The advantage is that while the analysis is being refreshed for the second time during the day, users can still quickly open the older cached data until the refresh completes. To the end user it opens in seconds, because behind the scenes you have pre-opened the report. Once you set up the rule, this runs automatically with no intervention needed.
All functionality and scripting within the report will work the same, and it can be opened many times simultaneously by different users. This is really the best way if you have to link to that many files. Otherwise, try consolidating files, aggregating data, and removing all unnecessary columns and data tables so the data pulls through faster.

Design an alarm system based on logs in Elasticsearch

How would you design a system which can generate alarms based on certain conditions on data stored on Elasticsearch?
I'm thinking of a system similar to AWS CloudWatch.
The proposed alarming system should be able to work under the following conditions:
There could be thousands of users using this system to create alarms.
There could be thousands of alarms active at any given time.
Shouldn't have high impact on query performance.
Large volume of data.
A naive approach would be to apply all the alarm conditions whenever a new record is added to Elasticsearch, or to have a service/Lambda function execute all the alarm rules at a specified interval, but I really doubt a system like that can satisfy all of the conditions above.
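To make the scaling concern concrete, the naive interval-based approach amounts to running one query per rule per tick. This is a self-contained sketch (the rule shape, the `run_query` callback, and the fake hit counts are all illustrative, not a real Elasticsearch client):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlarmRule:
    name: str
    query: dict                        # the Elasticsearch query body for this rule
    condition: Callable[[int], bool]   # e.g. fires when the hit count exceeds a threshold

def evaluate_rules(rules, run_query):
    """Naive evaluator: run every rule's query and collect the ones that fire.

    `run_query` stands in for an Elasticsearch count/search call; here it just
    returns a hit count so the sketch stays self-contained.
    """
    fired = []
    for rule in rules:
        hits = run_query(rule.query)   # one full query per rule, every interval
        if rule.condition(hits):
            fired.append(rule.name)
    return fired

rules = [
    AlarmRule("too-many-errors", {"match": {"level": "ERROR"}}, lambda n: n > 100),
    AlarmRule("no-heartbeats", {"match": {"type": "heartbeat"}}, lambda n: n == 0),
]
fake_counts = {"ERROR": 250, "heartbeat": 12}
fired = evaluate_rules(rules, lambda q: fake_counts[next(iter(q["match"].values()))])
# fired == ["too-many-errors"]
```

With thousands of active alarms this loop issues thousands of queries per interval against the cluster, which is exactly the query-performance impact the question wants to avoid.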
You might be interested in learning more about the Alerts feature in X-Pack. It includes Watchers, which are essentially the queries you want to monitor.
Take control of your alerts by viewing, creating, and managing all of
them from a single UI. Stay in the know with real-time updates on
which alerts are running and what actions were taken.
Documentation: https://www.elastic.co/guide/en/x-pack/current/xpack-alerting.html
Sales Page: https://www.elastic.co/products/x-pack/alerting
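A Watcher pairs a scheduled query with a condition and actions. The sketch below shows the general body shape as described in the X-Pack alerting documentation, written out as a Python dict; the index pattern, field names, threshold, and email address are made up for illustration, so check the linked docs for the exact schema of your version.

```python
# Sketch of an X-Pack Watcher body. Structure follows the alerting docs;
# the index, fields, and recipient are illustrative placeholders.
watch = {
    "trigger": {"schedule": {"interval": "10m"}},           # run every 10 minutes
    "input": {
        "search": {
            "request": {
                "indices": ["logs-*"],
                "body": {"query": {"match": {"level": "ERROR"}}},
            }
        }
    },
    "condition": {
        "compare": {"ctx.payload.hits.total": {"gt": 100}}  # fire on >100 error hits
    },
    "actions": {
        "notify_ops": {
            "email": {"to": "ops@example.com", "subject": "Error spike detected"}
        }
    },
}
```

Because the schedule, condition, and actions all live server-side, this avoids the per-rule polling service from the naive approach: Elasticsearch itself evaluates each watch on its interval.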

What will be the wait time before big query executes a query?

Every time I execute a query in Google BigQuery, I can see in the Explanation tab that there is an average wait time. Is it possible to know the wait time as a percentage or in seconds?
Since BigQuery is a managed service, a lot of customers around the globe are using it. It has an internal scheduling system based on the billing tier (explained here: https://cloud.google.com/bigquery/pricing#high-compute) and other internals of your project. Based on this, the query is scheduled for execution depending on cluster availability, so there will be a minimum wait until it finds a cluster of machines to execute your job.
I have never seen significant wait times there. If you do have this issue, contact Google support to look into your project. If you edit your original question and add a job ID, a Google engineer may check whether there is an issue or not.
It's currently not exposed in the UI.
But you can find a similar concept in the API (search for "wait" on the following page):
https://cloud.google.com/bigquery/docs/reference/v2/jobs#resource
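One way to approximate the wait yourself is from the job's `statistics` object in the jobs resource, which (per the REST API reference) reports `creationTime` and `startTime` as epoch milliseconds. This is a small sketch using a hand-built statistics dict rather than a live API response:

```python
def wait_time_seconds(job_statistics):
    """Approximate scheduling wait: time between job creation and execution start.

    `job_statistics` mirrors the `statistics` object of the BigQuery jobs
    resource, where creationTime/startTime are epoch-millisecond strings.
    """
    created = int(job_statistics["creationTime"])
    started = int(job_statistics["startTime"])
    return (started - created) / 1000.0

# Illustrative values, not real job data.
stats = {"creationTime": "1500000000000", "startTime": "1500000002500"}
print(wait_time_seconds(stats))  # 2.5
```

A job that sat in the scheduler for 2.5 seconds before executing would look like the example above; in practice you would fetch the statistics with a `jobs.get` call for your job ID.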
Is it possible to reduce the big query execution wait time to the minimum?
Purchase more BigQuery Slots.
Contact your sales representative or support for more information.

Redmine filter or report to see historical data

I'm trying to find usage statistics for the "statuses" in our Redmine data. I'm using two filters: The tracker and the status. I realized that these filters only show tasks which are currently in that particular status. To find out the total usage statistics I need to find out the historical data, too, which means if a task was marked with a particular status in the past, it should be counted.
There isn't any current filter or report in Redmine which can give me those numbers. Do you know of any other methods to query historical data in Redmine?
(We're changing our tracker, status and workflows and we want to remove those unused statuses from the workflows. To make that decision, we need to find out which statuses were useful and functional in the past.)
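One avenue worth checking: Redmine keeps issue change history in its `journals`/`journal_details` tables, where a status change is recorded as a detail row with `property = 'attr'` and `prop_key = 'status_id'`. Assuming that schema, historical status usage can be tallied from those rows; the sketch below works on plain dicts standing in for query results, and the sample values are made up:

```python
from collections import Counter

def historical_status_usage(journal_details):
    """Count how often each status was ever set, based on Redmine journal
    history (journal_details rows with property='attr', prop_key='status_id')."""
    usage = Counter()
    for row in journal_details:
        if row["property"] == "attr" and row["prop_key"] == "status_id":
            usage[row["value"]] += 1   # the status the issue was changed to
    return usage

# Illustrative rows, as they might come back from the journal_details table.
rows = [
    {"property": "attr", "prop_key": "status_id", "value": "2"},
    {"property": "attr", "prop_key": "status_id", "value": "5"},
    {"property": "attr", "prop_key": "status_id", "value": "2"},
    {"property": "attr", "prop_key": "done_ratio", "value": "50"},
]
print(historical_status_usage(rows))  # Counter({'2': 2, '5': 1})
```

Statuses that never appear in the result (and are not any issue's current status) are candidates for removal from the workflows.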

How can I see aggregates over the traces in New Relic

We use New Relic to gather performance information from our production environment and we have added some custom instrumentation. In the Web Transactions screens, we can see which transactions use most time and we can even drill down into the specific traces of the slowest transactions. This is all working fine. However, the slowest transactions are not always representative for the operation as a whole. They are often edge cases (cache expired, warming requests after an update, etc...).
I would be interested to see the very same data that we can see in the Trace Details in a more aggregate way. Preferably also in the hierarchical way that is used in Trace Details (although this will not always be possible, as multiple instances may have different traces). Is the Breakdown Table on the overview page for one Web Transaction type actually what I am looking for? I am not sure. What does that show exactly?
The Breakdown Table in New Relic's Web Transactions tab is designed to give you an aggregate of performance data along with historical comparisons. This may not provide the specific level of detail you're looking for.
New Relic has a new feature available for the Python and Java agents called X-Ray Sessions. After you start an x-ray session, New Relic will collect up to 100 transaction traces and a thread profile for your transaction. Collection automatically stops at 100 traces or 24 hours, whichever comes first. The results are displayed in a hierarchical waterfall chart like transaction traces, but the data is aggregated. Here is an overview:
https://newrelic.com/docs/transactions-dashboards/xray-sessions
While I can't say if or when this feature will be rolled out to the other language agents, I suggest keeping an eye on the following for updates:
https://newrelic.com/docs/features/new-noteworthy
