What are the ways to monitor Power BI service activity?

I am working on a report to monitor certain things on Power BI Report Server. I was wondering what others monitor on their reports and how they do it.
Some examples of things I want to monitor:
A. Whether the scheduled data refreshes failed or succeeded.
Would love to be able to get the failure message.
B. What is the average response time of a query?
Is there a way to determine when a report is first opened? I would like to calculate the initial load time.
C. What was the longest response time of a query per day?
D. How many times did a query take longer than 5 seconds?
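For B, C, and D on Report Server (rather than the cloud service), one option is to query the ExecutionLog3 view in the ReportServer catalog database, which records per-execution timings and status. A minimal Python sketch, assuming SQL Server access; the server name and the one-day window are placeholders to adapt:

    import pyodbc  # assumes a SQL Server ODBC driver is installed

    # Placeholder connection string - point it at your ReportServer catalog DB.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=ReportServer;Trusted_Connection=yes;"
    )

    # TimeDataRetrieval/TimeProcessing/TimeRendering are in milliseconds.
    rows = conn.execute("""
        SELECT ItemPath,
               AVG(TimeDataRetrieval + TimeProcessing + TimeRendering) AS AvgMs,
               MAX(TimeDataRetrieval + TimeProcessing + TimeRendering) AS MaxMs,
               SUM(CASE WHEN TimeDataRetrieval + TimeProcessing + TimeRendering > 5000
                        THEN 1 ELSE 0 END) AS OverFiveSec
        FROM ExecutionLog3
        WHERE TimeStart >= DATEADD(day, -1, GETDATE())
        GROUP BY ItemPath
    """).fetchall()

    for r in rows:
        print(r.ItemPath, r.AvgMs, r.MaxMs, r.OverFiveSec)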

Related

Getting very high average response times in JMeter

I am testing a scenario with 400 threads. Although I am getting almost no errors, I have very high average response times. What could cause this problem? It seems the server doesn't time out but responds very late. I've added the summary report.
The table doesn't tell the full story. If the response time seems "so high" to you, that is definitely the bottleneck, and you can report it already.
What you can do to localize the problem is:
Consider using a longer ramp-up period, i.e. start with 1 user and add 1 more user every 5 seconds (adjust these numbers to your scenario) so you have an arrival phase, a "plateau", and a load-decrease phase. This approach will allow you to correlate increasing load with increasing response time by looking at the Active Threads Over Time and Response Times Over Time charts. This way you will be able to state that:
response time remains the same up to X concurrent users
after X concurrent users it starts growing, so throughput goes down
after Z concurrent users response time exceeds the acceptable threshold
It would also be good to see CPU, RAM, etc. usage on the server side, as the increased response time might be due to a lack of resources; you can use the JMeter PerfMon Plugin for this.
Inspect your server configuration, as you might need to tune it for high loads (the same applies to JMeter; make sure to follow JMeter Best Practices).
Use a profiler tool on the server side during the next test execution; it will show you the slowest places in your application code.

How to deal with a server throughput quota programmatically?

I have a program that makes many queries to the Google Search Analytics server. It runs the queries one after the other, so at any instant only one query is in process.
Google advises a throughput limit of at most 2000 queries per 100 seconds, so to configure my system as efficiently as possible I have two ideas in mind:
Since 2000 queries per 100 seconds works out to one query every 0.05 seconds, I space out my queries by sleeping, but only if a query takes less than 0.05 seconds; in that case the process sleeps for whatever remains of the 0.05-second interval. If the query takes 0.05 s or more, I trigger the next one without waiting (sketched below).
The second idea is easier to implement, but I think it will be less efficient: I trigger the queries while noting the time the process started, so if I reach 2000 queries before 100 seconds have passed, I sleep for the remaining time.
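A minimal Python sketch of the first idea (run_query is a placeholder for whatever call you make to the API):

    import time

    MIN_INTERVAL = 100.0 / 2000  # 0.05 seconds between requests

    def run_paced(queries, run_query):
        """Run queries sequentially, never starting more than one per 0.05 s."""
        for q in queries:
            start = time.monotonic()
            run_query(q)
            elapsed = time.monotonic() - start
            if elapsed < MIN_INTERVAL:
                # Sleep only for the remainder of the 0.05-second slot.
                time.sleep(MIN_INTERVAL - elapsed)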
So far I don't know how to measure which one is better.
What is your opinion of the two options? Is one better, and why? Is there another option I haven't thought of (especially one better than mine)?
Actually, what you need to consider is that it's 2000 requests per 100 seconds. You could do all 2000 requests in 10 seconds and still be on the good side of the quota.
I am curious as to why you are worried about it, though. If you get one of the following errors:
403 userRateLimitExceeded
403 rateLimitExceeded
429 RESOURCE_EXHAUSTED
Google just recommends that you implement exponential backoff, which consists of making your request, getting the error, sleeping for a bit, and trying again (do this up to eight times). Google will not penalize you for getting these errors; they just ask that you wait a bit before trying again.
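A sketch of that backoff loop in Python; RateLimitError stands in for whatever exception your client library raises for the 403/429 responses above:

    import random
    import time

    class RateLimitError(Exception):
        """Stand-in for the client library's 403/429 quota errors."""

    def with_backoff(request, max_retries=8):
        """Call request(), retrying with exponential backoff on quota errors."""
        for attempt in range(max_retries):
            try:
                return request()
            except RateLimitError:
                # Wait 2^attempt seconds plus random jitter, then try again.
                time.sleep(2 ** attempt + random.random())
        return request()  # final attempt; let any error propagate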
If you want to go further, you can do something like what I did in my C# application: I created a request queue that I use to track how much time has passed since I made the last 100 requests. I call it the Google APIs Flood Buster.
Basically, I have a queue where I log each request as I make it; before I make a new request I check how long it has been since I started. Yes, this requires moving the items around the queue a bit. If more than 90 seconds have gone by, I sleep for (100 - time elapsed). This has reduced my errors a great deal. It's not perfect, but that's because Google is not perfect with regard to tracking your quota; they are normally off by a little.
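The same sliding-window idea in rough Python (a simplification of the C# original; the 2000-per-100-seconds numbers come from the quota above):

    import collections
    import time

    WINDOW = 100.0       # quota window, in seconds
    MAX_REQUESTS = 2000  # requests allowed per window

    sent = collections.deque()  # timestamps of recent requests

    def wait_for_slot():
        """Block until sending one more request stays inside the quota."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the 100-second window.
        while sent and now - sent[0] > WINDOW:
            sent.popleft()
        if len(sent) >= MAX_REQUESTS:
            # Sleep until the oldest request in the window expires.
            time.sleep(WINDOW - (now - sent[0]))
        sent.append(time.monotonic())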

TIBCO Performance Measurement

Is there a way to measure the time taken by a particular node in a TIBCO workflow process?
e.g., how much time did the JMS/database node take to complete its operation?
The following applies to TIBCO BusinessWorks:
a) In TIBCO Administrator, you can see the time elapsed for each individual activity:
Service Instances > BW Process > Process Definitions.
Select each process after running it once and you will get an execution count, elapsed time, and CPU time for each activity that ran.
b) If you are only interested in a single activity, you can add two mapper activities in the flow, one before and one after the node you want to measure, and assign each a value of tib:timestamp(). Their difference gives you the elapsed time in milliseconds.
You might also enable statistics in TIBCO Administrator for the deployed engine:
(Engine Control tab) -> Start Statistic Collection.
This will produce a CSV file on the local disk (the path is displayed there as well) with the elapsed time of all activities of the processes executed by your engine.
You can then use this data for detailed analysis.
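Once the engine has written that CSV, a short Python sketch for pulling out the slowest activities (the file path and column names are assumptions - check the header of the file your engine actually produces):

    import csv

    # Placeholder names; adjust to the actual header of your stats file.
    STATS_FILE = "bw-stats.csv"
    ACTIVITY_COL = "Activity"
    ELAPSED_COL = "ElapsedTime"

    with open(STATS_FILE, newline="") as f:
        rows = list(csv.DictReader(f))

    # Print the ten slowest activities by elapsed time.
    rows.sort(key=lambda r: float(r[ELAPSED_COL]), reverse=True)
    for r in rows[:10]:
        print(r[ACTIVITY_COL], r[ELAPSED_COL])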

Realtime alerts for page speed

I'm looking for a tool that will send me an alert based on page load time.
Think of a downtime alert, e.g. Pingdom, but one that sends alerts once page load time rises above a certain threshold, e.g. alert that page X has taken longer than 7 seconds consistently for 30 minutes.
I know of a lot of tools that give you historical records and page speed metrics after the fact, or give you Apdex scores, but nothing that alerts on speed thresholds.
Does anyone know of such a tool?
Almost all website monitoring services can alert when the response time is above a certain threshold. Your question, however, is a bit more specific since you have a time frame (30 min). Depending on the service used and the monitoring frequency, during a 30-minute period you might have between 1 and 30 tests. Do you want an alert if all of those tests are above 7 seconds, or if the average response time is above 7 seconds?
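To make that distinction concrete, here is a small Python sketch of the two policies (the 7-second threshold and the 30-minute window come from the question):

    THRESHOLD = 7.0  # seconds

    def alert_if_all_slow(times):
        """Alert only if every test in the 30-minute window was slow."""
        return bool(times) and all(t > THRESHOLD for t in times)

    def alert_if_average_slow(times):
        """Alert if the window's average response time was slow."""
        return bool(times) and sum(times) / len(times) > THRESHOLD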
I can speak for WebSitePulse, where you can receive an alert if one or more tests in a row have detected a problem, or based on whether the page load time stays within certain limits.
GTmetrix.com offers daily alerts for YSlow and PageSpeed scores, as well as great breakdowns and grades for specific items. It has a great freemium business model too: free for 3 sites.
Upgraded plans include videos of your site loading.
Source: Just used it for my company's site.

governor limits with reports in SFDC

We have a business requirement to show a cost summary for all our projects in a single table.
In order to tabulate these costs we have to query through all the client tasks, regions, job roles, pay rates, cost tables, deliverables, efforts, and hour records (client tasks are in the same table, tasks and regions are in the same table, and deliverables, effort, and hours are stored as monthly totals).
Since I have to query all of this before looping through everything, it hits a large number of script statements very quickly. Computationally it's like O(m * n * o * p), and for some of our projects all four variables grow very quickly. My estimates for this work have ranged from 90 million to 600 billion executed script statements.
Using Batch Apex we could break this up by task region into 200 batches, but that would only reduce the computational profile to 600 billion / 200 = 3 billion statements per batch (still well above the Salesforce limit).
We have been playing around with using Informatica to do these massive calculations, but we have several problems: (1) our end users cannot wait more than five or so minutes, but just transferring the data (90% of all records, if all the projects got updated at once) would take 15 minutes over Informatica or the web API; (2) we have noticed these massive calculations need to happen in several places (changing a deliverable forecast value, creating an initial forecast, etc.).
Is there a governor-limit workaround that will meet our requirements here (a massive volume of data with a response in 5 or so minutes)? Is Force.com a good platform for us to use here?
This is the way I've been doing it for a similar calculation:
An ERD would help, but have you considered doing this in smaller pieces and with reports in Salesforce instead of custom code?
By smaller pieces I mean: use roll-up summary fields to get some totals higher up in your tree of objects.
Or use Apex triggers so that, as hours are entered, the cost (rate * hours) is calculated and placed on the time record, and then rolled up to the deliverables.
Basically, get your values calculated at the time the data is entered instead of having to run your calculations every time.
Then you can simply run a report that says "show me all my projects and their total cost or total time", because those total costs/times are already stored and calculated.
Roll-up summaries only work with master-detail relationships.
Triggers work anytime, but you'll want to account for insert and update as well as delete and undelete! Aggregate functions will be your friend, assuming the trigger context has fewer than 50,000 records to aggregate - which I'd hope it does, because if there are more than 50,000 time entries for a single deliverable, that's a BIG deliverable :)
Hope that helps a bit?
