My web application uses many AJAX requests, so it is categorized as a Single Page Application.
What I want is to track AJAX technical performance using Google Analytics.
According to the GA documentation, the suggestion is to implement Virtual Pageview Tracking, as detailed in this link:
https://developers.google.com/analytics/devguides/collection/analyticsjs/single-page-applications
After implementing virtual pageview tracking, the Pageviews stats and Page URI seem to be fed into GA correctly, but Timing stats such as Avg. Page Load Time (sec) are not. All of them have no value!
I tried three scenarios to implement Virtual Pageview Tracking, but none of them is working.
Am I missing something? Or is it a GA limitation, so that we cannot collect Timing stats for virtual pageviews the way we can for real pageviews?
Any suggestions for other tools to track AJAX performance?
GA is not meant to be used to track page performance, and the Value in GA implies monetary value.
When it says "tracking pageviews", it's not about measuring performance; it's about tracking user activity: how many pages per session, which pages, what led to conversions, where users have trouble going through, and so forth. It's not a technical tool, but an analytics/marketing tool.
Technically, you could still use it to track page performance, and people do. But not the way you've done it. You have to remove any network influence from your timestamps, since normal fluctuation there would exceed the useful timing of page performance.
I think the most elegant way of doing it would be creating a custom metric in the GA interface and then populating it with performance-measuring events (or pageviews). So (see the sketch after these steps):
You take a new Date() timestamp (or whatever you do in jQuery to get the current timestamp) right before the POST request.
You take another new Date() in the POST callback.
You calculate the difference in milliseconds and send it as the value of the custom metric with the pageview.
You wait about two days for the new data to be processed, and build a custom report using your custom metric.
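
A minimal sketch of these steps, assuming analytics.js (the ga function) and jQuery, with a Hit-scoped custom metric created in the GA admin at index 1 (hence metric1); the endpoint, payload, and virtual page path are illustrative:

// Timestamp taken right before the POST request fires.
var start = new Date().getTime();

$.post('/api/search', { q: 'example' }, function (response) {
  // Difference in milliseconds between request start and callback.
  var elapsed = new Date().getTime() - start;

  // Send the virtual pageview with the timing attached as metric1.
  ga('send', 'pageview', {
    page: '/virtual/search',
    metric1: elapsed
  });
});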
Now, when you improve the performance of your endpoint, you will be able to see statistical improvement in that report.
This is usually done on the backend, though, with Datadog or a similar tool with endpoint-monitoring functionality.
When performance is measured on the front end, we usually use the native Performance API, i.e. the window.performance object, or whatever your front-end rendering library suggests using for that. Here's a bit more on this: https://developer.mozilla.org/en-US/docs/Web/API/performance_property That way you're taking into account a bit more data, not just one endpoint's response time.
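
For example, here's a small sketch that reads the timing entries for completed XHR/fetch requests via window.performance (standard browser API; the console logging is just illustrative):

// Read timing entries for completed XHR/fetch requests.
performance.getEntriesByType('resource')
  .filter(function (entry) {
    return entry.initiatorType === 'xmlhttprequest' ||
           entry.initiatorType === 'fetch';
  })
  .forEach(function (entry) {
    // entry.duration is the time in milliseconds from request start
    // to response end, as measured by the browser itself.
    console.log(entry.name, Math.round(entry.duration) + 'ms');
  });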
I've been testing Plaid's investments transactions endpoint (investments/transactions/get) in development.
I'm encountering issues with highly variable delays for data to be returned (following the product initialization with Link). Plaid states that it takes 1–2 minutes to return investment transaction data, but I've found that in practice, it can be up to several hours before the data is returned.
Anyone else using this endpoint and getting data returned within 1–2 minutes, or is it generally a longer wait?
If it is a longer wait, do you simply wait for the DEFAULT_UPDATE webhook before you retrieve the data?
So far, my experience with their investments/transactions/get has been problematic (missing transactions, the product doesn't work as described in their docs, limited sandbox dataset, etc.), so I'm very interested in hearing from anyone with more experience with this endpoint.
Do you find this endpoint generally reliable, and the data provided to be usable, or have you had issues? I've not seen any issues with investments/holdings/get, so I'm hoping that my problems are unusual, and I just need to push through it.
I'm testing in development with my own brokerage accounts, so I know what the underlying transactions are compared to what Plaid is returning to me. My calls are set up correctly, and I can't get a helpful answer from Plaid support.
I took a look at the support issue, and it does appear that the problem you're hitting is related to a bug (or two different bugs, in this case).
However, for posterity/anyone else reading this question, I looked it up, and the general answer is that the endpoint is pretty fast in the general case -- P95 latency for calling /investments/transactions/get is currently about 1 second. (Initial calls on an Item will have higher latency, as they have more data to fetch and because they are blocked on Plaid extracting the data for the Item for the first time -- hence the 1-2 minute guidance in the docs.)
In addition, Investments updates at some major brokerages are scheduled to happen only overnight after market close, so there might be a delay of 12+ hours between making a trade and seeing that trade be returned by the API.
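
If you do go the webhook route mentioned in the question, a rough sketch might look like the following (assumptions: Express and an already-configured plaid Node client; method and field names reflect recent plaid-node versions and may differ in yours; lookupAccessToken and saveTransactions are hypothetical helpers for your own storage):

const express = require('express');
const app = express();
app.use(express.json());

app.post('/plaid/webhook', async (req, res) => {
  const { webhook_type, webhook_code, item_id } = req.body;

  // Fetch only once Plaid signals the Item's investment transactions
  // are ready, instead of polling right after Link initialization.
  if (webhook_type === 'INVESTMENTS_TRANSACTIONS' &&
      webhook_code === 'DEFAULT_UPDATE') {
    const accessToken = await lookupAccessToken(item_id); // hypothetical helper
    const response = await plaidClient.investmentsTransactionsGet({
      access_token: accessToken,
      start_date: '2023-01-01',
      end_date: '2023-12-31',
    });
    await saveTransactions(response.data.investment_transactions); // hypothetical helper
  }
  res.sendStatus(200);
});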
I'm trying to use data from Google Analytics for an existing website to load test a new website. In our busiest month, we had 8361 page requests within one hour. So should I get a list of all the URLs for these page requests and feed them to JMeter? Would that be a sensible approach? I'm hoping to compare the page response times against the existing website.
If you need to do this very quickly -- say you have less than an hour for scripting -- then this approach will do as a way to check that there are no major differences between the 2 instances.
If you would like to go deeper:
8361 requests per hour == 2.3 requests per second, so it doesn't make much sense to replicate this load pattern, as I'm more than sure that your application will survive such a modest load.
Performance testing is not only about hitting URLs from a list and measuring response times; normally the main questions that need to be answered are:
how many concurrent users my application can support while providing acceptable response times (at this point you may also be interested in requests/second)
what happens when the load exceeds the threshold: what types of errors start occurring, and what is the impact
does the application recover when the load gets back to normal
what is the bottleneck (i.e. lack of RAM, slow DB queries, low network bandwidth on the server/router, whatever)
So the options are:
If you need a "quick and dirty" solution, you can use the list of URLs from Google Analytics with e.g. the CSV Data Set Config or the Access Log Sampler, or parse your application logs to replay production traffic with JMeter.
A better approach would be checking Google Analytics to identify which groups of users you have and their behavioral patterns, e.g. X% of unauthenticated users are browsing the site, Y% of authenticated users are searching, Z% of users are doing checkout, etc. After that, you need to properly simulate all these groups using separate JMeter Thread Groups, keeping in mind cookies, headers, cache, think times, etc. Once you have this form of test, gradually and proportionally increase the number of virtual users and monitor the correlation between increasing response times and the number of virtual users until you hit any form of bottleneck.
The "sensible approach" would be to know the profile, the pattern of your load.
For that, it's excellent you're already have these data.
Yes, you can feed it as is, but that would be the quick & dirty approach - while get the data analysed, patterns distilled out of it and applied to your test plan seems smarter.
I'm developing a basic messaging system on Parse.com at the moment, and I have noticed in the Events Analytics screen that I'm hitting 30,000+ requests per day. This is a shock considering I'm the only person using the system at the moment. Obviously, with even a few users I would blow through my API request limit straight away.
I'm pretty experienced with Parse.com these days, so I'm lean with queries and alert to not putting finds, saves, retrieves, etc. in for loops. I also understand that saveAll() on an array of ParseObjects doesn't always limit the request count to 1 (depending on relationships inside those objects).
So how does one track down where the excessive calls are coming from?
I can see the above Analytics > Performance > Served Requests data, but how do I drill down to see whether Cloud Code or iOS is the culprit?
My current solution is to effectively unit test each block of Parse code and look at the results in the above screen.
For the benefit of others who may happen upon this thread with the same questions, I found some techniques to hunt down where excessive requests are coming from.
1) Parse's documentation on the APIs themselves is really good, but there isn't a lot of information/guidance for the admin interfaces. Under Analytics -> Explorer -> Make a table, there is a capability to download all the requests for a specific day (to import into a spreadsheet). The data isn't very detailed, though, and the dates are epoch timestamps, so it's hard to follow. At least you can see [Request Type, Class, Installation ID], e.g. ["find", "MyParseClass", "Cloud Code"].
2) My other technique was to add custom Analytics events to the code. So in Cloud Code, for example, I added the following line to each beforeSave and afterSave event:
Parse.Analytics.track('MyClass_beforeSave', null);
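
In context, that line sits inside the hook itself; here's a fuller sketch in the legacy Parse.com Cloud Code style (MyClass is a stand-in for your own class):

Parse.Cloud.beforeSave('MyClass', function (request, response) {
  // Tag every invocation so the Analytics Explorer table reveals
  // which class/hook is generating the request volume.
  Parse.Analytics.track('MyClass_beforeSave', null);

  // ...your existing validation logic...
  response.success();
});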
3) Obviously, Parse logs these calls in the Logs window, but given that you can only see the most recent transactions and can't clear them, I found it mostly unhelpful in tracking down the excessive calls.
I need to measure load time on a page navigation. Here is my situation:
When I navigate, the page load takes a variable amount of time as the AJAX elements load. How can I be certain that the page is fully loaded, so that I can measure its load time correctly?
I cannot point to a particular element (text, table, or image...) whose presence indicates the complete page load, because what loads depends on the data.
Please help me deal with this situation.
Thanks
Do you want to be able to test this on an "as needed" basis or do you want to instrument the pages so that you gather data from all your users?
If you just need to do it on an ad-hoc basis, then http://webpagetest.org will help you -- provided there's not too long a gap between the AJAX requests, it will include them.
If you want to gather data across all AJAX calls, then you will need to instrument the success and failure callbacks to store the time they finish, and calculate the difference between the last one and the page start. Once you've got this, push the value to Google Analytics or something else.
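
Here's a rough sketch of that idea, assuming jQuery and analytics.js; the 2-second quiet window is an arbitrary threshold for deciding that the page has settled:

var pageStart = performance.timing.navigationStart;
var lastAjaxEnd = 0;

// Fires after every AJAX request completes, success or failure.
$(document).ajaxComplete(function () {
  lastAjaxEnd = Date.now();
});

// Once no AJAX activity has occurred for 2 seconds, treat the page as
// fully loaded and report the elapsed time since navigation start.
var poll = setInterval(function () {
  if (lastAjaxEnd && Date.now() - lastAjaxEnd > 2000) {
    clearInterval(poll); // report once
    var loadTimeMs = lastAjaxEnd - pageStart;
    ga('send', 'timing', 'SPA', 'full-load', loadTimeMs);
  }
}, 500);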
If all your AJAX calls are designed to complete before onload fires, then the existing Site Speed numbers in Google Analytics might be good enough for you.
I have been trying to research the hack proposed by Avinash Kaushik in his book Web Analytics 2.0. He poses the problem whereby most web analytics tools are unable to record the time a user spent on the last page they visit on a website, or on the only page they visit. In other words, if a user comes to page 1, a timestamp is created showing the time they arrived at the page; when they visit page 2, a second timestamp is created. The time spent on page 1 can be calculated as timestamp 2 - timestamp 1. However, if the user closes the browser window or navigates away from the website, there is no way to record the time on page 2. Here is a link to this problem on Kaushik.net:
standard-metrics-revisited-time-on-page-and-time-on-site
One proposed hack is to use the window.onbeforeunload event to call a method and push the time that the page was unloaded to Google Analytics. So I tried the following code:
window.onbeforeunload = capturePageExit;

function capturePageExit() {
    // Record a virtual "page-exit" pageview before the page unloads.
    _gaq.push(['_trackPageview', '/page-exit?page=' + document.location.pathname + document.location.search + '&from=' + document.referrer]);
    return "You are about to close this page";
}
Using Firebug, I can see that the correct __utm.gif image is requested and the correct params are sent to Google Analytics. But clearly there is now a problem: this will be called on every page unload, so each visitor will appear to go from page1 -> page-exit -> page2 -> page-exit -> page3 -> page-exit... Still, I should get a more accurate time-on-site reading, right?
However, this comes at the expense of accurate navigation-summary data, so it's not a good solution. What would be good is if I could tell whether the user has clicked the close browser/tab button or is navigating away from my site, and only then record the page-exit.
I can't find a great deal of information about how to solve this problem -- plenty of discussion about being aware of this inaccuracy when interpreting Google Analytics (and probably most web analytics tools). Another useful link is time_on_page_and_time_on_site_how_confident_are_you.
I just wanted to raise this on Stack Overflow, since I can't find a similar question, and start a discussion about it. My interpretation is that there isn't really a way around this problem; it's just better to be aware of it.
Any thoughts?
--- UPDATE ---
Here is another link that was suggested to me, from a blog called Savio.no. Is this a good method?
how-to-measure-true-time-with-google-analytics
Web analytics is not an exact science. Data is always approximate and most of the time sampled.
Web analytics tools strive for precision, not accuracy. This whitepaper describes why it's more important to have precision and less important to have accuracy when working with web analytics.
Once you understand the difference between precision and accuracy and why it matters, you will understand that it's not important to get the exact time-on-site metric, but rather a precise measure that can clearly express trends or changes in that metric.
In other words, forget about absolute numbers; learn to report using trends and changes.
Another piece of advice: don't bother tweaking GA to render every single metric perfectly if you're never going to use it. Bother with metrics that you can use. And by use, I mean actionable analysis.
There are, however, a few cases where some code tweaking can help you measure time on site. A clear example is a weblog. Most of your visits will look at your homepage, read your posts, and then leave, all within the same single pageview, so it may be a good idea to fire an event when the user leaves to get the correct time on site, or maybe fire an event when the user scrolls past some threshold. In the end you'll be measuring the same thing: if the user scrolls more, he reads more, and if the user spends more time, he reads more. So it may not make sense to track both metrics to measure the same effect. Just choose one and stick with it, leave it running for a while to create historical data, and then make use of it.
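
As an illustration of the scroll-threshold variant, here is a minimal sketch using the legacy ga.js (_gaq) syntax from the question above; the 75% threshold and the event names are arbitrary choices:

var scrollEventSent = false;

window.addEventListener('scroll', function () {
  var scrolled = (window.scrollY + window.innerHeight) /
                 document.documentElement.scrollHeight;

  if (!scrollEventSent && scrolled > 0.75) {
    scrollEventSent = true; // report at most once per pageview
    _gaq.push(['_trackEvent', 'Engagement', 'scroll-depth', '75-percent']);
  }
});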