I have an application that has some room for performance improvement.
Our customer has requested some performance measurements on the client (browser) side,
and I'm trying to use TestCafe to get some execution-time indications.
One option is to have people access the different features with the Chrome developer
tools open and take note of the DOMContentLoaded values, but that is boring, error-prone and time-consuming.
Using TestCafe we can do begin-end measurements, but because TestCafe loads
the pages through its proxy, it is clear that these figures will be worse.
There are several questions:
1. Amount of delay added by the proxy:
does anybody have an idea of something like a multiplier factor,
i.e. the times measured in TestCafe will be X times the DOMContentLoaded you get from the developer console?
2. When to get a Selector value from the page
I'm trying to do this:
S1 - access the page PageUnderTest
S2 - set filter values
S3 - click search to submit the page and apply the filters
S4 - the PageUnderTest is rendered with the filters applied.
Because I'm trying to measure the time until the page is loaded,
I take a BEGIN timestamp before issuing t.click(button) (S3)
and then wait for (expect) the page title. But not knowing how TestCafe works internally,
I fear that TestCafe resolves this value against the page from S3, because a PageUnderTest is already
rendered.
Can anybody provide some clarification?
I have a token that changes on each submit, so I read the token in S3 (before the click)
and then loop, reading the token until its value differs from the value read in S3.
Do you think this is a good approach?
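For reference, this is roughly the test I have in mind (the URL, the selectors and the token element are hypothetical placeholders):

import { Selector } from 'testcafe';

fixture `PageUnderTest timing`
    .page `https://example.com/page-under-test`;

test('measure search round-trip', async t => {
    // S2 - set filter values
    await t.typeText('#filter-name', 'foo');

    // S3 - read the token BEFORE submitting
    const token = Selector('#page-token');
    const tokenBefore = await token.value;

    const begin = Date.now();
    await t.click('#search-button');

    // S4 - wait until the token differs from the value captured in S3,
    // i.e. the new PageUnderTest has been rendered
    await t.expect(token.value).notEql(tokenBefore, 'token should change', { timeout: 30000 });

    console.log(`search round-trip: ${Date.now() - begin} ms`);
});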
3. How to tell that the page has been fully rendered.
Do you have any suggestions?
Best regards
TestCafe is a tool built for functional testing; it helps you write end-to-end tests that replicate real user scenarios in your web application. Do not use it to perform non-functional testing (like performance or load testing); such tests would not yield any conclusive results. You can read more about TestCafe's scope here.
Try Artillery for load testing or performance testing.
Also, if you want to measure the time it takes for a UI element to appear, you can build a counter, but those results will not be very accurate.
I used testcafe to do this:
Start a timer & click button X
Stop the timer when element Y appears.
I wanted to see how long it took for a UI element to appear, but this was not a valid test: the UI wasn't slow, the API behind it was. That is when I gave Artillery a try.
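For illustration, the start/stop pattern looks roughly like this in a TestCafe test (the URL and selectors are hypothetical, and as said the measured time includes TestCafe's own overhead):

import { Selector } from 'testcafe';

fixture `UI timing`
    .page `https://example.com`;

test('time until element Y appears', async t => {
    // the selector only matches once the element is actually visible
    const elementY = Selector('#element-y', { visibilityCheck: true });

    const start = Date.now();                                                // start the timer
    await t.click('#button-x');                                              // click button X
    await t.expect(elementY.exists).ok('element Y should appear', { timeout: 30000 });
    console.log(`element Y appeared after ${Date.now() - start} ms`);        // stop the timer
});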
I use Artillery + TestCafe for my tests. I'm a QA, so I don't really know other tools.
Related
Is it possible to go back in a page without reloading it?
I am developing a Web Forms website, and every time I go back in history, the page reloads (and takes a long time).
Honestly, no.
The life cycle of a Web Form is very specific, and the page goes through it every time it is run (that is, every time you request it through your browser).
On the other hand, you can always optimize your page to make it load faster. How you do that depends on many things, one of which is what code runs on the server side when the page loads and whether any portion of that code can either be optimized for speed or moved into event handlers to be executed at a later point in time. For example, if you're fetching data from a database when your page loads, consider applying paging to narrow the number of selected rows.
Please, feel free to ask a new question if you decide to take that course of action.
I'm currently testing an ASP.NET application. I have recorded all the steps I need, and I have noticed that if I remove some of the parameters I'm sending with the request, the scripts still work and the desired outcome still happens. I couldn't find any difference in response time with or without them, so I was wondering: can I remove the parameters that are not needed, and will this impact performance in any way? I understand that the most realistic way of executing the scripts is to do it the way a normal user does (send everything that is sent during normal usage), but removing them would really improve the readability of my scripts. Any ideas?
Thank you in advance. Here is a picture which shows, for example, some parameters I can remove while the scripts still work. This is from a document management system, and I'm performing a step which doesn't direct the document as the parameters say, but normal usage records those:
Although it may be something as trivial as pre-populating the date and time in a calendar in the user's time zone, I believe you shouldn't omit any request parameters.
I strongly believe that load testing should mimic a real user as closely as possible, so if it is not a big deal to send these extra parameters and perform their correlation, I would leave them in.
A few other tips:
Embedded resources (scripts, styles, images). Real browsers download these entities, so:
Make sure you have the "Retrieve All Embedded Resources" box checked.
Make sure you use a "concurrent pool" with a size of 3-5 threads.
Filter out any "external" stuff via the "URLs must match" input.
Well-behaved browsers download embedded resources, but only once; on subsequent requests they are returned from the browser's cache. Add an HTTP Cache Manager to your Test Plan to simulate the browser cache.
Add an HTTP Cookie Manager to represent browser cookies and deal with cookie-based authentication.
See the How To Make JMeter Behave More Like A Real Browser article for the above tips explained in detail, in case you want to dig deeper.
Less data to send normally means a faster response time.
Like you said, it's more realistic to test with all the data from the recorded case, but if these parameters really don't impact your result or the measured time, you can remove them for better readability.
Sometimes JMeter records unnecessary parameters because they are only needed for browser compatibility.
I have a Django server. The server serves a webpage with almost all static content, but a few numbers must be loaded from the database.
I'm thinking about performance versus price. I can host my Django server on a fast machine and render the page using Django templates, or I can host the server on a slower machine, make a static page that loads the few numbers using AJAX, and host that page cheaply somewhere else like github.io.
With the latter choice, most of the page loads quickly and cheaply.
I was wondering, what are the trade-offs?
Whichever server you decide to hire, you should always think about reducing the server load, no matter how fast your server is. By reducing server load I mean making your server do only what is really required at the moment.
Let's learn something from the big players like Facebook, for instance.
You log into your account and see that you've got 5 notifications and 3 new messages, plus a couple of photos and highly interesting statuses from your friends. Cool! You now click on the notifications icon to find out whether that hot girl (forgive me if you're a girl :D) has added you to her friends list. As you click, a big white <div> pops up AND you see nothing but a loading gif! The notifications do appear, but only after a couple of seconds. Try doing it with a slow internet connection and you get to adore the beauty of the loading gif for much longer.
So, what do you make of it?
Facebook only made its server count the number of notifications and new messages and displayed those numbers to you, thus reducing server load. It only loaded the notifications themselves when you wanted to see them, and to load them, all it took was a minimal AJAX call in which only around 10 KB of data was transferred!
Facebook does it all the time and everywhere. Consider this: Robert Downey Jr. posts a photo of himself on his Facebook page. A little while later, you see that it has got 10k+ comments. You decide to read them and click the comments button. An attractive loading gif pops up again for a little while and is soon replaced by comments. But hey, only 10 comments were loaded. What the ... Oh wait! That's how Facebook reduces its server load: read those 10 comments first, and if you want to read more, send another request.
Twitter does it too - the infinite scroll.
Icing on the cake
This approach benefits you in two ways:
It reduces server load - fewer chances of crashing a website.
It decreases your website's page-load time, since you'll be passing less data, i.e. only the data required at that moment, thus making your website faster. (Yes, it can outrun Flash, too!)
Food for thought
If you've got some cool technologies around such as AJAX, why not use them? Your server is not a donkey, for God's sake!
P.S. By Facebook and Twitter, I mean the engineers behind them.
Well, it would depend on the following:
A. Do you want to display those numbers on page load itself, or only when the user clicks to see them?
If you want to show the numbers at page-load time itself, then it is preferable to fetch them as part of the template response itself.
Why would you want your site visitors to wait for those numbers to populate (if the intention is to display them anyway)?
If they are to be displayed only on the user's click, then AJAX should be preferred.
B. How much time does the query take, and can it be optimized to take minimal time?
If the query you are making takes a lot of time, then the first effort should be to optimize it to be as fast as possible.
If the query can return its result in minimal time, then it is futile to make another request to the server via AJAX.
But if you know the query will take a lot of time, then AJAX is fine.
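To make the trade-off concrete, here is a minimal sketch of the AJAX variant (the endpoint, the JSON field names and the element ids are hypothetical; the Django view would simply return a small JSON payload). Note that if the static page is hosted on github.io and the numbers come from a different Django host, the Django view also needs to send CORS headers:

// Ask the Django server only for the few dynamic numbers.
async function loadNumbers() {
    const response = await fetch('https://api.example.com/numbers/');
    const data = await response.json();   // e.g. { visitors: 1234, downloads: 56 }
    document.getElementById('visitors').textContent = data.visitors;
    document.getElementById('downloads').textContent = data.downloads;
}

// Either fetch the numbers as soon as the static page has loaded ...
document.addEventListener('DOMContentLoaded', loadNumbers);

// ... or only when the user clicks to see them:
// document.getElementById('show-numbers').addEventListener('click', loadNumbers);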
I have been trying to research the hack proposed by Avinash Kaushik in his book Web Analytics 2.0. He poses the problem whereby most web analytics tools are unable to record the time a user spent on the last page they visit on a website, or on the only page they visit. In other words, if a user comes to page 1, a timestamp is created showing the time they arrived at the page; when they visit page 2, a second timestamp is created. The time spent on page 1 can be calculated as timestamp 2 minus timestamp 1. However, if the user closes the browser window or navigates away from the website, there is no way to record the time on page 2. Here is a link to this problem on Kaushik.net:
standard-metrics-revisited-time-on-page-and-time-on-site
One proposed hack is to use the window.onbeforeunload event to call a method and push the time that the page was unloaded to Google Analytics. So I tried the following code:
window.onbeforeunload = capturePageExit;
function capturePageExit()
{
    _gaq.push(['_trackPageview', '/page-exit?page=' + document.location.pathname + document.location.search + '&from=' + document.referrer]);
    return "You are about to close this page";
}
Using Firebug I can see that the correct __utm.gif image is requested and the correct params are sent to Google Analytics. But clearly there is a problem: this will now be called on every page unload, so each visitor will appear to go from page1 -> page-exit -> page2 -> page-exit -> page3 -> page-exit... but I should get a more accurate time-on-site reading, right?
However, this comes at the expense of accurate navigation-summary data, so it is not a good solution. What would be good is if I could tell whether the user has clicked the close browser/tab button or is navigating away from my site, and only then record the page-exit.
I can't find a great deal of information about how to solve this problem, although there is plenty of discussion about being aware of this inaccuracy when interpreting Google Analytics (and probably most web analytics tools). Another useful link is time_on_page_and_time_on_site_how_confident_are_you.
I just wanted to raise this on Stack Overflow, since I can't find a similar question, and start a discussion about it, but my interpretation is that there isn't really a way around this problem; it is just better to be aware of it.
Any thoughts?
------------------------------------------------------ UPDATE -----------------------------------------------------
Here is another link that was suggested to me from a blog called Savio.no. Is this a good method?
how-to-measure-true-time-with-google-analytics
Web analytics is not an exact science. Data is always approximate and most of the time sampled.
Web analytics tools strive for precision, not accuracy. This whitepaper describes why it's more important to have precision and less important to have accuracy when working with web analytics.
Once you understand the difference between precision and accuracy and why it matters, you will understand that it's not important to get the exact time-on-site metric, but rather a precise measure that can clearly express trends or changes in that metric.
In other words, forget about absolute numbers and learn to report using trends and changes.
Another piece of advice: don't bother tweaking GA to render every single metric perfectly if you're never going to use it. Bother with metrics that you can use, and by use I mean actionable analysis.
There are, however, a few cases where some code tweaking can help you measure time on site. A clear example is a weblog. Most of your visits will be looking at your homepage, reading your posts and then leaving, and all of that happens in a single pageview, so it may be a good idea to fire an event when the user leaves to get the correct time on site, or maybe to fire an event when the user scrolls past some threshold. In the end you'd be measuring the same thing: if the user scrolls more, they read more, and if the user spends more time, they read more. So it may not make sense to track both metrics to measure the same effect; just choose one and stick with it, leave it running for a while to build historical data, and then make use of it.
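As a rough illustration of the scroll-threshold idea, using the same classic _gaq API as the snippet in the question (the 75% threshold is an arbitrary choice):

var readEventSent = false;

window.addEventListener('scroll', function () {
    if (readEventSent) return;
    var scrolled = window.pageYOffset + window.innerHeight;
    var threshold = document.body.scrollHeight * 0.75;   // count as "read" once 75% of the page has been scrolled
    if (scrolled >= threshold) {
        readEventSent = true;   // fire the event only once per pageview
        _gaq.push(['_trackEvent', 'Reading', 'ScrolledPastThreshold', document.location.pathname]);
    }
});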
We've got a GWT application with a simple search mask that displays the results in a grid.
Server-side processing time is OK, as is network latency.
Client rendering time is OK, even on low-spec hardware with Internet Explorer 6, as long as the number of results is not too high (max 100 rows in the grid).
We have implemented a navigation scheme allowing the user to scroll up/down the grid. That's fast enough as well.
Does anybody have an idea whether it is possible to display the first 100 results immediately and pull the rest in the background? The GWT architecture allows this, but I'm interested in possible pitfalls, e.g. what happens if the user starts another query while the browser is still fetching the previous results, etc.
Thanks!
Holger
LazyPanel and this blog post might be a good starting point for you :)
The GWT Incubator also has many interesting (albeit not always complete/perfect/stable) tables and other pagination solutions, like PagingScrollTable.
Assuming your plan is to send the first 100 results and then bring the rest, you can use bulks for the remaining results. Then, if a user initiates another search, you just wait for the end of the current bulk (i.e. check between bulk retrievals whether you have a pending query).
Another way you can go is to assign identifiers to the user's searches. This makes the problem of mixed results non-existent and also gives you a results history for multiple searches.
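The identifier idea, sketched in plain JavaScript rather than GWT (in GWT you would do the same check inside your RPC AsyncCallback; fetchResults, renderRows and appendRows are hypothetical placeholders):

let currentSearchId = 0;

function runSearch(query) {
    const searchId = ++currentSearchId;   // tag this search with its own id

    // first chunk: show the first 100 rows as soon as they arrive
    fetchResults(query, 0, 100).then(rows => {
        if (searchId !== currentSearchId) return;   // a newer search has started, drop this result
        renderRows(rows);

        // background bulk: pull the remaining rows while this search is still the current one
        fetchResults(query, 100, 10000).then(moreRows => {
            if (searchId !== currentSearchId) return;
            appendRows(moreRows);
        });
    });
}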
We found that users love the live-grid look and feel, which solves most of those problems, but that might not always be an option.