Measure performance of web application?

What are the best tools to test the performance of a (not yet deployed) application using the Play framework? Things like how long a request takes to execute with different parameters, simulating a lot of requests (stress testing), and so on.
I've been searching for a while, but the problem is that keywords like "performance" and "benchmarks" lead me to pages about the performance of the Play framework itself.
I thought functional tests could perhaps be used to measure performance (print the difference between a method's start and end times), but that doesn't seem suitable for this kind of task.
I could just write a script that triggers the requests and writes the timestamps to a log file, but maybe there is an existing tool with extras such as charts.
Any hint in the right direction is greatly appreciated.

Iago is a load generation tool by Twitter, written in Scala. I've also used the Loader.io add-on on Heroku to do performance testing; Loader.io also has a non-Heroku service that I have not used. Iago is probably your best bet for local testing of a non-public app.
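For a sense of what driving Iago looks like, here is a rough sketch of a launcher configuration based on the format in Iago's README; the exact fields may vary by version, and the values are illustrative:

```scala
import com.twitter.parrot.config.ParrotLauncherConfig

// Iago replays requests from a log file at a steady rate.
// The values below are illustrative, not a recommended setup.
new ParrotLauncherConfig {
  localMode = true            // run everything on this machine
  jobName = "play_app_test"
  port = 9000                 // Play's default dev port
  victims = "localhost"       // the host under test
  log = "logs/requests.log"   // one request (e.g. a URI) per line
  requestRate = 10            // requests per second
  duration = 5
  timeUnit = "MINUTES"
}
```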

A good example is Scamper, the project Versal used to choose their Scala stack for production.

Related

How to start performance testing

I have taken a site as an example for learning and have gathered some information about tools, objectives, and scenarios, but I need your input. Please assist me.
I am new to Performance testing and would like to test the following website www.volkswagen.co.nz
Can you tell me what needs to be tested? What are the scenarios, and what are the activities for each scenario? Which metrics do I need to add? Which is the best free tool for testing it? How do I test it if it is deployed in a cloud like AWS?
Please let me know, Thanks in advance.
Performance testing involves the following:
Identify the critical/heavy/important scenarios in your web app (irrespective of deployment, cloud or standalone).
Identify the service level agreements in terms of response times, throughput, latency, etc.
Identify the workload model, i.e. how much user load the application is expecting; this should be as fine-grained as possible (average users per transaction/workflow at a point in time; see the note below).
Identify tools (JMeter is free and one of the best, but if you can afford a paid tool then look at LoadRunner, NeoLoad, etc.).
Record the script for each workflow, then parameterise and correlate it.
Generate the test setup for the load test and execute it.
Monitor system utilisation and collect metrics such as response time, throughput, error rate, and latency.
This all comes under load testing. For more you can read http://www.guru99.com/performance-testing.html
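One standard way to turn that workload model into a concrete number of concurrent users is Little's Law (general queueing theory, not something from the answer above):

```latex
% Little's Law: N concurrent users, X throughput (transactions/second),
% R time each user spends per transaction (response time + think time).
N = X \cdot R
% e.g. 20 transactions/s, 3 s response time, 12 s think time:
N = 20 \times (3 + 12) = 300 \ \text{concurrent users}
```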
I am new to Performance testing and would like to test the following website www.volkswagen.co.nz
That is a recipe for disaster. No one new should be allowed to work on their own without a full period of training and internship with a master in the field. This is true of stone masons, electricians, plumbers, barbers, accountants, engineers and physicians. And it is most certainly true of performance testers/engineers.
There are dozens of foundation skills you need to master before you touch any tool, open source or otherwise. Until you show mastery of those skills, along with the mechanics of your tool, you should not be allowed to test any website, particularly a production website. And if you don't work for this company, what you are engaging in is a denial-of-service attack, which could leave you exposed to legal liability.
I strongly agree with James on this one.
Do not touch the site if:
it's not yours,
you are not sure what you are doing,
the owner has not given you explicit (and, frankly, irresponsible) permission,
you don't know how, or don't have the support, to restore the environment to a working state.
If you do work for the company, then you need a test environment first: a playground where you can mess around and nobody minds if you take it down.
Firstly, get information from the business on which use cases need to be tested.
Get response time targets for user actions.
Get targets for environment utilisation, and define your environment monitoring tactics.
Find a tool that is fit for purpose: JMeter, Gatling, etc.; lots of free ones are available (see the sketch after this list).
Get a test environment, preferably of a similar scale to production.
Create scripts to cover the critical use cases.
Combine the scripts into scenarios.
Create a reporting framework.
Kick off monitoring.
Kick off the scenario.
Collect and analyse the results.
Be mindful of the free editions of load testing tools: they tend to be easy to use at first, but as soon as you start to outgrow them they can cost a fortune, and more often than not it is hard to port scripts/scenarios to another tool.
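As a rough illustration of the scripting and scenario steps above, here is a minimal load test in Gatling's Scala DSL; the base URL, flow, user counts, and threshold are invented for the example:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class CriticalUseCaseSimulation extends Simulation {
  // Illustrative test-environment URL, not a real target.
  val httpProtocol = http.baseUrl("https://test-env.example.com")

  // One scripted use case: open the home page, then search.
  val scn = scenario("Critical use case")
    .exec(http("home page").get("/"))
    .pause(2.seconds)
    .exec(http("search").get("/search?q=golf"))

  setUp(scn.inject(rampUsers(200).during(2.minutes)))
    .protocols(httpProtocol)
    // Fail the run if the third configured percentile
    // (p95 by default) breaches the response-time SLA.
    .assertions(global.responseTime.percentile3.lt(1000))
}
```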

Web performance testing that renders JavaScript

I would like to load test pages AND render JavaScript. It seems there are three categories of applications which might solve this problem, and all three miss the mark.
1) JMeter, ApacheBench, Tsung, The Grinder, Iago
Why they won't work: they don't render JavaScript. These are handy tools, but they won't work for this purpose.
2) Watir, Selenium
These tools are excellent: they use real browsers and render JavaScript/Ajax. But alas, they are designed mostly for functional testing, not performance testing. You could create your own app to use these things for a load test, but it would be a massive pain in the ass to collect all of the performance metrics and aggregate them.
If only there was a combination of the first two types of web performance tests, that would be great.
The third option solves this problem, but unfortunately you have to pay for it.
3) Web load testing services
Services like Keynote Load Pro, BrowserMob and others are great, and they solve the problem of using real browsers and rendering JavaScript. The only problem is, I don't want to pay $300 every time I run a damn load test (this is an exaggeration, but not by much, depending on how many virtual users you use).
So that option won't work, unless I want to hemorrhage money.
Isn't there a test harness out there that solves this problem? It seems like a big gaping hole: a need for an open source tool that no one has filled yet. The commercial companies dominate this space (along with those who can employ a full-time developer to write a Selenium performance test framework).
The good thing about JMeter, ab, Tsung, etc. is that they can give you aggregated metrics in a relatively short time, but they can't execute JavaScript. They can't measure how fast the DOM is created, and they won't load external resources such as images, scripts, and stylesheets. They are good for stress testing, which is not the case here.
When measuring web performance you need to know:
what you really want to measure: domLoading, domInteractive, domContentLoaded, or maybe something else from the Performance Timing interface,
which statistical measure you want to use: p95 or the median; the average is not the best choice,
when: always run at the same time of day to make tests comparable,
from which location: the best you can do is spawn a dedicated server located close to your production environment,
how confident you want to be about the results: choose a sample size; for example, if you accept a 3% margin of error at a 95% confidence level, then choose 1067 (see the worked formula below).
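That 1067 follows from the standard sample-size formula for a proportion at worst-case variance (p = 0.5), with z = 1.96 for 95% confidence and margin of error e = 0.03:

```latex
n = \frac{z^2 \, p(1-p)}{e^2}
  = \frac{1.96^2 \times 0.5 \times 0.5}{0.03^2}
  = \frac{0.9604}{0.0009}
  \approx 1067
```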
For doing such synthetic performance measurements you can use https://github.com/msn0/sweter. It measures timeToFirstByte, domInteractive and domComplete timings, and it can be scheduled to run tests at the same time every day. Sweter can report raw data to Elasticsearch, the console, a JSON file, or wherever you want.
A different topic is Real User Measurement. There are good tools for that already, e.g. New Relic.
Well, you haven't mentioned which metrics you really want to measure, but I think you can test both server-side and browser performance using two simple tools.
PhantomJS and HAR files: PhantomJS is a WebKit-based headless browser with a JavaScript API. You can script it in JavaScript to do pretty much everything (including rendering JavaScript). Use it to generate a HAR file so that you can analyse performance bottlenecks across all the requests.
Navigation Timing API: modern browsers expose navigation timing data, which gives you details about the timings of browser rendering, including all the important metrics like domLoading, domInteractive, etc.
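As a sketch of reading that Navigation Timing data programmatically, here via Selenium WebDriver driving a real browser instead of PhantomJS; the URL is a placeholder:

```scala
import org.openqa.selenium.JavascriptExecutor
import org.openqa.selenium.chrome.ChromeDriver

object NavigationTimingProbe {
  def main(args: Array[String]): Unit = {
    val driver = new ChromeDriver() // requires chromedriver on the PATH
    try {
      driver.get("https://example.com/") // placeholder URL
      val js = driver.asInstanceOf[JavascriptExecutor]
      // window.performance.timing is the Navigation Timing interface
      // mentioned above; all values are epoch milliseconds.
      val domInteractive = js.executeScript(
        "return performance.timing.domInteractive - performance.timing.navigationStart"
      ).asInstanceOf[Long]
      println(s"domInteractive after $domInteractive ms")
    } finally {
      driver.quit()
    }
  }
}
```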

Performance test against Web Service

I am doing performance testing against our site. The site exposes many web service APIs. Our product has several workflows, and each complete workflow is composed of calls to more than one API.
I am wondering which of the following approaches is reasonable:
Test against each single API in a standalone way.
Test against each complete workflow which involves several APIs.
I think the latter makes more sense, since it mimics the real scenario.
I'd like to hear your comments.
Thanks.
There are (as usual) pros and cons to each approach. Personally, I'd consider doing both, starting with the single-API test.
The single-API test is probably the easiest to build, and it has the benefit of pinpointing exactly where your performance issues are. It's also useful for spotting performance regressions during development. If there are unit tests for the application, consider using those instead; when there is a performance regression, there is usually also a unit test which suddenly became slower.
Once you've done that, you will still need the more complex tests: firstly because you need to know whether the performance of a given flow is acceptable, but also because there can be unexpected interactions between different APIs. Depending on your application there may be nasty concurrency issues, throughput bottlenecks, and so on. Make sure you run several flows concurrently; that's what happens in real life, and it's the only way to find issues related to locking in the database, I/O bottlenecks, etc.
But before you start, make sure you have a realistic idea of what the performance should be, how many concurrent users there will be, and what the hardware requirements are. There is no limit to improving performance, so you have to decide what is good enough, or you will never stop optimizing.
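As a minimal sketch of the single-API timing check suggested above, using only the JDK 11+ HTTP client; the endpoint and the 500 ms budget are invented for the example:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object SingleApiTiming {
  def main(args: Array[String]): Unit = {
    val client  = HttpClient.newHttpClient()
    // Hypothetical endpoint, for illustration only.
    val request = HttpRequest.newBuilder(URI.create("http://localhost:9000/api/customers")).build()

    // Collect response times over repeated calls.
    val samples = (1 to 200).map { _ =>
      val start = System.nanoTime()
      client.send(request, HttpResponse.BodyHandlers.discarding())
      (System.nanoTime() - start) / 1e6 // milliseconds
    }.sorted

    val median = samples(samples.size / 2)
    val p95    = samples((samples.size * 95) / 100)
    println(f"median=$median%.1f ms  p95=$p95%.1f ms")
    // Fail loudly when a regression pushes p95 past the agreed budget.
    assert(p95 < 500, s"p95 of $p95 ms exceeds the 500 ms budget")
  }
}
```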

How would you stress test a dynamic site, when you don't know what the URLs will be ahead of time?

This isn't a question of what stress testing tools are out there; I'm afraid it's a lot harder than that (at least for me).
Consider a RESTful architecture for a forum or blog that generates random IDs for each post.
Simulating the creation of those topics/articles would be simple, because you'd just be posting form data to an endpoint like /article or /topic.
But how do you then stress test commenting on those articles/topics? This is different, because a comment needs to belong to an article/topic, which means you need the IDs of those items. However, if all you can do is issue POSTs, and you have no way of pulling those IDs, you'd be unable to create the comments.
I'm creating a site that is similar in this regard, and I have no idea how to stress test the creation of the comments.
I have two ideas, and they're both pretty awful:
Generate a massive system ahead of time with some kind of factory, and then freeze it. From there, I figure I'd have to use some kind of browser automation to create my 'comments' on all of this. The automation would, I suppose, go through a recording proxy, like the one JMeter offers. Then, to run the test, I reload the database and replay the massive log file.
Use browser automation for the whole thing, taking advantage of the dynamic links delivered in the HTML page. The only option here would be Selenium, and really we're talking about a massive Selenium grid that would be extremely expensive, and probably very difficult to maintain too.
Option 2 is completely infeasible as near as I can tell, but option 1 sounds excruciating. I'm really hoping someone can suggest something more clever.
Option 1.
I mean, implementation notes aside, you're basically just asking for a testing environment. So, the answer is to make one. In whatever fashion:
Generate it
Make it once and reload it
Randomise it
Whatever. It's the approach to go with.
How you do your testing is kind of a side issue (unit testing/browser/whatever; up to you).
But you've reached a point where you need to test with real data. So make it happen.
This is a common problem. We handle it by extracting the dynamic parts of the URLs from the server responses. I presume this system uses a web browser client, which implies that those URLs are being sent in the server responses. If they are in the responses, then you CAN get them. However, since you said "if all you can do is issue posts, and you have no way of pulling those ids", perhaps this is not the case? If so, can you clarify?
We've recently been doing a lot of testing of Drupal systems for our customers, which have exactly the problem you've described. We either solve it by extracting the IDs dynamically from the page as the user browses to the page they want to comment on, or we use option 1, or a combination of both. Note that if you have a load testing tool handy, then generating content is not too difficult: use the tool to do it, i.e. run a "content generation" load test. Besides yielding useful data in its own right, that gives you a test database that you can back up and restore as needed to maintain your test infrastructure. Now you can run the test on a more realistic environment, one that already has lots of content in it (assuming, of course, that this is realistic for your purposes).
If you are interested, I'd be happy to demo how we solve the problem using our software (Web Performance Load Tester).
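As a sketch of that extraction ("correlation") step, here is what it looks like in Gatling's Scala DSL rather than the answerer's tool; the /article endpoint comes from the question, while the regex and session key are illustrative:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class CommentSimulation extends Simulation {
  val httpProtocol = http.baseUrl("http://localhost:8080") // placeholder

  val scn = scenario("Create article, then comment on it")
    .exec(
      http("create article")
        .post("/article")
        .formParam("title", "load test article")
        // Pull the generated ID out of the response body and save it
        // in the virtual user's session for the next request.
        .check(regex("""/article/([A-Za-z0-9\-]+)""").saveAs("articleId"))
    )
    .pause(1.second)
    .exec(
      http("comment on article")
        .post("/article/#{articleId}/comments") // "${articleId}" in older Gatling
        .formParam("body", "first!")
    )

  setUp(scn.inject(rampUsers(50).during(30.seconds))).protocols(httpProtocol)
}
```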
I have used Visual Studio to solve this kind of problem. Visual Studio allows C#-coded web tests that can programmatically parse the HTML returned and take action based on it.
I was load testing a SharePoint website that required information to be populated ahead of time. I created a load test specifically for generating "random" pages of content up front, and populated a test harness database with the URLs ahead of time, which allowed some control over the pages that were loaded.
With a list of "articles" available and a list of potential comments, it is possible to code a pseudo-random number generator (inside a stored procedure, because of the asynchronous nature of the test harness) to get a repeatable load test. That meant the site would be populated in the same way each time the load test was run.
It does take some effort to create a decent way of populating the site off the bat, but the return in the relevance of the load test is quite good.

Applying TDD when the application is 100% CRUD

I routinely run into this problem, and I'm not sure how to get past this hurdle. I really want to start learning and applying Test-Driven Development (or BDD, or whatever), but it seems like every application where I want to apply it is pretty much only standard database CRUD stuff, and I'm not sure how to go about it. The objects pretty much don't do anything apart from being persisted to a database; there is no complex logic that needs to be tested. There is a gateway that I'll eventually need to test for a third-party service, but I want to get the core of the app done first.
Whenever I try to write tests, I only end up testing basic stuff that I probably shouldn't be testing in the first place (e.g. getters/setters), but it doesn't look like the objects have anything else. I guess I could test persistence, but that never seems right to me because you aren't supposed to actually hit a database; yet if you mock it out, then you really aren't testing anything, because you control the data that's spat back. I've seen a lot of examples where a mock repository simulates a database by looping and creating a list of known values, and the test verifies that the "repository" can pull back a certain value. I don't see the point of a test like this: of course the "repository" will return that value; it's hard-coded in the class! Well, I see it from a pure TDD standpoint (you need a test saying that your repository needs a GetCustomerByName method, or whatever, before you can write the method itself), but that seems like following dogma for no reason other than it's "the way"; the test doesn't seem to be doing anything useful apart from justifying a method's existence.
Am I thinking of this the wrong way?
For example, take a run-of-the-mill contact management application. We have contacts, and let's say that we can send messages to contacts. We therefore have two entities, Contact and Message, each with common properties (e.g. first name, last name, and email for Contact; subject, body, and date for Message). If neither of these objects has any real behavior or needs to perform any logic, then how do you apply TDD when designing an app like this? The only purpose of the app is basically to pull a list of contacts and display them on a page, display a form to send a message, and the like. I'm not seeing any sort of useful tests here; I could think of some tests, but they would pretty much be tests for the sake of saying "See, I have tests!" rather than tests of some kind of logic. (While Ruby on Rails makes good use of it, I don't really consider testing validation to be a "useful" test, because it should be something the framework takes care of for you.)
"The only purpose of the app is basically to pull a list of contacts"
Okay. Test that. What does "pull" mean? That sounds like "logic".
" display them on a page"
Okay. Test that. Right ones displayed? Everything there?
" display a form to send a message,"
Okay. Test that. Right fields? Validations of inputs all work?
" and the like."
Okay. Test that. Do the queries work? Find the right data? Display the right data? Validate the inputs? Produce the right error messages for the invalid inputs?
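A minimal sketch of what "Okay. Test that." might look like for the "pull" step, using ScalaTest; the ContactRepository and its sort-by-last-name behaviour are hypothetical stand-ins, not from the question:

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical domain type, for illustration only.
final case class Contact(firstName: String, lastName: String, email: String)

// "Pull" is logic: filtering, ordering, shaping data for the page.
class ContactRepository(contacts: Seq[Contact]) {
  def pullSortedByLastName(): Seq[Contact] = contacts.sortBy(_.lastName)
}

class ContactRepositorySuite extends AnyFunSuite {
  private val repo = new ContactRepository(Seq(
    Contact("Ada", "Lovelace", "ada@example.com"),
    Contact("Alan", "Turing", "alan@example.com"),
    Contact("Grace", "Hopper", "grace@example.com")
  ))

  test("pull returns every contact, ordered by last name") {
    val pulled = repo.pullSortedByLastName()
    assert(pulled.size == 3) // everything there?
    assert(pulled.map(_.lastName) == Seq("Hopper", "Lovelace", "Turing")) // right order?
  }
}
```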
I am working on a pure CRUD application right now,
but I see lots of benefit in unit test cases (note: I didn't say TDD).
I write the code first and then the test cases, but never too far apart; soon enough, though.
And I test the CRUD operations, including persistence to the database.
When I am done with persistence and move on to the UI layer, I will have a fair amount of confidence that my service/persistence layer is good, and I can then concentrate on the UI alone at that moment.
So IMHO there is always a benefit to TDD/unit testing (whatever you call it, depending on how extreme you feel about it), even for a CRUD application.
You just need to find the right strategy for your application.
Just use common sense... and you will be fine.
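As a minimal sketch of the persistence testing described above, assuming a hypothetical ContactDao and an in-memory H2 database standing in for the real one (ScalaTest and H2 as test dependencies):

```scala
import java.sql.{Connection, DriverManager}
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical DAO, for illustration only.
class ContactDao(conn: Connection) {
  def create(name: String, email: String): Unit = {
    val st = conn.prepareStatement("INSERT INTO contacts(name, email) VALUES (?, ?)")
    st.setString(1, name); st.setString(2, email); st.executeUpdate()
  }
  def findEmail(name: String): Option[String] = {
    val st = conn.prepareStatement("SELECT email FROM contacts WHERE name = ?")
    st.setString(1, name)
    val rs = st.executeQuery()
    if (rs.next()) Some(rs.getString(1)) else None
  }
}

class ContactDaoSuite extends AnyFunSuite {
  // In-memory database, recreated per test run; exercises real SQL.
  private val conn = DriverManager.getConnection("jdbc:h2:mem:crud;DB_CLOSE_DELAY=-1")
  conn.createStatement().execute(
    "CREATE TABLE IF NOT EXISTS contacts(name VARCHAR(100), email VARCHAR(100))")

  test("a created contact can be read back") {
    val dao = new ContactDao(conn)
    dao.create("Ada", "ada@example.com")
    assert(dao.findEmail("Ada").contains("ada@example.com"))
  }
}
```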
I feel like we are confusing TDD with unit testing.
Unit tests are specific tests which test units of behavior. These tests are often included in the integration build. S.Lott described some excellent candidates for just those types of tests.
TDD is for design. I find that, more often than not, the tests I write when using TDD will either be discarded or evolve into unit tests. The reason is that when I'm doing TDD, I'm testing my design while I'm designing my application, class, method, domain, etc.
In response to your scenario, I agree with what S.Lott implied: what you need is a suite of unit tests to test specific behaviors in your application.
TDDing a simple CRUD application is, in my opinion, kind of like practicing scales on a guitar: you may think it's boring and tedious, only to discover how much your playing improves. In development terms, you are likely to write code that's less coupled and more testable. Additionally, you're more likely to see things from the code consumer's perspective, since you'll actually be using it. This can have a lot of interesting side effects, like more intuitive APIs, better segregation of concerns, etc. Granted, there are scaffold generators that can do basic CRUD for you, and they have their place, especially for prototyping; however, they are usually tied to a framework of sorts. Why not focus on the core domain first, deferring the framework/UI/database decisions until you have a better idea of the core functionality needed? TDD can help you do that as well.
In your example: do you want messages to be a queue, a hierarchical tree, etc.?
Do you want them to be loaded in real time? What about sorting and searching? Do you need to support JSON or just HTML? It's much easier to surface these kinds of questions with BDD/TDD. If you're doing TDD, you may be able to test your core logic without even using a framework (and without waiting a minute for it to load and run).
Skip it. All will be just fine. I'm sure you have a deadline to meet. (/sarcasm)
Next month, we can go back and optimize the queries based on user feedback. And break things that we didn't know we weren't supposed to break.
If you think the project will last two weeks and then never be reopened, automated testing is probably a waste of time. Otherwise, if you have a vested interest in "owning" this code for a few months, and it's active, build some tests. Use your judgement as to where the most risk is. Moreover, if you plan on being with the company for a few years, and have teammates who take turns whacking on various pieces of the system, and it may be your turn again a year from now, build some tests.
Don't overdo it, but do "stick a few pins in it", so that if things start to "move around", you have some alarms to call attention to them.
Most of my testing has been JUnit or batch "diff"-type tests, plus a rudimentary screen-scraper-type tool I wrote a few years ago (scripting some regex plus wget/curl-type stuff). I hear Selenium is supposed to be a good tool for web app UI testing, but I have not tried it. Does anybody know of tools for local GUI apps?
Just an idea...
Take the requirements for the CRUD app and use tools like Watij, Watir, or AutoIt to create test cases. Start creating the UI to pass the test cases. Once you have the UI up and passing maybe just one test, start writing the logic layer for that test, and then the DB layer.
For most users, the UI is the system. Remember to write test cases for each new layer that you build. So instead of starting from the DB to the app to the UI layer, work in the reverse direction.
At the end of the day, you will probably have accumulated a powerful regression test set, giving you some confidence to refactor safely.
This is just an idea...
I see what you are saying, but eventually your models will become sufficiently advanced that they will require (or be greatly augmented by) automated testing. If not, what you are essentially developing is a spreadsheet which somebody has already developed for you.
Since you mentioned Rails, I would say doing a standard create/read/update/delete test for each property is a good idea, especially because your tests should check permissions (this is huge, I think). This also ensures that your migrations work as you expected them to.
I am working on a CRUD application now. What I am doing at this point is writing unit tests on my repository objects, testing that the CRUD features work as they should. I have found that this inherently unit-tests the actual database code as well; we have found quite a few bugs in the database code this way. So I would suggest you push ahead and keep going with unit tests. I know applying TDD to CRUD apps is not as glamorous as the things you might read about in blogs or magazines, but it serves its purpose, and you will be that much better when you work on a more complex application.
These days you should not need much hand-written code for a CRUD app apart from the UI, as there are a hundred and one frameworks that will generate the database and data access code.
So I would look at reducing the amount of hand-written code and automating the testing of the UI. Then I would use TDD for the odd bits of logic that need to be written by hand.

Resources