Website performance certification

My client is looking for a performance audit, something similar to a yearly security audit.
Are there any reputable services or vendors that measure and analyze a given website's performance and, more importantly, certify the performance data?
My client's intent is to share such data with future customers.

I suggest using a waterfall graph to show the performance. I have used WebPageTest and am pretty happy with it. It is also credible because a lot of big companies use it.
Here is a sample run for SO: http://www.webpagetest.org/result/111031_H2_21NJR/1/details/
So, for example, the time to first byte was 200 ms, which means the browser doesn't start rendering anything until after 200 ms. Keeping it under 800 ms is generally a good idea.
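If you want a quick, rough cross-check of that number outside of WebPageTest, a few lines of script are enough. Here is a minimal sketch in Python using the requests library (an assumption on my part, not something WebPageTest itself uses); response.elapsed approximates time to first byte from the client's side:

    # Rough client-side TTFB check; WebPageTest remains the more accurate tool.
    import requests

    # stream=True makes the call return as soon as the response headers arrive.
    resp = requests.get("https://stackoverflow.com/", stream=True)
    print("approx TTFB: %.0f ms" % (resp.elapsed.total_seconds() * 1000))
    resp.close()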
If you are looking at companies that do this kind of performance test, I would be cautious, because they will all say yes, then just look at a similar website and tell you "here is your performance analysis."

Related

NoSQL db performance testing

Let's assume you've got a NoSQL database - Redis, Cassandra, MongoDB - and you need to check the overall performance of this database across various platforms, operating systems, and even the programming languages used for the tests. It's not tied to a specific application or schema.
What tests would you want to see? Can you please help me form the requirements?
How does the database operate in a cluster?
In a broken cluster?
In a cloud environment?
How does it perform queries when 10k connections are open?
What tools would you use?
Is it something like JMeter -> HTTP server -> database?
JMeter -> TCP app -> database?
Something else?
All the material I've found about database performance testing treats the database as part of some product (specific schema, specific environment).
Have you thought about database performance testing when the database is the product itself?
Looking forward to your help.
-vova
In NoSQL benchmarks and performance evaluations I've put together a list of the benchmarks that are correct in the sense that they clearly define the purpose of the benchmark and compare similar features (apples-to-apples comparisons); there are way too many benchmarks out there that fail at least one of these fundamental requirements of a benchmark. Going through those you'll be able to extract the bits that are interesting for your own benchmark, learn what tools have been used, and get some benchmarking code too.
So far the most generic NoSQL benchmark is YCSB (the Yahoo! Cloud Serving Benchmark). Recently the CUBRID blog posted the results of running this benchmark against some of the most popular NoSQL solutions, and that might give you an idea of how to interpret results.
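If it helps to see what such a workload boils down to, here is a tiny Python sketch of a YCSB-style read-heavy mix (95% reads, 5% updates). The in-memory dict is just a stand-in for whatever store you would actually point it at, the request distribution is uniform rather than the zipfian one YCSB also supports, and all the numbers are illustrative only:

    # YCSB-style 95/5 read/update mix against a stand-in store (a plain dict).
    # Swap the dict for a real client (redis-py, pymongo, ...) to exercise a real database.
    import random
    import time

    store = {f"user{i}": "x" * 100 for i in range(100_000)}  # preloaded records
    keys = list(store)

    ops, start = 100_000, time.monotonic()
    for _ in range(ops):
        key = random.choice(keys)
        if random.random() < 0.95:
            _ = store[key]            # read
        else:
            store[key] = "y" * 100    # update
    elapsed = time.monotonic() - start
    print(f"{ops} ops in {elapsed:.2f}s ({ops / elapsed:,.0f} ops/sec)")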
"check the overall performance for this database"
Unless you need to do it for fun, or you just want a benchmark for the sake of having a benchmark, I would recommend tailoring the performance benchmark to your actual problem and requirements.
For example, do you really need crazy-fast writes? Are you OK with losing data? Do you mind spending time configuring failover? Do you plan to scale up or out? Are you planning for TBs of data? etc.
From the examples you gave, Redis, Cassandra and MongoDB are quite different:
Redis is mostly a cache, and it is really fast, but being just a cache it will not help you much with medium-complexity aggregation. However, it is currently the best cache out there (my opinion). "Redis + a killer DB" is an ideal combination. It also has a built-in benchmark tool you can try.
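The built-in tool referred to above is the redis-benchmark utility that ships with Redis; if you would rather drive a quick test from code, here is a minimal sketch assuming the redis-py client and a Redis instance on localhost (both assumptions on my part):

    # Crude write-throughput sketch with redis-py; redis-benchmark is the more thorough option.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    n, start = 10_000, time.monotonic()
    for i in range(n):
        r.set(f"bench:{i}", "x" * 100)
    elapsed = time.monotonic() - start
    print(f"{n} SETs in {elapsed:.2f}s ({n / elapsed:,.0f} ops/sec)")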
Cassandra is a solid product modelled after Google's Bigtable (but I am sure you already know that). It scales writes well if you have lots of nodes, but if you reach TBs of data, for example, it can take days to add nodes. It is also not the simplest one to pick up. But if you are OK with paying, there are excellent guys at DataStax who can take all the complexity away. I have a very simple Cassandra Bombardier that may help you get started.
MongoDB is a great DB for multiple reasons: a very sexy and simple query language, good documentation, a huge community, etc. It is not so great in other respects: you need to spend time sharding it correctly, and then resharding it again (compare to e.g. Riak, where it is done automatically). It is very fast at writes if the data (not just the index) fits in RAM, but it starts to slow down very quickly if it does not. There is ongoing speculation that you may lose data (from one of the Basho engineers: "I had personally spent some time finding out ways to demonstrate that MongoDB will lose writes in the face of failure"), and aggregation queries may take a while even on a not-so-large dataset. I have a Mongo Performance Playground that you may find useful.
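To illustrate the "fast while it fits in RAM" point, a throwaway insert benchmark is easy to put together. This sketch assumes PyMongo and a local mongod on the default port (my assumptions, not part of the Mongo Performance Playground mentioned above):

    # Bulk-insert throughput sketch with PyMongo; watch how the rate drops
    # once the working set no longer fits in RAM.
    import time
    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017")["benchdb"]["docs"]
    coll.drop()

    batch = [{"seq": i, "payload": "x" * 200} for i in range(10_000)]
    start = time.monotonic()
    for _ in range(10):                           # 10 batches of 10k documents
        coll.insert_many([dict(d) for d in batch])  # copy so each batch gets fresh _id values
    elapsed = time.monotonic() - start
    print(f"100,000 inserts in {elapsed:.2f}s ({100_000 / elapsed:,.0f} docs/sec)")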

Website Response Times - General Performance Rules

I am currently in the process of performance tuning a web application and have been doing some research into what is considered 'good' performance. I know this often depends on the application being built, the target audience, and many other factors, but I wondered whether people follow a general set of rules.
There is always the risk with tuning that there is no end to the job, and at some point one has to make a call on when to stop. But when is that? When can we be happy the job is done?
To kick off the discussion, I have been using the following rules, based on the Jakob Nielsen report (http://www.useit.com/alertbox/response-times.html), which says:
The 3 response-time limits are the same today as when I wrote about them in 1993 (based on 40-year-old research by human factors pioneers):
0.1 seconds gives the feeling of instantaneous response — that is, the outcome feels like it was caused by the user, not the computer. This level of responsiveness is essential to support the feeling of direct manipulation (direct manipulation is one of the key GUI techniques to increase user engagement and control — for more about it, see our Principles of Interface Design seminar).
1 second keeps the user's flow of thought seamless. Users can sense a delay, and thus know the computer is generating the outcome, but they still feel in control of the overall experience and that they're moving freely rather than waiting on the computer. This degree of responsiveness is needed for good navigation.
10 seconds keeps the user's attention. From 1–10 seconds, users definitely feel at the mercy of the computer and wish it was faster, but they can handle it. After 10 seconds, they start thinking about other things, making it harder to get their brains back on track once the computer finally does respond.
A 10-second delay will often make users leave a site immediately. And even if they stay, it's harder for them to understand what's going on, making it less likely that they'll succeed in any difficult tasks.
Even a few seconds' delay is enough to create an unpleasant user experience. Users are no longer in control, and they're consciously annoyed by having to wait for the computer. Thus, with repeated short delays, users will give up unless they're extremely committed to completing the task. The result? You can easily lose half your sales (to those less-committed customers) simply because your site is a few seconds too slow for each page.
If you have Apache as your web server, you can use the page-speed (mod_pagespeed) module made by Google.
Instead of waiting for developers to change legacy code, make use of the CPU and memory you have available to provide a better UX.
http://code.google.com/speed/page-speed/docs/module.html
It provides solutions to the most common pain points, with immediate effect: no coding, no changes to the legacy code of web applications.
The rules are pretty sensible. Indeed, one should aim for response times of 1 second or less, but sometimes the processing will genuinely take longer (bad design, slow machines, waiting on 3rd parties, intense data processing, etc.). In that case you can use various tips and tricks to improve the user experience:
use caching (both in the browser and for your frequently processed data); see the sketch after this list
use progressive loading of data with Ajax where possible (and use progress indicators to give feedback that things are happening)
use tools such as Firebug and YSlow to detect potential issues with your HTML design and structure
etc.
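On the caching point, here is a small server-side sketch. It assumes a Flask backend purely for illustration (the question doesn't name a stack), and shows a browser cache header plus a trivial in-process cache for an expensive computation:

    # Hypothetical Flask endpoint: cache an expensive result in-process and let
    # the browser cache the response for five minutes via Cache-Control.
    from functools import lru_cache
    from flask import Flask, make_response

    app = Flask(__name__)

    @lru_cache(maxsize=1)
    def expensive_report():
        # stand-in for slow processing (DB aggregation, 3rd-party call, ...)
        return "report body"

    @app.route("/report")
    def report():
        resp = make_response(expensive_report())
        resp.headers["Cache-Control"] = "public, max-age=300"
        return resp

    if __name__ == "__main__":
        app.run()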

How can I sell a non-functional testing tool to my company?

I need to present a performance test tool to the management team of my company. Some of them think performance testing is not necessary for us because our customers never request it or give us any requirements about performance.
However, one of our current big projects ran into performance problems: response time is very long and the server goes down when it handles many concurrent users.
I think I need to prepare myself to present its benefits, both concrete and intangible. Does anyone have experience with performance testing tools? How can they empower your productivity?
Management cares about money. Show them how your tool will save them money and you will get their approval. Everything else is usually trivial to them.
Expanding on what #LWoodyiii had to say: when presenting a case for anything, whether that be hiring more people or investing in a performance testing tool (or outsourcing your performance testing, for that matter), it needs to be presented in terms of money saved. By doing a little legwork, you should be able to back into the dollars-saved figure.
If you had never had any performance problems, then it would be more difficult to quantify $ saved. But in your case, it should be a little easier to figure this out, as you have already had some significant performance issues. You should be able to put a $ amount to your existing performance problem. You should be able to quantify the revenue lost (lost transactions, lost customers, decreased transaction throughput, etc...) due to the degradation of service. You can also factor in costs associated with fixing and resolving the performance issue. Then it is a matter of comparing the costs of having performance problems vs implementing a performance testing program (tool, training and resource costs).
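As a purely hypothetical back-of-the-envelope example (every figure below is invented for illustration, not taken from the question), the comparison might be laid out like this:

    # Illustrative only: all figures are made up to show the shape of the comparison.
    lost_revenue_per_outage = 20_000      # lost transactions during one incident
    outages_per_year = 3
    firefighting_cost = 15_000            # engineer time spent diagnosing/fixing in production

    tool_license = 10_000
    training_and_setup = 8_000
    test_effort_per_year = 12_000

    cost_of_doing_nothing = lost_revenue_per_outage * outages_per_year + firefighting_cost
    cost_of_testing_program = tool_license + training_and_setup + test_effort_per_year

    print("Do nothing:        $", cost_of_doing_nothing)                            # $75,000
    print("Testing program:   $", cost_of_testing_program)                          # $30,000
    print("Estimated savings: $", cost_of_doing_nothing - cost_of_testing_program)  # $45,000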
It also probably would not hurt to spice up the presentation with some anecdotal performance horror stories that were well publicized in the news and how much those outages cost those firms.
It sounds like you haven't used profilers yourself. That would be a good start. You didn't mention your environment, but Red Gate makes a wonderful profiler for .NET.
http://www.red-gate.com/products/ants_performance_profiler/index.htm
Whatever environment you're in, you can probably find a decent profiler with a trial period. Use the trial period to profile your app and get to know how profilers work and how they can help make your app better.
One thing to demonstrate about productivity is how they can let you focus on the biggest bottlenecks and have the most impact on improving performance with the least effort. With a good profiler you won't bother optimizing code that is already performant.
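A quick way to make that point in a demo, regardless of which commercial tool you eventually buy, is Python's built-in cProfile (used here only as a generic illustration, since the question doesn't state the environment or stack):

    # Profile a toy workload and print the ten most expensive call sites,
    # so the discussion can focus on the real bottleneck instead of guesses.
    import cProfile
    import pstats

    def slow_part():
        return sum(i * i for i in range(2_000_000))

    def fast_part():
        return sum(range(10_000))

    def workload():
        slow_part()
        fast_part()

    cProfile.run("workload()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)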
Of course, if your company really doesn't care about performance, they won't want you doing any optimization anyway. There are lots of companies like this, and it stinks.
I think performance is one of those cases that is really difficult to present to someone else, especially management. You should have a clear, simple way to show its value.
I have experience with profilers like JBuilder and YourKit but no other performance tools. I think the "numbers" shown in them are not sufficient on their own to demonstrate their usefulness.
If you can build a nice practical example, that would be great. Show the same case in both scenarios: if you can show that the old version's response time is long, and that after the performance work the same operation takes much less time, that is a good way to prove your claim.

Should users see their own performance stats in real time

I know this question seems extremely open ended. I will try to narrow the scope.
I have been struggling for some time over whether to include or exclude real-time user performance stats in an application GUI.
Does anyone have any info on the harm vs. gain of including these stats in an app?
e.g. number of emails answered, number of customer calls taken, average time per customer, etc.
The users are begging for more info on their stats, as that is how they are rated. However, there is concern that giving them access to their performance in real time or near real time will negatively affect their work.
I can kind of equate it to being measured on how many lines of code I churn out in one day. Would this help me to be more productive, or just teach me to write code as fast as possible and most likely make a lot of mistakes?
In my application I can think of scenarios like these:
BAD: "I see I have spent 10 minutes on this issue already, I need to finish this up ASAP"
vs.
GOOD: "I was able to help that customer quickly; my productivity is good today"
I think there may indeed be a phenomenon one could call “The Facebook Effect,” where by merely presenting a metric (e.g., number of “friends,” minutes per call), you imply that it’s important. However, I don’t think that’s an issue here because it’s clear that users already regard these metrics as very important, otherwise they wouldn’t be begging you for them.
I believe there is a wealth of psychology research showing that adding faster feedback on a metric will very likely improve user performance on that metric. Furthermore, it is emotionally stressful for people to be evaluated on a metric that they cannot track well themselves, which appears to be the case now.
I say show the metrics real time. Management obviously wants high performance on them otherwise they wouldn’t be rating the users on it. User want it, and it’ll reduce their job stress. Sounds like a win-win to me.
Of course, the customers probably lose out, so there's a bit of an ethical dilemma. However, if these metrics are a problem, then the problem lies in corporate policy, not in the user interface. By showing the metrics to the user, you will highlight whatever shortcomings this policy has to the managers. Maybe management will get a clue.
If it’s part of your role, you could propose measurements of customer satisfaction (e.g., after-call automated survey, random follow-up surveys by a human) so that managers can at least track the impacts of their policy. Assuming they care (how are they evaluated?).
KISS principle: is this information pertinent to what the users will be doing?
If not, then do not add it. One thing that I like on web apps is seeing the time it takes for a page to load after updating / submitting something (basically posting back data).
That way I / a user can tell if there is an issue with data going to the database, or a caching issue.
The problem with displaying, say, calls to customers is that your average times may not be as accurate as you'd hope. You may have a customer who likes to chat, or who is having technical issues that are irrelevant to your business but still get reflected in your stats.
This data shouldn't be trusted because of things like that. Another thing: if you start displaying call times, you end up with employee competitions to see who can get off the phone first... and that's when you start hurting yourself even more: bad customer service.
A few big names used to rate employees based on things like call volume, average time on the call, etc. Remember when Dell tried to outsource all their technical calls to India? Customers here in the US were frustrated, and calls were either too long (not understanding each other) or too short (customers did not want to deal with it). Well, the big shots thought "hey, call times are pretty much in line with what we had forecast and our costs are going down". But it hit rock bottom as time went on...
Your application is useful as a tool to convey this information, and you should definitely include it if users are clamoring for more visibility and transparency into the data it collects. A user's response to the information, however, will largely depend on what the organizational culture already does with it.
For example, does management aggressively encourage users to keep calls short and deal with as many customers as possible, and fire those who don't meet quotas? If so, that's going to provoke an equally strong reaction from users to keep their calls short when they discover they've been a bit on the longer side. ("Uh-oh, I'm getting to the 2-minute mark. I'd better hang up and fake a disconnection to avoid getting my average call length too high.")
Conversely, if management simply encourages everyone to do a good job and provide excellent customer service, this information can be synthesized by users in the overall context of this work. ("I've spent a lot of time on this customer -- I should see if I can wrap things up shortly, or escalate him if I can't fix this problem.")

How can I estimate a Web site build (refresh) when I don't know all of the site's features?

I know there are several estimating questions here, and I have read through most of them, but this one is slightly different. If you're doing a refresh on a Web site, it might include usability enhancements that increase the hours for page production and development. We'll never look at a Web site and say to ourselves the way it is now is the way it will be in the future. If that were the case, then our clients wouldn't be looking for our expertise. Should it always be a requirement to do a team brainstorm before responding to an RFP or creating a formal statement of work? What if those doing the brainstorming are not doing the final work? We can only inventory the current site to a certain extent, and I'm starting to think we should make estimates only for what we know, letting the potential client tell us where we're missing certain elements.
In your proposal, be fairly specific about what you saw on the current web site: how many pages/resources are there? Are they of low/medium/high complexity? What high-level features do you see already in place (e.g. search, security, AJAX, profiles), and what would you consider adding? Give ranges rather than specific estimates.
The more detail you give about what you saw, even without knowing the requirements, the more the client will believe you didn't just shoot the proposal through an RFP chute and that you're serious about the work, without tying you to a commitment to deliver more or faster than you can. Clients do understand that you can't really make a reliable estimate, but the more they believe you have made a serious attempt, the more likely they are to commit their time to helping you understand the requirements toward a final proposal and SOW.
When you don't know what you're estimating, expect the estimate to be inaccurate (and, at best, approximate).
State your assumptions ("X if we do Foo or Y if we don't").
State what you would need to reduce uncertainty ("we need to spend an hour with the client to gather requirements before we can provide any estimate").

Resources