I am measuring the bandwidth requirements of a web application, and how it behaves in terms of response time and memory use as the number of users increases.
Is there a particularly good tool that can help here? I believe JMeter is the standard tool, but are there other tools worth considering, given that the site is IE-only?
Any answers are greatly appreciated.
Well, the question is how do you want to profile? Do you want to simulate real-world activity? Or do you just want to bombard the heck out of the site?
Load Testing Individual Pages
I don't think you can go wrong with using Apache Bench (ab). It's dirt simple to use, and can really stress your application. My typical usage is:
ab -c 10 -n 1000 http://www.example.com/path/to/page
The -c parameter is the number of simultaneous requests to issue. I would suggest starting low (like 5 to 10) and working your way up. Watch the output for failed requests and a falling response rate. You're limited to about 1000 connections on most Linux machines, so don't go too crazy.
The -n parameter is how many requests to issue. I would suggest doing at least 100 times the number of concurrent requests to get a good average...
Another great use for Apache Bench is to benchmark individual database queries. Just create a simple script that runs the query, and load away. This can be a really good way to detect fast but expensive queries that will take your server down in production yet seem fine in testing.
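To make that concrete, here's a minimal sketch (the controller name, connection string name, and query are hypothetical) of a bare ASP.NET MVC action that does nothing but run one query, so ab can hammer that single URL:

```csharp
// Hypothetical sketch: expose one expensive query behind a bare MVC action,
// then point ab at /bench/slowquery to load test just that query in isolation.
using System.Configuration;
using System.Data.SqlClient;
using System.Web.Mvc;

public class BenchController : Controller
{
    public ActionResult SlowQuery()
    {
        // Connection string name and SQL are placeholders for your own query.
        var cs = ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE Status = @s", conn))
        {
            cmd.Parameters.AddWithValue("@s", "open");
            conn.Open();
            var count = (int)cmd.ExecuteScalar();
            return Content(count.ToString());   // tiny response; the query is the point
        }
    }
}
```

Then something like ab -c 5 -n 500 http://localhost/bench/slowquery loads just that query, and the response times tell you how it behaves under concurrency.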
Load Testing The Whole Application
I've had good luck with WebLoad. There's an open-source version that will get you started if you don't have a big budget, but I'd suggest springing for the pro version. With it, you can set up a distributed test environment (as simple as installing the client on every machine in the office, as complex as spinning up a bunch of VMs for it).
The cool thing is that you can program it in JavaScript, so you can tell it to take random click paths through the site with random delays. This should simulate a user far better than you could do manually. Then, once you have it set up, push the tests to the distributed engine and hit go.
It supports many different load profiles (stair-step, where it adds load little by little for the duration of the test, etc.). So you can simulate a Slashdot-effect profile, normal day-to-day usage, and so on.
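WebLoad's own scripting is JavaScript; purely to illustrate the idea of random click paths, random think times, and a stair-step ramp-up outside of WebLoad, here is a rough C# sketch (the URLs, step sizes, and delays are made up):

```csharp
// Rough sketch of the same idea outside WebLoad: random click paths with
// random think times, ramping the number of simulated users up in steps.
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class MiniLoadTest
{
    static readonly string[] Pages =
        { "/", "/products", "/products/42", "/cart", "/checkout" };
    static readonly HttpClient Http =
        new HttpClient { BaseAddress = new Uri("http://www.example.com") };

    static async Task SimulateUser(int clicks, Random rng)
    {
        for (int i = 0; i < clicks; i++)
        {
            var page = Pages[rng.Next(Pages.Length)];          // random click path
            var response = await Http.GetAsync(page);
            Console.WriteLine($"{page} -> {(int)response.StatusCode}");
            await Task.Delay(rng.Next(500, 5000));             // random think time
        }
    }

    static async Task Main()
    {
        // Stair-step profile: 5, 10, 15, ... concurrent "users".
        for (int users = 5; users <= 25; users += 5)
        {
            Console.WriteLine($"--- {users} concurrent users ---");
            var tasks = Enumerable.Range(0, users)
                .Select(_ => SimulateUser(10, new Random(Guid.NewGuid().GetHashCode())));
            await Task.WhenAll(tasks);
        }
    }
}
```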
The reports it generates are immensely useful. It shows you the slow URLs, where the bottlenecks are, etc.
There are plenty of other test platforms and systems out there. This was just one that I found that I felt worked pretty well at the time (I did a comparison about 2 to 3 years ago). I am not affiliated with the company in any way.
Load Testing Parts of the Application
This is a really useful technique called profiling. The how-to and the tools are fairly language-specific, so I won't go into too much detail here (since you don't have a language tag on your question). But the point is that once you find a slow page, you'll need to profile it to figure out what's slowing it down. Then fix the low-hanging fruit (the parts that are the slowest). Then re-test to see if you made a difference or not...
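In .NET terms, the crudest version of this is just timing the suspect sections of the slow page and logging the numbers; a real profiler does the same thing with far better coverage. A trivial sketch (the two worker methods are placeholders):

```csharp
// Crude manual profiling sketch: time the suspect sections of a slow page
// and log them, so you know where the low-hanging fruit actually is.
using System;
using System.Diagnostics;

class PageTimer
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();

        LoadDataFromDatabase();        // placeholder for the real work
        Console.WriteLine($"db:     {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        RenderTemplate();              // placeholder
        Console.WriteLine($"render: {sw.ElapsedMilliseconds} ms");
    }

    static void LoadDataFromDatabase() { /* ... */ }
    static void RenderTemplate()       { /* ... */ }
}
```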
Conclusion
Since it's almost impossible to simulate real-world load, this is really more of an art than a science. Have at it, and have fun. Don't take the results too seriously, though; even with the best testing, you're likely to miss something... So I wouldn't take them as gospel and go telling the CEO that you tested that it's capable of 100k concurrent users. When the day comes that it crashes (and if you are lucky, it will crash), he will blame you, since you told him it would work...
Just a thought: you say IE only, so is it hosted on IIS? If so, then you might want to look at Microsoft's WCAT (Web Capacity Analysis Tool); more information is available here:
http://support.microsoft.com/kb/231282
It isn't open source, but it is free; do you really need the source?
As a developer and constant user of MiniProfiler, I use Stack Overflow as the benchmark for my .NET sites. That is because the entire Stack network is just blazingly fast.
I know MiniProfiler is used on Stack Exchange. There is a whole developer view that can be used on the Stack sites, but can we enable those stats to see how fast it really is?
I might be a bit over-obsessive here, but I am looking to improve performance in milliseconds, and the only viable benchmark is a large and complex site like Stack Exchange.
I know it might be a security issue to see live data, but I just really want a benchmark (screenshot / guidelines) to see how far I can optimize my .NET MVC web application.
My actual IIS and MVC performance is fantastic, and I think I am more concerned about server replies and client-side stuff. So can I (and should I) put more effort into smashing down this response time?
This site is hosted in an Azure Cloudapp and uses an Azure DB - I know about 60~180 ms is spent on connection times that are out of my control.
How can I improve times between Paint, Load and Complete?
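For reference, the server-side measuring itself is the easy part with MiniProfiler; it is roughly this shape (a sketch only, with placeholder step names and fake work, and the profiler start/stop wiring omitted):

```csharp
// Sketch of how server-side timings get captured with MiniProfiler.
// Step names and the Thread.Sleep stand-ins are placeholders, not real code.
using System.Threading;
using System.Web.Mvc;
using StackExchange.Profiling;

public class ProductsController : Controller
{
    public ActionResult Index()
    {
        var profiler = MiniProfiler.Current;   // Step() is an extension method and tolerates a null profiler

        using (profiler.Step("load products"))
        {
            Thread.Sleep(25);                  // stand-in for the real data access
        }
        using (profiler.Step("build view model"))
        {
            Thread.Sleep(5);                   // stand-in for mapping/shaping work
        }
        return View();
    }
}
```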
I find that I answer my own questions on Stack Exchange more often nowadays. Not sure what that means. But this is an interesting thing I found while dealing with other Q&As (and it answered this question):
Yes, you should avoid the obvious beginner mistakes of string concatenation, the stuff every programmer learns their first year on the job. But after that, you should be more worried about the maintainability and readability of your code than its performance. And that is perhaps the most tragic thing about letting yourself get sucked into micro-optimization theater -- it distracts you from your real goal: writing better code.
Posted by Jeff Atwood
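The beginner mistake he refers to looks, concretely, like this (a trivial sketch):

```csharp
// The classic first-year mistake the quote refers to: repeated string
// concatenation in a loop allocates a new string every pass, while
// StringBuilder appends in place. Worth fixing; not worth obsessing over.
using System.Text;

class ConcatExample
{
    static string Slow(string[] lines)
    {
        var result = "";
        foreach (var line in lines)
            result += line + "\n";          // new string allocated each iteration
        return result;
    }

    static string Fast(string[] lines)
    {
        var sb = new StringBuilder();
        foreach (var line in lines)
            sb.AppendLine(line);
        return sb.ToString();
    }
}
```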
There is no real problem in performance or serious delays. It's just an obsession that won't lead to much satisfaction.
The dude's got a point. As long as my code is readable and it runs fast, what the heck more do I want?
PERFECTION! - Waste of time, lol#me!
As a web developer I've been asked (a couple of times in my career) about the performance of sites that we've built.
Sometimes you'll get semi-vague questions like "will the site continue perform well, even during product launch week?", "can the site handle a million users?", and even "how is the site doing?"
Of course, these questions are very legitimate, and I have always tried to answer them to the best of my ability, using a combination of:
historic data (Google Analytics / IIS logs)
web load test tools
server performance counters
experience
gut feeling
common sense
a little help from our sysadmins
my personal understanding of the software architecture in question
I have usually been able to come up with reasonable answers to these questions.
However, web app performance can be influenced by many things (database dependencies, caching strategies, concurrency issues, user behaviour, etcetera).
I'm a programmer and not a statistician, and my approach to this problem has always felt deeply unscientific. So I did a little more research... and all of my Google results seem to focus on tools and features and metrics (and MORE metrics) when I am really looking for a way to make sense of these things.
The question:
What are some good resources (books?) on best practices in web load testing that will help a developer answer these types of questions?
First, your question proves you do understand the problem. It can sometimes be tricky enough creating the tools, scripts, etc. to generate the load, but the real challenge lies in evaluating the results and knowing what to monitor.
A very easy answer to your question could be: generate load on a production-like environment that is similar to current or expected usage. If it runs OK without any crashes or slow performance, that is usually good enough. After that, increase the load to see where your limits are.
When you reach your limit, my experience is that this becomes purely a project budget question: will we invest more time/money/resources, etc., to evaluate the cause?
I work as a test professional, and I do recommend treating load testing as a vital part of the development process, but unfortunately that is not always in line with what management decides.
So the answer to your question is that almost everyone needs to be involved in this process:
developers to monitor their code; system admins to monitor CPU, memory usage, etc.; DBAs; networking guys; and so on. They all probably need their own sources of knowledge to be able to get all this info recorded and analysed.
A few book tips:
The Art of Application Performance Testing: Help for Programmers and Quality Assurance
http://www.amazon.com/exec/obidos/ASIN/0596520662/
The Art of Capacity Planning: Scaling Web Resources
http://www.amazon.com/exec/obidos/ASIN/0596518579/
Performance Testing Guidance for Web Applications
http://www.amazon.com/exec/obidos/ASIN/0735625700/
Have you seen:
Performance Testing Guidance for Web Applications by J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
It's even available on the web for free.
http://msdn.microsoft.com/en-us/library/bb924375.aspx
You could emulate typical user behaviour and use one of the cloud services to simulate a huge number of users on the website, to see how well it copes under that load. I hear Amazon's service is decent.
I can recommend two books published in 2010:
The first is "ASP.NET Site Performance Secrets" by Matt Perdeck, published in late fall 2010. It is written more from a performance-optimization standpoint, but also has detailed material on load testing. It is a free PDF eBook.
The second book is ".NET Performance Testing and Optimization - The Complete Guide" by Paul Glavich and Chris Farrell. It is a pretty complete source on performance / load testing.
I'm looking to enhance our current test suites, and continuous integration builds with full stack integration/acceptance testing.
I'm looking into tools like Culerity and Selenium that can execute front-end JavaScript while running user stories. I'm looking for something that can provide coverage of front-end JavaScript and high-level features without sucking up tons of development time maintaining a complex test environment. We're currently using RSpec, Cucumber, and CruiseControl.rb, so easy integration with those tools would be ideal.
Are any of the headless browsers and JS-capable test environments at a point where they are worth the trouble of setting up and maintaining? What are the best options you've come across, and what pitfalls should I avoid?
Thanks.
You sound like you are way further down this road than I am, but I'll comment anyway.
I am working on a JavaScript project (with a Java + MySQL back end) and decided to use Selenium for testing, and to try to achieve as thorough coverage as I could. I also poked around with a few other testing tools, but I can't say I really got to know any of them. None of them appeared, from their web sites, to be very polished or popular compared to Selenium. I am planning to integrate to CruiseControl eventually, but haven't done so yet.
This has been an interesting project and at the end of the day, I am quite happy with Selenium. Selenium plusses:
Test 'scripts' can all be written in Java, no obscure scripting language involved. Among other things, you can easily do things like manipulate and verify the data in your database before and after tests.
Se also supports Perl, C#, etc. I think, although that is of no interest to me.
Selenium IDE is a great tool for quickly understanding how Se works, how locators work, etc. You don't want to actually run tests long-term using the IDE, but it's great for getting your feet wet, and for ongoing figuring things out.
Se seems to work flawlessly with jUnit. Probably TestNG as well, but I have not tried that yet; it's on my to-do list.
Excellent documentation and web site.
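To give a feel for the shape of these tests: my scripts are Java with jUnit, but since Se also has C# bindings, the same test looks roughly like this in C# with NUnit (the URL, locators, and expected title are hypothetical):

```csharp
// Roughly what a Selenium test looks like. My own scripts are Java with
// jUnit; this is the same shape in C# with NUnit, since Se supports both.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class LoginTests
{
    private IWebDriver _driver;

    [SetUp]
    public void StartBrowser()
    {
        _driver = new FirefoxDriver();
    }

    [Test]
    public void ValidUserCanLogIn()
    {
        _driver.Navigate().GoToUrl("http://localhost/myapp/login");
        _driver.FindElement(By.Id("username")).SendKeys("testuser");
        _driver.FindElement(By.Id("password")).SendKeys("secret");
        _driver.FindElement(By.Id("loginButton")).Click();

        // Verify in the UI; the same test can also hit the database directly
        // to confirm the row the page claims to have written is really there.
        StringAssert.Contains("Dashboard", _driver.Title);
    }

    [TearDown]
    public void StopBrowser()
    {
        _driver.Quit();
    }
}
```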
Minuses:
I spent a LOT of time figuring out how to locate elements in all cases. This is partially the 'fault' of the framework I am using (ExtJS), not Selenium.
It seems no matter what you do, Se has timing dependencies, e.g. places where you have to inject artificial pauses to make it work (the sketch after this list shows the usual workaround of waiting on a condition instead).
There are also monitor-size dependencies in my tests. I think this is extremely undesirable, but in some places it seems to be unavoidable. Basically, this is because there are many element types that JS doesn't support you clicking on programmatically.
Related to #3, in places I am forced to drive the mouse. That means you have to have a dedicated test PC. Which is no big deal, but doesn't seem right.
Tests are slow - mainly due to the time it takes Se to invoke Firefox. No doubt this is partially my environment, and I suspect I could do lots of things to improve this. However, it is really noticeable and not obvious why. It takes about 10 minutes to run about 40 tests.
Support forum is very spotty. Well, you get what you pay for. But time and again I found someone had posted about my problem, and the post was ignored or else an invalid solution was offered with no follow-up when the OP pointed out that the suggestion was bogus.
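On the timing-dependency point above: the alternative worth trying before a hard-coded pause is an explicit wait, i.e. polling for a condition with a timeout instead of sleeping. A sketch (the URL, locator, and timeout are made up):

```csharp
// Explicit wait instead of a fixed pause: poll until the condition holds
// or the timeout expires. URL, locator, and timeout are illustrative only.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Support.UI;

class ExplicitWaitExample
{
    static void Main()
    {
        IWebDriver driver = new FirefoxDriver();
        driver.Navigate().GoToUrl("http://localhost/myapp/orders");

        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        IWebElement grid = wait.Until(d => d.FindElement(By.Id("ordersGrid")));

        grid.FindElement(By.CssSelector("tr.order-row")).Click();
        driver.Quit();
    }
}
```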
HTH, cheers.
For my final-year project (BSc Software Engineering) I am looking at time entries for software applications, whether they accurately reflect the development of the project, and whether they can be improved or automated.
For this I will be prototyping a plug-in for Visual Studio using VSPackages that will automatically track which files are being worked on, assigning the files to tasks and projects. The plug-in will also track periods of inactivity within Visual Studio.
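Roughly, the tracking would hang off the IDE's automation events; something like this (a sketch only, the exact VSPackage wiring varies by SDK version, and Log is a placeholder for whatever storage ends up being used):

```csharp
// Sketch of the event hooks a VSPackage could use to notice which files are
// being worked on. Keep field references to the events objects so they are
// not garbage collected; Log() is a placeholder.
using System;
using EnvDTE;

public class ActivityTracker
{
    private readonly DocumentEvents _documentEvents;
    private readonly WindowEvents _windowEvents;

    public ActivityTracker(DTE dte)
    {
        _documentEvents = dte.Events.DocumentEvents;
        _windowEvents = dte.Events.WindowEvents;

        _documentEvents.DocumentOpened += doc => Log("opened", doc.FullName);
        _documentEvents.DocumentSaved  += doc => Log("saved",  doc.FullName);
        _windowEvents.WindowActivated  += (got, lost) => Log("focus", got.Caption);
    }

    private void Log(string action, string target)
    {
        Console.WriteLine($"{DateTime.Now:o} {action} {target}");
    }
}
```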
This will then be backed up via a simple Web Application for non-technical staff to pull reports from, so that projects can be tracked very accurately.
I currently work in a small company (10 people) and cannot get the large set of data I need to gain a good conclusion from. For this reason I ask if it would be possible to discuss the topic below and if you have a few spare minutes to fill in my questionnaire and email me the result to the address contained within the document:
http://www.mediafire.com/?dmrqmwknmty
Cheers,
MiG
In answer to your question, development time entries are important. But you can't measure them through a single IDE, nor indeed through any software. The development process is a complex one involving discussions, planning around a whiteboard, diagrams sketched on a piece of paper, research on the Internet, etc etc.
Read Jeff Atwood's excellent post on laziness and the other posts he refers to there. A good, successful developer spends time away from the IDE making sure they don't spend 90% of their working day reinventing the wheel, or 50% of their day heading down the wrong track because they haven't thought the design through.
I find the basic idea interesting, even though automated time tracking has flaws, just as measuring the number and frequency of commits to a project (as done on ohloh.net for example) can be a very misleading indicator about its activity.
However, the reality is that time worked is the basis for billing, and needs to be measured somehow. There are already solutions for this, though.
Take a look at
Grindstone or
AllNetic Working Time Tracker
(there are many more out there but these two I know well).
They work independently from what tool(s)/IDEs I am using, they can detect my absence/presence on the computer and prompt me about how I want to file the time, and they can do all the necessary reporting. It is also easy to add and manage filed entries.
What would your Visual Studio Plugin achieve that these solutions don't offer already?
Time spent developing in an IDE provides only a (sometimes very) partial metric of how much time a developer works.
I have been using FogBugz version 7 lately at work, and it has a feature that allows developers to estimate how long it will take them to finish a case. The developer can then use the software to say, "I am working on this case". Then the clock will count down until it reaches zero, based on the developer's working schedule (including days off), the hours that they say they are in the office, and the percentage of their time that they estimate they are working on cases.
But as a developer, I know that I can easily get sidetracked by more important cases. I also know that I spend a good deal of time working on the cases using tools other than the IDE - such as testing in MbUnit, looking for error message explanations online, or giving status to people who ask me why I have not finished working on a bug yet. And I've also been in places where I spent half the typical day - or more - in meetings or in a lab doing my work on a remote machine somewhere else. When I'm at my desk, I could be using my computer to map out ideas for the work I'm doing, or just pen and paper.
So there are a lot of variables to consider when you ask the question, "Is the guy who sits over there really doing his work?" You would really need to look at more running applications than just Visual Studio 2008 (devenv.exe). You would probably need to look at activity for processes associated with a developer's test framework, text documents, remote desktop connections to other machines, and even Firefox. (Firefox would be a huge judgment call as to whether somebody is actually working!)
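If someone really wanted that broader picture, the crude version is just sampling the process list on a schedule; a sketch (the process names are examples only, and it still tells you nothing about what the person is actually doing in them, which is exactly the problem described above):

```csharp
// Crude sketch of sampling which "work-related" applications are running.
// Process names are examples; presence of a process is not evidence of work.
using System;
using System.Diagnostics;
using System.Linq;

class ActivitySampler
{
    static readonly string[] Interesting =
        { "devenv", "firefox", "mstsc", "winword", "sqlservr" };

    static void Main()
    {
        var running = Process.GetProcesses()
            .Select(p => p.ProcessName.ToLowerInvariant())
            .Distinct()
            .Where(name => Interesting.Contains(name))
            .ToList();

        Console.WriteLine($"{DateTime.Now:o}: {string.Join(", ", running)}");
    }
}
```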
As part of your research for the project, I would also suggest researching some of the other time collection systems that are in use throughout your company's industry and comparing their features.
A bit off track, but you could potentially use this sort of data to illuminate areas of complexity (LOC), areas that are prone to change (frequent updates 'n' days apart), etc., but even this would be skewed by different programmers' approaches to development.
We track all our time by project daily. It takes me less than five minutes a day to fill out what I was working on. This is not something that can be automated, or even should be automated, as it will never be anywhere close to accurate. Files aren't always associated with just one project, and it would cost me more time to tell an application which files belong to which project than the five minutes it takes to fill out my timesheet. No one spends the entire day typing - there are meetings and phone calls and thinking (you know, where you figure out what you want to type!), and none of that will be captured in your automated system. What you are proposing will not be more accurate; it will be less accurate than requiring people to fill in time sheets daily.
While time entries are important, figuring out how to organize them is where trouble comes into the picture. How well would non-technical staff understand the various phases of development, in order to understand the data? I'll agree with the other responses that the IDE tracking is a terrible idea, especially if part of what is being done involves changing a database through a web browser, which is what I have in my current big CMS project, where we may have to change templates or create content to test whether the functionality works.
This also ignores the possibility of gaming the system. What if I leave my IDE open in debug because I want to scan memory, or do something else that requires the window to be open so I can actually look at something? I could also have left my desk entirely, unless you are somehow tracking where I'm looking and sitting.
I'm looking for a good template on server-side installation of software for a project I'm working on.
The client-side is pretty straightforward. The server-side installation is a little trickier. It is made up of several pieces (services, database connections, dependencies, ports that need to be unblocked, etc.). During a recent test, several undocumented pieces were discovered. Now I need to create installation documentation for our disaster-recovery plans, and ways to test the installation without necessarily having a "full-up" system to test on.
I'd really like a suggestion of where I can get a template or a really good example of such a document. I'd like it to be something that an operator could read and comprehend in the heat of a recovery.
[EDIT]
Our current documentation comes mainly from the questions our administrators have had during off-site tests. As new code is written, I'd like to make sure the documentation is written ahead of time. I've been collecting VMWare images to start testing, but was looking for some good examples. It's a Windows Server shop (2000 & 2003). Word templates would be great, but if I could see good documentation, I could create the templates. Any suggestions about what should be tested would be great as well.
[2nd EDIT]
I've gotten several good ideas from the answers posted. After changing my Google search, I came up with some good starting points. They're not perfect, but they are a good start.
Microsoft Exchange - http://technet.microsoft.com/en-us/library/bb125074(EXCHG.65).aspx
iPhone - http://manuals.info.apple.com/en_US/Enterprise_Deployment_Guide.pdf
http://www.novell.com/documentation/gwgateways/gw7_exch/index.html?page=/documentation/gwgateways/gw7_exch/data/ab32nt1.html
http://cregan.wordpress.com/2006/06/22/exchange-2003-step-by-step-installation-instructions/
http://technet.microsoft.com/en-us/magazine/cc160942.aspx
Covers planning in the design stage well - http://www.onlamp.com/pub/a/onlamp/2004/04/08/disaster_recovery.html?page=2
[Edit 10/29/2008]
This is the type of sample I was looking for. It doesn't have a lot of garbage, but it seems to explain enough of the why along with the how: http://wiki.alfresco.com/wiki/Installing_Labs_3_Nile
The most complete method that we've come up with for creating our DR documentation involves going through a full cycle (or two) of installation, and documenting each step along the way.
I realize this can be a bit difficult if you don't have a test (or replacement) system to use to create your documentation - but it's worth lobbying for running through this cycle at least once.
(I recommend twice, the second being done by someone not involved with the project - this is how you test the documentation for future admins, who may not be as experienced with the process.)
A side effect of the above is that your documentation grows fairly large - last I had to do it, I believe the completed installation manual for our database servers was 30+ pages.
What should be tested? Well, in the case of a web site, "can you get to the page?" Include a URL as a starting point and let the admin click through to a certain point. It is not necessary for the admin to go through the whole QA cycle, just a confirmation that what you meant to be deployed is really what got deployed.
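That page check can even be scripted, so the operator has one less judgment call to make in the heat of a recovery; a trivial sketch (the URL is a placeholder for whatever the document lists):

```csharp
// Trivial post-deployment smoke check: can we get the page, and does it
// return 200? The URL is a placeholder for the document's starting point.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SmokeCheck
{
    static async Task<int> Main()
    {
        using (var http = new HttpClient())
        {
            var response = await http.GetAsync("http://intranet/app/login");
            Console.WriteLine($"{response.RequestMessage.RequestUri} -> {(int)response.StatusCode}");
            return response.IsSuccessStatusCode ? 0 : 1;   // non-zero exit flags a failed check
        }
    }
}
```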
Other ideas
Also, we (my team at my last job) had QA test the deployment. As a QA person should be, he was not intimate with the details and as he deployed to QA, we were able to get feedback on what went wrong.
Another thing that is useful is sitting down with the admin(s) before the deployment. Go over the instructions and make sure they understand them the same way you do.
Template? Just make sections that have fields for data such as URL to DEV, QA, and PROD. When you write out the instruction you can refer to those. Just make it clear what is being deployed.
Depending on the admins, automation is helpful. I've had Windows admins who want a Word doc with step-by-step instructions, and other admins who wanted a script.
However, here are some helpful things to include, probably as sections:
Database changes
Scripts to run
Verification that they worked
Configuration changes
What are the changes
Where is a version of the new file (in my case they diffed the two, which helped reduce errors concerning production-specific values)
General verification
What should be different from the user perspective (feature changes)
For web farm deployments, it might be helpful to have a coordination document concerning how the servers need to be pulled in and out of pool.