As a developer and constant user of MiniProfiler, I use Stack Overflow as the benchmark for my .NET sites, because the entire Stack Exchange network is just blazingly fast.
I know MiniProfiler is used on Stack Exchange. There is a whole developer thing that can be used on Stack, but can we enable the stats to see how fast it really is?
I might be a bit over-obsessive here, but I am looking to improve performance by milliseconds, and the only viable benchmark is a large and complex site like Stack Exchange.
I know it might be a security issue to see live data, but I just really want a benchmark (screenshot / guidelines) to see how far I can optimize my .NET MVC web application.
My actual IIS and MVC performance is fantastic; I am more concerned about server replies and client-side stuff. So can I (and should I) put more effort into smashing down this response time?
This site is hosted in an Azure cloud app and uses an Azure DB - I know about 60~180 ms goes to connection times that are out of my control.
How can I improve times between Paint, Load and Complete?
I find that I answer my own questions on Stack Exchange more often nowadays; not sure what that means. But here is something interesting I found while dealing with other Q&As (and it answered this question):
Yes, you should avoid the obvious beginner mistakes of string concatenation, the stuff every programmer learns their first year on the job. But after that, you should be more worried about the maintainability and readability of your code than its performance. And that is perhaps the most tragic thing about letting yourself get sucked into micro-optimization theater -- it distracts you from your real goal: writing better code.
Posted by Jeff Atwood
There is no real problem with performance, no serious delays. It's just an obsession that won't lead to much satisfaction.
The dude's got a point. As long as my code is readable and it runs fast - what the heck more do I want?
PERFECTION! - Waste of time, lol#me!
Related
In my code I have a few instructions that connect to my database, and I would like to know how fast they execute. I know that I can write some kind of timer, but pasting that code into a few places and then removing it after measurement will surely leave some mess.
I want to know if VS2012 has any tool to help me with that. Or is there an add-on, maybe?
Just a quick one-off use of the Stopwatch class can give you insight.
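For example, a minimal sketch - RunQuery() here is just a placeholder for whatever call you want to time:

    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        static void Main()
        {
            // Quick one-off measurement around a database call.
            var sw = Stopwatch.StartNew();
            RunQuery();   // placeholder for your actual data access code
            sw.Stop();
            Console.WriteLine("Query took {0} ms", sw.ElapsedMilliseconds);
        }

        static void RunQuery()
        {
            // your database call goes here
        }
    }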
Do beware that such a test isn't actually that useful. It will repeat very poorly; database connection times depend heavily on network traffic overhead and database server load. And worst of all, there just isn't anything you can do about it in your code. Spending time profiling code that you cannot improve is not a very productive use of your time.
You might actually want to leave that code in place so that the user has some idea why the program is performing poorly. Whether that is useful is hard to tell.
I am in the process of measuring bandwidth requirements and how a web application behaves in terms of response time and memory when the number of users increases.
Is there a particularly good tool that can help us here? I believe JMeter is the standard tool, but are there other tools, considering that the site is IE-only?
Any answers are greatly appreciated.
Well, the question is how do you want to profile? Do you want to simulate real-world activity? Or do you just want to bombard the heck out of the site?
Load Testing Individual Pages
I don't think you can go wrong with using Apache Bench (ab). It's dirt simple to use, and can really stress your application. My typical usage is:
ab -c 10 -n 1000 http://www.example.com/path/to/page
The -c parameter is the number of simultaneous requests to issue. I would suggest starting low (like 5 to 10) and working your way up. Watch the output for failed requests and a falling response rate. You're limited to about 1000 connections on most Linux machines, so don't go too crazy.
The -n parameter is how many requests to issue. I would suggest doing at least 100 times the number of concurrent requests to get a good average...
Another great use for Apache Bench is to benchmark individual database queries. Just create a simple script or page that runs the query, and load away (as sketched below). This can be a really good way to detect fast but expensive queries that will take your server down in production yet seem fine in testing.
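As a sketch of what such a page could look like in ASP.NET MVC (the controller name, the "Default" connection string key, and the query itself are made up for illustration):

    using System.Configuration;
    using System.Data.SqlClient;
    using System.Web.Mvc;

    public class BenchController : Controller
    {
        // GET /bench/query -- runs exactly one query per request, so that
        //   ab -c 10 -n 1000 http://localhost/bench/query
        // measures that single query under concurrent load.
        public ActionResult Query()
        {
            var cs = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))  // sample query
            {
                conn.Open();
                return Content(cmd.ExecuteScalar().ToString());
            }
        }
    }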
Load Testing The Whole Application
I've had good luck with WebLoad. There's an open-source version that will get you started if you don't have a big budget, but I'd suggest springing for the pro version. With it, you can set up a distributed test environment (as simple as installing the client on every machine in the office, as complex as spinning up a bunch of VMs for it).
The cool thing is that you can program it in JavaScript, so you can tell it to take random click paths through the site with random delays. This should simulate a user far better than you could manually. Then, once you have it set up, push the tests to the distributed engine and hit go.
It supports many different load profiles (stair-step, where it adds load little by little for the duration of the test, etc.), so you can simulate a Slashdot-effect profile, normal day-to-day usage, and so on.
The reports it generates are immensely useful. They show you the slow URLs, where the bottlenecks are, etc.
There are plenty of other test platforms and systems out there. This was just one that I found that I felt worked pretty well at the time (I did a comparison about 2 to 3 years ago). I am not affiliated with the company in any way.
Load Testing Parts of the Application
This is a really useful technique called profiling. The how-to and tools are fairly language-specific, so I won't go into too much detail here (since you don't have a language tag on your question). But the point is that once you find a slow page, you'll need to profile it to figure out what's slowing it down. Then fix the low-hanging fruit (the parts that are slowest) and re-test to see whether you made a difference...
Conclusion
Since it's almost impossible to simulate real-world load, this is really more of an art than a science. Have at it, and have fun. Don't take the results too seriously, though; even with the best testing, you're likely to miss something... So I wouldn't take them as gospel and go telling the CEO that you tested that it's capable of 100k concurrent users. When the day comes that it crashes (and if you are lucky, it will crash), he will blame you, since you told him it would work...
Just a thought: you say IE only, so is it hosted on IIS? If so, then you might want to look at Microsoft's WCAT (Web Capacity Analysis Tool). More information is available here:
http://support.microsoft.com/kb/231282
It isn't open source, but it is free - and do you really need the source?
As a web developer I've been asked (a couple of times in my career) about the performance of sites that we've built.
Sometimes you'll get semi-vague questions like "will the site continue to perform well, even during product launch week?", "can the site handle a million users?", and even "how is the site doing?"
Of course, these questions are very legitimate, and I have always tried to answer these questions to the best of my ability, using a combination of
historic data (Google Analytics / IIS logs)
web load test tools
server performance counters
experience
gut feeling
common sense
a little help from our sysadmins
my personal understanding of the software architecture in question
I have usually been able to come up with reasonable answers to these questions.
However, web app performance can be influenced by many things (database dependencies, caching strategies, concurrency issues, user behaviour, etcetera).
I'm a programmer and not a statistician, and my approach to this problem has always felt deeply unscientific. So I did a little more research... and all of my Google results seem to focus on tools and features and metrics (and MORE metrics), when I am really looking for a way to make sense of these things.
The question:
What are some good resources (books?) on best practices in web load testing that will help a developer answer these types of questions?
First, your question proves you do understand the problem. It can be tricky enough creating the tools, scripts, etc. to generate the load, but the real challenge lies in evaluating the results and deciding what to monitor.
A very easy answer to your question: generate load on a production-like environment with traffic similar to current or expected usage. If it runs OK, without crashes or slow responses, that is usually good enough. After that, increase the load to find your limits.
When you reach your limit, in my experience it becomes purely a project budget question: will we invest more time/money/resources to evaluate the cause?
I work as a test professional, and I do recommend treating load testing as a vital part of the development process, but unfortunately that is not always in line with what management decides.
So the answer to your question is that almost everyone needs to be involved in this process:
developers to monitor their code; system admins to monitor CPU, memory usage, etc.; DBAs; networking guys; and so on. They all probably need their own source of knowledge to get all this information recorded and analysed.
A few book tips:
The Art of Application Performance Testing: Help for Programmers and Quality Assurance
http://www.amazon.com/exec/obidos/ASIN/0596520662/
The Art of Capacity Planning: Scaling Web Resources
http://www.amazon.com/exec/obidos/ASIN/0596518579/
Performance Testing Guidance for Web Applications
http://www.amazon.com/exec/obidos/ASIN/0735625700/
Have you seen:
Performance Testing Guidance for Web Applications by J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea
It's even available on the web for free.
http://msdn.microsoft.com/en-us/library/bb924375.aspx
You could emulate typical user behaviour and use one of the cloud services to simulate a huge number of users on the website, to see how well it handles the load. I heard Amazon's service is decent.
I can recommend two books published in 2010:
The first is "ASP.NET Site Performance Secrets" by Matt Perdeck, published in late fall 2010. It is written more from a performance-optimization standpoint, but it also has detailed material on load testing. It is a free PDF eBook.
The second is ".NET Performance Testing and Optimization - The Complete Guide" by Paul Glavich and Chris Farrell. It is a pretty complete source on performance / load testing.
I recently started a career in software development after graduating a couple of years ago in CS. The current project I'm on is a large ongoing project that has its origins in the 90s, with a mix of C, C++, and Java. There are multiple platforms (UNIX, Windows, etc.) being supported, older technologies in use like CVS, and some dated documentation in some areas.
The extent of my software development skills stems from going to university, as I've had little real-world experience. I felt like I had a decent foundation in CS, but I cannot help but feel slightly overwhelmed by it all. I'm excited to be part of something so huge, but at the same time I feel like it's a lot of information to absorb.
My coworkers have been great people and answer a lot of the questions I have. My employer hired me knowing that I am entry level.
I've tried poking around the source code and examining how everything gets built but it's on a scale I've never seen before.
How do more experienced people situate themselves when joining a large ongoing project? What are some common tasks you do when getting yourself up to speed?
Good question. I haven't had your exact experience, but in cases like this I like to think, "how do you eat a whale?" The answer is (predictably) "one bite at a time." Reasonable people won't expect you to grasp the whole thing immediately, but they will want to see progress. Perhaps there are some small areas of the larger project that are not too complex, without too many dependencies. Work toward understanding one of those and you're one 'bite' (and/or 'byte') closer to expertise on the whole project.
Being familiar with all the existing documentation, I would try to get the big picture. Literally:
generate a TreeMap of the source code
I would use GrandPerspective on Mac or WinDirStat on Windows. It will give you some insight into the structure of the project's files (and sometimes hints about the code structure). With this, you can ask your colleagues about some of the clusters: what they do, and how they relate to each other.
learn how to build the project
It is important to have the project compiling all the time if you are about to make any changes. Having tests executed at build time is always a good thing, so ask for that too. Even better if there is some kind of continuous integration server in place. If there is, look at its configuration and figure out how the build is done. If there is no CI server but you have already learned how to build the project, set such a server up on your local machine and show it to your fellows - they should fall in love with it.
browse the source code with Structure101 or a similar tool
This is useful especially for Java projects; the tool does a great job. It will give you more detail about the code structure, and sometimes about the system architecture. This experience may be hard at times - you may learn from the tool that the code is basically a Big Ball of Mud ;)
look for tests, and explore them
If you are lucky, there may be some JUnit or CppUnit tests. It is always good to try to understand what those tests are doing; they may be a good starting point for exploring the code further.
My coworkers have been great people and answer a lot of the questions I have. My employer hired me knowing that I am entry level.
You have little to worry about: your employer knows what you are capable of, and your co-workers seem eager to help you out - to be honest, most developers love explaining things to others...
From what I've seen, it truly takes 6+ years to become fully knowledgeable in a language, so don't expect to become a guru within a year... and even the so-called gurus end up learning something new about their language every day.
Learning a new (large) system will always take time. These systems were usually built not in 2 weeks but over many years, so don't expect to understand them fully yet. You'll eventually discover what each part does, piece by piece.
I know how you feel, because I felt like that once...
"I took a speed reading course and read 'War and Peace' in twenty minutes. It involves Russia." (Woody Allen)
I agree with what the others said before me. You need some tools that give you an overview of the code. I personally used inFusion (http://www.intooitus.com/inFusion) because it also gives other interesting data besides structure.
The method that has worked best for me is to grab a copy from source control, with the intention of throwing this version away...
Then try to refactor the code. It is even better if you can refactor code that you know you will be working on at a later stage.
The reasons this is effective:
refactoring gives you a goal to aim towards, whereas "playing with" and "breaking" the code, great as it is, is unfocused.
to refactor code you really have to understand it (see the sketch after this list).
refactored code leaves fewer concepts to retain in memory. If you don't understand a large codebase, it's not because you are a graduate - it's because nobody can retain more than 7 (give or take a few) concepts at a time.
if you follow correct refactoring guidelines, it means you will be writing tests. Make sure, though, that you will be working on the modules that you are testing, as writing tests can be very time-consuming (although very rewarding).
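As a minimal sketch of the kind of refactoring I mean (the domain objects here are hypothetical, just for illustration):

    // Hypothetical domain objects for the example.
    class Order    { public decimal Total; }
    class Customer { public int YearsActive; public bool HasOverduePayments; }

    class DiscountService
    {
        // Before: the intent is buried in an inline condition.
        void ApplyBefore(Order order, Customer customer)
        {
            if (order.Total > 1000 && customer.YearsActive > 5 && !customer.HasOverduePayments)
                ApplyDiscount(order);
        }

        // After: the extracted method names the concept, leaving one
        // less thing to retain in memory while reading the caller.
        void ApplyAfter(Order order, Customer customer)
        {
            if (IsEligibleForLoyaltyDiscount(order, customer))
                ApplyDiscount(order);
        }

        bool IsEligibleForLoyaltyDiscount(Order order, Customer customer)
        {
            return order.Total > 1000
                && customer.YearsActive > 5
                && !customer.HasOverduePayments;
        }

        void ApplyDiscount(Order order) { /* ... */ }
    }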
Do invest in buying this book at some point:
http://www.amazon.co.uk/Refactoring-Improving-Design-Existing-Technology/dp/0201485672
But these links should get you started:
Signs that your code needs refactoring and which refactoring to use (from Refactoring - Martin Fowler):
http://industriallogic.com/papers/smellstorefactorings.pdf
A taxonomy of code smells:
http://www.soberit.hut.fi/mmantyla/BadCodeSmellsTaxonomy.htm
Good luck!!!
I agree with the first comment, but I also think that you have to learn and see the big picture in some way. You have to trace the main flow through the code, at least.
I was in the exact same situation several years ago when I joined a software project with 50+ ClearCase version control VOBs, 5 million lines of code, and some of it dating back to the 1980s.
The first thing I did was look through every source-controlled directory and make a quick summary of my best guess about what the software in that folder did and what language the code was in. You can make a pretty good guess by looking at file names and any comments or documents in those folders.
I then looked at the build scripts to see if they were readable enough to get an idea of dependencies between different parts of the code.
Finally - and I believe this was the most valuable - throw an IDE like Eclipse or NetBeans on top of the code and start reading through pieces of it. Having the ability to jump to the definition of any functions or classes using the IDE allows you to move around a massive software baseline with relative ease.
Overall, have some confidence - it is unlikely that anyone else on the project knows all of the code, so you don't need to either. Use what other people said to get a good idea of the overall project and interfaces and requirements (if they exist) and poke through the code to get an idea of the most commonly used classes and methods.
YSlow, dynaTrace, HttpWatch, Fiddler...
All of these are really good for measuring a website's performance and getting statistics about it. YSlow is really cool and offers good guidelines too.
However, I am very confused with so many options around (though it's good that people have already invested time to make nice guidelines to follow, and I thank them for the great work done).
Following are my questions:
How accurate are these tools in terms of the numbers they show?
Which tool is BEST to use (one for all needs)? Or am I missing some tool that is better than all of the above?
I'm surprised that you haven't mentioned JMeter. It is free, quite easy to use, has lots of features, and great for load testing your website.
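For example, once you have recorded a test plan in the GUI, you can run it headless from the command line (the .jmx and .jtl file names here are placeholders):
jmeter -n -t testplan.jmx -l results.jtl
Here -n runs JMeter in non-GUI mode, -t picks the test plan, and -l writes the results log you can analyse afterwards.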
As for question one, I'm not sure I can answer that. I'm sure that in general, the numbers these tools show are pretty accurate, but there are some catches. Take JMeter for example:
JMeter itself uses a lot of memory, and also substantial CPU time if you do heavy load testing. That means that if you run the tool on the same machine as your website, some resources are lost, i.e. not available to the website.
Testing on the same machine does not, out of the box, take into account that the data has to be sent over the internet connection, so response times are lower than in reality.
All in all, I think you should never blindly trust the results these tools give you, but they can give you good insight into possible bottlenecks or problems.
YSlow is good for measuring performance for a single user. Try to keep it at grade A and it will be OK. But it doesn't actually measure performance with multiple concurrent users. For that you can use, among others, Apache JMeter - a good webserver/web-application stress-test tool. So I would say: just use both YSlow (for client performance) and JMeter (for server performance).
I haven't used dynaTrace before, so I'll skip that part. The HTTP request trackers mentioned don't really measure performance; they are more like debuggers.
As far as I am concerned, I find YSlow to be really good (I have tried Fiddler too), and it does help me when I need it. I believe it provides correct figures, so I will keep using it unless something unanimously accepted (which is difficult, because everyone has different choices and requirements) or simply better comes along. Oh, and they are right - I forgot JMeter, something you should definitely give a mention.
There is also the Speed Tracer extension for Chrome. It should be usable with any JavaScript-heavy website.
http://code.google.com/webtoolkit/speedtracer/
http://gtmetrix.com is a good tool, and it is free. It analyzes your page's speed performance using Page Speed and YSlow.