How can I sell a non-functional testing tool to my company? - performance

I need to present a performance testing tool to my company's management team. Some of them think performance testing is not necessary for us because our customers never request it or give us any performance requirements.
However, one of our current big projects has run into performance problems: response times are very long, and the server goes down when it handles many concurrent users.
I think I need to prepare to present its benefits, both concrete and non-concrete. Does anyone have experience with a performance testing tool? How can it improve your productivity?

Management cares about money. Show them how your tool will save them money and you will get their approval. Everything else is usually trivial to them.

Expanding on what #LWoodyiii had to say: when presenting a case for anything, whether that be hiring more people or investing in a performance testing tool (or outsourcing your performance testing, for that matter), it needs to be presented in terms of money saved. By doing a little legwork, you should be able to back into the $-saved amount.
If you had never had any performance problems, it would be more difficult to quantify the $ saved. But in your case it should be a little easier to figure out, as you have already had some significant performance issues. You should be able to put a $ amount on your existing performance problem: quantify the revenue lost (lost transactions, lost customers, decreased transaction throughput, etc.) due to the degradation of service, and factor in the costs associated with fixing and resolving the issue. Then it is a matter of comparing the cost of having performance problems vs. the cost of implementing a performance testing program (tool, training, and resource costs).
It also probably would not hurt to spice up the presentation with some anecdotal performance horror stories that were well publicized in the news and how much those outages cost those firms.
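To make the comparison concrete, here is the kind of back-of-the-envelope model described above as a small Python sketch. Every figure in it is a hypothetical placeholder, not data from the question; plug in your own incident numbers.

```python
# Hedged cost model: one performance incident vs. a year of a testing program.
# All numbers are invented placeholders.

def outage_cost(hours_down, revenue_per_hour, fix_cost, customers_lost, customer_value):
    """Total cost of a single performance incident."""
    return (hours_down * revenue_per_hour      # lost transactions during the outage
            + fix_cost                          # emergency developer/ops time
            + customers_lost * customer_value)  # churn attributed to the outage

incident = outage_cost(hours_down=6, revenue_per_hour=2_000,
                       fix_cost=8_000, customers_lost=3, customer_value=5_000)
program = 15_000 + 5_000  # tool licence + training/resource costs

print(f"One incident:            ${incident:,}")
print(f"Testing program, year 1: ${program:,}")
print(f"Program pays for itself after {program / incident:.1f} incidents prevented")
```

With these placeholder numbers, preventing a single outage already covers the program; your real figures are what make the argument.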

It sounds like you haven't used profilers yourself. That would be a good start. You didn't mention your environment, but Red Gate makes a wonderful profiler for .NET.
http://www.red-gate.com/products/ants_performance_profiler/index.htm
Whatever environment you're in, you can probably find a decent profiler with a trial period. Use the trial period to profile your app and get to know how profilers work and how they can help make your app better.
One thing to demonstrate about productivity is how a profiler lets you focus on the biggest bottlenecks and have the most impact on performance with the least effort. With a good profiler you won't waste time optimizing code that is already performant.
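The workflow is the same whatever your stack; purely as an illustration, here is a minimal sketch using Python's built-in cProfile (the functions are made up for the example), showing how a profiler ranks the hot spots for you:

```python
import cProfile
import pstats

def slow_lookup(items, targets):
    # O(n*m) list membership test -- a typical hidden bottleneck
    return [t for t in targets if t in items]

def run():
    items = list(range(50_000))
    targets = list(range(0, 100_000, 2))
    slow_lookup(items, targets)

profiler = cProfile.Profile()
profiler.runcall(run)
# Sort by cumulative time: the top entries are where effort pays off.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```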
Of course, if your company really doesn't care about performance, they won't want you doing any optimization anyway. There are lots of companies like this, and it stinks.

I think performance is one of those cases that is really difficult to present to someone else, especially management. You need some "clear" plus "simple" way to show the use of it.
I have experience with profilers like JBuilder and YourKit, but no other performance tools. I think the numbers they show are not, by themselves, sufficient to demonstrate their value.
If you can build a nice practical example, that would be great: show the same case for both scenarios. If you can show that the old version's response time is long, and that after the performance work the same operation takes much less time, that is a good way to prove your claim.
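A hedged sketch of that before/after demo in Python; the two functions are toy stand-ins for the real operation you would measure:

```python
# Same operation, naive vs. improved; timing the difference makes the case.
import timeit
from functools import lru_cache

def fib_before(n):
    # exponential-time recursion: the "old" implementation
    return n if n < 2 else fib_before(n - 1) + fib_before(n - 2)

@lru_cache(maxsize=None)
def fib_after(n):
    # same result, memoized: the "improved" implementation
    return n if n < 2 else fib_after(n - 1) + fib_after(n - 2)

print("before:", timeit.timeit(lambda: fib_before(25), number=20), "seconds")
print("after: ", timeit.timeit(lambda: fib_after(25), number=20), "seconds")
```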

Related

How to explain to your boss that code/resource optimization is important?

Ahhh, every time it's so frustrating...
We have a dedicated server at our hosting company, and every time I have to write a new app (or an addition to a pre-existing app), I tend to 'lose' some time optimizing the code in many respects (reducing the DB queries, optimizing the DB structure, reducing bandwidth, etc.), depending on what the app is supposed to do.
Obviously, the point is not that I write bad code and then rebuild it; it's just that after the project is complete, I always find some things that could be done better.
And every time my boss catches me doing this, he says: 'You're wasting your time! If the application needs more resources, we'll buy more RAM, more CPU, or more bandwidth!'
What is the best (and simplest) way to explain to him that optimization is still important, and that it is not as easy or as automagical as upgrading the hardware of a (production!) server?
EDIT: I'm not talking just about database optimization, but about every aspect of an application.
Chances are you really are wasting time. Most of the optimizations you learn in college or university are fairly irrelevant in the business world. I recommend concentrating on those optimizations where you can quantify WHY the optimization is worth your company's money (your time spent is their money spent).
When trying to optimise code you need to follow the optimisation cycle. There are no short-cuts:
1. Define specific, detailed, measurable performance requirements.
2. Measure your application's performance.
3. If it meets the requirements, stop, as continuing is a waste of time/money.
4. Determine where the bottleneck is (which machine? CPU, memory, disk, etc.?).
5. Fix the bottleneck (either a hardware or a software fix, whichever is faster/cheaper).
6. Go to 2.
From your post it doesn't sound like you have done 1 or 2, so you've skipped over 3 and 4 because you don't know enough to do them, and are now stumbling blindly around in 5 attempting to optimise whatever you think needs optimising.
So from what you've said I'd have to agree with your boss. You are wasting your time. Or at least you haven't followed enough process to prove otherwise, which is essentially the same thing.
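As a minimal illustration of steps 1-3 of the cycle, here is a Python sketch; the requirement, the threshold, and the handler are all invented for the example:

```python
import time

REQUIREMENT_SECONDS = 0.5       # step 1: a specific, measurable requirement

def handle_request():
    time.sleep(0.1)             # stand-in for the real operation being measured

start = time.perf_counter()     # step 2: measure
handle_request()
elapsed = time.perf_counter() - start

if elapsed <= REQUIREMENT_SECONDS:   # step 3: stop if the requirement is met
    print(f"{elapsed:.3f}s <= {REQUIREMENT_SECONDS}s: requirement met, stop")
else:
    print(f"{elapsed:.3f}s > {REQUIREMENT_SECONDS}s: go find the bottleneck (step 4)")
```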
Keep detailed notes and time sheets the next time you make a server move/upgrade, and log every minute of work that arose from it (including things in the weeks after). Then you'll have hard evidence that upgrading is expensive, and that optimizations help delay the investment.
In the current situation, you might be able to do something with benchmarks and statistics along the lines of "without my optimization, the server will be at 90% capacity with X users; with my optimization, we can cater for Y more users before we have to get a new machine."
On the other hand, the line between optimization and over-optimization is thin. While writing code that doesn't waste resources is good craftsmanship and should be a human right, RAM and disk space are awfully cheap these days. Optimizations also tend to make code more complicated, and harder to maintain for others. When you control your own hardware, code optimization may not always be the top goal.
Do you have performance objectives for the application(s)? Free capacity, response time, bandwidth usage etc. Without them you will find it difficult to justify why an improvement needs to be made. You need to be able to quantify the problem before people will spend money on it.
If the application is already meeting the requirements then your boss is probably right.
To ensure reserve capacity for future customers, you (and your boss) could set a threshold such that once performance comes within a certain percentage of the requirements, you need to investigate the problem. This is the time to request/push/negotiate for performing optimisations, as he will be aware of the problem and the risk of doing nothing. If you can recommend some changes to correct the problem that will cost less than the hardware, then hopefully he should agree to let you do it. Of course, the reverse still applies: if the total cost of the hardware upgrade is cheaper, then that is probably the better option.
Wait until buying RAM and CPU becomes more expensive than paying you to optimize the code. Then you have a case.
There's a break-even point between improving software vs. improving hardware. For desktop systems, a good threshold is the time it takes to purchase a replacement:
If that machine fails, can you go to the next mall and buy a replacement within an hour?
As long as you can answer yes, optimization is futile - an investment into the future maybe, but is that really your decision alone?
A similar rule for web sites might be "how long would it take to get a new hosting contract on that?"
This isn't perfect, it covers just a single aspect of scalability. I like it because it's an easy to understand aspect outside of technical concerns.
Of course, you should have more material in case that aspect gets brushed aside. It helps to ask what the company's/boss's plans are for a few key scenarios, e.g. contingency plans for DDoS attacks, for "rush events", etc. Having some data to back that up, and some key stories ("company foo was mentioned on a popular night show, the web server broke down under the requests, business was lost, potential customers were angered"), helps a lot.
The key thing to realize is that (judging from your description alone) it is your responsibility to foresee such events and provide the technical solutions, but it is not your responsibility to make those decisions.
Also, getting a written statement from your boss that "five hours of downtime isn't a problem, get back to work on the important things!" is good CYA.
About the same way one would explain to a developer that code/resource optimization is unimportant...
I guess this problem arises because your way of measuring importance is different from his.
His job is to care about money and make sure he gets the most out of it. And frankly, servers are cheaper than developers, and managed servers are cheaper than admins; the ugly truth is that many people think that way. Personally, I think PHP isn't really the right choice for the performance-critical parts of an app, and neither are RDBMSs, but nearly every managed server runs PHP and MySQL, and business clients prefer an app they can run everywhere, based on a widely known technology, which turns out to be PHP and MySQL on the server side. The world is a cruel place and the truth is ugly. Now when it comes to you, you have only 2 choices:
live with it
work with people who are more like you, and who share the same ambitions
Your boss does his job right, from his perspective. What you do is a waste of company money, because the company's goal is not to write beautiful and highly optimized code; whatever your company does, in the end you only provide a tool, and the value of a tool is measured by the use it provides in relation to the money it costs.
If your optimizations were economically reasonable, then the easiest way would be to stop optimizing and wait for the moment your boss comes to you asking for a speedup.
I suggest you either look for a new job, or focus on other important things:
good design and use of best practices such as KISS, DRY, SOLID and GRASP
scalable architecture (doesn't mean you have to actually implement it, just have an idea how the whole thing could scale well)
maintainability, including readability and good documentation
flexibility & extensibility
robustness & correctness
do not reinvent the wheel and use the right tools for a job - use adequate technologies, libraries and frameworks
fast development
These goals are important for the present and the future of a software project (also when it comes to money), and they preserve the possibility of doing optimization when it is actually needed.
This probably isn't the answer you wanted, but I hope it helps...
Optimizing is often really just a waste of time, what is more important is optimizing for maintainability:
- keep the code simple (see: KISS)
- move duplicate code into functions (see: DRY)
- optimize comments (as few comments as possible, as many as necessary)
- remove unused code
Those changes do not necessarily affect the speed of the program but they reduce the time needed to implement changes and fix bugs so they have real business value as they save expensive developer time.
If you're having to explain why there is a need for optimisation, then you're already on a low-win-rate path. Unfortunately, the best argument arrives when performance becomes a noticeable problem (e.g. customers complaining) - people sit up and listen when they're getting constant earache about it. And often the cheapest/best solution really is to throw hardware at it.
For me, however, performance is important: when working on highly scalable systems, you need to spend time on optimisations. It should at the very least be on your mind while you're developing, and a lot of it can be absorbed into the dev process (as opposed to finishing first, then going back to optimise).
e.g.
- What happens if my database has 100,000,000 records instead of 100,000? (you can easily fool SQL Server into thinking there are more records in a table than there actually are to see what the effect on the execution plan would be).
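A hedged sketch of scripting that row-count trick, assuming pyodbc, a throwaway test database, and a hypothetical dbo.Orders table; UPDATE STATISTICS ... WITH ROWCOUNT/PAGECOUNT is an internal-use option, so treat this as experimentation only, never on production:

```python
import pyodbc

# Connection string and table are hypothetical; point this at a sandbox only.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=testbox;"
    "DATABASE=SandboxDb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Tell the optimizer the table holds 100,000,000 rows instead of 100,000,
# then inspect the estimated plan of a representative query (e.g. in SSMS).
cur.execute("UPDATE STATISTICS dbo.Orders WITH ROWCOUNT = 100000000, PAGECOUNT = 1000000;")

# When done, rebuild real statistics to undo the fake numbers.
cur.execute("UPDATE STATISTICS dbo.Orders WITH FULLSCAN;")
```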
If you're being proactive about optimisation (which I personally think is good), then you need to try and come up with a real-world meaning for your boss, e.g. "if we optimise our site by cutting down the page size, we could save x amount of bandwidth a month, saving us xx much money".
You do have to be careful about micro-optimisations though as you can micro-optimise until the cows come home, for much lower benefit.
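For what it's worth, the bandwidth argument is only a few lines of arithmetic; all the numbers below are hypothetical placeholders:

```python
page_kb_before, page_kb_after = 450, 300   # page size before/after the diet
views_per_month = 2_000_000
cost_per_gb = 0.08                         # hypothetical bandwidth price, $/GB

saved_gb = (page_kb_before - page_kb_after) * views_per_month / 1024 / 1024
print(f"~{saved_gb:,.0f} GB/month saved, ~${saved_gb * cost_per_gb:,.2f}/month")
```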
You could just wait until your boss says "Why is it taking so long?" Then you could say "Hey, boss, we could buy 10 times as much hardware, or I could spend a couple days and make it 10 times faster". Here's an example.
Added: If your boss says "Well, why did you make it slow in the first place?" I would have no trouble explaining that he could hire God's Right Hand Programmer, and the code as first written would still have room for optimizing.
If you are on a relational database (and you're not working with trivial applications), you cannot afford to write bad database queries. Since you are using a red-gate tag, I assume you are on SQL Server. If you design a bad schema (including bad queries) on SQL Server, then once you start getting moderate traffic you will be forced to scale in horrendous ways, and once you get a high level of traffic you will be screwed - and will need to hire a very expensive consultant/DBA to undo the mess. Use this argument with your boss :D.
What I am trying to get at is that even though SQL is declarative, and it doesn't appear that you have to pay attention to performance, it doesn't really work that way. You have to design your data model with the way it is going to be used in mind.
If you are writing business logic in application code... well, that doesn't matter much - you can fix it later with less trouble if performance becomes an issue. Most business logic these days is stateless (especially if it's web-based or web-service based), meaning you can just throw more machines at the problem; if it's not, it's just a bit harder.
Every time something takes longer than it would have if the refactoring had been done ahead of time, make sure your boss hears about it.
IMO, it's less a case of convincing him that optimizations need to be done (and based on your boss's responses, you're not going to win this fight), and more a case of always building refactoring time into your estimates.
Do a cost analysis...
This will take me X hours and will mean every project needs 50% less data storage. Based on us doing 2 projects a year using on average 100 GB of space, that will save $Y after a year and $Z after 3 years.
Crude example, but it suffices.
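Filling in the crude example with invented placeholders (the question's X/$Y/$Z stay unspecified; these numbers exist only to show the shape of the calculation):

```python
one_off_cost = 40 * 60            # X hours of work at an hourly rate
projects_per_year = 2
gb_saved_per_project = 100 * 0.5  # "50% less data storage" on 100 GB
cost_per_gb_year = 3.0            # hypothetical managed-storage price

for years in (1, 3):
    # linear approximation; real savings accumulate as more projects ship
    saved = projects_per_year * gb_saved_per_project * years * cost_per_gb_year
    print(f"after {years} year(s): ${saved:,.0f} saved vs ${one_off_cost:,} invested")
```

Note that with these placeholder prices the optimization does not pay off, which is just as useful an outcome: the arithmetic, not idealism, settles the argument.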
Ultimately you and your boss are both wrong: you think code has to be fast (primarily) to satisfy your idealism, while he doesn't care at all because he wants to deliver products.

Is Performance Always Important?

Since I am a Lone Developer, I have to think about every aspect of the systems I am working on. Lately I've been thinking about performance of my two websites, and ways to improve it. Sites like StackOverflow proclaim, "performance is a feature." However, "premature optimization is the root of all evil," and none of my customers have complained yet about the sites' performance.
My question is, is performance always important? Should performance always be a feature?
Note: I don't think this question is the same as this one, as that poster is asking when to consider performance and I am asking if the answer to that question is always, and if so, why. I also don't think this question should be CW, as I believe there is an answer and reasoning for that answer.
Adequate performance is always important.
Absolute fastest possible performance is almost never important.
It's always worth keeping an eye on performance and being aware of anything outrageously non-optimal that you're doing (particularly at a design/architecture level) but that's not the same as micro-optimising every line of code.
Performance != Optimization.
Performance is a feature indeed, but premature optimization will cost you time and will not yield the same result as when you optimize the parts that need optimization. And you can't really know which parts need optimization until you can actually profile something.
Performance is the feature your clients will not tell you about if it's missing, unless it's really painfully slow and they're forced to use your product. Existing customers may report it in the end, but new customers will simply not bother if the performance isn't there.
You need to know what performance you need, and formulate it as a requirement. Then, you have to meet your own requirement.
That 'root of all evil' quote is almost always misused and misunderstood.
Designing your application to perform well can mostly be done with just good design. Good design != premature optimization, and it's utterly ridiculous to go off writing crap code and blowing off a better design as an 'evil' waste. Now, I'm not specifically talking about you here... but I see people do this a lot.
It usually saves you time to do a good job on the design. If you emphasize that, you'll get better at it... and get faster and faster at writing systems that perform well from the start.
Understanding what kinds of structures and access methods work best in certain situations is key here.
Sure, if your app becomes truly massive or has insane speed requirements, you may find yourself doing tricked-out optimizations that make your code uglier or harder to maintain... and it would be wrong to do those things before you need to.
But that is absolutely NOT the same thing as making an effort to understand and use the right algorithms or data patterns or whatever in the first place.
Your users are probably not going to complain about bad performance if it's bearable. They possibly wouldn't even know it could be faster. Reacting to complaints as a primary driver is a bad way to operate. Sure, you need to address complaints you receive... but a lack of them does not mean there isn't a problem. The fact that you are considering improving performance is a bit of an indicator right there. Was it just a whim, or is some part of you telling you it should be better? Why did you consider improving it?
Just don't go crazy doing unnecessary stuff.
Keep performance in mind but given your situation it would be unwise to spend too much time up front on it.
Performance is important but it's often hard to know where your bottleneck will be. Therefore I'd suggest planning to dedicate some time to this feature once you've got something to work with.
Thus you need to set up metrics that are important to your clients and to you, and keep and analyse these measurements. Then estimate how long each improvement would take and how much it would cost to implement. Now you can aim at getting as much bang for your buck/time as possible.
If it's a web app, it would be wise to note your page size and performance using Firebug + YSlow and/or Google Page Speed. Again, know what applies to a small site like yours versus things that only apply to Yahoo and Google.
Jackson’s Rules of Optimization:
Rule 1. Don’t do it.
Rule 2 (for experts only). Don’t do it yet — that is, not until you have a perfectly clear and unoptimized solution.
—M. A. Jackson
Extracted from Code Complete, 2nd edition.
To give a generalized answer to a general question:
First make it work, then make it right, then make it fast.
http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast
This puts a more constructive perspective on "premature optimization is the root of all evil".
So to parallel Jon Skeet's answer, adequate performance (as part of making something work, and making it right) is always important. Even then it can often be addressed after other functionality.
Jon Skeet's 'adequate' nails it, with the additional provision that for a library you don't yet know what's adequate, so it's better to err on the safe side.
It is one of the many stakes you must not get wrong, but the quality of your app is largely determined by the weakest link.
Performance is definitely always important in a certain sense - maybe not the one you mean: namely in all phases of development.
In Big O notation, what's inside the parentheses is largely decided by design - both component isolation and data storage. The choice of algorithm will usually only affect best/worst case behavior (unless you start with decidedly substandard algorithms). Code optimizations will mostly affect the constant factor, which shouldn't be neglected either.
But that's true for all aspects of code: in any stage, you have a good chance to fail any aspect - stability, maintainability, compatibility etc. Performance needs to be balanced, so that no aspect is left behind.
In most applications, 90% or more of execution time is spent in 10% or less of the code. There is usually little use in optimizing anything outside those 10%.
Performance is only important to the extent that developing the performance improvement takes less time than the total amount of time it will save the user(s).
The result is that if you're developing something for millions, then yes, it's important to save them time. If you're coding up a tool for your own use, it might be more trouble than it's worth to save a minute or even an hour or more.
(This is clearly not a rule set in stone; there are times when performance is truly critical no matter how much development time it takes.)
There should be a balance to everything. Cost (or time to develop) vs Performance for instance. More performance = more cost. If a requirement of the system being built is high performance then the cost should not matter, but if cost is a factor then you optimize within reason. After a while, your return on investment suffers in that more performance does not bring in more returns.
The importance of performance is IMHO highly correlated with your problem set. If you are creating a site with an expectation of heavy load and a lot of server-side processing, then you might want to put more time into performance (otherwise your site might end up unusable). However, for most applications the time put into optimizing the performance of a website is not going to pay off - users won't notice the difference.
So I guess it breaks down to this:
Will users notice the improvements?
How does this improvement compare to competing sites?
If users will notice AND the improvement would be enough to differentiate you from the competition, then performance is an important feature - otherwise not so much. (To a point - I don't recommend ignoring it entirely; you don't want your site to turtle along, after all.)
No. Fast enough is generally good enough.
It's not necessarily true, however, that your client's ideas about "fast enough" should trump your own. If you think it's fast enough and your client doesn't, then yes, you need to accommodate your ideas to theirs. But if your client thinks it's fast enough and you don't, you should seriously consider going with your opinion, not theirs (since you may be more knowledgeable about performance standards in the wider world).
How important performance is depends largely and foremost on what you do.
For example, if you write a library that can be used in any environment, this can hardly ever have too much performance. In some environments, a 10% performance advantage can be a major feature for a library.
If you, OTOH, write an application, there's always a point where it is fast enough. Users will neither realize nor care whether a pressed button reacts within 0.05 or 0.2 seconds - even though that's a factor of 4.
However, it is always easier to get working code faster, than it is to get fast code working.
No. Performance is not important.
Lack of performance is important.
Performance is something to be designed in from the outset, not tacked on at the end. For the past 15 years I have been working in the performance engineering space, and the cause of most of the project failures I work on is a lack of requirements around performance. A couple of posts have noted "fast enough" and asked whether your expectation matches your clients'; but what about when your client, your architectural team, your platform engineering team, your functional test team, your performance test team and your operations team all have different expectations on performance, none of which have been committed to stone and measured against? Bad magic, to be certain.
Capture those expectations on the part of your clients. Commit them to specific, objective, measurable requirements that you can evaluate at each stage of production of your software. Expectations may not be uniform: one section of your app/code may need to be faster than others, and not every customer will have the same idea of what is acceptable. Having this information will force you to confront design and implementation decisions that you may have overlooked in the past, and it will result in a product that is a better match for your clients' expectations.

How to measure software development performance? [closed]

I am looking for some ways to measure the performance of a software development team. Is it a good idea to use the build tool? We use Hudson as an automatic build tool. I wonder if I can take the information from Hudson reports and derive from it the progress of each of the programmers.
The main problem with performance metrics like this is that humans are VERY good at gaming any system that measures their own performance, in order to maximize that exact metric - usually at the expense of something else that is valuable.
Let's say we do use the Hudson build to gather stats on programmer output. What could you look for, and what would be the unintended side effects of measuring it once programmers are clued in?
Lines of code (developers just churn out mountains of boilerplate code, and other needless overengineering, or simply just inline every damn method)
Unit test failures (don't write any unit tests, then they won't fail)
Unit test coverage (write weak tests that exercise the code, but don't really test it properly)
Number of bugs found in their code (don't do any coding, then you won't get bugs)
Number of bugs fixed (choose the easy/trivial bugs to work on)
Actual time to finish a task based against their own estimate (estimate higher to give more room)
And it goes on.
The point is, no matter what you measure, humans (not just programmers) get very good at optimizing to meet exactly that thing.
So how should you look at the performance of your developers? Well, that's hard. And it involves human managers, who are good at understanding people (and the BS they pull), and can look at each person subjectively in the context of who/where/what they are to figure out if they are doing a good job or not.
What you do once you've figured out who is/isn't performing is a whole different question though.
(I can't take credit for this line of thinking. It's originally from Joel Spolsky. Here and here)
Do NOT measure the performance of each individual programmer simply using the build tool. You can measure the team as a whole, sure, or you can certainly measure the progress of each programmer, but you cannot measure their performance with such a tool. Some modules are more complicated than others, some programmers are tasked with other projects, etc. It's not a recommended way of doing this, and it will encourage programmers to write sloppy code so that it looks like they did the most work.
No.
Metrics like that are doomed to failure. Different people work on different parts of the code, on different classes of problem, and absolute measurements are misleading at best.
The way to measure developer performance is to have excellent managers that do their job well, have good specs that accurately reflect requirements, and track everyone's progress carefully against those specs.
It's hard to do right. A software solution won't work.
I think this needs a very careful approach when deciding how to measure developer performance, as most of the traditional methods (lines of code, number of check-ins, number of bugs fixed, etc.) have proven subjective under today's software engineering concepts. We need to value a team-performance approach rather than measuring individual KPIs in a project. However, working in a commercial development environment, it is important to keep track of and a close eye on the following factors of individual developers:
Code review comments – for each project, we can decide the number of code reviews to be conducted in a given period. Based on the code reviews, individuals get remarks about improving their coding standards. Recurring issues in code reviews of the same individual's code need to be brought to attention. You can use automated code review tools or manual code reviews.
Test coverage and completeness of tests – the % to be covered needs to be decided upfront, and if a certain developer often fails to meet it, that needs to be taken care of.
Willingness to sign in to complex tasks and deliver them without much struggle
Achieving what’s defined as “Done” in a user story
Mastery level of each technical area.
With the agile approach in some projects, the measurements of the development team and the expected performance are decided based on the releases. At each release planning session, different 'contracts' are negotiated with the team members for the expected performance. I find this approach more successful, as there is no reason to adhere to UI-related measurements in a release whose main deliverable is a complex algorithm.
I would NOT recommend using build tool information as a way to measure the performance/progress of software developers. Some of the confounding problems: possibly one task is considerably harder than another; possibly one task is much more involved in "design space" than "implementation space"; possibly (probably) the more efficient solution is the better solution, but that better solution contributes fewer lines of code than a terribly inefficient one which provides many, many more lines of code; etc.
Speaking of KPIs for software developers: www.smartKPIs.com may be a good resource for you. It contains a user-friendly library of well-documented performance measures. At the moment it lists over 3300 KPI examples, grouped in 73 functional areas, as well as 83 industries and sub-categories.
KPI examples for software developers are available on this page: www.smartKPIs.com - application development. They include, but are not limited to:
Defects removal efficiency
Data redundancy
In addition to examples of performance measures, www.smartKPIs.com also contains a catalogue of performance reports that illustrate the use of KPIs in practice.
Examples of such reports for information technology are available on: www.smartKPIs.com - KPIs in practice - information technology
The website is updated daily with new content, so check it from time to time for additional content.
Please note that while examples of performance measures are useful to inform decisions, each performance measure needs to be selected and customized based on the objectives and priorities of each organisation.
You would probably do better measuring how well your team tracks to schedules. If a team member (or the entire team) is consistently late, you will need to work with them to improve performance.
Don't short-cut or look for quick and easy ways to measure the performance/progress of developers. There are many, many factors that affect a developer's output. I've seen a lot of people try various metrics...
Lines of code produced - encourages developers to churn out inefficient garbage
Complexity measures - encourages over analysis and refactoring
Number of bugs produced - encourages people to seek out really simple tasks and to hate your testers
... the list goes on.
When reviewing a developer you really need to look at how good their work is, and define "good" in the context of what the company needs and what situations/positions the company has put that individual in. Progress should be evaluated with equal consideration and thought.
There are many different ways of doing this; entire books have been written on the subject. You could use reports from Hudson, but I think that would lead to misinformation and crude results. Really, you need a task-tracking methodology.
Check how many lines of code each has written.
Then fire the bottom 70%.. NO 90%!... EVERY DAY!
(for the folks that aren't sure, YES, I am joking. Serious answer here)
We get 360 feedback from everyone on the team. If all your team members think you are crap, then you probably are.
There is a common mistake that many businesses make when setting up their release management tool. The Salesforce release management toolkit is one of the best ones available in the market today, but if you do not follow the vital steps of setting it up, you will definitely have some very bad results. You will get to use it but not to its full capacity. Establishing release management processes in isolation from the business processes is one of the worst mistakes to make. Release management tools go hand in hand with the enterprise strategy, objectives, governance, change management plus some other aspects. The processes of release management need to be formed in such a way that everyone in the business is on the same page.
Goals of release management
The main goal of release management is to have a consistent set of reliable and repeatable processes that are resource independent. This enables the achievement of the most favorable business value while at the same time optimizing the utilization of resources available. Considering that most organizations focus on running short, high-yield business projects, it is essential for optimization of the delivery value chain of the application to make certain that there are no holdups in the delivery of the business value.
Take for instance the force.com migration toolkit, as this tool has proven to be great in governance. A release management tool should allow for optimal visibility and accountability in governance.
Processes and release cycles
The release management processes must be consistent for the whole business. It is necessary to have streamlined and standardized processes across the various tool users. This is because they will be using the same platform and resources that enable efficient completion of their tasks. Having different processes for different divisions of your business can lead to grievous failures in tool management. The different sets of users will need to have visibility into what the others are doing. As aforementioned, visibility is of great importance in any business process.
When it comes to the release cycles, it is also imperative to have one centralized system that will track all the requirements of the different sets of users. It is also necessary to have this system centralized so that software development teams get insight into the features and changes requested by the business. Requests have to become priorities to make sure that the business gets to enjoy maximum benefit. Having a steering team is important because it is involved in the reviewing of business requirements plus also prioritizing the most appropriate changes that the business needs to make.
The changes that should happen to the Salesforce system can be very tricky and therefore having a regular meet up between the business and IT is good. This will help to determine the best changes to make to the system that will benefit the business. By considering the cost and value of implementing a feature, the steering committee has the task of deciding on the most important feature changes to make.
Here is also some good research: http://intersog.com/blog/tech-tips/how-to-manage-millennials-on-software-development-teams
This is an old question, but something you can still do is borrow velocity from agile software development: assign a weight to each task and then calculate how much "weight" you complete in each sprint (or iteration, or whatever development life cycle you use). Of course this goes hand in hand with the fact that, as a commenter mentioned, you need to actively keep track yourself of whether your developers are working or chatting online.
If you know your developers are working responsibly, then you can rely on that velocity to give you an estimate of how much work the team can do. If at any iteration this number drops considerably, then either the work was poorly estimated or the team worked less.
Ultimately, the use of KPIs together with velocity can give you per-developer (or per-team) insights on performance.
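A minimal sketch of that velocity check, with invented numbers; the 25% threshold for a "considerable" drop is a judgment call, not a standard:

```python
velocities = [34, 38, 36, 21]   # completed story points per sprint (hypothetical)

baseline = sum(velocities[:-1]) / len(velocities[:-1])
latest = velocities[-1]
if latest < 0.75 * baseline:
    print(f"velocity {latest} vs baseline {baseline:.0f}: poor estimate or less work done")
else:
    print(f"velocity {latest} is within the normal range")
```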
Typically, directly using metrics for performance measurement is considered a Bad Idea, and one of the easy ways to run a team into the ground.
Now, you can use metrics like % of projects completed on-time, % of churn as code goes toward completion, etc...it's a wide field.
Here's an example:
60% of mission-critical bugs were written by Joe. That's a simple, straightforward metric. Fire Joe, right?
But wait, there's more!
Joe is the Senior Developer. He's the only guy trusted to write ultra-reliable code, every time. He's written about 80% of the mission-critical software, because he's the best.
Metrics are a bad measurement of developers.
I'd like to share my experience and a very valuable process I learned for measuring team performance. I must admit I had fallen into tracking KPIs simply because most departments did the same, not really for the insight, until I was given the responsibility of evaluating developer performance; after a good deal of reading I arrived at the following solution.
On every project, I would engage the team in a discussion of the project requirements and involve everyone, so the whole team knows what is to be done. In the same discussion, through collaboration, we would break the project into tasks and weight those tasks. Previously we would treat project completion as 100%, with each task contributing a percentage. That worked for a while, but it was not the best solution. Now we base the tasks on weights, or points to be exact, and use relative measurement to compare tasks and differentiate the weights. For example, say there is a requirement to develop a web form to gather user data.
The tasks would go something like:
1. User Interface - 2 Points
2. Database CRUD - 5 Points
3. Validation - 4 Points
4. Design (css) - 3 Points
With this strategy we can pinpoint a weekly approximation of how much we have completed and what is pending for the task force. We can also pinpoint who has performed best.
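As a sketch, the web-form example above reduces to a few lines; the point weights are the ones given, while the completion states are invented:

```python
tasks = {                         # name: (points, done?)
    "User Interface": (2, True),
    "Database CRUD":  (5, True),
    "Validation":     (4, False),
    "Design (css)":   (3, False),
}
total = sum(points for points, _ in tasks.values())
done = sum(points for points, finished in tasks.values() if finished)
print(f"{done}/{total} points complete ({100 * done / total:.0f}%)")
for name, (points, finished) in tasks.items():
    if not finished:
        print(f"  pending: {name} ({points} points)")
```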
I must admit that I still face some challenges with this strategy, such as the fact that not every developer is comfortable with every technology. Some are excited to learn new technologies simply because they see a high percentage of the points falling in that area; others just do what they can.
Remember: do not track a KPI for its own sake; track it for its insight.

Software Rewrite-vs-Running Cost Analysis

The IT department I work in as a programmer revolves around a 30+ year old code base (Fortran and C). The code is in poor condition, partially as a result of 30+ years of ad-hoc, poorly thought out changes, but I also suspect a lot of it has to do with the capabilities of the programmers who made the changes (and who, incidentally, are still around).
The business that depends on the software operates 363 days a year and 20 hours a day. Unfortunately there are numerous outages. This is the first place I have worked where there are developers on call to apply operational code fixes to production systems. When I first arrived, there was actually a copy of the source code and development tools on the production servers so that on-the-fly changes could be applied; thankfully that practice has now been stopped.
I have hinted a couple of times to management that the costs of the downtime, the developers on call, the extra operational staff, the unsatisfied customers, etc. are costing the business a lot more in the medium (and possibly even the short) term than it would cost to launch a wholehearted effort to rewrite/refactor/replace the whole thing (the code base is about 300k lines).
Ideally there would be some external consultancy that could come in, run the rule over the quality of the code, and weigh the costs of keeping it running vs. rewriting/refactoring/replacing it. The question I have is: how should a business go about doing that kind of cost analysis on software AND be able to have confidence in that analysis? The first IT consultancy down the street may claim to be able to do it, but how could management be made to feel more comfortable with its verdict than with what they are being told by internal staff?
We recently decided to completely rewrite large portions of our business code from scratch, and it has not gone as well as we had hoped. I've seen a lot of quotes saying you should never try to rewrite anything from scratch, and now I see why. I would recommend starting small - don't try to rewrite the whole thing at once. Identify the large problem areas and focus on refactoring small portions of the system at a time. Since there is 30+ years worth of work in the system, it will take a long time to get it back to a reasonable state. We had about 5-8 years worth of work to rewrite, and it has been difficult. I can't imagine 30+ years of work!
First, the profile of the consultant you need is very specific. Unless you can find someone who worked in a similar domain with the same languages, don't hire him.
Second, there's a 99% probability (I like dramatic numbers) that the analysis will go as follows:
The consultant explores the application.
The consultant understands 10% of the application.
Time's up; time for the report.
The consultant advises a complete rewrite (no refactoring, a plain rewrite).
So you may as well save yourself what the consultant would cost.
You have only two solutions here:
Stick with the actual source code, but establish proper methods for fixing problems, so that you have a very long-term refactoring effort carried out progressively by those who know the application.
Get a secondary team to make a new application to replace the old one
If I talk about a secondary team, it's because you cannot just bring in one architect to build the new application and have the old team work with him:
They're too busy on the old application.
There will be friction, because the newcomer will undoubtedly underestimate the task at hand.
I talk from experience, believe me.
If you go the "new application" way, don't put your hopes too high. You'll end up with an application that has less than half the functionality of the current one, simply because you cannot cram 30+ years of special cases and exceptional-situation fixes into freshly designed software.
Oh, also, if your developers happen to tell you they have a plan, by all means, hear them out. They most probably know what they are talking about.
The first thing that comes to mind is that you are prematurely addressing the rewrite/refactor/replace argument. The first two steps I would recommend are:
Unit tests
QA
It's well within engineering scope to implement these. Unit tests are an essential preliminary step before any reasonable refactor or rewrite can take place. By 'unit test' I mean wrapping each function call with corresponding code that proves it works for all known conditions. In complex retrofits this may not happen at the most granular level, but any automated tests will help immensely.
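A minimal sketch of one such wrapper, assuming a hypothetical legacy routine compute_tariff; the test records what the code does today, so any refactor can be checked against it:

```python
import unittest

def compute_tariff(units, rate_class):
    # stand-in for a 30-year-old routine, called exactly as production calls it
    return units * (1.2 if rate_class == "B" else 1.0)

class TestComputeTariff(unittest.TestCase):
    def test_known_conditions(self):
        # Characterization test: assert current behavior, correct or not,
        # so any change that alters it fails loudly during refactoring.
        self.assertEqual(compute_tariff(100, "A"), 100.0)
        self.assertEqual(compute_tariff(100, "B"), 120.0)

if __name__ == "__main__":
    unittest.main()
```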
And QA - have an independent (and aggressive) quality assurance team that rigorously tests beta releases before production. Their test plans and test procedures become essential for any kind of replacement effort.
Once you've got the code under control, then you are in a position where the business can reasonably consider massive changes.
Just a note about your comment on external consultants: no consultancy will ever care enough about the code to provide realistic quality assurance. QA ends up joined at the hip with the business, defending the company's bottom line. It's ultimately an internal function, and an external consultant can't do much more than get you started.
I think that your description provides all of the necessary information on code quality (lack thereof). The fact that so many support resources are required also indicates the high costs involved with maintaining the existing system.
As I answered here, a good approach to consider is refactoring one piece of the system at a time until everything works at an acceptable level. I agree with Joel regarding not throwing away existing code (see Things You Should Never Do). Parts of your code work, so you should leave those in place whenever possible and focus on the sections that lead to downtime.
Andy also makes a great point about starting small as well.
Another thing to try is reviewing the processes around the system. When you do this, try to determine which failure situations are caused directly or indirectly by user action, and whether there are configuration or environment problems. If you are having trouble fixing the code directly, you can still prop it up by dealing with these external issues more effectively.
Read the book Working Effectively with Legacy Code (also see the short PDF version) and surround the code with automated tests, as instructed in that book.
Refactor the system little by little. If you rewrite some parts of the code, do it a small subsystem at a time. Don't try to make a Grand Redesign.
The code has been around for 30 years?
Development paradigms have shifted substantially in the last three decades in many ways, and most relevant to your predicament, I feel, is the amount of time (in man-days) required to create something that does input->process->output.
300,000 lines of code 30 years ago could probably fit into 100,000 lines or fewer today, expending fewer man-hours(?). This could seem optimistic/ridiculous to some, but on the other hand it is achievable, depending on the type of application in question. You have given no indication of the classification of the system: is it a real-time manufacturing process control system of sorts, with sensors and actuators tied to it? An airline booking system? Does it post-process some backlog of data? In other words, could it be rebuilt quickly, in something like Java, with an aggressive, smallish team? Have the requirements been documented, and if so, do they need updating or redeveloping from scratch? Is human safety a factor?
Just as a quick sanity check, I think whether or not you should rebuild depends on (any order means the same thing):
Number of code dudes required.
Level of expertise of said dudes.
Which languages do not fit.
Which languages do fit.
How much it costs to use the chosen language(s) in terms of hardware and software.
How much the business depends on this to stay alive.
Is it really too much downtime, or are you just nitpicking? (Maybe they really don't care, but pretend to.)
Good luck with that!

Can you estimate an application's performance before testing?

It's a tricky question I was asked the other day... We're working on a pretty complex telephony (SIP) application with mixed C++ and PHP code with MySQL databases and several open source components.
A telecom engineer asked us to estimate the performance of the application (which is not ready yet). He said something like: "Well, you know how many packets can pass through the Linux kernel per second, plus you might know how quick your app is, so tell me how many calls will pass through your stuff per second."
Seems nonsense to me, as there are a million scenarios that might happen (well, literally...)
However... is there a way to estimate application performance (knowing the hardware it will run on, being able to run standard benchmarks on it, etc) before actual testing?
You certainly can bound the problem with upper (max throughput) limits. There is nothing nonsensical about that. In fact, not knowing that stuff indicates a pretty haphazard approach to a problem - especially in the telephony world.
You can work through the problem yourself - what is the minimum "work" you have to accomplish for a transaction or whatever unit of task you have in your app?
Some messages to and from, some processing and a database hit for example? Getting information on the individual pieces will give you an idea of the fastest possible throughput. If you load up the system and see significantly lower performance then you can take time to figure out where you are possibly losing throughput with inefficient algorithms, etc.
EDIT
To do this exercise you need to know all the steps your app does for each use case. Then you can identify the max throughput for each use case. You should definitely know this stuff prior to release and going live.
I'm ignoring the worst case analysis as that - as you point out - is quite a bit harder.
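To show the shape of the exercise, here is a toy upper-bound calculation; every per-step cost is an invented placeholder that you would replace with measurements from your own stack:

```python
steps_ms = {                       # minimum work per call, milliseconds (hypothetical)
    "parse SIP message":   0.20,
    "business logic":      0.50,
    "database hit":        1.50,
    "build/send response": 0.30,
}
per_call_ms = sum(steps_ms.values())
workers = 8                        # concurrent worker threads/processes

max_calls_per_sec = workers * 1000 / per_call_ms
print(f"{per_call_ms:.2f} ms/call -> upper bound ~{max_calls_per_sec:,.0f} calls/sec")
# Measuring far below this bound points at contention or a bad algorithm,
# not at the hardware.
```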
See Capacity Planning for Web Performance: Metrics, Models, and Methods. There are also some tools that can do this sort of discrete event simulation:
Hyperformix
SimPy
WikiPedia list of simulation tools
This stuff ain't easy, and the commercial tools will cost ya. The Capacity Planning book comes with a CD with lots of Excel workbook templates and examples of models that can jump start you.
Good luck :)
If you really have to answer this you could say something like this:
"I don't know off the top of my head. I am will to estimate this for you but it will take time. Obviously the accuracy of my answer depends upon how much effort (I.E. time) I put into calculating my estimate. How much time should I put into calculating my estimate?"
Put the burden back on them. If they really want an accurate answer, they're going to have to let you build at least some test applications that can simulate the actual environment.
You can spike to measure performance. Your whole system may not be working yet, but you know how the parts are intended to fit together. You can whip something up in a few hours that does the same kind of work as the final app will, across all the layers, and use it to measure performance of your design.
Remember: prototypes are broad, spikes are deep.
You should do the estimate. An estimate won't give you the right answer, but it will make you think about the problem. Right now it sounds like you're coding and hoping that everything will be OK - or you are in panic mode and feel you don't have time for estimates.
Spend some time thinking about it. Analyse the important use cases. Think about the memory you may need; think about database access; think about network access (local and remote). These will affect the performance of your system. Get the whole team together to do this.
Regularly measure your system's performance during development for these important use cases. Mock up components/other systems if you have to. Analyse the results: how do they compare to your estimate? Maybe components are memory-, database-, or network-bound. Maybe you need more memory, less database access, simpler queries, or caching. You don't have to make these changes straight away, but you will know how your system operates and what you need to do.
Result: Fewer nasty surprises at system test. Less panic as the release date looms.
You can definitely do capacity planning in advance, but the quality of the estimate will depend on the quality of the data available.
The best estimate comes from building the system in a test environment and running simulated workloads, then predicting capacity as a function of performance requirements and workload. These 3 form a prediction space - given any 2 of the 3, you can predict the third:
Given performance requirements and capacity (i.e. hardware) you can calculate the workload you can handle.
Given performance requirements and workload, you can calculate the capacity (i.e. hardware) that you need.
Given Workload and capacity, you can predict your expected performance.
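As a hedged sketch, the simplest version of that prediction space is the utilization law; the service time, target utilization, and workload below are all hypothetical:

```python
import math

service_time_s = 0.004     # measured time to handle one call (hypothetical)
target_util = 0.7          # performance requirement: stay under 70% busy

# Requirements + capacity -> workload you can handle:
servers = 4
max_rate = servers * target_util / service_time_s
print(f"{servers} servers handle up to {max_rate:,.0f} calls/sec")

# Requirements + workload -> capacity you need:
workload = 1_200           # expected calls/sec
needed = math.ceil(workload * service_time_s / target_util)
print(f"{workload:,} calls/sec needs {needed} servers")
```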
This is true in some domains, but unless you are an expert in the domain you don't have any idea. For example, I write code to control industrial robots. The speed is limited by the robot's motion, not by the execution speed of the code. Knowing how fast the robot is and how far it has to go, we can make fairly good estimates of "speed". I'd have no idea how to estimate times for your application.
