Is function point analysis still used for estimates? [closed] - project-management

In a discussion among colleagues I heard that function point analysis is not used nowadays, since it can go wrong for various reasons, and that a work breakdown structure (WBS) is commonly used instead.
Is that true?

Function points and WBS are two different, but related, things. Function points are a unit of measurement that can be used to determine complexity and work effort; a WBS (work breakdown structure) is an approach to defining the sub-units of a project (problem).
So, when starting a project with a given scope and set of expectations, you use a WBS to define the sub-units of the project (to a degree that is useful for you). Once you have well-defined sub-units, you can determine the work effort by assigning function points to each and applying your velocity (the number of function points per day that can be delivered, for example).
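A minimal sketch of that arithmetic, with made-up work packages, function-point counts, and velocity (none of these numbers come from the answer above):

    # WBS work packages sized in function points, divided by an assumed
    # team velocity (function points delivered per day) to get an effort estimate.
    wbs_items = {
        "customer maintenance forms": 12,   # function points (illustrative)
        "order entry": 20,
        "user/role administration": 8,
    }

    velocity_fp_per_day = 4.0  # assumed delivery rate

    total_fp = sum(wbs_items.values())
    estimated_days = total_fp / velocity_fp_per_day
    print(f"Total size: {total_fp} FP, estimated effort: {estimated_days:.1f} days")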
Here are some useful links:
Function Points via Wikipedia
Software Sizing During Requirements Analysis
WBS via Wikipedia
step by step WBS

I just took the introductory course of a project management program, and we didn't even look at "function point analysis" (I'm not sure what that is), but we spent a lot of time looking at WBS. All the following processes referred back to the WBS.

I can only talk about what I have seen. I have seen IBM use Function Points in Mexico to determine product size and pay subcontractors.
Regards...

I did function point analysis back in university in the early nineties, but it never came up again once I actually entered the work force.

Function Points have gone out of fashion, but they do work very well. I urge you to look further at the work of Capers Jones who has published some terrific books that help bring measurement and certainty to software projects.

FPA is not an effort-estimation technique itself. FPA is used to determine the 'functional size' of a piece of software's requirements, expressed in 'function points' (FP), and can thus be one of many input variables for a more complex effort-estimation model (such as COCOMO).
Do not mistake 'estimation' for 'planning'. WBS is a planning technique, which requires detailed knowledge of what to build and how to build it. In contrast, estimation is a forecast of expected effort/costs based on limited facts.
So it's not about 'one-or-the-other' but rather 'when-what'.
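To make the "functional size in, effort out" idea concrete, here is a minimal sketch: function points are backfired into KLOC with an assumed language gearing factor and fed into a basic COCOMO-style formula. The gearing factor and the organic-mode coefficients (a = 2.4, b = 1.05) are textbook illustrative values, not something this answer prescribes:

    # Functional size (FP) -> assumed KLOC -> basic COCOMO-style effort.
    def effort_person_months(function_points: float,
                             loc_per_fp: float = 53.0,  # assumed gearing factor, roughly Java-like
                             a: float = 2.4, b: float = 1.05) -> float:
        kloc = function_points * loc_per_fp / 1000.0
        return a * kloc ** b  # person-months

    print(round(effort_person_months(250), 1), "person-months for a 250 FP system")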

Related

Software Engineering - Time Schedules, Projects Estimates [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 6 years ago.
Improve this question
My question is: how can someone really estimate the duration of a project in terms of hours?
This has been my overall problem throughout my career.
Let's say you need a simple app with customer/orders forms and some inventory capacity along with user/role privileges.
If you went with Java/Spring etc., it would take X time.
If you went with ASP.NET, it would take Y time.
If you went with PHP or something else, you would get yet another time,
right?
Everyone says to break the project into smaller parts. I agree. But then how do you estimate that, say, the user/role part takes so much time?
The problem shows up when you implement things that have, let's say, been 'upgraded' (Spring 2.5 vs Spring 3.0 has a lot of differences, for example).
Or perhaps you run into things that you can't possibly know in advance, because they are new, and you are always meeting new things! Sometimes I have found odd behaviours, like some .NET presentation libraries that gave me grief and cost me two nights of my life over what was perhaps a typing error in some XML file...
Basically, what I see as a problem is that there is no standardized schedule for these things, no pre-estimated evaluation of time and cost. It is a service, not a product, so it scales with human ability.
But what are the average times for average things for average people?
We have lots of cost-estimation models that give us a schedule estimate, COCOMO 2 being a good example, although, more or less as you said, a typing error can still cost you two nights. So what's the solution? In my view:
- Expert judgement serves the industry best and will continue to do so in spite of the various cost-estimation techniques springing up, because that judgement more or less keeps in mind the overheads you faced doing such projects in the past.
- Some models give you a direct mapping between programming language LOC per function point, but that is still at a higher level and does not tell you what happens if a new framework is introduced (as you mentioned, switching from Spring 2.5 to 3.0).
- As you said, something new always keeps coming up, so I think that's where expert judgement again comes into play: experience can help you determine where you might end up working more.
So, in my view, it should be a mix of an estimation technique with expert judgement of the overheads that might occur in the project. More or less, you will end up with a figure very near your actual effort; a rough sketch of that combination follows.
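This is not a method the answer prescribes, just a minimal sketch of combining a model-based figure with expert-judgement overhead factors; every number and overhead name below is a made-up placeholder:

    # Start from whatever your estimation model produced, then scale by
    # expert-judgement overhead factors remembered from past projects.
    model_estimate_hours = 320.0

    overhead_factors = {
        "new framework version (e.g. Spring 2.5 -> 3.0)": 1.15,
        "unfamiliar third-party library": 1.10,
        "integration with a legacy system": 1.20,
    }

    adjusted = model_estimate_hours
    for reason, factor in overhead_factors.items():
        adjusted *= factor

    print(f"Model: {model_estimate_hours:.0f} h, with expert overheads: {adjusted:.0f} h")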
My rule of thumb is to keep the initial estimate language-agnostic: do NOT create different estimates for different implementation languages. That does not make sense, because a client expresses requirements to you in business terms.
That is the "what". The "how" is up to you, and the estimate given to a client should not hand that choice to them. The "how" is solely your domain as an analyst. You arrive at it by analyzing the demands of the business, then looking at the experience in your teams, your own time constraints, your infrastructure and so on. Then deliver an estimate in BOTH time and the tech you assume you will use.
Programming languages and algorithms are just mediators to achieve a business need. Therefore the client should not be deciding this, so you give only one estimate, where you as the analyst have made the decision on what to use, given the problem at hand and your resources.
I follow these three domains as a rule:
The "what" - Requirements, they should be small units of CLEAR scoping, they are supplied by the client
The "how" - Technical architecture, the actual languages and tech used, infrastructure, and team composition. Also the high level modeling of how the system should integrate and be composed (relationship of components), this is supplied by the analyst with help of his desired team
The "when" - This is your actual time component. This tells the client when to expect each part of the "what". When will requirement or feature x be delivered. This is deducted by the analyst, the team AND the client together based on the "how" and "what" together with the clients priorities. Usually i arrive at the sums based on what the team tells me (technical time), what i think (experience) and what the client wants (priority). The "when" is owned by all stakeholders (analyst/CM, team and client).
The forms to describe the above can vary, but i usually do high level use case-models that feed backlogs with technical detailing. These i then put a time estimate on, first with the team, then i revise with the client.
The same backlog can then be used to drive development sprints...
Planning and estimation are among the most difficult parts of software engineering. One approach:
- Split the work into smaller parts
- Invite some team members (about 5-8)
- Discuss what is meant by each item
- Have each of them write down how many hours each item will take; don't let them discuss it or look at the others' numbers
- Then, for each item, check the average and discuss any items with a lot of variance (risks?)
- Remove the lowest and highest value per item
- Take the average of the rest (see the sketch below)
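A minimal sketch of that last averaging step (the hour values are purely illustrative):

    # Trimmed average of the individual estimates for one work item:
    # drop the lowest and the highest value, then average the rest.
    def trimmed_average(hours: list[float]) -> float:
        if len(hours) < 3:
            return sum(hours) / len(hours)
        trimmed = sorted(hours)[1:-1]
        return sum(trimmed) / len(trimmed)

    estimates = [4, 6, 5, 16, 5, 6]   # hours from six team members
    print(f"Planning value: {trimmed_average(estimates):.1f} h")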
This works best for work that is based on experience; for new projects with completely new things to do, it is always more difficult to plan.
Hope this helps.
There is no short, easy answer to your question. The best I can do is refer you to a paper that discusses some of the different models and techniques used for cost analysis:
Software Cost Estimation - F J Heemstra

Inventory, supply chain, procurement management and computer science - general, high-level question [closed]

I would like to ask a rather general, high-level, introductory kind of question regarding inventory management.
So, I was wondering whether anyone on SO has experience or knowledge of, or has worked in, inventory, supply chain, or procurement management settings. What typical problems or challenges might one find in this field, and how can computer science, mainly algorithms, data structures and optimization, be employed to deal with such challenges/problems?
Could this be relevant to operations research, queueing theory, etc.? I do not work directly in this field, but would need to know how CS is applied in these domains.
An internet search produces some vague results, so I would greatly appreciate any prior-experience insight, educated advice, specific online resources, or even examples. I hope it is ok to ask such a high level question here.
Many thanks in advance
I have some experience with warehouse management systems. Much of it is not very sophisticated from a CS point of view, but there are some juicy optimization problems where CS can be applied. For example, to reduce the time spent "picking" an order (going through the warehouse and collecting the goods for an order), it's desirable to find the shortest way to visit all those places in the warehouse, which boils down to the travelling salesman problem.
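A toy illustration of that picking-route problem using a nearest-neighbour heuristic; the coordinates are invented, and a real WMS would use something far more sophisticated:

    from math import dist

    depot = (0.0, 0.0)
    locations = [(2, 5), (9, 1), (4, 4), (7, 8)]   # made-up aisle coordinates

    def pick_route(start, stops):
        # Greedy nearest-neighbour tour: always walk to the closest remaining location.
        route, current, remaining = [start], start, list(stops)
        while remaining:
            nxt = min(remaining, key=lambda p: dist(current, p))
            remaining.remove(nxt)
            route.append(nxt)
            current = nxt
        return route

    route = pick_route(depot, locations)
    length = sum(dist(a, b) for a, b in zip(route, route[1:]))
    print(route, f"walking distance ~ {length:.1f}")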
Another place where CS is applied is inventory taking; there are some very clever software products (e.g. INVENT Xpert) that allow random-sample inventory taking to reach the accuracy required by law. This means that instead of going to every storage location and counting the quantity stored there, only a small percentage (5-10%) of the locations is actually counted.
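A very rough sketch of the sampling idea (the record data and the physical-count stub are invented; real products use proper statistical procedures with confidence bounds):

    import random

    def estimated_record_accuracy(book_records: dict, physical_count, sample_fraction=0.05):
        # Physically count only a random fraction of locations and compare
        # against the book records to estimate accuracy for the whole warehouse.
        sample = random.sample(list(book_records), k=max(1, int(len(book_records) * sample_fraction)))
        matches = sum(physical_count(loc) == book_records[loc] for loc in sample)
        return matches / len(sample)

    books = {f"LOC-{i}": i % 7 for i in range(2000)}                      # invented book quantities
    accuracy = estimated_record_accuracy(books, lambda loc: books[loc])   # stubbed "perfect" count
    print(f"Estimated record accuracy: {accuracy:.1%}")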
This is a very general question. You probably need knowledge of distributed computing (depending on how big your operation is), database replication, travelling-salesman-style problems, and, who knows better than you, whatever else; it very much depends on the problem you need to solve.
I think you should explain the purpose of the question, so we can narrow the answer to something that might be helpful.
There are also many off-the-shelf products (which require a lot of customization, but cover most of what you need in this field).
"What typical problems..."
It is very common to have multiple sites/terminals updating a specific database row/record at the same time, so you have to be absolutely bulletproof in your row/record locking and update procedures, or you will lose both money and customers. Database concurrency issues are significant, and your fail-over systems have to work.
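A minimal sketch of one way to make such an update bulletproof: pessimistic row locking with SELECT ... FOR UPDATE via standard Python DB-API calls. The table, column names and %s parameter style are assumptions, and obtaining the connection is left out:

    def reserve_stock(conn, item_id, qty):
        # Lock the stock row so two terminals cannot oversell the same item.
        cur = conn.cursor()
        try:
            cur.execute("SELECT on_hand FROM stock WHERE item_id = %s FOR UPDATE", (item_id,))
            (on_hand,) = cur.fetchone()
            if on_hand < qty:
                conn.rollback()
                return False
            cur.execute("UPDATE stock SET on_hand = on_hand - %s WHERE item_id = %s",
                        (qty, item_id))
            conn.commit()
            return True
        except Exception:
            conn.rollback()
            raise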
Test under real load. If you expect to have 50,000 different widgets in your warehouse, and you expect days (the day after Thanksgiving) when you get 6,000 hits a second for 9 hours on a particular widget, then that's what you test: real data and real volume. At the end of your tests, your item quantity, turn, and back-order counts can't be off by even one.
Make sure you've addressed these two issues and you're on your way to a trustworthy system.
Question: why are you thinking of writing your own system rather than adapting one that is commercially available?

How would you measure code "quality" across a large project [closed]

I'm working on a quite large project, a few years in the making, at a pretty large company, and I'm taking on the task of driving toward better overall code quality.
I was wondering what kind of metrics you would use to measure quality and complexity in this context. I'm not looking for absolute measures, but a series of items which could be improved over time. Given that this is a bit of a macro-operation across hundreds of projects (I've seen some questions asked about much smaller projects), I'm looking for something more automatable and holistic.
So far, I have a list that looks like this:
Code coverage percentage during full-functional tests
Recurrence of BVT failures
Dependency graph/score, based on some tool like nDepend
Number of build warnings
Number of FxCop/StyleCop warnings found/suppressed
Number of "catch" statements
Number of manual deployment steps
Number of projects
Percentage of code/projects that's "dead", as in, not referenced anywhere
Number of WTF's during code reviews
Total lines of code, maybe broken down by tier
You should organize your work around the six major software quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. I've put a diagram online that describes these characteristics. Then, for each characteristic decide the most important metrics you want and are able to track. For example, some metrics, like those of Chidamber and Kemerer are suitable for object-oriented software, others, like cyclomatic complexity are more general-purpose.
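As a hint of how automatable a general-purpose metric like cyclomatic complexity is, here is a toy counter for Python source (1 plus the number of branching constructs); real tools are far more thorough than this:

    import ast

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        # Toy approximation: one path plus one per decision-making node.
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

    print(cyclomatic_complexity("def f(x):\n    if x > 0:\n        return x\n    return -x"))  # 2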
Cyclomatic complexity is a decent "quality" metric. I'm sure developers could find a way to "game" it if it were the only metric, though! :)
And then there's the C.R.A.P. metric...
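For reference, the C.R.A.P. (Change Risk Anti-Patterns) score as commonly published combines cyclomatic complexity with test coverage; a minimal sketch:

    def crap_score(cyclomatic_complexity: int, coverage: float) -> float:
        # crap(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m), with coverage given as 0..1
        return cyclomatic_complexity ** 2 * (1.0 - coverage) ** 3 + cyclomatic_complexity

    print(crap_score(5, 0.0))   # 30.0 - complex and untested: risky
    print(crap_score(5, 1.0))   # 5.0  - fully covered: only the complexity remains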
P.S. NDepend has about ten billion metrics, so that might be worth looking at. See also CodeMetrics for Reflector.
D'oh! I just noticed that you already mentioned NDepend.
Number of reported bugs would be interesting to track, too...
If you're taking on the task of driving toward better overall code quality, you might take a look at:
How many open issues do you currently have, and how long do they take to resolve?
What process do you have in place to gather requirements?
Does your staff follow best practices?
Do you have SOPs defined describing your company's programming methodology?
When you have a number of developers involved in a large project, everyone has their own way of programming. Each style of programming solves the problem, but some answers may be less efficient than others.
How do you utilize your staff when attacking a new feature or fixing existing code? Having developers work in teams following programming SOPs forces everyone to be a better coder.
When your people code more efficiently by following rules, your development time should get shorter.
You can get all the metrics you want, but I say first you have to see how things are being done:
What are your development practices?
Without knowing how things are currently being done, you can get all the metrics you want but you'll never see any improvement.
Amount of software cloning/duplicate code; less is obviously better. (The link discusses clones and various techniques to detect/measure them.)

How do you refine your estimation process? [closed]

Estimating how long any given task will take seems to be one of the hardest parts of software development. At my current shop we estimate tasks in hours at the start of an iteration, but once a task is complete we do not use it to aid future estimations.
How do you use the information you gather from past estimations to refine future ones?
By far one of the most interesting approaches I've ever seen for scheduling realistically is Evidence Based Scheduling which is part of the FogCreek FogBugz 6.0 release. See Joel's blog post linked above for a synopsis and some examples.
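Not Joel's implementation, just a minimal sketch of the Monte Carlo idea behind Evidence Based Scheduling: scale each outstanding estimate by randomly drawn historical actual/estimate ratios and read ship-date confidence off the resulting distribution (all numbers below are invented):

    import random

    history_ratios = [1.0, 1.3, 0.8, 2.1, 1.1, 1.6]   # past actual/estimate ratios
    current_estimates = [8, 5, 13, 3, 20]              # remaining tasks, in hours

    def simulate_total(estimates, ratios):
        # One possible future: each task takes estimate * a randomly chosen past ratio.
        return sum(e * random.choice(ratios) for e in estimates)

    totals = sorted(simulate_total(current_estimates, history_ratios) for _ in range(10_000))
    for pct in (50, 75, 95):
        print(f"{pct}% chance of finishing within {totals[len(totals) * pct // 100]:.0f} hours")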
If an estimate blew out, attempt to identify whether it was just random (the environment broke, some one-off tricky bug, etc.) or whether there was something you didn't identify.
If an estimate was way too large, identify what it was that you thought was going to take so long and work out why it didn't.
Doing that enough will hopefully help developers in their estimates.
For example, if a dev thinks that writing tests for a controller is going to take ages and then it ends up taking less time than he imagined, the next estimate you make for a controller of similar scope you can keep that in mind.
I estimate with my teammates iteratively until we reach consensus. Sure, we make mistakes, but we don't calculate the "velocity" factor explicitly; rather, we use the gathered experience in our next estimation debates.
I've found that estimating time will only get you so far. Interruptions from other tasks, unforeseen circumstances, or project influences will inevitably change your time frames, and if you were to constantly reassess, you would waste much time managing when you could be developing.
So here, we give an initial time estimate for the solution based on experience (we do not use a model; I've not found one that works well enough in our environment), but we do not judge our KPIs against it, nor do we assure the business that this deadline WILL be hit. Our development approach here is largely reactive, and it seems to fill the business's requirements of us very well.
When estimates are off, there is almost always a blatant cause, which leads to a lesson learned. Recent ones from memory:
user interface assumed .NET functionality that did not exist (the ability to insert a new row and edit it inline in a GridView); lesson learned is to verify functionality of chosen classes before committing to estimate. This mistake cost a week.
ftp process assumed that FtpWebRequest could talk to a bank's secure ftp server; it turned out that there's a known bug with this class if the ftp server returns anything other than a backslash for the current directory; lesson learned is to google for 'bug' and 'problem' with class name, not just 'tutorial' and 'example' to make sure there are no 'gotchas' lurking. This mistake cost three days.
these lessons go into a Project Estimation and Development "checklist" document, so they won't be forgotten for the next project

What factor determines the cost of a software project? [closed]

If you had $100 in your hand right now and had to bet it on one of these options, which would you bet on? The question is:
What is the most important factor that determines the cost of a project?
Typing speed of the programmers.
The total amount of characters typed while programming.
The 'wc *.c' command. The end size of the c files.
The abstractions used while solving the problem.
Update: OK, just for the record, this is the most stupid question I have ever asked. The question should be: rank the list above, most important factor first. Which is the most important factor? I ask because I think the character count matters: the fewer characters there are to change when requirements change, the faster it's done. Or?
UPDATE: This question was discussed in Stackoverflow podcast #23. Thanks Jeff! :)
From McConnell:
http://www.codinghorror.com/blog/archives/000637.html
[For a software project], size is easily the most significant determinant of effort, cost, and schedule. The kind of software you're developing comes in second, and personnel factors are a close third. The programming language and environment you use are not first-tier influences on project outcome, but they are a first-tier influence on the estimate.
Project size
Kind of software being developed
Personnel factors
I don't think you accounted for #3 in the above list. There's usually an order of magnitude or more difference in skill between programmers, not to mention all the Peopleware issues that can affect the schedule profoundly (bad apples, bad management, etc).
None of those things are major factors in the cost of a project. What it all comes down to is how well your schedule is put together: can you deliver what you said you would deliver by a certain date? If your schedule estimates are off, well, guess what, your project is going to cost a lot more than you thought it would. In the end, it's schedule estimates all the way.
Edit: I realize this is a vote, and that I didn't actually vote on any of the choices in the question, so feel free to consider this a comment on the question instead of a vote.
I think the largest cost on large projects is testing, fixing the bugs, and fixing misinterpretations of the requirements. First you need to write tests. Then you fix the code so that the tests pass. Then you do the manual tests. Then you must write more tests. On a large project, the testing and fixing can consume 40-50% of the time. If you have high quality requirements, then it can be even more.
Characters, file size, and typing speed can be considered of zero cost compared to proper problem definition, design and testing; the latter are easily an order of magnitude more important.
The most important single factor determining the cost of a project is the scale and ambition of the vision. The second most important is how well you (your team, your management, etc.) control the inevitable temptation to expand that vision as you progress. The factors you list are themselves just metrics of the scale of the project, not what determines that scale.
Of the four options you gave, I'd go with #2 - the size of the project. A quick project for cleaning out spam is going to be generally quicker than developing a new word processor, after all.
After that I'd go with "The abstractions used while solving the problem." next - if you come up with the wrong method of solving the problem, either wrong because of the logic being bad or because of a restriction with the system - then you'll definitely spend more money on re-design and re-coding what has already been done.

Resources