How do you estimate the project effort/schedule of your projects? [duplicate] - estimation

There are many estimation techniques, from formal ones like COCOMO and Function Points, through less formal ones like story/feature points, to even less formal approaches like "toss some dice".
Which method do you find most useful or most effective? Why?
What do you do if the estimation method gives results that are not quite what project managers, marketeers or pointy-haired bosses expect?

If you use something like FogBugz, it has Evidence Based Scheduling. Apparently it can be accurate down to the day.
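For a rough feel for what Evidence Based Scheduling does, here is a minimal sketch of the underlying Monte Carlo idea (this is not FogBugz's actual code; the velocities, estimates and trial count are made up for illustration): each remaining estimate is divided by a velocity (estimate divided by actual) sampled from the developer's own history, and the simulation is repeated many times to produce a distribution of completion times rather than a single date.

import java.util.Arrays;
import java.util.Random;

public class EbsSketch {
    public static void main(String[] args) {
        double[] pastVelocities = {1.0, 0.6, 1.2, 0.5, 0.9};   // estimate / actual, from the developer's history (illustrative)
        double[] remainingEstimates = {8, 16, 4, 24};          // hours estimated for upcoming tasks (illustrative)
        int trials = 10_000;
        double[] totals = new double[trials];
        Random rng = new Random();

        for (int t = 0; t < trials; t++) {
            double total = 0;
            for (double estimate : remainingEstimates) {
                double v = pastVelocities[rng.nextInt(pastVelocities.length)];
                total += estimate / v;                         // a history of under-estimating stretches the estimate
            }
            totals[t] = total;
        }

        Arrays.sort(totals);
        System.out.printf("50%% confidence: %.0f hours%n", totals[trials / 2]);
        System.out.printf("95%% confidence: %.0f hours%n", totals[(int) (trials * 0.95)]);
    }
}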


Looking for a path to learn the math required to understand algorithm books / theory [closed]

I've taken everything up to pre-calculus in college, but when trying to get through things like the Donald Knuth books, or even things like this link: http://en.wikipedia.org/wiki/Self-balancing_binary_search_tree I wind up looking at math that means absolutely nothing to me. I'm not looking for magic, I don't expect to make sense of this in a week, I'm just looking for a good graduated plan of things to read / explore to get me there. Any pointers are welcome, after 20+ years as a professional programmer, I feel it would be nice to have this under my belt. Thanks in advance to everyone! :-)
I actually recommend taking a discrete mathematics course at your local university. This helped me out tremendously. Until I had this, I did not understand recursion (which is based on mathematical induction). There are a number of other concepts which you will learn in a good discrete mathematics course which are extremely, extremely helpful (graph theory, asymptotic notation, combinatorics...).
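If the recursion/induction connection sounds abstract, a tiny sketch may help (my own example, not from the course): the base case of the recursion plays the role of the induction's base case, and the recursive call is the inductive step.

public class SumToN {
    static int sum(int n) {
        if (n == 0) return 0;        // base case: the sum of nothing is 0
        return n + sum(n - 1);       // inductive step: assume sum(n - 1) is right, build sum(n) from it
    }

    public static void main(String[] args) {
        System.out.println(sum(10)); // prints 55
    }
}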
I also recommend taking the class for a grade. I have always noticed that this makes people take the course more seriously, even if it is not in line with a degree path or anything past the grade.
If your local university is good, they will likely have tutoring sessions and office hours available that you can go to in order to ask questions and get clarification. These are really, really valuable and helped me learn things in a deeper manner, and more quickly, than I ever could have on my own.
You may need to take calculus in order to meet the prerequisites, but that is something I would also recommend if you'd like to increase your mathematical literacy. This 'answer' will take at least a semester, and more like two, but I think this is the way to go. It's not an immediate solution, but you will become better at math if you perform well in these two classes (and you have a good university close by.)
Your profile says you are in Dallas. I found this course (with no prerequisites!) for you. The syllabus looks like it covered a lot of good material, and the course met at 5:30 p.m. (good for working people!). If they are offering anything similar next semester, I'd consider it. If you call up the instructor, I'm sure he'd be happy to talk with you about what he knows for summer and fall scheduling.
This path has worked well for me.
Good luck!
You can try this: http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025
There's a PDF version of this available online; you can easily Google for it.
Many of my friends who are great programmers recommended it.
A lot of talented programmers understand algorithms before understanding the math behind them. The math is only there to help; it isn't what makes you understand everything. You will need to spend more time reading about algorithms and complexity, and then you might get a sense of how to evaluate them.
I recommend reading more books about algorithmic complexity.
In your long experience as a professional programmer, there surely are topics and sub-domains that you are most curious about. My advice is: identify those areas and go after them. It might be code-breaking, number theory, recursion, functional programming, computational origami, logical puzzles, crystal structures, graphs, genetic algorithms, splines...
Take your own remark to heart:
but when trying to get through things like the Donald Knuth books, or even things like this link:...I wind up looking at math that means absolutely nothing to me
What sort of math fascinates you?
I could say there are lots of intriguing puzzles at Project Euler. After you solve a programming challenge, you have access to a forum in which other folks share their solutions and occasionally refer to some body of knowledge they were drawing on. I love it. But what matters is what you like. Your own interests are the key to your learning.
If math and programming no longer have any appeal--you don't like doing them in your spare time--find something else to get into: acting, foreign languages, travel, French cooking, biking. Who knows, maybe you're burned out.
I'd say get a good book in discrete math and one in combinatorics as well. Here are a few I've liked. The Rosen book is good place to start.
http://www.amazon.com/Course-Combinatorics-J-van-Lint/dp/0521006015
http://www.amazon.com/Discrete-Mathematics-Applications-Kenneth-Rosen/dp/0073229725/ref=sr_1_2?s=books&ie=UTF8&qid=1305304408&sr=1-2
http://www.amazon.com/Introductory-Combinatorics-5th-Richard-Brualdi/dp/0136020402/ref=sr_1_7?s=books&ie=UTF8&qid=1305304434&sr=1-7
In line with what Vincent said, I recommend Algorithms in a Nutshell from O'Reilly (here).
There are plenty of good video lectures on discrete math, calculus and applied math. Just watch them every evening, take notes and try to solve simple problems. To prepare yourself for Knuth, try "Discrete Mathematics". To understand more deeply what math is and how all things in the universe are interconnected (including algorithms), try "Joy of Mathematics".
I was looking for just the same thing. I couldn't afford any of the material suggested here so far so here's a link to a YouTube lecture series on Discrete Mathematics. I wish there was a playlist but unfortunately there is not.
The videos are uploaded from http://www.aduni.org, which asks for a donation of 25c per video to cover operating costs.

Looking for good bonus quiz to test efficiency (specifically efficiency related to time) [closed]

I am running an Introduction to Computer Science lab once a week. I was hoping to have a quick contest at the end of my next lab. I want to give the students a block of code like this:
public class EfficientCode {
    public static void main(String[] args) {
        long startTime, endTime, executionTime;
        startTime = System.currentTimeMillis();
        doSomething();                        // the method the students implement
        endTime = System.currentTimeMillis();
        executionTime = endTime - startTime;
        System.out.println("Execution time: " + executionTime + " ms");
    }

    public static void doSomething() {
        // you do this part.
    }
}
They will implement the doSomething method and the person with the fastest code will get a handful of bonus marks.
The problem is that the question needs to be somewhat simple. The students have a good grasp of: loops, if/else, Strings, adding, arrays, etc.
Here are my ideas for what the question could be:
Find all the perfect numbers between 1 and 1,000,000. (A perfect number is a number whose proper divisors add up to the number itself, i.e. 6 = 3 + 2 + 1.)
Find all prime numbers between 1 and 1,000,000.
I think in order for there to be a measurable difference in performance between methods you must do something many times.
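For illustration, here is one hedged way doSomething could be filled in for the prime-number idea above, using a sieve of Eratosthenes (the bound, the counting and the printing are assumptions, not part of the assignment):

public class SievePrimes {
    public static void doSomething() {
        int limit = 1_000_000;
        boolean[] composite = new boolean[limit + 1];
        int count = 0;
        for (int i = 2; i <= limit; i++) {
            if (!composite[i]) {
                count++;                                     // i is prime
                for (long j = (long) i * i; j <= limit; j += i) {
                    composite[(int) j] = true;               // mark every multiple of i as composite
                }
            }
        }
        System.out.println(count + " primes up to " + limit);
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        doSomething();
        System.out.println("took " + (System.currentTimeMillis() - start) + " ms");
    }
}

A naive trial-division version of the same method makes a good baseline to race it against.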
Agreed about 'many times' for short operations, but for longer ones, once might be enough on its own.
I suggest looking into Project Euler, an excellent collection of programming questions. The best part is that the problems are designed with a "one minute rule" in mind: an efficient algorithm on a moderate computer should find the answer to most problems in less than one minute. So it's an excellent place to start. :)
Two things.
First, efficiency is about more than execution time. It also is about memory usage, memory access, file-system/resource access, etc. There are tons of things that go into efficiency. So please be explicit that you're looking for the routine with the shortest run-time. Otherwise you are sending a mixed message...
Second, I heard this problem about 15 years ago, and I can't forget it:
Produce a list of all 5-digit number pairs that sum to 121212. However, neither of the two numbers can repeat a decimal digit, so 1 can only appear once in either number. An example result pair is 98167 + 23045. There are a fair number of them, and it's easy to build a brute-force solution, but an efficient solution requires some thought. There are 192 unique pairs...
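As a sketch only, here is a brute-force version of that problem under the reading suggested by the example (no digit appears more than once across the two 5-digit numbers); the class and method names are mine, and an efficient solution would instead build the numbers digit by digit:

public class PairSum {
    public static void main(String[] args) {
        int target = 121_212, count = 0;
        for (int a = 10_000; a <= target / 2; a++) {
            int b = target - a;
            if (b < 10_000 || b > 99_999) continue;          // both must be 5-digit numbers
            if (allDigitsDistinct(a, b)) count++;
        }
        System.out.println(count + " unique pairs");
    }

    // true if the ten digits of a and b together contain no repeats
    static boolean allDigitsDistinct(int a, int b) {
        boolean[] seen = new boolean[10];
        for (int n : new int[] {a, b}) {
            while (n > 0) {
                int d = n % 10;
                if (seen[d]) return false;
                seen[d] = true;
                n /= 10;
            }
        }
        return true;
    }
}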
Because this is an introductory class and your students haven't covered sorting yet I think it's going to be very hard to come up with something simple enough to do, interesting enough to have a few different ways of doing it, and complex enough that there is an appreciable difference in speed between the different implementations on a modern computer. Your real problem, though, is that anything simple enough for them to try already has a canonical implementation only a short Google search away.
My suggestion is to invert the challenge. Have your students compete to come up with the gnarliest, slowest, most memory-hogging solution they can think of. I believe it's as educationally valuable to think about all the wrong ways of doing something as it is to think about the right ones, and it's just as hard to be the worst as it is to be the best. It's easier to see results subjectively as well, since bad code will be really slow. No Googling for an answer, either. Finally, in my (irrelevant) opinion, this has the added bonus of making the challenge more fun.
Something like finding a string within another string is easier made bad than good. Maybe have them extract all the prime numbers from a 2kb string of random alphanumeric characters. Lots of ways to make a pig's ear of that problem.
Those are good ideas. What about having a sorting question?
Sorting an array of numbers might also be a good idea, since there are a whole bunch of algorithms for it (insertion, selection, quick, heap, etc.) that all have different performance characteristics. This would also give students a chance to learn about big-O notation, etc.
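A hedged sketch of what that sorting contest could look like: time a hand-written O(n^2) insertion sort against the library's O(n log n) Arrays.sort on the same random data (the array size and value range are arbitrary):

import java.util.Arrays;
import java.util.Random;

public class SortContest {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];      // shift larger elements one slot to the right
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] data = new Random().ints(30_000, 0, 1_000_000).toArray();
        int[] copy = data.clone();

        long start = System.currentTimeMillis();
        insertionSort(data);
        System.out.println("insertion sort: " + (System.currentTimeMillis() - start) + " ms");

        start = System.currentTimeMillis();
        Arrays.sort(copy);
        System.out.println("Arrays.sort:    " + (System.currentTimeMillis() - start) + " ms");
    }
}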

How would you measure code "quality" across a large project [closed]

I'm working on a quite large project, a few years in the making, at a pretty large company, and I'm taking on the task of driving toward better overall code quality.
I was wondering what kind of metrics you would use to measure quality and complexity in this context. I'm not looking for absolute measures, but a series of items which could be improved over time. Given that this is a bit of a macro-operation across hundreds of projects (I've seen some questions asked about much smaller projects), I'm looking for something more automatable and holistic.
So far, I have a list that looks like this:
Code coverage percentage during full-functional tests
Recurrence of BVT failures
Dependency graph/score, based on some tool like NDepend
Number of build warnings
Number of FxCop/StyleCop warnings found/suppressed
Number of "catch" statements
Number of manual deployment steps
Number of projects
Percentage of code/projects that's "dead", as in, not referenced anywhere
Number of WTF's during code reviews
Total lines of code, maybe broken down by tier
You should organize your work around the six major software quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. I've put a diagram online that describes these characteristics. Then, for each characteristic, decide on the most important metrics you want and are able to track. For example, some metrics, like those of Chidamber and Kemerer, are suitable for object-oriented software; others, like cyclomatic complexity, are more general-purpose.
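To make the cyclomatic complexity idea concrete, here is a small hand-counted example (my own, not from any of the tools mentioned): for structured code the metric is roughly the number of decision points plus one, i.e. the number of independent paths through a method.

public class ComplexityExample {
    // Decision points: if, ||, for, if, || = 5, so cyclomatic complexity is about 6.
    static int parsePositive(String s, int fallback) {
        if (s == null || s.isEmpty()) {
            return fallback;
        }
        int value = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < '0' || c > '9') {
                return fallback;
            }
            value = value * 10 + (c - '0');
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(parsePositive("42", -1));   // 42
        System.out.println(parsePositive("4x2", -1));  // -1
    }
}

Tools like NDepend compute this per method, so it can be tracked over time rather than counted by hand.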
Cyclomatic complexity is a decent "quality" metric. I'm sure developers could find a way to "game" it if it were the only metric, though! :)
And then there's the C.R.A.P. metric...
P.S. NDepend has about ten billion metrics, so that might be worth looking at. See also CodeMetrics for Reflector.
D'oh! I just noticed that you already mentioned NDepend.
Number of reported bugs would be interesting to track, too...
If you're taking on the task of driving toward better overall code quality, you might take a look at:
How many open issues do you currently have, and how long do they take to resolve?
What process do you have in place to gather requirements?
Does your staff follow best practices?
Do you have SOPs defined describing your company's programming methodology?
When you have a number of developers involved in a large project, everyone has their own way of programming. Each style of programming solves the problem, but some solutions may be less efficient than others.
How do you utilize your staff when attacking a new feature or fixing existing code? Having developers work in teams following programming SOPs forces everyone to be a better coder.
When your people code more efficiently by following the rules, your development time should get shorter.
You can get all the metrics you want, but I say first you have to see how things are being done:
What are your development practices?
Without knowing how things are currently being done, you can get all the metrics you want, but you'll never see any improvement.
Amount of software cloning/duplicate code; less is obviously better. (The link discusses clones and various techniques to detect and measure them.)
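To illustrate what a clone metric measures, here is a deliberately naive sketch (my own, nothing like a real clone detector): hash every window of N consecutive whitespace-normalized lines of a file and count windows that occur more than once. Token- and AST-based detectors from the linked discussion are far more robust.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NaiveCloneCount {
    public static void main(String[] args) throws Exception {
        int window = 6;                                                    // minimum clone size, in lines (arbitrary)
        List<String> lines = Files.readAllLines(Paths.get(args[0]));      // pass a source file as the first argument
        Map<String, Integer> seen = new HashMap<>();

        for (int i = 0; i + window <= lines.size(); i++) {
            StringBuilder chunk = new StringBuilder();
            for (int j = i; j < i + window; j++) {
                chunk.append(lines.get(j).trim()).append('\n');            // normalize indentation
            }
            seen.merge(chunk.toString(), 1, Integer::sum);
        }

        long duplicated = seen.values().stream().filter(c -> c > 1).count();
        System.out.println(duplicated + " distinct " + window + "-line blocks appear more than once");
    }
}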

Number of lines of code in a lifetime [closed]

One company required its prospective employees to give the number of lines of code they have written in their lifetime in a certain programming language, like Java or C#. Since most of us have a number of years of experience on different projects in multiple languages, and we hardly keep records of this, what would be the best approach to calculating this metric? I am sure the smart members of stackoverflow.com will have some ideas.
This is a very respected company in its domain and I am sure they have some very good reason to ask this question. But what also makes it difficult to answer is the type of code to consider. Should I only include the difficult algorithms that I implemented, or any code I wrote, e.g. a POJO that had 300 properties and whose getters/setters were generated using the IDE?
The best response to such a question is one of the following:
Why do you want to know?
What meaning would you attribute to such a number?
Is it OK if I just up and leave just about now?
I would seriously question the motives behind anyone asking such a question either of current or prospective employees. It is most likely the same type of company that would start doing code reviews focusing on the number of lines of code you type.
Now, if they argue that the number of lines of code is a measure of the experience of a programmer, then I would definitely leave the interview at that point.
Simple solutions can be found for complex problems, and they are typically better than just throwing enough lines of code at the problem until it sorts itself out. Since the number of bugs produced scales at least linearly with the number of statements, I would say that the inverse is probably the better measure, combined with the number of problems they've tackled.
As a test-response, I would ask this:
If I am able to solve problems A, B and C in 1000 lines of code, and another programmer solves the same problems in 500 lines of code, which of us is the better programmer? (And the answer would be: not enough information to judge.)
Now, if you still want to estimate the number of lines, I would simply start thinking about the projects the person has written, and compare their size with a known quantity. For instance, I have a class library that currently ranges about 130K lines of code, and I've written similar things in Delphi and other languages, plus some sizable application projects, so I would estimate that I have a good 10 million lines of code on my own at least. Is the number meaningful? Not in the slightest.
Sounds like this is D E Shaw's questionnaire?
This seems like one of those questions like 'How many ping-pong balls could you fit in a Boeing 747?' In that case, the questioner wants to see you demonstrate your problem solving skills more than know how many lines of code you've actually written. I would be careful not to respond with any criticism of the question, and instead honestly try to solve the problem ; )
Take a look at ohloh. The site shows metrics from open source projects.
The site estimates that 107,187 lines of code correspond to an effort of 27 person-years (about 4,000 lines of code per year).
An example of the silliness of such a metric is that this particular number is from a project I've been toying with outside work for 2 years.
There are basically three ways of dealing with ridiculous requests for meaningless metrics.
Refuse to answer, challenging the questioner for their reasons and explaining why those reasons are silly.
Spend time gathering all the information you can, and calculate the answer to the best of your ability.
Make up a plausible answer, and move on with as little emotional involvement in the stupidity as possible.
The first answers I see seem to be taking the first line. Think about whether you still want the job despite the stupidity of their demands. If the answer is still Yes, avoid number 1.
The second method would involve looking at your old code repositories from old projects.
In this case, I would go with the third way.
Multiply the number of years you have worked on a language by 200 work days per year, by 20 lines of code a day, and use that.
If you are claiming more than one language per year, apportion it out between them.
If you have been working more on analysis, design or management, drop the figure by three quarters.
If you have been working in a high-ceremony environment (defence, medicine), drop the figure by an order of magnitude.
If you have been working in an environment with particularly low ceremony, increase it by an order of magnitude.
Then put the stupidity behind you and get on with your life as quickly as possible
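A rough worked sketch of that back-of-the-envelope heuristic (the 200 days and 20 lines per day are the figures above; the 10 years and the adjustment lines are only examples):

public class LocGuess {
    public static void main(String[] args) {
        double years = 10;                     // years spent mostly in one language (example)
        double loc = years * 200 * 20;         // 200 working days/year * 20 lines/day = 4,000 lines/year

        // loc *= 0.25;                        // mostly analysis, design or management
        // loc /= 10;                          // high-ceremony environment (defence, medicine)
        // loc *= 10;                          // particularly low-ceremony environment

        System.out.printf("Roughly %.0f lines of code%n", loc);
    }
}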
Depending on what they do with the answer, I don't think this is a bad question. For example, if a candidate puts JavaScript on their resume, I want to know how much JavaScript have they actually written. I may ask, for example, for the number of lines in the largest JavaScript project they've written. But I'm only looking for a sense of scale, not an actual number. Is it 10, 100, 1000, or 10,000 lines?
When I ask, I'll make very clear that I'm just looking for a crude number to gauge the size of the project. I hope the employer in the questioner's case is after the same.
It is an interesting metric to ask for, considering you could write many, many lines of bad code instead of just a few smart ones.
I can only assume they consider more lines to be better than fewer. Would it be better to not plan at all and just start writing code? That would be a great way to write more lines of code, since when I do that I usually end up writing everything at least twice.
Smart Stack Overflowers would generally avoid organizations that ask this kind of question. Unless the correct answer is "huh, wtf??"
If you were to be truly honest then you'd say that you don't know because you have never viewed it as a valid metric. If the interviewer is a reasonable/rational person, then this is the answer they are looking for.
The only other option to saying you don't know is to guess, and that really isn't demonstrating problem solving skills.
Why bother calculating this metric without a good reason? And some random company asking for the metric really isn't a good reason.
If the company's question is actually serious, and you think the interview might lead to something interesting, then I would just pick a random number in order to see where that leads :-)
Ha, this reminds me of when I took over a C-based testing framework, which started out as 20K+ lines that I ended up collapsing into 1K LOC by factoring it down to a subroutine, instead of the 20K lines of diarrhea code originally written by the original author. Unfortunately, I got spanked harder for any errors in the code, as my KLOCs written actually went negative... I would think long and hard about shrinking the code base in a metrics-driven organization.
Even though I agree with the majority that this is not a really good metric, if it's a serious company, as you say, they may have their reasons for asking. This is what I would probably do:
Take one of your existing projects, get the number of lines, and divide it by the time it took you to code it. This gives you a rough lines-per-hour metric. Then estimate how much time you have worked with that specific language and multiply it by your metric. I honestly don't think it's a great way... but honestly, this isn't a great question either. I would also tell the company the strategy I used to come up with the number; maybe, MAYBE, that's what they want: to know your opinion about the question and how you would answer it. :p
Or they just want to know whether you have some experience... so guess an impressive number and write it down. :D
"This is a very respected company in its domain and I am sure they have some very good reason to ask this question"
And I am very sure they don't, because "being respected" does not mean "they do everything right", because this is certainly not right, or if it is, then it's at least dumb in my opinion.
What counts as "lines of code"? I estimate that I have written around 250,000 lines of C# code, possibly a lot more. The problem? 95% was throwaway code, and not all of it was for learning. I still find myself writing a small 3-line program for the tenth time simply because it's easier to write those three lines again (and change a parameter) than to go search for the existing ones.
Also, lines of code mean nothing. Say I have two guys; one has written 20% more lines than the other, but those 20% were unnecessarily complicated lines, "loop unrolling" and otherwise useless stuff that could have been refactored out.
So sorry, respected company or not: asking for lines of code is a sure sign that they have no clue about measuring the efficiency of their programmers, which means they have to rely on stone-age techniques like measuring LOC, which are about as accurate as stone-age calendars. Which means it's possibly a good place to work if you like to slack off and inflate your numbers every once in a while.
Okay, that was more a rant than an answer, but I really see absolutely no good reason for this number whatsoever.
And nobody has yet cited the Bill Atkinson "-2000 lines" story...
In my Friday afternoon (well, about one Friday per month) self-development exercises at work over the past year, tests, prototypes and infrastructure included, I've probably written about 5 kloc. However one project took an existing 25kloc C/C++ application and reimplemented it as 1100 lines of Erlang, and another took 15kloc of an existing C library and turned it into 1kloc of C++, so the net is severely negative. And the only reason I have those numbers was that I was looking to see how negative.
I know this is an old post, but this might be useful to someone anyway...
I recently moved on from a company I worked at for roughly 9.5 years as a Java developer. All our code was in CVS, then SVN, with Atlassian Fisheye providing a view into it.
When I left, Fisheye was reporting my personal, total LOC as +- 250,000. Here's the Fisheye description of its LOC metric, including the discussion on how each SVN user's personal LOC is calculated. Note the issues with branching and merging in SVN, and that LOC should usually only be based on TRUNK.

Does anyone work with Function Points? [closed]

Some questions about Function Points:
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods)
2) And is the effort required worth the benefit you get out of it?
3) Which type of Function Points do you use?
4) Do you use any tools for doing this?
Edit: I am interested in hearing from people who use them or have used them. I've read up on estimation practices, including pros/cons of various techniques, but I'm interested in the value in practice.
I was an IFPUG Certified Function Point Specialist from 2002-2005, and I still use them to estimate business applications (web-based and thick-client). My experience is mostly with smaller projects (1000 FP or less).
I settled on Function Points after using Use Case Points and Lines of Code. (I've been actively working with estimation techniques for 10+ years now).
Some questions about Function Points:
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods)
Hard to answer quickly, as it depends on where you are in the lifecycle (from gleam-in-the-eye to done). You also have to realize that there's more to estimation than precision.
Their greatest strength is that, when coupled with historical data, they hold up well under pressure from decision-makers. By separating the scope of the project from productivity (h/FP), they result in far more constructive conversations. (I first got involved in metrics-based estimation when I, a web programmer, had to convince a personal friend of my company's founder and CEO to go back to his investors and tell them that the date he had been promising was unattainable. We all knew it was, but it was the project history and functional sizing (home-grown use case points at the time) that actually convinced him.)
Their advantage is greatest early in the lifecycle, when you have to assess the feasibility of a project before a team has even been assembled.
Contrary to common belief, it doesn't take that long to come up with a useful count if you know what you're doing. Just from the basic information types (logical files) inferred in an initial client meeting, and the average productivity of our team, I could come up with a rough count (but no rougher than all the other unknowns at that stage) and a useful estimate in an afternoon.
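As a hedged sketch of the arithmetic behind such a rough early count (the component counts and the 8 hours-per-FP productivity below are invented for illustration; the weights are the standard IFPUG average-complexity weights):

public class RoughFpEstimate {
    public static void main(String[] args) {
        int internalLogicalFiles   = 12;   // ILF, average weight 10
        int externalInterfaceFiles = 3;    // EIF, average weight 7
        int externalInputs         = 30;   // EI,  average weight 4
        int externalOutputs        = 20;   // EO,  average weight 5
        int externalInquiries      = 15;   // EQ,  average weight 4

        int unadjustedFp = internalLogicalFiles * 10
                         + externalInterfaceFiles * 7
                         + externalInputs * 4
                         + externalOutputs * 5
                         + externalInquiries * 4;

        double hoursPerFp = 8.0;           // historical team productivity (illustrative)
        System.out.println(unadjustedFp + " unadjusted FP, roughly "
                + (unadjustedFp * hoursPerFp) + " hours of effort");
    }
}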
Combine Function Point Analysis with a Facilitated Requirements Workshop and you have a great project set-up approach.
Once things were getting serious and we had nominated a team, we would then use Planning Poker and some other estimation techniques to come up with an independent number, and compare the two.
2) And is the effort required worth the benefit you get out of it?
Absolutely. I've found preparing a count to be an excellent way to review user-goal-level requirements for consistency and completeness, in addition to all the other benefits. This was true even when setting up Agile projects. I often found implied stories the customer had missed.
3) Which type of Function Points do you use?
IFPUG CPM (Counting Practices Manual) 4.2
4) Do you use any tools for doing this?
An Excel spreadsheet template I was given by the person who trained me. You put in the file or transaction attributes, and it does all of the table lookups for you.
As a concluding note, NO estimate is as precise (or more precisely, accurate) as the bean-counters would like, for reasons that have been well documented in many other places. So you have to run your projects in ways that can accommodate that (three cheers for Agile).
But estimates are still a vital part of decision support in a business environment, and I would never want to be without my function points. I suspect the people who characterize them as "fantasy" have never seen them properly used (and I have seen them overhyped and misused grotesquely, believe me).
Don't get me wrong, FP have an arbitrary feel to them at times. But, to paraphrase Churchill, Function Points are the worst possible early-lifecycle estimation technique known, except for all the others.
Mike Cohn, in his Agile Estimating and Planning, considers FPs to be great but difficult to get right. He (obviously) recommends using story-point-based estimation instead. I tend to agree with this, as with each new project I see the benefits of the Agile approach more and more.
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods)
As far as estimation precision goes, function points are very good. In my experience they are great, but expensive in terms of the effort involved if you want to do it properly. Not many projects can afford an elaboration phase long enough to get the FP-based estimates right.
2) And is the effort required worth the benefit you get out of it?
FPs are great because they are officially recognised by ISO, which gives your estimates a great deal of credibility. If you work on a big project for a big client, it might be useful to invest in official-looking detailed estimates. But if the level of uncertainty is big to start with (other vendors' integrations, legacy systems, loose requirements, etc.), you will not get anywhere near that precision anyway, so usually you have to just accept this and re-iterate the estimates later. In that case, a cheaper way of doing the estimates (user stories and story points) is better.
3) Which type of Function Points do you use?
If I understand this part of your question correctly, we used to do estimates based on Feature Points, but gradually moved away from these on almost all projects except the ones with a heavy emphasis on internal functionality.
4) Do you use any tools for doing this?
Excel is great with all the formulas you could use. Using Google Spreadsheets instead of Excel helps if you want to do that collaboratively.
There is also a great tool built into Sparx Enterprise Architect which allows you to do estimates based on use cases; it can be used for FP estimation as well.
The great hacknot is offline now, but it is in book form. He has an essay on function points: http://www.scribd.com/doc/459372/hacknot-book-a4, concluding they are a fantasy (which I agree with).
Joel on Software has a reasonably sound alternative called Evidence Based Scheduling that at least sounds like it might work...
From what I have studied about Function Points (one of my teachers was highly involved in developing the theory behind them), even he wasn't able to answer all our questions. Function Points fail in many ways, because having something to read or write doesn't mean you can evaluate it correctly. You might end up with a result of 450 function points, where some of those function points will take 1 hour and some will take 1 week. It's a metric that I will never use again.
No, because any particular requirement can involve an arbitrary amount of effort, depending on how precise (or imprecise) the author of the requirement is and on the level of experience of the function point assessor.
No, because the administration of imprecise derivations of abstract functionality yields no reliable estimate.
None, if I can help it.
Tools? For function points? How about Excel? Or Word? Or Notepad? Or Edlin?
To answer your questions:
Yes, they are more precise than anything else I have encountered (in 20+ years).
Yes, they are well worth the effort. You can estimate size, resources, quality and schedule from just the FP count - extremely useful. It takes an average of 1 minute to count an FP manually and an average of 8 hours to fully code an FP (approximately $800 worth). Consider the carpenter's saying: "measure twice, cut once". And now a shameless plug: with https://www.ScopeMaster.com you can measure 1 FP per second, and you don't need to learn how!
I like Cosmic Function Points (because they are versatile) and IFPUG because there is a lot of published data (mostly from Capers Jones).
Having invested considerable time, effort and money in developing a tool that counts FPs automatically from requirements, I shall never have to do it manually again!
