How to define a metric in order to evaluate programmers? [closed] - metrics

Let's suppose we have programmers A, B, and C. Quality is defined as the number of bugs/month and productivity as LOC/month. So we have programmer A with quality 2 and productivity 2500; programmer B with quality 5 and productivity 500; and programmer C with quality 25 and productivity 200. How do I define a metric to evaluate which programmer is the best? I searched the internet for a method, but found no example of how to evaluate programmers. Can someone help me with this? I would really appreciate it.

There is probably no standard way to deal with this, because (1) there is no standard definition of which programmer is the best, and (2) even if there were something close, it would probably not rely on those metrics. So you are left with subjective choices here.
As mentioned in the question's comments, you have to be careful with the selected metrics because they don't respect the representation condition [1] (i.e. counter-examples can be found, and there would be a lot in this very case). So don't believe you can judge developers on these metrics; that would be a big mistake and a flagrantly counter-productive use of metrics, see [2] for more details.
However, I believe these metrics and the implied subjective choices are still worth something, as long as you know to what extent they are valid. They still carry information, and it is interesting to have and monitor them: as numbers, they might help you understand some aspects of the development, too.
So maybe the proposed answer would be a visualisation-based one: plot them nicely (a bubble chart may be interesting, or good old bar charts) as metrics, not as high-level characteristics. You could also map them to a scale (difficult to define, though) to get a normalised indicator (e.g. from 1 to 5) and build a tree like in the PolarSys Dashboard.
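To make that concrete, here is a minimal sketch of both ideas, plotting the raw numbers and mapping them to a rough 1-5 indicator. The figures are the ones from the question; the scale bounds and the plotting choices are illustrative assumptions, not part of any standard.

```python
import matplotlib.pyplot as plt

# Raw metrics from the question: (LOC/month, bugs/month) per programmer.
programmers = {"A": (2500, 2), "B": (500, 5), "C": (200, 25)}

def to_indicator(value, lo, hi, lower_is_better=False):
    """Map a value in [lo, hi] onto a 1-5 indicator (5 = best).
    The bounds are arbitrary assumptions you must choose yourself."""
    frac = (value - lo) / (hi - lo)
    if lower_is_better:
        frac = 1 - frac  # e.g. fewer bugs/month is better
    return 1 + 4 * frac

for name, (loc, bugs) in programmers.items():
    print(name,
          f"productivity: {to_indicator(loc, 0, 3000):.1f}",
          f"quality: {to_indicator(bugs, 0, 30, lower_is_better=True):.1f}")
    # Bubble chart: position shows both metrics at once, no ranking implied.
    plt.scatter(loc, bugs, s=200, alpha=0.5)
    plt.annotate(name, (loc, bugs))

plt.xlabel("LOC / month")
plt.ylabel("Bugs / month")
plt.show()
```

Note that the chart deliberately stops short of combining the two numbers into a single "best programmer" score, for the reasons given above.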
[1] Fenton, N. (1994). Software Measurement: a Necessary Scientific Basis. IEEE Transactions on Software Engineering, 20(3), 199–206. doi:10.1109/32.268921
[2] Kaner, C., & Bond, W. P. (2004). Software engineering metrics: What do they measure and how do we know? Methodology, 8(6), 1–12.

Related

What's a good selective pressure to use in tournament selection in a genetic algorithm? [closed]

What is the optimal and usual value of selective pressure in tournament selection? What percent of the best members of the current generation should propagate to the next generation?
Unfortunately, there isn't a great answer to this question. The optimal parameters will vary from problem to problem, and people use a wide range of them. Selecting the right tournament selection parameters is currently more of an art than a science. Stronger selective pressure (a larger tournament) will generally result in the population converging on a solution faster, at the cost of that solution potentially not being as good. This is called the exploration vs. exploitation tradeoff, and it underlies most algorithms for searching a large space of possible solutions - you're not going to get away from it.
I know that's not very helpful, though - you want a starting place, and that's completely reasonable. So here's the best one I know of (and I know a number of others who use it as a go-to default tournament configuration as well): a tournament size of two. Basically, this means you just keep picking random pairs of solutions, choosing the best one, and sending it to the next generation (with mutation and crossover as desired), until the next generation is the desired size. This has the nice property that any member of the population besides the absolute worst has a chance of getting to the next generation, but better ones have a better chance.
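For reference, here is a minimal sketch of that size-two tournament loop; the fitness function, the integer encoding, and the toy target are placeholder assumptions, and mutation/crossover are left out for brevity.

```python
import random

def tournament_select(population, fitness, k=2):
    """Pick k random individuals and return the fittest one.
    With k=2, everyone but the absolute worst can win sometimes."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

def next_generation(population, fitness):
    # Mutation and crossover would be applied to each winner as desired.
    return [tournament_select(population, fitness)
            for _ in range(len(population))]

# Toy usage: select towards the value 42 over random integers.
pop = [random.randint(0, 100) for _ in range(20)]
for _ in range(10):
    pop = next_generation(pop, fitness=lambda x: -abs(x - 42))
print(pop)  # the population converges towards 42
```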

Is there any online judge for data mining? [closed]

There are many Online Judges (OJs) for ACM/ICPC questions, and there is also an online judge for interview questions, named LeetCode (http://leetcode.com).
I think these OJs are very useful for learning algorithms. Recently I have started learning data mining algorithms. Is there any OJ for data mining questions?
Thank you very much.
There is MLcomp, where you can submit an algorithm and it will run it on a number of data sets to judge how well it is doing.
Plus, there is Kaggle, which hosts various classification competitions.
And of course you can do classes at Coursera. These are pretty much entry-level, but in order to get submission points you need to reproduce the known performance.
In particular, the first (MLcomp) also allows you to run several standard algorithms such as naive Bayes and SVM and see how well they did. Obviously, your own implementation should then perform similarly.
Unfortunately, both are pretty much focused on machine learning (i.e. classification and regression). There is very little in the unsupervised domain: clustering and outlier detection. On unlabeled data, things get too hard even to evaluate locally, so any kind of online judging there is pretty much an unsolved problem. What you can do is largely one-class classification, or you just strip the labels before running the algorithm.
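If you just want the local version of that baseline sanity check, here is a minimal sketch, assuming scikit-learn is available; the iris dataset stands in for whatever data you actually care about.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Score the standard baselines with 5-fold cross-validation; your own
# implementation should land in the same neighbourhood on the same data.
for name, model in [("naive Bayes", GaussianNB()), ("SVM", SVC())]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```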

Do ballpark estimates ever help? [closed]

In our projects we are often asked to give ballpark estimates for activities. My question: does it really help in making decisions based on such an estimate?
Yes, as pointed out above.
No, if the client later says "Oh, but that's much more than the X days you initially estimated."
You need to be careful in explaining and agreeing on what "ballpark" really means.
Yes, it can help to give the client rough estimates, but later on these estimates can show up to +/- 50% variation.
Still, it can help to gauge the size of the project and the rough man-day effort.
Something to add to the existing responses.
Pros:
- Helpful for a team leader to assess the number of resources needed for a set of activities.
- Useful to assess whether a task would fit into a predefined timeline.
Cons:
- Very rough estimate.
- Needs to be handled very carefully when sharing with the customer.
I often use these ballpark estimates to give a quick price quote to a client; when based on models such as WMFP or COCOMO II, they can also help me make an unbiased assessment.
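As one concrete illustration of the model-based variant, here is a sketch of the basic COCOMO effort formula with the classic "organic" coefficients from Boehm's original model; the KLOC input is a made-up example, and a real quote would need the full COCOMO II cost drivers.

```python
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort in person-months for an 'organic' project:
    effort = a * KLOC ** b (a=2.4, b=1.05 are the organic-mode constants)."""
    return a * kloc ** b

# Hypothetical 32 KLOC project:
print(f"{cocomo_basic_effort(32):.0f} person-months")  # ~91
```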

Developer Skill Matrix: Useful or Harmful? [closed]

In a big corp, they often ask developers to fill in a matrix of what skills they have at what level. It's generally seen as a bit of a pain, but is it actually useful, or just another way for bureaucrats to reduce developers to a bunch of numbers on a spreadsheet?
Skill matrices are only partially helpful; they are good at giving you a general picture of your current "experience".
However, these matrices do not include the most important aspect: the ability to learn.
This is the most important skill in IT, in my view, and everyone learns at different speeds.
E.g. throw developer A into a new technology stack: how long before he or she is productive?
Since IT/software development is a very wide field, I regard skill sheets as quite useful. I used to be a Linux expert and my skill sheet reflected that. Then I shifted into iOS/Mac development, and my new employer asked me to fill out a skill sheet tuned to the Mac... and I immediately noticed that I was a novice in this field back then ;-) Vice versa, they were able to see whether I could fit into the company and where (in which team).
So of course they can be harmful if you lack the skills, but I think they make choices for employers easier (and I regard a big skill sheet in my CV as the most important part of the CV, even more so than the list of projects done).
The usefulness is totally dependent on what is being assessed. I work in an insurance company and this was done for all staff here. There was no category that I fit into and all the criteria were irrelevant.
I can see the benefit of assessing relevant criteria, it can identify weaknesses and target training, but those criteria need to be defined by someone who knows what you might not know.
Most of all, don't berate the bureaucrat for simplifying a complex object into a manageable set of information. As a programmer that's what you should be doing every day.
I think it is appropriate in big corporations, but for small and specialized consultancies I would conduct a personal interview.
In a big corporation, if you don't fit in one place you may fit in another... in small teams I would rather do a personal assessment.

What is the best practice for estimating required time for development of the SDLC phases? [closed]

As a project manager, you are required to organize time so that the project meets a deadline.
Are there some sort of equations to use for estimating how long development will take?
For the database, let's say: time = (number of SQL stored procedures) × (tables manipulated), or something similar.
Or are you just stuck having to gain the experience needed to make adequate estimates?
As project manager you have to remember that the best you will ever be able to do on your own is give your best guess as to how long a given project will take. How accurate you are depends on your experience and the scope of the project.
The only way I know of to get a reasonably accurate estimate is to break the project into individual tasks and get the developer who will be doing the actual work to put an estimate on each task. You can then use an evidence-based algorithm that takes each developer's estimation accuracy into account to give you the probability of hitting a given deadline (see the sketch after the links below).
If the probability is too low, you have two choices: remove features or move the deadline.
Further reading:
http://www.joelonsoftware.com/items/2007/10/26.html
http://www.wordyard.com/2007/10/11/evidence-based-scheduling/
http://en.wikipedia.org/wiki/Monte_Carlo_method
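Here is a minimal sketch of that evidence-based/Monte Carlo idea; the velocity histories, task estimates, and deadline are entirely made-up illustrative numbers.

```python
import random

# Past "velocities" (estimate / actual) per developer, from history.
velocities = {
    "alice": [0.9, 1.1, 0.8, 1.0],
    "bob":   [0.5, 0.7, 0.6, 0.9],  # bob tends to underestimate
}

# Remaining tasks: (developer, estimated days).
tasks = [("alice", 3), ("alice", 5), ("bob", 8)]

def simulate_total_days():
    # Plausible actual time = estimate / a randomly sampled past velocity.
    return sum(est / random.choice(velocities[dev]) for dev, est in tasks)

runs = [simulate_total_days() for _ in range(10_000)]
deadline = 20
p = sum(r <= deadline for r in runs) / len(runs)
print(f"P(done within {deadline} days) ≈ {p:.0%}")
```

If the resulting probability is too low, you are back to the two choices above: remove features or move the deadline.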
There's no set formula out there that I've seen that would really work. FogBugz has its Monte Carlo simulator, which comes close to a concept for this, but really, experience is going to be your best point of reference. Every developer and every project will be different!
There will be such a formula as soon as computers can start generating all code themselves. Until then you are stuck with human developers who all have different levels of skill and development speed.
