Module development time (cost) estimation [closed]

I need some ideas about estimating the development time of software that is yet to be developed. Though there are formal methods in theory, like COCOMO, function points, etc., such methods seem impractical to apply before any work has been done. (I am not sure if that is even possible?)
I have attached a sample module. Please help me learn estimation for practical purposes.
Scenario: Student Registration Module
1. Check whether the student is new or already registered.
1.1. If already registered, activate the registration.
1.2. If new, record all the necessary data related to the new student (certificates in different formats like PDF, DOCX, JPG, PNG).
2. Check for late registration. If late, apply the late registration fee.
3. Time check: student registration must be completed within a week of the academic session start.
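For illustration only, here is one possible reading of that scenario as code; a minimal sketch in which the function names, the session date, and the fee amount are all assumptions, not part of the original spec:
```python
# Minimal sketch of the registration flow as described above.
# SESSION_START, LATE_FEE, and the data layout are hypothetical placeholders.
from datetime import date, timedelta

SESSION_START = date(2024, 9, 1)   # assumed academic session start
LATE_FEE = 50                      # hypothetical late registration fee

def register(student: dict, registry: dict, today: date) -> int:
    # Time check: registration must happen within a week of session start.
    if today > SESSION_START + timedelta(weeks=1):
        raise ValueError("registration window closed")
    if student["id"] in registry:
        registry[student["id"]]["active"] = True   # 1.1 re-activate
    else:
        registry[student["id"]] = {                # 1.2 record new student
            "active": True,
            # certificates may arrive as PDF, DOCX, JPG, or PNG files
            "certificates": student.get("certificates", []),
        }
    # Late registration: after session start but still within the window.
    return LATE_FEE if today > SESSION_START else 0
```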
Also, I think the development time may differ depending on the programming language used, e.g. Java, C#, or PHP. Please guide me with your understanding.
Thanking You.

I would warmly suggest you read the book "Software Estimation: Demystifying the Black Art" by Steve McConnell.
You'll get many useful rules of thumb from there, many derived from COCOMO :-)
For example, as you state, the time will differ depending on the programming language. It's true: some studies have found that the number of LOCs a programmer produces doesn't depend on the language, but the productivity of those lines does.
So some very basic rules of software estimation (a rough sketch of combining per-task estimates into a range follows below):
Understand the cone of uncertainty
Give estimates as ranges, not single numbers
Re-estimate continuously
Divide the work into as many smaller tasks as possible, and estimate them individually
Estimate in this order: size, effort, costs/schedule
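To make the "ranges" and "smaller tasks" rules concrete, here is a minimal sketch that combines per-task three-point (PERT) estimates into an overall range. The task names and numbers are invented for the registration module, not real estimates:
```python
# Minimal sketch: combine per-task three-point (PERT) estimates into a range.
import math

# (optimistic, most likely, pessimistic) effort in person-days per task
tasks = {
    "registration status check": (0.5, 1.0, 2.0),
    "new-student data capture":  (2.0, 4.0, 8.0),
    "late-fee handling":         (0.5, 1.0, 3.0),
    "deadline/time check":       (0.5, 1.0, 2.0),
}

total_mean = 0.0
total_var = 0.0
for opt, likely, pess in tasks.values():
    mean = (opt + 4 * likely + pess) / 6   # PERT expected value
    std = (pess - opt) / 6                 # PERT standard deviation
    total_mean += mean
    total_var += std ** 2                  # assumes independent tasks

total_std = math.sqrt(total_var)
print(f"Estimate: {total_mean:.1f} person-days "
      f"(roughly {total_mean - 2 * total_std:.1f} "
      f"to {total_mean + 2 * total_std:.1f})")
```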

Related

Deep Learning, recommendation regarding which architecture to use [closed]

I'm currently developing an application which allows psychologists to manage their schedule and budget. As a proof of concept, I would like to create an intelligent appointment service. There can be 3 cases:
I know the client, I need to guess the day and time for his next appointment
I know the day, I need to guess which client and at what time
I know nothing, I need to guess which client, which day and what time
I'm currently in the process of learning deep learning algorithms just to get a bit of theory, but it's a little bit overwhelming.
There are features I know I can extract from the appointments:
Day preference in the week (always on Monday, say)
Recurrence (every two weeks or so)
Number of days since the last appointment
Whether the client showed up to their last appointment
etc.
I know there is such a thing as "feature extraction", where you can train a neural network to find the features itself, but all the examples refer to image recognition or speech analysis.
I want the algorithm to train on the existing and future appointments (stored in a MongoDB database). I would also like the algorithm to train live: if it proposes an appointment and the user takes it, that should count as positive feedback. On the other hand, if the user navigates away or changes any parameter, the algorithm should adjust its weights accordingly.
I also know I should start by extracting data from the DB and transforming it into a vector or matrix, then the algorithm is supposed to train on that data.
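For concreteness, here is a minimal sketch of that extraction step, turning appointment records (as they might come back from MongoDB) into a feature matrix. The field names and values are hypothetical:
```python
# Minimal sketch: turn appointment documents into a feature matrix.
from datetime import datetime
import numpy as np

appointments = [  # stand-in for collection.find({"client_id": 1})
    {"client_id": 1, "date": datetime(2023, 1, 2, 9),  "attended": True},
    {"client_id": 1, "date": datetime(2023, 1, 16, 9), "attended": True},
    {"client_id": 1, "date": datetime(2023, 1, 30, 9), "attended": False},
]

rows = []
prev_date = None
for appt in sorted(appointments, key=lambda a: a["date"]):
    days_since_last = (appt["date"] - prev_date).days if prev_date else 0
    rows.append([
        appt["date"].weekday(),   # day-of-week preference (0 = Monday)
        appt["date"].hour,        # time-of-day preference
        days_since_last,          # recurrence signal
        int(appt["attended"]),    # no-show history
    ])
    prev_date = appt["date"]

X = np.array(rows)
print(X)
```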
Is this correct? How can I start and what kind of architecture do I need?
Since it's a POC, I assume you don't have a large dataset, so I would not recommend going with deep learning. Start with something smaller, like a decision-tree kind of algorithm, and when you have a good amount of data, move to deep models. Why? It's always easier to tweak a tree-like model and to explain it to the client. Also, as suggested by Prof. Andrew Ng, deep learning requires at least 100K observations to learn and perform well. With a simulated dataset, it's always unpredictable.
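As a rough illustration of that suggestion (not the answerer's own code), a decision tree on the features listed in the question might look like this, with entirely synthetic data:
```python
# Minimal sketch: predict the weekday of a client's next appointment
# from simple history features. All data here is synthetic.
from sklearn.tree import DecisionTreeClassifier

# features: [last_weekday, days_since_last, attended_last (0/1)]
X = [
    [0, 14, 1],
    [0, 14, 1],
    [0, 28, 0],
    [2, 7, 1],
    [2, 7, 1],
]
y = [0, 0, 0, 2, 2]  # weekday of the next appointment (0 = Monday)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.predict([[0, 14, 1]]))  # -> likely Monday for this client
```
A tree this small can also be printed and walked through with the client, which is the main selling point of starting simple.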

Is there some reliable way of detecting fake Facebook profiles [closed]

I believe this could be interesting for many Facebook developers. Is there some reliable way of detecting fake profiles on Facebook? I am developing games and applications for Facebook and have some virtual goods for sale. If a player wants to play more, he can create another profile, or many others, and play as much as he likes. The idea here is to somehow detect this and stop them from doing so.
Best Regards!
Put validation on the number of friends: if the number of friends is below a particular threshold, disallow the user; otherwise continue. Well, that's only an opinion, not a solution. :)
You can try using anomaly detection.
Make your 'features' the number of likes, spam reports, friends, or other relevant signals you've found helpful, and use the algorithm to detect the anomalies.
Another approach could be supervised learning, but it requires a labeled set of examples of "fake" and "real" users. The 'features' would be similar to those for anomaly detection.
Train your learning algorithm using the labeled set (usually referred as training set), and use the resulting classifier to decide if a new user is fake or not.
Some algorithms you can use are SVM, C4.5, KNN, Naive Bayes.
You can evaluate results for both methods using cross-validation (this requires a labeled set, of course).
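As a sketch of both approaches (using an isolation forest as a stand-in anomaly detector, and naive Bayes from the list above), on made-up features [friends, likes, spam reports]:
```python
# Minimal sketch of both approaches; all numbers and labels are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X = np.array([
    [250, 120, 0], [310, 200, 1], [180, 90, 0],  # typical users
    [  3,   0, 9], [  1,   2, 7],                # suspicious profiles
])

# 1) Unsupervised: flag anomalies without any labels.
iso = IsolationForest(contamination=0.4, random_state=0).fit(X)
print(iso.predict(X))  # -1 = anomaly, 1 = normal

# 2) Supervised: needs a labeled training set (1 = fake, 0 = real).
y = np.array([0, 0, 0, 1, 1])
print(cross_val_score(GaussianNB(), X, y, cv=2))  # 2-fold CV on this toy set
```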
If you want to learn more about machine learning approaches, I recommend taking the web course on Coursera.

Developer Skill Matrix: Useful or Harmful? [closed]

In a big corp, they often ask developers to fill in a matrix of what skills they have at what level. It's generally seen as a bit of a pain, but is it actually useful, or just another way for bureaucrats to try and reduce developers to a bunch of numbers on a spreadsheet?
Skill matrices are only partially helpful; they are good at giving you a general picture of your current "experience".
However, these skill matrices do not capture the most important aspect: the ability to learn.
This is the most important skill in IT, in my view, and everyone learns at different speeds.
E.g., throw developer A into a new technology stack: how long before he/she is productive?
Since IT/software development is a very wide field, I regard skill sheets as quite useful. I used to be a Linux expert and my skill sheet reflected that. Then I shifted into iOS/Mac development, and my current employer asked me to fill out a skill sheet tuned to Mac... and I immediately noticed that I was a novice in this field back then ;-) Vice versa, they were able to see whether I could fit into the company and where (in which team).
So of course they can be harmful if you lack the skills, but I think they make choices for employers easier (and I regard a big skill sheet in my CV as the most important part of the CV, even more so than the list of projects done).
The usefulness is totally dependent on what is being assessed. I work in an insurance company and this was done for all staff here. There was no category that I fit into and all the criteria were irrelevant.
I can see the benefit of assessing relevant criteria, it can identify weaknesses and target training, but those criteria need to be defined by someone who knows what you might not know.
Most of all, don't berate the bureaucrat for simplifying a complex object into a manageable set of information. As a programmer that's what you should be doing every day.
I think it is appropriate in big corporations, but for small and specialized consultancies I would do a personal interview.
In big corporations, if you don't fit in one place you may fit in another... in small teams I'd rather do a personal assessment.

Algorithms for City Simulation? [closed]

I want to create a city filled with virtual creatures.
Say, like SimCity, where each creature walks around doing its own tasks.
I'd prefer the city to not 'explode' or do weird things -- like the population dying off, or the population leaving, or any other unexpected crap.
Is there a set of basic rules I can encode each agent with so that the city will be 'stable'? (Much like how for physics simulations, we have some basic rules that govern everything; is there a set of rules that governs how a simulation of a virtual city will be stable?)
I'm new to this area and have no idea what algorithms/books to look into. Insights deeply appreciated.
Thanks!
I would start with Conway's Game of Life.
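For reference, one step of the Game of Life is only a few lines; a minimal sketch (with wrap-around edges) looks like this:
```python
# Minimal sketch of one Conway's Game of Life step on a small toroidal grid,
# the classic starting point for grid-based simulations.
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    # Count the eight neighbours of every cell by shifting the whole grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Live cells survive with 2-3 neighbours; dead cells with exactly 3 are born.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(life_step(glider))
```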
Here is the original SimCity source code:
http://www.donhopkins.com/home/micropolis/micropolis-activity-source.tgz
It may be hard to find general resources on the subject, because it is quite a specific area.
I have implemented some population dynamics, and I know that it is not easy to get all the behavior right so that the population neither dies off nor overgrows. It is relatively easy if you implement a simple scenario like a predator-prey model, but it tends to get tricky as the number of factors increases.
Some advice:
Try to make the behavior of agents parametrized
Optimize the behavior parameters using some soft method (a neural network, a genetic algorithm, or a simple hill-climbing algorithm), optimizing a single metric of the simulation, like the time before the whole population dies off combined with the average growth factor; see the sketch below
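A minimal sketch of the hill-climbing idea: nudge one behavior parameter at a time and keep changes that improve a stability score. run_simulation() here is a hypothetical stand-in for the actual city simulation:
```python
# Minimal sketch of hill-climbing over simulation parameters.
import random

def run_simulation(params: dict) -> float:
    # Placeholder score: in reality, run the city and measure stability,
    # e.g. ticks survived plus average growth factor.
    return (-(params["birth_rate"] - 0.3) ** 2
            - (params["move_speed"] - 1.2) ** 2)

params = {"birth_rate": 0.5, "move_speed": 1.0}
best = run_simulation(params)

for _ in range(200):
    key = random.choice(list(params))
    candidate = dict(params)
    candidate[key] += random.uniform(-0.05, 0.05)  # small random nudge
    score = run_simulation(candidate)
    if score > best:                               # keep only improvements
        params, best = candidate, score

print(params, best)
```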
Here is a pointer to some research on the topic, but be advised -- the population in this research study all died off.
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104261

What is the best practice for estimating required time for development of the SDLC phases? [closed]

As a project manager, you are required to organize time so that the project meets a deadline.
Is there some sort of equation to use for estimating how long the development will take?
Let's say, for the database:
time = (SQL stored procedures) * (tables manipulated), or something similar.
Or are you just stuck having to get the experience to get adequate estimations?
As a project manager, you have to remember that the best you will ever be able to do on your own is give your best guess as to how long a given project will take. How accurate you are depends on your experience and the scope of the project.
The only way I know of to get a reasonably accurate estimate is to break the project into individual tasks and get the developer who will be doing the actual work to put an estimate on each task. You can then use an evidence-based algorithm that takes each developer's estimation accuracy into account to give you the probability of hitting a given deadline.
If the probability is too low, you have two choices: remove features or move the deadline.
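A minimal sketch of that evidence-based idea, in the spirit of the Monte Carlo link below; all numbers are invented:
```python
# Minimal sketch of evidence-based scheduling: scale each task estimate by a
# velocity factor sampled from the developer's past estimate/actual history,
# then read the deadline probability off the simulated distribution.
import random

task_estimates = [5, 3, 8, 2]            # days, as estimated by the developer
past_velocities = [0.8, 1.0, 1.3, 1.6]   # actual/estimate ratios from history
deadline = 22                            # days available

trials = 10_000
hits = 0
for _ in range(trials):
    total = sum(est * random.choice(past_velocities) for est in task_estimates)
    hits += total <= deadline

print(f"Probability of hitting the deadline: {hits / trials:.0%}")
```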
Further reading:
http://www.joelonsoftware.com/items/2007/10/26.html
http://www.wordyard.com/2007/10/11/evidence-based-scheduling/
http://en.wikipedia.org/wiki/Monte_Carlo_method
There's no set formula out there that I've seen that would really work. FogBugz has its Monte Carlo simulator, which is somewhat of a concept for this, but really, experience is going to be your best point of reference. Every developer and every project will be different!
There will be such a formula as soon as computers can start generating all code themselves. Until then, you are stuck with human developers, who all have different levels of skill and development speed.
