I have found dynamic programming (DP) to be tricky and demanding. But since I want to become a competent software engineer, I am wondering in which development scenarios DP is heavily used; in other words, does it have any practical use in development on modern computers?
Compared with design patterns such as the Proxy pattern and dynamic proxies, which are used broadly in the Spring framework, DP seems useful only in tech interviews.
Also, parallel computing and distributed systems do not seem to lend themselves easily to DP in a modern computing context.
Are there any reasonably common scenarios where DP is widely used in practical ways?
Please forgive my ignorance; I haven't encountered DP in real production-level development, which makes me doubt the value of digging into it.
I agree with Matt Timmermans:
You don't learn about DP in case you have to use DP someday. By practicing DP, you learn ways of thinking about problems that will make you a better developer. In 10 years, nobody will care about the Spring framework, but the techniques you learned from DP will still serve you well.
Now, the answers to your questions, part by part:
1) Why DP if we have modern computers?
I think you are conflating the power of modern computers with the need for DP. Because modern processors are fast, you may wonder why you need DP at all if you have fast hardware to run your application on.
But not every task can simply be thrown at powerful machines, because storage, network, and compute all have costs. As engineers, we should be thinking about optimizing the use of such resources, that is, making code efficient enough to run on minimal system configurations.
In today's world we have shared-service architectures, meaning that different, independent services share resources; they are indirectly interdependent. Imagine what happens if one piece of non-optimized code consumes a lot of memory and compute time: the processors will struggle to allocate resources for other services or applications.
The idea is: "Why should I buy the whole building if a multi-bedroom flat meets my needs and also leaves room for others to buy flats in the same building?"
2) DP in tech interviews
What makes DP one of the most challenging topics to ace is the number of variations it has.
It tests your ability to break a difficult task down into smaller ones so as to avoid repeated work, and thus save time, effort, and overall resources.
That is one of the main reasons DP is part of tech interviews.
Not only does DP teach you to optimize and learn useful techniques, it also highlights bad coding practices.
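To make the point about avoiding repeated work concrete, here is a minimal Python sketch (mine, not the answerer's) comparing a naive recursive Fibonacci with a memoized one; the naive version recomputes the same subproblems exponentially many times, while the memoized version solves each subproblem once.

    from functools import lru_cache

    def fib_naive(n):
        # Recomputes the same subproblems over and over: exponentially many calls.
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):
        # Each subproblem is computed once and cached: O(n) calls.
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    print(fib_memo(90))      # returns instantly
    # print(fib_naive(90))   # would take far too long to finish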
3) Usage of DP in real life
In Google Maps, to find the shortest path between a source and a series of destinations (one by one) out of the various available paths.
In networking, to transfer data from a sender to various receivers in a sequential manner.
Document-distance algorithms, to measure how similar two text documents are; used by search engines and sites such as Google, Wikipedia, and Quora.
The edit-distance algorithm used in spell checkers (see the sketch after this list).
Databases caching common queries in memory: dedicated cache tiers store data to avoid hitting the database; web servers store common data, such as configuration, that can be reused across requests; multiple levels of caching inside the code of a single request prevent fetching the same data repeatedly and save CPU cycles by avoiding recomputation; and finally, caches in your browser or phone keep data that doesn't need to be fetched from the server every time.
Git merge: document diffing is one of the most prominent uses of LCS (longest common subsequence).
Dynamic programming is used in TeX's algorithm for calculating optimal hyphenation and line justification.
Genetic algorithms.
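As a concrete illustration of the edit-distance and diff-related items above, here is a small Python sketch (my own addition, not part of the original answer) of the classic Levenshtein edit-distance table; the LCS computation behind document diffing fills a table in the same way.

    def edit_distance(a: str, b: str) -> int:
        # dp[i][j] = minimum number of edits to turn a[:i] into b[:j]
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i          # delete all of a[:i]
        for j in range(n + 1):
            dp[0][j] = j          # insert all of b[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1]          # characters match, no cost
                else:
                    dp[i][j] = 1 + min(dp[i - 1][j],     # deletion
                                       dp[i][j - 1],     # insertion
                                       dp[i - 1][j - 1]) # substitution
        return dp[m][n]

    print(edit_distance("kitten", "sitting"))  # 3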
Also, I found a great answer on Quora which lists the areas in which DP can be used:
Operations research,
Decision making,
Query optimization,
Water resource engineering,
Economics,
Reservoir Operations problems,
Connected speech recognition,
Slope stability analysis,
Using MATLAB,
Using Excel,
Unit commitment,
Image processing,
Optimal Inventory control,
Reservoir operational Problems,
SAP ABAP,
Sequence Alignment,
Simulation for sewer management,
Finance,
Production Optimization,
Genetic Algorithms for permutation problem,
Haskell,
HTML,
Healthcare,
Hydropower scheduling,
LISP,
Linear space,
XML indexing and querying,
Business,
Bioinformatics
Considering that many libraries already have optimized sorting implementations, why do many companies still ask about Big O and sorting algorithms, when in everyday computing this kind of implementation is rarely needed anymore?
For example, the self-balancing binary tree is a kind of problem that some big companies in the embedded industry still ask programmers to code as part of their testing and candidate selection process.
Even for embedded coding, are there any circumstances in which such an implementation would actually be written, given that Boost, SQLite, and other libraries are available for use? In other words, is it still worth thinking about ways to optimize such algorithms?
As an embedded programmer, I would say it all comes down to the problem and the system constraints. Especially on a microprocessor, you may not want or need to pull in Boost, and SQLite may still be too heavy for a given problem. How you chop up problems looks different if you have, say, 2 KB of RAM, but that is definitely the extreme.
So, for example, you probably don't want to rewrite the code for a red-black tree yourself, because, as you pointed out, you will likely end up with highly non-optimized code. But in the pursuit of reusability, abstraction often adds layers of indirection to the code. So you may end up reimplementing at least the simpler "solved" problems where you can do better than the built-in library for certain niche cases. Recently I wrote a specialized version of linked lists using shared memory pools for allocation across lists, for example. I benchmarked against the STL's list and it just wasn't cutting it because of the added layers of abstraction. Is my code as good? No, probably not. But I was more easily able to specialize it for the specific use cases, so it came out better.
So I guess I'd like to address a few things from your post:
-Why do companies still ask about big-O runtime? I have seen even pretty good engineers make bad choices with regard to data structures because they didn't reason carefully about the O() runtime. Quadratic versus linear, or linear versus constant-time operations, is a painful lesson when you get the assumption wrong (ah, the voice of experience). See the small sketch after these two points.
-Why do companies still ask about implementing classic structures/algorithms? Arguably you won't reimplement quick sort, but as stated, you may very well end up implementing slightly less complicated structures on a more regular basis. Truthfully, if I'm looking to hire you, I'm going to want to make sure that you understand the theory inside and out for existing solutions so if I need you to come up with a new solution you can take an educated crack at it. And if the other applicant has that and you don't, I'd probably say they have an advantage.
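To illustrate the linear-versus-constant point, here is a deliberately simple, illustrative Python sketch of my own; membership tests against a list are O(n) each, while a set gives expected O(1), so choosing the wrong structure turns the whole loop quadratic.

    import time

    def count_hits(queries, allowed):
        # Total cost depends on the container: roughly O(q) with a set, O(q * n) with a list.
        return sum(1 for q in queries if q in allowed)

    n = 10000
    queries = list(range(n))
    allowed_list = list(range(0, n, 2))   # list: each 'in' is a linear scan
    allowed_set = set(allowed_list)       # set: each 'in' is a hash lookup

    t0 = time.perf_counter(); count_hits(queries, allowed_list); t1 = time.perf_counter()
    t2 = time.perf_counter(); count_hits(queries, allowed_set); t3 = time.perf_counter()
    print(f"list: {t1 - t0:.3f}s  set: {t3 - t2:.3f}s")  # the list version is orders of magnitude slower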
Also, here's another way to think about it. In software development, often the platform is quite powerful and the consumer already owns the hardware platform. In embedded software development, the consumer is probably purchasing the hardware platform - hopefully from your company. This means that the software is often selling the hardware. So often it makes more cents to use less powerful, cheaper hardware to solve a problem and take a bit more time to develop the firmware. The firmware is a one-time cost upfront, whereas hardware is per-unit. So even from the business side there are pressures for constrained hardware which in turn leads to specialized structure implementation on the software side.
If you suggest using SQLite on a 2 kB Arduino, you might hear a lot of laughter.
Embedded systems are a bit special. They often have extraordinarily tight memory requirements, so space complexity may be far more important than time complexity. I've rarely worked in that area myself, so I don't know what embedded systems companies are interested in, but if they're asking such questions, it's probably because you'll need to be more acquainted with such issues than in other areas of IT.
Nothing is optimized enough.
Besides, the questions are meant to test your understanding of the solution (and each part of the solution) and not how great you are at memorizing stuff. Hence it makes perfect sense to ask such questions.
My question is: how can someone realistically estimate the duration of a project in hours?
This has been my overall problem throughout my career.
Let's say you need a simple app with customer/orders forms and some inventory capacity along with user/role privileges.
If you went with Java/Spring etc., it would take X time.
If you went with ASP.NET, it would take Y time.
If you went with PHP or something else, it would take yet another amount of time,
right?
Everyone says to break the project into smaller parts. I agree. But then how do you estimate that, say, the user/role part takes so much time?
The problem gets worse when you implement things that have, let's say, been 'upgraded' (Spring 2.5 vs. Spring 3.0 has a lot of differences, for example).
Or perhaps you run into things that you can't possibly know in advance because they are new, and you always run into new things! Sometimes I have found odd behaviours, like some .NET presentation libraries that gave me the jitters and cost me two nights of my life over something that was perhaps just a typo in some XML file...
Basically, what I see as the problem is that there is no standardized schedule for these things, no pre-estimated time and cost evaluation. It is a service, not a product, so it scales with human ability.
But what are the average times for average things for average people?
We have lots of cost estimation models that give us a schedule estimate, COCOMO II being a good example, although, as you said, a small typo can still cost you two nights. So what's the solution? In my view:
Expert judgement serves the industry best and will continue to do so in spite of the various cost estimation techniques springing up, because that judgement more or less already accounts for the overheads you have faced doing such projects in the past.
Some models give you a direct mapping between programming-language LOC per function point, but that is still at a fairly high level and does not tell you what happens if a new framework is introduced (as you mentioned, switching from Spring 2.5 to 3.0).
As you said, something new always keeps coming up, so I think that is where expert judgement again comes into play; experience can help you determine where you might end up spending more time.
So, in my view, it should be a mix of an estimation technique and expert judgement about the overheads that might occur in the project. More often than not, you will end up with a figure very near your actual effort.
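For reference, here is a small Python sketch of the textbook Basic COCOMO model (the simpler predecessor of the COCOMO II mentioned above), using the published coefficients for an "organic" project; it is an illustration of model-based estimation, not a recommendation of these particular numbers.

    def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
        # Textbook Basic COCOMO, "organic" mode coefficients.
        effort_pm = a * kloc ** b          # effort in person-months
        schedule_m = c * effort_pm ** d    # development time in months
        people = effort_pm / schedule_m    # average team size
        return effort_pm, schedule_m, people

    effort, months, team = basic_cocomo(32)   # e.g. a hypothetical 32 KLOC project
    print(f"effort ~{effort:.0f} person-months, ~{months:.0f} months, team of ~{team:.1f}")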
My rule of thumb is to always keep the estimate language-agnostic initially. Meaning, do NOT create different estimates for different implementation languages. That does not make sense, because a client expresses requirements to you in business terms.
That is the "what". The "how" is up to you, and the estimate given to a client should not leave that choice to them. The "how" is solely your domain as an analyst. You arrive at it by analyzing the demands of the business, then considering the experience in your teams, your own time constraints, your infrastructure, and so on. Then deliver an estimate in BOTH time and the tech you assume you will use.
Programming languages and algorithms are just mediators for achieving a business need. Therefore the client should not be deciding this, so you should give only one estimate, where you as an analyst have made the decision on what to use, given the problem at hand and your resources.
I follow these three domains as a rule:
The "what" - Requirements, they should be small units of CLEAR scoping, they are supplied by the client
The "how" - Technical architecture, the actual languages and tech used, infrastructure, and team composition. Also the high level modeling of how the system should integrate and be composed (relationship of components), this is supplied by the analyst with help of his desired team
The "when" - This is your actual time component. This tells the client when to expect each part of the "what". When will requirement or feature x be delivered. This is deducted by the analyst, the team AND the client together based on the "how" and "what" together with the clients priorities. Usually i arrive at the sums based on what the team tells me (technical time), what i think (experience) and what the client wants (priority). The "when" is owned by all stakeholders (analyst/CM, team and client).
The forms to describe the above can vary, but i usually do high level use case-models that feed backlogs with technical detailing. These i then put a time estimate on, first with the team, then i revise with the client.
The same backlog can then be used to drive development sprints...
Planning and estimation are among the most difficult parts of software engineering. What I do is this:
- Split the work into smaller parts
- Invite some team members (about 5-8)
- Discuss what is meant by each item
- Have each of them write down, on a form, how many hours each item will take; don't discuss it, and don't let them look at each other's estimates
- Then, for each item, look at the average and discuss any large variance (risks?)
- Remove the lowest and highest value for each item
- Take the average of the rest
This works best for work that is based on experience; for new projects with completely new things to do, it is always more difficult to plan.
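Here is a minimal Python sketch of the "remove the lowest and highest, then average" step described above; the estimates are invented purely for illustration.

    def trimmed_average(hours):
        # Drop the single highest and lowest estimate, then average the rest.
        if len(hours) < 3:
            return sum(hours) / len(hours)
        trimmed = sorted(hours)[1:-1]
        return sum(trimmed) / len(trimmed)

    # One row of the form: each team member's estimate (in hours) for one item.
    estimates = [16, 24, 20, 40, 18, 22]
    print(trimmed_average(estimates))   # 21.0; discuss the 40 as a possible risk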
Hope this helps.
There is no short, easy answer to your question. The best I can do is refer you to a paper which discusses some of the different models and techniques used for software cost estimation.
Software Cost Estimation - F J Heemstra
What are some data structures that should be known by somebody involved in bioinformatics? I guess that anyone is supposed to know about lists, hashes, balanced trees, etc., but I expect that there are domain specific data structures. Is there any book devoted to this subject?
The most fundamental data structure used in bioinformatics is the string. There is also a whole range of different data structures for representing strings, and algorithms such as string matching depend on efficient representations/data structures.
A comprehensive work on this is Dan Gusfield's Algorithms on Strings, Trees and Sequences
A lot of introductory books on bioinformatics will cover some of the basic structures you'd use. I'm not sure what the standard textbook is, but I'm sure you can find that. It might be useful to look at some of the language-specific books:
Bioinformatics Programming With Python
Beginning Perl for Bioinformatics
I chose those two as examples because they're published by O'Reilly, which, in my experience, publishes good quality books.
I just so happen to have the Python book on my hard drive, and a great deal of it talks about processing strings for bioinformatics using Python. It doesn't seem like bioinformatics uses any fancy special data structures, just existing ones.
Spatial data structures (the k-d tree, for example) are often used for nearest-neighbour queries over arbitrary feature vectors as well as for 3D protein structure analysis.
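As a small illustrative sketch of such a nearest-neighbour query (assuming SciPy and NumPy are available; the points here are random stand-ins for real feature vectors or atom coordinates):

    import numpy as np
    from scipy.spatial import cKDTree

    # e.g. 3D coordinates of atoms, or arbitrary feature vectors
    points = np.random.rand(10000, 3)
    tree = cKDTree(points)

    query = np.array([0.5, 0.5, 0.5])
    dist, idx = tree.query(query, k=5)                  # 5 nearest neighbours
    print(idx, dist)

    neighbours = tree.query_ball_point(query, r=0.05)   # all points within radius 0.05
    print(len(neighbours))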
Best book for your $$ is Understanding Bioinformatics by Zvelebil because it covers everything from sequence analysis to structure comparison.
In addition to basic familiarity with the structures you mentioned, suffix trees (and suffix arrays), de Bruijn graphs, and interval graphs are used extensively. The Handbook of Computational Molecular Biology is very well written. I've never read the whole thing, but I've used it as a reference.
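To give a flavour of these string structures, here is a deliberately naive Python sketch of a suffix array plus a binary search for a pattern; production tools use much faster constructions, but the idea is the same.

    from bisect import bisect_left, bisect_right

    def suffix_array(s):
        # Naive construction: sort suffix start positions lexicographically.
        return sorted(range(len(s)), key=lambda i: s[i:])

    def find_occurrences(s, sa, pattern):
        # All suffixes starting with `pattern` form a contiguous block in the suffix array.
        prefixes = [s[i:i + len(pattern)] for i in sa]
        lo = bisect_left(prefixes, pattern)
        hi = bisect_right(prefixes, pattern)
        return sorted(sa[lo:hi])

    genome = "ACGTACGTGACG"
    sa = suffix_array(genome)
    print(find_occurrences(genome, sa, "ACG"))  # [0, 4, 9]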
I also highly recommend this book, http://www.comp.nus.edu.sg/~ksung/algo_in_bioinfo/
Also, more recently, Python is used much more frequently in bioinformatics than Perl. So I really suggest you start with Python; it is widely used in my projects.
Many projects in bioinformatics involve combining information from different, semi-structured sources. RDF and ontologies are essential for much of this. See, for example, the bio2RDF project. http://bio2rdf.org/. A good understanding of identifiers is valuable.
Much of bioinformatics is exploratory, and rapid, lightweight tools are often used. See workflow tools such as Taverna, where the primary resource is often a set of web services, so HTTP/REST are common.
Whatever your mathematical or computational expertise is, you are likely to find an application for it in computational biology. If not, make that another question on Stack Overflow and you'll be helped :o)
As mentioned in the other answers, string comparison and pattern discovery in one-dimensional data are somewhat timeless, since sequences are so easy to obtain. With the renewed interest in medical informatics, though, you also have two- and three-dimensional image analysis that you run, e.g., against genomic data. With molecular biochemistry you also have pattern searches on 3D surfaces and molecular simulations. To study drug effects you will work with gene networks and compare them across tissues. The typical challenges of big data and information integration apply. And then you need statistical descriptions of the likelihood that a pattern, or the clinical association of any identified feature, would be found by chance.
I was wondering if anyone has experience with metrics used to measure software quality. I know there are code complexity metrics, but I'm wondering if there is a specific way to measure how well the software actually performs during its lifetime. I don't mean runtime performance, but rather a measure of the quality. Any suggested tools that would help gather these are welcome too.
Are there measurements to answer these questions:
How easy is it to change/enhance the software; how robust is it?
If it is a common/general enough piece of software, how reusable is it?
How many defects were associated with the code?
Has it needed to be redesigned/recoded?
How long has this code been around?
Do developers like how the code is designed and implemented?
Seems like most of this would need to be closely tied with a CM and bug reporting tool.
If measuring code quality in the terms you describe were a straightforward job and the metrics accurate, there would probably be no need for project managers anymore; even more, the distinction between good and poor managers would be very small. The fact that it isn't shows that getting an accurate idea of the quality of your software is no easy job.
Your questions span multiple areas that are quantified differently, or whose quantification is very subjective, so you should group them into categories that correspond to common targets. Then you can assign an "importance" factor to each category and derive some metrics from that.
For instance you could use static code analysis tools for measuring the syntactic quality of your code and derive some metrics from that.
You could also derive metrics from bugs/lines of code using a bug tracking tool integrated with a version control system.
For measuring robustness, reuse, and efficiency of the coding process, you could evaluate the use of design patterns per feature developed (of course, only where it makes sense). There's no tool that will help you achieve this, but if you monitor your software as it grows bigger and put numbers on these things, it can give you a pretty good idea of how your project is evolving and whether it's going in the right direction. Introducing code-review procedures could help you keep track of these more easily and possibly address them early in the development process. A number to put on this could be the percentage of features implemented using the appropriate design patterns.
While metrics can be quite abstract and subjective, if you dedicate time to it and always try to improve them, it can give you useful information.
A few things to note about metrics in the software process though:
Unless you do them well, metrics can do more harm than good.
Metrics are difficult to do well.
You should be cautious in using metrics to rate individual performance or offering bonus schemes. Once you do this everyone will try to cheat the system and your metrics will prove worthless.
If you are using Ruby, there are some tools to help you out with metrics, ranging from LOC/method and methods/class to Saikuro's cyclomatic complexity.
My boss actually gave a presentation on the software metrics we use at a Ruby conference last year; these are the slides.
An interesting tool that gives you a lot of metrics at once is metric_fu. It checks a lot of interesting aspects of your code: stuff that is highly similar, changes a lot, or has a lot of branches. All signs your code could be better :)
I imagine there are a lot more tools like this for other languages too.
There is a good thread from the old Joel on Software Discussion groups about this.
I know that some SVN statistics programs provide an overview of changed lines per commit. If you have a bug-tracking system, and the people fixing bugs and adding features record the commit number when a bug is fixed, you can then calculate how many lines were affected by each bug/new feature request. This could give you a measure of changeability.
The next thing is simply to count the number of bugs found and relate that to the number of lines of code. There are published figures for how many bugs per line of code high-quality software should have.
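A trivial Python sketch of the ratios described in the two paragraphs above (all numbers invented for illustration):

    def defect_density(defect_count, lines_of_code):
        # Commonly expressed as defects per thousand lines of code (KLOC).
        return defect_count / (lines_of_code / 1000.0)

    def avg_lines_per_fix(lines_changed_per_bugfix):
        # Rough "changeability" signal: how much code each bug fix touches.
        return sum(lines_changed_per_bugfix) / len(lines_changed_per_bugfix)

    print(defect_density(defect_count=42, lines_of_code=120_000))   # 0.35 defects/KLOC
    print(avg_lines_per_fix([12, 80, 5, 33]))                       # 32.5 lines per fix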
You could do it the economic way or the programmer's way.
In the economic approach, you measure the costs of improving code, fixing bugs, adding new features, and so on. If you choose the second approach, you may want to measure how many staff work with your program and how easy it is to, say, find and fix an average bug, in person-hours. Certainly neither is flawless, because costs depend on the market situation and person-hours depend on the actual people and their skills, so it's better to combine both methods.
That way you get some instruments to measure the quality of your code. Of course you should take into account the size of your project and other factors, but I hope the main idea is clear.
A more customer focused metric would be the average time it takes for the software vendor to fix bugs and implement new features.
It is very easy to calculate based on the creation and closure dates recorded in your bug-tracking software.
If your average bug fixing/feature implementation time is extremely high, this could also be an indicator for bad software quality.
You may want to check the following page, which describes various aspects of software quality, including sample plots. Some of the quality characteristics you need to measure can be derived using a tool such as Sonar. It is very important to figure out how you would want to model some of the following aspects:
Maintainability: You mentioned how easy it is to change/test the code or reuse it. These relate to the testability and reusability aspects of maintainability, which is considered a key software quality characteristic. Thus, you could measure maintainability as a function of testability (unit test coverage) and reusability (a cohesion index of the code); see the sketch after these points.
Defects: Defect counts alone may not be a good thing to measure. However, if you can model defect density, it can give you a much better picture.
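A minimal, purely illustrative Python sketch of the idea above: combining unit-test coverage, a cohesion index, and defect density into a single maintainability score. The weights and thresholds are my own invented placeholders, not from any standard.

    def maintainability_score(test_coverage, cohesion_index, defects, kloc,
                              w_test=0.4, w_cohesion=0.4, w_defects=0.2):
        # test_coverage and cohesion_index are in [0, 1]; the weights are arbitrary choices.
        density = defects / kloc                        # defects per KLOC
        defect_penalty = max(0.0, 1.0 - density / 5.0)  # treat 5 defects/KLOC as "bad" (assumption)
        return (w_test * test_coverage
                + w_cohesion * cohesion_index
                + w_defects * defect_penalty)

    print(maintainability_score(test_coverage=0.72, cohesion_index=0.60,
                                defects=42, kloc=120))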
Some questions about Function Points:
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods)
2) And is the effort required worth the benefit you get out of it?
3) Which type of Function Points do you use?
4) Do you use any tools for doing this?
Edit: I am interested in hearing from people who use them or have used them. I've read up on estimation practices, including pros/cons of various techniques, but I'm interested in the value in practice.
I was an IFPUG Certified Function Point Specialist from 2002-2005, and I still use them to estimate business applications (web-based and thick-client). My experience is mostly with smaller projects (1000 FP or less).
I settled on Function Points after using Use Case Points and Lines of Code. (I've been actively working with estimation techniques for 10+ years now).
Some questions about Function Points:
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods)
Hard to answer quickly, as it depends on where you are in the lifecycle (from gleam-in-the-eye to done). You also have to realize that there's more to estimation than precision.
Their greatest strength is that, when coupled with historical data, they hold up well under pressure from decision-makers. By separating the scope of the project from productivity (h/FP), they result in far more constructive conversations. (I first got involved in metrics-based estimation when I, a web programmer, had to convince a personal friend of my company's founder and CEO to go back to his investors and tell them that the date he had been promising was unattainable. We all knew it was, but it was the project history and functional sizing (home-grown use case points at the time) that actually convinced him.)
Their advantage is greatest early in the lifecycle, when you have to assess the feasibility of a project before a team has even been assembled.
Contrary to common belief, it doesn't take that long to come up with a useful count, if you know what you're doing. Just off of the basic information types (logical files) inferred in an initial client meeting, and average productivity of our team, I could come up with a rough count (but no rougher than all the other unknowns at that stage) and a useful estimate in an afternoon.
Combine Function Point Analysis with a Facilitated Requirements Workshop and you have a great project set-up approach.
Once things were getting serious and we had nominated a team, we would then use Planning Poker and some other estimation techniques to come up with an independent number, and compare the two.
2) And is the effort required worth the benefit you get out of it?
Absolutely. I've found preparing a count to be an excellent way to review user-goal-level requirements for consistency and completeness, in addition to all the other benefits. This was even in setting up Agile projects. I often found implied stories the customer had missed.
3) Which type of Function Points do you use?
IFPUG CPM (Counting Practices Manual) 4.2
4) Do you use any tools for doing this?
An Excel spreadsheet template I was given by the person who trained me. You put in the file or transaction attributes, and it does all of the table lookups for you.
As a concluding note, NO estimate is as precise (or more precisely, accurate) as the bean-counters would like, for reasons that have been well documented in many other places. So you have to run your projects in ways that can accommodate that (three cheers for Agile).
But estimates are still a vital part of decision support in a business environment, and I would never want to be without my function points. I suspect the people who characterize them as "fantasy" have never seen them properly used (and I have seen them overhyped and misused grotesquely, believe me).
Don't get me wrong, FP have an arbitrary feel to them at times. But, to paraphrase Churchill, Function Points are the worst possible early-lifecycle estimation technique known, except for all the others.
Mike Cohn, in his Agile Estimating and Planning, considers FPs to be great but difficult to get right. He (obviously) recommends using story-point-based estimation instead. I tend to agree, as with each new project I see the benefits of the Agile approach more and more.
1) Is it a reasonably precise way to do estimates? (I'm not unreasonable here, but just want to know compared to other estimation methods)
As far as estimation precision goes, function points are very good. In my experience they are great but expensive in terms of the effort involved if you want to do them properly. Not many projects can afford an elaboration phase to get the FP-based estimates right.
2) And is the effort required worth the benefit you get out of it?
FPs are great because they are officially recognised by ISO, which gives your estimates a great deal of credibility. If you work on a big project for a big client, it might be useful to invest in official-looking detailed estimates. But if the level of uncertainty is high to start with (third-party integrations, legacy systems, loose requirements, etc.), you will not get anywhere near that precision anyway, so usually you just have to accept this and re-estimate later. In that case, a cheaper way of doing the estimates (user stories and story points) is better.
3) Which type of Function Points do you use?
If I understand this part of your question correctly, we used to do estimates based on Feature Points but gradually moved away from these on almost all projects, except the ones with a heavy emphasis on internal functionality.
4) Do you use any tools for doing this?
Excel is great with all the formulas you could use. Using Google Spreadsheets instead of Excel helps if you want to do that collaboratively.
There is also a great tool built into Sparx Enterprise Architect that allows you to do estimates based on use cases, which could be used for FP estimation as well.
The great Hacknot site is offline now, but it is available in book form. He has an essay on function points: http://www.scribd.com/doc/459372/hacknot-book-a4, concluding that they are a fantasy (which I agree with).
Joel on Software has a reasonably sound alternative called Evidence Based Scheduling that at least sounds like it might work...
From what I have studied about function points (one of my teachers was heavily involved in developing the theory behind them), even he wasn't able to answer all our questions. Function points fail in many ways, because having something to read or write does not mean you can evaluate it correctly. You might get a result of 450 function points, and some of those function points will take an hour while others will take a week. It's a metric that I will never use again.
No, because any particular requirement can require an arbitrary amount of effort, depending on how precise (or imprecise) the author of the requirement is and on the level of experience of the function point assessor.
No, because administering imprecise derivations of abstract functionality yields no reliable estimate.
None if I can help it.
Tools? For function points? How about Excel? Or Word? Or Notepad? Or Edlin?
To answer your questions:
Yes, they are more precise than anything else I have encountered (in 20+ years).
Yes, they are well worth the effort. You can estimate size, resources, quality, and schedule from just the FP count, which is extremely useful. It takes an average of 1 minute to count an FP manually and an average of 8 hours to fully code an FP (approximately $800 worth); a short worked example with these figures follows at the end of this answer. Consider the carpenter's saying, "measure twice, cut once". And now a shameless plug: with https://www.ScopeMaster.com you can measure 1 FP per second, and you don't need to learn how!
I like Cosmic Function Points (because they are versatile) and IFPUG because there is a lot of published data (mostly from Capers Jones).
Having invested considerable time, effort and money in developing a tool that counts FPs automatically from requirements, I shall never have to do it manually again!
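A short worked example (my own, in Python) using the figures quoted in point 2 above: roughly 1 minute to count an FP, 8 hours and about $800 to deliver one. The 300 FP project size is just a hypothetical illustration.

    fp = 300                      # hypothetical project size in function points

    counting_hours = fp * 1 / 60  # ~1 minute to count each FP manually
    build_hours = fp * 8          # ~8 hours to fully code each FP
    cost_usd = fp * 800           # ~$800 per FP

    print(f"counting: {counting_hours:.0f} h, build: {build_hours} h, cost: ${cost_usd:,}")
    # counting: 5 h, build: 2400 h, cost: $240,000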