I've always thought defensive programming was evil (and I still do), because in my experience it has always involved some unreasonable sacrifice made to guard against an unpredictable outcome. For example, I've seen a lot of people try to code defensively against their own co-workers: they'll do things "just in case" the code changes in some way later on, and they end up either sacrificing performance or reaching for some silver bullet meant to cover all circumstances.
Does this specific coding practice count as defensive programming? If not, what would it be called?
Wikipedia defines defensive programming as guarding against unpredictable usage of the software, but it says nothing about strategies for protecting code integrity against other programmers, so I'm not sure whether the term applies, or what this practice is called.
Basically, I want to be able to tell the people who do this that what they are doing is wrong, in a professional way. I want to be able to argue objectively against it, because it does more harm than good.
"Overengineering" is wrong.
"Defensive Programming" is Good.
It takes wisdom, experience ... and maybe a standing policy of frequent code reviews ... to tell the difference.
It all depends on the specifics. If you're developing software for other programmers to reuse, it makes sense to do at least a little defensive programming. For instance, you can document requirements about input all you want, but sometimes you need to test that the input actually conforms to those requirements to avoid disastrous behavior (e.g., destroying a database). This usually involves a (trivial) performance hit.
On the other hand, defensiveness can be way overdone. Perhaps that is what's informing your view. A specific example or two would help distinguish what's going on.
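To make that concrete, here's a minimal C# sketch of the reasonable kind of defensiveness: validate once at the public boundary, and don't re-check "just in case" inside every private helper. (The RecordImporter type and its methods are invented for illustration.)

```csharp
using System;
using System.Collections.Generic;

public class RecordImporter
{
    // Public entry point: validate here, once, at the boundary.
    public void ImportRecords(IList<string> records, string targetTable)
    {
        if (records == null)
            throw new ArgumentNullException(nameof(records));
        if (string.IsNullOrWhiteSpace(targetTable))
            throw new ArgumentException("A target table name is required.", nameof(targetTable));

        foreach (var record in records)
            Insert(record, targetTable);
    }

    // Private helper: its callers are already validated, so re-checking
    // everything "just in case" here would be the overdone kind of
    // defensiveness the question complains about.
    private void Insert(string record, string targetTable)
    {
        // ... actual insert logic ...
    }
}
```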
I'm presented with a need to rewrite an old legacy desktop application. It is a smallish non-Java desktop program that still supports the daily tasks of several internal user communities.
The language in which the application is written is both antiquated and no longer supported. I'm a junior developer, and I need to rewrite it. In order to avoid the app-rewrite sinkhole, I'm planning to start out using the existing database and data structures (there are some significant limitations, but as painful as refactoring will be, this approach will get the initial work done more quickly and avoid a migration, both of which are key to success).
My challenge is that I'm very conflicted about the concept of Keep It Simple. I understand that it is talking about functionality, not design. But as I look at writing this app, it seems like a tremendous amount of time could be spent chasing down design patterns (I'm really struggling with dependency injection in particular) when sticking with good (but non-"Gang of Four") design could get the job done dramatically faster and simpler.
This app will grow and live for a long time, but it will never become a 4 million line enterprise behemoth, and its objects aren't going to be used by another app (yes, I know, but what if....YAGNI!).
The question
Does KISS ever apply to architecture & design? Can the "refactor it later" rule be extended so far as to say, for example, "we'll come back around to dependency injection later" or is the project dooming itself if it doesn't bake in all the so-called critical framework support right away?
I want to do it "right"....but it also has to get done. If I make it too complex to finish, it'll be a failure regardless of design.
I'd say KISS certainly applies to architecture and design.
Over-architecture is a common problem in my experience, and there's a code smell that relates:
Contrived complexity
forced usage of overly complicated design patterns where a simpler design would suffice.
If the use of a more advanced design pattern, paradigm, or architecture isn't appropriate for the scale of your project, don't force it.
You have to weigh the potential costs of the architecture against the potential savings... but also consider what the additional savings would be from implementing it sooner rather than later.
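As a concrete example of keeping it simple without closing any doors: dependency injection doesn't have to mean adopting a container on day one. Here's a minimal C# sketch of plain constructor injection (ReportService, IDataStore, and InMemoryStore are hypothetical names, not from the question):

```csharp
public interface IDataStore
{
    string Load(int id);
}

// A trivial implementation; in the real app this would wrap the
// existing legacy database.
public class InMemoryStore : IDataStore
{
    public string Load(int id) => "row " + id;
}

// Depends on an abstraction, injected through the constructor.
// No framework required; a DI container can be introduced later
// without changing this class at all.
public class ReportService
{
    private readonly IDataStore _store;

    public ReportService(IDataStore store)
    {
        _store = store;
    }

    public string BuildReport(int id) => "Report: " + _store.Load(id);
}

public static class Program
{
    public static void Main()
    {
        // Composition "by hand" -- the simplest thing that could work.
        var service = new ReportService(new InMemoryStore());
        System.Console.WriteLine(service.BuildReport(42));
    }
}
```

The point is that "we'll come back to dependency injection later" is cheap as long as dependencies flow in through constructors; only the composition root changes when you add a container.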
Yes, KISS, but see Refactoring to Patterns (http://www.amazon.com/Refactoring-Patterns-Joshua-Kerievsky/dp/0321213351) and consider refactoring towards a design pattern in small steps. The code should sort of tell you what to do.
I've been programming for the last 6 years. I just recently started my first degree in computer science. My work seems to be constantly marked down for different reasons, among them:
Uncommented code
Writing identifier names and methods that are too long
Writing too many methods
After working as a programmer for six years at numerous startup companies, and absorbing best practices that include the requirement to write "self-explanatory code", I find it very difficult to go back to bad practices.
What can I do?
Self-documenting code is not a substitute for comments.
I've argued with many senior devs about this point. Code can go a long way in communicating intent, but there are some things which simply cannot (and should not) be documented through code alone.
For example, if you have a highly optimised function or chunk of code that is heavily coupled to the underlying problem domain, or that requires very specific knowledge of the business or solution, comments are needed.
Yes, yes, comments come with their fair share of problems, but this doesn't mean they aren't helpful (or mandatory, in certain cases).
I can't tell you how many times I've read a colleague's line of code and thought "what the hell?!?", only for them to explain that they needed to do that due to some quirk of a library or a browser we were targeting, etc.
Comments are a mechanism for a developer to justify a design decision.
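Here's a sketch of what that looks like in practice; the Frobulator library and its quirk are invented purely for illustration:

```csharp
using System.Collections.Generic;

// Invented stub, standing in for a quirky third-party dependency.
class Frobulator
{
    public IEnumerable<int> GetHandles() => new[] { 1, 2, 3 };
    public void Warm(int handle) { /* ... */ }
    public void Process(int handle) { /* ... */ }
}

class Pipeline
{
    private readonly Frobulator frobulator = new Frobulator();

    public void Run()
    {
        // Deliberately fetch the handles twice instead of caching them:
        // this (hypothetical) vendor library invalidates its handles
        // between calls, and caching them caused intermittent crashes.
        // Don't "optimise" this without re-testing that scenario.
        foreach (var handle in frobulator.GetHandles())
            frobulator.Warm(handle);

        foreach (var handle in frobulator.GetHandles())
            frobulator.Process(handle);
    }
}
```

No identifier name could carry that justification; the comment is what stops the next developer from "fixing" it.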
As for your other problems, they are subjective. How long is too long? How many is too many?
Point them at Microsoft's guidelines if you are on the MS stack; otherwise, there will be countless articles for whichever language you're using...
Hope that helps.
Assuming I wrote a program in two different languages and I need to compare their performance, what aspects should I focus on other than comparing running time?
You're focusing on the wrong thing. Really the question is whether you want to use a low-level language (faster) or a high-level language (slower). A high-level language does many things for you and makes certain assumptions, which makes it slower: you are going through multiple layers of abstraction, so it is naturally going to be slower. If you want top performance, use C++; if you want something even faster, use assembly language. A high-level language like C# or Java is more convenient, since a lot of the underlying plumbing is handled for you, but with that comes a performance decrease: assumptions are made, and extra code is executed that might not pertain specifically to what you are trying to accomplish.
If you want to test the performance of different languages, pick functions that require the language or platform to handle many of the underlying operations for you. Lower-level languages also tend to give you direct access to hardware, allowing you to tweak how you interact with it; gaming engines and other software requiring top performance are typically written in C++ rather than a language like Visual Basic because of the amount of control and the increased performance of unmanaged code. I would focus first on the categories (graphics, etc.) where you need increased performance and then pick some tests from there. I'm also sure you can find existing tests already posted on the internet that compare language performance.
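Beyond raw running time, it's worth measuring memory use and accounting for warm-up effects (JIT compilation, caches). A minimal C# harness sketch, with a hypothetical Workload standing in for the function under test; you would write an equivalent harness in each language being compared:

```csharp
using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        // Warm up so JIT compilation doesn't count against the first run.
        Workload();

        long memoryBefore = GC.GetTotalMemory(forceFullCollection: true);
        var timer = Stopwatch.StartNew();

        for (int i = 0; i < 100; i++)
            Workload();

        timer.Stop();
        long memoryAfter = GC.GetTotalMemory(forceFullCollection: false);

        Console.WriteLine($"Average time: {timer.ElapsedMilliseconds / 100.0} ms");
        Console.WriteLine($"Approx. allocated: {memoryAfter - memoryBefore} bytes");
    }

    // Hypothetical workload; replace with the function being compared.
    static void Workload()
    {
        var numbers = new int[100_000];
        for (int i = 0; i < numbers.Length; i++)
            numbers[i] = i * i;
    }
}
```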
LINQ is extremely powerful and can be used heavily in code. But is it best practice to use it?
It's a good idea to use it when it makes the code clearer and simpler to maintain, and when you're not in any of the situations where the performance of LINQ is too slow for your needs.
You haven't specified whether you're talking about LINQ to Objects or LINQ to SQL etc, but I know there are situations where the latter has proved too slow for some high traffic sites, and they've moved off it... but only after it's been shown to be an issue. LINQ to Objects will often have a very small performance hit compared with "hard-coding" the same logic, but that's even less likely to be a real problem.
Of course LINQ can certainly be overused, and I've seen people reaching for a LINQ solution when there are far more appropriate ways of achieving the same thing - so don't try to use it everywhere you possibly can. Just use it where it clearly helps.
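To illustrate the trade-off, here's a small LINQ to Objects sketch (the Order type and the sample data are invented) next to the equivalent hand-written loop:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

class Program
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { Customer = "A", Total = 250m },
            new Order { Customer = "B", Total = 40m },
            new Order { Customer = "A", Total = 100m },
        };

        // LINQ: declarative, reads like the requirement.
        var bigSpenders = orders
            .Where(o => o.Total > 50m)
            .Select(o => o.Customer)
            .Distinct()
            .ToList();

        // Hard-coded equivalent: same result, marginally faster,
        // but more code to read and maintain.
        var bigSpenders2 = new List<string>();
        foreach (var o in orders)
            if (o.Total > 50m && !bigSpenders2.Contains(o.Customer))
                bigSpenders2.Add(o.Customer);

        Console.WriteLine(string.Join(", ", bigSpenders));
    }
}
```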
The declarative nature of LINQ is one of its strongest features. This almost always makes your code more readable and maintainable, so yes, unless there is a compelling performance reason not to, I'd say it is best practice.
Matz said:
I designed Ruby to minimize my surprise. I wanted to minimize my frustration during programming, so I want to minimize my effort in programming.
But sometimes we get (bad) surprises in Ruby practice.
As a beginner in Ruby, I found some examples:
Exceptions in threads do not produce any immediate trace by default; set Thread.abort_on_exception = true, or don't forget to join every thread.
Sockets do a reverse DNS lookup on every accept; set BasicSocket.do_not_reverse_lookup = true if you don't want to be surprised by long delays.
split(regexp) drops empty trailing fields; use split(regexp, -1) to split the whole string.
string.trim is unknown; use string.strip instead (for old Tcl devs...).
Do you have other cases that could improve my Ruby practice?
Thank you.
The design of Ruby the language is different from the design of Ruby's libraries (which mostly seem to be what you use as examples). Matz designed the language around the principle of least surprise, but not every library (even the modules in the Ruby standard library) was designed that way. (Keep in mind that Matz didn't write every Ruby library, or even the entire Ruby standard library, himself.)
A gentle note: I think you are over-extending the idea of least surprise. To me, you are extending Matz's idea of least surprise to include your own idea of least surprise. Remember that what surprises you may not surprise someone else, and it might in fact surprise them if things worked the way you think they should. All that said, it's good to voice your opinions about how you think it should work, because we can all learn from it; but to say that "we get (bad) surprises" is extending your idea of surprise onto others.
As for me, all of these examples feel like you want things to work better for your preference (or your app) than for the general case.