Why is RAII so named? [closed]

The sense I get about this idiom is that it is useful because it ensures that resources are released after the object that uses them goes out of scope.
In other words, it's more about de-acquisition and de-initialisation, so why is this idiom named the way it is?

First, I should note that RAII is widely considered a poorly named idiom. Many people prefer SBRM, which stands for Scope-Bound Resource Management. Although I (grudgingly) go along with "RAII" simply because it's widely known and used, I do think SBRM gives a much better description of the real intent.
Second, when RAII was new, it applied as much to acquiring resources as to releasing them. In particular, at the time it was fairly common to see initialization happen in two steps: you'd first define an object, and only afterwards dynamically allocate any resources associated with it. Many style guides advocated this, largely because at the time (before C++ had exception handling) there was no good way to deal with failure in a constructor. Therefore, the style guides often said, constructors should do only the bare minimum of work and specifically avoid anything that could fail -- especially allocating resources (and a few still say things like that).
Quite a bit of code in that style already released its resources in the destructor, though, so the release side wouldn't have been as clear a break from previous practice -- the novelty lay in doing the acquisition during initialization.
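To make the contrast concrete, here is a minimal sketch of the two styles (the class names and the use of C stdio are my own invention, not code from any particular style guide):

    #include <cstdio>
    #include <stdexcept>

    // Old two-step style: construct first, acquire afterwards.
    class LogFileOld {
        std::FILE* f = nullptr;
    public:
        LogFileOld() = default;                   // constructor does the bare minimum
        bool open(const char* path) {             // acquisition is a separate step
            f = std::fopen(path, "r");
            return f != nullptr;                  // failure reported by return value
        }
        ~LogFileOld() { if (f) std::fclose(f); }  // release was already in the destructor
    };

    // RAII style: acquisition *is* initialization.
    class LogFile {
        std::FILE* f;
    public:
        explicit LogFile(const char* path) : f(std::fopen(path, "r")) {
            if (!f) throw std::runtime_error("cannot open file");  // failure via exception
        }
        ~LogFile() { if (f) std::fclose(f); }     // release on scope exit
    };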


When to use declarative programming over imperative programming [closed]

As far as I know, the main difference between declarative and imperative programming is that in declarative programming you specify what the problem is, while in imperative programming you state exactly how to solve it.
However, it is not entirely clear to me when to use one over the other. Suppose you are given a certain problem: based on which of its properties would you decide to tackle it declaratively (e.g. using Prolog) or imperatively (e.g. using Java)? For what kinds of problems would you prefer one over the other?
Imperative programming is closer to what the machine actually performs. It is a fairly low-level form of programming, and the more complex your application grows, the harder it becomes to grasp all the details at such a low level. On the plus side, being close to the machine, you can write very performant code if you are good at it.
Declarative programming is more abstract and higher level: With comparatively little code, you can express quite sophisticated relationships in a way that can be more easily seen to be correct.
To see an important difference, compare for example pure Prolog with Java: Suppose I take away one of the rules in a Prolog program. I know a priori that this can make the program at most more specific: Some things that held previously may now no longer hold.
On the other hand, suppose I take away a statement in a Java program: Nothing much can be said about the effect in general. The program may even no longer compile.
So, changes in an imperative program can have very unforeseen effects, and are extremely hard to reason about, because there are few guarantees and invariants, and many things are implicit in some global state of the program. This makes imperative programming very error-prone.
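A tiny invented example of that monotonicity property in pure Prolog:

    parent(tom, bob).
    parent(bob, ann).
    parent(bob, pat).
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % ?- grandparent(tom, Who).
    % Who = ann ;
    % Who = pat.
    %
    % Remove the fact parent(bob, pat). and the program becomes at most more
    % specific: the query now has fewer solutions, and no query that
    % previously failed can begin to succeed.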

go-lang: lack of contains method design-justification [closed]

While browsing for a contains method, I came across the following Q&A:
contains-method-for-a-slice
It is said time and again in that Q&A that the method is really trivial to implement. What I don't understand is: if it is so easy to implement, and seeing how DRY is a popular software principle and most modern languages provide such a method, what design reasoning could be behind the exclusion of so simple a method?
The triviality of the implementation depends on the scope of the implementation. It is trivial to implement when you know how to compare each value. Application code usually knows how to compare the types used in that application. But it is not trivial to implement in the general case for arbitrary types, and that is the situation for the language and standard library.
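For instance, the application-level version the answer alludes to is a short loop once the element type, and therefore ==, is fixed (a minimal sketch; the function name is invented):

    package main

    import "fmt"

    // contains reports whether target occurs in xs.
    func contains(xs []string, target string) bool {
        for _, x := range xs {
            if x == target {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(contains([]string{"a", "b"}, "b")) // true
    }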
Figuring out whether a slice contains a certain value is an O(n) operation, where n is the length of the slice. This would not change if the language provided a function to do it. If your code relies on frequently checking whether a slice contains a certain value, you should reevaluate your choice of data structures; a map is usually better in these kinds of cases. Why should the standard library include functions that encourage you to use the wrong data structure for the task at hand?
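A sketch of the map alternative, with invented values; each membership test is then O(1) on average rather than O(n):

    package main

    import "fmt"

    func main() {
        // A set built once; the empty struct{} values occupy no space.
        seen := map[string]struct{}{
            "red":   {},
            "green": {},
            "blue":  {},
        }
        _, ok := seen["green"] // constant-time membership test
        fmt.Println(ok)        // true
    }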

Immutable objects in ruby [closed]

Can anyone list the immutable objects in Ruby?
I saw Ruby - Immutable Objects, and I know how to convert mutable objects into immutable ones, but I have no clarity on which objects in Ruby are immutable.
It's been my experience that you have two choices:
Aggressively freeze your objects, and that means a deep freeze: you freeze not only the main object but any objects it contains, in order to prevent modification (see the sketch at the end of this answer).
Be disciplined about not modifying the objects in certain sections of your code.
The second approach is what most applications use, because once something is frozen there's no way to un-freeze it. Objective-C has mutable and immutable variants of many objects, and C++ has const, which can prevent modification of an object, but there's no such thing in Ruby.
This is largely because Ruby methods are free to do whatever they want with very little in the way of constraints. Can a reader method modify the state of the object? Yes. You might have a very good reason for doing this and Ruby won't get in your way.
If you're writing code that depends on objects being in an unchanging state, make a copy, freeze it, and use that for reference. This will probably slow down and complicate your application considerably, so it's a very heavy-handed approach.
The best method is to share as little information as is necessary, provide interfaces to this information that are read-only by design, and avoid tampering with things outside of specific circumstances by employing proper locking measures.
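As a sketch of the first option, a deep freeze can be written along these lines (the helper name is invented, and it ignores cycles and container types other than Hash and Array):

    # Recursively freeze an object and everything it contains.
    def deep_freeze(obj)
      case obj
      when Hash
        obj.each { |k, v| deep_freeze(k); deep_freeze(v) }
      when Array
        obj.each { |e| deep_freeze(e) }
      end
      obj.freeze
    end

    config = deep_freeze(colors: ["red", "green"])
    config[:colors] << "blue" # raises FrozenError (RuntimeError before Ruby 2.5)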

Should I focus on code quality while rapid prototyping? [closed]

When you are rapidly prototyping features, should you really worry about code quality and optimization?
Looking back at the number of times a "prototype" ended up becoming the product, the answer would be yes.
Don't forget that you are not only prototyping the feature, you are also prototyping the design.
Yes to quality. No to optimization. This question should be community wiki.
I would focus on clarity.
If quality and optimisation are requirements for the prototype, then yes. If not, then no. Doing rapid prototyping doesn't mean you abandon standard operating procedures like programming to a specification, using source code control, testing, etc. It is, perhaps, relatively unusual for high performance to be a requirement of a rapidly developed prototype, but that's another matter.
Yes. Focus on quality, clarity and simplicity, AND on comments that explain what the code is doing and why (don't bother with the how unless it's really complicated; that is what the code is for).
Just about all the work we do here starts out as a "what if?", and if it works we continue with it.
We write comments that describe what should happen before we write the code, then write the code to match the comments. Writing the comments first forces you to think about how you will structure it all. We've found that it prevents a lot of false assumptions and actually makes development faster.
It also makes reuse much easier when you come back to the code later: you don't need to read the code and understand it, just read the comments. Don't go in for the nonsense of self-documenting code; all that does is self-document the bugs, because you have nothing against which to check whether the code matches the intent at all.
You can worry about optimization later - see this description of a huge win I got by changing from MFC CMaps to STL when working on a hobby project parsing some Apache log files. This was done after I had the initial concept working and only when it became apparent there was a problem with performance.

Pair programming with comments [closed]

Over the years, I've discovered that green programmers tend to read the comments rather than the code when debugging issues.
Does having one person document the other person's code (and vice-versa) with the code writer's approval increase code quality in the long term?
Is this a good idea?
Aside: I'm looking for a middle ground between solo programming and pair programming in terms of budget.
People tend to look for the easiest solution to a problem. If there's a "human" description available, it's likely to be consulted before the reader delves into esoteric code. In other words, the comments will often be considered first, regardless of how green the programmer happens to be.
Comments should be maintained as well as possible. Unfortunately, they can easily become stale (because they cannot be validated by the compiler). Therefore, they should be kept to a reasonable minimum, because, ultimately, the code itself is the only comment that can really be trusted.
As for who should write the comments, it depends on the level at which the comments are written. For example, at higher levels the comments should describe the outside behavior of a module, and could be written by a larger group of people. Internally, however, the comments should explain the intent of the various chunks of code; that way, it's easier for the reader to glean the mannerisms of the code. Those comments should be written by the coder.
I've found that "pair programming" works best when one person writes the code and the other one writes unit tests (working side by side so they can see what each other is doing). You can swap the roles around occasionally too.
You run a higher risk of misinterpreting algorithms if the original author does not document the code. In my opinion, the only thing more frustrating than inadequately documented code is incorrectly documented code.
You may wish to try this approach:
Perform code reviews with a developer that was not involved in the programming effort.
Have the review performed without the physical presence of the code's author. Just the reviewer, a copy of the code from source control, and written documentation.
If the reviewer cannot reasonably understand the code without outside assistance, it is not adequately documented and should be given back to the author.
Repeat as necessary.
I find it works best when the helper does the broad-scope thinking (i.e., what are we trying to accomplish?) and the keyboard cowboy does the detail-scope thinking. I don't think comments have anything to do with it.
I tend to write the comment first and the code immediately after, or at times side by side. By the time I finish writing the comment, the code has become very clear in my mind (thanks to verbalizing my ideas while writing the comment). I don't like commenting code I haven't written. And whenever I come back to revise the code, I first read the original comments, then think of new comments, write them, and write the code side by side.
