I have already seen Why are there so many slightly different ways to do the same thing in Ruby?, but it doesn't help a bit. Having multiple slightly different semantics and syntaxes for the same thing is, IMHO, confusing and counterproductive. I was trying to find a spec or a rationale for why this is so, but there is none to be found (unlike Java and Python, where every language feature is well documented, with its motivation and so on).
It's not just String, it's everything. What I'm looking for is a general explanation of why Ruby's Japanese designers think that having 100 ways to do the same thing is better than one explicit way. Remember that we're talking about a programming language; it's not a musical instrument or a paintbrush, it's a tool to get the job done and have fun along the way. It's not fun to read some code and wonder why the author used %<hello kitty> instead of "hello kitty", especially when you're looking for a bug.
I see the benefit of standardization, which is a related concept, and I hope everyone else does too. Why doesn't Ruby support 100 versions of a customized HTTP protocol, for the same reason it supports 100 ways to create a String?
The reason there are so many different ways to create a string is that there are many reasons you might need a string. Since strings are the backbone of many applications, it makes sense for this facility to be robust and varied.
Once you're used to it, you'll find the rigid quotation systems in other languages to be more of a nuisance than anything. HTML often requires both single ' and double " quotes for embedded JavaScript or other attributes, and unless you want to render your string into unreadable pulp by spiking in backslashes \ before every one of them, you'll be better off with the %q[ ... ] style of quoting.
Any good toolbox has a variety of tools. Don't complain that your wrench set is confusing because it has so many different sizes of wrench. Sometimes you'll need a specific one, and then you'll be grateful. Many of these methods have been borrowed from languages like Perl, and Perl is inspired by other things like bash.
Typical use cases:
double_quotes = "typical use case with #{variables}"
single_quotes = 'strict string literals'
alternate_double_quotes = %Q[<for html="tags('with both quotes', 'and #{variables}')">]
alternate_single_quotes = %q[<for html="tags('with both quotes')">]
inline_string = <<END
Useful for long blocks of freeform
text such as sample data or templates.
END
I've found that the %q[ ... ] method comes in handy when trying to express multi-line strings that contain quotes as is often the case with SQL:
execute(%q[
INSERT INTO examples (id, feature_code)
SELECT id, CONCAT('example_', feature_code)
FROM things
GROUP BY foo
ORDER BY bar
])
It is relatively easy to spot [ ... ] pairings but not as easy when you have many escaped quotes. It's easy to miss a closing quote unless you have a syntax highlighting editor.
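To make the contrast concrete, here is a minimal sketch (the SQL fragment is made up for illustration); both lines produce the same string, only the readability differs:
escaped   = 'SELECT CONCAT(\'example_\', feature_code) FROM things WHERE kind = \'demo\''
bracketed = %q[SELECT CONCAT('example_', feature_code) FROM things WHERE kind = 'demo']
escaped == bracketed  # => true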
I noticed that the Smalltalk language has no concept of private/protected methods. All methods are public. Coming from a Java/C++ background, I've thought of this as a fundamental weakness in the language as any application created in Smalltalk would be completely open to manipulation. I guess you could rely on naming conventions to document the public API and prefix methods to indicate them as private (I believe Squeak does this), but it's still completely open.
Are there any benefits to this approach over having explicit access modifiers to control access to method invocations?
Indeed, the Smalltalk way is to put private methods in the 'private' category. This indicates that you shouldn't use these methods, but of course doesn't enforce this.
This is by design - it's a feature, not a bug. Smalltalk was designed from the beginning precisely to be an open system.
Some advantages:
If I simply have to - maybe the library designer didn't foresee a need to expose some particular thing I simply have to have - I can still call those private methods. Obviously, this isn't something one does lightly: rather, judiciously, cautiously, knowing that it's a tactical solution.
Language simplicity.
(As per Alexandre Jasmin's comment) Smalltalk makes no distinction between what you, the programmer, can do and what the language/environment can do. That means that Smalltalk-the-image exposes all the things needed for you to build your own inspectors/debuggers/whatever without having to supply special tools using we-can-do-this-but-you-can't techniques.
Private and protected methods are in fact a significant weakness of languages like C++, Java and C#. They effectively say to their users: "I don't want to learn and evolve." The consequence of that (and of a lot of other early binding) is that those languages require much more BDUF (big design up front) and are thus far less suitable for a modern (agile) development process.
The first question is what private/protected access modifiers are actually about. Fundamentally, they are not about safety or security; they are about exposing the right interface to the user. Seen that way, there is little difference between having protected/private method categories and having a language construct specifically for that purpose.
I would even say that a private/protected visibility modifier adds more complexity to the problem than it actually solves. Besides that, I don't think private/protected visibility is a good answer to that problem anyway.
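By way of analogy, here is a minimal Ruby sketch (Ruby's object model owes much to Smalltalk's; the Account class and its methods are invented for illustration). Ruby's private keyword only blocks ordinary call syntax and is trivially bypassed with send, which leaves it barely stronger than a 'private' method category:
class Account
  def balance
    audit                      # internal bookkeeping before reporting
    @balance ||= 0
  end

  private

  # Internal by convention *and* by the `private` keyword.
  def audit
    @audited = true
  end
end

acct = Account.new
acct.balance         # => 0, the public interface
# acct.audit         # => NoMethodError: private method `audit' called
acct.send(:audit)    # works anyway -- the protection is advisory, much like a category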
At the least, Smalltalk should have the textual convention that method names beginning with an underscore are verboten to call from outside the object itself. Unfortunately, I don't think an underscore is allowed as the first character of a method name.
If you are given a large n-tier project (.NET) with 15,000 lines of code written in "Spanish" (variable, table and class names, etc.) that requires feature additions and bug fixing, what would your strategy be for working on it?
Converting the whole project to English names (with Google Translate or other tools) does not seem to be a good option, as it would be time-consuming.
Hire a developer who knows "Spanish" or a translator
EDIT: The developers who wrote the original software do not understand English, so they are of no help in this case.
Attempt to work on it as it is without translating anything. If it doesn't work, start translating it on-demand, only pieces that are relevant to you.
A dictionary can get you quite far already. You can translate code elements on your own. Naturally, don't add any more pieces to the puzzle. What you add should be in English.
I would also notify the customer that, because the code was written against common sense and best practice in a non-English language (and one unfamiliar to you at that), there will be a delivery delay. Put the blame on the original creator of this novel.
Unless this is The Project From Hell, there should be far fewer than 15,000 variables and methods in your code. My on-the-cheap suggestion would be to extract a cross-reference list of the variable names found in your program, hire a quick, cheap Spanophone to translate those names for you, and then keep the translation list handy as you and your teammates code.
It's handy to have an idea of what is meant by a variable name, but it's not essential. I spent 20 years writing programs with only 4 significant characters in the variable name.
It's subjective, but my personal opinion is option B: hire a developer who can speak Spanish, primarily because all the commenting will likely be in Spanish, and if the commenting has been done well it will contain valuable information that should not be ignored or lost.
A translator might not understand the terms within the comments and code, and a translation by a non-programmer could go badly.
Best option would be to get in touch with the guys who wrote the darn thing...if possible at all. Second best, a developer who knows Spanish.
Translate your classes first. Then you should be able to keep track of instances by their type.
Sorry, mostly questions...
Is the customer a Spanish speaker? If so, it makes sense for the software to be written by a Spanish-speaking programmer, as the cost of communicating with the customer is much lower when the programmer understands the customer.
If the customer is not a Spanish speaker, why was a Spanish-speaking programmer used at all?
Was the Spanish programmer chosen to save money?
If so, is the software worth keeping at all?
How can you tell how good the code is if you can't read Spanish?
I think the translation should be done as needed, on demand, e.g.:
All new code should be in "English".
All methods that are changed should be in "English".
All classes/methods the new code uses should have English names and summary comments.
The names and comments of all unit tests for classes/methods with English names should be in English.
Missing unit tests should be written for any class/method where it is not clear what the spec is (so as to check the translation of the comments into English).
I think a willing English-speaking programmer will be able to use Google Translate to do the above; however, as with any new source code base, the programmer will have to spend a long time really understanding what each class/method does before using it.
An English programmer who knows some Spanish would be able to do it more quickly. However, don't use a Spanish programmer, as you always want a translator translating into their native language.
First step, and this is true when you inherit a legacy code base whether it's in your native tongue or not, is to set up regression tests based on "known good" output, and begin writing more tests as you go, for the changes you make.
Quite possibly, given the relatively small size of the code base, you will fairly shortly start to understand what various routines are doing, and may be capable of beginning the translation effort yourself, maybe supplemented by automated translation.
This assumes you understand the problem domain, and that the original code was written professionally.. although if it were, you'd already have tests, wouldn't you? You don't mention whether that's the case.
Doing anything here without regression tests is foolhardy. Doing it with tests, you may find the whole task relatively manageable and don't need a serious translation effort. Definitely respect the other suggestions to do this incrementally too.
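For what it's worth, here is a minimal sketch of that "known good output" idea in Ruby/Minitest. The golden file and the command used to drive the legacy .NET program are placeholders, not part of the original question:
require 'minitest/autorun'

class LegacyReportTest < Minitest::Test
  # Hypothetical known-good output captured from the current system.
  GOLDEN = File.read('golden/report_2024_01.txt')

  def test_report_matches_known_good_output
    # Hypothetical invocation of the legacy project from the command line.
    actual = `dotnet run --project src/Informes -- --mes 2024-01`
    assert_equal GOLDEN, actual
  end
end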
You could use reflection (or a simple source scan) to extract all of the method, field and annotation names, export them to an Excel/CSV file, and send that file to a Spanish translator. Once it comes back translated, you can apply the translations across the project with a text-processing tool (find/replace, etc.).
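A rough Ruby sketch of that export step, under the assumption that the sources live under src/ as *.cs files and that a crude word-token scan is good enough for a first glossary (a real pass would use a proper parser):
require 'csv'

# C# keywords we don't want the translator to waste time on.
KEYWORDS = %w[public private protected class void int string bool if else for
              foreach while return new using namespace static var null true false]

names = Dir.glob('src/**/*.cs')
           .flat_map { |file| File.read(file).scan(/\b[A-Za-z_]\w*\b/) }
           .uniq - KEYWORDS

# One column with the Spanish identifier, one blank column for the translator.
CSV.open('glossary.csv', 'w') do |csv|
  csv << %w[spanish english]
  names.sort.each { |name| csv << [name, ''] }
end
Later, the same glossary can drive a cautious, reviewed find/replace pass over the sources.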
I was having a discussion with some programmer friends who said that they see Ruby programmers (in particular) producing a lot of code that's "too clever". So I'm wondering what would that look like? I'm referring to the unnecessary use of an obscure language feature in a context in which something straightforward would have worked just as well or better. Know any good Ruby examples of this?
After giving a straight answer to your question, I'd like to also dispute the premise; whenever a group of programmers characterizes the users of another language in this way, the odds are that they are telling you more about themselves than about the community they are describing.
You could, for example, accuse C programmers of being too obsessed with low-level details, or Haskell programmers of being blinded by their desire for functional purity, or Perl mongers of a fixation on brevity. But you would, IMHO, be getting the causality backwards if you did.
When I want to write a program that is best expressed in a certain style, I try to choose a language that supports that style. Sometimes you want a tool that lets you do unusual things, and for such tasks having a language like Ruby in your toolkit is as valuable as having Mathematica for math or JavaScript for browser manipulation. If I want to play with typography, I hop into PostScript, because that's what it's best at.
It's like saying "Have you ever noticed that people who use power drills are always poking holes in things?" It's true, but it kind of misses the point.
class Tree
  def initialize*d;@d,=d;end
  def to_s;@l||@r?"<#{@d},<#{@l}>,<#{@r}>>":@d;end
  def total;(@d.is_a?(Numeric)?@d:0)+(@l?@l.total: 0)+(@r?@r.total: 0);end
  def insert d
    alias g instance_variable_get
    p=lambda{|s,o|d.to_s.send(o,@d.to_s)&&
      (g(s).nil??instance_variable_set(s,Tree.new(d)):g(s).insert(d))}
    @d?p[:@l,:<]||p[:@r,:>]:@d=d
  end
end
The double-bang: !!something
I'm not gonna write what it does. Forget that you ever saw this syntax.
Any use of metaprogramming without having thought damn hard about whether there's a better way to achieve this using the normal, non-'meta' idioms of the language, I tend to find annoying.
An obsession with "DRY" (don't repeat yourself), where some fiendish piece of metaprogramming spaghetti is invoked to avoid repeating yourself, say, twice, when simply repeating it would have been more straightforward and readable than the metaprogramming alternative.
Any use of eval in particular. As metaprogramming goes, this one should be your absolute last resort after trying everything else; e.g., a lot of Rubyists appear not to have heard of Class#define_method.
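A minimal sketch of that point (the Config classes and their settings are invented for illustration). Both versions generate the same reader methods, but the string-eval one is harder to read and debug, while define_method needs no string parsing at all:
class EvalConfig
  SETTINGS = %i[host port timeout]

  def initialize(values); @values = values; end

  # The "clever" way: building source code as a string and eval-ing it.
  SETTINGS.each do |name|
    class_eval "def #{name}; @values[:#{name}]; end"
  end
end

class PlainConfig
  SETTINGS = %i[host port timeout]

  def initialize(values); @values = values; end

  # The plainer way: define_method with a block closing over `name`.
  SETTINGS.each do |name|
    define_method(name) { @values[name] }
  end
end

PlainConfig.new(host: 'example.org').host  # => "example.org"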
The output phase of yaml.rb; that's why I co-authored zaml.rb. The standard yaml version does all sorts of metaprogramming (it was originally written by why-the-lucky-stiff, whom I generally admire), but by replacing it with a straightforward hierarchical version that maps directly to the class tree, we were able to eliminate several O(n^3) cases, get a factor-of-ten speedup for the cases of interest, fix several bugs, and do it all in a fraction of the code.
Plus, even people who aren't ruby gurus can see what it does.
Many of the examples in this article would seem to qualify:
21 Ruby Tricks You Should Be Using In Your Own Code.
The title of the article was a bit of a giveaway, given that it reads "Should" instead of "Should Not". Code "should" be transparent. Code "should not" be tricky.
I'm not sure if this qualifies as "too clever," but I have seen code that made me wonder if the author was either a genius or an idiot. One developer seemed to have a rule that no method should have more than two lines of code. That pushed the call stack very deep and made debugging rather difficult. The upside is that his overall design was very abstract and even elegant from a distance.
Cucumber (or RSpec Stories)
Quoted from the above RSpec Stories link:
Based around plain text descriptions of application behaviour, it lets you write integration tests with good reuse and good diagnostic reporting. For example, here's a story I wrote to check the login process.
Story: login as an existing user
As an unauthenticated user
I want to log in to Expectnation
So I can see my account details
Scenario: login details are correct
Given an event provider
And my test@example.org account
When I log in with email test@example.org and password foofoo
Then I will be logged in
And I will be shown the account page
Words such as "Given", "When" and "Then" are cues to the story runner to execute some code. Behind the story sits a collection of steps. Here are a couple of steps from this test:
Given "my $email account" do |email|
@user = find_or_create_user_by_email({:email => email,
:password => 'foofoo',
:password_confirmation => 'foofoo'})
end
When "I log in with email $email and password $password" do |email, password|
post '/user/account/authenticate',
:user => {:email => email, :password => password}
end
Notice how a clever bit of string matching allows you to pass parameters from the story prose.
With a small bit of bolting together, the prose stories are then run as code and the tests executed.
It depends. (I love "it depends" questions)
It depends on the knowledge of the writer and reader. I used to think the use of Symbol#to_proc in Rails was unnecessarily arcane, for example, preferring
a.map { |e| e.downcase }
to
a.map(&:downcase)
Now I'm happy when I read it, although I still don't think to write it.
There are areas of libraries (Rails and others) where I have felt that excessive and self-indulgent metaprogramming may have occurred, but again the division between "too clever" and "really very clever indeed" is often paper-thin. DSLs are a good example: the way "macros" are made available within classes (think of all that declarative goodness in things like ActiveRecord::Base or ActionController::Base) is very hard for a relative novice to understand and would probably seem like over-cleverness. It did to me. Now I find myself referencing the same code for guidance as I implement similar capabilities.
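For readers who haven't peeked behind that curtain, here is a stripped-down, purely illustrative sketch of how such a class-level "macro" can be built (has_flag, Flaggable and Article are invented; this is not ActiveRecord's implementation):
module Flaggable
  # A class-level "macro": an ordinary class method that generates instance methods.
  def has_flag(name)
    define_method("#{name}?")      { !!instance_variable_get("@#{name}") }
    define_method("mark_#{name}!") { instance_variable_set("@#{name}", true) }
  end
end

class Article
  extend Flaggable

  has_flag :published   # reads like a declaration...
end

a = Article.new
a.published?            # => false
a.mark_published!
a.published?            # => true  ...but it's just a method call at class-definition time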
method_missing can be abused and it's one of those things that may cause you to pull your hair out when you have to fix a bug 3 months after you've written code.
Take a look at the source of Markaby. Insanity.
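A small, hypothetical sketch of the kind of method_missing magic being warned about; the dynamic finder is convenient to write, but the method it appears to define exists nowhere in the source, which is exactly what hurts when debugging later:
class UserStore
  def initialize(rows)
    @rows = rows
  end

  # Any call like find_by_name('Ada') is intercepted here at runtime.
  def method_missing(name, *args)
    if name.to_s.start_with?('find_by_')
      field = name.to_s.delete_prefix('find_by_').to_sym
      @rows.find { |row| row[field] == args.first }
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.start_with?('find_by_') || super
  end
end

store = UserStore.new([{ name: 'Ada', role: 'admin' }])
store.find_by_name('Ada')   # => {:name=>"Ada", :role=>"admin"}, yet grep finds no find_by_name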
You shouldn't have to go from method to method to method to try to figure out what in the hell something is doing, for the sole purpose of not repeating a few lines of code. Being too focused on the LOC count and ultra-skinny methods might feel cool at the time but is time-consuming for someone else trying to debug or follow the code (and that someone may be you months later).
compare:
if MODELS.keys.inject(true) {|b, klass| b and klass.constantize.columns.map(&:name).include? association.options[:foreign_key]} then
# ...
end
1 line (if), 132 chars, 132 avg len, 22.9 flog
vs
fk = association.options[:foreign_key]
columns = MODELS.keys.map { |key| key.constantize.columns.map { |c| c.name } }
if columns.all? {|column| column.include? fk} then
# ...
end
4 lines, 172 chars, 43 avg chars, 15.9 flog
much faster too.
Original author actually argued maintainability for the first version.
Recently uncovered this monster:
def id
  unless defined?(@id)
    @id = if id = local_body.to_s[/(?:#\s*|#[[:punct:]]?)#{URL_REGEX}/,1]
      id.to_i
    end
  end
  @id
end
Not that I disagree with caching a calculation; it could just be far clearer.
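For comparison, one plausible clearer equivalent (assuming local_body and URL_REGEX behave as in the original), with the same memoization but the steps spelled out:
def id
  return @id if defined?(@id)            # memoize even a nil result

  raw = local_body.to_s[/(?:#\s*|#[[:punct:]]?)#{URL_REGEX}/, 1]
  @id = raw && raw.to_i
end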
How do you explain to your teammates the importance of commenting the code they write? I know some coders who write only sporadic comments, while others' comments leave a lot to be desired. What do you expect when you read comments?
There are some minimums:
All functions and classes should be commented
Try/catch and exception handling is better commented
Constants hard-coded in the code should definitely be commented
Dummy objects and dummy classes, as well as TO-DO parts, should be commented
When you take code from a URL, the address should be cited in a comment, both for future reference and because of copyright-infringement concerns
Commits to the version control system should also be well commented
Comments should otherwise be kept to a minimum, though; there is no need to comment a for-loop definition when it is obvious
I usually set ground rules for my programmers; they stick to them when they are well defined
The best comments are always concise, just a few words. A comment should say what's not obvious in the code. I see a lot of people making obvious, and therefore useless, comments like:
if x==0 //if x equals 0 then...
Oh really?! This only "pollutes" the code, because unless you're learning how to program, it's pretty useless.
Even if the code is only yours, you should write comments as if you were about to share it with another programmer who is completely unfamiliar with it. That way you make sure that you will always understand it, and in the long term, if somebody comes along and picks that code up, they will be able to understand it and extend/use it.
I see comments as a boost to reusability. And I expect, like every other programmer, to be able to fully understand a block of code from a single, simple and concise comment.
Write comments when you're writing code that's not intuitive. There's really no reason to comment a method that just iterates over an array, but when you fix a bug or have to hack something together to get around an issue, it's good to have a comment so you can quickly understand that code 6 months later (and not accidentally undo it).
What do you mean by commenting code?
The actual code, or the function headers?
If you're actually talking about the code, it's a lost cause. You need to get them to write readable code and to break it into meaningful chunks. Commenting bad code doesn't make it into good code, it just leaves an inconsistent mess.
As for header documentation, you have to get them to capture the important things (e.g., surprises, directives) and compromise on the trivial things (listing every parameter, repeating what the signature already says). People hate documenting functions because most of the effort is spent writing trivial text that almost insults your intelligence (e.g., on getHandleToFile(), "this gets a handle to the file"). Since there are actually far fewer important details than one would expect, they'd be pleasantly surprised and would be more likely to invest the effort in those specific situations.
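A hypothetical Ruby illustration of that trade-off; the function and its behaviour are invented purely to show the contrast between a trivial header and one that records a genuine surprise:
# Trivial header (restates the signature, adds nothing):
#   Gets a handle to the file.
#
# Header worth writing (captures the surprise/directive):
#   Returns an open, exclusive-locked File. Callers must close it themselves,
#   and the lock is advisory only: processes that never call flock can still write.
def file_handle(path)
  f = File.open(path, 'r+')
  f.flock(File::LOCK_EX)
  f
end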
I think if you are writing code that others may someday have to follow, it is prudent to leave good comments about what things are doing. If you are just writing something for yourself, the tendency is strong to leave minimal comments or none at all. That being said, I have had the dubious luxury of having to go back to code I wrote 8 years ago and didn't comment adequately, in a language I don't use anymore (classic ASP), and I can tell you, I wish I had left more comments!
I try to comment most of my public methods and classes, and in those comments you can read what the method does, what the meaning of the parameters is, and, if applicable, what the output will be.
I also sometimes put comments inside my methods, but there I do not comment on what I'm doing; rather, on why I am doing it that way.
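A contrived Ruby sketch of that distinction (the retry policy and the fetch_report call are made up for illustration):
require 'timeout'

MAX_RETRIES = 3

def fetch_with_retries
  retries = 0
  begin
    fetch_report                 # hypothetical call that can time out
  rescue Timeout::Error
    # Why, not what: the upstream API drops connections during its nightly
    # failover window, so a couple of blind retries are cheaper than failing
    # the whole job. ("increment retries" would be a useless what-comment.)
    retries += 1
    retry if retries < MAX_RETRIES
    raise
  end
end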
If the language you are writing in is not very human-readable, I suggest very granular method- and procedure-level comments.
If the language you are writing in is human-readable (C#, VB, etc.), I suggest somewhat detailed comments at the method level and minimal comments at the procedure level.
Include comments for document generation on methods and classes.
Don't comment every line.
If you are doing something unexpected or something that is not obvious from the code, explain why in comments.
The most important thing in commenting is to tell the truth. I've lost count of the number of times I've been investigating a bug only to find a section of code that is "less than obvious", along with a comment saying it's supposed to do the opposite of what it is actually doing. Who wins? You decide...
On a related note, any comment that is longer than the section it is documenting is normally too long.