What benefit does discriminating between local and global variables provide? - ruby

I'm wondering what benefit discriminating between local and global variables provides. It seems to me that if everything were made a global variable, there would be a lot less confusion.
Wouldn't making everything a global variable result in fewer errors, since one could never mistakenly reference a local variable from a scope where it doesn't exist?
Where is my logic wrong on this?

Some of this boils down to good coding practice. Keeping variables local also means it becomes simpler to share code from one application to another without having to worry about conflicts between them. While it's simpler to make everything global, getting into the habit of only using global variables when you actually have to will force you to code more deliberately and will make your code more structured.

I think your key oversight is thinking that an error telling you a local variable doesn't exist is a bad thing - it isn't. You've made a mistake and Ruby is telling you so. This type of mistake is usually easy to fix: you've misspelled something, or you're using something that you forgot to create.
Global variables everywhere might remove those errors, but they would replace them with a far harder set of errors to reason about: accidentally using a variable that another bit of code is also using. Imagine if every time you called a function (one of your own, one from the standard library, or one from a gem) you had to check which global variables it might change - and which functions it calls in turn, since those might change global variables too. If you make a mistake you might get an error message (if the class of the object in the variable changes enough), but often you would just silently get incorrect results (if the value of a variable you were relying on changes unexpectedly).
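A minimal sketch of that failure mode (the names here are invented for illustration): two pieces of code share one global by accident, and the result is silently wrong rather than an error:

def tally(items)
  $count = 0                 # helper reuses the same global name
  items.each { $count += 1 }
  $count
end

$count = 42                  # some other code stores its own state here
tally([:a, :b, :c])          # silently overwrites it
puts $count                  # => 3, not 42 - no error, just a wrong value

With local variables each method would have had its own count, and Ruby would have raised a NameError the moment one of them was misspelled.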
In general global variables are much harder to work with and people avoid them when possible.

If all variables were global, every line of code in every program written by every programmer on the planet (including programs that haven't been written yet, by programmers who haven't been born yet) would have to agree universally on unique variable names. If you use a variable name that someone else on a different continent will also use two years from now, both of your programs will break when they are used together.

Related

Setting a global value (and keeping it) within the scope of an eval

I've got a large Rails 5 app (Ruby 2.6.x at present) that makes crucial use of Kernel::eval (please don't tell me to try to refactor this out because eval is dangerous - I didn't write the original code, and this is not in the cards for any time soon).
There are a very wide variety of Ruby expressions (coming out of the db) that can be passed to eval, sometimes of great complexity, making extensive use of classes and resources of the app.
(you might want to jump straight down to BIG EDIT below)
What I want is to be able to set a global value ($global) that will be seen within the scope of the eval execution, but that will not "infect" any of the execution context outside of it. I can't just interpolate the value into the string and pass it down through method params and such because, as I say, the code being eval'ed is complex and the stacks can get very deep, and I want the value to be readable (though never modified) anywhere within.
I understand about Bindings. I have played around with setting local and instance vars in a binding, and passing this to eval, but inevitably these are not seen inside any method calls within the eval, especially if I'm inside a method of some random class (which I always am). Seems like global is the only possibility. But experimentation shows that a global set inside an eval remains in the code that calls the eval:
2.6.3 :002 > $foo
=> nil
2.6.3 :003 > eval("$foo = 12")
=> 12
2.6.3 :004 > $foo
=> 12
Although I might find some hacky way to deal with this situation, I'm sure you can see where I'd really rather not.
The Binding class offers methods to set local and instance vars dynamically within a Binding object, but nothing for globals (apparently). I've thought about something like this:
...
eval code_string, get_binding()
...
def get_binding
  $global = :special_value
  binding
end
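(Note that the assignment takes effect process-wide the moment get_binding is called, not just inside the eval; a quick check using the method above:)

$global = nil
get_binding
puts $global.inspect  # => :special_value, set before any eval runs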
But I'm really worried, with a Rails app that might be servicing lots of requests at the same time, that these settings of $global will step on each other in unpredictable ways. Related clarifying question: is a global value in a Rails app global to the entire thing, readable and writable within the scope of all the requests whose servicing may overlap in time? (I'm running under Passenger, if that means anything.)
So this is a fairly simple and straightforward problem when you understand it, although oddly not addressed in anything I can google about it, and I think I've written enough words. Thanks for any help or ideas to try.
BIG EDIT:
Ok, let me try to refocus this in a different way. I'm getting that the scope of a global can never, no-how, be constrained (duh, right?), but how about this strategy (similar to above):
...
eval code_string, get_binding()
...
def get_binding
  luaapg = :special_value  ## local used as a pseudo-global
  binding
end
So, now I've got this Binding that includes the local var luaapg. I've confirmed that. I eval code_string with this Binding. When I am somewhere inside the execution of code_string, where do I find luaapg - how do I access it? If you look at pretty much every tutorial on this stuff on the web, they show you puts eval("luaapg", get_binding) and voila, the assigned value comes out! But this is too simplistic for real life. When I am in the middle of my code_string, in some method scope of some class, luaapg is not there. I had great hope that this would work, even deep down the stack:
TOPLEVEL_BINDING.local_variable_get(:luaapg)
but it doesn't (I learned about TOPLEVEL_BINDING from here - thanks to that author). So this is the new question: what does it mean to say that I have executed (eval'ed) my code_string in the context of that Binding, which contains a local variable, if I have no way to access that variable, other than with the most simpleminded code? (incidentally I played around with instance vars too - same thing). I'm still hoping there's some magic incantation...
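To make the failure concrete, here's a minimal reproduction (names invented): the binding's local is visible at the top level of the eval'ed string, but any def opens a fresh scope gate that cannot see it:

def get_binding
  luaapg = :special_value
  binding
end

code_string = <<~RUBY
  puts luaapg        # works: the top level of the eval sees the binding's local

  def some_method
    puts luaapg      # NameError: def starts a new scope
  end
  some_method
RUBY

eval(code_string, get_binding)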
I think you've put your finger on it in the name of the type of variable - it's global, common to all the code in the executing program. I'm not sure exactly how Passenger works, but I suspect it runs several copies of your program, so the value won't be common between the copies.
To get reliable shared information I think you're going to have to use your database or some sort of information cache like memcached. You choose how you save/name it there.
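One more avenue that might be worth exploring, if the value only needs to be shared within a single request (an assumption about the setup, not something from the question): thread-local storage via Thread.current, which acts like a global within one thread but cannot be stepped on by concurrent requests running on other threads:

# Sketch: a per-thread pseudo-global (key name invented)
def eval_with_special(value, code_string, b)
  Thread.current[:special_value] = value   # readable anywhere on this thread
  eval(code_string, b)
ensure
  Thread.current[:special_value] = nil     # don't leak into the next request
end

# ...deep inside any class or method reached by the eval:
Thread.current[:special_value]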

Why is prefixing class, method, or variable names with "My" considered bad practice?

Often in code snippets or examples, the prefix My is used for classes, methods, and variables.
We see this all the time, from Google's gcm example to numerous questions on Stack Overflow. It can be found in documentation for PHP, JavaScript, C#, and just about every other language.
So, given its extreme prevalence, why shouldn't I put MyClass with myMethod using myVar1 into production? Why is this bad practice, and what should I do instead?
Names are important
Everything else comes down to that one simple fact.
Having clear, meaningful variable names is an important factor in writing understandable and maintainable code. - Computational Fairy Tales
When you write a piece of code, you want anyone reading it (which includes you in three months!) to know what it does almost immediately. If the name doesn't carry that information, you would probably need to add a comment to compensate. However...
The proper use of comments is to compensate for our failure to express our self in code - Express Names in Code: Bad vs Clean
To help with this, your name should be both descriptive and concise. Generally, when the prefix My is added, it replaces a prefix that could be more informational. Instead of MyService, why not GcmListenerService?
What if I make it descriptive, but I just add My? Like MyGcmListenerService?
That breaks the concise part of the rule. What do you gain from adding My? Is the My adding value to the name? Even if you were attempting to take possession of the code (which is better done, and entirely feasible, using version control), My is meaningless. Who wrote My? Well, I did, obviously. It says it right there: "My".
If I really shouldn't use My, why is it in so many examples?
Really, it's just a placeholder.
It's like the metasyntactic variables "foo" and "bar" - it's usually used as a placeholder for a real name. - Opinions on using My as a class name prefix
Unfortunately, most experienced programmers just assume that people looking up examples will know this and will replace My with a good name when they actually use the code. For those new to programming, however, seeing My all over the place makes it look like an actual standard and best practice rather than a placeholder.
Ok, then what do I use instead of My?
There are a lot of really good guides about naming variables and classes out there. You can start with Wikipedia, but if you google around a bit you can find plenty of articles on good vs. bad names. At the heart of it all, though, is one rule: make names descriptive, and keep them concise.
If I read the name of a class, method, or variable and immediately know its purpose, it is a good name.
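To make that concrete, a small before-and-after sketch in Ruby (the domain names are invented):

# Before: the names say nothing about purpose
class MyService
  def my_method(my_var1)
    my_var1 * 0.2
  end
end

# After: each name states what the thing is or does
class InvoiceCalculator
  def tax_for(subtotal)
    subtotal * 0.2
  end
end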

How to name bools that hold the return value of IsFoo() functions?

I read that it's a good convention to name functions that return a bool like IsChecksumCorrect(Packet), but I also read that it's a good convention to name boolean variables like IsAvailable = True
But the two rules are incompatible: I can't write:
IsChecksumCorrect = IsChecksumCorrect(Packet)
So what's the best way to name vars that store boolean values returned by such functions?
PS: Extra points if you can think of a way that doesn't depend on changing the case (some languages--like Delphi--are case-insensitive).
First of all, difficulties can only arise with functions that don't take arguments; in your example, the variable should simply be called IsPacketChecksumCorrect.
Even with argument-less functions, I think you would only have a problem if you were caching the result of the function purely for performance's sake, and could otherwise safely replace every use of the variable with a call to the function. In all other cases you can usually come up with a more specific name for the variable.
If you were indeed just caching, why not just call the variable Functionname_cache? It seems quite clear to me.
If you need to use this "technique" a lot in your project and _cache seems too long, or you simply don't like it, you can settle on a convention of your own; as long as you are consistent, you can adopt whatever works best for you. People new to the project only need the convention explained once, and they will easily recognize it ever after.
By the way, there are various opinions on naming conventions for booleans. Personally I prefer to put the subject first, which makes the ifs more readable, e.g. ChecksumIsCorrect, ChecksumCorrect or ChecksumCorrectness. I actually prefer to drop the Is altogether; the name usually remains clear even when you omit it.
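As an aside, Ruby sidesteps this particular clash, and in a way that doesn't depend on case (a sketch with invented names): predicate methods conventionally end in ?, which is not legal in a variable name, so the two can never collide:

# Hypothetical packet with a stored checksum
Packet = Struct.new(:bytes, :checksum)

def checksum_correct?(packet)
  packet.bytes.sum % 256 == packet.checksum
end

packet = Packet.new([1, 2, 3], 6)
checksum_correct = checksum_correct?(packet)  # cached result, no name clash
puts checksum_correct                         # => true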

Ruby: What is considered global, and how do you avoid it?

I am having a tough time grasping the idea of completely avoiding globals in Ruby.
To my understanding, if I define a method, the method would be considered global because I can call it later on in the script. The same goes for classes. Can you completely avoid globals?
Research has pointed me inconclusively towards closures and singleton methods, but I am still having trouble understanding how I would 'completely avoid globals.'
EDIT: I have also programmed a bit in JavaScript, where I used a closure as follows to avoid the use of any globals: (function(){...})(); Can something similar be done in Ruby?
It is important to understand the reasoning behind avoiding globals. The main reason is avoiding global state. By storing variable information in a global variable, you allow components of the program to behave differently when used in the same way at different times. This usually results in unintended side effects, causing testing and maintenance issues. Global classes or methods do not change (reflection aside) and are not an issue for that reason.
Another thing you may associate with globals is namespace pollution, which can be partially addressed by nesting namespaces in a way that groups components semantically. Those are still global, though, and thus not really avoidable.
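For example, nesting modules groups related components semantically while adding only one top-level name (a sketch, names invented):

module Billing
  module Tax
    def self.rate_for(region)
      region == :eu ? 0.21 : 0.0   # no mutable global state involved
    end
  end
end

puts Billing::Tax.rate_for(:eu)   # only Billing itself is a top-level constant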
Modules and Classes as globals are not a problem.
You could use a module to conceal a class, but ultimately you're going to have some globals.
Avoiding global variables and methods would be advised, though.
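As for the JavaScript-style (function(){...})(); in the question's EDIT: something similar works in Ruby (a minimal sketch of my own): a lambda called immediately keeps its locals from leaking, and a module keeps helper methods off of Object:

# Immediately-called lambda: locals stay inside
greeting = lambda {
  secret = "only visible in here"
  "Hello, #{secret.length} hidden characters"
}.call
puts greeting
puts defined?(secret).inspect   # => nil: the local did not escape

# Methods defined inside a module don't become global
# private methods on Object the way top-level defs do
module Helpers
  def self.shout(text)
    text.upcase
  end
end
puts Helpers.shout("scoped")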

ColdFusion: More efficient structKeyExists() instead of isDefined()

Which of these is more efficient in ColdFusion?
isDefined('url.myvar')
or
structKeyExists(url, 'myvar')
These days (CF8+) the difference in speed is not that great. However, structKeyExists is indeed a little faster. Here's why.
When you use isDefined, the string you pass in is searched for as a key name in several scopes. As of CF9, the list of scopes, in the order they are checked, is: (source)
Local (function local, UDFs and CFCs only)
Arguments
Thread local (inside threads only)
Query (not a true scope, applies for variables within query loops)
Thread
Variables
CGI
CFFile
URL
Form
Cookie
Client
Even if you use the scope name with isDefined (like: if isDefined('variables.foo')) the list will still be checked in order; and if the variable local.variables.foo is defined, it will be found BEFORE variables.foo.
On the other hand, structKeyExists only searches the structure you pass it for the existence of the key name; so there are far fewer places it will have to look.
By using more explicit code (structKeyExists), not only are you gaining some performance, but your code is more readable and maintainable, in my opinion.
Use the one which is easier to read and best shows what you're doing.
The difference between the two is incredibly small, and very likely not worth worrying about at all.
Don't waste time optimising code unless you have a proven and repeatable test case which demonstrates the slowness.
