ColdFusion: More efficient structKeyExists() instead of isDefined() - performance

Which of these is more efficient in ColdFusion?
isDefined('url.myvar')
or
structKeyExists(url, 'myvar')

These days (CF8+) the difference in speed is not that great. However, structKeyExists is indeed a little faster. Here's why.
When you use isDefined, the string you pass in is searched for as a key name in several scopes. As of CF9, the list of scopes, in the order checked, is:
Local (function local, UDFs and CFCs only)
Arguments
Thread local (inside threads only)
Query (not a true scope, applies for variables within query loops)
Thread
Variables
CGI
CFFile
URL
Form
Cookie
Client
Even if you use the scope name with isDefined (like: if isDefined('variables.foo')) the list will still be checked in order; and if the variable local.variables.foo is defined, it will be found BEFORE variables.foo.
On the other hand, structKeyExists only searches the structure you pass it for the existence of the key name; so there are far fewer places it will have to look.
By using more explicit code (structKeyExists), not only are you gaining some performance, but your code is more readable and maintainable, in my opinion.
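For instance, the everyday guard-with-default pattern reads naturally with structKeyExists. A minimal sketch in CFScript (the variable name and default value are illustrative):

<cfscript>
// check exactly one scope, then fall back to a default
if (structKeyExists(url, "myvar")) {
    greeting = url.myvar;
} else {
    greeting = "a sensible default";
}
</cfscript>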

Use the one which is easier to read and best shows what you're doing.
The difference between the two is incredibly small, and very likely not worth worrying about at all.
Don't waste time optimising code unless you have a proven and repeatable test case which demonstrates the slowness.

Related

Is it good in Rails view to define constant just to be used once?

I am passing parameters to a partial in a HAML file whose values are long, or calling methods whose names are long, for example:
"Some quite long string"
quiteLongMethodNameHere(otherConstant)
To make them shorter, I wrapped them in a constant/variable:
- message = "Some quite long string"
- is_important = quiteLongMethodNameHere(otherConstant)
= render :some_component, msg: message, is_important: is_important
Is this a good practice? Or should I just put the value on the param without wrapping it inside variable/constant?
It's a case-by-case decision. You want to balance the sometimes-competing interests of clarity and conciseness. For me, it depends on the expressiveness of both forms. If the long method name is clear, precise, and expressive, then I would be less interested in using an intermediate variable to hold its result than if it were not.
In other cases where the long form is less expressive, I will often use intermediate variables as "living" documentation, even if they are used only on the next line of code. This more explicitly reveals your intention to the reader (who may be someone else, or you at some future point in time).
I find intermediate variables are much better than code comments because code comments can more easily become obsolete, and having the clarification in code makes it available for debuggers, etc. The performance hit of creating an extra variable is minimal, and significant in only the most unusual of cases.
Another factor is if you are aggregating things (in arrays, hashes, etc.) that include these function calls and values, then using the intermediate variable makes the code neater, and possibly easier to understand, as you can customize the name to make the most sense in the context of that collection.
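For example (a sketch with hypothetical names):

- is_important = quiteLongMethodNameHere(otherConstant)
- alerts = [
    { msg: "Some quite long string", important: is_important },
    { msg: "Another message",        important: !is_important },
  ]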
Regardless of the length of the string, it makes sense to assign it to a variable/constant, and not refer to it directly in a view file. If it is text, it makes more sense to put it in an i18n file.
However, it is not good to do that in the main view file. If you are going to do it, do it in the controller file or a helper file.
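For example, assuming a hypothetical key in config/locales/en.yml (the key name and text are made up):

en:
  dashboard:
    welcome_message: "Some quite long string"

the view line then stays short:

= render :some_component, msg: t("dashboard.welcome_message")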

What benefit does discriminating between local and global variables provide?

I'm wondering what benefit discriminating between local and global variables provides. It seems to me that if everything were made a global variable, there would be a lot less confusion.
Wouldn't declaring everything a global variable result in fewer errors, because one couldn't mistakenly reference a local variable where a global was intended?
Where is my logic wrong on this?
Some of this boils down to good coding practices. Keeping variables local also means it becomes simpler to share code from one application to another without having to worry about code conflicts. While it's simpler to make everything global, getting into the habit of only using global variables when you actually have to will force you to code more efficiently and will make your code more structured.
I think your key oversight is thinking that an error telling you a local variable doesn't exist is a bad thing - it isn't. You've made a mistake and Ruby is telling you so. This type of mistake is usually easy to fix: you've misspelled something, or you're using something that you forgot to create.
Global variables everywhere might remove those errors, but they would replace them with a far harder set of errors to reason about: accidentally using a variable that another bit of code is using. Imagine if every time you called a function (one of your own, a standard library one, or one from a gem) you had to check which global variables it might change (and which functions it called, since those might also change global variables). If you make a mistake you might get an error message (if the class of the object in the variable changes enough), but often you would just silently get incorrect results (if the value of a variable you were using changes unexpectedly).
In general global variables are much harder to work with and people avoid them when possible.
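A contrived Ruby sketch of that failure mode (all names are made up):

$count = 10                 # global: shared by every line of the program

def reset_widget_count
  $count = 0                # silently clobbers the caller's value
end

reset_widget_count
puts $count                 # => 0, not 10; no error, just a wrong result

def local_demo
  count = 10                # local: misspelling it later raises NameError,
  count * 2                 # a loud, easy-to-fix mistake
end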
If all variables are global, every line of code in every program (including those which haven't been written yet) written by every programmer on the planet (including those who haven't been born yet or are already dead) must universally, uniquely agree on the names of variables. If you use a variable name that someone else on a different continent two years from now will also use, both of your programs will break, when used together.

How to name bools that hold the return value of IsFoo() functions?

I read that it's a good convention to name functions that return a bool like IsChecksumCorrect(Packet), but I also read that it's a good convention to name boolean variables like IsAvailable = True
But the two rules are incompatible: I can't write:
IsChecksumCorrect = IsChecksumCorrect(Packet)
So what's the best way to name vars that store boolean values returned by such functions?
PS: Extra points if you can think of a way that doesn't depend on changing the case (some languages--like Delphi--are case-insensitive).
First of all, difficulties can arise only with functions that don't require arguments; in your example, the variable should simply be called IsPacketChecksumCorrect.
Even with argument-less functions, I think you would only have a problem if you were just caching the result of the function for performance's sake - that is, if you could safely replace every use of the variable with a call to the function, were it not for performance. In all other cases I think you could always come up with a more specific name for the variable.
If you were indeed just caching, why not just call the variable Functionname_cache? It seems quite clear to me.
If you needed to use this "technique" a lot in your project and _cache seemed too long, or you did not like it, you could settle on a convention of your own; as long as you are consistent you can adopt whatever works best for you. People new to the project only need the convention explained once, and they will easily recognize it ever after.
By the way, there are various opinions on conventions for naming booleans. Personally I prefer to put the subject first, which makes the ifs more readable, e.g. ChecksumIsCorrect, ChecksumCorrect or ChecksumCorrectness. I actually prefer to leave out the Is altogether; the name usually remains clear even if you omit it.
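To make those two conventions concrete, a sketch in C#-style pseudocode (names are illustrative):

bool isChecksumCorrect_cache = IsChecksumCorrect(packet);  // explicit cache suffix
bool checksumIsCorrect = IsChecksumCorrect(packet);        // subject-first, no case clash
if (checksumIsCorrect) { Acknowledge(packet); }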

Is checking Perl function arguments worth it?

There's a lot of buzz about MooseX::Method::Signatures and even before that, modules such as Params::Validate that are designed to type check every argument to methods or functions. I'm considering using the former for all my future Perl code, both personal and at my place of work. But I'm not sure if it's worth the effort.
I'm thinking of all the Perl code I've seen (and written) before that performs no such checking. I very rarely see a module do this:
my ($a, $b) = @_;
defined $a or croak '$a must be defined!';
!ref $a or croak '$a must be a scalar!';
...
@_ == 2 or croak "Too many arguments!";
Perhaps because it's simply too much work without some kind of helper module, but perhaps because in practice we don't send excess arguments to functions, and we don't send arrayrefs to methods that expect scalars - or if we do, we have use warnings; and we quickly hear about it - a duck typing approach.
So is Perl type checking worth the performance hit, or are its strengths predominantly shown in compiled, strongly typed languages such as C or Java?
I'm interested in answers from anyone who has experience writing Perl that uses these modules and has seen benefits (or not) from their use; if your company/project has any policies relating to type checking; and any problems with type checking and performance.
UPDATE: I read an interesting article on the subject recently, called Strong Testing vs. Strong Typing. Ignoring the slight Python bias, it essentially states that type checking can be suffocating in some instances, and even if your program passes the type checks, it's no guarantee of correctness - proper tests are the only way to be sure.
If it's important for you to check that an argument is exactly what you need, it's worth it. Performance only matters when you already have correct functioning. It doesn't matter how fast you can get a wrong answer or a core dump. :)
Now, that sounds like a stupid thing to say, but consider some cases where it isn't. Do I really care what's in @_ here?
sub looks_like_a_number { $_[0] !~ /\D/ }
sub is_a_dog { eval { $_[0]->DOES( 'Dog' ) } }
In those two examples, if the argument isn't what you expect, you are still going to get the right answer because the invalid arguments won't pass the tests. Some people see that as ugly, and I can see their point, but I also think the alternative is ugly. Who wins?
However, there are going to be times when you need guard conditions because your situation isn't so simple. The next thing you have to pass your data to might expect values within certain ranges or of certain types, and might not fail elegantly.
When I think about guard conditions, I think through what could happen if the inputs are bad and how much I care about the failure. I have to judge that by the demands of each situation. I know that sucks as an answer, but I tend to like it better than a bondage-and-discipline approach where you have to go through all the mess even when it doesn't matter.
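For illustration, a minimal hand-rolled guard of that kind (the method and the accepted range are invented):

use Carp qw(croak);

sub set_volume {
    my ($self, $level) = @_;
    # the next layer expects an integer in a fixed range, so fail loudly here
    croak "level must be an integer between 0 and 11"
        unless defined $level && $level =~ /^\d+$/ && $level <= 11;
    $self->{volume} = $level;
}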
I dread Params::Validate because its code is often longer than my subroutine. The Moose stuff is very attractive, but you have to realize that it's a way for you to declare what you want and you still get what you could build by hand (you just don't have to see it or do it). The biggest thing I hate about Perl is the lack of optional method signatures, and that's one of the most attractive features in Perl 6 as well as Moose.
I basically concur with brian. How much you need to worry about your method's inputs depends heavily on how much you are concerned that a) someone will input bad data, and b) bad data will corrupt the purpose of the method. I would also add that there is a difference between external and internal methods. You need to be more diligent about public methods because you're making a promise to consumers of your class; conversely you can be less diligent about internal methods as you have greater (theoretical) control over the code that accesses it, and have only yourself to blame if things go wrong.
MooseX::Method::Signatures is an elegant solution to adding a simple declarative way to explain the parameters of a method. Method::Signatures::Simple and Params::Validate are nice but lack one of the features I find most appealing about Moose: the Type system. I have used MooseX::Declare and by extension MooseX::Method::Signatures for several projects and I find that the bar to writing the extra checks is so minimal it's almost seductive.
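To give a flavour of that declarative style, a hedged sketch (the class and the constraint are illustrative):

use MooseX::Declare;

class BankAccount {
    has balance => (is => 'rw', isa => 'Num', default => 0);

    # the signature both documents and enforces the parameter
    method deposit (Num $amount where { $_ > 0 }) {
        $self->balance( $self->balance + $amount );
    }
}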
Yes, it's worth it - defensive programming is one of those things that is always worth it.
The counterargument I've seen presented to this is that checking parameters on every single function call is redundant and a waste of CPU time. This argument's supporters favor a model in which all incoming data is rigorously checked when it first enters the system, but internal methods have no parameter checks because they should only be called by code which will pass them data which has already passed the checks at the system's border, so it is assumed to still be valid.
In theory, I really like the sound of that, but I can also see how easily it can fall like a house of cards if someone uses the system (or the system needs to grow to allow new uses) in a way that was unforeseen when the initial validation border was established. All it takes is one external call to an internal function and all bets are off.
In practice, I'm using Moose at the moment, and Moose doesn't really give you the option to bypass validation at the attribute level; plus MooseX::Declare handles and validates method parameters with less fuss than unrolling @_ by hand, so it's pretty much a moot point.
I want to mention two points here: the first is tests, the second is the performance question.
1) Tests
You mentioned that tests can do a lot, and that tests are the only way to be sure that your code is correct. In general I would say this is absolutely correct. But tests themselves only solve one problem. If you write a module, you have two different people using it: you as the developer, and the user who merely calls your module. Tests help with the first, ensuring that your module is correct and does the right thing, but they don't help the user who just uses your module.
For the latter, I have an example. I had written a module using Moose and some other stuff, and my code always ended in a segmentation fault. I began to debug my code and search for the problem, and I spent around 4 hours finding the error. In the end, the problem was that I had used Moose with the Array trait: I called the "map" function but didn't provide a subroutine reference, just a string or something else. Sure, this was an absolutely stupid error of mine, but I spent a long time debugging it. A simple check that the argument is a subref would have cost the developer 10 seconds, and would have saved me, and probably others, a lot more time.
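Roughly what the fix looked like, reconstructed for illustration (attribute and method names are made up):

has items => (
    traits  => ['Array'],
    is      => 'ro',
    isa     => 'ArrayRef[Str]',
    default => sub { [] },
    handles => { map_items => 'map' },
);

# wrong: $obj->map_items('uc');              # a plain string, not a coderef
# right: the Array trait's map expects a subref, with each element in $_
my @upper = $obj->map_items( sub { uc $_ } );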
I also know of other examples. I had written a REST client for an interface, completely OOP with Moose. You always got objects back; you could change the attributes, but of course it didn't call the REST API for every change you made. Instead, you changed your values and at the end called an update() method that transferred the data. Then I had a user who wrote:
$obj->update({ foo => 'bar' })
Sure, I got an error report that update() did not work. But of course it didn't work: the update() method doesn't accept a hashref; it only synchronises the current state of the object with the online service. The correct code would be:
$obj->foo('bar');
$obj->update();
The first call "works" because I never checked the arguments, and I don't throw an error if someone passes more arguments than I expect. The method simply starts like this:
sub update {
    my ( $self ) = @_;
    ...
}
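For comparison, a one-line guard that would have caught the misuse immediately (a sketch, not the original code):

use Carp qw(croak);

sub update {
    my ($self) = @_;
    croak 'update() takes no arguments; set attributes first, then call update()'
        if @_ > 1;
    ...
}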
Sure, all my tests worked 100% fine. But handling these errors that are not really errors cost me time too, and it probably cost the user a lot more time. So in the end: yes, tests are the only correct way to ensure that your code works correctly, but that doesn't mean type checking is meaningless. Type checking is there to help everyone who didn't develop your module to use it correctly, and it saves you and others time spent finding dumb errors.
2) Performance
The short version: you don't care about performance until you have to. Performance is always fast enough until your module really works too slowly; only then do you need further investigation. And for that investigation you should use a profiler like Devel::NYTProf to look at what is actually slow.
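Running the profiler is a two-step affair (standard Devel::NYTProf usage; the script name is a placeholder):

perl -d:NYTProf yourscript.pl
nytprofhtml        # reads nytprof.out and writes an HTML report to ./nytprof/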
I would say that in 99% of cases the slowness is not your type checking, it is your algorithm: you do a lot of computation, call functions too often, and so on. Often it helps to use a completely different solution - a better algorithm, caching, or something else - and the performance hit turns out not to be your type checking at all. But even if the checking is the performance hit, then just remove it where it matters.
There is no reason to drop type checking where performance doesn't matter. Do you think type checking matters in a case like the one above, where I had written a REST client? 99% of the performance issues there are the number of requests that go to the web service, or the time such a request takes; dropping type checking or MooseX::Declare would probably speed up absolutely nothing. And even if you do see a performance disadvantage, sometimes it is acceptable, because the speed doesn't matter or because something gives you greater value. DBIx::Class is slower than pure SQL with DBI, but DBIx::Class gives you a lot in return.
Params::Validate works great, but of course checking args slows things down. Tests are mandatory (at least in the code I write).
Yes it's absolutely worth it, because it will help during development, maintenance, debugging, etc.
If a developer accidentally sends the wrong parameters to a method, a useful error message will be generated, instead of the error being propagated down to somewhere else.
I'm using Moose extensively for a fairly large OO project I'm working on. Moose's strict type checking has saved my bacon on a few occasions. Most importantly, it has helped avoid situations where "undef" values are incorrectly passed to a method. In just those instances alone it has saved me hours of debugging time.
The performance hit is definitely there, but it can be managed. Two hours of using NYTProf helped me find a few Moose attributes that I was grinding too hard; I refactored my code and got a 4x performance improvement.
Use type checking. Defensive coding is worth it.
Patrick.
Sometimes. I generally do it whenever I'm passing options via hash or hashref. In these cases it's very easy to misremember or misspell an option name, and checking with Params::Check can save a lot of troubleshooting time.
For example:
use Carp qw(croak);
use Params::Check qw(check);

sub revise {
    my ($file, $options) = @_;
    my $tmpl = {
        test_mode       => { allow => [0,1],     default => 0 },
        verbosity       => { allow => qr/^\d+$/, default => 1 },
        force_update    => { allow => [0,1],     default => 0 },
        required_fields => { default => [] },
        create_backup   => { allow => [0,1],     default => 1 },
    };
    my $args = check($tmpl, $options, 1)
        or croak "Could not parse arguments: " . Params::Check::last_error();
    ...
}
Prior to adding these checks, I'd forget whether the names used underscores or hyphens, pass require_backup instead of create_backup, etc. And this is for code I wrote myself--if other people are going to use it, you should definitely do some sort of idiot-proofing. Params::Check makes it fairly easy to do type checking, allowed value checking, default values, required options, storing option values to other variables and more.

Refactoring methods in existing code base with huge number of parameters

I have inherited an existing code base where the "features" are as follows:
huge monolithic classes with (literally) hundreds of member variables and methods that go on for pages (er, screens)
public and private methods with a large number of arguments.
I am trying to clean up and refactor the code, to leave it a little better than how I found it. So my questions:
is it worth it (or do you) to refactor methods with 10 or so arguments so that they are more readable?
are there best practices on how long methods should be? How long do you usually keep them?
are monolithic classes bad?
is it worth it (or do you) to refactor methods with 10 or so arguments so that they are more readable?
Yes, it is worth it. It is typically more important to refactor methods that are not "reasonable" than ones that already are nice, short, and have a small argument list.
Typically, if you have many arguments, it's because a method does too much - most likely, it should be a class of its own, not a method.
That being said, in those cases when many parameters are required, it's best to encapsulate the parameters into a single class (i.e. SpecificAlgorithmOptions) and pass one instance of that class. This way, you can provide clean defaults, and it's very obvious which parameters are essential vs. optional (based on what is required to construct the options class).
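A sketch of such a parameter object in Perl/Moose, to echo the earlier answers (the option names and defaults are invented):

package SpecificAlgorithmOptions;
use Moose;

# required options are obvious from the constructor; the rest have clean defaults
has input_file => (is => 'ro', isa => 'Str',  required => 1);
has max_passes => (is => 'ro', isa => 'Int',  default  => 3);
has verbose    => (is => 'ro', isa => 'Bool', default  => 0);

# a ten-argument call then collapses to:
# $engine->run( SpecificAlgorithmOptions->new( input_file => 'data.csv' ) );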
are there best practices on how long methods should be? How long do you usually keep them?
A method should be as short as possible. It should have one purpose, and be used for one task, whenever possible. If it's possible to split it into separate methods, where each has a real, qualitative task, then do so when refactoring.
are monolithic classes bad?
Yes.
If the code is working and there is no need to touch it, I wouldn't refactor. I only refactor very problematic cases if I have to touch them anyway (either to extend their functionality or to fix bugs). I favor the pragmatic way: only touch (in 95% of cases) what you change.
Some first thoughts on your specific problem (though in detail it is difficult without knowing the code):
start to group instance variables; these groups will then be targets for 'extract class'
when you have grouped these variables, you can hopefully group some methods too, which can also be moved when doing 'extract class'
often there are many methods which don't use any fields; make them static (they are most likely helper methods, which can be extracted to helper classes)
in case unrelated instance fields are mixed in many methods, do lots of 'extract method'
use automatic refactoring tools as much as possible, because you most likely have no tests in place and automation is safer
Regarding your other concrete questions.
is it worth it (or do you) to refactor methods with 10 or so arguments so that they are more readable?
Definitely. Ten parameters are too many for us humans to grasp; most likely the method is doing too much.
are there best practices on how long methods should be? How long do you usually keep them?
It depends... on preferences. I stated some things in this thread (though the question was about PHP); still, I would apply those numbers/metrics to any language.
are monolithic classes bad?
It depends on what you mean by monolithic. If you mean many instance variables, endless methods, and a lot of if/else complexity, then yes.
Also have a look at a real gem (to me, a must-have for every developer): Working Effectively with Legacy Code.
Assuming the code is functioning I would suggest you think about these questions first:
is the code well documented?
do you understand the code?
how often are new features being added?
how often are bugs reported and fixed?
how difficult is it to modify and fix the code?
what is the expected life of the code?
how many versions of the compiler are you behind (if at all)?
is the OS it runs on expected to change during its lifetime?
If the system will be replaced in five years, is documented well, will undergo few changes, and bugs are easy to fix - leave it alone regardless of the size of the classes and the number of parameters. If you are determined to refactor, make a list of your refactoring proposals in order of maximum benefit with minimum changes, and attack it incrementally.
