Why does Xcode keep insisting on using let rather than var? - xcode

With one of the recent Xcode updates it keeps telling me to use "let" rather than "var". Why? I would have thought "var" would be the best to use, and "let" something to reserve for specific situations.

It is the opposite that is true: let is safer.
With let, the developer explicitly states that the value is a constant that must not change, and the compiler ensures this; it enforces the developer's stated intent.
With var, the developer explicitly states that the value can change, that change is expected. If the value is never actually changed, the compiler notifies the developer so that they can (and should) change the declaration to let.
Because the compiler enforces this, the code base is safer from inadvertent value changes.
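A minimal sketch of what the compiler enforces, using C++'s const, which plays the same role as Swift's let (the names here are purely illustrative):

int main() {
    const int maxRetries = 3; // like "let maxRetries = 3": a constant
    int attempts = 0;         // like "var attempts = 0": mutation expected

    attempts = 1;             // fine: mutation was declared as intended
    // maxRetries = 5;        // compile error: assignment of read-only variable
    return attempts;
}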

You should always try to do the safest thing possible. If you can modify an object, then you can modify it by mistake. If you can't modify an object, then you can't modify it by mistake. Therefore, if you don't want to modify an object or value, you should use "let" and not "var".
Or look at it the other way round: if you use "var", you declare that you intend to modify the value. If you never modify the value, it may be because you used the wrong variable name. Say you have var x and var y, then you write x = 100 and x = 200, when the second assignment was meant for y. Since y is never modified, the compiler can flag it: it detects that your intent and your actions were different.

Related

Using std::move, and preventing further use of the data

I have been using C++11 for some time, but I always avoided std::move because I was afraid that a user of a library, without access to its source code, would try to use a variable after it had been moved from.
So basically, something like
void loadData(std::string&& path);
would not be enough to make the user understand that the argument will be moved.
Is it expected that the use of && implies that the data will be moved? I know that comments can be used to explain the use case, but a lot of people don't pay attention to them.
Is it safe to assume that when you see a && the data will be moved? And when should I use std::move, and how can I make it explicit from the signature?
Is it expected that the use of && would imply that the data will be moved?
Generally speaking, yes. A user cannot call loadData with an lvalue; they must provide a prvalue or an xvalue. So if you have a variable to pass, your code would generally look like loadData(std::move(variable)), which is a pretty good indicator of what you're doing from your side. std::forward could also be employed, but you'd still see it at the call site.
Indeed, generally speaking it is extremely rude to move from a parameter which is not an rvalue reference.
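A minimal sketch of how this looks from both sides (the body of loadData and the file names are illustrative):

#include <iostream>
#include <string>
#include <utility>

void loadData(std::string&& path) {
    // Take ownership of the caller's string by moving from the parameter.
    std::string owned = std::move(path);
    std::cout << "loading " << owned << '\n';
}

int main() {
    std::string path = "data/input.txt";
    // loadData(path);          // error: an lvalue cannot bind to std::string&&
    loadData(std::move(path));  // OK: the move is visible at the call site
    loadData("other.txt");      // OK: a prvalue binds directly
    // 'path' is now in a valid but unspecified state; don't reuse its value.
}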

What benefit does discriminating between local and global variables provide?

I'm wondering what benefit discriminating between local and global variables provides. It seems to me that if everything were made a global variable, there would be a lot less confusion.
Wouldn't declaring everything as a global variable result in fewer errors, because one wouldn't mistakenly reference a local variable that doesn't exist in the current scope?
Where is my logic wrong on this?
Some of this boils down to good coding practices. Keeping variables local also means it becomes simpler to share code from one application to another without having to worry about code conflicts. While it's simpler to make everything global, getting into the habit of only using global variables when you actually have to will force you to code more efficiently and will make your code more structured.
I think your key oversight is thinking that an error telling you a local variable doesn't exist is a bad thing - it isn't. You've made a mistake and Ruby is telling you so. This type of mistake is usually easy to fix: you've misspelled something, or you're using something that you forgot to create.
Global variables everywhere might remove those errors, but they would replace them with a far harder set of errors to reason about: accidentally using a variable that another bit of code is using. Imagine if every time you called a function (one of your own, a standard library one, or one from a gem) you had to check which global variables it might change (and which functions it called, since those might also change global variables). If you make a mistake, you might get an error message (if the class of the object in the variable changes enough), but often you would just silently get incorrect results (if the value of a variable you were using changed unexpectedly).
In general global variables are much harder to work with and people avoid them when possible.
If all variables are global, every line of code in every program (including those which haven't been written yet) written by every programmer on the planet (including those who haven't been born yet or are already dead) must universally, uniquely agree on the names of variables. If you use a variable name that someone else on a different continent two years from now will also use, both of your programs will break when used together.
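A small C++ sketch of that silent-corruption problem (all names here are made up for illustration):

#include <iostream>

int total = 0; // global, visible to every function

// Written by someone else, who also happened to use 'total'.
void printReport() {
    total = 0; // silently clobbers the caller's running sum
    std::cout << "report printed\n";
}

int main() {
    total = 40;
    printReport();              // looks harmless at the call site
    total += 2;
    std::cout << total << '\n'; // prints 2, not 42 - and no error anywhere
}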

Defining constants for 0 and 1

I was wondering whether others find it redundant to do something like this...
const double RESET_TIME = 0.0;
timeSinceWhatever = RESET_TIME;
rather than just doing
timeSinceWhatever = 0.0;
Do you find the first example to aid in readability? The argument comes down to using magic numbers, and while 0 and 1 are considered "exceptions" to the rule, I've always kind of thought that these exceptions only apply to initializing variables or to index access. When the number is meaningful, it should have a name attached to that meaning.
I'm wondering whether this assumption is valid, or if it's just redundant to give 0 a named constant.
Well, in your particular example it doesn't make much sense to use a constant.
But if, for example, there were even a small chance that RESET_TIME would change in the future (and become, let's say, 1), then you should definitely use a constant.
You should also use a constant if your intent is not obvious from the number. But in your particular example I think that timeSinceWhatever = 0; is more clear than timeSinceWhatever = RESET_TIME.
Typically, one benefit of defining a constant rather than just using a literal is if the value ever needs to change in several places at once.
From your own example, what if RESET_TIME needed to become -1.5 due to some obscure new business rule? You could change it in one place, the definition of the constant, or you could change it everywhere you had used 0.0 as a floating-point literal.
In short, defining constants, in general, aids primarily in maintainability.
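For instance, in C++ (a sketch; the timer variables and the constant's value are illustrative):

// One definition to update if the rule ever changes (say, to -1.5).
const double RESET_TIME = 0.0;

double lapTime = 0.0, totalTime = 0.0, splitTime = 0.0;

void resetAllTimers() {
    // Every reset site refers to the single constant.
    lapTime = RESET_TIME;
    totalTime = RESET_TIME;
    splitTime = RESET_TIME;
}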
If you want to be more specific and let others know why you're doing what you're doing, you might instead create a function (if your language permits free-standing functions), such as
timeSinceWhenever = ResetStopWatch();
or, better yet, when dealing with units, either find a library with built-in unit types or create your own. I wouldn't suggest creating your own for time, as there is an abundance of such libraries. I've seen this before in code, if it helps:
Temperature groundTemp = Temperature.AbsoluteZero();
which is a nice way of indicating what is going on.
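A minimal sketch of such a wrapper type in C++ (the name and the kelvin representation are illustrative, not from a real library):

struct Temperature {
    double kelvin;
    static Temperature absoluteZero() { return {0.0}; }
};

int main() {
    // The named factory documents intent better than a bare 0.0 would.
    Temperature groundTemp = Temperature::absoluteZero();
    (void)groundTemp; // silence unused-variable warnings in this sketch
}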
I would define it only if there were ever a chance that RESET_TIME could be something different from 0.0; that way you can make one change and update all references. Otherwise 0.0 is the better choice to my eye, just so you don't have to trace back and see what RESET_TIME was defined as.
Constants are preferable because they let you use a value that can then be changed in successive versions of the code. It is not always possible to use constants, especially in some OO languages where a constant cannot hold anything beyond a basic datatype; generally, though, a programming language has some way to define non-modifiable objects/datatypes.
Well, suppose that RESET_TIME is used often in your code and you want to change the value; it is better to do it once than in every statement.
Better than a constant: make it a configuration variable and set it to a default value. But yes, RESET_TIME is more readable, provided it's used more than once; otherwise just use a code comment.
That code is OK. const variables are unchangeable variables, so whenever you need to reset something, you can always use your constant to do that.

Should we create objects if we need them only once in our code?

This is a coding style question:
Here is the case
Dim obj1 as new ClassA
' Some lines of code which does not uses obj1
Something.Pass(obj1) ' Only line we are using obj1
Or should we directly initialize the object when passing it as an argument?
Something.Pass(new ClassA())
If you're only using the object in that method call, it's probably better to pass "new ClassA()" directly into the call. This way, you won't have an extra variable lying around that someone might mistakenly try to use in the future.
However, for readability and debugging, it's often useful to create the temporary object and pass it in. This way, you can inspect the variable in the debugger before it gets passed into the method.
Your question asks "should we create objects"; both your examples create an object.
There is not logically any difference at all between the two examples. Giving a name to an object allows it to be referred to in more than one place. If you aren't doing that, it's much clearer to not give it a name, so someone maintaining the code can instantly see that the object is only passed to one other method.
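Both forms, sketched in C++ (ClassA and pass are stand-ins for your own types):

struct ClassA {};

void pass(ClassA obj) { /* use obj */ }

int main() {
    // Named temporary: easy to inspect in a debugger, but lingers in scope.
    ClassA obj1;
    pass(obj1);

    // Created at the call site: clearly used exactly once.
    pass(ClassA{});
}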
Generally speaking, I would say no, there's nothing wrong with what you're doing, but it does sound like there may be some blurring of responsibilities between your calling function, the called function, and the temporary object. Perhaps a bit of refactoring is in order.
I personally prefer things to be consistent, and to make my life easier (what I consider making my life easier may not be what you consider making your life easier... so do with this advice what you will).
If you have something like this:
o = new Foo();
i = 7;
bar(o, i, new Car());
then you have an inconsistency (two parameters are variables, the other is created on the fly). To be consistent you would either:
always pass things as variables
always pass things created on the fly
Only one of those will always work (the first one!), since a value that comes from an earlier computation has to live in a variable anyway.
There are practical aspects to this as well: making a variable makes debugging easier.
Here are some examples:
while(there are still lines in the file)
{
foo(nextLine());
}
If you want to display the next line for debugging you now need to change it to:
while(there are still lines in the file)
{
line = nextLine();
display(line);
foo(line);
}
It would be easier (and safer) to have made the variable up front. It is safer because you are less likely to accidentally call nextLine() twice (by forgetting to take it out of the foo call).
You can also view the value of "line" in a debugger without having to go into the "foo" method.
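A concrete C++ rendering of that second form (assuming foo and display both take the line as a string; the file name is made up):

#include <fstream>
#include <iostream>
#include <string>

void display(const std::string& s) { std::cout << s << '\n'; }
void foo(const std::string&) { /* process the line */ }

int main() {
    std::ifstream file("input.txt");
    std::string line;
    while (std::getline(file, line)) {
        display(line); // 'line' is also visible in a debugger here
        foo(line);     // the line is read exactly once per iteration
    }
}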
Another one that can happen is this:
foo(b.c.d()); // in Java you get a NullPointerException on this line...
was "b" or "c" the thing that was null? No idea.
Bar b;
Car c;
int d;
b = ...;
c = b.c; // NullPointerException here - you know b was null
d = c.d(); // NullPointerException here - you know c was null
foo(d); // can view d in the debugger without having to go into foo.
Some debuggers will let you highlight "d()" and see what it outputs, but that is dangerous if "d()" has side effects, as the debugger will wind up calling "d()" each time you view the value.
The way I code for this does make it more verbose (like this answer :-) but it also makes my life easier if things are not working as expected - I spend far less time wondering what went wrong and I am also able to fix bugs much faster than before I adopted this way of doing things.
To me the most important thing when programming is to be consistent. If you are consistent then the code is much easier to get through because you are not constantly having to figure out what is going on, and your eyes get drawn to any "oddities" in the code.

Ruby Terminology Question: Is this a Ruby declaration, definition and assignment, all at the same time?

If I say:
x = "abc"
this seems like a declaration, definition and assignment, all at the same time, regardless of whether I have said anything about x in the program before.
Is this correct?
I'm not sure what the correct terminology is in Ruby for declarations, definitions and assignments, or if there is even a distinction between these things because of the dynamic typing in Ruby.
#tg: Regarding your point #2: even if x existed before the x = "abc" statement, couldn't you call the x = "abc" statement a definition/re-definition?
Declaration: No.
It doesn't make sense to talk about declaring variables in Ruby, because there's nothing analogous to a declaration in the language. Languages designed for compilation have declarations because the compiler needs to know in advance how big datatypes are and how to access different parts of them. For example, if I say in C:
int *i;
then the compiler knows that somewhere there is some memory set aside for i, and it's as big as it needs to be to hold a pointer to an int. Eventually the linker will hook all the references to i together, but at least the compiler knows it's out there somewhere.
Definition: Probably.
A definition typically sets an initial value for something (at least in the familiar compiled languages). If x didn't exist before the x = "abc" statement, then I guess you could call this a definition, since that is when Ruby has to assign a value to the symbol x.
Again, though, definition is a specific term that people typically use to distinguish the initial, static assignment of a value to some variable from that variable's declaration. In Ruby, you don't have that kind of statement. You typically just say a variable is defined if it's been assigned a value somewhere in your current scope, and you say it's undefined if it hasn't.
You usually don't talk about it having a definition, because in Ruby that just amounts to assignment. There's no special context that would justify you saying definition like there is in other languages.
Which brings us to...
Assignment: Yes.
You can definitely call this an assignment, since it is assigning a value to the symbol x. I don't think anyone will disagree with that.
Pretty much. And if, on the very next line, you do:
x = 1
Then you've just re-defined it, as well as assigned it (it's now an integer, not a string). Duck typing is very different from what you're probably used to.
