I know that the override contextual keyword was introduced to write safer code (by checking for a virtual function with the same signature), but I don't feel good about it, because it seems redundant to me to write override every time I want to override a virtual function.
Is it bad practice not to use the override contextual keyword in 99% of cases? Why/when should I use it (isn't a compiler warning enough when we mistakenly hide a virtual function)?
EDIT: In other words: what is the advantage of using the override contextual keyword in C++11, when in C++03 we already got a compiler warning if we mistakenly hid a virtual function (without using the override contextual keyword)?
The override keyword is totally useful and I would recommend using it all the time.
If you misspell the name of your virtual function, it will compile fine, but at runtime the program will call the wrong function: it will call the base class function rather than your override.
It can be a really difficult bug to find:
#include <iostream>

class Base
{
public:
    virtual ~Base() {}

    virtual int func()
    {
        // do stuff for bases
        return 3;
    }
};

class Derived
    : public Base
{
public:
    virtual int finc() // WHOOPS MISSPELLED, override would prevent this
    {
        // do stuff for deriveds
        return 8;
    }
};

int main()
{
    Base* base = new Derived;
    std::cout << base->func() << std::endl;
    delete base;
}
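With override, the same typo becomes a compile-time error instead of a silent runtime bug. A minimal sketch of the corrected Derived (only the keyword is new):

class Derived
    : public Base
{
public:
    int finc() override // error: 'finc' marked 'override' but does not override
    {
        return 8;
    }
};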
Annotations are what you call contextual keywords. They serve as clarification, to make sure anyone who reads the code realizes it is a function that overrides a function in a superclass or an interface.
The compiler can also give a warning if the originally overridden function was removed from the base class, in which case you might want to think about removing your function as well.
As far as I know, nothing bad happens if you omit annotations. It's neither right nor wrong. Like you stated correctly already: annotations are introduced to write safer code.
However: They won't change your code in any functional way.
If you work as a single programmer on your own project, it might not matter whether you use them or not. It is, however, good practice to stick to one style (i.e. either you use it, or you don't. Anything in between, like sometimes using it and sometimes not, only causes confusion).
If you work in a team, you should discuss the topic with your teammates and decide whether you all use it or not.
What is the advantage of using the override contextual keyword in C++11 while we always had a compiler warning if we were hiding a virtual function mistakenly?
Nearly none!?
But:
It depends on how many warnings your build rules will accept. If you say every warning MUST be fixed, you will get the same result, but only as long as you are using a compiler that actually gives you that warning.
We have decided to always use override and to remove virtual on overriding methods. So the "overhead" is zero, and the code is portable in the sense that misuse produces an error on any conforming compiler.
I personally like this new feature, because it makes the language clearer. If you say this is an override, it will be checked! And if we want to add a new method with a different signature, we will NOT get a false positive warning, which is important in your scenario!
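To illustrate that convention, here is a minimal sketch with hypothetical Shape/Circle names: virtual appears only in the base class, and every overriding method is marked override instead.

class Shape
{
public:
    virtual ~Shape() {}
    virtual double area() const = 0;
};

class Circle : public Shape
{
public:
    explicit Circle(double r) : mRadius(r) {}

    // No 'virtual' here; 'override' alone documents the intent and
    // makes the compiler verify that the signature matches the base.
    double area() const override { return 3.14159 * mRadius * mRadius; }

private:
    double mRadius;
};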
Related
I have a D module which I hope contains public and private parts. I have tried using the keywords private and static before function definitions. I have a function that I wish to make externally callable / public, and ideally I would like it to be inlined at the call site. This function calls other module-internal functions that are intended to be private, i.e. not externally callable. Calls to these are successfully inlined within the module, and a lot of the cruft is disposed of by CTFE plus known-constant propagation. However, the GDC compiler also generates copies of these internal routines, even though they have been inlined where needed and they are not supposed to be externally callable. I'm compiling with -O3 -frelease. What should I be doing? Should I expect this even if I use static and/or private?
I have also taken a brief look at this thread concerning GCC hoping for insight.
As I mentioned earlier, I've tried both private and static on these internal functions, but I can't seem to suppress the code generation. I could understand this if a debugger needed copies of these routines to set breakpoints in. I need to stress that this could perhaps be sorted out somehow at link time, for all I know. I haven't tried linking the program; I'm just looking at the generated code in the Matt Godbolt D Compiler Explorer using GDC. Everything can be made into templates with a zero-length list of template parameters (e.g. auto my_fn()( in arg_t x )); I've tried that, and it doesn't help, but does no harm.
A couple of other things to try: I could try to make a static class with private parts, as a way of implementing a package, Ada-style. (It strictly needs to be single-instance.) I've never done any C++, only massive amounts of asm and C professionally, so that would be a learning curve.
The only other thing I can think of is to use nested function definitions, Pascal/Ada-style, and move the internal routines inside the bodies of their callers. But that has a whole lot of disadvantages.
Rough example
module junk;
auto my_public_fn() { return my_private_fn(); }
private
static // 'static' and/or 'private', tried both
auto my_private_fn() { xxx ; return whatever; }
I just had a short discussion with Iain about this and implementing this is not as simple as it seems.
First of all, static has many meanings in D, but the C meaning of a translation-unit-local function is not one of them ;-)
So marking these functions as private seems intuitive. After all, if you can't access a function from outside of the translation unit and you never leak an address to the function why not remove it? It could be either completely unused or inlined into all callers in this case.
Now here's the catch: We can't know for sure if a function is unused:
private void fooPrivate() {}

/*template*/ void fooPublic()()
{
    fooPrivate();
}
When compiling the file, GDC knows nothing about the fooPublic template (as templates can only be fully analyzed when instantiated), so fooPrivate appears to be unused. When fooPublic is later used in a different file, GDC will rely on fooPrivate having already been emitted in the original module; after all, it's not a template, so it's not emitted into the new module.
There might be workarounds, but this whole problem seems nontrivial. We could also introduce a custom gcc.attribute attribute for this. It would cause the same problems with templates, but as it's a specific annotation for one use case (unlike private), we could rely on the user to do the right thing.
I'm trying my best to adhere to some strict design patterns while developing my current code base, as I am hoping to work on it for quite a while to come, and I want it to be as flexible and clean as possible. However, in trying to combine all these design patterns to solve the problem I am currently facing, I am running into some issues I'm hoping someone can advise me on.
I'm working on some basic homebrew GUI widgets that are to provide some generic click/drag/duplication behavior. On the surface, the user clicks the widget, then drags it somewhere. Having dragged it far enough, the widget will 'appear' to clone itself and leave the user dragging this new copy.
The Prototype design pattern obviously enters the fray to make this duplication functionality generalizable for many types of widgets. For most objects, the story ends there. The Prototype object is virtually an identical copy of the duplicated version the user ends up dragging.
However, one of the objects I want to duplicate has some fairly big resources attached to it, so I don't want to load them until the user actually decides to click and drag, and subsequently duplicate, that particular object. Enter lazy initialization. But this presents me with a bit of a conundrum. I cannot just let the prototype object clone itself, as it would need to have the big resources loaded before the user duplicates the dummy prototype version. I'm also not keen on putting logic into the object which, upon being cloned/duplicated, checks what's going on and decides if it should load these resources. Instead, a helpful person suggested I create a kind of shell object which, when cloned, would return this more derived version containing the resources, allowing me to use both RAII and lazy initialization.
But I'm having some trouble implementing this, and I'm starting to wonder if I can even do it the way I'm thinking it should be done. Right now it looks like this:
class widgetSpawner : public widget {
public:
    widgetSpawner();
    ~widgetSpawner();

private:
    widget* mPrototypeWidget; // Blueprint with which to spawn new elements
};

class widgetAudioShell : public widget {
public:
    widgetAudioShell(std::string pathToAudioFile);
    widgetAudioShell( const widgetAudioShell& other );
    ~widgetAudioShell();

    virtual widgetAudio* clone() const { return new widgetAudio(*this); };

private:
    std::string mPathToAudioFile;
};

class widgetAudio : public widgetAudioShell {
public:
    widgetAudio(AudioEngineAudioTrack &aTrack);
    widgetAudio( const widgetAudio& other );
    widgetAudio( const widgetAudioShell& other );
    ~widgetAudio();

    virtual widgetAudio* clone() const { return new widgetAudio(*this); };

private:
    AudioEngineAudioTrack &mATrack;
};
Obviously, this is not workable, as the shell doesn't know about the object that uses it to derive a new class, so it cannot return it via the clone function. However, if I keep the two independent inheritance-wise (i.e. they both inherit from widget), then the compiler complains about lack of covariance, which I think makes sense? Or maybe it's because I am again having trouble properly defining one before the other.
Essentially, widgetAudioShell needs to know about widgetAudio so it can return a 'new' copy. widgetAudio needs to know about widgetAudioShell so it can read its member functions when being created/cloned.
If I am not mistaken, this circular dependency is there because of my preference for using references rather than pointers, and if I have to use pointers, then suddenly all my other widgets need to do the same, which I'd find quite hellish. I'm hoping someone who's had their fingers in something similar might provide some wisdom?
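One possible way out, sketched below under the assumption that giving up covariance is acceptable: let clone() return the common widget base everywhere, forward-declare widgetAudio, and define the shell's clone() out of line once widgetAudio is a complete type. The widget base and the resource-loading constructor here are simplified placeholders, not your actual classes.

#include <string>

class widget {
public:
    virtual ~widget() {}
    virtual widget* clone() const = 0; // base return type instead of covariance
};

class widgetAudio; // forward declaration breaks the circular dependency

class widgetAudioShell : public widget {
public:
    explicit widgetAudioShell(std::string pathToAudioFile)
        : mPathToAudioFile(pathToAudioFile) {}

    // Cloning the cheap shell loads the resources and returns the full
    // widget; defined out of line, once widgetAudio is a complete type.
    widget* clone() const override;

    const std::string& path() const { return mPathToAudioFile; }

private:
    std::string mPathToAudioFile;
};

class widgetAudio : public widget {
public:
    explicit widgetAudio(const widgetAudioShell& shell)
    {
        // load the big resources from shell.path() here
    }

    widget* clone() const override { return new widgetAudio(*this); }
};

// By now widgetAudio is a complete type, so the shell can construct one.
widget* widgetAudioShell::clone() const { return new widgetAudio(*this); }

The cost is that callers receive a widget* and must work through the base interface; in exchange, the header-level circular dependency disappears.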
I create class libraries, some of which are used by others around the world, and now that I'm starting to use Visual Studio 2010, I'm wondering how good an idea it is for me to switch to using code contracts instead of regular old-style if-statements.
i.e. instead of this:
if (fileName == null)
    throw new ArgumentNullException("fileName");
use this:
Contract.Requires(fileName != null);
The reason I'm asking is that I know that the static checker is not available to me, so I'm a bit nervous about some assumptions that I make that the compiler cannot verify. This might lead to the class library not compiling for someone who downloads it and has the static checker. This, coupled with the fact that I cannot even reproduce the problem, would make it tiresome to fix, and I would gather that it doesn't speak well of the quality of my class library if it seemingly doesn't even compile out of the box.
So I have a few questions:
Is the static checker on by default if you have access to it? Or is there a setting I need to switch on in the class library (which I won't, since I don't have the static checker)?
Are my fears unwarranted? Is the above scenario a real problem?
Any advice would be welcome.
Edit: Let me clarify what I mean.
Let's say I have the following method in a class:
public void LogToFile(string fileName, string message)
{
    Contract.Requires(fileName != null);

    // log to the file here
}
and then I have this code:
public void Log(string message)
{
    var targetProvider = IoC.Resolve<IFileLogTargetProvider>();
    var fileName = targetProvider.GetTargetFileName();
    LogToFile(fileName, message);
}
Now, here, IoC kicks in and resolves some "random" class that provides me with a filename. Let's say that for this library there is no possible way that I can get back a class that won't give me a non-null filename; however, due to the nature of the IoC call, the static analysis is unable to verify this, and thus might assume that a possible value could be null.
Hence, the static analysis might conclude that there is a risk of the LogToFile method being called with a null argument, and thus fail to build.
I understand that I can add assumptions to the code, saying that the compiler should take it as given that the fileName I get back from that method will never be null, but if I don't have the static analyzer (VS2010 Professional), the above code would compile for me, and thus I might leave this as a sleeping bug for someone with Ultimate to find. In other words, there would be no compile-time warning that there might be a problem here, so I might release the library as-is.
So is this a real scenario and problem?
When both your LogToFile and Log methods are part of your library, it is possible that your Log method will not compile once you turn on the static checker. This of course will also happen when you supply code to others who compile your code using the static checker. However, as far as I know, your clients' static checker will not validate the internals of the assembly you ship. It will statically check their own code against the public API of your assembly. So as long as you just ship the DLL, you'd be okay.
Of course there is a chance of shipping a library that has a very annoying API for users who actually have the static checker enabled, so I think it is advisable to only ship your library with the contract definitions if you have tested the usability of the API both with and without the static checker.
Please be warned about changing the existing if (cond) throw ex calls to Contract.Requires(cond) calls for public API calls that you have already shipped in a previous release. Note that the Requires method throws a different exception (a ContractException, if I recall correctly) than what you'd normally throw (an ArgumentException). In that situation, use the generic Contract.Requires&lt;TException&gt; overload. This way the exceptions your API throws stay unchanged.
First, the static checker is really (as I understand it) only available in the Ultimate/Academic editions, so unless everyone in your organization uses it, they may not be warned if they are potentially violating an invariant.
Second, while the static analysis is impressive, it cannot always find all paths that may lead to violation of the invariant. However, the good news here is that the Requires contract is retained at runtime (it is processed in an IL-transformation step), so the check exists at both compile time and runtime. In this way it is equivalent (but superior) to a regular if() check.
You can read more about the runtime rewriting that code contract compilation performs here; you can also read the detailed manual here.
EDIT: Based on what I can glean from the manual, I suspect the situation you describe is indeed possible. However, I thought that these would be warnings rather than compilation errors, and you can suppress them using System.Diagnostics.CodeAnalysis.SuppressMessage(). Consumers of your code who have the static verifier can also mark specific cases to be ignored, but that could certainly be inconvenient if there are a lot of them. I will try to find some time later today to put together a definitive test of your scenario (I don't have access to the static verifier at the moment).
There's an excellent blog here that is almost exclusively dedicated to code contracts which (if you haven't yet seen) may have some content that interests you.
No; the static analyzer will never prevent compilation from succeeding (unless it crashes!).
The static analyzer will warn you about unproven pre-/post-conditions, but doesn't stop compilation.
Is it possible at all to create event listeners (i.e. for when the value changes) for a variable of type string, int, bool, etc.?
I haven't seen this in any programming language so far, except for some Collections (like ArrayCollection in Flex), which use events to detect changes in the collection.
If not possible at all, why not? What's the reason for this? Are there any best practices to achieve the same sort of functionality? And what about extending functionality with databinding?
I don't think there is anything by default; however, you can create a custom event and raise it in the property's setter. Something like...
C# example
public delegate void MyValueChangedEventHandler(bool oldValue, bool newValue);
public event MyValueChangedEventHandler MyValueChanged;

private bool myValue;

...

public bool MyValue
{
    get { return myValue; }
    set
    {
        if (myValue != value)
        {
            var old = myValue;
            myValue = value;

            // Copy and null-check so raising the event is safe
            // even when there are no subscribers
            var handler = MyValueChanged;
            if (handler != null)
                handler(old, myValue);
        }
    }
}
I guess this sort of functionality is not added to any framework/runtime since it would create a big overhead (think of how many times you modify a variable holding a primitive type within the average application) while not being used under normal circumstances.
Anyway, in .NET at least (and I guess in other OO environments as well), you can define properties, which are accessed like normal variables but can have associated code that reacts when their value is read or modified.
It is possible if you wrap your variables in getters and setters and fire the event when the setter is called.
How about using setter methods and having them register events when changing the value of the variable?
In general, no. The reason is that primitive types are simply bits and bytes stored in some memory location: changing the data in that memory location does just that, and nothing else. Firing events would require calling some methods/functions. So the functionality can be achieved by wrapping the primitive types in some kind of wrapper objects - but of course, they're not 100 % interchangeable: for instance Java's primitive wrapper types (Integer etc.) are marked final, so it's not possible to extend them with event-firing versions to take advantage of auto(un)boxing.
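To illustrate the wrapper idea concretely, here is a minimal C++ sketch; the Observable name and its interface are invented for this example, and a real version would have to consider thread safety, operator coverage, and the like.

#include <functional>
#include <iostream>
#include <vector>

// Hypothetical observable wrapper around a primitive value: setting the
// value notifies every registered listener with the old and new values.
template <typename T>
class Observable {
public:
    using Listener = std::function<void(const T& oldValue, const T& newValue)>;

    explicit Observable(T initial = T()) : value_(initial) {}

    void addListener(Listener l) { listeners_.push_back(l); }

    T get() const { return value_; }

    void set(const T& newValue) {
        if (newValue == value_) return; // only fire on an actual change
        T old = value_;
        value_ = newValue;
        for (const Listener& l : listeners_) l(old, value_);
    }

private:
    T value_;
    std::vector<Listener> listeners_;
};

int main() {
    Observable<int> counter(0);
    counter.addListener([](const int& oldV, const int& newV) {
        std::cout << "changed from " << oldV << " to " << newV << "\n";
    });
    counter.set(42); // prints "changed from 0 to 42"
}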
Another approach is to poll the variable frequently and fire appropriate events if it has changed. This is a "dirty" approach with obvious disadvantages (performance overhead, delayed reaction), but it could nevertheless be useful in some situations. If you do this from another thread in Java, be sure to mark the variable volatile.
It is possible to create listeners, as some of the others have mentioned, by making a class that fires an event whenever a property changes. This is obviously a lot less efficient than just assigning a value, but there are cases where it could be useful.
Some languages (VB6 and some others) have the ability, in debug mode, to stop execution when the value of a variable changes. I haven't seen this in .NET, but it's liable to be in there somewhere. :-)
It seems to me that using an event to signal a simple variable change could be accomplished with if statements at each assignment, unless the variable's value is being changed externally, in which case you could use a class to handle it.
Every so often, I'm done modifying a piece of code and I would like to "lock" or make a region of code "read only". That way, the only way I can modify the code is if I unlock it first. Is there any way to do this?
The easiest way which would work in many cases is to make it a partial type - i.e. a single class whose source code is spread across multiple files. You could then make one file read-only and the other writable.
To declare a partial type, you just use the partial contextual keyword in the declaration:
// MyClass.First.cs:
public partial class MyClass
{
    void Foo()
    {
        Bar();
    }

    void Baz()
    {
    }
}

// MyClass.Second.cs:
public partial class MyClass
{
    void Bar()
    {
        Baz();
    }
}
As you can see, it ends up as if the source was all in the same file - you can call methods declared in one file from the other with no problems.
Compile it into a library DLL and make it available for reference in other projects.
Split up the code into separate files and then check into a source control system?
Given your rebuttal to partial classes... there is no way that I know of in a single file, short of documentation. Other options?
inheritance: put the protected code in the base class (in an assembly you control); inheritors can only call the public/protected members
PostSharp: stick the protected logic in attributes declared externally
However, both of these still require multiple files (and probably multiple assemblies).
I thought about this, but I would prefer to keep the class in one file. – danmine
Sorry, mac. A bit of voodoo in an SVN pre-commit hook might catch it, but otherwise there's no solution other than // if you change this code you are fired
This is totally unnecessary if you're using a version control system. Because once you've checked it in, it doesn't matter what part of the code you edit, you can always diff or roll back. Heck, you could accidentally wipe out all the source code and still get it back.
I'm getting a really bad "code smell" from the fact that you want to lock certain parts of the code. I'm guessing that maybe you're doing too much in one class, in which case you should refactor it into a proper set of classes. The fact that, after the 10+ years Visual Studio has existed, this feature isn't available should suggest that perhaps your desire to do this is a result of poor design.