I just read that this construct:
const bg::AppSettings& bg::AppSettings::GetInstance()
{
    static AppSettings instance;
    return instance;
}
is a thread-safe and working way to create a singleton?! Am I correct that the static AppSettings variable will be the same every time I call this method?! I get a little bit confused about the scoping on this one ...
My normal approach was to use a unique_ptr as a static member of my class ... but this seems to work ... can someone explain to me what's going on here?!
And btw: does the const make sense here?!
In C++11 and later, the construction of the function-local static AppSettings is guaranteed to be thread-safe. Note: Visual Studio did not implement this aspect of C++11 until VS 2015.
The compiler will lay down a hidden flag alongside AppSettings that indicates whether it is:
Not constructed.
Being constructed.
Is constructed.
The first thread through will find the flag set to "not constructed" and attempt to construct the object. Upon successful construction the flag will be set to "is constructed". If another thread comes along and finds the flag set to "being constructed", it will wait until the flag is set to "is constructed".
If the construction fails with an exception, the flag will be set to "not constructed", and construction will be retried on the next pass through (either on the same thread or a different thread).
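To illustrate that retry behavior, here is a small self-contained sketch (the Flaky type is invented purely for the demonstration): the first call throws, the guard is reset, and the second call succeeds.
#include <iostream>
#include <stdexcept>

struct Flaky {
    Flaky() {
        static int attempts = 0;                    // zero-initialized, persists across attempts
        if (++attempts == 1)
            throw std::runtime_error("first attempt fails");
        std::cout << "constructed on attempt " << attempts << "\n";
    }
};

Flaky& Get() {
    static Flaky f;    // if the constructor throws, the flag stays "not constructed"
    return f;
}

int main() {
    try { Get(); } catch (const std::exception& e) { std::cout << e.what() << "\n"; }
    Get();             // construction is retried and succeeds this time
}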
The object instance will remain constructed for the remainder of your program, until main() returns, at which time instance will be destructed.
Every time any thread of execution passes through AppSettings::GetInstance(), it will reference the exact same object.
In C++98/03, the construction was not guaranteed to be thread safe.
If the constructor of AppSettings recursively enters AppSettings::GetInstance(), the behavior is undefined.
If the compiler can see how to construct instance "at compile time", it is allowed to.
If AppSettings has a constexpr constructor (the one used to construct instance), and the instance is qualified with constexpr, the compiler is required to construct instance at compile time. If instance is constructed at compile time, the "not-constructed/constructed" flag will be optimized away.
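A rough sketch of that last point, using a made-up Settings type in place of the real AppSettings:
struct Settings {
    int verbosity;
    bool logging;
    constexpr Settings() : verbosity(0), logging(false) {}   // usable at compile time
};

const Settings& GetInstance() {
    static constexpr Settings instance;   // constant-initialized: no guard flag emitted
    return instance;
}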
The behavior of your code is similar to this:
namespace {
    std::atomic_flag initialized = ATOMIC_FLAG_INIT;
    std::experimental::optional<bg::AppSettings> optional_instance;
}

const bg::AppSettings& bg::AppSettings::GetInstance()
{
    // Simplified illustration: the first caller wins the flag and constructs
    // the object. A real implementation also makes later callers wait until
    // construction has finished.
    if (!initialized.test_and_set()) {
        optional_instance.emplace();
    }
    return *optional_instance;
}
By having a thread-safe flag that lives for the entire duration of the program, the compiler can check this flag each time the function is called and only initialize your variable once. A real implementation can use other mechanisms to get this same effect though.
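As a quick sanity check of the claims above, here is a hedged sketch with a stand-in AppSettings (the real class isn't shown in the question): every thread observes the address of the same object.
#include <cassert>
#include <thread>
#include <vector>

namespace bg {
    class AppSettings {
    public:
        static const AppSettings& GetInstance() {
            static AppSettings instance;    // constructed exactly once, thread-safely
            return instance;
        }
    private:
        AppSettings() = default;            // stand-in default constructor
    };
}

int main() {
    const bg::AppSettings* first = &bg::AppSettings::GetInstance();
    std::vector<std::thread> threads;
    for (int i = 0; i != 8; ++i)
        threads.emplace_back([first] {
            assert(&bg::AppSettings::GetInstance() == first);   // same object everywhere
        });
    for (auto& t : threads) t.join();
}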
Related
This is a simplification of an issue I encountered in another project.
Say I have the following class:
#include <iostream>
#include <unordered_set>

class MyClass {
public:
    MyClass() {
        std::cout << "MyClass constructed\n";
        Instances().insert(this);
    }
    ~MyClass() {
        std::cout << "MyClass destructed\n";
        Instances().erase(this);
    }
    // Function-local static keeps one set shared by all instances.
    static std::unordered_set<MyClass*>& Instances() {
        static std::unordered_set<MyClass*> _instances;
        return _instances;
    }
};
It has a static unordered_set that it uses for keeping track of existing instances of the class. When an instance is constructed, its address is added to the set; when an instance is destroyed, its address is removed from the set.
Now, I have another class that has a vector of shared_ptrs containing instances of MyClass:
struct InstanceContainer {
    std::vector<std::shared_ptr<MyClass>> instances;
};
A key point here is that there is a global instance of this class above main. This seems to be part of the problem, because declaring the container object inside of main does not produce the issue.
Inside of main, I do the following (say the global instance of InstanceContainer is called container):
container.instances.emplace_back(std::shared_ptr<MyClass>(new MyClass));
Everything is fine until the program terminates, when I get a read access violation ("vector subscript out of range") when Instances().erase(this) is executed in MyClass's destructor.
I thought that maybe I was attempting to erase the instance from _instances multiple times (hence the couts). However, the constructor is only called once, and the destructor is only called once, as you'd expect. I found that when this happens, _instances.size() is equal to 0. The weird thing is, it's equal to 0 before any calls to erase; prior to anything being erased from the set, it's already empty?!
My theory at this point is that this has to do with the order in which the objects are destructed as the program terminates. Perhaps the static _instances is being freed before the destructor for MyClass is called.
I was hoping someone would be able to shed some light on this, and confirm whether or not that's what's happening.
My workaround now is to check to see if _instances.size() is 0 before attempting to erase. Is this safe? If not, what else can I do?
If it matters, I'm using MSVC. Here's an executable example.
Here's what happens. That global variable of type InstanceContainer is constructed first, before main is entered. The function-static variable _instances is created later, when Instances() is called for the first time.
At program shutdown, destructors for these objects are called in the reverse order of construction. Therefore, _instances is destroyed first, and then the InstanceContainer global, whose destructor destroys its vector of shared pointers, which in turn runs ~MyClass on every object still owned by the vector, and each of those destructors calls _instances.erase() on the already-destroyed _instances. Whereupon your program exhibits undefined behavior by way of accessing an object whose lifetime has ended.
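A tiny stand-alone illustration of that ordering, with a Tracer type invented just for the demonstration:
#include <iostream>

struct Tracer {
    const char* name;
    Tracer(const char* n) : name(n) { std::cout << "construct " << name << "\n"; }
    ~Tracer() { std::cout << "destroy " << name << "\n"; }
};

Tracer global{"global"};                              // constructed before main

Tracer& FunctionLocal() {
    static Tracer local{"function-local static"};     // constructed on first call
    return local;
}

int main() {
    FunctionLocal();
}
// Prints: construct global, construct function-local static,
//         destroy function-local static, destroy global   (reverse order)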
There are several ways you could work around this. One, you could ensure that InstanceContainer::instances is empty before main returns. No idea how feasible this is, as you've never explained what role InstanceContainer plays in your design.
Two, you could allocate _instances on the heap, and just leak it:
static std::unordered_set<MyClass*>& Instances() {
    // Deliberately leaked: never destroyed, so it outlives every global object.
    static auto* _instances = new std::unordered_set<MyClass*>;
    return *_instances;
}
This will keep it alive through the destruction of global objects.
Three, you could put something like this before the definition of the InstanceContainer global variable:
static int dummy = (MyClass::Instances(), 0);
This will ensure that _instances is created earlier, and therefore destroyed later.
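In context, the trick sits immediately above the global container from the question:
// Touching Instances() here forces _instances to be constructed first...
static int dummy = (MyClass::Instances(), 0);

// ...so it is destroyed after container (reverse order of construction).
InstanceContainer container;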
I have two overloads of a subroutine that take an argument of a type that occupies several megabytes of dynamic memory and has a move constructor and move assignment operator:
// Version intended for use when the caller has
// deliberately passed an rvalue reference using std::move
void MyClass::setParameter(MyMoveableType &&newParameter)
{
    m_theLocalParameter = std::move(newParameter);
}

// Version intended for use when the caller has passed
// some other type of value which shouldn't be moved
void MyClass::setParameter(MyMoveableType newParameter)
{
    m_theLocalParameter = std::move(newParameter);
}
The intention is clearly that the first overload moves the contents of newParameter from wherever up the chain of subroutine-calls the newParameter object originated, whilst the second overload creates a brand new copy of newParameter (or invokes copy elision to avoid doing so where appropriate, such as where the argument is actually the return value from a function) and then moves the copy into the local data member, thus avoiding a further copy.
However, if I try actually to move an object into my class using the first overload:
{
    MyClass theDestination;
    MyMoveableType theObject;
    ...
    // ...Various actions which populate theObject...
    ...
    theDestination.setParameter(std::move(theObject));
    ...
}
...then on every compiler I've tried I get an error along the lines of:
call to member function 'setParameter' is ambiguous
Now I can see that passing an rvalue reference to the second overload would in fact be perfectly legal, and is what I'd expect the compiler to do, without giving a warning, if I hadn't provided the first overload. Even so, I'd expect it to be perfectly clear to the compiler what the intent of this code is, and therefore I'd expect it to select the first (rvalue-reference) overload as the best match.
I can eliminate the error by redefining the second overload to take a const reference and do away with the std::move (though it wouldn't be an error to leave it in; the compiler would just ignore it). This would work all right, but I'd lose the opportunity to take advantage of copy elision. That could be significant in performance terms for this particular application; the objects under discussion are high-resolution video frames streaming through at 30 frames per second.
Is there anything I can do under this circumstance to disambiguate the overloads and so have both a pass-by-value and pass-by-rvalue-reference version of my routine?
The intention is clearly that the first overload moves the contents of newParameter from wherever up the chain of subroutine-calls the newParameter object originated, whilst the second overload creates a brand new copy
Which is not really how you do it. You have two sane options:
Approach A
You write just the value overload and then move from it anyway - that means you'll always pay a constructor price, either move or copy.
Approach B
You write overloads for (const T&) and (T&&). That way you copy in the const-reference overload and, in the rvalue-reference one, skip the extra move-construction of the parameter.
I recommend approach A as a default, and B only when the extra constructor call actually matters that much; a sketch of both approaches follows.
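A minimal sketch of both approaches, using made-up Frame and Consumer types for illustration:
#include <utility>
#include <vector>

struct Frame { std::vector<unsigned char> pixels; };   // hypothetical large payload

class Consumer {
    Frame m_frame;
public:
    // Approach A: one by-value overload; one copy or move at the call site,
    // then one move into the member.
    void setFrameA(Frame newFrame) { m_frame = std::move(newFrame); }

    // Approach B: two overloads; lvalues are copied, rvalues are moved straight in.
    void setFrameB(const Frame& newFrame) { m_frame = newFrame; }
    void setFrameB(Frame&& newFrame) { m_frame = std::move(newFrame); }
};

int main() {
    Consumer c;
    Frame f;
    c.setFrameA(f);             // copy, then move
    c.setFrameA(std::move(f));  // move, then move
    Frame g;
    c.setFrameB(g);             // one copy
    c.setFrameB(std::move(g));  // one move
}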
Hi, this is in regard to some code given in C++/CLI in Action which I have trouble understanding. The code is given below:
delegate bool EnumWindowsDelegateProc(
    IntPtr hwnd, IntPtr lParam);

ref class WindowEnumerator
{
private:
    EnumWindowsDelegateProc^ _WindowFound;
public:
    WindowEnumerator(EnumWindowsDelegateProc^ handler)
    {
        _WindowFound = handler;
    }

    void Init()
    {
        pin_ptr<EnumWindowsDelegateProc^> tmp = &_WindowFound;
        EnumWindows((WNDENUMPROC)
            Marshal::GetFunctionPointerForDelegate(
                _WindowFound).ToPointer(), 0);
    }
};
In the above code _WindowFound has been pinned so the GC won't move it. The questions are:
1. Isn't tmp only valid inside Init(), and thus _WindowFound pinned only during the call to Init()?
2. If that's the case, isn't there a chance the delegate's location in memory might change at the time EnumWindows calls it as a function pointer?
A pin_ptr<> automatically unpins, RAII-style, when code execution leaves the block it is declared in. So it will be pinned for the entire body of the Init() method in your code, and your second question does not apply.
It is notable that the code is in fact not correct. It works, but by accident. Marshal.GetFunctionPointerForDelegate() invokes the stub compiler to auto-generate the native code that's needed to allow native code to invoke the delegate target. The lifetime of that stub is controlled by the lifetime of the delegate object. In other words, as soon as the delegate object gets garbage collected, the stub will be destroyed as well.
Pinning the delegate object does not in any way affect the stub. It is already unmovable; the GC never moves code. It works by accident because pinning an object requires creating an extra GC handle for the object (GCHandle::Alloc), which is enough to prevent premature collection.
It doesn't make an enormous difference in this kind of code; EnumWindows() is slow anyway. That's not necessarily the case when you call other native code that requires a callback, and avoiding pinning should always be a goal in general. All you have to do is let the jitter see a reference to the delegate object past the point where it might still be used, like this:
void Init() {
    EnumWindows((WNDENUMPROC)
        Marshal::GetFunctionPointerForDelegate(
            _WindowFound).ToPointer(), 0);
    GC::KeepAlive(_WindowFound);
}
Very efficient: GC::KeepAlive() doesn't generate any code; it just tells the jitter to extend the lifetime of the _WindowFound reference so it can't be collected while EnumWindows() is executing. Even that is overkill in this specific case, since somebody is going to have a reference to the WindowEnumerator object in order to retrieve _WindowFound, but better safe than sorry.
Is it legal/proper C++0x to leave an object that has been moved from (for the purpose of move construction) in a state that can only be destroyed? For instance:
class move_constructible {...};

int main()
{
    move_constructible x;
    move_constructible y(std::move(x));
    // From now on, x can only be destroyed. Any other method will result
    // in a fatal error.
}
For the record, I'm trying to wrap, in a C++ class, a C struct with a pointer member that is always supposed to point to some allocated memory area. All of the C library's API relies on this assumption. But this requirement prevents writing a truly cheap move constructor, since in order for x to remain a valid object after the move it would need its own allocated memory area. I've written the destructor in such a way that it first checks for a NULL pointer before calling the corresponding cleanup function from the C API, so that at least the struct can be safely destroyed after the move.
Yes, the language allows this. In fact it was one of the purposes of move semantics. It is however your responsibility to ensure that no other methods get called and/or provide proper diagnostics. Note, usually you can also use at least the assignment operator to "revive" your variable, such as in the classical example of swapping two values.
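For the wrapping scenario described in the question, a rough sketch under an assumed (made-up) C API (c_thing, c_thing_cleanup are invented names) might look like this:
#include <cstdlib>
#include <utility>

// Hypothetical C API: the struct's buffer is normally expected to be non-null.
struct c_thing { void* buffer; };
inline void c_thing_cleanup(c_thing* t) { std::free(t->buffer); }

class Wrapper {
    c_thing impl_{};
public:
    Wrapper() { impl_.buffer = std::malloc(1024); }

    // Cheap move: steal the buffer and leave the source in a destroy-only
    // (and assignable) state with a null pointer.
    Wrapper(Wrapper&& other) noexcept : impl_(other.impl_) {
        other.impl_.buffer = nullptr;
    }
    Wrapper& operator=(Wrapper&& other) noexcept {
        if (this != &other) {
            if (impl_.buffer) c_thing_cleanup(&impl_);
            impl_ = other.impl_;
            other.impl_.buffer = nullptr;
        }
        return *this;
    }

    ~Wrapper() {
        // Safe to destroy even after being moved from.
        if (impl_.buffer) c_thing_cleanup(&impl_);
    }
};

int main() {
    Wrapper a;
    Wrapper b(std::move(a));   // a is now destroy-only (null buffer)
}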
See also this question
Let's say my class has a private integer variable called count.
I've already hit a breakpoint in my code. Now before I press continue, I want to make it so the debugger will stop anytime count gets a new value assigned to it.
Besides promoting count to a property and setting a breakpoint on the set accessor, is there any other way to do this?
What you're looking for is not possible in managed code. In C++ this is known as a data breakpoint. It allows you to break whenever a block of memory is altered by the running program. But this is only available in pure native C++ code.
A short version of why this is not implemented is that it's much harder in managed code. Native code is nice and predictable: you allocate memory and it doesn't move around unless you create a new object (or explicitly copy memory).
Managed code is much more complex because it's a garbage-collected language. The CLR commonly moves objects around in memory, so simply watching a bit of memory is not good enough; it requires GC interaction.
This is just one of the issues with implementing data breakpoints for managed code.
I assume you're trying to do this because you want to see where the change in value came from. You already stated the way I've always done it: create a property, and break on the set accessor (except that you must then always use that set accessor for this to work).
Basically, I'd say that since a private field is only storage, you can't break on it: the field itself isn't a breakable instruction.
The only way I can think to do this is to right-click on the variable and select "Find all references". Once it finds all the references, you can create a new breakpoint at each point in the code where the variable is assigned a value. This would probably work pretty well, unless you were passing the variable by reference to another function and changing the value in there. In that case, you'd need some way of watching a specific point in memory to see when it changed. I'm not sure if such a tool exists in VS.
As ChrisW commented, you can set a 'Data Breakpoint', but only for native (non-managed) code. The garbage collector will move allocated memory blocks around when it runs, so data breakpoints are not possible for managed code.
Otherwise, no. You must encapsulate access to the item for which you want to 'break on modify'. Since it's a private member already, I suggest following Kibbee's suggestion of setting breakpoints wherever it's used.
Besides promoting count to a property and setting a breakpoint on the set accessor, is there any other way to do this?
Make it a property of a different class, create an instance of the class, and set a breakpoint on the property.
Instead of ...
void test()
{
    int i = 3;
    ...etc...
    i = 4;
}
... have ...
class Int
{
    int m;
    internal Int(int i) { m = i; }
    internal int val { set { m = value; } get { return m; } }
}

void test()
{
    Int i = new Int(3);
    ...etc...
    i.val = 4;
}
The thing is that, in C#, the actual memory location of everything moves around continually, so the debugger can't easily use the CPU's 'break on memory access' debug registers; it's easier for the debugger to implement a code-location breakpoint instead.