C++ - Clear array of pointers stored in two places - c++11

In the constructor for a class Block I push the created instance into a static vector, static std::vector<Block*> allBlocks, and remove it in the destructor.
This works if I just create blocks without pointers, i.e. Block b;.
However, I want another class to return a std::vector<Block*> grid for use, which creates the Blocks and also adds them to the allBlocks vector. Destroying the grid vector doesn't seem to run their destructors.
I've tried:
grid.clear(), erase/remove, and plain pop_back - none of them delete the pointed-to Blocks.
What would be a better way to store/return them so that when the grid is destroyed, the contained Blocks are destroyed too?

Okay, so if you want a really better way, a couple of changes:
No statics! They aren't necessary at all here.
If, as you've stated, you want two containers holding those objects in such a way that removing an object from one removes it from every other container, things get more problematic.
First, it's impossible for one container to remove elements from another unless it holds a reference to that container somehow. You could create a variable holding all of the containers of your blocks and use it to remove a block from every container, but... yeah.
In this case, a weak reference solution is acceptable, as long as you remember about the implications.
std::shared_ptr ownership and std::weak_ptr reference
Create a std::set<std::shared_ptr<Block>> blocks;, then two containers with weak references, called allBlocks or grid or whatever. Those weak-reference collections could be, for example, std::set<std::weak_ptr<Block>, std::owner_less<std::weak_ptr<Block>>> (weak_ptr doesn't define operator<, so the set needs owner_less as its comparator).
Now, when removing an element from grid or allBlocks, you also need to remove it from blocks. To do the lookup there, you'd need something like this:
struct null_deleter {
void operator()(void const *) const { }
};
This lets you properly create a value for the set lookup without taking ownership. Then, when iterating over any other container, you'd call ptr.expired() on each weak_ptr reference to see whether the Block was deleted previously.
The caveat of that idea is that the original shared_ptr isn't shared; the class is used just for convenience of weak_ptr and automatic destruction.
std::unique_ptr ownership and int reference
Another, perhaps simpler way is to use std::unordered_map and create a "unique ID" key for each block.
std::unordered_map<unsigned, std::unique_ptr<Block>> blocks;
Now your containers need to be std::set<unsigned>, and the lookup would look like:
for (auto b : grid) {
    auto bIt = blocks.find(b);
    if (bIt != blocks.end()) {
        // do things with bIt->second
    } else {
        // e.g. add b to the "delete list"
    }
}
Now you could process your "delete list" and remove the dead ids from the container.
Wrapping up
Since this might get tedious to use, a nice idea could be to wrap the set into a container that would do the cleanup before returning begin() or end() for custom iteration across Block values.
Similarly, the destructor of such wrapped structure could remove the values from the map, thus effectively making all ids in all other containers dangling.
This of course raises an issue of thread safety, because the original blocks map would need to be locked for the whole iteration; at least for modification. Having cbegin()/cend() could allow two threads to read from the same shared map, but ... The kind of problems that arise when sharing data across threads are out of scope for this post, though.
Poisoning
Another idea that came to my mind is sometimes referred to as "poisoning". In this case, you don't need a master container; both of your regular containers would hold shared_ptrs to the Blocks... with a twist.
When a Block is chosen for deletion, a special flag is set on it. It becomes "poisoned", and every container should sweep such blocks before doing iteration.
If every container indeed does so, all references to the Block die and its destructor will fire properly. You're essentially communicating the delete command through a special state of the object itself.
If you don't want to modify the Block class, holding std::shared_ptr<std::optional<Block>> and nullifying the optional works just the same (note that std::optional is C++17; in C++11 boost::optional could serve), except that the Block's destructor will run immediately, and not when the last structure does its sweep. This might be better or worse depending on your goals and needs.

Related

interface function getting rvalue pointers to initialize shared_ptr

I have a class exposing an add function through its interface:
void AddObject(Object *o);
Inside the class I maintain the objects in a set<shared_ptr<Object>>.
Since I will create a shared_ptr from the received pointer, I thought to limit the function argument to rvalue pointers only, to make sure the user will not delete the pointer I use. So I'll change the function declaration to:
void AddObject(Object* &&o);
so a typical use will be:
AddObject(new Object())
preventing the user from accidentally deleting a pointer I hold.
I don't want to use shared_ptr in the interface because the user is not familiar with shared_ptr.
Do you think my suggestion is a good idea?
I think this is a bad idea. I'm sure there is a reason why the shared_ptr constructor that takes a raw pointer is marked explicit instead of taking an rvalue. In my eyes, it's better to teach the users once about smart pointers, or at least teach them to use make_shared/make_unique (which are safer and, in the case of make_shared, more efficient, BTW).
BTW, why shared_ptr and not unique_ptr?
Also, why set? Even if you want to make sure you hold each pointer only once and searching a vector each time doesn't look natural enough in your code, I don't see a reason to hold the pointers sorted instead of using unordered_set.
First of all, this approach will not prevent the user from deleting the pointer. Consider this example
auto obj = new Object();
AddObject(std::move(obj));
delete obj;
Secondly, the number of steps between calling new and the creation of the shared_ptr should be as small as possible. If anything happens inside AddObject before it can create the shared_ptr, the object will never get deleted.
The same applies if there are more arguments to AddObject(). If constructing those fails, you will leak memory.
void AddObject(Object* &&o, SomeOtherObject* x);
AddObject(new Object(), xx()); // if xx() throws, memory leak will occur
Ideally you would "wrap" object creating into shared_ptr construction:
void AddObject(std::shared_ptr<Object> o);
AddObject(std::make_shared<Object>());
Either of the following methods may solve your problem.
You may add a comment on AddObject telling users that deleting the pointer they added is not allowed. This is almost enough.
Or, you could make Object inherit from a base class which has a private destructor and a method named destroyByOwner.

Overloading for pass-by-value and pass-by-rvalue-reference

I have two overloads of a subroutine that takes an argument of a type that occupies several Megabytes of dynamic memory and has a move constructor and assignment operator:
// Version intended for use when the caller has
// deliberately passed an rvalue reference using std::move
void MyClass::setParameter(MyMoveableType &&newParameter)
{
m_theLocalParameter = std::move(newParameter);
}
// Version intended for use when the caller has passed
// some other type of value which shouldn't be moved
void MyClass::setParameter(MyMoveableType newParameter)
{
m_theLocalParameter = std::move(newParameter);
}
The intention is clearly that the first overload moves the contents of newParameter from wherever up the chain of subroutine-calls the newParameter object originated, whilst the second overload creates a brand new copy of newParameter (or invokes copy elision to avoid doing so where appropriate, such as where the argument is actually the return value from a function) and then moves the copy into the local data member, thus avoiding a further copy.
However, if I try actually to move an object into my class using the first overload:
{
MyClass theDestination;
MyMoveableType theObject;
...
// ...Various actions which populate theObject...
...
theDestination.setParameter(std::move(theObject));
...
}
...then on every compiler I've tried I get an error along the lines of:
call to member function 'setParameter' is ambiguous
Now I can see that passing an rvalue reference to the second overload would in fact be perfectly legal, and is what I'd expect the compiler to do, without giving a warning, if I hadn't provided the first overload. Even so, I'd expect it to be perfectly clear to the compiler what the intent of this code is, and therefore I'd expect that it would select the second overload as being the best match.
I can eliminate the error by redefining the second overload to take a const reference and doing away with the std::move (though it wouldn't be an error to leave it in; the compiler would just ignore it). This would work all right, but I'd lose the opportunity to take advantage of copy elision. This could be significant in performance terms for this particular application; the objects under discussion are high-resolution video frames streaming through at 30 frames per second.
Is there anything I can do under this circumstance to disambiguate the overloads and so have both a pass-by-value and pass-by-rvalue-reference version of my routine?
The intention is clearly that the first overload moves the contents of newParameter from wherever up the chain of subroutine-calls the newParameter object originated, whilst the second overload creates a brand new copy
Which is not really how you do it. You have two sane options:
Approach A
You write just the value overload and then move from it anyway - that means you'll always pay a constructor price, either move or copy.
Approach B
You write overloads for (const T&) and (T&&). That way you copy in the first one and move in the second one, paying no extra move at the call boundary.
I recommend approach A as a default, and B only when the c-tor call actually matters that much.

XNA phase management

I am making a tactical RPG game in XNA 4.0 and was wondering what the best way to go about "phases" is? What I mean by phases is creating a phase letting the player place his soldiers on the map, creating a phase for the player's turn, and another phase for the enemy's turn.
I was thinking I could create some sort of enum and have the code in the update/draw methods run accordingly, but I want to make sure this is the best way to go about it first.
Thanks!
Edit: To anaximander below:
I should have mentioned this before, but I already have something implemented in my application that is similar to what you mentioned. Mine is called ScreenManager and Screen but it works exactly in the same way. I think the problem is that I am treating screen, phase, state, etc, to be different things but in reality they are the same thing.
Basically what I really want is a way to manage different "phases" in a single screen. One of my screens called map will basically represent all of the possible maps in the game. This is where the fighting takes place etc. I want to know what is the best way to go about this:
Either creating an enum called FightStage that holds values like
PlacementPhase,PlayerPhase, etc, and then split the Draw and
Update method according to the enum
Or create an external class to manage this.
Sorry for the confusion!
An approach I often take with states or phases is to have a manager class. Essentially, you need a GamePhase object which has Initialise(), Update(), Draw() and Dispose() methods, and possibly Pause() and Resume() as well. Also often worth having is some sort of method to handle the handover. More on that later. Once you have this class, inherit from it to create a class for each phase; SoldierPlacementPhase, MovementPhase, AttackPhase, etc.
Then you have a GamePhaseManager class, which has Initialise(), Update(), Draw() and Dispose() methods, and probably a SetCurrentPhase() method of some kind. You'll also need an Add() method to add states to the manager - it'll need a way to store them. I recommend a Dictionary<> using either an int/enum or string as the key. Your SetCurrentPhase() method will take that key as a parameter.
Basically, what you do is to set up an instance of the GamePhaseManager in your game, and then create and initialise each phase object and add it to the manager. Then your game's update loop will call GamePhaseManager.Update(), which simply calls through to the current state's Update method, passing the parameters along.
Your phases will need some way of telling when it's time for them to end, and some way of handling that. I find that the easiest way is to set up your GamePhase objects, and then have a method like GamePhase.SetNextPhase(GamePhase next) which gives each phase a reference to the one that comes next. Then all they need is a boolean Exiting with a protected setter and a public getter, so that they can set Exiting = true in their Update() when their internal logic decides that phase is over, and then in the GamePhaseManager.Update() you can do this:
public void Update(TimeSpan elapsed)
{
if (CurrentPhase.Exiting)
{
CurrentPhase.HandOver();
CurrentPhase = CurrentPhase.NextPhase;
}
CurrentPhase.Update(elapsed);
}
You'll notice I change phase before the update. That's so that the exiting phase can finish its cycle; you get odd behaviour otherwise. The CurrentPhase.HandOver() method basically gets the current phase to pass on anything the next phase needs to know to carry on from the right point. This is probably done by having it call NextPhase.Resume() internally, passing it any info it needs as parameters. Remember to also set Exiting = false in here, or else it'll keep handing over after only one update loop.
The Draw() methods are handled in the same way - your game loop calls GamePhaseManager.Draw(), which just calls CurrentPhase.Draw(), passing the parameters through.
If you have anything that isn't dependent on phase - the map, for example - you can either store it in the GamePhaseManager and call its methods inside GamePhaseManager's methods, have the phases pass it around and call its methods, or keep it up at the top level and call its methods alongside GamePhaseManager's. It depends how much access the phases need to it.
EDIT
Your edit shows that a fair portion of what's above is known to you, but I'm leaving it there to help anyone who comes across this question in future.
If you already have a manager to handle stages of the game, my immediate instinct would be to nest it. You saw that your game has stages, and built a class to handle them. You have a stage that has its own stages, so why not reuse the stage-handling code you already wrote? Inherit from your Screen object to make a SubdividedScreen class, or whatever you feel like calling it. This new class is mostly the same as the parent, but it also contains its own instance of the ScreenManager class. Replace the Screen object you're calling map with one of these SubdividedScreen objects, and fill its ScreenManager with Screen instances to represent the various stages (PlacementPhase, PlayerPhase, etc). You might need a few tweaks to the ScreenManager code to make sure the right info can get to the methods that need it, but it's much neater than having a messy Update() method subdivided by switch cases.

State of object after std::move construction

Is it legal/proper C++0x to leave an object moved from for the purpose of move-construction in a state that can only be destroyed? For instance:
class move_constructible {...};
int main()
{
move_constructible x;
move_constructible y(std::move(x));
// From now on, x can only be destroyed. Any other method will result
// in a fatal error.
}
For the record, I'm trying to wrap in a C++ class a C struct with a pointer member which is always supposed to point to some allocated memory area. The whole C library API relies on this assumption. But this requirement prevents writing a truly cheap move constructor, since for x to remain a valid object after the move it would need its own allocated memory area. I've written the destructor so that it first checks for a NULL pointer before calling the corresponding cleanup function from the C API, so that at least the struct can be safely destroyed after the move.
Yes, the language allows this. In fact it was one of the purposes of move semantics. It is however your responsibility to ensure that no other methods get called and/or provide proper diagnostics. Note, usually you can also use at least the assignment operator to "revive" your variable, such as in the classical example of swapping two values.
See also this question

How can I stop execution in the Visual Studio Debugger when a private member variable changes value?

Let's say my class has a private integer variable called count.
I've already hit a breakpoint in my code. Now before I press continue, I want to make it so the debugger will stop anytime count gets a new value assigned to it.
Besides promoting count to a field and setting a breakpoint on the set method of the field, is there any other way to do this?
What you're looking for is not possible in managed code. In C++ this is known as data break point. It allows you to break whenever a block of memory is altered by the running program. But this is only available in pure native C++ code.
A short version of why this is not implemented is that it's much harder in managed code. Native code is nice and predictable. You create memory and it doesn't move around unless you create a new object (or explicitly copy memory).
Managed code is much more complex because it's a garbage collected language. The CLR commonly moves objects around in memory. Therefore simply watching a bit of memory is not good enough. It requires GC interaction.
This is just one of the issues with implementing managed break points.
I assume you're trying to do this because you want to see where the change in value came from. You already stated the way I've always done it: create a property, and break on the set accessor (except that you must then always use that set accessor for this to work).
Basically, I'd say that since a private field is only storage you can't break on it because the private field isn't a breakable instruction.
The only way I can think to do this is to right-click the variable and select "Find all references". Once it finds all the references, you can create a new breakpoint at each point in the code where the variable is assigned a value. This would probably work pretty well, unless you were passing the variable by reference to another function and changing the value in there. In that case, you'd need some way of watching a specific point in memory to see when it changed. I'm not sure if such a tool exists in VS.
Like ChrisW commented. You can set a 'Data Breakpoint' but only for native (non-managed) code. The garbage collector will move allocated memory blocks around when the garbage collector runs. Thus, data breakpoints are not possible for managed code.
Otherwise, no. You must encapsulate access to the item for which you want to 'break on modify'. Since it's a private member already, I suggest following Kibbee's suggestion of setting breakpoints wherever it's used.
Besides promoting count to a field and setting a breakpoint on the set method of the field, is there any other way to do this?
Make it a property of a different class, create an instance of the class, and set a breakpoint on the property.
Instead of ...
test()
{
int i = 3;
...etc...
i = 4;
}
... have ...
class Int
{
    int m;
    internal Int(int i) { m = i; }
    internal int val { set { m = value; } get { return m; } }
}
test()
{
Int i = new Int(3);
...etc...
i.val = 4;
}
The thing is that, using C#, the actual memory location of everything is continually being moved: the debugger therefore can't easily use the CPU's 'break on memory access' debug registers, and it's easier for it to implement a code-location breakpoint instead.
