C++/CLI Resource Management Confusion - memory-management

I am extremely confused about resource management in C++/CLI. I thought I had a handle (no pun intended) on it, but I stumbled across the auto_gcroot<T> class while looking through header files, which led to a Google search, then the better part of a day reading documentation, and now confusion. So I figured I'd turn to the community.
My questions concern the difference between auto_handle/stack semantics, and auto_gcroot/gcroot.
auto_handle: My understanding is that this will clean up a managed object created in a managed function. My confusion is: isn't the garbage collector supposed to do that for us? Wasn't that the whole point of managed code? To be more specific:
//Everything that follows is managed code
void WillThisLeak(void)
{
String ^str = gcnew String(L"text");
//Did I just leak memory? Or will GC clean this up? what if an exception is thrown?
}
void NotGoingToLeak(void)
{
String ^str = gcnew String(L"text");
delete str;
//Guaranteed not to leak, but is this necessary?
}
void AlsoNotGoingToLeak(void)
{
auto_handle<String> str = gcnew String(L"text");
//Also Guaranteed not to leak, but is this necessary?
}
void DidntEvenKnowICouldDoThisUntilToday(void)
{
String str(L"text");
//Also Guaranteed not to leak, but is this necessary?
}
Now this would make sense to me if it were a replacement for the C# using keyword, and it were only recommended for use with resource-intensive types like Bitmap, but this isn't mentioned anywhere in the docs, so I'm afraid I've been leaking memory this whole time.
auto_gcroot
Can I pass it as an argument to a native function? What will happen on copy?
void function(void)
{
auto_gcroot<Bitmap ^> bmp = //load bitmap from somewhere
manipulateBmp(bmp);
pictureBox->Image = bmp; //Is my Bitmap now disposed of by auto_gcroot?
}
#pragma unmanaged
void manipulateBmp(auto_gcroot<Bitmap ^> bmp)
{
//Do stuff to bmp
//destructor for bmp is now called right? does this call dispose?
}
Would this have worked if I'd used a gcroot instead?
Furthermore, what is the advantage to having auto_handle and auto_gcroot? It seems like they do similar things.
I must be misunderstanding something for this to make so little sense, so a good explanation would be great. Also any guidance regarding the proper use of these types, places where I can go to learn this stuff, and any more good practices/places I can find them would be greatly appreciated.
thanks a lot,
Max

Remember, delete called on a managed object is akin to calling Dispose in C#. So you are right: auto_handle lets you do what you would do with the using statement in C#. It ensures that delete gets called at the end of the scope. So, no, you're not leaking managed memory if you don't use auto_handle (the garbage collector takes care of that); you are just failing to call Dispose. There is no need to use auto_handle if the types you're dealing with do not implement IDisposable.
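This scope-tied cleanup is ordinary RAII, the same idiom native C++ uses. As a rough analogy in standard C++ (Resource and cleanupRunsOnException are hypothetical names, standing in for an IDisposable object and a demo), the destructor runs at scope exit even when an exception unwinds the stack, which is exactly the guarantee auto_handle adds on top of what the GC already does for memory:

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical disposable resource: sets *disposed in its destructor,
// mimicking what auto_handle does (call delete / Dispose at scope exit).
struct Resource {
    bool* disposed;
    explicit Resource(bool* d) : disposed(d) {}
    ~Resource() { *disposed = true; }
};

// Returns true if cleanup ran even though an exception unwound the scope.
bool cleanupRunsOnException() {
    bool cleaned = false;
    try {
        Resource r(&cleaned);   // analogous to: auto_handle<T> h = gcnew T();
        throw std::runtime_error("oops");
    } catch (...) {
        // destructor already ran during stack unwinding
    }
    return cleaned;
}
```

The GC would eventually reclaim the memory either way; what auto_handle adds is deterministic, exception-safe release of the non-memory resource.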
gcroot is used when you want to hold on to a managed type inside a native class. You can't just declare a managed type directly in a native type using the hat ^ symbol; you must use a gcroot. This is a "garbage collected root": while the gcroot (a native object) lives, the garbage collector cannot collect the referenced object. When the gcroot is destroyed, it lets go of the reference, and the garbage collector is free to collect the object (assuming it has no other references). You wouldn't normally declare a free-standing gcroot in a method like you've done above; just use the hat ^ syntax whenever you can.
So when would you use auto_gcroot? It would be used when you need to hold on to a managed type inside a native class AND that managed type happens to implement IDisposable. On destruction of the auto_gcroot, it will do two things: call delete on the managed type (think of this as a Dispose call; no memory is freed) and free the reference (so the object can be garbage collected).
Hope it helps!
Some references:
http://msdn.microsoft.com/en-us/library/aa730837(v=vs.80).aspx
http://msdn.microsoft.com/en-us/library/481fa11f(v=vs.80).aspx
http://www.codeproject.com/Articles/14520/C-CLI-Library-classes-for-interop-scenarios

Related

Should I use smart pointers here?

I have read several answers about similar issues, but I am still confused as to when it is a good time to use smart pointers. I have a class Foo which looks like this:
class Bar;
class Baz;
class Foo
{
public:
Foo(Bar* param1, std::vector<Baz*>& param2);
virtual ~Foo();
// Method using myBar and myBaz polymorphically...
private:
Bar* myBar;
std::vector<Baz*> myBaz;
};
I need the two data members to be pointers for polymorphism. It is part of an API, and what I fear is that someone writes:
int main()
{
//...
std::vector<Baz*> allOnHeap {new Baz(), new Baz()};
Foo(new Bar(), allOnHeap);
// ...
}
which would be legal but would result in memory leaks. I could add deletes in the destructor, but then what if no dynamic allocation has been made? In your opinion, would it be a good idea to use smart pointers in this case? Please explain your opinion. If you have better ideas on how to do this, please feel free to share them.
I would use smart pointers in the case you've mentioned. As your class stands:
Copying of your class is not safe.
You can't know whether the objects you are referencing in the list still exist at the time you reference them.
RAII is not automatically used for myBar.
More specifically, one can IMHO model the ownership semantics that you require:
Use unique_ptr<Bar> and use std::vector<weak_ptr<Baz>>.
unique_ptr<Bar> gives you automatic cleanup at the end of the holding class's scope
std::vector<weak_ptr<Baz>>: Each of the weak_ptr's indicate that the using class (class that would get access through the weak_ptr) has no ownership semantics, and prior to usage has to attempt gaining temporary ownership (by locking which gives one a shared_ptr).
With the above-mentioned usage of smart pointers, you will get a compiler error when mistakenly copying out of the box (as unique_ptr is not copyable), and, more importantly, each item in the list will be safely accessible. The instantiator of the holding class also has a clear idea of the ownership semantics envisaged by the class writer.
BTW, you don't have to explicitly delete your default constructor.
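A sketch of the suggested design in standard C++ follows. Bar and Baz here are minimal stand-ins for the classes in the question, and liveBazCount is a hypothetical helper added only to show weak_ptr::lock in action:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Minimal stand-ins for the question's polymorphic classes.
struct Bar { virtual ~Bar() = default; };
struct Baz { virtual ~Baz() = default; };

class Foo {
public:
    Foo(std::unique_ptr<Bar> bar, std::vector<std::weak_ptr<Baz>> bazs)
        : myBar(std::move(bar)), myBaz(std::move(bazs)) {}

    // Hypothetical helper: counts how many Baz objects are still alive.
    std::size_t liveBazCount() const {
        std::size_t n = 0;
        for (const auto& w : myBaz)
            if (auto sp = w.lock())   // temporary shared ownership while in use
                ++n;
        return n;
    }

private:
    std::unique_ptr<Bar> myBar;              // sole owner, freed automatically
    std::vector<std::weak_ptr<Baz>> myBaz;   // non-owning observers
};
```

Because unique_ptr is move-only, an accidental copy of Foo fails to compile, and each weak_ptr must be locked before use, so a Baz that the caller has already released is detected rather than dereferenced.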
Smart pointers don't immediately jump to mind in this case mostly because it's not clear what the ownership requirements are. If you are following the C++ Core Guidelines recommendation on ownership transfer and raw pointers then it's clear that class Foo is not intended to take ownership of the Bar and Baz objects and should not be deleting them.
If your intent is to maybe sometimes take ownership of the objects and sometimes not, I would suggest considering an alternate design where you are able to pick a single ownership policy and stick with it.

interface function getting rvalue pointers to initialize shared_ptr

I have a class exposing through its interface an add function:
void AddObject(Object *o);
Inside the class I maintain the objects in set<shared_ptr<Object>>.
Since I will create a shared_ptr from the received pointer, I thought to limit the function argument to rvalue pointers only, to make sure that the user will not delete the pointer I use. And so I'll change the function declaration to:
void AddObject(Object* &&o);
so a typical use will be:
AddObject(new Object())
preventing the user from accidentally deleting the pointer I hold.
I don't want to use shared_ptr in the interface because the user is not familiar with shared_ptr.
Do you think my suggestion is a good idea?
I think this is a bad idea. I'm sure there is a reason why the shared_ptr constructor that takes a raw pointer is marked explicit instead of taking an rvalue. In my eyes, it's better to teach the users once about smart pointers, or at least teach them about using make_shared/make_unique (which are safer and, in the case of make_shared, more efficient, BTW).
BTW, why shared_ptr and not unique_ptr?
Also, why set? Even if you want to make sure you hold each pointer only once and searching a vector each time doesn't look natural enough in your code, I don't see a reason to hold the pointers sorted instead of using unordered_set.
First of all, this approach will not prevent the user from deleting the pointer. Consider this example
auto obj = new Object();
AddObject(std::move(obj));
delete obj;
Secondly, the amount of steps between calling new and the creation of shared_ptr should be as few as possible. If anything happens inside AddObject before it can create a shared_ptr, the object will never get deleted.
The same applies if there are more arguments to AddObject(). If constructing those fails, you will leak memory.
void AddObject(Object* &&o, SomeOtherObject* x);
AddObject(new Object(), xx()); // if xx() throws, memory leak will occur
Ideally you would "wrap" the object creation into the shared_ptr construction:
void AddObject(std::shared_ptr<Object> o);
AddObject(std::make_shared<Object>());
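A minimal standard-C++ sketch of that interface (Registry is a hypothetical name for the holding class, and Object is a stand-in for the question's type):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <set>

struct Object { };  // stand-in for the question's Object

// Container that owns objects via shared_ptr, as the answer recommends.
class Registry {
public:
    void AddObject(std::shared_ptr<Object> o) { objects.insert(std::move(o)); }
    std::size_t size() const { return objects.size(); }
private:
    std::set<std::shared_ptr<Object>> objects;
};
```

With make_shared the allocation and the ownership wrapper are created in one step, so there is no window in which a raw pointer can leak if the construction of another argument throws.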
Either of the following methods may solve your problem.
You may add comments for AddObject telling users that deleting the pointer they added is not allowed. This is almost enough.
Or, you could make Object inherit from a base class which has a private destructor and a method named destroyByOwner.

GetFunctionPointerForDelegate and pin pointer

Hi, this is in regard to some code given in C++/CLI in Action which I have trouble understanding. The code is given below:
delegate bool EnumWindowsDelegateProc(
IntPtr hwnd,IntPtr lParam);
ref class WindowEnumerator
{
private:
EnumWindowsDelegateProc^ _WindowFound;
public:
WindowEnumerator(EnumWindowsDelegateProc^ handler)
{
_WindowFound = handler;
}
void Init()
{
pin_ptr<EnumWindowsDelegateProc^> tmp = &_WindowFound;
EnumWindows((WNDENUMPROC)
Marshal::GetFunctionPointerForDelegate(
_WindowFound).ToPointer(), 0);
}
};
In the above code _WindowFound has been pinned so the GC won't move it. The questions are:
Isn't tmp only valid inside Init(), and thus _WindowFound pinned only during the call to Init()?
If that's the case, isn't there a chance the delegate's location in memory might change at the time EnumWindows calls it as a function pointer?
A pin_ptr<> automatically unpins, RAII-style, when code execution leaves the block in which it is declared. So it will be pinned for the entire body of the Init() method in your code, and your second bullet does not apply.
It is notable that the code is in fact not correct. It works, but by accident. Marshal::GetFunctionPointerForDelegate() invokes the stub compiler to auto-generate the native code that's needed to allow native code to invoke the delegate target. The lifetime of that stub is controlled by the lifetime of the delegate object. In other words, as soon as the delegate object gets garbage collected, the stub will be destroyed as well.
Pinning the delegate object does not in any way affect the stub. It is already unmovable, the GC never moves code. It works by accident because pinning an object requires creating an extra GC handle for the object (GCHandle::Alloc), enough to prevent premature collection.
It doesn't make an enormous difference in this kind of code; EnumWindows() is slow anyway. That's not necessarily the case when you call other native code that requires a callback, so avoiding pinning should always be a goal in general. All you have to do is let the jitter see a reference to the delegate object beyond the code where it can still be used, like this:
void Init() {
EnumWindows((WNDENUMPROC)
Marshal::GetFunctionPointerForDelegate(
_WindowFound).ToPointer(), 0);
GC::KeepAlive(_WindowFound);
}
Very efficient: GC::KeepAlive() doesn't generate any code, it just tells the jitter to extend the lifetime of the _WindowFound reference so it can't be collected while EnumWindows() is executing. Even that is overkill in this specific case, since somebody has to hold a reference to the WindowEnumerator object in order to retrieve _WindowFound, but better safe than sorry.

Convert C++/CLI delegate^ to long and back

How can I convert PaintDelegate^ to a long to be sent as the refCon param so that once inside the TrackTransferCB I can convert it back and invoke it? The long it is converted to doesn't have to mean anything as long as I can convert it back to the delegate.
This is the general idea:
PaintDelegate^ paintDel = ...;
refCon = (long)paintDel; // This conversion doesn't work
...
static OSErr TrackTransferCB(Track t, long refCon) {
((PaintDelegate^)refCon)->Invoke(); // This conversion doesn't work
}
Which conversions will work this way?
Delegate objects are garbage collected objects, just like any other non-value type in .NET. Which means that the garbage collector can move them. Which means that getting their address cannot work, the address will change when the GC compacts the heap.
I'm guessing you need to do this to pass unmanaged code some kind of reference to the delegate. A handle is the typical solution. Just keep a counter around that you increment each time you create a new object. Store it in a Dictionary<int, PaintDelegate^>^ and pass the counter value to the unmanaged code.
Marshal::GetFunctionPointerForDelegate() is another approach, the unmanaged code can now directly invoke the delegate target. Not a long but a void*. You do however still have to store the delegate object somewhere safe so it won't get garbage collected. I recommend the former.
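The counter-plus-dictionary idea can be sketched in standard C++ as a handle table (HandleTable is a hypothetical name, and std::function stands in for the delegate): native code is handed a small integer, never an object address, so it stays valid even if the GC moves the object.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>

// Hypothetical handle table: hand out small integer handles to native
// code instead of object addresses, since addresses are not stable.
class HandleTable {
public:
    std::int32_t Register(std::function<void()> cb) {
        std::int32_t h = ++counter;      // counter incremented per object
        table[h] = std::move(cb);
        return h;
    }
    void Invoke(std::int32_t h) { table.at(h)(); }  // look up by handle
    void Release(std::int32_t h) { table.erase(h); }
private:
    std::int32_t counter = 0;
    std::map<std::int32_t, std::function<void()>> table;
};
```

The table also keeps the callback alive for as long as the handle is registered, which is the same reason the managed version must store the delegate somewhere reachable.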

Using linq with Sharepoint and disposing of objects

How do I dispose of the reference to the subWeb in the following query?
using (SPSite spSite = Utility.GetElevatedSite(_rootUrl))
{
var subWebs = from SPWeb web in spSite.AllWebs
where web.ServerRelativeUrl.ToLower() == path
from SPWeb subWeb in web.Webs
select subWeb;
}
Do I even need to worry about disposing the subWeb if I have already wrapped the spSite in the using statement?
Edit:
Is it a good idea to call garbage collection in this scenario?
Unfortunately, you do.
The trouble starts from the SPSite.AllWebs property.
The SPWeb.Web property isn't safe either.
Read this very thorough reference of the situations where you need to worry about disposing SharePoint objects.
(I suggest adding this to your SharePoint cheat-sheet).
As a result, I feel the current SharePoint object model can not be safely used with LINQ syntax.
Your code would need a re-write with the various implied loops broken out so that you can explicitly dispose the objects involved.
Edit:
The SPDisposeCheck tool is a command-line console app that will scan your .NET assemblies and warn you of undisposed references based on the above best-practice guidelines. Check it out.
http://code.msdn.microsoft.com/SPDisposeCheck
http://blogs.msdn.com/sharepoint/archive/2008/11/12/announcing-spdisposecheck-tool-for-sharepoint-developers.aspx
The technical answer to your original question is a qualified "No": all SPWeb objects opened from an SPSite are automatically disposed when the SPSite is disposed. However, in practice it is a good idea to dispose an SPWeb as soon as you're done with it to reduce memory pressure, especially when working with code like this that opens several SPWeb objects.
Implementing this dispose-safe behavior for LINQ is actually quite simple in C#. You can find full details in this post, but the short version is that a C# iterator can handle disposal for you. Using my AsSafeEnumerable() extension method, your code is relatively safe written like this:
using (SPSite spSite = Utility.GetElevatedSite(_rootUrl))
{
var sw = from SPWeb web in spSite.AllWebs.AsSafeEnumerable()
where web.ServerRelativeUrl.ToLower() == path
from SPWeb subWeb in web.Webs.AsSafeEnumerable()
select subWeb;
foreach(SPWeb aSubWeb in sw)
{
// Do something
}
}
Now the result of your query, which I've assigned to sw, is a lazy iterator of type IEnumerable<SPWeb>. As you enumerate over that result, each SPWeb will be disposed when the enumerator moves to the next item. This means that it is not safe to use that SPWeb reference, or any SP* object created from it (SPList, etc), outside of your foreach loop. Also sw would not be safe to use outside of that using block because the iterator's SPWebCollections will be tied to the now-disposed SPSite.
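As an analogy only, the dispose-as-you-advance contract can be sketched in standard C++ (Web and forEachSafe are hypothetical stand-ins; the real AsSafeEnumerable is a C# iterator over SPWeb):

```cpp
#include <cassert>
#include <vector>

// Hypothetical disposable resource, standing in for SPWeb.
struct Web {
    bool disposed = false;
    void Dispose() { disposed = true; }
};

// Visit each element, disposing it as soon as the loop body is done with
// it: the same contract AsSafeEnumerable() gives the C# foreach.
template <typename F>
void forEachSafe(std::vector<Web>& webs, F body) {
    for (auto& w : webs) {
        body(w);
        w.Dispose();   // disposed before the next item is produced
    }
}
```

This is why the item must not be used outside the loop body: by the time the next element is produced, the previous one has already been disposed.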
That said, code like this that enumerates over all webs (twice!) is extremely expensive. There is almost certainly a more efficient way that this could be implemented, if only by using spSite.AllWebs[path] instead of your from/where.
Regarding garbage collection, these objects require disposal because they allocate unmanaged memory that the GC doesn't even know about.
Finally, a word of caution about your 'GetElevatedSite' utility. If you're using RunWithElevatedPrivileges in your helper method to get your elevated SPSite, there are a number of issues you could run into by returning your SPSite out of that elevated context. If possible, I would suggest using SPSite impersonation instead - my preferred method is described here.
