I'm considering using V8 as an embedded JavaScript engine for a project but I'm having trouble figuring out how to manage the lifetime of native C++ objects. This experiment was supposed to demonstrate the Weak Pointer callback.
Near the end of the code below I call v8::Persistent::SetWeak and install a callback. All I want is to be able to create a demonstration of this callback being called.
I half-heartedly hoped it would be as easy as letting the handles fall out of scope, but the code below doesn't call the callback. I also read somewhere that calling Isolate::IdleNotificationDeadline might force a garbage collection, but this didn't work either.
How can I demonstrate the weak pointer callback being called? I'd like to write some code that results in the cleanup function being called at some point before the program exits.
I'm clearly having trouble understanding how to set this up properly and would appreciate some assistance and an explanation. I'm afraid I just don't get it yet.
My expectation is that it's possible to create a Weak Pointer via a Persistent handle and that when there are no more handles to an object, the callback will (eventually) be called so that native C++ resources associated with that JavaScript object can be freed.
I'm particularly put off by a comment in the v8.h header file:
NOTE: There is no guarantee as to when or even if the callback is invoked. The invocation is performed solely on a best effort basis. As always, GC-based finalization should not be relied upon for any critical form of resource management!
This makes the entire engine seem useless to me for managing a native object with this mechanism. But I'm confident there's at least some minimal contrived scenario in which the callback is called.
My requirement is that I am able to write some JavaScript to allocate an object that will eventually be freed when there are no more references to it.
foo = createFoo(); // creates a JavaScript object wrapping the native C++ Foo object.
doSomethingWith(foo); // do stuff with the Foo here
foo = null; // make sure there are no more JavaScript handles to the wrapper for the Foo object.
// After this point, I'm hoping V8 will eventually let me know that I can delete the native C++ Foo object
I'm assuming I don't actually have to execute any JavaScript to demonstrate the weak pointer and cleanup mechanism. I was hoping I could just create a Persistent handle and install the Weak callback then let it go out of scope. I seem to be wrong in that assumption, or I have failed to demonstrate it here.
#include <iostream>
#include "include/libplatform/libplatform.h"
#include "include/v8.h"
class Foo {};
void cleanup(const v8::WeakCallbackInfo<Foo>& data)
{
std::cout << "Weak Callback called" << std::endl;
delete data.GetParameter();
}
int main(int argc, char* argv[]) {
std::cout << "Start..." << std::endl;
v8::V8::InitializeICUDefaultLocation(argv[0]);
v8::V8::InitializeExternalStartupData(argv[0]);
std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
v8::V8::InitializePlatform(platform.get());
v8::V8::Initialize();
// Create a new isolate and make it the current one
v8::Isolate::CreateParams create_params;
create_params.array_buffer_allocator = v8::ArrayBuffer::Allocator::NewDefaultAllocator();
v8::Isolate* isolate = v8::Isolate::New(create_params);
{
v8::Isolate::Scope isolate_scope(isolate);
v8::HandleScope handle_scope(isolate);
v8::Local<v8::Context> context = v8::Context::New(isolate, NULL, v8::ObjectTemplate::New(isolate));
v8::Context::Scope context_scope(context);
v8::Local<v8::Object> obj = v8::ObjectTemplate::New(isolate)->NewInstance(context).ToLocalChecked();
v8::Persistent<v8::Object> persistent;
persistent.Reset(isolate, obj);
persistent.SetWeak(new Foo(), cleanup, v8::WeakCallbackType::kParameter);
}
isolate->IdleNotificationDeadline(1.0);
std::cout << "...Finish" << std::endl;
}
Note: The above code example should be built the same way the hello_world example for V8 is built.
For a contrived example, calling isolate->LowMemoryNotification() should do the trick. I wouldn't recommend doing that in production though, as it's a huge waste of CPU time (unless you really have a low memory situation where you're close to crashing due to OOM).
Apart from that, the comment you found holds. Relying on weak callbacks to free objects is fine; relying on it for managing critical and scarce resources is not recommended. If the objects in question add up to significant size, you should use isolate->AdjustAmountOfExternalAllocatedMemory(...) as appropriate, to let the GC know that there is something to be freed. And you should have your own fallback mechanism to clean up everything when the Isolate goes away (if you're not just terminating the entire process at that point anyway).
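To make that concrete, here is a minimal sketch based on the code in the question (same includes, cleanup function, and isolate setup as above), with the IdleNotificationDeadline call replaced by LowMemoryNotification once the handles have gone out of scope:
{
    v8::Isolate::Scope isolate_scope(isolate);
    v8::HandleScope handle_scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
    v8::Context::Scope context_scope(context);
    // A throwaway object wrapping a native Foo, exactly as in the question.
    v8::Local<v8::Object> obj = v8::ObjectTemplate::New(isolate)->NewInstance(context).ToLocalChecked();
    v8::Persistent<v8::Object> persistent(isolate, obj);
    persistent.SetWeak(new Foo(), cleanup, v8::WeakCallbackType::kParameter);
}
// Outside the HandleScope there are no strong references to the object left,
// so forcing a full garbage collection should invoke the weak callback.
// Fine for a demo, not something to do in production.
isolate->LowMemoryNotification();   // expect "Weak Callback called" before "...Finish"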
Related
Good Day,
I usually find it best to look at other people's code when I try to learn a programming language.
I am now trying to learn C++ but having some trouble understanding the following function (as an example):
Vehicle* MyClass::GetVehicleByID(uint id)
{
Vehicle* car = new Vehicle;
car->model = vehiclesArray[id].model;
return car;
}
int main()
{
Vehicle* car = myClass.GetVehicleByID(0);
std::cout << "Car Model: " << car->model << std::endl;
}
I think I understand the concept of pointers. But I don't understand when this object will be destroyed. Will I have to manually delete the object "car" in the main function? Also, why are we using the new keyword instead of just using "Vehicle car();"? As I understand it the new keyword will allocate memory before populating it with the object?
Am I completely out of my depth just by asking these questions? Where or how can I learn to understand what is going on in this code? Because it seems like all tutorials only explain the "basics" like what a pointer is and the most basic way of using them.
Any help would be much appreciated.
Yes, if you create an object with the new keyword, you will have to delete it with the delete keyword. There is no garbage collection in C++. So in your case, this would be:
delete car;
Also, the difference between creating with new and just using the constructor directly as you suggest is that with new, the object is created on the heap, and its lifetime is extended until it is explicitly deleted by the programmer. In the other case, it will be created on the stack and will be deleted automatically as soon as the enclosing function or block is exited.
What happens in your case is that you create an object on the heap and never delete it. This causes a so-called memory leak. Since your program is small, this is not an issue and this memory is released after the program finishes. However, in a case of a long running program, or a program that often allocates on the heap, it can cause the program to run out of available memory.
Also note that you could create the object inside the function, change the signature to return an object instead of a pointer, and have the function return that object directly. This would work, but what would happen is that first an object local to the function would be created on the stack. Then that object would be copied into another object created in the main function, and the first object would then be deleted. This is not very efficient, which is why a pointer to an object allocated on the heap is used. One more reason to use the heap would be for storing large objects: the stack is small compared to the heap and should not be used to store very large objects.
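For illustration, the return-by-value version described above might look like this (assuming the same Vehicle class, vehiclesArray, and myClass as in your question):
Vehicle MyClass::GetVehicleByID(uint id)
{
    Vehicle car;                          // local object, nothing for the caller to delete
    car.model = vehiclesArray[id].model;
    return car;                           // returned by value: copied (or moved) into the caller's object
}
int main()
{
    Vehicle car = myClass.GetVehicleByID(0);  // no new, and no delete needed
    std::cout << "Car Model: " << car.model << std::endl;
}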
I hope this clarifies it a bit, but understanding all of this well takes a lot of time and work; one Stack Overflow answer is not enough. I would suggest reading more about the differences between objects on the heap and objects on the stack in C++. There is an abundance of information online.
Just regarding the code in your question:
Maybe it's better to use a shared_ptr instead of a raw pointer.
#include <memory>
std::shared_ptr<Vehicle> MyClass::GetVehicleByID(uint id)
{
std::shared_ptr<Vehicle> car = std::make_shared<Vehicle>();
car->model = vehiclesArray[id].model;
return car;
}
int main()
{
std::shared_ptr<Vehicle> car = myClass.GetVehicleByID(0);
std::cout << "Car Model: " << car->model << std::endl;
}
The shared_ptr class is based on the RAII guidelines, so you don't have to delete anything yourself. When main() ends, the destructor shared_ptr::~shared_ptr() will be called and the managed Vehicle will be deleted.
Anyway, it's a good idea to read C++ Primer.
C++/WinRT's agile_ref supposedly allows usage of non-agile objects in an agile way.
However, I've found that this fails with at least CoreWindow instances.
As a short example:
void Run()
{
auto window{ CoreWindow::GetForCurrentThread() };
window.Activate();
auto agile_wnd{ make_agile(window) };
ThreadPool::RunAsync([=](const auto&) {
auto other_wnd{ agile_wnd.get() };
other_wnd.SetPointerCapture();
});
auto dispatcher{ window.Dispatcher() };
dispatcher.ProcessEvents(CoreProcessEventsOption::ProcessUntilQuit);
}
Run() is called on the UI thread; it then attempts to create an agile reference and use it to call the CoreWindow from the thread pool. However, this fails with "The application called an interface that was marshaled for a different thread." Since agile_ref uses RoGetAgileReference internally to marshal the object, and the calls to create the reference and then unmarshal it both succeed, this appears to me to be CoreWindow simply refusing to be marshaled at all.
Unless, of course, this is working as intended and the RoGetAgileReference call silently fails to marshal the CoreWindow.
So what causes the SetPointerCapture call to fail, even with the agile_ref?
The error is misleading. Most of the Windows.UI classes are actually agile. The challenge is that they perform an explicit thread check to ensure that you are actually calling them from the appropriate UI thread. That's why an agile_ref won't help. The solution is to use the Dispatcher, which gets you on the correct thread. You can then simply call methods on the object directly.
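For example, here is a sketch of the question's Run function rewritten that way, assuming the same using declarations as the question's code (CoreDispatcherPriority comes from Windows::UI::Core):
void Run()
{
    auto window{ CoreWindow::GetForCurrentThread() };
    window.Activate();
    auto dispatcher{ window.Dispatcher() };
    ThreadPool::RunAsync([dispatcher](const auto&) {
        // Hop back to the UI thread before touching the CoreWindow.
        dispatcher.RunAsync(CoreDispatcherPriority::Normal, [] {
            CoreWindow::GetForCurrentThread().SetPointerCapture();
        });
    });
    dispatcher.ProcessEvents(CoreProcessEventsOption::ProcessUntilQuit);
}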
Hi, this is in regard to some code given in C++/CLI in Action which I have trouble understanding. The code is given below:
delegate bool EnumWindowsDelegateProc(
IntPtr hwnd,IntPtr lParam);
ref class WindowEnumerator
{
private:
EnumWindowsDelegateProc^ _WindowFound;
public:
WindowEnumerator(EnumWindowsDelegateProc^ handler)
{
_WindowFound = handler;
}
void Init()
{
pin_ptr<EnumWindowsDelegateProc^> tmp = &_WindowFound;
EnumWindows((WNDENUMPROC)
Marshal::GetFunctionPointerForDelegate(
_WindowFound).ToPointer(), 0);
}
};
In the above code _WindowFound has been pinned so the GC won't move it. The questions are:
Isn't tmp only valid inside Init(), and thus _WindowFound pinned only during the call to Init()?
If that's the case, isn't there a chance the delegate's location in memory might change at the time EnumWindows calls it as a function pointer?
A pin_ptr<> automatically unpins, RAII-style, when code execution leaves the block in which it is declared. So it will be pinned for the entire body of the Init() method in your code, and your second bullet does not apply.
It is notable that the code is in fact not correct. It works, but by accident. Marshal.GetFunctionPointerForDelegate() invokes the stub compiler to auto-generate the native code that's needed to allow the native code to invoke the delegate target. The lifetime of that stub is controlled by the lifetime of the delegate object. In other words, as soon as the delegate object gets garbage collected, the stub will be destroyed as well.
Pinning the delegate object does not in any way affect the stub. It is already unmovable, the GC never moves code. It works by accident because pinning an object requires creating an extra GC handle for the object (GCHandle::Alloc), enough to prevent premature collection.
It doesn't make an enormous difference in this kind of code; EnumWindows() is slow anyway. That's not necessarily the case when you call other native code that requires a callback, and avoiding pinning should always be a goal in general. All you have to do is let the jitter see a reference to the delegate object beyond the code where it can still be used, like this:
void Init() {
EnumWindows((WNDENUMPROC)
Marshal::GetFunctionPointerForDelegate(
_WindowFound).ToPointer(), 0);
GC::KeepAlive(_WindowFound);
}
Very efficient: GC::KeepAlive() doesn't generate any code; it just tells the jitter to extend the lifetime of the _WindowFound reference so it can't be collected while EnumWindows() is executing. Even that is overkill in this specific case, since somebody is going to have a reference to the WindowEnumerator object in order to retrieve _WindowFound, but better safe than sorry.
I am extremely confused about resource management in C++/CLI. I thought I had a handle (no pun intended) on it, but I stumbled across the auto_gcroot<T> class while looking through header files, which led to a google search, then the better part of day reading documentation, and now confusion. So I figured I'd turn to the community.
My questions concern the difference between auto_handle/stack semantics, and auto_gcroot/gcroot.
auto_handle: My understanding is that this will clean up a managed object created in a managed function. My confusion is that isn't the garbage collector supposed to do that for us? Wasn't that the whole point of managed code? To be more specific:
//Everything that follows is managed code
void WillThisLeak(void)
{
String ^str = gcnew String(L"temp");
//Did I just leak memory? Or will GC clean this up? what if an exception is thrown?
}
void NotGoingToLeak(void)
{
String ^str = gcnew String(L"temp");
delete str;
//Guaranteed not to leak, but is this necessary?
}
void AlsoNotGoingToLeak(void)
{
auto_handle<String> str(gcnew String(L"temp"));
//Also Guaranteed not to leak, but is this necessary?
}
void DidntEvenKnowICouldDoThisUntilToday(void)
{
String str();
//Also Guaranteed not to leak, but is this necessary?
}
Now this would make sense to me if it were a replacement for the C# using keyword and it were only recommended for use with resource-intensive types like Bitmap, but this isn't mentioned anywhere in the docs, so I'm afraid I've been leaking memory this whole time.
auto_gcroot
Can I pass it as an argument to a native function? What will happen on copy?
void function(void)
{
auto_gcroot<Bitmap ^> bmp = //load bitmap from somewhere
manipulateBmp(bmp);
pictureBox.Image = bmp; //Is my Bitmap now disposed of by auto_gcroot?
}
#pragma unmanaged
void manipulateBmp(auto_gcroot<Bitmap ^> bmp)
{
//Do stuff to bmp
//destructor for bmp is now called right? does this call dispose?
}
Would this have worked if I'd used a gcroot instead?
Furthermore, what is the advantage to having auto_handle and auto_gcroot? It seems like they do similar things.
I must be misunderstanding something for this to make so little sense, so a good explanation would be great. Also any guidance regarding the proper use of these types, places where I can go to learn this stuff, and any more good practices/places I can find them would be greatly appreciated.
thanks a lot,
Max
Remember that delete called on a managed object is akin to calling Dispose in C#. So you are right that auto_handle lets you do what you would do with the using statement in C#: it ensures that delete gets called at the end of the scope. So, no, you're not leaking managed memory if you don't use auto_handle (the garbage collector takes care of that); you are just failing to call Dispose. There is no need for auto_handle if the types you're dealing with do not implement IDisposable.
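As a small sketch of that idea (the file name is just an example), auto_handle plays the role of a C# using block:
#include <msclr/auto_handle.h>
void ReadFirstLine()
{
    // Roughly: using (var reader = new StreamReader("example.txt")) { ... }
    msclr::auto_handle<System::IO::StreamReader> reader(gcnew System::IO::StreamReader("example.txt"));
    System::String^ line = reader->ReadLine();
    // When the scope ends, auto_handle calls delete on the StreamReader, i.e. Dispose();
    // the memory itself is still reclaimed later by the garbage collector.
}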
gcroot is used when you want to hold on to a managed type inside a native class. You can't just declare a managed type directly in a native type using the hat ^ symbol; you must use a gcroot. This is a "garbage collected root": while the gcroot (a native object) lives, the garbage collector cannot collect the referenced object. When the gcroot is destroyed, it lets go of the reference, and the garbage collector is free to collect the object (assuming it has no other references). You wouldn't normally declare a free-standing gcroot in a managed method like you've done above; just use the hat ^ syntax whenever you can.
So when would you use auto_gcroot? It would be used when you need to hold on to a managed type inside a native class AND that managed type happens to implement IDisposable. On destruction of the auto_gcroot, it will do 2 things: call delete on the managed type (think of this as a Dispose call--no memory is freed) and free the reference (so the type can be garbage collected).
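A small sketch of that second case, with a native class holding managed members (the class and member names are just illustrative):
#include <vcclr.h>                 // gcroot
#include <msclr/auto_gcroot.h>     // msclr::auto_gcroot
class NativeHolder
{
public:
    gcroot<System::String^> name;                          // kept alive, never disposed
    msclr::auto_gcroot<System::IO::MemoryStream^> stream;  // delete (i.e. Dispose) called on destruction
    NativeHolder()
    {
        name = gcnew System::String(L"example");
        stream = gcnew System::IO::MemoryStream();
    }
    // When a NativeHolder is destroyed, auto_gcroot calls delete on the MemoryStream
    // (its Dispose), while gcroot simply releases its reference so the GC can collect the String.
};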
Hope it helps!
Some references:
http://msdn.microsoft.com/en-us/library/aa730837(v=vs.80).aspx
http://msdn.microsoft.com/en-us/library/481fa11f(v=vs.80).aspx
http://www.codeproject.com/Articles/14520/C-CLI-Library-classes-for-interop-scenarios
What is a "Handle" when discussing resources in Windows? How do they work?
It's an abstract reference value to a resource, often memory or an open file, or a pipe.
Properly, in Windows (and generally in computing), a handle is an abstraction which hides a real memory address from the API user, allowing the system to reorganize physical memory transparently to the program. Resolving a handle into a pointer locks the memory, and releasing the handle invalidates the pointer. In this case think of it as an index into a table of pointers... you use the index for the system API calls, and the system can change the pointer in the table at will.
Alternatively a real pointer may be given as the handle when the API writer intends that the user of the API be insulated from the specifics of what the address returned points to; in this case it must be considered that what the handle points to may change at any time (from API version to version or even from call to call of the API that returns the handle) - the handle should therefore be treated as simply an opaque value meaningful only to the API.
I should add that in any modern operating system, even the so-called "real pointers" are still opaque handles into the virtual memory space of the process, which enables the O/S to manage and rearrange memory without invalidating the pointers within the process.
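As a toy illustration of that "index into a table of pointers" idea (none of these names are real Windows APIs):
#include <cstdint>
#include <vector>
struct Resource { int data; };
using Handle = std::uint32_t;
std::vector<Resource*> g_table;        // the system's private table
Handle OpenResource()
{
    g_table.push_back(new Resource{});
    return static_cast<Handle>(g_table.size() - 1);   // hand out an index, not an address
}
Resource* Resolve(Handle h)            // only the system dereferences the table
{
    return g_table.at(h);
}
void CloseResource(Handle h)
{
    delete g_table.at(h);
    g_table.at(h) = nullptr;           // the handle is no longer valid
}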
A HANDLE is a context-specific unique identifier. By context-specific, I mean that a handle obtained from one context cannot necessarily be used in any other arbitrary context that also works on HANDLEs.
For example, GetModuleHandle returns a unique identifier to a currently loaded module. The returned handle can be used in other functions that accept module handles. It cannot be given to functions that require other types of handles. For example, you couldn't give a handle returned from GetModuleHandle to HeapDestroy and expect it to do something sensible.
The HANDLE itself is just an integral type. Usually, but not necessarily, it is a pointer to some underlying type or memory location. For example, the HANDLE returned by GetModuleHandle is actually a pointer to the base virtual memory address of the module. But there is no rule stating that handles must be pointers. A handle could also just be a simple integer (which could possibly be used by some Win32 API as an index into an array).
HANDLEs are intentionally opaque representations that provide encapsulation and abstraction from internal Win32 resources. This way, the Win32 APIs could potentially change the underlying type behind a HANDLE, without it impacting user code in any way (at least that's the idea).
Consider these three different internal implementations of a Win32 API that I just made up, and assume that Widget is a struct.
Widget * GetWidget (std::string name)
{
Widget *w;
w = findWidget(name);
return w;
}
void * GetWidget (std::string name)
{
Widget *w;
w = findWidget(name);
return reinterpret_cast<void *>(w);
}
typedef void * HANDLE;
HANDLE GetWidget (std::string name)
{
Widget *w;
w = findWidget(name);
return reinterpret_cast<HANDLE>(w);
}
The first example exposes the internal details about the API: it allows the user code to know that GetWidget returns a pointer to a struct Widget. This has a couple of consequences:
the user code must have access to the header file that defines the Widget struct
the user code could potentially modify internal parts of the returned Widget struct
Both of these consequences may be undesirable.
The second example hides this internal detail from the user code, by returning just void *. The user code doesn't need access to the header that defines the Widget struct.
The third example is exactly the same as the second, but we just call the void * a HANDLE instead. Perhaps this discourages user code from trying to figure out exactly what the void * points to.
Why go through this trouble? Consider this fourth example of a newer version of this same API:
typedef void * HANDLE;
HANDLE GetWidget (std::string name)
{
NewImprovedWidget *w;
w = findImprovedWidget(name);
return reinterpret_cast<HANDLE>(w);
}
Notice that the function's interface is identical to the third example above. This means that user code can continue to use this new version of the API, without any changes, even though the "behind the scenes" implementation has changed to use the NewImprovedWidget struct instead.
The handles in these example are really just a new, presumably friendlier, name for void *, which is exactly what a HANDLE is in the Win32 API (look it up at MSDN). It provides an opaque wall between the user code and the Win32 library's internal representations that increases portability, between versions of Windows, of code that uses the Win32 API.
A HANDLE in Win32 programming is a token that represents a resource that is managed by the Windows kernel. A handle can be to a window, a file, etc.
Handles are simply a way of identifying a particular resource that you want to work with using the Win32 APIs.
So for instance, if you want to create a Window, and show it on the screen you could do the following:
// Create the window
HWND hwnd = CreateWindow(...);
if (!hwnd)
return; // hwnd not created
// Show the window.
ShowWindow(hwnd, SW_SHOW);
In the above example HWND means "a handle to a window".
If you are used to an object-oriented language, you can think of a HANDLE as an instance of a class with no methods whose state is only modifiable by other functions. In this case, the ShowWindow function modifies the state of the window HANDLE.
See Handles and Data Types for more information.
A handle is a unique identifier for an object managed by Windows. It's like a pointer, but not a pointer in the sense that it's not an address that could be dereferenced by user code to gain access to some data. Instead, a handle is to be passed to a set of functions that can perform actions on the object the handle identifies.
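For example, a plain Win32 sketch (error handling kept minimal): the handle comes from one call and is then handed to other functions that act on the file; user code never dereferences it.
#include <windows.h>
void ReadSomeBytes()
{
    HANDLE file = CreateFileW(L"example.txt", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return;                                                    // nothing to work with
    char buffer[64];
    DWORD bytesRead = 0;
    ReadFile(file, buffer, sizeof(buffer), &bytesRead, nullptr);   // acts on the handle
    CloseHandle(file);                                             // releases the resource
}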
So at the most basic level a HANDLE of any sort is a pointer to a pointer or
#define HANDLE void **
Now as to why you would want to use it
Lets take a setup:
#include <cstdio>
#include <cstdlib>
class Object {
public:
    int val;
};
class LargeObj {
public:
    char* buf;
    LargeObj()
    {
        buf = (char*)malloc(2048 * 1000);   // a large allocation, as discussed below
    }
};
void foo(Object bar)
{
    LargeObj* lo = new LargeObj();
    bar.val++;
}
int main()
{
    Object obj;
    obj.val = 1;
    foo(obj);
    printf("%d", obj.val);
}
So because obj was passed by value to foo (a copy is made and given to the function), the printf will print the original value of 1.
Now if we update foo to:
void foo(Object * bar)
{
    LargeObj* lo = new LargeObj();
    bar->val++;
}
There is a chance that the printf will print the updated value of 2. But there is also the possibility that foo will cause some form of memory corruption or exception.
The reason is this: while you are now using a pointer to pass obj to the function, you are also allocating 2 MB of memory, and this could cause the OS to move memory around, updating the location of obj. Since you have passed the pointer by value, if obj gets moved the OS updates the pointer but not the copy in the function, potentially causing problems.
A final update to foo of:
void foo(Object **bar)
{
    LargeObj lo = LargeObj();
    Object * b = *bar;   // go through the handle to reach the object's current location
    b->val++;
}
This will always print the updated value.
See, when the compiler allocates memory for pointers it marks them as immovable, so after any re-shuffling of memory caused by the large object being allocated, the value passed to the function still points to the correct address from which to find the object's final location in memory and update it.
Any particular types of HANDLEs (hWnd, FILE, etc) are domain specific and point to a certain type of structure to protect against memory corruption.
A handle is like a primary key value of a record in a database.
edit 1: Well, why the downvote? A primary key uniquely identifies a database record, and a handle in the Windows system uniquely identifies a window, an opened file, etc. That's what I'm saying.
Think of the window in Windows as being a struct that describes it. This struct is an internal part of Windows and you don't need to know the details of it. Instead, Windows provides a typedef for a pointer to that struct. That's the "handle" by which you can get hold of the window.
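Roughly like this (a simplified sketch; with STRICT defined, the real Windows headers declare handle types in a similar way via a DECLARE_HANDLE macro):
struct HWND__;                  // internal to Windows; its definition is not exposed to you
typedef struct HWND__ *HWND;    // the "handle" that user code sees and passes around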