How to track/find out which userdata are GC-ed at certain time? - luajit

I've written an app in LuaJIT, using a third-party GUI framework (FFI-based) plus some additional custom FFI calls. The app suddenly loses part of its functionality at some point soon after being run, and I'm quite confident it's because some unpinned objects are being GC-ed. I assume they're only referenced from the C world [1], so the Lua GC thinks they're unreferenced and frees them. The problem is that I don't know which of the numerous userdata objects are unreferenced (unpinned) on the Lua side.
To confirm my theory, I've run the app with GC disabled, via:
collectgarbage 'stop'
and lo, with this line, the app works perfectly well long past the point where it got broken before. Obviously, it's an ugly workaround, and I'd much prefer to have the GC enabled, and the app still working correctly...
I want to find out which unpinned object (userdata, I assume) gets GC-ed, so I can pin it properly on the Lua side to prevent it from being collected prematurely. Thus, my question is:
(How) can I track which userdata objects got collected when my app loses functionality?
One problem is that, AFAIK, the LuaJIT FFI already assigns custom __gc handlers, so I cannot add my own, as there can be only one per object. And anyway, the framework is too big for me to try adding __gc in each and every imaginable place in it. Also, I've already eliminated the "most obviously suspected" places in the code by removing local from some variables, thus making them part of _G, which I assume makes them non-collectable. (Or is that not enough?)
[1] Specifically, WinAPI.

For now, I've added some ffi.gc() handlers to some of my objects (printing some easily visible ALL-CAPS messages), then added some eager collectgarbage() calls to try triggering the issue as soon as possible:
ffi.gc(foo, function()  -- finalizer runs when foo is collected
  print '\n\nGC FOO !!!\n\n'
end)
[...]
collectgarbage()
And indeed, this exposed some GC-ing I didn't expect. Specifically, it led me to discover a note in LuaJIT's FFI docs, which is most certainly relevant in my case:
Please note that [C] pointers [...] are not followed by the garbage collector. So e.g. if you assign a cdata array to a pointer, you must keep the cdata object holding the array alive [in Lua] as long as the pointer is still in use.
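In C++ terms (a hedged analogy of my own, nothing framework-specific; Buffer and the variable names below are just illustrative), the rule reads like this: a raw pointer taken from an owning object is only valid while the owner is alive.

#include <memory>

struct Buffer { int data[16]; };

int main() {
    int* raw = nullptr;
    {
        std::unique_ptr<Buffer> owner = std::make_unique<Buffer>(); // ~ the cdata array
        raw = owner->data;   // ~ the C pointer handed to the C side
        raw[0] = 42;         // fine: the owner is still alive
    }                        // owner destroyed here, like a collected cdata
    // raw now dangles; using it is undefined behavior, the C++ analogue of
    // the C side touching a pointer into GC-ed cdata.
}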

Related

What's the correct way to CheckDeviceState in DirectX11?

I have built the DX11VideoRenderer sample (a replacement for the EVR that uses DirectX11 instead of the EVR's DirectX9), and it's working. The problem is, it's not working very well: it's using twice the CPU time that the EVR does for the same videos (more on this in the next question).
Since I've got the source, I decided to profile it to see what's going on. (Among other things) this led me to:
HRESULT DX11VideoRenderer::CPresenter::CheckDeviceState(BOOL* pbDeviceChanged)
I'm not much of a DirectX expert (actually, I'm not one at all), but it seems likely that window handles can become invalid as monitors get unplugged, windows get full-screened or closed, etc., so a function like this makes perfect sense to me.
However.
When I look at the code for CheckDeviceState, the first thing it does is call SetVideoMonitor, which seems odd.
SetVideoMonitor looks like the routine you call when you first initialize the presenter (or change the target window), not something you'd call repeatedly to "Check" the device state.
Indeed, SetVideoMonitor calls TerminateDisplaySystem, followed by InitializeDisplaySystem. I could see doing this once at startup, but those functions are being called once per frame. That can't be right.
I can comment out the call to SetVideoMonitor in CheckDeviceState (or actually all of CheckDeviceState), and the code continues to function correctly (it's predictably a bit faster). But then I'm not checking the device state anymore.
Trying to figure out the proper way to check for state changes in DX11 brought me here, which talks about just checking the return codes of IDXGISwapChain::Present and ResizeBuffers. Is that how this should be done? Because that makes it seem like this whole routine is some leftover from DX9 (where it still would have been poorly implemented).
What's the correct way to check the device state in DX11? Is this even a thing anymore?
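For concreteness, the Present-based checking that page describes would, as far as I can tell, look roughly like this (a sketch only: PresentAndCheckDevice is my own hypothetical name, and the recovery path is elided):

#include <d3d11.h>

// Hypothetical per-frame check: Present reports device loss via its HRESULT.
bool PresentAndCheckDevice(IDXGISwapChain* swapChain, ID3D11Device* device)
{
    HRESULT hr = swapChain->Present(1, 0);
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        // Ask the device why it died (driver crash, hang, removal, ...),
        // then tear down and recreate the device and swap chain.
        HRESULT reason = device->GetDeviceRemovedReason();
        (void)reason;  // log it; recovery is elided here
        return false;  // caller recreates device + swap chain
    }
    // Other codes such as DXGI_STATUS_OCCLUDED are non-fatal status values.
    return true;
}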

where is fyne's thread safety defined?

I was attracted to Fyne (and hence Go) by a promise of thread safety. But now that I'm getting better at reading Go, I'm seeing things that make me believe that the API as a whole is not thread safe, and perhaps was never intended to be. So I'm trying to determine what "thread safe" means in Fyne.
I'm looking specifically at
func (l *Label) SetText(text string) {
    l.Text = text
    l.textProvider.SetText(text) // calls refresh
}
and noting that l.Text is a plain string field. Assignments in Go are not atomic, so it seems obvious to me that if two threads fight over the text of a label and both call label.SetText at the same time, I can expect memory corruption.
"But you wouldn't do that", one might say. No, but I am worried about the case of someone editing the content of an Entry while an app thread decides it needs to replace all the Entry's text - this is entirely possible in my app because it supports simultaneous editing by multiple users over a network, so updates to all sorts of widgets come in asynchronously. (Note I don't care what happens if two people edit the same Entry at the same time; someone's changes will be lost and I don't care who's. But it must not result in memory corruption.) Note that one approach I could take would be to have the background thread create an entirely new Entry widget, which would then replace the one in the current Box. But is that thread safe?
It's not that I don't know how to serialize things with channels. But I was hoping that Fyne would eliminate the need for it (a blog post claims it does); and even using channels, I can't convince myself that a user meddling with a widget in various ways while some background thread is altering it, hiding it, etc., isn't going to result in crashes. Maybe all of that is serialized under the covers and is perfectly safe, but I don't want to find out the hard way that it isn't, because I'll have no way to fix it.
Fyne is clearly pretty new and seems to have tons of promise, but documentation seems light on details. Is more information available somewhere? Have people tried this successfully?
You have found some race conditions here. There are plans to improve this, but the 1.2 release was required to get a new "BaseWidget" in place first, and that was only released a few weeks ago.
Setting fields directly is primarily for setup purposes and so is not expected to be used in the way you illustrate. That said, we do want to support it. The base widget will soon introduce something akin to SetFieldsAndRefresh(func()), which will ensure the safety of the code passed and refresh the widget afterward.
There is indeed a race currently within Refresh(). The internal use of channels was designed to remove this, but there are some corner cases, such as multiple goroutines calling it. This is the area that our new BaseWidget code can help with, as widgets can then lock internally and automatically. Using this approach will be thread safe, with no changes required of the developer, in a future release.
The API so far has made it possible for developers to not worry about threading and to work from any goroutine. We do need to work internally to make it safer; you are quite right. https://github.com/fyne-io/fyne/issues/506

Can I obtain a task in KEXT?

Just wondering if it is possible to obtain a task for a given proc_t inside a kext.
I have tried task_for_pid(), which didn't work for some reason that I don't remember.
I tried proc_task(proc_t p) from sys/proc.h but I can't load my kext since that function is not exported.
I guess that I'm doing something wrong, but I can't quite figure out what. Assuming I can get the task for a process, I'd like to use some Mach calls to allocate memory, write memory and whatnot, but for that I would need the task, I believe.
After some research, it would appear that it is not possible.
There is proc_task() defined in proc.h, but it's under #ifdef KERNEL_PRIVATE. The KEXT will compile, albeit with a warning.
In order to use that function, you have to add com.apple.kpi.private to the list of dependencies, but even that will fail, since you are most likely NOT Apple :)
Only Apple kexts may link against com.apple.kpi.private.
Anyway, the experiment was interesting in the sense that other APIs such as vm_read, vm_write, etc. are not available inside a KEXT (which probably makes sense, since they are declared in vm_user.h and I suppose are reserved for user mode).
I'm not aware of a public KPI for a direct proc_t -> task_t lookup, unfortunately.
However, in some cases you might be able to get away with using current_task() and holding on to that pointer for as long as you need it. Use task_reference() and task_deallocate() for reference counting (but don't hold references forever, obviously, or the task will never be freed). You can also access the kernel's task (corresponding to process 0) at any time via the global variable kernel_task.
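A minimal sketch of that approach (my assumptions: building against Kernel.framework with the usual com.apple.kpi.mach dependency; exact header paths can vary between SDKs):

#include <kern/task.h>
#include <mach/mach_types.h>

static task_t g_task = TASK_NULL;

// Call while the process of interest is executing in the kernel
// (e.g. from a syscall or policy hook it triggered).
static void remember_task(void) {
    g_task = current_task();
    task_reference(g_task);       // keep the task from being freed under us
}

static void forget_task(void) {
    if (g_task != TASK_NULL) {
        task_deallocate(g_task);  // drop our reference
        g_task = TASK_NULL;
    }
}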

What is the design rationale behind HandleScope?

V8 requires a HandleScope to be declared in order to clean up any Local handles that were created within scope. I understand that HandleScope will dereference these handles for garbage collection, but I'm interested in why each Local class doesn't do the dereferencing themselves like most internal ref_ptr type helpers.
My thought is that HandleScope can do it more efficiently by dumping a large number of handles all at once, rather than one by one as they would be in a ref_ptr-type scoped class.
Here is how I understand the documentation and the handles-inl.h source code. I, too, might be completely wrong since I'm not a V8 developer and documentation is scarce.
The garbage collector will, at times, move stuff from one memory location to another and, during one such sweep, also check which objects are still reachable and which are not. In contrast to reference-counting types like std::shared_ptr, this is able to detect and collect cyclic data structures. For all of this to work, V8 has to have a good idea about what objects are reachable.
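As a quick illustration of the cycle problem mentioned above (a generic C++ sketch, nothing V8-specific):

#include <memory>

struct Node { std::shared_ptr<Node> next; };

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;  // cycle: each node keeps the other's refcount at >= 1
}   // both nodes leak here; a tracing GC like V8's would reclaim them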
On the other hand, objects are created and deleted quite a lot in the course of some computation. You don't want too much overhead for each such operation. The way to achieve this is by creating a stack of handles. Each object listed in that stack is reachable from some handle in some C++ computation. In addition to this, there are persistent handles, which presumably take more work to set up and which can survive beyond C++ computations.
Having a stack of references requires that you use it in a stack-like way. There is no "invalid" mark in that stack: all the objects from bottom to top of the stack are valid object references. The way to ensure this is the HandleScope. It keeps things hierarchical. With reference-counted pointers, you can do something like this:
#include <memory>
using std::shared_ptr;

struct Object { Object(int) {} };

shared_ptr<Object>* f() {
    shared_ptr<Object> a(new Object(1));   // destroyed when f() returns
    shared_ptr<Object>* b = new shared_ptr<Object>(new Object(2));
    return b;                              // object 2 outlives f()
}

void g() {
    shared_ptr<Object>* pb = f();
    shared_ptr<Object> c = *pb;
    delete pb;  // drop the heap wrapper; object 2 dies when c goes out of scope
}
Here object 1 is created first, then object 2; when the function returns, object 1 is destroyed, and only at the end of g() is object 2 destroyed. The key point is that there is a span of time during which object 1 is invalid but object 2 is still valid, so the two lifetimes do not nest. That's exactly what HandleScope aims to avoid.
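The HandleScope counterpart, sketched against the Isolate-based API (details vary across V8 versions): both handles belong to the same scope, so their lifetimes end together and can never interleave the way the shared_ptr example allows.

#include <v8.h>

void f(v8::Isolate* isolate) {
    v8::HandleScope scope(isolate);
    v8::Local<v8::Number> a = v8::Number::New(isolate, 1);
    v8::Local<v8::Number> b = v8::Number::New(isolate, 2);
    // a and b become collectible together when `scope` is destroyed.
}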
Some other GC implementations examine the C stack and look for pointers they find there. This has a good chance of false positives, since stuff which is in fact data could be misinterpreted as a pointer. For reachability this might seem rather harmless, but when rewriting pointers because you're moving objects, this can be fatal. It has a number of other drawbacks too, and relies a lot on how the low-level implementation of the language actually works. V8 avoids all that by keeping the handle stack separate from the function call stack, while at the same time keeping the two sufficiently in sync to guarantee the hierarchy requirement mentioned above.
To offer yet another comparison: an object referenced by just one shared_ptr becomes collectible (and actually will be collected) once its C++ block scope ends. An object referenced by a v8::Handle becomes collectible when leaving the nearest enclosing scope which contains a HandleScope object. So programmers have more control over the granularity of stack operations. In a tight loop where performance is important, it might be useful to maintain just a single HandleScope for the whole computation, so that you don't have to touch the handle-stack data structure so often. On the other hand, doing so keeps all the objects around for the whole duration of the computation, which would be very bad indeed if the loop iterates over many values, since all of them would be kept alive until the end. But the programmer has full control, and can arrange things in the most appropriate way.
Personally, I'd make sure to construct a HandleScope
At the beginning of every function which might be called from outside your code. This ensures that your code will clean up after itself.
In the body of every loop which might see more than three or so iterations, so that you only keep handles from the current iteration (see the sketch after this list).
Around every block of code which is followed by some callback invocation, since this ensures that your stuff can get cleaned up if the callback requires more memory.
Whenever I feel that something might produce considerable amounts of intermediate data which should get cleaned up (or at least become collectible) as soon as possible.
In general I'd not create a HandleScope for every internal function if I can be sure that every other function calling this will already have set up a HandleScope. But that's probably a matter of taste.
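For instance, the per-loop advice above would look roughly like this (sketched against a newer V8 API; NewFromUtf8Literal is the modern spelling, older versions use NewFromUtf8):

#include <v8.h>

void ProcessAll(v8::Isolate* isolate, int n) {
    for (int i = 0; i < n; ++i) {
        v8::HandleScope scope(isolate);  // one scope per iteration
        v8::Local<v8::String> tmp =
            v8::String::NewFromUtf8Literal(isolate, "intermediate");
        // ... work with tmp ...
    }   // handles from this iteration become collectible here
}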
Disclaimer: this may not be an official answer, more of a conjecture on my part; but the V8 documentation is hardly useful on this topic, so I may be proven wrong.
From my experience developing various V8-based backend applications, it's a means of handling the difference between the C++ and JavaScript environments.
Imagine the following sequence, in which a self-dereferencing pointer could break the system:
JavaScript calls a C++-wrapped V8 function: let's say helloWorld()
The C++ function creates a v8::Handle with the value "hello world =x"
C++ returns the value to the V8 virtual machine
The C++ function does its usual cleanup of resources, including dereferencing of handles
Another C++ function / process overwrites the freed memory space
V8 reads the handle: and the data is no longer the same, "hell!#(#..."
And that's just the surface of the complicated inconsistencies between the two. Hence, to tackle the various issues of connecting the JavaScript VM (virtual machine) to the C++ interfacing code, I believe the development team decided to simplify the issue via the following...
All variable handles are to be stored in "buckets", aka HandleScopes, to be built / compiled / run / destroyed by their respective C++ code when needed.
Additionally, all function handles are to refer only to C++ static functions (I know this is irritating), which ensures the "existence" of the function call regardless of constructors / destructors.
Think of it from a development point of view: it marks a very strong distinction between the JavaScript VM development team and the C++ integration team (the Chrome dev team?), allowing both sides to work without interfering with one another.
Lastly, it could also be for the sake of simplicity when emulating multiple VMs: V8 was originally meant for Google Chrome, and a simple HandleScope creation and destruction whenever we open / close a tab makes for much easier GC management, especially in cases where you have many VMs running (one per tab in Chrome).

What is the order of destruction of objects in VBScript?

In what order are objects in a .vbs destroyed?
That is, given these globals:
Set x = New Xxx
Set y = New Yyy
I'm interested in answers to any of the following.
For instances of classes implemented in the .VBS, in what order will Class_Terminate be called? Cursory poking suggests in the order (not reverse order!) of creation, but is this guaranteed?
EDIT: I understand that Class_Terminate will be called when the last reference to an object is released. What I meant was: in what order will x and y be released, and is that guaranteed? Assume for simplicity that x and y are the only references to their respective objects.
Does the type of object matter? e.g. if I have classes implemented in the .VBS mixed in with other COM objects such as Scripting.FileSystemObject.
EDIT: I understand that a COM library may set up its own internal circular references that the script host engine knows nothing about; I'm interested in exploring what could affect the answer to the first question.
Are the answers to the above different if x and y were local to a Sub or Function rather than global?
Does it depend on whether the exit is normal, by exception, or via WScript.Quit? (In the latter case, it seems that Class_Terminate is still called on any outstanding objects before exiting; however, these may cause an error to be reported.)
When is the WScript object destroyed?
Does the script host matter? (wscript.exe vs cscript.exe vs. whatever the web host engine is called)
Does JScript's object destruction model differ from VBScript's?
I can find the answers to some of these questions empirically, but I'm interested in whether any of them are guaranteed / documented.
Do post even if you only know some of the answers - or further relevant issues.
I designed and implemented this feature in VBScript.
Most of the answers are in my articles that Mark references, but just to clarify:
in what order will Class_Terminate be called?
Terminators are in general called immediately when the last reference to an object is released. However, due to circular references and other issues, it is generally a very bad idea to rely upon a deterministic order of termination.
Cursory poking suggests in the order (not reverse order!) of creation, but is this guaranteed?
As I noted in my articles, unterminated objects are terminated when the engine is shut down. As an implementation detail, the termination queue is executed in the order that the objects were created in. However, this is an undocumented implementation detail that you should not rely upon.
Does the type of object matter? e.g. if I have classes implemented in the .VBS mixed in with other COM objects such as Scripting.FileSystemObject.
It can. There could be circular references amongst those objects that are torn down at unpredictable times.
I'm thinking of objects at global scope, when the program quits - is it different for objects at e.g. function scope?
I don't understand the question. Can you clarify?
Does it depend on whether the exit is normal, by exception, or via WScript.Quit? (In the latter case, it seems that Class_Terminate is still called on any outstanding objects before exiting, however these may cause an error to be reported).
It can matter, yes. VBScript does not make any guarantee that terminators always run. The host that owns the engine can shut down its process by "failing fast" in a manner that is not guaranteed to cleanly shut down the engine, for example. (In the event of a catastrophic failure, this is sometimes desirable; if you don't know what is wrong then sometimes running termination code makes the problem worse, not better.)
Windows Script Host does attempt to shut down the engine cleanly when Quit is called.
When is the WScript object destroyed?
When the Windows Script Host process termination logic runs.
Does the script host matter? (wscript.exe vs cscript.exe vs. whatever the web host engine is called)
Yes, it can matter.
Does JScript's object destruction model differ from VBScript's?
Yes, very much so.
JScript "Classic" from the period when I worked on it (pre 2001) uses a nondeterministic mark-and-sweep garbage collector which does handle circular references amongst script objects, but does NOT handle circular references between script and browser objects. More recent versions of JScript "Classic" have a modified garbage collector that DOES handle circular references between script and browser objects (though it does not necessarily detect circularities involving JScript objects and third party ActiveX objects.)
The IE 9 version of JScript has a completely rewritten garbage collector that uses very different technology; I have chatted a bit with its designer but I do not have enough technical knowledge to discuss its characteristics in any kind of depth.
JScript .NET of course uses the CLR garbage collector.
Can I ask why you care about all this stuff?
Also, note that I haven't looked at this code in over a decade; take all of this with the appropriate level of skepticism. My memory may be faulty.
