Should I delete QSensorReading after using it? - memory-management

I am trying to use QSensor and friends in Qt 5.5 for the first time, and a question has come up: who is responsible for managing instances of QSensorReading? I have tried to work this out from the documentation without getting any wiser.
Example:
QAccelerometer *accelerometer = new QAccelerometer(this);
if (accelerometer->connectToBackend()) {
    accelerometer->start();
}

// Some time later, in the handler for the QSensorReading::readingChanged() signal:
QAccelerometerReading *myReading = accelerometer->reading();
What can I do with myReading here? Should I delete it? Will it be automatically deleted? Can I pass it safely along as a parameter? Do I risk it being updated (is it mutable)? Can I copy it somehow?

It's owned by the QSensorBackend, so it will be deleted along with it. The pointer can be passed around, but the object does not appear to be copyable. The value inside may be updated, though access is thread-safe as long as it happens in the same thread where the backend lives. The pointer itself stays the same.

Related

Calling .clone() too many times in vb, will it cause any trouble?

I am an embedded engineer and I have never worked with either Windows or Visual Basic.
For my current task I have to maintain and improve a test system running on Windows, written in C# (which I also have no experience with) in Visual Studio.
This project uses some libraries written in Visual Basic (all legacy code), and I have found a problem in one of them. I cannot copy the code directly here because of legal constraints, but it is something like this:
' getter()
Dim temp As Byte() = global_data
Array.Reverse(temp)
...
This is a getter function. Since there is a reverse inside, the return value of this function is different after each call: when temp is reversed, global_data changes as well, so I can get the real value only after an odd number of calls. The previous maintainer told me to call the function only once, or three times... I think this is stupid, and changed it by adding a .Clone() like this:
Dim temp As Byte() = global_data.Clone()
Array.Reverse(temp)
And it worked :)
There are a lot of functions like this, so I am going to make similar adjustments to them too.
But since I am not familiar with the dynamics of this system, I am afraid of running into a problem later. For example, could making this many clones consume my RAM? Can those clones be destroyed? If yes, do I have to destroy them? How?
Or are there any other possible problems?
And is there another way to do this?
Thanks in advance!
To answer your question, no there is nothing wrong with calling Clone multiple times.
The cloned byte arrays will take up memory for as long as they are referenced, but that isn't unique to cloned arrays. Presumably each cloned array is passed to other methods; once those methods have executed, the array becomes eligible for garbage collection, and the system takes care of it. If this code runs very frequently, there may be approaches more efficient than repeatedly allocating and eventually garbage-collecting those arrays, but you won't "break" anything by using Clone over and over.
One clarification: a byte array is a reference type, so each Clone() allocates a new array on the managed heap, not the stack. The garbage collector tracks those allocations and releases them automatically once nothing references them. So you do not have to manage their lifetime yourself, and calling Clone() many times will not cause trouble.

How to track/find out which userdata are GC-ed at certain time?

I've written an app in LuaJIT, using a third-party GUI framework (FFI-based) plus some additional custom FFI calls. The app suddenly loses part of its functionality at some point soon after being run, and I'm quite confident it's because some unpinned objects are being GC-ed. I assume they're only referenced from the C world [1], so the Lua GC thinks they're unreferenced and can free them. The problem is, I don't know which of the numerous userdata are unreferenced (unpinned) on the Lua side.
To confirm my theory, I've run the app with GC disabled, via:
collectgarbage 'stop'
and lo, with this line, the app works perfectly well long past the point where it got broken before. Obviously, it's an ugly workaround, and I'd much prefer to have the GC enabled, and the app still working correctly...
I want to find out which unpinned object (userdata, I assume) gets GCed, so I can pin it properly on Lua side, to prevent it being GCed prematurely. Thus, my question is:
(How) can I track which userdata objects got collected when my app loses functionality?
One problem is that, AFAIK, the LuaJIT FFI already assigns custom __gc handlers, so I cannot add my own, as there can be only one per object. And anyway, the framework is too big for me to try adding __gc in every imaginable place. Also, I've already eliminated the "most obviously suspected" places in the code by removing local from some variables, thus making them part of _G, which I assume makes them not GC-able. (Or is that not enough?)
[1] Specifically, WinAPI.
For now, I've added some ffi.gc() handlers to some of my objects (printing some easily visible ALL-CAPS messages), then added some eager collectgarbage() calls to try triggering the issue as soon as possible:
ffi.gc(foo, function()
    print '\n\nGC FOO !!!\n\n'
end)
[...]
collectgarbage()
And indeed, this exposed some GC-ing I didn't expect. Specifically, it led me to discover a note in LuaJIT's FFI docs which is most certainly relevant in my case:
Please note that [C] pointers [...] are not followed by the garbage collector. So e.g. if you assign a cdata array to a pointer, you must keep the cdata object holding the array alive [in Lua] as long as the pointer is still in use.

What happens when kernel delayed_work is rescheduled

I am using the kernel shared workqueue, and I have a delayed_work struct that I want to reschedule to run immediately.
Will the following code guarantee that the delayed_work will run as soon as possible?
cancel_delayed_work(work);
schedule_delayed_work(work, 0);
What happens in a situation where the work is already running? cancel_delayed_work will return 0, but I'm not sure what schedule_delayed_work will do if the work is currently running or is unscheduled.
Well, you know what they say about necessity being the mother of all invention (or research in this case). I really needed this answer and got it by digging through kernel/workqueue.c. Although the answer is mostly contained in the doc comments combined with Documentation/workqueue.txt, it isn't clearly spelled out without reading the whole spec on the Concurrency Managed Workqueue (cmwq) subsystem and even then, some of the information is out of date!
Short Answer
Will [your code] guarantee that the delayed_work will run as soon as possible?
Yes (with the below caveat)
What happens in a situation where the work is already running?
It will run at some point after the currently running delayed_work function exits and on the same CPU as the last one, although any other work already queued on that workqueue (or delayed work that is due) will be run first. This is presuming that you have not re-initialized your delayed_work or work_struct object and that you have not changed the work->function pointer.
Long Answer
So first off, struct delayed_work uses pseudo-inheritance to derive from struct work_struct by embedding a struct work_struct as its first member. This subsystem uses some amazing atomic bit-frigging to achieve some serious concurrency. A work_struct is "owned" when its data field has the WORK_STRUCT_PENDING bit set. When a worker executes your work, it releases ownership and records the last work pool via the private set_work_pool_and_clear_pending() function -- this is the last time the API modifies the work_struct object (until you re-schedule it, of course). Calling cancel_delayed_work() does the exact same thing.
So if you call cancel_delayed_work() when your work function has already begun executing, it returns false (as advertised) since it is no longer owned by anybody, even though it may still be running. However, when you try to re-add it with schedule_delayed_work(), it will examine the work to discover the last pool_workqueue and then find out if any of that pool_workqueue's workers are currently running your work. If they are (and you haven't changed the work->func pointer), it simply appends the work to the queue of that pool_workqueue and that's how it avoids re-entrancy! Otherwise, it will queue it on the pool for the current CPU. (The reason for the work->func pointer check is to allow for reuse of the work_struct object.)
Note however that simply calling schedule_delayed_work() without cancelling it first will result in no change if the work is still queued, so you definitely must cancel it first.
EDIT: Oh yeah, if you are confused by the discussion in Documentation/workqueue.txt about WQ_NON_REENTRANT, ignore it. This flag is deprecated and ignored, and all workqueues are now non-reentrant.
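One more aside: on kernels that provide it (v3.7 and later), mod_delayed_work() performs the cancel-and-requeue in a single call, avoiding the window between cancel_delayed_work() and schedule_delayed_work(). A kernel-side sketch (not standalone-compilable):

```c
#include <linux/workqueue.h>

/* Roughly equivalent to
 *     cancel_delayed_work(work);
 *     schedule_delayed_work(work, 0);
 * but done atomically: the work ends up queued to run immediately
 * whether it was idle, pending, or mid-flight. */
mod_delayed_work(system_wq, work, 0);
```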

Is midiOutPrepareHeader a quick call?

Do midiOutPrepareHeader and midiInPrepareHeader just set up some data fields, or do they do something more time-intensive?
I am trying to decide whether to build and destroy the MIDIHDRs as needed, or to maintain a pool of them.
You really have only two ways to tell (without the Windows source):
1) Profile it. Depending on your findings for how long it takes, have a debug-only scoped timer that logs when it suddenly takes longer than what you think is acceptable for your application, or go with your pool solution. Note, though, that the docs say not to modify the buffer once you call the prepare function, and it seems that if you wanted to re-use it you might have to modify it; I'm not familiar enough with the docs to say whether your proposed solution would work.
2) Step through the assembly and see. Don't be afraid. Get the MSFT public symbols and see if it looks like it's just filling out fields or if it's doing something complicated.

LINQ to XML updates - how does it handle multiple concurrent readers/writers?

I have an old system that uses XML for its data storage. I'm going to be using the data for another mini-project and wanted to use LINQ to XML for querying/updating the data, but there are two scenarios that I'm not sure whether I need to handle myself or not:
1- If I have something similar to the following code, what happens if two people hit Save() at the same time? Does LINQ to XML wait until the file is available again before saving, or will it just throw? I don't want to put locks in unless I need to :)
// I assume the next line doesn't lock the file
XElement doc = XElement.Load("Books.xml");
XElement newBook = new XElement("Book",
    new XAttribute("publisher", "My Publisher"),
    new XElement("author", "Me"));
doc.Add(newBook);
// What happens if two people try this at the same time?
doc.Save("Books.xml");
2- If I Load() a document, add an entry under a particular node, and then hit Save(), what happens if another user has already added a value under that node (since my Load()), or even worse, deleted the node?
Obviously I can workaround these issues, but I couldn't find any documentation that could tell me whether I have to or not, and the first one at least would be a bit of a pig to test reliably.
It's not really a LINQ to XML issue, but a basic concurrency issue.
Assuming the two people are hitting Save at the same time, and the backing store is a file, then depending on how you opened the file for saving, you might get an error. If you leave it to the XDocument class (by just passing in a file name), then chances are it is opening it exclusively, and someone else trying to do the same (assuming the same code hitting it) will get an exception. You basically have to synchronize access to any shared resource that you are reading from/writing to.
If another user has already added a value, then assuming you have no problem obtaining the resource to write to, your changes will overwrite theirs. This is the classic lost-update problem; databases handle it with optimistic concurrency, where some value indicates whether a change has occurred between the time you loaded the data and when you save it (most databases will generate timestamp or row-version values for you).
