Differences between device->drop() and device->closeDevice() in Irrlicht

I want to run two separate Irrlicht devices, basically creating a new one after the old one is closed, but when I use the two methods mentioned in the title to close the old one, the new device never appears (segfault). What is the correct way of doing that?

Just for clarification:
All closeDevice() does is tell Irrlicht to return false on the next run() call. It is safe to call it from any part of your code (from the event handler or in the middle of drawing geometry). Basically, you could keep your own flag, say needBreakRenderingLoop, ignore what run() returns, and check and set that variable yourself instead of calling closeDevice(); but the engine already does that for you.

To fully close a device cleanly, you must call closeDevice(), then run() to flush any late events, then drop() to free the memory.
So basically do the following:
device->closeDevice();
device->run();
device->drop();
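
For completeness, here is a minimal sketch of the whole sequence, assuming a standard Irrlicht setup (the driver type and window size are arbitrary); the second device is only created after the first one has been run() once more and dropped:

#include <irrlicht.h>
using namespace irr;

int main()
{
    for (int i = 0; i < 2; ++i)   // the second device is created after the first is gone
    {
        IrrlichtDevice *device =
            createDevice(video::EDT_OPENGL, core::dimension2d<u32>(640, 480));
        if (!device)
            return 1;

        video::IVideoDriver  *driver = device->getVideoDriver();
        scene::ISceneManager *smgr   = device->getSceneManager();

        while (device->run())     // returns false once closeDevice() has been called
        {
            driver->beginScene(true, true, video::SColor(255, 100, 101, 140));
            smgr->drawAll();
            driver->endScene();

            // ... when your own logic decides this device is done
            // (here: immediately, just to keep the sketch short):
            device->closeDevice();
        }

        device->run();            // flush any late events
        device->drop();           // free the memory; the pointer is now invalid
    }
    return 0;
}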

What's the correct way to CheckDeviceState in DirectX11?

I have built the DX11VideoRenderer sample (a replacement for EVR that uses DirectX11 instead of EVR's DirectX9), and it's working. Problem is, it's not working very well. It's using twice the CPU time that the EVR does for the same videos (more on this in the next question).
Since I've got the source, I decided to profile it to see what's going on. (Among other things) this led me to:
HRESULT DX11VideoRenderer::CPresenter::CheckDeviceState(BOOL* pbDeviceChanged)
I'm not much of a DirectX expert (actually, I'm not one at all), but it seems likely that window handles can become invalid as monitors get unplugged, windows get full-screened or closed, etc., so a function like this makes perfect sense to me.
However.
When I look at the code for CheckDeviceState, the first thing it does is call SetVideoMonitor, which seems odd.
SetVideoMonitor looks like the routine you call when you first initialize the presenter (or change the target window), not something you'd call repeatedly to "Check" the device state.
Indeed, SetVideoMonitor calls TerminateDisplaySystem, followed by InitializeDisplaySystem. I could see doing this once at startup, but those functions are being called once per frame. That can't be right.
I can comment out the call to SetVideoMonitor in CheckDeviceState (or actually all of CheckDeviceState), and the code continues to function correctly (it's predictably a bit faster). But then I'm not checking the device state anymore.
Trying to figure out the proper way to check for state changes in DX11 brought me here, which talks about just checking the return codes of IDXGISwapChain::Present and ResizeBuffers. Is that how this should be done? Because that makes it seem like this whole routine is some leftover from DX9 (where it would still have been poorly implemented).
What's the correct way to check the device state in DX11? Is this even a thing anymore?
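For reference, the return-code approach looks roughly like this. This is only a hedged sketch of the pattern described in that discussion, not code from DX11VideoRenderer, and PresentAndCheckDevice is an illustrative name:

#include <d3d11.h>
#include <dxgi.h>

// Present a frame and report a lost/reset device to the caller.
HRESULT PresentAndCheckDevice(IDXGISwapChain *pSwapChain, ID3D11Device *pDevice)
{
    HRESULT hr = pSwapChain->Present(1, 0);

    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        // Optional diagnostics: why the device went away (TDR, driver update, ...).
        HRESULT reason = pDevice->GetDeviceRemovedReason();
        (void)reason;

        // Recreate the device, swap chain and all device-dependent resources here.
    }
    return hr;
}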

Should I delete QSensorReading after using it?

I am trying to use QSensor and friends in Qt 5.5 for the first time, and a question has come up: who is responsible for managing instances of QSensorReading? I have tried to understand this by reading the documentation, without getting any wiser.
Example:
QAccelerometer *accelerometer=new QAccelerometer(this);
if(accelerometer->connectToBackend()){
accelerometer->start();
}
//Some time later, in the handler for the QSensorReading::readingChanged() signal:
QAccelerometerReading *myReading=accelerometer->reading();
What can I do with myReading here? Should I delete it? Will it be automatically deleted? Can I pass it safely along as a parameter? Do I risk it being updated (mutable)? Can I copy it somehow?
It's owned by the QSensorBackend, so it'll be deleted along with it. The pointer can be passed around, but the object doesn't look copyable. The values inside may be updated (which is safe as long as you use it in the same thread the backend lives in). The pointer itself stays the same.
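To illustrate, here is a minimal sketch of a readingChanged() handler; MyClass and m_accelerometer are just illustrative stand-ins for the class and member from the question:

#include <QAccelerometer>
#include <QAccelerometerReading>

void MyClass::onReadingChanged()
{
    // Owned by the backend: never delete it, and don't expect it to stay unchanged.
    QAccelerometerReading *reading = m_accelerometer->reading();
    if (!reading)
        return;

    // Copy out the plain values; these stay valid after the backend updates the reading.
    const qreal x = reading->x();
    const qreal y = reading->y();
    const qreal z = reading->z();
    // ... pass x, y, z along instead of the pointer ...
}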

Editing waveform audio input before it reaches an application

I am working on a voice changer that is supposed to manipulate the input buffers of a waveform-audio input device before they are returned to the application.
The waveInOpen() function offers four options for being notified when a buffer provided by waveInAddBuffer() has been filled.
The options are CALLBACK_EVENT, CALLBACK_FUNCTION, CALLBACK_THREAD, and CALLBACK_WINDOW.
I have tried several things to get my waveform manipulation to work, but haven't found a reliable and clean solution yet.
What has worked so far is intercepting waveInAddBuffer() calls with Detours. I save every WAVEHDR pointer passed to waveInAddBuffer(), and each time the function is called I delay the program for a few milliseconds and search for waveform buffers that have been filled during the delay.
This isn't reliable, though, because the buffer size differs for each application, so there is no delay time that works for every application.
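For reference, the interception described above looks roughly like this (a hedged sketch; g_queuedHeaders and the hook names are illustrative, and locking around the vector is omitted for brevity):

#include <windows.h>
#include <mmsystem.h>
#include <detours.h>
#include <vector>

// Headers queued by the application; scanned later for buffers the driver has filled.
static std::vector<LPWAVEHDR> g_queuedHeaders;

// Pointer to the real function; Detours reroutes it through a trampoline.
static MMRESULT (WINAPI *Real_waveInAddBuffer)(HWAVEIN, LPWAVEHDR, UINT) = waveInAddBuffer;

static MMRESULT WINAPI Hooked_waveInAddBuffer(HWAVEIN hwi, LPWAVEHDR pwh, UINT cbwh)
{
    g_queuedHeaders.push_back(pwh);               // remember the header before it is filled
    return Real_waveInAddBuffer(hwi, pwh, cbwh);  // hand the buffer to the real API
}

BOOL InstallWaveInHook()
{
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID&)Real_waveInAddBuffer, Hooked_waveInAddBuffer);
    return DetourTransactionCommit() == NO_ERROR;
}
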
I would be really thankful for new ideas!
edit:
Here's the other stuff I have tried:
Most applications set multiple flags when calling waveInOpen() that actually exclude each other, so you can never be sure which callback method is actually used (e.g. the flags CALLBACK_EVENT | CALLBACK_FUNCTION | CALLBACK_WINDOW are all set).
When the CALLBACK_WINDOW flag is set, I have used the SetWindowLongPtr() function to subclass the target window so that it receives MM_WIM_DATA messages before the application's own window procedure. Unfortunately this didn't work; my subclass procedure never gets called.
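For what it's worth, that subclassing attempt looked roughly like this (a sketch assuming hwnd is the window the application passed to waveInOpen(); SubclassProc is an illustrative name):

#include <windows.h>
#include <mmsystem.h>

static WNDPROC g_originalProc = nullptr;

static LRESULT CALLBACK SubclassProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == MM_WIM_DATA)
    {
        // lParam points at the WAVEHDR whose buffer was just filled.
        WAVEHDR *hdr = reinterpret_cast<WAVEHDR *>(lParam);
        // ... manipulate hdr->lpData here, before the application's window procedure runs ...
    }
    return CallWindowProc(g_originalProc, hwnd, msg, wParam, lParam);
}

void SubclassTargetWindow(HWND hwnd)
{
    g_originalProc = reinterpret_cast<WNDPROC>(
        SetWindowLongPtr(hwnd, GWLP_WNDPROC, reinterpret_cast<LONG_PTR>(SubclassProc)));
}
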
I have also created a custom callback function that I swap in for the application's callback when the CALLBACK_FUNCTION flag is set.
This didn't work either, because my function never gets called. I guess this is because my function is defined in a DLL, outside the address space of the application.
There were several other things I tried that never could have worked, because I didn't know enough about injection and hooks at the time. I have learned quite a lot since, and I can't really summarize everything I tried, because it wouldn't help the cause.

Data sharing in GUI, Matlab

I'm developing a GUI with pop-up windows, so it is actually a work package with multiple GUIs.
I have read through the examples given in the help files (changeme and toolpalette), but I could not get the method for transferring data from the new GUI back to the old one to work.
Here is my problem.
I have two GUIs: A, the main one, and B, which I use to collect input data, and I want to transfer that data back to A.
Question 1:
I want to define new fields in the handles structure of A, let's say
handles.newclass
How can I define its properties, e.g. 'Strings'?
Question 2:
In A, a button has the callback
B('A', handles.A);
so we activate B.fig.
After the work in B is finished, it has collected the following data (strings and doubles) in B(!):
title_1, title_2, ..., title_n
and
num_1, num_2, ..., num_n
I want to pass the data back to A.
Following the instructions, I wrote the code shown below.
mainHandles = guidata(A);
title = mainHandles.title_1;
set(title,'String',title_1);
However, when I go back to A, the handles structure in A has not changed at all.
Please, someone help me out here.
Thank you!
=============update================
The solution I found is to add extra variables (say, handles.GUIdata) to the handles structure of one GUI, and whenever the data are required, just read them from the corresponding GUI.
It works well for me, since I have a main control panel and several sub-GUIs.
There is a short discussion of this issue here.
I have had similar issues where I wanted external batch scripts to actually control my GUI applications, but there is no reason two GUIs would not be able to do the same.
I created a Singleton object, and when the GUI application starts up it gets the reference to the Singleton controller and sets the appropriate GUI handles into the object for later use. Once the Singleton has the handles, it can use set and get functions to provide or exchange data with any GUI control it has the handle for. Any function or callback in the system can get the handle to the Singleton and then invoke routines on it that allow data to be exchanged or even control operations to be run. Your GUI A can, for instance, ask the controller for the value in GUI B's field X, or even modify that value directly if desired. It's very flexible.
In your case, be sure to invalidate any handles if GUI A or B goes away, and test whether a GUI component actually exists before getting or modifying any values. The Singleton object will even survive across multiple invocations of your app as long as MATLAB itself is left running, so be sure to clean up on exit if you don't want stale information lying around.
http://www.mathworks.com/matlabcentral/fileexchange/24911-design-pattern-singleton-creational
Regarding Question 2, it looks like you forgot to first specify that Figure A should be active when setting the title. Fix that and everything else looks good (at least, the small snippets you've posted).

How can I implement a blocking process in a single slot without freezing the GUI?

Let's say I have an event and the corresponding function is called. This function interacts with the outside world, so it can sometimes have long delays. If the function waits or hangs, my UI freezes, and this is not desirable. On the other hand, breaking my function up into many parts and re-emitting signals is tedious: it fragments the code a lot, which makes it hard to debug and less readable, and slows down the development process. Is there a special feature in event-driven programming that would let me write the process in one function call and still let the main thread do its job while it waits? For example, the compiler could recognize a keyword, implement a return at that point, and re-emit signals connected to new slots automatically. Why do I think this would be a great idea ;) I'm working with Qt.
Your two options are threading, or breaking your function up somehow.
With threading, it sounds like your ideal solution would be QtConcurrent. If all of your processing is already in one function, and the function is pretty self-contained (doesn't reference member variables of the class), this would be easy to do. If not, things might get a little more complicated.
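A minimal sketch of that approach, assuming Qt 5; MyWidget, onButtonClicked() and doBlockingWork() are illustrative names, not from the question:

#include <QtConcurrent/QtConcurrent>
#include <QFutureWatcher>
#include <QThread>
#include <QDebug>

// Stand-in for the long, blocking interaction with the outside world.
static int doBlockingWork()
{
    QThread::sleep(3);
    return 42;
}

void MyWidget::onButtonClicked()
{
    auto *watcher = new QFutureWatcher<int>(this);
    connect(watcher, &QFutureWatcher<int>::finished, this, [watcher]() {
        qDebug() << "result:" << watcher->result();   // delivered back on the GUI thread
        watcher->deleteLater();
    });

    // Runs doBlockingWork() on a thread from the global pool; the GUI stays responsive.
    watcher->setFuture(QtConcurrent::run(doBlockingWork));
}
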
For breaking your function up, you can either do it as you suggested and split it into different functions, with the different parts being called one after another, or you can do it in a more figurative way, by scattering calls that allow other processing inside your function. I believe calling processEvents() would do what you want, but I haven't come across its use in a long time. Of course, you can run into other problems with that unless you understand that it might cause other parts of your class to run once more (in response to other events), so you have to treat it almost as multi-threaded and protect variables that have an indeterminate state while you are computing.
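And a sketch of the processEvents() variant; waitForChunk() is just an illustrative stand-in for one step of the blocking work:

#include <QCoreApplication>
#include <QThread>

// Illustrative stand-in for one piece of the blocking work.
static void waitForChunk(int /*step*/)
{
    QThread::msleep(50);
}

void MyWidget::runLongTaskInSlot()
{
    for (int step = 0; step < 100; ++step)
    {
        waitForChunk(step);

        // Let queued events (repaints, clicks, ...) run. As noted above, other slots
        // of this class may re-enter here, so guard any state that is half-updated.
        QCoreApplication::processEvents();
    }
}
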
"Is there a special feature in event driven programming which would enable me to just write the process in one function call and be able to let the mainThread do its job when its waiting?"
That would be a non-blocking process.
But your original query was, "How can I implement a blocking process in a single slot without freezing the GUI?"
Perhaps what you're looking for is a way to stop other processing when some (any) process decides it's time to block? There are typically ways to do this, yes, by calling a method on one of the parent objects, which, of course, will depend on the specific objects you are using (e.g. a frame).
Look at the parent objects and see what methods they have that you'd like to use. You may need to override one of them to get exactly the results you want.
If you want to handle a GUI event by beginning a long-running task, and don't want the GUI to wait for the task to finish, you need to do it concurrently, by creating either a thread or a new process to perform the task.
You may be able to avoid creating a thread or process if the task is I/O-bound and occasional callbacks to handle I/O would suffice. I'm not familiar with Qt's main loop, but I know that GTK's supports adding event sources that can integrate into a select()- or poll()-style loop, running handlers either after a timeout or when a file descriptor becomes ready. If that's the sort of task you have, you could make your event handler add such an event source to the application's main loop.
