Should I call SDL_DestroyWindow if window creation fails? I have the following code:
if(this->Window == NULL)
{
    std::cout << "Error: Can't create the SDL Window \n" << SDL_GetError() << "\n";
    SDL_DestroyWindow(this->Window);
    std::exit(EXIT_FAILURE);
}
Is it wrong?
From the SDL wiki:
If window is NULL, this function will return immediately after setting the SDL error message to "Invalid window"
You don't have to call SDL_DestroyWindow if you don't have a window in the first place: it will not do anything (other than setting an error message).
You can think of it like free in C or delete in C++. If you give them NULL or nullptr (respectively), they do nothing.
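For illustration, a minimal sketch of the simplified error path (the SDL_CreateWindow arguments here are placeholders, not taken from the question):

SDL_Window *window = SDL_CreateWindow("My Window",
                                      SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                      640, 480, 0);
if(window == NULL)
{
    // no SDL_DestroyWindow needed here: there is no window to destroy,
    // and passing NULL would only set an SDL error message and do nothing else
    std::cout << "Error: Can't create the SDL Window \n" << SDL_GetError() << "\n";
    std::exit(EXIT_FAILURE);
}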
I am trying to implement a game loop in FLTK
void SnakeFLTK::init() {
    _display = new Fl_Window(900, 600);
    if (!_display)
        throw SnakeFLTKException("Couldn't make fltk window!");
    _display->color(FL_BLACK);
    _display->show();
    while (!_doExit) {
        std::cout << "-->" << std::endl;
    }
    Fl::run();
}
The problem I have is that the window is not showing. I want to keep drawing and redrawing on the window inside the while (!_doExit) loop, and it's important that I use _doExit. I have tried using
while (Fl::wait() > 0)
but this method seems to have its own loop that waits for events.
How do I implement a loop like this and still show the window?
FLTK does nothing until Fl::run is called, and you cannot do anything after calling Fl::run either, because that function only returns once the main window is closed.
To do something while FLTK itself is "running", you can register an idle callback like this:
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <iostream>

void CallbackFunc( void* )
{
    std::cout << "Hallo" << std::endl;
}

int main() {
    auto _display = new Fl_Window(900, 600);
    _display->color(FL_BLACK);
    _display->show();
    Fl::add_idle( CallbackFunc );
    Fl::run();
}
In the given callback function you can do the drawing or anything else you would like to achieve in FLTK that is not driven by events coming from the widgets themselves.
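A hedged sketch of how that could be wired into the SnakeFLTK class from the question (SnakeFLTK, _display and _doExit come from the question; the static member idleCallback and the per-frame work are illustrative assumptions):

// assumed declaration inside the class: static void idleCallback(void *userData);
void SnakeFLTK::idleCallback(void *userData)
{
    auto *self = static_cast<SnakeFLTK *>(userData);
    if (self->_doExit) {
        Fl::remove_idle(&SnakeFLTK::idleCallback, userData); // stop getting called
        self->_display->hide();                              // hiding the last window makes Fl::run() return
        return;
    }
    // ... per-frame drawing / game logic goes here ...
    self->_display->redraw();
}

void SnakeFLTK::init()
{
    _display = new Fl_Window(900, 600);
    _display->color(FL_BLACK);
    _display->show();
    Fl::add_idle(&SnakeFLTK::idleCallback, this); // called repeatedly while FLTK runs
    Fl::run();                                    // the event loop replaces the while (!_doExit) loop
}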
Used Versions: OMNeT++ 5.0 with iNET 3.4.0
I created some code which gives me reliable results in 'step-by-step' or 'animated' simulation mode. The moment I change to 'fast' or 'express' mode, it gets buggy. The following simplified example will explain my problem:
void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    if (msg == CheckAck) {
        std::cout << "CheckAck: " << msg << std::endl;
    }
    if (msg == transmissionAnnouncement) {
        std::cout << "transmissionAnncouncement: " << msg << std::endl;
    }
    if (msg == transmissionEvent) {
        std::cout << "transmissionEvent: " << msg << std::endl;
    }
    delete msg;
}
There is a function which is called for handling self-messages. Depending on which self-message I got, I need to run different if checks.
I get this correct output in step-by-step or animated mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionEvent: (omnetpp::cMessage)transmissionEvent
And this is the strange output I get using fast or express mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionAnncouncement: (omnetpp::cMessage)transmissionEvent
transmissionEvent: (omnetpp::cMessage)transmissionEvent
The third output line shows that the self-message is 'transmissionEvent', but the 'if (msg == transmissionAnnouncement)' check is mistakenly evaluated as true as well.
As shown above I get different simulation results, depending on the simulation mode I am using. What is the reason for the different output? Why is there even a difference?
As Christoph and Rudi mentioned, there was something wrong with the memory allocation: when a message is deleted and a new one is allocated at the same address, the pointer comparisons match the wrong message. The different results in the different running modes are just a symptom of errors of that kind.
In my case it was useful to check for message-kinds like:
if (msg->getKind() == checkAckAckType) {
instead of the comparison used in the original question. I defined the message kinds using simple enums.
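For illustration, a minimal sketch of that approach (the enum values, message names and the 0.1 s delay are illustrative, not taken from the original code):

// define the kinds once, e.g. in the module header
enum SelfMsgKind { CHECK_ACK = 1, TRANSMISSION_ANNOUNCEMENT, TRANSMISSION_EVENT };

// when creating/scheduling the self-messages, set the kind
cMessage *checkAck = new cMessage("CheckAck", CHECK_ACK);
scheduleAt(simTime() + 0.1, checkAck);

// when handling them, compare kinds instead of pointers
void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    switch (msg->getKind()) {
        case CHECK_ACK:
            // handle CheckAck
            break;
        case TRANSMISSION_ANNOUNCEMENT:
            // handle the announcement
            break;
        case TRANSMISSION_EVENT:
            // handle the transmission event
            break;
    }
    delete msg;
}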
This was a problem in Qt 5.4.0 and has been fixed in Qt 5.6.0.
I have an application that allows the user to launch a process with QProcess.
Initially I wanted to connect the QProcess::finished signal to a lambda function, but since it is an overloaded signal, it appears that this can't be done directly because of the ambiguity over which overload to connect to.
Therefore, I've experimented with monitoring the state change of QProcess.
void MainWindow::on_actionLaunchApplication_triggered()
{
    // launch the file open dialog for the user to select a file
    QString filePath = QFileDialog::getOpenFileName(this, "Select Application to Launch", "/Applications");
    if(filePath == "")
        return;
    QProcess* proc = new QProcess(this);
    // can't connect to QProcess::finished with a lambda, due to the overloaded signal, so check for a state change instead
    connect(proc, &QProcess::stateChanged, [filePath, proc, this](QProcess::ProcessState state){
        if(state == QProcess::NotRunning)
        {
            qDebug() << "Deleting proc";
            disconnect(proc, &QProcess::stateChanged, 0, 0);
            proc->deleteLater();
        }
    });
    proc->start(filePath);
}
Generally this works as expected; the selected application is executed, and different applications can be launched this way, one after another. Quitting such an application results in execution of the tidy-up code that deletes the QProcess.
However, if an application that has been launched with QProcess is quit and then selected again for execution, it fails to launch and instead the process is deleted immediately from the call to deleteLater in the lambda function.
So, what's going on? Considering that a new QProcess is created each time, why would it work the first time for each application, but if such an application is quit and selected to launch again, it is instantly deleted?
I'm fully aware that I can connect to QProcess::finished without a lambda function or via the SIGNAL and SLOT macros. This question is academic and I'm looking for an understanding of what's going on here.
In response to the answers and comments so far, it looks like this is a Qt bug. Connecting to the QProcess::finished signal results in the same problem of an application only being launched the first time.
// launch the file open dialog for the user to select a file
QString filePath = QFileDialog::getOpenFileName(this, "Select Application to Launch", "/Applications");
if(filePath == "")
    return;
QProcess* proc = new QProcess();
connect(proc, static_cast<void (QProcess::*)(int)>(&QProcess::finished), [filePath, proc, this](int exitStatus) {
    Q_UNUSED(exitStatus);
    Log("Deleting proc for launched app");
    proc->deleteLater();
    proc->disconnect(proc, static_cast<void (QProcess::*)(int)>(&QProcess::finished), 0, 0);
});
proc->start(filePath);
In fact, you can connect to the signal! All you have to do is tell your compiler which signal overload it should choose, because it can't decide that on its own.
There is a good answer to that problem in this question: Qt5 overloaded Signals and Slots.
This won't solve your problem with the strange delete behavior, but maybe the problem will solve itself this way.
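For illustration, a minimal sketch of such a connection, using a static_cast to pick the (int, QProcess::ExitStatus) overload (on Qt 5.7 and later, QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished) does the same job):

connect(proc,
        static_cast<void (QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
        [proc](int exitCode, QProcess::ExitStatus exitStatus) {
            qDebug() << "Process finished, exit code" << exitCode << "status" << exitStatus;
            proc->deleteLater();
        });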
The finished signal indicates a state transition. Your code, however, checks for a static state, not a transition.
You should keep a property related to the process to indicate that it is running or starting, and then only delete the process when it stops running or fails to start.
void MainWindow::on_actionLaunchApplication_triggered()
{
    auto locations = QStandardPaths::standardLocations(QStandardPaths::ApplicationsLocation);
    if (locations.isEmpty())
        locations << QString();
    auto filePath = QFileDialog::getOpenFileName(this, "Select Application to Launch",
                                                 locations.first());
    if (filePath.isEmpty())
        return;
    bool wasActive = false; // upon capture, it becomes a per-process field
    auto proc = new QProcess(this);
    connect(proc, &QProcess::stateChanged, [=](QProcess::ProcessState state) mutable {
        if (state == QProcess::Running) {
            qDebug() << "Process" << proc << "is running";
            wasActive = true;
        }
        else if (state == QProcess::Starting) {
            qDebug() << "Process" << proc << "is starting";
            wasActive = true;
        }
        else if (state == QProcess::NotRunning && wasActive) {
            qDebug() << "Will delete a formerly active process" << proc;
            proc->deleteLater();
        }
        else /* if (state == QProcess::NotRunning) */
            qDebug() << "Ignoring a non-running process" << proc;
    });
    proc->start(filePath);
}
I'm using GetRawInputDeviceInfo to get the device name of a USB HID device.
For some reason, when I run my code under Windows XP I get a device name which starts with \??\ and not \\?\.
This of course means that when I try to use this device name (in CreateFile, for example) it does not work. If I edit the device name and manually fix it to be \\?\, everything works great.
This does not happen in Windows 7. In Win7 everything works great.
I also test for GetLastError after every API call and no errors occur.
All my OSes are 32-bit and my project is compiled with Unicode.
Any suggestions as to what I am doing wrong? Here's a code snippet from my console application which gets the device name.
nResult = GetRawInputDeviceInfo( pDeviceList[i].hDevice, RIDI_DEVICENAME, NULL, &nBufferSize );
if( nResult < 0 )
{
    cout << "ERR: Unable to get Device Name character count.." << endl;
    return false;
}

WCHAR* wcDeviceName = new WCHAR[ nBufferSize + 1 ];
if( wcDeviceName == NULL )
{
    cout << "ERR: Unable to allocate memory for Device Name.." << endl;
    return false;
}

nResult = GetRawInputDeviceInfo( pDeviceList[i].hDevice, RIDI_DEVICENAME, wcDeviceName, &nBufferSize );
if( nResult < 0 )
{
    cout << "ERR: Unable to get Device Name.." << endl;
    delete [] wcDeviceName;
    return false;
}

wcDeviceName[1] = '\\';
// This is the manual fix for the device name in WinXP. How do I get rid of it????

pDesc->hHandle = CreateFile(wcDeviceName, GENERIC_READ|GENERIC_WRITE, FILE_SHARE_READ|FILE_SHARE_WRITE, NULL, OPEN_EXISTING, NULL, NULL);
...
...
You are not doing anything wrong.
Just change the second character to \ and you are set. What you see is the raw device path in its native form (\??\...). The \\?\ form is a crutch MS invented to make long path names available on Win32 when NT arrived, despite the Win32 subsystem being limited to the \?? object directory.
Please read a few of the chapters of "Windows Internals" by Russinovich (any old edition will do) and use winobj.exe from Sysinternals to explore the object namespace of Windows to see what I'm talking about.
Side-note: when you call CreateFile, the code in kernel32.dll will literally undo the suggested change and convert the path back to its native form before the native functions get to see it. So all you are doing with this is making the Win32 layer understand the path.
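For illustration, a minimal sketch of that fix applied to the snippet from the question (the wcsncmp guard is an illustrative addition; wcDeviceName and pDesc come from the question's code):

// if the name came back in native form ("\??\..."), rewrite it to the
// Win32 form ("\\?\...") before passing it to CreateFile
if( wcsncmp( wcDeviceName, L"\\??\\", 4 ) == 0 )
    wcDeviceName[1] = L'\\';

pDesc->hHandle = CreateFile( wcDeviceName, GENERIC_READ|GENERIC_WRITE,
                             FILE_SHARE_READ|FILE_SHARE_WRITE, NULL,
                             OPEN_EXISTING, 0, NULL );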
I have an exception thrown from native code in Visual Studio 10. I've enabled breaking on throw for all exceptions in the Debug -> Exceptions menu. It's a regular C++ std::runtime_error; no SEH or managed exceptions are involved here. But the runtime won't break on throw. It also won't catch them, even though I explicitly catch runtime_errors (since I throw some); they're finally caught by the managed calling code. I put a breakpoint before the throw statement and I know which one is throwing and why, but I'm still mystified as to why I can't break on it or catch it.
try {
    //...
    std::for_each(...) {
        if (condition) {
            std::stringstream str;
            str << "Error: Unexpected end of file in file " << arg.first << "\n";
            str << "Unused tokens: \n";
            for(int i = 0; i < token_stack.size(); i++) {
                auto ref = token_stack.top();
                str << " " << ref->contents << " at line " << ref->line << "\n";
                token_stack.pop();
            }
            throw std::runtime_error(str.str());
        }
    });
    //...
}
catch(const std::runtime_error& exception) {
    std::cout << exception.what() << std::endl;
    return;
}
The function is eventually called from managed code. I know that this throw statement is the one throwing.
If this is a managed application, as you seem to imply, then I think you need to enable "mixed mode debugging". Have you checked whether you can set a breakpoint in this function?