While working with some code I ran into a severe Qt Creator performance problem. The IDE launches a thread that occupies 100% of a CPU core in an infinite loop; even closing the IDE normally becomes impossible, and the process has to be killed. This is fully reproducible on my machine. Before submitting a bug report I would like to get confirmation from other users and collect some statistics on the versions of Qt Creator, OS, compiler, STL, etc. involved. The code requires C++11 or later.
After some investigation I reduced my code to the shortest sample that reproduces the issue (don't look at the semantics of the code; the problem is in how the IDE treats it):
#include <set>

int main() {
    std::set<int> s;
    auto iter = s.insert(1).first;
    iter->second;
    return 0;
}
The highlights:
auto is important
the same behavior can be reproduced with std::map instead of std::set
insert is important, as it returns not a plain iterator but a std::pair<iterator, bool>
The line iter->second is semantically incorrect, but that is not important (you could use a std::set of pairs to make it correct; see the sketch below). The problem is that the IDE hangs after iter->, whatever it might mean.
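For reference, here is a semantically correct variant along the lines suggested above; the element types are my own illustrative choice, and per the notes above a std::map<int, int> should behave the same way:

#include <set>
#include <utility>

int main() {
    // a set of pairs, so that iter->second is actually well-formed
    std::set<std::pair<int, int>> s;
    auto iter = s.insert(std::make_pair(1, 2)).first;
    int x = iter->second;   // reportedly the hang occurs while typing "iter->"
    (void)x;                // silence the unused-variable warning
    return 0;
}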
My configuration is: Qt Creator 3.5.1 based on Qt 5.5.1 (MSVC 2013, 32-bit); Windows 10.
A lot has happened since Qt Creator 3.5. The code model is completely new, based on Clang. Hence, I cannot reproduce your problem with Qt Creator 4.9. (And yes, the old code model had several limitations and bugs.)
In general, always make sure you have the latest supported version of the software before you prepare a bug report.
Related
I've been using Qt Creator for about a year. Because I was new to it and was trying to get things up and running as quickly as possible, I opted for the MinGW compiler. My projects have worked well enough, but the one I am working on now processes a lot of data, and it is running much slower than I had hoped.
Before I undertake the task of installing MSVS, are there any thoughts on whether a switch to the MSVS compiler on Windows 7 generally produces a "faster" exe?
I realize there are subjective elements to this question, but I just want some general ideas.
Ages ago, the Qt-msvc2010 builds were faster than MinGW.
It depends on which Qt version and which MSVC version you're comparing, how you include headers, and so on.
I would like to use Clang for C++ development (on Windows for now, but also Linux, Android, etc.), and for the past six months I have been able to compile quite complicated code with few issues. A couple of weeks ago, however, I stumbled on a problem with exceptions not being handled. I have researched and read everything I could find, but I still don't have a definitive answer on whether it is possible to use exceptions with any combination of MinGW/g++/LLVM/Clang.
The closest leads so far seem to be ruben's builds, but I can't get them to work due to another known problem: strerror_s.
The minimal code I am trying to make work is quite simple:
int main()
{
    try { throw 0; }
    catch (...) { return 1; }
    return 0;
}
Any help will be greatly appreciated, because I have stopped my work and am struggling to get exceptions going.
Thanks,
Orlin++
I'm sorry you're having trouble with my builds. I must admit Windows XP isn't high on my priority list...
What you can try is to build clang 3.2 yourself using the GCC dw2 toolchain on Windows XP, so that the problematic strerror_s function is not used. This is something that only affects the clang binaries, not any binaries that they produce.
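For what it's worth, a rough sketch of such a build using CMake might look like the following; the source path, generator, and directory layout are assumptions to adapt to your own setup:

# run from a shell where the dw2 MinGW toolchain comes first in PATH;
# assumes LLVM 3.2 sources with clang unpacked into tools/clang
cmake -G "MinGW Makefiles" -DCMAKE_BUILD_TYPE=Release C:\src\llvm-3.2.src
mingw32-make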
I've been using Qt for several months now with no problems. I originally downloaded the Qt 4.8 library with the most recent Qt Creator as of summer 2012, and I was able to start constructing my application. My application has demanding graphics needs, so I've been using the great windowing context Qt provides for OpenGL.
I've been slowly building my skills. I have explored programmable shaders with success, and I wanted to leverage the power of geometry shaders. I am running OS X 10.7.5 on a MacBookPro6,2 with a GeForce GT 330M GPU. According to what I've read from others here, the upgrade to OS X Lion included a driver that runs this GPU under the OpenGL 3.2 Core specification, including support for programmable geometry shaders. I also read here that while Qt 4.8 did not support OpenGL 3.2 on OS X, this support was included in the recent release of Qt 5.
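For background, requesting such a context through Qt looks roughly like this (my sketch against the QGLFormat API; the function name is just illustrative, not code from my project):

#include <QGLFormat>
#include <QGLWidget>

QGLWidget *makeGLWidget()
{
    QGLFormat fmt;
    fmt.setVersion(3, 2);                    // request OpenGL 3.2...
    fmt.setProfile(QGLFormat::CoreProfile);  // ...core profile, needed for geometry shaders on Lion
    return new QGLWidget(fmt);
}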
I saw that Digia had also released an update to Qt Creator, so (being a little too excited about this potential breakthrough in my work) I uninstalled Qt and downloaded the Qt 5.0.0 library + Qt Creator 2.6.1. I went through the steps in the wizard, started up the new Qt Creator, and now nothing works, haha. I have developed a love-hate relationship with my compiler and the cryptic messages it gives me, but this is different. The errors being thrown make it sound like it no longer knows how to read the code (to pick one example out of several hundred errors: "#include "). The wizard installed Qt fine, and all the guts are there, but I think the link to my gcc compiler has somehow been broken. Not even the examples that came with Qt 5 compile.
Qt has introduced a new "kit" paradigm to make developing on multiple platforms easier, and I have made efforts to change the setup of the kit. Qt detects several gcc compiler options, which I have tried, and I have manually pointed it to the path I get from the terminal command:
which gcc
/usr/bin/gcc
It appears to be gcc 4.2. I see that the most current version is gcc 4.7, but I have the most up-to-date version Xcode provides. I also downloaded the "Command Line Tools" from Xcode and restarted, but that did not remedy my problems as magically as I had hoped. I am trying to update gcc manually, but I'm running into issues because it is asking me to update gmp and mpfr as well, and they are not fully cooperating.
Since the kit paradigm allows multiple libraries to co-exist in Qt, I re-downloaded the Qt 4.8 library, but it suffers from the same problem. I have pointed Qt Creator to qmake for both the 4.8 and 5.0 libraries, but that doesn't seem to be the problem either.
I haven't been able to see evidence of anyone else running into such a crippling problem, so that suggests that I am missing something simple. But even for being a newbie last summer, I felt I had gotten pretty comfortable with Qt, C++, and OpenGL from what I have managed to piece together from the Internet.
If anyone can nudge me in the right direction, I would greatly appreciate it. I am willing to rebuild my application from scratch in Qt 5.0, but I can't use Qt at all at the moment.
I finally got it to work! In /usr/bin there was more than one g++ executable. They were labeled with different version numbers (g++-4.0, g++-4.2), but they all showed up in Qt's automatic detection. All I needed to do was delete the extras, leaving only the g++ executable that is not labeled with a version number. By limiting the options available to Qt, it automatically selected a compiler, and now it works.
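In terminal terms, the cleanup amounts to something like this (the versioned names below are from my machine; check what ls reports on yours first):

ls -l /usr/bin/g++*                         # see which compilers Qt Creator will detect
sudo rm /usr/bin/g++-4.0 /usr/bin/g++-4.2   # keep only the unversioned g++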
It is embarrassing that it took me so long to find such a simple fix, but it is still a relief. I hope my experience saves others some time.
I have some OpenCL kernels that aren't doing what they should be, and I would love to debug them in Xcode. Is this possible?
If not, is there any way I can use printf() in my CPU-based kernels? When I use printf() in my kernels the OpenCL compiler always gives me a whole bunch of errors.
Casting the format string to const char * appears to fix this problem.
This works for me on Lion:
printf((char const *)"%d %d\n", dl, dll);
This has the error described above:
printf("%d %d\n", dl, dll);
Have you tried adding this pragma to enable printf?
#pragma OPENCL EXTENSION cl_amd_printf : enable
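That is, at the top of the kernel source, before any printf call. Note that the extension name is AMD-specific, so this is only a sketch of where the pragma goes; it may well not be available on Apple's implementation:

#pragma OPENCL EXTENSION cl_amd_printf : enable

// illustrative kernel: printf becomes usable once the (vendor-specific) extension is enabled
__kernel void hello(__global const int *in)
{
    printf("value: %d\n", in[0]);
}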
You might also want to try using Quartz Composer to test out your kernels. If you have access to the WWDC 2010 videos, I believe they show how to use Quartz Composer for rapid prototyping of OpenCL kernels in Sessions 416: "Harnessing OpenCL in Your Application" or 418: "Maximizing OpenCL Performance". There were also some good sessions on this during WWDC 2009 and 2008 that might also be available via ADC on iTunes.
Using Quartz Composer, you can quickly set up inputs and outputs for a kernel, then monitor the results in realtime. You can avoid the change-compile-test cycle because everything is compiled as you type. Syntax errors and the like will pop up as you change code, which makes it fairly easy to identify those.
I've used this tool to develop and test out OpenGL shaders, which have many things in common with OpenCL kernels.
Have you given gDEBugger a try already? I think it's currently the only choice you have for OpenCL debugging on the Mac.
Intel offers a printf in their new OpenCL 1.1 SDK, but that's only for Linux and Windows. Lion has OpenCL 1.1, but at least my Core 2 Duo does not support the printf extension.
AMD is still developing their OpenCL tools, and the Nvidia debugging tools are only for CUDA, as far as I understand.
I have an application which has many services and one UI module, all developed in VC++ 6.0. The total size is about 560 KLOC.
It uses multithreading, MFC, and data types like WORD, int, and long.
Now we need to support a 64-bit OS. What changes would we need to make to the product?
By support I mean both running the application on a 64-bit OS and making use of the 64-bit memory space.
Edit: I am ruling out migration to VS2005 or anything higher than VC6.0 due to time constraints.
So what changes need to be made?
64-bit Windows includes 32-bit support via WOW64. Any 32-bit application should just continue to work.
(It is only drivers that have to match the bitness of the OS.)
[Note to commenters: plugins (of whatever type) are not separate applications but DLLs loaded by other applications, and those do need to match the host. In that case you also get the same problem, where 64-bit extensions are incompatible with 32-bit hosts.]
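As an aside, a 32-bit process can check at runtime whether it is running under WOW64. A minimal sketch using the documented IsWow64Process API:

#include <windows.h>
#include <cstdio>

int main()
{
    BOOL wow64 = FALSE;
    // reports TRUE when a 32-bit process runs on 64-bit Windows via WOW64
    if (IsWow64Process(GetCurrentProcess(), &wow64) && wow64)
        std::printf("running under WOW64\n");
    return 0;
}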
As Richard says, the 32-bit version should continue to work unless you've got a driver or a shell extension or something.
However, if you do need to upgrade the code, you're going to have to upgrade the compiler too: I don't think MFC got good 64-bit support until VS2005 or later. I'd suggest you get the 32-bit code building in VS2010 (this will not be trivial) and then start thinking about converting it to 64-bit. You can of course keep the production 32-bit builds on VC6, but then you add a maintainership burden.
You'll probably get most of the way there by flipping the compiler to 64-bit and turning on full warnings; given the size of your code, it may be impractical to review it all. One thing to watch out for is storing pointers in ints, DWORDs, etc., which may now be too short to hold a pointer (you need DWORD_PTR and friends now), but I think the warnings do catch that.
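To make the pointer-in-int issue concrete, the pattern those warnings flag looks like this (a minimal sketch, not code from your product):

#include <windows.h>

void store(void *p)
{
    // DWORD d = (DWORD)p;        // bug on 64-bit: DWORD is still only 32 bits wide
    DWORD_PTR d = (DWORD_PTR)p;   // pointer-sized integer on both 32- and 64-bit builds
    (void)d;                      // silence the unused-variable warning
}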
Or, if the product is split into many components, you might get away with migrating only a few of them to 64-bit. Then, unfortunately, you've got data-length issues for communication between the two halves.
You must convert to a newer compiler; time constraints are pretty much irrelevant here. The VC6 compiler simply cannot generate 64-bit code: every pointer it generates is 32 bits, for starters. If you need to access "64-bit memory", i.e. memory above 0x00000000FFFFFFFF, then 32 bits is simply not enough.
If you're ruling out changing your IDE to one that intrinsically supports 64-bit compiling and debugging, you're making your job unnecessarily more complex. Are you sure it's not worth the hit?
Just for running on a 64-bit OS, you won't need to make any changes; that's what WOW64 is for.
If, however, you want to run natively as 64-bit (i.e., access the 64-bit memory space), you will have to compile as 64-bit. That means using an IDE that supports it. There is no way around this.
Most programs should have no problem converting to 64-bit if they are written to decent coding standards (mainly, no assumptions about the size of a pointer, such as int-to-pointer conversions). You'll get a lot of warnings about things like std::size_t conversions, but they will be fairly meaningless.
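A typical instance of those warnings, for illustration:

#include <vector>

int count(const std::vector<int>& v)
{
    // 64-bit MSVC warns here (C4267): size_t is 64 bits, int is 32;
    // usually harmless when the container can never hold that many elements
    return (int)v.size();
}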