"deleting" copy ctor/assignment in C++11 - c++11

In VS 2010 SP1, the following:
class Foo
{
public:
    Foo() { }
    Foo(Foo const&) = delete; // Line 365
    Foo& operator=(Foo const&) = delete; // Line 366
};
does not compile. It complains:
CPPConsole.cpp(365): error C2059: syntax error : ';'
CPPConsole.cpp(365): error C2238: unexpected token(s) preceding ';'
CPPConsole.cpp(366): error C2059: syntax error : ';'
CPPConsole.cpp(366): error C2238: unexpected token(s) preceding ';'
Is this not supported yet? The strange thing is, IntelliSense seems to recognize this construct. It says "IntelliSense: function "Foo::operator=(const Foo &)" (declared at line 366) cannot be referenced -- it is a deleted function"
What am I missing?

VS 2010 has something of a dual personality. Specifically, it actually has two entirely separate compiler front ends.
When you compile the code, that's done with Microsoft's own compiler, which goes all the way back to MS C 3.0 for MS-DOS, released ~3 decades ago (in case you're wondering why it was 3.0, MS sold a re-labeled version of Lattice C before that).
Up until VS 2008, the parsing in the IDE was rather primitive compared to the compiler's, so it didn't parse a lot of more sophisticated C++ quite correctly. They decided that was unacceptable, and rather than try to upgrade the IDE's existing parser to match the compiler's, they licensed the EDG compiler front-end.
This gives more or less the opposite situation: the IDE's parser for Intellisense is now considerably closer to conforming than the one on the compiler, and recognizes a fair number of C++0x constructs that the compiler doesn't.
There's a little more to the story than that, though: the EDG front end supports a switch that makes it act more like VC++, including emulating a fair number of VC++ bugs. Although I don't have data to confirm it, my assumption would be that Microsoft uses that capability. Since that amounts to EDG taking the VC++ compiler and emulating its bugs, it's probably a fair guess that (at least usually) EDG's VC++ emulation runs about a version behind VC++ itself. That gives the somewhat paradoxical situation where EDG (in normal use) is normally quite a bit ahead of VC++, but the version MS uses in the IDE is probably going to be at least slightly behind most of the time.

It isn't implemented yet in VS2010.
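In the meantime, a minimal sketch of the classic C++03 workaround, which VC++ 2010 does accept: declare the copy operations private and leave them undefined, so any attempted copy fails at compile time (or at link time from inside the class itself).
class Foo
{
public:
    Foo() { }
private:
    Foo(Foo const&);            // declared but never defined
    Foo& operator=(Foo const&); // declared but never defined
};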

Related

Exploring c++ forward declarations of nested types and a standards compliance teaser?

Thanks to the awesome Jan 9th 2018 planet clang team work and other great articles, I recently figured out how to configure my dev setup to integrate LLVM 7.0 with Visual Studio 2017 15.5.6 for 32-bit and 64-bit Intel architectures.
My motivation was Microsoft's breaking standard-library changes in 15.4.X, which caused the experimental Microsoft Clang/C2 front end (somewhat buggy anyway) to stop working.
In the Intel setup I use LLVM tools to both compile and link PDB debuggable Windows code.
See other posts I made on that topic for instructions on how to integrate everything so you can use the latest updates to the Visual Studio 2017 tools and debug with the latest 7.X builds of LLVM/clang. For those interested in full Windows 10 on ARM development (not UWP silliness), I am separately working on getting the same setup to work for arm-32 and arm-64 using LLVM tools, as well as Wine-ARM on Linux.
Once done with that integration, I was looking to see what LLVM 7.0 clang allowed to be done without needing P0289R0 Forward declarations of nested classes.
struct ds1; struct ds1::ds2;
I started fiddling around and wrote the sample below. It compiles and runs correctly with clang-llvm 7.0. It does not compile with Microsoft's latest C++ front ends.
So I scratched my head, wondering whether it is actually legal C++ code. I often do that, as I use C++ in a non-standard style with templates and forward references. For example, I use only one .cpp file per module and use includes within templates to minimize forward declarations. That pattern avoids the need for makefiles and dramatically improves portability, compilation time, and often code-generation quality. In effect, I use the .cpp file as a makefile, with minimal to no macros and pure clang/gcc and C++ features.
Asking myself whether the code was actually legitimate C++ according to the standard, I re-read the standard and concluded that yes, it complies with the template rules, which would make Microsoft's compiler non-compliant. So I wondered whether others more knowledgeable might chime in and clarify the question.
Here is the code sample:
(note that it is also intentionally infinitely recursive in a number of ways, to test compiler detection)
// P0289R0 clang Forward declarations of nested classes
//- http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0289r0.pdf
// struct ds1; struct ds1::ds2;
struct nested1;
struct nested2;
// Code that references "nested1" here (could be pointer, but in template model this is more interesting case)
template <bool f_ta = true>
void foo(nested1& n1) {
    n1.run();
}
template <bool f_ta = true>
struct outer {
    struct inner {
        void run() {
            nested1 n1; nested2 n2;
            n2.test();
            foo(n1);
        }
    };
    void test() {
        nested1 n1;
        foo(n1);
    }
};
//using nested = outer::inner; // Must be defined at module-scope
//typedef outer::inner nested; // Must be defined at module-scope
struct nested1 : outer<>::inner { using super_t = outer<>::inner; using super_t::super_t; };
struct nested2 : outer<> { using super_t = outer<>; using super_t::super_t; };
bool fAutoRunIt = []() {
    nested2 a;
    a.test();
    return true;
}();
You can easily modify the example to not be recursive and still test out the other forward reference behavior.
Cheers, David

GNU C++ import name mangling [duplicate]

I came across an interesting error when I was trying to link to an MSVC-compiled library using MinGW while working in Qt Creator. The linker complained of a missing symbol that looked like _imp_FunctionName. When I realized that it was due to a missing extern "C" and fixed it, I also ran the MSVC compiler with /FAcs to see what the symbols are. It turns out it was __imp_FunctionName (which is also the way I've seen it on MSDN and quite a few guru bloggers' sites).
I'm thoroughly confused about how the MinGW linker complains about a symbol beginning with _imp but is able to find it just fine even though it begins with __imp. Can a deep compiler magician shed some light on this? I used Visual Studio 2010.
This is fairly straightforward identifier decoration at work. The imp_ prefix is auto-generated by the compiler; it exports a function pointer that allows optimizing binding to DLL exports. By language rules, imp_ gets a leading underscore of its own, required since it lives in the global namespace, is generated by the implementation, and doesn't otherwise appear in the source code. So you get _imp_.
Next thing that happens is that the compiler decorates identifiers to allow the linker to catch declaration mis-matches. Pretty important because the compiler cannot diagnose declaration mismatches across modules and diagnosing them yourself at runtime is very painful.
First there's C++ decoration, a very involved scheme that supports function overloads. It generates pretty bizarre-looking names, usually including lots of ? and @ characters, with extra characters for the argument and return types so that overloads are unambiguous. Then there's decoration for C identifiers, which is based on the calling convention. A cdecl function has a single leading underscore; an stdcall function has a leading underscore and a trailing @n that permits diagnosing argument declaration mismatches before they imbalance the stack. The C decoration is absent in 64-bit code, where there is (blessedly) only one calling convention.
So you got the linker error because you forgot to specify C linkage: the linker was asked to match the heavily decorated C++ name with the mildly decorated C name. You then fixed it with extern "C"; now you get the single added underscore for cdecl, turning _imp_ into __imp_.
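A sketch of the header-side fix described above (FunctionName and its signature are placeholders, not the real symbol): wrapping the import declaration in extern "C" makes MinGW look for the C-decorated import symbol rather than a C++-mangled one.
#ifdef __cplusplus
extern "C" {
#endif
__declspec(dllimport) int FunctionName(int arg); // cdecl, C linkage
#ifdef __cplusplus
}
#endif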

Resources listing known compiler bugs in VC++ 6.0

Is there a resource listing CString fixes between VC6.0 and Visual Studio 2010? We have encountered what appears to be a compiler bug in VC6.0 SP6; the same code works in 2010.
I'm working to distill it into a small test case, but essentially, in cases where ~300 strings are referenced, two nearly identical strings resolve such that one is lost at the assembly level. Seems like a possible hash-table collision internal to VC6.0.
I need to prove this for a VC6.0 workaround solution. (Our legacy code is VC6.0.) I'll try to post a code snippet once I can (if I can) distill it to something I can post.
Visual C++ uses COMDAT naming to support the /GF string-pooling flag (which is implied by /ZI). However, under VC++ 6.0 the symbol name was truncated to 256 characters.
I suspect your strings have identical prefixes up to the 256th character.
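An illustration of the suspected failure mode (hypothetical data, not the original repro): with /GF string pooling, VC6 reportedly derives the symbol name from the literal's contents and truncates it at 256 characters, so two literals that differ only after the 256th character could collapse into one.
#include <stdio.h>
#include <string.h>
// 64-character chunk, repeated to build a 256-character common prefix.
#define CHUNK64 "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
#define PREFIX256 CHUNK64 CHUNK64 CHUNK64 CHUNK64
const char* stringA = PREFIX256 "-variant-A"; // differs only past character 256
const char* stringB = PREFIX256 "-variant-B";
int main()
{
    // On a conforming compiler these are distinct literals; under the
    // suspected VC6 behavior one of them could be lost to the other.
    printf("distinct: %s\n", strcmp(stringA, stringB) != 0 ? "yes" : "no");
    return 0;
}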

Visual studio 2005: is there a compiler option to initialize all stack-based variables to zero?

This question HAS to have been asked before, so it kills me to ask it again, but I can't find it for all my googling and searching of Stack Overflow.
I'm porting a bunch of linux code to windows, and a good chunk of it makes the assumption that everything is automatically initialized to zero or null.
int whatever;
char* something;
...and then immediately doing something that may leave 'something' null, and testing against 'something'
if(something == NULL)
{
.......
}
I would REALLY like not to have to go back throughout this code and say:
int whatever = 0;
char* something = NULL;
Even though that is the proper way to deal with it. It's just very time consuming.
Otherwise, I declare a variable, and it's initialized to something crazy if I don't set it myself.
This option doesn't exist in MSVC, and honestly, whoever wrote your application made a big mistake. That code is not portable, as C and C++ say that uninitialized local variables have an indeterminate value. I suggest setting the "treat warnings as errors" option and recompiling; MSVC should give you a warning every time a variable is used without being initialized.
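A minimal case that the compiler can flag once warnings are turned up (MSVC reports warning C4700, "uninitialized local variable used"); combined with "treat warnings as errors" it becomes a hard build break.
#include <stdio.h>
int main()
{
    int whatever;             // never initialized
    char* something;          // never initialized
    if (something == NULL)    // C4700: 'something' used without being initialized
        printf("%d\n", whatever); // C4700: 'whatever' used without being initialized
    return 0;
}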
No - there's no option to do that in MSVC.
Debug builds will initialize them with something else (0xcc, I think), but not zero. Unfortunately, your code is bugged and needs to be fixed (of course, this applies only to automatic variables; for statics and globals it's fine to assume they're zero-initialized). I'm surprised there was any compiler that supported that behavior - if there's an option to do that in GCC, I haven't heard of it (but I'm no expert in the dusty corners of GCC).
You may hear that an earlier version of MSVC would init variables to zero in debug builds (similar to the way 0xcc is used in VS 2005), but as far as I know that's untrue.
edit ----------
Well, I'll be damned - GCC does (or did?) have the -finit-local-zero option. Looks like it's there mostly for Fortran support, I think.
I'd suggest using compiler warnings about using uninitialized variables to help you catch 99% of your problems. I know it's not a great bit of work, but it should be done if at all possible.
Interestingly, MSVC now does have the ability to do this. The Microsoft Security team wrote a blog post about it here, and there's a CppCon talk here.
Unfortunately, it doesn't seem like this option is exposed to the public. This page lists a bunch of 'hidden MSVC flags', and it includes an option called -initall, so that might be it.
What I ended up doing was switching to /W4. At that level, it caught most of the "yeah, that's going to be an issue" areas of initialization. Otherwise, I saw nothing that can change everything from being 0xcccccccc on initialization to 0x00000000.
Massive thanks to everyone for answering this, and yes, we will tighten it up in the future.

Xcode equivalent of '__asm int 3' / DebugBreak() / Halt?

What's the instruction to cause a hard-break in Xcode? For example under Visual Studio I could do '_asm int 3' or 'DebugBreak()'. Under some GCC implementations it's asm("break 0") or asm("trap").
I've tried various combos under Xcode without any luck. (inline assembler works fine so it's not a syntax issue).
For reference, this is for an assert macro. I don't want to use the definitions in assert.h, both for portability and because they appear to do an abort() in the version Xcode provides.
John - Super, cheers. For reference the int 3 syntax is the one required for Intel Macs and iPhone.
Chris - Thanks for your comment but there are many reasons to avoid the standard assert() function for codebases ported to different platforms. If you've gone to the trouble of rolling your own assert it's usually because you have additional functionality (logging, stack unwinding, user-interaction) that you wish to retain.
Your suggestion of attempting to replace the handler via an implementation of '__assert' or similar is not going to be portable. The standard 'assert' is usually a macro, and while it may map to __assert on the Mac, it doesn't on other platforms.
http://developer.apple.com/documentation/DeveloperTools/Conceptual/XcodeProjectManagement/090_Running_Programs/chapter_11_section_3.html
asm {trap} ; Halts a program running on PPC32 or PPC64.
__asm {int 3} ; Halts a program running on IA-32.
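For the assert-macro use case in the question, a hedged sketch that wraps break instructions like these (the macro names are illustrative and the instruction choice depends on the target):
#include <stdio.h>
#if defined(__i386__) || defined(__x86_64__)
#  define MY_DEBUG_BREAK() __asm__ volatile ("int $3")  // IA-32 / Intel Macs
#elif defined(__ppc__) || defined(__ppc64__)
#  define MY_DEBUG_BREAK() __asm__ volatile ("trap")    // PPC32 / PPC64
#else
#  define MY_DEBUG_BREAK() __builtin_trap()             // generic fallback
#endif
#define MY_ASSERT(expr) \
    do { \
        if (!(expr)) { \
            fprintf(stderr, "Assertion failed: %s (%s:%d)\n", #expr, __FILE__, __LINE__); \
            MY_DEBUG_BREAK(); /* halt in the debugger instead of calling abort() */ \
        } \
    } while (0)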
You can just insert a call to Debugger() — that will stop your app in the debugger (if it's being run under the debugger), or halt it with an exception if it's not.
Also, do not avoid assert() for "portability reasons" — portability is why it exists! It's part of Standard C, and you'll find it wherever you find a C compiler. What you really want to do is define a new assertion handler that does a debugger break instead of calling abort(); virtually all C compilers offer a mechanism by which you can do this.
Typically this is done by simply implementing a function or macro that follows this prototype:
void __assert(const char *expression, const char *file, int line);
It's called when an assertion expression fails. Usually it, not assert() itself, is what performs "the printf() followed by abort()" that is the default documented behavior. By customizing this function or macro, you can change its behavior.
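A hedged sketch of such a replacement (whether your C library actually routes assert() through a function with exactly this name and signature varies by platform, so treat it as illustrative rather than canonical):
#include <stdio.h>
// extern "C" so the name matches what a C runtime would reference.
extern "C" void __assert(const char *expression, const char *file, int line)
{
    fprintf(stderr, "Assertion failed: %s (%s:%d)\n", expression, file, line);
    __builtin_trap(); // break into the debugger instead of calling abort()
}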
__builtin_trap();
Since Debugger() is deprecated now, this should work instead.
https://developer.apple.com/library/mac/technotes/tn2124/_index.html#//apple_ref/doc/uid/DTS10003391-CH1-SECCONTROLLEDCRASH
For posterity: I have some code for generating halts at the correct stack frame in the debugger and (optionally) pausing the app so you can attach the debugger just-in-time. Works for simulator and device (and possibly desktop, if you should ever need it). Exhaustively detailed post at http://iphone.m20.nl/wp/2010/10/xcode-iphone-debugger-halt-assertions/
I found the following in an Apple Forum:
Xcode doesn't come with any symbolic breaks built in - but they're quick to add. Go to the breakpoints window and add:
-[NSException raise]
kill(getpid(), SIGINT);
Works in the simulator and the device.
There is also the following function, which is available as a straight cross-platform Halt() alternative:
#include <stdlib.h>
void abort(void);
We use it in our cross platform engine for the iPhone implementation in case of fatal asserts. Cross platform across Nintendo DS/Wii/XBOX 360/iOS etc...
