Why does gcc misinterpret this macro?

I have found the large-precision code of MPFR C++ to be very useful and have used it successfully in the past. Recently, while developing a new app, I encountered an enormous number of compiler errors in its header code (mpreal.h). I have identified the cause of all these errors: the use of a name both in a typedef and as the name of a function, coupled with an unintuitive result of a macro. The relevant macro is in the mpfr package and appeared between mpfr 4.0.2-5 and 4.1.0-6. I am using the latest version of mpreal.h (version 3.6.8), but earlier versions behave the same.
The compiler errors vary somewhat, but the following is typical:
In file included from mpreal.h:125:
mpreal.h:624:32: error: no matching function for call to ‘mpfr::mpreal::mpfr_srcptr(const __mpfr_struct*&)’
624 | mpfr_init2(mpfr_ptr(), mpfr_get_prec(u));
| ^~~~~~~~~~~~~
mpreal.h:324:19: note: candidate: ‘const __mpfr_struct* mpfr::mpreal::mpfr_srcptr() const’
324 | ::mpfr_srcptr mpfr_srcptr() const;
| ^~~~~~~~~~~
mpreal.h:324:19: note: candidate expects 0 arguments, 1 provided
The relevant lines of code (in addition to the above) are:
mpreal.h:125 #include <mpfr.h>
mpfr.h:866 #define mpfr_get_prec(_x) MPFR_VALUE_OF(MPFR_SRCPTR(_x)->_mpfr_prec)
mpfr.h:845 #define MPFR_VALUE_OF(x) (0 ? (x) : (x))
mpfr.h:847 #define MPFR_SRCPTR(x) ((mpfr_srcptr) (0 ? (x) : (mpfr_srcptr) (x)))
The problem seems to be in the macro on line 847. The (mpfr_srcptr) (x) appearing in MPFR_SRCPTR(x) is meant to be a type-cast of x to the type mpfr_srcptr, but it is being interpreted as a call to mpfr_srcptr() with argument x. Outside of a macro, gcc can tell the difference between (mpfr_srcptr)(x) and mpfr_srcptr(x), but the macro is apparently ignoring the parentheses. Can anyone explain this macro behavior? I know that gcc has a huge number of switches to control almost everything, but is there an option somewhere that would affect the interpretation of parentheses in macros?
I suppose that this behavior could be unique to my system, but I find that hard to believe. But I also find it hard to believe that such a bug has gone unnoticed by the rest of the community; I found no suggestion of any problem either on the website or on github, to which the project has recently been transferred.

The MPFR_SRCPTR macro is not ignoring parentheses as I originally thought; the behavior is explained by the difference in scopes. The macro, although defined in mpfr.h at global scope, is being expanded inside the mpreal class. Since mpreal declares a member function named mpfr_srcptr, that declaration hides the global typedef wherever the macro expands inside the class, so (mpfr_srcptr)(x) is parsed as a call to the member function rather than a cast. (MPFR_SRCPTR, being a macro, has no scope of its own.) When mpfr's own functions use the macro, it expands at global scope, where mpfr_srcptr still names the typedef, and the cast works as intended.
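A minimal reduction shows the same behavior. The names srcptr, CAST_TO_SRCPTR and holder below are made up for illustration; they stand in for mpfr_srcptr, MPFR_SRCPTR and mpreal:

// Global typedef, playing the role of ::mpfr_srcptr
typedef const int *srcptr;

// Same shape as MPFR_SRCPTR: a cast, written inside a macro
#define CAST_TO_SRCPTR(x) ((srcptr) (0 ? (x) : (srcptr) (x)))

srcptr global_use(const int *p) {
    return CAST_TO_SRCPTR(p);   // fine: srcptr names the typedef here, so this is a cast
}

class holder {
    const int *d = nullptr;
public:
    ::srcptr srcptr() const { return d; }   // member function hides the global typedef
    ::srcptr broken(const int *p) const {
        // error: no matching function for call to 'holder::srcptr(const int*&)';
        // inside the class, (srcptr)(...) is parsed as a call to the member function
        return CAST_TO_SRCPTR(p);
    }
};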

Related

GNU C++ import name mangling [duplicate]

I came across an interesting error when I was trying to link to an MSVC-compiled library using MinGW while working in Qt Creator. The linker complained of a missing symbol that went like _imp_FunctionName. When I realized that it was due to a missing extern "C" and fixed it, I also ran the MSVC compiler with /FAcs to see what the symbols are. It turns out it was __imp_FunctionName (which is also the form I've read about on MSDN and on quite a few guru bloggers' sites).
I'm thoroughly confused about how the MinGW linker complains about a symbol beginning with _imp, but is able to find it nicely although it begins with __imp. Can a deep compiler magician shed some light on this? I used Visual Studio 2010.
This is fairly straightforward identifier decoration at work. The imp_ prefix is auto-generated by the compiler; it exports a function pointer that allows optimized binding to DLL exports. By language rules, imp_ gets a leading underscore, required since it lives in the global namespace, is generated by the implementation, and doesn't otherwise appear in the source code. So you get _imp_.
Next thing that happens is that the compiler decorates identifiers to allow the linker to catch declaration mis-matches. Pretty important because the compiler cannot diagnose declaration mismatches across modules and diagnosing them yourself at runtime is very painful.
First there's C++ decoration, a very involved scheme that supports function overloads. It generates pretty bizarre looking names, usually including lots of ? and @ characters with extra characters for the argument and return types so that overloads are unambiguous. Then there's decoration for C identifiers, which is based on the calling convention. A cdecl function has a single leading underscore; an stdcall function has a leading underscore and a trailing @n that permits diagnosing argument declaration mismatches before they imbalance the stack. The C decoration is absent in 64-bit code, where there is (blissfully) only one calling convention.
So you got the linker error because you forgot to specify C linkage, the linker was asked to match the heavily decorated C++ name with the mildly decorated C name. You then fixed it with extern "C", now you got the single added underscore for cdecl, turning _imp_ into __imp_.
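As a hedged illustration of the fix (FunctionName and its signature are placeholders, not the actual library):

// Wrapping the import declaration in extern "C" turns off C++ name mangling,
// so MinGW's linker looks for the mildly decorated C symbol that the
// MSVC-built import library actually provides (the __imp_ pointer form).
#ifdef __cplusplus
extern "C" {
#endif

__declspec(dllimport) int FunctionName(int arg);   // placeholder declaration

#ifdef __cplusplus
}
#endif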

What does this preprocessor line mean?

I am working on a project with the ADOL-C library (for automatic differentiation) using gcc. Right now I am trying to recompile the library to use its parallelization features; however, the make process does not work, apparently due to some preprocessor issue.
I think the problematic line is:
#define ADOLC_OPENMP_THREAD_NUMBER int ADOLC_threadNumber
However, I could not find out what it means. Does it make a link between two variables? Also, ADOLC_threadNumber has not been declared before...
The preprocessor doesn't even know what a variable is. All that #define does is define a short(long?)hand for declaring a variable. I.e., if you type
ADOLC_OPENMP_THREAD_NUMBER;
It becomes
int ADOLC_threadNumber;
It's just a text substitution. Everywhere in code where ADOLC_OPENMP_THREAD_NUMBER appears, it's substituted by int ADOLC_threadNumber.
As far as I see it, the line with the define itself is not problematic, but maybe the subsequent appearance of ADOLC_OPENMP_THREAD_NUMBER. However, to check this, we need to know more about the context.
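For illustration, here is how such a declaration macro is typically paired with a companion assignment macro; the names mirror ADOL-C's style, but the definitions below are a sketch, not copied from the library's headers:

#include <omp.h>

#define ADOLC_OPENMP_THREAD_NUMBER      int ADOLC_threadNumber
#define ADOLC_OPENMP_GET_THREAD_NUMBER  ADOLC_threadNumber = omp_get_thread_num()

void per_thread_work() {
    ADOLC_OPENMP_THREAD_NUMBER;       // expands to: int ADOLC_threadNumber;
    ADOLC_OPENMP_GET_THREAD_NUMBER;   // expands to: ADOLC_threadNumber = omp_get_thread_num();
    // ADOLC_threadNumber can now be used to index per-thread bookkeeping
}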
#define is a directive often used in .h files. It creates a macro: an association of an identifier (or parameterized identifier) with a token string. After the macro is defined, the preprocessor substitutes the token string for each occurrence of the identifier in the source file.
#define is often paired with the #ifndef directive to avoid defining the macro more than once:
#ifndef ADOLC_OPENMP_THREAD_NUMBER
#define ADOLC_OPENMP_THREAD_NUMBER int ADOLC_threadNumber
#endif

MinGW's gcc reports error where Cygwin's accepts

MinGW's gcc (4.8.1) reports the following error (and more to come) when I try to compile Expstack.c:
parser.h:168:20: error: field '__p__environ' declared as a function
struct term *environ;
where 'environ' is declared inside 'struct term{ ... }'. In unistd.h you find
char **environ
but nowhere a '__p__environ'.
Some other fields are declared likewise, but are accepted. Subsequent errors related to environ are reported as follows
Expstack.c:1170:38: error: expected identifier before '(' token
case Term_src: return e->item.src->environ;
^
Cygwin's gcc (4.8.3) accepts these constructs and has done so over various versions since 2006 at least, and gcc with various versions of Linux before that.
The source text uses CRLF despite my attempts to convert to DOS, and this is my only guess for an explanation.
I'll appreciate clues or ideas to fix the problem, but renaming the field is not especially attractive and ought to be totally irrelevant.
This is very unlikely to have to do with CR/LF.
The name ought to be irrelevant but it isn't: this one relates to the Windows integration that MinGW does and Cygwin does not, as alluded to in http://sourceforge.net/p/mingw/mailman/message/14901207/ (that person is trying to use an extern environ that they expect to be defined by the system; clearly, the fashion in which the MinGW developers have made this variable available breaks the use of the name as a struct member).
You should report this as a MinGW bug. Unpleasant as it may be, in the interim, renaming the member is the simplest workaround. It is possible that a well-placed #undef environ might help, but no guarantees.
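If you want to try the #undef route before renaming, here is a sketch; it only helps if environ reaches your code as a macro, which the __p__environ in the diagnostic suggests:

#include <stdlib.h>     /* or whichever MinGW header introduces environ */

#ifdef environ
#  undef environ        /* strip the macro so the member name is left alone */
#endif

struct term {
    struct term *environ;   /* the field the MinGW headers were rewriting */
    /* ... other fields ... */
};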

Error: functions that differ only in their return type cannot be overloaded

I'm using Mac OS X 10.9 and I have a C++ program that uses the freeglut library. When I try to make the project, it gives an error and I don't know whether it's my fault or not. This is the message:
In file included from /usr/X11/include/GL/freeglut.h:18:
/usr/X11/include/GL/freeglut_ext.h:177:27: error: functions that differ only in their return type cannot be overloaded
FGAPI GLUTproc FGAPIENTRY glutGetProcAddress( const char *procName );
More information: I used CMake (version 2.8.12) to generate the Makefile, and installed the latest version of Xcode and XQuartz.
Any help is appreciated. Thank you.
In glut.h and freeglut_ext.h files:
In glut.h:
#if (GLUT_API_VERSION >= 5)
extern void * APIENTRY glutGetProcAddress(const char *procName) OPENGL_DEPRECATED(10_3, 10_9);
#endif
In freeglut_ext.h:
/*
* Extension functions, see freeglut_ext.c
*/
typedef void (*GLUTproc)();
FGAPI GLUTproc FGAPIENTRY glutGetProcAddress( const char *procName );
One of the declarations returns the function-pointer type GLUTproc (a pointer to a function taking no arguments), and the other declaration returns a plain pointer (void*). Both functions take the same argument (a single const char*). What the compiler says is true.
You're only seeing a complaint about "overloading" because it's C++. In C++, if a compiler thinks it's seen two different functions with the same name then each one needs to have different arguments (e.g. a different number of arguments, or distinct types).
In this case, I doubt the functions are meant to be different; they're meant to be the same, and at some point the API evolved and changed the declaration.
You need to find some way to prevent the compiler from seeing both declarations at the same time (perhaps by setting GLUT_API_VERSION). If you have to, you can #include just one of the files and see if you really need the other file (and if you did, you may have to manually declare some things to avoid a 2nd #include).
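For example, something along these lines; whether pre-setting GLUT_API_VERSION is honoured depends on how your glut.h guards its own definition, so treat this purely as a sketch:

// Keep the compiler from seeing Apple's glut.h declaration (guarded by
// GLUT_API_VERSION >= 5) alongside freeglut's GLUTproc one. This assumes
// glut.h only defines GLUT_API_VERSION when it is not already set; verify
// that in the header you actually have.
#define GLUT_API_VERSION 4
#include <GL/freeglut.h>     // and do not also include <GLUT/glut.h>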

#defining something as #error

In the process of getting rid of old macros in our code, I need to define the old macro as an error with a meaningful compiler message.
E.g., old code:
#define DIVIDE_BY_TWO(x) x/2
In the new code, to prevent the usage of this macro I'd like to write:
#define DIVIDE_BY_TWO(x) #error DIVIDE_BY_TWO is obsolete, use DIV_2 instead
But when I compile the above line I get:
error C2162: expected macro formal parameter
What is the correct way to do it?
A macro expansion can't contain preprocessor directives or change the preprocessor state. You could leave DIVIDE_BY_TWO undefined, but then it doesn't help to find the replacement macro. The only way to do it portably is to define it as something like this:
#define DIVIDE_BY_TWO error_DIVIDE_BY_TWO_is_obsolete_use_DIV_2_instead
This should give an error that error_DIVIDE_BY_TWO_is_obsolete_use_DIV_2_instead is not defined, and hopefully that will give enough hints as to how to replace it.
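In use, the old macro then fails at the call site with the replacement name in the diagnostic (the definition of DIV_2 below is only a guess at what the replacement looks like):

#define DIVIDE_BY_TWO error_DIVIDE_BY_TWO_is_obsolete_use_DIV_2_instead
#define DIV_2(x) ((x) / 2)            // illustrative definition of the replacement

int half(int n) {
    // error: 'error_DIVIDE_BY_TWO_is_obsolete_use_DIV_2_instead' was not declared in this scope
    return DIVIDE_BY_TWO(n);
}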
The problem with using #error is that it creates an error at the time that part of the code is analyzed by the preprocessor. You want to create an error when the macro is expanded. You can't, unfortunately, use #error for that.
I don't believe there is a way to generate a clear human-readable error message reliably in portable C. (You can, of course, make the macro expand to something that's syntactically invalid, though, which will at least stop compilation.) gcc supports doing it with _Pragma. Your question is effectively equivalent to this question and the answer there explains how to use _Pragma as well as other options for creating a fatal error.
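With gcc (4.8 or later), a sketch of that non-portable approach using the "GCC error" pragma:

// gcc-specific: every expansion of the macro now produces a fatal
// diagnostic with a readable message.
#define DIVIDE_BY_TWO(x) _Pragma("GCC error \"DIVIDE_BY_TWO is obsolete, use DIV_2 instead\"")

int half(int n) {
    return DIVIDE_BY_TWO(n);   // error: DIVIDE_BY_TWO is obsolete, use DIV_2 instead
}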
You cannot use a preprocessor directive in #define since what you're asking for is to run the preprocessor twice.
If a line begins with a #, it is a directive to the preprocessor and is interpreted as such. If it doesn't, it is subjected to macro substitution and replacement if such a macro exists.
The best you can do is define the deprecated macro as some expression that is sure to produce an error.
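One C++11 way to do that while keeping a readable hint in the diagnostic is to route the macro to a deleted function whose name spells out the replacement (a sketch, not the only option):

// Every use now fails with
// "use of deleted function 'int DIVIDE_BY_TWO_is_obsolete_use_DIV_2(int)'".
int DIVIDE_BY_TWO_is_obsolete_use_DIV_2(int) = delete;
#define DIVIDE_BY_TWO(x) DIVIDE_BY_TWO_is_obsolete_use_DIV_2(x)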
