static function with no prototype (AIX compiler allowed it, gcc doesn't)

I'm attempting to port a large set of modules from AIX to Linux. Unfortunately, the AIX xlc compiler allowed you to define a static function and use it prior to the definition with no prototype. Not good, but at least you get the proper static scope. In any case, the code is there, and I can't get it to compile on Linux without explicitly adding a static prototype.
So, is there any way to suppress the "static declaration follows non-static declaration" error in gcc (or turn it into a warning instead of a hard error), or do I have to edit each of these modules to add prototypes wherever they're missing? As I understand it, this is a case where the behavior is undefined by the standard, so it seems harsh that gcc offers no way to relax its checking for code that compiles elsewhere, no...?
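For illustration, a minimal sketch of the pattern and the fix (the names are made up):

/* Without the prototype below, the call in caller() implicitly
   declares 'extern int helper()' - legal in C89, which is why xlc
   accepts it - and the later static definition then conflicts with
   that implicit extern declaration, which is what GCC rejects.
   Adding the static prototype keeps file scope and compiles anywhere. */
static int helper(int x);   /* the prototype the AIX code omits */

int caller(void)
{
    return helper(42);
}

static int helper(int x)
{
    return x * 2;
}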

This has been a hard error in GCC since 2004. The only option to get this to compile is to downgrade to a really old version of GCC. I verified that GCC 3.4.6 still compiles this, but GCC 4.0.3 does not.
Of course, depending on your target, getting GCC 3.4 to work might be close to impossible.

Does Dev-Cpp 5.11 support C++11?

I struggled to find a clear answer on the first Google page.
I have trouble understanding the term "language standard". I mean, a new standard has to be implemented at the software level, right? It's not just a list of newly discovered things that users can now do, right?
I use delegating constructors and get a warning:
[Warning] delegating constructors only available with -std=c++11 or -std=gnu++11
Still, things seem to work the way I want them to. Is this warning critical? If so, how do I get rid of it?
Dev-Cpp is just an IDE (a frontend); behind it sits MinGW with GCC 4.9.2 as the compiler*. So every time you click "Run" or "Build", it is GCC that does the dirty job. GCC uses the C++03 standard by default, and to use a newer one you have to tell it explicitly via the compiler flag -std=c++11. You can change it in Tools->Compiler Options->Settings->Code generation->Language standard (-std).
I am not sure why delegating constructors work without C++11 (probably a GCC extension), but you will certainly not be able to use the C++11 libraries without -std=c++11. The flag will also get rid of the warning.
(* Assuming you used default Dev-C++ installer.)
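For completeness, a small sketch of the feature in question (the names are made up). With the default standard it compiles but emits the warning above; with -std=c++11 (e.g. g++ -std=c++11 main.cpp) it compiles cleanly:

#include <string>

struct Widget {
    std::string name;
    int size;

    Widget(std::string n, int s) : name(n), size(s) {}
    Widget() : Widget("default", 0) {}   // delegates to the two-argument constructor
};

int main() {
    Widget w;   // built via the delegating constructor
    return 0;
}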

link a shared object (SO) against libbfd

I need to link my shared object against libbfd in order to get human-readable backtraces.
Static linking against libbfd.a fails because it is not compiled with -fPIC, so, as I understand it, it can only go into an executable, not a shared object.
Linking against libbfd.so gives trouble as well.
I need to compile on both Ubuntu 14.04 and Debian Wheezy 7.8, and they have non-intersecting sets of binutils versions: Ubuntu has 2.24, while Debian has 2.22 and 2.25. The problem is that gcc doesn't record the symlink's name, libbfd.so, as the dependency; it uses the SONAME instead, so I end up with either libbfd-2.24-system.so or libbfd-2.25-system.so in my dependencies.
For now I see several approaches:
1. Find some hidden flag that overrides the SONAME during linking. This would be the preferred path.
2. Compile libbfd by hand. I would like to avoid this as much as possible.
3. Manually dlopen+dlsym everything I need (a rough sketch appears at the end of this question).
I read the answer to "gcc link shared library against symbolic link", but it suggests changing the SONAME, which I am not able to do.
Any suggestions?
Thanks.
EDIT: it seems that virtually all static libraries in the Ubuntu repositories are not position-independent; I can't guess why. Combined with the inability to override the SONAME, this makes things much more complicated.
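For reference, a rough sketch of option 3 (build with gcc demo.c -ldl). It assumes the unversioned libbfd.so symlink is present at run time; bfd_init is a real libbfd entry point, but check bfd.h for the signatures of anything else you resolve this way:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* The unversioned name is resolved through the symlink at run
       time, so no versioned SONAME gets baked into the binary. */
    void *handle = dlopen("libbfd.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    void (*my_bfd_init)(void) = (void (*)(void))dlsym(handle, "bfd_init");
    if (!my_bfd_init) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        return 1;
    }
    my_bfd_init();

    /* ...resolve and call the other entry points the same way... */

    dlclose(handle);
    return 0;
}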
Not sure whether I understand your (four-year-old) problem correctly, but having had similar problems with libbfd, I found this solution:
Using the linker flag -lbfd seems to work.
It tells g++ to link against libbfd.
My full command was g++ loader.cc -lbfd.
At least for me, the link-time "undefined reference" errors were solved.
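A minimal link check, assuming the development headers (e.g. the binutils-dev package) are installed; note that some binutils versions insist that PACKAGE and PACKAGE_VERSION be defined before bfd.h is included. Build with g++ check.cc -lbfd:

#define PACKAGE "check"            // appease bfd.h's config.h guard
#define PACKAGE_VERSION "1.0"
#include <bfd.h>

int main()
{
    bfd_init();   // if this links and runs, -lbfd resolved libbfd
    return 0;
}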

Undefined reference to __libc_init_array

I am trying to compile some code for an STM32 chip using the CodeBench G++ Lite tools. However, it generates an error:
startup.o: In function `LoopFillZerobss':
(.text.Reset_Handler+0x2a): undefined reference to `__libc_init_array'
I have googled, and it appears that __libc_init_array is probably part of some standard gcc library, but I am not sure how to fix this error.
I also get errors such as this:
arm-none-eabi-ld: cannot find libc.a
and similarly for libgcc.a and libm.a
The function __libc_init_array is part of CodeSourcery's 'CS3' mechanism for start-up code, which ensures that all of a program's static initialisation happens before main is executed.
Start by ensuring all of the libraries are found. That might be enough to fix all your problems.
One approach is to link with arm-none-eabi-g++ rather than invoking arm-none-eabi-ld directly, because g++ correctly passes some important parameters to arm-none-eabi-ld. In some cases, that might be all that is needed to find and link the correct libraries.
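For reference, here is a sketch of what __libc_init_array does, modelled on newlib's implementation: it walks the constructor tables (the __*_array symbols come from the linker script) and calls each entry before main runs. You normally get this for free from the C library when g++ drives the link; defining it yourself is a last resort.

extern void (*__preinit_array_start[])(void);
extern void (*__preinit_array_end[])(void);
extern void (*__init_array_start[])(void);
extern void (*__init_array_end[])(void);
extern void _init(void);

void __libc_init_array(void)
{
    for (void (**fn)(void) = __preinit_array_start; fn < __preinit_array_end; fn++)
        (*fn)();

    _init();   /* run the old-style .init section, if present */

    for (void (**fn)(void) = __init_array_start; fn < __init_array_end; fn++)
        (*fn)();
}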
If you aren't sure how to build on the command line, or arm-none-eabi-g++ isn't doing everything needed to resolve the missing libraries, go and have a look at the LeafLabs web site, where they show how to build from the command line using Makefiles:
http://leaflabs.com/docs/unix-toolchain.html
They provide a free, open-source IDE for STM32, built for Windows, Linux and Mac, which includes a working gcc-based toolchain for each of those platforms and enough of the libraries to get started: http://leaflabs.com/docs/maple-ide-install.html
Even if you'd prefer to use your toolchain for the actual build, it may be worth using theirs, with their Makefiles, to sanity check the process you are using to build your program.
I am not a member of LeafLabs' staff and have no relationship with the company, other than having bought some of their products; I try to answer questions on their forum.

Is it possible to compile/link to occi with gcc on HPUX?

We have Oracle 11 running on HP-UX 11.31 with gcc 4.4.3. It seems that there is no way to link to OCCI, because it was built with aCC. Is there any workaround for this?
I had the silly idea that I could somehow build a library that basically proxies the connection: build the library with aCC in some way that gcc-compiled code could link against. Is this possible?
No, there isn't a way around that.
Different C compilers produce interchangeable object code because they follow the platform's standard ABI. You can mix and match their object code more or less with impunity.
However, different C++ compilers follow a variety of different conventions, which means their object code is not compatible. These relate to class layout (especially in multiple-inheritance hierarchies and the dreaded 'diamond of death'), but also to name-mangling conventions and exception handling. The name-mangling schemes are deliberately made different so that you cannot accidentally link objects from one compiler with another.
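A quick way to see the name-mangling point for yourself (the file and function names are made up): compile with g++ -c demo.cc and inspect the object with nm demo.o.

// g++ emits the Itanium-ABI name _Z3addii for this function; aCC on
// HP-UX uses its own scheme, so the linker can never match the two.
int add(int a, int b) { return a + b; }

// An extern "C" wrapper is the exception: both compilers give it the
// plain, unmangled symbol 'add_c' - but that only helps for C-style
// interfaces, not for classes, templates or exceptions.
extern "C" int add_c(int a, int b) { return add(a, b); }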
Generally, if libraries are built using a C++ compiler, you have to link your code using the same - or at least a compatible - C++ compiler. And that almost invariably means a compiler from the same family. For example, you might be able to use G++ 4.5.0 even if the code was built with G++ 4.4.2. However, you won't be able to mix aCC with G++.

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences, and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gccs and the other binaries (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference - enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behaviour: code generation does change from gcc release to gcc release, and if your code contains pieces whose semantics are implementation-dependent, that might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on them would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
Both WinARM and YAGARTO are based on gcc and should treat .ld files equally. Both also use the GNU make utility, so makefiles will be processed the same way.
If you are running your project with an on-chip debugger (OCD), there may be a difference between the two OpenOCD debugger implementations, and the commands sent to configure the debugger could be different as well.
If you are producing a hex file, the output could differ because the two toolchains do not use the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you, I'd check the compilation and linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or floating-point conventions. It might even be compiling with an instruction-set extension that isn't supported by your CPU.
