Building full GCC under MinGW - gcc

I'm trying to build gcc 4.7.2 under MinGW with cc, c++, fortran, objc, and java.
When the build reaches the final linking step while compiling libgcc, it fails with an error:
cannot find dllcrt2.o
Followed by
cannot find -lmingwthrd, -lmingw32, -lmingwex, -lmoldname, -lmsvcrt, -ladvapi32, -lshell32, -luser32, -lkernel32
I think this is because ld.exe couldn't locate the /mingw/lib directory. Is there any solution to fix this? I tried googling but nothing worked.
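One way to sanity-check that hypothesis (assuming the native MinGW tools are on PATH; with a freshly built compiler you would run the xgcc in the build tree instead) is to ask the toolchain where it actually searches:
gcc -print-search-dirs
ld --verbose | grep SEARCH_DIR
If /mingw/lib (where dllcrt2.o and import libraries such as libmingw32.a live) does not show up in those search paths, the linker indeed has no way to find them.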

What is wrong with the MinGW compilers they ship that you are trying to build your own set?
Configuring and building GCC is a complex task (even more so in such a hostile environment), and a small slip-up can lead to results like the ones you see. Recheck each step.
Grab a copy of the MinGW source, study how they do it, and check their documentation.

Related

GCC built from source in different location is incorrectly using same shared libs as native GCC

I'm a student doing research involving extending the TM (transactional memory) capabilities of gcc. My goal is to make changes to the gcc source, build gcc from the modified source, and use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of one it would normally use, and this becomes a "symbol" which needs to be resolved against libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol, but then at runtime my program gives a "relocation error" indicating the symbol is not defined in the file it is searching in:
./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
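For reference, the kind of readelf check described above looks roughly like this (the symbol name is taken from the relocation error):
readelf --dyn-syms /usr/lib/x86_64-linux-gnu/libitm.so.1 | grep _ITM_S1RU4
readelf --dyn-syms ~/project/install-dir/lib64/libitm.so.1 | grep _ITM_S1RU4
The first command comes back empty, while the second shows the new symbol.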
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed like the ones closest related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps do complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu, because the library itself is installed system-wide while its linker script is installed in a GCC-private directory. A default GCC build doesn't split things up this way, so your own build simply puts libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.
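For example, using the install prefix from the configure command in the question, either of these should do it:
export LD_LIBRARY_PATH=/home/myusername/project/install-dir/lib64:$LD_LIBRARY_PATH
./myprogram
or, baking the search path into the binary at link time:
~/project/install-dir/bin/g++ -fgnu-tm -Wl,-rpath=/home/myusername/project/install-dir/lib64 -o myprogram myprogram.cpp
You can confirm which libitm.so the binary resolves to with ldd ./myprogram.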

libstdc++ for cross gcc with multilib support placed in wrong directory

I managed to build a cross gcc-7.2.0 with multilib support for several Cortex-M targets, with hard or soft floating point according to the processor capabilities.
Now, after the final install step (make install), I find only one libstdc++.a in the installation directory. I see the same problem for the other C++ libraries.
I expected one in every multilib subdirectory, the same way I can find libc, libm and the like. But there are no libstdc++.a files in the multilib subdirectories.
I think this is not right.
Linking my test project fails with
libstdc++.a(atexit_arm.o) uses VFP register arguments, ../target.elf does not.
This suggests problems with the multilib installation.
How can I fix this multilib problem in the build phase?
After I added some configuration options to the configure calls for binutils and GCC, the multilib configuration is working like a charm now.
For binutils I added --enable-version-specific-runtime-libs.
For GCC I added --enable-multiarch --enable-version-specific-runtime-libs.
I don't know if the multiarch option is really necessary for my problem, but I didn't investigate further, so I'll leave the information here.
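For reference, a GCC configure invocation along these lines should reproduce the setup (the target triplet, multilib list, languages, and prefix are assumptions based on the Cortex-M scenario in the question, not the exact command used):
../gcc-7.2.0/configure --target=arm-none-eabi --enable-languages=c,c++ --enable-multilib --with-multilib-list=rmprofile --enable-multiarch --enable-version-specific-runtime-libs --prefix=/opt/gcc-arm-none-eabi
You can check which multilib variants (and therefore which copies of libstdc++.a) the resulting compiler expects with arm-none-eabi-gcc -print-multi-lib.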

Building cmake with non-default GCC uses system libstdc++

I'm trying to compile CMake using a non-default GCC installed in /usr/local/gcc530, on Solaris 2.11.
I have LD_LIBRARY_PATH=/usr/local/gcc530/lib/sparcv9
Bootstrap proceeds fine, and the bootstrapped cmake successfully compiles various object files, but when it tries to link the real cmake (and other executables), I get pages of "undefined reference" errors for various standard library functions because, as running the link command manually with -Wl,-verbose shows, the linker links against /usr/lib/64/libstdc++.so from the much older system-default GCC.
This happens because CMake apparently tries to find curses/ncurses libraries (even if I tell it BUILD_CursesDialog:BOOL=OFF), finds them in /usr/lib/64, and adds -L/usr/lib/64 to build/Source/CMakeFiles/cmake.dir/link.txt, which makes the linker use the libstdc++.so from there rather than the one belonging to my actual GCC.
I found a workaround: I can get the path to the proper libraries from $CC -m64 -print-file-name=libstdc++.so and then pass it with -L in LDFLAGS when running ./configure, and everything works then.
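Spelled out, that workaround looks something like this (the variable name is just illustrative):
LIBSTDCXX=$($CC -m64 -print-file-name=libstdc++.so)
LDFLAGS="-L$(dirname "$LIBSTDCXX")" ./configure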
Is there a less hacky way? It's really weird that I can't tell GCC to prioritize its own libraries.
Also, is there some way to have CMake explain where different parts of a resulting command line came from?

Cross compiling gcc

I'm trying to get gcc running on my Kindle 3. I have a native terminal and ssh working, and I found that the Sourcery G++ toolchain for ARM EABI Linux produces working binaries, so it seems like I should be able to just configure gcc to build using arm-none-eabi-gcc and compile it, but I'm obviously misunderstanding something...
When I configure with --target=arm-none-eabi --host=arm-none-eabi and run make, it compiles using gcc, not arm-none-eabi-gcc. Unless there's some pre-compilation phase that uses gcc? I haven't gotten all the way through the build yet, since I've had other errors I'm working through, but I don't want to waste all this time if I'm configuring something wrong at the beginning.
I found this question, Compile GCC with Code Sourcery, where the OP seems to want the same thing, but it was never answered... Also, I've run into problems when compiling lua with the Sourcery compiler; it seems to be using a different version of glibc, which I assume will be a problem. Is there any easier way to do this?
/pathToStuffifNotPathed/arm-none-linux-gnueabi-g++ foo.c
Shouldn't you invoke the compiler via its tool-chain-prefixed name? I believe so.
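If the goal is to make the GCC build itself pick up the Sourcery tools, one common approach (the triplets and paths here are assumptions, not taken from the question) is to put the toolchain on PATH and name the host and target explicitly:
export PATH=/opt/sourcery/bin:$PATH
../gcc-source/configure --build=x86_64-pc-linux-gnu --host=arm-none-linux-gnueabi --target=arm-none-linux-gnueabi
With --host different from --build, configure should look for arm-none-linux-gnueabi-gcc rather than the native gcc for the host pieces; the native gcc is still used for generator programs that run on the build machine, which may be the "pre-compilation phase" mentioned above.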

GCC GCJ needs ECJ and Other Libraries?

So I just downloaded mingw-w64-bin_i686-mingw_20110410.zip from here (GCC 4.7 apparently), and discovered it had a very recent version of the GCJ compiler.
I tried using it, but apparently gcj requires ecj1.exe, the Eclipse compiler for Java... so where do I find compatible binaries of ECJ and the associated Java libraries it needs (libgcj, etc.)?
Ideally this would be found on the MinGW-w64 project page, but it doesn't seem to exist.
(I've already tried copying them from a slightly older GCC version; it doesn't work.)
For an openSUSE build of gcc, the cause is basically this: if the configure step of the gcc compilation did not find the ecj.jar file, ecj1 will be missing at the time when gcj, which has just been built, is called.
ecj.jar can be taken from ftp://sourceware.org/pub/java/ecj-4.8.jar, for example.
The two options are:
i) Put ecj.jar in $HOME/share/java/ecj.jar, reconfigure gcc with
./configure .... --with-ecj-jar=$HOME/share/java/ecj.jar
and recompile gcc. Future compilations with that gcc will not require ecj1.
ii) Put ecj.jar in $HOME/share/java/ecj.jar and create ecj1 (ecj1.exe on Windows) through a compilation like
gcj -o $HOME/bin/ecj1 --main=org.eclipse.jdt.internal.compiler.batch.GCCMain $HOME/share/java/ecj.jar
assuming that $HOME/bin is in the PATH for subsequent calls of gcj.
The thing that is actually "broken" here is the fact that gcc 4.8.* is not shipped by default with ecj.jar in some standard place.
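Once ecj.jar and ecj1 are in place, a minimal smoke test along these lines (the class and file names are just examples) should confirm that gcj can compile Java source again:
gcj --main=HelloWorld -o HelloWorld HelloWorld.java
./HelloWorld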
That is a very old version of a MinGW-w64 toolchain.
I would suggest downloading one of my builds; I've had reports of gcj working (without libgcj, which does not work on Windows), although I can't seem to find a link to the discussion I had with a user long ago. The user's case had something to do with creating a JNI interface, which didn't require libgcj.
My old builds can be found here for 32-bit and here for 64-bit. I checked the 4.8 release build, and it contains the gcj compiler.
Would you be opposed to downloading the source and building it yourself? I looked over the basic and advanced build docs and didn't see anything about the GCJ compiler or ECJ, but you'll need gcc 4.5.1 in order to build it.
