Using the How To Build GCC 4.8.2 ARM Cross-Compiler guide, I installed and set up everything, and it works just fine as described in the post, i.e., I was able to cross-compile a simple C program. But when I try to compile a simple GMP program, I get this error:
fatal error: gmp.h: No such file or directory
compilation terminated.
How should I fix this? My goal is to compile a GMP program. If possible, point me to some good tutorials.
Thanks!
If you want GMP compiled for the target system (ARM), you must compile it separately using the newly built cross-compiler, not as part of building GCC. Placing GMP (along with MPFR, MPC, ISL, CLooG, etc.) in the GCC top-level source directory simply means that it gets compiled and linked into the cross-compiler you're building.
Since the cross-compiler runs on the host system, that copy of GMP is also compiled for the host system; otherwise linking would fail and you wouldn't get a cross-compiler at all. It may sound silly, but there are good reasons for doing it this way, such as buggy prebuilt packages provided by the host system's package manager, or simply avoiding installing those libraries on the host when all you want is the cross-compilation toolchain.
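A rough sketch of what compiling GMP for ARM could look like, assuming a target triplet of arm-linux-gnueabihf and a placeholder install prefix (adjust both to match your toolchain); configure GMP with the cross triplet, install it somewhere the cross-compiler can find it, then point the compile at those paths:

./configure --host=arm-linux-gnueabihf --prefix=$HOME/arm-sysroot/usr
make && make install
arm-linux-gnueabihf-gcc myprog.c -I$HOME/arm-sysroot/usr/include -L$HOME/arm-sysroot/usr/lib -lgmp -o myprog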
Related
I'd like to compile a Haskell project with a .cabal file under Windows.
I have installed the Haskell Platform and Cygwin. One of the dependencies is time, which fails to build during the cabal install command.
The error message is the following:
checking for gcc... C:\PROGRA~1\HASKELL~1\826561~1.1\mingw\bin\gcc.exe
checking if the C compiler is working... no
configure: error: C compiler cannot create executables
So I downloaded another gcc within Cygwin that, I suppose, will work better.
However, this other question mentions that the Haskell Platform now uses MinGW rather than Cygwin to run GNU software.
I changed the location of gcc in the cabal config file, but I still get the same error message (only with the new location of gcc).
So I'm a bit confused here: what exactly is the problem with gcc? Do you have any input on how I could continue building my software?
Fixed (partially) by using Stack. Building is still failing, but for another reason, so I'll ask a separate question.
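For reference, a minimal Stack workflow for an existing .cabal project looks roughly like this (a sketch, not taken from the original question; on Windows, Stack manages its own GHC and MSYS2 toolchain rather than relying on Cygwin):

stack init
stack build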
Thanks again.
I'm trying to compile PyAudio (a Python module) from source, since I'm using Windows and only 32-bit binaries are available; I need 64-bit. Following these instructions I downloaded Cygwin and installed every component, to be safe. Installing PortAudio, the library it depends on, is required first.
When I run the following:
CFLAGS="-mno-cygwin" LDFLAGS="-mno-cygwin" ./configure
I get the error:
configure: error: C compiler cannot create executables. See 'config.log' for more details.
config.log has an additional line below that message:
gcc: The -mno-cygwin flag has been removed; use a mingw-targeted cross-compiler.
This leads me to believe that perhaps Cygwin is using the wrong compiler; the instructions are for using MinGW with Cygwin, but I never specified MinGW anywhere in the process. I also wonder if there's something in the PyAudio build files that needs to be changed for 64-bit. I know nothing about C, compiling, Cygwin or MinGW, and am new to programming in general. Any ideas? Any other information I can provide?
Current versions of Cygwin gcc do not support -mno-cygwin anymore because it never really worked correctly. Instead, you should use a proper cross-compiler, which is provided by the mingw64-i686-gcc packages, then run ./configure --host=i686-w64-mingw32.
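Since the goal here is 64-bit binaries, the analogous 64-bit route would be the mingw64-x86_64-gcc packages and the matching triplet; a sketch (install the packages via the Cygwin setup tool and drop the old -mno-cygwin flags entirely):

./configure --host=x86_64-w64-mingw32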
In some cases an antivirus is causing the problem.
I had Avast and had to disable it.
I'm working on an x86_64 Linux PC.
I've set up a cross-compile toolchain for the Raspberry Pi, and I can compile a basic hello-world program and run it on the Pi.
I'm stuck compiling some open-source programs because ./configure complains about missing packages, for example:
configure: No package 'glib-2.0' found
I'm using ./configure --host=arm-linux-gnueabihf to cross-compile, and it looks good until the error above.
Should I tell ./configure to use libraries from the target system? If so, how do I do that?
Pass options to the configure script so that it finds the cross-compiled development headers and libraries, using CFLAGS and LDFLAGS, as sketched below.
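A minimal sketch, assuming the Raspberry Pi's libraries have been copied into a local sysroot directory (the path and the Raspbian-style multiarch layout are placeholders, not details from the question); since the "No package 'glib-2.0' found" message comes from pkg-config, pointing PKG_CONFIG_PATH at the target's .pc files is usually needed as well:

export SYSROOT=$HOME/rpi-sysroot
export PKG_CONFIG_PATH=$SYSROOT/usr/lib/arm-linux-gnueabihf/pkgconfig
./configure --host=arm-linux-gnueabihf \
    CFLAGS="-I$SYSROOT/usr/include" \
    LDFLAGS="-L$SYSROOT/usr/lib/arm-linux-gnueabihf"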
I've installed GCC 4.6.3 into a non-system path on a Mac system and it works fine. However, GCC wants to use code from libgcc for all the binaries I compile, and running otool -L shows that these compiled programs look for libgcc_s.1.dylib in GCC's install path. I can override this by passing -static-libgcc, which just compiles the stuff needed into the binary and that's fine. The problem is this only seems to work with executables, not shared libraries. If I use GCC to compile some third-party lib I want to use in one of my programs as a .dylib, these libraries still look for libgcc_s.1.dylib in the local GCC install path even if I specify -static-libgcc! Needless to say, this is a problem as there's no guarantee that those libraries will find libgcc when run on some other system.
I tried this with ffmpeg. If I look at config.log, the -static-libgcc is most certainly being used. GCC is just not linking libgcc statically with the resulting dylibs. I even tried the -nostdlib, -nostartfiles and -nodefaultlibs options but they were ignored. Again, I checked config.log and they're definitely there!
I believe this has to do with throwing exceptions across the shared-library boundary. This page says:
There are several situations in which an application should use the shared libgcc instead of the static version. The most common of these is when the application wishes to throw and catch exceptions across different shared libraries. In that case, each of the libraries as well as the application itself should use the shared libgcc. Therefore, the G++ and GCJ drivers automatically add -shared-libgcc whenever you build a shared library or a main executable, because C++ and Java programs typically use exceptions, so this is the right thing to do.
The rest of that section appears to give a possible workaround, which is to use the GCC driver to link your shared library; however, if the statically linked library throws exceptions, you'll probably get a segmentation violation.
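A small sketch of checking the result (the file names are hypothetical): link the dylib through the g++ driver, which adds -shared-libgcc itself, then inspect which libgcc_s the output actually references.

g++ -dynamiclib -o libthird.dylib third.o
otool -L libthird.dylib | grep libgcc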
I am using buildroot to build a fresh gcc cross-compiler on a dedicated machine.
It worked fine, but I now need to run this gcc on another machine, which does not have the same libc version :-(. Of course gcc then crashes.
Is it possible to build gcc statically using buildroot?
You could try passing -static to the linker (via LDFLAGS), but be aware that full static linking is no longer supported by glibc (or rather, it needs a glibc build that supports static linking).
This is because the NSS libraries (Name Service Switch) are loaded dynamically (unless you compile your own glibc, but that defeats the purpose of NSS). It might still be enough for you to reduce the dependencies on system libraries.
But I would assume that a statically linked gcc is fairly huge, which might result in long startup times.
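A hedged sketch of that first suggestion, assuming you rebuild the cross-gcc by hand rather than through Buildroot's own options (the install prefix and target triplet are placeholders); ldd shows which host libraries the result still depends on:

ldd /opt/cross/bin/arm-linux-gnueabihf-gcc
make LDFLAGS="-static" && make install
ldd /opt/cross/bin/arm-linux-gnueabihf-gcc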
If your objective is only to make a relocatable toolchain, statically linking against expat, gmp, mpfr and mpc should be enough. You can simply apply https://patchwork.ozlabs.org/patch/359841/