I am trying to build the ARM toolchain on Ubuntu, following the instructions at http://hri.sourceforge.net/tools/arm-elf-gcc.html
I am getting the following error:
Configuring for a x86_64-unknown-linux-gnu host.
Invalid configuration `x86_64-unknown-linux-gnu': machine `x86_64-unknown' not recognized
Invalid configuration `x86_64-unknown-linux-gnu': machine `x86_64-unknown' not recognized
Unrecognized host system name x86_64-unknown-linux-gnu.
Does anybody have an idea what's going wrong here?
A Google search on the "machine `x86_64-unknown' not recognized" error message indicates that this can happen if the config.guess and config.sub files in the program you're building are too old to recognize the machine type for 64-bit Linux. I expect that's your problem. You can fix it by replacing the ones in your GCC source tree with newer versions; your system should have some in the /usr/share/libtool directory that will work. Alternatively, compile in a 32-bit Linux installation, or with the "--build=i686-pc-linux-gnu --host=i686-pc-linux-gnu" configure options.
There are also copies here:
http://cvs.savannah.gnu.org/viewvc/*checkout*/config/config/config.guess
http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/config/config/config.sub
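For example, a minimal sketch (the source directory and the libtool path are illustrative and may differ on your system):
# refresh the machine-detection scripts in the old source tree
$ cp /usr/share/libtool/config.guess /usr/share/libtool/config.sub gcc-2.95.3/
# or, alternatively, force a 32-bit build/host pair at configure time
$ ./configure --target=arm-elf --build=i686-pc-linux-gnu --host=i686-pc-linux-gnu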
The real question, though, is: why are you trying to build a version of the ARM toolchain that's that old? The directions on the site you link to will lead you to download the sources for the 2.95.3 version of GCC -- which was released nearly a decade ago. In GCC terms, that's positively ancient; the latest version is 4.5. It's older than a lot of ARM instruction-set changes, too.
Thus, the right solution to your problem, unless you have some specific need for a 2.95 compiler, is to get a version of GCC that's much more recent.
Also, you'll probably save some pain by not compiling it yourself, unless you particularly want to. There are numerous sources of precompiled cross-compilers; since I work at CodeSourcery, I'll recommend ours (which you can download and use for free):
http://www.codesourcery.com/sgpp/lite/arm/portal/subscription?#template=lite. If you want something equivalent to the compiler on the page you linked to, you probably want the "uClinux" version.
I am patching code into my car's ECU. This has a Motorola MC68376 processor, so I'm using the appropriate CPU32 instruction set.
I want to continue to write in assembly code so that I can explicitly manage control registers, RAM access and allocation, as well as copying code structures which are already in use.
My first patch was successfully compiled in EASy68k, but that program does not support the full instruction set for the CPU32. For example, the DIVS.L command is not supported, so I cannot take a quotient of a 32-bit value.
Thus, before writing my own compiler out of sheer incompetence with available tools, I'm looking for an easier path. I read that gcc has the capability to compile code for the CPU32, but I have failed to get it to work.
I'm using MinGW's gcc (6.3.0) and Eclipse (2020-03). I added the '-mcpu32' or '-march=cpu32' flags to the compiler call, according to:
https://gcc.gnu.org/onlinedocs/gcc/M680x0-Options.html
Unfortunately this returns an error:
gcc: error: unrecognized command line option '-mcpu32'; did you mean '-mcpu='?
or
error: bad value (cpu32) for -march= switch
May I please have some advice for making this work? Does anyone know of a better CPU32 compiler that works with Eclipse?
I had not understood that gcc is conventionally distributed as pre-built binaries, each compiled with different functionality to suit the needs of a given user.
There seem to be two paths forward:
1) compile my own cross-compiler version of GCC
2) download a pre-compiled cross-compiler version of GCC
I chose to follow route 2).
I began the process of installing the 'Windows Subsystem for Linux' and Ubuntu 20.04 Focal Fossa, because I found a pre-made compiler that should be capable of performing cross compilation for the m68k processor: "gobjc-10-m68k-linux-gnu"
https://ubuntu.pkgs.org/20.04/ubuntu-universe-i386/gobjc-10-m68k-linux-gnu_10-20200411-0ubuntu1cross1_i386.deb.html
While I was installing that, I also found an m68k-elf gcc toolchain that is pre-compiled for windows 10:
https://gnutoolchains.com/m68k-elf/
I played with the latter for much of today. Although I was unable to get the toolchain integrated well with Eclipse, it works from the command line to compile a *.s assembly code file. This includes compatibility with the '-mcpu32' flag that I wanted at the outset.
There is still a lot for me to figure out, even after floundering through learning gcc's assembler directives (https://www.eecs.umich.edu/courses/eecs373/readings/Assembler.pdf) and the differences in gcc's assembly syntax compared to the MC68k reference manual (https://www.nxp.com/files-static/archives/doc/ref_manual/M68000PRM.pdf).
I can even convert the code section of the output file to be a proper s-record by using objcopy with the '-O srec' and '--only-section=.text' flags. This helps me patch the code into my ECU.
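For reference, the kind of invocation that works for me looks roughly like this (file names are just examples, and the m68k-elf- prefix comes with the gnutoolchains.com toolchain):
# assemble the CPU32 patch source into an object file
$ m68k-elf-gcc -mcpu32 -c patch.s -o patch.o
# extract the .text section as an S-record for patching into the ECU
$ m68k-elf-objcopy -O srec --only-section=.text patch.o patch.srec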
Thus I've answered my original question.
My development and production environments are all CentOS 7.7.
To compile my program with gcc 8.3.0, I installed "devtoolset-8" on my development environment, but it cannot be used in the same way as the gcc 4.8.5 that originally shipped with CentOS 7.
Every time I need to compile a program, I must run "scl enable devtoolset-8 -- bash" to switch to gcc 8 instead of gcc 4.8.5.
When the program is deployed onto the production environment, there is no gcc 8 and no libstdc++.so.6.0.25, so it cannot run.
I guess libstdc++.so.6.0.25 is released together with gcc 8? I can neither install "devtoolset-8" on the production environment nor build gcc 8 from source there.
The version of libstdc++ that can be installed from the official yum repo of CentOS is libstdc++.so.6.0.19, hence my programs cannot be loaded on the production environment.
How can I get such programs to run?
Thanks!
Please forgive my poor English.
To avoid copying or shipping a separate libstdc++.so and instead link statically against the C++ runtime (as suggested in a comment), you can link C++ programs with -static-libstdc++. Also specifying -static-libgcc makes sure that the program does not depend on a recent enough version of libgcc_s.so on the system, although that should rarely be a problem.
There can also be the issue of the target system having a version of glibc that is too old relative to the build system. In that case, you can build gcc, of however recent a version, on the older system itself, so that the resulting C++ executables as well as libstdc++ are linked against the older glibc. Linking C++ programs with -static-libstdc++ will again help, so the program does not have to find libstdc++.so at run time.
Finally, the C++ program could also be linked with -static, so that it does not depend on any dynamic libraries at all.
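A minimal sketch of such a link step under devtoolset-8 (file names are placeholders):
# build with gcc 8 but avoid run-time dependencies on the newer libstdc++/libgcc_s
$ scl enable devtoolset-8 -- g++ -o myprog myprog.cpp -static-libstdc++ -static-libgcc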
I'm a student doing research involving extending the TM capabilities of gcc. My goal is to make changes to gcc source, build gcc from the modified source, and, use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of one it would normally use, and this is / becomes a "symbol" which needs to link to libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol, but then at runtime my program gives a "relocation error" indicating the symbol is not defined in the file it is searching in: ./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
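For what it's worth, a check along these lines shows the difference (using the symbol name from the error above; exact readelf flags may vary):
$ readelf --dyn-syms /usr/lib/x86_64-linux-gnu/libitm.so.1 | grep _ITM_S1RU4
$ readelf --dyn-syms ~/project/install-dir/lib64/libitm.so.1 | grep _ITM_S1RU4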
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed like the ones closest related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps do complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu because the library is installed system-wide, but the link script is installed in a GCC-private directory. This doesn’t happen with the default GCC build, so your own GCC build doesn’t do this, and you’ll see libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.
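Concretely, either of these should work, assuming the library ended up in ~/project/install-dir/lib64 as described:
# per-run: point the dynamic linker at the freshly built libraries
$ LD_LIBRARY_PATH=$HOME/project/install-dir/lib64 ./test
# or bake the search path into the binary at link time
$ ~/project/install-dir/bin/g++ -fgnu-tm -Wl,-rpath=$HOME/project/install-dir/lib64 -o test myprogram.cpp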
I am trying to build a cross compiler for PowerPC e500mc with target powerpc-e500mc-eabi. As some websites suggested, I built a bootstrap compiler first and then tried to compile newlib with it. But I got an error like:
/bin/sh: powerpc-e500mc-eabi-cc: command not found
I want to know: can we compile a GCC cross compiler directly, without newlib? Also, can anyone tell me the exact prerequisites for the PowerPC e500mc architecture? I have GMP, MPC, MPFR, and binutils, but I'm not sure whether newlib is required or not.
You can build gcc without any C library; there is no need for newlib. You will find a list of dependencies from a crosstool-ng build below.
1. Alternatives
You can build everything manually, which is what you apparently attempted. This is possible, but there are various constraints to keep in mind: for instance, builds might fail if your filesystem is not case-sensitive or if you build inside your source directory, besides all the dependencies and their versions. (I sketch a minimal example of this route after the next paragraph.)
You let crosstool-ng build your cross toolchain. Crosstool-ng is mostly self-contained with few external dependencies. It will download and build all dependencies in the right versions for you, and it comes with various sample configurations. It will check for obstacles like a case insensitive filesystem. It lets you configure your cross toolchain in a similar way you configure a Linux kernel. I've built various cross toolchains by means of it for several years on several host systems without trouble, including Linux, OS X (homebrew) and Windows (cygwin). You find it here: http://crosstool-ng.org/.
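If you do want the manual route, a bare-metal gcc build without any C library typically looks something like the following sketch; the version number, install prefix, and exact option set are illustrative and assume binutils for the target is already installed:
$ ../gcc-5.1.0/configure --target=powerpc-e500mc-eabi --prefix=${HOME}/x-tools/powerpc-e500mc-eabi --enable-languages=c --without-headers --disable-shared --disable-threads
$ make all-gcc
$ make install-gcc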
I'm going to line out the steps that it takes to build a cross toolchain by means of crosstool-ng. I tested this setup on Windows (cygwin) today with the crosstool-ng from git.
2. Download crosstool-ng, build and install it.
Follow the steps in
docs/2 - Installing crosstool-NG.txt.
3. Configure your cross toolchain.
Follow the steps in
docs/3 - Configuring a toolchain.txt.
4. Example
mkdir powerpc-e500v2-eabi
cd powerpc-e500v2-eabi
ct-ng powerpc-e500v2-linux-gnuspe
The last step will create a configuration from a sample which I thought is similar enough to what you want. In the next step, we will adapt this configuration; the criteria are:
no operating system
no C library
EABI
you want e500mc, the sample is e500v2. I leave it up to you to adapt the configuration to it.
4.1. Configuration
ct-ng menuconfig
Operating System
Target OS
Select bare-metal
C-library
C library
Select none
Target options
ABI
Select EABI
C-Compiler
gcc version
Select 5.1.0 (the sample configured gcc 4.6.4 here)
Deselect C++ (since you do not want a C library either, and it would pull extra dependencies)
Debug facilities
Deselect gdb (unless you want it, can pull extra dependencies)
4.2. Build
ct-ng build.4
Crosstool-ng will download and build the following dependencies.
gmp-6.0.0a
mpfr-3.1.2
isl-0.14
mpc-1.0.2
binutils-2.25
gcc-5.1.0
It will install the cross toolchain in
${HOME}/x-tools/powerpc-e500v2-eabi.
You can specify a different install prefix in the configuration.
This build issue can be fixed by creating a symlink powerpc-e500mc-eabi-cc pointing to powerpc-e500mc-eabi-gcc.
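For example, assuming the bootstrap toolchain was installed into an (illustrative) prefix such as ~/cross:
$ cd ~/cross/bin
# newlib's build scripts look for a "<target>-cc" driver, so alias it to the gcc driver
$ ln -s powerpc-e500mc-eabi-gcc powerpc-e500mc-eabi-cc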
I've downloaded MinGW with mingw-get-inst, and now I've noticed that it cannot compile for x64.
So is there any 32-bit binary version of the MinGW compiler that can both compile for 32-bit Windows and also for 64-bit Windows?
I don't want a 64-bit version that can generate 32-bit code, since I want the compiler to also run on 32-bit Windows, and I'm only looking for precompiled binaries here, not source files, since I've spent countless hours compiling GCC and failing, and I've given up for a while. :(
AFAIK MinGW targets either 32-bit Windows or 64-bit Windows, but not both, so you would need two installs. And the latter is still considered beta.
What you want is either mingw-w64-bin_i686-mingw or mingw-w64-bin_i686-cygwin if you want to compile for 64-bit Windows. For Win32, just use what you get with mingw-get-inst.
See http://sourceforge.net/apps/trac/mingw-w64/wiki/download%20filename%20structure for an explanation of file names.
I realize this is an old question. However, it is linked from the many later repeats of this question.
After a lot of research I have found that by now, years later, both compilers are commonly installed by default when installing MinGW from your repository (e.g. Synaptic).
You can check and verify by running Linux's locate command:
$ locate -r "mingw32.*[cg]++$"
On my Ubuntu (13.10) install, issuing the locate command shows the following compilers to choose from by default:
/usr/bin/amd64-mingw32msvc-c++
/usr/bin/amd64-mingw32msvc-g++
/usr/bin/i586-mingw32msvc-c++
/usr/bin/i586-mingw32msvc-g++
/usr/bin/i686-w64-mingw32-c++
/usr/bin/i686-w64-mingw32-g++
/usr/bin/x86_64-w64-mingw32-c++
/usr/bin/x86_64-w64-mingw32-g++
Finally, on many systems all you would have to do is run:
$ sudo apt-get install gcc-mingw32
I hope the many links to this page can spare a lot of programmers some search time.
For your situation, you can download the multilib (including lib32 and lib64) version of MinGW-w64:
Multilib Toolchains (Targeting Win32 and Win64)
By default it compiles for 64-bit. You can add the -m32 flag to compile a 32-bit program.
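A quick sketch, assuming the toolchain's usual x86_64-w64-mingw32-prefixed driver is on your PATH (file names are examples):
# 64-bit output is the default
$ x86_64-w64-mingw32-gcc -o hello64.exe hello.c
# add -m32 for a 32-bit Windows binary
$ x86_64-w64-mingw32-gcc -m32 -o hello32.exe hello.c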
But sadly, no gdb is provided; you have to add it manually. That is because, according to mingw-w64's todo list, the gcc multilib version is done but the gdb multilib version is still in progress, so you may be able to use it in the future.
Support of multilib build in configure and in gcc. Parts are already present in gcc's 4.5 version by using target triplet -w64-mingw32.
gdb -- Native support is present, but some features like multi-arch support (debugging 32-bit and 64-bit by one gdb) are still missing features.
mingw-64-todo-list