How to cross compile GCC on x86 for Sun4v? - c++11

I am writing a programming assignment in C++. The instructor of this course requires all code to be compiled and run on the UNIX server. The server is a SunOS machine. I wrote all my code on my personal laptop with GCC 5.2, which supports most C++11 features. However, when I uploaded my code to the server and tried to compile it, I was surprised to find that the g++ version on the server is 4.2.1, which was released in mid-2007. Many C++11 features are not supported; even the -std argument is not accepted.
I tried to download the source code of the latest GCC and compile it on the server. Unfortunately there is a disk quota of 500 MB per account. I am just wondering whether it is possible to cross-compile GCC on my x86 machine and upload the binaries onto the server so that I can compile my C++ code there.
By the way, I have contacted the IT department about updating the software but they responded that they do not have such plans in the near future.
I did some research on the Internet about cross-compilation and found a couple of tutorials, but they are not easy to follow. In addition to the binaries, there are also a lot of dependencies such as headers and libraries. So before I give up and modify my code to fit the old compiler, can anyone give me some suggestions?
Thank you.
uname -a returns the following result
SunOS 5.10 Generic_147147-26 sun4v sparc SUNW,T5240

Of course it's possible, and it's the way you usually do things when writing operating systems.
First of all, you need to add binutils to the toolbox, too. Once you have all the Holy Sources, let's prepare!
export PREFIX="$HOME/opt" # All the stuff will get installed here!
export TARGET=sparc-sun-solaris2.10 # Solaris 10 (SunOS 5.10) on SPARC -- I'm not *100%* sure this is right, check it yourself
export PATH="$PREFIX/bin:$PATH" # If you forget this/close the terminal, you're doomed!
Now, let's deal with the little monster first... binutils shall be built!
cd $HOME/src # Or where you have the sources
mkdir binutils-build
cd binutils-build
../binutils-src/configure --target=$TARGET --prefix="$PREFIX" --disable-nls
make
make install
--disable-nls disables support for native-language messages (i.e., the compiler printing errors in your own language!) and makes it use English only. That's not a must, but it certainly speeds up building binutils.
Now, compiling GCC itself is a very fragile process, and it can fail anywhere, anyhow, so be prepared! This process is long (it can take up to an hour on some machines), but trust me, LLVM+Clang is worse ;).
cd $HOME/src
cd gcc-src
./contrib/download_prerequisites # Get GMP, MPFR, and MPC
cd ..
mkdir gcc-build
cd gcc-build
../gcc-src/configure --target=$TARGET --prefix="$PREFIX" --disable-nls --enable-languages=c,c++
make all-gcc
make all-target-libgcc
make install-gcc
make install-target-libgcc
If you don't run into issues while compiling (trust me, you will, unless you're too lucky for this world), you'll have a toolchain that runs on your machine but compiles for SunOS/SPARC! By the way, --enable-languages=c,c++ means that GCC will support compiling C and C++ code. Nothing less, nothing more. Try it out with...
sparc-sun-solaris-g++ --version
Now, if you want a compiler for the server that also runs on the server, you will have to mess with a cross-built native compiler (build ≠ host = target), which is close to a Canadian cross. Basically, what you have to do is...
export PREFIX="$HOME/some-holy-directory" # This path *must* be the same for both your machine and the target server!
export HOST=$TARGET
And then repeat the compilation process, remembering to add the option --host=$HOST to both configure scripts! Once done, you must move that some-holy-directory to exactly the same location on the server. If it doesn't fit into the 500 MB quota, well, ask your teacher if you can at least compile assignments on your own machine and then upload them to the server. Otherwise, you're stuck with C++98.
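A rough sketch of that second pass might look like this (I haven't run this exact sequence, and the fresh build-directory names are my own; the important part is passing both --host and --target):
cd $HOME/src
mkdir binutils-build-native gcc-build-native
cd binutils-build-native
../binutils-src/configure --host=$HOST --target=$TARGET --prefix="$PREFIX" --disable-nls
make
make install
cd ../gcc-build-native
../gcc-src/configure --host=$HOST --target=$TARGET --prefix="$PREFIX" --disable-nls --enable-languages=c,c++
make all-gcc
make all-target-libgcc
make install-gcc
make install-target-libgcc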
BTW: Please note that cross-compiling GCC itself is an even more fragile process. This whole post is somewhat theoretical, because I haven't gone through all these steps just for the sake of doing it. Please comment if you run into any major issues, or if someone spots an error in the steps ;).
Edit: Apparently, you'll also need the target's C library and headers (for Solaris that means the system's own libc and headers rather than glibc) and all that funky stuff too...
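If the build stops because target headers or libraries are missing, one common approach is to copy them from the server into a local sysroot and point configure at it. The server name and paths below are placeholders, so treat this only as a sketch of the idea:
mkdir -p $HOME/sysroot/usr
scp -r user@server:/usr/include $HOME/sysroot/usr/
scp -r user@server:/usr/lib $HOME/sysroot/usr/
scp -r user@server:/lib $HOME/sysroot/
# then add --with-sysroot=$HOME/sysroot to the GCC configure line above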
I hope this has shed some light on the matter!

Related

Run a program built with gcc8 on a producing environment without gcc8

My development and production environments are all CentOS 7.7.
In order to compile my program with gcc-8.3.0, I have installed "devtoolset-8" on my development environment, but it cannot be used in the same way as the gcc-4.8.5 that originally shipped with CentOS 7.
Every time I need to compile a program, I must use "scl enable devtoolset-8 -- bash" to switch to gcc8 instead of gcc4.8.5.
When the program is deployed onto the production environment, there is no gcc8, nor libstdc++.so.6.0.25, so it cannot run.
I guess libstdc++.so.6.0.25 is shipped with gcc8? I can neither install "devtoolset-8" on the production environment, nor build gcc8 from source there.
The version of libstdc++ that can be installed from the official yum repo of CentOS is libstdc++.so.6.0.19, hence my programs cannot be loaded on the production environment.
How can I make such programs run?
Thanks!
Please forgive my ugly English.
To avoid having to copy or ship a separate libstdc++.so, you can link statically against the C++ runtime (as suggested in a comment) by passing -static-libstdc++ when linking the C++ program. Also specifying -static-libgcc makes sure the program does not depend on a recent enough version of libgcc_s.so on the system, although that is rarely a problem.
There can also be the issue of the target system having a version of glibc that is too old (relative to the build system). In that case, one can build a recent gcc on the older system itself, so that the resulting C++ executables, as well as libstdc++, are linked against the older glibc. Linking C++ programs with -static-libstdc++ again helps avoid depending on the program being able to find libstdc++.so at run time.
Finally, the C++ program could also be linked with -static, so that it does not depend on any dynamic libraries at all.
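As a concrete example (the file names are placeholders; run it inside the "scl enable devtoolset-8 -- bash" shell so the devtoolset compiler is used):
g++ -o myprog myprog.cpp -static-libstdc++ -static-libgcc
# libstdc++ should no longer appear among the dynamic dependencies
ldd myprog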

GCC built from source in different location is incorrectly using same shared libs as native GCC

I'm a student doing research involving extending the TM capabilities of gcc. My goal is to make changes to gcc source, build gcc from the modified source, and, use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of one it would normally use, and this is / becomes a "symbol" which needs to link to libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol, but then at runtime my program gives a "relocation error" indicating the symbol is not defined in the file it is searching in: ./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed like the ones most closely related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu because the library is installed system-wide, but the link script is installed in a GCC-private directory. This doesn’t happen with the default GCC build, so your own GCC build doesn’t do this, and you’ll see libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.
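For example, using the paths from the question (myprogram is the placeholder name used above):
# option 1: environment variable, affects only the current shell
export LD_LIBRARY_PATH=/home/myusername/project/install-dir/lib64
./myprogram

# option 2: embed the path into the binary at link time
~/project/install-dir/bin/g++ -fgnu-tm -Wl,-rpath=/home/myusername/project/install-dir/lib64 -o myprogram myprogram.cpp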

Can we have a compiler running in embedded device

It may sound weird but I would like to know if we can have compiler in embedded device (lets say gcc support on imx6).
Of course, it is not uncommon to have target tools, but it is not trivial. A non-native (from the host perspective) compiler must be cross-compiled for the target architecture. You didn't provide any details, but maybe your build system can build target tools for you. Of course, you need much more than just a compiler. You probably need make, autotools, and probably more. It depends on what you are trying to compile on the target.
Your best bet would be to gain some proficiency using a cross-compiler environment. If you haven't already, you might check out the Yocto Project. It supports i.mx6 (and much more) and probably provides a path to get target tools on your board.
Good luck!
For the ARM architecture, it is easy to get a target compiler: Linaro Ubuntu from the Linaro project provides a complete solution for ARM, with a GNOME desktop, toolchain, and other useful tools on your target.
You can get more info from the following link:
https://wiki.linaro.org/Platform/DevPlatform/Ubuntu
Yes, that should be easy enough. Check which cross-compiler version you have on your machine, then download the matching GCC source from https://ftp.gnu.org/gnu/gcc/
Now what you want to do is cross-compile the GCC you downloaded using the cross-compiler you already have.
Below is an example of compiling 4.7.4, run from inside the unpacked gcc-4.7.4 source directory. NOTE: set HOST and BUILD according to your platform:
./contrib/download_prerequisites
cd ..
mkdir objdir
cd objdir
../gcc-4.7.4/configure --build=$BUILD \
--host=$HOST \
--target=$HOST \
--prefix=/usr \
--disable-nls \
--enable-languages=c,c++ \
--with-float=hard
make -j $JOBS
make DESTDIR=<path_where_to_install> install
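If you are not sure what to put into BUILD and HOST, something along these lines works; the ARM triplet here is only an example for an i.MX6-class board, so substitute your toolchain's actual prefix:
BUILD=$(gcc -dumpmachine)      # the machine you are compiling on, e.g. x86_64-linux-gnu
HOST=arm-linux-gnueabihf       # what the resulting compiler should run on
JOBS=$(nproc)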

gcc dll - compiled under Linux

I have a project written with gcc, bison, and flex in a Linux environment. The whole project is built into a *.so file and is called from a python-tkinter graphical interface.
There is a need to run it on Windows. However, I'd like to avoid installing the Windows equivalents of the gcc, bison, and flex programs.
Is it possible to force gcc, in the Linux environment, to compile a Windows DLL instead of a *.so? That would let me keep using the same technique as now: just make calls from the python-tkinter graphical interface.
You can, of course, cross-compile it.
You'll need some packages installed, though.
Your project should build normally if you use the MinGW equivalent of GCC for the target architecture.
Also, take a look at this:
Manual for cross-compiling a C++ application from Linux to Windows?
Linking can be kind of troublesome, though, since there may come a time when dynamic linking fails because of version mismatches. In that case you'll need to create some symbolic links to the correct version.
The output of the compilation should be named with -o DYNAMIC-LIBRARY-NAME.dll and, of course, built with the -shared flag.
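As a rough sketch (the package name applies to Debian/Ubuntu-style distributions, and the source file name is a placeholder; flex/bison-generated C compiles the same way):
sudo apt-get install mingw-w64
# build the Windows DLL with the cross gcc instead of the native one
x86_64-w64-mingw32-gcc -shared -o mylib.dll mylib.c -Wl,--out-implib,libmylib.a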
Hope this gives you some pointers.
Regards.

Proper way to upgrade from llvm-g++-4.2 to g++-4.7 on Mac

I have Lion 10.7.3 with the Command-line tool installed. I wanted to experiment with C++11, so I used homebrew to install GCC 4.7 as documented here.
How can I now upgrade the /usr/bin/g++ to be the one installed by Homebrew? Is it as simple as symlinking it? I just want to double check and make sure. Thanks!
First, are you sure you need g++ 4.7? As you can see from the C++11 implementation status page, recent versions of clang support most of C++11 too. Of course there are still things that g++ handles and clang doesn't, but there are also still things that clang supports and g++ doesn't. And, more importantly, you already have a recent version of clang, from Apple, configured and ready to go, as your default compiler. Plus, g++ after 4.2 doesn't support Mac extensions like, say, -arch, which means you can't use it to build a whole lot of third-party software (because most configure scripts assume that if you're on a Mac, your compiler supports Mac extensions).
But if you want g++ 4.7, you can do it. Just not by trying to replace /usr/bin/g++ with a different version. Never replace anything in /usr/bin (or /System) with non-Apple stuff except in a few very rare cases (when you have a strong reassurance from someone who knows what they're talking about).
A better thing to do is to just install another compiler in parallel. Just let Homebrew install it its favorite way (so it installs into some prefix like /usr/local/Cellar/gcc/4.7, then symlinks all the appropriate stuff into /usr/local/bin, etc.), and use it that way.
When compiling your code, instead of writing g++, write /usr/local/bin/g++, or g++-4.7.
If you get tired of doing that, put /usr/local/bin higher on your PATH than /usr/bin, or create a shell alias, or stick the compiler path in the environment variable CXX and write $CXX instead of g++.
If you're using a GUI IDE, you should be able to configure it to use your compiler by setting the path to it somewhere. (Unless you're using Xcode, which you can only configure to work with Apple-tested compilers.)
This is all you need for experimenting with your own code. If you want to compile third-party applications with this compiler, that may be a bit more complicated. You don't often actually compile each source file and link the result together; you just do configure && make and let them do the heavy lifting for you.
Fortunately, most packages will respect the standard environment variables, especially CXX for specifying a default C++ compiler and CC for a default C compiler. (That's why I suggested the name CXX above.)
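For example (the exact g++-4.7 name depends on how Homebrew installed it, as described above):
# call the Homebrew compiler directly for your own code
/usr/local/bin/g++-4.7 -std=c++11 -o myprog myprog.cpp

# or export it once so configure-based builds pick it up
export CC=/usr/local/bin/gcc-4.7
export CXX=/usr/local/bin/g++-4.7
./configure && make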
Just remember that, again, g++ 4.7 doesn't support Mac extensions, so if you're not prepared to debug a bunch of autoconf-based configure scripts that complain your compiler can't generate code (because they assumed they could throw -arch x86_64 at any compiler on a Mac, etc.), don't do this.
