Compile a simple hello-world C program for MIPS NetBSD

For the last week I have been trying to set up a compiler that can target NetBSD on the MIPS architecture.
I cannot find anything on the internet about how to do this. All the documents refer to compiling the kernel for the architecture, but not programs.
How can this be so hard....
The host is a NetBSD amd64 machine.

Set the compiler appropriately. Point it at the version of gcc in your TOOLDIR; in this case, something like mips--netbsd-gcc. Definitely make sure TOOLDIR is on your PATH, so the driver can find the proper assembler, linker, and libraries.
Take a look at the Makefile in any of src/bin/* as an example, and read through the system mk include files it references (in src/share/mk).
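Outside the BSD Makefile infrastructure, a minimal direct invocation might look like the following sketch; the TOOLDIR path, DESTDIR and tool prefix are illustrative and depend on how you ran build.sh:
TOOLDIR=/usr/obj/tooldir.NetBSD-10.0-amd64    # hypothetical tooldir created by build.sh tools
DESTDIR=/usr/obj/destdir.evbmips              # hypothetical target root holding MIPS libs and headers
export PATH=$TOOLDIR/bin:$PATH
$TOOLDIR/bin/mips--netbsd-gcc --sysroot=$DESTDIR -o hello hello.c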

Generally speaking, the goal is to have a working cross-compiler and a filesystem root for the target, all installed on your development machine. The target root is needed since you need all sorts of libraries to build userland applications. Those libraries need to be compiled for the target, not for the host.
Assuming you build everything from source, it goes as follows:
Choose a prefix for the toolchain (say /opt/mips) and another prefix for the root filesystem of the target (say /opt/target). All of those are on your development machine, not on the target!
Configure, build and install the cross-compiler for your target. This goes into the toolchain prefix.
Configure, build and install the kernel for your target, into the target root prefix. This should install the necessary kernel development headers needed later. If you can install such headers without compiling the kernel, more power to you, of course.
Configure, build and install the C library (say glibc) for your target, into the target root.
Configure, build and install whatever other libraries your userland application needs - into the target root.
Finally, configure, build and install the userland application. Once installed into the target root, you can copy it over to the target into the same prefix (say /opt/target as suggested before).
Generally to install into a different prefix - one that overlaps stuff on your build host (like /usr) - you'd need to do some tricks to fool make install into seeing the target prefix instead of your own. A simple approach would be to have a chroot environment on your build host, where you can bind-mount the prefix (say /usr) read-only, with a writable (mount_union) overlay on top of it.
When you build stuff for the target, you need to pass proper arguments to configure, of course.
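For an autoconf-style library or application, that might look roughly like this; the triplet, prefixes and use of DESTDIR are illustrative:
export PATH=/opt/mips/bin:$PATH                # cross toolchain installed in step 2
./configure --host=mips--netbsd --prefix=/usr  # prefix as it should appear on the target
make
make DESTDIR=/opt/target install               # files physically land under the target root
Installing with DESTDIR keeps the configured prefix baked into the package while placing the files under the target root on the build host, which is one common way around the overlapping-prefix problem described above.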

Related

CMake implicit link directory paths and how to set/override them to enable building a common sandbox on different OS versions w.r.t. lib dependencies

I work with a large source tree that builds with CMake (rev 3.9.6). I often check the tree out into a common NFS-mounted path and then build the sandbox on different machines (all through the same NFS mount).
I would ideally like to be able to first build the tree on a CentOS7 system, and
then log into a CentOS8 system, type make from the top of the tree, and see
there is no need to reconfigure anything (cmake) or rebuild anything (all
make dependencies satisfied).
All the binaries built on the CentOS7 system run on the CentOS8 system, and it looks like all tests pass on both CentOS7 and CentOS8. Building the source tree from scratch on a CentOS8 system also works (after dealing with how compiler warnings changed with the newer compilers on CentOS8).
In practice I've found that building first on CentOS7 causes the cmake-generated files (build.make) to end up with a CentOS7-specific gcc compiler path in them as a make dependency. More specifically, it looks like the generated cmake files from the CentOS7 build,
<top of tree>/CMakeFiles/3.9.6/{CMakeCCompiler.cmake,CMakeCXXCompiler.cmake}
set the search paths for the compiler and system libraries via:
set(CMAKE_C_IMPLICIT_LINK_DIRECTORIES "/usr/lib/gcc/x86_64-redhat-linux/4.8.5;/usr/lib64;/lib64;/usr/lib")
set(CMAKE_CXX_IMPLICIT_LINK_DIRECTORIES "/usr/lib/gcc/x86_64-redhat-linux/4.8.5;/usr/lib64;/lib64;/usr/lib")
The first path, /usr/lib/gcc/x86_64-redhat-linux/4.8.5, is unique to the compiler version on CentOS7 and so does not exist on CentOS8. A cmake-generated dependency using this path (in the build.make files) stops make dead in its tracks when typing make at the top of the sandbox in NFS on a CentOS8 machine, because it can't resolve the non-existent path:
foo: /usr/lib/gcc/x86_64-redhat-linux/4.8.2/libgomp.so
However, that pathname to libgomp.so through the CentOS7 gcc path is a soft link:
/usr/lib/gcc/x86_64-redhat-linux/4.8.2/libgomp.so -> ../../../../lib64/libgomp.so.1.0.0
In other words, the same version of libgomp.so (libgomp.so.1.0.0) is referenced on CentOS7 by:
/usr/lib/gcc/x86_64-redhat-linux/4.8.2/libgomp.so
/usr/lib64/libgomp.so.1
On CentOS8, by contrast, libgomp.so.1.0.0 is found via the paths:
/usr/lib/gcc/x86_64-redhat-linux/8/libgomp.so
/usr/lib64/libgomp.so.1
I've experimented with removing the gcc-specific path from the implicit link cmake variables, to instead have:
set(CMAKE_C_IMPLICIT_LINK_DIRECTORIES "/usr/lib64;/lib64;/usr/lib")
set(CMAKE_CXX_IMPLICIT_LINK_DIRECTORIES "/usr/lib64;/lib64;/usr/lib")
But the CentOS7 compiler-specific path will still be found via the /usr/lib entry, and I cannot exclude /usr/lib as a search path altogether. I would really just like to exclude /usr/lib/gcc (?) as an implicit search path, and even then I would have to specify linking against libgomp.so.1 or libgomp.so.1.0.0 instead of libgomp.so.
(BTW - I'm not sure that setting those cmake variables explicitly before the project statement in the top-level CMakeLists.txt file worked, but I did not see any way to modify/change how the CMakeC[XX]Compiler.cmake files are generated in the first place.)
Now I could add a custom command to trigger PRE_BUILD on every cmake target to edit any cmake-generated build.make so that a dependency such as
foo: /usr/lib/gcc/x86_64-redhat-linux/4.8.2/libgomp.so
... edited through the custom command becomes ...
foo: /lib64/libgomp.so.1.0.0
This does in fact work and allows make to finish checking (and finding) all dependencies when I run it from the top of the tree on CentOS7 or CentOS8 (after first building on CentOS7). But I would prefer not to retroactively edit cmake-generated files.
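For reference, a shell-level equivalent of that retroactive edit, run once over the generated tree rather than as a PRE_BUILD custom command (the sed pattern and paths are purely illustrative):
find . -name build.make -exec sed -i 's|/usr/lib/gcc/x86_64-redhat-linux/[^/]*/libgomp\.so|/lib64/libgomp.so.1.0.0|g' {} +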
I'm happy to hear any suggestions, or to be told that it's just unreasonable to expect this kind of parity when building the same NFS-mounted sandbox on two revisions of the OS with different compilers (revisions and paths), i.e., that the "right" thing to do is to always re-run cmake on CentOS8 and then rebuild everything on CentOS8.

How to run a C program in android-x86 terminal?

I have a C program which is compiled using gcc on Ubuntu. I want to run that executable in the Android terminal. When I run it, it shows either "file or directory is not found" or "not executable:ELF32".
I want to run the code in the Android terminal. Is there any way, or any flags in gcc or another compiler, so that I can run my code in the Android terminal?
Android does not use the same system libraries as Ubuntu, so they will not be found.
There are two solutions:
Copy the libraries you need.
If you can place them in the same filesystem locations they have in Ubuntu, then great; otherwise you'll need to run ld-linux.so manually and tell it where to find the libraries. Or you could relink the program so that it expects to find the dynamic linker and libraries in a non-standard place. You might also use a chroot, but that requires root, and you'd need to find a chroot binary that works.
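For example, if you copy Ubuntu's dynamic loader and libraries to a writable location on the device, you can invoke the loader directly; the paths below are illustrative, and 64-bit builds use ld-linux-x86-64.so.2 instead:
/data/local/tmp/lib/ld-linux.so.2 --library-path /data/local/tmp/lib /data/local/tmp/hello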
Use a static link.
This usually just means passing -static to GCC. You get a much larger binary that should be entirely self-contained, with no dependencies. It requires that static versions of all your libraries are available on your build system. Also, some features (such as DNS lookup) always expect a shared library, so they won't work this way.
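A minimal sketch of the static approach, assuming you have adb access to the device (file names and paths are illustrative):
gcc -static -o hello hello.c
file hello    # should now report "statically linked"
adb push hello /data/local/tmp/ && adb shell /data/local/tmp/hello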
Even then, you should expect some Linux features not to work. Basically, anything that requires hardware features or configuration files in /etc is going to need a lot of effort. There are various projects that have done this already (search "linux chroot android").
I'm not sure what the "not executable:ELF32" message means, but you should check whether you're building 32 or 64-bit executables, and which the Android binaries are using (file <whatever> should tell you).
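For instance, on the build machine (output abridged and illustrative; exact wording varies by file version):
file hello
hello: ELF 32-bit LSB executable, Intel 80386, ... dynamically linked, interpreter /lib/ld-linux.so.2, ...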

cross-gcc doesn't search for target as and ld in path?

I've successfully built a couple of cross-gcc compilers, hosted on OSX Lion and targeting both i386-pc-solaris2.10 and x86_64-linux-gnu.
I have binutils 2.22 for those targets installed under $BINUTILSROOT, with $BINUTILSROOT/bin in my PATH. Reading http://gcc.gnu.org/install/configure.html, in particular
--with-as=pathname
Specify that the compiler should use the assembler pointed to by pathname, rather than the one found by the standard rules to find an assembler, which are:
Unless GCC is being built with a cross compiler, check the libexec/gcc/target/version directory. libexec defaults to exec-prefix/libexec; exec-prefix defaults to prefix, which defaults to /usr/local unless overridden by the --prefix=pathname switch described above. target is the target system triple, such as `sparc-sun-solaris2.7', and version denotes the GCC version, such as 3.0.
If the target system is the same that you are building on, check operating system specific directories (e.g. /usr/ccs/bin on Sun Solaris 2).
Check in the PATH for a tool whose name is prefixed by the target system triple.
Check in the PATH for a tool whose name is not prefixed by the target system triple, if the host and target system triple are the same (in other words, we use a host tool if it can be used for the target as well).
I thought my <target>-gcc (configured with --with-gnu-as --with-gnu-ld) would have picked up i386-pc-solaris2.10-as and x86_64-linux-gnu-as respectively (and the corresponding -ld), because they are in $BINUTILSROOT/bin, which is in the PATH, and so the 3rd bullet from the above list should apply.
But this doesn't seem to work, and I've confirmed with dtrace that <target>-gcc doesn't search for <target>-as and <target>-ld in the PATH.
The only solution I've found to work is to also fully specify as and ld, adding
--with-as=$BINUTILSROOT/bin/<target>-as --with-ld=$BINUTILSROOT/bin/<target>-ld
when configuring gcc.
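Spelled out for one of the targets, that configure invocation looks something like this (the source directory layout and prefix are illustrative):
../gcc-src/configure --target=x86_64-linux-gnu --prefix=/opt/cross \
  --with-gnu-as --with-as=$BINUTILSROOT/bin/x86_64-linux-gnu-as \
  --with-gnu-ld --with-ld=$BINUTILSROOT/bin/x86_64-linux-gnu-ld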
Am I misinterpreting the gcc docs, or is this the only way to get cross-compilation working?
Ordinarily you'd install a cross-compiler in the same directory as your cross-binutils. If you do that it'll Just Work.
If you're not installing the compiler into the same directory because you want to "stage" it for building a package, then you should configure with the --prefix of the final installed location (in which the binutils should already be present), and then install with
make DESTDIR=/path/to/staging/dir install
to override the prefix setting. You'd then copy those files into the true prefix (presumably as part of a package install) before you use them.
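In other words, something along these lines (paths and target are illustrative):
../gcc-src/configure --target=i386-pc-solaris2.10 --prefix=/opt/cross   # binutils already installed under /opt/cross
make
make DESTDIR=/tmp/stage install
# the staged files end up under /tmp/stage/opt/cross; copy them into the real /opt/cross before use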
If you don't want to install in the same directory for another reason then you have to specify the path as you've discovered. There are other ways to make it work, but --with-as is the intended solution. If you really don't like that solution, then you can do
make configure-gcc
ln -s $BINUTILSROOT/bin/as gcc/as
ln -s $BINUTILSROOT/bin/ld gcc/ld
That will make the build work (IIRC), but the final installed compiler will still look in the standard places. In fact, this works because, during build only, the gcc directory is one of the standard places.
The reason for all this is that it doesn't use "x86_64-linux-gnu-as": it actually uses "prefix/x86_64-linux-gnu/bin/as" and if that doesn't exist it looks in the other standard places for "as", and typically finds the host "/usr/bin/as" which doesn't work well (and leads to very confusing error messages).

Cross-compiling Linux kernel for ARM on Windows using Sourcery Toolchain

I am trying to cross-compile a Linux kernel for an ARM-target (Freescale i.Mx28) on a Windows host. I know that this approach is not the best one compared to using a Linux host, but unfortunately it's not up to me to decide that.
The restrictions are:
The kernel has to be the one provided by Freescale (L2.6.35_MX28_SDK_10.12)
It must be built using the Sourcery Toolchain and CodeBench
The whole thing must be done on Windows
I got as far as working around the missing case sensitivity on Windows, so that I can extract the kernel sources using Cygwin. But now I have problems with the kernel Makefile. I think there are some issues with the Windows paths, as I get the error message *** multiple target patterns. Stop. (which comes from the : in paths), plus other errors concerning the dependency check when configuring:
HOSTCC scripts/basic/fixdep
/usr/bin/sh: scripts/basic/fixdep: cannot execute binary file
make[1]: *** [scripts/basic/fixdep] Error 126
make: *** [scripts_basic] Error 2
Is there a way to port the Makefile without having to rewrite it, or is there another way to build the kernel without using the given Makefile? Can I use the Sourcery toolchain or IDE to handle the Makefile?
Is there a way to build the kernel within the given restrictions?
To cross compile the kernel, you'll need two compilers: One that is able to build tools that run in your build environment, and one that can create executables for your target.
It seems like you aren't really cross-compiling; you have just replaced your compiler. You are now building the tools required for the build for ARM and trying to run them on Windows.
You can specify which cross compiler to use:
make ARCH=arm CROSS_COMPILE=your-compiler-prefix- ...
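With the Sourcery G++/CodeBench Lite toolchain for ARM GNU/Linux, the prefix is typically arm-none-linux-gnueabi-, so the invocations look roughly like this (make targets shown for a 2.6.35 tree):
make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- menuconfig
make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- zImage modules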
You might also have a problem with the filesystem. The filesystem in Windows is case-insensitive, and the kernel build might create files where the case matters. To get support for a case-sensitive filesystem on Windows, you can have a look at Windows Services for UNIX.
Use another toolchain! CodeBench is NOT compatible with building Linux on Windows hosts, no matter what eye candy (lies) they put on their website about using CYGPATH, etc.
I have tried this myself for weeks, and the problem is that CodeBench accepts POSIX paths, but insists on outputting Win32 paths that are hard, if not impossible, to control in the Linux kernel make procedures.
I'm not saying it is impossible; I'm sure it is possible. But it is not worth the time, no matter what your boss tells you. There are more problems to consider: the tools in the Linux sources' ./scripts directory are not directly compatible with the Windows environment, and thus, although they might compile, they don't run as expected. They need to be patched!
The best chance you have is compiling your own cross-compiler with Cygwin. Or find one already cooked for you.

USB GCC Development Environment with Libraries

I'm trying to get something of an environment on a USB stick to develop C++ code in. I plan to use other computers, most of the time Linux, to work on this from a command line using g++ and make.
The problem is I need to use some libraries, like Lua and OpenGL, which the computers don't have. I cannot add them to the normal directories; I do not have root on these computers. Most of the solutions I've found involve putting things in /usr/lib/ and the like, but I cannot do that. I've also attempted adding options like '-L/media//lib', which is where they are kept, and it didn't work. When compiling, I get the same errors I got when I first switched to an OS without the libraries installed.
Is there somewhere on the computer outside of /usr/ I can put them, or a way to make gcc 'see' them?
You need more than the libraries to be able to compile code utilizing those libraries. (I'm assuming Linux here; things might be slightly different on e.g. OSX, BSDs, Cygwin, MinGW.)
Libraries
For development you need these 3 things when your code uses a library:
The library header files, .h files
The library development files, libXXX.so or libXXX.a typically
The library runtime files, libXXX.so.Y where Y is a version number. These are not needed if you statically link in the library.
You seem to be missing the header files(?). Add them to your USB stick, say under /media/include.
Development
Use (e.g.) the compiler flag -I/media/include when compiling source code to refer to a non-standard location of header files.
Use the compiler/linker flag -L/media/lib to refer to a non-standard location of libraries.
You might be missing the first step.
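For example (library names are illustrative; adjust to however the Lua and GL libraries on your stick are actually named):
g++ -I/media/include -c main.cpp -o main.o
g++ -o app main.o -L/media/lib -llua -lGL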
Running
For dynamically linked libraries, the system will load them only from default locations, typically /lib/ and /usr/lib/.
Learn the ldd tool to help debug this step.
You need to tell the system where to load additional libraries from when you're running a program; here are 3 alternatives:
Systemwide: edit /etc/ld.so.conf and add /media/lib there. Run ldconfig afterwards.
Local, to the current shell only: set the LD_LIBRARY_PATH environment variable to refer to /media/lib, i.e. run export LD_LIBRARY_PATH=/media/lib
Executable: Hardcode the non-standard library path in the executable. You add this to the linking step when creating your executable: -Wl,-rpath,/media/lib
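Putting the link-time and run-time pieces together (paths and library names are illustrative):
g++ -o app main.o -L/media/lib -Wl,-rpath,/media/lib -llua -lGL
ldd ./app    # the Lua/GL entries should now resolve into /media/lib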
Etc.
There could be other reasons things are not working out. If so, show us the output of ls -l /media/lib, where you put the library header files, the command line you use to compile/link, and the exact errors you get. Common causes include:
Missing the headers and/or development libraries (for dynamic libraries there is usually a symlink from libXXX.so to libXXX.so.Y; the linker needs the libXXX.so and will not look directly at libXXX.so.Y).
Using libraries not compatible with your current OS/architecture (libraries compiled on one Linux distro are often not compatible with another distro, or even another minor version of the same distro).
Using a USB stick with a FAT32 filesystem: you'll get in trouble with symlinks.
