How can I separately build and develop libgomp (OpenMP runtime)? - gcc

I am trying to make changes to the OpenMP runtime library (GOMP). As far as I know, the library ships with the GCC compiler, but my goal is to work on GOMP alone. So I wonder how I can build and develop GOMP separately from GCC. Any help would be highly appreciated. Thank you!

Building libgomp separately from GCC is not supported upstream. It can be done (you'd need to figure out some lengthy configure command lines, and so on), so you're mostly on your own if you attempt it. But: why wouldn't you just build libgomp in its standard GCC build environment?
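For reference, a minimal sketch of that standard workflow: build GCC once, then iterate on libgomp inside the build tree. The paths, configure flags, and host triplet directory here are assumptions to adapt to your setup.

git clone git://gcc.gnu.org/git/gcc.git gcc-src
mkdir gcc-build && cd gcc-build
# --disable-bootstrap speeds up development builds considerably
../gcc-src/configure --prefix=$HOME/gcc-dev --enable-languages=c,c++ --disable-bootstrap
make -j$(nproc)
# after editing files under gcc-src/libgomp, rebuild just the runtime
# (the triplet directory name depends on your host)
make -C x86_64-pc-linux-gnu/libgomp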

Related

Can I use GCC compiler AND Clangd Language Server?

I am working on a project that uses a GCC library (SFML), which, as far as I know, is not available for Clang. I am using CoC with Vim for code completion, but for C++ it needs clangd. Is there a way to use GCC as my compiler but still use the clangd language server?
I have also heard that there may be a way to make Clang recognize GCC libraries/headers, but I've never been able to make it work right. If somebody could point me in the right direction there, that would be helpful too. But I'm used to GCC (I've been using it since I started programming C++), so being able to use clangd and GCC would be preferable.
Yes, it is. I do it with ccls (which is Clang-based as well).
Given that my installation of Clang is not the standard one (I compile it myself, tune it to use libc++ by default, and install it somewhere in my personal space), I have to inject paths to header files known by my Clang but unknown to other Clang-based tools.
I obtain them with
clang++ -E -xc++ - -Wp,-v < /dev/null
Regarding the other options related to the current project, I make sure to have a compile_commands.json compilation database (generated by CMake, or by bear if I have no other choice), and ccls can work from there. I expect clangd to be quite similar in these respects.
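To illustrate that setup, a small sketch assuming a CMake project (CMAKE_EXPORT_COMPILE_COMMANDS is a standard CMake option; the symlink is just the usual convention so the server finds the database at the project root):

# generate the compilation database
cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
# expose it at the project root, where clangd/ccls look for it
ln -s build/compile_commands.json compile_commands.json
# alternative when there is no CMake: record an arbitrary build with bear
bear -- make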
Oops, answered the wrong question.
But for those who use ccls:
create a .ccls file in your project directory and append --gcc-toolchain=/usr to it.
use this tool to generate a compile_commands.json file
see https://github.com/MaskRay/ccls/wiki/FAQ#compiling-with-gcc
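For instance, a minimal sketch of such a .ccls file, written from the shell (the driver name, language standard, and /usr toolchain path are assumptions for illustration; ccls reads one argument per line):

# create a .ccls at the project root: driver first, then one flag per line
# (%cpp restricts a flag to C++ sources)
cat > .ccls <<'EOF'
clang
%cpp -std=c++17
--gcc-toolchain=/usr
EOF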

Can a single gcc generate executables for multiple targets like x86, ARM, PPC?

We want to use a single gcc for multiple targets. Is it possible to build it from source so that it supports multiple targets?
The answer is no, you cannot do this with gcc: each gcc build targets a single machine, so you need a separate cross-compiler per target.
But if you really need a single compiler for multiple targets, you can use the Clang compiler, which selects its target at run time. Here is the link:
https://clang.llvm.org/docs/CrossCompilation.html
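A sketch of what that looks like in practice (the target triples are examples; --target is the standard Clang driver flag, though linking still needs per-target libraries and sysroots):

# one clang binary, three object files for three architectures
clang --target=x86_64-linux-gnu      -c hello.c -o hello-x86_64.o
clang --target=arm-linux-gnueabihf   -c hello.c -o hello-arm.o
clang --target=powerpc64le-linux-gnu -c hello.c -o hello-ppc64le.o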
Adding to Gabriel's answer: all of the architectures you mention are different CPUs.
It is not possible to generate binaries for different architectures with a single gcc build.
You need a separate toolchain for each target, each producing code compatible with that machine.
x86, PPC and ARM are different machines; you cannot run code on them that was built with the host toolchain.
Approaches like the one referenced use machine-specific toolchains rather than the host gcc, which is cumbersome and not a straightforward approach.
Out of curiosity, you can have a look at BitBake, which can drive parallel builds for multiple machines.
I'll also add that to have a useful toolchain you need to build other components besides GCC: an assembler and linker (from binutils, for example) and a C library (e.g. glibc, musl, newlib, etc.). Each such component needs to be configured for the specific target.
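To illustrate, a rough sketch of the shape of such a build for a bare-metal ARM target (the arm-none-eabi triplet, prefix, and source paths are assumptions, and a real toolchain build involves more stages, e.g. via crosstool-NG):

TARGET=arm-none-eabi
PREFIX=$HOME/x-tools/$TARGET
# binutils first: assembler and linker for the target
../binutils-src/configure --target=$TARGET --prefix=$PREFIX
make && make install
# then a minimal stage-1 gcc built against newlib
../gcc-src/configure --target=$TARGET --prefix=$PREFIX \
    --enable-languages=c --with-newlib --without-headers
make all-gcc && make install-gcc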

Building GCC with MPFR, GMP and MPC

Of course we all know that building GCC versions >= 4.1.x requires the supplementary packages MPFR, GMP and MPC to be present.
There are a few ways to handle these GCC dependencies (both sketched below):
1) Download and build each supporting package separately, then tell GCC's configure where they are installed at build time.
2) Download each supporting package, untar it, and move the source into your GCC source directory; the build machinery will then build each package automatically when needed.
(Executing the gcc-src/contrib/download_prerequisites script does the same as option 2.)
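For concreteness, a sketch of both options ($HOME/gcc-deps and $HOME/gcc-arm are assumed install prefixes; --with-gmp, --with-mpfr and --with-mpc are the actual GCC configure options):

# option 1: build GMP/MPFR/MPC yourself, then point GCC's configure at them
../gcc-src/configure --prefix=$HOME/gcc-arm \
    --with-gmp=$HOME/gcc-deps --with-mpfr=$HOME/gcc-deps --with-mpc=$HOME/gcc-deps
# option 2: drop the sources into the GCC source tree and let the build handle them
cd gcc-src && ./contrib/download_prerequisites
cd ../gcc-build && ../gcc-src/configure --prefix=$HOME/gcc-arm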
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dumping each package's source into my GCC source directory and letting make figure it out?
I've more frequently seen build scripts pre-compile each package to binaries and then tell configure where they are located during GCC compilation. Is this the "preferred" way to do it? Why?
For context: I'm mainly building cross-compilers targeting various ARM platforms.
For most use cases I believe that option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually:
A package maintainer who wants to build the libraries separately, because they ship separate packages for MPFR et al.
Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
A GCC developer who wants to keep their source and build trees small, since they don't make any changes to MPFR/GMP/etc.
I haven't done too much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built either way.
I'm not the biggest authority on this though, so YMMV; I may be wrong.

Writing an IDE, use GCC to compile

I want to write my own C/C++ IDE with syntax checking etc. And of course I need compiler functionality. For this I want to use GCC; I think it is a good option, isn't it? The IDE should not call a gcc binary to compile; it should include the gcc source code, because after compiling the IDE I want a stand-alone executable.
So my question: is there something like a tutorial or a good hint on how to realize this?
By the way, it's for Mac; I'll write the IDE with Xcode.
Thank you!
Use LLVM's Clang and its libclang API; it's built for this purpose. GCC is not made to be used as a library.
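Even before wiring up libclang, you can see the kind of IDE-oriented output Clang exposes from the plain command line (an illustration using standard Clang driver flags):

# parse and type-check only, no code generation
clang++ -fsyntax-only main.cpp
# emit fix-it hints in a machine-parseable form an IDE could consume
clang++ -fsyntax-only -fdiagnostics-parseable-fixits main.cpp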
You might develop a plugin for GCC, or a GCC MELT extension. But it may be that GCC plugins are not yet supported on Mac OS X. You might also look into GCCSense, which might fill some of your goals (but I never used it).

Migrating from WinARM to YAGARTO

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to YAGARTO GCC 4.3.3.
I did not expect any differences, and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the YAGARTO version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to YAGARTO.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gccs and the other binaries (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference: enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or any other files being used from WinARM or YAGARTO, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts; but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from one gcc release to the next, and if your code contains pieces whose semantics are implementation-dependent, this might well bite you. Memory layouts of data might change, for example, and code that accidentally relied on them would break.
Seen that happen a lot of times.
Try compiling with different optimization options and see if that makes a difference.
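For example, assuming your makefile honors a CFLAGS override (that variable name is an assumption about your build):

make clean && make CFLAGS="-O0 -g"   # does the unoptimized build run?
make clean && make CFLAGS="-O2 -g"   # and the optimized one?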
Both WinARM and YAGARTO are based on gcc and should treat .ld files equally. Also, both use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project through an on-chip debugger (OCD), there may be differences between the OpenOCD implementations the two toolchains ship, and the commands sent to configure the debugger could differ as well.
If you are producing a hex file, the output could differ because the two toolchains do not use the same version of the newlib library.
To be on the safe side, make sure that in both cases the correct binutils are first in the PATH.
If I were you, I'd check the compilation/linker flags, specifically the defaults. It is very common for different toolchains to have different default ABIs or floating-point conventions. It might even be compiling with an instruction-set extension that isn't supported by your CPU.
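One way to compare those defaults (a sketch; the arm-elf- prefix is an assumption, substitute whatever prefix each toolchain installs):

# dump each compiler's effective target options, then diff the two
arm-elf-gcc -Q --help=target > winarm-target.txt    # run with the WinARM gcc
arm-elf-gcc -Q --help=target > yagarto-target.txt   # run with the YAGARTO gcc
diff winarm-target.txt yagarto-target.txt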
