Can I use GCC compiler AND Clangd Language Server?

I am working on a project that uses a GCC library (SFML), which is not available for Clang, as far as I know. I am using CoC with Vim for code completion, but for C++ it needs clangd. Is there a way to use GCC as my compiler but still use the clangd language server?
I have also heard that there may be a way to make Clang recognize GCC libraries/headers, but I've never been able to make it work right. If somebody could point me in the right direction there, that would be helpful too. But I'm used to GCC (I've been using it since I started programming C++), so being able to use both clangd and GCC would be preferable.

Yes, it is. I do it with ccls (which is Clang-based as well).
Given that my installation of Clang is not the standard one (I compile it myself, tune it to use libc++ by default, and install it somewhere in my personal space), I have to inject paths to header files known by Clang but unknown by other Clang-based tools.
I obtain them with
clang++ -E -xc++ - -Wp,-v < /dev/null
Regarding the other options related to the current project, I make sure to have a compile_commands.json compilation database (generated by CMake, or by bear if I have no other choice), and ccls can work from there. I expect clangd to be quite similar in these respects.
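If CMake is your build generator, producing that database is one definition away; this is stock CMake behaviour, independent of ccls or clangd (the -S/-B form needs CMake 3.13 or newer):
cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
ln -s build/compile_commands.json compile_commands.json   # so the language server finds it at the project root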

Oops, I answered the wrong question.
But for those who use ccls:
create a .ccls file in your project directory and append --gcc-toolchain=/usr to it.
use this tool to generate a compile_commands.json file
see https://github.com/MaskRay/ccls/wiki/FAQ#compiling-with-gcc
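For reference, a minimal .ccls for this setup might look like the following (one flag per line; the first line names the compiler driver, and the -std line is just an example for your project; only the --gcc-toolchain line comes from the FAQ above):
clang
%cpp -std=c++17
--gcc-toolchain=/usr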


Does Dev-Cpp 5.11 support C++11?

I struggled to find a clear answer on the first page of Google results.
I have trouble understanding the term "language standard". I mean, the new standard should be implemented at the software level, right? It's not just a list of newly discovered things that users can now do, right?
When I use delegating constructors, I get a warning:
[Warning] delegating constructors only available with -std=c++11 or -std=gnu++11
Still, things seem to work the way I want them to. Is this warning critical? If so, how do I get rid of it?
Dev-Cpp is just an IDE (a front end) for the coder; behind it sits MinGW with GCC 4.9.2 as the compiler.* So every time you click "Run" or "Build", it is GCC that does the dirty work. GCC by default uses the C++03 standard, and to use a newer one you have to tell it explicitly via the compiler flag -std=c++11. You can change it in Tools->Compiler Options->Settings->Code generation->Language standard (-std).
I am not sure why delegating constructors could work without C++11 (probably some GCC extension), but you will certainly not be able to use the C++11 standard library features without -std=c++11. It will also get rid of the warning.
(* Assuming you used the default Dev-C++ installer.)
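To illustrate, here is a minimal example of a delegating constructor; it compiles cleanly only once the flag is set (the file and class names are made up for the example):
// delegating.cpp -- build with: g++ -std=c++11 delegating.cpp
struct Point {
    int x, y;
    Point(int x, int y) : x(x), y(y) {}
    Point() : Point(0, 0) {}   // delegates to Point(int, int); C++11 only
};

int main() { Point p; return p.x; }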

link SO against libbfd

I need to link my shared object against libbfd in order to get human-readable backtraces.
Static linking against libbfd.a fails because it is not compiled with -fPIC, so, as I understand it, it can only go into an executable.
Linking against libbfd.so also gives some trouble, though.
I need to compile on both Ubuntu 14.04 and Debian Wheezy 7.8, and they have non-intersecting sets of binutils versions: Ubuntu has 2.24, while Debian has 2.22 and 2.25. The problem is that gcc doesn't want to use the symlink's name, libbfd.so, to reference the library, and records the SONAME instead. So I end up with either libbfd-2.24-system.so or libbfd-2.25-system.so in my dependencies.
For now I see several approaches:
Find some hidden flag that allows overriding the SONAME during linking. This would be the preferred path.
Compile libbfd by hand. I would avoid this as much as possible.
Use dlopen+dlsym manually for everything I need (a rough sketch of this follows below).
I read the answer gcc link shared library against symbolic link, but it suggests changing the SONAME, which I'm not able to do.
Any suggestions?
Thanks.
EDIT: it seems that virtually all static libs in the Ubuntu repos are not position-independent. I can't guess why. Combined with the inability to override the SONAME, that makes things much more complicated.
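The dlopen route (option 3 above) would look roughly like this. It is an untested sketch: the versioned file names are simply what those distributions ship, and bfd_init is used only as a well-known zero-argument libbfd entry point.
// dlopen_bfd.cpp -- build with: g++ dlopen_bfd.cpp -ldl
#include <dlfcn.h>
#include <cstdio>

int main() {
    // Try the versioned SONAMEs we expect on each distribution.
    void *h = dlopen("libbfd-2.24-system.so", RTLD_NOW);
    if (!h) h = dlopen("libbfd-2.25-system.so", RTLD_NOW);
    if (!h) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    // Look up each required symbol by name instead of linking against it.
    void (*bfd_init_fn)() = reinterpret_cast<void (*)()>(dlsym(h, "bfd_init"));
    if (bfd_init_fn) bfd_init_fn();

    dlclose(h);
    return 0;
}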
Not sure whether I understand your (four-year-old) problem correctly, but having had similar problems with libbfd, I found this solution:
Using the linker flag -lbfd seems to work.
It is a linker flag that tells g++ to link against libbfd.
My full command was g++ loader.cc -lbfd.
At least for me, link-time errors à la "unknown function" were solved.

Writing an IDE, use GCC to compile

I want to write my own C/C++ IDE with syntax checking etc. And of course I need compiler functionality. For this I want to use GCC; I think it is a good option, isn't it? The IDE should not call a gcc binary to compile; it should include the GCC source code, because after compiling the IDE I want a stand-alone executable.
So my question: Is there something like a tutorial, or a good hint on how to realize this?
By the way, it's for Mac; I'll write the IDE with Xcode.
Thank you!
Use LLVM's Clang and its libclang API; it's built for this purpose. GCC is not made to be used as a library.
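As a taste of what that looks like, here is a minimal sketch using libclang's C API to parse a file and print its diagnostics, which is the core of a syntax checker (the file name test.cpp is made up; link with -lclang):
// syntax_check.cpp -- minimal libclang diagnostics dump
#include <clang-c/Index.h>
#include <cstdio>

int main() {
    CXIndex index = clang_createIndex(0, 0);
    CXTranslationUnit tu = clang_parseTranslationUnit(
        index, "test.cpp", nullptr, 0, nullptr, 0, CXTranslationUnit_None);
    if (!tu) return 1;

    // Print every diagnostic the way the clang driver would format it.
    for (unsigned i = 0; i < clang_getNumDiagnostics(tu); ++i) {
        CXDiagnostic diag = clang_getDiagnostic(tu, i);
        CXString text = clang_formatDiagnostic(
            diag, clang_defaultDiagnosticDisplayOptions());
        std::printf("%s\n", clang_getCString(text));
        clang_disposeString(text);
        clang_disposeDiagnostic(diag);
    }
    clang_disposeTranslationUnit(tu);
    clang_disposeIndex(index);
    return 0;
}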
You might develop a plugin for GCC, or a GCC MELT extension. But it could be that GCC plugins are not yet supported on Mac OS X. You might also look into GCCSense, which might fulfill some of your goals (but I have never used it).

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to YAGARTO GCC 4.3.3.
I did not expect any differences, and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the YAGARTO version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to YAGARTO.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gcc binaries and the other tools (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether yours or theirs, and the C library can make a big difference; enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or other files being used from WinARM or YAGARTO, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from GCC release to GCC release, and if your code contains pieces whose semantics are implementation-dependent, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on them would break.
Seen that happen a lot of times.
Try it with different optimization options when compiling and see if that makes a difference.
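As a concrete illustration of such implementation-dependent code (a made-up example, not from the poster's project): the evaluation order of function arguments is unspecified in C and C++, so two GCC releases may legally produce different output here:
// order.cpp -- argument evaluation order is unspecified
#include <cstdio>

int bump(int &i) { return ++i; }

int main() {
    int i = 0;
    // One compiler may print "1 2", another "2 1" -- both are conforming.
    std::printf("%d %d\n", bump(i), bump(i));
    return 0;
}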
Both WinARM and YAGARTO are based on GCC and should treat .ld files equally. Also, both use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project with an OCD (on-chip debugger), then there may be a difference between the OpenOCD debugger implementations. The commands sent to the debugger to configure it could also differ.
If you are producing a hex file, then this could differ too, as the two toolchains do not use the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you, I'd check the compilation/linker flags, specifically the defaults. It is very common for different toolchains to have different default ABIs or floating-point conventions. It might even be compiling with an instruction-set extension that isn't supported by your CPU.
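One hedged way to compare those defaults (assuming the usual arm-elf-/arm-none-eabi-style prefixes of these toolchains) is to ask each compiler what it would actually do, then diff the output of the two:
arm-elf-gcc -Q --help=target     # GCC 4.3+: effective target options (ABI, FPU, CPU)
arm-elf-gcc -dumpspecs           # works on older GCCs too: the driver's built-in spec strings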

Cross compile Boost 1.40 for VxWorks 6.4

I'm trying to migrate a project which uses Boost (particularly boost::thread and boost::asio) to VxWorks.
I can't get Boost to compile using the VxWorks GNU compiler. I figured that this wasn't going to be an issue, as I'd seen patches on the Boost Trac that purport to make this possible, and since the VxWorks compiler is part of the GNU toolchain, I should be able to follow the directions in the Boost docs for cross-compilation.
I'm building on Windows for a PPC VxWorks target.
I changed the user-config.jam file as specified in the Boost docs and used the target-os=linux option to bjam, but bjam appears to hang before it can compile. Closer inspection of the commands issued by bjam (by invoking it with the -n option) reveals that it's trying to compile Boost.Thread's win32 files. This can't be right, as VxWorks uses pthreads.
My bjam command: .\bjam --with-thread toolset=gcc-ppc target-os=linux. gcc-ppc is set in user-config.jam to point to the g++ppc VxWorks cross-compiler.
What am I doing wrong? I believe I have followed the docs to the letter.
If it's #including win32 headers instead of the pthread ones, there could be a discrepancy between the set of macros your compiler defines and the macros the Boost headers check for. I had a problem like that with the smart pointer headers, which in an older version of Boost would check for __ppc while my compiler defined __ppc__ (or vice versa, I can't remember).
touch empty.cpp
ccppc -dD -E empty.cpp
That will show you what macros are predefined by your compiler.
I never tried to compile Boost for VxWorks myself, since I only needed a few of the headers.
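If such a discrepancy turns out to be the cause, one hedged workaround is to define the missing spelling yourself in user-config.jam rather than patching the Boost headers; the version label ppc, the compiler name ccppc, and the macro here are assumptions based on the question and the commands above:
# hypothetical: adjust the compiler name and macro to your toolchain
using gcc : ppc : ccppc : <compileflags>-D__ppc__ ;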
Try also adding
threadapi=pthread
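With the command from the question, that would give (just combining the two, untested on VxWorks):
.\bjam --with-thread toolset=gcc-ppc target-os=linux threadapi=pthread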
The documentation you mention is for Boost.Build, which is a standalone build tool, while the above flag is specific to the Boost.Thread library. What do you mean by "hang"? Because the Boost libraries are huge, it sometimes takes a long time to scan dependencies prior to building.
If it actually hangs, can you catch bjam in a debugger and produce a backtrace? A log of any output will also help.
