I have a Cortex-M0 project using a custom Makefile that builds and debugs successfully on a first machine.
Now I am trying to move the project to a second Mac.
Same version of Eclipse.
On build I get a linker error:
EclipseApr2019/gcc-arm-none-eabi-5_2-2015q4/bin/../lib/gcc/arm-none-eabi/5.2.1/../../../../arm-none-eabi/bin/ld: cannot find -lg
My makefile looks like this (extract):
# echo "path="$(TOOLS)
$(TOOLS)arm-none-eabi-gcc -n -v -mcpu=cortex-m0 -mthumb -g -nostartfiles -T STM32F031C6_simple.ld main.c StartUp_simple.s -o $(NAME).elf
I have tried to append the ARM gcc tools directory to the PATH variable in the Project, but no luck.
I would add a -L option to the link stage in the makefile, but I do not know why this library is being pulled in or where it lives. My code only does a series of shifts and reads/writes to registers on an MCU. The build on the first machine worked fine without specifying a library location like this.
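For illustration, the change I have in mind would look something like this (the -L path is purely a placeholder, since I do not know which directory the library actually lives in):
$(TOOLS)arm-none-eabi-gcc -n -v -mcpu=cortex-m0 -mthumb -g -nostartfiles -L/path/to/the/library -T STM32F031C6_simple.ld main.c StartUp_simple.s -o $(NAME).elf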
Given that I have a custom makefile and am not generating the makefile automatically, there are no tool settings (and no library search path) available under Properties / C/C++ Build / Settings.
What is library "g" that the linker is pulling in?
Where is it?
Under Eclipse, how can I point the linker to the library?
Why didn't I need to do that before?
What is some general advice for designing an Eclipse project with a custom makefile to make it most portable between machines?
Thank you.
Eclipse IDE for C/C++ Developers
Version: 2019-03 (4.11.0)
I'm a student doing research involving extending the transactional memory (TM) capabilities of gcc. My goal is to make changes to the gcc source, build gcc from the modified source, and use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of one it would normally use, and this is (or becomes) a "symbol" which needs to be resolved against libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol, but then at runtime my program gives a "relocation error" indicating the symbol is not defined in the file it is searching in:
./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed the most closely related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps do complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu because the library is installed system-wide, but the link script is installed in a GCC-private directory. This doesn’t happen with the default GCC build, so your own GCC build doesn’t do this, and you’ll see libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.
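Concretely, either of these should work (a sketch, using the install prefix from your configure command and the compile command from your question):
LD_LIBRARY_PATH=$HOME/project/install-dir/lib64 ./myprogram
or, baking the path into the binary at link time:
~/project/install-dir/bin/g++ -fgnu-tm -Wl,-rpath=$HOME/project/install-dir/lib64 -o myprogram myprogram.cpp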
I have a problem with PCL: specifically, I want to use it in an existing project with existing Makefiles. However, PCL uses CMake and I couldn't find out how to add it to a Makefile directly. Does anyone know how to do that?
First, try to compile one of the examples provided on the PCL website using CMake.
http://pointclouds.org/documentation/tutorials/pcl_visualizer.php
After compiling the above example, you will find various new files and a folder created by CMake in your directory.
Go to CMakeFiles/pcl_visualizer_demo.dir/.
Open the file named link.txt, which contains the terminal command that dynamically links the various PCL (Point Cloud Library) libraries into the executable.
The command should look similar to the one shown below:
/usr/bin/c++ -O3 -Wno-deprecated -s CMakeFiles/pcl_visualizer_demo.dir -o pcl_visualizer_demo -rdynamic -lpcl_common -Wl,-Bstatic -lflann_cpp_s -Wl,-Bdynamic -lpcl_kdtree -lpcl_octree -lpcl_search -lqhull -lpcl_surface -lpcl_sample_consensus -lpcl_io -lpcl_filters -lpcl_features -lpcl_keypoints -lpcl_registration -lpcl_segmentation -lpcl_recognition -lpcl_visualization -lpcl_people -lpcl_outofcore -lpcl_tracking /usr/lib/libvtkGenericFiltering.so.5.8.0 /usr/lib/libvtkGeovis.so.5.8.0 /usr/lib/libvtkCharts.so.5.8.0 /usr/lib/libvtkViews.so.5.8.0 /usr/lib/libvtkInfovis.so.5.8.0 /usr/lib/libvtkWidgets.so.5.8.0
You can include these libraries in your Makefile directly.
If you use different functions or PCL header files, first compile an example using CMake to see which libraries get linked, and add those to the Makefile of your existing project.
I tried this method for my project and it worked perfectly fine. I also tried pkg-config to link the libraries, which didn't work in my case, and I was not able to find any other method that links all the required libraries as easily.
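As a rough sketch, the relevant part of a hand-written Makefile could then look like this (the target name, object file and library list here are only placeholders; copy the real set of -l flags and .so paths from your own link.txt):
CXX      = c++
CXXFLAGS = -O3 -Wno-deprecated
PCL_LIBS = -lpcl_common -lpcl_kdtree -lpcl_search -lpcl_io -lpcl_filters -lpcl_visualization

pcl_app: main.o
	$(CXX) $(CXXFLAGS) -o $@ main.o -rdynamic $(PCL_LIBS)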
Is there a way to configure CLion to use a local makefile to compile code, rather than CMake? I can't seem to find the way to do it from the build options.
Update: If you are using CLion 2020.2, then it already supports Makefiles. If you are using an older version, read on.
Even though currently only CMake is supported, you can instruct CMake to call make with your custom Makefile. Edit your CMakeLists.txt adding one of these two commands:
add_custom_target
add_custom_command
When you tell CLion to run your program, it will try to find an executable with the same name as the target in the directory pointed to by PROJECT_BINARY_DIR. So as long as your make generates the file where CLion expects it, there will be no problem.
Here is a working example:
Tell CLion to pass its $(PROJECT_BINARY_DIR) to make
This is the sample CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.4)
project(mytest)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
add_custom_target(mytest COMMAND make -C ${mytest_SOURCE_DIR}
CLION_EXE_DIR=${PROJECT_BINARY_DIR})
Tell make to generate the executable in CLion's directory
This is the sample Makefile:
all:
	echo Compiling $(CLION_EXE_DIR)/$@ ...
	g++ mytest.cpp -o $(CLION_EXE_DIR)/mytest
That is all. You may also want to change your program's working directory so that it executes as it does when you run make from inside your directory. For this, edit: Run -> Edit Configurations ... -> mytest -> Working directory
While this is one of the most voted feature requests, there is one plugin available, by Victor Kropp, that adds support for makefiles:
Makefile support plugin for IntelliJ IDEA
Install
You can install directly from the official repository:
Settings > Plugins > search for makefile > Search in repositories > Install > Restart
Use
There are at least three different ways to run:
Right click on a makefile and select Run
Have the makefile open in the editor, put the cursor over one target (anywhere on the line), hit alt + enter, then select make target
Hit ctrl/cmd + shift + F10 on a target (although this one didn't work for me on a mac).
It opens a pane named Run target with the output.
The newest version has better support for virtually any generated Makefile, through compiledb.
Three steps:
install compiledb
pip install compiledb
run a dry make
compiledb -n make
(run autogen and configure first, if needed)
A compile_commands.json file will be generated.
Open the project, and you will see that CLion loads the information from the JSON file.
If your CLion still tries to find CMakeLists.txt and cannot read compile_commands.json, try removing the entire folder, re-downloading the source files, and redoing steps 1-3.
Original post: Working with Makefiles in CLion using Compilation DB
To totally avoid using CMake, you can simply:
Build your project as you normally would with Make through the terminal.
Change your CLion configurations: go to (in the top bar):
Run -> Edit Configurations -> yourProjectFolder
Change the Executable to the one generated with Make
Change the Working directory to the folder holding your executable (if needed)
Remove the Build task in the Before launch:Activate tool window box
And you're all set! You can now use the debug button after your manual build.
Currently, only CMake is supported by CLion. Other build systems will be added in the future, but for now you can only use CMake.
An importer tool has been implemented to help you use CMake.
Edit:
Source : http://blog.jetbrains.com/clion/2014/09/clion-answers-frequently-asked-questions/
I am not very familiar with CMake and could not use Mondkin's solution directly.
Here is what I came up with in my CMakeLists.txt using the latest version of CLion (1.2.4) and MinGW on Windows (I guess you will just need to replace g++ mytest.cpp -o bin/mytest with make if you are not using the same setup):
cmake_minimum_required(VERSION 3.3)
project(mytest)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
add_custom_target(mytest ALL COMMAND mingw32-make WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR})
And the custom Makefile is like this (it is located at the root of my project and generates the executable in a bin directory):
all:
	g++ mytest.cpp -o bin/mytest
I am able to build the executable and errors in the log window are clickable.
Hints in the IDE are quite limited though, which is a big limitation compared to pure CMake projects...
The Android NDK supplied by Google is unable to compile calls to C++11 functions such as std::to_string() and std::stoul etc. (I had tried it with the r10b release from the official site). So the suggestion on SO was to try the CrystaX NDK. I have downloaded it and placed its root folder next to Google's NDK. All I changed in my root CMakeLists.txt file was:
from:
set(PLATFORM_PREFIX "/some-path/android-ndk-r10b/platforms/android-19/arch-arm")
set(PLATFORM_FLAGS "-fPIC -Wno-psabi --sysroot=${PLATFORM_PREFIX}")
set(CMAKE_CXX_FLAGS "${PLATFORM_FLAGS} -march=armv7-a -mfloat-abi=softfp -mfpu=neon" CACHE STRING "")
To:
set(PLATFORM_PREFIX "/some-path/android-ndk-r8-crystax-1/platforms/android-14/arch-arm")
set(PLATFORM_FLAGS "-fPIC -Wno-psabi --sysroot=${PLATFORM_PREFIX}")
set(CMAKE_CXX_FLAGS "${PLATFORM_FLAGS} -march=armv7-a -mfloat-abi=softfp -mfpu=neon" CACHE STRING "")
and cmake command-line from:
cmake .. -DCMAKE_CXX_COMPILER=/some-path/android-ndk-r10b/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/arm-linux-androideabi-g++ -DCMAKE_C_COMPILER=/some-path/android-ndk-r10b/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gcc -DANDROID_BUILD=ON -DANDROID_NDK_ROOT=/some-path/android-ndk-r10b
To:
cmake .. -DCMAKE_CXX_COMPILER=/some-path/android-ndk-r8-crystax-1/toolchains/arm-linux-androideabi-4.7/prebuilt/linux-x86_64/bin/arm-linux-androideabi-g++ -DCMAKE_C_COMPILER=/some-path/android-ndk-r8-crystax-1/toolchains/arm-linux-androideabi-4.7/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gcc -DANDROID_BUILD=ON -DANDROID_NDK_ROOT=/some-path/android-ndk-r8-crystax-1
i.e., changed from the normal NDK to the CrystaX NDK. The program was compiling fine previously, until it tried to compile a file with a call to std::to_string() etc. But after the change, CMake gives an error that it is unable to compile a simple test program because:
/some-path/android-ndk-r8-crystax-1/toolchains/arm-linux-androideabi-4.7/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.7/../../../../arm-linux-androideabi/bin/ld:
error: cannot find -lcrystax
I can see libcrystax.a and libcrystax.so in the directory:
/some-path/android-ndk-r8-crystax-1/sources/crystax/libs/armeabi-v7a
I tried adding link_directories("path-to-above") right at the beginning of the CMakeLists.txt file too, but that didn't solve it either.
It should find it there (after I supply --sysroot etc. as above), just like with the normal NDK. So how should this be solved? Is there any other CMake variable to be set, or something else?
I don't know how your cmake-based build system works, but if you properly add the path /some-path/android-ndk-r8-crystax-1/sources/crystax/libs/armeabi-v7a to the linker search paths, it should find libcrystax and link with it successfully.
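For example, one way to do that in CMake (a sketch; adapt it to wherever your project manages its linker flags) is to append the directory to the linker flags:
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -L/some-path/android-ndk-r8-crystax-1/sources/crystax/libs/armeabi-v7a")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -L/some-path/android-ndk-r8-crystax-1/sources/crystax/libs/armeabi-v7a")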
Please note that the NDK has several parts that are separate from each other - i.e. the sysroot, libcrystax, and the C++ library. This is done to work with the NDK build system, which offers some flexibility in choosing the C++ standard library implementation, and the NDK build system knows where to find all of them. In your case this approach is not so good, so I suggest you first make a standalone toolchain, which contains all of these pieces assembled together. In other words, it would be a classic cross-compile toolchain containing the sysroot, libcrystax and the GNU C++ standard library in places known to the compiler/linker, without any additional options being passed.
To create such a toolchain, cd to the NDK root directory and run the following command:
./build/tools/make-standalone-toolchain.sh --system=linux-x86_64 --toolchain=arm-linux-androideabi-4.7 --platform=android-14 --install-dir=$HOME/arm-linux-androideabi
Then use $HOME/arm-linux-androideabi as a full standalone toolchain for your cmake-based build system.
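With that in place, the cmake invocation from your question would look roughly like this (a sketch; the standalone toolchain already knows its own sysroot, so the separate PLATFORM_PREFIX/--sysroot settings should no longer be needed):
cmake .. -DCMAKE_CXX_COMPILER=$HOME/arm-linux-androideabi/bin/arm-linux-androideabi-g++ -DCMAKE_C_COMPILER=$HOME/arm-linux-androideabi/bin/arm-linux-androideabi-gcc -DANDROID_BUILD=ON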
Please note, however, that an application built with CrystaX NDK r8 will not run on the newest Android 5.0 due to changes in Bionic (libc). Previous Android versions (<=4.4) are all fine. We fixed that issue (and many others) in the upcoming r10 release, which is in its final testing stage. In the meantime you could adapt your project to our r8 release and quickly switch to r10 when it is done - the same approach will work with r10 as well as with r8.
I have compiled gdc together with gcc using the Android build-gcc.sh script, and have included a new stub in build/core/definitions.mk to deal with D language files as part of the build process. I know things are compiling OK at this point, but my problem is linking:
When I build a project, I get this error:
ld: crtbegin_so.o: No such file: No such file or directory
This is true for regular C-only projects as well. Now I ran a quick find in my build directory, and found that the file (crtbegin_so.o) does exist within the sysroot I specified when I compiled gcc (or rather, when build-gcc.sh built it).
What are some things I could look for to find a solution to this problem?
Would copying the files locally and linking directly to them be a decent solution in the interim?
Why would ld (or collect2) be trying to include these for a gdc (D Language) linkage?
The issue arises on NDK r7c for Linux as well.
I found that the toolchain ignores the platform location ($NDK_ROOT/platforms/android-8/arch-arm/usr/lib/) and searches for it in the toolchain path, which is incorrect.
However, as the toolchain also searches for the file in the current directory, one solution is to symlink the correct platform crtbegin_so.o and crtend_so.o into the source directory:
cd src && ln -s $NDK_ROOT/platforms/android-8/arch-arm/usr/lib/crtbegin_so.o
cd src && ln -s $NDK_ROOT/platforms/android-8/arch-arm/usr/lib/crtend_so.o
Thus your second point should work out (where you can do a symlink, instead of a copy)
NOTE 1: This assumes that the code is being compiled for API level 8 (Android 2.2) using the NDK. Please alter the path as per your requirements.
NOTE 2: Configure flags used:
./configure \
--host=arm-linux-androideabi \
CC=arm-linux-androideabi-gcc \
CPPFLAGS="-I$NDK_ROOT/platforms/android-8/arch-arm/usr/include/" \
CFLAGS="-nostdlib" \
LDFLAGS="-Wl,-rpath-link=$NDK_ROOT/platforms/android-8/arch-arm/usr/lib/ -L$NDK_ROOT/platforms/android-8/arch-arm/usr/lib/" \
LIBS="-lc"
I have found that adding --sysroot=$(SYSROOT) to the compiler options fixes the error:
cannot open crtbegin_so.o: No such file or directory
from my makefile...
CC = $(CROSS_COMPILE)gcc -fvisibility=hidden $(INC) $(LIB) -shared
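With the --sysroot flag added as described above, the line becomes something like this (a sketch; it assumes SYSROOT is exported by setenv-android.sh):
CC = $(CROSS_COMPILE)gcc --sysroot=$(SYSROOT) -fvisibility=hidden $(INC) $(LIB) -shared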
Note: this assumes that setenv-android.sh has been run to set up the environment:
$ . ./setenv-android.sh
In my case, quotes were missing from the sysroot path.
When I changed
--sysroot=${ANDROID_NDK}\platforms\android-17\arch-arm
to
--sysroot="${ANDROID_NDK}\platforms\android-17\arch-arm"
the project was compiled and linked successfully.
I faced the same issue in two separate cases:
while building Boost for Android
while using the android-cmake project.
Once I switched to a standalone toolchain, the issue was gone. Here is an example of the command that prepares a standalone toolchain:
$NDK_ROOT/build/tools/make-standalone-toolchain.sh --platform=android-9 --install-dir=android-toolchain --ndk-dir=$NDK_ROOT --system=darwin-x86_64 --toolchain=arm-linux-androideabi-4.9
Boost specific
For Boost you need to specify --sysroot several times in your jam file:
<compileflags>--sysroot=$NDK_ROOT/platforms/android-9/arch-arm
<linkflags>--sysroot=$NDK_ROOT/platforms/android-9/arch-arm
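One common place for these flags is the toolset entry in user-config.jam, roughly along these lines (a sketch; the toolset version tag and compiler path are placeholders for your own setup, and $NDK_ROOT stands for the actual NDK path):
using gcc : android : /path/to/standalone-toolchain/bin/arm-linux-androideabi-g++ :
    <compileflags>--sysroot=$NDK_ROOT/platforms/android-9/arch-arm
    <linkflags>--sysroot=$NDK_ROOT/platforms/android-9/arch-arm ;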