Please help me with this gcc compiler command:
gcc -c -ID:\pjtName\lib -c -fprofile-arcs -ftest-coverage D:\pjtName\source\tmp.ada
I am trying to compile tmp.ada with coverage. The .adb and .ads files are located in the D:\pjtName\source folder, and my library files are located in the D:\pjtName\lib folder.
The problem is that gcc does not locate the tmp.ads file or the library files in D:\pjtName\lib; it shows a file-not-found error.
After this command I need to run the gcov command for the tmp.ada file.
GNAT’s build process is complicated. gcc is really too low-level a tool to use easily; instead, use gnatmake and GNAT project files.
You’ve tagged gnat-gps, so I assume you actually have GPS. If that’s so, your best bet would be, when opening GPS, to select Create new project with wizard and go on from there. If you get stuck, use GPS’s included help or come back here.
To get coverage information with GPS, you go to Edit / Edit Project Properties and
in the Ada tab, select Code coverage and Instrument arcs, which includes -ftest-coverage and -fprofile-arcs;
in the Ada Linker tab, select Code coverage, which includes -fprofile-generate (you get link time errors otherwise).
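If you stay on the command line rather than GPS, a rough gnatmake sketch would look like the following (paths taken from the question; it assumes the main unit is tmp.adb and that -aI can find what lives in the lib folder):
cd D:\pjtName\source
gnatmake tmp.adb -aID:\pjtName\lib -cargs -fprofile-arcs -ftest-coverage -largs -fprofile-generate
tmp.exe
gcov tmp.adb
Running the instrumented executable writes the .gcda data files that gcov then reads alongside the .gcno files produced at compile time.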
By the way, you mention a file tmp.ada; it’s best to stick with .ads for specs and .adb for bodies. GNAT does its best, but if your other code includes with Tmp; GNAT will look for tmp.ads. You can alter this behaviour, but why bother unless you have to for other reasons!
As was stated above, don't use gcc.
I usually use:
gnat compile filename.adb
gnat bind filename.ali
gnat link filename.ali -Lexternaldirectory -lexternallib
I have some C++ library code that I want strictly compiled for a quick check, and I don't want any files produced to be used for later stages (assembly, linkage, etc.)
I can do
g++ -S main.cpp
but this will give me an assembly file that I'm just going to wind up deleting anyway.
Is there an option that will tell the compiler to just compile a source file but not produce any output files?
EDIT[0]: I'm using mingw on Windows.
gcc has the option -fsyntax-only:
Check the code for syntax errors, but don’t do anything beyond that.
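With the MinGW g++ from the question, that looks like:
g++ -fsyntax-only main.cpp
No object, assembly, or executable file is written; you only get the diagnostics.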
I'm a student doing research involving extending the TM capabilities of gcc. My goal is to make changes to gcc source, build gcc from the modified source, and, use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of one it would normally use, and this is / becomes a "symbol" which needs to link to libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol. At runtime, however, my program gives a "relocation error" indicating the symbol is not defined in the file being searched: ./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
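For reference, a quick way to compare the two libraries for the symbol named in the error is something like:
readelf --dyn-syms /usr/lib/x86_64-linux-gnu/libitm.so.1 | grep _ITM_S1RU4
readelf --dyn-syms ~/project/install-dir/lib64/libitm.so.1 | grep _ITM_S1RU4
Only the second one shows the new entry.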
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed like the ones closest related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps do complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu because the library is installed system-wide, but the link script is installed in a GCC-private directory. This doesn’t happen with the default GCC build, so your own GCC build doesn’t do this, and you’ll see libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.
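With the install prefix from the configure step above, the two options look roughly like this (pick one):
# set the search path at run time
LD_LIBRARY_PATH=/home/myusername/project/install-dir/lib64 ./myprogram
# or bake the path into the binary at link time
~/project/install-dir/bin/g++ -fgnu-tm -Wl,-rpath=/home/myusername/project/install-dir/lib64 -o myprogram myprogram.cpp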
I'm trying to compile CMake using a non-default GCC installed in /usr/local/gcc530, on Solaris 2.11.
I have LD_LIBRARY_PATH=/usr/local/gcc530/lib/sparcv9
Bootstrap proceeds fine, and the bootstrapped cmake successfully compiles various object files. But when it tries to link the real cmake (and other executables), I get pages of "undefined reference" errors for various standard library functions because, as running the link command manually with -Wl,-verbose shows, the linker links against /usr/lib/64/libstdc++.so from the much older system-default GCC.
This is because apparently CMake tries to find curses/ncurses libraries (even if I tell it BUILD_CursesDialog:BOOL=OFF), finds them in /usr/lib/64, and adds -L/usr/lib/64 to build/Source/CMakeFiles/cmake.dir/link.txt, which causes the linker to use libstdc++.so from there, and not my actual GCC's own.
I found a workaround: I can get the path to the proper libraries from $CC -m64 -print-file-name=libstdc++.so and then put it with -L into LDFLAGS when running ./configure; all works well then.
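In script form, that workaround looks something like this (LIBDIR is just an illustrative variable name; adding a run path as well is an assumption, in case the run-time linker also needs pointing at the right directory):
LIBDIR=$(dirname "$($CC -m64 -print-file-name=libstdc++.so)")
LDFLAGS="-L$LIBDIR" ./configure
# optionally also embed a run path: LDFLAGS="-L$LIBDIR -Wl,-R,$LIBDIR" ./configure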
Is there a less hacky way? It's really weird that I can't tell GCC to prioritize its own libraries.
Also, is there some way to have CMake explain where different parts of a resulting command line came from?
1. cmake is a command from the CMake software (which prepares input for a build automation system); make and make install are commands from the Make software (a build automation system).
2. From reading this post, what I understand is that:
a. This "cmake and make" stuffs actually use g++ / gcc in its implementation. cmake and make stuffs are basically just tools in using g++ / gcc. Is that correct?
b. gcc / g++ are the compiler that do the actual work.
c. So I can just use gcc / g++ directly without using the make and CMake things?
3. According to this stackoverflow answer: CMake takes a CMakeLists.txt file and outputs it to a platform-specific build format, e.g., a Makefile, Visual Studio, etc.
However, I came across this OpenCV installation:
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
It executes the cmake command in a directory where there is no CMakeLists.txt file. Can you explain and elaborate on this?
4. The usual steps that I've seen are: cmake, make, sudo make install.
I read this stackoverflow post, what I understand:
(i) make is for building the project.
(ii) make install is to copy the binaries / executables to the install directories.
a. So when we run make, where are the resulting binary files / executables stored?
b. If we only run make without make install, does it mean that the files are not generated?
c. I came across this OpenCV tutorial on using OpenCV with GCC and CMake. It uses:
cd <DisplayImage_directory>
cmake .
make
Why doesn't it do make install as well?
5. In summary:
CMake takes a CMakeLists.txt file (which is cross-platform) and generates a Makefile (which is specific to a platform).
I could write the Makefile manually and skip the CMake step, but it is better to go through CMake because it is cross-platform; otherwise I would have to rewrite the Makefile whenever I change platforms.
Make takes the Makefile (generated by CMake or written manually) as a guide to compile and build. Make basically uses gcc / g++ or another compiler in its work; Make itself is just a tool for driving the compiler.
make install puts the results / executables into the install path.
CMake generates files for other build systems. These can be Makefiles, Ninja files or projects files for IDEs like Visual Studio or Eclipse. The build files contain calls to compilers like GCC, Clang, or cl.exe. If you have several compilers installed, you can choose one.
All three parts are independent. The compiler, the build system and CMake.
It is easier to understand when you have the history. People used their compiler. Over time they added so many flags, that it was cumbersome to type them every time. So they put the calls in a script. From that the build systems (Make, Ninja) evolved.
People wanted to support multiple platforms, compilers, scenarios and so on, and the build system files became hard to maintain and error-prone to use. That's the reason people invented meta build systems that generate the files for the actual build system. Examples are Autotools or CMake.
Yes
CMake does not use your compiler, and make does not implement the compiler; make calls (uses) the compiler.
The CMakeLists.txt file should be in the parent directory of release. The last argument of the CMake call indicates the path where the CMakeLists.txt file is located.
Right, make generates the files in the build directory. In your example from 3., release is the build directory. You can find all the generated files there and use them. Installing is optional; especially if you want to develop the software, you don't install it.
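Putting 3. and 4. together for the OpenCV example, the flow is roughly the following (the top-level directory name is assumed; what matters is that it contains CMakeLists.txt):
cd opencv                 # directory containing CMakeLists.txt (name assumed)
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make                      # build results stay under release/
sudo make install         # optional: copies them to /usr/local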
Try writing Makefiles for a large project and you will see how much work it is. But yes, everything in 5 is right.
I'm currently working on a project using Arduino 1.0.6 IDE and it does not seem to accept C++11 std::array. Is it possible to change the compiler flag to make this work?
Add custom compiler flags to platform.local.txt. Just create it in the same directory where platform.txt is. For example:
compiler.c.extra_flags=
compiler.c.elf.extra_flags=
compiler.S.extra_flags=
compiler.cpp.extra_flags=-mcall-prologues -fno-split-wide-types -finline-limit=3 -ffast-math
compiler.ar.extra_flags=
compiler.objcopy.eep.extra_flags=
compiler.elf2hex.extra_flags=
In this example the C++ flags will make a large sketch smaller. Of course, you can use your own flags instead. Since platform.local.txt does not overwrite the standard files and is very short, it is very easy to experiment with compiler flags.
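For the std::array question above, the flag you would presumably want instead is the C++ standard switch, assuming the avr-g++ shipped with your IDE is new enough to accept it:
compiler.cpp.extra_flags=-std=gnu++11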
You can save platform.local.txt for each project in its own directory. It will NOT have any effect there, but this way, if you decide to work on your old project again, you can just copy it back to the directory where platform.txt is (typically ./hardware/arduino/avr/) and continue work on your project with project-specific compiler flags.
Obviously, using a Makefile as ladislas suggests is more professional and more convenient if you have multiple projects and do not mind dealing with Makefiles. But still, using platform.local.txt is better than messing with platform.txt directly, and it is an easy way to play with compiler flags for people who are already familiar with the Arduino IDE.
You can use #pragma inside the *.ino file so as not to have to create the local platform file:
#pragma GCC diagnostic warning "-fpermissive"
#pragma GCC diagnostic ignored "-Wwrite-strings"
For other ones, see HERE.
Doing that from the IDE is very difficult.
I would advise you to go full command line by using Sudar's great Arduino Makefile.
This way you'll be able to customise the compiler flags to your liking.
I've also created the Bare Arduino Project to help you get started. The documentation covers a lot of points, from installing the latest avr-gcc toolchain to how to use the repository, compile and upload your code.
If you find something missing, please feel free to file an issue on GitHub so that I can fix it :)
Hope this helps! :)
Yes, but not in 1.0.6. In 1.5.x, the .\Arduino\hardware\arduino\avr\platform.txt file specifies the command lines used for compiling.
One can either modify this file directly or copy it to your user .\arduino\hardware\... directory to create a custom platform, so as not to alter the stock IDE. The custom platform will then also exist in other/updated IDEs that you run. You can copy just the platform file and boards.txt, and have your boards.txt file link to the stock core libraries so you are not maintaining a one-off copy. See
Reference: Change CPU speed, Mod New board
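A sketch of that copy, assuming a Linux-style sketchbook at ~/Arduino, a made-up vendor name myplatform, and <IDE> standing in for the IDE install directory (the exact layout depends on your IDE version):
mkdir -p ~/Arduino/hardware/myplatform/avr
cp <IDE>/hardware/arduino/avr/platform.txt ~/Arduino/hardware/myplatform/avr/
cp <IDE>/hardware/arduino/avr/boards.txt ~/Arduino/hardware/myplatform/avr/
# then point boards.txt at the stock core (e.g. build.core=arduino:arduino) instead of duplicating it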
I wanted to add the -fpermissive flag.
Under Linux, here is what I have done with success.
The idea is to replace the two compilers avr-gcc and avr-g++ with two bash scripts in which you add your flags (-fpermissive for me).
With root privilege:
rename the compiler avr-gcc (present in /usr/bin) to avr-gcc-real
rename the compiler avr-g++ (present in /usr/bin) to avr-g++-real
Now create two bash scripts avr-gcc and avr-g++ under /usr/bin/
script avr-gcc contains this line:
avr-gcc-real -fpermissive "$@"
script avr-g++ contains this line:
avr-g++-real -fpermissive "$@"
As you may know, "$@" denotes all the parameters passed to the script. Thus all the parameters transmitted by the IDE to the compilers are passed on to your bash scripts, which replace them and which call the real compilers with your flags plus the IDE's own.
Don't forget to make your scripts executable:
chmod a+x avr-gcc
chmod a+x avr-g++
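Put together, the whole procedure is roughly the following (run as root; exec and the shebang are small additions to the scripts described above):
mv /usr/bin/avr-gcc /usr/bin/avr-gcc-real
mv /usr/bin/avr-g++ /usr/bin/avr-g++-real
printf '#!/bin/bash\nexec avr-gcc-real -fpermissive "$@"\n' > /usr/bin/avr-gcc
printf '#!/bin/bash\nexec avr-g++-real -fpermissive "$@"\n' > /usr/bin/avr-g++
chmod a+x /usr/bin/avr-gcc /usr/bin/avr-g++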
Under Windows I don't know if such a solution can be done.