Is there a way to configure the Makefiles that compile the Linux kernel, modules, and userspace applications so that they also generate assembly files (.s files)? I'm primarily interested in the assembly files for ./lib and for the userspace applications (which I want to modify for an experiment, then compile and integrate in a second run). I'm aware that this ultimately means passing gcc the -S option, but I'm a little confused about how to set that via the HOSTCFLAGS, CFLAGS_MODULE, CFLAGS_KERNEL, CFLAGS_GCOV, KBUILD_CFLAGS, KBUILD_CFLAGS_KERNEL, KBUILD_CFLAGS_MODULE variables, etc.
You can use objdump -S to produce an assembly listing from a compiled .o file (interleaved with source lines if the object was built with debug info). For example:
objdump -S amba-pl011.o > amba-pl011.S
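If you want real compiler-generated assembly rather than a disassembly, kbuild also has single-file targets that invoke gcc -S for you (listed under "Single targets" in the output of make help, in the kernel versions I have checked); lib/sort.s below is just an illustrative file name:

make lib/sort.s
make KCFLAGS=-fverbose-asm lib/sort.s

KCFLAGS is the documented hook for adding flags to all kernel C compilations, so it is a safer place for experiments than editing KBUILD_CFLAGS in the Makefiles. For plain userspace programs you can skip the build system entirely: gcc -S foo.c writes foo.s, which you can edit and then assemble and link in a second run with gcc foo.s -o foo.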
I have installed a GCC cross compiler for the Raspberry Pi on my Ubuntu 20.04 machine, under the /opt folder. Now, when I create a new cross-compile project, I see this list of includes in my Eclipse project explorer:
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/arm-none-linux-gnueabihf/include
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/arm-none-linux-gnueabihf/libc/usr/include
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/lib/gcc/arm-none-linux-gnueabihf/10.2.1/usr/include
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/lib/gcc/arm-none-linux-gnueabihf/10.2.1/usr/include-fixed
How does Eclipse know about these include folders?
What is the purpose of each of these folders? What kinds of includes are they meant for?
Suppose I need to use the SDL2 library. Where should I place its headers and binaries?
As explained in this article (which is a little dated), https://www.eclipse.org/community/eclipse_newsletter/2013/october/article4.php, CDT tries to detect the compiler's built-in symbols and include paths by running the compiler with special options and parsing the output of that special run. Assuming the compiler you are using is arm-linux-gnueabihf-gcc, the command is probably something like: arm-linux-gnueabihf-cpp -v /dev/null -o /dev/null.
All these folders contain header files like stdio.h and stdlib.h from libc, libm, and so on, as well as some ARM-specific headers.
If you are not 100% sure, install the cross compiler in a directory all by itself and add its include directory to your Eclipse project.
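If you want to see exactly which built-in include directories your cross compiler reports (the same information CDT scrapes), you can probe it by hand; the toolchain prefix below is the one from the question, adjust to yours:

echo | arm-none-linux-gnueabihf-gcc -xc -E -Wp,-v - >/dev/null

The directories are printed on stderr between "#include <...> search starts here:" and "End of search list.". For a library like SDL2, one common approach is to install its cross-compiled headers and libraries into the toolchain's sysroot (the libc/usr/include and libc/usr/lib directories above); arguably cleaner is a separate prefix that you add to the project's settings with -I and -L.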
I have some source files that were previously compiled for the x86 architecture. Now I need to compile them for the ARM architecture. When I try something like
g++ -c -x c++ foo.h
gcc -c -x c foo.h
it only gives me a few instructions. I believe it isn't pulling the other included files into the output; all I get is "Disassembly of section .comment:".
Note that it does complain about the other files: for example, if foo.h includes foo1.h and foo2.h, and I don't put foo1.h and foo2.h in the same directory, the compiler refuses to run.
Another thing I don't understand is that gcc and g++ produce the same assembly code; maybe they look the same because both only generate the comment section.
A little bit more details about my problem:
I'm using a USB-I2C converter board. The vendor only provides support for the x86/x64 architecture. I managed to get access to the source files and to get the driver set up. Now I need to test everything together, which means compiling a sample program. That sample links against static libraries, which need to be .a files, so I have to create my own library.a file. I have found the C source files and the .h headers; now I need to compile them into object files and eventually archive those together into a .a file.
Thank you for your help.
Edit (update):
A quick summary of what I have achieved so far:
- I was able to find a driver in a GitHub repo.
- I was able to build the module.
- I also compiled a fresh kernel from the Raspbian source code (I'm doing this on a Raspberry Pi 3):
uname -a
Linux raspberrypi 4.9.35-v7+ #1 SMP Tue Jul 4 22:40:25 BST 2017 armv7l GNU/Linux
I was able to load the module properly:
lsmod
Module Size Used by
spi_diolan_u2c 3247 0
i2c_diolan_u2c 3373 0
diolan_u2c_core 4268 2 i2c_diolan_u2c,spi_diolan_u2c
lsusb
Bus 001 Device 010: ID a257:2014
dmesg
[ 126.758990] i2c_diolan_u2c: loading out-of-tree module taints kernel.
[ 126.759241] i2c_diolan_u2c: Unknown symbol diolan_u2c_transfer (err 0)
[ 130.651475] i2c_diolan_u2c: Unknown symbol diolan_u2c_transfer (err 0)
[ 154.671532] usbcore: registered new interface driver diolan-u2c-core
[ 5915.799739] usb 1-1.2: USB disconnect, device number 4
[10591.295014] usb 1-1.2: new full-speed USB device number 6 using dwc_otg
[10591.425984] usb 1-1.2: New USB device found, idVendor=a257, idProduct=2014
[10591.425997] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[10591.426005] usb 1-1.2: Product: Diolan DLN1
[10591.426012] usb 1-1.2: Manufacturer: Diolan
What I'm not sure:
Whether kernel space is properly mapped to the physical hardware (whether the loaded module can actually get hold of my Diolan board)!
progress: based on my research, I think the hot-plug mechanism should take care of that, but I'm not sure.
confusion: why does lsusb still show only the device ID? I know the manufacturer and the kernel driver determine what info is shown, but showing only the ID doesn't seem right to me.
What I want to do now:
I would like to write a simple C or Python program to interact with my device. Basically, I don't know how to make the connection between user space and kernel space. The manufacturer provides some example source code, libraries, etc., but I have given up on using them: they are for another architecture, they are based on Qt, and I find it nearly impossible to find replacements for the libraries used inside their static archives (I identified those libraries by unpacking the .a files they provide for x86).
I just need to know exactly what the next step should be for me to move on towards getting the board working with the PI.
Thank you :)
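One quick check you can do from user space, assuming the Diolan driver registers a standard I2C adapter with the kernel (I have not verified that for this driver): if it does, the bus should appear under /sys/class/i2c-adapter/ and, once the i2c-dev module is loaded, as a /dev/i2c-N node that the standard i2c-tools utilities can talk to before you write any C or Python:

sudo modprobe i2c-dev
i2cdetect -l
sudo i2cdetect -y 1

i2cdetect -l lists the adapters (look for one naming the Diolan), and i2cdetect -y N scans bus N for responding slave addresses; replace 1 with your actual bus number. Your own C or Python code would then use the same /dev/i2c-N node through the i2c-dev interface, so this confirms the user-space path end to end.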
You don't make any object file from a header file. On Linux object files (and executables) are ELF files. Use file(1) (or objdump(1) ...) to check.
Instead, a header file should be included (by #include preprocessor directive) in some *.cc file (technically a translation unit).
(You could precompile headers, but that is only useful to improve compilation time, which it does not always achieve; it is an advanced, GCC-specific feature; see this.)
You do compile a C++ source file (some *.cc file) into an object file suffixed .o (or, for C, a *.c file into its own .o).
Read more about the preprocessor, and do spend several days reading about C or C++ (which are different programming languages). Also read more about compiling and linking.
I recommend compiling your C++ code with g++ -Wall -Wextra -g to get all warnings (with -Wall -Wextra) and debug information (with -g).
A minimal compilation command to compile some yourfile.cc in C++ into an object file yourfile.o should probably be
g++ -c -Wall -Wextra -g yourfile.cc
(you could remove -Wall -Wextra -g, but I strongly recommend keeping them)
You may need to add other arguments to g++. Their order matters a lot. Read the chapter about Invoking GCC.
Notice that yourfile.cc very likely contains some (and often several) #include directives, usually near its start.
You very rarely need the -x c++ option to g++ (or -x c with gcc); I have used it only once in my life. In your case it surely is a mistake.
Very often, you use some build automation tool like GNU make. So you just use make to compile (but you need to write a Makefile - where tabs are significant)
Notice that some libraries can be header only (but this is not very usual), then you don't build any shared or static ELF libraries from them, but you just include headers in your own C or C++ code.
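Since the stated goal above is a static library, here is a minimal sketch of the whole flow; foo1.c, foo2.c, and main.c are placeholder names:

gcc -c -Wall -Wextra -g foo1.c foo2.c
ar rcs libfoo.a foo1.o foo2.o
gcc -Wall -Wextra -g main.c -L. -lfoo -o test

The first command produces foo1.o and foo2.o, ar rcs archives them into libfoo.a, and the last command links a test program against the archive. For the ARM target, the same commands apply with the cross prefix, e.g. arm-linux-gnueabihf-gcc and arm-linux-gnueabihf-ar.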
addenda
Regarding your http://dlnware.com/sites/dlnware.com/files/downloads/linux_setup.2.0.0.zip package (which is indeed poorly documented), you should look at the several examples given in the linux_setup/examples/ directory. Each of them has a #include "../common/dln_generic.h" (for instance, on the 4th line of examples/leds_gui/main.cpp), which itself pulls in other includes. All the examples are Qt applications and provide a *.pro file for qmake (which generates a Makefile for make from that .pro file). And passing -x c++ to g++ is, rightly, not mentioned at all.
You don't compile header files, you compile the C files that include them.
All code in a header file should be declarations and type definitions, which give information to the compiler, but don't actually produce any machine code. That's why there's nothing in your object files.
I'm currently working on a project using the Arduino 1.0.6 IDE, and it does not seem to accept C++11 std::array. Is it possible to change the compiler flags to make this work?
Add custom compiler flags to platform.local.txt. Just create it in the same directory as platform.txt. For example:
compiler.c.extra_flags=
compiler.c.elf.extra_flags=
compiler.S.extra_flags=
compiler.cpp.extra_flags=-mcall-prologues -fno-split-wide-types -finline-limit=3 -ffast-math
compiler.ar.extra_flags=
compiler.objcopy.eep.extra_flags=
compiler.elf2hex.extra_flags=
In this example, the C++ flags make a large sketch smaller. Of course, you can use your own flags instead. Since platform.local.txt does not overwrite the standard files and is very short, it is very easy to experiment with compiler flags this way.
You can also keep a copy of platform.local.txt in each project's own directory. It has NO effect there, but if you decide to work on an old project again, you can simply copy its platform.local.txt back to the directory where platform.txt lives (typically ./hardware/arduino/avr/) and continue working with project-specific compiler flags.
Obviously, using a Makefile as ladislas suggests is more professional and more convenient if you have multiple projects and don't mind dealing with Makefiles. Still, platform.local.txt is better than messing with platform.txt directly, and it is an easy way to play with compiler flags for people who are already familiar with the Arduino IDE.
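For the original question (making C++11 std::array compile), the flag to put in platform.local.txt would presumably be the language-standard switch; I have not tested this against the avr-gcc bundled with old IDEs (and, as another answer below notes, 1.0.6 itself predates this mechanism), so treat it as a sketch:

compiler.cpp.extra_flags=-std=c++11

If the bundled avr-gcc turns out to be too old to know -std=c++11, trying -std=gnu++11 or upgrading the toolchain would be the fallback.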
You can use #pragma inside the *.ino file so as not to have to create the platform.local.txt file:
#pragma GCC diagnostic warning "-fpermissive"
#pragma GCC diagnostic ignored "-Wwrite-strings"
For other ones, see HERE.
Doing that from the IDE is very difficult.
I would advise you to go full command line by using Sudar's great Arduino Makefile.
This way you'll be able to customize the compiler flags to your liking.
I've also created the Bare Arduino Project to help you get started. The documentation covers a lot of points, from installing the latest avr-gcc toolchain to how to use the repository, and how to compile and upload your code.
If you find something missing, please feel free to file an issue on GitHub so that I can fix it :)
Hope this helps! :)
Yes, but not in 1.0.6; in 1.5.x the .\Arduino\hardware\arduino\avr\platform.txt file specifies the command lines used for compiling.
You can either modify this file directly or copy it to your user .\arduino\hardware\... directory to create a custom platform, so as not to alter the stock IDE. The custom platform will then also be available in other or updated IDEs that you run. You can copy just the platform file and boards.txt, and have your boards.txt file link to the stock core libraries so you don't end up with a one-off. See
Reference: Change CPU speed, Mod New board
I wanted to add the -fpermissive flag.
Under Linux, here is what I have done, with success.
The idea is to replace the two compilers avr-gcc and avr-g++ with two shell scripts that add your flags (-fpermissive in my case) before calling the real compilers.
With root privilege:
rename the compiler avr-gcc (present in /usr/bin) to avr-gcc-real
rename the compiler avr-g++ (present in /usr/bin) to avr-g++-real
Now create two shell scripts, avr-gcc and avr-g++, under /usr/bin.
Script avr-gcc contains:
#!/bin/sh
exec avr-gcc-real -fpermissive "$@"
Script avr-g++ contains:
#!/bin/sh
exec avr-g++-real -fpermissive "$@"
As you may know, "$@" expands to all the parameters passed to the script. Thus everything the IDE passes to the compilers is forwarded by your replacement scripts, which call the real compilers with your flags plus the IDE's own.
Don't forget to add executable property to your scripts:
chmod a+x avr-gcc
chmod a+x avr-g++
Under Windows I don't know if such a solution can be done.
I want to create a Go executable which communicates with Xen through its native interface. There is a C shared library (actually two) for this purpose, and I created a simple Go wrapper for it with cgo.
The problem is that I want to target three Xen versions (3.2, 3.4, 4.0), each of which ships a different shared library. The library provides basically the same API across versions, but the sizes and shapes of the structs defined in the C headers differ, so the same compiled Go binary cannot be used with all of these shared libraries.
I want to have a Go binary holding 'main' and a Go pkg which is the wrapper for Xen.
I was thinking about two solutions:
I could build three different versions of the compiled pkg and also three different versions of the main binary, each linked with the corresponding pkg version. This solution requires hand-building the makefiles so that I can pass the correct paths, etc.
I could build a thin C wrapper as a shared library and build it in three versions against the three versions of the Xen C bindings. This C wrapper would export a stable C interface which is then used by a single Go pkg. I could then deploy the correct C wrapper shared library to each host and have it resolved at runtime.
Is there any other way to handle this? I would prefer pure (c)go code, but I don't like the additional burden of maintaining complicated makefiles.
EDIT: More details about why I feel uncomfortable about handling that manually in the makefiles:
For example, the name of the _obj dir is hardcoded in Make.inc and company, and these makefiles rely on some generated .c files containing special information about the name of the shared library, which I have to clean up before building the next version of the pkg. A snippet of my makefile:
all:
rm -f _obj/_*
$(MAKE) -f Makefile.common XENVERSION=3.0
rm -f _obj/_*
$(MAKE) -f Makefile.common XENVERSION=3.4
rm -f _obj/_*
$(MAKE) -f Makefile.common XENVERSION=4.0
where Makefile.common is basically a normal pkg makefile which uses TARG=$(XENVERSION)/vimini/xen, since I cannot encode the version in the package name itself; that would force me to modify the imports in the source.
By encoding the version in the package directory, I can use GCIMPORTS=-I../../pkg/xen/_obj/$(XENVERSION) to select the right one from the main cmd's Makefile.
Of course, I could roll my own makefile which invokes 6c and 6l, cgo, etc., but I'd prefer to leverage the existing make infrastructure, since there seems to be some wisdom in it.
Have you tried this approach?
Architecture- and operating system-specific code
It could easily be adapted to use a $XENVER environment variable.
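For reference, that mechanism works through per-file build constraints: you keep one cgo wrapper file per Xen version in the same package, put a build-tag comment at the top of each file (for instance a hypothetical tag xen40 declared with a //go:build xen40 line), and let the build select the right file. With today's go tool that selection is a command-line switch rather than a Makefile:

go build -tags xen40 ./...

Only the files whose constraints match the given tags are compiled, so the three version-specific wrappers can coexist in one package without version-encoded package names or import paths.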
I have the following setup. Although my actual setup involves the ARM compiler RealView Developer Suite (RVDS) 3.2 on a Windows host, the situation could apply to any C compiler on any host.
I build an ARM library (a static library, a .a file) from C code using the RVDS 3.2 toolchain on a Windows host. Then I link this library with an application using an ARM-Linux toolchain on a Linux host to get an ARM executable. Now, when I try to debug this executable on Linux with gdb and set a breakpoint in some function from the linked library, gdb cannot set the breakpoint, saying the source is not found. So I manually copied all the source files (*.c) used to build the library into the Linux folder where the executable is. gdb still fails to set the breakpoint.
So now I started thinking:
How can I do source-level debugging, in gdb, of this library that I build on Windows with a different toolchain, by launching the executable produced by linking the library into an application? Is it possible? How? Is there a compiler option in the RVDS toolchain to enable source-level debugging of the library?
Do I need to copy the source files to Linux in exactly the same folder structure they had on Windows?
You could try mimicking the exact same directory structure. If you're not sure which directory structure the compiler recorded in the executable's debug info, you can always inspect it with dwarfdump (on Linux).
First, GDB does not need any source to put breakpoints on functions; so your description of what is actually happening is probably inaccurate. I would start by verifying that the function you want to break on is actually there in the binary:
nm /path/to/app | grep function_desired
Second, to do source level debugging, GDB needs debug info in a format GDB understands. On Linux this generally means DWARF or STABS. It is quite possible that your RVDS compiler does not emit such debug info; if so, source level debugging will not be possible.
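A quick way to check whether the executable actually carries debug info in a format GDB understands (app is a placeholder for your binary; readelf ships with GNU binutils):

readelf -S app | grep debug
file app

DWARF shows up as sections named .debug_info, .debug_line, and so on, and recent versions of file(1) print "with debug_info" when it is present. If those sections are missing, the RVDS side stripped them or never emitted them, which matches the answer above.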
Did you build the library with debugging enabled (the -g option)? Without that, gdb will have difficulty identifying lines, etc.
I've found that -fPIC can cause this sort of issue; the only workaround I've found is to not use -fPIC when I want to debug.