Why can't I run custom application on my Beaglebone board? [duplicate] - linux-kernel

This question already has answers here:
what does "-sh: executable_path:not found" mean
(2 answers)
Closed 3 years ago.
I have cross-compiled this small application for my BeagleBone board:
/* led_test.c */
int main(int argc, char const *argv[])
{
return 0;
}
Compilation succeeded, but when I try to run the application on the target board, I get this:
# cd /bin/
# ls -la | grep led_test
-rwxr-xr-x 1 default default 13512 Feb 5 2020 led_test
# led_test
-sh: ./led_test: not found
Why can't I run a custom application on my BeagleBone board? Could anyone explain this to me, please?
Some information about my environment:
1. work-station: Ubuntu 18.04.4 LTS x86-64
2. target machine: ARMv7 beaglebone board
3. cross-compiler: gcc-linaro-7.3.1-2018.05-x86_64_arm-linux-gnueabihf
4. I built u-boot and Linux kernel with this toolchain and mounted rootfs via NFS.
UPD 1:
I tried using ./led_test instead of led_test. It makes no difference, because my application is placed in the /bin directory.

It's likely your userspace (rootfs) was not built with the same cross-toolchain you used to compile your binary. You may have built the kernel with it, but that does not matter; what matters is the rootfs.
Your program is dynamically linked. When the program runs, the kernel actually loads the dynamic linker (the ELF interpreter) first, and that in turn maps the executable and its shared libraries into the process's address space.
To see which dynamic linker a binary requests, run readelf -l on it (on either the host or the target). Example:
$ readelf -l a.out
Elf file type is EXEC (Executable file)
[... more lines ...]
[Requesting program interpreter: /lib/ld-linux-armhf.so.3]
[... more output ...]
The line with the program interpreter is the one to look for; file will also give this information. It's probably the case that the file named there is not present on your rootfs. That missing file is what the "not found" error is about.
What you need to do is either use the correct cross-toolchain (one that uses the same C library version) for the rootfs you have, or build a new rootfs with the same toolchain.
Take a look at buildroot for an easy way to make a new, simple BeagleBone Black rootfs; it's far faster and simpler than using Yocto/Poky.
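One quick sanity check (my own suggestion, not from the answer above): a statically linked binary requests no program interpreter, so if a static build of the same program runs on the target, the interpreter/libc mismatch is confirmed. CROSS_COMPILE is an assumed variable here; set it to your toolchain prefix (e.g. arm-linux-gnueabihf-), otherwise it falls back to the host gcc.

```shell
# Build the same program statically; a static binary needs no
# program interpreter, so it sidesteps any rootfs/libc mismatch.
# CROSS_COMPILE is an assumption -- e.g. export CROSS_COMPILE=arm-linux-gnueabihf-
CC="${CROSS_COMPILE:-}gcc"

cat > led_test.c <<'EOF'
int main(void) { return 0; }
EOF

"$CC" -static -o led_test_static led_test.c
# A static binary has no "Requesting program interpreter" line:
readelf -l led_test_static | grep interpreter || echo "no interpreter (static)"
```

If led_test_static runs on the BeagleBone while led_test does not, the missing-interpreter explanation above is confirmed.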

Try running it by absolute path, or, if you are in its directory, as ./led_test.
Also, show the output of file:
file /bin/led_test
Or run the application under gdb.

Related

Cross compiled binary not running on RPI, did I compile it correctly?

I am trying to cross-compile a small Rust application for the RPI. I am cross-compiling because compiling directly on the Pi takes way too long and it hits 75°C.
I followed various instructions, but what I ended up doing is this:
Install "armv7-unknown-linux-gnueabihf" target with rustup
Download rpi tools from here: https://github.com/raspberrypi/tools
Add the "tools/arm-bcm2708/arm-linux-gnueabihf/bin/" folder to PATH
Add ".cargo/config" file with:
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
run "cargo build --target armv7-unknown-linux-gnueabihf --release"
scp the file to the RPI
chmod +x the_file
do "./the_file"
I get bash: ./the_file: No such file or directory
Yes, I am indeed in the right directory.
So this is the output from "file":
ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically
linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 2.6.32,
with debug_info, not stripped
I'm not experienced enough with this sort of stuff to determine if the binary that I produced is suitable to be run on an RPI3 B.
Did I produce the correct "type" of binary?
P.S. I am running the DietPi distro on the Pi. It is based on Debian, if that's of any relevance.
So I solved this by cheating. I found https://github.com/rust-embedded/cross which took about 30 seconds to get going and now I can cross compile to pretty much anything. I highly recommend it!
The error message "No such file or directory" is not about your executable itself but about the dynamic linker or shared libraries it needs, which are missing from the target system.
To find out which libraries your executable needs, run the following command.
ldd /usr/bin/lsmem
This will output something like this
linux-vdso.so.1 (0x00007fffc87f1000)
libsmartcols.so.1 => /lib/x86_64-linux-gnu/libsmartcols.so.1 (0x00007fe82fe71000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe82fc7f000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe82fedd000)
Now you have to check that all these libraries are available on your target system. rust-cross probably uses the correct linker for your target, which is probably why it works there. To change the linker yourself, see https://stackoverflow.com/a/57817848/5809980

How to properly make an object file from a header file (using an I2C board with a Debian Linux machine)

I have some source files that previously have been compiled for X86 architecture. Now I need to compile them for ARM architecture. When I try to use something like
g++ -c -x c++ foo.h
gcc -c -x c foo.h
it only gives me a few instructions. I believe it doesn't link my header file to the other included files; I only get "Disassembly of section .comment:".
Note that it does complain about missing files: for example, if foo.h includes foo1.h and foo2.h and those headers are not in the same directory, the compiler fails.
Another thing I don't understand is that both gcc and g++ produce the same assembly code; maybe they look the same because both only generate the comment section.
A little bit more details about my problem:
I'm using a USB-I2C converter board. The board officially supports only the x86/x64 architecture. I managed to get hold of the source files and set up the driver. Now I need to test everything together, which means compiling a sample program. That sample links against static libraries (.a files), so I need to create my own library.a. To do that, I have found the C source files (.h headers); now I need to compile them into object files and eventually archive those together into a .a file.
Thank you for your help.
Edit (update):
A quick summary of what I have achieved so far:
- I was able to find a driver from a github repo.
- I was able to make the module
- I also compiled a fresh kernel from the Raspbian source code (I'm doing this for a Raspberry Pi 3):
uname -a
Linux raspberrypi 4.9.35-v7+ #1 SMP Tue Jul 4 22:40:25 BST 2017 armv7l GNU/Linux
I was able to load the module properly:
lsmod
Module Size Used by
spi_diolan_u2c 3247 0
i2c_diolan_u2c 3373 0
diolan_u2c_core 4268 2 i2c_diolan_u2c,spi_diolan_u2c
lsusb
Bus 001 Device 010: ID a257:2014
dmesg
[ 126.758990] i2c_diolan_u2c: loading out-of-tree module taints kernel.
[ 126.759241] i2c_diolan_u2c: Unknown symbol diolan_u2c_transfer (err 0)
[ 130.651475] i2c_diolan_u2c: Unknown symbol diolan_u2c_transfer (err 0)
[ 154.671532] usbcore: registered new interface driver diolan-u2c-core
[ 5915.799739] usb 1-1.2: USB disconnect, device number 4
[10591.295014] usb 1-1.2: new full-speed USB device number 6 using dwc_otg
[10591.425984] usb 1-1.2: New USB device found, idVendor=a257, idProduct=2014
[10591.425997] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[10591.426005] usb 1-1.2: Product: Diolan DLN1
[10591.426012] usb 1-1.2: Manufacturer: Diolan
What I'm not sure:
Whether the kernel driver is properly bound to the physical hardware (whether the loaded module can get hold of my Diolan board)!!!
progress: based on my research, I think the hotplug mechanism should take care of that, but I'm not sure!
confusion: why does lsusb still show only the device ID? I know the manufacturer and the kernel driver determine what info is shown, but showing only the ID doesn't seem right to me
What I want to do now:
I would like to write a simple C or Python program to interact with my device. Basically, I don't know how to make the connection between user space and kernel space. The manufacturer has provided some example code, libraries, etc., but I have given up on using them: they are for another architecture, they are based on Qt, and I find it nearly impossible to find replacements for the libraries inside their static archives (I identified those libraries by unpacking the .a archive they provide for x86).
I just need to know exactly what the next step should be for me to move on towards getting the board working with the PI.
Thank you :)
You don't make any object file from a header file. On Linux object files (and executables) are ELF files. Use file(1) (or objdump(1) ...) to check.
Instead, a header file should be included (by #include preprocessor directive) in some *.cc file (technically a translation unit).
(You could precompile headers, but this is only useful to improve compilation time, which it does not always, and is an advanced and GCC specific usage; see this)
You do compile a C++ source file (some *.cc file) into an object file suffixed .o (or a C source file *.c compiled into an object file suffixed .o)
Read more about the preprocessor, and do spend several days reading about C or C++ (which are different programming languages). Read also more about compiling and linking.
I recommend to compile your C++ code with g++ -Wall -Wextra -g to get all warnings (with -Wall -Wextra ) and debug information (with -g).
A minimal compilation command to compile some yourfile.cc in C++ into an object file yourfile.o should probably be
g++ -c -Wall -Wextra -g yourfile.cc
(you could remove -Wall -Wextra -g but I strongly recommend to keep them)
You may need to add other arguments to g++; their order matters a lot. Read the chapter about Invoking GCC.
Notice that yourfile.cc contains very likely some (and often several) #include directives (usually near its start)
You very rarely need the -x c++ option to g++ (or -x c with gcc). I used it only once in my lifetime. In your case it surely is a mistake.
Very often, you use some build automation tool like GNU make. So you just use make to compile (but you need to write a Makefile - where tabs are significant)
Notice that some libraries can be header only (but this is not very usual), then you don't build any shared or static ELF libraries from them, but you just include headers in your own C or C++ code.
addenda
Regarding your http://dlnware.com/sites/dlnware.com/files/downloads/linux_setup.2.0.0.zip package (which indeed is poorly documented), you should look at the several examples in the linux_setup/examples/ directory. They all have a #include "../common/dln_generic.h" (for instance, on the 4th line of examples/leds_gui/main.cpp), which itself has other includes. All the examples are Qt applications and provide a *.pro file for qmake (which generates a Makefile for make from that .pro file). And passing -x c++ to g++ is rightly not mentioned at all.
You don't compile header files, you compile the C files that include them.
All code in a header file should be declarations and type definitions, which give information to the compiler, but don't actually produce any machine code. That's why there's nothing in your object files.

Run SWI-Prolog binary without swipl installed on the machine

I want to run a SWI-Prolog program on a machine (actually a server) where Prolog is not installed.
The prolog code swipl_test.pl:
main :- write('Hello, world\n').
On the local machine (4.4.0-64-generic #85~14.04.1-Ubuntu SMP Mon Feb 20 12:10:54 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux), I make the binary hello with SWI-Prolog version 7.2.3 for amd64:
swipl --goal=main --toplevel=halt --stand_alone=true --foreign=save -o hello1 -c swipl_test.pl
Moving hello to the remote machine (2.6.32-5-amd64 #1 SMP Wed Jun 17 16:09:06 UTC 2015 x86_64 GNU/Linux) gives the following error:
error while loading shared libraries: libswipl.so.7.2: cannot open shared object file: No such file or directory
How I can prepare a self-contained binary from a prolog code?
I do not have sudo rights on the remote machine.
I had the same problem, and I solved it by finding the shared libraries required to run my program. You can find these libraries with the ldd command. Once you have them, distribute them in the same directory as your executable and set the LD_LIBRARY_PATH variable so that the executable can find them.
This happens because, as the documentation clarifies, with the option --stand_alone=true the executable becomes a copy of swipl with the saved state attached. If SWI-Prolog is statically linked (the default on Linux/386) and the state does not use external packages, the program will run on another machine without problems. Otherwise (our case), the shared objects must be made available so that the executable can find them. On Linux, these shared objects are found using ldd (in your case, the library libswipl.so.7.2). So look for this library (by default in /usr/lib) and copy it into the directory of your executable to distribute it along with it. Then, on the machine where you will run the program, set the LD_LIBRARY_PATH variable so that the executable knows where to find the libraries it needs (i.e. the same directory it is in), or use chrpath(1) to change the path where the executable will search.
It has since become possible to install SWI-Prolog directly:
pkg install swi-prolog
This will fix it.

Booting custom kernel on xeon-phi

I am trying to boot a custom kernel on Xeon Phi instead of the default Linux kernel. At this link, I found a way to cross-compile my kernel; it compiles successfully with the k1om-mpss-linux-gcc cross compiler. Is cross-compiling enough? I get the error
mykernel.img is not a k1om Linux bzImage
Edit:
So, I used /usr/linux-k1om-4.7/bin/x86_64-k1om-linux-gcc compiler to compile a simple helloworld.c program and the kernel source. I get two different types of results for objdump -f on the executables.
for helloworld.c:
hello: file format elf64-k1om
architecture: k1om, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000000400400
for mykernel:
mykernel: file format elf32-i386
architecture: i386, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0010000c
I compiled using the same compiler, yet they show different architectures. What is the reason for this ?
The first thing to do is figure out what mykernel.img is. Try running file on it.
$ file /opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4
/opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4: ELF 64-bit LSB executable, version 1 (SYSV), statically linked, BuildID[sha1]=0xa4c16ee85c11aca4e78dc4ae46d3827fb74289c1, not stripped
$ objdump -f /opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4
/opt/mpss/3.4/sysroots/k1om-mpss-linux/boot/vmlinux-2.6.38.8+mpss3.4: file format elf64-k1om
architecture: k1om, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000001000000
The answer to your original question - no, unfortunately, it is not as simple as just cross-compiling. There were a number of changes made to the kernel that comes with the MPSS. I don't know all the changes but a big one that I do know is that they had to add support for the larger register set on the coprocessor in order to be able to save state on a context switch.
As to why the file format is elf32-i386 instead of elf64-k1om -
The web site you referenced described recompiling the kernel that came with the MPSS, after possibly making a few changes to the files. You'll notice that they also copied over a configuration file for the installed version of the kernel, so they had everything needed to remake the kernel exactly as it had originally been built.
I suspect that, in your case, either a) a configuration script of some sort in your source directory picked up the architecture you were running on and confused the makefile, or b) your makefile had no idea what k1om was. In either case, it fell back to what it believed to be the lowest common denominator, i386. As I say, this is just a suspicion on my part, but a careful reading of your makefiles should lead to the answer.

compiling the 2.6.0 kernel on slackware

Out of sheer curiosity, I tried compiling a 2.6.0 kernel on my Slackware machine.
root@darkstar:/home/linux-2.6.0# uname -a
Linux darkstar 2.6.37.6-smp #2 SMP Sat Apr 9 23:39:07 CDT 2011 i686 Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz GenuineIntel GNU/Linux
When I try compiling, I get:
root@darkstar:/home/linux-2.6.0# make menuconfig
HOSTCC scripts/fixdep
scripts/fixdep.c: In function 'traps':
scripts/fixdep.c:359:2: warning: dereferencing type-punned pointer will break strict-aliasing rules
scripts/fixdep.c:361:4: warning: dereferencing type-punned pointer will break strict-aliasing rules
HOSTCC scripts/kconfig/conf.o
HOSTCC scripts/kconfig/mconf.o
scripts/kconfig/mconf.c:91:21: error: static declaration of 'current_menu' follows non-static declaration
scripts/kconfig/lkc.h:63:21: note: previous declaration of 'current_menu' was here
make[1]: *** [scripts/kconfig/mconf.o] Error 1
make: *** [menuconfig] Error 2
Any hints on what I'm doing wrong? Thanks!
How are you doing this to start with?
Typically, you download the latest kernel from kernel.org, copy the tarball to /usr/src, then:
1. tar -zxvvf linux-2.6.xxxx.tar.gz
2. ln -nsf linux-2.6.xxxx linux # ie: Update the "/usr/src/linux" symbolic link to
# point to the new kernel source directory
3. make menuconfig # or make xconfig
4. make modules # Build the kernel modules
5. make modules_install # Install the previously built modules for the
# new kernel
6. make bzImage # Create the boot image
At this point, DO NOT run make install. Most guides say to do this, but this is WRONG! Instead, copy the newly created bzImage file to /boot (ie: find /usr/src/linux -name bzImage, then cp it to /boot), then edit your LILO configuration file (/etc/lilo.conf; when done, run lilo), then reboot your system (ie: init 6 or shutdown -r now), and try out the new kernel.
The whole point of skipping the make install step is that it overwrites/replaces your existing kernel. The steps described above allow you to have the new kernel and your existing kernel installed and runnable in parallel. If the new kernel is broken or you left out an important option, you can still fall back to your existing stable/working kernel without needing a boot/recovery CD/DVD.
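For reference, a lilo.conf stanza for the new kernel kept alongside the working one might look like this (a sketch only; the label, image name, and root device are illustrative and must match your system):

```
# /etc/lilo.conf fragment -- keep your existing, working stanza as well
image = /boot/bzImage-2.6.0   # the newly copied kernel image
  root = /dev/sda1            # adjust to your root partition
  label = linux-2.6.0-test
  read-only
```

Run lilo after editing so the new entry is written to the boot sector; at the LILO prompt you can then choose either label.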
If I recall correctly, you are missing the ncurses libraries. Those are needed to build the menuconfig interface.
Try doing a make xconfig from an X session and see if that works.
If it does, then the ncurses libs are definitely missing.
Check with:
ls /var/log/packages/ncurses*
to see if they are installed.
