In a Vagrant VM, I execute:
docker exec -it container-id /bin/bash
root@297f6e974824:~$ make
Segmentation fault (core dumped)
What could cause this segmentation fault?
Whereas when I use docker run to enter instead:
docker run --name cc-122711 -P -v /home/vagrant/mm:/home -ti --cap-add NET_ADMIN --cap-add SYS_ADMIN --device /dev/fuse cc /usr/lib/systemd/systemd --system
root@c9f7f3ed6d33:~$ make
make: *** No targets specified and no makefile found.
It works fine this way: make runs and only prints the expected "no makefile" error.
As a workaround, I have to use docker run to create a new container every time I need the compile environment.
I also rebuilt make 4.1 (and 4.0) from source:
sh build.sh
linking make...
done
bash-4.3# pwd
/home/tools/make-4.1
bash-4.3# ./make
Segmentation fault (core dumped)
bash-4.3# ldd make
linux-vdso.so.1 (0x00007fff0c9fe000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f729f088000)
libc.so.6 => /lib64/libc.so.6 (0x00007f729ecea000)
/lib64/ld-linux-x86-64.so.2 (0x00007f729f28c000)
If you have a good enough core(5) file (meaning the limit set through setrlimit(2) is large enough), you can run the
file core
command (see file(1)); it is likely to give you the name of the executable that crashed and produced that core dump.
Apparently the core file comes from make itself. That is unusual (you could file a bug report). Please show the relevant Makefile. Perhaps there is a libc mismatch between the container and the make binary, or the make binary is corrupted. Try ldd make and make --version. You could also download the source code of GNU make and compile it yourself (GNU make can be built without make).
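A minimal sketch of going one step further and getting a backtrace out of the crash (this assumes core dumps are enabled in the container and that gdb is installed there; the paths are the ones from the question, and the core file name depends on the kernel's core_pattern setting):
ulimit -c unlimited        # raise the core size limit mentioned above (setrlimit)
cd /home/tools/make-4.1
./make                     # reproduce: Segmentation fault (core dumped)
file core                  # should name ./make as the crashed executable
gdb ./make core            # then type bt at the (gdb) prompt for a backtrace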
I am trying to build the Linux kernel for the User-Mode Linux (um) architecture with the x86_64 sub-architecture.
I am getting this build error:
arch/um/drivers/vde_user.c:8:24: fatal error: libvdeplug.h: No such file or directory
I installed libvdeplug-dev, and I can see the header file installed:
$ sudo dpkg -L libvdeplug-dev
/.
/usr
/usr/include
/usr/include/libvdeplug.h
/usr/include/libvdeplug_mod.h
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libvdeplug.a
/usr/share
/usr/share/doc
/usr/share/doc/libvdeplug-dev
/usr/share/doc/libvdeplug-dev/changelog.Debian.gz
/usr/share/doc/libvdeplug-dev/copyright
/usr/share/doc/vdeplug4
/usr/share/doc/vdeplug4/howto_create_a_vdeplug_plugin.gz
/usr/share/man
/usr/share/man/man3
/usr/share/man/man3/libvdeplug.3.gz
/usr/lib/x86_64-linux-gnu/libvdeplug.so
/usr/lib/x86_64-linux-gnu/libvdeplug_mod.so
/usr/share/man/man3/vde_close.3.gz
/usr/share/man/man3/vde_ctlfd.3.gz
/usr/share/man/man3/vde_datafd.3.gz
/usr/share/man/man3/vde_open.3.gz
/usr/share/man/man3/vde_recv.3.gz
/usr/share/man/man3/vde_send.3.gz
But I am still getting the same build error above.
Here is my build command:
make -s -j 7 ARCH=um SUBARCH=x86_64 CROSS_COMPILE=../x86-64-core-i7--glibc--stable/bin/x86_64-linux-
I ran the same build command above with "allyesconfig" to generate the config file.
I am building 5.18-rc1, which I believe is the current tip.
How am I supposed to fix this?
Thanks in advance.
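One hedged check (an assumption, not a confirmed fix): whether the compiler that builds arch/um/drivers/vde_user.c actually searches the host's /usr/include. A cross toolchain usually keeps its own sysroot and header search path, which can be printed like this (the toolchain path is copied from the build command above):
echo | ../x86-64-core-i7--glibc--stable/bin/x86_64-linux-gcc -E -Wp,-v - 2>&1 | grep '^ /'
If /usr/include does not appear in that list, installing libvdeplug-dev on the host will not make the header visible to this compiler; it would have to be present in the toolchain's own sysroot (or the include path extended explicitly).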
my development environment:
$ cat /proc/version
Linux version 5.4.0-66-generic (buildd@lgw01-amd64-016) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #74~18.04.2-Ubuntu SMP Fri Feb 5 11:17:31 UTC 2021
$ ld --version
GNU ld (GNU Binutils for Ubuntu) 2.30
Copyright (C) 2018 Free Software Foundation, Inc.
$ getconf GNU_LIBC_VERSION
glibc 2.27
$ #my glibc source version is 2.32.9000-development
$ cat ./version.h
/* This file just defines the current version number of libc. */
#define RELEASE "development"
#define VERSION "2.32.9000"
For certain reasons, I need to modify and test glibc. I followed the steps on this page (https://sourceware.org/glibc/wiki/Testing/Builds#Compile_against_glibc_in_an_installed_location) to modify glibc and write test programs:
compile glibc (configure and make),
install glibc (make install to a directory),
...the other steps on the page above.
I successfully modified some pthread functions and the test passed (the test program I wrote compiled against the installed glibc and ran successfully). Running ldd on the program:
$ ldd ./exec/1-1.out
linux-vdso.so.1 (0x00007ffcbf367000)
libpthread.so.0 => /home/cjl-target/gnu/install/lib64/libpthread.so.0 (0x00007fcadcea9000)
libc.so.6 => /home/cjl-target/gnu/install/lib64/libc.so.6 (0x00007fcadcaed000)
/home/cjl-target/gnu/install/lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fcadd2ca000)
As shown above, the shared libraries that the program depends on all point to the glibc installation path.
But when I compiled the message-queue test program (testing mq_unlink) and ran it, it failed as below:
./exec/1-1.out: symbol lookup error: /lib/x86_64-linux-gnu/libpthread.so.0: undefined symbol: __libc_vfork, version GLIBC_PRIVATE
Checking the libraries the program depends on:
$ ldd ./exec/1-1.out
linux-vdso.so.1 (0x00007ffce3f72000)
librt.so.1 => /home/cjl-target/gnu/install/lib64/librt.so.1 (0x00007f0a389a2000)
libc.so.6 => /home/cjl-target/gnu/install/lib64/libc.so.6 (0x00007f0a385e6000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f0a383c7000)
/home/cjl-target/gnu/install/lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f0a38dac000)
As shown above, libpthread.so.0 now points to the system library. Why?
My compile script (from the page above) is:
# dobuild.sh
SYSROOT=/home/xxx/xxx/xxx #the glibc's installation path
(set -x; \
gcc \
-L${SYSROOT}/usr/lib64 \
-I${SYSROOT}/usr/include \
--sysroot=${SYSROOT} \
-Wl,-rpath=${SYSROOT}/lib64 \
-Wl,--dynamic-linker=${SYSROOT}/lib64/ld-linux-x86-64.so.2 \
-Wall $*
)
When I compile the pthread test program: ./dobuild 1-1.c -pthread -Wall
When I compile the mq test program: ./dobuild 1-1.c -lrt -Wall
In addition, it is confusing that when I call pthread_create in the mq_unlink test program and compile it with ./dobuild 1-1.c -lrt -pthread, the ldd output shows that all dependent libraries point to the installed glibc.
I've tried multiple variations of this, but none of them seem to work. Any ideas?
First, you should stop using ldd -- in the presence of multiple GLIBCs on a host, ldd is more likely to mislead than to illuminate.
If you want to see which libraries are really loaded, do this instead:
LD_TRACE_LOADED_OBJECTS=1 ./exec/1-1.out
Second, you should almost never use $* in shell scripts. Use "$@" instead (note: the quotes are important). See this answer.
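A tiny illustration of the difference, with hypothetical arguments:
set -- "a b" c          # two arguments; the first one contains a space
printf '[%s]\n' $*      # prints [a] [b] [c] -- the argument got split into words
printf '[%s]\n' "$@"    # prints [a b] [c]  -- the arguments are preserved as passed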
Third, the behavior you are observing is easily explained. To understand it, you need to know the difference between DT_RPATH and DT_RUNPATH, described here.
You can verify that your binaries are currently using RUNPATH, like so:
readelf -d 1-1.out | grep 'R.*PATH'
And you can verify that everything starts working as you expect by adding -Wl,--disable-new-dtags to the link command (which would cause the binary to use RPATH instead).
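For example, the wrapper from the question could pass that flag like this (a sketch; only the -Wl,--disable-new-dtags line is new relative to the original dobuild.sh):
gcc \
  -L${SYSROOT}/usr/lib64 \
  -I${SYSROOT}/usr/include \
  --sysroot=${SYSROOT} \
  -Wl,--disable-new-dtags \
  -Wl,-rpath=${SYSROOT}/lib64 \
  -Wl,--dynamic-linker=${SYSROOT}/lib64/ld-linux-x86-64.so.2 \
  -Wall "$@"
readelf -d should then report RPATH rather than RUNPATH for the resulting binary.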
To summarize:
RUNPATH on the binary affects the search for the binary's own (direct) dependencies, but not for libraries that those dependencies pull in themselves.
RPATH on the binary affects the search path both for its direct dependencies and for the libraries they in turn depend on.
With RUNPATH, the expected libpthread.so.0 is found only when the binary depends on it directly, not when the dependency on libpthread is indirect (via librt).
With RPATH, the expected libpthread.so.0 is found regardless of whether the dependency is direct or indirect.
Update:
If I want to keep using DT_RUNPATH, how do I set the runpath for librt?
You would need to link librt.so with -rpath=${SYSROOT}/lib64.
You could edit the rt/Makefile, or build with:
make LDFLAGS-rt.so='-Wl,--enable-new-dtags,-z,nodelete,-rpath=${SYSROOT}/lib64'
You would need to do the same for any other library that may bring in a transitive dependency on other parts of GLIBC. I don't know of a general way to do this, but setting LDFLAGS-lib.so='-Wl,-rpath=${SYSROOT}/lib64' and rebuilding everything might do the trick.
I have been working with some Cortex-M4 (Freescale K60) devices, using a GCC (v4.7.2), binutils (v2.22), newlib (v1.20) and GDB (v7.5) toolchain that I compiled myself. I have always been annoyed by GDB's inability to unwind from hard-fault exceptions.
Recently I had an opportunity to use Freescale's CodeWarrior, where I loaded my binary (compiled with my tools) for debugging, and it could unwind the exception. It looks like CodeWarrior runs GDB v7.4.1 under the hood. Is there some patch I missed for GDB, or some configure option?
Here is the script used to build GDB:
TOOLCHAIN=gdb-7.5
mkdir -p BUILD/gdb
cd BUILD/gdb
../../${TOOLCHAIN}/configure --prefix=${PREFIX} --target=${TARGET} --enable-interwork --enable-multilib --with-expat=yes --with-python --without-auto-load-safe-path 2>&1 | tee configure.out
make all install
cd ../../
Thanks!
GDB can do Cortex-M profile exception unwinding once you tell it that the target really is Cortex-M profile, using a target description XML with the correct feature.
This can be done via the set tdesc filename <filename> command, but newer GDB servers (e.g. OpenOCD) should supply such a description already.
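A minimal sketch of supplying such a description by hand (the XML file name and the gdb-server port are examples, not files or settings that ship with GDB under these names):
(gdb) set tdesc filename arm-m-profile.xml   # a description declaring the Cortex-M (M-profile) feature
(gdb) target remote localhost:3333           # e.g. an OpenOCD gdb server
(gdb) bt                                     # unwinding across the exception frame should now work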
I'm reading the following book about operating systems. On page 43, they use the following command to convert annotated machine code into a raw machine code file:
$ ld -o basic.bin -Ttext 0x0 --oformat binary basic.o
When running that command on my MacBook Pro (running Mavericks), I get:
ld: unknown option: -Ttext
I've done some research and found that OS X's linker doesn't allow using a script file as the linker script.
Some other posts on the internet recommend using the following "correct" format:
$ ld -T text 0x0 --o format binary -o basic.bin basic.o
It didn't work for me either, though.
I also tried installing binutils via Homebrew, but it doesn't seem to ship with the GNU linker.
The command correctly runs in Ubuntu 14.04, but I'd like to continue developing in OS X if possible.
Is there a way to obtain the same results with OS X's linker, potentially with different flags?
UPDATE:
I was able to generate a bin with the following command, using gobjcopy from binutils:
$ gobjcopy -j .text -O binary basic.o basic.bin
However, I couldn't find a way to offset label addresses in the code, as I could with GNU ld's -Ttext 0x1000, for example.
I tried with --set-start <hex> without any luck:
$ gobjcopy -j .text --set-start 0x1000 -O binary basic.o basic.bin
I am following the same os-dev.pdf guide and encountered the same problem as you.
The bottom line is that we need a cross-compiled gcc anyway, so the solution is simply to build one.
There is a good guide at OSDev, but if you're running a recent version of OS X, I have prepared a specific guide for this on GitHub.
Here are the commands, but please test them before pasting the whole wall of text into your terminal! At the GitHub link you will find the full explanations, but since Stack Overflow likes the solution embedded in the answer, here it is.
Also, if you encounter any errors, please report them back to me (here or with a GitHub issue) so that I can fix them for other people.
brew install gmp
brew install mpfr
brew install libmpc
brew install gcc
export CC=/usr/local/bin/gcc-4.9
export LD=/usr/local/bin/gcc-4.9
export PREFIX="/usr/local/i386elfgcc"
export TARGET=i386-elf
export PATH="$PREFIX/bin:$PATH"
mkdir /tmp/src
cd /tmp/src
curl -O http://ftp.gnu.org/gnu/binutils/binutils-2.24.tar.gz # If the link 404's, look for a more recent version
tar xf binutils-2.24.tar.gz
mkdir binutils-build
cd binutils-build
../binutils-2.24/configure --target=$TARGET --enable-interwork --enable-multilib --disable-nls --disable-werror --prefix=$PREFIX 2>&1 | tee configure.log
make all install 2>&1 | tee make.log
cd /tmp/src
curl -O http://mirror.bbln.org/gcc/releases/gcc-4.9.1/gcc-4.9.1.tar.bz2
tar xf gcc-4.9.1.tar.bz2
mkdir gcc-build
cd gcc-build
../gcc-4.9.1/configure --target=$TARGET --prefix="$PREFIX" --disable-nls --disable-libssp --enable-languages=c --without-headers
make all-gcc
make all-target-libgcc
make install-gcc
make install-target-libgcc
You will find GNU binutils and your cross-compiled gcc in /usr/local/i386elfgcc/bin.
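Once that finishes, a quick sanity check could look like this (a sketch, using the prefix chosen above):
export PATH="/usr/local/i386elfgcc/bin:$PATH"
i386-elf-ld --version     # should report GNU ld (from the binutils built above), not the Mach-O linker
i386-elf-gcc --version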
Manually installing binutils always throws errors on my macOS.
My solution is to use Homebrew:
brew tap nativeos/i386-elf-toolchain
brew install i386-elf-binutils i386-elf-gcc
Then you can use the i386-elf-ld command instead of ld.
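With either toolchain on your PATH, the command from the book should then work with just the tool name changed (a sketch reusing the flags from the question):
i386-elf-ld -o basic.bin -Ttext 0x0 --oformat binary basic.o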
I've created a very minimal chroot environment on sdb and mounted it at /mnt/sdb. I've also created a symbolic link /mnt/sdb/bin/cc that points to /usr/bin/gcc.
ldd /mnt/sdb/bin/cc returned:
linux-gate.so.1 => (0xb7829000)
libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb76dd000)
/lib/ld-linux.so.2 (0xb782a000)
So I copied the necessary libraries by running:
cp /lib/i686/cmov/libc.so.6 /mnt/sdb/lib/i686/cmov/libc.so.6
cp /lib/ld-linux.so.2 /mnt/sdb/lib/ld-linux.so.2
Glancing through this article, I figured that since linux-gate.so.1 is a part of the kernel, I don't need to copy it over.
However, after I run chroot /mnt/sdb /bin/sh and then try cc, I get:
cc: error while loading shared libraries: libm.so.6: cannot open shared object file: No such file or directory
How come ldd couldn't tell that cc needed libm.so.6? Is there an easy way to get cc to work in the chroot environment without simply copying over all the libraries? I'd just like to use cc temporarily so that I can build tcc with it, then build everything else with tcc (I've also tried simply building tcc outside and then using it in the chroot, but that probably deserves its own post).
Note:
I'm using Debian in VirtualBox, and the only program that currently runs in the chroot environment is a single (static) busybox binary.
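For what it's worth, ldd only reports the dependencies of the single ELF file it is handed, so libraries needed by the helper programs gcc execs (cc1, as, ld) never show up when you ldd the driver; whether that is exactly what bites here is an assumption. A hedged sketch for copying everything the driver and cc1 need into the chroot (run outside the chroot; gcc itself prints cc1's location):
for bin in /usr/bin/gcc "$(gcc -print-prog-name=cc1)"; do
  for lib in $(ldd "$bin" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }'); do
    mkdir -p "/mnt/sdb$(dirname "$lib")"   # keep the same directory layout inside the chroot
    cp -v "$lib" "/mnt/sdb$lib"
  done
done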