Where is this set of boot arguments coming from? - linux-kernel

I'm using Yocto to port the EVL Core to an iMX6 Sabre SD board.
I had a core-image-minimal build of a kernel (apparently not mainline) that booted correctly, displaying the following boot arguments during boot:
[ 0.000000] Kernel command line: console=ttymxc0,115200 root=/dev/mmcblk2p2 rootwait rw
I then swapped that kernel for the EVL core (which is based on mainline; both kernels are 5.4). After making some changes to the new kernel's device tree, I verified that it boots with the following:
[ 0.000000] Kernel command line: console=ttymxc0,115200 root=/dev/mmcblk2p2 r
That is, the rootwait rw was replaced by just r. Needless to say, this causes problems during boot, and I want to change it.
However, I can't figure out where this line is coming from. I searched the kernel source code and couldn't find the specific line, or even parts of it. I suspect that instead of being written out literally in a script, it is being assembled by a sequence of append commands of some sort.
For reference, the files in which I am searching for this are here.
Question: How is this line being formed, and where are the arguments coming from?
I checked that most boards have a "bootargs" line in their device tree that directly supplies the arguments to be passed. However, for the specific case of the Sabre SD this seems to be different, with the line being formed by some other script (which I failed to identify).
Note: I understand that this question could arguably be considered tangent to the topic of this site. If that is the case I can move to the Unix/Linux SE.
Additional info: I am booting with U-Boot from an SD card. The problem caused by the absence of rootwait rw is that during boot the kernel fails to see one of the SD partitions and thus cannot continue, resulting in a kernel panic.

So, after some help, I found the answer. In my U-Boot environment there is no stored bootargs variable. The reason can be seen in the output of printenv*:
bootcmd=run findfdt;run findtee;mmc dev ${mmcdev};if mmc rescan; then if run loadbootscript; then run bootscript; else if run loadimage; then run mmcboot; else run netboot; fi; fi; else run netboot; fi
The first two commands are tests that succeed. After that, it enters a sequence of nested ifs, where we can see that it runs mmcboot (since I am booting from an SD card). That command then defines the bootargs variable on the fly at boot time, which is why I couldn't find it in the source code.
*Which I failed to provide in this question. This detail was fundamental.
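For anyone hitting the same thing: on this board family, the stock U-Boot environment typically assembles bootargs from helper variables right before booting. The variable names below (mmcargs, mmcroot, console, baudrate) are the usual i.MX6 defaults, but treat them as assumptions and check your own printenv output:
mmcargs=setenv bootargs console=${console},${baudrate} root=${mmcroot}
So restoring the missing options should just be a matter of fixing the helper variable at the U-Boot prompt and saving the environment:
setenv mmcroot '/dev/mmcblk2p2 rootwait rw'
saveenv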

Related

Use non-built-in bash commands without modifying .bashrc

I'm working on a cluster and using custom toolkits (more specifically, the SRA Toolkit). In order to use it, I first had to download and unpack it into a specific folder in my home directory.
Then I had to modify .bashrc to include the following segment:
# User specific aliases and functions
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Now I can use the SRA Tools from the bash command line, e.g.
prefetch SR111111
My question is: can I use those tools without modifying my .bashrc?
The reason I want this is that I wrote a .sh script that takes a long time to run, and my cluster uses the Sun Grid Engine job management system. I submitted my script to it, only to see the process fail, because an SRA Toolkit command I used was unrecognized.
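For what it's worth, the usual workaround here is to set PATH inside the submitted job script itself rather than in .bashrc. A minimal sketch, assuming the install path from above and a plain SGE submission:
#!/bin/bash
#$ -S /bin/bash
# make the locally unpacked toolkit visible to this job only
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
prefetch SR111111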
EDIT (1):
I modified the location where my prefetch command is, and now it looks like:
/MYNAME/APPS/SRA_TOOLS/bin
different from how it is in .bashrc:
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Then I ran what @Darkman suggested (an if/then/else/fi block, with the export under else; a sketch follows). The output shows that it didn't find the SRA Tools at first (because the path in .bashrc is different), but it found them via the else branch, and the script runs normally. Weird. It works on my job management system.
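For reference, a minimal sketch of that pattern (my guess at the suggested block; the paths and command names are the ones from this question):
if command -v prefetch >/dev/null 2>&1; then
    prefetch SR111111
else
    # not on PATH yet; add the actual install location and retry
    export PATH="$PATH:/MYNAME/APPS/SRA_TOOLS/bin"
    prefetch SR111111
fi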
Thanks everybody.

Monitor arguments of call to dylib in Macos

I'd like something similar to apimonitor, but for macOS. Is there something like this already? I'd like to be able to see the arguments an application uses when calling dylib functions. Thank you.
You have several options:
Have you considered just attaching a debugger (e.g., lldb) to the app, setting a breakpoint on the function of interest, and observing the arguments? You can set the breakpoint to automatically print the arguments and then continue.
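A sketch of that session (the function name here is hypothetical; on Apple Silicon the first two integer/pointer arguments arrive in registers x0 and x1, on Intel in rdi and rsi):
lldb -p <PID of target>
(lldb) breakpoint set --name some_dylib_function
(lldb) breakpoint command add
> register read x0 x1
> continue
> DONE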
You can use the pid provider of DTrace. Much of DTrace is disabled by System Integrity Protection (SIP); I don't recall whether the pid provider is among them. If it's disabled, you can enable it when booted into Recovery Mode using the csrutil command (csrutil enable --without dtrace).
Anyway, the command to use the pid provider is:
sudo dtrace -n 'pid$target:library pattern:function pattern:entry { actions }' -p <PID of target>
The patterns are file-glob-style, using * to match any characters and ? to match a single character.
An action can be something like ustack(); to dump the user stack, printf("%x\n", arg0); to print the first argument, etc. See a DTrace manual for more.
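For instance, to print the first argument of every call into a hypothetical library (both the library and function patterns below are made up; substitute your target's):
sudo dtrace -n 'pid$target:libfoo*:foo_*:entry { printf("%s: arg0=%x\n", probefunc, arg0); }' -p <PID of target>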
Finally, you can use the DYLD_INSERT_LIBRARIES environment variable to inject a library of your own. That library, in turn, can use dyld symbol interposing to install your own version of a given function or functions, which can do whatever you want. It can call through to the original and thus act as a wrapper.
Note that SIP can also interfere with passing DYLD_* environment variables through to the executable.
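The injection step itself looks like this (a sketch; libinterpose.dylib is your hypothetical interposing library, and the target has to be launched with the variable set, not attached to afterwards):
DYLD_INSERT_LIBRARIES=/path/to/libinterpose.dylib /Applications/Target.app/Contents/MacOS/Target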

How to debug gstreamer pipeline with leaking file descriptors after gst_object_unref()?

I have a custom pipeline that looks roughly like this in gstreamer shorthand:
gst-launch-1.0 rtspsrc location=rtsp://<url-for-stream> ! rtph264depay ! h264parse ! imxvpudec ! *any-sink*
The choice of any-sink doesn't matter; it could be fakesink, imxipusink, or whatever (I'm on an i.MX6 platform using the Freescale imx plugins). I can output to whichever sink I want and the issue is the same.
This type of pipeline works fine in gst-launch-1.0, because there it doesn't need to clean up after itself, but I need to use it inside my C++ application via the GStreamer API directly. This means I call myPipeline = gst_pipeline_new("custom-pipeline"), then allocate each plugin by name, link them, and run the pipeline. Later I need to stop the pipeline and call gst_object_unref(myPipeline). When doing this, I observe file descriptors being left behind. I then need to start the pipeline all over again, so the leak compounds. This happens often enough that the leaking descriptors eventually give me an exception:
GLib-ERROR **: Creating pipes for GWakeup: Too many open files
I can profile the open files with lsof...
lsof +E -aUc myGstApplication
lsof: netlink UNIX socket msg peer info error
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
myGstApplication 5943 root 3u unix 0xabfb6c00 0t0 11200335 type=STREAM
myGstApplication 5943 root 11u unix 0xd9d47180 0t0 11207020 type=STREAM
... many more, depending on how long it runs...
myGstApplication 5943 root 50u unix 0xabe99080 0t0 11211987 type=STREAM
I appear to get two new 'type=STREAM' file descriptors each time I unref() and rebuild the pipeline.
Seeing the descriptors in lsof is all fine and dandy, but I don't know how to track down where these files are coming from in the code. Does any of the lsof output actually lead to better debug information, for instance? How do I track down where these leaks are really coming from and stop them? There has to be a better way... right?
I suspect the rtspsrc pipeline element has something to do with this, but rtspsrc is itself a morass of underlying GStreamer implementation (udpsrcs, demuxers, etc.). I'm not convinced it's a bug within rtspsrc, because so many other people appear to use it without reproducing this. Could I be doing something in my application code that brings about this behavior in a non-obvious way?
Any help is much appreciated, thanks!
Well researched & interesting question!
According to the lsof output, the leaking file descriptors seem to originate from socketpair syscalls. You can confirm this with strace:
strace -fe socketpair myGstApplication
After this, you could drop the filter for the socketpair syscall and look through the full strace output, trying to understand what these FDs are used for. I tried this with gst-launch-1.0, with inconclusive results. These FDs seem to be set read-only on both ends, and nothing is ever transferred... so they must be used purely for control/coordination between threads/subprocesses of the same application.
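A cheap way to watch the leak rate across pipeline rebuilds, without wading through lsof output (a sketch, assuming a single instance of the application is running):
watch -n1 'ls /proc/$(pidof myGstApplication)/fd | wc -l'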
Next try would be gdb:
gdb -ex 'break socketpair' -ex run myGstApplication
When it halts at the breakpoint, look at the stack trace with the bt command. Installing the debug packages for GStreamer is probably a good idea, to get more readable stack traces.
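You can also automate this so that every socketpair call prints a backtrace and continues, by putting the breakpoint commands in a script (a sketch):
cat > socketpair.gdb <<'EOF'
break socketpair
commands
silent
bt
continue
end
run
EOF
gdb -q -x socketpair.gdb ./myGstApplication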
HTH :)

How can I make bash execute an ELF binary from stdin?

For some obscure reason I have written a bash script which generates some source code, then compiles it, using
... whatever ... | gcc -x c -o /dev/stdout -
Now, I want to execute the result of the compilation. How can I make that happen? No use of files, please.
As Charles Duffy said, to execute a binary, you'd have to tell your operating system (which seems to be a Unix variant) to load and execute something – and Unix systems only take files to execute them directly.
What you could do is have a process that prepares a memory region containing the ELF binary, fork, and jump into that region - but even that is questionable, considering that there's CPU/OS support to suppress exactly that operation (W^X). Basically, what you need is a runtime linker, and shells do not (and also: should not) include something like that.
Let's drop the Bash requirement (which really just sounds like you're trying to find an obvious hole in an application that is older than I am grumpy):
Generally, requiring ELF (which is a file format) and avoiding files at the same time is a tad complicated. GCC generates machine code. If you just want to execute known machine code, put it into some buffer, build a function pointer to it, and call it. Simple as that. However, you obviously won't have all the nice relocation and dynamic linking that the process of executing an ELF binary or loading a shared object (dlopen) would give you.
If you want that, I'd look in the direction of things like LLVM - I know, for a fact, that there are people building "I compile C++ at runtime and execute it" systems with LLVM as the executing instance and clang as the compiler. In the end, what your gcc|something does is really just JIT - an old technology :)
If your goal is to not write to the filesystem at all, then neither bash nor any other UNIX program will be able to help you execute an ELF from a pipe - execve only takes a path to a regular file as its filename and will fail (setting errno to EACCES) if you pass it a special file (device or named pipe) or a directory.
However, if your goal is to keep the executable entirely in RAM and not touch the hard disk (perhaps because the disk is read-only) you can do something with the same effect on your machine by using tmpfs, which comes with many UNIX-like systems (and is used in Linux to implement semaphores) and allows you to create a full-permissions filesystem that resides entirely in RAM:
$ sudo mount -t tmpfs -o size=10M tmpfs /mnt/mytmpfs
You can then write your binary to that:
... whatever ... | gcc -x c -o /mnt/mytmpfs/program.out -
/mnt/mytmpfs/program.out
and bash will load it for you as if it was on disk.
Note, however, that you do still need enough RAM onboard the device to store and execute the program - though due to the nature of most executable binaries, you would need that anyway.
If you don't want to leave the program behind on your ramdisk (or normal disk, if that is acceptable) for others to find, you can also delete the file immediately after starting to execute it:
/mnt/mytmpfs/program.out &
rm /mnt/mytmpfs/program.out
The name will disappear immediately, but the process internally keeps a reference to the file and releases it when it terminates, at which point the file is actually removed. (Note, however, that the storage won't be freed until the program exits, and the program will not be able to exec itself again either.)

Xeon Phi cannot execute binary file

I am trying to execute a binary file on a Xeon Phi coprocessor, and it comes back with "bash: cannot execute binary file". I am trying to find a way to either view an error log or have it display what's happening when I tell it to execute, so I can see what is causing it not to work. I have already tried bash --verbose, but it didn't display any additional information. Any ideas?
You don't specify where you compiled your executable or where you tried to execute it from.
To compile a program on the host system to be executed directly on the coprocessor, you must do one of the following (a sketch follows the list):
- if using one of the Intel compilers, add -mmic to the compiler command line
- if using gcc, use the cross-compilers provided with the MPSS (/usr/linux-k1om-4.7) - note, however, that the gcc compiler does not take advantage of vectorization on the coprocessor
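A sketch of both variants (the cross-compiler name below matches typical MPSS installations; verify the exact path on your system):
# Intel compiler, building a native coprocessor binary:
icc -mmic -o hello.mic hello.c
# GCC cross-compiler shipped with the MPSS:
/usr/linux-k1om-4.7/bin/x86_64-k1om-linux-gcc -o hello.mic hello.c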
If you want to compile directly on the coprocessor, you can install the necessary files from the additional rpm files provided for the coprocessor (found in mpss-/k1om) using the directions from the MPSS user's guide for installing additional rpm files.
To run a program on the coprocessor, if you have compiled it on the host, you must either (see the sketch below):
- copy your executable file and required libraries to the coprocessor using scp, before you ssh to the coprocessor yourself to execute the code; or
- use the micnativeloadex command on the host - you can find a man page for it on the host.
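For example (a sketch, assuming the coprocessor answers on the conventional hostname mic0):
scp hello.mic mic0:/tmp/
ssh mic0 /tmp/hello.mic
# or, staying on the host:
micnativeloadex ./hello.mic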
If you are writing a program using the offload model (part of the work is done using the host then some of the work is passed off to the coprocessor), you can compile on the host using the Intel compilers with no special options.
Note, however, that regardless of which method you use, any libraries used by an executable for the coprocessor must themselves be built for the coprocessor. The default libraries exist, but for any library you add, you need to build a coprocessor version in addition to any version you make for the host system.
I highly recommend the articles you will find under https://software.intel.com/en-us/articles/programming-and-compiling-for-intel-many-integrated-core-architecture. These articles are written by people who develop and/or support the various programming tools for the coprocessor and should answer most of your questions.
Update: What's below does NOT answer the OP's question - it is one possible explanation for the cannot execute binary file error, but the fact that the error message is prefixed with bash: indicates that the binary is being invoked correctly (by bash), but is not compatible with the executing platform (it was compiled for a different architecture) - as @Barmar has already stated in a comment.
Thus, while the following contains some (hopefully still somewhat useful) general information, it does not address the OP's problem.
One possible reason for cannot execute binary file is to mistakenly pass a binary (executable) file -- rather than a shell script (text file containing shell code) -- as an operand (filename argument) to bash.
The following demonstrates the problem:
bash printf # fails with '/usr/bin/printf: /usr/bin/printf: cannot execute binary file'
Note how the mistakenly passed binary's path prefixes the error message twice; if the first prefix says bash: instead, the cause is most likely not incorrect invocation but an attempt to invoke an incompatible binary (compiled for a different architecture).
If you want bash to invoke a binary, you must use the -c option to pass it, which allows you to specify an entire command line; i.e., the binary plus arguments; e.g.:
bash -c '/usr/bin/printf "%s\n" "hello"' # -> 'hello'
If you pass a mere binary filename instead of a full path - e.g., -c 'program ...' - then a binary by that name must exist in one of the directories listed in the $PATH variable that bash sees, otherwise you'll get a command not found error.
If, by contrast, the binary is located in the current directory, you must prefix the filename with ./ for bash to find it; e.g. -c './program ...'
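For example (program here is a hypothetical binary sitting in the current directory):
bash -c './program arg1 arg2'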
