I'm trying to get a nice flamegraph of my Rust code. Unfortunately, Xcode 8.3 doesn't support exporting profiling data anymore, so I've been trying to use DTrace to get the profiling data.
I have enabled debug info in my Cargo.toml for the release binaries:
[profile.release]
debug = true
Then I run the release binary (mybinaryname), and sample stack traces using DTrace:
sudo dtrace -n 'profile-997 /execname == "mybinaryname"/ { @[ustack(100)] = count(); }' -o out.user_stacks
The end result is something like this:
0x10e960500
0x10e964632
0x10e9659e0
0x10e937edd
0x10e92aae2
0x10e92d0d7
0x10e982c8b
0x10e981fc1
0x7fff93c70235
0x1
1
For comparison, sampling iTerm2 gets me nice traces like this:
CoreFoundation`-[__NSArrayM removeAllObjects]
AppKit`_NSGestureRecognizerUpdate+0x769
CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__+0x17
CoreFoundation`__CFRunLoopDoObservers+0x187
CoreFoundation`__CFRunLoopRun+0x4be
CoreFoundation`CFRunLoopRunSpecific+0x1a4
HIToolbox`RunCurrentEventLoopInMode+0xf0
HIToolbox`ReceiveNextEventCommon+0x1b0
HIToolbox`_BlockUntilNextEventMatchingListInModeWithFilter+0x47
AppKit`_DPSNextEvent+0x460
AppKit`-[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:]+0xaec
AppKit`-[NSApplication run]+0x39e
AppKit`NSApplicationMain+0x4d5
iTerm2`main+0x6e
libdyld.dylib`start+0x1
iTerm2`0x1
1
Is it possible to get stack traces with debug info in Rust code? (Xcode's Instruments can certainly see the function names, so they are there!) If it is possible, do I need to take some additional steps, or am I just doing something wrong?
I found a workaround and gained some insight into why it might not have worked, though the reason is still not 100% clear.
The debug symbols that rustc produces can be found in target/release/deps/mybinaryname-hashcode.dSYM. In the same directory there is a binary file, target/release/deps/mybinaryname-hashcode, to which the symbols correspond.
The debug symbol lookup machinery on macOS is highly magical – as mentioned in the LLDB docs, symbols are found using various methods, including Spotlight search. I'm not even sure which frameworks are used by Xcode's Instruments and the bundled DTrace. (There are mentions of frameworks called DebugSymbols.framework and CoreSymbolication.framework.) Because of this magic, I gave up trying to understand why it didn't work.
The workaround is to pass dtrace the -p option along with the PID of the inspected process:
sudo dtrace -p $PID -n 'profile-997 /pid == '$PID'/ { @[ustack(100)] = count(); }' -o $TMPFILE &>/dev/null
Here's the man page description of -p:
Grab the specified process-ID pid, cache its symbol tables, and exit upon its completion. If more than one -p option is present on the command line, dtrace exits when all commands have exited, reporting the exit status for each process as it terminates. The first process-ID is made available to any D programs specified on the command line or using the -s option through the $target macro variable.
It's not clear why the debug info of various other binaries is shown by default, or why Rust binaries need the -p option, but it does its job as a workaround.
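For completeness, here is a rough end-to-end sketch of the workaround. It assumes the binary is already running, and that the stackcollapse.pl and flamegraph.pl scripts from Brendan Gregg's FlameGraph repository are on your PATH; adjust names and paths to your setup.
#!/bin/bash
# Sketch: sample a running Rust release binary with dtrace -p, then build a flamegraph.
PID=$(pgrep -x mybinaryname)
TMPFILE=$(mktemp)
# -p makes dtrace cache the process's symbol tables; sampling runs until the
# process exits or you press Ctrl-C, and the aggregation is written at the end.
sudo dtrace -p "$PID" -n 'profile-997 /pid == '$PID'/ { @[ustack(100)] = count(); }' -o "$TMPFILE"
stackcollapse.pl "$TMPFILE" | flamegraph.pl > flamegraph.svg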
I'm using PVS-Studio in a Docker image based on ubuntu:18.04 for cross-compiling a couple of files with arm-none-eabi-gcc. After running pvs-studio-analyzer trace -- .test/compile_with_gcc.sh, the strace_out file is successfully created; it's not empty and contains calls to arm-none-eabi-gcc.
However, pvs-studio-analyzer analyze complains that "No compilation units were found". I tried using the --compiler arm-none-eabi-gcc option with no success.
Any ideas?
The problem was in my approach to compilation. Instead of using a proper build system, I used a wacky shell script (surely, I thought, using a build system for 3 files is overkill, a shell script won't hurt anybody). And in that script I used grep to redefine one constant in the source, something like this: grep -v -i "#define[[:blank:]]\+${define_name}[[:blank:]]" ${project}/src/main/main.c | ~/opt/gcc-arm-none-eabi-8-2018-q4-major/bin/arm-none-eabi-gcc -o main.o -xc
So the compiler didn't actually compile a proper file; it compiled the output of grep. Naturally, PVS-Studio wasn't able to analyze it.
TL;DR: Don't use shell scripts as a build system.
We have reviewed the strace_out file. It can be handled correctly by the analyzer if the source files and compilers are referenced by absolute paths in the strace_out file. We have a suggestion that might help you. You can "wrap" the build commands in calls to pvs-studio-analyzer trace -- and pvs-studio-analyzer analyze and place them inside your script (compile_with_gcc.sh). Thus, the script should start with the command:
pvs-studio-analyzer trace --
and end with the command:
pvs-studio-analyzer analyze
This way we will make sure that the build and analysis are started in the same container run; a sketch of such a script follows below. If the proposed method does not help, please describe in more detail, command by command, the process of building the project and running the analyzer. Also tell us whether the container is restarted between the build (the creation of strace_out) and the analysis itself.
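For example, the wrapped script could look roughly like this; the gcc line is only a placeholder for your real compile commands, and absolute paths help the analyzer locate the sources recorded in strace_out.
#!/bin/bash
# compile_with_gcc.sh (sketch): the build is wrapped by trace, and analyze runs
# immediately afterwards, so both happen in the same container run.
pvs-studio-analyzer trace -- arm-none-eabi-gcc -c "$(pwd)/src/main/main.c" -o main.o
pvs-studio-analyzer analyze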
It would also help us a lot if you ran the pvs-studio-analyzer command with the optional --dump-log flag and provided it to us. An example of a command that can be used to do this:
pvs-studio-analyzer analyze --dump-log ex.log
Also, it seems this problem cannot be solved quickly, so it is probably more convenient to continue the conversation via the feedback form on the product website.
I see a string being output to my Terminal when I run an executable. I have the source code (in C) of the executable, but it was not written by me. I compiled it with the -g flag. Is there any way to know which line in which file produced the output, with dtrace, lldb, gdb, or any other means?
I am using macOS 10.13. When I ran gdb and entered the following:
catch syscall write
I got this error:
The feature 'catch syscall' is not supported on this architecture yet.
Is there any way that can achieve my goal?
lldb tends to be better supported on macOS than gdb. You should be able to trace this call by using its conditional breakpoint feature.
While you can certainly trace the write() call with dtrace and get a stack trace using the ustack() action, I think you'll have a harder time pinpointing the state of the program than if you break on it in the debugger.
Your comment suggests you might be searching for a substring match. I suspect you can create a conditional breakpoint in lldb that matches a substring using something like this:
br s -n write -c 'strnstr((const char*)$rsi, "test", $rdx) != NULL'
I'm assuming lldb does not have argument names for the write function, so I'm using x86-64 calling convention register names directly. ($rdi = first argument, which would be the file descriptor; $rsi = second argument, buffer; $rdx = third argument, buffer length)
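Putting that together, a rough lldb session could look like the following. myprogram is a placeholder for your executable; the breakpoint command list makes lldb print a backtrace (which, with your -g build, should include file and line numbers) and then continue automatically.
$ lldb ./myprogram
(lldb) br s -n write -c 'strnstr((const char *)$rsi, "test", $rdx) != NULL'
(lldb) breakpoint command add 1
> bt
> continue
> DONE
(lldb) run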
This question asked how to conceal a segmentation fault in a bash script, and #yellowantphil provided a solution: pipe the output anywhere.
Now I am looking through plenty of repositories handed in by my students. I need to check whether the source code in each repository compiles, and if so, whether the executable works properly.
I've observed that some of the executables end in failure with the output 'segmentation fault'. Since I want to hide most details in my script, I prefer not to show any of this annoying output (which is how I found the question mentioned above). However, I still need to know when that happens (so I can skip a loop iteration). What should I do?
A minimum reproduction of this problem:
Create any executable that causes 'segmentation fault'
Place it in a Bash script:
#!/bin/bash
./segfaultgen >/dev/null 2>&1 | :
echo $?
With that | : (mentioned in #yellowantphil's answer), the script above prints 0, which does not tell the truth. However, error messages appear if | : is commented out. I've also tried appending || echo 1 before | :; that doesn't work either :(
By default a pipeline's exit status is that of its right-most command, so it only fails if that command fails. Enable pipefail so the pipeline fails if either command fails.
(It's a good option in general. I enable it by default in all of my scripts.)
#!/bin/bash
set -o pipefail
./segfaultgen &>/dev/null | :
echo $?
Also, since you're using bash, &>/dev/null is shorter.
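For the grading loop, a minimal sketch building on the above might look like this; 139 is 128 + 11, the exit status bash reports when a process is killed by SIGSEGV.
#!/bin/bash
set -o pipefail
./segfaultgen &>/dev/null | :
status=$?
if (( status == 139 )); then
    # 128 + signal number; 11 is SIGSEGV
    echo "segfault detected, skipping this submission"
elif (( status != 0 )); then
    echo "failed with exit status $status"
fi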
I have a lot of executable files and I want to use valgrind to do memory checking.
I am using the following commands to do the memory check:
valgrind -q ./a1.out
valgrind -q ./a2.out
...
valgrind -q ./a100.out
I must stare at the terminal for a long time to find out whether any memory problems exist in my code.
Can valgrind return a value that indicates whether a problem exists or not?
Then the shell could act on that value, so I could write a script that automatically tells me whether there are any problems in the executable files.
For example, I want something like this:
exist_problem=$(valgrind -q ./a1.out)
if [ "$exist_problem" == "no" ]; then
    printf "ALL PASS\n"
fi
Thanks in advance.
Look at the valgrind option:
--error-exitcode=<number> exit code to return if errors found [0=disable]
If you use memcheck, you can also define what kinds of leaks are errors:
--errors-for-leak-kinds=kind1,kind2,.. which leak kinds are errors?
[definite,possible]
Finally, you can also redirect the valgrind output to a file and use
--error-markers=<begin>,<end> add lines with begin/end markers before/after
each error output in plain text mode [none]
and grep your output files for the markers.
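Putting that together, a small driver script could look roughly like this. Note that with --error-exitcode, valgrind also returns the program's own exit code when no errors are found, so a program that normally exits non-zero will be flagged too.
#!/bin/bash
# Sketch: run every executable under valgrind and print only the failures.
# Uses the a1.out .. a100.out names from the question.
all_pass=1
for exe in ./a*.out; do
    if ! valgrind -q --error-exitcode=1 "$exe" >/dev/null 2>&1; then
        echo "memory problem (or non-zero exit) in $exe"
        all_pass=0
    fi
done
(( all_pass )) && printf "ALL PASS\n"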
I'd like something similar to apimonitor but for macOS. Is there something like this already? Thank you. I'd like to be able to know the arguments used by an application when calling dylib functions.
You have several options:
Have you considered just attaching a debugger (i.e. lldb) to the app, setting a breakpoint on the function of interest, and observing the arguments? You could set the breakpoint to automatically print the arguments and then continue.
You can use the pid provider of DTrace. Much of DTrace is disabled by System Integrity Protection (SIP); I don't recall whether the pid provider is or not. If it is disabled, you can enable it when booted into Recovery Mode using the csrutil command (csrutil enable --without dtrace).
Anyway, the command to use the pid provider is:
sudo dtrace -n 'pid$target:library pattern:function pattern:entry { actions }' -p <PID of target>
The patterns are file-glob-style, using * to match any characters and ? to match a single character.
An action can be something like ustack(); to dump the user stack, printf("%x\n", arg0); to print the first argument, etc. See a DTrace manual for more.
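For example, assuming a hypothetical function my_function exported by mylib.dylib whose first argument is a C string, a probe could look like this:
# Sketch: print the string argument and the user stack on every call.
sudo dtrace -n 'pid$target:mylib.dylib:my_function:entry
{
    printf("%s\n", copyinstr(arg0));
    ustack();
}' -p <PID of target>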
Finally, you can use the DYLD_INSERT_LIBRARIES environment variable to inject a library of your own. That library, in turn, can use dyld symbol interposing to install your own version of a given function or functions, which can do whatever you want. It can call through to the original and thus act as a wrapper.
Note that SIP can also interfere with passing DYLD_* environment variables through to the executable.
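As a rough sketch (libinterpose.dylib and the app path are placeholders for the library you build and the binary you target), launching the app with the library injected looks like this:
# SIP-protected binaries (e.g. anything under /usr/bin) will ignore DYLD_* variables.
DYLD_INSERT_LIBRARIES=/path/to/libinterpose.dylib /Applications/SomeApp.app/Contents/MacOS/SomeApp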