LLDB Restart process without user input - debugging

I am trying to debug a concurrent program in LLDB and am getting a seg fault, but not on every execution. I would like to run my process over and over until it hits a seg fault. So far, I have the following:
b exit
breakpoint com add 1
Enter your debugger command(s). Type 'DONE' to end.
> run
> DONE
The annoying part is that when I reach the exit function, hit my breakpoint, and the run command gets executed, I get the following prompt from LLDB:
There is a running process, kill it and restart?: [Y/n]
I would like to automatically restart the process, without having to manually enter Y each time. Anyone know how to do this?

You could kill the previous instance by hand with kill (which doesn't prompt); then the run command won't prompt either.
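For example, wiring that into the breakpoint command list from the question might look like this (a sketch; kill here is lldb's alias for process kill):
(lldb) b exit
(lldb) breakpoint command add 1
Enter your debugger command(s). Type 'DONE' to end.
> kill
> run
> DONE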
Or:
(lldb) settings set auto-confirm 1
will give the default (capitalized) answer to all lldb queries.
Or, if you have Xcode 6.x (or current TOT svn lldb), you could use the lldb driver's batch mode:
$ lldb --help
...
-b
--batch
Tells the debugger to run the commands from -s, -S, -o & -O,
and then quit. However if any run command stopped due to a signal
or crash, the debugger will return to the interactive prompt at the
place of the crash.
So for instance, you could script this in the shell, running:
lldb -b -o run
in a loop, and this will stop if the run ends in a crash rather than a normal exit. In some circumstances this might be easier to do.
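For example, a minimal wrapper could look like the following sketch (./a.out stands in for your program):
#!/bin/sh
# Re-run the program under lldb batch mode.  On a clean exit lldb -b quits
# and the loop starts a fresh run; on a crash it stays at the interactive
# prompt at the crash site, so interrupt the loop once you are done there.
while true; do
    lldb -b -o run ./a.out
done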

Related

How to keep LLDB running with redirected stdin, stdout and stderr

I'm working on an automation task where I need to check whether some application is debuggable or not. So my workflow is:
Attach LLDB to a process
Set some breakpoints
Perform some action in a target App
Check a frame in LLDB for some expected expressions
Detach debugger from App.
Since my test script has to switch between the App and the Debugger, I decided to run LLDB in a separate (background) process and interact with it through named pipes. Something like this:
#!/bin/sh
pipe=/tmp/mypipe
mkfifo $pipe
lldb >out.txt 2>err.txt < $pipe &
I'm sending commands this way: echo "attach -n AppName" > /tmp/mypipe. But my problem is that after running any command, LLDB exits. So the last lines in my out.txt are:
Executable module set to "/path/to/application".
Architecture set to: x86_64-apple-macosx.
And the LLDB process no longer exists. Is there any way to keep it running?
Thanks!
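A common pitfall with this kind of setup is that each echo "..." > /tmp/mypipe opens the FIFO, writes, and closes it again; once the last writer closes, lldb sees EOF on its stdin and exits. One possible workaround (a sketch that keeps a write descriptor open for the whole session; AppName is the placeholder from the question) is:
#!/bin/sh
# Hold one write end of the FIFO open so lldb never sees EOF on stdin
# between individual commands.
pipe=/tmp/mypipe
mkfifo $pipe
lldb > out.txt 2> err.txt < $pipe &
exec 3> $pipe                  # keep the pipe open on fd 3
echo "attach -n AppName" >&3
# ... send further commands the same way ...
echo "detach" >&3
echo "quit" >&3
exec 3>&-                      # closing fd 3 delivers EOF; lldb exits
wait                           # wait for the backgrounded lldb to finish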

Debugging GDB itself and signal handling issues

I am trying to debug GDB itself and am dealing with a Ctrl+C signal-handling problem when a second GDB is attached from another terminal.
I run the GDB to be debugged in Terminal 1 in TUI mode. Right after, I open a second terminal (Terminal 2), find the PID of the GDB running in Terminal 1, and attach to that process to debug it.
In Terminal 1
$ build-gdb/gdb/gdb -tui ./build/output.elf -tty=$TTY
In Terminal 2
$ ps -elf | less
$ sudo gdb -p PID_NUMBER -tty=$TTY -tui
The problem is that when I hit Ctrl+C to stop the GDB in Terminal 1, the GDB running in Terminal 2 stops instead. The GDB in Terminal 1 does not respond to ^C at all. I tried using the -tty parameter with the current TTY, but it did not solve the problem. GDB uses the GNU readline library, so I suspect I need to configure the terminal and its input properly.
Any idea?
You can use
handle SIGINT pass
to instruct GDB to pass the signal to the inferior. See Signals in the GDB manual. The nostop argument could be useful in this situation, too.
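In the attached session from Terminal 2, that could look like the following sketch (gdb may ask for confirmation, since it normally uses SIGINT itself):
(gdb) handle SIGINT pass nostop
(gdb) continue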

how to close gdb connection without stopping running program

Is there a way to exit from a gdb connection without stopping / exiting the running program? I need the running program to continue after the gdb connection is closed.
Is there a way to exit from a gdb connection without stopping / exiting the running program?
(gdb) help detach
Detach a process or file previously attached.
If a process, it is no longer traced, and it continues its execution. If
you were debugging a file, the file is closed and gdb no longer accesses it.
List of detach subcommands:
detach checkpoint -- Detach from a checkpoint (experimental)
detach inferiors -- Detach from inferior ID (or list of IDS)
Type "help detach" followed by detach subcommand name for full documentation.
Type "apropos word" to search for commands related to "word".
Command name abbreviations are allowed if unambiguous.
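In practice this usually comes down to two commands; the program keeps running after both:
(gdb) detach
(gdb) quit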
Since the accepted (only other) answer does not specifically address how to shut down gdb without stopping the program under test, I'm throwing my hat into the ring.
Option 1
Kill the server from the terminal in which it's running by pressing Ctrl+C.
Option 2
Kill the gdb server and/or client from another terminal session.
$ ps -u username | grep gdb
667511 pts/6 00:00:00 gdbserver
667587 pts/7 00:00:00 gdbclient
$ kill 667587
$ kill 667511
These options are for a Linux environment. A similar approach (killing the process) would probably also work in Windows.

How to pause the execution of a program after 10 seconds and get a backtrace?

A legacy program most likely gets into an infinite loop on certain pathological inputs. I have >1000 such instances; however, I suspect that the vast majority of them trigger the same bug. Therefore, I would like to reduce the >1000 instances to the fundamentally different ones. The first step is to pause the application after, say, 10 seconds and collect the backtrace.
If I run:
gdb --batch --command=backtrace.txt --args ./legacy_program
with backtrace.txt containing:
run
bt
and hit Ctrl+C after 10 seconds in the same terminal, I get exactly the backtrace I want.
Now, I would like to do that automatically. I have tried sending SIGINT (the expected equivalent of Ctrl+C) from another terminal, but I do not get the backtrace anymore. Here are some of my failed attempts, based on
GDB how to stop execution without a breakpoint?
Neither of these has any effect:
pkill -SIGINT gdb
kill -SIGINT 5717
where 5717 is the pid of the only gdb running. Sending SIGINT to legacy_program the same way does kill it, but then I do not get the backtrace:
Program received signal SIGINT, Interrupt.
Quit
How can I programmatically pause the execution of the legacy_program after 10 seconds and get a backtrace?
This post was motivated by my frustration not being able to find an answer to this question here at StackOverflow.
Also note that
[it is not merely OK to ask and answer your own question, it is explicitly encouraged.](https://blog.stackoverflow.com/2011/07/its-ok-to-ask-and-answer-your-own-questions/)
Apparently, it is a known bug/feature in gdb; see
GDB is not trapping SIGINT. Ctrl+C terminates program when should break gdb. Try sending SIGSTOP instead from the other terminal:
pkill -STOP legacy_program
It works on my machine.
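To automate the 10-second timeout, one option is to arm a timer in the shell before starting gdb (a sketch combining the batch invocation above with the SIGSTOP trick; bt_output.txt is just an arbitrary name):
#!/bin/sh
# Stop the program after 10 seconds; gdb then regains control and the
# "bt" in backtrace.txt prints the backtrace before batch mode exits.
( sleep 10; pkill -STOP legacy_program ) &
gdb --batch --command=backtrace.txt --args ./legacy_program > bt_output.txt 2>&1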
Note that you do not have to run legacy_program under the debugger at all. Enable core dumps:
ulimit -c unlimited
and send the program SIGTRAP to make it crash, then get the backtrace from the core dump. So, start the program:
./legacy_program
From another terminal:
pkill -TRAP legacy_program
The backtrace can be obtained like this:
gdb --batch -ex=bt ./legacy_program core
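Put together, that approach might look like this (a sketch; it assumes the dump ends up as ./core, which depends on your system's core pattern):
#!/bin/sh
# Let the program run for 10 seconds, make it dump core via SIGTRAP,
# then extract the backtrace from the core file.
ulimit -c unlimited
./legacy_program &
sleep 10
pkill -TRAP legacy_program
wait                           # wait for the core dump to be written
gdb --batch -ex=bt ./legacy_program core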

Shell script behaves strangely when called via an Erlang port

When calling shell scripts from Erlang, I generally need their exit status (0 or something else), so I run them using this function:
%% in module util
os_cmd_exitstatus(Action, Cmd) ->
    ?debug("~ts starting... Shell command: ~ts", [Action, Cmd]),
    try erlang:open_port({spawn, Cmd}, [exit_status, stderr_to_stdout]) of
        Port ->
            os_cmd_exitstatus_loop(Action, Port)
    catch
        _:Reason ->
            case Reason of
                badarg ->
                    Message = "Bad input arguments";
                system_limit ->
                    Message = "All available ports in the Erlang emulator are in use";
                _ ->
                    Message = file:format_error(Reason)
            end,
            ?error("~ts: shell command error: ~ts", [Action, Message]),
            error
    end.

os_cmd_exitstatus_loop(Action, Port) ->
    receive
        {Port, {data, Data}} ->
            ?debug("~ts... Shell output: ~ts", [Action, Data]),
            os_cmd_exitstatus_loop(Action, Port);
        {Port, {exit_status, 0}} ->
            ?info("~ts finished successfully", [Action]),
            ok;
        {Port, {exit_status, Status}} ->
            ?error("~ts failed with exit status ~p", [Action, Status]),
            error;
        {'EXIT', Port, Reason} ->
            ?error("~ts failed with port exit: reason ~ts",
                   [Action, file:format_error(Reason)]),
            error
    end.
This worked fine, until I used this to start a script which forks off a program and exits:
#!/bin/sh
FILENAME=$1
eog $FILENAME &
exit 0
(In the actual use case, there are quite a few more arguments, and some massaging before they are passed to the program.) When run from the terminal, it shows the image and exits immediately, as expected.
But when run from Erlang, it doesn't. In the log file I see that it starts fine:
22/Mar/2011 13:38:30.518 Debug: Starting player starting... Shell command: /home/aromanov/workspace/gmcontroller/scripts.dummy/image/show-image.sh /home/aromanov/workspace/media/images/9e89471e-eb0b-43f8-8c12-97bbe598e7f7.png
and the eog window appears. But I don't get
22/Mar/2011 13:47:14.709 Info: Starting player finished successfully
until I kill the eog process (with kill or just by closing the window), which isn't suitable for my requirements. Why the difference in behavior? Is there a way to fix it?
Normally, if you run a command in the background with & in a shell script and the shell script terminates before the command, the command gets orphaned. It might be that Erlang tries to prevent orphaned processes in open_port and waits for eog to terminate. Normally, if you want to run something in the background during a shell script, you should put a wait at the end of the script to wait for your background processes to terminate. But this is exactly what you don't want to do here.
You might try the following in your shell script:
#!/bin/sh
FILENAME=$1
daemon eog $FILENAME
# exit 0 not needed: daemon returns 0 if everything is ok
That is, if your operating system has a daemon command. I checked on FreeBSD and it has one: daemon(8).
This is not a command available on all Unix-like systems; however, there might be a different command doing the same thing on your operating system.
The daemon utility detaches itself from the controlling terminal and executes the program specified by its arguments.
I'm not sure if this solves your problem, but I suspect that eog somehow stays attached to stdin/stdout as a kind of controlling terminal. Worth a try anyway.
This should also cover the possibility that job control is erroneously turned on, which could likewise cause the problem. Since daemon itself exits normally, your shell won't try to wait for the background job on exit, because there is none from the shell's point of view.
Having said all this: why not just keep the port open in Erlang while eog runs?
Start it with:
#!/bin/sh
FILENAME=$1
exec eog $FILENAME
Calling it with exec doesn't fork; it replaces the shell process with eog. The exit status you'll see in Erlang will then be eog's status when it terminates. You also have the option of closing the port and terminating eog from Erlang if you want to.
Perhaps your /bin/sh doesn't support job control when it isn't run interactively? At least the /bin/sh (actually dash(1)!) on my Ubuntu system mentions:
-m monitor      Turn on job control (set automatically
                when interactive).
When you run the script from a terminal, the shell probably recognizes that it is being run interactively and supports job control. When you run the shell script as a port, the shell probably runs without job control.
