How to keep LLDB running with redirected stdin, stdout and stderr

I'm working on an automation task where I need to check whether some application is debuggable. My workflow is:
1. Attach LLDB to a process.
2. Set some breakpoints.
3. Perform some action in the target app.
4. Check a frame in LLDB for some expected expressions.
5. Detach the debugger from the app.
Since my test script has to switch between the app and the debugger, I decided to run LLDB in a separate (background) process and interact with it through a named pipe. Something like this:
#!/bin/bash
pipe=/tmp/mypipe
mkfifo $pipe
lldb >out.txt 2>err.txt < $pipe &
I'm sending commands this way: echo "attach -n AppName" > /tmp/mypipe. But my problem is that after running any command, LLDB exits. The last lines in my out.txt are:
Executable module set to "/path/to/application".
Architecture set to: x86_64-apple-macosx.
And the LLDB process no longer exists. Is there any way to keep it running?
Thanks!
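A likely cause, inferred from the behavior described: each echo "..." > /tmp/mypipe opens the FIFO, writes one line, and closes it again. When the last writer closes a FIFO, the reader sees end-of-file, so LLDB treats its stdin as exhausted and quits. One way to keep the read side alive is the tail -f trick that also appears in one of the answers below (a sketch, assuming this EOF diagnosis is right):
#!/bin/bash
pipe=/tmp/mypipe
[ -p "$pipe" ] || mkfifo "$pipe"
# tail -f holds the FIFO open and keeps reading forever, so lldb never
# sees EOF on stdin even though each echo closes the pipe again.
tail -f "$pipe" | lldb > out.txt 2> err.txt &
# Commands are still sent the same way:
echo "attach -n AppName" > "$pipe"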

Related

Running bash script does not return to terminal when using ampersand (&) to run a subprocess in the background

I have a script (let's call it parent.sh) that makes two calls to a second script (child.sh) that runs a Java process. The child.sh scripts are run in the background by placing an & at the end of the line in parent.sh. However, when I run parent.sh, I need to press Ctrl+C to return to the terminal. What is the reason for this? Does it have something to do with the fact that the child.sh processes run under the parent.sh process, so parent.sh doesn't die until its children do?
parent.sh
#!/bin/bash
child.sh param1a param2a &
child.sh param1b param2b &
exit 0
child.sh
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#email.com
As you can see, I don't want to run the Java process itself in the background, because I want to send out a mail when the process dies. Doing it as above works fine from a functional standpoint, but I would like to know how I can get it to return to the terminal after executing parent.sh.
What I ended up doing was changing parent.sh to the following:
#!/bin/bash
child.sh param1a param2a > startup.log &
child.sh param1b param2b > startup2.log &
exit 0
I would not have come to this solution without your suggestions and root cause analysis of the issue. Thanks!
And apologies for my inaccurate comment. (There was no input, I answered from memory and I remembered incorrectly.)
The following link from the Linux Documentation Project suggests adding a wait after your mail command in child.sh:
http://tldp.org/LDP/abs/html/x9644.html
Summary of the above document
Within a script, running a command in the background with an ampersand (&)
may cause the script to hang until ENTER is hit. This seems to occur with
commands that write to stdout. It can be a major annoyance.
...
As Walter Brameld IV explains it:
As far as I can tell, such scripts don't actually hang. It just
seems that they do because the background command writes text to
the console after the prompt. The user gets the impression that
the prompt was never displayed. Here's the sequence of events:
1. Script launches background command.
2. Script exits.
3. Shell displays the prompt.
4. Background command continues running and writing text to the console.
5. Background command finishes.
6. User doesn't see a prompt at the bottom of the output and thinks the script is hanging.
If you change child.sh to look like the following you shouldn't experience this annoyance:
#!/bin/bash
java com.test.Main
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user#gmail.com
wait
Or as @SebastianStigler states in a comment to your question above:
Add a > /dev/null at the end of the line with mail. mail will otherwise try to start its interactive mode.
This will cause the mail command to write to /dev/null rather than stdout which should also stop this annoyance.
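Applied to child.sh, that suggestion would look like this (a sketch of the same script with the redirect added):
#!/bin/bash
java com.test.Main
# Redirecting stdout keeps mail out of interactive mode and stops it
# from writing to the terminal after the script has exited.
echo "Main Process Stopped" | mail -s "WARNING-Main Process is down." user@email.com > /dev/null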
Hope this helps
The process was still linked to the controlling terminal because STDOUT needs somewhere to go. You solved that problem by redirecting to a file ( > startup.log ).
If you're not interested in the output, discard STDOUT completely ( >/dev/null ).
If you're not interested in errors, either, discard both ( &>/dev/null ).
If you want the processes to keep running even after you log out of your terminal, use nohup — that effectively disconnects them from what you are doing and leaves them to quietly run in the background until you reboot your machine (or otherwise kill them).
nohup child.sh param1a param2a &>/dev/null &

Pass command from scheduled script to program running in xterm window

I have a game server running in an xterm window.
Once daily, from a script running on a schedule, I'd like to send a warning message to any players, followed after a delay by the stop command, to the program inside the xterm window. This causes the cleanup and save functions to run automatically.
Once the program shuts down I can bring it back up easily but I don't know how to send the warning and stop commands first.
Typed directly into xterm the commands are:
broadcast Reboot in 2 minutes
(followed by a 2 minute wait and then simply):
stop
No / or other leading characters are required.
Any help?
Do you also need to type something from the xterm itself (from time to time) or do you want your server to be fully driven from external commands?
Is your program line-oriented? You may first try something like:
mkfifo /tmp/f
tail -f /tmp/f | myprogram
and then try to send commands to your program (from another xterm) with
echo "mycommand" > /tmp/f
You may also consider using netcat to turn your program into a server:
Turn simple C program into server using netcat
http://lifehacker.com/202271/roll-your-own-servers-with-netcat
http://nc110.sourceforge.net/
Then you could write a shell script for sending the required commands to your "server".
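A minimal flavor of that netcat idea might look like this (a sketch; the port number is arbitrary, myprogram stands in for the game server, and listener flags differ between netcat variants):
# Feed every line received on TCP port 4321 to the server's stdin.
# -k keeps listening after each client disconnects (BSD/OpenBSD nc);
# traditional netcat spells the listener "nc -l -p 4321" instead.
nc -lk 4321 | myprogram
The scheduled script could then send commands with, for example, echo "stop" | nc localhost 4321.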
If you know a little C programming: I remember once hacking the script program (which was very easy to do; the code is short) so that it launched another program but read its commands from a FIFO file (then again, a shell script would be easy to write for driving your program).
Something you might try is running your program inside a screen session. You can then send commands to the session from cron, just as if you had typed them.
In your xterm, before launching the program do:
screen -S myscreen bash
(or you can even replace bash with your program). Then from your cron:
screen -S myscreen -X stuff 'broadcast Reboot in 2 minutes\n'
sleep 120
screen -S myscreen -X stuff 'stop\n'
will enter that text. You can exit the session using screen -S myscreen -X quit
or by typing ctrl-a \.
screen is intended to be transparent. To easily see you are inside screen, you can
configure a permanent status bar at the bottom of your xterm:
echo 'hardstatus alwayslastline' >~/.screenrc
Now when you run screen you should see a reverse video bottom line. Depending
on your OS it may be empty.
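Putting the pieces together, the scheduled side could be a small wrapper script driven by cron (a sketch; the script path, session name and schedule are placeholders):
#!/bin/bash
# reboot-server.sh: warn players, wait two minutes, then stop the
# server running inside the screen session named "myscreen".
screen -S myscreen -X stuff 'broadcast Reboot in 2 minutes\n'
sleep 120
screen -S myscreen -X stuff 'stop\n'
A crontab entry such as 0 4 * * * /path/to/reboot-server.sh would then run it daily at 04:00.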

LLDB Restart process without user input

I am trying to debug a concurrent program in LLDB and am getting a seg fault, but not on every execution. I would like to run my process over and over until it hits a seg fault. So far, I have the following:
b exit
breakpoint com add 1
Enter your debugger command(s). Type 'DONE' to end.
> run
> DONE
The part that I find annoying is that when I reach the exit function and hit my breakpoint, and the run command gets executed, I get the following prompt from LLDB:
There is a running process, kill it and restart?: [Y/n]
I would like the process to restart automatically, without having to manually enter Y each time. Does anyone know how to do this?
You could kill the previous instance by hand with kill, which doesn't prompt; then the run command won't prompt either.
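Folded into the breakpoint commands from the question, that could look like this (an untested sketch; process kill is the unabbreviated form of kill):
(lldb) b exit
(lldb) breakpoint command add 1
Enter your debugger command(s). Type 'DONE' to end.
> process kill
> run
> DONE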
Or:
(lldb) settings set auto-confirm 1
will give the default (capitalized) answer to all lldb queries.
Or if you have Xcode 6.x (or current TOT svn lldb) you could use the lldb driver's batch mode:
$ lldb --help
...
-b
--batch
Tells the debugger to run the commands from -s, -S, -o & -O,
and then quit. However if any run command stopped due to a signal
or crash, the debugger will return to the interactive prompt at the
place of the crash.
So for instance, you could script this in the shell, running:
lldb -b -o run
in a loop, and this will stop if the run ends in a crash rather than a normal exit. In some circumstances this might be easier to do.
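Such a driver loop might look like this (a sketch; ./myprogram stands in for the actual binary):
#!/bin/bash
# Re-run the target under lldb's batch mode. A clean exit makes lldb
# quit, so the loop simply starts the next iteration; a crash leaves
# lldb sitting at its interactive prompt at the point of the crash.
while true; do
    lldb -b -o run ./myprogram
done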


Shell script behaves strangely when called via an Erlang port

When calling shell scripts from Erlang, I generally need their exit status (0 or something else), so I run them using this function:
%% in module util
os_cmd_exitstatus(Action, Cmd) ->
    ?debug("~ts starting... Shell command: ~ts", [Action, Cmd]),
    try erlang:open_port({spawn, Cmd}, [exit_status, stderr_to_stdout]) of
        Port ->
            os_cmd_exitstatus_loop(Action, Port)
    catch
        _:Reason ->
            case Reason of
                badarg ->
                    Message = "Bad input arguments";
                system_limit ->
                    Message = "All available ports in the Erlang emulator are in use";
                _ ->
                    Message = file:format_error(Reason)
            end,
            ?error("~ts: shell command error: ~ts", [Action, Message]),
            error
    end.

os_cmd_exitstatus_loop(Action, Port) ->
    receive
        {Port, {data, Data}} ->
            ?debug("~ts... Shell output: ~ts", [Action, Data]),
            os_cmd_exitstatus_loop(Action, Port);
        {Port, {exit_status, 0}} ->
            ?info("~ts finished successfully", [Action]),
            ok;
        {Port, {exit_status, Status}} ->
            ?error("~ts failed with exit status ~p", [Action, Status]),
            error;
        {'EXIT', Port, Reason} ->
            ?error("~ts failed with port exit: reason ~ts",
                   [Action, file:format_error(Reason)]),
            error
    end.
This worked fine until I used it to start a script that forks off a program and exits:
#!/bin/sh
FILENAME=$1
eog $FILENAME &
exit 0
(In the actual use case there are quite a few more arguments, and some massaging happens before they are passed to the program.) When run from the terminal, it shows the image and exits immediately, as expected.
But running from Erlang, it doesn't. In the log file I see that it starts fine:
22/Mar/2011 13:38:30.518 Debug: Starting player starting... Shell command: /home/aromanov/workspace/gmcontroller/scripts.dummy/image/show-image.sh /home/aromanov/workspace/media/images/9e89471e-eb0b-43f8-8c12-97bbe598e7f7.png
and the eog window appears. But I don't get
22/Mar/2011 13:47:14.709 Info: Starting player finished successfully
until I kill the eog process (with kill or by just closing the window), which isn't suitable for my requirements. Why the difference in behavior? Is there a way to fix it?
Normally, if you run a command in the background with & in a shell script and the script terminates before the command does, the command gets orphaned. It might be that Erlang tries to prevent orphaned processes in open_port and waits for eog to terminate. Normally, if you want to run something in the background from a shell script, you put a wait at the end of the script to wait for your background processes to terminate. But that is exactly what you don't want to do here.
You might try the following in your shell script:
#!/bin/sh
FILENAME=$1
daemon eog $FILENAME
# exit 0 not needed: daemon returns 0 if everything is ok
That is, if your operating system has a daemon command. I checked on FreeBSD, and it has one: daemon(8). The command is not available on all Unix-like systems, but there might be a different command that does the same thing on your operating system.
The daemon utility detaches itself from the controlling terminal and executes the program specified by its arguments.
I'm not sure if this solves your problem, but I suspect that eog somehow stays attached to stdin/stdout as a kind of controlling terminal. Worth a try anyway.
This should also take care of the possibility that job control is erroneously on, which could likewise cause the problem. Since daemon exits normally, your shell won't try to wait for a background job on exit, because from the shell's point of view there is none.
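On Linux, where daemon(8) is usually absent, setsid(1) plus explicit redirections gives a similar detach (a sketch, not from the original answer; the redirections matter because the Erlang port otherwise keeps waiting on its pipes):
#!/bin/sh
FILENAME=$1
# Start eog in a new session, detached from the controlling terminal,
# with all three standard streams pointed away from the port's pipes.
setsid eog "$FILENAME" < /dev/null > /dev/null 2>&1 &
exit 0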
Having said all this: why not just keep the port open in Erlang while eog runs?
Start it with:
#!/bin/sh
FILENAME=$1
exec eog $FILENAME
Calling it with exec doesn't fork; it replaces the shell process with eog. The exit status you'll see in Erlang will then be the status of eog when it terminates. You also get the option of closing the port and terminating eog from Erlang if you want to do so.
Perhaps your /bin/sh doesn't support job control when it isn't run interactively? At least the /bin/sh (actually dash(1)!) on my Ubuntu system mentions:
-m monitor    Turn on job control (set automatically when interactive).
When you run the script from a terminal, the shell probably recognizes that it is being run interactively and supports job control. When you run the shell script as a port, the shell probably runs without job control.
