Escaping last background job pid ($!) on macOS shell

I am trying to run a background job and get its PID from the bash command line, such as this:
$ cat &
$ echo $!
These two commands work perfectly, but if I try to inline them into one line I run into problems with bash history expansion conflicting with $!:
$ (cat &); echo $!;
-bash: !: event not found
I have tried various types of quoting around the exclamation mark, but the most I could get was for echo to display the literal string "$!".
Any help will be appreciated!

You are putting the first command in the background with the & delimiter. You can easily capture the PID of the backgrounded process by assigning $! to a variable for later use in killing the process (or you can write it to a temp file). E.g.:
cat & savpid=$!; ...do stuff...; kill $savpid (or kill %1 if no other jobs are running)
You also have the option to check the status of your backgrounded process with the jobs command, which lists backgrounded jobs, and to use the fg (foreground) command to bring the job to the foreground. Let us know if you have any further questions.
To accomplish what you are attempting on one line, avoid the parentheses — they run `cat &` in a subshell, so its $! is not visible to the outer shell — and redirect stderr and stdout before backgrounding the process:
cat 2>&1 & echo $!
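Note that history expansion is an interactive-only feature; inside a script, $! needs no escaping at all. A minimal sketch, using sleep as a placeholder background job:

```shell
# Non-interactive shells have history expansion disabled, so $! is safe.
sleep 30 &                  # placeholder background job
bgpid=$!                    # capture its PID immediately
echo "background pid: $bgpid"
kill "$bgpid"               # clean up the placeholder job
wait "$bgpid" 2>/dev/null   # reap it; a "Terminated" status is expected
```

Interactively, `set +H` turns history expansion off for the current session, after which `$!` can be used without any quoting tricks.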

Related

Program has no output only when run by a shell script enforcing a timeout

First, here is the sequence of commands I want to run:
echo <some_input> | <some_command> > temp.dat & pid=$!
sleep 5
kill -INT "$pid"
The above works perfectly when I run the commands one by one from the bash shell, and the contents of temp.dat are exactly what I want. But when I put the same set of commands in a bash script, I get nothing in temp.dat.
Now, I'll mention why I'm writing those commands in such a way:
<some_command> asks for an input, that's why I'm piping <some_input>
I want the output of that command in a separate file, that's why I've redirected the output.
I want to kill the command by sending SIGINT signal after some time.
I've tried running an interactive shell by writing #!/bin/bash -i in the first line of the shell script, but it's not working.
Any alternate method to achieve the same results will be appreciated.
Update: <some_command> also invokes a Python script, but I don't think this should cause it to behave differently.
Update 2: the Python script was in fact the only cause of the different behavior.
One likely cause here is that your Python process may not be flushing stdout within the allowed five seconds of runtime.
export PYTHONUNBUFFERED=1
...will cause content to be promptly written, rather than waiting for process exit / file close / amount of buffered content to reach a level sufficient to justify the overhead of a flush operation.
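A self-contained reproduction of the fix, where writer.py is a hypothetical stand-in for the Python helper that <some_command> invokes:

```shell
# Stand-in Python script: prints one line, then lingers the way an
# interactive command would.
printf 'import time\nprint("line of output")\ntime.sleep(60)\n' > writer.py

# With PYTHONUNBUFFERED=1 (or, equivalently, python3 -u), the line is
# flushed to temp.dat before the SIGINT arrives.
PYTHONUNBUFFERED=1 python3 writer.py > temp.dat & pid=$!
sleep 2
kill -INT "$pid"
wait "$pid" 2>/dev/null || true   # reap; Python exits via KeyboardInterrupt
```

`python3 -u` has the same effect for a single invocation, without touching the environment.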
Will this work for you?
read -p "Input Data : " inputdata ; echo "$inputdata" > temp.data ; sleep 5; exit
Obviously,
#!/usr/bin/env bash
read -p "Input Data : " inputdata
echo "$inputdata" > temp.data
sleep 5
should work as a script. Adapted to suit:
#!/usr/bin/env bash
read -p "Input Data : " inputdata
<code you write, e.g. echo "$inputdata"> > temp.data
sleep 5

Launch process from Bash script in the background, then bring it to foreground

The following is a simplified version of some code I have:
#!/bin/bash
myfile=file.txt
interactive_command > $myfile &
pid=$!
# Use tail to wait for the file to be populated
while read -r line; do
first_output_line=$line
break # we only need the first line
done < <(tail -f "$myfile")
rm "$myfile"
# do stuff with $first_output_line and $pid
# ...
# bring `interactive_command` to foreground?
I want to bring interactive_command to the foreground after its first line of output has been stored to a variable, so that a user can interact with it via calling this script.
However, it seems that using fg %1 does not work in the context of a script, and I cannot use fg with the PID. Is there a way that I can do this?
(Also, is there a more elegant way of capturing the first line of output, without writing to a temp file?)
Job control using fg and bg is only available in interactive shells (i.e., when typing commands in a terminal). Shell scripts usually run in non-interactive shells (the same reason aliases don't work in shell scripts by default).
Since you already have the PID stored in a variable, foregrounding the process is same as waiting on it (See Job Control Builtins). For example you could just do
wait "$pid"
Also, what you have is a basic version of the coproc bash builtin, which lets you capture the standard output of a background command. It exposes two file descriptors, stored in an array, through which you can read the command's stdout or feed input to its stdin:
coproc fdPair { interactive_command; }
The syntax is coproc <array-name> <command>; the builtin populates the array with the file descriptor IDs. (When a name is supplied, the command must be a compound command, hence the braces; if no name is given, the descriptors land in the COPROC array.) So your requirement can be written as
coproc fdPair { interactive_command; }
IFS= read -r -u "${fdPair[0]}" firstLine
printf '%s\n' "$firstLine"
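Here is a runnable sketch of the same pattern, with cat standing in for interactive_command (an assumption — substitute your real program):

```shell
# cat echoes its stdin back, so it acts like a command that answers on stdout.
coproc fdPair { cat; }

printf 'first line\n' >&"${fdPair[1]}"       # feed the coprocess's stdin
IFS= read -r -u "${fdPair[0]}" firstLine     # capture its first output line
printf '%s\n' "$firstLine"

infd=${fdPair[1]}
exec {infd}>&-        # close its stdin so cat sees EOF and exits
wait "$fdPair_PID"
```

Bash stores the coprocess PID in `<array-name>_PID`, which is why `wait "$fdPair_PID"` works here.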

In bash print to line above terminal output

EDIT: Corrected process/thread terminology
My shell script has a foreground process that reads user input and a background process that prints messages. I would like to print these messages on the line above the input prompt rather than interrupting the input. Here's a canned example:
sleep 5 && echo -e "\nINFO: Helpful Status Update!" &
echo -n "> "
read input
When I execute it and type "input" a bunch of times, I get something like this:
> input input input inp
INFO: Helpful Status Update!
ut input
But I would like to see something like this:
INFO: Helpful Status Update!
> input input input input input
The solution need not be portable (I'm using bash on linux), though I would like to avoid ncurses if possible.
EDIT: According to @Nick, previous lines are inaccessible for historical reasons. However, my situation only requires modifying the current line. Here's a proof of concept:
# Make named pipe
mkfifo pipe
# Spawn background process
while true; do
    sleep 2
    echo -en "\033[1K\rINFO: Helpful Status Update!\n> `cat pipe`"
done &
# Start foreground user input
echo -n "> "
pid=-1
collected=""
IFS=""
while true; do
    read -n 1 c
    collected="$collected$c"
    # Named pipes block writes, so the write must be backgrounded
    echo -n "$collected" >> pipe &
    # Kill last loop's (potentially) still-blocking pipe write
    if kill -0 $pid &> /dev/null; then
        kill $pid &> /dev/null
    fi
    pid=$!
done
This produces mostly the correct behavior, but lacks CLI niceties like backspace and arrow navigation. These could be hacked in, but I'm still having trouble believing that a standard approach hasn't already been developed.
The original ANSI escape codes still work in a bash terminal on Linux (and macOS), so you can use \033[F, where \033 is the ESCape character. You can generate it in the terminal by typing Control-V followed by the Escape key; you should see ^[ appear. Then type [F. If you test the following script:
echo "original line 1"
echo "^[[Fupdated line 1"
echo "line 2"
echo "line 3"
You should see output:
updated line 1
line 2
line 3
EDIT:
I forgot to add that using this in your script will cause the cursor to return to the beginning of the line, so further input will overwrite what you have typed already. You could use control-R on the keyboard to cause bash to re-type the current line and return the cursor to the end of the line.
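Rather than typing a literal Escape with Control-V, the same bytes can be generated with printf; a sketch equivalent to the script above:

```shell
printf 'original line 1\n'
printf '\033[F'              # ESC [ F: move the cursor up one line
printf 'updated line 1\n'    # overwrites (part of) line 1
printf 'line 2\n'
printf 'line 3\n'
```

Because \033[F only moves the cursor, leftover characters from a longer original line remain; print \033[K (clear to end of line) after the replacement text if that matters.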

Open a shell in the second process of a pipe

I'm having trouble understanding what's going on in the following situation. I'm not familiar with UNIX pipes, or UNIX in general, but have read the documentation and still can't understand this behaviour.
./shellcode is an executable that successfully opens a shell:
seclab$ ./shellcode
$ exit
seclab$
Now imagine that I need to pass data to ./shellcode via stdin, because this reads some string from the console and then prints "hello " plus that string. I do it in the following way (using a pipe) and the read and write works:
seclab$ printf "world" | ./shellcode
seclab$ hello world
seclab$
However, a new shell is not opened (or at least I can't see it and interact with it), and if I run exit I'm out of the system, so I'm not in a new shell.
Can someone give some advice on how to solve this? I need to use printf because I need to input binary data to the second process and I can do it like this: printf "\x01\x02..."
When you use a pipe, you are telling Unix that the output of the command before the pipe should be used as the input to the command after the pipe. This replaces the default output (screen) and default input (keyboard). Your shellcode command doesn't really know or care where its input is coming from. It just reads the input until it reaches the EOF (end of file).
Try running shellcode and pressing Control-D. That will also exit the shell, because Control-D sends an EOF (your shell might be configured to say "type exit to quit", but it's still responding to the EOF).
There are two solutions you can use:
Solution 1:
Have shellcode accept command-line arguments:
#!/bin/sh
echo "Arguments: $*"
exec sh
Running:
outer$ ./shellcode foo
Arguments: foo
$ echo "inner shell"
inner shell
$ exit
outer$
To feed the argument in from another program, instead of using a pipe, you could:
$ ./shellcode `echo "something"`
This is probably the best approach, unless you need to pass in multi-line data. In that case, you may want to pass in a filename on the command line and read it that way.
Solution 2:
Have shellcode explicitly redirect its input from the terminal after it's processed your piped input:
#!/bin/sh
while read input; do
echo "Input: $input"
done
exec sh </dev/tty
Running:
outer$ echo "something" | ./shellcode
Input: something
$ echo "inner shell"
inner shell
$ exit
outer$
If you see an error like this after exiting the inner shell:
sh: 1: Cannot set tty process group (No such process)
Then try changing the last line to:
exec bash -i </dev/tty
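A third option (an assumption, not covered above): leave shellcode's input redirection alone and simply keep the pipe supplied with data, since anything after the payload on the pipe is read by the inner shell itself. A self-contained sketch, where shellcode.sh is a hypothetical stand-in for ./shellcode (assumed behavior: read a line, greet, then exec a shell):

```shell
# Hypothetical stand-in for ./shellcode: reads one line, greets, then
# execs a shell that keeps reading the same stdin (the pipe).
cat > shellcode.sh <<'EOF'
#!/bin/sh
read input
echo "hello $input"
exec sh
EOF
chmod +x shellcode.sh

# Everything after the first line becomes input to the inner shell:
printf 'world\necho inner shell\nexit\n' | ./shellcode.sh

# For a live session, keep the terminal attached instead:
#   { printf 'world\n'; cat; } | ./shellcode.sh
```

The scripted run prints "hello world" followed by "inner shell"; the commented variant relays your keystrokes through cat so you can interact with the inner shell.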

pgrep prints a different pid than expected

I wrote a small script and for some reason I need to escape any spaces passed in parameters to get it to work.
I read numerous other articles about people with this issue, and it is typically due to not quoting $@, but all of my variables are quoted within the script, and the parameters are quoted on the command line as well. Also, if I run the script in debug mode, the line that is returned runs successfully via copy-paste, but fails when executed from within the script.
CODE:
connections ()
{
    args="$@"
    pid="$(pgrep -nf "$args")"
    echo "$pid"
    # code that shows TCP and UDP connections for $pid
}
connections "$@"
EXAMPLE:
bash test.sh "blah blah"
fails and instead returns the pid of the currently running shell
bash test.sh "blah\ blah"
succeeds and returns the pid of the process you are searching for via pgrep
Your problem has nothing to do with "$@".
If you add a -l option to pgrep, you can see why it's matching the current process.
The script you're running also includes what you're trying to search for in its own arguments.
It's like doing this, and seeing grep:
$ ps -U $USER -o pid,cmd | grep gnome-terminal
12410 grep gnome-terminal
26622 gnome-terminal --geometry=180x65+135+0
The reason the backslash makes a difference: to pgrep's regex engine, backslash+space just means space. That pattern no longer finds your script, because the script's own command line contains blah\ blah, not blah blah.
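If you'd rather fix the script than escape the spaces, one assumed variant (not from the answer above) is to keep the pattern as-is and filter the script's own PID out of pgrep's output:

```shell
#!/usr/bin/env bash
# Reworked test.sh: instead of escaping spaces, exclude our own process.
connections ()
{
    args="$*"
    # pgrep -f matches full command lines; grep drops this script's own
    # PID, and tail keeps the most recent remaining match (approximating
    # pgrep's -n flag).
    pid="$(pgrep -f "$args" | grep -Fxv "$$" | tail -n 1)"
    echo "$pid"
    # code that shows TCP and UDP connections for $pid
}

# Guard so running with no arguments doesn't match every process.
if [ "$#" -gt 0 ]; then
    connections "$@"
fi
```

Note that $$ inside the command substitution still refers to the script's shell, which is exactly the process pgrep would otherwise match.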
