Read from executable and write back response - bash

I'm starting an executable with
./test
It will then write a text to stdout
static test: DYNAMIC_VAL
The static test is always the same but the value DYNAMIC_VAL changes.
I need to read DYNAMIC_VAL, process it and send back byte hex codes \x12\x34\x56 to the stdin depending on DYNAMIC_VAL.
./test is an executable, and the write to stdin must go to the original invocation of test; otherwise DYNAMIC_VAL would have changed with a new invocation.
Is there a simple way of doing this in bash?

If I'm understanding this question correctly, you want to read a line from your ./test process, write data back to the same process, and repeat until it produces something saying it's done (or forever)?
One way is to use a coprocess.
Example:
$ cat test.sh
#!/bin/sh
echo "static test: foo"
read line
echo "static test: bar"
read line
echo "static test: done"
$ cat demo.sh
#!/usr/bin/env bash
coproc ./test.sh
while true; do
    read -r -u "${COPROC[0]}" s t dynamic_val
    case "$dynamic_val" in
        "done")
            echo "Exiting"
            break;;
        *)
            echo "read $dynamic_val"
            printf "\x12\x34\x56\n" >&"${COPROC[1]}";;
    esac
done
$ ./demo.sh
read foo
read bar
Exiting
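Adapted to the original ./test binary, the same pattern would look roughly like this; the process_val mapping from DYNAMIC_VAL to the reply bytes is a made-up placeholder:
#!/usr/bin/env bash
# Sketch: drive ./test instead of the demo test.sh
process_val() {                 # hypothetical mapping: DYNAMIC_VAL -> reply bytes
    case "$1" in
        foo) printf '\x12\x34\x56' ;;
        *)   printf '\x00' ;;
    esac
}
coproc ./test
while read -r -u "${COPROC[0]}" s t dynamic_val; do
    process_val "$dynamic_val" >&"${COPROC[1]}"
done
Note that this relies on ./test flushing its output after each line; if it block-buffers when writing to a pipe, the read will appear to hang.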

Related

Redirect copy of stdin to file from within bash script itself

In reference to https://stackoverflow.com/a/11886837/1996022 (I also shamelessly stole the title), where the question is how to capture the script's output, I would like to know how I can additionally capture the script's input, mainly so that scripts which also take user input produce complete logs.
I tried things like
exec 3< <(tee -ia foo.log <&3)
exec <&3 <(tee -ia foo.log <&3)
But nothing seems to work. I'm probably just missing something.
Maybe it'd be easier to use the script command? You could either have your users run the script with script directly, or do something kind of funky like this:
#!/bin/bash
main() {
    read -r -p "Input string: "
    echo "User input: $REPLY"
}

if [ "$1" = "--log" ]; then
    # If the first argument is "--log", shift the arg
    # out and run main
    shift
    main "$@"
else
    # If run without --log, re-run this script within a
    # script command so all script I/O is logged
    script -q -c "$0 --log $*" test.log
fi
Unfortunately, you can't pass a function to script -c which is why the double-call is necessary in this method.
If it's acceptable to have two scripts, you could also have a user-facing script that just calls the non-user-facing script with script:
script_for_users.sh
--------------------
#!/bin/sh
script -q -c "/path/to/real_script.sh" <log path>
real_script.sh
---------------
#!/bin/sh
<Normal business logic>
It's simpler:
#! /bin/bash
tee ~/log | your_script
The wonderful thing is your_script can be a function, command or a {} command block!
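For example, wrapping the user interaction in a function and logging its input might look roughly like this (a sketch; the ask function name and the log path are made up):
#!/bin/bash
ask() {
    read -r -p "Input string: "
    echo "User input: $REPLY"
}
# tee copies everything typed on stdin into the log
# before passing it on to the function
tee -a ~/input.log | ask
This logs only the input; combine it with the output redirection from the linked answer if you want both sides in the log.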

Why is there a difference in behavior when the commands are piped to bash versus when bash reads the file with the commands?

Consider this bash script
$ cat sample.sh
echo "PRESS ENTER:"
read continue;
echo "DONE";
If I run it this way, the script exits after the first echo without waiting for the read:
$ cat sample.sh | bash --noprofile --norc
PRESS ENTER:
However, if I run it this way, it works as expected:
$ bash --noprofile --norc sample.sh
PRESS ENTER:
DONE
Why the difference?
In the first instance, the read will absorb echo "DONE"; as both the script and user input for read are coming from stdin.
$ cat sample.sh
echo "PRESS ENTER:"
read continue;
echo "DONE";
echo "REALLY DONE ($continue)";
$ cat sample.sh | bash --noprofile --norc
PRESS ENTER:
REALLY DONE (echo "DONE";)
$
If you add echo "$continue" to the end, the issue becomes obvious:
(Also I removed the semicolons since they do nothing.)
$ cat test.sh
echo "PRESS ENTER:"
read continue
echo "DONE"
echo "$continue"
$ bash test.sh
PRESS ENTER:
foo
DONE
foo
$ bash < test.sh
PRESS ENTER:
echo "DONE"
read continue is taking echo "DONE" as input, since it's coming from stdin.
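If the script genuinely needs to prompt the user even when it is piped into bash, one common workaround (not part of the original answers) is to read from the controlling terminal instead of stdin:
echo "PRESS ENTER:"
read continue < /dev/tty   # read from the terminal, not from the script text on stdin
echo "DONE"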

How do I programmatically execute a carriage return in bash?

My main file is main.sh:
cd a_folder
echo "executing another script"
source anotherscript.sh
cd ..
#some other operations.
anotherscript.sh:
pause(){
    read -p "$*"
}
echo "enter a number: "
read number
#some operation
pause "Press enter to continue..."
I wanted to skip the pause command. But when I do:
echo "/n" | source anotherscript.sh
It doesn't allow to enter the number. I want the "/n" to occur so that I allow the user to enter a number but skip the pause statement.
PS: can't do any changes in anotherscript.sh. All changes to be done in main.sh.
Try
echo | source anotherscript.sh
Your approach does not work because the script to be sourced expects two lines from stdin: first a line containing a number, then an empty line (which satisfies the pause). Hence you would have to feed two lines, the number and the empty line, to the script. If you still want to get the number from your own stdin, you have to use a read command before:
echo "executing another script"
echo "enter a number: "
read number
printf "$number\n\n" | source anotherscript.sh
But this still has some danger lurking: because it is part of a pipeline, the source command is executed in a subshell; hence, any changes to the environment performed by anotherscript.sh won't be visible in your shell.
A workaround would be to put the number-reading logic outside of main.sh:
# This is script supermain.sh
echo "executing another script"
echo "enter a number: "
read number
printf "$number\n\n"|bash main.sh
where in main.sh, you simply keep your source anotherscript.sh without any piping.
As user1934428 comments, the bash pipeline causes the cascading commands to be executed in subshells, and the variable modifications made there are not reflected in the current process.
To change this behavior, you can set lastpipe with the shopt builtin. Then bash changes the job control so that the last command in the pipeline is executed in the current shell (as tcsh does).
Then would you please try:
main.sh
#!/bin/bash
shopt -s lastpipe # this changes the job control
read -p "enter a number: " x # ask for the number in main_sh instead
cd a_folder
echo "executing another script"
echo "$x" | source anotherscript.sh > /dev/null
# anotherscript.sh is executed in the current process
# unnecessary messages are redirected to /dev/null
cd ..
echo "you entered $number" # check the result
#some other operations.
which will properly print the value of number. (Note that lastpipe takes effect only when job control is off; that is the default in non-interactive shells, so it works inside a script but not at an interactive prompt unless you also disable job control with set +m.)
Alternatively you can also say as:
#!/bin/bash
read -p "enter a number: " x
cd a_folder
echo "executing another script"
source anotherscript.sh <<< "$x" > /dev/null
cd ..
echo "you entered $number"
#some other operations.

Redirect to a file which is got from command line

I'm a beginner. :)
I'm trying to ask for the name of a file at a prompt in one script
and write to that file from another script, like this:
test.sh
echo "enter file name"
read word
sh test2.sh
test2.sh
read number
echo "$number" >> $word
I get an error
Test2.sh: line 1: $word: ambiguous redirect
Any suggestion?
If you want a variable from test.sh to be visible to its child processes, you need to export it. In your case, you would seem to want to export word. Perhaps a better approach would be for test2.sh to accept the destination file as a parameter instead, though.
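The export variant would look roughly like this (a sketch for completeness; the parameter-passing version shown below is the cleaner option):
# test.sh (export variant)
echo "enter file name"
read word
export word        # make $word visible to the child shell
sh test2.sh

# test2.sh (export variant)
read number
echo "$number" >> "$word"
Quoting "$word" in the redirect also avoids the ambiguous redirect when the file name contains spaces.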
test.sh
echo "enter file name"
read word
test2.sh "$word"
test2.sh
#!/bin/sh
: ${1?must have destination file name} # fail if missing
read number
echo "$number" >> "$1"

Script works until it's modified to redirect stdout & stderr to a log file

Explanation
When The Script below is run with no modifications, it outputs (correctly):
two one START
After uncommenting the exec statements in the script below, file descriptor 3 points to standard output, and file descriptor 4 points to standard error:
exec 3>&1 4>&2
And all standard output & standard error gets logged to a file instead of printing to the console:
# Two variances seen in the script below
exec 1>"TEST-${mode:1}.log" 2>&1 # Log Name based on subcommand
exec 1>"TEST-bootstrap.log" 2>&1 # Initial "bootstrap" log file
When the exec statements are in place, the script should create three files (with contents):
TEST-bootstrap.log (successfully created - empty file)
TEST-one.log (successfully created)
one START
TEST-two.log (is not created)
two one START
But instead it seems to stop after the first log file and never creates TEST-two.log. What could be the issue considering it works with the exec statements commented out?
The Script
#!/bin/bash
SELFD=$(cd -P $(dirname ${0}) >/dev/null 2>&1; pwd) # The current script's directory
SELF=$(basename ${0}) # The current script's name
SELFX=${SELFD}/${SELF} # The current script's exec path
fnOne() { echo "one ${*}"; }
fnTwo() { echo "two ${*}"; }
subcommand() {
    # exec 3>&1 4>&2
    # exec 1>"TEST-${mode:1}.log" 2>&1
    case "${mode}" in
        -one) fnOne ${*}; ;;
        -two) fnTwo ${*}; ;;
    esac
}
bootstrap() {
    # exec 3>&1 4>&2
    # exec 1>"TEST-bootstrap.log" 2>&1
    echo "START" \
        | while read line; do ${SELFX} -one ${line}; done \
        | while read line; do ${SELFX} -two ${line}; done
    exit 0
}
mode=${1}; shift
case "${mode:0:1}" in
    -) subcommand ${*} ;;
    *) bootstrap ${mode} ${*} ;;
esac
exit 0
This script is strictly an isolated example of the problem I am facing in another more complex script of mine. I've tried to keep it as concise as possible but will expand upon it if needed.
What I'm trying to accomplish (extra reading for those interested)
In The Script above, I am using while loops instead of gnu-parallel for simplicity's sake, and am using simple functions that echo "one" and "two" for ease of debugging & asking the question here.
In my actual script, the program would have the following functions: list-archives, download, extract, process that would fetch a list of archives from a URL, download them, extract their contents, and process the resulting files respectively. Considering these operations take a varying amount of time, I planned on running them in parallel. My script's bootstrap() function would look something like this:
# program ./myscript
list-archives "${1}" \
| parallel myscript -download \
| parallel myscript -extract \
| parallel myscript -process
And would be called like this:
./myscript "http://www.example.com"
What I'm trying to accomplish is a way to start a program that can call its own functions in parallel and record everything it does. This would be useful to determine when the data was last fetched, debug any errors, etc.
I'd also like to have the program record logs when it's invoked with a subcommand, e.g.
# only list archives
>> ./myscript -list-archives "http://www.example.com"
# download this specific archive
>> ./myscript -download "http://www.example.com/archive.zip"
# ...
I think this is the problem:
echo "START" \
| while read line; do ${SELFX} -one ${line}; done \
| while read line; do ${SELFX} -two ${line}; done
The first while loop reads START from the echo statement. The second while loop processes the output of the first while loop. But since the script is redirecting stdout to a file, nothing is piped to the second loop, so it exits immediately.
I'm not sure how to "fix" this, since it's not clear what you're trying to accomplish. If you want to feed something to the next command in the pipeline, you can echo to FD 3, which you've carefully used to save the original stdout.
I've accepted @Barmar's answer, and the comment left by @Wumpus Q. Wumbley nudged me in the right direction.
The solution was to use tee /dev/fd/3 on the case statement, so that its output is written both to the log file (the redirected stdout) and to FD 3, the saved original stdout that feeds the next stage of the pipeline:
subcommand() {
    exec 3>&1 4>&2
    exec 1>"TEST-${mode:1}.log" 2>&1
    case "${mode}" in
        -one) fnOne ${*}; ;;
        -two) fnTwo ${*}; ;;
    esac | tee /dev/fd/3
}

Resources