STDOUT & STDERR from previous Command as Arguments for next Command - bash

Somehow I can't find a sufficient answer to my problem, only partial workarounds.
I'm calling a single "chained" shell command (from a Node app) that starts a long-running update process, whose stdout/stderr should be handed over, as arguments, to the second part of the shell command (another Node app that logs into a DB).
I'd like to do something like this:
updateCommand 2>$err 1>$out ; logDBCommand --log arg err out
I can't use > on its own, since that only works with files or file descriptors.
Also, if I use shell variables (like error=$( { updateCommand | sed 's/Output/tmp/'; } 2>&1 ); logDBCommand --log arg "${error}"), I can only capture stdout, or both streams mixed into one argument.
And I don't want to pipe, as the second command (logDBCommand) should run whether the first one succeeded or failed.
And I don't want to cache to a file, because honestly that misses the point and introduces another asynchronous error vector.

After a little chat in #!/bin/bash, someone suggested just making use of tmpfs (a file system held in RAM), which is the second most elegant (but only possible) way to do this. That way I can use the > operator and still end up with stdout and stderr in separate variables in memory.
command1 >/dev/shm/c1stdout 2>/dev/shm/c1stderr
A=$(cat /dev/shm/c1stdout)
B=$(cat /dev/shm/c1stderr)
command2 "$A" "$B"
(or shorter):
A=$(command1 2>/dev/shm/c1stderr)
B=$(cat /dev/shm/c1stderr)
command2 "$A" "$B"
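Applied to the updateCommand / logDBCommand pair from the question, the same idea might look roughly like this (just a sketch; the command names and the --log interface are taken from the question itself):
out=$(updateCommand 2>/dev/shm/update_err)   # stdout into a variable, stderr into tmpfs
err=$(cat /dev/shm/update_err)               # stderr back into a second variable
rm -f /dev/shm/update_err                    # clean up the in-memory file
logDBCommand --log arg "$out" "$err"         # runs whether updateCommand succeeded or not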

Related

Sending two processes' output pipes to a dual-pipe input process

I have two commands (command1 and command2) that write to stdout (fd 1), and I'd like to send them to a new command3 that is prepared to receive them on two pipes: one on stdin from command1, and the other on some other file descriptor, e.g. fd 3, from command2.
How can I do this in bash?
This can be done using the process substitution technique; from the bash reference:
Process substitution allows a process’s input or output to be referred
to using a filename. It takes the form of
<(list)
or
>(list)
The process list is run asynchronously, and its input or output
appears as a filename.
Using this technique, you can basically read the output of a command (list in the text above) as if you were reading from a file. In fact, you can use several such inputs, which solves your problem as follows:
command3 <( command1 ) <( command2 )
For this to work, command3 has to open both files (received as arguments) and read from them.
Process substitution basically creates a file (/dev/fd/XX) and passes its name as an argument to the receiving command (command3 in the example above). Keep in mind that command1 and command2 both run asynchronously, so you can't rely on any particular execution order when launching the command above.
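For illustration, here is a minimal sketch of the receiving side; command3 below is a hypothetical shell function standing in for the real program, and it simply reads the two /dev/fd/XX paths it receives:
command3() {
    echo "--- from the first pipe ---"
    cat "$1"
    echo "--- from the second pipe ---"
    cat "$2"
}
command3 <(echo "output of command1") <(echo "output of command2")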

How to run a time-limited background command and read its output (without timeout command)

I'm looking at https://stackoverflow.com/a/10225050/1737158
In the same question there is an answer using the timeout command, but timeout isn't available on all OSes, so I want to avoid it.
What I'm trying to do is:
demo="$(top)" &
TASK_PID=$!
sleep 3
echo "TASK_PID: $TASK_PID"
echo "demo: $demo"
I expect $demo to stay empty, since top never ends.
Right now I get an empty result, which is "acceptable", but when I use the same construct with a command that should return a value, I still get an empty result, which is not OK. E.g.:
demo="$(uptime)" &
TASK_PID=$!
sleep 3
echo "TASK_PID: $TASK_PID"
echo "demo: $demo"
This should return the uptime result, but it doesn't. I also tried to kill the process by TASK_PID, but I always get an error. If a command fails, I expect its stderr to be captured somehow. It can be in a different variable, but it has to be captured and not leaked out.
What happens when you execute var=$(cmd) &
Let's start by noting that the simple command in bash has the form:
[variable assignments] [command] [redirections]
for example
$ demo=$(echo 313) declare -p demo
declare -x demo="313"
According to the manual:
[..] the text after the = in each variable assignment undergoes tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal before being assigned to the variable.
Also, after the [command] above is expanded, the first word is taken to be the name of the command, but:
If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.
So, as expected, when demo=$(cmd) is run, the result of $(..) command substitution is assigned to the demo variable in the current shell.
Another point to note is related to the background operator &. It operates on so-called lists, which are sequences of one or more pipelines. Also:
If a command is terminated by the control operator &, the shell executes the command asynchronously in a subshell. This is known as executing the command in the background.
Finally, when you say:
$ demo=$(top) &
# ^^^^^^^^^^^ simple command, consisting ONLY of variable assignment
that simple command is executed in a subshell (call it s1), inside which $(top) is executed in another subshell (call it s2). The result of the command substitution is assigned to the variable demo inside the shell s1. Since no command name is given, s1 terminates after the variable assignment, and the parent shell never receives the variables set in the child (s1).
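A quick way to see this for yourself (my own demonstration, not part of the quoted answer):
demo=$(uptime) &       # the assignment happens in a subshell (s1)
wait "$!"              # wait for that subshell to finish
echo "demo: '$demo'"   # still prints an empty string in the parent shell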
Communicating with a background process
If you're looking for a reliable way to communicate with the process run asynchronously, you might consider coprocesses in bash, or named pipes (FIFO) in other POSIX environments.
Coprocess setup is simpler, since coproc will set up the pipes for you, but note that you might not be able to read them reliably if the process is terminated before it writes any output.
#!/bin/bash
coproc top -b -n3
cat <&${COPROC[0]}
FIFO setup would look something like this:
#!/bin/bash
# fifo setup/clean-up
tmp=$(mktemp -td)
mkfifo "$tmp/out"
trap 'rm -rf "$tmp"' EXIT
# bg job, terminates after 3s
top -b >"$tmp/out" -n3 &
# read the output
cat "$tmp/out"
but note, if a FIFO is opened in blocking mode, the writer won't be able to write to it until someone opens it for reading (and starts reading).
Killing after timeout
How you'll kill the background process depends on what setup you've used, but for a simple coproc case above:
#!/bin/bash
coproc top -b
sleep 3
kill -INT "$COPROC_PID"
cat <&${COPROC[0]}

bash which OR operator to use - pipe v double pipe

When I'm looking at bash script code, I sometimes see | and sometimes see ||, but I don't know which is preferable.
I'm trying to do something like ..
set -e
ret=0 && { which ansible || ret=$?; }
if [[ ${ret} -ne 0 ]]; then
    # install ansible here
fi
Please advise which OR operator is preferred in this scenario.
| isn't an OR operator at all. You could use ||, though:
which ansible || {
    true # put your code to install ansible here
}
This is equivalent to an if:
if ! which ansible; then
    true # put your code to install ansible here
fi
By the way -- consider making a habit of using type (a shell builtin) rather than which (an external command). type is both faster and has a better understanding of shell behavior: If you have an ansible command that's provided by, say, a shell function invoking the real command, which won't know that it's there, but type will correctly detect it as available.
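A small demonstration of that difference (assuming an ansible wrapper function defined only in the current shell):
ansible() { command ansible "$@"; }   # wrapper function, known only to this shell
which ansible    # searches PATH only, so it cannot report the function
type ansible     # reports "ansible is a function" and prints its body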
There is a big difference between using a single pipe (pipe output from one command to be used as input for the next command) and a process control OR (double pipe).
cat /etc/issue | less
This runs the cat command on the /etc/issue file, and instead of immediately sending the output to stdout it is piped to be the input for the less command. Yes, this isn't a great example, since you could instead simply do less /etc/issue - but at least you can see how it works
touch /etc/testing || echo Did not work
For this one, the touch command is run, or attempted to run. If it has a non-zero exit status, then the double pipe OR kicks in, and tries to execute the echo command. If the touch command worked, then whatever the other choice is (our echo command in this case) is never attempted...

complex command within a variable

I am writing a script that among other things runs a shell command several times. This command doesn't handle exit codes very well and I need to know if the process ended successfully or not.
So what I was thinking is to analyze the stderr to look for the word "error" (using grep). I know this is not the best thing to do; I'm working on it...
Anyway, the only way I can imagine is to put the stderr of that program in a variable, grep it, and store the result in another variable. Then I can check whether that variable is populated, meaning there was an error, and act accordingly.
The question is: how can I do this?
I don't really want to run the program inside a variable assignment, because it has a lot of arguments (with special characters such as backslashes, quotes, double quotes...) and it's a memory- and I/O-intensive program.
Awaiting your reply, thanks.
Redirect the stderr of that command to a temporary file and check if the word "error" is present in that file.
mycommand 2> /tmp/temp.txt
grep error /tmp/temp.txt
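If you then want the script to stop when the word is found (as the question goes on to describe), one possible continuation is to test grep's exit status directly; this is a sketch, not part of the original answer:
mycommand 2> /tmp/temp.txt
if grep -qi error /tmp/temp.txt; then
    echo "mycommand reported an error" >&2
    exit 1
fi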
Thanks @Jdamian, this was my answer too, in the end.
I asked my principal whether I could write to a temp file, and it was allowed, so this is the end result:
... script
command to be launched -argument -other "argument" -other "other" argument 2>&1 | tee "$TEMPFILE"
ERRORCODE=$(grep -i error "$TEMPFILE")
if [ -z "$ERRORCODE" ]; then
    # some actions ....
fi
I haven't tested this yet because there are some other scripts involved that I need to write first.
What I'm trying to do is:
run the command, with its stderr redirected to stdout;
using tee, have the combined output printed on screen and also written to the temp file;
have grep store the string "error" found in the temp file (if any) in a variable called ERRORCODE;
if that variable is populated (which means it has been created by grep), the script stops, quitting with status 1; otherwise it continues.
What do you think?
If you don't need the standard output:
if mycommand 2>&1 >/dev/null | grep -q error; then
    echo an error occurred
fi
Note the order of the redirections: 2>&1 first points stderr at the pipe (where stdout is currently going), and >/dev/null then redirects only stdout, so grep sees the command's stderr alone.
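If you would rather have the stderr text in a variable, as the question originally asked, one option (my sketch, not from the answers above) is to capture it with a command substitution while discarding stdout:
errors=$(mycommand 2>&1 >/dev/null)    # stderr goes into the variable, stdout is discarded
if grep -q error <<<"$errors"; then
    echo an error occurred
fi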

Getting stdout+stderr in a log file

I am trying to implement something which my logic says can't be done. But I need your help to understand why it can't be.
Short Version of Question
Is it possible to log stdout+stderr of a script in csh without using file redirection (>& or tee)?
Detailed Explanation of Question
I have a requirement with a csh script (script1) where I am not allowed to use file redirection. (I will give the reason in a while.)
So that means I can't use something like
echo just checking >& logfile
hence I can't use this or tee to create my logfile.
I also have another script (script2), which is a top-level script.
I can either run script1 in standalone mode or through script2.
In either case I need to create a log (stdout+stderr) of script1 in logfile.
There are two possible (but incomplete) options for that:
Write this line in script2:
./script1 >& logfile
But then I can't log script1 in logfile when script1 is run in standalone mode.
Another option is to use file redirections in script1 like this:
echo test starting >> logfile
echo test over
In this case there are two disadvantages:
1) "test over" prints before "test starting", i.e. the order in which command output ends up in the log is not guaranteed.
2) It's tedious to put >>& after every statement if I intend to cover the whole script.
So, is there any other way I can get what I need? That is, can I run script1 without file redirection and still log its stdout+stderr in logfile?
You mention csh, so this may not help you. On the other hand, it may motivate you to stop using csh for scripts, a task for which it is notoriously inappropriate. In sh, you can simply do:
#!/bin/sh
exec > logfile 2>&1
echo foo
to write foo (and the output and errors of all subsequent commands) to the logfile.
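A quick usage sketch, assuming the three lines above are saved as script1 and made executable; it behaves the same whether script1 is run standalone or called from script2:
./script1      # run standalone: nothing appears on the terminal
cat logfile    # contains "foo" (plus any later stdout/stderr from the script)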
