complex command within a variable - bash

I am writing a script that, among other things, runs a shell command several times. This command doesn't handle exit codes very well, and I need to know whether the process ended successfully or not.
So what I was thinking is to analyze the stderr and look for the word "error" (using grep). I know this is not the best thing to do; I'm working on it...
Anyway, the only way I can think of is to put the stderr of that program in a variable, then use grep to, well, "grep" it and store the result in another variable. Then I can check whether that variable is populated, meaning that there was an error, and act accordingly.
The question is: how can I do this?
I don't really want to run the program inside a command substitution assigned to a variable, because it has a lot of arguments (with special characters such as backslashes, quotes, double quotes...) and it's a memory- and I/O-intensive program.
Awaiting your reply, thanks.

Redirect the stderr of that command to a temporary file and check if the word "error" is present in that file.
mycommand 2> /tmp/temp.txt
grep error /tmp/temp.txt
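If all you need is a yes/no check, grep's exit status can drive the test directly; for example (sticking with the same temporary file):
mycommand 2> /tmp/temp.txt
if grep -qi error /tmp/temp.txt; then
    echo "mycommand reported an error" >&2
    exit 1
fi
The -q flag suppresses grep's output and -i makes the match case-insensitive.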

Thanks @Jdamian, this was my answer too, in the end.
I asked my principal whether I could write a temp file and it was allowed, so this is the end result:
... script
command to be launched -argument -other "argument" -other "other" argument 2>&1 | tee $TEMPFILE
ERRORCODE=( `grep -i error "$TEMPFILE" `)
if [ -z $ERRORCODE ] ;
then
some actions ....
I haven't tested this yet because there are some other scripts involved that I need to write first.
What I'm trying to do is:
run the command, having its stderr redirected to stdout;
using tee, have the above result printed on screen and also to the temp file;
have grep store the string "error" found in the temp file (if any) in a variable called ERRORCODE;
if that variable is populated (which means grep found something), the script stops, quitting with status 1; otherwise it continues.
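Put together (still untested; I'm assuming $TEMPFILE has been created earlier, e.g. with mktemp), it would be something like:
command to be launched -argument -other "argument" -other "other" argument 2>&1 | tee "$TEMPFILE"
ERRORCODE=$(grep -i error "$TEMPFILE")
if [ -z "$ERRORCODE" ]
then
    some actions ....
else
    exit 1
fi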
What do you think?

If you don't need the standard output:
if mycommand 2>&1 >/dev/null | grep -q error; then
echo an error occurred
fi
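Note that the order of the redirections matters here: 2>&1 first points stderr at the pipe (the current stdout), and only then is stdout itself sent to /dev/null, so grep sees only the error stream. If the goal is to stop the script with status 1 when the word shows up, a sketch along the same lines would be:
if mycommand 2>&1 >/dev/null | grep -qi error; then
    echo "error detected in mycommand output" >&2
    exit 1
fi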

STDOUT & STDERR from previous Command as Arguments for next Command

Somehow I can't find a sufficient answer to my problem, only partial hackarounds.
I'm calling a single "chained" shell command (from a Node app) that starts a long-running update process, whose stdout/stderr should be handed over, as arguments, to the second part of the shell command (another Node app that logs to a DB).
I'd like to do something like this:
updateCommand 2>$err 1>$out ; logDBCommand --log arg err out
Can't use > as it is only for files or file descriptors.
Also if I use shell variables (like error=$( { updateCommand | sed 's/Output/tmp/'; } 2>&1 ); logDBCommand --log arg \"${error}.\"), I can only have stdout or both mixed into one argument.
And I don't want to pipe, as the second command (logCommand) should run whether the first one succeeded or failed.
And I don't want to cache to a file, because honestly that's missing the point and introduces another asynchronous error vector.
After a little chat in #!/bin/bash, someone suggested just making use of tmpfs (a file system held in RAM), which is the second most elegant (but the only possible) way to do this. That way I can make use of the > operator and have stdout and stderr in separate variables in memory.
command1 >/dev/shm/c1stdout 2>/dev/shm/c1stderr
A=$(cat /dev/shm/c1stdout)
B=$(cat /dev/shm/c1stderr)
command2 "$A" "$B"
(or shorter):
A=$(command1 2>/dev/shm/c1stderr )
B=$(cat /dev/shm/c1stderr)
command2 "$A" "$B"
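If the fixed names under /dev/shm feel fragile (two concurrent runs would clobber each other), mktemp can be pointed at the same tmpfs; a rough sketch, assuming mktemp is available:
ERRFILE=$(mktemp /dev/shm/c1stderr.XXXXXX) || exit 1
A=$(command1 2>"$ERRFILE")    # stdout captured in A, stderr lands on tmpfs
B=$(cat "$ERRFILE")
rm -f "$ERRFILE"
command2 "$A" "$B"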

Piping to a process when the process doesn't exist?

Say I start with the following statement, which echo-s a string into the ether:
$ echo "foo" 1>/dev/null
I then submit the following pipeline:
$ echo "foo" | cat -e - 1>/dev/null
I then leave the process out:
$ echo "foo" | 1>/dev/null
Why is this not returning an error message? The documentation on bash and piping doesn't seem to make direct mention of what may be the cause. Is there an EOF sent before the first read from echo (or whatever process is running upstream of the pipe)?
A shell simple command is not required to have a command name. For a command without a command-name:
variable assignments apply to the current execution environment. The following will set two variables to argument values:
arg1=$1 arg3=$3
redirections occur in a subshell, but the subshell doesn't do anything other than initialize the redirect. The following will truncate or create the indicated file (if you have appropriate permissions):
>/file/to/empty
However, a simple command must contain at least one word, assignment, or redirection; a completely empty command is a syntax error (which is why it is occasionally necessary to use :).
Answer summarized from POSIX XCU §2.9.1.
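A quick way to see the cases side by side (the var=value assignment here is just illustrative):
echo "foo" | 1>/dev/null    # valid: the right-hand command consists of a redirection only
echo "foo" | var=value      # valid: the right-hand command consists of an assignment only
                            # (each side of a pipe runs in a subshell, so the assignment has no lasting effect)
# echo "foo" | ;            # invalid: a completely empty command is a syntax error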

Bash: How to redirect stdin, stdout and stderr to a log file for an install log

I currently have a bunch of installer scripts which log stderr/stdout to a log file, and that works well, but I also need to redirect stdin (the user responses) to the same log file. The install scripts sometimes call functions in a shared library (an include), which may also read user input. I thought about adding a custom read function, but that would require altering the shared library, so I wondered if there's a way to do this from the calling script.
At the moment the scripts are similar to this:
#!/usr/bin/bash
. ./libInstall
INSTALL_LOG="./install.log"
( (
echo "INFO: Installing..."
# Run some arbitrary commands...
# Read some input...
read ANSWER1
read ANSWER2
# Call function in libInstall which will prompt the user...
funcWhichAsksAQuestion ANSWER3
echo "INFO: Installation Complete"
) 2>&1 ) | tee -a "${INSTALL_LOG}"
If I change "( (" to reflect the line below I can tee off stdin to the log file:
cat - 2> /dev/null | tee -a ${INSTALL_LOG} | ( (
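In context, the top of the script then looks roughly like this (the body between the parentheses is unchanged):
#!/usr/bin/bash
. ./libInstall
INSTALL_LOG="./install.log"
cat - 2> /dev/null | tee -a "${INSTALL_LOG}" | ( (
echo "INFO: Installing..."
# ... commands, reads and funcWhichAsksAQuestion as before ...
echo "INFO: Installation Complete"
) 2>&1 ) | tee -a "${INSTALL_LOG}"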
This works but requires 2 carriage returns once the script ends, presumably because the pipe is broken.
It's almost there, but I'd like it to work without having to press Enter twice at the end to get back to the shell prompt.
These scripts have to be fairly portable to work on RHEL >=5, AIX >=5.1, Solaris >=9 with the lowest bash version being v2.05 I believe.
Any ideas how I can achieve this?
Thanks
Why not just add 'echo "\n\n"' after your "installation complete" line? Granted, you'll have two extra lines in your log file, but those seem relatively harmless.
I believe you have to return twice because of how tee is implemented. It "uses" one return by itself, and the other two come from the 'read' calls (well, one read, one funcWhichAsksAQuestion).
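One caveat with that suggestion: bash's builtin echo does not expand \n by default, so printf is the safer way to emit the extra blank lines, e.g.:
echo "INFO: Installation Complete"
printf '\n\n'    # two extra newlines appended to the logged output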

Unable to pass parameters to a perl script inside a bash script

I would like to pass parameters to a perl script using positional parameters inside a bash script "tablecheck.sh". I am using an alias "tablecheck" to call "tablecheck.sh".
#!/bin/bash
/scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
Perl script by itself works fine. But when I do "tablecheck MySQLinstance", $1 stays $1. It won't get replaced by the instance. So I get the output as follows:
Exit /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
The job exits.
FYI: alias tablecheck='. pathtobashscript/tablecheck.sh'
I have a bunch of aliases in another bash script. Hence . command.
Could anyone help me... I have gone to the 3rd page of Google results looking for an answer and tried so many things with no luck.
I am a noob, but maybe it has something to do with it being a background job, or with the $1 being in a path... I don't understand why the $1 won't get replaced...
If I copy your exact setup (which, I agree with other commenters, is somewhat unusual) then I believe I get the same error message:
$ tablecheck foo
[1]+ Exit 127 /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1
In the /tmp/chktables_foo file that it makes there is an additional error message, in my case "bash: /scripts/tables.pl: No such file or directory"
I suspect permissions are wrong in your case.
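Exit status 127 from bash means the command could not be found, so checking the script's path (and, while you're at it, its permissions) is a reasonable first step:
ls -l /scripts/tables.pl      # does it exist, and is it executable?
head -1 /scripts/tables.pl    # does the shebang line point at a real perl?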

Can I only show stdout/stderr in case of a trapped error in bash?

How I'd like it to work:
When no error is trapped (i.e. nothing returns a non-zero exit code, unless overridden by || true), be silent. Hide stdout and stderr.
When an error is trapped, be verbose. Print stdout and stderr.
In my script, only the captured stdout and stderr are missing.
#!/bin/bash
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
error_handler() {
local return_code="$?"
local last_err="$BASH_COMMAND"
local stdout= # How to read FD 5?
local stderr= # How to read FD 6?
exec 1>&5
exec 2>&6
echo "ERROR!
scriptname: $0
BASH_COMMAND: $last_err
\$?: $return_code
stdout: $stdout
stderr: $stderr
" 1>&2
exit 1
}
trap "error_handler" ERR
echo "Some message..."
# Some command fails, i.e. return a non-zero exit code.
mkdir
I could probably redirect stdout/stderr to a temporary file and use cat to show it in case an error was trapped. It would be a bit nicer if that temporary file weren't required. Any idea?
Credit:
This question was inspired by the question How to undo exec > /dev/null in bash? and the answer by Charles Duffy.
Let's look at the I/O redirection carefully:
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
We see that file descriptor 5 is a duplicate of the original standard output, but that standard output is going to /dev/null. Similarly, 6 is a duplicate of standard error, but standard error is going to /dev/null.
Now let's consider what happens when you run:
ls -l /dev/null /dev/not-actually/there
The ls command writes the output for /dev/null to /dev/null because that's where its standard output is directed. Similarly, it writes the error for the non-existent file /dev/not-actually/there to /dev/null because that's where its standard error is directed.
Thus, both the standard output and standard error of the command are irrevocably lost.
Given the expressed requirements, there isn't going to be a simple solution. Your best bet is probably to redirect both standard output and standard error to the same file (but be aware that the interleaving of error and normal output may be different because the output is a file). Alternatively, you can direct standard output and standard error to two separate files and show them when necessary.
Note that you will need to consider emptying the output file(s) after each command (letting the trap report the contents before the file(s) is/are emptied) so that you don't report the standard output or standard error of commands 1-9 when command 10 fails.
Doing this neatly and handling pipelines correctly, etc, is not trivial. I'm not sure whether to suggest a function that's passed the command and arguments (tricky for pipelines) or some other technique.
I've used the 'capture everything in one file' technique in cron-run scripts that mail the output when appropriate. It isn't wholly satisfactory, but it is a lot better than not having the error messages at all.
You can consider playing with expect and/or pseudo-ttys, but doing a good job will be really hard.
Your file descriptors 5 and 6 are write-only. There's no way for the shell to read its own output; bidirectional pipes are deadlocks waiting to happen even when it's not the same process on both ends.
I would go with the temp file idea.
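Something along these lines, as a rough, untested sketch (mktemp and the cleanup are illustrative details, not part of the original script):
#!/bin/bash
LOG=$(mktemp) || exit 1
exec 5>&1 6>&2        # keep copies of the original stdout/stderr
exec >"$LOG" 2>&1     # silence the script: both streams go to the temp file
error_handler() {
    local return_code="$?"
    local last_err="$BASH_COMMAND"
    exec 1>&5 2>&6    # restore the original streams
    {
        echo "ERROR!"
        echo "scriptname: $0"
        echo "BASH_COMMAND: $last_err"
        echo "\$?: $return_code"
        echo "--- captured stdout/stderr ---"
        cat "$LOG"
    } >&2
    rm -f "$LOG"
    exit 1
}
trap "error_handler" ERR
echo "Some message..."
mkdir                 # fails, so the trap fires and replays the captured output
rm -f "$LOG"          # clean up if everything succeeded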
The actual paths to I/O files are hidden from the shells, and other applications; you need a program that knows how to dig for the details. lsof may come to your rescue, if your system supports it. Try adding the following in your error routine:
local name0="$(basename "$0")";
lsof -p$$ -d5,6 2>/dev/null |
egrep "^${name0:0:5}[^ ]* +[^ ]+ +[^ ]+ +[56][a-zA-Z]* "
This will require some tweaking to get it to be robust (short program names, program names with spaces in them, ...) and more friendly (for example, labelling FD 5 as "stdout" and FD 6 as "stderr", say by replacing the program name in column 1). But when you tweak, beware the huge variations in output format you may encounter, not just between systems but between different file types on the same system. I leave this as an exercise for the student.
