I have a rather complex bash script which is normally run manually and thus needs live output on the console (stdout and stderr).
However, since the outcome and output of this script are rather important, I'd like to save its output into a database at the end.
I already have a trap function for this, and the database query as such is not a problem either. The problem is: how do I get the output of the entire script up to that point into a variable?
The live console output should be preserved. Any output after the database query does not matter.
Is this possible at all? Might it be necessary to wrap the script in another script (file)?
I'm doing a similar task like this:
exec 5>&1
message=$(check | tee /dev/fd/5)
mutt -s "$subject" "$mailto" <<< "$message"
Substitute your script for the check function and change the mailing to your db query.
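Applied to the question, a minimal sketch of one way to wire this up; main and save_to_db are placeholder names for your script body and query, and /dev/fd/5 assumes Linux-style fd paths:
#!/bin/bash
exec 5>&1                      # fd 5 duplicates the real console stdout

save_to_db() {
    # placeholder for the actual database query
    printf 'would save %d bytes of output to the db\n' "${#output}" >&5
}

main() {
    # your existing script body goes here
    echo "step 1: working"
    ls /nonexistent            # example failure; its stderr is captured too
}

# tee shows everything live on fd 5 (the console) while the command
# substitution captures the same text, stdout and stderr, into $output
output=$(main 2>&1 | tee /dev/fd/5)
save_to_db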
I am trying to get the result from /usr/lib/update-notifier/apt-check on an Ubuntu 16 server into an array to build an XML response for a monitoring tool, but somehow the value of this apt-check just refuses to end up in my variable. For simplicity's sake, I have omitted the XML creation part.
#!/bin/bash
APTCHECK="/usr/lib/update-notifier/apt-check"
APTResult="$(${APTCHECK})"
echo "Result is $APTResult"
exit 0
If you now run this code with bash -x, you will see that the result is printed to the terminal but not assigned to the variable. If I substitute something simple like "ls -lah" for the command, everything works fine.
I just don't know why this is not working. Anybody?
apt-check prints to stderr, so you need to capture that instead with aptresult=$(/usr/lib/update-notifier/apt-check 2>&1).
The other option is the --human-readable switch, which prints to stdout. The only problem then is that you have to parse the text output (unless the text output is what you actually want).
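For example, a sketch of capturing and splitting the default output; on the releases I've seen, apt-check reports the counts as "updates;security_updates" on stderr, but treat that format as an assumption worth verifying on your system:
#!/bin/bash
APTCHECK="/usr/lib/update-notifier/apt-check"

# apt-check writes its result to stderr, so merge stderr into stdout
result=$("$APTCHECK" 2>&1)

# split e.g. "5;2" on the semicolon into an array
IFS=';' read -r -a counts <<< "$result"
echo "pending updates:  ${counts[0]}"
echo "security updates: ${counts[1]}"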
Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside the my_script command, to detect the command feeding the pipe (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variable set in the process space of the second command recording the full command line. The command data my_script sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Checking the process list from inside the second command doesn't seem to provide any data either, since the first process often exits before the second starts.
The best information I've found suggests that in bash, in some cases, you can get the exit codes of the processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research says it's impossible to do in a generic manner (I can't control how people run my_script, so I can't force third-party pipe-replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present as the command is run.
(Update: adding in later information, following on from the comments below.)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the pipe:[PIPEID] value that it symlinks to. That can then be used to search through the /proc/*/fd/ data of the other running processes for another process with a pipe open on the same PIPEID, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it being read). So I can't rely on this method returning any information.
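For reference, a rough sketch of that /proc walk, in case it helps anyone investigating the same approach; it is inherently racy for the reasons above, assumes Linux, and needs permission to read the other process's fds (same user or root):
#!/bin/bash
# the pipe inode on our stdin, e.g. "pipe:[123456]"
my_pipe=$(readlink "/proc/$$/fd/0")

case $my_pipe in
  pipe:*)
    # scan every process's fd 1 for the write end of the same pipe
    for fd in /proc/[0-9]*/fd/1; do
        pid=${fd#/proc/}; pid=${pid%%/*}
        [ "$pid" = "$$" ] && continue
        if [ "$(readlink "$fd" 2>/dev/null)" = "$my_pipe" ]; then
            # cmdline is NUL-separated; turn it into spaces for display
            echo "upstream pid $pid: $(tr '\0' ' ' < "/proc/$pid/cmdline")"
        fi
    done
    ;;
  *) echo "stdin is not a pipe" ;;
esac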
I'm having a little trouble figuring out how to get error output and store it in a variable or file in ksh. So in my script I have cp -p source.file destination inside a while loop.
When I get the below error
cp: source.file: The file access permissions do not allow the specified action.
I want to grab it and store it in a variable or file.
Thanks
You can redirect the error output of the command like so:
cp -p source.file destination 2>> my_log.txt
It will append the error message to the my_log.txt file.
In case you want a variable you can redirect stderr to stdout and assign the command output to a variable:
my_error_var=$(cp -p source.file destination 2>&1)
In ksh (as per the question), as in bash and other sh derivatives, you can get all of, or just, the stderr output from cp using redirection, then grab it in a variable (using $(), which is better than backticks in any vaguely recent version):
output=$(cp -p source.file destination 2>&1)
cp doesn't normally output anything on stdout, though this captures both stdout and stderr; to capture just stderr this way, also redirect stdout with 1>/dev/null. The other solutions redirecting to a file could use cat or various other commands to output/process the logfile.
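For instance, a sketch of the stderr-only capture; the order of the redirections matters (duplicate stderr into the captured stream first, then discard stdout):
# stderr goes to the command substitution, stdout is discarded
errors=$(cp -p source.file destination 2>&1 1>/dev/null)
[ -n "$errors" ] && echo "cp failed: $errors"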
Reasons why I don't suggest outputting to temporary files:
Redirecting to a file and then reading it back in (via the read command, or more inefficiently via $(cat file)), particularly for just a single line, is less efficient and slower; though it's not so bad if you want to append to the file across multiple operations before displaying the errors. You'll also leave the temporary file around unless you ALWAYS clean it up; don't forget that people interrupt (i.e. Ctrl-C) or kill scripts.
Using temporary files can also be a problem if the script is run multiple times at once (e.g. via cron, if filesystem or other delays cause massive overruns, or just from multiple users), unless the temporary filename is unique.
Generating temporary files is also a security risk unless done very carefully, especially if the file data is processed again or the contents could be rewritten before display by something else to confuse/phish the user or break the script. Don't get into the habit of doing it too casually; read up on temporary files (e.g. mktemp) first, via other questions here or Google.
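If you do need a temp file, a sketch of the safer pattern (assuming mktemp is available and your ksh supports an EXIT trap, as ksh93 does):
tmpfile=$(mktemp) || exit 1                   # unique, safely created file
trap 'rm -f "$tmpfile"' EXIT INT TERM         # clean up even on Ctrl-C/kill
cp -p source.file destination 2>> "$tmpfile"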
You can redirect STDERR to a file by doing:
command 2> /path/to/file.txt
I'm using a bash script to automatically run a simulation program. This program periodically prints the current status of the simulation in the console, like "Iteration step 42 ended normally".
Is it possible to abort the script, if the console output is something like "warning: parameter xyz outside range of validity"?
And what can I do if the console output is piped to a text file?
Sorry if this sounds stupid, I'm new to this :-)
Thanks in advance
This isn't an ideal job for Bash. However, you can certainly capture and test STDOUT inside a Bash iteration loop using an admixture of conditionals, grep-like tools, and command substitution.
On the other hand, if Bash isn't doing the looping (e.g. it's just waiting for an external command to finish) then you need to use something like expect. Expect is purpose-built to monitor output streams for regular expressions, and perform branching based on expression matches.
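For the first case, where Bash launches the simulator itself, a minimal sketch; ./simulation and the warning text are placeholders for your program and message:
#!/bin/bash
./simulation 2>&1 | while IFS= read -r line; do
    printf '%s\n' "$line"                    # keep the live console output
    if [[ $line == *"outside range of validity"* ]]; then
        echo "warning detected, aborting" >&2
        exit 1        # ends the reading side of the pipe; the simulator
    fi                # gets SIGPIPE on its next write and dies
done
[ "${PIPESTATUS[1]}" -ne 0 ] && exit 1       # stop the whole script too
If the output is already being piped to a text file, the same loop can watch the file live by replacing the program with tail -f logfile.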
I'm trying to debug a bash script that involves a command of the form:
VAR=$(cmd1|cmd2|cmd3)
I can debug it in bashdb, using the s command, which does something like this:
bashdb(2): s
2: VAR=$(cmd1|cmd2|cmd3)
cmd1
bashdb(3): s
2: VAR=$(cmd1|cmd2|cmd3)
cmd2
i.e. it allows me to run the commands in the pipe one by one. Logic indicates that it must therefore store the contents of the pipe somewhere, so that it can feed it into the next command when I type s again. How do I get bashdb to show this data?
Try tee. It copies each intermediate stream to a file as the pipeline runs, so afterwards cmd1.out and cmd2.out contain exactly what flowed between the stages:
VAR=$(cmd1 | tee cmd1.out | cmd2 | tee cmd2.out | cmd3 | tee cmd3.out)