capture output using expect - shell

I want to compare the number of files on the remote server with the number in my local directory. I ssh into the server, and I was able to capture the output of "ls somewhere/*something | wc -l" using $expect_out(buffer) and store it as a variable. My problem now is: how do I come back to my local computer, count the files here, and compare the two counts? After this comparison, I need to go back to the server and continue the job if the result of the comparison is acceptable.

The easiest thing to do -- including from a correctness perspective -- is not to try to keep a single long-running SSH session, but to use multiple shorter-lived ones (potentially with SSH multiplexing to reuse a single transport between such sessions; see the config sketch after the code):
count_remote_files() {
    local host=$1 dirname=$2 dirname_q
    printf -v dirname_q '%q' "$dirname"    # quote the directory name for the remote shell
    # Note the quoted 'EOF': it keeps the *local* shell from expanding $1, $# etc.,
    # so they are evaluated by the remote bash, where $1 is the directory passed above.
    remote_files=$(ssh "$host" "bash -s ${dirname_q}" <<'EOF'
cd "$1" || exit
set -- *
if [ "$#" -eq 1 ] && [ ! -e "$1" ] && [ ! -L "$1" ]; then
    echo "0"
else
    echo "$#"
fi
EOF
)
    printf '%s\n' "$remote_files"    # emit the count so callers can capture it
}
count_local_files() {
    local dirname=$1
    cd "$dirname" || return
    set -- *
    if [ "$#" -eq 1 ] && [ ! -e "$1" ] && [ ! -L "$1" ]; then
        echo "0"
    else
        echo "$#"
    fi
}
if (( $(count_remote_files "$host" "$remote_dir") ==
      $(count_local_files "$local_dir") )); then
    echo "File count is identical"
else
    echo "File count differs"
    ssh "$host" 'echo "doing something else now"'
fi
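If you go the multiple-sessions route, multiplexing avoids paying the connection and authentication cost on every call. A minimal sketch of the relevant ~/.ssh/config stanza (the host alias and timeout are illustrative, not from the question):

# ~/.ssh/config
Host myserver
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m    # keep the shared connection open 10 minutes after last use

With this in place, repeated ssh myserver ... invocations reuse a single transport.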

Since you are using Expect, you can easily count the local files by using Tcl commands, since Expect is built on top of Tcl:
set num_local_files [llength [glob somewhere/*something]]
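One caveat worth noting: plain glob raises an error when nothing matches, so if zero matches is a legitimate outcome here, -nocomplain makes it return an empty list instead:

set num_local_files [llength [glob -nocomplain somewhere/*something]]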
For more info see http://nikit.tcl.tk/page/Tcl+Tutorial+Lesson+25

Related

How can I pipe output, from a command in an if statement, to a function?

I can't tell if something I'm trying here is simply impossible or if I'm really lacking knowledge in bash's syntax. This is the first script I've written.
I've got a Nextcloud instance that I am backing up daily using a script. I want to log the output of the script as it runs to a log file. This is working fine, but I wanted to see if I could also pipe the Nextcloud occ command's output to the log file too.
I've got an if statement here checking if the file scan fails:
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
This works fine and I am able to handle the error if the system cannot execute the command. The error string above is sent to this function:
Print()
{
    if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
        echo "$1" | tee -a "$log_file"
    elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
        echo "$1" >> "$log_file"
    elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
        echo "$1"
    fi
}
How can I make it so the output of the occ command is also piped to the Print() function so it can be logged to the console and log file?
I've tried piping the command after ! using | Print without success.
Any help would be appreciated, cheers!
The Print function doesn't read standard input, so there's no point piping data to it. One possible way to do what you want with the current implementation of Print is:
if ! occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
Print "'occ' output: $occ_output"
Since there is only one line in the body of the if statement you could use || instead:
occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1) \
|| Print "Error: Failed to scan files. Are you in maintenance mode?"
Print "'occ' output: $occ_output"
The 2>&1 causes both standard output and error output of occ to be captured to occ_output.
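If you also want to see occ's output live rather than only after it finishes, one option (my addition, not from the answer above) is to stream it through tee and enable set -o pipefail so the if still reflects occ's exit status:

set -o pipefail
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1 | tee -a "$log_file"; then
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi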
Note that the body of the Print function could be simplified to:
[[ $quiet_mode == No ]] && printf '%s\n' "$1"
(( logging )) && printf '%s\n' "$1" >> "$log_file"
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I replaced echo "$1" with printf '%s\n' "$1".
How's this? A bit unorthodox perhaps.
Print()
{
    case $# in
        0) cat;;
        *) echo "$@";;
    esac |
    if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
        tee -a "$log_file"
    elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
        cat >> "$log_file"
    elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
        cat
    fi
}
With this, you can either
echo "hello mom" | Print
or
Print "hello mom"
and so your invocation could be refactored to
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
    echo "Error: Failed to scan files. Are you in maintenance mode?"
fi |
Print
The obvious drawback is that piping into a function loses the exit code of any failure earlier in the pipeline.
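If you do need that earlier exit code, bash records every stage's status in the PIPESTATUS array; a minimal illustration (my addition, not part of the answer above):

false | Print
echo "${PIPESTATUS[0]}"    # prints 1, the exit status of the left-hand side of the pipe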
For a more traditional approach, keep your original Print definition and refactor the calling code to
if output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
    : nothing
else
    Print "error $?: $output"
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
I would imagine that the error message will be printed to standard error, not standard output; hence the addition of 2>&1.
I included the error code $? in the error message in case that would be useful.
The sending and receiving ends of a pipe must be processes, typically represented by executable commands. An if statement is not a process, but you can of course put such a statement into one. For example,
echo a | (
    if true
    then
        cat
    fi )
causes cat to write a to stdout, because the parentheses put it into a child process.
UPDATE: As was pointed out in a comment, the explicit subprocess is not needed. One can also do:
echo a | if true
then
    cat
fi
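The same applies to loops and other compound commands. One caveat worth adding (not from the answer above): each segment of a pipeline runs in a subshell, so variables assigned inside it do not survive:

echo a | while read -r line; do
    count=1                  # assigned in a subshell
done
echo "${count:-unset}"       # prints "unset": the assignment died with the subshell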

How can I make a chat script in bash?

I want to make a chat script in bash. I've started out pretty basic: you start off by logging in with any name you want, no password required, and then you can write commands like connect, clear, exit and so on.
But I want to be able to actually start a chat between two people in the terminal window. Now, I've read a little about IRC but I don't know how to get it to work.
Any help would be appreciated
Here's my script:
#!/bin/bash
ver=1.0
my_ip="127.0.0.1"

function connect()
{
    echo -n "Ip-address: "
    read ip
    if [ -n "$ip" ]
    then
        exit
    fi
}

function check()
{
    if [ "${c}" = "connect" ] || [ "${c}" = "open" ]
    then
        connect
    fi
    if [ "${c}" = "clear" ] || [ "${c}" = "cls" ]
    then
        clear
        new
    fi
    if [ "${c}" = "quit" ] || [ "${c}" = "exit" ]
    then
        echo "[Shutdown]"
        exit
    fi
}

function new()
{
    echo -n "$: "
    read c
    if [ -n "${c}" ]
    then
        check
    else
        echo "Invalid command!"
        new
    fi
}

function onLogin()
{
    clear
    echo "Logged in as ${l_name} on [${my_ip}]"
    new
}

function login()
{
    echo -n "Login: "
    read l_name
    if [ -n "${l_name}" ]
    then
        onLogin
    else
        echo "Invalid input!"
        login
    fi
}

#execution
clear
echo "Bash Chat v${ver} | Mac"
login
If you truly want to write a chat client in pure bash, it would have to be local chat (same physical machine) rather than network chat.
Assuming that this is adequate for your needs, you can use named pipes (FIFOs).
Illustration
Here is an example that illustrates what you can do with two pipes (for bidirectional communication):
mkfifo /tmp/chatpipe1 ; mkfifo /tmp/chatpipe2
(In terminal one): cat > /tmp/chatpipe1
(In terminal two): cat /tmp/chatpipe2
(and the same, in reverse, for Terminals 3 and 4)
This illustrates that you can have 4 processes in bash, two writing to two pipes and two reading from the same two pipes. Two terminals on the left are for Bob, two on the right are for John.
You can organize all of this into a single script if you understand bash backgrounding, loops (and hopefully traps to clean up on shutdown).
Script
Here is a rudimentary version:
#!/bin/bash

if [ -z "$2" ] ; then
    echo "Need names of chat pipes (yours and other's), eg $0 bob john"
    exit 1
fi

P1=/tmp/chatpipe${1}
P2=/tmp/chatpipe${2}

[ -p "$P1" ] || mkfifo "$P1"
[ -p "$P2" ] || mkfifo "$P2"

# Kill all background jobs (the incoming cat) on exit;
# registered before the blocking read below so it also fires on Ctrl-C
# (probably should delete the fifo files too)
trap 'kill -9 $(jobs -p)' EXIT

# Background cat of incoming pipe; prepend a timestamp to each line
# (note: the $(date ...) substitution is evaluated once, at startup)
(cat "$P2" | sed "s/^/$(date +%H:%M:%S)> /" ) &

# Feed one notice and then STDIN to outgoing pipe
(echo "$1 joined" ; cat) >> "$P1"
And a chat session: [screenshot of a sample session between two terminals omitted]
Note that this script is only a simple example. If bob and john are different unix accounts, you'll have to be more careful with file permissions (or, if you don't care about security, mkfifo -m 777 ... is an option).
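For the two-account case, a slightly safer sketch than world-writable pipes, assuming both users share a (hypothetical) group named chat:

mkfifo -m 660 /tmp/chatpipe_bob /tmp/chatpipe_john    # group read/write only
chgrp chat /tmp/chatpipe_bob /tmp/chatpipe_john       # both members of "chat" may use them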
A chatroom can be made in literally 10 lines of bash. I posted one on my GitHub earlier today:
https://github.com/ErezBinyamin/webCatChat/blob/master/minimalWbserver
It can handle "n" users; they just need to connect to port 1234.
Here's the code:
#!/bin/bash
mkdir -p /tmp/webCat && printf "HTTP/1.1 200 OK\n\n<!doctype html><h2>Erez's netcat chat server!!</h2><form>Username:<br><input type=\"text\" name=\"username\"><br>Message:<br><input type=\"text\" name=\"message\"><div><button>Send data</button></div><button http-equiv=\"refresh\" content=\"0; url=129.21.194:1234\">Refresh</button></form>" > /tmp/webCat/webpage
while [ 1 ]
do
    [[ $(head -1 /tmp/webCat/r) =~ "GET /?username" ]] && USER=$(head -1 /tmp/webCat/r | sed 's#.*username=##' | sed 's#&message.*##') && MSG=$(head -1 /tmp/webCat/r | sed 's#.*message=##' | sed 's#HTTP.*##')
    [ ${#USER} -gt 1 ] && [ ${#MSG} -gt 1 ] && [ ${#USER} -lt 30 ] && [ ${#MSG} -lt 280 ] && printf "\n%s\t%s\n" "$USER" "$MSG" && printf "<h1>%s\t%s" "$USER" "$MSG" >> /tmp/webCat/webpage
    cat /tmp/webCat/webpage | timeout 1 nc -l 1234 > /tmp/webCat/r
    unset USER && unset MSG
done

Loop outputting blank variables - SSH Bash

I am writing a script to help me sync files between my home PC and my server. One of the problems I've run into is that, when I run the for loop, it outputs blank lines.
Everything else seems to be fine: echo $REMOTE lists all files and their full paths as expected. I have tried unquoting, quoting, changing EOF from "EOF" to EOF, etc., but so far nothing has worked.
Here is the script:
function uploadDownload()
{
    if [ "$1" == "d" ]; then
        ssh -i ~/Dropbox/Business/aws/first.pem ubuntu@XXXX.194.202 << EOF
echo $REMOTE
for file in $REMOTE; do
echo $file
done
EOF
        #scp -r -i ~/Dropbox/Business/aws/first.pem ubuntu@XXXX.194.202:$REMOTE $LOCAL
    elif [ "$1" == "u" ]; then
        scp -r -i ~/Dropbox/Business/aws/first.pem $LOCAL ubuntu@XXXX.194.202:$REMOTE
    fi
}
if [ "$2" == "m" ]; then
if [ -z "$3" ]; then
echo "Please enter the local location of the files and provide an absolute path..."
read LOCAL
echo "Please enter the remote location of the files..."
read REMOTE
uploadDownload
else
LOCAL=$2
if [ -z "$4" ]; then
REMOTE=$3
else
REMOTE='ubuntu#XXXX.194.202:~/test/'
fi
uploadDownload
fi
else
LOCAL='/home/will/Dropbox/Business/aws/files/binaryhustle/'
REMOTE='/home/ubuntu/dev/binaryhustle/*'
uploadDownload $1
fi
EDIT: Based on advice I have edited the uploadDownload function. I should also note that I tested echo "$REMOTE" outside of the loop and it returns nothing (unlike without quotes, where it returns something).
function uploadDownload()
{
    if [ "$1" == "d" ]; then
        ssh -i ~/Dropbox/Business/aws/first.pem ubuntu@54.252.194.202 << EOF
for file in "/home/ubuntu/dev/binaryhustle"; do
echo $file
done
EOF
        #scp -r -i ~/Dropbox/Business/aws/first.pem ubuntu@54.252.194.202:$REMOTE $LOCAL
    elif [ "$1" == "u" ]; then
        scp -r -i ~/Dropbox/Business/aws/first.pem $LOCAL ubuntu@54.252.194.202:$REMOTE
    fi
}
Escape your variable $file, since it is otherwise expanded locally before being sent to the remote server. Change:
echo $file
To
echo \$file
Or better yet
echo \"\$file\"
You may want to take a look at unison. It's a powerful file-syncing tool able to work over ssh. It doesn't do any content diff, though.
Credit to @konsolebox for getting me halfway there. I got it working by adding the backslash to $file; however, this did not work with << EOF, only <
function uploadDownload()
{
    if [ "$1" == "d" ]; then
        ssh -i ~/Dropbox/Business/aws/first.pem ubuntu@54.252.194.202 <<EOF
for file in "$REMOTE"; do
echo \$file
done
EOF
        #scp -r -i ~/Dropbox/Business/aws/first.pem ubuntu@54.252.194.202:$REMOTE $LOCAL
    elif [ "$1" == "u" ]; then
        scp -r -i ~/Dropbox/Business/aws/first.pem $LOCAL ubuntu@54.252.194.202:$REMOTE
    fi
}

Bash remote files system directory test

The more I learn bash, the more questions I have, and the more I understand why very few people do bash. Easy is something else, but I like it.
I have managed to figure out how to test directories and their writability, but I have a problem the minute I try to do this with a remote server over ssh. The first instance, testing the /tmp directory, works fine, but when the second part is called, I get line 0: [: missing `]'
Now if I replace the \" with a single quote, it works, but I thought that single quotes turn off variable expansion? Can someone explain this to me please? Assuming that the /tmp directory does exist and is writable, here is the script so far:
#!/bin/bash
SshHost="hostname"
SshRsa="~/.ssh/id_rsa"
SshUser="user"
SshPort="22"
Base="/tmp"
Sub="one space/another space"

BaseBashExist="bash -c \"[ -d \"$Base\" ] && echo 0 && exit 0 || echo 1 && exit 1\""
SSHBaseExist=$( ssh -l $SshUser -i $SshRsa -p $SshPort $SshHost ${BaseBashExist} )
echo -n $Base
if [ $? -eq 0 ]
then
    echo -n "...OK..."
else
    echo "...FAIL"
    exit 1
fi

BaseBashPerm="bash -c \"[ -w \"$Base\" ] && echo 0 && exit 0 || echo 1 && exit 1\""
SSHBaseExist=$( ssh -l $SshUser -i $SshRsa -p $SshPort $SshHost ${BaseBashPerm} )
if [ $? -eq 0 ]
then
    echo "...writeable"
else
    echo "...not writeable"
fi

BaseAndSub="$Base/$Sub"
BaseAndSubBashExist="bash -c \"[ -d \"$BaseAndSub\" ] && echo 0 && exit 0 || echo 1 && exit 1\""
SSHBaseAndSubExist=$( ssh -l $SshUser -i $SshRsa -p $SshPort $SshHost ${BaseAndSubBashExist} )
echo -n $BaseAndSub
if [ $? -eq 0 ]
then
    echo -n "...OK..."
else
    echo "...FAIL"
    exit 1
fi

BaseAndSubBashPerm="bash -c \"[ -w \"$BaseAndSub\" ] && echo 0 && exit 0 || echo 1 && exit 1\""
SSHBaseAndSubPerm=$( ssh -l $SshUser -i $SshRsa -p $SshPort $SshHost ${BaseAndSubBashPerm} )
if [ $? -eq 0 ]
then
    echo -n "...writeable"
else
    echo "...not writeable"
fi
exit 0
The first thing you should do is refactor your code with simplicity in mind; then the quoting error will go away as well. Try:
if ssh [flags] test -w "'$file'"; then
Encapsulate your SSH flags in an ssh config file to facilitate re-use, and your script will shorten dramatically.
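A sketch of such a config stanza, built from the values in the question (the alias name myhost is my invention):

# ~/.ssh/config
Host myhost
    HostName hostname
    User user
    Port 22
    IdentityFile ~/.ssh/id_rsa

after which each test shortens to something like ssh myhost test -w '/tmp'.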
You are fine with single quotes in this context; by the time the script is seen by the remote bash, your local bash has already substituted in the variables you want to substitute.
However, your script is a total mess. You should put the repetitive code in functions if you cannot drastically simplify it.
#!/bin/bash
remote () {
    # most of the parameters here are at their default values;
    # why do you feel you need to specify them?
    #ssh -l "user" -i ~/.ssh/id_rsa -p 22 hostname "$@"
    ssh hostname "$@"
    # ---^
    # if you really actually need to wrap the remote
    # commands in bash -c "..." then add that here
}

exists_and_writable () {
    echo -n "$1"
    if remote test -d "$1"; then
        echo -n "...OK..."
    else
        echo "...FAIL"
        exit 1
    fi
    if remote test -w "$1"; then
        echo "...writeable"
    else
        echo "...not writeable"
    fi
}
Base="/tmp"
# Note the need for additional quoting here
Sub="one\\ space/another\\ space"
exists_and_writable "$Base"
BaseAndSub="$Base/$Sub"
exist_and_writable "$BaseAndSub"
exit 0
ssh -qnx "useraccount@hostname" \
    "test -f ${file absolute path} ||
     echo ${file absolute path} no such file or directory"

Checking Bash exit status of several commands efficiently

Is there something similar to pipefail for multiple commands, like a 'try' statement but within bash? I would like to do something like this:
echo "trying stuff"
try {
    command1
    command2
    command3
}
And at any point, if any command fails, drop out and echo out the error of that command. I don't want to have to do something like:
command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi

command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi
And so on... or anything like:
set -o pipefail
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3
Because I believe (correct me if I'm wrong) that the arguments of each command will interfere with each other. These two methods seem horribly long-winded and nasty to me, so I'm here appealing for a more efficient method.
You can write a function that launches and tests the command for you. Assume command1 and command2 are environment variables that have been set to a command.
function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}
mytest "$command1"
mytest "$command2"
What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do
set -e # DON'T do this. See commentary below.
at the start of the script (but note warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:
#!/bin/sh
set -e # Use caution. eg, don't do this
command1
command2
command3
and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)
As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.
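For illustration, a sketch of a well-behaved command along those lines (the function and its file argument are hypothetical):

do_stuff() {
    if [ ! -r "$1" ]; then
        echo "do_stuff: cannot read $1" >&2    # errors go to stderr...
        return 1                               # ...and failure is a non-zero status
    fi
    cat "$1"
}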
However, I no longer consider this to be a good practice. set -e has changed its semantics with different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: set -e; foo() { false; echo should not print; } ; foo && echo ok The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:
#!/bin/sh
command1 || exit
command2 || exit
command3 || exit
or
#!/bin/sh
command1 && command2 && command3
I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.
You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.
Usage
step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm
next
step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger
next
step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit
next
Output
Installing XFS filesystem tools:                           [  OK  ]
Configuring udev:                                          [FAILED]
Adding rc.postsysinit hook:                                [  OK  ]
Code
#!/bin/bash

. /etc/init.d/functions

# Use step(), try(), and next() to perform a series of commands and print
# [  OK  ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
#     step "Remounting / and /boot as read-write:"
#     try mount -o remount,rw /
#     try mount -o remount,rw /boot
#     next
step() {
    echo -n "$@"

    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=

    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && {        shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?

    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$

        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}

            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi

    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]]  && echo_success || echo_failure
    echo

    return $STEP_OK
}
For what it's worth, a shorter way to write code to check each command for success is:
command1 || echo "command1 borked it"
command2 || echo "command2 borked it"
It's still tedious but at least it's readable.
An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:
command1 &&
command2 &&
command3
This isn't the syntax you asked for in the question, but it's a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don't have to do so manually (maybe with a -q flag to silence errors when you don't want them). If you have the ability to modify these commands, I'd edit them to yell on failure, rather than wrap them in something else that does so.
Notice also that you don't need to do:
command1
if [ $? -ne 0 ]; then
You can simply say:
if ! command1; then
And when you do need to check return codes use an arithmetic context instead of [ ... -ne:
ret=$?
# do something
if (( ret != 0 )); then
Instead of creating runner functions or using set -e, use a trap:
trap 'echo "error"; do_cleanup failed; exit' ERR
trap 'echo "received signal to stop"; do_cleanup interrupted; exit' SIGQUIT SIGTERM SIGINT
do_cleanup () { rm tempfile; echo "$1 $(date)" >> script_log; }
command1
command2
command3
The trap even has access to the line number and the command line of the command that triggered it. The variables are $BASH_LINENO and $BASH_COMMAND.
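For instance, a minimal sketch of such a trap (the failing cp is only an illustration; note that inside the trap, $LINENO also still refers to the failing line):

trap 'echo "\"$BASH_COMMAND\" failed at line $LINENO" >&2' ERR
cp /nonexistent/file /tmp    # fires the trap, which names this command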
Personally, I much prefer to use a lightweight approach, as seen here:
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
asuser() { sudo su - "$1" -c "${*:2}"; }
Example usage:
try apt-fast upgrade -y
try asuser vagrant "echo 'uname -a' >> ~/.profile"
I've developed an almost flawless try & catch implementation in bash that allows you to write code like:

try {
    echo 'Hello'
    false
    echo 'This will not be displayed'
} catch {
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
}
You can even nest the try-catch blocks inside themselves!
try {
    echo 'Hello'

    try {
        echo 'Nested Hello'
        false
        echo 'This will not execute'
    } catch {
        echo "Nested Caught (# $__EXCEPTION_LINE__)"
    }

    false
    echo 'This will not execute too'
} catch {
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
}
The code is a part of my bash boilerplate/framework. It further extends the idea of try & catch with things like error handling with backtrace and exceptions (plus some other nice features).
Here's the code that's responsible just for try & catch:
set -o pipefail
shopt -s expand_aliases
declare -ig __oo__insideTryCatch=0

# if try-catch is nested, then set +e before so the parent handler doesn't catch us
alias try="[[ \$__oo__insideTryCatch -gt 0 ]] && set +e;
           __oo__insideTryCatch+=1; ( set -e;
           trap \"Exception.Capture \${LINENO}; \" ERR;"
alias catch=" ); Exception.Extract \$? || "

Exception.Capture() {
    local script="${BASH_SOURCE[1]#./}"

    if [[ ! -f /tmp/stored_exception_source ]]; then
        echo "$script" > /tmp/stored_exception_source
    fi
    if [[ ! -f /tmp/stored_exception_line ]]; then
        echo "$1" > /tmp/stored_exception_line
    fi
    return 0
}

Exception.Extract() {
    if [[ $__oo__insideTryCatch -gt 1 ]]
    then
        set -e
    fi

    __oo__insideTryCatch+=-1

    __EXCEPTION_CATCH__=( $(Exception.GetLastException) )

    local retVal=$1
    if [[ $retVal -gt 0 ]]
    then
        # BACKWARDS COMPATIBLE WAY:
        # export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-1)]}"
        # export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-2)]}"
        export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[-1]}"
        export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[-2]}"
        export __EXCEPTION__="${__EXCEPTION_CATCH__[@]:0:(${#__EXCEPTION_CATCH__[@]} - 2)}"
        return 1 # so that we may continue with a "catch"
    fi
}

Exception.GetLastException() {
    if [[ -f /tmp/stored_exception ]] && [[ -f /tmp/stored_exception_line ]] && [[ -f /tmp/stored_exception_source ]]
    then
        cat /tmp/stored_exception
        cat /tmp/stored_exception_line
        cat /tmp/stored_exception_source
    else
        echo -e " \n${BASH_LINENO[1]}\n${BASH_SOURCE[2]#./}"
    fi

    rm -f /tmp/stored_exception /tmp/stored_exception_line /tmp/stored_exception_source
    return 0
}
Feel free to use, fork and contribute - it's on GitHub.
run() {
    "$@"
    local status=$?    # capture before the test below resets $?
    if [ $status -ne 0 ]
    then
        echo "$* failed with exit code $status"
        return 1
    else
        return 0
    fi
}
run command1 && run command2 && run command3
Sorry that I cannot make a comment to the first answer, but you should use a new instance to execute the command: cmd_output=$("$@")
#!/bin/bash

function check_exit {
    cmd_output=$("$@")
    local status=$?
    echo $status
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}

function run_command() {
    exit 1
}
check_exit run_command
For fish shell users who stumble on this thread.
Let foo be a function that does not "return" (echo) a value, but it sets the exit code as usual.
To avoid checking $status after calling the function, you can do:
foo; and echo success; or echo failure
And if it's too long to fit on one line:
foo; and begin
    echo success
end; or begin
    echo failure
end
You can use @john-kugelman's awesome solution found above on non-Red Hat systems by commenting out this line in his code:
. /etc/init.d/functions
Then, paste the code below at the end. Full disclosure: this is just a direct copy & paste of the relevant bits of the above-mentioned file, taken from CentOS 7.
Tested on macOS and Ubuntu 18.04.
BOOTUP=color
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \\033[0;39m"

echo_success() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_SUCCESS
    echo -n $"  OK  "
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 0
}

echo_failure() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_FAILURE
    echo -n $"FAILED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_passed() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"PASSED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_warning() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"WARNING"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}
When I use ssh I need to distinguish between problems caused by connection issues and the error codes of the remote command in errexit (set -e) mode. I use the following function:
# prepare environment on calling site:
rssh="ssh -o ConnectionTimeout=5 -l root $remote_ip"
function exit255 {
    local flags=$-
    set +e
    "$@"
    local status=$?
    set -$flags
    if [[ $status == 255 ]]
    then
        exit 255
    else
        return $status
    fi
}
export -f exit255
export -f exit255
# callee:
set -e
set -o pipefail

[[ $rssh ]]
[[ $remote_ip ]]
[[ $( type -t exit255 ) == "function" ]]

rjournaldir="/var/log/journal"
if exit255 $rssh "[[ ! -d '$rjournaldir/' ]]"
then
    $rssh "mkdir '$rjournaldir/'"
fi

rconf="/etc/systemd/journald.conf"
if [[ $( $rssh "grep '#Storage=auto' '$rconf'" ) ]]
then
    $rssh "sed -i 's/#Storage=auto/Storage=persistent/' '$rconf'"
fi

$rssh systemctl reenable systemd-journald.service
$rssh systemctl is-enabled systemd-journald.service
$rssh systemctl restart systemd-journald.service
sleep 1
$rssh systemctl status systemd-journald.service
$rssh systemctl is-active systemd-journald.service
Checking status in a functional manner:
assert_exit_status() {
    lambda() {
        local val_fd=$(echo $@ | tr -d ' ' | cut -d':' -f2)
        local arg=$1
        shift
        shift
        local cmd=$(echo $@ | xargs -E ':')
        local val=$(cat $val_fd)
        eval $arg=$val
        eval $cmd
    }
    local lambda=$1
    shift

    eval $@
    local ret=$?
    $lambda : <(echo $ret)
}
Usage:
assert_exit_status 'lambda status -> [[ $status -ne 0 ]] && echo Status is $status.' lls
Output
Status is 127
suppose
alias command1='grep a <<<abc'
alias command2='grep x <<<abc'
alias command3='grep c <<<abc'
either
{ command1 1>/dev/null || { echo "cmd1 fail"; /bin/false; } } && echo "cmd1 succeed" &&
{ command2 1>/dev/null || { echo "cmd2 fail"; /bin/false; } } && echo "cmd2 succeed" &&
{ command3 1>/dev/null || { echo "cmd3 fail"; /bin/false; } } && echo "cmd3 succeed"
or
{ { command1 1>/dev/null && echo "cmd1 succeed"; } || { echo "cmd1 fail"; /bin/false; } } &&
{ { command2 1>/dev/null && echo "cmd2 succeed"; } || { echo "cmd2 fail"; /bin/false; } } &&
{ { command3 1>/dev/null && echo "cmd3 succeed"; } || { echo "cmd3 fail"; /bin/false; } }
yields
cmd1 succeed
cmd2 fail
Tedious it is. But the readability isn't bad.
