Display results of specific commands in bash script to screen - bash

In my script I am executing some git commands, and I would like the output of those commands to be displayed on the screen as the script runs.
OS X Yosemite, if that matters.
#!/bin/sh
# get the Git command from parameter
if [ $# -eq 0 ];
then
echo "no arguments supplied"
exit 1
fi
cmd=$1
echo "doing some stuff"
# do some stuff (not echoed to screen)
echo "executing command"
# this is the command I want to echo output to screen
git $cmd
echo "doing some other"
# do other stuff (not echoed to screen)

The stuff you don't want echoed to the screen can be redirected to /dev/null like so:
ls /tmp > /dev/null
The results of your git command will be echoed to the screen unless you've specifically said otherwise.
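For example, a minimal sketch (do_setup_stuff is a hypothetical placeholder for your own commands):
do_setup_stuff > /dev/null 2>&1   # silence both stdout and stderr of the setup
git "$cmd"                        # leave git's streams alone so its output reaches the screen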

Adding set -x at the start of your script will print the commands before execution.
Example:
#!/bin/sh
set -x
# get the Git command from parameter
if [ $# -eq 0 ];
then
echo "no arguments supplied"
exit 1
fi
# ...
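Each traced command is printed with a + prefix (the value of PS4) before it runs, so invoking the script as ./script.sh status would produce output roughly like this (illustrative, exact quoting varies by shell):
+ '[' 1 -eq 0 ']'
+ cmd=status
+ echo 'executing command'
executing command
+ git status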

Related

How to output error in custom color in Vagrant provision script?

I want my Vagrant provision script to run some checks that will require user action if they're not satisfied. As easy as:
if [ ! -f /some/required/file ]; then
echo "[Error] Please do required stuff before provisioning"
exit
fi
But since this is not a real error, the echo gets printed in green. I'd like my output to be red (or at least a different color) to alert the user.
I tried:
echo "\033[31m[Error] Blah blah blah"
which works locally, but in the Vagrant output the color code gets escaped and the message is echoed in green instead.
Is that possible?
This is happening because Vagrant colors the provisioner's output by stream: stdout is printed in green and stderr in red, and your echo writes to stdout.
Not all terminals support ANSI colour codes and Vagrant doesn't take care of that. That said, I wouldn't suggest colorizing a message by sending it to stderr unless it really is an error.
To get red output you can simply write the message to stderr:
echo "Your error message" > /dev/stderr
(or, equivalently and more portably, echo "Your error message" >&2)
You need to use keep_color true; then it works as intended:
$echoes = <<-ECHOES
echo "\e[32mPROVISIONING DONE\e[0m"
ECHOES
config.vm.provision "shell", keep_color: true, inline: $echoes
(The $echoes heredoc has to be assigned before the provision line that references it.)
From https://www.vagrantup.com/docs/provisioning/shell.html
keep_color (boolean) - Vagrant automatically colors output in green and red depending on whether the output is from stdout or stderr. If this is true, Vagrant will not do this, allowing the native colors from the script to be outputted.
Vagrant commands run with the --no-color option by default. You could try forcing color on with --color. The environment variables for Vagrant are documented here.
Here is a bash script, test.sh, which demonstrates how to write to stderr or stdout conditionally. This form is good for a command like [ / test or touch that normally produces nothing on stdout or stderr; it checks the command's exit status code, which is stored in $?.
test -f "$1"
if [ $? -eq 0 ]; then
echo "File exists: $1"
else
echo "File not found: $1"
fi
You can alternatively hard code your file path like your question shows:
file="/some/required/file"
test -f "$file"
if [ $? -eq 0 ]; then
echo "File exists: $file"
else
echo "File not found: $file"
fi
If you have output from the command, but it's being sent to stderr rather than stdout and ending up red in the Vagrant output, you can use the following forms to redirect the output to where you would expect it to be. This is good for commands like update-grub or wget.
wget
url='https://example.com/file'
out=$(wget --no-verbose "$url" 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
update-grub
out=$(update-grub 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
One Liners
wget
url='https://example.com/file'
out=$(wget --no-verbose "$url" 2>&1) && echo "$out" || echo "$out" > /dev/stderr
update-grub
out=$(update-grub 2>&1) && echo "$out" || echo "$out" > /dev/stderr
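If you use this pattern for several commands, the capture-and-route logic can be wrapped in a small bash helper (a sketch; run_routed is a made-up name, not a standard utility):
# run a command, capture stdout+stderr, then replay the output
# on stderr if the command failed or on stdout if it succeeded
run_routed() {
local out
out=$("$@" 2>&1)
local status=$?
if [ $status -ne 0 ]; then
echo "$out" >&2
else
echo "$out"
fi
return $status
}
run_routed wget --no-verbose "$url"
run_routed update-grub
Passing the command as separate arguments ("$@") avoids the quoting problems that come with storing a whole command line in a single string.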

Has my bash script crashed?

I have a bash shell script, that should:
1) check for the existence of a file
2) If file exists exit script, otherwise create file
3) Set off a process
4) Check process has run correctly - and send result to a log file
5) Delete file
6) Exit script
if [ -f $PROPERTIES_HC ]
then
# lockfile/propertiesfile exists so exit the script
log --------- lockfile exists so operation cancelled at `date` ---------
exit 1
else
# no lockfile/propertiesfile so continue
# create the lockfile/propertiesfile
input="./$PROPERTIES_VAR"
while IFS= read -r line || [ -n "$line" ]; do
eval "echo $line" >> $PROPERTIES_HC
done < $PROPERTIES_VAR
#Run Process
RUN_MY_PROCESS --overridefile $PROPERTIES_HC >> $LOG_FILE
#Check Process Ran Okay
if [ "$?" = "0" ]; then
echo "RAN WITHOUT ERROR" >> $LOG_FILE
else
echo "SOME ERROR!" >> $LOG_FILE
fi
# Remove the lockfile/propertiesfile
rm -rf $PROPERTIES_HC
fi
This script seemed to have been running fine; however, I recently came across a situation where the "RUN_MY_PROCESS" element of the script failed, and the script seems to have simply exited, leaving the lockfile in place.
As I understand it, unless I set something like #!/bin/sh -e, the script should keep running regardless of errors. Have I misunderstood how shell scripts/shell error handling work (I am new to this!), or has my shell script crashed itself, hence it didn't finish running?
Thanks in advance for any help.
The proper way to handle errors inside your script (i.e. commands that exit with a non-zero status) is through traps.
You could modify your script as follows:
#!/bin/bash
#your regular script here
#...
#Run Process
trap 'echo "SOME ERROR!" >> "$LOG_FILE" && rm -rf "$PROPERTIES_HC"' ERR
RUN_MY_PROCESS --overridefile "$PROPERTIES_HC" >> "$LOG_FILE"
#rest of your script here
#....
Note that the ERR trap is a bash feature, so this requires #!/bin/bash rather than plain #!/bin/sh.
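A related trick for the lockfile itself: an EXIT trap runs however the script terminates (normal exit, error, or signal), so you can guarantee cleanup with a sketch like the following, using the question's variable names. It is set only after the existence check, so an early "lockfile exists" exit can't delete another run's lockfile:
if [ -f "$PROPERTIES_HC" ]; then
exit 1
fi
# from this point on, any exit removes our lockfile
trap 'rm -f "$PROPERTIES_HC"' EXIT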

Simple bash script for starting application silently

Here I am again. Today I wrote a little script that is supposed to start an application silently in my debian env.
Easy as
silent "npm search 1234556"
This runs the command, but doesn't silence it as intended.
As you can see, I commented the section where I have some troubles.
This line:
$($cmdLine) &
doesn't hide application output but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>/dev/null"
fi
# not working
$($cmdLine) &
# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>&1"
fi
eval "$cmdLine &"
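Assuming you save this as silent.sh, usage looks like this (hypothetical invocations):
./silent.sh "ls -l /tmp"          # stdout hidden, errors still visible
./silent.sh "ls -l /nope" quiet   # any non-empty second argument hides stderr too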
Rather than building up a command with redirection tacked on the end, you can incrementally apply it:
#!/bin/sh
if [ -z "$1" ]; then
exit
fi
exec >/dev/null
if [ -n "$2" ]; then
exec 2>&1
fi
exec $1
This first redirects stdout of the shell script to /dev/null. If the second argument is given, it redirects stderr of the shell script too. Then it runs the command which will inherit stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if it is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
& means that you're doing a sort of multitasking (running the command in the background), whereas
1 >/dev/null 2>/dev/null
means that you redirect the output to a sort of garbage bin, and that's why you don't see anything.
Furthermore, cmdLine="$1 >/dev/null" expands $1 at assignment time; to defer the expansion until the command line is evaluated, you should use ' instead of ":
cmdLine='$1 >/dev/null'
You can build your command line in a variable and run bash with it in the background:
bash -c "$cmdLine" &
Note that it might be useful to store the program's output (stdout/stderr) somewhere instead of throwing it away in /dev/null.
In addition, why do you need errorsRedirect?
You can even add a wait at the end, just to be safe... if you want:
#!/bin/bash
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
[ -z "$1" ] && echo "Please, don't joke me..." && exit 1
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
[ -n "$2" ] && cmdLine+=" 2>/dev/null"
# show the command, then run it in the background (the += above is why this needs bash)
echo "Running \"$cmdLine\""
bash -c "$cmdLine" &
wait

Bash command substitution stdout+stderr redirect

Good day. I have a series of commands that I wanted to execute via a function so that I could get the exit code and perform console output accordingly. With that being said, I have two issues here:
1) I can't seem to direct stderr to /dev/null.
2) The first echo line is not displayed until $1 has executed. It's not really noticeable until I run commands that take a while to process, such as searching the hard drive for a file. Additionally, it's obvious that this is the case, because the output looks like:
sh-3.2# ./runScript.sh
sh-3.2# com.apple.auditd: Already loaded
sh-3.2# Attempting... Enable Security Auditing ...Success
In other words, the stderr was displayed before "Attempting... $2"
Here is the function I am trying to use:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
exec $1
if [ "$?" -ne 0 ]; then
echo -ne " ...Failure\n\r"
else
echo -ne " ...Success\n\r"
fi
}
saveChange "$(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)" "Enable Security Auditing"
Any help or advice is appreciated.
This is how you redirect stderr to /dev/null:
command 2> /dev/null
e.g.
ls -l 2> /dev/null
As for your second part (the ordering of the echo output): it happens because of the command substitution you use when invoking the script, $(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist).
The first echo line is displayed later because it actually executes second: $(...) runs the command at the call site, before the function body, so the command's output appears first. Try the following:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
err=$($1 2>&1)
if [ -z "$err" ]; then
echo -ne " ...Success\n\r"
else
echo -ne " ...Failure\n\r"
exit 1
fi
}
saveChange "launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist" "Enable Security Auditing"
EDIT: I noticed that launchctl does not actually set $? on failure, so the version above captures STDERR to detect the error instead.
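For commands that do set a meaningful exit status, a variant that keeps the $? check and still captures the output could look like this (a sketch, not specific to launchctl; the mkdir call is just a stand-in command):
function saveChange {
echo -ne "Attempting... $2"
out=$($1 2>&1)
if [ $? -ne 0 ]; then
echo -ne " ...Failure\n\r"
echo "$out" >&2
else
echo -ne " ...Success\n\r"
fi
}
saveChange "mkdir /tmp/audit-test" "Create test directory"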

Shell scripting: die on any error

Suppose a shell script (/bin/sh or /bin/bash) contained several commands. How can I cleanly make the script terminate if any of the commands has a failing exit status? Obviously, one can use if blocks and/or callbacks, but is there a cleaner, more concise way? Using && is not really an option either, because the commands can be long, or the script could have non-trivial things like loops and conditionals.
With standard sh and bash, you can
set -e
It will
$ help set
...
-e Exit immediately if a command exits with a non-zero status.
It also works (from what I could gather) with zsh, and it should work for any Bourne shell descendant.
With csh/tcsh, you have to launch your script with #!/bin/csh -e
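A quick demonstration of the behaviour:
#!/bin/sh
set -e
echo "before"
false
echo "never reached"
The script prints "before" and then exits with status 1 at false; the last echo never runs.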
Maybe you could use:
$ <any_command> || exit 1
You can check $? to see what the most recent exit code is, e.g.:
#!/bin/sh
# A Tidier approach
check_errs()
{
# Function. Parameter 1 is the return code
# Para. 2 is text to display on failure.
if [ "${1}" -ne "0" ]; then
echo "ERROR # ${1} : ${2}"
# as a bonus, make our script exit with the right error code.
exit ${1}
fi
}
### main script starts here ###
grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"
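Assuming you save this as check_errs_demo.sh, a session might look like this (illustrative output):
$ ./check_errs_demo.sh root
USERNAME: root
$ ./check_errs_demo.sh nosuchuser
ERROR # 1 : User nosuchuser not found in /etc/passwd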
