Actual return code for SCP - bash

I am writing a bash script that goes through a list of filenames and attempts to copy each file using scp from two servers into a local folder. The script then compares the local files to each other. Sometimes however, the file will not exist on one server or the other or both.
At first, I was using this code:
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
error=$(</tmp/Error) # error catching
if [[ -n "$error" ]]; then echo -e "$file not found on $host"; fi
But I found that some (corporate) servers output a (legalese) message (to stderr I guess) every time a user connects via scp or ssh. So I started looking into utilizing exit codes.
I could simply use
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -ne 0 ]]; then echo -e "$file not found on $host"; fi
but since the exit code for "file does not exist" is supposed to be 6, I would rather have a more precise
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -eq 6 ]]; then echo -e "$file not found on $host"; fi
The problem is that I seem to be getting an exit code of 1 no matter what went wrong. This question is similar to this one, but that answer does not help me in Bash.
Another solution I am considering is
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
error=$(</tmp/Error) # error catching
if [[ ${error: -25} = "No such file or directory" ]]; then echo -e "$file not found on $host"; fi
But I am concerned that different versions of scp could have different error messages for the same error.
Is there a way to get the actual exit code of scp in a Bash script?

Per the comments (@gniourf_gniourf, @shelter, @Wintermute) I decided to simply switch tools to rsync. Thankfully the syntax doesn't need to be changed at all.
23 was the exit code I was getting when files didn't exist, so here is the code I ended up with:
rsync -q $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -eq 23 ]]; then echo -e "$file not found on $host"; continue; fi
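For context, a minimal sketch of how the surrounding loop might look with that check; the variables, paths and the files.txt list here are placeholders, not the original script:
# Sketch only: $user, $host and files.txt are placeholders
while read -r file; do
    rsync -q "$user@$host:/etc/$file" "./$host/conf/" 2>/tmp/Error 1>/dev/null
    rc=$?
    if [[ $rc -eq 23 ]]; then
        echo "$file not found on $host"   # rsync 23 = partial transfer (e.g. missing source)
        continue
    elif [[ $rc -ne 0 ]]; then
        echo "rsync failed for $file on $host (exit code $rc)"
    fi
done < files.txt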

I'm also seeing an exit code of 1 for "file not found". You can test these sorts of things against localhost. If you need to differentiate between errors, capture the error output instead:
if err=$(scp "$host:$file" 2>&1)
then
    echo "copied successfully"
else
    case "$err" in
        *"file not found"* )
            echo "$file Not Found on $host"
            ;;
        *"Could not resolve hostname"* )
            echo "Host not found: $host"
            ;;
        *"Permission denied"* )
            echo "perm-denied! $host"
            ;;
        * )
            echo "other scp error $err"
            ;;
    esac
fi
Note that this isn't going to work if you have a different locale with different messages.
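One partial mitigation (my suggestion, not part of the original answer) is to pin the locale for the scp call so that locally generated messages such as "Could not resolve hostname" stay in English; messages produced by the remote server still depend on the remote side's locale:
# Force the C locale for the local ssh/scp messages (sketch)
if err=$(LC_ALL=C scp "$host:$file" . 2>&1); then
    echo "copied successfully"
else
    echo "scp failed: $err"
fi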

Related

Why is "if [ -z "ls -l /non_existing_file 2> /dev/null" ] not true [duplicate]

I want to check if an external drive is still plugged in by checking /dev/disk/by-uuid/1234-5678.
However, I know that this could be done much easier with:
if ! [ -e "/non_existing_file" ]; then
echo "File doesn't exist anymore"
fi
But I still want to know why the script in the title doesn't work. Is it because of the exit code of ls?
Thanks in advance.
In the title version, ls -l /non_existing_file is just a literal string inside the quotes; it is never executed. It works correctly once the command is actually run in a command substitution.
Compare the example in the title:
if [[ -z "ls -l something.txt 2> /dev/null" ]]; then
echo "file does not exist"
fi
... with this version that returns as expected:
if [[ -z "$(ls -l something.txt 2> /dev/null)" ]]; then
echo "file does not exist"
fi
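Since the goal is only to test whether the path exists, you can also skip the command substitution entirely and use the exit status directly; a sketch using the drive path from the question:
# Let the exit status of ls (or test -e) drive the if, no string comparison needed
if ! ls -l /dev/disk/by-uuid/1234-5678 > /dev/null 2>&1; then
    echo "file does not exist"
fi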

How to output error in custom color in Vagrant provision script?

I want my Vagrant provision script to run some checks that will require user action if they're not satisfied. As easy as:
if [ ! -f /some/required/file ]; then
echo "[Error] Please do required stuff before provisioning"
exit
fi
But since this is not a real error, the echo gets printed in green. I'd like my output to be red (or at least a different color) to alert the user.
I tried:
echo "\033[31m[Error] Blah blah blah"
That works locally, but in the Vagrant output the color code gets escaped and the message is echoed in green instead.
Is that possible?
This is happening because some tools write some of their messages to stderr, which Vagrant then interprets as an error and prints in red.
Not all terminals support ANSI colour codes and Vagrant doesn't take care of that. That said, I wouldn't suggest colourising a word by sending it to stderr unless it really is an error.
To achieve that you can simply:
echo "Your error message" > /dev/stderr
You need to use keep_color true; then it works as intended:
$echoes = <<-ECHOES
echo "\e[32mPROVISIONING DONE\e[0m"
ECHOES

config.vm.provision "shell", keep_color: true, inline: $echoes
From https://www.vagrantup.com/docs/provisioning/shell.html
keep_color (boolean) - Vagrant automatically colors output in green and red depending on whether the output is from stdout or stderr. If this is true, Vagrant will not do this, allowing the native colors from the script to be outputted.
Vagrant commands run by default with the --no-color option. You could try to turn colour on with --color. The environment variables for Vagrant are documented here.
Here is a bash script, test.sh, which demonstrates how to write to stderr or stdout conditionally. This form is good for a command like [ / test or touch that does not normally produce any stdout or stderr output; it checks the exit status code of the command, which is stored in $?.
test -f "$1"
if [ $? -eq 0 ]; then
echo "File exists: $1"
else
echo "File not found: $1"
fi
You can alternatively hard code your file path like your question shows:
file="/some/required/file"
test -f "$file"
if [ $? -eq 0 ]; then
echo "File exists: $file"
else
echo "File not found: $file"
fi
If the command does produce output, but it is being sent to stderr rather than stdout and ends up red in the Vagrant output, you can use the following forms to redirect the output to where you would expect it to be. This is good for commands like update-grub or wget.
wget
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
update-grub
out=$(update-grub 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
One Liners
wget
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1) && echo "$out" || echo "$out" > /dev/stderr
update-grub
out=$(update-grub 2>&1) && echo "$out" || echo "$out" > /dev/stderr
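If that pattern repeats a lot, it can be wrapped in a small helper function; this is only a sketch (run_or_err is a made-up name, not something Vagrant provides):
# run_or_err: run a command, capture stdout+stderr, and replay the output
# on stderr only if the command failed (so Vagrant colors it red)
run_or_err() {
    local out
    out=$("$@" 2>&1) && echo "$out" || echo "$out" > /dev/stderr
}

run_or_err wget --no-verbose 'https://example.com/file'
run_or_err update-grub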

If else in bash script for shell command

I have written a bash script that does not show any errors. However, I would like to add a conditional block: if the copy succeeds, send a success email, otherwise send an error message in the email, as shown in the code below.
scp -i id_rsa -r testuser@1.1.1.:/data1/scp ~/data/scp/files
success >> ~/data/scp/files/log.txt 2>&1
if success
then
| mail -s "Download Successfull" abc@test.com <<< "Files Successfully Downloaded"
else
| mail -s "Error: Download Failed" abc@test.com <<< "Error File download Failed!"
fi
Here is the working script without the if/else block:
#!/module/for/bash
scp -i id_rsa -r test@1.1.1.1:/data1/scp ~/data/scp/files
echo success! >> ~/data/scp/files/log.txt 2>&1 | mail -s "Download Successfull" abc@test.com <<< "Files Successfully Downloaded" | mail -s "Error: Download Failed" abc@test.com <<< "Error:file download Failed!"
The scp man page states: The scp utility exits 0 on success, and >0 if an error occurs.
So you can do something like:
if scp -i id_rsa -r testuser@1.1.1.:/data1/scp ~/data/scp/files
then
mail -s "Download Successful" abc@test.com <<<"Files Downloaded"
else
mail -s "Download Error" abc@test.com <<<"Download error"
fi
or
scp -i id_rsa -r testuser@1.1.1.:/data1/scp ~/data/scp/files
if [[ $? -eq 0 ]]
then
mail -s "Download Successful" abc@test.com <<<"Files Downloaded"
else
mail -s "Download Error" abc@test.com <<<"Download error"
fi
Finally, you may also want to look at storing the scp output. Use -q so scp does not print progress meters and the like:
MYOUT=$(scp -q -i id_rsa -r testuser@1.1.1.:/data1/scp ~/data/scp/files 2>&1)
if [[ $? -eq 0 ]]
then
mail -s "Download Successful" abc@test.com <<<"$MYOUT"
else
mail -s "Download Error" abc@test.com <<<"$MYOUT"
fi
Hope it helped!
@Korthrun has already posted several ways to accomplish what I think you're trying to do; I'll take a look at what's going wrong in your current script. You seem to be confused about a couple of basic elements of shell scripting: pipes (|) and testing for command success/failure.
Pipes are used to pass the output of one command into the input of another (and possibly then chain the output of the second command into the input of a third command, etc). But when you use a pipe string like this:
echo success! >> ~/data/scp/files/log.txt 2>&1 |
mail -s "Download Successfull" abc#test.com <<< "Files Successfully Downloaded" |
mail -s "Error: Download Failed" abc#test.com <<< "Error:file download Failed!"
the pipes aren't actually doing anything. The first pipe tries to take the output of echo and feed it to the input of mail, but the >> in the echo command sends its output to a file instead, so no actual data is sent to the mail command. Which is probably good, because the <<< on the mail command tells it to ignore the regular input (from the pipe) and feed a string as input instead! Similarly, the second pipe tries to feed the output from the first mail command (there isn't any) to the last mail command, but again it's ignored due to another <<< input string. The correct way to do this is simply to remove the pipes, and run each command separately:
echo success! >> ~/data/scp/files/log.txt 2>&1
mail -s "Download Successfull" abc#test.com <<< "Files Successfully Downloaded"
mail -s "Error: Download Failed" abc#test.com <<< "Error:file download Failed!"
This is also causing a problem in the other version of your script, where you use:
if success
then
| mail -s "Download Successfull" abc#test.com <<< "Files Successfully Downloaded"
Here, there's no command before the pipe, so it doesn't make any sense at all (and you get a shell syntax error). Just remove the pipe.
Now, about success/failure testing: you seem to be using success as a command, but it isn't one. You can either use the command you want to check the success of directly as the if conditional:
if scp ...; then
echo "It worked!"
else
echo "It failed!"
fi
or use the shell variable $? which returns the exit status of the last command (success=0, failure=anything else):
scp ...
if [ $? -eq 0 ]; then
...
There's a subtlety here that's easy to miss: the thing after if is a command, but in the second form it appears to be a logical expression (testing whether $? is equal to 0). The secret is that [ is actually a command that evaluates logical expressions and then exits with success or failure depending on whether the expression was true or false. Do not mistake [ ] for some sort of parentheses or other grouping operator, that's not what's going on here!
BTW, the [[ ]] form that Korthrun used is very similar to [ ], but isn't supported by more basic shells. It does avoid some nasty syntax oddities with [ ], though, so if you're using bash it's a good way to go.
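A quick way to convince yourself that [ really is a command setting an exit status (and to see one of the [ ] quoting pitfalls that [[ ]] avoids):
[ 0 -eq 0 ]; echo $?        # prints 0: the test command succeeded
[ 0 -eq 1 ]; echo $?        # prints 1: the test command failed

var=""
[ -n "$var" ] || echo "empty"    # quotes are required with [ ]
[[ -n $var ]] || echo "empty"    # [[ ]] handles the empty variable even unquoted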
Also, note that $? gives the status of the last command executed, so it gets reset by every single command that executes. For example, this won't work:
scp ...
echo "scp's exit status was $?"
if [ $? -eq 0 ]; then # Don't do this!!!!
...because the if is then looking at the exit status of the echo command, not scp! If you need to do something like this, store the status in a variable:
scp ...
scpstatus=$?
echo "scp's exit status was $scpstatus"
if [ $scpstatus -eq 0 ]; then

Bash command substitution stdout+stderr redirect

Good day. I have a series of commands that I wanted to execute via a function so that I could get the exit code and perform console output accordingly. With that being said, I have two issues here:
1) I can't seem to direct stderr to /dev/null.
2) The first echo line is not displayed until the $1 is executed. It's not really noticeable until I run commands that take a while to process, such as searching the hard drive for a file. Additionally, it's obvious that this is the case, because the output looks like:
sh-3.2# ./runScript.sh
sh-3.2# com.apple.auditd: Already loaded
sh-3.2# Attempting... Enable Security Auditing ...Success
In other words, the stderr was displayed before "Attempting... $2"
Here is the function I am trying to use:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
exec $1
if [ "$?" -ne 0 ]; then
echo -ne " ...Failure\n\r"
else
echo -ne " ...Success\n\r"
fi
}
saveChange "$(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)" "Enable Security Auditing"
Any help or advice is appreciated.
this is how you redirect stderr to /dev/null
command 2> /dev/null
e.g.
ls -l 2> /dev/null
Your second part (i.e. the ordering of the echo) may be because of what you have while invoking the script: $(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)
The first echo line is displayed later because it is actually executed second: $(...) runs the command before saveChange is even called. Try the following:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
err=$($1 2>&1)
if [ -z "$err" ]; then
echo -ne " ...Success\n\r"
else
echo -ne " ...Failured\n\r"
exit 1
fi
}
saveChange "launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist" "Enable Security Auditing"
EDIT: Noticed that launchctl does not actually set $? on failure, so I'm capturing stderr to detect the error instead.
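If you control the call sites, passing the command as separate arguments instead of a single string also avoids the word-splitting that $1 relies on; a sketch of that variant (not the original poster's code):
#!/bin/bash
# Variant: first argument is the description, the rest is the command itself
saveChange() {
    local msg=$1
    shift
    echo -ne "Attempting... $msg"
    err=$("$@" 2>&1)            # run the command, capture stdout+stderr
    if [ -z "$err" ]; then
        echo -e " ...Success"
    else
        echo -e " ...Failure: $err"
        return 1
    fi
}

saveChange "Enable Security Auditing" launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist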

Problem with pidof in Bash script

I've written a script to start and stop my Perforce server. To shut down the server I use kill -SIGTERM with the PID of the server daemon. It works as it should, but there are some discrepancies in my script concerning the output behavior.
The script looks as follows:
#!/bin/sh -e
export P4JOURNAL=/var/log/perforce/journal
export P4LOG=/var/log/perforce/p4err
export P4ROOT=/var/local/perforce_depot
export P4PORT=1666
PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
. /lib/lsb/init-functions
p4start="p4d -d"
p4stop="p4 admin stop"
p4user=perforce
case "$1" in
start)
log_action_begin_msg "Starting Perforce Server"
daemon -u $p4user -- $p4start;
echo "\n"
;;
stop)
echo "BLABLA"
echo "$(pidof /usr/local/bin/p4d)"
#daemon -u $p4user -- $p4stop;
p4dPid="$(pidof /usr/local/bin/p4d)"
echo $p4dPid
if [ -z "$(pidof /usr/local/bin/p4d)" ]; then
echo "ERROR: No Perforce Server running!"
else
echo "SUCCESS: Found Perforce Server running!\n\t"
echo "Shutting down Perforce Server..."
kill -15 $p4dPid;
fi
echo "\n"
;;
restart)
stop
start
;;
*)
echo "Usage: /etc/init.d/perforce (start|stop|restart)"
exit 1
;;
esac
exit 0
When p4d is running the stop block works as intended, but when there is no p4d running the script with stop only outputs BLABLA and an empty new line because of the echo "$(pidof /usr/local/bin/p4d)". The error message stating that no server is running is never printed. What am I doing wrong here?
PS: The part if [ -z "$(pidof /usr/local/bin/p4d)" ]; then has been changed from if [ -z "$p4dPid" ]; then for debug reasons.
EDIT: I narrowed down the problem. If I don't use the p4dPid variable and comment out the lines p4dPid="$(pidof /usr/local/bin/p4d)" and echo $p4dPid, the if block is processed and the error message is printed. Still, I don't understand what is causing this behavior.
EDIT 2: Problem solved!
The -e in #!/bin/sh -e was causing the shell to exit the script after any statement returning a non-zero return value.
When your service is not running, the command
echo "$(pidof /usr/local/bin/p4d)"
is processed as
echo ""
because pidof did not return any string. So the command outputs an empty line.
If you do not want this empty line, then just remove this statement; after all, you print an error message when the process is not running.
Problem solved!
The -e in #!/bin/sh -e was causing the shell to exit after any statement returning a non-zero return value.
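If you want to keep -e for the rest of the script (it is handy for catching other failures), you can neutralize the one command that is allowed to fail; a sketch:
# pidof exits non-zero when no process matches; "|| true" stops "sh -e"
# from aborting the whole script on that line
p4dPid="$(pidof /usr/local/bin/p4d || true)"
if [ -z "$p4dPid" ]; then
    echo "ERROR: No Perforce Server running!"
else
    echo "Shutting down Perforce Server..."
    kill -15 "$p4dPid"
fi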
