Bash script from URL: return code when curl fails?

One of the solutions to execute a bash script directly from a URL is:
bash <(curl -sS http://1.1.1.1/install)
The problem is that when curl fails to retrieve the URL, bash gets nothing as input and exits normally (return code 0).
I would like the whole command to abort with the return code from curl.
UPDATE:
Two possible solutions:
curl -sS http://1.1.1.1/install | bash; exit ${PIPESTATUS[0]}
or:
bash <(curl -sS http://1.1.1.1/install || echo "exit $?")
The last one is kind of hackish (but shorter).
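If the wrapper itself runs under bash, another option is pipefail, which makes a pipeline return the exit status of the rightmost command that failed rather than that of the last command in the pipe:
set -o pipefail
curl -sS http://1.1.1.1/install | bash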

Try this to get curl's return code:
curl -sS http://1.1.1.1/install | bash
echo ${PIPESTATUS[0]}
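Note that PIPESTATUS is reset by the very next command, so save the value immediately if you want to act on it. A minimal sketch:
curl -sS http://1.1.1.1/install | bash
rc=${PIPESTATUS[0]}    # save right away; the next command overwrites PIPESTATUS
if [ "$rc" -ne 0 ]; then
    echo "curl failed with exit code $rc" >&2
    exit "$rc"
fi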

Use a temporary file.
installer=$(mktemp)
trap 'rm -f "$installer"' EXIT
curl ... > "$installer" || exit
# Ideally, verify the installer *before* running it
bash "$installer"
Here's why. If you simply pipe whatever curl returns to bash, you are essentially allowing unknown code to execute on your machine. Better to make sure that what you are executing isn't harmful first.
You might ask, how is this different from using a pre-packaged installer, or an RPM, or some other package system? With a package, you can verify via a checksum provided by the packager that the package you are about to install is the same one they are providing. You still have to trust the packager, but you don't have to worry about an attacker modifying the package en route (man-in-the-middle attack).
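As a sketch of that idea applied here (the expected checksum is a placeholder you would obtain from the publisher):
installer=$(mktemp)
trap 'rm -f "$installer"' EXIT
curl -sS http://1.1.1.1/install > "$installer" || exit
expected="<sha256 published by the packager>"    # placeholder
echo "$expected  $installer" | sha256sum -c - || exit
bash "$installer"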

You could save the output of the curl command in a variable and execute it only if the exit status was zero. It might not be as elegant, but it's more portable across shells (it doesn't rely on $PIPESTATUS, which many shells don't provide):
install=$(curl -sS http://1.1.1.1/install)
if [ $? -ne 0 ]; then
echo "error"
else
echo "$install" | bash
fi

Related

How to handle exit codes for multiple sequential Unix commands in a shell script

I need a shell script that runs multiple commands in sequence, for example:
command 1: mv
command 2: cp
command 3: sed
command 4: echo, append, etc.
After each command I check rc=$?: if it is 0, report success; otherwise exit with code 30. But even though the mv command failed, the script returned 0 and showed the success message.
Do I need to maintain rc for every command, or is there a better way to handle the return code of each command to make sure every command ran successfully?
If you have to deal with commands which may fail, you can do the following:
rc=0
if mv ...; then
echo mv success
else
echo mv failure
rc=1
fi
exit $rc
To make your script abort at the first failed command, add the line set -e right below your shebang (#!/bin/sh).
Keep in mind that the error handling above is compatible with set -e: because the failing command is tested in an if condition, it will not stop the script.
set -eu is considered best practice for shell scripts; -u additionally makes your script abort if it uses an unset variable.
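A minimal sketch of that advice (the file names are placeholders):
#!/bin/sh
set -eu
mv source.txt dest.txt    # the script aborts here, with mv's status, if the move fails
cp dest.txt backup.txt
echo "all steps succeeded"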

BASH If command contains 'this text' do another command?

I'm creating a bash script to check the HTTP headers on remote hosts. I'm doing this via cURL, and have noted that prepending http:// to {host} will only work for services running on tcp/80, not tcp/443. For HTTPS services you require curl -I -k {host}, as opposed to HTTP services, which only require curl -I {host}. This is my script:
for host in $(cat file.txt); do
echo " "
echo "Current host: "${host}
curl -I -k https://${host}
echo " "
echo "=============================================="
done
Now what I want is a conditional: when the output is "Could not resolve host", the script should instead run curl -I http://{host} for those hosts whose output contained that string.
How can I achieve this in bash?
stdout will not contain Could not resolve host, though; that message goes to stderr. While you could capture stderr and then do string matching, there is a much, much simpler solution: the exit code.
curl always exits with code 6 when it fails to resolve a host (see the EXIT CODES section of curl's man page). Thus, simply testing the exit code is sufficient:
curl -i -k http://nowaythisthingexists.test
if [[ $? -eq 6 ]]
then
echo "oopsie, couldn't resolve host!"
fi
Alternately, if you really want to do it by matching strings, make sure to redirect stderr to stdout (and possibly also kill stdout so it doesn't interfere):
output=$(curl -i -k http://nowaythisthingexists.test 2>&1 >/dev/null)
if [[ "$output" = *"Could not resolve host"* ]]
then
echo "oopsie, couldn't resolve host!"
fi
Obviously, you are not getting the output of your request this way, so you'd need to redirect it somewhere more useful than /dev/null — a file, or a Unix pipe. Now it's getting more complicated than it needs to be.
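Folding the exit-code test back into the loop from the question could look like this (a sketch; it assumes the desired behaviour is to retry over plain HTTP whenever the HTTPS request fails with code 6, as the question asks):
while read -r host; do
    echo "Current host: ${host}"
    curl -I -k "https://${host}"
    if [[ $? -eq 6 ]]; then
        # exit code 6: could not resolve host; fall back to plain HTTP
        curl -I "http://${host}"
    fi
    echo "=============================================="
done < file.txt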

Why does redirect with &> change the exit code in zsh?

I was debugging a shell script, and the problem was that the following code
if curl doesntexist -i &> /dev/null;
then
echo True
else
echo False
fi
Is, if running zsh, not equivalent to:
if curl doesntexist -i 1> /dev/null 2> /dev/null;
then
echo True
else
echo False
fi
The latter echoes True as expected, but if I redirect with &>, the output is False.
I do not understand this behaviour; the Bash manual, for example, says that
&>name is like 1>name 2>name
Can someone explain why the two snippets do not behave the same when running in zsh? The zsh documentation says it should also redirect stdout and stderr, which sounds like the same thing bash does:
&> Redirects both standard output and standard error (file descriptor 2) in the manner of ‘> word’
I suspect your script does not have a shebang to indicate which shell should be used to execute it, that you have made it executable, and are executing it with something like ./test.sh. If that is the case, adding something like #!/bin/bash or #!/bin/zsh will solve the problem.
Without a shebang, what actually executes your script depends on which shell you are executing it from. bash will execute the script with bash. zsh, however, will execute the script with /bin/sh.
In bash, &> is a redirection operator that redirects both standard output and standard error to the same file.
In /bin/sh, though, it is not a redirection operator. The command curl doesntexist -i &> /dev/null is parsed as two separate commands:
curl doesntexist -i &
> /dev/null
The first runs curl in the background, and immediately returns with a 0 exit status. (The exit status of curl itself is never considered.) The second command is a valid empty command that simply opens > /dev/null for writing, then exits with a 0 exit status.
As a result, no matter what curl might do, the exit status that if cares about is just the last one in the list, that of > /dev/null. Since that is 0, you get the True path.
In bash, where &> is the valid redirection operator, if looks at the exit status of curl as expected.
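You can watch the two behaviours side by side (a sketch; it assumes /bin/sh on your system is a strictly POSIX shell such as dash, not a bash symlink):
cat > test.sh <<'EOF'
curl doesntexist -i &> /dev/null
echo "exit status: $?"
EOF
bash test.sh    # &> is a redirection: prints curl's non-zero status (6)
sh test.sh      # parsed as 'curl ... &' then '> /dev/null': prints 0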

Allow user input in second command in bash pipe

I'm looking for a way to allow user input in the second command of a bash pipeline, and I'm not sure how to go about it. I'd like to provide a one-liner for someone to install my application, but part of the installation process requires asking some questions.
The current script setup looks like:
curl <url/to/bootstrap.sh> | bash
and then bootstrap.sh does:
if [ $UID -ne 0 ]; then
echo "This script requires root to run. Restarting the script under root."
exec sudo "$0" "$@"
exit $?
fi
git clone <url_to_repo> /usr/local/repo/
bash /usr/local/repo/.setup/install_system.sh
which in turn calls a python3 script that asks for input.
I know that the curl in the first line is consuming stdin, so that might make what I'm asking impossible, and it may have to be two lines to ever work:
wget <url/to/boostrap.sh>
bash bootstrap.sh
You can restructure your script to run this way:
bash -c "$(curl -s http://0.0.0.0//test.bash 2>/dev/null)"
foo
wololo:a
a
My test.bash is really just
#!/bin/bash
echo foo
python -c 'x = raw_input("wololo:");print(x)'
This demonstrates that stdin can still be read in this way. Sure, it creates a subshell to take care of curl, but it allows you to keep reading from stdin as well.
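Another common workaround, if you want to keep the curl | bash one-liner, is to have bootstrap.sh prompt via the controlling terminal rather than stdin (a sketch; it assumes an interactive terminal exists, so it won't work in fully non-interactive runs):
# inside bootstrap.sh: stdin is the piped script, so read from /dev/tty instead
read -r -p "Install directory: " dir < /dev/tty
echo "Installing to: $dir"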

How can I prevent bash from reporting an error when attempting to call a non-existing script?

I am writing a simple script in bash to check whether a bunch of dependencies are installed on the current system. My script attempts to run a sample script with the -h flag, greps the output for a keyword I would expect the sample script to return, and thereby knows whether or not the sample script is installed on the system.
I then pass this through a conditional statement that basically says sample script = OK or sample script = FAIL. However, when the sample script isn't installed on the system, bash throws the warning -bash: sample_script: command not found. How can I prevent this from being displayed? I tried the 1>&2 error redirection, but the warning still appears on the screen (I want the OK/FAIL output text to be displayed on the user's screen when running my script).
Thanks for any suggestions!
If you just want to suppress errors (stderr) and let the "OK" or "FAIL" you are echoing (stdout) pass through, you would do:
./yourscript.sh 2> /dev/null
Although the better approach would be to test whether sample_script is executable before trying to execute it. For instance:
if [ -x "$script" ]; then
    # ...do whatever generates FAIL or OK...
fi
As devnull said:
sample_script -h 2>/dev/null
I use this function to be independent of which, whence, type -p and whatnot:
pathto () {
    # turn the colon-separated $PATH into a whitespace-separated directory list
    DIRLIST=$(echo "$PATH" | tr : ' ')
    for e in "$@"; do             # each command name passed as an argument
        for d in $DIRLIST; do     # unquoted on purpose: word splitting does the iteration
            # print the full path if it exists and is executable
            test -f "$d/$e" -a -x "$d/$e" && echo "$d/$e"
        done
    done
}
pathto script will echo the full path if it can be found (and is executable). Returning 0 or 1 instead left as an exercise :-)
For bash:
if ! type -P sample_script &> /dev/null; then
echo Error: sample_script is not installed. Come back later. >&2
exit 1
fi
sample_script "$#"
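If you need the same check from a POSIX shell, command -v is the portable counterpart of bash's type -P (a sketch):
if ! command -v sample_script >/dev/null 2>&1; then
    echo "FAIL: sample_script is not installed" >&2
    exit 1
fi
echo "OK"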
