I am trying to create a script that will run wget against a few sites and check whether we receive a 200 OK from each site.
My problem is that the output of the wget application is shown on stdout. Is there a way I can hide this?
My current script is:
RESULT=`wget -O wget.tmp http://mysite.com 2>&1`
Later I will use a regex to look for the 200 OK in the stderr output that wget produces.
When I run the script, it works fine, but the output of wget gets mixed in between my echo statements.
Any way around this?
You can use:
RESULT=`wget --spider http://mysite.com 2>&1`
And this does the trick too:
RESULT=`wget -O wget.tmp http://mysite.com >/dev/null 2>&1`
Played around a little and came up with this one:
RESULT=`curl -fSw "%{http_code}" http://example.com/ -o a.tmp 2>/dev/null`
This outputs nothing but "200" - Nothing else.
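If you go the curl route, a minimal sketch of the surrounding check might look like this (the URL and temp-file name are just the ones from the example above):
# -f: exit non-zero on HTTP errors, -w "%{http_code}": print only the status code,
# -o a.tmp: send the body to a temp file so it stays off the terminal
RESULT=$(curl -fSw "%{http_code}" http://example.com/ -o a.tmp 2>/dev/null)

if [ "$RESULT" = "200" ]; then
    echo "site OK"
else
    echo "site returned: $RESULT"
fi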
Jack's suggestions are good. I'd modify them just slightly.
If you only need to check the status code, use the --spider option that Jack referenced. From the docs:
When invoked with this option, Wget will behave as a Web spider, which means that it will not download the pages, just check that they are there.
And Jack's second suggestion shows the core ideas behind hiding output:
... >/dev/null 2>&1
The above redirects standard output to /dev/null. The 2>&1 then redirects standard error to the current standard output file descriptor, which has already been redirected to /dev/null, so it won't give you any output.
But, since you don't want output, you might be able to use the --quiet option. From the docs:
Turn off Wget's output.
So, I'd probably use the following command:
wget --quiet --spider 'http://mysite.com/your/page'
if [[ $? -ne 0 ]]; then
# error retrieving page, do something useful
fi
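Since the original goal was to check a few sites, here is a rough sketch of wrapping that check in a loop (the site list is made up):
#!/bin/bash
# Hypothetical list of sites to check
sites=('http://mysite.com/your/page' 'http://example.com/')

for site in "${sites[@]}"; do
    if wget --quiet --spider "$site"; then
        echo "OK:   $site"
    else
        echo "FAIL: $site"
    fi
done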
If you want to avoid external tools entirely, bash's built-in /dev/tcp pseudo-device can do the check:
TCP_HOST="mydomain.com"
TCP_PORT=80
# Open a bidirectional TCP connection to the host on file descriptor 5
exec 5<>/dev/tcp/"${TCP_HOST}"/"${TCP_PORT}"
# Send a minimal HEAD request (HTTP/1.0, so the server closes the connection)
printf 'HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n' "${TCP_HOST}" >&5
# The status line arrives first; check it for 200 OK
while read -r line
do
    case "$line" in
        *200*OK* )
            echo "site OK:$TCP_HOST"
            exec 5>&-    # close the connection
            exit
            ;;
        *)
            echo "site:$TCP_HOST not ok"
            exit 1       # only the status line matters
            ;;
    esac
done <&5
I'm creating a bash script to check the HTTP headers on remote hosts. I'm doing this via cURL, and I've noted that prefixing hosts with http:// only works for services running on tcp/80, not tcp/443. For HTTPS services you need curl -I -k {host}, whereas HTTP services only need curl -I {host}. This is my script:
for host in $(cat file.txt); do
echo " "
echo "Current host: "${host}
curl -I -k https://${host}
echo " "
echo "=============================================="
done
Now what I want is a conditional to check whether the output contains the string "Could not resolve host", and for those hosts run curl -I http://{host} instead.
How can I achieve this in bash?
stdout will not contain Could not resolve host though; that message goes to stderr. While you could capture stderr and then do string matching, there is a much, much simpler solution: the exit code.
curl's documented exit codes show that it always exits with code 6 when it fails to resolve the host. Thus, simply testing the exit code is sufficient:
curl -i -k http://nowaythisthingexists.test
if [[ $? -eq 6 ]]
then
echo "oopsie, couldn't resolve host!"
fi
Alternately, if you really want to do it by matching strings, make sure to redirect stderr to stdout (and possibly also kill stdout so it doesn't interfere):
output=$(curl -i -k http://nowaythisthingexists.test 2>&1 >/dev/null)
if [[ "$output" = *"Could not resolve host"* ]]
then
echo "oopsie, couldn't resolve host!"
fi
Obviously, you are not getting the output of your request this way, so you'd need to redirect it somewhere more useful than /dev/null — a file, or a Unix pipe. Now it's getting more complicated than it needs to be.
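Putting the exit-code check back into the loop from the question, a sketch (keeping the question's file.txt and header-printing) could look like:
#!/bin/bash
# Try HTTPS first; fall back to plain HTTP when curl exits with code 6
# (could not resolve host), as discussed above.
while read -r host; do
    echo ""
    echo "Current host: $host"
    curl -I -k "https://$host"
    if [ $? -eq 6 ]; then
        curl -I "http://$host"
    fi
    echo ""
    echo "=============================================="
done < file.txt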
I wrote a small bash script in this post: How to search for a string in a text file and perform a specific action based on the result
I noticed that when I run the script and check the logs, everything appears to be working, but when I look at the Nagios UI, almost half of the servers listed in my text file did not get their notifications disabled. A revised version of the script is below:
host=/Users/bob/wsus.txt
password="P#assw0rd123"
while read -r host; do
region=$(echo "$host" | cut -f1 -d-)
if [[ $region == *sea1* ]]
then
echo "Disabling host notifications for: $host"
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
else
echo "Disabling host notifications for: $host"
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah02.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
fi
done < wsus.txt >> /Users/bob/disable.log 2>&1
If I run the command manually against the servers having the issue, they do get disabled in the Nagios UI, so I'm a bit confused. FYI, I'm not well versed in Bash either, so this was my attempt at automating this process a bit.
1 - There is a missing double-quote before the first https occurrence:
You have:
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
Should be:
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
2 - Your first variable host is never used (overwritten inside the while loop).
I'm guessing what you were trying to do was something like:
hosts_file="/Users/bob/wsus.txt"
log_file="/Users/bob/disable.log"
# ...
while read -r host; do
# Do stuff with $host
done < "$hosts_file" >> "$log_file" 2>&1
3 - This looks suspicious to me:
if [[ $region == *sea1* ]]
Note: I haven't tested it yet, so this is my general feeling about this, might be wrong.
The $region isn't double-quoted; inside a double-bracket test [[ ]] that is not a problem (no word splitting or globbing happens there), but keep it in mind if you ever switch to single brackets.
The unquoted *sea1* on the right-hand side of == is treated as a glob-style pattern matched against the string, not against filenames, so the test itself should work. If you want an explicit regular-expression test, use the =~ operator or (my favorite for some reason) the grep command:
if grep -q ".*sea.*" <<< "$region"; then
# Your code if match
else
# Your code if no match
fi
The -q keeps grep quiet.
There is no need for a test like [ or [[ because grep's return code is already 0 when there is a match.
The <<< here-string feeds the string on the right as standard input to the command on the left (avoiding a useless pipe like echo "$region" | grep -q ".*sea.*").
If this doesn't solve your problem, please provide a sample of your input file hosts_file as well as some output logs.
You could also try to see what's really going on under the hood by enclosing your script with set -x and set +x to activate debug/trace mode.
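Putting the three points together, a hedged sketch of the corrected loop might look like the following (the Nagios URLs and the sea1 naming convention are taken from the question as-is; I can't verify them):
#!/bin/bash
hosts_file="/Users/bob/wsus.txt"
log_file="/Users/bob/disable.log"
password="P#assw0rd123"

while read -r host; do
    region=$(echo "$host" | cut -f1 -d-)
    echo "Disabling host notifications for: $host"
    if grep -q "sea1" <<< "$region"; then
        nagios_url="https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi"
    else
        nagios_url="https://nagios.$region.blah02.com/nagios/cgi-bin/cmd.cgi"
    fi
    # Quotes around the URL are now balanced
    curl -vs -o /dev/null -k -u "bob:$password" \
         -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" \
         "$nagios_url" 2>&1
done < "$hosts_file" >> "$log_file" 2>&1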
I'm trying to come up with a way to pass a silent flag to a bash script so that all output is directed to /dev/null if the flag is present and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
echo "Silent mode."
# Non-working line.
out_var = "to screen"
else
echo $1
# Non-working line.
out_var = "/dev/null"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two arguments; the first one is irrelevant and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 to either the screen or to /dev/null depending on -s being present or not.
How could I do this?
You can use the naked exec command to redirect the current program without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
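Adapted to the question's calling convention (where the flag is the second argument), a rough sketch might be:
#!/bin/bash
# Called as: ./myscript.sh first_variable -s
if [[ "$2" == "-s" ]]; then
    echo "Silent mode."
    # Everything from here on goes to the bit bucket
    exec >/dev/null 2>&1
else
    echo "$1"
fi

command1    # placeholder commands from the question
command2
echo "End."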
There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout
# Do the same thing, but with standard error instead.
cmd > /dev/stderr
Redirect standard error to standard output, and then send standard output to /dev/null. Order matters here.
cmd 2>&1 > /dev/null
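A tiny demonstration of why the order matters (some_cmd stands for any command that writes to both streams):
# stderr follows the *old* stdout (e.g. the terminal), stdout goes to /dev/null:
some_cmd 2>&1 > /dev/null

# Here both streams end up in /dev/null, because stderr is duplicated
# from stdout *after* stdout was already redirected:
some_cmd > /dev/null 2>&1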
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!
If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >&$fd" >/dev/null
$ fd=2
$ eval "echo hi >&$fd" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
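Tied back to the question, a sketch of that idea using a file-descriptor variable instead of a filename (command1 and command2 are the question's placeholders):
#!/bin/bash
if [ "$2" = "-s" ]; then
    exec 3>/dev/null   # fd 3 points at the bit bucket
    fd=3
else
    fd=1               # fd 1 is the normal standard output
fi

eval "command1 >&$fd"
eval "command2 >&$fd"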
In your case, you just need to change out_var = "to screen" to out_var="/dev/tty" (note: no spaces around the = in a bash assignment), and then use it like this: command1 > "$out_var" (see the $ you were lacking).
I implemented it like this
# Set debug flag as desired
DEBUG=1
# DEBUG=0
if [ "$DEBUG" -eq "1" ]; then
OUT='/dev/tty'
else
OUT='/dev/null'
fi
# actual script use commands like this
command > $OUT 2>&1
# or like this if you need
command 2> $OUT
Of course you can also set the debug mode from a cli option, see How do I parse command line arguments in Bash?
And you can have multiple debug or verbose levels like this
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2
VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -gte 1 ]; then
VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -gte 2 ]; then
VERBOSE2='/dev/tty'
fi
# actual script use commands like this
command > $VERBOSE1 2>&1
# or like this if you need
command 2> $VERBOSE2
I am writing a simple script in bash to check whether or not a bunch of dependencies are installed on the current system. My script attempts to run a sample script with the -h flag, greps the output for a keyword I would expect it to return, and thereby knows whether or not the sample script is installed on the system.
I then pass this through a conditional statement that basically says sample script = OK or sample script = FAIL. However, when the sample script isn't installed on the system, bash throws the warning -bash: sample_script: command not found. How can I prevent this from displaying? I tried using the 1>&2 error redirection, but the warning still appears on the screen (I want the OK/FAIL output text to be displayed on the user's screen when running my script).
Thanks for any suggestions!
If you just want to suppress errors (stderr) and let the "OK" or "FAIL" you are echoing (stdout) pass through, you would do:
./yourscript.sh 2> /dev/null
A better approach, though, would be to test whether sample_script is executable before trying to execute it. For instance:
if [ -x "$script" ]; then
    # do whatever generates FAIL or OK
fi
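A concrete sketch of that test, tying it back to the OK/FAIL output from the question (the script path and the keyword are assumptions):
script="./sample_script"   # hypothetical path to the dependency being checked

if [ -x "$script" ] && "$script" -h 2>/dev/null | grep -q "expected keyword"; then
    echo "sample_script = OK"
else
    echo "sample_script = FAIL"
fi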
As devnull said:
command -h 2>/dev/null
I use this function to be independent of which, whence, type -p and whatnot:
pathto () {
    DIRLIST=$(echo "$PATH" | tr : ' ')
    # Look for each name given as an argument in every directory on PATH
    for e in "$@"; do
        for d in $DIRLIST; do
            test -f "$d/$e" -a -x "$d/$e" && echo "$d/$e"
        done
    done
}
pathto script will echo the full path if it can be found (and is executable). Returning 0 or 1 instead left as an exercise :-)
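A quick usage sketch, again producing the OK/FAIL output from the question (sample_script is the hypothetical dependency name):
if [ -n "$(pathto sample_script)" ]; then
    echo "sample_script = OK"
else
    echo "sample_script = FAIL"
fi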
for bash:
if ! type -P sample_script &> /dev/null; then
echo Error: sample_script is not installed. Come back later. >&2
exit 1
fi
sample_script "$#"
I have the following code:
#!/bin/bash
read -t1 < <(stat -t "/my/mountpoint")
if [ $? -eq 1 ]; then
echo NFS mount stale. Removing...
umount -f -l /my/mountpoint
fi
How do I mute the output of stat while still being able to detect its error level in the subsequent test?
Adding >/dev/null 2>&1 inside the process substitution, or at the end of the read line, does not work. But there must be a way...
Thanks for any insights on this!
Use Command Substitution, Not Process Substitution
Instead of reading in from process substitution, consider using command substitution instead. For example:
mountpoint=$(stat -t "/my/mountpoint" 2>&1)
This will silence the output by storing standard output in a variable, but leave the results retrievable by dereferencing $mountpoint. This approach also leaves the exit status accessible through $?.
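If you prefer to keep the question's $?-style test, a minimal sketch of that variant:
# Capture stdout and stderr so nothing reaches the terminal
out=$(stat -t "/my/mountpoint" 2>&1)
if [ $? -ne 0 ]; then
    echo "NFS mount stale. Removing..."
    umount -f -l /my/mountpoint
fi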
A Clearer Alternative
Alternatively, you might just rewrite this more simply as:
mountpoint="/my/mountpoint"
if stat -t "$mountpoint" 2>&-
then
echo "NFS mount stale. Removing..."
umount -f -l "$mountpoint"
fi
To me, this seems more intention-revealing and less error-prone, but your mileage may certainly vary.
(Ab)using Read Timeouts
In the comments, the OP asked whether read timeouts could be abused to handle hung input from stat. The answer is yes, if you close standard error and check for an empty $REPLY string. For example:
mountpoint="/my/mountpoint"
read -t1 < <(stat -t "$mountpoint" 2>&-)
if [[ -n "$REPLY" ]]; then
echo "NFS mount stale. Removing..."
umount -f -l "$mountpoint"
fi
This works for several reasons:
When using the read builtin in Bash:
If no NAMEs are supplied, the line read is stored in the REPLY variable.
With standard error closed, $REPLY will be empty unless stat returns something on standard output, which it won't if it encounters an error. In other words, you're checking the contents of the $REPLY string instead of the exit status from read.
I think I got it! The redirection mentioned in your response seems to work within the subshell without wiping out the return code like 2>&1 did. So this works as expected:
read -t1 < <(rpcinfo -t 10.0.128.1 nfs 2>&-)
if [ $? -eq 0 ]; then
echo "NFS server/service available!"
else
echo "NFS server/service unavailable!"
fi
Where 10.0.128.1 is a 'bad' IP (no server/service responding). The script times out within a second and produces "NFS server/service unavailable!" response, but no output from rpcinfo. Likewise, when the IP is good, the desired response is output.
I upvoted your response!