I made a simple script that checks a port with nmap and returns 1 if the port is open and 0 if it is closed. I'm trying to create an item in Zabbix that runs this script. When I test the item through the Zabbix UI, the script runs without error, but it returns an empty value after execution. I can run the script normally via the CLI, and I have no permission issues either. I would like some help diagnosing what might be going on.
Here is the script zabbix-nmap.sh:
#!/bin/bash
# Prints 1 if the given port ($2) is open on the given host ($1), 0 otherwise.
PORT=$(nmap "$1" -p "$2")
SUB='open'
if [[ "$PORT" == *"$SUB"* ]]; then
    echo 1
else
    echo 0
fi
Here is the Zabbix item template configuration/test (screenshots):
Zabbix Item Test
Zabbix Item Config
I tried running the file as root and as the zabbix user via the CLI, and it works normally in both cases. Via the UI, the script execution test also does not show an error, but it returns an empty result.
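For reference, one way to reproduce from the CLI what the agent does is sketched below; the script path and the item key nmap.port.check are placeholders I made up, not values from the post:
# Run the script exactly as the agent user would (path is a placeholder):
sudo -u zabbix /etc/zabbix/scripts/zabbix-nmap.sh 192.168.0.1 22
# If the script is wired in as a user parameter, ask the agent to evaluate the key itself:
zabbix_agentd -t 'nmap.port.check[192.168.0.1,22]'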
Related
I wrote a site diagnostic script and have been testing it using iTerm2 on macOS. A good portion of the people who may use this script are on Windows with Git Bash (it's probably 50/50 Windows vs. macOS), so not supporting Windows might not be an option.
I had someone here test out the script on their Windows PC and it immediately failed. Would anyone know why this function would succeed on Mac and fail on Windows?
_ping_test(){
    # Define Error Codes & Messages below:
    local _url="${1}.testdomain.net"
    local _success="0"
    local _success_message="${_green}SUCCESS:${_reset_color} Ping Test Passed"
    local _unknown_host="68"
    local _unknown_host_message="${_red}ERROR:${_reset_color} Ping Test Failed on ${_url} - Unknown Host"
    local _unrecognized_code="${_red}ERROR:${_reset_color} Ping Test Failed - Unrecognized Status Code"

    # Ping once, quietly, and capture the exit status.
    local _ping_url=$(ping -c 1 -q "$_url" &> /dev/null; echo $?)

    if [ "${_ping_url}" == "${_success}" ] ; then
        echo "$_success_message"
    elif [ "${_ping_url}" == "${_unknown_host}" ] ; then
        echo "$_unknown_host_message"
        exit 1
    else
        echo "$_unrecognized_code ${_ping_url}"
        exit 1
    fi
}
Running it returns the Unrecognized Code response on his computer, but the same site succeeds on my Mac.
I echoed back $_url, and that returns the correct site. I also tried removing the &> /dev/null portion to see what the issue was, but it reported that it didn't recognize the ping command.
Then I just ran the ping command directly in his Git Bash window, and that worked outside of the shell script.
I realize the ping test portion is a bit silly (it's used to exit the script if they input a bad URL), but the more valuable ssh commands that run later were also failing similarly.
I managed to grab a Windows PC and TIL that the parameters you pass to ping follow a different paradigm depending on the OS:
-c means count on macOS,
but
-c means routing compartment on Windows, which required admin access (the count flag there is -n).
Fixed the params and then it worked.
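In case it helps anyone else, a minimal OS-aware wrapper might look like the sketch below; it assumes Git Bash/MSYS or Cygwin exposes the Windows ping.exe (which uses -n for the count), while macOS and Linux use -c:
ping_once() {
    # Ping a host exactly once, quietly, using the flag the local ping understands.
    local host="$1"
    case "$(uname -s)" in
        MINGW*|MSYS*|CYGWIN*) ping -n 1 "$host" > /dev/null 2>&1 ;;
        *)                    ping -c 1 -q "$host" > /dev/null 2>&1 ;;
    esac
}
ping_once example.com && echo reachable || echo unreachable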
I am working on a control panel for the built-in Apache server in OS X/macOS
and am trying to check the status of the server - whether it is running or not. On the Mac there is no single command to check whether the server is running through httpd/apachectl (I am using the latter). So I am trying to check by starting the server: if it gives a return value of 'all good' (or, in os.system()'s case, 0), then turn it off again and report that it wasn't running; if it returns an error, report that it was already running. Basically, I need to get the return value of the bash command I am executing inside AppleScript, and that return value feeds into the rest of my Python script. This is what I have so far:
status = os.system('''
osascript -e 'do shell script "sudo apachectl start" with administrator privileges'
''')
if (status == 0):
    # Start succeeded, so Apache was not running; stop it again and report that.
    os.system('''
    osascript -e 'do shell script "sudo apachectl stop" with administrator privileges'
    ''')
    statusText.config(state=NORMAL)
    statusText.insert(END, "Not Running")
    statusText.config(state=DISABLED, fg="red")
else:
    # Start failed, so Apache was presumably already running.
    statusText.config(state=NORMAL)
    statusText.insert(END, "Running")
    statusText.config(state=DISABLED, fg="green")
I intend to make this a standalone application, so I need all built-ins.
Thank you in advance!
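As an aside, a shell-level check that avoids starting and stopping Apache altogether might look like the sketch below (the process name httpd is an assumption about how macOS names the built-in Apache); its exit status could be read through the same os.system() call:
# Exit 0 if an httpd process exists, non-zero otherwise; no sudo or restart needed.
if pgrep -x httpd > /dev/null; then
    echo "Running"
else
    echo "Not Running"
fi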
I use Ubuntu and am trying to write a script that does the following:
-test if an audio stream works
-if not, send an email.
I have tried the following code (running as a cron job every 10 minutes), which 'works' if I supply a wrong password, for example (it sends an email then), but does nothing if the actual server is down (tested by killing the server). Any ideas on how to fix the script?
Thanks in advance!
#!/bin/bash
#servertest.sh
username=user1
password=xyz
url="http://wwww.streamingaudioserver.com -passwd $password -user $username"
mplayer $url &
sleep 5
test=$(pgrep -c mplayer)
if [ "$test" = 0 ]; then
    # server is down!
    mailfile="downmail.txt"
    /usr/sbin/ssmtp test@maildomain.com < "/home/test/$mailfile"
fi
killall mplayer
sleep 5
exit
Your problem is in this line:
$mailfile="downmail.txt"
Remove the dollar sign and that should do it.
You should be getting error messages in your cron log or emails to the crontab owner complaining about a command not found or no such file.
Edit:
Does your script work if run from the command line (with the stream down) rather than cron?
Try using set -x (or #!/bin/bash -x) in the script to turn on tracing, or use echo "PID: $$, value of \$test: $test" > /tmp/script.out after the assignment to see if you're getting the zero you're expecting.
Also, try an ssmtp command outside the if to make sure it's working (but I think you already said it is under some circumstances).
Try your script without ever starting mplayer.
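Putting those suggestions together, an instrumented version of the check might look like this (the log path /tmp/script.out is arbitrary):
#!/bin/bash -x
# -x traces every command; under cron the trace is mailed to the crontab owner.
test=$(pgrep -c mplayer)
echo "PID: $$, value of \$test: $test" > /tmp/script.out
if [ "$test" = 0 ]; then
    echo "mplayer not running - this is where the mail would be sent" >> /tmp/script.out
fi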
Not having much luck Googling this question and I thought about posting it on SF, but it actually seems like a development question. If not, please feel free to migrate.
So, I have a script that runs via cron every morning at about 3 am. I also run the same script manually sometimes. The problem is that every time I run the script manually and it fails, it sends me an e-mail, even though I can look at the output and see the error in the console.
Is there a way for the bash script to tell that it's being run through cron (perhaps by using whoami) and only send the e-mail if so? I'd love to stop receiving emails when I'm doing my testing...
You can try tty to see if the script is being run from a terminal or not. That won't tell you that it's specifically run by cron, but it does tell you whether a user is at a prompt.
You can also get your parent PID and follow it up the process tree to look for cron, though that's a little heavy-handed.
I had a similar issue. I solved it by checking whether stdout is a TTY. This is a check to see if your script runs in interactive mode:
if [ -t 1 ] ; then
    echo "interactive mode";
else
    : # send mail here
fi
I got this from: How to detect if my shell script is running through a pipe?
The -t test returns true if the file descriptor is open and refers to a terminal. '1' is stdout.
Here are two different options for you:
Take the emailing out of your script/program and let cron handle it. If you set the MAILTO variable in your crontab, cron will send anything printed out to that email address, e.g.:
MAILTO=youremail@example.com
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
Set an environment variable in your crontab that is used to determine whether you are running under cron, e.g.:
THIS_IS_CRON=1
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
and in your script something like
if [ -n "$THIS_IS_CRON" ]; then echo "I'm running in cron"; else echo "I'm not running in cron"; fi
Why not have a command-line argument that is -t for testing or -c for cron?
Or better yet:
-e=email@address.com
If it's not specified, don't send an email.
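A minimal sketch of that idea, assuming the flag is passed as "-e address" (getopts doesn't handle the "-e=address" form) and that a mail/mailx command is available:
#!/bin/bash
mail_to=""
while getopts "e:" opt; do
    case "$opt" in
        e) mail_to="$OPTARG" ;;   # only set when -e is supplied, i.e. from the crontab entry
    esac
done
shift $((OPTIND - 1))

run_the_job() {   # placeholder for the real work
    false         # pretend it failed so the mail path is exercised
}
run_the_job
status=$?

if [ "$status" -ne 0 ] && [ -n "$mail_to" ]; then
    echo "job failed with status $status" | mail -s "cron job failed" "$mail_to"
fi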
I know the question is old, but I just came across the same problem. This was my solution:
CRON=$(pstree -s $$ | grep -q cron && echo true || echo false)
then test with
if $CRON
then
    echo "Being run by cron"
else
    echo "Not being run by cron"
fi
Same idea as the one that @eruciform mentioned - it follows your PID up the process tree, checking for cron.
Note: This solution only works specifically for cron, unlike some of the other solutions, which work anytime the script is being run non-interactively.
What works for me is to check $TERM. Under cron it's "dumb", but under a shell it's something else. Run the set command in your terminal, then in a cron script, and compare.
if [ "dumb" == "$TERM" ]
then
echo "cron"
else
echo "term"
fi
I'd like to suggest a new answer to this highly-voted question. This works only on systemd systems with loginctl (e.g. Ubuntu 14.10+, RHEL/CentOS 7+) but is able to give a much more authoritative answer than previously presented solutions.
service=$(loginctl --property=Service show-session $(</proc/self/sessionid))
if [[ ${service#*=} == 'crond' ]]; then
echo "running in cron"
fi
To summarize: when used with systemd, crond (like sshd and others) creates a new session when it starts a job for a user. This session has an ID that is unique for the entire uptime of the machine. Each session has some properties, one of which is the name of the service that started it. loginctl can tell us the value of this property, which will be "crond" if and only if the session was actually started by crond.
Advantages over using environment variables:
No need to modify cron entries to add special invocations or environment variables
No possibility of an intermediate process modifying environment variables to create a false positive or false negative
Advantages over testing for tty:
No false positives in pipelines, startup scripts, etc
Advantages over checking the process tree:
No false positives from processes that also have crond in their name
No false negatives if the script is disowned
Many of the commands used in prior posts are not available on every system (pstree, loginctl, tty). This was the only thing that worked for me on a ten-year-old BusyBox/OpenWrt router that I'm currently using as a blacklist DNS server. It runs a script with an auto-update feature; when run from crontab, it sends an email out.
[ -z "$TERM" ] || [ "$TERM" = "dumb" ] && echo 'Crontab' || echo 'Interactive'
In an interactive shell the $TERM variable returns vt102 for me. I included the check for "dumb" since @edoceo mentioned it worked for him. I didn't use '==' since it's not completely portable.
I also liked the idea from Tal, but I also see the risk of getting undefined returns. I ended up with a slightly modified version, which seems to work very smoothly in my opinion:
CRON="$( pstree -s $$ | grep -c cron )"
So you can check for $CRON being 1 or 0 at any time.
I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh to execute this script.
Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out, and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here since I haven't had experience with shell-scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
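For what it's worth, a loop over the hosts might look something like this, assuming they are named node2 through node12 and the script is in the PATH on each of them as my_script:
for i in $(seq 2 12); do
    ssh "node$i" my_script >> results_file   # append each host's output to one file
done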
One trick you'll need to use is setting up pre-authorized keys for each host. Then you can run a script on one host, running something like 'ssh hostname command > log.hostname'
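A rough sketch of that one-time key setup (user and host names are placeholders):
ssh-keygen -t ed25519                  # create a key pair if you don't already have one
ssh-copy-id user@node2                 # install the public key on the remote host
ssh user@node2 hostname > log.node2    # now runs without a password prompt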
This script might be what you are looking for: It allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple script with bash source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes, you can.
You actually need two small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the command or script you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh is as follows:
#!/bin/bash
ALL_ARG=$*                     # all arguments
FST_ARG=$1                     # name of the target machine
REST_ARG=${ALL_ARG##$FST_ARG}  # everything after the machine name: the remote command
echo "Executing REMOTE COMMAND ON $FST_ARG"
/usr/bin/ssh $FST_ARG bash execute_ssh_command.sh $FST_ARG $(pwd) $REST_ARG
execute_ssh_command.sh is as follows:
#!/bin/bash
ALL_ARG=$*                     # all arguments
FST_ARG=$1                     # name of the calling machine
DIR_ARG=$2                     # directory to run the command in
REM_ARG="$1 $2"
REST_ARG=${ALL_ARG##$REM_ARG}  # the actual command to execute
cd $DIR_ARG
$REST_ARG
Of course, you have to put these two scripts in the PATH on all your nodes (maybe ~/bin/).
Hope that's helpful.