How to send a message from Zabbix to Telegram? - bash

I'm having problems getting Zabbix notifications into the Telegram messenger. I have followed several guides for this, but without success. For example, I used the solution below. It works from bash, but I cannot send anything from Zabbix:
export to=$1;
export subject=$2;
export body=$3;
tgpath=/usr/src/tg/zabbix
cd ${tgpath}
(sleep 5; echo "msg $to $subject $body"; echo "safe_quit") |
${tgpath}/telegram-cli -k /etc/telegram-cli/mykey.pub -W
The telegram-cli -e option does not work correctly with a login name or with the user#XXXXXX format.
I don't want to use an API to send the message.
Thanks for any help.

Your script does not match the one from the blog post.
The steps are:
0 - Compile
cd /usr/src
git clone --recursive https://github.com/vysheng/tg.git
cd tg
./configure
make
mkdir viacron
cp bin/telegram-cli viacron/
cp tg-server.pub viacron/
cd viacron
1 - Create a file /usr/src/tg/viacron/telegram.config and put this in it:
default_profile = "viacron";
viacron = {
config_directory = "/usr/src/tg/viacron/";
};
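Before wiring anything into Zabbix, you can sanity-check this profile by hand. A quick test along these lines should run the client with the viacron profile (dialog_list just prints your chats):
cd /usr/src/tg/viacron
./telegram-cli -k tg-server.pub -c telegram.config -W -e "dialog_list"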
2 - Create a file /usr/src/tg/viacron/telegram_standalone.sh and put this in it:
#!/bin/bash
MAIN_DIRECTORY="/usr/src/tg/viacron/"
USER=$1
SUBJECT=$2
TEXT=$3
cd "$MAIN_DIRECTORY"
if [[ $? -ne 0 ]]; then
echo "Error to enter in the main directory"
exit 1
fi
./telegram-cli -k tg-server.pub -c telegram.config -WR -e "msg $USER $SUBJECT" || exit 1
exit 0
3 - Change permissions:
chmod +x /usr/src/tg/viacron/telegram_standalone.sh
chown -R yourUser: /usr/src/tg/
4 - Test:
/usr/src/tg/viacron/telegram_standalone.sh user#12345 "GNU is not unix"
5 - Put AlertScriptsPath=/usr/src/tg/viacron/ in zabbix_server.conf and restart the server
6 - In Zabbix, add a new media type with the script name telegram_standalone.sh
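For context, Zabbix invokes an alert script with the recipient, the subject, and the message body as positional arguments (in newer versions these are configured explicitly as the script parameters {ALERT.SENDTO}, {ALERT.SUBJECT} and {ALERT.MESSAGE}), which is what $1, $2 and $3 in telegram_standalone.sh receive. The server-side call is effectively this (illustrative values):
/usr/src/tg/viacron/telegram_standalone.sh "user#12345" "PROBLEM: High CPU on web01" "full alert text"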
More information at https://gist.github.com/gnumoksha/a95f237d82733ce1f748 and http://tobias.ws/blog/zabbix-com-notificacoes-pelo-telegram/

Telegram is now supported by Zabbix out of the box; see:
https://www.zabbix.com/integrations/telegram
For the extra settings, check:
https://youtu.be/TpP6NpS9jjg?t=143

Related

Make Bash Script Recognize It Is Already Running

Question
How can I instruct my bash script not to attempt to reconnect to my rsync daemon if the process lock file already exists (so as to prevent the script from attempting to create new connections indefinitely after the first connection has already been made)?
This is an example of one of my rsync-daemon wrapper scripts:
#!/bin/sh
#
#
while [ 1 ]
do
cputool --load-limit 7.5 -- nice -n -15 rsync -avxP --no-i-r --rsync-path="rsync" --log-file=/var/log/rsync-home.log --exclude 'snap' --exclude 'lost+found' --exclude=".*" --exclude=".*/" 127.0.0.1::home /media/username/external/home-files-only && sync && echo 3 > /proc/sys/vm/drop_caches
if [ "$?" = "0" ] ; then
echo "rsync completed normally"
exit
else
echo "Rsync failure. Backing off and retrying..."
sleep 10
fi
done
#end of shell script
This is my /etc/rsyncd.conf:
[home]
path = /home/username
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
read only = yes
# Data source information
max connections = 1
lock file = /var/run/rsyncd-home.lock
[prod-bkup]
path = /media/username/external/Server-Backups/Prod/today
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
# Don't allow to modify the source files
read only = yes
max connections = 1
lock file = /var/run/rsyncd-prod-bkup.lock
[test-bkup]
path = /media/username/external/Server-Backups/Test/today
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
# Don't allow to modify the source files
read only = yes
max connections = 1
lock file = /var/run/rsyncd-test-bkup.lock
[VminRoot2]
path = /root/VDI-Files
list = yes
use chroot = false
strict modes = false
uid = root
gid = root
# Don't allow to modify the source files
read only = yes
max connections = 1
lock file = /var/run/rsyncd-VminRoot2.lock
Thanks to @james-brown I now have multiple ways to ensure my script runs once, correctly.
Solution 1 (quick & dirty):
flock -n <lock file> <script>
Or in my case, using this command to execute my cron job:
flock -n /var/run/rsyncd-home.lock /path/to/my_script.sh
Caveat: this leaves your script vulnerable to stale lock files that may prevent execution at the next interval.
Solution 2:
So I used a bullet-proof method (so I think; I invite folks to correct my understanding, if need be).
First, I did apt install procmail (which provides the lockfile command used below; -r 0 means no retries, -l 3600 sets a one-hour lock timeout), then removed/commented out the two lines below in my /etc/rsyncd.conf and ran systemctl restart rsync:
#max connections = 1
#lock file = /var/run/rsyncd-home.lock
From there I edited /usr/local/bin/backupscript.sh as follows:
#!/bin/bash
#
LOCK=/var/run/rsyncd-home.lock
remove_lock()
{
rm -f "$LOCK"
}
another_instance()
{
echo "There is another instance running, exiting"
exit 1
}
lockfile -r 0 -l 3600 "$LOCK" || another_instance
trap remove_lock EXIT
#new using rsyncd & perpetual restart
while [ 1 ]
do
cputool --load-limit 7.5 -- nice -n -15 rsync -avxP --no-i-r --rsync-path="rsync" --log-file=/var/log/rsync-home.log --exclude 'snap' --exclude="Variety Images" --exclude="Downloads/WebDev/Vmin-Vbox" --exclude 'Downloads/WebDev/Win10-Vbox' --exclude="Videos/other" --exclude 'lost+found' --exclude=".*" --exclude=".*/" 127.0.0.1::home /media/username/external/home-files-only && sync && echo 3 > /proc/sys/vm/drop_caches
if [ "$?" = "0" ] ; then
echo "rsync completed normally"
exit
else
echo "Rsync failure. Backing off and retrying..."
sleep 10
fi
done
#end of shell script
PRESTO:
The script will only connect to the rsync daemon once, it will reconnect on dropped connections thanks to the while loop, and there is no danger of stale lock files interrupting my backup process at future intervals (i.e. problem solved).
Very useful reference:
https://www.baeldung.com/linux/bash-ensure-instance-running
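For reference, a common in-script variant of flock sidesteps the stale-file caveat entirely: the kernel releases the lock when the process exits, whether or not the lock file is left behind. A minimal sketch (the descriptor number 200 and the lock path are arbitrary choices):
#!/bin/bash
LOCK=/var/run/rsyncd-home.flock
exec 200>"$LOCK"                 # open (or create) the lock file on fd 200
flock -n 200 || { echo "There is another instance running, exiting"; exit 1; }
# ... the rsync while-loop goes here; the lock is released automatically on exit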

Cron job - exported file emailed is blank

I am receiving the email from the following cron job, but with a blank file attached. I have checked the /home/XXXXXX/public_html/tmp/ directory and a good tqhsa_hikashop_order.txt file is created in that /tmp/ folder, but the one attached to the email I receive is empty.
mysql --database=XXXXXXXX_jml_6UCHadruwR8veju3 -u XXXXXXXX_8eh6Nu --password=fRa6r7wRerEtuzud -B -e "SELECT order_number, order_created, order_invoice_number, order_full_price, order_discount_code, order_discount_price, order_payment_method, order_payment_price FROM tqhsa_hikashop_order;" > /home/XXXXXXXX/public_html/tmp/tqhsa_hikashop_order.txt | mail -s "Daily Discount Table Dump" -a /home/XXXXXXXX/public_html/tmp/tqhsa_hikashop_order.txt luke@example.com
Any ideas why the good text file is not being attached?
You need to ensure that your first command finishes before running the second, so that the file exists on disk.
So essentially, replace the | with &&:
mysql \
--database=XXXXXXXX_jml_6UCHadruwR8veju3 \
-u XXXXXXXX_8eh6Nu \
--password=fRa6r7wRerEtuzud \
-B \
-e "SELECT order_number, order_created, order_invoice_number, order_full_price, order_discount_code, order_discount_price, order_payment_method, order_payment_price FROM tqhsa_hikashop_order;" > /home/XXXXXXXX/public_html/tmp/tqhsa_hikashop_order.txt \
&& \
mail \
-s "Daily Discount Table Dump" \
-a /home/XXXXXXXX/public_html/tmp/tqhsa_hikashop_order.txt luke@example.com
To illustrate using an example:
myuser@myshell:~ $ echo "testing" > testy.txt | cat testy.txt
cat: testy.txt: No such file or directory
myuser@myshell:~ $ echo "testing" > testy.txt && cat testy.txt
testing

Batch processing: file name comparison error

I have written a script (Cifti_subject_fmri) that checks whether file names match across two folders and, if they do, executes a set of instructions:
#!/bin/bash -- fix_mni_paths
source activate ciftify_v1.0.0
export SUBJECTS_DIR=/scratch/m/mchakrav/dev/functional_data
export HCP_DATA=/scratch/m/mchakrav/dev/tCDS_ciftify
## make the $SUBJECTS_DIR if it does not already exist
mkdir -p ${HCP_DATA}
SUBJECTS=`cd $SUBJECTS_DIR; ls -1d *` ## list of my subjects
HCP=`cd $HCP_DATA; ls -1d *` ## List of HCP Subjects
cd $HCP_DATA
## submit the files to the queue
for i in $SUBJECTS;do
for j in $HCP ; do
if [[ $i == $j ]];then
parallel "echo ciftify_subject_fmri $i/filtered_func_data.nii.gz $j fMRI " ::: $SUBJECTS |qbatch --walltime '05:00:00' --ppj 8 -c 4 -j 4 -N ciftify_subject_fmri -
fi
done
done
When I run this code on the cluster I get an error that says:
./Cifti_subject_fmri: [[AS1: command not found
The ciftify_subject_fmri command is part of the ciftify toolbox; to execute, it requires the following arguments:
ciftify_subject_fmri <func.nii.gz> <Subject> <NameOffMRI>
I have 33 subjects [AS1-AS33], each with its own func.nii.gz file located in the SUBJECTS directory; the results need to be populated in the HCP directory, and fMRI is the name of the file format.
Could someone kindly let me know why I am getting this error in the loop?
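For what it's worth, an error of the form [[AS1: command not found almost always means [[ is not separated from its operand by whitespace, so after $i expands the shell looks for a command literally named [[AS1. The snippet above has the spaces, so the copy on the cluster may differ. A minimal reproduction and fix:
i=AS1; j=AS1
if [[$i == $j ]]; then echo match; fi    # bash: [[AS1: command not found
if [[ $i == $j ]]; then echo match; fi   # correct: [[ and ]] are words and need surrounding spaces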

Issue concatenating files in a shell script

Sorry, I'm from Brazil and my English is not fluent.
I want to concatenate 20 files in a shell script using the cat command. However, when I run it from a script file, the content of the files is shown on the screen; when I run it directly from the terminal, it works perfectly.
This is my code below:
#!/usr/bin/ksh
set -x -a
. /PROD/INCLUDE/include.prod
DATE=`date +'%Y%m%d%H%M%S'`
FINAL_NAME=$1
# check if all paremeters are passed
if [ -z $FINAL_NAME ]; then
echo "Please pass the final name as parameter"
exit 1
fi
# concatenate files
cat $DIRFILE/AI6LM760_AI6_CF2_SLOTP01* $DIRFILE/AI6LM761_AI6_CF2_SLOTP02* $DIRFILE/AI6LM763_AI6_CF2_SLOTP04* \
$DIRFILE/AI6LM764_AI6_CF2_SLOTP05* $DIRFILE/AI6LM765_AI6_CF2_SLOTP06* $DIRFILE/AI6LM766_AI6_CF2_SLOTP07* \
$DIRFILE/AI6LM767_AI6_CF2_SLOTP08* $DIRFILE/AI6LM768_AI6_CF2_SLOTP09* $DIRFILE/AI6LM769_AI6_CF2_SLOTP10* \
$DIRFILE/AI6LM770_AI6_CF2_SLOTP11* $DIRFILE/AI6LM771_AI6_CF2_SLOTP12* $DIRFILE/AI6LM772_AI6_CF2_SLOTP13* \
$DIRFILE/AI6LM773_AI6_CF2_SLOTP14* $DIRFILE/AI6LM774_AI6_CF2_SLOTP15* $DIRFILE/AI6LM775_AI6_CF2_SLOTP16* \
$DIRFILE/AI6LM776_AI6_CF2_SLOTP17* $DIRFILE/AI6LM777_AI6_CF2_SLOTP18* $DIRFILE/AI6LM778_AI6_CF2_SLOTP19* \
$DIRFILE/AI6LM779_AI6_CF2_SLOTP20* > CF2_FINAL_TEMP
mv $DIRFILE/CF2_FINAL_TEMP $DIRFILE/$FINAL_NAME
I solved the problem by putting the cat block inside a function and redirecting its stdout to the final file.
Ex:
concatenate()
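The posted snippet is cut off; the pattern described is roughly the following, with the file list abbreviated to two globs for brevity:
concatenate()
{
cat $DIRFILE/AI6LM760_AI6_CF2_SLOTP01* $DIRFILE/AI6LM761_AI6_CF2_SLOTP02* # ...and the remaining SLOTP globs
}
concatenate > $DIRFILE/CF2_FINAL_TEMP
mv $DIRFILE/CF2_FINAL_TEMP $DIRFILE/$FINAL_NAME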

SQUID3 - using multiple auth_param like basic_ncsa_auth & basic_ldap_auth

I tried to set up squid3 with multiple auth_param entries. Basically, the first choice should be basic_ldap_auth, and if that doesn't return OK it should try basic_ncsa_auth with the same credentials. As far as I know Squid doesn't support this directly, but there is the possibility of using an "external" ACL:
auth_param basic program /usr/lib/squid3/basic_fake_auth
external_acl_type MultAuth %SRC %LOGIN %{Proxy-Authorization} /etc/squid3/multAuth.pl
acl extAuth external MultAuth
my "multAuth.pl"
use URI::Escape;
use MIME::Base64;
$|=1;
while (<>) {
($ip,$user,$auth) = split();
# Retrieve the password from the authentication header
$auth = uri_unescape($auth);
($type,$authData) = split(/ /, $auth);
$authString = decode_base64($authData);
($username,$password) = split(/:/, $authString);
# do the authentication and pass results back to Squid.
$ldap = `/bin/bash auth/ldap.sh`;
if ($ldap == "OK") {
print "OK";
}
$ncsa = `/bin/bash auth/ncsa.sh`;
if ($ncsa == "OK") {
print "OK";
} else {
print "ERR";
}
}
Now I am trying to run the "normal" shell commands for these auth methods from ncsa.sh and ldap.sh:
./basic_ldap_auth -R -b "dc=domain,dc=de" -D "CN=Administrator,CN=Users,DC=domain,DC=de" -w "password" -f sAMAccountName=%s -h domain.de
user password
and
./basic_ncsa_auth /etc/squid3/users
user password
Therefore I wrote auth/ncsa.sh:
#!/usr/bin/expect
eval spawn [lrange $argv 0 end]
expect ""
send [lindex $argv 1]
send '\r'
expect {
"OK" {
echo OK
exp_continue
}
"ERR" {
echo ERR
exp_continue
}
interact
with
./ncsa.sh "/usr/lib/squid3/basic_ncsa_auth /etc/squid3/users" "user password"
and I get the following error:
couldn't execute "/usr/lib/squid3/basic_ncsa_auth /etc/squid3/users": no such file or directory
while executing
"spawn {/usr/lib/squid3/basic_ncsa_auth /etc/squid3/users} {user password}"
("eval" body line 1)
invoked from within
"eval spawn [lrange $argv 0 end]"
(file "./ncsa.sh" line 2)
Besides this error, I am not sure how to pass the variables (username & password) forward, and I am also not sure how to answer the prompts, for example the user & password input for basic_ldap_auth.
Is there a nice way to solve this, or any other good approach?
Thanks!
FWIW, the following script helped me transition from passwd-based to LDAP-based authentication.
Contrary to your requirements, my script works the other way around: it first checks passwd, then LDAP.
#!/usr/bin/env bash
# multiple Squid basic auth checks
# originally posted here: https://github.com/HackerHarry/mSquidAuth
#
# credits
# https://stackoverflow.com/questions/24147067/verify-user-and-password-against-a-file-created-by-htpasswd/40131483
# https://stackoverflow.com/questions/38710483/how-to-stop-ldapsearch1-from-base64-encoding-userpassword-and-other-attributes
#
# requires ldap-utils, openssl and perl
# tested with Squid 4 using a "auth_param basic program /usr/lib/squid/mSquidAuth.sh" line
# authenticate first against squid password file
# if this fails, try LDAP (Active Directory) and also check group membership
# variables
# sLOGFILE=/var/log/squid/mSquidAuth.log
sPWDFILE="/etc/squid/passwd"
sLDAPHOST="ldaps://dc.domain.local:636"
sBASE="DC=domain,DC=local"
sLDS_OPTIONS="-o ldif-wrap=no -o nettimeout=7 -LLL -P3 -x "
sBINDDN="CN=LDAP-read-user,OU=Users,DC=domain,DC=local"
sBINDPW="read-user-password"
sGROUP="Proxy-Users"
# functions
function _grantAccess {
# echo "access granted - $sUSER" >>$sLOGFILE
echo "OK"
}
function _denyAccess {
# echo "access denied - $sUSER" >>$sLOGFILE
echo "ERR"
}
function _setUserAndPass {
local sAuth="$1"
local sOldIFS=$IFS
IFS=' '
set -- $sAuth
IFS=$sOldIFS
# set it globally
sUSER="$1"
sPASS="$2"
}
# loop
while (true); do
read -r sAUTH
sUSER=""
sPASS=""
sSALT=""
sUSERENTRY=""
sHASHEDPW=""
sUSERDN=""
iDNCOUNT=0
if [ -z "$sAUTH" ]; then
# echo "exiting" >>$sLOGFILE
exit 0
fi
_setUserAndPass "$sAUTH"
sUSERENTRY=$(grep -E "^${sUSER}:" "$sPWDFILE")
if [ -n "$sUSERENTRY" ]; then
sSALT=$(echo "$sUSERENTRY" | cut -d$ -f3)
if [ -n "$sSALT" ]; then
sHASHEDPW=$(openssl passwd -apr1 -salt "$sSALT" "$sPASS")
if [ "$sUSERENTRY" = "${sUSER}:${sHASHEDPW}" ]; then
_grantAccess
continue
fi
fi
fi
# LDAP is next
iDNCOUNT=$(ldapsearch $sLDS_OPTIONS -H "$sLDAPHOST" -D "$sBINDDN" -w "$sBINDPW" -b "$sBASE" "(|(sAMAccountName=${sUSER})(userPrincipalName=${sUSER}))" dn 2>/dev/null | grep -cE 'dn::? ')
if [ $iDNCOUNT != 1 ]; then
# user needs a unique account
_denyAccess
continue
fi
# get user's DN
# we need the extra grep in case we get lines back starting with "# refldap" :/
sUSERDN=$(ldapsearch $sLDS_OPTIONS -H "$sLDAPHOST" -D "$sBINDDN" -w "$sBINDPW" -b "$sBASE" "(|(sAMAccountName=${sUSER})(userPrincipalName=${sUSER}))" dn 2>/dev/null | perl -MMIME::Base64 -n -00 -e 's/\n +//g;s/(?<=:: )(\S+)/decode_base64($1)/eg;print' | grep -E 'dn::? ' | sed -r 's/dn::? //')
# try and bind using that DN to check password validity
# also test if that user is member of a particular group
# backslash in DN needs special treatment
if ldapsearch $sLDS_OPTIONS -H "$sLDAPHOST" -D "$sUSERDN" -w "$sPASS" -b "$sBASE" "name=${sGROUP}" member 2>/dev/null | perl -MMIME::Base64 -n -00 -e 's/\n +//g;s/(?<=:: )(\S+)/decode_base64($1)/eg;print' | grep -q "${sUSERDN/\\/\\\\}"; then
_grantAccess
continue
fi
_denyAccess
done
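Squid talks to a basic-auth helper over stdin/stdout: for every request it writes a "username password" line and expects OK or ERR in reply, which is exactly what the read loop above implements. You can therefore test the script by hand before pointing Squid at it; an illustrative session (credentials are placeholders):
$ ./mSquidAuth.sh
alice wrong-password
ERR
alice secret-password
OK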
