I created a bash script to transfer my zones between my primary and secondary DNS server.
It downloads my zone list from the primary, checks for any new zones, and then downloads those zone files into the zone directory and adds them to the .local file for BIND.
The problem I have is that if the zone file does not exist, the script enters the details into the .local file regardless of whether that configuration already exists.
Can someone help me distinguish between zones that already exist in the config and those where it should simply download the zone file?
I have pasted my script below; if anyone has any questions about how it works, please feel free to ask.
#!/bin/sh
NAMED="/etc/bind/named.conf.local"
TMPNAMED="/tmp/zns-441245.temp"
TMPZONEFILE="/tmp/zones.txt"
TMP="/tmp/zns-732.temp"
ZONELOCATION="/var/cache/bind"
IGNORE=`cat ignore.txt`
logger DNS Update script running...
echo -n "Checking for new named.conf... "
wget -q http://91.121.75.205:10801/named/named.conf -O $TMPNAMED
if [ -e $TMPNAMED ]
then
    echo "done."
else
    echo "no new data!"
    exit
fi
echo -n "Generating zone names... "
grep "^zone" $TMPNAMED | cut -d " " -f "2" | cut -d "\"" -f 2 > $TMPZONEFILE
sed '1,5d' $TMPZONEFILE > $TMP
mv $TMP $TMPZONEFILE
echo "done. ("$TMPZONEFILE")"
echo "Generating zone info... "
grep -vf ignore.txt $TMPZONEFILE | while read ZONE; do
    echo -n "Checking for $ZONELOCATION/$ZONE.db "
    if [ -e $ZONELOCATION/$ZONE.db ]
    then
        echo "[ exists ]"
    else
        export updates="yes"
        echo "[ doesn't exist ]"
        echo "New zone available ($ZONE)... "
        echo "zone \"$ZONE\" {
type slave;
file \"$ZONELOCATION/$ZONE.db\";
masters { 91.121.75.205; };
allow-notify { 91.121.75.205; };
};" >> $NAMED
    fi
done
echo "Updating Bind configuration... "
/etc/init.d/bind9 restart
rm $TMPZONEFILE
rm $TMPNAMED
One problem may be that your wget creates the output file regardless of whether there's a source file, so checking for existence will always succeed.
if [ -s $TMPNAMED ]
then
    echo "done." # file exists AND has data
else
    echo "no new data!"
    exit
fi
will test to see if it's empty or non-existent and exit if so. This may be an issue with your if [ -e $ZONELOCATION/$ZONE.db ] as well.
sed or awk could do all of this in one line:
grep "^zone" $TMPNAMED | cut -d " " -f "2" | cut -d "\"" -f 2 > $TMPZONEFILE
sed '1,5d' $TMPZONEFILE > $TMP
but I would need to see some sample data to offer a solution.
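For example, assuming the zone name is always the second double-quoted field on lines beginning with "zone", and that the first five entries should still be skipped, a single awk call along these lines might do it:
awk -F'"' '/^zone/ { if (++n > 5) print $2 }' "$TMPNAMED" > "$TMPZONEFILE"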
Simplified quoting:
echo "done. ($TMPZONEFILE)"
You're not using the IGNORE variable or the updates variable, and I don't see any reason to export the latter. Also, if you are relying on updates elsewhere, its value won't survive once the while loop exits, since piping something (grep in this case) into while sets up a subshell. It may be better to do one of these:
Bash:
while ...
do
    ...
done < <(grep -vf ignore.txt $TMPZONEFILE)
sh:
grep -vf ignore.txt $TMPZONEFILE > tmp.out
while ...
do
    ...
done < tmp.out
I recommend using mktemp or tempfile to create temporary files, by the way.
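For example, a sketch of that idea (the templates here are arbitrary):
TMPNAMED=$(mktemp /tmp/zns-named.XXXXXX)
TMPZONEFILE=$(mktemp /tmp/zones.XXXXXX)
trap 'rm -f "$TMPNAMED" "$TMPZONEFILE"' EXIT
That gives you unique names per run and cleans the files up even if the script exits early.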
This might be more readable and allows you to include quotes without having to escape them:
cat << EOF >> "$NAMED"
zone "$ZONE" {
type slave;
file "$ZONELOCATION/$ZONE.db";
masters { 91.121.75.205; };
allow-notify { 91.121.75.205; };
};
EOF
It's always a good habit to quote variables that contain filenames.
If you're going to all of that trouble to synchronise named.conf, you might just as well rsync the whole config, including the zone files, and not bother using zone transfers between primary and secondary.
It's by no means mandatory to use AXFR to slave servers. If you've got administrative control over all of the servers for a zone it's quite acceptable to treat them all as masters.
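If you go that route, a rough sketch run from the primary could be as simple as this (the hostname and the reload step are placeholders, adjust to your setup):
rsync -az /etc/bind/ secondary.example.com:/etc/bind/
rsync -az /var/cache/bind/ secondary.example.com:/var/cache/bind/
ssh secondary.example.com '/etc/init.d/bind9 reload'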
I am trying to take an nmap scan result, determine the HTTP ports (http, https, http-alt, ...) and capture their IPs and ports in order to automatically perform web app scans.
I have my nmap results in grepable format and am using grep to delete any lines that do not contain the string "http", but I am now unsure how to proceed.
Host: 127.0.0.1 (localhost) Ports: 3390/open/tcp//dsc///, 5901/open/tcp//vnc-1///, 8000/open/tcp//http-alt/// Ignored State: closed (65532)
This is my current result. From this I can get the IP of hosts with an HTTP server open by using the cut command and taking the second field, which solves the first part of my problem.
But now I am looking for a way to get only (from the above example)
8000/open/tcp//http-alt///
(NB: I'm not looking to get it just for this specific case; using
cut -f 3 -d "," will work for this case, but if the http server were in the first field it would not work.)
after which I can use the cut command to get the port and then add it to a file with the IP, resulting in
127.0.0.1:8000
Could anyone advise a good way to do this?
Below is the code of my simple bash script for doing a basic scan of all ports and then a more advanced one based on the open ports found. The next step and objective is to automatically scan identified web apps with a directory scan and a nikto scan.
#!/bin/bash
echo "Welcome to the quick lil tool. This runs a basic nmap scan, collects open ports and does a more advanced scan. reducing the time needed"
echo -e "\nUsage: ./getPorts.sh [Hosts]\n"
if [ $# -eq 0 ]
then
    echo "No argument specified. Usage: ./getPorts.sh [Host or host file]"
    exit 1
fi
if [[ "$EUID" -ne 0 ]]; then
    echo "Not running as root"
    exit 1
fi
nmap -iL $1 -p- -oA results
#Replace input file with gnmap scan, It will generate a list of all open ports
cat results.gnmap |awk -F'[/ ]' '{h=$2; for(i=1;i<=NF;i++){if($i=="open"){print h,":",$(i-1)}}}'| awk -F ':' '{print $2}' | sed -z 's/\n/,/g;s/,$/\n/' >> ports.list
#more advanced nmap scan
ports=$(cat ports.list)
echo $ports
nmap -p $ports -sC -sV -iL $1
EDIT: Found a way. Not sure why I was so focused on using the gnmap format for this; if I use the regular .nmap format, I can simply grep the lines containing http and use cut to get the first field.
(cat results.nmap | grep 'http' | cut -d "/" -f 1)
EDIT 2: I realised the method mentioned in my first edit is not optimal when processing multiple results, as I then have a list of IPs from the .nmap and a list of ports from the .gnmap. I have found a good solution to my problem using a single file; see below:
#!/bin/bash
httpalt=$(cat test.gnmap | awk '/\/http-alt\// {for(i=5;i<=NF;i++)if($i~"/open/.+/http-alt/"){sub("/.*","",$i); print "http://"$2":"$i}}')
if [ -z "$httpalt" ]
then
    echo "No http-alt servers found"
else
    echo "http-alt servers found"
    echo $httpalt
    printf "\n"
fi
http=$(cat test.gnmap | awk '/\/http\// {for(i=5;i<=NF;i++)if($i~"/open/.+/http/"){sub("/.*","",$i);print "http://"$2":"$i}}')
if [ -z "$http" ]
then
    echo "No http servers found"
else
    echo "http servers found"
    echo $http
    printf "\n"
fi
https=$(cat test.gnmap | awk '/\/https\// {for(i=5;i<=NF;i++)if($i~"/open/.+/https/"){sub("/.*","",$i); print "https://"$2":"$i}}')
if [ -z "$https" ]
then
    echo "No https servers found"
else
    echo "https servers found"
    echo $https
    printf "\n"
fi
echo ----
printf "All ip:webapps \n"
webserver=$(echo "$httpalt $http $https" | sed -e 's/\s\+/,/g'|sed -z 's/\n/,/g;s/,$/\n/')
if [[ ${webserver::1} == "," ]]
then
    webserver="${webserver#?}"
else
    echo 0
fi
for webservers in $webserver; do
    echo $webservers
done
echo $https
https=$(echo "$https" | sed -e 's/\s\+/,/g'|sed -z 's/\n/,/g;s/,$/\n/')
echo $https
mkdir https
mkdir ./https/nikto/
mkdir ./https/dirb/
for onehttps in ${https//,/ }
do
    echo "Performing Dirb and nikto for https"
    dirb $onehttps > ./https/dirb/https_dirb
    nikto -url $onehttps > ./https/nikto/https_nikto
done
mkdir http
mkdir ./http/nikto
mkdir ./http/dirb/
for onehttp in ${http//,/ }
do
    echo $onehttp
    echo "Performing Dirb for http"
    dirb $onehttp >> ./http/dirb/http_dirb
    nikto -url $onehttp >> ./http/nikto/http_nikto
done
mkdir httpalt
mkdir httpalt/nikto/
mkdir httpalt/dirb/
for onehttpalt in ${httpalt//,/ }
do
    echo "Performing Dirb for http-alt"
    dirb $onehttpalt >> ./httpalt/dirb/httpalt_dirb
    nikto -url $onehttpalt >> ./httpalt/nikto/httpalt_nikto
done
This will check for any http, https, and http-alt servers, store them in variables, check for duplicates, and remove any leading commas. It is far from perfect, but it is a good solution for now!
Just want to share a brilliant open source tool on GitHub that can be used to easily parse NMAP XML files.
https://github.com/honze-net/nmap-query-xml
I use some of the python code to extract http/https URLs from the nmap xml file.
# pip3 install python-libnmap
import sys

from libnmap.parser import NmapParser

def extract_http_urls_from_nmap_xml(file):
    try:
        report = NmapParser.parse_fromfile(file)
        urls = []
    except IOError:
        print("Error: Nmap XML file %s not found. Quitting!" % file)
        sys.exit(1)
    for host in report.hosts:
        for service in host.services:
            filtered_services = "http,http-alt,http-mgmt,http-proxy,http-rpc-epmap,https,https-alt,https-wmap,http-wmap,httpx"
            if (service.state == "open") and (service.service in filtered_services.split(",")):
                line = "{service}{s}://{hostname}:{port}"
                line = line.replace("{xmlfile}", file)
                line = line.replace("{hostname}", host.address if not host.hostnames else host.hostnames[0])  # TODO: Fix naive code.
                line = line.replace("{hostnames}", host.address if not host.hostnames else ", ".join(list(set(host.hostnames))))  # TODO: Fix naive code.
                line = line.replace("{ip}", host.address)
                line = line.replace("{service}", service.service)
                line = line.replace("{s}", "s" if service.tunnel == "ssl" else "")
                line = line.replace("{protocol}", service.protocol)
                line = line.replace("{port}", str(service.port))
                line = line.replace("{state}", str(service.state))
                line = line.replace("-alt", "")
                line = line.replace("-mgmt", "")
                line = line.replace("-proxy", "")
                line = line.replace("-rpc-epmap", "")
                line = line.replace("-wmap", "")
                line = line.replace("httpx", "http")
                urls.append(line)
    return list(dict.fromkeys(urls))
printf "Host: 127.0.0.1 (localhost) Ports: 3390/open/tcp//dsc///, 5901/open/tcp//vnc-1///, 8000/open/tcp//http-alt/// Ignored State: closed (65532)" > file
cat file | tr -s ' ' | tr ',' '\n' | sed s'#^ ##g' > f2
string=$(sed -n '3p' f2 | cut -d' ' -f1)
It is only horizontal search that is difficult; vertical is easy. You can get any string out of any text you like, as long as you can get the string onto its own line and then determine which line you need to print.
You only need complex regular expressions if you are relying exclusively on horizontal search. In almost all cases, as long as your substring is on its own line, cut can take you the rest of the way.
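Continuing the example above, the host address sits in the first line of the same record, so a couple more cuts give the ip:port pair the question asks for (a sketch reusing the same temporary file):
ip=$(sed -n '1p' f2 | cut -d' ' -f2)
port=$(cut -d'/' -f1 <<< "$string")
echo "${ip}:${port}"   # 127.0.0.1:8000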
Good day,
I need your help creating the following script.
Every day a teacher uploads files in the following format:
STUDENT_ACCOUNTS_20200217074343-20200217.xlsx
STUDENT_MARKS_20200217074343-20200217.xlsx
STUNDENT_HOMEWORKS_20200217074343-20200217.xlsx
STUDENT_PHYSICAL_20200217074343-20200217.xlsx
SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx
[file_name+todaydatetime-todaydate.xlsx]
But sometimes the teacher does not upload these files, and we then need to manually rename the files received for the previous date and copy each file to its own folder, like:
cp STUDENT_ACCOUNTS_20200217074343-20200217.xlsx /incoming/A1/STUDENT_ACCOUNTS_20200318074343-20200318.xlsx
cp STUDENT_MARKS_20200217074343-20200217.xlsx /incoming/B1/STUDENT_ACCOUNTS_20200318074343-20200318.xlsx
.............
cp SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx /incoming/F1/SUBSCRIBED_STUDENTS_20200318074343-20200318.xlsx.
In short: take the files from the previous date and copy them to specific folders with a new timestamp.
#!/bin/bash
cd /home/incoming/
date=$(date '+%Y%m%d')
previousdate="$( date --date=yesterday '+%Y%m%d' )"
cp /home/incoming/SUBSCRIBED_STUDENTS_"$previousdate".xlsx /incoming/F1/SUBSCRIBED_STUDENTS_"$date".xlsx
There could also be a case where the teacher uploads one file but not the others; how do I check for existing files?
Thanks for reading this. If you can help me I will be really thankful; you will save me plenty of manual work.
The process can be automated completely if your directory structure is known. If it follows some kind of pattern, do mention it here.
For the timing, this may be helpful:
Filename "tscp"
#
# Stands for timestamped cp
#
tscp() {
    local file1=$1 ; shift
    local to_dir=$1 ; shift
    local force_copy=$1 ; shift
    local current_date="$(date '+%Y%m%d')"
    if [ "${force_copy}" == "--force" ] ; then
        cp "${file1}" "${to_dir}/$(basename ${file1%-*})-${current_date}.xlsx"
    else
        cp -n "${file1}" "${to_dir}/$( basename ${file1%-*})-${current_date}.xlsx"
    fi
}
tscp "$@"
Its usage is as follows:
tscp source to_directory [--force]
Basically the script takes 2 arguments and the 3rd one is optional.
The first arg is the source file path and the second is the directory path to which you want to copy (. if the same directory).
By default the copy is made if and only if the destination file doesn't exist.
If you want to overwrite the destination file, pass the third arg --force.
Again, this can be refined much much more based on details provided.
Sample usage for now:
bash tscp SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx /incoming/F1/
will copy SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx to directory /incoming/F1/ with updated date if it doesn't exist yet.
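To illustrate the renaming that tscp performs, here is just the parameter expansion in isolation; note that it keeps the original long timestamp and only refreshes the trailing date:
file1=SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx
current_date=20200318
echo "$(basename "${file1%-*}")-${current_date}.xlsx"
# SUBSCRIBED_STUDENTS_20200217074343-20200318.xlsx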
UPDATE:
Give this a go:
#! /usr/bin/env bash
printf_err() {
    ERR_COLOR='\033[0;31m'
    NORMAL_COLOR='\033[0m'
    printf "${ERR_COLOR}$1${NORMAL_COLOR}" ; shift
    printf "${ERR_COLOR}%s${NORMAL_COLOR}\n" "$@" >&2
}

alias printf_err='printf_err "Line ${LINENO}: " '
shopt -s expand_aliases
usage() {
    printf_err \
        "" \
        "usage: ${BASH_SOURCE##*/} " \
        " -f copy_data_file" \
        " -d days_before" \
        " -m months_before" \
        " -o" \
        " -y years_before" \
        " -r " \
        " -t to_dir" \
        >&2
    exit 1
}
fullpath() {
    local path="$1" ; shift
    local abs_path
    if [ -z "${path}" ] ; then
        printf_err "${BASH_SOURCE}: Line ${LINENO}: param1(path) is empty"
        return 1
    fi
    abs_path="$( cd "$( dirname "${path}" )" ; pwd )/$( basename ${path} )"
    printf "${abs_path}"
}
OVERWRITE=0
REVIEW=0
COPYSCRIPT="$( mktemp "/tmp/copyscriptXXXXX" )"
while getopts 'f:d:m:y:t:or' option
do
    case "${option}" in
        d)
            DAYS="${OPTARG}"
            ;;
        f)
            INPUT_FILE="${OPTARG}"
            ;;
        m)
            MONTHS="${OPTARG}"
            ;;
        t)
            TO_DIR="${OPTARG}"
            ;;
        y)
            YEARS="${OPTARG}"
            ;;
        o)
            OVERWRITE=1
            ;;
        r)
            REVIEW=1
            COPYSCRIPT="copyscript"
            ;;
        *)
            usage
            ;;
    esac
done
INPUT_FILE=${INPUT_FILE:-$1}
TO_DIR=${TO_DIR:-$2}
if [ ! -f "${INPUT_FILE}" ] ; then
    printf_err "No such file ${INPUT_FILE}"
    usage
fi
DAYS="${DAYS:-1}"
MONTHS="${MONTHS:-0}"
YEARS="${YEARS:-0}"
if date -v -1d > /dev/null 2>&1; then
    # BSD date
    previous_date="$( date -v -${DAYS}d -v -${MONTHS}m -v -${YEARS}y '+%Y%m%d' )"
else
    # GNU date
    previous_date="$( date --date="-${DAYS} days -${MONTHS} months -${YEARS} years" '+%Y%m%d' )"
fi
current_date="$( date '+%Y%m%d' )"
tmpfile="$( mktemp "/tmp/dstnamesXXXXX" )"
awk -v to_replace="${previous_date}" -v replaced="${current_date}" '{
    gsub(to_replace, replaced, $0)
    print
}' ${INPUT_FILE} > "${tmpfile}"
paste ${INPUT_FILE} "${tmpfile}" |
while IFS=$'\t' read -r -a arr
do
    src=${arr[0]}
    dst=${arr[1]}
    opt=${arr[2]}
    if [ -n "${opt}" ] ; then
        if [ ! -d "${dst}" ] ; then
            printf_err "No such directory ${dst}"
            usage
        fi
        dst="${dst}/$( basename "${opt}" )"
    else
        if [ ! -d "${TO_DIR}" ] ; then
            printf_err "No such directory ${TO_DIR}"
            usage
        fi
        dst="${TO_DIR}/$( basename "${dst}" )"
    fi
    src=$( fullpath "${src}" )
    dst=$( fullpath "${dst}" )
    if [ "${OVERWRITE}" -eq 1 ] ; then
        echo "cp ${src} ${dst}"
    else
        echo "cp -n ${src} ${dst}"
    fi
done > "${COPYSCRIPT}"
if [ "${REVIEW}" -eq 0 ] ; then
    ${BASH} "${COPYSCRIPT}"
    rm "${COPYSCRIPT}"
fi
rm "${tmpfile}"
Steps:
Store the above script in a file, say `tscp`.
Now you need to create the input file for it.
From your example, a sample input file can look like:
STUDENT_ACCOUNTS_20200217074343-20200217.xlsx /incoming/A1/
STUDENT_MARKS_20200217074343-20200217.xlsx /incoming/B1/
STUNDENT_HOMEWORKS_20200217074343-20200217.xlsx
STUDENT_PHYSICAL_20200217074343-20200217.xlsx
SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx /incoming/FI/
where the first part is the source file name and, after a "tab" (it must really be a tab), you mention the destination directory. These paths should be either absolute or relative to the directory where you are executing the script. You may omit the destination directory if all files are to be sent to the same directory (discussed later).
Let's say you named this file `file`.
Also, you don't really have to type all that. If you have these files in the current directory, just do this:
ls -1 > file
(the above is ls followed by the digit one, not the letter l.)
Now we have the `file` from above, in which we mentioned a destination directory only for some entries.
Let's say we want to copy all the others to `/incoming/x`, and that it exists.
Now script is to be executed like:
bash tscp -f file -t /incoming/x -r
where `/incoming/x` is the default directory, i.e. when no other directory is mentioned in `file`, your files are copied to this directory.
Now, in the current directory, a script named `copyscript` will be generated which contains the `cp` commands to copy all files. You can open and review `copyscript`, and if the copying looks right, go ahead and run:
bash copyscript
which will copy all the files and then you can:
rm copyscript
You need not generate `copyscript` at all and can go straight for the copy like:
bash tscp -f file -t /incoming/x
which won't generate any copyscript and will copy straight away.
Previously, `-r` caused the generation of `copyscript`.
I would recommend using the version with `-r`, because it is a little safer and you can be sure that the right copies are being made.
By default it would check for the previous day and rename to current date, but you can override that behaviour as:
bash tscp -f file -t /incoming/x -d 3
`-d 3` would look for files from 3 days back in `file`.
By default copies won't overwrite, i.e. if the file at the destination already exists, the copy won't be made.
If you want to overwrite, add flag `-o`.
As a conclusion, I would advise using:
bash tscp -f file -r
where `file` contains tab-separated values like above for all entries.
Also, adding tscp to your PATH would be a good idea once you are sure it works OK.
Also, the script was made on a Mac and there is always a chance of version clashes among the tools used. I would suggest trying the script on some sample data first to make sure it works correctly on your machine.
I was wondering if there is a way to save the current package selections for cygwin for a later reinstall or porting on a different system.
It would be really great to:
run a command to export a list of installed packages on an existing system
pass the list to the installer on another system in a way such as setup-x86_64.exe --list list.txt
I don't think the setup has such a switch, so any type of script or batch file working in this direction would be just fine.
Since the number of needed packages is very high, it has to run unattended to count as a good solution!
What would be the best way to accomplish a quick reinstall like this?
The list of installed packages is available with cygcheck. Setup does not accept a list option, but you can specify the list with -P.
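As a minimal sketch of that idea (assuming you run it from a Cygwin bash prompt and setup-x86_64.exe is on your PATH), you could feed the cygcheck output straight into -P:
setup-x86_64.exe -P "$(cygcheck -cd | sed -e '1,2d' | awk '{printf "%s%s", sep, $1; sep=","}')"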
The following code, when used with the -A option, will create
a crafted cyg-reinstall-${Arch}.bat batch file to install all
packages existing on a system.
#!/bin/bash
# Create a batch file to reinstall using setup-{ARCH}.exe
# all packages reported as incomplete
print_error=1
if [ $# -eq 1 ]
then
    if [ $1 == "-I" ]
    then
        lista=$(mktemp)
        cygcheck -c | grep "Incomplete" > $lista
        print_error=0
    fi
    if [ $1 == "-A" ]
    then
        lista=$(mktemp)
        cygcheck -cd | sed -e "1,2d" > $lista
        print_error=0
    fi
fi
if [ $# -eq 2 ]
then
    if [ $1 == "-f" ]
    then
        lista=$2
        print_error=0
    fi
fi
# error message if options are incorrect.
if [ $print_error -eq 1 ]
then
    echo -n "Usage : " $(basename $0)
    echo " [ -A | -I | -f filelist ]"
    echo " create cyg-reinstall-{ARCH}.bat from"
    echo " options"
    echo " -A : All packages as reported by cygcheck"
    echo " -I : incomplete packages as reported by cygcheck"
    echo " -f : packages in filelist (one per row)"
    exit 1
fi
if [ $(arch) == "x86_64" ]
then
    A="x86_64"
else
    A="x86"
fi
# writing header
echo -n -e "setup-${A}.exe " > cyg-reinstall-${A}.bat
# option -x remove and -P install
# for re-install packages we need both
if [ $1 == "-I" ]
then
    awk 'BEGIN{printf(" -x ")} NR==1{printf $1; next} {printf ",%s", $1}' ${lista} >> cyg-reinstall-${A}.bat
fi
awk 'BEGIN{printf(" -P ")} NR==1{printf $1; next} {printf ",%s", $1} END { printf "\r\n pause "}' ${lista} >> cyg-reinstall-${A}.bat
# execution permission for the script
chmod +x cyg-reinstall-${A}.bat
I recognize that this question is several years old, but I've often found useful information on here from even longer ago, so this might still help someone someday.
The script above did not work for me; I suspect the list was too long, or something of that nature. So I kept trying things, and I eventually arrived at a shell one-liner that worked correctly by trimming the list to only those items that I had explicitly requested. The key came from @Andrey's comment above: /etc/setup/installed.db!
Here's the command I used:
(ORIG_PKGS="/path/to/other-cygwin64/etc/setup/installed.db" ; PKGS=$(awk '/ 1$/ {print $1}' "${ORIG_PKGS}") ; PLIST=$(tr '\n' ',' <<< "${PKGS}") ; /setup-x86_64 -q -P "${PLIST%%,}")
For readability, here it is split up into multiple lines:
ORIG_PKGS="/path/to/other-cygwin64/etc/setup/installed.db"
PKGS=$(awk '/ 1$/ {print $1}' "${ORIG_PKGS}")
PLIST=$(tr '\n' ',' <<< "${PKGS}")
/setup-x86_64 -q -P "${PLIST%%,}"
All you should need is the /etc/setup/installed.db from the previous Cygwin installation; just alter the value of ORIG_PKGS with the correct path to that file, and the rest should Just Work®!
I use:
md5sum * > checklist.chk # Generates a list of checksums and files.
and use:
md5sum -c checklist.chk # runs through the list to check them
How can I automate a PASS or FAIL state? I basically want to get a notification if something in my app changes, whether by a hacker or an unauthorized change by a developer. I want to write a script that will notify me of any changes to my code.
I found a few scripts online, but they only appear to work for single files; I have been unable to adapt them to work for multiple files with pass or fail states.
if [ "$(md5sum < File.name)" = "24f4ce42e0bc39ddf7b7e879a -" ]
then
    echo Pass
else
    echo Fail
fi
Reference:
Shell scripts and the md5/md5sum command: need to decide when to use which one
https://unix.stackexchange.com/questions/290240/md5sum-check-no-file
I would do something like:
for f in $(awk '{printf "%s ", $2}' checklist.chk); do
    md5sum=$(grep "$f" checklist.chk | awk '{print $1}')
    if [[ "$(md5sum < "$f" | awk '{print $1}')" = "$md5sum" ]]; then
        echo Pass
    else
        echo Fail
    fi
done
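If all you need is an overall pass/fail that you can hang a notification on, a shorter variant (a sketch) is to let md5sum -c do the comparison and just test its exit status:
if md5sum -c --quiet checklist.chk
then
    echo Pass
else
    echo Fail   # put your notification command here, e.g. a mail call
fi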
Store your checksums directly in the script. Then just run the md5sum -c.
Something like:
#!/bin/bash

get_stored_checksums() {
    grep -P '^[0-9a-f]{32} .' <<-'EOF'
#########################################################
# Stored checksums in the script itself
# the output from md5sum for the files you want to guard
5e3f61b243679426d7f74c22b863438b  Workbook1.xls
777a161c82fe0c810e00560411fb076e  Workbook1.xlsx
# empty lines and comments - they're simply ignored
d41d8cd98f00b204e9800998ecf8427e  abc def.xxx
# this very important file
809f911bcde79d6d0f6dc8801d367bb5  jj.xxx
#########################################################
EOF
}

#MAIN
cd /where/the/files/are

#run md5sum in check mode
result=$( md5sum -c <(get_stored_checksums) )
if (( $? ))
then
    #found some problems
    echo "$result"
    #mail -s "PROBLEM" security.manager@example.com <<<"$result"
#else
#    echo "all OK"
fi
If something is wrong, you will see something like:
Workbook1.xls: OK
Workbook1.xlsx: OK
abc def.xxx: FAILED
jj.xxx: OK
md5sum: WARNING: 1 of 4 computed checksums did NOT match
Of course, you can change the get_stored_checksums function to anything other, like:
get_stored_checksums() {
    curl -s 'http://integrityserver.example.com/mytoken'
}
and you will fetch the guarded checksums from the remote server...
My problem is adding a username to a file. I am really stuck on how to proceed, please help.
Problem: I have a file called usrgrp.dat. The format of this file is like:
ADMIN:srikanth,admin
DEV:dev1
TEST:test1
I am trying to write a shell script which should work like this:
Enter group name: DEV
Enter the username: dev2
My expected output is:
User added to Group DEV
If I see the contents of usrgrp.dat, it should now look like:
DEV:dev1,dev2
TEST:test1
And it should give me an error saying the user is already present if I try to add an already existing user to that group. I am trying this with the following script:
#!/bin/sh
dispgrp()
{
    groupf="/home/srikanth/scm/auths/group.dat"
    for gname in `cat $groupf | cut -f1 -d:`
    do
        echo $gname
    done
    echo "Enter the group name:"
    read grname
    for gname in `cat $groupf | cut -f1 -d:`
    do
        if [ "$grname" = "$gname" ]
        then
            echo "Enter the username to be added"
            read uname
            for grname in `cat $groupf`
            do
                $gname="$gname:$uname"
                exit 1
            done
        fi
    done
}
echo "Group display"
dispgrp
I am stuck and need your valuable help.
#!/bin/sh

dispgrp()
{
    groupf="/home/srikanth/scm/auths/group.dat"
    tmpfile="/path/to/tmpfile"

    # you may want to pipe this to more or less if the list may be long
    cat "$groupf" | cut -f1 -d:

    echo "Enter the group name:"
    read grname

    if grep "$grname" "$groupf" >/dev/null 2>&1
    then
        echo "Enter the username to be added"
        read uname
        if ! grep "^$grname:.*\<$uname\>" "$groupf" >/dev/null 2>&1
        then
            sed "/^$grname:/s/\$/,$uname/" "$groupf" > "$tmpfile" && mv "$tmpfile" "$groupf"
        else
            echo "User $uname already exists in group $grname"
            return 1
        fi
    else
        echo "Group not found"
        return 1
    fi
}

echo "Group display"
dispgrp
You don't need to use loops when the loops are done for you (e.g. cat, sed and grep).
Don't use for to iterate over the output of cat.
Don't use exit to return from a function. Use return.
A non-zero exit or return code signifies an error or failure. Use 0 for normal, successful return. This is the implicit action if you don't specify one.
Learn to use sed and grep.
Since your shebang says #!/bin/sh, the changes I made above are based on the Bourne shell and assume POSIX utilities (not GNU versions).
Something like (assume your shell is bash):
adduser() {
    local grp="$1"
    local user="$2"
    local gfile="$3"
    if ! grep -q "^$grp:" "$gfile"; then
        echo "no such group: $grp"
        return 1
    fi
    if grep -q "^$grp:.*\\<$user\\>" "$gfile"; then
        echo "User $user already in group $grp"
    else
        sed -i "/^$grp:/s/\$/,$user/" "$gfile"
        echo "User $user added to group $grp"
    fi
}

read -p "Enter the group name: " grp
read -p "Enter the username to be added: " user
adduser "$grp" "$user" /home/srikanth/scm/auths/group.dat
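With the sample data from the question in group.dat, a run might look like this (the script name here is just an example):
$ ./addusr.sh
Enter the group name: DEV
Enter the username to be added: dev2
User dev2 added to group DEV
$ cat /home/srikanth/scm/auths/group.dat
ADMIN:srikanth,admin
DEV:dev1,dev2
TEST:test1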