Free Ansible / Puppet / Chef alternative - bash

This is a request for your opinion on a script I wrote.
At my workplace I wrote a bash script to execute commands on Linux systems (Red Hat). Since we have Ansible and Puppet, nobody was interested in the script. And to be fair: if you have Ansible or Puppet and you are working in a team, such a solution isn't advisable... BUT (and that's why I want to post it here and ask for your opinion/improvements) if you just want to manage a few Linux hosts and don't want to buy any licence etc., this script may help you out.
The script is called "rotator" and it works like this:
It takes a serverlist and a function file. It breaks the serverlist up into a given maximum number of temp lists and executes the commands in the function file against each list in the background. Then it waits for all the parallel processes to finish. At the end it checks whether all the hosts were visited.
code of rotator.sh:
#!/bin/bash
usage ()
{
echo ""
echo ""
echo ""
echo "ERROR: $1"
echo ""
echo "You need to provide a serverlist and a"
echo "function file as parameter."
echo "e.g.: sh rotator.sh <server.list> <functions-file.sh>"
echo "note that the master pushes the config to the client machines"
echo ""
echo ""
exit 1
}
if [[ $# -eq 0 || "$1" == "-h" || "$1" == "--help" || "$1" == "-?" ]]
then
usage "missing parameter"
exit 1
fi
######################
# variable declarations
######################
basedir=$(dirname $0)
hosts=$(cat $1)
funcfile=$2
myhostname=$(hostname -a)
maxlists=100
i=0
j=0
templist=${basedir}/tmp/templist${j}
num=$(wc -l < $1)
length=0
divisor=1
###################################
# determine number of temp lists
###################################
if [[ $num -gt $maxlists ]]
then
length=$((num/maxlists))
((length++))
else
length=1
fi
######################################
# delete existing old temp lists
######################################
if [ ! -z "$(ls -A ${basedir}/tmp)" ];
then
rm ${basedir}/tmp/*
fi
######################
# create temp lists
######################
echo "cutting hostlist"
for host in $hosts;
do
if [[ ! "${host^^}" == "${myhostname^^}" ]]
then
echo $host >> $templist
((i++))
fi
if [[ $i -eq $length ]]
then
((j++))
templist=${basedir}/tmp/templist${j}
i=0
fi
done
if [[ $i -gt 0 ]]
then
# count the last, partially filled list as well
((j++))
fi
##################################################
# start func-file for all lists and remember PID
##################################################
k=0
for ((i=0;i<$j;i++));
do
sleep 0.2
sh $funcfile ${basedir}/tmp/templist${i} &
pids[${k}]=$!
((k++))
done
######################################
# wait till all processes terminate
######################################
echo "waiting till all processes are done"
for pid in ${pids[*]};
do
wait $pid
done
#######################
# cleanup temp lists
#######################
for ((i=0;i<$j;i++));
do
rm -f ${basedir}/tmp/templist${i}
done
##########################################
# determine if all hosts were visited
##########################################
echo "determine if all hosts were visited..."
checkhosts=$(cat ${basedir}/tmp/check.list)
done=false
absenthosts=""
for host in $hosts
do
for server in $checkhosts
do
host=${host^^}
server=${server^^}
if [[ "$host" == "$server" ]]
then
done=true
fi
done
if [[ "$done" == "false" ]]
then
absenthosts+="$host "
else
done=false
fi
done
rm ${basedir}/tmp/check.list
############################################
# tell user which hosts were not visited
############################################
echo "the following hosts weren't visited and need to be handled manually"
echo ""
for host in $absenthosts
do
echo "$host"
done
# END OF SCRIPT
Here is an example of a function-file:
#!/bin/bash
#
#
#
# ================||
# PLEASE BE AWARE ||
# ================||
# Add the code which should be executed within the marked space.
# Instead of using "ssh", use $SSHOPTS.
#
#
hosts=$(cat $1)
SSHOPTS="ssh -q -o BatchMode=yes -o ConnectTimeout=10"
for host in $hosts
do
$SSHOPTS $host "hostname" >> /opt/c2tools/serverliste_kontrollskripten/rotator/tmp/check.list
#
# MARK: add code after this mark
#
#
# MARK: add code before this mark
#
done
# END OF SCRIPT
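For illustration, here is a hypothetical function-file with the marked space filled in; it collects root-filesystem usage from every host (the df payload and the log path are made-up examples):
#!/bin/bash
hosts=$(cat $1)
SSHOPTS="ssh -q -o BatchMode=yes -o ConnectTimeout=10"
for host in $hosts
do
$SSHOPTS $host "hostname" >> /opt/c2tools/serverliste_kontrollskripten/rotator/tmp/check.list
# MARK: add code after this mark
# hypothetical payload: record root-filesystem usage per host
$SSHOPTS $host "df -h / | tail -1" >> ./tmp/diskusage.${host}.log
# MARK: add code before this mark
done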
I would say that if you combine the rotator with a cron job, you could more or less use it as a free alternative to Puppet or Ansible, with the bonus that you can write the function-files in plain bash.
Please share your opinion.
The script runs, but could be improved.
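One concrete simplification to consider: the whole temp-list machinery (cutting, dispatching, cleanup) could be delegated to xargs -P, which handles the fan-out and the waiting itself. A minimal sketch, assuming the function file is changed to take a single host as $1 instead of a list file:
#!/bin/bash
# $1 = serverlist, $2 = function file that handles ONE host passed as its $1
# grep -Fvix: drop the line that exactly matches the local hostname, case-insensitively
grep -Fvix "$(hostname)" "$1" | xargs -P 100 -I {} sh "$2" {}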

Related

Running two loops simultaneously in bash

I am reading a file that contains two different types of key-value pairs, and I have to pass those key-value pairs to another file. For that I currently run two loops, which doesn't seem like the optimal approach. Below is the code I am trying, which should also clarify what I am trying to do:
#!/bin/bash
set -e
# Prints command usage
function usage() {
cat <<EOF
Usage: $0 [--in-file <infile>] [--env <env>]
EOF
}
env=""
# Parse Args
POSITIONAL=()
while [[ $# -gt 0 ]]; do
key="$1"
case $key in
-i|--in-file)
in_file=$2
shift 2 # past argument and value
;;
-e|--env)
env="$2"
shift 2 # past argument and value
;;
*) # unknown option
echo "unknown option"
shift # past argument
;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters
if [[ ${in_file} == "" ]]; then
echo "ERROR: Missing in-file argument"
usage
exit 1
fi
cat ${in_file}
echo "Unsilencing alarms"
declare -A summary
overall_status_failed="false"
silence_ids=`yq read -j ${in_file} alarm_ids | jq -M -r .[]`
while read -r silence_id; do
echo "Unsilencing ${silence_id} silence id"
if [ "${env}" != "" ]; then
./unsilence_alarm.sh --alarm-id ${silence_id} --env ${env}|| script_failed="true"
else
./unsilence_alarm.sh --alarm-id ${silence_id}|| script_failed="true"
fi
if [[ "${script_failed}" == "true" ]]; then
summary["silence id ${silence_id} "]='Fail'
overall_status_failed="true"
script_failed="false"
else
summary["silence id ${silence_id} "]='Pass'
fi
done <<< "${silence_ids}"
echo "Unsilencing moody alarms"
silence_ids=`yq read -j ${in_file} moody_alarm_ids | jq -M -r .[]`
while read -r silence_id; do
echo "Unsilencing ${silence_id} moody silence id"
if [ "${env}" != "" ]; then
./unsilence_alarm.sh --moody-alarm-id ${silence_id} --env ${env}|| script_failed="true"
else
./unsilence_alarm.sh --moody-alarm-id ${silence_id}|| script_failed="true"
fi
if [[ "${script_failed}" == "true" ]]; then
summary["moody silence id ${silence_id}"]="Fail"
overall_status_failed="true"
script_failed="false"
else
summary["moody silence id ${silence_id}"]="Pass"
fi
done <<< "${silence_ids}"
set +x
echo
echo
echo
echo +------------------------- Unsilence summary --------------------------+
for status in "${!summary[@]}"; do
printf "${status} - ${summary[$status]}\n"
done
echo +----------------------------------------------------------------------+
if [[ "${overall_status_failed}" == "true" ]]; then
exit 1
fi
exit
Can anyone tell me an optimized approach for merging these two while loops? The other file accepts parameters like:
[--alarm-id <id>] [--moody-alarm-id <id>] [--env <env>]
Combine both while blocks with the help of an associative array, and wrap them in a for loop.
Does this work as expected?
...
declare -A flags
flags[alarm_ids]="--alarm-id"
flags[moody_alarm_ids]="--moody-alarm-id"
for alarm_id in alarm_ids moody_alarm_ids; do
silence_ids=`yq read -j ${in_file} ${alarm_id} | jq -M -r .[]`
while read -r silence_id; do
echo "Unsilencing ${silence_id} silence id"
./unsilence_alarm.sh ${flags[$alarm_id]} ${silence_id} ${env:+"--env ${env}"} \
|| script_failed="true"
if [[ "${script_failed}" == "true" ]]; then
summary["silence ${flag[alarm_id]//-/ } ${silence_id} "]='Fail'
overall_status_failed="true"
script_failed="false"
else
summary["silence ${flag[alarm_id]//-/ } ${silence_id} "]='Pass'
fi
done <<< "${silence_ids}"
done
...
Simultaneous would not be easy. If you do need it, wrap the above code snippet in a function without the for loop, use a parameter to pass the "key" to the while loop, and then start the calls as background activities with foo &. But that would not make the summary part easier; since you need to wait for completion anyway, synchronous is the way to go.
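For completeness, a rough sketch of that parallel variant (unsilence_key is a hypothetical helper name; the summary bookkeeping is omitted, and that is exactly the part that gets awkward):
unsilence_key() {
    local key=$1 ids
    ids=$(yq read -j ${in_file} ${key} | jq -M -r .[])
    while read -r silence_id; do
        ./unsilence_alarm.sh ${flags[$key]} ${silence_id} ${env:+--env "${env}"}
    done <<< "${ids}"
}
unsilence_key alarm_ids &          # background activity, as described
unsilence_key moody_alarm_ids &
wait                               # still must wait before printing any summary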
Also, ${env:+"--env ${env}"} expands to the right-hand side of :+ if env is set and non-empty. Using a 4-statement-long if-else block seemed tedious. I am not sure if this method is frowned upon, but this is how I personally would have done it.
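A quick illustration of that expansion:
env=""
echo "[${env:+--env $env}]"    # prints [] - env is empty, nothing is added
env="prod"
echo "[${env:+--env $env}]"    # prints [--env prod] - env is set and non-empty
unset env
echo "[${env:+--env $env}]"    # prints [] - unset behaves the same with :+
One caveat: with the inner quotes as written (${env:+"--env ${env}"}), the whole expansion becomes a single word --env prod; if the called script parses --env and its value as two separate arguments, drop the inner quotes or quote only the value: ${env:+--env "$env"}.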

How do I maintain watch over multiple background processes and respond according to their return code?

In bash, I'm trying to maintain a certain # of processes for downloading files. Basically, I have a list of files from one of two sources. So my script will do the following:
Read which list is being processed
Pull the full URL to be fetched
Extract the actual file name, since the URLs are long and ugly and frequently cause files to be misnamed. Each list has its own position for the filename in the URL, so the list name is checked and the position of the filename in the URL is determined from the source file name.
Begin downloading the file IF the number of active downloads is less than 10
if Active Downloads = 10 then wait until one of them exits
on RC of 0 begin downloading the next file in the list.
on a Non-Zero RC, report the bad RC from wget and abort the script, leaving any currently running wget instances running
When the list is empty, either because it had less than 10 to start, or all files are being or have been pulled, wait for remaining wget instances to finish cleanly, then exit script cleanly
My problem is that despite wrapping wget in a function and calling that function with &, the wget doesn't go into the background. It pauses the whole script until it finishes. This is not good because some of the URLs are time-encoded, and if I don't download them within a certain number of minutes the web server throws errors instead of the files I want. So waiting for each file to finish isn't going to work if I'm pulling 30+ files (not uncommon).
Here's the script. Any help is GREATLY appreciated! Ignore the doDEBUG stuff. That's just troubleshooting I tried that revealed WHAT the problem is but not WHY.
#!/bin/bash
# when script is confirmed as working ok. set DEBUG to 0
#set -x # should not be necessary!
DEBUG=1
SET_X=0
doDEBUG() {
[ $DEBUG -eq 0 ] && return
if [ $SET_X -eq 0 ] ; then
echo -n "";
set -x
SET_X=1;
fi
read -p "Press <ENTER> to Continue..."
}
grabFN() {
echo `echo $1 | cut -f $FLD -d / | cut -f 1 -d \?`
}
enqueue() {
URL=$1 FN=$(grabFN $1)
# Fetch Limit of 10 processes at once
doDEBUG
wget $URL -O $FN -o log.$FN
doDEBUG
[ -n $? ] && echo $URL >> $INFILE
}
fetchURL() {
THISURL="`head -n 1 $INFILE`"
[ "$THISURL" = "__EMD__" ] && return 1;
grep -v "$THISURL" $INFILE > tmp.$INFILE
mv tmp.$INFILE $INFILE
}
doDEBUG
if [ $1 = "getlst.src1" ] ; then FLD=5; else if [ $1 = "getlst.src2" ]; then FLD=9; else echo "Unknown file: ${1}! Use known files or update the script!"; fi; fi
pids=""
INFILE=$1
doDEBUG
# Run five concurrent processes
for i in {1..10}; do
fetchURL || break
doDEBUG
( enqueue $THISURL ) &
doDEBUG
# store PID of process
pids+=" $!"
doDEBUG
done
# Wait for all processes to finish, will take max 14s
# as it waits in order of launch, not order of finishing
for p in $pids; do
if wait $p; then
NP=""
for X in $pids do [ $X -eq $p ] || NP+=" $X";
pids=$NP
doDEBUG
while [ `echo $pids | wc -l` -lt 10]; do
if [ `FETCHURL` ]; then
doDEBUG
( enqueue $THISURL ) &
doDEBUG
pids+=" $!" ;
else
break;
fi;
done;
else
doDEBUG
echo "Fetching $THISURL failed! Aborting Loop!"
exit 1;
fi;
done
Never mind. I found a couple of glitches in my code and once I fixed them it seems to be working okay. Sometimes you just have to get away for a few minutes.
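For anyone hitting the same wall, here is a compact sketch of the pool part (assuming bash 4.3+ for wait -n; wget options trimmed for brevity):
#!/bin/bash
# $1 = list of URLs, one per line; keep at most 10 downloads running
MAX_JOBS=10
while IFS= read -r url; do
    # if the pool is full, block until one download finishes
    while (( $(jobs -rp | wc -l) >= MAX_JOBS )); do
        # wait -n reaps the next job to finish and returns its exit status
        wait -n || { echo "a download failed, aborting" >&2; exit 1; }
    done
    wget -q "$url" &
done < "$1"
wait    # let the remaining downloads finish cleanly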

how to run multiple commands on a remote linux server using bash script

I am currently writing the following script that logs into remote servers and runs a couple of commands to verify the performance of each server, printing a message based on the output of those commands. But the ssh doesn't work and returns the stats of the server that hosts the script instead.
Script
#!/bin/bash
#######################
#Function to add hosts to the array
#the following function takes the ip addresses provided while the script is run and stores them in an array
#######################
Host_storing_func () {
HOST_array=()
for i in $@ ;do
HOST_array+=(${i});
done
#echo ${HOST_array[*]}
}
#######################
#Calling above function
#######################
Host_storing_func "$@"
############################################################
#Collect Stats of Ping,memory,iowait time test function
############################################################
b=`expr ${#HOST_array[*]} - 1 `
for i in `seq 0 $b` ;do
sshpass -f /root/scripts/passwordFile.txt /usr/bin/ssh student35@${HOST_array[${i}]} << HERE
echo `hostname`
iowaittm=`sar 2 2|awk '/^Average/{print $5};'`
if [ $iowaittm > 10 ];then
echo "IO ==> BAD"
else
echo "IO ==> GOOD"
fi
memoryy=`free -m |grep Swap|awk '{if($2 == 0) print 0;else print (($4 / $2 ) * 100)}'`
if [ ${memoryy} < '10' ] ;then
echo "memory ==> good"
elif [[ "${memory}" -ge 0 ]] && [[ "${memory}" -le 10 ]];then
echo "No Swap"
else
echo "memory ==> bad"`enter code here`
fi
ping -w2 -c2 `hostname` | grep "packet loss"|awk -F, '{print $3}'|awk -F% '{print $1}'|sed 's/^ *//'|awk '{if ($1 == 0) print "Yes" ;else print "No"}'
HERE
done
Output : oc5610517603.XXX.com is the name of the source server
[root@oc5610517603 scripts]# ./big_exercise.sh 9.XXX.XXX.XXX 9.XXX.XXX.XXX
Pseudo-terminal will not be allocated because stdin is not a terminal.
oc5610517603.XXX.com
IO ==> GOOD
No Swap
ping: oc5610517603.ibm.com: Name or service not known
Pseudo-terminal will not be allocated because stdin is not a terminal.
oc5610517603.XXX.com
IO ==> GOOD
No Swap
ping: oc5610517603.XXX.com: Name or service not known
Thanks for checking the script; I figured out a way to solve the problem.
It is the sshpass command that was causing the issue. You just have to put the opening HERE in single quotes if you want to use variables within the heredoc; if the variables are calculated before ssh runs, you don't have to put the opening HERE in single quotes.
sshpass -f /root/scripts/passwordFile.txt /usr/bin/ssh -T student35@${i} << 'HERE'
After I changed the sshpass command as above, my script worked.
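A quick illustration of the difference (somehost is a placeholder):
# Unquoted delimiter: the local shell expands $vars and backticks before ssh runs
ssh somehost <<EOF
echo "$HOSTNAME"    # prints the LOCAL hostname
EOF
# Quoted delimiter: the body is passed to the remote shell verbatim
ssh somehost <<'EOF'
echo "$HOSTNAME"    # prints the REMOTE hostname
EOF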
I have modified your script a bit.
As suggested by @chepner, I am not using the Host_storing_func.
Heredocs for sshpass are somewhat tricky: you have to escape every backtick and $ sign in the heredoc.
Notice the - before the heredoc start; it allows you to indent the heredoc body. Also, try to avoid backticks when you can; use $(command) instead.
Hope it helps.
#!/bin/bash
#######################
#Function to add hosts to the array
#the following function takes the ip addresses provided while the script is run and stores them in an array
#######################
array=( "$#" )
user="student35"
############################################################
#Collect Stats of Ping,memory,iowait time test function
############################################################
for host in ${array[@]}; do
sshpass -f /root/scripts/passwordFile.txt /usr/bin/ssh -l ${user} ${host} <<-HERE
thishost=\$(hostname)
echo "Current Host -> \$thishost";
iowaittm=\`sar 2 2|awk '/^Average/{print \$5}'\`
if [ \$iowaittm > 10 ]; then
echo "IO ==> BAD"
else
echo "IO ==> GOOD"
fi
memory=\$(free -m | grep Swap | awk '{if(\$2 == 0) print 0;else print ((\$4 / \$2 ) * 100)}')
if [ \${memory} < '10' ] ;then
echo "memory ==> good"
elif [[ "\${memory}" -ge 0 ]] && [[ "\${memory}" -le 10 ]]; then
echo "No Swap"
else
echo "memory ==> bad"\`enter code here\`
fi
ping -w2 -c2 \`hostname\` | grep "packet loss"|awk -F, '{print \$3}'|awk -F% '{print \$1}'|sed 's/^ *//'|awk '{if (\$1 == 0) print "Yes" ;else print "No"}'
HERE
done
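As a side note on the - in <<-HERE: it strips leading tab characters (not spaces) from every line of the body and from the closing delimiter, which is what allows the indentation above. A minimal illustration (the indentation must be real tabs):
cat <<-EOF
	this line is indented with a tab in the source
	EOF
# prints: this line is indented with a tab in the source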

Create a script that runs two touch commands in the right order, always

For performing one task, I need to make two touch commands in a precise order:
touch aaaa-ref-bbb.done
touch cccc-grp-dddd.done
aaaa .. dddd being any kind of strings. The first string contains "-ref-" and the second string contains "-grp-".
I want to make a script that applies both touch commands, independently of the order in which the parameters are passed.
For instance (parameters in the wrong order)
./script.sh bla-grp-bla bleh-ref-bleh
Will produce an output of
touch bleh-ref-bleh
touch bla-grp-bla
If the parameters are written in the right order, the touch commands follow the right order.
I have made several attempts, each change getting closer to the goal, but now I'm stuck.
Could you help with this?
#### tool for touch debug mode (set -x / set +x)
#!/bin/bash
#
#### USAGE
##### Constants
#start debug code
exec 5> >(logger -t $0)
BASH_XTRACEFD="5"
PS4='$LINENO: '
set -x
FIRSTPARAM=$1
SECONDPARAM=$2
echo $FIRSTPARAM
echo $SECONDPARAM
dotouch()
{
if [[ "$FIRSTPARAM" =~ 'ref' ]]; then
echo 'correct order, processing...'
sleep 3
firsttouch = $FIRSTPARAM'.done'
secondtouch = $SECONDPARAM'.done'
echo $firsttouch
touch $firsttouch
sleep 1
touch $secondtouch
echo "touch was" $1 $2
else
secondtouch = $FIRSTPARAM'.done'
firstouch = $SECONDPARAM'.done'
touch $firsttouch
sleep 1
touch $secondtouch
echo "touch was" $2 $1
fi
}
if [ "$FIRSTPARAM" =~ "ref" ] || [ "$FIRSTPARAM" =~ "grp" ]; then
dotouch()
echo "touch commands executed"
exit 0
else
echo "Usage: $0 [xxxx_ref_xxxx.tar] [xxxx_grp_yyyy.tar] "
exit 1
fi
exit 0
#end debug code
set +x
You define firstouch in the else branch but then use firsttouch; you should use the same variable names. Also note that bash assignments must not have spaces around the = sign.
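For illustration:
firsttouch = "$FIRSTPARAM.done"   # error: runs a command named firsttouch with arguments = and the filename
firsttouch="$FIRSTPARAM.done"     # correct: assigns the variable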
Let's start by putting your shebang line in the right place, and drastically simplifying the code:
#!/bin/bash
#### tool for touch debug mode (set -x / set +x)
exec 5> >(logger -t $0)
BASH_XTRACEFD="5"
PS4='$LINENO: '
set -x
FIRSTPARAM=$1
SECONDPARAM=$2
echo $FIRSTPARAM
echo $SECONDPARAM
dotouch() {
touch "$1"
echo "$Just touched $1"
return
}
case $FIRSTPARAM in
*"ref"*) dotouch $FIRSTPARAM'.done' ; dotouch $SECONDPARAM'.done' ;;
*"grp"*) dotouch $SECONDPARAM'.done' ; dotouch $FIRSTPARAM'.done' ;;
*) echo "Usage: $0 [xxxx_ref_xxxx.tar] [xxxx_grp_yyyy.tar] " ; exit 1 ;;
esac
exit 0
#end debug code
set +x
There was no need for most of that.
The problem is that you are not considering the different cases in the main if before entering the dotouch function. In your expression you only evaluate the first parameter, so you don't really know the content of the second parameter.
My suggestion is:
Create a doTouch function that simply touches 2 received parameters in the order they are received.
Add the different cases on the main code (if there are more, add more elif statements).
Here is the code (without the debug annotations):
#!/bin/bash
#######################################
# Script function helpers
#######################################
doTouch() {
local ref=$1
local grp=$2
echo "Touching $ref"
touch "$ref"
echo "Touching $grp"
touch "$grp"
echo "Touching order was: $ref $grp"
}
usage() {
echo "Usage: $0 [xxxx_ref_xxxx.tar] [xxxx_grp_yyyy.tar]"
}
#######################################
# Main
#######################################
# Retrieve parameters
FIRSTPARAM=$1
SECONDPARAM=$2
echo $FIRSTPARAM
echo $SECONDPARAM
# Check parameter order and touch
if [[ $FIRSTPARAM == *"ref"* ]] && [[ $SECONDPARAM == *"grp"* ]]; then
doTouch $FIRSTPARAM $SECONDPARAM
elif [[ $SECONDPARAM == *"ref"* ]] && [[ $FIRSTPARAM == *"grp"* ]]; then
doTouch $SECONDPARAM $FIRSTPARAM
else
usage
exit 1
fi
# Regular exit
exit 0
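For instance, with the parameters in the wrong order (assuming the script is saved as script.sh; output derived from the echo statements above):
$ ./script.sh bla-grp-bla bleh-ref-bleh
bla-grp-bla
bleh-ref-bleh
Touching bleh-ref-bleh
Touching bla-grp-bla
Touching order was: bleh-ref-bleh bla-grp-bla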

Run a shell script with While condition in an infinite loop based on conditions

I need to create a shell script that places indicator/flag files in a directory, say /dir1/dir2/flag_file_directory, based on the request flags received from a shell script in a directory /dir1/dir2/req_flag_file_directory and the source files present in a directory, say /dir1/dir2/source_file_directory. For this I need to run a script with a while condition in an infinite loop, as I do not know when the source files will be made available.
My implementation plan is somewhat like this: let's say JOB1, scheduled to run at some time in the morning, will first place (touch) a request flag (e.g. touch /dir1/dir2/req_flag_file_directory/req1.req) saying that this job is running. The script should then look for source files of pattern file_pattern_YYYYMMDD.CSV (the file patterns differ per job) in the source file directory and, if they are present, count them. If the count of the files is correct, it should first delete the request flag for this job and then touch an indicator/flag file in /dir1/dir2/flag_file_directory. This indicator/flag file will then be used as an indicator that the source files are all present and the job can continue to load these files into our system.
I will have all the details related to the jobs and their flag files in a file whose structure is as shown below. Based on the request flag, the script should know what other criterias it should look for before placing the indicator file:
request_flags|source_name|job_name|file_pattern|file_count|indicator_flag_file
req1.req|Sourcename1|jobname1|file_pattern_1|3|ind1.ind
req2.req|Sourcename2|jobname2|file_pattern_2|6|ind2.ind
req3.req|Sourcename3|jobname3|file_pattern_3|1|ind3.ind
reqN.req|SourcenameN|jobnameN|file_pattern_N|2|indN.ind
Please let me know how this can be achieved, and also share any other suggestions or solutions.
Rather than have the service daemon script poll in an infinite loop (i.e. wake up periodically to check if it needs to do work), you could use file locking and a named pipe to create an event queue.
Outline of the service daemon, daemon.sh. This script will loop infinitely, blocking by reading from the named pipe at read line until a message arrives (i.e., some other process writes to $RequestPipe).
#!/bin/bash
# daemon.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
while true ; do
if read line < "$RequestPipe" ; then
# ... commands to be executed after message received ...
echo "$line" # for example
fi
done
An outline of requestor.sh, the script that wakes up the service daemon when everything is ready. This script does all the preparation necessary, e.g. creating files in req_flag_file_directory and source_file_directory, then wakes the service daemon script by writing to the named pipe. It could even send a message that contains more information for the service daemon, say "Job 1 ready".
#!/bin/bash
# requestor.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
# ... create all the necessary files ...
(
flock --exclusive 200
# Unblock the service daemon/listener by sending a line of text.
echo Wake up sleepyhead. > "$RequestPipe"
) 200>"$LockFile" # subshell exit releases lock automatically
daemon.sh fleshed out with some error handling:
#!/bin/bash
# daemon.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
SharedGroup=$(echo need to put a group here 1>&2; exit 1)
#
if [[ ! -w "$RequestPipe" ]] ; then
# Handle 1st time. Or fix a problem.
mkfifo --mode=775 "$RequestPipe"
chgrp "$SharedGroup" "$RequestPipe"
if [[ ! -w "$RequestPipe" ]] ; then
echo "ERROR: request queue, can't write to $RequestPipe" 1>&2
exit 1
fi
fi
while true ; do
if read line < "$RequestPipe" ; then
# ... commands to be executed after message received ...
echo "$line" # for example
fi
done
requestor.sh fleshed out with some error handling:
#!/bin/bash
# requestor.sh
LockDir="/dir1/dir2/req_flag_file_directory"
LockFile="${LockDir}/.MultipleWriterLock"
RequestPipe="${LockDir}/.RequestQueue"
SharedGroup=$(echo need to put a group here 1>&2; exit 1)
# ... create all the necessary files ...
#
if [[ ! -w "$LockFile" ]] ; then
# Handle 1st time. Or fix a problem.
touch "$LockFile"
chgrp "$SharedGroup" "$LockFile"
chmod 775 "$LockFile"
if [[ ! -w "$LockFile" ]] ; then
echo "ERROR: write lock, can't write to $LockFile" 1>&2
exit 1
fi
fi
if [[ ! -w "$RequestPipe" ]] ; then
# Handle 1st time. Or fix a problem.
mkfifo --mode=775 "$RequestPipe"
chgrp "$SharedGroup" "$RequestPipe"
if [[ ! -w "$RequestPipe" ]] ; then
echo "ERROR: request queue, can't write to $RequestPipe" 1>&2
exit 1
fi
fi
(
flock --exclusive 200 || {
echo "ERROR: write lock, $LockFile flock failed." 1>&2
exit 1
}
# Unblock the service daemon/listener by sending a line of text.
echo Wake up sleepyhead. > "$RequestPipe"
) 200> "$LockFile" # subshell exit releases lock automatically
Still having some doubts about the contents of the requests file, but I think I've come up with a rather simple solution:
#!/bin/bash
DETAILS_FILE="details.txt"
DETAILS_LINES=$((`wc -l $DETAILS_FILE|awk '{print $1}'`-1)) # to remove banner line (first line)
DETAILS=`tail -$DETAILS_LINES $DETAILS_FILE|tr '\n\r' ' '`
PIDS=()
IFS=' '
waitall () { # PIDS...
## Wait for children to exit and indicate whether all exited with 0 status.
local errors=0
while :; do
debug "Processes remaining: $*"
for pid in "$@"; do
echo "PID: $pid"
shift
if kill -0 "$pid" 2>/dev/null; then
debug "$pid is still alive."
set -- "$#" "$pid"
elif wait "$pid"; then
debug "$pid exited with zero exit status."
else
debug "$pid exited with non-zero exit status."
((++errors))
fi
done
(("$#" > 0)) || break
# TODO: how to interrupt this sleep when a child terminates?
sleep ${WAITALL_DELAY:-1}
done
((errors == 0))
}
debug () { echo "DEBUG: $*" >&2; }
#function to check for # of sourcefiles matching pattern in dir
#params: req3.req Sourcename3 jobname3 file_pattern_3 1 ind3.ind
check () {
NOFILES=`find $2 -type f | egrep -c $4`
if [ $NOFILES -eq "$5" ];then
echo "Touching file $6. done."
touch $6
else
echo "$NOFILES matching $4 pattern. exiting"
fi
}
echo "parsing $DETAILS_FILE file..."
read -a lines <<< "$DETAILS"
for line in "${lines[@]}"
do
IFS='|'
read -a ARRAY <<< "$line"
echo "Line processed. Dispatching job ${ARRAY[2]}..."
check ${ARRAY[@]} &
IFS=' '
PIDS="$PIDS $!"
#echo $PIDS
done
waitall ${PIDS}
wait
Although not exactly an infinite loop, this script is intended to run from a crontab.
First it reads the details.txt file, as per your example.
After parsing all details, the script dispatches the check function, whose sole purpose is to count the number of files matching the file_pattern in each source_name folder and, if the number of files equals file_count, touch the indicator_flag_file.
Hope that helps!
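For example, a hypothetical crontab entry running it every five minutes (script name and log path are placeholders):
*/5 * * * * /dir1/dir2/check_indicators.sh >> /var/log/check_indicators.log 2>&1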
