I am trying to use inotifywait within a bash script to monitor a directory for a file with a certain tag in it (*SDS.csv).
I also only want it to execute once, when the file is written to the data directory.
example:
#! /bin/bash
inotifywait -m -e close_write /home/adam/data | while read LINE
do
if [[ $LINE == *SDS.csv ]]; then
./another_script.sh
fi
done
While this may not be the ideal solution, it may do the trick:
#! /bin/bash
while true
do
    FNAME="$(inotifywait -e close_write /home/adam/data | awk '{ print $NF }')"
    if [ -f "/home/adam/data/$FNAME" ]
    then
        if grep -q 'SDS.csv' "/home/adam/data/$FNAME"
        then
            ./another_script.sh
        fi
    fi
done
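If you really only want to act once, a one-shot sketch along these lines might be closer to the goal (untested; it matches on the file name with a glob, and --format '%f' makes inotifywait print just the name):

#! /bin/bash
while true
do
    FNAME="$(inotifywait -e close_write --format '%f' /home/adam/data)"
    case "$FNAME" in
        *SDS.csv)
            ./another_script.sh
            break    # stop watching after the first matching file
            ;;
    esac
done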
I have a script called "upcall" which calls 4 different scripts. In upcall I call them in the way shown below. The first part of the script works when I run it directly (bash upload_cloud1), but does not when it's called from the script below. I'm sure there is a way to fix this, but I'm just not sure what it is. I currently have it set up in crontab to run every 15 minutes to check for used space.
#!/bin/bash
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then
echo "This script is already running with PID `pidof -x $(basename $0) -o %PPID`"
exit; fi
count=$(</opt/rclone/scripts/upcount)
size=$(df -k /dev/sda2 | tail -1 | awk '{print $3}')
if [ "$size" -gt "234003200" ]; then
bash /opt/rclone/scripts/upload_cloud${count}
else
echo "Not full yet"
fi
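For reference, a 15-minute crontab schedule for this would look something like the line below (the path to upcall is an assumption; adjust it to wherever the script actually lives):

# run upcall every 15 minutes
*/15 * * * * /bin/bash /opt/rclone/scripts/upcall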
I have a small shell script, shown below, that I am using to log in to multiple servers and capture whether the target server is running Red Hat or Ubuntu as its OS.
#!/bin/ksh
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
The above script works. However, if I attempt to add in some awk as shown below and the server is a Red Hat server, my results in osver.report.txt only show the hostname and no OS version. I have played around with the quoting, but nothing seems to work.
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release | awk {'print $1,$2,$6,$7'} 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
If I change the script as suggested to the following:
#!/bin/bash
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(
ssh $USER@${x} bash << 'EOF'
awk '{print "$1,$2,$6,$7"}' /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
Then I get the following errors:
./test.bash: line 9: unexpected EOF while looking for matching `)'
./test.bash: line 16: syntax error: unexpected end of file
You're suffering from a quoting problem. When you pass a quoted command to ssh, you effectively lose one level of quoting (as if you passed the same arguments to sh -c "..."). So the command that you're running on the remote host is actually:
cat /etc/redhat-release | awk {'print ,,,'} 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
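You can see the expansion happen locally, before ssh is even involved; with no positional parameters set, the double-quoted string already loses the $ words:

echo "awk {'print $1,$2,$6,$7'}"
# prints: awk {'print ,,,'}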
One way of resolving this is to pipe your script into a shell, rather than passing it as arguments:
OSVER=$(
ssh $USER@${x} bash <<'EOF'
awk '{print $1,$2,$6,$7}' /etc/redhat-release 2>/dev/null ||
grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
The use of <<'EOF' here inhibits any variable expansion in the here document...without that, expressions like $1 would be expanded locally.
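A quick way to see the difference between an unquoted and a quoted delimiter:

# unquoted delimiter: $USER is expanded by the local shell
cat <<EOF
$USER
EOF

# quoted delimiter: the text is passed through untouched
cat <<'EOF'
$USER
EOF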
A better solution would be to look into something like Ansible, which has built-in facilities for ssh-ing to groups of hosts and collecting facts about them, including distribution version information.
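For example, an ad-hoc run of Ansible's setup module collects the distribution facts directly (this assumes hostlist is usable as an inventory file, i.e. one host per line):

ansible all -i hostlist -u "$USER" -m setup -a 'filter=ansible_distribution*'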
inotifywait won't run command
"Setting up watches. Watches established." is output, and the script just exits:
#!/bin/bash
while $(inotifywait -e modify,close_write /home/centos/test.txt);
do
touch /home/centos/log.txt
done
But when I modify test.txt, log.txt is not created.
I tried this version:
#!/bin/bash
inotifywait -e modify,close_write /home/centos/test.txt |
while read output; do
touch /home/centos/log.txt;
done
I also tried this:
inotifywait -e modify,close_write /home/centos/test.txt |
while read -r filename event; do
echo "test" # or "./$filename"
done
Solved it by adding -m and watching the folder (/home/centos/) instead of the file.
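Putting that together, a loop along these lines keeps running (a sketch; when a directory is watched, inotifywait prints the directory, the event and the file name on each line):

#!/bin/bash
inotifywait -m -e modify,close_write /home/centos/ |
while read -r dir event filename; do
    if [ "$filename" = "test.txt" ]; then
        touch /home/centos/log.txt
    fi
done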
I'm currently monitoring a log file and my ultimate goal is to write a script that uses tail -n0 -f and executes a certain command once grep finds a match. My current code:
tail -n 0 -f $logfile | grep -q $pattern && echo $warning > $anotherlogfile
This works but only once, since grep -q stops when it finds a match. The script must keep searching and running the command, so I can update a status log and run another script to automatically fix the problem. Can you give me a hint?
Thanks
Use a while loop:
tail -n 0 -f "$logfile" | while read LINE; do
echo "$LINE" | grep -q "$pattern" && echo "$warning" > "$anotherlogfile"
done
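If starting a grep process for every single line is a concern, bash can do the match itself. A sketch, treating $pattern as a plain substring (use =~ instead if it is a regular expression) and appending to the log rather than overwriting it:

tail -n 0 -f "$logfile" | while read -r LINE; do
    [[ $LINE == *"$pattern"* ]] && echo "$warning" >> "$anotherlogfile"
done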
awk will let us continue to process lines and take actions when a pattern is found. Something like:
tail -n0 -f "$logfile" | awk -v pattern="$pattern" '$0 ~ pattern {print "WARN" >> "anotherLogFile"}'
If you need to pass in the warning message and the path to anotherLogFile, you can use more -v flags to awk. Also, you could have awk take the action you want instead: it can run commands via the system() function, to which you pass the shell command to run.
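For instance, something along these lines passes the warning text and the log path in as variables and runs a repair command on every match (fix_problem.sh is only a placeholder for whatever you want to execute):

tail -n 0 -f "$logfile" |
awk -v pattern="$pattern" -v warning="$warning" -v out="$anotherlogfile" '
    $0 ~ pattern {
        print warning >> out          # append the warning to the status log
        system("./fix_problem.sh")    # placeholder repair command
    }'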
I have a bash script that I want to extend to support piping JSON into it.
Example:
echo '{}' | myscript store
So, I tried the following:
local value="$1"
if [[ -z "$value" ]]; then
while read -r piped; do
value=$piped
done;
fi
This works in the simple case above, but doing:
cat input.json | myscript store
Only gets the last line of the file input.json; it does not handle every line.
How can I support all cases of piping?
The following works:
if [[ -z "$value" && ! -t 0 ]]; then
while read -r piped; do
value+=$piped
done;
fi
The trick was using += and also checking ! -t 0, which tests whether stdin is a terminal or not (i.e. whether something is being piped in).
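The -t 0 test on its own, for reference:

if [ -t 0 ]; then
    echo "stdin is a terminal (nothing piped in)"
else
    echo "stdin is a pipe or a redirect"
fi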
If you want to behave like cat, why not use it?
#! /bin/bash
value="$( cat "$#" )"