Cron job to run once while monitoring a directory - shell

I have a script which is run by a cron job upon receiving a file:
a=$(find /home/cassandra -type f -name "*.tar.gz" | wc -l); if [[ $a -gt 0 ]]; then python monitor.py ;fi
This script runs continuously and executes monitor.py.
I want the shell script to run monitor.py only upon receiving a .tar.gz file.

Does your monitor.py take a tgz file as its argument?
If it does, I would simplify your command and call monitor.py from the find.
find /home/cassandra -type f -name "*.tar.gz" -exec python monitor.py {} \;
The {} in the find command gets replaced with the file that was found, and the \; ends the command string. The command in your -exec gets called once for each file found.
If you want to pass all the files found by find to a single invocation of the command, you can use xargs.
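For instance, a minimal sketch assuming monitor.py accepts multiple file paths as arguments:
find /home/cassandra -type f -name "*.tar.gz" -print0 | xargs -0 python monitor.py
The -print0/-0 pair keeps file names with spaces intact, and xargs invokes monitor.py once with all the found files on its command line.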


How to cd into grep output?

I have a shell script which basically searches all folders inside a location and I use grep to find the exact folder I want to target.
for dir in /root/*; do
    grep "Apples" "${dir}"/*.* || continue
    # ... (this is where I want to move the folders)
done
While grep successfully finds my target directory, I'm stuck on how to move the folders I want into that target directory. An idea I had was to cd into the grep output, but that's where I got stuck. I tried some Google results, but none helped with my case.
Example grep output: Binary file /root/ant/containers/secret/Documents/2FD412E0/file.extension matches
I want to cd into 2FD412E0 and move two folders inside that directory.
dirname is the key to that:
cd $(dirname $(grep "...." ...))
will let you enter the directory.
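A concrete sketch of that idea, assuming grep's -l (--files-with-matches) flag so the output is a bare file path rather than a matching line, and head -n1 in case more than one file matches:
cd "$(dirname "$(grep -Rl "Apples" /root/ | head -n1)")"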
As people mentioned, dirname is the right tool to strip off the file name from the path.
I would use find for this kind of task:
while read -r file
do
    target_dir=$(dirname "$file")
    # do something with "$target_dir"
done < <(find /root/ -type f \
    -exec grep --files-with-matches "Apples" {} \;)
Consider using find's -maxdepth option. See the man page for find.
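For instance, a sketch limiting the search to two levels below /root (the depth here is an arbitrary assumption; adjust it to your layout):
find /root/ -maxdepth 2 -type f -exec grep --files-with-matches "Apples" {} \;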
Well, there is actually a simpler solution :) I just like writing bash scripts. You might simply use a single find command like this:
find /root/ -type f -exec grep Apples {} ';' -exec ls -l {} ';'
Note the second -exec. It will be executed if the previous -exec command exited with status 0 (success). From the man page:
-exec command ;
Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ; is encountered. The string {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find.
Replace the ls -l command with your stuff.
And if you want to execute dirname within the -exec command, you may do the following trick:
find /root/ -type f -exec grep -q Apples {} ';' \
    -exec sh -c 'cd "$(dirname "$0")"; pwd' {} ';'
Replace pwd with your stuff.
When find is not available
In the comments you write that find is not available on your system. The following solution works without find:
grep -R --files-with-matches "Apples" "${dir}" | while read -r file
do
    target_dir=$(dirname "$file")
    # do something with "$target_dir"
    echo "$target_dir"
done

use file command instead of -name

I want to write a shell script that searches all .txt files for the word cat and replaces it with mouse. I wrote the following code:
#!/bin/bash
read directory
for F in $(find "$directory" -name '*.txt' -type f)
do
    echo "$F"
    sed -i "s/\<cat\>/mouse/g" "$F"
done
I am supposed to use the "file" command. I searched for it, and it seems like the file command identifies the type of a file. I want to know how I can include that command in my script.
Assuming you are in the directory where all the *.txt files are, you can execute the following command:
find . -name '*.txt' -exec sed -i "s/\<cat\>/mouse/g" "{}" \;
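If the assignment requires the file command specifically, here is a hedged sketch that uses file to test each file's type instead of trusting the .txt extension. The "text" match string is an assumption about file's output on your system (e.g. "ASCII text"); check what file prints locally first:
find "$directory" -type f -exec sh -c '
    file "$1" | grep -q "text" && sed -i "s/\<cat\>/mouse/g" "$1"
' _ {} \;
Here _ fills sh's $0 slot so the found file name arrives as $1.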

How to use if command as an extension of find?

I want to search for some name(s) in a directory tree, and when I find a specific directory, I want to check whether it has some default subdirectory. The problem is, I do not know how to accomplish this. I tried using this command:
find -iname $i -exec if [ -d $1/subdir ] then echo $1 fi
but then I get report like this:
find: missing argument to `-exec'
So, what is the right solution for this?
-exec requires a single executable, not an arbitrary shell command. Run a new shell instance explicitly, and pass your shell command as the argument to the -c option. Pass {} as the first positional argument to sh (with _ filling the $0 slot) so that the name of the found directory is properly passed to the shell command.
find -iname "$i" -exec sh -c 'if [ -d "$1"/subdir ]; then echo "$1"; fi' _ '{}' \;
It might be a little simpler to reorganize your logic, if possible:
find -wholename "*/$i/subdir" -type d -exec dirname '{}' \;
This has find look for the actual subdir directory instead of its parent, then prints the directory name containing subdir.

Linux Find and execute

I need to write a Linux script that does the following:
Find all the .ear and .war files in a directory called ftpuser, including the new ones that are going to appear there, and then execute a command that produces some reports. When the command finishes, those files need to be moved to another directory.
Below, I think, is how the command starts. My question is: does anyone know how to find the new entries in the directory and then execute the command so I can get the report?
find /directory -type f -name "*.ear" -or -type f -name "*.war"
It seems that you'd want the script to run indefinitely. Loop over the files that you find in order to perform the desired operations:
while : ; do
    for file in $(find /directory -type f -name "*.[ew]ar"); do
        some_command "${file}" # Execute some command for the file
        mv "${file}" /target/directory/ # Move the file to some directory
    done
    sleep 60s # Maybe sleep for a while before searching again!
done
This might also help: Monitor Directory for Changes
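If polling in a loop is too coarse, an event-driven sketch using inotifywait from the inotify-tools package (this assumes the package is installed and the kernel supports inotify):
inotifywait -m -e close_write --format '%w%f' /directory | while read -r file
do
    case "$file" in
        *.ear|*.war)
            some_command "$file"
            mv "$file" /target/directory/
            ;;
    esac
done
-m keeps inotifywait running, close_write fires only after a file has been completely written, and --format '%w%f' prints the full path of each new file.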
If it is not time-critical, but you are not willing to start the script (like the one suggested by devnull) manually after each reboot, I suggest using a cron job.
You can set up a job with
crontab -e
and append a line like this:
* * * * * /usr/bin/find /some/path/ftpuser/ -type f -name "*.[ew]ar" -exec sh -c '/path/to/your_command $1 && mv $1 /target_dir/' _ {} \;
This runs the search every minute. You can change the interval, see https://en.wikipedia.org/wiki/Cron for an overview.
The && causes the move to be executed only if your_command succeeded. You can check this by running it manually, followed by
echo $?
0 means true or success. For more information, see http://tldp.org/LDP/abs/html/exit-status.html
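A quick illustration of the short-circuit behaviour, with true and false standing in for your_command:
true && echo moved   # prints "moved" because the first command succeeded
false && echo moved  # prints nothing because the first command failed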

Using find and xargs, how can I stop execution on errors without crapping out?

In my script I have the following 3 commands.
Basically, what it is trying to do is:
create a symlink in a temp directory to a certain bunch of files, based on their filenames
change the name of each symlink to match the current date
move the symlinks from the temp directory to their proper location
find . -type f -name "*${regex}-*" -exec ln -s {} "${DataTempPath}/"{} \;
find "$DataTempPath" -type l | sed -e "p;s/A[0-9]*/A${today}/" | xargs -n2 mv
mv $DataTempPath/* $DataSetPath
This will be inserted as a cron job to run every 15 mins, which is not a problem when the source directory contains valid data.
However, when it doesn't contain any files, I get errors from the second find command and the mv command.
What I want, I guess, is a way of not executing the last two lines of the script if the first one does not create any new links.
GNU xargs supports a --no-run-if-empty parameter; to quote the documentation: "If the standard input is completely empty, do not run the command. By default, the command is run once even if there is no input."
This should help avoid the xargs error (assuming you are running GNU xargs).
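With GNU xargs the short form is -r, so the second command becomes:
find "$DataTempPath" -type l | sed -e "p;s/A[0-9]*/A${today}/" | xargs -r -n2 mv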
Check the status of the command:
find . -type f -name "*${regex}-*" -exec ln -s {} "${DataTempPath}/"{} \;
if [[ $? -eq 0 ]]; then
    find "$DataTempPath" -type l | sed -e "p;s/A[0-9]*/A${today}/" | xargs -n2 mv
    mv "$DataTempPath"/* "$DataSetPath"
fi
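Note that find exits with status 0 even when it matches nothing, so the check above only catches hard errors such as unreadable directories. A hedged variant that instead gates on whether any symlinks actually landed in the temp directory:
if find "$DataTempPath" -type l | grep -q .; then
    find "$DataTempPath" -type l | sed -e "p;s/A[0-9]*/A${today}/" | xargs -n2 mv
    mv "$DataTempPath"/* "$DataSetPath"
fi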
