I need to write a Linux script that does the following: find all the .ear and .war files in a directory called ftpuser, including the new ones that will appear there over time, and then execute a command that produces some reports. When the command finishes, those files need to be moved to another directory.
Below is what I think the start of the command looks like. My question is: does anyone know how to find the new entries in the directory and then execute the command so I can get the report?
find /directory -type f -name "*.ear" -or -type f -name "*.war"
It seems that you'd want the script to run indefinitely. Loop over the files that you find in order to perform the desired operations:
while : ; do
    # -print0 with read -d '' keeps file names containing spaces intact
    find /directory -type f -name "*.[ew]ar" -print0 |
    while IFS= read -r -d '' file; do
        some_command "${file}"            # Execute some command for the file
        mv "${file}" /target/directory/   # Move the file to some directory
    done
    sleep 60s   # Maybe sleep for a while before searching again!
done
This might also help: Monitor Directory for Changes
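For instance, a minimal sketch of that event-driven approach, assuming the inotify-tools package is installed and reusing the placeholder paths and some_command from above:

#!/bin/bash
# React as soon as a file has been fully written into /directory (close_write event)
inotifywait -m -e close_write --format '%w%f' /directory |
while IFS= read -r file; do
    case "$file" in
        *.ear|*.war) some_command "$file" && mv "$file" /target/directory/ ;;
    esac
done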
If it is not time-critical, but you are not willing to start the script (like the one suggested by devnull) manually after each reboot, I suggest using a cron job.
You can set up a job with
crontab -e
and appending a line like this one:
* * * * * /usr/bin/find /some/path/ftpuser/ -type f -name "*.[ew]ar" -exec sh -c '/path/to/your_command "$1" && mv "$1" /target_dir/' _ {} \;
This runs the search every minute. You can change the interval; */5 * * * *, for example, runs it every five minutes. See https://en.wikipedia.org/wiki/Cron for an overview.
The && causes the move to be executed only if your_command succeeded. You can check by running it manually, followed by
echo $?
0 means true or success. For more information, see http://tldp.org/LDP/abs/html/exit-status.html
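For a quick illustration, the true and false shell builtins succeed and fail by definition:

true; echo $?    # prints 0
false; echo $?   # prints 1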
I have a bash script that is intended to find all files older than "X" minutes and redirect the output into a file. The logic is: I have a for loop and I want to run find across all the directories, but for some reason it prints and redirects to the output file only the files from the last directory (TESTS[3]="/tmp/test/"). I want the files from all the directories to be redirected there. Thank you for the help :D
Here is the script:
#!/bin/bash
set -x
if [ ! -d $TEST ]
then
    echo "The directory does not exist (${TEST})!"
    echo "Aborted."
    exit 1
fi
TESTS[0]="/tmp/t1/"
TESTS[1]="/tmp/t2/"
TESTS[2]="/tmp/t3/"
TESTS[3]="/tmp/test/"
for TEST in "${TESTS[@]}"
do
    find $TEST -type f -mmin +1 -exec ls -ltrah {} \; > /root/alex/out
done
You are using > inside the loop to redirect the output of the latest command to the file each time, overwriting the previous contents of the file. If you used >> it would open the file in "append" mode each time instead, but...
A better way to fix your issue would be by moving the redirection to outside the loop:
done > /root/alex/out
And an even better way than that would be to avoid a loop entirely and just use:
find "${TESTS[#]}" -type f -mmin +1 -exec ls -ltrah {} \; > /root/alex/out
Since find accepts multiple paths.
I think you can use {} + instead of {} \; to invoke the minimum number of ls processes required to handle all the arguments. You might also want to look at -printf in man find, because you can probably produce similar output with find's built-in format specifiers, without calling ls at all.
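For example, a sketch of both variants, assuming GNU find for the -printf format directives:

# One ls invocation per batch of file names:
find "${TESTS[@]}" -type f -mmin +1 -exec ls -ltrah {} + > /root/alex/out

# Skip ls entirely with find's built-in -printf:
find "${TESTS[@]}" -type f -mmin +1 -printf '%M %u %g %s %TY-%Tm-%Td %TH:%TM %p\n' > /root/alex/out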
This is my first time using Automator and it seems like a pretty nifty tool. I am running into an issue, however, at the very end of the automation. The goal of my Automator workflow is to specify a path, then create a directory, Highschool1 for example, within that path.
From here, I want it to search for any files within the specified path that include "Highschool1" in the name, and move them into the new Highschool1 directory. Here is how my bash script works within terminal:
mkdir "/Users/tommy/Desktop/TestShow1/WG/Highschool1"
This creates the directory as intended. Then:
find /Users/tommy/Desktop/TestShow1/WG -name 'Highschool1' -prune -o -type f -name '*Highschool1*' -exec mv -- {} /Users/tommy/Desktop/TestShow1/WG/Highschool1 \;
This finds the files I want while excluding the new Highschool1 directory, and then moves the found files into that Highschool1 directory. It is all working as intended at the base.
It's when I try to apply this script within my automation using positional parameters that it stops working.
- I stuff a variable called "HighschoolName" with the input "Highschool1".
- Then I stuff a variable called "pathA" with the input, which is the path I chose: "/Users/tommy/Desktop/TestShow1/WG".
- Then I call back my HighschoolName variable and begin with the positional parameters.
This is the final script I used:
mkdir "$1/$2"
find /$1 -name '$2' -prune -o -type f -name '*$2*' -exec mv -- {} /$1/$2 \;
This creates the directory Highschool1 where I want it, but fails to move any files into it. It gives me no error message either. It simply acts as if the script was run successfully. Does anyone have any idea what the problem could be?
Read about quoting
In:
find /$1 -name '$2' -prune -o -type f -name '*$2*' -exec mv -- {} /$1/$2 \;
'$2' will not interpolate the variable; you need to use "$2" (same for '*$2*').
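With the quoting fixed, and assuming $1 is already an absolute path (which makes the extra leading slash in /$1 unnecessary), the script might look like:

mkdir "$1/$2"
find "$1" -name "$2" -prune -o -type f -name "*$2*" -exec mv -- {} "$1/$2" \;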
I have a script which is run by a cron job upon receiving a file.
a=$(find /home/cassandra -type f -name "*.tar.gz" | wc -l); if [[ $a -gt 0 ]]; then python monitor.py ;fi
This script runs continuously and executes monitor.py.
I want the shell script to run monitor.py only upon receiving a tar.gz file.
Does your monitor.py take a tgz file as its argument?
If it does, I would simplify your command and call monitor.py from the find.
find /home/cassandra -type f -name "*.tar.gz" -exec python monitor.py {} \;
The {} in the find command gets replaced with the file that was found. The \; ends the command string. The command in your -exec gets called once for each file found.
If you wanted to pass all the files that were found by find to a command once, you can use xargs.
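For instance, a sketch of the xargs variant, assuming monitor.py accepts multiple file paths and GNU findutils is available:

# -print0/-0 keep unusual file names intact; -r skips the run entirely when nothing matches
find /home/cassandra -type f -name "*.tar.gz" -print0 | xargs -0 -r python monitor.py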
I have a directory containing multiple files. These files arrive over the network; the source sends them via the scp command. I have a batch command that runs for each of these incoming files; this command runs for around 5-6 hours.
I am trying to run the below command on my Linux box:
find Documents/wget/ -maxdepth 1 -type f -exec btaudip '{}' \;
My goal is to start the batch program for all files in the directory simultaneously, but the above command runs for only one file at a time. So I changed the command as below, but it failed.
find Documents/wget/ -maxdepth 1 -type f -exec btaudip '{}' & \;
How should I change my command for this?
The fact is that find gathers all arguments after the -exec option up to the first semicolon and then runs (exec) the resulting command, without passing them to a shell.
With your modifications, the final command generated by find is something like:
btaudip FNAME '&'
Therefore at each run btaudip is passed two parameters: the current file name (as found by find) and an ampersand.
To achieve what you want, you need to invoke a shell to process the & correctly, for example with the following command (the file name is passed as a positional parameter rather than spliced into the shell string, which would break on unusual file names):
find Documents/wget/ -maxdepth 1 -type f -exec bash -c 'btaudip "$1" &' _ {} \;
I don't think find has this kind of job control; it is not a shell, and while it is able to spawn shells, the shells would run in sequence. Commands run asynchronously in those shells would run in sequence as a result.
Instead, you can rewrite this as a shell loop:
for f in *; do [ -f "$f" ] || continue; btaudip "$f" & done
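If GNU xargs is available, its -P flag is another option: it runs a bounded number of invocations in parallel without an explicit loop. A sketch, with the parallelism of 4 chosen arbitrarily:

# Run up to 4 btaudip processes at once, one file per invocation
find Documents/wget/ -maxdepth 1 -type f -print0 | xargs -0 -r -P 4 -n 1 btaudip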
Using the above suggestions and improvising, I made a script file to convert all the PDF files in a folder and move them into respective folders.
ConvertPDFtoJPG.sh
#!/bin/bash
sudo apt-get install poppler-utils      # provides pdftoppm
mkdir -p ConvertedJPG
for f in *.pdf
do
    echo "converting $f"
    pdftoppm -jpeg -rx 300 -ry 300 "$f" JPG   # writes JPG-1.jpg, JPG-2.jpg, ...
    echo "moving converted JPGs to $f"
    mkdir -p "./ConvertedJPG/$f"
    mv *.jpg "./ConvertedJPG/$f/"
done
You can create this file by opening a terminal in the folder and using these commands:
touch ConvertPDFtoJPG.sh
chmod +x ./ConvertPDFtoJPG.sh
gedit ./ConvertPDFtoJPG.sh
Paste the script above into the text editor,
close the editor, then run it in the terminal:
./ConvertPDFtoJPG.sh
Recently (that is, a few days ago, in winter) I wrote a simple script which packs some folders; the script is listed below:
#!/bin/bash
for DIR in `find -name "MY_NAME*" -type d`
do
    tar -zcvf $DIR.tar.gz $DIR &
done
echo "Packing is done" > packing.txt
It works fine except that it searches for MY_NAME* in every sub-directory of the folder where it runs.
Because MY_NAME* folders contain lots of files and packing takes long hours, I want to limit the time lost: I want the find command to find those MY_NAME* directories only within the folder where the script is running (without sub-directories). Is that possible with the find command?
If you want it only in the folder you are in, don't use find. Try this:
for DIR in MY_NAME*/
do
    # The glob leaves a trailing slash on $DIR; strip it when naming the archive
    tar -zcvf "${DIR%/}.tar.gz" "$DIR" &
done
echo "Packing is done" > packing.txt
It seems you want the -maxdepth option on the find command; GNU find expects it before the other tests:
find . -maxdepth 1 -name "MY_NAME*" -type d
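Putting the two answers together, a sketch that also survives directory names with spaces and reports success only after the background tar jobs have finished (process substitution keeps the loop in the current shell, so wait sees the jobs):

while IFS= read -r -d '' DIR; do
    tar -zcvf "${DIR}.tar.gz" "$DIR" &
done < <(find . -maxdepth 1 -name "MY_NAME*" -type d -print0)
wait   # let every background tar finish first
echo "Packing is done" > packing.txt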