Run a script on all recently modified files in bash

I would like to:
1. Find the latest modified file in a folder
2. Change some files in the folder
3. Find all files modified after the file from step 1
4. Run a script on the files found in step 3
This is where I've ended up:
#!/bin/bash
var=$(find /home -type f -exec stat {} --printf="%y\n" \; |
sort -r |
head -n 1)
echo "$var"
sudo touch -d "$var" /home/foo
find /home/ -newer /home/foo
Can anybody help me achieve this?

Use inotifywait instead to monitor files and check for changes
inotifywait -m -q -e modify --format "%f" {Path_To_Monitored_Directory}
You can also make it write its output to a file, loop over the file's contents, and run your script on every entry.
inotifywait -m -q -e modify --format "%f" -o {Output_File} {Path_To_Monitored_Directory}
Sample output:
file1
file2
Example
We are monitoring a directory named /tmp/dir, which contains file1 and file2.
The following script monitors the whole directory and echoes the name of each modified file:
#!/bin/bash
while read -r ch
do
    echo "File modified= $ch"
done < <(inotifywait -m -q -e modify --format "%f" /tmp/dir)
Run this script, then modify file1 (e.g. echo "123" > /tmp/dir/file1); the script will output the following:
File modified= file1
You can also look at this Stack Overflow answer.
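If you would rather stick with the find-based approach from the question, here is a minimal sketch covering all four steps (GNU find is assumed, and your_script.sh is a placeholder for whatever you want to run):
#!/bin/bash
# Step 1: find the newest file and copy its mtime onto a reference file.
ref=$(mktemp)
newest=$(find /home -type f -printf '%T@ %p\n' | sort -rn | head -n 1 | cut -d ' ' -f 2-)
touch -r "$newest" "$ref"
# Step 2: ... change some files in the folder ...
# Steps 3 and 4: run the placeholder script on everything modified since.
find /home -type f -newer "$ref" -exec your_script.sh {} \;
rm -f "$ref"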

Related

Is there a script to compress every single file in a directory and output it to another directory?

So I'm looking to compress the files in a given directory, for example /etc/input, and output them to /etc/output, like this:
$ ls /etc/input
file1
file2
file3
$ script.sh
$ ls /etc/output
file1.zip
file2.zip
file3.zip
$ ls /etc/input
For now, what I wrote looks like this:
find . -type f -print | while read -r fname ; do
    mkdir -p "../output/$(dirname "$fname")"
    gzip -c "$fname" > "../output/$fname.gz"
done
You could use find, but I think it's simpler with pure Bash.
INPUT=/etc/input
OUTPUT=/etc/output
mkdir -p "$OUTPUT"
for file in "$INPUT"/* ; do
gzip -c "$file" > "${OUTPUT}/${file}.gz"
done
Change INPUT and OUTPUT to match what you want.
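Note that the question's expected listing shows .zip files; if you really want zip archives rather than gzip, here is a minimal sketch, assuming the Info-ZIP zip tool is installed:
INPUT=/etc/input
OUTPUT=/etc/output
mkdir -p "$OUTPUT"
for file in "$INPUT"/* ; do
    # -j ("junk paths") stores only the file name inside the archive
    zip -j "${OUTPUT}/$(basename "$file").zip" "$file"
done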

I want a for loop script to do some commands in every numbered subdirectory (1-400)

I am looking for a "for loop" to run multiple scripts in each directory and save the output in the same directory.
#!/bin/bash
for dir in /home/Desktop/4Testing/batch_2019-05-16/*   # inside batch_2019-05-16 there are 400 folders
do
    dir=${dir%*/}   # remove the trailing "/"
    echo "$1" *.pcap > 12.txt
    tshark -r *.pcap -T fields -e frame.time -e _ws.col.Source -e _ws.col.Destination -e frame.len -e _ws.col.src.prt -e _ws.col.dst.prt -E separator=, > home.csv
    cat home.txt 12.txt > Final.csv
done
done
You can run into trouble if your directory names have spaces/weird characters, or if some entries are files rather than directories! It is much better to put your processing in a function/script that takes the directory name as a parameter and drive it with find, something like:
find /home/Desktop/4Testing/batch_2019-05-16 -maxdepth 1 -type d -exec <script_name> {} \;
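A minimal sketch of such a helper script (the name process_dir.sh is hypothetical, and only a subset of the tshark fields from the question is used):
#!/bin/bash
# process_dir.sh -- run tshark over every .pcap in the directory
# passed as $1 and save a CSV next to each capture.
cd "$1" || exit 1
for pcap in *.pcap; do
    [ -e "$pcap" ] || continue   # skip directories with no .pcap files
    tshark -r "$pcap" -T fields -e frame.time -e _ws.col.Source \
        -e _ws.col.Destination -e frame.len -E separator=, \
        > "${pcap%.pcap}.csv"
done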

Shell Script: How to copy files with specific string from big corpus

I have a small bug and don't know how to solve it. I want to copy files from a big folder with many files, where the files contain a specific string. For this I use grep, ack or (in this example) ag. When I'm inside the folder the matching works without problems, but when I try to loop over the matches in the following script, it never iterates over them. Here is my script:
ag -l "${SEARCH_QUERY}" "${INPUT_DIR}" | while read -d $'\0' file; do
echo "$file"
cp "${file}" "${OUTPUT_DIR}/${file}"
done
SEARCH_QUERY holds the string I want to find inside the files, INPUT_DIR is the folder where the files are located, and OUTPUT_DIR is the folder the found files should be copied to. Is there something wrong with the while loop?
EDIT:
Thanks for the suggestions! I went with this one, because it also finds files in subfolders and saves a list of all the files.
ag -l "${SEARCH_QUERY}" "${INPUT_DIR}" > "output_list.txt"
while read -r file
do
    echo "${file##*/}"
    cp "${file}" "${OUTPUT_DIR}/${file##*/}"
done < "output_list.txt"
Better to implement it like below with a find command:
find "${INPUT_DIR}" -name "*.*" | xargs grep -l "${SEARCH_QUERY}" > /tmp/file_list.txt
while read file
do
echo "$file"
cp "${file}" "${OUTPUT_DIR}/${file}"
done < /tmp/file_list.txt
rm /tmp/file_list.txt
or another option:
grep -l "${SEARCH_QUERY}" "${INPUT_DIR}/*.*" > /tmp/file_list.txt
while read file
do
echo "$file"
cp "${file}" "${OUTPUT_DIR}/${file}"
done < /tmp/file_list.txt
rm /tmp/file_list.txt
If you do not mind doing it in just one line:
grep -lr 'ONE\|TWO\|THREE' | xargs -I xxx -P 0 cp xxx dist/
guide:
-l prints only the file name and nothing else
-r searches recursively through the CWD and all subdirectories
'ONE\|TWO\|THREE' matches any of these words: 'ONE', 'TWO' or 'THREE'
| pipes the output of grep to xargs
-I xxx substitutes each incoming file name for xxx (it is just a placeholder)
-P 0 runs the cp commands in parallel, as many as possible at once
cp copies each file xxx to the dist directory
If I understand the behavior of ag correctly, then you have to
adjust the read delimiter to '\n', or
use ag -0 -l to force delimiting by '\0'
to solve the problem in your loop.
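For example, a minimal sketch of the corrected loop using ag -0, so the file names are NUL-delimited end to end (the destination uses only the base name, as in the accepted edit above):
ag -0 -l "${SEARCH_QUERY}" "${INPUT_DIR}" | while IFS= read -r -d $'\0' file; do
    echo "$file"
    cp "$file" "${OUTPUT_DIR}/${file##*/}"
done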
Alternatively, you can use the following script, which is based on find (with grep doing the content match) instead of ag.
while read -r file; do
    echo "$file"
    cp "$file" "$OUTPUT_DIR/${file##*/}"
done < <(find "$INPUT_DIR" -type f -exec grep -l "$SEARCH_QUERY" {} +)

Ubuntu: Rename multiple files in different directory

I need to rename files which are in this structure:
Dir1
--file1
--file2
-- ...
Dir2
--file1
--file2
-- ...
...
Dir62
--file1101
--file1102
-- ...
The new names would be 1_01, 1_02, ... in the first directory, 2_01, 2_02, ... in the second directory, and so on.
Is there a way to do it in a single go?
Currently, I am using:
ls | cat -n | while read n f; do mv "$f" "10_$n.png"; done
which works on one directory at a time...
Any better way, please?
If you run this command, it will use GNU Parallel to start a new bash shell in each of the directories in parallel, and run ls independently in each one:
parallel --dry-run -k 'cd {} && ls' ::: */
Sample Output
cd Dir01/ && ls
cd Dir02/ && ls
cd Dir78/ && ls
If you remove the --dry-run it will do it for real.
So, instead of running ls, let's now look at using the rename command in each of the directories. The following will rename all the files in a directory with sequentially increasing numbers ($N):
rename --dry-run '$_=$N' *
Sample Output
'file87' would be renamed to '1'
'file88' would be renamed to '2'
'file89' would be renamed to '3'
'fred' would be renamed to '4'
All the foregoing suggests the command you want would be:
parallel --dry-run -k 'cd {} && rename --dry-run "s/.*/{#}_\$N/" *' ::: */
You can run it as it is and it will just show you what it is going to do, without actually doing anything.
If you like the look of that, remove the first --dry-run and run it again and it will actually go into each subdirectory and do a dry-run of the rename, again without actually changing anything.
If you still like the look of the command, make a small copy of your files somewhere in a temporary directory, try removing both --dry-run parameters, and see if it lives up to your needs.
ls -1 -d ./*/ | cat -n | xargs -I % bash -c '
    echo "%" | while read dirnum dirname; do
        ls "${dirname}" | cat -n | while read filenum filename; do
            mv -v "${dirname}${filename}" "${dirnum}_${filenum}.png"
        done
    done'
We create a directory structure with mkdir and touch:
mkdir Dir{1,2,3,4,5}
touch Dir{1,2,3,4,5}/file{1,2,3}
Which gives the result:
1_1.png
1_2.png
1_3.png
2_1.png
2_2.png
2_3.png
3_1.png
3_2.png
3_3.png
4_1.png
4_2.png
4_3.png
5_1.png
5_2.png
5_3.png
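The question actually asks for zero-padded names like 1_01 and 1_02, which neither answer produces. A minimal pure-Bash sketch of that variant (the two-digit padding and the .png extension are assumptions carried over from the question):
#!/bin/bash
# Rename the files in each subdirectory to <dirnum>_<filenum>.png,
# zero-padding the file counter to two digits (1_01.png, 1_02.png, ...).
n=0
for dir in */ ; do
    n=$((n + 1))
    m=0
    for f in "$dir"*; do
        [ -e "$f" ] || continue   # skip empty directories
        m=$((m + 1))
        mv -v "$f" "$(printf '%d_%02d.png' "$n" "$m")"
    done
done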

Newest file in directories

Hi, I have two directories:
Directory 1:
song.mp3
work.txt
Directory 2:
song.mp3
work.txt
These files are the same, but song.mp3 in directory 1 is newer than song.mp3 in directory 2, and work.txt in directory 2 is newer than work.txt in directory 1.
Now, how can I print this to two files? For example:
in file1, the files that are newer than in directory 2 (so it must be song.mp3),
and in file2, the files that are newer than in directory 1 (so it must be work.txt).
I tried:
find $directory1 -type f -newer $directory2
but it always prints the newest files from both directories. Could someone help me?
-newer $directory2 is just using the timestamp on the directory $directory2 as the reference point for all the comparisons. It doesn't look at any of the files inside $directory2.
I don't think there's anything like a "compare each file to its counterpart in another directory" operation built in to find, so you'll probably have to do some of the work yourself. Here's a short script demonstrating one way it can be done:
(cd "$directory1" && find . -print) | while IFS= read -r fn; do
    if [ "$directory1/$fn" -nt "$directory2/$fn" ]; then
        printf "%s\n" "$directory1/$fn"
    else
        printf "%s\n" "$directory2/$fn"
    fi
done
# set up the test
mkdir directory1 directory2
touch directory1/song.mp3
touch -t 200101010000 directory1/work.txt
touch -t 200101010000 directory2/song.mp3
touch directory2/work.txt
# find the newest of each filename:
# sort the files in both directories by mtime
# then only output the filename (regardless of directory) the first time seen
stat -c '%Y %n' directory[12]/* |
sort -rn |
cut -d " " -f 2- |
awk -F / '!seen[$2]++'
which outputs:
directory2/work.txt
directory1/song.mp3
If you are on a Linux that supports the following:
Fileage=$(date +%s -r filename)
you could run a find and print the age in seconds followed by the filename for each file, and then sort that output. This has the benefit that it will work across any number of directories, not just two. Glenn's more widely available stat -c could be used in place of my date command, and he has done the sort and awk for you!
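A minimal sketch of that idea (the directory names are examples, and date +%s -r is the GNU coreutils form):
# Print "mtime-in-seconds path" for every file, newest first,
# then keep only the first occurrence of each file name.
for f in directory1/* directory2/*; do
    printf '%s %s\n' "$(date +%s -r "$f")" "$f"
done |
sort -rn |
cut -d ' ' -f 2- |
awk -F / '!seen[$2]++'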
