I have a bash script that intends to find all files older than "X" minutes and redirect the output into a file. The logic is: I have a for loop and I want to run find across all the directories, but for some reason it prints and redirects into the output file only the files from the last directory (TESTS[3]="/tmp/test/"). I want the files from all the directories to end up there. Thank you for the help :D
Here is the script:
#!/bin/bash
set -x
if [ ! -d $TEST ]
then
echo "The directory does not exist (${TEST})!"
echo "Aborted."
exit 1
fi
TESTS[0]="/tmp/t1/"
TESTS[1]="/tmp/t2/"
TESTS[2]="/tmp/t3/"
TESTS[3]="/tmp/test/"
for TEST in "${TESTS[@]}"
do
find $TEST -type f -mmin +1 -exec ls -ltrah {} \; > /root/alex/out
done
You are using > inside the loop to redirect the output of the latest command to the file each time, overwriting the previous contents of the file. If you used >> it would open the file in "append" mode each time instead, but...
A better way to fix your issue would be by moving the redirection to outside the loop:
done > /root/alex/out
And an even better way than that would be to avoid a loop entirely and just use:
find "${TESTS[#]}" -type f -mmin +1 -exec ls -ltrah {} \; > /root/alex/out
Since find accepts multiple paths.
You can also use {} + instead of {} \; to invoke the minimum number of ls processes needed to handle all the arguments, and you might want to check -printf in man find, because you can probably get similar output using built-in format specifiers without calling ls at all.
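For example, a rough sketch with -printf (this assumes GNU find; the format specifiers approximate ls -l output, and unlike ls -ltrah the results are not sorted by time):
find "${TESTS[@]}" -type f -mmin +1 -printf '%M %u %g %s %TY-%Tm-%Td %TH:%TM %p\n' > /root/alex/out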
Can anyone please help me out by explaining what this adddate() function is doing exactly in this piece of code? Can anyone walk me through it line by line, especially the while IFS= read -r line part.
What are the 3 or more problems with this script?
What is a better/different way to solve this task?
Thanks a lot guys!
#!/bin/bash
adddate() {
while IFS= read -r line; do
echo "$(date) $line"
done
}
for file in $( find /tmp/ -type f -mtime +5 -name '*.fish.temp' )
do
ls -la $file | adddate >> /tmp/clean.log
done
find /tmp/ -type f -mtime +5 -name '*.fish.temp' | xargs rm
exit 0
adddate is a bash shell function used below; the output of ls is piped through it to prepend the date to each line, so the script builds a clean.log with the date included (the date at the time this script ran, not the timestamp of the file itself - this may be your 1st issue):
ls -la $file | adddate >> /tmp/clean.log
2nd - the while IFS= read -r line construct has been explained at stackoverflow/6830735
3rd issue - I would say it is the duplicated find command. I would run find only once: depending on how deep the directory tree is, it may take some time, and the two runs may not even return the same set of files.
4th issue - the final exit 0 is redundant at best: a script exits with the status of its last command anyway, and an unconditional exit 0 can mask a failure of the command before it.
5th issue is that piping the find output to xargs breaks on filenames containing spaces or other special characters:
find /tmp/ -type f -mtime +5 -name '*.fish.temp' | xargs rm
It is safer to let find run the removal itself:
find /tmp/ -type f -mtime +5 -name '*.fish.temp' -exec rm {} \;
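If your find supports it, -delete avoids spawning rm entirely (this assumes GNU find; a portable middle ground is -exec rm {} +, which batches the arguments):
find /tmp/ -type f -mtime +5 -name '*.fish.temp' -delete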
A bash alias, by the way, is nothing but a shortcut for a command.
UPDATE-1:
What is the "-r" argument for?
Per man read (thanks to urbanespaceman): with -r, a sequence like \n in the input is kept as 2 characters (\ and n) instead of the backslash being treated as an escape character.
-r Do not treat a <backslash> character in any special way. Consider each <backslash> to be part of the input line.
UPDATE-2:
is there any security issues with this script?
I guess it depends on how it is used and how often. You're appending to /tmp/clean.log, so you can easily run out of space if this is abused. Also, you're removing from your system whatever matches. And you exit 0 regardless of how find or any other command there exited. Is that what you want?
It's looking for all files under /tmp/ that were modified more than 5 days ago and whose names match '*.fish.temp'.
For each of these files it writes the ls output line by line to /tmp/clean.log, prepending the timestamp from the date command. (The -la really isn't needed here, I think.)
Then it runs the same find command again and pipes the results through rm to delete the files.
Finally it exits with a success code.
Step 3 is actually dodgy, as the second find could return different results, depending on how often files in that dir are added/changed, how long the process takes to run, etc. The deletion should be handled inside the for loop, on the same list of files.
IFS defines the field separator - when set to empty, nothing is stripped from the line and only the newline ends it.
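As for a better/different way: here is a minimal sketch that runs find only once and logs and deletes in the same pass (it assumes GNU find for -print0 and bash for read -d ''):
#!/bin/bash
find /tmp/ -type f -mtime +5 -name '*.fish.temp' -print0 |
while IFS= read -r -d '' file; do
    # log with the current date, then remove the file we just logged
    echo "$(date) $(ls -la "$file")" >> /tmp/clean.log
    rm -- "$file"
done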
I am flattening a directory of nested folders/picture files down to a single folder. I want to move all of the nested files up to the root level.
There are 3,381 files (no directories included in the count). I calculate this number using these two commands and subtracting the directory count (the second command):
find ./ | wc -l
find ./ -type d | wc -l
To flatten, I use this command:
find ./ -mindepth 2 -exec mv -i -v '{}' . \;
Problem is that when I get a count after running the flatten command, my count is off by 46. After going through the list of files before and after (I have a backup), I found that the mv command is overwriting files sometimes even though I'm using -i.
Here's details from the log for one of these files being overwritten...
.//Vacation/CIMG1075.JPG -> ./CIMG1075.JPG
..more log
..more log
..more log
.//dog pics/CIMG1075.JPG -> ./CIMG1075.JPG
So I can see that it is overwriting. I thought -i was supposed to stop this. I also tried -n and got the same count. Note, I do have about 150 duplicate filenames; I was going to rename those manually after flattening everything I could.
Is it a timing issue?
Is there a way to resolve?
NOTE: it does prompt me that some of the files would be overwritten; on those prompts I just press Enter so as not to overwrite. In the case above, there was no prompt. It just overwrote.
Apparently the manual entry clearly states:
The -n and -v options are non-standard and their use in scripts is not recommended.
In other words, you should mimic the -n option yourself. To do that, just check whether the target file already exists and act accordingly. In a shell script where the file is supplied as the first argument, this could be done as follows:
[ -f "${1##*/}" ]
The first argument contains leading directory components, which ${1##*/} strips off, leaving just the filename. Now simply chain the mv with ||, since we want it to run only when the file doesn't already exist:
[ -f "${1##*/}" ] || mv "$1" .
Using this, you can edit your find command as follows:
find ./ -mindepth 2 -exec bash -c '[ -f "${0##*/}" ] || mv "$0" .' '{}' \;
Note that we now use $0 because of the bash -c usage. Its first argument can't be the script name as usual, because there is no script; this means the argument order is shifted with respect to a normal shell script.
Why not check if the file exists prior to the move? Then you can leave the file where it is, or rename it, or do something else...
test -f or [ ] should do the trick.
I am on a tablet and cannot easily include the source.
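A rough sketch of that idea, renaming on collision (the .dup suffix is my own invention, and repeated collisions on the same name are not handled):
find . -mindepth 2 -type f -exec bash -c '
    base="${0##*/}"
    if [ -e "./$base" ]; then
        mv -v -- "$0" "./$base.dup"   # name already taken: move under an altered name
    else
        mv -v -- "$0" .
    fi' '{}' \;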
Is there a way to have the find command return a value when it does not find a match? Basically, I have an old backup, and I want to search for each of the files in it on my current computer. Here is a cheesy way I was going to do it:
first run the following from the home directory:
$ ls -lR * > AllFiles.txt;
This will build my listing of all of my files on the current computer.
Next, run the following script for each file in the backup:
#! /bin/bash
if ! grep $1 ~/AllFiles.txt; then
echo file $1 not found;
exit 1;
fi
Of course this is clunky, and it does not account for filename changes, but it's close enough. Alternatively, I'd like to run a find command for each of the backup files.
Note that a standard GNU find exits 0 as long as no error occurred, even when nothing matched, so its return value alone won't tell you whether there was a match. You can get a usable status by piping to grep -q, which succeeds only if find printed something:
find . ! -readable -prune -o -name 'filenameToSearch.ext' -print | grep -q .
then check the return value using:
echo $?
Any value other than 0 means it did not find a match.
If I understood you correctly:
grep -r valueORpatternToSearchinTEXT $(find . -type f) | wc -l
This searches every file in the working directory and its subdirectories for what you need, then counts the matching lines; if nothing is found you will get 0 at the end. Remove the pipe and everything after it if you want to see what is found and where.
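If you want to skip the AllFiles.txt listing entirely, here is a rough sketch of the per-file find approach (it assumes GNU find for -printf '%f\n' and -quit, uses ~/backup as a stand-in for your backup location, and breaks on filenames containing newlines):
#!/bin/bash
# for every file in the backup, look for a file with the same name under $HOME
find ~/backup -type f -printf '%f\n' | while IFS= read -r name; do
    if ! find "$HOME" -name "$name" -print -quit | grep -q .; then
        echo "file $name not found"
    fi
done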
Part of my Bash script's intended function is to accept a directory name and then iterate through every file in it.
Here is part of my code:
#! /bin/bash
# sameln --- remove duplicate copies of files in specified directory
D=$1
cd $D #go to directory specified as default input
fileNum=0 #save file numbers
DIR=".*|*"
for f in $DIR #for every file in the directory
do
files[$fileNum]=$f #save that file into the array
fileNum=$((fileNum+1)) #increment the fileNum
echo aFile
done
The echo statement is for testing purposes. I passed the name of a directory containing four regular files as an argument, and I expected my output to look like:
aFile
aFile
aFile
aFile
but the echo statement only shows up once.
A single operation
Use find for this, it's perfect for it.
find <dirname> -maxdepth 1 -type f -exec echo "{}" \;
The flags explained: -maxdepth defines how deep in the hierarchy you want to look (dirs in dirs in dirs), -type f matches files, as opposed to -type d for dirs. And -exec allows you to process each found file/dir, which can be accessed through {}. You can alternatively pass it to a bash function to perform more tasks.
This simple bash script takes a dir as argument and lists all its files:
#!/bin/bash
find "$1" -maxdepth 1 -type f -exec echo "{}" \;
Note that the last line is equivalent to find "$1" -maxdepth 1 -type f -print.
Performing multiple tasks
Using find one can also perform multiple tasks, by piping to xargs or a while read loop, but I prefer to use a function. An example:
#!/bin/bash
function dostuff {
# echo filename
echo "filename: $1"
# remove extension from file
mv "$1" "${1%.*}"
# get containing dir of file
dir="${1%/*}"
# get filename without containing dirs
file="${1##*/}"
# do more stuff like echoing results
echo "containing dir = $dir and file was called $file"
}; export -f dostuff
# export the function so you can call it in a subshell (important!!!)
find . -maxdepth 1 -type f -exec bash -c 'dostuff "$1"' _ '{}' \;
Note that the function needs to be exported, as you can see. This is so you can call it in the subshell opened by bash -c; the _ placeholder becomes $0 there and the filename is passed as "$1" rather than being spliced into the command string. To test it out, I suggest you comment out the mv command in dostuff, otherwise you will remove all your extensions haha.
Also note that, passed this way, it is safe for weird characters like spaces in filenames, so no worries there.
Closing note
If you decide to go with the find command, which is a great choice, I advise you to read up on it because it is a very powerful tool. A simple man find will teach you a lot and you will learn a lot of useful options. You can for instance make find quit as soon as it has found a result (-quit), which can be handy to check quickly whether dirs contain videos or not, for example. It's truly an amazing tool that can be used on various occasions and often you'll be done with a one-liner (kinda like awk).
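A rough sketch of that early-quit idea (the video extensions and the $dir variable are just examples):
# succeeds as soon as one video file is found anywhere under "$dir"
if find "$dir" -type f \( -name '*.mp4' -o -name '*.mkv' \) -print -quit | grep -q .; then
    echo "contains video"
fi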
You can read the files directly into an array, then iterate through them:
#! /bin/bash
cd "$1" || exit
files=(*)
for f in "${files[@]}"
do
echo "$f"
done
If you are iterating over files below only a single directory, you are better off using simple filename/path expansion to avoid certain uncommon filename issues. The following will iterate through all files in a given directory passed as the first argument (default ./):
#!/bin/bash
srchdir="${1:-.}"
for i in "$srchdir"/*; do
printf " %s\n" "$i"
done
If you must iterate below an entire subtree that includes numerous branches, then find will likely be your only choice. However, be aware that using find or ls to populate a for loop brings with it the potential for problems with embedded characters such as a \n within a filename, etc. See Why for i in $(find . -type f) is wrong even though unavoidable at times.
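For the subtree case, here is a minimal sketch of the safe pattern (NUL-delimited names, assuming bash and a find that supports -print0):
#!/bin/bash
srchdir="${1:-.}"
# read NUL-separated names so a \n inside a filename cannot split an entry
while IFS= read -r -d '' f; do
    printf " %s\n" "$f"
done < <(find "$srchdir" -type f -print0)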
I'm looping through certain files (all files starting with MOVIE) in a folder with this bash script code:
for i in MY-FOLDER/MOVIE*
do
which works fine when there are matching files in the folder. But when there aren't any, the loop still runs once with a "file" it thinks is literally named MY-FOLDER/MOVIE*.
How can I keep it from entering the body after do if there aren't any matching files in the folder?
With the nullglob option.
$ shopt -s nullglob
$ for i in zzz* ; do echo "$i" ; done
$
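Applied to your script, that would look like:
#!/bin/bash
shopt -s nullglob
# with nullglob set, an unmatched pattern expands to nothing instead of itself
for i in MY-FOLDER/MOVIE*; do
    echo "$i"
done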
for i in $(find MY-FOLDER -name 'MOVIE*' -type f); do
echo "$i"
done
The find utility is one of the Swiss Army knives of linux. It starts at the directory you give it and finds all files in all subdirectories, according to the options you give it.
-type f will find only regular files (not directories).
As I wrote it, the command will find files in subdirectories as well; you can prevent that by adding -maxdepth 1
Edit, 8 years later (thanks for the comment, @tadman!)
You can avoid the loop altogether with
find . -type f -exec echo "{}" \;
This tells find to echo the name of each file by substituting its name for {}. The escaped semicolon is necessary to terminate the command that's passed to -exec.
for file in MY-FOLDER/MOVIE*
do
# Skip if not a file
test -f "$file" || continue
# Now you know it's a file.
...
done