At the simplest, if I execute
find . -type f -exec cp {} /new/path/{} \;
The path that gets expanded is /new/path/./path/to/file. I would like to remove the ./ prefix that find adds before I use {} in the -exec.
I am using the built-in FreeBSD find, but I do have access to GNU find if that will help (though I do not normally use it).
Where you will have a problem is when find descends into subdirectories and tries to exec something like cp ./foo/bar.txt /new/path/./foo/bar.txt while "/new/path" has no subdirectory "foo" -- you might want to:
specify -maxdepth 1 so you do not descend into subdirs
find . -maxdepth 1 -type f -exec cp {} /new/path/{} \;
just use a directory destination for cp, so the files end up in a single dir (this will suffer from collisions if you have both "./foo/bar.txt" and "./qux/bar.txt"; note that -t is a GNU cp option)
find . -type f -exec cp -t /new/path {} +
use tar to copy the whole tree: this will preserve directory structure
tar cf - . | ( cd /new/path && tar xvf - )
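If you do want to strip the leading ./ that find prepends and recreate each file's subdirectory under /new/path yourself, here is a minimal sketch (assuming a POSIX sh; /new/path is the destination from your example and its subdirectories may not exist yet):
find . -type f -exec sh -c '
    for f; do
        f=${f#./}                               # drop the ./ prefix that find adds
        mkdir -p "/new/path/$(dirname "$f")"    # recreate the subdirectory first
        cp "$f" "/new/path/$f"
    done' sh {} +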
The current directory contains files and directories. The directories have no sub-directories, but may contain zero or more files, for example:
./file1
./file2
./directory1/file3
./directory2/file4
./directory2/file5
./directory3/
When I execute find . -type d -maxdepth 1 I get a listing of the directories:
./directory1
./directory2
If I execute mv ./directory1/* . all files in directory1 are moved to the current level (.), so I thought I could use find -exec to do everything in one go:
find . -type d -maxdepth 1 -exec mv "{}/*" . \;
But I get this response:
mv: rename ./directory1/* to ./*: No such file or directory
How can I move all the files in subdirectories to the current level?
Globbing (replacing foo/* with foo/dirA, foo/dirB, etc.) is performed by the shell, not by mv. find -exec doesn't start a shell unless you do so manually; for example (the _ below just fills $0, so the directories found by find become the loop's positional parameters):
find . -type d -mindepth 1 -maxdepth 1 \
-exec sh -c 'for dir; do mv -- "$dir"/* .; done' _ {} +
There's no real need to use find. You can do it with a single mv to move the files and rmdir to remove the now-empty directories.
mv */* .
rmdir */
I am searching a specific directory and its subdirectories for new files, which I would like to copy. I am using this:
find /home/foo/hint/ -type f -mtime -2 -exec cp '{}' ~/new/ \;
It is copying the files successfully, but some files have the same name in different subdirectories of /home/foo/hint/.
I would like to copy each file together with its immediate parent directory to the ~/new/ directory.
test@serv> find /home/foo/hint/ -type f -mtime -2 -exec ls '{}' \;
/home/foo/hint/do/pass/file.txt
/home/foo/hint/fit/file.txt
test@serv>
~/new/ should look like this after copy:
test@serv> ls -R ~/new/
/home/test/new/pass/:
file.txt
/home/test/new/fit/:
file.txt
test@serv>
platform: Solaris 10.
Since you can't use rsync or fancy GNU options, you need to roll your own using the shell.
The find command lets you run a full shell in your -exec, so you should be good to go with a one-liner to handle the names.
If I understand correctly, you only want the parent directory, not the full tree, copied to the target. The following might do:
#!/usr/bin/env bash
findopts=(
-type f
-mtime -2
-exec bash -c 'd="${0%/*}"; d="${d##*/}"; mkdir -p "$1/$d"; cp -v "$0" "$1/$d/"' {} ./new \;
)
find /home/foo/hint/ "${findopts[#]}"
Results:
$ find ./hint -type f -print
./hint/foo/slurm/file.txt
./hint/foo/file.txt
./hint/bar/file.txt
$ ./doit
./hint/foo/slurm/file.txt -> ./new/slurm/file.txt
./hint/foo/file.txt -> ./new/foo/file.txt
./hint/bar/file.txt -> ./new/bar/file.txt
I've put the options to find into a bash array for easier reading and management. The script for the -exec option is still a little unwieldy, so here's a breakdown of what it does for each file. Bear in mind that in this format the arguments are numbered from zero, so the {} becomes $0 and the target directory becomes $1...
d="${0%/*}" # Store the source directory in a variable, then
d="${d##*/}" # strip everything up to the last slash, leaving the parent.
mkdir -p "$1/$d" # create the target directory if it doesn't already exist,
cp "$0" "$1/$d/" # then copy the file to it.
I used cp -v for verbose output as shown in "Results" above, but IIRC it is not supported by Solaris' cp and can safely be dropped.
The --parents flag should do the trick:
find /home/foo/hint/ -type f -mtime -2 -exec cp --parents '{}' ~/new/ \;
Try testing with rsync -R, for example:
find /your/path -type f -mtime -2 -exec rsync -R '{}' ~/new/ \;
From the rsync man:
-R, --relative
Use relative paths. This means that the full path names specified on the
command line are sent to the server rather than just the last parts of the
filenames.
The problem with the answers by @Mureinik and @nbari might be that the full absolute path of the new files is recreated inside the target directory. In this case you might want to switch to the base directory before running the command and go back to your current directory afterwards:
path_current=$PWD; cd /home/foo/hint/; find . -type f -mtime -2 -exec cp --parents '{}' ~/new/ \; ; cd $path_current
or
path_current=$PWD; cd /home/foo/hint/; find . -type f -mtime -2 -exec rsync -R '{}' ~/new/ \; ; cd $path_current
Both ways work for me on a Linux platform. Let's hope that Solaris 10 knows about rsync's -R! ;)
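An equivalent variant (just a sketch, using the same paths as above) runs the cd and the find inside a subshell, so the calling shell's working directory is never changed and no cd back is needed:
( cd /home/foo/hint/ && find . -type f -mtime -2 -exec cp --parents '{}' ~/new/ \; )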
I found a way around it:
cd ~/new/
find /home/foo/hint/ -type f -mtime -2 -exec nawk -v f={} '{n=split(FILENAME, a, "/");j= a[n-1];system("mkdir -p "j"");system("cp "f" "j""); exit}' {} \;
On a Unix server, I'm trying to figure out how to remove a file, say "example.xls", from any subdirectories that start with v0 ("v0*").
I have tried something like:
find . -name "v0*" -type d -exec find . -name "example.xls" -type f
-exec rm {} \;
But I get errors. I have a solution, but it works too well, i.e. it will delete the file in any subdirectory, regardless of its name:
find . -type f -name "example.xls" -exec rm -f {} \;
Any ideas?
You will probably have to do it in two steps -- i.e. first find the directories, and then the files. You can use xargs to make it a single command, like:
find . -name "v0*" -type d | \
xargs -l -I[] \
find [] -name "example.xls" -type f -exec rm {} \;
What it does is first generate a list of matching directory names, then let xargs call the second find with each name, locating the file within that directory.
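If the directory names may contain spaces, a variant of the same two-step idea (a sketch assuming GNU find and xargs, which provide -print0 and -0) keeps the names null-delimited:
find . -name "v0*" -type d -print0 | \
    xargs -0 -I[] \
    find [] -name "example.xls" -type f -exec rm {} \;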
Try:
find -path '*/v0*/example.xls' -delete
This matches only files named example.xls that have, somewhere in their path, a parent directory whose name starts with v0.
Note that since find offers -delete as an action, it is not necessary to invoke the external executable rm.
Example
Consider this directory structure:
$ find .
.
./a
./a/example.xls
./a/v0
./a/v0/b
./a/v0/b/example.xls
./a/v0/example.xls
We can identify files example.xls who have one of their parent directories named v0*:
$ find -path '*/v0*/example.xls'
./a/v0/b/example.xls
./a/v0/example.xls
To delete those files:
find -path '*/v0*/example.xls' -delete
Alternative: find only those files directly under directory v0*
find -regex '.*/v0[^/]*/example.xls'
Using the above directory structure, this approach returns one file:
$ find -regex '.*/v0[^/]*/example.xls'
./a/v0/example.xls
To delete such files:
find -regex '.*/v0[^/]*/example.xls' -delete
Compatibility
Although my tests were performed with GNU find, -path is required by POSIX, while -regex and -delete are extensions; all of them are also supported by the BSD find that ships with OSX.
There is a script on a server that I need to run over all the files in a folder. To run this script over one file I use this shell script:
for input in /home/arashsa/duo-bokmaal/Bokmaal/DUO_BM_28042.txt ; do
name=$(basename "$input")
/corpora/bokm/tools/The-Oslo-Bergen-Tagger/./tag-lbk.sh "$input" > "/home/arashsa/duo-bokmaal-obt/$name"
done
I'm terrible at writing shell scripts and have not managed to find out how to iterate over files. What I want is to make the script iterate over all files in a given folder that end with .txt but not those that end with _metadata.txt. So I'm thinking I would give it the folder path as an argument, make it iterate over all the files in that folder, and run the script on files ending with .txt but not _metadata.txt.
Use find and the exec option.
$ find /path/to/dir -exec <command here> \;
Each file or directory can be obtained by using {}.
Example usage: $ find . -exec echo {} \; echoes the name of every file and directory under the current directory, recursively. You can use some other options to further specify the desired files and directories you wish to handle; I will briefly explain some of them. Note that the echo is redundant because find prints the names it finds anyway, but I'll leave it in to illustrate how exec works. That said, the following commands yield the same result: $ find . -exec echo {} \; and $ find .
maxdepth and mindepth
Specifying -maxdepth and -mindepth allows you to control how far down the directory structure find goes: -maxdepth limits how many levels below the starting point find will descend, and -mindepth skips tests and actions on entries shallower than the given level.
Example usages:
(1) listing only elements from this dir, including . (= current dir).
(2) listing only elements from current dir excluding .
(3) listing elements from root dir and all dirs in this dir
(1)$ find . -maxdepth 1 -exec echo {} \;
(2)$ find . -mindepth 1 -maxdepth 1 -exec echo {} \;
# or, alternatively
(2)$ find . ! -path . -maxdepth 1 -exec echo {} \;
(3)$ find / -maxdepth 2 -exec echo {} \;
type
Specifying a type option allows you to select only files or only directories. Example usage:
(1) list all files in this dir
(2) run a script func on every directory in the root dir.
(1)$ find . -maxdepth 1 -type f -exec echo {} \;
(2)$ find / -maxdepth 1 -type d -exec func {} \;
name & regex
The name option allows you to search for specific filenames; you can also look for files and dirs using a regex format.
Example usage: find all movies in a certain directory
$ find /path/to/dir -maxdepth 1 -regextype sed -regex ".*\.\(avi\|mp4\|mkv\)"
size
Another filter is file size: -size n matches files of exactly the given size, and prefixing the value with + or - matches files larger or smaller than it. Example usage:
(1) find all empty files in current dir.
(2) find all non empty files in current dir.
(1)$ find . -maxdepth 1 -type f -size 0
(2)$ find . -maxdepth 1 -type f ! -size 0
Further examples
Move all files of this dir to a directory tmp present in .
$ find . -type f -maxdepth 1 -exec mv {} tmp \;
Convert all mkv files to mp4 files in a dir /path/to/dir and child directories
$ find /path/to/dir -maxdepth 2 -regextype sed -regex ".*\.mkv" -exec ffmpeg -i {} {}.mp4 \;
Convert all your jpeg files to png (don't do this, it will take very long to both find them and convert them).
$ find ~ -maxdepth 420 -regextype sed -regex '.*\.jpeg' -exec mogrify -format png {} \;
Note
The find command is a strong tool and it can prove to be fruitful to pipe the output to xargs. It's important to note that this method is superior to the following construction:
for file in $(ls)
do
    some commands
done
as the latter will handle files and directories containing spaces the wrong way.
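Applied to the original question, a sketch along those lines (the folder, tagger path and output directory are taken verbatim from the question; -maxdepth 1 is an assumption that only the given folder itself should be scanned) lets find do the selection and a small sh -c loop run the script per file:
find /home/arashsa/duo-bokmaal/Bokmaal -maxdepth 1 -type f \
    -name '*.txt' ! -name '*_metadata.txt' \
    -exec sh -c '
        for input; do
            name=$(basename "$input")
            /corpora/bokm/tools/The-Oslo-Bergen-Tagger/./tag-lbk.sh "$input" > "/home/arashsa/duo-bokmaal-obt/$name"
        done' sh {} +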
In bash:
shopt -s extglob
for input in /dir/goes/here/!(*_metadata).txt
do
...
done
I have a folder with tens of thousands of files of different types. I'd like to copy them all to a new folder (Copy1) but also rename them all to $RANDOM while keeping the extension intact. I realize I can write a line specifying which extension to find and how to name it, but there has got to be a way to do it dynamically, because there are at least 100 file types and there may be more in the future.
I have the following so far:
find ./ -name '*.*' -type f -exec bash -c 'cp "$1" "${1/\/123_//_$RANDOM}"' -- {} \;
but that puts the random number after the extension, and it also puts them all in the same folder. I can't figure out how to do the following 2 things:
1 - Keep all paths intact, but in a new root folder (Copy1)
2 - How to have name be $RANDOM.extension, instead of .extension.$RANDOM
PS - by $RANDOM I mean an actual randomly generated number. I am interested in keeping the folder structure, so we are dealing with a few hundred files at most per directory, but all directories/files need to be renamed to $RANDOM. Another way to look at what I need to do: copy all contents of Folder1 with all subdirectories and files to Folder2 (where Folder2 is a $RANDOM name), then rename all folders and files to random names but keep all extensions.
EDIT: OK, I figured out how to rename and keep the extension. But I have a problem where it's dumping all of the files into the root directory where the script is run from. How do I keep them in their respective folders? The command I'm using is:
find ./ -name '*.*' -type f -exec bash -c 'mv "$1" $RANDOM.${1##*.}' -- {} \;
Thanks!
OK, I figured out how to rename and keep the extension. But I have a
problem where it's dumping all of the files into the root directory
where the script is run from. How do I keep them in their respective
folders? The command I'm using is:
find ./ -name '*.*' -type f -exec bash -c 'mv "$1" $RANDOM.${1##*.}' -- {} \;
Change your command to use -execdir, which runs the command from the directory containing each matched file, so the renamed file stays in place (the PATH=/bin:/usr/bin prefix is needed because GNU find refuses to run -execdir when $PATH contains a relative entry such as .):
PATH=/bin:/usr/bin find . -name '*.*' -type f -execdir bash -c 'mv "$1" $RANDOM.${1##*.}' -- {} \;
Or alternatively using uuids instead of random numbers:
PATH=/bin:/usr/bin find . -name '*.*' -type f -execdir bash -c 'mv "$1" $(uuidgen).${1##*.}' -- {} \;
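If the goal is instead to copy everything into a new root (Copy1, from the question) while keeping the directory structure and renaming each file to a random number plus its original extension, here is a minimal sketch, assuming bash and that every file of interest matches '*.*' (the -prune keeps find out of the copy it is creating; collisions between $RANDOM values are not handled):
find . -path ./Copy1 -prune -o -name '*.*' -type f -exec bash -c '
    for src; do
        dir=${src%/*}                              # source subdirectory, e.g. ./sub/dir
        ext=${src##*.}                             # original extension
        mkdir -p "Copy1/$dir"                      # recreate the subdirectory under Copy1
        cp -- "$src" "Copy1/$dir/$RANDOM.$ext"     # $RANDOM spans only 0-32767, so collisions are possible
    done' bash {} +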
Here's what I came up with :
i=1
random="whatever"
find . -name "*.*" -type f | while read f
do
newbase=${f/*./$random$i.} //added counter to filename
cp $f /Path/Name/"$newbase"
((i++))
done
I had to add a counter (i) to the random name; otherwise, if the extensions are similar, your files would overwrite each other when copied.
In your new folder, your files should look like this :
whatever1.txt
whatever2.txt
etc etc
I hope this is what you were looking for.
Here is the command that worked for me.
find . -name '*.pdf' -type f -exec bash -c 'echo "{}" && cp "$1" ./$RANDOM.${1##*.}' -- {} \;