copy files with the base directory - bash

I am searching a specific directory and its subdirectories for new files, and I would like to copy those files. I am using this:
find /home/foo/hint/ -type f -mtime -2 -exec cp '{}' ~/new/ \;
It is copying the files successfully, but some files have the same name in different subdirectories of /home/foo/hint/.
I would like to copy each file together with its parent directory into the ~/new/ directory.
test#serv> find /home/foo/hint/ -type f -mtime -2 -exec ls '{}' \;
/home/foo/hint/do/pass/file.txt
/home/foo/hint/fit/file.txt
test#serv>
~/new/ should look like this after copy:
test#serv> ls -R ~/new/
/home/test/new/pass/:
file.txt
/home/test/new/fit/:
file.txt
test#serv>
platform: Solaris 10.

Since you can't use rsync or fancy GNU options, you need to roll your own using the shell.
The find command lets you run a full shell in your -exec, so you should be good to go with a one-liner to handle the names.
If I understand correctly, you only want the parent directory, not the full tree, copied to the target. The following might do:
#!/usr/bin/env bash
findopts=(
    -type f
    -mtime -2
    -exec bash -c 'd="${0%/*}"; d="${d##*/}"; mkdir -p "$1/$d"; cp -v "$0" "$1/$d/"' {} ./new \;
)
find /home/foo/hint/ "${findopts[@]}"
Results:
$ find ./hint -type f -print
./hint/foo/slurm/file.txt
./hint/foo/file.txt
./hint/bar/file.txt
$ ./doit
./hint/foo/slurm/file.txt -> ./new/slurm/file.txt
./hint/foo/file.txt -> ./new/foo/file.txt
./hint/bar/file.txt -> ./new/bar/file.txt
I've put the options to find into a bash array for easier reading and management. The script for the -exec option is still a little unwieldy, so here's a breakdown of what it does for each file. Bear in mind that in this form the arguments to bash -c are numbered from zero, so the {} becomes $0 and the target directory becomes $1...
d="${0%/*}" # Store the source directory in a variable, then
d="${d##*/}" # strip everything up to the last slash, leaving the parent.
mkdir -p "$1/$d" # create the target directory if it doesn't already exist,
cp "$0" "$1/$d/" # then copy the file to it.
I used cp -v for verbose output, as shown in "Results" above, but IIRC -v isn't supported by Solaris's cp; it can safely be dropped.
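If you want to preview what this would do before running it for real, a quick dry run can prefix the commands with echo (a sketch using the same options):
findopts=(
    -type f
    -mtime -2
    -exec bash -c 'd="${0%/*}"; d="${d##*/}"; echo mkdir -p "$1/$d"; echo cp "$0" "$1/$d/"' {} ./new \;
)
find /home/foo/hint/ "${findopts[@]}"
This prints each mkdir and cp that would run, without touching anything.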

The --parents flag of GNU cp should do the trick:
find /home/foo/hint/ -type f -mtime -2 -exec cp --parents '{}' ~/new/ \;

Try testing with rsync -R, for example:
find /your/path -type f -mtime -2 -exec rsync -R '{}' ~/new/ \;
From the rsync man:
-R, --relative
Use relative paths. This means that the full path names specified on the
command line are sent to the server rather than just the last parts of the
filenames.

The problem with the answers by @Mureinik and @nbari is that the absolute path of the new files will be recreated in the target directory. In this case you might want to switch to the base directory before the command and go back to your current directory afterwards:
path_current=$PWD; cd /home/foo/hint/; find . -type f -mtime -2 -exec cp --parents '{}' ~/new/ \; ; cd "$path_current"
or
path_current=$PWD; cd /home/foo/hint/; find . -type f -mtime -2 -exec rsync -R '{}' ~/new/ \; ; cd "$path_current"
Both ways work for me at a Linux platform. Let’s hope that Solaris 10 knows about rsync’s -R ! ;)
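A subshell achieves the same thing without having to save and restore the working directory (a sketch of the same idea):
( cd /home/foo/hint/ && find . -type f -mtime -2 -exec cp --parents '{}' ~/new/ \; )
The parentheses confine the cd to the subshell, so your current directory is untouched afterwards.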

I found a way around it:
cd ~/new/
find /home/foo/hint/ -type f -mtime -2 -exec nawk -v f={} '{n=split(FILENAME, a, "/");j= a[n-1];system("mkdir -p "j"");system("cp "f" "j""); exit}' {} \;
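For readability, here is the same nawk logic spelled out with comments (a sketch; note that the action runs per input record, so it relies on each file having at least one line):
find /home/foo/hint/ -type f -mtime -2 -exec nawk -v f={} '
{
    n = split(FILENAME, a, "/")   # split the path on "/" into array a
    j = a[n-1]                    # a[n] is the file name, so a[n-1] is its parent directory
    system("mkdir -p " j)         # create the parent directory under ~/new/ (the current dir)
    system("cp " f " " j)         # copy the file into it
    exit                          # one record is enough; stop reading this file
}' {} \;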

Related

Find big files and empty them

Will the following find big log files (over 1 GB) under the /opt directory and empty them?
find /opt/ -type f -size +1G -exec cat > /dev/null {} \;
Thank you.
This is what's required:
find /opt/ -type f -size +1G -exec cp /dev/null {} \;
The redirection in your code makes cat write the big files' contents into /dev/null; the files themselves are left untouched.
It may be safer to add a name clause:
find /opt/ -type f -name "*.log" -size +1G -exec cp /dev/null {} \;
If you have GNU coreutils, you can use truncate like this:
find /opt/ -type f -size +1G -exec truncate -s0 {} \;
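To convince yourself of the behaviour first, you can try it on a single throwaway file (a sketch; /tmp/test.log is a hypothetical path):
truncate -s 2M /tmp/test.log   # create a sparse 2 MB test file
truncate -s 0 /tmp/test.log    # empty it in place
ls -l /tmp/test.log            # size should now be 0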
The problem you are facing is that the redirection (>) is handled by the shell before find ever runs, so it applies to find's output as a whole; cat doesn't know anything about it.
The simplest solution is probably to put the file-emptying into a small wrapper script (quoting the heredoc delimiter as 'EOL' keeps the shell from expanding $@ and $f, so they are written into the wrapper script literally).
$ cat >/tmp/wrapper-script.sh <<'EOL'
#!/bin/sh
for f in "$@"; do
    cat /dev/null > "${f}"
done
EOL
$ chmod +x /tmp/wrapper-script.sh
$ find /opt/ -type f -size +1G -exec /tmp/wrapper-script.sh {} +
The wrapper script iterates over all files given on the command line and empties each of them (note the + terminator in the find invocation, which passes many files per invocation of the script).
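If you'd rather not create a separate wrapper file, the same loop can be inlined with sh -c (a sketch; the trailing sh fills $0 so the file names land in "$@"):
find /opt/ -type f -size +1G -exec sh -c 'for f in "$@"; do : > "$f"; done' sh {} +
Here : > "$f" truncates each file, just like cat /dev/null > "$f" does in the wrapper.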

How to print the deleted file names along with path in shell script

I am deleting the files in all the directories and subdirectories using the command below:
find . -type f -name "*.txt" -exec rm -f {} \;
But I want to know which files were deleted, along with their paths. How can I do this?
Simply add a -print argument to your find.
$ find . -type f -name "*.txt" -print -exec rm -f {} \;
As noted by @JonathanRoss below, you can achieve an equivalent result with the -v option to rm.
That's beyond the scope of your question, but more generally it gets more interesting if you want to delete directories recursively. Then:
a plain -exec rm -r argument keeps it silent,
a -print -exec rm -r argument reports the top-level directories you're operating on,
and a -exec rm -rv argument reports everything you're removing.
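Putting the original file case together with the rm -v equivalence mentioned above (assumes a rm that supports -v, e.g. GNU rm):
find . -type f -name "*.txt" -exec rm -fv {} \;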

How to copy files recursively, rename them but keep the same extension in Bash?

I have a folder with tens of thousands of files of different types. I'd like to copy them all to a new folder (Copy1) but also rename them all to $RANDOM while keeping each extension intact. I realize I can write a line specifying which extension to find and how to rename it, but there has got to be a way to do it dynamically, because there are at least 100 file types and there may be more in the future.
I have the following so far:
find ./ -name '*.*' -type f -exec bash -c 'cp "$1" "${1/\/123_//_$RANDOM}"' -- {} \;
but that puts the random number after the extension, and it also puts them all in the same folder. I can't figure out how to do the following 2 things:
1 - Keep all paths intact, but in a new root folder (Copy1)
2 - How to have the name be $RANDOM.extension instead of .extension.$RANDOM
PS - by $RANDOM I mean an actual randomly generated number. I am interested in keeping the folder structure, so we are dealing with a few hundred files at most per directory, but all directories/files need to be renamed to $RANDOM. Another way to look at what I need to do: copy all contents of Folder1, with all subdirectories and files, to Folder2 (where Folder2 is a $RANDOM name), then rename all folders and files to random names but keep all extensions.
EDIT: OK, I figured out how to rename and keep the extension. But I have a problem where it's dumping all of the files into the root directory where the script is run from. How do I keep them in their respective folders? The command I'm using is:
find ./ -name '*.*' -type f -exec bash -c 'mv "$1" $RANDOM.${1##*.}' -- {} \;
Thanks!
Change your command to the following (the PATH override matters because find's -execdir refuses to run when $PATH contains relative entries):
PATH=/bin:/usr/bin find . -name '*.*' -type f -execdir bash -c 'mv "$1" $RANDOM.${1##*.}' -- {} \;
Or alternatively using uuids instead of random numbers:
PATH=/bin:/usr/bin find . -name '*.*' -type f -execdir bash -c 'mv "$1" $(uuidgen).${1##*.}' -- {} \;
Here's what I came up with:
i=1
random="whatever"
find . -name "*.*" -type f | while IFS= read -r f
do
    newbase=${f/*./$random$i.}   # added counter to filename
    cp "$f" /Path/Name/"$newbase"
    ((i++))
done
I had to add a counter (i) to the random name; otherwise, if the extensions are similar, your files would overwrite each other when copied.
In your new folder, your files should look like this:
whatever1.txt
whatever2.txt
etc etc
I hope this is what you were looking for.
Here is the command that worked for me.
find . -name '*.pdf' -type f -exec bash -c 'echo "{}" && cp "$1" ./$RANDOM.${1##*.}' -- {} \;

shell script to traverse files recursively

I need some assistance in creating a shell script that runs a specific command (any command) on each file in a folder, recursively diving into sub-directories as well.
I'm not sure how to start.
A pointer in the right direction would suffice. Thank you.
To apply a command (say, echo) to all files below the current path, use
find . -type f -exec echo "{}" \;
For directories, use -type d.
You should be looking at the find command.
For example, to change permissions all JPEG files under your /tmp directory:
find /tmp -name '*.jpg' -exec chmod 777 {} ';'
Although, if there are a lot of files, you can combine it with xargs to batch them up, something like:
find /tmp -name '*.jpg' | xargs chmod 777
And, on implementations of find and xargs that support null-separation:
find /tmp -name '*.jpg' -print0 | xargs -0 chmod 777
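If your find supports terminating -exec with +, you can get the same batching without xargs at all:
find /tmp -name '*.jpg' -exec chmod 777 {} +
Like xargs, the + form passes as many file names as fit into each chmod invocation.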
Bash 4.0
#!/bin/bash
shopt -s globstar
for file in **/*.txt
do
echo "do something with $file"
done
To recursively list all files:
find . -name '*'
And let's say, for example, you want to grep each file; then:
find . -type f -name 'pattern' -print0 | xargs -0 grep 'searchtext'
Within a bash script, you can iterate over the results of the find command this way:
for F in $(find . -type f)
do
    # command that uses $F
done
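Note that the for-loop form word-splits on whitespace in file names. A more robust pattern reads find's output line by line (a sketch):
find . -type f | while IFS= read -r F
do
    # command that uses "$F"
    echo "$F"
done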

Delete all files but keep all directories in a bash script?

I'm trying to do something which is probably very simple, I have a directory structure such as:
dir/
    subdir1/
    subdir2/
        file1
        file2
        subsubdir1/
            file3
I would like to run a command in a bash script that will delete all files recursively from dir on down, but leave all directories. I.e.:
dir/
    subdir1/
    subdir2/
        subsubdir1/
What would be a suitable command for this?
find dir -type f -print0 | xargs -0 rm
find lists all files that match a given expression in a given directory, recursively. -type f matches regular files. -print0 prints the names using \0 as the delimiter (since any other character, including \n, might appear in a path name). xargs gathers the file names from standard input and passes them as parameters, and -0 makes sure xargs understands the \0 delimiter.
xargs is wise enough to call rm multiple times if the parameter list would get too big, so it is much better than trying to call something like rm $(find ...). It is also much faster than calling rm once per file, as in find ... -exec rm \{\} \;.
With GNU's find you can use the -delete action:
find dir -type f -delete
With standard find you can use -exec rm terminated with +, which batches many files into each rm invocation:
find dir -type f -exec rm {} +
or with \;, which runs rm once per file:
find dir -type f -exec rm {} \;
where dir is the top level of the tree you want to delete files from.
Note that this will only delete regular files, not symlinks, not devices, etc. If you want to delete everything except directories, use
find dir -not -type d -exec rm {} \;
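If you're unsure, preview the match with -print before wiring in rm (a sketch):
find dir -not -type d -print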
