I have a folder with a lot of subdirectories. I want to find the *.html.txt files and move them into a new directory with the same structure.
I'm using
find . -name "*.html.txt" | cp -t /home/....
but it is not keeping the name and directory structure.
Is there any way to do this?
Try:
find . -name '*.html.txt' | xargs cp -t /home --parents
xargs is often useful for turning lists into argument lists, especially with the output of find. You can probably also do this with -exec from within find, but I always find that harder to get right quickly.
--parents does the trick of creating the parent directories exactly as they appear in the list of files.
I'm using this type of command in csh, and it should work in bash as well.
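For file names containing spaces, a null-delimited variant of the same idea is safer (GNU find, xargs, and cp assumed; the target path here is a placeholder):
find . -name '*.html.txt' -print0 | xargs -0 cp --parents -t /target/dir
Since GNU mv has no --parents option, an actual move can be done this way first, with the originals deleted afterwards.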
Mostly answered here already:
https://serverfault.com/questions/180853/how-to-copy-file-preserving-directory-path-in-linux
I'm new to Linux (using bash), and I wanted to ask about something that I do often while I work. I'll give two examples.
Deleting multiple specific folders inside a certain directory.
Copying multiple specific folders into a certain directory.
I successfully did this with files, using find with some regex and then -exec and -delete. But folders proved more problematic, because I had trouble piping the list of folders to the cp/rm commands, each time getting a "No such file or directory" error.
Looking online I found the following command (in my case for copying all folders starting with a Z):
cp -r $(ls -A | grep "Z*") destination
But when I execute it, it says nothing, and the prompt won't come back until I hit Ctrl+C; nothing is copied.
How can I achieve what I'm looking for? For both cp and rm.
Thanks in advance!
First of all, you are grepping for "Z*", but as a regular expression that means "zero or more Z characters", which matches every line; to match entries starting with Z you want "^Z".
Also, when you run ls -A interactively you get multiple columns; its output is already one entry per line when piped, but ls -1A makes that explicit.
So for your command, try something like:
cp -r $(ls -1A | grep "^Z") destination
or
cp -r $(ls -1A | grep "^Z") -t destination
But all the above is just to correct syntax of your example.
It is much better to use find. Note that {} needs no quoting inside -exec, because find passes each path as a single argument:
find <PATH_FROM> -type d -exec cp -r {} -t target \;
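For the original goal - acting on all folders whose names start with Z - a sketch using find alone, assuming the folders sit directly in the current directory and destination stands in for the real target:
find . -maxdepth 1 -type d -name 'Z*' -exec cp -r -t destination {} +
find . -maxdepth 1 -type d -name 'Z*' -exec rm -r {} +
The first line copies the matching folders; the second deletes them, covering both the cp and rm cases from the question.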
Trying to make my workflow more efficient. I often have two files of the same name in two directories held within one "master" directory, like this:
root
  folder_1
    my_website.html
  folder_2
    my_website.html
From within terminal I can enter:
find -L . -name "my_website.html"
and that gives me the list of files and their locations. But I'd like to open them directly from here, rather than navigating down the directories. Is there a way to chain an open command onto this so it opens both of the files it finds?
Thanks
After searching and toying I came across this which works:
find -L . -name 'my_website.html' -print0 | xargs -0 -n 1 open
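Equivalently, find can invoke the opener itself, which avoids the xargs step; note that open is macOS-specific, with xdg-open being the usual Linux counterpart:
find -L . -name 'my_website.html' -exec open {} \;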
I have a suspicion that a few years ago someone accidentally copied a folder structure from /home/data to /home/something/data. Since then /home/data has had many updates and changes.
What is the easiest way to check whether there are any files in /home/something/data unique (by name and location) to that location, to help me confirm that everything in there was a copy from /home/data?
Using diff -r dir1 dir2, you can recursively scan directories for differences in structure and content. Additional flags can tweak the output and behavior to your liking.
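For this particular question, the -q flag keeps the output to just which files differ or exist only in one tree, for example:
diff -rq /home/data /home/something/data
Files unique to /home/something/data show up as "Only in /home/something/data..." lines.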
Use rsync in dry-run mode to see whether copying /home/something/data into /home/data would actually copy any data.
rsync -rvn /home/something/data/ /home/data/
The trailing slashes matter: without them, rsync would treat the copy as creating /home/data/data. If a file under /home/something/data is identical to its counterpart under /home/data, it is not copied, and the dry run's -v output will not list it.
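By default rsync compares only size and modification time; adding -c makes it compare file contents by checksum, which is slower but more thorough:
rsync -rcvn /home/something/data/ /home/data/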
You may or may not like this approach; it can take a while to scan all the files, but I generally have a good feeling when I do it.
Go to the top of each directory structure and run a find to get the MD5 checksum of each and every file - your switches may vary, as I am on OSX.
cd /home/data
find . -type f -exec md5 -r {} + > /tmp/a
cd /home/something/data
find . -type f -exec md5 -r {} + > /tmp/b
When they are finished, run the output files through sort and uniq -u to tell you the lines that only appear once (they should all appear twice if the files are the same in both directories):
sort /tmp/[ab] | uniq -u
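On Linux the same idea works with md5sum in place of md5; comm can then list the checksum+path lines unique to the second tree (a sketch, assuming both trees are scanned from their own roots so the relative paths line up):
cd /home/data && find . -type f -exec md5sum {} + | sort > /tmp/a
cd /home/something/data && find . -type f -exec md5sum {} + | sort > /tmp/b
comm -13 /tmp/a /tmp/b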
My requirement is simple, but I have been unable to find a way to do it effectively.
I have a directory named Code which contains around 14 lakh (1.4 million) files and is around 40 GB. All I want to do is create a copy of this Code folder at the same directory level, meaning Code and Code_Copy sit in the same parent folder.
If I use find . | parallel cp -rpf {} ../Code_Copy in the Code folder, I get the correct directory structure, but every file also gets copied directly into the top level of Code_Copy.
If I use tree | parallel cp -rpf {} ../Code_Copy, the directory structure is created properly, but the command keeps running for a long time after that, giving "cannot stat file:" errors for the lines of tree's output.
Please help me find a solution.
Thanks in advance
tree will fail because it does not print plain paths; it draws them using ASCII art. This is not useful input for parallel.
The simplest solution is:
cp -a Code Code_Copy
but that may be slow. Running rsync in parallel may be the fastest solution depending on your disk system:
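# {//} is GNU Parallel's replacement string for the directory part of each input path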
find . -type f | parallel 'mkdir -p ../Copy/{//}; rsync -a {} ../Copy/{}'
# To get permissions and special files fixed
rsync -a ./ ../Copy/
To get the most out of GNU Parallel consider walking through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Late answer, but you need to do it in two steps: build the directory tree first, then copy in the files. From the parent directory of Code:
NUM_JOBS=4
find Code -type d -print0 | parallel -j$NUM_JOBS -0 "mkdir -p Code_Copy/{}"
find Code ! -type d -print0 | parallel -j$NUM_JOBS -0 "cp -P {} Code_Copy/{}"
The -P option makes cp copy symlinks as symlinks rather than following them, preserving them within the copied tree.
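Once either approach finishes, a quick recursive comparison can confirm that nothing was missed (this rescans everything, so it will take a while on 1.4 million files):
diff -rq Code Code_Copy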
Recently I frigged up my external hard drive with my photos on it (most are on DVD anyway, but..) by some partition friggery.
Fortunately I was able to put things back together with PhotoRec, another Unix partition utility, and PDisk.
PhotoRec returned over one thousand folders chock-full of anything from .txt files to important .NEFs.
So I tried to make the sorting easier by using Unix, since the OSX Finder would simply crumble under such requests as selecting and deleting a billion .txt files.
But I ran into some BS when I tried to find and delete the .txt files, or find and move all JPEGs recursively into a new folder called jpegs. I am a Unix noob, so I need some assistance please.
Here is what I did in bash (I am in the directory where ls lists all the folders and files I need to act upon):
find . -name *.txt | rm
or
sudo find . -name *.txt | rm -f
So it's giving me some BS about needing to unlink the files. Whatever.
I need to find all .txt files recursively and delete them, preferably verbosely.
You can't pipe filenames to rm; rm reads its operands from the command line, not from stdin. Use xargs instead. Also, remember to quote the file pattern "*.txt", or the shell will expand it in the current directory.
find . -name "*.txt" | xargs rm
find . -name "*.txt" -exec rm {} \;
find . -name "*.txt" -type f -delete
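Since you wanted verbose output: putting -print before -delete echoes each file as it is removed, and a null-delimited xargs pipeline with rm -v handles odd file names as well:
find . -type f -name "*.txt" -print -delete
find . -type f -name "*.txt" -print0 | xargs -0 rm -v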