So I have some folders:
|-Folder1
||-SubFolder1
||-SubFolder2
|-Folder2
||-SubFolder3
||-SubFolder4
Each subfolder contains several jpg files that I want to zip to the root folder...
I'm a little bit stuck on "How to enter each folder"
Here is my code:
find ./ -type f -name '*.jpg' | while IFS= read i
do
foldName=${PWD##*/}
zip ../../foldName *
done
Ideally I would store FolderName+SubFolderName and pass that to the zip command as the archive name...
Zipping JPEGs (for Compression) is Usually Wasted Effort
First of all, attempting to compress already-compressed formats like JPEG files is usually a waste of time, and can sometimes result in archives that are larger than the original files. However, it is sometimes useful to do so for the convenience of having a bunch of files in a single package.
Just something to keep in mind. YMMV.
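If all you want is the packaging, zip can also be told to skip compression entirely with -0 (store only). A rough sketch, using the folder names from the question (Photos.zip is just a placeholder name):
zip -0 -r Photos.zip Folder1 Folder2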
Use Find's -execdir Flag
What you need is the find utility's -execdir flag. The GNU find man page says:
-execdir command {} +
    Like -exec, but the specified command is run from the subdirectory
    containing the matched file, which is not normally the directory in
    which you started find.
For example, given the following test corpus:
cd /tmp
mkdir -p foo/bar/baz
touch foo/bar/1.jpg
touch foo/bar/baz/2.jpg
you can zip the entire set of files with find while excluding the path information with a single invocation. For example:
find /tmp/foo -name \*jpg -execdir zip /tmp/my.zip {} +
Use Zip's --junk-paths Flag
The zip utility on many systems supports a --junk-paths flag. The man page for zip says:
--junk-paths
    Store just the name of a saved file (junk the path), and do not
    store directory names.
So, if your find utility doesn't support -execdir, but you do have a zip that supports junking paths, you could do this instead:
find /tmp/foo -name \*jpg -print0 | xargs -0 zip --junk-paths /tmp/my.zip
You can use dirname to get the directory a file or directory is located in.
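For example (the paths here are just illustrative):
dirname Folder1/SubFolder1/pic.jpg     # prints Folder1/SubFolder1
basename Folder1/SubFolder1            # prints SubFolder1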
You can also simplify the find command to search only for directories by using -type d. Then you should use basename to get only the name of the subdirs:
find ./*/* -type d | while read line; do
    zip --junk-paths "$(basename "$line")" "$line"/*.jpg
done
Explanation
find ./*/* -type d
will print out all directories matching ./*/*, i.e. all subdirs of the directories located in the current dir
while read line reads each line from the stream and stores it in the variable "line". Thus $line will be the relative path to the subdir, e.g. "Folder1/Subdir2"
"$(basename $line)" returns the only the name of the subdir, e.g. "Subdir2"
Update: add --junk-paths to the zip command if you do not want the directory paths to be stored in the zip file
So, after a little checking, I finally got something working:
find ./*/* -type d | while read line; do
#printf '%s\n' "$line"
zip ./"$line" "$line"/*.jpg
done
But this creates an archive containing:
Subfolder.zip
Folder
|-Subfolder
||-File1.jpg
||-File2.jpg
||-File3.jpg
Instead I would like it to be:
Subfolder.zip
|-File1.jpg
|-File2.jpg
|-File3.jpg
So I tried using basename and dirname in different combinations... I always got some error...
And just to learn how: what if I wanted the new archives to be created in the same root directory as "Folder"?
Ok finally got it!
find ./* -name \*.zip -type f -print0 | xargs -0 rm -rf
find ./*/* -type d | while read line; do
#printf '%s\n' "$line"
zip --junk-paths ./"$line" "$line"/*.jpg
done
find . -mindepth 2 -type f -name \*.zip -exec mv -- '{}' . \;
In the first line I simply remove all existing .zip files,
then I zip everything, and in the final line I move all the zips to the root directory!
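For reference, the move step could probably be skipped by writing each archive straight into the root from the zip call itself; a sketch, assuming subfolder names don't collide:
find ./*/* -type d | while read line; do
    zip --junk-paths "./$(basename "$line")" "$line"/*.jpg
done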
Thanks everybody for your help!
Related
I have a directory named "Documents". In this directory I have the following files:
User1.txt
User2.txt
User3.txt
User4.txt
User5.txt
Users-info.zip
index.html
I want to copy only those files whose names contain the word "user" to another directory. How can I do this with the cp command?
Try cp User* /path/to/dir; that will be enough.
If you want a more unusual way:
find . -type f -name 'User*' -print0 | xargs -0 cp -t /path/to/dir/for/copies/
For your case it is:
cp User[1-9].txt /dst_dir
We copy only files with User at the beginning, then some digit, and finally .txt.
I want to delete files from a directory which contains many subdirectories, but the deletion should not happen in one subdirectory (searc) whose name is predefined but whose path varies, as shown below. To delete the files I am using the command below:
find . -type f -name "*.txt" -exec rm -f {} \;
This command deletes all the files in the directory. So how can we delete the files without searching that subdirectory?
The subdirectory name will be the same but the path will differ, for example:
Main
|
a--> searc
|
b-->x--->searc
|
c-->y-->x-->searc
The subdirectory not to be searched can be present anywhere, as shown above.
I think you want the -prune option. In combination with a successful name match, this prevents descent into the named directories. Example:
% mkdir -p test/{a,b,c}
% touch test/{a,b,c}/foo.txt
% find test -name b -prune -o -name '*.txt' -print
test/a/foo.txt
test/c/foo.txt
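To actually remove the matches rather than just print them, you could swap -print for an -exec rm; note that -delete would not combine well with -prune here, since -delete implies -depth. A sketch against the same test tree:
find test -name b -prune -o -name '*.txt' -exec rm -f {} +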
I am not completely sure what you're asking, so I can give only somewhat generic advice.
You already know the -name option. This refers to the filename only. You can, however, also use -wholename (a.k.a. -path), which refers to the full path (beginning with the one given as first option to find).
So if you want to delete all *.txt files except in the foo/bar subdirectory, you can do this:
find . -type f -name "*.txt" ! -wholename "./foo/bar/*" -delete
Note the -delete option; it doesn't require a subshell, and is easier to type.
If you would like to exclude a certain directory name regardless of where in the tree it might be, just don't "root" it. In the above example, foo/bar was "rooted" to ./, so only a top-level foo/bar would match. If you write ! -wholename "*/foo/bar/*" instead (allowing anything before or after via the *), you would exclude any files below any directory foo/bar from the operation.
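Putting that together, the unrooted form of the earlier command would be:
find . -type f -name "*.txt" ! -wholename "*/foo/bar/*" -delete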
You can use xargs instead of -exec:
find .... <without the -exec stuff> | grep -v 'your search' | xargs echo rm -f
Try this first. If it is satisfactory, you can remove the echo.
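A concrete instance of that template, assuming the directory to skip is named searc as in the question and that no paths contain whitespace:
find . -type f -name '*.txt' | grep -v '/searc/' | xargs echo rm -f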
I am not sure how to word this question to find the solution easily online, so after much searching I thought I would ask here.
I access my website's files using the Bitvise SSH Client, and I use the command line for various grep and sed tasks that I've recently been taught, but I can't seem to find a simple way to do this:
What is the command line to make a backup copy (.bak) of EVERY file that ends in .php? I am looking for the command to instantly make a backup of every php file at once, so when I go into my files I see things like...
index.php
index.php.bak
For every php file.
Also, what is the command line to do this for EVERY file at once, regardless of extension?
Would be awesome to see a solution that uses xargs or find's -exec.
But here is how you can do this with a shell loop and find.
Note: this recursively backs up files in subdirectories.
For .php files:
find . -iname '*.php' -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
For all files:
find . -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
For all files that have an extension:
find . -iname '*.*' -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
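And since the question mentions find's -exec: a sketch that should do the same for .php files, using a small sh -c wrapper because -exec by itself cannot append the .bak suffix:
find . -iname '*.php' -type f -exec sh -c 'for f; do cp -- "$f" "$f.bak"; done' _ {} +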
You can just use the cp command
Enter the dir that has all the .php files, then type
cp *.php temp/ where temp is a directory in the current directory. The * means all.
If you just want to copy the whole folder, you could use
cp -R foldername destinationArea
I have been given a list of folders which need to be found and copied to a new location.
I have basic knowledge of bash and have created a script to find and copy.
The basic command I am using is working, to a certain degree:
find ./ -iname "*searchString*" -type d -maxdepth 1 -exec cp -r {} /newPath/ \;
The problem I want to resolve is that each found folder contains the files that I want, but also contains subfolders which I do not want.
Is there any way to limit the recursion so that only the files at the root level of the found folder are copied: all subdirectories and files therein should be ignored.
Thanks in advance.
If you remove -R, cp doesn't copy directories:
cp *searchstring*/* /newpath
The command above copies dir1/file1 to /newpath/file1, but these commands copy it to /newpath/dir1/file1:
cp --parents *searchstring*/*(.) /newpath
for GNU cp and zsh
. is a qualifier for regular files in zsh
cp --parents dir1/file1 dir2 copies file1 to dir2/dir1 in GNU cp
t=/newpath;for d in *searchstring*/;do mkdir -p "$t/$d";cp "$d"* "$t/$d";done
find *searchstring*/ -type f -maxdepth 1 -exec rsync -R {} /newpath \;
-R (--relative) is like --parents in GNU cp
find . -ipath '*searchstring*/*' -type f -maxdepth 2 -exec ditto {} /newpath/{} \;
ditto is only available on OS X
ditto file dir/file creates dir if it doesn't exist
So ... you've been given a list of folders. Perhaps in a text file? You haven't provided an example, but you've said in comments that there will be no name collisions.
One option would be to use rsync, which is available as an add-on package for most versions of Unix and Linux. Rsync is basically an advanced copying tool -- you provide it with one or more sources, and a destination, and it makes sure things are synchronized. It knows how to copy things recursively, but it can't be told to limit its recursion to a particular depth, so the following will copy each item specified to your target, but it will do so recursively.
xargs -L 1 -J % rsync -vi -a % /path/to/target/ < sourcelist.txt
If sourcelist.txt contains a line with /foo/bar/slurm, then the slurm directory will be copied in its entirety to /path/to/target/slurm/. But this would include directories contained within slurm.
This will work in pretty much any shell, not just bash. But it will fail if one of the lines in sourcelist.txt contains whitespace, or various special characters. So it's important to make sure that your sources (on the command line or in sourcelist.txt) are formatted correctly. Also, rsync has different behaviour if a source directory includes a trailing slash, and you should read the man page and decide which behaviour you want.
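To illustrate the trailing-slash point with made-up paths:
rsync -a src /path/to/target/      # creates /path/to/target/src/...
rsync -a src/ /path/to/target/     # copies the contents of src into /path/to/target/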
You can sanitize your input file fairly easily in sh, or bash. For example:
#!/bin/sh
# Avoid commented lines...
grep -v '^[[:space:]]*#' sourcelist.txt | while read line; do
    # Remove any trailing slash, just in case
    source=${line%%/}
    # make sure source exists before we try to copy it
    if [ -d "$source" ]; then
        rsync -vi -a "$source" /path/to/target/
    fi
done
But this still uses rsync's -a option, which copies things recursively.
I don't see a way to do this using rsync alone. Rsync has no depth-limiting option like find's -maxdepth. But I can see doing this in two passes -- once to copy all the directories, and once to copy the files from each directory.
So I'll make up an example, and assume further that folder names do not contain special characters like spaces or newlines. (This is important.)
First, let's do a single-pass copy of all the directories themselves, not recursing into them:
xargs -L 1 -J % rsync -vi -d % /path/to/target/ < sourcelist.txt
The -d option creates the directories that were specified in sourcelist.txt, if they exist.
Second, let's walk through the list of sources, copying each one:
# Basic sanity checking on input...
grep -v '^[[:space:]]*#' sourcelist.txt | while read line; do
    if [ -d "$line" ]; then
        # Strip trailing slashes, as before
        source=${line%%/}
        # Grab the directory name from the source path
        target=${source##*/}
        rsync -vi -a "$source/" "/path/to/target/$target/"
    fi
done
Note the trailing slash after $source on the rsync line. This causes rsync to copy the contents of the directory, rather than the directory.
Does all this make sense? Does it match your requirements?
You can use find's -ipath argument:
find . -maxdepth 2 -ipath './*searchString*/*' -type f -exec cp '{}' '/newPath/' ';'
Notice the path starts with ./ to match find's search directory, ends with /* in order to exclude files in the top level directory, and maxdepth is set to 2 to only recurse one level deep.
Edit:
Re-reading your comments, it seems like you want to preserve the directory you're copying from? E.g. when searching for foo*:
./foo1/* ---> copied to /newPath/foo1/* (not to /newPath/*)
./foo2/* ---> copied to /newPath/foo2/* (not to /newPath/*)
Also, the other requirement is to keep maxdepth at 1 for speed reasons.
(As pointed out in the comments, the following solution has security issues for specially crafted names)
Combining both, you could use this:
find . -maxdepth 1 -type d -iname '*searchString*' -exec sh -c "mkdir -p '/newPath/{}'; cp "{}/*" '/newPath/{}/' 2>/dev/null" ';'
Edit 2:
Why not ditch find altogether and use a pure bash solution:
for d in *searchString*/; do mkdir -p "/newPath/$d"; cp "$d"* "/newPath/$d"; done
Note the / at the end of the search string, causing only directories to be considered for matching.
How could I move all .txt files from a folder and all its subfolders into a target directory?
And preferably rename them after the folder they were included in, although that's not that important. I'm not exactly familiar with bash.
To recursively move files, combine find with mv.
find src/dir/ -name '*.txt' -exec mv -t target/dir/ -- {} +
Or if on a UNIX system without GNU's version of find, such as macOS, use:
find src/dir/ -name '*.txt' -exec mv -- {} target/dir/ ';'
To rename the files when you move them it's trickier. One way is to have a loop that uses "${var//from/to}" to replace all occurrences of from with to in $var.
find src/dir/ -name '*.txt' -print0 | while IFS= read -rd $'\0' file; do
    mv -- "$file" target/dir/"${file//\//_}"
done
This is ugly because from is a slash, which needs to be escaped as \/.
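To illustrate what that substitution produces, with a made-up path:
file=src/dir/sub/note.txt
echo "${file//\//_}"    # prints src_dir_sub_note.txt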
See also:
Unix.SE: Understanding IFS= read -r line
BashFAQ: How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Try this:
find source -name '*.txt' | xargs -I files mv files target
This avoids writing an explicit loop, though note that xargs -I still invokes one mv process per file, much like -exec ... ';' does.
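If your mv supports -t (GNU coreutils), you can get genuine batching, i.e. one mv call for many files:
find source -name '*.txt' -print0 | xargs -0 mv -t target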
If it's just one level:
mv *.txt */*.txt target/directory/somewhere/.