Recursively copy/backup all .php files to .php.bak files and keep them in their current paths - shell

I am not sure how to word this question to find the solution easily online, so after much searching I thought I would ask here.
I access my website's files using the Bitvise SSH client, and I use the command line for various grep and sed tasks I've recently been taught, but I can't seem to find a simple way to do this:
What is the command line to make a backup copy (.bak) of EVERY file that ends in .php? I am looking for the command to instantly make a backup of every php file at once, so when I go into my files I see things like...
index.php
index.php.bak
For every php file.
Also, what is the command line to do this for EVERY file at once, regardless of extension?

It would be awesome to see a solution that uses xargs or find's -exec (a sketch of one follows the loop versions below). But here is how you can do this with a shell loop and find.
Note that this recursively backs up files in subdirectories.
For .php files:
find . -iname '*.php' -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
For all files:
find . -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
For all files that have an extension:
find . -iname '*.*' -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
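And here is a sketch of the -exec flavor mentioned above; the sh -c wrapper keeps filenames with spaces or newlines intact (this assumes a POSIX sh is available):
find . -iname '*.php' -type f -exec sh -c 'cp -- "$1" "$1.bak"' _ {} \;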

You can just use the cp command.
Enter the directory that has all the .php files, then type
cp *.php temp/
where temp is a directory inside the current directory. The * is a wildcard that matches every name.
If you just want to copy the whole folder, you could use
cp -R foldername destinationArea

Related

Iterate over files in a subfolder

I'm new here, learning bash for the first time.
I'm trying to iterate over files named "list.txt" placed in subfolders, manipulate them, and create a new file under the same subfolder. The nesting could look like this:
inventory/product_names1/list.txt
inventory/product_names2/list.txt
As product_names is completely random, I would like to iterate over all list.txt files with unix commands like sed/grep/cut and create a new file under the same random product_names folder.
for f in $( find . -name 'list.txt'); do for list in $f; do cat $f | cut -d']' -f2- > "$f/new_file.txt" ; done ; done
I can access files into the nest using find command. How can I redirect output in the right subfolder if the product_names is random?
inventory/product_names1/list.txt
inventory/product_names1/new_file.txt
inventory/product_names2/list.txt
inventory/product_names2/new_file.txt
This script is intended to run from the root folder, working over the entire "inventory" path. $f points to inventory/product_names1/list.txt, but I need the output to land in inventory/product_names1. How can I redirect correctly if I don't have the right value in a variable?
You can either use parameter expansion to remove the file name from the path, or you can iterate over all the directories and only work on them if they contain the list.txt file.
#!/bin/bash
for list in inventory/*/list.txt ; do
    new=${list%/*}/new_list.txt
    echo "$list" "$new"
done
# OR
for dir in inventory/* ; do
    if [[ -f $dir/list.txt ]] ; then
        echo "$dir"/list.txt "$dir"/new_list.txt
    fi
done
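Putting the first variant together with the cut command from the question, a minimal sketch (assuming the -d']' -f2- field spec from your attempt is what you want):
for list in inventory/*/list.txt ; do
    cut -d']' -f2- "$list" > "${list%/*}/new_file.txt"
done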
find can not only find files but also execute commands when a file is found:
find . -type f -name 'list.txt' -execdir sh -c 'cut -d"]" -f2 list.txt > new_file.txt' \;
Explanations:
-type f condition added to skip directories named list.txt. If some of your list.txt files can be symbolic links and you want to consider them too, use -type f,l with GNU find. With other find implementations you may need to use \( -type f -o -type l \).
-execdir runs the command in the directory where the file was found.
By default find does not print when -execdir is used. If you need it add the -print command:
find . -type f -name 'list.txt' -execdir sh -c 'cut -d"]" -f2 list.txt > new_file.txt' \; -print
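For the symbolic-link case described in the explanations, the portable form would look like this (a sketch; the spaces around the escaped parentheses are required by find):
find . \( -type f -o -type l \) -name 'list.txt' -execdir sh -c 'cut -d"]" -f2 list.txt > new_file.txt' \;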

Doing something to all files in an entire tree

The scenario is that I want to convert all of my music files from .mp3 to .ogg. They are in a folder called "Music". In this folder there are folders and files. The files are .mp3s. The directories may contain .mp3s or directories which further contain .mp3s or directories, and so on. This is because some artists have albums which have parts and some do not, etc.
I want to write a script that converts each file using avconv.
Basically, what I am going to do is manually cd into every directory and run the following:
for file in $(ls); do avconv -i $file `echo \`basename $file .mp3\`.ogg`; done
This successfully gets me what I want. However, this is not great as I have a lot of folders, and manually going into each of them and executing this is slow.
My question, then, is how do I write a script that runs this in any directory that has .mp3s, and then goes into any subdirectory it finds and recursively calls itself? My intuition tells me to use Perl or Python because of the complex nature of this.
Thanks for any suggestions!
I'm not familiar with avconv but assuming your command is:
avconv -i inputname outputname
And you want to convert all inputname.mp3 to inputname.ogg in their original directories below Music, then the following should work in bash:
#!/bin/bash
while read -r fname; do
    avconv -i "$fname" "${fname%.mp3}.ogg"
done < <(find /path/to/Music -type f -name "*.mp3")
Note: this does not remove the original .mp3, and the space between < < is required. Also note that for file in $(ls) is filled with potential for errors: the shell word-splits the output of ls on whitespace and glob-expands it, so names with spaces or wildcard characters break.
You can do it with bash in a one-liner:
First you find all files (of type file (-type f)) that match the pattern "*.mp3". To read each one you use 'while' and invoke avconv.
To change the extension I prefer the 'sed' command, which keeps the folder part of the path so you don't need the 'cd' command.
Notice that you must put quotes around the $FN variable because it can contain spaces.
find . -type f -iname "*.mp3" | while read -r FN ; do avconv -i "$FN" "$(echo "$FN" | sed 's/\.mp3$/.ogg/')" ; done
find <music-folder> -type f -name '*.mp3' -print0 | \
xargs -0 -I{} bash -c 'mp3="$0"; ogg="${mp3%.mp3}.ogg"; avconv -i "$mp3" "$ogg";' {}
With -print0 and -0 the names are NUL-separated, so this should survive "weird" filenames with spaces, quotes and other strange symbols within.
You can list directories with absolute paths and recursively cd into every directory using find $PWD -type d syntax:
Just inside from Music directory run:
for d in $(find "$PWD" -type d)
do
    cd "$d"
    for file in $(find . -maxdepth 1 -type f -name '*.mp3')
    do
        echo "$file"
        avconv -i "$file" "$(basename "$file" .mp3).ogg"
    done
done
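For completeness, here is a sketch that avoids looping over find's output entirely, using find's -execdir to run the conversion from each file's own directory (it assumes GNU find and a POSIX sh):
find /path/to/Music -type f -name '*.mp3' -execdir sh -c 'avconv -i "$1" "${1%.mp3}.ogg"' _ {} \;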

Copying list of files to a directory

I want to make a search for all .fits files that contain a certain text in their name and then copy them to a directory.
I can use a command called fetchKeys to list the files that contain say 'foo'
The command looks like this : fetchKeys -t 'foo' -F | grep .fits
This returns a list of .fits files that contain 'foo'. Great! Now I want to copy all of these to a directory /path/to/dir. There are too many files to do individually; I need to copy them all using one command.
I'm thinking something like:
fetchKeys -t 'foo' -F | grep .fits > /path/to/dir
or
cp fetchKeys -t 'foo' -F | grep .fits /path/to/dir
but of course neither of these works. Any other ideas?
If this is on Linux/Unix, can you use the find command? That seems very much like fetchkeys.
$ find . -name "*foo*.fits" -type f -print0 | while read -r -d $'\0' file
do
    basename=$(basename "$file")
    cp "$file" "$fits_dir/$basename"
done
The find command will find all files that match *foo*.fits in their name. The -type f says they have to be files and not directories. The -print0 means print out the files found, but separate them with the NUL character. Normally, the find command will simply return a file on each line, but what if the file name contains spaces, tabs, new lines, or even other strange characters?
The -print0 will separate out files with nulls (\0), and the read -d $'\0' file means to read in each file separating by these null characters. If your files don't contain whitespace or strange characters, you could do this:
$ find . -name "*foo*.fits" -type f | while read file
do
    basename=$(basename "$file")
    cp "$file" "$fits_dir/$basename"
done
Basically, you read each file found with your find command into the shell variable file. Then, you can use that to copy that file into your $fits_dir or where ever you want.
Again, maybe there's a reason to use fetchKeys, and it is possible to replace that find with fetchKeys, but I don't know that fetchKeys command.
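That said, if fetchKeys is required, the same loop pattern should work on its output; a sketch, assuming fetchKeys prints one file name per line with no embedded newlines:
fetchKeys -t 'foo' -F | grep .fits | while read -r file
do
    cp "$file" "$fits_dir/$(basename "$file")"
done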
Copy all files with the name containing foo to a certain directory:
find . -name "*foo*.fit" -type f -exec cp {} "/path/to/dir/" \;
Copy all files themselves containing foo to a certain directory (solution without xargs):
find . -type f -exec grep -l foo {} \; | while read -r f; do cp "$f" /path/to/dir/; done
The find command has very useful arguments -exec, -print, -delete. They are very robust and eliminate the need to manually process the file names. The syntax for -exec is: -exec (what to do) \;. The name of the file currently being processed is substituted for the placeholder {}.
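When many files match, -exec with a trailing + batches as many names as possible into each invocation; a sketch, assuming GNU cp for the -t (target-directory-first) option:
find . -name "*foo*.fits" -type f -exec cp -t /path/to/dir/ {} +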
Other commands that are very useful for such tasks are sed and awk.
The xargs tool can execute a command for every line what it gets from stdin. This time, we execute a cp command:
fetchKeys -t 'foo' -F | grep .fits | xargs -n 500 cp -vfa -t /path/to/dir
xargs is a very useful tool, although its parametrization is not really trivial. This command reads the .fits file names in groups of 500 and calls a single cp command for every group; GNU cp's -t option lets the target directory come first. I haven't tested it in depth; if it doesn't work, leave a comment.

shell entering each folder and zip content

So I have some folder
|-Folder1
||-SubFolder1
||-SubFolder2
|-Folder2
||-SubFolder3
||-SubFolder4
Each subfolder contains several jpgs I want to zip to the root folder...
I'm a little bit stuck on "How to enter each folder"
Here is my code:
find ./ -type f -name '*.jpg' | while IFS= read i
do
    foldName=${PWD##*/}
    zip ../../foldName *
done
The best would be to store FolderName+SubFolderName and give that to the zip command as the name...
Zipping JPEGs (for Compression) is Usually Wasted Effort
First of all, attempting to compress already-compressed formats like JPEG files is usually a waste of time, and can sometimes result in archives that are larger than the original files. However, it is sometimes useful to do so for the convenience of having a bunch of files in a single package.
Just something to keep in mind. YMMV.
Use Find's -execdir Flag
What you need is the find utility's -execdir flag. The GNU find man page says:
-execdir command {} +
    Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find.
For example, given the following test corpus:
cd /tmp
mkdir -p foo/bar/baz
touch foo/bar/1.jpg
touch foo/bar/baz/2.jpg
you can zip the entire set of files with find while excluding the path information with a single invocation. For example:
find /tmp/foo -name \*jpg -execdir zip /tmp/my.zip {} +
Use Zip's --junk-paths Flag
The zip utility on many systems supports a --junk-paths flag. The man page for zip says:
--junk-paths
Store just the name of a saved file (junk the path), and do not
store directory names.
So, if your find utility doesn't support -execdir, but you do have a zip that supports junking paths, you could do this instead:
find /tmp/foo -name \*jpg -print0 | xargs -0 zip --junk-paths /tmp/my.zip
You can use dirname to get the directory a file or directory is located in.
You can also simplify the find command to search only for directories by using -type d. Then you should use basename to get only the name of the subdirs:
find ./*/* -type d | while read -r line; do
    zip --junk-paths "$(basename "$line")" "$line"/*.jpg
done
Explanation
find ./*/* -type d
will print out all directories located in ./*/* which will result in all subdirs of directories located in the current dir
while read line reads each line from the stream and stores it in the variable "line". Thus $line will be the relative path to the subdir, e.g. "Folder1/Subdir2"
"$(basename $line)" returns the only the name of the subdir, e.g. "Subdir2"
Update: add --junk-paths to the zip command if you do not want the directory paths to be stored in the zip file.
So, after a little checking, I finally got something working:
find ./*/* -type d | while read line; do
    #printf '%s\n' "$line"
    zip ./"$line" "$line"/*.jpg
done
But this creates an archive containing:
Subfolder.zip
Folder
|-Subfolder
||-File1.jpg
||-File2.jpg
||-File3.jpg
Instead I would like it to be:
Subfolder.zip
|-File1.jpg
|-File2.jpg
|-File3.jpg
So I tried using basename and dirname in different combinations... Always got some error...
And just to learn how: what if I would like the new archive to be created in the same root directory as "Folder"?
Ok finally got it!
find ./* -name \*.zip -type f -print0 | xargs -0 rm -f
find ./*/* -type d | while read -r line; do
    #printf '%s\n' "$line"
    zip --junk-paths ./"$line" "$line"/*.jpg
done
find . -mindepth 2 -type f -name \*.zip -exec mv -- '{}' . \;
In the first line I simply remove all existing .zip files, then I zip everything, and in the final line I move all the zips to the root directory!
Thanks everybody for your help!

Moving files with an extension into a location

How could I move all .txt files from a folder and all of its subfolders into a target directory?
And preferably rename them after the folder they were included in, although that's not that important. I'm not exactly familiar with bash.
To recursively move files, combine find with mv.
find src/dir/ -name '*.txt' -exec mv -t target/dir/ -- {} +
Or if on a UNIX system without GNU's version of find, such as macOS, use:
find src/dir/ -name '*.txt' -exec mv -- {} target/dir/ ';'
To rename the files when you move them it's trickier. One way is to have a loop that uses "${var//from/to}" to replace all occurrences of from with to in $var.
find src/dir/ -name '*.txt' -print0 | while IFS= read -rd $'\0' file; do
mv -- "$file" target/dir/"${file//\//_}"
done
This is ugly because from is a slash, which needs to be escaped as \/.
See also:
Unix.SE: Understanding IFS= read -r line
BashFAQ: How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Try this:
find source -name '*.txt' | xargs -I files mv files target
This is quicker to write than a loop, but note that with -I, xargs still invokes a single mv process for every file that needs to be moved, much like -exec with ';'.
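To actually batch many files into each mv invocation, combine -print0 with GNU mv's -t option, which takes the target directory first (a sketch, assuming GNU findutils and coreutils):
find source -name '*.txt' -print0 | xargs -0 mv -t target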
If it's just one level:
mv *.txt */*.txt target/directory/somewhere/.
