How can I recursively replace file and directory names using Terminal? - macos

Using the Terminal on macOS, I want to recursively replace a word that appears in both directory names and file names. For instance, I have an Angular app whose module name is article, and all of the file names and directory names contain the word article. I've already done a find and replace to replace articles with apples in the code. Now I want to do the same with the file structure, so both the file names and the directories share the same convention.
Just for information, I've already tried using the newest Yeoman generator to create new files, but there seems to be an issue with it. The alternative is to duplicate a directory and rename all of the files, which is quite time consuming.

I got it to work with the following script:
#!/bin/bash
var=$1
if [ -n "$var" ]; then
    CRUDNAME=$1
    CRUDNAMEUPPERCASE=$(echo "${CRUDNAME:0:1}" | tr '[a-z]' '[A-Z]')${CRUDNAME:1}
    FOLDERNAME=${CRUDNAME}s
    # Create the new folder
    cp -R modules/articles "modules/$FOLDERNAME"
    # Do the find/replace in all the files
    find "modules/$FOLDERNAME" -type f -print0 | xargs -0 sed -i -e "s/Article/$CRUDNAMEUPPERCASE/g"
    find "modules/$FOLDERNAME" -type f -print0 | xargs -0 sed -i -e "s/article/$CRUDNAME/g"
    # Delete the useless backup files left behind by sed -i -e
    rm modules/$FOLDERNAME/**/*-e
    rm modules/$FOLDERNAME/**/**/*-e
    rm modules/$FOLDERNAME/**/**/**/*-e
    # Rename all the files
    for file in modules/$FOLDERNAME/**/*article* ; do mv "$file" "${file//article/$CRUDNAME}" ; done
    for file in modules/$FOLDERNAME/**/**/*article* ; do mv "$file" "${file//article/$CRUDNAME}" ; done
    for file in modules/$FOLDERNAME/**/**/**/*article* ; do mv "$file" "${file//article/$CRUDNAME}" ; done
else
    echo "Usage: sh rename-module.sh [crud-name]"
fi
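For what it's worth, the stray *-e files come from the way the stock macOS (BSD) sed parses -i -e: the -e ends up being used as the backup suffix. A minimal variant of the two replace lines (a sketch, assuming BSD sed) passes an empty suffix instead, so no backup files are created and the rm cleanup becomes unnecessary:
find "modules/$FOLDERNAME" -type f -print0 | xargs -0 sed -i '' "s/Article/$CRUDNAMEUPPERCASE/g"
find "modules/$FOLDERNAME" -type f -print0 | xargs -0 sed -i '' "s/article/$CRUDNAME/g"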
Apparently I'm not the only one to encounter this issue:
https://github.com/meanjs/generator-meanjs/issues/79

Related

Shell Script: How to copy files with specific string from big corpus

I have a small bug and don't know how to solve it. I want to copy files from a big folder with many files, where the files contain a specific string. For this I use grep, ack or (in this example) ag. When I'm inside the folder it matches without problem, but when I want to do it with a loop over the files in the following script it doesn't loop over the matches. Here is my script:
ag -l "${SEARCH_QUERY}" "${INPUT_DIR}" | while read -d $'\0' file; do
echo "$file"
cp "${file}" "${OUTPUT_DIR}/${file}"
done
SEARCH_QUERY holds the String I want to find inside the files, INPUT_DIR is the folder where the files are located, OUTPUT_DIR is the folder where the found files should be copied to. Is there something wrong with the while do?
EDIT:
Thanks for the suggestions! I went with this one, because it also looks for files in subfolders and saves a list of all the files.
ag -l "${SEARCH_QUERY}" "${INPUT_DIR}" > "output_list.txt"
while read file
do
echo "${file##*/}"
cp "${file}" "${OUTPUT_DIR}/${file##*/}"
done < "output_list.txt"
Better to implement it like below, with a find command:
find "${INPUT_DIR}" -name "*.*" | xargs grep -l "${SEARCH_QUERY}" > /tmp/file_list.txt
while read file
do
echo "$file"
cp "${file}" "${OUTPUT_DIR}/${file}"
done < /tmp/file_list.txt
rm /tmp/file_list.txt
Or another option:
grep -l "${SEARCH_QUERY}" "${INPUT_DIR}"/*.* > /tmp/file_list.txt
while read file
do
echo "$file"
cp "${file}" "${OUTPUT_DIR}/${file}"
done < /tmp/file_list.txt
rm /tmp/file_list.txt
If you do not mind doing it in just one line, then:
grep -lr 'ONE\|TWO\|THREE' | xargs -I xxx -P 0 cp xxx dist/
Guide:
-l just print the file name and nothing else
-r search recursively through the CWD and all sub-directories
match these words alternatively: 'ONE' or 'TWO' or 'THREE'
| pipe the output of grep to xargs
-I xxx each file name is substituted for xxx (it is just a placeholder)
-P 0 run all the commands (= cp) in parallel (= as fast as possible)
cp each file xxx to the dist directory
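Applied to the variables from the question, that one-liner would look something like this (a sketch; like the original, it assumes file names without embedded newlines, and that OUTPUT_DIR is an absolute path that already exists, since it is run from inside the input directory):
cd "${INPUT_DIR}" && grep -lr "${SEARCH_QUERY}" . | xargs -I xxx -P 0 cp xxx "${OUTPUT_DIR}/"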
If I understand the behavior of ag correctly, then you have to
adjust the read delimiter to '\n' or
use ag -0 -l to force delimiting by '\0'
to solve the problem in your loop.
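Based on that second option, a corrected version of the original loop might look like this (a sketch, assuming ag's -0/--null flag to null-terminate the file names):
ag -0 -l "${SEARCH_QUERY}" "${INPUT_DIR}" | while IFS= read -r -d '' file; do
    echo "${file##*/}"
    cp "${file}" "${OUTPUT_DIR}/${file##*/}"
done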
Alternatively, you can use the following script, which is based on find instead of ag.
while read file; do
echo "$file"
cp "$file" "$OUTPUT_DIR/$file"
done < <(find "$INPUT_DIR" -name "*$SEARCH_QUERY*" -print)

Linux copy directory structure and create symlinks to existing files with different extension

I have a large directory structure similar to the following
/home/user/abc/src1
    /file_a.xxx
    /file_b.xxx
/home/user/abc/src2
    /file_a.xxx
    /file_b.xxx
It contains multiple srcX folders and has many files; most of the files have a .xxx extension, and these are the ones that I am interested in.
I would like to create an identical directory structure in, say, /tmp. This part I have been able to accomplish via rsync:
rsync -av -f"+ */" -f"- *" /home/user/abc/ /tmp/xyz/
The next step is what I can't figure out. I need the directory structure in /tmp/xyz to have symlinks to all the files in /home/user/abc with a different file extension (.zzz). The directory structure would look as follows:
/tmp/xyz/src1
    /file_a.zzz -> /home/user/abc/src1/file_a.xxx
    /file_b.zzz -> /home/user/abc/src1/file_b.xxx
/tmp/xyz/src2
    /file_a.zzz -> /home/user/abc/src2/file_a.xxx
    /file_b.zzz -> /home/user/abc/src2/file_b.xxx
I understand that I could just copy the data and do a batch rename. That is not an acceptable solution.
How do I recursively create symlinks for all the .xxx files in /home/user/abc and link them into /tmp/xyz with a .zzz extension?
find + exec seems like what I want, but I can't put 2 and 2 together on this one.
This could work:
cd /tmp/xyz/src1
# run ln through sh so that basename is evaluated per file rather than once by the calling shell
find /home/user/abc/src1/ -type f -name '*.xxx' -print0 | xargs -r0 -I '{}' sh -c 'ln -s "$1" "$(basename "$1" .xxx).zzz"' _ '{}'
Navigate to /tmp/xyz/ then run the following script:
#!/usr/bin/env bash
# First make the src* folders in the present directory:
mkdir -p $(find /home/user/abc/src* -type d -name "src*" | rev | cut -d"/" -f1 | rev)
# Then make the symbolic links:
while read -r -d' ' file; do
    ln -s "${file}" "$(echo "${file}" | rev | cut -d/ -f-2 | rev | sed 's/\.xxx/\.zzz/')"
done <<< "$(echo "$(find /home/user/abc/src* -type f -name '*.xxx') dummy")"
Thanks for the input, all. Based upon the ideas I saw, I was able to come up with a script that fits my needs.
#!/bin/bash
GLOBAL_SRC_DIR="/home/user/abc"
GLOBAL_DEST_DIR="/tmp/xyz"

create_symlinks ()
{
    local SRC_DIR="${1}"
    local DEST_DIR="${2}"
    # read in our file list, using the null terminator
    while IFS= read -r -d $'\0' file; do
        # If the file ends with .xxx or .yyy
        if [[ ${file} =~ \.(xxx|yyy)$ ]] ; then
            basePath="$(dirname "${file}")"
            fileName="$(basename "${file}")"
            completeSourcePath="${basePath}/${fileName}"
            #echo "${completeSourcePath}"
            # strip off the preceding source prefix
            partialDestPath=$(echo "${basePath}" | sed -r "s|^${SRC_DIR}||")
            fullDestPath="${DEST_DIR}/${partialDestPath}"
            # rename .xxx to .zzz; don't rename .yyy, just link it
            cppFileName=$(echo "${fileName}" | sed -r "s|\.xxx$|\.zzz|")
            completeDestinationPath="${fullDestPath}/${cppFileName}"
            ln -s "${completeSourcePath}" "${completeDestinationPath}"
        fi
    done < <(find "${SRC_DIR}" -type f -print0)
}

main ()
{
    create_symlinks "${GLOBAL_SRC_DIR}" "${GLOBAL_DEST_DIR}"
}

main
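As a quick sanity check (not part of the script above), listing the symlinks that were created confirms the result:
find /tmp/xyz -type l -ls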

How can I recursively change file names to lower case and update all references to them?

I need to rename all the files in the MathJax library to lower case and update all references to them in order to conform to some SVN restrictions. Does anyone have a bash script that will do this for me? Thanks.
In order to rename files from uppercase to lowercase, you can use the rename command. In this case you can enter the directory containing the library files and run the following command to rename all the files to lower case:
find . -exec readlink -e '{}' \; | xargs rename 'y/A-Z/a-z/'
Let me know your feedback after trying this out.
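If the Perl-based rename is not available, a rough equivalent using only find, tr and mv might look like this (a sketch; it assumes file names without embedded newlines and only touches regular files, not directories):
find . -depth -type f | while IFS= read -r f; do
    lower=$(dirname "$f")/$(basename "$f" | tr '[:upper:]' '[:lower:]')
    [ "$f" != "$lower" ] && mv -- "$f" "$lower"
done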
Here's the solution I came up with. The script is based on another Stack Overflow answer: How to create a bash script that will lower case all files in the current folder, then search/replace in all files?. I just changed it to go through the whole directory tree. One thing that stumped me for a while was the need to provide an empty string after the -i switch for sed to get it to work with Mac OS X:
#!/bin/bash
find . -type f -name '*.js' -print0 | while IFS= read -r -d '' file; do
    # Check to see if the filename contains any uppercase characters
    oldfilename=$(basename "$file")
    iscap=$(echo "$oldfilename" | awk '{if ($0 ~ /[[:upper:]]/) print }')
    if [[ -n $iscap ]]
    then
        # If the filename contains upper case characters, convert them to lower case
        newname=$(echo "$oldfilename" | tr '[A-Z]' '[a-z]') # make lower case
        # Perform various search/replaces on the file name to clean things up
        newname=$(echo "$newname" | sed 's/---/-/')
        newname=$(echo "$newname" | sed 's/--/-/')
        newname=$(echo "$newname" | sed 's/-\./\./')
        newname=$(echo "$newname" | sed 's/there-s-/theres-/')
        # Rename the file
        newpathandfile=$(dirname "$file")/$newname
        printf 'Moving %s\n To %s\n\n' "$file" "$newpathandfile"
        mv "$file" "$newpathandfile"
        # Update all references to the new filename in all js files
        find . -type f -name '*.js' -print0 | while IFS= read -r -d '' thisfile; do
            sed -i '' -e "s/$oldfilename/$newname/g" "$thisfile"
            echo "$thisfile s/$oldfilename/$newname/g"
        done
    fi
done
I renamed directories partly by script and partly manually. There were other types of files (web fonts in particular) that I was able to rename using this script as well, just by specifying the appropriate file extension.
The final steps for the MathJax customization involved transforming to lowercase the value that goes into config.root. I couldn't find anyone else who had had to convert the whole MathJax library to lowercase file and directory names, so that part of this question/answer may not have much value for others.
The script I came up with takes a while to run but that wasn't an issue for me; I don't think I'll ever have to run it again!

How can I manipulate file names using bash and sed?

I am trying to loop through all the files in a directory.
I want to do some stuff on each file (convert it to xml, not included in example), then write the file to a new directory structure.
for file in `find /home/devel/stuff/static/ -iname "*.pdf"`;
do
echo $file;
sed -e 's/static/changethis/' $file > newfile +".xml";
echo $newfile;
done
I want the results to be:
$file => /home/devel/stuff/static/2002/hello.txt
$newfile => /home/devel/stuff/changethis/2002/hello.txt.xml
How do I have to change my sed line?
If you need to rename multiple files, I would suggest using the rename command:
# remove "-n" after you verify it is what you need
rename -n 's/hello/hi/g' $(find /home/devel/stuff/static/ -type f)
Or, if you don't have rename, try this:
find /home/devel/stuff/static/ -type f | while read FILE
do
# modify line below to do what you need, then remove leading "echo"
echo mv $FILE $(echo $FILE | sed 's/hello/hi/g')
done
Are you trying to change the filename? Then
for file in /home/devel/stuff/static/*/*.txt
do
echo "Moving $file"
mv "$file" "${file/static/changethis}.xml"
done
Please make sure /home/devel/stuff/static/*/*.txt is what you want before using the script.
First, you have to create the name of the new file based on the name of the initial file. The obvious solution is:
newfile=${file/static/changethis}.xml
Second, you have to make sure that the new directory exists, or create it if not:
mkdir -p "$(dirname "$newfile")"
Then you can do something with your file:
doSomething < $file > $newfile
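Put together, the whole thing might look like this (a sketch; doSomething stands in for whatever conversion you actually run, and the glob assumes the files sit one directory level below static/, as in the example paths):
for file in /home/devel/stuff/static/*/*.pdf; do
    newfile=${file/static/changethis}.xml
    mkdir -p "$(dirname "$newfile")"
    doSomething < "$file" > "$newfile"
done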
I wouldn't do the for loop because of the possibility of overloading your command line. Command lines have a limited length, and if you overload it, it'll simply drop off the excess without giving you any warning. It might work if your find returns 100 files. It might work if it returns 1,000 files, but it might fail if it returns 10,000 files, and you'll never know.
The best way to handle this is to pipe the find into a while read statement, as glenn jackman suggests.
The sed command only works on STDIN and on files, but not on file names, so if you want to munge your file name, you'll have to do something like this:
$newname="$(echo $oldname | sed 's/old/new/')"
to get the new name of the file. The $() construct executes the command and puts the results of the command on STDOUT.
So, your script will look something like this:
find /home/devel/stuff/static/ -name "*.pdf" | while read file
do
    echo "$file"
    newfile="$(echo "$file" | sed -e 's/static/changethis/')"
    newfile="$newfile.xml"
    echo "$newfile"
done
Now, since you're renaming the file directory, you'll have to make sure the directory exists before you do your move or copy:
find /home/devel/stuff/static/ -name "*.pdf" | while read file
do
    echo "$file"
    newfile="$(echo "$file" | sed -e 's/static/changethis/')"
    newfile="$newfile.xml"
    echo "$newfile"
    # Check for the directory and create it if it doesn't exist
    dirname=$(dirname "$newfile")
    if [ ! -d "$dirname" ]
    then
        mkdir -p "$dirname"
    fi
    # Directory now exists, so you can do the move
    mv "$file" "$newfile"
done
Note the quotation marks to handle the case there's a space in the file name.
By the way, instead of doing this:
if [ ! -d "$dirname" ]
then
mkdir -p "$dirname"
fi
You can do this:
[ -d "$dirname"] || mkdir -p "$dirname"
The || means to execute the following command only if the test isn't true. Thus, if [ -d "$dirname" ] is a false statement (the directory doesn't exist), you run mkdir.
It's a fairly common shortcut that you'll see in shell scripts.
find ... | while read file; do
newfile=$(basename "$file").xml;
do something to "$file" > "$somedir/$newfile"
done
OUTPUT="$(pwd)";
for file in `find . -iname "*.pdf"`;
do
echo $file;
cp $file $file.xml
echo "file created in directory = {$OUTPUT}"
done
This will create a new file with name whatyourfilename.xml, for hello.pdf the new file created would be hello.pdf.xml, basically it creates a new file with .xml appended at the end.
Remember the above script finds files in the directory /home/devel/stuff/static/ whose file names match the matcher string of the find command (in this case *.pdf), and copies it to your present working directory.
The find command in this particular script only finds files with filenames ending with .pdf If you wanted to run this script for files with file names ending with .txt, then you need to change the find command to this find /home/devel/stuff/static/ -iname "*.txt",
Once I wanted to remove the trailing -min from my files, i.e. I wanted alg-min.jpg to turn into alg.jpg. So after some struggle, I managed to figure out something like this:
for f in *; do echo "$f"; mv "$f" "$(echo "$f" | sed 's/-min//g')"; done
Hope this helps someone wanting to REMOVE or SUBSTITUTE some part of their file names.

How to rename files on a date base in the shell?

I'd like to rename some files that are all in the same directory. The file name pattern used is Prefix_ddmmyy.tex with a european date format. For the sake of readability and the ordering I'd like to rename the files in a pattern Prefix_yymmdd.tex with a canonical date format.
Any ideas how I can do this automatically for a complete directory? My sed and regexp knowledge is not very sharp...
for file in Prefix_*.tex ; do
    file_new=$(echo "$file" | sed -e 's:\([0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\)\(\.tex\):\3\2\1\4:')
    test "$file" != "$file_new" && mv -f "$file" "$file_new"
done
Or, if you have a lot of files and/or want to process files recursively, replace:
for file in Prefix_*.tex ; do
with:
find . -name 'Prefix_*.tex' -print | while read file ; do
or (non-recursive, GNU):
find . -maxdepth 1 -name 'Prefix_*.tex' -print | while read file ; do
You can also do it with any Bourne-type shell, without external commands:
for f in *.tex; do
    _s=.${f##*.} _f=${f%.*} _p=${f%_*}_        # suffix (.tex), name without suffix, prefix up to "_"
    _dt=${_f#$_p} _d=${_dt%????} _m=${_dt%??}  # date part (ddmmyy), then day, then day+month
    _y=${_dt#$_m} _m=${_m#??}                  # year, then month
    mv -- "$f" "$_p$_y$_m$_d$_s"
done
With zsh it would be:
autoload -U zmv
zmv '(*_)(??)(??)(??)(.tex)' '$1$4$3$2$5'
You can try "mmv".
