I have 3 files. I want to modify the names of those files using a for loop, as below.
These are the files I have:
1234.xml
333.xml
cccc.xml
Output:
1234_R.xml
333_R.xml
cccc_R.xml
Depending on your distribution, you can use rename:
rename 's/(.*)(\.xml)/$1_R$2/' *.xml
The basic Unix command mv works for both moving and renaming:
mv 1234.xml 1234_R.xml
If you want to do it for a large number of files, do it like this:
[~/bash/rename]$ touch 1234.xml 333.xml cccc.xml
[~/bash/rename]$ ls
1234.xml 333.xml cccc.xml
[~/bash/rename]$ L=`ls *.xml`
[~/bash/rename]$ for x in $L; do mv $x ${x%%.*}_R.xml; done
[~/bash/rename]$ ls
1234_R.xml 333_R.xml cccc_R.xml
[~/bash/rename]$
You can use a for loop to iterate over a list of words (e.g. with the list of file names returned by ls) with the (bash) syntax:
for name [ [ in [ word ... ] ] ; ] do list ; done
Then use mv source dest to rename each file.
One nice trick here is to use basename, which strips the directory and an optional suffix from a filename (e.g. basename 333.xml .xml will just return 333).
If you put all this together, the following should work:
for f in `ls *.xml`; do mv $f `basename $f .xml`_R.xml; done
Related
There are n folders in the directory, named after dates, for example:
20171002 20171003 20171005 ...20171101 20171102 20171103 ...20180101 20180102
Note: the dates are not consecutive.
Within each month, I want to compress every three folders into one archive.
For example:
tar jcvf mytar-20171002_1005.tar.bz2 20171002 20171003 20171005
How can I write a shell script to do this?
You need a for loop over your ls output, then parse each directory name.
dir_list=$(ls)
prev_month=""
times=0
first_dir=""
last_dir=""
group=()
for i in $dir_list; do
    month=${i:0:6}   # here month will be year plus month, e.g. 201710
    if [ "$month" = "$prev_month" ]; then
        times=$((times+1))
        if [ "$times" -eq 3 ]; then
            #compress here
            group=()
            times=0
            first_dir=""
            last_dir=""
        else
            last_dir=$i
            group+=("$i")
        fi
    else
        if [ "$first_dir" = "" ]; then
            first_dir=$i
        else
            #compress here
            first_dir=$i
            last_dir=""
            group=()
        fi
    fi
    prev_month=$month
done
This code is not tested and may contain syntax errors. Each '#compress here' needs to be replaced by a loop over the group array that builds the list of directories to compress.
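For example, the '#compress here' step could look something like this; a minimal sketch, assuming the directories collected for the current group sit in the group array (the archive name follows the mytar-YYYYMMDD_MMDD.tar.bz2 pattern from the question):
if [ "${#group[@]}" -gt 0 ]; then
    first=${group[0]}
    last=${group[${#group[@]}-1]}
    tar jcvf "mytar-${first}_${last:4:4}.tar.bz2" "${group[@]}"
fi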
Assuming you don't have too many directories (I think the limit is several hundred), you can use Bash's array manipulation.
So, you first load all your directory names into a Bash array:
dirs=( $(ls) )
(I'm going to assume files have no spaces in their names, otherwise it gets a bit dicey)
Then you can use Bash's array slice syntax to pop 3 elements at a time from the array:
while [ "${#dirs[#]}" -gt 0 ]; do
dirs_to_compress=( "${dirs[#]:0:3}" )
dirs=( "${dirs[#]:3}" )
# do something with dirs_to_compress
done
The rest should be pretty easy.
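For instance, the "do something with dirs_to_compress" step could be a tar call that builds the archive name from the first and last entries of the slice; a sketch, assuming the YYYYMMDD directory names from the question:
first=${dirs_to_compress[0]}
last=${dirs_to_compress[${#dirs_to_compress[@]}-1]}
tar jcvf "mytar-${first}_${last:4:4}.tar.bz2" "${dirs_to_compress[@]}"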
You can achieve this with xargs, a bash while loop, and awk:
ls | xargs -n3 | while read line; do
tar jcvf $(echo $line | awk '{print "mytar-"$1"_"substr($NF,5,4)".tar.bz2"}') $line
done
unset folders
declare -A folders
g=3
for folder in $(ls -d */); do
    folders[${folder:0:6}]+="${folder%%/} "
done
for folder in "${!folders[@]}"; do
    for ((i=0; i < $(echo ${folders[$folder]} | tr ' ' '\n' | wc -l); i+=g)); do
        group=(${folders[$folder]})
        groupOfThree=(${group[@]:i:g})
        tar jcvf mytar-${groupOfThree[0]}_${groupOfThree[-1]:4:4}.tar.bz2 ${groupOfThree[@]}
    done
done
This script finds all folders in the current directory, separates them by month, makes groups of at most three folders, and creates a .tar.bz2 for each group with the name you used in the question.
I tested it with these folders:
20171101 20171102 20171103 20171002 20171003 20171005 20171007 20171009 20171011 20171013 20180101 20180102
And the created tars are:
mytar-20171002_1005.tar.bz2
mytar-20171007_1011.tar.bz2
mytar-20171013_1013.tar.bz2
mytar-20171101_1103.tar.bz2
mytar-20180101_0102.tar.bz2
Hope that helps :)
EDIT: If you are using bash version < 4.2 then replace the line:
tar jcvf mytar-${groupOfThree[0]}_${groupOfThree[-1]:4:4}.tar.bz2 ${groupOfThree[@]}
by:
tar jcvf mytar-${groupOfThree[0]}_${groupOfThree[`expr ${#groupOfThree[@]} - 1`]:4:4}.tar.bz2 ${groupOfThree[@]}
That's because bash version < 4.2 doesn't support negative indices for arrays.
Say I have this folder structure:
MainFolder
Folder1
Folder2
Folder3
...
Folder200
I want to write a script so that, if I am currently inside Folder2 and execute the script, it will automatically change to the next directory in the list, in this case Folder3. The restrictions are that the folders could have any name, and I cannot rename them.
So my questions are:
1) How can I know what directory is next on the list? I was wondering if the subdirectories of a directory have a sequential index number that I could use to know what dir comes next.
2) Since I would like to display the name of the new directory at the end of the script, is there a way to display only the directory name? (e.g. Folder3 instead of /home/path/to/dir/Folder3, which is what pwd returns)
What defines the order in which the directories are to be processed? If you have directories without spaces and other special characters in their names, you can use ls to list the directories in order, and then find the name after the current directory:
cwd=$(basename $PWD)
nwd=$(ls .. | awk "/^$cwd$/ { found = 1; next; } { if (found) { print; found = 0 } }")
if [ -d ../$nwd ]
then cd ../$nwd
fi
The name of the directory (only) is found using basename $PWD or ${PWD##*/}.
This should work, assuming the sequence has no holes.
cdn()
{
cd ${PWD%%[0-9]*}$(( $(echo $PWD | sed 's/.*[^0-9]//')+1))
}
Edit: it seems I overlooked the "directory could have any name" statement. From your example, I assumed they have a trailing numerical id ...
It would have been clearer if you had put random names instead of Folderxx in your example.
Edit2: Here is a shell function that suits your requirements:
cdn()
{
for i in $(echo "/\/$(basename $PWD)$/
+1,\$p
Q" | ed -s !"ls -d ../*" | tail -n +2)
do
\cd $i 2>/dev/null && echo $(basename $PWD) && break
done
}
From a list of file names stored in a file f, what's the best way to find the relative path of each file name under dir, outputting this new list to file p? I'm currently using the following:
while read name
do
find dir -type f -name "$name" >> p
done < f
which is too slow for a large list, or a large directory tree.
EDIT: A few numbers:
Number of directories under dir: 1870
Number of files under dir: 80622
Number of filenames in f: 73487
All files listed in f do exist under dir.
The following piece of Python code does the trick. The key is to run find once and store the output in a hashmap, which gives an O(1) way to get from a file name to the list of paths with that name.
#!/usr/bin/env python
import os
file_names = open("f").readlines()
file_paths = os.popen("find . -type f").readlines()
file_names_to_paths = {}
for file_path in file_paths:
file_name = os.popen("basename "+file_path).read()
if file_name not in file_names_to_paths:
file_names_to_paths[file_name] = [file_path]
else:
file_names_to_paths[file_name].append(file_path) # duplicate file
out_file = open("p", "w")
for file_name in file_names:
if file_names_to_paths.has_key(file_name):
for path in file_names_to_paths[file_name]:
out_file.write(path)
Try this perl one-liner
perl -e '%H=map{chomp;$_=>1}<>;sub R{my($p)=@_;map R($_),<$p/*> if -d$p;($b=$p)=~s|.*/||;print"$p\n" if$H{$b}}R"."' f
1- create a hashmap whose keys are filenames : %H=map{chomp;$_=>1}<>
2- define a recursive subroutine to traverse directories : sub R{}
2.1- recursive call for directories : map R($_),<$p/*> if -d$p
2.2- extract the filename from the path : ($b=$p)=~s|.*/||
2.3- print if hashmap contains filename : print"$p\n" if$H{$b}
3- call R with the current directory as the path : R"."
EDIT : to traverse hidden directories (.*)
perl -e '%H=map{chomp;$_=>1}<>;sub R{my($p)=@_;map R($_),grep !m|/\.\.?$|,<$p/.* $p/*> if -d$p;($b=$p)=~s|.*/||;print"$p\n" if$H{$b}}R"."' f
I think this should do the trick:
xargs locate -b < f | grep ^dir > p
Edit: I can't think of an easy way to prefix dir/*/ to the list of file names, otherwise you could just pass that directly to xargs locate.
Depending on what percentage of the directory tree is considered a match, it might be faster to find every file, then grep out the matching ones:
find "$dir" -type f | grep -f <( sed 's+\(.*\)+/\1$+' "$f" )
The sed command pre-processes your list of file names into regular expressions that will only match full names at the end of a path.
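For example, a hypothetical name foo.txt in the list becomes the anchored pattern /foo.txt$, so a path only matches when its entire final component is identical:
echo foo.txt | sed 's+\(.*\)+/\1$+'    # prints /foo.txt$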
Here is an alternative using bash and grep
#!/bin/bash
flist(){
    for x in "$1"/*; do
        [ -d "$x" ] && flist "$x" || echo "$x"
    done
}
dir=/etc #the directory you are searching
list=$(< myfiles) #the file with file names
#format the list for grep
list="/${list//
/\$\|/}"
flist "$dir" | grep "$list"
...if you need full POSIX shell compliance (busybox ash, hush, etc.), replace the $list substring manipulation with a variant of chepner's sed, and replace $(< file) with $(cat file).
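A sketch of what that POSIX-only variant might look like, assuming the same myfiles list and the flist function above (the patterns file name is just an example):
dir=/etc
sed 's+\(.*\)+/\1$+' myfiles > patterns   # one anchored pattern per line
flist "$dir" | grep -f patterns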
I've got a set of four directories
English.lproj
German.lproj
French.lproj
Italian.lproj
each of these contains a series of XML files named
2_symbol.xml
4_symbol.xml
5_symbol.xml
... and so on ...
I need to rename all of these files to another numerical pattern,
because the code that determined those numbers has changed.
so the new numerical pattern would be like
1_symbol.xml
5_symbol.xml
3_symbol.xml
... and so on ...
so there's no applicable algorithm to determine this series; for this
reason I thought about storing the two numerical series in arrays.
I was thinking of a quick way of doing it with a simple bash script.
I think that I'd need an array to store the old numerical pattern and another
array to store the new numerical pattern, so that I can run a loop to do
# move n_symbol.xml newdir/newval_symbol.xml
Any suggestions?
thx n cheers.
-k-
You don't need bash for this; any POSIX-compatible shell will do:
repls="1:4 2:1 4:12 5:3"
for pair in $repls; do
old=${pair%:*}
new=${pair#*:}
file=${old}_symbol.xml
mv $file $new${file#$old}
done
Edit: you need to take care of overwriting files; the snippet above clobbers 4_symbol.xml, for example. One fix is to rename in two passes:
for pair in $repls; do
...
mv $file $new${file#$old}.tmp
done
for f in *.tmp; do
mv $f ${f%.tmp}
done
The following script will randomly shuffle the symbol names of all xml files across 'lproj' directories.
#!/bin/bash
shuffle() { # Taken from http://mywiki.wooledge.org/BashFAQ/026
local i tmp size max rand
size=${#array[*]}
max=$(( 32768 / size * size ))
for ((i=size-1; i>0; i--)); do
while (( (rand=$RANDOM) >= max )); do :; done
rand=$(( rand % (i+1) ))
tmp=${array[i]} array[i]=${array[rand]} array[rand]=$tmp
done
}
for file in *lproj/*.xml; do # get an array of symbol names
tmp=${file##*/}
array[$((i++))]=${tmp%%_*}
done
shuffle # shuffle the symbol name array
i=0
for file in *lproj/*.xml; do # rename the files with random symbols
echo mv "$file" "${file%%/*}/${array[$((i++))]}_${file##*_}"
done
Note: Remove the echo in front of the mv when you are satisfied with the results and re-run the script to make the changes permanent.
Script Output
$ ./randomize.sh
mv 1.lproj/1_symbol.xml 1.lproj/16_symbol.xml
mv 1.lproj/2_symbol.xml 1.lproj/12_symbol.xml
mv 1.lproj/3_symbol.xml 1.lproj/6_symbol.xml
mv 1.lproj/4_symbol.xml 1.lproj/4_symbol.xml
mv 2.lproj/5_symbol.xml 2.lproj/14_symbol.xml
mv 2.lproj/6_symbol.xml 2.lproj/1_symbol.xml
mv 2.lproj/7_symbol.xml 2.lproj/3_symbol.xml
mv 2.lproj/8_symbol.xml 2.lproj/7_symbol.xml
mv 3.lproj/10_symbol.xml 3.lproj/10_symbol.xml
mv 3.lproj/11_symbol.xml 3.lproj/11_symbol.xml
mv 3.lproj/12_symbol.xml 3.lproj/2_symbol.xml
mv 3.lproj/9_symbol.xml 3.lproj/8_symbol.xml
mv 4.lproj/13_symbol.xml 4.lproj/13_symbol.xml
mv 4.lproj/14_symbol.xml 4.lproj/15_symbol.xml
mv 4.lproj/15_symbol.xml 4.lproj/9_symbol.xml
mv 4.lproj/16_symbol.xml 4.lproj/5_symbol.xml
I have thousands of mp3s inside a complex folder structure which resides within a single folder. I would like to move all the mp3s into a single directory with no subfolders. I can think of a variety of ways of doing this using the find command but one problem will be duplicate file names. I don't want to replace files since I often have multiple versions of a same song. Auto-rename would be best. I don't really care how the files are renamed.
Does anyone know a simple and safe way of doing this?
You could change an a/b/c.mp3 path into a - b - c.mp3 after copying. Here's a solution in Bash:
find srcdir -name '*.mp3' -printf '%P\n' |
while read i; do
j="${i//\// - }"
cp -v "srcdir/$i" "dstdir/$j"
done
And in a shell without ${//} substitution:
find srcdir -name '*.mp3' -printf '%P\n' |
sed -e 'p;s:/: - :g' |
while read i; do
read j
cp -v "srcdir/$i" "dstdir/$j"
done
For a different scheme, GNU's cp and mv can make numbered backups instead of overwriting -- see -b/--backup[=CONTROL] in the man pages.
find srcdir -name '*.mp3' -exec cp -v --backup=numbered {} dstdir/ \;
Bash-like pseudocode:
for i in $(find . -name "*.mp3"); do
    new_name=$(basename "$i")
    x=0
    # append a counter to the name until it does not already exist in the target dir
    while [ -e "move_to_dir/$new_name" ]; do
        x=$((x+1))
        new_name="$(basename "$i").$x"
    done
    mv "$i" "move_to_dir/$new_name"
done
#!/bin/bash
NEW_DIR=/tmp/new/
IFS="
"; for a in `find . -type f `
do
echo "$a"
new_name="`basename $a`"
while test -e "$NEW_DIR/$new_name"
do
new_name="${new_name}_"
done
cp "$a" "$NEW_DIR/$new_name"
done
I'd tend to do this in a simple script rather than try to fit it in a single command line.
For instance, in python, it would be relatively trivial to do a walk() through the directory, copying each mp3 file found to a different directory with an automatically incremented number.
If you want to get fancier, you could have a dictionary of existing file names, and simply append a number to the duplicates. (the index of the dictionary being the file name, and the value being the number of files found so far, which would become your suffix)
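Although the above describes a Python script, the same bookkeeping translates to shell with an associative array; a rough sketch, assuming bash 4+, no newlines in file names, and placeholder srcdir/dstdir directories:
declare -A seen   # how many times each basename has been used so far
while IFS= read -r f; do
    name=$(basename "$f")
    n=${seen[$name]:-0}
    seen[$name]=$((n + 1))
    if [ "$n" -eq 0 ]; then
        cp -v "$f" "dstdir/$name"
    else
        # duplicate: append a counter before the extension
        cp -v "$f" "dstdir/${name%.mp3}_$n.mp3"
    fi
done < <(find srcdir -type f -name '*.mp3')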
find /path/to/mp3s -name '*.mp3' -exec mv \{\} /path/to/target/dir \;
At the risk of many downvotes, a Perl script could be written in a short time to accomplish this.
Pseudocode:
while (-e $filename) {
    $filename .= "1";   # keep appending until the name is unused
}
In Python (to actually move the files, set debug = False):
import os, re
from_dir="/from/dir"
to_dir = "/target/dir"
re_ext = "\.mp3"
debug = True
for d, subdirs, names in os.walk(from_dir):
    # keep only the files whose names match the extension
    names = [fn for fn in names if re.match(".*(%s)$" % re_ext, fn, re.I)]
    for fn in names:
        from_fn = os.path.join(d, fn)
        target_fn = os.path.join(to_dir, fn)
        file_exists = os.path.exists(target_fn)
        if not debug:
            if not file_exists:
                os.rename(from_fn, target_fn)
            else:
                print "DO NOT MOVE - FILE EXISTS ", from_fn
        else:
            print "MOVE ", from_fn, " TO ", target_fn
Since you don't care how the duplicate files are named, utilize the 'backup' option on move:
find /path/to/mp3s -name '*.mp3' -exec mv --backup=numbered {} /path/to/target/dir \;
Will get you:
song.mp3
song.mp3.~1~
song.mp3.~2~