I haven't been able to find an answer that suits my needs, and I apologize if someone is able to find it easily.
I have a script that moves files into folders based on their names. It worked perfectly until I realized the files were missing their extension (another script was responsible for naming the files based on an email subject line). Once I fixed that problem, this script started making a separate folder for each file. Is there any way I can make it drop everything from the first (.) onward when it builds the folder name?
Here is the script
#!/bin/bash
#folder script
#Benjamin D. Schran
MAIN_DIR=/PGHWH1/Photos
cd $MAIN_DIR
find . -maxdepth 1 -type f > SCRIPT_LOG1
find . -name '* *' | while read fname
do
    new_fname=`echo $fname | tr " " "_"`
    if [ -e $new_fname ]
    then
        echo "File $new_fname already exists. Not replacing $fname"
    else
        echo "Creating new file $new_fname to replace $fname"
        mv "$fname" $new_fname
    fi
done
find . -maxdepth 1 -type f | while read file;
do
    f=$(basename "$file")
    f1=${f%.*}
    if [ -d "$f1" ];
    then
        mv "$f" "$f1"
    else
        mkdir "$f1"
        chmod 777 "$f1"
        mv "$f" "$f1"
    fi
done
SCRIPTLOG=Script_log.$(date +%Y-%m-%d-%H-%M)
find . -type f > SCRIPT_LOG2
cd /PGHWH1/bin
sh scriptlog.sh > $SCRIPTLOG.html
mv $SCRIPTLOG.html /PGHWH1/log
rm $MAIN_DIR/SCRIPT_LOG1 $MAIN_DIR/SCRIPT_LOG2
What I need it to do is to take files named like
Filename-date.%.jpg
and make
Foldername-date
then move the files of
Filename-date.1.jpg
Filename-date.2.jpg
Filename-date.3.jpg
to the appropriate folder
Foldername-date
but the current output is
Foldername-date.1
Foldername-date.2
Foldername-date.3
Any help at all would be appreciated
The following lines do the job in my bash:
#first create a tmp file with unique directory names
ls *.jpg | awk -F'.' '{print $1}' | uniq > dirs
#second create the directories
mkdir -p `cat dirs`
#third move the files
for i in `cat dirs`; do mv $i*.jpg $i/; done
#(optionally) remove the tmp file
rm dirs
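For completeness, the original script in the question needs only a small change to produce the same folder names; a minimal sketch, assuming bash parameter expansion (%% strips the longest matching suffix, so everything from the first dot onward is dropped):
f=$(basename "$file")
f1=${f%%.*}    # "Filename-date.1.jpg" -> "Filename-date" (instead of ${f%.*}, which only removes ".jpg")
The rest of the loop (mkdir/chmod/mv) can stay exactly as it is.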
I'm a photographer and I have multiple jpg files of clothing in one folder. The file name structure is:
TYPE_FABRIC_COLOR (Example: BU23W02CA_CNU_RED, BU23W02CA_CNU_BLUE, BU23W23MG_LINO_WHITE)
I have to move files of the same TYPE (BU23W02CA) into one folder named after the TYPE.
For example:
MAIN FOLDER>
BU23W02CA_CNU_RED.jpg, BU23W02CA_CNU_BLUE.jpg, BU23W23MG_LINO_WHITE.jpg
Became:
MAIN FOLDER>
BU23W02CA_CNU > BU23W02CA_CNU_RED.jpg, BU23W02CA_CNU_BLUE.jpg
BU23W23MG_LINO > BU23W23MG_LINO_WHITE.jpg
Here are some scripts.
V1
#!/bin/bash
find . -maxdepth 1 -type f -name "*.jpg" -print0 | while IFS= read -r -d '' file
do
    # Extract the directory name
    dirname=$(echo "$file" | cut -d'_' -f1-2 | sed 's#\./\(.*\)#\1#')
    #DEBUG echo "$file --> $dirname"
    # Create it if not already existing
    if [[ ! -d "$dirname" ]]
    then
        mkdir "$dirname"
    fi
    # Move the file into it
    mv "$file" "$dirname"
done
It assumes all files that the find lists are of the format you described in your question, i.e. TYPE_FABRIC_COLOR.ext.
dirname is the extraction of the first two words delimited by _ in the file name.
Since find lists the files with a ./ prefix, it is removed from the dirname as well (that is what the sed command does).
The find specifies the name of the files to consider as *.jpg. You can change this to something else if you want to restrict which files are considered in the move.
This version loops through each file, creates a directory from its first two sections (if it does not exist already), and moves the file into it.
If you want to see what the script is doing to each file, you can add option -v to the mv command. I used it to debug.
However, since it loops through each file one by one, this might take time with a large number of files, hence this next version.
V2
#!/bin/bash
while IFS= read -r dirname
do
    echo ">$dirname"
    # Create it if not already existing
    if [[ ! -d "$dirname" ]]
    then
        mkdir "$dirname"
    fi
    # Move the file into it
    find . -maxdepth 1 -type f -name "${dirname}_*" -exec mv {} "$dirname" \;
done < <(find . -maxdepth 1 -type f -name "*.jpg" -print | sed 's#^\./\(.*\)_\(.*\)_.*\..*$#\1_\2#' | sort | uniq)
This version loops over the directory names instead of over each file.
The last line does the "magic". It finds all files and extracts the first two words (with sed) right away. Then these words are sorted and "uniqued".
The while loop then creates each directory one by one.
The find inside the while loop moves all files that match the directory being processed into it. Why did I not simply do mv ${dirname}_* ${dirname}? Because the expansion of the * wildcard could result in an argument list that is too long for the mv command. Doing it with find ensures that it will work even on a LARGE number of files.
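If GNU mv is available, the moves can also be batched through find itself; a minimal sketch, not part of the answer above (mv -t names the target directory first, and -exec ... {} + passes many files per invocation):
find . -maxdepth 1 -type f -name "${dirname}_*" -exec mv -t "$dirname" {} +
This keeps the argument-list concern handled by find while avoiding one mv process per file.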
Suggesting a one-liner awk script:
echo "$(ls -1 *.jpg)"| awk '{system("mkdir -p "$1 OFS $2);system("mv "$0" "$1 OFS $2)}' FS=_ OFS=_
Explanation:
echo "$(ls -1 *.jpg)": List all jpg files in current directory one file per line
FS=_ : Set awk field separator to _ $1=type $2=fabric $3=color.jpg
OFS=_ : Set awk output field separator to _
awk script explanation
{ # for each file name from list
system ("mkdir -p "$1 OFS $2); # execute "mkdir -p type_fabric"
system ("mv " $0 " " $1 OFS $2); # execute "mv current-file to type_fabric"
}
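A rough plain-bash equivalent of the one-liner, as a hedged sketch; it assumes every name follows TYPE_FABRIC_COLOR.ext and, unlike the system() calls above, it tolerates spaces in file names:
for f in *.jpg; do
    d="${f%_*}"        # strip the final _COLOR.jpg: BU23W02CA_CNU_RED.jpg -> BU23W02CA_CNU
    mkdir -p "$d"
    mv "$f" "$d"/
done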
I have a lot of broken symbolic links that point to files that no longer exist in that location. I can find all of them with this one-liner:
find . -type l | while read f; do if [ ! -e "$f" ]; then ls -l "$f"; fi; done
Which gives me something like this:
./17_50.paired.right.fastq.gz -> ../../../../../../../.git/annex/objects/24/P0/SHA256E-s4214107462--c36267de6b6d438d1ea9c0f262be5a873aaffdf8845a42377e159db6a71b404d.gz/SHA256E-s4214107462--c36267de6b6d438d1ea9c0f262be5a873aaffdf8845a42377e159db6a71b404d.gz
Now, I do have a backup of the linked files elsewhere, and I would like to use the result of the first oneliner to find the file SHA256E-s4214107462--c36267de6b6d438d1ea9c0f262be5a873aaffdf8845a42377e159db6a71b404d.gz in the backup and replace the original link to something like this
./17_50.paired.right.fastq.gz -> /path/to/backup/SHA256E-s4214107462--c36267de6b6d438d1ea9c0f262be5a873aaffdf8845a42377e159db6a71b404d.gz
How can I do that?
Thank you.
It's not the prettiest, but it works.
find . -type l | while read f
do
    if [ ! -e "$f" ] #if link is broken
    then
        item=$(ls -l "$f")
        filepath=$(echo "$item" | sed -n 's/^.* \.\/\s*\(\S*\).*$/\1/p') #extract filepath by looking for " ./"
        oldlinkpath=$(echo "$item" | sed -n 's/^.*>\s*\(\S*\).*$/\1/p') #extract linkpath by looking for ">"
        oldlinkfilename=$(basename "$oldlinkpath")
        newlinkpath=$(find /mnt/usbbackup/ppgdata/data -type f -name "$oldlinkfilename") #find file in backup
        rm "$filepath" #remove symlink
        echo "linking $filepath to $newlinkpath"
        ln -s "$newlinkpath" "$filepath" #create new symlink to file in backup
    fi
done
find all the broken links
extract path of broken link
find filename of linked file
find linked file in backup by its filename
change link to location in backup
Once all is done I run 'symlinks -c folder' to change all the absolute paths to relative ones.
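A variant of the same approach that reads the link target with readlink instead of parsing ls -l output; a hedged sketch, reusing the backup path from the script above:
find . -type l | while read -r f
do
    if [ ! -e "$f" ]                        # link is broken
    then
        oldtarget=$(readlink "$f")          # raw target stored in the symlink
        name=$(basename "$oldtarget")
        newtarget=$(find /mnt/usbbackup/ppgdata/data -type f -name "$name")
        if [ -n "$newtarget" ]
        then
            echo "relinking $f -> $newtarget"
            ln -sf "$newtarget" "$f"        # -f overwrites the broken link in place
        fi
    fi
done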
I have a folder "test" in it there is 20 other folder with different names like A,B ....(actually they are name of people not A, B...) I want to write a shell script that go to each folder like test/A and rename all the .c files with A[1,2..] and copy them to "test" folder. I started like this but I have no idea how to complete it!
#!/bin/sh
for file in `find test/* -name '*.c'`; do mv $file $*; done
Can you help me please?
This code should get you close. I tried to document exactly what I was doing.
It does rely on BASH and the GNU version of find to handle spaces in file names. I tested it on a directory full of .DOC files, so you'll want to change the extension as well.
#!/bin/bash
V=1
SRC="."
DEST="/tmp"
#The last path we saw -- make it garbage, but not blank. (Or it will break the '[' test command.)
LPATH="/////"
#Let us find the files we want
find $SRC -iname "*.doc" -print0 | while read -d $'\0' i
do
    echo "We found the file name... $i";
    #Now, we rip off just the file name.
    FNAME=$(basename "$i" .doc)
    echo "And the basename is $FNAME";
    #Now we get the last chunk of the directory
    ZPATH=$(dirname "$i" | awk -F'/' '{ print $NF}' )
    echo "And the last chunk of the path is... $ZPATH"
    # If we are down a new path, then reset our counter.
    if [ "$LPATH" != "$ZPATH" ]; then
        V=1
    fi;
    LPATH=$ZPATH
    # Eat the error message
    mkdir $DEST/$ZPATH 2> /dev/null
    echo cp \"$i\" \"$DEST/${ZPATH}/${FNAME}${V}\"
    cp "$i" "$DEST/${ZPATH}/${FNAME}${V}"
    # Bump the counter so the next file in the same directory gets the next number
    V=$((V + 1))
done
#!/bin/bash
## Find folders under test. This assumes you are already where test exists OR give PATH before "test"
folders="$(find test -maxdepth 1 -type d)"
## Look into each folder in $folders and find folder-named [0-9]*.c files, then move them to the test folder.
for folder in $folders;
do
    ## Find folder-named .c files.
    leaf_folder="${folder##*/}"
    folder_named_c_files="$(find $folder -type f -name "*.c" | grep "${leaf_folder}[0-9]")"
    ## Move these folder_named_c_files to the test folder. basename will hold just the file name.
    ## You didn't mention what name to rename the files to, so tweak the mv command accordingly.
    for file in $folder_named_c_files; do basename="${file##*/}"; mv "$file" "test/$basename"; done
done
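If the renaming really should follow the question (A1.c, A2.c, ... per person folder, copied up into test), here is a hedged sketch with a per-folder counter, assuming the person folders sit directly under test/:
#!/bin/bash
for dir in test/*/; do
    name=$(basename "$dir")                # e.g. "A"
    n=1
    for file in "$dir"*.c; do
        [ -e "$file" ] || continue         # folder contains no .c files
        cp "$file" "test/${name}${n}.c"    # A1.c, A2.c, ...
        n=$((n + 1))
    done
done
Use mv instead of cp if the files should actually be moved out of the person folders.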
I have 4 files with the following names in different directories and subdirectories
tag0.txt, tag1.txt, tag2.txt and tag3.txt
and wish to rename them as tag0a.txt, tag1a.txt ,tag2a.txt and tag3a.txt in all directories and subdirectories.
Could anyone help me out using a shell script?
Cheers
$ shopt -s globstar
$ rename -n 's/\.txt$/a\.txt/' **/*.txt
foo/bar/tag2.txt renamed as foo/bar/tag2a.txt
foo/tag1.txt renamed as foo/tag1a.txt
tag0.txt renamed as tag0a.txt
Remove -n to actually rename once you have checked the result; it is the "dry run" option.
This can of course be done with find:
find . -name 'tag?.txt' -type f -exec bash -c 'mv "$1" ${1%.*}a.${1##*.}' -- {} \;
Here is a posix shell script (checked with dash):
visitDir() {
    local file
    for file in "$1"/*; do
        if [ -d "$file" ]; then
            visitDir "$file";
        else
            if [ -f "$file" ] && echo "$file"|grep -q '^.*/tag[0-3]\.txt$'; then
                newfile=$(echo "$file" | sed 's/\.txt$/a.txt/')
                echo mv "$file" "$newfile"
            fi
        fi
    done
}
visitDir .
If you can use bashisms, just replace the inner IF with:
if [[ -f "$file" && "$file" =~ ^.*/tag[0-3]\.txt$ ]]; then
echo mv "$file" "${file/.txt/a.txt}"
fi
First check that the result is what you expected, then possibly remove the "echo" in front of the mv command.
Using the Perl script version of rename that may be on your system:
find . -name 'tag?.txt' -exec rename 's/\.txt$/a$&/' {} \;
Using the binary executable version of rename:
find . -name 'tag?.txt' -exec rename .txt a.txt {} \;
which changes the first occurrence of ".txt". Since the file names are constrained by the -name argument, that won't be a problem.
Is this good enough?
jcomeau@intrepid:/tmp$ find . -name tag?.txt
./a/tag0.txt
./b/tagb.txt
./c/tag1.txt
./c/d/tag3.txt
jcomeau@intrepid:/tmp$ for txtfile in $(find . -name 'tag?.txt'); do \
mv $txtfile ${txtfile%%.txt}a.txt; done
jcomeau@intrepid:/tmp$ find . -name tag*.txt
./a/tag0a.txt
./b/tagba.txt
./c/d/tag3a.txt
./c/tag1a.txt
Don't actually put the backslash into the command, and if you do, expect a '>' prompt on the next line. I didn't put that into the output to avoid confusion, but I didn't want anybody to have to scroll either.
I have the situation, where a template directory - containing files and links (!) - needs to be copied recursively to a destination directory, preserving all attributes. The template directory contains any number of placeholders (__NOTATION__), that need to be renamed to certain values.
For example template looks like this:
./template/__PLACEHOLDER__/name/__PLACEHOLDER__/prog/prefix___FILENAME___blah.txt
Destination becomes like this:
./destination/project1/name/project1/prog/prefix_customer_blah.txt
What I tried so far is this:
# first create dest directory structure
while read line; do
    dest="$(echo "$line" | sed -e 's#__PLACEHOLDER__#project1#g' -e 's#__FILENAME__#customer#g' -e 's#template#destination#')"
    if ! [ -d "$dest" ]; then
        mkdir -p "$dest"
    fi
done < <(find ./template -type d)
# now copy files
while read line; do
    dest="$(echo "$line" | sed -e 's#__PLACEHOLDER__#project1#g' -e 's#__FILENAME__#customer#g' -e 's#template#destination#')"
    cp -a "$line" "$dest"
done < <(find ./template -type f)
However, I realized that if I want to take care about permissions and links, this is going to be endless and very complicated. Is there a better way to replace __PLACEHOLDER__ with "value", maybe using cp, find or rsync?
I suspect that your script will already do what you want, if only you replace
find ./template -type f
with
find ./template ! -type d
Otherwise, the obvious solution is to use cp -a to make an "archive" copy of the template, complete with all links, permissions, etc, and then rename the placeholders in the copy.
cp -a ./template ./destination
while read path; do
    dir=`dirname "$path"`
    file=`basename "$path"`
    mv -v "$path" "$dir/${file//__PLACEHOLDER__/project1}"
done < <(find ./destination -depth -name '*__PLACEHOLDER__*')
Note that you'll want to use -depth or else renaming files inside renamed directories will break.
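Since the template also contains __FILENAME__ placeholders, the same rename loop can substitute both tokens; a hedged sketch using the example values project1 and customer from the question:
while read path; do
    dir=$(dirname "$path")
    file=$(basename "$path")
    new="${file//__PLACEHOLDER__/project1}"
    new="${new//__FILENAME__/customer}"
    mv -v "$path" "$dir/$new"
done < <(find ./destination -depth \( -name '*__PLACEHOLDER__*' -o -name '*__FILENAME__*' \))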
If it's very important to you that the directory tree is created with the names already changed (i.e. you must never see placeholders in the destination), then I'd recommend simply using an intermediate location.
First copy with rsync, preserving all the properties and links etc.
Then change the placeholder strings in the destination filenames:
#!/bin/bash
TEMPL="$PWD/template" # somewhere else
DEST="$PWD/dest" # wherever it is
mkdir "$DEST"
(cd "$TEMPL"; rsync -Hra . "$DEST") #
MyRen=$(mktemp)
trap "rm -f $MyRen" 0 1 2 3 13 15
cat >$MyRen <<'EOF'
#!/bin/bash
fn="$1"
newfn="$(echo "$fn" | sed -e 's#__PLACEHOLDER__#project1#g' -e s#__FILENAME__#customer#g' -e 's#template#destination#')"
test "$fn" != "$newfn" && mv "$fn" "$newfn"
EOF
chmod +x $MyRen
find "$DEST" -depth -execdir $MyRen {} \;