Automator/AppleScript: Move files with the same prefix into a new folder named after that prefix - bash

I'm a photographer and I have multiple jpg files of clothing in one folder. The file name structure is:
TYPE_FABRIC_COLOR (Example: BU23W02CA_CNU_RED, BU23W02CA_CNU_BLUE, BU23W23MG_LINO_WHITE)
I have to move files of the same TYPE (BU23W02CA) into one folder named after the TYPE.
For example:
MAIN FOLDER>
BU23W02CA_CNU_RED.jpg, BU23W02CA_CNU_BLUE.jpg, BU23W23MG_LINO_WHITE.jpg
becomes:
MAIN FOLDER>
BU23W02CA_CNU > BU23W02CA_CNU_RED.jpg, BU23W02CA_CNU_BLUE.jpg
BU23W23MG_LINO > BU23W23MG_LINO_WHITE.jpg

Here are some scripts.
V1
#!/bin/bash
find . -maxdepth 1 -type f -name "*.jpg" -print0 | while IFS= read -r -d '' file
do
# Extract the directory name
dirname=$(echo "$file" | cut -d'_' -f1-2 | sed 's#\./\(.*\)#\1#')
#DEBUG echo "$file --> $dirname"
# Create it if not already existing
if [[ ! -d "$dirname" ]]
then
mkdir "$dirname"
fi
# Move the file into it
mv "$file" "$dirname"
done
it assumes all files that the find lists are of the format you described in your question, i.e. TYPE_FABRIC_COLOR.ext.
dirname is the extraction of the first two words delimited by _ in the file name.
since find lists the files with a ./ prefix, it is removed from the dirname as well (that is what the sed command does).
the find specifies the name of the files to consider as *.jpg. You can change this to something else, if you want to restrict which files are considered in the move.
this version loops through each file, creates a directory named after its first two sections (if it does not exist already), and moves the file into it.
if you want to see what the script is doing to each file, you can add option -v to the mv command. I used it to debug.
However, since it loops through each file one by one, this might take time with a large number of files, hence this next version.
V2
#!/bin/bash
while IFS= read -r dirname
do
echo ">$dirname"
# Create it if not already existing
if [[ ! -d "$dirname" ]]
then
mkdir "$dirname"
fi
# Move the file into it
find . -maxdepth 1 -type f -name "${dirname}_*" -exec mv {} "$dirname" \;
done < <(find . -maxdepth 1 -type f -name "*.jpg" -print | sed 's#^\./\(.*\)_\(.*\)_.*\..*$#\1_\2#' | sort | uniq)
this version loops on the directory names instead of on each file.
the last line does the "magic". It finds all files, and extracts the first two words (with sed) right away. Then these words are sorted and "uniqued".
the while loop then creates each directory one by one.
the find inside the while loop moves all files that match the directory being processed into it. Why did I not simply do mv ${dirname}_* ${dirname}? Because the expansion of the * wildcard could result in an argument list that is too long for the mv command. Doing it with find ensures that it will work even on a LARGE number of files.
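As a side note, if your find supports the + terminator for -exec, the moves themselves can be batched, one mv per chunk of the argument list instead of one per file. A sketch of that variant of the inner find (the -t option requires GNU mv):
# Same move, batched: {} + packs as many file names as fit into each mv call
find . -maxdepth 1 -type f -name "${dirname}_*" -exec mv -t "$dirname" {} +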

Suggesting a one-liner awk script:
echo "$(ls -1 *.jpg)"| awk '{system("mkdir -p "$1 OFS $2);system("mv "$0" "$1 OFS $2)}' FS=_ OFS=_
Explanation:
echo "$(ls -1 *.jpg)": List all jpg files in current directory one file per line
FS=_ : Set awk field separator to _ $1=type $2=fabric $3=color.jpg
OFS=_ : Set awk output field separator to _
awk script explanation
{ # for each file name from list
system ("mkdir -p "$1 OFS $2); # execute "mkdir -p type_fabric"
system ("mv " $0 " " $1 OFS $2); # execute "mv current-file to type_fabric"
}
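Worth noting: parsing ls output like this breaks on file names containing spaces. A pure-bash sketch of the same idea, avoiding ls and awk entirely (it assumes the TYPE_FABRIC_COLOR.jpg layout from the question):
shopt -s nullglob              # make the loop a no-op when nothing matches
for f in *.jpg; do
    d=${f%_*}                  # strip the final _COLOR.jpg part, leaving TYPE_FABRIC
    mkdir -p "$d"
    mv "$f" "$d"/
done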

Related

How to use a while read filename; do loop to take filenames, strip from the "(" to the end, and then create a directory with that information?

I have hundreds of movies saved as "Title (year).mkv". They are all in one directory. However, I wish to create a directory using just the "Title" of the file and then mv the file into the newly created directory to clean things up a little bit.
Here is what I have so far:
dest=/storage/Uploads/destination/
find "$dest" -maxdepth 1 -mindepth 1 -type f -printf "%P\n" | sort -n | {
while read filename ; do
echo $filename;
dir=${filename | cut -f 1 -d '('};
echo $dir;
# mkdir $dest$dir;
# rename -n "s/ *$//" *;
done;
}
dest=/storage/Uploads/destination/
is my working directory
find $dest -maxdepth 1 -mindepth 1 -type f -printf "%P\n" | sort -n | {
finds all the files in $dest
while read filename ; do
as long as there's a filename to read, the loop continues
echo $filename
just so I can see what it is
dir=${filename | cut -f 1 -d '('};
dir = the results of command within the {}
echo $dir;
So I can see the name of the upcoming directory
mkdir $dest$dir;
Make the directory
rename -n "s/ *$//" *;
will rename the pesky directories that have a trailing space
And since we have more files to read, the loop starts over until the last one, and
done;
}
When I run it, I get:
./new.txt: line 8: ${$filename | cut -f 1 -d '('}: bad substitution
I have two lines commented so it won't use those until I get the other working. Anyone have a way to do what I'm trying to do? I would prefer a bash script so I can run it again when necessary.
Thanks in advance!
dir=${filename | cut -f 1 -d '('}; is invalid. To run a command and capture its output, use $( ) and echo the text into the pipe. By the way, that cut will leave a trailing space which you probably don't want.
But don't use external programs like cut when there is no need; bash expansion will do it for you, and get rid of the trailing space:
filename="Title (year).mkv"
# remove all the characters on the right after and including <space>(
dir=${filename%% (*}
echo "$dir"
Gives
Title
General syntax is %%pattern to remove the longest match of pattern from the right. The pattern uses glob (filename expansion) syntax, so the pattern here is a space, followed by a literal (, followed by * (zero or more of any character).
% removes the shortest match from the right; ## and # do the same but from the left (longest and shortest match, respectively).
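A quick demonstration of all four operators on the sample name (expected output in the comments):
filename="Title (year).mkv"
echo "${filename%% (*}"   # Title         %%  longest match removed from the right
echo "${filename%.*}"     # Title (year)  %   shortest match removed from the right
echo "${filename#* }"     # (year).mkv    #   shortest match removed from the left
echo "${filename##*.}"    # mkv           ##  longest match removed from the left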

Making a file out of all the files with given a string

Create a file that includes the content of all the files in the current folder that contain a given string (say, in argument 1); the data will be in it one after the other (each file appended to the end). The name of the new file will be the given string.
I thought of the following but it doesn't work:
grep $1 * >> fnames #places all the names of the right files in a file
for x in fnames
do
cat x >> $1 #concat the files from the list
done
rm fnames
On the same note, is there a site that has solved exercises like this or examples?
You can do something like this using process substitution:
shopt -s nullglob
while read -r file; do
cat "$file"
done < <(grep -l "search-pattern" *) > /path/to/newfile
This is assuming your directory only has files and no sub-directories.
You will need to use find with grep if there are sub-directories as well:
find . -maxdepth 1 -type f -exec grep -q "search-pattern" {} \; -print0 |
xargs -0 cat > /path/to/newfile
How about (assuming you aren't worried about files with spaces or newlines or shell globs/etc. in their names since those will not work here correctly):
for O in $(grep -l $1 *)
do
cat "$O" >> $1
done

find command with filename coming from bash printf builtin not working

I'm trying to write a script which lists the files in one directory and then searches for every file, one by one, in another directory. For dealing with spaces and special characters like "[" or "]" I'm using $(printf %q "$FILENAME") as input for the find command: find /directory/to/search -type f -name $(printf %q "$FILENAME").
It works like a charm for every filename except in one case: when there are multibyte (UTF-8) characters. In that case the output of printf is an ANSI-C quoted string, i.e.: $'file name with blank spaces and quoted characters in the form of \NNN\NNN', and that string is not being expanded without the $'' quoting, so find searches for a file with a name that literally includes the quoting: «$'filename'».
Is there an alternative solution so that any kind of filename can be passed to find?
My script is as follows (I know some lines can be deleted, like the "RESNAME="):
#!/bin/bash
if [ -d $1 ] && [ -d $2 ]; then
IFSS=$IFS
IFS=$'\n'
FILES=$(find $1 -type f )
for FILE in $FILES; do
BASEFILE=$(printf '%q' "$(basename "$FILE")")
RES=$(find $2 -type f -name "$BASEFILE" -print )
if [ ${#RES} -gt 1 ]; then
RESNAME=$(printf '%q' "$(basename "$RES")")
else
RESNAME=
fi
if [ "$RESNAME" != "$BASEFILE" ]; then
echo "FILE NOT FOUND: $FILE"
fi
done
else
echo "Directories do not exist"
fi
IFS=$IFSS
As an answer said, I've used associative arrays, but with no luck; maybe I'm not using the arrays correctly, but echoing one (${array[@]}) returns nothing. This is the script I've written:
#!/bin/bash
if [ -d "$1" ] && [ -d "$2" ]; then
declare -A files
find "$2" -type f -print0 | while read -r -d $'\0' FILE;
do
BN2="$(basename "$FILE")"
files["$BN2"]="$BN2"
done
echo "${files[#]}"
find "$1" -type f -print0 | while read -r -d $'\0' FILE;
do
BN1="$(basename "$FILE")"
if [ "${files["$BN1"]}" != "$BN1" ]; then
echo "File not found: "$BN1""
fi
done
fi
Don't use for loops. First, it is slower: your find has to complete before the rest of your program can run. Second, it is possible to overload the command line; the entire for command must fit in the command line buffer.
Most importantly of all, for sucks at handling funky file names. You're going through conniptions trying to get around this. However:
find $1 -type f -print0 | while read -r -d $'\0' FILE
will work much better. It handles file names -- even file names that contain \n characters. The -print0 tells find to separate file names with the NUL character. The while read -r -d $'\0' FILE will read each file name (separated by the NUL character) into $FILE.
If you put quotes around the file name in the find command, you don't have to worry about special characters in the file names.
Your script is running find once for each file found. If you have 100 files in your first directory, you're running find 100 times.
Do you know about associative (hash) arrays in BASH? You are probably better off using associative arrays. Run find on the first directory, and store those file names in an associative array.
Then, run find (again using the find | while read syntax) for your second directory. For each file you find in the second directory, see if you have a matching entry in your associative array. If you do, you know that file is in both directories.
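For illustration, a minimal bash sketch of that strategy (dir1 and dir2 are placeholders). Note the process substitution keeps each while loop in the current shell, so the array survives the loop:
declare -A seen
# Pass 1: remember every file name found under dir1
while IFS= read -r -d '' f; do
    seen["${f##*/}"]=1
done < <(find dir1 -type f -print0)
# Pass 2: look each dir2 file name up in the hash
while IFS= read -r -d '' f; do
    [[ -n ${seen["${f##*/}"]} ]] && echo "${f##*/} is in both directories"
done < <(find dir2 -type f -print0)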
Addendum
I've been looking at the find command. It appears there's no real way to prevent it from using pattern matching except through a lot of work (like you were doing with printf). I've tried using -regex matching and using \Q and \E to remove the special meaning of pattern characters. I haven't been successful.
There comes a time that you need something a bit more powerful and flexible than shell to implement your script, and I believe this is the time.
Perl, Python, and Ruby are three fairly ubiquitous scripting languages found on almost all Unix systems and are available on other non-POSIX platforms (cough! ...Windows!... cough!).
Below is a Perl script that takes two directories, and searches them for matching files. It uses the find command once and uses associative arrays (called hashes in Perl). I key the hash to the name of my file. In the value portion of the hash, I store an array of the directories where I found this file.
I only need to run the find command once per directory. Once that is done, I can print out all the entries in the hash that contain more than one directory.
I know it's not shell, but this is one of the cases where you can spend a lot more time trying to figure out how to get shell to do what you want than it's worth.
#! /usr/bin/env perl
use strict;
use warnings;
use feature qw(say);
use File::Find;
use constant DIRECTORIES => qw( dir1 dir2 );
my %files;
#
# Perl version of the find command. You give it a list of
# directories and a subroutine for filtering what you find.
# I am basically rejecting all non-file entries, then pushing
# them into my %files hash as an array.
#
find (
sub {
return unless -f;
$files{$_} = [] if not exists $files{$_};
push @{ $files{$_} }, $File::Find::dir;
}, DIRECTORIES
);
#
# All files are found and in %files hash. I can then go
# through all the entries in my hash, and look for ones
# with more than one directory in the array reference.
# If there is more than one, the file is located in multiple
# directories, and I print them.
#
for my $file ( sort keys %files ) {
if ( @{ $files{$file} } > 1 ) {
say "File: $file: " . join ", ", @{ $files{$file} };
}
}
Try something like this:
find "$DIR1" -printf "%f\0" | xargs -0 -i find "$DIR2" -name \{\}
How about this one-liner?
find dir1 -type f -exec bash -c 'read < <(find dir2 -name "${1##*/}" -type f)' _ {} \; -printf "File %f is in dir2\n" -o -printf "File %f is not in dir2\n"
Absolutely 100% safe regarding files with funny symbols, newlines and spaces in their name.
How does it work?
find (the main one) will scan through directory dir1 and for each file (-type f) will execute
read < <(find dir2 -name "${1##*/}" -type f)
with the name of the current file (given by the main find) as its argument. This argument is at position $1. The ${1##*/} removes everything up to and including the last / so that if $1 is path/to/found/file the find statement is:
find dir2 -name "file" -type f
This outputs something if file is found, otherwise has no output. That's what is read by the read bash command. read's exit status is true if it was able to read something, and false if there wasn't anything read (i.e., in case nothing is found). This exit status becomes bash's exit status which becomes -exec's status. If true, the next -printf statement is executed, and if false, the -o -printf part will be executed.
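You can see read's exit-status behavior in isolation with a two-line experiment:
read -r < <(echo "match")   && echo "something was read"   # exit status 0
read -r < <(printf '')      || echo "nothing was read"     # nonzero exit status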
If your dirs are given in variables $dir1 and $dir2 do this, so as to be safe regarding spaces and funny symbols that could occur in $dir2:
find "$dir1" -type f -exec bash -c 'read < <(find "$0" -name "${1##*/}" -type f)' "$dir2" {} \; -printf "File %f is in $dir2\n" -o -printf "File %f is not in $dir2\n"
Regarding efficiency: this is of course not an efficient method at all! The inner find will be executed as many times as there are files found in dir1. This is terrible, especially if the directory tree under dir2 is deep and has many branches (you can rely a little bit on caching, but there are limits!).
Regarding usability: you have fine-grained control on how both find's work and on the output, and it's very easy to add many more tests.
So, hey, tell me how to compare files from two directories? Well, if you agree to lose a little bit of control, this will be the shortest and most efficient answer:
diff dir1 dir2
Try it, you'll be amazed!
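For example, with -r to recurse and -q to only name what differs (the output lines shown as comments are illustrative):
diff -rq dir1 dir2
# Only in dir1: report.txt
# Files dir1/notes.txt and dir2/notes.txt differ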
Since you are only using find for its recursive directory following, it will be easier to simply use the globstar option in bash. (You're using associative arrays, so your bash is new enough).
#!/bin/bash
shopt -s globstar
declare -A files
if [[ -d $1 && -d $2 ]]; then
for f in "$2"/**/*; do
[[ -f "$f" ]] || continue
BN2=$(basename "$f")
files["$BN2"]=$BN2
done
echo "${files[#]}"
for f in "$1"/**/*; do
[[ -f "$f" ]] || continue
BN1=$(basename "$f")
if [[ ${files[$BN1]} != $BN1 ]]; then
echo "File not found: $BN1"
fi
done
fi
** will match zero or more directories, so $1/**/* will match all the files and directories in $1, all the files and directories in those directories, and so forth all the way down the tree.
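A tiny demonstration of globstar (bash 4+), with hypothetical paths:
shopt -s globstar
printf '%s\n' dir/**/*        # every file and directory at any depth under dir
printf '%s\n' dir/**/*.txt    # only names ending in .txt, at any depth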
If you want to use associative arrays, here's one possibility that will work well with files with all sorts of funny symbols in their names (this script has too much to just show the point, but it is usable as is – just remove the parts you don't want and adapt to your needs):
#!/bin/bash
die() {
printf "%s\n" "$#"
exit 1
}
[[ -n $1 ]] || die "Must give two arguments (none found)"
[[ -n $2 ]] || die "Must give two arguments (only one given)"
dir1=$1
dir2=$2
[[ -d $dir1 ]] || die "$dir1 is not a directory"
[[ -d $dir2 ]] || die "$dir2 is not a directory"
declare -A dir1files
declare -A dir2files
while IFS=$'\0' read -r -d '' file; do
dir1files[${file##*/}]=1
done < <(find "$dir1" -type f -print0)
while IFS=$'\0' read -r -d '' file; do
dir2files[${file##*/}]=1
done < <(find "$dir2" -type f -print0)
# Which files in dir1 are in dir2?
for i in "${!dir1files[#]}"; do
if [[ -n ${dir2files[$i]} ]]; then
printf "File %s is both in %s and in %s\n" "$i" "$dir1" "$dir2"
# Remove it from the dir2 hash
unset dir2files["$i"]
else
printf "File %s is in %s but not in %s\n" "$i" "$dir1" "$dir2"
fi
done
# Which files in dir2 are not in dir1?
# Since I unset them from dir2files hash table, the only keys remaining
# correspond to files in dir2 but not in dir1
if [[ -n "${!dir2files[#]}" ]]; then
printf "File %s is in %s but not in %s\n" "$dir2" "$dir1" "${!dir2files[#]}"
fi
Remark. The identification of files is only based on their filenames, not their contents.

Folder Creation Subtract file number?

I haven't been able to find an answer that best suits my needs, and I apologize if someone is able to find it easily.
I have a script that works to move files into folders based on their names. It worked perfectly until I realized that the files were missing their extension. Once I fixed this (another script was responsible for the file naming, based on an email subject line), it then started making a folder for each file. Is there any way I can make this script drop everything in the folder name from the first (.) onward?
Here is the script
#!/bin/bash
#folder script
#Benjamin D. Schran
MAIN_DIR=/PGHWH1/Photos
cd $MAIN_DIR
find . -maxdepth 1 -type f > SCRIPT_LOG1
find . -name '* *' | while read fname
do
new_fname=`echo $fname | tr " " "_"`
if [ -e $new_fname ]
then
echo "File $new_fname already exists. Not replacing $fname"
else
echo "Creating new file $new_fname to replace $fname"
mv "$fname" $new_fname
fi
done
find . -maxdepth 1 -type f | while read file;
do
f=$(basename "$file")
f1=${f%.*}
if [ -d "$f1" ];
then
mv "$f" "$f1"
else
mkdir "$f1"
chmod 777 "$f1"
mv "$f" "$f1"
fi
done
SCRIPTLOG=Script_log.$(date +%Y-%m-%d-%H-%M)
find . -type f > SCRIPT_LOG2
cd /PGHWH1/bin
sh scriptlog.sh > $SCRIPTLOG.html
mv $SCRIPTLOG.html /PGHWH1/log
rm $MAIN_DIR/SCRIPT_LOG1 $MAIN_DIR/SCRIPT_LOG2
What I need it to do is to take files named like
Filename-date.%.jpg
and make
Foldername-date
then move the files of
Filename-date.1.jpg
Filename-date.2.jpg
Filename-date.3.jpg
to the appropriate folder
Foldername-date
but the current output is
Foldername-date.1
Foldername-date.2
Foldername-date.3
Any help at all would be appreciated
The following lines do the job in my bash:
#first create a tmp file with unique directory names
ls *.jpg | awk -F'.' '{print $1}' | uniq > dirs
#second create the directories
mkdir -p `cat dirs`
#third move the files
for i in `cat dirs`; do mv $i*.jpg $i/; done
#(optionally) remove the tmp file
rm dirs
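Alternatively, since the goal is to drop everything from the first dot onward, a pure-bash sketch using %%.* (which removes the longest .-suffix) avoids parsing ls output:
shopt -s nullglob
for f in *.jpg; do
    d=${f%%.*}        # Filename-date.1.jpg -> Filename-date
    mkdir -p "$d"
    mv "$f" "$d"/
done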

Shell Programming File Search and Append

I am trying to write a shell program that will search my current directory (say, my folder containing C code), read all files for the keywords "printf" or "fprintf", and append the appropriate include statement to each file if it isn't already there.
I have tried to write the search portion already (for now, all it does is search files and print the list of matching files), but it is not working. Included below is my code. What am I doing wrong?
EDIT: New code.
#!/bin/sh
#processes files ending in .c and appends statements if necessary
#search for files that meet criteria
for file in $( find . -type f )
do
echo $file
if grep -q printf "$file"
then
echo "File $file contains command"
fi
done
To run a command and capture its output (command substitution) you need $( command ). Notice the $ before the parenthesis.
You don't need to store the list of files in a temporary variable, you can directly use
for file in $( find . ) ; do
echo "$file"
done
And with
find . -type f | grep somestring
you are not searching the file content but the file name (in my example, all the files whose name contains "somestring")
To grep the content of the files:
for file in $( find . -type f ) ; do
if grep -q printf "$file" ; then
echo "File $file contains printf"
fi
done
Note that if you match printf it will also match fprintf (as it contains printf)
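If you ever do need to exclude fprintf, grep's -w flag restricts matches to whole words, and the leading f breaks the word boundary:
grep -qw printf "$file"   # matches printf( but not fprintf(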
If you want to search just files ending with .c you can use the -name option
find . -name "*.c" -type f
Use the -type f option to list only files.
In any case check if your grep has the -r option to search recursively
grep -r --include "*.c" printf .
You can do this sort of thing with sed -i, but I find that distasteful. Instead, it seems reasonable to use ed (sed is ed for streams, so it makes sense to use ed when you're not working with a stream).
#!/bin/sh
for i in *.c; do
grep -Fq '#include <stdio.h>' "$i" && continue
grep -Fq printf "$i" && ed -s "$i" << EOF > /dev/null
1
i
#include <stdio.h>
.
w
EOF
done
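For comparison, a sketch of the sed -i route the answer above avoids (assuming GNU sed; BSD/macOS sed needs -i '' and a backslash-newline after the i command):
# GNU sed: insert the include before line 1, editing file.c in place
sed -i '1i #include <stdio.h>' file.c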
