Grep date from an email header and make it the file's creation date - bash

I am on Mac Terminal and want to "grep" a string (which is a UNIX timestamp) out of an email header, convert that into a format the OS can work with and make that the creation date of the file. I want to do that recursively for all mails inside a folder (with multiple possible subfolders).
The structure would probably look something like this:
#!/bin/bash
for i in `ls`
do
# Find the date field (X-Delivery-Time) inside an email header and grep the UNIX timestamp
# convert timestamp to a format the OS can work with
# overwrite the existing creation date with the new one
done
The mail headers look like this:
X-Envelope-From: <some@mail.com>
X-Envelope-To: <my@mail.com>
X-Delivery-Time: 1535436541
...
Some background: Apple Mail uses the date a file was created as the date displayed within Apple Mail. That’s why after moving mails from one server to another all mails now display the same date which makes sorting impossible.
As I am new to Terminal/Bash any help is appreciated. Thanks

On a Mac this should work, but since I have no Mac I cannot test it myself. I assume your email files have the .emlx extension.
For a single directory:
for i in ./*.emlx; do
unixTime=$(grep -m1 '^X-Delivery-Time:' "$i" | grep -Eo '[0-9]+') &&
humanTime=$(date -r "$unixTime" +%Y%m%d%H%M.%S) &&
touch -t "$humanTime" "$i"
done
For a whole directory tree:
fixdate() {
unixTime=$(grep -m1 '^X-Delivery-Time:' "$1" | grep -Eo '[0-9]+') &&
humanTime=$(date -r "$unixTime" +%Y%m%d%H%M.%S) &&
touch -t "$humanTime" "$1"
}
export -f fixdate
find . -name '*.emlx' -exec bash -c 'fixdate "$@"' . {} \;
or, if you have bash 4 or higher installed (macOS still uses 3 by default)
shopt -s globstar
for i in ./**/*.emlx; do
unixTime=$(grep -m1 '^X-Delivery-Time:' "$i" | grep -Eo '[0-9]+') &&
humanTime=$(date -r "$unixTime" +%Y%m%d%H%M.%S) &&
touch -t "$humanTime" "$i"
done

What follows assumes you are using the default macOS utilities (touch, date, ...). As they are quite outdated, some adjustments will be needed if you use more recent versions (e.g. from MacPorts or Homebrew). It also assumes that you are using bash.
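If you are not sure which flavour you have, here is a quick check (hedged; the brew line is only relevant if you use Homebrew):
date --version >/dev/null 2>&1 && echo "GNU date" || echo "BSD date (the stock macOS one)"
# brew install coreutils   # installs the GNU tools as gdate, gtouch, ... alongside the system ones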
If you have sub-folders ls is not the right tool. And anyway, the output of ls is not for computers, it is for humans. So, the first thing to do is find all email files. Guess what? The utility that does this is named find:
$ find . -type f -name '*.emlx'
foo/bar.emlx
baz.emlx
...
searches for true files (-type f) starting from the current directory (.) and which name is anything.emlx (-name '*.emlx'). Adapt to your situation. If all files are email files you can skip the -name ... part.
Next we need to loop over all these files and process each of them. This is a bit more complex than for f in ... for several reasons (large number of files, file names with spaces...) A robust way to do this is to redirect the output of a find command to a while loop:
while IFS= read -r -d '' f; do
<process file "$f">
done < <(find . -type f -name '*.emlx' -print0)
The -print0 option of find is used to separate the file names with a null character instead of the default newline character. The < <(find...) part is a way to redirect the output of find to the input of the while loop. The while IFS= read -r -d '' f; do reads each file name produced by find, stores it in shell variable f, preserving the leading and trailing spaces if any (IFS=), the backslashes (-r) and using the null character as separator (-d '').
Now we must code the processing of each file. Let's first retrieve the delivery time, assuming it is always the second word of the last line starting with X-Delivery-Time::
awk '/^X-Delivery-Time:/ {t = $2} END {print t}' "$f"
does that. If you don't know awk already, it's time to learn a bit of it. It's one of the very useful Swiss knives of text processing (sed is another). But let's improve it a bit such that it returns the first encountered delivery time instead of the last, stops as soon as it has found it, and also checks that the timestamp is a real timestamp (digits):
awk '/^X-Delivery-Time:[[:space:]]+[[:digit:]]+$/ {print $2; exit}' "$f"
The [[:space:]]+ part of the regular expression matches 1 or more spaces, tabs,... and the [[:digit:]]+ matches 1 or more digits. ^ and $ match the beginning and the end of the line, respectively. The result can be assigned to a shell variable:
t="$(awk '/^X-Delivery-Time:[[:space:]]+[[:digit:]]+$/ {print $2; exit}' "$f")"
Note that if there was no match the t variable will store the empty string. We will use this later to skip such files.
Once we have this delivery time, which looks like a UNIX timestamp (seconds since 1970/01/01) in your example, we must use it to change the last modification time of the email file. The command that does this is touch:
$ man touch
...
touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...
...
Unfortunately touch wants a time in the CCYYMMDDhhmm.SS format. No worry, the date utility can be used to convert a UNIX timestamp into any format we like. For instance, with your example timestamp (1535436541):
$ date -r 1535436541 +%Y%m%d%H%M.%S
201808280809.01
We are almost done:
while IFS= read -r -d '' f; do
# uncomment for debugging
# echo "processing $f"
t="$(awk '/^X-Delivery-Time:[[:space:]]+[[:digit:]]+$/ {print $2; exit}' "$f")"
if [ -z "$t" ]; then
echo "no delivery time found in $f"
continue
fi
# uncomment for debugging
# echo touch -t "$(date -r "$t" +%Y%m%d%H%M.%S)" "$f"
touch -t "$(date -r "$t" +%Y%m%d%H%M.%S)" "$f"
done < <(find . -type f -name '*.emlx' -print0)
Note how we test if t is the empty string (if [ -z "$t" ]). If it is, we print a message and jump to the next file (continue). Just put all this in a file with a shebang line and run...
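For example, assuming you saved the script as fix-mail-dates.sh (an illustrative name) in the top-level mail folder:
chmod +x fix-mail-dates.sh
./fix-mail-dates.sh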
If, instead of the X-Delivery-Time field, you must use a Date field with a more complex and variable format (e.g. Date: Mon, 11 Jun 2018 10:36:14 +0200), the best would be to install a decently recent version of touch with the coreutils package of MacPorts or Homebrew. Then:
while IFS= read -r -d '' f; do
t="$(awk '/^Date:/ {print gensub(/^Date:[[:space:]+](.*)$/,"\\1","1"); exit}' "$f")"
if [ -z "$t" ]; then
echo "no delivery time found in $f"
continue
fi
touch -d "$t" "$f"
done < <(find . -type f -name '*.emlx' -print0)
The awk command is slightly more complex. It prints the matching line without the Date: prefix. The following sed command would do the same in a more compact form but would not really be more readable:
t="$(sed -rn 's/^Date:\s*(.*)/\1/p;Ta;q;:a' "$f")"

Related

Identify files year-wise and delete them from a dir in Unix

I need to list the files which were created in a specific year and then delete them; the year should be the input.
I tried with date and it works for me, but I am not able to convert that date to a year for comparison in a loop to get the list of files.
The code below gives me the 05/07 files, but I want to list the files which were created in 2022, 2021, etc.:
for file in /tmp/abc*txt ; do
[ "$(date -I -r "$file")" == "2022-05-07" ] && ls -lstr "$file"
done
If you end up doing ls -l anyway, you might just parse the date information from the output. (However, generally don't use ls in scripts.)
ls -ltr | awk '$8 ~ /^202[01]$/'
date -r is not portable, though if you have it, you could do
for file in /tmp/abc*txt ; do
case $(date -I -r "$file") in
2020-* | 2021-* ) ls -l "$file";;
esac
done
(The -t and -r flags to ls have no meaning when you are listing a single file anyway.)
If you don't, the tool of choice would be stat, but it too has portability issues; the precise options to get the information you want will vary between platforms. On Linux, try
for file in /tmp/abc*txt ; do
case $(LC_ALL=C stat -c %y "$file") in
2020-* | 2021-* ) ls -l "$file";;
esac
done
On BSD (including MacOS) try stat -f %Sm -t %Y "$file" to get just the year.
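For instance, a minimal sketch of the BSD/macOS variant, reusing the /tmp/abc*txt pattern from the question:
for file in /tmp/abc*txt ; do
case $(stat -f %Sm -t %Y "$file") in
2020 | 2021 ) ls -l "$file";;
esac
done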
If you need proper portability, perhaps look for a scripting language with wide support, such as Perl or Python. The stat() system call is the fundamental resource for getting metainformation about a file. The find command also has some features for finding files by age, though its default behavior is to traverse subdirectories, too (you can inhibit that with -maxdepth 1; but then the options to select files by age are again not entirely POSIX portable).
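As an example of the non-portable find features mentioned above, GNU find (and FreeBSD find) accepts -newermt, which lets you express the year range directly; this is a sketch, not POSIX:
find /tmp -maxdepth 1 -type f -name 'abc*txt' -newermt '2022-01-01' ! -newermt '2023-01-01' -print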
To list out files which were last modified in a specific year and then to delete those files, you could use a combination of the find -newer and touch commands:
# given a year as input
year=2022
stampdir=$(mktemp -d)
touch -t ${year}01010000 "$stampdir"/beginning
touch -t $((year+1))01010000 "$stampdir"/end
find /tmp -name 'abc*txt' -type f -newer "$stampdir/beginning" ! -newer "$stampdir/end" -print -delete
rm -r "$stampdir"
First, create a temporary working directory to store the timestamp files; we don't want the find command to accidentally find them. Be careful here; mktemp will probably create a directory in /tmp; this use-case is safe only because we're naming the timestamp files such that they don't match the "abc*txt" pattern from the question.
Next, create bordering timestamp files with the touch command: one set to the very start of the given year, named "beginning", and another set to the start of the following year, named "end".
Then run the find command; here's the breakdown:
start in /tmp (from the question)
files named with the 'abc*txt' pattern (from the question)
only files (not directories, etc -- from the question)
newer than the beginning timestamp file
not newer (i.e. older) than the end timestamp file
if found, print the filename and then delete it
Finally, clean up the temporary working directory that we created.
Try this:
For checking which files are picked up:
echo -e "Give Year :"
read yr
ls -ltr /tmp | grep "^-" |grep -v ":" | grep $yr | awk -F " " '{ print $9;}'
** You can replace { print $9 ;} with { rm $9; } in the above command for deleting the picked files

Automator/AppleScript: Move files with the same prefix to a new folder. The folder name must be the file prefix

I'm a photographer and I have multiple jpg files of clothing in one folder. The file name structure is:
TYPE_FABRIC_COLOR (Example: BU23W02CA_CNU_RED, BU23W02CA_CNU_BLUE, BU23W23MG_LINO_WHITE)
I have to move files of the same TYPE (BU23W02CA) into one folder named as the TYPE.
For example:
MAIN FOLDER>
BU23W02CA_CNU_RED.jpg, BU23W02CA_CNU_BLUE.jpg, BU23W23MG_LINO_WHITE.jpg
Became:
MAIN FOLDER>
BU23W02CA_CNU > BU23W02CA_CNU_RED.jpg, BU23W02CA_CNU_BLUE.jpg
BU23W23MG_LINO > BU23W23MG_LINO_WHITE.jpg
Here are some scripts.
V1
#!/bin/bash
find . -maxdepth 1 -type f -name "*.jpg" -print0 | while IFS= read -r -d '' file
do
# Extract the directory name
dirname=$(echo "$file" | cut -d'_' -f1-2 | sed 's#\./\(.*\)#\1#')
#DEBUG echo "$file --> $dirname"
# Create it if not already existing
if [[ ! -d "$dirname" ]]
then
mkdir "$dirname"
fi
# Move the file into it
mv "$file" "$dirname"
done
it assumes all files that the find lists are of the format you described in your question, i.e. TYPE_FABRIC_COLOR.ext.
dirname is the extraction of the first two words delimited by _ in the file name.
since find lists the files with a ./ prefix, it is removed from the dirname as well (that is what the sed command does).
the find specifies the name of the files to consider as *.jpg. You can change this to something else, if you want to restrict which files are considered in the move.
this version loops through each file, creates a directory with its first two sections (if it does not exist already), and moves the file into it.
if you want to see what the script is doing to each file, you can add option -v to the mv command. I used it to debug.
However, since it loops through each file one by one, this might take time with a large number of files, hence this next version.
V2
#!/bin/bash
while IFS= read -r dirname
do
echo ">$dirname"
# Create it if not already existing
if [[ ! -d "$dirname" ]]
then
mkdir "$dirname"
fi
# Move the file into it
find . -maxdepth 1 -type f -name "${dirname}_*" -exec mv {} "$dirname" \;
done < <(find . -maxdepth 1 -type f -name "*.jpg" -print | sed 's#^\./\(.*\)_\(.*\)_.*\..*$#\1_\2#' | sort | uniq)
this version loops on the directory names instead of on each file.
the last line does the "magic". It finds all files, and extracts the first two words (with sed) right away. Then these words are sorted and "uniqued".
the while loop then creates each directory one by one.
the find inside the while loop moves all files that match the directory being processed into it. Why did I not simply do mv ${dirname}_* ${dirname}? Because the expansion of the * wildcard could result in an argument list that is too long for the mv command. Doing it with find ensures that it will work even with a LARGE number of files.
Suggesting oneliner awk script:
echo "$(ls -1 *.jpg)"| awk '{system("mkdir -p "$1 OFS $2);system("mv "$0" "$1 OFS $2)}' FS=_ OFS=_
Explanation:
echo "$(ls -1 *.jpg)": List all jpg files in current directory one file per line
FS=_ : Set awk field separator to _ $1=type $2=fabric $3=color.jpg
OFS=_ : Set awk output field separator to _
awk script explanation
{ # for each file name from list
system ("mkdir -p "$1 OFS $2); # execute "mkdir -p type_fabric"
system ("mv " $0 " " $1 OFS $2); # execute "mv current-file to type_fabric"
}

How to use bash string formatting to reverse date format?

I have a lot of files that are named as: MM-DD-YYYY.pdf. I want to rename them as YYYY-MM-DD.pdf. I'm sure there is some bash magic to do this. What is it?
For files in the current directory:
for name in ./??-??-????.pdf; do
if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[3]}-${BASH_REMATCH[2]}.pdf"
fi
done
Recursively, in or under the current directory:
find . -type f -name '??-??-????.pdf' -exec bash -c '
for name do
if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[3]}-${BASH_REMATCH[2]}.pdf"
fi
done' bash {} +
Enabling the globstar shell option in bash lets us do the following (will also, like the above solution, handle all files in or below the current directory):
shopt -s globstar
for name in **/??-??-????.pdf; do
if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[3]}-${BASH_REMATCH[2]}.pdf"
fi
done
All three of these solutions use a regular expression to pick out the relevant parts of the filenames, and then rearrange these parts into the new name. The only difference between them is how the list of pathnames is generated.
The code prefixes mv with echo for safety. To actually rename files, remove the echo (but run at least once with echo to see that it does what you want).
A direct approach example from the command line:
$ ls
10-01-2018.pdf 11-01-2018.pdf 12-01-2018.pdf
$ ls [0-9]*-[0-9]*-[0-9]*.pdf|sed -r 'p;s/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3-\1-\2/'|xargs -n2 mv
$ ls
2018-10-01.pdf 2018-11-01.pdf 2018-12-01.pdf
The ls output is piped to sed, then we use the p flag to print the argument without modifications, in other words, the original name of the file, and s to perform and output the conversion.
The ls + sed result is a combined output that consists of a sequence of old_file_name and new_file_name.
Finally we pipe the resulting feed through xargs to get the effective rename of the files.
From xargs man:
-n number Execute command using as many standard input arguments as possible, up to number arguments maximum.
You can use the following command very close to the one of klashxx:
for f in *.pdf; do echo "$f"; mv "$f" "$(echo "$f" | sed 's#\(..\)-\(..\)-\(....\)#\3-\2-\1#')"; done
before:
ls *.pdf
12-01-1998.pdf 12-03-2018.pdf
after:
ls *.pdf
1998-01-12.pdf 2018-03-12.pdf
Also, if you have other pdf files in your folder that do not respect this format, you can select only the files that respect the MM-DD-YYYY.pdf format. To do so, use the following command:
for f in `find . -maxdepth 1 -type f -regextype sed -regex './[0-9]\{2\}-[0-9]\{2\}-[0-9]\{4\}.pdf' | xargs -n1 basename`; do echo "$f"; mv "$f" "$(echo "$f" | sed 's#\(..\)-\(..\)-\(....\)#\3-\2-\1#')"; done
Explanations:
find . -maxdepth 1 -type f -regextype sed -regex './[0-9]\{2\}-[0-9]\{2\}-[0-9]\{4\}.pdf': this find command will look only for files in the current working directory that respect your syntax, and xargs -n1 basename extracts their base names (the ./ at the beginning is removed; folders and other types of files that would have the same name are not taken into account, and other *.pdf files are also ignored).
for each file, you do a move, and the resulting file name is computed using sed with back references to the 3 groups for MM, DD and YYYY.
For these simple filenames, using a more verbose pattern, you can simplify the body of the loop a bit:
twodigit=[[:digit:]][[:digit:]]
fourdigit="$twodigit$twodigit"
for f in $twodigit-$twodigit-$fourdigit.pdf; do
IFS=- read month day year <<< "${f%.pdf}"
mv "$f" "$year-$month-$day.pdf"
done
This is basically @Kusalananda's answer, but without the verbosity of regular-expression matching.

find command with filename coming from bash printf builtin not working

I'm trying to write a script which lists files in a directory and then searches for each of them, one by one, in another directory. For dealing with spaces and special characters like "[" or "]" I'm using $(printf %q "$FILENAME") as input for the find command: find /directory/to/search -type f -name $(printf %q "$FILENAME").
It works like a charm for every filename except in one case: when there are multibyte characters (UTF-8). In that case the output of printf is a $'...'-quoted string, i.e.: $'file name with blank spaces and quoted characters in the form of \NNN\NNN', and that string is not expanded unless the $'' quoting is interpreted, so find searches for a file with a name that includes the quoting: «$'filename'».
Is there an alternative solution in order to be able to pass to find any kind of filename?
My script is as follows (I know some lines can be deleted, like the "RESNAME="):
#!/bin/bash
if [ -d $1 ] && [ -d $2 ]; then
IFSS=$IFS
IFS=$'\n'
FILES=$(find $1 -type f )
for FILE in $FILES; do
BASEFILE=$(printf '%q' "$(basename "$FILE")")
RES=$(find $2 -type f -name "$BASEFILE" -print )
if [ ${#RES} -gt 1 ]; then
RESNAME=$(printf '%q' "$(basename "$RES")")
else
RESNAME=
fi
if [ "$RESNAME" != "$BASEFILE" ]; then
echo "FILE NOT FOUND: $FILE"
fi
done
else
echo "Directories do not exist"
fi
IFS=$IFSS
As an answer suggested, I've used associative arrays, but with no luck; maybe I'm not using the arrays correctly, but echoing them (${array[@]}) returns nothing. This is the script I've written:
#!/bin/bash
if [ -d "$1" ] && [ -d "$2" ]; then
declare -A files
find "$2" -type f -print0 | while read -r -d $'\0' FILE;
do
BN2="$(basename "$FILE")"
files["$BN2"]="$BN2"
done
echo "${files[#]}"
find "$1" -type f -print0 | while read -r -d $'\0' FILE;
do
BN1="$(basename "$FILE")"
if [ "${files["$BN1"]}" != "$BN1" ]; then
echo "File not found: "$BN1""
fi
done
fi
Don't use for loops. First, it is slower. Your find has to complete before the rest of your program can run. Second, it is possible to overload the command line. The entire for command must fit in the command line buffer.
Most importantly of all, for sucks at handling funky file names. You're running conniptions trying to get around this. However:
find $1 -type f -print0 | while read -r -d $'\0' FILE
will work much better. It handles file names -- even file names that contain \n characters. The -print0 tells find to separate file names with the NUL character. The while read -r -d $'\0' FILE will read each file name (separated by the NUL character) into $FILE.
If you put quotes around the file name in the find command, you don't have to worry about special characters in the file names.
Your script is running find once for each file found. If you have 100 files in your first directory, you're running find 100 times.
Do you know about associative (hash) arrays in BASH? You are probably better off using associative arrays. Run find on the first directory, and store those files names in an associative array.
Then, run find (again using the find | while read syntax) for your second directory. For each file you find in the second directory, see if you have a matching entry in your associative array. If you do, you know that file is in both arrays.
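A minimal bash sketch of that associative-array idea (dir1 and dir2 stand in for your two directories):
declare -A seen
# remember every file name found in dir1
while IFS= read -r -d '' f; do
seen["${f##*/}"]=1
done < <(find dir1 -type f -print0)
# check each file found in dir2 against the table
while IFS= read -r -d '' f; do
[[ -n ${seen["${f##*/}"]} ]] && echo "${f##*/} is in both directories"
done < <(find dir2 -type f -print0)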
Addendum
I've been looking at the find command. It appears there's no real way to prevent it from using pattern matching except through a lot of work (like you were doing with printf). I've tried using the -regex matching and using \Q and \E to remove the special meaning of pattern characters. I haven't been successful.
There comes a time that you need something a bit more powerful and flexible than shell to implement your script, and I believe this is the time.
Perl, Python, and Ruby are three fairly ubiquitous scripting languages found on almost all Unix systems and are available on other non-POSIX platforms (cough! ...Windows!... cough!).
Below is a Perl script that takes two directories, and searches them for matching files. It uses the find command once and uses associative arrays (called hashes in Perl). I key the hash to the name of my file. In the value portion of the hash, I store an array of the directories where I found this file.
I only need to run the find command once per directory. Once that is done, I can print out all the entries in the hash that contain more than one directory.
I know it's not shell, but this is one of the cases where you can spend a lot more time trying to figure out how to get shell to do what you want than it's worth.
#! /usr/bin/env perl
use strict;
use warnings;
use feature qw(say);
use File::Find;
use constant DIRECTORIES => qw( dir1 dir2 );
my %files;
#
# Perl version of the find command. You give it a list of
# directories and a subroutine for filtering what you find.
# I am basically rejecting all non-file entries, then pushing
# them into my %files hash as an array.
#
find (
sub {
return unless -f;
$files{$_} = [] if not exists $files{$_};
push @{ $files{$_} }, $File::Find::dir;
}, DIRECTORIES
);
#
# All files are found and in %files hash. I can then go
# through all the entries in my hash, and look for ones
# with more than one directory in the array reference.
# IF there is more than one, the file is located in multiple
# directories, and I print them.
#
for my $file ( sort keys %files ) {
if ( @{ $files{$file} } > 1 ) {
say "File: $file: " . join ", ", @{ $files{$file} };
}
}
Try something like this:
find "$DIR1" -printf "%f\0" | xargs -0 -i find "$DIR2" -name \{\}
How about this one-liner?
find dir1 -type f -exec bash -c 'read < <(find dir2 -name "${1##*/}" -type f)' _ {} \; -printf "File %f is in dir2\n" -o -printf "File %f is not in dir2\n"
Absolutely 100% safe regarding files with funny symbols, newlines and spaces in their name.
How does it work?
find (the main one) will scan through directory dir1 and for each file (-type f) will execute
read < <(find dir2 -name "${1##*/}" -type f)
with argument the name of the current file given by the main find. This argument is at position $1. The ${1##*/} removes everything before the last / so that if $1 is path/to/found/file the find statement is:
find dir2 -name "file" -type f
This outputs something if file is found, otherwise has no output. That's what is read by the read bash command. read's exit status is true if it was able to read something, and false if there wasn't anything read (i.e., in case nothing is found). This exit status becomes bash's exit status which becomes -exec's status. If true, the next -printf statement is executed, and if false, the -o -printf part will be executed.
If your dirs are given in variables $dir1 and $dir2 do this, so as to be safe regarding spaces and funny symbols that could occur in $dir2:
find "$dir1" -type f -exec bash -c 'read < <(find "$0" -name "${1##*/}" -type f)' "$dir2" {} \; -printf "File %f is in $dir2\n" -o -printf "File %f is not in $dir2\n"
Regarding efficiency: this is of course not an efficient method at all! the inner find will be executed as many times as there are found files in dir1. This is terrible, especially if the directory tree under dir2 is deep and has many branches (you can rely a little bit on caching, but there are limits!).
Regarding usability: you have fine-grained control on how both find's work and on the output, and it's very easy to add many more tests.
So, hey, tell me how to compare files from two directories? Well, if you agree on losing a little bit of control, this will be the shortest and most efficient answer:
diff dir1 dir2
Try it, you'll be amazed!
Since you are only using find for its recursive directory following, it will be easier to simply use the globstar option in bash. (You're using associative arrays, so your bash is new enough).
#!/bin/bash
shopt -s globstar
declare -A files
if [[ -d $1 && -d $2 ]]; then
for f in "$2"/**/*; do
[[ -f "$f" ]] || continue
BN2=$(basename "$f")
files["$BN2"]=$BN2
done
echo "${files[#]}"
for f in "$1"/**/*; do
[[ -f "$f" ]] || continue
BN1=$(basename "$f")
if [[ ${files[$BN1]} != $BN1 ]]; then
echo "File not found: $BN1"
fi
done
fi
** will match zero or more directories, so $1/**/* will match all the files and directories in $1, all the files and directories in those directories, and so forth all the way down the tree.
If you want to use associative arrays, here's one possibility that will work well with files with all sorts of funny symbols in their names (this script has too much to just show the point, but it is usable as is – just remove the parts you don't want and adapt to your needs):
#!/bin/bash
die() {
printf "%s\n" "$#"
exit 1
}
[[ -n $1 ]] || die "Must give two arguments (none found)"
[[ -n $2 ]] || die "Must give two arguments (only one given)"
dir1=$1
dir2=$2
[[ -d $dir1 ]] || die "$dir1 is not a directory"
[[ -d $dir2 ]] || die "$dir2 is not a directory"
declare -A dir1files
declare -A dir2files
while IFS=$'\0' read -r -d '' file; do
dir1files[${file##*/}]=1
done < <(find "$dir1" -type f -print0)
while IFS=$'\0' read -r -d '' file; do
dir2files[${file##*/}]=1
done < <(find "$dir2" -type f -print0)
# Which files in dir1 are in dir2?
for i in "${!dir1files[#]}"; do
if [[ -n ${dir2files[$i]} ]]; then
printf "File %s is both in %s and in %s\n" "$i" "$dir1" "$dir2"
# Remove it from the dir2 hash
unset dir2files["$i"]
else
printf "File %s is in %s but not in %s\n" "$i" "$dir1" "$dir2"
fi
done
# Which files in dir2 are not in dir1?
# Since I unset them from dir2files hash table, the only keys remaining
# correspond to files in dir2 but not in dir1
if [[ -n "${!dir2files[#]}" ]]; then
printf "File %s is in %s but not in %s\n" "$dir2" "$dir1" "${!dir2files[#]}"
fi
Remark. The identification of files is only based on their filenames, not their contents.

How can I escape white space in a bash loop list?

I have a bash shell script that loops through all child directories (but not files) of a certain directory. The problem is that some of the directory names contain spaces.
Here are the contents of my test directory:
$ ls -F test
Baltimore/ Cherry Hill/ Edison/ New York City/ Philadelphia/ cities.txt
And the code that loops through the directories:
for f in `find test/* -type d`; do
echo $f
done
Here's the output:
test/Baltimore
test/Cherry
Hill
test/Edison
test/New
York
City
test/Philadelphia
Cherry Hill and New York City are treated as 2 or 3 separate entries.
I tried quoting the filenames, like so:
for f in `find test/* -type d | sed -e 's/^/\"/' | sed -e 's/$/\"/'`; do
echo $f
done
but to no avail.
There's got to be a simple way to do this.
The answers below are great. But to make this more complicated - I don't always want to use the directories listed in my test directory. Sometimes I want to pass in the directory names as command-line parameters instead.
I took Charles' suggestion of setting the IFS and came up with the following:
dirlist="${#}"
(
[[ -z "$dirlist" ]] && dirlist=`find test -mindepth 1 -type d` && IFS=$'\n'
for d in $dirlist; do
echo $d
done
)
and this works just fine unless there are spaces in the command line arguments (even if those arguments are quoted). For example, calling the script like this: test.sh "Cherry Hill" "New York City" produces the following output:
Cherry
Hill
New
York
City
First, don't do it that way. The best approach is to use find -exec properly:
# this is safe
find test -type d -exec echo '{}' +
The other safe approach is to use a NUL-terminated list, though this requires that your find support -print0:
# this is safe
while IFS= read -r -d '' n; do
printf '%q\n' "$n"
done < <(find test -mindepth 1 -type d -print0)
You can also populate an array from find, and pass that array later:
# this is safe
declare -a myarray
while IFS= read -r -d '' n; do
myarray+=( "$n" )
done < <(find test -mindepth 1 -type d -print0)
printf '%q\n' "${myarray[#]}" # printf is an example; use it however you want
If your find doesn't support -print0, your result is then unsafe -- the below will not behave as desired if files exist containing newlines in their names (which, yes, is legal):
# this is unsafe
while IFS= read -r n; do
printf '%q\n' "$n"
done < <(find test -mindepth 1 -type d)
If one isn't going to use one of the above, a third approach (less efficient in terms of both time and memory usage, as it reads the entire output of the subprocess before doing word-splitting) is to use an IFS variable which doesn't contain the space character. Turn off globbing (set -f) to prevent strings containing glob characters such as [], * or ? from being expanded:
# this is unsafe (but less unsafe than it would be without the following precautions)
(
IFS=$'\n' # split only on newlines
set -f # disable globbing
for n in $(find test -mindepth 1 -type d); do
printf '%q\n' "$n"
done
)
Finally, for the command-line parameter case, you should be using arrays if your shell supports them (i.e. it's ksh, bash or zsh):
# this is safe
for d in "$#"; do
printf '%s\n' "$d"
done
will maintain separation. Note that the quoting (and the use of $@ rather than $*) is important. Arrays can be populated in other ways as well, such as glob expressions:
# this is safe
entries=( test/* )
for d in "${entries[#]}"; do
printf '%s\n' "$d"
done
find . -type d | while read file; do echo $file; done
However, this doesn't work if the file name contains newlines. The above is the only solution I know of when you actually want to have the directory name in a variable. If you just want to execute some command, use xargs.
find . -type d -print0 | xargs -0 echo 'The directory is: '
Here is a simple solution which handles tabs and/or whitespaces in the filename. If you have to deal with other strange characters in the filename like newlines, pick another answer.
The test directory
ls -F test
Baltimore/ Cherry Hill/ Edison/ New York City/ Philadelphia/ cities.txt
The code to go into the directories
find test -type d | while read f ; do
echo "$f"
done
The filename must be quoted ("$f") if used as argument. Without quotes, the spaces act as argument separator and multiple arguments are given to the invoked command.
And the output:
test/Baltimore
test/Cherry Hill
test/Edison
test/New York City
test/Philadelphia
This is exceedingly tricky in standard Unix, and most solutions run foul of newlines or some other character. However, if you are using the GNU tool set, then you can exploit the find option -print0 and use xargs with the corresponding option -0 (minus-zero). There are two characters that cannot appear in a simple filename; those are slash and NUL '\0'. Obviously, slash appears in pathnames, so the GNU solution of using a NUL '\0' to mark the end of the name is ingenious and fool-proof.
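A short illustration of that GNU pairing, using the test directory from the question:
find test -mindepth 1 -type d -print0 | xargs -0 printf 'Directory: %s\n'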
You could change IFS (the internal field separator) temporarily:
OLD_IFS=$IFS # Stores Default IFS
IFS=$'\n' # Set it to line break
for f in `find test/* -type d`; do
echo $f
done
IFS=$OLD_IFS
Why not just put
IFS=$'\n'
in front of the for command? This changes the field separator from <Space><Tab><Newline> to just <Newline>.
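Applied to the loop from the question it looks like this (a sketch; bash needs the ANSI-C form $'\n', a plain '\n' would be taken literally, and this still breaks on names containing newlines or glob characters):
IFS=$'\n'
for f in $(find test/* -type d); do
echo "$f"
done
unset IFS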
find . -print0|while read -d $'\0' file; do echo "$file"; done
I use
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for f in $( find "$1" -type d ! -path "$1" )
do
echo $f
done
IFS=$SAVEIFS
Wouldn't that be enough?
Idea taken from http://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
Don't store lists as strings; store them as arrays to avoid all this delimiter confusion. Here's an example script that'll either operate on all subdirectories of test, or the list supplied on its command line:
#!/bin/bash
if [ $# -eq 0 ]; then
# if no args supplies, build a list of subdirs of test/
dirlist=() # start with empty list
for f in test/*; do # for each item in test/ ...
if [ -d "$f" ]; then # if it's a subdir...
dirlist=("${dirlist[#]}" "$f") # add it to the list
fi
done
else
# if args were supplied, copy the list of args into dirlist
dirlist=("$#")
fi
# now loop through dirlist, operating on each one
for dir in "${dirlist[#]}"; do
printf "Directory: %s\n" "$dir"
done
Now let's try this out on a test directory with a curve or two thrown in:
$ ls -F test
Baltimore/
Cherry Hill/
Edison/
New York City/
Philadelphia/
this is a dirname with quotes, lfs, escapes: "\''?'?\e\n\d/
this is a file, not a directory
$ ./test.sh
Directory: test/Baltimore
Directory: test/Cherry Hill
Directory: test/Edison
Directory: test/New York City
Directory: test/Philadelphia
Directory: test/this is a dirname with quotes, lfs, escapes: "\''
'
\e\n\d
$ ./test.sh "Cherry Hill" "New York City"
Directory: Cherry Hill
Directory: New York City
PS: if it is only about spaces in the input, then some double quotes worked smoothly for me...
read artist;
find "/mnt/2tb_USB_hard_disc/p_music/$artist" -type f -name *.mp3 -exec mpg123 '{}' \;
To add to what Jonathan said: use the -print0 option for find in conjunction with xargs as follows:
find test/* -type d -print0 | xargs -0 command
That will execute the command command with the proper arguments; directories with spaces in them will be properly quoted (i.e. they'll be passed in as one argument).
#!/bin/bash
dirtys=()
for folder in *
do
if [ -d "$folder" ]; then
dirtys=("${dirtys[#]}" "$folder")
fi
done
for dir in "${dirtys[#]}"
do
for file in "$dir"/\*.mov # <== *.mov
do
#dir_e=`echo "$dir" | sed 's/[[:space:]]/\\\ /g'` -- This line will replace each space into '\ '
out=`echo "$file" | sed 's/\(.*\)\/\(.*\)/\2/'` # These two line code can be written in one line using multiple sed commands.
out=`echo "$out" | sed 's/[[:space:]]/_/g'`
#echo "ffmpeg -i $out_e -sameq -vcodec msmpeg4v2 -acodec pcm_u8 $dir_e/${out/%mov/avi}"
`ffmpeg -i "$file" -sameq -vcodec msmpeg4v2 -acodec pcm_u8 "$dir"/${out/%mov/avi}`
done
done
The above code will convert .mov files to .avi. The .mov files are in different folders and
the folder names have white spaces too. My above script will convert the .mov files to .avi files in the same folder itself. I don't know whether it helps you people.
Case:
[sony@localhost shell_tutorial]$ ls
Chapter 01 - Introduction Chapter 02 - Your First Shell Script
[sony@localhost shell_tutorial]$ cd Chapter\ 01\ -\ Introduction/
[sony@localhost Chapter 01 - Introduction]$ ls
0101 - About this Course.mov 0102 - Course Structure.mov
[sony@localhost Chapter 01 - Introduction]$ ./above_script
... successfully executed.
[sony@localhost Chapter 01 - Introduction]$ ls
0101_-_About_this_Course.avi 0102_-_Course_Structure.avi
0101 - About this Course.mov 0102 - Course Structure.mov
[sony@localhost Chapter 01 - Introduction]$ CHEERS!
Cheers!
I had to deal with whitespace in pathnames, too. What I finally did was use recursion and for item in /path/*:
function recursedir {
local item
for item in "${1%/}"/*
do
if [ -d "$item" ]
then
recursedir "$item"
else
command
fi
done
}
Convert the file list into a Bash array. This uses Matt McClure's approach for returning an array from a Bash function:
http://notes-matthewlmcclure.blogspot.com/2009/12/return-array-from-bash-function-v-2.html
The result is a way to convert any multi-line input to a Bash array.
#!/bin/bash
# This is the command where we want to convert the output to an array.
# Output is: fileSize fileNameIncludingPath
multiLineCommand="find . -mindepth 1 -printf '%s %p\\n'"
# This eval converts the multi-line output of multiLineCommand to a
# Bash array. To convert stdin, remove: < <(eval "$multiLineCommand" )
eval "declare -a myArray=`( arr=(); while read -r line; do arr[${#arr[#]}]="$line"; done; declare -p arr | sed -e 's/^declare -a arr=//' ) < <(eval "$multiLineCommand" )`"
for f in "${myArray[#]}"
do
echo "Element: $f"
done
This approach appears to work even when bad characters are present, and is a general way to convert any input to a Bash array. The disadvantage is if the input is long you could exceed Bash's command line size limits, or use up large amounts of memory.
Approaches where the loop that eventually works on the list also has the list piped to it have the disadvantage that reading stdin is not easy (for example to ask the user for input), and that the loop runs in a new process, so you may be wondering why variables you set inside the loop are not available after the loop finishes.
I also dislike setting IFS, it can mess up other code.
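A small illustration of that subshell point, for any directory with a few files in it:
count=0
find . -type f | while read -r f; do
count=$((count+1))
done
echo "$count"   # prints 0: the while loop ran in a subshell, so its count was lost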
Well, I see too many complicated answers. I don't want to pass around the output of the find utility or to write a loop, because find has an -exec option for this.
My problem was that I wanted to move all files with dbf extension to the current folder and some of them contained white space.
I tackled it so:
find . -name \*.dbf -print0 -exec mv '{}' . ';'
Looks much simpler to me.
Just found out there are some similarities between my question and yours. Apparently, if you want to pass arguments into commands
test.sh "Cherry Hill" "New York City"
to print them out in order
for SOME_ARG in "$#"
do
echo "$SOME_ARG";
done;
notice the $@ is surrounded by double quotes; some notes here
I needed the same concept to sequentially compress several directories or files from a certain folder. I solved it using awk to parse the list from ls and to avoid the problem of blank space in the names.
source="/xxx/xxx"
dest="/yyy/yyy"
n_max=`ls . | wc -l`
echo "Loop over items..."
i=1
while [ $i -le $n_max ];do
item=`ls . | awk 'NR=='$i'' `
echo "File selected for compression: $item"
tar -cvzf $dest/"$item".tar.gz "$item"
i=$(( i + 1 ))
done
echo "Done!!!"
what do you think?
find Downloads -type f | while read file; do printf "%q\n" "$file"; done
For me this works, and it is pretty much "clean":
for f in "$(find ./test -type d)" ; do
echo "$f"
done
Just had a simple variant problem... Convert files of type .flv to .mp3 (yawn).
for file in read `find . *.flv`; do ffmpeg -i ${file} -acodec copy ${file}.mp3;done
recursively find all the Macintosh user flash files and turn them into audio (copy, no transcode) ... it's like the while above, noting that read instead of just 'for file in ' will escape.
