Recursively rename image collection with subfolders

I'm trying to rename files in a huge folder of images that contains lots of subfolders, with images inside them.
Something like this:
ImageCollection/
    January/
        Movies/
            123123.jpg
            asd.jpg
        Landscapes/
            qweqas.jpg
    February/
        Movies/
            ABC.jpg
            QWY.jpg
        Landscapes/
            t.jpg
And I want to run the script and rename them in ascending order while keeping them in their corresponding folders, like this:
ImageCollection/
    January/
        Movies/
            0.jpg
            1.jpg
        Landscapes/
            2.jpg
    February/
        Movies/
            3.jpg
            4.jpg
        Landscapes/
            5.jpg
Until now I have the following:
#!/usr/bin/env bash
x=0
for i in path/to/dir/*/*.jpg; do
    new=$(printf path/to/dir/%d ${x})
    mv ${i} ${new}
    let x=x+1
done
But my problem is that I can't keep the files in their corresponding subfolders; instead, everything is moved to the path/to/dir root folder.

A pure Bash solution (except for the mv, of course):
#!/bin/bash
shopt -s nullglob
### Optional: if you also want the .JPG (uppercase) files
# shopt -s nocaseglob
i=1
for file in ImageCollection/*/*.jpg; do
    dirname=${file%/*}
    newfile=$dirname/$i.jpg
    echo mv "$file" "$newfile" && ((++i))
done
This will not perform the renaming, only show what's going to happen. Remove the echo if you're happy with the result you see.
You can use the -n option to mv too, so as not to overwrite existing files. (I would definitely use it in this case!) If -n is not available, you may use:
[[ ! -e $newfile ]] && mv "$file" "$newfile" && ((++i))
This is 100% safe regarding filenames (or dirnames) containing spaces or other funny symbols.
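If the images can sit deeper than one subfolder level (the example tree has two), a globstar variant of the same idea covers any depth. A minimal sketch, assuming bash 4+ for globstar:
#!/bin/bash
shopt -s nullglob globstar
i=1
for file in ImageCollection/**/*.jpg; do
    dirname=${file%/*}
    newfile=$dirname/$i.jpg
    # dry run, as above: drop the echo once the output looks right
    [[ ! -e $newfile ]] && echo mv "$file" "$newfile" && ((++i))
done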

#!/bin/bash
x=0
for f in `find path_to_main_dir_or_top_folder | grep "\.jpg$"`; do
    mv $f $(dirname $f)/$x.jpg && ((x++))
done
$f will hold the full path of every *.jpg file.
The dirname command gives you the full path (excluding the filename) for a given $f file.
$x.jpg does the trick: the $x value increments on each iteration of the loop.
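Note that the backtick-find loop above word-splits its output, so it breaks on filenames containing spaces. A sketch of a safer variant using NUL-delimited output (assuming GNU find's -print0):
#!/bin/bash
x=0
# read NUL-delimited paths so spaces and newlines in names survive
while IFS= read -r -d '' f; do
    mv "$f" "$(dirname "$f")/$x.jpg" && ((x++))
done < <(find path_to_main_dir_or_top_folder -type f -name '*.jpg' -print0)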

Related

Check if files exist in 3 different directories and move them from one to another

I'm quite new to creating shell scripts.
I'm developing a shell script that will back up my files once a day only.
I need to check which *.war files are in three different folders (input folder, production folder, backup folder).
If the same files exist in all three directories, don't perform the backup.
If they don't, the script must move the files in folder 2 to folder 3.
This is what I've done so far.
===============================
TODAY=$(date +%d-%m-%Y)
INPUT=/home/bruno.ogasawara/entrada/
BACKUP=/home/bruno.ogasawara/backup/
PROD=/home/bruno.ogasawara/producao/
DIR1=$(ls $INPUT)
DIR2=$(ls $PROD)
DIR3=$(ls $BACKUP$TODAY)
for i in $DIR1; do
    for j in $DIR2; do
        for k in $DIR3; do
            if [ $i == $j ] && [ $j == $k ]; then
                exit 1
            else
                mv -f $PROD$j $BACKUP$TODAY
            fi
        done
    done
done
mv -f $INPUT*.war $PROD
===============================
The verification is not working. The only thing working is the mv -f $INPUT*.war $PROD at the end.
Where am I missing something or doing something wrong?
Thanks in advance people.
What I understand is that you want to sync those three folders.
In that case you should not modify the file names, since the file names are what we use to compare; otherwise you would need md5 or sha checksums. And the Linux filesystem already keeps timestamps, so you don't have to attach the date to the filename.
In your code you used ls to list the files, but parsing the output of ls is fragile: the names are split on whitespace, so the result is not safe to iterate over with a bash for loop.
A more robust command is:
find $DIR -maxdepth 1 -type f -exec basename {} \;
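For instance, to collect the basenames into an array without parsing ls, a minimal sketch reusing the question's $INPUT variable (assumes find's -print0):
files=()
# NUL-delimited output is safe even for names containing whitespace
while IFS= read -r -d '' f; do
    files+=( "${f##*/}" )    # strip the directory part
done < <(find "$INPUT" -maxdepth 1 -type f -name '*.war' -print0)
(( ${#files[@]} )) && printf '%s\n' "${files[@]}"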
If you simply want to sync the *.war files across all three folders, you can use this:
#!/bin/bash
DIR1=/home/bruno.ogasawara/entrada/
DIR2=/home/bruno.ogasawara/backup/
DIR3=/home/bruno.ogasawara/producao/
cp -n "$DIR1"/*.war "$DIR2"
cp -n "$DIR1"/*.war "$DIR3"
cp -n "$DIR2"/*.war "$DIR1"
cp -n "$DIR2"/*.war "$DIR3"
cp -n "$DIR3"/*.war "$DIR1"
cp -n "$DIR3"/*.war "$DIR2"
-n: checks whether the file already exists; it will not overwrite an existing file.

Tar compress files when some can be missing

I am writing a bash script that pulls files from another server to the current directory. The issue is that I get a lot of files and I only need ~3 of them; however all 3 might not be there.
For example, make server all:
server call --> file1.txt file2.txt file3.xls file4.json .... (etc)
Then compress files with tar:
tar zcf needed_files.tgz file4.json file23.doc *.txt
But file4.json was not there, so I would expect tar to compress file23.doc and all .txt files but the script fails with:
tar: file4.json: Cannot stat: No such file or directory
I have tried other combinations of tar commands like czvf but no luck.
With GNU tar, the archive should still be created with the existing files despite the "No such file or directory" errors; tar just exits with a nonzero status.
Anyway, you could also use nullglob in combination with the extglob @() to get only the existing files:
shopt -s extglob nullglob
files=( "fileA"@() "fileB"@() *.txt )
(( ${#files[@]} )) && tar zcf needed_files.tgz -- "${files[@]}"
Try an extended glob.
shopt -s extglob # set extended globbing on
if echo file[1234].+(txt|xls|json) | grep -vq '\['
then tar cvzf needed_files.tgz file[1234].+(txt|xls|json)
else echo No matching files for extglob 'file[1234].+(txt|xls|json)'
fi
If matching files exist, it will list them.
If not, it will literally echo back the pattern.
grepping out the pattern metacharacters tells you whether there are any files in the set. If they do exist, use the same glob to provide the files to tar, and it will receive exactly the set of matching files. If they don't, the condition test lets you skip it.
Of course, it breaks if you make files with [ in the names, etc...
Or, you could do it in a loop....
for f in file[1234].+(txt|xls|json)
do  if [[ -e "$f" ]]
    then [[ -e needed_files.tar ]] && c=r || c=c
         tar ${c}vf needed_files.tar "$f"
    fi
done
Not perfect, but might suit your tastes better.
Neither is a great solution, but one of them ought to get you rolling.
tar zcf needed_files.tgz $(ls -d file4.json file23.doc *.txt 2>/dev/null)
Note that this prints only the existing files:
ls -d file4.json file23.doc *.txt 2>/dev/null
You can also use tar's --ignore-failed-read option, but it will ignore other read errors as well.
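Another option along the same lines is to filter out the missing names in bash before calling tar. A minimal sketch, using the question's example filenames:
#!/bin/bash
shopt -s nullglob            # an unmatched *.txt expands to nothing
existing=()
for f in file4.json file23.doc *.txt; do
    [[ -e $f ]] && existing+=( "$f" )
done
# only create the archive if at least one file survived the filter
(( ${#existing[@]} )) && tar zcf needed_files.tgz -- "${existing[@]}"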

How to move files from subfolders to their parent directory (unix, terminal)

I have a folder structure like this:
A big parent folder named Photos. This folder contains 900+ subfolders named a_000, a_001, a_002, etc.
Each of those subfolders contains more subfolders, named dir_001, dir_002, etc., and each of those contains lots of pictures (with unique names).
I want to move all the pictures contained in the subdirectories of a_xxx up into a_xxx itself (where xxx could be 001, 002, etc.).
After looking in similar questions around, this is the closest solution I came up with:
for file in *; do
    if [ -d $file ]; then
        cd $file; mv * ./; cd ..;
    fi
done
Another solution I got is doing a bash script:
#!/bin/bash
dir1="/path/to/photos/"
subs=`ls $dir1`
for i in $subs; do
    mv $dir1/$i/*/* $dir1/$i/
done
Still, I'm missing something; can you help?
(Then it would be nice to discard the empty dir_yyy folders, but that's not much of a problem at the moment.)
You could try the following bash script :
#!/bin/bash
# needed in case we have empty folders
shopt -s nullglob
# we must write the full path here (no ~ character)
target="/path/to/photos"
# we use a glob to list the folders; parsing the output of ls is baaaaaaaddd !!!!
# for every folder in our photo folder ...
for dir in "$target"/*/
do
    # we list the subdirectories ...
    for sub in "$dir"/*/
    do
        # and we move the content of the subdirectories to the parent
        mv "$sub"/* "$dir"
        # if you want to remove the subdirectories once the move is done, uncomment the next line
        #rm -r "$sub"
    done
done
Here is why you don't parse ls in bash
Make sure the directory where the files exist is correct (and complete) in the following script and try it:
#!/bin/bash
BigParentDir=Photos
for subdir in "$BigParentDir"/*/; do     # select the a_001, a_002 subdirs
    for ssdir in "$subdir"/*/; do        # select dir_001, … sub-subdirs
        for f in "$ssdir"/*; do          # select the files to move
            if [[ -f $f ]]; then         # if they indeed are files
                echo \
                mv "$ssdir"/* "$subdir"/ # move the files
            fi
        done
    done
done
No file will be moved, just printed. If you are sure the script does what you want, comment out the echo \ line and run it "for real".
You can try this
#!/bin/bash
dir1="/path/to/photos/"
subs=`ls $dir1`
cp /dev/null /tmp/newscript.sh
for i in $subs; do
    find $dir1/$i -type f -exec echo mv \'\{\}\' $dir1/$i \; >> /tmp/newscript.sh
done
Then open /tmp/newscript.sh with an editor or less and see whether it looks like what you are trying to do.
If it does, execute it with sh -x /tmp/newscript.sh.
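If GNU find is available, the whole job (including pruning the emptied dir_yyy folders) can also be done without a loop. A hedged sketch, assuming the pictures sit exactly two levels below Photos:
# move every file up into its a_xxx grandparent (-n: never overwrite)
find /path/to/photos -mindepth 3 -maxdepth 3 -type f \
    -execdir mv -n '{}' .. \;
# then delete the now-empty dir_yyy directories
find /path/to/photos -mindepth 2 -maxdepth 2 -type d -empty -delete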

Shell Script to list files in a given directory and if they are files or directories

Currently learning some bash scripting, and I'm having an issue with a question that involves listing all files in a given directory and stating whether each is a file or a directory. The issue I am having is that I only get either my current directory or, if I specify a directory, it will just say that the directory itself is a directory. E.g. /home/user/shell_scripts will return "shell_scripts is a directory" rather than listing the files contained within it.
This is what I have so far:
dir=$dir
for file in $dir; do
    if [[ -d $file ]]; then
        echo "$file is a directory"
    fi
    if [[ -f $file ]]; then
        echo "$file is a regular file"
    fi
done
Your line:
for file in $dir; do
will expand $dir just to a single directory string. What you need to do is expand that to a list of files in the directory. You could do this using the following:
for file in "${dir}/"* ; do
This will expand the "${dir}/"* section into a list of the directory's entries. As Biffen points out, this guarantees the file list won't end up with partial, split file names in file if any of them contain whitespace.
If you want to recurse into the directories in dir then using find might be a better approach. Simply use:
for file in $( find ${dir} ); do
Note that while simple, this will not handle files or directories with spaces in them. Because of this, I would be tempted to drop the loop and generate the output in one go. This might be slightly different than what you want, but is likely to be easier to read and a lot more efficient, especially with large numbers of files. For example, To list all the directories:
find ${dir} -maxdepth 1 -type d
and to list the files:
find ${dir} -maxdepth 1 -type f
If you want to iterate into the directories below, then remove the -maxdepth 1.
This is a good use for globbing:
for file in "$dir/"*
do
[[ -d "$file" ]] && echo "$file is a directory"
[[ -f "$file" ]] && echo "$file is a regular file"
done
This will work even if files in $dir have special characters in their names, such as spaces, asterisks and even newlines.
Also note that variables should be quoted ("$file"). But * must not be quoted. And I removed dir=$dir since it doesn't do anything (except break when $dir contains special characters).
ls -F ~ | \
sed 's#.*/$#/& is a Directory#;t quit;s#.*#/& is a File#;:quit;s/[*/=>#|] / /'
The -F "classify" switch appends a "/" if a file is a directory. The sed code prints the desired message, then removes the suffix.
for file in $(ls $dir)
do
    [ -f $file ] && echo "$file is File"
    [ -d $file ] && echo "$file is Directory"
done
or replace the
$(ls $dir)
with
`ls $dir`
If you want to also list files that start with ., use:
for file in "${dir}/"* "${dir}/".[!.]* "${dir}/"..?* ; do

Comparing relative paths in bash script

I am trying to build a bash script capable of comparing two directories given as arguments $1 and $2, and changing the timestamps of files from the second directory (if they are not different from a given timestamp $3) to match the files with the same name in the first directory. I'm doing okay with that, but I don't see how to access the folders inside the given directories and compare the files inside those folders.
For example, if I have Directory1 and Directory2 given as arguments:
Directory1 contains:
-text.txt
-folder1/secondfile.txt
-folder2/thirdfile.txt
and Directory2 contains:
-text.txt
-folder1/secondfile.txt
-folder3/thirdfile.txt
so in this case I want my script to modify the files text.txt and secondfile.txt, but not thirdfile.txt, because the relative paths are not the same. How would my script access the folders in the directories, and how would it compare relative paths? I have managed to do what I wanted with the files in the directory, but not the folders, and I have no idea how to compare relative paths, even though I searched around and couldn't find it.
So far I've done this script (with help from SO):
#!/bin/bash
cd "$1"
function check {
    for i in *; do
        if [[ -d "$i" ]]; then
            cd "$i"
            check
            cd -
        fi
        if [[ -f "$i" ]]; then
            if [[ $(stat -c %y "$i") == "$3" ]]; then
                : # nothing here yet
                #if [[path check]]; then
                #touch -r "$i" "$2/path/$i"
                #fi
            fi
        fi
    done
}
and I don't know how to do the [[path check]] which should check if both files have the same relative path (relative to the directories given as arguments).
EDIT:
As the answer suggests, is the following code the right way to do it?
#!/bin/bash
cd "$1"
shopt -s globstar
for i in **; do
    if [[ $(stat -c %y "$i") == "$3" ]]; then
        if [[ "$1/$i" == "$2/$i" ]]; then
            touch -r "$i" "$2/$i"
        fi
    fi
done
There was an answer here before, which I wanted to reply to, suggesting using shopt -s globstar and ** instead of *.
The additional question was something along the lines of "Would I be able to compare the relative paths?".
Yes, you would. With shopt -s globstar, ** expands to include the relative path to each file. So it would return text.txt folder1/secondfile.txt folder2/thirdfile.txt.
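A quick way to see the difference, assuming bash 4+ and the example tree from the question:
shopt -s globstar
cd Directory1           # the example directory from the question
printf '%s\n' *         # text.txt folder1 folder2
printf '%s\n' **        # also folder1/secondfile.txt, folder2/thirdfile.txt, ...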
EDIT:
You should not need to cd "$1" either, since the relative paths "$1" and "$2" would no longer resolve from inside dir "$1". Try something along the lines of:
#!/usr/bin/env bash
shopt -s globstar
for i in $(cd "$1"; echo **); do
    if [[ $(stat -c %y "$1/$i") == "$3" ]]; then
        if [[ -f "$1/$i" ]] && [[ -f "$2/$i" ]]; then
            touch -r "$1/$i" "$2/$i"
        fi
    fi
done
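A hypothetical invocation, assuming the script is saved as sync_stamps.sh (my name for it) and $3 uses GNU stat's %y format:
./sync_stamps.sh Directory1 Directory2 '2015-05-10 09:17:51.000000000 +0200'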
