rename files in a folder using find - shell

I have n files in different folders, like abc.mp3, acc.mp3, bbb.mp3, and I want to rename them 01-abc.mp3, 02-acc.mp3, 03-bbb.mp3... I tried this:
#!/bin/bash
IFS='
'
COUNT=1
for file in ./uff/*;
do mv "$file" "${COUNT}-$file" let COUNT++ done
but I keep getting errors like a syntax error near 'do', and sometimes 'not found'... Can someone provide a single-line solution to this using "find" from the terminal? I'm looking for a solution using find only, due to certain constraints... Thanks in advance.

I'd probably use:
#!/bin/bash
cd ./uff || exit 1
COUNT=1
for file in *.mp3
do
    mv "$file" "$(printf "%.2d-%s" "$COUNT" "$file")"
    ((COUNT++))
done
This avoids a number of issues and also zero-pads to two digits for the first 9 files (the next 90 get two-digit numbers anyway, and after that you get three-digit numbers, and so on).

You can try this:
#!/bin/bash
COUNT=1
for file in ./uff/*
do
    path=$(dirname "$file")
    filename=$(basename "$file")
    if [ "$COUNT" -lt 10 ]; then
        mv "$file" "$path/0${COUNT}-$filename"
    else
        mv "$file" "$path/${COUNT}-$filename"
    fi
    COUNT=$((COUNT+1))
done
E.g.:
user@host:/tmp/test$ ls uff/
abc.mp3  acc.mp3  bbb.mp3
user@host:/tmp/test$ ./test.sh
user@host:/tmp/test$ ls uff/
01-abc.mp3  02-acc.mp3  03-bbb.mp3

Ok, here's the version without loops:
paste -d'\n' <(printf "%s\n" *) <(printf "%s\n" * | nl -w1 -s-) | xargs -d'\n' -n2 mv -v
You can also use find if you want:
paste -d'\n' <(find -mindepth 1 -maxdepth 1 -printf "%f\n") <(find -mindepth 1 -maxdepth 1 -printf "%f\n" | nl -w1 -s-) | xargs -d'\n' -n2 mv -v
Replace mv with echo mv for the "dry run":
paste -d'\n' <(printf "%s\n" *) <(printf "%s\n" * | nl -w1 -s-) | xargs -d'\n' -n2 echo mv -v

Here's a solution.
i=1
for f in $(find ./uff -mindepth 1 -maxdepth 1 -type f | sort)
do
    n=$i
    [ $i -lt 10 ] && n="0$i"
    echo "$f" "$n-$(basename "$f")"
    ((i++))
done
And here it is as a one-liner (but in real life if you ever tried anything remotely like what's below in a coding or ops interview you'd not only fail to get the job, you'd probably give the interviewer PTSD. They'd wake up in cold sweats thinking about how terrible your solution was).
i=1; for f in $(find ./uff -mindepth 1 -maxdepth 1 -type f | sort); do n=$i; [ $i -lt 10 ] && n="0$i"; echo "$f" "$n-$(basename "$f")" ; ((i++)); done
Alternatively, you could just cd ./uff if you wanted to rename them in the same directory, and then use find . (along with the other find arguments) to pick everything up. I'm assuming you only want files moved, not directories, and that you don't want to recursively rename files / directories. A sketch of that variant is below.
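A hedged sketch of that cd variant (like the loop above, it only echoes the intended mv commands, so you can check them before removing the echo):
cd ./uff || exit 1
i=1
for f in $(find . -mindepth 1 -maxdepth 1 -type f | sort); do
    n=$i
    [ $i -lt 10 ] && n="0$i"
    echo mv "$f" "$n-$(basename "$f")"
    ((i++))
done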

Related

bash iterate over a directory sorted by file size

As a webmaster, I generate a lot of junk code files. Periodically I have to purge the unneeded files, filtered by extension. Example: "cleaner txt". Easy enough. But I want to sort the files by size before processing them in the "for" loop. How can I do that?
cleaner:
#!/bin/bash
if [ -z "$1" ]; then
    echo "Please supply the filename suffixes to delete."
    exit
fi
filter=$1
for FILE in *."$filter"; do
    clear
    cat "$FILE"
    printf '\n\n'
    rm -i "$FILE"
done
You can use a mix of find (to print file sizes and names), sort (to sort the output of find) and cut (to remove the sizes). In case you have very unusual file names containing any possible character including newlines, it is safer to separate the files by a character that cannot be part of a name: NUL.
#!/bin/bash
if [ -z "$1" ]; then
    echo "Please supply the filename suffixes to delete."
    exit
fi
filter=$1
while IFS= read -r -d '' -u 3 FILE; do
    clear
    cat "$FILE"
    printf '\n\n'
    rm -i "$FILE"
done 3< <(find . -mindepth 1 -maxdepth 1 -type f -name "*.$filter" \
    -printf '%s\t%p\0' | sort -zn | cut -zf 2-)
Note that we must use a different file descriptor than stdin (3 in this example) to pass the file names to the loop. Else, if we use stdin, it will also be used to provide the answers to rm -i.
Inspired by this answer, you could use the find command as follows:
find ./ -type f -name "*.yaml" -printf "%s %p\n" | sort -n
The find command prints the size and the path of each file, so the sort command orders the results from smallest to largest.
In case you want to iterate through, let's say, the 5 biggest files, you can do something like this using the tail command:
for f in $(find ./ -type f -name "*.yaml" -printf "%s %p\n" |
           sort -n |
           tail -n 5 |
           cut -d ' ' -f 2)
do
    echo "### $f"
done
If the file names don't contain newlines or spaces:
while read filesize filename; do
    printf "%-25s has size %10d\n" "$filename" "$filesize"
done < <(du -bs *."$filter" | sort -n)

while read filename; do
    echo "$filename"
done < <(du -bs *."$filter" | sort -n | awk '{$0=$2}1')

Adding file sizes using the bash command "wc"

I ran this command to find each file modified yesterday:
find /eqtynas/ -type f -mtime -1 > /home/writtenToStorage.20171026 &
and then developed this script to go through all the files collected by that command and sum their sizes.
#!/bin/bash
ydate=$(date +%Y%m%d --date="yesterday")
file="/home/writtenToStorage.$ydate"
fileSize=0
for line in $(cat $file)
do
    if [ -f $line ] && [ -s $line ] ; then
        fileSize1=$fileSize
        fileSize=$(wc -c < $line)
        Total=$(( $fileSize + $fileSize1 ))
    fi
done
echo $Total
However, when I stat just one of the files in the list, it comes out to 18942, whereas the total for all the files combined comes out at 34499.
wc -c /eqty/fixed
18942 /eqty/fixed
Is the script OK? Because I ran another check and the total size was 314 GB:
find /eqtynas/ -type f -mtime -1 -print0 | du -ch --files0-from=- --total -s > 24hourUsage.20171026 &
Continuing from my comment, you may prefer something similar to:
sum=0
while read -r sz; do
    sum=$((sum + sz))
done < <(find /eqtynas/ -type f -mtime -1 -exec stat -c %s '{}' \;)
echo "sum: $sum"
There are a number of ways to do this. You can also pipe the result of -exec ls -al '{}' to awk and just sum the 5th field.
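A hedged sketch of that ls/awk variant (assuming GNU find and ls, where the 5th column of ls -l is the file size in bytes):
find /eqtynas/ -type f -mtime -1 -exec ls -l '{}' + |
awk '{ sum += $5 } END { print sum+0 }'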
If you have already written the filenames to /home/writtenToStorage.20171026, then you can simply redirect the file into your while loop and stat each name, e.g.
while read -r fname; do
    sum=$((sum + $(stat -c %s "$fname")))
done < "/home/writtenToStorage.20171026"
Look things over and let me know if you have any questions.
You're not adding to Total, you're just setting it to the sum of the sizes of the last two files.
for line in $(cat $file)
do
    if [ -f $line ] && [ -s $line ] ; then
        fileSize=$(wc -c < $line)
        ((Total += fileSize))
    fi
done

How to locate the directory where the sum of the number of lines of regular files is greatest (in bash)

Hi, I'm new to Unix and bash and I'd like to ask how I can do this:
The specified directories are given as arguments. Locate the directory
where the sum of the number of lines of its regular files is greatest.
Browse all specified directories and their subdirectories. The counts
only include files that are directly in the directory.
I tried something, but it's not working properly:
while [ $# -ne 0 ]; do
    case "$1" in
        -h) show_help ;;
        -*) echo "Error: Wrong arguments" 1>&2; exit 1 ;;
        *) directories=("$@"); break ;;
    esac
    shift
done
IFS='
'
amount=0
for direct in "${directories[@]}"; do
    for subdirect in `find $direct -type d`; do
        temp=`find "$subdirect" -type f -exec cat {} \; | wc -l | tr -s " "`
        if [ $amount -lt $temp ]; then
            amount=$temp
            subdirect2=$subdirect
        fi
    done
    echo Output: "'$subdirect2 $amount'"
done
The problem is, when I use this directory as an argument (just an example):
/home/usr/first, which contains these files:
/home/usr/first/tmp/first.txt (50 lines)
/home/usr/first/tmp/second.txt (30 lines)
/home/usr/first/tmp1/one.txt (20 lines)
it gives me the output /home/usr/first/tmp1 100, and this is wrong; it should be /home/usr/first/tmp 80.
I'd like to scan all directories and all their subdirectories in depth. Also, if multiple directories meet the maximum, it should list them all.
Given your sample files, I'm going to assume you only want to look at the immediate subdirectories, not recurse down several levels:
max=-1
# the trailing slash limits the wildcard to directories only
for dir in */; do
    count=0
    for file in "$dir"/*; do
        [[ -f "$file" ]] && (( count += $(wc -l < "$file") ))
    done
    if (( count > max )); then
        max=$count
        maxdir="$dir"
    fi
done
echo "files in $maxdir have $max lines"
files in tmp/ have 80 lines
In the spirit of Unix (cough), here's an absolutely disgusting chain of pipes that I personally hate, but it was a lot of fun to construct :):
find . -mindepth 1 -maxdepth 1 -type d -exec sh -c 'find "$1" -maxdepth 1 -type f -print0 | wc -l --files0-from=- | tail -1 | { read a _ && echo "$a $1"; }' _ {} \; | sort -nr | head -1
Of course, don't actually use this; use glenn jackman's nice answer instead.
You do get great control from find's filtering options this way, though. Yay. But use glenn's answer!

Need a bash script to move files to subfolders automatically

I have a folder with 320 GB of images, and I want to move the images into 5 subfolders randomly (they just need to be spread across 5 subfolders). But I know nothing about bash scripts. Please could someone help? Thanks!
You could move the files to different directories based on their first letter:
mv [A-Fa-f]* dir1
mv [F-Kf-k]* dir2
mv [^A-Ka-k]* dir3
Here is my take on this. In order to use it, place the script somewhere else (not in your folder) but run it from your folder. If you call your script file rmove.sh, you can place it in, say, ~/scripts/, then cd to your folder and run:
source ~/scripts/rmove.sh
#!/bin/bash
ndirs=$((`find -type d | wc -l` - 1))
for file in *; do
    if [ -f "${file}" ]; then
        rand=`dd if=/dev/random bs=1 count=1 2>/dev/null | hexdump -b | head -n1 | cut -d" " -f2`
        rand=$((rand % ndirs))
        i=0
        for directory in `find -type d`; do
            if [ "${directory}" = . ]; then
                continue
            fi
            if [ $i -eq $rand ]; then
                mv "${file}" "${directory}"
            fi
            i=$((i + 1))
        done
    fi
done
Here's my stab at the problem:
#!/usr/bin/env bash
sdprefix=subdir
dirs=5
# pre-create all possible sub dirs
for n in {1..5} ; do
    mkdir -p "${sdprefix}$n"
done
fcount=$(find . -maxdepth 1 -type f | wc -l)
while IFS= read -r -d $'\0' file ; do
    subdir="${sdprefix}"$(expr \( $RANDOM % $dirs \) + 1)
    mv -f "$file" "$subdir"
done < <(find . -maxdepth 1 -type f -print0)
Works with huge numbers of files
Does not break if a file is not moveable
Creates subdirectories if necessary
Does not break on unusual file names
Relatively cheap
Any scripting language will do, so I'll write it in Python here:
#!/usr/bin/python
import os
import random

new_paths = ['/path1', '/path2', '/path3', '/path4', '/path5']
image_directory = '/path/to/images'

for file_path in os.listdir(image_directory):
    full_path = os.path.abspath(os.path.join(image_directory, file_path))
    random_subdir = random.choice(new_paths)
    new_path = os.path.abspath(os.path.join(random_subdir, file_path))
    os.rename(full_path, new_path)
mv `ls | while read x; do echo "`expr $RANDOM % 1000`:$x"; done \
| sort -n| sed 's/[0-9]*://' | head -1` ./DIRNAME
Run it in your current image directory; this command will select one file at a time and move it to ./DIRNAME. Iterate this command until there are no more files to move.
Note that ` is a backquote, not an ordinary quote character.

Bash script to list files not found

I have been looking for a way to list files that do not exist, from a list of files that are required to exist. The files can exist in more than one location. What I have now:
#!/bin/bash
fileslist="$1"
while read fn
do
    if [ ! -f `find . -type f -name $fn` ];
    then
        echo $fn
    fi
done < $fileslist
If a file does not exist, the find command will not print anything, and the test does not work. Removing the not and creating an if/then/else condition does not resolve the problem.
How can I print the filenames that are not found, from a list of file names?
New script:
#!/bin/bash
fileslist="$1"
foundfiles=~/tmp/tmp`date +%Y%m%d%H%M%S`.txt
touch $foundfiles
while read fn
do
    find . -type f -name $fn | sed 's:./.*/::' >> $foundfiles
done < $fileslist
cat $fileslist $foundfiles | sort | uniq -u
rm $foundfiles
#!/bin/bash
fileslist="$1"
while read fn
do
    FPATH=`find . -type f -name $fn`
    if [ "$FPATH." = "." ]
    then
        echo $fn
    fi
done < $fileslist
You were close!
Here is test.bash:
#!/bin/bash
fn=test.bash
exists=`find . -type f -name $fn`
if [ -n "$exists" ]
then
    echo Found it
fi
It sets $exists to the result of the find; the -n test checks that the result is not empty.
Try replacing the loop body with [[ -z "$(find . -type f -name $fn)" ]] && echo $fn (note that this code is bound to have problems with filenames containing spaces).
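Spelled out as a full script, a minimal sketch of that suggestion (the same caveat about spaces in filenames applies):
#!/bin/bash
fileslist="$1"
while read fn
do
    [[ -z "$(find . -type f -name $fn)" ]] && echo $fn
done < $fileslist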
More efficient bashism:
diff <(sort $fileslist|uniq) <(find . -type f -printf %f\\n|sort|uniq)
I think you can handle diff output.
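For example, to keep only the names that appear in the list but not on disk, you could filter the diff output for the lines marked with <; a rough sketch:
diff <(sort $fileslist | uniq) <(find . -type f -printf '%f\n' | sort | uniq) | sed -n 's/^< //p'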
Give this a try:
find -type f -print0 | grep -Fzxvf - requiredfiles.txt
The -print0 and -z protect against filenames which contain newlines. If your utilities don't have these options and your filenames don't contain newlines, you should be OK.
The repeated find to filter one file at a time is very expensive. If your file list is directly compatible with the output from find, run a single find and remove any matches from your list:
find . -type f |
fgrep -vxf - "$1"
If not, maybe you can massage the output from find in the pipeline before the fgrep so that it matches the format in your file; or, conversely, massage the data in your file into a find-compatible format.
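For instance, if your list holds bare filenames, a hedged sketch using GNU find's -printf to emit bare names so the two formats line up:
find . -type f -printf '%f\n' |
fgrep -vxf - "$1"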
I use this script and it works for me
#!/bin/bash
fileslist="$1"
found="Found:"
notfound="Not found:"
len=`cat $1 | wc -l`
n=0
while read fn
do
    # don't worry about this, I use it to display the file list progress
    n=$((n + 1))
    echo -en "\rLooking $(echo "scale=0; $n * 100 / $len" | bc)% "
    if [ $(find / -name $fn | wc -l) -gt 0 ]
    then
        found=$(printf "$found\n\t$fn")
    else
        notfound=$(printf "$notfound\n\t$fn")
    fi
done < $fileslist
printf "\n$found\n$notfound\n"
The line below counts the number of matching files, and if it's greater than 0 the find was a success. This searches everything on the disk; you could replace / with . to search just the current directory.
$(find / -name $fn | wc -l) -gt 0
Then I simply run it, with the files in the file list separated by newlines:
./search.sh files.list
