bash: Copy last version of a file from a mask list

There is a set of programs in a source folder, but only the most recent version of each must be copied to the destination USB drive.
From Bash Script - Copy latest version of a file in a directory recursively, it appears my formula would be:
f=$(find . -name AdbeRdr\*.exe | sort -n | tail -1)
So how do I make find work inside a for loop over a set of masks?
set1="AdbeRdr\*.exe jre-\*.exe LibreOffice\*.msi"
for m in $set1
do
echo "m: $m"
f=$(find . -name $m | sort -n | tail -1)
echo "f: $f"
cp $f /media/USB
done
$m outputs the correct values (AdbeRdr*.exe, etc.), but $f is empty and cp copies the whole parent directory. If I specify the mask explicitly without a variable (find . -name AdbeRdr\*.exe | sort -n | tail -1), the last file is output correctly.
Where am I going wrong? And how can I handle spaces if they occur in filenames?
Thanks!

Use an array rather than a string to hold your elements, like this:
set1=( 'AdbeRdr*.exe' 'jre-*.exe' 'LibreOffice*.msi' )
for m in "${set1[@]}"
do
    echo "m: $m"
    f=$(find . -name "$m" | sort -n | tail -1)
    echo "f: $f"
    cp "$f" /media/USB
done
Use double-quotes around your variables to handle spaces in filenames.
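If the filenames could also contain newlines, a NUL-delimited pipeline is safer still. A minimal sketch, assuming GNU find, sort, and tail (for -print0 and the -z/--zero-terminated options):
set1=( 'AdbeRdr*.exe' 'jre-*.exe' 'LibreOffice*.msi' )
for m in "${set1[@]}"
do
    # NUL-terminate entries so embedded newlines cannot split a name;
    # tr strips the trailing NUL that $( ) would otherwise warn about.
    # GNU sort -V (version sort) may order version strings more naturally than -n.
    f=$(find . -name "$m" -print0 | sort -zn | tail -z -n 1 | tr -d '\0')
    [ -n "$f" ] && cp -- "$f" /media/USB
done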

Related

bash iterate over a directory sorted by file size

As a webmaster, I generate a lot of junk files of code. Periodically I have to purge the unneeded files, filtered by extension. Example: "cleaner txt". Easy enough. But I want to sort the files by size before processing them in the for loop. How can I do that?
cleaner:
#!/bin/bash
if [ -z "$1" ]; then
    echo "Please supply the filename suffixes to delete.";
    exit;
fi;
filter=$1;
for FILE in *.$filter; do
    clear;
    cat $FILE; printf '\n\n'; rm -i $FILE;
done
You can use a mix of find (to print file sizes and names), sort (to sort the output of find) and cut (to remove the sizes). In case you have very unusual file names containing any possible character including newlines, it is safer to separate the files by a character that cannot be part of a name: NUL.
#!/bin/bash
if [ -z "$1" ]; then
    echo "Please supply the filename suffixes to delete.";
    exit;
fi;
filter=$1;
while IFS= read -r -d '' -u 3 FILE; do
    clear
    cat "$FILE"
    printf '\n\n'
    rm -i "$FILE"
done 3< <(find . -mindepth 1 -maxdepth 1 -type f -name "*.$filter" \
          -printf '%s\t%p\0' | sort -zn | cut -zf 2-)
Note that we must use a different file descriptor than stdin (3 in this example) to pass the file names to the loop. Otherwise, if we used stdin, it would also be used to provide the answers to rm -i.
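For contrast, a minimal sketch of the problem the extra descriptor avoids: when the loop reads filenames from stdin, rm -i reads its y/n answer from the same stream, so it consumes the next filename instead of waiting for the keyboard.
find . -name "*.$filter" -print0 | while IFS= read -r -d '' FILE; do
    rm -i "$FILE"   # the y/n prompt is answered by the next filename!
done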
Inspired by this answer, you could use the find command as follows:
find ./ -type f -name "*.yaml" -printf "%s %p\n" | sort -n
The find command prints the size and the path of each file, so the sort command orders the results from smallest to largest.
If you want to iterate through, say, the 5 biggest files, you can add the tail command like this:
for f in $(find ./ -type f -name "*.yaml" -printf "%s %p\n" |
           sort -n |
           tail -n 5 |
           cut -d ' ' -f 2)
do
    echo "### $f"
done
If the file names don't contain newlines or spaces:
while read filesize filename; do
    printf "%-25s has size %10d\n" "$filename" "$filesize"
done < <(du -bs *."$filter" | sort -n)
while read filename; do
    echo "$filename"
done < <(du -bs *."$filter" | sort -n | awk '{$0=$2}1')
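If unusual names are a concern here too, a NUL-delimited sketch (this assumes GNU du with the -0/--null option and GNU sort; du separates size and name with a tab):
while IFS= read -r -d '' entry; do
    size=${entry%%$'\t'*}       # text before the first tab
    filename=${entry#*$'\t'}    # text after the first tab
    printf '%-25s has size %10d\n' "$filename" "$size"
done < <(du -b0 ./*."$filter" | sort -zn)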

How to find files and count them (storing the info into a variable)?

I want to have a conditional behavior depending on the number of files found:
found=$(find . -type f -name "$1")
numfiles=$(printf "%s\n" "$found" | wc -l)
if [ $numfiles -eq 0 ]; then
    echo "cannot access $1: No such file" > /dev/stderr; exit 2;
elif [ $numfiles -gt 1 ]; then
    echo "cannot access $1: Duplicate file found" > /dev/stderr; exit 2;
else
    echo "File: $(ls $found)"
    head $found
fi
EDITED CODE (to reflect more precisely what I need)
Though numfiles isn't equal to 2 (or more) when there are duplicate files found...
All the filenames are on one line, separated by a space.
On the other hand, this works correctly:
find . -type f -name "$1" | wc -l
but I don't want to do the recursive search twice in the if/then/else construct...
Adding -print0 doesn't help either.
What would?
PS- Simplifications or improvements are always welcome!
You want to find files and count the files with a name "$1":
find . 2>/dev/null | grep -c "/${1}$"
And store the result in a var. In one command:
numfiles=$(grep -c "/${1}$" <(find . 2>/dev/null))
Using $() to store data in a variable trims trailing newlines. Since the final newline does not appear in the variable found, wc -l would miscount by one. You can recover the trailing newline with:
numfiles=$(printf "%s\n" "$found" | wc -l)
This miscounts if found is empty (and if any filenames contain a newline), emphasizing the fact that this entire approach is faulty. If you really want to go this way, you can try:
numfiles=$(test -z "$found" && echo 0 || printf "%s\n" "$found" | wc -l)
or pipe the output of find to a script that counts the output and prints a count along with the first filename:
find . -type f -name "$1" | tr '\n' ' ' |
awk '{c=NF; f=$1} END {print c, f; exit c!=1}' c=0 |
while read count name; do
    case $count in
        0) echo no files >&2;;
        1) echo 1 file $name;;
        *) echo Duplicate files >&2;;
    esac;
done
All of these solutions fail miserably if any pathnames contain whitespace. If that matters, you could change the awk to a perl script to make it easier to handle null separators and use -print0, but really I think you should stop worrying about special cases. (find -exec and find | xargs both fail to handle the zero-files-matching case cleanly. Arguably this awk solution doesn't handle it cleanly either.)
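If you do need a count that survives any filename, one safe option (a sketch, assuming GNU find's -printf) is to print one character per match and count characters instead of lines:
# One dot per matching file; wc -c counts them regardless of the names.
numfiles=$(find . -type f -name "$1" -printf '.' | wc -c)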

renumbering image files to be contiguous in bash

I have a directory with image files that follow a naming scheme and are not always contiguous, e.g.:
IMG_33.jpg
IMG_34.jpg
IMG_35.jpg
IMG_223.jpg
IMG_224.jpg
IMG_225.jpg
IMG_226.jpg
IMG_446.jpg
I would like to rename them so they go something like this, in the same order:
0001.jpg
0002.jpg
0003.jpg
0004.jpg
0005.jpg
0006.jpg
0007.jpg
0008.jpg
So far this is what I came up with, and while it does the four-digit padding, it doesn't sort by the numeric values in the filenames.
#!/bin/bash
X=1;
for i in *; do
    mv $i $(printf %04d.%s ${X%.*} ${i##*.})
    let X="$X+1"
done
result:
IMG_1009.JPG 0009.JPG
IMG_1010.JPG 0010.JPG
IMG_101.JPG 0011.JPG
IMG_102.JPG 0012.JPG
Update:
Try this. If the output is okay, remove the echo.
X=1
find . -maxdepth 1 -type f -name "*.jpg" -print0 |
    sort -z -n -t _ -k2 |
    while read -d $'\0' -r line; do
        echo mv "$line" "$(printf "%04d%s" $X .jpg)"
        ((X++))
    done
Using the super helpful rename (this is the util-linux rename syntax: rename from to file...). First, pad files with one digit to two digits; then pad files with two digits to three digits; etc.
rename IMG_ IMG_0 IMG_?.jpg
rename IMG_ IMG_0 IMG_??.jpg
rename IMG_ IMG_0 IMG_???.jpg
Then your for-loop (or a similar one) does the renaming trick, since the files now sort the same alphabetically and numerically; see the quoted sketch below.
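A quoted version of the original loop, run after the padding (a sketch; mv -- guards against names beginning with a dash):
X=1
for i in IMG_*.jpg; do
    mv -- "$i" "$(printf '%04d.%s' "$X" "${i##*.}")"
    ((X++))
done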
How about this:
while read f1; do
    echo "$f1"
    mv "IMG_$f1" "$f1"
done < <(ls | cut -d '_' -f 2 | sort -n)
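The same rename without parsing ls (a sketch; the order does not matter here, since each file keeps its own number):
for f in IMG_*.jpg; do
    mv -- "$f" "${f#IMG_}"
done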

Is there a 'better' way to find a list of files in a directory tree

I have created a list of files using find, foundlist.lst.
The find command is simply find . -type f -name "<search_pattern>" > foundlist.lst
I would now like to use this list to find copies of these files in other directories.
The 'twist' in my requirements is that I want to search only for the 'base' of the file name. I don't want to include the extension in the search.
Example:
./sort.cc is a member of the list. I want to look for all files of the pattern sort.*
Here is what I wrote. It works. It seems to me that there is a more efficient way to do this.
./findfiles.sh foundfiles.lst /usr/bin/temp
#!/bin/bash
# findfiles.sh
if [ $# -ne 2 ]; then
    echo "Need two arguments"
    echo "usage: findfiles <filelist> <dir_to_search>"
else
    filename=$1
    echo "$filename"
    while read -r line; do
        name=$line
        # change './file.ext' to 'file.*'
        search_base=$( echo ${name} | sed "s%\.\/%%" | sed "s/\..*/\.\*/" )
        find "$2" -type f -name "$search_base"
    done < "$filename"
fi
For stripping the file name, I'd use the following (instead of sed):
search_base=$(basename "${name}" | cut -d'.' -f1)
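Pure parameter expansion would avoid the external processes entirely (a sketch of the same transformation):
name=${line##*/}              # strip any leading path
search_base="${name%%.*}.*"   # replace everything from the first dot with .*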

Need a bash script to move files to sub folders automatically

I have a folder with 320 GB of images, and I want to move the images into 5 sub-folders randomly (they just need to end up spread across 5 sub-folders). But I know nothing about bash scripting. Please could someone help? Thanks!
You could move the files to different directories based on their first letter:
mv [A-Fa-f]* dir1
mv [G-Kg-k]* dir2
mv [^A-Ka-k]* dir3
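Note that the target directories must exist first, e.g.:
mkdir -p dir1 dir2 dir3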
Here is my take on this. In order to use it, place the script somewhere else (not in your folder) but run it from your folder. If you call your script file rmove.sh, you can place it in, say, ~/scripts/, then cd to your folder and run:
source ~/scripts/rmove.sh
#!/bin/bash
ndirs=$((`find -type d | wc -l` - 1))
for file in *; do
    if [ -f "${file}" ]; then
        rand=`dd if=/dev/random bs=1 count=1 2>/dev/null | hexdump -b | head -n1 | cut -d" " -f2`
        rand=$((rand % ndirs))
        i=0
        for directory in `find -type d`; do
            if [ "${directory}" = . ]; then
                continue
            fi
            if [ $i -eq $rand ]; then
                mv "${file}" "${directory}"
            fi
            i=$((i + 1))
        done
    fi
done
Here's my stab at the problem:
#!/usr/bin/env bash
sdprefix=subdir
dirs=5
# pre-create all possible sub dirs
for n in {1..5} ; do
    mkdir -p "${sdprefix}$n"
done
fcount=$(find . -maxdepth 1 -type f | wc -l)
while IFS= read -r -d $'\0' file ; do
    subdir="${sdprefix}"$(expr \( $RANDOM % $dirs \) + 1)
    mv -f "$file" "$subdir"
done < <(find . -maxdepth 1 -type f -print0)
Works with huge numbers of files
Does not break if a file is not moveable
Creates subdirectories if necessary
Does not break on unusual file names
Relatively cheap
Any scripting language will do, so I'll write it in Python here:
#!/usr/bin/python
import os
import random

new_paths = ['/path1', '/path2', '/path3', '/path4', '/path5']
image_directory = '/path/to/images'

for file_path in os.listdir(image_directory):
    full_path = os.path.abspath(os.path.join(image_directory, file_path))
    random_subdir = random.choice(new_paths)
    new_path = os.path.abspath(os.path.join(random_subdir, file_path))
    os.rename(full_path, new_path)
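Assuming you saved the script as, say, move_images.py (a hypothetical name) and adjusted the paths, you would run it with:
python move_images.py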
mv `ls | while read x; do echo "\`expr $RANDOM % 1000\`:$x"; done \
| sort -n | sed 's/[0-9]*://' | head -1` ./DIRNAME
Run it in your current image directory; this command will select one file at a time and move it to ./DIRNAME. Iterate the command until there are no more files to move.
Pay attention that ` is a backquote (and \` a nested one), not an ordinary quote character.
