Shell: delete oldest directories (recursive) - bash

I need to delete the oldest folders (including their contents) from a certain path. E.g. if there are more than 10 directories, delete the oldest ones until you are down to 8 directories. The log should show the directory count before/after, the filesystem usage before/after, and which dirs were deleted.
Thank you in advance!

You should test this first on a backup copy of your directory:
#!/bin/bash
# Count only the top-level directories ("." itself is excluded by -mindepth 1)
DIRCOUNT="$(find . -mindepth 1 -maxdepth 1 -type d -printf x | wc -c)"
if [ "$DIRCOUNT" -gt 10 ]; then
    # ls -t lists newest first; tail -n +9 keeps line 9 onward,
    # i.e. everything except the 8 newest
    ls -A1td -- */ | tail -n +9 | xargs rm -r --
fi

If I haven't misunderstood your intentions, the script below should be your answer:
#!/usr/bin/env bash
# -mindepth 1 keeps "." itself out of the count
DIRCOUNT="$(find . -mindepth 1 -maxdepth 1 -type d -printf x | wc -c)"
echo "Now you have $DIRCOUNT dirs"
if [[ "$DIRCOUNT" -gt 10 ]]; then
    # newest first; the last DIRCOUNT-8 lines are the oldest
    ls -A1td -- */ | tail -n "$((DIRCOUNT - 8))" | xargs rm -r -- && echo "Now you have 8 dirs"
fi

Related

Find and count compressed files by extension

I have a bash script that counts compressed files by file extension and prints the count.
#!/bin/bash
FIND_COMPRESSED=$(find . -type f | sed -e 's/.*\.//' | sort | uniq -c | sort -rn | grep -Ei '(deb|tgz|tar|gz|zip)$')
COUNT_LINES=$($FIND_COMPRESSED | wc -l)
if [[ $COUNT_LINES -eq 0 ]]; then
echo "No archived files found!"
else
echo "$FIND_COMPRESSED"
fi
However, the script works only if there are NO files with .deb .tar .gz .tgz .zip extensions.
If there are some, say test.zip and test.tar in the current folder, I get this error:
./arch.sh: line 5: 1: command not found
Yet, if I copy the pipeline from the FIND_COMPRESSED variable into COUNT_LINES, it all works fine.
#!/bin/bash
FIND_COMPRESSED=$(find . -type f | sed -e 's/.*\.//' | sort | uniq -c | sort -rn | grep -Ei '(deb|tgz|tar|gz|zip)$')
COUNT_LINES=$(find . -type f | sed -e 's/.*\.//' | sort | uniq -c | sort -rn | grep -Ei '(deb|tgz|tar|gz|zip)$'| wc -l)
if [[ $COUNT_LINES -eq 0 ]]; then
echo "No archived files found!"
else
echo "$FIND_COMPRESSED"
fi
What am I missing here?
When you expand the variable like that ($FIND_COMPRESSED | wc -l), the shell tries to execute its contents as a command, which is why it fails once the variable is non-empty. When it's empty, wc simply reads no input, returns 0, and the script marches on.
Thus, you need to change that line to this:
COUNT_LINES=$(echo "$FIND_COMPRESSED" | wc -l)
Note the double quotes: without them the newlines are collapsed and wc -l always reports 1. Also be aware that echo on an empty variable still emits one newline, so testing [[ -z $FIND_COMPRESSED ]] is more robust than comparing this count to 0.
But, while we're at it, you can also simplify the other line with something like this:
FIND_COMPRESSED=$(find . -type f -iname "*deb" -or -iname "*tgz" -or -iname "*tar*") #etc
Alternatively, you can do:
mapfile -t FIND_COMPRESSED < <(find . -type f -regextype posix-extended -regex ".*(deb|tgz|tar|gz|zip)$" -exec bash -c '[[ "$(file "$1")" =~ compressed ]] && echo "$1"' _ {} \;)
COUNT_LINES=${#FIND_COMPRESSED[@]}
The -t flag strips the trailing newlines, and the file name is passed to bash -c as a positional parameter rather than splicing {} into the command string.
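Putting the pieces together, a minimal corrected version of the original script might look like this (testing the variable for emptiness sidesteps the wc -l subtleties above):
#!/bin/bash
FIND_COMPRESSED=$(find . -type f | sed -e 's/.*\.//' | sort | uniq -c | sort -rn | grep -Ei '(deb|tgz|tar|gz|zip)$')
if [[ -z $FIND_COMPRESSED ]]; then
    echo "No archived files found!"
else
    echo "$FIND_COMPRESSED"
fi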

Bash script getting error in files

Hi guys, please help with this...
[root@uenbe1 ~]# cat test.sh
#!/bin/bash
cd /vol/cdr/MCA
no='106'
value='55'
size=`df -kh | grep '/vol/cdr/MCA' | awk '{print $5}'| sed 's/%//g'`
if [ "$size" -gt "$value" ] ;
then
delete=$(($size-$value))
echo $delete
count=$(($no*$delete))
`ls -lrth | head -n $count | xargs rm -rf`
fi
output:
+ cd /vol/cdr/MCA
+ no=106
+ value=55
++ df -kh
++ grep /vol/cdr/MCA
++ awk '{print $5}'
++ sed s/%//g
+ size=63
+ '[' 63 -gt 55 ']'
+ delete=8
+ echo 8
8
+ count=848
++ ls -lrth
++ head -n 848
++ xargs rm -rf
rm: invalid option -- 'w'
Try 'rm --help' for more information.
I want to delete the files counted in $count.
The command ls -lrth prints lines like:
-rw-r--r-- 1 bize bize 0 may 22 19:54 text.txt
-rw-r--r-- 1 bize bize 0 may 22 19:54 manual.pdf
That text, given to the command rm, will be interpreted as options:
$ rm -rw-r text.txt
rm: invalid option -- 'w'
List only the names of files. That is: remove the long -l option from ls (and the -h option, since it works only with -l):
$ ls -1rt | head -n "$count" | xargs rm -rf
But please: do not make rm -rf automatic; that is a sure path to future problems.
Maybe?:
$ ls -1rt | head -n "$count" | xargs -I{} echo rm -rf /vol/cdr/MCA/'{}'
The echo makes this a dry run; drop it once the printed commands look right.
Why are you passing
ls -l
at all? Just use find; it will list the files greater than a given size. If you capture this list in a file, you can then take the list of files which are to be deleted, or whatever:
find /vol/cdr/MCA -type f -size +56320c -exec ls '{}' \;
> `ls -lrth | head -n $count | xargs rm -rf`
This line has multiple problems. The backticks are superfluous, and you are passing the file permissions, size, owner information etc. as if they were part of the actual file names.
The minimal fix is to lose the backticks and the -l option to ls (and incidentally, the -r option to rm looks misplaced, too); but really, a proper solution would not use ls here at all.
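Concretely, the minimal corrected line might read (still fragile with unusual file names, as noted):
ls -1rt | head -n "$count" | xargs rm --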

Counting the total number of files and directories in a provided folder, including subdirectories and their files

I want to count all the files and directories in a provided folder, including files and directories in subdirectories. I have written a script which accurately counts the files and directories, but it does not handle subdirectories. Any ideas?
I want to do it without using the find command.
#!/bin/bash
givendir=$1
cd "$givendir" || exit
file=0
directories=0
for d in *;
do
if [ -d "$d" ]; then
directories=$((directories+1))
else
file=$((file+1))
fi
done
echo "Number of directories :" $directories
echo "Number of file Files :" $file
Use find:
echo "Number of directories: $(find "$1" -type d | wc -l)"
echo "Number of files/symlinks/sockets: $(find "$1" ! -type d | wc -l)"
Using plain shell and recursion:
#!/bin/bash
countdir() {
cd "$1"
dirs=1
files=0
for f in *
do
if [[ -d $f ]]
then
read subdirs subfiles <<< "$(countdir "$f")"
(( dirs += subdirs, files += subfiles ))
else
(( files++ ))
fi
done
echo "$dirs $files"
}
shopt -s dotglob nullglob
read dirs files <<< "$(countdir "$1")"
echo "There are $dirs dirs and $files files"
find "$1" -type f | wc -l will give you the files, find "$1" -type d | wc -l the directories
My quick-and-dirty shellscript would read
#!/bin/bash
test -d "$1" || exit
files=0
# Start with 1 to count the starting dir (as find does), else with 0
directories=1
function docount () {
for d in "$1"/*; do
if [ -d "$d" ]; then
directories=$((directories+1))
docount "$d";
else
files=$((files+1))
fi
done
}
docount "$1"
echo "Number of directories :" $directories
echo "Number of file Files :" $files
but mind it: On my build folder for a project, there were quite some differences:
find: 6430 dirs, 74377 non-dirs
my script: 6032 dirs, 71564 non-dirs
@thatotherguy's script: 6794 dirs, 76862 non-dirs
I assume that has to do with the legions of links, hidden files etc. (for one thing, the * glob skips dotfiles), but I am too lazy to investigate: find is the tool of choice.
Here are some one-line commands that work without find:
Number of directories: ls -Rl ./ | grep ":$" | wc -l
Number of files: ls -Rl ./ | grep "^-" | wc -l
Explanation:
ls -Rl lists all files and directories recursively, one line each.
grep ":$" finds just the results whose last character is ':'. These are all of the directory names.
grep "[0-9]:[0-9]" matches on the HH:MM part of the timestamp. The timestamp only shows up on file, not directories. If your timestamp format is different then you will need to pick a different grep.
wc -l counts the number of lines that matched from the grep.
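For convenience, here is a sketch that wraps both pipelines into a script (the target path is an assumption and defaults to the current directory):
#!/bin/bash
target="${1:-.}"
# header lines such as "./some/dir:" mark each directory visited
dirs=$(ls -Rl "$target" | grep -c ":$")
# long-listing lines for regular files start with "-"
files=$(ls -Rl "$target" | grep -c "^-")
echo "Number of directories: $dirs"
echo "Number of files: $files"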

How to locate the directory where the sum of the number of lines of regular files is greatest (in bash)

Hi, I'm new to Unix and bash and I'd like to ask how I can do this:
The specified directories are given as arguments. Locate the directory
where the sum of the number of lines of the regular files is greatest.
Browse all specified directories and their subdirectories. The sums
count only files that are directly in the directory.
I tried something but it's not working properly.
while [ $# -ne 0 ];
do case "$1" in
-h) show_help ;;
-*) echo "Error: Wrong arguments" 1>&2; exit 1 ;;
*) directories=("$@"); break ;;
esac
shift
done
IFS='
'
amount=0
for direct in "${directories[@]}"; do
for subdirect in `find $direct -type d `; do
temp=`find "$subdirect" -type f -exec cat {} \; | wc -l | tr -s " "`
if [ $amount -lt $temp ]; then
amount=$temp
subdirect2=$subdirect
fi
done
echo Output: "'"$subdirect2$amount"'"
done
The problem is here: when I use as argument this directory (just an example),
/home/usr/first, and it contains these files:
/home/usr/first/tmp/first.txt (50 lines)
/home/usr/first/tmp/second.txt (30 lines)
/home/usr/first/tmp1/one.txt (20 lines)
it will give me the output /home/usr/first/tmp1 100, and this is wrong: it should be /home/usr/first/tmp 80.
I'd like to scan all directories and all their subdirectories in depth. Also, if multiple directories meet the maximum, it should list all of them.
Given your sample files, I'm going to assume you only want to look at the immediate subdirectories, not recurse down several levels:
max=-1
# the trailing slash limits the wildcard to directories only
for dir in */; do
count=0
for file in "$dir"/*; do
[[ -f "$file" ]] && (( count += $(wc -l < "$file") ))
done
if (( count > max )); then
max=$count
maxdir="$dir"
fi
done
echo "files in $maxdir have $max lines"
files in tmp/ have 80 lines
In the spirit of Unix (cough), here's an absolutely disgusting chain of pipes that I personally hate, but it was a lot of fun to construct :):
find . -mindepth 1 -maxdepth 1 -type d -exec sh -c 'find "$1" -maxdepth 1 -type f -print0 | wc -l --files0-from=- | tail -1 | { read a _ && echo "$a $1"; }' _ {} \; | sort -nr | head -1
Of course, don't actually use this; use glenn jackman's nice answer instead.
You do get great control over find's filtering possibilities this way, too. Yay. But use glenn's answer!
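Since the follow-up asks for full recursion and for ties, here is a hedged sketch in the spirit of glenn jackman's answer; it assumes GNU find and file names without embedded newlines:
#!/bin/bash
max=-1
maxdirs=()
while IFS= read -r -d '' dir; do
    count=0
    # count lines only in files directly inside $dir
    for file in "$dir"/*; do
        [[ -f $file ]] && (( count += $(wc -l < "$file") ))
    done
    if (( count > max )); then
        max=$count
        maxdirs=("$dir")
    elif (( count == max )); then
        maxdirs+=("$dir")
    fi
done < <(find "${1:-.}" -type d -print0)
# print every directory tied for the maximum
for dir in "${maxdirs[@]}"; do
    echo "$dir $max"
done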

Need a bash script to move files to subfolders automatically

I have a folder with 320G of images, and I want to move the images into 5 subfolders randomly (they just need to end up spread across the 5 subfolders). But I know nothing about bash scripts. Could someone please help? Thanks!
You could move the files to different directories based on their first letter:
mv [A-Ea-e]* dir1
mv [F-Kf-k]* dir2
mv [^A-Ka-k]* dir3
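If an even split matters more than randomness, a round-robin sketch along these lines also works (dir1 through dir5 are placeholder names and must exist beforehand):
#!/bin/bash
i=0
for f in *; do
    [[ -f $f ]] || continue    # skip the target directories themselves
    mv -- "$f" "dir$(( i % 5 + 1 ))"
    i=$(( i + 1 ))
done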
Here is my take on this. In order to use it, place the script somewhere else (not in your folder) but run it from your folder. If you call your script file rmove.sh, you can place it in, say, ~/scripts/, then cd to your folder and run:
source ~/scripts/rmove.sh
#!/bin/bash
# number of subdirectories below the current dir (excluding . itself)
ndirs=$((`find -type d | wc -l` - 1))
for file in *; do
    if [ -f "${file}" ]; then
        # one random byte, printed by hexdump -b as three octal digits
        rand=`dd if=/dev/random bs=1 count=1 2>/dev/null | hexdump -b | head -n1 | cut -d" " -f2`
        # interpret it as base 8 and map it onto the directory count
        rand=$((8#$rand % ndirs))
        i=0
        for directory in `find -type d`; do
            if [ "${directory}" = . ]; then
                continue
            fi
            if [ $i -eq $rand ]; then
                mv "${file}" "${directory}"
            fi
            i=$((i + 1))
        done
    fi
done
Here's my stab at the problem:
#!/usr/bin/env bash
sdprefix=subdir
dirs=5
# pre-create all the sub dirs
for n in $(seq 1 "$dirs") ; do
    mkdir -p "${sdprefix}$n"
done
# move every top-level regular file to a random sub dir
while IFS= read -r -d $'\0' file ; do
    subdir="${sdprefix}$(( RANDOM % dirs + 1 ))"
    mv -f "$file" "$subdir"
done < <(find . -maxdepth 1 -type f -print0)
Works with huge numbers of files
Does not break if a file is not moveable
Creates subdirectories if necessary
Does not break on unusual file names
Relatively cheap
Any scripting language will do, so I'll write it in Python here:
#!/usr/bin/python
import os
import random

new_paths = ['/path1', '/path2', '/path3', '/path4', '/path5']
image_directory = '/path/to/images'

for file_path in os.listdir(image_directory):
    full_path = os.path.abspath(os.path.join(image_directory, file_path))
    # os.listdir also returns directories; only move regular files
    if not os.path.isfile(full_path):
        continue
    random_subdir = random.choice(new_paths)
    new_path = os.path.abspath(os.path.join(random_subdir, file_path))
    os.rename(full_path, new_path)
mv `ls | while read x; do echo "\`expr $RANDOM % 1000\`:$x"; done \
| sort -n | sed 's/[0-9]*://' | head -1` ./DIRNAME
Run it in your current image directory; this command selects one file at a time and moves it to ./DIRNAME. Repeat the command until there are no more files to move.
Pay attention that ` is a backquote, not a regular quote character.
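If you'd rather not repeat the command by hand, the same idea can be wrapped in a loop; this is a sketch, with DIRNAME standing for your target directory as above:
#!/bin/bash
shopt -s nullglob
while :; do
    # collect the regular files still at the top level
    files=()
    for f in *; do [[ -f $f ]] && files+=("$f"); done
    (( ${#files[@]} )) || break
    # pick one at random and move it
    mv -- "${files[RANDOM % ${#files[@]}]}" ./DIRNAME
done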
