How can I remove the oldest folder with bash?

I have the following folders:
1435773881 Jul 1 21:04
1435774663 Jul 2 21:17
1435774856 Jul 3 21:20
1435775432 Jul 4 21:56
I need to remove the oldest folder (1435773881 in the case above) with a bash script.
What command should I use?

You can do
ls -lt | tail -1 | awk '{print $NF}' | xargs rm -rf
ls -lt | tail -1 shows the last line after sorting the directories by modification time, newest first, i.e. the oldest one
awk '{print $NF}' "prints" the last column (which is the directory name)
xargs rm -rf deletes that directory

Assuming you want to delete just the oldest file from the current folder:
rm -rf "$(ls -t | tail -1)";
And since you specifically asked for a way to provide an absolute path:
rm -rf "$1/$(ls -t "$1" | tail -1)";
Include the snippet above in a function...
function removeOldest
{
    rm -rf "$1/$(ls -t "$1" | tail -1)";
}
...or an executable named removeOldest
#!/bin/bash
rm -rf "$1/$(ls -t "$1" | tail -1)";
and call it like
removeOldest /path/to/the/directory
If you want to embed it in a script, just replace both occurrences of $1 with the path directly.
Also note that if the specified directory contains no files at all, the directory itself is deleted.
If you want to prevent that, use
toBeDeleted="$(ls -t "$1" | tail -1)";
if [ ${#toBeDeleted} -gt 0 ] && [ -d "$1/$toBeDeleted" ]; then
rm -rf "$1/$toBeDeleted";
fi;
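If GNU find is available, a variant that avoids parsing ls output altogether may be safer; a minimal sketch, assuming GNU find and coreutils (and directory names without newlines):
#!/bin/bash
# Remove the oldest immediate subdirectory of "$1".
oldest=$(find "$1" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %P\n' \
         | sort -n | head -n 1 | cut -d' ' -f2-)
if [ -n "$oldest" ]; then
    rm -rf "${1:?}/$oldest"
fi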

How to find symlinks in a directory that point to another

I need to write a bash script that finds and lists symlinks from one directory (let's say "Directory1"), but only the ones pointing to files in a certain other directory (let's say "Directory2"). I can't use "find".
I have tried something like this but it's apparently wrong:
if [[ -d $1 ]]&&[[ -d $2 ]]
then
current_dir='pwd'
cd $1
do
for plik in *
if[[-L $file] && ["$(readlink -- "$file")" = "$2"] ]
then
#ls -la | grep ^l
echo "$(basename "$file")"
fi
done
fi
How about a simple ls with grep:
ls -l Directory1/ | grep "\->" | grep "Directory2"
I have found a solution:
for file1 in "$_cat1"/*; do
    if [ -L "$file1" ]; then
        # directory in which the symlink's resolved target lives
        dir1="$(dirname "$(readlink -f "$file1")")"
        # canonical path of the target directory
        dir2="$(readlink -f "$_cat2")"
        if [ "$dir1" == "$dir2" ]; then
            echo "$file1"
        fi
    fi
done

Is there any command to step down one directory in the shell? (when there is only one sub-directory)

To go up one directory I write cd ..
Is there any command that would work for the reverse situation in which there is only one subdirectory?
Let's say I am in:
dir1/dir2/
dir2 has only one subdirectory, dir3.
Is there any shortcut to step down one directory, from dir2 to dir3, without writing the name of the subdirectory (dir3)?
There isn't such a command per se, but you can trick the cd command by typing cd */ ;-)
At the time of my question I was not aware of the shell's auto-completion with the Tab key.
In this scenario I just type cd, press Tab, and the name of the directory shows up, so I can press Enter to move into it.
I had a similar thought when I was learning shell and wrote a wrapper around cd that does what you want. It grew into something a bit more complicated. If you have folders called folder1 and folder2, you can type: cdd 2
If there is only one folder you can just type: cdd
It also has functionality similar to ksh's two-argument cd path substitution (if at /home/tom, you can type the following to get to /home/bob: cdd tom bob).
It also works like the normal cd command if you pass a folder that exists.
It was written a while ago so it might not be the prettiest, but it works. It also does an ls at the end, which you can remove.
One other thing to note: you can (in bash at least) type the following to go back to the previous directory you were in: cd -
function cdd()
{
    if [[ -n $3 ]]; then
        printf "~~~ cdd can only take 1 or 2 arguments, you specified 3 or more\n";
        return 1;
    elif [[ -n $2 ]]; then
        # two arguments: substitute $1 with $2 in the current path (like ksh's cd a b)
        cd "$(pwd | sed "s/$1/$2/g")" || return;
    elif [[ -z $1 ]]; then
        # no arguments: enter the first subdirectory
        cd "$(ls -d */ | head -1)" || return;
    elif [[ -d $1 ]]; then
        cd "$1" || return;
    else
        # try a prefix match, then a suffix match, then a substring match,
        # and finally a substring match including hidden directories
        local match;
        match=$(ls -F | grep "/$" | grep "^$1" | head -1);
        [[ -d $match ]] || match=$(ls -F | grep "/$" | grep "$1/$" | head -1);
        [[ -d $match ]] || match=$(ls -F | grep "/$" | grep "$1" | head -1);
        [[ -d $match ]] || match=$(ls -a -F | grep "/$" | grep "$1" | head -1);
        if [[ -d $match ]]; then
            cd "$match" || return;
        else
            printf "~~~ Folder not found...\n";
            return 3;
        fi;
    fi;
    ls --color=auto -a;
}
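A quick usage sketch, with the folder names from the description above:
cdd            # cd into the first (or only) subdirectory
cdd 2          # cd into folder2 via substring match
cdd tom bob    # from /home/tom, jump to /home/bob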

Bash : Find and Remove duplicate files from different folders

I have two folders with some common files, and I want to delete the duplicate files from the xyz folder.
folder1:
/abc/file1.csv
/abc/file2.csv
/abc/file3.csv
/abc/file4.csv
folder2:
/xyz/file1.csv
/xyz/file5.csv
I want to compare both folders and remove the duplicates from the /xyz folder, so that only file5.csv remains.
For now I am using:
find "/xyz" "/abc" "/abc" -printf '%P\n' | sort | uniq -u | -exec rm {} \;
But it fails with:
if -exec is not a typo you can run the following command to lookup the package that contains the binary:
command-not-found -exec
-bash: -exec: command not found
-exec is an option to find; you had already left the find command when you started the pipeline.
Try xargs instead: it takes the data from stdin and appends it as arguments to the given command.
UNTESTED
find "/xyz" "/abc" "/abc" -printf '%P\n' | sort | uniq -u | xargs rm
Find every file in the ./234 and ./123 directories, print the bare filename with -printf, sort them, let uniq -d output the list of duplicates, prepend the path again with sed (using ./123 as the directory to delete the duplicates from), and pass the files to xargs rm.
Command:
find ./234 ./123 -type f -printf '%P\n' | sort | uniq -d | sed 's/^/.\/123\//g' | xargs rm
The sed step is not needed if you are already inside the ./123 directory and use full paths for the folders in find.
Another approach: just find the files in abc and attempt to remove them from xyz:
UNTESTED
find /abc -type f -printf 'rm -f "/xyz/%P"\n' | sh
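Alternatively, since the duplicates here are identified by name, a comm-based sketch may be simpler (assuming plain file names without newlines):
# names present in both folders are exactly the duplicates to delete from /xyz
comm -12 <(ls /abc | sort) <(ls /xyz | sort) | while IFS= read -r f; do
    rm -f "/xyz/$f"
done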
Remove Duplicate Files From Particular Directory
FileList=$(ls)
for D1 in $FileList; do
    if [[ -f $D1 ]]; then
        for D2 in $FileList; do
            if [[ -f $D2 ]]; then
                if [[ $D1 == "$D2" ]]; then
                    : # skip the original file
                elif [[ $(md5sum "$D1" | cut -d' ' -f1) == "$(md5sum "$D2" | cut -d' ' -f1)" ]]; then
                    # detect duplicates by comparing MD5 checksums
                    echo "Duplicate File Found : $D2"
                    rm -f "$D2"
                fi
            fi
        done
    fi
done
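For content-based duplicates inside a single directory, a much shorter sketch is possible, assuming GNU md5sum and xargs and file names without spaces or newlines:
# hash every file, sort so identical hashes are adjacent (the lexicographically
# first file survives), then delete every file after the first of each hash
md5sum ./* 2>/dev/null | sort | awk 'seen[$1]++ { print $2 }' | xargs -r rm -f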

Bash script getting error in files

Hi guys, please help with this...
[root@uenbe1 ~]# cat test.sh
#!/bin/bash
cd /vol/cdr/MCA
no='106'
value='55'
size=`df -kh | grep '/vol/cdr/MCA' | awk '{print $5}'| sed 's/%//g'`
if [ "$size" -gt "$value" ] ;
then
delete=$(($size-$value))
echo $delete
count=$(($no*$delete))
`ls -lrth | head -n $count | xargs rm -rf`
fi
output:
+ cd /vol/cdr/MCA
+ no=106
+ value=55
++ df -kh
++ grep /vol/cdr/MCA
++ awk '{print $5}'
++ sed s/%//g
+ size=63
+ '[' 63 -gt 55 ']'
+ delete=8
+ echo 8
8
+ count=848
++ ls -lrth
++ head -n 848
++ xargs rm -rf
rm: invalid option -- 'w'
Try `rm --help' for more information.
I want to delete the files counted in $count.
The command ls -lrth prints lines like:
-rw-r--r-- 1 bize bize 0 may 22 19:54 text.txt
-rw-r--r-- 1 bize bize 0 may 22 19:54 manual.pdf
That text, given to the command rm, will be interpreted as options:
$ rm -rw-r text.txt
rm: invalid option -- 'w'
List only the names of the files; that is, remove the long -l option from ls (and the -h option, since it only works with -l):
$ ls -1rt | head -n "$count" | xargs rm
But Please: do not make a rm -rf automatic, that is a sure path to future problems.
Maybe something like this, where echo prints the rm commands so you can review them before running anything:
$ ls -1rt | head -n "$count" | xargs -I{} echo rm -rf /vol/cdr/MCA/'{}'
Why are you passing ls -l? Just use find: it will produce the list of files greater than a given size, and if you capture that list in a file you can then pick the files that are to be deleted:
find /vol/cdr/MCA -type f -size +56320c -exec ls '{}' \;
> `ls -lrth | head -n $count | xargs rm -rf`
This line has multiple problems. The backticks are superfluous, and you are passing the permissions, file size, owner information, etc., as if they were part of the actual file name.
The minimal fix is to lose the backticks and the -l option to ls (and incidentally, the -r option to rm looks misplaced too); but really, a proper solution would not use ls here at all.
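A sketch of that ls-free approach, assuming GNU find and coreutils 8.25+ (the -z/-0 flags keep arbitrary file names safe); $count would be computed as in the original script:
# delete the $count oldest files in /vol/cdr/MCA, oldest first
find /vol/cdr/MCA -maxdepth 1 -type f -printf '%T@\t%p\0' \
    | sort -z -n \
    | head -z -n "$count" \
    | cut -z -f2- \
    | xargs -0 rm -f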

How to segregate files based on recursive grep

I have a directory with sub-directories, each containing some text files.
main-dir
|-- sub-dir1
|     file1 "foo"
|-- sub-dir2
|     file2 "bar"
|-- sub-dir3
|     file3 "foo"
The files file1 and file3 contain the same text. I want to segregate these sub-directories based on the content of their files: I would like to group sub-dir1 and sub-dir3, since the files in those sub-dirs have the same content. In this example, that means moving sub-dir1 and sub-dir3 to another directory.
Using grep in recursive mode lists all files matching the content. How can I make use of that output?
Your solution could be simplified to this:
for dir in *; do
if grep "foo" "$dir/file1" >/dev/null; then
cp -rf "$dir" "$HOME_PATH/newdir/"
fi
done
but will work only when all directories actually contain a file file1.
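If some sub-directories may lack file1, a guarded variant of the same loop avoids grep errors (a sketch; $HOME_PATH/newdir is taken from the answer above):
for dir in */; do
    [ -f "$dir/file1" ] || continue   # skip sub-dirs without file1
    if grep -q "foo" "$dir/file1"; then
        cp -rf "$dir" "$HOME_PATH/newdir/"
    fi
done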
Something like this:
grep -rl "foo" * | sed -r 's|(.*)/.*|\1|' | sort -u | while read dir; do
cp -rf "$dir" "$HOME_PATH/newdir/"
done
or like this:
grep -rl "foo" * | while read f; do
dirname "$f"
done | sort -u | while read dir; do
cp -rf "$dir" "$HOME_PATH/newdir/"
done
or like this:
find . -type f -exec grep -l "foo" {} \; | xargs -I {} dirname {} | sort -u |
while read dir; do
cp -rf "$dir" "$HOME_PATH/newdir/"
done
might be better.
I managed to write this script which solves my question.
# use a different variable than PWD, which is special to the shell
CWD=$(pwd)
FILES="$CWD"/*
for f in $FILES
do
    str=$(cat "$f/file1")
    if [ "$str" == "foo" ]
    then
        cp -rf "$f" "$HOME_PATH/newdir/"
    fi
done
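For the general case (arbitrary file names and more than two distinct contents), one could group sub-directories by a checksum of their contents instead of matching a fixed string; a sketch, with main-dir and groups/ as assumed names:
for dir in main-dir/*/; do
    # hash the concatenated contents of the sub-directory's files
    sum=$(cat "$dir"/* 2>/dev/null | md5sum | cut -d' ' -f1)
    mkdir -p "groups/$sum"
    cp -rf "$dir" "groups/$sum/"   # identical content lands in the same group
done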
