Bash script getting error in files - bash

Hi guys, please help with this...
[root@uenbe1 ~]# cat test.sh
#!/bin/bash
cd /vol/cdr/MCA
no='106'
value='55'
size=`df -kh | grep '/vol/cdr/MCA' | awk '{print $5}'| sed 's/%//g'`
if [ "$size" -gt "$value" ] ;
then
delete=$(($size-$value))
echo $delete
count=$(($no*$delete))
`ls -lrth | head -n $count | xargs rm -rf`
fi
output:
+ cd /vol/cdr/MCA
+ no=106
+ value=55
++ df -kh
++ grep /vol/cdr/MCA
++ awk '{print $5}'
++ sed s/%//g
+ size=63
+ '[' 63 -gt 55 ']'
+ delete=8
+ echo 8
8
+ count=848
++ ls -lrth
++ head -n 848
++ xargs rm -rf
rm: invalid option -- 'w'
Try `rm --help' for more information.
I want to delete the number of files given by $count.

The command ls -lrth prints lines like:
-rw-r--r-- 1 bize bize 0 may 22 19:54 text.txt
-rw-r--r-- 1 bize bize 0 may 22 19:54 manual.pdf
That text, when given to the command rm, is interpreted as options:
$ rm -rw-r text.txt
rm: invalid option -- 'w'
List only the names of the files. That is: remove the long-listing -l option from ls (and the -h option, since it only works with -l):
$ ls -1rt | head -n "$count" | xargs rm -rf
But Please: do not make a rm -rf automatic, that is a sure path to future problems.
Maybe?:
$ ls -1rt | head -n "$count" | xargs -I{} echo rm -rf /vol/cdr/MCA/'{}'

Why are you passing
ls -l
at all? Just use find: it will list the files greater than a given size. If you capture that list in a file, you can then pick the files to be deleted (or whatever else):
find /vol/cdr/MCA -type f -size +56320c -exec ls '{}' \;

> `ls -lrth | head -n $count | xargs rm -rf`
This line has multiple problems. The backticks are superfluous, and you are passing the permissions, file size, owner information etc. as if they were part of the actual file name.
The minimal fix is to lose the backticks and the -l option to ls (and incidentally, the -r option to rm looks misplaced, too); but really, a proper solution would not use ls here at all.
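A sketch of such a solution (the helper name delete_oldest is made up here, and it assumes the GNU versions of find, sort, head, cut and xargs for their NUL-delimiter options): delete the oldest N files in a directory without parsing ls at all, so unusual file names are handled safely.

```shell
delete_oldest() {
  dir=$1 count=$2
  find "$dir" -maxdepth 1 -type f -printf '%T@ %p\0' |
    sort -zn |            # oldest (smallest mtime) first
    head -zn "$count" |   # keep only the first $count entries
    cut -zd' ' -f2- |     # strip the timestamp, keep the path
    xargs -0r rm -f --    # delete; -r skips an empty list
}
```

Something like `delete_oldest /vol/cdr/MCA "$count"` would then replace the last line of the script.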

Related

Bash : Find and Remove duplicate files from different folders

I have two folders with some common files, I want to delete duplicate files from xyz folder.
folder1:
/abc/file1.csv
/abc/file2.csv
/abc/file3.csv
/abc/file4.csv
folder2:
/xyz/file1.csv
/xyz/file5.csv
I want to compare both folders and remove duplicate from /xyz folder. Output should be: file5.csv
For now I am using:
find "/xyz" "/abc" "/abc" -printf '%P\n' | sort | uniq -u | -exec rm {} \;
But it fails with:
command-not-found -exec
-bash: -exec: command not found
-exec is an option to find; you had already left find behind when you started the pipeline.
Try xargs instead: it takes the data from stdin and appends it to the given command's arguments.
UNTESTED
find "/xyz" "/abc" "/abc" -printf '%P\n' | sort | uniq -u | xargs rm
Find every file in the ./234 and ./123 directories, get the relative filename with -printf, sort them, let uniq -d list the duplicates, prepend the path again with sed (using the ./123 directory as the one to delete the duplicates from), and pass the files to xargs rm.
Command:
find ./234 ./123 -type f -printf '%P\n' | sort | uniq -d | sed 's/^/.\/123\//g' | xargs rm
The sed is not needed if you are already in the ./123 directory and use full paths for the folders in find.
Another approach: just find the files in abc and attempt to remove them from xyz:
UNTESTED
find /abc -type f -printf 'rm -f "/xyz/%P"\n' | sh
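The same idea can be written with NUL delimiters instead of generating shell text, which survives file names containing spaces or quotes. A sketch (the function name remove_common is made up, and GNU find's -printf is assumed): delete from the second directory every relative path that also exists under the first.

```shell
remove_common() (
  find "$1" -type f -printf '%P\0' |
  while IFS= read -r -d '' f; do
    rm -f -- "$2/$f"
  done
)
```

e.g. `remove_common /abc /xyz` for the folders in the question.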
Remove Duplicate Files From Particular Directory
FileList=$(ls)
for D1 in $FileList; do
  if [[ -f $D1 ]]; then
    for D2 in $FileList; do
      if [[ -f $D2 ]]; then
        if [[ $D1 == $D2 ]]; then
          : 'Skip original file'
        else
          if [[ $(md5sum "$D1" | cut -d' ' -f1) == $(md5sum "$D2" | cut -d' ' -f1) ]]; then
            echo "Duplicate File Found : $D2"
            rm -f "$D2"
          fi # detect duplicate using MD5
        fi # skip original file
      fi # D2 is a regular file, continue
    done
  fi # D1 is a regular file, continue
done
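The pairwise loop above hashes every file many times (quadratic in the number of files). A sketch of the same check done in a single md5sum pass (the name dedup_dir is made up; GNU coreutils assumed, and `read` will mangle truly exotic file names): sorting by checksum makes identical files adjacent, and the first file of each duplicate group is kept.

```shell
dedup_dir() (
  cd "$1" || exit 1
  prev=
  md5sum ./* 2>/dev/null | sort |
  while read -r sum file; do
    if [ "$sum" = "$prev" ]; then
      echo "Duplicate File Found : $file"
      rm -f -- "$file"
    fi
    prev=$sum
  done
)
```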

Shell win32 delete oldest directories (recursive)

I need to delete the oldest folders (including their contents) from a certain path. E.g. if there are more than 10 directories, delete the oldest ones until you are below 8 directories. The log would show count of directories before/after + the filesystem before/after and what dirs were deleted.
Thank you in advance!
You should test this first on your backup directory:
#!/bin/bash
DIRCOUNT="$(find . -mindepth 1 -maxdepth 1 -type d -printf x | wc -c)"
if [ "$DIRCOUNT" -gt 10 ]; then
ls -A1td */ | tail -n +9 | xargs rm -r
fi
If I'm not misunderstanding your intentions, below is your answer:
#! /usr/bin/env bash
DIRCOUNT="$(find . -mindepth 1 -maxdepth 1 -type d -printf x | wc -c)"
echo "Now you have $DIRCOUNT dirs"
[[ "$DIRCOUNT" -gt 10 ]] && ls -A1td */ | tail -n $((DIRCOUNT-8)) | xargs rm -r && echo "Now you have 8 dirs"
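A sketch combining both answers with the before/after log the question asked for (the name prune_dirs is made up; GNU tools are assumed, and directory names must contain no whitespace since xargs splits on it):

```shell
prune_dirs() (
  cd "$1" || exit 1
  keep=8
  before=$(find . -mindepth 1 -maxdepth 1 -type d | wc -l)
  if [ "$before" -gt 10 ]; then
    doomed=$(ls -1td -- */ | tail -n +"$((keep + 1))")
    echo "Deleting:" $doomed
    printf '%s\n' "$doomed" | xargs -r rm -r --
  fi
  after=$(find . -mindepth 1 -maxdepth 1 -type d | wc -l)
  echo "Dirs before: $before, after: $after"
)
```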

How could I remove one older folder with bash?

I have the following folders:
1435773881 Jul 1 21:04
1435774663 Jul 2 21:17
1435774856 Jul 3 21:20
1435775432 Jul 4 21:56
I need to remove the oldest folder (1435773881 in the case above) with a bash script.
What command should I use?
You can do
ls -lt | tail -1 | awk '{print $NF}' | xargs rm -rf
ls -lt | tail -1 shows the last line after sorting the directories by date
awk '{print $NF}' "prints" the last column (which is the directory name)
xargs rm -rf deletes that directory
Assuming you want to delete just the oldest file from the current folder:
rm -rf "$(ls -t | tail -1)";
And since you specifically asked for a way to provide an absolute path:
rm -rf "$1/$(ls -t "$1" | tail -1)";
Include the snippet above in a function...
function removeOldest
{
rm -rf "$1/$(ls -t "$1" | tail -1)";
}
...or an executable named removeOldest
#!/bin/bash
rm -rf "$1/$(ls -t "$1" | tail -1)";
and call it like
removeOldest /path/to/the/directory
If you want to embed it in a script, just replace both occurrences of $1 with the path directly.
Also note that if the specified directory contains no files at all, it is deleted itself.
If you want to prevent that, use
toBeDeleted="$(ls -t "$1" | tail -1)";
if [ ${#toBeDeleted} -gt 0 ] && [ -d "$1/$toBeDeleted" ]; then
rm -rf "$1/$toBeDeleted";
fi;

remove files which contain more than 14 lines in a folder

Unix command used
wc -l * | grep -v "14" | rm -rf
However this grouping doesn't seem to do the job. Can anyone point me towards the correct way?
Thanks
wc -l * 2>/dev/null | while read -r num file; do [[ $file == total ]] && continue; ((num > 14)) && echo rm "$file"; done
remove "echo" if you're happy with the results.
Here's one way to print out the names of all files with at least 15 lines (assuming you have Gnu awk, for the nextfile command):
awk 'FNR==15{print FILENAME;nextfile}' *
That will produce an error for any subdirectory, so it's not ideal.
You don't actually want to print the filenames, though. You want to delete them. You can do that in awk with the system function:
# The following has been defanged in case someone decides to copy&paste
awk 'FNR==15{system("echo rm "FILENAME);nextfile}' *
for f in *; do if [ "$(wc -l "$f" | cut -d' ' -f1)" -gt 14 ]; then rm -f "$f"; fi; done
There are a few problems with your solution: rm doesn't read file names from stdin, and your grep -v only filters out lines containing the string 14, which is not the same as keeping files with more than 14 lines. Try this instead:
find . -type f -maxdepth 1 | while read f; do [ `wc -l $f | tr -s ' ' | cut -d ' ' -f 2` -gt 14 ] && rm $f; done
Here's how it works:
find . -type f -maxdepth 1 #all files (not directories) in the current directory
[ #start comparison
wc -l $f #get line count of file
tr -s ' ' #(on the output of wc) eliminate extra whitespace
cut -d ' ' -f 2 #pick just the line count out of the previous output
-gt 14 ] #test if all that was greater than 14
&& rm $f #if the comparison was true, delete the file
I tried to figure out a solution using just find with -exec, but I couldn't figure out a way to test the line count. Maybe somebody else can come up with one.
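For what it's worth, the line-count test can be pushed into an inline sh invoked by -exec (a sketch, assuming GNU/POSIX find; reading the file with `wc -l <` keeps the file name out of wc's output, so no tr/cut is needed):

```shell
find . -maxdepth 1 -type f \
  -exec sh -c '[ "$(wc -l < "$1")" -gt 14 ]' sh {} \; \
  -print
```

Once -print lists the right files, it can be replaced with -delete (GNU find) or an -exec rm.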

BASH: How to remove all files except those named in a manifest?

I have a manifest file which is just a list of newline separated filenames. How can I remove all files that are not named in the manifest from a folder?
I've tried to build a find ./ ! -name "filename" command dynamically:
command="find ./ ! -name \"MANIFEST\" "
for line in `cat MANIFEST`; do
command=${command}"! -name \"${line}\" "
done
command=${command} -exec echo {} \;
$command
But the files remain.
[Note:] I know this uses echo. I want to check what my command does before using it.
Solution (thanks PixelBeat):
ls -1 > ALLFILES
sort MANIFEST MANIFEST ALLFILES | uniq -u | xargs rm
Without temp file:
ls -1 | sort MANIFEST MANIFEST - | uniq -u | xargs rm
Both work regardless of whether the lists are sorted.
For each file in the current directory, grep for the filename in the MANIFEST file and rm the file if it is not matched.
for file in *
do grep -q -x -F "$file" PATH_TO_YOUR_MANIFEST || rm "$file"
done
Using the "set difference" pattern from http://www.pixelbeat.org/cmdline.html#sets
(find ./ -type f -printf "%P\n"; cat MANIFEST MANIFEST; echo MANIFEST) |
sort | uniq -u | xargs -r rm
Note I list MANIFEST twice in case there are files listed there that are not actually present.
Also note the above supports files in subdirectories
figured it out:
ls -1 > ALLFILES
comm -13 MANIFEST ALLFILES | xargs rm
Just for fun, a Perl 1-liner... not really needed in this case but much more customizable/extensible than Bash if you want something fancier :)
$ ls
1 2 3 4 5 M
$ cat M
1
3
$ perl -e '{use File::Slurp; %M = map {chomp; $_ => 1} read_file("M"); $M{M}=1;
foreach $f (glob("*")) {next if $M{$f}; unlink "$f" || die "Cannot unlink: $!\n" };}'
$ ls
1 3 M
The above can be even shorter if you pass the manifest on STDIN
perl -e '{%M = map {chomp; $_ => 1} <>; $M{M}=1;
foreach $f (glob("*")) {next if $M{$f}; unlink "$f" || die "Cannot unlink: $!\n" };}' M
Assumes that MANIFEST is already sorted:
find -type f -printf '%P\n' | sort | comm -13 MANIFEST - | xargs rm
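One more variant lets grep do the set difference: -F for fixed strings, -x for whole-line matches, -f to read the patterns from MANIFEST. A sketch (the name clean_by_manifest is made up; GNU xargs is assumed, and file names must not contain newlines). The second grep keeps MANIFEST itself from being deleted:

```shell
clean_by_manifest() (
  cd "$1" || exit 1
  ls -1 | grep -Fxv -f MANIFEST | grep -Fxv MANIFEST | xargs -d '\n' -r rm --
)
```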
