Updated question based on new information…
Here is a gist of my code, with the general idea that I store items in Dropbox at:
~/Dropbox/Public/drops/xx.xx.xx/whatever
Where the date is always two characters, two characters, and two characters, dot separated (e.g. 12.28.09). Within that folder there can be more folders and more files, which is why, when I use find, I do not set the depth and allow it to scan recursively.
https://gist.github.com/anonymous/ad51dc25290413239f6f
Below is a shortened version of the gist; I don't believe it will run as it stands, though the gist itself will run, assuming you have Dropbox installed and there are files at the path location that I set up.
General workflow:
SIZE="+250k" # For `find` this is the value in size I am looking for files to be larger than
# Location where I store the output to `find` to process that file further later on.
TEMP="/tmp/drops-output.txt"
Next I rm the tmp file and touch a new one.
I will then cd into
DEST=/Users/$USER/Dropbox/Public/drops
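A sketch of those two steps, assuming the variables above:
rm -f "$TEMP" && touch "$TEMP" # start each run with a clean temp file
cd "${DEST}"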
Then I perform a quick conditional check to make sure that I am working where I want to be; with all my values as variables, I could easily mess up and not be working where I thought I would be.
# Conditional check: is the current directory the one I want to be the working directory?
if [ "$(pwd)" = "${DEST}" ]; then
echo -e "Destination and current working directory are equal, this is good!:\n $(pwd)\n"
fi
The meat of step one is the `find` command
# Use `find` to locate a subset of files that are larger than a certain size
# save that to a temp file and process it. I believe this could all be done in
# one find command with -exec or similar but I can't figure it out
find . -type f -size "${SIZE}" -exec ls -lh {} \; >> "$TEMP"
Inside $TEMP will be a data set that looks like this:
-rw-r--r--@ 1 me staff 61K Dec 28 2009 /Users/me/Dropbox/Public/drops/12.28.09/wor-10e619e1-120407.png
-rw-r--r--@ 1 me staff 230K Dec 30 2009 /Users/me/Dropbox/Public/drops/12.30.09/hijack-loop-d6250496-153355.pdf
-rw-r--r--@ 1 me staff 49K Dec 31 2009 /Users/me/Dropbox/Public/drops/12.31.09/mt-5a819185-180538.png
The trouble is that not all filenames are free of spaces, though I have done all I can to make sure variables are quoted
and wrapped in parens or braces or quotes where applicable.
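As an aside, the spaces are exactly why parsing ls output is fragile; collecting the paths null-delimited sidesteps word-splitting entirely. A sketch that produces the same ls -lh lines in the same $TEMP:
# -print0 and read -d '' pass each path whole, so spaces in names cannot split it
find . -type f -size "${SIZE}" -print0 |
while IFS= read -r -d '' f; do
    ls -lh "$f"
done >> "$TEMP"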
With the results in /tmp I run:
# Number of results located by the `find` command above
RESULTS=$(wc -l "$TEMP" | awk '{print $1}')
echo -e "Located: [$RESULTS] total files greater than or equal to $SIZE\n"
# With a result set found via `find`, now use awk to print out the sorted list of file
# sizes and paths.
echo -e "SIZE DATE FILE PATH"
#awk '{print "["$5"] ", $9, $10}' < "$TEMP" | sort -n
awk '{for(i=5;i<=NF;i++) printf "%s ", $i; printf "\n"}' "$TEMP" | sort -n
With the changes to awk from how I had it originally, my result now looks like this:
751K Oct 21 19:00 ./10.21.14/netflix-67-190039.png
760K Sep 14 19:07 ./01.02.15/logos/RCA_old_logo.jpg
797K Aug 21 03:25 ./08.21.14/girl-88-032514.zip
916K Sep 11 21:47 ./09.11.14/small-shot-4d-214727.png
I want it to look like this:
SIZE FILE PATH
========================================
751K ./10.21.14/netflix-67-190039.png
760K ./01.02.15/logos/RCA_old_logo.jpg
797K ./08.21.14/girl-88-032514.zip
916K ./09.11.14/small-shot-4d-214727.png
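One hedged way to get that layout from the same $TEMP data is to print field 5 plus fields 9 through NF rejoined, so names with spaces survive (this assumes the path always starts at field 9 of the ls -lh line):
echo "SIZE    FILE PATH"
echo "========================================"
# Rejoin fields 9..NF as the path; field 5 is the human-readable size
awk '{
    path = $9
    for (i = 10; i <= NF; i++) path = path " " $i
    printf "%-8s %s\n", $5, path
}' "$TEMP" | sort -n
# caveat: sort -n compares the leading number only, so mixed K and M sizes will not interleave correctly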
# All Done
if [ "$?" -ne "0" ]; then
echo "find of drop files larger than $SIZE completed without errors.\n"
exit 1
fi
Original post to Stack Overflow, prior to gaining the new information that led to new questions…
The original post is below; given the new information, I tried some new tactics and was left with the script and info above.
I have a simple script on Mac OS X that performs a find on a dir and locates all files of type file and of size greater than +SIZE.
These are then appended to a file via >>
From there, I have a file that essentially contains an ls -la listing, so I use awk to get at the file size and the file name with this command:
# With a result set found via `find`, now use awk to print out the sorted list of file
# sizes and paths.
echo -e "SIZE FILE PATH"
awk '{print "["$5"] ", $9, $10}' < "$TEMP" | sort -n
All works as I want it to, but I get some filename truncation right at the above code. The entire script is around 30 lines, and I have pinned the problem to this line. I think if I threw in a different internal field separator that would fix it; I could use \t, as there can't be a \t in Mac OS X filenames.
I thought it was just quoting, but I can't seem to see where, if that is the case. Here is a sample of the data returned; usually I get about 50 results. The first one I stuffed in this file has filename truncation:
[1.0M] ./11.26.14/Bruna Legal
[1.4M] ./12.22.14/card-88-082636.jpg
[1.6M] ./12.22.14/thrasher-8c-082637.jpg
[11M] ./01.20.15/td-6e-225516.mp3
Bruna Legal is "Bruna Legal Name.pdf" on the filesystem.
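(The cause, in hindsight: print $9, $10 emits only the first two words of a name, so "Bruna Legal Name.pdf" loses everything past its second word; printing fields 9 through NF, as in the sketch further up, avoids the truncation.)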
You can avoid parsing the output of the ls command and do the whole job with find, using its -printf action, like:
find /tmp -maxdepth 1 -type f -size +4k -printf "%kKB %f\n" 2>/dev/null |
sort -nrk1,1
In my example it outputs every file that is bigger than 4 kilobytes. The issue is that the find command cannot print formatted output with the size in MB. In addition the numeric ordering does not work for me with square brackets surrounding the number, so I omit them. In my test it yields:
140KB +~JF7115171557203024470.tmp
140KB +~JF3757415404286641313.tmp
120KB +~JF8126196619419441256.tmp
120KB +~JF7746650828107924225.tmp
120KB +~JF7068968012809375252.tmp
120KB +~JF6524754220513582381.tmp
120KB +~JF5532731202854554147.tmp
120KB +~JF4394954996081723171.tmp
24KB +~JF8516467789156825793.tmp
24KB +~JF3941252532304626610.tmp
24KB +~JF2329724875703278852.tmp
16KB 578829321_2015-01-23_1708257780.pdf
12KB 575998801_2015-01-16_1708257780-1.pdf
8KB adb.log
EDIT: I've noted that %k is not accurate enough, so you can use %s to print the size in bytes and transform it to KB or MB using awk, like:
find /tmp -maxdepth 1 -type f -size +4k -printf "%s %f\n" 2>/dev/null |
sort -nrk1,1 |
awk '{ $1 = sprintf( "%.2fKB", $1 / 1024) } { print }'
It yields:
136.99KB +~JF7115171557203024470.tmp
136.99KB +~JF3757415404286641313.tmp
117.72KB +~JF8126196619419441256.tmp
117.72KB +~JF7068968012809375252.tmp
117.72KB +~JF6524754220513582381.tmp
117.68KB +~JF7746650828107924225.tmp
117.68KB +~JF5532731202854554147.tmp
117.68KB +~JF4394954996081723171.tmp
21.89KB +~JF8516467789156825793.tmp
21.89KB +~JF3941252532304626610.tmp
21.89KB +~JF2329724875703278852.tmp
14.14KB 578829321_2015-01-23_1708257780.pdf
10.13KB 575998801_2015-01-16_1708257780-1.pdf
4.01KB adb.log
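Since the original question was about Mac OS X, where BSD find has no -printf, a similar listing can be approximated with stat(1) instead; a sketch, assuming the BSD stat that ships with OS X:
# BSD stat: %z prints the size in bytes, %N the file path
find . -type f -size +4k -exec stat -f "%z %N" {} + |
sort -nrk1,1 |
awk '{ $1 = sprintf("%.2fKB", $1 / 1024) } { print }'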
Related
I executed a command on Linux to list all the files and sub-files (with a specific format) in a folder.
This command is:
ls -R | grep -e "\.txt$" -e "\.py$"
On the other hand, I have some filenames stored in a .txt file (one per line).
I want to show the result of my previous command, but I want to filter the result using the file called filters.txt.
If the result is in the file, I keep it
Else, I do not keep it.
How can I do it, in bash, in only one line?
I suppose this is something like:
ls -R | grep -e "\.txt$" -e "\.py$" | grep filters.txt
An example of the files:
# filters.txt
README.txt
__init__.py
EDIT 1
I am trying to use a file instead of a list of arguments because I get the error:
'/bin/grep: Argument list too long'
EDIT 2
# The result of the command ls -R
-rw-r--r-- 1 XXX 1 Oct 28 23:36 README.txt
-rw-r--r-- 1 XXX 1 Oct 28 23:36 __init__.py
-rw-r--r-- 1 XXX 1 Oct 28 23:36 iamaninja.txt
-rw-r--r-- 1 XXX 1 Oct 28 23:36 donttakeme.txt
-rw-r--r-- 1 XXX 1 Oct 28 23:36 donttakeme2.txt
What I want as a result:
-rw-r--r-- 1 XXX 1 Oct 28 23:36 README.txt
-rw-r--r-- 1 XXX 1 Oct 28 23:36 __init__.py
You can use comm:
comm -12 <(ls -R | grep -e "\.txt$" -e "\.py$" | sort) <(sort filters.txt)
This will give you the intersection of the two lists (note that comm expects both inputs to be sorted, hence the sort on each side).
EDIT
It seems that ls is not great for this; maybe find would be safer:
find . -type f | grep "$(sed ':a;N;$!ba;s/\n/\\|/g' filters.txt)"
That is, take your filters.txt, replace all newlines with \| using sed, and then grep the file list for all the entries at once.
Grep uses \| between items when grepping for more than one item, so the sed command transforms filters.txt into such a pattern for grep.
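For the example filters.txt above, that sed emits the single pattern README.txt\|__init__.py, so the grep keeps any line containing either name.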
grep -f filters.txt -r .
…where . is your current folder.
You can run this script in the target directory, giving the list file as a single argument.
#!/bin/bash -e
# exit early if awk fails (ie. can't read list)
shopt -s lastpipe
find . -mindepth 1 -type f \( -name '*.txt' -o -name '*.py' \) -print0 |
awk -v exclude_list_file="${1:?no list file provided}" \
'BEGIN {
while ((getline line < exclude_list_file) > 0) {
exclude_list[c++] = line
}
close(exclude_list_file)
if (c==0) {
exit 1
}
FS = "/"
RS = "\000"
}
{
for (i in exclude_list) {
if (exclude_list[i] == $NF) {
next
}
}
print
}'
It prints all paths, recursively, excluding any filename which exactly matches a line in the list file (so lines not ending .py or .txt wouldn’t do anything).
Only the filename is considered, the preceding path is ignored.
It fails immediately if no argument is given or it can't read a line from the list file.
The question is tagged bash, but if you change the shebang to sh, and remove shopt, then everything in the script except -print0 is POSIX. -print0 is common, it’s available on GNU (Linux), BSDs (including OpenBSD), and busybox.
The purpose of lastpipe is to exit immediately if the list file can't be read. Without it, find keeps running until completion (but nothing gets printed).
If you specifically want the ls -l output format, you could change awk to use a null output record separator (add ORS = "\000" to the end of BEGIN, directly below RS="\000"), and pipe awk in to xargs -0 ls -ld.
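Usage would look like this, assuming the script is saved under the (hypothetical) name filter.sh:
cd /path/to/target/directory
bash filter.sh filters.txt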
I need to search for files in a directory by month/year, pass them through wc -l to get a line count, and test something like if [ $lines -le 18 ], and give me a list of files that match.
In the past I called this with 'file.sh 2020-06' and used something like this to process the files for that month:
find . -name "* $1-*" -exec grep '(1 |2 |3 )' {}
but I now need to test for a line count.
The above -exec worked but when I changed over to passing the file to another exec I get complaints of "too many parameters" because the file name has spaces. I just can't seem to get on track with solving this one.
Any pointers to get me going would be very much appreciated.
Rick
Here's one using find and awk. But first some test files (Notice: it creates files named 16, 17, 18 and 19):
$ for i in 16 17 18 19 ; do seq 1 $i > $i ; done
Then:
$ find . -name 1\[6789\] -exec awk 'NR==18{exit c=1}END{if(!c) print FILENAME}' {} \;
./16
./17
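Applied back to the original month/year pattern, the same idea handles the spaces that broke xargs, since -exec passes each name as a single argument. A sketch reusing the asker's $1; "at most 18 lines" means NR never reaches 19:
# print only files with 18 or fewer lines among this month's files
find . -name "* $1-*" -exec awk 'NR==19{exit c=1} END{if(!c) print FILENAME}' {} \;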
I'm using an Awk script to split a big text document into independent files. I did it and now I'm working with 14k text files. The problem here is there are a lot of files with just three lines of text and it's not useful for me to keep them.
I know I can delete lines in a text with awk 'NF>=3' file, but I don't want to delete lines inside files; rather, I want to delete the files whose content is just two or three lines of text.
Thanks in advance.
Could you please try the following find command (tested with GNU awk):
find /your/path/ -type f -exec awk -v lines=3 'NR>lines{f=1; exit} END{if (!f) print FILENAME}' {} \;
The above will print, on the console, the names of files having 3 lines or fewer. Once you are happy with the results, try the following to delete them. I suggest running it in a test directory first, and only when you are fully satisfied with the output proceed for real (remove the echo below; I have left it in to be on the safer side :) ).
find /your/path/ -type f -exec awk -v lines=3 'NR>lines{f=1; exit} END{exit !f}' {} \; -exec echo rm -f {} \;
If the files in the current directory are all text files, this should be efficient and portable:
for f in *; do
[ $(head -4 "$f" | wc -l) -lt 4 ] && echo "$f"
done # | xargs rm
Inspect the list, and if it looks OK, then remove the # on the last line to actually delete the unwanted files.
Why use head -4? Because wc doesn't know when to quit. Suppose half of the text files were each more than a terabyte long; if that were the case wc -l alone would be quite slow.
You may use wc to count lines and then decide whether to delete the file or not. You should write a shell script instead of using just an awk command.
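A minimal sketch of that idea (assuming plain text files in the current directory; the echo is a dry-run guard):
#!/bin/bash
# Count lines with wc; print the rm for any file of 3 lines or fewer
for f in *; do
    [ -f "$f" ] || continue
    if [ "$(wc -l < "$f")" -le 3 ]; then
        echo rm -- "$f"   # drop the echo once the list looks right
    fi
done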
You can try Perl. The solution below will be efficient, as the file handle ARGV is closed once the line count exceeds 3:
perl -nle ' close(ARGV) if ($.>3) ; $kv{$ARGV}++; END { for(sort keys %kv) { print if $kv{$_}>3 } } ' *
If you want to pipe the output of some other command (say find) you can use it like
$ find . -name "*" -type f -exec perl -nle ' close(ARGV) if ($.>3) ; $kv{$ARGV}++; END { for(sort keys %kv) { print if $kv{$_}>3 } } ' {} \;
./bing.fasta
./chris_smith.txt
./dawn.txt
./drcatfish.txt
./foo.yaml
./ip.txt
./join_tab.pl
./manoj1.txt
./manoj2.txt
./moose.txt
./query_ip.txt
./scottc.txt
./seats.ksh
./tane.txt
./test_input_so.txt
./ya801.txt
$
The output of wc -l * in the same directory:
$ wc -l *
12 bing.fasta
16 chris_smith.txt
8 dawn.txt
9 drcatfish.txt
3 fileA
3 fileB
13 foo.yaml
3 hubbs.txt
8 ip.txt
19 join_tab.pl
6 manoj1.txt
6 manoj2.txt
5 moose.txt
17 query_ip.txt
3 rororo.txt
5 scottc.txt
22 seats.ksh
1 steveman.txt
4 tane.txt
13 test_input_so.txt
24 ya801.txt
200 total
$
I have a set of data files across a number of directories with format
ls lcp01/output/
> dst000.dat dst001.dat ... dst075.dat nn000.dat nn001.dat ... nn036.dat aa000.dat aa001.dat ... aa040.dat
That is to say, there are a set of directories lcp01 through lcp25 with a collection of different data files in their output folders. I want to know what the highest number dstXXX.dat file is in each directory (in the example shown the result would be 75).
I wrote a script which achieves this, but I'm not satisfied with the final step which feels a bit hacky:
#!/bin/bash
for i in `seq -f "%02g" 1 25`; #specify dir extensions 1 through 25
do
echo " "
echo $i
names=($(ls lcp$i/output | grep dst )) #dir containing dst files
NUMS=()
for j in "${names[#]}";
do
temp="$(echo $j | tr -dc '0-9' && printf " ")" # record suffixes for each dst file
NUMS+=("$((10#$temp))") #force base 10 interpretation of dst suffixes
done
numList="$(echo "${NUMS[*]}" | sort -nr | head -n1)"
echo ${numList:(-3)} #print out the last 3 characters of the sorted list - the largest file suffix
done
The final two steps organise the list of output indices, then I show the last 3 characters of that list which will be my largest file number (providing the file numbers are smaller than 100).
Is there a cleaner way of doing this? Ideally I would like more control over the output format, but mainly it's the step of reading the last 3 characters out. I would like to be able to just output the largest number, which should be the last element of the list but I cannot figure out how.
You could do something like the following:
for d in lcp[0-9][0-9]; do find "$d" -name 'dst*.dat' -print | sort -u | tail -n1; done
The above command will only work if the numbering always has the same number of digits (dst001..999.dat), as it is sorted as a string; if that's not the case:
for d in lcp[0-9][0-9]; do echo -n "$d: "; find "$d" -name 'dst*.dat' -print | grep -o '[0-9]*\.dat' | sort -n | tail -n1; done
Using filename expansions:
for d in lcp*/output; do
files=( $d/dst*.dat )
file=${files[-1]}
[[ -e $file ]] || continue
file=${file##*dst}
echo ${file%.dat}
done
or, with the extglob shell option enabled, restrict the patterns to numbers:
shopt -s extglob
... lcp*([0-9])/output
... $d/dst*([0-9]).dat
...
file=${file##*dst*(0)}
...
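Another hedged option, if GNU coreutils are available: version sort does the numeric comparison for you:
for d in lcp*/output; do
    # sort -V orders dst2.dat before dst10.dat; tail keeps the highest-numbered file
    printf '%s\n' "$d"/dst*.dat | sort -V | tail -n 1
done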
Running "ls -lrt" on my terminal I get a large list that looks something like this:
-rw-r--r-- 1 pratik staff 1849089 Jun 23 12:24 cam13-vid.webm
-rw-r--r-- 1 pratik staff 1850653 Jun 23 12:24 cam12-vid.webm
-rw-r--r-- 1 pratik staff 1839110 Jun 23 12:24 cam11-vid.webm
-rw-r--r-- 1 pratik staff 1848520 Jun 23 12:24 cam10-vid.webm
-rw-r--r-- 1 pratik staff 1839122 Jun 23 12:24 cam1-vid.webm
I have only shown part of it above as a sample.
I would like to rename all the files to have a number one less than current.
For example,
mv cam1-vid.webm cam0-vid.webm
mv cam2-vid.webm cam1-vid.webm
.....
....
mv cam200-vid.webm cam199-vid.webm
How can this be done using an OS X / Linux bash script (perhaps using sed)?
You can do this with plain bash:
for i in {1..200}
do
mv "cam${i}-vid.webm" "cam$((i-1))-vid.webm"
done
I would use find, split up the file names to find the number, subtract one, and rename:
find . -name "cam*-vid.webm" -print0 | while read -d\$0 old_name
do
number=${old_name#cam} #Filter left to remove 'cam' prefix
number=${number%-vid.webm"} #Filter right to remove '-vid.webm' suffix
$((number -= 1))
new_name="cam${number}-vid.webm"
echo "mv \"$old_name\" \"$new_name\""
done | tee results
This will merely print out the commands (that is why I have the echo). I'm piping it into a file named results. Once this command completes, look at results and make sure it does everything it should. Whenever there's an operation like this, there can be a nasty surprise. For example, if I rename cam02-vid.webm to cam01-vid.webm before I rename cam01-vid.webm, I am going to overwrite cam01-vid.webm.
Maybe a safer way is to explicitly give the file numbers I need:
for number in {1..200}
do
((old_number = number + 1))
echo mv "\"cam${old_number}-vid.webm\" \"cam${number}-vid.webm\""
done | tee results
Useful hint: If the result file looks good, you can actually just run it as a shell script:
$ bash results
Another possibility is to test to make sure the old file exists:
for number in {1..200}
do
((old_number = number + 1))
if [ -f "cam${old_number}-vid.webm" ]
then
echo mv "\"cam${old_number}-vid.webm\" \"cam${number}-vid.webm\""
else
echo "ERROR: Can't find a file called 'cam${old_number}-vid.webm'"
fi
done | tee results
A perl solution.
First it traverses all input files (@ARGV) and keeps those that are plain files and not links (grep), extracts the number (map), and sorts numerically in ascending order to avoid overwriting (sort). Then it builds a new name by decrementing the number and renames the original:
perl -e '
for (
sort { $a->[0] <=> $b->[0] }
map { m/(\d+)/; [$1, $_ ] }
grep { -f $_ && ! -l $_ }
@ARGV
) {
$n = --$_-> [0];
($newname = $_->[1]) =~ s/\A(?i)(cam)\d+(.*)\z/$1$n$2/;
print "Executing command ===> rename $_->[1], $newname\n";
rename $_->[1], $newname;
}' *
Assuming initial content of the directory as:
cam1-vid.webm
cam13-vid.webm
cam12-vid.webm
cam11-vid.webm
cam10-vid.webm
cam2-vid.webm
After running the command, the directory yields:
cam0-vid.webm
cam10-vid.webm
cam11-vid.webm
cam12-vid.webm
cam1-vid.webm
cam9-vid.webm