filename group by a pattern and select only one from each group - bash

I have the following files (as an example; there are actually 60000+), and all the log files follow this pattern:
analyse-ABC008795-84865-201911261249.log
analyse-ABC008795-84866-201911261249.log
analyse-ABC008795-84867-201911261249.log
analyse-ABC008795-84868-201911261249.log
analyse-ABC008795-84869-201911261249.log
analyse-ABC008796-84870-201911261249.log
analyse-ABC008796-84871-201911261249.log
analyse-ABC008796-84872-201911261249.log
analyse-ABC008796-84873-201911261249.log
Only the numbers change across log files. I want to take one file from each category, where files are categorized by their ABC... number. So, as you can see, there are only two categories here:
analyse-ABC008795
analyse-ABC008796
So, what I want is one file (say, the first file) from each category. The output should look like this:
analyse-ABC008795-84865-201911261249.log
analyse-ABC008796-84870-201911261249.log
This should be done in a Bash/Linux environment, so that once I have this list I can use grep to check whether my "searching string" is contained in those files:
ls -l | <what should I do to group and get one file from each category> | grep "searching string"

With bash and awk.
files=(*.log)
printf '%s\n' "${files[@]}" | awk -F- '!seen[$2]++'
Or use find instead of a bash array for a more portable approach.
find . -type f -name '*.log' | awk -F- '!seen[$2]++'
If your find supports -printf and you don't want the leading ./ on the filenames, add this before the pipe |:
-printf '%f\n'
The !seen[$2]++ removes the second and subsequent occurrences of each category, without having to sort the input first. $2 is the second field, as split on the - separator given with -F.
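Tying this back to the question's goal, the deduplicated list can feed grep -l through xargs to report which representative file of each category contains the search string. A sketch, assuming GNU findutils (-printf, xargs -d and -r); "searching string" is the question's placeholder:

```shell
# One representative .log per ABC category, then grep only inside those.
# sort makes "first file in each category" deterministic, since find's
# order is unspecified; xargs -r skips grep entirely when nothing matched.
find . -maxdepth 1 -type f -name 'analyse-*.log' -printf '%f\n' \
  | sort \
  | awk -F- '!seen[$2]++' \
  | xargs -r -d '\n' grep -l "searching string"
```

grep -l prints only the names of files that contain the string, one per line.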

Related

How to print the number of files in a folder (recursively), separated by extension?

For example, I have a folder containing files of different types (.jpg, .png, .txt, ...) and would like to know how many files of each extension there are in my folder, separately.
The output would be something like this:
.jpg : 255
.png : 123
.txt : 12
No extension : 1
For now, I only know how to find how many files exist for one given extension, using this command:
find /folderpath -type f -name '*.jpg' | wc -l
However, I would like it to be able to find the file extensions by itself.
Thanks for your help.
You can do this for a single directory with:
ls | grep '\.' | sed 's/.*\././' | sort | uniq -c
(I'm ignoring files with no . - tweak if you want something else)
I'd suggest fleshing this out into a script (say, extension_counts) that takes a list of directories, and for each one outputs the path followed by the report in the format you wish.
Quick and dirty version:
#!/bin/sh
for dir in $*; do
echo $dir
(cd $dir && ls | grep '\.' | sed 's/.*\././' | sort | uniq -c)
done
... but you should consider hardening this.
Then for the recursive part, you can use find and xargs:
find . -type d | xargs extension_counts
You could be a bit smarter and do it all in one script file by defining extension_counts as a function, but that's an optimisation.
There are some pitfalls to parsing the output of ls (or find). In this case the only potential issue I can think of is filenames containing a newline (yes, this is possible). You could just accept that you're using a tool not designed for weird filenames, or you could write something more robust in a language with firmer data structures, such as Python, Perl, Ruby, Go, etc.
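If you do want to stay in the shell but survive newlines in filenames, a hedged sketch is to keep the stream NUL-delimited end to end (this assumes GNU find's -print0 and GNU awk, which accepts RS set to '\0'):

```shell
# NUL-delimited records survive any filename, newlines included.
find . -type f -print0 \
  | awk -v RS='\0' '
      { sub(/.*\//, "") }                          # keep only the basename
      /\./ { sub(/.*\./, "."); count[$0]++; next } # reduce to ".ext"
      { count["No extension"]++ }
      END { for (e in count) print count[e], e }'
```

The output format matches uniq -c loosely (count first, then the extension), and the order of the END loop is unspecified, so pipe through sort if you need stable output.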
This could be done with a quick awk one liner as well:
find /folderpath -type f -name '*.*' | awk -F"." 'BEGIN{OFS=" : "}{extensions[$NF]++}END{for (ext in extensions) { print ext, extensions[ext]}};'
That awk script splits each line on periods (-F".").
It sets the OFS (Output Field Separator) to " : " with BEGIN{OFS=" : "}.
It loads an array keyed by the file extension, extensions[$NF], where $NF is the last field in the record; the value stored in the array is a running count (++).
Once all the lines are processed, it iterates over the array (for (ext in extensions)) and prints each index and value ({print ext, extensions[ext]}).
I would proceed this way:
list the file names (rather than the paths produced by find):
find . -type f | rev | cut -d/ -f1 | rev
We reverse each line so that we can easily address the last field.
reduce to their extension :
sed -E 's/^.*\././;t end;s/.*/No extension/;:end'
Here we remove everything up to the first dot, or if the substitution could not be done (because there was no dot) we replace everything by "No extension".
sort the result :
sort
group by extension and add the count :
uniq -c
The complete command is as follows:
find . -type f | rev | cut -d/ -f1 | rev | sed -E 's/^.*\././;t end;s/.*/No extension/;:end' | sort | uniq -c
Note that the presentation differs from yours, which could easily be fixed with an additional sed. Sample output:
2 .119
1 .147
[...]
1 .Xauthority
1 .xml
1 .xsession-errors
2 .zip
1 .zshrc
48 No extension
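For the record, the "<extension> : <count>" layout asked for can be recovered with one more sed that swaps the columns uniq -c produces; a sketch built on the command above:

```shell
# Same pipeline, plus a final sed that rewrites uniq -c's
# "  <count> <name>" lines into the "<name> : <count>" layout.
find . -type f | rev | cut -d/ -f1 | rev \
  | sed -E 's/^.*\././;t end;s/.*/No extension/;:end' \
  | sort | uniq -c \
  | sed -E 's/^ *([0-9]+) (.*)/\2 : \1/'
```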

Bash, getting the latest folder based on its name which is a date

Can anyone tell me how to get the name of the latest folder based on its name, which is formatted as a date, using bash? For example:
20161121/
20161128/
20161205/
20161212/
The output should be: 20161212
Just use GNU sort with the -nr flags for a reverse numerical sort:
find . ! -path . -type d | sort -nr | head -1
As an example structure, I have the following folders in my current path:
find . ! -path . -type d
./20161121
./20161128
./20161205
./20161212
See how the sort brings the folder you need to the top:
find . ! -path . -type d | sort -nr
./20161212
./20161205
./20161128
./20161121
and head -1 keeps the first entry alone:
find . ! -path . -type d | sort -nr | head -1
./20161212
To store it in a variable, use command substitution $(), as in:
myLatestFolder=$(find . ! -path . -type d | sort -nr | head -1)
Sorting everything seems like extra work if all you want is a single entry. It could especially be problematic if you need to sort a very large number of entries. Plus, you should note that find-based solutions will by default traverse subdirectories, which might or might not be what you're after.
$ shopt -s extglob
$ mkdir 20160110 20160612 20160614 20161120
$ printf '%d\n' 20+([0-9]) | awk '$1>d{d=$1} END{print d}'
20161120
$
While the pattern 20+([0-9]) doesn't precisely match dates (it's hard to validate dates without at least a couple of lines of code), we've at least got a bit of input validation via printf, and a simple "print the highest" awk one-liner to parse printf's results.
Oh, also, this handles any directory entries that are named appropriately, and does not validate that they are themselves directories. That too would require either an extra test or a different tool.
One method to require items to be directories would be the use of a trailing slash:
$ touch 20161201
$ printf '%s\n' 20+([0-9])/ | awk '$1>d{d=$1} END{print d}'
20161120/
But that loses the input validation (the %d format for printf).
If you felt like it, you could build a full pattern for your directory names though:
$ dates='20[01][0-9][01][0-9][0-3][0-9]'
$ printf '%s\n' $dates/ | awk '$1>d{d=$1} END{print d}'
20161120/
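If you did want real calendar validation, here is a hedged sketch that uses GNU date as the validator (assumes bash; names that match the glob but are not valid dates get skipped):

```shell
# Validate candidates with GNU date before comparing; an entry such as
# 20161232 matches the glob but is rejected as an impossible date.
latest=
for d in 20[0-9][0-9][01][0-9][0-3][0-9]/; do
  d=${d%/}                                   # trailing / restricted the glob to directories
  date -d "$d" >/dev/null 2>&1 || continue   # reject non-dates
  [[ $d > $latest ]] && latest=$d            # lexicographic max = latest date
done
echo "$latest"
```

With these fixed-width YYYYMMDD names, lexicographic comparison and chronological comparison agree, which is why a plain [[ > ]] suffices.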

SHELL printing just right part after . (DOT)

I need to find just the extension of every file in a directory (if two files have the same extension, it should appear only once). I already have that, but the output of my script looks like:
test.txt
test2.txt
hello.iso
bay.fds
hellllu.pdf
I'm using grep -e -e '.' and it just highlights the DOTs.
What I need is just those extensions, in one variable, like txt,iso,fds,pdf.
Is there anyone who could help? I had this working once, but with an array. Today I found out it has to work on dash too.
You can use find with awk to get all unique extensions:
find . -type f -name '?*.?*' -print0 |
awk -F. -v RS='\0' '!seen[$NF]++{print $NF}'
This can be done with find as well, but I think this is easier:
for f in *.*; do echo "${f##*.}"; done | sort -u
If you want a comma-separated list of the unique extensions assigned to a variable, you can follow this:
ext=$(for f in *.*; do echo "${f##*.}"; done | sort -u | paste -sd,)
echo $ext
csv,pdf,txt
Alternatively, with ls:
ls -1 *.* | rev | cut -d. -f1 | rev | sort -u | paste -sd,
The rev/rev pair is required if you have more than one dot in a filename, on the assumption that the extension comes after the last dot. For any other directory, simply change *.* to dirpath/*.* in all of the scripts.
I'm not sure I understand your comment. If you don't assign to a variable, the result is printed to standard output by default. If you want to pass the directory name to a script as a variable, put the code into a script file and replace dirpath with $1, assuming that will be the first argument to the script:
#!/bin/bash
# print unique extension in the directory passed as an argument, i.e.
ls -1 "$1"/*.* ...
If you have subdirectories whose names contain dots, the scripts above include them as well; to limit the results to regular files, replace the ls with
find . -maxdepth 1 -type f -name "*.*" | ...
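Since dash was mentioned, here is a strictly POSIX-sh flavoured sketch with no arrays (note that -maxdepth is a widespread find extension rather than strict POSIX):

```shell
#!/bin/sh
# Sketch: unique extensions of the regular files in a directory
# (first argument, default .) as a comma-separated list.
dir=${1:-.}
ext=$(find "$dir" -maxdepth 1 -type f -name '?*.?*' \
        | sed 's/.*\.//' | sort -u | paste -sd, -)
echo "$ext"
```

The '?*.?*' pattern requires at least one character before the dot, so dotfiles like .profile are not treated as having an extension.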

How to sort the results of find (including nested directories) alphabetically in bash

I have a list of paths based on the results of running the find command in bash. As an example, find returns these files:
test/a/file
test/b/file
test/file
test/z/file
I want to sort the output so it appears as:
test/file
test/a/file
test/b/file
test/z/file
Is there any way to sort the results within the find command, or by piping the results into sort?
If you have the GNU version of find, try this:
find test -type f -printf '%h\0%d\0%p\n' | sort -t '\0' -n | awk -F '\0' '{print $3}'
To use these file names in a loop, do
find test -type f -printf '%h\0%d\0%p\n' | sort -t '\0' -n | awk -F '\0' '{print $3}' | while read file; do
# use $file
done
The find command prints three things for each file: (1) its directory, (2) its depth in the directory tree, and (3) its full name. By including the depth in the output we can use sort -n to sort test/file above test/a/file. Finally we use awk to strip out the first two columns since they were only used for sorting.
Using \0 as a separator between the three fields allows us to handle file names with spaces and tabs in them (but not newlines, unfortunately).
$ find test -type f
test/b/file
test/a/file
test/file
test/z/file
$ find test -type f -printf '%h\0%d\0%p\n' | sort -t '\0' -n | awk -F'\0' '{print $3}'
test/file
test/a/file
test/b/file
test/z/file
If you are unable to modify the find command, then try this convoluted replacement:
find test -type f | while read file; do
printf '%s\0%s\0%s\n' "${file%/*}" "$(tr -dc / <<< "$file")" "$file"
done | sort -t '\0' | awk -F'\0' '{print $3}'
It does the same thing, with ${file%/*} being used to get a file's directory name and the tr command being used to count the number of slashes, which is equivalent to a file's "depth".
(I sure hope there's an easier answer out there. What you're asking doesn't seem that hard, but I am blanking on a simple solution.)
find test -type f -printf '%h\0%p\n' | sort | awk -F'\0' '{print $2}'
The result of find is, for example,
test/a'\0'test/a/file
test'\0'test/file
test/z'\0'test/z/file
test/b'\0'test/b/text file.txt
test/b'\0'test/b/file
where '\0' stands for null character.
These compound strings can be properly sorted with a simple sort:
test'\0'test/file
test/a'\0'test/a/file
test/b'\0'test/b/file
test/b'\0'test/b/text file.txt
test/z'\0'test/z/file
And the final result is
test/file
test/a/file
test/b/file
test/b/text file.txt
test/z/file
(Based on John Kugelman's answer, with the "depth" element removed, as it is entirely redundant.)
If you want to sort alphabetically, the best way is:
find test -print0 | sort -z
(The example in the original question actually wanted files before directories, which is not the same and requires extra steps)
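To consume that NUL-delimited output in a loop, it can be read back with read -d '' (bash; GNU sort -z assumed):

```shell
# Walk the NUL-delimited, sorted list safely, one entry per iteration.
find test -print0 | sort -z \
  | while IFS= read -r -d '' path; do
      printf '%s\n' "$path"   # entries stay intact until printed here
    done
```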
Try this. For reference, it first sorts on the second character of the second field, which only exists for the file directly under test; the r makes that comparison reverse, so that entry sorts first. After that it sorts on the first character of the second field. (-t is the field delimiter, -k is the key.)
find test -name file |sort -t'/' -k2.2r -k2.1
Do an info sort for more info. There are a ton of different ways to use -t and -k together to get different results.

Force sort command to ignore folder names

I ran the following from a base folder ./
find . -name *.xvi.txt | sort
Which returns the following sort order:
./LT/filename.2004167.xvi.txt
./LT/filename.2004247.xvi.txt
./pred/2004186/filename.2004186.xvi.txt
./pred/2004202/filename.2004202.xvi.txt
./pred/2004222/filename.2004222.xvi.txt
As you can see, the filenames follow a regular structure, but the files themselves might be located in different parts of the directory structure. Is there a way of ignoring the folder names and/or directory structure so that the sort returns a list of folders/filenames based ONLY on the file names themselves? Like so:
./LT/filename.2004167.xvi.txt
./pred/2004186/filename.2004186.xvi.txt
./pred/2004202/filename.2004202.xvi.txt
./pred/2004222/filename.2004222.xvi.txt
./LT/filename.2004247.xvi.txt
I've tried a few different switches under the find and sort commands, but no luck. I could always copy everything out to a single folder and sort from there, but there are several hundred files, and I'm hoping that a more elegant option exists.
Thanks! Your help is appreciated.
If your find has -printf you can print both the base filename and the full filename. Sort by the first field, then strip it off.
find . -name '*.xvi.txt' -printf '%f %p\n' | sort -k1,1 | cut -f 2- -d ' '
I have chosen a space as a delimiter. If your filenames include spaces, you should choose another delimiter which is a character that's not in your filenames. If any filenames include newlines, you'll have to modify this because it won't work.
Note that the glob in the find command should be quoted.
If your find doesn't have -printf, you could use awk to accomplish the same thing:
find . -name '*.xvi.txt' | awk -F / '{ print $NF, $0 }' | sort | sed 's/.* //'
The same caveats about spaces that Dennis Williamson mentioned apply here. And for variety, I'm using sed to strip off the sort field, instead of cut.
find . -name '*.xvi.txt' | sort -t'.' -k3 -n
will sort it as you want. The only problem is if a filename or directory name includes additional dots.
To avoid that, you can use:
find . -name '*.xvi.txt' | sed 's/[0-9]\+.xvi.txt$/\\&/' | sort -t'\' -k2 | sed 's/\\//'
