terminal find file with latest patch number - bash

I have a folder with a lot of patch files following the pattern
1.1.hotfix1
1.2.hotfix2
2.1.hotfix1
2.1.hotfix2 ...etc
and I have to find the latest patch (2.1.hotfix2 should be the result for this example) using bash.
How can I achieve this?

Sort all files by modification time, newest first, and print the first line.
In case you have other files in the directory, restrict the listing to names containing hotfix only:
ls -t1 *hotfix* | head -n 1
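Note that ls -t orders by modification time, which matches the version order only if the newest patch was also written last. A minimal sketch of sorting by the version number itself, assuming GNU sort and that the patch files sit in the current directory:
printf '%s\n' *hotfix* | sort -V | tail -n 1
sort -V does a version sort, so 1.2.hotfix2 correctly sorts before 1.10.hotfix1, where a plain lexical sort would not.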

You can use find with a regex and take the last line from sort. Note that find matches the regex against the whole path, and its default regex syntax does not support \d, so spell out the digit class; sort -V keeps version numbers in the right order:
find . -maxdepth 1 -type f -regex '.*/[0-9]+\.[0-9]+\.hotfix[0-9]+' | sort -V | tail -1
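With the four files from the question, this prints:
./2.1.hotfix2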

Related

To count distinct file names in a directory using shell scripting

I have one folder which contains around 400-plus files. I have to count the number of distinct files, as there may be more than one version of a file.
For example, if a folder has these 7 files:
V07Y_0021_YP_0100_001.PDF - this is unique
V07Y_0021_YP_0099_001.PDF - this is unique
V07Y_0021_YP_0003_001.PDF - duplicate; _001.PDF is the first version
V07Y_0021_YP_0003_002.PDF - duplicate; _002.PDF is the second version
V07Y_0021_YP_0109_001.PDF - duplicate; _001.PDF is the first version
V07Y_0021_YP_0108_001.PDF - this is unique
V07Y_0021_YP_0109_002.PDF - duplicate; _002.PDF is the second version
In the above files, _0109, _0100, _0099 is the page number, and the _001, _002 after it is the version. There can also be more than two versions of the same file (page number).
So I have to implement logic which gives me a count of 5, since 2 files are duplicates and each file is counted only once.
I have tried various ways like find directoryName -type f -printf '%f\n' | sort -u
This didn't work for me, as I have to match a pattern too.
If anybody knows the logic, please share.
Thanks in advance.
find . -type f -printf '%f\n' |
# Remove the version part
sed 's!_[0-9][0-9][0-9]\.PDF$!!' |
# Remove duplicates
sort -u
would output:
V07Y_0021_YP_0003
V07Y_0021_YP_0099
V07Y_0021_YP_0100
V07Y_0021_YP_0108
V07Y_0021_YP_0109
Tested on repl
If you just want to count :
ls targetDirectory/V07Y_0021_YP* | cut -d'_' -f4 | sort -u | wc -l
This prints the number of unique items.
ls: list the files; cut: take the fourth '_'-separated field (the page number); sort: remove duplicates; wc: count lines.
You can remove | wc -l to get the list of files.
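Likewise, to get the count directly from the find/sed pipeline above, append wc -l:
find . -type f -printf '%f\n' | sed 's!_[0-9][0-9][0-9]\.PDF$!!' | sort -u | wc -l
With the 7 files above this prints 5.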

How to grab the last result of a find command?

My find command produces the following result
./alex20_0
./alex20_1
./alex20_2
./alex20_3
I saved this result as a variable. Now the only part I really need is the last entry, i.e. the highest number or "latest version".
So from the above output all I need to extract is ./alex20_3 and save that as a variable. Is there a way to just extract whatever the last directory output by the find command is?
I would take the last n characters to extract it, since the output is already in order, but the length won't be the same once we get to version ./alex20_10, etc.
Try this:
your_find_command | tail -n 1
find can list your files in any order. To extract the latest version you have to sort the output of find. The safest way to do this is
find . -maxdepth 1 -name "string" -print0 | sort -zV | tail -zn1
If your implementation of sort or tail does not support -z and you are sure that the filenames are free of line-breaks you can also use
find . -maxdepth 1 -name "string" -print | sort -V | tail -n1
There could be multiple ways to achieve this -
Using the 'tail' command (as suggested by @Roadowl)
find branches -name 'alex*' | tail -n1
Using the 'awk' command
find branches -name 'alex*' | awk 'END{print}'
Using the 'sed' command
find branches -name 'alex*' | sed -e '$!d'
In each case the glob is quoted so that the shell does not expand it before find runs.
Other possible options are to use a bash script, Perl, or any other language. Your best bet is whichever one you find more convenient.
Since you want the file name with the highest version, sort the output of find. Use a version sort (sort -V) so that, e.g., alex20_10 sorts after alex20_9:
$ ls
alex20_0 alex20_1 alex20_2 alex20_3
$ find . -iname "*alex*" -print | sort -V | tail -n 1
./alex20_3
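A minimal pure-bash sketch of the same idea, assuming the files match the alex20_* glob in the current directory and GNU sort is available:
files=(alex20_*)    # collect matching names into an array
latest=$(printf '%s\n' "${files[@]}" | sort -V | tail -n 1)
echo "$latest"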

How do I filter down a subset of files based upon time?

Let's assume I have done lots of work whittling a list of files in a directory down to the 10 files that I am interested in. There were hundreds of files, and I have finally found the ones I need.
I can either pipe out the results of this (piping from ls), or I can say I have an array of those values (doing this inside a script). Doesn't matter either way.
Now, of that list, I want to find only the files that were created yesterday.
We can use tools like find -mtime 1 which are fine. But how would we do that with a subset of files in a directory? Can we pass a subset to find via xargs?
I can do this pretty easily with a for loop. But I was curious if you smart people knew of a one-liner.
If they're in an array:
files=(...)
find "${files[@]}" -mtime 1
If they're being piped in:
... | xargs -d'\n' -I{} find {} -mtime 1
Note that the second one will run a separate find command for each file which is a bit inefficient.
If any of the items are directories and you don't want to search inside of them, add -maxdepth 0 to disable find's recursion.
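Putting those together, a minimal sketch assuming GNU find; -daystart makes -mtime 1 mean "modified yesterday" measured from the start of today, rather than a rolling 24-to-48-hours-ago window:
find "${files[@]}" -maxdepth 0 -daystart -mtime 1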
Another option that won't recurse, though I'd just use John's find solution if I were you.
$: stat -c "%n %w" "${files[@]}" | sed -n "
/ $(date +'%Y-%m-%d' --date=yesterday) /{ s/ .*//; p; }"
The stat will print the name and creation date of files in the array.
The sed "greps" for the date you want and strips the date info before printing the filename.

How to read CSV file stored in variable

I want to read a CSV file using shell,
but for some reason it doesn't work.
I use this to locate the latest-added CSV file in my csv folder
lastCSV=$(ls -t csv-output/ | head -1)
and this to count the lines.
wc -l $lastCSV
Output
wc: drupal_site_livinglab.csv: No such file or directory
If I echo the file it says: drupal_site_livinglab.csv
Your issue is that you're one directory up from the path you are trying to read. The quick fix would be wc -l "csv-output/$lastCSV".
Bear in mind that parsing ls -t, though convenient, isn't completely robust, so you should consider something like this to protect yourself from awkward file names:
last_csv=$(find csv-output/ -mindepth 1 -maxdepth 1 -printf '%T@\t%p\0' |
sort -znr | head -zn1 | cut -zf2-)
wc -l "$last_csv"
GNU find lists all files along with their last modification time, separating the output using null bytes to avoid problems with awkward filenames.
If you remove -maxdepth 1, this becomes a recursive search.
GNU sort arranges the files from newest to oldest, with -z to accept null byte-delimited input.
GNU head -z returns the first record from the sorted list.
GNU cut -z at the end discards the timestamp, leaving you with only the filename.
You can also replace find with stat (again, this assumes that you have GNU coreutils):
last_csv=$(stat csv-output/* --printf '%Y\t%n\0' | sort -znr | head -zn1 | cut -zf2-)
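A quick usage check, assuming the file name from the question:
echo "$last_csv"    # e.g. csv-output/drupal_site_livinglab.csv
wc -l "$last_csv"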

How to get sorted list of files by modified date that match a certain filename and print out part of text in unix shell?

Sorry for the long title. I'm trying to basically write a script that will do a "find" and get a sorted list of all files named README and print out a section of text in them. It's an easy way for me to go to a directory which has a number of project folders and print out summaries. This is what I have so far:
find . -name "README" | xargs -I {} sed -n '/---/,/NOTES/p' {}
I can't seem to get this to be sorted by modified date. Any help would be great!
You can use the -printf option in find to prefix each path with its modification timestamp, sort numerically on that, then strip the timestamp:
$ find . -name 'README' -printf '%T@\t%p\n' | sort -n | cut -f 2-
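To feed that sorted list into the sed extraction from the question, a minimal sketch (assuming the README paths contain no newlines):
find . -name 'README' -printf '%T@\t%p\n' | sort -n | cut -f 2- | xargs -I {} sed -n '/---/,/NOTES/p' {}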
