store output of ls -lrt in two different variables - shell

I want to store the output of ls -lrt | tail -2 in two different variables and get the base file name. The file names have the pattern YYYYMMDD_filename. I want to compare both files with the current date and pick the previous day's file. Please help; I'm new to shell scripting.

This may not answer your question. To get all the files with yesterday's date:
yesterday=$( date -d yesterday +%Y%m%d )
files=( "$yesterday"_* )
It's generally advised to avoid parsing the output of ls.
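If you do want to end up with yesterday's file and its base name, the same glob idea can be sketched as a small function (a sketch, not a definitive solution; it assumes GNU date and names of the form YYYYMMDD_filename, and `pick_previous_day` is a hypothetical helper name):

```shell
#!/bin/sh
# Print the base names of files in a directory whose names start with
# yesterday's date, without parsing the output of ls.
pick_previous_day() {
    dir=$1
    yesterday=$(date -d yesterday +%Y%m%d)
    for f in "$dir/${yesterday}_"*; do
        [ -e "$f" ] || continue      # the glob matched nothing
        printf '%s\n' "${f##*/}"     # strip the directory: base file name
    done
}
```

The `${f##*/}` expansion strips any leading directory components, which gives the base file name the question asks for.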

Related

Bash function to "use" the most recent "dated" file in a dir

I have a dir with a crap load (hundreds) of log files accumulated over time. In certain cases I want to make a note regarding the most recent log (by date in the filename, not by creation time), or I just need some piece of info from it and want to view it quickly; I just know it was (usually) the last one created, but (always) the one with the newest date. So I wanted to make a "simple" function in my bashrc to overcome this problem. Basically, what I want is a function that goes to a specific dir, finds the latest log by date (always in the same format), and opens it with less or whatever pager I want.
The logs are formatted like this:
typeoflog-short-description-$(date "+%-m-%-d-%y")
basically the digits in between the last 3 dashes are what I'm interested in, for example(s):
update-log-2-24-18
removed-cuda-opencl-nvidia-12-2-19
whatever-changes-1-18-19
Now, if it were January 20, 2019 and this was the last log added to the dir, I'd need a way to find the highest number in the last two digits of the filename (which I don't really have a problem with), then check for the highest month two "dashes" back from that last set of digits (whether the month is 2 digits or 1), then do the same for the day of the month, set the result as a local variable, and use it like the following example.
Something like this:
viewlatestlog(){
    local loc="$HOME/.logdir"  # note: "~" would not expand inside double quotes
    local name=$(echo "$loc"/*-19 | ...) # awk or cut or sort, or I could even loop from 1-31 and 1-12 for the days and months
    # I have ideas, but I know there has to be a better way to do this and it's not coming to me; maybe with expr or a couple of sort commands, I'm not sure. It would have been easier if I had made each date number always 2 digits... But I didn't.
    ## But the ultimate goal is that I can run something like this command at the end
    less "$loc/$name"
}
PS. For bonus points you could also tell me if there is a way to automatically copy the filename (with the location and all or without, I don't really care) to my linux clipboard, so when I'm making my note I can "link" to the log file if I ever need to go back to it...
Edit: Cleaned up the post a little bit; I tend to make my questions way too wordy, I apologize.
GNU sort can sort by fields:
$ find . -name whatever-changes-\* | sort -n -t- -k5 -k3 -k4
./whatever-changes-3-01-18
./whatever-changes-1-18-19
./whatever-changes-2-12-19
./whatever-changes-11-01-19
The option -t specifies the field delimiter and the option -k selects the sort keys; fields are numbered starting from 1. The option -n specifies numeric sort.
Assuming your filenames do not contain tabs or newlines, how about:
loc=~/.logdir  # left unquoted so the tilde expands
for f in "$loc"/* ; do
if [[ $f =~ -([0-9]{1,2})-([0-9]{1,2})-([0-9]{2})$ ]]; then
mm=${BASH_REMATCH[1]}
dd=${BASH_REMATCH[2]}
yy=${BASH_REMATCH[3]}
printf "%02d%02d%02d\t%s\n" "$yy" "$mm" "$dd" "$f"
fi
done | sort -r | head -n 1 | cut -f 2
First extract the month, date, and year from the filename.
Then create a date string formatted as "YYMMDD" and prepend to the
filename delimited by a tab character.
Then you can perform the sort command on the list.
Finally, you can obtain the desired (latest) filename by extracting it with head and cut.
Hope this helps.

Pick Oldest file on basis of date in Name of the file

I am stuck in a situation where I have a bunch of files and I need to pick the oldest one on the basis of the time present in the name only, not on the basis of the file timestamp, because I am doing SCP from one system to another, so the timestamps will be the same for all the files once SCP runs.
I have files like
UAT-2019-03-21-16-31.csv
UAT-2019-03-21-17-01.csv
AIT-2019-03-21-17-01.csv
Here, 2019 represents the year, 03 the month, 21 the day, 16 the hours (24-hour format), and 31 the minutes.
I need to pick the UAT-2019-03-21-16-31.csv file from the above files first.
How can I do this in shell scripting?
I tried ls -1, but it sorts alphabetically, which means AIT-2019-03-21-17-01.csv would be picked first; I need the order according to the time mentioned in the file name.
You can try this
ls -1 | sort -t"-" -k2 -k3 -k4 -k5 -k6 | head -n1
Output :
UAT-2019-03-21-16-31.csv
Curious about alternative answers, as I know that parsing ls output is not ideal.
A robust and efficient way to do this would be to convert the filename timestamps to epoch time and find the oldest among them.
You need to write a script that does the following, in order:
Get all the filename timestamp into a variable.
Convert all filename timestamp to epoch time.
Find the oldest and get the filename.
The command to convert a filename timestamp to epoch time would be:
date -d"2019-03-21T17:01" +%s
date -d"YYYY-MM-DDTHH:MM" +%s
You can try these steps in a script.
Hope this helps you start writing it.
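The three steps above can be sketched like this (a sketch, not a definitive implementation; it assumes GNU date, names of the form NAME-YYYY-MM-DD-HH-MM.csv with no hyphen in the prefix, and `oldest_by_name` is a hypothetical helper name):

```shell
#!/bin/sh
# Pick the oldest file by the timestamp embedded in its name.
oldest_by_name() {
    dir=$1
    oldest_epoch='' oldest_file=''
    for f in "$dir"/*-????-??-??-??-??.csv; do
        [ -e "$f" ] || continue          # glob matched nothing
        base=${f##*/}
        ts=${base%.csv}; ts=${ts#*-}     # e.g. 2019-03-21-16-31
        d=${ts%-??-??}                   # date part: 2019-03-21
        hm=${ts#"$d"-}                   # time part: 16-31
        # Convert to epoch seconds; skip names date cannot parse.
        epoch=$(date -d "${d}T${hm%-*}:${hm#*-}" +%s) || continue
        if [ -z "$oldest_epoch" ] || [ "$epoch" -lt "$oldest_epoch" ]; then
            oldest_epoch=$epoch
            oldest_file=$base
        fi
    done
    printf '%s\n' "$oldest_file"
}
```

Called on a directory containing the three example files, this prints UAT-2019-03-21-16-31.csv.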

Unix Shell Scripting using Date Command

Ok, so I'm trying to write a script to wc files using the date command. The format of the files, for example, goes like this: testfile20170104.gz.
Now the files are set up to have yesterday's date in yyyymmdd format. So if today is 1/5/2017, the file will carry the previous day, 1/4/2017, in yyyymmdd format, as in the example above.
Normally, to count the lines in the file, all one needs to do is input: gzcat testfile20170104.gz | wc -l.
However, what I want to do is run a script, or even a for loop, that gzcats the file, but instead of having to copy and paste the filename into the command line, I want to use the date command to put yesterday's date into the filename in yyyymmdd format.
So as a template something like this:
gzcat testfile*.gz|wc -l | date="-1 days"+%Y%m%d
Now I know what I have above is COMPLETELY wrong but you get the picture. I want to replace the '*' with the output from the date command, if that makes sense...
Any help will be much much appreciated!
Thanks!
You want:
filename="testfile$( date -d yesterday +%Y%m%d ).gz"
zcat "$filename"
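To get the count in one go, the same idea can be wrapped in a small function (a sketch; `count_yesterday_lines` is a hypothetical name, and `-d yesterday` is GNU date syntax):

```shell
#!/bin/sh
# Count the lines of yesterday's gzipped file without typing its name.
count_yesterday_lines() {
    dir=$1
    filename="$dir/testfile$(date -d yesterday +%Y%m%d).gz"
    zcat "$filename" | wc -l
}
```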

Find text files in bash

I have a bunch of emails as text files in multiple directories under one directory. I am trying to write a script where I would type in a date as 3 separate command-line arguments, like so:
findemail 2015 20 04
the format being yyyy/dd/mm, and it would bring up all filenames for emails that were sent on that day. I am unsure where to start with this, though. I figured I could possibly use find, but I am new to scripting, so I am unsure. Any help would be greatly appreciated!
The timestamp in the email looks like:
TimeStamp: 02/01/2004 at 11:19:02 (still in the same format as the input)
grep -lr "$(printf "^TimeStamp: %02i/%02i/%04i" "$3" "$2" "$1")" path/to/directory
The regex looks for mm/dd/yyyy, built from $3 (month), $2 (day), and $1 (year), matching the question's yyyy/dd/mm argument order; swap "$3" and "$2" if you want the more sensible European dd/mm/yyyy order.
The command substitution $(command ...) runs command ... and substitutes its output into the command line which contains the command substitution. So we use a subshell which runs printf to create the regex argument to grep.
The -l option says to list the names of matching files; the -r option says to traverse a set of directories recursively. (If your grep is too pedestrian to have the -r option, it's certainly not hard to concoct a find expression which does the same. See e.g. here.)
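Wrapped into the requested findemail function, the whole thing looks like this (a sketch; the search directory is passed as a fourth argument here for testability, you would hard-code your own path):

```shell
#!/bin/sh
# findemail YEAR DAY MONTH DIR
# Lists files under DIR whose TimeStamp line matches the given date,
# using the question's yyyy/dd/mm argument order.
findemail() {
    pattern=$(printf '^TimeStamp: %02d/%02d/%04d' "$3" "$2" "$1")
    grep -lr "$pattern" "$4"
}
```

One caveat: some printf implementations treat zero-padded arguments such as 08 or 09 as invalid octal numbers, so you may need to strip leading zeros from the input first.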
The easiest thing to do would be to use a search utility such as grep. Grep has a very useful recursive option that allows searching for a string in all the files in a directory (and its subdirectories), and it's easy to use.
Assuming you have your timestamp in a variable called timestamp, then this would return a list of filenames that contain the timestamp:
grep -lr "$timestamp" /Your/Main/Directory/Goes/Here
EDIT: To clarify, this would only search for the exact string, so it needs to be in the exact same format as in the searched text.

Find and replace date within a file

My apologies if my title is not descriptive enough, I believe the following will be.
I have 3 files which are just plain text, within each file, is a date
Date: 2012-08-31 for example
I would like to get a command/script to find this and update it to the current date, but the date will be ever changing and may not be known going in (without viewing the contents of the file).
Knowing what the date is, it's simple enough with sed, but how can I do this knowing the syntax of the line I want to modify but not the specific values? ("Date: " at least is unchanging.)
Assuming your date format is unchanging, and all three files are the only three text files in your PWD, you could use GNU sed like this:
sed -r 's/Date: [0-9]{4}-[0-9]{2}-[0-9]{2}/Date: 2012-09-01/g' *.txt
today=$(date +%F)
sed -r -i.bak "s/Date: [0-9]{4}-[0-9]{2}-[0-9]{2}/Date: $today/g" file1 file2 file3
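The same substitution as a reusable function, keeping a .bak backup of each file (a sketch; `update_dates` is a hypothetical name, and -r and -i.bak are GNU sed options):

```shell
#!/bin/sh
# Replace any "Date: YYYY-MM-DD" in the given files with today's date,
# leaving a .bak copy of each original file.
update_dates() {
    today=$(date +%F)
    sed -r -i.bak "s/Date: [0-9]{4}-[0-9]{2}-[0-9]{2}/Date: $today/g" "$@"
}
```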
