Unix script for checking logs for last 10 days - shell

I have a log table that is maintained for a single day, so the data in the table is only present for one day. However, the logs for it are kept in a Unix directory.
My requirement is to check the logs for the last 10 days and get the count of records that were loaded.
In the log file the pattern looks like this (a Teradata FastLoad log):
**** 13:16:49 END LOADING COMPLETE
Total Records Read = 443303
Total Error Table 1 = 0 ---- Table has been dropped
Total Error Table 2 = 0 ---- Table has been dropped
Total Inserts Applied = 443303
Total Duplicate Rows = 0
I want the script to be parameterized (the parameter will be the stage table name) and to find the records inserted into the table and the error tables for the last 10 days.
Is this possible? Can anyone help me build the Unix script for this?
There are many logs in the logs directory. What if I want to check only the following:
bash-3.2$ ls -ltr 2018041*S_EVT_ACT_FLD*
-rw-rw----+ 1 edwops abgrp 52610 Apr 10 17:37 20180410173658_S_EVT_ACT_FLD.log
-rw-rw----+ 1 edwops abgrp 52576 Apr 11 18:12 20180411181205_S_EVT_ACT_FLD.log
-rw-rw----+ 1 edwops abgrp 52646 Apr 13 18:04 20180413180422_S_EVT_ACT_FLD.log
-rw-rw----+ 1 edwops abgrp 52539 Apr 14 16:16 20180414161603_S_EVT_ACT_FLD.log
-rw-rw----+ 1 edwops abgrp 52538 Apr 15 14:15 20180415141523_S_EVT_ACT_FLD.log
-rw-rw----+ 1 edwops abgrp 52576 Apr 16 15:38 20180416153808_S_EVT_ACT_FLD.log
Thanks.

find . -ctime -10 -type f -print | xargs awk -F= '/Total Records Read/ {print $2}' | paste -sd+ | bc
find . -ctime -10 -type f -print gets the names of files 10 days old or younger in the current working directory. To run it on a different directory, replace . with the path.
awk -F= '/Total Records Read/ {print $2}' uses = as the field separator to pull out the second half of any line containing the key phrase
Total Records Read
paste -sd+ joins the numbers into a single line separated by plus signs
bc evaluates the resulting expression into a single answer
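To parameterize this by stage table name, a minimal sketch along the same lines (assuming the log file names end in _<TABLE>.log as in the listing above; LOG_DIR and the script name are placeholders):

#!/bin/sh
# Usage: ./count_loads.sh S_EVT_ACT_FLD
TABLE="$1"        # stage table name passed as the parameter
LOG_DIR="."       # replace with the path to the log directory

# Sum "Total Records Read" over the last 10 days of logs for this table
find "$LOG_DIR" -ctime -10 -type f -name "*_${TABLE}.log" -print |
    xargs awk -F= '/Total Records Read/ {print $2}' |
    paste -sd+ | bc

# Same idea for the error tables; $2+0 drops the trailing "---- Table has been dropped" text
find "$LOG_DIR" -ctime -10 -type f -name "*_${TABLE}.log" -print |
    xargs awk -F= '/Total Error Table/ {print $2+0}' |
    paste -sd+ | bc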

I could not use find because the system is Solaris, and its find doesn't have the -maxdepth feature. I use a case statement to build FILTER2 and use it like this:
ls -l --time-style=long-iso FOLDER | grep -E $FILTER2
but I know it's not a good way.
LOCAL_DAY=`date "+%d"`
LOCAL_MONTH=`date "+%Y-%m"`
LASTTENDAY_MONTH=`date --date='10 days ago' "+%Y-%m"`
case $LOCAL_DAY in
0*)
    FILTER2="$LASTTENDAY_MONTH-[2-3][0-9]|$LOCAL_MONTH";;
1*)
    FILTER2="$LOCAL_MONTH-0[0-9]|$LOCAL_MONTH-1[0-9]";;
2*)
    FILTER2="$LOCAL_MONTH-1[0-9]|$LOCAL_MONTH-2[0-9]";;
3*)
    FILTER2="$LOCAL_MONTH-2[0-9]|$LOCAL_MONTH-3[0-9]";;
esac
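Since the log names start with a YYYYMMDD timestamp, another way to avoid -maxdepth is to build the file name patterns for the last 10 days directly (a sketch, assuming GNU date is available, as in the case statement above; the table name is just an example taken from the listing earlier):

#!/bin/sh
TABLE="S_EVT_ACT_FLD"    # example table name
i=0
while [ $i -lt 10 ]; do
    day=`date --date="$i days ago" "+%Y%m%d"`
    for f in "$day"*_"$TABLE".log; do
        # the glob stays literal when nothing matches, so test that the file exists
        [ -f "$f" ] && awk -F= '/Total Records Read/ {print FILENAME": "$2}' "$f"
    done
    i=`expr $i + 1`
done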

Related

How to get a filename list with ncftp?

So I tried
ncftpls -l
which gives me a list
-rw-r--r-- 1 100 ftpgroup 3817084 Jan 29 15:50 1548773401.tar.gz
-rw-r--r-- 1 100 ftpgroup 3817089 Jan 29 15:51 1548773461.tar.gz
-rw-r--r-- 1 100 ftpgroup 3817083 Jan 29 15:52 1548773521.tar.gz
-rw-r--r-- 1 100 ftpgroup 3817085 Jan 29 15:53 1548773582.tar.gz
-rw-r--r-- 1 100 ftpgroup 3817090 Jan 29 15:54 1548773642.tar.gz
But all I want is to check the timestamp (which is the name of the tar.gz).
How can I get only the list of timestamps?
As requested: all I wanted to do was delete old backups, so awk was a good idea (at least it was effective), even if those weren't the right parameters at first. My method for deleting old backups is probably not the best, but it works:
ncftpls *authParams* | awk '{ match($9, /^[0-9]+/, a); print a[0] }' | while read fileCreationDate; do
    # anything created more than 600 seconds (10 minutes) ago is considered expired
    VALIDITY_LIMIT="$((`date +%s` - 600))"
    a=$VALIDITY_LIMIT
    b=$fileCreationDate
    if [ $b -lt $a ]; then
        deleteFtpFile $b
    fi
done
You can use awk to display only the file names, which here are the timestamps, like so:
ncftpls -l | awk '{ print $9 }'
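If only the numeric timestamp is needed, without the .tar.gz suffix, the extension can be stripped in the same awk call (a small sketch based on the listing above):

ncftpls -l | awk '{ sub(/\.tar\.gz$/, "", $9); print $9 }'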

Csh - Fetching fields via awk inside xargs

I'm struggling to understand this behavior:
Script behavior: read a file (containing dates); print a list of files in a multi-level directory tree together with their sizes; print the file size only (future step: sum the overall file size).
Starting script:
cat dates | xargs -I {} sh -c "echo '{}: '; du -d 2 "/folder/" | grep {} | head"
2000-03:
1000 /folder/2000-03balbasldas
2000-04:
12300 /folder/2000-04asdwqdas
[and so on]
But when I try to filter via awk on the first field, I still get the whole line
cat dates | xargs -I {} sh -c "echo '{}: '; du -d 2 "/folder/" | grep {} | awk '{print $1}'"
2000-03:
1000 /folder/2000-03balbasldas
2000-04:
12300 /folder/2000-04asdwqdas
I've already approached it via divide-et-impera, and the following command works just fine:
du -d 2 "/folder/" | grep '2000-03' | awk '{print $1}'
1000
I'm afraid that I'm missing something very trivial, but I haven't found anything so far.
Any idea? Thanks!
Input: directory containing folders named YYYY-MM-random_data and a file containing strings:
ls -l
drwxr-xr-x 2 user staff 68 Apr 24 11:21 2000-03-blablabla
drwxr-xr-x 2 user staff 68 Apr 24 11:21 2000-04-blablabla
drwxr-xr-x 2 user staff 68 Apr 24 11:21 2000-05-blablabla
drwxr-xr-x 2 user staff 68 Apr 24 11:21 2000-06-blablabla
drwxr-xr-x 2 user staff 68 Apr 24 11:21 2000-06-blablablb
drwxr-xr-x 2 user staff 68 Apr 24 11:21 2000-06-blablablc
[...]
cat dates
2000-03
2000-04
2000-05
[...]
Expected output: the sum of the disk space occupied by all the files contained in the folders whose names include the strings in the file dates
2000-03: 1000
2000-04: 2123
2000-05: 1222112
[...]
======
But in particular, I'm interested in why awk is not able to fetch the column $1 I asked it to.
Ok it seems I found the answer myself after a lot of research :D
I'll post it here, hoping that it will help somebody else out.
https://unix.stackexchange.com/questions/282503/right-syntax-for-awk-usage-in-combination-with-other-command-inside-xargs-sh-c
The trick was to escape the $ sign: inside the double-quoted string passed to sh -c, the outer shell expands $1 itself (typically to an empty string) before awk ever sees it, so awk effectively runs '{print }' and prints the whole line. Escaping it as \$1 passes the literal $1 through to awk.
cat dates | xargs -I {} sh -c "echo '{}: '; du -d 2 "/folder/" | grep {} | awk '{print \$1}'"
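For the future step of summing the sizes per date, a plain loop avoids the quoting issue entirely (a sketch, assuming GNU du and that the dated directories sit directly under /folder):

while read -r d; do
    # sum the first column of du -s for every directory matching this date
    total=$(du -s /folder/"$d"* 2>/dev/null | awk '{s += $1} END {print s + 0}')
    printf '%s: %s\n' "$d" "$total"
done < dates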
Using GNU Parallel it looks like this:
parallel --tag "eval du -s folder/{}* | perl -ne '"'$s+=$_ ; END {print "$s\n"}'"'" :::: dates
--tag prepends the line with the date.
{} is replaced with the date.
eval du -s folder/{}* finds all the dirs starting with the date and gives the total du from those dirs.
perl -ne '$s+=$_ ; END {print "$s\n"}' sums up the output from du.
Finally, there is a bit of quoting trickery to get it all quoted correctly.

How to use awk and sed to count number of elements in a column

There are some emails in my email account's inbox:
12:00 <harry#hotmail.com>
12:20 <harry#hotmail.com>
12:22 <jim#gmail.com>
12:30 <clare#bbc.org>
12:40 <harry#hotmail.com>
12:50 <jim#gmail.com>
12:55 <harry#hotmail.com>
I would like to use the command line (awk, sed, grep, etc.) to count the number of emails I received from different people (changing all the minutes to :00). How can I do that?
I would prefer the result to look like this:
Number of email time From
3 12:00 <jim#gmail.com>
4 12:00 <harry#hotmail.com>
1 12:00 <clare#bbc.org>
I appreciate your help!
Here is how to do it with awk:
awk '{a[$2]++} END {for (i in a) print a[i]"\t"i}' file
4 <harry#hotmail.com>
1 <clare#bbc.org>
2 <jim#gmail.com>
You may want to extract the address column with awk and then use uniq after sort:
$ awk '{print $2}' file | sort | uniq -c
1 <clare#bbc.org>
4 <harry#hotmail.com>
2 <jim#gmail.com>
You can also get the header using printf:
$ printf "Number of email\temail\n%s\n" "$(awk '{print $2}' file | sort | uniq -c)"
Number of email email
1 <clare#bbc.org>
4 <harry#hotmail.com>
2 <jim#gmail.com>
We initially have to sort in order for uniq to work properly. From man uniq:
Filter adjacent matching lines from INPUT
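If the time column should be kept in the result with the minutes changed to :00, as in the expected output, the hour can be folded into the awk key (a sketch, still counting by the address in the second field):

awk '{split($1, t, ":"); key = t[1] ":00\t" $2; a[key]++}
     END {for (i in a) print a[i] "\t" i}' file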

How to check on FTP if there are files on the list older than 7 days

I have a list of files from remote FTP Server:
drwxrwxrwx 2 test-backup everyone 4096 Jul 8 02:30 .
drwxrwxrwx 5 0 0 4096 Jul 23 07:02 ..
-rw-rw-rw- 1 test-backup everyone 352696 Jul 18 02:30 expdp_TEST11P2_custom_Fri.dmp.gz
-rw-rw-rw- 1 test-backup everyone 352796 Jul 21 02:30 expdp_TEST11P2_custom_Mon.dmp.gz
-rw-rw-rw- 1 test-backup everyone 352615 Jul 19 02:30 expdp_TEST11P2_custom_Sat.dmp.gz
-rw-rw-rw- 1 test-backup everyone 352626 Jul 20 02:30 expdp_TEST11P2_custom_Sun.dmp.gz
-rw-rw-rw- 1 test-backup everyone 10511523642 Jul 24 03:08 expdp_TEST11P2_custom_Thu.dmp.gz
-rw-rw-rw- 1 test-backup everyone 10496881744 Jul 22 03:03 expdp_TEST11P2_custom_Tue.dmp.gz
-rw-rw-rw- 1 test-backup everyone 10504557195 Jul 23 03:03 expdp_TEST11P2_custom_Wed.dmp.gz
I need to check if there are any files older than 7 days. Do you have any ideas how I can do this in Bash?
As I understand the issue, you have a file listing received via FTP (and you do not have access to find on the remote server). Assuming that you have the directory listing stored in a file called ftptimes, then you can identify files older than 7 days via:
$ awk -v cutoff="$(date -d "7 days ago" +%s)" '{line=$0; "date -d \""$6" " $7" " $8 "\" +%s" |getline; fdate=$1} fdate < cutoff {print line} ' ftptimes
From your sample data, the output would be:
drwxrwxrwx 2 test-backup everyone 4096 Jul 8 02:30 .
Addressing the parts of the awk command, one by one:
-v cutoff="$(date -d "7 days ago" +%s)"
This defines an awk variable called cutoff that will have the Unix time (seconds since 1970-01-01 00:00:00 UTC) corresponding to seven days ago
line=$0;
This saves the current input line into the variable line for later use.
"date -d \""$6" " $7" " $8 "\" +%s" |getline; fdate=$1
This converts the date given by ftp into Unix time, reads that time in, and saves it in a variable called fdate.
fdate < cutoff {print line}
If the file date is less than the cutoff date, then the line is printed.
In the sample data that you provided, the only file older than seven days is the current directory (.) which dates to Jul 8.
As an example, if we wanted files older than 5 days, then more files would be printed:
$ awk -v cutoff="$(date -d "5 days ago" +%s)" '{line=$0; "date -d \""$6" " $7" " $8 "\" +%s" |getline; fdate=$1} fdate < cutoff {print line} ' ftptimes
drwxrwxrwx 2 test-backup everyone 4096 Jul 8 02:30 .
-rw-rw-rw- 1 test-backup everyone 352696 Jul 18 02:30 expdp_TEST11P2_custom_Fri.dmp.gz
-rw-rw-rw- 1 test-backup everyone 352615 Jul 19 02:30 expdp_TEST11P2_custom_Sat.dmp.gz
In the above, I assumed that the info from ftp was stored in a file. It is also possible to pipe it in:
echo ls | ftp host port | awk -v cutoff="$(date -d "5 days ago" +%s)" '{line=$0; "date -d \""$6" " $7" " $8 "\" +%s" |getline; fdate=$1} fdate < cutoff {print line} '
where host and port are replaced by the host and port of your server.
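The per-line calls to date can also be avoided by doing the conversion inside awk with mktime, though that is a GNU awk extension (a sketch, assuming gawk and that all listed dates fall in the current year, the same assumption date makes above):

awk -v cutoff="$(date -d '7 days ago' +%s)" -v year="$(date +%Y)" '
BEGIN {
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) mon[m[i]] = i     # month name -> number
}
{
    split($8, t, ":")                           # hour and minute from HH:MM
    fdate = mktime(year " " mon[$6] " " $7 " " t[1] " " t[2] " 00")
    if (fdate < cutoff) print
}' ftptimes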
Bash version
The above can also be accomplished in bash, although it requires explicit looping. Again, assuming the FTP information is in the file ftptimes:
$ cutoff="$(date -d "7 days ago" +%s)"; while read line; do set -- $line; fdate=$(date -d "$6 $7 $8" +%s) ; [ $fdate -lt $cutoff ] && echo $line ; done <ftptimes
drwxrwxrwx 2 test-backup everyone 4096 Jul 8 02:30 .
The find command is the most flexible for date ranges. You have 3 basic tests to choose from: -atime +n (last access time was more than n*24 hours ago); -ctime +n (file status changed more than n*24 hours ago); and -mtime +n (file was modified more than n*24 hours ago). Note: n alone means exactly n*24 hours ago, +n means more than n*24 hours ago, and -n means less than n*24 hours ago. Also note that any fractional part of the 24-hour period is ignored, which means that to get all files more than 6 days old (that is, at least 7 days old) you may have to use +6 rather than +7. Example:
find /path/to/files -type f -mtime +6
This will find all files (not directories) in /path/to/files that were modified more than 6 days ago (that is, at least 7 days old). You can test with -atime, -ctime, and -mtime to see which fits your needs.

pick up files based on dates in ksh script

I have this list of files. Now I have to pick the latest file based on some conditions:
3679 Jul 21 23:59 belk_rpo_error_**po9324892**_07212014.log
0 Jul 22 23:59 belk_rpo_error_**po9324892**_07222014.log
3679 Jul 23 23:59 belk_rpo_error_**po9324892**_07232014.log
22 Jul 22 06:30 belk_rpo_error_**po9324267**_07012014.log
0 Jul 20 05:50 belk_rpo_error_**po9999992**_07202014.log
411 Jul 21 06:30 belk_rpo_error_**po9999992**_07212014.log
742 Jul 21 07:30 belk_rpo_error_**po9999991**_07212014.log
0 Jul 23 2014 belk_rpo_error_**po9999991**_07232014.log
For a PARTICULAR Order_No (marked with ** **):
If the latest file is 0 kB, then we discard it (and the rest of the files with the same Order_No as well).
If the latest file is non-zero, then I take it (only the latest one).
Then append its contents to a txt file.
My expected output would be:
411 Jul 21 06:30 belk_rpo_error_**po9999992**_07212014.log
3679 Jul 23 23:59 belk_rpo_error_**po9324892**_07232014.log
22 Jul 22 06:30 belk_rpo_error_**po9324267**_07012014.log
I am at my wits' end here. I can't seem to figure out how to compare dates in Unix. Any help is very much appreciated.
You can try something like:
touch test.txt
for var in `find . ! -empty -exec ls -r {} \;`
do
    cat "$var" >> test.txt
done
untested
use stat to emit date (epoch time), size and filename.
use awk to filter out zero-length files and extract order number.
sort by order number and date
awk to pick up the last filename for each order number
stat -c $'%Y\t%s\t%n' *.log |
awk -F'\t' -v OFS='\t' '
    $2 > 0 {
        split($3, a, /_/)
        print a[4], $1, $3
    }' |
sort -t $'\t' -k1,1 -k2,2n |
awk -F'\t' '
    NR > 1 && $1 != prev_order {print filename}
    {filename = $3; prev_order = $1}
    END {print filename}
'
The sort command might be wrong: In order to group by order number, you might need to sort first by file time then by order number.
If I understand your question, the resulting files need to be concatenated and appended to a file. If the above pipeline is working OK, then pipe into | xargs cat >> something.log
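Putting the whole thing together with that append step, the pipeline might look like this (a sketch; something.log is just a placeholder output file, and the log filenames are assumed to contain no whitespace):

stat -c $'%Y\t%s\t%n' *.log |
    awk -F'\t' -v OFS='\t' '$2 > 0 {split($3, a, /_/); print a[4], $1, $3}' |
    sort -t $'\t' -k1,1 -k2,2n |
    awk -F'\t' 'NR > 1 && $1 != prev_order {print f} {f = $3; prev_order = $1} END {print f}' |
    xargs cat >> something.log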
