I want the code in Bash scripting.
It should print the dates in the manner below, one weekend per line:
From: 2015-October-03 2015-October-04
2015-October-10 2015-October-11
...
To: 2017-October-21 2017-October-22
2017-October-28 2017-October-29
So it should print the weekend dates of every month from 2015 until today, in exactly the above format. Please help me at the earliest.
Here is a solution for your query.
Solution:
#!/bin/bash
# Number of days between the start date and today; edit the start date as needed
Date_Diff_Count=$(( ( $(date +%s) - $(date -d "2015-01-01" +%s) ) / 60 / 60 / 24 ))

for i in $(seq -"$Date_Diff_Count" 0)
do
    # Keep only Saturdays and Sundays; pull the month, day and year fields
    VALUE=$(date -d "+$i day" | grep -Ei "Sat|Sun" | awk '{print $2" "$3" "$6}')
    # If the day was a weekend day, reprint it as YYYY-Month-DD
    [[ -n ${VALUE} ]] && date -d "${VALUE}" +%Y-%B-%d
done > sample.txt

# Join every two consecutive lines (Saturday and Sunday) onto one line
paste -d " " - - < sample.txt
Output
2015-January-03 2015-January-04
2015-January-10 2015-January-11
2015-January-17 2015-January-18
2015-January-24 2015-January-25
2015-January-31 2015-February-01
...
2016-May-07 2016-May-08
2016-May-14 2016-May-15
2016-May-21 2016-May-22
2016-May-28 2016-May-29
...
2017-October-07 2017-October-08
2017-October-14 2017-October-15
2017-October-21 2017-October-22
2017-October-28 2017-October-29
Explanation
Date_Diff_Count holds the number of days between the start date and the current date. Edit the start date as you wish.
The for loop runs from -Date_Diff_Count to 0; for example, if Date_Diff_Count is 500, the sequence starts at -500 and ends at 0.
VALUE fetches only the month, day and year after piping the output of date through grep, so it is non-empty only for Saturdays and Sundays.
If VALUE is not empty, the date is reformatted as YYYY-Month-DD.
The loop's output is saved in the sample.txt file.
The final paste command merges every 2 consecutive lines into a single line. If you want to merge 3 lines, use paste -d " " - - -
The -d option sets the delimiter that separates the merged fields; you can use any other character to suit your requirements.
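For example, a quick sketch of what that final paste does (the four dates here are just placeholders):
printf '%s\n' 2015-October-03 2015-October-04 2015-October-10 2015-October-11 | paste -d " " - -
2015-October-03 2015-October-04
2015-October-10 2015-October-11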
I have a file that consists of the following...
false|aaa|user|aaa001|2014-12-11|
false|bbb|user|bbb||
false|ccc|user|ccc|2021-10-19|
false|ddd|user|ddd|2018-11-16|
false|eee|user|eee|2020-06-02|
I want to use the date in the 5th column to calculate the number of days from the current date and append it to each line in the file.
The end result would be a file that looks like the following, assuming the current date is 1/13/2022...
false|aaa|user|aaa001|2014-12-11|2590
false|bbb|user|bbb||
false|ccc|user|ccc|2021-10-19|86
false|ddd|user|ddd|2018-11-16|1154
false|eee|user|eee|2020-06-02|590
Some lines in the file will not contain a date value (which is expected). I need a solution for a Bash script on Linux.
I am able to submit a command using echo for a single line and then calculate the number of days from the current date by using cut on the 5th field (see below)...
echo "false|aaa|user|aaa001|2014-12-11" | echo $(( ($(date --date=date +"%Y-%m-%d" +%s) - $(date --date=cut -d'|' -f5 +%s) )/(60*60*24) ))
2590
I don't know how to do this one line at a time, capture the 'number of days' value and then append it to each line in the file.
Here's an approach using:
paste to append the outputs,
sed to mark the empty date fields, and
awk to calculate the desired days.
This works with GNU date. BSD date has to use something like date -jf x +%s.
EDIT: Updated the comparison date to the current day.
% current=$(date +%m/%d/%Y)
% paste -d"\0" file <(cut -d"|" -f5 file |
    sed 's/^$/#/' |
    xargs -Ix date -d x +%s 2>&1 |
    awk -v cur="$(date -d "$current" +%s)" '/invalid/{print 0; next}
        {print int((cur-$1)/3600/24)}')
false|aaa|user|aaa001|2014-12-11|2590
false|bbb|user|bbb||0
false|ccc|user|ccc|2021-10-19|86
false|ddd|user|ddd|2018-11-16|1154
false|eee|user|eee|2020-06-02|590
Also, date returns date: invalid date ‘#’ in the empty case. If any other implementation behaves differently, the awk regex has to be adjusted accordingly.
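If you would rather not depend on date's error message at all, here is a minimal alternative sketch, assuming GNU awk for mktime and the same file layout; lines with an empty date field are left untouched instead of getting a 0 appended:
awk -F'|' -v cur="$(date +%s)" '{
    d = $5
    if (d == "") { print; next }          # no date in the 5th column: keep the line as-is
    gsub(/-/, " ", d)                     # "2014-12-11" -> "2014 12 11" for mktime
    print $0 int((cur - mktime(d " 00 00 00")) / 86400)
}' file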
Data
% cat file
false|aaa|user|aaa001|2014-12-11|
false|bbb|user|bbb||
false|ccc|user|ccc|2021-10-19|
false|ddd|user|ddd|2018-11-16|
false|eee|user|eee|2020-06-02|
I have a file containing a date-time value on each line.
I have a command to change all the values to today's date, but I need to be able to change them not only to today: let's say the first 10 lines are changed to today, the next 10 lines to yesterday's date, and so on.
Could you please help me on this one?
file snippet:
bla|TRANSACTTIME=20181127153310|bla|bla
bla|TRANSACTTIME=20181127153310|bla|bla
bla|TRANSACTTIME=20181127153310|bla|bla
bla|TRANSACTTIME=20181127153310|bla|bla
I think this should work:
#!/bin/bash
set +x
STEP=3 #size of the block you want to modify
DATE_STEP=1 #how many days you want to step
BASEDATE=20181127 #basedate you want to replace
LINES=$(wc -l < "$1")
BLOCKS=$((LINES / STEP ))
MODULE=$((LINES % STEP ))
if [ "$MODULE" -ne "0" ];
then
BLOCKS=$((BLOCKS + 1))
fi
START=1
END=$STEP
ADD_DAYS=0
for i in $(seq 1 $BLOCKS);
do
NEWDATE=$(date +'%Y%m%d' -d"today+$ADD_DAYS days")
#sed is used twice, first to get the required lines and then to do the replacement
sed -n "${START},${END}p" "$1" | sed "s/$BASEDATE/$NEWDATE/"
START=$((END + 1))
END=$((END + STEP))
ADD_DAYS=$((ADD_DAYS + DATE_STEP))
done
The output goes directly to stdout; redirect it to a file if needed.
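For example, assuming the script is saved as shift_dates.sh (the name is just for illustration) and the snippet above is the input file:
bash shift_dates.sh file > file.new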
I am trying to write a little bash script, where you can specify a number of minutes and it will show the lines of a log file from those last X minutes.
To get the lines, I am using sed
sed -n '/time/,/time/p' LOGFILE
On the CLI this works perfectly; in my script, however, it does not.
# Get date
now=$(date "+%Y-%m-%d %T")
# Get date minus X number of minutes -- $1 first argument, minutes
then=$(date -d "-$1 minutes" +"%Y-%m-%d %T")
# Filter logs -- $2 second argument, filename
sed -n '/'$then'/,/'$now'/p' $2
I have tried different approaches and none of them seem to work:
result=$(sed -n '/"$then"/,/"$now"/p' $2)
sed -n "/'$then'/,/'$now'/p" "$2"
sed -n "/$then/,/$now/p" $2
sed -n "/$then/,/$now/p" "$2
Any suggestions?
I am on Debian 5; echo $SHELL says /bin/sh.
EDIT: The script produces no output, so there is no error showing up.
In the logfile every entry starts with a date like this 2013-05-15 14:21:42,794
I assume that the main problem is that you try to perform an arithmetic comparison by string matching. sed -n '/23/,/27/p' gives you the lines between the first line that contains 23 and the next line that contains 27 (and then again from the next line that contains 23 to the next line that contains 27, and so on). It does not give you all lines that contain a number between 23 and 27. If the input looks like
19
22
24
26
27
30
it does not output anything (since there is no 23). An awk solution that uses string matching has the same problem. So, unless your then date string occurs verbatim in the log file, your method will fail. You have to convert your date strings into numbers (drop the -, <space>, and :) and then check whether the resulting number is in the right range, using an arithmetical comparison rather than a string match. This goes beyond the capabilities of sed; awk and perl can do it rather easily. Here is a perl solution:
#!/bin/bash
# Current time and the time $1 minutes ago, as sortable numbers (YYYYmmddHHMMSS)
NOW=$(date "+%Y%m%d%H%M%S")
THEN=$(date -d "-$1 minutes" "+%Y%m%d%H%M%S")

perl -wne '
    # Pull the leading timestamp apart, squash it into a number and compare
    if (m/^(....)-(..)-(..) (..):(..):(..)/) {
        $date = "$1$2$3$4$5$6";
        if ($date >= '"$THEN"' && $date <= '"$NOW"') {
            print;
        }
    }' "$2"
Don't give yourself a headache with nested quotes. Use the -v option with awk to pass the value of a shell variable into the script:
#!/bin/bash
# Get date
now=$(date "+%Y-%m-%d %T")
# Get date minus X number of minutes -- $1 first argument, minutes
delta=$(date -d "-$1 minutes" +"%Y-%m-%d %T")
# Filter logs -- $2 second argument, filename
awk -v n="$now" -v d="$delta" '$0~d,$0~n' "$2"
Also, don't use the names of shell keywords (e.g. then) as variable names.
How can I convert one date format to another format in a shellscript?
Example:
the old format is
MM-DD-YY HH:MM
but I want to convert it into
YYYYMMDD.HHMM
Like "20${D:6:2}${D:0:2}${D:3:2}.${D:9:2}${D:12:2}00", if the old date in the $D variable.
Take advantage of the shell's word splitting and the positional parameters:
date="12-31-11 23:59"
IFS=" -:"
set -- $date
echo "20$3$1$2.$4$5" #=> 20111231.2359
myDate="21-12-11 23:59"
#fmt is DD-MM-YY HH:MM
outDate="20${myDate:6:2}${myDate:3:2}${myDate:0:2}.${myDate:9:2}${myDate:12:2}00"
case "${outDate}" in
2[0-9][0-9][0-9][0-1][0-9][0-3][0-9].[0-2][0-9][0-5][0-9][0-5][0-9] )
: # date is in the correct format, nothing to do
;;
* ) echo bad format for ${outDate} >&2
;;
esac
Note that if you have a large file to process, then the above is an expensive(ish) process. For file-based data I would recommend something like
cat infile
....|....|21-12-11 23:59|22-12-11 00:01| ...|
awk '
function reformatDate(inDate) {
    if (inDate !~ /[0-3][0-9]-[0-1][0-9]-[0-9][0-9] [0-2][0-9]:[0-5][0-9]/) {
        print "bad date format found in inDate= " inDate
        return -1
    }
    # in format assumed to be DD-MM-YY HH:MM(:SS)
    return (2000 + substr(inDate,7,2) ) substr(inDate,4,2) substr(inDate,1,2) \
           "." substr(inDate,10,2) substr(inDate,13,2) \
           ( substr(inDate,16,2) ? substr(inDate,16,2) : "00" )
}
BEGIN {
    # add or comment out an entry for each column of data that is a date value to convert;
    # below is just an example, edit as needed.
    dateCols[3]=3
    dateCols[4]=4
    # for awk people, I call this the pragmatic use of associative arrays ;-)

    # assuming pipe-delimited data for columns
    # ....|....|21-12-11 23:59|22-12-11 00:01| ...|
    FS=OFS="|"
}
# main loop for each record
{
    for (i=1; i<=NF; i++) {
        if (i in dateCols) {
            #dbg print "i=" i "\t$i=" $i
            $i = reformatDate($i)
        }
    }
    print $0
}' infile
output
....|....|20111221.235900|20111222.000100| ...|
I hope this helps.
There is a good answer down already, but you said you wanted an alternative in the comments, so here is my [rather awful in comparison] method:
read sourcedate < <(echo "12-13-99 23:59");
read sourceyear < <(echo $sourcedate | cut -c 7-8);
if [[ $sourceyear < 50 ]]; then
read fullsourceyear < <(echo -n 20; echo $sourceyear);
else
read fullsourceyear < <(echo -n 19; echo $sourceyear);
fi;
read newsourcedate < <(echo -n $fullsourceyear; echo -n "-"; echo -n $sourcedate | cut -c -5);
read newsourcedate < <(echo -n $newsourcedate; echo -n $sourcedate | cut -c 9-14);
read newsourcedate < <(echo -n $newsourcedate; echo :00);
date --date="$newsourcedate" +%Y%m%d.%H%M%S
So, the first line just reads a date in, then we get the two-digit year, then we append it to '20' or '19' based on if it's less than 50 (so this would give you years from 1950 to 2049 - feel free to shift the line). Then we append a hyphen and the month and date. Then we append a space and the time, and lastly we append ':00' as the seconds (again feel free to make your own default). Lastly we use GNU date to read it in (since it's been standardized now) and print it in a different format (which you can edit).
It's a lot longer and uglier than cutting up the string, but having the format in the last line may be worth it. Also you could shorten it significantly with the shorthand you just learned in the first answer.
Good luck.
I am writing a script in Bash that needs to check through log files for ERROR entries. I plan to run this hourly as a cron job, so I only want it to return ERROR-type entries that occurred within the last hour (all server times are GMT). I establish the following variables:
# Log file directory
LOGPATH="/path/to/logs/"
# Current date and time
CURDATE=`date +%Y-%m-%d`
CURTIME=`date +%H:%M:%S`
# Old date and time
OLDDATE=`date +%Y-%m-%d -d "1 hour ago"`
OLDTIME=`date +%H:%M:%S -d "1 hour ago"`
All log files adhere to the file name format ktYEAR-MONTH-DAY.root.log.txt, where YEAR/MONTH/DAY are replaced with the date whose entries are recorded in the file. So, for instance, today's log file would be kt2011-08-15.root.log.txt. An example entry of the contents is
2011-08-15 | 19:30:02 | ERROR | 18333 | 337 | n/a | dms | default | error | XMLRPC Lucene - addDocument - Reason: Failed to parse XML-RPC request: An invalid XML character (Unicode: 0xb) was found in the element content of the document.
The columns of interest are the 1st, 2nd and 3rd (the value may be "INFO", "DEBUG", etc., but I am only interested when it is "ERROR"), and the last column, which is the body of the log message.
What I am trying to accomplish is having this Bash script parse through the file(s) that have entries spanning the last hour of activity (as defined by the 1st and 2nd columns), and if the 3rd column contains the string "ERROR", display the right-most column's contents. My confusion comes when trying to determine how to parse through the log file(s) based on $CURTIME and $OLDTIME, made worse when midnight comes and I then have to search through the previous day's log file. I would prefer not to do a blanket grep-style search through all the log files, as their quantity and size can be excessive, but if that's how it has to be done, then so be it.
awk -F ' \\| ' -v "d=$(date -d "1 hour ago" -u +%Y-%m-%d#%H:%M:%S)" '$3 == "ERROR" && $1"#"$2 > d'
This is as simple as doing string comparison in awk. When you pass midnight, simply add the $OLDDATE file to the search:
if [ "$CURDATE" != "$OLDDATE" ]; then
cat "kt$OLDDATE.root.log.txt" "kt$CURDATE.root.log.txt"
else
cat "kt$CURDATE.root.log.txt"
fi | awk -F "|" -v olddate="$OLDDATE" -v oldtime="$OLDTIME" -v curdate="$CURDATE" '
    BEGIN { olddate = olddate " "; curdate = curdate " "; oldtime = " " oldtime " " }
    $1 == olddate && $2 >= oldtime && $3 == " ERROR " { print $0 }
    $1 > olddate && $3 == " ERROR " { print $0 }'
Can be combined with glenn's solution to be much shorter.
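For example, a minimal sketch of that combination (using the variables defined in the question; d holds the cut-off timestamp in the same form glenn's one-liner builds it):
d=$(date -d "1 hour ago" -u +%Y-%m-%d#%H:%M:%S)
if [ "$CURDATE" != "$OLDDATE" ]; then
    files="kt$OLDDATE.root.log.txt kt$CURDATE.root.log.txt"
else
    files="kt$CURDATE.root.log.txt"
fi
# $files is left unquoted on purpose so it splits into the one or two file names
awk -F ' \\| ' -v d="$d" '$3 == "ERROR" && $1"#"$2 > d' $files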