Make the times from a log file relative to the start time - bash

I have a logfile with this format:
10:33:56 some event occurs
10:33:57 another event occurs
10:33:59 another one occurs
I want to make the times relative to the start time:
00:00:00 some event occurs
00:00:01 another event occurs
00:00:03 another one occurs
using a bash script. That would let me compare execution delays across different runs more easily.

One can make this script rebase_time.sh:
adddate() {
    while IFS= read -r line; do
        # Extract the HH:MM:SS timestamp in a single awk call instead of three.
        log_date=$(echo "$line" | awk 'BEGIN{FS="[ [/:]+"} {printf "%s:%s:%s", $1, $2, $3}')
        # The first timestamp seen becomes the 00:00:00 base.
        if [[ -z "$first_date" ]]; then
            first_date=$log_date
        fi
        StartDate=$(date -u -d "$first_date" +"%s")
        FinalDate=$(date -u -d "$log_date" +"%s")
        # Difference between the two epoch values, formatted back to HH:MM:SS (GNU date).
        diff=$(date -u -d "0 $FinalDate sec - $StartDate sec" +"%H:%M:%S")
        echo "$diff${line#"$log_date"}"
    done
}
adddate < "$1"
and call it this way:
./rebase_time.sh events.log
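For long logs, forking date twice per line adds up. The whole rebase can likely be done in a single awk pass instead (a sketch, assuming every line starts with a plain HH:MM:SS field and the log never crosses midnight):
awk '{
    split($1, t, ":")
    secs = t[1]*3600 + t[2]*60 + t[3]              # this line in absolute seconds
    if (NR == 1) start = secs                      # first timestamp is the base
    d = secs - start
    $1 = sprintf("%02d:%02d:%02d", d/3600, d%3600/60, d%60)
    print
}' events.log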

Related

bulk write in Unix using shell script

Is there any way to write data to a file in bulk from a shell script, instead of writing it line by line?
In the script below, I want to write the difference between each file's arrival time and generation time to test.csv.
########################################################
echo "Starting the Execution for Time difference\n";
############################################################
# Functions used across the script
datediff() {
Unixtime=`echo $1 $2 $3 $4`
Filetime=`echo $5 $6 $7 $8`
echo $Unixtime;
echo $Filetime;
d1=`date -d "$Unixtime" +%s`
d2=`date -d "$Filetime" +%s`
echo $d1;
echo $d2;
TIME_DIFF=`expr $d1 - $d2`
TIME_DIFF=`expr $TIME_DIFF / 60`
echo $TIME_DIFF;
echo "$Unixtime,$Filetime,$TIME_DIFF,$9" >> ../test.csv
}
rm -f ../test.csv;
for i in `ls -1 | grep -v 'DelayCheck.s*'`
do
DayMonth=`ls -lrt $i | awk '{print $7" "$6" "}'`
Year=`ls --full-time $i | awk '{print $6}' | cut -c1-4`
HourMin=`ls -lrt $i | awk '{print " "$8}'`
timeA=`echo $DayMonth $Year $HourMin`
FileYearMonDay=`ls -ltr $i | awk '{print $9}' | awk -F'--' '{print $3}' | cut -c2-9`
timeB1=`date -d $FileYearMonDay +'%d %b %Y'`
timeB2=`echo $i | awk -F'--' '{print substr($3,10,13)}' | sed -e 's/../:&/2g'`
timeB=`echo $timeB1 $timeB2`
echo "Time A is $timeA";
echo " Time b is $timeB";
datediff $timeA $timeB $i
done
echo $?;
The script works fine, but there are over 100k files, so its performance is bad.
I searched for a way to write data to a file in bulk, but didn't find a solution.
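For what it's worth, the usual cure for the bulk-write part is to stop appending inside the function and redirect the whole loop once, so test.csv is opened a single time instead of 100k times. A minimal sketch of the pattern (emit_row is a hypothetical helper; GNU date and the question's '--' file-name convention are assumed, and the HH:MM part of the generation time is omitted for brevity):
emit_row() {
    # $1 = arrival epoch, $2 = generation epoch, $3 = file name
    printf '%s,%s,%s,%s\n' "$1" "$2" "$(( ($1 - $2) / 60 ))" "$3"
}

for i in *; do
    case $i in DelayCheck.s*) continue ;; esac     # skip the script itself
    arrival=$(date -r "$i" +%s)                    # file mtime as epoch seconds
    generated=$(date -d "$(echo "$i" | awk -F'--' '{print substr($3,2,8)}')" +%s)
    emit_row "$arrival" "$generated" "$i"
done > ../test.csv                                 # one redirection for all rows
Replacing the per-file ls/awk pipelines with one date -r call also cuts the process count, which is likely the real bottleneck with 100k files.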

How to remove the usage of a temp file and read data from the command itself

I have a shell script and I need help making it efficient. It collects metrics from a Postgres database by running pgbench and parsing its output.
The current script writes the pgbench output to a temp file and then reads the metrics back from it.
The script works; I want to stop using temp files and keep the data in memory instead.
INPUT=`mktemp`
#/usr/pgsql-9.5/bin/pgbench -c1 -j1 -t 1000 -S man > $INPUT
TESTTIME=15 #seconds
echo "Waiting $TESTTIME seconds..."
/usr/pgsql-9.5/bin/pgbench -c1 -j1 -T $TESTTIME -r man > $INPUT
OLDIFS=$IFS
IFS=" "
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
tps=`cat $INPUT |awk '/^tps/ {print $3}' |awk -F'.' '{print $1}' |head -n1`
update_l=`cat $INPUT |awk '/UPDATE/ {print $1}' |tail -n1`
select_l=`cat $INPUT |awk '/SELECT/ {print $1}' |tail -n1`
insert_l=`cat $INPUT |awk '/INSERT/ {print $1}' |tail -n1`
echo ${PLOTTER_PREFIX}.tps $tps kv
echo ${PLOTTER_PREFIX}.update_latency $update_l kv
echo ${PLOTTER_PREFIX}.select_latency $select_l kv
echo ${PLOTTER_PREFIX}.insert_latency $insert_l kv
#{ while read line; do
# # statsite_buildData ${PLOTTER_PREFIX}.latency average ${latency average} kv
# echo ${PLOTTER_PREFIX}.${line} kv
# done } < $INPUT
statsite_sendData
#echo $Test
IFS=$OLDIFS
rm -f $INPUT
You can capture the output of the command to a variable, like so:
output=$(/usr/pgsql-9.5/bin/pgbench -c1 -j1 -T $TESTTIME -r man)
Then just use echo instead of cat and substitute $INPUT with the variable name.
tps=`echo "$output" | awk '/^tps/ {print $3}' | awk -F'.' '{print $1}' |head -n1`
update_l=`echo "$output" | awk '/UPDATE/ {print $1}' | tail -n1`
...
I would also suggest using $() instead of surrounding commands with backticks. So the above would become:
tps=$(echo "$output" | awk '/^tps/ {print $3}' | awk -F'.' '{print $1}' |head -n1)
update_l=$(echo "$output" | awk '/UPDATE/ {print $1}' | tail -n1)
...
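If you also want to cut down the number of spawned processes, the four pipelines can likely be folded into one awk pass over $output (a sketch, with the field positions assumed from the original pipelines):
read -r tps update_l select_l insert_l < <(
    echo "$output" | awk '
        /^tps/ && t == "" { split($3, a, "."); t = a[1] }  # first tps line, integer part
        /UPDATE/ { u = $1 }                                # last match wins, like tail -n1
        /SELECT/ { s = $1 }
        /INSERT/ { i = $1 }
        END { print t, u, s, i }'
)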

Bash Shell Issue

currentDate="20160324"
headerDumpFile="header.txt"
#currentDate="$(date +'%Y%m%d')"
printf "Current date in yyyymmdd format %s\n" $currentDate
contId=""
labelList="c12,playlist-play,play,pause,end,playlist-end,heartbeat,ns_st_cl"
params="corporate=abc&user=abc&password=abc&startdate=$currentDate&site=abc&extralabels=$labelList"
url="https://example.com/v1/start?$params"
a=1
while true
do
curl -D $headerDumpFile -v -k -H "Accept-Encoding:gzip" $url > $a.zip
contId= cat $headerDumpFile | grep "X-CS-Continuation-Id:" | awk '{print $NF}'
if [ "$contId" ];then
printf "Breaking the Loop.."
break;
fi
url="https://example.com/v1/start?$params&continuationId=${contId}"
a=$((a + 1))
echo $contId
echo $url
done
When I do echo $url, the contId value in it is blank, but when I do echo $contId it is printed correctly. Please suggest.
Perhaps this is what you want to achieve:
contId=$(cat $headerDumpFile | grep "X-CS-Continuation-Id:" | awk '{print $NF}')
Or the simpler:
contId=$(awk '/X-CS-Continuation-Id:/ {print $NF}' $headerDumpFile)
Note that, contrary to what you assumed, echo $contId isn't displaying anything in your code. What you actually see is the output of the bogus contId= cat $headerDumpFile | grep "X-CS-Continuation-Id:" | awk '{print $NF}' line: because of the space after the equals sign, contId= is a per-command environment assignment for cat, so the shell variable is never set and the pipeline prints its result straight to the terminal.
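A short demo of that assignment rule (using a hypothetical variable foo):
unset foo
foo=bar true       # the assignment lives only in true's environment
echo "foo=$foo"    # prints "foo=": the shell variable was never set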

Variable loss in redirected bash while loop

I have the following code
for ip in $(ifconfig | awk -F ":" '/inet addr/{split($2,a," ");print a[1]}')
do
bytesin=0; bytesout=0;
while read line
do
if [[ $(echo ${line} | awk '{print $1}') == ${ip} ]]
then
increment=$(echo ${line} | awk '{print $4}')
bytesout=$((${bytesout} + ${increment}))
else
increment=$(echo ${line} | awk '{print $4}')
bytesin=$((${bytesin} + ${increment}))
fi
done < <(pmacct -s | grep ${ip})
echo "${ip} ${bytesin} ${bytesout}" >> /tmp/bwacct.txt
done
I would like this to print the incremented values to bwacct.txt, but instead the file is full of zeroes:
91.227.223.66 0 0
91.227.221.126 0 0
127.0.0.1 0 0
My understanding of Bash is that a redirected for loop should preserve variables. What am I doing wrong?
First of all, simplify your script! There is usually a better way to do things in bash, and much of the time you can rely on pure bash features instead of spawning awk or other tools.
Then add some debugging!
Here is a slightly refactored script with debugging:
#!/bin/bash
for ip in $(ifconfig | grep -oP 'inet addr:\K[0-9.]+')
do
bytesin=0
bytesout=0
while read -r line
do
read -r subIp _ _ increment _ <<< "$line"
if [[ $subIp == "$ip" ]]
then
((bytesout+=increment))
else
((bytesin+=increment))
fi
# some debugging
echo "line: $line"
echo "subIp: $subIp"
echo "bytesin: $bytesin"
echo "bytesout: $bytesout"
done <<< "$(pmacct -s | grep "$ip")"
echo "$ip $bytesin $bytesout" >> /tmp/bwacct.txt
done
Much clearer now, huh? :)
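As an aside on the title: variables really are lost when the loop runs in a subshell, which is what happens when you pipe into it. Both the question's < <(...) and the here-string above keep the loop in the current shell. A minimal illustration:
total=0
seq 3 | while read -r n; do ((total+=n)); done
echo "$total"   # prints 0: the pipeline ran the loop in a subshell

total=0
while read -r n; do ((total+=n)); done < <(seq 3)
echo "$total"   # prints 6: process substitution keeps the variables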

BASH better way to monitor files

I've made a Bash script to monitor some server log files for certain data, and my method probably isn't the most efficient.
One part that particularly bugs me is that I have to write a newline to the monitored log so that the same line won't be read over and over.
Feedback would be greatly appreciated!
#!/bin/bash
serverlog=/home/skay/NewWorld/server.log
onlinefile=/home/skay/website/log/online.log
offlinefile=/home/skay/website/log/offline.log
index=0
# Creating the file
if [ ! -f "$onlinefile" ]; then
touch $onlinefile
echo "Name Date Time" >> "$onlinefile"
fi
if [ ! -f "$offlinefile" ]; then
touch $offlinefile
echo "Name Date Time" >> "$offlinefile"
fi
# Functions
function readfile {
# Login Variables
loginplayer=`tail -1 $serverlog | grep "[INFO]" | grep "joined the game" | awk '{print $4}'`
logintime=`tail -1 $serverlog | grep "[INFO]" | grep "joined the game" | awk '{print $2}'`
logindate=`tail -1 $serverlog | grep "[INFO]" | grep "joined the game" | awk '{print $1}'`
# Logout Variables
logoutplayer=`tail -1 $serverlog | grep "[INFO]" | grep "left the game" | awk '{print $4}'`
logouttime=`tail -1 $serverlog | grep "[INFO]" | grep "left the game" | awk '{print $2}'`
logoutdate=`tail -1 $serverlog | grep "[INFO]" | grep "left the game" | awk '{print $1}'`
# Check for Player Login
if [ ! -z "$loginplayer" ]; then
echo "$loginplayer $logindate $logintime" >> "$onlinefile"
echo "Player $loginplayer login detected" >> "$serverlog"
line=`grep -rne "$loginplayer" $offlinefile | cut -d':' -f1`
if [ "$line" > 1 ]; then
sed -i "$line"d $offlinefile
unset loginplayer
unset line
fi
fi
# Check for Player Logout
if [ ! -z "$logoutplayer" ]; then
echo "$logoutplayer $logoutdate $logouttime" >> "$offlinefile"
echo "Player $loginplayer logout detected" >> "$serverlog"
line=`grep -rne "$logoutplayer" $onlinefile | cut -d':' -f1`
if [ "$line" > 1 ]; then
sed -i "$line"d $onlinefile
unset logoutplayer
unset line
fi
fi
}
# Loop
while [ $index -lt 100 ]; do
readfile
done
Thanks!
Instead of using multiple
tail -n 1 file
try the following construct:
tail -f file | while read -r line; do
echo "read: $line"
done
It will be much more reliable... and it won't read the same line twice ;)
Note: every grep/awk/etc call spawns a new process. It's not that this is critical, but process creation is expensive; if new lines occur only rarely, it's perfectly fine.
Where I wanted to get with this: if you are interested, take a look at bash's built-in string manipulation, e.g. the parameter expansions ${x/pattern/replacement} and ${x//pattern/replacement} and friends, or try extended regexes with grep.
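Applied to the script above, the loop might look like this; a sketch only, with the field layout (date, time, [INFO], player, action) assumed from the awk calls in the question, and tail -F used so a rotated log doesn't kill the pipe:
tail -F "$serverlog" | while read -r logdate logtime level player action; do
    case $action in
        "joined the game") echo "$player $logdate $logtime" >> "$onlinefile" ;;
        "left the game")   echo "$player $logdate $logtime" >> "$offlinefile" ;;
    esac
done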
