Implementing a datalogger in bash

Hi, I'm a newbie in Bash scripting.
I need to log a data stream from a specific IP address and generate a logfile for each day as "file-$date.log" (i.e. at 00:00:00 UT, close the previous day's file and create the one corresponding to the new day).
I need to show the data stream on screen while it is logged to a file.
I tried this solution, but it does not work well because it never closes the initial file.
Apparently the condition check never executes while the first command of the pipe is something other than a constant string like echo "something".
#!/bin/bash
log_data() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date -u '+%j %Y-%m-%d %H:%M:%S')" "$line"
    done
}
register_data() {
    while :; do
        > stream.txt
        DATE=$(date -u "+%j %Y-%m-%d %H:%M")
        HOUR=$(date -u "+%H:%M:%S")
        file="file-$DATE.log"
        while [[ "${HOUR}" != 00:00:00 ]]; do
            tail -f stream.txt | tee "${file}"
            sleep 1
            HOUR=$(date -u "+%H:%M:%S")
        done
        > stream.txt
    done
}
nc -vn $IP $IP_port | log_data >> stream.txt &
register_data
I'll be glad if someone can give me some clues to solve this problem.
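A likely cause, though it is not stated in the original post: tail -f never exits, so the tail -f stream.txt | tee "${file}" pipeline blocks forever and the HOUR check below it is never re-evaluated, which is why the first file never closes. A minimal sketch of an alternative that derives the filename from the current UT date for every line, so the log rolls over naturally at 00:00:00 (assuming GNU date, and that $IP and $IP_port are set by the caller):
#!/bin/bash
# Sketch only: pick the target file per line instead of polling the clock.
log_data() {
    while IFS= read -r line; do
        day=$(date -u '+%Y-%m-%d')   # current UT day; changes at 00:00:00
        printf '%s %s\n' "$(date -u '+%j %Y-%m-%d %H:%M:%S')" "$line" |
            tee -a "file-$day.log"   # tee prints to the screen and appends to the day's file
    done
}
nc -vn "$IP" "$IP_port" | log_data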

Related

Shell script - is there a faster way to write date/time per second between start and end time?

I have this script (which works fine) that writes the date/time for every second, from a start date/time to an end date/time, to a file:
while read line; do
    FIRST_TIMESTAMP="20230109-05:00:01" #this is normally a variable that changes with each $line
    LAST_TIMESTAMP="20230112-07:00:00"  #this is normally a variable that changes with each $line
    date=$FIRST_TIMESTAMP
    while [[ $date < $LAST_TIMESTAMP || $date == $LAST_TIMESTAMP ]]; do
        date2=$(echo $date | sed 's/ /-/g' | sed "s/^/'/g" | sed "s/$/', /g")
        echo "$date2" >> "OUTPUTFOLDER/output_LABELS_$line"
        date=$(date -d "$date +1 sec" +"%Y%m%d %H:%M:%S")
    done
done < external_file
However, this sometimes needs to run 10 times, and the start and end date/times sometimes lie days apart, which makes the script take a long time to write all that data.
Now I am wondering if there is a faster way to do this.
Avoid using a separate date call for each date. In the next example I added a safety parameter maxloop, to avoid losing resources when the dates are wrong.
#!/bin/bash
awkdates() {
    maxloop=1000000
    awk \
        -v startdate="${first_timestamp:0:4} ${first_timestamp:4:2} ${first_timestamp:6:2} ${first_timestamp:9:2} ${first_timestamp:12:2} ${first_timestamp:15:2}" \
        -v enddate="${last_timestamp:0:4} ${last_timestamp:4:2} ${last_timestamp:6:2} ${last_timestamp:9:2} ${last_timestamp:12:2} ${last_timestamp:15:2}" \
        -v maxloop="${maxloop}" \
        'BEGIN {
            T1 = mktime(startdate);
            T2 = mktime(enddate);
            linenr = 1;
            while (T1 <= T2) {
                printf("%s\n", strftime("%Y%m%d %H:%M:%S", T1));
                T1 += 1;
                if (linenr++ > maxloop) break;
            }
        }'
}
mkdir -p OUTPUTFOLDER
while IFS= read -r line; do
    first_timestamp="20230109-05:00:01" #this is normally a variable that changes with each $line
    last_timestamp="20230112-07:00:00"  #this is normally a variable that changes with each $line
    awkdates >> "OUTPUTFOLDER/output_LABELS_$line"
done < <(printf "%s\n" "line1" "line2")
Using epoch time (+%s and @) with GNU date and GNU seq to produce datetimes in ISO 8601 format (seq emits one @-prefixed epoch value per line, and date -uf - parses each line from standard input):
begin=$(date -ud '2023-01-12T00:00:00' +%s)
end=$(date -ud '2023-01-12T00:00:12' +%s)
seq -f "@%.0f" "$begin" 1 "$end" |
    date -uf - -Isec
2023-01-12T00:00:00+00:00
2023-01-12T00:00:01+00:00
2023-01-12T00:00:02+00:00
2023-01-12T00:00:03+00:00
2023-01-12T00:00:04+00:00
2023-01-12T00:00:05+00:00
2023-01-12T00:00:06+00:00
2023-01-12T00:00:07+00:00
2023-01-12T00:00:08+00:00
2023-01-12T00:00:09+00:00
2023-01-12T00:00:10+00:00
2023-01-12T00:00:11+00:00
2023-01-12T00:00:12+00:00
If you're using macOS/BSD's date utility instead of the GNU one, the equivalent command to parse would be:
(bsd)date -uj -f '%FT%T' '2023-01-12T23:34:45' +%s
1673566485
...and the reverse process uses the -r flag instead of -d, sans the "@" prefix:
(bsd)date -uj -r '1673566485' -Iseconds
2023-01-12T23:34:45+00:00
(gnu)date -u -d '@1673566485' -Iseconds
2023-01-12T23:34:45+00:00

Issues with using grep to get a count of a string in a loop

I have a set of search strings in a file (File1) and a content file (File2). I am trying to loop through all the search strings in File1, get a count of each search string within File2, and output it. I want to automate this and make it generic so I can search through multiple content files. However, I don't seem to get the exact count when I execute this loop: I get a count of "0" for each of the strings, although I have those strings in the file. I am unable to figure out what I am doing wrong and could use some help!
Below is the script I came up with:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    count=$(echo cat "$2" | grep -c "$line")
    echo "$count - $line"
done < "$1"
Command I am using to run this script:
./scanscript.sh File1.log File2.log
I say this because I ran the command below separately and it gives the right value. It works by itself, but I want to put it in a loop:
cat File2.log | grep -c "Search String"
Sample Data for File 1 (Search Strings):
/SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/
/SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/
Sample Data for File 2 (Content File):
./SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:29:
./SERVER_NAME2/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:100:
./SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:143:
./SERVER_NAME4/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:223:
./SERVER_NAME5/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:5589:
The problem is this line:
count=$(echo cat "$2" | grep -c "$line")
Here echo simply prints the text cat File2.log, so grep searches that literal string instead of the file's contents, which is why every count is 0. It should be changed to:
count=$(grep -Fc "$line" "$2")
Also note that -F makes grep treat the search string as a fixed string instead of a regex, so every character is matched literally.
Full code:
while IFS='' read -r line || [[ -n "$line" ]]; do
    count=$(grep -Fc "$line" "$2")
    echo "$count - $line"
done < "$1"
Run it as:
./scanscript.sh File1.log File2.log
Output:
1 - /SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/
1 - /SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/
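Not part of the original answer, but if File1 holds many search strings, a single awk pass over File2 avoids re-reading it once per string. A sketch, using index() for fixed-string matching like grep -F (note the output order follows awk's internal array order, not File1's line order):
awk 'NR == FNR { cnt[$0] = 0; next }                  # first file: record each search string
    { for (p in cnt) if (index($0, p)) cnt[p]++ }     # second file: count substring hits
    END { for (p in cnt) print cnt[p], "-", p }' File1.log File2.log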

Is there a BASH script allowing me to grep from /var/log/messages?

I need a script (if one exists; I am not confident in my own scripting ability) that can bring up specified information from /var/log/messages. I need to show logged traffic on specific dates and times and for specific protocols (i.e. show all insecure FTP traffic on October 23 between 12:30 pm and 12:35 pm). Is there a script that can do this, or could anyone create a quick, simple one that can do the job?
This is the script I have, although it is not working completely:
#!/bin/bash
read -p "Enter month (first 3 letters): " month
read -p "Enter day of month: " day
read -p "Enter starting time (HH:MM:SS): " stime
read -p "Enter ending time (HH:MM:SS): " ftime
read -p "Enter chain: " chain
read -p "Enter any other modifiers (TTL, SRC, DSP, SPT, DPT, IN, OUT, etc): " modifier
if [ -z "$month" ]; then
    month='IN='
fi
if [ -z "$chain" ]; then
    chain='IN='
fi
if [ -z "$modifier" ]; then
    modifier='IN='
fi
if [ -z "$stime" ] && [ -z "$ftime" ]; then
    cat /var/log/messages | grep -i "$month $day" | grep -i $chain | grep -i $modifier
else
    cat /var/log/messages | grep -i "$month $day" | grep -i $chain | grep -i $modifier | sed -n "/$stime/,/$ftime/p"
fi
It isn't working well, but it's better than nothing. Perhaps it can be improved.
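One possible improvement (my sketch, not from the original post; it assumes the usual syslog layout where the third field is the HH:MM:SS timestamp): because the time format is fixed-width, awk can compare the time field as a plain string, and empty answers can fall back to match-everything defaults instead of the 'IN=' placeholders:
#!/bin/bash
read -p "Enter month (first 3 letters): " month
read -p "Enter day of month: " day
read -p "Enter starting time (HH:MM:SS): " stime
read -p "Enter ending time (HH:MM:SS): " ftime
read -p "Enter filter (chain, protocol, etc.): " filter

grep -i "^$month  *$day " /var/log/messages |         # extra spaces cover single-digit days
    grep -i -- "${filter:-.}" |                       # '.' matches every line when no filter given
    awk -v s="${stime:-00:00:00}" -v e="${ftime:-23:59:59}" \
        '$3 >= s && $3 <= e'                          # fixed-width times compare correctly as strings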

Formatting a bash cat pipe to mail command

When I wrote this script it worked just fine. What it does is diff two files, and if diff reports no differences, that indicates there is no change between today's file and the previous day's file, which means the download is stale.
However, when I email it, the result is not formatted. I tried putting in a newline so that the email is legible, but the mail function does not work and the file is too garbled to read.
#!/bin/bash
dayofweek=$(/bin/date +%w)
today=$(/bin/date +%Y.%m.%d_)
yesterday=$(/bin/date -d '1 day ago' +%Y.%m.%d_)
destination="/sbclocal/stmevt3/dailymetrics/EQ_PERFORMANCE/"
file1=OPTS_TRIP_TRIP_csv_Oct2014.csv
file2=OPTS_TRIPnon-penny1-20_TRIPnon-penny1-20_Oct2014.csv
file3=OPTS_TRIPnon-penny21-50_TRIPnon-penny21-50_Oct2014.csv
file4=OPTS_TRIPnon-penny51-100_TRIPnon-penny51-100_Oct2014.csv
file5=OPTS_TRIPpenny1-20_TRIPpenny1-20_Oct2014.csv
file6=OPTS_TRIPpenny21-50_TRIPpenny21-50_Oct2014.csv
file7=OPTS_TRIPpenny51-100_TRIPpenny51-100_Oct2014.csv
for i in $file1 $file2 $file3 $file4 $file5 $file6 $file7
do
    if diff $destination$today$i $destination$yesterday$i > /dev/null ; then
        printf "$today$i may be stale - please notify production\n" >> /tmp/eq_diffs.$today
        sleep 2
    else
        echo " " > /dev/null
    fi
done
#think we might have to do a if file exists
cat /tmp/eq_diffs.$today | mail -s "EQ performance diffs" casper@casper.com
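On the "if file exists" comment: a minimal sketch of that guard (my addition, not from the original script), which also skips the mail when nothing was flagged:
report="/tmp/eq_diffs.$today"
# -s is true only when the file exists and is non-empty
if [ -s "$report" ]; then
    mail -s "EQ performance diffs" casper@casper.com < "$report"
fi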

How to find the first occurrence of a date greater than or equal to a particular date in a text file using a shell script

past_date='2013-11-14'
initial_time=$(grep -o -m1 "$past_date [0-9][0-9]:[0-9][0-9]:[0-9][0-9]" logfile.txt)
Here I am trying to find the first occurrence of a date greater than or equal to '2013-11-14'. The code above only returns the line containing that exact date; if that date is not present, it should return the next date greater than 2013-11-14.
Using awk:
past_date='20131114'
awk '{d=$1;gsub(/-/,"",d);if (d>=p) {print;exit}}' p=$past_date logfile
2013-11-15 15:45:40 Starting agent install process
If you use bash, then you might want to try something like:
past_date='2013-11-14'
initial_time=$(grep -oP '\d{4}-\d\d-\d\d \d\d:\d\d:\d\d' < logfile.txt | \
while read LINE ; do if [ "$LINE" '>' "$past_date" ]; then echo $LINE; break; fi ; done)
while read line
do
    initial_time=$(echo $line | sed -e 's/\([0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]\).*/\1/')
    file_content_date=$(date -d "$initial_time" +%Y%m%d)
    comparison_past_date=$(date -d "$past_date" +%Y%m%d)
    if [ $comparison_past_date -le $file_content_date ]; then
        comparison_start_date=$(date -d "$file_content_date" +%Y%m%d)
        break
    fi
done < logfile.txt
