how to grep multiple words from different lines in same log - bash

I want to grep a log file for lines containing two words that appear on different lines of the same log. The words are checkCredit?msisdn=766117506 and creditLimit.
The log file is like this
freeMemory 103709392time Mon Mar 12 04:02:13 IST 2018
http://127.0.0.1:8087/DialogProxy/services/ServiceProxy/checkCredit?msisdn=767807544&transid=45390124
freeMemory 73117016time Mon Mar 12 04:02:14 IST 2018
statusCode200
{statusCode=200, response=outstandingAmount 0.0 creditLimit 0.0, errorResponse=, responseTime=0}
this is balnce 0.0
What is the best way to do this?

Give this a try:
grep 'checkCredit?msisdn\|creditLimit' inputfile
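Against the sample log from the question (saved as inputfile), that should print the two matching lines:
$ grep 'checkCredit?msisdn\|creditLimit' inputfile
http://127.0.0.1:8087/DialogProxy/services/ServiceProxy/checkCredit?msisdn=767807544&transid=45390124
{statusCode=200, response=outstandingAmount 0.0 creditLimit 0.0, errorResponse=, responseTime=0}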

You can use
$ grep -e 'checkCredit?msisdn=766117506' -e 'creditLimit' <filename>

Simply try
grep creditLimit log.txt | grep checkCredit
(Note that this only prints lines containing both words, so it will not match words that sit on different lines.)

When I run grep 'checkCredit?msisdn=766117506\|creditLimit' catalina.out_bckp, it displays only the creditLimit details. Why doesn't it show the checkCredit?msisdn=766117506 details?
The log file as shown in your question post does not contain the 766117506, so it's no wonder that grep doesn't find it. If you really have data with 766117506, add them to the question.
I used the command grep 'checkCredit?msisdn\|this is balnce' catalina.out_bckp | awk '$4 < 10 {print}' and it gave me good results, but some values are missing.
You haven't used creditLimit in that pattern, so it's no wonder that those lines are missing.
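If the goal is to print each checkCredit request together with the creditLimit response line that follows it, a small awk sketch could do the pairing (this assumes the response line always comes after its request, which the sample log suggests but the question does not guarantee):
awk '/checkCredit\?msisdn=766117506/ { req = $0; next }    # remember the request line
     /creditLimit/ && req { print req; print; req = "" }   # print it together with the next creditLimit line
' catalina.out_bckp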

Related

What is the Exact Use and Meaning of "IFS=!"

I was trying to understand the usage of IFS but there is something I couldn't find any information about.
My example code:
#!/bin/sh
# (C) 2016 Ergin Bilgin
IFS=!
for LINE in $(last -a | sed '$ d')
do
echo $LINE | awk '{print $1}'
done
unset IFS
I use this code to print the users from last line by line. I totally understand the usage of IFS: in this example, when I use the default IFS, my loop reads the output word by word, and when I use IFS=! it reads line by line, as I wish. The problem is that I couldn't find anything about that "!" anywhere, and I don't remember where I learned it. When I google for the same kind of behaviour, I see other values, which are usually strings.
So, what is the meaning of that "!", and how does it give me the result I want?
Thanks
IFS=! merely sets IFS to a value that does not occur in the input, which stops the word splitting you would otherwise get. Having said that, using a for loop here is not recommended; it is better to use read in a while loop like this to print the first column, i.e. the username:
last | sed '$ d' | while read -r u _; do
echo "$u"
done
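For example, fed one of the sample last lines shown further down, read -r u _ puts the first word in u and the rest of the line in _, so only the username is printed (a minimal illustration, not part of the original answer):
printf '%s\n' 'tom pts/8 Wed Apr 27 04:25 still logged in michener.jexium-island.net' \
  | while read -r u _; do echo "$u"; done
# prints: tom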
As you are aware, if the output of last had a !, the script would split the input lines on that character.
The output format of last is not standardized (not in POSIX for instance), but you are unlikely to find a system where the first column contains anything but the name of whatever initiated an action. For instance, I see this:
tom pts/8 Wed Apr 27 04:25 still logged in michener.jexium-island.net
tom pts/0 Wed Apr 27 04:15 still logged in michener.jexium-island.net
reboot system boot Wed Apr 27 04:02 - 04:35 (00:33) 3.2.0-4-amd64
tom pts/0 Tue Apr 26 16:23 - down (04:56) michener.jexium-island.net
continuing to
reboot system boot Fri Apr 1 15:54 - 19:03 (03:09) 3.2.0-4-amd64
tom pts/0 Fri Apr 1 04:34 - down (00:54) michener.jexium-island.net
wtmp begins Fri Apr 1 04:34:26 2016
with Linux, and different date-formats, origination, etc., on other machines.
By setting IFS=!, the script sets the field separator to a value which is unlikely to occur in the output of last, so the command substitution is not split at all: the whole output lands in LINE in a single iteration, and the line-by-line appearance comes from awk, which processes each line of that value. Normally, the expansion would be split on spaces, tabs and newlines.
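A minimal sketch of what changes (the strings here are illustrative, not from the question):
s='alpha beta
gamma delta'
# default IFS: the unquoted $s splits on spaces, tabs and newlines -> four words
for w in $s; do printf '<%s>\n' "$w"; done
# IFS=! : there is no '!' in $s, so the expansion stays a single word (newline included)
IFS=!
for w in $s; do printf '<%s>\n' "$w"; done
unset IFS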
However, as you see, the output of last normally uses spaces for separating columns, and it is fed into awk which splits the line anyway — with spaces. The script could be simplified in various ways, e.g.,:
#!/bin/sh
for LINE in $(last -a | sed -e '$ d' -e 's/ .*//')
do
  echo $LINE
done
which is (starting from the example in the question) adequate if the number of logins is not large enough to exceed your command-line. While checking for variations in last output, I noticed one machine with about 9800 lines from several years. (The other usual motivations given for not using for-loops are implausible in this instance). As a pipe:
#!/bin/sh
last -a | sed -e 's/ .*//' -e '/^$/d' | while IFS= read LINE
do
  echo $LINE
done
I changed the sed expression (which OP likely copied from some place such as Bash - remove the last line from a file) because it does not work.
Finally, using the -a option of last is unnecessary, since all of the additional information it provides is discarded.

Grep a time stamp in the H:MM:SS format

I'm working on a file and need to grep the lines with a time stamp in the H:MM:SS format. I tried egrep '[0-9]\:[0-9]\:[0-9]', but it didn't work for me. What am I doing wrong in the regex?
$ date -u | egrep '\d{1,2}:\d{1,2}:\d{1,2}'
Fri May 2 00:59:47 UTC 2014
(Note that \d is a Perl-style character class; GNU grep only understands it with -P.)
Try a site like http://regexpal.com/
Here is the fix:
grep '[0-9]:[0-9][0-9]:[0-9][0-9]'
If you need to get the timestamp only, and your grep is GNU grep:
grep -o '[0-9]:[0-9][0-9]:[0-9][0-9]'
And if you want to work a bit harder, restrict it to valid times only:
grep '[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
Simplest way that I know of:
grep -E '([0-9]{2}:){2}[0-9]{2}' file
If you need month and day also:
grep -E '.{3,4} .{,2} ([0-9]{2}:){2}[0-9]{2}' file
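Since the question asks for H:MM:SS (a possibly single-digit hour), a variant that allows one or two hour digits might look like this (a sketch, not taken from the answers above):
grep -E '(^|[^0-9])([01]?[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]' file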

sed & awk, second column modifications

I've got a file that I need to make some simple modifications to. Normally I wouldn't have an issue, but the columns are nearly identical, which throws me off.
Some examples:
net_192.168.0.64_26 192.168.0.64_26
net_192.168.0.128-26 192.168.0.128-26
etc
Now, normally in a stream I'd just modify the second column, but here I need to write the result to a file, which is what confuses me.
The following pipeline does what I need it to do, but then I lose visibility of the first column and can't pipe it somewhere useful:
cat file.txt | awk '{print $2}' | sed 's/1_//g;s/2_//g;s/1-//g;s/2-//g;s/_/\ /g;s/-/\ /g' | egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}'
Output needs to look like (subnet becomes the 3rd column):
net_192.168.0.64_26 192.168.0.64 26
net_192.168.0.128-26 192.168.0.128 26
How do I do what the above line does while keeping both columns visible, so I can pipe them to a new file, modify the old one, etc.?
Thanks!
Try this, if it is OK for you (the trailing 1 is an always-true pattern, so awk prints every line after the modification):
awk '{gsub(/[_-]/," ",$2)}1' file
test with your example text:
kent$ echo "net_192.168.0.64_26 192.168.0.64_26
net_192.168.0.128-26 192.168.0.128-26"|awk '{gsub(/[_-]/," ",$2)}1'
net_192.168.0.64_26 192.168.0.64 26
net_192.168.0.128-26 192.168.0.128 26
If you just want to replace the characters _ and - in the second field with a single space, then:
$ awk '{gsub(/[-_]/," ",$2)}1' file
net_192.168.0.64_26 192.168.0.64 26
net_192.168.0.128-26 192.168.0.128 26
And a sed version:
sed 's/\(.*\)[-_]/\1 /' file
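To actually write the result back to a file, which is the part of the question about writing this somewhere, redirect the output (the file names here are just illustrative):
awk '{gsub(/[_-]/," ",$2)}1' file.txt > file.new && mv file.new file.txt
# or, with GNU sed, edit the file in place:
sed -i 's/\(.*\)[-_]/\1 /' file.txt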

sed: mass converting epochs amongst random other text

CentOS / Linux
Bash
I have a log file which has lots of text and epoch timestamps all over the place. I want to replace all the epochs, wherever they are, with a readable date/time.
I've been wanting to do this via sed, as that seems like the tool for the job, but I can't seem to get the replacement part of sed to pass the matched epoch to a command for conversion.
Sample of what I'm working with...
echo "Some stuff 1346474454 And not working" \
| sed 's/1[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/'"`bpdbm -ctime \&`"'/g'
Some stuff 0 = Thu Jan 1 01:00:00 1970 And not working
The bpdbm part will convert a supplied epoch into a useful date, like this:
bpdbm -ctime 1346474454
1346474454 = Sat Sep 1 05:40:54 2012
So how do I get the "found" item passed into a command? I don't seem to be able to get it to work.
Any help would be lovely. If there is another way, that would be cool, but I suspect sed will be quickest.
Thanks for your time!
that seems the tool for the job
No, it is not. sed can use & only by itself in the replacement; there is no way to make it an argument to a command. You need something more powerful, e.g. Perl:
perl -pe 'if ( ($t) = /(1[0-9]+)/ ) { s/$t/localtime($t)/e }'
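Applied to the sample line from the question, this prints the line with the epoch replaced by something like the Sat Sep 1 05:40:54 2012 shown earlier (the exact time depends on your timezone):
echo "Some stuff 1346474454 And not working" \
  | perl -pe 'if ( ($t) = /(1[0-9]+)/ ) { s/$t/localtime($t)/e }'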
You can do it with GNU sed. The input:
infile
Some stuff 1346474454 And not working
GNU sed supports the e flag to the s command, which executes the pattern space as a shell command and replaces it with that command's output. One way to take advantage of this with bpdbm (note the -E for extended regular expressions):
sed -E 's/(.*)(1[0-9]{9})(.*)/echo \1 $(bpdbm -ctime \2) \3/e' infile
Or with coreutils date:
sed -E 's/(.*)(1[0-9]{9})(.*)/echo \1 $(date -d @\2) \3/e' infile
output with date
Some stuff Sat Sep 1 06:40:54 CEST 2012 And not working
To get the same output as with bpdbm:
sed -E 's/(.*)(1[0-9]{9})(.*)/echo "\1$(date -d @\2 +\"%a %b %_d %T %Y\")\3"/e' infile
output
Some stuff Sat Sep 1 06:40:54 2012 And not working
Note, this only replaces the last epoch found on a line. Re-run if there are more.
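If a line can contain several epochs, a Perl one-liner with a global substitution converts them all in one pass (a sketch along the lines of the Perl approach above, not part of the original answers):
perl -pe 's/\b(1[0-9]{9})\b/localtime($1)/ge' infile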

Trying to parse logfile based on start and end time

I am trying to parse a large zipped logfile and would like to collect all matching parameters within a certain time range:
Wed Nov 3 09:27:20 2010 : remote IP address 209.151.64.18
Wed Nov 3 11:57:22 2010 : secondary DNS address 204.117.214.10
I am able to grep other parameters using the line below:
gzcat jfk-gw10-asr1.20100408.log.gz | egrep 'gabriel|98.126.209.144|13.244.137.58|16.151.65.121'
I have been unable to parse for the start time and/or end time.
Any assistance is greatly appreciated.
Assuming that the log file is chronologically sorted, you could do e.g.:
gzcat jfk-gw10-asr1.20100408.log.gz | sed -n '/Nov 3 09:/,/Nov 3 11:/p'
to get log entries between 09:00:00 and 11:59:59 on Nov 3rd.
You can access space-separated fields using awk:
awk '/ / { print $1 }' your_file_name
Use $n (e.g. $1, $2) to print the desired field.
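Putting the pieces together, the time range from the sed answer above can be followed by the egrep from the question in one pipeline (a sketch; adjust the date patterns to your actual range):
gzcat jfk-gw10-asr1.20100408.log.gz \
  | sed -n '/Nov 3 09:/,/Nov 3 11:/p' \
  | egrep 'gabriel|98.126.209.144|13.244.137.58|16.151.65.121'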
