optional regex pattern in td-agent - ruby

I have two different formats of log lines; you can test with this site.
I need to make the client section in the line below optional: if it is present it should be captured, otherwise it should be ignored.
\[(?<date>[^\]]*)\] \[(?<level>[^\]]*)\] \[(?<pid-tid>[^\]]*)\] (\[(?<client>[^\]]*)\]) (?<message>[^\]]*)
Log Lines - without client
[Mon Jan 18 21:55:58.239970 2016] [proxy_http:error] [pid 2769:tid 140041068427008] AH01114: HTTP: failed to make connection to backend: xx.xxx.xx.xx
Log Lines - with client
[Mon Jan 18 21:55:58.239970 2016] [proxy_http:error] [pid 2769:tid 140041068427008] [client xx.xxx.x.xx:10723] AH01114: HTTP: failed to make connection to backend: xx.xxx.xx.xx
I have tried something like (.*?clientsection) -> zero or more matches:
\[(?<date>[^\]]*)\] \[(?<level>[^\]]*)\] \[(?<pid-tid>[^\]]*)\] (.*?(\[(?<client>[^\]]*)\])) (?<message>[^\]]*)
but it does not work

In your second expression, the (.*?(\[(?<client>[^\]]*)\])) part is preceded by an obligatory space; it then captures any 0+ chars, as few as possible, then captures 0+ chars other than ] into the "client" group, and then matches ], placing it all inside a numbered capture group. If the client part is missing from the text, the expression will still attempt to match the first space, then a [...] substring, and then another space.
If you want to fix the regex, you need to make the "client" group optional and make sure the adjoining context is also made optional.
Replace the (.*?(\[(?<client>[^\]]*)\])) with (?: \[(?<client>[^\]]*)\])?. Here, (?:...)? is an optional non-capturing group: it creates no numbered subgroup and matches one or zero occurrences of its pattern, succeeding only when the whole sequence is present.
See the Rubular demo (\n is added to the negated character classes since a multiline string is used for testing).
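For anyone wanting to sanity-check the fix outside Rubular, here is a quick sketch in Python. Note that Python's re module wants (?P<name>...) for named groups and forbids hyphens in group names, so pid-tid becomes pid_tid; td-agent itself uses Ruby's (?<name>...) syntax.

```python
import re

# Same pattern as above, with the client section made optional via (?:...)?
APACHE_ERR = re.compile(
    r'\[(?P<date>[^\]]*)\] \[(?P<level>[^\]]*)\] \[(?P<pid_tid>[^\]]*)\]'
    r'(?: \[(?P<client>[^\]]*)\])? (?P<message>[^\]]*)'
)

with_client = ('[Mon Jan 18 21:55:58.239970 2016] [proxy_http:error] '
               '[pid 2769:tid 140041068427008] [client xx.xxx.x.xx:10723] '
               'AH01114: HTTP: failed to make connection to backend: xx.xxx.xx.xx')
without_client = ('[Mon Jan 18 21:55:58.239970 2016] [proxy_http:error] '
                  '[pid 2769:tid 140041068427008] '
                  'AH01114: HTTP: failed to make connection to backend: xx.xxx.xx.xx')

m_with = APACHE_ERR.match(with_client)        # client group is captured
m_without = APACHE_ERR.match(without_client)  # client group is None, message still captured
```

When the bracketed client section is absent, the optional group simply matches nothing and the rest of the line still parses.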

custom pattern to filter strings when using telegraf inputs.logparser.grok

I'm trying to filter for particular words in a log file using regex. The goal is that any log line matching the regex in custom_patterns will go into influxdb, and log lines that do not match will be ignored. When I tested the regex it works, even in the golang playground (https://play.golang.org/p/_apzOVwwgl2). But when I use it in the telegraf conf file as it is below, it doesn't work; there's no input into influxdb. Is there something I'm missing that should be added to the configuration?
I've tested the regex on http://grokdebug.herokuapp.com/ and https://play.golang.org/p/_apzOVwwgl2 and it works, but not in the custom_patterns under [inputs.logparser.grok].
Here is my grok config
[[inputs.logparser]]
files = ["/var/log/test1"]
from_beginning = true
[inputs.logparser.grok]
patterns = ["%{FAIL_LOG}"]
custom_patterns = '''FAIL_LOG ^.*?\b(multipathd?)\b.*?\b(failed|failing|(remaining active paths))\b.*?$'''
The pattern is supposed to match first 2 log lines like below and ignore the third line.
Oct 29 03:29:03 dc-as-5p multipath: checker failed interface 8:0 in map 150gb
Oct 29 03:29:03 dc-as-5p multipathd: checker failing interface 8:0 in map 150gb
Oct 29 03:26:03 dc-as-5p link: checker down remaining active paths interface 8:0 in map 150gb
What am I doing wrong?
I summarised how I got custom log parsing in Telegraf/GROK to work in the following post: Custom log parsing with Telegraf/Tail Plugin/GROK. Maybe it helps you or others debug similar problems.
Maybe interesting for others reading this in 2020: Telegraf's logparser has since been replaced by the Tail plugin. There's an example in my post above.
PS: My approach for your problem would be to not use regex at all, but to define three different patterns for each of the lines. This of course will only work if you have a low number of possible log errors/lines.
If you run telegraf with the --debug flag, you will see that it is having an issue parsing the logs.
$ telegraf --debug --config ./telegraf.conf
...
2019-11-17T05:01:07Z D! Grok no match found for: "Oct 29 03:29:03 dc-as-5p multipath: checker failed interface 8:0 in map 150gb"
2019-11-17T05:01:07Z D! Grok no match found for: "Oct 29 03:29:03 dc-as-5p multipathd: checker failing interface 8:0 in map 150gb value=3"
2019-11-17T05:01:07Z D! Grok no match found for: "Oct 29 03:26:03 dc-as-5p link: checker down remaining active paths interface 8:0 in map 150gb"
This error message is misleading because, as your testing has shown, your regex pattern is correct. The real issue is that you have not included a value to be logged in your regex.
A version of your regex to store the error message and timestamp might be:
custom_patterns = '''FAIL_LOG %{SYSLOGTIMESTAMP:timestamp}.*(multipath).?: %{GREEDYDATA:message:string}'''
The value to be stored is named inside the %{} pattern, e.g. %{GREEDYDATA:message:string}. Additional premade patterns can be found here. This will eliminate the first two errors above. The results of these can be seen using the --test flag.
$ telegraf --test --config ./telegraf.conf
...
> logparser,host=pop-os,path=./test1 message="checker failed interface 8:0 in map 150gb",timestamp="Oct 29 03:29:03 " 1573968174161853621
For some reason the --test flag did not always output the results. I would have to run the command multiple times before getting the above output.
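As a rough sanity check of what the grok pattern matches, here is a Python sketch. The SYSLOGTIMESTAMP approximation below ("Mon DD HH:MM:SS") is mine and is simpler than the real grok pattern.

```python
import re

# Approximate Python equivalent of:
#   FAIL_LOG %{SYSLOGTIMESTAMP:timestamp}.*(multipath).?: %{GREEDYDATA:message:string}
FAIL_LOG = re.compile(
    r'(?P<timestamp>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2})'
    r'.*multipath.?: (?P<message>.*)'
)

lines = [
    'Oct 29 03:29:03 dc-as-5p multipath: checker failed interface 8:0 in map 150gb',
    'Oct 29 03:29:03 dc-as-5p multipathd: checker failing interface 8:0 in map 150gb',
    'Oct 29 03:26:03 dc-as-5p link: checker down remaining active paths interface 8:0 in map 150gb',
]
matches = [FAIL_LOG.match(line) for line in lines]  # third entry is None
```

The first two lines yield a timestamp and a message field; the third line contains no "multipath" and is rejected, mirroring the intended behaviour.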

URL being stored in SCRIPT_NAME on subsequent requests in IIRF?

I'm having an issue with IIRF (Ionic's Isapi Rewrite Filter) on IIS6 (although in this case I'm not sure that's relevant). It appears to work on initial requests, but on a standard refresh (i.e. F5, not Ctrl+F5) it will, most of the time, just 404, although it appears to be intermittent. My rewrite rule itself seems correct: I've tested it on various regex testers, and it works fine on a clear-cache refresh. I'm no expert, but it appears to relate to the fact that, on the occasions it doesn't work, the URL of the page I'm trying to hit is being fed through in the SCRIPT_NAME HTTP variable rather than coming via the URL, which in these cases appears to be the path I want to rewrite to (although, like I say, it 404s, so it doesn't appear to actually be going to this path). I'm sure you'll see quite quickly that I'm essentially just doing extensionless URL rewriting. I've tried adding rules to rewrite on SCRIPT_NAME, but no luck so far.
My config is:
RewriteLog iirf
RewriteLogLevel 5
RewriteEngine ON
IterationLimit 5
UrlDecoding OFF
# Rewrite all extensionless URLs to index.html
RewriteRule ^[^.]*$ /appname/index.html
See the log below - this is a case of it NOT working. I'm hitting /appname/task/5, but it appears to store that in SCRIPT_NAME. Strangely, the URL it appears to want to rewrite is actually the URL I want it to rewrite to. Again, this is only on subsequent requests; on the first request it almost always rewrites without issue and the page loads fine.
Tue Jul 12 10:17:33 - 4432 - Cached: DLL_THREAD_DETACH
Tue Jul 12 10:17:33 - 4432 - Cached: DLL_THREAD_DETACH
Tue Jul 12 10:17:33 - 4432 - HttpFilterProc: SF_NOTIFY_URL_MAP
Tue Jul 12 10:17:33 - 4432 - HttpFilterProc: cfg= 0x01C8CC60
Tue Jul 12 10:17:33 - 4432 - HttpFilterProc: SF_NOTIFY_AUTH_COMPLETE
Tue Jul 12 10:17:33 - 4432 - DoRewrites
Tue Jul 12 10:17:33 - 4432 - GetServerVariable_AutoFree: getting 'QUERY_STRING'
Tue Jul 12 10:17:33 - 4432 - GetServerVariable_AutoFree: 1 bytes
Tue Jul 12 10:17:33 - 4432 - GetServerVariable_AutoFree: result ''
Tue Jul 12 10:17:33 - 4432 - GetHeader_AutoFree: getting 'method'
Tue Jul 12 10:17:33 - 4432 - GetHeader_AutoFree: 4 bytes ptr:0x000D93A8
Tue Jul 12 10:17:33 - 4432 - GetHeader_AutoFree: 'method' = 'GET'
Tue Jul 12 10:17:33 - 4432 - DoRewrites: Url: '/appname/index.html'
Tue Jul 12 10:17:33 - 4432 - EvaluateRules: depth=0
Tue Jul 12 10:17:33 - 4432 - GetServerVariable: getting 'SCRIPT_NAME'
Tue Jul 12 10:17:33 - 4432 - GetServerVariable: 16 bytes
Tue Jul 12 10:17:33 - 4432 - GetServerVariable: result '/appname/task/5'
Tue Jul 12 10:17:33 - 4432 - EvaluateRules: no RewriteBase
Tue Jul 12 10:17:33 - 4432 - EvaluateRules: Rule 1: pattern: ^[^.]*$ subject: /appname/index.html
Tue Jul 12 10:17:33 - 4432 - EvaluateRules: Rule 1: -1 (No match)
Tue Jul 12 10:17:33 - 4432 - EvaluateRules: returning 0
Tue Jul 12 10:17:33 - 4432 - DoRewrites: No Rewrite
Any help is much appreciated!
Thanks
I'm not sure what the reason for it being stored in the SCRIPT_NAME variable was, but I've written an extra rule to cater for it, which fixed it for me:
RewriteEngine ON
IterationLimit 5
UrlDecoding OFF
# Rewrite all extensionless URLs to index.html
RewriteRule ^[^.]*$ /appname/index.html
# Subsequent requests may store the URL in the SCRIPT_NAME variable for some reason
RewriteCond %{SCRIPT_NAME} ^[^.]*$ # If SCRIPT_NAME variable contains an extensionless URL
RewriteRule .*\S.* /appname/index.html [L] # Rewrite all URLs to index if RewriteCond met
I've had this issue for a while, and to compound it I have it running on an Integrated Windows Authentication secure website. I think the problem affects more than just the requested uri. My guess is somehow IIRF is caching the previous request's directive set, and using that.
For testing, firstly I ensured that the HTTP response headers forced the browser not to cache any content, so the browser should always request and receive updated content. This does work as expected.
I had the IIRF directives parse the request through various destination script files I could flip between. What I found, after updating and saving both the IIRF.ini directives and the php files, was that each request, essentially after a soft refresh (F5), would indeed receive updated content (IIS was parsing and serving the scripts), but the directives executed were the previous ones.
eg, if a request resulted in a legitimate 404 error via the directives, then the next URL request, whether to an existing script or not, would also produce the 404 error. Getting a new full request (generally the Ctrl-F5 hard refresh, or a first request) would actually execute the proper/current directives.
After some header analysis, it seems that the requests showing problems have unsynchronized server variables, on first run through the directives:
SCRIPT_NAME holds the current URL being requested
REQUEST_URI showed the previous URL requested (the only variable with the previous URL)
That meant that the RewriteRule directives would be making use of the previous request_uri, which didn't hold the correct url. That seems to explain the 'old' incorrect end result on subsequent requests. I added the following directives, which seems to have solved this, at least on a basic level:
RewriteCond %{REQUEST_URI}::%{SCRIPT_NAME} !^([^\?]*?)(\?.*)?::\1$
RewriteRule ^ %{SCRIPT_NAME} [R=301,L,QSA]
The first check makes sure the pre-query-string value of REQUEST_URI matches the SCRIPT_NAME value.
The 301 forces the browser into a hard refresh, which should then synchronize the URLs (and any query string is also resent). It should be the first directive, with the rest following.
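To see how the condition's backreference trick works, here is a small Python sketch of the same test (the helper name out_of_sync is mine):

```python
import re

# IIRF's test string is "%{REQUEST_URI}::%{SCRIPT_NAME}". The condition is
# negated with !, so the rewrite fires only when the pattern does NOT match,
# i.e. when the two variables are out of sync.
SYNC = re.compile(r'^([^?]*?)(\?.*)?::\1$')

def out_of_sync(request_uri, script_name):
    # Mirrors: RewriteCond %{REQUEST_URI}::%{SCRIPT_NAME} !^([^\?]*?)(\?.*)?::\1$
    return SYNC.match(f'{request_uri}::{script_name}') is None
```

The backreference \1 forces the pre-query-string part of REQUEST_URI to be literally identical to SCRIPT_NAME; any query string is stripped by the optional (\?.*)? group before the comparison.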
This doesn't work, however, if the same url is requested a 2nd time. The newly saved directives won't take until the hard refresh is sent. I haven't yet found a way to determine if the latest directives are being parsed on soft refresh of the same URL.
In context of my secure site, it also didn't seem to want to work with requests that didn't include the Authorization HTTP header. Each of the browsers does the login handshake process properly, but once authorized, if the browser doesn't send that header, it can cause cached 401 errors on requests that shouldn't produce it. So I determined that the same solution can apply to these requests - forcing the refresh as long as the browser has not sent the Authorization header:
RewriteCond %{REQUEST_URI}::%{SCRIPT_NAME} !^([^\?]*?)(\?.*)?::\1$ [OR]
RewriteCond %{HTTP_Authorization} !^.+$
RewriteRule ^ %{SCRIPT_NAME} [R=301,L,QSA]
I think this solution addresses the same-url soft refresh problem above, since it appears that the soft refresh doesn't send the Authorization header. Effectively, every request becomes a hard refresh, ensuring sync'd variables and Auth header for the secure content.
For false positives, in theory this solution may be insufficient if the same url is requested on a soft refresh after the directives have changed and the Authorization header is sent. Secure content will be most recent, but directives used will still be the previous version.
It may also be insufficient for any request being parsed with differing yet valid request_uri and script_name values (usually this is on the second iteration though). One step may be to set an environment variable on the first iteration match, and only do the sync check and redirect if it's the first iteration of the directives. But IIRF doesn't seem to support the [E] modifier.
In short, I think this describes the problem:
Via IIRF, for any valid first request iteration, ^%{REQUEST_URI}(\?.+)?$ should always match %{SCRIPT_NAME}, but an untrappable old-directives parsing would occur if the variables are identical on first iteration, unless there's a way to check the cached (currently used) directives against the saved directives.
At any rate, hopefully this is one step closer to determining a universal workaround for this seemingly cached previous-request-and-directives IIRF bug.

Formatting IP with sed

I am trying to figure out how to do the following with sed:
I got a list of IPv4 addresses and I am trying to make them all uniform in the display. So for example: 1.2.4.32 would be 001.002.004.032. 10.125.62.1 would be 010.125.062.001.
I am trying to use sed to do it because that's what I am learning right now.
I got these two, which will take any one or two digit number and append zeros at the front.
sed 's/\<[0-9][0-9]\>/0&/g' file
sed 's/\<[0-9]\>/00&/g' file
But that runs into a more practical problem: my input file will have single- or double-digit numbers in other, non-IP-address places. Example:
host-1 1.2.3.32
So I need a way for it to look for the full IP address, which I thought could be achieved by this
sed 's/\.\<[0-9]\>/00&/g'
but not only does that ignore the case of 1.something.something.something, it also appends the 00 at the end of the 3rd octet for some reason.
echo "10.10.88.5" | sed 's/\.\<[0-9]\>/00&/g'
10.10.8800.5
Sample file:
Jumpstart Server jumo 10.20.5.126
Jumpstart Server acob 10.20.5.168
NW1 H17 Node cluster 10.10.161.87
NW1 H17 Node-1 10.10.161.8
NW1 H17 Node-2 10.10.161.9
ts-nw1 10.10.8.6
The idiomatic way of changing only parts of a line is to copy it to the hold space, remove the parts we're not interested in from the pattern space, get the hold space back and then rearrange the pattern space to replace the part we've changed with our new version.
This should work (replace -r with -E for BSD sed):
sed -r 'h # Copy pattern space to hold space
# Remove everything except IP address from pattern space
s/.*\b(([0-9]{1,3}\.){3}[0-9]{1,3})\b.*/\1/
s/([0-9])+/00&/g # Prepend 00 to each group of digits
s/[0-9]*([0-9]{3})/\1/g # Only retain last three digits of each group
G # Append hold space to pattern space
# Replace old IP with new IP
s/(.*)\n(.*)\b([0-9]{1,3}\.){3}[0-9]{1,3}\b(.*)/\2\1\4/' infile
The last step is the most complicated one. Just before it, a line looks like this (newline as \n, end of line as $):
010.020.005.126\nJumpstart Server jumo 10.20.5.126$
i.e., our new and improved IP address, a newline, then the complete old line. We now capture the marked groups:
010.020.005.126\nJumpstart Server jumo 10.20.5.126$
\_____________/  \___________________/ \_________/^
       \1       \n          \2             \3      \4 (empty)
and rearrange the line by using group 2, then groups 1 (our new IP) and 4. Notice that
There are four capture groups, but the third one is just there to help describe an IP address, we don't actually want to retain it, hence \2\1\4 in the substitution (there are no non-capturing groups in sed).
The last capturing group (after the IP address) is empty, but having it makes it possible to use this for lines that have the IP address anywhere.
This only replaces the first IP address on each line, in case there are several.
The overall output is
Jumpstart Server jumo 010.020.005.126
Jumpstart Server acob 010.020.005.168
NW1 H17 Node cluster 010.010.161.087
NW1 H17 Node-1 010.010.161.008
NW1 H17 Node-2 010.010.161.009
ts-nw1 010.010.008.006
The same as a solidly unreadable one-liner:
sed -r 'h;s/.*\b(([0-9]{1,3}\.){3}[0-9]{1,3})\b.*/\1/;s/([0-9])+/00&/g;s/[0-9]*([0-9]{3})/\1/g;G;s/(.*)\n(.*)\b([0-9]{1,3}\.){3}[0-9]{1,3}\b(.*)/\2\1\4/' infile
\b is a GNU extension. The script mostly works without it as well; using it makes sure that blah1.2.3.4blah is left alone.
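For comparison, the same zero-padding can be sketched in Python, which may be easier to experiment with than the hold-space dance (the helper name pad_ip is mine; like the sed script, it only touches the first IP on a line):

```python
import re

def pad_ip(line):
    # Zero-pad each octet of the first IPv4-looking group on the line to
    # three digits; \b keeps strings like blah1.2.3.4blah untouched.
    return re.sub(r'\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\b',
                  lambda m: '.'.join(octet.zfill(3) for octet in m.groups()),
                  line, count=1)
```

Because the pattern demands four dot-separated 1-3 digit groups, stray numbers in hostnames like Node-1 are left alone.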
$ cat 37222835.txt
Jumpstart Server jumo 10.20.5.126 10.29.23.24
Jumpstart Server acob 10.20.5.168 dig opt
Jumpstart Server reac 251.218.212.1 rel
NW1 H17 Node cluster 10.10.161.87
NW1 H17 Node-1 10.10.161.8
NW1 H17 Node-2 10.10.161.9
ts-nw1 10.10.8.6
Nw2 HW12 Node-3 192.168.0.1
cluster
Doing :
sed -n 's/\([1]\?[0-9][0-9]\?\|2[0-4][0-9]\|25[0-5]\)\{1\}\.'\
'\([1]\?[0-9][0-9]\?\|2[0-4][0-9]\|25[0-5]\)\{1\}\.'\
'\([1]\?[0-9][0-9]\?\|2[0-4][0-9]\|25[0-5]\)\{1\}\.'\
'\([1]\?[0-9][0-9]\?\|2[0-4][0-9]\|25[0-5] \)/00\1\.00\2\.00\3\.00\4/g;
s/0\+\([0-9]\{3\}\)/\1/g;p' 37222835.txt
gives :
Jumpstart Server jumo 010.020.005.126 010.029.023.024
Jumpstart Server acob 010.020.005.168 dig opt
Jumpstart Server reac 251.218.212.001 rel
NW1 H17 Node cluster 010.010.161.087
NW1 H17 Node-1 010.010.161.008
NW1 H17 Node-2 010.010.161.009
ts-nw1 010.010.008.006
Nw2 HW12 Node-3 192.168.000.001
cluster
Advantage over the approach mentioned by #benjamin-w:
This can replace multiple IP addresses in the same line.
Disadvantage (the approach mentioned by #benjamin-w remedies this):
Had there been a word, say Node-000234, it would be changed to Node-234. In fact, you could work on the second substitution command to get the desired behaviour.
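The validated-octet idea above can also be sketched in Python; unlike the first approach this handles multiple IPs per line, and it never rewrites non-IP numbers like Node-000234 (the names OCTET, IP and pad_all_ips are mine):

```python
import re

# 0-255 only, mirroring the alternation in the sed pattern above
OCTET = r'(?:25[0-5]|2[0-4][0-9]|1?[0-9][0-9]?)'
IP = r'\b' + r'\.'.join([OCTET] * 4) + r'\b'

def pad_all_ips(line):
    # Pad every valid IPv4 address on the line; e.g. 300.1.2.3 is left alone
    return re.sub(IP,
                  lambda m: '.'.join(o.zfill(3) for o in m.group(0).split('.')),
                  line)
```

Since the padding happens per matched address, there is no global strip-leading-zeros pass and hence no Node-000234 problem.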

os x screen command,'.screenrc', termcap

I need help in the conceptual area surrounding:
/usr/bin/screen,
~/.screenrc,
termcap
My Goal: to create a 'correctly' formatted log file via 'screen'.
Symptom: The log file contains hundreds of carriage-return bytes [i.e. (\015) or (\r) ]. I would like to replace every carriage-return byte with a linefeed byte [i.e. (\012) or (\n)].
My Approach: I have created the file: ~/.screenrc and added a 'termcap' line to it with the hope of intercepting the inbound bytes and translating the carriage-return bytes into linefeed bytes BEFORE they are written to the log file. I cycled through nine different syntactical forms of my request. None had the desired effect (see below for all nine forms).
My Questions:
Can my goal be accomplished with my approach?
If yes, what changes do I need to make to achieve my goal?
If no, what alternative should I implement?
Do I need to mix in the 'stty' command?
If yes, how?
Note: I can create a 'correctly' formatted file using the log file as input to 'tr':
$ /usr/bin/tr '\015' '\012' <screenlog.0 | head
<5 BAUD ADDRESS: FF>
<WAITING FOR 5 BAUD INIT>
<5 BAUD ADDRESS: 33>
<5 BAUD INIT: OK>
Rx: C233F1 01 00 # 254742 ms
Tx: 86F110 41 00 BE 1B 30 13 # 254753 ms
Tx: 86F118 41 00 88 18 00 10 # 254792 ms
Tx: 86F128 41 00 80 08 00 10 # 254831 ms
Rx: C133F0 3E # 255897 ms
Tx: 81F010 7E # 255903 ms
$
The 'screen' log file ( ~/screenlog.0 ) is created using the following command:
$ screen -L /dev/tty.usbserial-000014FA 115200
where:
$ ls -dl /dev/*usb*
crw-rw-rw- 1 root wheel 17, 25 Jul 21 19:50 /dev/cu.usbserial-000014FA
crw-rw-rw- 1 root wheel 17, 24 Jul 21 19:50 /dev/tty.usbserial-000014FA
$
$
$ ls -dl ~/.screenrc
-rw-r--r-- 1 scottsmith staff 684 Jul 22 12:28 /Users/scottsmith/.screenrc
$ cat ~/.screenrc
#termcap xterm* 'XC=B%,\015\012' # 01 no effect
#termcap xterm* 'XC=B%\E(B,\015\012' # 02 no effect
#termcap xterm* 'XC=B\E(%\E(B,\015\012' # 03 no effect
#terminfo xterm* 'XC=B%,\015\012' # 04 no effect
#terminfo xterm* 'XC=B%\E(B,\015\012' # 05 no effect
#terminfo xterm* 'XC=B\E(%\E(B,\015\012' # 06 no effect
#termcapinfo xterm* 'XC=B%,\015\012' # 07 no effect
#termcapinfo xterm* 'XC=B%\E(B,\015\012' # 08 no effect
termcapinfo xterm* 'XC=B\E(%\E(B,\015\012' # 09 no effect
$
$ echo $TERM
xterm-256color
$ echo $SCREENRC
$ ls -dl /usr/lib/terminfo/?/*
ls: /usr/lib/terminfo/?/*: No such file or directory
$ ls -dl /usr/lib/terminfo/*
ls: /usr/lib/terminfo/*: No such file or directory
$ ls -dl /etc/termcap
ls: /etc/termcap: No such file or directory
$ ls -dl /usr/local/etc/screenrc
ls: /usr/local/etc/screenrc: No such file or directory
$
System:
MacBook Pro (17-inch, Mid 2010)
Processor 2.53 GHz Intel Core i5
Memory 8 GB 1067 MHz DDR3
Graphics NVIDIA GeForce GT 330M 512 MB
OS X Yosemite Version 10.10.4
Screen(1) Mac OS X Manual Page: ( possible relevant content ):
CHARACTER TRANSLATION
Screen has a powerful mechanism to translate characters to arbitrary strings depending on the current font and terminal type. Use this feature if you want to work with a common standard character set (say ISO8851-latin1) even on terminals that scatter the more unusual characters over several national language font pages.
Syntax: XC=<charset-mapping>{,,<charset-mapping>}
<charset-mapping> := <designator><template>{,<mapping>}
<mapping> := <char-to-be-mapped><template-arg>
The things in braces may be repeated any number of times.
A tells screen how to map characters in font ('B': Ascii, 'A': UK, 'K': german, etc.) to strings. Every describes to what string a single character will be translated. A template mechanism is used, as most of the time the codes have a lot in common (for example strings to switch to and from another charset). Each occurrence of '%' in gets substituted with the specified together with the character. If your strings are not similar at all, then use '%' as a template and place the full string in . A quoting mechanism was added to make it possible to use a real '%'. The '\' character quotes the special char- acters '\', '%', and ','.
Here is an example:
termcap hp700 'XC=B\E(K%\E(B,\304[,\326\\,\334]'
This tells screen how to translate ISOlatin1 (charset 'B') upper case umlaut characters on a hp700 terminal that has a german charset. '\304' gets translated to '\E(K[\E(B' and so on. Note that this line gets parsed three times before the internal lookup table is built, therefore a lot of quoting is needed to create a single '\'.
Another extension was added to allow more emulation: If a mapping translates the unquoted '%' char, it will be sent to the terminal whenever screen switches to the corresponding <designator>. In this special case the template is assumed to be just '%' because the charset switch sequence and the character mappings normally haven't much in common.
This example shows one use of the extension:
termcap xterm 'XC=K%,%\E(B,[\304,\\\326,]\334'
Here, a part of the german ('K') charset is emulated on an xterm. If screen has to change to the 'K' charset, '\E(B' will be sent to the terminal, i.e. the ASCII charset is used instead. The template is just '%', so the mapping is straightforward: '[' to '\304', '\' to '\326', and ']' to '\334'.
The section on character translation is describing a feature which is unrelated to logging. It is telling screen how to use ISO-2022 control sequences to print special characters on the terminal. In the manual page's example
termcap xterm 'XC=K%,%\E(B,[\304,\\\\\326,]\334'
this tells screen to send escape(B (to pretend it is switching the terminal to character-set "K") when it has to print any of [, \ or ]. Offhand (referring to XTerm Control Sequences) the reasoning in the example seems obscure:
xterm handles character set "K" (German)
character set "B" is US-ASCII
assuming that character set "B" is actually rendered as ISO-8859-1, those three characters are Ä, Ö and Ü (which is a plausible use of German, to print some common umlauts).
Rather than being handled by this feature, screen's logging is expected to record the original characters sent to the terminal — before translation.
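Since the termcap translation feature can't intercept what gets logged, post-processing the log is the practical route. Here is a small Python sketch equivalent to the tr command above, except that it also collapses CRLF pairs to a single LF instead of doubling them (the function name and filenames are mine):

```python
def crlf_to_lf(data: bytes) -> bytes:
    # Like: tr '\015' '\012' < screenlog.0, but CRLF becomes one LF,
    # not two newlines; bare CRs are then converted as well.
    return data.replace(b'\r\n', b'\n').replace(b'\r', b'\n')

# Usage sketch (filenames assumed):
# with open('screenlog.0', 'rb') as src, open('screenlog.fixed', 'wb') as dst:
#     dst.write(crlf_to_lf(src.read()))
```

Working on bytes avoids any text-decoding surprises with the raw serial capture.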

How to find a String based on Date in vi editor "Feb 12, 2014 1:00:53 PM"

My date stamp in the Catalina.out log file is in the format "Feb 12, 2014 1:00:53 PM". I want to search for an exception that occurred at a specified time; how can I find it in the vi editor? My log files are very large, several GB in size.
To search in vim use the / command - type /Feb 12, 2014 1:00:53 PM and hit Enter - all matches should get highlighted. Use n to move to the next occurrence and N to move to the previous occurrence.
edit:
Putting \ in front of a character escapes it so it can be used in your search. If you were trying to search and replace "http://" with "https://", you could run :s/http:\/\//https:\/\//gi, where the backslashes keep the /'s from ending your regex statement. Alternatively, you can use a different delimiter to avoid the picket fences of \/\/: :s#http://#https://#gi
I found a way and it worked; hopefully it will help others.
In my case vi's simple search recognized spaces, commas and dashes; the issue is only with : or /. So to search Feb 12, 2014 1:00:53 PM, use /Feb 12, 2014 1\:00\:53 PM
Before writing : or / in a search, just type \ before it.
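For multi-GB logs it can be quicker to locate the line number outside the editor and then open vim directly at it with vim +<lineno> catalina.out. A minimal sketch (the helper name find_stamp is mine):

```python
def find_stamp(lines, stamp='Feb 12, 2014 1:00:53 PM'):
    # Stream any iterable of lines (e.g. an open file handle) and return
    # 1-based line numbers; a multi-GB catalina.out is never loaded whole.
    return [n for n, line in enumerate(lines, 1) if stamp in line]

# Usage sketch:
#   with open('catalina.out', errors='replace') as f:
#       print(find_stamp(f))
#   then: vim +<lineno> catalina.out
```

A plain substring test also sidesteps the escaping of : and / entirely, since no regex is involved.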
