Google Cloud Logging: show lines above and below the matched lines - google-cloud-logging

Sometimes when searching logs by keyword or other conditions, I want to show not only the lines which match the condition, but also a few lines around them to better understand the context, similar to the -C flag of grep.
Is this possible?

Also looking for this. What I found so far is the option to "Pin and show resource log entries" for a matched result. It will pin the entry and then show it inline with the rest of the log.

I do not think it is possible to achieve this on Google Cloud Logging.
I usually run a first query, get the timestamp of the log entry I am interested in, and then run a second query using a custom time range.
Advanced logs queries
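A rough sketch of that second query: assuming the entry of interest was logged around 2021-06-01T10:30:00Z (the timestamps and resource type below are made-up placeholders), drop the keyword condition and keep only a narrow time window, so the surrounding entries are returned as well.

resource.type="k8s_container"
timestamp>="2021-06-01T10:29:30Z"
timestamp<="2021-06-01T10:30:30Z"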

Related

Ill-formed log in Loki

I'm using zerolog in Go, which outputs JSON-formatted logs. The app is running on k8s, and the logs are in CRI-O format as follows.
[screenshot: actual log in Grafana Loki]
My question is: since there is some non-JSON text prepended to my JSON log, I can't seem to query the log effectively. One example: when I tried to pipe the log into logfmt, exceptions were thrown.
What I want is to be able to query into the subfields of the JSON.
My intuition is, for each log line, to select only the part starting from { (the start of the JSON); then maybe I can do more interesting manipulation. I'm a bit stuck and not sure what the best way to proceed is.
Any help and comments are appreciated.
After some head scratching, the problem is solved.
I'm directly using the promtail setup from https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh
Within this setup the default parser is docker, but it needs to be changed to cri; afterwards the logs are properly parsed as JSON in my Grafana dashboard.
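Depending on the promtail version, this is either the scrape config's entry_parser field or a pipeline stage; in the pipeline-stage form the change looks roughly like this (the job name is a placeholder, and the rest of the config generated by that script is omitted):

scrape_configs:
  - job_name: kubernetes-pods    # placeholder name; keep whatever the script generates
    pipeline_stages:
      - cri: {}                  # was: - docker: {}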

How to store a part of log into a file using LogStash

I'm processing a log file with the help of the Logstash aggregate filter, with grok using multiple patterns.
Now while processing the logs I want to extract a part of the log with some regex and store it into a file.
For example, let's say my log is:
id:0422 time:[2013-11-19 02:34:58] level:INFO text:(Lorem Ipsum is simply dummy text of the printing and typesetting industry)
In this log, the text will be different every time.
I have a regex with which I can match a part of that text in Logstash.
So if, while Logstash is indexing into Elasticsearch, that regex finds something in the text, I want to store it in a file or something similar.
Is it possible to achieve this?
There are different solutions for this:
create a filter using Ruby code that is triggered to write in a specific format once you have all the event data together
create a separate output to a file that is triggered based on an if statement; this is the preferred way of working, as it makes clear that it is an output (a sketch follows below)
Depending on whether you want to send all the data or only part of it, or have it look different, you might need to use the clone filter to clone the event into two different events that can be manipulated independently of each other using tags.
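A minimal sketch of the second approach, assuming your existing grok/aggregate filters already populate a text field and the part you want to keep is captured into a field named extracted (the regex, field names, and file path below are placeholders):

filter {
  # existing grok/aggregate filters that populate the "text" field stay as they are
  grok {
    match => { "text" => "(?<extracted>Lorem Ipsum[^)]*)" }   # placeholder regex
    tag_on_failure => []
  }
}

output {
  # existing outputs (e.g. elasticsearch) stay as they are
  if [extracted] {
    file {
      path => "/var/log/extracted_parts.log"                  # placeholder path
      codec => line { format => "%{extracted}" }
    }
  }
}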

How to read data in logs using logstash?

I have just started with Logstash. I have log files in which a whole object is printed. Since my object is huge, I can't write grok patterns for the whole object, and I only expect two values out of that object. Can you please let me know how I can get them?
My log file looks like the line below:
2015-06-10 13:02:57,903 your done OBJ[name:test;loc:blr;country:india,acc:test#abe.com]
This is just an example; my object has a lot of attributes in it, and from those I need to get only name and acc.
Regards
Mohan.
You can use the following pattern for this:
%{GREEDYDATA}\[name:%{WORD:name};%{GREEDYDATA},acc:%{NOTSPACE:account}\]
GREEDYDATA is defined as follows:
GREEDYDATA .*
The key lies in understanding the GREEDYDATA macro.
It matches as many characters as possible.
Logstash patterns don't have to match the entire line. You could also pull the leading information off (date, time, etc) in one grok{} and then use a different grok{} to pull off just the two fields that you want.
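A minimal sketch of that two-grok idea against the sample line above (the field names logtime, rest, name, and account are my own choices):

filter {
  # first grok: peel off the leading timestamp, keep the remainder in "rest"
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{GREEDYDATA:rest}" }
  }
  # second grok: pull only the two wanted fields out of the remainder
  grok {
    match => { "rest" => "name:%{WORD:name};%{GREEDYDATA},acc:%{NOTSPACE:account}\]" }
  }
}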

Tivoli Logfile Monitoring - Regex to exclude

I am sorry if it is not right to post a question on two forums.
We use Tivoli to monitor our log files. The log4j log level is set to ERROR, and Tivoli raises tickets for these statements. But there are some known issues for which we do not want Tivoli to raise tickets. Is there a way to specify that some statements should be ignored?
Current regex : [/var/tmp/abc.log;ERROR(.*);error found: RegExp1]
This is very generic. We need to exclude certain framework errors (Hibernate / Mule) for a known issue. Is there a way to specify this using a regex?
Thanks,
Midhun
If you are using the LO agent, you can configure a situation formula based on regular expressions to fit your needs.
Below is the LO Agent User Guide:
http://pic.dhe.ibm.com/infocenter/tivihelp/v15r1/topic/com.ibm.itm.doc_6.2.3fp1/logfileagent623fp2_user.pdf
Take a look at the "Log File RegEx Statistics attribute group" section:
The Log File RegEx Statistics attribute group contains information that shows the statistics of the log file
regular expression search expressions. Regular expressions can be used to filter records or to define
records. This attribute group shows information about both types. When the Result Type attribute value
is INCLUDE or EXCLUDE, the filter is used to filter records;
Hope this helps
I don't have the reputation yet to post a comment, but I'd have liked to ask whether you are using the Tivoli Log File Agent (lo) or the Unix Log Agent (ul) before answering.
If your question is still relevant...
Here is the documentation for the Log File Agent: http://www-01.ibm.com/support/knowledgecenter/SS4EKN_7.2.0.2/com.ibm.itm.doc_6.3/logfile/klo_fileformat_specs.htm
You can specify a new regex with the DISCARD event class, and all records matching this regex will not be caught as ITM events.
If you use the special predefined event class DISCARD as your event class, any log records matching the associated pattern are discarded, and no events are generated for them. For example:
REGEX DISCARD
Because the pattern matched, nothing is written to the unmatch log. The matched-records count in the log file status includes these discarded events.
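For illustration, a DISCARD entry for the Hibernate/Mule case could look roughly like this in the agent's format file (the pattern is a placeholder; check the format-file specs linked above for the exact syntax of your agent version):

REGEX DISCARD
^.*ERROR.*(Hibernate|Mule).*$
END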
BTW
[/var/tmp/abc.log;ERROR(.*);error found: RegExp1]
may be better as
[/var/tmp/abc.log;ERROR([^;]*);error found: RegExp1]
.* is greedy and best avoided when possible

How do I see/debug the way Solr finds its results?

Let's say I search for "ABLS" and Solr returns a result that, to me, does not make any sense.
How can I debug why SOLR picked this record to be returned?
debugQuery=true would help you get the detailed score calculation and the explanation for each score.
An overview of the scoring is available at link.
For a detailed explanation of the debug information, you can refer to Link.
You could add debugQuery=true&indent=true to the url and examine the results. You could also use the analysis tool in solr. Go to the admin and click analysis. You would need to read the wiki to understand either of these more in depth.
debugQuery will give you insight into why your scoring looks like it does (and how relevant every field is).
I would take some of the results that you don't understand and play with them in Solr's analysis tool.
You should find it under:
/admin/analysis.jsp?highlight=on
Alternatively, turn on highlighting over your results to see what is actually matching.
Solr queries are full of short parameters that are hard to read and modify, especially when there are many of them.
And afterwards it is even harder to debug and understand why one document is more or less relevant than another. The debug explain output is usually a tree too big to fit on one page.
I found this Google Chrome extension useful to see Solr Query explain and debug in a clear manner.
For those who still use the very old Solr 3.x: "debugQuery=true" will not output the debug information; you should specify "debugQuery=on" instead.
There are two ways of doing that. The first is at the query level, which means adding debugQuery=on to your query. That will include a few things:
parsed query
debug timing information
detailed scoring information, which helps you analyze why a given document was given its score.
In addition to that, you can use the [explain] transformer and add it to your fl parameter. For example ...&fl=*,[explain], which will result in your documents having the scoring information as another field.
The scoring information can be quite extensive and will include calculations done by the similarity algorithm. If you would like to learn more about the similarities and the scoring algorithm in Solr, have a look at this talk from the Activate conference by me and my colleague Radu from Sematext: https://www.youtube.com/watch?v=kKocQdYGVJM
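Putting the two together, a request against a hypothetical core named mycollection could look like this (host, core name, and query term are placeholders):

http://localhost:8983/solr/mycollection/select?q=ABLS&debugQuery=on&fl=*,[explain]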
