My log messages are of the form "{title}: {details}" and I would like to group issues by title.
Is there a way to add a fingerprint rule to my Sentry project that does this?
I am reading this page:
https://docs.sentry.io/product/data-management-settings/event-grouping/fingerprint-rules/
If I could capture the content of the glob stars and refer to them with $i, it would be something like this:
message:"*:*" -> $1
i.e. match messages that contain a colon and group by everything before it.
It's not possible on the Sentry side. You need to apply that fingerprint on the SDK side if you require regexp-like matches.
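To illustrate the SDK-side approach, here is a minimal sketch, assuming messages of the form "{title}: {details}". The helper name title_fingerprint is made up; the before_send wiring shown in the comment follows the usual Sentry SDK hook, but check your SDK's docs for the exact option name.

```ruby
# Compute a fingerprint from everything before the first colon.
# Returns nil when the message has no colon (keep default grouping then).
def title_fingerprint(message)
  title, sep, _details = message.partition(":")
  sep.empty? ? nil : [title.strip]
end

# In sentry-ruby this could be wired up roughly like:
#
#   Sentry.init do |config|
#     config.before_send = lambda do |event, _hint|
#       fp = title_fingerprint(event.message.to_s)
#       event.fingerprint = fp if fp
#       event
#     end
#   end

title_fingerprint("DatabaseError: timeout after 30s")  # => ["DatabaseError"]
```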
I'm trying to add a filter with the IN operator to my tablix. The problem is, my criteria values are already comma-separated, like A,B and C,D. Writing them as 'A,B','C,D' doesn't seem to work.
I couldn't get the filter working, and all the other questions/examples are for single-word criteria. Any help?
I managed to fix this on my own by using a =Split("2B,2C",",") expression as the filter value; if your values themselves contain commas, change the split delimiter (the last argument) to something other than a comma. Works very fine.
I hope this helps and saves time for other people in the future.
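For concreteness, a sketch of what the tablix filter could look like with a pipe as the replacement delimiter. The field name Fields!Category.Value and the pipe character are illustrative choices, not anything prescribed by SSRS:

```text
Expression: =Fields!Category.Value
Operator:   In
Value:      =Split("A,B|C,D", "|")
```

Here each criteria value ("A,B" and "C,D") contains commas itself, which is why the Split delimiter has to be something other than a comma.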
I've enabled the access logs for my ELBs on AWS, and we're sending them to a logstash + Elasticsearch + Kibana setup.
I'm using logstash's grok filter to parse the logs into separate fields that I can view and sort in Kibana, and I'm running into difficulty parsing the last field that Amazon provides in those logs, which is the "request" field.
It actually contains 3 parts: the HTTP method, the URL itself, and the HTTP version.
How can I separate those 3 into independent fields that I could use?
Thanks
Benyamin
What about something like this, to replace the last element of your grok filter?
\"%{WORD:verb} %{NOTSPACE:request} HTTP/%{NUMBER:httpversion}\"
I've never actually administered logstash before, but I pieced this together by looking at the source code for the built-in filters, some of which are evidently built on top of other built-in filters.
https://github.com/elasticsearch/logstash/blob/v1.4.1/patterns/grok-patterns
This pattern should extract three elements: "verb" would capture "GET", "httpversion" would capture the numeric HTTP version, and "request" would capture the rest.
I admit I'm also guessing about the backslashes to escape the double quote literals that are in the message, but that seems like the logical way to include a literal quote to match the ones that ELB puts in the logs. Note that the final double-quote I've shown isn't the closing quote of the filter string expression. That quote would go immediately after the above, since this matches the last thing on each line.
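To show what those three captures pull out, here is a rough Ruby equivalent of the grok pattern above, run against a made-up line ending in an ELB-style quoted "request" field (the field names mirror the grok names; WORD, NOTSPACE and NUMBER are approximated with \w+, \S+ and [\d.]+):

```ruby
# Approximate Ruby version of:
#   \"%{WORD:verb} %{NOTSPACE:request} HTTP/%{NUMBER:httpversion}\"
REQUEST_RE = /"(?<verb>\w+) (?<request>\S+) HTTP\/(?<httpversion>[\d.]+)"/

line = '2014-01-01T00:00:00Z my-elb 10.0.0.1:54321 "GET http://example.com:80/index.html?a=1 HTTP/1.1"'
m = REQUEST_RE.match(line)
m[:verb]         # "GET"
m[:request]      # "http://example.com:80/index.html?a=1"
m[:httpversion]  # "1.1"
```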
So I dumped all the emails from a DB into a txt file and I'm looking to sort them by email provider, basically anything that comes after the @ sign.
I know I can use regex to validate each email.
However, how do I indicate that I want to sort them by anything that comes after the @ sign?
I know I can use regex to validate each email.
Careful! The range of valid e-mail addresses is much wider than most people think. The only correct regexes for e-mail validation are on the order of a page in length. If you must use a regex, just check for the @ and one dot.
However, how do I indicate that I want to sort them by anything that comes after the @ sign?
email_addresses.sort_by { |addr| addr.split('@').last }
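Building on that one-liner, a sketch that also groups the addresses by provider (the file name emails.txt and the helper name by_provider are assumptions, not anything from the question):

```ruby
# Group e-mail addresses by the part after the "@".
def by_provider(addresses)
  addresses.group_by { |addr| addr.split("@").last }
end

addresses = ["alice@zmail.com", "bob@amail.com", "carol@amail.com"]
by_provider(addresses)
# {"zmail.com"=>["alice@zmail.com"], "amail.com"=>["bob@amail.com", "carol@amail.com"]}

# Reading from the dump file instead:
#   addresses = File.readlines("emails.txt", chomp: true)
```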
I'm using Url Rewriter to create user-friendly URLs in my web app and have the following rule set up
<rewrite url="/(?!Default.aspx).+" to="/letterchain.aspx?ppc=$1"/>
How do I replace $1 so that it is the last part of the URL?
So that the following
www.mywebapp.com/hello
would transform to
/letterchain.aspx?ppc=hello
I've read the docs but can't find anything.
The $1 in the "to" portion of the rule refers to the first capture group defined (i.e. the part in brackets).
The part that you actually want injected into $1 is the .+, which isn't in a capture group.
I'm not sure, but I think the (?! ) "match if suffix is absent" assertion isn't counted as numbered capture group $1, so this should work:
<rewrite url="/(?!Default.aspx)(.+)" to="/letterchain.aspx?ppc=$1"/>
If it doesn't, then just try inserting the second capture group into your "to" string instead:
<rewrite url="/(?!Default.aspx)(.+)" to="/letterchain.aspx?ppc=$2"/>
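As a quick sanity check of the first suggestion (demonstrated here in Ruby, whose regex engine treats lookaheads the same way .NET does): a (?! ) lookahead consumes nothing and captures nothing, so (.+) really is group 1.

```ruby
# The negative lookahead (?!Default\.aspx) is not a capture group;
# (.+) is therefore what $1 refers to.
RULE = %r{\A/(?!Default\.aspx)(.+)\z}

RULE.match("/hello")[1]      # "hello"
RULE.match("/Default.aspx")  # nil -- excluded by the lookahead
```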
Please note that if you are developing for IIS 7+, http://www.iis.net/download/urlrewrite/ is a module from Microsoft that performs faster rewrites with a lower footprint.
BTW, your regex has a small problem: you need to escape the dot character, that is "/(?!Default\.aspx)(.+)".
I have a pipe that filters an RSS feed and removes any item that contains "stopwords" that I've chosen. Currently I've manually created a filter for each stopword in the pipe editor, but the more logical way is to read these from a file. I've figured out how to read the stopwords out of the text file, but how do I apply the filter operator to the feed, once for every stopword?
The documentation states explicitly that operators can't be applied within the loop construct, but hopefully I'm missing something here.
You're not missing anything - the filter operator can't go in a loop.
Your best bet might be to generate a regex out of the stopwords and filter using that. e.g. generate a string like (word1|word2|word3|...|wordN).
You may have to escape any odd characters. Also, I'm not sure how long a regex can be, so you might have to chunk it over multiple filter rules.
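A sketch of building such a pattern from the stopword list, with the odd characters escaped (this is plain Ruby; how you get the resulting string into the pipe's filter rule is a separate step):

```ruby
# Join stopwords into one alternation, escaping regex metacharacters.
def stopword_pattern(words)
  "(" + words.map { |w| Regexp.escape(w) }.join("|") + ")"
end

stopword_pattern(["word1", "word2"])  # "(word1|word2)"
```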
In addition to Gavin Brock's answer, the following Yahoo Pipe
filters the feed items (title, description, link and author) according to multiple stopwords:
Pipes Info
Pipes Edit
Pipes Demo
Inputs
_render=rss
feed=http://example.com/feed.rss
stopwords=word1-word2-word3