I am using Sentry to track errors/events. However, for every error I get the runtime variables in the stack trace. Is there any method to mask certain variables (e.g. mask a field named user_secret_key)?
I am trying to get a histogram object that displays the distribution output of a timeMeasureEnd block, and have managed to get the histogram to display this output as a cumulative distribution and mean.
However, one objective of my model is to measure the hourly average and distribution of the timeMeasureEnd block, and I am unable to make the histogram object reset on an hourly basis using an event block.
At present I have the following:
An event block called HourlyReset in cyclic mode with a 1h timeout based on model time; this element is functioning correctly.
I also have a histogram, provisionally called chart, that is currently displaying timeMeasureEnd.distribution; this is also functioning correctly.
However, when I specify the action for the event block as chart.reset(); I get an error message:
Description: Type mismatch: cannot convert from TimeMeasureEnd to double. Location: Histogram Test/Main/data - Histogram Data
A second approach I tried was to have the timeMeasureEnd block write to a histogram data object and have the event block reset that histogram data object, but in this instance I get the same error message.
I am clearly missing something here, and I assume it is related to the agent object that is being injected into the system by the source block.
Any pointers in the right direction would be welcomed.
You can just call the resetStats() method of the timeMeasureEnd block. Put timeMeasureEnd.resetStats() inside your event's action; this clears the statistics collected inside the block, and the histogram displaying them will therefore be reset every time the event fires.
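For instance, a minimal sketch of the event's action, assuming the default block and event names from your question:

```java
// Action of the cyclic event HourlyReset (1h timeout, model time):
// clears the statistics collected by the timeMeasureEnd block, which is
// what the histogram draws, so the chart effectively restarts every hour.
timeMeasureEnd.resetStats();
```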
Good luck (and please accept this answer if it solves your problem :))
I have a PutGCSObject processor for which I want to capture the error into a flow file attribute.
As shown in the picture, when there is an error, the processor routes the flow file to the failure relationship with all of its pre-existing attributes as-is.
I want the error message to be part of that same flow file as an attribute. How can I achieve that?
There is actually a way to get it.
Here is how I do it:
1: I route all ERROR connections to a main "monitoring process group"
2: Here is my "monitoring process group"
In an UpdateAttribute processor I capture filename as initial_filename.
Then in my next step I query the bulletins
I then parse the output as individual attributes.
After I have the parsed bulletin output, I use a RouteOnAttribute processor to drop all bulletins I don't need (some of them I have already used and notified on).
Once I only have my actual ERROR bulletin left, I use ExecuteStreamCommand to run a Python script (using the nipyapi module) that gets more info about the error, such as where it sits in my flow, its hierarchy, a description of the processor that failed, and some processor stats; I also have a metadata catalog about each processor/process group with their custodians and business use case.
This data is then posted to Sumo Logic for logging, and I also trigger a series of notifications (Slack + a PagerDuty hook to create an incident lifecycle).
I hope this helps
There's no universal way to append error messages as flowfile attributes. Also, we tend to strongly avoid anything like that because of the potential to bubble up error messages with sensitive data to users who might not be authorized to see those details.
Some spans reported to Google Trace represent method calls that ended in an error.
Is there a way to get Google Trace to visually set these spans apart from successful spans (a different color, an error icon similar to AWS X-Ray...)?
I tried setting these attributes, but visually they made no difference:
Span status
/error/message attribute
/error/name attribute
/http/status_code attribute
You could also use a Trace Filter, which filters traces by terms. For instance, you could filter on a span together with a latency value, and Stackdriver Trace will narrow the list accordingly.
I am currently running internal phishing campaigns within my company and am trying to improve the process. One of the issues is that if I include tracking pixels to track whether an email has been opened, the image invariably gets blocked by Outlook and requires the user to download it manually: "Click here to download pictures. To help protect your privacy, Outlook prevented automatic download of some pictures in this message."
Now, I can get around this by adding the spoofed email address to the Safe Senders list, but this means I have to do it for each campaign and then push it out via GPO to everyone. Does anybody know of a way to whitelist the mail server so that any email received from, say, 10.10.150.200 will have its images automatically downloaded?
You will need to set the PR_BLOCK_STATUS MAPI property - see an excerpt from [MS-OXOMSG].pdf below. Keep in mind that the property must be set on the client side after the message is received - you cannot set the property when sending the message.
Note that the Outlook Object Model (OOM) won't help you, since it rounds off all date/time values, and you need the native FILETIME value to calculate the value of the PR_BLOCK_STATUS MAPI property. And to use Extended MAPI, you will need to use C++ or Delphi.
If using Redemption (I am its author) is an option (it can be used from any language), you can set that property using the RDOMail.DownloadPictures property:
2.2.1.1 PidTagBlockStatus
Type: PtypInteger32
Indicates the user's preference for viewing external content (such as links
to images on an HTTP server) in the message body. A client MAY ignore this
value and always allow or block external content based on other factors
(such as whether the sender is on a safe list). If this property is used,
then the default action is to block the external content. However, if the
value of this property falls within a certain range, then viewing external
content is allowed. The allowed value is computed from
PidTagMessageDeliveryTime: since the sender of a message does not have
knowledge of this value, the sender cannot reliably set PidTagBlockStatus to
the allowed values.
To compute the allowed values, convert the value of
PidTagMessageDeliveryTime to a PtypDouble, floatdate, where the date is
represented as the number of days from midnight, December 30, 1899. Apply
the following formula: result = ((floatdate - floor(floatdate)) * 100000000) + 3,
where floor(x) returns the largest integer ≤ x. Convert the PtypDouble
value result to a 32-bit integer computedvalue. Clients SHOULD set
PidTagBlockStatus to computedvalue to allow external content. However, when
determining whether to accept external content, clients SHOULD allow
external content if the absolute value of the difference between
computedvalue and the value of PidTagBlockStatus is 1 or less.
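To make the arithmetic concrete, here is a minimal sketch in Java (the class and method names are mine; it assumes PidTagMessageDeliveryTime has already been converted to an OLE floatdate, i.e. days since midnight, December 30, 1899):

```java
// Hypothetical helper illustrating the PidTagBlockStatus computation
// described in the excerpt above.
public class BlockStatus {

    // Value a client would write to PR_BLOCK_STATUS to allow external
    // content, given the delivery time as an OLE floatdate (days since
    // midnight, December 30, 1899).
    static int computeBlockStatus(double deliveryFloatdate) {
        double result =
            ((deliveryFloatdate - Math.floor(deliveryFloatdate)) * 100000000.0) + 3.0;
        return (int) result; // the spec's "computedvalue"
    }

    // Per the excerpt, readers SHOULD allow external content when the
    // stored value is within 1 of the computed value.
    static boolean allowsExternalContent(int storedValue, double deliveryFloatdate) {
        return Math.abs(computeBlockStatus(deliveryFloatdate) - storedValue) <= 1;
    }
}
```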
To avoid re-reading messages that were processed but whose offsets were never committed when a Kafka Streams application is killed, I want to get the offset for each message along with its key and value, so that I can store it somewhere and use it to avoid reprocessing already-processed messages.
Yes, this is possible. See the FAQ entry at http://docs.confluent.io/current/streams/faq.html#accessing-record-metadata-such-as-topic-partition-and-offset-information.
I'll copy-paste the key information below:
Accessing record metadata such as topic, partition, and offset information?
Record metadata is accessible through the Processor API.
It is also accessible indirectly through the DSL thanks to its
Processor API integration.
With the Processor API, you can access record metadata through a
ProcessorContext. You can store a reference to the context in an
instance field of your processor during Processor#init(), and then
query the processor context within Processor#process(), for example
(same for Transformer). The context is updated automatically to match
the record that is currently being processed, which means that methods
such as ProcessorContext#partition() always return the current
record’s metadata. Some caveats apply when calling the processor
context within punctuate(), see the Javadocs for details.
If you use the DSL combined with a custom Transformer, for example,
you could transform an input record’s value to also include partition
and offset metadata, and subsequent DSL operations such as map or
filter could then leverage this information.
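For example, here is a minimal sketch (the class name is mine, and it assumes a Kafka Streams version in which the Transformer interface is available) of a Transformer that tags each record's value with the partition and offset it was read from, so a downstream sink can persist them and skip already-processed messages after a restart:

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class OffsetTaggingTransformer
        implements Transformer<String, String, KeyValue<String, String>> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        // Keep a reference; the context is updated automatically to match
        // the record currently being processed.
        this.context = context;
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        // Enrich the value with the record's source partition and offset.
        String tagged = value
                + "|partition=" + context.partition()
                + "|offset=" + context.offset();
        return KeyValue.pair(key, tagged);
    }

    @Override
    public void close() {}
}
```

You would wire it in with stream.transform(OffsetTaggingTransformer::new) and write the enriched stream to wherever you persist your processing progress.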