NDC (nested diagnostic context) in Laravel - is it possible?

In log4net you can add an NDC to filter all the logs in a specific context, for example per session, request, CLI command, etc.
It's basically another field in the log that holds a random string, and the string is shared within the contexts listed above.
When you search for this string you can see the entire request's logs.
Is this possible in Laravel?
Thanks
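This looks possible with Log::withContext, which newer Laravel versions (8.49+) ship for exactly this purpose. A minimal sketch, assuming a middleware that assigns a per-request ID (the class name AssignRequestId is illustrative):

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Str;

class AssignRequestId
{
    public function handle(Request $request, Closure $next)
    {
        // Generate a random ID once per request and attach it to every
        // log entry written while handling this request - the NDC-style field.
        Log::withContext(['request-id' => (string) Str::uuid()]);

        return $next($request);
    }
}

Searching the logs for a given request-id value then returns every entry written during that request, which is the log4net NDC behaviour described above. On older Laravel versions the same effect can be achieved with a custom Monolog processor.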

Related

Attach meta data / custom data to slack messages sent through the API

I am developing a series of Slack apps for my workspace, and some of them are meant to interact with the content (messages) delivered by the other apps: extracting content IDs that may be referred to by other messages.
A concrete example:
Suppose I have an app A "FindUser" that is capable of giving me the user profile when a Slack user types find me@example.com, and it replies in the thread with a formatted view of the user profile.
I am developing an app B "EditTags", which basically gives me a right-click option with "edit tags" (see Slack's Interactive Components/Actions), the idea being that a user could first ask app A to find a user, and then right-click on the reply from app A and click the "edit tags" action given by the other app. What this app B does is actually retrieve the tags for the user mentioned by the previous message from app A, and in another reply to the thread it gives some controls to either delete an existing tag OR it shows a select with autocomplete to add new tags.
The B app needs to retrieve the user ID that the A app mentioned previously. So I need some way to pass that data directly in the Slack message. When looking at the examples, Slack does not seem to provide a way to add arbitrary "metadata" to a message, am I wrong? Do you have a workaround for this? I mean I could totally send the user ID, say, in the footer, so I can just read the footer, but I was planning to use the footer for something else... Is there a way to pass metadata through properties that would be hidden to the end user?
Although this may not be relevant, I am building a Slack Node.js app using the Node Slack SDK (and especially the @slack/interactive-messages package).
For the most part the Slack API does not provide any official means to attach custom data / meta data to messages. But with some simple "hacks" it is still possible. Here is how:
Approach
The basic approach is to use an existing field of the message as container for your data. Obviously you want to pick a field that is not directly linked to Slack functionality.
Some fields are not always needed, so you can just use such a field as the data container. Or if it is needed, you can include the functional value of that field along with your custom data in the data container.
For example, for message buttons you could use the value field of a button and structure your code in a way that you do not need it for its original function. Usually it is sufficient to know which button the user clicked (via the name field), so the value field is free to be used for your custom data. Or you can include the functional value of your button along with the custom data in a data container (e.g. a JSON string) in that field.
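For illustration, a minimal Node.js sketch of packing custom data into a button's value field as a JSON string (the callback_id, action name and user ID are made up for the example):

const attachment = {
  text: 'User found: John Doe',
  callback_id: 'user_actions',
  actions: [
    {
      name: 'edit_tags',   // enough to know which button was clicked
      text: 'Edit tags',
      type: 'button',
      // the value field doubles as the data container:
      // a JSON string holding the custom data
      value: JSON.stringify({ userId: 'U0123456' })
    }
  ]
};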
Serialization
All messages are transported through HTTP and mostly encoded as UTF-8 in JSON. So you want to serialize / de-serialize your data accordingly, especially if it is binary data. If possible I would recommend using JSON.
Length
The maximum allowed length of most fields is documented in the official Slack API documentation, e.g. the value field for message buttons can contain up to 2,000 characters. Keep in mind that you need to consider the length of your data after serialization, e.g. if you convert binary data into Base64 so it can be transported over HTTP you will end up with about 1.33 characters for every byte.
Contents
In general I would recommend keeping your data container as small as possible and not including the actual data, but only IDs. Here are two common approaches:
Include IDs of your data objects and load the actual objects from a data store when the request is later processed.
Include the ID of the server session; when processing the request you can restore the corresponding server session, which contains all data objects.
In addition you might need to include functional values so that the functionality of the field you are using still works (e.g. the value of a menu option, see below).
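On the receiving side the container is simply parsed back. A rough sketch, assuming an interactive-message payload and a hypothetical loadUser lookup that fetches the object by ID from your own data store:

// inside your action handler: payload is the parsed interactive-message body
const action = payload.actions[0];
const data = JSON.parse(action.value);   // the custom data container

// only an ID was transported; the actual object lives in your own store
loadUser(data.userId).then((user) => {
  // ... build the reply (e.g. the "edit tags" controls) for that user
});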
Implementation
Dialogs
Dialogs provide an official field for custom data called state. Up to 3,000 characters.
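A rough sketch of passing custom data through a dialog's state field with the Node WebClient (callback_id, element names and the user ID are illustrative; payload is assumed to be the parsed action payload carrying the trigger_id):

const { WebClient } = require('@slack/web-api');
const web = new WebClient(process.env.SLACK_BOT_TOKEN);

web.dialog.open({
  trigger_id: payload.trigger_id,
  dialog: {
    callback_id: 'edit_tags_dialog',
    title: 'Edit tags',
    // state is the official container for custom data (up to 3,000 characters)
    state: JSON.stringify({ userId: 'U0123456' }),
    elements: [
      { type: 'text', name: 'tag', label: 'Tag' }
    ]
  }
}).catch(console.error);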
Message buttons
For message buttons you can use the value field of the message action. Up to 2,000 characters. It is also possible to use the name field, but I would advise against it, because the maximum allowed length of that field is not documented.
Message menus
For Message menus you can use the value field of an option or the name field of the menu action.
Usually the value field is the better approach, since you have a documented max length of 2,000 and it gives you more flexibility. However, you will need to combine your custom data with the actual functional value for each option. Also, this will not work for dynamic select elements (like users), where you cannot control the value field.
When using the name field, keep in mind that its maximum allowed length is not documented, so you want to keep your data as short as possible. Also, if you want to use more than one menu per attachment you need to include the actual name of the menu in your data container.
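A short sketch of what combining the functional value with custom data could look like for a static message menu (option and field names are illustrative):

const menu = {
  name: 'tag_action',
  text: 'Pick a tag action',
  type: 'select',
  options: [
    {
      text: 'Add tag',
      // functional value plus custom data packed into one JSON container
      value: JSON.stringify({ option: 'add', userId: 'U0123456' })
    },
    {
      text: 'Remove tag',
      value: JSON.stringify({ option: 'remove', userId: 'U0123456' })
    }
  ]
};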
Normal message attachments
Normal message attachments do not contain any suitable field to be used as container for custom data, since all fields are linked to Slack functionality.
Technically you could use the fallback field, but only if you are 100% sure that your app is never used on a client that can not display attachments. Otherwise your data will be displayed to the user.

access fields from log using ruby filter

What I am trying to do is pass my grok fields in some way or another to an external ruby filter script and, based on these fields, set specific tags. The problem is that I can only get the whole log message with the event API.
My question is: is it possible to access fields from the already processed log message in the ruby filter or do I have to parse the whole message myself, which would not be optimal because every log message is processed twice? Alternatively I could completely dump the grok filter and do everything myself in the script.
Yes, it is possible.
You can get read-only access to any field using the Event API:
filter {
  ruby {
    code => 'event.get("foo")'
  }
}
The field can also be a nested field reference such as [field][bar]:
event.get("[foo][bar]")

Is there a way to search for transactions by custom field?

I store a specific custom field for each transaction. I'd like to conduct a search by this field. I wouldn't like to retrieve too many transactions (I can filter by payment method ID, but still) and iterate through them on the application side. So I read the documentation and didn't find a way to search by custom field (only by predefined fields). I didn't try it out, but perhaps it's possible to do so by following the same pattern, like
var stream = gateway.transaction.search(function (search) {
  search.myCustomField().is("custom_field_value");
  // or search.customFields.myCustomField().is("custom_field_value");
});
Thanks in advance
I work as a developer for Braintree. Searching on custom fields is not supported at this time. You can see all of the searchable transaction attributes listed here.
If you would like to discuss alternatives, I recommend emailing our support team at support@braintreepayments.com to see if there is another method to achieve what you are trying to do.
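Since custom fields are not searchable, the fallback already hinted at in the question is to narrow the search by a supported attribute and filter on the application side. A rough sketch reusing the gateway from above (the token and custom field name are illustrative):

var stream = gateway.transaction.search(function (search) {
  // narrow by a searchable attribute first to keep the result set small
  search.paymentMethodToken().is("the_payment_method_token");
});

stream.on("data", function (transaction) {
  // custom fields come back on each transaction, so filter here
  if (transaction.customFields &&
      transaction.customFields.myCustomField === "custom_field_value") {
    // handle the matching transaction
  }
});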

How can I log the value of a variable sent in an HTTP Request sent from JMeter, if the value was first read in from a csv file

I would like to read the exact value of a variable I use to pass through an HTTP Request. I first read in many values of variables using the CSV Data Set Config. For the username, it is in the form of an email address. So, I have a variable called "email" in the Data Set Config. In the actual HTTP Request, for "name", I call it "username". For the "Value" field for this same "username", I added a time() function to it like this so I would end up creating unique users in my tests:
${email}${__time()}
When I view the "Request" in a View Results Tree, I can see my parameter is listed correctly:
username=email1%40email.com1390854377360
I do not care if this is correct in real world terms. I already know that is not a valid email. That is ok for now.
What I want to know is: how can I log that email that I just created on the fly? I would also like to avoid having to pull in the whole request every time and then use some type of Regular Expression Extractor. It just seems like there should be an easy way to do this.
I think there are two ways:
Beanshell Pre/Post Processor: you can write custom code that logs all your variables to a custom log file (see the sketch below).
Simple Data Writer: you can configure it and check the "Save URL", "Save Field Names" and "Save Response Data" checkboxes; that will give you the complete data, but later post-processing of the result file is required to get all usernames (emails in your case).
The first approach is easier and allows you to create your own logging format (easy to retrieve and reuse elsewhere).
The second approach is somewhat tedious and requires post-processing.
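For the first approach, a minimal Beanshell PreProcessor sketch: instead of concatenating ${email}${__time()} directly in the request, build the value once, store it as a variable, and log it; the HTTP Request then uses ${username} (variable names match the question, the log message is arbitrary):

// Beanshell PreProcessor attached to the HTTP Request sampler
// vars gives access to JMeter variables, log writes to jmeter.log

String username = vars.get("email") + System.currentTimeMillis();
vars.put("username", username);
log.info("Generated username: " + username);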

How to use SLF4J to log to two different files based on type of message?

I am running a client-server program in Spring.
I am trying to implement SLF4J + Logback for logging.
Now the thing is my client (which in real life would be a device/sensor) will send me data as a string containing various fields separated by commas; the exact pattern is like this: deviceID,DeviceName,DeviceLocation,TimeStamp,someValue
Now what I want is to filter the message in Logback using deviceID and then write the whole string to a file whose name includes the device ID. So, for example, 1,indyaah,Scranton,2011-8-10 12:00:00,34 should be logged to the file device1.log dynamically.
So how can I use evaluateFilter in Logback/Janino?
Thanks in advance.
Logback provides all the features you need out of the box. You need to learn about SiftingAppender and probably MDC.
SiftingAppender wraps several homogeneous appenders and picks a single one for each logging message based on a user-defined criterion (called a discriminator). The documentation is pretty good, and it has some nice examples.
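A minimal logback.xml sketch of a SiftingAppender keyed on an MDC entry; it assumes your receiving code parses the incoming string and calls MDC.put("deviceId", id) before logging (the key and file names are illustrative):

<configuration>
  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <!-- the discriminator picks the target appender based on the MDC value -->
    <discriminator>
      <key>deviceId</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <!-- one FileAppender is created per distinct deviceId -->
      <appender name="FILE-${deviceId}" class="ch.qos.logback.core.FileAppender">
        <file>device${deviceId}.log</file>
        <encoder>
          <pattern>%d %-5level %logger{35} - %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>

  <root level="INFO">
    <appender-ref ref="SIFT" />
  </root>
</configuration>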
