logstash-logger and filtering events - ruby

I'm using logstash-logger in a Ruby app. It's great. However, my debug messages are being forwarded, and no setting seems to turn that off. This increases my logging volume three- to five-fold, both driving up costs and slowing down the app.
Is there a way to filter log messages to eliminate this noise?
I have set config.logger_level to :info, but debug messages are still being forwarded.
Any suggestions?
P.S. My config below is still sending debug and info messages.
@logstash_logger = ::LogStashLogger.new(
  type: :multi_delegator,
  outputs: [
    { type: :stdout },
    { type: :udp, host: logging_host, port: logging_port }
  ])
@logstash_logger.level = ::Logger::WARN
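For reference, the standard Ruby Logger drops anything below its severity threshold, so it is worth confirming the level is applied to the exact logger instance the app writes to. A minimal stdlib sketch of that mechanism, not specific to logstash-logger's multi_delegator:

require 'logger'

logger = Logger.new($stdout)
logger.level = Logger::INFO      # severity threshold

logger.debug('not emitted')      # below the threshold, dropped
logger.info('emitted')           # at or above the threshold, written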

Related

Send logs to datadog with custom formatter shows only the first log of multiple ones

I am trying to send logs to DD (Datadog) in such a way that they are received as JSON and therefore shown properly in the portal through attributes.
My logger is a simple Logger.new(STDOUT, level: Logger::INFO).
If I stick to its standard formatter, the output will be in the form
I, [2022-07-30T22:43:35.216846 #1] INFO -- my-app: {"user":"1234"}
which is not really parsable by DD, since it is not proper JSON. In this case, however, all the logs at least appear on the DD portal.
Now I am trying to format the logs as JSON in this way:
def self.logger
  @logger ||= Logger.new(STDOUT, level: Logger::INFO)
  @logger.progname = 'my-app'
  @logger.formatter = proc do |severity, datetime, progname, msg|
    {timestamp: datetime.to_s, progname: progname, severity: severity, correlation: Datadog::Tracing.log_correlation, message: msg}.to_json
  end
  @logger
end
This is my logger, and thanks to it the logs are seen properly in DD and parsed correctly, because they are formatted as proper JSON in my app.
The problem with this approach, though, seems to be that the logs are sent in one full block, meaning that only the very first log is visible. Let's say I want to log this:
my_hash = {"message" => '1', "prop" => '1234'}.to_json
logger.info(my_hash)
my_hash = {"message" => '2', "prop" => '12345'}.to_json
logger.info(my_hash)
Only the first log is shown correctly on the DD portal, parsed with its message and prop attributes, but there is nothing about the second log.
Here is the thing: if I look at the output of my app locally in the console, I see this:
{"timestamp":"2022-07-31 01:15:39 +0200","progname":"my-app","severity":"INFO","correlation":"dd.service=my-app dd.trace_id=2976451780376429536 dd.span_id=0","message":"{"message":"1","prop":"1234"}"}{"timestamp":"2022-07-31 01:15:39 +0200","progname":"my-app","severity":"INFO","correlation":"dd.service=my-app dd.trace_id=2976451780376429536 dd.span_id=0","message":"{"message":"2","prop":"12345"}"}127.0.0.1 - - [31/Jul/2022:01:15:39 +0200] "GET /controller/test_controller HTTP/1.1" 200 - 0.0024
So the 2nd log actually does get output! But DD somehow sees only the first log.
(I know there is even a 3rd line shown here, but that's just Sinatra's automatic logging for every HTTP call reaching the API.) What do you think the problem is?
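One observation, not taken from the original thread: Ruby's Logger expects a custom formatter to supply its own line terminator, and the console output above shows the JSON documents running together on a single line, which could be why DD only picks up the first one. A minimal sketch of that change to the formatter above:

@logger.formatter = proc do |severity, datetime, progname, msg|
  # append "\n" so each JSON document lands on its own line;
  # the default formatter adds this, a custom proc must do it itself
  {timestamp: datetime.to_s, progname: progname, severity: severity,
   correlation: Datadog::Tracing.log_correlation, message: msg}.to_json + "\n"
end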

ActiveMQ jolokia gives different message response depending on environment

I have to get (not consume) part of a message that is in a queue. I reused a bash script that was proposed as an answer here, using /api/jolokia/: ActiveMQ Jolokia API How can I get the full Message Body
The part of the response that I am interested in is the MsgId in value:text:
"request": {
"mbean": "org.apache.activemq:brokerName=MyBrokerName,destinationName=MyQueueName,destinationType=Queue,type=Broker",
"type": "exec",
"operation": "browseMessages()"
},
"value": [
{
"jMSCorrelationIDAsBytes": [],
***some other objects here ***
"text": "<?xml version=\"1.0\"?>\r\n<RepositoryOperationRq xmlns=\"http://www.ACORD.org/\">\r\n <MsgId>xxx28bab-e62c-4dbc-a2aa-xxx</MsgId>\r\n <CreationDtTime>2020-01-01T11:11:11-11:00</CreationDtTime>\r\n
There is no problem on the DEV env ActiveMQ, but when I try the same on the UAT env ActiveMQ there is no value:text object in the response at all, and some other objects' values are different, like:
"connectionControl": false
and
"connectionControl": "false"
I thought it might be because of the maxDepth parameter, so I increased it. Unfortunately, when I set maxDepth=5 I got this error:
"error_type": "java.lang.IllegalStateException",
"error": "java.lang.IllegalStateException : Error while extracting next from org.apache.activemq.broker.region.cursors.FilePendingMessageCursor#3bb9ace4",
"status": 500
and the whole ActiveMQ broker stopped receiving any messages; I had to force-restart it. The ActiveMQ configs should be the same on both envs, and the version is 5.13.3. Do you know why that text object is missing?
I think the difference here is down to the content of the messages in each environment. The browseMessages operation simply returns the messages in the corresponding destination (e.g. MyQueueName).
If the message is not a javax.jms.TextMessage, then it won't have the text field. If a property is false instead of "false", that just means the property value was a boolean rather than a String.

Mulesoft EC2 *describeInstances* with *filter* option

I'm having problems using the EC2 connector with filters for DescribeInstances. Specifically, I'm trying to find all instances that have the tag "classId" set.
I've also tried to find all instances that have the classId tag set to a specific string, e.g. "123".
Below are the XMLs of the describe-instances operation for both scenarios.
tag-key ------
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
    <ec2:filters>
        <ec2:filter name="tag-key" values="#[['classId']]">
        </ec2:filter>
    </ec2:filters>
</ec2:describe-instances>
tag:classId:----
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
    <ec2:filters>
        <ec2:filter name="tag:classId">
            <ec2:values>
                <ec2:value value="#['123']" />
            </ec2:values>
        </ec2:filter>
    </ec2:filters>
</ec2:describe-instances>
Each time I receive an error like the following (for tag:classId):
ERROR 2021-03-29 08:32:49,693 [[MuleRuntime].uber.04: [ec2-play].ec2-playFlow.BLOCKING #1092a5bc] [processor: ; event: df5e2df0-908a-11eb-94b5-38f9d38da5c3] org.mule.runtime.core.internal.exception.OnErrorPropagateHandler: 
********************************************************************************
Message        : The filter 'null' is invalid (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 33e3bbfb-99ea-4382-932f-647662810c92; Proxy: null)
Element        : ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances)
Element DSL      : <ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
<ec2:filters>
<ec2:filter name="tag:classId">
<ec2:values>
<ec2:value value="#['123']"></ec2:value>
</ec2:values>
</ec2:filter>
</ec2:filters>
</ec2:describe-instances>
Error type      : EC2:INVALID_PARAMETER_VALUE
FlowStack       : at ec2-playFlow(ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances))
 (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
NOTE: The code works without a filter, returning all instances, but that isn't what I want or need. The more filtering I can do, the faster the response.
Does anyone have samples of the filter option working? Can you tell me what I'm doing wrong?
Thanks!
This surely is a bug. I tried the same and it did not work for me either. I enabled debug logging and found that the connector is not sending Filter.1.Name=tag:classId as a query parameter in the request. Here is the debug log I found (notice there is no Filter.1.Name=tag:classId in the query string):
DEBUG 2021-04-02 21:55:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Value.1=123"
However, I tried using the Expression or Bean Reference option and set the expression directly as [{name: 'tag:classId', values: ['123']}], like this:
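(The screenshot from the original post is not reproduced here. As a rough sketch, assuming rather than quoting what the generated configuration looks like once the filters parameter is switched to expression mode:)

<ec2:describe-instances doc:name="Describe instances" config-ref="Amazon_EC2_Configuration">
    <!-- filters supplied as one expression instead of nested ec2:filter elements -->
    <ec2:filters>#[[{name: 'tag:classId', values: ['123']}]]</ec2:filters>
</ec2:describe-instances>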
and it worked correctly. Here is the same debug log after this change:
DEBUG 2021-04-02 21:59:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Name=tag%3AclassId&Filter.1.Value.1=123"
Also, I want to point out some very weird behaviour: this does not work if you format [{name: 'tag:classId', values: ['123']}] across multiple lines in the expression; that gives an error during deployment.

Always some test cases getting jasmine.DEFAULT_TIMEOUT_INTERVAL

I am creating end-to-end (e2e) tests using Protractor with Jasmine and Angular 6. I have written almost 10 test cases. They all work, but some cases always fail, and they fail because of the Jasmine timeout. I have configured the timeout value as below, but I am not getting consistent results: sometimes a test case succeeds, and on the next run it may pass or fail. I have searched on Google but have not found any useful solution.
I have defined some common helpers for waiting:
waitForElement(element: ElementFinder) {
  browser.waitForAngularEnabled(false);
  browser.wait(() => element.isPresent(), 100000, 'timeout: ');
}

waitForUrl(url: string) {
  browser.wait(() => protractor.ExpectedConditions.urlContains(url), 100000, 'timeout');
}
And in the protractor.conf.js file I have defined this:
jasmineNodeOpts: {
  showColors: true,
  includeStackTrace: true,
  defaultTimeoutInterval: 20000,
  print: function () {
  }
}
I am getting the errors below:
- Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
- Failed: stale element reference: element is not attached to the page document
(Session info: chrome=76.0.3809.100)
(Driver info: chromedriver=76.0.3809.12 (220b19a666554bdcac56dff9ffd44c300842c933-refs/branch-heads/3809#{#83}),platform=Windows NT 10.0.17134 x86_64)
I have got the solution:
I had configured a wait timeout of 100000 ms for each individual element lookup, while the whole spec timeout was only 20000 ms. So I followed this process:
Keep the full spec timeout larger than the sum of the element-wait timeouts. I configured defaultTimeoutInterval in jasmineNodeOpts to be greater than the sum of the wait timeouts used in a test case, and then added a large value, allScriptsTimeout: 2000000, inside exports.config. That resolved my problem.
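A minimal sketch of how those two settings could sit in protractor.conf.js; allScriptsTimeout uses the value quoted above, while the defaultTimeoutInterval shown here is only illustrative, the point being that it stays above the sum of the element-wait timeouts used in one spec:

exports.config = {
  // Protractor-level cap on asynchronous script execution (value from the answer)
  allScriptsTimeout: 2000000,
  jasmineNodeOpts: {
    showColors: true,
    includeStackTrace: true,
    // illustrative: keep this above the combined element-wait timeouts of a spec
    defaultTimeoutInterval: 300000,
    print: function () {}
  }
};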
NB: I gave this answer because I think it may help others who face this kind of problem.

How to get HTTP StatusCodes in ember-data

When I invoke
App.store.createRecord(App.User, { name: this.get("name") });
App.store.commit();
how do I know whether it was successful, and how do I wait for the async response?
Very limited error handling was recently added to DS.RESTAdapter in ember-data master.
When creating or updating records (with bulk commit disabled), if a status code between 400 and 599 is returned, the following will happen:
A 422 Unprocessable Entity will transition the record to the "invalid" state and will add any errors returned from the server to the record's errors property.
The adapter assumes the server will respond with JSON in the following format:
{
  errors: {
    name: ["can't be blank"],
    password: ["must be at least 8 characters", "must contain a number"]
  }
}
(The error messages themselves could be arrays of strings or just strings. ember-data doesn't currently care which.)
To detect this state:
record.get('isValid') === false
All other status codes will transition the record to the "error" state.
To detect this state, use:
record.get('isError') === true
More cases may eventually be handled by ember-data out of the box, but for the moment if you need something specific, you'll have to extend DS.RESTAdapter, customizing its didError function to add it in yourself.
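As a rough sketch of putting those flags together: the isValid, isError, and errors names come from the answer above, while the observer wiring is just one assumed way to react to the asynchronous result and may differ across early ember-data versions.

var user = App.store.createRecord(App.User, { name: this.get("name") });
App.store.commit();

// react once the commit round-trip settles
user.addObserver('isValid', function() {
  if (!user.get('isValid')) {
    // 422 response: server-side validation messages are on the errors property
    console.log(user.get('errors'));
  }
});
user.addObserver('isError', function() {
  if (user.get('isError')) {
    // any other 4xx/5xx response put the record in the "error" state
  }
});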
