ActiveMQ Jolokia gives different message response depending on environment - bash

I have to get (not consume) part of a message that is in a queue. I reused the bash script that was posted as an answer here, using /api/jolokia/: ActiveMQ Jolokia API How can I get the full Message Body
The part of the response I am interested in is the MsgId in value:text:
"request": {
"mbean": "org.apache.activemq:brokerName=MyBrokerName,destinationName=MyQueueName,destinationType=Queue,type=Broker",
"type": "exec",
"operation": "browseMessages()"
},
"value": [
{
"jMSCorrelationIDAsBytes": [],
***some other objects here ***
"text": "<?xml version=\"1.0\"?>\r\n<RepositoryOperationRq xmlns=\"http://www.ACORD.org/\">\r\n <MsgId>xxx28bab-e62c-4dbc-a2aa-xxx</MsgId>\r\n <CreationDtTime>2020-01-01T11:11:11-11:00</CreationDtTime>\r\n
There is no problem on the DEV environment's ActiveMQ, but when I tried to do the same on the UAT environment's ActiveMQ there was no value:text object in the response at all, and some other objects' values are different, like:
"connectionControl": false
and
"connectionControl": "false"
I thought it might be because of the maxDepth parameter, so I increased it. Unfortunately, when I set maxDepth=5 I got this error:
"error_type": "java.lang.IllegalStateException",
"error": "java.lang.IllegalStateException : Error while extracting next from org.apache.activemq.broker.region.cursors.FilePendingMessageCursor#3bb9ace4",
"status": 500
and the whole ActiveMQ broker stopped receiving any messages; I had to force restart it. The ActiveMQ configs should be the same on both environments, and the version is 5.13.3. Do you know why that text object is missing?

I think the difference here comes down to the content of the messages in each environment. The browseMessages() operation simply returns the messages in the corresponding destination (e.g. MyQueueName).
If a message is not a javax.jms.TextMessage then it won't have the text field. If a property is false instead of "false", that just means the property value was a boolean rather than a String.
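For example, you could pull just the text bodies over Jolokia with curl and jq and see which environment actually returns any. This is a minimal sketch, assuming the default web console port and admin/admin credentials (the broker and queue names are taken from your request; adjust all of these for your setup):
#!/usr/bin/env bash
# Hypothetical endpoint/credentials - adjust for your environment.
BROKER_URL="http://localhost:8161/api/jolokia"
MBEAN="org.apache.activemq:brokerName=MyBrokerName,destinationName=MyQueueName,destinationType=Queue,type=Broker"
# browseMessages() returns every browsable message; select(has("text"))
# skips non-TextMessage entries, which have no text field at all.
curl -s -u admin:admin "${BROKER_URL}/exec/${MBEAN}/browseMessages()" \
  | jq -r '.value[] | select(has("text")) | .text' \
  | grep -o '<MsgId>[^<]*</MsgId>'
If this prints MsgIds on DEV but nothing on UAT, the UAT queue almost certainly holds non-text messages (e.g. BytesMessage) rather than TextMessages.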

Related

How to suppress aws lambda cli output

I want to use the aws lambda update-function-code command to deploy the code of my function. The problem is that the AWS CLI always prints out some information after deployment, and that information contains sensitive data, such as environment variables and their values. That is not acceptable, as I'm going to use public CI services, and I don't want that info to become available to anyone. At the same time, I don't want to solve this by redirecting everything from the AWS command to /dev/null, for example, because then I would lose information about errors and exceptions, which would make it harder to debug if something goes wrong. What can I do here?
P.S. SAM is not an option, as it would force me to switch to another framework and completely change the workflow I'm using.
You could target the output you'd like to suppress by replacing those values with jq.
For example, if you had output from the CLI command like below:
{
  "FunctionName": "my-function",
  "LastModified": "2019-09-26T20:28:40.438+0000",
  "RevisionId": "e52502d4-9320-4688-9cd6-152a6ab7490d",
  "MemorySize": 256,
  "Version": "$LATEST",
  "Role": "arn:aws:iam::123456789012:role/service-role/my-function-role-uy3l9qyq",
  "Timeout": 3,
  "Runtime": "nodejs10.x",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "CodeSha256": "5tT2qgzYUHaqwR716pZ2dpkn/0J1FrzJmlKidWoaCgk=",
  "Description": "",
  "VpcConfig": {
    "SubnetIds": [],
    "VpcId": "",
    "SecurityGroupIds": []
  },
  "CodeSize": 304,
  "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
  "Handler": "index.handler",
  "Environment": {
    "Variables": {
      "SomeSensitiveVar": "value",
      "SomeOtherSensitiveVar": "password"
    }
  }
}
You might pipe that to jq and replace values only if the keys exist:
aws lambda update-function-code <args> | jq '
  if .Environment.Variables.SomeSensitiveVar? then .Environment.Variables.SomeSensitiveVar = "REDACTED" else . end |
  if .Environment.Variables.SomeOtherSensitiveVar? then .Environment.Variables.SomeOtherSensitiveVar = "REDACTED" else . end'
You know which data is sensitive and will need to set this up appropriately. You can see an example of what data is returned in the CLI docs, and the API docs are also helpful for understanding what the structure can look like.
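If you would rather not enumerate every key, a variation on the same idea redacts all environment variables at once with jq's with_entries (a sketch against the response shape above, not any one function's actual output):
# Replace every value under Environment.Variables, whatever the keys are.
aws lambda update-function-code <args> | jq '
  if .Environment.Variables? then
    .Environment.Variables |= with_entries(.value = "REDACTED")
  else . end'
Errors from the CLI still reach you on stderr, since jq only rewrites the success payload.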
Lambda environment variables show up everywhere and cannot be considered private.
If your environment variables are sensitive, you could consider using AWS Secrets Manager.
In a nutshell:
- Create a secret in the secret store. It has a name (public) and a value (secret, encrypted, with proper user access control).
- Allow your lambda to access the secret store.
- In your lambda env, store the name of your secret, and have your lambda fetch the corresponding value at runtime (see the sketch after this list).
- Bonus: password rotation becomes super easy, as you don't even have to update your lambda config anymore.
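For the lookup step, the equivalent plain CLI call looks like this; in the lambda itself you would do the same with the SDK for your runtime (the secret name my-app/secret is hypothetical):
# Fetch the decrypted value of a secret by name.
# --query SecretString extracts just the secret payload from the response.
aws secretsmanager get-secret-value \
  --secret-id my-app/secret \
  --query SecretString \
  --output text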

Gateway and aggregator - CorrelationStrategy error

I have to access an external service using Spring Integration. The process is: (1) pass an id to get basic information; (2) using the basic info from step 1, access more services and merge the information into a single object.
integration-graph:
input: Channel1, which outputs to: Channel1Out.
I have a recipient-list router that puts the message on the two channels Channel2 and Channel3.
Channel2's and Channel3's output channels use an XML xpath-transformer
and output to Channel4.
<int:aggregator id="aggregatorChannel"
        correlation-strategy-expression="headers['jms_messageId']"
        release-strategy-expression="size() == 2" method="mergeVO"
        input-channel="channel4" output-channel="dest-channel">
    <bean class="n.b.lbr.eai.vo.PojoAggregator"></bean>
</int:aggregator>
This is giving an error:
java.lang.IllegalStateException: Null correlation not allowed. Maybe the CorrelationStrategy is failing?
at org.springframework.util.Assert.state(Assert.java:70) ~[spring-core-4.3.8.RELEASE.jar:4.3.8.RELEASE]
at org.springframework.integration.aggregator.AbstractCorrelatingMessageHandler.handleMessageInternal(AbstractCorrelatingMessageHandler.java:385) ~[spring-integration-core-4.3.9.RELEASE.jar:4.3.9.RELEASE]
I did see some posts on this topic, but I do not understand how to solve the error below:
{
  "timestamp": 1533137160301,
  "status": 500,
  "error": "Internal Server Error",
  "exception": "java.lang.IllegalStateException",
  "message": "Null correlation not allowed. Maybe the CorrelationStrategy is failing?",
  "path": "/w/b/search/11223"
}
Please suggest whether this is a design issue, and how to solve this problem.
EDIT1:
Is the below a valid scatter-gather?
<bean id="messageStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<int:scatter-gather id="scatterGather2" input-channel="drBInputChannel" gather-channel="gatherChannel" gather-timeout="5000">
<int:scatterer id="myScatterer" apply-sequence="true">
<int:recipient channel="bserviceInputChannel"/>
<int:recipient channel="aserviceInputChannel"/>
</int:scatterer>
<int:gatherer id="myGatherer"
**??**
message-store="messageStore"
correlation-strategy=**??**
release-strategy-expression="size() == 2"
>
<bean class="nd.wbr.eai.vo.PojoAggregator"></bean>
</int:gatherer>
</int:scatter-gather>
I need help converting the following to XML for use in the above:
@Bean
public MessageHandler gatherer() {
    return new AggregatingMessageHandler(
            new ExpressionEvaluatingMessageGroupProcessor("^[payload gt 5] ?:-1D"),
            new SimpleMessageStore(),
            new HeaderAttributeCorrelationStrategy(IntegrationMessageHeaderAccessor.CORRELATION_ID),
            new ExpressionEvaluatingReleaseStrategy("size() == 2"));
}
The "java.lang.IllegalStateException: Null correlation not allowed. Maybe the CorrelationStrategy is failing?" exception means that your correlation-strategy-expression="headers['jms_messageId']" doesn't produce anything meaningful. To be precise there is just no jms_messageId header in the message.
Not sure why you did such a choice for the correlation key, but there is definitely not going to be such a headers when you perform HTTP request. You may emulate it though, but it might be better to choose some other correlation strategy.
On the other hand, looking to your original task description, I would say that you need to take a look into the Scatter-Gather pattern and stop to worry about correlation key altogether!
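To make that concrete, here is a rough, untested sketch of your EDIT1 config with the gaps filled in. With apply-sequence="true" the scatterer populates the standard correlation headers itself, so the gatherer can normally rely on the default correlation strategy and you don't have to supply one (channel and bean names are taken from your snippet):
<bean id="messageStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<int:scatter-gather id="scatterGather2" input-channel="drBInputChannel"
        gather-channel="gatherChannel" gather-timeout="5000">
    <int:scatterer id="myScatterer" apply-sequence="true">
        <int:recipient channel="bserviceInputChannel"/>
        <int:recipient channel="aserviceInputChannel"/>
    </int:scatterer>
    <!-- No correlation-strategy needed: apply-sequence="true" sets the
         correlation id, and the release expression fires once both replies arrive. -->
    <int:gatherer id="myGatherer" message-store="messageStore"
            release-strategy-expression="size() == 2"/>
</int:scatter-gather>
Your PojoAggregator merge logic can then be plugged into the gatherer the same way you had it on the aggregator.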

Cloudwatch to Elasticsearch parse/tokenize log event before push to ES

Appreciate your help in advance.
In my scenario, CloudWatch multiline logs need to be shipped to the Elasticsearch service.
ECS --awslogs--> CloudWatch --(lambda)--> ES domain
(Basic flow, though I am very open to changing how data is shipped from CW to ES.)
I was able to solve the multi-line issue using multi_line_start_pattern, BUT
the main issue I am experiencing now is that my logs are in ODL format:
[yyyy-mm-ddThh:mm:ss.SSS-Z][ProductName-Version][Log Level]
[Message ID][LoggerName][Key Value Pairs][[
Message]]
and I would like to parse and tokenize log events before storing them in ES (vs. the complete log line).
For example:
[2018-05-31T11:08:49.148-0400] [glassfish 4.1] [INFO] [] [] [tid: _ThreadID=43 _ThreadName=Thread-8] [timeMillis: 1527692929148] [levelValue: 800] [[
[] INFO : (DummyApplicationFunctionJPADAO) EntityManagerFactory located under resource lookup name [null], resource name=AuthorizationPU]]
This needs to be parsed and tokenized using the format:
timestamp            2018-05-31T11:08:49.148-0400
ProductName-Version  glassfish 4.1
LogLevel             INFO
MessageID
LoggerName
KeyValuePairs        tid: _ThreadID=43 _ThreadName=Thread-8
Message              [] INFO : (DummyApplicationFunctionJPADAO)
                     EntityManagerFactory located under resource lookup name
                     [null], resource name=AuthorizationPU
In the above, the key-value pairs repeat and are variable; for simplicity I can store them all as one long string.
As far as what I have gathered about CloudWatch, Subscription Filter Pattern regex support seems very limited, and I am really not sure how to fit the above pattern into it. For a lambda function that pushes the data to ES, I have not seen AWS docs or examples that use a lambda to parse and then push to ES.
I would appreciate it if someone could advise on the best place to parse CW logs before they get into ES: a Subscription Filter pattern, the lambda function, or any other way.
Thank you.
From what I can see, your best bet is what you're suggesting: a CloudWatch-log-triggered lambda that reformats the logged data into your preferred ES format and then posts it into ES.
You'll need to subscribe this lambda to your CloudWatch logs. You can do this on the lambda console or the CloudWatch console (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html).
The lambda's event payload will be: { "awslogs": { "data": "encoded-logs" } }, where encoded-logs is a Base64 encoding of gzipped JSON.
The sample event (https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html#eventsources-cloudwatch-logs) can be decoded in Node, for example, using:
const zlib = require('zlib');

// event.awslogs.data is base64-encoded, gzipped JSON
const data = event.awslogs.data;
const gzipped = Buffer.from(data, 'base64');
const json = zlib.gunzipSync(gzipped);
const logs = JSON.parse(json);
console.log(logs);
/*
{ messageType: 'DATA_MESSAGE',
owner: '123456789123',
logGroup: 'testLogGroup',
logStream: 'testLogStream',
subscriptionFilters: [ 'testFilter' ],
logEvents:
[ { id: 'eventId1',
timestamp: 1440442987000,
message: '[ERROR] First test message' },
{ id: 'eventId2',
timestamp: 1440442987001,
message: '[ERROR] Second test message' } ] }
*/
From what you've outlined, you'll want to extract the logEvents array and parse each event into its fields. I'm happy to give some help on this too if you need it (but I'll need to know what language you're writing your lambda in; there are libraries for tokenizing ODL, so hopefully it's not too hard).
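As a rough illustration in Node (not a full ODL parser; it assumes single-bracketed fields followed by a double-bracketed message, per your sample):
// Sketch: split one ODL log line into its bracketed fields.
// Field order assumed from the question:
// [ts][product][level][msgId][logger][kv...][[message]]
function tokenizeOdl(line) {
  const msgMatch = line.match(/\[\[([\s\S]*?)\]\]/);   // double-bracketed message
  const head = msgMatch ? line.slice(0, msgMatch.index) : line;
  const fields = [...head.matchAll(/\[([^\]]*)\]/g)].map(m => m[1].trim());
  return {
    timestamp: fields[0],
    product: fields[1],
    level: fields[2],
    messageId: fields[3],
    logger: fields[4],
    keyValuePairs: fields.slice(5),                    // variable-length tail
    message: msgMatch ? msgMatch[1].trim() : ''
  };
}
You would map this over logEvents and send the resulting objects to ES.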
At this point you can then POST these new records directly into your AWS ES domain. Somewhat cryptically, the S3-to-ES guide gives a good outline of how to do this in Python: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es
You can find a full example for a lambda that does all this (by someone else) here: https://github.com/blueimp/aws-lambda/tree/master/cloudwatch-logs-to-elastic-cloud

logstash-logger and filtering events

I'm using logstash-logger in a Ruby app. It's great. However, I seem to have a problem: my debug messages are being forwarded, and no setting seems to turn that off. It's increasing my logging volume 3 to 5 fold, both increasing costs and slowing down the app.
Is there a way to filter log messages to eliminate this noise?
I have set config.logger_level to :info, but debug messages are still being forwarded.
Any suggestions?
P.S. My config, below, still sends debug messages and info messages.
@logstash_logger = ::LogStashLogger.new(
  type: :multi_delegator,
  outputs: [
    { type: :stdout },
    { type: :udp, host: logging_host, port: logging_port }
  ])
@logstash_logger.level = ::Logger::WARN

How to get HTTP StatusCodes in ember-data

When I invoke
App.store.createRecord(App.User, { name: this.get("name") });
App.store.commit();
how do I know if it's successful, and how do I wait for the async message?
Very limited error handling was recently added to DS.RESTAdapter in ember-data master.
When creating or updating records (with bulk commit disabled) and a status code between 400 and 599 is returned, the following will happen:
A 422 Unprocessable Entity will transition the record to the "invalid" state and will add any errors returned from the server to the record's errors property.
The adapter assumes the server will respond with JSON in the following format:
{
  "errors": {
    "name": ["can't be blank"],
    "password": ["must be at least 8 characters", "must contain a number"]
  }
}
(The error messages themselves could be arrays of strings or just strings. ember-data doesn't currently care which.)
To detect this state:
record.get('isValid') === false
All other status codes will transition the record to the "error" state.
To detect this state, use:
record.get('isError') === true
More cases may eventually be handled by ember-data out of the box, but for the moment, if you need something specific, you'll have to extend DS.RESTAdapter, customizing its didError function to add it yourself.
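For instance, a minimal sketch of such an override might look like this (the hook's exact signature depends on the ember-data revision you're on, so treat it as illustrative):
App.Adapter = DS.RESTAdapter.extend({
  didError: function(store, type, record, xhr) {
    // Hypothetical example: handle conflicts specially before falling
    // back to the default invalid/error state transitions.
    if (xhr.status === 409) {
      // custom conflict handling goes here
    }
    this._super(store, type, record, xhr);
  }
});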
