Gateway and aggregator - CorrelationStrategy error - spring-boot

I have to access an external service using Spring Integration. The process is: (1) I pass an id to get basic information; (2) using the basic info from step 1, I need to call more services and merge the information into a single object.
integration-graph:
input: Channel1, which outputs to Channel1Out.
A recipient-list router puts the message onto two channels, Channel2 and Channel3.
Channel2's and Channel3's output channels each use an XML xpath-transformer and output to Channel4.
<int:aggregator id="aggregatorChannel"
    correlation-strategy-expression="headers['jms_messageId']"
    release-strategy-expression="size() == 2" method="mergeVO"
    input-channel="channel4" output-channel="dest-channel">
    <bean class="n.b.lbr.eai.vo.PojoAggregator"></bean>
</int:aggregator>
This is giving the following error:
java.lang.IllegalStateException: Null correlation not allowed. Maybe the CorrelationStrategy is failing?
at org.springframework.util.Assert.state(Assert.java:70) ~[spring-core-4.3.8.RELEASE.jar:4.3.8.RELEASE]
at org.springframework.integration.aggregator.AbstractCorrelatingMessageHandler.handleMessageInternal(AbstractCorrelatingMessageHandler.java:385) ~[spring-integration-core-4.3.9.RELEASE.jar:4.3.9.RELEASE]
I did see some posts on this topic, but I do not understand how to solve the error below:
{
    "timestamp": 1533137160301,
    "status": 500,
    "error": "Internal Server Error",
    "exception": "java.lang.IllegalStateException",
    "message": "Null correlation not allowed. Maybe the CorrelationStrategy is failing?",
    "path": "/w/b/search/11223"
}
Please suggest whether this is a design issue, and how to solve this problem.
EDIT1:
Is the below a valid scatter-gather?
<bean id="messageStore" class="org.springframework.integration.store.SimpleMessageStore"/>
<int:scatter-gather id="scatterGather2" input-channel="drBInputChannel" gather-channel="gatherChannel" gather-timeout="5000">
<int:scatterer id="myScatterer" apply-sequence="true">
<int:recipient channel="bserviceInputChannel"/>
<int:recipient channel="aserviceInputChannel"/>
</int:scatterer>
<int:gatherer id="myGatherer"
**??**
message-store="messageStore"
correlation-strategy=**??**
release-strategy-expression="size() == 2"
>
<bean class="nd.wbr.eai.vo.PojoAggregator"></bean>
</int:gatherer>
</int:scatter-gather>
I also need help converting the configuration below to XML, for use in the above:
@Bean
public MessageHandler gatherer() {
    return new AggregatingMessageHandler(
            new ExpressionEvaluatingMessageGroupProcessor("^[payload gt 5] ?:-1D"),
            new SimpleMessageStore(),
            new HeaderAttributeCorrelationStrategy(IntegrationMessageHeaderAccessor.CORRELATION_ID),
            new ExpressionEvaluatingReleaseStrategy("size() == 2"));
}

The "java.lang.IllegalStateException: Null correlation not allowed. Maybe the CorrelationStrategy is failing?" exception means that your correlation-strategy-expression="headers['jms_messageId']" doesn't produce anything meaningful. To be precise there is just no jms_messageId header in the message.
Not sure why you did such a choice for the correlation key, but there is definitely not going to be such a headers when you perform HTTP request. You may emulate it though, but it might be better to choose some other correlation strategy.
On the other hand, looking to your original task description, I would say that you need to take a look into the Scatter-Gather pattern and stop to worry about correlation key altogether!
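Regarding EDIT1: since the scatterer has apply-sequence="true", every outgoing copy of the message already carries the standard correlationId (and sequence) headers, so the gatherer needs no explicit correlation-strategy at all; the default header-based correlation applies. In other words, both ?? placeholders should be able to simply go away: drop the correlation-strategy attribute and keep the inner <bean> (plus a method attribute naming your merge method). A minimal Java sketch of such a gatherer, assuming your PojoAggregator.mergeVO does the merging (the MethodInvokingMessageGroupProcessor wiring is my assumption, not your existing code):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.integration.aggregator.AggregatingMessageHandler;
import org.springframework.integration.aggregator.ExpressionEvaluatingReleaseStrategy;
import org.springframework.integration.aggregator.HeaderAttributeCorrelationStrategy;
import org.springframework.integration.aggregator.MethodInvokingMessageGroupProcessor;
import org.springframework.integration.store.SimpleMessageStore;
import org.springframework.messaging.MessageHandler;

@Bean
public MessageHandler gatherer() {
    // apply-sequence="true" on the scatterer populates the standard
    // correlationId header, so the default header-based correlation works.
    return new AggregatingMessageHandler(
            new MethodInvokingMessageGroupProcessor(new PojoAggregator(), "mergeVO"), // assumption: your merge POJO
            new SimpleMessageStore(),
            new HeaderAttributeCorrelationStrategy(IntegrationMessageHeaderAccessor.CORRELATION_ID),
            new ExpressionEvaluatingReleaseStrategy("size() == 2"));
}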

Related

ActiveMQ jolokia gives different message response depending on environment

I have to get (not consume) part of a message that is in a queue. I reused a bash script that was suggested in an answer here, using /api/jolokia/: ActiveMQ Jolokia API How can I get the full Message Body.
The part of the response I am interested in is MsgId in value:text:
"request": {
"mbean": "org.apache.activemq:brokerName=MyBrokerName,destinationName=MyQueueName,destinationType=Queue,type=Broker",
"type": "exec",
"operation": "browseMessages()"
},
"value": [
{
"jMSCorrelationIDAsBytes": [],
***some other objects here ***
"text": "<?xml version=\"1.0\"?>\r\n<RepositoryOperationRq xmlns=\"http://www.ACORD.org/\">\r\n <MsgId>xxx28bab-e62c-4dbc-a2aa-xxx</MsgId>\r\n <CreationDtTime>2020-01-01T11:11:11-11:00</CreationDtTime>\r\n
There is no problem on the DEV ActiveMQ, but when I tried the same on the UAT ActiveMQ there is no value:text object in the response at all, and some other objects' values are different, like:
"connectionControl": false
and
"connectionControl": "false"
I thought it might be because of the maxDepth parameter, so I increased it. Unfortunately, when I set maxDepth=5 I got this error:
"error_type": "java.lang.IllegalStateException",
"error": "java.lang.IllegalStateException : Error while extracting next from org.apache.activemq.broker.region.cursors.FilePendingMessageCursor#3bb9ace4",
"status": 500
and the whole ActiveMQ broker stopped receiving any messages; I had to force-restart it. The ActiveMQ configs should be the same on both envs, and the version is 5.13.3. Do you know why that text object is missing?
I think the difference here is down to the content of the messages in each environment. The browseMessages operation simply returns the messages in the corresponding destination (e.g. MyQueueName).
If the message is not a javax.jms.TextMessage then it won't have the text field. If a property is false instead of "false", that just means the property value was a boolean instead of a String.
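A quick way to confirm what is actually in each queue is to browse it with a plain JMS QueueBrowser and check the concrete message type. A minimal sketch (broker URL and queue name are placeholders):

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BrowseQueue {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Browsing does not consume the messages.
            QueueBrowser browser = session.createBrowser(session.createQueue("MyQueueName"));
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message message = (Message) messages.nextElement();
                if (message instanceof TextMessage) {
                    // Only TextMessages have a text body, which is what Jolokia exposes as "text".
                    System.out.println("TextMessage: " + ((TextMessage) message).getText());
                } else {
                    System.out.println("Not a TextMessage: " + message.getClass().getName());
                }
            }
        } finally {
            connection.close();
        }
    }
}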

Queries that involve more than 250 virtual entity lookup field values fail with "An unexpected error occurred."

I'm receiving this mysterious error message when using a custom virtual entity data provider:
{
    "error": {
        "code": "0x80040216",
        "message": "An unexpected error occurred.",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiExceptionSourceKey": "Plugin/Microsoft.Crm.ObjectModel.InsertLookupLogicalNamePlugin",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiStepKey": "ccb4d064-785c-eb11-a812-002248163c60",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiDepthKey": "1",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiActivityIdKey": "aac514e1-53ec-4ed9-9e47-d2643f0e92b1",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiPluginSolutionNameKey": "System",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiStepSolutionNameKey": "System",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiExceptionCategory": "SystemFailure",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiExceptionMesageName": "UnExpected",
        "#Microsoft.PowerApps.CDS.ErrorDetails.ApiExceptionHttpStatusCode": "400",
        "#Microsoft.PowerApps.CDS.HelpLink": "http://go.microsoft.com/fwlink/?LinkID=398563&error=Microsoft.Crm.CrmException%3a80040216&client=platform",
        "#Microsoft.PowerApps.CDS.TraceText": "\r\n[Microsoft.Crm.ObjectModel: Microsoft.Crm.ObjectModel.InsertLookupLogicalNamePlugin]\r\n[ccb4d064-785c-eb11-a812-002248163c60: External plug-in implementation]\r\n\r\n",
        "#Microsoft.PowerApps.CDS.InnerError.Message": "An unexpected error occurred."
    }
}
It seems to occur more often with larger page sizes.
Plugin trace logs indicate the data provider ran successfully, with no exception.
After some spelunking, I found this error message on the on-prem server:
Query with entity reference to virtual entity can not exceed 250 limit. Please modify your query to reduce the number.
The limit appears to be on unique lookup field values across the whole query. So in this example, if the results were as below:

Record | Lookup Column to Virtual Entity | Lookup Column to Virtual Entity
-------|---------------------------------|--------------------------------
1      | Value A                         | Value B
2      | Value C                         |
3      | Value A                         | Value C

this would count as 3 (Value A, Value B, Value C) towards the limit for that query.

Generate Spring-integration daily statistics report

I have a Spring Integration application where files are routed from a folder to S3 buckets using an s3-outbound-channel-adapter. If a file is processed successfully, it is moved to the corresponding target bucket; if there is any error, it is moved to an error bucket via the error channel.
I have to generate a daily statistics report in a text file containing the details below:
Total no of files processed:
Total success:
Total Error:
I would like to know how to get the number of files processed successfully/with errors. Is there any way to achieve this requirement? Any suggestion or example would be helpful.
I have gone through DefaultMessageChannelMetrics and the Micrometer integration in the documentation, but am not sure they cover my requirement.
I have separate gateway and adapter flows to process success and error files.
Success :
<int-aws:s3-outbound-gateway id="s3FileMover"
        request-channel="filesOutS3GateWay"
        reply-channel="filesOutS3ChainChannel"
        transfer-manager="transferManager"
        bucket-expression="headers.TARGET_PATH"
        key-expression="headers.file_name"
        command="UPLOAD">
    <int-aws:request-handler-advice-chain>
        <ref bean="retryAdvice" />
    </int-aws:request-handler-advice-chain>
</int-aws:s3-outbound-gateway>
Error :
<int-aws:s3-outbound-channel-adapter id="filesErrorS3Mover"
        channel="filesErrorS3MoverChannel"
        transfer-manager="transferManager"
        bucket="${aws.s3.error.bucket}"
        key-expression="headers.TARGET + '/' + headers.file_name"
        upload-metadata-provider="fileMetaDataProvider"
        command="UPLOAD">
    <int-aws:request-handler-advice-chain>
        <bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
            <property name="onSuccessExpressionString" value="payload.delete()"/>
        </bean>
    </int-aws:request-handler-advice-chain>
</int-aws:s3-outbound-channel-adapter>
You can query and reset the MessageChannelMetrics directly on the message channels:
getSendCount();
reset();
All standard message channels implement that interface, so just inject the channel as that type...
@Autowired
private MessageChannelMetrics filesOutS3GateWay;

private int getCount() {
    return this.filesOutS3GateWay.getSendCount();
}
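Putting that together, here is a minimal sketch of a daily report job. Assumptions on my part: @EnableScheduling is configured, errors are counted as sends on your error channel, the report file path is arbitrary, and the counters are reset after each report so the numbers are per day.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.LocalDate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.support.management.MessageChannelMetrics;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DailyStatsReporter {

    @Autowired
    private MessageChannelMetrics filesOutS3GateWay;        // success path channel

    @Autowired
    private MessageChannelMetrics filesErrorS3MoverChannel; // error path channel

    @Scheduled(cron = "0 0 0 * * *") // every midnight
    public void report() throws IOException {
        int success = filesOutS3GateWay.getSendCount();
        int error = filesErrorS3MoverChannel.getSendCount();
        String stats = String.format(
                "Total no of files processed: %d%nTotal success : %d%nTotal Error: %d%n",
                success + error, success, error);
        Files.write(Paths.get("stats-" + LocalDate.now() + ".txt"),
                stats.getBytes(StandardCharsets.UTF_8));
        // Reset so tomorrow's report only covers tomorrow.
        filesOutS3GateWay.reset();
        filesErrorS3MoverChannel.reset();
    }
}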

Cloudwatch to Elasticsearch parse/tokenize log event before push to ES

Appreciate your help in advance.
In my scenario, CloudWatch multiline logs need to be shipped to the Elasticsearch service:
ECS --awslogs--> CloudWatch --using Lambda--> ES domain
(Basic flow, though I'm very open to changing how data is shipped from CW to ES.)
I was able to solve the multi-line issue using multi_line_start_pattern, BUT the main issue I am experiencing now is that my logs are in ODL format (the following format):
[yyyy-mm-ddThh:mm:ss.SSS-Z][ProductName-Version][Log Level]
[Message ID][LoggerName][Key Value Pairs][[
Message]]
AND I would like to parse and tokenize log events before storing them in ES (vs. the complete log line).
For example:
[2018-05-31T11:08:49.148-0400] [glassfish 4.1] [INFO] [] [] [tid: _ThreadID=43 _ThreadName=Thread-8] [timeMillis: 1527692929148] [levelValue: 800] [[
[] INFO : (DummyApplicationFunctionJPADAO) EntityManagerFactory located under resource lookup name [null], resource name=AuthorizationPU]]
This needs to be parsed and tokenized using the format:
timestamp            2018-05-31T11:08:49.148-0400
ProductName-Version  glassfish 4.1
LogLevel             INFO
MessageID
LoggerName
KeyValuePairs        tid: _ThreadID=43 _ThreadName=Thread-8
Message              [] INFO : (DummyApplicationFunctionJPADAO) EntityManagerFactory located under resource lookup name [null], resource name=AuthorizationPU
In the above, the key-value pairs repeat and are variable; for simplicity I can store them all as one long string.
As far as I can tell about CloudWatch, the Subscription Filter pattern's regex support is very limited, and I'm really not sure how to fit the above pattern. As for a Lambda function that pushes the data to ES, I have not seen AWS docs or examples that use Lambda to parse log events before pushing to ES.
I would appreciate it if someone could advise what/where the best option is to parse CW logs before they get into ES: the Subscription Filter pattern, the Lambda function, or any other way.
Thank you.
From what I can see, your best bet is what you're suggesting: a CloudWatch-log-triggered Lambda that reformats the logged data into your preferred ES format and then posts it into ES.
You'll need to subscribe this lambda to your CloudWatch logs. You can do this on the lambda console, or the cloudwatch console (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html).
The lambda's event payload will be: { "awslogs": { "data": "encoded-logs" } }. Where encoded-logs is a Base64 encoding of a gzipped JSON.
For example, the sample event (https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html#eventsources-cloudwatch-logs) can be decoded in node, for example, using:
const zlib = require('zlib');
const data = event.awslogs.data;
const gzipped = Buffer.from(data, 'base64');
const json = zlib.gunzipSync(gzipped);
const logs = JSON.parse(json);
console.log(logs);
/*
{ messageType: 'DATA_MESSAGE',
owner: '123456789123',
logGroup: 'testLogGroup',
logStream: 'testLogStream',
subscriptionFilters: [ 'testFilter' ],
logEvents:
[ { id: 'eventId1',
timestamp: 1440442987000,
message: '[ERROR] First test message' },
{ id: 'eventId2',
timestamp: 1440442987001,
message: '[ERROR] Second test message' } ] }
*/
From what you've outlined, you'll want to extract the logEvents array and parse it into an array of strings. I'm happy to give some help on this too if you need it (but I'll need to know what language you're writing your lambda in; there are libraries for tokenizing ODL, so hopefully it's not too hard).
At this point you can then POST these new records directly into your AWS ES domain. Somewhat cryptically, the S3-to-ES guide gives a good outline of how to do this in Python: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es
You can find a full example for a lambda that does all this (by someone else) here: https://github.com/blueimp/aws-lambda/tree/master/cloudwatch-logs-to-elastic-cloud

Client-side validation of Elasticsearch query string

I have an application that uses NEST (Elasticsearch .NET client) to communicate with an Elasticsearch cluster. The integration allows the user to specify input for the "query_string" portion of a query.
The user may input an invalid query, say "AND", which is invalid because the predicate is incomplete. But the error message that comes back from Elasticsearch is exceedingly verbose and contains terminology that isn't very user-friendly, like "all shards failed".
Is there a way I can offer the user a more meaningful error message (say, "bad predicate")? Ideally, the user's search string would be validated without an Elasticsearch round-trip, but I'll settle for a simpler error message however I can get it.
The error message returned by Elasticsearch is verbose, but for parsing errors like these, Elasticsearch throws a QueryParsingException. If you examine the error message closely, you'll find the string QueryParsingException towards the end of the entire message; this is the exception (and its message) you are interested in. For example, when I misspelt must as mus2t in a search request, I got a huge error message from Elasticsearch, and below is its last part:
QueryParsingException[[<index name>] bool query does not support [mus2t]]; }]
You can parse out and extract this error message.
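A rough sketch of that extraction, assuming the raw error string from the response is available (shown in Java for illustration; the friendlyMessage helper is hypothetical, and the same lastIndexOf/substring approach carries over directly to C#):

// Hypothetical helper: surface only the trailing QueryParsingException
// fragment of the verbose server error, or a generic fallback.
static String friendlyMessage(String rawError) {
    int idx = rawError.lastIndexOf("QueryParsingException");
    return idx >= 0 ? rawError.substring(idx) : "Invalid query";
}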
You can also use the validate API. For the following query:
var validateResponse = client.Validate<Document>(descriptor => descriptor
    .Explain()
    .Query(query => query
        .QueryString(qs => qs
            .OnFields(f => f.Name)
            .Query("AND"))));
you will get
org.elasticsearch.index.query.QueryParsingException: [indexname]
Failed to parse query [AND];
org.apache.lucene.queryparser.classic.ParseException: Cannot parse
'AND': Encountered " <AND> "AND "" at line 1, column 0. Was expecting
one of:
<NOT> ...
"+" ...
"-" ...
<BAREOPER> ...
"(" ...
"*" ...
<QUOTED> ...
<TERM> ...
<PREFIXTERM> ...
<WILDTERM> ...
<REGEXPTERM> ...
"[" ...
"{" ...
<NUMBER> ...
<TERM> ...
"*" ...
It's still not perfect for the end user, and it requires a round-trip to ES, but maybe it will be helpful.
