Aggregating response from asynchronous publish-subscribe channel - Spring

I need to call 4 web services asynchronously and aggregate the results into a single message. If one of the services takes longer to respond than the specified timeout (3 sec), the responses that have already arrived should be aggregated and the late-arriving messages should be discarded. For this I used the snippet below in my Spring configuration file:
<int:aggregator input-channel="aggregatorInputChannel"
        output-channel="aggregatorOutputChannel"
        ref="responseAggregator"
        method="populateResponseHeader"
        group-timeout="3000"
        send-partial-result-on-expiry="true"
        expire-groups-upon-completion="true"/>
When one of the web service calls (let's say service4) takes longer than the timeout value, the thread for service4 keeps running in the background and the server sends a 202 response. Any suggestions on how I should modify my aggregator so that it ignores messages arriving after the timeout and still returns the (partial) response?

First of all, you should take a look at the Scatter-Gather pattern; it looks fully sufficient for your use case.
Second, you should use expire-groups-upon-timeout="false"; its schema documentation explains the behavior, and an adjusted sample follows below:
<xsd:attribute name="expire-groups-upon-timeout">
    <xsd:annotation>
        <xsd:documentation>
            Boolean flag specifying, if a group is completed due to timeout (reaper or
            'group-timeout(-expression)'), whether the group should be removed.
            When true, late arriving messages will form a new group. When false, they
            will be discarded. Default is 'true' for an aggregator and 'false' for a
            resequencer.
        </xsd:documentation>
    </xsd:annotation>
    <xsd:simpleType>
        <xsd:union memberTypes="xsd:boolean xsd:string"/>
    </xsd:simpleType>
</xsd:attribute>
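Applied to the configuration from the question, that becomes the sketch below. The discard-channel is an assumption added for visibility: with expire-groups-upon-timeout="false", replies arriving after the group has been released are sent there (or silently to nullChannel if the attribute is omitted).
<int:aggregator input-channel="aggregatorInputChannel"
        output-channel="aggregatorOutputChannel"
        ref="responseAggregator"
        method="populateResponseHeader"
        group-timeout="3000"
        send-partial-result-on-expiry="true"
        expire-groups-upon-completion="true"
        expire-groups-upon-timeout="false"
        discard-channel="lateRepliesChannel"/>

<!-- hypothetical channel for observing the discarded late replies -->
<int:channel id="lateRepliesChannel"/>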

Related

Spring Integration delayer - JDBC message store: message is not getting deleted

Below is the delayer code I am using in my application. The output channel checkMessageInProgress is a database stored-procedure call that checks whether the message needs to be processed or delayed.
If the message needs to be delayed again, the retry count is incremented. After 3 delays, a custom application exception is raised. I am using a JDBC message store for the delayer messages. In the scenario where the message has been delayed 3 times and the exception is raised, the messages are not deleted from the database tables, and the server picks them up again on restart. How do I make sure that the message is deleted from the table when the delay has happened 3 times?
<int:chain input-channel="delayerChannel" output-channel="checkMessageInProgress">
    <int:header-enricher>
        <!-- Exception/ERROR handling for flows originating from the delayer -->
        <int:header name="errorChannel" value="exceptionChannel" overwrite="true"/>
        <int:header name="retryCount" overwrite="true" type="int"
                    expression="headers['retryCount'] == null ? 0 : headers['retryCount'] + 1"/>
    </int:header-enricher>
    <!-- If retryCount is maxed out, discard the message and log it in the error table -->
    <int:filter expression="(headers['retryCount'] lt 3)"
                discard-channel="raiseExceptionChannel"/>
    <!-- Configurable delay - fetched from a property file -->
    <int:delayer id="Delayer" default-delay="${timeout}" message-store="mymessageStore">
        <!-- Transaction management for flows originating from the delayer -->
        <int:transactional transaction-manager="myAppTransactionManager"/>
    </int:delayer>
</int:chain>
That is no surprise. Since you use a transactional resource (the database), any exception downstream causes a transaction rollback, and therefore no deletion of the data.
Consider shifting the message to a separate thread before throwing the exception; that way the transaction will be committed (see the sketch below).
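With the configuration above, one way to do that is to make raiseExceptionChannel an executor channel, so the exception is thrown on a different thread after the delayer's transaction has committed. A minimal sketch, assuming the Spring task namespace is declared and sizing the executor at a single thread:
<!-- Discarded messages are handed off to another thread; the exception raised
     downstream no longer rolls back the delayer's transaction, so the message
     row is deleted from the JDBC message store. -->
<int:channel id="raiseExceptionChannel">
    <int:dispatcher task-executor="exceptionExecutor"/>
</int:channel>

<task:executor id="exceptionExecutor" pool-size="1"/>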

Using DataWeave in Mule cache

Two months ago, I configured an EE cache in Mule. All of a sudden, it stopped working. It turned out that DataWeave cannot be placed inside the cache scope. Once I move it out of the scope, it works perfectly again. I tested with this:
<set-payload value="#[message.inboundProperties.'http.request.uri']" doc:name="Set Payload"/>
<ee:cache cachingStrategy-ref="EBX_Response_Caching_Strategy" doc:name="Cache">
    <logger message="No entry with key: '#[payload]' was found in the cache. A request will be sent to the EBX service. Detailed response is returned: #[flowVars.detailedResponse]" level="INFO" doc:name="Logger"/>
    <scripting:transformer encoding="UTF-8" mimeType="application/json" doc:name="Set filter">
        <scripting:script engine="Groovy"><![CDATA[
            flowVars['filter'] = 'filtervalue' ]]></scripting:script>
    </scripting:transformer>
    <http:request config-ref="HTTP_Request_Configuration" path="/ebx-dataservices/rest/data/v1/" method="GET" doc:name="EBX HTTP call">
        <http:request-builder>
            <http:query-param paramName="login" value="${svc0031.login}"/>
            <http:query-param paramName="password" value="${svc0031.password}"/>
            <http:query-param paramName="pageSize" value="unbounded"/>
            <http:query-param paramName="filter" value="#[filter]"/>
            <http:header headerName="Host" value="myhost.com"/>
        </http:request-builder>
    </http:request>
</ee:cache>
<dw:transform-message metadata:id="91c16073-669d-4c27-a3ea-5cbae4e56ede" doc:name="Basic info response">
    <dw:input-payload doc:sample="sample_data\json.json"/>
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
    hotels: payload.rows map ((row, indexOfRow) -> {
        name: row.content.companyName.content,
        propertyCode: row.content.propertyCode.content
    })
}]]></dw:set-payload>
</dw:transform-message>
If I move the DataWeave transformation into the cache scope, the caching just stops working and a request is always sent to the backend system.
Why is that? Did MuleSoft change something? We are running on ESB 3.7.3.
You are using a consumable payload in the cache scope. When the payload is consumable, the cache is always a MISS, and the processors inside the cache scope run again even though a cache entry exists.
In your case, the HTTP requester gives you a consumable (streamed) response, and therefore the caching strategy is abandoned.
The solution is to use a 'Byte Array to Object' transformer to consume the stream and make the response non-consumable, so that the cache can keep it in memory; the next call is then a cache HIT served from the in-memory store (see the sketch below).
The other option is to use DataWeave inside the cache scope; that will also consume your incoming stream and make the cache a HIT.
For more info on cache HIT/MISS and consumable responses, see:
https://docs.mulesoft.com/mule-user-guide/v/3.7/cache-scope
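A minimal sketch of the first option, reusing the requester from the question (request-builder omitted for brevity): the transformer sits inside the cache scope, right after the HTTP call, so the cached entry is the materialized payload rather than a stream.
<ee:cache cachingStrategy-ref="EBX_Response_Caching_Strategy" doc:name="Cache">
    <http:request config-ref="HTTP_Request_Configuration"
                  path="/ebx-dataservices/rest/data/v1/" method="GET"
                  doc:name="EBX HTTP call"/>
    <!-- consume the response stream so the cache stores a repeatable,
         non-consumable payload -->
    <byte-array-to-object-transformer doc:name="Byte Array to Object"/>
</ee:cache>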

How to publish Threshold event of combined connections to Message Bus?

What's the topic of connection threshold events? How do I listen to connection-count threshold events over the message bus, and how do I find out the current connection count?
Connection threshold events can be published over the message bus to the following topics:
#LOG/WARNING/VPN/<router-name>/VPN_VPN_CONNECTIONS_HIGH/<vpn-name> when the connection count exceeds the high threshold.
#LOG/INFO/VPN/<router-name>/VPN_VPN_CONNECTIONS_HIGH_CLEAR/<vpn-name> when the connection count goes below the clear threshold.
If desired, you can apply wildcards to the topics. For example, #LOG/*/VPN/<router-name>/VPN_VPN_CONNECTIONS*/<vpn-name>.
Note that you will need to fill in <router-name> and <vpn-name> with appropriate values.
In order to have the connection count threshold events published over the message bus, you will need to do the following:
a. Configure the VPN to "Publish Message VPN Event Messages".
b. Your application needs to subscribe to the topic for connection threshold events.
In order to figure out the current connection count, you will need to send a SEMP over message bus query.
a. Enable SEMP over Message Bus Show Commands on the VPN.
b. Send a SEMP over Message Bus query. There's an SempGetOverMB sample in the API with detailed instructions to do this. You can also refer to the documentation for details.
<rpc semp-version="soltr/7_2">
    <show>
        <message-vpn>
            <vpn-name>default</vpn-name>
        </message-vpn>
    </show>
</rpc>
c. Parse the XML-based response.
<rpc-reply semp-version="soltr/7_2">
    <rpc>
        <show>
            <message-vpn>
                <vpn>
                    <name>default</name>
                    <connections-service-smf>3</connections-service-smf>
                    <connections-service-web>0</connections-service-web>
                    <connections-service-rest-incoming>0</connections-service-rest-incoming>
                    <connections-service-mqtt>0</connections-service-mqtt>
                    <connections-service-rest-outgoing>0</connections-service-rest-outgoing>
                    <max-connections>10</max-connections>
                    <max-connections-service-smf>9000</max-connections-service-smf>
                    <max-connections-service-web>9000</max-connections-service-web>
                    <max-connections-service-rest-incoming>9000</max-connections-service-rest-incoming>
                    <max-connections-service-mqtt>9000</max-connections-service-mqtt>
                    <max-connections-service-rest-outgoing>6000</max-connections-service-rest-outgoing>
                    <!-- ... non-relevant portions removed for clarity ... -->
                </vpn>
            </message-vpn>
        </show>
    </rpc>
    <execute-result code="ok"/>
</rpc-reply>
Note that there is a system limit of 10 SEMP poll requests per second, and some topics should not be polled. Refer to the documentation for details.

Mule - Caching Strategy - Session Clear

I am using Mule 3.5.0 and trying to implement a caching strategy. The cache is hit by APIs to grab a Sugar CRM OAuth token, and multiple endpoints hit this cache.
My requirement is to keep only one active element in the cache, which serves the active token to every API call for 5 minutes. When the TTL expires, the cache should grab another token and cache it for subsequent calls.
The problem arises when multiple inbound endpoints hit the cache: old values are also handed out by the cache. Is all I need to do to change maxEntries to 1, or is there a better way of achieving this?
<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
    <in-memory-store name="sugar-cache-in-memory" maxEntries="500" entryTTL="300000" expirationInterval="300000"/>
</ee:object-store-caching-strategy>

<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
    <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
        ..............................
        <logger message="------------------------ Direct Call for Token ----------------------" level="INFO" doc:name="Logger"/>
        <!-- DataMapper sets #[payload.access_token] -->
    </ee:cache>
    <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>
The problem was that the first element inside ee:cache was a Set Payload; I had to take it outside the cache scope. Sorry.
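A minimal sketch of the corrected flow under that description: the token-fetch processors stay inside the scope, and the Set Payload (shown here with a placeholder expression) moves after it.
<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache">
    <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
        <!-- token-fetch processors only; no Set Payload as the first element -->
    </ee:cache>
    <!-- Set Payload moved outside the cache scope; the expression is a placeholder -->
    <set-payload value="#[payload]" doc:name="Set Payload"/>
    <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>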

Reading of files with max-messages-per-poll=10 and prevent-duplicates=false

I'm trying to read files from a directory. If a file cannot be processed, it stays there to be retried later.
<file:inbound-channel-adapter id="fileInput"
        directory="file:${java.io.dir}/input-data"
        auto-create-directory="true"
        prevent-duplicates="false"
        filter="compositeFileFilterBean"/>

<integration:poller id="poller" max-messages-per-poll="10" default="true">
    <integration:interval-trigger interval="60" time-unit="SECONDS"/>
</integration:poller>
The problem is that if max-messages-per-poll is set to, say, 10, then each poll returns exactly 10 messages, even if there is only 1 file (i.e., all 10 messages will be the same file).
Yes, that is the expected behavior with those settings; I am not sure why you think it is wrong.
If there is a file in the directory that is not filtered out (for example by a filter that prevents duplicates), it will be found by the poller, either within the current poll (when max-messages-per-poll is > 1) or on the next poll.
To do what you want, you would need a custom filter that rejects a file it has already found within your 60-second polling interval.
You can:
Option 1: set prevent-duplicates="true" on the inbound-channel-adapter. This property defaults to true only when no other filter (and no filename-pattern/filename-regex) is in place; when you supply a custom filter, Spring assumes that your filter already includes an AcceptOnceFileListFilter, so it sets prevent-duplicates to false.
Option 2: add org.springframework.integration.file.filters.AcceptOnceFileListFilter to your "compositeFileFilterBean", as in the sketch below.
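For Option 2, a minimal sketch of the composite filter bean; the custom filter class name is a hypothetical stand-in for whatever compositeFileFilterBean currently contains:
<bean id="compositeFileFilterBean"
      class="org.springframework.integration.file.filters.CompositeFileListFilter">
    <constructor-arg>
        <list>
            <!-- lets each file pass only once for the life of the JVM -->
            <bean class="org.springframework.integration.file.filters.AcceptOnceFileListFilter"/>
            <!-- hypothetical: your existing custom filter(s) -->
            <bean class="com.example.MyExistingFileFilter"/>
        </list>
    </constructor-arg>
</bean>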
