Two months ago, I configured an EE cache in MuleSoft. All of a sudden it stopped working. It turned out that DataWeave cannot be placed inside the cache scope: once I moved it out of the scope, it worked perfectly again. I tested with this:
<set-payload value="#[message.inboundProperties.'http.request.uri']" doc:name="Set Payload"/>
<ee:cache cachingStrategy-ref="EBX_Response_Caching_Strategy" doc:name="Cache">
<logger message="No entry with key: '#[payload]' was found in the cache. A request will be sent to the EBX service. Detailed response is returned: #[flowVars.detailedResponse]" level="INFO" doc:name="Logger"/>
<scripting:transformer encoding="UTF-8" mimeType="application/json" doc:name="Set filter">
<scripting:script engine="Groovy"><![CDATA[
flowVars['filter'] = 'filtervalue' ]]></scripting:script>
</scripting:transformer>
<http:request config-ref="HTTP_Request_Configuration" path="/ebx-dataservices/rest/data/v1/" method="GET" doc:name="EBX HTTP call">
<http:request-builder>
<http:query-param paramName="login" value="${svc0031.login}"/>
<http:query-param paramName="password" value="${svc0031.password}"/>
<http:query-param paramName="pageSize" value="unbounded"/>
<http:query-param paramName="filter" value="#[filter]"/>
<http:header headerName="Host" value="myhost.com"/>
</http:request-builder>
</http:request>
</ee:cache>
<dw:transform-message metadata:id="91c16073-669d-4c27-a3ea-5cbae4e56ede" doc:name="Basic info response">
<dw:input-payload doc:sample="sample_data\json.json"/>
<dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
hotels: payload.rows map ((row , indexOfRow) -> {
name: row.content.companyName.content,
propertyCode: row.content.propertyCode.content,
})
}]]></dw:set-payload>
</dw:transform-message>
If I move the DataWeave transformation into the cache scope, caching simply stops working and a request is always sent to the backend system.
Why is that? Did MuleSoft change something? We are running on Mule ESB 3.7.3.
You are using a consumable payload in the Cache Scope. With a consumable payload the cache is always a MISS, so the processors inside the Cache scope run again even when a cached entry exists.
In your case, the HTTP requester returns a consumable (streaming) response, and therefore the caching strategy is effectively bypassed.
The solution is to use a 'Byte Array to Object' transformer to consume the stream and make the response non-consumable, so that the cache can store it in memory; the next call will then be a cache HIT served from the in-memory cache.
Another option is to keep DataWeave inside the Cache Scope: that will also consume the incoming stream and allow the cache to HIT.
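For illustration, a minimal sketch of the 'Byte Array to Object' option, reusing the strategy and processors from your own flow; the only element not already in your config is Mule's core byte-array-to-object-transformer, placed as the last processor inside the scope:
<ee:cache cachingStrategy-ref="EBX_Response_Caching_Strategy" doc:name="Cache">
    <!-- logger, Groovy filter and the EBX HTTP call, exactly as in your flow -->
    <!-- consume the streaming HTTP response so the cached payload is non-consumable -->
    <byte-array-to-object-transformer doc:name="Byte Array to Object"/>
</ee:cache>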
For more information on cache HITs vs. MISSes and consumable responses, see:
https://docs.mulesoft.com/mule-user-guide/v/3.7/cache-scope
I need to call 4 web services asynchronously and aggregate the results into a single message. If one of the services takes longer to respond than the specified timeout (3 seconds), the responses that have already arrived should be aggregated and the late-arriving messages should be discarded. For this I used the snippet below in the Spring configuration file:
<int:aggregator input-channel="aggregatorInputChannel" group-timeout="3000" send-partial-result-on-expiry="true" expire-groups-upon-completion="true" output-channel="aggregatorOutputChannel" ref="responseAggregator" method="populateResponseHeader" >
</int:aggregator>
When one of the web service calls (let's say service4) takes longer than the timeout value, the thread for service4 keeps running in the background and the server sends a 202 response. Any suggestions on how I should modify my aggregator so that it ignores messages arriving after the timeout and still returns the response?
First of all, you should take a look at the Scatter-Gather pattern; it looks fully sufficient for your use case.
You should use expire-groups-upon-timeout="false":
<xsd:attribute name="expire-groups-upon-timeout">
<xsd:annotation>
<xsd:documentation>
Boolean flag specifying, if a group is completed due to timeout (reaper or
'group-timeout(-expression)'), whether the group should be removed.
When true, late arriving messages will form a new group. When false, they
will be discarded. Default is 'true' for an aggregator and 'false' for a
resequencer.
</xsd:documentation>
</xsd:annotation>
<xsd:simpleType>
<xsd:union memberTypes="xsd:boolean xsd:string" />
</xsd:simpleType>
</xsd:attribute>
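Applied to your configuration, that would look roughly like this (the same aggregator as in your question, with only the new attribute added):
<int:aggregator input-channel="aggregatorInputChannel"
    output-channel="aggregatorOutputChannel"
    ref="responseAggregator" method="populateResponseHeader"
    group-timeout="3000"
    send-partial-result-on-expiry="true"
    expire-groups-upon-completion="true"
    expire-groups-upon-timeout="false"/>
<!-- with expire-groups-upon-timeout="false", replies arriving after the group has timed out are discarded instead of forming a new group -->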
I am using Mule 3.5.0 and trying to implement the Cache Strategy. The cache is supposed to be hit by APIs for grabbing a Sugar CRM OAuth Token. Multiple endpoints are hitting this cache.
My requirement is to keep only one active element in the store, serving this active token to every API call for 5 minutes. When the TTL expires, the cache should grab another token and cache it for subsequent calls.
The problem arises when multiple inbound endpoints hit the cache: old values are also being served from the cache. Do I just need to change maxEntries to 1, or is there a better way of achieving this?
<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
<in-memory-store name="sugar-cache-in-memory" maxEntries="500" entryTTL="300000" expirationInterval="300000"/>
</ee:object-store-caching-strategy>
<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
<ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
..............................
..............................
..............................
<logger message="------------------------ Direct Call for Token----------------------" level="INFO" doc:name="Logger"/>
<!-- DataMapper transformer that sets #[payload.access_token] -->
</ee:cache>
<set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>
The problem was that the first element inside the ee:cache scope was a Set Payload; I had to take it outside the Cache Scope.
Sorry.
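For what it's worth, a sketch of the corrected layout (assuming the Set Payload only prepares the request and can therefore run before the scope; its value expression here is just a placeholder):
<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
    <set-payload value="#[payload]" doc:name="Set Payload"/> <!-- placeholder expression; the point is that it sits outside the Cache scope -->
    <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
        <!-- token request and DataMapper stay inside the scope -->
    </ee:cache>
    <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>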
How do I set up an expiry time for cache entries in Mule? I am setting up a keyExpression-based cache on the incoming requests, like this:
<ee:object-store-caching-strategy name="UserAuth-CachingStrategy" keyGenerationExpression="#[message.inboundProperties.'authorization']" doc:name="Caching Strategy">
The cache is supposed to hit an external WS, and the results are supposed to be cached for 5 minutes. If I set an 'in-memory' store with a TTL of, say, 5 minutes, Mule isn't honoring it: irrespective of the TTL value, Mule still hits the actual external WS once every 3-4 requests. If I don't set any TTL at all, the cache never expires. How do I properly configure an 'in-memory' cache in Mule?
Thanks
If you are using an ObjectStore, you can easily set the expiry using Spring properties and refer your caching strategy to it, as described here:
http://ricston.com/blog/cache-scope-ehcache/
You can also use a managed-store, as follows:
<ee:object-store-caching-strategy name="UserAuth-CachingStrategy" keyGenerationExpression="#[message.inboundProperties.'authorization']" doc:name="Caching Strategy">
<managed-store storeName="myNonPersistentManagedObjectStore" maxEntries="-1" entryTTL="20000" expirationInterval="5000"/>
</ee:object-store-caching-strategy>
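If you prefer to stay with an in-memory store, the equivalent for a 5-minute TTL would look something like this (the store name and maxEntries are placeholders; entryTTL and expirationInterval are in milliseconds, and expired entries are only evicted when the expiration thread runs, so keep expirationInterval well below the TTL):
<ee:object-store-caching-strategy name="UserAuth-CachingStrategy" keyGenerationExpression="#[message.inboundProperties.'authorization']" doc:name="Caching Strategy">
    <in-memory-store name="userAuthStore" maxEntries="500" entryTTL="300000" expirationInterval="30000"/>
</ee:object-store-caching-strategy>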
I am using an EhCache-based CacheWriter for a write-behind cache implementation.
Here is the config:
<cache name="CACHE_JOURNALS"
maxElementsInMemory="1000"
eternal="false" timeToIdleSeconds="120" timeToLiveSeconds="120"
overflowToDisk="true" maxElementsOnDisk="10000000" diskPersistent="false"
diskExpiryThreadIntervalSeconds="120" memoryStoreEvictionPolicy="LRU">
<cacheWriter writeMode="write-behind"
maxWriteDelay="2"
rateLimitPerSecond="20"
writeCoalescing="true"
writeBatching="false"
writeBatchSize="1"
retryAttempts="2"
retryAttemptDelaySeconds="2">
<cacheWriterFactory
class="JournalCacheWriterFactory"
properties="just.some.property=test; another.property=test2"
propertySeparator=";" />
</cacheWriter>
</cache>
After I do a cache.putWithWriter:
cache.putWithWriter(new Element(key, newvalue));
another thread then reads from the cache with 'key'.
Observation:
if less than 2 s have passed, I get the old value;
if more than 2 s have passed, I get the updated value (newvalue).
It seems that the cache is only updated with 'key': newvalue after the write to the datastore.
Q1. Is this the expected behaviour for write-behind?
Q2. Is there any way to get it to update the cache with 'key': newvalue as soon as the putWithWriter call completes, and then have a deferred write-behind?
From the documentation, it seems that the latter is what is implied.
Q1: No. I don't even see how that would actually happen!
Q2: n/a, since what you observe isn't the expected behavior; the new value should be observable in the cache right away.
Could it be that you use this cache with some kind of read-through and are actually observing the cache entry being evicted/expired and repopulated with the old value from the datastore?
This was a naive error on my side: the code calling the @Cacheable method was in the same Spring service, and Spring does not intercept calls made from within the same service.
I refactored the cache-enabled code out and it now works as expected.
I am using Infinispan as an L2 cache and I have two application nodes. The L2 caches of the two apps are replicated, but the two apps are not identical:
one of them fills the database via web services, while the other runs a GUI for the database.
Both apps read from and write to the database extensively. After running the apps I saw the following error, and I do not know what causes it.
I wonder why:
- my cache instances do not properly replicate each change to the other
- the L2 cache got two responses
- the two L2 responses are not equal
ERROR org.infinispan.interceptors.InvocationContextInterceptor - ISPN000136: Execution error
2013-05-29 06:32:32 ERROR - Exception while processing event, reason: org.infinispan.loaders.CacheLoaderException: Responses contains more than 1 element and these elements are not equal, so can't decide which one to use:
[SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152081} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#57991642}} ,
SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152116} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#6cdaa731}} ]
My Infinispan configuration is:
<global>
<globalJmxStatistics enabled="true" jmxDomain="org.infinispan" allowDuplicateDomains="true"/>
<transport
transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"
clusterName="infinispan-hibernate-cluster"
distributedSyncTimeout="50000"
strictPeerToPeer="false">
<properties>
<property name="configurationFile" value="jgroups.xml"/>
</properties>
</transport>
</global>
<default>
</default>
<namedCache name="my-cache-entity">
<clustering mode="replication">
<stateRetrieval fetchInMemoryState="false" timeout="60000"/>
<sync replTimeout="20000"/>
</clustering>
<locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
lockAcquisitionTimeout="15000" useLockStriping="false"/>
<eviction maxEntries="10000" strategy="LRU"/>
<expiration maxIdle="100000" wakeUpInterval="5000"/>
<lazyDeserialization enabled="true"/>
<!--<transaction useSynchronization="true"
transactionMode="TRANSACTIONAL" autoCommit="false"
lockingMode="OPTIMISTIC"/>-->
<loaders passivation="false" shared="false" preload="false">
<loader class="org.infinispan.loaders.cluster.ClusterCacheLoader"
fetchPersistentState="false"
ignoreModifications="false" purgeOnStartup="false">
<properties>
<property name="remoteCallTimeout" value="20000"/>
</properties>
</loader>
</loaders>
</namedCache>
Replicated entity caches should be configured with state retrieval, as already indicated in the default Infinispan configuration file, and you've already done so. ClusterCacheLoader should only be used in special situations (for query caching). Why not just use the default Infinispan configuration provided? In fact, if you don't specify a config file, the default one is used.
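If you do keep a custom configuration file, a pared-down version of the entity cache without the ClusterCacheLoader would look roughly like this (everything here is copied from your own namedCache, minus the loaders block):
<namedCache name="my-cache-entity">
    <clustering mode="replication">
        <stateRetrieval fetchInMemoryState="false" timeout="60000"/>
        <sync replTimeout="20000"/>
    </clustering>
    <locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
        lockAcquisitionTimeout="15000" useLockStriping="false"/>
    <eviction maxEntries="10000" strategy="LRU"/>
    <expiration maxIdle="100000" wakeUpInterval="5000"/>
    <lazyDeserialization enabled="true"/>
    <!-- no <loaders> block: the ClusterCacheLoader is removed -->
</namedCache>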