Hi, I want to filter my logs in a special way:
I have a highly frequented system. A lot of devices are connected and send messages all the time, so logging everything is impossible.
Now I'm searching for a way to log everything that does not depend on device messages, and to log device messages only for one specific device.
I found that I could mark these logs, but I have no idea how to combine both log types:
public void someMethod(String serial) {
    // One marker per device serial, so logs can be filtered per device
    org.slf4j.Marker sm = org.slf4j.MarkerFactory.getMarker(serial);
    log.info("A log message I want to find each time");
    log.debug(sm, "This log I want to filter out only for serial: " + serial);
...
I'm working with Spring Boot and Log4j.
I tried various filters, but without success.
Anyone any idea?

I found it myself. Here is the start of my log4j2.xml:
<Configuration monitorInterval="60">
    <filters>
        <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="NEUTRAL"/>
        <MarkerFilter marker="serial" onMatch="ACCEPT" onMismatch="DENY"/>
    </filters>
    ...
    <Appenders> ...
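One detail worth noting: Log4j2's MarkerFilter matches the event's marker or one of its parents, so for the marker="serial" filter above to match per-device markers, the device marker presumably needs a parent marker named "serial". A minimal sketch of that assumption on the Java side (the parent-marker setup is not shown in the original code):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class DeviceLogging {

    private static final Logger log = LoggerFactory.getLogger(DeviceLogging.class);

    // Parent marker matching the MarkerFilter's marker="serial"
    private static final Marker SERIAL = MarkerFactory.getMarker("serial");

    public void someMethod(String serial) {
        // Per-device child marker; adding SERIAL as a parent lets the filter match it
        Marker sm = MarkerFactory.getMarker(serial);
        sm.add(SERIAL);
        log.info("A log message I want to find each time");
        log.debug(sm, "This log I want to filter out only for serial: " + serial);
    }
}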
I am having a hard time finding a solution to the following problem:
Let's say I have 4 microservices (A, B, C, D) that interact with each other via REST APIs.
A calls B and B calls D, so the path for a single request is A/B/D.
Below is the logging pattern.
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } [%15.15t] %logger{10}:%L | %m%n
I want to add the path to it.
So let's say a request is initiated from A to B: in the logs of B I want it to display Path: A/.
Now B calls D, so the path in the logs of D should be: A/B.
Please suggest how I can manage this.
I am sorry for the naïve question; I am new to SLF4J.
As I understand it, you want to know the path your request took. For that you can use Spring Cloud Sleuth, which provides Spring Boot auto-configuration for distributed tracing:
https://developer.okta.com/blog/2021/07/26/spring-cloud-sleuth
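A minimal sketch of the Maven side (assuming the Spring Cloud BOM manages the version):

<!-- pom.xml excerpt (sketch); the version is assumed to come from the Spring Cloud BOM -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

With the starter on the classpath, Sleuth overrides the LOG_LEVEL_PATTERN property, which your pattern already references, typically adding [appname,traceId,spanId], so the logs of B and D for the same request share one trace id. Note this gives you correlation across services rather than a literal A/B path string.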
I need to call 4 web services asynchronously and aggregate the results into a single message. If one of the services takes longer to respond than the specified timeout (3 sec), the responses that have already arrived should be aggregated and the late-coming messages should be discarded. For this I used the snippet below in my Spring configuration file:
<int:aggregator input-channel="aggregatorInputChannel" group-timeout="3000"
                send-partial-result-on-expiry="true" expire-groups-upon-completion="true"
                output-channel="aggregatorOutputChannel" ref="responseAggregator"
                method="populateResponseHeader"/>
When one of the web service calls (let's say service4) takes longer than the timeout value, the thread for service4 keeps running in the background and the server sends a 202 response. Any suggestions on how I should modify my aggregator so that it ignores messages arriving after the timeout and still returns the response?
First of all, you should take a look at the Scatter-Gather pattern. It looks fully sufficient for your use case.
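A rough XML sketch, with the four service calls subscribed to a publish-subscribe scatter channel (the channel names are made up, and the nested gatherer is assumed to accept the usual aggregator attributes such as group-timeout and send-partial-result-on-expiry):

<!-- each of the 4 service gateways subscribes to this channel -->
<int:publish-subscribe-channel id="quotesChannel" apply-sequence="true"/>

<int:scatter-gather input-channel="requestChannel"
                    output-channel="aggregatorOutputChannel"
                    scatter-channel="quotesChannel"
                    gather-timeout="3000">
    <int:gatherer group-timeout="3000" send-partial-result-on-expiry="true"/>
</int:scatter-gather>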
Alternatively, on your existing aggregator you should use expire-groups-upon-timeout="false":
<xsd:attribute name="expire-groups-upon-timeout">
    <xsd:annotation>
        <xsd:documentation>
            Boolean flag specifying, if a group is completed due to timeout (reaper or
            'group-timeout(-expression)'), whether the group should be removed.
            When true, late arriving messages will form a new group. When false, they
            will be discarded. Default is 'true' for an aggregator and 'false' for a
            resequencer.
        </xsd:documentation>
    </xsd:annotation>
    <xsd:simpleType>
        <xsd:union memberTypes="xsd:boolean xsd:string" />
    </xsd:simpleType>
</xsd:attribute>
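Applied to the aggregator from the question, that would look roughly like this (a sketch; expire-groups-upon-completion from the question is dropped on the assumption that removing completed groups would similarly let late replies form a new group instead of being discarded):

<int:aggregator input-channel="aggregatorInputChannel"
                output-channel="aggregatorOutputChannel"
                ref="responseAggregator"
                method="populateResponseHeader"
                group-timeout="3000"
                send-partial-result-on-expiry="true"
                expire-groups-upon-timeout="false"/>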
Two months ago, I configured an EE cache in MuleSoft. All of a sudden, it stopped working. It turned out that DataWeave cannot be placed inside the cache scope; once I move it out of the scope, it works perfectly again. I tested with this:
<set-payload value="#[message.inboundProperties.'http.request.uri']" doc:name="Set Payload"/>
<ee:cache cachingStrategy-ref="EBX_Response_Caching_Strategy" doc:name="Cache">
    <logger message="No entry with key: '#[payload]' was found in the cache. A request will be sent to the EBX service. Detailed response is returned: #[flowVars.detailedResponse]" level="INFO" doc:name="Logger"/>
    <scripting:transformer encoding="UTF-8" mimeType="application/json" doc:name="Set filter">
        <scripting:script engine="Groovy"><![CDATA[
            flowVars['filter'] = 'filtervalue'
        ]]></scripting:script>
    </scripting:transformer>
    <http:request config-ref="HTTP_Request_Configuration" path="/ebx-dataservices/rest/data/v1/" method="GET" doc:name="EBX HTTP call">
        <http:request-builder>
            <http:query-param paramName="login" value="${svc0031.login}"/>
            <http:query-param paramName="password" value="${svc0031.password}"/>
            <http:query-param paramName="pageSize" value="unbounded"/>
            <http:query-param paramName="filter" value="#[filter]"/>
            <http:header headerName="Host" value="myhost.com"/>
        </http:request-builder>
    </http:request>
</ee:cache>
<dw:transform-message metadata:id="91c16073-669d-4c27-a3ea-5cbae4e56ede" doc:name="Basic info response">
    <dw:input-payload doc:sample="sample_data\json.json"/>
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
    hotels: payload.rows map ((row, indexOfRow) -> {
        name: row.content.companyName.content,
        propertyCode: row.content.propertyCode.content
    })
}]]></dw:set-payload>
</dw:transform-message>
If I move the DataWeave transformation into the cache scope, the caching just stops working and a request is always sent to the backend system.
Why is that? Did MuleSoft change something? We are running on ESB 3.7.3.
You are using a consumable payload in the Cache Scope, and with a consumable payload the cache is always a MISS, so the processors inside the Cache Scope run again even though the cache is in place.
In your case, the HTTP requester gives you a consumable (streamed) response, and that is why the caching strategy is being abandoned.
The solution is to use a 'Byte Array to Object' transformer to consume the stream and make the response non-consumable, so the cache can store it in memory; the next call will then be a cache HIT served from the in-memory cache.
The other option is to use DataWeave inside the Cache Scope; that will also consume your incoming stream and make the cache a HIT.
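A sketch of the first option, with the transformer placed right after the HTTP call inside the cache scope (everything else elided):

<ee:cache cachingStrategy-ref="EBX_Response_Caching_Strategy" doc:name="Cache">
    ...
    <http:request config-ref="HTTP_Request_Configuration" path="/ebx-dataservices/rest/data/v1/" method="GET" doc:name="EBX HTTP call">
        ...
    </http:request>
    <!-- Consume the response stream so the cached payload is repeatable -->
    <byte-array-to-object-transformer doc:name="Byte Array to Object"/>
</ee:cache>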
For more info on cache HIT and MISS and consumable responses, go here:
https://docs.mulesoft.com/mule-user-guide/v/3.7/cache-scope
I am using Mule 3.5.0 and trying to implement the Cache Strategy. The cache is supposed to be hit by APIs grabbing a Sugar CRM OAuth token, and multiple endpoints are hitting this cache.
My requirement is to keep only one active element in the cache, which serves the active token to every API call for 5 minutes. When the TTL expires, the cache should grab another token and cache it for subsequent calls.
The problem arises when multiple inbound endpoints hit the cache: old values are also being handed out by the cache. Is all I need to do to change maxEntries to 1, or is there a better way of achieving this?
<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
    <in-memory-store name="sugar-cache-in-memory" maxEntries="500" entryTTL="300000" expirationInterval="300000"/>
</ee:object-store-caching-strategy>

<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
    <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
        ..............................
        <logger message="------------------------ Direct Call for Token----------------------" level="INFO" doc:name="Logger"/>
        <DATAMAPPER to set #payload.access_token />
    </ee:cache>
    <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>
The problem was that the first element inside ee:cache was a Set Payload; I had to take it outside the Cache Scope. Sorry.
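In flow form, the fix would look roughly like this (a sketch; the elided processors and the actual Set Payload expression are not shown in the question):

<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
    <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
        <!-- token retrieval and the DataMapper that sets #[payload.access_token] -->
        ..............................
    </ee:cache>
    <!-- Set Payload moved outside the Cache Scope, per the fix described above -->
    <set-payload value="#[...]" doc:name="Set Payload"/>
    <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>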
I am using Infinispan as an L2 cache and I have two application nodes. The L2 caches of the two apps are replicated. The two apps are not identical:
one app fills the database using web services, while the other app runs a GUI for the database.
Both apps read from and write to the database extensively. After running them I have seen the following error, and I do not know what causes it.
I wonder why:
- my cache instances are not properly replicating each change to the other,
- the L2 cache got two responses,
- the L2 responses are not equal.
ERROR org.infinispan.interceptors.InvocationContextInterceptor - ISPN000136: Execution error
2013-05-29 06:32:32 ERROR - Exception while processing event, reason: org.infinispan.loaders.CacheLoaderException: Responses contains more than 1 element and these elements are not equal, so can't decide which one to use:
[SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152081} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#57991642}} ,
SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152116} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#6cdaa731}} ]
My Infinispan configuration is:
<global>
    <globalJmxStatistics enabled="true" jmxDomain="org.infinispan" allowDuplicateDomains="true"/>
    <transport
        transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"
        clusterName="infinispan-hibernate-cluster"
        distributedSyncTimeout="50000"
        strictPeerToPeer="false">
        <properties>
            <property name="configurationFile" value="jgroups.xml"/>
        </properties>
    </transport>
</global>

<default>
</default>

<namedCache name="my-cache-entity">
    <clustering mode="replication">
        <stateRetrieval fetchInMemoryState="false" timeout="60000"/>
        <sync replTimeout="20000"/>
    </clustering>
    <locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
             lockAcquisitionTimeout="15000" useLockStriping="false"/>
    <eviction maxEntries="10000" strategy="LRU"/>
    <expiration maxIdle="100000" wakeUpInterval="5000"/>
    <lazyDeserialization enabled="true"/>
    <!--<transaction useSynchronization="true"
                 transactionMode="TRANSACTIONAL" autoCommit="false"
                 lockingMode="OPTIMISTIC"/>-->
    <loaders passivation="false" shared="false" preload="false">
        <loader class="org.infinispan.loaders.cluster.ClusterCacheLoader"
                fetchPersistentState="false"
                ignoreModifications="false" purgeOnStartup="false">
            <properties>
                <property name="remoteCallTimeout" value="20000"/>
            </properties>
        </loader>
    </loaders>
</namedCache>
Replicated entity caches should be configured with state retrieval, as indicated in the default Infinispan configuration file, and you've already done so. ClusterCacheLoader should only be used in special situations (for query caching). Why not just use the default Infinispan configuration provided? In fact, if you don't configure a config file, the default one is used.
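For example, in a Hibernate setup the relevant properties would look roughly like this (a sketch; with hibernate.cache.infinispan.cfg left unset, the hibernate-infinispan module falls back to the default configuration it ships with):

<!-- persistence.xml excerpt (sketch) -->
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.region.factory_class"
          value="org.hibernate.cache.infinispan.InfinispanRegionFactory"/>
<!-- hibernate.cache.infinispan.cfg deliberately not set: the module's default configuration is used -->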