How can I disable ehcache or all types of caching? - ehcache

Hybris: 6.3.0.0-SNAPSHOT
I'm doing performance testing, and I need to disable caching. I've already disabled the database (MySQL) caching and would like to disable all forms of application caching. Is it possible?
I've already seen other questions and the suggestion to use setDisableCaching for FlexibleSearch. Unfortunately, some FlexibleSearch queries are under the control of Hybris, and I can't change those method calls directly. I'm looking at overriding them next, but I want to know if there's an easier way.
I've also tried adding "-Dnet.sf.ehcache.disabled=true" to tomcat.generaloptions in local.properties, but the application just seems to hang during startup, and the server never starts.
Additional context: We have a web service that is returning 3,000 PointOfService records. The first call is so slow that the client thinks the application is not working (it may have timed out). Subsequent calls are faster because the data has already been cached. I need to find out how to improve the performance of the first call.

The new cache is the region cache.
If you want to disable the cache, you have to set the size of every cache region to 0. It won't really be disabled, but nothing will be cached.
You can also disable it from code, as mentioned in another response: Registry.getCurrentTenant().getCache().setEnabled(false);
You can use the old cache by setting cache.legacymode=true in your local.properties.
This won't disable all caching, however.
Now, if your problem is slow response times when querying a lot of objects, maybe you need to define your own cache region and set the proper values in your properties:
<alias name="defaultMyObjectCacheRegion" alias="myObjectCacheRegion"/>
<bean name="defaultMyObjectCacheRegion" class="de.hybris.platform.regioncache.region.impl.EHCacheRegion">
<constructor-arg name="name" value="MyObjectCacheRegion" />
<constructor-arg name="maxEntries" value="${regioncache.myObjectcacheregion.maxentries}" />
<constructor-arg name="evictionPolicy" value="${regioncache.myObjectcacheregion.evictionpolicy}" />
<constructor-arg name="statsEnabled" value="${regioncache.stats.enabled}" />
<constructor-arg name="exclusiveComputation" value="${regioncache.exclusivecomputation}" />
<property name="handledTypes">
<array>
<value>[MyObject typecode]</value>
</array>
</property>
</bean>
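For reference, a minimal sketch of the matching local.properties entries; the keys mirror the placeholders used above, and the values are only illustrative assumptions:
# hypothetical values for the custom region defined above
regioncache.myObjectcacheregion.maxentries=20000
regioncache.myObjectcacheregion.evictionpolicy=LRU
regioncache.stats.enabled=true
regioncache.exclusivecomputation=false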
To conclude, you should not try to disable the Hybris cache; it's almost impossible. But you can easily clear it for testing purposes.
If you have performance issues, I suggest you also take a look at DB transactions. This is often a bottleneck. See: https://help.hybris.com/1808/hcd/8c7387f186691014922080f2e053216a.html

You can manually clear the Hybris cache from:
https://localhost:9002/hac/monitoring/cache

Run the script below as a Groovy script in commit mode from the HAC:
tenant.getCurrentTenant().getCache().setEnabled(false);
To re-enable it, change false to true.

Did you consider adding pagination to the PointOfService call? Let the client request only 10 or 100 elements at a time; it can then subsequently request the first 10, the second 10, and so on. That way each call will be a lot faster. It also won't cram your cache or stress the server and database as much, and the client will be on much safer ground when processing the data. A rough sketch of such a paginated endpoint follows below.
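As an illustration only, here is a minimal sketch of a paginated endpoint using plain Spring MVC; the controller, DTO, and lookup interface names are made up, not an existing Hybris API:
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// All names below (PointOfServiceDto, PointOfServiceLookup) are hypothetical.
@RestController
@RequestMapping("/pointsofservice")
public class PointOfServiceController {

    public static class PointOfServiceDto {
        public String name;
        public String address;
    }

    public interface PointOfServiceLookup {
        // Returns one page of results; the implementation should page at the query level.
        List<PointOfServiceDto> findPage(int page, int pageSize);
    }

    private final PointOfServiceLookup lookup;

    public PointOfServiceController(PointOfServiceLookup lookup) {
        this.lookup = lookup;
    }

    // Clients call e.g. /pointsofservice?page=0&pageSize=100 instead of fetching all 3,000 records at once.
    @GetMapping
    public List<PointOfServiceDto> getPointsOfService(
            @RequestParam(defaultValue = "0") int page,
            @RequestParam(defaultValue = "100") int pageSize) {
        return lookup.findPage(page, pageSize);
    }
}
The important part is that the underlying FlexibleSearch query should also be limited to the requested page, so the first call no longer has to materialise all 3,000 records.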

Related

Concurrency Issue on Hashmap Stored in Apache Ignite

I am developing a clustered web application with different WARs deployed, so I need session sharing (and not only that). I've started using Ignite as a platform for a clustered (replicated) cache server.
The issue I reached is this:
My cache key is a String and the value is a HashMap:
CacheConfiguration<Integer, Map<String,String>> cfg = new CacheConfiguration<>("my_cache");
I am using this cache as a WEBSESSION. The issue arises when one servlet gets the map, puts some session-specific values into it, and puts it back into Ignite. After the first servlet reads from the cache, a second one does the same, and because it finishes after the first one, the second put overwrites the first one's changes.
So my exact question is: what's the pattern to solve this concurrent map access issue in a highly efficient way (without locking the whole object)?
Regards
It sounds a bit weird to me, because this scenario should only be possible when there are two concurrent requests working with the same session. How is this possible?
But in any case, you can use a TRANSACTIONAL cache for the web session data. This guarantees that the two requests will be processed within a lock and the data will be updated atomically.
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="web-sessions-cache"/>
<property name="atomicityMode" value="TRANSACTIONAL"/>
</bean>
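As an illustration of how two requests can update the same session entry atomically, here is a minimal sketch using Ignite's transaction API; it assumes the web-sessions-cache configured above (TRANSACTIONAL atomicity), and the session key and attribute names are made up:
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class SessionUpdateSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, Map<String, String>> sessions = ignite.getOrCreateCache("web-sessions-cache");

        // PESSIMISTIC + REPEATABLE_READ locks the key on the first read, so a concurrent
        // request updating the same session waits instead of silently overwriting the map.
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            Map<String, String> session = sessions.get("session-42");
            if (session == null) {
                session = new HashMap<>();
            }
            session.put("lastServlet", "servlet-one"); // made-up attribute
            sessions.put("session-42", session);
            tx.commit();
        }
    }
}
An EntryProcessor (cache.invoke) is another way to make the read-modify-write of a single key atomic without an explicit transaction.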

Method caching with Spring Boot and Hazelcast. How and where do I specify my refresh/reload intervals?

I realise the @Cacheable annotation helps me with caching the result of a particular method call, and subsequent calls are returned from the cache if there are no changes to the arguments etc.
I have a requirement where I'm trying to minimise the number of calls to a DB, and hence I'm loading the entire table. However, I would like to reload this data, say, every day, just to ensure that my cache is not out of sync with the underlying data in the database.
How can I specify such reload/refresh intervals?
I'm trying to use Spring Boot and Hazelcast. All the examples I have seen talk about specifying LRU, LFU, etc. policies in the config file for maps, but nothing at a method level.
I can't go with the LRU/LFU eviction policies, as I intend to reload the entire table data every x hours or x days.
Kindly help or point me to any such implementation or docs etc.
Spring's @Cacheable doesn't support this kind of policy at method level. See, for example, the code for CacheableOperation.
If you are using Hazelcast as your cache provider for Spring, you can explicitly evict elements or load data by using the corresponding IMap from your HazelcastInstance.
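A minimal sketch of that idea, assuming Spring's scheduling is enabled (@EnableScheduling) and that the cache/map is named tableCache (both the bean and cache names are assumptions):
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import com.hazelcast.core.HazelcastInstance;

@Component
public class TableCacheRefresher {

    private final HazelcastInstance hazelcastInstance;
    private final CacheManager cacheManager;

    public TableCacheRefresher(HazelcastInstance hazelcastInstance, CacheManager cacheManager) {
        this.hazelcastInstance = hazelcastInstance;
        this.cacheManager = cacheManager;
    }

    // Option 1: evict the backing Hazelcast IMap directly, every day at 02:00.
    @Scheduled(cron = "0 0 2 * * *")
    public void evictTableCacheMap() {
        hazelcastInstance.getMap("tableCache").evictAll();
    }

    // Option 2: clear it through Spring's cache abstraction instead; the next
    // @Cacheable call then hits the database and repopulates the cache.
    // (Use one of the two options, not both.)
    @Scheduled(cron = "0 0 2 * * *")
    public void clearTableCacheViaSpring() {
        Cache cache = cacheManager.getCache("tableCache");
        if (cache != null) {
            cache.clear();
        }
    }
}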

Does Spring Integration provide something like a metric-channel-interceptor that can be logged within a flow

I'm looking for an easy way on some of my flows to be able to log when some "event" occurs.
In my simple case, an "event" might be whenever any message flows down a channel, or whenever a certain number of messages have flowed down a channel; when that happens, I'd like to print some info to a log file.
I know there currently is a logging-channel-adapter but in the case just described I'd need to be able to tailor my own log message and I'd also need to have some sort of counter or metrics keeping track of things (so the expression on the adapter wouldn't suffice since that grants access to the payload but not info about the channel or flow).
I'm aware that Spring Integration already exposes a lot of metrics to JMX via @ManagedResource, @ManagedMetric, and MetricType.
I've also watched Russell's "Managing and Monitoring Spring Integration Applications" YouTube video several times: https://www.youtube.com/watch?v=TetfR7ULnA8
and I realize that Spring Integration component metrics can be polled via jmx-attribute-polling-channel-adapter
There are certainly many ways to get what I'm after.
A few examples:
A ServiceActivator that has a counter in it and also has a reference to a logger
Hook into the advice-chain of a poller
Poll JMX via jmx-attribute-polling-channel-adapter
It might be useful however to offer a few components that users could put into the middle of a flow that could provide some basic functionality to easily satisfy the use-case I described.
Sample flow might look like:
inbound-channel-adapter -> metric-logging-channel-interceptor -> componentY -> outbound-channel-adapter
Very high-level such a component might look like a hybrid of the logging-channel-adapter and a ChannelInterceptor with a few additional fields:
<int:metric-logging-channel-interceptor>
id=""
order=""
phase=""
auto-startup=""
ref=""
method=""
channel=""
outchannel=""
log-trigger-expression="(SendCount % 10) = 0"
level=""
logger-name=""
log-full-message=""
expression=""
/>
Internally, the class implementing that would need to keep a few basic stats; I think the ones exposed on MessageChannel would be a good start (e.g. SendCount, MaxSendDuration, etc.).
The log-trigger-expression and expression attributes would need access to the internal counters as well.
Please let me know if there is something that already does what I'm describing, or if I'm overcomplicating this. If it does not currently exist, though, I think that being able to quickly drop a component into a flow, without having to write a custom ServiceActivator just for logging purposes, provides benefit.
Interesting question. You can already do something similar with a selective wire-tap...
<si:publish-subscribe-channel id="seconds">
<si:interceptors>
<si:wire-tap channel="thresholdLogger" selector="selector" />
</si:interceptors>
</si:publish-subscribe-channel>
<bean id="selector" class="org.springframework.integration.filter.ExpressionEvaluatingSelector">
<constructor-arg
value="#mbeanServer.getAttribute('org.springframework.integration:type=MessageChannel,name=seconds', 'SendCount') > 5" />
</bean>
<si:logging-channel-adapter id="thresholdLogger" />
There are a couple of things going on here...
The stats are actually held in the MBean for the channel, not in the channel itself, so the expression has to get the value via the MBean server.
Right now, the wire-tap doesn't support selector-expression, just selector, so I had to use a reference to an expression-evaluating selector. It would be a useful improvement to support selector-expression directly.
Even though the selector in this example acts on the stats for the tapped channel, it can actually reference any MBean.
I can see some potential improvements here.
Support selector-expression.
Maintain the stats in the channel itself instead of the MBean so we can just use #channelName.sendCount > 5.
Feel free to open JIRA 'improvement' issue(s).
Hope that helps.
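For completeness, here is a rough sketch of the kind of interceptor the question describes; this is not an existing Spring Integration component, and the class name, threshold, and log format are all made up:
import java.util.concurrent.atomic.AtomicLong;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptorAdapter;

// Extends ChannelInterceptorAdapter (on Spring 5+ you could implement ChannelInterceptor directly).
public class MetricLoggingChannelInterceptor extends ChannelInterceptorAdapter {

    private static final Logger log = LoggerFactory.getLogger(MetricLoggingChannelInterceptor.class);

    private final AtomicLong sendCount = new AtomicLong();
    private final long logEvery; // e.g. 10 to log every 10th message

    public MetricLoggingChannelInterceptor(long logEvery) {
        this.logEvery = logEvery;
    }

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        long count = sendCount.incrementAndGet();
        if (count % logEvery == 0) {
            log.info("Channel {} has seen {} messages; last payload type: {}",
                    channel, count, message.getPayload().getClass().getSimpleName());
        }
        return message;
    }
}
The interceptor could then be registered on a channel via <int:interceptors>, the same way the wire-tap is registered above.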

Spring Integration Poller too slow

We have a Spring Integration project which uses the following:
<int-file:inbound-channel-adapter
directory="file:#{'${poller.landingzonepath}'.toLowerCase()}" channel="createMessageChannel"
filename-regex="${ingestion.filenameRegex}" queue-size="10000"
id="directoryPoller" scanner="leafScanner">
<!-- <int:poller fixed-rate="${ingestion.filepoller.interval:10000}" max-messages-per-poll="100" /> -->
<int:poller fixed-rate="10000" max-messages-per-poll="1000" />
</int-file:inbound-channel-adapter>
We also have a leafScanner which extends the default RecursiveLeafOnlyDirectoryScanner; our leafScanner doesn't do too much, it just checks a directory against a regex property.
The issue we're seeing is one where there are 250,000 .landed files (the ones we care about), which means about 500k actual files in the directory that we are polling. This is a redesign of an older system, and the goal of the redesign was to make the application more scalable whilst being agnostic of the directory names inside the polled parent directory. We wanted to get away from a poller per specific directory, but it seems that, unless we're doing something wrong, we'll have to go back to this.
If anyone has any possible solutions, or configuration items we could try please let me know. On my local machine with 66k .landed files, it takes about 16 minutes before the first file is presented to our transformer to do something.
As the JavaDocs indicate, the RecursiveLeafOnlyDirectoryScanner will not scale well with large directories or deep trees.
You could make your leafScanner stateful and, instead of subclassing RecursiveLeafOnlyDirectoryScanner, subclass DefaultDirectoryScanner and implement listEligibleFiles: return when you have 1000 files, after saving off where you are; on the next poll, continue from where you left off; when you get to the end, start again at the beginning.
You could maintain state in a field (which would mean you'd start over after a JVM restart) or use some persistence.
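A very rough sketch of that idea, with several assumptions: state is kept in memory only, the listing is flat rather than a real recursive leaf walk, and the batch size is fixed at 1000:
import java.io.File;
import java.util.Arrays;

import org.springframework.integration.file.DefaultDirectoryScanner;

// Sketch of a stateful scanner: it returns at most BATCH_SIZE files per poll and
// remembers where it stopped. State lives in a field, so it starts over after a JVM restart.
public class BatchingLeafScanner extends DefaultDirectoryScanner {

    private static final int BATCH_SIZE = 1000;

    // Index into the (sorted) listing where the previous poll stopped.
    private int offset = 0;

    @Override
    protected File[] listEligibleFiles(File directory) {
        File[] all = directory.listFiles(); // a real implementation would recurse into the leaf directories
        if (all == null || all.length == 0) {
            return new File[0];
        }
        Arrays.sort(all); // a stable order keeps the offset meaningful between polls

        if (offset >= all.length) {
            offset = 0; // reached the end, start again at the beginning
        }
        int end = Math.min(offset + BATCH_SIZE, all.length);
        File[] batch = Arrays.copyOfRange(all, offset, end);
        offset = end;
        return batch;
    }
}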
Just an update: the reason our implementation was so slow was because of locking (used to prevent duplicates); locking is automatically disabled when you add a filter.
The max-messages-per-poll setting is also very important if you want to add a thread pool. Without it you will see no performance improvement.

Flushing entire cache at once with Enterprise Caching Block

I am looking into using Enterprise Caching Block for my .NET 3.5 service to cache a bunch of static data from the database.
From everything I have read, it seems that FileDependency is the best option for storing static data that does not expire too often. However, when the file changes and the cache is flushed, I need to get a callback once, to do some post-processing for that particular cache. If I implement ICacheItemRefreshAction and register it when adding an item to the cache, I get a callback for each item.
Is there a way to register a callback for the entire cache so that I don't see thousands of callbacks being invoked when the cache flushes?
Thanks
To address your follow up for a better way than FileDependency: you could wrap a SqlDependency in an ICacheItemExpiration. See SqlCacheDependency with the Caching Application Block for sample code.
That approach would only work with SQL Server and would require setting up Service Broker.
In terms of a cache-level callback, I don't see an out-of-the-box way to achieve that; almost everything is geared to the item level. What you could do is create your own CacheManager implementation that features a cache-level callback.
Another approach might be to have an ICacheItemRefreshAction that only performs its operation when the cache is empty (i.e. when the last item has been removed).
