I am fetching some details from MySQL tables into the ESB using the dblookup mediator, and it works perfectly without any problem. But since it accesses the database, it impacts performance. So I want to cache some data from the database and read from the cache instead of reading directly from the database. I found a mediator called the cache mediator, which is related to caching, but I am not clear on how to use it in my scenario. Is it possible? If it is, can anyone show me example code on how to implement it?
This is the way I use the dblookup mediator:
<dblookup>
    <connection>
        <pool>
            <password>password</password>
            <driver>com.mysql.jdbc.Driver</driver>
            <url>jdbc:mysql://localhost:3306/userdb</url>
            <user>root</user>
        </pool>
    </connection>
    <statement>
        <sql><![CDATA[select * from user limit 1]]></sql>
        <result column="username" name="user"/>
    </statement>
</dblookup>
I just want to get the result from the cache instead of the database.
The cache mediator is used to cache responses in EI, so depending on your use case you can use it to increase performance, but it cannot be used for dblookup.
First, I would advise you to evaluate the performance; it should be quite fast already. You could configure a datasource (either in 'master-datasources.xml' or through the Carbon GUI) and use the following in your proxy to refer to the datasource:
<dblookup>
    <connection>
        <pool>
            <dsName>yourdatasourcename</dsName>
        </pool>
    </connection>
    <statement>
        <sql><![CDATA[select * from user limit 1]]></sql>
        <result column="username" name="user"/>
    </statement>
</dblookup>
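As a rough sketch, a MySQL datasource entry in 'master-datasources.xml' could look like the following; the datasource name, JNDI name, pool settings and credentials are only illustrative placeholders, and depending on the ESB version the <dsName> in the mediator may need to match the datasource name or its JNDI name:
<datasource>
    <name>yourdatasourcename</name>
    <description>Pooled connection to the user database</description>
    <jndiConfig>
        <name>jdbc/userdb</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/userdb</url>
            <username>root</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
        </configuration>
    </definition>
</datasource>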
Hybris: 6.3.0.0-SNAPSHOT
I'm doing performance testing, and I need to disable caching. I've already disabled the database (MySQL) caching and would like to disable all forms of application caching. Is it possible?
I've already seen other questions and the suggestion to use setDisableCaching for FlexibleSearch. Unfortunately, some FlexibleSearch queries are under Hybris's control, and I can't change the method directly. I'm looking into overriding it next, but I want to know if there's an easier way.
I've also tried adding "-Dnet.sf.ehcache.disabled=true" to tomcat.generaloptions in local.properties, but the application just seems to hang during startup, and the server never starts.
Additional context: we have a web service that is returning 3,000 PointOfService records. The first call is so slow that the client thinks the application is not working (it might have timed out). Subsequent calls are faster because the data has already been cached. I need to work out how to improve the performance of the first call.
The new cache is Region Cache.
If you want to disable the cache, you have to set the size of every cache region to 0. It won't really be disabled, but nothing will be cached.
You can also disable it from code, as mentioned in the other response: Registry.getCurrentTenant().getCache().setEnabled(false);
You can use the old cache by setting cache.legacymode=true in your local.properties.
This won't disable all caches, however.
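For illustration, assuming the default region names (the exact property keys vary between hybris versions and projects, so check your platform's properties for the real ones), setting the region sizes to 0 in local.properties might look like:
regioncache.entityregion.size=0
regioncache.querycacheregion.size=0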
Now, if your problem is slow response times when querying a lot of objects, maybe you need to define your own cache region and set the proper values in your properties:
<alias name="defaultMyObjectCacheRegion" alias="myObjectCacheRegion"/>
<bean name="defaultMyObjectCacheRegion" class="de.hybris.platform.regioncache.region.impl.EHCacheRegion">
    <constructor-arg name="name" value="MyObjectCacheRegion" />
    <constructor-arg name="maxEntries" value="${regioncache.myObjectcacheregion.maxentries}" />
    <constructor-arg name="evictionPolicy" value="${regioncache.myObjectcacheregion.evictionpolicy}" />
    <constructor-arg name="statsEnabled" value="${regioncache.stats.enabled}" />
    <constructor-arg name="exclusiveComputation" value="${regioncache.exclusivecomputation}" />
    <property name="handledTypes">
        <array>
            <value>[MyObject typecode]</value>
        </array>
    </property>
</bean>
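The placeholders used in the bean above then need matching entries in your properties; the values here are only examples:
regioncache.myObjectcacheregion.maxentries=10000
regioncache.myObjectcacheregion.evictionpolicy=LRU
regioncache.stats.enabled=true
regioncache.exclusivecomputation=false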
To conclude, you should not try to disable the hybris cache; it's almost impossible. But you can easily clear it for testing purposes.
If you have performance issues, I suggest you also take a look at DB transactions. They are often a bottleneck. See: https://help.hybris.com/1808/hcd/8c7387f186691014922080f2e053216a.html
You can manually clear the Hybris cache from:
https://localhost:9002/hac/monitoring/cache
Run the below as a Groovy script in commit mode from the HAC:
Registry.getCurrentTenant().getCache().setEnabled(false);
To re-enable it, change false to true.
Did you consider adding pagination to the call for PointOfService? Let the client request only 10 or 100 elements at a time; the client can then request the first 10, the second 10, and so on. That way the call will be a lot faster. It also won't cram your cache or stress the server and database as much, and the client will be much safer in its processing of the data.
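As a rough server-side sketch of such paging with FlexibleSearch (the class name, query, and injected service are only illustrative, and the model's package may differ depending on your extensions):
import de.hybris.platform.servicelayer.search.FlexibleSearchQuery;
import de.hybris.platform.servicelayer.search.FlexibleSearchService;
import de.hybris.platform.servicelayer.search.SearchResult;
import de.hybris.platform.storelocator.model.PointOfServiceModel;
import java.util.List;

// Illustrative only: return PointOfService records one page at a time
// instead of all 3,000 at once.
public class PagedPointOfServiceLookup {

    private FlexibleSearchService flexibleSearchService; // assumed to be injected via Spring

    public List<PointOfServiceModel> findPage(final int page, final int pageSize) {
        final FlexibleSearchQuery query = new FlexibleSearchQuery("SELECT {pk} FROM {PointOfService}");
        query.setStart(page * pageSize); // offset of the first row of this page
        query.setCount(pageSize);        // maximum number of rows to return
        final SearchResult<PointOfServiceModel> result = flexibleSearchService.search(query);
        return result.getResult();
    }
}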
Our company has a website made in Classic ASP, most of whose pages are static.
We have enabled kernel cache for 30 seconds in our web.config file in order to speed up the display of all our web pages:
<caching enabled="true" enableKernelCache="true" maxCacheSize="512" maxResponseSize="524288">
    <profiles>
        <add extension="*" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" varyByQueryString="*" varyByHeaders="accept-encoding, accept-language" location="Server" duration="00:00:30"/>
    </profiles>
</caching>
We have one dynamic page displaying customer information.
This page displays information based on the Request.QueryString("userId") parameter:
EXAMPLE. https://website.com/user.asp?userId=12345
Our question is the following: can you confirm that this dynamic page will never be cached for other users, since it always has a different URL (based on the userId parameter)?
We need to be sure that userId=12345 will NEVER see cached information for userId=56789, even if they access the "user.asp" page within the same cache timeframe (30 seconds).
Thank you very much,
Yes, varyByQueryString="*" will keep separate cache copies for each query string. (You can try it yourself to be sure.)
Beware that any user can change the URL and spoof as another user. This is not a good authentication mechanism.
EDIT by #AlexLaforge:
This answer is absolutely true!
The kernel cache in IIS needs several conditions to be met by the requested resource in order for it to be cached. One of those conditions is that the request must not contain any query string. You can check it in Microsoft's Knowledge Base here:
https://support.microsoft.com/en-us/help/817445/instances-in-which-http-sys-does-not-cache-content
if one or more of the following conditions are true, HTTP.sys does not cache the request response:
....
The request contains a query string.
It makes total sense, and that's exactly the use case that I need.
We have a Spring Integration project which uses the following:
<int-file:inbound-channel-adapter
        directory="file:#{'${poller.landingzonepath}'.toLowerCase()}" channel="createMessageChannel"
        filename-regex="${ingestion.filenameRegex}" queue-size="10000"
        id="directoryPoller" scanner="leafScanner">
    <!-- <int:poller fixed-rate="${ingestion.filepoller.interval:10000}" max-messages-per-poll="100" /> -->
    <int:poller fixed-rate="10000" max-messages-per-poll="1000" />
</int-file:inbound-channel-adapter>
We also have a leafScanner which extends the default RecursiveLeafOnlyDirectoryScanner; our leafScanner doesn't do much, it just checks a directory against a regex property.
The issue we're seeing is that there are 250,000 .landed files (the ones we care about), which means about 500k actual files in the directory we are polling. This is a redesign of an older system, intended to make the application more scalable while being agnostic of the directory names inside the polled parent directory. We wanted to get away from a poller per specific directory, but unless we're doing something wrong, it seems we'll have to go back to that.
If anyone has any possible solutions or configuration items we could try, please let me know. On my local machine with 66k .landed files, it takes about 16 minutes before the first file is presented to our transformer to do something.
As the JavaDocs indicate, the RecursiveLeafOnlyDirectoryScanner will not scale well with large directories or deep trees.
You could make your leafScanner stateful and, instead of subclassing RecursiveLeafOnlyDirectoryScanner, subclass DefaultDirectoryScanner and implement listEligibleFiles, returning once you have 1000 files and saving off where you are; on the next poll, continue from where you left off; when you get to the end, start again at the beginning.
You could maintain state in a field (which would mean you'd start over after a JVM restart) or use some persistence.
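A minimal sketch of such a stateful scanner, keeping its position in a field (the class name and batch size are only illustrative):
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

import org.springframework.integration.file.DefaultDirectoryScanner;

// Illustrative only: returns at most BATCH_SIZE files per poll and remembers
// where it stopped; the position is lost on a JVM restart.
public class BatchingDirectoryScanner extends DefaultDirectoryScanner {

    private static final int BATCH_SIZE = 1000;

    private int position = 0; // index where the previous poll stopped

    @Override
    protected File[] listEligibleFiles(File directory) {
        File[] all = directory.listFiles();
        if (all == null || all.length == 0) {
            return new File[0];
        }
        // sort so the order is stable across polls
        Arrays.sort(all, Comparator.comparing(File::getName));
        if (position >= all.length) {
            position = 0; // reached the end, start again at the beginning
        }
        int end = Math.min(position + BATCH_SIZE, all.length);
        File[] batch = Arrays.copyOfRange(all, position, end);
        position = end;
        return batch;
    }
}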
Just an update. The reason our implementation was so slow was because of locking (to prevent duplicates); that locking is automatically disabled by adding a filter.
The max-messages-per-poll setting is also very important if you want to add a thread pool. Without it you will see no performance improvement.
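For illustration, assuming the Spring task namespace is declared, a poller handing messages to a thread pool might look like this (the executor id and sizes are only examples):
<int:poller fixed-rate="10000" max-messages-per-poll="1000" task-executor="filePollerExecutor" />
<task:executor id="filePollerExecutor" pool-size="4" queue-capacity="100" />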
My site uses Varnish cache heavily and is set to refresh every 5 minutes. I found out that this was skewing product view stats, making them lower than they actually should be.
I want to turn off Magento's default product view logging facility so that no product views are recorded.
I want to mimic the action by doing custom inserts into the relevant tables, i.e. tf_report_viewed_product_index.
Inserting into the tf_report_viewed_product_index table alone is not allowed since it has foreign key constraints. There is more to it.
In case anyone comes across this: you can use XML to disable an event:
<frontend>
    <events>
        <catalog_controller_product_view>
            <observers>
                <reports>
                    <type>disabled</type>
                </reports>
            </observers>
        </catalog_controller_product_view>
    </events>
</frontend>
Then, using an AJAX call from the product view page, I simply insert a new row into the tf_report_viewed_product_index table.
This is not a Magento issue; this is a user-request-reaching-your-web-app (Magento) issue. The speed and load-handling benefits realized by using Varnish exist precisely because pre-generated static content is cached and served ahead of dynamically generated content from Magento (which also includes the overhead and resources of logging traffic to the report_* and log_* tables).
I've not got too much experience in this area, but I believe you should use varnishncsa to log cache hits and then process them via cron using the Magento Reports module's models; see Mage_Reports_Model_Event_Observer::catalogProductView() for a start, but note that this method normally handles logging of single views. You will likely want to do a mass insert of processed Varnish log data and then calculate.
And here's a link to an SO post on setting up logging with varnishncsa.
Scalability issue with singleton object
Hi All,
We have a singleton object hosted in a Windows service.
It works fine until the number of simultaneous client requests exceeds some magical number around 100.
After this, all new calls seem to be queued and are processed one by one, each only when one of the current connections is released.
We would very much appreciate if someone could tell us how to get rid of this limitation.
At the time when it happens, the number of threads (according to Task Manager) is about 120, so thread pooling shouldn't be an issue (there are 2 CPUs, which allows up to 512 threads, if I understand correctly).
There is also plenty of free memory (the process allocates about 200-300 MB and there is still more than 1 GB of free memory).
We use .Net framework 3.5
Below is a fragment of app.config.
<configuration>
    <system.runtime.remoting>
        <application>
            <service>
                <wellknown type="CompanyName.Server.ServerStub, MyServer" objectUri="MyServer" mode="Singleton"/>
            </service>
            <channels>
                <channel port="3210" ref="tcp">
                    <serverProviders>
                        <formatter ref="binary" typeFilterLevel="Full"/>
                    </serverProviders>
                </channel>
            </channels>
        </application>
    </system.runtime.remoting>
</configuration>
There is always only one singleton object. It's handling all requests one by one. After about 100 requests you'll probably notice some slowdown because some buffers are filling up.