Ehcache 3.2.0 "No Store.Provider found to handle configured resource types [offheap, disk]" exception - ehcache

I have recently switched from an older version of Ehcache to version 3.2, and I have the following XML configuration file for a project:
<eh:config xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
           xmlns:eh='http://www.ehcache.org/v3'
           xsi:schemaLocation="http://www.ehcache.org/v3
                               http://www.ehcache.org/schema/ehcache-core-3.0.xsd">
    <eh:persistence directory="C:\foo\bar\Cache-Persistence"/>
    <eh:thread-pools>
        <eh:thread-pool alias="defaultDiskPool" min-size="1" max-size="3"/>
    </eh:thread-pools>
    <eh:disk-store thread-pool="defaultDiskPool"/>
    <eh:cache-template name="PROC_REQTemplate">
        <eh:key-type>java.lang.String</eh:key-type>
        <eh:value-type>java.lang.String</eh:value-type>
        <eh:expiry>
            <eh:ttl>640</eh:ttl>
        </eh:expiry>
        <eh:resources>
            <eh:offheap unit="MB">500</eh:offheap>
            <eh:disk unit="GB" persistent="true">3</eh:disk>
        </eh:resources>
        <eh:disk-store-settings thread-pool="defaultDiskPool"/>
    </eh:cache-template>
    <eh:cache alias="proc_req_cache" uses-template="PROC_REQTemplate"/>
</eh:config>
With the configuration shown above I get the following exception trace, which I have truncated to conserve a bit of space but which clearly shows the error:
java.lang.IllegalStateException: No Store.Provider found to handle configured resource types [offheap, disk] from {org.ehcache.impl.internal.store.heap.OnHeapStore$Provider, org.ehcache.impl.internal.store.tiering.TieredStore$Provider, org.ehcache.impl.internal.store.offheap.OffHeapStore$Provider, org.ehcache.impl.internal.store.disk.OffHeapDiskStore$Provider}
at org.ehcache.core.internal.store.StoreSupport.selectStoreProvider(StoreSupport.java:80) ~[?:?]
at org.ehcache.core.EhcacheManager.getStore(EhcacheManager.java:440) ~[?:?]
at org.ehcache.core.EhcacheManager.createNewEhcache(EhcacheManager.java:311) ~[?:?]
at org.ehcache.core.EhcacheManager.createCache(EhcacheManager.java:260) ~[?:?]
at org.ehcache.core.EhcacheManager.init(EhcacheManager.java:567) ~[?:?]
I thought that, according to the current 3.2 documentation, you can use any combination of data storage tiers, but apparently this is not the case, as the above error shows. So...
I can only make the configuration shown above work if I comment out the offheap resource and leave only the disk, but not both together. Is this normal? What am I missing?
As for version 2.7.8, the documentation (see here: ehcache-2.8-storage-options) mentioned BigMemory as the offheap store; however, in ehcache-3.2.0.jar, if I am seeing correctly, there is some kind of internal map for that purpose. Could the error reported above be related to the fact that I am not including BigMemory in the project? My guess is no, but it would be nice if someone could clarify.
Any help would be greatly appreciated. Thanks in advance.

In short, there is currently no support for having a disk tier together with just an offheap tier. The current Ehcache 3.x tiering support mandates a heap tier the moment you want multiple tiers.
Supported combinations as of today (Ehcache 3.1.x and above):
heap or offheap or disk or clustered (single tier)
heap + offheap
heap + disk
heap + offheap + disk
heap + clustered
heap + offheap + clustered
The error has nothing to do with BigMemory, which was the commercial offering on top of Ehcache 2.x.
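To make the rule concrete, here is a minimal programmatic sketch (assuming the Ehcache 3.x ResourcePoolsBuilder API; the pool sizes mirror the question and are otherwise arbitrary):

import org.ehcache.config.ResourcePools;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

public class TieringRules {

    // Rejected at cache creation: offheap + disk without heap is what triggers
    // "No Store.Provider found to handle configured resource types [offheap, disk]".
    static ResourcePools invalid() {
        return ResourcePoolsBuilder.newResourcePoolsBuilder()
                .offheap(500, MemoryUnit.MB)
                .disk(3, MemoryUnit.GB, true)
                .build();
    }

    // Accepted: any multi-tier setup must include a heap tier, even a tiny one.
    static ResourcePools valid() {
        return ResourcePoolsBuilder.newResourcePoolsBuilder()
                .heap(1, EntryUnit.ENTRIES)
                .offheap(500, MemoryUnit.MB)
                .disk(3, MemoryUnit.GB, true)
                .build();
    }
}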

The problem is that the highest level (offheap in your configuration) needs to be a caching tier (our terminology for near caching), and right now offheap isn't one. So you need an on-heap level as soon as you start having multiple layers. Here is a working configuration.
I've also made Ehcache the default namespace to make the XML more readable, and marked defaultDiskPool as the default thread pool so you don't have to set it everywhere (an alternative is to add <event-dispatch thread-pool="defaultDiskPool"/>, because event dispatching needs a thread pool and no default was set).
<config xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
        xmlns='http://www.ehcache.org/v3'
        xsi:schemaLocation="http://www.ehcache.org/v3
                            http://www.ehcache.org/schema/ehcache-core-3.0.xsd">
    <persistence directory="C:\foo\bar\Cache-Persistence"/>
    <thread-pools>
        <thread-pool alias="defaultDiskPool" min-size="1" max-size="3" default="true"/>
    </thread-pools>
    <cache-template name="PROC_REQTemplate">
        <key-type>java.lang.String</key-type>
        <value-type>java.lang.String</value-type>
        <expiry>
            <ttl>640</ttl>
        </expiry>
        <resources>
            <heap unit="entries">1</heap>
            <offheap unit="MB">500</offheap>
            <disk unit="GB" persistent="true">3</disk>
        </resources>
    </cache-template>
    <cache alias="proc_req_cache" uses-template="PROC_REQTemplate"/>
</config>
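For completeness, here is a rough programmatic equivalent of that configuration (a sketch only, assuming the Ehcache 3.2 builder API; the thread-pool setup is omitted for brevity):

import java.util.concurrent.TimeUnit;

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;
import org.ehcache.expiry.Duration;
import org.ehcache.expiry.Expirations;

public class ProcReqCacheSetup {
    public static void main(String[] args) {
        CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                // Same persistence directory as the <persistence> element above
                .with(CacheManagerBuilder.persistence("C:\\foo\\bar\\Cache-Persistence"))
                .withCache("proc_req_cache",
                        CacheConfigurationBuilder.newCacheConfigurationBuilder(String.class, String.class,
                                ResourcePoolsBuilder.newResourcePoolsBuilder()
                                        .heap(1, EntryUnit.ENTRIES)    // mandatory heap tier
                                        .offheap(500, MemoryUnit.MB)
                                        .disk(3, MemoryUnit.GB, true)) // persistent disk tier
                                .withExpiry(Expirations.timeToLiveExpiration(
                                        new Duration(640, TimeUnit.SECONDS))))
                .build(true);

        Cache<String, String> cache = cacheManager.getCache("proc_req_cache", String.class, String.class);
        cache.put("key", "value");
        cacheManager.close();
    }
}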

Related

Memory problems when running Stanford NLP (Stanford Segmenter)

I downloaded the Stanford Segmenter and I am following the instructions, but I am getting a memory error; the full message is here:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.regex.Pattern.matcher(Pattern.java:1093)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter.shapeOf(Sighan2005DocumentReaderAndWriter.java:230)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter.access$300(Sighan2005DocumentReaderAndWriter.java:49)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter$CTBDocumentParser.apply(Sighan2005DocumentReaderAndWriter.java:169)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter$CTBDocumentParser.apply(Sighan2005DocumentReaderAndWriter.java:114)
at edu.stanford.nlp.objectbank.LineIterator.setNext(LineIterator.java:42)
at edu.stanford.nlp.objectbank.LineIterator.<init>(LineIterator.java:31)
at edu.stanford.nlp.objectbank.LineIterator$LineIteratorFactory.getIterator(LineIterator.java:108)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter.getIterator(Sighan2005DocumentReaderAndWriter.java:86)
at edu.stanford.nlp.objectbank.ObjectBank$OBIterator.setNextObjectHelper(ObjectBank.java:435)
at edu.stanford.nlp.objectbank.ObjectBank$OBIterator.setNextObject(ObjectBank.java:419)
at edu.stanford.nlp.objectbank.ObjectBank$OBIterator.<init>(ObjectBank.java:412)
at edu.stanford.nlp.objectbank.ObjectBank.iterator(ObjectBank.java:250)
at edu.stanford.nlp.sequences.ObjectBankWrapper.iterator(ObjectBankWrapper.java:45)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1193)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1137)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1091)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:3023)
Before executing the file I tried increasing the heap space by doing export JAVA_OPTS=-Xmx4000m. I also tried splitting the file but still had the same error; I split the file into 8 chunks, so each was around 15 MB. What should I do to fix the memory problem?
The segment.sh script that ships with the segmenter limits the memory to 2G, which is probably the cause of the error. Editing that file will hopefully fix the issue for you.
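Note that JAVA_OPTS only helps if the launch script actually passes it to java; if segment.sh hard-codes its own heap limit, that value is likely what the JVM ends up with. A quick, generic way to check the limit the running JVM actually received is to print Runtime.maxMemory() (a small sketch, not part of the segmenter itself):

public class MaxHeapCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will attempt to use (the effective -Xmx).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}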

Trying to Create a new Policy for Multi Disk Operations

I am using ClickHouse with just one disk, which is specified in the config.xml file under <path>.
Now I want to extend this storage, so I updated the ClickHouse version to enable multi-disk support.
What I want to do now is use the two disks together: read from both of them, but write new data only to the second one.
I have many tables. I thought changing the storage policy of the tables would do the trick, but I can't change it.
For example, I have a table called default_event which has the default policy. After this query:
alter table default_event modify setting storage_policy='newStorage_only';
I got this error: Exception: New storage policy default shall contain volumes of old one
My storage XML is like this:
<?xml version="1.0" encoding="UTF-8"?>
<yandex>
    <storage_configuration>
        <disks>
            <!--
                The default disk is special: it always exists even if not
                explicitly configured here, but you can't change its path here
                (you should use <path> in the top-level config instead).
            -->
            <default>
                <!--
                    You can reserve some amount of free space on any disk
                    (including default) by adding the keep_free_space_bytes tag.
                -->
                <keep_free_space_bytes>1024</keep_free_space_bytes>
            </default>
            <test_disk>
                <!-- The disk path must end with a slash and the folder must be writable by the clickhouse user. -->
                <path>/DATA/newStorage/</path>
            </test_disk>
            <test_disk_2>
                <!-- The disk path must end with a slash and the folder must be writable by the clickhouse user. -->
                <path>/DATA/secondStorage/</path>
            </test_disk_2>
            <test_disk_3>
                <!-- The disk path must end with a slash and the folder must be writable by the clickhouse user. -->
                <path>/DATA/thirdStorage/</path>
            </test_disk_3>
        </disks>
        <policies>
            <newStorage_only>
                <!-- name of the new storage policy -->
                <volumes>
                    <newStorage_volume>
                        <!-- name of the volume -->
                        <!--
                            We have only one disk in this volume, referenced by
                            the disk name configured above in the <disks> section.
                        -->
                        <disk>test_disk</disk>
                    </newStorage_volume>
                </volumes>
            </newStorage_only>
        </policies>
    </storage_configuration>
</yandex>
I tried adding the default volume to the new policy, but I can't start ClickHouse with that config.
So, your main problem is that you did not explicitly specify a storage policy before, which means the table is implicitly tied to the default disk. A new policy must contain all of the old policy's disks and volumes, with the same names.
I have provided a configuration based on yours, with everything unnecessary removed. By that I mean that, in addition to the disks listed, you have a disk named default specified via <path>. All disks are listed in the volumes section of the new policy. Writing to the new disks will happen thanks to move_factor: the value 0.5 means that when 50% of the disk space is reached, data starts being written to the next volume, and so on.
Once the remaining disks have filled evenly, you can lower this value.
PS: alternatively, you can avoid keeping the old disk in the new policy; for that you need to execute ALTER TABLE ... MOVE PARTITIONS/PARTS ... to transfer the partitions/parts to the new disks. Then the table will no longer be tied to the old disk and it will not be necessary to specify it in the new storage policy. The disks, of course, must be pre-configured in the settings. A sketch of such a move is shown after the configuration below.
<yandex>
    <storage_configuration>
        <disks>
            <test_disk>
                <path>/DATA/newStorage/</path>
            </test_disk>
            <test_disk_2>
                <path>/DATA/secondStorage/</path>
            </test_disk_2>
            <test_disk_3>
                <path>/DATA/thirdStorage/</path>
            </test_disk_3>
        </disks>
        <policies>
            <!-- ... old policy ... -->
            <new_storage_only> <!-- policy name -->
                <volumes>
                    <default>
                        <disk>default</disk>
                    </default>
                    <new_volume>
                        <disk>test_disk</disk>
                        <disk>test_disk_2</disk>
                        <disk>test_disk_3</disk>
                    </new_volume>
                </volumes>
                <move_factor>0.5</move_factor>
            </new_storage_only>
        </policies>
    </storage_configuration>
</yandex>
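To tie the two steps together: once the policy above is applied (ClickHouse accepts the switch because it still contains the default volume), existing parts can be moved off the old disk with ALTER TABLE ... MOVE PARTITION. Here is a rough sketch of driving that over JDBC; the connection URL and the partition ID '202101' are placeholders, and a ClickHouse JDBC driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MovePartsToNewStorage {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port/database; adjust to your server.
        try (Connection conn = DriverManager.getConnection("jdbc:clickhouse://localhost:8123/default");
             Statement stmt = conn.createStatement()) {
            // Switch to the new policy first; it still contains the old default volume,
            // so the "shall contain volumes of old one" check passes.
            stmt.execute("ALTER TABLE default_event MODIFY SETTING storage_policy = 'new_storage_only'");
            // Then move existing parts off the old disk onto the new volume
            // (hypothetical partition ID; repeat per partition).
            stmt.execute("ALTER TABLE default_event MOVE PARTITION ID '202101' TO VOLUME 'new_volume'");
        }
    }
}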

Memory usage grows until the VM crashes while running Wildfly 9 with Java 8

We are having an issue with virtual servers (VMs) running out of native memory. These VMs are running:
Linux 7.2 (Maipo)
Wildfly 9.0.1
Java 1.8.0_151 (different JVMs have different heap sizes, ranging from 0.5 GB to 2 GB)
The JVM args are:
-XX:+UseG1GC
-XX:SurvivorRatio=1
-XX:NewRatio=2
-XX:MaxTenuringThreshold=15
-XX:-UseAdaptiveSizePolicy
-XX:G1HeapRegionSize=16m
-XX:MaxMetaspaceSize=256m
-XX:CompressedClassSpaceSize=64m
-javaagent:/<path to new relic.jar>
After about a month, sometimes longer, the VMs start to use all of their swap space, and eventually the OOM killer notices that Java is using too much memory and kills one of our JVMs.
The amount of memory being used by the Java process is larger than heap + metaspace + compressed class space, as revealed by using -XX:NativeMemoryTracking=detail.
Are there tools that could tell me what is in this native memory (like a heap dump, but not for the heap)?
Are there any tools, other than jemalloc, that can map Java heap usage to native memory usage (outside the heap)? I have used jemalloc to try to achieve this, but the graph it draws contains only hex values and not human-readable class names, so I can't really get anything out of it. Maybe I'm doing something wrong, or perhaps I need another tool.
Any suggestions would be greatly appreciated.
You can use jcmd.
Start the application with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail.
Then use jcmd to monitor NMT (Native Memory Tracking):
jcmd <pid> VM.native_memory baseline    // take the baseline
jcmd <pid> VM.native_memory detail.diff // analyze how native memory has changed relative to the baseline

Disk persistent cache in ehcache 3.4 is using (leaking?) direct memory

I am running a web application that makes use of Ehcache 3.4.0. I have a cache configuration that defines a simple default of 1000 in-memory objects:
<cache-template name="default">
    <key-type>java.lang.Object</key-type>
    <value-type>java.lang.Object</value-type>
    <heap unit="entries">1000</heap>
</cache-template>
I then have some disk-based caches that use this default template, but override all values (generated programmatically, so that's why they even use the default template at all) like so:
<cache alias='runViewCache' uses-template='default'>
    <key-type>java.lang.String</key-type>
    <value-type>java.lang.String</value-type>
    <resources>
        <heap unit='entries'>1</heap>
        <disk unit='GB' persistent='true'>1</disk>
    </resources>
</cache>
As data is written into my disk-based cache, direct/off-heap memory is used by the JVM, and never freed. Even clearing the cache does not free the memory. The memory used is directly related (nearly byte-for-byte as far as I can tell) to the data written to the disk-based cache.
The authoritative tier for this cache is an instance of org.ehcache.impl.internal.store.disk.OffHeapDiskStore.
This appears to be a memory leak (memory is consumed and never freed) but I am by no means an expert at configuring ehcache. Can anyone suggest a configuration change that will cause my disk tier to NOT use off-heap memory? Or, is there something else that I am just completely misunderstanding that someone else can point out?
Thank you!
How do you measure "used"?
TL;DR: No, the disk tier does not waste RAM.
As of v3.0.0, Ehcache uses memory-mapped files for disk persistence:
Replacement of the port of Ehcache 2.x open source disk store by one that leverages the offheap library and memory mapped files.
This means Ehcache uses virtual address space to access files on disk, which in itself consumes 0 bytes of your RAM. (At least not directly: as @louis-jacomet already stated, the OS can decide to cache parts of the files in RAM.)
When you're running on Linux you should compare the VIRT and RES values of your process. VIRT is the amount of virtual address space used by the process; RES is the amount of real RAM (RESident) bytes used by the process. VIRT should increase while the disk store cache is populated, but RES should remain pretty stable.
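From inside the JVM you can make the same distinction with the buffer pool MXBeans: memory-mapped file usage is accounted in the "mapped" pool, separately from the "direct" pool used by direct ByteBuffers (a small generic sketch, not Ehcache-specific):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPoolReport {
    public static void main(String[] args) {
        // The JVM exposes one MXBean per buffer pool, typically "direct" and "mapped".
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%-7s count=%d used=%d bytes capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}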

Out Of Memory in IBM WebSphere 8.5.5.7

I am raising a question about an Out of Memory error in IBM WebSphere 8.5.5.7. We have an application, primarily a Spring RESTful web services application, deployed in IBM WAS 8.5.5.7, and we have been getting the Out of Memory error below for the last 5 days:
[2/3/16 13:12:51:651 EST] 000000ab BBFactoryImpl E CWOBB9999E: Something unexpected happened; the data (if any) is <null> and the exception (if any) is java.lang.OutOfMemoryError: Java heap space at
com.ibm.oti.vm.VM.getClassNameImpl(Native Method) at
com.ibm.oti.vm.AbstractClassLoader.getPackageName(AbstractClassLoader.java:384) at
com.ibm.oti.vm.BootstrapClassLoader.loadClass(BootstrapClassLoader.java:65) at
java.lang.ClassLoader.loadClassHelper(ClassLoader.java:691) at
java.lang.ClassLoader.loadClass(ClassLoader.java:680) at
java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693) at
java.lang.ClassLoader.loadClass(ClassLoader.java:680) at
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358) at
java.lang.ClassLoader.loadClass(ClassLoader.java:663) at
org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:502) at
org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422) at
org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410) at
org.eclipse.osgi.internal.loader.buddy.RegisteredPolicy.loadClass(RegisteredPolicy.java:79) at
org.eclipse.osgi.internal.loader.buddy.PolicyHandler.doBuddyClassLoading(PolicyHandler.java:135) at
org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:494) at
org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422) at
org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410) at
org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at
java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693) at
java.lang.ClassLoader.loadClass(ClassLoader.java:680) at
java.lang.ClassLoader.loadClass(ClassLoader.java:663) at
sun.reflect.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:51) at
sun.misc.Unsafe.defineClass(Native Method) at
sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:57) at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:437) at
java.security.AccessController.doPrivileged(AccessController.java:363) at
sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:433) at
sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:149) at
sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:316) at
java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1409) at
java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:63) at
java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:515) at
java.security.AccessController.doPrivileged(AccessController.java:363) at
java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:491) at
java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:338) at
java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:625) at
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619) at
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514) at
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619) at
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514) at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1768) at
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347) at
java.io.ObjectInputStream.readObject(ObjectInputStream.java:364) at
com.ibm.son.util.Util.deserialize(Util.java:434) at
com.ibm.son.mesh.AbstractTCPImpl.procReceivedMessage(AbstractTCPImpl.java:478) at
com.ibm.son.mesh.CfwTCPImpl.completedRead(CfwTCPImpl.java:1248) at
com.ibm.son.mesh.CfwTCPImpl.complete(CfwTCPImpl.java:1061) at
com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1818) at
com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:175) at
com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217) at
com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161) at
com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138) at
com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204) at
com.ibm.io.async.ResultHandler.runEventProces
Analyzing the heap dump with Introscope and Heap Analyzer, it is observed that the lion's share of the memory (>60%) is consistently consumed by com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory, used by the IBM StAX parser shipped with WAS.
The Introscope analysis shows a sudden spike in thread count and memory usage, and a gradual increase in connection count, at the time the OOM happened.
Looking into the issue of com.ibm.xml.xlxp2.scan.util.DataBuffer taking up so much heap, it can be seen that IBM has been fixing Out of Memory issues for classes belonging to com.ibm.xml.xlxp.scan.util / com.ibm.xml.xlxp2.scan.util in WAS 6, WAS 7 and WAS 8 servers:
http://www-01.ibm.com/support/docview.wss?uid=swg1PM39346
http://www-01.ibm.com/support/docview.wss?uid=swg1PM08333
Can anyone share any idea whether this is a known issue with IBM WAS 8.5.5.7? I have not been able to get a solid breakthrough.
Many of the out of memory problems concerning com.ibm.xml.xlxp2.scan.util.DataBuffer were addressed with system properties that users can configure to reduce the memory used by the IBM StAX parser.
The following system properties can be helpful in resolving out of memory issues with the IBM StAX parser. Each of them should be available in WebSphere Application Server v8.5.5.7.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength
System property which controls the size of the StAX parser's data buffers. The default value is 65536.
Setting this property to a smaller value such as 2048 may reduce the memory usage if the 64KB buffers were only being partially filled by the InputStream when they are in use. The buffers are cached within the StAX parser (inside com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory) so a reduction in memory usage there would reduce the overall memory linked to each StAX parser object.
com.ibm.xml.xlxp2.api.util.Pool.STRONG_REFERENCE_POOL_MAXIMUM_SIZE
System property (introduced by APAR PM42465) which limits the number of XMLStreamReaders (and XMLStreamWriters) that will be cached using strong references. Follow the instructions at the link provided on how to set this property.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLoadFactor
The value of this system property is a non-negative integer which determines the minimum number of bytes (as a percentage) that will be loaded into each buffer. The percentage is calculated with the following formula 1 / (2^n).
When the system property is not set its default value is 3. Setting the property to a lower value than the default can improve memory usage but may also reduce throughput.
com.ibm.xml.xlxp2.scan.util.SymbolMap.maxSymbolCount
System property (introduced by APAR PI08415). The value of this property is a non-negative integer which determines the maximum size of the StAX parser's symbol map. Follow the instructions at the link provided on how to set this property.
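These are plain JVM system properties, so in WebSphere they would typically be added as -D entries in the server's generic JVM arguments and picked up before the StAX parser is first used. As an illustration only, a hedged sketch of setting and verifying them programmatically (the property names come from the list above; the values are examples, not recommendations):

public class StaxTuning {
    public static void main(String[] args) {
        // Example values only; in WAS these would usually be set as JVM arguments, e.g.
        // -Dcom.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength=2048
        // They must be in place before the IBM StAX parser classes initialize.
        System.setProperty("com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength", "2048");
        System.setProperty("com.ibm.xml.xlxp2.api.util.Pool.STRONG_REFERENCE_POOL_MAXIMUM_SIZE", "10");
        System.setProperty("com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLoadFactor", "3");
        System.setProperty("com.ibm.xml.xlxp2.scan.util.SymbolMap.maxSymbolCount", "50000");

        // Verify what the parser will actually see.
        System.out.println("bufferLength = "
                + System.getProperty("com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength"));
    }
}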
