JAudioTagger 2.2.3 OOM when reading info for many songs (about 1300) in the background - jaudiotagger

io.reactivex.exceptions.UndeliverableException: The exception could not be delivered to the consumer because it has already canceled/disposed the flow or the exception has nowhere to go to begin with. Further reading: https://github.com/ReactiveX/RxJava/wiki/What's-different-in-2.0#error-handling | java.lang.OutOfMemoryError: Failed to allocate a 235800 byte allocation with 135032 free bytes and 131KB until OOM, max allowed footprint 134217728, growth limit 134217728
at io.reactivex.plugins.RxJavaPlugins.onError(RxJavaPlugins.java:367)
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:69)
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:301)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
Caused by: java.lang.OutOfMemoryError: Failed to allocate a 235800 byte allocation with 135032 free bytes and 131KB until OOM, max allowed footprint 134217728, growth limit 134217728
at org.jaudiotagger.tag.datatype.ByteArraySizeTerminated.readByteArray(ByteArraySizeTerminated.java:94)
at org.jaudiotagger.tag.id3.framebody.AbstractID3v2FrameBody.read(AbstractID3v2FrameBody.java:181)
at org.jaudiotagger.tag.id3.framebody.AbstractID3v2FrameBody.<init>(AbstractID3v2FrameBody.java:81)
at org.jaudiotagger.tag.id3.framebody.FrameBodyAPIC.<init>(FrameBodyAPIC.java:149)
at java.lang.reflect.Constructor.newInstance0(Native Method)
at java.lang.reflect.Constructor.newInstance(Constructor.java:343)
at org.jaudiotagger.tag.id3.AbstractID3v2Frame.readBody(AbstractID3v2Frame.java:272)
at org.jaudiotagger.tag.id3.ID3v23Frame.read(ID3v23Frame.java:446)
at org.jaudiotagger.tag.id3.ID3v23Frame.<init>(ID3v23Frame.java:280)
at org.jaudiotagger.tag.id3.ID3v23Tag.readFrames(ID3v23Tag.java:581)
at org.jaudiotagger.tag.id3.ID3v23Tag.read(ID3v23Tag.java:546)
at org.jaudiotagger.tag.id3.ID3v23Tag.<init>(ID3v23Tag.java:311)
at org.jaudiotagger.audio.mp3.MP3File.readV2Tag(MP3File.java:219)
at org.jaudiotagger.audio.mp3.MP3File.<init>(MP3File.java:391)
at org.jaudiotagger.audio.mp3.MP3FileReader.read(MP3FileReader.java:39)
at org.jaudiotagger.audio.AudioFileIO.readFile(AudioFileIO.java:286)
at com.flyaudio.media.music.util.music.SongUtils.getSongInfo(SongUtils.java:63)
at com.flyaudio.media.music.util.music.SongUtils.getExternalSongInfo(SongUtils.java:143)
at com.flyaudio.media.music.scan.util.ScanDbUtils.analysisExtraData(ScanDbUtils.java:55)
at com.flyaudio.media.music.scan.util.ScanDbUtils$1.doInBackground(ScanDbUtils.java:37)
at com.flyaudio.lib.async.executor.RxExecutor$BackgroundTask.doInBackground(RxExecutor.java:62)
at com.flyaudio.lib.async.executor.RxExecutor$Task$1.subscribe(RxExecutor.java:117)
at io.reactivex.internal.operators.observable.ObservableCreate.subscribeActual(ObservableCreate.java:40)
at io.reactivex.Observable.subscribe(Observable.java:12246)
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeTask.run(ObservableSubscribeOn.java:96)
at io.reactivex.Scheduler$DisposeTask.run(Scheduler.java:578)
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66)
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:301) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) 
at java.lang.Thread.run(Thread.java:764) 
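Note that the top-level UndeliverableException is only RxJava 2's wrapper: the OutOfMemoryError surfaced after the stream had already been disposed, so RxJava had nowhere to deliver it. Per the wiki linked in the log, a global handler can unwrap such errors; a minimal sketch for RxJava 2.x (the logging call is illustrative, android.util.Log assumed since this is an Android app):
import io.reactivex.exceptions.UndeliverableException;
import io.reactivex.plugins.RxJavaPlugins;

RxJavaPlugins.setErrorHandler(e -> {
    // Unwrap the real cause that RxJava could not deliver
    if (e instanceof UndeliverableException && e.getCause() != null) {
        e = e.getCause();
    }
    if (e instanceof OutOfMemoryError) {
        // An OOM is not recoverable here; hand it to the default handler
        Thread t = Thread.currentThread();
        t.getUncaughtExceptionHandler().uncaughtException(t, e);
        return;
    }
    android.util.Log.w("RxError", "Undeliverable exception", e);
});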

You simply ran out of memory (128 MB max):
128 megabytes (MB) = 134,217,728 bytes (B)
If you have enough resources, I'd suggest simply increasing the JVM memory to e.g. 1 GB max:
java -Xms512M -Xmx1024M -jar jaudiotagger-2.2.3.jar
This increases the heap size by passing the JVM parameters -Xms (initial heap size) and -Xmx (maximum heap size).
This should do the job, but there are more parameters if you need to fine-tune. See e.g.:
Increase heap size in Java
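Note also that the allocation that fails is in FrameBodyAPIC, i.e. an embedded album-art frame, so when scanning ~1300 files it helps not to retain each parsed AudioFile. A minimal sketch using the standard jaudiotagger API (the helper class and field choice here are illustrative):
import java.io.File;
import org.jaudiotagger.audio.AudioFile;
import org.jaudiotagger.audio.AudioFileIO;
import org.jaudiotagger.tag.FieldKey;
import org.jaudiotagger.tag.Tag;

class SongInfoReader {
    // Copy out only the small string fields so the parsed tag,
    // including any in-memory APIC artwork bytes, can be GC'd
    // before the next file is read.
    static String readTitle(File f) throws Exception {
        AudioFile audioFile = AudioFileIO.read(f);
        Tag tag = audioFile.getTag();
        return (tag != null) ? tag.getFirst(FieldKey.TITLE) : null;
    }
}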

Related

Pod sizing with Actuator metrics jvm.memory.max

I am trying to size our pods using the Actuator metrics info. With the below K8s resource quota configuration:
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "512Mi"
We are observing that jvm.memory.max returns ~1455 MB. I understand that this value includes heap and non-heap. Drilling further into the API, (jvm.memory.max?tag=area:nonheap) and (jvm.memory.max?tag=area:heap) return ~1325 MB and ~129 MB respectively.
Obviously, with non-heap set to max out at a value greater than the K8s limit, the container is bound to get killed eventually. But why is the JVM's non-heap memory not bounded by the memory configuration of the container (configured in K8s)?
The above observations hold with both Java 8 and Java 11. The blog below discusses the experimental options for Java 8, covering CPU and heap configuration, but makes no mention of non-heap memory. What are some suggestions to consider when sizing the pods?
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
Source
Java 8 has a few flags that can help the runtime operate in a more container-aware manner:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar app.jar
Why do you get a maximum JVM heap of 129 MB when the container memory limit is set to 512 MB? The answer is that JVM memory consumption includes both heap and non-heap memory. The memory required for class metadata, JIT-compiled code, thread stacks, GC, and other processes is taken from non-heap memory. Therefore, based on the cgroup resource restrictions, the JVM reserves a portion of the memory for non-heap use to ensure system stability.
The exact amount of non-heap memory can vary widely, but a safe bet if you're doing resource planning is that the heap is about 80% of the JVM's total memory. So if you set the maximum heap to 1000 MB, you can expect the whole JVM to need around 1250 MB.
The JVM reads that the container is limited to 512 MB and creates a heap with a maximum size of ~129 MB: exactly 1/4 of the container memory, as defined on the JDK ergonomics page.
If you dig into the JVM Tuning guide you will see the following.
Unless the initial and maximum heap sizes are specified on the command line, they're calculated based on the amount of memory on the machine. The default maximum heap size is one-fourth of the physical memory while the initial heap size is 1/64th of physical memory. The maximum amount of space allocated to the young generation is one third of the total heap size.
You can find more information about it here.
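On newer runtimes the experimental cgroup flag shown above is superseded; if the default 1/4 split leaves too little heap, the fraction can be set explicitly. A hedged example (these flags exist on Java 8u191+ and Java 10+, where container support is also on by default; the 75% figure is an arbitrary choice, not a recommendation from this thread):
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar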

Committed Bytes and Commit Limit - Memory Statistics

I'm trying to understand the actual difference between committed bytes and commit limit.
From the definitions below,
Commit Limit is the amount of virtual memory that can be committed
without having to extend the paging file(s). It is measured in bytes.
Committed memory is the physical memory which has space reserved on the disk paging files.
Committed Bytes is the amount of committed virtual memory, in bytes.
From my computer's configuration, I see that my physical memory is 1991 MB, virtual memory (total paging file for all drives) is 1991 MB, and
the minimum allowed is 16 MB, recommended is 2986 MB, and currently allocated is 1991 MB.
But when I open perfmon and monitor Committed Bytes and Commit Limit, the numbers differ a lot. So what exactly are Committed Bytes and Commit Limit, and how are they formed?
Right now in my perfmon, Committed Bytes is at 3041 MB (sometimes it goes up to 4000 MB as well) and Commit Limit is 4177 MB. So how are they calculated? Kindly explain: I've read a lot of documents but still don't understand how this works.
Please help. Thanks.
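As a rough sanity check, assuming the usual Windows accounting: Commit Limit ≈ physical RAM + current total pagefile size, i.e. 1991 MB + 1991 MB ≈ 3982 MB with the numbers above. An observed limit of 4177 MB is consistent with Windows having grown the pagefile past its initial allocation, since the limit rises whenever the pagefile is extended.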

What are the Spring Boot default memory settings?

For example, if I run/debug a simple Spring Boot app from the IDE without any JVM options, what initial heap size, maximum heap size, and stack size (-Xms, -Xmx, -Xss) will be set?
By default a Spring Boot app uses the JVM's default memory settings.
Default heap size
If your physical memory is up to 192 megabytes (MB), the default maximum heap size is half of the physical memory.
If your physical memory is more than 192 megabytes, the default maximum heap size is one fourth of the physical memory.
For example, if your computer has 128 MB of physical memory, then the maximum heap size is 64 MB, and greater than or equal to 1 GB of physical memory results in a maximum heap size of 256 MB.
The maximum heap size is not actually used by the JVM unless your program creates enough objects to require it. A much smaller amount, called the initial heap size, is allocated during JVM initialization. This amount is at least 8 MB and otherwise 1/64th of physical memory up to a physical memory size of 1 GB.
The maximum amount of space allocated to the young generation is one third of the total heap size.
You can check the default values specific to your machine with the following commands.
Linux:
java -XX:+PrintFlagsFinal -version | grep HeapSize
Windows:
java -XX:+PrintFlagsFinal -version | findstr HeapSize
Reference: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html#default_heap_size
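To make the rule above concrete, here is a minimal sketch of the stated ergonomics (real JVMs also factor in server-class detection and compressed-oops limits, so treat it as an approximation of the quoted rule, not the actual HotSpot code):
class DefaultHeap {
    private static final long MB = 1024L * 1024L;

    // Default max heap per the rule quoted above:
    //   phys <= 192 MB -> phys / 2
    //   phys >  192 MB -> phys / 4, capped at 256 MB (client VM)
    static long defaultMaxHeapBytes(long physBytes) {
        if (physBytes <= 192 * MB) {
            return physBytes / 2;
        }
        return Math.min(physBytes / 4, 256 * MB);
    }

    public static void main(String[] args) {
        System.out.println(defaultMaxHeapBytes(128 * MB) / MB);  // 64, as in the example above
        System.out.println(defaultMaxHeapBytes(2048 * MB) / MB); // 256, the 1 GB+ cap
    }
}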
Default thread stack size
The default thread stack size varies with JVM, OS and environment variables.
To find out what your default thread stack size is on your platform, use
In Linux:
java -XX:+PrintFlagsFinal -version | grep ThreadStackSize
In Windows:
java -XX:+PrintFlagsFinal -version | findstr ThreadStackSize
Usually it's 25% of your total physical memory if no -Xmx option is provided when starting Java.
On a Unix/Linux system, you can do
java -XX:+PrintFlagsFinal -version | grep HeapSize
On Windows, use the following command to find out the defaults
java -XX:+PrintFlagsFinal -version | findstr HeapSize
Look for the options MaxHeapSize (for -Xmx) and InitialHeapSize (for -Xms).
The resulting output is in bytes.
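A quick cross-check from inside the running JVM (Runtime.maxMemory() roughly corresponds to MaxHeapSize, though it can report slightly less because one survivor space is excluded):
class HeapDefaults {
    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        Runtime rt = Runtime.getRuntime();
        // Approximately the -Xmx in effect for this JVM
        System.out.println("max heap:     " + rt.maxMemory() / mb + " MB");
        // Heap currently reserved from the OS (starts near -Xms)
        System.out.println("current heap: " + rt.totalMemory() / mb + " MB");
    }
}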

Hadoop: Why does my active NameNode use so much memory?

The monitoring web page on port 50070 shows:
16,889,283 files and directories, 9,236,314 blocks = 26,125,597 total filesystem object(s).
Heap Memory used 14.84 GB of 25.85 GB Heap Memory. Max Heap Memory is 25.85 GB.
confusion-over-hadoop-namenode-memory-usage tells me how to estimate the NameNode's memory usage, but the method doesn't match what I see: 26,125,597 * 150 bytes ≈ 4 GB, yet mine uses nearly 15 GB!
Meanwhile, the standby NameNode uses just about 5 GB:
16,889,283 files and directories, 9,236,314 blocks = 26,125,597 total filesystem object(s).
Heap Memory used 4.9 GB of 25.85 GB Heap Memory. Max Heap Memory is 25.85 GB.

What are reasons for "Cannot allocate memory" other than address-space exhaustion and memory fragmentation?

The problem is that in a 32-bit application on Mac OS X I receive this error:
malloc: *** mmap(size=49721344) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
For reference, the error code is defined in sys/errno.h:
#define ENOMEM 12 /* Cannot allocate memory */
The memory allocation pattern is like this:
1. Nearly 250 MB of memory is allocated
2. 6 blocks of 32 MB are allocated
3. Then 27 images are each handled like this:
   3.1. Allocate 16 MB (the image bitmap is loaded)
   3.2. Allocate 32 MB, process it, free these 32 MB
   3.3. Again allocate 32 MB, process it, free these 32 MB
   3.4. Free the 16 MB allocated in step 3.1
4. Free 4 of the blocks allocated in step 2 (2 blocks are still used)
5. Free the 250 MB block allocated in step 1
6. Allocate blocks of various sizes, with total size not exceeding 250 MB. Here I receive the memory allocation error mentioned above
I've checked that none of these memory blocks is leaked, so I figure the memory in use at any given time stays below 1 GB, which should be accessible in a 32-bit process.
My second guess was memory fragmentation. But I've checked that all the blocks in step 3 reuse the same addresses, so I touch less than 1 GB of memory and memory fragmentation should not be an issue.
Now I am completely lost as to what could cause the allocation failure. Everything also works OK when I process fewer than 27 images. Here is part of the heap command's output before step 6, for 26 images:
Process 1230: 4 zones
Zone DefaultMallocZone_0x273000: Overall size: 175627KB; 29620 nodes malloced for 68559KB (39% of capacity); largest unused: [0x6f800000-8191KB]
Zone DispatchContinuations_0x292000: Overall size: 4096KB; 1 nodes malloced for 1KB (0% of capacity); largest unused: [0x2600000-1023KB]
Zone QuartzCore_0x884400: Overall size: 232KB; 7039 nodes malloced for 132KB (56% of capacity); largest unused: [0x3778ca0-8KB]
Zone DefaultPurgeableMallocZone_0x27f2000: Overall size: 4KB; 0 nodes malloced for 0KB (0% of capacity); largest unused: [0x3723000-4KB]
All zones: 36660 nodes malloced - 68691KB
And for 27 images:
Process 1212: 4 zones
Zone DefaultMallocZone_0x273000: Overall size: 167435KB; 30301 nodes malloced for 68681KB (41% of capacity); largest unused: [0x6ea51000-32372KB]
Zone DispatchContinuations_0x292000: Overall size: 4096KB; 1 nodes malloced for 1KB (0% of capacity); largest unused: [0x500000-1023KB]
Zone QuartzCore_0x106b000: Overall size: 192KB; 5331 nodes malloced for 101KB (52% of capacity); largest unused: [0x37f2f98-8KB]
Zone DefaultPurgeableMallocZone_0x30f8000: Overall size: 4KB; 0 nodes malloced for 0KB (0% of capacity); largest unused: [0x368f000-4KB]
All zones: 35633 nodes malloced - 68782KB
So what other reasons are there for "Cannot allocate memory", and how can I diagnose them? Or, if I made a mistake in ruling out the reasons mentioned, how can I re-check them?
It turned out I had made a mistake when checking that the address space was not exhausted. Instead of the heap command I should have used vmmap: heap only reports malloc zones, while vmmap revealed that most of the address space was taken up by images mapped into memory.
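For anyone diagnosing the same symptom, the two tools differ in scope (PID taken from the heap output above; exact flags vary by macOS version):
heap 1230     # malloc zones only - what was checked originally
vmmap 1230    # all VM regions, including mmap'd files and images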
