There's a lot of different advice regarding heap sizes, and I was interested to see what a sensible general min setting would be when the max is currently set to 2048 MB.
I thought the min should be set to about half of the max.
What could the implications be of setting the min too low?
There are no golden rules for setting the min and max heap sizes of your WebSphere server.
It depends on how much memory your app needs and how much load the specific server will have.
My suggestion is to set the min heap size to a low value and the max heap size to a high enough value.
Then expose the server to production-like load and read the verbose GC log (native_stderr.log) of the JVM to see how it performs.
Generally speaking, you should keep your heap usage between 40% and 70% of your max heap size.
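A quick way to check where you sit relative to that 40%-70% guideline is the standard management API. A minimal sketch (the class name is ours, not from any library):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    // Fraction of the maximum heap currently in use (NaN if no max is defined).
    static double heapUsedFraction() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long max = heap.getMax(); // -1 when the max is undefined
        return max > 0 ? (double) heap.getUsed() / max : Double.NaN;
    }

    public static void main(String[] args) {
        // The rule of thumb above: steady-state usage around 40%-70% of -Xmx.
        System.out.printf("Heap in use: %.1f%% of max%n", 100 * heapUsedFraction());
    }
}
```

The verbose GC log remains the authoritative source, but this is handy for spot checks from inside the application.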
How can I determine the heap size required for 1 GB of logs with a 1-day retention period?
If I take a machine with a 32 GB heap size (64 GB RAM), how many GB of logs can I keep on it for 1 day?
It depends on various factors: the number of indexing requests, search requests, cache utilization, the size of the search and indexing requests, the number of shards/segments, and so on. Heap usage should also follow a sawtooth pattern, and instead of guessing it, you should start measuring it.
The good thing is that you can start right by assigning 50% of RAM as the ES heap size, as long as it does not cross 32 GB.
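The starting rule above can be captured as a tiny helper (the method name is ours, not an Elasticsearch API):

```java
public class EsHeapRule {
    // Rule of thumb from the answer above: ES heap = min(50% of RAM, 32 GB).
    // Staying under ~32 GB is the common advice so the JVM can keep using
    // compressed object pointers.
    static int recommendedHeapGb(int ramGb) {
        return Math.min(ramGb / 2, 32);
    }

    public static void main(String[] args) {
        System.out.println(recommendedHeapGb(64)); // 64 GB RAM -> 32 GB heap
        System.out.println(recommendedHeapGb(16)); // 16 GB RAM -> 8 GB heap
    }
}
```

Treat the result as a starting point, then adjust based on the measured sawtooth.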
I am trying to size our pods using the actuator metrics info, with the below K8s resource quota configuration:
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "512Mi"
We are observing that jvm.memory.max returns ~1455 MB. I understand that this value includes heap and non-heap. Further drilling into the API, (jvm.memory.max?tag=area:nonheap) and (jvm.memory.max?tag=area:heap) return ~1325 MB and ~129 MB respectively.
Obviously, with the non-heap set to max out at a value greater than the K8s limit, the container is bound to get killed eventually. But why is the JVM's non-heap memory not bounded by the memory configuration of the container (configured in K8s)?
The above observations are valid with Java 8 and Java 11. The blog below discusses the experimental options with Java 8, where CPU and heap configurations are covered, but there is no mention of non-heap. What are some suggestions to consider in sizing the pods?
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
Source
Java 8 has a few flags that can help the runtime operate in a more container-aware manner:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar app.jar
Why do you get a maximum JVM heap of 129 MB when you set the maximum container memory limit to 512 MB? The answer is that memory consumption in the JVM includes both heap and non-heap memory. The memory required for class metadata, JIT-compiled code, thread stacks, GC, and other processes is taken from the non-heap memory. Therefore, based on the cgroup resource restrictions, the JVM reserves a portion of the memory for non-heap use to ensure system stability.
The exact amount of non-heap memory can vary widely, but a safe bet if you're doing resource planning is that the heap is about 80% of the JVM's total memory. So if you set the maximum heap to 1000 MB, you can expect that the whole JVM might need around 1250 MB.
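That 80% planning rule is just arithmetic, captured here as a small helper (the name is ours, purely for illustration):

```java
public class JvmFootprint {
    // Planning estimate from the answer above: heap is roughly 80% of the
    // JVM's total footprint, so total is roughly heap / 0.8.
    static long estimatedTotalMb(long heapMb) {
        return Math.round(heapMb / 0.8);
    }

    public static void main(String[] args) {
        // 1000 MB heap -> ~1250 MB total JVM memory, matching the example above.
        System.out.println(estimatedTotalMb(1000));
    }
}
```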
The JVM read that the container is limited to 512 MB and created a JVM with a maximum heap size of ~129 MB: exactly 1/4 of the container memory, as described on the JDK ergonomics page.
If you dig into the JVM Tuning guide you will see the following.
Unless the initial and maximum heap sizes are specified on the command line, they're calculated based on the amount of memory on the machine. The default maximum heap size is one-fourth of the physical memory while the initial heap size is 1/64th of physical memory. The maximum amount of space allocated to the young generation is one third of the total heap size.
You can find more information about it here.
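You can confirm what the ergonomics actually picked from inside the container with a one-liner:

```java
public class ErgonomicsCheck {
    public static void main(String[] args) {
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        // Inside a container with a 512 MiB limit and default ergonomics
        // (max heap = 1/4 of available memory), this should print a value
        // close to the ~129 MB observed in the question.
        System.out.println("Max heap: " + maxHeapMb + " MB");
    }
}
```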
I'm working with Chicken Scheme, and I wonder how many elements a list can have.
There is no hard limit – it can have as many as there's room for in memory.
The documentation, under the option -:hmNUMBER, mentions that there is a default max heap limit of 2 GB, which gives you about 45 million pairs. You can increase this with several options, but the simplest way to set a default memory limit is -heap-size. Here is how to double the default:
csc -heap-size 4000M <file>
The documentation for -heap-size says that only half of the allocated memory is in use at any given time. This suggests a copying (semispace) garbage collector: when the active half fills up, live data is copied into the unused half, and the old half becomes the unused one.
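A toy sketch of that semispace idea (an illustration only, not Chicken's actual collector; a real GC traces reachable objects itself rather than being told how much is live):

```java
public class Semispace {
    // Two equal halves; allocation is a bump-pointer in the active half.
    byte[] from, to;
    int top = 0;

    Semispace(int halfSize) {
        from = new byte[halfSize];
        to = new byte[halfSize];
    }

    // Returns the offset of the new block, or -1 if a collection is needed.
    int alloc(int size) {
        if (top + size > from.length) return -1;
        int at = top;
        top += size;
        return at;
    }

    // Evacuate the surviving prefix into the idle half and swap the roles.
    void collect(int liveBytes) {
        System.arraycopy(from, 0, to, 0, liveBytes);
        byte[] tmp = from;
        from = to;
        to = tmp;
        top = liveBytes;
    }
}
```

This is why only half of the configured memory is usable for data at any moment: the other half is reserved as the copy target.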
I use version 2.1.6 of Neo4j, and I am trying to change the size of the off-heap memory.
I have 2 GB of RAM. I allotted 1 GB for the heap size with the parameters wrapper.java.initmemory and wrapper.java.maxmemory.
To change the size of the off-heap memory, I added the parameters:
neostore.nodestore.db.mapped_memory
neostore.relationshipstore.db.mapped_memory
But when I launch my program and watch it in JConsole, the size of the off-heap is always 44 MB at the start and increases up to a maximum of 73 MB, no matter how I change the values of the neostore parameters.
Is there another parameter to add to change this size?
Those parameters configure maximum sizes; the memory will only be used if it is actually needed.
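For reference, in the Neo4j 2.x line these settings live in conf/neo4j.properties; a sketch with illustrative values (tune them to your actual store file sizes):

```properties
# conf/neo4j.properties -- values below are examples only
neostore.nodestore.db.mapped_memory=100M
neostore.relationshipstore.db.mapped_memory=500M
```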
I have a few questions regarding Java GC and memory management.
In Java we define the process memory upper and lower bounds with the -Xmx and -Xms parameters. Using these parameters, the JVM allocates the young, old, and perm spaces. So when new threads are created, from which memory are the threads' stacks allocated? Is it from the perm space or some other space?
Also, static variables of a class are allocated in which space: young, old, or perm? (I guess perm?)
Does the -Xmx parameter bound young + old gen, OR young + old + perm gen, OR young + old + perm + stack size?
Thanks
Basically, stack memory comes from the stack area, which is independent from the heap area and the perm area.
Static variables are allocated in the heap, except string and numeric constants.
The -Xmx parameter only bounds the young + old parts of the heap, as the perm area is not part of it.
The stack area size is set by the -Xss flag, the heap area size is set by the -Xmx flag, and the perm area size is set by -XX:MaxPermSize.
If you want to dive into JVM internal memory management I recommend this blog entry.
The thread stack space is controlled by another option, -Xss. Here is a reference on this particular topic that might help you.
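To see that thread stacks are sized independently of the heap, here is a small sketch using the four-argument Thread constructor, whose stackSize parameter is a per-thread, platform-dependent hint (analogous to -Xss; some JVMs may ignore it):

```java
public class StackDepth {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // runs until the probe thread's stack overflows
    }

    // Measure recursion depth on a thread with the given stack size hint.
    static int maxDepth(long stackSizeBytes) throws InterruptedException {
        depth = 0;
        Thread probe = new Thread(null, () -> {
            try {
                recurse();
            } catch (StackOverflowError expected) {
                // the stack is exhausted; 'depth' holds the reached depth
            }
        }, "probe", stackSizeBytes);
        probe.start();
        probe.join(); // join gives us a happens-before on 'depth'
        return depth;
    }

    public static void main(String[] args) throws InterruptedException {
        int small = maxDepth(256 * 1024);      // 256 KiB stack
        int large = maxDepth(4 * 1024 * 1024); // 4 MiB stack
        // Where the hint is honored, the larger stack allows much deeper
        // recursion; heap settings like -Xmx have no effect on this limit.
        System.out.println(small + " vs " + large);
    }
}
```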
On Solaris you can use 'ulimit -a' to see the stack limit of processes. I think that the thread stacks are taken from this resource. I am wondering whether the JVM will issue a garbage collection when there is enough space in the heap for the threads but not enough space for their stacks.