Does Liberty have a limitation on heap size? - websphere-liberty

Is there any limitation when using Liberty instead of WAS? I have found mixed articles online; some state there is a limitation on heap size. Is that true?
This article states that the max memory for Liberty should not exceed 2GB.
There are definitely many benefits. But is the max heap memory allowed really only 2GB?

The 2GB limit listed in that article is the "Freemium" heap limit. If you exceed that 2GB heap limit across your entire organization (business, non-profit, personal use, etc.), then you are required to pay for the usage as per IBM software purchase guidelines.
There is no technical heap limit for Liberty. We have deployments of Liberty that use 32 GB of heap, for example. If you have purchased entitlement for Liberty there is no heap limitation per server or per organization.
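If you do have entitlement (or stay within the Freemium terms), the heap is set the same way as on any other JVM, through the server's jvm.options file. A minimal sketch, assuming a server named defaultServer; the path and sizes are illustrative only:
# usr/servers/defaultServer/jvm.options -- one JVM option per line
-Xms4g
-Xmx32g
Restart the server for the change to take effect.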

Related

WildFly 16: What is the benefit of changing -XX:MaxMetaspaceSize in Java 8?

I am getting a Metaspace issue in WildFly.
Currently -XX:MaxMetaspaceSize is 256M, but I am hitting the following issue multiple times in multiple server groups across different projects (50 projects in total, distributed among server groups), and I face this exception daily:
failed to define class: OutOfMemoryException: Metaspace
Most posts (Stack Overflow and others) suggest it should be 2GB in the case of WildFly.
But I have read various articles suggesting that in Java 8 there is no need to increase Metaspace:
In Java 8, the metaspace that holds your classes can expand without limit by default,
Could you please resolve this confusion: if Metaspace grows automatically, but I have set it to 256M, does it then not grow beyond that? What benefit will I get at 2G?
Per the Oracle docs, in Java 8, the class metadata is stored in native memory and by default is unlimited. MaxMetaspaceSize puts an upper limit on the native memory that's used for class metadata.
If you also have UseCompressedOops and UseCompressedClassPointers enabled, then MaxMetaspaceSize sets the upper limit on the sum of both areas of native memory used for class metadata: the area for the compressed class metadata and the area for all other class metadata.
Also, 2GB sounds a bit high. I would increase it slowly and test to be sure you're setting it to an optimal value for your needs.
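If you do decide to raise it, a sketch of where the flag typically lives in a standalone WildFly install is below; the sizes are placeholders to tune against your own monitoring, and in domain mode (server groups) the equivalent options usually go in the <jvm> element of host.xml or domain.xml instead:
# bin/standalone.conf (standalone.conf.bat on Windows): adjust the MaxMetaspaceSize entry in JAVA_OPTS, e.g.
JAVA_OPTS="$JAVA_OPTS -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m"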

statement_mem seems to limit the node memory instead of the segment memory

According to the Greenplum documentation, GUCs such as statement_mem and gp_vmem_protect_limit should work at the segment level. The same should apply to a resource queue's memory allowance.
On our system we have 8 primary segments per node. So if I set the statement_mem of a query to 2GB, I would expect the query to consume (if needed) up to 2GB x 8 = 16GB of RAM. But it seems that it only uses 2GB in total per node before starting to spill to disk (that is, 2GB/8 per segment). I tried different statement_mem values and saw the same thing.
The max_statement_mem and gp_vmem_protect_limit limits are never reached. RAM usage on the nodes has been monitored with various tools (from GP Command Center to top and free, all the way to the Pivotal-suggested session_level_memory_consumption view).
EDITED FROM HERE: added two documentation sources where statement_mem is defined per segment and not per host (@Jon Roberts).
In the GP best practices guide, at the beginning of page 32, it clearly says that if statement_mem is 125MB and we have 8 segments on the server, each query will get 1GB allocated per server.
http://gpdb.docs.pivotal.io/4300/pdf/GPDB43BestPractices.pdf
The https://support.pivotal.io/hc/en-us/articles/201947018-Pivotal-Greenplum-GPDB-Memory-Configuration article also seems to treat statement_mem as segment memory and not host memory. It keeps interrelating statement_mem with the memory limit of the resource queues as well as with gp_vmem_protect_limit (both parameters defined on a per-segment basis).
This is why I'm getting confused about how to properly manage the memory resources.
Thanks
I incorrectly stated that statement_mem is per host, and that is not the case. This link talks about the memory on a segment level:
http://gpdb.docs.pivotal.io/4370/guc_config-statement_mem.html#statement_mem
With the default gp_resqueue_memory_policy of "eager_free", memory gets re-used, so the aggregate amount of memory used may look low for a particular query execution. If you change it to "auto", where the memory isn't re-used, the memory usage is more noticeable.
Run an "explain analyze" of your query and see the slices that are used. With eager_free, the memory gets re-used so you may only have a single slice wanting more memory than available such as this one:
(slice18) * Executor memory: 10399K bytes avg x 2 workers, 10399K bytes max (seg0). Work_mem: 8192K bytes max, 13088K bytes wanted.
And for your question on how to manage the resources, most people don't change the default values. A query that spills to disk is usually an indication that the query needs to be revised or the data model needs some work.
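If you want to see where the memory actually goes, a rough way to check from the shell is sketched below; the table name is a placeholder and the 2000MB figure just mirrors the 2GB from the question:
psql -c "SHOW statement_mem"
psql -c "SHOW gp_resqueue_memory_policy"
psql -c "SET statement_mem='2000MB'; EXPLAIN ANALYZE SELECT count(*) FROM my_big_table"
Compare the per-slice "Work_mem ... wanted" figures in the output with statement_mem multiplied by the number of primary segments per host; with eager_free the aggregate per-host usage will usually sit well below that product.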

How much virtual memory does a 30.5GB heap (256GB memory in total) for Elasticsearch support?

Assume I have a machine with 256GB of memory and a 12TB SSD. The indexed document size is 100TB. I assign 30.5GB to the Elasticsearch heap; the remainder is for Lucene and the OS.
My question is: how much virtual memory does Elasticsearch support? To put it another way, how many indexed documents can I fit into virtual memory on each machine?
Thanks
The amount of virtual memory ES can use is governed by the vm.max_map_count kernel setting (configurable via sysctl / /etc/sysctl.conf). The kernel default is 65536, and Elasticsearch recommends raising it to at least 262144, which you can do with:
sysctl -w vm.max_map_count=262144
From the linux documentation:
This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.
While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.
The default value is 65536.
So this setting doesn't impose a specific amount of memory available to ES/Lucene, but rather a limit on the number of discrete memory-map areas a given process can use. How much memory is used exactly will depend on the size of the memory chunks being allocated by ES/Lucene. By default, Lucene uses
1<<30 = 1,073,741,824 ~= 1GB chunks on a 64-bit JRE and
1<<28 = 268,435,456 ~= 256MB chunks on a 32-bit JRE.
So if you do the math, a vm.max_map_count of 262144 is probably good enough for your case; if not, you can raise it further and monitor your virtual memory usage.
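To make the setting persistent and to see how close the running Elasticsearch process actually is to the limit, something along these lines should work (run as root; the pgrep pattern is an assumption, so adjust it to however your node is launched):
# persist across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
# count the memory-map areas currently used by the ES process (one mapping per line)
wc -l /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)/maps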

How much memory is available for database use in MemSQL

I have created a MemSQL cluster on 7 machines. One of the machines shows that out of 62.86GB only 2.83GB is used, so I am assuming that around 60GB of memory is available to store data.
But my top command tells another story: it shows about 21.84GB of memory in use and 41GB free.
So: how much memory exactly is available for the database? Is it 60GB as per the cluster URL or 42GB as per the top command?
Note that:
1> memsql-ops is consuming around 13.5GB of virtual memory.
2> As per top, if we subtract the total of buffered and cached memory from the used memory, we get 2.83GB, which is the used memory shown by the cluster URL.
To answer your question, you currently have about 60GB of memory free to be used by any process on your machine including the MemSQL database. Note that MemSQL has some overhead and by default reserves a small percentage of the total memory for overhead. If you visit the status page in the MemSQL Ops UI and view the "Leaf Table Memory" card, you will discover the amount of memory that can be used for data storage within the leaf nodes of your MemSQL cluster.
MemSQL Ops is written in Python which is then embedded into a "single binary" via a packaging tool. Because of this it exhibits a couple of oddities including high VM use. Note that this should not affect the amount of data you can store, as Ops is only consuming 308MB of resident memory on your machine. It should stay relatively constant based on the size of your cluster.
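For the top/free discrepancy in the question, the arithmetic can be checked straight from the shell (a sketch; column names vary slightly between distros and free versions):
free -m
# "used" in this output still includes the kernel page cache; subtracting
# buffers + cached gives the ~2.8GB the Ops UI reports as used, and
# free + buffers + cached is roughly what is actually available to MemSQL.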

Relation between RAM size and Virtual memory with JVM heap size

For performance testing I need 2GB of heap memory, so I am setting the parameter in the Java settings via "-Xmx2048m" and also increasing the virtual memory. But while running the application, it gives errors like "the java run time environment cannot be loaded" and "Several JVM running in the same process caused an error" (in fact, it gives the same error for any value above 1GB).
So is it possible to set the heap memory to 2GB, or can it be a maximum of only 1GB? If 2GB is possible, how do I do it?
I'm using Windows 7 64-bit with 8GB of RAM, and Java 1.6.
Since you are running a 32-bit JVM, there is a limit on how much memory the process can use. Due to how virtual memory is laid out, 32-bit processes can only access 2 GB of memory (or up to 3-4 GB with special settings). Since Java needs some memory for its own bookkeeping, which is not part of the heap available to your application, the actual usable limit for -Xmx must be somewhere below 2 GB. According to this answer, the limit for 32-bit Java on Windows is -Xmx1500m (not sure if it has changed in newer releases, but due to the limitations outlined above it must be below 2 GB, so it is likely to have stayed at 1500 MB).
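A quick way to confirm which JVM is actually being launched, and that a 2GB heap starts cleanly once a 64-bit JVM is on the path (the exact output shown is illustrative):
java -version
# look for "64-Bit Server VM" in the output; if it is absent, you are on a 32-bit JVM
java -Xmx2048m -version
# starts fine on a 64-bit JVM; on a 32-bit JVM on Windows it typically fails with
# "Could not reserve enough space for object heap"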
