Setting JVM options when configuring Elasticsearch - elasticsearch

I'm configuring JVM options for an Elasticsearch cluster, and I wonder which JVM heap size
would be best for my use case.
The machine has 16GB of memory and will be dedicated to a single Elasticsearch node.
The default value is 1GB. I'm not familiar with Java/JVM, but I feel like this is too small.
Any help would be appreciated.

If you use Windows, you can press Windows + R, run systempropertiesadvanced, and then set an environment variable, for example:
ES_JAVA_OPTS
-Xms2g -Xmx2g
(You can increase the value as needed; 2 is the amount, g means gigabytes, m means megabytes.)
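On Linux, or when you prefer not to use environment variables, the same settings usually go into a custom jvm.options.d file. A minimal sketch for the 16 GB machine in the question, assuming the commonly cited guidance of giving a dedicated node about half of the physical RAM and keeping -Xms equal to -Xmx (the file path is illustrative and depends on how Elasticsearch was installed):
# config/jvm.options.d/heap.options  (path is an assumption; adjust for your install)
# roughly half of the 16 GB of RAM, with min and max heap pinned to the same value
-Xms8g
-Xmx8g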
Reference documents: https://www.elastic.co/guide/en/elasticsearch/reference/master/advanced-configuration.html#set-jvm-options
https://www.javadevjournal.com/java/jvm-parameters/

Related

how to disable memory calculator from docker image generated by buildpack

When I set the memory limits (-Xmx512m -Xms512m) in the deployment.yml for a Spring Boot application whose Docker image was generated with mvn spring-boot:build-image, I receive the following error:
Setting Active Processor Count to 4
Adding $JAVA_OPTS to $JAVA_TOOL_OPTIONS
unable to calculate memory configuration
all memory regions require 1130933K which is greater than 956052K available for allocation:
-Xmx512M, 0 headroom, -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=94645K, -XX:ReservedCodeCacheSize=240M,
-Xss1M * 250 threads
ERROR: failed to launch: exec.d: failed to execute exec.d file
at path '/layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/memory-calculator': exit status 1
Current deployment.yml config:
env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod
  - name: JAVA_OPTS
    value: >-
      -XX:+PrintGCDetails
      -Xlog:gc
      -XX:+UseParallelGC
      -XX:+PrintFlagsFinal
      -Xmx512m
      -Xms512m
resources:
  requests:
    cpu: 1554m
    memory: 979M
  limits:
    cpu: 1554m
    memory: 979M
How do I set the memory limits properly, or disable the buildpack memory calculator?
NOTE: I'm using Java 11.
UPDATE:
Thank you for your answer. I applied the second option and understood your point of view; here is a summary of the approach.
Iteration 1: No limits
env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod
  - name: JAVA_OPTS
    value: >-
      -Xlog:gc
      -XX:+UseParallelGC
[Grafana visualization, iteration 1]
Memory Calculator:
Calculating JVM memory based on 13287832K available memory
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx12681764K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 13287832K, Thread Count: 250, Loaded Class Count: 14194, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=4 -Xlog:gc -XX:+UseParallelGC -XX:MaxDirectMemorySize=10M -Xmx12681764K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
Xmx12681764K ≈ 12,385 MB (about 12.1 GB)
In this case, the JVM sizes itself against all the memory available on the host, which is not the ideal configuration, so it is necessary to define the upper boundary at the pod level.
Iteration 2: With defined limits at kubernetes level
env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod
  - name: JAVA_OPTS
    value: >-
      -Xlog:gc
      -XX:+UseParallelGC
resources:
  requests:
    cpu: 1554m
    memory: 979M
  limits:
    cpu: 1554m
    memory: 979M
[Grafana visualization, iteration 2]
Memory calculator:
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx349984K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 956052K, Thread Count: 250, Loaded Class Count: 14194, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=4 -Xlog:gc -XX:+UseParallelGC -XX:MaxDirectMemorySize=10M -Xmx349984K -XX:MaxMetaspaceSize=94067K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enabl
In this case, the memory calculator determines the memory configuration the application needs, while the upper boundary is imposed by the Kubernetes limits rather than by the calculator itself. My confusion was caused by the delay in the Grafana visualization.
As you say, the memory calculator is on our side to help.
Many thanks in advance,
Alberto.
You're getting the error:
all memory regions require 1130933K which is greater than 956052K available for allocation:
-Xmx512M, 0 headroom, -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=94645K, -XX:ReservedCodeCacheSize=240M,
-Xss1M * 250 threads
which is telling you that you have an invalid memory configuration. The amount of memory that you want to assign to the JVM does not fit within the limits you have put on your container.
There are a number of ways you can fix this:
Increase the container memory limit so that the JVM configuration fits. The error message tells you how much you'd need: 1130933K (see the sketch after this list).
Reduce the amount of memory you're assigning to the JVM so that it fits within the container memory limit. The error message tells you which JVM memory settings are being used; you can override them in JAVA_TOOL_OPTIONS (or JAVA_OPTS; anything added there is included in JAVA_TOOL_OPTIONS) to limit them.
Remove -Xmx512m -Xms512m from JAVA_OPTS and just let the memory calculator generate the largest JVM memory configuration that will fit within the container memory limit you've assigned. You won't get that much heap, but you'll get as large of a heap as possible within the given container memory limit.
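For option #1, a minimal sketch of the Kubernetes side, assuming you are willing to grant the pod the 1130933K (roughly 1.1 GiB) the calculator says it needs plus a little slack (the 1280Mi figure is illustrative, not a recommendation):
resources:
  requests:
    cpu: 1554m
    memory: 1280Mi
  limits:
    cpu: 1554m
    memory: 1280Mi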
Some notes on these options:
If you try option #2, be careful in terms of what you reduce. The JVM is memory-hungry, but that's also how it's so fast. Make sure you are performance testing before and after any changes you make to ensure that you're not hurting your application performance (or to confirm that you're still performing to required levels).
With the Paketo Java buildpack, you should really never set -Xmx and -Xms. What you want to do instead is to adjust the other memory settings, like -Xss, -XX:ReservedCodeCacheSize=240M, -XX:MaxDirectMemorySize=10M, thread count, etc...
The memory calculator will adjust the -Xmx and -Xms settings dynamically so that they consume the remainder of the memory in the container. If you manually set these values, what's likely to happen is that you will either cause an error because the values are too large (what happened here) or that you set them too low and the JVM is not using all of the memory available to it. Let the memory calculator do its job and you'll get the optimal settings.
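As a sketch of that approach for the deployment.yml in the question (the -Xss and code cache values are illustrative, not recommendations): drop -Xmx/-Xms from JAVA_OPTS and shrink the non-heap regions instead, so the calculator can size the heap to whatever remains within the 979M limit:
env:
  - name: JAVA_OPTS
    value: >-
      -Xlog:gc
      -XX:+UseParallelGC
      -Xss256k
      -XX:ReservedCodeCacheSize=64M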
There is no option to disable the memory calculator and I would strongly caution against attempting to do that. The memory calculator is your friend here.
It's like a compiler for JVM memory settings. It is checking and validating the settings you enter, so it can tell you in advance if there is a problem with your memory configuration. It might be annoying that it complains, but this is far, far better than having your container crash in the middle of the night because it runs out of memory. If it complains, adjust your memory configuration and then rest easy knowing that everything is properly sized to fit in your container.
The memory calculator will by default size your application for a production deployment, optimizing for performance, not low memory consumption. Again, Java trades higher memory consumption for speed. In practice, this means your container needs at least 1G of RAM.
There is a Paketo RFC to add a low-memory mode to the memory calculator. This would make it easier to run PoC applications and other low-traffic apps that are willing to accept potentially lower performance in exchange for reducing memory consumption (and thereby cost). This RFC has not been implemented as of this post, but we hope to have it implemented in the near future.

memory usage grows until VM crashes while running Wildfly 9 with Java 8

We are having an issue with virtual servers (VMs) running out of native memory. These VMs are running:
Linux 7.2 (Maipo)
Wildfly 9.0.1
Java 1.8.0_151 (different JVMs have different heap sizes, ranging from 0.5G to 2G)
The JVM args are:
-XX:+UseG1GC
-XX:SurvivorRatio=1
-XX:NewRatio=2
-XX:MaxTenuringThreshold=15
-XX:-UseAdaptiveSizePolicy
-XX:G1HeapRegionSize=16m
-XX:MaxMetaspaceSize=256m
-XX:CompressedClassSpaceSize=64m
-javaagent:/<path to new relic.jar>
After about a month, sometimes longer, the VMs start to use all of their swap space and then eventually the OOM-Killer notices that java is using too much memory and kills one of our JVMs.
The amount of memory being used by the java process is larger than heap + metaspace + compressed class space, as revealed by -XX:NativeMemoryTracking=detail.
Are there tools that could tell me what is in this native memory (like a heap dump, but not for the heap)?
Are there any tools, other than jemalloc, that can map Java heap usage to native memory usage outside the heap? I have used jemalloc to try to achieve this, but the graph it draws contains only hex values and not human-readable class names, so I can't really get anything out of it. Maybe I'm doing something wrong, or perhaps I need another tool.
Any suggestions would be greatly appreciated.
You can use jcmd.
Start the application with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail.
Then use jcmd to monitor NMT (native memory tracking):
jcmd <pid> VM.native_memory baseline      # take the baseline
jcmd <pid> VM.native_memory detail.diff   # analyze the change in native memory since the baseline
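A short illustrative session, assuming the application was started with NMT enabled and runs as pid 12345 (the pid and jar name are hypothetical):
java -XX:NativeMemoryTracking=detail -jar app.jar &
jcmd 12345 VM.native_memory baseline
# let the application run for a while, then compare against the baseline:
jcmd 12345 VM.native_memory summary.diff
jcmd 12345 VM.native_memory detail.diff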

Heap Size vs HADOOP_NAMENODE_OPTS at namenode

I am using Apache Hadoop 2.7.1 in an HA cluster.
I needed to increase the heap memory for both NameNodes, so I updated the HADOOP_NAMENODE_OPTS property in hadoop-env.sh to 8 GB:
export HADOOP_NAMENODE_OPTS="-Xmx8192m $HADOOP_NAMENODE_OPTS"
so the heap size on my NameNodes is now 8 GB.
But then I noticed the HADOOP_HEAPSIZE parameter in hadoop-env.sh, which I haven't given any value.
Is setting HADOOP_NAMENODE_OPTS to 8 GB enough, or should we set HADOOP_HEAPSIZE to 8 GB too?
I mean, does HADOOP_NAMENODE_OPTS override HADOOP_HEAPSIZE, or should both be configured, each with its own specific job?
Does the value of HADOOP_NAMENODE_OPTS override the value of HADOOP_HEAPSIZE?
Yes, it does. See https://www.cloudera.com/documentation/enterprise/latest/topics/admin_nn_memory_config.html
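For reference, a sketch of how the two settings interact in hadoop-env.sh on Hadoop 2.x (the 4096 value is purely illustrative): HADOOP_HEAPSIZE sets the default maximum heap, in MB, for the Hadoop daemons, while the daemon-specific opts are appended later on the java command line, so the -Xmx in HADOOP_NAMENODE_OPTS wins for the NameNode:
# hadoop-env.sh
export HADOOP_HEAPSIZE=4096                                    # default max heap (MB) for Hadoop daemons
export HADOOP_NAMENODE_OPTS="-Xmx8192m $HADOOP_NAMENODE_OPTS"  # NameNode-specific; this -Xmx takes precedence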

JMeter issues when running large number of threads

I'm testing with Apache JMeter: I simply access one page of my company's website and turn up the number of users until it reaches a threshold. The problem is that when I get to around 3000 threads, JMeter doesn't run all of them. Looking at the Aggregate Graph,
it only runs about 2,536 of them (this number varies, but is always in that range).
The partial run comes with the following exception in the logs:
01:16 ERROR - jmeter.JMeter: Uncaught exception:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Unknown Source)
at org.apache.jmeter.threads.ThreadGroup.start(ThreadGroup.java:293)
at org.apache.jmeter.engine.StandardJMeterEngine.startThreadGroup(StandardJMeterEngine.java:476)
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:395)
at java.lang.Thread.run(Unknown Source)
This behavior is consistent. In addition, one of the times JMeter crashed in the middle, outputting a file that said:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32756 bytes for ChunkPool::allocate
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:211), pid=10748, tid=11652
#
# JRE version: 6.0_31-b05
# Java VM: Java HotSpot(TM) Client VM (20.6-b01 mixed mode, sharing windows-x86 )
Any ideas?
I tried changing the heap size in jmeter.bat, but that didn't seem to help at all.
The JVM is simply not capable of running so many threads. And even if it were, JMeter would consume a lot of CPU purely on context switching. In other words, beyond some point you are no longer benchmarking your web application but the client machine hosting JMeter.
You have a few choices:
experiment with JVM options, e.g. decrease the default -Xss512K to something smaller (see the sketch after this list)
run JMeter in a cluster
use tools taking radically different approach like Gatling
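A sketch of the first choice, reusing the JVM_ARGS mechanism that the JMeter startup scripts honor (the 256k value is illustrative; measure before and after changing it):
JVM_ARGS="-Xss256k" ./jmeter.sh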
I had a similar issue and increased the heap size in jmeter.bat to 1024M and that fixed the issue.
set HEAP=-Xms1024m -Xmx1024m
For the JVM, if you read the crash report quoted above, it gives you some solutions, among which are:
switch to a 64-bit JVM (> 6u25); with this you will be able to allocate more heap (-Xmx), provided you have the RAM for it
reduce -Xss, e.g.:
-Xss256k
Then, for JMeter, follow the best practices:
http://jmeter.apache.org/usermanual/best-practices.html
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
Finally, ensure you use the latest JMeter version.
Preferably use a Linux OS.
Tune the TCP stack and system limits.
Success will depend on your machine's power (CPU and memory) and your test plan.
If this is not enough (for 3000 threads it should be OK), you may need to use distributed testing, as sketched below.
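A minimal sketch of driving a distributed run from the controller in non-GUI mode, assuming jmeter-server is already running on the listed hosts (the host names and test plan file are placeholders):
jmeter -n -t test.jmx -R host1,host2 -l results.jtl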
Increasing the heap size in jmeter.bat works fine:
set HEAP=-Xms1024m -Xmx1024m
OR
you can do something like the following if you are using jmeter.sh:
JVM_ARGS="-Xms512m -Xmx1024m" jmeter.sh etc.
I ran into this same problem, and the only solution that helped me is this one: https://stackoverflow.com/a/26190804/5796780
To properly run 100k threads on Linux:
ulimit -s 256
ulimit -i 120000
echo 120000 > /proc/sys/kernel/threads-max
echo 600000 > /proc/sys/vm/max_map_count
echo 200000 > /proc/sys/kernel/pid_max
If you are not running as a root shell, pipe through sudo dd instead:
echo 200000 | sudo dd of=/proc/sys/kernel/pid_max
After increasing the -Xms and -Xmx heap sizes, I had to make Java run in 64-bit mode. In jmeter.bat:
set JM_LAUNCH=java.exe -d64
Obviously, you need to run a 64-bit OS and have 64-bit Java installed (see https://www.java.com/en/download/manual.jsp).

JVM tuning for better Solr performance

We are using Solr 1.4 in master/slave mode and want to improve query performance on the slaves.
The biggest issue for us is that the index is about 30G.
The slave server configuration is as follows:
Dell server: 48G memory and 2 CPUs;
64-bit Red Hat Linux;
64-bit JDK 1.6.0_22;
Tomcat 6.18.
Our current JAVA_OPTS is "-Xms2048M -Xmx20480M -server -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ParallelGCThreads=20 -XX:SurvivorRatio=2"
Do you have more suggestion for JAVA_OPTS?
The JAVA_OPTS seem fine. Quite a few questions:
Is your 20GB max heap peaking out? Can you check the memory stats to see what the maximum utilization is?
Is there any heavy processing happening on the slave? CPU stats?
What do the queries look like? Are you using highlighting?
What is the number of results you are returning for a single query?
What do your cache stats say? Are the caches utilized properly?
Is your index optimized?
Do you use warming queries to improve performance on the slow-running queries?
If the above seems fine, consider enabling HTTP caching.
Use the following opts:
-XX:+UseCompressedOops
(this will help in reducing the heap size)
-XX:+DoEscapeAnalysis
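Putting it together, a sketch of the JAVA_OPTS from the question with the two suggested flags appended (heap sizes are left as in the original; adjust them to what your memory stats justify):
JAVA_OPTS="-Xms2048M -Xmx20480M -server -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
           -XX:ParallelGCThreads=20 -XX:SurvivorRatio=2 \
           -XX:+UseCompressedOops -XX:+DoEscapeAnalysis"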
