How to turn off Elasticsearch JVM Garbage Collection logs - elasticsearch

After settling into our ELK stack log aggregation setup over the past few months, I have noticed that a significant percentage of the logs we are persisting come from Elasticsearch garbage collection.
I have tried to exclude these logs in the Filebeat configuration, but without success. Is there a way via configuration to turn this logging off until I need it? Or a way to ignore these log files that I am not currently using?

Here is a quote from the official Elasticsearch documentation:
By default, Elasticsearch enables garbage collection (GC) logs. These are configured in jvm.options and output to the same default location as the Elasticsearch logs. The default configuration rotates the logs every 64 MB and can consume up to 2 GB of disk space.
You can reconfigure JVM logging using the command line options described in JEP 158: Unified JVM Logging. Unless you change the default jvm.options file directly, the Elasticsearch default configuration is applied in addition to your own settings. To disable the default configuration, first disable logging by supplying the -Xlog:disable option, then supply your own command line options. This disables all JVM logging, so be sure to review the available options and enable everything that you require.
For more details: GC logging settings
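For example, a minimal jvm.options override along the lines of the quoted documentation might look like this (whether you edit jvm.options directly or drop a file under jvm.options.d/ depends on your Elasticsearch version, so treat this as a sketch):
# turn off all JVM logging, including the default GC logging
-Xlog:disable
# optionally keep JVM warnings on stderr so real problems still surface
-Xlog:all=warning:stderr:utctime,level,tags
After changing the file, restart the node; GC log files already on disk can then be deleted or simply left out of the Filebeat configuration.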

Related

How to debug input records in Elasticsearch cluster?

Where can I find all possible keys for logger.org.elasticsearch.*?
For example:
"logger.org.elasticsearch.transport"
"logger.org.elasticsearch.discovery"
I don't have access to the source of the logs agent (so I cannot configure debugging there). I would like to debug raw input records on the Elasticsearch side, but I can't find which keys I have to enable.
And of course I am aware of the impact of debug logging on cluster performance.
I've already read this question.
How to set up logging level in Elasticsearch?
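For reference, a logger key such as the ones above is normally applied either statically in elasticsearch.yml or dynamically through the cluster settings API; a sketch, where the chosen package and level are only examples:
# elasticsearch.yml (requires a node restart)
logger.org.elasticsearch.transport: debug
# or at runtime via the cluster settings API, no restart needed:
PUT /_cluster/settings
{ "transient": { "logger.org.elasticsearch.transport": "debug" } }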

What is the most beneficial way to gather server hardware utilization, app logs, and app JVM metrics using the Elastic Stack?

Besides the standard ELK goal of gathering application log data, I want to leverage this stack for advanced data collection such as JVM metrics (via JMX) and the host's CPU/RAM/disk/network utilization.
The most suitable option I could think of is Metricbeat, but I doubt whether Metricbeat alone is enough for the purposes described above.
Since I am aiming at a minimal stack of things to configure, will Metricbeat-Elasticsearch-Kibana be enough for collecting app logs, app JVM metrics, and the host's hardware utilization, or are there more suitable alternatives?
UPDATE
Oh, I see now that I also need Filebeat besides Metricbeat for gathering app logs.
Is there any out-of-the-box single solution that combines the Filebeat and Metricbeat agents?
Currently Filebeat and Metricbeat are separate binaries and you need to run both:
Filebeat to collect your logs (and potentially parse them with an Elasticsearch ingest node).
Metricbeat with the system module for CPU/RAM/disk/network, and we also have a JMX / Jolokia module for that functionality.
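To illustrate, a minimal metricbeat.yml covering both needs could look roughly like the following; the hosts, the Jolokia port and the MBean mapping are placeholders for your environment, not a recommended setup:
metricbeat.modules:
  # host-level CPU/RAM/disk/network metrics
  - module: system
    metricsets: ["cpu", "memory", "network", "diskio", "filesystem"]
    period: 10s
  # JVM metrics scraped from a Jolokia agent attached to the application
  - module: jolokia
    metricsets: ["jmx"]
    period: 10s
    hosts: ["localhost:8778"]
    namespace: "app_jvm"
    jmx.mappings:
      - mbean: "java.lang:type=Memory"
        attributes:
          - attr: HeapMemoryUsage
            field: memory.heap
output.elasticsearch:
  hosts: ["localhost:9200"]
Filebeat would then run alongside this with its own filebeat.yml pointing at your application log files.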

Tuning logstash performance

I use Logstash to connect Elasticsearch and ntopng (a flow collector).
But there are many dropped flows, so I think the bottleneck is Logstash, since my machine has 20 GB of RAM and 8 CPU cores.
However, I am not sure which parameters I should edit in logstash.yml to tune Logstash.
Thank you in advance!
It seems like one step towards a solution to your problem is to set up decent Logstash monitoring. One good way to achieve this is by installing X-Pack, which provides Logstash monitoring in the X-Pack monitoring UI in Kibana.
Please refer to https://www.elastic.co/guide/en/logstash/6.1/logstash-monitoring-ui.html for more information about the Logstash monitoring UI and https://www.elastic.co/guide/en/logstash/6.1/installing-xpack-log.html for information on how to install and configure X-Pack for Logstash.
Apart from Logstash monitoring, you should of course also monitor the resource usage on the systems you are running Logstash on. There are several ways to do this, for example with active monitoring solutions such as Nagios, or passive monitoring solutions such as Elasticsearch with Metricbeat.
Once you know what the bottleneck is, you can go through https://www.elastic.co/guide/en/logstash/6.1/performance-troubleshooting.html and tune Logstash settings or, if necessary, add more Logstash instances to distribute the load.
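Once the monitoring shows where the pipeline saturates, the usual knobs live in logstash.yml; the values below are only illustrative starting points for an 8-core machine, not tuned recommendations:
# logstash.yml
pipeline.workers: 8        # often set to the number of CPU cores
pipeline.batch.size: 250   # larger batches trade latency for throughput
pipeline.batch.delay: 50   # ms to wait before flushing an undersized batch
The JVM heap given to Logstash (-Xms/-Xmx in its jvm.options) is worth checking as well, since an undersized heap can throttle throughput.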

How to debug a Flink application for memory and garbage collection?

I'm using Flink 1.1.4 and have added to flink-conf.yaml the configuration parameters for memory debugging, as stated in Memory and Performance Debugging:
taskmanager.debug.memory.startLogThread: true
taskmanager.debug.memory.logIntervalMs: 1000
After restarting Flink, I'm seeing the new parameters added to the Job Manager interface, but I'm unable to see any new logs.
Any idea about what I may be missing?
It seems this was resolved in this mailing list.
Key extracts, including one that confirms the exact settings were tested successfully:
That is exactly the right way to do it. Logging has to be at least INFO and the parameter "taskmanager.debug.memory.startLogThread" set to true. The log output should be under "org.apache.flink.runtime.taskmanager.TaskManager".
Do you see other outputs for that class in the log?
Make sure you restarted the TaskManager processes after you changed the config file.
Someone else just used the memory logging with the exact described settings - it worked.
There is probably some mix-up; you may be looking into the wrong log file, or may be setting the value in a different config...
How do you start the Flink cluster? If it's a standalone cluster and you don't use a shared directory, then you'll find the log of the TaskManager on the machine on which the TaskManager runs. If you use YARN, you can activate log aggregation to retrieve the log easily after the job has finished.
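For reference, the logging-level requirement from the thread corresponds to lines like these in Flink's conf/log4j.properties; the appender name follows the default file shipped with Flink, so check it against your own copy:
log4j.rootLogger=INFO, file
# keep the TaskManager class that emits the memory log lines at INFO or below
log4j.logger.org.apache.flink.runtime.taskmanager.TaskManager=INFO
With that in place, the memory usage lines should appear in the TaskManager's log file on the machine where that TaskManager runs.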

Disable ElasticSearch logs in Liferay 7

I've started using Liferay v7, and am getting a lot of the following log messages:
17:14:12,265 WARN [elasticsearch[Mirage][management][T#1]][decider:157] [Mirage] high disk watermark [90%] exceeded on [fph02E6ISIWnZ5cxWw_mow][Mirage][/Users/randy/FasterPayments/src/eclipse/com.rps.portal/com.rps.portal.backoffice/bundles/data/elasticsearch/indices/LiferayElasticsearchCluster/nodes/0] free: 46gb[9.9%], shards will be relocated away from this node
To be honest, I'd rather not spend time learning about Elasticsearch right now. Is it possible to simply disable Elasticsearch within a Liferay 7 dev environment, or take some other action to remove these log messages?
Go to Control Panel / Configuration / System Settings / Foundation / Elasticsearch.
Under "Additional Configurations" enter
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 30gb
cluster.routing.allocation.disk.watermark.high: 20gb
or whatever are appropriate values for your system (there is value in being warned that the disk is almost full).
Save & Restart (the values seem not to be picked up at runtime).
Liferay needs an index/search engine, such as Elasticsearch or Solr. It defaults to Elasticsearch in DXP. It makes no sense to disable it.
The warnings tell you that you've hit your configured disk threshold for shard allocation. You can change this setting in your elasticsearch.yml (cluster.routing.allocation.disk.watermark.high).
If the logs annoy you, you can change your logging settings. I'm not sure if it's still valid in DXP, but have a look at https://dev.liferay.com/es/discover/deployment/-/knowledge_base/6-2/liferays-logging-system.
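If you prefer to keep the warnings meaningful but tune when they fire, the watermark settings mentioned above can also go straight into elasticsearch.yml; the percentages here are only illustrative, the defaults typically being 85% (low) and 90% (high):
# elasticsearch.yml – pick thresholds that fit your disk
cluster.routing.allocation.disk.watermark.low: 90%
cluster.routing.allocation.disk.watermark.high: 95%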
