I am trying to deploy ELK on my small server (2 cores / 2 GB RAM), but the ELK stack containers just keep restarting and never become usable.
The logs printed by those containers show no errors, just a few warnings about deprecated methods.
Logstash log:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
No errors are printed by the Kibana or Elasticsearch containers.
Here is the docker stack compose file: https://github.com/deviantony/docker-elk/blob/master/docker-stack.yml. I didn't change anything except turning down the heap size.
However, if I use docker-compose instead of docker stack deploy in swarm mode, everything runs smoothly.
Also, CPU usage jumps to 100% while memory usage stays at only 60% when I start up the services.
How can I debug this problem? Thanks in advance.
I think your problem is indeed caused by lack of memory. I tested the compose stack you show above and checked docker stats; memory usage was fluctuating around 1.8 GB.
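To reproduce the check with the standard Docker CLI:

    # One-shot snapshot of per-container CPU and memory usage
    docker stats --no-stream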
You mentioned that you turned down the heap size in your compose file, from ES_JAVA_OPTS: "-Xmx512m -Xms512m" to something lower.
Still, I don't recommend cutting the heap below 256m; anything lower will cause errors like:
[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be xxx, which is larger than the limit of xxx
Any more complicated query or other operation will throw even more errors.
Besides, note that you have a single host that is acting as both swarm manager and worker node. Any additional service or application will push your host to the edge of breakdown.
A 2 GB RAM server is not enough to host the whole ELK stack for most common usage. If you insist, try adding mem_limit to your compose file (you don't really need v3; v2 is enough for a single-node service) to cap each container's memory usage.
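A minimal sketch of the v2-style limit (service name and values are illustrative):

    version: "2.4"
    services:
      elasticsearch:
        environment:
          ES_JAVA_OPTS: "-Xms256m -Xmx256m"   # keep the heap at or above 256m, per the note above
        mem_limit: 700m                       # hard cap on the container's memory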
I just downloaded a new Docker image. When I try to run it, I get this log on my console:
Setting Active Processor Count to 4
Calculating JVM memory based on 381456K available memory
unable to calculate memory configuration
fixed memory regions require 654597K which is greater than 381456K available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=142597K, -XX:ReservedCodeCacheSize=240M, -Xss1M * 250 threads
Please, how can I fix this?
I am assuming that you have multiple services and you are starting them all at the same time. The issue is related to the memory that Docker and Spring Boot use.
Try this:
environment:
  - JAVA_TOOL_OPTIONS=-Xmx128000K
deploy:
  resources:
    limits:
      memory: 800m
You have to set the memory limits as shown in the .yaml syntax above.
At startup each service takes a lot of memory, so there is nothing left for the rest of the services, and the others start failing with memory-related messages.
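For context, a minimal sketch of where those keys sit in a full compose service (service and image names are hypothetical):

    version: "3.7"
    services:
      orders-service:                          # hypothetical service name
        image: example/orders-service:latest   # hypothetical image
        environment:
          - JAVA_TOOL_OPTIONS=-Xmx128000K      # cap the JVM heap below the container limit
        deploy:
          resources:
            limits:
              memory: 800m                     # hard cap enforced by Docker (swarm mode or --compatibility)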
I am using the Elasticsearch sink connector to insert data into an ES 7.2 instance hosted on a VM.
I am getting this: Elasticsearch max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Is it possible to ignore bootstrap checks?
How can I increase the virtual memory for Elasticsearch in the Docker container?
Bootstrap checks inspect a variety of Elasticsearch and system settings. If you're in development mode, any bootstrap checks that fail appear as warnings in the Elasticsearch log. If you're in production mode, failed bootstrap checks will cause Elasticsearch to refuse to start.
Elasticsearch's mode is configured implicitly: as soon as you configure a network setting like network.host, Elasticsearch assumes that you are moving to production and upgrades the above warnings to exceptions.
Regarding your specific case: you need to increase the setting on the host machine, not inside Docker, by running this command and then restarting your containers:

    sudo sysctl -w vm.max_map_count=262144
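To make the change survive reboots (assuming a standard Linux host with /etc/sysctl.conf):

    # Persist the setting, then reload sysctl immediately
    echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p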
By the way, it's not recommended, but if you are running a single node you can skip the bootstrap checks either by not binding transport to an external interface, or by binding transport to an external interface and setting the discovery type to single-node.
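A minimal sketch of that escape hatch in a compose file (service name and image tag are assumptions):

    services:
      elasticsearch:                       # hypothetical service name
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        environment:
          - discovery.type=single-node     # single-node discovery skips the bootstrap checks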
I am trying to set up a development stack as similar as possible to the docker swarm production deployment. Unfortunately, some members of the team prefer to work on macOS, which has severe performance issues when it comes to mounting and syncing volume data with osxfs. I came across a Docker blog entry about the consistency setting "cached" for a docker-compose setup on Mac, e.g.:
volumes:
- /www:/var/www:cached
This does not work in swarm mode, however. I get the following error on docker stack deploy:
invalid spec: webapp-volume:/var/www:cached: unknown option: cached
Is this an option that was removed in v3, is it supposed to be used differently, or is there a better way to improve I/O performance on macOS with swarm mode?
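For completeness, a sketch of the long-form mount syntax (compose file format 3.2+), which carries the same consistency field; I don't know whether docker stack deploy honors it:

    volumes:
      - type: bind
        source: /www
        target: /var/www
        consistency: cached   # long-form equivalent of the short ":cached" flag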
I have recently migrated from SonarQube 3.7.2 to SonarQube 5.1. The update was successful and I was able to run analyses.
However, now I cannot reach the server, and from the log it seems Elasticsearch is slowly eating away my disk space.
I tried to restart the server and to delete the data/es directory, but nothing helped.
sonar.log is full of these lines:
...
2015.05.18 00:00:13 WARN es[o.e.c.r.a.decider] [sonar-1431686361188] high disk watermark [10%] exceeded on [Jbz_O0pFRKecav4NT3DWzQ][sonar-1431686361188] free: 5.6gb[3.8%], shards will be relocated away from this node
2015.05.18 00:00:13 INFO es[o.e.c.r.a.decider] [sonar-1431686361188] high disk watermark exceeded on one or more nodes, rerouting shards
...
There are just a few Java projects, but two of them are around a couple of million lines of code (LOC).
Your server does not have enough available disk space to feed its internal Elasticsearch indices.
Note that an external volume can be used by setting the property sonar.path.data (see conf/sonar.properties).
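A minimal sketch (the mount point is hypothetical): check which disk is full, then point the index data at a larger volume:

    # Check which mount is running out of space
    df -h

    # conf/sonar.properties -- move the Elasticsearch data onto a larger volume
    sonar.path.data=/mnt/big-disk/sonar/data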
WebLogic 10.3 gives an out-of-memory error.
I have done the following:
Increased -Xms to 512m
Increased -Xmx to 1024m
Increased the max perm size in setDomainEnv.bat (the relevant lines look roughly like the sketch below)
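For reference, the memory lines in setDomainEnv.bat look roughly like this (values simplified; USER_MEM_ARGS overrides the script's computed MEM_ARGS):

    rem setDomainEnv.bat
    set USER_MEM_ARGS=-Xms512m -Xmx1024m -XX:MaxPermSize=512m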
Is there any other way to resolve this issue? I have a 2 GB system.
It is a production machine and the size of the log is around 4 GB. When I analysed the log, I found many connection-refused errors.
You'll need to profile your application to find the memory leak. It could be open database connections or other resources not being handled properly.
Just increasing Xms and Xmx won't work beyond a point.
Take a heap dump into an HPROF file and analyse it with the Eclipse Memory Analyzer Tool or VisualVM, or monitor the JVM live with JConsole.
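A minimal sketch of capturing the dump with the stock JDK tooling (the pid is whatever jps reports for the WebLogic JVM):

    # Find the WebLogic JVM's process id, then dump its heap to an HPROF file
    jps -l
    jmap -dump:format=b,file=heap.hprof <pid>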