I installed Grakn 1.0.0 on Debian Wheezy with JDK 8.
While starting the Grakn server, it says:
STORAGE .... STARTED
QUEUE ..... FAILED
I found nothing useful in the log directory.
Any clues about where I could look or where I could ask would be appreciated.
Thanks, Gerald
I had the same problem. After failing to start, it created a file (like hs_err_pid20005.log) that said:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 3720347648 bytes for committing reserved memory.
It seems it's trying to allocate ~3.7 GB! After freeing up some memory, it started fine for me.
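If freeing memory isn't an option, here is a rough sketch of what I'd check. The -Xmx value is illustrative, and where to set it depends on Grakn's startup scripts, so treat this as an assumption rather than the official knob:

    # see how much memory is actually available before starting Grakn
    free -h
    # the ~3.7 GB request matches a large preconfigured max heap; capping it
    # with standard JVM flags in the relevant startup script, for example
    #   -Xms512m -Xmx1g
    # lets the JVM fit on a smaller machine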
I just downloaded a new Docker image. When I try to run it, I get this log on my console:
Setting Active Processor Count to 4
Calculating JVM memory based on 381456K available memory
unable to calculate memory configuration
fixed memory regions require 654597K which is greater than 381456K available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=142597K, -XX:ReservedCodeCacheSize=240M, -Xss1M * 250 threads
Please, how can I fix this?
I am assuming that you have multiple services and that you are starting them all at the same time. The issue is related to the memory that Docker and Spring Boot use.
Try this:
environment:
  - JAVA_TOOL_OPTIONS=-Xmx128000K
deploy:
  resources:
    limits:
      memory: 800m
You have to set the memory limits as above, using the .yaml file syntax.
At startup each service takes a lot of memory, so there is no memory remaining for the rest of the services, and those services start failing with the memory-related message.
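To connect this to the error above: the fixed regions (code cache, metaspace, 250 thread stacks, direct memory) alone require 654597K, more than the 381456K the container has, so no heap size can fit. Either raise the container's memory limit above the fixed-region total, or shrink those regions. A hedged sketch of the second option (the flag values are illustrative, not tuned):

    environment:
      # shrink the JVM's fixed memory regions so they fit the container
      - JAVA_TOOL_OPTIONS=-XX:ReservedCodeCacheSize=64M -Xss512k -XX:MaxMetaspaceSize=100M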
I am trying to deploy the ELK stack on my small server (2 cores / 2 GB RAM), but the stack's containers just keep restarting and never become usable.
The logs printed by those containers show no errors, just a few warnings about deprecated methods.
Logstash log:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
No errors print in the Kibana and Elasticsearch containers.
Here is the docker stack compose file: https://github.com/deviantony/docker-elk/blob/master/docker-stack.yml. I didn't change anything except turning down the heap size.
But if I use docker-compose instead of docker stack deploy in swarm mode, everything goes smoothly.
Also, my CPU jumps to 100% while memory usage is only at 60% when I start up the services.
How can I debug this problem? Thanks in advance.
I think your problem is still caused by lack of memory. I tested the compose stack you show above and checked docker stats: memory usage was fluctuating around 1.8 GB.
You mentioned that you turned the heap size down in your compose file, from ES_JAVA_OPTS: "-Xmx512m -Xms512m" to something lower.
But I still don't recommend cutting the heap size below 256m. Anything lower than that will cause errors like:
[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be xxx, which is larger than the limit of xxx
Any more complicated query or other operation will throw more errors.
Besides, note that you have a single host but are still using Swarm, with the host acting as both manager and worker node. Any other redundant service or application will push your host to the edge of breakdown.
A 2 GB RAM server is not enough to host the whole ELK stack for most common usage. If you insist, try adding mem_limit in your compose file (you don't really need to use v3; v2 is enough for a single-node service) to limit your containers' memory usage, as in the sketch below.
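A minimal compose v2 sketch of that; the image tags, heap sizes, and limits are illustrative, so adapt them to the docker-elk file you linked:

    version: "2"
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
        environment:
          # stay at or above the 256m floor mentioned above
          ES_JAVA_OPTS: "-Xms256m -Xmx256m"
        mem_limit: 700m
      logstash:
        image: docker.elastic.co/logstash/logstash:6.6.0
        environment:
          LS_JAVA_OPTS: "-Xms256m -Xmx256m"
        mem_limit: 700m
      kibana:
        image: docker.elastic.co/kibana/kibana:6.6.0
        mem_limit: 400m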
I keep getting this error when running some of my steps:
Container [pid=5784,containerID=container_1482150314878_0019_01_000015] is running beyond physical memory limits. Current usage: 5.6 GB of 5.5 GB physical memory used; 10.2 GB of 27.5 GB virtual memory used. Killing container.
I searched the web, and people say to increase the memory limits. This error appears after I already increased them to the maximum allowed on the instance type I'm using (c4.xlarge). Can I get some assistance with this error and how to solve it?
Also, I don't understand why MapReduce throws this error instead of just swapping, or even running slower but continuing to work...
NOTE: This error started happening after I changed to a custom output compression, so it should be related to that.
Thanks!
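For what it's worth on the "why doesn't it just swap" point: with a standard YARN setup, the NodeManager enforces each container's physical memory limit (yarn.nodemanager.pmem-check-enabled) and kills any container that exceeds it rather than letting it swap. The limit itself comes from the per-task settings, so a hedged sketch of the usual knobs (the values are illustrative, and java.opts is conventionally kept below memory.mb to leave room for off-heap usage):

    mapreduce.map.memory.mb=7168
    mapreduce.map.java.opts=-Xmx5734m
    mapreduce.reduce.memory.mb=7168
    mapreduce.reduce.java.opts=-Xmx5734m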
I have two questions.
How do I mount the directory for Ambari disk usage?
I started running the TeraGen program, and it does not go beyond 10% of the map tasks. Ambari continuously shows me the message: Capacity Used: [90.69%, 27.7 GB], Capacity Total: [30.5 GB], path=/usr/hdp. I restarted the cluster and restarted Ambari, but it was no use.
What is the way around this?
Well, after some trial and error I found the solution:
Change the location of the log and local directories to a bigger disk, as sketched below.
Remove the old log files from the Ambari server.
Documented here.
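A sketch of the directory change, assuming the space under /usr/hdp is being consumed by YARN's local and log directories (the /data paths are illustrative; these properties are edited on Ambari's YARN configuration page, followed by a service restart):

    yarn.nodemanager.local-dirs=/data/hadoop/yarn/local
    yarn.nodemanager.log-dirs=/data/hadoop/yarn/log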
WebLogic 10.3 gives an out-of-memory error.
I have done the following:
Increased -Xms to 512m
Increased -Xmx to 1024m
Increased the max perm size in setDomainEnv.bat
Is there any other way to resolve this issue? I have a 2 GB system.
It is a production machine, and the log is around 4 GB. When I analysed the log, I found many connection-refused errors.
You'll need to profile your application to find the memory leak. It could be open database connections or other resources not being handled properly.
Just increasing -Xms and -Xmx won't work beyond a point.
Take a heap dump into an HPROF file and analyse it with the Eclipse Memory Analyzer Tool or VisualVM,
or monitor the JVM live using JConsole.
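For reference, a minimal way to capture that HPROF dump on a HotSpot JVM (the PID is a placeholder; find it with jps, and note that a JRockit-based WebLogic install has its own diagnostic tooling instead of jmap):

    # dump the live heap of the WebLogic server process to an HPROF file
    jmap -dump:live,format=b,file=weblogic-heap.hprof <pid>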