Kibana 4 RAM consumption - kibana-4

I installed Kibana 4.3.0 on my VPS, which has one CPU core and 2 GB of RAM and runs Ubuntu 14.04.3.
Kibana works and my dashboard behaves as expected, but unfortunately it consumes so much RAM that the VPS starts swapping and the system load becomes very high.
There is not much data going into ES (about 192 temperature entries per day), so Kibana 4 should not need much memory.
Is there any way to configure Kibana 4 to consume less RAM, e.g. 256 MB at most?

In this thread I found a solution for the memory consumption: https://github.com/elastic/kibana/issues/5170
It seems to be a Node.js issue. Changing the last line of the bin/kibana start script to
exec "${NODE}" --max-old-space-size=100 "${DIR}/src/cli" ${@}
as suggested in the thread helped.

Related

Performance issue in Odoo 15 on AWS EC2 with a 2-core CPU and 4 GB RAM

We recently upgraded from Odoo 12 to Odoo 15, both community versions, and are now experiencing performance issues on our AWS EC2 server.
The server has 2 CPU cores and 4 GB of RAM, and we are using Apache2 as a reverse proxy. With around 9-10 concurrent users, we are seeing maximum CPU utilization of 25-30% during peak hours and 1.8 GB of RAM usage.
Enabling workers results in JavaScript console errors due to missing dependencies, but without workers the site loads slowly.
Can you advise on how to resolve this performance bottleneck?
I tried adding parameters to the Odoo configuration file based on the performance-related documentation on the Odoo site:
CPU limit, hard memory limit, soft memory limit, proxy mode, workers

What are the resource requirements to run Logstash in a k8s pod?

I am running an ELK stack on a Raspberry Pi that runs a Kubernetes cluster, and I noticed it does not have the resources to run all three containers. I learned that with Kubernetes you can put limits and requests on your CPU and memory resources, and it got me thinking: what are the minimum requirements? To me, applications are greedy, so is there a way to cut down the requirements for Logstash and leave more resources for Elasticsearch?
Right now I am running a Raspberry Pi 4 with 4 GB of RAM and a 32 GB disk.
If I can put minimum and maximum requirements on each container, it will let me manage the resources better. The thing I noticed, though, is that there is no guidance, as far as I can tell, on the minimum requirements for the different containers.
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html
The above link, I believe, tells me that CPU consumption is greedy, and that the default memory for Elasticsearch and Kibana is 2Gi and 1Gi respectively. It mentions nothing about Logstash, though, or whether there is a minimum requirement for CPU.
I was not sure whether I should set each ELK container to 1 CPU and 1Gi of RAM. I can try it and see if it functions, but the idea of it being throttled down makes me curious what the happy medium would be.
Logstash is not part of Elastic Cloud, which is why there is no mention of it in the Elastic Cloud on Kubernetes documentation link that you shared.
Logstash is far more CPU-bound than memory-bound, but how much memory it needs depends entirely on your pipelines.
In Logstash, memory usage depends on the pipelines, the batch size, the filters used, the number of events per second, the queue type, etc. If you are running a dev or lab environment, I think you can try giving Logstash 1 CPU and 512 MB of RAM and see if that fits your use case.
But I would say that 4 GB is pretty small for a full stack, since you need memory for the applications and still have to leave some memory for the system.
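For illustration, here is a minimal sketch of what that 1 CPU / 512 MB suggestion could look like as a Kubernetes container spec; the pod name, image tag and LS_JAVA_OPTS heap size are placeholders for this example, not values from the thread:

apiVersion: v1
kind: Pod
metadata:
  name: logstash-test                 # placeholder name
spec:
  containers:
    - name: logstash
      image: docker.elastic.co/logstash/logstash:7.17.0   # example tag
      env:
        - name: LS_JAVA_OPTS
          value: "-Xms256m -Xmx256m"  # keep the JVM heap well below the container limit
      resources:
        requests:
          cpu: "1"
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 512Mi

Capping the JVM heap alongside the container limit matters: Logstash's default heap is larger than 512 MB, so without an explicit heap setting a 512Mi limit would likely be exceeded.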

What is the relationship between Elasticsearch ES_JAVA_OPTS and Kubernetes resource limits?

So I have an Elasticsearch cluster running inside Kubernetes.
The machine it is running on has 30 GB of RAM and 8 cores.
Now, according to the rule of thumb, 50% of the RAM is what we set as the heap via ES_JAVA_OPTS, and the rest is left for file caching.
Here that would be 15 GB.
Also, in the Helm chart we have the resource requirements specified like below:
resources:
  limits:
    cpu: 8
    memory: 15Gi
  requests:
    cpu: 8
    memory: 15Gi
My question is whether the 50% of RAM refers to the host machine (which has 30 GB) or to the limit specified in the Helm chart (15 GB).
Can someone explain how Kubernetes accounts for the RAM usage?
If it refers to the host, and file caching is not counted as utilisation of the deployed application, we are OK. But if it counts against the resource limits, I need to increase the limit to 30 GB.
Edit:
The question is this: if one Elasticsearch node uses 50% of RAM as heap and 50% for file caching, and I set the heap to 15 GB (50% of the RAM) on a 30 GB machine, should I set the resource limit in the deployment template to around 15 GB, which is what the heap requires, or closer to 30 GB (say 28 GB), so that Elasticsearch can also cache files as the rule expects?
This is a concern because if the pod exceeds the limit specified in the template at any given moment, Kubernetes restarts the pod.
So, in other words, I want to know whether the RAM used for file caching counts towards the overall memory usage of the pod or not.
Note: I am using instance storage as the primary storage for the ES data, as it is extremely fast compared to EBS.
Conclusion:
Keep the heap at half of the RAM in the system, and at half of the amount mentioned in the resource limit (if any).
I am not an expert in k8s and Docker, but my understanding is that a Docker container uses the host's resources, and with a resource limit you can put a hard cap on the resources it can consume.
If you put a resource limit of 15 GB, then overall your Docker container can consume 15 GB of host RAM. Whether it shares the filesystem cache with the host or not depends on how you have configured your Docker volume.
A Docker container can either share the filesystem with the host using a bind volume, or have its own data volume (which is ephemeral and not suited for ES, since it is a stateful application). In the first case it should share the filesystem cache with the host, and you should not need to increase the resource limit further (recommended, since ES is stateful). In the second case, as the container uses its own filesystem, you have to allocate RAM for its filesystem cache and increase the limit towards 30 GB, but you also have to leave some room for the host OS.
A container will always see the node's memory instead of its own. In Kubernetes, even though you set a memory limit for a container, the container itself is not aware of this limit.
This affects applications that look up the memory available on the system and use that information to decide how much memory they want to reserve.
This is why you set the JVM heap size explicitly. Without it, the JVM will pick its maximum heap size based on the host/node's total memory instead of the memory actually available to the container (the limit you declared).
Check out this article about how limits work in k8s.
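To make that relationship concrete, here is a minimal, illustrative container spec (not the asker's actual Helm chart) where the heap passed via ES_JAVA_OPTS is kept at roughly half of the container's memory limit, so the JVM does not size itself from the node's full 30 GB; the image tag and the 8Gi figure are assumptions for the example:

containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1   # example tag
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms4g -Xmx4g"   # ~50% of the container limit below, not of the node's RAM
    resources:
      requests:
        cpu: 2
        memory: 8Gi
      limits:
        cpu: 2
        memory: 8Gi

With this layout, roughly half of the limit remains inside the pod for off-heap usage and file caching, which matches the conclusion above: size the heap against the limit, not against the host.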

Elasticsearch load is not distributed evenly

I am facing a strange issue with Elasticsearch. I have 8 nodes with the same configuration (16 GB RAM and an 8-core CPU).
One node, "es53node6", always has a high load, as shown in the screenshot below. Also, 5-6 nodes kept stopping automatically yesterday, every 3-4 hours.
What could be the reason?
ES version : 5.3
There can be a fair number of reasons. Maybe all the data is stored on that node (which should not happen by default), or maybe you are sending all requests to this single node.
Also, there is no built-in automatic stopping of Elasticsearch. You can configure Elasticsearch to stop the JVM process when an out-of-memory exception occurs, but this is not enabled by default, as it relies on a more recent JVM.
You can use the hot threads API to check where the CPU time is spent in Elasticsearch.

The best memory configuration for Elasticsearch

I have one Linux server with 128 GB of memory and 32 CPU cores. I want to run an Elasticsearch instance on this server, which is used exclusively for ES. How much memory should I configure for ES, and how can I get the best performance out of it? Is the server too luxurious for ES? Thanks!
I suggest you run two ES instances on the server. Since your Linux server is pretty powerful, setting the ES heap to 60 GB or 80 GB may run into GC problems. Try running two or three ES instances on one server and monitor the CPU and memory usage. By the way, change the HTTP port of ES when running multiple nodes on one server, for example as sketched below.
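A minimal sketch of what two elasticsearch.yml files could look like for two instances sharing one host; the cluster name, node names, data paths and ports here are placeholders, not values from the answer:

# instance 1 - config/elasticsearch.yml
cluster.name: my-cluster                     # placeholder cluster name
node.name: node-1
path.data: /var/lib/elasticsearch/node-1     # separate data directory per instance
http.port: 9200

# instance 2 - config/elasticsearch.yml
cluster.name: my-cluster
node.name: node-2
path.data: /var/lib/elasticsearch/node-2
http.port: 9201

Each instance would also get its own heap setting, for example around 30 GB or less, which keeps each JVM below the ~32 GB compressed-oops threshold and leaves the rest of the 128 GB for the filesystem cache.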
