How does mesos-slave calculate its available resources? In the web UI, mesos-master shows 2.9 GB of memory available on a slave, but when I run "free -m" on that slave I get:
free -m
             total       used       free     shared    buffers     cached
Mem:          3953       2391       1562          0       1158        771
-/+ buffers/cache:        461       3491
Swap:         4095         43       4052
The --resources parameter was not given to the slave.
I want to know how Mesos calculates the resources a slave offers.
The function that calculates the resources offered by slaves can be seen here; in particular, the memory part is in lines 98 to 114.
If the machine has more than 2 GB of RAM, Mesos will offer total - Gigabytes(1). In your case the machine has ~4 GB, which is why you're seeing ~3 GB in the web UI.
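As a rough illustration, here is a minimal shell sketch of that rule applied to the total from your free -m output; the numbers and the >2 GB branch come from the answer above, while the small-memory branch is handled differently (see the linked source):

total_mb=3953                      # "total" from free -m
if [ "$total_mb" -gt 2048 ]; then
    offer_mb=$((total_mb - 1024))  # total - Gigabytes(1)
else
    offer_mb=$total_mb             # placeholder; the <= 2 GB case differs, see the source
fi
echo "${offer_mb} MB"              # 2929 MB, i.e. the ~2.9 GB shown in the web UI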
When querying my cluster, I noticed these stats for one of the nodes. I am new to Elastic and would like the community's help in understanding what they mean and whether I need to take any corrective measures.
Does the heap usage look on the high side, and if so, how would I rectify it? Any comments on the system memory usage would also be helpful; it feels like it is on the really high side as well.
These are the JVM-level stats:
JVM
Version OpenJDK 64-Bit Server VM (1.8.0_171)
Process ID 13735
Heap Used % 64%
Heap Used/Max 22 GB / 34.2 GB
GC Collections (Old/Young) 1 / 46,372
Threads (Peak/Max) 163 / 147
These are the OS-level stats:
Operating System
System Memory Used % 90%
System Memory Used 59.4 GB / 65.8 GB
Allocated Processors 16
Available Processors 16
OS Name Linux
OS Architecture amd64
As you state that you are new to Elasticsearch, I suggest you go through the cluster API as well as the cat API; you can find the documentation under cluster API and cat API.
This will help you understand the stats in more depth.
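For example, a couple of starting points, assuming Elasticsearch is reachable on localhost:9200 (column names can vary slightly between versions):

# Overall cluster health via the cluster API
curl -s 'localhost:9200/_cluster/health?pretty'

# Per-node heap and RAM usage via the cat API
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max,ram.percent,ram.current,ram.max'

# Full JVM and OS stats via the nodes stats API
curl -s 'localhost:9200/_nodes/stats/jvm,os?pretty'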
We are facing a problem of random high CPU utilization on our production server, which makes the application unresponsive and forces us to restart it. We have done initial diagnostics but couldn't reach a conclusion.
We are using the following configuration for the production server:
Amazon EC2 8gb RAM(m4.large) ubuntu 14.04 LTS
Amazon RDS 2gb RAM(t2.small) Mysql database
Java heap size: -Xms2048M -Xmx4096M
Database Connection Pool size Minimum: 20 and Maximum: 150
MaxThreads 100
Below are two results from the top command.
1) At 6:52:50 PM
KiB Mem : 8173968 total, 2100304 free, 4116436 used, 1957228 buff/cache
KiB Swap: 1048572 total, 1047676 free, 896 used. 3628092 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20698 root 20 0 6967736 3.827g 21808 S 3.0 49.1 6:52.50 java
2) At 6:53:36 PM
KiB Mem : 8173968 total, 2099000 free, 4116964 used, 1958004 buff/cache
KiB Swap: 1048572 total, 1047676 free, 896 used. 3627512 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20698 root 20 0 6967736 3.828g 21808 S 200.0 49.1 6:53.36 java
Note:
Number of Concurrent users - 5 or 6 (at this time)
Number of requests between 6:52:50 PM and 6:53:36 PM - 4
The results show that CPU utilization increased drastically.
Any suggestion or direction that could lead to a solution?
Additionally, the following is the CPU utilization graph for the last week.
Thanks!
Without seeing a stack trace, I'd guess the problem is likely Jetty; there are recently documented bugs in Jetty that cause the behaviour you describe on EC2 (a Google search will turn them up). I would recommend taking a couple of stack trace dumps while the CPU is at 100% to confirm it is Jetty; then, if you look at the Jetty documentation on this bug, you may find that you simply need to update Jetty.
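As a rough sketch of how to capture those dumps (PID 20698 is taken from your top output; jstack ships with the JDK):

# Take a few thread dumps some seconds apart while the CPU is pegged
for i in 1 2 3; do
    jstack -l 20698 > /tmp/jstack-$(date +%H%M%S).txt
    sleep 10
done

# Per-thread CPU usage; the busiest TID, converted to hex, can be matched
# against the nid= field in the jstack output to find the hot thread
top -H -p 20698 -b -n 1 | head -n 20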
Is it possible to have performance issues in Docker?
I know that with VMs you have to specify how much RAM you want to use, etc.
But I don't know how that works in Docker. It's just running. Will it automatically use the RAM it needs, or how does this work?
Will it automatically use the RAM it needs, or how does this work?
No, by default it will use the minimum memory it needs, up to a limit.
You can use docker stats to see it against a running container:
$ docker stats redis1 redis2
CONTAINER  CPU %  MEM USAGE / LIMIT   MEM %  NET I/O           BLOCK I/O
redis1     0.07%  796 KB / 64 MB      1.21%  788 B / 648 B     3.568 MB / 512 KB
redis2     0.07%  2.746 MB / 64 MB    4.29%  1.266 KB / 648 B  12.4 MB / 0 B
When you use docker run, you can specify those limits with Runtime constraints on resources.
That includes RAM:
-m, --memory=""
Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
Under normal circumstances, containers can use as much of the memory as needed and are constrained only by the hard limits set with the -m/--memory option.
When memory reservation is set, Docker detects memory contention or low memory and forces containers to restrict their consumption to a reservation limit.
By default, kernel kills processes in a container if an out-of-memory (OOM) error occurs.
To change this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option.
Note: the upcoming (1.10) docker update command might include dynamic memory changes. See docker update.
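As a rough sketch of the flags described above (the values are placeholders, mirroring the redis1/redis2 containers from the stats example):

# Hard 64 MB limit plus a softer 32 MB reservation that kicks in under memory pressure
docker run -d --name redis1 -m 64m --memory-reservation 32m redis

# Only disable the OOM killer on containers that also have -m set
docker run -d --name redis2 -m 64m --oom-kill-disable redis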
By default, docker containers are not limited in the amount of resources they can consume from the host. Containers are limited in what permissions / capabilities they have (that's the "container" part).
You should always set constraints on a container, for example, the maximum amount of memory a container is allowed to use, the amount of swap space, and the amount of CPU. Not setting such limits can potentially lead to the host running out of memory, and the kernel killing off random processes (OOM kill), to free up memory. "Random" in that case, can also mean that the kernel kills your ssh server, or the docker daemon itself.
Read more on constraining resources on a container in Runtime constraints on resources in the manual.
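For example, a minimal sketch of starting a container with memory, swap, and CPU constraints; the image name and values are arbitrary placeholders:

# --memory:      hard RAM limit
# --memory-swap: RAM + swap; set it equal to --memory to disallow swapping
# --cpu-shares:  relative CPU weight (default is 1024)
docker run -d --name myapp \
    --memory 256m \
    --memory-swap 512m \
    --cpu-shares 512 \
    myimage:latest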
I was testing mesos cgroups isolation. To see what kind of error gets thrown.
I ran the below shell program with marathon. Assigned 1 MB memory and 1 CPU.
#!/bin/sh
temp=a
while :
do
temp=$temp$temp
echo ${#temp}
sleep 1
done
A single character takes 1 B of space, so the program above should fail once the length of the temp string reaches about 1 MB. But the task seems to get killed at a random point; sometimes it is killed at length 1048576, sometimes at 2097152 or 4194304.
Ideally, since 1 MB is the limit, it should have stopped when the length is 524288, because the next doubling would exceed 1 MB.
Additional info -
Slave is run with --isolation='cgroups/cpu,cgroups/mem'
Mesos version - 0.25
The variance you are seeing can be explained with the following:
The amount of memory taken up by your script is not entirely deterministic, as it depends on the implementation of the shell interpreter as well as the size of your system's shared libraries (i.e. the parts of those libraries loaded into your program's resident set).
A 1 MB task in Mesos is accompanied by 32 MB for the executor. Because the executor requires slightly less than 32 MB, you will have slightly more than 1 MB available for your task.
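If you want to verify this on the slave, a rough sketch (the cgroup mount point, the mesos hierarchy name, and the container ID are assumptions about a default setup):

# List the containers Mesos has created cgroups for, then read the memory limit of yours
ls /sys/fs/cgroup/memory/mesos/
cat /sys/fs/cgroup/memory/mesos/<container-id>/memory.limit_in_bytes
# Expect roughly 33 MB (1 MB task + 32 MB executor) rather than 1 MB, which is why
# the string can grow well past 1 MB before the OOM killer fires.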
I have a Raspberry Pi B+ model.
The specifications say it has 512 MB of RAM, but when I test it using free -m, it displays only 247 MB.
Please tell me the reason for that.
Thank you.
pi@raspberrypi ~/Desktop/song $ free -m
             total       used       free     shared    buffers     cached
Mem:           247        210         36          0         15        103
-/+ buffers/cache:         91        155
Swap:           99          0         99
Set your GPU memory down:
sudo raspi-config
you can select
8 Advanced Options Configure advanced settings
then
A3 Memory Split Change the amount of memory made available to the GPU
then you can set it to the expected value.
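If you prefer to check or change the split without the menu, a rough sketch (the gpu_mem value is just an example):

# Show how the 512 MB is currently split between the ARM (Linux) and the GPU
vcgencmd get_mem arm
vcgencmd get_mem gpu

# The split can also be set directly in /boot/config.txt, e.g.
#   gpu_mem=64
# then reboot; free -m should report correspondingly more memory afterwards.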