Spring Boot app deployed in Marathon/Mesos - spring-boot

I have packaged the Eureka server provided by this repository https://github.com/spring-cloud-samples/eureka and tried to launch it on a cluster installation managed with Marathon/Mesos on which constraints on memory are set.
Nevertheless, if I start the app in Marathon with 512MB it takes 100 seconds to start (each slave has 32GB of RAM) instead of 12 seconds on my Mac (16GB of RAM).
Even configuring Xms and Xmx does not solve the issue. Using 256MB is even worse.

We have found this, but we are not sure it can be applied to the Tomcat launched by Spring Boot:
# Inspired by
# https://community.alfresco.com/docs/DOC-4914-jvm-tuning#w_generalcase
# https://stackoverflow.com/a/33985214

# Total memory in KB
TOTAL_MEM_KB=`free | awk '/^Mem:/{print $2}'`
# Processor count
CPU_COUNT=`grep -c '^processor' /proc/cpuinfo`
# Take half of total memory for the Xmx setting, in GB
XMX_GB=`expr ${TOTAL_MEM_KB} / 1000 / 1000 / 2`
# MaxPermSize hardcoded to 256M
MAX_PERM_MB="256"
# G1 heap region size (in MB) is the lower integer round of Xmx/2048
G1_HEAP=`expr ${XMX_GB} \* 1000 / 2048`
# ParallelGCThreads is half the CPU count (rounded down)
PARA_GC=`expr ${CPU_COUNT} / 2`
# ConcGCThreads is half of ParallelGCThreads
CONC_GC=`expr ${PARA_GC} / 2`
JAVA_MEM="-Xmx${XMX_GB}g -XX:MaxPermSize=${MAX_PERM_MB}M -XX:+UseG1GC -XX:MaxGCPauseMillis=1000 -XX:G1HeapRegionSize=${G1_HEAP}m -XX:ParallelGCThreads=${PARA_GC} -XX:ConcGCThreads=${CONC_GC}"
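For the Marathon case specifically, the point is to derive the JVM flags from the memory Marathon actually grants the task rather than from the 32GB slave. A minimal launcher sketch, assuming Marathon's injected MARATHON_APP_RESOURCE_MEM environment variable and a fat jar named eureka-server.jar (both assumptions to adapt to the actual app definition):
#!/bin/sh
# Sketch only: size the heap from the Marathon memory limit, leaving headroom
# for metaspace, thread stacks and other off-heap usage.
MEM_MB=${MARATHON_APP_RESOURCE_MEM%.*}   # Marathon injects e.g. "512.0"
HEAP_MB=$((MEM_MB / 2))
exec java -Xms${HEAP_MB}m -Xmx${HEAP_MB}m -jar eureka-server.jar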

Related

How to increase heap memory of Elasticsearch in CentOS 7?

When we run Elasticsearch on the server we face a broken-pipe issue:
"org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe"
We fixed it by increasing the Elasticsearch heap as follows.
First, check the current Elasticsearch heap size:
ps aux | grep elasticsearch
"-Xms1g -Xmx1g"
Increase the heap size:
vi /etc/sysconfig/elasticsearch
# Heap size defaults to 256m min, 1g max
# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g
ES_HEAP_SIZE=3g
Check the new heap size:
ps aux | grep elasticsearch
"-Xms3g -Xmx3g"

Bad disk performance after moving from Ubuntu to CentOS 7

A relatively old Dell R620 server (32 cores / 128GB RAM) had been working perfectly for years with Ubuntu. Plain OS install, no virtualization.
2 system disks in mirror (XFS)
6 RAID 5 disks for /var (XFS)
The server is used for a nightly check of a MySQL Xtrabackup file.
Before the reformat and move to CentOS 7 the process would finish by 08:00; now it is still running at noon.
99% of the job is opening a large tar.gz file.
htop: there are only two processes doing anything:
1. gzip -d : about 20% CPU
2. tar zxf Xtrabackup.tar.gz : about 4-7% CPU
iotop: it's steady at around 3 M/s (read) / 20-25 M/s (write), which is about 25% of what I would expect at minimum.
Memory : Used : 1GB of 128GB
Server is fully updated both OS / HW / Firmware including the disks firmware.
IDRAC shows no problems.
Bottom line : Server is not working hard (to say the least) but performance is way off.
Any ideas would be appreciated.
vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 2 0 469072 0 130362040 0 0 57 341 0 0 0 0 98 2 0
0 2 0 456916 0 130374568 0 0 3328 24576 1176 3241 2 1 94 4 0
You have blocked processes and also IO operations (around 20MB/s). To me this means you have a few processes concurrently accessing disk resources. What you can do to improve the performance is, instead of
tar zxf Xtrabackup.tar.gz
use
gzip -dc Xtrabackup.tar.gz | tar xvf -
The second form adds parallelism and can benefit from multiple processors. You may also benefit from increasing the pipe (FIFO) buffer; check this answer for some ideas.
Also consider tuning the filesystem where the files extracted by tar are stored.
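If the single gzip process is the bottleneck, a parallel gzip implementation may also be worth a try. A small sketch, assuming pigz is installed (available from EPEL on CentOS 7); pigz cannot fully parallelize decompression, but it offloads reading, writing and checksumming to separate threads, which can still help:
pigz -dc Xtrabackup.tar.gz | tar xf -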

Diagnosing high CPU usage on Docker for Mac

How do I diagnose the cause of Docker on macOS, specifically com.docker.hyperkit, using 100% of CPU?
Docker stats
Docker stats shows all the running containers have low CPU, memory, net IO and block IO.
iosnoop
iosnoop shows that com.docker.hyperkit performs about 50 writes per second totaling 500KB per second to the file Docker.qcow2. According to What is Docker.qcow2?, Docker.qcow2 is a sparse file that's the persistent storage for all Docker containers.
In my case the file isn't that sparse. The physical size matches the logical size.
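For reference, a quick way to compare the logical and physical sizes yourself (the exact location of Docker.qcow2 varies by Docker for Mac version, so locate it under ~/Library/Containers/com.docker.docker first):
ls -lh Docker.qcow2   # logical size
du -h Docker.qcow2    # blocks actually allocated on disk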
dtrace (dtruss)
sudo dtruss -p $DOCKER_PID shows a large number of psynch_cvsignal and psynch_cvwait calls.
psynch_cvsignal(0x7F9946002408, 0x4EA701004EA70200, 0x4EA70100) = 257 0
psynch_mutexdrop(0x7F9946002318, 0x5554700, 0x5554700) = 0 0
psynch_mutexwait(0x7F9946002318, 0x5554702, 0x5554600) = 89474819 0
psynch_cvsignal(0x10BF7B470, 0x4C8095004C809600, 0x4C809300) = 257 0
psynch_cvwait(0x10BF7B470, 0x4C8095014C809600, 0x4C809300) = 0 0
psynch_cvwait(0x10BF7B470, 0x4C8096014C809700, 0x4C809600) = -1 Err#316
psynch_cvsignal(0x7F9946002408, 0x4EA702004EA70300, 0x4EA70200) = 257 0
psynch_cvwait(0x7F9946002408, 0x4EA702014EA70300, 0x4EA70200) = 0 0
psynch_cvsignal(0x10BF7B470, 0x4C8097004C809800, 0x4C809600) = 257 0
psynch_cvwait(0x10BF7B470, 0x4C8097014C809800, 0x4C809600) = 0 0
psynch_cvwait(0x10BF7B470, 0x4C8098014C809900, 0x4C809800) = -1 Err#316
Update: top on Docker host
From https://stackoverflow.com/a/58293240/30900:
docker run -it --rm --pid host busybox top
The CPU usage on the Docker embedded host is ~3%, while CPU usage on my MacBook was ~100%. So the Docker embedded host isn't causing the CPU usage spike.
Update: running dtrace scripts for the most common stack traces
Stack traces from the dtrace scripts in the answer below: https://stackoverflow.com/a/58293035/30900.
These kernel stack traces look innocuous.
AppleIntelLpssGspi`AppleIntelLpssGspi::regRead(unsigned int)+0x1f
AppleIntelLpssGspi`AppleIntelLpssGspi::transferMmioDuplexMulti(void*, void*, unsigned long long, unsigned int)+0x91
AppleIntelLpssSpiController`AppleIntelLpssSpiController::transferDataMmioDuplexMulti(void*, void*, unsigned int, unsigned int)+0xb2
AppleIntelLpssSpiController`AppleIntelLpssSpiController::_transferDataSubr(AppleInfoLpssSpiControllerTransferDataRequest*)+0x5bc
AppleIntelLpssSpiController`AppleIntelLpssSpiController::_transferData(AppleInfoLpssSpiControllerTransferDataRequest*)+0x24f
kernel`IOCommandGate::runAction(int (*)(OSObject*, void*, void*, void*, void*), void*, void*, void*, void*)+0x138
AppleIntelLpssSpiController`AppleIntelLpssSpiDevice::transferData(IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, unsigned int, AppleIntelSPICompletion*)+0x151
AppleHSSPISupport`AppleHSSPIController::transferData(IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, IOMemoryDescriptor*, void*, unsigned long long, unsigned long long, unsigned int, AppleIntelSPICompletion*)+0xcc
AppleHSSPISupport`AppleHSSPIController::doSPITransfer(bool, AppleHSSPITransferRetryReason*)+0x97
AppleHSSPISupport`AppleHSSPIController::InterruptOccurred(IOInterruptEventSource*, int)+0xf8
kernel`IOInterruptEventSource::checkForWork()+0x13c
kernel`IOWorkLoop::runEventSources()+0x1e2
kernel`IOWorkLoop::threadMain()+0x2c
kernel`call_continuation+0x2e
53
kernel`waitq_wakeup64_thread+0xa7
pthread`__psynch_cvsignal+0x495
pthread`_psynch_cvsignal+0x28
kernel`psynch_cvsignal+0x38
kernel`unix_syscall64+0x27d
kernel`hndl_unix_scall64+0x16
60
kernel`hndl_mdep_scall64+0x4
113
kernel`ml_set_interrupts_enabled+0x19
524
kernel`ml_set_interrupts_enabled+0x19
kernel`hndl_mdep_scall64+0x10
5890
kernel`machine_idle+0x2f8
kernel`call_continuation+0x2e
43395
The most common stack traces in user space over 17 seconds clearly implicate com.docker.hyperkit. There were 1365 stack traces over those 17 seconds in which com.docker.hyperkit created threads, which averages to 80 threads per second.
com.docker.hyperkit`0x000000010cbd20db+0x19f9
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
19
Hypervisor`hv_vmx_vcpu_read_vmcs+0x1
com.docker.hyperkit`0x000000010cbd4c4f+0x2a
com.docker.hyperkit`0x000000010cbd20db+0x174a
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
22
Hypervisor`hv_vmx_vcpu_read_vmcs
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
34
com.docker.hyperkit`0x000000010cbd878d+0x36
com.docker.hyperkit`0x000000010cbd20db+0x42f
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
47
Hypervisor`hv_vcpu_run+0xd
com.docker.hyperkit`0x000000010cbd20db+0x6b6
com.docker.hyperkit`0x000000010cbdb98c+0x157
com.docker.hyperkit`0x000000010cbf6c2d+0x4bd
libsystem_pthread.dylib`_pthread_body+0x7e
libsystem_pthread.dylib`_pthread_start+0x42
libsystem_pthread.dylib`thread_start+0xd
135
Related issues
GitHub - docker/for-mac: com.docker.hyperkit 100% cpu usage is back again #3499
One comment suggests adding the volume caching described here: https://www.docker.com/blog/user-guided-caching-in-docker-for-mac/. I tried this and got a small ~10% reduction in CPU usage.
I have the same problem. My CPU % went back down to normal after I removed all my volumes.
docker system prune --volumes
I also manually removed some named volumes:
docker volume rm NameOfVolumeHere
That doesn't solve the overall issue of not being able to use volumes with Docker for Mac. Right now I'm just being careful about the number of volumes I use and closing Docker Desktop when not in use.
My suspicion is that the issue is IO related. With macOS volumes, this involves osxfs, where there is some performance tuning you can do. Mainly, if you can accept fewer consistency checks, you can set the volume mode to delegated for faster performance. See the docs for more details: https://docs.docker.com/docker-for-mac/osxfs-caching/. However, if your image contains a large number of small files, performance will suffer, especially if you also have lots of image layers.
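For example, a plain docker run with a bind mount in delegated mode might look like this (the host path and image here are just placeholders):
docker run -it --rm -v "$(pwd)/code:/www/code:delegated" python:3.6 /bin/sh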
You can also try the following command to debug any process issues within the embedded VM that docker uses:
docker run -it --rm --pid host busybox top
(To exit, use <ctrl>-c)
To track down if it's IO, you can also try the following:
$ docker run -it --rm --pid host alpine /bin/sh
$ apk add sysstat
$ pidstat -d 5 12
That will run inside the alpine container running in the VM pid namespace, showing any IO happening from any process, whether or not that process is inside of a container. The stats are every 5 seconds for one minute (12 times) and then it will give you an average table per process. You can then <ctrl>-d to destroy the alpine container.
From the comments and edits, these stats may check out. A 4-core MBP has 8 threads, so full CPU utilization should be 800% if macOS is reporting the same way as other Unix-based systems. Inside the VM there's over 100% load shown in the top command for the average over the past minute (though less from the 5 and 15 minute averages), which is roughly what you see for the hyperkit process on the host. The instantaneous usage is over 12% from top, not 3%, since you need to add the system and user percentages. And the IO numbers shown in pidstat align roughly with what you see written to the qcow2 image.
If the docker engine itself is thrashing (e.g. restarting containers, or running lots of healthchecks), then you can debug that by watching the output of:
docker events
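For instance, to watch only container lifecycle churn from the last half hour (standard docker events filters):
docker events --since 30m --filter type=container --filter event=die --filter event=restart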
EDIT: after a few weeks, my CPU issues have come back, so the solutions below probably aren't worth it.
My CPU was always running crazy high, and it wasn't I/O, as determined using docker stats.
I did a bunch of stuff, but it suddenly decreased to reasonable levels and has stayed that way for over a week now, after doing the following:
Ensure you have the right number of CPUs set: not the number you have, but HALF that amount (Preferences | Resources). Mine was set to more than half, and I feel this was the real problem.
Decrease the number of file shares if possible (Preferences | Resources): /private, /tmp, /var/folders.
Disable "Use gRPC FUSE for file sharing" (Preferences | Resources).
Changing the volumes to use a delegated configuration worked for me and resulted in a drastic drop in CPU usage.
see the document: https://docs.docker.com/docker-for-mac/osxfs-caching/#delegated
How to set it in my docker-compose.yml:
version: "3"
services:
  my_service:
    image: python:3.6
    ports:
      - "80:10000"
    volumes:
      - ./code:/www/code:delegated
For me this worked, macOS 10.15.5, Docker Desktop 2.3.0
This is a small DTrace script I use to find where the kernel is spending its time (it's from Solaris, and dates back to the early days of Solaris 10):
#!/usr/sbin/dtrace -s
profile:::profile-1001hz
/arg0/
{
    @[ stack() ] = count();
}
It simply samples kernel stack traces and counts each one it encounters in the @ aggregation.
Run it as root:
... # ./kernelhotspots.d > /tmp/kernel_hot_spots.txt
Let it run for a decent amount of time while you're having CPU issues, then hit CTRL-C to break the script. It will emit all the kernel stack traces it encountered, the most common last. If you need more (or fewer) stack frames than the default, use
@[ stack( 15 ) ] = count();
That will record stack traces up to 15 frames deep.
The last few stack traces will be where your kernel is spending most of its time. That may or may not be informative.
This script will do the same for user-space stack traces:
#!/usr/sbin/dtrace -s
profile:::profile-1001hz
/arg1/
{
    @[ ustack() ] = count();
}
Run it similarly:
... # ./userspacehotspots.d > /tmp/userspace_hot_spots.txt
ustack() is a bit slower: to emit the actual function names, DTrace has to do a lot more work to get them from the address spaces of the appropriate processes.
Disabling System Integrity Protection might help you get better stack traces.
See DTrace Action Basics for some more details.
I had the same issue with Docker today on Big Sur (I tried pruning images and changing to the Apple virtualization framework; nothing helped). However, disabling Docker Desktop's start-at-login in Preferences and never opening the Desktop GUI seems to fix it for me. Docker now runs with only 10% CPU usage even after starting a few containers. However, once I open the Desktop GUI it slowly rises again to 90%+ CPU and keeps hogging the CPU even after I close the DockerDesktop process. Docker version 20.10.13, build a224086.
The solution I found was to increase the resources given to Docker. I increased the Memory from 2GB to 8GB, the Swap from 1GB to 2GB, and the disk image size to 160GB. Completely solved the problem for me, and it's an easy one for readers to try.
Disabling "Use gRPC FUSE for file sharing" might not be a good idea. I found this feedback in another issue raised with the Docker community; see below:
So we'll look into that. However,
osxfs will not be supported long term.
We can't maintain two solutions.
(Link to the Docker issue thread.)
There is an open issue here https://github.com/docker/for-mac/issues/6166
It seems there are a few bugs going on.
For some people (myself included), unchecking "Open Docker Dashboard at startup" and manually restarting Docker does the job.
For other people, increasing resources like CPU and memory works.

Oracle Database CPU Usage on AIX

I want to find the CPU process usage for all Oracle processes on an AIX box.
On Solaris I can do the following:
prstat -n 400 -c -s cpu -p 9013 1 1
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
9013 oracle 3463M 2928M sleep 53 0 0:00:35 0.9% oracle/2
Total: 1 processes, 2 lwps, load averages: 2.25, 2.32, 2.40
This basically reports the CPU usage for a given process ID (in this case 9013). Given a list of all Oracle PIDs I can use this command to get the CPU usage for each one, sum them up and, hey presto, I have my Oracle database CPU usage.
How can I get the same with AIX?
Thanks
You can try nmon or topas, which will show the current %CPU. You might also want to look into using WLM to create a class for all the Oracle processes, then use wlmstat to see the CPU usage for that class. That would save you the trouble of adding them up manually.
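If you just want a quick sum without setting up WLM, something along these lines may work. This is a rough sketch, assuming AIX's BSD-style ps aux output where %CPU is the third column and Oracle processes match ora_ or oracle; note that ps reports CPU averaged over process lifetime rather than an instantaneous value:
ps aux | awk '/ora_|oracle/ && !/awk/ {sum += $3} END {printf "Oracle total %%CPU: %.1f\n", sum}'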

hadoop ulimit open files name

I have a Hadoop cluster that we assume is performing pretty "badly". The nodes are pretty beefy: 24 cores, 60+GB RAM, etc. And we are wondering whether some basic Linux/Hadoop default configuration is preventing Hadoop from fully utilizing our hardware.
There is a post here that describes a few possibilities that I think might be true.
I tried logging into the namenode as root, hdfs and also myself, and looked at the output of lsof and the settings from ulimit. Here is the output; can anyone help me understand why the settings don't match the open files numbers?
For example, when I logged in as root, the lsof output looks like this:
[root@box ~]# lsof | awk '{print $3}' | sort | uniq -c | sort -nr
7256 cloudera-scm
3910 root
2173 oracle
1886 hbase
1575 hue
1180 hive
801 mapred
470 oozie
427 yarn
418 hdfs
244 oragrid
241 zookeeper
94 postfix
87 httpfs
...
But when I check out the ulimit output, it looks like this:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 806018
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I am assuming there should be no more than 1024 files opened by one user; however, when you look at the output of lsof, there are 7000+ files opened by one user. Can anyone help explain what is going on here?
Correct me if I have made any mistake in understanding the relation between ulimit and lsof.
Many thanks!
You need to check the limits for the process itself. They may be different from those of your shell session. Example:
[root@ADWEB_HAPROXY3 ~]# cat /proc/$(pidof haproxy)/limits | grep open
Max open files            65536                65536                files
[root@ADWEB_HAPROXY3 ~]# ulimit -n
4096
In my case haproxy has a directive in its config file to change the maximum number of open files; there should be something similar for Hadoop as well.
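A rough equivalent for a Hadoop daemon might look like this (the DataNode pattern and the users in limits.conf are assumptions to adapt to your cluster; note that limits.conf only affects PAM login sessions, so daemons launched by init scripts or a management tool such as Cloudera Manager may need the limit raised in their own startup configuration instead):
# Check the limit the running daemon actually got
cat /proc/$(pgrep -f DataNode | head -1)/limits | grep 'open files'
# Raise the limit for the Hadoop service users
cat >> /etc/security/limits.conf <<'EOF'
hdfs   - nofile 32768
yarn   - nofile 32768
mapred - nofile 32768
EOF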
I had a very similar issue, which caused one of the cluster's YARN TimeLine servers to stop after reaching the magical 1024 open files limit and crashing with "too many open files" errors.
After some investigation it turned out that it had serious issues dealing with too many files in TimeLine's LevelDB store. For some reason YARN ignored the yarn.timeline-service.entity-group-fs-store.retain-seconds setting (by default it's set to 7 days, i.e. 604800 seconds). We had LevelDB files dating back over a month.
What seriously helped was applying a fix described in here: https://community.hortonworks.com/articles/48735/application-timeline-server-manage-the-size-of-the.html
Basically, there are a couple of options I tried:
Shrink the TTL (time to live) settings. First enable TTL:
<property>
  <description>Enable age off of timeline store data.</description>
  <name>yarn.timeline-service.ttl-enable</name>
  <value>true</value>
</property>
Then set yarn.timeline-service.ttl-ms (set it to a low value for a period of time):
<property>
  <description>Time to live for timeline store data in milliseconds.</description>
  <name>yarn.timeline-service.ttl-ms</name>
  <value>604800000</value>
</property>
The second option, as described, is to stop the TimeLine server, delete the whole LevelDB store and restart the server. This will start the ATS database from scratch. It works fine if the other options failed for you.
To do it, find the database location from yarn.timeline-service.leveldb-timeline-store.path, back it up and remove all subfolders from it. This operation will require root access to the server where TimeLine is located.
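A rough sketch of that procedure (the store path, config location and daemon commands are assumptions; use your distribution's service manager, e.g. Ambari or Cloudera Manager, if you have one):
# Find the configured store location
grep -A1 'yarn.timeline-service.leveldb-timeline-store.path' /etc/hadoop/conf/yarn-site.xml
DB_DIR=/hadoop/yarn/timeline                       # example value read from yarn-site.xml
yarn-daemon.sh stop timelineserver                 # Hadoop 3: yarn --daemon stop timelineserver
tar czf /root/ats-leveldb-backup.tar.gz "$DB_DIR"  # back it up first
rm -rf "$DB_DIR"/*                                 # wipe the LevelDB store
yarn-daemon.sh start timelineserver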
Hope it helps.
