Define hugepages in QEMU using Ubuntu without libvirt - linux-kernel

I am using QEMU with KVM to boot an Ubuntu 18.04 image. QEMU is started with the following command:
qemu-system-x86_64 -boot c -m 16G -smp 4 -vnc :0 -enable-kvm -drive if=virtio,file=ubuntu.qcow2,cache=none
I don't use libvirt.
I need to change the hugepage size from the default 2048 kB to 1 GB.
I configured my host to support this size.
Configuration:
cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 13
HugePages_Free: 13
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
Hugetlb: 13631488 kB
In the Ubuntu guest, I edited
/etc/default/grub
and added
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=40"
and then ran
grub-mkconfig -o /boot/grub/grub.cfg
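On Ubuntu, the stock update-grub wrapper runs the same command, so this step is equivalent to:
sudo update-grub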
Execution:
After QEMU was loaded, I ran the following commands in the guest terminal:
cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
**Hugepagesize: 2048 kB**
cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.15.0-137-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro default_hugepagesz=1G hugepagesz=1G hugepages=40 maybe-ubiquity
Does anyone have an idea how to set the hugepage size to 1 GB in this configuration?

I solved the issue.
With qemu-system-x86_64 you can run -cpu help to list all supported CPU models and the additional flags.
In my case I added '-cpu qemu64,pdpe1gb', and the guest then supports 1 GB huge pages.
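For reference, a sketch of the adjusted command line (only the -cpu option is new; the greps are just a sanity check run inside the guest):
# Host: same command as before, with the pdpe1gb CPU flag exposed to the guest
qemu-system-x86_64 -boot c -m 16G -smp 4 -vnc :0 -enable-kvm \
    -cpu qemu64,pdpe1gb \
    -drive if=virtio,file=ubuntu.qcow2,cache=none
# Guest: confirm the flag is visible, then re-check the hugepage size
grep -o pdpe1gb /proc/cpuinfo | head -1
grep Huge /proc/meminfo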

Related

How to mount NVMe EBS volumes of different sizes on desired mount points using shell

After adding volumes to an EC2 instance using Ansible, how can I mount these devices by size to the desired mount points using a shell script that I pass to user_data?
[ec2-user@xxx ~]$ lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1     259:4    0 200G  0 disk
├─nvme0n1p1 259:5    0   1M  0 part
└─nvme0n1p2 259:6    0 200G  0 part /
nvme1n1     259:0    0  70G  0 disk
nvme2n1     259:1    0  70G  0 disk
nvme3n1     259:3    0  70G  0 disk
nvme4n1     259:2    0  20G  0 disk
This is what I wrote initially, but I realized the NAME and SIZE are not always the same for the NVMe devices:
#!/bin/bash
VOLUMES=(nvme1n1 nvme2n1 nvme3n1 nvme4n1)
PATHS=(/abc/sfw /abc/hadoop /abc/log /kafka/data/sda)
for index in ${!VOLUMES[*]}; do
    sudo mkfs -t xfs /dev/"${VOLUMES[$index]}"
    sudo mkdir -p "${PATHS[$index]}"
    sudo mount /dev/"${VOLUMES[$index]}" "${PATHS[$index]}"
    echo "Mounted ${VOLUMES[$index]} in ${PATHS[$index]}"
done
I am creating these volumes using Ansible and want the 20G volume to be mounted on /edw/logs, but the 20G volume randomly shows up as any of the devices (nvme1n1, nvme2n1, nvme3n1 or nvme4n1).
How should I write or modify my script?
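One way to approach this is to select the devices by size rather than by name. A minimal sketch, assuming the sizes from the lsblk output above (three 70G volumes plus one 20G volume) and the mount points from the original script:
#!/bin/bash
# Sketch: mount NVMe data volumes by size instead of by device name.
# The size threshold and mount points are assumptions taken from the question.
SMALL_MOUNT=/kafka/data/sda                    # target for the single 20G volume
LARGE_MOUNTS=(/abc/sfw /abc/hadoop /abc/log)   # targets for the three 70G volumes
i=0
for dev in /dev/nvme[1-9]n1; do
    # whole-disk size in GiB; lsblk -b prints the size in bytes
    size_gib=$(( $(lsblk -bdn -o SIZE "$dev") / 1024 / 1024 / 1024 ))
    if [ "$size_gib" -le 20 ]; then
        target=$SMALL_MOUNT
    else
        target=${LARGE_MOUNTS[$i]}
        i=$((i + 1))
    fi
    sudo mkfs -t xfs "$dev"
    sudo mkdir -p "$target"
    sudo mount "$dev" "$target"
    echo "Mounted $dev ($size_gib GiB) on $target"
done
The three equally sized 70G volumes still land on their mount points in whatever order the glob returns them; if each of those also needs a fixed home, they have to be distinguished by something other than size (for example the volume serial shown by lsblk -o NAME,SERIAL).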

Why is the 2700X slower than the i7-6700 in a Redis benchmark?

I don't understand why the 2700X is slower than the i7-6700 in the Redis benchmark.
What else should I add?
Below are my tests.
1st system
OS : CentOS Linux release 7.6.1810 (Core)
Linux localhost.localdomain 5.2.8-1.el7.elrepo.x86_64 #1 SMP Fri Aug 9 13:40:33 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
CPU : AMD Ryzen 7 2700X Eight-Core Processor (pinnacle ridge 2700x)
RAM : 16GiB DIMM DDR4 Synchronous Unbuffered (Unregistered) 3000 MHz (0.3 ns) * 2
[root@localhost ~]# redis-cli --latency
min: 0, max: 1, avg: 0.21 (5042 samples)
[root@localhost ~]# redis-benchmark -h 127.0.0.1 -p 6379 -n 100000 -t set -q
SET: 110619.47 requests per second
[root@localhost ~]# redis-benchmark -h 127.0.0.1 -p 6379 -n 100000 -t get -q
GET: 138504.16 requests per second
2nd system
OS : CentOS release 6.7 (Final)
Linux dmlocalhost.localdoamin 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 03:35:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
CPU : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
RAM : 16GiB DIMM Synchronous 2133 MHz (0.5 ns)
[root@dmvault ~]# redis-cli --latency
min: 0, max: 1, avg: 0.11 (5038 samples)
[root@dmvault ~]# redis-benchmark -h 127.0.0.1 -p 6379 -n 100000 -t set -q
SET: 248138.95 requests per second
[root@dmvault ~]# redis-benchmark -h 127.0.0.1 -p 6379 -n 100000 -t get -q
GET: 244498.77 requests per second

Java ProcessBuilder: GraphicsMagick/ImageMagick convert hangs

The command below hangs when run through Java's ProcessBuilder, whereas the same command produces the expected result on the command line.
gm convert -debug all -log %u %m:%l %e -limit Disk 5GB -limit memory 1GB -limit map 1GB -limit pixels 50MB -limit Threads 1 inputfile -quality 82 -size 700x500 -scale 700x500> jpg:out
When I check the task with the top command, it gives the following information:
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.3%us,  0.3%sy,  0.0%ni, 99.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  30827392k total,  7131008k used, 23696384k free,   126288k buffers
Swap:        0k total,        0k used,        0k free,  4447716k cached

  PID USER     PR NI VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
18260 yarn     20  0 165m 9148 5436 S  0.0  0.0 0:00.02 gm
From the above information, the gm task went to sleep after using only 0.02 seconds of CPU time.
Any help would be greatly appreciated.

Memory error in Rails console on AWS

I logged into our production instance on AWS and tried to open the Rails console:
bundle exec rails c production
But I'm getting the following error:
There was an error while trying to load the gem 'mini_magick' (Bundler::GemRequireError)
Gem Load Error is: Cannot allocate memory - animate
When I run free I see there's no swap:
free
             total       used       free     shared    buffers     cached
Mem:       7659512    7515728     143784        408       1724      45604
-/+ buffers/cache:    7468400     191112
Swap:            0          0          0
df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev             3824796       12   3824784   1% /dev
tmpfs             765952      376    765576   1% /run
/dev/xvda1      15341728 11289944   3323732  78% /
none                   4        0         4   0% /sys/fs/cgroup
none                5120        0      5120   0% /run/lock
none             3829756        0   3829756   0% /run/shm
none              102400        0    102400   0% /run/user
/dev/xvdf       10190136  6750744   2898720  70% /mnt
Not sure what's causing this or how to resolve it. Any help is appreciated.
Thanks!
You can increase the EC2 instance's memory, or add swap space to the instance:
# Check current memory and swap
grep Mem /proc/meminfo
grep Swap /proc/meminfo
free
uname -a
# Create a 512 MB swap file at /swapfile1
sudo dd if=/dev/zero of=/swapfile1 bs=1M count=512
ls -l /swapfile1
sudo chmod 600 /swapfile1
sudo mkswap /swapfile1
# Enable the swap file and verify
sudo swapon /swapfile1
swapon -s
free
grep Swap /proc/meminfo
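To keep the swap file active after a reboot, an /etc/fstab entry is usually added as well (a sketch, assuming the /swapfile1 path used above):
/swapfile1 none swap sw 0 0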

How to disable core dumps in Linux

I'm trying to disable core dumps for my application, so I set ulimit -c 0.
But whenever I attach to the process with gdb using gdb --pid=<pid> and then run gcore, I still get a core dump for that application. I'm using bash:
-bash-3.2$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 65600
max locked memory (kbytes, -l) 50000000
max memory size (kbytes, -m) unlimited
open files (-n) 131072
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 131072
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
-bash-3.2$ ps -ef | grep top
oracle 8951 8879 0 Mar05 ? 00:01:44 /u01/R122_EBS/fs1/FMW_Home/jrockit32 jre/bin/java -classpath /u01/R122_EBS/fs1/FMW_Home/webtier/opmn/lib/wlfullclient.jar:/u01/R122_EBS/fs1/FMW_Home/Oracle_EBS-app1/shared-libs/ebs-appsborg/WEB-INF/lib/ebsAppsborgManifest.jar:/u01/R122_EBS/fs1/EBSapps/comn/java/classes -mx256m oracle.apps.ad.tools.configuration.RegisterWLSListeners -appsuser APPS -appshost rws3510293 -appsjdbcconnectdesc jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=rws3510293.us.oracle.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=rahulshr))) -adtop /u01/R122_EBS/fs1/EBSapps/appl/ad/12.0.0 -wlshost rws3510293 -wlsuser weblogic -wlsport 7001 -dbsid rahulshr -dbhost rws3510293 -dbdomain us.oracle.com -dbport 1521 -outdir /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appltmp/Tue_Mar_5_00_42_52_2013 -log /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/logs/appl/rgf/Tue_Mar_5_00_42_52_2013/adRegisterWLSListeners.log -promptmsg hide -contextfile /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appl/admin/rahulshr_rws3510293.xml
oracle 23694 22895 0 Mar05 pts/0 00:00:00 top
oracle 26235 22895 0 01:51 pts/0 00:00:00 grep top
-bash-3.2$ gcore
usage: gcore [-o filename] pid
-bash-3.2$ gcore 23694
0x000000355cacbfe8 in tcsetattr () from /lib64/libc.so.6
Saved corefile core.23694
[2]+ Stopped top
-bash-3.2$ ls -l
total 2384
-rw-r--r-- 1 oracle dba 2425288 Mar 6 01:52 core.23694
drwxr----- 3 oracle dba 4096 Mar 5 03:32 oradiag_oracle
-rwxr-xr-x 1 oracle dba 20 Mar 5 04:06 test.sh
-bash-3.2$
The gcore command in gdb does not use the Linux kernel's core-dumping code. It walks the process's memory itself and writes out a binary file in the same format as a process core file. This is apparent because the process is still active after issuing gcore; if Linux were dumping the core file, the process would have been terminated.
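A quick way to see the difference from a shell (a sketch; the sleep processes are just stand-ins for the real application):
# With ulimit -c 0, the kernel writes no core file when a process crashes
ulimit -c 0
sleep 300 &
kill -SEGV $!          # SIGSEGV would normally dump core, but the limit suppresses it
ls core* 2>/dev/null   # nothing written
# gdb's gcore still produces a file, because it reads the process memory
# itself instead of asking the kernel to dump one
sleep 300 &
gcore $!               # writes core.<pid> regardless of the ulimit
ls core.*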
