HugeTLB huge pages and IO mapping: Why does it waste memory? - linux-kernel

Using the hugepages= kernel parameter at boot, or writing to /proc/sys/vm/nr_hugepages at runtime (echo 1024 > /proc/sys/vm/nr_hugepages), allocates memory.
# cat /proc/meminfo
MemTotal: 32565364 kB
MemFree: 30179992 kB
MemAvailable: 30820684 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
# echo 1024 > /proc/sys/vm/nr_hugepages
# cat /proc/meminfo
MemTotal: 32565364 kB
MemFree: 28082216 kB
MemAvailable: 28723824 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 2097152 kB
This allocates 2 GB (1024 huge pages x 2 MB): MemFree drops from roughly 30 GB to 28 GB.
Why? How can I avoid this behavior?
I only want to reserve the TLB entries, not the actual physical memory. I will never allocate these huge pages from the pool, since I will only use them to mmap I/O memory.
This behavior wastes 2 GB of my physical memory.
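For what it's worth, the pool pre-allocation is reversible at runtime as long as the pages are unused; a minimal sketch:
# Shrink the pool back to zero; free pool pages return to the buddy allocator
echo 0 > /proc/sys/vm/nr_hugepages
grep -E 'HugePages_Total|MemFree' /proc/meminfo
MemFree should climb back by roughly the 2 GB the pool was holding.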

Related

Why is vm.nr_hugepages being overridden?

I want to configure 25 x 1 GB hugepages on our CentOS 7 system, which has 48 GB of RAM.
I have specified the following boot parameters:
hugepagesz=1G hugepages=25 default_hugepagesz=1G
but after boot the system reports:
$ cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 43
HugePages_Free: 43
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
I find that:
$ sudo cat /etc/sysctl.conf | grep nr
vm.nr_hugepages = 200
I tried setting vm.nr_hugepages = 25 directly and via a custom tuned profile (one that includes 'balanced'), but vm.nr_hugepages still gets set to 200.
So something, somewhere is overriding the value I set. What could this be?
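One hedged way to hunt for the override: values in /etc/sysctl.conf and the sysctl.d directories are applied by systemd-sysctl at boot, after the kernel command line has already been honored, and tuned can layer its own sysctl settings on top. Searching every usual location should show who sets it:
# List every file that mentions nr_hugepages in the usual sysctl/tuned locations
sudo grep -rn nr_hugepages /etc/sysctl.conf /etc/sysctl.d /usr/lib/sysctl.d /run/sysctl.d /etc/tuned /usr/lib/tuned 2>/dev/null
(The 43 in HugePages_Total is presumably just as many 1 GB pages as the kernel could still assemble out of 48 GB once nr_hugepages was re-set to 200.)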

Linux bash: echo 4096 bytes to a file, file uses 8K on disk?

When I use bash to write a temporary test file in a Linux terminal:
echo text_content > file1
If I set the length of text_content to 4096 characters (random characters from [a-Z]),
the resulting file1 ends up using two 4K blocks and one inode.
test@instance-7:~/notes/rust$ du -csh file1
8.0K file1
8.0K total
But why did it use two 4K blocks? Isn't one 4K block enough for it?
If I set the length of text_content to 4095 characters, it uses only one 4K block.
Why is it using more blocks than it needs? Or am I missing something?
Here is some disk info for my Linux machine:
test@instance-7:~/notes/rust$ sudo fdisk -l /dev/sda
Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
Disk model: PersistentDisk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 35BD657D-931E-497E-A86C-D3D7C4F6BD2A
Try this experiment:
type cat > file1 and hit Enter,
then type this and hit Ctrl-D twice without hitting Enter first;
type cat > file2 and hit Enter,
then type this, then hit Enter and then Ctrl-D.
Finally, run diff file1 file2 and ls -l file[12].
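What the experiment shows (a sketch of the underlying cause, reproduced here with head/tr rather than the poster's exact file): echo always appends a trailing newline, so 4096 characters of content become 4097 bytes on disk, one byte more than a single 4096-byte block can hold:
# Exactly 4096 bytes, no trailing newline
head -c 4096 /dev/zero | tr '\0' 'a' > file1
# The same 4096 bytes plus the newline that echo appends: 4097 bytes
{ head -c 4096 /dev/zero | tr '\0' 'a'; echo; } > file2
ls -l file[12]   # 4096 vs 4097 bytes
du -k file[12]   # 4 vs 8: the one extra byte needs a second 4K block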

How to redirect dd progress in terminal to a log file after replacing carriage returns with newline characters

Any idea why this command isn't working?
dd if=/dev/zero of=/dev/null status=progress |& tr '\r' '\n' >> test.txt
I want the contents of test.txt to look something like this.
395191296 bytes (395 MB, 377 MiB) copied, 1 s, 395 MB/s
805187584 bytes (805 MB, 768 MiB) copied, 2 s, 403 MB/s
1239563264 bytes (1.2 GB, 1.2 GiB) copied, 3 s, 413 MB/s
1666015232 bytes (1.7 GB, 1.6 GiB) copied, 4 s, 417 MB/s
Right now the command prints nothing to test.txt.
That is because tr's output is block-buffered when it writes to a file: nothing appears until the buffer fills or tr exits, and this dd never exits on its own.
unbuffer can help you in this situation:
dd if=/dev/zero of=/dev/null status=progress |& unbuffer -p tr '\r' '\n' >>test.txt
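If unbuffer (part of the expect package) is not installed, stdbuf from coreutils is a common alternative; it makes tr line-buffer its output instead of waiting for a full block:
# Flush tr's output after every newline instead of every few KB
dd if=/dev/zero of=/dev/null status=progress |& stdbuf -oL tr '\r' '\n' >> test.txt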
Use tee instead.
E.g.:
dd if=/dev/zero of=/dev/null status=progress 2>&1 | tee test
Then you can replace \r with \n afterwards, or open the file with nano.
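The replacement step can be the same tr invocation run afterwards, e.g. (file names as in the command above):
# Convert the saved progress updates into one line each
tr '\r' '\n' < test > test.txt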

Tee/sed/systemd-cat delaying messages to journald

I need a script that sends stdout both to journald and to the console in parallel. For the journald lines, I need to sanitize the messages before persisting them.
I have a dummy example to show my issue:
ping google.com | tee >( sed 's/seq/SEQ/' | systemd-cat -t 'my-ping')
When I have both sed and systemd-cat in the pipeline, the messages to journald are delayed: they arrive only after the ping process is stopped.
Example:
$ ping google.com | tee >( sed 's/seq/SEQ/' | systemd-cat -t 'my-ping')
PING google.com (216.58.197.238) 56(84) bytes of data.
64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_seq=1 ttl=40 time=240 ms
64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_seq=2 ttl=40 time=240 ms
64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_seq=3 ttl=40 time=240 ms
64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_seq=4 ttl=40 time=240 ms
64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_seq=5 ttl=40 time=240 ms
^C
The messages all go in at once (see the timestamps):
journalctl -f | grep my-ping
Aug 17 06:03:40 hostname my-ping[30555]: PING google.com (216.58.197.238) 56(84) bytes of data.
Aug 17 06:03:40 hostname my-ping[30555]: 64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_SEQ=1 ttl=40 time=240 ms
Aug 17 06:03:40 hostname my-ping[30555]: 64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_SEQ=2 ttl=40 time=240 ms
Aug 17 06:03:40 hostname my-ping[30555]: 64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_SEQ=3 ttl=40 time=240 ms
Aug 17 06:03:40 hostname my-ping[30555]: 64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_SEQ=4 ttl=40 time=240 ms
Aug 17 06:03:40 hostname my-ping[30555]: 64 bytes from nrt13s49-in-f14.1e100.net (216.58.197.238): icmp_SEQ=5 ttl=40 time=240 ms
It seems this behavior only occurs when I use both: if either one is left out, everything works as expected.
Do you have any pointers on what the issue might be and how to get around it?
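One likely pointer (an educated guess from the symptoms rather than a confirmed diagnosis): when sed's stdout is a pipe instead of a terminal, it is block-buffered, so systemd-cat sees nothing until the buffer fills or sed exits. GNU sed's -u (--unbuffered) flag flushes after every line:
ping google.com | tee >( sed -u 's/seq/SEQ/' | systemd-cat -t 'my-ping' )
Where -u is unavailable, stdbuf -oL sed 's/seq/SEQ/' should have the same effect.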

String search in a file

I have a file named test:
# cat test
192.168.171.3 840 KB /var/opt
192.168.171.3 83 MB /var
192.168.171.3 2 KB /var/tmp
192.168.171.3 1179 KB /var/opt
192.168.171.3 65 MB /opt/var/dump
192.168.171.3 15 MB /opt/varble
192.168.171.3 3 MB /var
I want to search for entries that have only /var and not any other variation of it, such as /opt/var or /var/tmp. I tried grep '^/var$' test and awk, but neither works.
# grep '^/var$' test
#
# awk '/^\/var$/' test
#
Please help!
Sorry for the pain... already got it sorted, but thanks for all your answers!
This grep command should work:
grep " /var$" file
192.168.171.3 83 MB /var
192.168.171.3 3 MB /var
Using awk:
awk '$4=="/var"' file
192.168.171.3 83 MB /var
192.168.171.3 3 MB /var
This compares the fourth whitespace-separated field against /var exactly, so /var/tmp, /opt/var/dump, and /opt/varble don't match.
You could also use the following:
grep -w '/var$' file
The -w flag matches only whole words, and the $ anchors the match to the end of the line. (Anchoring with ^ would not work here, since each line begins with the IP address, not the path; and -w alone is not enough, because / is not a word character, so /var/tmp would still match.)
