Difference between huge-pages & hugetlbfs

I'm struggling to understand the difference between allocating huge pages directly
(echo {value} > /proc/sys/vm/nr_hugepages, or at boot time from the grub config) and through a filesystem mount (mount -t hugetlbfs ....).
When do we use the fs mount and when the simple allocation?
What's the difference between them?
What is the purpose of the predefined mount point under /dev/hugepages?

Related

changing the crtime in bash

I want to change the crtime properties of a file in bash.
First, I checked the crtime with the following command:
stat test
Next, I changed the timestamp:
touch -t '200001010101.11' test
But I realized that the crtime can't be changed this way if it is already earlier than the date I gave.
So I want to know how to set the crtime even when the new date is in the past.
Edit:
According to this answer to a similar question, you may be able to use debugfs -w -R 'set_inode_field ...' to change inode fields, though this does require unmounting the filesystem first.
man debugfs shows us the following available command:
set_inode_field filespec field value
Modify the inode specified by filespec so that the inode field field has value value. The list of valid inode fields which can be set via this command can be displayed by using the command:
set_inode_field -l
Also available as sif.
You can try the following to verify the inode number and name of the crtime field:
stat -c %i test
debugfs -R 'stat <your-inode-number>' /dev/sdb1
and additionally run df -Th to find the /dev path of your filesystem (e.g. /dev/sdb1).
Followed by:
umount /dev/sdb1
debugfs -w -R 'set_inode_field <your-inode-number> crtime 200001010101.11' /dev/sdb1
Note: In the above commands, inode numbers must be indicated with <> brackets as shown. Additionally, as described here, it may be necessary to flush the inode cache with echo 2 > /proc/sys/vm/drop_caches.
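As a final check, here is a minimal sketch of verifying the change after remounting (the device, mount point and file name are illustrative):
# remount the filesystem and flush cached inodes so stat re-reads them from disk
mount /dev/sdb1 /mnt
echo 2 > /proc/sys/vm/drop_caches
# recent coreutils/kernels report the creation time in stat's Birth field (via statx)
stat /mnt/test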
Original answer:
You might try birthtime_touch:
birthtime_touch is a simple command line tool that works similar to
touch, but changes a file's creation time (its "birth time") instead
of its access and modification times.
From the birthtime_touch Github page, which also notes why this is not a trivial thing to accomplish:
birthtime_touch currently only runs on Mac OS X. The minimum required
version is Mac OS X 10.6. birthtime_touch is known to work for files
that are stored on HFS+ and MS-DOS filesystems.
The main problem why birthtime_touch does not work on all systems and
for all filesystems, is that not all filesystems store a file's
creation time, and for those that actually do store the creation time
there is no standardized API to access/change that information.
This page has more details about the reasons why we haven't yet seen support for this feature.
Beyond this tool, it might be worth looking at the source on GitHub to see how it's accomplished and whether it might be portable to Unix/Linux. Beyond that, I imagine it would be necessary to write low-level code to expose those aspects of the filesystems where the crtime is stored.

How to identify a newly added disk in a virtual Linux (RHEL 7 & 6) host using Ansible YAML code

I am planning to write an Ansible playbook for file system creation. I am using the Logical Volume Manager (LVM).
Can anyone help me identify the new LUN using Ansible modules?
Generally speaking, this is very operating-system specific. Your best path forward is to figure out what steps you would take from the command line to detect that there is a new disk, and then use the shell module, parsing the resulting stdout. For example, on Ubuntu you might run fdisk -l, look for unpartitioned/unallocated drives, and parse the output.
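As a rough sketch of that approach (the host pattern and device names are illustrative, and the rescan step assumes a SCSI-attached LUN), you could either ask Ansible's fact gathering for the block devices it sees, or rescan and look for a disk without partitions:
# ad-hoc: list the block devices Ansible's setup module reports for the host
ansible rhel-host -m setup -a 'filter=ansible_devices'
# shell route: rescan the SCSI hosts, then list disks; a new LUN shows up
# as a disk (TYPE=disk) with no partitions on it yet
for h in /sys/class/scsi_host/host*; do echo '- - -' > "$h/scan"; done
lsblk -dno NAME,SIZE,TYPE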

Buildroot with readonly filesystem: allow writing on /etc

I'm preparing a Buildroot IoT project based on the Orange Pi Zero, so I will make it a read-only system. However, I need to persistently write to /etc to update wpa_supplicant.conf when the user configures it for their WiFi network. I also need to update a custom text file with some config parameters, if the user wants to.
I'd like to avoid remounting the whole filesystem read/write every time I need to update a single file.
Which is the best way to accomplish this?
You can set up a writable overlay on top of /etc so changes go there. Options are either overlayfs in the kernel or unionfs using FUSE. As init / initscripts already use /etc, you may need to create a wrapper script around init to set up this overlay before executing init, e.g. something like:
mount -t proc proc /proc
mount /mnt/data
mount -o bind /etc/ /mnt/rom-etc
unionfs -o cow,allow_other,use_ino,nonempty \
    /mnt/data=RW:/mnt/rom-etc=RO /etc/
exec /sbin/init $*
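If the kernel has overlayfs enabled, a minimal sketch of the same idea without FUSE (the paths are illustrative; overlayfs needs an upper and a work directory on the same writable volume):
mount /mnt/data
mkdir -p /mnt/data/etc-upper /mnt/data/etc-work
# lower layer = the read-only /etc from the rootfs,
# upper layer = the persistent writable directory on /mnt/data
mount -t overlay overlay \
    -o lowerdir=/etc,upperdir=/mnt/data/etc-upper,workdir=/mnt/data/etc-work \
    /etc
exec /sbin/init $*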

Make attached media bootable

Good evening everyone! I have been working on this for some time, but can't figure it out. I am simply trying to get the working boot code of a bootloader installed on an attached medium. The host system has its drive listed as /dev/sda* and the target attached medium is listed as /dev/sdb* and is mounted at /mnt/target.
With grub legacy, I was attempting to work with another medium (/dev/sdc*, /mnt/source) that already had grub installed, and tried dirty hacks like:
dd if=/mnt/source/boot/grub/stage1 of=/dev/sdb bs=446 count=1
dd if=/mnt/source/boot/grub/stage2 of=/dev/sdb bs=512 seek=1
This will actually boot into a grub interface where you can enter things like:
root (hd0,0)
setup (hd0)
I get no error messages, but grub will boot to garbage on the screen and then stop.
With lilo, I actually had the package installed and tried to set it up (after creating a lilo.conf):
default=Test1
timeout=10
compact
prompt
lba32
backup=/mnt/target/boot/lilo/MBR.hda.990428
map=/mnt/target/boot/lilo/map
install=/mnt/target/boot/lilo/boot.b
image=/mnt/target/boot/vmlinuz
    label=Test1
    append="quiet ... settime"
    initrd=/mnt/target/boot/ramdisks/working.gz
And then from the prompt execute the following:
$ lilo -C /mnt/target/boot/lilo/lilo.conf -b /dev/sdb
Warning: /dev/sdb is not on the first disk
Fatal: Sorry, don't know how to handle device 0x0701
With grub2, I tried something like:
grub-mkconfig -o /mnt/target/boot/grub/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.11.0-12-generic
Found initrd image: /boot/initrd.img-3.11.0-12-generic
Found memtest86+ image: /boot/memtest86+.bin
No volume groups found
done
I couldn't even get the above to generate a grub.cfg correctly or in the right spot, so I gave up on this one... The entries listed above are for the host system, not the target system.
I can provide any additional information that you guys need to help resolve this problem.
-UPDATE-
After working with the media a bit longer, I decided to run an 'fdisk -l' and was presented with the following info:
Partition 1 has different physical/logical beginnings (non-Linux?):
phys(0,32,33) logical(0,37,14)
Partition 1 has different physical/logical endings:
phys(62,53,55) logical(336,27,19)
I should also note that when I try to mount the partition I always get a message that states:
EXT4-fs (sdb1): couldn't mount as ext3 due to feature incompatibilities
Not sure if that is just specific to BusyBox, or if it is related to the fdisk output. Anyhow, I don't know whether the fdisk info indicates a problem with the disk geometry that could be causing all these bootloaders to fail.
The first-stage boot sector code for grub legacy is in "stage1"; for grub(2) it is in "boot.img". The first-stage code contains the address of the next stage to be loaded on the same disk.
On some other disk, the address of the next stage to be loaded could be (and probably is) different.
I think using chroot and grub-install would be a better way to go.
See Grub2/Installing.
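A minimal sketch of that route (device names are illustrative, assuming the target's root filesystem is on /dev/sdb1):
mount /dev/sdb1 /mnt/target
# grub-install and grub-mkconfig need the kernel interfaces inside the chroot
mount --bind /dev  /mnt/target/dev
mount --bind /proc /mnt/target/proc
mount --bind /sys  /mnt/target/sys
# install the boot code to the target disk's MBR, then generate its grub.cfg
chroot /mnt/target grub-install /dev/sdb
chroot /mnt/target grub-mkconfig -o /boot/grub/grub.cfg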
As for the disk/partition structure:
dd if=/mnt/source/boot/grub/stage2 of=/dev/sdb bs=512 seek=1
writes raw data starting at sector 1, so it may have overwritten the sectors right after the MBR of sdb, possibly including the start of the first partition.
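Before writing any more boot code, a small sketch for checking and backing up what is actually in those first sectors (the device name is illustrative):
# back up the MBR (boot code + partition table) before experimenting further
dd if=/dev/sdb of=/root/sdb-mbr.bak bs=512 count=1
# sanity-check the partition table and the filesystem on the first partition
fdisk -l /dev/sdb
file -s /dev/sdb1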

Mount VHD using GRUB2 loopback command

I need to mount a VHD file at the grub2 command prompt.
I tried using the "loopback" command as shown below:
grub > insmod ntfs
grub > insmod ntldr
grub > loopback loop (hd0,1)/test.vhd
grub > ls (loop)/
error: unknown filesystem
I tried both "static" and "dynamic" VHDs, and both VHD files contained NTFS-partitioned data.
I guess VHD files have some header data which makes the filesystem unrecognizable after a "loopback" mount. I am able to mount and access "iso" files using the same set of commands.
Is my guess correct? If so, is there a way to overcome this issue?
Well, your guess is half right:
While VHD supports a linear "fixed" storage model, which really is just the raw data as it would be stored on a "real" hard drive, followed by a VHD footer, that is usually not the case; VHD also supports dynamically resizing images, which of course aren't linear internally, so you can't simply boot into those.
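A quick sketch for telling the two layouts apart from a Linux shell (the file name is illustrative; per the VHD specification, every image ends with a 512-byte footer whose cookie is "conectix", and dynamic images additionally begin with a copy of that footer):
# footer cookie at the end of the file: present for both fixed and dynamic VHDs
tail -c 512 test.vhd | head -c 8
# "conectix" at offset 0 indicates a dynamic (or differencing) VHD;
# a fixed VHD starts directly with the raw disk data instead
head -c 8 test.vhd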
I was finally able to read the data from the loopback mount with the following change to the grub commands pasted above.
grub > insmod ntfs
grub > loopback loop (hd0,1)/test.vhd
grub > ls (loop,1)/
The file "test.vhd" was a single partitioned VHD file.
NOTE: Only "fixed" or "static" model VHDs work. I could not get it working with "dynamic" VHD (as suggested by #Marcus Müller )
Thanks for the help. Hope this helps somebody.
To use VHD disks with grub2 you need:
insmod part_msdos
insmod ntfs
loopback loop /point/where/disk.vhd tdisk=VHD
ls (loop,msdos1)/
