RootFileSystem vs kernel updating - linux-kernel

I know that they are two different things, but what is the difference between the actual Linux kernel and the rootFS file system especially in terms of location in memory and updates?
Regarding partitioning, why is it that the kernel and rootFS are nearly always on different partitions? Won't the kernel code be stored within the rootFS itself? So how are they on different partitions in memory?
Now regarding updating, I've been looking into an OTA update framework which claims to do a full kernel image update. It uses two separate partitions for rootFS. If there is a problem updating one rootFS partition, it can fall back to the working rootFS partition, which makes sense. However, how is this actually updating the kernel? I don't understand.

I know that they are two different things, but what is the difference between the actual Linux kernel and the rootFS file system especially in terms of location in memory and updates?
The kernel is usually a single image file (like zImage). On ARM systems the kernel also needs a device tree file, but let's set that aside for now. The rootFS, in turn, is a file system that contains all the files under /, like binaries (init, bash), config files (/etc), user home directories, and so on. Sometimes the rootFS contains the kernel image file and sometimes it doesn't; it depends on your particular system.
Regarding partitioning, why is it that the kernel and rootFS are nearly always on different partitions? Won't the kernel code be stored within the rootFS itself? So how are they on different partitions in memory?
The key to your questions is to think about the bootloader (like U-Boot or GRUB). Once you understand how the boot process works in the bootloader, the answer reveals itself. As you mentioned, different partitioning schemes exist, which leads to differences in the boot process. Actually, there are a lot of different boot schemes out there. Let's review some of them, and hopefully that will explain what you want to know.
(1) Kernel and rootfs on different partitions. In this case, the bootloader usually reads the kernel image into RAM and passes the rootfs partition on the kernel command line (via the root= parameter). Then the bootloader starts kernel execution, and the kernel mounts the rootfs from the partition specified by the root= parameter (see the sketch after this list).
(2) Kernel image located inside the rootfs. In this case, the bootloader has to know where exactly the kernel image is located (e.g. at /boot/zImage). The bootloader understands the rootfs format (e.g. ext4) and reads /boot/zImage from the rootfs into RAM. Then execution continues as in the previous case.
(3) Kernel image and rootfs passed over the network (e.g. via TFTP). In this case the rootfs is sometimes placed in RAM and mounted as a ramdisk; no persistent storage is used, and any changes to the rootfs are lost after a reboot. Another network case is mounting the rootfs over NFS, where persistent storage on the server is used (transparently, from the user's point of view).
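To make scheme (1) concrete, here is a minimal userspace sketch: after boot it reads /proc/cmdline, which holds exactly what the bootloader passed, and pulls out the root= parameter. The actual value depends entirely on your bootloader configuration.

```c
/* Minimal sketch: inspect the kernel command line the bootloader passed
 * (scheme (1) above) and pull out the root= parameter. Runs on any Linux
 * system; the exact value of root= depends on your bootloader config. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char cmdline[4096];
    FILE *f = fopen("/proc/cmdline", "r");

    if (!f) {
        perror("fopen /proc/cmdline");
        return 1;
    }
    if (!fgets(cmdline, sizeof(cmdline), f)) {
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("full cmdline: %s", cmdline);

    /* Tokenize on spaces and look for the root= parameter. */
    for (char *tok = strtok(cmdline, " \n"); tok; tok = strtok(NULL, " \n")) {
        if (strncmp(tok, "root=", 5) == 0)
            printf("rootfs will be mounted from: %s\n", tok + 5);
    }
    return 0;
}
```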
Now regarding updating, I've been looking into an OTA update framework which claims to do a full kernel image update. It uses two separate partitions for rootFS. If there is a problem updating one rootFS partition, it can fall back to the working rootFS partition, which makes sense. However, how is this actually updating the kernel? I don't understand.
In terms of updates, it doesn't matter much which scheme you use, (1) or (2). What you are talking about is called (at least in Android) A/B Seamless Updates, meaning that two partitions (A and B) are used for storing the same kind of image (e.g. old rootfs and new rootfs). You need to understand that it's fine to update just the rootfs without the kernel. There is a rule in kernel development that reads: "We don't break userspace." It means you can rely on different kernel versions running the same userspace, or on one kernel version running different userspaces.
So it's more of an architecture question: do you want to update the kernel in your system at all? If yes, then you need to provide two partitions for the kernel and two partitions for the rootfs. Alternatively, you can put the kernel image and rootfs in the same partition (see the Android boot image format, for example) and provide a second partition for updates.
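As a rough illustration of the A/B idea, here is a hypothetical sketch (the status flags, partition names and paths are invented, not taken from any particular OTA framework) of the boot-side decision: read which slot is active, and fall back to the other slot if the active one is marked unbootable.

```c
/* Hypothetical A/B slot selection sketch. The slot table, partition names
 * and "bootable" flags are invented for illustration; real implementations
 * (U-Boot env, Android bootctrl HAL, ...) store this state in their own formats. */
#include <stdio.h>

struct slot {
    const char *rootfs;   /* e.g. /dev/mmcblk0p2 or p3 (made-up names) */
    const char *kernel;   /* e.g. /boot/zImage inside that rootfs */
    int bootable;         /* cleared when an update to this slot failed */
};

int main(void)
{
    struct slot slots[2] = {
        { "/dev/mmcblk0p2", "/boot/zImage", 1 },  /* slot A */
        { "/dev/mmcblk0p3", "/boot/zImage", 1 },  /* slot B */
    };
    int active = 1;  /* pretend the updater just wrote slot B */

    /* Pretend slot B's update was interrupted and it was marked unbootable. */
    slots[1].bootable = 0;

    int chosen = slots[active].bootable ? active : 1 - active;
    printf("booting slot %c: kernel %s from rootfs %s\n",
           chosen ? 'B' : 'A', slots[chosen].kernel, slots[chosen].rootfs);
    return 0;
}
```

If each rootfs slot also carries its own kernel image (scheme (2) above), then switching slots switches the kernel as well, which is one way such a framework can deliver a full kernel image update.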

Related

How does Windows distinguish between disks?

I wonder how Windows distinguishes between different drives and memory modules, I mean how can Windows write something specifically to disk C or disk D.
In every programming language, when you declare a variable it gets saved into memory, and when you need to store something on the HDD, you have to use some library.
So, how does Windows handle it?
Does it treat all disks and memory modules as a single line of data, and does it only save each medium's beginning address? Like 0x00000 is where disk C begins, and 0x15616 is where disk D begins.
Like #MSalters said,
C: is a symlink to something like \Device\HarddiskVolume1.
What it means is that disk drivers on Windows are implemented behind a virtual filesystem layer, a bit like on Linux. I'll explain for Linux since there's much more documentation, but the answer is quite similar for Windows even though the two OSes do things differently.
Basically, on Linux everything is a file. Linux ships with disk drivers, since these are at the basis of every computer, and it exposes a driver model like every OS. The Linux driver model for files (including hard disks) exposes functions that the kernel calls to read from and write to the disk: there are open, read and write operations that the kernel expects a file driver to provide.
If you wanted, you could write a disk driver and replace the existing one. You write drivers as modules that you can then load into the kernel using utilities that ship with Linux. I won't go into more detail, as I'm not that familiar with it. Once your code is loaded into the kernel, it has access to all kernel code and all hardware, since it runs in kernel mode.
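As a rough sketch of that driver model (a minimal example, assuming a reasonably recent kernel and not tied to any real device), a character-device module registers a file_operations table, and the kernel calls its callbacks whenever a process opens or reads the matching /dev node:

```c
/* Minimal character-device sketch: the kernel calls these file_operations
 * when a process opens/reads the device node tied to our major number.
 * Build against kernel headers with a standard module Makefile. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

#define DEVNAME "demo_chr"

static int demo_major;

static ssize_t demo_read(struct file *f, char __user *buf,
                         size_t len, loff_t *off)
{
    const char msg[] = "hello from the kernel\n";

    if (*off >= sizeof(msg))
        return 0;                       /* EOF */
    if (len > sizeof(msg) - *off)
        len = sizeof(msg) - *off;
    if (copy_to_user(buf, msg + *off, len))
        return -EFAULT;
    *off += len;
    return len;
}

static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    .read  = demo_read,
};

static int __init demo_init(void)
{
    demo_major = register_chrdev(0, DEVNAME, &demo_fops);
    if (demo_major < 0)
        return demo_major;
    pr_info("%s: registered with major %d\n", DEVNAME, demo_major);
    return 0;
}

static void __exit demo_exit(void)
{
    unregister_chrdev(demo_major, DEVNAME);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

Loaded with insmod, the module prints its major number to the kernel log; a node such as /dev/demo_chr (a name chosen here only for illustration) can then be created with mknod and read with cat to exercise demo_read.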
Today, disk controllers on the PCI bus typically use DMA: the controller performs disk operations without involving the CPU and loads disk data into RAM directly. The PCI convention says that all compatible devices (like DMA-capable disk controllers) must expose a certain interface to the computer. This interface is mostly a set of memory-mapped registers that can be used to send commands to the controller. The OS writes to these registers to tell the controller to perform disk operations, and the controller raises an interrupt once it is done. The OS then knows that the data has been loaded into RAM and is ready for use. The same applies to writing.
The OS finds the location of these registers at boot by enumerating the PCI configuration space (whose own location is described in firmware tables such as ACPI).
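As a small Linux-side illustration (a sketch of the userspace view, not of the boot-time enumeration itself), the devices the kernel discovered on the PCI bus are exported under /sys/bus/pci/devices, where their vendor and device IDs, read from PCI configuration space, can be listed:

```c
/* Sketch: list the PCI devices Linux has enumerated, via sysfs.
 * Each entry in /sys/bus/pci/devices/ has `vendor` and `device` files
 * containing the IDs read from PCI configuration space. Linux-only. */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    const char *base = "/sys/bus/pci/devices";
    DIR *dir = opendir(base);
    struct dirent *d;

    if (!dir) {
        perror("opendir");
        return 1;
    }
    while ((d = readdir(dir)) != NULL) {
        char path[512], vendor[16] = "", device[16] = "";
        FILE *f;

        if (d->d_name[0] == '.')
            continue;

        snprintf(path, sizeof(path), "%s/%s/vendor", base, d->d_name);
        if ((f = fopen(path, "r"))) { fscanf(f, "%15s", vendor); fclose(f); }

        snprintf(path, sizeof(path), "%s/%s/device", base, d->d_name);
        if ((f = fopen(path, "r"))) { fscanf(f, "%15s", device); fclose(f); }

        printf("%-12s vendor=%s device=%s\n", d->d_name, vendor, device);
    }
    closedir(dir);
    return 0;
}
```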
In modern Windows (2000 or later), C: is a symlink to something like \Device\HarddiskVolume1. The number there can vary. Typically, \Device\Bootpartition is also a symlink to the same HarddiskVolume.
Windows doesn't use libraries to write to disk. Instead, it uses drivers. The chief difference is that drivers run as part of the OS kernel, while libraries run as part of applications.
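To see the drive-letter mapping for yourself, here is a small Windows-only sketch using QueryDosDevice, which resolves a DOS device name like C: to its target in the NT object namespace, typically something like \Device\HarddiskVolume1:

```c
/* Sketch (Windows-only): show what the C: drive letter actually points to
 * in the NT object namespace. QueryDosDevice resolves the DOS device name
 * to its target, typically something like \Device\HarddiskVolume1. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char target[MAX_PATH];

    if (QueryDosDeviceA("C:", target, sizeof(target)) == 0) {
        fprintf(stderr, "QueryDosDevice failed: %lu\n", GetLastError());
        return 1;
    }
    printf("C: -> %s\n", target);
    return 0;
}
```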

How does a computer know where in the filesystem the bootloader is located?

How does a computer know where in the filesystem the bootloader is located? Is there a common file among all operating systems and all computers (maybe not all computers, but all architectures) that points to the bootloader? I know Raspberry Pi always loads bootcode.bin from the first partition of the SD card. Do PCs share a common file like this?
The Master Boot Record occupies the first 512 bytes of the first hard disk, and is the first thing loaded by the BIOS so it can hand over control to a program capable of booting the desired operating system. In general, a bootloader gets installed in the MBR, removing its previous content. In dual-boot setups it is possible for several operating systems to coexist behind one bootloader, which is known as multi-booting.
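As a small illustration (assuming a BIOS/MBR-style disk; you supply the path to a raw device or a disk image), the firmware's check amounts to "load the first 512-byte sector and verify it ends with the 0x55 0xAA boot signature":

```c
/* Sketch: read the first 512-byte sector of a disk (or disk image) and
 * check for the MBR boot signature 0x55 0xAA, the same marker a BIOS
 * looks for before jumping into the boot code. Needs read permission
 * on the device, e.g. ./a.out /dev/sda or ./a.out disk.img */
#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned char sector[512];
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <disk-or-image>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fread(sector, 1, sizeof(sector), f) != sizeof(sector)) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    if (sector[510] == 0x55 && sector[511] == 0xAA)
        printf("valid MBR boot signature found\n");
    else
        printf("no MBR boot signature\n");
    return 0;
}
```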
It differs between architectures, but usually there is a fixed reset vector: an address the CPU fetches its first instruction from after reset to begin execution. That location often contains a jump instruction to another memory address, which is where the boot code lives. On the next cycle the CPU fetches the instruction at that address, and so on.
The hardware designer has to determine how this is implemented. For example, the first instruction could be fetched from an EEPROM chip mapped at that address, which contains the boot code.
As far as PCs go, the motherboard has its own boot process (the BIOS firmware) which will load the OS bootloader. Hence you can still start up a PC and see the BIOS setup screens without an OS installed.
Or at least that's what I remember from my Comp. Arch. class from forever ago.

Does Linux suggest we use sysfs or udev?

When we want to create a device file in the file system, which one should we choose nowadays? Make a node with udev, which will show up in /dev, or use sysfs, which will show up in /sys?
I think I can accomplish most functions for a device in either of these two ways, so it confuses me a lot.
Thanks.
Use udev (and/or define and publish some major & minor device numbers, as for mknod). See makedev(3).
Application programs want to access physical devices in /dev/ (not in /sys/). Data to and from a device usually goes through /dev/ char or block device nodes. Metadata and configuration can go through sysfs.
Read more about udev and about sysfs. See also device file wikipage.
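For illustration, here is a small sketch of creating a device node with mknod(2) and makedev(3), which is essentially what udev automates; the node name and the major/minor numbers below are placeholders, since a real node must use the major number your driver actually registered:

```c
/* Sketch: create a character device node the way udev (or MAKEDEV scripts)
 * would. The path and major/minor numbers below are placeholders; a real
 * node must use the major your driver registered. Needs root. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* makedev() */
#include <sys/types.h>

int main(void)
{
    const char *path = "/dev/mydevice0";   /* hypothetical node name */
    dev_t dev = makedev(240, 0);           /* 240 is in the local/experimental major range */

    if (mknod(path, S_IFCHR | 0660, dev) != 0) {
        perror("mknod");
        return 1;
    }
    printf("created %s (major 240, minor 0)\n", path);
    return 0;
}
```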
You won't get very useful answers if you don't explain your issues more concretely... What exact kind of device are you thinking about? Very probably similar devices already exist....
Publish your device driver and software source code very early (even at the alpha stage, when it is not fully working) as free software, preferably under the GPLv2 (the license used by the Linux kernel). Ask also on kernelnewbies. Expect to work hard (perhaps for more than a year) to get your driver incorporated into the official Linux kernel.
You should be familiar with Advanced Linux Programming (the userspace, application-level world) before attempting to write a kernel driver. After that, read books and resources on Linux kernel driver programming and study the source code of existing drivers in recent Linux kernels.

View Linux kernel drivers built into the kernel, and how they get bound/mounted/started

I'm having a bit of a hard time fully understanding how the kernel starts in Linux. I'm a WinCE developer and our company has decided to move to Linux.
We outsourced all of the board bring-up, and the package I received for the prototype board we have is quite a bit different from the Nitrogen6X we have been using.
Before I start listing the differences between the distros: the kernels are identical. The distro we have been using is a BusyBox system; the one we received from the vendor is sysvinit. I removed mdev from BusyBox and we are only using udev.
When I use the kernel on our BusyBox build, the touch screen driver breaks, or doesn't run, or does something totally magical. I'm not quite sure... there is a /dev/input/event0 device which, when run on the sysvinit side, is a touch device. Is the kernel not the mechanism that binds the built-in drivers to a device node? I thought udev was for more dynamic events in the system.
On the other hand, I can't really tell what has been loaded on my device. Is there a way to list running drivers that were built into the kernel? Is my touchpad up? This is a fairly simple process on WinCE: look at the registry to see which devices were loaded.
I guess what I'm really trying to discover isn't so much how to add a driver to the kernel; it's how the whole thing gets plumbed together. I've found plenty of documents on creating kernel modules, but I haven't found a good resource on how to pull everything together from scratch so you can actually use said modules. Going back to the example of the touchscreen driver: it's built into the kernel, so how does it get plugged into /dev/input/event0?
I'm having a difficult time finding good resources, mostly because searching Google for variations of linux/drivers/device nodes pulls in tons of random crap from everywhere.
What you probably want to use now is evtest. It will let you see which input devices are present and ready to use on your system.
To get more information on the input subsystem, and more generic information on how the kernel works, I can direct you to our training materials. The materials are free to download, use and redistribute.
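If evtest is not available on the target, a minimal reader does essentially the same thing; this sketch assumes the touch device really is /dev/input/event0, which you should confirm in /proc/bus/input/devices:

```c
/* Sketch of what evtest does at its core: open an input device node and
 * dump the events the built-in driver delivers. /dev/input/event0 is an
 * assumption; check /proc/bus/input/devices for the right node. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct input_event ev;
    int fd = open("/dev/input/event0", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/input/event0");
        return 1;
    }
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        printf("type=%u code=%u value=%d\n",
               (unsigned)ev.type, (unsigned)ev.code, ev.value);
    }
    close(fd);
    return 0;
}
```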
The general answer is that there is no single, easy place to look to discover which drivers the kernel has loaded if they are compiled in. Of course, lsmod will display any drivers that were dynamically loaded after kernel boot.
The kernel does not create device nodes. That is, to quote your question, the kernel does not "bind" the driver to the device node. The association between kernel driver and device node is contained in the major and minor numbers registered when the driver is initialized. You can have a device node on your file system for which there is no corresponding driver (common especially in older devices where device nodes were statically created on the file system) and you can also have a driver installed for which there is no device node.
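To see that association on a running system, here is a small sketch (the default path is just an example) that prints the major/minor numbers stored in a device node; the major can then be looked up by name in /proc/devices to find the driver that registered it:

```c
/* Sketch: print the major/minor numbers stored in a device node.
 * Match the major against the names listed in /proc/devices to see
 * which (possibly built-in) driver owns it. The default path is an example. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* major(), minor() */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/dev/input/event0";
    struct stat st;

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    if (!S_ISCHR(st.st_mode) && !S_ISBLK(st.st_mode)) {
        fprintf(stderr, "%s is not a device node\n", path);
        return 1;
    }
    printf("%s: %s device, major %u, minor %u\n", path,
           S_ISCHR(st.st_mode) ? "char" : "block",
           major(st.st_rdev), minor(st.st_rdev));
    return 0;
}
```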
Modern Linux distros dynamically create device nodes on a mount point called /dev, usually a tmpfs file system, meaning it is volatile: it is destroyed on shutdown and recreated dynamically on each new boot.
udev is the magic that creates most device nodes based on events that it receives from the kernel when a new device is discovered (this can be after boot on device plugin, like a USB disk) or on startup when udev reads the queued events and acts on them. As you noted, busybox has a limited udev implementation called mdev.
Study udev and you will get a much better understanding of the process. Hope this helps a little.

What arguments are required to boot the Linux kernel?

I am new to the Linux kernel and am trying to understand the booting of the Linux kernel from the point it is loaded into RAM. I would like to know, after the Linux image is loaded into RAM, how control is passed to this image, which parameters need to be passed to the kernel, and whether we can pass control to the Linux image without passing any parameters.
I am looking into the U-Boot code ("bootm.c") but I am unable to understand where control is passed to the Linux image, and which function is responsible for it.
Is load_zimage() responsible for passing control?
Can anybody point me in the right direction or suggest some good tutorials on this particular part of Linux booting on the x86 architecture?
I think it depends. Different CPU architectures use different ways to pass information to the Linux kernel. Of course, the Linux kernel can boot successfully without the bootloader passing information to it, but then everything has to be set up statically inside the kernel, such as the root device name, the console device, the memory size, and parameters that enable or disable certain kernel features.
Why does the bootloader need to pass various information (parameters) to the Linux kernel? I think it's for flexibility. Consider the case where you want to share one Linux kernel between two boards with the same CPU but different peripheral modules.
Let me show some examples of how U-Boot passes information to the Linux kernel:
(1) For PowerPC CPUs, a DTB (Device Tree Blob) file is nowadays used to pass more information from U-Boot to the Linux kernel. U-Boot and the DTB are treated as firmware, and the Linux kernel consumes the DTB through its Open Firmware (OF) infrastructure. You may know the "bootm" command in U-Boot; bootm can take three parameters: the first is the uImage address, the second is the initrd address, and the third is the DTB address.
(2) In earlier days, bootargs were used to pass information to the Linux kernel. You may also know about the gd/bd structures in U-Boot; they can pass information to the Linux kernel as well. But the information passed this way is limited, unlike with a DTB.
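To see what actually arrived from U-Boot on a device-tree based board, here is a small sketch (Linux-only; /proc/device-tree exists only on systems booted with a DTB) that dumps the board model and the /chosen/bootargs property, which is where the bootloader places the command line it hands over together with the DTB:

```c
/* Sketch: on a DTB-booted system, print the bootargs the bootloader put
 * into the device tree's /chosen node, plus the board model string.
 * /proc/device-tree is absent on machines booted without a device tree. */
#include <stdio.h>

static void dump_file(const char *label, const char *path)
{
    char buf[1024];
    size_t n;
    FILE *f = fopen(path, "r");

    if (!f) {
        printf("%s: (not available)\n", label);
        return;
    }
    n = fread(buf, 1, sizeof(buf) - 1, f);
    buf[n] = '\0';
    fclose(f);
    printf("%s: %s\n", label, buf);
}

int main(void)
{
    dump_file("model   ", "/proc/device-tree/model");
    dump_file("bootargs", "/proc/device-tree/chosen/bootargs");
    return 0;
}
```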
I hope the above information helps you understand your question.
