I have learned that when a computer starts it is in real mode, and that the kernel is responsible for switching from real mode to protected mode. OK, my question is: does the GRUB boot loader run in real mode or in protected mode?
AFAIK, GRUB starts in real mode like any other software loaded at boot. It switches to protected mode for its own run time (detecting hard disks, displaying menus, etc.) and switches back into real mode before loading and running an OS, such as Linux, that does not support the Multiboot protocol.
See http://duartes.org/gustavo/blog/post/kernel-boot-process for a detailed answer, but basically GRUB does not switch to protected mode when running Linux. It loads the real-mode part of the kernel in low memory and lets the kernel do the switch itself (as required by the Linux boot protocol, http://lxr.linux.no/#linux+v2.6.25.6/Documentation/i386/boot.txt).
However, GRUB also supports the Multiboot Specification, which starts the loaded OS in protected mode. This is used for non-Linux kernels, such as modern homebrew OSes whose makers do not want to deal with the hassle of switching to protected mode themselves.
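For illustration, here is a minimal sketch (in C, relying on GCC section attributes and on a linker script that keeps the section within the first 8 KiB of the image; all names here are mine) of the Multiboot v1 header a kernel embeds so that GRUB loads it and enters it already in 32-bit protected mode:

/* Multiboot v1 header: GRUB scans the first 8 KiB of the image for this. */
#define MULTIBOOT_HEADER_MAGIC 0x1BADB002
#define MULTIBOOT_HEADER_FLAGS 0x00000000

struct multiboot_header {
    unsigned int magic;
    unsigned int flags;
    unsigned int checksum;   /* magic + flags + checksum must sum to 0 */
};

/* Put the header in an early section; the linker script must place this
   section near the start of the kernel image. */
__attribute__((section(".multiboot"), used, aligned(4)))
static const struct multiboot_header mb_header = {
    .magic    = MULTIBOOT_HEADER_MAGIC,
    .flags    = MULTIBOOT_HEADER_FLAGS,
    .checksum = (unsigned int)-(MULTIBOOT_HEADER_MAGIC + MULTIBOOT_HEADER_FLAGS),
};

With such a header present, GRUB jumps to the kernel's entry point with paging disabled and flat code/data segments, already in protected mode, which is exactly the hassle the Multiboot route removes.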
I'm trying to understand how U-Boot's PSCI interface is used to boot the kernel into HYP mode.
Going through the U-Boot source, I see there is a generic psci.S as well as board-specific psci.S files, and I have the following doubts:
1). How and where does psci.S fit into the normal U-Boot flow (when and how are PSCI services like cpu_on and cpu_off called while U-Boot is booting)?
2). How is this PSCI interface of U-Boot used to boot the kernel in HYP mode (what is it in the PSCI interface that allows the Linux kernel to be booted in HYP mode)?
1). How and where does psci.S fit into the normal U-Boot flow
U-boot sets aside some secure-world memory and copies that secure monitor code there, so that it can stay resident after U-Boot exits and provide minimal handling of some PSCI SMC calls.
(when and how are PSCI services like cpu_on and cpu_off called while U-Boot is booting)
They aren't. U-Boot runs and hands over to Linux on the primary CPU only. Linux may bring up secondary cores later in its own boot process, but U-Boot is long gone by then, except for the aforementioned secure monitor code.
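To make that concrete, here is a rough sketch of what the non-secure side's PSCI CPU_ON call looks like on 32-bit ARM when Linux does bring up a secondary core later. The 0x84000003 function ID is CPU_ON in the PSCI 0.2 SMC32 convention (PSCI 0.1 firmware instead advertises its function IDs through the device tree); the helper itself is illustrative, not the actual kernel code:

/* Ask the resident secure monitor to power on another CPU. */
static int psci_cpu_on(unsigned long target_mpidr, unsigned long entry_point)
{
    register unsigned long r0 asm("r0") = 0x84000003;   /* PSCI 0.2 CPU_ON */
    register unsigned long r1 asm("r1") = target_mpidr; /* which CPU to start */
    register unsigned long r2 asm("r2") = entry_point;  /* physical entry address */
    register unsigned long r3 asm("r3") = 0;            /* context_id */

    asm volatile("smc #0"
                 : "+r" (r0)
                 : "r" (r1), "r" (r2), "r" (r3)
                 : "memory");

    return (int)r0;  /* PSCI return code; 0 means SUCCESS */
}

The smc instruction traps into the monitor code that U-Boot left resident in secure memory, which does the actual power-controller poking and returns.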
2). How is this PSCI interface of U-Boot used to boot the kernel in HYP mode
It isn't.
(what is it in the PSCI interface that allows the Linux kernel to be booted in HYP mode)?
Nothing.
The point behind the patch series you refer to is that it was (and unfortunately still is) all too common for 32-bit ARM platforms to have TrustZone-aware hardware designs but crap software support. The vendor BSPs implement just enough bootloader to get the thing started and never switch out of secure SVC mode from boot, so the whole of Linux runs in the secure world, and their kernel is full of highly platform-specific code poking secure-only hardware like the power controller directly. This poses a problem if you want to use virtualisation (which, if you're the KVM/ARM co-maintainer and have recently bought yourself a CubieTruck, is obviously a pressing concern...). It's easy enough to take the U-Boot code for such a platform and make it switch into NS-HYP before starting Linux, thus enabling KVM (upstream U-Boot already had some rudimentary support for this at the time), but once you've dropped out of the secure world you can't then later bring up your secondary CPUs with the original smp_operations that depend on touching secure-only hardware.
By implementing some trivial runtime "firmware" hanging off the back of the bootloader, you then have the simplest way of calling back into the secure world as needed, and it makes the most sense to move the necessary platform-specific code there to abstract the operations away behind a simple firmware call interface, especially if there's already a suitable one supported by Linux.
There's absolutely nothing special about PSCI itself - there are plenty of ARM platforms that have proper secure firmware, with which they implement power management and SMP operations, but via their own proprietary protocol. The only vaguely relevant aspect is that conforming to the PSCI spec guarantees that all CPUs are going to come up in the same mode, so if you did initially enter Linux in HYP, you won't see mismatched secondaries coming up in some other incompatible mode or security state.
I am very new to Hackintosh and now I am studying the boot process.
As far as I know:
an .efi binary is "byte-code" that the UEFI firmware runs
a kext is a kernel-mode device driver, compiled to machine-specific code, loaded by the kernel and running in kernel mode alongside the kernel
kext injection is like dynamic loading of a library, but in kernel mode
My question is: why is there some relationship between bootloaders like Chameleon/Clover and the kexts? The kexts should be loaded by the kernel, not the bootloader, right?
I saw something about this here:
http://cloverboot.weebly.com/kexts.html
Say a Hackintosh needs FakeSMC.kext. That should not be the business of the bootloader. All the bootloader needs to do is put the init code of the Mac OS kernel in memory and pass control to it, and it should then be the Mac OS kernel that loads FakeSMC.kext.
Isn't that right?
Firstly, PCs in the past only had a legacy BIOS and no EFI, whereas Apple never used a legacy BIOS, only EFI.
But this has changed: most modern PCs now have built-in UEFI, so there is no need to emulate EFI.
There are two ways to boot OS X on a Hackintosh with a legacy BIOS: the first is Chameleon and the second is Clover.
Clover and Chameleon load OS X differently.
Clover uses a modified version of DUET EFI (an open-source EFI implementation on top of a legacy BIOS), or, if the computer has its own UEFI built in, Clover uses that. Clover also uses the default bootloader on the OS X partition, located at /System/Library/CoreServices/boot.efi, to boot OS X. boot.efi loads the kexts and passes control to the kernel, just as on a real Mac.
Chameleon has its own built-in fake EFI implementation that makes the kernel think it's running on an EFI Mac. But that fake EFI is not enough to load boot.efi, so Chameleon has its own loader. Chameleon loads the kexts by itself and then passes control over to the kernel.
Both bootloaders have built in ACPI table injection, SMBIOS spoofing, Device ID injection, etc.
FakeSMC is an emulator of the System Management Controller found in a real Mac, which contains the key used to decrypt Apple Protected Binaries.
Chameleon loads FakeSMC and the other kexts by itself, either standalone or as part of the kernelcache; if you use Clover, the same thing gets done by boot.efi.
Note:
Clover has a feature, which is probably what you're referring to, that injects kexts on the fly. This makes it seem like they get loaded by Clover, but they actually become part of the kernelcache.
I am trying to run a Linux kernel as the secure OS on a TrustZone-enabled development board (Samsung Exynos 4412). I know some would say a secure OS should be small and simple, but I just want to try. And if it is possible, writing or porting a trustlet application to this secure OS would be easy, especially for applications with a UI (trusted UI).
I bought the development board with a runnable secure OS based on Xv6, and the normal-world OS is Android (Android version 4.2.2, kernel version 3.0.15). I have tried to replace the simple secure OS with the Android Linux kernel; that is, with a little assembly code ahead of it, such as clearing the NS bit of the SCR register, I directly called the Linux kernel entry point (with the necessary kernel tagged list passed in).
The kernel decompression code executes correctly and the first C function of the kernel, start_kernel(), is also reached. Almost all of the initialization functions run well until execution gets to calibrate_delay(). This function waits for jiffies to change:
/* wait for "start of" clock tick */
ticks = jiffies;
while (ticks == jiffies);
I guess the reason is that no clock interrupt is being generated (I print logs in the clock interrupt callback functions, and they are never reached). I have checked the CPSR state before and after the local_irq_enable() function; the IRQ and FIQ bits are set correctly. I also print some logs in the Linux kernel's IRQ handler defined in the interrupt vector table. Nothing is logged.
I know there may be some differences in the interrupt system between the secure world and the non-secure world, but I can't find those differences documented anywhere. Can anybody point them out? And the most important question: as Linux is a very complicated OS, can the Linux kernel run as a TrustZone secure OS?
I am a newbie in Linux kernel and ARM TrustZone. Please help me.
Running Linux as a secure-world OS should work by default: the secure-world supervisor is the most trusted mode and can easily transition to the other modes. The secure world is an operating concept of the ARM CPU.
Note: Just because Linux runs in the secure world, doesn't make your system secure! TrustZone and the secure world are features that you can use to make a secure system.
But I just want to try. And if it is possible, writing or porting a trustlet application to this secure OS would be easy, especially for applications with a UI (trusted UI).
TrustZone allows partitioning of software. If you run both Linux and the trustlet application in the same layer, there is no use. The trustlet is just a normal application.
The normal model for trustlets is to set up a monitor vector page at boot and lock down physical access. The Linux kernel can then use the smc instruction to call routines in the trustlet, for example to access DRM-type functionality to decrypt media, etc. In this model, Linux runs as a normal-world OS but can call limited functionality within the secure world through the SMC API you define.
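As a rough illustration of the secure-side half of that model, here is a sketch of installing the monitor vector table before dropping to the normal world. The CP15 encoding (MVBAR is c12, c0, 1) is from the ARM Architecture Reference Manual; monitor_vectors is an assumed symbol for an 8-entry, suitably aligned vector table whose SMC slot branches to your handler:

/* Must run in a secure privileged mode (typically during secure boot). */
extern char monitor_vectors[];   /* assumed: defined in assembly, 32-byte aligned */

static inline void install_monitor_vectors(void)
{
    /* MVBAR: Monitor Vector Base Address Register (CP15 c12, c0, 1). */
    asm volatile("mcr p15, 0, %0, c12, c0, 1"
                 :: "r" (monitor_vectors) : "memory");
    asm volatile("isb");
}

After this, every smc issued from the normal world enters the SMC slot of monitor_vectors in Monitor mode, which is where the platform-specific secure-only work lives.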
Almost all the initialization functions run well except running to calibrate_delay().
This is symptomatic of non-functioning interrupts. calibrate_delay() runs a tight loop while waiting for a tick count to increase via system timer interrupts. If you are running in the secure world, you may need to route interrupts differently. The register GICD_ISPENDR can be used to force an interrupt, and you may use this to verify that the ARM GIC is functioning properly. Also, the kernel command-line option lpj=XXXXX (where XXXXX is some number) can skip this step.
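For example, here is a hedged sketch of forcing an interrupt pending via GICD_ISPENDR to check that the GIC and the kernel's handler wiring work at all. gic_dist_base is assumed to be the already-mapped GIC distributor base (on the Exynos 4412 the distributor is commonly at physical address 0x10490000, but verify against your board); the ISPENDR bank starts at offset 0x200 and is write-1-to-set:

static void gic_force_pending(volatile unsigned int *gic_dist_base,
                              unsigned int irq)
{
    /* GICD_ISPENDRn: one bit per interrupt ID, 32 IDs per register. */
    volatile unsigned int *ispendr = gic_dist_base + (0x200 / 4) + irq / 32;

    *ispendr = 1u << (irq % 32);   /* set the pending bit for this interrupt */
}

If the corresponding handler still never runs, the problem is in interrupt routing or CPU interface configuration rather than in the timer itself.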
Most likely some peripheral interrupt router, clock configuration or other interrupt/timer initialization is done by a boot loader in a normal system and you are missing this. Booting a new board is always difficult; even more so with TrustZone.
There is nothing technically preventing Linux from running in the Secure state of an ARM processor. But it defeats the entire purpose of TrustZone. A large, complex kernel and OS such as Linux is infeasible to formally verify to the point that it can be considered "Secure".
For further reading on that aspect, see http://www.ok-labs.com/whitepapers/sample/sel4-formal-verification-of-an-os-kernel
As for the specific problem you are facing - there should be nothing special about interrupt handling in Secure vs. Non-secure state (unless you explicitly configure it to be different). But it could be that the secure OS you have removed was performing some initial timer initializations that are now not taking place.
Also 3.0.15 is an absolutely ancient kernel - it was released 2.5 years ago, based on something released over 3 years ago.
There are several issues with what you are saying that need to be cleared up. First, are you trying to get the Secure world kernel running or the Normal world kernel? You said you wanted to run Linux in the SW and Android in the NW, but your question stated, "I have tried to replace the simple secure os with the android Linux kernel". So which kernel are you having problems with?
Second, you mentioned clearing the NS-bit. This doesn't really make sense. If the NS-bit is set, clearing it requires that you are already running in the NW (as the bit being set would indicate), that you execute an SMC instruction, switch to Monitor mode, set the NS-bit to 0, and then restore the SW registers. Is this the case?
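For reference, toggling SCR.NS from Monitor mode looks roughly like the sketch below; the CP15 encoding (SCR is c1, c1, 0; NS is bit 0) is from the ARM Architecture Reference Manual, and the helper is only illustrative:

/* Only works from the secure world / Monitor mode; these accesses are
   unavailable from the normal world. */
static inline void scr_clear_ns(void)
{
    unsigned int scr;

    asm volatile("mrc p15, 0, %0, c1, c1, 0" : "=r" (scr)); /* read SCR */
    scr &= ~1u;                                              /* clear NS, bit 0 */
    asm volatile("mcr p15, 0, %0, c1, c1, 0" :: "r" (scr));  /* write SCR back */
    asm volatile("isb");
}

If your code can execute this at all, you are in the secure world already; if you were in the normal world, you would have to go through Monitor mode (via SMC) to reach SCR in the first place.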
As far as interrupts are concerned, have you properly initialized the VBAR for each execution mode, i.e. the Secure-world VBAR, the Normal-world VBAR, and the MVBAR? All three need to be set up, in addition to setting the proper values in the NSACR and other registers, to ensure interrupts are channeled to the correct execution world and not all just handled by the SW. You also need separate exception vector tables and handlers for all three modes. You might be able to get away with only one set initially, but once you partition your memory system using the TZASC, you will need to separate everything.
TZ requires a lot of configuration and is not simply handled by setting/unsetting the NS-bit. For the Exynos 4412, there are numerous TZ control registers that must be properly set in order to execute in the NW. Unfortunately, none of the information on them is covered in the public version of the User's Guide. You need the full version in order to get all the values and addresses necessary to actually run a SW and NW kernel on this processor.
I am aware of the Windows kernel but am new to the Linux kernel. I just need to know how it's done in Linux, i.e. how kernel program development works.
You can check free-electrons.com; it's a good source of information on kernel development (specialized in embedded Linux, but most of the docs apply to standard kernel development as well).
There is also the classic Linux Device Drivers book, which is very complete and detailed.
And last but not least, the Linux kernel documentation.
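To give a feel for what kernel-side development looks like, here is a minimal sketch of a loadable module of the kind those resources start from (the names and messages are illustrative):

#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;               /* 0 means the module loaded successfully */
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

It is built out of tree against the headers of the running kernel (make -C /lib/modules/$(uname -r)/build M=$PWD modules), loaded with insmod and removed with rmmod; the messages show up in dmesg.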
Linux does not have a stable kernel API. This is by design, so you should generally avoid writing kernel code if you can; it is unlikely to remain source-compatible indefinitely, and will definitely NOT be binary-compatible, even between minor releases.
This is less true for vendor kernels; Red Hat etc. DO maintain source and binary kernel compatibility between major revisions.
More work is gradually being done in the kernel to reduce the amount of kernel code required to carry out various tasks, such as driver development (for example, USB drivers can typically be done in userspace with libusb), filesystem development (FUSE) and network filtering (NFQUEUE). However, there are still some cases where you need kernel code; in particular, block devices still need to be in the kernel to be usable as boot devices and for swap.
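As an illustration of the userspace route, here is a small sketch using libusb-1.0 to enumerate USB devices, which is the kind of starting point a userspace "driver" has (error handling trimmed):

#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_device **devs;
    ssize_t count, i;

    if (libusb_init(NULL) != 0)          /* initialise the default context */
        return 1;

    count = libusb_get_device_list(NULL, &devs);
    for (i = 0; i < count; i++) {
        struct libusb_device_descriptor desc;

        if (libusb_get_device_descriptor(devs[i], &desc) == 0)
            printf("found device %04x:%04x\n", desc.idVendor, desc.idProduct);
    }

    libusb_free_device_list(devs, 1);    /* unreference the listed devices */
    libusb_exit(NULL);
    return 0;
}

Nothing here touches the kernel API directly, and a crash only takes down this process, which is exactly the advantage of staying in userspace when you can.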
What is a Windows kernel driver written with the WDK?
How is it different from a normal app or service?
Kernel drivers are programs written against Windows NT's native API (rather than the Win32 Subsystem's API) and which execute in kernel mode on the underlying hardware. This means that a driver needs to be able to deal with switching virtual memory contexts between processes, and needs to be written to be incredibly stable -- because kernel drivers run in kernel mode, if one crashes, it brings down the entire system. Kernel drivers are unsuitable for anything but hardware devices because they require administrative access to install or start, and because they remove the security the kernel normally provides to programs that crash -- namely, that they crash themselves and not the entire system.
Long story short:
Drivers use the native API rather than the Win32 API
This means that drivers generally cannot display any UI.
Drivers need to manage memory and how memory is paged explicitly -- using things like paged pool and nonpaged pool.
Drivers need to deal with process context switching and must not depend on which process's address space happens to be mapped while they're running.
Drivers cannot be installed into the kernel by limited users.
Drivers run with privileged rights at the processor level.
A fault in a user-level program results in termination of that program's process. A fault in a driver brings down the system with a Blue Screen of Death.
Drivers need to deal with low level hardware bits like Interrupts and Interrupt Request Levels (IRQLs).
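For a sense of what that looks like in code, here is a minimal sketch of a WDM-style driver skeleton as built with the WDK; DriverEntry is the kernel-mode counterpart of main(), and the name and debug prints are illustrative:

#include <ntddk.h>

static DRIVER_UNLOAD DriverUnload;

static VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    DbgPrint("sample: unloading\n");
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DbgPrint("sample: DriverEntry\n");

    DriverObject->DriverUnload = DriverUnload;   /* allow the driver to be stopped */
    return STATUS_SUCCESS;
}

Even this empty skeleton already runs in kernel mode: a bad pointer dereference here bugchecks the machine instead of just killing a process.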
It is code that runs in kernel mode rather than user mode. Kernel mode code has direct access to the internals of the OS, hardware etc.
Invariably you write kernel mode modules to implement device drivers.
A kernel driver is a low-level implementation of an "application".
Because it runs in the kernel context, it has the ability to access the kernel API and memory directly.
For example, a kernel driver should be used to:
Control access to files (password protection, hiding)
Allow access to non-standard filesystems (like ext, reiserfs, zfs, etc.) and devices
True API hooks
...and for many other reasons
If you'd like to get know more, you can search for keyword "ring0" with your favorite search engine.
Others have explained the difference from the system-level perspective.
If you are doing development in C++, here are some differences between user-mode and kernel-mode development:
Unhandled exceptions crash the process in user mode, but in kernel mode they crash the whole system (you get a BSOD).
When a user-mode process terminates without freeing its private memory, the system frees that memory implicitly. In kernel mode, leaked memory is not reclaimed until the system is rebooted.
User-mode code is written and executed at PASSIVE_LEVEL. In kernel mode, there are more IRQL levels to deal with.
Kernel-code debugging is done using a separate machine, but you can debug user-mode code on the same machine.
You can't use all C++ functionality in kernel mode, such as exception handling and the STL.
The entry points are different: in user mode you use main as the entry point, but in kernel mode you use DriverEntry.
You can't use the new operator in kernel mode; you need to overload it explicitly.