Programmatically detect if hardware virtualization is enabled on Windows 7 - windows

Background
I've been bouncing around this for a while and still haven't come up with an adequate solution; I'm hoping someone out there can point me in the right direction.
Essentially I need to identify whether I can run a 64-bit VM on a target machine (I'm working in Go but happy to consider binding C code or some assembly, though I feel a bit out of my depth there).
In order to run a 64-bit VM the system needs hardware virtualisation support available and enabled in the BIOS (I'm only concerned with Intel/AMD at this time).
Journey so far
From Windows 8 onwards, Windows ships with Hyper-V, and there is a nice function, IsProcessorFeaturePresent in kernel32.dll, which you can call with an argument of PF_VIRT_FIRMWARE_ENABLED and which will tell you whether hardware virtualisation is enabled in firmware:
IsProcessorFeaturePresent
Now, I don't really like the way this behaves (it reports 'not available' if Hyper-V is installed), but I can cope with that by checking whether Hyper-V is enabled through other means, so this pretty much does the job from Windows 8 upwards.
The problem is that this function always returns false on Windows 7 for some reason, even on a system on which I know hardware virtualization is enabled.
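For reference, a minimal C sketch of that Windows 8+ check might look like the following (the same call can be made from Go via a kernel32.dll binding); the explicit #define is only a fallback in case an older SDK doesn't declare the constant:

#include <windows.h>
#include <stdio.h>

/* PF_VIRT_FIRMWARE_ENABLED is 21 in recent SDK headers; define it here in
 * case the SDK being used is too old to know about it. */
#ifndef PF_VIRT_FIRMWARE_ENABLED
#define PF_VIRT_FIRMWARE_ENABLED 21
#endif

int main(void)
{
    /* Nonzero means the firmware reports hardware virtualization as enabled.
     * As noted above, this is only reliable on Windows 8 and later. */
    BOOL enabled = IsProcessorFeaturePresent(PF_VIRT_FIRMWARE_ENABLED);
    printf("Virtualization enabled in firmware: %s\n", enabled ? "yes" : "no");
    return 0;
}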
Coming from another angle, I have used this library to determine which instruction sets are available: the Intel processor feature lib. This lets me know what type of virtualization instructions are available on the processor (if any).
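For illustration, here is a rough C sketch of the kind of CPUID check such a library performs internally, assuming MSVC's <intrin.h> intrinsics; note that this only reports whether the CPU supports VT-x/AMD-V, not whether the BIOS has enabled it:

#include <intrin.h>
#include <stdio.h>

int main(void)
{
    int regs[4]; /* EAX, EBX, ECX, EDX */

    __cpuid(regs, 1);
    int vmx = (regs[2] >> 5) & 1;   /* Intel VT-x: leaf 1, ECX bit 5 */

    __cpuid(regs, 0x80000001);
    int svm = (regs[2] >> 2) & 1;   /* AMD-V (SVM): leaf 0x80000001, ECX bit 2 */

    printf("VT-x supported: %d, AMD-V supported: %d\n", vmx, svm);
    return 0;
}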
But I'm still missing the final piece: knowing whether it's enabled in the BIOS on Windows 7. I figure that in principle it should be easy from here: I should be able to call something which utilizes the virtualization extensions and see if it responds as expected. Unfortunately, I have no idea how to do this.
Does anyone have any suggestions as to how I might do this?
Note: I'm happy to consider third-party libs, but this would be used in commercial software, so licensing would have to allow for that (e.g. nothing from Microsoft).

I am afraid you won't be able to achieve what you want unless you are ready to provide a kernel driver, because checking whether the BIOS has enabled virtualization requires kernel privileges.
The Intel Software Developer's Manual describes a model-specific register (MSR) with number 3Ah, called IA32_FEATURE_CONTROL. Its bits 1 and 2 control whether VMX instructions are allowed in SMX and non-SMX modes. There is also bit zero which, when written with 1, locks the whole register's value, making it impossible to enable/disable features until the next processor reset. This means that, if the BIOS code has disabled VMX and locked the register, an OS that boots later will be unable to change that fact; it can only observe it.
To read this or any other MSR, one uses the machine instruction RDMSR, and this instruction is only available when CPL is zero, that is, within kernel context. It will raise an exception if used from application code.
Unless you find a programmatic interface that wraps RDMSR and exposes it to applications, you are out of luck. Typically that implies loading and running a dedicated kernel driver. I am aware of one for Linux, but cannot say whether there is anything similar for Windows.
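To make the idea concrete, here is a hedged sketch of the check such a Windows kernel driver could perform, using the __readmsr intrinsic available to kernel-mode code; the function name and surrounding driver plumbing are illustrative assumptions, not a ready-made solution:

#include <ntddk.h>
#include <intrin.h>

/* IA32_FEATURE_CONTROL, MSR 3Ah, as described in the Intel SDM. */
#define IA32_FEATURE_CONTROL 0x3A

static BOOLEAN IsVmxEnabledByFirmware(void)
{
    ULONG64 value = __readmsr(IA32_FEATURE_CONTROL);

    BOOLEAN locked        = (BOOLEAN)((value >> 0) & 1);  /* bit 0: lock bit        */
    BOOLEAN vmxOutsideSmx = (BOOLEAN)((value >> 2) & 1);  /* bit 2: VMX outside SMX */

    /* If the register is locked with VMX disabled, the BIOS has turned it off
     * and nothing short of a reboot (and a BIOS setting change) will help. */
    return locked && vmxOutsideSmx;
}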
As an extra note, if your code is already running inside a virtual machine, as it does on some Windows installations which enable a Hyper-V environment for the regular desktop, then you won't even be able to see the actual host MSR value. It is up to the VMM to provide you with an emulated value, just as it will show you whatever CPUID values it wants you to see, not the ones from the host.

Related

How does the ARM Linux kernel map console output to a hardware device on boot?

I'm currently struggling to determine how I can get an emulated environment via QEMU to correctly display output on the command line. I have an environment that displays perfectly well using the virt reference board, a Cortex-A9 CPU, and the 4.1 Linux kernel cross-compiled for ARM. However, if I swap out the 4.1 kernel for 2.6 or 3.1, suddenly I can no longer see console output.
While solving this issue is my main goal, I feel like I lack a critical understanding of how Linux and the hardware initially integrate before userspace configurations via boot scripts and whatnot have a chance to execute. I am aware of the device tree, and have a loose understanding of how it works. But the issue I ran into where a different kernel version broke console availability entirely confounds me. Can someone explain how Linux initially maps console output to a hardware device on the ARM architecture?
Thank you!
The answer depends quite a bit on which kernel version, what config options are set, what hardware, and also possibly on kernel command line arguments.
For modern kernels, the answer is that the kernel looks in the device tree blob it is passed for descriptions of devices, some of which will be serial ports, and it initializes those. The kernel config or command line will specify which of those is to be used for the console. For earlier kernels, especially if you go all the way back to 2.6, use of the device tree was less universal, and for some hardware the boot loader simply said "this is a Versatile Express board" (for instance) and the kernel had compiled-in data structures telling it where the devices were for each board it supported. As the transition to device tree progressed, boards were converted one by one, and sometimes a few devices at a time, so exactly what the situation was for any specific kernel version depends on which board you're using.
The other thing I rather suspect you're running into is that if the kernel crashes early in bootup (i.e. before it finds the serial port at all) then it will never output anything. So if the kernel is simply too old to support the "virt" board properly at all, or if your kernel config is missing something important, then the chances are good that it crashes in early boot without being able to print you a useful message. (Sometimes the "earlycon" or "earlyprintk" kernel arguments can help here, but not always.)

Immediate and latent effects of modifying my own kernel binary at runtime?

I'm more of a web developer and database guy, but severely inconvenient performance issues relating to kernel_task and temperature on my personal machine have made me interested in digging into the details of my Mac OS (I noticed some processes would trigger long-lasting spikes in kernel_task, despite consistently low CPU temperature and a newly re-imaged machine).
I am a root user on my own OS X machine. I can read /System/Library/Kernels/kernel. My understanding is that this is the Mach/XNU kernel of this machine (I don't know a lot about those, but I'm surprised that it's only 13 MB).
What happens if I modify or delete /System/Library/Kernels/kernel?
I imagine since it's at run-time, things might be okay until I try to reboot. If this is the case, would carefully modifying this file change the behavior of my OS, taking effect only on reboot, presuming it didn't cause a kernel panic? (Is a kernel panic only a Linux thing?)
What happens if I modify or delete /System/Library/Kernels/kernel?
First off, you'll need to disable SIP (system integrity protection) in order to be able to modify or edit this file, as it's protected even from the root user by default for security reasons.
If you delete it, your system will no longer boot. If you replace it with a different xnu kernel, that kernel will in theory boot next time, assuming it's sufficiently matched to both the installed device drivers and other kexts, and the OS userland.
Note that you don't need to delete/replace the kernel file to boot a different one, you can have more than one installed at a time. For details, see the documentation that comes with Apple's Kernel Debug Kits (KDKs) which you can download from the Apple Developer Downloads Area.
I imagine since it's at run-time, things might be okay until I try to reboot.
Yes, the kernel is loaded into memory by the bootloader early on during the boot process; the file isn't used past that, except for producing prelinked kernels when your device drivers change.
Finally, I feel like I should explain a little about what you actually seem to be trying to diagnose/fix:
but severely inconvenient performance issues relating to kernel_task and temperature on my personal machine have made me interested in digging into the details of my Mac OS
kernel_task runs more code than just the kernel core itself. Specifically, any kexts that are loaded (see the kextstat command), and there are a lot of those on a modern macOS system, are loaded into kernel space, meaning they are counted under kernel_task.
Long-running spikes of kernel CPU usage sound like they might be caused by file system self-maintenance, or volume encryption/decryption activity. They are almost certainly not basic programming errors in the xnu kernel itself. (Although I suppose stupid mistakes are easy to make.)
Other possible culprits are device drivers; GPU drivers in particular are incredibly complex pieces of software, and of course they are busy even if your system is seemingly idle.
The first step in dealing with this problem, if there indeed is one, would be to find out what the kernel is actually doing with those CPU cycles. For that you'd want to do some profiling and/or tracing. Doing this on the running kernel most likely again requires SIP to be disabled. The Instruments.app that ships with Xcode is able to profile processes; I'm not sure whether it's still possible to profile kernel_task with it, but I think it at least used to be possible in earlier versions. Another possible option is DTrace. (There are entire books written on this topic.)

Can a Linux kernel run as an ARM TrustZone secure OS?

I am trying to run a Linux kernel as the secure OS on a TrustZone-enabled development board (Samsung Exynos 4412). Some would say the secure OS should be small and simple, but I just want to try. And if it is possible, then writing or porting a trustlet application to this secure OS will be easy, especially for applications with a UI (trusted UI).
The development board I bought came with a runnable secure OS based on xv6, and the normal OS is Android (Android version 4.2.2, kernel version 3.0.15). I have tried to replace the simple secure OS with the Android Linux kernel; that is, with a little assembly code up front (such as clearing the NS bit of the SCR register), I directly call the Linux kernel entry point (with the necessary kernel tagged list passed in).
The kernel decompression code executes correctly and the first C function of the kernel, start_kernel(), is also reached. Almost all the initialization functions run well until execution reaches calibrate_delay(). This function waits for jiffies to change:
/* wait for "start of" clock tick */
ticks = jiffies;
while (ticks == jiffies);
I guess the reason is that no clock interrupt is generated (I print logs in the clock interrupt callback functions, and they are never reached). I have checked the CPSR state before and after the local_irq_enable() call; the IRQ and FIQ bits are set correctly. I also print some logs in the Linux kernel's IRQ handler defined in the interrupt vector table. Nothing is logged.
I know there may be some differences in the interrupt system between the secure world and the non-secure world, but I can't find the differences described in any documentation. Can anybody point them out? And the most important question: as Linux is a very complicated OS, can the Linux kernel run as a TrustZone secure OS?
I am a newbie to the Linux kernel and ARM TrustZone. Please help me.
Running Linux as a secure world OS should work by default; the secure world supervisor is the most trusted mode and can easily transition to the other modes. The secure world is an operating state of the ARM CPU.
Note: Just because Linux runs in the secure world, doesn't make your system secure! TrustZone and the secure world are features that you can use to make a secure system.
But I just want to try. And if it is possible, then writing or porting a trustlet application to this secure OS will be easy, especially for applications with a UI (trusted UI).
TrustZone allows partitioning of software. If you run both Linux and the trustlet application in the same layer, there is no benefit; the trustlet is then just a normal application.
The normal model for trustlets is to set up a monitor vector page on boot and lock down physical access. The Linux kernel can then use the smc instruction to call routines in the trustlet to access DRM-type functionality, to decrypt media, etc. In this model, Linux runs as a normal world OS but can call limited functionality within the secure world through the SMC API you define.
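As a rough illustration (not tied to any particular monitor implementation), a normal-world call into such an SMC API might look like the following on ARMv7 with GCC inline assembly; the register convention is whatever your monitor defines, so the argument layout here is an assumption:

/* Issue an SMC call from the normal world. r0 carries a function ID for the
 * monitor, r1 a single argument, and r0 returns the result; adjust to match
 * the SMC API you define. */
static inline unsigned long smc_call(unsigned long func_id, unsigned long arg)
{
    register unsigned long r0 asm("r0") = func_id;
    register unsigned long r1 asm("r1") = arg;

    asm volatile(
        ".arch_extension sec\n"   /* allow the smc mnemonic on ARMv7 */
        "smc #0\n"                /* trap into monitor mode */
        : "+r"(r0), "+r"(r1)
        :
        : "r2", "r3", "memory");

    return r0;
}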
Almost all the initialization functions run well until execution reaches calibrate_delay().
This is symptomatic of non-functioning interrupts. calibrate_delay() runs a tight loop while waiting for a tick count to increase via system timer interrupts. If you are running in the secure world, you may need to route interrupts. The register GICD_ISPENDR can be used to force an interrupt. You may use this to verify that the ARM GIC is functioning properly. Also, the kernel command line option lpj=XXXXX (where XXXX is some number) can skip this step.
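A minimal sketch of that GICD_ISPENDR test, assuming a GICv2 distributor and code running in kernel context; gic_dist_base is a placeholder for wherever your platform maps the GIC distributor:

#include <linux/io.h>

#define GICD_ISPENDR0_OFFSET  0x200   /* first set-pending register in the distributor */

/* Force the given interrupt ID into the pending state so you can check
 * whether the GIC routing and your handler actually work. */
static void gic_force_interrupt(void __iomem *gic_dist_base, unsigned int irq)
{
    /* One bit per interrupt, 32 interrupts per 32-bit register; the
     * registers are write-1-to-set. */
    void __iomem *reg = gic_dist_base + GICD_ISPENDR0_OFFSET + (irq / 32) * 4;

    writel(1U << (irq % 32), reg);
}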
Most likely some peripheral interrupt router, clock configuration or other interrupt/timer initialization is done by a boot loader in a normal system and you are missing this. Booting a new board is always difficult; even more so with TrustZone.
There is nothing technically preventing Linux from running in the Secure state of an ARM processor. But it defeats the entire purpose of TrustZone. A large, complex kernel and OS such as Linux is infeasible to formally verify to the point that it can be considered "Secure".
For further reading on that aspect, see http://www.ok-labs.com/whitepapers/sample/sel4-formal-verification-of-an-os-kernel
As for the specific problem you are facing - there should be nothing special about interrupt handling in Secure vs. Non-secure state (unless you explicitly configure it to be different). But it could be that the secure OS you have removed was performing some initial timer initializations that are now not taking place.
Also, 3.0.15 is an absolutely ancient kernel; it was released 2.5 years ago, based on something released over 3 years ago.
There are several issues with what you are saying that need to be cleared up. First, are you trying to get the Secure world kernel running or the Normal world kernel? You said you wanted to run Linux in the SW and Android in the NW, but your question stated, "I have tried to replace the simple secure os with the android Linux kernel". So which kernel are you having problems with?
Second, you mentioned clearing the NS-bit. This doesn't really make sense. If the NS-bit is set, clearing it means that you are already running in the NW (as the bit being set would indicate), have executed an SMC instruction, switched to Monitor mode, set the NS-bit to 0, and then restored the SW registers. Is this the case?
As far as interrupts are concerned, have you properly initialized the VBAR for each execution mode, i.e. the Secure world VBAR, the Normal world VBAR, and the MVBAR? All three will need to be set up, in addition to setting the proper values in the NSACR and other registers, to ensure interrupts are channeled to the correct execution world and not all just handled by the SW. You also need separate exception vector tables and handlers for all three modes. You might be able to get away with only one set initially, but once you partition your memory system using the TZASC, you will need to separate everything.
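For illustration, pointing VBAR and MVBAR at your vector tables boils down to two CP15 writes like the sketch below; the table addresses are placeholders for tables you provide. Note that VBAR is banked, so the write hits the Secure or Normal world copy depending on the security state the code runs in, and MVBAR can only be written from secure privileged modes:

/* Set VBAR (banked between Secure and Normal world). */
static inline void set_vbar(unsigned long vectors)
{
    asm volatile("mcr p15, 0, %0, c12, c0, 0" : : "r"(vectors) : "memory");
}

/* Set MVBAR (Monitor vector base); must run in a secure privileged mode. */
static inline void set_mvbar(unsigned long monitor_vectors)
{
    asm volatile("mcr p15, 0, %0, c12, c0, 1" : : "r"(monitor_vectors) : "memory");
}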
TZ requires a lot of configuration and is not simply handled by setting/unsetting the NS-bit. For the Exynos 4412, there are numerous TZ control registers that must be properly set in order to execute in the NW. Unfortunately, none of the information on them is covered in the public version of the User's Guide. You need the full version in order to get all the values and addresses necessary to actually run a SW and NW kernel on this processor.

16-bit Assembly on 64-bit Windows?

I decided to start learning assembly a while ago, and so I started with 16-bit assembly, using FASM.
However, I recently got a really new computer running Windows 7 64-bit, and now none of the compiled .COM files that the program assembles work any more. They give an error message saying that the .COM file is not compatible with 64-bit Windows.
32-bit assembly still works; however, I'd rather start with 16-bit and work my way up...
Is it possible to run a 16-bit program on Windows 7? Or is there a specific way to compile them? Or should I give up and skip to 32-bit instead?
The reason you can't use 16-bit assembly is that the 16-bit subsystem has been removed from all 64-bit versions of Windows.
The only way to remedy this is to install something like DOSBox, or a virtual machine package such as VirtualBox and then install FreeDOS into that. That way, you get true DOS anyway. (NTVDM is not true DOS.)
Personally, would I encourage writing 16-bit assembly for DOS? No. I'd use 32- or even 64-bit assembly, the reason being that there is a different set of function calls for different operating systems (called the ABI). For example, the ABI for 64-bit Linux applications is different from that for 32-bit ones. I am not sure whether that's the case with Windows, but I'm quite sure the meaning of the interrupts is different.
Also, you've got all sorts of things to consider with 16-bit assembly, like the memory model in use. I might be wrong, but I believe DOS gives you 64K of memory to play with "and that's it". Everything, your entire heap and stack along with the code, must fit into this space, as I understand it, which makes you wonder how anything ever worked, really.
My advice would be to just write 32-bit code. While it might initially seem like it would make sense to learn how to write 16-bit code, then "graduate" to 32-bit code, I'd say in reality rather the opposite is true: writing 32-bit code is actually easier because quite a few arbitrary architectural constraints (e.g., on what you can use as a base register) are basically gone in 32-bit code.
For that matter, I'd consider it open to substantial question whether there's ever a real reason to write 16-bit x86 code at all. For most practical purposes it's a dead platform: for desktop machines it's seriously obsolete, and for embedded machines you're more likely to see things like ARMs or Microchip PICs. Unless you have a specific target in mind and know for sure that it's going to be a 16-bit x86, I'd probably forget that it existed, just like most of the rest of the world has.
32-bit Windows 7 and older include / enable NTVDM by default. On 32-bit Win8+, you can enable it in Windows Features.
On 64-bit Windows (or any other 64-bit OS), you need an emulator or full virtualization.
A kernel in long mode can't use vm86 mode to provide a virtual 8086 real-mode environment. This is a limitation of the AMD64 / x86-64 architecture.
With a 64-bit kernel running, the only way for your CPU to natively run in 16-bit mode is 16-bit protected mode (yes this exists; no, nobody uses it, and AFAIK mainstream OSes don't provide a way to use it). Or for the kernel to switch the CPU out of long mode back to legacy mode, but 64-bit kernels don't do that.
But actually, with hardware virtualization (VirtualBox, Hyper-V or whatever using Intel VT-x or AMD SVM), a 64-bit kernel can be the hypervisor for an entire virtual machine, whether that VM is running in 16-bit real mode or running a 32-bit OS (like Windows 98 or 2000) which can in turn use vm86 mode to run 16-bit real-mode executables.
Especially on a 64-bit kernel, it's usually easier to just emulate a 16-bit PC entirely (as DOSBox does) instead of using HW virtualization to run normal instructions natively while trapping direct hardware access (in / out, loads/stores to VGA memory, etc.) and the int instructions that make DOS system calls / BIOS calls / whatever.

kvm vs. vmware for kernel debugging / USB driver development

I'm currently setting up VMware Server 2.0 for kernel debugging with gdb (see this setup guide), and someone asked me why I don't use KVM.
So I ask: KVM vs. VMware for kernel debugging / USB driver development:
What are the pros and cons of each?
Driver development? Are you working on a driver for a particular piece of hardware? If so, then you probably won't be able to use virtualization, because the virtualized instance won't have access to the new hardware.
For this you will need two machines, one running a remote debugger on the other.
Edit: Apparently you're developing a driver for a USB device? This is one area in particular where a VM actually can help. These days most VMs have the ability to delegate specific USB devices to a guest OS.
That said, this situation doesn't really offer any benefits over the remote debugger option, because you still need a way to inspect the state of the running or crashed OS, and VMs offer very little assistance in this regard. You might be able to replay saved states from just before a crash.
You might be able to get a bit of traction using UML, which would allow you to do local debugging as for a regular user process; that's a little less trouble.
Instead of answering the direct question I'll add another option... Depending on whether the kernel in question is a Linux kernel, and which part(s) of it you are working on, you might find that User-Mode Linux (included in the 2.6.x source, and available as patch sets for 2.4 and 2.2) trumps both of those options.
As it runs the kernel as a userland process under the host kernel, it is easier to attach common debugging tools to. I believe it is very commonly used in the early stages of updates/additions to filesystem-related code. If you are developing/debugging modules that interact directly with hardware, it may be of much less use to you, though.
Reference links: home, other
I recently started building GNU Mach/Hurd and found the combination of QEMU/KVM to work really quite well, for the following reasons:
QEMU presents quite a clean environment
Networking has a lot of options
I can easily mount the filesystem using a raw device file / loopback
The bottom line is, for kernel work I just want the minimum of functionality needed to boot and see the result. VMware is aimed much more at usable virtualization than at down-and-dirty work.
There is, however, no comparison to booting on a real machine with real hardware. The VM environment can seem like a safety blanket sometimes... because even my toaster would know what a Realtek RTL8139C is.
If it is a "real hardware" device, of course, vmware will not emulate it, so you won't be able to debug the driver under it (nor will any other virtualisation software, unless you extend one to do so).
Device driver debugging can be done to some extent with a real hardware machine with a normal kernel - although there are obviously things you can't do - like set breakpoints.
It is still possible to attach a debugger to the kernel and inspect stuff. Moreover, traditional printf() debugging is quite possible (printk, anyone), and there are various features in the kernel which make debugging easier. It's possible to build the kernel with various debug options to try to detect pointer problems, memory leaks etc.
By default, the kernel even gives a nice-ish stack trace on the log when it encounters an OOPS or BUG condition (obviously this does not necessarily get written anywhere if the system hangs or crashes). Of course a pointer-out-of-range condition happening inside an interrupt is a recipe for disaster, but you could still get a stack trace on the screen immediately before the panic :)
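As a trivial illustration of that printk-style approach (the function and its state check here are made up for the example), something like this logs progress and, via WARN_ONCE, dumps a stack trace to the log without an actual oops:

#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/bug.h>

/* Hypothetical checkpoint inside a driver or kernel patch under investigation. */
static void debug_checkpoint(int state)
{
    pr_info("reached checkpoint, state=%d\n", state);   /* shorthand for printk(KERN_INFO ...) */

    /* Logs a warning plus a stack trace the first time the condition is hit. */
    WARN_ONCE(state < 0, "unexpected negative state %d\n", state);
}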
