Is there a way to lock the L2 cache on a Pandaboard ES running Ubuntu?
The TRM says it is possible, but I don't know whether it is feasible on the Pandaboard.
I've tried to compile a kernel module and set the bits in the Auxiliary Control Register using cp15, but I suppose the register is read-only for me, because I cannot write to it.
CPSR says I am in a privileged mode, but I guess that's the non-secure privileged mode?
How do I use the PL310 cache controller to do that?
Do I need to use TrustZone somehow?
When I compile an SMC #1 instruction with asm volatile(...) natively on the Panda, the board stops responding as soon as I run it (even when pinned with taskset).
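For reference, a minimal sketch of the kind of cp15 access the question describes, as a kernel module (the read works; it is the corresponding write that fails from the non-secure world, and on OMAP4 it would have to go through the secure monitor; module and log names are illustrative):

#include <linux/module.h>
#include <linux/kernel.h>

static int __init actlr_probe_init(void)
{
    unsigned int actlr;

    /* MRC p15, 0, <Rt>, c1, c0, 1 reads the Cortex-A9 ACTLR */
    asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r"(actlr));
    pr_info("ACTLR = 0x%08x\n", actlr);

    /* The matching MCR write is the part that traps or is ignored
     * when running in the non-secure world. */
    return 0;
}

static void __exit actlr_probe_exit(void)
{
}

module_init(actlr_probe_init);
module_exit(actlr_probe_exit);
MODULE_LICENSE("GPL");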
Try the OmapConf application to change the register you need.
I haven't tried it on Ubuntu, but I have used it on Android.
Background
I've been bouncing around this for a while and still haven't come up with an adequate solution, hoping someone out there can point me in the right direction.
Essentially I need to identify whether I can run a 64-bit VM on a target machine (working in Go, but happy to consider binding C code or some assembly, though I feel a bit out of my depth there).
In order to run a 64-bit VM, the system needs hardware virtualisation support available and enabled in the BIOS (I'm only concerned with Intel/AMD at this time).
Journey so far
From Windows 8 onwards, Windows ships with Hyper-V, and there is a nice function, IsProcessorFeaturePresent in kernel32.dll, which when called with the argument PF_VIRT_FIRMWARE_ENABLED will tell you whether hardware virtualisation is enabled in firmware:
IsProcessorFeaturePresent
Now, I don't really like the way this behaves (it reports not available if Hyper-V is installed), but I can cope with that by checking whether Hyper-V is enabled through other means, so it pretty much does the job from Windows 8 upwards.
The problem is that this function always returns false on Windows 7 for some reason, even on a system on which I know hardware virtualization is enabled.
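For concreteness, the call itself looks roughly like this (a minimal C sketch; PF_VIRT_FIRMWARE_ENABLED is defined in the Windows SDK headers from Windows 8 on):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Reports whether the firmware has virtualisation enabled
     * (Windows 8+; unreliable on Windows 7 as described above). */
    BOOL enabled = IsProcessorFeaturePresent(PF_VIRT_FIRMWARE_ENABLED);

    printf("virtualisation enabled in firmware: %s\n",
           enabled ? "yes" : "no");
    return 0;
}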
Coming from another angle, I have used this lib to determine what instruction sets are available: the Intel processor feature lib. This lets me know what type of virtualization instructions are available on the processor (if any).
But I'm still missing the final piece: knowing whether it's enabled in the BIOS on Windows 7. I figure that in principle it should be easy from here - I should be able to call something which utilizes the virtualization extensions and see if it responds as expected. But unfortunately I have no idea how to do this.
Does anyone have any suggestions as to how I might do this?
Note: I'm happy to consider third-party libs, but this would be used in commercial software, so licensing would have to allow for that (e.g. nothing from Microsoft).
I am afraid you won't be able to achieve what you want unless you are ready to provide a kernel driver, because checking whether the BIOS has enabled virtualization requires kernel privileges.
The Intel Software Developer's Manual describes a model-specific register (MSR) numbered 3Ah, called IA32_FEATURE_CONTROL. Its bits 1 and 2 control whether VMX instructions are allowed in SMX and non-SMX modes. There is also bit zero which, when written with 1, locks the whole register's value, making it impossible to enable or disable these features until the next processor reset. This means that if the BIOS code has disabled VMX and locked the register, an OS that boots later is unable to change that fact - it can only observe it.
To read this or any other MSR, one must use the RDMSR machine instruction, which is only available when CPL is zero, that is, within an OS kernel context. It raises an exception if executed from application code.
Unless you find a programming interface that wraps RDMSR and exposes it to applications, you are out of luck. Typically that implies loading and running a dedicated kernel driver. I am aware of one for Linux, but cannot say whether there is anything similar for Windows.
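On Linux, for example, the stock msr driver exposes RDMSR to root through /dev/cpu/N/msr, so the check can be sketched like this (assuming the msr module is loaded; the bit layout follows the SDM description above):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    uint64_t val;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);   /* needs root */
    if (fd < 0) { perror("open"); return 1; }

    /* With the msr driver, the file offset is the MSR number, so a
     * pread at 0x3A performs RDMSR of IA32_FEATURE_CONTROL. */
    if (pread(fd, &val, sizeof val, 0x3A) != sizeof val) {
        perror("pread"); close(fd); return 1;
    }

    printf("IA32_FEATURE_CONTROL = %#llx: lock=%llu, VMX outside SMX=%llu\n",
           (unsigned long long)val,
           (unsigned long long)(val & 1),
           (unsigned long long)((val >> 2) & 1));
    close(fd);
    return 0;
}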
As an extra note, if your code is already running inside a virtual machine, as it is on some Windows installations which enable a Hyper-V environment for the regular desktop, then you won't even be able to see the actual host MSR value. It is up to the VMM to provide you with an emulated value, just as it shows you whatever CPUID values it wants you to see, not the ones from the host.
I'm trying to use a userspace built for the i.MX53 on an identical board with an i.MX6. The i.MX6 board differs only in the CPU used. I built a new kernel and the appropriate DTB; I can load it with U-Boot and it starts fine. However, when I try to use the rootfs I had for the i.MX53 board, I get the following JFFS2 error:
jffs2: inconsistent device description
which has something to do with the flash OOB area not containing valid information.
I write the rootfs into a flash partition with the nand write.trimffs command. Do I need to initialize the OOB somehow? I don't remember doing that on the old board. Where can this error come from?
It turns out the i.MX6 NAND controller (the gpmi driver) uses the entire OOB space for ECC, and JFFS2 cannot fit its cleanmarkers there. It is possible to tell the kernel to use the weaker ECC requirements from the NAND chip's specification via the fsl,use-minimum-ecc device tree option, freeing up some OOB space. However, U-Boot does not seem to support such ECC reconfiguration, so it becomes impossible to use the NAND from both the bootloader and Linux. Probably the best way forward in this situation is to ditch JFFS2 and use UBIFS instead.
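For illustration, the property goes on the NAND controller node in the board's .dts, something like this (the &gpmi label is the usual one for this controller but is shown only as an example; check your board's tree):

&gpmi {
        status = "okay";
        fsl,use-minimum-ecc;
};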
Note: I've seen JFFS2 patches which make it not use the OOB area, but I haven't tried them.
I am trying to run a Linux kernel as the secure OS on a TrustZone-enabled development board (Samsung Exynos 4412). Some would say a secure OS should be small and simple, but I just want to try. And if it is possible, writing or porting a trustlet application to this secure OS will be easy, especially for applications with a UI (trusted UI).
I bought the development board with a runnable secure OS based on xv6, with Android as the normal OS (Android 4.2.2, kernel 3.0.15). I have tried to replace the simple secure OS with the Android Linux kernel; that is, with a little assembly code in front (such as clearing the NS bit of the SCR register), I directly call the Linux kernel entry point (with the necessary kernel tagged list passed in).
The kernel decompression code executes correctly, and the first C function of the kernel, start_kernel(), is also reached. Almost all the initialization functions run well until calibrate_delay(). This function waits for jiffies to change:
/* wait for "start of" clock tick */
ticks = jiffies;
while (ticks == jiffies);
I guess the reason is that no clock interrupt is being generated (I print logs in the clock interrupt callback functions, and they are never reached). I have checked the CPSR state before and after the local_irq_enable() function; the IRQ and FIQ bits are set correctly. I also print some logs in the Linux kernel's IRQ handler defined in the interrupt vector table. Nothing is logged.
I know there may be some differences in the interrupt system between the secure world and the non-secure world, but I can't find them described in any documentation. Can anybody point them out? And the most important question: as Linux is a very complicated OS, can the Linux kernel run as a TrustZone secure OS at all?
I am a newbie to the Linux kernel and ARM TrustZone. Please help me.
Running Linux as a secure world OS should be the standard case: the secure world supervisor is the most trusted mode and can easily transition to the other modes. The secure world is an operating concept of the ARM CPU.
Note: just because Linux runs in the secure world doesn't make your system secure! TrustZone and the secure world are features that you can use to make a secure system.
But I just want to try. And if it is possible, writing or porting a trustlet application to this secure OS will be easy, especially for applications with a UI (trusted UI).
TrustZone allows partitioning of software. If you run both Linux and the trustlet application in the same layer, there is no benefit; the trustlet is then just a normal application.
The normal model for trustlets is to set up a monitor vector page at boot and lock down physical access. The Linux kernel can then use the smc instruction to call routines in the trustlet, for example to access DRM-type functionality to decrypt media. In this model, Linux runs as a normal world OS but can call limited functionality within the secure world through the SMC API you define; a sketch of such a call follows.
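A minimal sketch of a normal-world SMC call; the function ID and register convention are whatever your monitor defines, so the names here are purely illustrative:

#include <stdint.h>

/* Issue an SMC with a function ID in r0 and one argument in r1.
 * The monitor's handler decides what these mean; r0 carries the
 * result back, per a calling convention you define yourself. */
static inline uint32_t smc_call(uint32_t fn_id, uint32_t arg)
{
    register uint32_t r0 asm("r0") = fn_id;
    register uint32_t r1 asm("r1") = arg;

    asm volatile("smc #0" : "+r"(r0) : "r"(r1) : "memory");
    return r0;
}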
Almost all the initialization functions run well until calibrate_delay().
This is symptomatic of non-functioning interrupts. calibrate_delay() runs a tight loop while waiting for the tick count to increase via system timer interrupts. If you are running in the secure world, you may need to route interrupts there. The GICD_ISPENDR registers can be used to force an interrupt from software; you can use this to verify that the ARM GIC is functioning properly (see the sketch below). Also, the kernel command-line option lpj=XXXXX (where XXXXX is some number) can be used to skip this step.
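A rough sketch of forcing an interrupt pending via the GIC distributor; the base address is an assumption for the Exynos 4412 and should be checked against the full User's Guide, and in a Linux kernel the pointer would come from an ioremap() of that physical address rather than a raw cast:

#include <stdint.h>

#define GIC_DIST_BASE   0x10490000u  /* assumed Exynos 4412 distributor base */
#define GICD_ISPENDR(n) (GIC_DIST_BASE + 0x200u + 4u * (n))

/* Set the pending bit for an interrupt ID; the GIC should then
 * deliver it if distribution and the CPU interface are enabled. */
static void gic_force_pending(unsigned int irq)
{
    volatile uint32_t *reg =
        (volatile uint32_t *)(uintptr_t)GICD_ISPENDR(irq / 32);

    *reg = 1u << (irq % 32);
}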
Most likely some peripheral interrupt routing, clock configuration or other interrupt/timer initialization is done by the boot loader on a normal system, and you are missing it. Bringing up a new board is always difficult, even more so with TrustZone.
There is nothing technically preventing Linux from running in the secure state of an ARM processor, but it defeats the entire purpose of TrustZone: a kernel and OS as large and complex as Linux is infeasible to formally verify to the point that it can be considered "secure".
For further reading on that aspect, see http://www.ok-labs.com/whitepapers/sample/sel4-formal-verification-of-an-os-kernel
As for the specific problem you are facing: there should be nothing special about interrupt handling in the secure vs. non-secure state (unless you explicitly configure it to be different). But it could be that the secure OS you removed was performing some initial timer initialization that is now not taking place.
Also, 3.0.15 is an absolutely ancient kernel: it was released 2.5 years ago, based on something released over 3 years ago.
There are several issues with what you are saying that need to be cleared up. First, are you trying to get the secure world kernel running or the normal world kernel? You said you wanted to run Linux in the SW and Android in the NW, but your question stated, "I have tried to replace the simple secure os with the android Linux kernel". So which kernel are you having problems with?
Second, you mentioned clearing the NS-bit. This doesn't really make sense. If the NS-bit is set, clearing it means that you are already running in the NW (as the set bit would indicate), have executed an SMC instruction, switched to Monitor mode, set the NS-bit to 0, and then restored the SW registers. Is this the case?
As far as interrupts are concerned, have you properly initialized the VBAR for each execution mode, i.e. the secure world VBAR, the normal world VBAR, and the MVBAR? All three need to be set up (see the sketch below for the MVBAR), in addition to setting the proper values in the NSACR and other registers, to ensure interrupts are routed to the correct execution world and not all just handled by the SW. You also need separate exception vector tables and handlers for all three modes. You might be able to get away with only one set initially, but once you partition your memory system using the TZASC, you will need to separate everything.
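For illustration, installing the monitor vector table is a single cp15 write from the secure world; monitor_vectors here is an assumed, suitably aligned vector table symbol defined elsewhere (typically in assembly):

extern char monitor_vectors[];  /* assumed 32-byte-aligned vector table */

/* Write MVBAR: MCR p15, 0, <Rt>, c12, c0, 1.
 * Must execute in the secure world at PL1 or higher. */
static inline void set_mvbar(void)
{
    asm volatile("mcr p15, 0, %0, c12, c0, 1"
                 :: "r"(monitor_vectors) : "memory");
}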
TZ requires a lot of configuration and is not simply handled by setting or clearing the NS-bit. For the Exynos 4412, there are numerous TZ control registers that must be properly set in order to execute in the NW. Unfortunately, none of them is covered in the public version of the User's Guide; you need the full version in order to get all the values and addresses necessary to actually run an SW and an NW kernel on this processor.
I am trying to understand how a kernel boots. I am currently trying to port a new kernel to the HTC Incredible S VIVO (s710e) device, but I cannot get it to boot. So I looked into the device's original kernel and through some documentation, and found out that the device uses ATAGs. Now I have several questions that I cannot find a clear answer for:
What are ATAGs?
What are they used for?
How does the kernel boot using ATAGs?
Do ATAGs play a vital role in booting a kernel?
ATAGs are ARM tags. They are used to carry information, such as the memory size, from the boot code to the kernel. Some references (which in turn lead to other references): booting standards, customized ATAG.
This reference, arm/Booting, explains the theory, but does not tell a user much about what to do.
On my target I use the following in my U-Boot config: CONFIG_CMDLINE_TAG and CONFIG_SETUP_MEMORY_TAGS; and these in my kernel config: CONFIG_ATAGS=y, with CONFIG_USE_OF not set. I am not sure whether that is sufficient for you, but it gives you clues to search on. Good luck.
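To make the tag format concrete, here is a minimal sketch of the classic ATAG list layout and a walk over it (the tag values match the ones in the kernel's arch/arm/include/uapi/asm/setup.h; the walker function is illustrative):

#include <stdint.h>

#define ATAG_NONE 0x00000000   /* terminates the list */
#define ATAG_CORE 0x54410001   /* first tag in the list */
#define ATAG_MEM  0x54410002   /* describes a memory bank */

struct atag_header {
    uint32_t size;  /* tag length in 32-bit words, header included */
    uint32_t tag;   /* one of the ATAG_* values */
};

struct atag_mem {
    uint32_t size;  /* size of the memory bank in bytes */
    uint32_t start; /* physical start address */
};

/* Step to the next tag; size is in words, so advance word-wise. */
static struct atag_header *next_tag(struct atag_header *t)
{
    return (struct atag_header *)((uint32_t *)t + t->size);
}

/* Example walk: sum up all memory the boot loader described. */
static uint32_t total_mem(struct atag_header *t)
{
    uint32_t total = 0;

    for (; t->tag != ATAG_NONE; t = next_tag(t))
        if (t->tag == ATAG_MEM)
            total += ((struct atag_mem *)(t + 1))->size;
    return total;
}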
ATAGs are not only ARM-related at all; look into other architectures' head.S. They are special parameters passed to the kernel through certain registers and pointers.
Is there any way to programmatically disable Turbo Boost on a Core i7 Mac running Mac OS X? I need to be able to do this for benchmarking purposes during code optimisation, etc. Failing that, any kind of utility which can disable/enable Turbo Boost, even if it requires a reboot, would be useful.
There is a related question (not Mac-specific) on SO: How to turn off Turbo Boost temporarily? But even for PCs it seems that there may be no way to do this programmatically/on the fly.
I wrote a kernel extension that lets you disable TB; have fun:
https://github.com/nanoant/DisableTurboBoost.kext
If you want to disable TB on Linux, here is another recipe: http://luisjdominguezp.tumblr.com/post/19610447111/disabling-turbo-boost-in-linux
I've just coded an app that lets you load/unload the kernel extension mentioned above, and helps to track the system behaviour by displaying the CPU temperature and the current fan speed.
You can check it out here: https://github.com/rugarciap/Turbo-Boost-Switcher
Here is a screenshot of what it looks like: http://i.stack.imgur.com/tsKaG.png
You can't. Certain things need to be configured from the BIOS, such as Turbo Boost or VT.
In particular, this is done with the IA32_FEATURE_CONTROL MSR. On a PC, the MSR is unlocked at boot time and the BIOS sets the correct bits to enable or disable features. Once configuration is complete, the BIOS locks the MSR so the changes take effect and cannot be modified later.
I don't know whether it's possible to unlock the MSR again before the PC is brought into protected mode, and I don't know how this works on a MacBook, where EFI is used instead of a BIOS. You could probably pull it off with an EFI extension of sorts.
CPUID.com's TMonitor utility can disable/enable Turbo Boost on the fly from within Windows, not at boot! There must be a way to do the same thing from within OS X.
Finally, there seems to be a good solution for this problem, which I have tested with Mac OS X Lion on a Core i7 MacBook Pro today, and it appears to work well. Adam Strzelecki, a researcher in parallel computing at Jagiellonian University in Krakow, Poland, has written DisableTurboBoost.kext - a small kext which can be loaded and unloaded at will (via the command line) to disable/enable Turbo Boost.
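Typical usage from the command line (assuming the kext has been built; the path is whatever your build produces):

sudo kextload DisableTurboBoost.kext    # disable Turbo Boost
sudo kextunload DisableTurboBoost.kext  # re-enable Turbo Boost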