What does "PSH kernel" mean in Intel Edison? Is it the name of the primary bootloader present inside ROM?

I was going through the logs after booting up the Intel Edison and came across the term. Is it the name of the BIOS? Does it do some security verification like key matching/checking?

The Intel Edison board, more precisely the Intel Tangier SoC, has a Minute IA (i486+, also known as the Pentium ISA microarchitecture) based MCU (the Intel Quark D2000 SoC has a similar one, as far as I know) which is part of the so-called Platform Services Hub (PSH). The PSH has its own page cache (to hold the RTOS and its applications) and LAPIC. Peripherals such as DMA and I2C are shared with the System Controller Unit (SCU), and the SCU actually controls the PSH.
When the system starts, the MCU boots first. It runs a Viper RTOS with some modifications, e.g. a library to support sensors. In other words, the "PSH kernel" mentioned in the boot logs is not a bootloader or BIOS; it is the RTOS image running on this companion MCU.
There is no information available from Intel regarding the use of an open-source RTOS, such as Zephyr, on the PSH.

Related

How to disable software SMI (System Management Interrupt) in Windows

Starting with Windows 10 1809, the OS generates lots of software SMIs.
We run our real-time application on a separate processor core, and each SMI introduces an unpredictable delay. Before 1809 it was always possible to disable SMIs in the BIOS.
The call stack in Windows looks like:
hal!HalEfiGetEnvironmentVariable+0x56
hal!HalGetEnvironmentVariableEx+0xb572
nt!IopGetEnvironmentVariableHal+0x2a
nt!IoGetEnvironmentVariableEx+0x85
nt!ExpGetFirmwareEnvironmentVariable+0x91
nt!ExGetFirmwareEnvironmentVariable+0x110ce3
nt!NtQuerySystemEnvironmentValueEx+0x6e
The SMI is generated by an OUT instruction to port 0xB2; it is required to read UEFI variables from NVRAM. When the BIOS is in legacy mode, there are no SMIs.
Is it possible to configure Windows so that it will not access UEFI variables using SMIs?
The short answer is no, it is not possible to configure Windows not to generate software SMIs on UEFI variable accesses, because those SMIs are not generated by Windows: they are generated inside the firmware.
All UEFI-aware OSes read/write UEFI variables via the GetVariable() and SetVariable() services, which are part of the Runtime Services exposed by the UEFI firmware to the OS via the System Table (see the UEFI Spec, section 8). For security reasons, most current firmware implements the Variable Services by processing the actual Get/Set variable requests inside SMM.
So it is the device's firmware that is responsible for generating the software SMIs, not the OS. However, the OS and some system services/applications absolutely need to work with UEFI variables, as that is how a UEFI-aware OS is supposed to run on UEFI firmware.
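To make the mechanism concrete, here is a minimal EDK2-style sketch of reading a UEFI variable through the Runtime Services table. This is essentially the same path the OS takes; on most firmware the call is serviced inside SMM, entered via a software SMI. The variable chosen is just an example.

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/UefiRuntimeServicesTableLib.h>
#include <Guid/GlobalVariable.h>

EFI_STATUS EFIAPI UefiMain(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
  UINT8      Data[8];
  UINTN      Size = sizeof(Data);
  EFI_STATUS Status;

  // GetVariable() is a Runtime Service; the firmware's implementation
  // typically forwards the request to an SMM handler.
  Status = gRT->GetVariable(L"SecureBoot", &gEfiGlobalVariableGuid,
                            NULL, &Size, Data);
  if (!EFI_ERROR(Status)) {
    Print(L"SecureBoot = %d\n", Data[0]);
  }
  return EFI_SUCCESS;
}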
On processors that support AMD-V (e.g. AMD and Hygon processors), the answer is yes, but only in kernel mode. There are two instructions, STGI and CLGI: STGI sets the GIF (Global Interrupt Flag) and CLGI clears it. The GIF controls interrupt delivery so that one may enter completely atomic sections. As defined in AMD-V, internal SMIs (e.g. from I/O trapping) are discarded and external SMIs (e.g. from external hardware, or IPIs via the APIC) are held pending while the GIF is clear. Make sure you enable the SVME bit in the EFER MSR before you execute these instructions.
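As a rough illustration (not a drop-in implementation), a kernel-mode sketch in C with GCC-style inline assembly might look like this; it assumes the CPU supports SVM and that the code runs at CPL 0:

#include <stdint.h>

#define MSR_EFER  0xC0000080u
#define EFER_SVME (1ull << 12)   /* must be set, or STGI/CLGI raise #UD */

static inline uint64_t rdmsr(uint32_t msr)
{
    uint32_t lo, hi;
    __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    __asm__ volatile("wrmsr" : : "c"(msr),
                     "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}

void run_atomic_section(void)
{
    wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);

    __asm__ volatile("clgi");  /* GIF = 0: external SMIs held pending */
    /* ... time-critical work, no interrupts or SMIs delivered ... */
    __asm__ volatile("stgi");  /* GIF = 1: pending events are delivered */
}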
If you would like to achieve this in a more generic way that does not rely on AMD-V, you may try to get your code into an SMI handler: SMIs that occur later are latched while the processor is already in SMM.
References:
Chapter 10.3.3 "Exceptions and Interrupts", Volume 2 "System Programming", AMD64 Architecture Programmer's Manual.
Chapter 15.17 "Global Interrupt Flag, STGI and CLGI Instructions", Volume 2 "System Programming", AMD64 Architecture Programmer's Manual.
https://www.amd.com/system/files/TechDocs/24593.pdf

First physical core to boot

As I was reading through the kernel source code, I noticed that a mapping between the physical core ID and the virtual core number is created. This could be because there is some degree of uncertainty in the order in which the cores are brought up.
In a multi-core system, which physical core is the first to boot? Is it always physical core #0? Does this hold for x86, x86-64, ARM and ARM64?
According to the Intel SDM, in recent Intel processors the selection of the bootstrap processor (BSP) is handled either "through a special system bus cycle" or "by platform-specific arrangement of the combination of hardware, BIOS, and/or configuration input options."
In my experience (with Intel processors only), the BSP always has APIC ID 0, although this is not guaranteed. However, I don't know whether that means it is always the same physical core within the processor package, or even whether there is any way to tell.
For more information, see section 8.4 of the Intel SDM, volume 3A.
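If you want to check this from software, the initial APIC ID can be read with CPUID; whether the current processor is the BSP is reported by the BSP flag (bit 8) of the IA32_APIC_BASE MSR, which requires ring 0. A small user-space sketch for the CPUID part (GCC/Clang; pin the thread to one core first, e.g. with taskset, for the result to be meaningful):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    /* CPUID.01H:EBX[31:24] is the initial APIC ID of this logical CPU */
    printf("initial APIC ID: %u\n", ebx >> 24);
    return 0;
}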

How do PCIe devices advertise multiple virtual functions to Linux?

SR-IOV lets a PCIe device expose a single physical function and multiple virtual functions. How does the kernel detect that a device supports virtual functions? Is it part of the PCIe configuration registers? Where in the kernel are devices queried for how many functions they export?
EDIT: I'm looking for a line of code (or a file) in the kernel source that inspects a PCIe device to determine how many virtual functions it exports. I'd also settle for a link to the appropriate standard that lays out what information a device needs to send to the host to report that it supports multiple virtual functions.
An SR-IOV-capable device exposes the SR-IOV Extended Capability (extended capability ID 10h) in its PCIe configuration space.
This is specified in chapter 9 of the PCI Express Base specification revision 4.0. I'm not sure whether you can find a free copy online; you may need to be a PCI-SIG member.
In the Linux kernel, look for PCI_EXT_CAP_ID_SRIOV and PCI_SRIOV_TOTAL_VF in drivers/pci/iov.c.
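A rough sketch of what that discovery looks like, modeled on the logic in drivers/pci/iov.c (the helper name here is made up):

#include <linux/pci.h>

/* Hypothetical helper: returns how many VFs a device can expose (0 if none). */
static u16 demo_total_vfs(struct pci_dev *dev)
{
    int pos;
    u16 total;

    /* Walk the extended capability list for the SR-IOV capability (ID 0x10) */
    pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV);
    if (!pos)
        return 0;  /* device is not SR-IOV capable */

    /* TotalVFs field of the SR-IOV capability structure */
    pci_read_config_word(dev, pos + PCI_SRIOV_TOTAL_VF, &total);
    return total;
}

From user space, the same value is visible in sysfs as the sriov_totalvfs attribute of the device.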

How many CP14 debug units does a dual-core ARM have?

If we have a dual-core ARM CPU, does that mean all coprocessors are "dual-core" as well?
I mean, do we have two sets of CP14 and CP15 registers in this case?
Thank you!
I believe each core would have the same debug hardware registers, if the maker of the SoC compiled that silicon IP into each core (which is likely). You'll have to interrogate each core to see whether it is supported, and then execute the instructions that access CP14 etc. from the core in question, since each core sees only its own CP14.
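For example (a privileged ARMv7 sketch; user mode normally traps on this access), each core can read its own debug ID register, DBGDIDR, through CP14:

#include <stdint.h>

static inline uint32_t read_dbgdidr(void)
{
    uint32_t val;

    /* DBGDIDR: MRC p14, 0, <Rt>, c0, c0, 0 - reads this core's debug ID */
    __asm__ volatile("mrc p14, 0, %0, c0, c0, 0" : "=r"(val));
    return val;
}

/* Bits [27:24] hold the number of breakpoint register pairs minus one, and
 * bits [31:28] the number of watchpoint register pairs minus one, for the
 * core executing the instruction. */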

Where is device driver code executed? Kernel space or user space?

Part 1:
To the Linux/Unix experts out there: could you please help me understand device drivers? As I understand it, a driver is a piece of code that directly interacts with hardware and exposes some APIs to access the device. My question is: where does this code run, user space or kernel space?
I know that code executed in kernel space has some extra privileges, like accessing any memory location (please correct me if I'm wrong). If we install a third-party driver and it runs in kernel space, wouldn't this be harmful to the whole system? How does any OS handle this?
Part 2:
Let's take the example of a USB device (camera, keyboard, ...). How does the system recognize these devices? How does the system know which driver to load? How does the driver know the address of the device to read and write data?
(If this is too big to answer here, please provide links to some good documentation or tutorials; I've tried and couldn't find answers to these. Please help.)
Part 1
On Linux, drivers run in kernel space. And yes, as you state, there are significant security implications. Most exceptions in drivers will take down the kernel and potentially corrupt kernel memory (with all manner of consequences). Buggy drivers also have an impact on system security, and malicious drivers can do absolutely anything they want.
A trend seen in the macOS and Windows NT kernels is user-space drivers. Microsoft has for some time been pushing the Windows User-Mode Driver Framework, and macOS has long provided user-space APIs for FireWire and USB drivers, as well as class-compliant drivers for many USB peripherals. It is quite unusual to install third-party kernel-mode device drivers on macOS.
Arguably, the bad reputation Windows used to have for kernel panics can be attributed to the (often poor-quality) kernel-mode drivers that came with just about every mobile phone, camera and printer.
Linux graphics drivers are largely implemented in user space with a minimal kernel-resident portion, and FUSE allows file systems to be implemented in user space.
Part 2
USB, FireWire, PCI (and also PCIe) all have enumeration mechanisms through which a bus driver can match a device to a driver. In practice this means that all devices expose metadata describing what they are.
Contained within the metadata are a DeviceID, a VendorID and a description of the functions the device provides, with associated ClassIDs. ClassIDs facilitate generic class drivers.
Conceptually, the operating system will attempt to find a driver that specifically supports the VendorID and DeviceID, and then fall back to one that supports the ClassID(s).
Matching devices to drivers is a core concept at the heart of the Linux device model; the exact matching criteria live in the match() function of the specific bus driver.
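As an illustration (the device IDs are made up), this is roughly what the matching metadata looks like on the driver side of a Linux USB driver; the USB core compares each interface against this table, first by VendorID/ProductID and then by class:

#include <linux/module.h>
#include <linux/usb.h>

/* Hypothetical IDs, for illustration only */
static const struct usb_device_id demo_ids[] = {
    { USB_DEVICE(0x1234, 0x5678) },               /* exact VendorID/ProductID */
    { USB_INTERFACE_INFO(USB_CLASS_HID, 1, 1) },  /* class fallback: boot keyboard */
    { }                                           /* terminating entry */
};
MODULE_DEVICE_TABLE(usb, demo_ids);

static int demo_probe(struct usb_interface *intf,
                      const struct usb_device_id *id)
{
    dev_info(&intf->dev, "matched a device\n");
    return 0;
}

static void demo_disconnect(struct usb_interface *intf)
{
}

static struct usb_driver demo_driver = {
    .name       = "demo",
    .id_table   = demo_ids,
    .probe      = demo_probe,
    .disconnect = demo_disconnect,
};
module_usb_driver(demo_driver);

MODULE_LICENSE("GPL");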
Once a device driver is bound to a device, it uses the bus driver (or addressing information given by it) to perform reads and writes. In the case of PCI and FireWire, this is a memory-mapped I/O address; for USB, it is bus addressing information.
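For the PCI case, a sketch of what that looks like in a driver's probe function (error handling trimmed; the BAR index is an assumption):

#include <linux/io.h>
#include <linux/pci.h>

static int demo_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    void __iomem *regs;
    u32 val;
    int ret;

    ret = pci_enable_device(pdev);
    if (ret)
        return ret;

    /* Map BAR 0, the device's memory-mapped register region */
    regs = pci_iomap(pdev, 0, 0);
    if (!regs)
        return -ENOMEM;

    val = ioread32(regs);  /* read the device's first 32-bit register */
    dev_info(&pdev->dev, "reg0 = 0x%08x\n", val);

    pci_iounmap(pdev, regs);
    return 0;
}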
The Linux documentation tree provides some insight into the design of the Linux device model, but it isn't really entry-level reading.
I'd also recommend reading Linux Device Drivers (3rd edition).

Resources