NVMe PCIe Hard Disk on Freescale LS2080A not recognised - linux-kernel

I have a Freescale LS2080 box for which I am developing a custom Linux 4.1.8 kernel using the Freescale Yocto project.
I have an NVMe hard disk attached to the LS2080 via a PCIe card, but the disk is not recognised when I boot the board with my custom Linux kernel.
I plugged the same combination of NVMe disk and PCIe card into a Linux 3.16.7 desktop PC, and it was detected and mounted without problems.
When building the LS2080 kernel using the Yocto project, I have enabled the NVMe block device driver and I have verified that this module is present in the kernel when booting on the board.
The PCIe slot on the board is working fine because I have tried it with a PCIe Ethernet card and a PCIe SATA disk.
I suspect that I am missing something in the kernel configuration or device tree, but I'm not sure what. When I add the NVMe driver to the kernel using menuconfig, its dependencies should be resolved automatically.
Can anyone provide insight into what I am missing?

First, make sure the PCIe device is recognized by running lspci.
If the device does not show up in the lspci output, this is an enumeration problem; to track down the error, use a PCIe analyzer.
If the device does show up in the list, add its vendor ID and device ID to the NVMe driver and recompile, so that the driver is loaded for your device.
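For illustration, the relevant ID table in the 4.1-era NVMe driver (drivers/block/nvme-core.c) looks roughly like the sketch below; the extra vendor/device entry uses placeholder values, so substitute the IDs reported by lspci -nn. Note that the driver already matches on the NVMe class code, so if your disk does not bind at all, it may simply not be enumerating.

	/* Sketch: PCI ID table of the NVMe driver with an extra explicit entry.
	 * 0xabcd/0x1234 are placeholders -- use the IDs printed by lspci -nn. */
	static const struct pci_device_id nvme_id_table[] = {
		{ PCI_DEVICE(0xabcd, 0x1234) },	/* your disk's vendor ID, device ID */
		{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
		{ 0, }
	};
	MODULE_DEVICE_TABLE(pci, nvme_id_table);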

Related

PCIe DMA problems in ARM Machines

I'm trying to write a PCIe driver for an ARM machine (Cavium ThunderX2). I'm working with Xilinx Alveo FPGAs. Our work involves migrating pages between heterogeneous nodes (x86 and ARM) and the driver takes care of the DMA between the host and the FPGA, and handles the device interrupts.
The DMA doesn't work (From Device/To Device) and I get "ARM SMMU v3.x 0x10 event occurred" errors. I tried disabling the SMMU (recommended by some threads in the NVIDIA community - https://forums.developer.nvidia.com/t/how-dma-works-in-arm-the-dma-stopped-working-with-our-pci-driver/53699), but that leads to a protection issue ("RAS Controller stopped"), and the system hangs.
I use the dma_map_single API from dma-mapping.h to convert a virtual address to a DMA-capable bus address. Would dma_alloc_coherent make a difference? (Of course, I'll try this out.)
I'm unable to figure out the problem. Is this a PCIe driver issue, an issue with the device, or is there a fix/patch available for ARM PCIe DMA ops? Any help would be appreciated!
Thanks,
Narayan
Error snippet
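For reference, the coherent-allocation alternative mentioned above looks roughly like this minimal sketch; the function name and buffer handling are illustrative, and pdev is assumed to come from the driver's probe() callback.

	#include <linux/dma-mapping.h>
	#include <linux/pci.h>

	/* Sketch: allocate a coherent DMA buffer instead of using streaming
	 * dma_map_single() mappings. */
	static int setup_dma_buffer(struct pci_dev *pdev, size_t len)
	{
		dma_addr_t dma_handle;
		void *cpu_addr;
		int ret;

		/* Declare the device's addressing capability first; on ARM with an
		 * SMMU this also affects how the IOMMU mapping is set up. */
		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
		if (ret)
			return ret;

		cpu_addr = dma_alloc_coherent(&pdev->dev, len, &dma_handle, GFP_KERNEL);
		if (!cpu_addr)
			return -ENOMEM;

		/* Program dma_handle (not the virtual address) into the FPGA's
		 * DMA descriptor registers here. */
		return 0;
	}

Coherent buffers avoid the per-transfer cache maintenance of streaming mappings, but they will not by themselves fix SMMU fault events if the device is emitting addresses outside its IOMMU mapping.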

External Xilinx PCIe driver with Yocto

I compiled the Xilinx PCIe driver using this as a starting point:
https://www.yoctoproject.org/docs/current/kernel-dev/kernel-dev.html#incorporating-out-of-tree-modules
Then, instead of adding the module to the built image with:
MACHINE_EXTRA_RRECOMMENDS += "kernel-module-mymodule"
I copied the .ko kernel module directly onto the target:
fs@fs:/opt/PHYTEC_BSPs/yocto_imx7/build/tmp/work/cortexa7hf-neon-poky-linux-gnueabi/met/0.1-r0$ scp xpcie.ko root@172.17.100.101:/lib/modules
Then, when I insert this kernel module with the hardware connected:
root@imx7d-phyboard-zeta-001:/lib/modules# insmod /lib/modules/xpcie.ko
Base hw val 0
Base hw len 0
BAR0 of 0K
BAR0 of 0M
xpcie: Init: Could not remap memory.
insmod: ERROR: could not insert module /lib/modules/xpcie.ko: Operation not permitted
root@imx7d-phyboard-zeta-001:/lib/modules#
What is the reason?
Is it not allowed to copy a kernel module directly onto an already-built image like this?
Furthermore, when I add it to the image in local.conf:
MACHINE_EXTRA_RRECOMMENDS += "kernel-module-mymodule"
and rebuild and boot the image, the module is not available in the /lib/modules/ directory. Where can I find it? Or would one of the other three methods be better?
Using insmod to load the driver is the right thing to do, and it does not seem to be the cause of the problem you are seeing.
When you build the driver, the .ko file will usually be left in the source directory where you built it, unless you run make modules_install.
Getting back to the actual problem:
Looking at the driver source, the message "Could not remap memory" indicates that the driver could not map the PCIe memory region into the kernel address space.
It looks like the base address registers (BARs) were not configured. On all the machines I use, the BIOS has to configure the base address registers before Linux can use the device.
We program the FPGA, reboot the machine, and then load the driver and use the FPGA. Did you try this? Does your FPGA show up in lspci?
Once the FPGA is programmed, if the PCIe configuration does not change, you can tell the kernel to rescan, and it will write the base address registers with the same values.
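The rescan is triggered through sysfs (writing 1 to /sys/bus/pci/rescan as root); a trivial userspace helper in C, shown here only for completeness, does the same thing:

	#include <stdio.h>

	/* Ask the kernel to rescan the PCI bus so it re-reads the BARs of a
	 * freshly (re)programmed FPGA. Must run as root. */
	int main(void)
	{
		FILE *f = fopen("/sys/bus/pci/rescan", "w");

		if (!f) {
			perror("open /sys/bus/pci/rescan");
			return 1;
		}
		fputs("1", f);
		fclose(f);
		return 0;
	}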

How to add a new device in U-Boot?

I want to access the various peripherals of the i.MX6 at the U-Boot level, but I don't know how to do that.
How do I add support for new devices in U-Boot?
What are the differences between drivers at the U-Boot level and at the kernel level?
The four boot phases:
1. ROM loads the x-loader (MLO).
2. The x-loader (primary bootloader) loads U-Boot.
3. U-Boot (secondary bootloader) reads commands and loads the kernel.
4. The kernel mounts the root file system.
x-loader (primary bootloader):
The x-loader configures the pin muxing, clocks, DDR, and serial console so that it can access and load the second-stage bootloader (U-Boot) into DDR.
U-Boot (secondary bootloader):
U-Boot performs the CPU-dependent and board-dependent initialization and configuration not done in the x-loader. It also includes fastboot functionality for partitioning and flashing the eMMC. U-Boot runs on the master CPU (CPU ID 0), which is responsible for initialization and booting; meanwhile, the slave CPU (CPU ID 1) is held in the "wait for event" state.
U-Boot is a kind of firmware: it initializes basic functionality such as the display and CPU0, provides fastboot, sets up a temporary environment for loading a kernel, and then loads the kernel.
Kernel driver:
A device driver is a program that controls a particular type of device attached to your computer. There are device drivers for printers, displays, touchscreens, CD-ROM readers, diskette drives, and so on.
U-Boot is mainly for loading an operating system (the kernel), while a device driver is part of the kernel and controls a device. If you want to access your device from U-Boot, you have to initialize all the hardware your device needs (memory, clocks, and so on) yourself; a sketch of what that looks like follows below.
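As a rough illustration (not taken from any i.MX6 BSP), a custom U-Boot command that reads a memory-mapped peripheral register could look like the sketch below. It uses U-Boot's classic cmd_tbl_t command API; the register address is a made-up placeholder, so take the real one from the i.MX6 reference manual.

	#include <common.h>
	#include <command.h>
	#include <asm/io.h>

	/* Hypothetical peripheral register address -- replace with a real one
	 * from the i.MX6 reference manual. */
	#define MY_PERIPH_BASE	0x020a0000UL

	static int do_myperiph(cmd_tbl_t *cmdtp, int flag, int argc,
			       char * const argv[])
	{
		/* Read the register and print its value on the U-Boot console. */
		u32 val = readl(MY_PERIPH_BASE);

		printf("reg @ 0x%08lx = 0x%08x\n", MY_PERIPH_BASE, val);
		return 0;
	}

	U_BOOT_CMD(
		myperiph, 1, 0, do_myperiph,
		"read a (hypothetical) peripheral register",
		""
	);

Remember that any clocks or power domains the peripheral depends on must already be enabled, either by the boot ROM, by the board init code, or by your command itself.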

Use of xHCI driver and USB_STORAGE driver

I'm currently learning driver programming and am at a very early stage. I'm unable to grasp the difference between the xHCI, EHCI, and OHCI drivers on the one hand and usb_storage on the other.
When I plug in my USB device (a pen drive) and observe the dmesg output, it says that the device is using the ehci driver, yet the device stops working when I rmmod usb_storage.
There are many drivers for the different kinds of USB devices, be it mouse, keyboard, camera, etc.
As of now, I assume that the xHCI driver is for the USB host and the other driver is for the device we connect to that host. Am I correct? If not, what is the explanation?
The *HCI drivers implement the USB host controller interface specifications:
xHCI - for USB 3.0
EHCI - for USB 2.0
OHCI and UHCI - for USB 1.x
usb_storage is an upper-level driver working on the USB host side, and it is responsible for communication only with USB storage devices, not keyboards, mice, etc.
The USB subsystem is maintained in the form of a stack, and the *hci drivers are the lowest level of that stack; usb-storage and the other class drivers sit at higher levels, as the skeleton below illustrates.
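To make the layering concrete, below is a minimal skeleton of an interface-level USB driver; it is purely illustrative, and the vendor/product IDs are placeholders. Like usb-storage, it binds to a device interface through the USB core, while the *hci driver underneath keeps driving the host controller hardware.

	#include <linux/module.h>
	#include <linux/usb.h>

	/* Placeholder vendor/product IDs -- purely for illustration. */
	static const struct usb_device_id my_id_table[] = {
		{ USB_DEVICE(0x1234, 0x5678) },
		{ }
	};
	MODULE_DEVICE_TABLE(usb, my_id_table);

	/* Called by the USB core when a matching interface is plugged in;
	 * the *hci driver has already enumerated the device at this point. */
	static int my_probe(struct usb_interface *intf,
			    const struct usb_device_id *id)
	{
		dev_info(&intf->dev, "bound above the *HCI layer\n");
		return 0;
	}

	static void my_disconnect(struct usb_interface *intf)
	{
		dev_info(&intf->dev, "device disconnected\n");
	}

	static struct usb_driver my_driver = {
		.name       = "usb_layering_demo",
		.probe      = my_probe,
		.disconnect = my_disconnect,
		.id_table   = my_id_table,
	};
	module_usb_driver(my_driver);

	MODULE_LICENSE("GPL");

This is why removing usb_storage breaks your pen drive even though dmesg mentions ehci: ehci only provides the transport, while usb_storage implements the storage protocol on top of it.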

How the Linux kernel detects Power over Ethernet (PoE)

I want to capture a signal in the kernel when it detects that Power over Ethernet has been connected.
I don't have GPIOs available for this purpose.
I am working on an Atheros-chipset-based access point that has a Realtek RTL8363SB Ethernet module. Is there any mechanism/interrupt/signal/API in the kernel that detects where the device is getting its power from?
I have enabled CONFIG_POWER_SUPPLY while building kernel v2.6.36.
Thanks.
