Store data into RAM on a Zynq device - VHDL

I am currently having trouble storing an image generated in the PS part of my Zynq into the DDR3 of my board, and then reading that image back on the PL side so that the VGA driver created there can display it.
The PS creates a 640x480 image, which ideally I want to store in the DRAM.
Until now I have used the DMA to transfer the data back and forth and store it, in a reduced form (not storing all pixels), in the block RAM of my system, but I know that isn't an ideal solution.
So my question is: how do I access the DDR RAM of my Zynq board? I know it is located on the PS side, but I can't seem to find any documentation explaining how it should be interfaced.

On Zynq you usually use an AXI interface for the data.
You reach the DDR through the AXI interconnects and the address map.
In Vivado, to the right of the block design diagram, there is a tab called "Address Editor".
In my case a simple test application (AXI DMA with FIFO) is used.
I configured the AXI DMA to the base address 0x4040_0000 with a range of 64K, so the high address is 0x4040_FFFF.
In the SDK you can access this memory via a C/C++ program.
Here is a short AXI DMA example:
axi dma example
This example was written for the ZedBoard, but I tried it with the Z-turn 7020 board and it worked in Vivado 2014.4 and 2016.1.
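For a rough idea of what the SDK side can look like, here is a minimal bare-metal sketch using Xilinx's standalone XAxiDma driver in simple (non-scatter-gather) mode. The device ID, buffer pointer and length are placeholders, so check them against your own xparameters.h and Address Editor settings:

#include "xaxidma.h"
#include "xil_cache.h"
#include "xstatus.h"

#define DMA_DEV_ID  XPAR_AXIDMA_0_DEVICE_ID  /* generated in xparameters.h */

/* Push one frame that the PS wrote into DDR out to the PL over MM2S. */
int send_frame(u8 *frame, u32 len)
{
    XAxiDma dma;
    XAxiDma_Config *cfg = XAxiDma_LookupConfig(DMA_DEV_ID);
    if (cfg == NULL || XAxiDma_CfgInitialize(&dma, cfg) != XST_SUCCESS)
        return XST_FAILURE;

    /* Flush the CPU cache so the DMA engine sees the pixels in DDR. */
    Xil_DCacheFlushRange((UINTPTR)frame, len);

    if (XAxiDma_SimpleTransfer(&dma, (UINTPTR)frame, len,
                               XAXIDMA_DMA_TO_DEVICE) != XST_SUCCESS)
        return XST_FAILURE;

    while (XAxiDma_Busy(&dma, XAXIDMA_DMA_TO_DEVICE))
        ;  /* poll until the MM2S channel finishes */

    return XST_SUCCESS;
}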
I hope this helps you.

Related

What happens when we press a key on Windows?

First of all, I should say that I am writing this question from scratch because I have tried to find good documentation, but nothing stands out...
What happens when we press a key?
I think this is complex, but I hope you can help me.
What I am looking to understand: everything (but especially which program starts on the host machine, and how the key's electrical signal is encoded and sent...)
The eXtensible Host Controller (xHC) has a Periodic Transfer Ring. Windows programs this ring to trigger a transfer every time an interval in milliseconds has passed. The interval to use is specified in the USB descriptor returned by the USB device. When the transfer occurs, the xHC puts a Transfer Event TRB on the event ring and triggers an MSI-X interrupt, which bypasses the IOAPIC as some kind of inter-processor interrupt. If Windows detects some change in the keys pressed, it will send a message to the application which currently has focus (calling the window's procedure) with the key pressed in one of the arguments.
I don't know about the electrical signals, but I know the eXtensible Host Controller is the USB controller responsible for interacting with USB on modern Windows systems. Since Windows nowadays requires an x64 processor, the xHC must be present on your motherboard. The xHC is a PCI-Express device which is compliant with the PCI-Express specification.
To find an xHC, you:
Find the RSDP ACPI table in RAM;
This table will be found by the UEFI firmware, which acts as a small operating system (OS) during boot of the computer. The OS developers write a small UEFI application named bootx64.efi and place it on a FAT32 partition on the hard disk, in the /EFI/BOOT directory. The UEFI firmware launches that application directly at boot, which allows the OS to start without user input (similarly to how the legacy BIOS used to fetch the first sector of the hard disk and execute the instructions found there).
In practice the UEFI application is built with either EDK2 or gnu-efi. These toolchains are aware of the UEFI environment and specification, so they compile the code against the boot services that the firmware makes available to the application. The System Tables (most importantly the ACPI tables) are passed as arguments to the "main" function (often called UefiMain) that the UEFI firmware calls in the UEFI application. The application code can thus simply use these arguments to find the RSDP table and pass it to the OS.
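As an illustration only, an EDK2-style sketch of that lookup could look like the following (assuming the usual EDK2 headers and the ACPI 2.0 table GUID):

#include <Uefi.h>
#include <Guid/Acpi.h>
#include <Library/BaseMemoryLib.h>

EFI_STATUS EFIAPI UefiMain(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    VOID *Rsdp = NULL;
    UINTN i;

    /* The firmware hands us the system table; the RSDP is one entry of its
       configuration table, keyed by the ACPI GUID. */
    for (i = 0; i < SystemTable->NumberOfTableEntries; i++) {
        EFI_CONFIGURATION_TABLE *Entry = &SystemTable->ConfigurationTable[i];
        if (CompareGuid(&Entry->VendorGuid, &gEfiAcpi20TableGuid)) {
            Rsdp = Entry->VendorTable;  /* to be handed over to the OS */
            break;
        }
    }
    return (Rsdp != NULL) ? EFI_SUCCESS : EFI_NOT_FOUND;
}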
Find the MCFG ACPI table using the RSDP;
The chain of tables is RSDP -> XSDT -> MCFG. Once the OS has found the MCFG, this table specifies the base address of the PCI configuration space. To interact with PCI devices you use memory-mapped IO (MMIO): you read and write certain physical addresses, and the accesses are routed to the registers of the PCI devices instead of RAM. The MCFG thus specifies the base address at which you will start finding MMIO registers for the different PCI devices that are plugged into the computer.
Iterate on the PCI devices and look at their IDs until you find an xHC.
To iterate over the PCI devices, the PCI specification defines the following formula:
UINT64 physical_address = base_address + ((bus - first_bus) << 20 | device << 15 | function << 12);
The base_address is for a specific segment group. There can be up to 65536 segment groups, and each can have up to 256 PCI buses (suitable for large servers with lots of components). Each PCI bus can have up to 32 devices plugged into it, and each device can have up to 8 functions. A function can also be a PCI bridge. The bus here is an actual serial bus that the PCI devices (like a network card, a graphics card, an xHC, an AHCI controller, etc.) use to communicate with RAM. A function is a functionality of the PCI device, like controlling USB devices, hard disks, or HDMI screens (for graphics cards). A PCI bridge bridges a PCI bus to another PCI bus, which means you can have an almost unlimited number of devices, because bridges let the tree of devices grow by adding further PCI buses.
In the formula, the bus is simply a number between 0 and 255; the first bus is specified in the MCFG ACPI table for a specific segment group. The device is a number between 0 and 31 and the function is a number between 0 and 7. The formula returns a physical address which points to a conventional configuration space (its layout is the same for all functions) containing specific registers. These registers are used to determine the type of the device and to load a proper driver for it. Each function of each device thus gets its own configuration space.
For the xHC, there will be only one function and the IDs returned by its configuration space will be 0x0C for the class ID and 0x03 for the subclass ID (https://wiki.osdev.org/EXtensible_Host_Controller_Interface).
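Putting the formula and the IDs together, a scan over one segment group might look like this sketch. It assumes the ECAM region from the MCFG has already been mapped at ecam_base, and it also checks the programming interface byte (0x30 for xHCI) to tell the xHC apart from other USB controllers:

#include <stdint.h>

static volatile uint8_t *ecam_base;  /* base address taken from the MCFG */

static volatile uint8_t *cfg_space(int bus, int dev, int fn, int first_bus)
{
    return ecam_base + (((uint64_t)(bus - first_bus) << 20) |
                        ((uint64_t)dev << 15) |
                        ((uint64_t)fn << 12));
}

volatile uint8_t *find_xhc(int first_bus, int last_bus)
{
    for (int bus = first_bus; bus <= last_bus; bus++)
        for (int dev = 0; dev < 32; dev++)
            for (int fn = 0; fn < 8; fn++) {
                volatile uint8_t *cfg = cfg_space(bus, dev, fn, first_bus);
                uint16_t vendor = *(volatile uint16_t *)(cfg + 0x00);
                if (vendor == 0xFFFF)
                    continue;  /* no function present at this address */
                /* class 0x0C (serial bus), subclass 0x03 (USB),
                   prog-if 0x30 (xHCI) */
                if (cfg[0x0B] == 0x0C && cfg[0x0A] == 0x03 && cfg[0x09] == 0x30)
                    return cfg;
            }
    return 0;
}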
Once you have found an xHC, it gets rather complex. You need to initialize it and enumerate the USB devices that are currently plugged into the computer. Several steps are required to get the xHC operational. For this part, I'll leave you to read the xHCI specification, which (in chapter 4) specifies exactly the steps that need to be taken (https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/extensible-host-controler-interface-usb-xhci.pdf).
For the keyboard portion I'll leave you to read one of my answers on the Computer Science Stack Exchange: https://cs.stackexchange.com/questions/141870/when-are-a-controllers-registers-loaded-and-ready-to-inform-an-i-o-operation/141918#141918.
Some good links:
https://wiki.osdev.org/Universal_Serial_Bus
https://wiki.osdev.org/PCI

Mapping external memory device

I am using the GCC toolchain and an ARM Cortex-M0 uC. I would like to ask if it is possible to define a region in the linker script such that read and write operations on it would call the external device driver's functions for reading and writing its space (e.g. SPI memory). Can anyone give some hints on how to do it?
Regards, Rafal
EDIT:
Thank you for your comments and replies. My setup is:
The random-access SPI memory is connected via an SPI controller, and I use a "standard" driver to access the memory space and store/read data from it.
What I wanted to do is avoid calling the driver's functions explicitly, and instead hide them behind some fixed RAM address, so that any read of that address would call the SPI memory driver's read function and any write would call its write function (the offset from the initial address would be the address of the data in the external memory). I doubt that this is at all possible on a uC without an MMU, but I think it is always worth asking someone else who might have had a similar idea.
No, this is not how it works. The Cortex-M0 has no Memory Management Unit (MMU), and is therefore unable to intercept accesses to specific memory regions.
It's not really clear what you are trying to achieve. If you have SPI memory external to the chip, you have to perform all accesses through a driver; it is not possible to memory-map the SPI port abstraction.
If this is an on-device SPI memory controller, it will have two regions in the memory map. One will be the 'memory' region, which will probably behave as read-only; the other will be the control registers for the memory controller hardware, and it is these registers that the device driver talks to. Specifically, to write to the SPI memory, you need to go through the driver to perform the write.
In the extreme case (for example Cortex-M1 on Xilinx), there will be an eXecute In Place (XIP) peripheral for the memory-map behaviour, and an SPI master device for the read/write functionality. A GPIO pin is used to multiplex the SPI EEPROM pins between 'memory mode' and 'configuration mode'.
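Since the M0 cannot trap ordinary loads and stores, the usual fallback is to hide the driver behind a small accessor API rather than behind a fixed address range. A sketch, where spi_mem_read()/spi_mem_write() are placeholders for whatever your driver actually provides:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical driver entry points for the external SPI memory. */
extern int spi_mem_read(uint32_t addr, void *buf, size_t len);
extern int spi_mem_write(uint32_t addr, const void *buf, size_t len);

/* Reads and writes of the 'external address space' go through these
   wrappers instead of a magic RAM address. */
static inline uint8_t ext_read8(uint32_t offset)
{
    uint8_t v = 0;
    (void)spi_mem_read(offset, &v, sizeof v);
    return v;
}

static inline void ext_write8(uint32_t offset, uint8_t v)
{
    (void)spi_mem_write(offset, &v, sizeof v);
}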

Physical Memory Allocation in Kernel

I am writing a kernel module that is going to trigger an external PCIe device to read a block of data from my internal memory. To do this I need to send the PCIe device a pointer to the physical address of the data that I would like to send. Ultimately this data is going to be written from userspace to the kernel with the write() function (userspace) and copy_from_user() (kernel space). As I understand it, the address that my kernel module will see is still a virtual memory address. I need a way to get the physical address of it so that the PCIe device can find it.
1) Can I just use mmap() from userspace and place my data at a known location in DDR memory, instead of using copy_from_user()? I do not want to accidentally overwrite another process's data in memory, though.
2) My kernel module reserves PCIe data space at initialization using ioremap_nocache(); can I do the same for this buffer, or is it a bad idea to treat this memory as IO memory? If I can, what would happen if the memory that I try to reserve is already in use? I do not want to hard-code a static memory location and then find out that it is in use.
Thanks in advance for your help.
You don't choose a memory location and put your data there. Instead, you ask the kernel to tell you the location of your data in physical memory, and tell the board to read that location. Each page of memory (4KB) will be at a different physical location, so if you are sending more data than that, your device likely supports "scatter gather" DMA, so it can read a sequence of pages at different locations in memory.
The API is this: dma_map_page() to return a value of type dma_addr_t, which you can give to the board. Then dma_unmap_page() when the transfer is finished. If you're doing scatter-gather, you'll put that value instead in the list of descriptors that you feed to the board. Again if scatter-gather is supported, dma_map_sg() and friends will help with this mapping of a large buffer into a set of pages. It's still your responsibility to set up the page descriptors in the format expected by your device.
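A minimal sketch of that flow for a single page, assuming 'dev' is your device's struct device (e.g. &pdev->dev) and with error handling trimmed:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int send_page_to_device(struct device *dev, struct page *page, size_t len)
{
    /* Get a bus address the board can use to read this page. */
    dma_addr_t bus = dma_map_page(dev, page, 0, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, bus))
        return -EIO;

    /* ... program 'bus' into the board's descriptor/register, start the
       transfer and wait for the completion interrupt ... */

    dma_unmap_page(dev, bus, len, DMA_TO_DEVICE);
    return 0;
}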
This is all very well written up in Linux Device Drivers (Chapter 15), which is required reading. http://lwn.net/images/pdf/LDD3/ch15.pdf. Some of the APIs have changed from when the book was written, but the concepts remain the same.
Finally, mmap(): Sure, you can allocate a kernel buffer, mmap() it out to user space and fill it there, then dma_map that buffer for transmission to the device. This is in fact probably the cleanest way to avoid copy_from_user().
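One sketch of that mmap() route on a current kernel, assuming the buffer came from dma_alloc_coherent() and that my_dev and its fields are placeholders for your own driver state:

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct my_dev {                 /* hypothetical per-device state */
    struct device *dev;
    void *buf;                  /* CPU address from dma_alloc_coherent() */
    dma_addr_t buf_dma;         /* matching bus address for the board */
};

/* Wired into the driver's file_operations as .mmap; user space fills the
   buffer, the driver hands buf_dma to the device. */
static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
    struct my_dev *d = filp->private_data;

    return dma_mmap_coherent(d->dev, vma, d->buf, d->buf_dma,
                             vma->vm_end - vma->vm_start);
}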

Linux driver DMA transfer to a PCIe card with PC as master

I am working on a DMA routine to transfer data from PC to a FPGA on a PCIe card. I read DMA-API.txt and LDD3 ch. 15 for details. However, I could not figure out how to do a DMA transfer from PC to a consistent block of iomem on the PCIe card. The dad sample for PCI in LDD3 maps a buffer and then tells the card to do the DMA transfer, but I need the PC to do this.
What I already found out:
Request bus master
pci_set_master(pdev);
Set the DMA mask
if (dma_set_mask(&(pdev->dev), DMA_BIT_MASK(32))) {
    dev_err(&pdev->dev, "No suitable DMA available.\n");
    goto cleanup;
}
Request a DMA channel
if (request_dma(dmachannel, DRIVER_NAME)) {
    dev_err(&pdev->dev, "Could not reserve DMA channel %d.\n", dmachannel);
    goto cleanup;
}
Map a buffer for DMA transfer
dma_handle = pci_map_single(pci_dev, buffer, count, DMA_TO_DEVICE);
Question:
What do I have to do in order to let the PC perform the DMA transfer instead of the card?
Thank you for your help!
First of all thank you for your replies. Maybe I should put my questions more precisely:
In my understanding the PC has to have a DMA controller. How do I access this DMA controller to start a transfer to a memory mapped IO region in the PCIe card?
Our specification demands that the PC's DMA controller initiate the transfer. However, I could only find examples where the device does the DMA job (DMA_mapping.txt, LDD3 ch. 15). Is there a reason why nobody uses the PC's DMA controller (it still has DMA channels, though)? Would it be better to request a specification change for our project?
Thanks for your patience.
Look up DMA_mapping.txt. There's a long section in there that tells you how to set the direction ('DMA direction', line 408).
EDIT
Ok, since you edited your question... your specification is wrong. You could set up the system DMA controller, but it would be pointless, because it's too slow, as I said in the comments. Read this thread.
You must change your FPGA to support bus mastering. I do this for a living - contact me off-thread if you want to sub-contract.
What you are talking about is not really DMA. DMA is when your device accesses memory without the CPU being involved (with the exception of the PC's memory controller, which is usually embedded in the PC's CPU these days). Not all devices can do it, and if you are using an FPGA, then you surely need some sort of DMA controller in your design (e.g. the Expresso DMA Core or similar). In your case, you just have to write to the mapped memory region (i.e. the one you obtain with ioremap_nocache) using iowrite calls (e.g. iowrite32) followed by write memory barriers, wmb(). Which I/O BAR and address you have to write to depends entirely on your device.
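A minimal sketch of that programmed-I/O path, where 'bar' is the pointer you got back from pci_iomap()/ioremap_nocache(). Note this is CPU-driven copying, not true DMA:

#include <linux/io.h>

static void pio_copy_to_card(void __iomem *bar, const u32 *src, size_t words)
{
    size_t i;

    for (i = 0; i < words; i++)
        iowrite32(src[i], bar + i * 4);

    wmb();  /* ensure the posted writes complete before telling the card */
}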
Hope it helps. Good Luck!

Convert DMA mapping to virtual address

I have a somewhat unusual situation where I'm developing a simulation module for an Ethernet device. Ideally, the simulation layer would just be identical to the real hardware with regard to the register set. The issue I've run into is that the DMA registers in the hardware are loaded with the DMA mapping (physical) address of the data. I need to use those physical addresses to copy the data from the Tx buffer on the source device to the Rx buffer on the destination device. To do that in module code, I need pointers to virtual memory. I looked at phys_to_virt() and I didn't understand this comment in the man page:
This function does not handle bus mappings for DMA transfers.
Does this mean that a physical address that is retrieved via dma_map_single cannot be converted back to a virtual address using phys_to_virt()? Is there another way to accomplish this conversion?
There is no general way to map a DMA address back to a virtual address. The dma_map_single() function might be programming an IOMMU (e.g. VT-d on an Intel x86 system), which results in a DMA address that is completely unrelated to the original physical or virtual address. However, this presentation and the linked slides give one approach to hooking an emulated hardware model up to a real driver (basically, use virtualization).
I am not too clear about this question, but if you are using phys_to_virt(), the problem may be that an address as seen on the bus cannot be converted to a virtual address by that function. I am not sure; just try the bus_to_virt(bus_addr) function.
Try dma_virt = bus_to_virt(dma_handle);
It worked for me: it gives back the same virtual address that dma_alloc_coherent() originally returned. (As the other answer notes, this only holds when no IOMMU is remapping the DMA addresses.)
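For a simulation module, a simpler and more portable pattern is to keep both addresses that dma_alloc_coherent() hands out side by side, so no back-conversion is ever needed; the emulation code can then look the virtual pointer up from the bus address it finds in the emulated registers. A sketch:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

struct dma_buf_pair {
    void       *cpu;   /* virtual address, for the simulation code */
    dma_addr_t  bus;   /* address the 'hardware' registers are loaded with */
    size_t      size;
};

static int alloc_pair(struct device *dev, struct dma_buf_pair *b, size_t size)
{
    b->size = size;
    b->cpu  = dma_alloc_coherent(dev, size, &b->bus, GFP_KERNEL);
    return b->cpu ? 0 : -ENOMEM;
}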
