GNU Radio on an unsupported FPGA platform

I am new to GNU Radio, and for a project I need to use it with an FPGA platform different from the ones already supported in the GNU Radio project...
Is it feasible to develop designs for other platforms? I need a different FPGA because my board includes DA/AD converters with higher bandwidth than the supported platforms offer.
Thanks in advance for your help.

GNU Radio is a framework for processing data on a CPU. They added some RFNoC support that is supposed to run your processing on Ettus USRP FPGAs, but I'm not sure how well that works. GNU Radio does not limit itself to any specific hardware front end; however, you will have to write the custom drivers yourself. GNU Radio uses UHD as the driver for Ettus USRPs, and there are other drivers (such as rtl-sdr) already integrated into GNU Radio that work with hardware front ends other than Ettus USRPs.
So to answer your question: yes, you can use GNU Radio with other FPGA platforms, but you will need to build the drivers yourself. Is it feasible to develop designs for different platforms? Yes, but a significant amount of work goes into it, and whether that is worth it depends on your specific application and time frame. You have not given any specifics, so it's impossible for me to estimate.
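To make the driver part concrete, here is a minimal sketch of a custom source block written in Python. The GNU Radio calls (gr.sync_block and its work() method) are standard API; the device object and its read_samples() method are hypothetical stand-ins for whatever interface your board actually exposes, and a real high-bandwidth front end would more likely be a C++ block or a UHD-style driver.

    # Minimal sketch of a custom GNU Radio source block, assuming a hypothetical
    # driver object with a read_samples(n) method that returns complex samples
    # from your AD converter. The gr.sync_block API is standard GNU Radio.
    import numpy as np
    from gnuradio import gr

    class fpga_source(gr.sync_block):
        """Streams samples from a custom FPGA front end into a flowgraph."""

        def __init__(self, device):
            gr.sync_block.__init__(
                self,
                name="fpga_source",
                in_sig=None,                # a source block has no input ports
                out_sig=[np.complex64],     # one complex-float output stream
            )
            self.device = device            # hypothetical driver handle

        def work(self, input_items, output_items):
            out = output_items[0]
            # read_samples() is a placeholder for your driver's read call
            samples = self.device.read_samples(len(out))
            out[:len(samples)] = samples
            return len(samples)             # number of samples produced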

Related

Is it possible to implement the current Rocket Chip GitHub repo on FPGAs other than Artix-7

I am totally new to the RISC-V domain. I am aiming to implement the Rocket Chip core on my FPGA as a module of a bigger project.
As far as I know, SiFive is a supplier for Rocket Chip. To my knowledge, SiFive makes all its cores implementable only on Xilinx Artix-7 FPGAs. Yet I am wondering whether it is possible to implement it on other FPGAs (e.g. Xilinx Virtex-7 or Zynq)?
If yes, would that require further modifications of any kind? Or am I fine with the regular flow demonstrated on GitHub?
Thanks.
LiteX has support for building SoCs around the Rocket core on a range of platforms. It has been tested on both Xilinx FPGAs and Lattice ECP5.
https://www.contrib.andrew.cmu.edu/~somlo/BTCP/ is a description of this flow aimed primarily at the Versa ECP5 development board. But LiteX supports a range of other platforms including some Virtex and Zynq boards.
BTW, Rocket Chip is not (just) a SiFive project; it was originally developed at Berkeley and is now maintained by the CHIPS Alliance.
Originally, Rocket Chip was supported for Zynq FPGAs: https://github.com/ucb-bar/fpga-zynq
That repo is deprecated and no longer supported, but perhaps something useful can be gleaned from it.
I managed to implement a single small 32-bit core on a Xilinx VC-709 board (Virtex-7 FPGA) running bare-metal.
I'm pretty sure you can implement a bigger core that boots a Linux image.
Modifying it to your requirements is not that tough. Just learn Chisel and work through the interfaces and the architecture.
On the hardware side you just need knowledge of the DPI interface and the FPGA design flow.

How does a Linux distribution affect kernel behavior?

This might be obvious to some but not to me, so I'll ask =)
I'm having an issue with an embedded Linux stack I have built for a piece of hardware (NVIDIA TX2 + ConnectTech Astro carrier). I use a PCIe card from EPIX.
If I use Ubuntu's official distribution for Tegra, the PCIe card is properly detected.
With an identical kernel and device tree blob, and the same hardware unit, the detection fails with my embedded Linux build.
I thought that detecting PCIe devices would be the kernel's job and would not be influenced by the distro, unless the drivers are built as kernel modules and inserted at different times. But in my case they are built into the kernel.
Could someone elaborate on why the detection would work with one distro but not the other?
Here is a link to what I tried to do to fix the detection
tx2-pcie-does-not-detect-endpoint-on-connecttech-carrier-board
Thanks!
A Linux distribution contains a kernel that usually differs from the vanilla kernel of the same release. Most of the time a distribution kernel contains lots of back ports of bug fixes that were discovered and fixed later in micro releases. There may be other features that a specific vendor includes and the vanilla kernel does not, like more recent versions of certain drivers, etc. What makes this even more confusing is that the sets of these back ports often differ between distributions from different vendors. As a side effect, this makes it difficult to depend on something like the KERNEL_VERSION() macro in custom kernel code or custom device drivers.
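One practical way to see such differences is to diff the build configurations of the two kernels. Below is a small sketch: it assumes you have already dumped each configuration to a plain-text file (for example with zcat /proc/config.gz, or by copying /boot/config-$(uname -r) where the distro installs one); the file names passed on the command line are placeholders.

    # Diff the CONFIG_ options of two kernel builds, e.g. the working L4T/Ubuntu
    # kernel versus the embedded build. Each input is a plain-text kernel config.
    import sys

    def load_config(path):
        opts = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("CONFIG_"):
                    key, _, value = line.partition("=")
                    opts[key] = value
                elif line.startswith("# CONFIG_") and line.endswith(" is not set"):
                    opts[line.split()[1]] = "n"
        return opts

    def diff_configs(path_a, path_b):
        a, b = load_config(path_a), load_config(path_b)
        for key in sorted(set(a) | set(b)):
            if a.get(key) != b.get(key):
                print(f"{key}: {a.get(key, 'missing')} -> {b.get(key, 'missing')}")

    if __name__ == "__main__":
        diff_configs(sys.argv[1], sys.argv[2])  # e.g. config-ubuntu config-embedded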
I can't say about the specific issue that you're having. The topic is pretty generic, and I hope that this explanation helps.

Is it possible to use OpenCL across multiple computers?

As far as I know, the answer is no: OpenCL is designed for multi-core systems.
But is there any way to use OpenCL across multiple computers (each of which is a multi-core system)? If not, are additional tools or frameworks required?
I read some articles about distributed computing, cluster computing, grid computing... but I can't find a satisfying answer.
Any ideas will be appreciated.
Thank you :)
There are two frameworks for this purpose: VirtualCL and CLara. Both packages let you work transparently with remote machines as local devices. Unfortunately, VirtualCL is only available as pre-compiled binaries without sources and CLara is not actively developed anymore.
SnuCL uses MPI and OpenCL to transparently use a cluster through the OpenCL API. It also adds a few OpenCL extensions to deal effectively with memory objects.
It is open source. See http://aces.snu.ac.kr/Center_for_Manycore_Programming/SnuCL.html
and http://tbex.twbbs.org/~tbex/pad/SunCL.pdf
There is one more solution not mentioned above: dOpenCL.
"dOpenCL (distributed OpenCL) is a novel, uniform approach to programming distributed heterogeneous systems with accelerators. It transparently integrates the nodes of a distributed system into a single OpenCL platform. Thus, dOpenCL allows the user to run unmodified existing OpenCL applications in a heterogeneous distributed environment. Besides, it extends the OpenCL programming model to deal with individual nodes of the distributed system."
I have used VirtualCL to form a GPU cluster with 3 AMD GPUs as compute nodes and my Ubuntu Intel desktop running as the broker node. I was able to start both the broker and the compute nodes.
In addition to the various options already mentioned by other posters, here are two more open source projects that you may be interested in:
ocland (in beta stage): offers a server application and an ICD implementation that the clients can use to take advantage of local and remote devices that support OpenCL in a transparent fashion. The license is GPLv3.
COPRTHR SDK by Brown Deer Technology (currently version 1.6): this SDK, which offers an open source (GPLv3) OpenCL implementation for x86_64, ARM, Epiphany, and Intel MIC, includes a "Compute Layer Remote Procedure Call" implementation. This consists of a client-side OpenCL implementation that supports RPC (libclrpc) and a server application (clrpcd). The website doesn't mention much about it, but the documentation contains a section about this CLRPC implementation.
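One thing all of these projects share is that they expose remote devices through the ordinary OpenCL platform/device enumeration, so an unmodified client simply sees extra devices. As a rough illustration (shown with pyopencl for brevity, and assuming one of the above ICDs or brokers is installed and configured), the same enumeration code works whether the devices are local or remote:

    # List every OpenCL platform and device visible on this machine. With a
    # clustering layer such as VirtualCL, SnuCL, dOpenCL or ocland in place,
    # remote GPUs show up here like any local device.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.vendor})")
        for device in platform.get_devices():
            print(f"  Device: {device.name} "
                  f"[{cl.device_type.to_string(device.type)}]")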

OpenCL maturity under Windows

I am considering using OpenCL in a consumer product that is currently under development.
Doing some research, I found that there is generally good support under Mac OS X. Linux support is also relatively good, but my target audience does not use Linux. It remains to check how well it is supported on Windows.
Regarding Windows, I found a question about OpenCL distribution which raises some concerns.
Do any of you have experience with using OpenCL in consumer-oriented products under Windows? I am more interested in the GPU side of OpenCL, specifically driver support.
Just like CUDA or Stream, OpenCL needs to be supported by the driver. Most CUDA-capable GPUs support OpenCL with a somewhat up-to-date driver (CUDA 1.0 upwards).
In fact, if you compile with, say, CUDA SDK 4.1, your end users will need newer drivers than if you had used OpenCL.
Also, OpenCL is not bound to any GPU architecture. While this might be problematic for specifically designed algorithms, it shouldn't have a very high impact on normal end user programs.
At least with CUDA, you can only compile code optimized for the currently known major versions. Compiling OpenCL kernels on the end user's machine might allow optimizations for newer binary specifications in the future.
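For illustration, this is roughly what run-time kernel compilation looks like (shown with pyopencl for brevity; the equivalent C calls are clCreateProgramWithSource and clBuildProgram). The kernel itself is just a trivial made-up example:

    # The kernel ships as source text and is compiled by the user's driver at
    # run time, so the binary the GPU executes always comes from whatever
    # driver (and GPU generation) the end user actually has installed.
    import numpy as np
    import pyopencl as cl

    KERNEL_SRC = """
    __kernel void scale(__global float *data, const float factor) {
        int gid = get_global_id(0);
        data[gid] *= factor;
    }
    """

    ctx = cl.create_some_context()                 # picks an available device
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, KERNEL_SRC).build()  # driver compiles for this GPU

    host = np.arange(16, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=host)
    program.scale(queue, host.shape, None, buf, np.float32(2.0))
    cl.enqueue_copy(queue, host, buf)
    print(host)                                    # each element doubled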
The crashes the author of that question reported for NVIDIA OpenCL generally seem to happen a lot if resources are not freed properly. I was seeing similar crashes until I fixed a leak that didn't release created kernels.
I'm not saying that's the only reason it might crash, but apart from programmer errors it appears fairly stable to me.
AMD and NVIDIA both support OpenCL on most (all?) of their GPUs.
Unfortunately, Intel only supports it on the CPU, which is a bit pointless: if you have to insist that the user has a separate GPU for your app, you can just as well insist that they have an NVIDIA one and use CUDA. This has limited the uptake of OpenCL.
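If you do ship OpenCL in a consumer product, it is worth probing for a usable GPU device at startup and falling back gracefully when only a CPU implementation, or no OpenCL runtime at all, is installed on the user's machine. A minimal sketch, again using pyopencl for brevity:

    # Prefer a GPU OpenCL device, fall back to a CPU device, and finally to a
    # plain non-OpenCL code path if no runtime is installed at all.
    import pyopencl as cl

    def pick_device():
        try:
            platforms = cl.get_platforms()
        except cl.Error:            # no usable OpenCL ICD installed
            return None
        devices = [d for p in platforms for d in p.get_devices()]
        gpus = [d for d in devices if d.type & cl.device_type.GPU]
        if gpus:
            return gpus[0]
        cpus = [d for d in devices if d.type & cl.device_type.CPU]
        return cpus[0] if cpus else None

    device = pick_device()
    print("Using:", device.name if device else "plain CPU code path")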

Obsolete Xilinx Chip

My company is trying to build a PCB with an obsolete Xilinx FPGA (XC3042A), which is part of the XC3000 series. Does anyone have experience programming the data to the chip? I'm looking for what software, hardware, etc. people have used.
I have programmed old Xilinx chips (XC4010XL) using a custom built interface to the ISA bus.
I used Turbo C on a DOS box and a home-made ISA card with a '245 (bidirectional transceiver) and a 74LS74 (dual D flip-flop) for the strobe signals, in a slave parallel configuration.
It is not difficult to implement the same using a parallel port, for instance.
You should be able to find the programming specs on the Xilinx website. They provide documentation on the different methods of programming their FPGAs; it should be in their app notes. They have several modes, typically slave serial or SelectMAP (parallel). That means some sort of SPI flash, parallel flash, or JTAG.
If you look around, you may find schematics for a DIY programming cable too! You can also use a small micro, say an 8-bit PIC, to handle the programming spec while you design your own custom interface to it, or interface it to an SD card or something else.
The current Xilinx tools and cables will program old parts.
The XC3000 series does not use the JTAG interface, so you cannot use the Xilinx programmer to download your configuration.
You can instead use either an external EPROM or an embedded processor to download the configuration.
Take a look at this applications note from Xilinx:
http://www.xilinx.com/support/documentation/application_notes/xapp090.pdf
For daisy chain:
http://www.xilinx.com/support/documentation/application_notes/xapp091.pdf
It describes the data format as well as signal info for downloading the configuration file to the FPGA.
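As a rough illustration of the slave serial download sequence those app notes describe, here is a sketch in Python. The set_pin()/read_pin() helpers and the pin names are hypothetical placeholders for whatever parallel-port, PIC, or FTDI interface you build; take the exact XC3000 pin names, bit order, and timing from XAPP090 rather than from this sketch.

    # Sketch of a slave serial configuration download. set_pin(name, level) and
    # read_pin(name) are hypothetical helpers for your own hardware interface;
    # pin names and timing are illustrative only; consult XAPP090.
    import time

    def configure(bitstream_bytes, set_pin, read_pin):
        # 1. Pulse the program/reset input to clear the device and restart
        #    configuration.
        set_pin("PROGRAM", 0)
        time.sleep(0.001)
        set_pin("PROGRAM", 1)

        # 2. Wait for INIT to go high: the FPGA has cleared its configuration
        #    memory and is ready to accept data.
        while not read_pin("INIT"):
            time.sleep(0.001)

        # 3. Shift the bitstream in, one bit per CCLK rising edge.
        for byte in bitstream_bytes:
            for bit in range(7, -1, -1):        # bit order: check XAPP090
                set_pin("DIN", (byte >> bit) & 1)
                set_pin("CCLK", 1)
                set_pin("CCLK", 0)

        # 4. Keep clocking until DONE goes high (or give up).
        for _ in range(1000):
            if read_pin("DONE"):
                return True
            set_pin("CCLK", 1)
            set_pin("CCLK", 0)
        return False                            # configuration did not complete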
You can use an older version of the Xilinx programmer from their website to configure the devices; I believe the last version of the Xilinx tools supporting the 3000 series was version 8, but I am not sure.
Check out FTDI. You might be able to convince them to go with some updated hardware; it's currently $150 CAD for USB + FPGA, and $80 CAD extra if you bundle it with a manual, plus shipping.
It even works with the free WebPACK tools available from the Xilinx website.
