I'm interested in starting a hobbyist project, where I do some image processing by interfacing HW and SW. I am quite a newbie to this. I know how to do some basic image processing in Matlab using the existing image processing commands.
I personally enjoy working with HW and wanted to use a combination of HW and SW to do this. I've read articles on people using FPGAs, or just basic FPGAs/micro-controllers, to go about doing this.
Here is my question: can someone recommend languages I should consider that will help me with the interfacing on the PC? I imagine the SW part would essentially be a GUI and a placeholder for all the processing that is done on the HW. Also, in terms of selecting the HW and realistically considering what I could do on it, could I get a few recommendations on that too?
Any recommendations will be appreciated!
EDIT: I read a few of the other posts saying the requirements depend directly on what kind of image processing one is doing. Initially, I want to do fingerprint recognition: so filtering, locating unique markers in the image, etc.
It all depends on what you are familiar with, how you plan on doing the interface between FPGA and PC, and generally the scale of what you want to do. Examples could be:
A fast system could for instance consist of a Xilinx SP605 board, using the PCI Express interface to quickly transfer image data between PC and FPGA. For this, you'd need to write a device driver (in C) and a user-space application (I've done this in C++/Qt).
A more realistic hobbyist system could be a Xilinx SP601 board, using Ethernet to transfer data - you'd then just have to write a simple protocol (possibly using raw sockets (no TCP/UDP) to make the FPGA-side Ethernet simpler), which can be done in basically any language offering network access (there's a Xilinx reference design for the SP605 demonstrating this). A minimal sketch of what the PC side of such a raw-socket link could look like is shown after these examples.
The simplest and cheapest solution would be an FPGA board with a serial connection - you probably wouldn't be able to do any "serious" image processing with this, but it should be enough for very simple proof-of-concept stuff, although the smaller FPGA devices used on these boards typically do not have much on-board memory available.
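To make the Ethernet option more concrete, here is a minimal sketch of what the PC side of such a raw-socket link could look like on Linux. It is not the Xilinx reference design - the EtherType, MAC addresses, interface name and file name are placeholder assumptions, and whatever you send has to match the simple protocol you implement on the FPGA side. Raw sockets also require root privileges.

```python
# Minimal sketch of the PC side of a raw-Ethernet link to an FPGA board (Linux only,
# needs root). The EtherType (0x88B5, a local/experimental value), MAC addresses,
# interface and file names are placeholders - they must match whatever simple
# protocol you implement on the FPGA side.
import socket
import struct

INTERFACE = "eth0"                            # NIC connected to the FPGA board
FPGA_MAC  = bytes.fromhex("0200000000fe")     # placeholder destination MAC
HOST_MAC  = bytes.fromhex("020000000001")     # placeholder source MAC
ETHERTYPE = 0x88B5                            # local/experimental EtherType

def send_chunk(sock: socket.socket, payload: bytes) -> None:
    """Wrap a chunk of image data in a bare Ethernet II frame and send it."""
    header = FPGA_MAC + HOST_MAC + struct.pack("!H", ETHERTYPE)
    sock.send(header + payload)

def main() -> None:
    # AF_PACKET/SOCK_RAW gives us whole Ethernet frames, bypassing TCP/UDP entirely.
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETHERTYPE))
    sock.bind((INTERFACE, 0))

    image = open("test_image.raw", "rb").read()   # raw 8-bit pixel data, for example
    chunk = 1024                                  # keep frames well under the 1500-byte MTU
    for offset in range(0, len(image), chunk):
        send_chunk(sock, image[offset:offset + chunk])

    # Read back processed data: each recv() returns one whole frame (header + payload).
    frame = sock.recv(2048)
    processed = frame[14:]                        # strip the 14-byte Ethernet header
    print(f"received {len(processed)} processed bytes back from the FPGA")

if __name__ == "__main__":
    main()
```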
But again, it all depends on what you actually want to do.
Related
I’m a college student and I’m trying to build an underwater robot with my team.
We plan to use an STM32 and an RPi. We will put our controller on the STM32 and the high-level algorithms (like path planning, object detection…) on the RPi. The reason we designed it this way is that the controller needs to run fast, while the high-level algorithms need more compute.
But later I found out there are tons of packages on ROS that support IMUs and other attitude sensors. Therefore, I assume many people build their controllers on a board that can run ROS, such as the RPi.
As far as I know, the RPi is slower than the STM32 and has fewer ports for connecting sensors and motors, which makes me think the RPi is not the right place to run a controller.
So I’m wondering if I design it all wrong?
Robot applications vary so much, and the right structure depends heavily on the use case, so it is difficult to give a standard answer; I'll just share my thoughts for your reference.
In general, I think a Linux SBC (e.g. RPi) + MCU controller (e.g. STM32/ESP32) is a good solution for many use cases. I personally use an RPi + ESP32 for a few robot designs, for these reasons (a rough sketch of how the two sides can talk to each other follows this list):
Linux is not a good real-time OS, while an MCU is good at handling time-critical tasks like motor control and IMU filtering;
some protection mechanisms need to stay reliable even when the central "brain" hangs or the whole system runs into low voltage;
MCUs are cheap, small, and flexible enough to distribute to any part of the robot, which also supports a modular design approach;
many new MCUs are actually powerful enough to handle sophisticated tasks and can offload a lot from the central CPU.
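As mentioned above, here is a rough sketch of what the RPi ("brain") side of such a split can look like, assuming the MCU exposes a simple line-based protocol over a UART. The port name, baud rate, command names and telemetry format are all made up for illustration - the real protocol is whatever you define on the MCU side.

```python
# Rough sketch of the RPi ("brain") side, assuming the MCU speaks a simple
# line-based protocol over UART. Port name, baud rate, command names and the
# telemetry format are made up for illustration.
import time
import serial   # pyserial: pip install pyserial

def main() -> None:
    mcu = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)

    target_depth = 1.5   # metres; in reality this comes from the high-level planner
    while True:
        # High-level setpoint down to the MCU; the fast control loop runs there.
        mcu.write(f"SET_DEPTH {target_depth:.2f}\n".encode())

        # Telemetry back from the MCU, e.g. "IMU <roll> <pitch> <yaw> <depth>".
        line = mcu.readline().decode(errors="ignore").strip()
        if line.startswith("IMU"):
            _, roll, pitch, yaw, depth = line.split()
            print(f"roll={roll} pitch={pitch} yaw={yaw} depth={depth}")

        time.sleep(0.1)   # the RPi loop can be slow; time-critical control stays on the MCU

if __name__ == "__main__":
    main()
```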
I would like to reproduce the experiment by Dr. Adrian Thompson, who used a genetic algorithm to evolve an FPGA configuration that can distinguish between two different sound signals in an extremely efficient way. For more information please visit this link:
http://archive.bcs.org/bulletin/jan98/leading.htm
After some research I found this FPGA board:
http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=836&PartNo=1
Is this board capable of reproducing Dr. Adrian Thompson's experiment, or do I need another one?
Thank you for your support.
In terms of programmable logic, the DE1-SoC is about 20x bigger and has about 70x as much embedded memory. Practically any modern FPGA is bigger than the Xilinx XC6216 cited by his papers, as was linked for you in the other instance of this question you asked.
That said, most modern FPGAs don't allow the same fine granularity of configuration as older FPGAs - the internal routing and block structures are more complex, and FPGA vendors want to protect their products and compel you to use their CAD tools.
In short, yes, the DE1-SoC will be able to contain any design from 12+ years ago. As for replicating the specific functions, you should do some more research to determine if the methods used are still feasible with modern chips and CAD tools.
Edit:
user1155120 elaborated on the features of the XC6216 (see link below) that were of value to Thompson.
Fast Configuration: A larger device will generally take longer to configure, as you have to send more configuration data. That said, I/O interfaces are faster than they were 15 years ago, so it depends on your definition of "fast" (a rough back-of-the-envelope estimate is sketched after this list).
Reconfiguration: Cyclone V chips (like the one in the DE1-SoC) do support partial reconfiguration, but the subscription version of the Quartus II software is required, in addition to a separate license to support PR. I don't believe it supports wildcard reconfiguration, though I could be mistaken.
Memory-Mapped Addressing: The DE1-SoC's internal data can be accessed through the USB-Blaster interface. However, this requires using System Console on the host PC, so it's not direct access.
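For the configuration-time point above, a back-of-the-envelope estimate is easy to do. The bitstream size and interface rates below are illustrative round numbers, not datasheet values - substitute the real figures for your device and configuration interface.

```python
# Back-of-the-envelope configuration-time estimate. The bitstream size and the
# interface rates are illustrative placeholders, not datasheet values.
bitstream_bits = 20e6   # assumed full-configuration bitstream size, in bits

interfaces = {
    "slow serial configuration (10 Mbit/s)": 10e6,
    "fast parallel configuration (400 Mbit/s)": 400e6,
}

for name, bits_per_second in interfaces.items():
    t_ms = bitstream_bits / bits_per_second * 1e3
    print(f"{name}: ~{t_ms:.0f} ms for a full reconfiguration")
```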
I'm using an Arduino to excite and amplify strain gauges on a rod - the resulting voltage will be picked up by the analog inputs available on the Arduino. I need to plot the 'torque' taken by that rod with respect to time on a graph, and the easiest way I see to do this is using the Processing language, as the basic Arduino environment does not provide for graphical display.
Any tips on where to start? I only have prior experience with MATLAB, and a bit with Java.
EDIT: I should add a specific question - how do I assign a variable in Processing to the physical values read on the Arduino (the varying voltage on the analog input)?
Thanks.
Since you have experience with MATLAB, consider using the ArduinoIO API provided by The MathWorks. It basically lets you interface your Arduino with MATLAB - all the pin I/O features are available. So let MATLAB do the plotting and other work for you, and just use your Arduino to collect your data.
I can personally vouch for how useful this API is. It's powering my master's thesis (building Arduino-powered vehicles and doing control on them).
I have a computationally intensive task which I implemented using CUDA, and now I want to make it even faster with FPGAs (if possible).
The system I want to implement is a series of computations, each similar to matrix multiplication in the sense of being parallel. It also has some non-parallel parts in between. It works with large amounts of data.
Although I want it as fast as possible, I have enough time to learn and explore FPGAs.
So here I'm asking for suggestions on how to start: which FPGA should I choose, and where can I learn about it - any websites, online classes, or books? I've decided to do this anyway, but your opinion on whether this will be faster on an FPGA or not would be helpful too.
The big wins from an FPGA over using a GPU come from:
Using non-standard word widths optimised for your application. This allows denser logic, which allows more parallel processing blocks.
Using your knowledge of the required accesses to external RAM to schedule them in hardware more efficiently than a general-purpose memory controller can.
The downside is getting data to and from the FPGA. Draw a data-transfer diagram before you start. Even if the FPGA provides infinite speedup, you might still find it's not worth the effort if there's loads of data to be shuffled to and fro!
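As a sketch of the kind of sanity check that data-transfer diagram should feed, here is a toy estimate of the achievable speedup once transfer time is included. Every number is a placeholder - substitute your own measured data volume, link throughput and compute times.

```python
# Toy estimate of FPGA speedup once transfer time is included.
# Every number here is a placeholder - substitute your own measurements.
bytes_per_job     = 512e6   # data shipped to and from the FPGA per run
link_bytes_per_s  = 3.0e9   # assumed effective PCIe throughput
gpu_time_s        = 2.0     # measured wall time of the current CUDA version
fpga_compute_s    = 0.5     # hoped-for FPGA compute time

transfer_s = bytes_per_job / link_bytes_per_s
fpga_total = transfer_s + fpga_compute_s

print(f"transfer time:  {transfer_s:.2f} s")
print(f"FPGA total:     {fpga_total:.2f} s")
print(f"speedup vs GPU: {gpu_time_s / fpga_total:.2f}x")
```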
It's likely you'll be wanting a PCI Express based board, which is (I imagine) a whole new learning curve before you get to do anything with the FPGA - but if you're up for it, it'll be a very interesting task!
In terms of choosing FPGAs, have a play with the software tools from the various vendors - at the learning stage that's much more important than the chips themselves. You won't find (at this early learning stage) a show-stopper feature in any of the various chips. Also take into account the availability of boards with your required interfaces, and any IP cores you might need for the high-speed interfacing (e.g. PCIe).
You can get a substantial speedup on most parallel problems with an FPGA.
However, in addition to implementing your computation on the FPGA, there's a lot of work involved in getting the data back and forth from the CPU/main memory. This will require implementation of (for example) a PCI Express endpoint in the FPGA logic (bus mastering for maximum speed) and custom drivers on the software side. Most operating systems will require those drivers to be developed in kernel mode.
And you can't just use the most straightforward approach for FPGA programming either. You're going to need to worry about pipelining and clock synchronization in order to maximize throughput.
In other words, it's a substantially difficult task even for engineers with years of FPGA experience. I strongly suggest you find someone to work with on this. Depending on how proprietary your project is, you might find skilled academics willing to work with you as long as you provide them with all materials and publication rights.
If you're determined to go it alone, you'll need some hardware. Many different companies offer FPGAs wired up as accelerators, for example http://www.nallatech.com/pci-express-cards.html
Depending on whether you choose a Xilinx or Altera FPGA, you'll find considerable sample code and tutorials for getting PCI express working.
I have raw data grabbed from a spectrometer that was monitoring a WiFi (802.11b) channel 6
(two laptops in ad-hoc mode pinging each other).
I would like to decode this data in MATLAB.
I see the data as a complex vector of 4.6 million complex samples.
Their spectrum looks quite nice. I am looking for a document a bit less complicated than the IEEE 802.11 standard (which I have).
I can share the measurement data with other people.
There are now a few solutions around for decoding 802.11 using Software Defined Radio (SDR) techniques. As mentioned in a previous answer, there is software based on GNU Radio - specifically gr-ieee802-11 and also 802.11n+. Plus, the higher-end SDR boards like WARP use FPGA-based implementations of 802.11. There are also a bunch of implementations of 802.11 for MATLAB available, e.g. 802.11a.
If your data is really raw then you basically have to build every piece of the signal processing chain in software, which is possible but not really straightforward. Have you checked the relevant Wikipedia page? You might use GNU Radio instead of starting from scratch.
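To give a feel for what "building a piece of the chain" means, here is a heavily simplified sketch of just one stage for the 1 Mbit/s DSSS/DBPSK part of 802.11b: despreading with the 11-chip Barker sequence. It assumes the capture is already at baseband, frequency-corrected and symbol-aligned with an integer number of samples per chip - none of which holds for a truly raw capture - and it ignores scrambling and the PLCP framing, so treat it as a starting point only. It's written in Python/NumPy, but the same structure maps directly to MATLAB; the file name and samples-per-chip value are assumptions.

```python
# Simplified illustration of one 802.11b stage: Barker despreading + DBPSK demapping.
# Assumes a baseband, frequency-corrected, symbol-aligned capture with an integer
# number of samples per chip; ignores scrambling and PLCP framing.
import numpy as np

BARKER = np.array([+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1], dtype=float)

def despread_dbpsk(iq: np.ndarray, samples_per_chip: int) -> np.ndarray:
    """Correlate each 11-chip symbol with the Barker code and demap DBPSK bits."""
    # Spreading waveform at the capture's sample rate (placeholder assumption).
    spread = np.repeat(BARKER, samples_per_chip)
    symbol_len = len(spread)
    n_symbols = len(iq) // symbol_len

    # One complex correlation value per 1 us symbol.
    symbols = np.array([
        np.vdot(spread, iq[k * symbol_len:(k + 1) * symbol_len])
        for k in range(n_symbols)
    ])

    # DBPSK: the information is in the phase change between consecutive symbols.
    phase_diff = np.angle(symbols[1:] * np.conj(symbols[:-1]))
    bits = (np.abs(phase_diff) > np.pi / 2).astype(int)
    return bits

# Example with a capture like yours (file name and sample rate are assumptions):
# iq = np.fromfile("capture.iq", dtype=np.complex64)
# bits = despread_dbpsk(iq, samples_per_chip=2)   # e.g. for a 22 MS/s capture
```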
I have used the IEEE 802.11 standard to encode and decode data in MATLAB.
Encoding data is an easy task.
Decoding is a bit more sophisticated.
I agree with Stan, it is going to be tough doing everything yourself. You may get some ideas from the projects on CGRAN, like:
https://www.cgran.org/wiki/WifiLocalization