I'm relatively new to the FPGA scene and was looking to get experience with FPGAs and VHDL. I'm not quite sure what the benefit would be over using a standard MCU, but I'm looking for experience since many companies ask for it.
What would be a good platform to start out on and get experience with, for not too much money? I've been looking, and all I can find are $200 - $300 boards, if not thousands. What should one look for in an FPGA development board? I hear high-speed peripheral interfaces, and what I guess I'm really confused about is that an MCU dev board with around 50/100 GPIO can go for around $100, while that same functionality on an FPGA board is much more expensive! I know you can reprogram an FPGA, but so can an MCU. Should I even fiddle with FPGAs? Will the market keep using them, or are we moving towards MCUs only?
Hmm...I was able to find three evaluation boards under $100 pretty quickly:
$79: http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&No=593
$79: http://www.arrownac.com/solutions/bemicro-sdk/
$89: http://www.xilinx.com/products/boards-and-kits/AES-S6MB-LX9.htm
As to what to look for in an evaluation board, that depends entirely on what you want to do. If you have a specific design task to accomplish, you want a board that supports as many of the same functions and I/O as your final circuit will need. You can get boards with various memory options (SRAM, DDR2, DDR3, Flash, etc.), Ethernet, PCI/PCIe bus, high-speed optical transceivers, and more. If you just want to get started, just about any board will work for you. Virtually anything sold today should have enough space for even non-trivial example designs (i.e., build your own microcontroller with a soft-core CPU and design/select-your-own peripheral mix).
Even if your board only has a few switches and LEDs, you can get started designing a hardware "Hello World" (a.k.a. the blinking LED :), simple state machines, and many other applications; a minimal VHDL sketch of the blinker follows the list below. Where you start and what you try to do should depend on your overall goals. If you're just looking to gain general experience with FPGAs, I suggest:
Start with any of the low-cost evaluation boards
Run through their demo application (typically already programmed into the HW) to get familiar with what it does
Build the demo program from source and verify it works to get familiar with the FPGA tool chain
Modify the demo application in some way to get familiar with designing hardware for FPGAs
Use your new-found experience to determine what to try next
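For reference, the blinking-LED "Hello World" mentioned above is only a few dozen lines of VHDL. This is just a minimal sketch: the port names and the 50 MHz clock frequency are assumptions, so adapt them to whatever your board actually provides.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity blink is
      generic (
        CLK_HZ : natural := 50_000_000  -- assumed board clock; change to match yours
      );
      port (
        clk : in  std_logic;
        led : out std_logic
      );
    end entity;

    architecture rtl of blink is
      signal counter   : unsigned(31 downto 0) := (others => '0');
      signal led_state : std_logic := '0';
    begin
      -- Divide the board clock down to roughly 1 Hz by toggling the LED
      -- every CLK_HZ/2 cycles.
      process (clk)
      begin
        if rising_edge(clk) then
          if counter = to_unsigned(CLK_HZ / 2 - 1, counter'length) then
            counter   <= (others => '0');
            led_state <= not led_state;
          else
            counter <= counter + 1;
          end if;
        end if;
      end process;

      led <= led_state;
    end architecture;

Hook led up to any LED pin in your board's constraints file and you have a visible sign of life to build on.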
As for the market continuing to use FPGAs, they are definitely here to stay, but that does not mean they are suitable for every application. An MCU by itself is fine for many applications, but cannot handle everything. For example, you can easily "bit-bang" an I2C or even serial UART with most micro-controllers, but you would be hard pressed to talk to an Ethernet port, a VGA display, or a PCI/PCIe bus without some custom hardware. It's up to you to decide how to mix the available technology (MCUs, FPGAs, custom logic designed in-house, licensed IP cores, off-the-shelf standard hardware chips, etc) to create a functional product or device, and there typically isn't any single 'right' answer.
FPGAs win over microcontrollers if you need some or all of:
Huge amounts of maths to be done (even more than a DSP makes sense for)
Huge amounts of memory bandwidth (often goes hand in hand with the previous point - not much point having lots of maths to do if you have no data to do it on!)
Extremely predictable hard real-time performance - the timing analyser will tell you how fast you can clock your device given the logic you've designed. You can (with a certain - high - statistical likelihood) "guarantee" operation at that speed. And therefore you can design logic which you know will always meet certain real-time response times, even if those deadlines are in the nanosecond realm.
If not, then you are likely better off with a micro or DSP.
The OpenCores web site is an excellent resource, especially the Programming Tools section. The articles link on the site is a good place to start to survey FPGA boards.
The biggest advantage of an FPGA over a microprocessor is architecture. The microprocessor has a fixed set of functional units that solve most problems reasonably well. I've seen computational-efficiency figures for microprocessors from 6% to 15%. In an FPGA you are creating functional units specifically for your problem and nothing else, so you can reach 90-100% computational efficiency.
As for the difference in cost, think of volume sales. High volume of microprocessor sales vs. relatively lower FPGA sales.
This is more of a general question about FPGA design than a specific question about code. I studied computer science but have been trying to learn more about hardware recently. I’ve been using a Xilinx FPGA to teach myself VHDL and some of the basics about hardware design, but I have a lot of gaps in my knowledge that have led to me hitting some pretty big walls in my projects. This is the most recent one.
I have a design with a couple dozen "workers". Part of the design's functionality depends on these workers executing compute-heavy tasks. In order to save FPGA resources, I have the workers share the computing circuitry, and another module schedules access to that circuitry between the workers. The logic itself works fine and I've tested it in the simulator; however, when I try to implement the design on the FPGA itself, it never meets the timing requirements. A look at the diagram in Vivado showed me that the placer puts all of the shared computing circuitry on one side of the FPGA and all of the workers on the other side. Additionally, the routes that carry data from the workers to the computing circuitry meet timing, but the routes that carry the results back to the workers are almost all failing.
So, my question is: what solutions are typically used to fix data-transfer problems like this in hardware design? I know that I could lower the clock rate to give the signals more time to move around, but I'm hesitant to do that since it would decrease the overall throughput of my design. On the other hand, I could place a few buffers between the shared computing circuitry and the workers (acting like a shift register), at the cost of increasing the compute time for the individual workers. What other techniques or design patterns are there for moving data around between points in an FPGA that are far apart?
Indeed, the solutions you propose to reduce timing violations are right, and they are the most common ones.
You can also:
Modify the synthesis and implementation directives in Vivado to favor timing optimization over resource utilization or runtime (of the synthesis and implementation).
Rework your compute unit to ensure that there is a buffer (register) after all of your logic. There are different ways to segment your compute unit between its sequential and combinational parts.
Place and route critical parts of your design yourself. I have never done it, but I know it's possible (at least by setting location constraints in the .xdc).
About adding buffers on the critical paths: if you can make the architecture pipelined, you will only add one clock cycle of latency (not a high cost to ensure your design works correctly).
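To make the pipelining suggestion concrete, here is a minimal sketch of a register-stage module you could drop onto the return path from the shared compute unit to the workers. The names, widths, and the valid flag are assumptions for illustration, not taken from your design; the point is simply that each extra stage gives the placer a register it can spread across the die, splitting one long route into several short ones.

    library ieee;
    use ieee.std_logic_1164.all;

    entity result_pipe is
      generic (
        WIDTH  : natural := 32;  -- assumed result width
        STAGES : natural := 2    -- number of register stages on the long route
      );
      port (
        clk        : in  std_logic;
        result_in  : in  std_logic_vector(WIDTH-1 downto 0);
        valid_in   : in  std_logic;
        result_out : out std_logic_vector(WIDTH-1 downto 0);
        valid_out  : out std_logic
      );
    end entity;

    architecture rtl of result_pipe is
      type data_array is array (0 to STAGES-1) of std_logic_vector(WIDTH-1 downto 0);
      signal data_regs  : data_array;
      signal valid_regs : std_logic_vector(0 to STAGES-1) := (others => '0');
    begin
      -- A simple shift-register pipeline: each stage is a register the
      -- placer/router can spread across the die, shortening every route.
      process (clk)
      begin
        if rising_edge(clk) then
          data_regs(0)  <= result_in;
          valid_regs(0) <= valid_in;
          for i in 1 to STAGES-1 loop
            data_regs(i)  <= data_regs(i-1);
            valid_regs(i) <= valid_regs(i-1);
          end loop;
        end if;
      end process;

      result_out <= data_regs(STAGES-1);
      valid_out  <= valid_regs(STAGES-1);
    end architecture;

Throughput is unchanged; the workers only have to tolerate STAGES extra cycles of latency on each result.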
In my thesis, I plan on writing a section comparing the real-time capabilities of single-board computers:
the factors involved (whether they really have a real-time clock; even if they don't have one, can real-time frameworks or an RTOS be used to give them real-time properties, and how?)
what scheduling is used in their out-of-the-box kernel? (for example, if Round-robin is used, then AFAIK real-time scheduling cannot be achieved)
a comparison between the Pandaboard, Beagleboard, Beaglebone, and especially the Raspberry Pi
If you have a resource or idea regarding this, I would really appreciate it. In case I have missed any information, please say so and I'd be happy to provide it.
Thanks in advance.
EDIT:
I found a good answer here, but I can always appreciate any better guidance.
What makes a kernel/OS real-time?
First, an observation: scheduling is an OS concept, so why would it matter which scheduler is used in the out-of-the-box kernel? (If indeed there is such a thing as an out-of-the-box kernel.) Having said that, real-time behaviour is affected by both the scheduler and the hardware. But when comparing boards, I would keep the scheduler constant (or maybe pick a few) and then compare the boards. Choosing the scheduler(s) is a separate topic on its own. A couple of things to take into account are that it should be pre-emptive and able to deal with issues like priority inversion.
Note that all these boards have an MMU, which will introduce latency. That shouldn't really matter, though, as long as that latency is bounded. I'd also compare the accuracy of the crystals on which the clocks are based. Note also that SoCs have low-power modes and tend to switch clocks: whenever they come out of an LP mode, they switch from some internal oscillator to a more accurate clock source like an external crystal, which requires time for the crystal to stabilise before normal operation can continue. Comparing the latency involved in switching between power modes will also be a useful determinant.
I'm super excited about my program powering a little seven-segment display, but when I show it off to people not in the field, they always say "well what can you do with it?" I'm never able to give them a concise answer. Can anyone help me out?
First: They don't need to have volatile memory.
Indeed the big players (Xilinx, Altera) usually have their configuration on-chip in SRAM, so you need additional EEPROM/Flash/WhatEver(TM) to store it outside.
But there are others, e.g. Actel is one big player that comes to mind, that have non-volatile configuration storage on their FPGAs (btw, this also has other advantages, as SRAM is usually not very radiation tolerant, and you have to take special measures when you go into orbit).
There are two big things that justify FPGAs:
Price - They are not cheap. But sometimes you can't do something in software and you need hardware for it, and when you are below a certain point in your required volume (e.g. because it's just a small series, or a prototype), an FPGA is MUCH cheaper than an ASIC. Also, while developing ASICs, FPGAs allow much faster turn-around before a final state is reached.
Reconfiguration - You can reconfigure your FPGA, something neither a processor nor an ASIC can do. There are several applications for this. One is when you need the ability to fix something in the design but can't physically get to the device. For example, the Mars orbiters/rovers used Xilinx FPGAs: when someone finds a mistake (or wants to switch to a different coding for transmitting data, or whatever), you can't replace the chip, as it is simply not reachable, but with an FPGA you can just reconfigure and apply your changes. Another scenario is having one single chip that can perform different accelerations depending on the situation. Imagine a smartphone: while telephoning, the FPGA can be configured for audio en-/decoding; while surfing, it can work as a compression engine; while playing videos, it can be configured as an H.264 decoder/accelerator. Another thing you can do is match your hardware to your problem instance. Cisco, for example, uses many FPGAs in their hardware: they need hardware that performs switching/routing/packet inspection at the required speed, and they can generate matching engines from the actual configuration directly into hardware.
Another thing which might come up soon (I know some car manufacturers have thought about it) is in devices which include a lot of different electronics and have a big supply chain. It's more or less a combination of price and reconfiguration: it's more expensive to have 10 different ASICs than 10 FPGAs performing the same tasks, and on top of that it's cheaper to have 10 FPGAs from just one supplier, holding just one type of chip for service and supply, than to have 10 suppliers and 10 different chips to hold and manage in supply and service.
True story.
They allow you to fix design flaws in the custom data-acquisition boards for a multi-million dollar particle physics experiment that become obvious only after you have everything installed and are doing integration work and detector characterization.
You can evolve circuits. This is a bit old-school evolutionary algorithms, but starting from a set of random individuals you can select the circuits that score higher on a fitness function than the rest and breed them to create a new population, ad infinitum. Read up about evolvable hardware; I think this book covers FPGAs: http://www.amazon.co.uk/Introduction-Evolvable-Hardware-Self-Adaptive-Computational/dp/0471719773/ref=sr_1_1?ie=UTF8&qid=1316308403&sr=8-1
Say, for example, you wanted a DSP circuit: you have an input signal and a desired output signal. Starting with a random population, you select perhaps only the fittest (bad) or perhaps a mixture of fit ones and odd ones to create the next generation. After a number of generations you can open the lid and, lo and behold, discover that evolution has taken place and you have a circuit that may even outperform your initial expectations!
Also read the Field Guide to Genetic Programming; it's free on the web somewhere.
There are limitations to software. In software, you're running at the CPU's clock rate, executing on the order of one instruction per clock cycle. In software, everything is high level; you do not control the details that happen at the low level. You'll always be limited by the operating system or development board you are programming. This is true for popular development boards such as Arduinos and the Raspberry Pi.
In FPGA hardware, you can precisely program and control what happens on each clock cycle; your computations are limited only by signal propagation and gate delays in the silicon rather than by a fixed instruction pipeline.
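As a hedged illustration of that cycle-level control (all names here are invented for the example), a single clocked VHDL process can perform several independent operations that all complete on the same clock edge, where a simple single-issue CPU would need several instructions:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity parallel_ops is
      port (
        clk  : in  std_logic;
        a, b : in  unsigned(15 downto 0);
        c, d : in  unsigned(15 downto 0);
        sum  : out unsigned(16 downto 0);
        diff : out unsigned(15 downto 0);
        prod : out unsigned(31 downto 0)
      );
    end entity;

    architecture rtl of parallel_ops is
    begin
      -- All three results are computed by separate hardware and are ready
      -- one clock edge later: the "instructions" execute simultaneously.
      process (clk)
      begin
        if rising_edge(clk) then
          sum  <= resize(a, 17) + resize(b, 17);
          diff <= c - d;
          prod <= a * b;
        end if;
      end process;
    end architecture;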
So an FPGA gives you hardware running at signal-propagation speed, whereas a CPU gives you software limited to a handful of instructions per clock cycle.
So why use an FPGA when we could design our own boards at the printed-circuit-board, transistor level?
This is because FPGAs are programmable hardware! They are built so that you can program the connections of the chip instead of wiring it up for one specific application. This also explains why FPGAs are expensive: they are a sort of 'general' or programmable hardware.
To argue why you should pick FPGAs despite their cost, the programmable-hardware aspect allows:
A longer product cycle (you can update the programmable hardware in products already in customers' hands simply by letting them program your updated HDL code into their FPGA)
Recovery from hardware bugs. You simply let customers download the corrected configuration onto their FPGA. (Note: with fixed hardware designs you cannot do this; you would have to spend millions recalling your products, building new ones, and shipping them back to customers.)
For examples of the cool things FPGAs can do, refer to Cornell's well-known ECE 5760 course.
http://people.ece.cornell.edu/land/courses/ece5760/FinalProjects/
Hope this helps!
Soon Chee Loong,
University of Toronto
FPGAs are also used to test/research circuit designs before mass production starts. This happens in several sectors: image processing, signal processing, etc.
Edit - after a few years we can now see more practical applications, including finance and machine learning:
aerospace
emulation
automotive
broadcast
high performance computers
medical
machine learning
finance (including cryptocoins)
I like this article: http://www.hpcwire.com/hpcwire/2011-07-13/jp_morgan_buys_into_fpga_supercomputing.html
My feeling is that FPGAs can sit directly in your streaming data at the point where it enters the systems under your control. You can then crunch that data without going through the steps a GPGPU would require (bringing the data in off the network, passing it across the PCI Express bus, and crunching it a Gb at a time).
There are good reasons for both, but I think the notion of whether you mind buffering the data is a good bellwether.
Here's another cool FPGA application:
https://ehsm.eu/m-labs.hk/m1.html
Automotive image processing is one interesting domain:
Providing lane-keeping support to the driver (disclosure: I wrote this page!):
http://www.conekt.co.uk/capabilities/50-fpga-for-ldw
Providing an aerial view of a car from 4 fisheye-lens cameras (with video):
http://www.logicbricks.com/Solutions/Surround-View-DA-System/Xylon-Test-Vehicle.aspx
We are checking how fast an algorithm runs on an FPGA vs. a normal quad-core x86 computer.
On the x86 we run the algorithm many times and take the median in order to eliminate OS overhead; this also "cleans" the curve of errors.
That's not the problem.
The FPGA measurement is in cycles, which we then convert to time; with an FSMD it is trivial to count cycles anyway...
We think that counting cycles is too "pure" a measure: it could be done theoretically, without making a real measurement or running the algorithm on the real FPGA.
I want to know whether there is a paper, or some idea, on how to do a real time measurement.
If you are trying to establish that your FPGA implementation is competitive or superior, and therefore might be useful in the real world, then I encourage you to compare wall-clock times on the multiprocessor vs. the FPGA implementation. That will also help ensure you do not overlook performance effects beyond the FSM + datapath (such as I/O delays).
I agree that reporting cycle counts only is not representative because the FPGA cycle time can be 10X that of off the shelf commodity microprocessors.
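One way to turn the cycle count into a genuine on-hardware measurement (rather than a theoretical figure) is to drop a free-running counter next to the design, latch it on the algorithm's start and done strobes, and read the difference back over whatever host interface you already have; multiplying by the real clock period then gives a wall-clock number that includes any stalls or I/O waits. A minimal sketch, with assumed signal names:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity cycle_timer is
      port (
        clk     : in  std_logic;
        reset   : in  std_logic;
        start   : in  std_logic;                 -- pulses when the algorithm begins
        done    : in  std_logic;                 -- pulses when the algorithm finishes
        elapsed : out unsigned(47 downto 0)      -- measured cycles from start to done
      );
    end entity;

    architecture rtl of cycle_timer is
      signal free_count : unsigned(47 downto 0) := (others => '0');
      signal start_time : unsigned(47 downto 0) := (others => '0');
      signal elapsed_r  : unsigned(47 downto 0) := (others => '0');
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if reset = '1' then
            free_count <= (others => '0');
          else
            free_count <= free_count + 1;            -- free-running counter
            if start = '1' then
              start_time <= free_count;              -- remember when we started
            end if;
            if done = '1' then
              elapsed_r <= free_count - start_time;  -- real elapsed cycles
            end if;
          end if;
        end if;
      end process;

      elapsed <= elapsed_r;
    end architecture;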
Now for some additional unsolicited advice. I have been to numerous FCCM conferences, and similar, and I have heard many dozens of FPGA implementation vs. CPU implementation performance comparison papers. All too often, a paper compares a custom FPGA implementation that took months, vs. a CPU+software implementation wherein the engineer just took the benchmark source code off the shelf, compiled it, and ran it in one afternoon. I do not find such presentations particularly compelling.
A fair comparison would evaluate a software implementation that uses best practices and the best available libraries (e.g. Intel MKL or IPP), that uses multithreading across multiple cores, that uses vector SIMD (e.g. SSE, AVX, ...) instead of scalar computation, that uses a profiler to eliminate easily fixed waste, and a tool like VTune to understand and tune the cache+memory hierarchy. Also please be sure to report the actual amount of engineering time spent on the FPGA vs. the software implementation.
More free advice: In these energy focused times where results/joule may trump results/second, consider also reporting the energy efficiency of your implementations.
More free advice: to get the most repeatable times on the "quad x86", be sure to quiesce the machine: shut down background processes, daemons, services, etc., and disconnect the network.
Happy hacking!
For most of my life, I've programmed CPUs; and although for most algorithms, the big-Oh running time remains the same on CPUs / FPGAs, the constants are quite different (for example, lots of CPU power is wasted shuffling data around; whereas for FPGAs it's often compute bound).
I would like to learn more about this -- anyone know of good books / reference papers / tutorials that deals with the issue of:
on what tasks do FPGAs dominate CPUs (in terms of pure speed)?
on what tasks do FPGAs dominate CPUs (in terms of work per joule)?
Note: marked community wiki
[no links, just my musings]
FPGAs are essentially interpreters for hardware!
The architecture is like that of a dedicated ASIC, but in exchange for rapid development you pay a factor of ~10 in frequency and a [don't know, at least 10?] factor in power efficiency.
So take any task where dedicated HW can massively outperform CPUs, divide by the FPGA 10/[?] factors, and you'll probably still have a winner. Typical qualities of such tasks:
Massive opportunities for fine-grained parallelism.
(Doing 4 operations at once doesn't count; 128 does.)
Opportunity for deep pipelining.
This is also a kind of parallelism, but it's hard to apply it to a single task, so it helps if you can get many separate tasks to work on in parallel.
(Mostly) Fixed data flow paths.
Some muxes are OK, but massive random accesses are bad, because you can't parallelize them. But see below about memories.
High total bandwidth to many small memories.
FPGAs have hundreds of small (O(1KB)) internal memories (BlockRAMs in Xilinx parlance), so if you can partition your memory usage into many independent buffers, you can enjoy a data bandwidth that CPUs never dreamed of. (A minimal sketch of inferring one such memory appears at the end of this answer.)
Small external bandwidth (compared to internal work).
The ideal FPGA task has small inputs and outputs but requires a lot of internal work. This way your FPGA won't starve waiting for I/O. (CPUs already suffer from starving, and they alleviate it with very sophisticated (and big) caches, unmatchable in FPGAs.) It's perfectly possible to connect a huge I/O bandwidth to an FPGA (~1000 pins nowadays, some with high-rate SERDESes), but doing that requires a custom board architected for such bandwidth; in most scenarios, your external I/O will be a bottleneck.
Simple enough for HW (aka good SW/HW partitioning).
Many tasks consist of 90% irregular glue logic and only 10% hard work ("kernel" in the DSP sense). If you put all that onto an FPGA, you'll waste precious area on logic that does no work most of the time. Ideally, you want all the muck to be handled in SW and to fully utilize the HW for the kernel. ("Soft-core" CPUs inside FPGAs are a popular way to pack lots of slow irregular logic onto medium area, if you can't offload it to a real CPU.)
Weird bit manipulations are a plus.
Things that don't map well onto traditional CPU instruction sets, such as unaligned access to packed bits, hash functions, coding & compression... However, don't overestimate the factor this gives you; most data formats and algorithms you'll meet have already been designed to go easy on CPU instruction sets, and CPUs keep adding specialized instructions for multimedia.
Lots of floating point specifically is a minus, because both CPUs and GPUs crunch it on extremely optimized dedicated silicon. (So-called "DSP" FPGAs also have lots of dedicated mul/add units, but AFAIK these only do integers?)
Low latency / real-time requirements are a plus.
Hardware can really shine under such demands.
EDIT: Several of these conditions — esp. fixed data flows and many separate tasks to work on — also enable bit slicing on CPUs, which somewhat levels the field.
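Regarding the many-small-memories point above, here is a minimal sketch of the usual way one such memory is inferred in VHDL (depth, width, and port names are assumptions); instantiate it N times and you get N independent memories' worth of aggregate bandwidth:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity small_ram is
      generic (
        ADDR_BITS : natural := 9;   -- 512 entries
        DATA_BITS : natural := 16
      );
      port (
        clk   : in  std_logic;
        we    : in  std_logic;
        waddr : in  unsigned(ADDR_BITS-1 downto 0);
        wdata : in  std_logic_vector(DATA_BITS-1 downto 0);
        raddr : in  unsigned(ADDR_BITS-1 downto 0);
        rdata : out std_logic_vector(DATA_BITS-1 downto 0)
      );
    end entity;

    architecture rtl of small_ram is
      type ram_t is array (0 to 2**ADDR_BITS - 1) of std_logic_vector(DATA_BITS-1 downto 0);
      signal ram : ram_t;
    begin
      -- Synchronous write and synchronous read: synthesis tools recognize this
      -- pattern and map it onto a dedicated block RAM, one per instance, each
      -- with its own independent ports (hence the aggregate bandwidth).
      process (clk)
      begin
        if rising_edge(clk) then
          if we = '1' then
            ram(to_integer(waddr)) <= wdata;
          end if;
          rdata <= ram(to_integer(raddr));
        end if;
      end process;
    end architecture;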
Well, the newest generation of Xilinx parts just announced boasts 4.7 TMACs and general-purpose logic at 600 MHz. (These are basically Virtex-6s fabbed on a smaller process.)
On a beast like this, if you can implement your algorithms in fixed-point operations (primarily multiplies, adds, and subtracts) and take advantage of both wide parallelism and pipelined parallelism, you can eat most PCs alive in terms of both power and processing.
You can do floating point on these, but there will be a performance hit. The DSP blocks contain a 25x18-bit MACC with a 48-bit sum. If you can get away with oddball formats and bypass some of the floating-point normalization that normally occurs, you can still eke out a truckload of performance from them (i.e. use the 18-bit input as straight fixed point, or as a float with a 17-bit mantissa instead of the normal 24-bit one). Double-precision floats are going to eat a lot of resources, so if you need those, you will probably do better on a PC.
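For what it's worth, a multiply-accumulate that maps onto one of those DSP slices is usually described in a few lines of VHDL. The 25x18-bit operand widths below match the figures quoted above; the rest (names, clear behaviour) is an assumption for this sketch:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity macc is
      port (
        clk : in  std_logic;
        clr : in  std_logic;                 -- clear the accumulator
        a   : in  signed(24 downto 0);       -- 25-bit operand
        b   : in  signed(17 downto 0);       -- 18-bit operand
        acc : out signed(47 downto 0)        -- 48-bit running sum
      );
    end entity;

    architecture rtl of macc is
      signal acc_r : signed(47 downto 0) := (others => '0');
    begin
      -- One multiply and one add per clock; synthesis maps this onto a single
      -- DSP slice, so hundreds of them can run in parallel.
      process (clk)
      begin
        if rising_edge(clk) then
          if clr = '1' then
            acc_r <= (others => '0');
          else
            acc_r <= acc_r + resize(a * b, 48);
          end if;
        end if;
      end process;

      acc <= acc_r;
    end architecture;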
If your algorithms can be expressed in terms of add and subtract operations, the general-purpose logic in these can be used to implement a gazillion adders. Things like Bresenham's line/circle/yadda/yadda/yadda algorithms are VERY good fits for FPGA designs.
IF you need division... EH... it's painful, and probably going to be relatively slow unless you can implement your divides as multiplies.
If you need lots of high-precision trig functions, not so much... Again, it CAN be done, but it's not going to be pretty or fast. (Just like it can be done on a 6502.) If you can cope with just using a lookup table over a limited range, then you're golden!
Speaking of the 6502, a 6502 demo coder could make one of these things sing. All the old math tricks that programmers used on old-school machines like that still apply. All the tricks that modern programmers are told to "let the library do for you" are exactly the things you need to know to implement maths on these. If you can find a book that talks about doing 3D on a 68000-based Atari or Amiga, it will discuss a lot of how to implement things in integer-only math.
ACTUALLY, any algorithm that can be implemented using lookup tables will be VERY well suited for FPGAs. Not only do you have block RAMs distributed throughout the part, but the logic cells themselves can be configured as various-sized LUTs and mini RAMs.
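As a hedged example of the lookup-table style (the table size, sample width, and names are all assumptions chosen for this sketch), a sine table can be written as a constant array that the tools drop into a block RAM or LUTs:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;
    use ieee.math_real.all;  -- only used at elaboration time to fill the table

    entity sine_lut is
      port (
        clk   : in  std_logic;
        phase : in  unsigned(7 downto 0);   -- 256 steps per period
        value : out signed(15 downto 0)     -- 16-bit sine sample
      );
    end entity;

    architecture rtl of sine_lut is
      type rom_t is array (0 to 255) of signed(15 downto 0);

      -- Fill the table once, at elaboration time, with one full sine period.
      function init_rom return rom_t is
        variable rom : rom_t;
      begin
        for i in rom_t'range loop
          rom(i) := to_signed(integer(round(
                      32767.0 * sin(2.0 * MATH_PI * real(i) / 256.0))), 16);
        end loop;
        return rom;
      end function;

      constant ROM : rom_t := init_rom;
    begin
      -- One registered table lookup per clock; synthesis places the constant
      -- array in block RAM or LUTs depending on its size.
      process (clk)
      begin
        if rising_edge(clk) then
          value <= ROM(to_integer(phase));
        end if;
      end process;
    end architecture;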
You can view things like fixed bit manipulations as FREE! They're simply handled by routing. Fixed shifts or bit reversals cost nothing. Dynamic bit operations like shifting by a variable amount will cost a minimal amount of logic and can be done till the cows come home!
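For instance, a fixed bit reversal in VHDL is nothing more than a renaming of wires, so it synthesizes to routing alone and consumes no logic (the 16-bit width is an arbitrary choice for this sketch):

    library ieee;
    use ieee.std_logic_1164.all;

    entity bit_reverse is
      port (
        din  : in  std_logic_vector(15 downto 0);
        dout : out std_logic_vector(15 downto 0)
      );
    end entity;

    architecture rtl of bit_reverse is
    begin
      -- Pure wiring: bit i of the input drives bit (15 - i) of the output.
      gen : for i in 0 to 15 generate
        dout(i) <= din(15 - i);
      end generate;
    end architecture;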
The biggest part has 3,960 multipliers! And 142,200 slices, EACH of which can be an 8-bit adder (four 6-input LUTs per slice, or eight 5-input LUTs per slice, depending on configuration).
Pick a gnarly SW algorithm. Our company does HW acceleration of SW algo's for a living.
We've done HW implementations of regular-expression engines that will run thousands of rule-sets in parallel at speeds up to 10 Gb/sec. The target market for that is routers, where anti-virus and IPS/IDS can run in real time as the data streams by, without slowing down the router.
We've done HD video encoding in HW. It used to take several hours of processing time per second of film to convert it to HD. Now we can do it almost in real time; it takes about 2 seconds of processing to convert 1 second of film. Netflix used our HW almost exclusively for their video-on-demand product.
We've even done simple stuff like RSA, 3DES, and AES encryption and decryption in HW. We've done simple zip/unzip in HW. The target market for that is security video cameras. The government has a massive number of video cameras generating huge streams of real-time data. They zip it down in real time before sending it over their network, and then unzip it in real time on the other end.
Heck, another company I worked for used to do radar receivers using FPGAs. They would sample the digitized enemy radar data directly from several different antennas and, from the time delta of arrival, figure out what direction and how far away the enemy transmitter is. Heck, we could even check the unintended modulation on pulse of the signals in the FPGAs to work out the fingerprint of specific transmitters, so we could know that this signal was coming from a specific Russian SAM site that used to be stationed at a different border, and thus track weapons movements and sales.
Try doing that in software!! :-)
For pure speed:
- Parallelizable ones
- DSP, e.g. video filters
- Moving data, e.g. DMA