I would like to reproduce the experiment by Dr. Adrian Thompson, who used a genetic algorithm to evolve a configuration for a chip (an FPGA) that could distinguish between two different audio signals extremely efficiently. For more information please visit this link:
http://archive.bcs.org/bulletin/jan98/leading.htm
After some research I found this FPGA board:
http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=836&PartNo=1
Is this board capable of reproducing Dr. Adrian Thompson's experiment, or do I need a different one?
Thank you for your support.
In terms of programmable logic, the DE1-SoC is roughly 20x bigger and has roughly 70x as much embedded memory. Practically any modern FPGA is bigger than the Xilinx XC6216 cited by his papers, as was linked for you in the other instance of this question you asked.
That said, most modern FPGAs don't allow the same fine-grained configuration as older FPGAs - the internal routing and block structures are more complex, and FPGA vendors want to protect their products and compel you to use their CAD tools.
In short, yes, the DE1-SoC will be able to contain any design from 12+ years ago. As for replicating the specific functions, you should do some more research to determine whether the methods used are still feasible with modern chips and CAD tools.
Edit:
user1155120 elaborated on the features of the XC6216 (see link below) that were of value to Thompson.
Fast Configuration: A larger device will generally take longer to configure, as you have to send more configuration data. That said, I/O interfaces are faster than they were 15 years ago, so it depends on your definition of "fast".
Reconfiguration: Cyclone V chips (like the one in the DE1-SoC) do support partial reconfiguration, but the subscription version of the Quartus II software is required, in addition to a separate license to support PR. I don't believe it supports wildcard reconfiguration, though I could be mistaken.
Memory-Mapped Addressing: The DE1-SoC's internal data can be accessed through the USB Blaster interface. However, this requires using the System Console on the host PC, so it's not direct access.
I'm interested in starting a hobbyist project, where I do some image processing by interfacing HW and SW. I am quite a newbie to this. I know how to do some basic image processing in Matlab using the existing image processing commands.
I personally enjoy working with HW and wanted to use a combination of HW/SW to be able to do this. I've read articles on people using FPGAs, and just basic FPGAs/microcontrollers, to go about doing this.
Here is my question: can someone recommend languages I should consider that will help me with interfacing on a PC? I imagine the SW part would essentially be a GUI and a placeholder for all the processing that is done on the HW. Also, in terms of selecting the HW and realistically considering what I could do on it, could I get a few recommendations on that too?
Any recommendations will be appreciated!
EDIT: I read a few of the other posts saying requirements are directly related to knowing what kind of image processing one is doing. Well, initially I want to do fingerprint recognition, so filtering and locating unique markers in the image, etc.
It all depends on what you are familiar with, how you plan on doing the interface between FPGA and PC, and generally the scale of what you want to do. Examples could be:
A fast system could for instance consist of a Xilinx SP605 board, using the PCI Express interface to quickly transfer image data between PC and FPGA. For this, you'd need to write a device driver (in C) and a user-space application (I've done this in C++/Qt).
A more realistic hobbyist system could be a Xilinx SP601 board, using Ethernet to transfer data - you'd then just have to write a simple protocol (possibly using raw sockets (no TCP/UDP) to make the FPGA-side Ethernet simpler), which can be done in basically any language offering network access (there's a Xilinx reference design for the SP605 demonstrating this; see the sketch below).
The simplest and cheapest solution would be an FPGA board with a serial connection - you probably wouldn't be able to do any "serious" image processing with this, but it should be enough for very simple proof-of-concept stuff, although the smaller FPGA devices used on these boards typically do not have much on-board memory available.
But again, it all depends on what you actually want to do.
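For the Ethernet option, here is a minimal sketch of what the PC side could look like, assuming Linux raw sockets; the interface name, the FPGA's MAC address, and the EtherType (0x88B5, from the IEEE experimental range) are all placeholders for whatever your own design uses. The FPGA side would parse these frames directly, with no TCP/IP stack needed.

/* Minimal sketch: send an image tile to the FPGA in a raw Ethernet
 * frame (Linux AF_PACKET). Interface name, MAC and EtherType are
 * assumptions. Compile with gcc, run as root. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

#define ETHERTYPE_IMG 0x88B5  /* experimental EtherType, assumption */

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETHERTYPE_IMG));
    if (fd < 0) { perror("socket"); return 1; }

    /* Look up the index of the NIC wired to the FPGA board (name assumed). */
    struct ifreq ifr = {0};
    strncpy(ifr.ifr_name, "eth1", IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_ll addr = {0};
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = ifr.ifr_ifindex;
    addr.sll_halen   = ETH_ALEN;
    /* The FPGA's MAC address - whatever you gave your Ethernet core. */
    memcpy(addr.sll_addr, "\x02\x00\x00\x00\x00\x01", ETH_ALEN);

    /* 14-byte Ethernet header + payload (one dummy tile of pixels). */
    unsigned char frame[14 + 256];
    memcpy(frame, addr.sll_addr, 6);     /* destination MAC */
    memset(frame + 6, 0x02, 6);          /* dummy source MAC */
    frame[12] = ETHERTYPE_IMG >> 8;
    frame[13] = ETHERTYPE_IMG & 0xff;
    memset(frame + 14, 0xAA, 256);       /* dummy pixel data */

    if (sendto(fd, frame, sizeof frame, 0,
               (struct sockaddr *)&addr, sizeof addr) < 0)
        perror("sendto");
    close(fd);
    return 0;
}

Raw frames keep the FPGA logic down to MAC-level framing, which is exactly why skipping TCP/UDP makes the FPGA side simpler.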
Assuming there is a task (e.g. an image processing method with a lot of math) which is reasonable to implement on an FPGA in the sense of this answer: https://stackoverflow.com/a/8695228/544463
Is there any known (that you can actually name) successful application or practice of combining it with a "dedicated" (custom-designed) supercomputing cluster (HPC), e.g. with an InfiniBand stack? I wonder whether that has already been done and to what extent it was successful.
My main motivation for the question is that http://en.wikipedia.org/wiki/Reconfigurable_computing is a long-term (academic) perspective for the future development of cluster computing, as a distinctive alternative to cloud computing (the latter concentrates more on flexibility at the software (higher) level, but also achieves it through possible "reconfiguration"). Is it already practical?
I would also expect that somebody is doing research on this... It would be nice to learn about their results.
Well, it's not FPGA, but D. E. Shaw's Anton computer for molecular dynamics is famously ASICs connected with a custom high-speed network; J. P. Morgan uses clusters of FPGAs in its risk-analysis calculations (recent Forbes article here). Convey Computer has been pushing FPGA + x86 + high-speed networking fairly hard for the past couple of years, so presumably there's some sort of market there...
http://www.maxeler.com/ - they build racks of Intel PCs hosting custom boards stuffed with FPGAs (and - critically - the associated software and FPGA code) to speed up seismic processing, financial analysis and the like.
I think they could be regarded as successful (I gather they turn a profit) and have big customers from finance and oil companies amongst their clientele.
Is there any known (that you can actually name) successful application or practice of combining it with a "dedicated" (custom-designed) supercomputing cluster (HPC), e.g. with an InfiniBand stack? I wonder whether that has already been done and to what extent it was successful.
It's being attempted academically with Novo-G.
You might be interested in Maxwell.
I know that Cray used to have a series of supercomputers some years ago that combined AMD Opterons with Xilinx FPGAs (iirc) through a HyperTransport bus, basically allowing you to create your own specialized processor for custom workloads. According to their website though, they now seem to have dropped FPGAs in favor of GPUs.
For the current research, there's always Google Scholar...
Update: After a bit of searching, it appears to have been the Cray XT5h, which had the possibility of using FPGA coprocessors...
Some have already been mentioned (Convey, Cray), some not (e.g. BEEcube).
But one of the biggest FPGA clusters I ever heard of is missing:
the Large Hadron Collider at CERN. It produces enormous amounts of data every second (2.7 Terabit/s). They use more than 100 FPGAs to filter and reduce the data down to a manageable rate.
It does not fit your request of being connected to a dedicated HPC cluster, but they are an HPC cluster of their own (at the higher hierarchy levels the FPGAs used are FX parts, which include two PowerPCs, so they are also a kind of "normal" cluster).
There is quite a lot of published work in reconfigurable computing applications.
Here's a list of links to SRC Computers-centric published papers.
There's the Center for High-Performance Reconfigurable Computing.
Google search "FPGA" or "reconfigurable" along with these academic institution names and you'll find many published papers. Some of the papers you'll find go back to 2004.
Jackson State University
Clemson University
Catholic University
George Washington University
George Mason University
National Center for Supercomputing Applications (NCSA)
University of Illinois (UIUC)
Naval Postgraduate School (NPS)
Air Force Research Lab (AFRL)
University of Dayton Research Institute (UDRI)
University of Florida
University of Arkansas
There also was a reconfigurable-centric conference hosted by NCSA, the Reconfigurable Systems Summer Institute (RSSI).
This list is certainly not exhaustive, but it will get you started.
Disclosures: I currently work for SRC Computers, LLC, I worked at NCSA/UIUC and I chaired the RSSI conference its first two years.
Yet another great use case is the Parallella board developed by Adapteva (they have a Kickstarter project).
They are developing the Epiphany series of processors, controlled by a dual-core ARM processor that shares the board.
I am very much looking forward to having this toy in my hands!
PS
Since it was largely inspired by Arduino (and similar ARM-like) systems, this project is still limited to 1 Gbps networking.
At university I programmed an FPGA in a C-like language. However, I also know that one usually programs FPGAs in Verilog or VHDL. Is this a design choice? If so, what are the performance drawbacks?
I would ideally like to program the FPGA in a C-like language, rather than VHDL.
I was thinking of getting a Xilinx Virtex-5, if that makes any difference.
FPGAs are not processors. C is a language designed for processors.
Yes, there are C to FPGA compilers.
Are they a good idea? I'd say no. The design you end up with is (from what I've seen) normally a state machine that has one state per line of C code. The state machine then moves through the states performing the algorithm. Either that, or some other kind of Turing machine is put in place to execute the code.
This is not how somebody skilled in FPGA design would generally approach a problem. It's a slow and potentially gate-hungry way of doing things.
In the same way that English is a better language to write a novel than Fortran, VHDL and Verilog are better languages to describe logic circuits than C.
If you're serious about using FPGAs, use a language that is designed to describe logic circuits. It might be a steep learning curve, but the results will be much better IMHO.
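To make the state-machine point concrete, here is a toy illustration (my own sketch, not the output of any particular tool) of what a naive C-to-gates flow tends to do with ordinary C:

/* A toy example of what a naive C-to-gates flow does with sequential C. */
int sum(const int *a, int n)
{
    int s = 0;                  /* state 0: initialise accumulator    */
    for (int i = 0; i < n; i++) /* state 1: compare i against n       */
        s += a[i];              /* state 2: load a[i], add, loop back */
    return s;                   /* state 3: done                      */
}
/* A per-statement state machine executes this one state per clock,
 * just like a (slow) processor would. A hardware designer would
 * instead instantiate several adders and sum partial results in
 * parallel. */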
The short answer is "yes, certainly".
Here's an excellent survey of C compilers for FPGAs and FPGA-based systems.
C-to-hardware compiler (HLL synthesis)
Performance drawbacks and considerations are found in the system architecture and communication bandwidths rather than in using C vs. a hardware description language (HDL). The considerations in using C vs. an HDL lie in programming time and software maintenance issues, not so much in performance.
You can install a soft processor core inside the FPGA logic, and run your C code inside the virtual processor. Xilinx has Microblaze (licensed) and Picoblaze (free) cores. There are other soft cores you can implement as well (MIPS, x86, 8051, etc).
However, this is largely considered a "hack", as the cores are very slow compared to real cores. And I think that any C-to-FPGA conversion is ultimately going to start smelling like running a soft core, and not give you the efficiency you deserve for running on an FPGA. FPGAs are not Turing machines; they are a sack of logic gates. You can build a Turing machine out of the gates, but that is not why you bought the sack of gates.
It's sort of like buying a bag of Legos, and building a hammer and a set of nails out of the bricks. It might work, but you are better off buying a hammer to pound nails, and better off building castles, spaceships and fire stations with the Legos.
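For flavour, here is a minimal sketch of what "running your C code inside the virtual processor" typically looks like; the peripheral address is entirely made up, since it depends on how you wire the core's bus in your own design:

/* Minimal sketch of C on a soft core poking a memory-mapped peripheral.
 * The base address is a placeholder - it comes from however you wired
 * the core's bus in your FPGA design, not from any vendor header. */
#include <stdint.h>

#define LED_REG (*(volatile uint32_t *)0x40000000u)  /* assumed address */

int main(void)
{
    uint32_t pattern = 1;
    for (;;) {
        LED_REG = pattern;                          /* drive LEDs      */
        pattern = ((pattern << 1) | (pattern >> 7)) /* rotate 8-bit    */
                  & 0xffu;                          /* pattern         */
        for (volatile int i = 0; i < 100000; i++)
            ;                                       /* crude delay     */
    }
}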
I'd like to add something that I believe is the closest answer to the OP's question. If you're looking for a C-like language (which is not the same as C), you should definitely check out Synflow. The idea is to have a modern language that allows you to design faster without the learning curve of VHDL/Verilog and with no overhead. Also it's free and open source!
Disclosure: I'm co-founder of Synflow :-)
You should have a look at SystemC. The advantages of using a C-based language are plentiful. In particular, from a system design perspective you can take advantage of the fact that your other software (firmware and other low-level stuff) is written in C. Hence, your software team can test against the RTL code at a really early stage.
In 2011, Xilinx bought the company AutoESL, which had developed high-level synthesis with SystemC. Xilinx reused the name when it released its product "AutoESL". Especially with their new Zynq circuit, where a dual-core ARM Cortex-A9 is embedded together with the FPGA logic, this will probably become a powerful tool for system development.
There are indeed some compilers that allow you to infer hardware circuits (i.e., derive them from an incomplete description) using a high-level language like C. "C-to-gates" is in fact a popular buzzword. The image companies advertise is that programmers are able to write hardware if the language they use is one they have already used to describe software. This is incredibly wrong for a number of reasons, chief among them being the fundamental differences between the execution model assumed by a language like C and that of a hardware description language.
An illustrative example: C assumes at its heart a large, randomly accessible, linearly addressed memory - an assumption that rarely holds for hardware. A C-to-gates compiler faces the challenging task of interpreting the behavior of the program described, and designing a hardware circuit with the same behavior.
While C-like languages are a great productivity tool in limited use cases, these compilers certainly don't allow you to suddenly know how to design hardware if you are familiar with C.
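Here is a hedged sketch of that memory-model point, using two toy loops of my own (no particular compiler implied):

/* Two functionally similar loops that fare very differently in a
 * C-to-gates flow. */

/* Pointer chasing: every step needs the big linear RAM that C assumes,
 * and the next address isn't known until the previous load finishes -
 * almost nothing for the hardware to parallelise. */
struct node { int value; struct node *next; };
int list_sum(const struct node *p)
{
    int s = 0;
    while (p) { s += p->value; p = p->next; }
    return s;
}

/* Fixed-size array with static bounds: a compiler can put the data in
 * on-chip block RAM, unroll the loop, and sum in a tree of adders. */
int array_sum(const int a[64])
{
    int s = 0;
    for (int i = 0; i < 64; i++)
        s += a[i];
    return s;
}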
Hope this helps,
I guess you used Handel-C. It's a subset of C. From what I know, the result is not very optimized; Verilog and VHDL allow for more optimization. I am saying this based on my experience with Handel-C a few years back.
You might want to take a look at C-to-hardware technology, where you can write C code and it will get compiled/translated to VHDL or Verilog. This post lists a few compilers. Haven't used it myself so I don't have any experience with it. Hope this helps!
Many designers write VHDL/Verilog instead of a high-level language, for the same reasons that many programmers used to (and still do in some cases) write assembly instead of Java: you can tune resource usage and performance at a low level. Both VHDL and Verilog are languages designed for designing hardware. C is not. Given enough time, you could always write a program in VHDL/Verilog that will outperform a high-level language program. What an HLL gives you is 1) faster development, 2) ease of maintenance, and 3) possibly greater portability.
There have been many efforts to compile existing high-level programming languages (C is one) to FPGA targets. Most of them do, in fact, generate optimized code. Impulse C, for example, is a subset of C with some add-on libraries that support process-level parallelism, plus a compiler that optimizes the C input for instruction-level parallelism, too. It pipelines loops, maps certain operations to high-performance hardware primitives it knows the underlying FPGA family provides, etc. (Full disclosure: I helped build the Impulse C toolchain.)
The C-to-hardware environments list Carlito and David Pointer link to is pretty exhaustive. Xilinx Virtex-5 is supported by many of them, and if you're using any recent FPGA family from a major vendor, choice of hardware shouldn't be a problem. Some of the HLL environments support built-in (or softcore) embedded CPUs better than others.
I have a computationally intensive task which I implemented using CUDA, and now I want to make it even faster with FPGAs (if possible).
The system I want to implement is a series of computations, each similar to matrix multiplication in the sense of being parallel. It also has some non-parallel parts in between. It works with large amounts of data.
Although I want it as fast as possible, I have enough time to learn and explore with FPGAs.
So here I'm asking for suggestions on how to start my path: which FPGA to choose, and where to learn about it - any websites, online classes, or books? I've decided to do this anyway, but your opinion on whether this will be faster on an FPGA would be helpful too.
The big wins from an FPGA over using a GPU come from:
Using non-standard word widths optimised to your application. This allows denser logic, which allows more parallel processing blocks (see the sketch after this list).
Using your knowledge of the required accesses to external RAM to schedule them in hardware more efficiently than a general-purpose memory controller can.
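To illustrate the word-width point, here is a hedged sketch in plain C that merely models what you'd declare directly in an HDL; the 12- and 28-bit widths are arbitrary choices for the example:

/* Model of a 12-bit x 12-bit multiply-accumulate with a 28-bit
 * accumulator. In C we simulate the narrow widths by masking, but in
 * an FPGA you would literally declare 12- and 28-bit signals, and the
 * synthesiser builds only that much logic - far less than a 32-bit
 * MAC, so more of them fit in parallel. */
#include <stdint.h>

#define MASK(bits) ((1u << (bits)) - 1u)

uint32_t mac12(uint32_t acc, uint32_t a, uint32_t b)
{
    a &= MASK(12);                        /* 12-bit operand           */
    b &= MASK(12);                        /* 12-bit operand           */
    uint32_t prod = (a * b) & MASK(24);   /* 12x12 -> 24-bit product  */
    return (acc + prod) & MASK(28);       /* 28-bit accumulator       */
}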
The downside is getting data to and from the FPGA. Draw a data-transfer diagram before you start. Even if the FPGA provides infinite speedup, you might still find it's not worth the effort if there's loads of data to be shuffled to and fro!
It's likely you'll be wanting a PCI express based board. Which is (I imagine) a whole new learning-curve before you get to do anything with the FPGA - but if you're up for it, it'll be a very interesting task!
In terms of choosing FPGAs, have a play with the software tools from the various vendors - at the learning stage that's much more important than the chips themselves. You won't find (at this early learning stage) a show-stopper feature in any of the various chips. Also take into account the availability of boards with your required interfaces, and any IP cores you might need for the high-speed interfacing (e.g. PCIe).
You can get a substantial speedup on most parallel problems with an FPGA.
However, in addition to implementing your computation on the FPGA, there's a lot of work involved in getting the data back and forth from the CPU/main memory. This will require implementation of (for example) a PCI Express endpoint in the FPGA logic (bus mastering for maximum speed) and custom drivers on the software side. Most operating systems will require those drivers to be developed in kernel mode.
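As a hedged sketch of the software half only, the user-space side often ends up looking something like the following; the device node, mapping size, and "kick off" mechanism are all placeholders for whatever your own kernel driver exposes:

/* Hedged sketch of the user-space side of a PCIe FPGA accelerator.
 * "/dev/fpga0" and the 1 MiB mapping size are placeholders - they
 * depend entirely on the kernel driver you write for your board. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fpga0", O_RDWR);   /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }

    /* Map the driver's DMA buffer so we can fill it with input data. */
    size_t len = 1 << 20;
    uint32_t *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    for (size_t i = 0; i < len / sizeof *buf; i++)
        buf[i] = (uint32_t)i;              /* dummy input data */

    /* Kick off the computation - typically an ioctl or a register
     * write exposed by the driver; elided here. */

    munmap(buf, len);
    close(fd);
    return 0;
}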
And you can't just use the most straightforward approach for FPGA programming either. You're going to need to worry about pipelining and clock synchronization in order to maximize throughput.
In other words, it's a substantially difficult task even for engineers with years of FPGA experience. I strongly suggest you find someone to work with on this. Depending on how proprietary your project is, you might find skilled academics willing to work with you as long as you provide them with all materials and publication rights.
If you're determined to go it alone, you'll need some hardware. Many different companies offer FPGA wired up as accelerators, for example http://www.nallatech.com/pci-express-cards.html
Depending on whether you choose a Xilinx or Altera FPGA, you'll find considerable sample code and tutorials for getting PCI express working.
In my quest to get some basics down before I start going into programming, I am looking for essential knowledge about how the computer works down at the core level.
I have a theory that actually understanding what, for instance, a stack overflow (let alone a stack) is - instead of relying on my sporadic knowledge of computer systems - will help me in the longer term.
Are there any books or sites that take you through how processors are structured, give a holistic overview, and somehow relate that to the digital logic that is good to know about?
Am I making sense?
Yes, you should read some topics of
John L. Hennessy & David A. Patterson, "Computer Architecture: A Quantitative Approach"
It covers microprocessor history and theory (starting with RISC architectures - MIPS), pipelining, memory, storage, etc.
David Patterson is a Professor of Computer Science in the EECS Department at UC Berkeley. http://www.eecs.berkeley.edu/~pattrsn/
Hope it helps, here's the link
Tanenbaum's Structured Computer Organization is a good book about how computers work. You might find it hard to get through the book, but that's mostly due to the subject, not the author.
However, I'm not sure I would recommend taking this approach. Understanding how the computer works can certainly be useful, but if you don't really have any programming knowledge, you can't really put your knowledge to good use - and you probably don't need that knowledge yet anyway. You would be better off learning about topics like object-oriented programming and data structures to learn about program design, because unless you're looking at doing embedded programming on very limited systems, you'll find those skills far more useful than knowledge of a computer's inner workings.
In my opinion, 20 years ago it was possible to understand the whole spectrum from BASIC all the way through operating system, hardware, down to the transistor or even quantum level. I don't know that it's possible for one person to understand that whole spectrum with today's technology. (Years ago, everyone serviced their own car. Today it's too hard.)
Some of the "layers" that you might be interested in:
http://en.wikipedia.org/wiki/Boolean_logic (this will be helpful for programming)
http://en.wikipedia.org/wiki/Flip-flop_%28electronics%29
http://en.wikipedia.org/wiki/Finite-state_machine
http://en.wikipedia.org/wiki/Static_random_access_memory
http://en.wikipedia.org/wiki/Bus_%28computing%29
http://en.wikipedia.org/wiki/Microprocessor
http://en.wikipedia.org/wiki/Computer_architecture
It's pretty simple really - the CPU loads instructions and executes them. Most of those instructions revolve around loading values into registers or memory locations, and then manipulating those values. Certain memory ranges are set aside for communicating with the peripherals attached to the machine, such as the screen or hard drive.
Back in the days of the Apple ][ and Commodore 64 you could put a value directly into a memory location and that would directly change a pixel on the screen - those days are long gone; it is abstracted away from you (the programmer) by several layers of code, such as drivers and the operating system.
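If it helps to see "loads instructions and executes them" in code, here is a toy fetch-decode-execute loop with a made-up three-instruction machine (nothing like a real CPU's encoding):

/* A toy fetch-decode-execute loop: an invented 3-instruction machine,
 * just to make the idea concrete. Real ISAs are far richer, but the
 * skeleton - fetch, decode, execute, repeat - is the same. */
#include <stdio.h>

enum { LOAD = 0, ADD = 1, HALT = 2 };   /* made-up opcodes */

int main(void)
{
    /* "Memory": each instruction is {opcode, operand}. The program
     * loads 5 into the accumulator, adds 7, then halts. */
    int mem[][2] = { {LOAD, 5}, {ADD, 7}, {HALT, 0} };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        int op  = mem[pc][0];            /* fetch + decode */
        int arg = mem[pc][1];
        pc++;
        switch (op) {                    /* execute */
        case LOAD: acc = arg;   break;
        case ADD:  acc += arg;  break;
        case HALT: running = 0; break;
        }
    }
    printf("accumulator = %d\n", acc);   /* prints 12 */
    return 0;
}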
You can learn about this sort of stuff, or assembly language (which I am a huge fan of), or AND/NAND gates at the hardware level, but knowing this sort of stuff is not going to help you code up a web application in ASP.NET MVC, or write a quick and dirty Python or PowerShell script.
There are lots of resources sprinkled around the net that will give you insight into how the CPU and the rest of the hardware work, but if you want to get down and dirty I honestly think you should buy one of those older machines off eBay or somewhere, and learn its particular flavour of assembly language (I understand there are also a lot of programmable PIC controllers out there that might be good to learn on). Picking up an older machine will eliminate the software abstractions and make things far easier to learn. You learn much better when you get instant gratification, like making sprites move around a screen or generating sounds by directly toggling the speaker (or using a PIC controller to drive a small robot). With those older machines, the schematics for an Apple ][ motherboard fit onto a roughly A2-sized sheet of paper that was folded into the back of one of the Apple manuals - I would hate to imagine what they look like these days.
While I agree with the previous answers insofar as it is incredibly difficult to understand the entire process, we can at least break it down into categories, from lowest (closest to electrons) to highest (closest to what you actually see).
Lowest
Solid State Device Physics (How transistors work physically)
Circuit Theory (How transistors are combined to create logic gates)
Digital Logic (How logic gates are put together to create digital functions or digital structures, i.e. multiplexers, full adders, etc. - see the sketch after this list)
Hardware Organization (How the data path is laid out in the CPU, the components of a von Neumann machine -> memory, processor, arithmetic logic unit, fetch/decode/execute)
Microinstructions (Bit-level programming)
Assembly (Programming with words, but directly specifying registers; takes forever to program even simple things)
Interpreted/Compiled Languages (Programming languages that get compiled or interpreted to assembly; the operating system may be in one of these)
Operating System (Process scheduling, hardware interfaces, abstracts lower levels)
Higher level languages (these kind of appear twice; it depends on the language. Java is done at a very high level, but C goes straight to assembly, and the C compiler is probably written in C)
User Interfaces/Applications/Gui (Last step, making it look pretty)
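To give one concrete taste of the Digital Logic layer, here is the full-adder sketch mentioned in the list: a one-bit full adder written gate-by-gate in C, with the same two-XOR/two-AND/one-OR structure you'd draw on paper.

/* A 1-bit full adder expressed with the same gates you'd draw:
 * sum = a XOR b XOR carry_in, carry_out = majority(a, b, carry_in). */
#include <stdio.h>

void full_adder(int a, int b, int cin, int *sum, int *cout)
{
    *sum  = a ^ b ^ cin;                  /* two XOR gates    */
    *cout = (a & b) | ((a ^ b) & cin);    /* two ANDs, one OR */
}

int main(void)
{
    /* Exhaustive truth table - all 8 input combinations. */
    for (int i = 0; i < 8; i++) {
        int a = i & 1, b = (i >> 1) & 1, cin = (i >> 2) & 1;
        int s, c;
        full_adder(a, b, cin, &s, &c);
        printf("a=%d b=%d cin=%d -> sum=%d cout=%d\n", a, b, cin, s, c);
    }
    return 0;
}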
You can find out a lot about each of these. I'm only somewhat expert in the digital logic side of things. If you want a thorough tutorial on digital logic from the ground up, go to the electrical engineering menu of my website:
affablyevil.wordpress.com
I'm teaching the class, and adding online lessons as I go.