Can you program FPGAs in C-like languages? [closed]

At university I programmed an FPGA in a C-like language. However, I also know that one usually programs FPGAs in Verilog or VHDL. Is this the designer's choice? If so, what are the performance drawbacks?
I would ideally like to program the FPGA in a C-like language rather than VHDL.
I was thinking of getting a Xilinx Virtex-5, if that makes any difference.

FPGAs are not processors. C is a language designed for processors.
Yes, there are C-to-FPGA compilers.
Are they a good idea? I'd say no. The design you're going to end up with is (from what I've seen) normally a state machine that has one state per line of the C code. The state machine then moves through the states performing the algorithm. Either that, or some other kind of Turing machine is put in place to execute the code.
This is not how somebody skilled in FPGA design would generally approach a problem. It's a slow and potentially gate-hungry way of doing things.
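To make that concrete, here is a hedged sketch of the kind of translation involved; the function is invented for illustration, and the exact state machine depends entirely on the compiler:

    // Hypothetical input to a naive C-to-gates compiler:
    int sum8(const int data[8]) {
        int sum = 0;
        for (int i = 0; i < 8; i++)
            sum += data[i];
        return sum;
    }
    // A line-per-state translation produces roughly this state machine:
    //   S0: sum <= 0; i <= 0
    //   S1: if (i < 8) next <= S2; else next <= S4
    //   S2: sum <= sum + data[i]
    //   S3: i <= i + 1; next <= S1
    //   S4: done
    // That is 25+ clock cycles for 8 additions, versus 1-2 cycles for
    // the adder tree an FPGA designer would describe directly in an HDL.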
In the same way that English is a better language than Fortran for writing a novel, VHDL and Verilog are better languages than C for describing logic circuits.
If you're serious about using FPGAs, use a language that is designed to describe logic circuits. It might be a steep learning curve, but the results will be much better IMHO.

The short answer is "yes, certainly".
Here's an excellent survey of C compilers for FPGAs and FPGA-based systems.
C-to-hardware compiler (HLL synthesis)
Performance drawbacks and considerations are found in the system architecture and communication bandwidths rather than in the choice of C vs. a hardware description language (HDL). The considerations in using C vs. an HDL lie in programming time and software maintenance, not so much in performance.

You can instantiate a soft processor core inside the FPGA logic and run your C code on that processor. Xilinx has MicroBlaze (licensed) and PicoBlaze (free) cores. There are other soft cores you can implement as well (MIPS, x86, 8051, etc.).
However, this is largely considered a "hack", as the cores are very slow compared to real cores. And I think that any C-to-FPGA conversion is ultimately going to start smelling like running a soft core, and not give you the efficiency you deserve for running on an FPGA. FPGAs are not Turing machines; they are a sack of logic gates. You can build a Turing machine out of the gates, but that is not why you bought the sack of gates.
It's sort of like buying a bag of Legos and building a hammer and a set of nails out of the bricks. It might work, but you are better off buying a hammer to pound nails, and better off building castles, spaceships, and fire stations with the Legos.

I'd like to add something that I believe is the closest answer to the OP's question. If you're looking for a C-like language (which is not the same as C), you should definitely check out Synflow. The idea is to have a modern language that allows you to design faster, without the learning curve of VHDL/Verilog and with no overhead. Also, it's free and open source!
Disclosure: I'm co-founder of Synflow :-)

You should have a look at SystemC. The advantages of using a C-based language are plentiful. In particular, from a system-design perspective, you can take advantage of the fact that your other software (firmware and other low-level stuff) is written in C. Hence, your software team can test against the RTL code at a really early stage.
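For a taste of what that looks like, here is a minimal sketch using the SystemC class library; the module and port names are invented for illustration:

    #include <systemc.h>

    // Minimal SystemC sketch: a 2-input AND gate modeled as a module.
    // The C++ class library supplies the hardware semantics (modules,
    // ports, sensitivity lists) that plain C/C++ lacks.
    SC_MODULE(AndGate) {
        sc_in<bool>  a, b;
        sc_out<bool> y;

        void compute() { y.write(a.read() && b.read()); }

        SC_CTOR(AndGate) {
            SC_METHOD(compute);    // re-evaluate whenever an input changes
            sensitive << a << b;
        }
    };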
In 2011, Xilinx bought AutoESL, a company that had developed high-level synthesis with SystemC, and reused the name when it released its own product. Especially with their new Zynq parts, which embed a dual-core ARM Cortex-A9 together with the FPGA logic, this will probably become a powerful tool for system development.

There are indeed some compilers that allow you to infer (solve using an incomplete description) hardware circuits using a high-level language like C; "C-to-gates" is in fact a popular buzzword. The image these companies advertise is that programmers can write hardware if the language they use is one they have already used to describe software. This is incredibly wrong, for a number of reasons, chief among them being the fundamental differences between the execution model assumed by languages like C and that of hardware description languages.
An illustrative example: C assumes at its heart a large, randomly accessible, linearly addressed memory, an assumption that rarely holds for hardware. A C-to-gates compiler faces the challenging task of interpreting the behavior of the program described and designing a hardware circuit with the same behavior.
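A hypothetical fragment makes the mismatch visible (the names are invented for illustration):

    // Trivial on a CPU with a flat address space; awkward in hardware.
    // Each iteration chases a pointer whose target is unknown until
    // run time, so there is no fixed wiring a compiler can lay down:
    // it must synthesize an addressable memory and serialize accesses.
    struct Node { int value; Node* next; };

    int sum_list(const Node* head) {
        int sum = 0;
        for (const Node* p = head; p != nullptr; p = p->next)
            sum += p->value;
        return sum;
    }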
While C-like languages are a great productivity tool in limited use cases, these compilers certainly don't allow you to suddenly know how to design hardware just because you are familiar with C.
Hope this helps,

I guess you used Handel-C. It's a subset of C. From what I know, the result is not very optimized; Verilog and VHDL allow for more optimization. I am saying this based on my experience with Handel-C a few years back.

You might want to take a look at C-to-hardware technology, where you can write C code and have it compiled/translated to VHDL or Verilog. This post lists a few compilers. I haven't used them myself, so I don't have any experience with them. Hope this helps!

Many designers write VHDL/Verilog instead of a high-level language for the same reasons that many programmers used to (and in some cases still do) write assembly instead of Java: you can tune resource usage and performance at a low level. VHDL and Verilog are both languages designed for describing hardware; C is not. Given enough time, you could always write a design in VHDL/Verilog that will outperform a high-level language program. What an HLL gives you is 1) faster development, 2) ease of maintenance, and 3) possibly greater portability.
There have been many efforts to compile existing high-level programming languages (C is one) to FPGA targets. Most of them do, in fact, generate optimized code. Impulse C, for example, is a subset of C with some add-on libraries that support process-level parallelism, plus a compiler that optimizes the C input for instruction-level parallelism, too. It pipelines loops, maps certain operations to high-performance hardware primitives it knows the underlying FPGA family provides, etc. (Full disclosure: I helped build the Impulse C toolchain.)
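As a hedged illustration of what loop pipelining buys you (plain C, with vendor-specific pragmas omitted since they vary by tool):

    // A multiply-accumulate loop, the bread and butter of HLS tools.
    // A pipelining compiler overlaps iterations: while iteration i is
    // in its multiply stage, iteration i+1 is already fetching its
    // operands, so once the pipeline fills, one result completes per
    // clock cycle instead of one result every several cycles.
    int dot_product(const int a[64], const int b[64]) {
        int acc = 0;
        for (int i = 0; i < 64; i++)
            acc += a[i] * b[i];
        return acc;
    }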
The list of C-to-hardware environments that Carlito and David Pointer link to is pretty exhaustive. The Xilinx Virtex-5 is supported by many of them, and if you're using any recent FPGA family from a major vendor, choice of hardware shouldn't be a problem. Some of the HLL environments support built-in (or soft-core) embedded CPUs better than others.

Related

DE1-SoC Board FPGA for evolvable hardware

I would like to reproduce the experiment of Dr. Adrian Thompson, who used a genetic algorithm to produce a chip (FPGA) which can distinguish between two different sound signals in an extremely efficient way. For more information, please visit this link:
http://archive.bcs.org/bulletin/jan98/leading.htm
After some research I found this FPGA board:
http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=836&PartNo=1
Is this board capable of reproducing Dr. Adrian Thompson's experiment, or do I need another one?
Thank you for your support.
In terms of programmable logic, the DE1-SoC is roughly 20x bigger and has roughly 70x as much embedded memory. Practically any modern FPGA is bigger than the Xilinx XC6216 cited in his papers, as was linked for you in the other instance of this question you asked.
That said, most modern FPGAs don't allow the same fine granularity of configuration as older FPGAs: the internal routing and block structures are more complex, and FPGA vendors want to protect their products and compel you to use their CAD tools.
In short, yes, the DE1-SoC will be able to contain any design from 12+ years ago. As for replicating the specific functions, you should do some more research to determine if the methods used are still feasible with modern chips and CAD tools.
Edit:
user1155120 elaborated on the features of the XC6216 (see link below) that were of value to Thompson.
Fast Configuration: A larger device will generally take longer to configure, as you have to send more configuration data. That said, I/O interfaces are faster than they were 15 years ago, so it depends on your definition of "fast".
Reconfiguration: Cyclone V chips (like the one in the DE1-SoC) do support partial reconfiguration, but the subscription version of the Quartus II software is required, in addition to a separate license to support PR. I don't believe it supports wildcard reconfiguration, though I could be mistaken.
Memory-Mapped Addressing: The DE1-SoC's internal data can be accessed through the USB-Blaster interface. However, this requires using the System Console on the host PC, so it's not direct access.

Is it possible to create a hardware-based synthesizer for RTL?

I am thinking of building a synthesis tool based on dedicated hardware, in order to accelerate the development of RTL.
Are there any hardware-based platforms that synthesize RTL?
Can one approximately estimate how fast it would be compared to the Synopsys tools?
The idea is a kind of bootstrapping of a VHDL/Verilog/netlist synthesizer: a platform that implements a big state machine in hardware to synthesize the RTL. (Writing a compiler in its own language is the closest analogous idea in the software world.)
As always, when the question presupposes "do it in hardware", the answer has to be "show which bottlenecks the hardware will fix, and how". Until you understand the problem well enough to answer that question in more than a hand-waving fashion, it's all guesswork.
As another answer has noted, if it were financially sensible, there is a big enough market of frustrated engineers waiting for synthesis to complete that it would already exist.
If it's just for a fun project, then sure, have at it :)
Here's a recent thesis on the topic, and the author has written a book as well. Given the cost of hardware development, this probably isn't practical today.
That's a very interesting idea, and to answer your first question I'm fairly sure there are no existing products like this commercially available.
You should know that synthesis tools are extremely complex, however. Creating what you want is going to be a lot of work, I would say that even a feasibility study (which, among other questions, should answer your second question) would be enough for a master's thesis.
Like Martin said, there are tons of frustrated engineers out there with designs that take hours to synthesize (I'm one of them!). Still, both Altera's and Xilinx's synthesis tools utilize the six-core processor in my computer very poorly, especially if I don't do any design partitioning. This leads me to believe that parallelizing the synthesis process isn't easy, although I do tend to overestimate the engineers at big companies.

Parallel programming over multiple machines without clustering

I'm going to be a college student at 40. I'll be studying IT and plan on doing a bachelor's project. The basic idea is to try to use neural nets to evaluate bias in media. The training data will be political blogs with well known biases.
What I need is a programming language that can run in parallel on multiple machines that are networked but not clustered. I have 2 Linux machines and 3 running OS X. I would prefer a language that compiles to binary rather than to bytecode or a VM, but I'll take what I can get. I don't need any GUI libraries, so that's not a constraint. I do most of my programming in Python, but I'm willing to learn another language if it'll make the parallel execution easier. Any suggestions?
I strongly suggest that you consider sticking with Python. Learning a new language at the same time as you start tackling parallel/distributed computing may well throw a spanner in your works that you just don't need. I believe your time will be better spent tackling the issues of building the neural net you want rather than learning the peculiarities of a new language. And, by reputation, Python is eminently suitable for what you plan. It does, of course, fail your requirement that it should compile to binary, but I'm not sure where that requirement is coming from.
When you write "parallel programming over multiple machines without clustering", I think: oh, he means distributed programming. I tend towards the view that parallel computing is a niche within distributed computing, in part defined by the homogeneity (from the programmer's point of view) of the resources used. This apparent homogeneity is aided tremendously if it is supported by homogeneity of hardware, so that there is little gap between vision and reality.
If what you really have is an assortment of computers of different specs and different OSes, communicating over a non-dedicated network, then I fear you will find it difficult to build the illusion of homogeneity for the programmer (i.e., for yourself), and you would be better off setting out to build a distributed system from the get-go.
I just plain disagree with the answer telling you to pick up C and MPI; I think you'll make progress much more quickly with Python.
Good luck with your studies.
Oh, and if you just won't take my advice to forget about a new programming language, consider Haskell and Erlang.
Sounds like an interesting project. However, thinking laterally, wouldn't a GPU-based system (i.e., massively parallel) be more the soupe du jour? Hence something like C + CUDA, perhaps?
I don't know if it's still around, but occam (from the Transputers of old) was designed to be a parallel language, with its PAR and SEQ constructs. I've just read of an implementation of it on Linux.
That sounds like C + MPI to me.
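For a flavor of what C + MPI entails, here is a minimal hedged sketch (the "work" is a placeholder, and it assumes an MPI implementation such as Open MPI is installed on each machine):

    #include <mpi.h>
    #include <cstdio>

    // Minimal MPI sketch: each process (potentially on a different
    // machine, listed in a hostfile) gets a rank, works on its slice
    // of the problem, and the results are combined with a reduction.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Placeholder "work": each rank just contributes its rank number.
        int local = rank;
        int total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("sum of ranks across %d processes = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }

With Open MPI this would be launched with something like mpirun -np 5 --hostfile hosts ./a.out, where the hostfile lists your five machines.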

How do languages relate to FPGAs?

I believe at university I wrote a program for an FPGA in a language derived from C. I am aware of languages such as VHDL and Verilog. However, what I don't understand is the amount of choice a programmer has regarding which to use. Does it depend on the FPGA? I am going to be using a Xilinx FPGA.
I am confused because the C-variant language was, unsurprisingly, similar to C; however, I know things like VHDL are nothing like C. Therefore, if I have a choice, I would prefer to program an FPGA using a C-variant language. The Xilinx website has a million documents, and it wasn't overly clear.
It was probably Verilog that you used. It's rather C-like in a lot of its constructs. I wouldn't say it's "like C", but some syntax is similar.
VHDL is based on Ada, so yes, it's rather different.
There are some small FPGA specific languages around, but VHDL and Verilog are the big two. I think most others have died now.
Remember that writing hardware and writing software are two rather different things. You can't really describe hardware constructs in a language like C (*). The language needs special features to allow you to describe exactly what you want, and the code needs to be structured in a way that will make the hardware efficient. Don't fool yourself into thinking that you can take a piece of software and magically run it on an FPGA just by changing the language/compiler. (This is targeted more at your follow-up question to Marty.)
Trying to use C to write a circuit description is like trying to program a computer in English. You could do it, but it's really the wrong language for the job.
(*) Yes, I know there's SystemC (a C++ class library that is meant to make code synthesisable), but I've yet to see anyone get good results from it, and certainly not on FPGAs. Even then, the code has to be structured in a similar way to an HDL.
Clearly HDLs are still preferable when programming FPGAs (Xilinx, Altera, etc. all accept VHDL or Verilog).
However, things are changing (slowly): there are now excellent so-called behavioral synthesizers that allow you to code in C and generate hardware, expressed for you in VHDL or Verilog at the register-transfer level. They are sometimes referred to as HLS: high-level synthesis.
The problem is that they are quite expensive:
Synphony from Synopsys
Cynthesizer from Forte Design Systems
Catapult C from Calypto (formerly from Mentor)
Impulse C
At the academic level:
GAUT from the Lab-STICC lab (France)
SPARK
LegUp
Hercules
...
Basically, these tools work by extracting a dependency graph from the C program: nodes represent computations and edges represent variables, which is essentially all you describe when you program, in C or any other language. Using this internal representation, the compiler can perform hardware-relevant transformations such as register allocation (mapping variables to registers, or keeping them combinational, i.e. on wires) and operation scheduling (deciding whether operations execute in the same clock cycle), and finally generate HDL automatically.
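As a hedged illustration of what scheduling means in practice (the function is invented; real tools report schedules in their own formats):

    // y = a*b + c*d has a tiny dependency graph:
    //   mul1 = a*b  and  mul2 = c*d  are independent of each other;
    //   the final add depends on both.
    // An HLS scheduler can place mul1 and mul2 in clock cycle 1 (two
    // hardware multipliers working in parallel) and the add in cycle 2,
    // whereas a CPU with one multiplier needs three sequential steps.
    int mac2(int a, int b, int c, int d) {
        return a * b + c * d;
    }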
Hope this helps
JCLL
Usually FPGA vendors have toolchains that support both Verilog and VHDL; it's up to you to choose which language you'd like. It's generally just these two languages that are supported.
For more C-like languages, a long-shot option is to use the synthesisable subset of SystemC. This is C++ with circuit-friendly constructs added. I'm not sure if the FPGA tools support it, though.

Implementations of algorithms for evaluating circuits

Consider the problem of circuit evaluation, where the input is a boolean circuit C and an input string x and you want to compute C(x). (Assume fan-in 2 if you like.)
This is a 'trivial' problem algorithmically; however, it appears non-trivial to implement when C can be huge (think several million gates) and memory management becomes an issue.
There are several ways this problem can be approached, trading off memory, time, and disc access. But before going through all this work myself, does anyone know of any existing implementations of algorithms for this problem? It would be surprising to me if none exist...
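For reference, the 'trivial' algorithm is a single pass over the gates in topological order. A minimal sketch (the gate encoding is invented for illustration; real netlist formats differ):

    #include <cstdint>
    #include <vector>

    // A gate reads two earlier wires and produces a new one, so a
    // netlist stored in topological order can be evaluated in one
    // linear pass. At millions of gates, memory traffic rather than
    // the algorithm itself becomes the bottleneck.
    enum class Op : std::uint8_t { AND, OR, XOR, NAND };

    struct Gate { Op op; std::uint32_t in0, in1; };  // indices of earlier wires

    // wires[0..n_inputs) hold the input string x; each gate appends
    // one output wire to the vector.
    std::vector<bool> evaluate(const std::vector<Gate>& circuit,
                               std::vector<bool> wires) {
        for (const Gate& g : circuit) {
            const bool a = wires[g.in0], b = wires[g.in1];
            bool out = false;
            switch (g.op) {
                case Op::AND:  out = a && b;    break;
                case Op::OR:   out = a || b;    break;
                case Op::XOR:  out = a != b;    break;
                case Op::NAND: out = !(a && b); break;
            }
            wires.push_back(out);
        }
        return wires;  // the circuit designates which wires are its outputs
    }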
For C/C++, the standard digital circuit design and simulation system for more than 10 years now has been SystemC.
It is a library that allows you to design digital logic in C++. There is supporting software that allows you to do timing analysis and even generate a schematic netlist from the code.
I've only played with it a little before deciding that I was more comfortable with Verilog. But it is a mature piece of software with lots of industry support. Googling around will yield a lot of information including several tutorial pages.
It sounds like Binary Decision Diagrams (BDDs) could be used for your task. There are well-known algorithms (and implementations) for these, which are very compact in terms of memory usage, given that they are designed to be used on huge state spaces.
