I have some questions about Chisel conversion. I know it's theoretical, but it would be nice if someone could give their opinion.
1) I want to ask why Chisel does not focus on VHDL/SystemVerilog conversion. Although Verilog and VHDL are broadly equivalent, some regions, especially in Europe, prefer VHDL.
2) Similarly, a C++ model is used for simulation. Why not SystemC for this purpose?
I was reading some notes and found out that FIRRTL is the middleman in the conversions Chisel --> FIRRTL --> Verilog and Chisel --> FIRRTL --> C++ model.
Would it be a good idea to use the (Low)FIRRTL spec to generate VHDL and SystemC models?
The short answer is that supporting VHDL and SystemC backends simply hasn't been a priority for the developers.
There are a few reasons why it hasn't been a priority:
People rarely ask for it. This is the first time I've seen this come up since this issue on the Chisel 2 repo back in 2014.
Lack of developer time. We have quite a backlog of features already so additional backends simply haven't been a priority. While I think the implementation effort of adding a VHDL and/or SystemC backend isn't too bad, they would also present an additional maintenance and verification overhead. All this being said, Chisel3 and FIRRTL are open-source projects so we welcome contributors who want to help us with particular features!
Unclear benefits. VHDL (at least) is interoperable with Verilog, so it's unclear why a VHDL backend is needed. As far as tooling goes, my understanding is that Verilog has equivalent or better support than VHDL. As for readability/debuggability concerns, the generated code is not really intended for reading anyway; rather, most users use waveforms and only use the emitted code for the source locators pointing back to the Scala source. I'm less familiar with SystemC, so there might be some nice benefits of emitting it that I'm unaware of!
I'm most certainly unaware of many benefits, so please let me know what I'm missing!
One other bit that might help future readers: as jkoenig mentioned, the purpose of Chisel isn't to generate HDL code for humans to use. The purpose is to create a new language for designing hardware.
Verilog just happens to be a language that is spoken by lots of hardware tools, so generating Verilog is an easy way to make Chisel interoperable with the current ecosystem of CAD tools. Otherwise you would have to write your own synthesis and place-and-route tools if you actually wanted to implement Chisel designs on real hardware. In this respect, Verilog versus VHDL is a moot argument. Either one would work, and as long as one works, the other is not really necessary.
Related
I was a bit nervous about the synthesizability of certain VHDL features, so I thought it might be a good idea to see what is written in the standard (IEEE 1076.6, "IEEE Standard for VHDL Register Transfer Level (RTL) Synthesis").
To my astonishment, I found that there is no current standard: the 1999 version has been superseded by the 2004 version; the 2004 version has the status "withdrawn":
https://standards.ieee.org/standard/1076_6-2004.html
I find it hard to believe that there is no need for a standard subset, so I hope someone can explain why there appears to be no current standard.
First, DASC made them joint standards with IEC. When this happened, the numbers changed, which made them hard to find.
Some time later, both the VHDL and Verilog standards were retired by DASC (Design Automation Standards Committee) because no one stepped forward to maintain them. Since I do not generally attend DASC meetings, I missed that this happened.
I worked on the VHDL RTL standard. It is thankless work to put forward coding styles and attributes when vendors such as Xilinx and Altera do not implement them.
It can be revived. It is worth reviving if you can get the tool vendors (Xilinx, Altera, Synopsys, Mentor, and Cadence) participating and implementing the standard.
However, without them participating and committing to implement new features, the effort by a user-led committee would not be worth the time.
If you think of just the state machine and ROM/RAM attributes, we really need some consistency out of industry - the current state is embarrassing.
I have applied TDD to software development, and it was really good! Now I'm keen on FPGA design with VHDL, and I am wondering how to apply the TDD methodology to it.
Have any of you used TDD in FPGA design? If so, how do you do it? Do you know of any articles or materials to learn from?
Thanks!
I have never specifically applied TDD, but I always have testbenches for the design components in VHDL projects.
Although it is not entirely within the philosophy of TDD, a lot of hardware design ends up resembling parts of its description. In fact, I would argue that hardware is often much more suitable for testing at fine granularity, because components often have clearly defined inputs and outputs, whereas software units tend to depend on many other classes for their functionality to make sense.
The resemblance is strong where all sub-blocks are properly verified, especially if you consider the blocks to be 'features'. Even more so when using stimuli files for testbenches on small arithmetic blocks: it is normal to start with some common and special cases you can think of, and then add more as you find problems in your high-level tests. You will end up working iteratively on a fairly low level.
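As an illustration of that iterative stimuli-file workflow, here is a minimal sketch in Python that generates stimuli for a small arithmetic block, say an 8-bit adder. The file name, format, and expected-output columns are assumptions; adapt them to whatever your testbench actually reads.

    # Generate a stimuli file for an 8-bit adder testbench.
    import random

    WIDTH = 8
    MAX = (1 << WIDTH) - 1

    # Start with the common and special cases you can think of...
    cases = [(0, 0), (1, 1), (MAX, 1), (MAX, MAX), (0, MAX)]

    # ...then add more (here: random ones) as your high-level tests
    # reveal problems.
    cases += [(random.randint(0, MAX), random.randint(0, MAX))
              for _ in range(20)]

    with open("adder_stimuli.txt", "w") as f:
        for a, b in cases:
            expected = (a + b) & MAX      # wrap-around sum
            carry = (a + b) >> WIDTH      # carry-out bit
            f.write(f"{a} {b} {expected} {carry}\n")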
Some concepts of TDD don't map well to hardware design, for example the idea of 'user stories'. Unless you have specifically designed your project to be extensible, it is likely you won't see any top-level tests passing until all your components are in place.
I don't have any specific links or materials for you. Usually there are many custom scripts involved in running all the tests. Apart from testbenches, it often involves checks with assertions, equivalence checks, and various forms of formal verification, but doing that in great detail may be more specific to ASIC than FPGA design. The VUnit mentioned in the comment by Brian Drummond looks very interesting.
Note that hardware engineers tend to call checks on VHDL code verification, where testing is what you do on a physical product. I've had one professor that was very passionate about the distinction.
I believe at university I wrote a program for an FPGA in a language derived from C. I am aware of languages such as VHDL and Verilog. However, what I don't understand is how much choice a programmer has regarding which one to use. Does it depend on the FPGA? I am going to be using a Xilinx FPGA.
I am confused because the C-variant language was, unsurprisingly, similar to C; however, I know things like VHDL are nothing like C. Therefore, if I have a choice, I would prefer to program an FPGA using a C-variant language. The Xilinx website had a million documents and it wasn't overly clear.
It was probably Verilog that you used. It's rather C-like in a lot of its constructs. I wouldn't say it's "like C", but some syntax is similar.
VHDL is based on Ada, so yes, it's rather different.
There are some small FPGA specific languages around, but VHDL and Verilog are the big two. I think most others have died now.
Remember that writing hardware and writing software are two rather different things. You can't really describe hardware constructs in a language like C (*). The language needs to have special features to allow you to describe exactly what you want. The code needs to be structured in a way that will make the hardware efficient. Don't fool yourself into thinking that you can take a piece of software and magically run it on an FPGA just by changing the language/compiler. (This is targeted more at your follow-up question to Marty.)
Trying to use C to write a circuit description is like trying to program a computer in English. You could do it, but it's really the wrong language for the job.
(*) Yes, I know there's SystemC (a C++ class library that is meant to make code synthesisable), but I've yet to see anyone get good results from it, and certainly not on FPGAs. Even then the code has to be structured in a similar way as for an HDL.
Clearly, HDLs are still preferable when programming FPGAs (Xilinx, Altera, etc. all accept VHDL or Verilog).
However, things are changing (slowly): there are now excellent so-called behavioral synthesizers that allow you to code in C and generate hardware, expressed for you in VHDL or Verilog at the register-transfer level. They are sometimes referred to as HLS: high-level synthesis.
The problem is that they are quite expensive.
Synphony from Synopsys
Cynthesizer from Forte Design Systems
CatapultC from Calypto (was from Mentor)
ImpulseC
At the academic level:
GAUT from the Lab-STICC lab (France)
SPARK
LegUp
Hercules
...
Basically, these tools work by extracting a dependency graph from the C program: nodes represent computations and edges represent variables. That is essentially what you describe when you program, in C or any other language. Using this internal representation, the compiler can perform hardware-relevant transformations such as register allocation (mapping variables to registers, or keeping them combinatorial, i.e. on wires) and operation scheduling (deciding whether operations execute in the same clock cycle), and finally generate the HDL automatically.
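To make the dependency-graph idea concrete, here is a minimal sketch in Python; real HLS tools parse C and do far more, so this is only an analogy. It records, for each assignment in a straight-line program, which values the computation reads.

    # A crude version of the first step in HLS: build a dependency
    # graph from straight-line code, using Python's own parser.
    import ast

    src = "t = a * b\nu = t + c\nv = t - d"

    graph = {}  # defined variable -> variables its computation reads
    for stmt in ast.parse(src).body:
        target = stmt.targets[0].id
        reads = [n.id for n in ast.walk(stmt.value)
                 if isinstance(n, ast.Name)]
        graph[target] = reads

    # u and v both depend only on t (and on inputs), so a scheduler
    # could place them in the same clock cycle.
    for node, deps in graph.items():
        print(node, "<-", deps)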
Hope this helps
JCLL
Usually FPGA vendors will have toolchains that support both Verilog and VHDL - it's up to you to choose which language you'd like. It's generally just these two languages that are supported.
For more C-like languages, a long-shot option is to use the synthesisable subset of SystemC. This is C++ with circuit-friendly stuff added. I'm not sure if the FPGA tools support this, though.
I have looked on the web and the discussions/examples appear to be for traditional software development. Since Verilog and VHDL (used for chip design, e.g. FPGAs and ASICs) are similar to the C and C++ of software development, it would appear to make sense. However, they have some differences, being fundamentally parallel and requiring hardware to fully test.
What experiences, good and bad, have you had? Any links you can suggest on this specific application?
Edits/clarifications:
10/28/09: I'm particularly asking about TDD. I'm familiar with doing test benches, including self-checking ones. I'm also aware that SystemVerilog has some particular features for test benches.
10/28/09: The questions implied include 1) writing a test for any functionality, never using waveforms for simulation, and 2) writing tests/testbenches first.
11/29/09: In Empirical Studies Show Test Driven Development Improves Quality they report for (software) TDD "The pre-release defect density of the four products, measured as defects per thousand lines of code, decreased between 40% and 90% relative to the projects that did not use TDD. The teams' management reported subjectively a 15–35% increase in initial development time for the teams using TDD, though the teams agreed that this was offset by reduced maintenance costs." The reduced bugs reduces risk for tape-out, at the expense of moderate schedule impact. This also has some data.
11/29/09: I'm mainly doing control and datapath code, not DSP code. For DSP, the typical solution involves a Matlab bit-accurate simulation.
03/02/10: The advantage of TDD is you make sure the test fails first. I suppose this could be done with assertions too.
I write code for FPGAs, not ASICs... but TDD is still my preferred approach. I like to have a full suite of tests for all the functional code I write, and I try (not always successfully) to write the test code first. Staring at waveforms always happens at some point when you're debugging, but it's not a good way of validating your code (IMHO).
Given the difficulty of performing proper tests on the real hardware (stimulating corner cases is particularly hard) and the fact that a VHDL compile takes seconds (versus a to-hardware compile that takes many minutes or even hours), I don't see how anyone can operate any other way!
I also build assertions into the RTL as I write it, to catch things I know shouldn't ever happen. Apparently this is seen as a bit "weird", as there's a perception that verification engineers write assertions and RTL designers don't. But mostly I'm my own verification engineer, so maybe that's why!
I use VUnit for test driven development with VHDL.
VUnit is a Python library that invokes the VHDL compiler and simulator and reads the results of the simulation. It also provides several nice VHDL libraries that make it a lot easier to write better test benches, such as a communication library, a logging library, and a checking library.
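A run script is only a few lines of Python. This is a minimal sketch; the library name and source paths are placeholders for your own project layout:

    # run.py - compile the sources and run every testbench found.
    from vunit import VUnit

    vu = VUnit.from_argv()          # parses command-line options
    lib = vu.add_library("lib")
    lib.add_source_files("src/*.vhd")
    lib.add_source_files("test/*.vhd")
    vu.main()                       # compile, simulate, report

Running python run.py then compiles everything and executes the testbenches (entities named tb_* or *_tb by convention), reporting pass/fail per test.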
There are many possibilities since it is invoked from Python. It is possible both to generate test data and to check the output data from the test in Python. I saw an example the other day where they used Octave - a Matlab clone - for plotting test results.
VUnit seems very active, and I have several times been able to ask questions directly of the developers and have gotten help quite quickly.
A downside is that it is harder to debug compilation errors, since there are so many function/procedure variations with the same name in the libraries. Also, some things are done behind the scenes by preprocessing the code, which means that some errors might show up in unexpected places.
I don't know a lot about hardware/chip design, but I am deeply into TDD, so I can at least discuss suitability of the process with you.
The question I'd call most pertinent is: How quickly can your tests give you feedback on a design? Related to that: How quickly can you add new tests? And how well do your tools support refactoring (changing structure without changing behavior) of your design?
The TDD process depends a great deal on the "softness" of software - good automated unit tests run in seconds (minutes at the outside), and guide short bursts of focused construction and refactoring. Do your tools support this kind of workflow - rapidly cycling between writing and running tests and building the system under test in short iterations?
The SystemVerilog extensions to the IEEE Verilog standard include a variety of constructs which facilitate creating thorough test suites for verifying complex digital logic designs. SystemVerilog is one of the Hardware Verification Languages (HVLs) used to verify ASIC designs via simulation (as opposed to emulation or using FPGAs).
Significant benefits over a traditional Hardware Description Language (Verilog) are:
constrained randomization (the idea is sketched below)
assertions
automatic collection of functional and assertion coverage data
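To illustrate the first item: constrained randomization means drawing random stimulus that always satisfies declared legality constraints, so you exercise varied but valid cases. SystemVerilog supports this natively with rand variables and constraint blocks; the sketch below only mimics the idea in Python, via rejection sampling, with made-up packet fields:

    # Constrained-random stimulus by rejection sampling (illustrative).
    import random

    def random_packet():
        while True:
            pkt = {
                "kind":    random.choice(["read", "write", "interrupt"]),
                "length":  random.randint(0, 1024),
                "address": random.getrandbits(16),
            }
            # Constraints: writes are word-aligned; interrupts carry
            # no payload. Redraw until both hold.
            if pkt["kind"] == "write" and pkt["address"] % 4 != 0:
                continue
            if pkt["kind"] == "interrupt" and pkt["length"] != 0:
                continue
            return pkt

    for _ in range(5):
        print(random_packet())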
The key is to have access to simulation software which supports this recent (2005) standard. Not all simulators fully support the more advanced features.
In addition to the IEEE standard, there is an open-source SystemVerilog library of verification components available from VMM Central (http://www.vmmcentral.com). It provides a reasonable framework for creating a test environment.
SystemVerilog is not the only HVL, and VMM is not the only library. But I would recommend both, if you have access to the appropriate tools. I have found this to be an effective methodology for finding design bugs before they become silicon.
With regard to refactoring tools for hardware languages, I'd like to point you to our tool Sigasi HDT. Sigasi provides an IDE with built-in VHDL analyzer and VHDL refactorings.
Philippe Faes, Sigasi
ANVIL (ANother Verilog Interaction Layer) talks about this some. I haven't tried it.
I never actively tried TDD on an RTL design, but I had my thoughts on this.
What I think would be interesting is to try out this approach in connection with assertions. You would basically first write down, in the form of assertions, what you assume/expect from your module, then write your RTL, and later verify these assertions using formal tools and/or simulation. In contrast to "normal" testcases (where you would probably need to write directed ones), you should get much better coverage, and the assertions/assumptions may be of use later (e.g. at system level) as well.
However, I wouldn't rely fully on assertions; this can become very hairy.
Maybe you can share your own thoughts on this as well; since you are asking, I guess you already have some ideas in mind?
What is TDD for you? Do you mean having all your code exercised by automatic tests at all times, or do you go further to mean that tests are written before the code and no new code is written unless tests fail?
Whichever approach you prefer, HDL code testing isn't very different from software testing. It has its pluses (much better coverage and depth of testing) and minuses (difficult to set up and cumbersome relative to software).
I've had very good experience with employing Python and generic HDL transactors to implement comprehensive, automatic tests for synthesizable HDL modules. The idea is somewhat similar to what Janick Bergeron presents in his books, but instead of SystemVerilog, Python is used to (1) generate VHDL code from test scenarios written in Python and (2) verify the results written by the monitoring transactors that accept waveforms from the design during simulation. A minimal sketch of the flow appears below.
There's much more to be written about this technique, but I'm not sure what you want to focus on.
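Here is that flow reduced to a sketch in Python. It is simplified in one way: instead of generating VHDL, it writes a stimulus file for a driver transactor to replay. The file names, format, and the adder-like reference model are all made up for illustration:

    # Generate stimulus, then check the monitor's output afterwards.
    import random

    WIDTH = 16
    MASK = (1 << WIDTH) - 1

    def model(a, b):
        # Golden reference for the DUT: here, a wrapping adder.
        return (a + b) & MASK

    # (1) Generate a test scenario for the driver transactor.
    scenario = [(random.getrandbits(WIDTH), random.getrandbits(WIDTH))
                for _ in range(100)]
    with open("stimulus.txt", "w") as f:
        for a, b in scenario:
            f.write(f"{a} {b}\n")

    # ... run the HDL simulation here, e.g. via a subprocess ...

    # (2) Verify the monitor transactor's output against the model.
    with open("results.txt") as f:
        for (a, b), line in zip(scenario, f):
            assert int(line) == model(a, b), f"mismatch for {a}, {b}"
    print("all transactions match the reference model")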
According to this article, http://steve-yegge.blogspot.com/2007/06/rich-programmer-food.html, I definitely should.
Quote: "Gentle, yet insistent executive summary: If you don't know how compilers work, then you don't know how computers work. If you're not 100% sure whether you know how compilers work, then you don't know how they work."
I thought it was a very interesting article, and the field of application is very useful (do yourself a favour and read it).
But then again, I have seen successful senior software engineers who didn't know compilers very well, or internal machine architecture for that matter, but who did know a thing or two about each item in the following list:
A programming paradigm (OO, functional,…)
A programming language API (C#, Java...) - and at least two very different ones, some say! (Java/Haskell)
A programming framework (Java, .NET)
An IDE to make you more productive (Eclipse, VisualStudio, Emacs,….)
Programming best practices (see fxcop rules for example)
Programming Principles (DRY, High Cohesion, Low Coupling, ….)
Programming methodologies (TDD, MDE)
Design patterns (Structural, Behavioural,….)
Architectural Basics (Tiers, Layers, Process Models (Waterfall, Agile, ...))
A Testing Tool (Unit Testing, Model Testing, …)
A GUI technique (WPF, Swing)
A documenting tool (Javadoc, Sandcastle..)
A modelling language (and tool, maybe) (UML, Visual Paradigm, Rational)
(undoubtedly forgetting very important stuff here)
Not all of these are necessary to be a good programmer (like a GUI technique when you just don't need it), but most of them are. So where do compilers come in, and are they really that important? As I mentioned, lots of programmers seem to be doing fine without knowing them; and given the multitude of knowledge domains, becoming a good programmer is almost a lifetime achievement :-) . So even if compilers are extremely important, isn't there always something still more important?
Or should I order 'The Unleashed Compilers Unlimited Bible (in 24H...)' today?
For those who have read the article, and want to start studying right away :
Learning Resources on Parsers, Interpreters, and Compilers
If you just want to be a run-of-the-mill coder, and write stuff... you don't need to take compilers.
If you want to learn computer science and appreciate and really become a computer scientist, you MUST take compilers.
Compilers is a microcosm of computer science! It contains every single problem, including (but not limited to) AI (greedy algorithms & heuristic search), algorithms, theory (formal languages, automata), systems, architecture, etc.
You get to see a lot of computer science come together in an amazing way. Not only will you understand more about why programming languages work the way that they do, but you will become a better coder for having that understanding. You will learn to understand the low level, which helps at the high level.
As programmers, we very often like to talk about things being a "black box"... but things are a lot smoother when you understand a little bit about what's in the box. Even if you don't build a whole compiler, you will surely learn a lot. You will get to see the formalisms behind parsing (and realize it's not just a bunch of special cases hacked together), and a bunch of NP complete problems. You will see why the theory of computer science is so important to understand for practical things. (After all, compilers are extremely practical... and we wouldn't have the compilers we have today without formalisms).
I really hope you consider learning about them... it will help you get to the next level as a computer scientist :-).
You should learn about compilers, for the simple reason that implementing a compiler makes you a better programmer. The compiler will surely suck, but you will have learned a lot along the way. It is a great way of improving (or practising) your programming skills.
You do not need to understand compilers to be a good programmer, but it can help. One of the things I realized when learning about them is that compiling is simply translation.
If you have ever translated from one language to another, you have just done compiling.
So when should you learn about compilers?
When you want to, or need it to solve a problem.
Compiler theory is useful but not essential, although some of its techniques come in handy, like lexical analysis and parsing. Another one is error handling: compilers need a lot of it. User input can contain anything, even the unexpected, and you need to deal with all of it.
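For instance, lexical analysis (turning raw text into tokens) is a small, reusable technique, and it comes with exactly the kind of error handling just mentioned. A minimal sketch in Python:

    # A tiny lexer: raw text in, (kind, value) tokens out.
    import re

    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),
        ("IDENT",  r"[A-Za-z_]\w*"),
        ("OP",     r"[+\-*/=()]"),
        ("SKIP",   r"\s+"),
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{rx})"
                                 for name, rx in TOKEN_SPEC))

    def tokenize(text):
        pos = 0
        while pos < len(text):
            m = MASTER.match(text, pos)
            if m is None:
                # Error handling: unexpected input, with a location.
                raise SyntaxError(f"unexpected {text[pos]!r} at {pos}")
            if m.lastgroup != "SKIP":
                yield (m.lastgroup, m.group())
            pos = m.end()

    print(list(tokenize("x = 40 + 2")))
    # [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'),
    #  ('OP', '+'), ('NUMBER', '2')]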
If you're going to be working at a high-enough level where you're worrying over UML and self-describing code, you could easily go your entire career without wanting or needing intimate details of how the compiler works.
But if you're an in-the-trenches coder with no aspirations to manage your friends, it's likely that one day you'll realize you're waging war with your compiler. It could be a random bug that comes along, or a hallway conversation about while-versus-for loops. You'll realize the assembly (or IL, likely, in the coming years) is just a bit to the left of what you were needing, and another universe will unfold.
So, I suppose my answer is, just be aware of the compiler for now, that it's doing quite a lot, but don't worry over it too much.
Compilers courses usually focus on how high-level code is analyzed and translated into machine code. That's very interesting, but not crucial. It's more important to understand the machine code the compiler generates, so that you understand how a computer works and what each language construct costs.
So I'd rather say that you should know an assembly language (I mean a limited subset of the assembly language for one architecture) to understand how a computer works; the latter is definitely required for a competent programmer, so that they understand what a segmentation fault is, when to optimize and when not to, and other similar low-level things.
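You don't even need a native toolchain to start building the habit of looking one level down. As a loose but illustrative analogy, Python's standard dis module shows the bytecode its own compiler emits for a function; reading compiler output for your usual language works the same way:

    # Inspect what the (bytecode) compiler emits for one line of code.
    import dis

    def axpy(a, x, y):
        return a * x + y

    dis.dis(axpy)
    # Prints instructions such as LOAD_FAST and BINARY_OP, showing the
    # operations hidden behind a single high-level expression.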
If you intend to write extremely time-critical real-time code, you will benefit from understanding how the compiler optimises your code. However, you will actually benefit more from understanding the underlying architecture of your hardware.
From my experience, if you understand how the hardware works, and how the compiler interprets your code, you will be able to write code that does exactly what you intend it to do. I have been caught on several occasions, writing code that got optimised away by the compiler and made the hardware do something that I did not intend.
All in all, understanding the entire software-hardware stack is not essential to write good algorithms and code, but it will most certainly help!
From a practical perspective, general compiler theory is less of a concern than the assembler, linker, and loader for a specific platform. For example, I just consider the GCC compiler a translator from my high-level C language to the low-level assembly language of the x86 platform. And more often than not, I manually refine ;) the code generated by the compiler.
From a scientific perspective, I would strongly suggest learning compiler theory; it will help you understand the great ideas that computers are built upon. Even more, you will look at the world through different eyes.
Just my opinion, but I believe compilers are not given enough attention in CS courses - not in mine, and not in any others as far as I know. I think any CS major should do two things after a sabbatical or after finishing their major: re-learn finite automata if necessary, and maybe a formal methods language; then apply that knowledge.
Write a simple compiler with this knowledge. Alex Aiken has a very useful online tutorial on writing a compiler for COOL (Classroom Object Oriented Language), which, as of the 2013 version, is a subset of Scala - at least at the time of writing.