How to syntax check VHDL in Vivado without complete synthesis

What's the simplest way to syntax-check my VHDL in Vivado without running through a full synthesis?
Sometimes I code many inter-related modules at once, and would like to quickly find naming errors, missing semi-colons, port omissions, etc. The advice I've read is to run synthesis, but that takes longer than I need for just a syntax check. I've observed that syntax errors will usually cause synthesis to abort within the first minute or so, so my workaround is to run synthesis and abort it manually after about a minute.

In the Vivado Tcl Console window, the check_syntax command performs a fast syntax check; it catches typos, missing semicolons, and the like.
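For example, a minimal sketch, assuming a standard Vivado project where current_fileset resolves to the default sources fileset:
# Run in the Vivado Tcl Console; reports syntax errors without synthesizing
check_syntax -fileset [current_fileset]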

Vivado offers an elaboration step before synthesis. This is a lightweight version of synthesis: it just reads all sources and creates a design model based on the language rules, without optimizations or transformations.
A pure syntax check per file is not enough in many cases. You also want to know whether certain identifiers exist and whether types match. For that, an elaboration is needed.
(If you have never heard of that step: VHDL compilation has two steps, analysis and elaboration. Think of elaboration as the equivalent of linking in ANSI C.)
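A sketch of how to trigger elaboration from the Tcl Console (the design name rtl_1 is an arbitrary choice, matching what the GUI's "Open Elaborated Design" creates):
# Analyze and elaborate all sources without running full synthesis
synth_design -rtl -name rtl_1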


Why should an HDL simulation (from source code) have access to the simulator's API?

This is a question inspired by this question and answer pair: call questa sim commands from SystemVerilog test bench
The question asks how Verilog code can control the executing simulator (QuestaSim). I have seen similar questions and approaches for VHDL, too.
So my question is:
Why should a simulation (the slave) have power over its simulator (the master)?
What are typical use cases?
Further reading:
call questa sim commands from SystemVerilog test bench
VerTcl - A Tcl interpreter implemented in VHDL
Why? Can anyone really answer "why"? Perhaps the product engineer or developer at Mentor who drove the creation of this behavior can. Lacking that, we can only guess, and that's what I'm doing here.
I can think of a few possible use cases, but none of them is something that couldn't be done in another manner. For example, one could have a generic "testbench controller" that, depending on generics/parameters, invokes certain simulator behavior. (Edit: After re-reading one of your links, I see that's the exact use case.)
For example, say I have this "generic" testbench code as:
module testbench;
  parameter LOG_SIGNALS = 1'b0;

  initial
  begin
    if (LOG_SIGNALS)
    begin
      // Log all signals in the design
      mti_fli::mti_Cmd("add wave -r /*");
    end
  end
endmodule
Then, one could invoke this as:
vsim -c -gLOG_SIGNALS=1 work.testbench
The biggest use case for this might be when vsim is invoked from some environment. If one were to use a do file, I'm not sure one can pass parameters to the script. Say one had the following do file:
if {$log_signals} {
add wave -r /*
}
How does one set $log_signals from the command line? I suppose one could do that through environment variables, such as:
if { [info exists ::env(LOG_SIGNALS)] } {
add wave -r /*
}
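In that case, the variable can be set when the simulator is launched, for example from a Unix shell (run.do is a placeholder name for the do file above):
# Export LOG_SIGNALS for this vsim invocation only, then run the do file
LOG_SIGNALS=1 vsim -c -do run.do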
Other use cases might be turning the capture of coverage data on/off, list files, or maybe even the quirky case of ending the simulation.
But of course, all of these can be handled in other manners, and in manners I think are much clearer and much easier to maintain.
As for VerTcl, I find it fascinating, but incomplete, or at the very least barebones. I find scripted testbenches exceedingly useful (we use them where I work), and VerTcl is a great way to do it (if you like Tcl). But it does need some framework around it to read signals, drive signals, and otherwise manage a simulation.
Ideally, you wouldn't need a simulator API if the HDL were comprehensive enough to do all the functions that are currently left to the simulator. At its inception, Verilog was implemented as an interpreted language, and the command line was the Verilog language itself rather than the Tcl-based command line interfaces we see today.
But as simulators became more complex, they needed commands that interacted with the file system, platform, and OS, as well as other features, at a faster pace than the HDL standard could keep up with. The IEEE HDL standards stop short of any implementation-specific details.
So simulation tool vendors developed command line interfaces (CLIs) to meet user needs for debugging and analysis. These CLIs are robust enough to create stimulus and check behavior, so there can be an overlap in functionality between CLI code and your testbench source code. Having an API to your simulator's CLI can therefore make it easier to control the flow of commands to the simulator and avoid duplicating procedures.
For example, maybe you want to start logging signals to add to a waveform after the design gets out of reset. From the CLI, you would have to set a watch condition on the reset signal that executes the logging command when reset goes inactive. Using the simulator API, you could just put that command in your testbench at the spot where you release reset.
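For contrast, a rough sketch of the CLI-side approach in ModelSim/Questa Tcl (the path /testbench/rst is a made-up example):
# Watch the reset signal; whenever it goes inactive, log all signals
when {/testbench/rst == 0} {
    log -r /*
}
With the API approach, the equivalent mti_fli::mti_Cmd("log -r /*") call would simply sit in the testbench right where reset is deasserted.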

How to Ignore a Synthesis constraint if signal is not in design?

I have a clock in my design that drives some logic in normal operation. However, occasionally I want to disable this block of logic by setting a VHDL generic. But I still have a clock constraint in my .xcf file, e.g.:
NET "TEST_CLK" TNM_NET = "TEST_CLK";
TIMESPEC TS_TEST_CLK = PERIOD "TEST_CLK" 20.000 ns HIGH 50 %;
If I try to run synthesis I get the following error:
Processing TIMESPEC TS_TEST_CLK: No TNM or User group name TEST_CLK is defined.
How can I tell the tools to effectively ignore this constraint when the clock has been (correctly) optimized out of the design? Is this even possible?
Vivado issues a Critical Warning for constraints that do not match the design; however, it continues to build and will generate a .bit file anyway. I think this is a nice tradeoff, but you have to remember to look at the critical warnings.
Also, as Morten Zilmer commented, Vivado uses a Tcl-based file for the constraints, so you can conditionalize the constraints or generate them based on the actual design.
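For example, a sketch of a conditional constraint, assuming the clock port is named TEST_CLK as in the question (flow control like this requires loading the constraints as an unmanaged Tcl file rather than a plain .xdc):
# Only create the clock if the port survived optimization
if {[llength [get_ports -quiet TEST_CLK]] > 0} {
    create_clock -name TEST_CLK -period 20.000 [get_ports TEST_CLK]
}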
When using the ISE tools, ngdbuild (Translate) has the "allow unmatched timing group" option (-aut on the command line). This is supposed to be used when you have an incomplete design. There ought to be an option under Translate properties in the GUI as well. If you are applying the constraint at synthesis, I'm not sure there is an option to specifically do this.

Graphical VHDL Component Editor

In many of my VHDL designs I create a lot of low level components in HDL (which is fine). However, when I am ready to create multiple instantiations and link them together at the top-level, I find that the file ends up being rather large with tons of internal signals going between tons of component instantiations. It gets a little unwieldy and hard to follow.
Instead, I thought what might be easier to understand and faster to develop with is if there was a graphical tool to do the high level component linking. It would be able to parse my low level HDL files and determine the port inputs/outputs and create a block (or multiple instances of that block). I could then use my mouse to create connections between the blocks and give them a text label. When I am done, it would be able to auto-generate a VHDL file with all appropriate syntax for creating internal signals, component instantiations, port declarations, etc.
I tried experimenting with Xilinx Schematic Editor, but this thing was a beast and I did not have any luck.
Is there any tool out there like this? If it could even get me 90% of the way I'd be happy.
You should take a look at www.sigasi.com
Sigasi has autocompletes to help you with the instantiations and quick-fixes to help you with the wiring. There is also a premium version that gives you a live, graphical block diagram view (demo screencast)

Debugging VHDL: How to?

I am a newbie to VHDL and can't figure out how to debug VHDL code.
Is there any software that could give me insight into the internal signals of my VHDL entity as time passes, or something like that?
Please help.
As the other posts have pointed out, you'll likely need a simulator like GHDL. However, to debug your simulation, there are a few different methodologies:
Classic print statements -- just mix in writeline(output, [...]) in your procedural code (see this hello world example, and the minimal sketch at the end of this answer). If you're just getting started, adding print statements will be invaluable. For most of the simulation debug that I do (and that is part of my job), I rely almost entirely on print statements built up in our design and testbench. It is only for the final debug, or for more difficult issues, that I use the next debug method.
"Dumping" the simulation ( for GHDL see this page and this one ). This is a cycle by cycle trace of your design ( or a subset of your design). It's as if you hook up a logic analyzer to every single wire in your design. All the info you could ever want about your design, but at a very low level -- the signal level. To use this methodology:
Create a simulation "dump". The base format for such a dump is a Value Change Dump or VCD. Whichever simulator you use, you'll need to read the documentation on how to create a VCD. ( You can also search "dump" in your docs -- your simulator may use a different file format for its dumps.)
Once you create a simulation dump, load it into a waveform viewer. If you're using the gEDA package, you would use gtkwave to view the dump.
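For GHDL specifically, the whole flow looks roughly like this (tb is a placeholder for your testbench entity, tb.vhd for its source file):
# analyze, elaborate, then run with a VCD dump
ghdl -a tb.vhd
ghdl -e tb
ghdl -r tb --vcd=tb.vcd
# view the dump
gtkwave tb.vcd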
Note: if you want to use GHDL and gtkwave to debug your VHDL code, you can install them on Ubuntu with the command:
% sudo apt-get install geda ghdl
(assuming you have root access to the machine running Ubuntu)
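And, as promised above, a minimal sketch of print-statement debugging in VHDL (print_demo is a throwaway entity name; the write/report calls are what matter):
library ieee;
use ieee.std_logic_1164.all;
use std.textio.all;

entity print_demo is
end entity;

architecture sim of print_demo is
begin
  process
    variable l : line;
  begin
    write(l, string'("simulation time is "));
    write(l, now);                -- 'now' returns the current simulation time
    writeline(output, l);
    report "reached checkpoint A" severity note;
    wait;                         -- halt this process
  end process;
end architecture;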
Xilinx offers a free version of its design suite: http://www.xilinx.com/tools/webpack.htm. WebPACK contains a VHDL simulator, although the last time I tried it I liked ModelSim's simulator better. That might have changed, though.
WebPACK is also different from ModelSim in that it's not only a simulator but a full-fledged FPGA design suite.
ModelSim's disadvantage is its license -- as far as I know, it's free for students only.
The others mentioned here are likely more appropriate based on cost and availability. However, the best HDL/netlist debugger I've used by far is Verdi.
What you are looking for is a VHDL simulator. There are several alternatives to choose from:
Mentor's ModelSim
Xilinx ISim
Aldec Riviera and Active-HDL
Simili
The Simili software is available as a free version with limited performance.
Once you have the simulator installed you need to learn how to use it. Generally you will have to write a testbench in VHDL too, but some of the simulators will let you create the stimuli signals from a graphical user interface. You can find a large number of examples of VHDL-based testbenches on this page: VHDL Tutorials.
In the simulator you are able to visually inspect the state of your design in the waveform viewer and also be able to set breakpoints in your code to debug the design.
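As a starting point, a minimal VHDL testbench skeleton looks something like this (the entity work.dut and its clk/rst ports are placeholders for your own design):
library ieee;
use ieee.std_logic_1164.all;

entity tb_dut is
end entity;

architecture sim of tb_dut is
  signal clk : std_logic := '0';
  signal rst : std_logic := '1';
begin
  clk <= not clk after 10 ns;   -- 50 MHz clock

  -- unit under test (work.dut and its ports are placeholders)
  uut: entity work.dut
    port map (clk => clk, rst => rst);

  stimulus: process
  begin
    wait for 100 ns;
    rst <= '0';                 -- release reset
    wait for 1 us;
    report "simulation finished" severity note;
    wait;                       -- halt the process
  end process;
end architecture;
Run this in any of the simulators above and inspect clk, rst, and the DUT's internal signals in the waveform viewer.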
Use simulation software like ModelSim. This kind of software is usually quite expensive. In general, though, your VHDL synthesizer will come with some lightweight variant of it, or similar software, which is enough for smaller things.
For small jobs which don't push the language spec all the way, GHDL works fine as a simulator, in my experience. As you push things further (and you must -- you can't debug any sizeable piece of code just in silicon), you may find you need to spend some money on something like Mentor's ModelSim/Questa or Aldec's Active-HDL/Riviera.
First I created a testbench to test my component and used ISim to simulate it. I inserted breakpoints in the main code areas and used the Step Into button to step through the code, or the Run button to jump to the next breakpoint.
It's the same as what I'm used to doing with any programming language.
The professional way to "debug" VHDL is to create a "testbench". Like everything else in VHDL, it's complicated. Here's a "how to testbench" link: VHDL Testbenches
I am very lazy. I am also not a professional. I prefer to do what is known as "forcing". If you are using Xilinx ISE, ModelSim, or Vivado, you have the ability to toss your VHDL code into a "simulation". There you can add signals to a waveform, select those signals (right-click on them), and force their values to '0' or '1'. Then, after "forcing" your inputs, run the simulation for a few nanoseconds and look at the output. The same idea can be scripted, as sketched below.
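A rough sketch of scripted forcing in the ModelSim/Questa console (the signal path /testbench/data_in is a made-up example; Vivado's own simulator uses add_force instead of force):
# drive the input, run a while, flip it, run again
force -freeze /testbench/data_in 1
run 100 ns
force -freeze /testbench/data_in 0
run 100 ns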

What helps you improve your ability to find a bug?

I want to know if there are methods to quickly find bugs in a program.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
How do programmers improve their ability to find bugs?
Logging, and unit tests. The more information you have about what happened, the easier it is to reproduce it. The more modular you can make your code, the easier it is to check that it really is misbehaving where you think it is, and then check that your fix solves the problem.
Divide and conquer. Whenever you are debugging, you should be thinking about cutting down the possible locations of the problem. Every time you run the app, you should be trying to eliminate a possible source and zero in on the actual location. This can be done with logging, with a debugger, assertions, etc.
Here's a prophylactic method after you have found a bug: I find it really helpful to take a minute and think about the bug.
What exactly was the bug, in essence?
Why did it occur?
Could you have found it earlier, or more easily?
What else did you learn from the bug?
I find taking a minute to think about these things will make it far less likely that you will produce the same bug in the future.
I will assume you mean logic bugs. The best way I have found to capture logic bugs is to implement some sort of testing scheme. Check out JUnit as the standard. Essentially, you define a set of accepted outputs for your methods. Every time you compile your system, it checks all of your test cases. If you have introduced new logic that breaks your tests, you will know about it instantly and know exactly what you have to fix.
Test driven design is a pretty big movement in programming right now. You will be hard pressed to find a language that doesn't support some kind of testing. Even JavaScript has a multitude of test suites.
Experience makes you a better debugger. Pay close attention to the bugs that you AND others commonly make. Try to figure out if/how these bugs apply to ALL code that affects you, not the single instance of where the bug was seen.
Raymond Chen is famous for his powers of psychic debugging.
Most of what looks like psychic debugging is really just knowing what people tend to get wrong.
That means that you don't necessarily have to be intimately familiar with the architecture / system. You just need enough knowledge to understand the types of bugs that apply and are easy to make.
I personally take the approach of thinking about where the bug may be in the code before actually opening up the code and taking a look. When you first start with this approach, it may not actually work very well, especially if you are pretty unfamiliar with the code base. However, over time someone will be able to tell you the behavior they are experiencing and you'll have a good idea where the problem is located or you may even know what to fix in the code to remedy the problem before even looking at the code.
I was on a project for several years that was maintained by a vendor. They were not very good debuggers, and most of the time it was up to us to point them to the area of the code that had the problem. What made our problem worse was that we didn't have a nice way to view the source code, so a lot of our "debugging" was done by feel.
Error checking and reporting. The #1 newbie coder debugging mistake is to turn off error reporting, to avoid checking whether what's going on makes sense, and so on. In general, people feel that if they can't see anything going wrong, then nothing is going wrong -- which could not be further from the truth.
Instead, your code should be chock full of error conditions that will make lots of noise, with detailed reporting, someplace you will see it. (This doesn't mean inside a production web page.) Then, instead of having to trace an error all over the place because it got passed through sixteen layers of execution before it finally got someplace that broke, your errors start happening proximately to the actual issue.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
After understanding the architecture, one's ability to find bugs in the application increases with their ability to identify and write extensive tests.
Know your tools.
Make sure that you know how to use conditional breakpoints and watches in your debugger.
Use static analysis tools as well - they can point out the more obvious issues.
Sleep and rest.
Use programming methods that produce fewer bugs in the first place.
If implementing a single stand-alone functional requirement takes N separate point-edits to the source code, the number of bugs introduced is roughly proportional to N, so find programming methods that minimize N. Ways to do this: DRY (don't repeat yourself), code generation, and DSLs (domain-specific languages).
Where bugs are likely, have unit tests.
Obviously. IMHO, the best unit tests are Monte Carlo.
Make intermediate results visible.
For example, compilers have intermediate representations, in the form of 4-tuples. If there is a bug, the intermediate code can be examined. That tells you whether the bug is in the first or the second half of the compiler.
P.S. Most programmers are not aware that they have a choice of how much data structure to use. The less data structure you use, the lower the chances of bugs (and performance issues) caused by it.
I find tracepoints to be an invaluable debugging tool. They are a bit like logging, except you create them during a debugging session to solve a particular issue, like breakpoints.
Printing the stacktrace in a tracepoint can be especially useful. For example, you can print the hash code and stacktrace in the constructor of an object, and then later on when the object is used again you can search for its hashcode to see which client code created it. Same for seeing who disposed it or called a certain method etc.
They are also great for debugging issues related to window focus changes and the like, where a debugger would interfere if you dropped into break mode.
Static code tools like FindBugs
Assertions, assertions, and assertions.
Some areas of our code have 4 or 5 assertions for each line of real code. When we get a bug report, the first thing that happens is that the customer data is processed in our debug build; 99 times out of a hundred, an assert will fire near the cause of the bug.
Additionally, our debug build performs redundant calculations to ensure that an optimized algorithm is returning the correct result, and debug functions are used to examine the sanity of data structures.
The hardest thing new developers have to contend with is getting their code to survive the assertions of the code they are calling.
Additionally, we do not allow any code to be put back to the top level if it causes any integration or unit test to fail.
Stepping through the code, examining flow/state where unexpected behavior is occurring. (Then develop a test for it, of course).
Writing Debug.Write(message) in your code and using DebugView is another option. Then run your application and find out what is going on.
"Architecture" in software means something like:
Several components
The components interact across clearly-defined interfaces
Each component has a well-defined responsibility
The responsibility of one component is unlike the responsibilities of other components
So, as you said, the better the architecture the easier it is to find bugs.
First: knowing the bug, you can decide which functionality is broken, and therefore which component implements that functionality. For example, if the bug is that something isn't being logged properly, the bug should be in one of three places:
In the component that's responsible for logging (your logging library)
Or, above that in the application code which is using this library
Or, below that in the system code which this library is using
Second: examine the data transferred across the interfaces between components. To continue the previous example:
Set a debugger breakpoint on the application code which invokes the logger API, to verify whether the logger API is being used correctly (e.g. whether it's being invoked at all, whether parameters are as-expected, etc.).
Doing this tells you whether the bug is in the component above this interface, or in the component that's below this interface.
Repeat (perhaps using binary search if the call stack is very deep) until you've found which component is at fault.
When you come to the point that you think there must be a bug in the OS, check your assertions -- and put them into the code with "assert" statements.
Conversely, as you are writing the code, think of the range of valid inputs for your algorithms and put in assertions to make sure you have what you think you have. Same goes for output: Check that you produced what you think you produced.
E.g. if you expect a non-empty list:
l = getList(input)
assert l, "List was empty for input: %s" % str(input)
I'm part of the QA team at work, and knowing anything about the product and how it is developed helps a lot in finding bugs. Also, when I make new QA tools, I pass them to our dev team to test; finding bugs in your own code is just plain hard!
Some people say programmers are tainted, so they cannot see the bugs in their own product; and we are not just talking about code here, we are beyond that: usability and the functionality itself.
Meanwhile, unit testing seems to be a nice solution for finding bugs in your own code, but it's totally pointless if you're wrong even before writing the unit test. How are you going to find the bugs then? You don't! Let your co-workers find them, or hire a QA guy.
Scientific debugging is what I always used, and it greatly helps.
Basically, if you can replicate a bug, you can track down its origin. You should then run some experiments, observe the results, and infer hypotheses about why the bug happens.
Writing down all your hypotheses, attempts, expected results, and observed results can help you track down the bugs, particularly if they're nasty.
There are automated tools that can help you with that process, particularly git-bisect (and similar bisection tools on other revision systems) to quickly find which change introduced the bug, unit testing to reproduce a bug and prevent regressions in your code (can be used in combination with bisect), and delta debugging to find the culprit in your code (similar to git-bisect but whereas git-bisect works on the code history, delta debugging works on the code directly).
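As a sketch, the git-bisect workflow mentioned above looks like this (v1.0 and make test are placeholders for a known-good revision and your test command):
# mark the endpoints, then let git binary-search the history
git bisect start
git bisect bad                # the current revision is broken
git bisect good v1.0          # last revision known to work
git bisect run make test      # automatically test each candidate revision
git bisect reset              # return to where you started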
But whatever the tools you are using, the most important benefit is in the scientific methodology, as this is the formalization of what most experienced debuggers do.
