Diamond/ModelSim post-route timing simulation problems

I’m new to TinyFPGA, so I need a little help!
I’m working on a TinyFPGA project for sensors and actuators where each TinyFPGA provides an 8-bit digital sensor input and a 4-actuator output with different modes of operation (on/off, PWM, and pulses). The boards are serially interconnected in a ring using the WS2811 pixel protocol, intercepted by an ESP32.
I have built a pretty decent test bench for system simulation which successfully verifies 3 interconnected instances of the design at RTL level (takes 4 hours to complete on my brand new Ryzen 7 machine :-).
The next thing I want to do is post-route simulation to verify the timing, and here I get stuck. I’m using Lattice Diamond and the built-in ModelSim.
I want all the testbench logic to be RTL-simulated, while the actual FPGA design instances are simulated post-route with timing annotation.
The .mdo script for modelsim generated by Lattice Diamond looks like this:
if {![file exists "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/timing/timing.mpf"]} {
project new "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/timing" timing
project addfile "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.vo"
project addfile "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/genericIOSatelite_TB.v"
vlib work
vdel -lib work -all
vlib work
vlog +incdir+C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1 -work work "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.vo"
vlog +incdir+C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2 -work work "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/genericIOSatelite_TB.v"
} else {
project open "C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/timing/timing"
project compileoutofdate
}
vsim -L work -L pmi_work -L ovi_machxo2 +transport_path_delays +transport_int_delays genericIOSatelite_TB -sdfmax /genericIOSatelite_TB/DUT0=C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf
view wave
add wave /*
run 1000ns
Where “genericIOSatelite_impl1_vo.vo” is my placed-and-routed FPGA design, “genericIOSatelite_TB.v” is my testbench, “genericIOSatelite_impl1_vo.sdf” is the timing database for my FPGA design, and “/genericIOSatelite_TB/DUT0” is one of three testbench instantiations of the FPGA design (eventually I would want all three simulated with timing, but one problem at a time).
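For reference, since vsim accepts a separate -sdfmax option per annotated region, I expect the eventual three-instance run would be a sketch like this (DUT1 and DUT2 are assumed names for my other two instances):
set SDF C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf
# annotate each DUT instance with the same SDF file (DUT1/DUT2 are assumed instance names)
vsim -L work -L pmi_work -L ovi_machxo2 +transport_path_delays +transport_int_delays \
  -sdfmax /genericIOSatelite_TB/DUT0=$SDF \
  -sdfmax /genericIOSatelite_TB/DUT1=$SDF \
  -sdfmax /genericIOSatelite_TB/DUT2=$SDF \
  genericIOSatelite_TB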
Now I get the following errors:
…
Loading instances from C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf
** Error (suppressible): (vsim-SDF-3250) C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf(7071): Failed to find INSTANCE 'SLICE_303'.
** Error (suppressible): (vsim-SDF-3250) C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf(7082): Failed to find INSTANCE 'SLICE_304'.
And hundreds more errors like this…
But when I look at the first error, “Failed to find INSTANCE ‘SLICE_303’”, I don’t understand the issue: I can clearly see the ‘SLICE_303’ instance both in “genericIOSatelite_impl1_vo.sdf” and in “genericIOSatelite_impl1_vo.vo”:
“genericIOSatelite_impl1_vo.sdf”:
.
.
.
(CELL
  (CELLTYPE "SLICE_303")
  (INSTANCE SLICE_303)
  (DELAY
    (ABSOLUTE
      (IOPATH B0 F1 (635:710:786)(635:710:786))
      (IOPATH A0 F1 (635:710:786)(635:710:786))
      (IOPATH FCI F1 (459:514:569)(459:514:569))
    )
  )
)
.
.
.
“genericIOSatelite_impl1_vo.vo”:
.
.
.
SLICE_303 SLICE_303( .B0(control_7_adj_1162), .A0(cnt_9_adj_1170),
.FCI(n4958), .F1(n312));
.
.
.
I would very much like advice on what I’m doing wrong here. I’m using the built-in OSCH at 133 MHz, i.e. a cycle time of roughly 7.5 ns, so a reassuring post-route simulation at worst-case timing would be nice.
Best regards/Jonas

It appears to be a bug in ModelSim, as suggested in the following article: https://www.intel.com/content/www/us/en/support/programmable/articles/000084538.html
Adding -sdfnoerror and -sdfnowarn to the vsim command seems to fix the problem, but it is not very reassuring to just squelch the issues :-(
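For reference, the patched vsim line from the generated script, unchanged apart from the two suppression flags:
vsim -L work -L pmi_work -L ovi_machxo2 +transport_path_delays +transport_int_delays \
  -sdfnoerror -sdfnowarn \
  genericIOSatelite_TB \
  -sdfmax /genericIOSatelite_TB/DUT0=C:/Users/jonas/OneDrive/Projects/ModelRailway/GenericJMRIdecoder/hardware/Satelites_CRC_2/impl1/genericIOSatelite_impl1_vo.sdf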

Related

How to drive the DDS Compiler IP core from Xilinx

I completed Anton Potočnik's introductory guide to the Red Pitaya board and I am now able to send commands from the Linux machine running on the SoC to its FPGA logic.
I would like to further modify the project so that I can control the phase of the signal that is being transmitted via the Red Pitaya's DAC. Some pins (7 down to 1) of the first GPIO port were still unused, so I started setting them from within the OS and used the Red Pitaya's LEDs to confirm that they were being set without interfering with the functionality of Anton Potočnik's "high bandwidth averager".
I then set the DDS compiler's Phase Offset Programmability to "streaming" mode so that it can be configured on the fly using the bits that are currently controlling the Red Pitaya's LEDs. I used some slices to connect my signals to the AXI4-Stream Constant IP core, which in turn drives the DDS compiler.
Unfortunately the DAC is just giving me a constant output of 500 mV.
I created a new project with a testbench for the DDS compiler, because synthesis takes a long time and doesn't give me much insight into what is happening.
Unfortunately all the output signals of the DDS compiler are undefined.
My question:
What am I doing wrong, and how can I proceed to control the DAC's phase?
EDIT1: here is my test bench.
The IP core is configured as follows, so many of the control signals that I provided should not be required:
EDIT2; I changed declarations of the form m_axis_data_tready => '0' to m_axis_phase_tready => m_axis_phase_tready_signal. I also took a look at the wrapper file called dds_compiler_0.vhd and saw that it treats both m_axis_phase_tready and m_axis_data_tready as inputs.
My simulation results remained unchanged...
My new test bench can be found here.
EDIT3: Vivado was just giving me the old simulation results; creating a new testbench, deleting the file under <project_name>.sim/sim_1/behav/xsim/simulate.log, and restarting Vivado solved this problem.
I noticed that the wrapper file (dds_compiler_0.vhd) only has five ports:
aclk (in)
s_axis_phase_tvalid (in)
s_axis_phase_tdata (in)
m_axis_data_tvalid (out)
and m_axis_data_tdata (out)
So I removed all the unnecessary control signals and got a new simulation result, but I am still not receiving any useful output from the dds_compiler:
The corresponding testbench can be found here.
I also don't get any valid output when I include the control signals.
The corresponding testbench can be found here.
Looks like m_axis_data_tready is not connected. No data will come out unless that's asserted.
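If your configuration does expose that port, a quick way to confirm this before editing the testbench is to force it from the XSim Tcl console. A sketch, where /tb/dds_inst is a made-up instance path:
# force the handshake input high, then let the simulation run
add_force /tb/dds_inst/m_axis_data_tready 1
run 100 ns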

With ModelSim, how to obtain all signals' simulation data before adding signals to waveform window?

Background: ModelSim v10.4d installed with Quartus v16.0.
I was a Cadence Incisive user and now have to move to Mentor ModelSim, but with ModelSim I can't find a way to get all signals' data before adding them to the waveform window.
For example,
In a .do (Tcl) ModelSim simulation script, a typical flow could be:
1. vcom : compile all source files and the testbench
2. vsim : load the testbench for simulation
3. view structure/signals/wave : open some windows
4. add wave : add signals to the waveform window
5. run xx us : run the simulation for a certain time
With this flow, I have to re-do step 5 each time I add a signal to the waveform window, or it will show "NO DATA" for the newly added signal.
So I wonder if it's possible to skip step 4, do step 5 just once to obtain all signals' simulation data, and then choose which signals to send to the waveform window, having every signal's data without re-doing the "run".
The command you need is log. The reference manual says:
This command creates a wave log format (WLF) file containing simulation data for all HDL objects whose names match the provided specifications.
Try this flow; you can go to step 6 before the end of step 5:
1- vcom *.vhd : compile all source files and testbench
2- vsim work.my_tb : load testbench for simulation
3- view structure/signals/wave : open some windows
4- log * -r : tell ModelSim to record everything
5- run xx us : run simulation for a certain time
6- add signals to waveform window
Using log * -r will slow the simulation down and fill your disk up, so you may wish to target a specific part of your design rather than using *, or to restrict the depth using the -depth option. Full details can be found in the ModelSim reference manual, available via the Help menu.
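For example, to log just one instance, or to cap the recursion depth (the instance names here are made up):
log -r /my_tb/dut_i/*      ;# record everything below one instance
log -r -depth 2 /my_tb/*   ;# record only the top two levels of hierarchy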
If you want to add every signal in the design, just do something like:
add wave -recursive -depth 10 *
This will add every signal up to 10 levels of hierarchy deep. Note that the signal logging only applies to simulation runs executed after the add command.
In a large design, logging every signal will cause the simulation to slow down. By picking and choosing which signals you are actually interested in before running the simulation, you will get the shortest simulation run time.
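A small sketch of that ordering pitfall (paths made up):
run 50 us                  ;# nothing logged yet
add wave /my_tb/dut_i/*    ;# logging starts here
run 50 us                  ;# these signals have data only for the second 50 us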
You can quickly navigate the design using the 'sim' panel, then right-click an object in the 'Objects' panel to add it to the wave window. Here you can also use Add to > Wave > Signals in region, or, in the 'sim' panel, Add to > Wave > Signals in region and below.

Numato Mimas V2 interfacing with DDR RAM

I am a newbie here. I have hands-on experience with Arduino, but now I have been given the task of taking 3000 samples of a waveform at 100 MSPS with an ADC.
As this was impossible with an Arduino and most other microcontrollers, I switched to FPGA
and bought a Numato Mimas V2 (as it has 512 Mb of on-board DDR RAM, which is capable of handling operation that fast).
I also bought an AD9283 along with it, as it is a 100 MSPS ADC with 8-bit output.
I am using Xilinx ISE and Verilog (no specific reason for it).
My problem is that I am unable to interface with that on-board DDR RAM and communicate with it;
that is, I cannot find any tutorial on writing to and reading from that RAM.
So could anyone help me with this?
I think you can read some documents from Xilinx about the Spartan-6, like UG416 and UG388.
Then you can generate an example design (tutorial) for your board, simulate it, and communicate with the DDR on the real board. This is described in UG416, page 67. You do not have to write the code yourself: when you create a MIG IP core, it creates a test design automatically. After you simulate it and verify it on your board, you will be familiar with the MIG IP. You should settle questions like these in this step: How many write/read paths will you use? How do you manage the command path? How do you do the addressing? What frequency do you need? And so on.
Finally, you can use the MIG according to your needs. If you have only one data stream, the MIG is very easy to use: for example, just one write path and one read path.
Forgive my English.

Specman-simulator synchronization issue?

I am using Cadence's Ethernet eVC wherein the agent's monitor is tapped at the following signals:
         ____________                    _____
clk ____|            |__________________|
     ________ _______ ________________ _________
data __0a____X___07__X_______0b_______X_________
                  ^  ^
It samples data at the rising and falling edges of the clock. In the example above, the data 0x07 is garbage data, and the valid values are 0xa (clk rise) and 0xb (clk fall). However, the monitor is sampling (for clk fall) 0x7!
I'm suspecting this is a Specman-simulator synchronization issue. How can this be resolved if it is?
Simulator - IES 13.10
irun 13.10 options (I'll include here only those which I think could be relevant to the issue, plus those whose purpose I don't yet know):
-nomxindr
-vhdlsync
+neg_tchk
-nontcglitch
+transport_path_delays
-notimezeroasrtmsg
-pli_export
-snstubelab
Languages - VHDL (top testbench), Verilog (DUT), Specman (virtual sequence, Enet and OCP eVCs)
Time between 0x07 (left ^ in the waveform above) and the falling edge of the clock (right ^) = 0.098 ns.
One colleague suggested using -sntimescale, but I still can't imagine how that would be causing or resolving the issue. None of my search strings turned up helpful hints, not even articles from Cadence: "specman tick synchronization delta delay timescale precision"
This could indeed be an issue of timescale.
There is a comprehensive cookbook on debugging Specman-simulator interface synchronization issues; please take a look here.
To check what timescale is used in your simulation, you can add the -print_hdl_precision option to irun to print the precision for the VHDL hierarchy. For Verilog, it will be printed automatically in case it is set either in the code or via irun options. The information will be printed during elaboration.
To check the timescale used by Specman, you can issue the following command from Specman prompt:
SN> print get_timescale()
Another option to try (only if the timescale option doesn't help) is to remove the -vhdlsync flag. Indeed, in most mixed environments you should add this flag, but there are rare cases in which the environment works better without it. If you try removing this flag, just remember to re-elaborate.
If you don't find the solution to your problem in the cookbook, deeper investigation is needed: for example, how Specman samples the signal (is it a simple_port, event_port, tick access, etc.). Some trace and probe commands could also be helpful. In such a case, I suggest contacting Cadence support.
Good Luck!
Semadar

How to do simple Aldec Active-HDL simulation with waveform using Tcl scripting?

Having a simple test bench like:
entity tb is
end entity;

architecture syn of tb is
  signal show : boolean;
begin
  show <= TRUE after 10 ns;
end architecture;
The ModelSim GUI allows simulation and waveform viewing with a Tcl script, e.g. in "all.do":
vlib pit
vcom -work pit tb.vhd
vsim pit.tb
add wave sim:/tb/show
run 20 ns
Where running do all.do in the ModelSim GUI console will create the library, compile, load the tb model, and show the waveform:
How can a similarly simple Tcl script be written for the Aldec Active-HDL simulator?
Aldec Active-HDL documentation is pretty vague about how to use Tcl from the GUI, but enough time with trial and error gave a positive result.
It appears that it is required to create a workspace with a design, whereby a library for work is also created, and then design files can be compiled into the library.
The resulting Tcl script for Active-HDL is:
workspace create pit      ;# Create workspace named "pit" and open it
design create -a pit .    ;# Create design named "pit" with "pit" library as work and add it to the workspace
acom $DSN/../tb.vhd       ;# Compile "tb.vhd" file, with location relative to the workspace
asim work.tb              ;# Load simulator with tb from the work library
add wave /tb/show         ;# Add wave "show" to waveform
run 20 ns                 ;# Simulate 20 ns
Which will give the waveform:
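Assuming the script is saved as all.do, it can then be run from the Active-HDL console the same way as in ModelSim:
do all.do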
