How are inout parameters implemented? - VHDL

I know what inout parameters are and how to use them. Assume that we have an inout parameter io and want to create a bidirectional static RAM such as in the following code:
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;

ENTITY sram IS
    port(
        clk  : IN    std_logic;
        wr   : IN    std_logic;
        io   : INOUT std_logic;
        addr : IN    INTEGER RANGE 0 TO 7
    );
END sram;

ARCHITECTURE behavioral OF sram IS
    TYPE matrix IS ARRAY (0 TO 7) OF std_logic;
    SIGNAL mem : matrix;
BEGIN
    PROCESS(clk)
    BEGIN
        IF rising_edge(clk) THEN
            IF wr = '1' THEN
                mem(addr) <= io;
            END IF;
        END IF;
    END PROCESS;

    io <= mem(addr) WHEN wr = '0' ELSE 'Z';
END behavioral;
We can create an instance of sram and write to it from outside, for example with the following code:
io <= '1' WHEN wr = '1' ELSE 'Z';
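For concreteness, a minimal sketch of such a top level is shown below; the entity sram_top, the dout port and the internal wiring are made up for illustration, only the io driver line is taken from above:
library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical top level that instantiates sram and adds a second driver
-- on the shared io net.
entity sram_top is
    port (
        clk  : in  std_logic;
        wr   : in  std_logic;
        addr : in  integer range 0 to 7;
        dout : out std_logic
    );
end entity sram_top;

architecture structural of sram_top is
    signal io : std_logic;
begin
    ram_inst : entity work.sram
        port map (clk => clk, wr => wr, io => io, addr => addr);

    -- second driver on the same net, active only while writing
    io <= '1' when wr = '1' else 'Z';

    -- read side: observe whatever the RAM drives while wr = '0'
    dout <= io;
end architecture structural;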
Question: How can the synthesis tool handle these multiple assignments and arbitrate between the multiple drivers? What hardware is implemented to do this?
Thanks for comments and answers ...

For typical FPGA and ASIC devices, tristate capability is only available on the IOs, like for example in an Altera Arria 10 FPGA.
So for such devices, the internal RAMs are always implemented with dedicated input and output ports, thus not using any internal tristate capabilities.
Even if a RAM is connected to external IOs that support tristate, the internal RAM block is still typically created with dedicated input and output ports, and the connection to a tristate-capable pin on the device is then handled through an in-out buffer with output-enable/tristate control.
If the internal design tries to use tristate capabilities or multiple drivers where the device does not support this, the synthesis tool will generate an error, typically saying that multiple drivers are not supported for the same net.
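To illustrate what the tools build instead, here is a sketch (not vendor-specific code) of the same RAM split into dedicated data-in and data-out signals, so that only one tristate assignment remains and maps onto the IO buffer of the pin; the entity name sram_split and the internal signal names are made up:
library ieee;
use ieee.std_logic_1164.all;

entity sram_split is
    port (
        clk  : in    std_logic;
        wr   : in    std_logic;
        io   : inout std_logic;               -- must end up on a device pin
        addr : in    integer range 0 to 7
    );
end entity sram_split;

architecture rtl of sram_split is
    type matrix is array (0 to 7) of std_logic;
    signal mem  : matrix;
    signal din  : std_logic;                  -- RAM data-in  (from the pad)
    signal dout : std_logic;                  -- RAM data-out (to the pad)
begin
    din <= io;                                -- input side of the IO buffer

    process (clk)
    begin
        if rising_edge(clk) then
            if wr = '1' then
                mem(addr) <= din;             -- RAM write port, dedicated input
            end if;
        end if;
    end process;

    dout <= mem(addr);                        -- RAM read port, dedicated output

    -- output side of the IO buffer: wr acts as the output enable
    io <= dout when wr = '0' else 'Z';
end architecture rtl;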

On Xilinx devices, the schematics are similar.
This is an image of primitive IOBUF:
The green part is the output driver with tristate control; the blue part is the input driver. The complete IOB (Input/Output Block) consists of several primitives:
IOB registers (input, output, tristate control)
delay chains
wires to combine two IOBs into a differential IOB (IBUFDS/OBUFDS/IOBUFDS)
capability to drive clock networks (CC-IOB - clock capable IOB)
pullup, pulldown, ...
driver (IOBUF)
pin/ball (IPAD, OPAD, IOPAD) - this includes ESD protection
How does synthesis work?
Xilinx synthesis tools (XST / Synth) add IPAD or OPAD primitives for every wire in your top-level component's port description. A pad is just the primitive that represents a physical pin or ball under the FPGA package.
If you enabled the option to automatically add I/O buffers, the tool will add IBUF, OBUF, IOBUF, IBUFDS, ... primitives between every PAD and top-level port. It uses the port direction (in, out, inout) to infer the correct buffer type. If this option is disabled (default = on), you have to add buffers manually for every I/O pin. In some cases XST gets picky: I added some IOBUFs with tristate control by hand, so XST declined to infer the missing buffers and I had to add all buffers by hand ...
While using Xilinx XST it's possible to use tristate buses (port direction = inout) below the top level. XST will report that it added (virtual) tristate buffers. These buffers get trimmed if each bit of the bus has an obvious direction and there is no multiple-driver problem.
This does not work in iSim.
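If automatic buffer insertion is disabled, or if you want the tristate control explicitly, an IOBUF can also be instantiated by hand from the UNISIM library. A minimal sketch (the entity and signal names are made up; only the IOBUF port names I, O, T and IO come from the primitive):
library ieee;
use ieee.std_logic_1164.all;

library unisim;
use unisim.vcomponents.all;

entity top_io is
    port (
        drive_en : in    std_logic;   -- '1' = drive the pin
        dout     : in    std_logic;   -- value to drive onto the pin
        din      : out   std_logic;   -- value seen on the pin
        data_pin : inout std_logic    -- the physical pin / pad
    );
end entity top_io;

architecture rtl of top_io is
    signal tri : std_logic;           -- '1' = release the pin (high-Z)
begin
    tri <= not drive_en;

    io_buf : IOBUF
        port map (
            I  => dout,               -- from fabric to pad
            O  => din,                -- from pad to fabric
            T  => tri,                -- tristate control
            IO => data_pin
        );
end architecture rtl;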

Related

Vivado 2015.1 VHDL Input/Output Violation

I am going through the Nexys 4 DDR tutorial and I am implementing a simple MUX:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

library UNISIM;
use UNISIM.VComponents.all;

-- Uncomment the following library declaration if using
-- arithmetic functions with Signed or Unsigned values
--use IEEE.NUMERIC_STD.ALL;

-- Uncomment the following library declaration if instantiating
-- any Xilinx leaf cells in this code.
--library UNISIM;
--use UNISIM.VComponents.all;

entity lab1_2_1 is
    Port ( SW0  : in  STD_LOGIC;
           SW1  : in  STD_LOGIC;
           SW2  : in  STD_LOGIC;
           LED0 : out STD_LOGIC);
end lab1_2_1;

architecture Behavioral of lab1_2_1 is
    Signal SW2_bar : STD_LOGIC;
    Signal SW0_int : STD_LOGIC;
    Signal SW1_int : STD_LOGIC;
begin
    SW2_bar <= not SW2;
    SW0_int <= SW0 and SW2_bar;
    SW1_int <= SW1 and SW2;
    LED0    <= SW0_int or SW1_int;
end Behavioral;
When I get to the bitstream generation step, I get this critical warning:
NSTD #1 Critical Warning 1 out of 1 logical ports use I/O standard (IOSTANDARD) value 'DEFAULT', instead of a user assigned specific value. This may cause I/O contention or incompatibility with the board power or connectivity affecting performance, signal integrity or in extreme cases cause damage to the device or the components to which it is connected. To correct this violation, specify all I/O standards. This design will fail to generate a bitstream unless all logical ports have a user specified I/O standard value defined. To allow bitstream creation with unspecified I/O standard values (not recommended), use this command: set_property SEVERITY {Warning} [get_drc_checks NSTD-1]. NOTE: When using the Vivado Runs infrastructure (e.g. launch_runs Tcl command), add this command to a .tcl file and add that file as a pre-hook for write_bitstream step for the implementation run. Problem ports: LED0.
and
UCIO #1 Critical Warning 1 out of 1 logical ports have no user assigned specific location constraint (LOC). This may cause I/O contention or incompatibility with the board power or connectivity affecting performance, signal integrity or in extreme cases cause damage to the device or the components to which it is connected. To correct this violation, specify all pin locations. This design will fail to generate a bitstream unless all logical ports have a user specified site LOC constraint defined. To allow bitstream creation with unspecified pin locations (not recommended), use this command: set_property SEVERITY {Warning} [get_drc_checks UCIO-1]. NOTE: When using the Vivado Runs infrastructure (e.g. launch_runs Tcl command), add this command to a .tcl file and add that file as a pre-hook for write_bitstream step for the implementation run. Problem ports: LED0.
Any ideas?
Vivado expects you to define the physical locations of the IOs and the IO standards. The IO standard depends on the voltage level and the pull-up/pull-down resistors connected to the pins of the FPGA.
You can add those to a constraint file (e.g. an XDC file). For example, here I assign output LED0 to pin A1 of the FPGA and define the IO standard as 2.5 V LVCMOS. The correct values can be found in the manual of your FPGA board.
set_property PACKAGE_PIN A1 [get_ports {LED0}];
set_property IOSTANDARD LVCMOS25 [get_ports {LED0}];

VHDL buffer variable vs out variable

I am working on a VHDL design and I need to build a 256-word RAM on the Altera DE2-115. The outputs are shown on a seven-segment display.
The problem is this: I have an output variable dataout, which gets its values from the temp_ram array:
dataout <= temp_ram(conv_integer(dir));
Then I want to split the value of dataout into two nibbles for the seven-segment displays:
dataout(7 downto 4)
dataout(3 downto 0)
This shows the following error:
Error (10309): VHDL Interface Declaration error in RAM.vhd(45): interface object "dataout" of mode out cannot be read. Change object mode to buffer.
When I change it to buffer it runs perfectly, but I can't understand what happened.
For cross-platform compatibility and code reusability, I'd recommend an intermediate signal (dataout_int, which can also be read by other statements):
dataout_int <= temp_ram(conv_integer(dir));
and drive the output from this intermediate signal:
dataout <= dataout_int;
You are using conv_integer from a Synopsys package. Please use only the official IEEE packages (e.g. numeric_std).
dataout is a signal, not a variable, because you use a signal assignment statement. Moreover, the signal is a port of mode out. (Ports are signals as well).
Besides static typing, VHDL also checks the directions of signals in ports. Your signal is of mode out, so it cannot be read.
As a solution, you can:
use an intermediate signal (see the sketch below),
use mode buffer, which is not supported by all synthesis tools equally
use VHDL-2008, which allows ports of mode out to be read.
Quartus supports some VHDL-2008 features.
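To make the intermediate-signal option concrete, here is a minimal sketch using numeric_std instead of the Synopsys packages; all entity, port and signal names are assumptions, not taken from your design:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram256 is
    port (
        clk     : in  std_logic;
        we      : in  std_logic;
        dir     : in  std_logic_vector(7 downto 0);
        datain  : in  std_logic_vector(7 downto 0);
        dataout : out std_logic_vector(7 downto 0);
        seg_hi  : out std_logic_vector(3 downto 0);  -- upper nibble for one display
        seg_lo  : out std_logic_vector(3 downto 0)   -- lower nibble for the other
    );
end entity ram256;

architecture rtl of ram256 is
    type ram_t is array (0 to 255) of std_logic_vector(7 downto 0);
    signal temp_ram    : ram_t;
    signal dataout_int : std_logic_vector(7 downto 0);  -- readable intermediate signal
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if we = '1' then
                temp_ram(to_integer(unsigned(dir))) <= datain;
            end if;
        end if;
    end process;

    dataout_int <= temp_ram(to_integer(unsigned(dir)));

    -- drive the out port from the intermediate signal ...
    dataout <= dataout_int;

    -- ... and read the intermediate signal wherever it is needed
    seg_hi <= dataout_int(7 downto 4);
    seg_lo <= dataout_int(3 downto 0);
end architecture rtl;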

iCEstick + yosys - using the Global Set/Reset (GSR)

This is probably more of an iCEstick question than a yosys one, but I'm asking here since I'm using the IceStorm toolchain.
I want to specify the startup behavior of my design, which various places on the internet seem to agree is related to the typically named rst signal. It wasn't obvious to me where such a signal comes from, so I dug into the power-up sequence. My current understanding is from Figure 2 in this document.
After CDONE is pulled high by the device, all of the internal registers have been reset to some initial value. Now, I've found plenty of Lattice documents about how each type of flip-flop or hard IP receives a reset signal and does something with its internal state, but I still don't quite understand how I specify what those states are (or even just find out what they are so I can use them).
For example, if I wanted to bring an LED high for 1 second after power-up (and only after power-up), I would want to start a counter once this reset signal (whatever it is) is released.
Poking around the iCE40 family data sheet and the Lattice site, I found this document about using the Global Set/Reset signal. I confirmed that this GSR is mentioned in the family data sheet, referenced on page 2-3 under "Clock/Control Distribution Network". It seems that a global reset signal can be driven through one of the global buffers GBUF[0-7] and routed (up to 4 of them) to all LUTs via the global/high-fanout distribution network.
This seems like exactly what I was after, but I can't find any other info about how to use this in my designs. The document on using the GSR states that you can instantiate a native GSR component like this:
GSR GSR_INST (.GSR (<global reset sig>));
but I can't tell whether this is just for simulation. Am I going in completely the wrong direction here, or just missing something? I'm very inexperienced with FPGAs and hardware, so it's entirely possible my entire approach is flawed.
I'm not sure if that GSR document actually is about iCE40. The Lattice iCEcube tool interestingly accepts instances of GSR cells, but it seems to simply treat them as constant zero drivers. There is also no simulation model for the GSR cell type in the iCE40 sim library and no description of it in the iCE40 tech library documentation provided by Lattice.
Furthermore, I have built the following two designs with the Lattice tools, and besides the timestamp in the "comment field" of the generated bit-stream file, the generated bit-streams are identical! (This test was performed with Lattice LSE as the synthesis tool, not Synplify. I had problems getting Synplify to run on my machine for some reason and gave up trying over a year ago.)
This is the first test design I've used:
module top (
    input clk,
    output rst,
    output reg val
);
    always @(posedge clk, posedge rst)
        if (rst)
            val = 1;
        else
            val = 0;

    GSR GSR_INST (.GSR (rst));
endmodule
And this is the second test design:
module top (
    input clk,
    output rst,
    output val
);
    assign val = 0, rst = 0;
endmodule
Given these results, I think it is safe to say that the Lattice tools simply ignore GSR cells in iCE40 designs. (Maybe they accept them for compatibility with their other FPGA families?)
So how does one generate a rst signal then? For example, the following is a simple reset generator that asserts (pulls low) resetn for the first 15 cycles:
input clk;
...

wire resetn;
reg [3:0] rststate = 0;

assign resetn = &rststate;
always @(posedge clk) rststate <= rststate + !resetn;
(The IceStorm flow does support arbitrary initialization values for registers, whereas the Lattice tools ignore the initialization value and simply initialize all FFs to zero. So if you want your designs to be portable between the tools, it is recommended to only initialize regs to zero.)
If you are using a PLL, then it is customary to use the PLL LOCK output to drive the resetn signal. Unfortunately the "iCE40 sysCLOCK PLL Design and Usage Guide" does not state whether the generated LOCK signal is already synchronous to the generated clock, so it is a good idea to synchronize it to the clock to avoid problems with metastability:
wire clk, resetn, PLL_LOCKED;
reg [3:0] PLL_LOCKED_BUF;
...

SB_PLL40_PAD #( ... ) PLL_INST (
    ...
    .PLLOUTGLOBAL(clk),
    .LOCK(PLL_LOCKED)
);

// shift PLL_LOCKED in from the right; the width mismatch truncates the MSB
always @(posedge clk)
    PLL_LOCKED_BUF <= {PLL_LOCKED_BUF, PLL_LOCKED};

assign resetn = PLL_LOCKED_BUF[3];
Regarding usage of global nets: You can explicitly route the resetn signal via a global net (using the SB_GB primitive), but using the IceStorm flow, arachne-pnr will automatically route a set/reset signal (when used by more than just a few FFs) over a global net, if a global net is available.

Inferred RAM doesn't initialize in ModelSim Altera edition

I have a memory module for an Altera FPGA target that I've written to be inferred into one of Altera's ALTSYNCRAM blocks. The memory is 1024x16 and I have a memory initialization file specified with an attribute.
When synthesizing, the synthesis report indicates that it generated the type of RAM block that I wanted, and it notes that the initialization file is the one I specified.
When trying to simulate with Altera's edition of ModelSim, the data signal starts out completely uninitialized, and I can't figure out why.
I looked on forums and some people mentioned that ModelSim might not support the ".mif" format I was using but would support ".hex", so I changed my initialization file to ".hex". I also read that relative file paths can be an issue, but I checked my simulation directory and it looks like Quartus II copied the initialization file into that directory when I tried to simulate.
Any ideas on why the memory isn't being initialized and how to make it do so?
A heavily trimmed model that contains the inferred memory:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

library work;
--use work.types.all;

entity CPU is
    --...
end entity CPU;

architecture rtl of CPU is
    --these types are actually included in a package
    subtype reg is std_logic_vector(15 downto 0);
    type mem is array (0 to 1023) of reg;
    --...

    --memory read port
    signal MR  : reg;
    signal MRA : std_logic_vector(9 downto 0); --flops

    --memory write port
    signal MW  : reg;                          --flops
    signal MWA : std_logic_vector(9 downto 0); --flops
    signal MWE : std_logic;                    --flop

    signal data : mem;

    attribute ram_init_file : string;
    attribute ram_init_file of data : signal is "RAM_init.hex";
    attribute ramstyle : string;
    attribute ramstyle of data : signal is "no_rw_check";
begin
    --...

    --memory spec
    MR <= data(to_integer(unsigned(MRA(9 downto 0))));

    memory_process : process(clk)
    begin
        if (clk'event and clk = '1') then
            if (MWE = '1') then
                data(to_integer(unsigned(MWA(9 downto 0)))) <= MW;
            end if;
        end if;
    end process;
end architecture rtl; --CPU
ModelSim does not show any warnings or errors while compiling CPU.vhd, nor does it give any indication of loading the initialization file.
This is my first design using Altera software or memory initialization files, and it wouldn't surprise me if the problem was something really basic, or I'm approaching this from a fundamentally incorrect angle.
I'd normally define the memory with a constant in a package, but this is for a class, and it requires that I have a memory initialization file (it requires .mif format too, but that's only a 3 character change between simulation and synthesis file).
It looks like Modelsim may have a "mem load" command you can use at the start of your simulation in order to initialize the memory. Take a look at the end of this thread:
Initialization altsyncram
Being able to initialize RAM on an FPGA depends on both the synthesizer and the specific FPGA you are using. Some FPGA families support this, others don't. I know this is not the answer you want to hear, but you'll need to check the documentation from Altera.
Modelsim does not pay attention to synthesis attributes. Those are a vendor-specific convention. You can refer to them in simulation as with any other user-defined attribute, but Modelsim doesn't know that some attributes invoke special behavior in various third-party synthesizers.
If you want to initialize the RAM for simulation you will need to do one of the following:
Write a function that reads the contents of the memory file and call it during initialization of the data signal.
Convert the memory contents to a VHDL constant defined in a separate package and assign the constant to the data signal as the initializer. This can be automated with a script.
Use the Verilog system task $readmemh (requires Modelsim with mixed language license)
For option 1, the function should be of the form:
impure function read_mem(fname : string) return mem is
    variable data : mem;
begin
    -- ** Perform read with textio **
    ...
    return data;
end function;

signal data : mem := read_mem("RAM_init.hex");  -- file name matching the ram_init_file attribute
The Quartus documentation on RAM initialization is sparse and only demonstrates initialized data assigned from within a VHDL process rather than reading from a file. The Xilinx documentation on RAM/ROM inferencing (p258) provides examples for doing this with general purpose VHDL. The same technique can be used for simulating a design targeted to Altera. XST supports this use of file I/O for synthesis but Quartus may choke on it. If that is the case you will have to use a configuration to swap between a synthesis oriented RAM model and one specifically for simulation that initializes with the function.
The Xilinx example only shows how to read files with ASCII binary. I have a general purpose ROM component that reads hex as well as binary which you can adapt into a RAM for what you need.
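To fill in the body of read_mem above, a minimal textio-based sketch is shown below. It assumes the types reg and mem from your architecture and a plain-text file with one 16-bit hexadecimal word per line (a Quartus-generated Intel-HEX .hex file would need more parsing); it is intended for simulation only:
-- additionally required in the file header:
use std.textio.all;
use ieee.std_logic_textio.all;   -- hread for std_logic_vector (pre VHDL-2008)

impure function read_mem(fname : string) return mem is
    file f     : text open read_mode is fname;
    variable l : line;
    variable w : reg;                                 -- one 16-bit word
    variable m : mem := (others => (others => '0'));  -- default-fill unused words
    variable i : natural := 0;
begin
    while not endfile(f) and i <= m'high loop
        readline(f, l);
        hread(l, w);                                  -- one hex word per line
        m(i) := w;
        i    := i + 1;
    end loop;
    return m;
end function read_mem;
The data signal is then initialized with this function exactly as in the declaration shown above.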

VHDL - same bitstream, two boards -> inout issue

I wanted to ask if it is possible to use an inout pin both as an inout and as a normal out. The two behaviours should be switched through a MUX. The reason for this weird-looking implementation is that I have two boards and I want to use the same bitstream. On one board, the pin is connected to an LED through GPIO, and on the other it goes to my I2C bus connection. The software tries to detect the I2C and, if successful, sets a register. If not, it clears it.
LED_or_SDA : inout std_logic; -- port definition

process (mode_reg)
begin
    if (mode_reg = '1') then     -- software sets this register
        LED_or_SDA <= I2C_SDA;   -- here I want to use it as inout
    else
        LED_or_SDA <= gpio_reg;  -- here I want to use it as normal out
    end if;
end process;
This implementation throws the error "bidirect pad net is driving non-buffer primitives" during translate. Is there a solution for this?
No, you can't. A mux is not a switch, it is a logic function. The line
LED_or_SDA <= I2C_SDA;
strongly drives LED_or_SDA from I2C_SDA. It does not connect the two nets in a way that allows bidirectional data flow.
You'll need to separate the two directions:
I2C_SDA_in <= LED_or_SDA;

LED_or_SDA <= gpio_reg WHEN (mode_reg = '0') ELSE
              '0'      WHEN (I2C_SDA_out = '0') ELSE
              'Z';
Most I2C logic blocks have separate data in and out signals anyway, right up until the external interface, where you'll find a tri-state buffer expressed in much the same way as the code I gave you. You'll simply need to make the input and output data separate ports on your I2C block.
Unfortunately, the weak drive state of the internal signal likely isn't accessible to internal logic, so these won't work:
LED_or_SDA <= gpio_reg WHEN (mode_reg = '0') ELSE
              I2C_SDA_out;

LED_or_SDA <= gpio_reg WHEN (mode_reg = '0') ELSE
              'Z'      WHEN (I2C_SDA_out = 'Z') ELSE
              I2C_SDA_out;
The VHDL compiler actually keeps track of a tri-state control signal and propagates it through port statements until it hits the external pin, where it connects it to the real (hardware) tri-state buffer. But in most compilers you can't access that control signal for your own logic.
Guessing from your error message, I assume that we are talking about a Xilinx system, e.g. built around a MicroBlaze. If that's not the case, please update your question and also specify the exact GPIO core you're using.
Often these GPIO blocks already contain the IO buffer macro by default. This IOBUF is located next to a physical pin and cannot drive any other signals on the FPGA. Hence the GPIO block is intended to be used at the top level of the chip and be connected directly to a pin. However, there usually is a way to also access the signals before the IOBUF: e.g. in Xilinx Platform Studio, on the "Ports" tab you should have the choice to use GPIO_IO (after buffer), GPIO_IO_I (pure input), GPIO_IO_O (pure output), GPIO_IO_T (tristate) or similar.
Yes, the trick is to always treat it as a tristate buffer and control the output based on the state of the mode register:
-- define a tristate pin the usual way
LED_or_SDA <= LED_or_SDA_out when LED_or_SDA_tristate = '0' else 'Z';
LED_or_SDA_in <= LED_or_SDA;

-- then control the data onto it, and the tristate control line
LED_or_SDA_tristate <= '0' when mode_reg = '0' else I2C_SDA_out;
LED_or_SDA_out      <= gpio_reg when mode_reg = '0' else '0';
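Putting these pieces together, a minimal self-contained sketch of the pin logic (the entity and signal names are assumptions; the I2C core is assumed to provide separate SDA output and input signals):
library ieee;
use ieee.std_logic_1164.all;

entity shared_pin is
    port (
        mode_reg   : in    std_logic;  -- '1' = I2C board, '0' = LED board
        gpio_reg   : in    std_logic;  -- LED value in GPIO mode
        i2c_sda_o  : in    std_logic;  -- SDA value the I2C core wants to drive
        i2c_sda_i  : out   std_logic;  -- SDA value seen on the pin
        LED_or_SDA : inout std_logic   -- the shared physical pin
    );
end entity shared_pin;

architecture rtl of shared_pin is
    signal pin_out : std_logic;
    signal pin_tri : std_logic;        -- '1' = release the pin (high-Z)
begin
    -- open-drain style for I2C (drive '0' or release), push-pull for the LED
    pin_out <= gpio_reg when mode_reg = '0' else '0';
    pin_tri <= '0'      when mode_reg = '0' else i2c_sda_o;

    LED_or_SDA <= pin_out when pin_tri = '0' else 'Z';
    i2c_sda_i  <= LED_or_SDA;
end architecture rtl;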
