I have seen an example written in a VHDL file. Example snippet:
architecture aaa of bbb is
signal ccc : std_logic;
begin
ccc <= transport global_en_lb;
....
I just want to know about transport in the above snippet.
What does it mean?
Transport delays are idealised: they model propagation through a device or connection with infinite frequency response. Any input pulse, no matter how short, produces an output pulse. You could model an ideal transmission line with a transport delay, for example - any and all input changes propagate through the line. Transport delays can also be useful in testbenches for queuing up transactions on a driver.
Inertial delays approximate real-world delays. They're more complex but, in short, if you try to propagate a pulse whose width is less than the propagation delay through the device or wire, the pulse disappears. Inertial delay is the default in VHDL when neither the transport nor the inertial keyword is present.
At the HDL level, the actual difference between the two is in what happens when you schedule a new transaction for a signal when that signal already has scheduled transactions. For transport delays the transactions are just queued up; for inertial transactions the simulator may merge them.
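A minimal sketch of the difference (entity and signal names invented for illustration): a 5 ns pulse pushed through a 10 ns delay survives with transport but is swallowed by the default inertial model.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity delay_demo is
end entity;

architecture sim of delay_demo is
  signal stim, y_inertial, y_transport : std_logic := '0';
begin
  -- Drive a 5 ns pulse (20 ns .. 25 ns) on stim.
  stim <= '1' after 20 ns, '0' after 25 ns;

  -- Inertial (the default): the 5 ns pulse is shorter than the
  -- 10 ns delay, so the pending transaction is deleted and the
  -- pulse never appears on y_inertial.
  y_inertial <= stim after 10 ns;

  -- Transport: every transaction is queued, so the full pulse
  -- appears on y_transport 10 ns later (30 ns .. 35 ns).
  y_transport <= transport stim after 10 ns;
end architecture;
```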
On your Verilog comment: this was a bit of an afterthought in Verilog (like so much else). However, a delay on the RHS of a non-blocking assignment models a transport delay:
always @(x)
  y <= #10 ~x; // transport
Continuous assignments don't queue transactions, so they model inertial delays:
assign #10 y = ~x; // inertial
In VHDL, the transport keyword is used to model delays for simulation. The transport delay model simply delays the change on the output by the time specified in the after clause.
Transport delay is used to model the delay introduced by wiring: with transport delay, any pulse, however narrow, is propagated to the output. It is especially useful for modelling delay-line drivers, wire delays on a PC board, and path delays on an ASIC.
To understand exactly how it behaves in simulation, refer to the linked article.
I'm using two timers, TIM3 and TIM4, for counting motor encoder readings (TIM3) and handling Hall sensor inputs (TIM4: inputs CH1, CH2 and CH3 are XORed into TI1 of TIM4 running in Hall interface mode). What I would like to do now is synchronize the two timers so that when a Hall input toggles, the encoder timer is reset. However, it seems that there is no way to combine encoder mode (in the SMS field) with reset mode such that the TIM3 counter is reset when TIM4's TRGO toggles. It seems that I can only choose one mode or the other, but not a combination of both.
Maybe I'm misunderstanding how the two timers can be combined for rotor position estimation? What is the best way to combine and sync hall sensor readings with encoder readings on stm32 without using an ISR to reset the counter manually? (Preferably I want to do this automatically in hardware. I have the manual solution working, but I'm not 100% happy with it).
The chip is stm32f103.
In CR2 each timer has an output trigger selection (MMS). In SMCR each timer has slave-mode input settings (SMS).
When you set the Hall timer's master mode to Compare Pulse and the encoder timer to Reset Mode, I think the encoder timer will be reset each time an input capture occurs on CH1 of the Hall timer.
Whether this is possible on your chip depends on the interconnects between the timers.
See TIMx Internal trigger connection (ITR).
The SMS bits are already set to encoder mode; you can't have both reset mode and encoder mode at once.
You can trigger a DMA operation from memory to TIMx->EGR:UG.
TIM3_CH1 can trigger a halfword memory-to-peripheral transfer on DMA1 channel 6, writing the value 0x0001 to TIM4->EGR.
This will cause TIM4 to re-initialize the counter.
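A register-level sketch of that DMA trick, assuming CMSIS-style names for the STM32F103 (RM0008). Clock enables, GPIO and timer setup are omitted, and the bit positions should be checked against the reference manual before use:

```c
/* Constant halfword that DMA writes into TIM4->EGR: UG = bit 0. */
static const uint16_t tim_egr_ug = 0x0001;

/* DMA1 channel 6 serves TIM3_CH1 requests on the STM32F103. */
DMA1_Channel6->CPAR  = (uint32_t)&TIM4->EGR;  /* destination: TIM4 event gen. */
DMA1_Channel6->CMAR  = (uint32_t)&tim_egr_ug; /* source: the constant 0x0001  */
DMA1_Channel6->CNDTR = 1;                     /* one transfer per request     */
DMA1_Channel6->CCR   = (1u << 4)   /* DIR: memory -> peripheral       */
                     | (1u << 5)   /* CIRC: re-arm after each xfer    */
                     | (1u << 8)   /* PSIZE = 01: 16-bit peripheral   */
                     | (1u << 10)  /* MSIZE = 01: 16-bit memory       */
                     | (1u << 0);  /* EN: enable the channel          */

TIM3->DIER |= (1u << 9);           /* CC1DE: DMA request on CC1 capture */
```

Each capture on TIM3_CH1 then writes UG into TIM4->EGR, re-initializing the TIM4 counter entirely in hardware, with no ISR involved.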
I have designed the SHA-3 algorithm in two ways: combinational and sequential.
When synthesized, the sequential design (the one with a clock) gives this design summary:
Minimum clock period 1.275 ns and maximum frequency 784.129 MHz.
The combinational one, which is designed without a clock and placed between input and output registers, gives this synthesis report:
Minimum clock period 1701.691 ns and maximum frequency 0.588 MHz.
So I want to ask: is it correct that the combinational design has a lower maximum frequency than the sequential one?
As far as theory is concerned, a combinational design should be faster than a sequential one. But the simulation results I am getting for the sequential design appear after 30 clock cycles, whereas with the combinational one there is no delay in the output, since there is no clock. In that sense the combinational design is faster, as we get the output instantly, so why is its frequency of operation lower than the sequential one's? Why is this design slow? Can anyone explain, please?
The design has been simulated in Xilinx ISE
Now I have applied pipelining to the combinational logic by inserting registers between the 5 main blocks that do the computation. These registers are clocked, so the pipelined design now gives this design summary:
Clock period 1.575 ns and frequency 634.924 MHz;
minimum period 1.718 ns and frequency 581.937 MHz.
So this 1.575 ns is the delay between any two of the registers, not the propagation delay of the entire algorithm. How can I calculate the propagation delay of the entire pipelined algorithm?
What you are seeing is pipelining and its performance benefits. The combinational circuit makes each input go through the propagation delays of the entire algorithm, which can take up to 1701.691 ns on the FPGA you are working with, because the slowest critical path in the combinational circuitry needed to calculate the result takes that long. Your simulator is not telling you everything, since a behavioural simulation will not show gate propagation delays; you'll just see the instant calculation of your combinational function in your simulation.
In the sequential design, you have multiple smaller steps, the slowest of which takes 1.275ns in the worst case. Each of those steps might be easier to place-and-route efficiently, meaning that you get overall better performance because of the improved routing of each step. However, you will need to wait 30 cycles for a result, simply because the steps are part of a synchronous pipeline. With the correct design, you could improve this and get one output per clock cycle, with a 30-cycle delay, by having a full pipeline and passing data through it at every clock cycle.
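The arithmetic behind those numbers is just f = 1/T, and the total propagation delay (latency) of a pipeline is the stage count times the clock period. A quick sanity check using the figures from the question (the stage count below is an assumption, not something the synthesis report states):

```python
# Convert a reported critical-path delay (ns) into the maximum
# clock frequency: f = 1/T, expressed in MHz.
def max_freq_mhz(period_ns: float) -> float:
    return 1000.0 / period_ns

# Fully combinational: one long path limits the whole evaluation.
comb_mhz = max_freq_mhz(1701.691)   # ~0.588 MHz, matching the report

# Sequential: only the slowest single step limits the clock.
seq_mhz = max_freq_mhz(1.275)       # ~784 MHz, matching the report

# Pipelined: total propagation delay (latency) = stages x clock period.
# The stage count is an assumption here -- use however many register
# stages your pipeline actually has (e.g. around the 5 main blocks).
stages = 6
latency_ns = stages * 1.575

print(f"comb {comb_mhz:.3f} MHz, seq {seq_mhz:.1f} MHz, "
      f"pipeline latency {latency_ns:.2f} ns")
```

Once the pipeline is full you still get one result per 1.575 ns cycle; the latency only affects how long the first result takes to emerge.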
I have a Spartan-6/ISE design where I'm generating 8-bit data at 70 MHz to feed the FIFO of a Cypress FX3 USB3 controller. I also generate a 70 MHz output clock and /WR strobe that clock data into the USB controller. The 70 MHz is derived by halving the 140 MHz system clock (divided by 2 in a process rather than using a DPLL), though the 140 MHz system clock itself is produced using a DPLL.
I want to ensure the 8-bit data meets the set-up & hold time requirements of the USB controller and, although the data, o/p clock and /WR are derived from the 140MHz, I don't really care about their relationship to it. What I'm really concerned about is ensuring the set-up & hold times for data & /WR w.r.t the 70MHz o/p clock are within the USB controller limits.
My question is: how do I go about specifying timing constraints between FPGA outputs, rather than w.r.t. the internal system clock?
Thanks
Dave
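One approach in ISE is to bound every output in the interface (data, /WR and the forwarded 70 MHz clock) relative to the same internal clock with OFFSET OUT constraints, then read the pin-to-pin skew off the timing report. A UCF sketch, with invented net names and placeholder values (take the real limits from the FX3 datasheet):

```
# Invented net names -- adjust to the actual design.
NET "clk140" TNM_NET = "clk140";
TIMESPEC "TS_clk140" = PERIOD "clk140" 7.143 ns HIGH 50%;

# Bound the clock-to-pad delay of every output in the interface
# relative to the same internal 140 MHz clock:
NET "fx3_data<*>" OFFSET = OUT 6 ns AFTER "clk140";
NET "fx3_wr_n"    OFFSET = OUT 6 ns AFTER "clk140";
NET "fx3_clk"     OFFSET = OUT 6 ns AFTER "clk140";
```

A common companion technique is to forward the 70 MHz clock through an ODDR2 output register rather than driving a pin from a fabric divider: the clock then takes the same IOB register-to-pad path as the data, so the data-to-clock skew at the pins stays small and predictable.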
Is it possible to set an inout pin to a specific value after monitoring the value on the same pin? I.e., if we have an inout signal and the value on that signal is '1', then after doing a specific operation can we set the value of that pin to '0' in VHDL?
What you are describing doesn't make a lot of sense. Are you sure you are understanding the requirements correctly?
Your load signal sounds like an external control signal that is an input into your module. You should not be trying to change the value of that signal - whoever is controlling your module should do that instead.
As long as the load signal is asserted (1), you should probably be loading your shift register with whatever value is presumably being provided on a different input signal (e.g., parallel_data). When the load signal is deasserted (0) by the external logic, you should probably start shifting out one bit of the loaded data per clock cycle to your output signal (e.g., serial_data).
Note that there is no need for bidirectional signals!
This is all based on what I would consider typical behavior for a shift register, and may or may not match what you are trying to achieve.
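A sketch of that typical shift register (all names invented), with load as a plain input and no bidirectional signal anywhere:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity piso is
  port (
    clk           : in  std_logic;
    load          : in  std_logic;  -- external control signal, input only
    parallel_data : in  std_logic_vector(7 downto 0);
    serial_data   : out std_logic
  );
end entity;

architecture rtl of piso is
  signal reg : std_logic_vector(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if load = '1' then
        reg <= parallel_data;          -- capture while load is asserted
      else
        reg <= reg(6 downto 0) & '0';  -- shift one bit per cycle, MSB first
      end if;
    end if;
  end process;

  serial_data <= reg(7);
end architecture;
```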
This doesn't sound like a good plan, and I'm not entirely sure you want to do it, but I guess if you can set things up such that:
you have a resistor pulling the wire down to ground.
your outside device drives the wire high
the FPGA captures the pin going high and then also drives it high
The outside source goes tristate once it has seen the pin go high
the FPGA can then set the pin tristate when it wants to flag it has finished (or whatever), and the resistor will pull it low again
repeat
I imagine one use for this would be for the outside device to trigger some processing and the FPGA to indicate when it has finished, in which case, the FPGA code could be something like:
pin_name <= '1' when fpga_is_processing = '1' else 'Z';
start_processing <= '1' when pin_name = '1' and pin_name_last = '0' else '0';
pin_name_last <= pin_name when rising_edge(clk);  -- VHDL-2008 concurrent form
start_processing will produce a single clock pulse on the rising edge of the pin_name signal. fpga_is_processing would be an output from your processing block, which must "come back" before the external device has stopped driving the pin high.
You may want to "denoise" the edge-detector on the pin_name signal to reduce the chances of external glitches triggering your processing. There are various ways to achieve that also.
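One way to do that denoising is a two-flop synchronizer in front of the edge detector, so metastability and short glitches are far less likely to fire start_processing (the pin_sync* and pin_last signal names are invented):

```vhdl
-- Synchronize the asynchronous pin into the clk domain, then
-- detect a rising edge on the synchronized copy only.
process (clk)
begin
  if rising_edge(clk) then
    pin_sync1 <= pin_name;   -- first synchronizer flop
    pin_sync2 <= pin_sync1;  -- second flop: use only this one downstream
    pin_last  <= pin_sync2;  -- delayed copy for edge detection
  end if;
end process;

start_processing <= '1' when pin_sync2 = '1' and pin_last = '0' else '0';
```

This adds two clock cycles of latency to the handshake, which is usually a fair trade for glitch immunity.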
I'm writing a state machine which controls data flow from a chip by setting and reading read/write enables. My clock is running at 27 MHz giving a period of 37 ns. However the specification for the chip I'm communicating with requires I hold my 'read request' signal for at least 50 ns. Of course this isn't possible to do in one cycle since my period is 37 ns.
I have considered I could create an additional state which does nothing but flag the next state to be the one I actually complete the read on, hence adding another period delay (meaning I hold 'read request' for 74 ns), but this doesn't sound like good practice.
The other option is perhaps to use a counter, but I wonder if there's yet another option I haven't considered?
How should one implement delay in a state machine when a state should last longer than one clock period?
Thanks!
(T1 must be greater than 50 ns)
Please see here for the full datasheet.
Delays are only reliably doable using the clock - adding an extra "tick" either via an extra state or using a counter in the existing state is perfectly acceptable to my mind. The counter has the possibility of being more flexible if you re-use the same state machine with a slower external chip (or if you use a different clock frequency to feed the FPGA) - you can just change the maximum count, instead of adding multiple "wait" states to the state machine.
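The counter variant can be sketched like this (state and signal names invented); at 27 MHz, two cycles give 2 × 37 ns = 74 ns, which satisfies T1 > 50 ns:

```vhdl
-- Declarations (fragment):
constant READ_HOLD_CYCLES : natural := 2;  -- 2 x 37 ns = 74 ns >= 50 ns
signal   hold_count       : natural range 0 to READ_HOLD_CYCLES - 1 := 0;

-- Inside the state machine's clocked process:
case state is
  when ST_READ_REQ =>
    read_req <= '1';                  -- hold the request asserted
    if hold_count = READ_HOLD_CYCLES - 1 then
      hold_count <= 0;
      state      <= ST_READ_DONE;     -- complete the read here
    else
      hold_count <= hold_count + 1;   -- keep waiting
    end if;
  -- ... other states ...
end case;
```

Changing READ_HOLD_CYCLES is then the only edit needed if the clock frequency or the external chip's timing changes.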