How to drive the DDS Compiler IP core from Xilinx

I completed Anton Potočnik's introductory guide to the Red Pitaya board and I am now able to send commands from the Linux machine running on the SoC to its FPGA logic.
I would like to further modify the project so that I can control the phase of the signal that is transmitted via the Red Pitaya's DAC. Some pins (from 7 down to 1) of the first GPIO port were still unused, so I started setting them from within the OS and used the Red Pitaya's LEDs to confirm that they were being set without interfering with the functionality of Anton Potočnik's "high bandwidth averager".
I then set the DDS Compiler's Phase Offset Programmability option to "Streaming" mode so that it can be configured on the fly using the bits that are currently controlling the Red Pitaya's LEDs. I used some slices to connect my signals to the AXI4-Stream Constant IP core, which in turn drives the DDS Compiler.
Unfortunately the DAC is just giving me a constant output of 500 mV.
I created a new project with a testbench for the DDS compiler, because synthesis takes a long time and doesn't give me much insight into what is happening.
Unfortunately all the output signals of the DDS compiler are undefined.
My question:
What am I doing wrong and how can I proceed to control the DAC's phase?
EDIT 1: here is my test bench
The IP core is configured as follows, so many of the control signals that I provided should not be required:
EDIT 2: I changed port associations of the form m_axis_data_tready => '0' to m_axis_phase_tready => m_axis_phase_tready_signal. I also took a look at the wrapper file called dds_compiler_0.vhd and saw that it treats both m_axis_phase_tready and m_axis_data_tready as inputs.
My simulation results remained unchanged...
My new test bench can be found here.
EDIT 3: Vivado was just showing me the old simulation results - creating a new testbench, deleting the file under <project_name>.sim/sim_1/behav/xsim/simulate.log and restarting Vivado solved this problem.
I noticed that the wrapper file (dds_compiler_0.vhd) only has five ports:
aclk (in)
s_axis_phase_tvalid (in)
s_axis_phase_tdata (in)
m_axis_data_tvalid (out)
and m_axis_data_tdata (out)
So I removed all the unnecessary control signals and got a new simulation result, but I am still not receiving any useful output from the dds_compiler:
The corresponding testbench can be found here.
I also don't get any valid output when I include the control signals.
The corresponding testbench can be found here.

Looks like m_axis_data_tready is not connected. No data will come out unless that's asserted.
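For reference, here is a minimal sketch of a testbench for the five-port wrapper described in EDIT 3. The 16-bit widths are assumptions that must match the core's configured phase and output widths, and if your configuration also exposes aresetn or m_axis_data_tready, those must be driven as well (tie m_axis_data_tready to '1', as the answer above says):

library ieee;
use ieee.std_logic_1164.all;

entity tb_dds_compiler is
end entity;

architecture sim of tb_dds_compiler is
  signal aclk                : std_logic := '0';
  signal s_axis_phase_tvalid : std_logic := '0';
  signal s_axis_phase_tdata  : std_logic_vector(15 downto 0) := (others => '0');
  signal m_axis_data_tvalid  : std_logic;
  signal m_axis_data_tdata   : std_logic_vector(15 downto 0);
begin
  aclk <= not aclk after 4 ns;  -- 125 MHz, the Red Pitaya DAC clock

  dut : entity work.dds_compiler_0
    port map (
      aclk                => aclk,
      s_axis_phase_tvalid => s_axis_phase_tvalid,
      s_axis_phase_tdata  => s_axis_phase_tdata,
      m_axis_data_tvalid  => m_axis_data_tvalid,
      m_axis_data_tdata   => m_axis_data_tdata
    );

  stimulus : process
  begin
    wait for 100 ns;
    wait until rising_edge(aclk);
    -- drive a constant phase offset and keep tvalid asserted from then on
    s_axis_phase_tdata  <= x"4000";  -- 90 degrees with a 16-bit phase word
    s_axis_phase_tvalid <= '1';
    wait;
  end process;
end architecture;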

Related

Writing to a peripheral in Vivado and then outputting to a LED

I want to create a basic project in Vivado that takes a value I input into a client, sends it to a server I made (in C), has the server write that value to a peripheral in Vivado, and then routes the data in the peripheral to output pins assigned to LEDs, making the LEDs light up.
Basically I want to go from client --> server --> peripheral --> LED lights up.
For example, in the client (a GUI) I want to give it a value such as 0011, which is received by the server. The server then writes that value to the peripheral, which will in this case leave LED0 & LED1 unlit but light LED2 & LED3.
I know how to make an AXI4 peripheral in Vivado, and the client-server (TCP/IP) part has been made. My question is: what code/design block would I need to take the data written to the peripheral and assign it to the LEDs?
Should I make the peripheral a Master or a Slave? Overall, I am confused about how to proceed from here. I am using a Red Pitaya (Xilinx Zynq 7010 SoC) connected by an Ethernet cable to my computer.
Also, I thought of running the program on the Red Pitaya by loading the bitstream onto it (using WinSCP) and running the command
cat FILE_NAME.bit > /dev/xdevcfg
in PuTTY (connected to the Pitaya by IP address), then running the server on the Pitaya, and then sending the signal from the client for the server to receive. Is that the correct way of approaching it?
If my logic is off in any way, please let me know.
I am somewhat thrown by your statements.
First you say "I know how to make an AXI4 peripheral in Vivado"
Next I read: "Should I make the peripheral a Master or Slave?"
Maybe I am wrong but to me it says you don't really know what you are doing.
Simplest is to:
Instantiate a Zynq system.
Add the IP with the name "AXI GPIO". (Which, by the way, is an AXI slave.)
Run the connection automation.
Assign the right I/O pins to the GPIO port. (Check your development system's manual.)
Build the system.
By the way, you can find the address of the peripheral in the Address Editor tab; it normally is 0x0080000000.
You wrote that you made a server (TCP/IP). "All" it has to do is write the received value to a register in the GPIO block. (Here I assume Xilinx has a document which describes how the GPIO block works and has example GPIO drivers.)
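If you prefer to keep your own AXI4 peripheral instead of AXI GPIO, the missing piece is small. A sketch, assuming the stock Vivado AXI4-Lite slave template, where slv_reg0 is the template's first software-writable register (the port name LED is mine):

-- 1) add an output port to the generated S00_AXI entity:
LED : out std_logic_vector(3 downto 0);

-- 2) drive it from the register the server writes to:
LED <= slv_reg0(3 downto 0);

Then make the LED port external in the top level and constrain its four pins to the LED pads in your XDC file; whatever value the server writes to that register appears on the LEDs.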

Specman-simulator synchronization issue?

I am using Cadence's Ethernet eVC wherein the agent's monitor is tapped at the following signals:
        ____________                        _____
clk  __|            |______________________|
     ______ ______ __________________ __________
data __0a__X__07__X________0b________X__________
              ^     ^
It samples data at the rising and falling edges of the clock. In the example above, the data value 0x07 is garbage, and the valid values are 0x0a (at the clock's rising edge) and 0x0b (at its falling edge). However, at the falling edge the monitor samples 0x07!
I'm suspecting this is a Specman-simulator synchronization issue. How can this be resolved if it is?
Simulator - IES 13.10
irun 13.10 options - (I'll include here only those which I think could be relevant to the issue, plus those whose purpose I don't know yet)
-nomxindr
-vhdlsync
+neg_tchk
-nontcglitch
+transport_path_delays
-notimezeroasrtmsg
-pli_export
-snstubelab
Languages - VHDL (top testbench), Verilog (DUT), Specman (virtual sequence, Enet and OCP eVCs)
Time between 0x07 (left ^ in the waveform above) and falling edge of clock (right ^) = 0.098ns
One colleague suggested using -sntimescale, but I still can't imagine how that would cause or resolve the issue. None of these search strings turned up helpful hints, not even in articles from Cadence: "specman tick synchronization delta delay timescale precision"
This could indeed be a timescale issue.
There is a comprehensive cookbook about debugging Specman simulator-interface synchronization issues; please take a look here.
To check which timescale is used in your simulation, you can add the -print_hdl_precision option to irun to print the precision for the VHDL hierarchy. For Verilog, it is printed automatically if it is set either in the code or via irun options. The information is printed during elaboration.
To check the timescale used by Specman, you can issue the following command from Specman prompt:
SN> print get_timescale()
Another option to try (only if the timescale option doesn't help) is to remove the -vhdlsync flag. In most mixed environments you should indeed add this flag, but there are rare cases in which the environment works better without it. If you try removing this flag, just remember to re-elaborate.
If you don't find the solution to your problem in the cookbook, deeper investigation is needed: for example, how does Specman sample the signal - is it a simple_port, an event_port, tick access, etc.? Some trace and probe commands could also be helpful. In that case, I suggest contacting Cadence support.
Good Luck!
Semadar

I cannot get the Xilinx uartlite IP to work

I'm attempting to use the Xilinx uartlite 2.0 IP with an AXI4-Lite interface to transmit a byte without a MicroBlaze processor. Unfortunately, all the ready signals remain low after I set the data and valid signals, and the tx signal never transmits.
I've included my simulation results. Any ideas?
For posterity: I had to invert the reset (s_axi_aresetn is active-low) and ensure all the inputs were initialized. Thank you for the helpful comments. I've attached a working simulation.
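For anyone else stuck here, a minimal sketch of working stimulus, assuming the instance is named axi_uartlite_0 and using the TX FIFO register at offset 0x4 from PG142; the key points are releasing the active-low reset and driving every input:

library ieee;
use ieee.std_logic_1164.all;

entity tb_uartlite is
end entity;

architecture sim of tb_uartlite is
  signal aclk    : std_logic := '0';
  signal aresetn : std_logic := '0';  -- active-low, starts asserted
  signal awaddr  : std_logic_vector(3 downto 0) := (others => '0');
  signal awvalid : std_logic := '0';
  signal awready : std_logic;
  signal wdata   : std_logic_vector(31 downto 0) := (others => '0');
  signal wstrb   : std_logic_vector(3 downto 0) := (others => '0');
  signal wvalid  : std_logic := '0';
  signal wready  : std_logic;
  signal bvalid  : std_logic;
  signal bready  : std_logic := '0';
  signal tx      : std_logic;
begin
  aclk <= not aclk after 5 ns;  -- 100 MHz

  dut : entity work.axi_uartlite_0
    port map (
      s_axi_aclk    => aclk,
      s_axi_aresetn => aresetn,
      s_axi_awaddr  => awaddr,
      s_axi_awvalid => awvalid,
      s_axi_awready => awready,
      s_axi_wdata   => wdata,
      s_axi_wstrb   => wstrb,
      s_axi_wvalid  => wvalid,
      s_axi_wready  => wready,
      s_axi_bresp   => open,
      s_axi_bvalid  => bvalid,
      s_axi_bready  => bready,
      s_axi_araddr  => "0000",  -- read channel unused but still driven
      s_axi_arvalid => '0',
      s_axi_arready => open,
      s_axi_rdata   => open,
      s_axi_rresp   => open,
      s_axi_rvalid  => open,
      s_axi_rready  => '0',
      interrupt     => open,
      rx            => '1',     -- UART line idles high
      tx            => tx
    );

  stimulus : process
  begin
    wait for 100 ns;
    aresetn <= '1';             -- release the active-low reset
    wait until rising_edge(aclk);
    -- AXI4-Lite write of one byte to the TX FIFO (offset 0x4)
    awaddr  <= x"4";
    awvalid <= '1';
    wdata   <= x"00000041";     -- the character 'A'
    wstrb   <= "0001";
    wvalid  <= '1';
    bready  <= '1';
    wait until rising_edge(aclk) and awready = '1' and wready = '1';
    awvalid <= '0';
    wvalid  <= '0';
    wait until rising_edge(aclk) and bvalid = '1';
    bready  <= '0';
    wait;                       -- tx now shifts the byte out serially
  end process;
end architecture;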

Configuring a 7-Series GTXE2 transceiver for Serial-ATA (Gen1/2/3)

Hello, this will be an expert question :) You should be familiar with the following topics:
Xilinx Multi-Gigabit-Transceivers (MGTs), especially the 7-Series GTX/GTH transceivers (GTXE2_CHANNEL)
Serial-ATA Gen1, Gen2 and Gen3, especially Out-of-Band (OOB) communication
Question:
How should a GTXE2 be configured for Serial-ATA?
OOB signaling is not working: neither RX_ElectricalIdle nor ComInit.
Introduction:
I implemented a SATA controller for my final bachelor project, which supports multiple vendor/device platforms (Xilinx Virtex-5, Altera Stratix II, Altera Stratix IV). Now it's time to port this controller to the next device family: Xilinx 7-Series devices, namely a Kintex-7 on a KC705 board.
The SATA controller has an additional abstraction layer in the physical layer, which is based on SAPIS and PIPE 3.0. So to port the SATA controller to a new device family, I only have to write a new transceiver wrapper for a GTXE2 MGT.
Because Xilinx's CoreGenerator doesn't support the SATA protocol in the CoreGen wizard, I started a transceiver project from scratch and applied all necessary settings as far as the wizard asks for them. After that I copied the GTXE2_COMMON instantiation into my wrapper module and ordered the generics and ports into a meaningful scheme.
As a third step, I connected all unconnected ports (the wizard doesn't assign all values!) to their default values (the default from UG476, or zero if not defined).
In step 4, I checked all generics and ports again against UG476 to verify that they are compatible with the SATA settings. After that I connected my wrapper ports to the MGT and inserted cross-clock modules where necessary.
Because the KC705 board has no 150 MHz reference clock, I program the Si570 to supply this clock as "ProgUser_Clock" after each board boot-up. The MGT is in power-down mode (P2) during this reconfiguration. When the Si570 is stable, the MGT is powered up, and the used Channel PLL (CPLL) locks after ca. 6180 clock cycles. The CPLL_Locked event releases the GTX_TX|RX_Reset wires, which causes a GTX_TX|RX_ResetDone event after an additional 270|1760 cycles (all cycles at 150 MHz -> 6.6 ns).
This behavior can be seen in ChipScope, captured with a stable, uninterrupted auxiliary clock (200 MHz, slightly oversampled).
So the GTXE2 seems to be powered up and operational, and all clocks are stable.
GTXE2 ports to control the OOB signaling:
The MGT has several ports for OOB signaling. On TX these are:
TX_ElectricalIdle - forces TX into electrical idle condition
TX_ComInit - send a ComInit sequence
TX_ComWake - send a ComWake sequence
TX_ComFinish - sequence was sent -> ready for next command
On RX:
RX_ElectricalIdle - RX_n/RX_p are in electrical idle condition (low-level interface)
RX_ComInit_Detected - a complete ComInit sequence was received
RX_ComWake_Detected - a complete ComWake sequence was received
Detailed error description:
TX sends no OOB sequences when TX_ComInit is high for one cycle.
RX_ElectricalIdle is always high.
Tests:
SATA loopback cable: cut a SATA cable and solder the appropriate wires ;)
-- I'm using a special SFP to SATA adapter, which extends the KC705 with a SATA connector - http://shop.trioflex.ee/product.php?id_product=73
SMA loopback cables: I moved the MGT and connected the LVDS wires to the SMA jacks and installed 2 SMA cables as cross-over.
I programmed my old ML505 (Virtex-5) with onboard SATA connector to send ComInit sequences. The 2 boards are connected with a special SATA cross-over cable.
I connected an HDD with a partially stripped SATA cable to the KC705 (SFP2SATA adapter) and watched with a 2.5 GSps scope (yes, the signals are undersampled, but it's good enough to see bursts and idle periods...).
Experiences:
Test 3 shows transmitted OOB sequences from Virtex-5 to Kintex-7 but the ChipScope trigger event does not occur - Rx_ElectricalIdle is still high.
Test 4 shows no transmitted OOB sequences on the cable.
Should I post parts of, or the complete, transceiver instantiation?
The instance alone has ca. 650 lines :(
Please ask if you need more information, images, code, ... :)
Appendix:
Electrical idle means that the MGT drives both LVDS wires (TX_n/TX_p) at the common-mode voltage (V_cm), which is in the range 0..2000 mV. While this condition is met, the differential voltage is less than 100 mV, which is referred to as the ElectricalIdle condition.
OOB signaling means that the MGT transmits bursts of electrical idle and normal data symbols (D10.2 in 8b/10b notation) on the LVDS wires. SATA/SAS defines 3 OOB sequences called ComInit, ComWake and ComSAS, which have different burst/idle durations. Host controllers and devices use these "Morse signals" to establish a link.
So I think I found some answers to the problem and want to share them.
I started to simulate the GTXE2_CHANNEL hard macro. The simulation misbehaves in the same way as the hardware. So I tried to simulate the MGT in Verilog and used an instance template from here:
http://forums.xilinx.com/t5/7-Series-FPGAs/Using-v7gtx-as-sata-host-PHY-and-there-is-issue-bout-ALIGN/td-p/374203
This template simulates ElectricalIdle conditions and OOB sequences nearly correctly. So I started to diff both solutions:
TXPDELECIDLEMODE, which is a port to choose the behavior of TXElectricalIdle, is not working as expected. So now I'm using the synchronous mode.
PCS_RSVD_ATTR is an unconstrained bit_vector generic of 48 bits. If you have a look into the wrapper code of the secureip GTXE2_CHANNEL component, you will find a conversion bit_vector => std_logic_vector => string. Internally, all generics are treated as DOWNTO-ranged, so it's important to pass a DOWNTO-ranged constant to the GTXE2 generics!
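An excerpt of what that looks like in the wrapper (the constant name and the all-zero value are mine; the point is the explicit DOWNTO range):

-- declare the 48-bit attribute with an explicit DOWNTO range ...
constant PCS_RSVD_ATTR_C : bit_vector(47 downto 0) := x"000000000000";

-- ... and hand exactly this constant to the unconstrained generic
gtxe2_i : GTXE2_CHANNEL
  generic map (
    PCS_RSVD_ATTR => PCS_RSVD_ATTR_C
    -- remaining generics and the port map omitted (ca. 650 lines)
  );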
So now you could ask: why was I using TO-ranged constants and generics at all?
Xilinx ISE, up to the latest version 14.7, has a major bug in handling vectors of user-defined types in unconstrained generics. The default direction of vectors is TO. If you pass vectors of enums as DOWNTO to unconstrained generics of a component, ISE reverses the vector elements and "emits" a TO-ranged vector inside the component!
This is especially "funny" if the design hierarchy that uses this generic is not a balanced tree...
If you are using enums of 2 elements, the problem does not exist -> maybe such an enum is mapped to a boolean.
Which tasks are still open?
TXComFinish is still not acknowledging the sent OOB sequences.
I have to verify these two bug fixes in synthesis and measure the OOB sequences with a scope - this may take some days :)
Edit 1:
Solution for Bug 1:
I added a timeout counter whose timeout depends on the current generation (clock frequency) and the COM sequence that is to be sent. If the timeout is reached, I generate my own TXComFinished signal. Don't OR the timeout signal with the original TXComFinished signal from the GTX, because sometimes this signal goes high while COMWAKE is to be sent, but that finished strobe still belongs to the previous COMRESET sequence!
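A sketch of that workaround; Clock and the ComStart strobe are assumed to exist in the wrapper, and TIMEOUT_CYCLES is a placeholder to be derived from the burst/idle durations of the sequence at the current generation's clock frequency:

constant TIMEOUT_CYCLES : positive := 2048;  -- placeholder value
signal   TimeoutCounter : natural range 0 to TIMEOUT_CYCLES - 1 := 0;
signal   ComRunning     : std_logic := '0';
signal   ComFinished    : std_logic := '0';  -- local replacement strobe

process(Clock)
begin
  if rising_edge(Clock) then
    ComFinished <= '0';
    if ComStart = '1' then                    -- launch of a COM sequence
      TimeoutCounter <= 0;
      ComRunning     <= '1';
    elsif ComRunning = '1' then
      if TimeoutCounter = TIMEOUT_CYCLES - 1 then
        ComRunning  <= '0';
        ComFinished <= '1';                   -- our own TXComFinished
      else
        TimeoutCounter <= TimeoutCounter + 1;
      end if;
    end if;
  end if;
end process;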
Solution for another bug:
RXElectricalIdle is not glitch-free! To solve this problem I added a filter element on this wire, which suppresses spikes on that line.
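Such a filter can be as simple as requiring a few consecutive identical samples before the filtered value is allowed to change (the window of 4 samples is a placeholder; size it to the spikes you observe):

signal History  : std_logic_vector(3 downto 0) := (others => '1');
signal Filtered : std_logic := '1';  -- filtered RXElectricalIdle

process(Clock)
begin
  if rising_edge(Clock) then
    -- shift in the raw, glitchy signal
    History <= History(History'high - 1 downto 0) & RXElectricalIdle;
    -- only follow it once it has been stable for the whole window
    if History = "1111" then
      Filtered <= '1';
    elsif History = "0000" then
      Filtered <= '0';
    end if;
  end if;
end process;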
So currently my controller is running at SATA Gen1 (1.5 Gb/s) on a KC705 board with an SFP2SATA adapter, and I think this question is solved.

Simple Adder Control Signals on Zynq SoC - Zedboard

I am new to the ZedBoard and am working up to transferring a complex hardware accelerator I currently have working on a regular FPGA board. Anyway, I want to walk before I run, so I have done the ZedBoard Speedway tutorials and am now toying around with small projects. My first is a simple adder accelerator:
- Send 2 numbers to the PL (programmable logic), into registers a and b.
- The PL adds the numbers.
- An interrupt to the PS (CPU) signals that the computation has finished.
- In the ISR, the PS reads the result from register c.
For this design I am using 3 registers (a, b, c) on the AXI interconnect; I have created the IP templates using CIP.
Basically, though, what is the best way to send a control signal to the PL to enable the addition? How should I signal to the PL adder that I have loaded the two numbers into registers a and b and now want to add them?
- Should I create a 1-bit GPIO interconnect and add a fourth, 1-bit control register to the IP? Or is there a more 'stylish' way to do this using the Bus2IP_Data signals?
- Or is there another way to create custom PS-to-PL control/enable signals?
Many thanks
Sam
Current idea:
- Build a switch in the user_logic HDL based on Bus2IP_WrCE, so that when it is asserted for a write to register b, I can drive an enable signal to my adder? Or will I run into concurrency issues, with the data not yet being fully written?
To do this I created the AXI peripheral using CIP, then modified the user_logic and added two new ports, en and interrupt. Following these instructions, I made them external connections: http://www.programmableplanet.com/author.asp?section_id=2142&doc_id=264841
I then connected these two external connections to GPIO interfaces to provide the required functionality.
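A sketch of what that switch could look like inside user_logic. The Bus2IP_* names follow the CIP-generated template; reg_a/reg_b/reg_c are my names for the operand and result registers, and the assumption that Bus2IP_WrCE(1) is the write strobe of register b depends on your register ordering. The one-cycle delay on the enable avoids the write-timing concern above:

-- requires: use ieee.numeric_std.all;
process(Bus2IP_Clk)
begin
  if rising_edge(Bus2IP_Clk) then
    -- pulse 'en' one cycle after software writes register b,
    -- so reg_b already holds the freshly written value
    en <= Bus2IP_WrCE(1);
    if en = '1' then
      reg_c     <= std_logic_vector(unsigned(reg_a) + unsigned(reg_b));
      interrupt <= '1';  -- tell the PS that reg_c is valid
    else
      interrupt <= '0';
    end if;
  end if;
end process;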
In your larger designs, it will be difficult to get performance using GPIOs to control the scheduling of your accelerators. I suggest setting up FIFOs of command blocks between software and hardware.
For example, your peripheral could implement an AXI Stream slave, to receive commands from software, and an AXI Stream master, to send result indications back to software.
It can assert an interrupt to indicate that there are values in the response FIFO.
For higher performance, set up these FIFOs in DRAM and use AXI read/write masters in your peripheral.
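As a sketch, the peripheral-side interface for such a command/response scheme could be declared like this (the names and the 64-bit command width - two packed 32-bit operands - are assumptions):

library ieee;
use ieee.std_logic_1164.all;

entity adder_accel is
  port (
    aclk          : in  std_logic;
    aresetn       : in  std_logic;
    -- command stream from software: two packed 32-bit operands
    s_axis_tdata  : in  std_logic_vector(63 downto 0);
    s_axis_tvalid : in  std_logic;
    s_axis_tready : out std_logic;
    -- response stream back to software: the 32-bit sum
    m_axis_tdata  : out std_logic_vector(31 downto 0);
    m_axis_tvalid : out std_logic;
    m_axis_tready : in  std_logic;
    -- asserted while results are waiting in the response FIFO
    interrupt     : out std_logic
  );
end entity;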
