I am facing a CAN bus communication problem whose cause I cannot figure out. I have an MCP2515 as an SPI-to-CAN interface, connected in the past to an MCP2551 and now to TI's HVD256.
Currently, the components are connected as depicted in the schematic. SCK, MOSI, MISO and CAN-CS are connected to the appropriate pins of the AVR.
schematic + oscilloscope screenshots
The problem is that the CAN communication sometimes works and sometimes does not, with the latter prevailing by far. Sometimes I get no response from the MCP2515 at all, while the MISO (green) signal looks like the left oscilloscope screenshot.
I was even advised to try pull-downs or pull-ups on the MISO line (something I had never encountered before); with those, the signal looks like the right screenshot.
Any idea why that may be happening?
There is also a secondary problem: in the rare cases where the MCP2515 does communicate well over SPI and the contents of all registers make sense, there is still no signal/data on the TX pin that goes out to the subsequent MCP2551 (or HVD256). The output sits at either 0 V or +5 V, but carries no data.
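For reference, the kind of register access I am doing boils down to this minimal sketch (assuming an ATmega328P-class AVR, with the CS on PB2 purely as an example; the instruction codes and the CANSTAT address come from the MCP2515 datasheet):

#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

#define MCP_RESET   0xC0  /* MCP2515 SPI instruction: reset */
#define MCP_READ    0x03  /* MCP2515 SPI instruction: read register */
#define MCP_CANSTAT 0x0E  /* CANSTAT register address */

static uint8_t spi_xfer(uint8_t b)
{
    SPDR = b;
    while (!(SPSR & (1 << SPIF)))
        ;                            /* wait until the byte is shifted out */
    return SPDR;
}

int main(void)
{
    DDRB |= (1 << DDB2) | (1 << DDB3) | (1 << DDB5);   /* CS, MOSI, SCK as outputs */
    PORTB |= (1 << PORTB2);                            /* CS idle high */
    SPCR = (1 << SPE) | (1 << MSTR) | (1 << SPR0);     /* SPI master, F_CPU/16 */

    PORTB &= ~(1 << PORTB2);         /* reset the MCP2515 */
    spi_xfer(MCP_RESET);
    PORTB |= (1 << PORTB2);
    _delay_ms(10);

    PORTB &= ~(1 << PORTB2);         /* read CANSTAT */
    spi_xfer(MCP_READ);
    spi_xfer(MCP_CANSTAT);
    uint8_t canstat = spi_xfer(0x00);
    PORTB |= (1 << PORTB2);

    /* After a reset the MCP2515 sits in configuration mode, so the OPMOD bits
       should read 0b100, i.e. (canstat & 0xE0) == 0x80; anything else points
       at a problem on the SPI link itself. */
    (void)canstat;
    for (;;)
        ;
}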
Many thanks for any clues!!
-blume-
Do you have termination resistors between CANL/CANH on the line? There should be one at each end of the bus, or at the very least at one end.
See here: SN65HVD256 with MCU example schematics
As far as I know, the termination resistors are required by the specification; 120 Ω is the usual nominal value, I guess.
Somehow, when I run my code normally, one GPIO port doesn't seem to get initialized, yet when I run it under the debugger, it does.
I am initializing two sensors:
struct MAX31856_t max31856_temperature_sensor_heater_1 = MAX31856_TPL(
    SPI_DEV_TPL(IO_PIN_TPL(TEMP_SENSOR_0_CS_GPIO_Port, TEMP_SENSOR_0_CS_Pin), &spi1));
struct MAX31856_t max31856_temperature_sensor_heater_2 = MAX31856_TPL(
    SPI_DEV_TPL(IO_PIN_TPL(TEMP_SENSOR_1_CS_GPIO_Port, TEMP_SENSOR_1_CS_Pin), &spi1));
Sensor heater 1 does not receive any data, while sensor heater 2 does. Now, if I swap the names of the heaters:
struct MAX31856_t max31856_temperature_sensor_heater_2 = MAX31856_TPL(
    SPI_DEV_TPL(IO_PIN_TPL(TEMP_SENSOR_0_CS_GPIO_Port, TEMP_SENSOR_0_CS_Pin), &spi1));
struct MAX31856_t max31856_temperature_sensor_heater_1 = MAX31856_TPL(
    SPI_DEV_TPL(IO_PIN_TPL(TEMP_SENSOR_1_CS_GPIO_Port, TEMP_SENSOR_1_CS_Pin), &spi1));
and run the code in the debugger, both sensor heaters 1 and 2 receive data.
How can this happen? I was thinking about a timing problem, but since it works in the debugger, I don't really know what to do.
This assumes you are debugging and running the same binary. Debugging is mostly the same as running, except when you halt the processor (e.g. at breakpoints).
In that case...
some peripherals may continue to run or be halted together with the CPU; in some cases the behaviour is configurable (timers, watchdog, ...)
some interrupts can be lost.
some hardware buffers can overflow and data can be lost (if you don't use any flow control in your IO)
How do you run the code in debug mode? Do you have breakpoints somewhere?
You (OP) are right that this is most likely a timing problem, probably related to the physical SPI transmission. Your line of code that sends/receives something over SPI has already returned on the MCU, but physically the bits are still being shifted out on the line while the MCU is already calling the next SPI function, so one of the transfers fails. Try adding a short delay after the SPI transmission code. If things work after that, it is indeed the timing of the SPI peripheral, and you need to check that no SPI transfer is still in progress before you call a function to send/receive something else.
You can do while(transmission) (pseudocode; replace with the actual check for an ongoing SPI transfer) to wait for the previous transmission to end before starting the next one.
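For example, if the SPI driver behind &spi1 is the STM32 HAL (just an assumption based on the handle name in the question), such a check could look like this:

#include "stm32f1xx_hal.h"   /* adjust to your device family's HAL header */

/* Block until the previous SPI transfer has fully completed. */
static void spi_wait_until_idle(SPI_HandleTypeDef *hspi)
{
    while (HAL_SPI_GetState(hspi) != HAL_SPI_STATE_READY)
        ;   /* the previous interrupt/DMA transfer is still in progress */
}

/* Usage before every new transfer, e.g.:
   spi_wait_until_idle(&spi1);
   HAL_SPI_Transmit(&spi1, tx_buf, sizeof tx_buf, HAL_MAX_DELAY); */

The same idea applies with any other driver: the point is simply to serialize the transfers so a new chip-select/transfer sequence never starts while the previous one is still on the wire.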
I want to create a basic project in Vivado that takes a value I enter in a client, sends it to a server I wrote (in C), has the server write that value to a peripheral in Vivado, and then routes the data in the peripheral to output pins assigned to LEDs, making the LEDs light up.
Basically I want to go from client-->server-->peripheral-->LED lights up
For example, in the client (a GUI) I want to enter a value such as 0011, which is received by the server. The server then writes that value to the peripheral, which in this case should leave LED0 & LED1 off and light LED2 & LED3.
I know how to make an AXI4 peripheral in Vivado, and the client-server (TCP/IP) part is done. My question is what code/design block I need in order to take the data written to the peripheral and drive the LEDs with it.
Should I make the peripheral a master or a slave? Overall I'm confused about how to proceed from here. I am using a Red Pitaya (Xilinx Zynq 7010 SoC) connected to my computer by an Ethernet cable.
Also, I thought of running the program on the Red Pitaya by copying the bitstream onto it (using WinSCP) and loading it by running the command
cat FILE_NAME.bit > /dev/xdevcfg
in PuTTY (connected to the Pitaya by its IP address), then running the server on the Pitaya, and then sending the value from the client for the server to receive. Is that the correct way of approaching it?
If my logic is off in any way, please let me know.
I am somewhat thrown by your statements.
First you say "I know how to make an AXI4 peripheral in Vivado"
Next I read: "Should I make the peripheral a Master or Slave?"
Maybe I am wrong but to me it says you don't really know what you are doing.
Simplest is to:
Instantiate a Zynq processing system.
Add the IP with the name "AXI GPIO". (Which, by the way, is an AXI slave.)
Run the auto connection.
Assign the right I/O pins to the GPIO port. (check your development system manual)
Build the system.
By the way, you can find the address of the peripheral in the Address Editor tab; it is normally 0x0080000000.
You wrote that you made a TCP/IP server. "All" it has to do is write the received value to a register in the GPIO block. (I assume Xilinx has a document that describes how the GPIO block works, along with example GPIO drivers.)
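A minimal sketch of that write, assuming the server runs under Linux on the Zynq and accesses the block through /dev/mem (the base address is whatever your Address Editor shows; the register offsets follow the AXI GPIO product guide, data at 0x00 and tri-state at 0x04):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_BASE 0x41200000u   /* replace with the address from your Address Editor */
#define GPIO_DATA 0x00          /* channel 1 data register */
#define GPIO_TRI  0x04          /* channel 1 tri-state register, 0 = output */

int write_leds(uint32_t value)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return -1; }

    volatile uint32_t *gpio = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) { perror("mmap"); close(fd); return -1; }

    gpio[GPIO_TRI / 4]  = 0x0;    /* make all channel-1 pins outputs */
    gpio[GPIO_DATA / 4] = value;  /* drive the LEDs with the received value */

    munmap((void *)gpio, 0x1000);
    close(fd);
    return 0;
}

The server then only has to call something like write_leds() with the value it received from the client.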
Hello, this will be an expert question :) You should be familiar with the following topics:
Xilinx Multi-Gigabit-Transceivers (MGTs), especially the 7-Series GTX/GTH transceivers (GTXE2_CHANNEL)
Serial-ATA Gen1, Gen2 and Gen3, especially Out-of-Band (OOB) communication
Question:
How should a GTXE2 be configured for Serial-ATA?
OOB signaling is not working: neither RX_ElectricalIdle nor ComInit behaves as expected.
Introduction:
I implemented a SATA controller for my bachelor's final project; it supports multiple vendor/device platforms (Xilinx Virtex-5, Altera Stratix II, Altera Stratix IV). Now it's time to port this controller to the next device family: Xilinx 7-Series, namely a Kintex-7 on a KC705 board.
The SATA controller has an additional abstraction layer in the physical layer, based on SAPIS and PIPE 3.0, so to port the controller to a new device family I only have to write a new transceiver wrapper for a GTXE2 MGT.
Because Xilinx's CoreGenerator doesn't offer the SATA protocol in the CoreGen wizard, I started a transceiver project from scratch and applied all the necessary settings as far as the wizard asks for them. After that, I copied the GTXE2_COMMON instantiation into my wrapper module and ordered the generics and ports into a meaningful scheme.
As a third step, I tied all unconnected ports (the wizard does not assign all values!) to their default values (the defaults from UG476, or zero if not defined).
In step 4, I checked all generics and ports against UG476 again to make sure they are compatible with the SATA settings. After that, I connected my wrapper ports to the MGT and inserted clock-domain-crossing modules where necessary.
Because the KC705 board has no 150 MHz reference clock, I program the Si570 to supply this clock as "ProgUser_Clock" after each board boot-up. The MGT is in power-down mode (P2) during this reconfiguration. When the Si570 is stable, the MGT is powered up and the channel PLL (CPLL) in use locks after ca. 6180 clock cycles. The CPLL_Locked event releases the GTX_TX|RX_Reset wires, which causes a GTX_TX|RX_ResetDone event after a further 270|1760 cycles (all cycles at 150 MHz -> 6.6 ns).
This behavior can be seen in ChipScope, captured with a stable, uninterrupted auxiliary clock (200 MHz, slightly oversampled).
So the GTXE2 seems to be powered up and operational, and all clocks are stable.
GTXE2 ports to control the OOB signaling:
The MGT has several ports for OOB signaling. On TX these are:
TX_ElectricalIdle - forces TX into electrical idle condition
TX_ComInit - send a ComInit sequence
TX_ComWake - send a ComWake sequence
TX_ComFinish - sequence was sent -> ready for the next command
On RX:
RX_ElectricalIdle - RX_n/RX_p are in electrical idle condition (low-level interface)
RX_ComInit_Detected - a complete ComInit sequence was received
RX_ComWake_Detected - a complete ComWake sequence was received
Detailed error description:
TX sends no OOB sequences when TX_ComInit is asserted high for one cycle.
RX_ElectricalIdle is always high
Tests:
SATA loopback cable: cut a SATA cable and solder the appropriate wires ;)
-- I'm using a special SFP to SATA adapter, which extends the KC705 with a SATA connector - http://shop.trioflex.ee/product.php?id_product=73
SMA loopback cables: I moved the MGT and connected the LVDS wires to the SMA jacks and installed 2 SMA cables as cross-over.
I programmed my old ML505 (Virtex-5), which has an onboard SATA connector, to send ComInit sequences. The two boards are connected with a special SATA cross-over cable.
I connected an HDD via a partially stripped SATA cable to the KC705 (SFP2SATA adapter) and hooked up a 2.5 GSps scope (yes, the signals are undersampled, but it is good enough to see bursts and idle periods...).
Experiences:
Test 3 shows transmitted OOB sequences from the Virtex-5 to the Kintex-7, but the ChipScope trigger event does not occur - RX_ElectricalIdle is still high.
Test 4 shows no transmitted OOB sequences on the cable.
Should I post parts of, or the complete, transceiver instantiation?
The instance alone is ca. 650 lines :(
Please ask if you need more information, images, code, ... :)
Appendix:
Electrical idle means that the MGT drives both LVDS wires (TX_n/TX_p) at the common-mode voltage (V_cm), which lies in the range 0..2000 mV. While this condition holds, the voltage difference between the two wires is less than 100 mV, which is what is referred to as the ElectricalIdle condition.
OOB signaling means that the MGT transmits bursts of electrical idle and normal data symbols (D10.2 in 8b/10b notation) on the LVDS wires. SATA/SAS defines 3 OOB sequences, called ComInit, ComWake and ComSAS, which differ in their burst/idle durations. Host controllers and devices use these "Morse signals" to establish a link.
So I think I found some answers to the problem and want to share them.
I started to simulate the GTXE2_CHANNEL hard macro. The simulation misbehaves in the same way as the hardware. So I tried to simulate the MGT in Verilog and used an instance template from here:
http://forums.xilinx.com/t5/7-Series-FPGAs/Using-v7gtx-as-sata-host-PHY-and-there-is-issue-bout-ALIGN/td-p/374203
This template simulates ElectricalIDLE conditions and OOB sequences nearly correctly, so I started to diff the two solutions:
TXPDELECIDLEMODE, a port that selects the behavior of TXElectricalIDLE, is not working as expected, so I am now using the synchronous mode.
PCS_RSVD_ATTR is an unconstrained bit_vector generic of 48 bits. If you look into the wrapper code of the secureip GTXE2_CHANNEL component, you will find a conversion from bit_vector => std_logic_vector => string. Internally, all generics are treated as DOWNTO-ranged, so it is important to pass a DOWNTO-ranged constant to the GTXE2 generics!
So now you could ask why I was using TO-ranged constants and generics in the first place.
Xilinx ISE, up to the latest version 14.7, has a major bug in handling vectors of user-defined types in unconstrained generics. The default direction of vectors is TO. If you pass a DOWNTO-ranged vector of enums to an unconstrained generic of a component, ISE reverses the vector elements and "emits" a TO-ranged vector inside the component!
This is especially "funny" if the design hierarchy that uses this generic is not a balanced tree...
If you use enums with only 2 elements, the problem does not occur -> maybe such an enum is mapped to a boolean.
Which tasks are still open?
TXComFinish still does not acknowledge the sent OOB sequences.
I have to verify these two bug fixes in synthesis and measure the OOB sequences with a scope - this may take some days :)
Edit 1:
Solution for Bug 1:
I added a timeout counter whose timeout depends on the current generation (clock frequency) and on which COM sequence is to be sent. If the timeout is reached, I generate my own TXComFinished signal. Don't OR the timeout signal with the original TXComFinished signal from the GTX, because sometimes that signal goes high while COMWAKE is about to be sent, yet the finished strobe still belongs to the previous COMRESET sequence!
Solution for another bug:
RXElectricalIDLE is not glitch-free! To solve this problem, I added a filter element on this wire which suppresses spikes on that line.
So my controller currently runs SATA Gen1 at 1.5 Gb/s on a KC705 board with an SFP2SATA adapter, and I think this question is solved.
Is it possible to query the serial port TX (send) pin status, i.e. whether it is active or not?
For example, when issuing a break command (SetCommBreak), the TX pin is set active (low). I'd like to know when it is active and when it is not. Thanks.
No. (at least not likely)
If you are using the "16550" family of UARTs, then I am confident that you cannot query the serial port TX pin status. Of course, if you are using some newer version or another UART family, maybe.
You can assume that the TX pin is in the SPACE state ('0', +volts) while SetCommBreak() is in effect, but I suspect that is not enough for you.
If you are looking to debug your code to know whether a break occurred, you can short pins 2 & 3 on a 9-pin D-sub, thus looping the transmit back to the receive. A paper clip will do. Your receive code would then detect the incoming BREAK. Shorting the wrong pin does not cause lasting damage on a conforming serial port, but be careful. Try this first with simple data before testing the BREAK condition.
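A minimal sketch of that loopback check, assuming the Win32 comm API (the question mentions SetCommBreak) and COM1 as an example port name:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the port that has pins 2 & 3 shorted together. */
    HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("cannot open COM1\n"); return 1; }

    SetCommBreak(h);      /* drive TX into the SPACE (break) state */
    Sleep(50);            /* hold the break long enough to be detected */
    ClearCommBreak(h);

    DWORD errors = 0;
    COMSTAT stat;
    ClearCommError(h, &errors, &stat);   /* CE_BREAK is set if RX saw the break */
    printf("break %s on RX\n", (errors & CE_BREAK) ? "detected" : "not detected");

    CloseHandle(h);
    return 0;
}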
If you have a "16550"-like UART.
You can put the UART into loop-back mode and see if you receiving you own outgoing BREAK signal. Its somewhat complicated in current PCs. Other UART type may support loop-back.
I am trying to use the expansion header to control a couple of motors and an auxiliary task mechanism. For this I am using the appropriate pins as GPIO and merely attempting to drive them high or low as needed by the robot. (For instance, to move the robot forward I'd drive both sets of pins high, whereas to turn I'd drive one pin high and the other low.)
However, the problem is that the pins only stay high! I've followed the sysfs conventions via the terminal and, although I am able to set "value", "active_low", etc. to 0 or 1, I can't actually get the pins to output 0 V. After checking the beagle.h file I used for U-Boot, the multiplexer mode looks correctly configured, and this is also reflected in the output of sys/class/gpio/gpio%/% and sys/kernel/debug/gpio. Furthermore, I don't get any errors or any indication from anywhere that something is wrong... it just doesn't work!
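In C terms, the sysfs sequence I am going through amounts to this small sketch (GPIO 139 is just an example number):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *s)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0) { perror(path); return -1; }
    ssize_t n = write(fd, s, strlen(s));
    close(fd);
    return n < 0 ? -1 : 0;
}

int main(void)
{
    write_str("/sys/class/gpio/export", "139");             /* export the pin */
    write_str("/sys/class/gpio/gpio139/direction", "out");  /* configure as output */
    write_str("/sys/class/gpio/gpio139/value", "0");        /* should give 0 V on the pin */
    return 0;
}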
What should I do? For the first time in my life I have seemingly exhausted the internet...
details:
Beagleboard xm rev c1
ubuntu 12.04
kernel 3.6.8-x4
I'm pretty new to the BeagleBoard and I have recently been trying to configure the GPIO pins on my classic BeagleBoard C4, which I believe should be fairly similar.
Half of my GPIO pins seemed to work fine, and the other half stayed high or low no matter what I did, even though they were configured the same way as the working pins in /sys/class/gpio/.
Have you tried using other GPIO pins?
I ended up following http://labs.isee.biz/index.php/Mux_instructions to configure the mux to mode 4, and now I can control the pins that were not working.
I basically used the command:
sudo echo 0x004 > /sys/kernel/debug/omap_mux/(mux 0 name)
where (mux 0 name) is the name of the subsystem for the mux 0 setting of the GPIO pin you wish to configure,
e.g. for GPIO 183 on the BeagleBoard C4:
sudo echo 0x004 > /sys/kernel/debug/omap_mux/i2c2_sda
Though I had to change permissions to modify these files
As I said, I am pretty new to the BeagleBoard and Ubuntu, but this worked for me, so I thought I would share it; I hope it is of some help.
Regards,
Paul
It seems that the BeagleBoard expansion pins are numbered in an alternating fashion, as clearly and professionally depicted here.
Thanks to everyone for your help. I now know way more than I should about GPIO on OMAP systems (and so do you). Good luck on finals/life!
tl;dr I'm an idiot!