Problems getting Altera's Triple Speed Ethernet IP core to work - VHDL

I am using a Cyclone V on a SoCKit board (link here) provided by Terasic, with an HSMC-NET daughter card (link here) connected to it, in order to build a system that communicates over Ethernet, with both transmitted and received traffic passing through the FPGA. The problem is that I am having a really, really hard time getting this system to work using Altera's Triple Speed Ethernet core.
I am using Qsys to construct the system that contains the Triple Speed Ethernet core, instantiating it inside a VHDL wrapper that also contains an instantiation of a packet generator module. The packet generator is connected directly to the transmit Avalon-ST sink port of the TSE core and is controlled through an Avalon-MM slave interface connected to a JTAG to Avalon Master Bridge core, whose master port is exported to the VHDL wrapper as well.
Then, using System Console, I configure the Triple Speed Ethernet core as described in the core's user guide (link here) in section 5-26 (Register Initialization), and I instruct the packet generator module (also through System Console) to start generating Ethernet packets into the TSE core's transmit Avalon-ST sink interface.
Although everything is configured exactly as described in the core's user guide (linked above), I cannot get the core to output anything on the MII/GMII output interfaces, nor do any of the statistics counters increase or even change. Clearly I am doing something wrong, or missing something, but I just can't figure out what it is.
Can anyone please help me with this?
Thanks ahead,
Itamar

Starting with the basic checks:
Have you simulated it? It's not clear to me whether you are just simulating or also synthesizing.
If you haven't simulated, you really should. If it's not working in simulation, why would it ever work in real life?
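If you do simulate, a minimal testbench that just drives a clock and a reset into your wrapper is enough to see whether anything ever toggles on the GMII side. The entity and port names below are assumptions for illustration only, not the actual Qsys-generated names:

library ieee;
use ieee.std_logic_1164.all;

entity tse_wrapper_tb is
end entity;

architecture sim of tse_wrapper_tb is
  -- Placeholder signals; replace with your wrapper's actual ports.
  signal clk        : std_logic := '0';
  signal reset_n    : std_logic := '0';
  signal gmii_txd   : std_logic_vector(7 downto 0);
  signal gmii_tx_en : std_logic;
begin
  clk     <= not clk after 4 ns;      -- 125 MHz reference clock
  reset_n <= '0', '1' after 100 ns;   -- release reset after a while

  dut : entity work.tse_wrapper       -- hypothetical wrapper entity name
    port map (
      clk        => clk,
      reset_n    => reset_n,
      gmii_txd   => gmii_txd,
      gmii_tx_en => gmii_tx_en
    );

  -- Watch gmii_tx_en in the wave window: if it never asserts in simulation,
  -- it will not assert on the board either.
end architecture;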
Make sure you are using the QIP file to synthesize the design. It will automatically include your auto-generated SDC constraints. You will still need to add your own pin constraints; more on that later.
The TSE is fairly old and reliable, so the obvious first things to check are Clock, Reset, Power and Pins.
a.) Power is usually less of a problem on devkits if you have already run the demo that came with the kit.
b.) Pins can cause a whole slew of issues if they are not mapped right for this core. I'll assume you are leveraging something from Terasic; it should define a pin for reset, the input clock, and the signal standards. A lot of the time this goes in the .qsf file, and you also reference the QIP file (mentioned above) in there too.
c.) Clock and reset are the more likely culprits in my mind. No activity on the interface is kind of a clue. One way to check is to route your clocks to spare pins, o-scope them, and make sure they are what you think they are. Similarly, you may want to bring your reset out to a pin and check it. MAKE SURE YOU KNOW THE POLARITY and that you haven't been using ~reset in some places and non-inverted reset in others (see the sketch below).
Reconfig block. Some Altera chips and certain versions of Quartus require you to use a reconfig block to configure the XCVR. This doesn't seem like your issue to me because you say the GMII is flat lined.
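On the clock/reset point, it helps to keep a single reset of one polarity in the wrapper and to copy the clock and reset out to spare pins for the scope. A rough sketch of what I mean (the Qsys component and port names here are assumptions; use the names from your generated system):

library ieee;
use ieee.std_logic_1164.all;

entity tse_top is
  port (
    clk_50      : in  std_logic;  -- board oscillator
    key_reset_n : in  std_logic;  -- push button, typically active low on Terasic boards
    debug_clk   : out std_logic;  -- spare pin: o-scope the clock here
    debug_rst_n : out std_logic   -- spare pin: check the reset polarity here
  );
end entity;

architecture rtl of tse_top is
  signal reset_n : std_logic;
begin
  -- One reset, one polarity, used everywhere; never mix reset_n and not reset_n.
  reset_n     <= key_reset_n;
  debug_clk   <= clk_50;
  debug_rst_n <= reset_n;

  -- Hypothetical Qsys system instantiation; the real port names come from the
  -- generated <system>.vhd and will differ from these.
  u_qsys : entity work.soc_system
    port map (
      clk_clk       => clk_50,
      reset_reset_n => reset_n
    );
end architecture;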

Related

Asynchronous FIFO code advice - VHDL

All the code I've found generates errors for me. My FPGA manufacturer's FIFO causes problems in simulation when I try to read and write at the same time, and I also can't modify it or adapt it to another FPGA.
Can someone recommend an already written asynchronous FIFO (2-clock FIFO) in VHDL, preferably one that has already been used without problems?
I need to be able to read and write at the same time with two different clocks.
Our PoC Library has a cross-clock FIFO called PoC.fifo.ic_got. (ic = independent clocks)
This FIFO can be configured in size and width. You can also choose to synthesize it with LUT-RAM or BlockRAM/altsyncram. Depending on the chosen vendor settings, it will use device-specific code to improve carry-chain mapping, or otherwise a generic description; a rough instantiation sketch is given at the end of this answer.
This FIFO requires other modules and packages from PoC Library like PoC.utils, PoC.mem.ocram.* or PoC.arith.counter_gray.
This FIFO was tested on several devices:
Altera: Cyclone III, Stratix-II/IV
Xilinx: Spartan-3, Virtex-5/6/7, Kintex-7
Feel free to test it on other devices :).
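A rough instantiation sketch, to give an idea of how it hooks up. The library, generic, and port names below are recalled from memory and may not match the current PoC sources exactly, so check the declaration of PoC.fifo.ic_got before using it:

library ieee;
use ieee.std_logic_1164.all;

library PoC;

entity fifo_wrapper is
  port (
    wr_clk, wr_rst : in  std_logic;
    wr_put         : in  std_logic;
    wr_data        : in  std_logic_vector(31 downto 0);
    wr_full        : out std_logic;
    rd_clk, rd_rst : in  std_logic;
    rd_got         : in  std_logic;
    rd_data        : out std_logic_vector(31 downto 0);
    rd_valid       : out std_logic
  );
end entity;

architecture rtl of fifo_wrapper is
begin
  fifo : entity PoC.fifo_ic_got
    generic map (
      D_BITS    => 32,   -- data word width
      MIN_DEPTH => 256   -- minimum number of entries
    )
    port map (
      clk_wr => wr_clk,  -- write (producer) clock domain
      rst_wr => wr_rst,
      put    => wr_put,
      din    => wr_data,
      full   => wr_full,
      clk_rd => rd_clk,  -- read (consumer) clock domain
      rst_rd => rd_rst,
      got    => rd_got,  -- "got" acknowledges that dout has been consumed
      dout   => rd_data,
      valid  => rd_valid
    );
end architecture;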

How should different Linux device tree drivers share common registers?

I'm working on a port of the Linux kernel to an unsupported ARM SoC platform. Unfortunately, on this SoC, different peripherals sometimes share registers or commingle registers within the same region of memory. This is giving me grief with the Device Tree specification, which doesn't seem to support the notion of different devices sharing the same set of registers, or of registers commingled in the same address space. The various documents I've read on the device tree don't suggest a proper way to handle this.
My simple approach of specifying the same register region in multiple drivers throws "can't request region for resource" for the second device that attempts to map the same register region as another driver. From my understanding, this results from the kernel enforcing device tree rules regarding register regions.
What is the preferred general solution for solving this dilemma? Should there be a higher level driver that marshals access to the shared register region? Are there examples in the existing Linux kernel that address this specific issue (I couldn't find any, but I may not be sure what to look for)?
I am facing exactly the same problem. My solution is to create a separate module to guard common resources and then write 'client modules' that use symbols exported from the common module.
Note that this makes sense from the safety point of view as well. How would you otherwise implement proper memory locking and ensure operation coherency across several independent modules?
You can still use devm_ioremap() directly, but extra caution has to be exercised, along with some synchronization.
Below is an example from upstream:
https://github.com/torvalds/linux/blob/master/drivers/usb/phy/phy-tegra-usb.c#L1368

Is it possible to create a virtual IOHIDDevice from userspace?

I have an HID device that is somewhat unfortunately designed (the Griffin Powermate) in that, as you turn it, the input value for the "Rotation Axis" HID element doesn't change unless the speed of rotation changes dramatically or the direction changes. It sends many HID reports (angular resolution appears to be about 4 degrees, in that I get ~90 reports per revolution - not great, but whatever...), but they all report the same value (generally -1 or 1 for CCW and CW respectively; if you turn faster, it will report -2 and 2, and so on, but you have to turn much faster). As a result of this unfortunate behavior, I'm finding this thing largely useless.
It occurred to me that I might be able to write a background userspace app that seized the physical device and presented another, virtual device with some minor additions so as to cause an input value change for every report (like a wrap-around accumulator, which the HID spec has support for -- God only knows why Griffin didn't do this themselves.)
But I'm not seeing how one would go about creating the kernel side object for the virtual device from userspace, and I'm starting to think it might not be possible. I saw this question, and its indications are not good, but it's low on details.
Alternately, if there's a way for me to spoof reports on the existing device, I suppose that would do it as well, since I could set it back to zero immediately after it reports -1 or 1.
Any ideas?
First of all, you can simulate input events via Quartz Event Services but this might not suffice for your purposes, as that's mainly designed for simulating keyboard and mouse events.
Second, the HID driver family of the IOKit framework contains a user client on the (global) IOHIDResource service, called IOHIDResourceDeviceUserClient. It appears that this can spawn IOHIDUserDevice instances on command from user space. In particular, the userspace IOKitLib contains an IOHIDUserDeviceCreate function which appears to be intended to do exactly this. The HID family source code even comes with a little demo of this which creates a virtual keyboard of sorts. Unfortunately, although I can get it to build, it fails on the IOHIDUserDeviceCreate call. (I can see in IORegistryExplorer that the IOHIDResourceDeviceUserClient instance is never created.) I haven't investigated this further due to lack of time, but it seems worth pursuing if you need its functionality.

Failed to load .sof file to Cyclone II FPGA board

I am new to VHDL and FPGAs. I have written sample code that XORs a and b and stores the result in c. The code uses a VHDL behavioral architecture. I am using Quartus 11.1+SP2-2.11.
I assigned the pins: a to SW0, b to SW1, and c to LEDG0. Everything compiles and there are no errors. I go to Tools->Programmer. My FPGA is in RUN mode. The mode in the Programmer is JTAG, and hence the hardware setup is USB-Blaster [PORT 0]. When I load the .sof file and click "Start", the progress says "failed". I do not know why.
I have tried searching everywhere, but all the tutorials and links give the same explanation. I guess there are hardly any people who have encountered this problem. I want to know if I am missing something. I want to get my fundamentals right!
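For reference, the design described above amounts to only a few lines of VHDL. A minimal sketch (the entity and port names here are illustrative; the pin assignment maps a, b, and c to SW0, SW1, and LEDG0):

library ieee;
use ieee.std_logic_1164.all;

entity xor_gate is
  port (
    a : in  std_logic;   -- assigned to SW0
    b : in  std_logic;   -- assigned to SW1
    c : out std_logic    -- assigned to LEDG0
  );
end entity;

architecture behavioral of xor_gate is
begin
  c <= a xor b;          -- XOR of a and b stored in c
end architecture;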
Are you by any chance using Linux? If you are, make sure you've done this: http://www.alterawiki.com/wiki/Quartus_for_Linux#Setup_JTAG
There can be multiple reasons why loading a .sof to the FPGA fails. I figured out the following for my device. If any of you are beginners, please check the same things:
1) Make sure you have the data sheet of your device with you. I followed a tutorial and entered the device number they mentioned, not the one I had.
2) Check the pin assignments. This is the most important. I found the pins used for the various switches and LEDs in a consolidated document online.
3) If it still does not work, it is best to contact experts.
Is the FPGA an Altera DE2? If yes, you can try this file, which works with the DE2 board, so that you can tell whether it is your .sof file that needs to be changed. If the USB-Blaster appears in the Quartus Programmer, then most likely your driver is installed correctly, and you should verify whether it is your .sof file that needs to change or something else.

OMNeT++ model of an asymmetric channel

I have to model a BitTorrent network, so there are a number of nodes connected to each other. Each node has a download speed, say 600 KBps, and an upload speed, say 130 KBps.
The problem is: how can I model this in OMNeT++? In the NED file I created the network this way, where A and B are nodes:
A.mygate$o++ --> {something} -->B.mygate$i++
B.mygate$o++ --> {something} -->A.mygate$i++
where mygate is an inout gate, and $i and $o are the input and output half-channels. The {something} must be a speed, but:
if I set a speed on the first line of code, it is the upload speed of A BUT also the download speed of B. This is normal, because if I download from a slow server I get a slow download. So how can I model the download speed of a peer in OMNeT++? I cannot understand this. Should I say "allow k simultaneous downloads until I reach the download speed", or is that a bad approach? Can someone suggest the right approach, and whether a built-in module for this already exists in OMNeT++? I have read the manual, but it is a bit confusing. Thanks for every reply.
I suggest taking a look at OverSim, which is a peer-to-peer network simulator built on top of the INET framework, the framework for simulating internet-related protocols in OMNeT++.
Generally, each host should have a queue at the link layer, and the connected interfaces should not put further packets onto the line until the transmission line is idle (which is determined by the datarate of the line and the length of the packet). Once the line becomes idle, the next packet can be sent out. This is how the datarate is limited by the actual channel.
If you don't want to implement this from scratch (and there is no reason to), take a look at the INET framework. You should drop in your hosts and connect their PPP interfaces with the asymmetric connection you proposed in your question. The PPP interface in StandardHost will do the queueing for you, so you only have to add some applications that generate your traffic, and you are set.
Still, I would take a look at OverSim, as it gives an even higher-level abstraction on top of INET (I have no experience with it, though).
