ProASIC3 clock distribution issues

I'm working with a ProASIC3L, and I keep getting these errors:
Planning global net placement...
Error: PLC004: No legal global assignment could be found. Some global nets have shared
instances, requiring them to be assigned to overlapping global regions.
Global Nets Whose Drivers Are Limited to Quadrants or Which Have No Valid Locations:
|--------------------------------------------|
|Global Net      |Valid Driver Locations     |
|--------------------------------------------|
|GLA             |(None)                     |
|GLB             |(None)                     |
|RST_N_c         |(None)                     |
|--------------------------------------------|
Info: Consider relaxing the constraints for these nets by removing region constraints,
unassigning fixed cells and I/Os, relaxing I/O bank assignments, or using input
buffers without hardwired pad connections.
Error: PLC003: No legal global assignment could be found because of complex region and/or IO
technology constraints.
Error: PLC005: Automatic global net placement failed.
INFO: See the GlobalNet Report from the Reports option of the Tools menu for information about
the global assignment.
The Layout command failed ( 00:00:01 )
The GLA and GLB signals come from a PLL block and are then passed down a few module layers to all the different components in the design. I'm kind of a rookie at clock management and, to be honest, don't really know how to approach debugging this. Does anyone have any advice for me?

I had to apply additional constraints for clock placement on ProASIC3, though that was a very long time ago. My advice is to dig into the ProASIC3L-specific documentation and/or ask a Microchip FAE, who has access to a factory database of similar problems and their resolutions. I think that is the shortest path to a fix.
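For what it's worth, the constraints I remember adding went into the PDC file along these lines. This is only a sketch from memory: it assumes your Designer/Libero flow accepts PDC, that the failing nets can legally be demoted to quadrant clocks, and the quadrant choices (UL/UR/LL) are placeholders; check the exact syntax against the PDC command reference for your Libero version.

# sketch: force each failing global net into its own clock region
# (quadrant choices are placeholders - verify against the PDC reference)
assign_local_clock -net GLA -type quadrant UL
assign_local_clock -net GLB -type quadrant UR
assign_local_clock -net RST_N_c -type quadrant LL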


How to set a value, attribute or property of a block instance independent from a block

I am trying to define different systems (software and hardware) made of common building blocks in SysML. In the following I try to describe my approach with a sample:
For this purpose I have defined a common set of blocks in one BDD and described the common relationships of all blocks using ports and connectors in one IBD - nothing special so far:
blocks A, B, C
each block has two ports
each block's ports are connected to the other blocks' ports
Now, when using the blocks defined above, I want to add static characteristics of the blocks and ports for each system I define based on these building blocks. Each system is defined in one additional BDD and IBD using the same blocks from above:
System(s) AX and AY have:
additional connections between two blocks A and B, described in IBD (OK)
additional characteristics of the ports (NOK)
additional characteristics of the blocks (NOK)
Problem:
The last two "NOK" points are a problem as follows:
Whenever I add additional properties/attributes/tags to a block in one system/IBD, they also apply to the other systems/blocks.
Whenever I add additional properties/attributes/tags to a port in one system, they also apply to the other systems/blocks.
My question can be generalized:
How would I define characteristics of instances of blocks in a way that they do not affect the original blocks they are instantiated from? The issue has come up in multiple attempts to design systems; maybe SysML is not intended to be used in such a way at all?
I also tried to design my system in UML using component diagrams and components / component instances, and the same problem appears there: instance-specific attributes/values/ports do not seem to be supported.
Side Note:
I am using MagicDraw as a tool for SysML and UML.
I understand you want to define context specific connectors and properties.
First I want to clarify that all of these are already context specific. The context of properties is their owning block or InterfaceBlock (the type of the port). The context of connectors is their owning block (InterfaceBlocks cannot have parts, and therefore no connectors either).
So, a connector needs a context. Let's call it system A0. It has parts of type A, B and C and owns the connectors between the ports of its parts.
Now you can define systems AX and AY as special kinds of system A0. As such, they have the same parts, but you can add more parts and connectors.
If you define additional properties of the parts of your special systems, you are in fact creating new special types of A, B and C and the port types. SysML 1 forces you to define these new types. And I think rightly so. If block A' shall have more features than block A, then it is a new type. It is irrelevant that A' is only used in the context of system AX. If you later decide to use it in system AZ, it would still have the same features.
Not all of these changes mean that it is a new type. For example, if you only want to change the voltage of an adjustable power supply, this is not a new type of power supply. In the context of system AX it might be set to 12 V and in system AY it might be set to 24 V. In order to allow this, SysML 1 has context specific initial values. Cameo has great support for these, which works around the somewhat clumsy definition in SysML 1. This will be much better in SysML 2.
If the value is not adjustable, a 12 V power supply would technically be a new type of power supply. However, I can see that it might be useful to define this only as a context specific value, even though it is strictly speaking not context specific. I don't want to be more Catholic than the Pope.
Now a lot of systems engineers don't like to define their blocks. I really don't understand why. But in order to accommodate this modeling habit, SysML 1 has property specific types. In the background these types are still regular blocks; only on the surface does it appear as if they are defined purely in context. Up to now, no one has been able to explain to me what the advantage of this would be. However, SysML 2 has made it the core of the language. There you can define properties of properties, even without first defining blocks.
Sometimes you have sub-assemblies with some flexibility regarding cardinalities and types. If you use such a sub-assembly in a certain context, it is often necessary to define context specific limitations. For example, you could say the generic landing gear can have 4..8 wheels, which could be high load wheels or medium load wheels, but when used in a Boeing 747, it will have 6 high load wheels. For this, SysML 1 has bound references. Is that your use case?

Problems getting Altera's Triple Speed Ethernet IP core to work

I am using a Cyclone V on a SoCKit board (link here) provided by Terasic, with an HSMC-NET daughter card (link here) connected to it, in order to create a system that can communicate over Ethernet, with both transmitted and received traffic going through the FPGA. The problem is that I am having a really, really hard time getting this system to work using Altera's Triple Speed Ethernet core.
I am using Qsys to construct the system containing the Triple Speed Ethernet core, instantiating it inside a VHDL wrapper that also contains an instantiation of a packet generator module. The generator is connected directly to the transmit Avalon-ST sink port of the TSE core and controlled through an Avalon-MM slave interface connected to a JTAG-to-Avalon master bridge core, which has its master port exported to the VHDL wrapper as well.
Then, using System Console, I configure the Triple Speed Ethernet core as described in section 5-26 (Register Initialization) of the core's user guide (link here) and instruct the packet generator module (also via System Console) to start generating Ethernet packets into the TSE core's transmit Avalon-ST sink interface ports.
Although everything is configured exactly as described in the core's user guide (linked above), I cannot get it to output anything on the MII/GMII output interfaces, nor do any of the statistics counters increase or even change - clearly I am doing something wrong, or missing something, but I just can't find out what exactly it is.
Can anyone please, please help me with this?
Thanks in advance,
Itamar
Starting with the basic checks:
Have you simulated it? It's not clear to me whether you are just simulating or synthesizing.
If you haven't simulated, you really should. If it's not working in simulation, why would it ever work in real life?
Make sure you are using the QIP file to synthesize the design. It will automatically include your auto-generated SDC constraints. You will still need to add your own pin constraints; more on that below.
The TSE is fairly old and reliable, so the obvious first things to check are Clock, Reset, Power and Pins.
a.) Power is usually less of a problem on devkits if you have already run the demo that came with the kit.
b.) Pins can cause a whole slew of issues if they are not mapped right for this core. I'll assume you are leveraging something from Terasic; it should define pins for reset, the input clock, and the signal standards. A lot of the time this goes in the .qsf file, and you also reference the QIP file (mentioned above) in there too.
c.) Clock & reset is the more likely culprit in my mind. No activity on the interface is kind of a clue. One way to check is to route your clocks to spare pins, o-scope them, and ensure they are what you think they are. Similarly, you may want to bring your reset out to a pin and check it. MAKE SURE YOU KNOW THE POLARITY and that you haven't been using ~reset in some places and non-inverted reset in others.
Reconfig block: some Altera chips and certain versions of Quartus require you to use a reconfig block to configure the XCVR. This doesn't seem like your issue to me, though, because you say the GMII is flatlined.
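One more cheap sanity check: verify the Avalon-MM path itself by writing and reading back the TSE scratch register from System Console. A sketch only - it assumes the JTAG-to-Avalon master is the first master service found and that the MAC's register map starts at address 0x0 of that master's view (scratch is at byte offset 0x04 in the TSE register map; add your Qsys base address if it differs):

set m [lindex [get_service_paths master] 0]
open_service master $m
master_write_32 $m 0x04 0xdeadbeef
master_read_32 $m 0x04 1
close_service master $m

If the read-back doesn't return 0xdeadbeef, the problem is in the Avalon-MM plumbing rather than in the MAC register configuration.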

Omnet++: Parallelize Single Run Simulation

I'm trying to parallelize my model (I want to parallelize a single config run, not run multiple configs in parallel).
I'm using OMNeT++ 4.2.2, but the version probably doesn't matter.
I've read the Parallel Distributed Simulation chapter of the OMNeT++ manual, and the principle seems very straightforward: simply assign different modules/submodules to different partitions.
Following the provided cqn example:
*.tandemQueue[0]**.partition-id = 0
*.tandemQueue[1]**.partition-id = 1
*.tandemQueue[2]**.partition-id = 2
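For completeness, the general parallel-simulation switches from that manual chapter are enabled as well; they look something like this (a sketch, assuming OMNeT++ 4.x with the named-pipe transport - the communications class is one of several options described in the manual):

[General]
parallel-simulation = true
parsim-communications-class = "cNamedPipeCommunications"
parsim-synchronization-class = "cNullMessageProtocol"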
If I simulate relatively simple models, everything works fine and I can partition the model as I wish.
However, when I run simulations that use the StandardHost module, or modules that are interconnected using Ethernet links, it doesn't work anymore.
Take, for example, the INET-provided example WiredNetWithDHCP (inet/examples/dhcp/eth); as an experiment, let's say I want to run the hosts in a different partition than the switch.
I therefore assign the switch to one partition and everything else to another:
**.switch**.partition-id = 1
**.partition-id = 0
The different partitions are separated by links that have delay, so it should be possible to partition this way.
When I run the model using the graphical interface, I can see that the model is correctly partitioned; however, the connections are somehow wrong and I get the following error message:
during network initialization: the input/output datarates differ
Clearly the data rates don't differ (and running the model sequentially works perfectly). Checking the code behind the error message, this exception is also triggered by an unconnected link, and that is indeed what happens: it seems the gates are not correctly linked.
Clearly I'm missing something in the link connection mechanism; should I partition somewhere else?
Given the simplicity of the paradigm I feel like an idiot, but I'm not able to solve this issue by myself.
Just to give some feedback:
It seems this cannot be done directly; in short, full INET as it is cannot be parallelized, because it uses global variables in some places.
In this particular case, MAC address assignment is one of the issues (it uses a global variable), hence the Ethernet interface cannot be parallelized.
For more details, refer to this paper explaining why this is not possible:
Enabling Distributed Simulation of OMNeT++ INET Models
For a reference and possible solution, see the authors' webpage at RWTH Aachen University, where you can download a complete copy of OMNeT++ and INET that can be parallelized:
project overview and code

How should different Linux device tree drivers share common registers?

I'm working on a port of the Linux kernel to an unsupported ARM SoC platform. Unfortunately, on this SoC, different peripherals will sometimes share registers or commingle registers within the same region of memory. This is giving me grief with the Device Tree specification which doesn't seem to support the notion of different devices sharing the same set of registers or registers commingled in the same address space. Various documents I've read on the device tree don't suggest the proper way to handle this.
My simple approach to specify the same register region within multiple drivers throws "can't request region for resource" for the second device that attempts to map the same register region as another driver. From my understanding, this results from the kernel enforcing device tree rules regarding register regions.
What is the preferred general solution for solving this dilemma? Should there be a higher level driver that marshals access to the shared register region? Are there examples in the existing Linux kernel that address this specific issue (I couldn't find any, but I may not be sure what to look for)?
I am facing exactly the same problem. My solution is to create a separate module to guard common resources and then write 'client modules' that use symbols exported from the common module.
Note that this makes sense from the safety point of view as well. How would you otherwise implement proper memory locking and ensure operation coherency across several independent modules?
You can still use devm_ioremap() directly (unlike devm_ioremap_resource(), it does not request the region, so the overlap check is not triggered), but extra caution has to be exercised and some synchronization added.
Below is an example from upstream:
https://github.com/torvalds/linux/blob/master/drivers/usb/phy/phy-tegra-usb.c#L1368
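To make the guard-module idea concrete, here is a minimal sketch. The base address, size, and exported helper are hypothetical; the point is that exactly one module maps the shared region, and every client goes through a locked accessor instead of calling devm_ioremap() itself.

#include <linux/module.h>
#include <linux/io.h>
#include <linux/spinlock.h>

#define SHARED_REGS_BASE 0x10000000 /* hypothetical SoC address */
#define SHARED_REGS_SIZE 0x100      /* hypothetical region size */

static void __iomem *shared_base;
static DEFINE_SPINLOCK(shared_lock);

/* Read-modify-write a register in the shared block under one common lock,
 * so that independent client drivers cannot interleave their accesses. */
void shared_regs_update(u32 offset, u32 mask, u32 val)
{
	unsigned long flags;
	u32 reg;

	spin_lock_irqsave(&shared_lock, flags);
	reg = readl(shared_base + offset);
	reg = (reg & ~mask) | (val & mask);
	writel(reg, shared_base + offset);
	spin_unlock_irqrestore(&shared_lock, flags);
}
EXPORT_SYMBOL_GPL(shared_regs_update);

static int __init shared_regs_init(void)
{
	shared_base = ioremap(SHARED_REGS_BASE, SHARED_REGS_SIZE);
	return shared_base ? 0 : -ENOMEM;
}

static void __exit shared_regs_exit(void)
{
	iounmap(shared_base);
}

module_init(shared_regs_init);
module_exit(shared_regs_exit);
MODULE_LICENSE("GPL");

Client modules then simply declare the exported symbol and call shared_regs_update() with their own offsets.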

error LNK2001: unresolved external symbol _fltused in wdk

I am trying to define a variable of type double in C code that is going to be used in the Windows kernel. The code compiles but gives an error while linking. I tried using libcntpr.lib in the source file and also defining the __fltused variable in the code, but to no avail. I'd really appreciate it if someone could help me with how to do this.
I don't know if this is still applicable to the current WDK, but Walter Oney discourages the use of floating point in drivers here:
The problem is worse than just finding the right library, unfortunately. The C compiler's floating point support assumes that it will be operating in an application environment where you can initialize the coprocessor, install some exception handlers, and then blast away. It also assumes that the operating system will take care of saving and restoring each thread's coprocessor context as required by all the thread context switches that occur from then on.
These assumptions aren't usually true in a driver. Furthermore, the runtime library support for coprocessor exceptions can't work because there's a whole bunch of missing infrastructure.
What you basically need to do is write your code in such a way that you initialize the coprocessor each time you want to use it (don't forget KeSaveFloatingPointState and KeRestoreFloatingPointState). Set things up so that the coprocessor will never generate an exception, too. Then you can simply define the symbol __fltused somewhere to satisfy the linker. (All that symbol usually does is drag in the runtime support. You don't want that support because, as I said, it won't work in kernel mode.) You'll undoubtedly need some assembly language code for the initialization steps.
If you have a system thread that will be doing all your floating point math, you can initialize the coprocessor once at the start of the thread. The system will save and restore your state as necessary from then on.
Don't forget that you can only do floating point at IRQL < DISPATCH_LEVEL.
There's FINIT, among other things. If you're rusty on coprocessor programming, my advice would be to tell your management that this is a specialized problem that will require a good deal of study to solve. Then fly off to Martinique for a week or so (after hurricane season, that is) to perform the study in an appropriate environment.
Seriously, if you're unfamiliar with FINIT and other math coprocessor instructions, this is probably not something you should be incorporating into your driver.
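A minimal sketch of the pattern described above, assuming all floating point use stays at IRQL < DISPATCH_LEVEL; the symbol definition and the save/restore bracket are the parts the linker and kernel actually require (the decorated name the linker expects can differ between x86 and x64, so adjust the definition if it still complains):

#include <ntddk.h>

/* Satisfies the linker's _fltused reference without dragging in the
   user-mode floating point runtime, which won't work in kernel mode. */
int _fltused = 0;

/* Bracket every floating point computation with save/restore, and only
   call this at IRQL < DISPATCH_LEVEL. */
double ScaleSample(double sample)
{
    KFLOATING_SAVE save;
    double result = 0.0;

    if (NT_SUCCESS(KeSaveFloatingPointState(&save))) {
        result = sample * 1.5; /* arbitrary example computation */
        KeRestoreFloatingPointState(&save);
    }
    return result;
}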
There is also an interesting read from Microsoft: C++ for Kernel Mode Drivers: Pros and Cons
On x86 systems, the floating point and multimedia units are not available in kernel mode unless specifically requested. Trying to use them improperly may or may not cause a floating-point fault at raised IRQL (which will crash the system), but it could cause silent data corruption in random processes. Improper use can also cause data corruption in other processes; such problems are often difficult to debug.
