Number of active netlists exceeds limit

Lately, I have been stuck with the same error from Vivado when I try to Synthesize my design:
[Common 17-70] Application Exception: Number of active netlists exceeds limit (255)
Does anybody know what this "limit" means? Is it a limitation of the software, or does it refer to the capacity of the FPGA? Is there some way to avoid it, or do I have to restructure my whole design in order to have fewer netlists?

I had the same problem. Restarting the Vivado tool helped in my case.

Related

How to find the max file descriptors in Nuttx RTOS?

I have a limited understanding of the NuttX OS but have run into a limitation set by the config parameter CONFIG_NFILE_DESCRIPTORS using the PX4 stack. I'm using a Pixhawk 4 FCU board that has an STM32F76 processor. The firmware build (px4_fmu-v5) by default has that parameter set to 20. My understanding is that this is a soft limit that is applied to each module in the stack to limit its I/O. I can increase that limit without any visible issues so far, but this raises a few concerns:
To what extent can I increase the limit of that parameter without causing any issues?
What are the potential consequences of exceeding that limit?
Is there a way to find the hard limit of the number of file descriptors (assuming this is specific to the processor type)? If not, can I monitor the usage of file descriptors per module over an NSH shell?
If this is over-simplifying the issue I'd appreciate any pointers in the right direction, but I would prefer not to delve too deep into NuttX just to understand how this generally works and what the limitations are here.

Problem with RAM in FPGA (Zynq 7020), can someone give me advice?

Hello, I get a strange message when I try to run MAP. I set the RAM up properly and also checked that it uses only 80% of the resources I have on the card. Why do I get this message, and can anyone advise me what to do?
This is the error I got when I try to run the "map" step to get a bit file:
(image: summary of the resources)
ERROR:Place:543 - This design does not fit into the number of slices available
in this device due to the complexity of the design and/or constraints.
Unplaced instances by type:
BLOCKRAM 77 (55.0)
Please evaluate the following:
BLOCKRAM
u_xyz2lcd_for_test/u_send_to_zedboard/dpr_2/U0/xst_blk_mem_generator/gnativeb
mg.native_blk_mem_gen/valid.cstr/ramloop[6].ram.r/v6_noinit.ram/NO_BMM_INFO.S
DP.SIMPLE_PRIM18.ram
BLOCKRAM
It simply means that you want to use more RAM than the device has.
I suggest you check your resources again and check the amount of memory used.
Your 80% may be LUTs or FFs, or you may have read something wrong.
There is another possibility, although it is very rare:
Your memory usage may increase in Place And Route if it has to split the memory over multiple blocks because you have some unusual configuration.
This example may not be valid, but it tries to show what can happen:
Suppose you use bit-write enables. Synthesis thinks you have enough memory, but PAR has to use a byte for each bit, so PAR needs to split the data over more blocks and in the end runs out.
The case where I have seen this was a very complex one with DSPs.

need help debugging an unstable program

Following some changes, my Arduino sketch became unstable: it only runs for 1-2 hours and then crashes. I have now spent a month trying to understand it without making sensible progress; the main difficulty is that the slightest change makes it run apparently "ok" for days...
The program is ~1500 lines long
Can someone suggest how to progress?
Thanks in advance for your time
Well, embedded systems are very well known for their continuous fight against the universe's fourth dimension: time. It is known that some delays must be added inside the code - this does not always imply the use of a system delay routine; just changing the order of operations may solve a lot.
Debugging a system with such problem is difficult. Some techniques could be used:
a) invasive: mark various places in your software (i.e. with printf statements), such as the entry or exit of important routines, and run again - when the application crashes, note the last message seen and conclude that the crash is after the software step marked by that printf.
b) less invasive: use an available GPIO pin as an output; set it high at the entry of a routine and low at the exit. The crashing point will leave the pin either high or low. You can use several pins if available and watch the activity with an oscilloscope.
c) non-invasive: use JTAG or SWD debugging - this is the best one. If your micro supports fault debugging, then you have the means to locate the bug.

CPU/Processor error rate in calculations

Does Intel or AMD publish specifications about the rate at which calculation failures can be expected on their CPUs? I would suspect it is very age- and temperature-dependent, but surely there must be some kind of numbers available?
I'm not interested in manufacturing errors. I'm interested in spontaneous errors due to physical phenomena not related to design error. Whether the error originates in the CPU or some other chip on the system is also of interest (for example a momentary voltage failure to the processor would also result in errors).
I'm curious, but my net searching isn't yielding what I want. I just want a rough idea of how many spontaneous errors I could expect if I left my program running for X hours.
I'm not sure if this is the best StackExchange site to ask, perhaps electronics instead?
The number is zero. If you get calculation errors and your CPU's temperature is within the boundaries defined by the specification, then you have a defective CPU that must be replaced.

Map device driver code to Logic Analyzer waveform

As per SDIO specification, the sequence of operations (for write transaction) take place as:
Command53 -- CommandLatency -- Command53Response -- ResponseLatency -- startbit -- write-number-of-bytes -- CRC -- endbit -- WriteLatency -- startbit -- CRC -- endbit -- busybit.
During benchmarking of SDIO UART driver, the time values which I got were more than expected. A lot of latency was found especially during write transaction.
Reasons for latency could be scheduler allocating processor time to other processes, delay in work queues, etc.
I would like to analyze and understand the latency. Maybe understanding the mapping between the device driver code and the Logic Analyzer waveform can provide some clues.
Can somebody shed some light on this?
Thank you.
EDIT 1:
Sorry! I assumed a few things.
In sdio_uart_transmit_chars() there is a call to sdio_out(), which in turn calls sdio_writeb(), and this call writes byte-wise (one byte at a time) to an SDIO UART device. I modified the driver to use sdio_writesb() instead, i.e. multi-byte mode. This reduced the time taken to write X bytes. Interestingly, with an increase in the size of the write data, there was an exponential increase in WriteLatency (as mentioned above).
This latency could be because of many reasons. I would like to understand these reasons.
Setup: I am using Linux (v 2.6.32) laptop and a loadable kernel module (which is modified sdio_uart.c)
EDIT 2:
Maybe adding 'SDIO' to this question is misleading (I'm not sure at the moment). The reasons for the delay could be generic to any device driver interacting with hardware, and may be independent of the SDIO write process.
If somebody can point me to related online resource, I would be happy to explore and update the result here.
Hope I added more clarity this time. Please comment if the question is still not clear.
Thank you for your time.
EDIT 3:
Yes, I am looking at the signals on Logic Analyzer (LA) and there are longer delays during and between writes than I expected.
To give an idea about time values:
For 512 bytes transfer: At the hardware level theoretically the write should take 50 micro seconds (us), however in reality I got 200 us.
This gap of 150 us is what I want to understand.
Note:
1) I am rounding off the time values to simplify the case.
2) All the time values are calculated at Kernel level and no user space issue is involved here.
One thing worth looking at is whether your SD interface works by DMA, such that the driver can program the state machine and then it just runs by itself, or whether getting the message out requires repeated servicing by the driver, which might be delayed by other kernel obligations.
You could also see if there may be an I/O bottleneck, for example is the SD interface or whatever bus it hangs off of also used for something else?
Finally, you could search for ways to increase the priority. At an extreme, you could switch to a real-time SD driver rather than a normal one.