I'm porting a Xilinx ISE project to Quartus II. When I compile that project, Quartus crashes with an error: *** Fatal Error: Access Violation at 0X000007FE88160DE1. So I'm trying to narrow the error down to a minimal example, which will either lead me to a hidden VHDL bug or give me a small test case I can send to Altera.
The project uses a VHDL data structure (record of vectors of records of vectors of ... of basic types) to describe a SoC setup. The simplified structure looks like this:
SoFPGA System
o-DeviceInstances
| o-Device
| o-Registers
| | o-Bitfields
| o-RegisterMappings
o-Busses
I would like to report the full structure to the synthesis log, but Quartus II seems to hide messages if the message text was already printed to the log.
Example:
Info (10635): ... at pb.pkg.vhdl(1228): "DeviceInstance 1:" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1229): " DeviceInstance: Mult32" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1230): " Device: Mult32" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 0: OperandA0 Reg#=0 WR" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1241): " 0: FieldID=0 (OperandA)" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 1: OperandA1 Reg#=1 WR" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 2: OperandA2 Reg#=2 WR" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 3: OperandA3 Reg#=3 WR" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 4: OperandB0 Reg#=4 WR" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1241): " 0: FieldID=1 (OperandB)" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 5: OperandB1 Reg#=5 WR" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 6: OperandB2 Reg#=6 WR" (NOTE)
Info (10635): ... at pb.pkg.vhdl(1234): " 7: OperandB3 Reg#=7 WR" (NOTE)
The 'FieldID' message is only printed once per 'Operand[A|B]'. There is no FieldID line after operands A1, A2, A3 and B1, B2, B3.
So I tried to add a unique number (salt) to each report line. For that, I defined a shared variable and an impure function salty that increments salt on every call.
Variable and function:
shared variable salt : NATURAL := 0;

impure function salty return STRING is
begin
  salt := salt + 1;
  return INTEGER'image(salt);
end function;
Usage:
report salty & "pb_CreateRegisterRO:" severity NOTE;
But unfortunately, salty always returns "0". Quartus II supports VHDL'08 features. Should I implement the shared variable as a protected type and compile it in VHDL'08 mode?
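For reference, here is a sketch of the protected-type variant I would try in VHDL-2008 mode (untested in Quartus; the type and method names here are mine, not from any vendor library):

```vhdl
-- VHDL-2008 protected type replacing the plain shared variable.
type T_SALT is protected
  impure function next_value return STRING;
end protected;

type T_SALT is protected body
  variable count : NATURAL := 0;
  impure function next_value return STRING is
  begin
    count := count + 1;
    return INTEGER'image(count) & ": ";
  end function;
end protected body;

shared variable salt : T_SALT;

-- usage in a report statement:
-- report salt.next_value & "pb_CreateRegisterRO:" severity NOTE;
```

Whether the synthesis front-end actually evaluates the method call per report statement, instead of constant-folding it, is exactly the open question.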
How can I print all report statement lines to the synthesis report?
Off topic question:
Is someone interested in helping me find the reason why Quartus crashes? I stripped the project down to 6 VHDL files. I think this problem is too specific for an SO question. Contact
I know you can use Lua's debug library to get some tracing/debugging information, but that information comes in pieces. So I am wondering if there is a way to trace the execution of a Lua script step by step, automatically at every execution step, to produce a report such as the following:
Called: function xyz from : Table abc
It has n parameters
Param 1: apples
Param 2: oranges
.
.
It has m returns
return 1: red
return 2: yellow
.
.
Called: function xyz2 from : Table abc2
It has n parameters
Param 1: pears
Param 2: bananas
.
.
It has m returns
return 1: heavy
return 2: light
.
.
and so on....
Here is some code that used to be distributed in the Lua tarball. It's from 2005 and still works fine.
-- trace calls
-- example: lua -ltrace-calls bisect.lua

local level = 0

local function hook(event)
  local t = debug.getinfo(3)
  io.write(level, " >>> ", string.rep(" ", level))
  if t ~= nil and t.currentline >= 0 then
    io.write(t.short_src, ":", t.currentline, " ")
  end
  t = debug.getinfo(2)
  if event == "call" then
    level = level + 1
  else
    level = level - 1
    if level < 0 then level = 0 end
  end
  if t.what == "main" then
    if event == "call" then
      io.write("begin ", t.short_src)
    else
      io.write("end ", t.short_src)
    end
  elseif t.what == "Lua" then
    io.write(event, " ", t.name or "(Lua)", " <", t.linedefined, ":", t.short_src, ">")
  else
    io.write(event, " ", t.name or "(C)", " [", t.what, "] ")
  end
  io.write("\n")
end

debug.sethook(hook, "cr")
level = 0
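The hook above reports names and source positions but not arguments. To get the "Param n:" lines from the desired report, debug.getlocal can be read inside a call hook. Here is a sketch (requires Lua 5.2+ for nparams; return values are not exposed by the standard hook API, so those lines would need another mechanism, such as explicitly wrapping functions):

```lua
-- Sketch: print each named parameter of a Lua function on entry.
local function param_hook(event)
  local info = debug.getinfo(2, "nu")          -- "n": name, "u": nparams (5.2+)
  if info == nil or (info.nparams or 0) == 0 then return end
  io.write("Called: ", info.name or "(anonymous)", "\n")
  io.write("It has ", info.nparams, " parameters\n")
  for i = 1, info.nparams do
    local name, value = debug.getlocal(2, i)   -- level 2 = the callee
    io.write("Param ", i, ": ", name, " = ", tostring(value), "\n")
  end
end

debug.sethook(param_hook, "c")                 -- fire on every call
```

This can be merged into the existing hook so the indentation and "Called" lines come first, followed by one Param line per argument.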
I've run into this problem a number of times, and maybe it's just my unsophisticated technique, as I'm still a bit of a novice with the finer points of text processing, but using pandoc to go from HTML to plain text yields pretty tables in the form of:
  # IP Address    Device Name                MAC Address
--- ------------- -------------------------- -------------------
  1 192.168.1.3   ANDROID-FFFFFFFFFFFFFFFF   FF:FF:FF:FF:FF:FF
  2 192.168.1.4   XXXXXXX                    FF:FF:FF:FF:FF:FF
  3 192.168.1.5   --                         FF:FF:FF:FF:FF:FF
  4 192.168.1.6   --                         FF:FF:FF:FF:FF:FF
--- ------------- -------------------------- -------------------
The column headings in this example (and the fields/cells in others) aren't especially awk-friendly, since they contain spaces. There must be some utility (or pandoc option) to add delimiters or otherwise process the output in a smart, simple way that makes it easier to use with awk (since the dash ruling hints at the maximum column widths), but I'm fast approaching the limits of my knowledge and have been unable to find a good solution on my own. I'd appreciate any help, and I'm open to alternative approaches (I just use pandoc since that's what I know).
I've got a solution for you which parses the dash line to get column lengths, then uses that info to divide each line into columns (similar to what @shellter proposed in the comments, but without the need to hardcode values).
First, within the BEGIN block we read the header line and the dash line. Then we grab the column lengths by splitting the dash line and measuring each run of dashes.
BEGIN {
  getline headers
  getline dashline
  col_count = split(dashline, columns, " ")
  for (i = 1; i <= col_count; i++)
    col_lens[i] = length(columns[i])
}
Now we have the lengths of each column and you can use that inside the main body.
{
  start = 1
  for (i = start; i <= col_count; i++) {
    col_n = substr($0, start, col_lens[i])
    start = start + col_lens[i] + 1
    printf("column %i: [%s]\n", i, col_n)
  }
}
That seems a little onerous, but it works, and I believe it answers your question. To make things a little nicer, I factored the line parsing out into a user-defined function. That's convenient because you can now also use it on the headers you stored (if you want).
Here's the complete solution:
function parse_line(line, col_lens, col_count){
  start = 1
  for (i = start; i <= col_count; i++) {
    col_i = substr(line, start, col_lens[i])
    start = start + col_lens[i] + 1
    printf("column %i: [%s]\n", i, col_i)
  }
}

BEGIN {
  getline headers
  getline dashline
  col_count = split(dashline, columns, " ")
  for (i = 1; i <= col_count; i++) {
    col_lens[i] = length(columns[i])
  }
  parse_line(headers, col_lens, col_count)
}

{
  parse_line($0, col_lens, col_count)
}
If you put your example table into a file called table and this program into a file called dashes.awk, here's the output (using head -n -1 to drop the final row of dashes):
$ head -n -1 table | awk -f dashes.awk
column 1: [ # ]
column 2: [ IP Address ]
column 3: [ Device Name ]
column 4: [ MAC Address]
column 1: [ 1 ]
column 2: [ 192.168.1.3 ]
column 3: [ ANDROID-FFFFFFFFFFFFFFFF ]
column 4: [ FF:FF:FF:FF:FF:FF]
column 1: [ 2 ]
column 2: [ 192.168.1.4 ]
column 3: [ XXXXXXX ]
column 4: [ FF:FF:FF:FF:FF:FF]
column 1: [ 3 ]
column 2: [ 192.168.1.5 ]
column 3: [ -- ]
column 4: [ FF:FF:FF:FF:FF:FF]
column 1: [ 4 ]
column 2: [ 192.168.1.6 ]
column 3: [ -- ]
column 4: [ FF:FF:FF:FF:FF:FF]
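If the end goal is awk-friendly input rather than printed cells, the same fixed-width split can emit one tab-separated line per row, which downstream awk can then consume with a plain -F'\t'. A sketch along the same lines (the sample table below is abridged to one data row):

```shell
# Sketch: convert the fixed-width table to TSV using the dash-line widths.
cat > /tmp/table.txt <<'EOF'
  # IP Address    Device Name                MAC Address
--- ------------- -------------------------- -------------------
  1 192.168.1.3   ANDROID-FFFFFFFFFFFFFFFF   FF:FF:FF:FF:FF:FF
EOF

awk '
function trim(s) { gsub(/^[ \t]+|[ \t]+$/, "", s); return s }
function row(line,    i, start, out) {
  start = 1; out = ""
  for (i = 1; i <= col_count; i++) {
    out = out (i > 1 ? "\t" : "") trim(substr(line, start, col_lens[i]))
    start += col_lens[i] + 1
  }
  print out
}
BEGIN {
  getline headers; getline dashline
  col_count = split(dashline, columns, " ")
  for (i = 1; i <= col_count; i++) col_lens[i] = length(columns[i])
  row(headers)
}
{ row($0) }
' /tmp/table.txt > /tmp/table.tsv

cat /tmp/table.tsv
```

From there, something like awk -F'\t' '{print $2}' /tmp/table.tsv selects whole columns regardless of embedded spaces.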
Have a look at pandoc's filter functionality: it allows you to programmatically alter the document without having to parse the table yourself. Probably the simplest option is to use Lua filters, as those require no external program and are fully platform-independent.
Here is a filter which acts on each cell of the table body, ignoring the table header:
function Table (table)
  for i, row in ipairs(table.rows) do
    for j, cell in ipairs(row) do
      local cell_text = pandoc.utils.stringify(pandoc.Div(cell))
      local text_val = changed_cell(cell_text)
      row[j] = pandoc.read(text_val).blocks
    end
  end
  return table
end
where changed_cell could be either a Lua function (Lua has good built-in support for patterns) or a function which pipes the text through awk:
function changed_cell (raw_text)
  return pandoc.pipe('awk', {'YOUR AWK SCRIPT'}, raw_text)
end
The above is a slightly unidiomatic pandoc filter, as filters usually don't act on raw strings but on pandoc AST elements. However, the above should work fine in your case.
I have noticed this issue when knitting all file types (HTML, PDF, Word). To make sure it's not an issue specific to my program, I went ahead and knit the default .Rmd file you get when you create a new R Markdown document. In each case it knits correctly, but I always see this at the end. I have searched online and here but cannot find an explanation:
Error in yaml::yaml.load(string, ...) :
Scanner error: mapping values are not allowed in this context at line 6, column 19
Error in yaml::yaml.load(string, ...) :
Scanner error: mapping values are not allowed in this context at line 6, column 19
Error in yaml::yaml.load(string, ...) :
Scanner error: mapping values are not allowed in this context at line 4, column 22
Here is my default YAML
---
title: "Untitled"
author: "Scott Jackson"
date: "April 20, 2017"
output: word_document
---
Line 4, column 22 is the space between the 7 and the closing ".
I'm not sure where line 6, column 19 is, but that line is the dashes at the bottom.
Any ideas?
Thank you.
I get this error when trying to add a table of contents to the YAML:
title: "STAC2020 Data Analysis"
date: "July 16, 2020"
output: html_notebook:
  toc: true
However, if I put html_notebook: on a separate line, then I don't get the error:
title: "STAC2020 Data Analysis"
date: "July 16, 2020"
output:
  html_notebook:
    toc: true
I do not know why this formatting makes a difference, but it allowed my document to knit and with a table of contents.
I realize this question has gone unanswered for a while, but maybe someone can still benefit. I had the same error message, and I realized I had an extra header command in my YAML. I can't reproduce your exact error, but I get the same message with different line/column references with:
---
title: "Untitled"
author: "Scott Jackson"
date: "April 20, 2017"
output: output: word_document
---
Error in yaml::yaml.load(string, ...) :
Scanner error: mapping values are not allowed in this context at line 4, column 15
Calls: <Anonymous> ... parse_yaml_front_matter -> yaml_load_utf8 -> <Anonymous>
Execution halted
Line 4 column 15 seems to refer to the second colon after the second "output".
I received this error when there was an indentation in the wrong place. For example, the indentation before header-includes, as seen in the example code below, caused the error:
---
title: "This is a title"
author: "Author Name"
   header-includes:
.
.
.
---
When you remove the indentation, the following code does not produce the error:
---
title: "This is a title"
author: "Author Name"
header-includes:
.
.
.
---
Similarly to Tim Ewers, I also got this error when I added a TOC to the YAML:
title: "My title"
date: "April 1, 2020"
output:
  pdf_document: default
    toc: true
  html_document: paged
However, the solution I found was to remove "default"; this allowed me to knit the document without an error:
title: "My title"
date: "April 1, 2020"
output:
  pdf_document:
    toc: true
  html_document: paged
I guess this error happens in your content rather than in your YAML block.
Because the question shows no extra content, I will give a minimal example.
> library(yaml)
> library(magrittr)
> "
+ ---
+ title: 'This is a title'
+ output: github_document
+ ---
+
+ some content
+ " %>%
+ yaml.load()
$title
[1] "This is a title"
$output
[1] "github_document"
It works well. And here is another example.
> "
+ ---
+ title: 'This is a title'
+ output: github_document
+ ---
+
+ some content
+ some content: some content
+ " %>%
+ yaml.load()
Error in yaml.load(.) :
Scanner error: mapping values are not allowed in this context at line 8, column 13
The error happens at line 8 because there is a key-value pair outside the YAML block.
yaml.load is not smart enough for me here.
The temporary solution for me is just to extract all lines up to the second ---.
> text <- "
+ ---
+ title: 'This is a title'
+ output: github_document
+ ---
+
+ some content
+ some content: some content
+ "
> library(xfun)
> read_lines(text,n_max = 5) %>%
+ yaml.load()
$title
[1] "This is a title"
$output
[1] "github_document"
I had a similar problem and opened issues on the yaml and rticles trackers:
https://github.com/viking/r-yaml/issues/92
https://github.com/rstudio/rticles/issues/363
I know this is a 5-year-old question, but I just got this same error because I was missing a colon:
---
title: ''
output:
  pdf_document
  includes:
    before_body: before_body.tex
---
should have been
---
title: ''
output:
  pdf_document:
    includes:
      before_body: before_body.tex
---
and while that doesn't strictly answer the example given, I hope it will help future sufferers of this error message.
I'm using VHDL assert statements to check whether global constants and generic parameters in VHDL architectures obey the supported parameter set of a VHDL design.
An always true assert statement is reported in the log. I think this is a bug.
Example: assert TRUE report "This should not be visible in the LSE log." severity NOTE;
Here is the complete test example:
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all;

entity assert_test is
end entity;

architecture rtl of assert_test is
  type T_VENDOR is (VENDOR_ALTERA, VENDOR_LATTICE, VENDOR_XILINX);
  constant VENDOR : T_VENDOR := VENDOR_LATTICE;
begin -- line 39
  assert TRUE report "This should not be visible in the LSE log." severity NOTE;

  genInfer : if ((VENDOR = VENDOR_LATTICE) or (VENDOR = VENDOR_XILINX)) generate
    assert FALSE report "Inside genInfer" severity NOTE;
  end generate;

  genAltera : if (VENDOR = VENDOR_ALTERA) generate
    assert FALSE report "Inside genAltera" severity NOTE;
  end generate;
  -- line 48
  assert ((VENDOR = VENDOR_ALTERA) or (VENDOR = VENDOR_LATTICE) or (VENDOR = VENDOR_XILINX))
    report "Vendor '" & T_VENDOR'image(VENDOR) & "' not yet supported."
    severity failure;
  -- line 52
  -- workaround
  genAssert : if (not ((VENDOR = VENDOR_ALTERA) or (VENDOR = VENDOR_LATTICE) or (VENDOR = VENDOR_XILINX))) generate
    assert FALSE report "Vendor '" & T_VENDOR'image(VENDOR) & "' not yet supported." severity failure;
  end generate;
end architecture;
Here is my LSE log:
INFO - synthesis: d:/.../assert_test.vhdl(40): Found User declared VHDL assert of type Note: "This should not be visible in the LSE log.". VHDL-1700
INFO - synthesis: d:/.../assert_test.vhdl(43): Found User declared VHDL assert of type Note: "Inside genInfer". VHDL-1700
INFO - synthesis: d:/.../assert_test.vhdl(51): Found User declared VHDL assert of type Failure: "Vendor 'vendor_lattice' not yet supported.". VHDL-1700
Top module name (VHDL): assert_test
Can anyone confirm this behavior?
It's a bug, isn't it?
Workaround:
Placing the assert statements inside generate statements works as a workaround.
My goal is to write a string to a file, where the size of the string will vary. At the moment I have made the string very large so that there is no overflow, but is there a way to make the size of the string exactly the number of characters I'm placing into it? I've tried something like the code below, but it gives me the error unknown identifier "address_count". I think that is because address_count is a variable declared in a process and is constantly changing. Is there any way around this?
signal address_map :string (1 to address_count);
many thanks
leo
"My goal is to write a string to a file." Hence, lets just focus on that.
Step 1: reference the file IO packages (recommended: turn on VHDL-2008):
use std.textio.all ;
-- use ieee.std_logic_textio.all ; -- include if not using VHDL-2008
Step 2: Declare your file
file MyFile : TEXT open WRITE_MODE is "MyFile.txt";
Step 3: Create a buffer:
TestProc : process
  variable WriteBuf : line ;
begin
  write ...      -- see step 4
  writeline ...  -- see step 5
Step 4: Use write to write into the buffer (in the process TestProc):
write(WriteBuf, string'("State = ") ) ; -- Any VHDL version
write(WriteBuf, StateType'image(State)) ;
swrite(WriteBuf, " at time = " ); -- VHDL-2008 simplification
write(WriteBuf, NOW, RIGHT, 12) ;
Step 5: Write the buffer to the file (in the process TestProc):
writeline(MyFile, WriteBuf) ;
Alternate Steps 3-5: Use built-in VHDL Write with to_string:
Write(MyFile, "State = " & to_string(State) &
      ", Data = " & to_hstring(Data) &
      " at time " & to_string(NOW, 1 ns) ) ;
Alternate Steps 1-5: Use OSVVM (see http://osvvm.org) (requires VHDL-2008):
library osvvm ;
use osvvm.transcriptpkg.all ; -- all printing goes to same file
. . .
TestProc : process
begin
  TranscriptOpen("./results/test1.txt") ;
  Print("State = " & to_string(State) &
        ", Data = " & to_hstring(Data) &
        " at time " & to_string(NOW, 1 ns) ) ;
One hard but flexible solution is to use the dynamic allocation features of VHDL (borrowed from Ada).
You have to use an access to string (it is roughly like a "pointer to a string" in C):
type line is access string;
You don't even have to declare it yourself, because line is already declared in the std.textio package.
OK, the next problem is that you can't use an access type for a signal, so you have to use a shared variable:
shared variable address_map: line;
And finally you have to allocate, read and write to this line:
-- Example in a function/procedure/process:
-- free a previously allocated string:
if address_map /= NULL then
  deallocate(address_map);
end if;

-- allocate a new string:
address_map := new string(1 to address_count);
address_map(1 to 3) := "xyz";
-- we now have:
--   address_map(1) = 'x'
--   address_map(2 to 3) = "yz"
--   address_map.all = "xyz"
Notice the use of new/deallocate (like malloc/free in C or free/delete in C++).
It is not easy to handle this kind of code; I recommend reading the documentation for the VHDL keywords "new", "deallocate" and "access" (easily found with your favorite search engine), or feel free to ask more questions.
You can also use the READ (read a string from the line) and WRITE (append a string to the line) procedures from the std.textio package.
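Tying this back to the original goal of an exactly sized string: successive write calls grow a line value to fit what has been written, so no oversized buffer is needed. A minimal simulation-only sketch (the entity name is mine):

```vhdl
use std.textio.all;

entity string_grow is
end entity;

architecture sim of string_grow is
begin
  process
    variable buf : line;  -- access string, declared in std.textio
  begin
    write(buf, string'("addr = "));
    write(buf, 42);
    -- buf.all now holds exactly "addr = 42": the allocation grew to
    -- match what was written, with no padding
    report buf.all severity NOTE;
    deallocate(buf);
    wait;
  end process;
end architecture;
```

The same pattern works with address_count-sized content: keep appending with write, and only at the end hand buf to writeline (which also frees it).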