post gate level simulation in modelsim - vhdl

I'm trying to make a post gate level simulation for a pipelined processor.
I have the netlist in VHDL format, and now I need to simulate it again to make sure the functionality is still correct after synthesis.
The problem is that I have two RAMs, one for instructions and the other for data. In the post gate-level simulation I don't have the ability to open the memory list view and load my instructions and data into the two RAMs.
How can I load my data into the RAMs now that they have been translated into flip-flops and muxes?
Thanks in advance.

From your description I would assume that the two RAMs are for instruction cache and data cache. Since these are usually of a significant size, even on smaller processors, I would doubt that these RAMs are implemented in flip flops and muxes. My first suggestion would therefore be that you check the netlist to see if the RAMs are actually separate RAM primitive modules.
The reason is that RAM primitive modules may sometimes (depending on model) be initialized with the contents of a file. In that case you just need to make a file with the right format.
An alternative if RAM primitive modules are actually in the netlist, but don't allow initialization, is to substitute the RAM primitive modules with your own version that can be initialized.
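As a sketch of what such a substitute model might look like, here is a behavioral, synchronous RAM that loads its initial contents from a text file of hexadecimal words when the simulation elaborates. The entity name, generics and ports are invented for this example and would have to match the primitive being replaced in your netlist; the file reading uses the ieee.std_logic_textio package that ModelSim provides.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;
    use std.textio.all;
    use ieee.std_logic_textio.all;  -- hread() for std_logic_vector

    -- Hypothetical drop-in replacement for a netlist RAM primitive.
    entity ram_init is
      generic (
        ADDR_WIDTH : integer := 8;
        DATA_WIDTH : integer := 16;
        INIT_FILE  : string  := "program.hex"  -- one hex word per line
      );
      port (
        clk  : in  std_logic;
        we   : in  std_logic;
        addr : in  std_logic_vector(ADDR_WIDTH-1 downto 0);
        din  : in  std_logic_vector(DATA_WIDTH-1 downto 0);
        dout : out std_logic_vector(DATA_WIDTH-1 downto 0)
      );
    end entity;

    architecture behavioral of ram_init is
      type ram_t is array (0 to 2**ADDR_WIDTH-1) of std_logic_vector(DATA_WIDTH-1 downto 0);

      -- Read the initialization file once at elaboration time.
      impure function load_file(fname : string) return ram_t is
        file f       : text open read_mode is fname;
        variable l   : line;
        variable w   : std_logic_vector(DATA_WIDTH-1 downto 0);
        variable mem : ram_t := (others => (others => '0'));
        variable i   : integer := 0;
      begin
        while (not endfile(f)) and (i <= ram_t'high) loop
          readline(f, l);
          hread(l, w);
          mem(i) := w;
          i := i + 1;
        end loop;
        return mem;
      end function;

      signal ram : ram_t := load_file(INIT_FILE);
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if we = '1' then
            ram(to_integer(unsigned(addr))) <= din;
          end if;
          dout <= ram(to_integer(unsigned(addr)));  -- registered read
        end if;
      end process;
    end architecture;

The matching file then simply contains one hexadecimal word per line, which you can generate from your assembler output or a small script.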
If the RAMs are actually converted to flip flops and muxes, then the processor may support some cache manipulation instructions, usually available from protected (kernel) mode. These instructions can be used to load the instruction cache and data cache with contents provided by the executed program. Loading the cache RAMs in this way may take numerous instructions, thus some simulation time.
Last, you may consider not spending that much time on gate-level simulation. It may be OK to run a little, just to be sure that the netlist is OK, but well-known commercial synthesis tools are generally of a high quality, so they are unlikely to be the reason for bugs in your design. The risk of bugs is much larger in the dedicated design for the project, so you may want to spend more time on functional verification and code review ;-)

Related

Stall all memory accesses of an application

I want to analyze the effect of using slower memories on applications and need a means of adding delay to all memory accesses. So far I have investigated Intel PIN and other software, but they seem to be overkill for what I need. Is there any tool to do this?
Is adding NOP operations in the binary code of the application right before each LOAD/STORE a feasible way?
Your best bet is to run your application under an x86 simulator such as MARSSx86 or Sniper. Using these simulators you can smoothly vary the modeled memory latency or any other parameters of the system [1] and see how your application performance varies. This is a common approach in academia (often a generic machine will be modeled, rather than x86, which gives you access to more simulator implementations).
The primary disadvantage of using a simulator is that even good simulators are not completely accurate, and how accurate they are depends on the code in question. Certain types of variance from actual performance aren't particularly problematic when answering the question "how does performance vary with latency", but a simulator that doesn't model the memory access path well might produce an answer that is far from reality.
If you really can't use simulation, you could use a binary-rewriting tool like PIN to instrument the memory access locations. A nop would be a bad choice because it executes very quickly and because you cannot add a dependency between the memory load result and the nop instruction. That latter issue means it only adds additional "work" at the location of each load, but that work is independent of the load itself, so it doesn't simulate increased memory latency.
A better approach would be to follow each load with a long latency operation that uses the result of the load as input and output (but doesn't modify it). Maybe something like imul reg, reg, 1 if reg received the result of the load (but this only adds 3 cycles, so you might hunt for longer latency instructions if you want to add a lot of latency).
[1] At least within the set of things modeled by the simulator.

Bus protocol for a microcontroller in VHDL

I am designing a microcontroller in VHDL. I am at the point where I understand the role of each component (ALU/Memory...), and some ideas on how to realise them. I basically want to implement a Von Neumann architecture.
But here is what I don't get: how do the components communicate? I don't know how to design my bus (buses?). I am therefore looking for a simple bus implementation and protocol.
My unresolved questions:
Is it simpler to have one bus for everything or to separate the different kinds of data?
How does each component know when to "listen" and when to "write"?
The emphasis is on the simplicity of the design (and thus of the implementation). I do not care about speed. I want to do everything from scratch (i.e. no pre-made softcore).
I don't know if this is of importance at this stage, but it will not need to run "real" compiled code or have any kind of compatibility with anything existing. Also, at which point do I begin to think about my 'assembly' instructions? I think that I will load them directly into memory.
Thank you for your help.
EDIT :
I ended up drawing (a lot of) inspiration from the PicoBlaze, because it is:
simple to understand
under a BSD Licence
Specifically, I started by adding a few instructions to it.
Since your main concern seems to be learning about microcontroller design, a good approach could be taking a look into some of the earlier microprocessor models. Take for instance the Z80:
Source: http://landley.net/history/mirror/cpm/z80.html
Another good Z80 HW description: http://www.msxarchive.nl/pub/msx/mirrors/msx2.com/zaks/z80prg02.htm
To answer your first question (single vs. multiple buses), this chip uses a single bus for everything, and it has a very simple design. You could probably use something similar. To make the terminology clear, a single system bus may be composed of sub-buses (which are also called buses). The figure shows a system bus composed of a bidirectional data bus (8 bits wide) and an address bus (16 bits wide).
To answer your second question (how do components know when they are active): in the image above you see two distinct signals, memory request and I/O request. Only one will be active at a time, and when I/O request is active, that's when a peripheral could potentially be accessed.
If you don't have many peripherals, you don't need to use all 16 address lines (some Z80's have an 8-bit I/O space). Each peripheral would be accessed through some addresses in this space. For instance, in a very simple system:
a timer peripheral could use addresses from 00h to 03h
a UART could use addresses from 08h to 0Fh
In this simple example, you need to provide two circuits: one would detect when the address is within the range 00-03h, and another would do the same for 08-0Fh. If you do a logic "and" between the output of each detector and the I/O request signal, then you would have two signals indicating when each of the peripherals is being accessed. Your peripheral hardware should primarily listen to this signal.
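For illustration, the two detectors and the logical "and" with the I/O request could look like the following sketch (the entity and signal names are invented here, and an 8-bit I/O space is assumed):

    library ieee;
    use ieee.std_logic_1164.all;

    entity io_decoder is
      port (
        addr      : in  std_logic_vector(7 downto 0);  -- 8-bit I/O address
        io_req    : in  std_logic;                     -- active-high I/O request
        timer_sel : out std_logic;                     -- asserted for 00h-03h
        uart_sel  : out std_logic                      -- asserted for 08h-0Fh
      );
    end entity;

    architecture rtl of io_decoder is
      signal timer_hit, uart_hit : std_logic;
    begin
      -- 00h-03h: the upper six address bits are all zero
      timer_hit <= '1' when addr(7 downto 2) = "000000" else '0';
      -- 08h-0Fh: the upper five address bits are "00001"
      uart_hit  <= '1' when addr(7 downto 3) = "00001"  else '0';

      -- Gate each detector with the I/O request, as described above
      timer_sel <= timer_hit and io_req;
      uart_sel  <= uart_hit  and io_req;
    end architecture;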
Finally, regarding your question about instructions, the dataflow inside your microprocessor would have several stages. This is usually called a processor's datapath. It is common to divide the stages into:
FETCH: read an instruction from program memory
DECODE: check specific bits within the instructions, and decide what type of instruction it is
EXECUTE: take the actions required by the instruction (e.g., ALU operations)
MEMORY: for some instructions, you need to do a data read or write
WRITE BACK: update your CPU registers with new values affected by the instruction
Source: https://www.cs.umd.edu/class/fall2001/cmsc411/projects/DLX/proj.html
Most of your job of dealing with individual instructions would be done in the DECODE and EXECUTE stages. As for the datapath control, you will need a state machine that controls the sequence of operations through the 5 stages. This functional block is usually called a Control Unit. Here you have a few choices:
Your state machine could go through all stages sequentially, one at a time; an instruction would then take several clock cycles to execute (see the sketch after this list).
Similar to the choice above, but combining two or more stages in a single cycle if you want to make things simpler and faster.
Pipeline the execution of instructions. This can give a great speed boost, but maybe it's better left for later because things can get quite complex.
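For the first option, a minimal control-unit state machine that simply steps through the five stages, one per clock cycle, could look like this (all names below are illustrative, and the enable signals for the datapath are left out):

    library ieee;
    use ieee.std_logic_1164.all;

    entity control_unit is
      port (
        clk, reset : in  std_logic;
        stage      : out std_logic_vector(2 downto 0)  -- current stage, binary encoded
      );
    end entity;

    architecture rtl of control_unit is
      type state_t is (FETCH, DECODE, EXECUTE, MEM, WRITEBACK);
      signal state : state_t := FETCH;
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if reset = '1' then
            state <= FETCH;
          else
            case state is
              when FETCH     => state <= DECODE;
              when DECODE    => state <= EXECUTE;
              when EXECUTE   => state <= MEM;
              when MEM       => state <= WRITEBACK;
              when WRITEBACK => state <= FETCH;  -- start the next instruction
            end case;
          end if;
        end if;
      end process;

      -- In a real design, the control signals (PC enable, register write
      -- enable, memory read/write, ...) would be decoded from 'state' here.
      with state select
        stage <= "000" when FETCH,
                 "001" when DECODE,
                 "010" when EXECUTE,
                 "011" when MEM,
                 "100" when WRITEBACK;
    end architecture;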
As for the implementation, I recommend keeping the functional blocks as separate entities, and make sure you write a testbench for each block. Your job will go faster if you write those testbenches.
As for the blocks, the Register File is pretty easy to code. The Instruction Decoder is also easy if you have a clear idea of your instruction layout and opcodes. And the ALU is also easy if you know the operations it needs to perform.
I would start by writing testbenches for the Instruction Decoder and the Register File. Then I would write a script that runs all the testbenches and checks their results automatically. Only then I would focus on the implementation of the functional blocks themselves.
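To give a feel for the size of these blocks, here is an example Register File with two read ports and one write port. The widths (16 registers of 8 bits) and names are arbitrary choices for this sketch, not taken from PicoBlaze or any other core:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity register_file is
      port (
        clk     : in  std_logic;
        we      : in  std_logic;
        waddr   : in  std_logic_vector(3 downto 0);
        wdata   : in  std_logic_vector(7 downto 0);
        raddr_a : in  std_logic_vector(3 downto 0);
        raddr_b : in  std_logic_vector(3 downto 0);
        rdata_a : out std_logic_vector(7 downto 0);
        rdata_b : out std_logic_vector(7 downto 0)
      );
    end entity;

    architecture rtl of register_file is
      type regs_t is array (0 to 15) of std_logic_vector(7 downto 0);
      signal regs : regs_t := (others => (others => '0'));
    begin
      -- Synchronous write port
      process (clk)
      begin
        if rising_edge(clk) then
          if we = '1' then
            regs(to_integer(unsigned(waddr))) <= wdata;
          end if;
        end if;
      end process;

      -- Asynchronous (combinational) reads keep the example simple
      rdata_a <= regs(to_integer(unsigned(raddr_a)));
      rdata_b <= regs(to_integer(unsigned(raddr_b)));
    end architecture;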
Basically on-chip busses will use parallel busses for address and data input and output. Usually there will be some kind of arbiter which decides which component is allowed to write to the bus. So a common approach is:
The component that wants to write will set a data line connected to the arbiter to high or low to signal that it wants to access the bus.
The arbiter decides who gets access to the bus
The arbiter sets the chip select of the component that should be allowed next to access the bus.
Usually your on-chip bus will use a master/slave concept, so only masters can actively access the bus. The slaves only wait for requests from the master.
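As a minimal illustration of that request/grant scheme, a fixed-priority arbiter for two masters could look like this (the signal names are invented for the sketch; a real bus would also multiplex the address and data lines according to the grant):

    library ieee;
    use ieee.std_logic_1164.all;

    entity bus_arbiter is
      port (
        clk, reset : in  std_logic;
        req        : in  std_logic_vector(1 downto 0);  -- one request line per master
        grant      : out std_logic_vector(1 downto 0)   -- at most one grant active
      );
    end entity;

    architecture rtl of bus_arbiter is
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if reset = '1' then
            grant <= "00";
          elsif req(0) = '1' then      -- master 0 has the highest priority
            grant <= "01";
          elsif req(1) = '1' then
            grant <= "10";
          else
            grant <= "00";
          end if;
        end if;
      end process;
    end architecture;

A round-robin scheme would be fairer to the lower-priority master, but fixed priority keeps the sketch short.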
I for one like the AMBA AHB/APB design, but this might be a little over the top for your application. You can have a look at this book for ideas on how to implement your bus.

Design a 256x8 bit RAM using 64 rows and 32 columns programmatically using VHDL

I am new to VHDL programming, and I am going to do a project on Built-In Self-Repair. In this project I am going to design RAMs of different sizes (256 B, 8 kB, 16 kB, 32 kB, etc.), and those RAMs have to be tested using BIST and then repaired. So please help me by giving an example of how to design a RAM with 'n' rows and columns.
Start by drawing a block diagram of the RAM at the level of abstraction you want (probably gate-level). Then use VHDL to describe the block diagram.
You should probably limit yourself to a behavioral description, i.e., don't expect to be able to synthesize it. Synthesis for FPGAs usually expects a register-transfer-level description, and synthesis for ASICs is not something I would recommend for a VHDL beginner.
I will assume you want to work with SRAM, since this is the simplest case. Also, let's suppose you want to model a RAM with RAM_DEPTH words, and each word is RAM_DATA_WIDTH bits wide. One possible approach is to structure your solution in three modules:
One module that holds the RAM bits. This module should have the typical ports for a RAM: clock, reset (optional), write_enable, data_in, data_out. Note that each RAM word should be wide enough to hold the data bits plus the parity bits, which are redundant bits that will allow you to correct any errors. You can read about Hamming codes used for memory correction here: http://bit.ly/1dKrjV5. You can see a RAM modeling example from Doulos here: http://bit.ly/1aq1tn9.
A second module that loops through all memory locations, fixing them as needed. This should happen right after reset. Note that this will probably take many clock cycles (at least RAM_DEPTH clock cycles). Also note that it won't be implemented as a loop in VHDL. You could implement it using a counter, then use the count value as a read address, pass the data value through an EDC function, and then write the corrected value back to the RAM module.
A top-level entity (optional), that instantiates modules (1) and (2), and coordinates the process. This module could have a 'init_done' pin that will be asserted after the verification and correction take place. This pin should be checked by the modules that use your RAM to know whether it is safe to start using the RAM.
To summarize, you could loop through all memory locations upon reset, fixing them as needed using an error-correcting code. After making sure all memory locations are ok, just assert an 'init_done' signal.
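A rough sketch of module (2) could look like the code below. It assumes the RAM returns read data in the same cycle the address is applied, and it leaves the actual Hamming decoding as a placeholder; all names, widths and timing details are illustrative and would need to be adapted.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity ram_scrubber is
      generic (
        ADDR_WIDTH : integer := 8;   -- RAM_DEPTH = 2**ADDR_WIDTH
        DATA_WIDTH : integer := 12   -- data bits plus parity bits
      );
      port (
        clk, reset : in  std_logic;
        rd_data    : in  std_logic_vector(DATA_WIDTH-1 downto 0);
        wr_data    : out std_logic_vector(DATA_WIDTH-1 downto 0);
        addr       : out std_logic_vector(ADDR_WIDTH-1 downto 0);
        we         : out std_logic;
        init_done  : out std_logic
      );
    end entity;

    architecture rtl of ram_scrubber is
      -- One extra counter bit so the count can pass the last address.
      signal count : unsigned(ADDR_WIDTH downto 0) := (others => '0');

      -- Placeholder: a real implementation would decode the Hamming code
      -- and flip any erroneous bit; here the word is passed through untouched.
      function correct(word : std_logic_vector) return std_logic_vector is
      begin
        return word;
      end function;
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if reset = '1' then
            count <= (others => '0');
          elsif count <= 2**ADDR_WIDTH - 1 then
            count <= count + 1;      -- move on to the next address
          end if;
        end if;
      end process;

      addr      <= std_logic_vector(count(ADDR_WIDTH-1 downto 0));
      wr_data   <= correct(rd_data);                             -- corrected value written back
      we        <= '1' when count <= 2**ADDR_WIDTH - 1 else '0'; -- write during the sweep
      init_done <= '1' when count  = 2**ADDR_WIDTH     else '0'; -- asserted once the sweep is done
    end architecture;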

How to compare two implementations of the same algorithm? (by examine their Assembly code)

Assume I have two implementations of the same algorithm in assembly. I would like to know, by examining the two code snippets, which one is faster.
The parameters I thought one might take into account are: number of op-codes, number of branches, number of function frames.
My questions are:
Can I assume each opcode execution is one cycle?
What is the overhead of a branch which breaks the pipeline?
What are the effects and overhead of calling a function?
Is there a difference in the analysis between ARM and x86?
The question is theoretical, since I have two implementations: one is 130 instructions long and the other is 184 instructions long.
And I would like to know if it is definitely true to say that the 130-instruction snippet is faster than the 184-instruction implementation?
"BETTER == FASTER"
Without wanting to be flippant, the answers are
no
that depends on your hardware
that depends on your hardware
yes
You would really need to test things on your target hardware, or have a simulator that understands your hardware fully, in order to answer your question the way you meant to...
For the last part of your question, you need to define "better"…better.
Since you asked about a Cortex A9, the data sheet has instruction cycle counts in appendix B. These counts generally assume that the memory bus is fast enough to keep the CPU busy. In reality this is rarely the case. Many video/audio algorithms will have a big win in how they access memory.
One cycle per op
Of course you can't assume this if you want an exact count. However, if you are deciding which algorithm to choose, you can get a feel for the best algorithm by looking at the instructions in the inner loop. Here, your cache should allow the code to execute as per the instruction counts in the data sheet. If the counts are close, then you probably need to look at each instruction. Loads/stores are more expensive and usually take multiple cycles, etc. Some algorithms, especially cryptographic ones, will have big wins by using assembler that doesn't map well to C. For example, clz, ror, using the carry for multi-word arithmetic, etc.
Branch overhead
Look in Appendix B, or whatever data sheet has cycle counts for your processor. For an ARM926 it is about 3 cycles. The compiler only generates two conditional opcodes in a row to avoid branching, otherwise, it branches. If the algorithm is large, the branch may disrupt the cache. A hard answer depends on your CPU, cache, and memory. According to the Cortex A9 datasheet (B.5), there is only one cycle overhead to a fixed branch.
Function overhead
This is much the same as the branch overhead. However, the compiler will also have an influence (as noted by Jim): does it cache-align functions? Does the compiler perform leaf-function optimizations, etc.? With modern gcc versions, if all the functions are static, the compiler will generally in-line them when it is advantageous. If the algorithms are particularly large, a register spill may be advantageous. However, with your example of 130/184 instructions, this seems unlikely. The compiler options will obviously affect the overhead. You can use objdump -S to examine the prologue/epilogue and then determine the number of cycles for your hardware.
ARM versus x86
Of course there is a technical difference in the cycle counts. The CISC x86 also has variable instruction size. This complicates the analysis. It is slightly easier on the ARM.
Normally, you want to ballpark things and then actually run them with a profiler. The estimates can help guide development of the algorithms: loop/memory tuning, etc. for your hardware. Something like instruction emulation, page or alignment faults, etc. may be dominant and make all the cycle count analysis meaningless. If the algorithm is in user space, preemption may negate cache wins from run to run. It is possible that one algorithm will work better in a lightly loaded system and the other will work better under a higher load.
A note on cycle counts
See the post-processed objdump output for some complications in getting cycle counts. Basically a typical CPU has several phases (a pipeline) and different conditions can cause stalls. As CPUs become more complex, the pipeline typically gets longer, meaning there are more conditions or phases which can stall. However, cycle count estimates can be helpful in guiding development of an algorithm and evaluating it. Things like memory timing or branch prediction can be just as important, depending on the algorithm. That is, cycle counts are not completely useless, but they are not complete either. Profiling should confirm actual algorithm times. If they diverge, instruction re-ordering, pre-fetching and other techniques may bring them closer. The fact that cycle counts and active profiling diverge can be helpful in itself.
It is definitely not true to say that the 130-instruction code is faster than the 184-instruction code. It is very easy to have 1000 instructions run faster than 100, and vice versa, on either of these platforms.
1 Can I assume each opcode execution is one cycle?
Start by looking at the advertised MIPS/MHz; although it is a marketing number, it gives a rough idea of what is possible. If the number is greater than one, then more than one instruction per clock is possible.
2 What is the overhead of a branch which breaks the pipeline?
Anywhere from absolutely no effect to a very dramatic effect, on either system. The potential penalty ranges from one clock to hundreds.
3 What are the effects and overhead of calling a function?
Depends heavily on the function, and on the function calling the function. Depending on the calling convention you might have to save registers to the stack, or rearrange the contents of registers to prepare the parameters for the function to be called. If passing a struct by value, a copy of the struct may need to be made on the stack; the bigger the struct passed, the bigger the copy. Once in the function, a stack frame may need to be prepared, etc. There are many factors involved. This question and answer are also independent of platform.
4 Is there a difference in the analysis between ARM and x86?
Yes and no. Both systems use all the modern tricks of pipelining, branch prediction, etc. to keep the MIPS/MHz up. ARM is going to give better MIPS per MHz than x86; x86, being variable instruction length, might fit more instructions per unit of cache. How you analyze the cache, memory and peripheral systems on the system side of the analysis is roughly the same. The comparison of the instructions and core is similar and different depending on what aspects you are analyzing. The ARM is not microcoded; the x86 likely is, so you don't really see how many registers there really are, things like that. At the same time, with the x86 you can get a better look at the memory system than with the ARM, since x86 parts are generally not a system on a chip. Depending on what ARM chip you buy, you may lose a lot of visibility at the boundaries of the chip and might not see all the memory and peripheral buses, for example (x86 is changing that by putting PCIe on chip now, for example). In the case of something in the Cortex-A class you mentioned, you would have similar edge-of-chip visibility, as those would use larger/cheaper DRAM-based memory off chip rather than microcontroller-like on-chip resources.
Bottom line your final question:
"And I would like to know if it is definitely true to say the 130 instructions long snippet is faster than the 184 instructions long implementation?"
It is definitely NOT TRUE to say the 130-instruction snippet is faster than the 184-instruction snippet. It might be faster, it might be slower, and it might be about the same. With a lot more information we might be able to make a pretty good statement, or it may still be non-deterministic. It is easy to choose 100 instructions that execute faster than 1000 instructions, and likewise easy to choose 1000 instructions that execute faster than 100 instructions (even if I were to add no branching and no loops, just linear execution).
Your question is almost entirely meaningless: It probably depends on your input.
Most CPUs have something resembling a branch misprediction penalty (e.g. traditional ARM which throws away an instruction fetch/decode on any taken branch, IIRC). ARM and x86 also allow conditional execution, which can be faster than branching. If either of these are dependent on input data, then different inputs will follow different code paths.
Perhaps one version heavily uses conditional execution, which is wasteful when the condition is false. Perhaps another was compiled using some profiling information that performs no branches (except the return at the end) for a specific case. There are many, many reasons why a compiler can take the same source and produce an "optimized" output which is faster for one input and slower for another.
Many optimizations have this characteristic — for example, aligning the start of a loop to 16 bytes helps on some processors, but not when the loop is only executed once.
Some textbook answer to this question, from the Cortex™-A Series Programmer's Guide, chapter 17:
Although cycle timing information can be found in the Technical Reference Manual (TRM) for the processor that you are using, it is very difficult to work out how many cycles even a trivial piece of code will take to execute. The movement of instructions through the pipeline is dependent on the progress of the surrounding instructions and can be significantly affected by memory system activity. Pending loads or instruction fetches which miss in the cache can stall code for tens of cycles. Standard data processing instructions (logical and arithmetic) will take only one or two cycles to execute, but this does not give the full picture. Instead, we must use profiling tools, or the system performance monitor built-in to the processor, to extract useful information about performance.
Also read section 17.4, Cortex-A9 micro-architecture optimizations, which addresses your question in much more detail.

How instructions are differentiated from data?

While reading an ARM core document, this question came up: how does the CPU differentiate the data it reads from the data bus, i.e., whether to execute it as an instruction or treat it as a data item it can operate upon?
Refer to the excerpt from the document:
"Data enters the processor core through the Data bus. The data may be an instruction to execute or a data item."
Thanks in advance for enlightening me!
/MS
Simple answer - it doesn't. Machine code instructions are just binary numbers, as are data. More complicated answer - your processor may (or may not) provide segmentation of memory, meaning that attempting to execute what has been specified as data causes a trap of some sort. This is one of the meanings of a "segmentation fault" - the processor tried to execute something that was not labelled as being executable code.
Each opcode will consist of an instruction of N bytes, which then expects the subsequent M bytes to be data (memory pointers etc.). So the CPU uses each opcode to determine how many of the following bytes are data.
Certainly for old processors (e.g. old 8-bit types such as 6502 and the like) there was no differentiation. You would normally point the program counter to the beginning of the program in memory and that would reference data from somewhere else in memory, but program/data were stored as simple 8-bit values. The processor itself couldn't differentiate between the two.
It was perfectly possible to point the program counter at what had been deemed data, and in fact I remember an old college tutorial where my professor did exactly that, and we had to point the mistake out to him. His response was "but that's data! It can't execute that! Can it?", at which point I populated our data with valid opcodes to prove that, indeed, it could.
The original ARM design had a three-stage pipeline for executing instructions:
FETCH the instruction into the CPU
DECODE the instruction to configure the CPU for execution
EXECUTE the instruction.
The CPU's internal logic ensures that it knows whether it is fetching data in stage 1 (i.e. an instruction fetch), or in stage 3 (i.e. a data fetch due to a "load" instruction).
Modern ARM processors have a separate bus for fetching instructions (so the pipeline doesn't stall while fetching data), and a longer pipeline (to allow faster clock speeds), but the general idea is still the same.
Each read by the processor is known to be a data fetch or an instruction fetch. All processors, old and new, know their instruction fetches from data fetches. From the outside you may or may not be able to tell; usually not, except for Harvard-architecture processors of course, which the ARM is not. I have been working with the MPCore (ARM11) lately, and there are bits on the external interface that tell you a little about what kind of read it is, mostly to hook up an external cache; combine that with knowledge of whether you have the MMU and L1 cache on, and you can tell data from instruction, but that is the exception to the rule. From a memory bus perspective it is just data bits; you don't know data from instruction, but the logic that initiated that memory cycle and is waiting for the result knew before it started the cycle what kind of fetch it was and what it is going to do with that data when it gets it.
I think it's down to where the data is stored in the program and to OS support for informing the CPU whether it is code or data.
All code is placed in a different segment of the image (along with static data like constant character strings) from the storage for variables. The OS (and memory management unit) need to know this because they can swap code out of memory by simply discarding it and reloading it from the original disk file (at least that's how Windows does it).
So, I think the CPU 'knows' whether memory is data or code. No doubt the modern pipelining CPUs we have now also have instructions to read this memory differently to assist the CPU in processing it as fast as possible (e.g. code may not be cached, data will always be accessed randomly rather than in a stream).
It's still possible to point your program counter at data, but the OS can tell the CPU to prevent this - see the NX bit and Windows' 'Data Execution Prevention' settings (system control panel).
