Using Qsys (Quartus II x64 15.0.1 build 150) I made a system with a Nios2/e and several standard peripheral components. I also added my custom component with 1 MM-Slave and 2 Interrupt Senders. For each of them I set this slave as the "Associated addressable interface" in the Component Editor while creating the _hw.tcl file.
Qsys reports no errors or warnings, but when I tried to make a BSP project in Eclipse using the New | Nios 2 BSP project wizard and selected the "SOPC Information File name", the "CPU" combo box remained empty and an error appeared: "No Nios II CPU Found".
Then I launched the BSP Editor from the main menu (Nios 2 | BSP Editor) and pressed File | New Nios 2 BSP. I again provided the SOPC file, and this tool found the CPU but reported the error: "Can only have at most one IRQ associated with the following slaves of module "my_component" : mm_slave."
I then returned to Qsys and removed one of the Interrupt Senders, and this time everything worked fine, but I need to generate more than one interrupt.
So what should I do if I have a Nios2/e connected to a custom peripheral with 1 MM-Slave and several Interrupt Senders?
I have some ideas, but I don't like any of them:
1. Add an MM-Slave for each IRQ (this looks like a waste of resources).
2. Do not specify the "Associated addressable interface" in the Component Editor (by the way, this works, but I don't know whether it will work properly all the time). What does this option really do?
Update: I was imprecise in saying that this works, sorry for that. In reality the Qsys system and the BSP can be generated, but inside the BSP's system.h the IRQ number will be defined as -1, so it will not work.
3. Merge all interrupts into one wire (they will all share the same priority).
4. Configure the Interrupt Sender to have an irq signal with a width greater than 1 (the Component Editor allows this but reports a warning: "interrupt_sender: Signal irq_many[4] of type irq must have width [1]"). As with case 2, I don't know what will happen inside Altera's generators/compilers.
Update: after the Component Editor stage is finished, Qsys doesn't accept such a system.
Please help.
In the end, I found the following:
A. If you need many IRQ Senders inside one custom Qsys module, you need one MM-Slave for each. From the GUI organization it seems that you are assigning an MM-Slave to the IRQ, but (as far as I understand it) it works the other way around: the IRQ is tied to the MM-Slave, and each MM-Slave may have at most one IRQ. I didn't actually try to run it, but the BSP files look correct and everything compiles, at least.
I hope that there is (or will be) a better way to achieve this.
B. If all interrupts can share the same priority, then you can avoid the problem by using only 1 Interrupt Sender and thus only 1 MM-Slave. This works on a dev board.
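With option B the demultiplexing moves into software: the single ISR reads a status register to find out which internal event fired. Below is a minimal sketch of the Nios II side, under the assumption that the component ORs its event flags into the one irq output and exposes them in a status register with write-1-to-clear semantics. MY_COMPONENT_BASE, MY_COMPONENT_IRQ and the register layout are placeholders for the names generated into system.h.

/* Sketch of option B: one shared IRQ, demultiplexed through a status
 * register. All component-specific names here are hypothetical. */
#include "system.h"
#include "alt_types.h"
#include "io.h"
#include "sys/alt_irq.h"

#define STATUS_REG 0 /* word offset: one bit per internal event source */

static void my_component_isr(void *context, alt_u32 id)
{
    alt_u32 pending = IORD(MY_COMPONENT_BASE, STATUS_REG);

    if (pending & 0x1) { /* handle event source 0 */ }
    if (pending & 0x2) { /* handle event source 1 */ }

    /* Acknowledge all handled sources (assumed write-1-to-clear). */
    IOWR(MY_COMPONENT_BASE, STATUS_REG, pending);
}

int main(void)
{
    alt_irq_register(MY_COMPONENT_IRQ, NULL, my_component_isr);
    /* ... */
    return 0;
}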
Coming from this question yesterday, I decided to port this library to my board. I was aware that I would need to change something, so I compiled the library, called it from a small program, and watched what happened. The first problem is here:
// Check for GPIO and peripheral addresses from device tree.
// Adapted from code in the RPi.GPIO library at:
// http://sourceforge.net/p/raspberry-gpio-python/
FILE *fp = fopen("/proc/device-tree/soc/ranges", "rb");
if (fp == NULL) {
    return MMIO_ERROR_OFFSET;
}
This lib is aimed at the RPi, so the structure of the system on my board is not the same. I was wondering if somebody could tell me where I could find this file, or what it looks like, so I can find it by myself in order to proceed with the job.
Thanks.
You don't necessarily want that "file" (or more precisely /proc node).
The code this is found in is setting up to do direct memory-mapped I/O, using what appears to be a Pi-specific, GPIO-flavored version of the /dev/mem type of device driver for exposing hardware special-function registers to userspace.
To port this to your board, you would need to first determine if there is a /dev/mem or similar capability in your kernel which you can activate. Then you would need to determine the appropriate I/O registers for GPIO pins. The pi-specific code is reading the Device Tree to figure this out, but there are other ways, for example you can manually read the programmer's manual of the SoC on which you are running.
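If such a capability exists, the access pattern is the classic one: open /dev/mem and mmap the page containing the registers. Below is a minimal sketch, not the library's code; in particular GPIO_BASE is a made-up value that you must replace with the physical address from your SoC's reference manual.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_BASE 0x01C20800u /* assumption: replace with your SoC's GPIO address */
#define MAP_SIZE  4096u

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page of physical address space containing the GPIO registers. */
    volatile uint32_t *gpio = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first GPIO register: 0x%08x\n", (unsigned)gpio[0]);

    munmap((void *)gpio, MAP_SIZE);
    close(fd);
    return 0;
}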
Another approach you can consider is adding some small microcontroller (or yes, barebones ***duino) to the system, and using that to collect information from various sensors and peripherals. This can then be forwarded to the SoC over a UART link, or queried out via I2C or similar - add a small amount of cost and some degree of bottleneck, but also means that the software on the SoC then becomes very portable - to a different comparable chip, or perhaps even to run on a desktop PC during development.
In QSYS I have an ADC, PLL and an Avalon-MM Read Master to access the internal ADC of the Altera Max10. The control and user interface of the Read Master are exported.
Now I am struggling to set up the control interface to access the ADC channels, mainly the following signals:
control_fixed_location
control_read_base
control_read_length
(The interface description and the block diagram for the Read Master were attached as images.)
Questions:
- How do I need to set the control signals to access the ADC channel x?
- Where can I find the base address for the ADC implemented in QSYS?
Attached is the Quartus archive. Maybe someone can give me an example of how to simulate this interface in ModelSim.
Thanks in advance!
I have an answer to your second question. I am struggling myself with the first question.
Where can I find the base address for the ADC implemented in QSYS?
I know two methods to find the base and end address of a component.
One is to open the System Contents view (the default view) and scroll to the right side.
I am not permitted to embed images yet.
There you will see columns named Base and End; here you can find the addresses.
The second method is to open the Address Map. It should be located in the same column as System Contents, or you can select View in the top-left corner and select it there.
Have a look. You should be able to find it yourself with this information.
What I use when I am searching for examples or prebuilt designs is the Altera website. Here is a link for you: https://cloud.altera.com/devstore/platform/
You will probably like this one: https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/hb/max-10/ug_m10_adc.pdf
Is the configuration complete in QSYS? That is, have you selected the channels and the sequencer in your ADC block, and selected the right input clocks and the right frequencies?
You wrote:
In QSYS I have an ADC, PLL and an Avalon-MM Read Master to access the internal ADC of the Altera Max10. The control and user interface of the Read Master are exported.
Have you created a clock for your PLL? When I want to simulate a clock signal for a QSYS system, I export the clock signals and define the desired clock in an additional file.
When you go one step further and include a Nios II processor, I recommend having a look at the altera_modular_adc.c file.
Edit:
If you haven't assigned any base addresses, there is a function in QSYS which does the job for you:
In System (in the same menu bar as File) -> Assign Base Addresses
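For what it's worth, once base addresses are assigned and a Nios II BSP is generated, they also show up as macros in the BSP's system.h. A hypothetical excerpt (the actual macro names follow the instance names in your QSYS system):

/* Hypothetical excerpt from a generated system.h. */
#define ADC_0_BASE 0x00004000
#define ADC_0_IRQ  1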
Basically, this is the same question that was asked here.
When performing kernel debugging of a machine running Windows 7 or older, with WinDbg version 6.2 and up, the debugger doesn't show anything in the registers window. Pressing the Customize... button results in a message box that reads Registers are not yet known.
At the same time, issuing the r command results in perfectly valid register values being printed out.
What is the reason for this behaviour, and can it be fixed?
TL;DR: I wrote an extension DLL that fixes the bug. Available here.
The Problem
To understand the problem, we first need to understand that WinDbg is basically just a frontend to Microsoft's Windows Symbolic Debugger Engine, implemented inside dbgeng.dll. Other frontends include the command-line kd.exe (kernel debugger) and cdb.exe (user-mode debugger).
The engine implements everything we expect from a debugger: working with symbol files, reading and writing memory and registers, setting breakpoints, etc. The engine then exposes all of this functionality through COM-like interfaces (they implement IUnknown but are not registered components). This allows us, for instance, to write our own debugger (like this person did).
Armed with this knowledge, we can now make an educated guess as to how WinDbg obtains the values of the registers on the target machine.
The engine exposes the IDebugRegisters interface for manipulating registers. This interface declares the GetValues method for retrieving the values of multiple registers in one go. But how does WinDbg know how many registers there are? That's why we have the GetNumberRegisters method.
So, to retrieve the values of all registers on the target, we'll have to do something like this:
Call IDebugRegisters::GetNumberRegisters to get the total number of registers.
Call IDebugRegisters::GetValues with the Count parameter set to the total number of registers, the Indices parameter set to NULL, and the Start parameter set to 0.
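In code, this boils down to something like the sketch below (assuming we already hold an IDebugRegisters pointer, obtained for example by QueryInterface from the debug client; C-style COM calls go through lpVtbl):

#include <windows.h>
#include <dbgeng.h>
#include <stdlib.h>

/* Sketch: read every register on the target in one GetValues call. */
HRESULT DumpAllRegisters(IDebugRegisters *regs)
{
    ULONG count = 0;
    HRESULT hr = regs->lpVtbl->GetNumberRegisters(regs, &count);
    if (FAILED(hr)) return hr;

    DEBUG_VALUE *values = (DEBUG_VALUE *)calloc(count, sizeof(DEBUG_VALUE));
    if (values == NULL) return E_OUTOFMEMORY;

    /* Indices == NULL and Start == 0 request registers 0..count-1. */
    hr = regs->lpVtbl->GetValues(regs, count, NULL, 0, values);

    /* ... use the values ... */
    free(values);
    return hr;
}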
One tiny problem, though: the second call fails with E_INVALIDARG.
Ehm, excuse me? How can it fail? Especially puzzling is the documentation for this return value:
The value of the index of one of the registers is greater than the number of registers on the target machine.
But I just asked you how many registers there are, so how can that value be out of range? Okay, let's continue reading the docs anyway, maybe something will become clear:
If the return value is not S_OK, some of the registers still might have been read. If the target was not accessible, the return type is E_UNEXPECTED and Values is unchanged; otherwise, Values will contain partial results and the registers that could not be read will have type DEBUG_VALUE_INVALID.
(Emphasis mine.)
Aha! So maybe the engine just couldn't read one of the registers! But which one? It turns out that the engine chokes on the xcr0 register. From the Intel 64 and IA-32 Architectures Software Developer’s Manual:
Extended control register XCR0 contains a state-component bitmap that specifies the user state components that software has enabled the XSAVE feature set to manage. If the bit corresponding to a state component is clear in XCR0, instructions in the XSAVE feature set will not operate on that state component, regardless of the value of the instruction mask.
Okay, so the register controls the operation of the XSAVE instruction, which saves the state of the CPU's extended features (like XMM and AVX). According to the last comment on this page, this instruction requires some support from the operating system. Although the comment states that Windows 7 (which is what the VM I was testing on was running) does support this instruction, it seems that the issue at hand is related to the OS anyway, since when the target is Windows 8 everything works fine.
Really, it's unclear whether the bug is within the debugger engine, which reports more registers than it can retrieve values for, or within WinDbg, which refuses to show any values at all if the engine fails to produce all of them.
The Solution
We could, of course, bite the bullet and just use an older version of WinDbg for debugging older Windows versions. But where's the challenge in that?
Instead, I present to you a debugger extension that solves this problem. It does so by hooking (with the help of this library) the relevant debugger engine methods and returning S_OK if the only register that failed was xcr0. Otherwise, it propagates the failure. The extension supports runtime unload, so if you experience problems you can always disable the hooks.
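The gist of the workaround, as a rough sketch (the hooking plumbing is omitted and the names here are illustrative, not the extension's actual source):

#include <windows.h>
#include <dbgeng.h>

/* Illustrative replacement for IDebugRegisters::GetValues, installed by
 * overwriting the vtable entry; OriginalGetValues holds the saved pointer. */
typedef HRESULT (STDMETHODCALLTYPE *GetValues_t)(
    IDebugRegisters *, ULONG, PULONG, ULONG, PDEBUG_VALUE);

static GetValues_t OriginalGetValues;

static HRESULT STDMETHODCALLTYPE HookedGetValues(
    IDebugRegisters *self, ULONG count, PULONG indices,
    ULONG start, PDEBUG_VALUE values)
{
    HRESULT hr = OriginalGetValues(self, count, indices, start, values);
    if (SUCCEEDED(hr) || values == NULL) return hr;

    ULONG xcr0;
    if (self->lpVtbl->GetIndexByName(self, "xcr0", &xcr0) != S_OK)
        return hr;

    /* If the only register that could not be read is xcr0, report success;
     * otherwise propagate the original failure. */
    for (ULONG i = 0; i < count; i++) {
        ULONG index = (indices != NULL) ? indices[i] : start + i;
        if (values[i].Type == DEBUG_VALUE_INVALID && index != xcr0)
            return hr;
    }
    return S_OK;
}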
That's it, have fun!
I am working on a boot loader for the TMS320DM6437. The idea is to create two independent firmware images, one of which will update the other. In firmware1 I will download the firmware2 file and write it to NOR flash at a specified address. Both firmware images are stored in NOR flash in AIS format. So now I have two applications in flash: one is my custom boot loader, and the second is my main project. I want to know how I can jump from the first program to the second program located at a specified address. I would also appreciate pointers to documents that may help me create a custom bootloader.
Any recommendations?
You can jump to the entry point. I'm using this approach on TMS320 2802x and 2803x devices, but it should be the same.
The symbol of the entry point is c_int00.
To get to know the address of c_int00 in the second application, you have to fix the Run-Time Support (RTS) library at a specific address by modifying the linker command file.
Otherwise you can leave the RTS unconstrained and create a C variable (at a fixed address) that is initialized with the address of c_int00. Using this method your memory map is more flexible, and you can add, together with the C variable, a comprehensive data structure with other information for your bootloader, e.g. CRC, version number, etc., as in the sketch below.
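A minimal sketch of the bootloader side, assuming the second application has placed such a descriptor at a fixed, agreed-upon address (the address and the layout here are made up for illustration):

#include <stdint.h>

typedef void (*entry_point_t)(void);

/* Descriptor the second application places at a fixed address. */
struct app_descriptor {
    entry_point_t entry;   /* address of the application's c_int00 */
    uint32_t      crc;     /* e.g. CRC over the application image */
    uint32_t      version;
};

#define APP_DESCRIPTOR_ADDR 0x10800000u /* assumption: match your memory map */

static void jump_to_application(void)
{
    const struct app_descriptor *app =
        (const struct app_descriptor *)APP_DESCRIPTOR_ADDR;

    /* Validate the image (CRC, version) here before jumping. */
    app->entry(); /* never returns: c_int00 re-initializes the C environment */
}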
Be careful with the (re)initialization of the peripherals in the second application, since you are not starting from a hardware reset, and you may need to explicitly reset some more registers or clear pending interrupt requests.
I am using the MSP430F2013 processor for an application; it doesn't have a UART. I need a UART, so I used TI's sample code "msp430x20x3_ta_uart2400.c" to emulate one using the Timer module. This all worked fine (compiled with IAR Embedded Workbench): I tested it by using PuTTY to transmit characters to a development board, with a loopback to echo them back to the terminal.
That was a de-risking exercise, and now I've come to port that code into my application's state machine. Having done this, I'm having issues surrounding the timer interrupts and low power sleep modes. Here's the snippet of my code around the entry into the low power (sleep) mode:
// Prepare the UART to receive one byte.
prepare_receiver();
// Enter low power mode 1.
__bis_SR_register(LPM1_bits + GIE);
// Check whether the full message has been received.
if(true == get_message_complete())
{
    process_event(e_euart_message_received, NULL);
}
What I'm seeing in the debugger (C-Spy) is that sometimes it will execute the __bis_SR_register() line on first entry and then go straight to the if statement, i.e., ignoring the fact that I've asked it to go to sleep. On other occasions, when it does go to sleep when it should, the ISR triggers correctly and eventually brings me back to the if statement to continue program execution (as I'm expecting). However, if I try to step to the next statement, the application freezes on that first line, i.e., I can't advance.
I can't think of anything functionally different from TI's example in what I'm doing, so I figure my problem must be something to do with how I've ported it. For example, my Timer ISR and the code I've posted here are in different compilation units - would this sort of decision have any bearing on things? I'm aware my question might be a little vague, but unfortunately I can't post all of my code, so instead I'm looking for someone with MSP experience who might be able to suggest some things to look at or some potential pitfalls that I may have fallen into. The wake-up itself follows TI's sample: the receive ISR clears the LPM bits on exit, along the lines of the sketch below.
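A simplified sketch of that wake-up path (the names are placeholders; the real code follows TI's sample):

#include <msp430x20x3.h>
#include <intrinsics.h>

extern volatile int rx_bits_remaining; /* placeholder for the RX bookkeeping */

/* Timer_A CCR0 interrupt: samples/shifts one UART bit per tick (omitted). */
#pragma vector = TIMERA0_VECTOR
__interrupt void timer_a0_isr(void)
{
    /* ... sample the RX pin and shift the bit into the current byte ... */

    if (rx_bits_remaining == 0)
    {
        /* Clear the LPM bits in the stacked status register so that, after
           RETI, execution resumes at the line after __bis_SR_register(). */
        __bic_SR_register_on_exit(LPM1_bits);
    }
}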
Debugging interrupts with C-SPY in low power mode is going to be tricky. According to Section A.3, Debugging (C-SPY), of the IAR User's Guide:
5) C-SPY can debug applications that utilize interrupts and low power modes
But there are some "gotchas" that you should be aware of that may be causing your headaches.
In particular:
14) When C-SPY has control of the device, the CPU is ON (that is, it is not in low-power mode) regardless of the settings of the low-power mode bits in the status register. Any low-power mode conditions are restored prior to Step or Go. Consequently, do not measure the power consumed by the device while C-SPY has control of the device. Instead, run your application using Go with JTAG released.

19) C-SPY utilizes the system clock to control the device during debugging. Therefore, device counters, etc., that are clocked by the Main System Clock (MCLK) are affected when C-SPY has control of the device. Special precautions are taken to minimize the effect upon the Watchdog Timer. The CPU core registers are preserved. All other clock sources (SMCLK, ACLK) and peripherals continue to operate normally during emulation. In other words, the Flash Emulation Tool is a partially intrusive tool. Devices that support clock control (Emulator → Advanced → Clock Control) can further minimize these effects by selecting to stop the clock(s) during debugging.

24) Peripheral bits that are cleared when read during normal program execution (that is, interrupt flags) are cleared when read while being debugged (that is, memory dump, peripheral registers). When using certain MSP430 devices (such as MSP430F15x, MSP430F16x, MSP430F43x, and MSP430F44x devices), bits do not behave this way (that is, the bits are not cleared by C-SPY read operations).

26) While single stepping with active and enabled interrupts, it can appear that only the interrupt service routine (ISR) is active (that is, the non-ISR code never appears to execute, and the single step operation always stops on the first line of the ISR). However, this behavior is correct because the device always processes an active and enabled interrupt before processing non-ISR (that is, mainline) code. A workaround for this behavior is, while within the ISR, to disable the GIE bit on the stack so that interrupts are disabled after exiting the ISR. This permits the non-ISR code to be debugged (but without interrupts). Interrupts can later be reenabled by setting GIE in the status register in the Register window. On devices with the clock control emulation feature, it may be possible to suspend a clock between single steps and delay an interrupt request (Emulator → Advanced → Clock Control).
One thing to try is commenting out all the low power code and seeing if your UART code works like that. Then go back and try re-enabling the low power mode.
The answer to this question lies in the debugging setup, and more specifically in what types of breakpoints are being used. I had quite a complex series of macros that were running on program upload, which set various hooks into memory for testing purposes. These hooks relied on software breakpoints being created, which would then call functions outside of the application. I have seen no problem in using these breakpoints in normal use; however, their existence means that the debugging session doesn't run in real time (i.e., the device is under control of the host PC). This, for a reason not yet completely known to me, caused problems when trying to debug interrupts and low power modes. (I suspect that if I were to look a bit deeper, I would see the need to use clock control whilst debugging, but I'll save that for another day.)
So, to solve this problem and allow me to debug my interrupt and low power mode heavy code, which I'd ported into my larger application state machine, I had to do the following:
1. Disable software breakpoints within IAR. They're not actually enabled by default, but if you've been doing clever things with macros like I had, you probably would've needed to enable them, since there just aren't enough hardware breakpoints available in most MSP430s (for instance, I only have two in the MSP430F2013, and C-SPY more often than not hogs one of those!). The obvious downside to this is that debugging becomes a bit more laborious, but at least it's reliable.
2. Remove links to .mac macro files. In other words, if you're using macros, don't. In my case, this meant that I had to hack some state machine logic in order to force myself down a certain route (that previously the macro had been taking for me). This clearly isn't ideal, but it will allow you to debug the interrupt/low power mode code. The macros can then be re-enabled afterwards.
So it turned out that there wasn't a problem with my port after all. I'm not particularly happy with this hacky solution, but at least it's a step forward. If I have the time, I'll investigate whether I can work out a way of using software breakpoints, and add to this answer.