Disable ARM M0+ debug port after loading firmware - debugging

I am working on an ARM Cortex-M0+. I need to put the CPU into a deep sleep mode to measure its standby power consumption. I use a Keil ULINK debugger to load the firmware. However, the debugger prevents the CPU from sleeping while it is connected. Is it possible to disable the debug port after I load/run the firmware? How can I do that?

It seems this functionality falls into the grey area between architected functionality, device-specific features, and tool capabilities.
The ARM ADIv5 debug interface can certainly request debug power (the CDBGPWRUPREQ bit in the DP's CTRL/STAT register). When tools connect over SWD or JTAG, they have to set this before being able to make any accesses. The bit won't be cleared by simply pulling the connection (there is no liveness indication on the target side), and clearing it using a debug toolchain (as opposed to a low-level driver) might be tricky.
Some STM32 devices seem to provide DBGMCU_Config in the vendor-specific library to control the interaction between sleep states and debug. It permits either emulating low-power states (i.e. the core remains active, just stalled) or really sleeping even when debug is connected.
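As a hedged sketch of what that looks like in practice - taking the STM32F1 Standard Peripheral Library as the example, so DBGMCU_Config and the bit names below are specific to that library, and your part's equivalent may differ:

```c
#include "stm32f10x.h"   /* assumption: STM32F1 Standard Peripheral Library */

void enter_deep_sleep_for_measurement(void)
{
    /* Stop the debug logic from emulating low-power states; with these
       bits disabled the core really powers down in Sleep/Stop/Standby,
       and the debugger loses contact once deep sleep is entered. */
    DBGMCU_Config(DBGMCU_SLEEP | DBGMCU_STOP | DBGMCU_STANDBY, DISABLE);

    /* Request a deep sleep state (Stop/Standby rather than plain Sleep). */
    SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;
    __WFI();   /* wait-for-interrupt: enter the low-power state */
}
```

Call this from the running firmware rather than from a debugger script, and detach the probe before measuring; the standby current figures only become meaningful once the debug logic is quiescent.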
This level of detail is generally described in the device-specific documentation from the vendor, and there may be more than one way of achieving what you need. A power-sensitive part is more likely to have an app note on the kind of measurement you're looking for.

Related

How can I override the CUDA kernel execution time limit on Windows with a secondary GPU?

Nvidia's website explains the time-out problem:
Q: What is the maximum kernel execution time?
On Windows, individual GPU program launches have a maximum run time of around 5 seconds. Exceeding this time limit usually will cause a launch failure reported through the CUDA driver or the CUDA runtime, but in some cases can hang the entire machine, requiring a hard reset. This is caused by the Windows "watchdog" timer that causes programs using the primary graphics adapter to time out if they run longer than the maximum allowed time.
For this reason it is recommended that CUDA is run on a GPU that is NOT attached to a display and does not have the Windows desktop extended onto it. In this case, the system must contain at least one NVIDIA GPU that serves as the primary graphics adapter.
Source: https://developer.nvidia.com/cuda-faq
So it seems that Nvidia believes, or at least strongly implies, that having multiple (Nvidia) GPUs, with proper configuration, can prevent this from happening?
But how? So far I have tried many things, but the block-out still occurs on a GK110 GPU that is: (1) plugged into a secondary PCIe 16x slot; (2) not connected to any monitor; (3) set as a dedicated PhysX card in the driver control panel (as recommended by others).
If your GK110 is a Tesla K20c GPU, then you should switch the device from WDDM mode to TCC mode. This can be done with the nvidia-smi.exe tool that gets installed with the driver. Use the Windows search function to find this file (nvidia-smi.exe), then use the command-line help (`nvidia-smi --help`) to discover the commands necessary to switch a GPU from WDDM to TCC mode.
Once you have done this, the windows watchdog mechanism will no longer pay attention to your GK110 device.
If, on the other hand, it is a GeForce GPU, there is no way to switch it to TCC mode. Your only option is to modify the registry settings that control the watchdog (the Timeout Detection and Recovery, or TDR, keys), which is somewhat difficult. Your mileage may vary, as the exact structure of the registry keys varies by OS version.
If a GPU is in WDDM mode, it is subject to the watchdog timer.

Setting an MCU in low-power mode on Linux

I'm running embedded Linux (Angstrom) on an Atmel board (AT91SAM9G25) with an ARM MCU.
I'd like to put the CPU in idle mode, ideally from userspace by calling a function (the system would then be woken up by a hardware GPIO interrupt). How can I do that? Alternatively, how can it be done in kernel space?
I cannot find much - maybe somebody has an example to start from?
Try checking this page. Also try reading Optimizing Power Consumption for AT91SAM9261-based Systems to get an idea of what you can do with power management.
What you can basically do is write the state you want into /sys/power/state, but before entering the low-power state you need to configure how your system can be woken up.
Be advised that, in my experience, behavior varies a lot between kernel versions, so be patient and try different versions.
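For the userspace side, here is a minimal sketch (assuming the kernel was built with suspend support and that a wakeup source, such as your GPIO, has already been configured as wakeup-capable):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/sys/power/state", O_WRONLY);
    if (fd < 0) {
        perror("open /sys/power/state");
        return 1;
    }
    /* The write blocks while the system is suspended and returns after
       a configured wakeup source (e.g. a GPIO interrupt) fires. Valid
       states depend on the kernel; "standby" and "mem" are common. */
    if (write(fd, "mem", strlen("mem")) < 0)
        perror("suspend failed");
    close(fd);

    puts("woke up");
    return 0;
}
```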

8051 serial debug monitors

I'm working with an 8051 (Cypress FX2LP) that doesn't have JTAG/BDM capability. Typically, developers on this project have used ad-hoc serial printfs for debugging. I'm looking into options for serial debug monitors such as Keil's Mon51, Isd51, or IAR's generic ROM monitor.
I'll need to modify/configure the debug monitor to write to code RAM (to set software breakpoints). I'd guess that most 8051 debug monitors allow such modifications, in order to support the Harvard architecture or bank switching.
Does anybody have recommendations for serial debuggers for 8051 or similar processors?
Have you had to modify it to write to Harvard code RAM or flash etc?
For years I used Keil uVision PK51 and the Cypress FX2 EZ-USB development kit. This kit (EZ-USB_devtools_version_261700.zip) worked correctly with the FX2 and FX2LP.
It includes a Windows driver that automatically downloads the monitor firmware to the board, where it stays resident in 8051 memory. This monitor takes control of one of the board's two serial ports and manages the communication with the debugging tool. You have to set the Keil environment debugger to use the "Keil Monitor-51 driver".
Once your firmware is downloaded and running you can set breakpoints, display watches, etc.
The Cypress driver works correctly with Windows 2K/XP. I never tried it with Vista or later. There is probably a newer version of the Cypress driver that is able to run on the latest Windows.
Good luck
I have been using Mon51 with the Cypress FX2 for going on 10 years, with very good success. In addition, we use the RTXtiny task switcher and code banking. I have found the monitor to be generally solid, with enough functionality for our needs.
The Mon51 code comes from Keil as a library, so the source is not available. A couple of years ago I was having trouble getting code banking to work with the monitor, and since I wasn't getting very good support from Keil, I started to disassemble the monitor to figure out what was going wrong. Before I got very far I solved my problem, and I never finished the reverse-engineering project.
Our hardware platform is "von Neumann-ized" so that code and xdata space overlap. This is necessary for the monitor to work correctly, since software breakpoints are set by writing to code RAM. We modified the monitor initialization code so that it runs at 115200 baud from an external UART, and that works well. In addition, we had to build our own version of the monitor so that it was located at a different address in memory. Keil has actually made it pretty easy to configure these things without having to dive into the actual monitor code.
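If you need to verify that your own board's code/xdata overlap behaves the way the monitor expects, a small probe along these lines can help (a sketch for the Keil C51 toolchain; MONITOR_SCRATCH is a hypothetical free address in the overlapped RAM region, not something defined by Mon51):

```c
#include <absacc.h>            /* Keil C51: XBYTE / CBYTE access macros */

#define MONITOR_SCRATCH 0x7FF0 /* hypothetical free byte in overlapped RAM */

/* Returns 1 if a byte written through the external-data bus can be read
   back through the code bus, i.e. the board is wired "von Neumann" style
   and a monitor can plant software breakpoints in code RAM. */
bit code_ram_is_writable(void)
{
    unsigned char saved = CBYTE[MONITOR_SCRATCH];
    unsigned char probe = (unsigned char)(saved ^ 0xA5);

    XBYTE[MONITOR_SCRATCH] = probe;          /* write via xdata (MOVX) */
    if (CBYTE[MONITOR_SCRATCH] != probe) {   /* read back via code (MOVC) */
        XBYTE[MONITOR_SCRATCH] = saved;
        return 0;                            /* separate buses: no overlap */
    }
    XBYTE[MONITOR_SCRATCH] = saved;          /* restore the original byte */
    return 1;
}
```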

How to read/write a hard disk when the CPU is in Protected Mode?

I am doing an OS experiment. Until now, all my code has used the real-mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enables Protected Mode, the real-mode BIOS interrupt service routines are no longer available.
I have a feeling that I need to write some hardware drivers now. Am I right? Is this why an OS is so difficult to develop?
I know that hardware is controlled by reading from and writing to certain control or data registers. For example, I know that the "Command Block Registers" of a hard disk range from 0x1F0 to 0x1F7. Are the register addresses of so many different hardware devices consistent across platforms? Or do I have to detect them before using them? How would I do that?
Since I am not sure how to read/write a floppy or a hard disk in Protected Mode, I have to use BIOS interrupts to load all my necessary kernel files from the floppy before entering Protected Mode. What could I do if my kernel file exceeds the 1 MB real-mode address limit?
How do I read/write a hard disk when the CPU is in Protected Mode?
I have a feeling that I need to write some hardware drivers now. Am I right?
Strictly speaking (and depending on your requirements), "need" may be too strong - in theory you can switch back to real mode to use BIOS functions, or use a virtual 8086 monitor, or write an interpreter that interprets the firmware's instructions instead of executing them directly.
However, the BIOS is awful (designed for an "only one thing can happen at a time" environment that is completely unsuitable for modern systems, where it's expected that all devices are able to do useful work at the same time); the BIOS is deprecated (replaced by UEFI); and it's hard to call something an OS when it doesn't have control over the hardware (because the firmware still has control of the hardware).
Note that if you do continue using BIOS functions, the state of various pieces of hardware (interrupt controller, PCI configuration space for various devices, any PCI bridges, timer/s, etc.) has to match the expectations of the BIOS. What this means is that you will either be forced to accept huge limitations (e.g. never being able to use IO APICs, etc. properly), because changing that state would break the BIOS functions used by pre-existing code, or you will be forced to do a huge amount of work to keep the BIOS happy (emulating various pieces of hardware so the BIOS thinks the hardware is still in the state it expects even though it's not).
In other words: if you want an OS that's good, then you do need to write drivers; but if you only want an OS that doesn't work on modern (UEFI) computers, has severe performance problems ("only one thing can happen at a time"), is significantly harder to improve, doesn't support any devices the BIOS doesn't support (e.g. sound cards), and doesn't support any kind of "hot-plug" (e.g. plugging in a USB device), then you don't need to write drivers.
Is this why an OS is so difficult to develop?
A bad OS is easy to develop. For example, something that is as horrible as MS-DOS (but not compatible with MS-DOS) could probably be slapped together in 1 month.
What makes an OS difficult to develop is making it good. Things like caring about security, trying to get acceptable performance, supporting multi-CPU, providing fault tolerance, trying to make it more future-proof/extensible, providing a nice GUI, creating well thought-out standards (for APIs, etc), and power management - these are what makes an OS difficult.
Device drivers add to the difficulty. Before you can write drivers you'll need support for the things drivers depend on (memory management, IRQ handling, etc. - possibly including a scheduler and some kind of communication); then something to auto-detect devices (e.g. to scan PCI configuration space) and try to start the drivers for whatever was detected (possibly/hopefully from a file system or initial RAM disk, with the ability to add/unload/replace drivers without rebooting); and something to manage the tree of devices - e.g. so that you know which "child devices" will be affected when you put a "parent device" to sleep (or the "parent device" has hardware faults, or its driver crashes, or the device is unplugged). Of course then you'd need to write the device drivers themselves, where the difficulty depends on the device (e.g. a driver for an NVidia GPU is probably harder to write than a driver for an RS232 serial port controller).
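To make the "scan PCI configuration space" step concrete, here is a minimal sketch using the legacy configuration mechanism #1 (I/O ports 0xCF8/0xCFC). It assumes a freestanding GCC environment on x86, ignores multi-function devices, and the `found` callback is a hypothetical hook, not a standard interface:

```c
#include <stdint.h>

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Read a 32-bit register from PCI configuration space
   (legacy configuration mechanism #1 via ports 0xCF8/0xCFC). */
static uint32_t pci_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xFC);
    outl(0xCF8, addr);
    return inl(0xCFC);
}

/* Walk every bus/device slot; vendor ID 0xFFFF means "nothing here". */
void pci_scan(void (*found)(uint8_t bus, uint8_t dev, uint32_t id))
{
    for (int bus = 0; bus < 256; bus++)
        for (int dev = 0; dev < 32; dev++) {
            uint32_t id = pci_read(bus, dev, 0, 0); /* vendor | device<<16 */
            if ((id & 0xFFFF) != 0xFFFF)
                found(bus, dev, id);
        }
}
```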
For storage devices themselves (assuming "80x86 PC"), there are about eight standards that matter (ATA/ATAPI, AHCI and NVMe; then OHCI, UHCI, EHCI and xHCI for USB controllers; then the USB mass storage device spec). However, there are also various RAID and/or SCSI controllers for which there is no standard (each of these controllers needs its own driver), and some obsolete stuff (the floppy controller, tape drives that plugged into the floppy controller or parallel port, and three proprietary CD-ROM interfaces that were built into sound cards).
Please understand that supporting all of this isn't the goal. The goal should be to provide the things device drivers depend on (described above), then provide specifications that describe the device driver interfaces (possibly/hopefully including things like IO priorities and synchronization, notifications for device/media removal, error handling, etc.) so that other people can write device drivers for you. Once that's done, you might implement a few specific device drivers yourself (e.g. maybe just AHCI initially - everything else could be left until much later, or until someone else writes it).
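To make the question's "Command Block Registers" concrete, here is a rough sketch of a polled LBA28 sector read on the legacy primary ATA channel (ports 0x1F0-0x1F7). It assumes a freestanding GCC environment on x86 and omits error handling, timeouts, and drive detection - a learning sketch, not a production driver:

```c
#include <stdint.h>

/* Minimal x86 port I/O helpers (GCC inline assembly). */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}
static inline uint16_t inw(uint16_t port)
{
    uint16_t val;
    __asm__ volatile ("inw %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Read one 512-byte sector from the primary master drive using LBA28 PIO. */
void ata_pio_read_sector(uint32_t lba, uint16_t *buf)
{
    while (inb(0x1F7) & 0x80)                 /* wait while BSY is set       */
        ;
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive/head + LBA bits 24-27 */
    outb(0x1F2, 1);                           /* sector count = 1            */
    outb(0x1F3, lba & 0xFF);                  /* LBA bits 0-7                */
    outb(0x1F4, (lba >> 8) & 0xFF);           /* LBA bits 8-15               */
    outb(0x1F5, (lba >> 16) & 0xFF);          /* LBA bits 16-23              */
    outb(0x1F7, 0x20);                        /* command: READ SECTORS       */

    while ((inb(0x1F7) & 0x88) != 0x08)       /* wait for DRQ=1, BSY=0       */
        ;
    for (int i = 0; i < 256; i++)             /* 256 words = 512 bytes       */
        buf[i] = inw(0x1F0);
}
```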
You don't necessarily HAVE to write drivers. You could drop back into real mode to call the BIOS services, and then hop back into protected mode when you're done. This is essentially how DPMI DOS extenders (DOS4GW, Causeway, etc.) work.
The source code for the Causeway DOS extender is in the public domain; you can use it as a reference: http://www.devoresoftware.com/freesource/cwsrc.htm

kvm vs. vmware for kernel debugging / USB driver development

I'm currently setting up VMware Server 2.0 for kernel debugging with gdb (see this setup guide), and someone asked me why not use KVM?
So I ask: KVM vs. VMware for kernel debugging / USB driver development - what are the pros and cons of each?
Driver development? Are you working on a driver for a particular piece of hardware? If so, then you probably won't be able to use virtualization, because the virtualized instance won't have access to the new hardware.
For this you will need two machines, with one running a remote debugger attached to the other.
Edit: Apparently you're developing a driver for a USB device? This is one area where a VM actually can help. These days most VMs have the ability to pass specific USB devices through to a guest OS.
That said, this situation doesn't really offer any benefit over the remote-debugger option, because you still need a way to inspect the state of the running or crashed OS, and VMs offer very little assistance in this regard. You might be able to replay saved states from just before a crash.
You might be able to get a bit of traction using UML, which would allow you to do local debugging as with a regular user process - a little less trouble.
Instead of answering the direct question, I'll add another option... If the kernel in question is a Linux kernel, then depending on which part(s) of it you are working on, you might find that User-Mode Linux (included in the 2.6.x source, and available as patch sets for 2.4 and 2.2) trumps both of those options.
As it runs the kernel as a userland process under the host kernel, it is easier to attach common debugging tools to. I believe it is very commonly used in the early stages of updates/additions to filesystem-related code. If you are developing/debugging modules that interact directly with hardware, though, it may be of much less use to you.
I recently started building GNU Mach/HURD and found the combination of QEMU/KVM to work really quite well, for the following reasons:
QEMU presents quite a clean environment
Networking has a lot of options
I can easily mount the filesystem using a raw device file / loopback
Bottom line is, for kernel work I just want the minimum of functionality needed to boot and see the result. VMware is aimed much more at polished, usable virtualization than at down-and-dirty kernel work.
There is, however, no comparison to booting on a real machine with real hardware. The VM environment can seem like a safety blanket sometimes... because even my toaster would know what a Realtek RTL8139C was.
If it is a "real hardware" device, of course, VMware will not emulate it, so you won't be able to debug the driver under it (nor under any other virtualization software, unless you extend one to do so).
Device driver debugging can be done to some extent on a real hardware machine running a normal kernel - although there are obviously things you can't do, like set breakpoints.
It is still possible to attach a debugger to the kernel and inspect things. Moreover, traditional printf() debugging is quite possible (printk, anyone?), and there are various features in the kernel that make debugging easier. It's possible to build the kernel with various debug options to try to detect pointer problems, memory leaks, etc.
By default, the kernel even prints a nice-ish stack trace to the log when it encounters an oops or BUG condition (obviously this does not necessarily get written anywhere if the system hangs or crashes). Of course, a pointer-out-of-range condition happening inside an interrupt is a recipe for disaster, but you could still get a stack trace on the screen immediately before the panic :)
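To illustrate the printk route, here is a trivial module sketch (the module name and message are made up; the boilerplate itself is the standard Linux module pattern - build it against your kernel tree, load it with insmod, and watch dmesg):

```c
#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* Minimal "printf-style" kernel debugging: printk with a log level. */
static int __init dbgdemo_init(void)
{
    printk(KERN_DEBUG "dbgdemo: loaded, jiffies=%lu\n", jiffies);
    return 0;
}

static void __exit dbgdemo_exit(void)
{
    printk(KERN_DEBUG "dbgdemo: unloading\n");
}

module_init(dbgdemo_init);
module_exit(dbgdemo_exit);
MODULE_LICENSE("GPL");
```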
