I was going through various methods of debugging and stumbled upon a question. In step-by-step debugging using breakpoints, the IDE GUI provides the register values and memory location values for verification at each step of the program. I wanted to know: are these values fetched from the microcontroller hardware through JTAG, or are they simply simulation results produced by mimicking the microcontroller core on the host computer? Is there any methodology to control live program execution on the microcontroller through the IDE and see the live values of registers and memory locations for live debugging?
In fact, you first need to understand what debugging means. In debug mode, the IDE sets breakpoints (addresses in the program) where the microcontroller needs to stop and report its status back to the IDE.
In your case, the register values and memory status are fetched directly from the microcontroller hardware when it reaches the breakpoint; nothing is simulated on the host.
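This is also exactly the live debugging you ask about in your second question. As an illustration only (OpenOCD plus GDB against an ARM Cortex-M target; the config files, port, and addresses below belong to that hypothetical setup and will differ for your part and probe):

    $ openocd -f interface/stlink.cfg -f target/stm32f4x.cfg
    $ arm-none-eabi-gdb firmware.elf
    (gdb) target remote localhost:3333    # attach to the probe's GDB server
    (gdb) break main                      # breakpoint set in the MCU itself
    (gdb) continue
    (gdb) info registers                  # read live from the core over JTAG/SWD
    (gdb) x/16xw 0x20000000               # inspect live RAM contents

Every value GDB shows here crosses the debug probe; the program really runs on the target.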
JTAG debuggers vary widely in functionality depending on the combination of hardware and software you use.
Choosing the right debugger is not an easy task when it comes to advanced programming and real-time debugging/tracing. I recommend checking https://www.tmartis.com/en/34-debugger-programmers for more details about the most commonly used JTAG debuggers.
Hope I have answered your question.
I have a phenomenon that I fail to understand.
I'm using an Atmel UC3C (AVR32) together with Atmel Studio 7.0 to capture data from UART and write it to flash (I'm aware of the limited write cycles).
I set a breakpoint after the write instruction to check that everything went smoothly. It did, but only once: when I clicked "continue" and sent UART data again, the data appeared in RAM but was not written to flash.
I don't get any errors or exceptions (I'm catching UART errors and flash-erase errors). I can reproduce this at will. As soon as a breakpoint is hit (anywhere in the application), the flash loses the capability to write data.
The best part is that when I remove the breakpoint, it works flawlessly (I set a breakpoint after multiple writes and looked at all the written flash pages to see whether the changes were applied).
I checked for race conditions but haven't found any. UART data is buffered in a circular buffer; if the buffer overflows, the UART is blocked and the buffer is flushed (there is no sensitive data transmitted).
My question is: why is a breakpoint interfering with the program flow to the point that it breaks the flash-write capability?
Edit: a reset gives the write capability back to the flash.
This kind of phenomenon can be caused by several things:
On pretty much any MCU, the flash driver code that writes to the flash cannot be located in the same flash bank as the one currently being programmed. When you have this problem, you can usually either provoke it or make it seem to go away by introducing breakpoints or by single-stepping. Most commonly, such code works when single-stepping but breaks when free-running.
The preferred solution is to place the flash driver in a different bank than the one being programmed. A more questionable alternative is to load the flash driver into RAM and execute it from there. (The RAM method is actually recommended by several MCU vendors, even though executing code from RAM is widely regarded as a dangerous practice, for multiple reasons.)
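A minimal sketch of the RAM approach, assuming a GCC toolchain whose linker script provides a .ramfunc section that the startup code copies into RAM (Atmel's frameworks typically set this up); the function body is a placeholder, not a real programming sequence:

    #include <stdint.h>

    /* Placed in RAM so the CPU never fetches instructions from the
     * flash bank being programmed. ".ramfunc" assumes linker script
     * support; "noinline" keeps callers from pulling the body back
     * into flash. */
    __attribute__((section(".ramfunc"), noinline))
    void flash_write_word(volatile uint32_t *addr, uint32_t value)
    {
        /* Device-specific unlock/command sequence would go here. */
        *addr = value;      /* placeholder for the real write */
        /* Poll the flash controller's ready flag before returning. */
    }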
Your breakpoint could be interrupting the flash programming. Most flash drivers do not tolerate being interrupted in the middle of execution. You must not allow any form of interrupt to execute during flash programming; this includes all the other interrupts in your program, too. Set the global interrupt mask during programming (and make sure there are no non-maskable interrupts).
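For example (irq_save(), irq_restore() and flash_write_page() are hypothetical stand-ins for your part's interrupt-mask intrinsics and your existing driver call; in Atmel's framework the masking pair would be something like Disable_global_interrupt()/Enable_global_interrupt()):

    #include <stdint.h>

    /* Hypothetical helpers; substitute your toolchain's equivalents. */
    extern uint32_t irq_save(void);             /* mask and return old state */
    extern void irq_restore(uint32_t flags);    /* restore saved state */
    extern void flash_write_page(uint32_t page, const uint8_t *data);

    void flash_write_page_atomic(uint32_t page, const uint8_t *data)
    {
        uint32_t flags = irq_save();    /* no ISR may run from here... */
        flash_write_page(page, data);
        irq_restore(flags);             /* ...until here */
    }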
Similarly, if your flash driver is based on interrupts, make sure that the debugger isn't blocking interrupts when you set a breakpoint.
Your flash clock prescaler could be set incorrectly, resulting in odd behavior when you alter the real-time behavior, for example by introducing a breakpoint.
In addition, UARTs and other serial peripherals may have status registers that are cleared by reading them, followed by a read of a data register. Debuggers that read all registers, for example to display a memory map, may destroy such status flags. That could also be what is happening here.
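A sketch of that read-to-clear hazard (the register names, address, and bit position are all hypothetical):

    #include <stdint.h>

    /* Hypothetical memory-mapped UART registers. */
    #define UART_SR    (*(volatile uint32_t *)0xFFFF1400u)  /* status, cleared on read */
    #define UART_RHR   (*(volatile uint32_t *)0xFFFF1404u)  /* receive holding */
    #define SR_OVERRUN (1u << 5)

    uint32_t uart_read_byte(void)
    {
        /* If the debugger's memory view has already read UART_SR,
         * this flag is gone by the time the firmware gets here. */
        if (UART_SR & SR_OVERRUN) {
            /* handle the overrun */
        }
        return UART_RHR;    /* reading data may also clear status bits */
    }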
I am prototyping a driver for an 8-bit parallel image sensor on an ARM device with a built-in ISP. I will spare you the details, as I am seeking general guidance on how to approach this particular problem.
Simply put, when I load the ISP driver (not my prototype camera driver) with the dyndbg=+pt flag, the camera driver usually grabs images (about 8 out of 10 attempts). If I remove the flag and load the ISP driver without any options, my camera driver rarely finishes its job (about 1 out of ~100 attempts); the system gets stuck, reporting that the device has timed out.
I suspect that loading the driver with the debug flag somehow alters the timing, resulting in more stable interaction between the ISP and the image sensor. I mostly spend my hours debugging the electrical aspects of embedded boards and rarely delve into a deep software stack such as an ISP or Video4Linux, so my conjecture may be completely off.
Therefore, some pointers would be much appreciated. The kernel is 3.18.
You haven't provided a lot of details for us to work with here, but if enabling debug makes your device work, my suspicion would be that the debug output is introducing a delay that your device needs in order to work properly. I'd read through your device's datasheets carefully to see if there are any timing requirements you might not be respecting.
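If that turns out to be the case, the fix is an explicit delay at the right spot in the driver rather than the accidental one from the debug prints. A hypothetical sketch for a 3.18-era kernel (all the sensor_* names are made up; usleep_range() is the standard kernel helper):

    #include <linux/delay.h>

    /* Hypothetical: give the sensor time to settle after a mode change,
     * instead of relying on the delay that dyndbg printing introduces. */
    static int sensor_start_stream(struct sensor_dev *dev)
    {
        int ret;

        ret = sensor_write_reg(dev, REG_MODE, MODE_STREAMING);
        if (ret)
            return ret;

        usleep_range(1000, 2000);   /* settle time per the datasheet, if any */
        return 0;
    }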
I want to read the TSTR register (Thermal Sensor Thermometer Read Register) of my Intel chipset.
I have found that the __readmsr function is what I need.
I have also set up a kernel driver, since the function is only available in kernel mode.
But I have no clue how to access the register...
The datasheet of the chipset states, on page 857, that the offset address of the register is TBARB+03h.
How can I use this address? Are there tutorials out there that could help me?
Thanks!
As far as I have figured out (I'm trying to do the exact same thing), __readmsr is indeed the right command for accessing these registers:
http://msdn.microsoft.com/en-us/library/y55zyfdx%28v=VS.100%29.aspx
However, I am working on an i5, and Intel's documentation
http://www.intel.com/content/www/us/en/intelligent-systems/piketon/core-i7-800-i5-700-desktop-datasheet-vol-2.html
suggests that things like the MC_RANK_VIRTUAL_TEMP entries are registers, so it ought to work; you are probably on the right track. The particular register is on page 272, so technically this is indeed the answer: __readmsr(1568) in my case.
However, I am struggling to convince Visual Studio 2010 to build this in kernel mode, which it seems reluctant to do; I keep getting a Privileged Instruction error. When I get that out of the way and the whole program working, I'll write a tutorial on the general process, but until then I can only give a theoretical answer. If your compiler listens to what you say, just add the /kernel compiler option; since you are only reading and not writing the registers, it should be safe.
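For what it's worth, here is a minimal sketch of the kernel-mode part, assuming a WDK build environment (__readmsr is the MSVC intrinsic, declared by the WDK headers; the rest is driver boilerplate):

    #include <ntddk.h>

    /* RDMSR requires privilege level 0, hence the kernel driver. */

    VOID DriverUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
    }

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        unsigned __int64 value;

        UNREFERENCED_PARAMETER(RegistryPath);
        DriverObject->DriverUnload = DriverUnload;

        value = __readmsr(1568);    /* the MSR index from the datasheet above */
        DbgPrint("MSR 1568 = 0x%I64x\n", value);

        return STATUS_SUCCESS;
    }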
Edit:
There is also this article; the answer more or less suggests what I'm trying to do, though not much more than I have done so far. Take a look anyway:
How to access CPU's heat sensors?
"This instruction must be executed at privilege level 0 or in real-address mode; otherwise, a general protection exception #GP(0) will be generated."
http://faydoc.tripod.com/cpu/rdmsr.htm
I am currently trying to reverse a program under Linux that has a bunch of anti-debug tricks. I was able to defeat some of them, but I am still fighting the remaining ones. Sadly, since I am mediocre, it is taking me more time than expected. Anyway, the program runs without any pain in a VM (I tried with VMware and VirtualBox), so I was thinking about taking a trace of its execution in the VM, then a trace under the debugger (gdb), and diffing them to see where the changes are and find the anti-debug tricks more easily.
However, the kernel debugging I did with VMware was a long time ago; it was more or less OK (I remember having access to the linear addresses...), but here it's a bit different, I think.
Do you see an easy way to debug this userland program without going through too much pain?
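For context, the most common trick of this kind on Linux is the ptrace self-attach check; a minimal reproduction (not necessarily what this particular program uses) looks like this:

    #include <stdio.h>
    #include <sys/ptrace.h>

    /* A process can only have one tracer, so if PTRACE_TRACEME fails,
     * someone (e.g. gdb) is already attached. */
    int main(void)
    {
        if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
            printf("debugger detected\n");
            return 1;
        }
        printf("no debugger\n");
        return 0;
    }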
I would suggest using Ether, a tool for monitoring the execution of a program that is based on the Xen hypervisor. The whole point of the tool is to trace a program's execution without being observable. The first thing to do is go to their website, click on the malware tab, then submit your binary and see if their automated web interface can do it for you. If this fails, you can install it yourself, which is a pain but doable, and should yield good results; I have been able to install it in the past. They have instructions on the Ether website, but if you go that route I'd suggest you also take a look at these supplemental instructions from Offensive Computing.
A couple of other automated analysis sites that could do the trick for you:
Eureka by SRI International
and Renovo by BitBlaze at UC Berkeley
I'm developing an application in LabVIEW on Windows. Starting a week ago, one test machine (a ToughBook, no less) was freezing up completely once every couple of days: no mouse cursor, taskbar clock frozen. So yesterday it was retired. But just now I've seen the same thing on another machine, also a laptop.
This is a pretty uncommon failure mode for PCs. I don't know much about Windows, but I'd expect it to indicate that the software stopped running so completely and suddenly that the kernel was unable to panic.
Is this an accurate assessment? Where do I begin debugging this problem? What controls the cursor in the Windows architecture: is it all kernel mode, or is there a window server that might be getting choked by something? Would an unstable third-party hardware driver cause this rather than a blue screen?
EDIT: I should add that the freezes don't necessarily happen while the code is running.
I'd certainly consider hardware and/or drivers as a possibility. Perhaps you could say what hardware is involved?
You could test this by adding a 'debug mode' for each piece of hardware your LabVIEW code talks to, where you would use, e.g., a case structure to skip the actual I/O calls and return dummy data to the rest of the application. Make sure it's a similar amount of data to what the real device returns. You'll find this much easier if you've modularised your code into subVIs with clearly defined functions! If disabling the I/O calls to a particular piece of hardware stops the freezes, that would suggest the problem lies with that hardware or its driver.
Hard to say what the problem is. Based on the symptoms, I would check for a possible memory leak (see if your LabVIEW app's memory usage grows over time, using the Windows Task Manager).