I have an ATXmega that I am writing code for. I have IAR embedded workbench and an AVR ONE! that connects to the circuit board with a PDI.
When debugging, I am able to get the chip to halt: the system clock stops, as though a breakpoint has been hit. The IAR debugging GUI shows no change when this happens; if I click 'reset' and then 'go', IAR tells me the CPU is running normally, but the chip itself still behaves as though a breakpoint has been hit. If I click 'pause' and then 'play' again, the chip continues without issue. This state occurs much more reliably when breakpoints are set elsewhere in the code.
I have a debug signal on a spare GPIO pin that toggles every clock cycle, producing a signal that is half the frequency of the external crystal. When the chip enters this halted state, both the clock and debug signals cease.
I have been able to remove and add code at the start of main, and the program will halt in different places, but it always seems to halt after the same amount of time (I have not been able to fully confirm that the elapsed time or number of clock cycles is consistent).
This only ever happens when debugging, never when the chip is running with no debugger attached. I have replaced the chip, I have built the same program with a colleague's toolchain on his computer and used it to program my board, and I have commented out almost all of my code, and the error persists. My colleagues have never seen this behaviour before.
One final thing: we have a circuit that pulls a few pins high when given a ~8 V signal. These are the only pins that are pulled high. The halted/error state has only been seen when this ~8 V signal is present.
What's going on here? Has anyone else ever seen this sort of phantom breakpoint?
Edit: I would like to add that we have removed the circuit that pulls those pins high and we still see the behaviour.
I need to minimize the current consumption of my board, which uses an LPC1768. I don't have any problem going into Deep Sleep or Power-Down modes and waking up from them. I have configured the RTC to generate an interrupt after some predefined time, which wakes up the MCU correctly and works just fine.
My problem occurs when I want to go into Deep Power-Down mode, which is precisely what I need (it consumes much less power). But after the RTC interrupt fires, the MCU goes into a reset state and starts execution from the beginning, as if someone had pushed the reset button!
Now why is that? I read in the documents (for example AN10915: Using the LPC1700 power modes) that the routines for entering these three modes are pretty much the same.
I don't understand. There should be no problem according to the example.
I really need to get this working; otherwise the battery runs out sooner than it is supposed to.
UM10360.pdf, chapter 4.8.4 says: "In Deep Power-down mode, power is shut off to the entire chip" [...]
That means all data that is not in the RTC backup registers is lost, and the chip will thus restart with a reset.
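Since only the RTC domain stays powered, anything that must survive Deep Power-Down has to be parked in the RTC backup registers (GPREG0-GPREG4 on the LPC17xx) before entering the mode, and restored after the wake-up reset. Here is a minimal sketch of that pattern, assuming CMSIS-style LPC17xx headers (LPC_RTC, LPC_SC, SCB, __WFI()) and an RTC alarm that is already configured; the magic value and the use of GPREG1 are just illustrative, and the exact PCON encoding should be checked against UM10360 chapter 4.8:

```c
#include "LPC17xx.h"            /* CMSIS device header: LPC_RTC, LPC_SC, SCB, __WFI() */

#define DPD_MAGIC  0xB007C0DEu  /* arbitrary marker meaning "we woke from Deep Power-Down" */

static void enter_deep_power_down(uint32_t state_to_keep)
{
    /* Park whatever must survive in the RTC backup registers (GPREG0..GPREG4). */
    LPC_RTC->GPREG0 = DPD_MAGIC;
    LPC_RTC->GPREG1 = state_to_keep;

    SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;   /* select the deep class of sleep modes      */
    LPC_SC->PCON |= 0x3;                 /* PM1:PM0 = 11 -> Deep Power-down (UM10360) */
    __WFI();                             /* power is cut; the RTC wake-up is a reset  */
}

int main(void)
{
    if (LPC_RTC->GPREG0 == DPD_MAGIC) {
        /* This "boot" is really a wake-up from Deep Power-Down:
           restore context from the backup registers instead of cold-starting. */
        uint32_t restored = LPC_RTC->GPREG1;
        LPC_RTC->GPREG0 = 0;             /* clear the marker */
        (void)restored;
    } else {
        /* Genuine power-on reset: do the full initialisation. */
    }

    /* ... application code, RTC alarm setup, then: */
    enter_deep_power_down(42u);
    /* never reached: execution resumes from reset after the RTC wakes the chip */
}
```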
Following some changes, my Arduino sketch became unstable: it only runs for 1-2 hours and then crashes. I have been trying to understand this for a month now without making sensible progress; the main difficulty is that the slightest change makes it run apparently "ok" for days...
The program is ~1500 lines long
Can someone suggest how to progress?
Thanks in advance for your time
Well, embedded systems are very well known for their continuous fight against the universe's fourth dimension: time. It is known that some delays must be added inside the code - this does not always imply the use of a system delay routine; often just the order of operations solves a lot.
Debugging a system with such a problem is difficult. Some techniques that can be used (a short sketch of (a) and (b) follows the list):
a) invasive: add markers (e.g. printf statements) at various places in your software - at the entry or exit of some routines or at other important steps - and run again. When the application crashes, note the last message seen and conclude that the crash happened after the step marked by that printf.
b) less invasive: configure an available GPIO pin as an output and set it high at the entry of a routine and low at the exit; the crashing point will leave the pin either high or low. You can use several pins if available and watch the activity with an oscilloscope.
c) non-invasive: use JTAG or SWD debugging - this is the best option. If your micro supports fault debugging, you have the means to locate the bug.
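Here is a rough host-compilable sketch of (a) and (b) in plain C; the TRACE macro, the debug_pin_write() helper, and pin 7 are placeholders for whatever your board actually provides (e.g. digitalWrite() on Arduino):

```c
#include <stdio.h>

/* (a) Invasive markers: printf breadcrumbs you can read back after a crash.
 * Define NTRACE to compile them out once the bug is found.                    */
#ifndef NTRACE
#define TRACE(tag)  printf("TRACE %s:%d %s\n", __FILE__, __LINE__, (tag))
#else
#define TRACE(tag)  ((void)0)
#endif

/* (b) GPIO markers: debug_pin_write() stands in for whatever your target uses
 * (digitalWrite() on Arduino, a port register write elsewhere).               */
#define DEBUG_PIN 7                       /* spare pin, board-specific */

static void debug_pin_write(int pin, int level)
{
    /* Placeholder body so the sketch compiles on a host; replace on target. */
    printf("GPIO %d -> %d\n", pin, level);
}

static void suspect_routine(void)
{
    TRACE("enter suspect_routine");
    debug_pin_write(DEBUG_PIN, 1);        /* scope shows the pin high while we are inside */

    /* ... the code under suspicion ... */

    debug_pin_write(DEBUG_PIN, 0);        /* pin left high after a crash => it died in here */
    TRACE("exit suspect_routine");
}

int main(void)
{
    suspect_routine();
    return 0;
}
```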
I've been massively curious about this for a long time. How do processors "sleep"? As far as I can tell, it spins until a set time, and that's fine. But there's a problem with this: if the processor is always running at full speed, why does it get noisy and heat up when the spinning comes from my program, but stay quiet and not drain power during the OS's idle spinning?
Does this have more to do with shared variable access? Even a spin lock requires checking a register with a shared variable. So would it require an external device I/O to actually heat up, or is there a "different sleep"?
I've always wondered how to put my program into a sleep() and know it won't drain the battery (admittedly this was before I learned about OS scheduling/time slicing).
In short, how does this work, and how would I ensure that my Sleep() calls or spin-locks use the low-power type?
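For what it's worth, the "different sleep" on most CPUs is an instruction that halts the core until the next interrupt (HLT on x86, WFI on ARM); an OS idle loop or a blocking Sleep() ends up there, while a spin-lock never does. A minimal bare-metal-flavoured sketch of the difference, assuming a Cortex-M target whose CMSIS device header provides __WFI() and whose SysTick interrupt is already running:

```c
#include <stdint.h>
#include "device.h"        /* placeholder: your CMSIS device header (provides __WFI(), SysTick) */

static volatile uint32_t g_ticks;        /* advanced by the SysTick interrupt */

void SysTick_Handler(void) { g_ticks++; }

/* Busy-wait "sleep": the core executes this loop at full speed the whole time,
 * burning power - the noisy, hot case from the question.                      */
static void sleep_spin(uint32_t ticks)
{
    uint32_t start = g_ticks;
    while ((g_ticks - start) < ticks) {
        /* spinning: every cycle is a fetched and executed instruction */
    }
}

/* Low-power "sleep": the core is halted between interrupts and only wakes long
 * enough to run the SysTick handler and re-check the condition.               */
static void sleep_wfi(uint32_t ticks)
{
    uint32_t start = g_ticks;
    while ((g_ticks - start) < ticks) {
        __WFI();             /* halt the core until the next interrupt */
    }
}
```

An OS does essentially what sleep_wfi() does on your behalf: Sleep() parks your thread on a timer and the scheduler drops into its halted idle loop when nothing is runnable, whereas a spin-lock behaves like sleep_spin().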
Is it possible to program multiple PIC microcontrollers using only one PICkit2 programmer? The microcontrollers are connected in a daisy chain, with the PGC, PGD and MCLR of the PIC to be programmed connected to GPIOs of the programming PIC.
I may be wrong, but I do not think this will work well, as MPLAB X will want to read back the written data to verify that the programming operation succeeded.
Alternatively, have you considered using PICkit3s in their "independent of a computer" mode? A PICkit3 can be configured to burn a specific program into a target PIC without a computer attached. I am wondering if having an "army" of these might address your issue.
I don't believe so. Just for fun, after finding this question, I took two 12F508s that were known to be good.
To prove that they were good, I used IPE to load a previously tested program onto the two devices. The devices worked as expected. I then used IPE's "fill memory" tool to program both devices to all empty (every address holding 0x00), except for the oscillator calibration memory location (I've had trouble with that area in the past, so I always disable reading and writing to that location).
I then connected both chips up to the programmer in parallel and tried to program them with the same program. This is where everything went horribly awry.
For some reason, the programmer got confused and wrote a value of 0xFF to all addresses, including the out of range addresses. I verified that this was what actually happened by disconnecting the chips from the circuit and reading them independently.
Luckily for me, I had run into this problem repeatedly before, and so had built a programmer out of an Arduino and some extra circuitry, so that I can ignore the stupid "oscillator calibration data invalid" error and reprogram that location with the correct instruction. It takes a long time to read and write memory, but it saves otherwise-bricked chips.
In shorter words: No, this does not work, and it may actually "brick" your chips.
I'm trying to generate a clock signal on a GPIO pin (ARM platform, mach-davinci, kernel 2.6.27) at around 100 kHz, using a tasklet with high priority. The theory is simple: set the GPIO high, udelay for 5 µs, set the GPIO low, wait another 5 µs. But strange problems appear. First of all, I can't get the 5 µs delay, but that's fine - it looks like a hardware performance problem - so I moved to a period of 40 µs (which gives ~25 kHz). The second problem is worse: once per ~10 ms, udelay waits three times longer than usual. I think it's the heartbeat taking this time, but this is unacceptable from the point of view of the protocol that will be implemented on top of this. Is there any way to temporarily disable the heartbeat procedure, let's say, for 500 ms? Or maybe I'm doing it wrong from the beginning? Any comments?
You cannot use a tasklet for this kind of job. Tasklets can be preempted by interrupts, and in some cases your tasklet can even be executed in process context!
If you absolutely have to do it this way, use an interrupt handler: get in, disable interrupts, do whatever you have to do, and get out as fast as you can. (A rough sketch of this is below.)
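A sketch of that pattern, assuming something (a periodic hardware timer IRQ, for example) fires the handler once per bit, and using the generic gpio_set_value()/udelay() kernel helpers; the GPIO number is board-specific and just an example:

```c
#include <linux/interrupt.h>
#include <linux/gpio.h>
#include <linux/delay.h>
#include <linux/irqflags.h>

#define CLK_GPIO 42                      /* board-specific pin number (example) */

/* Handler for a periodic hardware timer interrupt: emit one clock pulse per call,
 * registered elsewhere with request_irq().                                       */
static irqreturn_t clk_timer_isr(int irq, void *dev_id)
{
    unsigned long flags;

    local_irq_save(flags);               /* keep the high half-period jitter-free */
    gpio_set_value(CLK_GPIO, 1);
    udelay(5);                           /* high for ~5 us */
    gpio_set_value(CLK_GPIO, 0);
    local_irq_restore(flags);            /* low half elapses until the next IRQ */

    return IRQ_HANDLED;
}
```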
Generating the clock asynchronously in software is not the right thing to do. I can think of two alternatives that will work better:
Your processor may have a built-in clock generator peripheral that isn't already being used by the kernel or another driver. When you set one of these up, you tell it how fast to run its clock, and it just starts running out the pulses.
Get your processor's datasheet and study it.
You might not find a peripheral called a "clock" per se, but might find something similar that you can press into service, like a PWM peripheral.
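If a PWM channel and a platform PWM driver are available, the hardware can free-run the 100 kHz square wave with no CPU involvement. A rough sketch using the legacy in-kernel linux/pwm.h interface (pwm_request()/pwm_config()/pwm_enable()); whether a kernel as old as 2.6.27 on mach-davinci actually provides this is an assumption to verify, and the channel id is just an example:

```c
#include <linux/pwm.h>
#include <linux/err.h>

/* 100 kHz -> 10 us period; 50 % duty gives a symmetric clock. */
#define CLK_PERIOD_NS 10000
#define CLK_DUTY_NS    5000
#define PWM_CHANNEL       0              /* example channel id, board-specific */

static struct pwm_device *clk_pwm;

static int clk_pwm_start(void)
{
    clk_pwm = pwm_request(PWM_CHANNEL, "gpio-clk");
    if (IS_ERR(clk_pwm))
        return PTR_ERR(clk_pwm);

    pwm_config(clk_pwm, CLK_DUTY_NS, CLK_PERIOD_NS);
    pwm_enable(clk_pwm);                 /* hardware now free-runs at 100 kHz */
    return 0;
}

static void clk_pwm_stop(void)
{
    pwm_disable(clk_pwm);
    pwm_free(clk_pwm);
}
```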
The other device you are talking to may not actually require a regular clock. Some chips that need a "clock" line merely need a line that goes high when there is a bit to read, which then goes low while the data line(s) are changing. If this is the case, the 100 kHz thing you're reading isn't a hard requirement for a clock of exactly that frequency; it is just an upper limit on how fast the clock line (and thus the data line(s)) are allowed to transition.
With a CPU so much faster than the clock, you want to split this into two halves:
The "top half" sets the data line(s) state correctly, then brings the clock line up. Then it schedules the bottom half to run 5 μs later, using an interrupt or kernel timer.
In the "bottom half", called by the interrupt or timer, bring the clock line back down, then schedule the top half to run again 5 μs later.
Unless you can run your timer tasklet at a higher priority than the kernel timer, you will always be susceptible to this kind of jitter. Do you really have to do this by bit-banging? It would be far easier to use a hardware timer or PWM generator: configure the timer to run at your desired rate, set the pin to output, and you're done.
If you need software control over each bit period, you can try to work around the other tasks by setting your tasklet to run at a shorter period, say three-fourths of your 40 µs delay. In the tasklet, disable interrupts and poll the clock until you reach the correct 40 µs timeslot, set the I/O state, re-enable interrupts, and exit. But this effectively ties up 25% of your system in watching a clock. (A rough sketch of this is below.)
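To make the cost concrete, here is a rough sketch of that poll-with-interrupts-off tasklet; ktime_get(), gpio_set_value() and the tasklet API are standard kernel calls, while the pin number and the assumption that a timer re-schedules the tasklet ~30 µs before each edge are mine:

```c
#include <linux/interrupt.h>
#include <linux/hrtimer.h>               /* ktime_get() on older kernels */
#include <linux/ktime.h>
#include <linux/gpio.h>
#include <linux/irqflags.h>

#define CLK_GPIO  42                     /* board-specific pin (example) */
#define SLOT_NS   40000                  /* 40 us bit period             */

static ktime_t next_edge;
static int clk_level;
static struct tasklet_struct clk_tasklet;

/* Scheduled ~30 us early (3/4 of the period); spins with IRQs off until the
 * exact 40 us boundary, toggles the pin, then gives the CPU back.            */
static void clk_tasklet_fn(unsigned long data)
{
    unsigned long flags;
    (void)data;

    local_irq_save(flags);
    while (ktime_to_ns(ktime_sub(next_edge, ktime_get())) > 0)
        ;                                /* busy-wait: this is the wasted ~25 % */

    clk_level = !clk_level;
    gpio_set_value(CLK_GPIO, clk_level);
    next_edge = ktime_add_ns(next_edge, SLOT_NS);
    local_irq_restore(flags);

    /* something else (a timer) must call tasklet_hi_schedule(&clk_tasklet)
       roughly 30 us before each next_edge                                    */
}

static void clk_poll_init(void)
{
    next_edge = ktime_add_ns(ktime_get(), SLOT_NS);
    tasklet_init(&clk_tasklet, clk_tasklet_fn, 0);
    tasklet_hi_schedule(&clk_tasklet);
}
```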