How to generate a ~100 kHz clock signal in a Linux kernel module with bit-banging? - linux-kernel

I'm trying to generate a clock signal of around 100 kHz on a GPIO pin (ARM platform, mach-davinci, kernel 2.6.27), using a tasklet with high priority. The theory is simple: set the GPIO high, udelay for 5 us, set the GPIO low, wait another 5 us. But strange problems appear. First of all, I can't get this 5 us delay; that's fine, it looks like a hardware performance limit, so I moved to a 40 us period (which gives ~25 kHz). The second problem is worse: roughly once per 10 ms, udelay waits 3x longer than usual. I suspect it's the heartbeat taking this time, but that is unacceptable from the point of view of the protocol which will be implemented on top of this. Is there any way to temporarily disable the heartbeat procedure, let's say, for 500 ms? Or maybe I'm doing it wrong from the beginning? Any comments?

You cannot use a tasklet for this kind of job. Tasklets can be preempted by interrupts, and in some cases your tasklet can even be executed in process context!
If you absolutely have to do it this way, use an interrupt handler: get in, disable interrupts, do whatever you have to do, and get out as fast as you can.

Generating the clock asynchronously in software is not the right thing to do. I can think of two alternatives that will work better:
Your processor may have a built-in clock generator peripheral that isn't already being used by the kernel or another driver. When you set one of these up, you tell it how fast to run its clock, and it just starts putting out pulses on its own.
Get your processor's datasheet and study it.
You might not find a peripheral called a "clock" per se, but might find something similar that you can press into service, like a PWM peripheral.
The other device you are talking to may not actually require a regular clock. Some chips that need a "clock" line merely need a line that goes high when there is a bit to read, which then goes low while the data line(s) are changing. If this is the case, the 100 kHz thing you're reading isn't a hard requirement for a clock of exactly that frequency, it is just an upper limit on how fast the clock line (and thus the data line(s)) are allowed to transition.
With a CPU so much faster than the clock, you want to split this into two halves:
The "top half" sets the data line(s) state correctly, then brings the clock line up. Then it schedules the bottom half to run 5 μs later, using an interrupt or kernel timer.
In the "bottom half", called by the interrupt or timer, bring the clock line back down, then schedule the top half to run again 5 μs later.

Unless you can run your timer tasklet at higher priority than the kernel timer, you will always be susceptible to this kind of jitter. Do you really have to do this by bit-banging? It would be far easier to use a hardware timer or PWM generator: configure the timer to run at your desired rate, set the pin to output, and you're done.
If you need software control of each bit period, you can try to work around the other tasks by scheduling your tasklet a little early, say at three-fourths of your 40 us delay. In the tasklet, disable interrupts and poll the clock until you reach the correct 40 us timeslot, set the I/O state, re-enable interrupts, and exit. But this effectively ties up 25% of your system in watching a clock.
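A rough sketch of that workaround, assuming the placeholder CLK_GPIO pin and using ktime_get() as the reference clock (the exact clock source and a safe spin length depend on your platform):

#include <linux/types.h>
#include <linux/ktime.h>
#include <linux/hrtimer.h>
#include <linux/gpio.h>
#include <linux/irqflags.h>
#include <linux/sched.h>

#define CLK_GPIO 42                       /* placeholder GPIO number */

/* Called a few microseconds early; spins with interrupts off
 * until the exact edge time, then drives the pin. */
static void hit_edge(u64 edge_due_ns, int level)
{
        unsigned long flags;

        local_irq_save(flags);            /* keep other IRQs from adding jitter */
        while (ktime_to_ns(ktime_get()) < edge_due_ns)
                cpu_relax();              /* busy-wait for the 40 us timeslot */
        gpio_set_value(CLK_GPIO, level);
        local_irq_restore(flags);
}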

Related

Best Route For Input Clocks on Kintex7 FPGA

I'm looking for advice on a less than ideal situation.
I've inherited a project where we have a hardware design issue. We generate a clock to a chip which feeds the clock back in over a non-clock-capable input. This works at up to 160 MHz, but we are looking to increase the clock, so I'm researching I/O options. This clock is used to clock 8 parallel data inputs.
Right now the data inputs go through a delay and an IDDR block. The output is fed to a FIFO. Our clock is still routed to a BUFG - so we have:
Data - IDELAY - IDDR - FIFO
Clock - BUFG ----^------^
I read somewhere that routing to a BUFG has a large delay so a BUFR-BUFIO is better. Is this the case? Have I missed a better option?
When you say generating a clock to "a chip", I will assume that you mean the Kintex7 chip.
The delay itself is not a problem. The issue is to set up your timing constraints properly so that static timing analysis can validate whether any setup or hold time is violated across all corners of the device.
If you look at the DS182 data sheet, the AC switching characteristics section will give you a rough idea of how fast the chip can perform.
However, the best approach is to let the timing analyzer inside Vivado calculate whether your desired clock frequency can close timing.
You just need to make sure that:
The data input is synchronous to your final clock.
If it isn't, then clock that data input across two stages of registers with respect to the final clock.
You specify your timing constraints.
You run through synthesis and implementation.
You check the timing report to see that there are no violations.
Or maybe I did not understand something about what you are trying to do.

What is the kernel timer system and how is it related to the scheduler?

I'm having a hard time understanding this.
How does the scheduler know that a certain period of time has passed?
Does it use some sort of syscall or interrupt for that?
What's the point of using the constant HZ instead of seconds?
What does the system timer have to do with the scheduler?
How does the scheduler know that a certain period of time has passed?
The scheduler consults the system clock.
Does it use some sort of syscall or interrupt for that?
Since the system clock is updated frequently, it suffices for the scheduler to just read its current value. The scheduler is already in kernel mode so there is no syscall interface involved in reading the clock.
Yes, there are timer interrupts that trigger an ISR, an interrupt service routine, which reads hardware registers and advances the current value of the system clock.
What's the point of using the constant HZ instead of seconds?
Once upon a time there was significant cost to invoking the ISR, and on each invocation it performed a certain amount of bookkeeping, such as checking whether the scheduler quantum had expired and firing TCP RTO retransmit timers. The hardware had limited flexibility and could only invoke the ISR at fixed intervals, e.g. every 10 ms if HZ is 100. Higher HZ values made it more likely the ISR would run and find nothing to do, because no events had occurred since the previous run; in that case the ISR was pure overhead, cycles stolen from a foreground user task. Lower HZ values hurt dispatch latency, leading to sluggish network and interactive response times. The HZ tuning trade-off tended to wind up somewhere near 100 or 1000 on practical hardware. APIs that reported system clock time could only do so in units of ticks, where each ISR invocation advanced the clock by one tick, so callers needed to know the value of HZ in order to convert from ticks to S.I. units. Modern systems perform network tasks on a separately scheduled TCP kernel thread, and may support tickless kernels, which discard many of these outdated assumptions.
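As a small illustration of why HZ matters to callers, kernel code works in ticks (jiffies) and uses conversion helpers rather than hard-coding the tick length. A minimal sketch using standard kernel helpers:

#include <linux/types.h>
#include <linux/jiffies.h>

static unsigned long deadline;

/* Arm a deadline 500 ms in the future, expressed in ticks. */
static void arm_deadline(void)
{
        deadline = jiffies + msecs_to_jiffies(500);
}

/* True once at least 500 ms of wall-clock time have passed;
 * the resolution is one tick, i.e. 1/HZ seconds. */
static bool deadline_expired(void)
{
        return time_after(jiffies, deadline);
}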
What does the system timer have to do with the scheduler?
The scheduler runs when the system timer fires an interrupt.
The nature of a pre-emptive scheduler is that it can pause "spinning" usermode code, e.g. while (1) {}, and manipulate the run queue, even on a single-core system.
Additionally, the scheduler runs when a process voluntarily gives up its time slice, e.g. when issuing syscalls or taking page faults.

Wake-up from Deep Power-Down mode causes a reset in LPC1768

I need to minimize the current consumption in my board which uses a LPC1768. Now I don't have any problem with going into Deep Sleep or Power-Down modes and waking up from those modes. I have configured the RTC to generate an interrupt after some predefined time which does wake up the MCU correctly and works just fine.
My problem occurs when I want to go into Deep Power-Down mode, which is precisely what I need (it consumes much less power). But after the RTC interrupt fires, the MCU goes into a reset state and starts execution from the beginning, as if someone had pushed the reset button!
Now why is that? I read from the documents (like this example: AN10915: Using the LPC1700 power modes) that these three routines are pretty much the same.
I don't understand. There should be no problem according to the example.
I really need to do this, otherwise we lose the battery sooner than intended.
UM10360.pdf, chapter 4.8.4 says: "In Deep Power-down mode, power is shut off to the entire chip" [...]
That means all data that is not in the RTC backup registers is lost, and the chip will thus restart with a reset.
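One practical consequence, sketched under the assumption that you use the standard LPC17xx CMSIS header: since only the RTC domain keeps power, you can park a magic value in an RTC backup register before entering Deep Power-down and check it after the following reset to tell a planned wake-up apart from a cold boot.

#include "LPC17xx.h"

#define DPD_MAGIC 0xB007CAFEu          /* arbitrary marker value */

/* Call just before entering Deep Power-down. */
void mark_deep_power_down(void)
{
        LPC_RTC->GPREG0 = DPD_MAGIC;   /* RTC backup registers keep power */
        /* GPREG1..GPREG4 can hold any other state needed across the reset */
}

/* Call early after every reset to decide whether to restore state. */
int woke_from_deep_power_down(void)
{
        return LPC_RTC->GPREG0 == DPD_MAGIC;
}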

Execution time of code on microcontroller

On a 32-bit microcontroller, I want to measure the execution time of a piece of code at different operating frequencies of the microcontroller. At first I used the periodic timer (PIT), but it did not provide high resolution, because if I run the PIT at a high frequency its counter overflows. So I switched to the system timer (STM), because it can run at the system clock. But at different operating frequencies of the microcontroller, the STM gives the same execution time for the code. Could any of you help me with this? Thanks
I realize this is an old question, but if this doesn't need to be done in the system "real time," I would just toggle a port pin when entering and exiting the function and use an oscilloscope to measure the time. I'm assuming that you just want to do this for software testing.
If you need to do it "real time" (in the application code), then you'll need to multiply your STM timer value by the period of the clock driving the timer. The tick count for the function's execution should always be the same (with some exceptions) regardless of the micro's clock frequency, because the timer's speed changes with clock frequency in the same way your code's execution speed does.
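A hedged sketch of that conversion, where read_stm() is a hypothetical stand-in for however you read the free-running system timer on your part, and f_timer_hz is the clock the timer actually counts at:

#include <stdint.h>

extern uint32_t read_stm(void);   /* hypothetical: returns the current timer count */

/* Runs fn() and returns its execution time in microseconds. */
uint32_t measure_us(void (*fn)(void), uint32_t f_timer_hz)
{
        uint32_t start = read_stm();
        fn();                                      /* code under test */
        uint32_t ticks = read_stm() - start;       /* unsigned math tolerates one wrap */
        return (uint32_t)(((uint64_t)ticks * 1000000u) / f_timer_hz);
}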

Why does linux kernel need idle thread?

Rather than "doing nothing" when there is nothing to do, why does the Linux kernel run an idle thread (including on SMP systems)?
When the scheduler decides to switch to the idle task, the dynamic tick mechanism kicks in, disabling the periodic tick until the next timer expires. The tick is re-enabled after that time span, or earlier if an interrupt occurs.
In the meantime, the CPU goes into a well-deserved sleep in an architecture-specific way, thereby saving power. Take a look at the definition of cpu_idle() in arch/x86/kernel/process.c:
/*
 * The idle thread. There's no useful work to be
 * done, so just try to conserve power and have a
 * low exit latency (ie sit in a loop waiting for
 * somebody to say that they'd like to reschedule)
 */
void cpu_idle(void)
What do you mean by "do nothing"??
When the CPU is powered up there is a rather long list of things that happen. Once powered up the CPU cannot "do nothing". It has to do something because there is a voltage and a periodic clock signal. You can power it down again and do ABSOLUTELY nothing but then you have to go through the long list of things to get a steady clock signal when you need it again.
So the idle thread is a thread that does the bare minimum; i.e., if multiplying two floating-point numbers required the fewest cycles and the least circuitry, then the idle thread would be multiplying two floats all the time. In addition, as Wang said, the Linux kernel (in some configurations) notices when cores start executing the idle thread and switches them to a lower frequency, disabling any sort of periodic OS housekeeping. This results in a bit of latency when the core is needed again, but in the meantime much less power is used.
