How to reset a counter every second in kernel module programming? - linux-kernel

How can I use jiffies interrupts to reset some other kernel counter variable in a kernel module?
I am trying to reset a counter every second, just to check that I reach a certain threshold each second. I am not sure how to approach this in kernel module programming.
Some explanation with an example would be highly appreciated.
Thanks

You could do something like this:
unsigned long later = jiffies + 5*HZ; /* five seconds from now */
if (jiffies > later) {
    counter++;
}
Alternatively, I would recommend using the well-known jiffies comparison macros, as a direct comparison like the one above can misbehave when the jiffies counter overflows:
time_after(j, t): returns true if the jiffies value j is after time t; otherwise, it returns false.
time_before(j, t)
time_after_eq(j, t)
time_before_eq(j, t)
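For the original question (resetting a counter once per second rather than checking a five-second deadline), a kernel timer that re-arms itself every HZ jiffies is a common approach. The following is only a minimal sketch, assuming a kernel new enough to provide timer_setup() (4.15+); the names my_counter and my_timer are made up for illustration:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/timer.h>
#include <linux/atomic.h>

static atomic_t my_counter = ATOMIC_INIT(0);
static struct timer_list my_timer;

static void my_timer_fn(struct timer_list *t)
{
    /* Read and clear the counter atomically, then re-arm one second out. */
    pr_info("events in the last second: %d\n", atomic_xchg(&my_counter, 0));
    mod_timer(&my_timer, jiffies + HZ);
}

static int __init my_init(void)
{
    timer_setup(&my_timer, my_timer_fn, 0);
    mod_timer(&my_timer, jiffies + HZ);
    return 0;
}

static void __exit my_exit(void)
{
    del_timer_sync(&my_timer);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
Elsewhere in the module you would bump the counter with atomic_inc(&my_counter) and compare it against your threshold before the callback clears it.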

Related

How to make a simulation in Verilog have different results every time if it has random values?

I want to generate a different output of the same code every time I run it as it has random values assigned to some variables. Is there a way to do that, for example seeding using time as in C?
Sample code that has the randomization in it:
class ABC;
  rand bit [4 : 0] arr []; // dynamic array
  constraint arr_size {
    arr.size() >= 2;
    arr.size() <= 6;
  }
endclass

module constraint_array_randomization();
  ABC test_class;
  initial begin
    test_class = new();
    test_class.randomize();
    $display("The array has the value = %p ", test_class.arr);
  end
endmodule
I think this is probably dependent on the tool being used. For example, Xcelium from Cadence supports xrun -seed some_seed (Questa has -sv_seed some_seed, I think). I am certain all tools support something similar; look in your simulation tool's reference manual for a way to randomize the seed for every simulation run.
I am not sure if this is possible from inside the simulation.
As mentioned in the comments for Questa, -sv_seed random should do the trick.
Usually, uncontrolled random seeding of a simulation creates repeatability issues: it becomes very difficult to debug a failing case if you do not know the seed. But if you insist, then read on.
You can mimic the C way of seeding with the time. However, there is no good way in Verilog to access the system time, and therefore no good way to do time-based seeding from within the program.
As always, though, a work-around is available. For example, you can use a $system call to get the system time (this is system-dependent) and then use the srandom function to set the seed. The following (Linux-based) example might work for you, or you can adapt it to your system.
Here the time is obtained as Unix time from the date +'%s' command, written to a file, and then read back as an int using $fopen/$fscanf.
module constraint_array_randomization();
  ABC test_class;
  int fh;
  int today;
  initial begin
    // get system time
    $system("date +'%s' > date_file"); // write the date into a file
    fh = $fopen("date_file", "r");
    void'($fscanf(fh, "%d", today)); // cast to void to avoid warnings
    $fclose(fh);
    $system("rm -f date_file"); // remove the file
    $display("time = %d", today);
    test_class = new();
    test_class.srandom(today); // seed it
    test_class.randomize();
    $display("The array has the value = %p ", test_class.arr);
  end
endmodule

Time of day clock.

Question: Suppose that you have a clock chip operating at 100 kHz, that is, every ten microseconds the clock will tick and subtract one from a counter. When the counter reaches zero, an interrupt occurs. Suppose that the counter is 16 bits, and you can load it with any value from 0 to 65535. How would you implement a time of day clock with a resolution of one second?
My understanding:
You can't store 100,000 in a 16-bit counter, but you can store 50,000, so would you have to use some sort of flag and only act on every other interrupt?
But I'm not sure how to go about implementing that. Any pseudocode or a general explanation would be most appreciated.
Since you can't get the range you want in hardware, you need to extend the hardware counter with some sort of software counter (e.g. every time the hardware counter fires, increment the software counter). In your case you just need an integer value to keep track of whether or not you have seen a hardware tick. If you wanted to use this for some sort of scheduling policy, you could do something like this:
static int counter; /* initialized to 0 somewhere */

void trap_handler(int trap_num)
{
    switch (trap_num)
    {
    ...
    case TP_TIMER:
        if (counter == 0)
            counter++;
        else if (counter == 1)
        {
            /* Action here (flush cache/yield/...) */
            fs_cache_flush();
            counter = 0;
        } else {
            /* Impossible case */
            panic("Counter value bad\n");
        }
        break;
    ...
    default:
        panic("Unknown trap: 0x%x\n", trap_num);
        break;
    }
    ...
}
If you want example code in another language let me know and I can change it. I'm assuming you're using C because you tagged your question with OS. It could also make sense to keep your counter on a per process basis (if your operating system supports processes).
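To make that concrete for the time-of-day clock itself, here is a rough sketch (load_hw_counter is a hypothetical helper standing in for whatever reloads your timer chip): the 16-bit counter is loaded with 50,000 so it interrupts every half second at 100 kHz, and the time of day advances only on every second interrupt.
#include <stdint.h>

extern void load_hw_counter(uint16_t value); /* hypothetical: reload the 16-bit hardware counter */

static int half_seconds;                     /* 0 or 1: which half of the current second we are in */
static volatile uint32_t tod_seconds;        /* time of day, in seconds since midnight */

void timer_isr(void)
{
    load_hw_counter(50000);                  /* re-arm: 50,000 ticks at 100 kHz = 0.5 s */

    if (++half_seconds == 2) {               /* two interrupts make one full second */
        half_seconds = 0;
        if (++tod_seconds == 24UL * 60 * 60)
            tod_seconds = 0;                 /* wrap at midnight */
    }
}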

Is it possible to inject values in the frama-c value analyzer?

I'm experimenting with the Frama-C value analyzer to evaluate C code, which is actually threaded.
I want to ignore any threading problems that might occur and just inspect the possible values for a single thread. So far this works by setting the entry point to where the thread starts.
Now to my problem: inside one thread I read values that are written by another thread. Because Frama-C does not (and should not?) consider threading (currently), it assumes my variable lies in some broad range, but I know that the range is in fact much smaller.
Is it possible to tell the value analyzer the value range of this variable?
Example:
volatile int x = 0;
void f() {
while(x==0)
sleep(100);
...
}
Here Frama-C detects that x is volatile and thus has the range [--..--], but I know what the other thread will write into x, and I want to tell the analyzer that x can only be 0 or 1.
Is this possible with Frama-C, especially in the GUI?
Thanks in advance
Christian
This is currently not possible automatically. The value analysis considers that volatile variables always contain the full range of values included in their underlying type. There exists, however, a proprietary plug-in that transforms accesses to volatile variables into calls to a user-supplied function. In your case, your code would essentially be transformed into this:
int x = 0;

void f() {
    while (1) {
        x = f_volatile_x();
        if (x == 0)
            sleep(100);
        ...
    }
}
By specifying f_volatile_x correctly, you can ensure it returns values between 0 and 1 only.
If the variable 'x' is not modified in the thread you are studying, you could also initialize it at the beginning of the 'main' function with:
x = Frama_C_interval(0, 1);
This is a function defined by Frama-C in ...../share/frama-c/builtin.c, so you have to add that file to your inputs when you use it.
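A sketch of that second suggestion might look like the following (treat the details as assumptions: the declaration below simply mirrors the builtin.c prototype, and the volatile qualifier is dropped just as the plug-in-transformed code above drops it):
int Frama_C_interval(int min, int max); /* provided by share/frama-c/builtin.c */

int x = 0; /* volatile removed, as in the transformed code above */

void f(void);

int main(void)
{
    x = Frama_C_interval(0, 1); /* the value analysis now assumes x is 0 or 1 */
    f();
    return 0;
}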

Calculating the speed of routines?

What would be the best and most accurate way to determine how long it took to process a routine, such as a procedure or function?
I ask because I am currently trying to optimize a few functions in my application, and when I test the changes it is hard to tell just by looking whether there was any improvement at all. If I could get an accurate, or nearly accurate, time for a routine, I would have a much clearer idea of whether the changes to the code actually helped.
I considered using GetTickCount, but I am unsure whether that would be anywhere near accurate.
It would be useful to have a reusable function/procedure to time a routine, used something like this:
// < prepare for timing the code
...
ExecuteSomeCode; // < code to test
...
// < stop timing and return the time it took to process
I look forward to hearing some suggestions.
Thanks.
Craig.
To my knowledge, the most accurate method is to use QueryPerformanceCounter together with QueryPerformanceFrequency:
code:
var
  Freq, StartCount, StopCount: Int64;
  TimingSeconds: real;
begin
  QueryPerformanceFrequency(Freq);
  QueryPerformanceCounter(StartCount);
  // Execute process that you want to time: ...
  QueryPerformanceCounter(StopCount);
  TimingSeconds := (StopCount - StartCount) / Freq;
  // Display timing: ...
end;
Try Eric Grange's Sampling Profiler.
From Delphi 6 upwards you can use the x86 timestamp counter (RDTSC).
This counts CPU cycles; on a 1 GHz processor, each count takes one nanosecond.
You can't get more accurate than that.
function RDTSC: Int64; assembler;
asm
// RDTSC can be executed out of order, so the pipeline needs to be flushed
// to prevent RDTSC from executing before your code is finished.
// Flush the pipeline
XOR eax, eax
PUSH EBX
CPUID
POP EBX
RDTSC //Get the CPU's time stamp counter.
end;
On x64 the following code is more accurate, because it does not suffer from the delay of CPUID.
rdtscp // On x64 we can use the serializing version of RDTSC
push rbx // Serialize the code after, to avoid OoO sneaking in
push rax // subsequent instructions prior to executing RDTSCP.
push rdx // See: http://www.intel.de/content/dam/www/public/us/en/documents/white-papers/ia-32-ia-64-benchmark-code-execution-paper.pdf
xor eax,eax
cpuid
pop rdx
pop rax
pop rbx
shl rdx,32
or rax,rdx
Use the above code to get the timestamp before and after executing your code.
Most accurate method possible and easy as pie.
Note that you need to run the test at least 10 times to get a good result: on the first pass the cache will be cold, and random hard disk reads and interrupts can throw off your timings.
Because this thing is so accurate it can give you the wrong idea if you only time the first run.
Why you should not use QueryPerformanceCounter()
QueryPerformanceCounter() reports the same amount of time even if the CPU slows down; it compensates for CPU throttling. RDTSC, by contrast, will report the same number of cycles if your CPU slows down due to overheating or whatnot.
So if your CPU starts running hot and needs to throttle down, QueryPerformanceCounter() will say that your routine is taking more time (which is misleading) and RDTSC will say that it takes the same amount of cycles (which is accurate).
This is what you want because you're interested in the amount of CPU-cycles your code uses, not the wall-clock time.
From the latest Intel docs: http://software.intel.com/en-us/articles/measure-code-sections-using-the-enhanced-timer/?wapkw=%28rdtsc%29
Using the Processor Clocks
This timer is very accurate. On a system with a 3GHz processor, this timer can measure events that last less than one nanosecond. [...] If the frequency changes while the targeted code is running, the final reading will be redundant since the initial and final readings were not taken using the same clock frequency. The number of clock ticks that occurred during this time will be accurate, but the elapsed time will be an unknown.
When not to use RDTSC
RDTSC is useful for basic timing. If you're timing multithreaded code on a single-CPU machine, RDTSC will work fine. If you have multiple CPUs, the start count may come from one CPU and the end count from another.
So don't use RDTSC to time multithreaded code on a multi-CPU machine. On a single-CPU machine it works fine, and single-threaded code on a multi-CPU machine is also fine.
Also remember that RDTSC counts CPU cycles. If something takes time but doesn't use the CPU, like disk I/O or the network, then RDTSC is not a good tool.
But the documentation says RDTSC is not accurate on modern CPUs
RDTSC is not a tool for keeping track of time, it's a tool for keeping track of CPU cycles.
For that it is the only tool that is accurate. Routines that keep track of time are not accurate on modern CPUs because the CPU clock is not absolute like it used to be.
You didn't specify your Delphi version, but Delphi XE has a TStopWatch declared in unit Diagnostics. This will allow you to measure the runtime with reasonable precision.
uses
  Diagnostics;
var
  sw: TStopWatch;
begin
  sw := TStopWatch.StartNew;
  <dosomething>
  Writeln(Format('runtime: %d ms', [sw.ElapsedMilliseconds]));
end;
I ask because I am currently trying to
optimize a few functions
It is natural to think that measuring is how you find out what to optimize, but there's a better way.
If something takes a large enough fraction of time (F) to be worth optimizing, then if you simply pause it at random, F is the probability you will catch it in the act.
Do that several times, and you will see precisely what it is spending its time on, down to the exact lines of code.
More on that.
Here's an example.
Fix it, and then do an overall measurement to see how much you saved, which should be about F.
Rinse and repeat.
Here are some procedures I made to check the duration of a function. I stuck them in a unit I called uTesting and then just add it to the uses clause during my testing.
Declaration
Procedure TST_StartTiming(Index : Integer = 1);
  // Starts the timer by storing Now in Time
  // Index is the index of the timer to use. 100 are available.

Procedure TST_StopTiming(Index : Integer = 1; Display : Boolean = True; DisplaySM : Boolean = False);
  // Stops the timer and stores the difference between Time and Now into Time
  // Displays the result if Display is True
  // Index is the index of the timer to use. 100 are available.

Procedure TST_ShowTime(Index : Integer = 1; Detail : Boolean = True; DisplaySM : Boolean = False);
  // Displays Time in a ShowMessage (or OutputDebugString)
  // Uses DateTimeToStr if Detail is False, else it breaks it down (H, M, S, MS)
  // Index is the index of the timer to use. 100 are available.
Variables declared:
var
  Time : array[1..100] of TDateTime;
Implementation
Procedure TST_StartTiming(Index : Integer = 1);
begin
  Time[Index] := Now;
end;

Procedure TST_StopTiming(Index : Integer = 1; Display : Boolean = True; DisplaySM : Boolean = False);
begin
  Time[Index] := Now - Time[Index];
  if Display then TST_ShowTime(Index, True, DisplaySM); // forward the index and display mode
end;

Procedure TST_ShowTime(Index : Integer = 1; Detail : Boolean = True; DisplaySM : Boolean = False);
var
  H, M, S, MS : Word;
begin
  if Detail then
  begin
    DecodeTime(Time[Index], H, M, S, MS);
    if DisplaySM then
      ShowMessage('Hour = ' + FloatToStr(H) + #13#10 +
                  'Min = '  + FloatToStr(M) + #13#10 +
                  'Sec = '  + FloatToStr(S) + #13#10 +
                  'MS = '   + FloatToStr(MS) + #13#10)
    else
      OutputDebugString(PChar('Hour = ' + FloatToStr(H) + #13#10 +
                              'Min = '  + FloatToStr(M) + #13#10 +
                              'Sec = '  + FloatToStr(S) + #13#10 +
                              'MS = '   + FloatToStr(MS) + #13#10));
  end
  else
  begin
    ShowMessage(TimeToStr(Time[Index]));
    OutputDebugString(PChar(TimeToStr(Time[Index])));
  end;
end;
Use this http://delphi.about.com/od/windowsshellapi/a/delphi-high-performance-timer-tstopwatch.htm
clock_gettime() is the high-resolution solution, which is precise to nanoseconds; you can also use RDTSC, which is precise to a CPU cycle; and lastly you can simply use gettimeofday().
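A quick sketch of the clock_gettime() option, since none of the three calls is shown above (POSIX C; on older glibc you may need to link with -lrt):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, stop;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... routine under test ... */
    clock_gettime(CLOCK_MONOTONIC, &stop);

    printf("%ld ns\n",
           (stop.tv_sec - start.tv_sec) * 1000000000L +
           (stop.tv_nsec - start.tv_nsec));
    return 0;
}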

How do you measure the time a function takes to execute?

How can you measure the amount of time a function will take to execute?
This is a relatively short function and the execution time would probably be in the millisecond range.
This particular question relates to an embedded system, programmed in C or C++.
The best way to do that on an embedded system is to set an external hardware pin when you enter the function and clear it when you leave the function. This is done preferably with a little assembly instruction so you don't skew your results too much.
Edit: One of the benefits is that you can do it in your actual application and you don't need any special test code. External debug pins like that are (should be!) standard practice for every embedded system.
There are three potential solutions:
Hardware Solution:
Use a free output pin on the processor and hook an oscilloscope or logic analyzer to the pin. Initialize the pin to a low state. Just before calling the function you want to measure, assert the pin to a high state, and just after returning from the function, deassert it.
*io_pin = 1;
myfunc();
*io_pin = 0;
Bookworm solution:
If the function is fairly small and you can manage the disassembled code, you can crack open the processor architecture databook and count the cycles it will take the processor to execute each instruction. This will give you the number of cycles required.
Time = # cycles / processor clock rate
This is easier to do for smaller functions, or for code written in assembler (for a PIC microcontroller, for example).
Timestamp counter solution:
Some processors have a timestamp counter which increments at a rapid rate (every few processor clock ticks). Simply read the timestamp before and after the function.
This will give you the elapsed time, but beware that you might have to deal with the counter rollover.
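As a small illustration of this approach on x86 (an assumption that you are using GCC or Clang, which expose the counter through __rdtsc() in <x86intrin.h>; MSVC has the same intrinsic in <intrin.h>):
#include <stdint.h>
#include <x86intrin.h>

/* Returns the number of TSC ticks the call took; beware counter rollover
 * and migration between cores, as noted above. */
uint64_t cycles_for(void (*fn)(void))
{
    uint64_t start = __rdtsc();
    fn();
    return __rdtsc() - start;
}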
Invoke it in a loop with a ton of invocations, then divide by the number of invocations to get the average time.
so:
// begin timing
for (int i = 0; i < 10000; i++) {
invokeFunction();
}
// end time
// divide by 10000 to get actual time.
If you're using Linux, you can time a program's run time by typing at the command line:
time [function_name]
if you run only the function in main() (assuming C++), the rest of the app's time should be negligible.
I repeat the function call a lot of times (millions) but also employ the following method to discount the loop overhead:
start = getTicks();
repeat n times {
myFunction();
myFunction();
}
lap = getTicks();
repeat n times {
myFunction();
}
finish = getTicks();
// overhead + function + function
elapsed1 = lap - start;
// overhead + function
elapsed2 = finish - lap;
// overhead + function + function - overhead - function = function
ntimes = elapsed1 - elapsed2;
once = ntimes / n; // Average time it took for one function call, sans loop overhead
Instead of calling function() twice in the first loop and once in the second loop, you could just call it once in the first loop and not call it at all (i.e. an empty loop) in the second; however, the empty loop could be optimized out by the compiler, giving you negative timing results :)
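Here is one way the scheme above might look in C (a sketch; getTicks() is stood in for by clock_gettime(CLOCK_MONOTONIC) reduced to nanoseconds):
#include <stdint.h>
#include <time.h>

static uint64_t get_ticks(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Average cost of one call to fn, with the loop overhead cancelled out. */
uint64_t average_ns(void (*fn)(void), unsigned n)
{
    uint64_t start = get_ticks();
    for (unsigned i = 0; i < n; i++) { fn(); fn(); }  /* overhead + 2 * fn */
    uint64_t lap = get_ticks();
    for (unsigned i = 0; i < n; i++) { fn(); }        /* overhead + fn */
    uint64_t finish = get_ticks();

    /* (overhead + 2*fn) - (overhead + fn) = n calls worth of fn */
    return ((lap - start) - (finish - lap)) / n;
}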
start_time = timer
function()
exec_time = timer - start_time
Windows XP/NT Embedded or Windows CE/Mobile
You can use QueryPerformanceCounter() to get the value of a VERY FAST counter before and after your function. Then you subtract those 64-bit values to get a delta of "ticks". Using QueryPerformanceFrequency() you can convert the "delta ticks" into an actual time unit. You can refer to the MSDN documentation about those Win32 calls.
Other embedded systems
Without an operating system, or with only a basic OS, you will have to:
program one of the internal CPU timers to run and count freely.
configure it to generate an interrupt when the timer overflows, and in this interrupt routine increment a "carry" variable (so that you can actually measure time longer than the resolution of the chosen timer).
before your function, save BOTH the "carry" value and the value of the CPU register holding the running ticks for the counting timer you configured.
do the same after your function.
subtract them to get a delta of counter ticks (a sketch of this scheme follows after the notes below).
from there it is just a matter of knowing how long a tick is on your CPU/hardware, given the external clock and the prescaling you configured while setting up your timer. You multiply that "tick length" by the "delta ticks" you just got.
VERY IMPORTANT: Do not forget to disable interrupts before and restore them after reading those timer values (both the carry and the register value), otherwise you risk saving inconsistent values.
NOTES
This is very fast because it is only a few assembly instructions to disable interrupts, save two integer values and re-enable interrupts. The actual subtraction and conversion to real time units occurs OUTSIDE the zone of time measurement, that is, AFTER your function.
You may wish to put that code into a function so you can reuse it everywhere, but it may slow things down a bit because of the function call and the pushing of all the registers and parameters onto the stack, then popping them again. In an embedded system this may be significant. It may be better in C to use MACROS instead, or to write your own assembly routine that saves/restores only the relevant registers.
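A bare-metal sketch of the carry-plus-register scheme above (the register name TIMER_COUNT and the interrupt enable/disable helpers are hypothetical placeholders for whatever your part and toolchain provide):
#include <stdint.h>

extern volatile uint16_t TIMER_COUNT;   /* hypothetical free-running timer register */
extern void disable_interrupts(void);   /* hypothetical toolchain intrinsics */
extern void enable_interrupts(void);

static volatile uint32_t timer_carry;   /* incremented by the overflow interrupt */

void timer_overflow_isr(void)
{
    timer_carry++;
}

/* Snapshot the carry and the counter consistently; call this before and after
 * the function under test, then subtract the two results to get delta ticks. */
uint64_t read_ticks(void)
{
    uint64_t ticks;

    disable_interrupts();
    ticks = ((uint64_t)timer_carry << 16) | TIMER_COUNT;
    enable_interrupts();

    return ticks;
}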
It depends on your embedded platform and what type of timing you are looking for. For embedded Linux, there are several ways you can accomplish this. If you wish to measure the amount of CPU time used by your function, you can do the following:
#include <time.h>
#include <stdio.h>
#include <stdlib.h>

#define SEC_TO_NSEC(s) ((s) * 1000 * 1000 * 1000)

int work_function(int c) {
    // do some work here
    int i, j;
    int foo = 0;
    for (i = 0; i < 1000; i++) {
        for (j = 0; j < 1000; j++) {
            foo ^= i + j;
        }
    }
    return foo;
}

int main(int argc, char *argv[]) {
    struct timespec pre;
    struct timespec post;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &pre);
    work_function(0);
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &post);
    printf("time %ld\n",
           (SEC_TO_NSEC(post.tv_sec) + post.tv_nsec) -
           (SEC_TO_NSEC(pre.tv_sec) + pre.tv_nsec));
    return 0;
}
You will need to link this with the realtime library, just use the following to compile your code:
gcc -o test test.c -lrt
You may also want to read the man page for clock_gettime; there are some issues with running this code on an SMP-based system that could invalidate your testing. You could use something like sched_setaffinity() or the command-line cpuset to force the code onto only one core, for example as in the sketch below.
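Pinning the measuring process to a single core might look like this (a sketch; Linux-specific and requires _GNU_SOURCE):
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process to CPU 0 so both clock readings come from the same core. */
static int pin_to_cpu0(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);
    return sched_setaffinity(0, sizeof(set), &set); /* pid 0 means the calling process */
}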
If you are looking to measure user and system time, then you could use times(NULL), which returns something like jiffies. Or you can change the clock for clock_gettime() from CLOCK_THREAD_CPUTIME_ID to CLOCK_MONOTONIC... but be careful of wrap-around with CLOCK_MONOTONIC.
For other platforms, you are on your own.
Drew
I always implement an interrupt driven ticker routine. This then updates a counter that counts the number of milliseconds since start up. This counter is then accessed with a GetTickCount() function.
Example:
#define TICK_INTERVAL 1 // milliseconds between ticker interrupts
static unsigned long tickCounter;
interrupt ticker(void)
{
    tickCounter += TICK_INTERVAL;
    ...
}

unsigned long GetTickCount(void)
{
    return tickCounter;
}
In your code you would time the code as follows:
int function(void)
{
    unsigned long ticks = GetTickCount();

    do something ...

    printf("Time is %lu", GetTickCount() - ticks);
}
In OS X terminal (and probably Unix, too), use "time":
time python function.py
If the code is .NET, use the Stopwatch class (.NET 2.0+), NOT DateTime.Now. DateTime.Now isn't updated accurately enough and will give you crazy results.
If you're looking for sub-millisecond resolution, try one of these timing methods. They'll all get you resolution in at least the tens or hundreds of microseconds:
If it's embedded Linux, look at Linux timers:
http://linux.die.net/man/3/clock_gettime
Embedded Java, look at nanoTime(), though I'm not sure this is in the embedded edition:
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#nanoTime()
If you want to get at the hardware counters, try PAPI:
http://icl.cs.utk.edu/papi/
Otherwise you can always go to assembler. You could look at the PAPI source for your architecture if you need some help with this.
