When I use printk(KERN_INFO ...), I get something like this:
<6>[116584.000183] ...
What is the number between the square brackets exactly?
Seconds since startup. You can control whether this is shown with the CONFIG_PRINTK_TIME option in the kernel config.
It's a timestamp with microsecond resolution. See the printk source.
Currently I'm trying to check the boot time of a Tixi board using systemd on a 2.6.39 Linux kernel. To do so I created a service file that calls a bash script which sets and uses a GPIO. The problem is that my system is not letting me change the value of the GPIO. I can successfully export it and change its direction, but NOT the value. I connected an oscilloscope to check whether the value had changed in the hardware but not been updated in the file, as suggested in some forums, but it was the same: the value just doesn't change!
I should also point out that the same script works if I use System V init, with exactly the same configuration for the kernel, busybox and filesystem.
It is very ironic because I'm already root on the system; nevertheless, even changing the permissions of the file would not allow me to change its value. There is also no feedback from the kernel saying that the operation was not possible; it looks as if it succeeded, but when I check the value, it is the same as before.
I also tried this on Raspbian with a 3.12 kernel (which I switched to systemd), and there it was in fact possible, just in the normal way from userspace.
I would appreciate any idea of what might be the problem, since I have already run out of ideas.
Thanks
PS: These are the commands that should work from the shell:
echo 0 > /sys/class/gpio/gpio104/value
more /sys/class/gpio/gpio104/value
# I get 1, not 0 as I requested
Nevertheless, the same lines work on the same board if I use System V, but not if I use systemd.
This is probably caused by the lack of udev in your new setup; udev changes the permissions for those GPIOs in /sys/class. You might want to put udev back to see if it fixes your problem.
I don't know your image settings, but each GPIO pin needs to be exported prior to usage. Are you doing that, or is it done automatically? If you have the omap mux kernel switch, you do something like:
echo 0x104 > /sys/kernel/debug/omap_mux/cam_d5 (set mode 4 as stipulated in the TI Sitara TRM)
echo 104 > /sys/class/gpio/export (export the pin)
echo out > /sys/class/gpio/gpio104/direction (set the pin as output)
Also do a dmesg | grep gpio and see if there is any initialization problem with the GPIO mux.
Actually I've faced an issue similar to yours, i.e. I was not able to change the value of a set of GPIO pins manually.
It turned out that even though the pin is named gpio, it can only be used as an input (DM3730 gpio_114 and gpio_115).
So please refer to the datasheet and confirm the pin can be used for I/O operations.
Will the Linux kernel jump to calibrate_delay when console_init is commented out? Debugging is difficult in the bring-up environment on the SoC, hence this question.
I have added the printascii patch to bring up my kernel (MIPS interAptiv) and I am seeing that prints come through until init_IRQ, after which no prints appear. I can see that the processor is not coming out of console_init, so I wanted to try with console_init commented out. Also, since the printascii patch is present, my further prints should still come through. Is my understanding correct?
On MIPS, calibrate_delay() is called from within start_secondary(),
which is called from arch/mips/kernel/head.S
If you intend to skip running the calibration loop, you can pass
lpj=<pre-calculated-lpj-value> on the kernel command line (bootargs).
lpj stands for loops-per-jiffy. It is usually calculated by running the CPU in a short loop during boot-up. The lpj value thus calculated is printed to the console as:
[0.001119] Calibrating delay loop... 364.48 BogoMIPS (lpj=1425408)
The exact value of lpj will differ from device to device and depends upon the CPU-freq as well.
The list command prints a set of lines, but I need the single line where I am and where an error has probably occurred.
The 'frame' command will give you what you are looking for (it can be abbreviated to just 'f'). Here is an example:
(gdb) frame
#0 zmq::xsub_t::xrecv (this=0x617180, msg_=0x7ffff00008e0) at xsub.cpp:139
139 int rc = fq.recv (msg_);
(gdb)
Without an argument, 'frame' just tells you where you are (with an argument, it changes the frame). More information on the frame command can be found in the GDB manual.
Either the where or the frame command can be used; where gives more info, printing the function names for the whole stack.
I get the same information while debugging, though not while checking the stack trace. Most probably you compiled with an optimization flag.
Try compiling with -g3 and remove any optimization flags.
Then it might work.
HTH!
Keep in mind that gdb is a powerful tool, capable of working at the level of individual instructions, so it is tied to assembly concepts.
What you are looking for is called the instruction pointer:
The instruction pointer register points to the memory address which the processor will next attempt to execute. The instruction pointer is called ip in 16-bit mode, eip in 32-bit mode, and rip in 64-bit mode.
More detail can be found in the x86 manuals.
All registers available during execution can be shown with:
(gdb) info registers
With that you can find which mode your program is running in (by looking at which of these registers exists).
Then (using rip here, the most common register nowadays; replace with eip, or very rarely ip, if needed):
(gdb) info line *$rip
will show you the line number and source file
(gdb) list *$rip
will show you that line with a few before and after
but probably
(gdb) frame
should be enough in many cases.
All the answers above are correct. What I prefer is to use TUI mode (Ctrl+X A, or the 'tui enable' command), which shows your location and the current function in a separate window, which is very helpful.
Hope that helps too.
I've been doing some module work and I'm having crashes that occur randomly (usually within 10 hours after boot).
The kernel log messages can vary from one crash to the next, but in some cases I get this:
<4>huh, entered c90390a8 with preempt_count 0000010d, exited with c0340000?
The code that generates this log is from the 2.6.14 kernel, kernel/timer.c:
int preempt_count = preempt_count();
fn(data);
if (preempt_count != preempt_count()) {
        printk(KERN_WARNING "huh, entered %p "
               "with preempt_count %08x, exited"
               " with %08x?\n",
               fn, preempt_count,
               preempt_count());
        BUG();
}
For this condition to happen, what would have had to have occurred (obviously preempt_count changed, but what might cause that)?
The other symptom of the crash is that I'm seeing a "scheduling while atomic" error while doing I2C from a workqueue (which should certainly not be atomic, right?). What might cause this?
I figure this post is a long shot but I'm really just looking for anything to troubleshoot at this point.
Just answering from the top of my head: preempt_count is a 32-bit field, which is split up into sub-bit-fields for various purposes. The sub-bit-fields are detailed in O'Reilly's Understanding the Linux Kernel. Again, off the top of my head, I don't know what c0340000 represents. But since you started with 0000010d, and should have ended up with 0000010d, whatever your timer code did is pretty messed up.
One common cause is timer code that does something like spin_lock_bh() but forgets the matching spin_unlock_bh(). That usually results in just a 1-bit difference between the starting and ending preempt_count values, though; in your case, the starting and ending values show a massive change.
Michael
I am working on a Windows NDIS driver using the latest WDK that is in need of a millisecond resolution kernel time counter that is monotonically non-decreasing. I looked through MSDN as well as WDK's documentation but found nothing useful except something called TsTime, which I am not sure whether is just a made-up name for an example or an actual variable. I am aware of NDISGetCurrentSystemTime, but would like to have something that is lower-overhead like ticks or jiffies, unless NDISGetCurrentSystemTime itself is low-overhead.
It seems that there ought to be a low-overhead global variable that stores some sort of kernel time counter. Anyone has insight on what this may be?
Use KeQueryTickCount. And perhaps use KeQueryTimeIncrement once to be able to convert the tick count into a more meaningful time unit.
How about GetTickCount / GetTickCount64? (Check the requirements on the latter.)