When running an INTEGRITY 178 ARINC/APEX image on a PowerPC SBC, we load the program from the U-Boot console with tftpboot and start execution with bootelf. (We actually enter an extra RETURN after the bootelf command so that the INTEGRITY copyright banner is displayed on the U-Boot console when the target begins execution.)
Question: Why is there roughly a 12 second delay between the bootelf command and the apparent start of execution of the application portion of the image? And is there a way to reduce this delay?
Potential sources: U-Boot, POST, GHNet network initialization, something else...?
Thanks,
mlk
For us, the filesystem initialization is the longest delay at boot.
Related
For example, in a video about U-Boot, https://www.youtube.com/watch?v=INWghYZH3hI, near time 43:01, I see the lecturer give U-Boot the kernel address and the FDT address but not the initramfs address (bootz 0x80000000 - 0x80800000), yet Linux boots to the login prompt and he can log in.
How is this possible? I understood that after the kernel boots it starts the init process in the initramfs (I had forgotten there is an order of precedence here). Without an initramfs, how is it possible to run a login process or a shell?
(It's related to programming so I'm asking it here. If requested I can move it to Unix Stack Exchange. Is there a way to move a question somewhere else automatically? I guess not.)
This is what I learned from the comments, combined with my previous knowledge.
You can embed the initramfs archive in the kernel binary image using CONFIG_INITRAMFS_SOURCE=initramfs.cpio.gz in the kernel configuration (menuconfig). As far as I remember, the initramfs image is then placed at the end of the kernel image.
But this was not the case in the YouTube video I mentioned in my question. Near 40:29 in the video, the kernel boot command is shown to contain "root=/dev/mmcblk0p1 rootfstype=ext4 rootwait console=tty0e,115200". So it is telling the kernel to use SD card 0's partition 1 as the root filesystem after boot, instead of using an initramfs. (When you want to use an initramfs, you specify root=/dev/ram and pass the initramfs location: in QEMU you use the -initrd initramfs.cpio.gz option, and on a real machine this information is passed to the kernel through the device tree, in the chosen node's initrd-start and initrd-end addresses.)
Ideally I would have a (debug) tool that runs in the background and tells me the name of the process or driver that breaks my system's latency requirement. Which tool is suitable? Do you have a short example of its usage for the following case?
Test case:
The oscilloscope measures the time between the trigger of a GPIO input and the response on a GPIO output. Usually the response time is 150µs. I trigger every 25ms.
My Linux user-space test program uses poll() and read()+write() to mirror the detected input signal back to an output as the response (a minimal sketch of such a loop is shown below).
The Linux kernel is patched with the PREEMPT_RT patch.
Over a timescale of hours, I see response-time peaks of up to 20 ms.
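A minimal sketch of such a mirror loop, assuming the legacy sysfs GPIO interface and made-up pin numbers (gpio17 as input, gpio27 as output); the pins are assumed to be exported already and the input configured for both edges:

/* Mirror a GPIO input edge to a GPIO output so a scope can measure
 * the input-to-output latency. Uses poll() on the sysfs value file. */
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main(void)
{
    int in_fd  = open("/sys/class/gpio/gpio17/value", O_RDONLY);
    int out_fd = open("/sys/class/gpio/gpio27/value", O_WRONLY);
    struct pollfd pfd = { .fd = in_fd, .events = POLLPRI | POLLERR };
    char val;

    if (in_fd < 0 || out_fd < 0)
        return 1;

    for (;;) {
        lseek(in_fd, 0, SEEK_SET);
        read(in_fd, &val, 1);              /* clear any pending edge */
        if (poll(&pfd, 1, -1) <= 0)        /* block until the next edge */
            continue;
        lseek(in_fd, 0, SEEK_SET);
        read(in_fd, &val, 1);              /* '0' or '1' */
        lseek(out_fd, 0, SEEK_SET);
        write(out_fd, &val, 1);            /* mirror the level to the output */
    }
}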
The best real chance is to switch on tracing in the kernel configuration and build a Linux kernel with:
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_STACK_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_DEBUG_FS=y
Then run your application until the weird behaviour occurs, while tracing with the trace-cmd tool:
trace-cmd start -b 10000 -e 'sched_wakeup*' -e sched_switch -e gpio_value -e irq_handler_entry -e irq_handler_exit /tmp/myUserApplication
Once the anomaly has occurred, stop tracing and extract the buffer into a trace.dat file:
trace-cmd stop
trace-cmd extract
Load that trace.dat file into KernelShark and analyse the CPUs, threads, interrupts, kworker threads, and user-space threads. It is a great way to see what is blocking the system.
I have been trying to understand how hardware interrupts end up in user-space code, by way of the kernel.
My research led me to understand that:
1- An external device needs attention from the CPU
2- It signals the CPU by raising an interrupt (a hardware line to the CPU or bus)
3- The CPU acknowledges it, saves the current context, and looks up the address of the ISR in the interrupt descriptor table (vector table)
4- The CPU switches to kernel (privileged) mode and executes the ISR.
Question #1: How does the kernel store the ISR address in the interrupt vector table? Is it perhaps done by issuing some assembly instructions described in the CPU's user manual? The more detail on this subject the better, please.
Question #2: In user space, how can a programmer write a piece of code that listens for notifications from a hardware device?
This is what I understand so far.
5- The kernel driver for that specific device now has the notification from the device and is executing the ISR.
Question #3: If the programmer in user space wanted to poll the device, I assume this would be done through a system call (or at least this is what I have understood so far). How is this done? How can a driver tell the kernel to be called upon a specific system call so that it can execute the request from the user? And then what happens? How does the driver give the requested data back to user space?
I might be completely off track here; any guidance would be appreciated.
I am not looking for highly detailed answers; I am only trying to understand the general picture.
Question #1: How does the kernel store the ISR address in the interrupt vector table?
The driver calls the request_irq() kernel function (declared in include/linux/interrupt.h, implemented in kernel/irq/manage.c), and the Linux kernel registers the handler in the right way for the current CPU/architecture.
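For illustration, a typical registration looks roughly like this (MYDEV_IRQ, mydev_isr, and the flags are hypothetical placeholders, not taken from the question):

/* Sketch: how a driver hands its ISR to the kernel with request_irq().
 * The kernel wires the handler into the arch-specific vector/IDT
 * machinery; the driver never touches the IDT directly. */
#include <linux/interrupt.h>

#define MYDEV_IRQ 42   /* assumed board-specific interrupt line */

static irqreturn_t mydev_isr(int irq, void *dev_id)
{
    /* acknowledge the device, read its status registers, etc. */
    return IRQ_HANDLED;
}

static int mydev_setup_irq(void *dev)
{
    /* shared IRQs require a unique, non-NULL dev_id cookie */
    return request_irq(MYDEV_IRQ, mydev_isr, IRQF_SHARED, "mydev", dev);
}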
Is it perhaps done by issuing some assembly instructions described in the CPU's user manual?
On x86, the Linux kernel stores ISR addresses in the Interrupt Descriptor Table (IDT); its format is defined by the vendor (Intel's Software Developer's Manual, Volume 3) and is also covered in many resources such as http://en.wikipedia.org/wiki/Interrupt_descriptor_table, http://wiki.osdev.org/IDT, http://phrack.org/issues/59/4.html and http://en.wikibooks.org/wiki/X86_Assembly/Advanced_Interrupts.
A pointer to the IDT is held in a special CPU register (IDTR), which is written and read with the dedicated instructions LIDT and SIDT.
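As a small illustration of the IDTR side of this (the layout follows the Intel manuals; note this is normally a ring-0 operation, and on CPUs/kernels with UMIP enabled executing SIDT from user space will fault, so treat it as a sketch rather than a portable program):

/* Read the IDTR with SIDT: on x86_64 it holds a 2-byte limit and an
 * 8-byte linear base address of the IDT. */
#include <stdio.h>

struct __attribute__((packed)) idtr {
    unsigned short     limit;   /* size of the IDT in bytes minus 1 */
    unsigned long long base;    /* linear address of the IDT        */
};

int main(void)
{
    struct idtr idtr;

    __asm__ volatile ("sidt %0" : "=m"(idtr));
    printf("IDT base = %#llx, limit = %hu\n", idtr.base, idtr.limit);
    return 0;
}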
If the programmer in user space wanted to poll the device, I assume this would be done through a system call (or at least this is what I have understood so far). How is this done? How can a driver tell the kernel to be called upon a specific system call so that it can execute the request from the user? And then what happens? How does the driver give the requested data back to user space?
The driver usually registers a device special file in /dev; pointers to several driver functions are registered for this file as its "file operations". The user-space program opens this file (the open syscall), and the kernel calls the device's own open code; the program then calls the poll or read syscall on that fd, and the kernel calls the .poll or .read function from the driver's file operations (http://www.makelinux.net/ldd3/chp-3-sect-7.shtml). The driver may put the caller to sleep (wait_event*) and the IRQ handler will wake it up (wake_up* - http://www.makelinux.net/ldd3/chp-6-sect-2).
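A rough, self-contained sketch of that pattern (the device name mydev is made up; here a write() to the device stands in for the interrupt handler, which would call wake_up_interruptible() in exactly the same way):

/* Minimal misc char device: read() sleeps until "data" arrives,
 * poll() reports readability, write() simulates the ISR's wake-up. */
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>
#include <linux/uaccess.h>

static DECLARE_WAIT_QUEUE_HEAD(mydev_wq);
static int data_ready;
static char latest_value = '?';

static ssize_t mydev_read(struct file *f, char __user *buf,
                          size_t len, loff_t *off)
{
    /* Put the calling process to sleep until data_ready is set. */
    if (wait_event_interruptible(mydev_wq, data_ready))
        return -ERESTARTSYS;
    data_ready = 0;
    if (len < 1 || copy_to_user(buf, &latest_value, 1))
        return -EFAULT;
    return 1;
}

static unsigned int mydev_poll(struct file *f, poll_table *wait)
{
    poll_wait(f, &mydev_wq, wait);
    return data_ready ? (POLLIN | POLLRDNORM) : 0;
}

/* In a real driver the ISR registered with request_irq() would do this. */
static ssize_t mydev_write(struct file *f, const char __user *buf,
                           size_t len, loff_t *off)
{
    if (len && get_user(latest_value, buf))
        return -EFAULT;
    data_ready = 1;
    wake_up_interruptible(&mydev_wq);   /* wake any sleeping reader */
    return len;
}

static const struct file_operations mydev_fops = {
    .owner = THIS_MODULE,
    .read  = mydev_read,
    .poll  = mydev_poll,
    .write = mydev_write,
};

static struct miscdevice mydev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "mydev",                   /* appears as /dev/mydev */
    .fops  = &mydev_fops,
};

static int __init mydev_init(void) { return misc_register(&mydev); }
static void __exit mydev_exit(void) { misc_deregister(&mydev); }

module_init(mydev_init);
module_exit(mydev_exit);
MODULE_LICENSE("GPL");

With the module loaded, cat /dev/mydev blocks until something does echo 1 > /dev/mydev from another shell, which exercises the same sleep/wake path a real interrupt would drive.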
You can read more about writing Linux drivers in the book Linux Device Drivers, 3rd edition (2005) by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman: https://lwn.net/Kernel/LDD3/
Chapter 3: Char Drivers https://lwn.net/images/pdf/LDD3/ch03.pdf
Chapter 10: Interrupt Handling https://lwn.net/images/pdf/LDD3/ch10.pdf
I have an embedded board with a kernel module of thousands of lines that freezes at random times, under random and complex use cases. What are my options for debugging it?
I have already tried the magic SysRq keys, but they do not work. I guess the explanation is that I am stuck in a loop or a deadlock in code where hardware interrupts are disabled?
Thanks,
Eva.
Typically, embedded boards have a watchdog. You should enable this timer and use a watchdog user process to kick the watchdog hardware. Use nice to lower the priority of the watchdog process, so that it only gets to kick the hardware when higher-priority tasks relinquish the CPU. This gives clues as to the issue. If the device does not reset with a watchdog active, then it may be that only the network or serial port has stopped communicating, i.e. the kernel has not locked up; the issue is just that there is no user-visible activity. The watchdog is also useful if/when this type of issue occurs in the field.
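A minimal sketch of such a user-space kicker, assuming the driver exposes the standard /dev/watchdog interface (the interval is arbitrary and error handling is trimmed):

/* Kick the hardware watchdog periodically. If this process is starved,
 * or the kernel locks up, the watchdog expires and resets the board. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);

    if (fd < 0)
        return 1;
    for (;;) {
        write(fd, "\0", 1);   /* any write kicks the watchdog */
        sleep(10);            /* must be shorter than the hardware timeout */
    }
}

Run it with nice so it only gets CPU time when nothing higher-priority is hogging the system, e.g. nice -n 19 ./wdkick &.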
For a kernel lockup, the kernel's lockup-detector (watchdog) features may be useful. They will work if you have an infinite loop or deadlock, as speculated. However, if this is custom hardware, it is also possible that the SDRAM or a peripheral device latches up and causes abnormal bus activity. That will stop the CPU from fetching valid code, and obviously it is tough for Linux to recover from that.
You can combine the watchdog with some fallow memory that is used as a trace buffer. memmap= and mem= can limit the memory used by the kernel. A driver using this memory can be written that saves trace points which survive a reboot. The fallow memory's ring buffer is then dumped when a watchdog reset is detected at the next kernel boot.
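A hedged sketch of that idea (the base address, size, and use of memremap() are assumptions for illustration; a real implementation would add a magic number and sequence counters so the buffer can be validated and dumped after the reset):

/* Persistent trace buffer in memory the kernel was told to skip,
 * e.g. booted with mem=254M on a 256MB board leaving the top 2MB fallow. */
#include <linux/module.h>
#include <linux/io.h>
#include <linux/string.h>

#define FALLOW_BASE 0x0fe00000UL   /* assumed start of the reserved region */
#define FALLOW_SIZE 0x00200000UL   /* 2 MB */

static char *tracebuf;
static size_t trace_pos;

static void trace_point(const char *msg)
{
    size_t len = strlen(msg) + 1;

    if (trace_pos + len > FALLOW_SIZE)
        trace_pos = 0;                       /* simple ring wrap */
    memcpy(tracebuf + trace_pos, msg, len);  /* DRAM usually survives a warm reset */
    trace_pos += len;
}

static int __init tracebuf_init(void)
{
    tracebuf = memremap(FALLOW_BASE, FALLOW_SIZE, MEMREMAP_WB);
    if (!tracebuf)
        return -ENOMEM;
    trace_point("tracebuf up");
    return 0;
}

static void __exit tracebuf_exit(void)
{
    memunmap(tracebuf);
}

module_init(tracebuf_init);
module_exit(tracebuf_exit);
MODULE_LICENSE("GPL");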
It is also useful to register thread notifiers that printk on context switches, either when the issue is repeatable or to discover how to make it repeatable. Once you determine a sequence of events that leads to the lockup, you can use a scope or logic analyzer to do the final diagnosis. Or it may already be evident at that point which peripheral is the issue.
You may also set panic=-1 and reboot=... on the kernel command line. The kdump facilities are useful if the problem is purely in code.
Related: kernel trap (on the Web Archive). That link may no longer be available, but it is not essential to this answer.
Since the CPU runs in either user or kernel mode, I want to know how the kernel determines this. I mean, if a syscall is invoked, the kernel executes it on behalf of the process, but how does the kernel know that it is executing in kernel mode?
You can tell whether you are in user mode or kernel mode from the privilege level set in the code-segment register (CS). Every instruction the CPU fetches from the memory pointed to by RIP or EIP (the instruction pointer on x86_64 or x86, respectively) is fetched from the segment described in the global descriptor table (GDT) by the current code-segment descriptor. The lower two bits of the selector held in CS form the current privilege level (CPL) at which the code is executing. When a syscall is made, typically through a software interrupt, the CPU checks the current privilege level and, if it is in user mode, exchanges the current code segment for a kernel-level one as determined by the syscall's software-interrupt gate descriptor. It also performs a stack switch and saves the current flags, the user-level CS value, and the RIP value on this new kernel-level stack. When the syscall is complete, the user-mode CS value, flags, and instruction pointer (EIP or RIP) are restored from the kernel stack, and a stack switch is made back to the executing process's own stack.
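As a small illustration of that CPL check, here is a user-space sketch (x86/x86_64 with GCC/Clang inline assembly) that reads the low two bits of CS; from an ordinary program it always prints 3, while the same check inside the kernel would yield 0:

/* Print the current privilege level encoded in the CS selector. */
#include <stdio.h>

int main(void)
{
    unsigned short cs;

    __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
    printf("CS = %#x, CPL = %u\n", (unsigned)cs, (unsigned)(cs & 3));
    return 0;
}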
Broadly speaking, if it is running kernel code it is in kernel mode. The transition from user space to kernel mode (say, for a system call) causes a switch into the kernel; as part of that transition the CPU's privilege mode is changed.
Kernel code only executes in kernel mode; there is no way for kernel code to execute in user mode. When an application invokes a system call, it generates a trap (software interrupt), the mode is switched to kernel mode, and the kernel implementation of the system call is executed. Once it is done, the kernel switches back to user mode and the user application continues processing in user mode.
The term is "supervisor mode", and it applies to x86, ARM, and many other processors as well.
Read this (which applies only to x86 CPUs):
http://en.wikipedia.org/wiki/Ring_(computer_security)
Rings 0 to 3 are the different privilege levels of the x86 CPU. Normally only ring 0 and ring 3 are used (kernel and user), but nowadays ring 1 also finds uses (e.g., VMware used it to emulate the guest's execution of ring-0 code). Only ring 0 has the privilege to run certain privileged instructions (like lgdt or lidt), so a good test at the assembly level is simply to execute such an instruction and see whether your program hits an exception.
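A small sketch of that test: executing the privileged CLI instruction from ring 3 raises a general-protection fault, which Linux delivers to the process as SIGSEGV (assumes x86 with GCC/Clang inline assembly):

/* Try a privileged instruction and report whether the CPU objected. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_fault(int sig)
{
    static const char msg[] = "#GP -> SIGSEGV: not running in ring 0\n";

    (void)sig;
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    _exit(0);
}

int main(void)
{
    signal(SIGSEGV, on_fault);
    __asm__ volatile ("cli");   /* privileged: clear the interrupt flag */
    puts("cli succeeded: this code has ring-0 privilege");
    return 0;
}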
Read this to really identify your current privilege level (look for CPL; it is essentially a pictorial version of Jason's answer):
http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
It is a simple question and does not need the expert-level detail given above.
The question is how a CPU knows whether it is in kernel mode or user mode.
The answer is the "mode bit".
It is a bit in the status register of the CPU's register set.
When the mode bit is 0, the CPU is in kernel mode (also called monitor mode, privileged mode, protected mode, and various other names).
When the mode bit is 1, the CPU is in user mode, and the user can run their own applications without any special kernel intervention.
So simple, isn't it?