Linux kernel Intel video driver (i915) backlight brightness difference between releases - linux-kernel

I noticed that the actual brightness of my laptop's screen differs between Linux kernel 5.4 and 3.13 (i915 driver) for the same backlight value, even though the range of available values is the same. What might have changed to cause that? I prefer the 3.13 behaviour.
TL;DR
I noticed that the backlight works differently on my laptop on kernel 5.4 than it did on 3.13. The range of values that can be written to /sys/class/backlight/intel_backlight/brightness is the same, but the perceived brightness at value 1 is much lower on 3.13, whereas the maximum seems the same.
I've found out that the backlight on my laptop is most probably controlled by the i915 driver, whose source code is at https://github.com/torvalds/linux/tree/master/drivers/gpu/drm/i915 (that is a mirror, per Why does the Linux kernel repository have only one branch?, but on kernel.org I do not know how to browse the code tree to provide a web link to the driver).
How can I find the code that processes input written to /sys/class/backlight/intel_backlight/brightness? Hints on what to do next are also welcome.
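One way to start is to exercise the sysfs interface and then grep the kernel tree for the code behind it. A rough sketch, assuming a source checkout in ~/linux (file names differ between kernel versions; in recent trees the i915 panel/backlight code lives under drivers/gpu/drm/i915/display/):
$ cat /sys/class/backlight/intel_backlight/max_brightness          # upper end of the range
$ echo 1 | sudo tee /sys/class/backlight/intel_backlight/brightness  # try the low end
# The sysfs file itself is implemented by the generic backlight class:
$ less ~/linux/drivers/video/backlight/backlight.c
# Find where i915 registers with that class and maps the value to the panel:
$ grep -rn "intel_backlight" ~/linux/drivers/gpu/drm/i915/ | head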

Related

How does the ARM Linux kernel map console output to a hardware device on boot?

I'm currently struggling to determine how I can get an emulated environment via QEMU to correctly display output on the command line. I have an environment that displays perfectly well using the virt reference board, a Cortex-A9 CPU, and the 4.1 Linux kernel cross-compiled for ARM. However, if I swap out the 4.1 kernel for 2.6 or 3.1, suddenly I can no longer see console output.
While solving this issue is my main goal, I feel like I lack a critical understanding of how Linux and the hardware initially integrate before userspace configurations via boot scripts and whatnot have a chance to execute. I am aware of the device tree, and have a loose understanding of how it works. But the issue I ran into where a different kernel version broke console availability entirely confounds me. Can someone explain how Linux initially maps console output to a hardware device on the ARM architecture?
Thank you!
The answer depends quite a bit on which kernel version, what config options are set, what hardware, and also possibly on kernel command line arguments.
For modern kernels, the answer is that it looks in the device tree blob it is passed for descriptions of devices, some of which will be serial ports, and it initializes those. The kernel config or command line will specify which of those is to be used for the console. For earlier kernels, especially if you go all the way back to 2.6, use of device tree was less universal, and for some hardware the boot loader simply said "this is a versatile express board" (for instance) and the kernel had compiled-in data structures to tell it where the devices were for each board that it supported. As the transition to device tree progressed, boards were converted one by one, and sometimes a few devices at a time, so what exactly the situation was for any specific kernel version depends on which board you're using.
The other thing that I rather suspect you're running into is that if the kernel crashes early in bootup (i.e. before it finds the serial port at all), then it will never output anything. So if the kernel is simply too old to support the "virt" board properly, or if your kernel config is missing something important, then the chances are good that it crashes in early boot without being able to print you a useful message. (Sometimes the "earlycon" or "earlyprintk" kernel arguments can assist here, but not always.)
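To make the mapping concrete, here is an illustrative invocation for the virt board (the image name and CPU model are placeholders; the virt board's PL011 UART conventionally sits at 0x09000000 and appears as ttyAMA0):
$ qemu-system-arm -M virt -cpu cortex-a15 -nographic \
    -kernel zImage \
    -append "console=ttyAMA0 earlycon=pl011,0x09000000"
# To inspect the device tree QEMU passes to the kernel:
$ qemu-system-arm -M virt,dumpdtb=virt.dtb -cpu cortex-a15
$ dtc -I dtb -O dts virt.dtb | grep -A2 chosen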

Embedded Linux device blocking RS485 bus during startup

I'm having trouble with an industrial Linux computer I'm working with to achieve communication over an RS485 bus with multiple connected devices. What I've encountered is that the IO pins used by the RS485 USART driver are set to different levels at startup instead of going to the RS485 idle/tri-state. As a result, the other devices on the bus are blocked for more than 30 seconds while the device boots up, triggering all sorts of external problems. The course of events can be viewed in the attached image, where I've measured the output voltages with an oscilloscope during startup.
My guess is that the actual driver is not started until the voltage levels reach their tri-state levels (e.g. ~2.2V for this device). After that everything works as expected.
I've tried to find config files that set the default IO level of the pins at boot (thinking this may be set by the bootloader), to no avail.
Also, I've tried to apply a startup-script to run "early enough" to set DATA- high, but the device in question does not provide any interface to control these pins as regular GPIOs as far as I can tell.
Any help, tips or insights would be much appreciated!
EDIT: I am not an experienced Linux developer, so please highlight if I've left out any important details.
Some specs:
ARM920T rev 0 (v41) CPU
Proprietary distribution of Linux 2.6
Uses Busybox
Atmel USART drivers
Extract from boot log:
Linux version 2.6.28.10 (root#) (gcc version 4.1.2) #94 PREEMPT Tue Oct 29 10:22:19 CET 2013
CPU: ARM920T [41129200] revision 0 (ARMv4T), cr=c0003177
/...
.../
RS485 mode for port /dev/ttyS3 enabled
/...
... (I'm guessing the ~30 secs elapse here)
.../
atmel_usart.3: ttyS3 at MMIO 0xfffcc000 (irq = 9) is a ATMEL_SERIAL
atmel_serial.3: Putting the RS485 RTS pin down
/...
...
.../
Full boot log: https://drive.google.com/file/d/0B2XYl1mNCa8jNUZ5V0Nic1hkU0U/view
Similar issue:
Possibly a similar issue is discussed here: UART initialisation: Prevent UART to pull RTS high
But I'm not sure how to proceed with the suggested solution.
This is little more than wild speculation, but it might be worth adding a start-up script that echoes a NULL character to the device (e.g. /dev/ttyS1 or whatever) as early as possible during start-up. This might be enough to kick the driver into initialising the hardware.
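As a minimal sketch of that idea (assuming a BusyBox-style init that runs /etc/init.d/rcS early; the device node and the script's placement are guesses for this particular board):
#!/bin/sh
# Hypothetical snippet for early in /etc/init.d/rcS: write a single NUL
# byte to the RS485 port to kick the serial driver into initialising
# the hardware.
printf '\000' > /dev/ttyS3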
You could also try to locate the driver in the Linux source to look at how it starts up.
You probably have access to the source code, so you can investigate who messes with that GPIO, and when. Just grep the kernel source for the Atmel GPIO controller port addresses to figure out what happens. If you are lucky, there may be a kernel command line option that you can pass from the bootloader to set the line to what you need in advance.
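A rough sketch of that search, assuming a 2.6.28-era source layout (paths may differ in a vendor kernel):
$ less drivers/serial/atmel_serial.c               # the driver and its start-up path
$ grep -rn "usart3" arch/arm/mach-at91/ | head     # board code that muxes the USART3 pins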
This answer may help if you can find the things mentioned below for your board!
I once had a similar issue with PWM. There I found that my bootloader was responsible; I changed the bootloader configuration and it started working fine.
Check the BSP provided by your board vendor or a third party (if you have the source). If your bootloader is U-Boot, look inside u-boot-(source)/include/configs/(your-board).h, where you can find the configuration for RS485. Using the datasheet for your board, you can also check which other functions are muxed onto the same pin, disable those that are not required at boot time, and enable RS485.
Enabling/disabling is done by changing the values 0, 1, or 2 as per your configuration, and you can also disable an option simply by commenting out its line.
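A quick way to locate the relevant options (a sketch; the header name is a placeholder and the exact macro names depend on your BSP):
$ cd u-boot-source
$ grep -n "USART\|RS485\|GPIO" include/configs/your-board.h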

I2C support in new kernels seems to be broken

Can anybody tell me what the difference is in I2C support between kernels before and after version 3.10?
It looks like something changed after 3.10, but I can't work out what exactly. I'm working on an Intel Core i5-2500K CPU with integrated video and am using the ddccontrol tool to change the brightness of my monitor. On kernel 3.2.32 I can do that, but since 3.10.5 I2C support seems to be broken.
I don't know what exactly changed, but here are the outputs from the old and new kernels (i2cdetect -l):
3.2.32: http://pastebin.com/SqDPcwS9
3.10.5: http://pastebin.com/YCTmX90m
While on 3.2.32 I was able to use the i2c-4 device to control my monitor, on 3.10.5 the list of i2c devices is shorter, and I don't see any GPIO-based buses (whatever that means). On 3.10.5 the system only detects a monitor on i2c-1, but says that there is no DDC/CI support on that device (http://pastebin.com/vZ4bALmt). On 3.2.32 everything is OK: http://pastebin.com/QL0fAZVC
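For reference, the commands involved look roughly like this (the bus number and brightness value are examples; 0x10 is the standard VCP register for brightness):
$ i2cdetect -l                               # list the available I2C buses
$ ddccontrol -p                              # probe for DDC/CI-capable monitors
$ ddccontrol -r 0x10 -w 50 dev:/dev/i2c-4    # set brightness to 50 via bus 4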
Maybe I'm missing something, e.g. some new config option that has been added or changed in the kernel.
It seems I'm not alone in this trouble: there are a lot of questions about I2C and ddccontrol around the web, but there is still no answer.
Need your help, really...
Thanks!
---
UPD: on kernel 3.7 I've observed the same behavior as on 3.10, so the breakage isn't at 3.10 but a bit earlier.
I'm not sure, but I think this was fixed in commit 59b016fe8fe83920e8717163289e61ab8e327b90 on 2013-10-17.
Can you try a more recent kernel (3.12)?
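If you have a clone of the mainline tree, you can check which release first contained that commit; git describe --contains prints the first tag that includes it:
$ git describe --contains 59b016fe8fe83920e8717163289e61ab8e327b90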

How can I override the CUDA kernel execution time limit on Windows with a secondary GPU?

Nvidia's website explains the time-out problem:
Q: What is the maximum kernel execution time?
On Windows, individual GPU program launches have a maximum run time of around 5 seconds. Exceeding this time limit usually will cause a launch failure reported through the CUDA driver or the CUDA runtime, but in some cases can hang the entire machine, requiring a hard reset. This is caused by the Windows "watchdog" timer that causes programs using the primary graphics adapter to time out if they run longer than the maximum allowed time.
For this reason it is recommended that CUDA is run on a GPU that is NOT attached to a display and does not have the Windows desktop extended onto it. In this case, the system must contain at least one NVIDIA GPU that serves as the primary graphics adapter.
Source: https://developer.nvidia.com/cuda-faq
So it seems that Nvidia believes, or at least strongly implies, that having multiple (Nvidia) GPUs, with proper configuration, can prevent this from happening?
But how? So far I've tried lots of ways, but there is still the annoying time-out on a GK110 GPU that is: (1) plugged into the secondary PCIe 16x slot; (2) not connected to any monitor; (3) set as a dedicated PhysX card in the driver control panel (as recommended by some other people). The time-out is still there.
If your GK110 is a Tesla K20c GPU, then you should switch the device from WDDM mode to TCC mode. This can be done with the nvidia-smi.exe tool that gets installed with the driver. Use the Windows search function to find this file (nvidia-smi.exe), then use the command line help (nvidia-smi --help) to discover the commands necessary to switch a GPU from WDDM to TCC mode.
Once you have done this, the windows watchdog mechanism will no longer pay attention to your GK110 device.
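For reference, the relevant switch is the driver-model flag; a sketch, assuming the GK110 shows up as GPU index 1 (run from an administrator prompt and reboot afterwards):
C:\> rem List GPUs and their indices:
C:\> nvidia-smi -L
C:\> rem Switch GPU 1 to TCC (-dm / --driver-model: 0 = WDDM, 1 = TCC):
C:\> nvidia-smi -i 1 -dm 1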
If on the other hand it is a GeForce GPU, there is no way to switch it to TCC mode. Your only option is to modify the registry settings, which is somewhat difficult. Your mileage may vary, as the exact structure of the reg keys varies by OS.
If a GPU is in WDDM mode, it is subject to the watchdog timer.
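If you do go the registry route, the values usually cited are the TDR settings under GraphicsDrivers. A sketch only, worth verifying against Microsoft's TDR documentation before applying, since it affects the whole desktop (reboot required):
C:\> rem Raise the watchdog limit from the default 2 seconds to 30 (TdrLevel=0 would disable it entirely):
C:\> reg add "HKLM\System\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 30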

What is the kernel's KMS (kernel mode setting) API?

What is the kernel's KMS (kernel mode setting) API?
Mode setting refers to the graphics stack. It is the process of setting up the clocks and scanout buffers, initializing the chips, lighting up the displays, and so on.
The kernel subsystem responsible for this is the DRM subsystem. It has a userspace library that is developed in lock-step with the kernel part and gives, e.g., Xorg access to the userland-facing part of the interface (normally called the ABI). The hardware-facing side of the kernel interface is usually referred to as the API.
Specifically, you can use the 'xrandr' binary to instruct Xorg, via the RandR protocol, to instruct the kernel to change the mode. That binary is installed alongside the X server and also gives you some information about the graphics card and the current mode.
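For example (the output name HDMI-1 is a placeholder; run plain xrandr first to see your own outputs and modes):
$ xrandr                                      # list outputs and available modes
$ xrandr --output HDMI-1 --mode 1920x1080     # ask Xorg, and in turn KMS, to switch mode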
The DRM mode-setting API is ioctl-based, and the following site gives a technical overview: http://dri.freedesktop.org/wiki/DrmModesetting
Also, the documentation in the current linux-3.7 releases is much improved. To check it out, fetch the latest kernel sources and then, in the kernel source tree, run
$ make htmldocs
and then look at the generated file Documentation/DocBook/drm/index.html.
Hope this helps.
Mode setting usually relates to graphics setup.
A reference article dated April 19, 2008 notes,
kernel mode-setting involves moving the mode-setting code for video adapters from the user-space X server drivers into the Linux kernel. This may seem like an uninteresting topic for end-users, but having the mode-setting done in the kernel allows for a cleaner and richer boot process, improved suspend and resume support, and more reliable VT switching (along with other advantages). Kernel mode-setting isn't yet in the mainline Linux kernel nor is the API for it frozen, but Fedora 9 shipping next month will be the first major distribution carrying this initial support. In this article we're looking more closely at kernel mode-setting with the Intel X.Org driver as well as showing videos of kernel-based mode-setting in action.
Here is a Fedora wiki KernelModesetting page.
