Our product is a specialized device running a minimal Ubuntu. Our C++ application on the device periodically scans the I2C bus to detect whether a new monitor, projector, etc. has been connected. This generally works well. However, once every two to three weeks we see a random freeze.
As it happens randomly, we cannot consistently reproduce it.
From a coding perspective, I essentially scan the /dev/i2c-* files, open() each file, and try to read the first 128 bytes using ioctl().
I guess we do something similar to what the Linux tool i2cdetect does. The i2cdetect manpage states that the "read byte" probe is known to lock up the SMBus on various write-only chips. Does anyone know whether this could be the problem we are running into? Regards.
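For reference, a minimal sketch of the kind of scan described above, assuming the EDID-style read is done with the plain I2C_SLAVE ioctl followed by a read(); the bus number, the 0x50 slave address, and the 128-byte length are illustrative placeholders, not taken from our actual code:

    /* Sketch: probe one /dev/i2c-N bus for an EDID-style device at 0x50. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    static int probe_bus(int busnum)
    {
        char path[32];
        unsigned char edid[128];
        int fd;
        ssize_t n;

        snprintf(path, sizeof(path), "/dev/i2c-%d", busnum);
        fd = open(path, O_RDWR);
        if (fd < 0)
            return -1;

        /* 0x50 is the usual DDC/EDID slave address for monitors/projectors. */
        if (ioctl(fd, I2C_SLAVE, 0x50) < 0) {
            close(fd);
            return -1;
        }

        /* A plain read() issues a normal I2C read transfer to whatever ACKs
         * address 0x50; a write-only chip sitting at that address is the kind
         * of device the i2cdetect manpage warns can lock up the bus. */
        n = read(fd, edid, sizeof(edid));
        close(fd);
        return n == (ssize_t)sizeof(edid) ? 0 : -1;
    }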
Related
I'm more of a web developer and database guy, but severely inconvenient performance issues relating to kernel_task and temperature on my personal machine have made me interested in digging into the details of my macOS (I noticed some processes would trigger long-lasting spikes in kernel_task, despite consistently low CPU temperature and a newly re-imaged machine).
I am a root user on my own OS X machine. I can read /System/Library/Kernels/kernel. My understanding is that this is the Mach/XNU kernel of this machine (I don't know a lot about those, but I'm surprised it's only 13 MB).
What happens if I modify or delete /System/Library/Kernels/kernel?
I imagine that since the kernel is already loaded at run time, things might be okay until I try to reboot. If so, would carefully modifying this file change the behavior of my OS, taking effect only on reboot, presuming it didn't cause a kernel panic? (Is a kernel panic only a Linux thing?)
What happens if I modify or delete /System/Library/Kernels/kernel?
First off, you'll need to disable SIP (System Integrity Protection) in order to modify or delete this file, as it's protected even from the root user by default for security reasons.
If you delete it, your system will no longer boot. If you replace it with a different XNU kernel, that kernel will in theory boot next time, assuming it's sufficiently matched to the installed device drivers and other kexts, and to the OS userland.
Note that you don't need to delete or replace the kernel file to boot a different one; you can have more than one installed at a time. For details, see the documentation that comes with Apple's Kernel Debug Kits (KDKs), which you can download from the Apple Developer Downloads Area.
I imagine since it's at run-time, things might be okay until I try to reboot.
Yes, the kernel is loaded into memory by the bootloader early on during the boot process; the file isn't used past that, except for producing prelinked kernels when your device drivers change.
Finally, I feel like I should explain a little about what you actually seem to be trying to diagnose/fix:
but severely inconvenient performance issues relating to kernel_task and temperature on my personal machine have made me interested in digging into the details of my Mac OS
kernel_task runs more code than just the kernel core itself. Specifically, any kexts that are loaded (see kextstat command) - and there are a lot of those on a modern macOS system - are loaded into kernel space, meaning they are counted under kernel_task.
Long-running spikes of kernel CPU usage sound like they might be caused by file system self-maintenance, or volume encryption/decryption activity. They are almost certainly not basic programming errors in the xnu kernel itself. (Although I suppose stupid mistakes are easy to make.)
Other possible culprits are device drivers; GPU drivers in particular are incredibly complex pieces of software, and they stay busy even when your system seems idle.
The first step to dealing with this problem - if there indeed is one - would be to find out what the kernel is actually doing with those CPU cycles, so you'd want to do some profiling and/or tracing. Doing this on the running kernel most likely requires SIP to be disabled again. The Instruments.app that ships with Xcode can profile processes; I'm not sure whether it can still profile kernel_task, but it at least used to be possible in earlier versions. Another possible option is DTrace. (There are entire books written on this topic.)
I have a Freescale i.MX6Q (ARM) based board.
Hardware is configured with devicetree.
It had a major, incompatible change to the timings and voltage for an onboard FPGA, but these changes are invisible to the kernel.
The EEs tell us we shouldn't load the old FPGA firmware for fear of damaging the hardware. I would like to support both hardware revisions from the same code (the current situation is already causing confusion).
The solution I have thought of is this:
There are several new SPI temperature sensors on the board. If I can read from one of those devices, I can infer that I need the new firmware.
How can I (in one driver) grab an SPI device and then release it?
I suspect that I might be able to do something like this with the device tree, but I don't want to make the device unavailable.
Any ideas or examples of something like this being done?
After reading the question, I think your concern is how to add software support for more than one hardware revision.
If that is the case, I think we can write two drivers supporting both revisions, with different configurations such as IRQs, voltages, register sets, etc.
So I would enable both drivers in the Makefile and config file.
Then, at boot time, when each driver's probe gets called, we can check the hardware ID using an SPI read from the driver.
If the hardware ID matches, that driver's probe succeeds and the driver can be used to interact with the hardware.
If the SPI read fails, the probe itself will fail.
I think this will do the trick.
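A rough sketch of that probe-time check (the register address, expected ID value, and compatible string are hypothetical placeholders; on real hardware you'd use whatever "who am I" register the new SPI temperature sensors provide):

    #include <linux/module.h>
    #include <linux/spi/spi.h>

    #define DEMO_ID_REG   0x0F   /* hypothetical "who am I" register */
    #define DEMO_ID_VALUE 0xA5   /* hypothetical expected ID */

    static int demo_probe(struct spi_device *spi)
    {
        u8 reg = DEMO_ID_REG;
        u8 id = 0;
        int ret;

        /* One write (register address) followed by one read (register value). */
        ret = spi_write_then_read(spi, &reg, 1, &id, 1);
        if (ret < 0 || id != DEMO_ID_VALUE)
            return ret < 0 ? ret : -ENODEV;  /* old hardware: let probe fail */

        dev_info(&spi->dev, "new hardware revision detected\n");
        return 0;
    }

    static const struct of_device_id demo_of_match[] = {
        { .compatible = "vendor,demo-sensor" },   /* placeholder */
        { }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static struct spi_driver demo_driver = {
        .driver = {
            .name = "demo-sensor",
            .of_match_table = demo_of_match,
        },
        .probe = demo_probe,
    };
    module_spi_driver(demo_driver);
    MODULE_LICENSE("GPL");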
EDIT (answer the question)
To detect/use an SPI device from another driver, use a reference to the device in the devicetree structure.
Short answer: add a reference to the SPI device in your device's dts entry.
Slightly longer answer:
When adding SPI to another device driver, you are effectively adding a subdevice, which may want its own driver. I have an FPGA which loads its firmware over (something close enough to be considered) SPI. I started with the idea of just treating the SPI device as part of the larger driver, but the more work went into it, the more obvious it became that it is its own device, with a purpose and function distinct from the rest of the driver. So I separated that code into its own driver.
Now instead of a reference to an SPI device, my driver just has a reference to an FPGA Manager device.
See lines 98 and 370 of https://github.com/d4ddi0/linux/blob/v4.12evi/arch/arm/boot/dts/imx6q-evi.dts, and make sure the SPI driver is loaded before your driver completes loading.
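To illustrate, a minimal sketch of what the driver side of such a reference might look like, assuming your node carries a hypothetical "fpga-mgr" phandle property pointing at the FPGA manager node (the property name and probe function are placeholders, not code from the dts linked above):

    #include <linux/err.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>
    #include <linux/fpga/fpga-mgr.h>

    static int demo_probe(struct platform_device *pdev)
    {
        struct device_node *np = pdev->dev.of_node;
        struct device_node *mgr_node;
        struct fpga_manager *mgr;

        /* Follow the phandle reference from our own dts entry. */
        mgr_node = of_parse_phandle(np, "fpga-mgr", 0);
        if (!mgr_node)
            return -ENODEV;

        /* Resolve the node to the FPGA manager device; this fails until the
         * FPGA-manager driver has finished loading, which is one way to
         * enforce the load ordering mentioned above. */
        mgr = of_fpga_mgr_get(mgr_node);
        of_node_put(mgr_node);
        if (IS_ERR(mgr))
            return PTR_ERR(mgr);

        /* ... program the firmware, read state, etc. ... */

        fpga_mgr_put(mgr);
        return 0;
    }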
My original answer to my question (for historical purposes):
What I ended up doing was using different devicetree files. The difference is known at initial install time (based on the serial number), and the bootloader knows which dts filename to load.
There are multiple FPGA firmware versions and the right one is chosen based on the description in the dts.
This way, I can still update the driver and/or dts without breakage.
This works well in practice even though it does not detect anything at runtime.
One problem still exists: if I take an SD card from a new-revision board and put it into an old one, the incorrect firmware will be loaded. To really solve this last problem, we've talked about adding an EEPROM to uniquely identify the hardware revision on future boards.
When we want to create a device file in the filesystem, which one should we choose: make a node via udev, which will show up in /dev, or use sysfs, which will show up in /sys?
I think I can accomplish most functions of a device through either of these two ways, so it confuses me a lot.
Thanks.
Use udev (and/or define and publish some major & minor device numbers, like for mknod). See makedev(3).
Application programs want to access physical devices in /dev/ (not in /sys/). Data to/from a device usually goes through /dev/ char or block devices; metadata and configuration can go through sysfs.
Read more about udev and about sysfs. See also device file wikipage.
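As a rough illustration of that split, here is a hedged sketch of a trivial driver that exposes both a /dev node (via the misc framework, so udev creates the node automatically) and a sysfs attribute for a configuration value; all the names ("demo_dev", "threshold") are made up for the example:

    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/device.h>
    #include <linux/fs.h>
    #include <linux/kernel.h>

    static int demo_threshold = 42;        /* example configuration value */

    /* Data path: reads on /dev/demo_dev return a short message. */
    static ssize_t demo_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
        static const char msg[] = "hello from /dev\n";

        return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    static struct miscdevice demo_misc = {
        .minor = MISC_DYNAMIC_MINOR,   /* kernel picks the minor number */
        .name  = "demo_dev",           /* udev creates /dev/demo_dev */
        .fops  = &demo_fops,
    };

    /* Metadata path: expose "threshold" under the device's sysfs directory. */
    static ssize_t threshold_show(struct device *dev,
                                  struct device_attribute *attr, char *buf)
    {
        return sprintf(buf, "%d\n", demo_threshold);
    }

    static ssize_t threshold_store(struct device *dev,
                                   struct device_attribute *attr,
                                   const char *buf, size_t count)
    {
        int ret = kstrtoint(buf, 0, &demo_threshold);

        return ret ? ret : count;
    }
    static DEVICE_ATTR_RW(threshold);

    static int __init demo_init(void)
    {
        int ret = misc_register(&demo_misc);

        if (ret)
            return ret;
        ret = device_create_file(demo_misc.this_device, &dev_attr_threshold);
        if (ret)
            misc_deregister(&demo_misc);
        return ret;
    }

    static void __exit demo_exit(void)
    {
        device_remove_file(demo_misc.this_device, &dev_attr_threshold);
        misc_deregister(&demo_misc);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");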
You won't get very useful answers if you don't explain your issues more concretely... What exact kind of device are you thinking about? Very probably similar devices already exist....
Publish your device driver and software source code very early (even in the alpha stage, when it is not fully working) as free software, preferably under GPLv2 (the license used by the Linux kernel). Also ask on kernelnewbies. Expect to work hard (perhaps more than a year) to get your driver incorporated into the official Linux kernel.
You should be familiar with Advanced Linux Programming (in the userspace application world) before attempting to code a kernel driver. After that, read books and resources on Linux kernel driver programming and study the source code of existing drivers in recent Linux kernels.
I have never developed any driver before.
Anyway, I'm now writing two simple Windows kernel-mode drivers. The two drivers will be installed for two different devices, which connect to two different buses (ISA bus / PCI bus), and somehow the two drivers need to talk to each other; data exchange is also expected. Is there any efficient way to achieve that?
A kernel event might be able to provide the synchronization, but what about the data exchange?
In user mode, a pipe/socket might be an option, but in kernel mode, is there a counterpart of a named pipe or something similar? Google says there's no documented API for kernel-mode pipe usage...
I'm not very familiar with the Windows driver framework, so I hope I'm making sense.
thanks!
There is IRP_MJ_INTERNAL_DEVICE_CONTROL for communication between kernel-mode components. Driver #1 opens Driver #2 by its name and sends internal IOCTLs with input and/or output data.
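A hedged sketch of how Driver #1 might do that, assuming WDM-style code; the device name and the internal control code are placeholders invented for the example:

    #include <ntddk.h>

    /* Placeholder internal IOCTL code agreed on by both drivers. */
    #define IOCTL_MYDEV_EXCHANGE \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

    NTSTATUS SendInternalIoctl(PVOID InBuf, ULONG InLen, PVOID OutBuf, ULONG OutLen)
    {
        UNICODE_STRING devName;
        PFILE_OBJECT fileObject = NULL;
        PDEVICE_OBJECT targetDevice = NULL;
        KEVENT event;
        IO_STATUS_BLOCK iosb;
        PIRP irp;
        NTSTATUS status;

        /* Open Driver #2's device object by name (placeholder name). */
        RtlInitUnicodeString(&devName, L"\\Device\\OtherDriverDevice");
        status = IoGetDeviceObjectPointer(&devName, FILE_READ_DATA,
                                          &fileObject, &targetDevice);
        if (!NT_SUCCESS(status))
            return status;

        KeInitializeEvent(&event, NotificationEvent, FALSE);

        /* TRUE selects IRP_MJ_INTERNAL_DEVICE_CONTROL instead of IRP_MJ_DEVICE_CONTROL. */
        irp = IoBuildDeviceIoControlRequest(IOCTL_MYDEV_EXCHANGE, targetDevice,
                                            InBuf, InLen, OutBuf, OutLen,
                                            TRUE, &event, &iosb);
        if (irp == NULL) {
            ObDereferenceObject(fileObject);
            return STATUS_INSUFFICIENT_RESOURCES;
        }

        status = IoCallDriver(targetDevice, irp);
        if (status == STATUS_PENDING) {
            KeWaitForSingleObject(&event, Executive, KernelMode, FALSE, NULL);
            status = iosb.Status;
        }

        ObDereferenceObject(fileObject);  /* drops the reference taken above */
        return status;
    }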
@Harry Johnston: You do need to be careful about writing to a shared memory location. I presume you were responding in the context of implementing a serial buffer between the two devices (only one device can write, and the other can only read), but it should be added that you should approach shared memory locations between devices with caution, especially if both devices will write to that location frequently, which can cause undefined behavior or lock-ups from interrupts being serviced seemingly unexpectedly.
I'm building an IOKit CFPlugin driver for OS X. I'll be working with incoming network data that will be translated to MIDI data. No hardware is involved other than the built-in AirPort. I have experience with drivers on Windows machines and with firmware, but this is my first dip into doing it on the Mac. So far things are going pretty well, but the Apple documentation says: "For safety reasons, you should not load your driver on your development machine."
I only have one Mac. I really don't want two Macs- sorry, Apple. Should I take this warning seriously? Are there things I need to know?
Thanks, Tom Jeffries
You could also consider running OS X inside a VM as your testbed. It would surely be much more convenient than having a separate boot volume.
The warning is rather poorly worded; what you should consider doing is using a separate boot volume (partition) for trying out your driver, since it's possible to arbitrarily hose your system with your driver.
If you're doing kernel development on any OS that isn't isolated from your main system (via a VM, alternate boot disk, etc.), you're crazy!
What may be a bigger issue is that you can't do any kernel debugging, because the only option for that is to use GDB on a remote OS X system. For this, you may want to consider running OS X in virtualization.
You DEFINITELY want to have some way to recover from a fubar kext installation: a bootable external drive or something you can quickly restore from. This is the main reason for Apple's warning against running in-development kernel extensions on your production machine.
Nicholas is right that in order to debug using gdb (the only way in kernel space) you do need two machines. I've never tried using a VM as Coxy suggests, but I guess it's feasible (assuming that you run your kext on the virtual machine and use the real host machine to run gdb).
My preferred method for tracing and debugging in the kernel is kprintf() routed to FireWire (aka FireWire kprintf; see man fwkpfv). For this you do need two machines with FireWire ports.
Finally, being an old computer musician myself, I wonder why you want to program a MIDI synthesizer (or transformer) at the network-stack level. My guess is that you would have a much more gratifying experience working in userland (where you can use floating-point math...).
If you need some hints or tips, feel free to get in touch...
|K<
From the ADC Kernel Programming Guide:
Kernel programming is a black art that should be avoided if at all possible. Fortunately, kernel programming is usually unnecessary. You can write most software entirely in user space. Even most device drivers (FireWire and USB, for example) can be written as applications, rather than as kernel code. A few low-level drivers must be resident in the kernel's address space, however, and this document might be marginally useful if you are writing drivers that fall into this category.