A while ago I was digging around in the kexts provided with macOS High Sierra, specifically AppleHDA.kext.
The audio driver appears to implement a DSP chain with some rather complicated signal processing going on. In particular, a great deal of dynamic range compression is applied to the output, and there are other stages I'd like to experiment with removing.
There are a large number of configuration plists inside, some of which I suppose may apply to my model of MacBook Pro (MacBookPro13,2). I have narrowed it down to four that are likely to control the sound system on my machine, and worked out how to edit them to get what I want.
However, after much effort spent editing the kext, it will not load properly once repacked. This is supposedly due to kext signing; I have tried removing the signature altogether and disabling SIP, but neither works. What would be some suggestions for further exploration?
Thanks!
I own a laptop with nVidia Optimus
I have tried everything to get rid of it or to make it work, and it refuses to work.
One problem in particular is that when the WinAPI is asked for information about the hardware (for example, queries about capabilities, device ID, device name, and so on), apps always get the information for the integrated Intel card. That is terrible, since the Intel card doesn't exactly match the nVidia card in capabilities, and it makes some games and apps misbehave or crash.
I was wondering: can I somehow override those WinAPI calls and make them lie? For example, when an app asks for the GPU device ID, I tell it that it is whatever arbitrary device I want.
Bonus question: can this also be applied to ASM-level instructions, like CPUID and RDTSC? Many older games rely on those... Also, binaries built with the Intel Compiler, infamously tuned to work well only on the P4, tend to treat newer CPUs (Core i7 of any generation) as AMD parts and choose crap code paths.
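For reference, this is roughly how a program issues those instructions (a sketch using the MSVC intrinsics; as far as I understand they execute directly on the CPU rather than being imported from a DLL, so an ordinary API hook never sees them):

    #include <intrin.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        // CPUID leaf 0: the vendor string comes back in EBX, EDX, ECX
        int regs[4];                            // EAX, EBX, ECX, EDX
        __cpuid(regs, 0);
        char vendor[13] = {};
        std::memcpy(vendor + 0, &regs[1], 4);   // EBX
        std::memcpy(vendor + 4, &regs[3], 4);   // EDX
        std::memcpy(vendor + 8, &regs[2], 4);   // ECX
        std::printf("vendor: %s\n", vendor);    // "GenuineIntel" / "AuthenticAMD"

        unsigned long long t0 = __rdtsc();      // raw timestamp counter
        unsigned long long t1 = __rdtsc();
        std::printf("delta: %llu cycles\n", t1 - t0);
        return 0;
    }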
EDIT: Some people are misunderstanding what I want to code.
I want to make a launcher app to work around a common nVidia Optimus bug, in the same spirit as those apps that make games run borderless, or make them use a different, more compatible version of DirectX than the one they shipped with.
nVidia Optimus usually works (it can be done differently) by the machine having an integrated Intel chip and a discrete nVidia GPU. The computer treats the discrete GPU as a sort of video co-processor: the actual video chip is always the Intel one, but when Optimus kicks in, the hardware-accelerated rendering is handed to the discrete GPU, which, after finishing its work, copies the result into the Intel chip's framebuffer, which finally shows it on the screen.
The bug in this implementation is that it never considered what happens when an app queries the video capabilities: because the video chip is always the Intel one, any query gets a reply describing the Intel chip, even if the chip that will actually receive the app's draw calls is the nVidia one.
As a result, any mismatch in DX or OGL extensions between the GPUs can cause bugs or crashes, and programs may assume wrong things about the available computing power and memory, run into timing problems, and so on.
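To make the mismatch concrete, here is a minimal sketch of the kind of query I mean (DXGI adapter enumeration, most error handling omitted); on an Optimus machine the default adapter, the one most games pick, usually describes the Intel chip:

    #include <windows.h>
    #include <dxgi.h>
    #include <cstdio>
    #pragma comment(lib, "dxgi.lib")

    int main()
    {
        IDXGIFactory *factory = 0;
        if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void **)&factory)))
            return 1;

        IDXGIAdapter *adapter = 0;
        for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC desc;
            adapter->GetDesc(&desc);
            // adapter 0 is what most apps end up using
            wprintf(L"adapter %u: %s (%llu MB dedicated)\n", i, desc.Description,
                    (unsigned long long)(desc.DedicatedVideoMemory >> 20));
            adapter->Release();
        }
        factory->Release();
        return 0;
    }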
I've been fighting with this tech for years and have found no practical solution. This idea is my "final stand": make an "Optimus Launcher" app that lets you launch any game with Optimus and have it work, hopefully without ugly hacks like disabling Secure Boot. (I disabled Secure Boot to play Age of Decadence; on machines with Optimus, AoD and other Torque3D games don't work if Secure Boot is enabled, and I have no idea why.)
You can hook WinAPI calls and make them do whatever you like, but it's nothing that is implemented easily. Furthermore, I'd guess that some antivirus programs will get very nervous if your application does stuff like that...
Take a look at this article, which is a good start: API hooking revealed
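To give an idea of the mechanics, here is a minimal sketch of the IAT-patching flavour of the technique. The hooked function and the reported GPU name are just examples I picked; a real launcher would inject this as a DLL into the game, and would also have to cover GetProcAddress lookups and delay-loaded imports:

    #include <windows.h>
    #include <cstring>

    // Swap the IAT entry for funcName (imported from dllName) in module
    // so calls land on hook; returns the original pointer, or 0 if not found.
    void *PatchIAT(HMODULE module, const char *dllName,
                   const char *funcName, void *hook)
    {
        BYTE *base = (BYTE *)module;
        IMAGE_NT_HEADERS *nt =
            (IMAGE_NT_HEADERS *)(base + ((IMAGE_DOS_HEADER *)base)->e_lfanew);
        IMAGE_DATA_DIRECTORY dir =
            nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];

        for (IMAGE_IMPORT_DESCRIPTOR *imp =
                 (IMAGE_IMPORT_DESCRIPTOR *)(base + dir.VirtualAddress);
             imp->Name; ++imp) {
            if (_stricmp((char *)(base + imp->Name), dllName) != 0)
                continue;
            IMAGE_THUNK_DATA *names =
                (IMAGE_THUNK_DATA *)(base + imp->OriginalFirstThunk);
            IMAGE_THUNK_DATA *iat =
                (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
            for (; names->u1.AddressOfData; ++names, ++iat) {
                if (IMAGE_SNAP_BY_ORDINAL(names->u1.Ordinal))
                    continue;
                IMAGE_IMPORT_BY_NAME *entry =
                    (IMAGE_IMPORT_BY_NAME *)(base + names->u1.AddressOfData);
                if (strcmp((char *)entry->Name, funcName) != 0)
                    continue;
                // make the IAT slot writable, swap the pointer, restore
                DWORD old;
                VirtualProtect(&iat->u1.Function, sizeof(void *),
                               PAGE_READWRITE, &old);
                void *original = (void *)iat->u1.Function;
                iat->u1.Function = (ULONG_PTR)hook;
                VirtualProtect(&iat->u1.Function, sizeof(void *), old, &old);
                return original;
            }
        }
        return 0;
    }

    // Hypothetical replacement: report a GPU name of our choosing.
    static BOOL (WINAPI *RealEnum)(LPCSTR, DWORD, PDISPLAY_DEVICEA, DWORD);

    BOOL WINAPI FakeEnum(LPCSTR dev, DWORD i, PDISPLAY_DEVICEA dd, DWORD flags)
    {
        BOOL ok = RealEnum(dev, i, dd, flags);
        if (ok)
            lstrcpynA(dd->DeviceString, "NVIDIA GeForce GTX 960M",
                      sizeof(dd->DeviceString));
        return ok;
    }

    // e.g. from DllMain of the injected DLL:
    //   RealEnum = (decltype(RealEnum))PatchIAT(GetModuleHandle(0),
    //       "user32.dll", "EnumDisplayDevicesA", (void *)FakeEnum);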
I now have a bit of experience with OpenGL, which I started using because it's said to be the only way to invoke video card functions (besides DirectX, which I like less than OpenGL).
For programming (e.g. in C/C++) the OS provides many APIs, like functions for printing. But these can be bypassed by coding in assembly language and calling much lower-level APIs (which gains speed) or using direct CPU instructions.
So I started wondering why this wouldn't be possible with the video card. Why should an API like OpenGL or DirectX be needed? The process going on with those, as I understand it, is:
API call >
OS calls the video card (with complex opcodes, I think) >
video card responds (in a complex binary format) >
OS decodes this format and responds to the caller (in the expected API format)
I believe this overhead should slow down the rendering process.
So my question is:
Is there any way to bypass the graphics APIs (under Windows) and make direct calls to the video card?
Thanks,
Dennis
Using assembly or bypassing an API doesn't automatically make something faster; it is often slower, as you don't know what the folks that wrote the library know.
It is absolutely possible, yes. Those libraries are just processor instructions that poke and peek at registers and RAM, and you could just as easily poke and peek at those registers and RAM yourself. The first problem is whether you can get that information: sure, you can look at the Linux drivers or other open-source resources. Second, much of the heavy lifting today is done in the graphics chip by dedicated logic or shader processors, so the host is just a go-between and not necessarily the bottleneck, if there even is a bottleneck. And yes, you can program the GPUs directly, depending on your video card/chip, etc.
You need to determine where the bottleneck really is, if there really is one. Maybe the bus is your problem, maybe the operating system, or the compiler, or the hard disk, or the system memory, or the processor and architecture itself, or the caches, etc. At the same time, how will you ever learn to find these things unless you try?
I recommend getting rid of Windows completely: no operating system at all, go bare metal. Take the Linux and other open-source resources, plus anything you can get from the vendor, and get closer to the metal. You will also need a lot of information about the PCI/PCIe bus and bridges, DMA controllers, and everything else in the path. If you don't want to go that low, then use Linux or BSD or some other command-line environment where it is well known how to take over the video system, and take it over while retaining an operating system and a development environment (vi/emacs, gcc).
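As a taste of what taking over the video system looks like on Linux, here is a sketch that maps the simple framebuffer device and pokes pixels directly; it assumes a 32-bits-per-pixel mode and a text console, and omits error checking:

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        struct fb_var_screeninfo var;   // resolution, bits per pixel
        struct fb_fix_screeninfo fix;   // buffer length, bytes per scanline
        ioctl(fd, FBIOGET_VSCREENINFO, &var);
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);

        uint8_t *fb = (uint8_t *)mmap(0, fix.smem_len, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);

        // the "poke at registers and ram" part: paint every pixel green
        for (uint32_t y = 0; y < var.yres; y++)
            for (uint32_t x = 0; x < var.xres; x++)
                *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x0000FF00;

        sleep(2);
        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }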
If that is all way too advanced, then I recommend dabbling in simple GPU routines to get a feel for how the video card works, at least at some level, and tackling this learning exercise one step at a time.
I'm developing a very simple application for the Mac OS X platform, making use of Qt and OpenGL (and QtOpenGL) so that going cross-platform gets easier.
The application receives a variable number of video streams that have to be rendered to the screen. Each frame of these video streams is used as a texture for mapping a rectangle in 3D space (very similar to a video wall).
Apart from things such as receiving, locking, and uploading video data, and synchronizing threads... I think it's clear that this is quite a simple application.
The fact is that everything behaves OK when using the Cocoa-based Qt 4.7 binaries (the default ones) on a 10.5 Mac.
But my code has to run fine on all OS X versions starting from (and including) 10.4. So I tried the code on a 10.4 machine, and it crashed right at startup. After a few hours of reading on the internet, I discovered that for a Qt application to target 10.4, the Carbon-based Qt has to be used. So I rebuilt the whole project against the new framework.
When the resulting binary is run, everything works well except that the application's frame rate drops to about 2 fps!! And it behaves the same on both machines (the 10.5 computer has noticeably better specs).
I've spent quite some time working on this but have not reached a solution. Any suggestions?
More information about the application and things I've tried:
the code was not modified when recompiling against Carbon
only two videos (256x256 textures) are used, to make sure it's not a bandwidth-limit problem (although I know it shouldn't be, because the first build worked)
the 2 video streams arrive over the (local) network
when a video frame arrives, a signal is emitted and the data is uploaded to an OpenGL texture (glTexSubImage2D); see the sketch after this list
a timer triggers rendering (paintGL) about every 20 ms (~50 fps)
the render code uses the textures (updated or not) to draw the rectangles
rendering only when a frame arrives won't work, because there are 2 (asynchronous) video streams; besides, more things have to be drawn on screen
only basic OpenGL commands are used (no PBOs, FBOs, VBOs, ...); the only potentially problematic thing could be the use of shaders (available only from Qt 4.7), but their code is trivial
I've made use of OpenGL Profiler and Instruments; nothing special/strange was observed
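This is roughly the update path, as a simplified sketch (illustrative names, not my literal code):

    #include <QtOpenGL/QGLWidget>
    #include <QtCore/QTimer>

    class VideoWall : public QGLWidget
    {
        Q_OBJECT
    public:
        explicit VideoWall(QWidget *parent = 0) : QGLWidget(parent)
        {
            QTimer *timer = new QTimer(this);
            connect(timer, SIGNAL(timeout()), this, SLOT(updateGL()));
            timer->start(20);                   // repaint at ~50 fps
        }

    public slots:
        // invoked (via a queued connection) when a 256x256 RGBA frame arrives
        void onFrameReady(int stream, const QByteArray &rgba)
        {
            makeCurrent();                      // GL calls need a current context
            glBindTexture(GL_TEXTURE_2D, m_tex[stream]);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                            GL_RGBA, GL_UNSIGNED_BYTE, rgba.constData());
        }

    protected:
        void paintGL()
        {
            // draw the textured rectangles with m_tex[0], m_tex[1]
        }

    private:
        GLuint m_tex[2];
    };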
Some things I suspect (conclusions):
it's clear it's not a hardware issue: the same computer behaves differently with the two builds
it gives me the feeling of a threading/locking problem, but why?
Carbon is 32-bit; the 10.5 application was 64-bit, and it's not possible to develop 64-bit applications with Carbon
to rule the 32 bits out as a possible cause, I also rebuilt the first (Cocoa) project as 32-bit; it worked practically the same
I've read something about Carbon having problems (more than usual) with context switching
maybe the OpenGL implementation is multithreaded and my code is not (or the opposite)? That could cause a lot of stalls; see the experiment sketched after this list
maybe Carbon handles events differently from Cocoa (I mean signal/event dispatching, the main loop...)?
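For the multithreading suspicion, one cheap experiment I can think of is toggling Apple's multithreaded GL engine explicitly and comparing the frame rate; a sketch, assuming the engine is available on the OS and driver in question:

    #include <OpenGL/OpenGL.h>

    void setMultithreadedGL(bool enabled)
    {
        CGLContextObj ctx = CGLGetCurrentContext();
        if (enabled)
            CGLEnable(ctx, kCGLCEMPEngine);   // GL runs on its own worker thread
        else
            CGLDisable(ctx, kCGLCEMPEngine);  // back to the single-threaded engine
    }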
OK, this (sorry for the long write-up) is my current headache. Any suggestion or idea would be very much appreciated.
Thanks in advance.
May I ask a diagnostic question? Can you make sure that rendering isn't being handed to the software renderer?
I remember that when 10.4 was released, there was some confusion about Quartz Extreme, Quartz and Carbon, with some features disabled, and hardware renderers disabled by default in some configurations, which required configuration by the end user to get things working correctly. I'm not sure whether this information is pertinent, because you say that, having targeted 10.4, the problem shows up on both the 10.4 and the 10.5 machines, yes?
It's possible (though admittedly I'm grasping at straws here) that even on 10.5, Carbon doesn't use the hardware renderers by default. I'd like to think that OS X prefers hardware renderers to software renderers in all scenarios, but it may be worth spending a little time looking into, given how thoroughly you're already looking into other options.
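A quick way to check is to dump the renderer string once the context is current; a small sketch (the software fallback identifies itself in GL_RENDERER):

    #include <OpenGL/gl.h>
    #include <cstdio>

    void printRenderer()
    {
        std::printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
        std::printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        // something like "Apple Software Renderer" here would explain 2 fps
    }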
Good luck.
If you are using Qt, I guess your code should also work on Windows or Linux. Have you tried your application on those platforms?
That would quickly reveal whether the problem comes from Qt or from the Mac OS X version.
I am trying to port my screensaver from Windows to Mac, and one of its features was reacting to the system sound output. On Windows this was easy using DirectSound, but I can't find any example of capturing the sound output on the Mac. Is it even possible without writing something like a kernel extension? In Flash it is also very easy: it even provides a computeSpectrum method to get raw data, or even FFT-transformed data.
All the programs I have found so far use Soundflower or their own kernel extension, but I don't think that asking users to install a separate program, or shipping a kernel extension, is a good approach.
One thing you can do, considering that Soundflower is open source, is take a look at how they did it. You can't copy and paste GPL code, but you can certainly study the techniques used and create your own solution; it will point you in the right direction.
You won't find Apple being very helpful here. Sound capturing, in this manner, can be used for all kinds of nefarious purposes. I'm not even sure if Core Audio lets you do this without hacks. In any case, you have a working implementation of what you're trying to accomplish. I'd take advantage of it.
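If you do end up riding on Soundflower (or any loopback audio device), the user-space half is plain Core Audio HAL code; a sketch, with device discovery and error handling omitted, where `device` is assumed to be the loopback device's ID:

    #include <CoreAudio/CoreAudio.h>

    // Called by the HAL with each buffer the device "records"; with a
    // loopback device this is whatever the system is playing through it.
    static OSStatus ioProc(AudioObjectID, const AudioTimeStamp *,
                           const AudioBufferList *input, const AudioTimeStamp *,
                           AudioBufferList *, const AudioTimeStamp *, void *)
    {
        // input->mBuffers[0].mData holds the captured samples;
        // hand them to the FFT / visualizer here
        return noErr;
    }

    void startCapture(AudioObjectID device)
    {
        AudioDeviceIOProcID procID;
        AudioDeviceCreateIOProcID(device, ioProc, 0, &procID); // register callback
        AudioDeviceStart(device, procID);                      // start the stream
    }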
I'm not on my Mac right now, but I'm pretty sure that Quartz Composer has a patch for just this thing. Depending on what language you're writing your screen saver in, it may be fairly easy for you to port your code into a QC patch. Well... it probably won't be easy, but it may be doable.
I'm looking at options to access DVB data on OS X. Initially I want to support the EyeTV DTT USB device, but in the long run I'd like to support a number of popular devices. The problem I have is that there is no standard way of controlling such devices.
All the applications I know of that use them either hide the driver code within the application (for example EyeTV itself: all its drivers are implemented entirely in userspace and are not accessible to external apps), or they use the seemingly defunct MMInputFamily driver (no source code available any more, and the author has gone walkabout).
I've done some research and found that a number of the devices I want to support are supported by the Linux DVB project. Further research indicates that some years ago there was an attempt to abstract the Linux implementation so that it could potentially be recompiled on other platforms, the idea being that efforts to support devices should be pooled, and that the best way to do that would be to make the existing open-source implementation work on multiple platforms. In the end, however, it seems to have amounted to little.
The idea of compiling Linux drivers on other *nix-type platforms has also been taken up elsewhere with some success. The approach the author took is detailed on the page I linked; it seems potentially viable on OS X as well.
At any rate, there seem to be a number of options, but no clear winner:
Find the source code for the MMInputFamily driver, try to get it working on OS X 10.6, and add support for the devices I require, referencing the Linux source code for pointers. Problem: the source code is nowhere to be found, and neither is the author. Additionally, it seems the author might have gone down another route had he fully appreciated the previous efforts to port the Linux drivers to OS X.
Attempt to port the Linux drivers to OS X in a manner similar to the FreeBSD project I linked. Problem: this is very low-level work, and Apple recommends against working in this layer if it can be avoided.
Write a driver with OS X's I/O Kit: this is the preferred method for implementing drivers (a bare skeleton is sketched after this list), but I would have to do everything from scratch, which is clearly not a small job.
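For reference, this is about as small as an I/O Kit driver gets; the class name is hypothetical, the matching against the USB device happens in the kext's Info.plist, and all the real DVB work would still be ahead:

    #include <IOKit/IOService.h>
    #include <IOKit/IOLib.h>

    class com_example_driver_DVBTuner : public IOService
    {
        OSDeclareDefaultStructors(com_example_driver_DVBTuner)

    public:
        virtual bool start(IOService *provider)
        {
            if (!IOService::start(provider))
                return false;
            IOLog("DVBTuner: matched and started\n");
            // claim the USB interface, configure endpoints, publish services...
            return true;
        }

        virtual void stop(IOService *provider)
        {
            IOLog("DVBTuner: stopping\n");
            IOService::stop(provider);
        }
    };

    OSDefineMetaClassAndStructors(com_example_driver_DVBTuner, IOService)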
If I could, I would really like to use the Linux source code, but I'm unsure whether such a thing is really viable. Does anyone have any advice or ideas on the best way to proceed with this task?