How to get volume data from an input device in Core Audio?

I am trying to get the volume of the audio heard by an input device using Core-Audio.
So far I have used AudioDeviceAddIOProc and AudioDeviceStart with my AudioDeviceIOProc function to get the input data in the form of an AudioBufferList containing AudioBuffers.
How do I get the volume of the data from the AudioBuffer? Or am I going about this completely the wrong way?

As of Mac OS X 10.5, the APIs you mention are deprecated; you should read Tech Note TN2223.
Anyway, assuming that you are getting buffers of linear PCM sample data in 32-bit float format, you just need to write a loop that takes the absolute value (fabsf) of each sample and keeps the maximum of those values. Then you can save that peak value or convert it to decibels.
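A minimal sketch of that loop, assuming 32-bit float linear PCM in the AudioBufferList delivered to your IOProc (the function name is illustrative):

#include <math.h>
#include <CoreAudio/CoreAudio.h>

// Scan every buffer in the list and return the largest absolute sample value.
static Float32 PeakLevel(const AudioBufferList *ioData)
{
    Float32 peak = 0.0f;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        const Float32 *samples = (const Float32 *)ioData->mBuffers[i].mData;
        UInt32 count = ioData->mBuffers[i].mDataByteSize / sizeof(Float32);
        for (UInt32 j = 0; j < count; ++j) {
            Float32 a = fabsf(samples[j]);
            if (a > peak) peak = a;
        }
    }
    return peak; // linear peak in [0, 1]; 20.0f * log10f(peak) converts it to dBFS
}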

Related

How can I read and write ISO 14443 cards?

I'm trying my hand at using ISO 14443 cards. I can't find a way to read or write them via an Android app. Does anyone have a solution?
For now I have downloaded Android apps like NFC Tools, but I'm not very adept at using them.
These cards are somewhat like Type 2 NfcA tags (though not fully Type 2 compliant), and their datasheet lists the commands they support and describes how their memory is organised.
To read and write data on these tags you transceive a byte array containing the right command, and you receive back another byte array with the results of the command. On Android you do this by obtaining an NfcA instance from the discovered Tag and calling its transceive() method.
Note that your tag does not support the Fast Read (0x3A) command, but it does support the more standard Read command: e.g. send the byte array 0x30,0x00 to read the first 4 blocks of data (16 bytes) from the tag (see section 6.2.1 of the datasheet, and note that the CRC is calculated for you).
A write command begins with 0xA2,0x05 followed by 4 more bytes of data, which writes to the first memory block of the user data area. Both command frames are sketched below.
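A minimal sketch of those two command frames as C byte arrays (the framing only, not the Android transceive call; the write payload is a made-up example):

/* READ (0x30): returns 4 blocks (16 bytes) starting at the given block address. */
unsigned char readCmd[]  = { 0x30, 0x00 };              /* read blocks 0..3 */

/* WRITE (0xA2): writes one 4-byte block at the given block address. */
unsigned char writeCmd[] = { 0xA2, 0x05,                /* block 5: first user data block */
                             0x01, 0x02, 0x03, 0x04 };  /* 4 bytes of example payload */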

C++ - Send IOCTL command to WBF to get sensor attributes on Windows

I'm trying to understand how I can retrieve the WINBIO_SENSOR_ATTRIBUTES buffer by using WBF APIs.
I found this link: https://msdn.microsoft.com/en-us/library/windows/hardware/ff536431
It mentions sending an IOCTL command; however, I'm not able to understand how exactly to invoke this from C++ code and receive the sensor attributes structure. Can anybody help, or point me to some sample code that does something similar?
First take a look at DeviceIoControl - this is the function to which you must supply the IOCTL_BIOMETRIC_GET_ATTRIBUTES value as the second parameter to obtain the biometric data. The link you included describes how to handle the size of the output buffer: first supply a DWORD-sized buffer to get the actual size of the output, then allocate a properly sized buffer and retrieve the actual data.
To do this you also need a valid device handle (the first parameter of DeviceIoControl). This handle is obtained by calling CreateFile and passing the device name of the driver. If you do not know the PDO, either look in Device Manager to see whether it shows the name under the "Details" tab, or use the SetupDi* family of functions to enumerate the biometric device class and get the name from there. The two-call pattern is sketched below.
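A minimal sketch of that pattern in C, assuming the SDK/WDK header that defines the WBF IOCTLs; the device path is hypothetical (in practice it comes from SetupDi* enumeration) and error handling is trimmed:

#include <windows.h>
#include <winbio_ioctl.h> // WBF IOCTLs and WINBIO_SENSOR_ATTRIBUTES (header names vary by SDK/WDK)
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Hypothetical device name; normally discovered via the SetupDi* APIs.
    HANDLE h = CreateFileW(L"\\\\.\\WINBIO0", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // Call 1: a DWORD-sized buffer; the driver reports the required payload size.
    DWORD needed = 0, returned = 0;
    DeviceIoControl(h, IOCTL_BIOMETRIC_GET_ATTRIBUTES, NULL, 0,
                    &needed, sizeof(needed), &returned, NULL);

    // Call 2: a properly sized buffer for the full attributes structure.
    WINBIO_SENSOR_ATTRIBUTES *attrs = malloc(needed);
    if (attrs && DeviceIoControl(h, IOCTL_BIOMETRIC_GET_ATTRIBUTES, NULL, 0,
                                 attrs, needed, &returned, NULL)) {
        printf("WBF payload size: %lu bytes\n", (unsigned long)attrs->PayloadSize);
    }
    free(attrs);
    CloseHandle(h);
    return 0;
}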

OSX: CoreAudio API for setting IO Buffer length?

This is a follow-up to a previous question:
OSX CoreAudio: Getting inNumberFrames in advance - on initialization?
I am trying to figure out which AudioUnit API, if any, can set inNumberFrames or the preferred IO buffer duration of an input callback for a single HAL audio component instance in OSX (not a plug-in!).
While I understand there is comprehensive documentation on how this can be achieved in iOS by means of the AVAudioSession API, I can neither figure out nor find documentation on setting these values in OSX, whichever API I look at.
The web is full of expert, yet conflicting statements ranging from "There is an Audio Unit API to request a sample rate and a preferred buffer duration...", to "You can definitely get the number of frames, but only for the current callback call...".
Is there a way of at least getting (and adapting to) the inNumberFrames or the audio buffer length offered by the system, for the input-selected sampling rates in OSX? For example, for 44.1k and its multiples (this seems to work partly), as well as for 48k and its multiples (this doesn't seem to work at all, and I can't find the trick which allows for adapting the buffer length to these values)? Here's the console printout:
Available 7 Sample Rates
Available Sample Rate value : 8000.000000
Available Sample Rate value : 16000.000000
Available Sample Rate value : 32000.000000
Available Sample Rate value : 44100.000000
Available Sample Rate value : 48000.000000
Available Sample Rate value : 88200.000000
Available Sample Rate value : 96000.000000
.mSampleRate = 48000.00
.mFormatID = 1819304813
.mBytesPerPacket = 8
.mFramesPerPacket = 1
.mBytesPerFrame = 8
.mChannelsPerFrame = 2
.mBitsPerChannel = 32
.mFormatFlags = 9
_mFormatHumanReadable = kAudioFormatFlagIsFloat
kAudioFormatFlagIsPacked
kLinearPCMFormatFlagIsFloat
kLinearPCMFormatFlagIsPacked
kLinearPCMFormatFlagsSampleFractionShift
kAppleLosslessFormatFlag_16BitSourceData
kAppleLosslessFormatFlag_24BitSourceData
expectedInNumberFrames = 512
Couldn't render in current context (Error -10863)
The expected inNumberFrames is read from the system:
UInt32 expectedInNumberFrames = 0;
UInt32 propSize = sizeof(UInt32);
AudioUnitGetProperty(gInputUnitComponentInstance,
                     kAudioDevicePropertyBufferFrameSize,
                     kAudioUnitScope_Global,
                     0,
                     &expectedInNumberFrames,
                     &propSize);
Thanks in advance for pointing me in the right direction!
See this Apple Technical Note: https://developer.apple.com/library/mac/technotes/tn2321/_index.html#//apple_ref/doc/uid/DTS40013499-CH1-THE_I_O_BUFFER_SIZE
See the OS X example code in this technical note for GetIOBufferFrameSizeRange(), GetCurrentIOBufferFrameSize(), and SetCurrentIOBufferFrameSize().
Note that there is an API property returning an allowed range, and an error return on the property setter. Also note that the various Mac power-saving modes may change the buffer size while an app is running, so the actual buffer size, inNumberFrames, may not stay constant, or even be known until the Audio Unit starts running.
If you get unusual buffer sizes (not a power of 2), it may be that the audio hardware on a particular Apple product model has a fixed or limited range of audio sample rates. If the app requests a sample rate not supported by the actual codec chips on the circuit board, the OS resamples in software and resizes the buffers sent to the audio unit callbacks accordingly. The get-range/set-size pattern is sketched below.
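A minimal sketch of that pattern, assuming an already-resolved input deviceID and an AUHAL instance gInputUnit (both names are illustrative; error handling is trimmed):

#include <CoreAudio/CoreAudio.h>
#include <AudioUnit/AudioUnit.h>

// Ask the device for the buffer frame sizes it allows...
AudioValueRange range = { 0, 0 };
UInt32 size = sizeof(range);
AudioObjectPropertyAddress addr = {
    kAudioDevicePropertyBufferFrameSizeRange,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioObjectGetPropertyData(deviceID, &addr, 0, NULL, &size, &range);

// ...then request a frame size within [range.mMinimum, range.mMaximum].
UInt32 requested = 256;
AudioUnitSetProperty(gInputUnit,
                     kAudioDevicePropertyBufferFrameSize,
                     kAudioUnitScope_Global,
                     0,
                     &requested,
                     sizeof(requested));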

How do I convert an Intel HEX file to raw data like memory view?

I want to make boot loader code for AVR, which can update firmware over the air.
Now I am able to write to the application area using some fixed data. I have a hex file of the new firmware to be updated. How do I convert that hex file to raw data so that I can update the application using that raw data?
If you're using WinAVR for compilation you may do this using included avr-objcopy:
C:\WinAVR-20100110\bin> avr-objcopy.exe -I ihex -O binary input_file.hex output.bin
If you're developing on Linux, there's a package, avr-binutils, with the avr-objcopy program.
You may use an existing tool (http://hex2bin.sourceforge.net/ or another hex2bin converter) or write your own hex parser; rolling your own has some caveats when it comes to files larger than 64 KB.
As you pointed out, the hex file is encoded in Intel HEX format. You have to extract the flash data from the data records. Each record (line) holds up to 16 bytes of data (16 is common, but the count may vary).
Note that there are different record types and some may introduce an address offset, depending on how the flash data is distributed. The Wikipedia description should be enough to get the concept, and a minimal parser is sketched below.
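As a minimal sketch, here is a parser for a single record line in C; it handles only type-00 data records and type-01 EOF, and leaves out the extended-address records needed for files larger than 64 KB:

#include <stdio.h>

/* Parse one ":LLAAAATT<data>CC" line into out; returns the number of data
   bytes, 0 for a skipped record type, or -1 on EOF/checksum/format error. */
static int parse_record(const char *line, unsigned *addr, unsigned char *out)
{
    unsigned len, type;
    if (line[0] != ':' || sscanf(line + 1, "%2x%4x%2x", &len, addr, &type) != 3)
        return -1;
    if (type == 0x01) return -1;          /* EOF record */
    if (type != 0x00) return 0;           /* skip other record types */
    unsigned char sum = (unsigned char)(len + (*addr >> 8) + (*addr & 0xFF) + type);
    for (unsigned i = 0; i <= len; ++i) { /* data bytes plus trailing checksum */
        unsigned b;
        if (sscanf(line + 9 + 2 * i, "%2x", &b) != 1) return -1;
        if (i < len) out[i] = (unsigned char)b;
        sum = (unsigned char)(sum + b);
    }
    return sum ? -1 : (int)len;           /* the sum of all record bytes must be 0 */
}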

How to use AudioConverterFillComplexBuffer and its callback?

I need a step-by-step walkthrough on how to use AudioConverterFillComplexBuffer and its callback. No, don't tell me to read the Apple docs. I do everything they say and the conversion always fails. No, don't tell me to go look for examples of AudioConverterFillComplexBuffer and its callback in use - I've duplicated about a dozen such examples, both line for line and modified, and the conversion always fails. No, there isn't any problem with the input data. No, it isn't an endian issue. No, the problem isn't my version of OS X.
The problem is that I don't understand how AudioConverterFillComplexBuffer works, so I don't know what I'm doing wrong. And nothing out there is helping me understand, because it seems like nobody on Earth really understands how AudioConverterFillComplexBuffer works, either. From the people who actually use it (I spy cargo cult programming in their code) to even the authors of Learning Core Audio and/or Apple itself (http://stackoverflow.com/questions/13604612/core-audio-how-can-one-packet-one-byte-when-clearly-one-packet-4-bytes).
This isn't just a problem for me, it's a problem for anybody who wants to program high-performance audio on the Mac platform. Threadbare documentation that's apparently wrong and examples that don't work are no fun.
Once again, to be clear: I NEED A STEP-BY-STEP WALKTHROUGH ON HOW TO USE AudioConverterFillComplexBuffer plus its callback, and so does the entire Mac developer community.
This is a very old question but I think it is still relevant. I've spent a few days fighting this and have finally achieved a successful conversion. I'm certainly no expert, but I'll outline my understanding of how it works. Note I'm using Swift, which I'm also just learning.
Here are the main function arguments:
inAudioConverter: AudioConverterRef: This one is simple enough: just pass in a previously created AudioConverterRef.
inInputDataProc: AudioConverterComplexInputDataProc: The very complex callback. We'll come back to this.
inInputDataProcUserData: UnsafeMutableRawPointer?: This is a reference to whatever data you may need to provide to the callback function. Important because, even in Swift, the callback can't inherit context. E.g. you may need to access an AudioFileID or keep track of the number of packets read so far.
ioOutputDataPacketSize: UnsafeMutablePointer<UInt32>: This one is a little misleading. The name implies it's the packet size but reading the documentation we learn it's the total number of packets expected for the output format. You can calculate this as outPacketCount = frameCount / outStreamDescription.mFramesPerPacket.
outOutputData: UnsafeMutablePointer<AudioBufferList>: This is an audio buffer list which you need to have already initialized with enough space to hold the expected output data. The size can be calculated as byteSize = outPacketCount * outMaxPacketSize.
outPacketDescription: UnsafeMutablePointer<AudioStreamPacketDescription>?: This is optional. If you need packet descriptions, pass in a block of memory the size of outPacketCount * sizeof(AudioStreamPacketDescription).
As the converter runs, it will repeatedly call the callback function to request more data to convert. The main job of the callback is simply to read the requested number of packets from the source data. The converter will then convert the packets to the output format and fill the output buffer. Here are the arguments for the callback:
inAudioConverter: AudioConverterRef: The audio converter again. You probably won't need to use this.
ioNumberDataPackets: UnsafeMutablePointer<UInt32>: The number of packets to read. After reading, you must set this to the number of packets actually read (which may be less than the number requested if we reached the end).
ioData: UnsafeMutablePointer<AudioBufferList>: An AudioBufferList which is already configured except for the actual data. You need to initialise ioData.mBuffers.mData with enough capacity to hold the expected number of packets, i.e. ioNumberDataPackets * inMaxPacketSize. Set the value of ioData.mBuffers.mDataByteSize to match.
outDataPacketDescription: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?: Depending on the formats used, the converter may need to keep track of packet descriptions. You need to initialise this with enough capacity to hold the expected number of packet descriptions.
inUserData: UnsafeMutableRawPointer?: The user data that you provided to the converter.
So, to start you need to:
Have sufficient information about your input and output data, namely the number of frames and maximum packet sizes.
Initialise an AudioBufferList with sufficient capacity to hold the output data.
Call AudioConverterFillComplexBuffer.
And on each run of the callback you need to:
Initialise ioData with sufficient capacity to store ioNumberDataPackets of source data.
Initialise outDataPacketDescription with sufficient capacity to store ioNumberDataPackets of AudioStreamPacketDescriptions.
Fill the buffer with source packets.
Write the packet descriptions.
Set ioNumberDataPackets to the number of packets actually read.
Return noErr if successful.
Here's an example where I read the data from an AudioFileID:
var converter: AudioConverterRef?
// User data holds an AudioFileID, the input max packet size, and a count of packets read so far
var uData = (fRef, maxPacketSize, UnsafeMutablePointer<Int64>.allocate(capacity: 1))
err = AudioConverterNew(&inStreamDesc, &outStreamDesc, &converter)
err = AudioConverterFillComplexBuffer(converter!, { _, ioNumberDataPackets, ioData, outDataPacketDescription, inUserData in
    let uData = inUserData!.load(as: (AudioFileID, UInt32, UnsafeMutablePointer<Int64>).self)
    // Size the input buffer for the number of packets requested, i.e.
    // packets * maxPacketSize (note this sketch allocates on every
    // callback and never frees, to stay brief)
    let byteCapacity = ioNumberDataPackets.pointee * uData.1
    ioData.pointee.mBuffers.mDataByteSize = byteCapacity
    ioData.pointee.mBuffers.mData = UnsafeMutableRawPointer.allocate(byteCount: Int(byteCapacity), alignment: 1)
    outDataPacketDescription?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: Int(ioNumberDataPackets.pointee))
    // Read the requested packets from the file; this also updates
    // ioNumberDataPackets to the number of packets actually read
    let err = AudioFileReadPacketData(uData.0, false, &ioData.pointee.mBuffers.mDataByteSize, outDataPacketDescription?.pointee, uData.2.pointee, ioNumberDataPackets, ioData.pointee.mBuffers.mData)
    uData.2.pointee += Int64(ioNumberDataPackets.pointee)
    return err
}, &uData, &numPackets, &bufferList, nil)
Again, I'm no expert, this is just what I've learned by trial and error.
