Do I need to differentiate between Chromecast device revisions?

Given that Chromecast devices (both the 1st gen and the recently launched 2nd gen family) only run properly when they have a working internet connection, is it safe to assume that 1st and 2nd gen devices share the same public API?
Or do I need to do some magic number checking in my custom receiver code to detect which device revision the code is executing on?

No magic is needed; both generations support an identical set of APIs, so developers don't need to differentiate between them.

Related

How to pass user settings to a Driver Extension (macOS)?

I am writing a DriverKit extension whose goal is to block some categories of USB devices, such as flash drives. The driver should block (match to) any device of the relevant device classes, except those that are whitelisted (based on their vendor and product ID). The whitelist can be set dynamically by the user of the application.
The question is how to pass this data to the driver, as reading from a file or something like the Windows registry is not available in DriverKit. The tricky part is that the driver requires the whitelist data before the device is matched.
From what I understood, rejecting a device is possible by returning an error from the Start() method and returning from it prematurely. My idea was to send the data while the driver is running this function; however, this is not possible, as communication via IOUserClient is not available until the Start() method returns.
Is this somehow doable?
As far as I'm aware, communicating with user space apps from the initial Start() method is not possible from DriverKit extensions. As you say, IOUserClients are the mechanism to use for user space communication, and those aren't available until the service is started and registered. You can have your driver match IOResources/IOUserResources so it is always loaded, but each matched service starts up an independent process of your dext, and I'm not aware of a way to directly communicate between these instances.
If I understand you correctly, you're trying to block other drivers from acquiring the device. I don't think the solution you have in mind will help you with this. If you return success from Start(), your dext will drive the device. If you return failure, no driver is loaded for the device, because matching has already concluded. So other drivers would never get a chance anyway, regardless of whether the device is on your allow-list or deny-list.
It's new in DriverKit 21 (i.e. macOS Monterey), and I've not had a chance to try it yet, but there is an API for reading files, OSMappedFile. I would imagine that the DriverKit sandbox will have something to say about which files a dext can open, but exploring whether you can open configuration files this way seems like a worthwhile avenue.
Note that none of this will help you during early boot, as your dext will never be considered for matching at that time. And you may not be able to get the required entitlements from Apple to build a dext which matches USB device classes rather than specific product/vendor ID patterns. (Apologies for repeating myself, but other users may come across this answer and not be aware of this issue.)

Sending Bluetooth Advertising Packets and Getting Some Answers

I want to build something with a Raspberry Pi Zero and write it in Go.
I have never tried Bluetooth before, and my goal is:
Sending a dynamic packet that changes every second; an iOS app will read this message, and with a button press the client will send a message back without a connection.
Is Bluetooth advertising what I am looking for, and do you know any Go library for it? Where should I start?
There are quite a lot of parts to your question. If you want to be connectionless then the BLE roles are Broadcaster (beacon) and Observer (scanner). There are a number of "standard" beacon formats out there. They are summarized nicely on this cheat sheet.
Of course you can create your own format, as these formats use either the Service Data or Manufacturer Data field in a BLE advertisement.
On Linux (Raspberry Pi) the official Bluetooth stack is BlueZ, which documents the APIs available at: https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc
If you want to be connectionless then each device is going to have to change its role regularly. This requires a bit of careful thought about how long each device listens and broadcasts, as you don't want both talking at the same time or both listening at the same time.
You might find the following article of interest to get you started with BLE and Go:
https://towardsdatascience.com/spelunking-bluetooth-le-with-go-c2cff65a7aca
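For the iOS side of the exchange you describe, the usual tool is CoreBluetooth rather than a Go library. Purely as a rough sketch of the observer (scanner) role, assuming the Pi puts its dynamic payload in the service data of a shared service UUID (the UUID below is a placeholder):

```swift
import CoreBluetooth

// Sketch only: scans for the Pi's advertisements and reads the dynamic
// payload from the service data. The service UUID is a placeholder that
// the Pi firmware and the app would have to agree on.
final class BeaconScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private let serviceUUID = CBUUID(string: "FFF0") // placeholder

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Allow duplicates so the app sees the payload every time it changes.
        central.scanForPeripherals(withServices: [serviceUUID],
                                   options: [CBCentralManagerScanOptionAllowDuplicatesKey: true])
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        if let serviceData = advertisementData[CBAdvertisementDataServiceDataKey] as? [CBUUID: Data],
           let payload = serviceData[serviceUUID] {
            print("dynamic payload:", payload as NSData)
        }
    }
}
```

Bear in mind that for the "message back", an iOS app advertising via CBPeripheralManager can only include a local name and service UUIDs in its advertisement, so the reply channel from the phone is quite constrained.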

How can I capture microphone data and route it to a virtual microphone device?

Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that will apply effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).
To do that, I figured that I have to create a virtual microphone to be able to send the processed (with the applied effects) data to communication apps. For example, the user will need to select this new (virtual) microphone device as the Input Device in a Zoom call so that the other users in the call can hear her with her voice being processed.
My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila, and in Chapter 8 the author explains how to write an app that a) uses an AUHAL to capture data from the system's default input device and b) then sends the data to the system's default output using an AUGraph. So, following this example, I figured that I also need to create an app that captures the microphone data only while it's running.
So, what I've done so far:
I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
I've created the app that captures the microphone data.
For both of the above "modules" I'm certain that they work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic with the virtual mic. I need to connect the output of the physical microphone with the input of the virtual microphone.
So, my questions are:
Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
The only related thread I found is this, where the author states that the routing is done by
sending this audio data to the driver via a socket connection, so other apps that request audio from our virtual mic in fact get this audio from a user-space application that listens to the mic at the same time (so it should be active)
but I'm not quite sure how to even start implementing something like that.
The whole process I followed for capturing data from the microphone seems quite long, and I was wondering whether there's a more optimal way to do this. The book seems to be from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, and can this process now be achieved more easily with just a few lines of code?
I think you'll get more results by searching for the term "play through" instead of "routing".
The Adamson / Avila book has an ideal play-through example that unfortunately for you only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).
Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no.
Some informal doco on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
I've never tried it, but you should be able to connect input to output on an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which, last time I checked, did not handle non-default input/output devices well.
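For what it's worth, when the default input and output devices are acceptable, the AVAudioEngine route really is only a few lines. A rough, untested sketch, including an echo-style effect like the one the question mentions (the delay settings are arbitrary):

```swift
import AVFoundation

// Sketch only: plays the default input device through to the default
// output device with a simple echo. Requires microphone permission
// (NSMicrophoneUsageDescription) and does not address virtual devices.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0)

let echo = AVAudioUnitDelay()
echo.delayTime = 0.3   // seconds
echo.feedback = 40     // percent
echo.wetDryMix = 50    // percent

engine.attach(echo)
engine.connect(input, to: echo, format: format)
engine.connect(echo, to: engine.mainMixerNode, format: format)
// mainMixerNode is implicitly connected to the output node.

do {
    try engine.start()
} catch {
    print("Failed to start engine: \(error)")
}
RunLoop.main.run() // keep a command-line tool alive while audio runs
```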
I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well). The devil is in the detail, and much of the work is deciding on what properties you want and their consequences. Some common and conflicting example properties:
minimal lag
minimal dropouts
no time distortion
In my case, if output is lagging too much behind input, I brutally dump everything bar 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which elegantly speeds up the output stream. You should definitely check this out.
And if you find a simpler way, please tell me!
Update
I found a simpler way:
create an AVCaptureSession that captures from your mic
add an AVCaptureAudioPreviewOutput that references your virtual device
When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.
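A rough Swift sketch of those two steps, assuming you can look up the unique ID of your virtual device (the UID string below is a placeholder):

```swift
import AVFoundation

// Sketch only: captures the physical mic and routes it to the virtual
// device via AVCaptureAudioPreviewOutput. Replace the UID with the one
// your NullAudio-based device actually reports; if outputDeviceUniqueID
// is left nil, the preview output uses the system default output.
let session = AVCaptureSession()

guard let mic = AVCaptureDevice.default(for: .audio),
      let micInput = try? AVCaptureDeviceInput(device: mic),
      session.canAddInput(micInput) else {
    fatalError("Could not set up the microphone input")
}
session.addInput(micInput)

let preview = AVCaptureAudioPreviewOutput()
preview.outputDeviceUniqueID = "NullAudioDevice_UID" // placeholder UID
preview.volume = 1.0
guard session.canAddOutput(preview) else {
    fatalError("Could not add the preview output")
}
session.addOutput(preview)

session.startRunning()
RunLoop.main.run() // keep a command-line tool alive while capturing
```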

WinRT/C++ issue with concurrent MIDI and BLE communication

My team has been struggling with a pretty strange issue while using the WinRT/C++ APIs for Windows to both connect to a MIDI port and receive BLE notifications through a proprietary service on the same device.
The WinRT/C++ library itself is really nice and provides easy and modern C++ interfaces to access the managed Windows runtime classes.
I've pushed a sample repo to GitHub where we've replicated the issue with a minimal example.
The repo's readme goes over the problem in detail, but I'll post the relevant bits here for completeness.
The sample program is performing roughly these steps:
Check for available MIDI devices using a DeviceWatcher.
Check for available Bluetooth LE devices using another instance of a DeviceWatcher.
Match discovered MIDI and Bluetooth LE devices on their ContainerId property (see DeviceInfo for details). This is the method JUCE employs in the native WinRT code for their library, and it works as expected.
Open the MIDI port and attach a handler to the MessageReceived event (see the code).
This causes the system to create a connection to the Bluetooth LE device. The program detects this state change, creates a BluetoothLEDevice, performs GATT service discovery, and attaches a handler to the ValueChanged event for the characteristic we're interested in receiving notifications from (see the code).
The program then counts how many MIDI messages are received on each port and how many BLE notifications are received from the corresponding device.
The behaviour we notice is that data from the most recently connected device streams just fine, while the throughput for the others is severely limited. We are at quite a standstill regarding this issue, and are not sure where the problem may lie.
I'd be more willing to accept it if all the devices exhibited this behaviour, but that's not the case. Is there any reason that creating both a MidiInPort and a BluetoothLEDevice from the same peripheral should cause this issue?
A BLE radio can only receive or send at any given time, and can therefore only communicate with one device at a time. When you have many devices, it uses a scheduler to allocate radio time to each of them. That way a second connection can "interrupt" a connection event from another device, decreasing the throughput for that device. See https://infocenter.nordicsemi.com/topic/sds_s132/SDS/s1xx/multilink_scheduling/central_connection_timing.html

Is ACK mandatory in CAN bus communication?

I am making a CAN simulator for GPS trackers; they only record CAN data and don't send an ACK. Is it possible to send CAN data with a Raspberry Pi, using an MCP2515/TJA1050, without any device on the bus that would generate the ACK?
This will usually result in continuous retransmission.
Some devices have a "one-shot" transmit mode which just sends the CAN frame and does not attempt a retransmission. If your transmitter has this mode you can do what you describe; otherwise you will get a lot of retransmissions.
No, it isn't possible; you need at least two nodes that are actively participating in the communication. This can however be fixed by simply providing another CAN controller on the bus, which doesn't have to do anything intelligent except the ACK part.
For development/debug/test purposes you can however put your own node in "loopback mode", meaning it will talk to itself. This can be handy if you don't have the proper hardware available yet.
You can try setting the control mode presume-ack to on.
Assuming you are using the ip command for configuring your CAN interface, that would be something like:
ip link set <DEVICE> type can presume-ack on
This will make the controller ignore missing ACKs. However, I am not sure whether this works with all controllers.
