I need to disconnect from the sender and return to the receiver's main ("ready to cast") screen.
The receiver decides when and how to do that (e.g. after 10 minutes of idling once media playback was paused).
I've tried stopping the receiver explicitly:
receiver.stop()
and disconnecting the connection service
cs = receiver.getConnectionService()
cs.disconnect()
That didn't work as I wanted: the receiver doesn't return to the main screen, and all senders still see an ongoing cast session.
So how can I force a disconnect on the receiver? The Receiver API page seems to describe only these two methods.
You shouldn't call receiver.stop() or cs.disconnect(); instead, try calling window.close().
In a DriverKit extension, I would like to block a call from a user client until a specific hardware interrupt fires. Since there are no semaphores available (Does the DriverKit SDK support semaphores?), I've reached for a very basic spinlock using an _Atomic(bool) member and busy waiting:
struct IVars
{
    volatile _Atomic(bool) InterruptOccurred = false;
};

// In the user client method handler
{
    // Clear the flag
    atomic_store(&ivars->InterruptOccurred, false);
    // Set up the interrupt on the device
    ...
    // Wait for the interrupt
    while (!atomic_load(&ivars->InterruptOccurred))
    {
        IOSleep(10);
    }
}

// In the interrupt handler
{
    bool expected = false;
    if (atomic_compare_exchange_strong(&ivars->InterruptOccurred, &expected, true))
    {
        return;
    }
    // Proceed with normal handling if the user client method is not waiting
}
The user client method is called infrequently and the interrupt is guaranteed to fire within 100ms, so in principle busy waiting should be acceptable, but I am not very happy with the solution. I haven't worked with spinlocks before and they make me feel rather uneasy.
I would like to avoid taking an IOLock in the interrupt handler. Is there any other synchronization primitive in DriverKit I could reach for? I guess a cleaner way to handle this would be for the user client method to accept a callback that fires on the interrupt, but that would still require synchronization with the interrupt handler and would complicate the client application code.
Preliminaries
I would like to avoid taking an IOLock in the interrupt handler.
I assume you're aware that, this being DriverKit, your handler isn't running in the context of a primary interrupt controller, but is already behind a layer of Mach messaging, a kernel/user context switch, and IODispatchQueue serialisation?
Possible solutions:
Since there are no semaphores available[…]
OSAction
The OSAction class contains a set of methods for sleeping in a thread until the action is invoked. (WillWait/Wait/EndWait) This might be a feasible way of implementing what you're trying to do. As usual, the documentation is in the header/iig file but hasn't made it into the web-based API docs.
IODispatchQueue
As of DriverKit 21 (macOS 12), you also get Apple's simpler Sleep/Wakeup event system baked into IODispatchQueue, which you might be familiar with from the kernel. (It is also similar to pthreads condition variables.) Note you need to create the queue with the kIODispatchQueueReentrant option in this case.
From DriverKit 22 (macOS 13/iPadOS) onwards, there's also a version of the sleep call with a deadline, SleepWithDeadline.
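For illustration only, creating such a reentrant queue might look roughly like this (assuming the IODispatchQueue::Create signature from recent SDKs and an arbitrary queue name; check IODispatchQueue.iig for the exact Sleep/Wakeup/SleepWithDeadline declarations):
// In Start(): create a reentrant queue so that a sleeping external method
// doesn't wedge everything else scheduled on the same queue
IODispatchQueue* queue = nullptr;
kern_return_t ret = IODispatchQueue::Create("InterruptWaitQueue",
                                            kIODispatchQueueReentrant, 0, &queue);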
Async callbacks
I guess a cleaner way to handle this would be for the user client method to accept a callback that fires on the interrupt, but that would still require synchronization with the interrupt handler and would complicate the client application code.
If you're happy calling the async callback in the app on every interrupt, there's not really any synchronisation needed: you can just invoke the same OSAction repeatedly. Even if you want to invoke the async call only on the "next" interrupt, an atomic compare-and-swap should be sufficient for the interrupt handler to claim the OSAction* pointer, as in the sketch below.
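To sketch that last variant, reusing the C atomics from the question (the PendingAction field, the CompleteInterruptNotification method, and its signature are hypothetical names; the OSAction* itself arrives as IOUserClientMethodArguments::completion in the async external method):
struct IVars
{
    _Atomic(OSAction*) PendingAction = nullptr;   // claimed by the interrupt handler
};

// In the async user client method handler
{
    OSAction* action = arguments->completion;     // completion OSAction* supplied by the framework
    action->retain();                             // keep it alive until the interrupt fires
    atomic_store(&ivars->PendingAction, action);
    // Set up the interrupt on the device
    ...
}

// In the interrupt handler
{
    OSAction* action = atomic_exchange(&ivars->PendingAction, (OSAction*)nullptr);
    if (action)
    {
        // Fire the async completion declared in your .iig (hypothetical name)
        CompleteInterruptNotification(action, kIOReturnSuccess);
        action->release();
        return;
    }
    // Proceed with normal handling if no async call is pending
}
The exchange guarantees exactly one of the two paths sees the pending action, so no lock is needed in the interrupt handler.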
Important note:
With all of these potential solutions except IODispatchQueue::Sleep and the async callback, bear in mind that sleeping in the context of a user client external method will block the dispatch queue, so any other calls to external methods in that user client will fail to make progress (as will any other methods scheduled on that queue).
With ZeroMQ and CPPZMQ 4.3.2, I want to drop old messages for all my sockets including
PAIR
Pub/Sub
REQ/REP
So I use m_socks[channel].setsockopt(ZMQ_CONFLATE, 1) on all my sockets before binding/connecting.
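Concretely, the setup looks roughly like this for one socket (sketched with plain cppzmq; the real code goes through my m_socks wrapper, so the names here are placeholders):
zmq::context_t ctx;
zmq::socket_t sock(ctx, zmq::socket_type::pair);
int conflate = 1;
sock.setsockopt(ZMQ_CONFLATE, &conflate, sizeof(conflate)); // keep only the most recent message
sock.bind("inproc://sound-ear.pair");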
Test
However, when I made the following test, it seems that the old messages still come through on each reconnection. In this test,
I use a thread to keep sending a generated sinewave to a receiver thread
Every 10 seconds I double the sinewave's frequency
Then after 10 seconds I stop the process
Below is the pseudocode of the sender
// on sender end
auto thenSec = high_resolution_clock::now();
while (m_isRunning) {
    // generate sinewave, double the frequency every 10s or so
    auto nowSec = high_resolution_clock::now();
    if (duration_cast<seconds>(nowSec - thenSec).count() > 10) {
        m_sine.SetFreq(m_sine.GetFreq() * 2);
        thenSec = nowSec;
    }
    m_sine.Generate(audio);
    // send to rendering thread
    m_messenger.send("inproc://sound-ear.pair",
                     (const void*)(audio),
                     audio_size,
                     zmq::send_flags::dontwait
    );
}
Note that I already use DONTWAIT to mitigate blocking.
On the receiver side I have a zmq::poller_event handler that simply receives the last message on event polling.
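Roughly, the receiving side does something like this (a sketch using cppzmq's draft-API poller; the names are placeholders for my actual wrappers):
zmq::poller_t<> poller;
poller.add(receiver, zmq::event_flags::pollin);
std::vector<zmq::poller_event<>> events(1);
while (m_isRunning) {
    const auto n = poller.wait_all(events, std::chrono::milliseconds(100));
    for (std::size_t i = 0; i < n; ++i) {
        zmq::message_t msg, last;
        // drain whatever is queued and keep only the most recent message
        while (events[i].socket.recv(msg, zmq::recv_flags::dontwait))
            last = std::move(msg);
        // ...hand `last` over to the audio rendering
    }
}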
In the stop sequence I reset the sinewave frequency to its lowest value, say, 440Hz.
Expected
The expected behaviour would be:
If I stop both the sender and the receiver after 10s when the frequency is doubled,
and I restart both,
then I should see the sinewave reset to 440Hz.
Observed
But the observed behaviour is that the received sinewave is still of the doubled frequency after restarting the communication, i.e., 880Hz.
Question
Am I doing it wrong or should I use some kind of killswitch to force drop all messages in this case?
OK, I think I solved it myself. Kind of.
Actual solution
I finally realized that the behaviour I want is to flush all messages when I stop the rendering. According to the official doc (How can I flush all messages that are in the ZeroMQ socket queue?), this can only be achieved by:
setting the ZMQ_LINGER option of both the sender's and the receiver's sockets to 0, meaning nothing is kept when those sockets are closed;
closing the sockets on both the sender and receiver ends, which also involves re-bootstrapping the pollers and all references to the sockets.
This seems like a lot of unnecessary work if I'm going to restart rendering my data right after the stop sequence, but I found no other way to solve this cleanly.
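In code, that teardown looks roughly like this (sketched with plain cppzmq; the socket and poller names are placeholders for my own wrappers):
sender.setsockopt(ZMQ_LINGER, 0);    // keep nothing when the socket closes
receiver.setsockopt(ZMQ_LINGER, 0);
poller.remove(receiver);             // drop poller registrations and any other references first
sender.close();
receiver.close();
// ...then recreate and reconnect the sockets before the next rendering run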
Initial effort
It seems to me that ZMQ_CONFLATE does not make a difference on PAIR sockets. I really have to tweak the high water marks on the sender and receiver ends using ZMQ_SNDHWM and ZMQ_RCVHWM.
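For reference, this is the tweak (again with plain cppzmq and placeholder names); it has to be done before bind/connect:
int hwm = 1;                                        // queue at most one message in each direction
sender.setsockopt(ZMQ_SNDHWM, &hwm, sizeof(hwm));
receiver.setsockopt(ZMQ_RCVHWM, &hwm, sizeof(hwm));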
However, I said "kind of solved" because tweaking HWM in the end is not the optimal solution for a realtime application,
having ZMQ_SNDHWM / ZMQ_RCVHWM set to the minimum "1", we still have a sizable latency in terms of realtime.
Also, the consumer thread could fall into underrun situatioin, i.e., perceivable jitters with the lowest HWM.
If I'm not doing anything wrong, I guess the optimal solution for my targeted scenario would still be shared memory. This is sad, because I really enjoy the simplicity of ZMQ's multicast messaging patterns and hate having to deal with thread locking littered everywhere.
I am making an add-on for Firefox, so I have a ChromeWorker, which is a privileged WebWorker. This is just a thread other than the main thread.
In here I have no code but this (modified so it doesn't look like js-ctypes, which is the language used for add-ons).
On startup I run this code; conn is a global variable:
conn = xcb_connect(null, null);
Then I run this in a 200ms interval:
evt = xcb_poll_for_event(conn);
console.log('evt:', evt);
if (!evt.isNull()) {
    console.log('good got an event!!');
    ostypes.API('free')(evt);
}
However, evt is always null; I am never getting any events. My goal is to get all events on the system.
Anyone know what can cause something so simple to not work?
I have tried
xcb_change_window_attributes (conn, screens.data->root, XCB_CW_EVENT_MASK, values);
But this didn't fix it :(
The only way I can get it to work is by doing xcb_create_window and then xcb_map_window, but then I get ONLY the events that happen in that created window.
You don't just magically get all events by opening a connection. There are only a few messages any client will receive, such as client messages; most others will only be sent to a client if it has explicitly registered itself to receive them.
And yes, that means you have to register on each and every window, which involves both crawling the window tree and listening for windows being created, mapped, unmapped and destroyed, and registering on those as well.
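A rough sketch of that registration pass in plain C against libxcb (your js-ctypes bindings would mirror these calls; the event mask here is only an example):
#include <stdlib.h>
#include <xcb/xcb.h>

/* Recursively select events on a window and all of its children. */
static void register_events(xcb_connection_t *conn, xcb_window_t win)
{
    const uint32_t values[] = { XCB_EVENT_MASK_SUBSTRUCTURE_NOTIFY |
                                XCB_EVENT_MASK_PROPERTY_CHANGE };
    xcb_change_window_attributes(conn, win, XCB_CW_EVENT_MASK, values);

    xcb_query_tree_reply_t *tree =
        xcb_query_tree_reply(conn, xcb_query_tree(conn, win), NULL);
    if (!tree)
        return;
    xcb_window_t *children = xcb_query_tree_children(tree);
    for (int i = 0; i < xcb_query_tree_children_length(tree); ++i)
        register_events(conn, children[i]);
    free(tree);
}

/* Start from the root window of every screen; SUBSTRUCTURE_NOTIFY on the
 * roots then reports newly created/mapped windows so you can register on
 * those too. */
static void register_all(xcb_connection_t *conn)
{
    xcb_screen_iterator_t it = xcb_setup_roots_iterator(xcb_get_setup(conn));
    for (; it.rem; xcb_screen_next(&it))
        register_events(conn, it.data->root);
    xcb_flush(conn);
}
After this, your existing xcb_poll_for_event() loop should start seeing CreateNotify/MapNotify/PropertyNotify events, and you would call register_events() again for each newly created window.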
However, I would reconsider whether
My goal is to get all events on the system.
isn't an A-B problem. Why do you "need" all events? What do you actually want to do?
In WCF, I'm able to use this call to get the service contract and then in that service contract I can invoke a method which calls back the client. Ok fine.
OperationContext.Current.GetCallbackChannel<IMyServiceContract>
But, ultimately, this is just a bare-bones service contract. The method name "GetCallbackChannel" suggests to me that a channel object should be returned here, you know, a channel object that has a State property (Closed, Open, etc.) as well as events for state changes.
WCF sure makes it difficult to grab the channel it is keeping open for the async callback. How else can I grab this channel?
Ok, I found an answer. I had tried something similar, but providing the CommunicationObject type in GetCallbackChannel didn't work. Casting the result to ICommunicationObject after getting the channel does work.
var c = OperationContext.Current.GetCallbackChannel<IMyServiceContract>();
ICommunicationObject chan = (ICommunicationObject)c;
We have an explicit requirement to tear down the streaming when the sender disconnects.
However, we can see that the 'senderdisconnected' event and window.castReceiverManager.onSenderDisconnected() are only fired about 10 minutes after the device has actually left the network.
Can we somehow force the Receiver to check the connection more aggressively?
appConfig.maxInactivity = 6000;
Set this and you will be fine.