Trouble With Subscribing To Peripheral Characteristic In Xamarin On iOS 13

I use BLE to connect to a device. The device has a characteristic which returns a value every 20 ms. I subscribe to the characteristic with SetNotifyValue(true, characteristic). On iOS 13 the number of characteristic updates is so large that it blocks the entire application. Earlier iOS versions do not have this problem. Based on my application output, the amount of data received on iOS 13 is far greater.
The characteristic values are added to a queue, and I read / dequeue them in another thread. On previous iOS versions the queue never grows beyond single-digit size. On iOS 13 the queue quickly grows; it reached 10 000+ before I stopped the application. The values are being added so fast that the other thread never gets a chance to access the queue and dequeue them.
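For reference, the setup looks roughly like this. This is a minimal hypothetical sketch of what is described above, not the actual app code; _queue, _running and ProcessValue are illustrative placeholders:

using System.Collections.Concurrent;
using System.Threading.Tasks;
using CoreBluetooth;
using Foundation;

class BleListener
{
    readonly ConcurrentQueue<NSData> _queue = new ConcurrentQueue<NSData>();
    volatile bool _running = true;

    public void Subscribe(CBPeripheral peripheral, CBCharacteristic characteristic)
    {
        // Note: the Xamarin binding really does spell the event "UpdatedCharacterteristicValue".
        peripheral.UpdatedCharacterteristicValue += (sender, e) =>
        {
            if (e.Error == null && e.Characteristic.Value != null)
                _queue.Enqueue(e.Characteristic.Value);   // producer: one entry per notification
        };

        peripheral.SetNotifyValue(true, characteristic);  // subscribe to notifications
    }

    public void StartConsumer()
    {
        // Consumer: drains the queue on another thread, independently of the BLE callbacks.
        Task.Run(async () =>
        {
            while (_running)
            {
                while (_queue.TryDequeue(out var value))
                    ProcessValue(value);

                await Task.Delay(20);   // roughly the device's 20 ms update rate
            }
        });
    }

    void ProcessValue(NSData value) { /* handle one reading */ }
}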
Has anyone encountered this or a similar problem? I am seeking advice / suggestions on how I can further investigate the cause of this behaviour.
I wanted to see if every characteristic subscription does this, so I checked the battery level characteristic; subscribing to it works fine.
I tried emptying the queue whenever its size exceeded n, before new values were added. This did not help, because the values still arrived at the same speed and the other thread still could not access the queue to dequeue them.
I removed SetNotifyValue(true, characteristic) for the problematic characteristic and added a timer that reads the characteristic value at intervals. I tried different intervals (20 ms / 50 ms / 500 ms / 10 000 ms). It seems it still somehow subscribes to the characteristic, as the application output is the same as before.
I am currently unsure if I am allowed to show any / how much of the code.
Here is the output of the application. Each line prints the last 22 received values; I have shortened it for easier overview and included it to show the speed at which data arrives. This is printed from UpdatedCharacterteristicValue.
iOS 13:
[13:27:38.0410] 22 values
[13:27:38.0416] 22 values
[13:27:38.0423] 22 values
[13:27:38.0430] 22 values
[13:27:38.0435] 22 values
[13:27:38.0440] 22 values
[13:27:38.0445] 22 values
[13:27:38.0450] 22 values
[13:27:38.0455] 22 values
[13:27:38.0461] 22 values
[13:27:38.0465] 22 values
iOS 10.3.4:
[13:20:19.0000] 22 values
[13:20:19.0840] 22 values
[13:20:20.0680] 22 values
[13:20:21.0491] 22 values
[13:20:22.0361] 22 values
[13:20:23.0171] 22 values
[13:20:24.0009] 22 values
[13:20:24.0852] 22 values
[13:20:25.0690] 22 values
[13:20:26.0500] 22 values
[13:20:27.0310] 22 values

The problem was that I had a call to ReadValue in UpdatedCharacterteristicValue.
Looking at the device in nRF Connect, this characteristic has no read option, only notify. My guess is that previous iOS versions handled this differently; on iOS 13 the subscription was apparently made again.
After removing the ReadValue() call, the application now behaves as expected.
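For reference, the problematic pattern was roughly this (a hypothetical reconstruction, using the same names as the sketch earlier in the question); removing the marked line fixed it:

peripheral.UpdatedCharacterteristicValue += (sender, e) =>
{
    _queue.Enqueue(e.Characteristic.Value);
    peripheral.ReadValue(e.Characteristic);   // <- extra read on a notify-only characteristic;
                                              //    removing this line stopped the flood on iOS 13
};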
I would also like to thank Apple Support for helpfully recommending that I contact the hardware staff. It was crucial in finding the error in my code.

Related

Dataflow job has high data freshness and events are dropped due to lateness

I deployed an Apache Beam pipeline to GCP Dataflow in a DEV environment and everything worked well. Then I deployed it to production in a Europe environment (to be specific: job region europe-west1, worker location europe-west1-d), where we get high data velocity, and things started to get complicated.
I am using a session window to group events into sessions. The session key is the tenantId/visitorId and its gap is 30 minutes. I am also using a trigger to emit events every 30 seconds to release events sooner than the end of session (writing them to BigQuery).
The problem appears to happen in EventToSession/GroupPairsByKey. In this step there are thousands of events under the droppedDueToLateness counter, and the dataFreshness keeps increasing (it has been increasing since I deployed the job). All steps before this one operate fine, and all steps after it are affected by it but don't seem to have any other problems.
I looked into some metrics and see that the EventToSession/GroupPairsByKey step is processing between 100K and 200K keys per second (depending on the time of day), which seems like quite a lot to me. CPU utilization doesn't go over 70% and I am using Streaming Engine. The number of workers is 2 most of the time. Max worker memory capacity is 32 GB, while max worker memory usage currently stands at 23 GB. I am using the e2-standard-8 machine type.
I don't have any hot keys since each session contains at most a few dozen events.
My biggest suspicion is the huge number of keys being processed in the EventToSession/GroupPairsByKey step. But on the other hand, a session usually relates to a single customer, so Google should expect to handle this number of keys per second, no?
I would like to get suggestions on how to solve the dataFreshness and droppedDueToLateness issues.
Adding the piece of code that generates the sessions:
input = input
    .apply("SetEventTimestamp", WithTimestamps.of(event -> Instant.parse(getEventTimestamp(event)))
        .withAllowedTimestampSkew(new Duration(Long.MAX_VALUE)))
    .apply("SetKeyForRow", WithKeys.of(event -> getSessionKey(event)))
    .setCoder(KvCoder.of(StringUtf8Coder.of(), input.getCoder()))
    .apply("CreatingWindow", Window.<KV<String, TableRow>>into(Sessions.withGapDuration(Duration.standardMinutes(30)))
        .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(30))))
        .discardingFiredPanes()
        .withAllowedLateness(Duration.standardDays(30)))
    .apply("GroupPairsByKey", GroupByKey.create())
    .apply("CreateCollectionOfValuesOnly", Values.create())
    .apply("FlattenTheValues", Flatten.iterables());
After doing some research I found the following:
Regarding constantly increasing data freshness: as long as late data is allowed to arrive at a session window, that specific window will persist in memory. This means that allowing 30 days of late data will keep every session in memory for at least 30 days, which obviously can overload the system. Moreover, I found we had some ever-lasting sessions created by bots visiting and taking actions on the websites we monitor. These bots can hold sessions open forever, which also overloads the system. The solution was decreasing allowed lateness to 2 days and using bounded sessions (look for "bounded sessions").
Regarding events dropped due to lateness: these are events that, at their time of arrival, belong to an expired window, i.e. a window whose end the watermark has already passed (see the documentation for droppedDueToLateness). These events are dropped in the first GroupByKey after the session window function and can't be processed later. We didn't want to drop any late data, so the solution was to check each event's timestamp before it enters the session part, and to stream into the session part only events that won't be dropped, i.e. events that meet this condition: event_timestamp >= event_arrival_time - (gap_duration + allowed_lateness). The rest are written to BigQuery without the session data. (Apparently Apache Beam drops an event if its timestamp is before event_arrival_time - (gap_duration + allowed_lateness), even if there is a live session this event belongs to...)
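The pre-filter condition itself is just arithmetic. A minimal sketch (shown in C# purely to illustrate the check; in the actual pipeline it sits in a filtering step before the windowing, and the names are placeholders):

static bool SurvivesSessionWindowing(
    DateTimeOffset eventTimestamp,
    DateTimeOffset arrivalTime,
    TimeSpan gapDuration,       // 30 minutes in the pipeline above
    TimeSpan allowedLateness)   // 2 days after the fix
{
    // Anything older than arrival - (gap + lateness) would end up in droppedDueToLateness,
    // so such events are routed straight to BigQuery without session data instead.
    return eventTimestamp >= arrivalTime - (gapDuration + allowedLateness);
}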
P.S. - in the bounded-sessions article, where the author demonstrates how to implement a time-bounded session, I believe there is a bug that allows a session to grow beyond the provided max size. Once a session has exceeded the max size, one can send late data that intersects the session and precedes it, making the session's start time earlier and thereby expanding the session. Furthermore, once a session exceeds the max size, events that belong to it but don't extend it can no longer be added.
In order to fix that, I switched the order of the current window span and the if-statement, and edited the if-statement (the one checking the session max size) in the mergeWindows function in the window-spanning part, so a session can't pass the max size and can only accept data that doesn't extend it beyond the max size. This is my implementation:
public void mergeWindows(MergeContext c) throws Exception {
    // Sort the candidate windows so that overlapping windows are adjacent.
    List<IntervalWindow> sortedWindows = new ArrayList<>();
    for (IntervalWindow window : c.windows()) {
        sortedWindows.add(window);
    }
    Collections.sort(sortedWindows);

    // Merge intersecting windows, but never let a merged session span more than maxSize + gapDuration.
    List<MergeCandidate> merges = new ArrayList<>();
    MergeCandidate current = new MergeCandidate();
    for (IntervalWindow window : sortedWindows) {
        MergeCandidate next = new MergeCandidate(window);
        if (current.intersects(window)) {
            // Only absorb this window if the resulting span stays within the size bound.
            if (current.union == null
                    || new Duration(current.union.start(), window.end()).getMillis()
                            <= maxSize.plus(gapDuration).getMillis()) {
                current.add(window);
                continue;
            }
        }
        // Either no intersection or the size bound would be exceeded: close the current candidate.
        merges.add(current);
        current = next;
    }
    merges.add(current);
    for (MergeCandidate merge : merges) {
        merge.apply(c);
    }
}

How to implement a queue management system in Redis

I'm trying to implement a queuing system using Redis. So let's say we have this list:
> RPUSH listTickets 1 2 3 4 5 6 7 8 9
and as a mobile app user, someone was assigned number 4, and another user number 6. Now I want to display to them how many tickets are in front of them (also to estimate the waiting time). Now this may seem easy:
> LPOP listTickets
"4"
Then we broadcast the result (the current ticket number being called), and on the mobile app everyone subtracts it from their own ticket number. For example, my ticket is 6, so 6-4=2. That way every user knows how many tickets are ahead of them.
However, once you want to add a feature that lets a user delete their ticket or push it to the end of the queue, things get complicated. After some deletions, for example:
> LRANGE listTickets 0 -1
1) "2"
2) "4"
3) "6"
4) "7"
5) "8"
when we LPOP listTickets we will get number 2, and the mobile app with ticket 6 will calculate 6-2=4, which is wrong.
Do you have any algorithm in mind? Is getting the index of every ticket in the list every time someone deletes their ticket an expensive operation? Should I just send all the tickets in the queue and let the users calculate their own position?
I want the system to be scalable. Can a Redis node handle a total of 50 thousand connected mobile apps (across different queues or lists) getting their ranking in the queue they submitted their ticket to?
I thought about using ZRANK, but how much load would that put on the server?
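Concretely, the sorted-set approach I have in mind would look roughly like this (a minimal sketch assuming StackExchange.Redis; the key and member names are illustrative):

using StackExchange.Redis;

class TicketQueue
{
    readonly IDatabase _db;
    const string Key = "queue:tickets";

    public TicketQueue(IDatabase db) => _db = db;

    // Issue a ticket: the score is the monotonically increasing ticket number,
    // so the sorted set keeps arrival order.
    public void AddTicket(long ticketNumber) =>
        _db.SortedSetAdd(Key, ticketNumber.ToString(), ticketNumber);

    // How many tickets are ahead of me? ZRANK is O(log N).
    public long? PositionOf(long ticketNumber) =>
        _db.SortedSetRank(Key, ticketNumber.ToString());

    // Leaving the queue (or being served) is a plain ZREM.
    public void RemoveTicket(long ticketNumber) =>
        _db.SortedSetRemove(Key, ticketNumber.ToString());
}

ZADD, ZRANK and ZREM are all O(log N), so a deletion does not require recalculating anything for other users; each app can simply re-query its own rank.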

Firebase serverTimeOffset and Firebase.ServerValue.TIMESTAMP differences

Context
I have multiple servers listening to a specific collection (/items). Each of them uses NTP for time calibration and the ".info/serverTimeOffset" to measure the expected time difference with Firebase. It is consistently around 20ms.
I have many clients pushing items to the collection with the specific field:
{
...
created: Firebase.database.ServerValue.TIMESTAMP
}
What is expected:
When the server receives the item from Firebase and subtracts item.created from the expected Firebase time (Date.now() + offset), the result should be positive, probably around 10ms (the time for the item to travel from Firebase to the server).
What is happening:
When the server receives the items, the item.created field is greater than the expected Firebase time, as if the item was created in the future. Usually the difference is around -5ms.
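For clarity, the computation being described is roughly this (a sketch for illustration only, not the actual server code; offsetMs stands for the value read from ".info/serverTimeOffset"):

static double DeltaMs(long createdMs, long offsetMs)
{
    // Expected Firebase server time right now, as seen from this server.
    long expectedServerNow = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() + offsetMs;

    // Should be positive (~10 ms of transit time); observed here is around -5 ms.
    return expectedServerNow - createdMs;
}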
Question:
What is Firebase.database.ServerValue.TIMESTAMP set to, and how is it related to ".info/serverTimeOffset"?
On the 27th of September 2016 at 1am UTC, that difference jumped from -5ms to around -5000ms, as if some kind of re-calibration happened (it lasted until I reset the .info/serverTimeOffset). Has anyone experienced something similar?

How to get recent events recorded in the event logs (e.g. logged about 10 seconds ago) in Windows using C++?

I need to collect Windows event log entries that were logged about 10 seconds ago. Using a pull subscription I could collect logs already saved before the program started, as well as logs saved while the program is running. I tried the code available on MSDN:
Subscribing to Events
"I need to start collecting events logged 10 seconds ago." Here I think I need to set the value of the LPWSTR pwsQuery to achieve that.
L"*[System/Level= 2]" gives the events with level equal to 2.
L"*[System/EventID= 4624]" gives events whose EventID is 4624.
L"*[System/Level < 1]" gives events with level < 1.
In the same way, I need to set the value of pwsQuery to get events logged about 10 seconds ago. Can I do it like the examples above? If so, how? If not, what are the other ways to do it?
EvtSubscribe() gives you new events as they happen. You need to use EvtQuery() to get existing events that have already been logged.
The Consuming Events documentation shows a sample query that retrieves events beginning at a specific time:
// The following query selects all events from the channel or log file where the severity level is
// less than or equal to 3 and the event occurred in the last 24 hour period.
XPath Query: *[System[(Level <= 3) and TimeCreated[timediff(@SystemTime) <= 86400000]]]
So, you can use TimeCreated[timediff(@SystemTime) <= 10000] to get events in the last 10 seconds.
The TimeCreated element is documented here:
TimeCreated (SystemPropertiesType) Element
The timediff() function is described in the Consuming Events documentation:
The timediff function is supported. The function computes the difference between the second argument and the first argument. One of the arguments must be a literal number. The arguments must use FILETIME representation. The result is the number of milliseconds between the two times. The result is positive if the second argument represents a later time; otherwise, it is negative. When the second argument is not provided, the current system time is used.
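The query string is the same anywhere the event log XPath schema is accepted. The question targets the C++ EvtQuery()/EvtSubscribe() APIs, but just to show the 10-second filter in action, here is a minimal .NET sketch (the "System" channel is only an example):

using System;
using System.Diagnostics.Eventing.Reader;

class RecentEvents
{
    static void Main()
    {
        // timediff() returns milliseconds, so 10000 means "created in the last 10 seconds".
        const string xpath = "*[System[TimeCreated[timediff(@SystemTime) <= 10000]]]";

        var query = new EventLogQuery("System", PathType.LogName, xpath);
        using (var reader = new EventLogReader(query))
        {
            for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
            {
                Console.WriteLine($"{record.TimeCreated}  {record.ProviderName}  Id={record.Id}");
                record.Dispose();
            }
        }
    }
}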
 

How to condense a stream of incrementing sequence numbers down to one?

I am listening to a server which sends certain messages to me with sequence numbers. My client parses out the sequence number in order to keep track of whether we get a duplicate or whether we miss a sequence number, though it is called generically by a wrapper object which expects a single incremental sequence number. Unfortunately this particular server sends different streams of sequence numbers, incremental only within each substream. In other words, a simpler server would send me:
1,2,3,4,5,7
and I would just report back 1,2,3,4,5,7, and the wrapper tool would notice that one message was lost. Unfortunately this more complex server sends me something like:
A1,A2,A3,B1,B2,A4,C1,A5,A7
(except the letters are actually numerical codes too, conveniently). The above has no gaps except for A6, but since I need to report one number to the wrapper object, I cannot report:
1,2,3,1,2,4,1,5,7
because that will be interpreted incorrectly. As such, I want to condense, in my client, what I receive into a single incremental stream of numbers. The example
A1,A2,A3,B1,B2,A4,C1,A5,A7
should really translate to something like this:
1,2,3,4 (because B1 is really the 4th unique message), 5, 6, 7, 8, 10 (since 9 could have been A6, B3, C2 or another letter-1)
then this would be picked up as having missed one message (A6). Another example sequence:
A1,A2,B1,A7,C1,A8
could be reported as:
1,2,3,8,9,10
because the first three are logically in a valid sequence with nothing missing. Then we get A7, which means we missed 4 messages (A3, A4, A5, and A6), so I report back 8 so the wrapper can tell. Then C1 comes in and that is fine, so I give it #9, and then A8 is the next expected A, so I give it 10.
I am having difficulty figuring out a way to create this behavior though. What are some ways to go about it?
For each stream, make sure that the stream has the correct sequence. Then emit the count of all valid sequence numbers you've seen as the aggregate sequence number. Pseudocode:
function initialize()
    for stream in streams do
        streams[stream] = 0
    aggregateSeqno = 0

function process(streamId, seqno)
    if seqno = streams[streamId] then
        streams[streamId] = seqno + 1
        aggregateSeqno = aggregateSeqno + 1
        return aggregateSeqno
    else
        try to fix streams[streamId] by replying to the server

function main()
    initialize()
    while server not finished do
        (streamId, seqno) = receive()
        process(streamId, seqno)
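A runnable version of that pseudocode might look like this (a sketch only; RequestResend stands in for however you re-request missed messages from the server, and the first expected number is 1 to match the A1/B1/C1 examples in the question):

using System;
using System.Collections.Generic;

class SequenceCondenser
{
    // Next expected sequence number per stream ("A", "B", "C", ...).
    readonly Dictionary<string, long> _expected = new Dictionary<string, long>();
    long _aggregateSeqno;

    // Returns the aggregate sequence number for an in-order message, or null while a gap is unresolved.
    public long? Process(string streamId, long seqno)
    {
        if (!_expected.TryGetValue(streamId, out var expected))
            expected = 1;   // first message ever seen on this stream

        if (seqno == expected)
        {
            _expected[streamId] = seqno + 1;
            return ++_aggregateSeqno;   // one aggregate number per valid message
        }

        RequestResend(streamId, expected, seqno);   // try to fix the stream
        return null;
    }

    static void RequestResend(string streamId, long expected, long received) =>
        Console.WriteLine($"gap on {streamId}: expected {expected}, got {received}");
}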
