iBeacon: When to send beacon event to a server - performance

I am working on an iBeacon app where I monitor and range beacons. However, when the app starts ranging for beacons in a region, I get an endless stream of range updates for as long as the user is in a beacon's range.
My question is: when should I send the beacon proximity to the server?
And if someone could explain the optimal way to queue and send a list of beacon events to a web server, it would be much appreciated.

The optimal way to send beacon proximity events to the server all depends on your business use case. Here are a few common options:
1. Send an event whenever a new beacon identifier is first detected, along with the proximity at that time.
2. Send an event periodically (say, every 10 minutes) with a full list of beacons seen during that period, along with their minimum/maximum proximities over that period.
3. Send an event whenever the proximity crosses a threshold (e.g. send an event only when a unique beacon identifier first reaches near or immediate proximity).
Implementing the above on iOS often involves tracking the detections in a Dictionary, then triggering the server call at the appropriate logical time from the didRangeBeacons:inRegion callback, based on what has been tracked so far in that dictionary. Logic implementing option 1, 2, or 3 above will keep the number of server calls limited.
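To illustrate option 1, here is a rough, platform-neutral Java sketch of the dictionary-tracking idea. On iOS the onBeaconsRanged logic would live inside the didRangeBeacons:inRegion delegate callback; BeaconReporter and the server call below are hypothetical stand-ins.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Platform-neutral sketch of option 1: report each beacon only on first sight.
// On iOS this logic would sit inside the didRangeBeacons:inRegion callback.
public class BeaconReporter {

    // Tracks beacon identifiers that have already been reported this session.
    private final Map<String, String> reported = new ConcurrentHashMap<>();

    // beaconId: e.g. "uuid/major/minor"; proximity: e.g. "near", "immediate".
    public void onBeaconsRanged(Map<String, String> rangedBeacons) {
        for (Map.Entry<String, String> beacon : rangedBeacons.entrySet()) {
            // putIfAbsent returns null only on first detection,
            // so the server is called at most once per beacon.
            if (reported.putIfAbsent(beacon.getKey(), beacon.getValue()) == null) {
                sendToServer(beacon.getKey(), beacon.getValue());
            }
        }
    }

    private void sendToServer(String beaconId, String proximity) {
        // Hypothetical server call -- replace with your real HTTP client code.
        System.out.printf("POST /beacon-events id=%s proximity=%s%n", beaconId, proximity);
    }

    public static void main(String[] args) {
        BeaconReporter reporter = new BeaconReporter();
        reporter.onBeaconsRanged(Map.of("uuid/1/2", "near"));
        reporter.onBeaconsRanged(Map.of("uuid/1/2", "immediate")); // already reported: no call
    }
}
```

Options 2 and 3 follow the same shape: the dictionary accumulates min/max proximities (option 2) or last-known proximity (option 3), and the send is triggered by a timer or a threshold check instead of first detection.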

Related

CQRS - out of order messages

Suppose we have 3 different services producing events, each of them publishing to its own event store (ES).
Each of these services consumes the other services' events, because each service has to process other services' events AND create its own projection. Each service runs on multiple instances.
The most straightforward way to do it (for me) was to put "something" in front of each ES which picks up events and publishes them (pub/sub) to queues for every other service.
This is perfect because every service can subscribe to whichever topics it likes, the event publisher does the job, and if a service is unavailable, events are still delivered. This seems to guarantee high scalability and availability.
My problem is the queue. I can't find an easily scalable queue that guarantees ordering of the messages. What it actually guarantees is "slightly out of order" delivery with at-least-once semantics: to be clear, it's AWS SQS.
So, the ordering problems are:
No order guaranteed across events from the same event stream.
No order guaranteed across events from the same ES.
No order guaranteed across events from different ES (different services).
I thought I could solve the first two problems just by keeping track of the "sequence number" of the events coming from the same ES.
This would be done by tracking the last sequence number of each topic from which we are consuming events.
This should be easy both for reacting to events and for building our projection.
Then, when I pop an event from the queue, if eventSequenceNumber > previousAppliedEventSequenceNumber + 1, I re-enqueue it (or make it invisible for a certain time).
But it turns out that this solution destroys performance when events are produced at a high rate (I can use a visibility timeout or other mechanisms; the result should be the same).
This is because when I'm expecting event 10 and have to set event 11 aside for a moment, I must also set aside all events (from that ES) with sequence numbers after 11, until the gap is filled and event 11 is effectively processed.
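To make that stall concrete, here is a small self-contained Java simulation of the re-enqueue approach described above (all names are made up for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Simulation of the re-enqueue approach: events arriving out of order go back
// on the queue until the gap fills, and every event behind the gap is
// recycled too -- this is what kills throughput at high rates.
public class ResequencingDemo {

    record Event(long sequenceNumber, String payload) {}

    public static void main(String[] args) {
        // Event 10 is delayed: 11 and 12 arrive first.
        Deque<Event> queue = new ArrayDeque<>(List.of(
                new Event(11, "b"), new Event(12, "c"), new Event(10, "a")));

        long lastApplied = 9; // previousAppliedEventSequenceNumber
        int recycled = 0;

        while (!queue.isEmpty()) {
            Event e = queue.poll();
            if (e.sequenceNumber() == lastApplied + 1) {
                lastApplied = e.sequenceNumber();   // apply to the projection
                System.out.println("applied " + e.sequenceNumber());
            } else if (e.sequenceNumber() > lastApplied + 1) {
                queue.add(e);                       // gap: re-enqueue / hide
                recycled++;
            }
            // else: duplicate (at-least-once delivery), drop it
        }
        System.out.println("re-enqueued " + recycled + " events for a single gap");
    }
}
```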
Other difficulties were:
where to keep track of each event's sequence number used to build the projection;
how to keep track of each event's sequence number used to build the projection, so that when applying an event, I have a consistent lastSequenceNumber.
What am I missing?
P.S.: for the third problem, consider the following scenario. We have a UserService and a CartService. The CartService has a projection where, for each user, it keeps track of the products in the cart. Each cart projection must also carry the user's name and other info coming from the UserCreated event published by the UserService. If UserCreated comes after ProductAddedToCart, the normal flow requires throwing an exception because the user doesn't exist yet.
What am I missing?
You are missing flow -- consumers pull messages from sources, rather than having sources push the messages to the consumers.
When I wake up, I check my bookmark to find out which of your messages I read last, and then ask you if there have been any since. If there have, I retrieve them from you in order (think "document message"), also writing down the new bookmarks. Then I go back to sleep.
The primary purpose of push notifications is to interrupt the sleep period (thereby reducing latency).
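A minimal Java sketch of that bookmark-driven pull loop, with a toy in-memory source standing in for the event store (all names here are hypothetical):

```java
import java.util.List;

// Pull model: the consumer owns a bookmark per source and asks each source
// for anything newer, in order.
public class PullingConsumer {

    interface EventSource {
        // Returns the events strictly after `afterSequence`, in order.
        List<String> readSince(long afterSequence);
    }

    private final EventSource source;
    private long bookmark; // last sequence number read; persist this in real code

    PullingConsumer(EventSource source, long bookmark) {
        this.source = source;
        this.bookmark = bookmark;
    }

    void wakeUpAndPoll() {
        for (String event : source.readSince(bookmark)) {
            System.out.println("processed: " + event);
            bookmark++; // advance (and persist) the bookmark after each event
        }
        // ...then go back to sleep; a push notification only shortens the sleep.
    }

    public static void main(String[] args) {
        // Toy in-memory source holding three events numbered 1..3.
        EventSource store = after -> {
            List<String> all = List.of("event-1", "event-2", "event-3");
            return all.subList((int) Math.min(after, all.size()), all.size());
        };
        PullingConsumer consumer = new PullingConsumer(store, 0);
        consumer.wakeUpAndPoll(); // processes event-1..event-3
        consumer.wakeUpAndPoll(); // nothing new: no-op
    }
}
```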
With SQS acting as a queue, the idea is that you read all of the enqueued messages at once. If there are no gaps, you can order the collection and then start processing and acking the messages. If there are gaps, you either wait (leaving the messages in the queue) or you go to the event store to fetch copies of the missing messages.
There's no magic -- if the message pipeline is promising "at least once" delivery, then the consumers must take steps to recognize duplicate messages as they arrive.
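Here is a rough Java sketch of that batch handling, assuming each message carries a per-stream sequence number. The names are hypothetical; with the real AWS SDK you would receive up to 10 messages per ReceiveMessage call and ack by deleting each one.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Consume a batch from the queue: sort by sequence number, detect gaps,
// and drop duplicates that at-least-once delivery will inevitably produce.
public class BatchConsumer {

    record Msg(long seq, String body) {}

    private long lastApplied;

    BatchConsumer(long lastApplied) { this.lastApplied = lastApplied; }

    void consume(List<Msg> batch) {
        List<Msg> sorted = new ArrayList<>(batch);
        sorted.sort(Comparator.comparingLong(Msg::seq));

        for (Msg m : sorted) {
            if (m.seq() <= lastApplied) {
                continue; // duplicate (at-least-once delivery): ack and drop
            }
            if (m.seq() != lastApplied + 1) {
                // Gap: leave this and later messages in the queue for the next
                // poll, or fetch the missing events from the event store.
                System.out.println("gap before seq " + m.seq() + "; waiting or fetching from the ES");
                break;
            }
            System.out.println("processing " + m.body());
            lastApplied = m.seq(); // then ack (delete) the message
        }
    }

    public static void main(String[] args) {
        BatchConsumer consumer = new BatchConsumer(9);
        // 10 arrives twice (duplicate), 13 is missing, so 14 must wait.
        consumer.consume(List.of(
                new Msg(12, "c"), new Msg(10, "a"), new Msg(10, "a"),
                new Msg(11, "b"), new Msg(14, "e")));
    }
}
```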
If UserCreated comes after ProductAddedToCart, the normal flow requires throwing an exception because the user doesn't exist yet.
Review Race Conditions Don't Exist, by Udi Dahan: "A microsecond difference in timing shouldn’t make a difference to core business behaviors."
The basic issue is assuming we can get messages IN ORDER...
This is a fallacy in distributed computing...
I suggest you design for no message ordering in your system.
As for your issues, try to use a UTC timestamp, set by the originator, in the message body/header, and work around that data point. Sequence numbers are going to fail unless you have a central deterministic sequence generator (which would be a non-scalable single point of failure).
Using Sagas/State machines is a path that can help make sense of (business) event ordering.
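One way to apply that advice to the P.S. scenario: rather than throwing when ProductAddedToCart arrives first, the projection can tolerate the missing user and enrich the cart once UserCreated shows up. A sketch, with hypothetical names:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The cart projection tolerates ProductAddedToCart arriving before
// UserCreated instead of throwing, so message order no longer matters.
public class CartProjection {

    static class Cart {
        String userName = "(unknown)"; // filled in once UserCreated arrives
        final List<String> products = new ArrayList<>();
    }

    private final Map<String, Cart> carts = new HashMap<>();

    void onProductAddedToCart(String userId, String product) {
        // No exception if the user is unknown yet: create a provisional cart.
        carts.computeIfAbsent(userId, id -> new Cart()).products.add(product);
    }

    void onUserCreated(String userId, String userName) {
        // Enrich whichever cart exists, provisional or not.
        carts.computeIfAbsent(userId, id -> new Cart()).userName = userName;
    }

    public static void main(String[] args) {
        CartProjection projection = new CartProjection();
        projection.onProductAddedToCart("u1", "coffee"); // arrives "too early"
        projection.onUserCreated("u1", "Alice");         // order no longer matters
        Cart cart = projection.carts.get("u1");
        System.out.println(cart.userName + ": " + cart.products);
    }
}
```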

How to handle side effects based on multiple events in a message driven microservice system?

We are currently working in a message-driven microservice environment and some of our messages/events are event-sourced (using Apache Kafka). Now we are struggling with implementing more complex business requirements, where we have to take multiple events into account to create new events and side effects.
In the current situation we are working with devices that can produce errors. We already process them and have a single topic which contains ERROR_OCCURRED and ERROR_RESOLVED events (so they are in order). We also make sure that all messages regarding a specific device always go to the same partition. Both message types share an ID that identifies the specific error incident. We already have a projection that consumes those events and provides an API for our customers, so that they can see all occurred errors and their current state.
Now we have to deal with the following requirement:
Reporting Errors
We need a push system that reports device errors to our external partners, but only after 15 minutes and only if they have not been resolved in that timeframe. Our first approach was to consume all ERROR_RESOLVED events, store their IDs, and have another consumer that handles the ERROR_OCCURRED events in a delayed fashion (e.g. by only consuming the next ERROR_OCCURRED event on the topic once its timestamp is at least 15 minutes old). We would then be able to know whether that particular error has already been resolved and does not need to be reported (since it shares a common ID with the corresponding ERROR_RESOLVED event). Otherwise we send an HTTP request to our external partner and create an ERROR_REPORTED event on a new topic. Is there any better approach for delayed and conditional message processing?
We also have to take the following special use cases into account:
Service restarts: currently we are planning to keep the list of resolved errors in memory, so if a service restarts, that list has to be rebuilt from scratch. We could just replay the ERROR_RESOLVED messages, but that may take some time, and during that time no ERROR_OCCURRED events should be processed, because we might otherwise report errors that were resolved in less than 15 minutes without being aware of it. Are there any good practices regarding replay vs. "normal" processing?
Scaling: we may increase or decrease the number of instances of our service at any time, so the partition assignment may change at runtime. That should not be a problem if we create a consumer group for each service instance when consuming the ERROR_RESOLVED events, so that every instance knows all resolved errors while still only handling the ERROR_OCCURRED events of its assigned partitions (in another consumer group which is shared by all instances). Is there a better approach for handling partition reassignment and internal state?
Thanks in advance!
For side effects, I would record all "side" actions in the event store. In your particular example, when it is time to send a notification, I would issue a SEND_NOTIFICATION command that emits a NOTIFICATION_SENT event. These events would be processed by some worker process that performs the actual HTTP request.
Actually, I would elaborate this even further: since notifications can fail, I would have, say, two events, NOTIFICATION_REQUIRED and NOTIFICATION_SENT, so we can retry failed notifications.
And finally your logic would be: "if the error was not resolved within 15 minutes and a notification was not sent, send a notification (or just discard it if it missed its timeframe)".
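A condensed Java sketch of that flow using the plain Kafka clients (topic and event names from the question; the consumer wiring, offset commits, and the restart/replay handling discussed above are deliberately left out):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.time.Instant;
import java.util.HashSet;
import java.util.Set;

// Condensed sketch of the delayed, conditional flow (requires kafka-clients).
// Records are assumed to be keyed by the shared error-incident ID.
public class ErrorReporter {

    private static final Duration REPORT_DELAY = Duration.ofMinutes(15);

    // Rebuilt by replaying ERROR_RESOLVED on restart (see the question's caveat).
    private final Set<String> resolved = new HashSet<>();
    private final KafkaProducer<String, String> producer;

    ErrorReporter(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    // Fed by a consumer group per instance, so every instance sees all resolutions.
    void onErrorResolved(ConsumerRecord<String, String> record) {
        resolved.add(record.key());
    }

    // Fed by a shared consumer group; the caller hands a record over only once
    // it is at least 15 minutes old (e.g. by pausing the partition until then).
    void onErrorOccurred(ConsumerRecord<String, String> record) {
        Instant occurredAt = Instant.ofEpochMilli(record.timestamp());
        if (Instant.now().isBefore(occurredAt.plus(REPORT_DELAY))) {
            return; // too early: do not commit the offset, revisit later
        }
        if (resolved.remove(record.key())) {
            return; // resolved within the window: nothing to report
        }
        // Record the intent first, so a separate worker consuming
        // NOTIFICATION_REQUIRED can perform (and retry) the HTTP call.
        producer.send(new ProducerRecord<>("NOTIFICATION_REQUIRED", record.key(), record.value()));
    }
}
```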

What is the minimum interval allowed for calling the List Payments API

One of our cafes has a coffee station where the barista needs to see orders (payments) as they come in from the registers. We are not able to use the Webhooks feature because it does not allow us to filter based on location and register, and our volume is too high. So I am developing an iOS app which will periodically call the Payments API for that specific location to get the latest transactions. It will use the begin_time parameter to get only the transactions since the last query. The app will be deployed on only one device, and we would like to make the call at intervals of 5-10 seconds. It will probably pull down 1-3 transactions per call. Is there a minimum interval that is recommended or enforced?
Thanks,
Mike
The only minimum interval you would have to worry about would be running into rate limits, but at that rate, you won't have any problems. You can read more about error codes (including rate limiting) in Square's official documentation.
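For illustration, a minimal Java polling loop in the spirit of the question; the v1 endpoint shape, parameters, and auth header here are assumptions to verify against Square's documentation:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.time.Instant;

// Polls the List Payments endpoint every 5 seconds, using begin_time so each
// call returns only the transactions since the previous one. The URL shape
// and header are assumptions to check against Square's docs.
public class PaymentsPoller {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String locationId = "YOUR_LOCATION_ID";   // placeholder
        String accessToken = "YOUR_ACCESS_TOKEN"; // placeholder
        Instant begin = Instant.now();

        while (true) {
            Instant end = Instant.now();
            String url = "https://connect.squareup.com/v1/" + locationId + "/payments"
                    + "?begin_time=" + URLEncoder.encode(begin.toString(), StandardCharsets.UTF_8)
                    + "&end_time=" + URLEncoder.encode(end.toString(), StandardCharsets.UTF_8);

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer " + accessToken)
                    .GET()
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 429) {
                Thread.sleep(30_000); // rate limited: back off before retrying
                continue;
            }
            System.out.println(response.body()); // hand the payments to the barista UI
            begin = end;                         // next poll starts where this one ended
            Thread.sleep(5_000);                 // the 5-10 second interval from the question
        }
    }
}
```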

Do we need complex sending strategies for GCM / FCM?

Currently I'm working on a SaaS with support for multiple tenants that can enable push notifications for their user bases.
I'm thinking of using a message queue to store all pushes and send them with a separate service. That new service would need to read from the queue and send the push notifications.
My question now is: do I need to come up with a complex sending strategy? I know that GCM has a limit of 1000 devices per request, so this needs to be considered. I also can't wait for x pushes to arrive, as this might delay a previous push from being sent. My next thought was to create a global array and fill it with pushes from the queue. A loop would then drain that array every, say, 1 second and send the pushes. This way pushes would get sent for sure and I wouldn't exceed the 1000-device limit.
So... although this might work, I'm not sure an infinite loop is the best way to go. I'm wondering whether GCM/FCM even has a request limit. If not, I wouldn't need to aggregate the pushes in the first place and could ditch the loop, simply firing a request for each push that gets pulled from the queue.
Any enlightenment on this topic or improvement of my prototypical algorithm would be great!
Do I need to come up with a complex sending strategy?
Not really. GCM/FCM is simple enough. Just send the message to the GCM/FCM server; it will queue the message on its own and then (as per its behavior) send it as soon as possible.
I know that with GCM has a limit of 1000 devices per request, so this needs to be considered.
I think you're misreading the 1000-devices-per-request limit. It refers to the number of registration tokens you put in the list when using the registration_ids parameter:
This parameter specifies a list of devices (registration tokens, or IDs) receiving a multicast message. It must contain at least 1 and at most 1000 registration tokens.
This means you can send the same message payload to at most 1000 devices in a single request (you can then do batched requests, 1000 per request, if you need to reach more).
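A small Java sketch of that batch split; sendToGcm is a hypothetical stand-in for the actual HTTP request to the GCM/FCM send endpoint:

```java
import java.util.ArrayList;
import java.util.List;

// Splits a recipient list into chunks of at most 1000 registration tokens,
// one GCM/FCM request per chunk.
public class MulticastSender {

    private static final int MAX_TOKENS_PER_REQUEST = 1000;

    static void sendMulticast(List<String> registrationIds, String payload) {
        for (int from = 0; from < registrationIds.size(); from += MAX_TOKENS_PER_REQUEST) {
            int to = Math.min(from + MAX_TOKENS_PER_REQUEST, registrationIds.size());
            sendToGcm(registrationIds.subList(from, to), payload);
        }
    }

    static void sendToGcm(List<String> chunk, String payload) {
        // Real code: POST to the GCM/FCM send endpoint with a body like
        // {"registration_ids": chunk, "data": payload}.
        System.out.println("sending to " + chunk.size() + " devices");
    }

    public static void main(String[] args) {
        List<String> tokens = new ArrayList<>();
        for (int i = 0; i < 2500; i++) tokens.add("token-" + i);
        sendMulticast(tokens, "{\"title\":\"hello\"}"); // 1000 + 1000 + 500
    }
}
```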
I'm wondering if GCM / FCM even has a request limit?
AFAIK, there is no such limit. Ditch the loop. Whenever you successfully send a message to the GCM/FCM server, it will enqueue and keep the message until it is able to deliver it.

How to make my SMS app the highest-priority broadcast receiver

First of all, I am sorry that my English is not good enough!
My problem is that I am writing an SMS-blocking application. I want to receive SMS messages in my app and then abort the broadcast so that the default SMS app can't receive them, so I gave my app the highest priority (1000), but my app still receives the broadcast after Android's default SMS app.
I printed out, in order, all the SMS broadcast deliveries on my phone when it receives an SMS, and I noticed that the system SMS app always receives the SMS broadcast first and also has the highest priority.
So how can I make my SMS app receive the SMS broadcast before the system's default SMS app?
I really need your help!
Thanks for reading!
As documented by Google, the maximum priority for a broadcast receiver is less than 1000; literally, 999.
But you can set it to a maximum level of 2147483647. Since other apps on Google Play use a higher priority (more than 999) than your broadcast receiver, your app may not receive the SMS first; with this maximum level, your app will always receive the SMS first.
This way you can get the broadcast before the default messaging app.
See this answer!
Edit
I was revisiting my old answers on Stack Overflow and this answer looked sketchy. The following is an excerpt from the official documentation of IntentFilter priority.
The priority that should be given to the parent component with regard to handling intents of the type described by the filter. This attribute has meaning for both activities and broadcast receivers:
It provides information about how able an activity is to respond to an intent that matches the filter, relative to other activities that could also respond to the intent. When an intent could be handled by multiple activities with different priorities, Android will consider only those with higher priority values as potential targets for the intent.
It controls the order in which broadcast receivers are executed to receive broadcast messages. Those with higher priority values are called before those with lower values. (The order applies only to synchronous messages; it's ignored for asynchronous messages.)
The official lower and upper limits are still -1000 and 1000, respectively. Broadcast receivers with a higher priority can abort ordered broadcasts, preventing receivers with a lower priority from receiving them.
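For reference, a Java sketch of the classic ordered-broadcast pattern this answer describes, with the manifest declaration shown in a comment. Note that on Android 4.4+ the SMS_RECEIVED broadcast can no longer be aborted; only the default SMS app can consume incoming messages.

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Classic ordered-broadcast pattern. Declared in AndroidManifest.xml
// (priority capped at 999 per the documentation excerpt above), along with
// the android.permission.RECEIVE_SMS permission:
//
//   <receiver android:name=".SmsBlockReceiver">
//       <intent-filter android:priority="999">
//           <action android:name="android.provider.Telephony.SMS_RECEIVED" />
//       </intent-filter>
//   </receiver>
//
// On Android 4.4+ this broadcast is no longer abortable by non-default
// SMS apps, so blocking this way only works on older versions.
public class SmsBlockReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if ("android.provider.Telephony.SMS_RECEIVED".equals(intent.getAction())) {
            // Inspect the "pdus" extras here to decide whether to block...
            abortBroadcast(); // stops lower-priority receivers (pre-KitKat only)
        }
    }
}
```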
