APNS - what is the cost for sending out a message? - apple-push-notifications

Do the carriers (say AT&T) charge a fee for sending messages through APNS? If so, where can I find the pricing details?

As far as I'm aware there are no costs to the sender. The only possible cost would be on the customer's side, if they do not have an all-inclusive data plan or have gone over their data limit. But the notifications are tiny in terms of data, so that really shouldn't be an issue.
So basically push notifications are sent over the data connection, not as SMS through the carrier's messaging network, which is why carriers don't charge per message.

Related

Server load when pushing the same small payload to a very large number of clients

I am wondering what would be the best strategy to send the same small (<100 B) payload to a large number of clients without breaking the bank on server resources.
I am trying to create an API that synchronizes multiple media players to one source (for watch parties) by pushing data asynchronously over HTTP. I don't need to authenticate clients and the data is not sensitive. The payload will be the same for everyone and very small, ~20-40 Unicode characters. I want the payload to be able to update every 2-3 seconds, but I predict a median update interval of 30-60 s. My constraint is that I want to be able to serve up to a million users at the same time and make it free to use.
I am not sure how to balance server cost against the performance needed for potentially frequent updates to clients. Are there any resources that would help me understand this cost/performance trade-off in my use case? What is the best way to approach this problem from a technical standpoint?
Are websockets out of the question, since streaming data and keeping sessions open is costly? Are AJAX polls the most lightweight approach? How does the fact that the payload is the same for everyone influence possible strategies for lightening the load? Would the lack of auth greatly reduce the load? Is P2P out of the question?
The best approach would be to use websockets.
https://socket.io/ would be a very good starting point,
or https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API
Another approach is AJAX (as you mention), but every request would carry the full set of HTTP headers with it, which for a payload this small can more than double the bytes on the wire.
So binary data over a websocket seems the better solution.
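To make the overhead argument concrete, here is a rough back-of-the-envelope comparison: a minimal HTTP polling request versus a server-to-client WebSocket data frame carrying the same ~40-byte payload. The URL and headers are illustrative assumptions, not from the question; per RFC 6455, unmasked server-to-client frames with payloads under 126 bytes need only a 2-byte header.

```python
# Per-update overhead: minimal HTTP polling request vs. WebSocket frame
# for the same 40-byte payload. Path and headers are hypothetical.

payload = b"x" * 40  # stand-in for the ~40-char update

# A minimal HTTP/1.1 GET a polling client would send for every update check.
http_request = (
    b"GET /watch-party/state HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Accept: text/plain\r\n"
    b"Connection: keep-alive\r\n\r\n"
)

# A WebSocket data frame (RFC 6455): FIN + binary opcode byte,
# then a 7-bit payload length (valid because len(payload) < 126).
ws_frame = bytes([0x82, len(payload)]) + payload

print(len(http_request))  # request headers alone, before any response bytes
print(len(ws_frame))      # 42 bytes total: 2-byte header + 40-byte payload
```

So even this toy polling request spends more bytes on headers than the websocket spends on an entire delivered update, before counting the HTTP response headers coming back.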

Advice on pubsub topic division based on geohashes for ably websocket connection service

My question concerns the following use case:
Use case actors
User A: The user who sets a broadcast region and views stream with live posts.
User B: The first user who sends a broadcast message from within the broadcast region set by user A.
User C: The second user who sends a broadcast message from within the broadcast region set by user A.
Use case description
User A selects a broadcast region within whose boundaries (radius) they want to receive live broadcast messages.
User A opens the livefeed and requests an initial set of livefeed items.
User B broadcasts a message from within the broadcast region of user A while user A’s livefeed is still open.
A label with 1 new livefeed item appears at the top of User A’s livefeed while it is open.
As user C publishes another livefeed post from within the selected broadcast region from user A, the label counter increments.
User A receives a notification similar to this example of Facebook:
The solution I thought to apply (and which I think Pubnub uses), is to create a topic per geohash.
In my case that would mean that for every user who broadcasted a message, it needs to be published to the geohash-topic, and clients (app / website users) would consume the geohash-topic through a websocket if it fell within the range of the defined area (radius). Ably seems to provide this kind of scalable service using web sockets.
I guess, simplified, it would be something like this:
So this means that a geohash needs to be extracted from the current location from which the broadcast message is sent. This geohash should be granular enough that the receiving user can set a more or less accurate broadcast region. (I.e. the geohash should have enough accuracy if we want to allow users to define a broadcast region within which to receive live messages, which means one should expect quite a large number of topics at scale.)
Option 2 would be to create topics for a geohash that has a less specific granularity (covering a larger area), and let clients handle the accuracy based on latlng values that are sent along with the message.
The client would then decide whether or not to drop messages. However, this means more messages are sent (more overhead), and a higher cost.
I don't have experience with this kind of architecture, and question the viability / scalability of this approach.
Could you think of an alternate solution to this question to achieve the desired result or provide more insight on how to solve this kind of problem overall? (I also considered using regular req-res flow, but this means spamming the server, which also doesn't seem like a very good solution).
I actually checked.
Given a region of 161.4 km² (like the Brussels region), geohash cell sizes by string length are as follows:
1 ≤ 5,000km × 5,000km
2 ≤ 1,250km × 625km
3 ≤ 156km × 156km
4 ≤ 39.1km × 19.5km
5 ≤ 4.89km × 4.89km
6 ≤ 1.22km × 0.61km
7 ≤ 153m × 153m
8 ≤ 38.2m × 19.1m
9 ≤ 4.77m × 4.77m
10 ≤ 1.19m × 0.596m
11 ≤ 149mm × 149mm
12 ≤ 37.2mm × 18.6mm
Given that we would allow users an inaccuracy of up to 153 m (on the region to which they subscribe to receive local broadcast messages), that precision alone would already require far too many topics just to cover the Brussels region (161.4 km² / (153 m)² ≈ 6,900 cells).
So I'm still a bit stuck at this level currently.
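The cell sizes above follow from standard geohash encoding, which can be sketched in a few lines; the Brussels coordinates below are illustrative, and the topic-count estimate is just the region area divided by the precision-7 cell area.

```python
# Minimal geohash encoder (standard base32 alphabet), enough to derive
# a channel/topic name from a broadcast message's location.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=7):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, even = [], True           # geohash interleaves bits, longitude first
    while len(bits) < precision * 5:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        even = not even
    # Pack each group of 5 bits into one base32 character.
    return "".join(BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
                   for i in range(0, precision * 5, 5))

# Topic name for a broadcast sent from central Brussels (illustrative point):
print(geohash_encode(50.8503, 4.3517, 7))

# Rough topic count to cover Brussels at precision 7 (153 m x 153 m cells):
print(round(161.4 / (0.153 * 0.153)))   # ≈ 6895 cells
```

Every message published from inside the same 153 m cell maps to the same 7-character string, which is what makes the "topic per geohash" scheme work; the cost is exactly the cell count computed above, per covered region.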
1. PubNub
PubNub is currently the only service that offers an out-of-the-box geohash pub-sub solution over websockets, but their pricing is extremely high (500 connected devices cost about $49; 20k devices, $799). UPDATE: PubNub has updated its pricing, now with unlimited devices. Website updates coming soon.
PubNub is reworking its pricing model because some of its customers were paying a lot for unexpected traffic spikes.
However, it will not be a viable solution for a generic broadcasting messaging app that is meant to be open for everybody, and for which traffic is therefore very highly unpredictable.
This is a pity, since this service would have been the perfect solution for us otherwise.
2. Ably
Ably offers a pubsub system to stream data to clients over websockets for custom channels. Channels are created dynamically when a client attaches itself in order to either publish or subscribe to that channel.
The main problem here is that:
If we want high geohash accuracy, we need a high number of channels and hence we have to pay more;
If we go with low geohash accuracy, there will be a lot of redundant messaging:
Let's say that we take a channel that is represented by a geohash of 4 characters, spanning a geographical area of 39.1 x 19.5 km.
Any post that gets sent to that channel, would be multiplexed to everybody within that region who is currently listening.
However, let's say that our app allows a maximum radius of 10 km, and half of the connected users have their setting at a 1 km radius.
This means that all posts outside a user's 1 km radius will be multiplexed to them unnecessarily, and will simply be dropped without any further use.
We should also take into account the scalability of this approach. For every geohash that either a producer or a consumer needs, another channel will be created.
It is definitely more expensive to have an app that requires topics based on geohashes worldwide than an app that requires only theme-based topics.
That is, with worldwide adoption the number of topics increases dramatically, and so does the price.
Another consideration is that our app requires an additional number of channels:
By geohash and group: Our app allows the possibility to create geolocation based groups (which would be the equivalent of Twitter like #hashtags).
By place
By followed users (premium feature)
Despite this, there are a few optimistic considerations for this approach:
Streaming is only required when the newsfeed is active:
when the user has a browser window open with our website +
when the user is on a mobile device, and actively has the related feed open
Further optimisation can be done, e.g. only start streaming from 10 to 20 seconds after a refresh of the feed
Streaming by place / followed users may have high traffic depending on current activity, but many place channels will be idle as well
A very important note in this regard is how Ably bills its consumers, which can be used to our full advantage:
A channel is opened when any of the following happens:
A message is published on the channel via REST.
A realtime client attaches to the channel. The channel remains active for the entire time the client is attached, so if you connect to Ably, attach to a channel, and publish a message but never detach the channel, the channel will remain active for as long as that connection remains open.
A channel that is open will automatically close when all of the following conditions apply:
There are no more realtime clients attached to the channel.
At least two minutes have passed since the last message was published. We keep channels alive for two minutes to ensure that we can provide continuity on the channel as part of our connection state recovery.
As an example, if you have 10,000 users, and at your busiest time of the month there is a single spike where 500 customers establish a realtime connection to Ably and each attach to one unique channel and one global shared channel, the peak number of channels would be the sum of the 500 unique channels per client and the one global shared channel, i.e. 501 peak channels. If throughout the month each of those 10,000 users connects and attaches to their own unique channel, but not necessarily at the same time, then this does not affect your peak channel count, as peak channels is the concurrent number of channels open at any point of time during that month.
Optimistic conclusion
The most important conclusion is that we should consider that this feature may not be as crucial as we believe it is for a first version of the app.
Although Twitter, Facebook, etc. offer this feature of receiving live updates (and users have grown to expect it), an initial beta of our app on a limited scale can work without it, i.e. the user has to refresh in order to receive new updates.
During a first launch of the app, statistics can be gathered to gain more insight into detailed user behaviour. This will enable us to make more solid infrastructural and financial decisions based on factual data.
Putting aside the question of Ably, Pubnub and a DIY solution, the core of the question is this:
Where is message filtering taking place?
There are three possible solutions:
The Pub/Sub service.
The Server (WebSocket connection handler).
Client side (the client's device).
Since this is obviously a mobile-oriented approach, client-side message filtering is extremely wasteful, as it increases the client's data consumption even though much of the data may be irrelevant.
Client-side filtering will also increase battery consumption and will likely result in lower acceptance rates by clients.
This leaves pub/sub filtering (channel names / pattern matching) and server-side filtering.
Pub/Sub channel name filtering
A single pub/sub service serves a number of servers (if not all of them), making it a very expensive resource (relative to the resources we have at hand).
Using channel names to filter messages would be ideal - as long as the filtering is cheap (using exact matches with channel name hash mapping).
However, pattern matching (subscribing to channels with inexact names, such as "users.*") is very expensive compared to exact matching.
This means that pub/sub channel-name filtering can't be used to filter all the messages without overloading the pub/sub system.
Server side filtering
Since a server accepts WebSocket connections and bridges between the WebSocket and the pub/sub service, it's in an ideal position to filter the messages.
However, we don't want the server to process all the messages for all the clients for each connection, as this is an extreme duplication of effort.
Hybrid solution
A classic solution would divide the earth into manageable sections (1 sq. km per section will require 510.1 million unique channel names for full coverage... but I would suggest that the 70% ocean space should be neglected).
Busy sections might be subdivided (NYC might require a section per 250 sq meters rather than 1 sq kilometer).
This allows publishers to publish to exact channel names and subscribers to subscribe to exact channel names.
Publishers might need to publish to more than one channel and subscribers might need to subscribe to more than one channel, depending on their exact location and the grid's borders.
This filtering scheme will filter much, but not all.
The server node will need to look into the message, inspect its exact geo-location, and filter messages before deciding whether they should be sent along the WebSocket connection to the client.
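That last exact-location step is just a great-circle distance test. A minimal sketch of what the server node would run per message, assuming the pub/sub grid has already done the coarse channel-level cut (the function names are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def should_forward(post_lat, post_lon, sub_lat, sub_lon, radius_km):
    # Final exact check before pushing a message down a subscriber's
    # WebSocket: drop posts outside the subscriber's chosen radius.
    return haversine_km(post_lat, post_lon, sub_lat, sub_lon) <= radius_km
```

The server only runs this for messages that already passed the geohash-channel filter, which is the whole point of the hybrid scheme: the cheap grid match prunes the obvious misses, the exact distance check handles the grid-border cases.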
Why the Hybrid Solution?
This allows the system to scale with relative ease.
Since server nodes are (by design) cheaper than the pub/sub service, they could be used to handle the exact location filtering (the heavy work).
At the same time, the strength of the pub/sub system can be used to minimize the server's workload and filter the obvious mis-matches.
Pubnub vs. Ably?
I don't know. I didn't use either of them. I worked with Redis and implemented my own pub/sub solution.
I assume they are both great and it's really up to your needs.
Personally I prefer the DIY approach when it comes to customized or complex situations. IMHO, this seems like it would fall into the DIY category if I were to implement it.

phoenix channels and their relation to sockets

I need some advice about elixir/phoenix channels. I have an application that is related to venue changes and in order to reduce the amount of data sent to each client I only want each client to subscribe to the venues it cares about.
With this in mind I was thinking of going down the route of having a channel for "VenueChanges/*" and having each client subscribe to the channel several times with each venue id it cares about i.e. "VenueChanges/1", "VenueChanges/2" etc.
The venues that the client care about will change frequently which will mean a lot of joining and leaving channels.
My question is, what's the overhead of having a client join a channel lots of times. Am I correct in assuming that there would still only be one socket open and not a new socket for each of the channels joined?
Also any advice on managing the constant joining and leaving of channels from the client? Any other advice in general? If this is a bad idea what are better alternatives?
With respect to the socket question, you are correct in that you will still only have one socket per client (multiple channels are multiplexed over that one socket).
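The multiplexing idea itself is simple to picture. Below is a language-agnostic sketch (in Python for illustration; Phoenix itself is Elixir and its real wire format differs): every frame on the single socket carries a topic tag, and a local dispatcher routes it to whichever handlers have joined that topic.

```python
# Sketch of channel multiplexing: many logical topics share one transport
# connection; each inbound frame carries a topic tag and is routed to the
# handlers that have "joined" that topic. Illustrative only.

class MultiplexedSocket:
    def __init__(self):
        self.handlers = {}                      # topic -> list of callbacks

    def join(self, topic, callback):
        self.handlers.setdefault(topic, []).append(callback)

    def leave(self, topic):
        self.handlers.pop(topic, None)          # cheap: just drop the entry

    def on_frame(self, frame):
        # One socket, many channels: dispatch purely on the topic tag.
        for cb in self.handlers.get(frame["topic"], []):
            cb(frame["payload"])

sock = MultiplexedSocket()
received = []
sock.join("VenueChanges/1", received.append)
sock.join("VenueChanges/2", received.append)
sock.on_frame({"topic": "VenueChanges/1", "payload": "venue 1 updated"})
sock.on_frame({"topic": "VenueChanges/9", "payload": "ignored"})
print(received)   # ['venue 1 updated']
```

This is why joining and leaving channels is comparatively cheap: it mutates routing state on an already-open connection rather than tearing down and re-establishing sockets.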
While not directly answering your consistent join/leave question, Chris McCord's post on Phoenix Channels vs Rails Action Cable has some really good data on performance best summarized by:
With Phoenix, we've shown that channels performance remains consistent as notification demand increases, which is essential for handling traffic spikes and avoiding overload
That said, your server hardware and deployment distribution strategy would also play a significant role in answering that concern.
Lastly, assuming you meant joining/leaving channel topics (or "rooms" as they're termed in some places), as seen in Chris's test with 55,000 connections:
It's important to note that Phoenix maintains the same responsiveness when broadcasting for both the 50 and 200 users per room tests.

adaptive http request throttling algorithm

I'm currently working on a Spray-based web application backend. If you don't know what Spray is, never mind; just treat it as a backend HTTP request handling framework. Unfortunately, Spray has no built-in request throttling support, so I'd like to write my own.
I don't want to use a token bucket or similar algorithm, because there the server's capacity is pre-configured, and a conservative estimate may fall far below the server's real capacity.
What I'd like instead is for the server to learn its own capacity from request feedback: mainly four statistics, requests per second, responses per second, requests handled per second, and average response time, though not limited to them.
It's adaptive throttling, so the system is dynamically aware of its actual request handling capacity.
Can anyone suggest existing algorithms or related papers?
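One family of algorithms that fits this description is AIMD (additive-increase / multiplicative-decrease), the feedback rule behind TCP congestion control and libraries such as Netflix's concurrency-limits. A minimal sketch, where the thresholds and factors are illustrative assumptions to be tuned, not recommended values:

```python
# Adaptive concurrency limit via AIMD: grow the limit while responses are
# healthy, shrink it multiplicatively when latency degrades or requests drop.
# All constants below are illustrative assumptions.

class AdaptiveLimiter:
    def __init__(self, initial_limit=10, min_limit=1, max_limit=1000,
                 latency_threshold_ms=200.0):
        self.limit = initial_limit
        self.min_limit = min_limit
        self.max_limit = max_limit
        self.latency_threshold_ms = latency_threshold_ms
        self.in_flight = 0

    def try_acquire(self):
        """Admit the request only if we are under the current limit."""
        if self.in_flight >= self.limit:
            return False            # caller should reject, e.g. with a 503
        self.in_flight += 1
        return True

    def on_complete(self, latency_ms, dropped=False):
        """Feed back the observed response time after each request."""
        self.in_flight -= 1
        if dropped or latency_ms > self.latency_threshold_ms:
            # Congestion signal: back off multiplicatively.
            self.limit = max(self.min_limit, int(self.limit * 0.9))
        else:
            # Healthy response: probe for more capacity, one slot at a time.
            self.limit = min(self.max_limit, self.limit + 1)

limiter = AdaptiveLimiter()
for _ in range(50):                  # 50 fast requests -> the limit grows
    if limiter.try_acquire():
        limiter.on_complete(latency_ms=20.0)
print(limiter.limit)                 # 60: 10 initial + 50 increments
```

The same feedback loop can be driven by any of the four statistics mentioned in the question; latency is just the simplest single signal. For the literature, TCP congestion control and gradient-based concurrency-limit algorithms are the usual search terms.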

Bulk SMS service provider Business

I am planning to start a bulk sms service provider business. However I have the following doubts:
Which SMS/MMS gateway software is good/best for high-volume traffic (OzekiNG, NowSMS, etc.)?
Do I need to set up connections with all the major mobile operators, or will a single major operator also work?
For 2-way SMS, how can I charge money from the customers? (E.g. usually the operator charges for the SMS, but I need to get something too.)
How much will the initial cost be?
Consider kannel? http://www.kannel.org/
It depends. To guarantee reliability, you should consider connecting directly to the mobile operators. But there are a lot of MNOs in the world, hence there are aggregators you can connect to instead, such as the ones you mentioned in your first question.
You need something called reverse charging. Read http://en.wikipedia.org/wiki/Reverse_SMS_billing for more information.
The cost will always depend on which operator you integrate with, so remember to ask them to quote their initial costs; you can also reference each operator's pricing mechanism.
You can read the following guide, written especially for service providers, on how one can start a VoIP-based voice SMS or fax broadcasting business. Here are the answers to your questions, respectively:
For bulk SMS traffic you should go with the SMPP protocol; fortunately Kannel has good SMPP support and is freely available.
Multiple providers are better, for failover and for comparatively lower cost through least-cost routing (LCR).
Obviously you will charge your clients for outgoing SMS. For inbound, you can earn from DID numbers (users pay rent for their inbound number); further, if you have a large number of active DIDs / phone numbers, you can also earn money from providers (who send traffic to your DIDs) against termination and SS7 dips. But that is a very advanced topic and only feasible for big players.
For the initial cost calculation you have to consider multiple things, such as infrastructure and server setup, software licensing fees, and third-party costs for DIDs and termination services.
Hope this helps
