Plivo Bulk SMS destination limits?

I'm testing Plivo for sending bulk SMS. I'm using the .NET REST API to send the messages.
Plivo's documentation for bulk SMS indicates that you can just concatenate numbers with a delimiter. This works fine. Does anyone know how many numbers can be included or can you tell me how many you have successfully sent in one API request?
// auth ID and auth token from the Plivo dashboard
var plivo = new RestAPI("xxxxxxxxxxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
var sendData = new Dictionary<string, string>
{
    { "src", "0000000000" },
    // multiple destination numbers are delimited with '<'
    { "dst", "00000000000<00000000000<00000000000<00000000000<00000000000<HOW MANY MORE???" },
    { "text", "test sms from plivo" }
};
IRestResponse<MessageResponse> resp = plivo.send_message(sendData);
I couldn't find this information.

According to Plivo support:
"There is no limit to the number of destination numbers you can add in the "dst"parameter of the outbound message API.
However, the outgoing rate limit is 5 messages per second per account."
Regardless of this response, I'm sure there is still some theoretical limit. Based on the other information I've gathered, it's better to split the API calls up anyway across multiple Plivo numbers (sender IDs). If you have a suitable number of long codes, you shouldn't have to worry about this limit.
Related information:
Long code SMS restrictions
How do I send 6000+ messages using a long code?
In other words, use multiple long codes and split up your destination numbers accordingly to send out in parallel.
Edit: I received another response that seems more accurate:
"You can add up to 250 numbers per API request. That should be the ideal limit on the destination parameter."

For anyone wondering what the actual bulk-messaging limit is nowadays (2021), it is 1,000 numbers per API call.
Thankfully this is now clearly stated on Plivo's API Docs page in the Bulk Messaging section:
The Message API supports up to 1,000 unique destination numbers.
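If you need to reach more destinations than a single call allows, the simple approach is to chunk the recipient list client-side and issue one request per chunk, rotating sender IDs if you have several, as suggested above. Here is a rough Node.js sketch of the idea; the BATCH_SIZE value, the sendBatch callback, and the rotation scheme are illustrative placeholders, not part of Plivo's SDK:

    // Split a list of destination numbers into batches and send one request per batch.
    // BATCH_SIZE and sendBatch() are placeholders; use whichever limit applies to your
    // account (250 or 1,000 per the answers above).
    var BATCH_SIZE = 1000;

    function chunk(list, size) {
        var batches = [];
        for (var i = 0; i < list.length; i += size) {
            batches.push(list.slice(i, i + size));
        }
        return batches;
    }

    function sendBulk(allNumbers, senderIds, sendBatch) {
        var batches = chunk(allNumbers, BATCH_SIZE);
        batches.forEach(function (batch, i) {
            // '<' is the delimiter Plivo expects for multiple destinations in "dst"
            var dst = batch.join('<');
            // rotate across the sender IDs you own to spread the load
            var src = senderIds[i % senderIds.length];
            sendBatch({ src: src, dst: dst, text: 'test sms from plivo' });
        });
    }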

Related

Websocket best practice for group chat: one websocket for all groups or one websocket per group?

I have to implement a chat application using websockets. Users chat in groups; there can be thousands of groups, and a user can be in multiple groups. I'm thinking about two solutions:
[1] For each group chat, I create a websocket endpoint (using camel-atmosphere-websocket). Users in the same group subscribe to that group's endpoint and send/receive messages over it. This means there can be thousands of websocket endpoints, and the client side (say an iPhone) has to subscribe to multiple websocket endpoints. Is this good practice?
[2] I create just one websocket endpoint for all groups. The client side subscribes to this single endpoint, and I manage message distribution myself on the server: get the group members, pick the websocket of each member from the list of connected websockets, then write the message to each member via its websocket.
Which solution is better in terms of performance and ease of implementation on both client and server?
Thanks.
EDIT 2015-10-06
I chose the second approach and did a test with the Jetty websocket client; I use camel-atmosphere-websocket on the server side. On the client side, I create websocket connections to the server in threads. There was a limitation with Jetty in that I could only create around 160 websocket connections (meaning around 160 threads). The result is that I see almost no difference as the number of clients increases from 1 to 160.
Yes, 160 is not a big number, but I will do more testing when I actually see a performance problem; for now, I'm OK with the second approach.
If you are interested in the test code, here it is:
http://www.eclipse.org/jetty/documentation/current/jetty-websocket-client-api.html#d0e22545
I think the second approach will be better for performance. I am using the same approach for my application, but it is still in the testing phase, so I can't comment on real-world performance yet. Right now it's running for 10-15 groups and working fine. My app has a similar requirement to yours, in which users chat based on groups. I handle group creation on the server side using Node.js. Here is the code to create a group; it is specific to my app's conditions (it gets homeState and userId from the front end and creates groups based on homeState), so I'm only pasting it here for reference and it won't work for you as-is. To improve performance you can use clustering.
// Holds one user's connection for a given group (keyed by homeState)
this.ConnectionObject = function(homeState, userId, ws) {
    this.homeState = homeState;
    this.userId = userId;
    this.wsConnection = ws;
},

// Registers a new connection under its homeState group
this.createConnectionEntry = function(homeState, userId, ws) {
    var connObject = new ws.thisRefer.ConnectionObject(homeState, userId, ws);
    var connectionEntryList = null;
    if (ws.thisRefer.connectionMap[homeState] != undefined) {
        connectionEntryList = ws.thisRefer.connectionMap[homeState];
    } else {
        connectionEntryList = new Array();
    }
    connectionEntryList.push(connObject);
    console.log(connectionEntryList.length);
    ws.thisRefer.connectionMap[homeState] = connectionEntryList;
    ws.thisRefer.connecteduserIdMap[userId] = "";
}
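For the distribution side of approach #2, here is a rough sketch (my illustration, not part of the answer above) of how a message could be fanned out to every member of a group over the single endpoint. It assumes a connection map shaped like the one built in createConnectionEntry above, keyed by homeState:

    // Fan a message out to every connection registered under a group.
    // connectionMap is the structure built in createConnectionEntry above;
    // the readyState check skips sockets that have already closed.
    function broadcastToGroup(connectionMap, homeState, message) {
        var entries = connectionMap[homeState] || [];
        var payload = JSON.stringify({ group: homeState, message: message });
        entries.forEach(function (entry) {
            var socket = entry.wsConnection;
            if (socket && socket.readyState === 1) { // 1 === OPEN
                socket.send(payload);
            }
        });
    }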
Browsers implement a restriction on the number of websockets that can be opened by the same tab, so you can't rely on being able to create as many connections as you like. Go for solution #2.

Pubnub chat application with storage

I'm looking to develop a chat application with PubNub where I want to make sure all the chat messages that are sent are stored in a database, and I also want to be able to send messages in the chat.
I found out that I can use Parse with PubNub to provide storage options, but I'm not sure how to set the two up so that the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options available to use with PubNub instead of Parse?
Sutha,
What you are seeking is not a trivial solution unless you are talking about a limited number of end users. So I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen (subscribe) to every active chat channel and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you realize how tricky this becomes to scale, with several server instances listening to channels in a non-overlapping manner, or with overlap plus a server-side queue and message de-duping).
That said, yes, there are PubNub customers that have implemented such a solution - Parse not being the key to making this happen, by the way.
You have three basic options for implementing this:
Implement a solution that allows many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up for that, this is probably not where you want to go.
There is a way to monitor all channels that become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels from which your server would pull history (enable Storage & Playback on your keys) in an on-demand (not completely realtime) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (an endpoint that you implement on your server, your Parse server in this case):
channel active: record the "start chat" timetoken in your Parse db
channel inactive: record the "end chat" timetoken in your Parse db
the inactive event is the kickoff for a process that uses the start/end timetokens you recorded for that channel to pull its history from PubNub: pubnub.history({channel: channelName, start: startTT, end: endTT})
you will need to iterate on this history call until you receive fewer than 100 messages (100 is the maximum number of messages you can retrieve at a time); see the sketch below
as you retrieve these messages, you save them to your Parse db
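Here is a rough sketch of that iteration (my illustration, based on the history() call shown above and assuming the v3 JavaScript SDK, where the callback receives [messages, firstTimetoken, lastTimetoken]; confirm the response shape against the SDK version you use):

    // Pull a channel's history between the recorded start/end timetokens,
    // 100 messages per call, until fewer than 100 come back.
    function fetchChatHistory(pubnub, channelName, startTT, endTT, onMessages) {
        function page(cursorTT) {
            pubnub.history({
                channel: channelName,
                start: cursorTT,
                end: endTT,
                count: 100,
                callback: function (response) {
                    var messages = response[0];
                    var lastTT = response[2];
                    onMessages(messages); // e.g. save this batch to your Parse db
                    if (messages.length === 100) {
                        page(lastTT); // more to fetch: continue from the last timetoken
                    }
                },
                error: function (err) {
                    console.log(JSON.stringify(err));
                }
            });
        }
        page(startTT);
    }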
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
Finally, you could just save each message to your Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe they have some sort of store-locally-then-sync-to-cloud-db option (like StackMob when that was a product); even if not, you can save the message to the Parse cloud db directly.
The code would look something like this (not complete, likely with errors; figure it out or ask PubNub support for details) in your JavaScript client (in the browser):
var pubnub = PUBNUB({
    publish_key   : your_pub_key,
    subscribe_key : your_sub_key
});

var msg = ... // get the message from your UI text box or whatever

pubnub.publish({
    // this is some variable you set up when you enter a chat room
    channel: chat_channel,
    message: msg,
    callback: function(event) {
        // DISCLAIMER: code pulled from the Parse example (see the links below),
        // but some object creation details are left out here and the msg
        // object is not fully fleshed out in this sample code
        var ChatMessage = Parse.Object.extend("ChatMessage");
        var chatMsg = new ChatMessage();
        chatMsg.set("message", msg);
        chatMsg.set("user", uuid);
        chatMsg.set("channel", chat_channel);
        chatMsg.set("timetoken", event[2]);
        // this ChatMessage object can be whatever you want it to be
        chatMsg.save();
    },
    error: function (error) {
        // Handle error here, like retry until success, for example
        console.log(JSON.stringify(error));
    }
});
You might even just store the entire set of publishes (on both ends of the conversation) based on a time interval, number of publishes, or total data size, but be careful: either user could exit the chat and close the browser without notice, and you would fail to save. So the per-publish save is probably best practice, if a bit noisy.
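If you do go the batched route, here is a sketch of the idea (my illustration; the flush thresholds are arbitrary, and it assumes Parse.Object.saveAll from the Parse JS SDK and the same ChatMessage object as above):

    // Buffer outgoing messages locally and flush them to Parse in batches,
    // either when the buffer reaches a size threshold or on a timer.
    // Per-publish saving (above) is still the safer default.
    var ChatMessage = Parse.Object.extend("ChatMessage");
    var buffer = [];
    var MAX_BUFFERED = 20;

    function bufferMessage(msg, uuid, chat_channel, timetoken) {
        var chatMsg = new ChatMessage();
        chatMsg.set("message", msg);
        chatMsg.set("user", uuid);
        chatMsg.set("channel", chat_channel);
        chatMsg.set("timetoken", timetoken);
        buffer.push(chatMsg);
        if (buffer.length >= MAX_BUFFERED) {
            flushBuffer();
        }
    }

    function flushBuffer() {
        if (buffer.length === 0) return;
        var toSave = buffer;
        buffer = [];
        Parse.Object.saveAll(toSave); // one round trip for the whole batch
    }

    // also flush periodically so a quiet conversation still gets persisted
    setInterval(flushBuffer, 30 * 1000);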
I hope you find one of these techniques as a means to get started in the right direction. There are details left out so I expect you will have follow up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)

Twilio SMS sent from +12345

I am sending SMS using Twilio Node (http://twilio.github.io/twilio-node/).
In the from field I've set the number that Twilio gave me, yet when I receive the SMS it shows as +1 (234) 5.
The only thing I can think of is that I am using the trial account, but their FAQ doesn't say anything about this...
https://www.twilio.com/help/faq/twilio-basics/how-does-twilios-free-trial-work
Code snippet:
// Require and initialize the Twilio module with your credentials
var client = require('twilio')('A-FAKE-8cc', 'b53090-FAKE-6808');

// Send an SMS message
client.sendSms({
    to: '+' + to,
    from: '+15734XXXXXX',
    body: 'Your code is ' + code
}, function(err, responseData) {
    if (err) {
        console.log(err);
    } else {
        console.log(responseData.from);
        console.log(responseData.body);
    }
});
Here is another piece of info: my Twilio number is in the US, while my destination is in Israel; does that matter? Also, when I verified my number, I received the verification code from this +12345 as well.
Confirming my suspicion: I've just verified a US number this time, and it showed the correct number (both from my app and from Twilio).
To be honest, your location doesn't matter for placing calls or sending messages on the Twilio platform. Apart from that, if you want an upgraded account, I'd suggest applying for GitHub's Student Developer Pack; you'll get $50 in credit from it, which you can use to buy some virtual numbers.
To help you successfully make calls and send messages over the internet, I am linking a repository: Getting started with Twilio at MLH INIT.

Implementing bulk-messaging from Salesforce to/from Twilio, hitting Salesforce API limits

I am building an integration between Salesforce and Twilio that sends/receives SMS using the TwilioForce REST API. The main issue is getting around Salesforce's 10-callout API limit, as well as the prohibition on HTTP callouts from a trigger.
I am basing the design on Dan Appleman's Asynchronous Request processes, but whether in Batch mode or RequestAsync(), ASync(), Sync(), repeat... I'm still hitting the limits.
I'd like to know how other developers have done this successfully; the integrations have been there for a while, but the examples are few and far between.
Are you sending unique messages for each record that has been updated? If not, then why not send one message to multiple recipients to save on your API limits?
Unfortunately, if you do actually need to send more than 10 unique messages, there is no way to send messages in bulk with the Twilio API. You could instead write a simple application that runs on Heroku or some other application platform, which you call out to once and which handles the SMS sending for you.
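A minimal sketch of such a relay service (my illustration, assuming Express plus the same twilio-node sendSms() call shown in the earlier answer; newer twilio-node versions use client.messages.create instead, and the /send-batch route and payload shape are made up for the example):

    // Tiny relay: Salesforce makes ONE callout with a list of messages,
    // and this service fans them out to Twilio one by one.
    var express = require('express');
    var twilio = require('twilio');

    var app = express();
    app.use(express.json());

    var client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

    app.post('/send-batch', function (req, res) {
        var messages = req.body.messages || []; // expected shape: [{ to, from, body }, ...]
        var results = [];
        var pending = messages.length;

        if (pending === 0) return res.json({ results: [] });

        messages.forEach(function (m) {
            client.sendSms({ to: m.to, from: m.from, body: m.body }, function (err, data) {
                results.push(err ? { to: m.to, error: err.message }
                                 : { to: m.to, sid: data.sid });
                if (--pending === 0) res.json({ results: results });
            });
        });
    });

    app.listen(process.env.PORT || 3000);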
I have it working now using the following structure (I apologize for the formatting - it's mostly pseudocode):
ASyncRequest object:
AsyncType (picklist; 'SMS to Twilio' is the only type for now),
Params (long text area: comma-separated list of Ids)
Message object:
To (phone), From (phone), Message (text), Sent (boolean), smsId (string), Error (text)
Message trigger: passes the trigger details to a CreateAsyncRequests() method.
CreateAsyncRequests(): evaluates each new/updated Message__c; if Sent == false for any message, we create an AsyncRequest with type = 'SMS to Twilio' and Params += ',' + message.Id.
// Create a list to be inserted after all the Messages have been processed
List<ASyncRequest__c> requests = new List<ASyncRequest__c>();
Once we reach 5 message Ids in a single AsyncRequest's Params list, add it to requests.
If all the messages have been processed and there's a request with fewer than 5 Ids in Params, add it to requests as well.
if (requests.size() > 0) {
    insert requests;
    AsyncProcessor.StartBatch();
}
AsyncProcessor implements .Batchable and .AllowsCallouts, and queries ASyncRequest__c for any requests that need to be processed, which in this case will be our Messages list.
The execute() method takes the list of ASyncRequests, splits each Params value into its component Message Ids, and then queries the Message object for those particular Messages.
StartBatch() calls execute() with 1 record at a time, so that each execute() process will still contain fewer than the maximum 10 callouts.
Each Message is processed in a try/catch block that calls SendMessage(), sets Message.smsId = Twilio.smsId and sets Message.Sent = true.
If no smsId is returned, then the message was not sent, and I set a boolean bSidIsNull = true indicating that (at least) one message was not sent.
** If any message failed, no smsIds are returned EVEN FOR MESSAGES THAT WERE SUCCESSFUL **
After each batch of messages is processed, I check bSidIsNull; if true, then I go back over the list of messages and put any that do not have an smsId into a map indexed by the Twilio number I'm trying to send them From.
Since I limited each ASyncRequest to 5 messages, I still have the use of a callout to retrieve all of the messages sent from that Twilio.From number for the current date, using
client.getAccount().getMessages('From' => fromNumber, 'DateSent' => currentDate)
Then I can update the Message.smsIds for all of the messages that were successful, and add an error message to Message.Error_on_Send__c for any that failed.

How would I design this scenario in Twilio?

I'm working on a YRS 2013 project and would like to use Twilio. I already have a Twilio account set up with over $100 worth of funds on it. My project uses an external API to find events near a location and date. It is written in Ruby using Sinatra (and is going to be deployed to Heroku).
I am wondering whether you could guide me on how to approach this scenario: a user texts my Twilio number (the message contains the location and date), we process the body of that SMS, and we send the results back to the number that asked for them. I'm not sure where to start; for example, whether Twilio would handle some of that task, or whether I would just use Twilio's API and do the checking for SMSs and returning of results myself. I'm thinking about not using a database.
Could you guide me on how to approach this task?
I need to present the project on Friday, so I'm on a tight deadline! Thanks for your help.
They have some great documentation on how to do most of this.
When you receive a text, you should parse it into the format you need.
Feed it into your existing project, and when it returns the event or events in the area, check how long the resulting string is, because Twilio restricts messages to 160 characters or less.
Ensure that you split the message elegantly and not in the middle of an event. If you were returned "Boston Celtics Game" and "The Nut Cracker Play", and both events cannot fit in one message, make sure the first message says something like "Boston Celtics Game, another text coming in 1 second".
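Here is one way to do that splitting (JavaScript just for illustration; the 160-character cap is the standard SMS limit, and the continuation note text is only an example):

    // Pack event names into SMS-sized messages without splitting an event in half.
    // If more messages follow, append a continuation note so the user knows to wait.
    // (A single event name longer than the limit is not truncated in this sketch.)
    var SMS_LIMIT = 160;
    var MORE = ', another text coming in 1 second';

    function packEvents(events) {
        var messages = [];
        var current = '';
        events.forEach(function (name) {
            var candidate = current === '' ? name : current + ', ' + name;
            // leave room for the continuation note in case this is not the last message
            if (candidate.length + MORE.length <= SMS_LIMIT) {
                current = candidate;
            } else {
                messages.push(current);
                current = name;
            }
        });
        if (current !== '') messages.push(current);
        // add the continuation note to every message except the last
        return messages.map(function (m, i) {
            return i < messages.length - 1 ? m + MORE : m;
        });
    }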
In order to receive a text message from a mobile device, you'll have to expose an endpoint that is reachable by Twilio. Here is an example
class ReceiveTextController < ApplicationController
  def index
    # let's pretend that we've mapped this action to
    # http://localhost:3000/sms in the routes.rb file
    message_body = params["Body"]
    from_number  = params["From"]
    SMSLogger.log_text_message from_number, message_body
  end
end
In this example, the index action receives a POST from Twilio. It grabs the message body and the sender's phone number and logs them. Retrieving the information from the Twilio POST is as simple as looking at the params hash:
{
  "AccountSid"    => "asdf876a87f87a6sdf876876asd8f76a8sdf595asdD",
  "Body"          => body,
  "ToZip"         => "94949",
  "FromState"     => "MI",
  "ToCity"        => "NOVATO",
  "SmsSid"        => "asd8676585a78sd5f548a64sd4f64a467sg4g858",
  "ToState"       => "CA",
  "To"            => "5555992673",
  "ToCountry"     => "US",
  "FromCountry"   => "US",
  "SmsMessageSid" => "hjk87h9j8k79hj8k7h97j7k9hj8k7",
  "ApiVersion"    => "2008-08-01",
  "FromCity"      => "GRAND RAPIDS",
  "SmsStatus"     => "received",
  "From"          => "5555992673",
  "FromZip"       => "49507"
}
Source
