Twilio failed to send 150 SMS messages in the middle of a campaign because my account ran out of funds. Is there a way to resend those 150 messages in bulk? Thanks!
If you don't have a queue on your side, the easiest way is to use the API to find the failed messages and resend them as appropriate:
You can use the SMS Messages List Resource - http://www.twilio.com/docs/api/rest/sms#list - to get a list of messages within a certain date range from a certain number.
From there, you'll get back a list which you can iterate over. For each message, check the "status" parameter for the "failed" value - http://www.twilio.com/docs/api/rest/sms#sms-status-values
I would recommend making a list of those, reviewing it yourself to make sure the numbers are what you expect, and then re-sending them via your normal means.
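A rough sketch of that flow with today's twilio Python helper library (the credentials, number, and dates are placeholders you'd fill in):

```python
# Minimal sketch, assuming the twilio Python helper library;
# account SID, auth token, number, and dates are placeholders.
from datetime import datetime
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

# Pull messages sent from the campaign number within the date range.
candidates = client.messages.list(
    from_="+15551230000",                    # hypothetical campaign number
    date_sent_after=datetime(2023, 1, 1),
    date_sent_before=datetime(2023, 1, 2),
)

# Keep only the failed ones, and review this list before resending.
failed = [m for m in candidates if m.status == "failed"]
for m in failed:
    print(m.to, m.status, m.body)

# After manual review, resend via your normal means, e.g.:
# for m in failed:
#     client.messages.create(to=m.to, from_=m.from_, body=m.body)
```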
On another front, we have auto-recharge specifically to prevent scenarios like this. If it's not turned on, you should enable it so this doesn't happen again.
Disclosure: Twilio Employee here
I am trying to use the API to get the total number of messages posted by each user during a certain period. Ideally, I would be able to break the counts down by channel type (public, private, direct messages). Is this possible? I am looking through the API documentation but haven't found anything. I would use it in a script that automatically generates weekly activity reports.
Thank you for any advice you can provide!
There is no special endpoint for this information as far as I know, but you can generate something similar yourself by looping through all channels and counting the messages per user, e.g. (sketched in code after the list):
Get list of all channels with conversations.list
Get history of each channel with conversations.history
Count messages per user
Of course, your results would not include messages from channels that your bot has no access to (e.g. some private channels and direct message channels).
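A rough sketch of those three steps with the official slack_sdk library (the token, channel types, and period timestamps are placeholders you'd adapt):

```python
# Minimal sketch, assuming slack_sdk; token and timestamps are placeholders.
from collections import Counter
from slack_sdk import WebClient

client = WebClient(token="xoxb-your-bot-token")
counts = Counter()

# 1. List all channels the bot can see (add "im,mpim" for DMs it is in).
for page in client.conversations_list(types="public_channel,private_channel"):
    for channel in page["channels"]:
        # 2. Walk each channel's history; oldest/latest bound the period.
        for history in client.conversations_history(
            channel=channel["id"],
            oldest="1693526400",   # hypothetical period start (Unix ts)
            latest="1694131200",   # hypothetical period end
        ):
            # 3. Count messages per user.
            for msg in history["messages"]:
                if "user" in msg:  # skip system entries without a user
                    counts[msg["user"]] += 1

print(counts.most_common())
```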
Currently I'm working on a multi-tenant SaaS where tenants can enable push notifications for their user bases.
I'm thinking of using a message queue to store all pushes and send them with a separate service. That new service would need to read from the queue and send the push notifications.
My question now is: do I need to come up with a complex sending strategy? I know that GCM has a limit of 1000 devices per request, so this needs to be considered. I also can't wait for x pushes to accumulate, as this might delay a previous push from being sent. My next thought was to create a global array and fill it with pushes from the queue. A loop would then drain that array every, say, 1 second and send the pushes. This way pushes would get sent for sure and I wouldn't exceed the 1000-device limit.
So, although this might work, I'm not sure an infinite loop is the best way to go. I'm also wondering whether GCM/FCM even has a request limit. If not, I wouldn't need to aggregate the pushes in the first place; I could ditch the loop and simply fire a request for each push pulled from the queue.
Any enlightenment on this topic or improvement of my prototypical algorithm would be great!
Do I need to come up with a complex sending strategy?
Not really. GCM/FCM is pretty simple. Just send the message to the GCM/FCM server; it queues the message on its own and then, as per its normal behavior, delivers it as soon as possible.
I know that GCM has a limit of 1000 devices per request, so this needs to be considered.
I think you're misreading the 1000-devices-per-request limit. It refers to the number of registration tokens you put in the list when using the registration_ids parameter:
This parameter specifies a list of devices (registration tokens, or IDs) receiving a multicast message. It must contain at least 1 and at most 1000 registration tokens.
This means a single request can deliver the same message payload to at most 1000 devices; if you have more recipients, split the tokens into batches of up to 1000 per request.
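For illustration, a minimal batching sketch against the legacy HTTP endpoint (the server key, tokens, and payload are placeholders):

```python
# Minimal sketch of batching registration tokens in groups of 1000;
# SERVER_KEY, tokens, and the notification payload are placeholders.
import requests

SERVER_KEY = "your-fcm-server-key"
tokens = ["token-1", "token-2"]  # all registration tokens for this push

def chunks(seq, size=1000):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

for batch in chunks(tokens):
    requests.post(
        "https://fcm.googleapis.com/fcm/send",
        headers={"Authorization": f"key={SERVER_KEY}"},
        json={
            "registration_ids": batch,  # at most 1000 tokens per request
            "notification": {"title": "Hi", "body": "Same payload for all"},
        },
    )
```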
I'm wondering if GCM / FCM even has a request limit?
AFAIK, there is no such limit. Ditch the loop. Whenever you successfully send a message to the GCM/FCM server, it enqueues the message and holds it until the device is available to receive it.
We're moving from Mandrill to SparkPost. We figured that SparkPost's transmission is the closest thing to Mandrill's send-template message call.
Mandrill responded to those calls with a list of ids and statuses, one per email. SparkPost, on the other hand, returns a single id and summary statistics (the number of emails that were sent and the number that failed). Is there some way to get those ids and statuses out of the transmission response, or at all?
You can get the message IDs for messages sent using the transmissions API in two ways:
Query the message events API, which allows you to filter by recipients, template IDs, campaign IDs, and other values
Use webhooks - messages are sent to your endpoint in batches, and each object in the batch contains the message ID
Which method you choose really depends on your use case. It's essentially poll (message events) vs. push (webhooks). There is no way to get the IDs when you send the transmission because they are sent asynchronously.
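For the poll option, a minimal sketch of querying the message events API (the API key and transmission ID are placeholders):

```python
# Minimal sketch of polling SparkPost's message events API;
# API key and transmission ID are placeholders.
import requests

resp = requests.get(
    "https://api.sparkpost.com/api/v1/message-events",
    headers={"Authorization": "your-sparkpost-api-key"},
    params={
        "transmission_ids": "your-transmission-id",
        "events": "delivery,bounce",  # filter to the statuses you care about
    },
)
for event in resp.json()["results"]:
    print(event["message_id"], event["type"], event["rcpt_to"])
```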
Querying the message events API, while a viable option, would needlessly complicate our simple solution. On the other hand, we very much want to use webhooks, but not knowing which message they relate to would be troublesome...
The missing link was putting our own id in rcpt_meta. Most of the webhooks we care about do contain rcpt_meta, so we can substitute message_id with that.
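For anyone else landing here, a sketch of attaching your own id when creating the transmission; the per-recipient metadata field is what comes back as rcpt_meta in events (addresses and ids are placeholders):

```python
# Minimal sketch: per-recipient metadata on a transmission, which the
# webhook events echo back as rcpt_meta. IDs/addresses are placeholders.
import requests

requests.post(
    "https://api.sparkpost.com/api/v1/transmissions",
    headers={"Authorization": "your-sparkpost-api-key"},
    json={
        "recipients": [
            {
                "address": {"email": "user@example.com"},
                "metadata": {"internal_id": "our-id-123"},  # -> rcpt_meta
            },
        ],
        "content": {"template_id": "your-template"},
    },
)
```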
I'm stuck on this problem too.
The rcpt_meta solution would be perfect if substitution worked on rcpt_meta, but that's not the case.
So, when sending a campaign, I cannot specify all recipients inline but have to make a separate API call for every message, which is bad for, say, 10k-100k recipients!
But now each transmission_id is unique per SINGLE recipient, so I have the missing key and rcpt_meta is no longer needed.
So the key to use when receiving a webhook is a composite one:
transmission_id **AND** rcpt_to
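A minimal sketch of pulling that composite key out of a webhook batch (this assumes the documented msys event wrapper; verify against your own payloads):

```python
# Minimal sketch: derive (transmission_id, rcpt_to) keys from a parsed
# SparkPost webhook batch. The msys wrapper shape is an assumption taken
# from the webhook docs; check it against real payloads.
def keys_from_webhook(batch):
    """Yield (transmission_id, rcpt_to) for each event in a batch."""
    for item in batch:
        for event in item.get("msys", {}).values():
            if "transmission_id" in event and "rcpt_to" in event:
                yield (event["transmission_id"], event["rcpt_to"])
```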
A web application sends an email on behalf of a UserA to UserB, using the new Gmail API (Users.messages: send).
The synchronous response contains a threadId and messageId, which are stored in the database.
We then query the history API for any changes in user's inbox (Users.history: list).
Is there an efficient way to get all the updates since last sync (new replies, read/unread changes)?
One implementation we tried was to filter the history API results through a custom label. Unfortunately, we noticed that once a thread/message is tagged with a specific label, subsequent responses are not labeled automatically, so new replies are not included in the history API response.
A second approach was to query threads using Gmail advanced search for a particular label and date (e.g. after:2014/08/29 label:MY_LABEL). The problem was that Gmail does not return threads that were created before 2014/08/29 but had a reply on that date.
Any scalable suggestions would be greatly appreciated.
Not sure I understand here; users.history.list was made exactly for this. Given a previous historyId, you can call history.list(previousHistoryId) and iterate through the results to find all the message IDs that have been updated since that historyId. Then call messages.get() on each of those: for messages you already knew about you can use format=MINIMAL (to see label updates), and for new messages you can use a different format to get the message content if you need it.
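A rough sketch of that loop with google-api-python-client, assuming an authorized Gmail `service` object and a `last_history_id` stored in your database:

```python
# Minimal sketch, assuming `service` is an authorized Gmail API client
# (googleapiclient.discovery.build) and last_history_id came from your DB.
def updated_message_ids(service, last_history_id):
    """Collect IDs of all messages touched since last_history_id."""
    message_ids, page_token = set(), None
    while True:
        resp = service.users().history().list(
            userId="me",
            startHistoryId=last_history_id,
            pageToken=page_token,
        ).execute()
        for record in resp.get("history", []):
            for msg in record.get("messages", []):
                message_ids.add(msg["id"])
        page_token = resp.get("nextPageToken")
        if not page_token:
            # Persist resp["historyId"] as the new sync point.
            return message_ids, resp.get("historyId")

# For known messages, format="minimal" is enough to see label changes:
# service.users().messages().get(userId="me", id=mid, format="minimal").execute()
```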
I'm developing a chat application and I've come across the following question: should I use 'multiple' long-polling requests to my server, each one handling a different thing? For example, one checking for messages, one for 'is typing', one for managing the contacts list ('is online/offline'), etc. Or would it be better to handle it all through one channel?
My opinion is that you’d be better off with one connection, and sending JSON messages back and forth, e.g.:
User joined:
{"user_add": "st3"}
User left:
{"user_left": "sneeu"}
Message received:
{"message": "Good morning!", "from": "st3"}
And these could be sent together in an array, so that users could receive everything since their last response.
Polling is going to be your biggest bandwidth/resource hog, so keep it to a minimum. For example, issue HEAD requests with appropriate Date / If-Modified-Since headers so caching works sensibly, with the server returning just headers giving the date and time of the last change to any of the properties you're interested in (or perhaps something even more minimal than that), and only issue a full GET if the returned headers imply there is new information.
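A minimal sketch of that HEAD-then-GET pattern with the requests library (the URL and state handling are placeholders):

```python
# Minimal sketch of HEAD-then-GET polling; URL and state are placeholders.
import requests

last_modified = None  # remembered from the previous poll

def poll(url):
    global last_modified
    headers = {"If-Modified-Since": last_modified} if last_modified else {}
    head = requests.head(url, headers=headers)
    if head.status_code == 304:      # nothing changed since last poll
        return None
    last_modified = head.headers.get("Last-Modified")
    return requests.get(url).json()  # fetch the body only when there is news
```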