Twilio: multiple threads so that each call has its own process - ruby

Using Ruby, Sinatra, and the Twilio REST API, I'm coding a customer service line for my company. When an incoming call is received, the customer is put on hold in a <Conference> verb while the application makes an outgoing call to an agent. If the agent accepts the call, the two calls are then bridged.
I currently have 3 conference rooms (Tech Supp, Sales, and Mobile Supp) created by my fairly linear program. But if a conference room is occupied when another call comes in requesting that same room, the second caller can't reach an agent, which is problematic.
My question is: can I/how do I create a thread in Ruby for each incoming call so that it has its own independent process?
My reasoning behind this is: once each call has its own thread, I can create a room called "name of department" + the process ID.
For example (also adding a randomly generated 7-digit number to make each conference name 100% unique):
@random = Random.rand(10_000_000 - 1_000_000) + 1_000_000
puts @random

<Dial>
  <Conference>'Tech Supp' + PROCESS_ID + @random</Conference>
</Dial>

Twilio evangelist here.
Two ideas here. Rather than getting into threads, which can get really messy really quickly, why not just create a different conference room using the inbound caller's CallSid? I've created systems similar to what you describe using that technique. Your system just catalogs each CallSid as it arrives so you can go back later and connect an agent to that conference.
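Roughly, in Sinatra, that could look like this (the route and room names are just illustrative; Twilio posts a unique CallSid parameter with every inbound-call webhook):

require 'sinatra'

post '/incoming_call' do
  call_sid = params['CallSid'] # unique per inbound call; catalog it so an
                               # agent can be connected to this room later
  content_type 'text/xml'
  <<-TWIML
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Dial>
    <Conference>TechSupp-#{call_sid}</Conference>
  </Dial>
</Response>
  TWIML
end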
Another option might be to use <Queue>. When a new call dials in, you could just drop the caller into a queue (or different queues if you want) and they can wait there until an agent is ready. The agent can then pick the next caller out of the queue to speak with. A sketch follows the link below.
This HowTo on using <Queue> might be helpful:
http://www.twilio.com/docs/howto/callqueue
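For example, a minimal sketch of the queue approach using the <Enqueue> and <Dial><Queue> TwiML verbs (queue and route names are illustrative):

require 'sinatra'

# caller side: webhook for the inbound call
post '/incoming_call' do
  content_type 'text/xml'
  <<-TWIML
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Enqueue>tech_supp</Enqueue>
</Response>
  TWIML
end

# agent side: dialing this endpoint connects the agent to the
# caller who has been waiting the longest
post '/agent_call' do
  content_type 'text/xml'
  <<-TWIML
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Dial><Queue>tech_supp</Queue></Dial>
</Response>
  TWIML
end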
Hope that helps.

Related

How to send a byte message with a ZeroMQ PUB / SUB setting?

So I'm new to ZeroMQ and I am trying to send a byte message with ZeroMQ, using a PUB / SUB setting.
Choice of programming language is not important for this question since I am using ZeroMQ for communication between multiple languages.
Here is my server code in python:
import zmq
import time

port = "5556"
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:%s" % port)

while True:
    socket.send(b'\x84\xa5Title\xa2hi\xa1y\xcb\x00\x00\x00\x00\x00\x00\x00\x00\xa1x\xcb#\x1c\x00\x00\x00\x00\x00\x00\xa4Data\x08')
    time.sleep(1)
and here is my client code in python:
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")

total_value = 0
for update_nbr in range(5):
    string = socket.recv()
    print(string)
My client simply blocks at string = socket.recv().
I have done some research; apparently, if I were to send a string using a PUB / SUB setting, I would need to set some "topic filter" to make it work. But I am not sure how to do that when sending a byte message.
ZeroMQ defines protocols that guarantee cross-platform compatibility of both the behaviours and the message content.
The root cause: to start receiving messages, one must change the initial "topic-filter" state of the SUB-socket ( which initially is "receive nothing", a zero-subscription ).
ZeroMQ is a lovely set of tools, created around smart principles.
One of these says: do nothing on the SUB-side until .setsockopt( zmq.SUBSCRIBE, ... ) explicitly says what to subscribe to, and only then start checking the incoming messages. ( Older zmq-fans remember the initial design, where the PUB-side always distributed all messages down the road towards each connected SUB-"radio-broadcast receiver", and upon each message receipt the SUB-side performed the "topic-filtering" on its own. Newer versions of zmq reverse the architecture and perform PUB-side filtering. )
Anyway, the initial state of the "topic-filter" makes sense. Who knows what ought to be received a priori? Nobody. So receive nothing.
Given you need or wish to start the job, an easy move is to subscribe to anything, i.e. let any message get through.
Yes, it is that simple: .setsockopt( zmq.SUBSCRIBE, "" )
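Applied to the client code above, a minimal sketch ( note that on Python 3 the subscription value must be a bytes object ):

import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")
socket.setsockopt(zmq.SUBSCRIBE, b"")  # zero-length filter: receive everything

for update_nbr in range(5):
    message = socket.recv()  # no longer blocks forever once the PUB sends
    print(message)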
If one needs some key-based processing and the messages are of a reasonable size ( no giga-BLOBs ), one may simply prefix some key ( or a byte-field if more hacky ) in front of the message-string ( or the payload byte-field ).
Sure, one may save some fraction of the transport-layer overhead in case the zmq-filtering is performed on the PUB-side ( not valid for the older API versions ); otherwise it is typically no big deal to subscribe to receive "anything" and check the messages for some pre-assembled context-key ( a prefix substring, a byte-field etc. ) before the rest of the message payload is processed.
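For illustration, a sketch of that key-prefix idea ( the topic key "sensorA" here is made up ): ZeroMQ matches a subscription against the leading bytes of each message, so a prefixed key acts as the filter without any extra framing:

import zmq

context = zmq.Context()

# PUB side: prefix the binary payload with a key
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5556")
payload = b'\x84\xa5Title\xa2hi'   # any bytes
pub.send(b"sensorA " + payload)

# SUB side: only messages whose first bytes match the key are delivered
sub = context.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"sensorA")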
The Best Next Step:
If your code strives to reach a production state, and not remain just an academic example, there will have to be much more work done to provide survivability measures for the hostile real-world production environments.
An absolutely great perspective for doing this, and a good read for realistic designs with ZeroMQ, is Pieter HINTJENS' book "Code Connected, Vol. 1" ( you may check my posts on ZeroMQ to find the book's direct pdf-link ).
Plus another good read comes from Martin SUSTRIK, the co-father of ZeroMQ, on low-level truths about the ZeroMQ implementation details & scalability.

Pubnub chat application with storage

I'm looking to develop a chat application with PubNub where I want to make sure all the chat messages that are sent are stored in a database, and I also want to send images in chat.
I found out that I can use Parse with PubNub to provide storage options, but I'm not sure how to set up those two in a way where the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options available to use with PubNub instead of Parse?
Sutha,
What you are seeking is not a trivial solution unless you are talking about a limited number of end users. So I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen (subscribe) to every chat channel that is active and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you realize how this can get tricky to scale, where several server instances are listening to channels in a non-overlapping manner, or with overlap but using a server queue implementation and de-duping messages).
That said, yes, there are PubNub customers that have implemented such a solution - Parse not being the key to making this happen, by the way.
You have three basic options for implementing this:
1. Implement a solution that will allow many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up to it then this is not likely where you want to go.
2. There is a way to monitor all channels that become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels from which your server would pull history (enable Storage & Playback on your keys) in an on-demand (not completely realtime) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (an endpoint that you implement on your server - your Parse server in this case):
channel active: record "start chat" timetoken in your Parse db
channel inactive: record "end chat" timetoken in your Parse db
the inactive event is the kickoff for a process that uses the start/end timetokens you recorded for that channel to get its history from PubNub: pubnub.history({channel: channelName, start:startTT, end:endTT})
you will need to iterate on this history call until you receive < 100 messages (100 is the max number of messages you can retrieve at a time); see the sketch after this list
as you retrieve these messages you will save them to your Parse db
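A rough sketch of that history iteration in JavaScript (PubNub v3-style API, as used elsewhere in this answer; the envelope and paging details here are illustrative, so check them against your SDK version):

function saveChannelHistory(pubnub, channelName, startTT, endTT) {
    pubnub.history({
        channel: channelName,
        start: startTT,   // "start chat" timetoken from the active webhook
        end: endTT,       // "end chat" timetoken from the inactive webhook
        count: 100,       // max messages per call
        callback: function (result) {
            var messages = result[0];
            messages.forEach(function (msg) {
                // save msg to your Parse db here
            });
            if (messages.length === 100) {
                // a full page means more may remain: continue from the
                // page-boundary timetoken returned in the envelope
                saveChannelHistory(pubnub, channelName, startTT, result[2]);
            }
        }
    });
}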
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
3. Finally, you could just save each message to your Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe they have some sort of store-locally-then-sync-to-cloud-db option (like StackMob when that was a product); but even if not, you can save the msg to the Parse cloud db directly.
The code would look something like this (not complete, likely errors; figure it out or ask PubNub support for details) in your JavaScript client (in the browser).
var pubnub = PUBNUB({
    publish_key   : your_pub_key,
    subscribe_key : your_sub_key
});

var msg = ... // get the message from your UI text box or whatever

pubnub.publish({
    // this is some variable you set up when you enter a chat room
    channel: chat_channel,
    message: msg,
    callback: function (event) {
        // DISCLAIMER: code pulled from a Parse example,
        // but there are some object creation details
        // left out here and the msg object is not
        // fully fleshed out in this sample code
        var ChatMessage = Parse.Object.extend("ChatMessage");
        var chatMsg = new ChatMessage();
        chatMsg.set("message", msg);
        chatMsg.set("user", uuid);
        chatMsg.set("channel", chat_channel);
        chatMsg.set("timetoken", event[2]);
        // this ChatMessage object can be
        // whatever you want it to be
        chatMsg.save();
    },
    error: function (error) {
        // Handle error here, like retry until success, for example
        console.log(JSON.stringify(error));
    }
});
You might even just store the entire set of publishes (on both ends of the conversation) based on time interval, number of publishes, or size of total data, but be careful: either user could exit the chat and the browser without notice, and you would fail to save. So the per-publish save is probably best practice, if a bit noisy.
I hope you find one of these techniques as a means to get started in the right direction. There are details left out so I expect you will have follow up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)

Joining the same room more than once, and clients in a room

I'm trying to figure out what happens if a client emits a join for the same room more than once. To test this and find the answer, I initially wanted to find out how many clients a room has after the same client sends more than one join emit, but the Rooms chapter in the wiki https://github.com/Automattic/socket.io/wiki/Rooms is outdated. When I try to use "io.sockets.clients('room')" I get the error "Object #<Namespace> has no method 'clients'".
So I got two questions:
1. What happens if a client tries to join the same room more than once? Will it get emits for that room once for each time it tried to join?
2. How can I find out which clients are in a room?
I'm using socket.io v1.0.2.
I got an answer to this question on the socket.io GitHub.
As per this line of code, the socket will receive emits only once. The socket is added to a room only once; if another attempt is made for the same socket to join the room, the attempt will be ignored.
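A quick way to see this for yourself (a sketch that peeks at the same socket.io 1.0.x adapter internals as the snippet below; this is not a public API):

io.on('connection', function (socket) {
    socket.join('test room');
    socket.join('test room'); // second join is ignored
    var room = io.sockets.adapter.rooms['test room'];
    console.log(Object.keys(room).length); // this socket is counted once
});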
There is currently no public API for getting the clients, and there is some ongoing discussion in #1428. If you really need to get them for some reason, you can fetch the actual clients from the adapter (assuming you are not using the redis adapter) like so:
socket.join('test room');
var clients = io.sockets.adapter.rooms['test room'];
console.log(clients);
for (var clientId in clients) {
    console.log(io.sockets.connected[clientId]);
}
Fixed getting clients in a room at socket.io ~1.4.5 like this:
socket.join('test room');
var room = io.sockets.adapter.rooms['test room'];
console.log(room);
for (var socketId in room.sockets) {
    console.log(io.sockets.connected[socketId]);
}
It works fine and does not give any error; it ignores the second join request from a socket that is already in the room.
I have actually tried and implemented a solution where, when the user clicks on a message notification, the client joins the specific room the notification came from, and when the user sends the very first message it joins that specific room again (this is because I have built a chat directive in AngularJS).
Client Side
1) User opens the notification
Socket.emit('JoinRoomWithThsID', notification.ConversationID);
2) User sends the first message in that room
Socket.emit('patientChatRoomMessage', adminmessage);

Proper way to maintain many connections with Celluloid?

I am currently working on an application that pulls mail from many IMAP mailboxes. It seems like Celluloid is a good fit for this part, but I'm unsure how to employ actors.
The application will be run in a distributed fashion. There are x mailboxes to poll and y processes among which these will be divided. So each process has a list of mailboxes it has to poll, and this list will change every now and then. This means the pool of connections maintained by each process is dynamic.
My biggest question is: should I spawn a separate ImapConnection actor for each mailbox, or should I make a single ImapListener actor that manages all connections internally?
My current design features the former solution. There's one central Coordinator actor that keeps an array of actors that each manage one imap connection. A new connection is added with a simple:
@connections << ImapConnection.supervise(account_info)
The ImapConnection either polls the IMAP server at regular intervals or maintains an IDLE connection. If the Coordinator wants to stop polling a mailbox, it looks it up in its @connections array and properly disposes of it.
This seems like a logical approach to me that yields many of Celluloid's benefits (such as automatic restarting of crashed actors), but I'm struggling to find examples of other software that uses this approach. Is spawning hundreds of actors in this fashion proper use of the actor model, or should I use a different approach?
Very glad to hear you are using Celluloid. Good question.
Not sure how you create connections and maintain them, whether that is by a TCPSocket you have the ability to manage or not. If you can manage the TCPSocket directly, you ought to use Celluloid::IO as well as Celluloid itself. I also don't know where you put the information pulled in from the IMAP connections. These two things influence your strategy.
Your approach is not bad, but yes, it could possibly be improved by adding something to do your heavy lifting: polling workers; another actor to hold account_info only; and a final actor to trigger the work and/or maintain the IDLE state. So you'd end up with ImapWorker ( a pool ), ImapMaintainer, and ImapRegistry. Right here I wonder whether, since you are polling, you need to keep an open connection at all, rather than allowing information to be pushed. If you plan to poll and still keep connections open, here is what the three actors would do:
ImapRegistry holds your account_info in a Hash. This would have methods on it like add, get, and remove. I recommend a Hash of @credentials so you can use the same ID between ImapMaintainer and ImapRegistry; one holds live connections in its @connections, and the other holds account_info instances in its @credentials. Both @connections and @credentials are accessed by the same ID, but one keeps a volatile connection whereas the other holds only static data usable to recreate a connection if necessary. In this way your heavy lifters could die, be respawned, and the entire system could regenerate itself.
ImapMaintainer would have the actual @connections in it, and every( interval ) { } tasks built into it, added when account_info is stored in ImapRegistry. There are two tasks I see, depending on what frequency you plan to poll. One could simply touch the IMAP connection to maintain it, and the other could poll the IMAP server with ImapWorker. ImapWorker would be a pool saved in ImapMaintainer as, say, @worker. So it has @connections, @worker, polling, and keepalive. Polling could be an @connections.each situation, or you could have a timer per connection, added at the point a connection is created.
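To make the per-connection timer idea concrete before the fuller skeleton below, a minimal sketch ( assuming Celluloid's every( interval ) timers and pool .async calls; open_imap_connection is a hypothetical helper ):

require 'celluloid'

class ImapMaintainer
  include Celluloid

  POLL_INTERVAL = 60 # seconds; arbitrary

  def initialize
    @worker      = ImapWorker.pool size: WORKERS
    @connections = {}
    @timers      = {}
  end

  def add( id, credential )
    @connections[ id ] = open_imap_connection( credential ) # hypothetical helper
    @timers[ id ] = every( POLL_INTERVAL ) do
      @worker.async.poll( @connections[ id ] ) # heavy lifting stays in the pool
    end
  end

  def remove( id )
    @timers.delete( id ).cancel
    @connections.delete( id )
  end
end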
ImapWorker has two methods: one is touch, which keeps a connection alive. The main one is poll, which takes a connection you maintain and runs a polling process on it. That method returns the information, or even better also stores it; then the worker returns to the @worker pool. This gives you the benefit of having the polling process happen in a separate thread rather than just a separate fiber, and also allows the trickiest aspect to be kept in the most robust yet most unaware kind of actor.
Working backward: if ImapRegistry receives an add, it stores the account_info and gives it to ImapMaintainer, which creates the connection and timer(s) ( but forgets the account_info and only keeps the connection and timer(s) ), or just creates the connection and lets one big timer maintain all connections with @worker, which is a pool. ImapMaintainer inevitably hits a timer, so at the start and end of its timer task it can check its connection. If the connection is gone for some reason, it can recreate it with information from @registry.get. Within its timer-prompted task it can run @worker.poll or @worker.touch.
This illustrates the above requirements, showing how the initializers would put together the actor system, and has an incomplete skeleton of methods mentioned.
WORKERS = 9 # arbitrarily chosen

class ImapRegistry
  include Celluloid

  def initialize
    @maintainer = ImapMaintainer.supervise
    @credentials = {}
  end

  def add( account_info )
    ...
  end

  def get( id )
    ...
  end

  def remove( id )
    ...
  end
end

class ImapMaintainer
  include Celluloid

  def initialize
    @worker = ImapWorker.pool size: WORKERS
    @connections = {}
  end

  def add( id, credential )
    ...
  end

  def remove( id )
    ...
  end

  # These exist if there is one big timer:
  def polling
    ...
  end

  def keepalive
    ...
  end
end

class ImapWorker
  include Celluloid

  def initialize
    # Nothing needed.
  end

  def poll( connection )
    ...
  end

  def touch( connection )
    ...
  end
end

registry = ImapRegistry.supervise
I love Celluloid and hope you have a lot of success with it. Please ask if you want anything clarified, but this at least is another strategy for you to consider.

How would I design this scenario in Twilio?

I'm working on a YRS 2013 project and would like to use Twilio. I already have a Twilio account set up with over $100 worth of funds on it. I am working on a project which uses an external API and finds events near a location and date. The project is written in Ruby using Sinatra (which is going to be deployed to Heroku).
I am wondering whether you guys could guide me on how to approach this scenario: a user texts the number of my Twilio account (the message would contain the location and date data), we process the body of that SMS, and send back the results to the number that asked for them. I'm not sure where to start; for example, whether Twilio would handle some of that task, or whether I would just use Twilio's API and do the checking for SMSs and returning the results myself. I'm thinking about not using a database.
Could you guide me on how to approach this task?
I need to present the project on Friday, so I'm on a tight deadline! Thanks for your help.
They have some great documentation on how to do most of this.
When you receive a text, you should parse it into the format you need and put it into your existing project.
When it returns the event or events in the area, you need to check how long the string is, due to a constraint Twilio has restricting messages to 160 characters or less.
Ensure that you split the message elegantly and not in the middle of an event. If you were returned "Boston Celtics Game" and "The Nut Cracker Play", you want to make sure that if both events cannot be put in one message, the first message says "Boston Celtics Game, another text coming in 1 second" or something similar; a sketch of this splitting follows.
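A minimal sketch of that splitting (the names here are made up, nothing is Twilio API): pack event names into messages of at most 160 characters, reserving room for a continuation notice, and never cut an event in half:

SMS_LIMIT   = 160
MORE_NOTICE = " (more coming...)"

def chunk_events(events)
  chunks = [""]
  events.each do |event|
    candidate = chunks.last.empty? ? event : "#{chunks.last}, #{event}"
    if candidate.length <= SMS_LIMIT - MORE_NOTICE.length
      chunks[-1] = candidate     # event still fits in the current message
    else
      chunks << event            # start a new message instead of splitting
    end
  end
  # append the notice to every chunk except the last
  chunks[0...-1].map { |c| c + MORE_NOTICE } + [chunks.last]
end

chunk_events(["Boston Celtics Game", "The Nut Cracker Play"])
# => ["Boston Celtics Game, The Nut Cracker Play"]  (both fit in one SMS)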
In order to receive a text message from a mobile device, you'll have to expose an endpoint that is reachable by Twilio. Here is an example:
class ReceiveTextController < ActionController::Base
  def index
    # let's pretend that we've mapped this action to
    # http://localhost:3000/sms in the routes.rb file
    message_body = params["Body"]
    from_number  = params["From"]
    SMSLogger.log_text_message from_number, message_body
  end
end
In this example, the index action receives a POST from Twilio. It grabs the message body and the sender's phone number and logs them. Retrieving the information from the Twilio POST is as simple as looking at the params hash:
{
  "AccountSid"    => "asdf876a87f87a6sdf876876asd8f76a8sdf595asdD",
  "Body"          => body,
  "ToZip"         => "94949",
  "FromState"     => "MI",
  "ToCity"        => "NOVATO",
  "SmsSid"        => "asd8676585a78sd5f548a64sd4f64a467sg4g858",
  "ToState"       => "CA",
  "To"            => "5555992673",
  "ToCountry"     => "US",
  "FromCountry"   => "US",
  "SmsMessageSid" => "hjk87h9j8k79hj8k7h97j7k9hj8k7",
  "ApiVersion"    => "2008-08-01",
  "FromCity"      => "GRAND RAPIDS",
  "SmsStatus"     => "received",
  "From"          => "5555992673",
  "FromZip"       => "49507"
}
Source
