VXML :: Difference between bridged and consultation transfer - ivr

From the VXML 2.1 documentation on consultation transfer, and from the VXML 2.0 documentation on bridged transfer, the only difference I understand is this:
In a bridged transfer the platform maintains the session for the duration of the call, even after transferring, whereas a consultation transfer throws connection.disconnect.transfer upon a successful transfer.
Please let me know if my understanding is correct.

There are actually three types of transfers in VXML; you forgot to mention the blind transfer. A blind transfer terminates the application as soon as the transfer is initiated. A consultation transfer is like a blind transfer except that it makes sure the transfer completes before terminating the application; if the transfer does not complete successfully, control returns to the application. A bridged transfer, on the other hand, keeps the application running after the transfer has completed: you can think of the two parties and the IVR application as being conferenced together.
You will want to check with your IVR vendor on how they implemented transfers for the details. Not all IVR vendors implement the spec the same way; many do not support the consultation transfer at all, allowing only bridged or blind transfers.
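For illustration, here is roughly how the three types are selected in a VoiceXML 2.1 document (VXML 2.0 used bridge="true|false" instead of the type attribute). The destination, timeout and prompt are made up, and, as noted above, support varies by platform:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form id="xfer">
    <!-- type="blind"        : hand off and end the session immediately      -->
    <!-- type="consultation" : wait for the far end to answer, then hand off -->
    <!-- type="bridge"       : keep the session up for the whole call        -->
    <transfer name="result" dest="tel:+18005550100" type="consultation"
              connecttimeout="20s">
      <filled>
        <!-- With a consultation transfer this is only reached on failure -->
        <if cond="result == 'busy' || result == 'noanswer'">
          <prompt>The agent could not be reached. Returning to the menu.</prompt>
        </if>
      </filled>
    </transfer>
  </form>
</vxml>
```

With type="consultation" the filled block is only reached when the transfer fails (busy, no answer, etc.); on success the platform throws connection.disconnect.transfer, exactly as described in the question.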

A consultation transfer is a kind of supervised transfer: before completing the transfer it checks the status of the destination (busy or free), and only then transfers the call.
A bridged transfer works more like a conference, using extra lines. Say a caller is on a call with the IVR; if the call needs to be transferred to an agent, the IVR uses another free channel allocated to it and connects to the agent, so in effect four channels are used for the two call legs.
The term "bridge" comes from connecting these two calls together, like a conference.
All four channels stay in use until the end of the conversation with the agent.

Scenario: Party A calls party B (could be IVR or human). Party B wants to transfer to Party C.
Consult Transfer: Party B initiates the transfer and monitors for Party C to answer before completing the transfer and hanging up. After that, Party A and Party C continue on the call alone.
Bridge Transfer: Party B initiates the transfer and monitors for Party C to answer before completing the transfer, but it stays on the call in suspended mode. When Party C hangs up, Party B re-engages with Party A. Think of the IVR application (i.e. Party B) taking the call back to a customer-survey application after finishing the call with the agent (i.e. Party C).
Blind Transfer: Party B transfers the call to Party C and doesn't care whether Party C answers, is available, or hangs up the call.
Check this page, which provides a good explanation:
https://www.devconnectprogram.com/forums/posts/list/17727.page

Related

Automatic reconnect in case of network failures

I am testing the .NET version of ZeroMQ to understand how to handle network failures. I put the server (pub socket) on an external machine and am debugging the client (sub socket). If I stop my local Wi-Fi connection for a few seconds, ZeroMQ automatically recovers and I even get the remaining values. However, if I disable Wi-Fi for a longer time, say a minute, it just gets stuck waiting for a frame. How can I configure the period during which ZeroMQ is still able to recover? How can I reconnect manually after, say, several minutes? And how can I tell that the socket is stuck and I need to close and reopen it?
Q: "How can I configure this ... ?"
A: Use the .NET equivalents of the zmq_setsockopt() parameter settings, i.e. the family of link-management parameters such as ZMQ_RECONNECT_IVL, ZMQ_RCVTIMEO and the like.
All the other questions depend on your code.
If you use the blocking forms of the .recv() methods, you can easily throw yourself into unsalvageable deadlocks; it is best never to block your own code (why would one ever deliberately give up one's own code's domain of control?).
If you really need to understand the low-level link-management details, do not hesitate to use zmq_socket_monitor() instrumentation (if it is not available in the .NET binding, you can still use another language to see the link-state and related events the monitor instance reports).
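A rough sketch of what those settings look like with NetMQ; the option property names are given from memory and the endpoint and timeouts are made up, so check the binding's documentation:

```csharp
using System;
using NetMQ;
using NetMQ.Sockets;

using (var sub = new SubscriberSocket())
{
    // Link-management knobs (NetMQ's counterparts of ZMQ_RECONNECT_IVL/_MAX)
    sub.Options.ReconnectInterval    = TimeSpan.FromMilliseconds(500);
    sub.Options.ReconnectIntervalMax = TimeSpan.FromSeconds(30);

    sub.Connect("tcp://my-server:5556");
    sub.Subscribe("");

    // Avoid blocking forever: use the timeout-based receive instead of a bare
    // ReceiveFrameString(), so the application decides what to do on silence.
    while (true)
    {
        if (sub.TryReceiveFrameString(TimeSpan.FromSeconds(5), out var message))
        {
            Console.WriteLine(message);
        }
        else
        {
            // No data for 5 s: log it, check health, or tear down and rebuild
            // the socket if the link is considered dead.
        }
    }
}
```

The timeout-based receive is the piece that answers the "how do I know it is stuck" part: if nothing arrives for longer than you expect, the application can decide to rebuild the socket instead of waiting forever.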
I was able to find an answer on their GitHub: https://github.com/zeromq/netmq/issues/845. It seems the behaviour is by design, as I got the same result with the native zmq lib via a .NET binding.

Filenet BPM Webservice receive step design considerations

We are currently designing a web service based process, in which we will be using the web-service invoke and receive steps to communicate with a Microsoft biz-talk server.
Our main concern is that a task on the receive step can wait for some time (up to one week) until the BizTalk server responds to us, which (we think) would incur a performance penalty on the workflow system, as it will be polling for a response.
My question is: are there any known performance considerations for the receive step, especially for leaving work items waiting for extended periods?
No, I don't think there will be any undue "overhead". Yes, internally the process engine "polls". For just about anything. Including invoking components, or executing timers. But from a system perspective, you're just waiting for a request.
It sounds like a "receive" step is exactly the right solution here.

Network structure for online programming game with webSockets

Problem
I'm making a game where you provide a piece of code to represent the agent program of an intelligent agent (think Robocode and the like), but browser-based. Being an AI/ML guy for the most part, my knowledge of web development was/is pretty lacking, so I'm having a bit of trouble implementing the whole architecture. Basically, after the upload of the text (code), which is naturally part of the client side, the backend would be responsible for running the core logic and returning JSON data that would be parsed and used by the client, mainly for the drawing part. There isn't really a need for multiplayer support right now.
If I model after Robocode's execution loop, I would need a separate process for each battle that then assigns different agents (user-made or not) to different threads and gives them some execution time for each loop, generating new information to be given to the agents as well as data for drawing the whole scene. I've tried to think of a good way to structure the multiple clients, servers/web servers/processes [...], and came to multiple possible solutions.
Favored solution (as of right now)
Clients communicate with a Node.js server that works somewhat like an interface (think websocketd) to unique processes running on the same (server) machine, keeping track of clients and processes via IDs and forwarding the data (via WebSockets) accordingly. An example scenario would be:
Client C1 requests new battle to server S and sends code (not necessarily a single step, I know);
S handles the code (e.g. compiling), starts a new battle and opens a connection with its process P1 (named pipes/FIFO?);
P1 generates JSON, sends to S;
S sees P1 is "connected" to C1, sends data to C1 (steps 3 and 4 will be repeated as long as the battle is active);
Client C2 requests new battle;
Previous steps repeated; C2 is assigned to new process P2;
Client C3 requests "watching" battle under P1 (using a unique URL or a token);
S finds P1's ID, compares to the received one and binds P1 to C3;
This way, the server forwards data received from the forked processes to all clients connected to each specific battle (a rough sketch of this relay follows below).
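A minimal sketch of that relay in Node/TypeScript, using the ws package and child_process.fork; the message shapes, the battle_worker.js module and the ID scheme are all made up for illustration:

```typescript
import { WebSocketServer, WebSocket } from "ws";
import { fork, ChildProcess } from "child_process";

// One entry per battle: the worker process plus every client watching it.
interface Battle { proc: ChildProcess; watchers: Set<WebSocket>; }
const battles = new Map<string, Battle>();

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws) => {
  ws.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());

    if (msg.type === "newBattle") {
      const id = Math.random().toString(36).slice(2);   // battle/process ID
      const proc = fork("./battle_worker.js", [id]);    // P1, P2, ...
      const battle: Battle = { proc, watchers: new Set([ws]) };
      battles.set(id, battle);

      // Forward every JSON frame the worker emits to all attached clients.
      proc.on("message", (frame) => {
        const data = JSON.stringify(frame);
        for (const w of battle.watchers) w.send(data);
      });

      proc.send({ type: "start", code: msg.code });      // user-submitted code
      ws.send(JSON.stringify({ type: "battleCreated", id }));
    } else if (msg.type === "watch" && battles.has(msg.id)) {
      battles.get(msg.id)!.watchers.add(ws);             // e.g. C3 watching P1
    }
  });
});
```

The same map lets the server clean up watchers on "close" and kill the worker when the last one leaves; that bookkeeping is omitted here.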
Questions
Regarding this approach:
Is it simple enough? Are there easier or even more elegant ways of doing it? Could scalability be a problem?
Is it secure enough (the whole compiling and running code — likely C++ — on the server)?
Is it fast enough (this one worries me the most for now)? It seems a bit counterintuitive to have a single server dealing with the entire traffic, but as far as I know, if I assigned all these processes to a separate web server, I would need different ports for each of them, which seems even worse.
Since this is a theoretical and opinion-based question... I feel free to throw the ball in different directions. I'll probably edit the answer as I think things over or read comments.
A process per battle?
Sounds expensive. Also, there is the issue of messages going back and forth between processes... you might as well be able to send the messages between machines and have a total separation of concerns.
Instead of forking battles, we could have them running on their own, allowing them to crash and reboot and do whatever they feel like without ever causing any of the other battles or our server any harm.
Javascript? Why just one language?
I would consider leveraging an Object Oriented approach or language - at least for the battles, if not for the server as well.
If we are separating the code, we can use different languages. Otherwise I would probably go with Ruby, as it's easy for me, but maybe I'm mistaken and delving deeper into Javascript's prototypes will do.
Oh... foreign code - sanitization is in order.
How safe is the foreign code? Should it be in a specialized, sandboxed language that promises safety, or should it use an existing language interpreter, which might allow the code to mess around with things it really shouldn't...
I would probably write my own "pseudo language" designed for the battles... or (if it were a very local project for me and mine) use Ruby with one of its sanitizing gems.
Battles and the web-services might not scale at the same speed.
It seems to me that handling messages - both client->server->battle and battle->server->client - is fairly easy work. However, handling the battle seems more resource intensive.
I'm convincing myself that a separation of concerns is almost unavoidable.
Having a server backend and a different battle backend would allow you to scale the battle handlers up more rapidly and without wasting resources on scaling the web-server before there's any need.
Network disconnections.
Assuming we allow the players to go offline while their agents "fight" in the field... what happens when we need to send our user "Mitchel", who just reconnected to server X, a message about a battle he left raging on server Y?
Separating concerns would mean that right from the start we have a communication system that is ready to scale, allowing our users to connect to different endpoints and still get their messages.
Summing these up, I would consider this as my workflow:
Http workflow:
Client -> Web Server: request an agent with an identifier and optional battle data (battle data is for creating an agent; omitting the battle data limits the request to an existing agent, if one exists).
This step might be automated based on Client authentication / credentials (i.e. session data / cookie identifier or login process).
If battle data exists in the request (a request to create):
Web Server -> Battle instance: create the agent if it doesn't exist.
If battle data is missing from the request:
Web Server -> Battle Database, to check if agent exists.
Web Server -> Client : response about agent (exists / created vs none)
If Agent exists or created, initiate a Websocket connection after setting up credentials for the connection (session data, a unique cookie identifier or a single-use unique token to be appended to the Websocket request query).
If the Agent doesn't exist, forward the client to a web form to fill in data such as the agent code, battle type etc.
Websocket "workflows" (non linear):
Agent has data: Agent message -> (Battle communication manager) -> Web Server -> Client
It's possible to put Redis or a similar DB in there, to allow messages to stack while the user is offline and to allow multiple battle instances and multiple web server instances.
Client updates to Agent: Client message -> (Battle communication manager) -> Web Server -> Agent
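For the part about putting Redis in the middle so battle instances and web servers can scale separately, here is a tiny sketch of that pub/sub hop; ioredis and the battle:<id> channel naming are just illustrative choices:

```typescript
import Redis from "ioredis";

// In a battle instance: publish each JSON frame for battle "b42".
const pub = new Redis();
function publishFrame(battleId: string, frame: object): void {
  pub.publish(`battle:${battleId}`, JSON.stringify(frame));
}

// In a web server instance: subscribe and forward to the WebSockets
// of every client currently watching that battle.
const sub = new Redis();
sub.subscribe("battle:b42");
sub.on("message", (channel, message) => {
  // look up the sockets attached to this battle and send `message` to each
  console.log(`frame on ${channel}:`, message);
});
```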

Get Source Tower Information From SMS at Destination

I'm planning to start an SMS-based application and am currently in the feasibility-study part. In my application, clients have to SMS their problem to the server, and we have to analyse the problem and take reasonable action. We also have to find the approximate location via the tower through which they were connected. I have read about the silent SMS feature but do not understand it. Does anybody have experience with how to detect the location of the SMS sender (not on Android or iPhone)? Please help me determine whether it is possible to find the location. If it is possible, then how?
In short, this is not possible.
An SMS message, whether in PDU mode or text mode, does not carry the information needed to match a source location to the message in any way, shape or form.
With reference to the article you linked to in your opening post, I'm sorry to say that there's so much B$$l S$$t in that post that I can smell it from here.
In all the years I've worked with GSM systems, both as a network maintenance engineer and later as a developer writing software to use these systems, not once have I heard of anything such as an 'LMU' or an 'E-OTD'; in fact, the only acronym that article really got correct was 'BTS', oh, and the bit about passing the data over the signalling channel.
As for the silent SMS, well that part actually is true. The special type of SMS they refer to is actually called a Ping-SMS and it exists for exactly the same reason that a regular PING on a TCP/IP network exists, and that's to see if the remote system is alive and responding.
What it's NOT used for is the purpose outlined in the article, and that's for criminal gangs to send it to your phone and find out where you are.
For one, the ONLY people that can correctly send these messages are the telephone operators themselves. That's not to say that it's impossible to send one from a consumer device by directly programming a PDU, if you have the necessary equipment and know-how. You could, for instance, pull this stunt off using a normal GSM modem, a batch of AT commands and some serious bit twiddling.
However, since this message would by its very nature have to go through your operator's SMSC, and most operators filter out anything from a subscriber connection that's not deemed regular consumer traffic, there's a high chance this would fail.
You could also, if you had an account, send this message using a web SMS provider that allows you to directly construct binary messages, but again they are likely to filter out anything not deemed a consumer-grade message.
Finally, if you were to manage to send such an SMS to a target device, the target device would not reply with anything anywhere near a chunk of location-based info, cell tower, GPS or otherwise. The reason the SMS operators (and ultimately the law enforcement agencies) know this info is that EVERY handset attached to the GSM network MUST register itself in the operator's MSC (Mobile Switching Centre); this registration (known as ratching up) is required by the network so it can track which channels are in use by which device on which towers, so that it knows where to send paging and signalling info.
Because of the way the PING SMS works, it causes the destination device to re-register itself, usually forcing the MSC to do a location update on the handset.
Even then, all you get in the MSC is an identifier of the cell site the device is attached to, so unless your organisation has a database of all cell sites along with their exact lat/long coordinates, it's really not going to help you all that much.
As for the triangulation aspect, well for that to work you'd need to know at least 2 other transmitters that the device in question can see, and what's more you'd need that device to report that info back to someone inside the network.
Since typically it's only the RIL (Radio Interface Layer) on the device that actually keeps track of which transmitters it can see, and since the AT commands for querying this information are disabled on many consumer-grade GSM modems, it's often not easy to get that info without actually hacking the firmware of the device in question.
How does Google do it? Well, quite easily: they have commercial agreements with network providers that pass the details of registered towers to their back-end infrastructure. In the apps themselves, they have ways of getting the 'BSS list' and sending it back to Google HQ, where it's cross-referenced with the data from the network operator and the info they have in their own very large transmitter database, and finally all this is mashed together with some insane maths to get an approximate location.
Some GSM modems and some mobile phone handsets do have the required AT commands enabled to let you get this information easily, and if you can then match that information to your own database you can locate the handset you're running from. But being able to send a special SMS to another device and get location info back is just a pipe dream, nothing more. Something like this is only going to work if your target device is already running some custom software that you can control, and if your device is running software that someone else is controlling, then you have bigger problems to worry about.
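For reference, on handsets and modems where the commands are enabled, the serving-cell identifiers are usually exposed through the standard +CREG command with location reporting switched on; the values below are made up and the exact response format varies by modem:

```
AT+CREG=2
OK
AT+CREG?
+CREG: 2,1,"00C3","5F2A"
OK
```

The last two fields are the Location Area Code and Cell ID of the serving cell, which you would then have to match against your own cell-site database, exactly as described above.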

Is there a way for Asterisk to reconnect calls when the internet connection is lost

To be specific, I am using Asterisk with a Heartbeat active/passive cluster. There are two nodes in the cluster; let's call them Asterisk1 and Asterisk2. Everything is well configured in my cluster. When one of the nodes loses its internet connection, the Asterisk service fails, or Asterisk1 is turned off, the Asterisk service and the failover IP migrate to the surviving node (Asterisk2).
The problem is that if we were actually processing a call when Asterisk1 fell down, Asterisk drops the call and I cannot redial until the Asterisk service is up on Asterisk2 (about 5 seconds, not a bad time).
But my question is: is there a way to make Asterisk work like Skype when it loses connection during a call? I mean, not dropping the call, but trying to reconnect it and re-establishing it once the Asterisk service is up on Asterisk2?
There are some commercial systems that support such behaviour.
If you want to do it on a non-commercial system, there are two ways:
1) Force a callback to all phones with the auto-answer flag set. Requirement: guru-level knowledge of Asterisk.
2) Use Xen and a memory-mapping/mirroring system to maintain on the other node a VPS with the same memory state (the same running Asterisk). Requirement: guru-level knowledge of Xen. See for example: http://adrianotto.com/2009/11/remus-project-full-memory-mirroring/
Sorry, both methods require a guru knowledge level.
Note: if you run SIP over an OpenVPN tunnel, you will very likely not lose calls inside the tunnel if the internet goes down for up to 20 seconds. That is not exactly what you asked, but it can work.
Since there is no accepted answer after almost 2 years I'll provide one: NO. Here's why.
If you fail over from Asterisk server 1 to Asterisk server 2, then Asterisk server 2 has no idea what calls (i.e. endpoint to endpoint) were in progress (even if you share a database of called numbers, use Asterisk Realtime, etc.). If Asterisk tried to bring up both legs of the call to the same numbers, these might not be the same endpoints of the call.
Another server cannot resume the SIP/TCP session of the failed server, since that session was closed with the failed server.
Even if the source/destination ports were identical, your firewall will not know you are trying to continue the same session.
etc.....
If your goal is high availability of phone services, take a look at the VoIP Info web site. All the rest (network redundancy, disk redundancy, shared block storage devices, router failover protocols, etc.) is a distraction; focus instead on early DETECTION of failures across all trunks/routes/devices involved with providing phone service, and then on providing the highest degree of recovery without sharing ANY DEVICES. (Too many HA solutions share a disk, channel bank, etc., creating a single point of failure.)
Your solution would require a shared database that is updated in real time on both servers. The database would be managed by an event logger that keeps track of all calls in progress, flagged as LINEUP perhaps. In the event a failure was detected, all calls that were on the failed server would be flagged as DROPPEDCALL. When your fail-over server spins up and takes over -- using heartbeat monitoring or somesuch -- the first thing it would do is generate a set of call files from all database records flagged as DROPPEDCALL. These calls can then be conferenced together.
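As an illustration of that recovery step, each DROPPEDCALL record could be turned into an Asterisk call file dropped into the outgoing spool. The channel, context and extension below are made up; the dialplan behind them would place the answered leg into a conference (ConfBridge/MeetMe), one call file per leg of the dropped call:

```
Channel: SIP/1001
Callerid: "Call recovery" <7000>
MaxRetries: 2
RetryTime: 15
WaitTime: 30
Context: recovered-calls
Extension: 42
Priority: 1
```

Dropping a file like this into /var/spool/asterisk/outgoing makes Asterisk originate the call; pointing both legs at the same conference extension reproduces the "conference them together" idea.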
The hardest part about it is the event monitor, ensuring that you don't miss any RING or HANGUP events, potentially leaving a "ghost" call in the system to be erroneously dialed in a recovery operation.
You likely should also have a mechanism to build your Asterisk config on a "management" machine that then pushes changes out to your farm of call-manager AST boxen. That way any node is replaceable with any other.
What you should likely have is two DB servers using replication techniques and Linux High-Availability (LHA) (1). Alternatively, DNS round-robin or load balancing with a "public" IP would do well, too. These machines will likely be under light enough load to host your configuration manager as well, with the benefit of getting LHA for "free".
Then, at least N+1 AST Boxen for call handling. N is the number of calls you plan on handling per second divided by 300. The "+1" is your fail-over node. Using node-polling, you can then set up a mechanism where the fail-over node adopts the identity of the failed machine by pulling the correct configuration from the config manager.
If hardware is cheap/free, then 1:1 LHA node redundancy is always an option. However, generally speaking, the failure rate for PC hardware and Asterisk software is fairly low; three or four "nines" out of the can. So, really, you're trying to cover the last bit of distance to the fifth "nine".
I hope that gives you some ideas about which way to go. Let me know if you have any questions, and please take the time to "accept" which ever answer does what you need.
(1) http://www.linuxjournal.com/content/ahead-pack-pacemaker-high-availability-stack
