I'm originating multiple sequential calls on freeswitch
originate {continue_on_fail=true,originate_continue_on_timeout=true,originate_timeout=20,ignore_early_media=true}[record_number=abcd,campaign=test-presidio,idbrand=2]sofia/gateway/c-gw-1/yyyyy|[record_number=efgh,campaign=test-presidio,idbrand=2]sofia/gateway/c-gw-1/xxxxxxx &park()
I'm using the bgapi.
is there an event that is raised when the originate command has processed all calls ? Is there a way to determine if all calls failed ?
thx
This question is a little dated, but a note here...
NEVER EVER process calls like this if you are running a dialer (as the OP seems to be doing). It will only cause you heartache, as it is not a clean way to originate calls, especially if hundreds of calls are being sent. You need to run these in separate threads (i.e. separate originate calls altogether). You can limit calls in your ESL/XML-RPC client if needed.
This is also invalid syntax for an enterprise originate. ':_:' should be used instead of '|'.
Are you passing this directly into the CLI? Or are you using XML-RPC? Or ESL? In the latter two scenarios, you can send off your calls and check them as they're in progress. Your language of choice should let you use error checking to find out what happened and how many calls succeeded vs. failed, etc.
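For example, with the Python ESL module that ships with FreeSWITCH (an inbound connection to the default 8021/ClueCon event socket; the gateway and numbers are copied from the question), a rough sketch of sending each leg as its own originate and checking the result could look like this:

```python
# Rough sketch, not production code: one originate per destination, each checked
# individually, instead of chaining destinations with '|' in a single originate.
# Per-call channel variables (record_number, campaign, ...) would go in the {...} block.
import ESL  # Python ESL module shipped with FreeSWITCH

NUMBERS = ["yyyyy", "xxxxxxx"]  # destinations from the question

con = ESL.ESLconnection("127.0.0.1", "8021", "ClueCon")
if not con.connected():
    raise RuntimeError("could not connect to the FreeSWITCH event socket")

results = {}
for number in NUMBERS:
    args = ("{originate_timeout=20,ignore_early_media=true}"
            "sofia/gateway/c-gw-1/%s &park()" % number)
    # api() blocks until the originate resolves; use bgapi() and watch for the
    # matching BACKGROUND_JOB event if you want the legs to run concurrently.
    reply = con.api("originate", args)
    body = reply.getBody() if reply else "-ERR no reply"
    results[number] = body  # "+OK <uuid>" on success, "-ERR <cause>" on failure

if all(body.startswith("-ERR") for body in results.values()):
    print("all calls failed:", results)
else:
    print(results)
```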
I am attempting to accomplish something along these lines with Quarkus and Narayana:
client calls service to start a process that takes a while: /lra/start
This call sets off an LRA, and returns an LRA id used to track the status of the action
client can keep polling some endpoint to determine status
service eventually finishes and marks the action done through the coordinator
client sees that the action has completed, is given the result or makes another request to get that result
Is this a valid use case? Am I visualizing the correct way this tool can work? Based on how the linked guide reads, it seems that the endpoints are more of a passthrough to the coordinator, notifying it that we start and end an LRA. Is there a more programmatic way to interact with the coordinator?
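Roughly, the client-side flow I have in mind looks like this (the endpoint paths and status values below are placeholders of my own, not part of any LRA API):

```python
# Hypothetical client polling loop for the flow described above.
import time
import requests

BASE = "http://localhost:8080"

# 1. Kick off the long-running action; the service starts an LRA and returns its id.
lra_id = requests.post(f"{BASE}/lra/start").text

# 2. Keep polling some status endpoint until the service reports the action as done.
while True:
    status = requests.get(f"{BASE}/lra/status", params={"lraId": lra_id}).text
    if status in ("Closed", "Cancelled"):  # made-up terminal states
        break
    time.sleep(1)

# 3. Fetch the result once the action has completed.
result = requests.get(f"{BASE}/lra/result", params={"lraId": lra_id})
print(result.text)
```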
Yes, it might be a valid use case, but in every case please read the MicroProfile LRA specification - https://github.com/eclipse/microprofile-lra.
The idea you describe is more or less one LRA participant executing in a new LRA and polling the status of that execution. This is not exactly what LRA is intended for, but it can certainly be used this way.
The main idea of LRA is the composition of distributed transactions based on the saga pattern. Basically, the point is to coordinate multiple services to achieve consistent results with an eventual consistency guarantee. So you see that the main benefit arises when you can propagate LRA through different services that either all complete their actions or all of their compensation callbacks will be called in case of failures (and, of course, only for the services that executed their actions in the first place). Here is also an example with the LRA propagation https://github.com/xstefank/quarkus-lra-trip-example.
EDIT: Sorry, I forgot to add the programmatic API that allows the same interactions as the annotations - https://github.com/jbosstm/narayana/blob/master/rts/lra/client/src/main/java/io/narayana/lra/client/NarayanaLRAClient.java. However, note that it is not in the specification and is specific to Narayana.
I'm trying to write logic (a JS script) to communicate with an external system. As far as I understand, the logic will be executed on all endorsing peers.
In this case, how can I avoid duplicate operations on the external system? For example, how do I increment a value in an external database? If I write logic to increment the value in JS, I think the value will be incremented by every endorsing peer.
I'd appreciate any comments.
Firstly, currently the only way you can interact with external systems is using the experimental post API. This allows your Transaction Processor function to HTTP POST data to an external system and then to process the response.
Documentation here:
https://hyperledger.github.io/composer/integrating/call-out.html
You are correct in stating that if you have 4 peers, then the chain code container for each peer will run your logic, so you'd expect to see 4 calls to your HTTP service. This is required because each peer node is independent and Fabric must achieve consensus across the peers.
The external functions should therefore (ideally) be side-effect free "pure" functions (idempotent), meaning that for a given set of input parameters you always get the same set of output results.
Clearly a function that returns an incrementing integer doesn't fit this description! You probably need to rethink how you are structuring your problem to make it compatible with a decentralised blockchain-based approach.
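One common way to make the duplicated calls harmless is to have the external system dedupe on the Composer transaction id, so that the identical POSTs from each endorsing peer collapse into a single update. A hypothetical sketch of such an external service (Python/Flask with an in-memory store; the /counter endpoint and the txId field are made up and not part of Composer):

```python
# Hypothetical external REST service, NOT Composer code: the transaction processor
# would POST {"txId": ...} to /counter, and duplicates from other peers are no-ops.
from flask import Flask, request, jsonify

app = Flask(__name__)
processed = {}  # txId -> value; a real service would use a database with a unique key on txId

@app.route("/counter", methods=["POST"])
def counter():
    tx_id = request.json["txId"]
    if tx_id not in processed:          # only the first peer's call performs the side effect
        processed[tx_id] = len(processed) + 1
    return jsonify({"value": processed[tx_id]})  # every duplicate gets the same answer back

if __name__ == "__main__":
    app.run(port=5000)
```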
I've been trying to implement a call centre type system using Taskrouter using this guide as a base:
https://www.twilio.com/docs/tutorials/walkthrough/dynamic-call-center/ruby/rails
Project location is Australia, if that affects call details.
This system dials multiple numbers (workers), and I have run into an issue where phones will continue to ring even after the call has been accepted or cancelled.
i.e. If Taskrouter calls Workers A and B, and A picks up first, A is connected to the customer, but B will continue to ring. If B then picks up the phone, they are greeted by a hangup tone. Ringing can continue for minutes until B picks up (I haven't checked whether it ever times out).
Something similar occurs if no one picks up and the call simply times out and is redirected to voicemail. As you can imagine, an endlessly ringing phone is pretty annoying, especially when there's no one on the other end.
I was able to replicate this issue using the above guide without modification (other than the minimum changes to set it up locally). Note that it doesn't dial workers simultaneously, rather it dials the first in line for a few seconds before moving to the next.
My interpretation of what is occurring is that Taskrouter is dialling workers, but not updating them when dialling should end, and simply moving on to the next stage of the workflow. It does update Worker status, so it knows if they've timed out for instance, but that doesn't update the actual call.
I have looked for any solutions to this and haven't found much about it except the following:
How to make Twilio stop dialing numbers when hangup() is fired?
https://www.twilio.com/docs/api/rest/change-call-state
These don't specifically apply to Taskrouter, but suggest that a call that needs to be ended can be updated and completed.
I am not too sure I can implement this, however, as Taskrouter seems to use the same CallSid for all calls being dialled within a Workflow, which makes it hard/impossible to separate each call, and would end the active call as well.
It also just seems wrong that Taskrouter wouldn't be doing this automatically, so I wanted to ask about this before I tinker too much and break things.
Has anyone run into this issue before, or is able/unable to replicate it using the tutorial code?
When testing I've noticed the problem much more on landline numbers, which may only be because mobiles have their own timeout/redirects. VOIPs seem to immediately answer calls, so they behave a bit differently.
Any help/suggestions appreciated, thanks!
The current suggested workaround is to not issue the Dequeue instruction immediately, but rather to issue a Call instruction on the REST API when the Worker wishes to accept the Inbound Call.
This will create an Outbound Call to bridge the two calls together, so you won't have many outbound calls for the same inbound caller at once.
Your implementation will depend on the behavior that you want to achieve:
Do you want to simul-dial both Workers?
Do you want to send the task to both Workers and whoever clicks to Accept the Task first will have the call routed to them?
If it's #2, this is a scenario where you're saying that the Worker should accept the Reservation (reservation.accepted) before issuing the Call.
If it's #1, you can issue either a Call Instruction or a Dequeue Instruction. The key is that you provide a DequeueStatusCallbackUrl or CallStatusCallbackUrl to receive call progress events. Once one of the outbound calls is connected, you will need to complete the other associated call. So, unfortunately, you will have to track which outbound calls are tied to which Reservation, using AssignmentCallbacks or EventCallbacks, to make that determination within your app.
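For example, with the Twilio Python helper library, the "complete the other call" step might look roughly like this (the reservation-to-CallSid mapping and the function name are assumptions about your app, not Taskrouter API):

```python
# Rough sketch: hang up the legs that are still ringing once one leg connects.
# calls_by_reservation is whatever mapping your app keeps (Reservation SID ->
# list of outbound CallSids), populated from your Assignment/Event callbacks.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

def complete_losing_calls(reservation_sid, answered_call_sid, calls_by_reservation):
    for call_sid in calls_by_reservation.get(reservation_sid, []):
        if call_sid != answered_call_sid:
            client.calls(call_sid).update(status="completed")  # ends the still-ringing leg
```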
In my code I have a server process repeatedly probing for incoming messages, which come in two types.
One of the two types will be sent once by each process to give a hint to the server process about its termination.
I was wondering if it is valid to use MPI_Bcast to broadcast these termination messages and MPI_Probe to probe for their arrival.
I tried using this combination, but it failed. The failure might have been caused by something else, so I would like anyone who knows about this to confirm.
No, you can only use MPI_Probe to test for point-to-point communications. For collective communications, the only way to participate at all is to actively make the collective call. From the definition of MPI_Probe in the standard: "The call matches the same message that would have been received by a call to MPI_RECV(..., source, tag, comm, status) executed at the same point in the program" -- i.e., it only matches point-to-point messages, just as Recv would.
With the new nonblocking collectives coming in MPI-3, you would, however, be able to use MPI_Test (or MPI_Wait) to check the status of the nonblocking request, just as you would with a nonblocking send/recv, although I haven't been following that WG's work too closely, so I don't know the details.
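For what it's worth, a rough sketch of that pattern using mpi4py's nonblocking broadcast (this assumes an MPI-3 capable library underneath; the payload and the polling loop are made up for illustration):

```python
# Every rank posts the nonblocking broadcast, then polls it with Test() while
# doing other work, instead of blocking inside MPI_Bcast.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg = np.zeros(1, dtype='i')
if rank == 0:
    msg[0] = 42                     # e.g. a "termination" payload

req = comm.Ibcast(msg, root=0)      # nonblocking collective (MPI-3)

done = False
while not done:
    # ... probe/receive point-to-point traffic, do other work here ...
    done = req.Test()               # poll the collective like a nonblocking send/recv

print("rank %d got %d" % (rank, msg[0]))
```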
I'm not sure that the MPI standard excludes this, but I don't see how it would be useful if it is possible. On the (rare) occasions when I've used mpi_probe, I've used it to find out the size of an incoming message; it can, of course, get other information about messages 'in flight' too. But mpi_bcast is a collective operation, so all the processes in a communicator already know everything about a message that you could use mpi_probe to find out. I think?
I've been reading the MSDN documentation for IcmpSendEcho2 and it raises more questions than it answers.
I'm familiar with asynchronous callbacks from other Win32 APIs such as ReadFileEx... I provide a buffer which I guarantee will be reserved for the driver's use until the operation completes with any result other than IO_PENDING, I get my callback in case of either success or failure (and call GetCompletionStatus to find out which). Timeouts are my responsibility and I can call CancelIo to abort processing, but the buffer is still reserved until the driver cancels the operation and calls my completion routine with a status of CANCELLED. And there's an OVERLAPPED structure which uniquely identifies the request through all of this.
IcmpSendEcho2 doesn't use an OVERLAPPED context structure for asynchronous requests. And the documentation is excessively minimalist about what happens if the ping times out or fails (failure could be lack of a network connection, a missing ARP entry for local peers, an ICMP destination-unreachable response from an intervening router for remote peers, etc.).
Does anyone know whether the callback occurs on timeout and/or failure? And especially, if no response comes, can I reuse the buffer for another call to IcmpSendEcho2 or is it forever reserved in case a reply comes in late?
I'm wanting to use this function from a Win32 service, which means I have to get the error-handling cases right and I can't just leak buffers (or if the API does leak buffers, I have to use a helper process so I have a way to abandon requests).
There's also an ugly incompatibility in the way the callback is made. It looks like the first parameter is consistent between the two signatures, so I should be able to use the newer PIO_APC_ROUTINE as long as I only use the second parameter when an OS version check returns Vista or newer? Although MSDN says "don't do a Windows version check", it seems like I need to, because the set of versions with the new argument isn't the same as the set of versions where the function exists in iphlpapi.dll.
Pointers to additional documentation or working code which uses this function and an APC would be much appreciated.
Please also let me know if this is completely the wrong approach -- i.e. if either using raw sockets or some combination of IcmpCreateFile+WriteFileEx+ReadFileEx would be more robust.
I use IcmpSendEcho2 with an event, not a callback, but I think the flow is the same in both cases. IcmpSendEcho2 uses NtDeviceIoControlFile internally. It detects some ICMP-related errors early on and returns them as error codes in the 12xx range. If (and only if) IcmpSendEcho2 returns ERROR_IO_PENDING, it will eventually call the callback and/or set the event, regardless of whether the ping succeeds, fails or times out. Any buffers you pass in must be preserved until then, but can be reused afterwards.
As for the version check, you can avoid it at a slight cost by using an event with RegisterWaitForSingleObject instead of an APC callback.
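If it helps, here is a hedged ctypes sketch of the event-based flow from the first paragraph (using a plain WaitForSingleObject rather than RegisterWaitForSingleObject, for brevity). Buffer sizing, the Status field offset, and the constants are best-effort and should be checked against the SDK headers before relying on them:

```python
# Sketch only: send one echo asynchronously with an event, wait for the event,
# then parse the reply buffer. Per the answer above, once ERROR_IO_PENDING is
# returned the event is always set eventually (success, failure or timeout),
# and only after that may the buffers be reused.
import ctypes
import ctypes.wintypes as wt
import socket
import struct

iphlpapi = ctypes.WinDLL("iphlpapi", use_last_error=True)
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

iphlpapi.IcmpCreateFile.restype = wt.HANDLE
iphlpapi.IcmpSendEcho2.restype = wt.DWORD
iphlpapi.IcmpSendEcho2.argtypes = [
    wt.HANDLE, wt.HANDLE, ctypes.c_void_p, ctypes.c_void_p,  # handle, Event, ApcRoutine, ApcContext
    wt.ULONG,                                                 # IPAddr destination (network byte order)
    ctypes.c_void_p, wt.WORD, ctypes.c_void_p,                # RequestData, RequestSize, RequestOptions
    ctypes.c_void_p, wt.DWORD, wt.DWORD]                      # ReplyBuffer, ReplySize, Timeout (ms)
iphlpapi.IcmpParseReplies.restype = wt.DWORD
iphlpapi.IcmpParseReplies.argtypes = [ctypes.c_void_p, wt.DWORD]
kernel32.CreateEventW.restype = wt.HANDLE
kernel32.WaitForSingleObject.restype = wt.DWORD
kernel32.WaitForSingleObject.argtypes = [wt.HANDLE, wt.DWORD]

ERROR_IO_PENDING = 997
INFINITE = 0xFFFFFFFF
IP_SUCCESS = 0            # ICMP_ECHO_REPLY.Status values from ipexport.h
IP_REQ_TIMED_OUT = 11010

icmp = iphlpapi.IcmpCreateFile()
event = kernel32.CreateEventW(None, True, False, None)   # manual-reset, initially unsignalled

payload = b"ping payload"
reply_size = 1024          # comfortably larger than sizeof(ICMP_ECHO_REPLY) + payload + 8
reply_buf = ctypes.create_string_buffer(reply_size)

# IPAddr is a 32-bit address kept in network byte order in memory.
dest = struct.unpack("=I", socket.inet_aton("192.0.2.1"))[0]

ret = iphlpapi.IcmpSendEcho2(icmp, event, None, None, dest,
                             payload, len(payload), None,
                             reply_buf, reply_size, 3000)
if ret == 0:
    err = ctypes.get_last_error()
    if err != ERROR_IO_PENDING:
        raise OSError("IcmpSendEcho2 failed immediately, error %d" % err)
    kernel32.WaitForSingleObject(event, INFINITE)   # buffers stay reserved until this returns

count = iphlpapi.IcmpParseReplies(reply_buf, reply_size)
if count:
    status = struct.unpack_from("<I", reply_buf, 4)[0]  # ICMP_ECHO_REPLY.Status (offset assumed)
    print("reply status:", "IP_SUCCESS" if status == IP_SUCCESS else status)
else:
    err = ctypes.get_last_error()
    print("no reply; status/error %d%s" %
          (err, " (IP_REQ_TIMED_OUT)" if err == IP_REQ_TIMED_OUT else ""))
```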