Twilio: Ruby: Callback status URL for calls running in parallel: Which call completed?

I have a list of phones which must be called by my Twilio app every so often. A cron job runs every minute and builds a list of all the calls due in the next minute, together with calls which did not complete properly in the last hour.
For each phone in the list, I have Ruby code which looks like this to kick off the phone calls (over the list "phones") in parallel; this snippet runs every minute.
phones.each do |phone|
  # Pass the phone record's id so the TwiML handler knows which call this is
  callbackurl = "http://myapp.com/twiliocallback?phone=#{phone.id}"
  data = {
    :from => '16135551234',
    :to   => phone.number,
    :url  => callbackurl
  }
  # The client could also be created once, outside the loop
  client = Twilio::REST::Client.new(ACCOUNT_SID, ACCOUNT_TOKEN, :ssl_verify_peer => false)
  client.account.calls.create(data)
end
However, if a phone call takes longer than a minute, I don't want the cron job to trigger another call to the same number while it is already in conversation with Twilio. Also, if a person hangs up in the middle of a call before I can update the status, I want a subsequent cron run to call that number again.
I know that I need a status attribute for the phone call (e.g. phone.status) with the values NOT_STARTED, IN_PROGRESS, and SUCCESSFULLY_COMPLETED, and a twilio_status attribute (e.g. phone.twilio_status), with the values TWILIO_NOT_STARTED, TWILIO_IN_PROGRESS, and TWILIO_COMPLETED.
Calls start off as
phone.status="NOT_STARTED"
phone.twilio_status="TWILIO_NOT_STARTED"
and as soon as I create the phone call, I can update the status of the call:
phone.status="IN_PROGRESS"
phone.twilio_status="TWILIO_IN_PROGRESS"
If the call completes properly within the success path, I can set the status
phone.status="SUCCESSFULLY_COMPLETED"
If I can figure out when the call has disconnected from Twilio, then I can tell whether a call was aborted by checking:
is_aborted = phone.status=="IN_PROGRESS" && phone.twilio_status=="TWILIO_COMPLETED"
However, I don't know how to run the code
phone.twilio_status="TWLIO_COMPLETED"
for a particular phone call when the caller hangs up in the middle of a Twilio call workflow, or even at the end of one.
Twilio seems to have a callback URL which is requested when a phone call completes, but it is not clear how the callback handler can figure out which of the parallel calls completed. Is there a way to do this so that I can tag the right call with the right status?

In every request to your server throughout the call, and in the status callback URL request, Twilio passes a unique CallSid value to identify the call. It is also returned in the XML or JSON data you get back when you first initiate the call.
https://www.twilio.com/docs/api/twiml/twilio_request
https://www.twilio.com/docs/api/rest/making-calls
You can store this CallSid value to track status throughout the call's lifecycle.
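For example, a minimal sketch of that approach (the status_callback parameter and the CallSid/CallStatus request values come from Twilio's docs; the twilio_call_sid column and the handler below are illustrative, not from the original post):

# When creating the call, ask Twilio for a status callback on completion,
# and remember which CallSid belongs to which phone record.
call = client.account.calls.create(
  :from            => '16135551234',
  :to              => phone.number,
  :url             => callbackurl,
  :status_callback => "http://myapp.com/twiliostatus"
)
phone.twilio_call_sid = call.sid # hypothetical column added to the model
phone.twilio_status   = "TWILIO_IN_PROGRESS"
phone.save!

# In the status-callback handler (e.g. a Rails controller action), Twilio
# POSTs CallSid and CallStatus, so the right record can be tagged:
def twiliostatus
  phone = Phone.find_by_twilio_call_sid(params[:CallSid])
  phone.update_attributes!(:twilio_status => "TWILIO_COMPLETED") if params[:CallStatus] == "completed"
  head :ok
end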

Related

Handling Asynchronous API Call in JMeter

I am using JMeter for functional testing; below is a problem I am facing, and I need some help or suggestions on how to overcome it.
I have a thread group that consists of 2 requests: the first is an API call and the second sends a message to ActiveMQ.
The flow is that I need to make the API call first (this will wait for a response), then send the message to a particular ActiveMQ queue, and only then will I get the response for the API.
But since JMeter executes requests sequentially, it gets stuck at the API call waiting for the reply and never executes the second part.
I worked on the solution below, but even that did not help:
1. Use a Parallel Controller and put both the API call and the ActiveMQ call under it.
2. Add a timer to the ActiveMQ call, so that it fires just after the API call (2 sec).
But when I checked in detail, I saw that both requests are sent at the same time and the timer never comes into effect.
Is there any way I can handle this scenario?
Please note that I will get a response to the API call only when I send the message to the particular ActiveMQ queue; otherwise it times out in a minute.
Your Parallel Controller approach will work; however, you need to amend the configuration a little bit, something like:
You could put your ActiveMQ request under a different Thread Group and use the Inter-Thread Communication Plugin for synchronization between the threads.
Alternatively, you can keep the current setup but replace the JMS Sampler with a JSR223 Sampler and send the message to ActiveMQ programmatically.
Textual code representation for your convenience:
import javax.jms.Session

// Give the API call a 2-second head start before publishing the message
sleep(2000)

def connectionFactory = new org.apache.activemq.ActiveMQConnectionFactory('your activemq URL')
def connection = connectionFactory.createConnection()
connection.start()
def session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
def destination = session.createQueue('your queue name')
def producer = session.createProducer(destination)
def message = session.createTextMessage('your message body')
producer.send(message)
// Closing the connection also closes the session and the producer
connection.close()
For your problem statement, the following design will work:
1. Use two Thread Groups; add the API call to the first and the ActiveMQ message to the second.
2. Add a delay to the second Thread Group so that it does not run before the first.
3. Run the Test Plan.
Use a While Controller. It will keep executing its samplers until the desired outcome is reached, and only then will the next request be executed.
Hope this helps.
Update:
The While Controller executes its samplers until its condition evaluates to the string 'false'. The condition can be any variable or function that eventually evaluates to 'false'.
So you need to specify a variable or function in the While Controller that has the value 'true' and becomes 'false' somewhere else in the script. Once it changes to 'false', JMeter exits the While loop.
For example, if you are using an XPath Extractor in your script which sets a variable named Status, whose value changes from 'Start' to 'Finish' during execution, and you want the script to run until 'Finish' has been reached, then you can use the expression ${__javaScript("'${imp_Status}'!='finish'",)} in your While Controller; it will execute the samplers under the controller until status = finish is met.
It is a sort of polling based on a certain condition. In your first API response, pick a value whose appearance indicates that the first API call succeeded, and use that as the condition.
It sounds like you just need to define timeouts for the HTTP Request.
If you define the Response Timeout as 60000 (milliseconds), it will only wait for a minute and then continue to the next request.
From the HTTP Request documentation:
Connect Timeout — number of milliseconds to wait for a connection to open (optional).
Response Timeout — number of milliseconds to wait for a response (optional). Note that this applies to each wait for a response; if the server response is sent in several chunks, the overall elapsed time may be longer than the timeout.

Safe await on function in another process

TL;DR
How do I safely await a function execution (the function takes a str and an int as arguments and doesn't require any other context) in a separate process?
Long story
I have an aiohttp.web API that uses a Boost.Python wrapper for a C++ extension, runs under Gunicorn (I plan to deploy it on Heroku), and is load-tested by Locust.
About the extension: it has just one function, which performs a blocking operation: it takes one string (and one integer for timeout management), does some calculations with it, and returns a new string. For every input string there is only one possible output (except on timeout, in which case a C++ exception must be raised and translated by Boost.Python to a Python-compatible one).
In short, a handler for a specific URL executes the code below:
res = await loop.run_in_executor(executor, func, *args)
where executor is a ProcessPoolExecutor instance and func is the function from the C++ extension module. (In the real project this code lives in a coroutine method of a class, and func is a classmethod that only executes the C++ function and returns the result.)
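For reference, a minimal self-contained version of that pattern (the names and the stand-in func are illustrative, not from the project):

import asyncio
from concurrent.futures import ProcessPoolExecutor

def func(data, timeout):
    # Stand-in for the Boost.Python extension call
    return data.upper()

executor = ProcessPoolExecutor(max_workers=4)

async def handler(data):
    loop = asyncio.get_event_loop()
    # func runs in a worker process, so the event loop is never blocked
    return await loop.run_in_executor(executor, func, data, 5)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(handler('some input')))  # SOME INPUT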
Error catching
When a new request arrives, I extract its POST data via request.post() and then store the data in an instance of a custom class named Call (because I have no idea what else to name it). The call object contains all the input data (the string), the request receiving time, and the unique id that came with the request.
It then proceeds to a class named Handler (not the aiohttp request handler), which passes its input to another class's method with loop.run_in_executor inside. Handler has a logging system that works like middleware: it reads the id and receiving time of every incoming call object and logs a message saying whether it is just starting to execute, has successfully executed, or has run into trouble. Handler also has try/except and stores all errors inside the call object, so that the logging middleware knows what error occurred, or what output the extension returned.
Testing
I have a unit test that just creates 256 coroutines with this code inside, and an executor with 256 workers, and it works well.
But when testing with Locust, a problem appears. I use 4 Gunicorn workers and 4 executor workers for this kind of testing. At some point the application just starts to return wrong output.
My Locust TaskSet is configured to log every failed response with all available information: the output string, the error string, the input string (which the application returns as well), and the id. All simulated requests are the same, but the id is unique to each one.
The situation is better when setting Gunicorn's max_requests option to 100 requests, but failures still occur.
The interesting thing is that sometimes I can trigger a "wrong output" period simply by stopping and restarting the Locust test.
I need a 100% guarantee that my web API works as I expect.
UPDATE & solution
I just asked my teammate to review the C++ code: the problem was global variables. Somehow this wasn't a problem for 256 parallel coroutines, but under Gunicorn it was.

Basic Sidekiq Questions about Idempotency and functions

I'm using Sidekiq to perform some heavy processing in the background. I looked online but couldn't find the answers to the following questions. I am using:
Class.delay.use_method(listing_id)
And then, inside the class, I have:
def self.use_method(listing_id)
  listing = Listing.find_by_id(listing_id)
  UserMailer.send_mail(listing)
  Class.call_example_function()
end
Two questions:
How do I make this function idempotent for the UserMailer send_mail call? In other words, if the delayed method runs twice, how do I make sure that it only sends the mail once? Would wrapping it in something like this work?
mail_sent = false
if !mail_sent
  UserMailer.send_mail(listing)
  mail_sent = true
end
I'm guessing not, since on a retry the method runs again from the top and mail_sent is reset to false for the second run-through. So how do I make it so that UserMailer is only run once?
Are functions called within the delayed async method also asynchronous? In other words, is Class.call_example_function() executed asynchronously (not as part of the request/response cycle)? If not, should I use Class.delay.call_example_function()?
Overall, just getting familiar with Sidekiq so any thoughts would be appreciated.
Thanks
I'm coming to this late, but having been around the loop, and with this StackOverflow entry appearing prominently via Google, it needs clarification.
The issue of idempotency and the issue of unique jobs are not the same thing. The 'unique' gems look at the parameters of a job at the point it is about to be processed. If they find that another job with the same parameters was submitted within some expiry time window, then the job is not actually processed.
The gems are literally what they say they are: they consider whether an enqueued job is unique within a certain time window. They do not interfere with the retry mechanism. In the case of the O.P.'s question, the e-mail would still be sent twice if Class.call_example_function() threw an error, causing a job retry, after the previous line of code had already sent the e-mail.
Aside: The sidekiq-unique-jobs gem mentioned in another answer has not been updated for Sidekiq 3 at the time of writing. An alternative is sidekiq-middleware which does much the same thing, but has been updated.
https://github.com/krasnoukhov/sidekiq-middleware
https://github.com/mhenrixon/sidekiq-unique-jobs (as previously mentioned)
There are numerous possible solutions to the O.P.'s email problem, and the correct one is something only the O.P. can assess in the context of their application and execution environment. One would be: if the e-mail is only ever going to be sent once ("Congratulations, you've signed up!"), then a simple flag on the User model, wrapped in a transaction, should do the trick. Assuming a class User accessible as an association through the Listing via listing.user, and adding a boolean flag mail_sent to the User model (with a migration), then:
listing = Listing.find_by_id(listing_id)

unless listing.user.mail_sent?
  User.transaction do
    listing.user.mail_sent = true
    listing.user.save!
    UserMailer.send_mail(listing)
  end
end

Class.call_example_function()
...so that if the user mailer throws an exception, the transaction is rolled back and the change to the user's flag is undone. If the call_example_function code throws an exception, the job fails and will be retried later, but the user's "e-mail sent" flag was successfully saved on the first try, so the e-mail won't be resent.
Regarding idempotency, you can use https://github.com/mhenrixon/sidekiq-unique-jobs gem:
All that is required is that you specifically set the sidekiq option for unique to true, like below:
sidekiq_options unique: true
For jobs scheduled in the future it is possible to set for how long the job should be unique. The job will be unique for the number of seconds configured, or until the job has been completed.
If you want the unique job to stick around even after it has been successfully processed, then just set unique_unlock_order to anything except :before_yield or :after_yield (e.g. unique_unlock_order = :never).
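In the O.P.'s setup that option would live on a worker class rather than on the delay call; a minimal sketch, assuming the unique: true option quoted above (the worker name and body are illustrative, not from the gem's docs):

class UseMethodWorker
  include Sidekiq::Worker
  # Duplicate jobs with the same arguments are dropped within the uniqueness window
  sidekiq_options unique: true

  def perform(listing_id)
    listing = Listing.find_by_id(listing_id)
    UserMailer.send_mail(listing)
  end
end

# Enqueued with:
UseMethodWorker.perform_async(listing_id)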
I'm not sure I understand the second part of the question. When you delay a method call, the whole method call is deferred to the Sidekiq process. If by 'response / request cycle' you mean that you are running a web server and you call delay from there, then all the calls within use_method are made from the Sidekiq process, and hence outside of that cycle. They are called synchronously relative to each other, though...
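To illustrate the point (a hypothetical sketch, not the O.P.'s code):

# In the web process: returns immediately, having only enqueued a job
Class.delay.use_method(listing_id)

# Later, in the Sidekiq process, use_method runs and its internal calls
# (find_by_id, send_mail, call_example_function) execute synchronously,
# one after another. To make the last call asynchronous in its own right,
# it would need to be enqueued separately:
Class.delay.call_example_function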

Spring @Async cancel and start?

I have a Spring MVC app where a user can kick off report generation via a button click. This process can take a while, roughly 10-20 minutes.
I use Spring's @Async annotation around the service call so that report generation happens asynchronously, while I pop up a message telling the user the job is currently running.
Now what I want is that if another user (an Admin) kicks off report generation via the button, it should cancel/stop the currently running @Async task and start the new task.
To do this, I call:

// ...
future = getCurrentTask(id); // returns the current task for the given report id
if (!future.isDone())
    future.cancel(true);
service.generateReport(id);
How can I make it so that service.generateReport waits while the future.cancel call kills all the running threads?
According to the documentation, after I call future.cancel(true), isDone() will return true, and isCancelled() will return true as well. So there is no way of knowing when the job has actually stopped.
I can only start a new report generation once the old one is cancelled or completed, so that it does not dirty the data.
From the documentation for the cancel() method:
"Subsequent calls to isCancelled() will always return true if this method returned true."
Try this:

future = getCurrentTask(id); // returns the current task for the given report id
if (!future.isDone()) {
    boolean terminatedImmediately = future.cancel(true);
    if (terminatedImmediately) {
        service.generateReport(id);
    } else {
        // Inform the user the existing job couldn't be stopped, and to try again later
    }
}
Assuming the code above runs in thread A and your recently cancelled report is running in thread B, you need thread A to stop before service.generateReport(id) and wait until thread B has completed or been cancelled.
One approach is to use a Semaphore. Assuming only one report can run concurrently, first create a semaphore object accessible by all threads (normally on the report-runner service class):
Semaphore semaphore = new Semaphore(1);
At any point in your code where you need to run the report, call the acquire() method; it blocks until a permit is available. Similarly, when report execution finishes or is cancelled, make sure release() is called; it puts the permit back and wakes up a waiting thread.
semaphore.acquire(); // blocks until a permit is available
try {
    // run report..
} finally {
    semaphore.release(); // always give the permit back, even if cancelled
}
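Putting the pieces together, a rough sketch of how the semaphore could sit inside the @Async service (class and method names are illustrative, not from the original post):

import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // One permit: at most one report generation runs at a time
    private final Semaphore semaphore = new Semaphore(1);

    @Async
    public Future<Void> generateReport(long id) throws InterruptedException {
        semaphore.acquire(); // blocks until the cancelled run actually releases its permit
        try {
            // ... long-running report generation; check
            // Thread.currentThread().isInterrupted() periodically so that
            // future.cancel(true) can take effect promptly
            return new AsyncResult<>(null);
        } finally {
            semaphore.release();
        }
    }
}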

PostThreadMessage fails

I have created a UI thread, and I post messages to it which it is supposed to write to a file.
I am using the PostThreadMessage API to post the messages to the user thread. My problem is that it does not write all the data I have posted. For instance, if I post 100 items, it writes a seemingly random number of them, 3 or 98, varying with every execution. The handler for the posted data is not getting called for every message.
CWriteToFile *m_pThread = (CWriteToFile *)AfxBeginThread(RUNTIME_CLASS(CWriteToFile));
PostThreadMessage(m_pThread->m_nThreadID, WM_WRITE_TO_FILE, (WPARAM)pData, 0);
// Wait on the thread's handle; passing the CWinThread* itself to
// WaitForSingleObject (as in the original snippet) is a bug
WaitForSingleObject(m_pThread->m_hThread, INFINITE);
The return value of PostThreadMessage indicates success.
The PostMessage family of functions can fail if the message queue is full. You should check whether or not the function call succeeds.
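A minimal sketch of that check (the error handling shown is illustrative):

// Check every post; GetLastError() tells you why a post failed.
if (!PostThreadMessage(m_pThread->m_nThreadID, WM_WRITE_TO_FILE, (WPARAM)pData, 0))
{
    const DWORD err = GetLastError();
    // ERROR_NOT_ENOUGH_QUOTA: the target queue is full (10,000 messages by default).
    // ERROR_INVALID_THREAD_ID: the thread has no message queue yet.
    TRACE(_T("PostThreadMessage failed: %lu\n"), err);
    delete pData; // assumes the receiving handler normally frees pData
}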
