Dialogflow CX return partial response

I have a Dialogflow CX V3 chatbot built with the Google Dialogflow console. I send it text chat using the REST API.
const response = await sessionsClient.detectIntent(request);
And that works fine. But some routes in my Dialogflow agent call out to a webhook that may take time to respond. I want to use the Return Partial Response feature to present a message to the user before the webhook is called. That doesn't seem possible with the REST API, because it only offers the sessionsClient.detectIntent call, which waits until the whole fulfillment is finished before returning. The response does have a responseType field that may be PARTIAL, but that never seems to happen, and I cannot see how I would obtain the final result after a partial one.
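For reference, the request is built roughly like this (a minimal sketch, assuming the @google-cloud/dialogflow-cx Node client; the project, location, agent and session IDs are placeholders):
// Minimal sketch of the text detectIntent request; IDs below are placeholders.
const { SessionsClient } = require('@google-cloud/dialogflow-cx');

const sessionsClient = new SessionsClient();
const sessionPath = sessionsClient.projectLocationAgentSessionPath(
  'my-project', 'global', 'my-agent-id', 'my-session-id');

const request = {
  session: sessionPath,
  queryInput: {
    text: { text: 'Hello' },
    languageCode: 'en',
  },
};
const [response] = await sessionsClient.detectIntent(request);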
I went to the RPC API, which supports long-running operations, hoping to find a variant of DetectIntent that returns an operation so I can poll the operation multiple times. But I only found
rpc DetectIntent(DetectIntentRequest) returns (DetectIntentResponse)
and DetectIntentResponse is not an operation. Again, it includes a responseType field which may take the value PARTIAL. However, if I see a partial response, I'm at a loss as to how to get any further output from my API call.
rpc StreamingDetectIntent returns a stream of responses that can arrive in multiple pieces, but that seems to be applicable only to audio chat.
Can anyone tell me how to use Return Partial Response with a text-based Dialogflow CX chatbot, please?

Related

How to manage a slow callback function in the ESPAsyncWebServer library

I understand that delaying or yielding in ESPAsyncWebServer library callbacks is a no-no. However, my callback function needs to query another device via the Serial port. This process is slow and will crash the ESP32 as a result.
Here is an example:
void getDeviceConfig(AsyncWebServerRequest *request) {
  AsyncResponseStream *response =
      request->beginResponseStream("application/json");

  StaticJsonDocument<1024> doc;
  JsonArray array = doc.createNestedArray("get");
  for (size_t i = 0; i < request->params(); i++)
    array.add(request->getParam(i)->value());

  serializeJson(doc, Serial);
  /* At this point, the remote device determines what is being asked for
     and builds a response. This can take a fair bit of time depending on
     what is being asked (>1 sec) */
  response->print(Serial.readStringUntil('\n'));
  request->send(response);
}
I looked into building a response callback. However, I would need to know ahead of time how much data the remote device will generate. There's no way for me to know this.
I also looked into using a chunked response. In this case, the library will continuously call my callback function until I return 0 (which indicates that there is no more data). This is a good start, but doesn't quite fit. I can't inform the caller that there is definitely more data coming and I just haven't received a single byte yet. All I can do here is return 0, which will stop the caller.
Is there an alternative approach I could use here?
The easiest way to do this without major changes to your code is to separate the request and the response and poll periodically for the results.
Your initial request, as you have it written, would initiate the work. The callback handler would set a global boolean variable indicating there was work to be done and, if there were any parameters for the work, would save them in globals. Then it would return, and the client would see the HTTP request complete but wouldn't have an answer.
In loop() you'd look for the boolean that there was work to be done, do the work, store any results in global variables, set a different global boolean indicating that the work was done, and set the original boolean that indicated work needed to be done to false.
You'd write a second HTTP request that checked to see if the work was complete, and issue that request periodically until you got an answer. The callback handler for the second request would check the "work was done" boolean and return either the results or an indication that the results weren't available yet.
Doing it this way would likely be considered hostile on a shared server or public API, but you have 100% of the ESP32 at your disposal, so while the approach is wasteful, it doesn't really matter here.
It would also have problems if you ever issued a new request to do work before the first one was complete. If that is a possibility you'd need to move to a queueing system where each request created a queue entry for work, returned an ID for the request, and then the polling request to ask if work was complete would send the ID. That's much more complicated and a lot more work.
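As an illustration of the polling pattern (not part of the original answer), the client side could look something like the sketch below; the /start and /status endpoints and the response shape are made-up names that the ESP32 handlers would implement as described above:
// Hypothetical client-side polling loop for the approach described above.
// The /start and /status endpoints and the response shape are illustrative only.
async function getDeviceConfigPolled(baseUrl, params) {
  // Kick off the work; the ESP32 just records the request and returns immediately.
  await fetch(`${baseUrl}/start?${new URLSearchParams(params)}`);

  // Poll until loop() has finished the serial exchange and stored the result.
  while (true) {
    const res = await fetch(`${baseUrl}/status`);
    const body = await res.json();
    if (body.done) return body.result;           // work finished, result available
    await new Promise(r => setTimeout(r, 250));  // wait a bit before asking again
  }
}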
An alternate solution would be to use websockets. ESPAsyncWebServer supports async websockets. A websocket connection stays open indefinitely.
The server could listen for a websocket connection, and then instead of performing a new HTTP request for each query, the client would send an indication over the websocket that it wanted the server to do the work. The websocket callback would work much the same way as the regular HTTP server callback described above. But when the work was complete, the code doing it would just write the result back to the client over the websocket.
Like the polling approach this would get a lot more complicated if you could ever have two or more overlapping requests.
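For completeness, a rough browser-side sketch of the websocket variant; the /ws path, the message format, and the device address are illustrative assumptions, not part of the original answer:
// Illustrative browser-side counterpart of the websocket approach.
const ws = new WebSocket('ws://192.168.1.50/ws');   // device address is a placeholder

ws.onopen = () => {
  // Ask the ESP32 to do the work; it replies whenever the serial exchange finishes.
  ws.send(JSON.stringify({ cmd: 'getDeviceConfig', params: ['foo', 'bar'] }));
};

ws.onmessage = (event) => {
  const result = JSON.parse(event.data);   // the result pushed back by the server
  console.log(result);
};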

AWS API Gateway: Call a Lambda function asynchronously only when a request body parameter is not present

I have a Lambda function that will receive message events from Slack and based on that, perform some actions.
Since those actions include several other API calls, the execution time tends to exceed 3 seconds, which is the threshold after which Slack re-attempts sending the events.
To prevent Slack from re-attempting the events, I've made the Lambda call asynchronous in API Gateway by adding the "X-Amz-Invocation-Type" header with value 'Event' in the Integration request. The API will then return a 200 right away, without waiting for the function to have executed.
However, this now interferes with Slack's endpoint verification mechanism, which expects the response to include the challenge.
My question is: is there a way to dynamically change whether the function will execute asynchronously based on the request body (which will only include the challenge parameter when the endpoint is being verified)? Or perhaps there's another way to achieve the goal:
When the request body includes the "challenge" parameter: Return the challenge
When it doesn't: Asynchronously call the Lambda function
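One possible pattern (not from the question itself, just a sketch of the desired routing) is a thin synchronous dispatcher Lambda that answers the challenge directly and otherwise invokes the slow worker function asynchronously via the AWS SDK; the worker function name is a placeholder:
// Hypothetical dispatcher: echoes the Slack challenge, otherwise fires the worker async.
const { LambdaClient, InvokeCommand } = require('@aws-sdk/client-lambda');
const lambda = new LambdaClient({});

exports.handler = async (event) => {
  const body = JSON.parse(event.body || '{}');

  // Slack URL verification: echo the challenge back synchronously.
  if (body.challenge) {
    return { statusCode: 200, body: JSON.stringify({ challenge: body.challenge }) };
  }

  // Everything else: hand the event to the slow worker without waiting for it.
  await lambda.send(new InvokeCommand({
    FunctionName: 'slack-event-worker',     // placeholder name
    InvocationType: 'Event',                // asynchronous invocation
    Payload: Buffer.from(JSON.stringify(body)),
  }));

  return { statusCode: 200, body: '' };     // ack within Slack's 3-second window
};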

How to get historical Slack call id's?

Is there a way to get historical call IDs from the Slack API?
I would like to use the Slack calls.info API (https://api.slack.com/methods/calls.info) to get information on past Slack calls, but I cannot find a way to get the IDs of the calls. (The only way, per the documentation, is from the data returned by the calls.add API.)
I can see evidence in Slack of my past calls with other users, but when I call conversations.history, I do not see any data for calls, just messages.
Unfortunately, there is no method that will return call IDs retroactively. Something like calls.list would be helpful. Right now, I think your only option is to store the IDs when you call calls.add going forward, so that you can then use them in future calls.info calls.
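A rough sketch of what storing the ID at calls.add time could look like, assuming the @slack/web-api Node client; saveCallId is a placeholder for whatever persistence you use:
// Store the call id returned by calls.add so calls.info can be used later.
const { WebClient } = require('@slack/web-api');
const slack = new WebClient(process.env.SLACK_TOKEN);

async function startCall(externalId, joinUrl) {
  const { call } = await slack.calls.add({
    external_unique_id: externalId,
    join_url: joinUrl,
  });

  await saveCallId(call.id);   // saveCallId is a placeholder for your own storage
  return call.id;
}

// Later, the stored id can be fed to calls.info:
// const info = await slack.calls.info({ id: storedId });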

How to distinguish two responses that have the same status code but different response body?

I have an application where users can take part in puzzle-solving events. I have an API endpoint /events/{id} that is used to get data associated with a certain event.
Based on whether the event has ended, the response will differ:
If the event has ended, the endpoint will return event name, participants, scores etc. with status code 200
If the event has not ended, the endpoint will return event name, start time, end time, puzzles etc. with status code 200.
On the client side, what is the best way to distinguish these two responses from each other so I can decide which page to display, the results page or the event page? Is returning the same status code with different bodies a good way to accomplish my goal?
Some might answer that I should already know on the client side whether the event has ended and then query for data accordingly. But what if a user uses the address bar to navigate to an event? Then I will have no data to know whether it truly has ended. I wouldn't like to first make an API call to find out that it has (or has not) ended and then make another one for the results/puzzles.
Pass a boolean isFinished and return it inside the response object. If your response object is already defined, create a wrapper that holds the previous response DTO plus the boolean flag.
We also used a solution like this in one of our projects at work for a big company, so I would say it is a reasonably industry-accepted way of doing it.
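A minimal sketch of what that could look like (the field names and render functions are illustrative, not prescribed):
// Illustrative response shapes and the client-side check.
// Ongoing event:
//   { "isFinished": false, "event": { "name": "...", "startTime": "...", "puzzles": [...] } }
// Finished event:
//   { "isFinished": true,  "event": { "name": "...", "participants": [...], "scores": [...] } }

const res = await fetch(`/events/${id}`);
const body = await res.json();

if (body.isFinished) {
  showResultsPage(body.event);   // placeholder render functions
} else {
  showEventPage(body.event);
}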

Log POST parameters of API Gateway call by user

I want to log the POST values of some variables. The price should be calculated depending on what each user with a valid API key requests.
Let's say I want to save how often the user requests ?quality=high and ?quality=low, so I can do something like billing = high * 1 + low * 0.5.
I connected API Gateway to CloudWatch to log every request, and it logs everything, so I would be able to calculate the price with a script running over the entries. But there is no way to define what the logger should save, so there is a huge amount of excess data.
Another idea was to put a Lambda function in front of the API, where I can extract the necessary information from the request and save it somewhere else. But I don't know where I can place it. I was thinking about writing my own authorizer function and handling it there.
So is the best way to handle such a case to abuse the authorizer function to inspect the request and save some information?
An authorizer may be a bad fit for this situation as you will not have access to the full request.
You can simply use the Lambda proxy integration, do your processing, then call your downstream API from within the Lambda. This would not be dissimilar to the existing proxy Lambda mentioned in this blog post.
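A rough sketch of such a proxy-integration handler (the field access assumes a REST API proxy event; recordUsage and the backend URL are placeholders):
// Hypothetical proxy handler: record the quality parameter, then forward downstream.
exports.handler = async (event) => {
  // With Lambda proxy integration the full request is available here.
  const quality = (event.queryStringParameters || {}).quality || 'low';
  const apiKey = event.requestContext.identity.apiKey;

  // Record the usage somewhere you can bill from later (placeholder function).
  await recordUsage(apiKey, quality);

  // Then call the real backend and pass its answer back to the client.
  const upstream = await fetch(`https://backend.example.com/render?quality=${quality}`);
  return {
    statusCode: upstream.status,
    body: await upstream.text(),
  };
};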
