Dialogflow CX sends a "nested flow transition" error after 3 consecutive requests while managing different flows - dialogflow-cx

I have an agent managing eight different flows. Each flow returns to the Default Flow Start Page after processing the user request.
The issue is that after 3 consecutive requests I get the following error message:
"More than 10 nested flow transitions detected: [{ "Step 1": { "Type": "INITIAL_STATE", "InitialState": { "MatchedIntent": { "Id": "ee2030ba-164f-4b14-ade3-8760e6dbb91d", ..."
Then the agent stops working.
Does anybody know what this issue is about and how to deal with it?
How should I manage several flows if they are not allowed to come back to the default flow?
I am attaching a screenshot of the flow graph.
Thanks in advance,
Claudia

I tried to replicate your issue and created a flow with a sub-flow called "order flower":
I set the transition of the "order flower" sub-flow to the Default Start Flow:
Upon sending the user query in the simulator, the error message appears:
To fix this, you can use the End Flow page as the transition target to jump back to the caller flow, instead of explicitly naming the parent flow as the target.
Here's the final output when you use the End Flow page: the agent jumps back to the parent flow successfully, without an error.

Related

Breaking a possible infinite loop in AWS step functions

I am writing a state machine with the following functionality.
Start state -> Lambda1, which calls an external service's Describe API endpoint to get the State attribute of an item, for example "isOkay" or "isNotOkay" -> Choice state (depending on the state received): if "isOkay", move to the next state; if "isNotOkay", call Lambda1 again. This repeats until it gets an "isOkay" state. How can I put a limit on this custom retry loop so that I don't get stuck if I never receive an "isOkay" response?
You can pass a counter in your step's input and have the Lambda increment it. When it is returned on each retry, check it against a limit; if it crosses the limit, fail the Lambda with a custom exception, and define a separate step for handling that exception. A sketch of this idea follows after the links below.
https://docs.aws.amazon.com/step-functions/latest/dg/input-output-inputpath-params.html
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html
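Here is a minimal sketch of that counter pattern as a Java Lambda handler. The attempts and status field names, the limit of 10, and the class names are assumptions made for illustration, and the actual call to the Describe API is elided.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.HashMap;
import java.util.Map;

// Hypothetical Lambda handler: reads a counter from the state input, increments it,
// and fails with a custom exception once the limit is reached so a Catch block can
// route execution to a dedicated failure step.
public class DescribeWithRetryLimitHandler
        implements RequestHandler<Map<String, Object>, Map<String, Object>> {

    public static class RetryLimitExceededException extends RuntimeException {
        public RetryLimitExceededException(String message) {
            super(message);
        }
    }

    private static final int MAX_ATTEMPTS = 10; // assumed limit

    @Override
    public Map<String, Object> handleRequest(Map<String, Object> input, Context context) {
        // Read the counter from the state input (defaults to 0 on the first pass).
        int attempts = input.get("attempts") == null
                ? 0
                : ((Number) input.get("attempts")).intValue();

        if (attempts >= MAX_ATTEMPTS) {
            throw new RetryLimitExceededException(
                    "Gave up after " + attempts + " attempts without an isOkay status");
        }

        // ... call the external Describe API here and capture its status ...
        String status = "isNotOkay"; // placeholder result

        Map<String, Object> output = new HashMap<>(input);
        output.put("status", status);         // the Choice state branches on this
        output.put("attempts", attempts + 1); // incremented counter travels with the state
        return output;
    }
}

The Choice state can then either branch on the counter in the state's output or rely on the thrown exception plus a Catch block to break out of the loop.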

With Akka Streams, how do I know when a source has completed?

I have an Alpakka Elasticsearch Sink that I'm keeping around between requests. When I get a request, I create a Source from an HTTP request and turn that into a Source of Elasticsearch WriteMessages, then run that with mySource.runWith(theElasticseachSink).
How do I get notified when the source has completed? Nothing useful seems to be materialized.
Will completion of the source be passed to the sink, meaning I have to create a new one each time?
If yes to the above, would decoupling them somehow with Flow.fromSourceAndSink help?
My goal is to know when the HTTP download has completed (including the vias it goes through) and to be able to reuse the sink.
You can pass around the individual parts of a flow as you wish; you can even pass around the whole executable graph (those are immutable). The run() call materializes the flow, but does not change your graph or its parts.
1)
Since you want to know when the HTTP download has passed through the flow, why not use the full graph's Future[Done]? Assuming your call to Elasticsearch is asynchronous, this should be equivalent, since your sink just fires the call and does not wait.
You could also use Source.queue (https://doc.akka.io/docs/akka/2.5/stream/operators/Source/queue.html) and just add your messages to the queue, which then reuses the defined graph, so you can add new messages whenever processing is needed. This also materializes a SourceQueueWithComplete, allowing you to stop the stream.
Apart from this, you can reuse the sink wherever needed without having to wait for another stream that uses it.
2) As described above: no, you do not need to instantiate a sink multiple times.
Best Regards,
Andi
It turns out that Alpakka's Elasticsearch library also supports flow shapes, so I can have my source go via that and run it via any sink that materializes a future. Sink.foreach works fine here for testing purposes, for example, as in https://github.com/danellis/akka-es-test.
Flow fromFunction { product: Product =>
  WriteMessage.createUpsertMessage(product.id, product.attributes)
} via ElasticsearchFlow.create[Map[String, String]](index, "_doc")
to define es.flow and then
val graph = response.entity.withSizeLimit(MaxFeedSize).dataBytes
  .via(scanner)
  .via(CsvToMap.toMap(Utf8))
  .map(attrs => Product(attrs("id").decodeString(Utf8), attrs.mapValues(_.decodeString(Utf8))))
  .via(es.flow)

val futureDone = graph.runWith(Sink.foreach(println))

futureDone onComplete {
  case Success(_) => println("Done")
  case Failure(e) => println(e)
}

Running node-red flow with a client request

I have a Node-RED flow. I want to run the flow without clicking any trigger node like inject; instead, I want to run it from a client request coming from a Dialogflow bot. Has anyone encountered this problem?
Node-RED has an http in node. You can create an endpoint with it; the flow listens on that endpoint and is triggered whenever a request is sent to it. You can use this instead of an inject node.
I have solved the problem. It may be helpful for others. The solution is the following:
The inject node sends a POST request to an endpoint of the form inject/<inject node id>. So when I send a POST request to that endpoint, the flow runs. An example of the request looks like the following (a small client sketch follows below):
http://localhost:1880/inject/585915a7.b4f89c
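For illustration, here is a minimal sketch of firing that POST from Java 11+ (for example, from whatever backend handles the Dialogflow webhook). The URL and node id are taken from the example above; this assumes the Node-RED admin API is reachable without authentication, otherwise an Authorization header would also be needed.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerNodeRedFlow {

    public static void main(String[] args) throws Exception {
        // The inject node's endpoint from the example above; adjust host, port
        // and node id for your own Node-RED instance.
        String injectEndpoint = "http://localhost:1880/inject/585915a7.b4f89c";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(injectEndpoint))
                .POST(HttpRequest.BodyPublishers.noBody()) // an empty POST is enough to fire the inject node
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Node-RED responded with HTTP " + response.statusCode());
    }
}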
In your primary node, instead of
node.on('input', function (msg) {
    // ...
    node.send(msg);
});
write your logic in the node's .js file itself:
RED.nodes.registerType("PrimaryNode", function (config) {
    RED.nodes.createNode(this, config);
    var node = this;
    // ... your logic here; it runs as soon as the node is created ...
    node.send({ payload: value });
});
This is triggered when the node loads, so no trigger is needed to start your flow.

spring integration message released twice from aggregator

I have a Spring Integration flow that starts with a file inbound channel adapter, which picks up files and passes them through the system as messages.
After a few components, the messages are aggregated at an "Aggregator", from where they are released based on release strategies or by a group timeout of 30 seconds.
The downstream processing has another bunch of components up to the final one.
The problem I am facing is this:
When I send 33 files, which create 33 "groups/buckets" based on correlation IDs, aggregated at the "Aggregator", some of the files or messages seem to be "released" twice. The reason I conclude that is because I have a channel interceptor which shows a few messages passing through the "released" channel (appearing right after the aggregator) a second time, after having completed the downstream processing successfully the first time. Additionally, this behavior causes my application to not find a file and throw an exception, which I see. This leads me to conclude that the message bucket/group/corrID is somehow being "released" twice.
I have tried to debug this many ways, but essentially I want to know how a corrID/bucket, after being released and having successfully gone through all downstream components in a single thread, can be "released" again.
My question is: how can I debug this? I want to know what is making this message/bucket re-appear in the aggregator.
My aggregator is as follows,
<int:aggregator id="bufferedFiles" input-channel="inQueueForStage"
output-channel="released" expire-groups-upon-completion="true"
send-partial-result-on-expiry="true" release-strategy="releaseHandler"
release-strategy-method="canRelease"
group-timeout-expression="size() > 0 ? T(com.att.datalake.ifr.loader.utils.MessageUtils).getAggregatorTimeout(one, #sourceSnapshot) : -1">
<int:poller fixed-delay="${files.pickup.delay:3000}"
max-messages-per-poll="${num.files.pickup.per.poll:10}"
task-executor="executor" />
</int:aggregator>
Explanation of the aggregator: the size() > 0 applies to EACH correlation bucket. Each of the 33 files I am sending will spawn/generate/create a new bucket because of the file name, so the aggregator will have 33 buckets/groups/corrIds, and each bucket will contain only one file.
So the aggregator SpEL expression simply says that if no release strategy fires, then release the bucket/group after 30 seconds, provided the group indeed has at least some files.
My Channel inbound adapter is as follows:
<int-file:inbound-channel-adapter id="files"
channel="dispatchFiles" directory="${source.dir}" scanner="directoryScanner">
<int:poller fixed-delay="${files.pickup.delay:3000}"
max-messages-per-poll="${num.files.pickup.per.poll:10}" />
</int-file:inbound-channel-adapter>
Logs
Here is the log of the message completing the flow the first time. The completion time suggests it reached the last component, a "completionHandler" SA.
Explanation of the log: "cor" is the bucket/corrId that is being released twice. The reason I get the final exception is that during the first pass, the file is removed from its original location and processed. So the second time around, when this erroneous release happens, there is nothing left to process.
From the pictures it can be seen that the first batch/corrId/bucket is processed and finished around 11:09, and the second one is started around 11:10.
An important point I noticed: this behavior only happens when I have a global channel interceptor in which I am doing somewhat long processing. When this interceptor is commented out, the errors go away.
Question:
Is it possible for the aggregator to double-release a batch/corrId under any circumstance? How can I make the aggregator emit any logs?
Thanks
Edit 10:15pm
My channel following the aggregator has an interceptor as follows,
public Message<?> preSend(Message<?> message, MessageChannel channel) {
    LOGGER.info("******** Releasing from aggregator(interceptor), corrID:{} at time:{} ********",
            MessageUtils.getCorrelationId(message), new Date());
    finalReporter.callback(channel.toString(), message);
    return message;
}
From the aggregator down to the final completionHandler SA, I have single-threaded processing:
Aggregator -> releasedChannel -> some SA1 -> some channel -> ..... -> completionChannel -> completeSA
When I run with 33 partitions, let's follow corrId = "alh". The first time it is released, it looks like the following:
What it shows is that thread-5 released it and should process all the downstream components. But it leaves it mid-way, starts doing other things, and the message is picked up again by a different thread a little later, as follows:
That seems/seemed to be the problem.
Solution Update:
I did the following three things to sort of work around it, for the moment:
1) For some reason, my interceptors were doing return super.preSend(message, channel) instead of simply return message. I changed it to the latter (a sketch of this change follows below).
2) I had global channel interceptors; I removed the global ones and kept individual ones.
3) If the channel interceptors had any issues before returning, would that cause a new release?
Although I still see the scenario depicted in the pictures above, I am not getting double processing attempts, and as such the errors are avoided. I am still trying to make sense of this.
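For reference, a minimal sketch of that first change, assuming Spring Framework 4+ where ChannelInterceptorAdapter lives in org.springframework.messaging.support; the class name and the surrounding logic are hypothetical, not taken from the original post.

import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptorAdapter;

// Hypothetical interceptor illustrating change 1): return the message directly
// instead of delegating to super.preSend(..), and keep the callback work from
// propagating failures back into the messaging flow.
public class ReleaseLoggingInterceptor extends ChannelInterceptorAdapter {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        try {
            // ... logging / reporting callback goes here ...
        } catch (RuntimeException e) {
            // swallow or log; an exception thrown here would surface in the sender's thread
        }
        // Before: return super.preSend(message, channel);
        return message;
    }
}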
I understand it's too specific and difficult to explain; still thanks for the time and comments...
However, yes, I think #GaryRussell is right: since you use expire-groups-upon-completion="true", some partial groups may be released by the group-timeout-expression, and new messages with the same correlationId will then form a new group, which is released by the next group timeout. Your size() > 0 isn't good either: it means a partial group is going to be released after that group timeout. Maybe size() > 1? The group can't be size() == 0 anyway, because it is created on the first message, so if the group exists it contains at least one message. Yes, a group can be empty, but in that case the aggregator should be marked with expire-groups-upon-completion="false"; then the group is marked as completed and doesn't accept new messages.
After struggling with debugging and various blind scenarios, I believe that I at least have a workaround and a possible root cause. I will try to outline all the things that I modified.
Root Cause:
My interceptors were calling a common class with a common callback method. This method, based on the channel name the request was coming from, would decide the appropriate action to take. The actions were essentially collecting data, incrementing counters and persisting some information to the database.
It seems that some of those callbacks were having errors and, consequently, the thread was dying and the message was being re-released. I am not entirely sure about this, so please correct me if that's not the case.
But after I fixed those errors, the re-release issue seems to have subsided or vanished altogether.
The reason it was hard to diagnose was that I could not see the errors thrown during the callback method invocations; maybe I was catching them or maybe they were lost.
I also found that the issue only appeared with channel interceptors AFTER the aggregator. Interceptors before the aggregator did not present any issues; maybe because they were simpler...
To debug,
I removed the interceptors and made the callbacks directly from various components (SAs), removed the global interceptors, and tried to add individual interceptors for specific channels.
Thanks for all the help.

Invoking a spring action repeatedly without user interaction

I have a requirement like the one below:
A particular action should be invoked repeatedly without user interaction. For example, I have a message status page which displays JMS message status. The message status can be changed by a number of application components. What I want is for my status UI to pick up the latest message status. I need the action which displays the status UI to be called repeatedly at an interval of 5 seconds or so, so that the UI is displayed with the latest status.
How can I achieve this in Spring? Is it something like polling an action?
Any help is highly appreciated.
The easiest thing to do is to ask the server every few seconds using JavaScript and AJAX (pseudo-code using jQuery):
function askServerForStatus() {
    $.getJSON('/your-app/jms-status', function(response) {
        $('#status').text(response.status);
    });
}
setInterval(askServerForStatus, 5000); // every 5 seconds
A very simple example: it asks a Spring MVC controller mapped to /jms-status and expects the following JSON response (a matching controller sketch follows below):
{"status": "Processing..."}
Consider using setTimeout().
A more general, reliable and robust approach is to use WebSockets, Servlet 3.0 asynchronous support or Comet. Also have a look at Atmosphere.
