I have a GraphQL WebSocket-based application in which I need to wait for a customer payment. The subscription handler is supposed to:
1. Send a request to the payment gateway to activate the payment terminal.
2. Wait for a webhook that gets called after payment, for further processing.
I am sorry I cannot post any code: after step 1, I have not been able to find which constructs (like a Mono or PubSub) to use to "wait" in this use case.
How can I associate the outgoing request with the webhook call?
I can provide any other details needed to make my question clearer.
There is still too little information but let me give you a rough sketch of how it could work.
After a request to the payment gateway has been sent, we need to do two things:
1. Create a stream (e.g. a Mono) to which you can send a signal.
2. Create a mapping between the (device or transaction) ID and the stream. To start, you can use a simple Map that is also accessible (through a service) by the webhook. For production, you may want to think about something more sophisticated.
When the webhook receives a request, you look up the stream for the ID and send a signal to it (this could also be an error signal). Whoever has subscribed to the stream then receives that signal and can continue.
Now, which construct to use? There are several alternatives; one is to use a Sink. Here is some pseudo-code(!):
In the method that sends the payment request:
activateTerminal(paymentDetails);
Sinks.Empty<Void> transactionSink = Sinks.empty();  // completes (or errors) exactly once
transactionMap.put(deviceId, transactionSink);      // so the webhook can find it later
return transactionSink.asMono();                    // the caller subscribes and "waits" on this Mono
Inside the webhook:
Sinks.Empty<Void> transactionSink = transactionMap.get(deviceId);  // look up the association (and remove it afterwards)
transactionSink.tryEmitEmpty();                                    // or tryEmitError(...) to signal a failure
The last statement completes the Mono, so whoever "waits" for it gets notified.
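Putting the two snippets together, a minimal sketch of such a shared service could look like the following (assuming Reactor; the class name, method names and the timeout are made up for illustration):

import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Sinks;

// Hypothetical service used by both the subscription handler and the webhook controller.
public class PaymentCompletionService {

    private final Map<String, Sinks.Empty<Void>> transactionMap = new ConcurrentHashMap<>();

    // Called right after activateTerminal(...): returns the Mono the GraphQL subscription can wait on.
    public Mono<Void> awaitPayment(String deviceId) {
        Sinks.Empty<Void> sink = Sinks.empty();
        transactionMap.put(deviceId, sink);
        return sink.asMono()
                .timeout(Duration.ofMinutes(5))                        // don't wait forever
                .doFinally(signal -> transactionMap.remove(deviceId)); // clean up the association
    }

    // Called by the webhook endpoint once the payment gateway reports the result.
    public void completePayment(String deviceId, boolean success) {
        Sinks.Empty<Void> sink = transactionMap.get(deviceId);
        if (sink == null) {
            return; // unknown or already completed transaction
        }
        if (success) {
            sink.tryEmitEmpty();  // completes the Mono
        } else {
            sink.tryEmitError(new IllegalStateException("payment failed"));
        }
    }
}

The subscription handler would call awaitPayment(deviceId) right after sending the activation request, and the webhook would call completePayment(...) when the gateway calls back.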
Please understand this answer only as a concept. The things you need to take a closer look at are:
How to store the association
The most appropriate way for you to send a signal to a stream (Mono).
If you have a specific question after that, I recommend creating a new one.
I am using an external API that does some work for 15 minutes; when it finishes, it calls whatever URL you defined in your initial request and sends the results to it.
Is it possible for Dialogflow to accept this result after 15 minutes? Is there a built-in async response handler in Dialogflow?
If you are calling external APIs via webhook, the call is subject to the maximum webhook timeout limit of 30 seconds. After the response timeout is exceeded, Dialogflow invokes a webhook error or timeout built-in event and continues processing as usual. Therefore, Dialogflow will no longer accept webhook responses that arrive after the set timeout limit.
Note that conversational interfaces are meant to be designed as a continuous message exchange between the end user and the app/bot. If your web service requires more time for executing operations in the background and this cannot be optimized, consider redesigning the conversation flow in such a way that end users don't wait for the app/bot to reply for more than the set webhook timeout limit.
If you have your own custom application (integrated using APIs or Client Libraries), you can instead call/invoke the function that needs 15 minutes of work (let’s call this function_1) from your custom application.
Here’s a basic setup:
1. The user enters a query from the interface of your custom application.
2. Your custom application sends the user query in a Detect Intent request to the Dialogflow agent (using the APIs or Client Libraries).
3. After your custom application receives the Detect Intent response from the agent, you can write code to get the intent name or event name from detectIntentResponse.queryResult.match.intent.displayName or match.event in the response JSON, respectively, and then call/invoke function_1 based on the intent or event that was matched (see the sketch after this list).
4. Once function_1 has finished processing, you can either send a direct response to the user in your custom application's interface, or send a Detect Intent request to your agent so it matches an intent and sends the intent response back to your custom application.
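For illustration, a rough sketch of steps 2 and 3 in Java, assuming the Dialogflow CX v3 client library (the project/location/agent/session values, the intent name and function_1 are placeholders; double-check the exact API shape for the edition you are using):

import com.google.cloud.dialogflow.cx.v3.DetectIntentRequest;
import com.google.cloud.dialogflow.cx.v3.DetectIntentResponse;
import com.google.cloud.dialogflow.cx.v3.QueryInput;
import com.google.cloud.dialogflow.cx.v3.SessionName;
import com.google.cloud.dialogflow.cx.v3.SessionsClient;
import com.google.cloud.dialogflow.cx.v3.TextInput;

public class DetectIntentCaller {

    public void handleUserQuery(String userText) throws Exception {
        // Placeholder identifiers for your agent and the user's session.
        SessionName session = SessionName.of("my-project", "global", "my-agent-id", "my-session-id");

        try (SessionsClient sessions = SessionsClient.create()) {
            QueryInput queryInput = QueryInput.newBuilder()
                    .setText(TextInput.newBuilder().setText(userText).build())
                    .setLanguageCode("en")
                    .build();

            DetectIntentRequest request = DetectIntentRequest.newBuilder()
                    .setSession(session.toString())
                    .setQueryInput(queryInput)
                    .build();

            DetectIntentResponse response = sessions.detectIntent(request);

            // queryResult.match.intent.displayName from the response JSON.
            String matchedIntent = response.getQueryResult().getMatch().getIntent().getDisplayName();
            if ("start.long.job".equals(matchedIntent)) { // placeholder intent name
                function_1();                             // the 15-minute job from the question
            }
        }
    }

    private void function_1() {
        // the long-running work goes here
    }
}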
No, it won't be possible as you describe it. The only way to call external services is through webhooks, but these are designed as calls that return a very specific object, which Dialogflow then returns to the user directly as the answer, so they are inherently synchronous.
What you could do instead is think of a workaround. I don't know the specifics of the service you're calling, but you could set up a small server to handle the webhook request from Dialogflow which doesn't do anything except trigger the call to the external API; when you get the answer, you could process it (put the relevant content inside a "fulfilment" object as per the Dialogflow specification) and trigger an event in your agent through the Dialogflow API.
So the final process could look something like this:
The user asks for, e.g., "pizza": the right intent is triggered, and the route for that intent calls your webhook server.
Your webhook server receives the call from Dialogflow and calls the external API, asking for the list of all pizzas ever created. It immediately returns an empty fulfilment to Dialogflow.
When the webhook server receives the response after 15 minutes, it triggers an event in the agent (look into the Dialogflow API for your programming language of choice: Python, Node, Java) and injects some parameters into the request, which you can then use to form a sentence in the agent (see the sketch after this list).
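For illustration, a rough sketch of that last step in Java, again assuming the Dialogflow CX v3 client library (the event name, parameter name and session handling are placeholders):

import com.google.cloud.dialogflow.cx.v3.DetectIntentRequest;
import com.google.cloud.dialogflow.cx.v3.DetectIntentResponse;
import com.google.cloud.dialogflow.cx.v3.EventInput;
import com.google.cloud.dialogflow.cx.v3.QueryInput;
import com.google.cloud.dialogflow.cx.v3.QueryParameters;
import com.google.cloud.dialogflow.cx.v3.SessionName;
import com.google.cloud.dialogflow.cx.v3.SessionsClient;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;

public class PizzaResultNotifier {

    // Called by the webhook server once the external API answers, ~15 minutes later.
    public void notifyAgent(String sessionId, int pizzaCount) throws Exception {
        SessionName session = SessionName.of("my-project", "global", "my-agent-id", sessionId);

        try (SessionsClient sessions = SessionsClient.create()) {
            QueryInput queryInput = QueryInput.newBuilder()
                    .setEvent(EventInput.newBuilder().setEvent("PIZZA_LIST_READY").build()) // placeholder event name
                    .setLanguageCode("en")
                    .build();

            // Inject the data you want the agent to use when forming its sentence.
            Struct parameters = Struct.newBuilder()
                    .putFields("pizza_count", Value.newBuilder().setNumberValue(pizzaCount).build())
                    .build();

            DetectIntentRequest request = DetectIntentRequest.newBuilder()
                    .setSession(session.toString())
                    .setQueryInput(queryInput)
                    .setQueryParams(QueryParameters.newBuilder().setParameters(parameters).build())
                    .build();

            DetectIntentResponse response = sessions.detectIntent(request);
            // response.getQueryResult().getResponseMessagesList() now contains the agent's reply,
            // which your application can push to the user.
        }
    }
}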
When I was just starting out, I found this very useful for getting a grasp of what the platform expects you to do when interacting with external services; take a look at the diagram especially, which I think makes it clearer.
Intro
Hey, my question is kind of hard to explain so I apologize in advance.
Question
I'm trying to implement microservices for our e-commerce platform, and I'm having trouble working out how to respond to a request when the actual logic and data need to be determined by two or three other services.
In order to make it easier to understand, I'll give an example.
Let's say User A is trying to buy a product. After clicking the "check out" button, these steps should happen:
Flow
Request comes in:
Ecommerce service:
Check if product has enough quantity in inventory.
Publish an event indicating a new order has been created. order:created
Anti Fraud service:
Receives order:created and checks whether the user is a fraud or not
Publishes an event indicating the check was successful. check:succeed
Payment Service:
Receives check:succeed and creates a URL to the gateway.
Sends the gateway URL to the user. ((This is where the question arises.))
Since all of these steps are asynchronous, how do I respond to the request?
Possible Solution
After the user has requested to check out, the ecommerce service creates an order and responds immediately with the orderId of the newly created order. On the client side, the user has to poll periodically to check whether the status of the order is PENDING PAYMENT. To achieve this, the payment service needs to publish payment:created after the order has been approved by the system, so the ecommerce service can update the order.
My solution works, but I'm really new to microservices and I want to ask experts like you how to implement this in a better way.
I really appreciate it if you read this far. Thank you for your time.
Your flow is a synchronous process: you need a result from the previous step, so it has to go step by step.
From the system's point of view:
What matters here is "how to handle the steps?", which brings to mind the saga design pattern (especially when you need rollback handling). In general there are two types, choreography and orchestration: choreography describes the interactions between multiple services, whereas orchestration represents control from one party's perspective.
For simplicity, you can implement the command pattern or use EAI (Enterprise Application Integration) tools like Apache Camel to handle messages between endpoints according to the flow.
If you have a lot of visitors, it's also better to use a queue between endpoints, whether with an orchestrator or without.
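To make the Camel idea more concrete, here is a rough sketch of routes wiring the events from the question together (the Kafka endpoints and bean names are made-up placeholders):

import org.apache.camel.builder.RouteBuilder;

public class CheckoutRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // order:created -> anti-fraud check -> check:succeed
        from("kafka:order.created?brokers=localhost:9092")
            .routeId("anti-fraud")
            .to("bean:antiFraudService?method=check")          // hypothetical bean doing the fraud check
            .to("kafka:check.succeed?brokers=localhost:9092");

        // check:succeed -> payment gateway URL -> payment:created
        from("kafka:check.succeed?brokers=localhost:9092")
            .routeId("payment")
            .to("bean:paymentService?method=createGatewayUrl") // hypothetical bean creating the gateway URL
            .to("kafka:payment.created?brokers=localhost:9092");
    }
}

The ecommerce service would still publish order:created itself when the checkout request arrives; Camel then just moves the messages along the flow.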
From the user's point of view:
When a user clicks to check out their cart, they don't expect many steps or to do more than just wait. Since keeping the connection open for the response is not a good idea, a loader with a periodic AJAX call behind it is probably enough, although there are other solutions such as push notifications (in which case you can consider a fire-and-forget mechanism).
Your workflow for handling a request, as it is defined, is totally synchronous. Each step depends on the previous step and cannot start until it finishes. However, the second step does not seem to need data from the first step, so they could actually be executed in parallel.
So, what can be done is to start both of them:
Check if product has enough quantity in inventory.
Check whether the user is a fraud or not.
then
wait for both responses and, if both are OK, create a URL to the gateway and send it to the user.
You can create a Camel route or use any other tool that implements EIPs to achieve this functionality.
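As a rough sketch of running the two checks in parallel without any extra tooling (the collaborating services and their methods are made up for illustration):

import java.util.concurrent.CompletableFuture;

public class CheckoutService {

    // Hypothetical collaborators; replace with your real services.
    interface InventoryService { boolean hasEnoughQuantity(String productId); }
    interface AntiFraudService { boolean isLegitimate(String userId); }
    interface PaymentService  { String createGatewayUrl(String orderId); }

    private final InventoryService inventoryService;
    private final AntiFraudService antiFraudService;
    private final PaymentService paymentService;

    CheckoutService(InventoryService i, AntiFraudService a, PaymentService p) {
        this.inventoryService = i;
        this.antiFraudService = a;
        this.paymentService = p;
    }

    public String checkout(String orderId, String productId, String userId) {
        // Start both checks at the same time; neither needs the other's result.
        CompletableFuture<Boolean> inventoryOk =
                CompletableFuture.supplyAsync(() -> inventoryService.hasEnoughQuantity(productId));
        CompletableFuture<Boolean> notFraud =
                CompletableFuture.supplyAsync(() -> antiFraudService.isLegitimate(userId));

        // Wait for both, then create the gateway URL only if both checks passed.
        if (inventoryOk.join() && notFraud.join()) {
            return paymentService.createGatewayUrl(orderId);
        }
        throw new IllegalStateException("checkout rejected");
    }
}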
We need to automate a few notifications from our web application. These get triggered at various phases; for example, step A, B or C would trigger emails to specific parties.
As an improvement to this, a Teams integration is being looked at, where a specific channel is created and messages can be posted to it with a webhook.
I created a custom channel with an incoming webhook and posted a JSON request (of type MessageCard), which was viewable in the channel. But the need is to establish a real conversation, not separate individual messages. By conversation, we mean a scenario or tree structure like the one below:
OverAll status 1 (Parent message)
--> subsequent reply (child message)
---> subsequent reply (child message)
I did some R&D and found that the incoming webhook POST request does not return any message ID (this feature doesn't exist).
What I do not understand is how bots (Azure or Microsoft) can help here.
Please advise
Webhooks/connectors are perfectly fine for single messages, as you're seeing, but I don't think they will give you the ability to create and then continue an existing "conversation" (i.e. a thread). You certainly could achieve something like this using a "bot"-based approach. In practice, it's kind of "bot + extra", because you need two things:
1) A bot registered in the channel. This will give you some key info you need to be able to send messages from outside Teams - something called a "proactive" message. Having the bot in the channel also means you have something with the authorization to send a message to the channel.
2) Next you need to implement the Proactive message. Have a look at my answer here to see more: Programmatically sending a message to a bot in Microsoft Teams (the answer is in C# - not sure what language/platform you're using, but the same concepts apply in Node)
In addition to the proactive message, once you send that first message, you need to store the message reference that comes back from "SendToConversationAsync". You then apply it to the subsequent messages, as I've described in the answer here: How to add a mention in Teams alongside an adaptive card using Bot Framework.
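To make the "store the reference, then reply into the thread" part more concrete, here is a rough sketch using plain HTTP against the Bot Connector REST API in Java (the service URL, conversation ID, activity ID and token handling are placeholders; in practice the Bot Framework SDK wraps these calls for you):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TeamsThreadReplier {

    private final HttpClient http = HttpClient.newHttpClient();

    // serviceUrl and channelConversationId come from the stored conversation reference
    // (serviceUrl without a trailing slash here); parentActivityId is the id returned
    // when the first proactive message was sent.
    public void replyInThread(String serviceUrl, String channelConversationId,
                              String parentActivityId, String botAccessToken,
                              String text) throws Exception {
        // Appending ";messageid=<id>" targets the existing thread instead of starting a new one.
        String threadId = channelConversationId + ";messageid=" + parentActivityId;
        String url = serviceUrl + "/v3/conversations/" + threadId + "/activities";

        String body = "{\"type\":\"message\",\"text\":\"" + text + "\"}"; // text not escaped; sketch only

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + botAccessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        // The response body contains the id of the new (child) activity.
    }
}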
Hope that helps
I am trying to send messages from several outside sources to a specific channel, which is private and belongs to me only. The username shown should be the name of the source, not my ID.
I found there are two ways to do something like this: Incoming Webhooks and chat.postMessage.
I have already tried both, and there seems to be no difference between them.
However, in Incoming Webhooks, a statement says:
You can't use Incoming Webhooks with Workspace Apps right now; those apps can request single channel write access and then use chat.postMessage in the Web API to post messages, providing very similar functionality to Incoming Webhooks.
What does it mean?
For my use case, which one is better?
With chat.postMessage you send a message to a specific channel; often you do that in response to a user's action. You will need a token to authorize the chat.postMessage request, which you receive when the user installs your app.
Incoming webhooks are often used to post general information, e.g. patch notes or general announcements.
As far as I know, you don't need a token for incoming webhooks, since the verification is built into the URL itself.
So the webhook URL is bound to a specific channel, which is chosen by the user. With chat.postMessage you can post messages anywhere (depending on your permissions; maybe not in private channels or direct messages).
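To illustrate the difference, here is what both calls look like as plain HTTP requests in Java (the webhook URL, token and channel ID are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SlackSender {

    private final HttpClient http = HttpClient.newHttpClient();

    // Incoming Webhook: the URL itself identifies the app and the target channel; no token needed.
    public void sendViaWebhook(String webhookUrl, String text) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"text\":\"" + text + "\"}"))
                .build();
        http.send(request, HttpResponse.BodyHandlers.ofString());
    }

    // chat.postMessage: needs a token, but lets you pick the channel per call and returns
    // the message's "ts", which you can use later (e.g. with chat.update).
    public void sendViaPostMessage(String token, String channelId, String text) throws Exception {
        String body = "{\"channel\":\"" + channelId + "\",\"text\":\"" + text + "\"}";
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://slack.com/api/chat.postMessage"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        // response.body() contains "ok", the channel, the "ts" of the new message, etc.
    }
}

The webhook variant can only ever hit the channel the hook was created for, while the chat.postMessage variant picks the channel per call and gives you the message "ts" back.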
Adding to what Ben said:
Incoming webhooks are limited in their functionality. They are great if you need an easy way to send a message that does not require a token, but in general the API method (chat.postMessage) is the better choice. It is more flexible (e.g. not fixed to one channel) and provides the full functionality (e.g. you get the ID for a message and can later update it).
Workspace apps/tokens were a new feature that allowed apps to be installed in one channel only (among other things). They never left their beta stage and can be safely ignored for further development.
Let's suppose the following simple use case (UC) based on a CQRS architecture:
We have a backend managing a business object, let's say a Movie.
This backend is composed of two microservices: a CommandManager (Create/Update/Delete Movie) and a QueryManager (Query Movie).
We have a frontend that offers a web page for creating a new Movie, and this action automatically leads to another web page describing the Movie.
A simple way to do that is:
A web page collects the movie information using a form and sends it to the frontend.
The frontend makes a POST request to the CommandManager.
The CommandManager writes the new movie to the datastore and returns the movie key.
The frontend makes a GET request with this key to the QueryManager.
The QueryManager looks up the Movie in the datastore using the key and returns it.
The frontend delivers the page with the movie information.
OK, now I want to transform this UC into a more event-driven design. Here is the new flow:
A web page collects the movie information using a form and sends it to the frontend.
The frontend writes a message to the bus with the new movie information.
The CommandManager listens to the bus and creates the new movie in the datastore. It then publishes a new message on the bus specifying that a new Movie has been created.
At this point, the frontend is no longer waiting for a response, because this kind of flow is asynchronous. How can we complete this flow in order to forward the user to the movie information web page? We need to wait until the creation process is done before querying the QueryManager.
In more general terms, in an asynchronous architecture based on a bus/events, how do you execute the queries used to provide information for a web page?
In addition to @VoiceOfUnreason's answer:
If the two microservices are RESTful, the CommandManager could return a 202 Accepted with a link pointing to the resource that will be created in the future. The client could then poll that resource until the server responds with a 200 OK.
Another solution would be for the CommandManager to return a 202 Accepted with a link pointing to a command/status endpoint. The client would poll that endpoint until the status is command-processed (including the URL to the actual resource) or command-failed (including a descriptive message for the failure).
These solutions could be augmented by sending the status of all processed commands using Server Sent Events. In this way, the client gets notified without polling.
If the client is not aware that the architecture is asynchronous, a solution is to use an API gateway that blocks the client's request until the upstream microservice has processed the command and then responds with the complete resource's data.
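A rough sketch of the 202-plus-polling variant with Spring MVC (the command bus, the status store and the URL layout are placeholders):

import java.util.Map;
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MovieCommandController {

    // Hypothetical collaborators: the first puts the command on the bus, the second is
    // updated by a listener when the corresponding MovieCreated event is seen.
    public interface CommandBus { String publishCreateMovie(Map<String, Object> movie); }
    public interface CommandStatusStore { CommandStatus find(String commandId); }
    public record CommandStatus(boolean processed, String movieId) {}

    private final CommandBus commandBus;
    private final CommandStatusStore statusStore;

    public MovieCommandController(CommandBus commandBus, CommandStatusStore statusStore) {
        this.commandBus = commandBus;
        this.statusStore = statusStore;
    }

    @PostMapping("/movies")
    public ResponseEntity<Void> createMovie(@RequestBody Map<String, Object> movie) {
        String commandId = commandBus.publishCreateMovie(movie);
        // 202: the command was accepted but not yet processed; tell the client where to poll.
        return ResponseEntity.accepted()
                .header(HttpHeaders.LOCATION, "/commands/" + commandId)
                .build();
    }

    @GetMapping("/commands/{commandId}")
    public ResponseEntity<Map<String, Object>> commandStatus(@PathVariable String commandId) {
        CommandStatus status = statusStore.find(commandId);
        if (status != null && status.processed()) {
            // command-processed: include the URL of the actual resource on the query side.
            return ResponseEntity.ok(Map.of("status", "command-processed",
                                            "movie", "/movies/" + status.movieId()));
        }
        return ResponseEntity.ok(Map.of("status", "command-pending"));
    }
}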
At this point, the frontend is no longer waiting for a response, because this kind of flow is asynchronous. How can we complete this flow in order to forward the user to the movie information web page? We need to wait until the creation process is done before querying the QueryManager.
Short answer: make the protocol explicit.
Longer answer: a good place to look for inspiration here is HTTP.
The front end makes a POST to the origin server; as a result the origin server places a message on the queue and sends a response back.
The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
The client can then poll the endpoint to find out what progress has been made.
For instance, the endpoint might be a query into the data store that looks for evidence that the command manager has processed the original command; or it might be an endpoint that is watching the bus for the MovieCreated message and changes its answer based on whether or not it has seen that.
It may help clarify things to look into idempotent request handling; when the Command Manager pulls a message off of its queue, how does it know if it has previously processed a copy of that message? Your polling endpoint should be able to use the same information to let the consumer know that the message has been successfully processed.
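A minimal sketch of that idea, with the processed-command bookkeeping doubling as the source for the polling endpoint (the command/event types and collaborators are placeholders, and a real system would use a durable store rather than an in-memory map):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CreateMovieHandler {

    // commandId -> movieId once processed; use a durable table instead of this map in production.
    private final Map<String, String> processedCommands = new ConcurrentHashMap<>();

    private final MovieStore movieStore; // hypothetical datastore access
    private final EventBus bus;          // hypothetical bus publisher

    public CreateMovieHandler(MovieStore movieStore, EventBus bus) {
        this.movieStore = movieStore;
        this.bus = bus;
    }

    public void handle(CreateMovieCommand command) {
        // Idempotence: the work runs at most once per commandId; redelivered copies are skipped.
        processedCommands.computeIfAbsent(command.commandId(), id -> {
            String movieId = movieStore.insert(command.title(), command.director());
            bus.publish(new MovieCreated(id, movieId));
            return movieId;
        });
    }

    // The polling endpoint can use the same bookkeeping: null means "not processed yet".
    public String movieIdFor(String commandId) {
        return processedCommands.get(commandId);
    }

    // Hypothetical message and collaborator types.
    public record CreateMovieCommand(String commandId, String title, String director) {}
    public record MovieCreated(String commandId, String movieId) {}
    public interface MovieStore { String insert(String title, String director); }
    public interface EventBus { void publish(Object event); }
}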
In addition to @Constantin Galbenu's answer, I would like to put in my two cents.
I would strongly advise you to look at a microservices pattern called the BFF (Backend-For-Frontend) pattern. Instead of having one thick API gateway doing all the work, you can have an API per use case. For example, in your case you could have an API called "CreateMovieBFFHandler" which receives the POST request from the front end and then coordinates with the other parts of the system (message queues, events, etc.) to track the status of the submitted request. The UI might have a protocol with this BFF handler such that if the response doesn't come back within X seconds, the front end considers it a failure; if the handler gets a successfully processed message from the message queue, or a "MovieCreated" event for this key, it can send a 200 OK back, and then you can redirect the page to call the query side and populate the UI (see the sketch below).
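A rough sketch of such a BFF handler, waiting a bounded time for the MovieCreated event before answering the browser (the command bus, the event-waiting helper and the timeout are placeholders):

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CreateMovieBFFHandler {

    // Hypothetical collaborators.
    public interface CommandBus { String publishCreateMovie(Map<String, Object> movie); }
    public interface EventWaiter { CompletableFuture<String> awaitMovieCreated(String commandId); }

    private final CommandBus commandBus;   // publishes the create command on the bus
    private final EventWaiter eventWaiter; // completes a future when MovieCreated arrives for this command

    public CreateMovieBFFHandler(CommandBus commandBus, EventWaiter eventWaiter) {
        this.commandBus = commandBus;
        this.eventWaiter = eventWaiter;
    }

    @PostMapping("/bff/movies")
    public ResponseEntity<Map<String, Object>> createMovie(@RequestBody Map<String, Object> movie) {
        String commandId = commandBus.publishCreateMovie(movie);
        CompletableFuture<String> created = eventWaiter.awaitMovieCreated(commandId);
        try {
            // "X seconds" budget agreed between the UI and this handler.
            String movieId = created.get(5, TimeUnit.SECONDS);
            return ResponseEntity.ok(Map.of("status", "created", "movieUrl", "/movies/" + movieId));
        } catch (TimeoutException e) {
            // The UI treats this as "still pending" (or as a failure, per your protocol).
            return ResponseEntity.accepted().body(Map.of("status", "pending", "commandId", commandId));
        } catch (Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(Map.of("status", "failed"));
        }
    }
}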
Useful Link: https://samnewman.io/patterns/architectural/bff/