I'm using PayPal webhooks to get subscription information automatically.
However, we have to wait about 20 seconds between the payment and the subscription activation.
Is it because of the sandbox environment? Is the production environment faster?
This is important because the customers have to wait, and it would be better if that waiting time could be avoided.
The sandbox is slower in general, but you will need to test for yourself in the live environment -- and the speed of asynchronous notifications varies under different conditions.
If you need a faster notification, what you can do is have the client-side onApprove event call your server (with a JS fetch similar to this demo, plus a body payload if desired), and have the server route that handles that fetch use the Subscriptions API to get the status of the subscription and check whether it is in fact active in that API response, direct from PayPal.
Such a client-side trigger of a server route happens in parallel with waiting for the webhook notification, so whichever completes first marks the subscription as active in your records. This way you are not relying on the client-side trigger alone or waiting only for the webhook, but rather on whichever happens first.
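As a rough sketch of that approach, assuming the PayPal JS SDK subscription buttons and a Node-style server: the plan ID, route name, and the getAccessToken/markSubscriptionActive helpers are placeholders, while the endpoint shown is the v1 Subscriptions API status lookup.

```typescript
// Client side: notify our own server as soon as the buyer approves,
// instead of waiting for the webhook alone.
declare const paypal: any; // provided by the PayPal JS SDK script tag

paypal.Buttons({
  createSubscription: (_data: any, actions: any) =>
    actions.subscription.create({ plan_id: 'P-XXXXXXXX' }), // placeholder plan ID
  onApprove: (data: any) =>
    fetch('/api/confirm-subscription', { // hypothetical route on our server
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ subscriptionID: data.subscriptionID }),
    }),
}).render('#paypal-button-container');

// Server side: verify the status directly with the Subscriptions API.
declare function getAccessToken(): Promise<string>;                 // hypothetical OAuth helper
declare function markSubscriptionActive(id: string): Promise<void>; // hypothetical, should be idempotent

async function confirmSubscription(subscriptionID: string): Promise<boolean> {
  const token = await getAccessToken();
  const res = await fetch(
    `https://api-m.paypal.com/v1/billing/subscriptions/${subscriptionID}`,
    { headers: { Authorization: `Bearer ${token}` } },
  );
  const subscription = await res.json();
  if (subscription.status === 'ACTIVE') {
    await markSubscriptionActive(subscriptionID); // the webhook may also fire later
    return true;
  }
  return false;
}
```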
I am implementing a REST API that internally places a message on a message queue and receives a message as a response on a different topic.
How could the API implementation handle publishing and consuming different messages and respond to the client?
What if it never receives a message?
How does the service handle this time-out scenario?
Example
I am implementing a REST API to process an order. The implementation internally publishes a series of messages to verify the payment, update inventory, and prepare shipping info. Finally, it sends the response back to the client.
Queues are too low-level an abstraction to implement your requirements directly. Look at an orchestration solution like temporal.io that makes programming such async systems trivial.
Disclaimer: I'm one of the founders of the Temporal open source project.
How could the API implementation handle publishing and consuming different messages and respond to the client?
Even though messaging systems can be used in an RPC-like fashion:
there is a request topic/queue and a reply topic/queue
with a request identifier in the messages' header/metadata
this type of communication kills the promise of the messaging system: decoupling components in time and space.
Back to your example. If ServiceA receives the request, it publishes a message to topicA and returns a 202 Accepted status code to indicate that the request has been received but not yet processed completely. In the response you can include a URL from which the consumer of ServiceA's API can retrieve the latest status of its previously issued request.
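A minimal sketch of that accept-then-poll pattern, assuming an Express-style ServiceA (the routes, statuses, in-memory store, and publishToTopicA helper are made up for illustration):

```typescript
import express from 'express';
import { randomUUID } from 'crypto';

declare function publishToTopicA(msg: { id: string; payload: unknown }): void; // hypothetical MQ publish

const app = express();
app.use(express.json());

// Hypothetical in-memory status store; a real service would persist this.
const requests = new Map<string, { status: string }>();

app.post('/orders', (req, res) => {
  const id = randomUUID();
  requests.set(id, { status: 'ORDER_REQUESTED' });
  publishToTopicA({ id, payload: req.body });
  // 202 Accepted: received, not yet processed; tell the client where to check progress.
  res.status(202).location(`/orders/${id}/status`).json({ statusUrl: `/orders/${id}/status` });
});

app.get('/orders/:id/status', (req, res) => {
  const entry = requests.get(req.params.id);
  if (!entry) return res.sendStatus(404);
  res.json({ id: req.params.id, status: entry.status });
});

app.listen(3000);
```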
What if it never receives a message?
In that case, the request-related data remains in the same state it was in at the time the message was published.
How does the service handle this time-out scenario?
You can create scheduled jobs to clean up requests that never finished or got stuck. Based on your business requirements, you can simply delete them or hand them over to customer service for manual processing.
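For illustration only, such a clean-up job could be as simple as a periodic task like the following (the threshold, query, and moveToManualProcessing helper are assumptions):

```typescript
// Hypothetical clean-up job for requests that never received a reply message.
declare function findRequestsOlderThan(ms: number): Promise<Array<{ id: string }>>; // hypothetical query
declare function moveToManualProcessing(id: string): Promise<void>;                 // hypothetical hand-over

const STUCK_AFTER_MS = 30 * 60 * 1000; // consider a request stuck after 30 minutes

setInterval(async () => {
  const stuck = await findRequestsOlderThan(STUCK_AFTER_MS);
  for (const request of stuck) {
    // Depending on the business rules: delete it, or hand it to customer service.
    await moveToManualProcessing(request.id);
  }
}, 10 * 60 * 1000); // run every 10 minutes
```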
Order placement use case
Rather than creating a customer-facing service which waits for all the processing to be done, you can define several statuses/stages of the process:
Order requested
Payment verified
Items locked in inventory
...
Order placed
You can inform your customers about these status/stage changes via websocket, push notification, e-mail, etc. The orchestration of this order placement flow can be achieved, for example, via the Saga pattern.
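A tiny sketch of what a stage change plus customer notification might look like (the stage names echo the list above; saveOrderStage and notifyCustomer are hypothetical):

```typescript
// Hypothetical order stages and a status-change notification.
type OrderStage =
  | 'ORDER_REQUESTED'
  | 'PAYMENT_VERIFIED'
  | 'ITEMS_LOCKED_IN_INVENTORY'
  | 'ORDER_PLACED';

declare function saveOrderStage(orderId: string, stage: OrderStage): Promise<void>;                 // hypothetical persistence
declare function notifyCustomer(orderId: string, event: { type: string; stage: OrderStage }): void; // websocket/push/e-mail

async function advanceOrder(orderId: string, stage: OrderStage): Promise<void> {
  await saveOrderStage(orderId, stage);
  notifyCustomer(orderId, { type: 'ORDER_STAGE_CHANGED', stage });
}
```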
We have a middleware that depends on another system to execute payment requests. This third-party system usually sends a webhook later, once a payment request made from our end has been processed and completed successfully at their end. Sometimes they fail to send the webhook, or delay it significantly, and there is no retry mechanism at their end. However, they have a status query API we can call to learn the current status of a payment request.
We update our payment status based on this webhook, and this is vital for our system. For this use case, we have found two ways to handle a failed webhook:
Run a scheduler to handle failed webhook requests and check with their status query API.
Implement a queue, where a new entry is added when the original payment request takes place, and fire the status query API using time-out events, e.g. SQS.
Both approaches have their own pros and cons. Is there any other way to handle this use case? If not, which of the two would be the better choice?
One option is to use an orchestrator like temporal.io to implement your business logic. The code to act on the webhook as well as poll the status API in parallel would be pretty simple.
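Whatever tooling you pick, the core of that logic is "webhook or status poll, whichever finishes first". An orchestrator-agnostic sketch of the idea (all names, statuses, and intervals here are hypothetical):

```typescript
// Resolve the payment result either from the incoming webhook or by polling
// the third party's status query API -- whichever completes first.
declare function waitForWebhook(paymentId: string): Promise<string>; // resolves when our webhook endpoint is hit
declare function queryStatusApi(paymentId: string): Promise<string>; // the third party's status query API

async function awaitPaymentResult(paymentId: string): Promise<string> {
  const fromWebhook = waitForWebhook(paymentId);
  const fromPolling = (async () => {
    for (;;) {
      const status = await queryStatusApi(paymentId);
      if (status === 'COMPLETED' || status === 'FAILED') return status;
      await new Promise((resolve) => setTimeout(resolve, 30_000)); // poll every 30 seconds
    }
  })();
  return Promise.race([fromWebhook, fromPolling]);
}
```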
Currently I'm using Socket.io / SignalR to emit an event from my backend message queue system, whenever new data is incoming. That way I can setup an event handler in my React application and update the relay cache from within the event handler.
It does not seem like the most GraphQL-ish way to do things, so I was playing around a bit with pre-RFC live query implementations, where you observe data changes in reactive data stores, push them to the GraphQL server, and further on to the client using websockets... with some rather complex custom code... obviously GraphQL is not ready for real live queries (not polling).
A few lines further down it says:
When building event-based subscriptions, the problem of determining what should trigger an event is easy, since the event defines that explicitly. It also proved fairly straight-forward to implement atop existing message queue systems.
Which leads me to my question. How can you (in a GraphQL way) best trigger GraphQL subscriptions when a new event comes into your backend message queue application and you need to reflect this new data in the UI in realtime - let's say each second? I'm not talking about triggering the event in the frontend/client or polling every x seconds like you usually see when talking about subscriptions.
Not sure it's relevant but I'm using Relay Modern as my preferred graphql client.
Here are some ideas that might work, if I can get a little help understanding in general how to trigger/call a subscription without a mutation.
Backend worker / message queue "A" receives a new incoming event with some device data. It uses either SignalR or another pub/sub (redis/socket.io/?) to notify the graphql server "B" (which subscribes to the event) that a new event has happened. The graphql server then triggers/executes the subscription and the frontend react relay application "C" automatically updates, since it has a relay subscription defined. This would be ideal, right? But how do you trigger the subscription on the graphql server?
Simply use Socket.io/SignalR to emit events from backend worker / message queue "A" on incoming data, subscribe and handle the event in the frontend "B", and then programmatically call the subscription from within the Socket.io/SignalR event handler (if such a thing, directly calling a subscription, is even possible?). But then the only improvement from using subscriptions instead of pure Socket.io/SignalR will be that I have moved the updating of the relay cache/store from the handler to the subscription. Not a big improvement, if any. But the manual update of the cache/store is really cumbersome, although not that hard :/
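For reference, here is roughly what I imagine option 1 would look like on the graphql server "B", assuming the in-memory PubSub from graphql-subscriptions (the trigger name, payload shape, and onQueueMessage hook are made up):

```typescript
import { PubSub } from 'graphql-subscriptions';

declare function onQueueMessage(handler: (event: unknown) => void): void; // hypothetical hook into worker "A"

const pubsub = new PubSub();
const DEVICE_DATA = 'DEVICE_DATA'; // hypothetical trigger name

// 1) Message-queue consumer on the GraphQL server "B": whenever worker "A"
//    delivers an event, publish it into the PubSub.
onQueueMessage((event) => {
  pubsub.publish(DEVICE_DATA, { deviceData: event });
});

// 2) Subscription resolver: the Relay client "C" subscribes to this field and
//    receives every published payload over the websocket transport.
const resolvers = {
  Subscription: {
    deviceData: {
      subscribe: () => pubsub.asyncIterator([DEVICE_DATA]),
    },
  },
};
```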
How do people handle real streaming live (device) data with SignalR, and why do all realtime articles/examples just repeat the same old simple chat application, where the UI only updates after a user makes a click event? Is GraphQL not yet suited for dealing with a stream of frequently incoming device data in realtime? I understand why live queries were delayed, after playing with implementing them myself, but without them, how do you get REAL realtime data updates pushed from the server to the frontend?
I'm trying to understand whether the HTML5 Web Notifications API can help me out, but I'm falling short in understanding how it works.
I'd like user_a to be able to send user_b a message within my webapp.
I'd like user_b to receive a notification of this.
Can the web notifications API help here? Does it let me specifically target a user (rather than notify everyone that the site has been updated)? I can't see how I would create an alert for one person.
Can anyone help me understand a little more?
The notifications API is client side, so it needs to get events from another client-side technology. Here, read THIS: http://nodejs.org/api/. Just kidding. Node.js + socket.io is probably the best way to go here: you can emit events to one or all clients (broadcast). That's a push scenario. Or each user could be pulling their notifications from the server.
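A minimal sketch of the push variant with socket.io, assuming each client identifies itself on connection (the auth field, event names, and user-to-socket map are illustrative):

```typescript
import { Server } from 'socket.io';

const io = new Server(3000);
const socketsByUser = new Map<string, string>(); // userId -> socket.id (illustrative bookkeeping)

io.on('connection', (socket) => {
  const userId = socket.handshake.auth.userId as string; // assume the client sends its user ID
  socketsByUser.set(userId, socket.id);
  socket.on('disconnect', () => socketsByUser.delete(userId));
});

// When user_a sends user_b a message, push it only to user_b's socket.
function notifyUser(userId: string, message: unknown): void {
  const socketId = socketsByUser.get(userId);
  if (socketId) io.to(socketId).emit('new-message', message);
}
```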
The HTML5 Web Notifications API gives you the ability to display desktop notifications that your application has generated.
What you are trying to achieve is a different thing, and web notifications are just one part of your scenario.
Depending upon how you are managing your application, for chat and messaging purposes, as humbolight mentioned, you should look into Node.js. It will provide the necessary back-end to manage sending and receiving messages between users.
To notify a user that (s)he has received a message, you can opt for AJAX polling on the client side.
Simply create a JavaScript routine that pings the server every x seconds and checks if there is any notification or new message available for this user.
If the response is successful, you can then use the HTML5 Notification API to show the user a message that (s)he has a new message.
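Something along these lines, assuming a made-up /api/users/:id/unread-messages endpoint and a 10-second interval:

```typescript
// Ask for notification permission once, then poll a (hypothetical) endpoint for new messages.
async function startMessagePolling(userId: string): Promise<void> {
  if (Notification.permission !== 'granted') {
    await Notification.requestPermission();
  }
  setInterval(async () => {
    const res = await fetch(`/api/users/${userId}/unread-messages`); // hypothetical route
    const messages: Array<{ from: string; text: string }> = await res.json();
    for (const msg of messages) {
      new Notification(`New message from ${msg.from}`, { body: msg.text });
    }
  }, 10_000); // every 10 seconds
}
```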
The main problem with this kind of polling is server load and bandwidth usage even when there are no messages, and if the number of users is in the thousands you can expect your server to be constantly busy responding to poll calls.
An alternative is to use the Server-Sent Events API, where you send a request to the server and the server then PUSHES the notifications/messages to the client as soon as they are available.
This reduces the unnecessary client->server polling and seems a much better option in your case.
To get started you can check a good tutorial at
HTML5Rocks
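A bare-bones Server-Sent Events sketch, with the endpoint, user lookup, and onNewMessageFor hook all assumed for illustration:

```typescript
// Client: one long-lived request; the server pushes events as they happen.
const source = new EventSource('/api/notifications'); // hypothetical endpoint
source.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  new Notification(`New message from ${msg.from}`, { body: msg.text });
};

// Server (Express-style sketch): keep the response open and write SSE frames.
declare const app: { get(path: string, handler: (req: any, res: any) => void): void };
declare function currentUser(req: any): string;                                         // hypothetical auth lookup
declare function onNewMessageFor(userId: string, handler: (msg: object) => void): void; // hypothetical message hook

app.get('/api/notifications', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  onNewMessageFor(currentUser(req), (msg) => {
    res.write(`data: ${JSON.stringify(msg)}\n\n`); // each SSE message is a "data:" line plus a blank line
  });
});
```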
What you're looking for is WebSocket. It's the technology that allows a client (browser) to open a persistent connection to the server and receive data from it at the server's whim, rather than having to "poll" the server to see if there's anything new.
Other answers here have already mentioned node.js, but Node is simply one (though arguably the best) option for implementing websockets on your server. You might also be comfortable with Ratchet, which is a websocket server library for PHP, or Tornado which is in Python.
How you handle your real-time communication is up to you. Websockets are merely the underlying technology that you can use to pass data back and forth. The client side of this will be fairly easy, but on the server side, you'll need a mechanism for websocket handlers to get information from each other. Look at tools like ZeroMQ for handling queues, and Memcached or Redis to handle large swaths of data which don't need to be stored permanently.
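On the client side, the WebSocket part itself is small; a sketch with a hypothetical endpoint and message format:

```typescript
// Browser side: open a persistent connection and react to data pushed by the server.
const socket = new WebSocket('wss://example.com/updates'); // hypothetical endpoint

socket.onopen = () => {
  socket.send(JSON.stringify({ type: 'subscribe', channel: 'messages' })); // made-up protocol
};

socket.onmessage = (event) => {
  const update = JSON.parse(event.data);
  console.log('Server pushed:', update); // update the UI / show a notification here
};
```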
I'm working on a web application that submits tasks to a master/worker system that farms out the tasks to any of a series of worker instances. The work queue master runs as a separate process (on a separate machine altogether) and tasks are submitted to the master via HTTP/REST requests. Once tasks are submitted to the work queue, client applications can submit another HTTP request to get status information about tasks.
For my web application, I'd like it to provide some sort of progress bar view that gives the user some indication of how far along task processing has come. The obvious way to implement this would be an AJAX progress meter widget that periodically polls the work queue for status on the tasks that have been submitted. My question is, is there a better way to accomplish this without the frequent polling?
I've considered having the client web application open up a server socket on which it could listen for notifications from the work master. Another similar thought I've had is to use XMPP or a similar protocol for the status notifications. (Of course, the master/worker system would need to be updated to provide notifications either way but I own the code for that so can make any necessary updates myself.)
Any thoughts on the best way to set up a notification system like this? Is the extra effort involved worth it, or is the simple polling solution the way to go?
Polling
The client keeps polling the server to get the status of the response.
Pros
Being really RESTful means cacheable and scalable.
Cons
Not the best responsiveness if you do not want to poll your server too much.
Persistent connection
The server does not close its HTTP connection with the client until the response is complete. The server can send intermediate status through this connection using HTTP multiparts.
Comet is the most famous technique for implementing this behaviour.
Pros
Best responsiveness, almost real-time notifications from the server.
Cons
The number of connections on a web server is limited; keeping a connection open for too long might, at best, load your server and, at worst, open it to denial-of-service attacks.
Client as a server
Make the server post status updates and the response to the client as if it were another RESTful application.
Pros
The best of both worlds: no resources are wasted waiting for the response, either on the server or on the client side.
Cons
You need a full HTTP server and web application stack on the client
Firewalls and routers, with their default "no incoming connections at all" policy, will get in the way.
Feel free to edit to add your thoughts or a new method!
I guess it depends on a few factors:
How accurate the feedback can be (1 percent, 5 percent, 50 percent). Accurate feedback makes it worth pursuing some kind of progress bar and Comet-style push. If you can only say "Busy... hold on... almost there... done", then a simple AJAX "are we there yet" poll is certainly easier to code.
How timely the "done" message has to be seen by the client.
How long each task takes (1 second, 10 seconds, 10 minutes). 1 second makes it a bit moot; 10 seconds makes it worth it; 10 minutes means you're better off suggesting the user go for a coffee break :-)
How many concurrent requests there will be. Unless you've got a "special" server, live-push style systems tend to eat connections and you'll be maxed out pretty quickly. Having to throw more web servers in for a fancy progress bar might hurt the budget.
I've got some sample code on 871184 that shows a hand-rolled "forever frame" which seems to work out well. The project I developed that for isn't hammered all that hard though; the operations take a few seconds and we can give a pretty accurate percentage. The code uses ASP.NET and jQuery, but the general techniques will work with any server and JavaScript framework.
Edit: As John points out, status reporting probably isn't the job of the RESTful service. But there's nothing that says you can't open an iframe on the client that hooks to a page on the server that polls the service. Theory says the server and the service will at least be closer to one another :-)
Look into Comet. You make a single request to the server and the server blocks and holds the connection open until an update in status occurs. Once that happens the response is sent and committed. The browser receives this response, handles it and immediately re-requests the same URL. The effect is that of events being pushed to the browser. There are pros and cons and it might not be appropriate for all use cases but would provide the most timely status updates.
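The client side of that loop can be as small as this sketch (the status URL and terminal status value are assumptions):

```typescript
// Comet-style long poll: the server holds each request open until the status changes,
// then the client immediately re-requests the same URL.
async function watchTaskStatus(taskId: string, onUpdate: (status: string) => void): Promise<void> {
  for (;;) {
    const res = await fetch(`/tasks/${taskId}/status?wait=true`); // hypothetical blocking endpoint
    if (res.ok) {
      const { status } = await res.json();
      onUpdate(status);
      if (status === 'DONE') return; // assumed terminal status
    }
  }
}
```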
My opinion is to stick with the polling solution, but you might be interested in this Wikipedia article on HTTP Push technologies.
REST depends on HTTP, which is a request/response protocol. I don't think you're going to get a pure HTTP server calling the client back with status.
Besides, status reporting isn't the job of the service. It's up to the client to decide when, or if, it wants status reported.
One approach I have used is:
When the job is posted to the server, the server responds with a PubNub channel ID (one could alternatively use a Google Pub/Sub kind of service).
The client on browser subscribes to that channel and starts listening for messages.
The worker/task server publishes status on that pubnub channel to update the progress.
On receiving messages on the subscribed pubnub-channel, the client updates the web UI.
You could also use a self-refreshing iframe, but an AJAX call is much better. I don't think there is any other way.
PS: If you opened a socket from the client, that wouldn't change much: the browser would show the page as still "loading", which is not very user-friendly (assuming you would push or flush the buffer to have other things displayed first).