How to cancel http requests made by Apollo (angular) client? - apollo-client

I noticed that when I unsubscribe from a query, the HTTP request keeps executing and is not canceled. I also tried using AbortController, but without any luck. How does one cancel HTTP requests made by the Apollo client?

This is an old question, but since I just wanted to do the same and managed to do it with the latest Apollo Client (3.4.13) and Apollo-Angular (2.6.0): make sure that you're using watchQuery() instead of query(), and then call unsubscribe() on the Apollo subscription returned by the previous request. The latter of course implies that you store the subscription object you want to abort somewhere.
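A minimal sketch of that approach in an Angular component (the BOOKS_QUERY document, field names, and component are made-up placeholders, not from the original question):

import { Component, OnDestroy } from '@angular/core';
import { Apollo, gql } from 'apollo-angular';
import { Subscription } from 'rxjs';

// Hypothetical query, used only to illustrate the pattern.
const BOOKS_QUERY = gql`
  query Books {
    books {
      id
      title
    }
  }
`;

@Component({ selector: 'app-books', template: '' })
export class BooksComponent implements OnDestroy {
  private querySub?: Subscription;

  constructor(private apollo: Apollo) {}

  load(): void {
    // Unsubscribing cancels the in-flight request started by watchQuery().
    this.querySub?.unsubscribe();

    this.querySub = this.apollo
      .watchQuery<{ books: { id: string; title: string }[] }>({ query: BOOKS_QUERY })
      .valueChanges.subscribe(({ data }) => {
        // handle data.books
      });
  }

  ngOnDestroy(): void {
    this.querySub?.unsubscribe();
  }
}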

This is an old question, but I spent two days on this bananas problem and I want to share for posterity.
We're using Angular and GraphQL (apollo-angular and codegen to generate GraphQL services) and we opted for an event-driven architecture using NgRx to dispatch events and then perform HTTP calls. When sending multiple identical events (but with different property values), we noticed we got stale data in some cases, especially in edge cases like when 20+ of these identical events were sent. Obviously not common, but not ideal, and a hint of potentially poor scaling, since we were going to need many more events in the future.
The way we resolved this issue was by using .watch() instead of .fetch() on the generated GraphQL services. Initially, since .fetch() returned what looked like the same Observable as .watch().valueChanges, we thought it was easier and simpler to just use .fetch(), but their behavior turned out to be quite different. We were never able to cancel HTTP requests performed by .fetch(). But after changing to .watch().valueChanges, the Observable acted exactly as HTTP request Observables should, complete with -- thankfully -- cancellation.
So in NgRx, we swapped our generic mergeMap operator for the switchMap operator. This ensures that previous effects listening for dispatched events are canceled. We needed no extra overhead, no .next-ing to Subjects, no extra Subscriptions. Just change .fetch() into .watch().valueChanges and then switchMap to your heart's content. The takeUntil operator will now also cancel these requests, which is our preferred method of unsubscribing from Observables.
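A sketch of what the effect looks like after the change (loadBooks, loadBooksSuccess, and the codegen BooksGQL service are hypothetical names standing in for our real events and generated services):

import { Injectable } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { map, switchMap } from 'rxjs/operators';
// Hypothetical actions and codegen service, for illustration only.
import { loadBooks, loadBooksSuccess } from './books.actions';
import { BooksGQL } from './graphql.generated';

@Injectable()
export class BooksEffects {
  loadBooks$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadBooks),
      // switchMap cancels the previous .watch() request as soon as a new
      // loadBooks event is dispatched, so no stale responses arrive.
      switchMap(() =>
        this.booksGQL.watch().valueChanges.pipe(
          map(result => loadBooksSuccess({ books: result.data.books }))
        )
      )
    )
  );

  constructor(private actions$: Actions, private booksGQL: BooksGQL) {}
}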
Sidenote: I'm amazed that this information was so hard to come by; honestly, this question and one GitHub issue were all I could find that hinted at this discrepancy. Even now I don't quite understand why anyone would want .fetch() if all it does is perform an HTTP call that will always resolve and then return an Observable that does not behave the way you expect Observables to behave.

Related

GraphQL resolvers - when to make resolver functions async or not?

I completed this tutorial on building a GraphQL Node backend server on Prisma 2. The tutorial doesn't explain why it writes some resolver functions as async and others not.
I thought that async was added to functions that interact with the database, but you can see that this resolver gets data from the database yet doesn't use async, while this resolver does.
Can somebody please explain this seemingly arbitrary usage of async? When and why should I use it? Thanks in advance.
The first thing you should do is read up on Promises. A Promise is JavaScript's way of encapsulating a computation that is still ongoing. This is usually the case when you talk to an external service like a database or the operating system. Promises have been replacing callback-style APIs.
In GraphQL, a resolver can return either a value or a Promise that resolves to a value. This means you can freely choose between returning a value or a Promise, but if you call a database function like Prisma's, you will get a Promise back, so you are effectively forced to stay "in Promise land", as there is no way to turn a Promise back into a plain value. You can only chain functions that should be executed with the value "in the future" (with then).
The last concept to understand is async/await. This syntax is an addition to JavaScript that makes working with Promises easier. With await, you can pause the execution of a function until the value in a Promise arrives. This looks like you are turning a Promise back into a value, but in reality your function implicitly returns a Promise. For the VM to know about this, you have to declare that a function might use await by adding the keyword async in front of the function.
So when do you use async for a resolver? You could do it all the time and the code would be correct, but doing it even when you don't need to (e.g. when you are not talking to a service) might have some performance implications. So it's better to only do it if you really want to use the await keyword somewhere. I hope this gets you started with the concepts above; there is really a lot to learn. Maybe just go with your intuition and TypeScript errors until you deeply understand what is going on.
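A small sketch of the two cases (the resolver names and the context.prisma.link.findMany() call are hypothetical, loosely following the style of such tutorials):

// Hypothetical resolvers; the Prisma model name `link` is an assumption.
const resolvers = {
  Query: {
    // Returns a plain value, so there is nothing to await: no async needed.
    info: () => 'This is an example API',

    // Talks to the database, so it returns a Promise. The async keyword is
    // only required because we want to use await inside the function body.
    feed: async (parent: unknown, args: unknown, context: { prisma: any }) => {
      const links = await context.prisma.link.findMany();
      return links;
    },
  },
};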

GraphQL Asynchronous query results

I'm trying to implement a batch query interface with GraphQL. I can get a request to work synchronously without issue, but I'm not sure how to approach making the result asynchronous. Basically, I want to be able to kick off the query and return a pointer of sorts to where the results will eventually be when the query is done. I'd like to do this because the queries can sometimes take quite a while.
In REST, this is trivial. You return a 202 and return a Location header pointing to where the client can go to fetch the result. GraphQL as a specification does not seem to have this notion; it appears to always want requests to be handled synchronously.
Is there any convention for doing things like this in GraphQL? I very much like the query specification but I'd prefer to not leave the client HTTP connection open for up to a few minutes while a large query is executed on the backend. If anything happens to kill that connection the entire query would need to be retried, even if the results themselves are durable.
What you're trying to do is not solved easily in a spec-compliant way. Apollo introduced the idea of a @defer directive that does pretty much what you're looking for, but it's still an experimental feature. I believe Relay Modern is trying to do something similar.
The idea is effectively the same -- the client uses a directive to mark a field or fragment as deferrable. The server resolves the request but leaves the deferred field null. It then sends one or more patches to the client with the deferred data. The client is able to apply the initial request and the patches separately to its cache, triggering the appropriate UI changes each time as usual.
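A sketch of what such a query could look like (the field names are invented, and the exact @defer syntax changed several times while the directive was experimental):

import gql from 'graphql-tag';

// Hypothetical query: `comments` may arrive later as a patch while the
// rest of the response is delivered immediately.
const NEWS_FEED = gql`
  query NewsFeed {
    newsFeed {
      stories {
        text
        comments @defer {
          text
        }
      }
    }
  }
`;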
I was working on a similar issue recently. My use case was to submit a job to create a report and provide the result back to the user. Creating a report takes a couple of minutes, which makes it an asynchronous operation. I created a mutation that submits the job to the backend processing system and returns a job ID. Then I periodically poll the jobs field with a query to find out about the state of the job and, eventually, the results. As the result is a file, I return a link to a different endpoint where it can be downloaded (a similar approach to the one GitHub uses).
Polling for actual results is working as expected but I guess this might be better solved by subscriptions.
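A sketch of the schema shape behind that approach (all type and field names here are hypothetical):

import gql from 'graphql-tag';

// Hypothetical job-based schema for long-running report generation.
const typeDefs = gql`
  type Mutation {
    "Submits the report job and returns immediately with a job ID."
    createReport(name: String!): Job!
  }

  type Query {
    "Clients poll this field with the job ID until the status is COMPLETED."
    job(id: ID!): Job
  }

  type Job {
    id: ID!
    status: JobStatus!
    "Link to a separate download endpoint once the result is ready."
    resultUrl: String
  }

  enum JobStatus {
    PENDING
    RUNNING
    COMPLETED
    FAILED
  }
`;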

What is the difference between future and promise in vertx?

I usually see the use of either a promise or a future in the start of a Vert.x verticle. Is there any specific difference between the two?
I have read about their differences in the Scala language; is it the same in the case of Vert.x too?
Also, how do I know when to use a promise and when to use a future?
The best description I've read: think of a Promise as the producer (used by the producer on one side of an async operation) and a Future as the consumer (used by the consumer on the other side).
Futures vs. Promises
A Promise is for defining a non-blocking operation, and its future() method returns the Future associated with the promise, which lets you get notified of the promise's completion and retrieve its value. The Future interface represents the result of an action that may, or may not, have occurred yet.
A bit late to the game, and the other answers say as much in different words, but this might help. Let's say you were wrapping some older API (e.g. callback-based) to use Futures; then you might do something like this:
Future<String> getStringFromLegacyCallbackAPI() {
  // The Promise is the writable side; keep it under this method's control.
  Promise<String> promise = Promise.promise();
  // Complete the promise when the legacy callback delivers the value.
  legacyApi.getString(promise::complete);
  // Hand only the read-only Future back to the caller.
  return promise.future();
}
Note that the person who calls this method gets a Future, so they can only specify what should happen upon successful completion or failure (they cannot trigger completion or failure themselves). So I think you should not pass the promise up the stack; rather, the Future should be handed back and the Promise should be kept under the control of the code that can resolve or reject it.
A Promise, in turn, is the writable side of an action that may or may not have occurred yet.
And according to the wiki:
Given the new Promise / Future APIs the start(Future<Void>) and stop(Future<Void>) methods have been deprecated and will be removed in Vert.x 4.
Please migrate to the start(Promise) and stop(Promise) variants.
As a paraphrase,
A future is a read-only container for a result that does not yet exist, while a promise can be written (normally only once).
More from here

Why do GraphQL Subscriptions use an AsyncIterator?

AsyncIterator requires pulling data using .next(). But with websockets I generally want to push data when events occur. The only thing I can think of is that by being pull-based they can rate-limit.
So what is calling .next()? Is it a timer, or does it listen for a publish message, queue it, and then call .next() until it has consumed the whole queue?
Is this suitable for real-time data, like GPS positions on a map?
Looked here and still could not figure it out: https://github.com/facebook/graphql/blob/master/rfcs/Subscriptions.md
GraphQL Subscriptions repo from Apollo: https://github.com/apollographql/graphql-subscriptions
The AsyncIterator iterates through an event stream; each event is then resolved, sometimes with a filter and/or some payload manipulation.
Payload manipulation can involve another async database request, or resolving other GraphQL types, which is time-consuming.
So GraphQL uses a pull-based system to rate-limit how events from the stream are resolved. If you use neither withFilter nor extra resolvers, you won't see a delay on events, except with a lot of users.
GraphQL subscriptions are suitable for low-latency data.
Source: https://github.com/graphql/graphql-js/blob/master/src/subscription/subscribe.js#L44
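A sketch of the push-to-pull bridge this describes, using the PubSub helper from the graphql-subscriptions package (the event name, subscription field, and payload shape are hypothetical):

import { PubSub, withFilter } from 'graphql-subscriptions';

const pubsub = new PubSub();

const resolvers = {
  Subscription: {
    positionUpdated: {
      // Events are pushed into the PubSub by the application; the
      // AsyncIterator then pulls them one by one as each previous event
      // finishes filtering and resolving.
      subscribe: withFilter(
        () => pubsub.asyncIterator('POSITION_UPDATED'),
        (payload: any, variables: any) =>
          payload.positionUpdated.vehicleId === variables.vehicleId
      ),
    },
  },
};

// Elsewhere, the event source pushes updates:
// pubsub.publish('POSITION_UPDATED', { positionUpdated: { vehicleId: '42', lat: 59.3, lng: 18.1 } });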

RxJS - Concurrent paging

I'm facing a bit of a tricky problem and feel like my limited knowledge of RxJS is preventing me from reaching a solution.
Essentially what I'm trying to do is page an API endpoint in page sizes of 100, then for each page of data I receive, perform an AJAX request on each item. However, I'm running into some performance issues when retrieving the pages of data. I assumed forkJoin would be exactly what I needed, but it doesn't seem to run the ajax requests in parallel as the operator suggests, and this leads to rather long wait times before the data is ready to process.
So my question is, how can I retrieve pages of data without having to rely on the previous page being fetched?
Sounds like this might be the GitHub users project.
If you are, say, fetching avatar_url after fetching a list of users, forkJoin is going to wait for all 100 requests to complete before it emits anything.
flatMap will be a perceived improvement in the UI, as it emits each response as it arrives. But it does not alter the overall time to completion, or the problem of the browser's limited number of concurrent connections.
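A sketch of that approach using mergeMap's optional concurrency argument, so pages are fetched in parallel without relying on the previous page completing (getPage and getAvatar are hypothetical helpers returning Observables of HTTP responses):

import { from, Observable } from 'rxjs';
import { mergeMap } from 'rxjs/operators';

// Hypothetical helpers wrapping the actual HTTP calls.
declare function getPage(page: number): Observable<{ items: { id: string }[] }>;
declare function getAvatar(id: string): Observable<string>;

const pages = [1, 2, 3, 4, 5];

from(pages).pipe(
  // Fetch up to 4 pages at a time instead of waiting for the previous one.
  mergeMap(page => getPage(page), 4),
  // Flatten each page into its individual items.
  mergeMap(page => from(page.items)),
  // Fetch each item's detail with its own concurrency cap, to stay within
  // the browser's per-host connection limit.
  mergeMap(item => getAvatar(item.id), 6)
).subscribe(avatarUrl => {
  // Each result is emitted as soon as its request completes.
  console.log(avatarUrl);
});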
