I have a use case of a "quick search" box where the user types the first few letters of the search criteria, and the system shows the list of results in real time. E.g. if you type "J" in a box for the country name, it would show "Jamaica, Japan, Jordan". When you proceed to type "Ja", it would show just Jamaica and Japan, and leave Jordan out.
Each search request is an AJAX call. The trouble with AJAX calls is that the responses may not come in the same order as requests. E.g., the following sequence of events is possible:
1. Request search results for "J".
2. Request search results for "Ja".
3. Receive response to request #2: [Jamaica, Japan]
4. Receive response to request #1: [Jamaica, Japan, Jordan]
If the system blindly shows the most recent response, it will end up in an inconsistent state where the search box contains "Ja" but "Jordan" is on the suggestion list. The system should be smart enough to discard response #4 (the late response to request #1), since it is no longer relevant.
Does RxJS provide a clean way to discard responses to anything but the last issued request?
Keep in mind that "last request" changes over time as new requests are produced. I searched the documentation and did not find much. Most tutorials simply ignore this problem.
tl;dr
You want switchMap instead of flatMap (also known as mergeMap).
Breakdown
I just did a quick Google search for "rxjs smart search" and found that the first two hits dealt with this exact problem, here and here.
But for future readers, the answer is to use switchMap (or flatMapLatest, as it used to be called). As the name implies, it both switches and maps. What does that mean? Well, from the docs here, it
Projects each source value to an Observable which is merged in the output Observable, emitting values only from the most recently projected Observable.
In plain terms: as with flatMap, each event passed into switchMap's callback function should produce a stream (or stream-like thing). When a new value is emitted, only results from the stream projected from that latest value are listened to, while a "best-effort" cancellation is made on the previous one; this happens for each new emission. "Best effort" means that if there is a way to halt progress on the stale stream, it will be halted, but for some data structures (read: Promises) there is no way to actually cancel the work once it is in progress, so the best the library can do is simply ignore the result.
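For illustration, here is a minimal sketch of the quick-search box using switchMap in RxJS 6+. The #search input and the /countries?q= endpoint are assumptions made up for the example:

```typescript
import { fromEvent, of } from 'rxjs';
import { ajax } from 'rxjs/ajax';
import { catchError, debounceTime, distinctUntilChanged, map, switchMap } from 'rxjs/operators';

// Assumed for illustration: an <input id="search"> element and a
// /countries?q=<term> endpoint returning a JSON array of names.
const input = document.getElementById('search') as HTMLInputElement;

fromEvent(input, 'input').pipe(
  map(() => input.value.trim()),
  debounceTime(300),        // wait until the user pauses typing
  distinctUntilChanged(),   // skip if the term did not actually change
  switchMap(term =>         // unsubscribe from the previous request
    ajax.getJSON<string[]>(`/countries?q=${encodeURIComponent(term)}`).pipe(
      catchError(() => of<string[]>([]))  // keep the stream alive on errors
    )
  )
).subscribe(results => {
  // Only results for the most recent term can ever arrive here.
  console.log(results);
});
```

Because switchMap unsubscribes from the previous inner Observable, a late response for "J" can never overwrite the results for "Ja".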
Related
I have an application where users can take part in puzzle-solving events. I have an API endpoint /events/{id} that is used to get the data associated with a certain event.
Based on whether the event has ended, the response will differ:
If the event has ended, the endpoint will return event name, participants, scores etc. with status code 200.
If the event has not ended, the endpoint will return event name, start time, end time, puzzles etc. with status code 200.
On the client-side, what is the best way to distinguish these two responses from each other to decide which page to display, results page or event page? Is this a good way to accomplish my goal?
Some might answer that I should already know on the client side whether the event has ended, and query for data accordingly. But what if the user uses the address bar to navigate to an event? Then I have no data to tell whether it has truly ended. I wouldn't like to first make an API call to find out whether it has ended and then make another one for the results/puzzles.
Pass a boolean isFinished and return it inside the response object. If your response object is already defined, create a wrapper that holds the previous response DTO plus the boolean flag.
We also used a solution like this in one of our projects at work for a big company, so I would say it is a somewhat industry-accepted way of doing it.
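For example, a wrapper DTO along those lines might look like this (all field names are illustrative, not an established schema):

```typescript
// Hypothetical wrapper around the response DTOs the endpoint already
// returns; all field names here are illustrative.
interface EventResponse {
  isFinished: boolean;
  results?: ResultsDto; // populated when isFinished is true
  event?: EventDto;     // populated when isFinished is false
}

interface ResultsDto { name: string; participants: string[]; scores: number[] }
interface EventDto { name: string; startTime: string; endTime: string; puzzles: string[] }

// The routing decision on the client becomes a single branch on the flag.
function pageFor(response: EventResponse): 'results' | 'event' {
  return response.isFinished ? 'results' : 'event';
}
```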
Is there a reasonable way to implement a job-based query paradigm in GraphQL?
In particular, something like the following:
Caller submits a search request
Backend returns a job ID
Caller receives status updates on the job as it runs
Caller separately can retrieve pages of data from the job results
I guess the problem I see here is that we are splitting the process into two steps: one is making the request, and the second is retrieving the data. As a result, the fields requested in the first request do not correspond with what is returned (just a job ID), and a call to retrieve results has the same issue.
I don't believe subscriptions really solve this problem either. They might help with requesting data that takes a long time to return, but that isn't quite the same as a job-based API.
Maybe this is a niche use case, and I have no doubt that it wasn't what GraphQL was initially built to solve. But, I'm just wondering if this is something doable, or if this is more of trying to fit a square peg into a round hole.
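For what it's worth, the two-step flow described above could look roughly like this from the caller's side. This is only a sketch: submitSearch, job, and jobResults are hypothetical schema fields, and gql() stands in for whatever GraphQL client you use:

```typescript
// gql() is a stand-in for whatever GraphQL client you use; submitSearch,
// job and jobResults are hypothetical schema fields.
declare function gql(query: string, variables?: object): Promise<any>;

async function runSearch(text: string) {
  // Step 1: the mutation returns only a job ID, not the data itself.
  const { submitSearch } = await gql(
    `mutation($q: String!) { submitSearch(query: $q) { jobId } }`,
    { q: text }
  );

  // Step 2: poll for status updates until the job finishes.
  let status = 'RUNNING';
  while (status === 'RUNNING') {
    await new Promise(resolve => setTimeout(resolve, 1000));
    ({ job: { status } } = await gql(
      `query($id: ID!) { job(id: $id) { status } }`,
      { id: submitSearch.jobId }
    ));
  }

  // Step 3: retrieve pages of results separately, keyed by the job ID.
  return gql(
    `query($id: ID!) { jobResults(jobId: $id, page: 0) { rows } }`,
    { id: submitSearch.jobId }
  );
}
```

Status updates could also be pushed over a subscription instead of polled, but the request/retrieve split stays the same.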
I noticed that when I unsubscribe from a query, the HTTP request is still executing and is not canceled. I also tried to use AbortController, but without any luck. How does one cancel HTTP requests made by the Apollo client?
This is an old question, but I just wanted to do the same and managed to do it with the latest Apollo Client (3.4.13) and Apollo-Angular (2.6.0). You need to make sure that you're using watchQuery() instead of query(), and then call unsubscribe() on the Apollo subscription returned from the previous request. The latter implies, of course, that you store the subscription object you may want to abort somewhere.
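A trimmed sketch of that approach in Apollo-Angular (the query and component are illustrative; decorator boilerplate is omitted):

```typescript
import { Apollo, gql } from 'apollo-angular';
import { Subscription } from 'rxjs';

const EVENT_QUERY = gql`
  query Event($id: ID!) {
    event(id: $id) { name }
  }
`;

// Component boilerplate (decorator, template) omitted.
export class EventComponent {
  private sub?: Subscription;

  constructor(private apollo: Apollo) {}

  load(id: string): void {
    // Unsubscribing from the previous watchQuery() is what actually
    // aborts its in-flight HTTP request.
    this.sub?.unsubscribe();
    this.sub = this.apollo
      .watchQuery({ query: EVENT_QUERY, variables: { id } })
      .valueChanges
      .subscribe(({ data }) => console.log(data));
  }
}
```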
This is an old question, but I spent two days on this bananas problem and I want to share for posterity.
We're using Angular and GraphQL (apollo-angular, plus codegen to generate GraphQL services), and we opted for an event-driven architecture using NgRx to dispatch events and then perform HTTP calls. When sending multiple identical events (but with different property values), we noticed we got stale data in some cases, especially in edge cases such as when 20+ of these identical events were dispatched. Obviously not common, but not ideal, and a hint of potentially poor scaling, since we were going to need many more events in the future.
The way we resolved this issue was by using .watch() instead of .fetch() on the generated GraphQL services. Initially, since .fetch() returned the same Observable as .watch().valueChanges, we thought it was easier and simpler to just use .fetch(), but their behavior turns out to be quite different. We were never able to cancel HTTP requests performed by .fetch(). After changing to .watch().valueChanges, the Observable acted exactly as an HTTP request Observable should, complete with (thankfully) cancellation.
So in NgRx, we swapped our generic mergeMap operator for the switchMap operator. This ensures that effects started by previous dispatched events are canceled. We needed no extra overhead, no .next-ing to Subjects, no extra Subscriptions. Just change .fetch() into .watch().valueChanges and then switchMap to your heart's content. The takeUntil operator will now also cancel these requests, which is our preferred method of unsubscribing from Observables.
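Concretely, the effect ended up looking roughly like this. The generated service (GetEventGQL) and the actions are hypothetical stand-ins for your own codegen output:

```typescript
import { Injectable } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { map, switchMap } from 'rxjs/operators';
// Hypothetical imports: a codegen-generated service and your own actions.
import { GetEventGQL } from './graphql.generated';
import { loadEvent, loadEventSuccess } from './event.actions';

@Injectable()
export class EventEffects {
  // switchMap cancels the .watch() request started by the previous
  // loadEvent action as soon as a new one is dispatched; mergeMap
  // would let all of them run to completion, stale responses included.
  loadEvent$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadEvent),
      switchMap(({ id }) =>
        this.getEvent.watch({ id }).valueChanges.pipe(
          map(result => loadEventSuccess({ event: result.data.event }))
        )
      )
    )
  );

  constructor(private actions$: Actions, private getEvent: GetEventGQL) {}
}
```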
Sidenote: I'm amazed that this information was so hard to come by; honestly, this question and one GitHub issue were all I could find hinting at this discrepancy. Even now I don't quite understand why anyone would want .fetch() if all it does is perform an HTTP call that will always resolve and then return an Observable that does not behave the way you expect Observables to behave.
I am implementing an example with Spring Boot and Axon. I have two events (deposit and withdraw account balance). I want to know: is there any way to get the state of the Account aggregate at a given date?
I want to get not just the final state, but to replay the events in a range of dates.
I think I can help with this.
In the context of Axon Framework, you can start a replay of events by telling a given TrackingEventProcessor to 'reset' its tokens. By the way, the current description of this in the Reference Guide can be found here.
These TrackingTokens are the objects which know how far a given TrackingEventProcessor is in terms of handling events from the Event Stream. Thus resetting/adjusting these TrackingTokens is what will issue a Replay of events.
Knowing all this, the second step is to look at the methods the TrackingEventProcessor provides to 'reset tokens', of which there are three:
TrackingEventProcessor#resetTokens()
TrackingEventProcessor#resetTokens(Function<StreamableMessageSource, TrackingToken>)
TrackingEventProcessor#resetTokens(TrackingToken)
Option one will reset your tokens to the beginning of the event stream, which will thus replay everything.
Options two and three, however, give you the opportunity to provide a TrackingToken.
Thus, you could provide a TrackingToken starting from several points on the Event Stream. So, how do you go about creating such a TrackingToken at a specific point in time? To that end, you should take a look at the StreamableMessageSource interface, which has the following operations:
StreamableMessageSource#createTailToken()
StreamableMessageSource#createHeadToken()
StreamableMessageSource#createTokenAt(Instant)
StreamableMessageSource#createTokenSince(Duration)
Option 1 is what's used to create a token at the start (tail) of the stream, whilst option 2 will create a token at the head of the stream.
Options 3 and 4, however, allow you to create a token at a specific point in time, thus letting you replay all the events from the given point in time up to now.
There is one caveat in this scenario, however: you're asking to replay an Aggregate. From Axon's perspective, the Aggregate is by default the Command Model in a CQRS setup, thus dealing with commands coming into your system. In the majority of applications, you want commands (i.e. requests to change something) to act on the current state of the application. As such, the Repository provided to retrieve an Aggregate does not allow specifying a point in time.
The replay solution described above is thus solely tied to Query Model creation, as the TrackingEventProcessor is part of the event handling side of your application, most often used to create views. This also ties in with your question: you want to know the "state of the Account Aggregate" at a given point in time. That's not a command but a query, as you have 'a request for data' instead of 'a request to change state'.
Hope this helps you out #Safe!
What is the best way to deal with out-of-sequence Ajax requests (preferably using jQuery)?
For example, an Ajax request is sent from the user's browser any time a field changes. A user may change dog_name to "Fluffy", but a moment later change it to "Spot". The first request is delayed for whatever reason, so it arrives at the server after the second, and her dog ends up being called "Fluffy" instead of "Spot".
I could pass a client-side timestamp along with each request, and have the server track it as part of each Dog record and disregard earlier requests to change the same field (but only if the difference is less than 5 minutes, in case the user changes the time on her machine).
Is this approach sufficiently robust, or is there a better, more standardized approach?
EDIT:
Matt made a great point in his comment. It's much better to serialize requests to change the same field, so is there a standard way of implementing Ajax request queues?
EDIT #2
In response to @cherouvim's comment, I don't think I'd have to lock the form. The field changes to reflect the user's edit, and a change request is placed into the queue. If a request to change the same field is already waiting in the queue, that old request is deleted. There are 2 things I would still have to address:
Placing a request into the queue is an asynchronous task. I could have the callback handler from the previous Ajax request send the next request in the queue. JavaScript code isn't multi-threaded (or... is it?)
If a request fails, I would need the user interface to reflect the state of the last successful request. So, if the user changes the dog's name to "Spot" and the Ajax request fails, the field would have to be set back to "Fluffy" (the last value successfully committed).
What issues am I missing?
First of all, you need to serialize server-side processing for each client. If you are programming in Java, then synchronizing execution on the HTTP session object is sufficient. Serializing will help in case the second update arrives while the first is still being processed.
A second enhancement you can implement in your entity updating is optimistic concurrency control (http://en.wikipedia.org/wiki/Optimistic_concurrency_control). You add a version property (and column) to your entity. Each time an update happens, it is incremented by one. In fact, the update statement looks like:
update ... set version=6 ... where id=? and version=5;
If the affected row count of the above pseudo-query is 0, then someone else has managed to update the entity first. What you do then is up to you. Note that you need to render the version as a hidden parameter on the entity's HTML update form and send it back to the server each time you update. On each response, you have to write back the updated version.
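Here is a sketch of such a guarded update in TypeScript, using node-postgres as an example driver (the table and column names are illustrative):

```typescript
import { Client } from 'pg';

// Sketch with node-postgres; any SQL client works the same way. The
// caller passes the version it originally read (e.g. from the hidden
// form field mentioned above).
async function renameDog(
  db: Client, id: number, name: string, expectedVersion: number
): Promise<boolean> {
  const res = await db.query(
    `UPDATE dogs
        SET name = $1, version = version + 1
      WHERE id = $2 AND version = $3`,
    [name, id, expectedVersion]
  );
  // 0 affected rows means someone else updated the row first:
  // report the conflict instead of silently losing the update.
  return res.rowCount === 1;
}
```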
Generally the first enhancement would be enough. The second one will improve the system in case many people are editing the same entities at the same time. It solves the "lost update" problem.
I would implement a queue on the client side, with chaining of successful requests and rollbacks on unsuccessful ones.
You need to define "unsuccessful", be it a timeout or a returned value.
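Here is a minimal sketch of such a queue: one request in flight at a time, newer changes to the same field replace queued ones, and a failure rolls the UI back to the last committed value. The /dog endpoint and the field lookup are made up for the example, and timeouts are left out:

```typescript
type Change = { field: string; value: string };

const queue: Change[] = [];
const committed: Record<string, string> = {}; // last server-acked values
let busy = false;

function enqueue(change: Change): void {
  // Coalesce: drop any queued change to the same field.
  const i = queue.findIndex(c => c.field === change.field);
  if (i !== -1) queue.splice(i, 1);
  queue.push(change);
  if (!busy) sendNext();
}

function sendNext(): void {
  const change = queue.shift();
  if (!change) { busy = false; return; }
  busy = true;
  fetch('/dog', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(change),
  })
    .then(res => {
      // Define "unsuccessful" here: any non-2xx response counts.
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      committed[change.field] = change.value;
    })
    .catch(() => {
      // Roll back the UI to the last successfully committed value.
      const input = document.querySelector<HTMLInputElement>(`[name="${change.field}"]`);
      if (input) input.value = committed[change.field] ?? '';
    })
    .finally(sendNext); // chain: the next request starts only after this one ends
}
```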