I am working on a project where our client generates almost 500 requests simultaneously. I am using forkJoin to get all the responses as an array.
But after 40-50 requests the server blocks the requests or sends only errors. I have to split these 500 requests into chunks of 10, loop over the chunks array, call forkJoin for each chunk, and convert each observable to a promise.
Is there any way to get rid of this for loop over the chunks?
If I understand your question correctly, I think you are in a situation similar to this
const clientRequestParams = [params1, params2, ..., params500]
const requestAsObservables = clientRequestParams.map(params => {
  return myRequest(params)
})
forkJoin(requestAsObservables).subscribe(
  responses => {
    // do something with the array of responses
  }
)
and probably the problem is that the server cannot handle so many requests in parallel.
If my understanding is right and if, as you write, there is a limit of 10 concurrent requests, you could try the mergeMap operator, specifying also its concurrent parameter.
A solution could therefore be the following
const clientRequestParams = [params1, params2, ..., params500]
// use the from function from rxjs to create a stream of params
from(clientRequestParams).pipe(
  mergeMap(params => {
    return myRequest(params)
  }, 10) // 10 here is the concurrent parameter, which limits the number
         // of concurrent requests on the fly to 10
).subscribe(
  responseNotification => {
    // do something with the response that you get from one invocation
    // of the service in the server
  }
)
If you adopt this strategy, you limit the concurrency, but you are not guaranteed the order of the responses. In other words, the second request can return before the first one has returned, so you need some mechanism to link each response to its request. One simple way is to return not only the response from the server but also the params you used to invoke that specific request. In this case the code would look like this
const clientRequestParams = [params1, params2, ..., params500]
// use the from function from rxjs to create a stream of params
from(clientRequestParams).pipe(
  mergeMap(params => {
    return myRequest(params).pipe(
      map(resp => {
        return {resp, params}
      })
    )
  }, 10)
).subscribe(
  responseNotification => {
    // do something with the response that you get from one invocation
    // of the service in the server
  }
)
With this implementation you would create a stream which notifies both the response received from the server and the params used in that specific invocation.
You can also adopt other strategies, e.g. return the response together with a sequence number representing the request, or maybe others.
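For instance, a minimal sketch of that sequence-number variant, combined with toArray in case you still want a single ordered array like the one forkJoin emits (it assumes the same placeholder myRequest and clientRequestParams as above):

import { from } from 'rxjs';
import { map, mergeMap, toArray } from 'rxjs/operators';

from(clientRequestParams).pipe(
  mergeMap(
    // mergeMap also passes the zero-based index of each source value,
    // which can serve as the sequence number of the request
    (params, index) => myRequest(params).pipe(
      map(resp => ({resp, index}))
    ),
    10 // same concurrent parameter as above
  ),
  // collect all the results once the stream completes...
  toArray(),
  // ...and restore the original request order
  map(results => results.sort((a, b) => a.index - b.index).map(r => r.resp))
).subscribe(responses => {
  // responses is an ordered array, as forkJoin would have produced
});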
I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
read(existing, { args: { offset, limit }}) {
// A read function should always return undefined if existing is
// undefined. Returning undefined signals that the field is
// missing from the cache, which instructs Apollo Client to
// fetch its value from your GraphQL server.
return existing && existing.slice(offset, offset + limit);
},
// The keyArgs list and merge function are the same as above.
keyArgs: [],
merge(existing, incoming, { args: { offset = 0 }}) {
const merged = existing ? existing.slice(0) : [];
for (let i = 0; i < incoming.length; ++i) {
merged[offset + i] = incoming[i];
}
return merged;
},
},
},
},
},
});
I have one major question about this snippet (and other snippets from the docs that have the same "flaw" in my eyes), but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server will return 10 results for this query and store them in the cache after they pass through the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the above code snippet, my understanding is that I will get only the items from 5 through 10 instead of the items from 5 to 15, because Apollo will see that the existing variable is present in read (with existing holding the initial 10 items) and will slice the available 5 items for me.
My question is: what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive in the cache after the initial query? Keep in mind that keyArgs is set to [], so the results will always be merged into a single item in the cache.
Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach would be to have an array with empty slots for data not yet fetched, and place incoming data in their respective index. For instance if you fetch items 30-40 out of a total of 100 your array would have 30 empty slots then your items then 60 empty slots. If you subsequently fetch items 70-80 those will be placed in their respective indexes and so on.
Your read function is where the decision on whether a network request is necessary or not will be made. If you find all the data in existing you will return them and no request to the server will be made. If any items are missing then you need to return undefined which will trigger a network request, then your merge function will be triggered once data is fetched, and finally your read function will run again only this time the data will be in the cache and it will be able to return them.
This approach is for the cache-first caching policy which is the default.
The logic for returning undefined from your read function will be implemented by you. There is no apollo magic under the hood.
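For instance, a minimal sketch (not from the Apollo docs) of a read function implementing that logic for cache-first, assuming the merge function shown above, which stores items sparsely at their absolute offsets:

read(existing, { args: { offset = 0, limit = 10 } = {} }) {
  if (!existing) return undefined;
  const page = existing.slice(offset, offset + limit);
  // slice() preserves holes, so check both the length of the window
  // and any empty slots before declaring a cache hit
  const incomplete = page.length < limit || page.includes(undefined);
  // undefined triggers a network request; merge then fills the gaps
  // and this read function runs again against the updated cache
  return incomplete ? undefined : page;
}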
If you use the cache-and-network policy, then your read doesn't need to return undefined when data is missing, since a network request will be made regardless of what the cache returns.
I am trying to aggregate/tabulate the results of a set of observables. I have an array of observables that each return a number and I want to total up those results and emit that as the value. Each time the source numbers change, I want the end result to reflect the new total. The problem is that I am getting the previous results added to the new total. This has to do with how I am using the reduce/scan operator. I believe it needs to be nested inside a switchMap/mergeMap, but so far I have been unable to figure out the solution.
I mocked up a simple example. It shows how many cars are owned by all users in total.
Initially, the count is correct, but when you add a car to a user, the new total includes the previous total.
https://stackblitz.com/edit/rxjs-concat-observables-3-drfd36
Any help is greatly appreciated.
Your scan works correctly; the point is that on each update the stream receives all the data again. So the quickest fix, I think, is to create a new instance of the stream in handleClickAddCar.
https://stackblitz.com/edit/rxjs-wrong-count.
I ended up doing this:
this.carCount$ = this.users$.pipe(
  // build one car-count observable per user
  map((users: User[]): Array<Observable<number>> => {
    let requests = users.map(
      (user: User): Observable<number> => {
        return this.store.select(UserSelectors.getCarsForUser(user)).pipe(
          map((cars: Car[]): number => {
            return cars.length;
          })
        );
      }
    );
    return requests;
  }),
  flatMap((results): Observable<number> => {
    return combineLatest(results).pipe(
      // take(1) makes the inner stream complete so reduce can emit
      take(1),
      // flatten the array of counts into individual emissions
      flatMap(data => data),
      // sum the individual counts into a total
      reduce((accum: number, result: number): number => {
        return accum + result;
      }, 0)
    )
  })
);
I think the take(1) ends up doing the same thing as Yasser was doing above by recreating the entire stream. I think this way is a little cleaner.
I also added another stream below it (in the code) that goes one level deeper in terms of retrieving observables of observables.
https://stackblitz.com/edit/rxjs-concat-observables-working-1
Anyone have a cleaner, better way of doing this type of roll-up of observable results?
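For comparison, here is a minimal sketch of the switchMap approach hinted at in the question, which stays subscribed and recomputes the total from scratch on every change (it assumes the same users$, store, and selectors as above):

this.carCount$ = this.users$.pipe(
  // switch to a fresh combineLatest whenever the user list changes
  switchMap((users: User[]) =>
    combineLatest(
      users.map(user =>
        this.store.select(UserSelectors.getCarsForUser(user)).pipe(
          map((cars: Car[]) => cars.length)
        )
      )
    )
  ),
  // recompute the total on every emission instead of accumulating
  map((counts: number[]) => counts.reduce((sum, n) => sum + n, 0))
);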
I am making a splash screen for my app. I want it to last at least N seconds before going to the main screen.
I have an Rx variable myObservable that returns data from the server or from my local cache. How do I force myObservable to complete in at least N seconds?
myObservable
// .doStuff to make it last at least N seconds
.subscribe(...)
You can use forkJoin to wait until two Observables complete:
Observable.forkJoin(myObservable, Observable.timer(N), data => data)
.subscribe(...);
For RxJS 6, without the deprecated result selector function (timer is imported directly from 'rxjs'):
forkJoin(myObservable, timer(N)).pipe(
map(([data]) => data),
)
.subscribe(...);
Edit: As mentioned in the comments, timer(N) with just one parameter will complete after emitting one item, so there's no need to use take(1).
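A quick sketch (not from the original answer) of the difference between timer's one-argument and two-argument forms:

// timer(2000) emits a single 0 after two seconds and then completes,
// so forkJoin can use it directly as a minimum-delay source
timer(2000).subscribe({
  next: v => console.log('next', v),   // logs: next 0
  complete: () => console.log('done'), // logs: done
});

// timer(2000, 1000) would keep emitting every second and never
// complete on its own; that form would need a take(1)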
Angular 7+ example of forkJoin
I like to build in a higher delay on my development system, since I assume production will be slower anyway. Observable.timer doesn't seem to be available any longer, but you can use timer directly.
forkJoin(
// any observable such as your service that handles server coms
myObservable,
// or http will work like this
// this.http.get( this.url ),
// tune values for your app so very quick loads don't look strange
timer( environment.production ? 133 : 667 ),
).subscribe( ( response: any ) => {
// since we aren't remapping the response you could have multiple
// and access them in order as an array
this.dataset = response[0] || [];
// the delay is only really useful if some visual state is changing once loaded
this.loading = false;
});
I searched for the usage of defer in RxJS, but I still don't understand why and when to use it.
As I understand it, no Observable method fires before someone subscribes to it.
If that's the case, then why do we need to wrap an Observable method with defer?
An example
I'm still wondering why it wraps the Observable with defer. Does it make any difference?
var source = Rx.Observable.defer(function () {
return Rx.Observable.return(42);
});
var subscription = source.subscribe(
function (x) { console.log('Next: ' + x); },
function (err) { console.log('Error: ' + err); },
function () { console.log('Completed'); } );
Quite simply, because Observables can encapsulate many different types of sources, and those sources don't necessarily have to obey that interface. Some, like Promises, always execute eagerly.
Consider:
var promise = $.get('https://www.google.com');
The promise in this case is already executing before any handlers have been connected. If we want this to act more like an Observable then we need some way of deferring the creation of the promise until there is a subscription.
Hence we use defer to create a block that only gets executed when the resulting Observable is subscribed to.
Observable.defer(() => $.get('https://www.google.com'));
The above will not create the Promise until the Observable gets subscribed to, and will thus behave much more in line with the standard Observable interface.
Take for example (From this article):
const source = Observable.defer(() => Observable.of(
Math.floor(Math.random() * 100)
));
Why not just set the source Observable to of(Math.floor(Math.random() * 100))?
Because if we do that the expression Math.floor(Math.random() * 100) will run right away and be available in source as a value before we subscribe to source.
We want to delay the evaluation of the expression so we wrap of in defer. Now the expression Math.floor(Math.random() * 100) will be evaluated when source is subscribed to and not any time earlier.
We are wrapping of(...) in the defer factory function such that the construction of of(...) happens when the source observable is subscribed to.
It would be easier to understand if we consider using dates.
const s1 = of(new Date()); //will capture current date time
const s2 = defer(() => of(new Date())); //will capture date time at the moment of subscription
For both observables (s1 and s2) we need to subscribe. But when s1 is subscribed, it will give the date-time from the moment the constant was set. s2 will give the date-time at the moment of subscription.
The code above was taken from https://www.learnrxjs.io/operators/creation/defer.html
For example, let's say you want to send a request to a server. You have two options.
Via XmlHttpRequest
If you do not subscribe to an Observable created with Observable.create(fn), no network request is made. The request is sent only when you subscribe. This is normal and how it should be with Observables; it's their main beauty.
Via Promise (fetch, rx.fromPromise)
When you use Promises, it does not work that way. Whether you subscribed or not, the network request is sent right away. To fix this, you need to wrap the promise in defer(fn).
Actually, you can fully replace defer with a regular function, but you have to call the function before subscribing.
function createObservable() {
return from(fetch('https://...'));
}
createObservable().subscribe(...);
In the case of defer, you only need to pass the createObservable function to defer.
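In other words, the deferred version of the same sketch would look like this (using the createObservable function from above):

import { defer } from 'rxjs';

// defer calls createObservable once per subscriber, so the fetch
// is not issued until subscribe time
const obs = defer(createObservable);
obs.subscribe(...);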
Let's say you want to create an observable which, when subscribed to, performs an ajax request.
If you try the code below, the ajax request will be performed immediately, and after 5 seconds the response object will be printed, which is not what you want.
const obs = from(fetch('http://jsonplaceholder.typicode.com/todos/1'));
setTimeout(()=>obs.subscribe((resp)=>console.log(resp)), 5000)
One solution is to manually create an Observable like below.
In this case the ajax request will be performed after 5 seconds (when subscribe() is called):
let obs = new Observable(observer => {
from(fetch('http://jsonplaceholder.typicode.com/todos/1')).subscribe(observer)
});
setTimeout(()=>obs.subscribe((resp)=>console.log(resp)), 5000)
defer achieves the above in a more straightforward way, and also without needing from() to convert the promise to an observable:
const obs = defer(()=>fetch('http://jsonplaceholder.typicode.com/todos/1'))
setTimeout(()=>obs.subscribe((resp)=>console.log(resp)), 5000)
So I'm attempting to use Rx to recompose chunked messages identified by ID, and I'm having a problem terminating the final observable. I have a Message class which consists of Id, Total Size, Payload, Chunk Number and Type, and I have the following client-side code:
I need to calculate the number of messages to Take at runtime
(from messages in
(from messageArgs in Receive select Serializer.Deserialize<Message>(new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
group messages by messages.Id into grouped select grouped)
.Subscribe(g =>
{
var cache = new List<Message>();
g.TakeWhile((int) Math.Ceiling(MaxPayload/g.First().Size) < cache.Count)
.Subscribe(cache.Add,
_ => { /* Rebuild Message Parts From Cache */ });
});
First I create a grouped observable, filtering messages by their unique ID, and then I try to cache all messages in each group until I have collected them all; then I sort them and put them together. The above seems to block on g.First().
I need a way to calculate the number of messages to take from the first (or any) of the messages that come through, but I'm having difficulty doing so. Any help?
First is a blocking operator (how else can it return T and not IObservable<T>?)
I think using Scan (which builds an aggregate over time) could be what you need. Using Scan, you can hide the "state" of your message re-construction in a "builder" object.
MessageBuilder.IsComplete returns true when the total size of the messages it has received reaches MaxPayload (or whatever your requirements are). MessageBuilder.Build() then returns the reconstructed message.
I've also moved your "message building" code into a SelectMany, which keeps the built messages within the monad.
(Apologies for reformatting the code into extension methods, I find it difficult to read/write mixed LINQ syntax)
Receive
.Select(messageArgs => Serializer.Deserialize<Message>(
new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
.GroupBy(message => message.Id)
.SelectMany(group =>
{
// Use the builder to "add" message parts to the aggregate
return group.Scan(new MessageBuilder(), (builder, messagePart) =>
{
builder.AddPart(messagePart);
return builder;
})
.SkipWhile(builder => !builder.IsComplete)
.Select(builder => builder.Build());
})
.Subscribe(OnMessageReceived);