How to best implement a Promise semaphore? - promise

I use a semaphore for two processes that share a resource (a REST API endpoint) which can't be called concurrently. I do:
let tokenSemaphore = null;
class restApi {
    async getAccessToken() {
        let tokenResolve;
        if (tokenSemaphore) {
            await tokenSemaphore;
        }
        tokenSemaphore = new Promise((resolve) => tokenResolve = resolve);
        return new Promise(async (resolve, reject) => {
            // ...
            resolve(accessToken);
            tokenResolve();
            tokenSemaphore = null;
        });
    }
}
But this looks too complicated. Is there a simpler way to achieve the same thing?
And how would I do it for more than two concurrent processes?

This is not a server-side semaphore. Locking processes that run independently in different threads requires interprocess communication; in that case the API must support something like that on the server side, and this answer is not for you.
As this was the first hit when googling for "JavaScript Promise Semaphore", here is what I came up with:
function Semaphore(max, fn, ...a1)
{
    let run = 0;
    const waits = [];
    function next(x)
    {
        if (run < max && waits.length)
            waits.shift()(++run);
        return x;
    }
    return (...a2) => next(new Promise(ok => waits.push(ok)).then(() => fn(...a1, ...a2)).finally(_ => run--).finally(next));
}
Example use (the above is (nearly) copied from my code; the following was typed in directly and hence is not tested):
// do not execute more than 20 fetches in parallel:
const fetch20 = Semaphore(20, fetch);

async function retry(...a)
{
    for (let retries = 0;; retries++)
    {
        if (retries)
            await new Promise(ok => setTimeout(ok, 100 * retries));
        try {
            return await fetch20(...a);
        } catch (e) {
            console.log(`retry ${retries}`, a[0], e);
        }
    }
}
and then
for (let i=0; ++i<10000000; ) retry(`https://example.com/?${i}`);
My browser handles thousands of asynchronous parallel calls to retry very well. However, when using fetch directly, the tab crashes almost instantly.
For your usage you probably need something like:
async function access_token_api_call()
{
    // assume this takes 10s and must not be called in parallel for setting the Cookie
    return fetch('https://api.example.com/nonce').then(r => r.json());
}

const get_access_token = Semaphore(1, access_token_api_call);

// both processes need to use the same(!) Semaphore, of course
async function process(...args)
{
    const token = await get_access_token();
    // processing args here
    return //something;
}

proc1 = process(1);
proc2 = process(2);
Promise.all([proc1, proc2]).then( //etc.
YMMV.
Notes:
This assumes that your two processes are just asynchronous functions of the same single JS script (i.e. running in the same tab).
A browser usually does not open more than 5 concurrent connections to a backend and then pipelines excess requests. fetch20 is my workaround for a real-world problem where a JS frontend needs to queue, say, 5000 fetches in parallel, which crashes my browser (for an unknown reason). It's 2021, and that should not be a problem, right?

But this looks too complicated.
Not complicated enough, I'm afraid. Currently, if multiple code paths call getAccessToken when the semaphore is taken, they'll all block on the same tokenSemaphore instance, and when the semaphore is released, they'll all be released and resolve roughly at the same time, allowing concurrent access to the API.
In order to write an asynchronous lock (or semaphore), you'll need a collection of futures (tokenResolvers). When one is released, it should only remove and resolve a single future from that collection.
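As a rough illustration, here is a minimal sketch of such a lock with a FIFO queue of resolvers (the fetchTokenFromApi helper is hypothetical, standing in for the actual API call):

// Minimal sketch of an async lock: a FIFO queue of resolvers.
// acquire() resolves immediately when the lock is free; otherwise the
// caller waits until a previous holder calls release().
class AsyncLock {
    constructor() {
        this.locked = false;
        this.waiters = []; // resolve functions of pending acquire() calls
    }
    acquire() {
        if (!this.locked) {
            this.locked = true;
            return Promise.resolve();
        }
        return new Promise((resolve) => this.waiters.push(resolve));
    }
    release() {
        const next = this.waiters.shift();
        if (next) next(); // hand the lock to exactly one waiter
        else this.locked = false;
    }
}

const tokenLock = new AsyncLock();
async function getAccessToken() {
    await tokenLock.acquire();
    try {
        return await fetchTokenFromApi(); // hypothetical API call
    } finally {
        tokenLock.release();
    }
}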
I played around with it a bit in TypeScript a few years ago, but never tested or used the code. My Gist is also C#-ish (using "disposables" and whatnot); it needs some updating to use more natural JS patterns.

Related

Blazor and ContinueWith on page load

I am new to Blazor. I'm working on a Server App (not WASM).
On a page load, I am loading 2 things to a page. Item1 is loaded at the same time as Item2 via API calls. I'd like individual items to show up as soon as they are available. I was thinking something like
protected override async Task OnInitializedAsync()
{
    var t1 = _service.GetItem1().ContinueWith(async t => {
        Item1Loaded = true;
        Item1State = t.Result;
        await InvokeAsync(StateHasChanged);
    });
    var t2 = _service.GetItem2().ContinueWith(async t => {
        Item2Loaded = true;
        Item2State = t.Result;
        await InvokeAsync(StateHasChanged);
    });
}
I have a couple questions about this though:
Do I need to worry about canceling these tasks if the user navigates away from the component? (Would changing a state variable after the component is removed cause a problem?) Or does Blazor handle that at the framework level somehow?
Do I need to ensure this gets back to a certain thread with a synchronization context? It seems like InvokeAsync just does this for me, but I want to be sure.
It's hard to find many modern examples of ContinueWith. async/await is dominant, but I don't think it allows continuations to execute in the order they complete. Is this a reasonable use of it?
Since you are using the server side, you can do this more cleanly (ContinueWith is more or less obsolete since async/await):
protected override async Task OnInitializedAsync()
{
    var t1 = Task.Run(async () =>
    {
        Item1State = await _service.GetItem1();
        Item1Loaded = true; // you can probably derive this from Item1State
        await InvokeAsync(StateHasChanged);
    });
    var t2 = Task.Run(async () =>
    {
        Item2State = await _service.GetItem2();
        Item2Loaded = true;
        await InvokeAsync(StateHasChanged);
    });
    await Task.WhenAll(t1, t2);
}
No need to call StateHasChanged() here.
Without the ItemLoaded guards you could do this without Task.Run().
Do I need to worry about canceling these lines if the user navigates away
Most modern DB stuff can be passed a cancellation token, so use that if you wish to cancel the operation. If it's your own code and the operations are long-running, consider using cancellation tokens.
Do I need to ensure this gets back to a certain thread with a Synchronization Context?
Calling await InvokeAsync(StateHasChanged); ensures the UI code is run on the UI thread.
Question 3
Stick with simple await. Yes, it executes in order. And call await InvokeAsync(StateHasChanged) between operations to update the UI if you have more than two await operations.
Note: this will only work if you await real async code, not sync code wrapped in a Task! If there are no yields, the Renderer gets no thread time, so the UI doesn't get updated until it does.

How do I append to an observable inside the observable itself

My situation is as follows: I am performing sequential HTTP requests, where one HTTP request depends on the response of the previous. I would like to combine the response data of all these HTTP requests into one observable. I have implemented this before using an async generator. The code for this was relatively simple:
async function* AsyncGeneratorVersion() {
    let moreItems = true; // whether there is a next page
    let lastAssetId: string | undefined = undefined; // used for pagination
    while (moreItems) {
        // fetch current batch (this performs the HTTP request)
        const batch = await this.getBatch(/* arguments */, lastAssetId);
        moreItems = batch.more_items;
        lastAssetId = batch.last_assetid;
        yield* batch.getSteamItemsWithDescription();
    }
}
I am trying to move away from async generators, and towards RxJs Observables. My best (and working) attempt is as follows:
const observerVersion = new Observable<SteamItem>((subscriber) => {
    (async () => {
        let moreItems = true;
        let lastAssetId: string | undefined = undefined;
        while (moreItems) {
            // fetch current batch (this performs the HTTP request)
            const batch = await this.getBatch(/* arguments */, lastAssetId);
            moreItems = batch.more_items;
            lastAssetId = batch.last_assetid;
            const items = batch.getSteamItemsWithDescription();
            for (const item of items) subscriber.next(item);
        }
        subscriber.complete();
    })();
});
Now, I believe that there must be some way of improving this Observer variant; this code does not seem very reactive to me. I have tried several things using pipe, but unfortunately these were all unsuccessful.
I found concatMap to come close to a solution. It allowed me to concatenate the next HTTP request as an observable (done with the this.getBatch method); however, I could not find a good way to avoid abandoning the response of the current HTTP request.
How can this be achieved? In short, I believe this problem could be described as appending data to an observable inside the observable itself. (But perhaps this is not a good way of handling this situation.)
TLDR;
Here's a working StackBlitz demo.
Explanation
Here would be my approach:
// Faking an actual request
const makeReq = (prevArg, response) =>
    new Promise((r) => {
        console.log(`Running promise with the prev arg as: ${prevArg}!`);
        setTimeout(r, 1000, { prevArg, response });
    });

// Preparing the sequential requests.
const args = [1, 2, 3, 4, 5];

from(args)
    .pipe(
        // Running the requests sequentially.
        mergeScan(
            (acc, crtVal) => {
                // `acc?.response` will refer to the previous response
                // and we're using it for the next request.
                return makeReq(acc?.response, crtVal);
            },
            // The seed (works the same as `reduce`).
            null,
            // Making sure that only one request is run at a time.
            1
        ),
        // Combining all the responses into one object
        // and emitting it after all the requests are done.
        reduce((acc, val, idx) => ({ ...acc, [`request${idx + 1}`]: val }), {})
    )
    .subscribe(console.warn);
Firstly, from(array) will emit each item from the array, synchronously and one by one.
Then there is mergeScan. It is exactly the result of combining scan and merge. With scan, we can accumulate values (in this case we're using it to have access to the response of the previous request), and what merge does is allow us to use observables.
To make things a bit easier to understand, think of the Array.prototype.reduce function. It looks something like this:
[].reduce((acc, value) => { return { ...acc }}, /* Seed value */{});
What merge does in mergeScan is allow us to write the accumulator as something like (acc, value) => new Observable(...) instead of return { ...acc }. The latter indicates synchronous behavior, whereas with the former we can have asynchronous behavior.
Let's go a bit step by step:
when 1 is emitted, makeReq(undefined, 1) will be invoked
after the first makeReq(from above) resolves, makeReq(1, 2) will be invoked
after makeReq(1, 2) resolves, makeReq(2, 3) will be invoked and so on...
Somebody I consulted regarding this matter came up with this solution, which I think is quite elegant:
defer(() => this.getBatch(options)).pipe(
    expand(({ more_items, last_assetid }) =>
        more_items
            ? this.getBatch({ ...options, startAssetId: last_assetid })
            : EMPTY,
    ),
    concatMap((batch) => batch.getSteamItemsWithDescription()),
);
From my understanding, the use of expand here is very similar to the use of mergeScan in @Andrei's answer.
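For anyone unfamiliar with expand, here is a self-contained sketch against a mocked paginated API; the mock and its field names mirror the snippet above and are not from the real service:

import { defer, EMPTY, of } from 'rxjs';
import { expand, concatMap, delay } from 'rxjs/operators';

// Mocked paginated endpoint: three pages of two items each.
const pages = [
    { items: ['a', 'b'], more_items: true,  last_assetid: '2' },
    { items: ['c', 'd'], more_items: true,  last_assetid: '4' },
    { items: ['e', 'f'], more_items: false, last_assetid: '6' },
];
const getBatch = (startAssetId) =>
    of(pages[startAssetId ? Number(startAssetId) / 2 : 0]).pipe(delay(100));

defer(() => getBatch(undefined)).pipe(
    // expand feeds each emitted batch back into getBatch until more_items is false
    expand((batch) => (batch.more_items ? getBatch(batch.last_assetid) : EMPTY)),
    // flatten each batch into individual item emissions
    concatMap((batch) => batch.items)
).subscribe((item) => console.log(item)); // a b c d e f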

Aren't promises just callbacks? [duplicate]

I've been developing JavaScript for a few years and I don't understand the fuss about promises at all.
It seems like all I do is change:
api(function(result){
    api2(function(result2){
        api3(function(result3){
            // do work
        });
    });
});
Which I could use a library like async for anyway, with something like:
api().then(function(result){
    api2().then(function(result2){
        api3().then(function(result3){
            // do work
        });
    });
});
Which is more code and less readable. I didn't gain anything here; it's not suddenly magically 'flat' either. Not to mention having to convert things to promises.
So, what's the big fuss about promises here?
Promises are not callbacks. A promise represents the future result of an asynchronous operation. Of course, writing them the way you do, you get little benefit. But if you write them the way they are meant to be used, you can write asynchronous code in a way that resembles synchronous code and is much more easy to follow:
api().then(function(result){
    return api2();
}).then(function(result2){
    return api3();
}).then(function(result3){
    // do work
});
Certainly, not much less code, but much more readable.
But this is not the end. Let's discover the true benefits: What if you wanted to check for an error in any of the steps? It would be hell to do with callbacks, but with promises, it's a piece of cake:
api().then(function(result){
    return api2();
}).then(function(result2){
    return api3();
}).then(function(result3){
    // do work
}).catch(function(error) {
    // handle any error that may occur before this point
});
Pretty much the same as a try { ... } catch block.
Even better:
api().then(function(result){
    return api2();
}).then(function(result2){
    return api3();
}).then(function(result3){
    // do work
}).catch(function(error) {
    // handle any error that may occur before this point
}).then(function() {
    // do something whether there was an error or not,
    // like hiding a spinner if you were performing an AJAX request
});
And even better: What if those 3 calls to api, api2, and api3 could run simultaneously (e.g. if they were AJAX calls), but you needed to wait for all three? Without promises, you would have to create some sort of counter. With promises, using the ES6 notation, it's another piece of cake and pretty neat:
Promise.all([api(), api2(), api3()]).then(function(result) {
    // do work. result is an array containing the values of the three fulfilled promises.
}).catch(function(error) {
    // handle the error. At least one of the promises rejected.
});
Hope you see Promises in a new light now.
Yes, Promises are asynchronous callbacks. They can't do anything that callbacks can't do, and you face the same problems with asynchrony as with plain callbacks.
However, Promises are more than just callbacks. They are a very mighty abstraction that allows cleaner and better functional code with less error-prone boilerplate.
So what's the main idea?
Promises are objects representing the result of a single (asynchronous) computation. They resolve to that result only once. There are a few things this means:
Promises implement an observer pattern:
You don't need to know the callbacks that will use the value before the task completes.
Instead of expecting callbacks as arguments to your functions, you can easily return a Promise object
The promise will store the value, and you can transparently add a callback whenever you want. It will be called when the result is available. "Transparency" implies that when you have a promise and add a callback to it, it doesn't make a difference to your code whether the result has arrived yet - the API and contracts are the same, simplifying caching/memoisation a lot.
You can add multiple callbacks easily
Promises are chainable (monadic, if you want):
If you need to transform the value that a promise represents, you map a transform function over the promise and get back a new promise that represents the transformed result. You cannot synchronously get the value to use it somehow, but you can easily lift the transformation in the promise context. No boilerplate callbacks.
If you want to chain two asynchronous tasks, you can use the .then() method. It will take a callback to be called with the first result, and returns a promise for the result of the promise that the callback returns.
Sounds complicated? Time for a code example.
var p1 = api1(); // returning a promise
var p3 = p1.then(function(api1Result) {
    var p2 = api2(); // returning a promise
    return p2;       // The result of p2 …
});                  // … becomes the result of p3
// So it does not make a difference whether you write
api1().then(function(api1Result) {
    return api2().then(console.log)
})
// or the flattened version
api1().then(function(api1Result) {
    return api2();
}).then(console.log)
Flattening does not come magically, but you can easily do it. For your heavily nested example, the (near) equivalent would be
api1().then(api2).then(api3).then(/* do-work-callback */);
If seeing the code of these methods helps understanding, here's a very basic promise lib in a few lines.
What's the big fuss about promises?
The Promise abstraction allows much better composability of functions. For example, next to then for chaining, the all function creates a promise for the combined result of multiple parallel-waiting promises.
Last but not least, Promises come with integrated error handling. The result of the computation might be that either the promise is fulfilled with a value, or it is rejected with a reason. All the composition functions handle this automatically and propagate errors in promise chains, so that you don't need to care about it explicitly everywhere - in contrast to a plain-callback implementation. In the end, you can add a dedicated error callback for all exceptions that occurred.
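A small sketch of that propagation: a rejection anywhere in the chain skips the remaining fulfillment handlers and lands in the nearest rejection handler:

Promise.resolve(1)
    .then(() => { throw new Error("boom"); })
    .then(() => console.log("never runs"))                // skipped on rejection
    .catch((err) => console.log("caught:", err.message)); // "caught: boom"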
Not to mention having to convert things to promises.
That's quite trivial actually with good promise libraries, see How do I convert an existing callback API to promises?
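As a quick illustration of the manual approach (Node's built-in util.promisify achieves the same for error-first callback APIs):

const fs = require("fs");

// Wrapping a Node-style (error-first) callback API in a promise by hand:
function readFilePromise(path) {
    return new Promise((resolve, reject) => {
        fs.readFile(path, "utf8", (err, data) => {
            if (err) reject(err);
            else resolve(data);
        });
    });
}

// Node's built-in helper does the same kind of wrapping:
// const readFilePromise = require("util").promisify(fs.readFile);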
In addition to the already established answers, with ES6 arrow functions Promises turn from a modestly shining small blue dwarf straight into a red giant that is about to collapse into a supernova:
api().then(result => api2()).then(result2 => api3()).then(result3 => console.log(result3))
As oligofren pointed out, without arguments between api calls you don't need the anonymous wrapper functions at all:
api().then(api2).then(api3).then(r3 => console.log(r3))
And finally, if you want to reach a supermassive black hole level, Promises can be awaited:
async function callApis() {
    let api1Result = await api();
    let api2Result = await api2(api1Result);
    let api3Result = await api3(api2Result);
    return api3Result;
}
In addition to the awesome answers above, 2 more points may be added:
1. Semantic difference:
Promises may already be resolved upon creation. This means they guarantee conditions rather than events. If they are already resolved, the resolve handler passed to them is still called.
Conversely, callbacks handle events. So, if the event you are interested in has happened before the callback has been registered, the callback is not called.
2. Inversion of control
Callbacks involve inversion of control. When you register a callback function with any API, the JavaScript runtime stores the callback function and calls it from the event loop once it is ready to be run.
Refer to The JavaScript Event Loop for an explanation.
With Promises, control resides with the calling program. The .then() method may be called at any time if we store the promise object.
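A small sketch illustrating both points, assuming nothing beyond standard Promise behavior: a handler attached long after resolution still fires, and the caller decides when to attach it:

const p = Promise.resolve(42); // already resolved at creation

setTimeout(() => {
    // Attached one second after resolution; still called with the stored value.
    p.then((value) => console.log("late subscriber got", value));
}, 1000);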
In addition to the other answers, the ES2015 syntax blends seamlessly with promises, reducing even more boilerplate code:
// Sequentially:
api1()
    .then(r1 => api2(r1))
    .then(r2 => api3(r2))
    .then(r3 => {
        // Done
    });

// Parallel:
Promise.all([
    api1(),
    api2(),
    api3()
]).then(([r1, r2, r3]) => {
    // Done
});
Promises are not callbacks; both are programming idioms that facilitate async programming. Using an async/await style of programming with coroutines or generators that return promises could be considered a third such idiom. A comparison of these idioms across different programming languages (including JavaScript) is here: https://github.com/KjellSchubert/promise-future-task
No, not at all.
Callbacks are simply functions in JavaScript that are called and executed after another function has finished executing. But how does that happen?
Actually, in JavaScript, functions are themselves objects, and hence, like all other objects, they can be passed as arguments to other functions. The most common and generic use case one can think of is the setTimeout() function.
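For example:

// setTimeout receives a function as an argument, and the runtime
// calls it back later:
setTimeout(function () {
    console.log("called after roughly one second");
}, 1000);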
Promises are nothing but a much-improved approach to handling and structuring asynchronous code, compared to doing the same with callbacks.
The Promise constructor receives two callbacks: resolve and reject. These callbacks provide us with fine-grained control over error handling and success cases. The resolve callback is used when the promise's work completes successfully, and the reject callback is used to handle error cases.
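A minimal sketch of those two callbacks in use (the delay helper is made up for illustration):

const delay = (ms) =>
    new Promise((resolve, reject) => {
        if (ms < 0) reject(new Error("delay must be non-negative"));
        else setTimeout(() => resolve(ms), ms);
    });

delay(100)
    .then((ms) => console.log(`waited ${ms}ms`)) // success path
    .catch((err) => console.error(err.message)); // error path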
No, promises are just wrappers on callbacks.
Example: you can use JavaScript's native promises with Node.js.
My Cloud9 code link: https://ide.c9.io/adx2803/native-promises-in-node
/**
 * Created by dixit-lab on 20/6/16.
 */
var express = require('express');
var request = require('request'); // Simplified HTTP request client.
var app = express();

function promisify(url) {
    return new Promise(function (resolve, reject) {
        request.get(url, function (error, response, body) {
            if (!error && response.statusCode == 200) {
                resolve(body);
            }
            else {
                reject(error);
            }
        });
    });
}

// get all the albums of a user who has posted post 100
app.get('/listAlbums', function (req, res) {
    // get the post with post id 100
    promisify('http://jsonplaceholder.typicode.com/posts/100')
        .then(function (result) {
            var obj = JSON.parse(result);
            return promisify('http://jsonplaceholder.typicode.com/users/' + obj.userId + '/albums');
        })
        .catch(function (e) {
            console.log(e);
        })
        .then(function (result) {
            res.end(result);
        });
});

var server = app.listen(8081, function () {
    var host = server.address().address;
    var port = server.address().port;
    console.log("Example app listening at http://%s:%s", host, port);
});

// run the web service in a browser: http://localhost:8081/listAlbums
JavaScript Promises actually use callback functions to determine what to do after a Promise has been resolved or rejected, so the two are not fundamentally different. The main idea behind Promises is to take callbacks, especially nested callbacks where you want to perform a sequence of actions, and structure them in a way that is more readable.
Promises overview:
In JS we can wrap asynchronous operations (e.g. database calls, AJAX calls) in promises. Usually we want to run some additional logic on the retrieved data. JS promises have handler functions which process the result of the asynchronous operation. The handler functions can even contain other asynchronous operations, which can rely on the value of the previous asynchronous operation.
A promise is always in one of the following 3 states:
pending: starting state of every promise, neither fulfilled nor rejected.
fulfilled: The operation completed successfully.
rejected: The operation failed.
A pending promise can be resolved/fulfilled or rejected with a value. Then the following handler methods, which take callbacks as arguments, are called:
Promise.prototype.then() : When the promise is resolved the callback argument of this function will be called.
Promise.prototype.catch() : When the promise is rejected the callback argument of this function will be called.
Although the above methods still get callback arguments, they are far superior to using only callbacks. Here is an example that will clarify a lot:
Example
function createProm(resolveVal, rejectVal) {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            if (Math.random() > 0.5) {
                console.log("Resolved");
                resolve(resolveVal);
            } else {
                console.log("Rejected");
                reject(rejectVal);
            }
        }, 1000);
    });
}

createProm(1, 2)
    .then((resVal) => {
        console.log(resVal);
        return resVal + 1;
    })
    .then((resVal) => {
        console.log(resVal);
        return resVal + 2;
    })
    .catch((rejectVal) => {
        console.log(rejectVal);
        return rejectVal + 1;
    })
    .then((resVal) => {
        console.log(resVal);
    })
    .finally(() => {
        console.log("Promise done");
    });
The createProm function creates a promise which is resolved or rejected based on a random number after 1 second.
If the promise is resolved, the first then method is called and the resolved value is passed in as an argument of the callback.
If the promise is rejected, the first catch method is called and the rejected value is passed in as an argument.
The catch and then methods return promises; that's why we can chain them. They wrap any returned value in Promise.resolve and any thrown value (using the throw keyword) in Promise.reject. So any value returned is transformed into a promise, and on this promise we can again call a handler function.
Promise chains give us more fine-tuned control and a better overview than nested callbacks. For example, the catch method handles all the errors that occurred before the catch handler.
Promises allow programmers to write simpler and far more readable code than by using callbacks.
In a program, there are steps we want to do in series.
function f() {
    step_a();
    step_b();
    step_c();
    ...
}
There's usually information carried between each step.
function f() {
    const a = step_a( );
    const b = step_b( a );
    const c = step_c( b );
    ...
}
Some of these steps can take a (relatively) long time, so sometimes you want to do them in parallel with other things. One way to do that is using threads. Another is asynchronous programming. (Both approaches have pros and cons, which won't be discussed here.) Here, we're talking about asynchronous programming.
The simple way to achieve the above when using asynchronous programming would be to provide a callback which is called once a step is complete.
// step_* calls the provided function with the returned value once complete.
function f() {
    step_a(function( a ) {
        step_b(function( b ) {
            step_c(function( c ) {
                ...
            });
        });
    });
}
That's quite hard to read. Promises offer a way to flatten the code.
// step_* returns a promise.
function f() {
    step_a()
        .then( step_b )
        .then( step_c )
        ...
}
The object returned is called a promise because it represents the future result (i.e. promised result) of the function (which could be a value or an exception).
As much as promises help, it's still a bit complicated to use promises. This is where async and await come in. In a function declared as async, await can be used in lieu of then.
// step_* returns a promise.
async function f() {
    const a = await step_a( );
    const b = await step_b( a );
    const c = await step_c( b );
    ...
}
This is undeniably much much more readable than using callbacks.

Time-based cache for REST client using RxJs 5 in Angular2

I'm new to ReactiveX/RxJs and I'm wondering if my use case can be handled smoothly with RxJs, preferably with a combination of built-in operators. Here's what I want to achieve:
I have an Angular2 application that communicates with a REST API. Different parts of the application need to access the same information at different times. To avoid hammering the servers by firing the same request over and over, I'd like to add client-side caching. The caching should happen in a service layer, where the network calls are actually made. This service layer then just hands out Observables. The caching must be transparent to the rest of the application: it should only be aware of Observables, not the caching.
So initially, a particular piece of information from the REST API should be retrieved only once per, let's say, 60 seconds, even if there are a dozen components requesting this information from the service within those 60 seconds. Each subscriber must be given the (single) last value from the Observable upon subscription.
Currently, I managed to achieve exactly that with an approach like this:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = this.restService.get('/information/')
            .cache(1, 60000);
    }
    return this.information;
}
In this example, restService.get(...) performs the actual network call and returns an Observable, much like Angular's http Service.
The problem with this approach is refreshing the cache: While it makes sure the network call is executed exactly once, and that the cached value will no longer be pushed to new subscribers after 60 seconds, it doesn't re-execute the initial request after the cache expires. So subscriptions that occur after the 60sec cache will not be given any value from the Observable.
Would it be possible to re-execute the initial request if a new subscription happens after the cache timed out, and to re-cache the new value for 60sec again?
As a bonus: it would be even cooler if existing subscriptions (e.g. those who initiated the first network call) would get the refreshed value whose fetching had been initiated by the newer subscription, so that once the information is refreshed, it is immediately passed through the whole Observable-aware application.
I figured out a solution to achieve exactly what I was looking for. It might go against ReactiveX nomenclature and best practices, but technically, it does exactly what I want it to. That being said, if someone still finds a way to achieve the same with just built-in operators, I'll be happy to accept a better answer.
So basically, since I need a way to re-trigger the network call upon subscription (no polling, no timer), I looked at how ReplaySubject is implemented and even used it as my base class. I then created a callback-based class, RefreshingReplaySubject (naming improvements welcome!). Here it is:
export class RefreshingReplaySubject<T> extends ReplaySubject<T> {

    private providerCallback: () => Observable<T>;
    private lastProviderTrigger: number;
    private windowTime;

    constructor(providerCallback: () => Observable<T>, windowTime?: number) {
        // Cache exactly 1 item forever in the ReplaySubject
        super(1);
        this.windowTime = windowTime || 60000;
        this.lastProviderTrigger = 0;
        this.providerCallback = providerCallback;
    }

    protected _subscribe(subscriber: Subscriber<T>): Subscription {
        // Hook into the subscribe method to trigger refreshing
        this._triggerProviderIfRequired();
        return super._subscribe(subscriber);
    }

    protected _triggerProviderIfRequired() {
        let now = this._getNow();
        if ((now - this.lastProviderTrigger) > this.windowTime) {
            // Data considered stale, provider triggering required...
            this.lastProviderTrigger = now;
            this.providerCallback().first().subscribe((t: T) => this.next(t));
        }
    }
}
And here is the resulting usage:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = new RefreshingReplaySubject(
            () => this.restService.get('/information/'),
            60000
        );
    }
    return this.information;
}
To implement this, you will need to create your own observable with custom logic on subscription:
function createTimedCache(doRequest, expireTime) {
    let lastCallTime = 0;
    let lastResult = null;
    const result$ = new Rx.Subject();
    return Rx.Observable.create(observer => {
        const time = Date.now();
        if (time - lastCallTime < expireTime) {
            return (lastResult
                // when result already received
                ? result$.startWith(lastResult)
                // still waiting for result
                : result$
            ).subscribe(observer);
        }
        const disposable = result$.subscribe(observer);
        lastCallTime = time;
        lastResult = null;
        doRequest()
            .do(result => {
                lastResult = result;
            })
            .subscribe(v => result$.next(v), e => result$.error(e));
        return disposable;
    });
}
and the resulting usage would be the following:
this.information = createTimedCache(
    () => this.restService.get('/information/'),
    60000
);
usage example: https://jsbin.com/hutikesoqa/edit?js,console

Angular2: Example with multiple http calls (typeahead) with observables

So I am working on a couple of cases in my app where I need the following to happen.
When the event is triggered, do the following:
check if the data for that context is already cached; if so, serve the cached version
if there is no cache, debounce 500ms
check if other http calls are running (for the same context) and kill them
make the http call
on success, cache and update/replace the model data
Pretty much standard when it comes to typeahead functionality
I would like to use observables for this, in a way that lets me cancel previous calls if they are still running.
Any good tutorials on that? I was looking around and couldn't find anything remotely up to date.
OK, to give you some idea of what I have done so far:
onChartSelection(chart: any) {
    let date1: any, date2: any;
    try {
        date1 = Math.round(chart.xAxis[0].min);
        date2 = Math.round(chart.xAxis[0].max);
        let data = this.tableService.getCachedChartData(this.currentTable, date1, date2);
        if (data) {
            this.table.data = data;
        } else {
            if (this.chartTableRes) {
                this.chartTableRes.unsubscribe();
            }
            this.chartTableRes = this.tableService.getChartTable(this.currentTable, date1, date2)
                .subscribe(
                    data => {
                        console.log(data);
                        this.table.data = data;
                        this.chartTableRes = null;
                    },
                    error => {
                        console.log(error);
                    }
                );
        }
    } catch (e) {
        throw e;
    }
}
Debounce is still missing here, so I ended up implementing lodash's debounce:
import { debounce } from 'lodash';
...
onChartSelectionDebounced: Function;

constructor(...) {
    ...
    this.onChartSelectionDebounced = debounce(this.onChartSelection, 200);
}
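For completeness, the same flow can also be expressed with RxJS operators instead of lodash; switchMap additionally covers the "kill previous call" requirement by cancelling the in-flight request. This is only a sketch with a stand-in for the service call, not a drop-in for the component above:

import { Subject } from 'rxjs/Subject';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/of';
import 'rxjs/add/operator/delay';
import 'rxjs/add/operator/debounceTime';
import 'rxjs/add/operator/switchMap';

// Stand-in for tableService.getChartTable (hypothetical):
const getChartTable = (date1: number, date2: number): Observable<string> =>
    Observable.of(`data for ${date1}-${date2}`).delay(300);

const selections = new Subject<{ date1: number; date2: number }>();

selections
    .debounceTime(500)                               // requirement: debounce 500ms
    .switchMap(s => getChartTable(s.date1, s.date2)) // cancels the previous in-flight call
    .subscribe(data => console.log(data));

// The chart event handler only pushes values:
selections.next({ date1: 1, date2: 2 });
selections.next({ date1: 3, date2: 4 }); // only this one triggers a request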
For debounce you can use Underscore.js. The function will look this way:
onChartSelection: Function = _.debounce((chart: any) => {
    ...
});
Regarding cancellation of the Observable, it is better to use the Observable method share. In your case you should change the method getChartTable in your tableService by adding .share() to the Observable that you return.
This way there will be only one call made to the server even if you subscribe to it multiple times (without this, every new subscription will invoke a new call).
Take a look at: What is the correct way to share the result of an Angular 2 Http network call in RxJs 5?
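Concretely, the suggested change in the service might look like this; a sketch only, since the endpoint URL, the Http usage, and the response shape are assumptions:

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/share';

@Injectable()
export class TableService {
    constructor(private http: Http) {}

    getChartTable(table: string, date1: number, date2: number) {
        return this.http
            .get(`/api/tables/${table}/chart?from=${date1}&to=${date2}`) // hypothetical endpoint
            .map(res => res.json())
            .share(); // one underlying request, shared by all current subscribers
    }
}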
