subjectify methods with callbacks - rxjs

I often use the promisify method, which converts a method with a callback signature (e.g. function fn(cb) { ... }) to a method which returns a Promise. It can make the source code a lot cleaner and more compact. So far so good.
Slightly different are methods which take a callback that is called multiple times. In that case a Promise cannot do the trick, because a promise can only resolve once.
In theory, those methods could return a Subject (e.g. a BehaviorSubject), which would then emit multiple times.
This made me wonder:
Is there a subjectify method somewhere which can do this for me?
For example: it could be useful when there is a method which parses a big document. To report on its progress, it could use a callback method. But wrapping it in a Subject could be more convenient.

There are some operators that help you convert a callback to an observable:
https://rxjs-dev.firebaseapp.com/api/index/function/fromEventPattern
But you can also quite easily wire a callback into a Subject yourself, like below:
const progress = new Subject();
myEvent.on('progress', (p) => {
  progress.next(p);
});
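For completeness, here is a minimal sketch of the fromEventPattern route, assuming myEvent is an EventEmitter-like object with on/off methods (as in the snippet above); the off call is only needed if you want teardown on unsubscribe:
import { fromEventPattern } from 'rxjs';

// Minimal sketch: wrap the repeatedly-called 'progress' callback in an Observable.
// Assumes myEvent exposes on/off, as in the snippet above.
const progress$ = fromEventPattern<number>(
  handler => myEvent.on('progress', handler),   // runs when someone subscribes
  handler => myEvent.off('progress', handler)   // runs when they unsubscribe
);

progress$.subscribe(p => console.log('progress:', p));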

You may want to consider creating an Observable in cases where your callback is called repeatedly and you want to notify a stream of events as a result of the callback being called.
Let's consider, as an example, the Node readline interface, which fires a callback for each line read and another callback when the end of the file is reached.
In this case, we can create an Observable which emits for each line read and completes when the end is reached, as in the following example:
import * as fs from 'fs';
import * as readline from 'readline';
import { Observable, Observer, TeardownLogic } from 'rxjs';

function readLineObs(filePath: string): Observable<string> {
    return new Observable<string>(
        (observer: Observer<string>): TeardownLogic => {
            const rl = readline.createInterface({
                input: fs.createReadStream(filePath),
                crlfDelay: Infinity,
            });
            rl.on('line', (line: string) => {
                observer.next(line);
            });
            rl.on('close', () => {
                observer.complete();
            });
            // Tear down the readline interface if the subscriber unsubscribes early.
            return () => rl.close();
        },
    );
}
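A quick usage sketch (the file path is just illustrative):
// Emits each line of the file, then completes when the end is reached.
readLineObs('./some-file.txt').subscribe({
    next: line => console.log(line),
    complete: () => console.log('done reading'),
});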


Calling/Subscribing to a function with parameters, that returns an observable

This is somewhat related to a previous question I asked. The feature$ function in that question returns an observable with a map that uses the parameter passed to the function:
feature$ = (feature: string): Observable<FeatureConfig | null> => {
  return this.features$.pipe(
    map((features: FeatureConfig[]) => {
      return (
        features.find((featureConfig: FeatureConfig) => {
          return featureConfig.key === feature;
        })?.value ?? null
      );
    })
  );
};
This is then used like this elsewhere:
this.featureService
  .feature$("featureName")
  .subscribe((featureConfig: FeatureConfig) => {
    ...
  });
Or:
someFeature$ = this.featureService.feature$("featureName");
The features$ observable is (I think, by definition) a hot observable, as its value can change throughout the life of the observable and it never completes. While this seems to work for its intended purpose, I am just wondering what effect this has when there are many subscribers to that feature$ function. I fear there might be some unintended behavior that I am not immediately noticing.
Is this a bad pattern in general? And if so, is there a better pattern to do something similar? That is, subscribe to an observable created with a parameter passed to a function.
For example, would something like this be preferred?
feature$ = (featureName: string): Observable<FeatureConfig | null> => {
  return of(featureName).pipe(
    mergeMap((feature: string) => combineLatest([of(feature), this.features$])),
    map(([feature, features]: [string, FeatureConfig[]]) => {
      return (
        features.find((featureConfig: FeatureConfig) => {
          return featureConfig.key === feature;
        })?.value ?? null
      );
    })
  );
};
Or does it matter?
The second stream example is a bit overly complicated. Your features$ is a BehaviorSubject that may keep updating itself; your intent is only to take in a parameter, search through the features array and output the found feature, so the first form of the code is more appropriate.
As the source stream is a BehaviorSubject you will always have a value once you subscribe(). Just don't forget to unsubscribe() to prevent memory leaks, or alternatively use the take(1) or first() operator before subscribe().
When you create an observable from a function you get a new instance of that stream. It is a hot observable but not share()d, so filtering on 'featureA' wouldn't affect the result of filtering on 'featureB'. And yes, of() and combineLatest() really do nothing in your use case, as they wrap a static, unchanging function parameter.
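To make the take(1) suggestion concrete, here is a minimal sketch reusing the feature$ API from the question (the feature name is illustrative):
import { take } from 'rxjs/operators';

// Take the current value once and complete, so no manual unsubscribe is needed.
this.featureService
  .feature$('featureName')
  .pipe(take(1))
  .subscribe(value => {
    console.log('feature value:', value);
  });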

Bluebird promise was created but not returned warning [duplicate]

I was writing code that does something that looks like:
// Using a deferred object:
function getStuffDone(param) {
    var d = Q.defer(); /* or $q.defer */
    // or = new $.Deferred() etc.
    myPromiseFn(param+1)
    .then(function(val) { /* or .done */
        d.resolve(val);
    }).catch(function(err) { /* .fail */
        d.reject(err);
    });
    return d.promise; /* or promise() */
}

// Using the Promise constructor:
function getStuffDone(param) {
    return new Promise(function(resolve, reject) {
        // using a promise constructor
        myPromiseFn(param+1)
        .then(function(val) {
            resolve(val);
        }).catch(function(err) {
            reject(err);
        });
    });
}
Someone told me this is called the "deferred antipattern" or the "Promise constructor antipattern" respectively. What's bad about this code, and why is it called an antipattern?
The deferred antipattern (now explicit-construction anti-pattern) coined by Esailija is a common anti-pattern people who are new to promises make; I made it myself when I first used promises. The problem with the above code is that it fails to utilize the fact that promises chain.
Promises can chain with .then and you can return promises directly. Your code in getStuffDone can be rewritten as:
function getStuffDone(param){
    return myPromiseFn(param+1); // much nicer, right?
}
Promises are all about making asynchronous code more readable and behave like synchronous code without hiding that fact. Promises represent an abstraction over the value of a one-time operation; they abstract the notion of a statement or expression in a programming language.
You should only use deferred objects when you are converting an API to promises and can't do it automatically, or when you're writing aggregation functions that are easier expressed this way.
Quoting Esailija:
This is the most common anti-pattern. It is easy to fall into this when you don't really understand promises and think of them as glorified event emitters or callback utility. Let's recap: promises are about making asynchronous code retain most of the lost properties of synchronous code such as flat indentation and one exception channel.
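As a concrete illustration of the legitimate case above (converting a callback API to promises by hand), a sketch wrapping a Node-style error-first callback could look like this; note that Node nowadays ships util.promisify and fs.promises, so this is only for illustration:
const fs = require('fs');

// Legitimate use of the Promise constructor: wrapping a callback-only API.
function readFileAsync(path) {
    return new Promise(function (resolve, reject) {
        fs.readFile(path, 'utf8', function (err, data) {
            if (err) reject(err);   // propagate the error to the promise
            else resolve(data);     // fulfil with the file contents
        });
    });
}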
What's wrong with it?
But the pattern works!
Lucky you. Unfortunately, it probably doesn't, as you likely forgot some edge case. In more than half of the occurrences I've seen, the author has forgotten to take care of the error handler:
return new Promise(function(resolve) {
    getOtherPromise().then(function(result) {
        resolve(result.property.example);
    });
})
If the other promise is rejected, this will happen unnoticed instead of being propagated to the new promise (where it would get handled) - and the new promise stays forever pending, which can induce leaks.
The same thing happens in the case that your callback code causes an error - e.g. when result doesn't have a property and an exception is thrown. That would go unhandled and leave the new promise unresolved.
In contrast, using .then() does automatically take care of both these scenarios, and rejects the new promise when an error happens:
return getOtherPromise().then(function(result) {
    return result.property.example;
})
The deferred antipattern is not only cumbersome, but also error-prone. Using .then() for chaining is much safer.
But I've handled everything!
Really? Good. However, this will be pretty verbose and copious, especially if you use a promise library that supports other features like cancellation or message passing. Or maybe your library will in the future, or you will want to swap it for a better one? You won't want to rewrite your code for that.
The libraries' methods (then) do not only natively support all these features, they also might have certain optimisations in place. Using them will likely make your code faster, or at least allow it to be optimised by future revisions of the library.
How do I avoid it?
So whenever you find yourself manually creating a Promise or Deferred and already existing promises are involved, check the library API first. The Deferred antipattern is often applied by people who see promises [only] as an observer pattern - but promises are more than callbacks: they are supposed to be composable. Every decent library has lots of easy-to-use functions for the composition of promises in every thinkable manner, taking care of all the low-level stuff you don't want to deal with.
If you have found a need to compose some promises in a new way that is not supported by an existing helper function, writing your own function with unavoidable Deferreds should be your last option. Consider switching to a more featureful library, and/or file a bug against your current library. Its maintainer should be able to derive the composition from existing functions, implement a new helper function for you and/or help to identify the edge cases that need to be handled.
Now 7 years later there is a simpler answer to this question:
How do I avoid the explicit constructor antipattern?
Use async functions, then await every Promise!
Instead of manually constructing nested Promise chains such as this one:
function promised() {
    return new Promise(function(resolve) {
        getOtherPromise().then(function(result) {
            getAnotherPromise(result).then(function(result2) {
                resolve(result2);
            });
        });
    });
}
just turn your function async and use the await keyword to stop execution of the function until the Promise resolves:
async function promised() {
    const result = await getOtherPromise();
    const result2 = await getAnotherPromise(result);
    return result2;
}
This has various benefits:
Calling the async function always returns a Promise, which resolves with the returned value and rejects if an error gets thrown inside the async function
If an awaited Promise rejects, the error gets thrown inside the async function, so you can just try { ... } catch(error) { ... } it like synchronous errors.
You can await inside loops and if branches, making most of the Promise chain logic trivial (see the sketch after this list)
Although async functions behave mostly like chains of Promises, they are way easier to read (and easier to reason about)
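Here is the promised sketch of awaiting inside a loop; fetchPage is just a stand-in that resolves after a short delay:
// Sketch: each iteration waits for the previous request before starting the next.
const fetchPage = n => new Promise(resolve => setTimeout(() => resolve('page ' + n), 100));

async function fetchAllPages(pageCount) {
    const pages = [];
    for (let n = 1; n <= pageCount; n++) {
        pages.push(await fetchPage(n));
    }
    return pages;
}

fetchAllPages(3).then(pages => console.log(pages)); // ['page 1', 'page 2', 'page 3']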
How can I await a callback?
If the callback only calls back once, and the API you are calling does not provide a Promise already (most of them do!) this is the only reason to use a Promise constructor:
// Create a wrapper around the "old" function taking a callback, passing the 'resolve' function as callback
const delay = time => new Promise((resolve, reject) =>
    setTimeout(resolve, time)
);
await delay(1000);
If await stops execution, does calling an async function return the result directly?
No. If you call an async function, a Promise always gets returned. You can then await that Promise too, inside another async function. You cannot wait for the result inside a synchronous function (you would have to call .then and attach a callback).
Conceptually, synchronous functions always run to completion in one job, while async functions run synchronously till they reach an await, then they continue in another job.
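A small sketch of that last point, reusing the promised() function from above:
// Calling an async function from synchronous code gives you a Promise,
// not the value, so you attach .then (or await it inside another async function).
const pending = promised();   // a Promise, not the resolved value
pending.then(value => console.log('resolved with', value));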

'fromEvent' always observing, but 'from(myArray)' terminates observing after all values are processed

So I'm struggling a bit with my understanding of RxJS and observables.
Can someone explain why 'fromEvent' always reacts to a new event, no matter how much time has passed without a new value, yet pushing new values into an existing array that is being observed using 'from' doesn't work in the same way? If both are observables at this point and should react to async events, why do we need to use a 'Subject' for arrays?
I have seen that we need to use a 'Subject' for an array... but why? I would like to understand the reason/mechanism.
This is a little bit like asking why Array.forEach doesn't also iterate over items I add to the array via .push later, while addEventListener() continues listening for events even far in the future.
The behaviors of the two are different because the underlying data structures are different.
fromArray
For an Array the implementation is essentially:
function fromArray(array) {
    return new Observable(observer => {
        try {
            // Iterate through each item in the array and emit it to the Observer
            array.forEach(item => observer.next(item));
            // Array iteration is synchronous, so when we get here we are done iterating
            observer.complete();
        } catch (e) { observer.error(e) }
    })
}
Where the function passed to the observable gets run each time a subscriber subscribes to the Observable. Because Arrays don't have a mechanism to detect changes to them there is no way to listen for additional updates to a native array (note: I am ignoring monkey-patching or creating some sort of substitute array data type that does support such things for simplicity).
fromEvent
The fromEvent on the other hand would look more like:
function fromEvent(node, eventName, selector) {
    // Convert the passed in event or just use the identity
    const transform = selector || ((x) => x);
    // Construct an Observable using the constructor
    return new Observable(observer => {
        // Build a compatible handler, we also use this for the unsubscribe logic
        const nextHandler = (value) => {
            try {
                observer.next(transform(value));
            } catch (e) { observer.error(e); }
        };
        // Start listening for events
        node.addEventListener(eventName, nextHandler);
        // Return a way to tear down the subscription when we are done
        return () => node.removeEventListener(eventName, nextHandler);
    })
    // Shares the underlying Subscription across multiple subscribers
    // so we don't create new event handlers for each.
    .share();
}
Here we are simply wrapping the native event handler (obviously the real implementation is more robust than this). But because the underlying source is actually an event handler, which does have a mechanism to report new events (by definition, really), we continue to get events in perpetuity (or until we unsubscribe).
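To tie this back to the question: if you want array-like data that keeps emitting as new items arrive, you have to push the items through a Subject yourself, because a plain array cannot notify anyone. A minimal sketch:
import { Subject } from 'rxjs';

// A plain array can't announce changes, so we emit new items through a Subject.
const items$ = new Subject<number>();

items$.subscribe(item => console.log('got item', item));

// Whenever something would be "pushed", emit it through the Subject instead:
items$.next(1);
items$.next(2);
// items$ keeps listening; there is no implicit complete() like with from([1, 2]).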

Aren't promises just callbacks? [duplicate]

I've been developing JavaScript for a few years and I don't understand the fuss about promises at all.
It seems like all I do is change:
api(function(result){
    api2(function(result2){
        api3(function(result3){
            // do work
        });
    });
});
Which I could use a library like async for anyway, with something like:
api().then(function(result){
    api2().then(function(result2){
        api3().then(function(result3){
            // do work
        });
    });
});
Which is more code and less readable. I didn't gain anything here, it's not suddenly magically 'flat' either. Not to mention having to convert things to promises.
So, what's the big fuss about promises here?
Promises are not callbacks. A promise represents the future result of an asynchronous operation. Of course, writing them the way you do, you get little benefit. But if you write them the way they are meant to be used, you can write asynchronous code in a way that resembles synchronous code and is much easier to follow:
api().then(function(result){
    return api2();
}).then(function(result2){
    return api3();
}).then(function(result3){
    // do work
});
Certainly, not much less code, but much more readable.
But this is not the end. Let's discover the true benefits: What if you wanted to check for any error in any of the steps? It would be hell to do it with callbacks, but with promises it's a piece of cake:
api().then(function(result){
    return api2();
}).then(function(result2){
    return api3();
}).then(function(result3){
    // do work
}).catch(function(error) {
    // handle any error that may occur before this point
});
Pretty much the same as a try { ... } catch block.
Even better:
api().then(function(result){
    return api2();
}).then(function(result2){
    return api3();
}).then(function(result3){
    // do work
}).catch(function(error) {
    // handle any error that may occur before this point
}).then(function() {
    // do something whether there was an error or not,
    // like hiding a spinner if you were performing an AJAX request.
});
And even better: What if those 3 calls to api, api2, api3 could run simultaneously (e.g. if they were AJAX calls) but you needed to wait for all three? Without promises, you would have to create some sort of counter. With promises, using the ES6 notation, it's another piece of cake and pretty neat:
Promise.all([api(), api2(), api3()]).then(function(result) {
    // do work. result is an array containing the values of the three fulfilled promises.
}).catch(function(error) {
    // handle the error. At least one of the promises rejected.
});
Hope you see Promises in a new light now.
Yes, Promises are asynchronous callbacks. They can't do anything that callbacks can't do, and you face the same problems with asynchrony as with plain callbacks.
However, Promises are more than just callbacks. They are a very mighty abstraction that allows cleaner and better functional code with less error-prone boilerplate.
So what's the main idea?
Promises are objects representing the result of a single (asynchronous) computation. They resolve to that result only once. There are a few things this means:
Promises implement an observer pattern:
You don't need to know the callbacks that will use the value before the task completes.
Instead of expecting callbacks as arguments to your functions, you can easily return a Promise object
The promise will store the value, and you can transparently add a callback whenever you want. It will be called when the result is available. "Transparency" implies that when you have a promise and add a callback to it, it doesn't make a difference to your code whether the result has arrived yet - the API and contracts are the same, simplifying caching/memoisation a lot.
You can add multiple callbacks easily
Promises are chainable (monadic, if you want):
If you need to transform the value that a promise represents, you map a transform function over the promise and get back a new promise that represents the transformed result. You cannot synchronously get the value to use it somehow, but you can easily lift the transformation in the promise context. No boilerplate callbacks.
If you want to chain two asynchronous tasks, you can use the .then() method. It will take a callback to be called with the first result, and returns a promise for the result of the promise that the callback returns.
Sounds complicated? Time for a code example.
var p1 = api1(); // returning a promise
var p3 = p1.then(function(api1Result) {
    var p2 = api2(); // returning a promise
    return p2;       // The result of p2 …
});                  // … becomes the result of p3

// So it does not make a difference whether you write
api1().then(function(api1Result) {
    return api2().then(console.log)
})
// or the flattened version
api1().then(function(api1Result) {
    return api2();
}).then(console.log)
Flattening does not come magically, but you can easily do it. For your heavily nested example, the (near) equivalent would be
api1().then(api2).then(api3).then(/* do-work-callback */);
If seeing the code of these methods helps understanding, here's a most basic promise lib in a few lines.
What's the big fuss about promises?
The Promise abstraction allows much better composability of functions. For example, next to then for chaining, the all function creates a promise for the combined result of multiple parallel-waiting promises.
Last but not least Promises come with integrated error handling. The result of the computation might be that either the promise is fulfilled with a value, or it is rejected with a reason. All the composition functions handle this automatically and propagate errors in promise chains, so that you don't need to care about it explicitly everywhere - in contrast to a plain-callback implementation. In the end, you can add a dedicated error callback for all occurred exceptions.
Not to mention having to convert things to promises.
That's quite trivial actually with good promise libraries, see How do I convert an existing callback API to promises?
In addition to the already established answers, with ES6 arrow functions Promises turn from a modestly shining small blue dwarf straight into a red giant that is about to collapse into a supernova:
api().then(result => api2()).then(result2 => api3()).then(result3 => console.log(result3))
As oligofren pointed out, without arguments between api calls you don't need the anonymous wrapper functions at all:
api().then(api2).then(api3).then(r3 => console.log(r3))
And finally, if you want to reach a supermassive black hole level, Promises can be awaited:
async function callApis() {
    let api1Result = await api();
    let api2Result = await api2(api1Result);
    let api3Result = await api3(api2Result);
    return api3Result;
}
In addition to the awesome answers above, 2 more points may be added:
1. Semantic difference:
Promises may already be resolved upon creation. This means they guarantee conditions rather than events. If a promise is already resolved, a handler function passed to it is still called.
Conversely, callbacks handle events. So, if the event you are interested in has happened before the callback has been registered, the callback is not called.
2. Inversion of control
Callbacks involve inversion of control. When you register a callback function with any API, the Javascript runtime stores the callback function and calls it from the event loop once it is ready to be run.
Refer to The JavaScript Event Loop for an explanation.
With Promises, control resides with the calling program. The .then() method may be called at any time if we store the promise object.
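A small sketch of both points, using only standard Promise.resolve and setTimeout:
// A promise that is already fulfilled when we get hold of it.
const ready = Promise.resolve(42);

// Attaching the handler later still works: a promise guarantees a condition,
// not a one-off event, and the calling code decides when to call .then().
setTimeout(() => {
    ready.then(value => console.log('still get the value:', value)); // logs 42
}, 1000);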
In addition to the other answers, the ES2015 syntax blends seamlessly with promises, reducing even more boilerplate code:
// Sequentially:
api1()
    .then(r1 => api2(r1))
    .then(r2 => api3(r2))
    .then(r3 => {
        // Done
    });

// Parallel:
Promise.all([
    api1(),
    api2(),
    api3()
]).then(([r1, r2, r3]) => {
    // Done
});
Promises are not callbacks, both are programming idioms that facilitate async programming. Using an async/await-style of programming using coroutines or generators that return promises could be considered a 3rd such idiom. A comparison of these idioms across different programming languages (including Javascript) is here: https://github.com/KjellSchubert/promise-future-task
No, not at all.
Callbacks are simply functions in JavaScript which are called and then executed after the execution of another function has finished. So how does that happen?
Actually, in JavaScript functions are themselves considered objects and hence, like all other objects, they can be passed as arguments to other functions. The most common and generic use case one can think of is the setTimeout() function in JavaScript.
Promises are nothing but a much improved approach to handling and structuring asynchronous code, compared to doing the same with callbacks.
The Promise constructor receives two callbacks: resolve and reject. These callbacks inside promises provide us with fine-grained control over error handling and success cases. The resolve callback is used when the promise's work has completed successfully, and the reject callback is used to handle the error cases.
No, promises are just a wrapper on top of callbacks.
Example:
You can use JavaScript's native promises with Node.js.
my cloud 9 code link : https://ide.c9.io/adx2803/native-promises-in-node
/**
 * Created by dixit-lab on 20/6/16.
 */
var express = require('express');
var request = require('request'); // Simplified HTTP request client.
var app = express();

function promisify(url) {
    return new Promise(function (resolve, reject) {
        request.get(url, function (error, response, body) {
            if (!error && response.statusCode == 200) {
                resolve(body);
            }
            else {
                reject(error);
            }
        });
    });
}

// get all the albums of a user who has posted post 100
app.get('/listAlbums', function (req, res) {
    // get the post with post id 100
    promisify('http://jsonplaceholder.typicode.com/posts/100').then(function (result) {
        var obj = JSON.parse(result);
        return promisify('http://jsonplaceholder.typicode.com/users/' + obj.userId + '/albums');
    })
    .catch(function (e) {
        console.log(e);
    })
    .then(function (result) {
        res.end(result);
    });
});

var server = app.listen(8081, function () {
    var host = server.address().address;
    var port = server.address().port;
    console.log("Example app listening at http://%s:%s", host, port);
});

// run webservice in browser: http://localhost:8081/listAlbums
JavaScript Promises actually use callback functions to determine what to do after a Promise has been resolved or rejected, so the two are not fundamentally different. The main idea behind Promises is to take callbacks - especially nested callbacks where you want to perform a series of actions - and restructure them so the code is more readable.
Promises overview:
In JS we can wrap asynchronous operations (e.g database calls, AJAX calls) in promises. Usually we want to run some additional logic on the retrieved data. JS promises have handler functions which process the result of the asynchronous operations. The handler functions can even have other asynchronous operations within them which could rely on the value of the previous asynchronous operations.
A promise is always in one of the following 3 states:
pending: starting state of every promise, neither fulfilled nor rejected.
fulfilled: The operation completed successfully.
rejected: The operation failed.
A pending promise can be fulfilled or rejected with a value. Then the following handler methods, which take callbacks as arguments, are called:
Promise.prototype.then() : When the promise is resolved the callback argument of this function will be called.
Promise.prototype.catch() : When the promise is rejected the callback argument of this function will be called.
Although the above methods still get callback arguments, they are far superior to using only callbacks. Here is an example that will clarify a lot:
Example
function createProm(resolveVal, rejectVal) {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            if (Math.random() > 0.5) {
                console.log("Resolved");
                resolve(resolveVal);
            } else {
                console.log("Rejected");
                reject(rejectVal);
            }
        }, 1000);
    });
}

createProm(1, 2)
    .then((resVal) => {
        console.log(resVal);
        return resVal + 1;
    })
    .then((resVal) => {
        console.log(resVal);
        return resVal + 2;
    })
    .catch((rejectVal) => {
        console.log(rejectVal);
        return rejectVal + 1;
    })
    .then((resVal) => {
        console.log(resVal);
    })
    .finally(() => {
        console.log("Promise done");
    });
The createProm function creates a promise which is resolved or rejected based on a random number after 1 second.
If the promise is resolved, the first then method is called and the resolved value is passed in as an argument of the callback.
If the promise is rejected, the first catch method is called and the rejected value is passed in as an argument.
The catch and then methods return promises that's why we can chain them. They wrap any returned value in Promise.resolve and any thrown value (using the throw keyword) in Promise.reject. So any value returned is transformed into a promise and on this promise we can again call a handler function.
Promise chains give us more fine tuned control and better overview than nested callbacks. For example the catch method handles all the errors which have occurred before the catch handler.
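A tiny sketch of that wrapping behaviour (the values and messages are illustrative):
Promise.resolve(1)
    .then(value => {
        throw new Error('boom');             // becomes a rejected promise
    })
    .catch(err => {
        console.log('caught:', err.message); // handles the error thrown above
        return 10;                           // wrapped in Promise.resolve(10)
    })
    .then(value => console.log('continues with', value)); // logs "continues with 10"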
Promises allow programmers to write simpler and far more readable code than by using callbacks.
In a program, there are steps we want to do in series.
function f() {
    step_a();
    step_b();
    step_c();
    ...
}
There's usually information carried between each step.
function f() {
    const a = step_a( );
    const b = step_b( a );
    const c = step_c( b );
    ...
}
Some of these steps can take a (relatively) long time, so sometimes you want to do them in parallel with other things. One way to do that is using threads. Another is asynchronous programming. (Both approaches have pros and cons, which won't be discussed here.) Here, we're talking about asynchronous programming.
The simple way to achieve the above when using asynchronous programming would be to provide a callback which is called once a step is complete.
// step_* calls the provided function with the returned value once complete.
function f() {
    step_a(
        function( a ) {
            step_b(
                function( b ) {
                    step_c(
                        ...
                    )
                }
            )
        }
    )
}
That's quite hard to read. Promises offer a way to flatten the code.
// step_* returns a promise.
function f() {
    step_a()
        .then( step_b )
        .then( step_c )
        ...
}
The object returned is called a promise because it represents the future result (i.e. promised result) of the function (which could be a value or an exception).
As much as promises help, it's still a bit complicated to use promises. This is where async and await come in. In a function declared as async, await can be used in lieu of then.
// step_* returns a promise.
async function f() {
    const a = await step_a( );
    const b = await step_b( a );
    const c = await step_c( b );
    ...
}
This is undeniably much much more readable than using callbacks.

When to use asObservable() in rxjs?

I am wondering what is the use of asObservable:
As per docs:
An observable sequence that hides the identity of the
source sequence.
But why would you need to hide the sequence?
When to use Subject.prototype.asObservable()
The purpose of this is to prevent leaking the "observer side" of the Subject out of an API. Basically to prevent a leaky abstraction when you don't want people to be able to "next" into the resulting observable.
Example
(NOTE: This really isn't how you should turn a data source like this into an Observable; instead you should use the new Observable constructor. See below.)
const myAPI = {
  getData: () => {
    const subject = new Subject();
    const source = new SomeWeirdDataSource();
    source.onMessage = (data) => subject.next({ type: 'message', data });
    source.onOtherMessage = (data) => subject.next({ type: 'othermessage', data });
    return subject.asObservable();
  }
};
Now when someone gets the observable result from myAPI.getData() they can't next values into the result:
const result = myAPI.getData();
result.next('LOL hax!'); // throws an error because `next` doesn't exist
You should usually be using new Observable(), though
In the example above, we're probably creating something we didn't mean to. For one, getData() isn't lazy like most observables, it's going to create the underlying data source SomeWeirdDataSource (and presumably some side effects) immediately. This also means if you retry or repeat the resulting observable, it's not going to work like you think it will.
It's better to encapsulate the creation of your data source within your observable like so:
const myAPI = {
  getData: () => new Observable(subscriber => {
    const source = new SomeWeirdDataSource();
    source.onMessage = (data) => subscriber.next({ type: 'message', data });
    source.onOtherMessage = (data) => subscriber.next({ type: 'othermessage', data });
    return () => {
      // Even better, now we can tear down the data source for cancellation!
      source.destroy();
    };
  })
};
With the code above, any behavior, including making it "not lazy" can be composed on top of the observable using RxJS's existing operators.
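For instance, sharing and replaying can be layered on afterwards with existing operators; a minimal sketch, assuming RxJS 6+:
import { shareReplay } from 'rxjs/operators';

// Compose sharing on top of the lazy Observable instead of baking a Subject in:
// SomeWeirdDataSource is created on the first subscription, and its latest
// message is replayed to late subscribers.
const shared$ = myAPI.getData().pipe(
  shareReplay({ bufferSize: 1, refCount: true })
);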
A Subject can act both as an observer and an observable.
An Observable has a subscribe method; calling it returns a Subscription with an unsubscribe method.
When you subscribe to an observable, you hand it an observer, which has next, error and complete methods on it. A Subject exposes those same methods publicly, which is what asObservable() hides.
You'd need to hide the sequence because you don't want the stream source to be publicly available in every component. You can refer to @BenLesh's example for the same.
P.S.: When I first came across Reactive JavaScript, I was not able to understand asObservable, because I had to make sure I understood the basics clearly before going for asObservable. :)
In addition to this answer I would mention that in my opinion it depends on the language in use.
For untyped (or weakly typed) languages like JavaScript it might make sense to conceal the source object from the caller by creating a delegate object, like the asObservable() method does. Although, if you think about it, it won't prevent a caller from doing observable.source.next(...). So this technique doesn't prevent the Subject API from leaking, but it does make it more hidden from the caller.
On the other hand, for strongly typed languages like TypeScript the method asObservable() doesn't seem to make much sense (if any).
Statically typed languages solve the API leakage problem by simply utilizing the type system (e.g. interfaces). For example, if your getData() method is defined as returning Observable<T> then you can safely return the original Subject, and the caller will get a compilation error if attempting to call getData().next() on it.
Think about this modified example:
let myAPI: { getData: () => Observable<any> }

myAPI = {
  getData: () => {
    const subject = new Subject()
    // ... stuff ...
    return subject
  }
}

myAPI.getData().next() // <--- error TS2339: Property 'next' does not exist on type 'Observable<any>'
Of course, since it all compiles to JavaScript at the end of the day, there might still be cases when you want to create a delegate. But my point is that the room for those cases is much smaller than when using vanilla JavaScript, and in the majority of cases you probably don't need that method.
(Typescript Only) Use Types Instead of asObservable()
I like what Alex Vayda is saying about using types instead, so I'm going to add some additional information to clarify.
If you use asObservable(), then you are running the following code.
/**
 * Creates a new Observable with this Subject as the source. You can do this
 * to create customize Observer-side logic of the Subject and conceal it from
 * code that uses the Observable.
 * @return {Observable} Observable that the Subject casts to
 */
asObservable(): Observable<T> {
  const observable = new Observable<T>();
  (<any>observable).source = this;
  return observable;
}
This is useful for Javascript, but not needed in Typescript. I'll explain why below.
Example
export class ExampleViewModel {
  // I don't want the outside codeworld to be able to set this INPUT
  // so I'm going to make it private. This means it's scoped to this class
  // and only this class can set it.
  private _exampleData = new BehaviorSubject<ExampleData>(null);

  // I do however want the outside codeworld to be able to listen to
  // this data source as an OUTPUT. Therefore, I make it public so that
  // any code that has reference to this class can listen to this data
  // source, but can't write to it because of a type cast.
  // So I write this
  public exampleData$ = this._exampleData as Observable<ExampleData>;

  // and not this
  // public exampleData$ = this._exampleData.asObservable();
}
Both do the same thing, but the type cast doesn't add additional code calls or memory allocation to your program.
❌this._exampleData.asObservable();❌
Requires additional memory allocation and computation at runtime.
✅this._exampleData as Observable<ExampleData>;✅
Handled by the type system and will NOT add additional code or memory allocation at runtime.
Conclusion
If your colleagues try this, referenceToExampleViewModel.exampleData$.next(new ExampleData());, then it will be caught at compile time and the type system won't let them, because exampleData$ has been cast to Observable<ExampleData> and is no longer of type BehaviorSubject<ExampleData>. They will, however, still be able to listen to (.subscribe()) that source of data or extend it (.pipe()).
This is useful when you only want a particular service or class to be setting the source of information. It also helps with separating the input from the output, which makes debugging easier.
