RxJS: How many events is too many for combineLatest?

I've got a toolbar where each tool can be disabled/hidden independently. The tools depend on other system events and emit individual events configuring their availability. The toolbar uses combineLatest to pull all the tools together and emit a toolbar config.
The combineLatest is listening to 40+ events.
Will this be a performance problem? Is there a practical limit to how many events combineLatest can consume?

Hard to say just like that.
I think that having a huge number of streams combined is not a problem on its own.
What could be a problem:
- having those streams emit values very, very often
- having those streams trigger Angular change detection (it might be worth running them outside the Angular zone if possible; see the sketch just below)
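A minimal sketch of that second point (not the asker's code; ToolbarComponent, toolAvailability$ and applyConfig are illustrative names), assuming Angular's NgZone:
import { Component, NgZone, OnInit } from '@angular/core';
import { combineLatest, Observable, of } from 'rxjs';

@Component({ selector: 'app-toolbar', template: '...' })
export class ToolbarComponent implements OnInit {
  // Stand-in for the 40+ per-tool availability streams.
  private toolAvailability$: Observable<boolean>[] = [of(true), of(false)];

  constructor(private zone: NgZone) {}

  ngOnInit(): void {
    this.zone.runOutsideAngular(() => {
      combineLatest(this.toolAvailability$).subscribe(config => {
        // Re-enter the zone only when the toolbar actually needs to re-render.
        this.zone.run(() => this.applyConfig(config));
      });
    });
  }

  private applyConfig(config: boolean[]): void {
    // Update the toolbar state here.
  }
}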
That said, I think the performance question here may be hiding a design problem. It really feels like you need a "single source of truth". Having a look at Redux, and possibly NgRx, might be a great help for you.
Then, from that single store, you could easily retrieve the availability of your tools.
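For example, a minimal sketch of that kind of store plus selector, assuming a single state stream (an NgRx store, a BehaviorSubject, anything similar); the state shape and names are illustrative:
import { BehaviorSubject, Observable } from 'rxjs';
import { map, distinctUntilChanged } from 'rxjs/operators';

interface ToolState { enabled: boolean; visible: boolean; }
interface AppState { tools: { [toolId: string]: ToolState }; }

// The single source of truth: every system event is reduced into this one stream.
const state$ = new BehaviorSubject<AppState>({ tools: {} });

// One small selector per tool, instead of one combineLatest over 40+ streams.
const selectTool = (toolId: string): Observable<ToolState | undefined> =>
  state$.pipe(
    map(state => state.tools[toolId]),
    distinctUntilChanged()
  );

// Usage: selectTool('save').subscribe(tool => { /* enable/disable the button */ });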
You said: "The tools depend on other system events and emit individual events."
The Redux pattern generally plays very well with that kind of challenge:
- Async
- State
It really sounds like it might be a perfect fit here.
If you don't know where to start, I'd advise you to first read the Redux documentation. It's one of the best I've ever read: https://redux.js.org
Once you understand how Redux works, and if you decide it's a good fit for you, take a look at NgRx. Since you already seem to work with streams a lot, if you take the time to learn Redux first, NgRx will definitely not be a problem: https://ngrx.io
If you decide to go this way, good luck on this amazing journey into reactive and functional programming :)
EDIT 11/07:
If you think that Redux is overkill, then maybe you could build a minimal solution that acts a bit like it. The following is completely type safe, and you can update multiple properties without firing the final stream once per property; a single emission is enough:
import { BehaviorSubject } from 'rxjs';
import { tap } from 'rxjs/operators';
type YourDataType = {
  prop1: string,
  prop2: string,
  prop3: string,
  prop4: string,
  // ...
  prop40: string,
};

const properties$: BehaviorSubject<YourDataType> = new BehaviorSubject({
  prop1: '',
  prop2: '',
  prop3: '',
  prop4: '',
  // ...
  prop40: '',
});

const patchProperties = (updatedProperties: Partial<YourDataType>) =>
  properties$.next({
    ...properties$.getValue(),
    ...updatedProperties
  });

properties$
  .pipe(
    tap(x => console.log(JSON.stringify(x, null, 2)))
  )
  .subscribe();

patchProperties({
  prop3: 'new prop 3'
});

patchProperties({
  prop1: 'new prop 1',
  prop2: 'new prop 2',
  prop3: 'final prop 3',
  prop40: 'new prop 40',
});
Produces the following output:
{
  "prop1": "",
  "prop2": "",
  "prop3": "",
  "prop4": "",
  "prop40": ""
}
{
  "prop1": "",
  "prop2": "",
  "prop3": "new prop 3",
  "prop4": "",
  "prop40": ""
}
{
  "prop1": "new prop 1",
  "prop2": "new prop 2",
  "prop3": "final prop 3",
  "prop4": "",
  "prop40": "new prop 40"
}
Here's a Stackblitz demo:
https://stackblitz.com/edit/typescript-zafsnk?file=index.ts
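A possible follow-up (not part of the original answer): if a consumer only cares about one property, it can derive a narrow stream from properties$ above, and distinctUntilChanged keeps patches to other properties from re-emitting:
import { map, distinctUntilChanged } from 'rxjs/operators';

// Derived, read-only view of a single property from the properties$ store above.
const prop3$ = properties$.pipe(
  map(props => props.prop3),
  distinctUntilChanged()
);

prop3$.subscribe(value => console.log('prop3 is now:', value));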

Related

GraphQL vs Normalized Data Structure Advantages

From Redux docs:
This [normalized] state structure is much flatter overall. Compared to
the original nested format, this is an improvement in several ways...
From https://github.com/paularmstrong/normalizr:
Many APIs, public or not, return JSON data that has deeply nested objects. Using data in this kind of structure is often very difficult for JavaScript applications, especially those using Flux or Redux.
It seems like normalized, database-ish data structures are better to work with on the front end. Then why is GraphQL so popular, when its whole query language revolves around quickly fetching arbitrarily nested data? Why do people use it?
This kind of discussion is off-topic on SO ...
it's not only about [normalized] structures ...
A GraphQL client (like Apollo) also takes care of all the data-fetching nuances (error handling, caching, refetching, data conversion, and many more), which is hard to achieve with Redux alone.
Different use cases; you can use both:
- keep (complex) app state in Redux,
- handle data fetching in Apollo (you can use it for local state, too).
Let's look at why we want to normalize the cache and what kind of work we have to do to get a normalized cache.
For the main page we fetch a list of TODOs and a list of high-priority TODOs. Our two endpoints return the following data:
{
  all: [{ id: 1, title: "TODO 1" }, { id: 2, title: "TODO 2" }, { id: 3, title: "TODO 3" }],
  highPrio: [{ id: 1, title: "TODO 1" }]
}
If we stored the data like this in our cache, we would have a hard time updating a single todo, because we would have to update it in every array we have in our store, or might have in our store in the future.
We can normalize the data and only store references in the array. This way we can easily update a single todo in a single place:
{
  queries: {
    all: [{ ref: "Todo:1" }, { ref: "Todo:2" }, { ref: "Todo:3" }],
    highPrio: [{ ref: "Todo:1" }]
  },
  refs: {
    "Todo:1": { id: 1, title: "TODO 1" },
    "Todo:2": { id: 2, title: "TODO 2" },
    "Todo:3": { id: 3, title: "TODO 3" }
  }
}
The downside is that this shape of data is now much harder to use in our list component. We have to transform the cache back, roughly like so:
function denormalise(cache) {
  return {
    all: cache.queries.all.map(({ ref }) => cache.refs[ref]),
    highPrio: cache.queries.highPrio.map(({ ref }) => cache.refs[ref]),
  };
}
Notice how updating Todo:1 inside the cache now automatically updates every query that references that todo, provided we run this function inside the React component (this is often called a selector in Redux).
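As an illustration of that selector idea (the reselect library and the names are my assumption, not part of the answer), a memoised version could look like this:
import { createSelector } from 'reselect';

interface Todo { id: number; title: string; }
interface Cache {
  queries: { all: { ref: string }[]; highPrio: { ref: string }[] };
  refs: { [ref: string]: Todo };
}

// Input selectors pick the raw pieces; the result function denormalises them.
// reselect memoises the result, so it only recomputes when an input changes.
const selectAllTodos = createSelector(
  (cache: Cache) => cache.queries.all,
  (cache: Cache) => cache.refs,
  (all, refs) => all.map(({ ref }) => refs[ref])
);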
The magical thing about GraphQL is that it is a strict specification with a type system. This allows GraphQL clients like Apollo to globally identify objects and normalise the cache. At the same time it can automatically denormalise the cache for you and update objects in the cache after a mutation. This means that most of the time you have to write no caching logic at all. And this should explain why it is so popular: the best code is no code!
const { data, loading, error } = useQuery(gql`
  { all { id title } highPrio { id title } }
`);
This code automatically fetches the query on load, normalizes the response and writes it into the cache. It then denormalizes the cache back into the shape of the query. Updates to elements in the cache automatically update all subscribed components.
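As a sketch of that last point (assuming @apollo/client; the updateTodo mutation is illustrative, not from the question): because the result selects id and title, Apollo can normalise it as Todo:<id>, and every cached query referencing that todo updates without any manual cache code.
import { gql, useMutation } from '@apollo/client';

// The mutation result includes id, so Apollo can normalise it by identity
// and patch every cached query that references that todo.
const UPDATE_TODO = gql`
  mutation UpdateTodo($id: ID!, $title: String!) {
    updateTodo(id: $id, title: $title) {
      id
      title
    }
  }
`;

function useRenameTodo() {
  const [updateTodo] = useMutation(UPDATE_TODO);
  return (id: string, title: string) =>
    updateTodo({ variables: { id, title } });
}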

Wait for Subscription set Recursively to Complete

I have an array of objects with children and have a need to set a field (hidden) in each of those objects recursively. The value for each is set in a subscription. I want to wait until each item in the array is recursively updated before the subscription is complete.
The hidden field will be set based on roles and permissions derived from another observable. In the example I added a delay to simulate that.
Here's my first pass at it. I'm certain there is a much cleaner way of going about this.
https://codesandbox.io/s/rxjs-playground-hp3wr
// Array structure. Note children.
const navigation = [
  {
    id: "applications",
    title: "Applications",
    children: [
      {
        id: "dashboard",
        title: "Dashboard"
      },
      {
        id: "clients",
        title: "Clients"
      },
      {
        id: "documents",
        title: "Documents",
        children: [
          {
            id: "dashboard",
            title: "Dashboard"
          },
          ...
        ]
      },
      {
        id: "reports",
        title: "Reports"
      },
      {
        id: "resources",
        title: "Resources"
      }
    ]
  }
];
In the code sandbox example, looking at the console messages, I get the correct result. However, I would like to avoid having to subscribe in setHidden and recursivelySetHidden. I would also like to avoid using Subject if possible.
Here is my approach:
import { from, of, timer } from 'rxjs';
import { concatMap, map, mapTo, tap, toArray } from 'rxjs/operators';

const roleObservable = timer(1000).pipe(mapTo("**************"));

function populateWithField(o, field, fieldValue) {
  if (Array.isArray(o)) {
    return from(o).pipe(
      concatMap(c => populateWithField(c, field, fieldValue)),
      toArray()
    );
  }

  if (o.children) {
    return roleObservable.pipe(
      tap(role => (fieldValue = role)),
      concatMap(role => populateWithField(o.children, field, role)),
      map(children => ({
        ...o,
        [field]: fieldValue,
        children
      }))
    );
  }

  return roleObservable.pipe(
    map(role => ({
      [field]: role,
      ...o
    }))
  );
}

of(navigation)
  .pipe(concatMap(o => populateWithField(o, "hidden")))
  .subscribe(console.log, e => console.error(e.message));
The main thing to notice is the frequent use of concatMap. It is a higher-order mapping operator, which means, among other things, that it automatically subscribes to and unsubscribes from its inner observable.
What differentiates concatMap from other higher-order mapping operators is that it keeps a buffer of emitted values and waits for the current inner observable to complete before subscribing to the next one.
In this case you have to deal with a lot of observables-of-observables (higher-order observables), which is why you use concatMap every time you encounter a children property: any child in that property could have its own children property, so you must make sure each level is flattened back into a first-order observable.
You can read more about higher-order and first-order observables here.
Here is a CodeSandbox example
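A small standalone illustration of the ordering guarantee described above: each inner observable must complete before the next one is subscribed, so the output order matches the input order even though the first value takes the longest.
import { of } from 'rxjs';
import { concatMap, delay } from 'rxjs/operators';

// concatMap waits for each inner observable to complete before subscribing to
// the next one, so order is preserved even though the first value is the slowest.
of(3, 2, 1)
  .pipe(concatMap(n => of(`waited ${n * 100} ms`).pipe(delay(n * 100))))
  .subscribe(console.log);
// Output: "waited 300 ms", "waited 200 ms", "waited 100 ms"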

Frame differs when running all tests vs. running only one test, why?

If I run all tests for one epic at once, only the first test passes. The other tests fail because the frame differs, but every test passes when run on its own.
I could not find any related problem, nor anything in the RxJS or redux-observable docs.
I thought there could be some kind of reset function on the TestScheduler, but there isn't.
One of my tests (they all look pretty similar):
test('should fail if e-mail is missing', () => {
  testScheduler.run(({ hot, expectObservable }) => {
    const action$ = new ActionsObservable(
      hot('-a', {
        a: login('', 'secret')
      })
    );
    const output$ = epic(action$, null, null);
    expectObservable(output$).toBe('-a', {
      a: failure(
        formErrors.credentialsEmpty(['email', 'password'])
      )
    });
  });
});
I expect the frame of the output marble to be 1, but it is 2.
The output of a failing test:
Array [
  Object {
-   "frame": 1,
+   "frame": 2,
    "notification": Notification {
      "error": undefined,
      "hasValue": true,
      "kind": "N",
      "value": Object {
Edit: I could get around that behaviour by creating one TestScheduler instance per test, but I am not sure if I am supposed to do it this way.
Stumbled across this today. I think creating one new TestScheduler per test is probably a good idea. It doesn't seem to have a noticeable performance impact, and that way you're sure the state is reset between tests.
One other workaround is to set testScheduler.frame = 0 in a beforeEach, but I opted to just create the scheduler from scratch each time.
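For reference, a minimal sketch of the "one TestScheduler per test" approach, assuming Jest's beforeEach and expect:
import { TestScheduler } from 'rxjs/testing';

let testScheduler: TestScheduler;

// A fresh scheduler per test means the virtual clock (and its frame counter)
// always starts at 0, regardless of what previous tests scheduled.
beforeEach(() => {
  testScheduler = new TestScheduler((actual, expected) => {
    expect(actual).toEqual(expected);
  });
});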

Not getting data from nested json in react-native

I want to get data from nested JSON.
My JSON looks like this:
{
  "id": 2,
  "cover_image": "http://13.233.31.123/media/homepage-banner.jpg",
  "name": " Website",
  "tagline": " IT Right",
  "status": "ACTIVE",
  "client_name": "Company",
  "start_date": null,
  "end_date": null,
  "technology_frontend": "HTML, CSS, JAVASCRIPT\r\nCMS: WORDPRESS",
  "technology_backend": "PHP",
  "description": "We provide robust and high quality Custom Web Development.\r\nCodism is a global technology and business services consulting firm. We are specialized in servicing business market needs specializing in Web Design and Development, Online marketing and IT Consulting Services to commercial and government customers. We provide staffing and end-to end consulting services for organizations.",
  "gallery": [
    {
      "project": 2,
      "image": "http://localhost/media/gallery_image/homepage-banner.jpg"
    },
    {
      "project": 2,
      "image": "http://localhost/media/projects/gallery_image/software-development.jpg"
    },
    {
      "project": 2,
      "image": "http://localhost/media/projects/gallery_image/New_FRS_Image_Mobile_app_development.jpg"
    }
  ]
}
I want to get all the images in gallery, but I am not sure how. If I do console.log(this.state.gallery[0]) it shows the first object, but console.log(this.state.gallery[0].image) gives an error. I found a suggestion somewhere to shape the state like gallery: { images: [] }, so that is what my state looks like now. How should I use map to get all the details? Please help. Thanks in advance.
I suspect you are making a mistake in setState.
Your state should be like:
this.state = { data: {} };
When you setState in componentDidMount (or anywhere else), do something like:
this.setState({ data: jsonData });
After that you can use data in your render method to render the images:
this.state.data.gallery.map(item => <Image source={{ uri: item.image }} />);
If your jsonData is an array and you want to render the first object of the array, do:
this.state = { data: [] };
this.setState({ data: jsonData });
this.state.data[0].gallery.map(item => <Image source={{ uri: item.image }} />);
If your jsonData is an array and you want to render all nested images, do this:
this.state = { data: [] };
this.setState({ data: jsonData });
this.state.data.map(data => data.gallery.map(item => <Image source={{ uri: item.image }} />));
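Putting the single-object case together, a minimal sketch (the component name and endpoint URL are illustrative, not from the question):
import React from 'react';
import { Image, View } from 'react-native';

type GalleryItem = { project: number; image: string };
type Project = { id: number; name: string; gallery: GalleryItem[] };

// Placeholder for wherever the JSON above actually comes from.
const PROJECT_URL = 'http://localhost/api/projects/2';

export class ProjectGallery extends React.Component<{}, { data: Project | null }> {
  state = { data: null as Project | null };

  async componentDidMount() {
    const response = await fetch(PROJECT_URL);
    const json: Project = await response.json();
    this.setState({ data: json });
  }

  render() {
    const gallery = this.state.data ? this.state.data.gallery : [];
    return (
      <View>
        {gallery.map(item => (
          <Image
            key={item.image}
            source={{ uri: item.image }}
            style={{ width: 200, height: 120 }}
          />
        ))}
      </View>
    );
  }
}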
If you want to have all the images in your state, you can do something like this:
when you call setState, try setState({ ...yourJson }) to copy your JSON into a new object in your state.
Try to parse the JSON first, like:
.then((responseData) => {
  const newData = JSON.parse(responseData);
  this.setState({ data: newData });
  this.setState({ gallery: newData.gallery });
})

After a mutation, how do I update the affected data across views? [duplicate]

This question already has an answer here:
Auto-update of apollo client cache after mutation not affecting existing queries
(1 answer)
Closed 3 years ago.
I have both the getMovies query and addMovie mutation working. When addMovie happens though, I'm wondering how to best update the list of movies in "Edit Movies" and "My Profile" to reflect the changes. I just need a general/high-level overview, or even just the name of a concept if it's simple, on how to make this happen.
My initial thought was just to hold all of the movies in my Redux store. When the mutation finishes, it should return the newly added movie, which I can concatenate to the movies of my store.
After "Add Movie", it would pop back to the "Edit Movies" screen where you should be able to see the newly added movie, then if you go back to "My Profile", it'd be there too.
Is there a better way to do this than holding it all in my own Redux store? Is there any Apollo magic I don't know about that could possibly handle this update for me?
EDIT: I discovered the idea of updateQueries: http://dev.apollodata.com/react/cache-updates.html#updateQueries I think this is what I want (please let me know if this is not the right approach). This seems better than the traditional way of using my own Redux store.
// this represents the 3rd screen in my picture
const AddMovieWithData = compose(
  graphql(searchMovies, {
    props: ({ mutate }) => ({
      search: (query) => mutate({ variables: { query } }),
    }),
  }),
  graphql(addMovie, {
    props: ({ mutate }) => ({
      addMovie: (user_id, movieId) => mutate({
        variables: { user_id, movieId },
        updateQueries: {
          getMovies: (prev, { mutationResult }) => {
            // my mutation returns just the newly added movie
            const newMovie = mutationResult.data.addMovie;
            return update(prev, {
              getMovies: {
                $unshift: [newMovie],
              },
            });
          },
        },
      }),
    }),
  })
)(AddMovie);
After the addMovie mutation, this properly updates the view in "My Profile" because it uses the getMovies query (woah)! I'm then passing these movies as props into "Edit Movies", so how do I update it there as well? Should I just have both use the getMovies query? Is there a way to pull the new result of getMovies out of the store, so I can reuse it on "Edit Movies" without running the query again?
EDIT2: Wrapping both MyProfile and EditMovies with the getMovies query container seems to work fine. After addMovie, both places are updated thanks to updateQueries on getMovies. It's fast too; I think it's being cached.
It all works, so I guess this just becomes a question of: Was this the best approach?
The answer to the question in the title is:
Use updateQueries to "inform" the queries that drive the other views that the data has changed (as you discovered).
This topic gets ongoing discussion in the react-apollo Slack channel, and this answer is the consensus that I'm aware of: there's no obvious alternative.
Note that you can update more than one query (that's why the name is plural: the argument is an object whose keys match the names of all the queries that need updating).
As you may guess, this pattern means you need to be careful when designing and using queries so that mutations stay easy to maintain. The more queries are shared between views, the smaller the chance that you miss one in a mutation's updateQueries.
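For example, a sketch (not the asker's code) of one mutation updating the queries behind two views; getMovies and getProfileMovies are illustrative operation names, and the update helper is assumed to be something like immutability-helper, which matches the $unshift syntax used above:
import update from 'immutability-helper';

const mutationOptions = {
  updateQueries: {
    // Each key must match the operation name of a query driving a view.
    getMovies: (prev: any, { mutationResult }: any) =>
      update(prev, { getMovies: { $unshift: [mutationResult.data.addMovie] } }),
    getProfileMovies: (prev: any, { mutationResult }: any) =>
      update(prev, { getProfileMovies: { $unshift: [mutationResult.data.addMovie] } }),
  },
};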
The Apollo Client only updates the store automatically on update mutations. So when you use create or delete mutations, you need to tell Apollo Client how to update the cache. I had expected the store to update automatically but it doesn't…
I found a workaround using resetStore just after the mutation.
You reset the store right after the mutation; then, the next time a query needs data, the store is empty, so Apollo refetches fresh data.
here is the code:
import { withApollo } from 'react-apollo'
...
deleteCar = async id => {
  await this.props.deleteCar({
    variables: {
      where: {
        id: id
      }
    },
  })
  this.props.client.resetStore().then(data => {
    this.props.history.push('/cars')
  })
}
...
export default compose(
  graphql(POST_QUERY, {
    name: 'carQuery',
    options: props => ({
      fetchPolicy: 'network-only',
      variables: {
        where: {
          id: props.match.params.id,
        }
      },
    }),
  }),
  graphql(DELETE_MUTATION, {
    name: 'deleteCar',
  }),
  withRouter,
  withApollo
)(DetailPage)
The full code is here: https://github.com/alan345/naperg
The error before the resetStore hack:
