RxJS - What is the point of operators?

What is the advantage of using RxJS operators such as map and filter as compared to simply performing the same operations on the values returned in the subscribe function? Is it faster?

There are cases where you cannot perform everything synchronously. For example, when you want to make a REST call based on some emitted data and then work with the data emitted by that call.
Or when you have two separate streams but a use case where you need to execute them in order (maybe even based on each other), it is very easy to just chain them.
Also it makes testing a whole lot easier and more precise when you have very small functions with an input and output.
But as with everything: just because a big company is using this doesn't automatically mean it makes sense for your small hobby project to adopt every last bit of what they are using on a project with multiple developers.
As for performance: no, using RxJS operators is not the fastest way of manipulating data, but it offers a whole lot of other features (some of them mentioned above) that outweigh the (very small) impact on performance. If you are iterating through big arrays a couple of times a second, though, I'd suggest not using RxJS, for obvious reasons.
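For illustration, here is a rough sketch of the kind of chaining described above, where a (hypothetical) REST call is triggered by an emitted value and its result is then transformed further. The userId$ source, the /api/users URL, and fetchUser are made up for the example, and RxJS 7-style imports are assumed:

    import { of, from, switchMap, map, filter } from 'rxjs';

    // Hypothetical source stream and REST call, only for illustration.
    const userId$ = of(1, 2, 3);
    const fetchUser = (id: number): Promise<{ id: number; name: string }> =>
      fetch(`/api/users/${id}`).then(res => res.json());

    userId$
      .pipe(
        filter(id => id > 1),                  // keep only the ids we care about
        switchMap(id => from(fetchUser(id))),  // REST call based on the emitted value
        map(user => user.name)                 // work with the data emitted by that call
      )
      .subscribe(name => console.log(name));

Doing the same thing inside a subscribe callback means nesting callbacks and handing the cancellation/ordering problems yourself; switchMap takes care of both here.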

Some advantages
Declarative
Reusability
I mean, if you subscribe in another place you will need to duplicate the same actions that are in the subscribe callback. But if you declare the operators as part of the chain, the stream will always behave the same way wherever it's consumed.
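As a small sketch of that reusability point (the names and shapes here are made up for the example): the transformation lives in the pipe once, so every subscriber gets the same behaviour instead of each subscribe callback repeating it.

    import { Observable, map } from 'rxjs';

    // Declared once; the operators are part of the chain.
    const activeUserNames$ = (users$: Observable<{ name: string; active: boolean }[]>) =>
      users$.pipe(
        map(users => users.filter(u => u.active)),  // keep active users
        map(users => users.map(u => u.name))        // project to names
      );

    // Every consumer subscribes to the same, already-shaped stream:
    // activeUserNames$(source$).subscribe(names => render(names));
    // activeUserNames$(source$).subscribe(names => log(names));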

Related

How many GraphQL mutations does it take to change a light bulb?

I mean, really. How many?
1? changeTheLightBulb()
or 2? removeTheLightBulb() + addNewLightBulb()
It all depends on whether or not you treat this operation as an atomic one. While GraphQL specification doesn't have any specific restrictions on that, one thing is clear: if you don't want to have 'no bulb' as valid state, you should only use one mutation - in your case, changeLightBulb().
Also note that it's quite all right to have addLightBulb, removeLightBulb, and changeLightBulb all in your set of mutations. While you might want to reuse domain-specific types as much as possible (then again, there are certain caveats for reusing generic fragments, covered well enough in this article), reusing the operations just to minimize the code surface is, in general, not a good idea.

Can I use a projector inside an aggregate root?

Is it okay to use a Projection inside an AggregateRoot? A typical case would be using a ProductInventoryProjection inside an OrderAggregateRoot when placing an order. Is it an anti-pattern?
Is it okay to use a Projection inside an AggregateRoot?
That's pretty common; trying to support queries directly from an event history can be pretty clumsy. So it is common to pre-compute answers and cache them separately - designing a projection function to do that is generally thought to be a good choice for long-term maintainability.
using ProductInventoryProjection inside OrderAggregateRoot when placing an order.
This case gets trickier, because now you are introducing a coupling between Inventory and Orders. In a monolith, where all of the code is updated in lock step, you'll probably be fine.
More challenging is the case where Inventory code can be changed independently of Order code, because it is now easy to end up with a projector in Order that gives the "wrong" answer for some history of events.
That is, of course, less risky if your events and projections are stable - everybody using their own copy of code that won't need to change for 10 years is probably going to be painless.
In more volatile circumstances, it may be a more cost effective choice to keep the projection in Inventory, and provide reports that are consumed by Orders.
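To make the coupling concrete, here is a rough sketch with made-up names (the ProductInventoryProjection interface and placeOrder method are assumptions, not anything from the question's codebase):

    // Hypothetical shapes, only to illustrate the coupling being discussed.
    interface ProductInventoryProjection {
      availableQuantity(productId: string): number;
    }

    class OrderAggregateRoot {
      constructor(private readonly inventory: ProductInventoryProjection) {}

      placeOrder(productId: string, quantity: number): void {
        // The aggregate's decision now depends on a read model owned by Inventory.
        // If Inventory's events or projection logic change independently of Orders,
        // this check can silently give the "wrong" answer for some histories.
        if (this.inventory.availableQuantity(productId) < quantity) {
          throw new Error('Insufficient inventory');
        }
        // ...record an OrderPlaced event here...
      }
    }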

Parallel counting using a functional approach and immutable data structures?

I have heard and bought the argument that mutation and state is bad for concurrency. But I struggle to understand what the correct alternatives actually are?
For example, when looking at the simplest of all tasks: counting, e.g. word counting in a large corpus of documents. Accessing and parsing the document takes a while so we want to do it in parallel using k threads or actors or whatever the abstraction for parallelism is.
What would be the correct but also practical pure functional way, using immutable data structures to do this?
The general approach when analyzing data sets in a functional way is to partition the data set in some way that makes sense; for a document you might cut it up into sections based on size, e.g. four threads means the document is sectioned into four pieces.
Each thread or process then executes its algorithm on its section of the data set and generates an output. All the outputs are gathered together and then merged. For word counts, for example, each section produces a collection of word counts sorted by word, and then the lists are stepped through looking for the same words. If a word occurs in more than one list, its counts are summed. In the end, a new list with the summed counts of all the words is output.
This approach is commonly referred to as map/reduce. The step of converting a document into word counts is a "map" and the aggregation of the outputs is a "reduce".
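A small, single-process sketch of that shape in TypeScript (in a real system each countWords call would run on its own thread, worker, or machine, and the character-based partitioning here is deliberately naive - it can split a word at a boundary):

    // "Map": turn one section of the document into a word-count table.
    const countWords = (section: string): Map<string, number> => {
      const counts = new Map<string, number>();
      for (const word of section.toLowerCase().split(/\W+/).filter(Boolean)) {
        counts.set(word, (counts.get(word) ?? 0) + 1);
      }
      return counts;
    };

    // "Reduce": merge two count tables, summing counts for words in both.
    const mergeCounts = (a: Map<string, number>, b: Map<string, number>): Map<string, number> => {
      const merged = new Map(a);
      for (const [word, n] of b) {
        merged.set(word, (merged.get(word) ?? 0) + n);
      }
      return merged;
    };

    // Partition the document, map each section independently, then reduce the results.
    const wordCount = (document: string, partitions: number): Map<string, number> => {
      const size = Math.ceil(document.length / partitions);
      const sections = Array.from({ length: partitions }, (_, i) =>
        document.slice(i * size, (i + 1) * size)
      );
      return sections.map(countWords).reduce(mergeCounts, new Map());
    };

Because countWords and mergeCounts never touch shared mutable state, the map step can be farmed out to any number of workers without locks.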
In addition to eliminating the overhead of preventing data conflicts, a functional approach enables the compiler to optimize the code. Not all languages and compilers do this, but because the compiler knows its variables are not going to be modified by an outside agent, it can apply transforms to the code to increase its performance.
In addition, functional programming lets systems like Spark dynamically create threads, because the boundaries of change are clearly defined. That's why you can write a single function chain in Spark and then just throw servers at it without having to change the code. Pure functional languages can do this in a general way, making every application intrinsically multi-threaded.
One of the reasons functional programming is "hot" is because of this ability to enable multiprocessing transparently and safely.
Mutation and state are bad for concurrency only if mutable state is shared between multiple threads for communication, because it's very hard to reason about impure functions and methods that silently trash some shared memory in parallel.
One possible alternative is using message passing for communication between threads/actors (as is done in Akka), and building ("reasonably pure") functional data analysis frameworks like Apache Spark on top of it. Apache Spark is known to be rather suitable for counting words in a large corpus of documents.

What are the main advantages in using promises in scheme?

Pragmatically, what are the main advantages of using promises? Can you show me some examples of real-life useful usage of promises?
In Scheme a promise is just a value whose computation is not necessarily done yet, and if you never use the value it will never be calculated. In short, it is a way to do lazy evaluation in the otherwise eager Scheme. A typical use is to do computations on streams instead of lists.
With lists you can use higher-order functions: you take a list, filter it for the values you are interested in, then transform those values, and perhaps at some point you have enough to produce the value you needed. This is nice since you can abstract each step so that the logic only does one thing and the steps compose into the whole program. But in this scenario each step needs to finish in full before the next step can handle its result. If you are searching for the first prime number between 0 and 1000, iterating over all the numbers in every step might not be so effective. This is where streams come in.
With streams the code looks the same, but the intermediate results are made by need. A stream is a pair whose parts are promises, so the code that would otherwise make a pair is delayed until the values are used. Every step just produces enough data for the next step; thus, if it is enough for the first step to iterate over 20% of the elements for the last step to compute the final result, the remaining 80% will never be processed in any of the steps. With such a structure the initial stream can also be infinite, like all the numbers from 0 increased by 1.
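In Scheme this is usually written with delay and force. The same idea can be sketched (as an analogy, not actual Scheme) with explicit thunks, here in TypeScript:

    // A stream is a head value plus a thunk (a "promise") for the rest of the stream.
    type Stream<T> = { head: T; rest: () => Stream<T> } | null;

    // All the numbers from n upwards; the tail is only computed when forced.
    const integersFrom = (n: number): Stream<number> =>
      ({ head: n, rest: () => integersFrom(n + 1) });

    const filterStream = <T>(p: (x: T) => boolean, s: Stream<T>): Stream<T> => {
      while (s !== null && !p(s.head)) s = s.rest();   // force only as far as needed
      if (s === null) return null;
      const found = s;
      return { head: found.head, rest: () => filterStream(p, found.rest()) };
    };

    const isPrime = (n: number): boolean => {
      if (n < 2) return false;
      for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
      return true;
    };

    // Only enough of the infinite stream is realised to find the first prime.
    const firstPrime = filterStream(isPrime, integersFrom(0));
    console.log(firstPrime && firstPrime.head); // 2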
There are penalties involved in using streams. Imagine you make an algorithm that would visit all the elements anyway. A stream version of that algorithm would be slower, since the promises that are created and forced give the program overhead compared with doing the computation without laziness.
You might be interested in seeing Hal Abelson explaining streams and their pros and cons.
There are other alternatives to streams and lazy evaluation. One is generators. Here you can also make composable procedures that take a generator and produce a generator. The iteration will be by need, like with streams.
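Generators compose in much the same way and pull values by need; a minimal sketch (again in TypeScript rather than Scheme, purely for illustration):

    // An infinite generator of numbers from n upwards.
    function* integers(n: number): Generator<number> {
      while (true) yield n++;
    }

    // Composable steps: each takes an iterable and yields a new one, lazily.
    function* filterGen<T>(p: (x: T) => boolean, source: Iterable<T>): Generator<T> {
      for (const x of source) if (p(x)) yield x;
    }

    function* mapGen<T, U>(f: (x: T) => U, source: Iterable<T>): Generator<U> {
      for (const x of source) yield f(x);
    }

    // Only pulls as many values through the pipeline as are actually requested.
    const squaresOfEvens = mapGen((x: number) => x * x,
      filterGen((x: number) => x % 2 === 0, integers(0)));
    console.log(squaresOfEvens.next().value); // 0
    console.log(squaresOfEvens.next().value); // 4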
Another alternative would be transducers. These are also composable and iterate by need like streams and generators, but unlike streams and generators the initial data cannot be an infinite sequence unless the underlying structure supports it.
The advantages of using promises, or any other technique in this answer, are not Scheme-specific. They hold for all eager programming languages!

Best practice about detecting DOM element exist in D3

In D3, one often uses .data().enter().append() to reuse existing elements rather than removing everything and adding it back. But when the DOM structure is very deep, this involves a lot of these checks (one for every level). Is there a good way to determine down to which level the existing elements can be kept, and from which level I need to start using .enter(), rather than doing it from the top level?
The way I understand your question, you could be asking about one of two possible things. Either:
you're asking about how to use d3's .data() binding method to compute the three sets (enter, update, exit) at multiple levels of a dom hierarchy; or
you already know how to do #1, and are asking about how to NOT do it (i.e. skip calling .data()) in certain cases in order to really optimize performance.
If the question is #1, then check out this tutorial on working with nested selection by passing a function into the first argument of .data().
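As a minimal sketch of that nested pattern, using the classic matrix-to-table example (the table selector and the matrix data are just assumptions for illustration):

    import * as d3 from 'd3';

    const matrix = [
      [1, 2, 3],
      [4, 5, 6],
    ];

    // Outer level: one tr per row array in the data.
    const rows = d3.select('table')
      .selectAll('tr')
      .data(matrix)
      .enter()
      .append('tr');

    // Inner level: .data() is given a function, so each row binds its own sub-array.
    rows.selectAll('td')
      .data(d => d)
      .enter()
      .append('td')
      .text(d => String(d));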
If the question is #2, then you're taking a risk. By that I mean that you're risking spending a whole lot of time and effort to optimize an aspect of your code that's probably far from being the slowest part of the program. Usually, it's the browser's rendering that's the slowest, while the data binding is quite fast. In fact, following the nested selections pattern from #1 is likely the most effective way to optimize, because it eliminates unnecessary appending to - and re-rendering of - the DOM.
If you really want to do #2 anyway, then I think the way to start is by implementing it using nested selections from #1, and then adding some sort of if statement at every level of the hierarchy that decides whether it's OK to skip calling the .data() method. For that, you have to examine the incoming data vs. the outgoing data and decide whether they're still equal. However, since deciding whether things are still equal is roughly what d3's .data() method does, your optimization would have to do even less work than that. Perhaps one way to achieve that level of optimization would involve using immutable data structures, because they allow quickly testing equality of two nested data structures (that's basically how things work in React.js). It sounds complicated though. That's why I say it's a risk...
There may be another approach, in which you analyze the incoming vs. outgoing data, determine which branches of the data hierarchy have changed, then pinpoint the equivalent location in the DOM and use d3's .data() locally within those changed DOM nodes. That sounds even more complex and ambiguous. So to get more help with that one, you'd have to create something like a jsFiddle that recreates your specific scenario.

Resources