combineLatest does not emit even though the individual streams have values - rxjs5

My app is fairly complicated and I cannot pin down the exact problem.
What I observe is that the stream derived from combineLatest does not emit the first time, while the individual streams DO emit; they are all transformed in some way and used by other streams, so I am fairly sure they have values.

The problem is caused by share. I removed share in a few places and it now works as expected, but I cannot explain why.
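I can't see your code, but a common cause looks like the sketch below (stream names invented). share() makes a derived stream hot with reference counting and does not replay, so a subscriber that arrives after the first value has been pushed simply misses it; combineLatest only emits once it has received a value from every input after it subscribed, so it stays silent.

```typescript
import { Observable, Subject } from 'rxjs/Rx';

const source$ = new Subject<number>();       // stands in for the real source

const a$ = source$.map(x => x * 2).share();  // hot, shared, NO replay
const b$ = source$.map(x => x + 1).share();

// Other parts of the app subscribe first, which is why a$ and b$ visibly "have values":
a$.subscribe(v => console.log('a used elsewhere:', v));
b$.subscribe(v => console.log('b used elsewhere:', v));

source$.next(1);                             // a$ and b$ both emit here

// combineLatest subscribes only now; the values above are already gone,
// so it emits nothing until BOTH inputs produce a new value:
Observable.combineLatest(a$, b$)
  .subscribe(([a, b]) => console.log('combined:', a, b));

source$.next(2);                             // first 'combined:' log: 4, 3
```

Removing share() "fixes" it when the underlying streams are cold, because combineLatest then triggers its own execution of each source and gets its own first value. If you do need the sharing, replaying the latest value to late subscribers, e.g. publishReplay(1).refCount() in RxJS 5 (or shareReplay(1) from 5.5 on), usually gives the behaviour you expected.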

Related

Enforcing well-balanced parallelism in an unkeyed Flink stream

Based on my understanding of Flink, it introduces parallelism based on keys (keygroups). However, suppose one had a massive unkeyed stream and would like the work to be done in parallel, what would be the best way to achieve this?
If the stream has some fields, one might think of keying by one of them arbitrarily; however, this does not guarantee that the workload will be balanced properly, for instance because one value of that field may occur in 90% of the messages. Hence my question:
How can one enforce well-balanced parallelism in Flink without prior knowledge of what is in the stream?
One potential solution I could think of is to assign a random number to each message (say 1-3 if you want a parallelism of 3, or 1-1000 if you want the parallelism to be more flexible). However, I wonder whether this is the recommended approach, as it does not feel very elegant.
keyBy is one way to specify stream partitioning, and it is especially useful, since you are guaranteed that all stream elements with the same key will be processed together. This is the basis for stateful stream processing with Flink.
However, if you don't need to use key-partitioned state, and instead care about ensuring that the partitions are well balanced, you can use shuffle() or rebalance() to cause a random or round-robin partitioning. See the docs for more details. You can also implement a custom partitioner, if you want more explicit control.
BTW, if you do want to key the stream by a random number, do not do something like keyBy(event -> new Random().nextInt(n)). It is imperative that the key selector be deterministic. This is necessary because the keys do not travel with the stream records; instead, the key selector function is used to compute the key whenever it is needed. So for random keying, add another field to your events, populate it with a random number, and use that field as the key. This technique is useful when you want to use keyed state or timers but don't have anything suitable to use as a key.
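To make that last point concrete, here is a small illustration of the principle (written in TypeScript since this thread is otherwise JavaScript-centric; this is not Flink API, and the names are made up): the randomness has to live on the record, and the key selector has to be a pure read of it, because the selector may be re-evaluated at any time.

```typescript
// Not Flink code: a sketch of why the key selector must be deterministic.
interface Event { payload: string; partitionKey: number; }

// Decide the random partition ONCE, when the event is created:
const makeEvent = (payload: string, parallelism = 3): Event =>
  ({ payload, partitionKey: Math.floor(Math.random() * parallelism) });

// The key selector is then a pure function of the record, so re-evaluating
// it (which a framework is free to do) always yields the same key:
const keySelector = (e: Event) => e.partitionKey;

// BAD: this returns a different key each time it is called, so the same record
// could be routed and have its state looked up under different keys.
const badKeySelector = (_e: Event) => Math.floor(Math.random() * 3);

const e = makeEvent('hello');
console.log(keySelector(e) === keySelector(e));       // always true
console.log(badKeySelector(e) === badKeySelector(e)); // true only by chance
```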

RxJS - What is the point of operators?

What is the advantage of using RxJS operators such as map and filter as compared to simply performing the same operations on the values returned in the subscribe function? Is it faster?
There are cases where you cannot perform everything synchronously, for example when you want to make a REST call based on some emitted data and then work with the data emitted by that REST call.
Or when you have two separate streams, but there is a use case where you need to execute them in order (maybe even based on each other); it is very easy to just chain those.
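A sketch of that kind of chaining (using the newer pipeable-operator syntax; fakeSearchApi and the input element are stand-ins): each keystroke drives an asynchronous call, and only the latest response is kept, which is awkward to express inside a plain subscribe callback.

```typescript
import { fromEvent, of } from 'rxjs';
import { debounceTime, map, switchMap } from 'rxjs/operators';

// Pretend REST call: returns an observable of results for a search term.
const fakeSearchApi = (term: string) => of([`${term}-result-1`, `${term}-result-2`]);

const input = document.querySelector('input')!;   // assumes an <input> on the page

fromEvent(input, 'input').pipe(
  map(event => (event.target as HTMLInputElement).value),
  debounceTime(300),                       // wait until the user stops typing
  switchMap(term => fakeSearchApi(term)),  // async step based on the emitted data
).subscribe(results => console.log(results));
```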
Also it makes testing a whole lot easier and more precise when you have very small functions with an input and output.
But as with everything: Just because a big company is using this, doesn't automatically mean that it makes sense for your small hobby-project to implement every last bit of what they are using on a project with multiple developers.
As for the performance: no, using RxJS operators is not the fastest way of manipulating data, but it offers a whole lot of other features (some of them mentioned above) that outweigh the (very small) impact on performance. If you are iterating through big arrays a couple of times a second, though, I'd suggest not using RxJS, for obvious reasons.
Some advantages
Declarative
Reusability
If you subscribe in another place, you would have to duplicate the same logic inside that subscribe. But if you declare the operators as part of the chain, the stream behaves the same way wherever it is consumed (see the sketch below).
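For example (hypothetical data and names), the shared logic lives in the stream definition, so every consumer gets it without copy-pasting:

```typescript
import { of } from 'rxjs';
import { filter, map } from 'rxjs/operators';

// Declared once: every subscriber sees the same filtered and mapped values.
const activeUserNames$ = of(
  { name: 'Ann', active: true },
  { name: 'Bob', active: false },
).pipe(
  filter(user => user.active),
  map(user => user.name),
);

// Consumers stay trivial and cannot drift apart:
activeUserNames$.subscribe(name => console.log('view A:', name));
activeUserNames$.subscribe(name => console.log('view B:', name));
```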

What are the main advantages in using promises in scheme?

Pragmatically, what are the main advantages of using promises? Can you show me some examples of real-life useful usage of promises?
In Scheme, a promise is just a value whose computation is not necessarily done yet; if you never use the value, it will never be calculated. In short, it is a way to do lazy evaluation in the otherwise eager Scheme. A typical use is to do computations on streams instead of lists.
With lists you can use higher-order functions: you take a list, filter it for the values you are interested in, transform those values, and perhaps at some point you have enough to produce the value you needed. This is nice since you can abstract each step, write logic that does only one thing, and compose the steps into the whole program. But in this scenario the first step needs to finish in full before the next step can handle its result, and if you are, say, searching for the first prime number in a range, having every step iterate over all the numbers might not be so effective. Here is where streams come in.
With streams the code looks the same, but the intermediate results are produced by need. A stream is a pair whose parts are promises, so the code that would otherwise build a pair is delayed until the values are used. Every step produces just enough data for the next step; thus, if it is enough for the first step to iterate over 20% of the elements for the last step to compute the final result, the remaining 80% will never be processed in any of the steps. With such a structure the initial stream can also be infinite, like all the numbers from 0 upward.
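Since this thread is otherwise JavaScript-centric, here is the same idea sketched in TypeScript rather than Scheme (names made up): a stream is a pair whose tail is a thunk, standing in for Scheme's delay/force, so only the elements that are actually demanded ever get computed.

```typescript
// A lazy stream: the tail of each pair is a thunk, evaluated only on demand.
type Stream<T> = null | { head: T; tail: () => Stream<T> };

const integersFrom = (n: number): Stream<number> =>
  ({ head: n, tail: () => integersFrom(n + 1) });   // "infinite", but safe: lazy

const filterStream = <T>(pred: (x: T) => boolean, s: Stream<T>): Stream<T> => {
  while (s !== null && !pred(s.head)) s = s.tail(); // skip the non-matching prefix
  return s === null ? null
    : { head: s.head, tail: () => filterStream(pred, s.tail()) };
};

const isPrime = (n: number) => {
  for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return n > 1;
};

// Only the integers up to the first match are ever generated:
const firstPrimeAbove1000 = filterStream(isPrime, integersFrom(1001))!.head;
console.log(firstPrimeAbove1000); // 1009
```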
There are penalties involved in using streams. Imagine you have an algorithm that would visit all the elements anyway. Then the stream version of the algorithm will be slower, since the promises that are created and the forcing give the program overhead compared with doing the computation without laziness.
You might be interested in seeing Hal Abelson explaining streams and their pros and cons.
There are other alternatives to streams and lazy evaluation. One is generators. Here you can also make composable procedures that take a generator and produce a generator, and the iteration will be by need, as with streams.
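The same first-match search sketched with generators (again in TypeScript, with made-up names): iteration is by need, and nothing past the first hit is ever produced.

```typescript
// An infinite generator of integers, consumed lazily.
function* integersFrom(n: number): Generator<number> {
  while (true) yield n++;
}

// A composable filter: takes an iterable, produces a generator.
function* filterGen<T>(pred: (x: T) => boolean, xs: Iterable<T>): Generator<T> {
  for (const x of xs) if (pred(x)) yield x;
}

const isPrime = (n: number) => {
  for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return n > 1;
};

// Destructuring pulls exactly one value, then stops consuming the generator:
const [firstPrimeAbove1000] = filterGen(isPrime, integersFrom(1001));
console.log(firstPrimeAbove1000); // 1009
```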
Another alternative is transducers. These are also composable and iterate like streams and generators, but unlike with streams and generators the initial data cannot be an infinite sequence unless the underlying structure supports it.
The advantages of using promises, or any other technique in this answer, are not Scheme specific; they hold for all eager programming languages!

Best practice for detecting whether a DOM element exists in D3

All:
In D3, one often uses .data().enter().append() to reuse existing elements rather than removing everything and adding it back. On the other hand, when the DOM structure is very deep, this involves a lot of these checks (one for every level). I wonder if there is a good way to detect at which level I need to start using .enter(), rather than always starting from the top level?
Thanks
The way I understand your question, you could be asking about one of two possible things. Either:
you're asking about how to use d3's .data() binding method to compute the three sets (enter, update, exit) at multiple levels of a DOM hierarchy; or
you already know how to do #1, and are asking about how to NOT do it (i.e. skip calling .data()) in certain cases in order to really optimize performance.
If the question is #1, then check out this tutorial on working with nested selections by passing a function as the first argument of .data().
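A minimal sketch of that pattern (d3 v4+ syntax; the data shape and an existing <svg> element are assumptions):

```typescript
import * as d3 from 'd3';

interface Row { id: string; cells: number[]; }

const rows: Row[] = [
  { id: 'a', cells: [5, 8, 3] },
  { id: 'b', cells: [6, 2] },
];

// Level 1: one <g> per row. d3 computes enter/update/exit against `rows`.
const rowGroups = d3.select('svg')
  .selectAll<SVGGElement, Row>('g.row')
  .data(rows, d => d.id);                 // key function keeps the join stable

rowGroups.exit().remove();

const allRows = rowGroups.enter()
  .append('g')
  .attr('class', 'row')
  .merge(rowGroups)
  .attr('transform', (_d, i) => `translate(0, ${i * 30})`);

// Level 2: bind each row's own array. Passing a function to .data() means the
// join is computed per parent <g>, against that parent's datum.
const cells = allRows
  .selectAll<SVGCircleElement, number>('circle')
  .data(d => d.cells);

cells.exit().remove();

cells.enter()
  .append('circle')
  .merge(cells)
  .attr('cx', (_d, i) => i * 25 + 10)
  .attr('cy', 15)
  .attr('r', d => d);
```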
If the question is #2, then you're taking a risk. By that I mean that you're risking spending a whole lot of time and effort to optimize an aspect of your code that's probably far from being the slowest part of the program. Usually, it's the browser's rendering that's the slowest, while the data binding is quite fast. In fact, following the nested selections pattern from #1 is likely the most effective way to optimize, because it eliminates unnecessary appending to - and re-rendering of - the DOM.
If you really want to do #2 anyway, then I think the way to start is by implementing it using nested selections from #1, and then adding some sort of if statement at every level of the hierarchy that decides whether it's OK to skip calling the .data() method. For that, you have to examine the incoming data vs the outgoing data and decide whether they're still equal or not. However, since deciding whether things are still equal is roughly what d3's .data() method does, your optimization would have to do even less work than that. Perhaps one way to achieve that level of optimization would be to use immutable data structures, because they make it cheap to test equality of two nested data structures (that's basically how things work in React.js). It sounds complicated, though. That's why I say it's a risk...
There may be another approach, in which you analyze the incoming vs outgoing data, determine which branches of the data hierarchy have changed, pinpoint the equivalent locations in the DOM, and use d3's .data() locally within those changed DOM nodes. That sounds even more complex and ambiguous, so to get more help with that one, you'd have to create something like a jsFiddle that recreates your specific scenario.

Are Core Data fetches (NSFetchRequest) sorted in any specific fashion by default?

I have a basic question. Say you have an NSFetchRequest which you want to perform on an NSManagedObjectContext. If the fetch request doesn't have any sort descriptors set on it explicitly, will the objects come back in a random order every time, or will they be spit out into an array in the order they were originally added to the managed object context? I can't find this answer anywhere in the documentation.
No, they're not guaranteed to be ordered. You might happen to see consistent ordering depending on what type of data store you use (I've never tried), but it's not something you should depend on in any way.
It's easy to order by creation date though. Just add a date attribute to your entity, initialize it to the current date in awakeFromInsert, and specify that sort descriptor in your fetch.
The order may not be "random every time" but as far as I know you cannot/should not depend on it. If you need a specific order, then use sort descriptors.
I see two questions here: will it come out in the same ordering every time? And, is that ordering on insertion order?
It comes out in set order, which is some ordering. Note that NSSet is just an interface, and there are private classes that implement it. That means that while calling allObjects on some NSSet instances might return the objects in a consistent ordering, it is almost assuredly hash ordering, since sets are almost universally implemented as hashed dictionaries.
Since the hashing algorithm is highly variable depending on what is stored and how it's hashed, you might "luck out" that it comes out in the same ordering every time, but then be caught off guard another time when something changes very slightly.
So, technically, it's not really random and it could be in some stable ordering.
To the second question, I would say it's almost assuredly NOT in insertion order.
Marc's suggestion for handling awakeFromInsert is a good one, and what you would want.
There is no guarantee on the ordering. For example, I could implement an NSAtomicStore or NSIncrementalStore that returns results in random order and it would be completely correct. I have seen the SQLite store return different ordering on different versions of the operating system as well.
