In the ReactiveX documentation it is stated that
You can ignore the final n items emitted by an Observable and attend only to those items that come before them, by modifying the Observable with the SkipLast operator.
along with the diagram at http://reactivex.io/documentation/operators/skiplast.html.
My expectation: SkipLast will read the entire Observable until it meets OnCompleted, and then generate a new Observable with the same timings as the original one but skipping the last items.
My doubt: how does the SkipLast operator know that "3" is the second-to-last item of the Observable? Without seeing OnCompleted, how can it tell which item is the nth from the end?
Thanks @PanagiotisKanavos and @akarnokd for the valuable comments.
It is internally implemented with a fixed-size queue. Items from the sequence are enqueued; once the queue is full and starts to overflow, the oldest item is dequeued, the latest value is enqueued, and the dequeued value is sent to OnNext. When OnCompleted is reached, the cached items are not sent; OnCompleted is simply called. This way the last N cached items are skipped.
From the source code: if skipLast(N) is used, then N messages are kept in the this._ring array. As soon as the (N+1)th message arrives the first message is emitted, when the (N+2)th arrives the second is emitted, and so on.
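To make the mechanism concrete, here is a minimal standalone sketch of the fixed-size-queue idea in TypeScript, using plain arrays instead of Observables (an illustration, not the actual RxJS source):

    function skipLast<T>(n: number, source: T[]): T[] {
        const queue: T[] = [];   // holds at most the last n items seen
        const output: T[] = [];
        for (const value of source) {
            queue.push(value);
            if (queue.length > n) {
                // Overflow: the oldest buffered item cannot be among the
                // last n anymore, so it is safe to emit it now.
                output.push(queue.shift()!);
            }
        }
        // On completion, the n items still cached in the queue are dropped.
        return output;
    }

    // skipLast(2, [1, 2, 3, 4, 5]) yields [1, 2, 3]

Note how the operator never needs to know in advance which item is the nth from the end: every emission is simply delayed by n items, and completion discards the buffer.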
Related
I have two observables combined with combineLatest([this.currentPageIndex$, this.currentStoryIndex$]). They represent a shelf with books: the first observable emits the current book index, the second the page of the current book.
I have a logger service that logs the current page and the current book number.
When the page changes everything is fine, but when I switch to a new book both observables emit values: one from the current page, which becomes 1, and another with the new book number, so the logger service logs twice. Is there any way to prevent that double logging?
In addition, books can have a single page.
combineLatest works something like:
Wait for all input observables to emit one value. After that, emit those values.
Every time an input observable emits a value, emit that value along with the latest values from the others.
So yes, you'll have duplicates there.
You can use concat instead, if you care about the order of the emissions. If not, you can use merge.
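As a concrete sketch of why the double log happens (using Subjects to stand in for the real sources):

    import { combineLatest, Subject } from 'rxjs';

    const currentPageIndex$ = new Subject<number>();
    const currentStoryIndex$ = new Subject<number>();

    combineLatest([currentPageIndex$, currentStoryIndex$])
        .subscribe(([page, book]) => console.log(`book ${book}, page ${page}`));

    currentPageIndex$.next(5);
    currentStoryIndex$.next(0);  // logs: book 0, page 5
    // Switching books resets the page and changes the book index,
    // producing two emissions in a row:
    currentPageIndex$.next(1);   // logs: book 0, page 1
    currentStoryIndex$.next(1);  // logs: book 1, page 1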
I am using marble diagrams to show the output of two different observables. The first uses a switchMap that pipes into another switchMap. The second observable has two switchMaps in the same pipe.
Here are the two marble flows:
The first uses switchMaps inside inner piping
https://rxviz.com/v/38MavX78
The second uses switchMaps inside a single pipe
https://rxviz.com/v/9J9zNpq8
How come they have different outcomes?
My understanding is that switchMap does what it sounds like from the name - it switches the Observable chain from one Observable (the "outer" one) to another (the "inner" one). If the outer Observable emits before the inner one completes, then switchMap will unsubscribe from that inner Observable, and re-subscribe, effectively "cancelling" the first subscription. Docs here.
Now in your first case, you have nested the switchMap to grandchildren$ INSIDE the switchMap to children$. Therefore when parent$ emits the second time, it will cancel the switch to children$ AND the switch to grandchildren$, since grandchildren$ is part of children$ (nested within it).
However, in the second case, you do not have them nested. Therefore when parent$ emits the second time it will indeed cancel the children$ subscription, but children$ will not emit anything when that happens, leaving the chain further down untouched. Therefore grandchildren$ keeps emitting until children$ actually emits something, which will be 1000ms after it was re-subscribed to when parent$ emitted.
Hopefully that makes sense.
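A minimal sketch of the two shapes (the interval timings are assumptions standing in for the rxviz snippets, which aren't reproduced here):

    import { interval } from 'rxjs';
    import { switchMap } from 'rxjs/operators';

    const parent$ = interval(3000);
    const children$ = interval(1000);
    const grandchildren$ = interval(200);

    // Case 1: nested. grandchildren$ lives inside the inner observable,
    // so a new parent$ emission cancels children$ AND grandchildren$.
    const nested$ = parent$.pipe(
        switchMap(() => children$.pipe(
            switchMap(() => grandchildren$)
        ))
    );

    // Case 2: sequential. A new parent$ emission cancels children$, but
    // the grandchildren$ subscription already in flight keeps emitting
    // until children$ emits again (1000ms later) and triggers a new switch.
    const sequential$ = parent$.pipe(
        switchMap(() => children$),
        switchMap(() => grandchildren$)
    );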
I'm wondering what the differences are between Observable.combineLatest and Observable.forkJoin.
As far as I can see, the only difference is forkJoin expects the Observables to be completed, while combineLatest returns the latest values.
Not only does forkJoin require all input observables to be completed, but it also returns an observable that produces a single value that is an array of the last values produced by the input observables. In other words, it waits until the last input observable completes, and then produces a single value and completes.
In contrast, combineLatest returns an Observable that produces a new value every time the input observables do, once all input observables have produced at least one value. This means it could have infinite values and may not complete. It also means that the input observables don't have to complete before producing a value.
forkJoin - When all observables are completed, emit the last emitted value from each.
combineLatest - When any observable emits a value, emit the latest value from each.
Usage is pretty similar, but unlike with forkJoin, you shouldn't forget to unsubscribe from combineLatest.
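A minimal sketch of the difference (values are made up; of() emits synchronously and completes):

    import { combineLatest, forkJoin, of } from 'rxjs';

    const a$ = of(1, 2, 3);
    const b$ = of('x', 'y');

    // Emits once, after both sources complete, with the last value of each:
    forkJoin([a$, b$]).subscribe(v => console.log('forkJoin:', v));
    // forkJoin: [3, 'y']

    // Emits every time either source emits, once both have emitted at
    // least once; it never completes if a source never completes:
    combineLatest([a$, b$]).subscribe(v => console.log('combineLatest:', v));
    // combineLatest: [3, 'x'] then [3, 'y']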
combineLatest(...)
runs observables in parallel, emitting a value each time an observable emits a value after all observables have emitted at least one value.
forkJoin(...)
runs observables in parallel, and emits a single value once all observables have completed.
Consideration for error handling:
If any of the observables errors out, combineLatest will have emitted values up to the point the error is thrown, while forkJoin will just emit the error.
Advanced note: combineLatest doesn't just take a single value from each source and move on to the next. If you need to ensure you only get the next available item from each source observable, you can add .pipe(take(1)) to each source observable as you add it to the input array.
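For example (a$ and b$ are hypothetical sources):

    import { combineLatest, interval } from 'rxjs';
    import { take } from 'rxjs/operators';

    const a$ = interval(100);
    const b$ = interval(250);

    // Each source contributes exactly its first value; the combined
    // observable emits once and then completes.
    combineLatest([a$.pipe(take(1)), b$.pipe(take(1))])
        .subscribe(([a, b]) => console.log(a, b));  // 0 0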
There is a situation in Angular which explains this better. Assume change detection runs in an Angular component, so the latest value changes; the code in combineLatest's pipe and tap operators will be triggered as well. If the latest value is changed N times by change detection, the tap callback is also triggered N times.
Erlang uses message passing to communicate between processes. How does it handle concurrent incoming messages? What data structure is used?
The process inbox is made of 2 lists.
The main one is a FIFO where all incoming messages are stored, waiting for the process to examine them in the exact order they were received. The second one is a stack used to store the messages that don't match any clause in a given receive statement.
When the process executes a receive statement, it will try to "pattern match" the first message against each clause of the receive in the order they are declared until the first match occurs.
If no match is found, the message is removed from the FIFO and pushed onto the second list, then the process iterates with the next message (note that the process execution may be suspended in the meantime, either because the FIFO is empty or because it has reached its "reduction quota").
If a match is found, the message is removed from the FIFO, and the stacked messages are restored to the FIFO in their original order.
Note that the pattern matching process includes copying any interesting parts of the message into process variables. For example, if {request,write,Value,_} -> ... succeeds, that means the examined message is a 4-element tuple whose first and second elements are respectively the atoms request and write, and whose third element is successfully pattern matched against the variable Value: Value is bound to this element if it was previously unbound, or Value already matches the element. The fourth element is discarded. After this operation is completed, there is no way to retrieve the original message.
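A toy sketch of that two-list mechanism, written in TypeScript for illustration (this is not Erlang's actual implementation, and clause matching is reduced to predicates):

    type Message = unknown;
    type Clause = (msg: Message) => boolean;

    class Mailbox {
        private fifo: Message[] = [];   // incoming messages, oldest first
        private stash: Message[] = [];  // messages matching no clause so far

        deliver(msg: Message): void {
            this.fifo.push(msg);
        }

        // Examine messages in arrival order; consume the first one that
        // matches any clause, restoring stashed messages to the front of
        // the FIFO in their original order.
        receive(clauses: Clause[]): Message | undefined {
            while (this.fifo.length > 0) {
                const msg = this.fifo.shift()!;
                if (clauses.some(matches => matches(msg))) {
                    this.fifo.unshift(...this.stash);
                    this.stash = [];
                    return msg;
                }
                this.stash.push(msg);  // no clause matched: set it aside
            }
            // Nothing matched: a real process would suspend here until a
            // new message arrives; restore the stashed messages and return.
            this.fifo = this.stash;
            this.stash = [];
            return undefined;
        }
    }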
You may get some info out of checking out the erl_message primitive, erl_message.c, and its declaration file, erl_message.h.
You may also find these threads helpful (one, two), although I think your question is more about the data structures in play.
ERTS Structures
The Erlang runtime system (ERTS) allocates a fragmented (linked) heap for the scheduling of message passing (see source). The ErlHeapFragment structure can be found here.
However, each process also has a pretty simple FIFO queue structure to which it copies messages from the heap in order to consume them. Underlying the queue is a linked list, and there are mechanisms to bypass the heap and use the process queue directly. See here for more info on that.
Finally, each process also has a stack (also implemented as a list) where messages that don't have a matching pattern in receive are placed. This acts as a way to store messages that might be important but that the process has no way of handling (matching) until another, different message is received. This is part of how Erlang achieves such powerful "hot-swapping" mechanisms.
Concurrent Message Passing Semantics
At a high level, ERTS receives a message and places it in the heap (unless explicitly told not to), and each process is responsible for selecting messages to copy into its own process queue. From what I have read, the messages currently in the queue are processed before copying from the heap again, but there is likely more nuance.
Is there an option to limit the number of replays when using anchoring?
I have a tuple that should parse a JSON object; in case it hits an exception I'd prefer it to be replayed two more times.
I tried to track the number of times Storm replays it with prints, but each time I entered a non-parseable value the counter showed a different result.
    try {
        // JSON parsing that may throw goes here
    } catch (Exception e) {
        collector.fail(tuple);  // tell Storm the tuple failed so it can be replayed
    }
Add a field to the tuple to hold the number of times to try again, and use the tuple as both id and object in the spout's emit. When the tuple fails, the spout gets the key (the tuple with the number of remaining retries) back, and you can conditionally re-emit the tuple with the retry count decremented.
The fail method in the BaseRichSpout class is empty, meaning you are supposed to override that method to implement your strategy for replaying failed tuples.
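A language-neutral sketch of the retry-count pattern from the first answer (Storm spouts are written in Java; the names below are hypothetical, not Storm's actual API):

    interface MessageId {
        payload: string;     // the raw data, so it can be re-emitted
        retriesLeft: number; // decremented on every replay
    }

    class RetryingSpout {
        emit(payload: string, retriesLeft = 2): void {
            const id: MessageId = { payload, retriesLeft };
            // In a real spout this is collector.emit(tuple, id); the id
            // comes back through the ack()/fail() callbacks.
            this.send(payload, id);
        }

        // The equivalent of overriding BaseRichSpout.fail, as the answer
        // above suggests.
        fail(id: MessageId): void {
            if (id.retriesLeft > 0) {
                this.emit(id.payload, id.retriesLeft - 1);  // replay
            }
            // else: give up, e.g. log or dead-letter the payload
        }

        private send(payload: string, id: MessageId): void {
            /* transport omitted in this sketch */
        }
    }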