Having such a convenient method as .startWith, it would make sense to have its opposite, .endWith, which would make the observable emit a value when it completes.
I have come up with this solution, but is there anything better? It gets a bit hard to read for what it is.
source.concat(Rx.Observable.just(lastValue))
There is in RxJS 6 (I have no clue when it was added, to be honest).
Documentation:
https://rxjs-dev.firebaseapp.com/api/operators/endWith
Source: https://github.com/ReactiveX/rxjs/blob/df0ea7c78767c07a6ed839608af5a7bb4cefbde5/src/internal/operators/endWith.ts
Also, defaultIfEmpty() only emits a value if the observable COMPLETES without ever emitting one. It's a subtle, yet important, distinction. It may have the same effect as endWith() in limited situations.
Example of endWith():
import { of } from 'rxjs';
import { takeWhile, endWith } from 'rxjs/operators';

const source = of(1, 2, 3, 4, 5);
const example = source.pipe(
  takeWhile(val => val !== 4),
  endWith(4)
);
Emits:
[1, 2, 3, 4]
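For contrast, a rough sketch of the defaultIfEmpty() distinction mentioned above: it only kicks in when the source completes without emitting, whereas endWith() always appends its value.
import { of } from 'rxjs';
import { takeWhile, defaultIfEmpty } from 'rxjs/operators';

const source = of(1, 2, 3, 4, 5);

source.pipe(
  takeWhile(val => val !== 4),
  defaultIfEmpty(4)
).subscribe(console.log); // 1, 2, 3 (the source emitted, so no default)

source.pipe(
  takeWhile(val => val < 0), // nothing passes the predicate
  defaultIfEmpty(4)
).subscribe(console.log); // 4 (the source completed empty)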
Also I'm noticing that the https://learnrxjs.io website is increasingly out of date, and currently doesn't show this operator.
Why did I need it?
I was looking for the ability to emit false until a condition became true, but never go back to false. So slightly similar to debouncing, but not quite.
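For illustration, a sketch of how that might look with endWith, where condition$ is just a stand-in boolean stream:
import { of } from 'rxjs';
import { takeWhile, endWith } from 'rxjs/operators';

// Stand-in for whatever boolean stream drives the condition.
const condition$ = of(false, false, true, false);

const latched$ = condition$.pipe(
  takeWhile(value => !value), // pass values through while the condition is still false
  endWith(true)               // once it flips, emit true and never go back
);

latched$.subscribe(console.log); // false, false, true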
I am struggling with the 'combineLatest' operator...
I have an operator chain like so:
import { interval, combineLatest, from, of } from 'rxjs';
import { map, mergeMap, tap } from 'rxjs/operators';

const observable = interval(1000).pipe(
  map((x) => 'myAction'),
  mergeMap((action) =>
    combineLatest([from([1, 2, 3]), of(action)])
  ),
  tap((result) => {
    console.log('-');
    console.log(JSON.stringify(result));
  })
);
I would expect this output:
[1, 'myAction']
[2, 'myAction']
[3, 'myAction']
What I get is just one output:
[3, 'myAction']
How can I get the expected result?
As the name suggests, combineLatest only combines the most recent emissions of the given streams. Since from([1,2,3]) is synchronous (it effectively emits all its values at once), you can get some hard-to-predict behavior. I haven't tested this, but you may be able to switch the order of the observables and it might work as expected (since of(action) gets subscribed to first); a sketch of that reordering follows below.
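An untested sketch of that reordered version (note that the tuple order is swapped as well):
import { interval, combineLatest, from, of } from 'rxjs';
import { map, mergeMap, tap } from 'rxjs/operators';

const observable = interval(1000).pipe(
  map(() => 'myAction'),
  mergeMap(action =>
    // of(action) is subscribed to first, so its value is already latched
    // when from([1, 2, 3]) starts emitting synchronously.
    combineLatest([of(action), from([1, 2, 3])])
  ),
  tap(result => console.log(JSON.stringify(result)))
);
// Expected per tick: ["myAction",1], ["myAction",2], ["myAction",3]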
How I would solve this case:
Since of(action) just wraps a single value, I wouldn't bother with combineLatest. Just map the value into your inner observable directly. That might look like this:
import { interval, of } from 'rxjs';
import { map, mergeMap, tap } from 'rxjs/operators';

const observable = interval(1000).pipe(
  map(x => 'myAction'),
  mergeMap(action => of(1, 2, 3).pipe(
    map(n => [n, action])
  )),
  tap(result => {
    console.log('-');
    console.log(JSON.stringify(result));
  })
);
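With this shape, every tick of interval(1000) should log [1,"myAction"], [2,"myAction"] and [3,"myAction"], since the inner of(1, 2, 3) is re-mapped for each action.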
I tried to Google it, but I cannot find a clear answer.
From the documentation, I noticed that one is an operator and one is a function.
What is the difference between them? and what should I use in my code?
Thanks!
Here is the documentation link:
https://rxjs-dev.firebaseapp.com/api/operators/timeInterval
https://rxjs-dev.firebaseapp.com/api/index/function/interval
interval() is a so-called Observable creation function: it returns an Observable that periodically emits an ever-increasing sequence of numbers with a constant delay between them.
timeInterval() is an operator that basically "timestamps" each emission from its source with the time elapsed since the previous emission.
The main and probably more obvious difference is how you use them:
import { range, interval } from 'rxjs';
import { timeInterval } from 'rxjs/operators';

range(1, 20).pipe(
  timeInterval() // `timeInterval()` is an operator
).subscribe(console.log); // TimeInterval objects

interval(1000) // `interval()` is a source Observable
  .subscribe(console.log); // 0, 1, 2, ...
I am trying to understand the semantically right way to use map. Since map can behave the same way as each, you could modify the array any way you like. But a colleague told me that after map is applied, the array should have the same order and the same size.
For example, that would mean using map to return a filtered array isn't the right way to use map:
array = [1, 2, 3, 4]
array.map { |num| num unless num == 2 || num == 4 }.compact
I've been using map and other Enumerator methods for ages and never thought about this too much. Would appreciate advice from experienced Ruby Developers.
In Computer Science, map according to Wikipedia:
In many programming languages, map is the name of a higher-order
function that applies a given function to each element of a list,
returning a list of results in the same order
This statement implies the returned value of map should be of the same length (because we're applying the function to each element), and the returned elements are to be in the same order. So when you use map, this is what the reader expects.
How not to use map
arr = [1, 2, 3]
arr.map { |i| arr.pop } #=> [3, 2]
This clearly betrays the intention of map, since we get a different number of elements back and they are not even in the original order of application. So don't use map like this. See "How to use ruby's value_at to get subhashes in a hash" and the subsequent comments for further clarification; thanks to @meager for originally pointing this out to me.
Meditate on this:
array = [1, 2, 3, 4]
array.map { |num| num unless num == 2 || num == 4 } # => [1, nil, 3, nil]
     .compact                                       # => [1, 3]
The intermediate value is an array of the same size, but it contains undesirable values, forcing the use of compact. The fallout is that CPU time is wasted generating the nil values and then deleting them. In addition, memory is wasted generating another array of the same size when it shouldn't be. Imagine the CPU and memory cost in a loop that processes thousands of elements in an array.
Instead, using the right tool cleans up the code and avoids wasting CPU or memory:
array.reject { |num| num == 2 || num == 4 } # => [1, 3]
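If you need to both transform and drop elements in a single pass, Ruby 2.7+ also offers filter_map, which keeps only truthy block results; a minimal sketch:
array = [1, 2, 3, 4]
array.filter_map { |num| num * 10 unless num == 2 || num == 4 } # => [10, 30]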
I've been using map and other Enumerator methods for ages and never thought about this too much.
I'd recommend thinking about it. It's the little things like this that can make or break code or a system, and everything we do when programming needs to be done deliberately, avoiding all negative side-effects we can foresee.
Say I have a sorted Array, such as this:
myArray = [1, 2, 3, 4, 5, 6]
Suppose I call Enumerable#partition on it:
p myArray.partition(&:odd?)
Must the output always be the following?
[[1, 3, 5], [2, 4, 6]]
The documentation doesn't state this; this is what it says:
partition { |obj| block } → [ true_array, false_array ]
partition → an_enumerator
Returns two arrays, the first containing the elements of enum for which the block evaluates to true, the second containing the rest.
If no block is given, an enumerator is returned instead.
But it seems logical to assume partition works this way.
Testing against Matz's interpreter (MRI), the output does appear to work like this, and it makes full sense for it to be this way. However, can I count on partition working this way regardless of the Ruby version or interpreter?
Note: I used the implementation-agnostic tag because I couldn't find any other tag that describes my concern. Feel free to change it to something better if you know of one.
No, you can't rely on the order. The reason is parallelism.
A traditional serial implementation of partition would loop through each element of the array, evaluating the block one element at a time, in order. As each call to odd? returns, the element is immediately pushed into the appropriate true or false array.
Now imagine an implementation which takes advantage of multiple CPU cores. It still iterates through the array in order, but each call to odd? can return out of order: myArray[2].odd? might return before myArray[0].odd?, resulting in [[3, 1, 5], [2, 4, 6]].
List-processing idioms such as partition, which run a list through a function (most of Enumerable), benefit greatly from parallel processing, and most computers these days have multiple cores. I wouldn't be surprised if a future Ruby implementation took advantage of this. The writers of the API documentation for Enumerable likely carefully omitted any mention of processing order to leave this optimization possibility open.
The documentation makes no explicit mention of this, but judging from the official code, it does retain ordering:
static VALUE
partition_i(RB_BLOCK_CALL_FUNC_ARGLIST(i, arys))
{
    struct MEMO *memo = MEMO_CAST(arys);
    VALUE ary;

    ENUM_WANT_SVALUE();
    if (RTEST(enum_yield(argc, i))) {
        ary = memo->v1;
    }
    else {
        ary = memo->v2;
    }
    rb_ary_push(ary, i);
    return Qnil;
}
This code gets called from the public interface.
Essentially, the order in which your enumerable yields objects is retained by the logic above.
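A minimal pure-Ruby sketch of the same logic, purely as an illustration of why ordering is preserved (this is not the actual implementation):
module Enumerable
  def my_partition
    true_ary = []
    false_ary = []
    each do |element|
      # Each element is appended immediately, in iteration order,
      # to exactly one of the two result arrays.
      (yield(element) ? true_ary : false_ary) << element
    end
    [true_ary, false_ary]
  end
end

[1, 2, 3, 4, 5, 6].my_partition(&:odd?) # => [[1, 3, 5], [2, 4, 6]]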
How can I mock an array's sort while expecting a lambda expression?
This is a trivial example of my problem:
# initializing the data
l = lambda { |a, b| a <=> b }
array = [1, 2, 3, 4, 5]
sorted_array = [2, 3, 8, 9, 1]
# I expect that sort will be called using the lambda as a parameter
array.expects(:sort).with( l ).returns( sorted_array )
# perform the sort using the lambda expression
temp = array.sort { |a, b| l.call(a, b) }
Now, at first I expected that this would work; however, I got the following error:
- expected exactly once, not yet invoked: [ 1, 2, 3, 4, 5 ].sort(#<Proc:0xb665eb48>)
I realize that this will not work because l is not actually passed as a parameter to sort; it is only called inside the block. However, is there another way to do what this code is trying to accomplish?
NOTE: I have figured out how to solve my issue without figuring out how to do the above. I will leave this open just in case someone else has a similar problem.
Cheers,
Joseph
Mocking methods with blocks can be quite confusing. One of the keys is to be clear about what behaviour you want to test. I can't tell from your sample code exactly what it is that you want to test. However, you might find the documentation for Mocha::Expectation#yields (or even Mocha::Expectation#multiple_yields) useful.
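For example, if the behaviour you care about is what your block does when sort yields to it, something along these lines might work (a sketch using Mocha's #yields, not tested against your setup):
l = lambda { |a, b| a <=> b }
array = [1, 2, 3, 4, 5]
sorted_array = [2, 3, 8, 9, 1]

# The mocked sort ignores the real comparison: it simply yields one pair
# to whatever block it is given, then returns the canned result.
array.expects(:sort).yields(2, 1).returns(sorted_array)

temp = array.sort { |a, b| l.call(a, b) } # the block receives (2, 1)
# temp == sorted_array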