Buffer elements based on their contents with a comparer function - rxjs

In RxJS, how do I buffer values so that the buffer is flushed when the next element is different from the previous one? If elements are the same according to some comparator, they should be buffered until the next change is detected...
Suppose I have these elements...
{ t: 10, price: 12 },
{ t: 10, price: 13 },
{ t: 10, price: 14 },
{ t: 11, price: 12 },
{ t: 11, price: 13 },
{ t: 10, price: 14 },
{ t: 10, price: 15 },
Elements are considered the same if their t property value equals the previous element's t value, so at the output I just want these buffers...
[ { t: 10, price: 12 }, { t: 10, price: 13 }, { t: 10, price: 14 } ],
[ { t: 11, price: 12 }, { t: 11, price: 13 } ],
[ { t: 10, price: 14 }, { t: 10, price: 15 } ]
So in the result I have three buffers emitted, each containing objects with the same t value.
I was trying to use bufferWhen or just buffer, but I don't know how to specify the closingNotifier in this case, because it needs to depend on the elements arriving in the stream. Can anyone help?

TL;DR
import { from, delay, share, distinctUntilKeyChanged, skip, bufferWhen } from 'rxjs';

const items = [
  { t: 10, price: 12 },
  { t: 10, price: 13 },
  { t: 10, price: 14 },
  { t: 11, price: 12 },
  { t: 11, price: 13 },
  { t: 10, price: 14 },
  { t: 10, price: 15 }
];

const src$ = from(items).pipe(
  delay(0),
  share()
);

const closingNotifier$ = src$.pipe(
  distinctUntilKeyChanged('t'),
  skip(1),
  share({ resetOnRefCountZero: false })
);

src$.pipe(bufferWhen(() => closingNotifier$)).subscribe(console.log);
// logs:
// [{ t: 10, price: 12 }, { t: 10, price: 13 }, { t: 10, price: 14 }]
// [{ t: 11, price: 12 }, { t: 11, price: 13 }]
// [{ t: 10, price: 14 }, { t: 10, price: 15 }]
StackBlitz demo.
Detailed explanation
The tricky part was to determine the closingNotifier because, as you said, it depends on the values that come from the stream. My first thought was that src$ has to play 2 different roles: 1) the stream which emits values and 2) the closingNotifier for a buffer operator. This is why the share() operator is used:
const src$ = from(items).pipe(
  delay(0),
  share()
);
delay(0) is also used because the source's items are emitted synchronously. Since the source will be subscribed to twice (once as the stream and once as the closingNotifier), it's important that both subscribers receive values. If delay(0) were omitted, only the first subscriber would receive the items; the second would receive nothing, because it is registered after all the source's items have been emitted. With delay(0) we ensure that both subscribers (the one from the subscribe callback and the inner subscriber of closingNotifier) are registered before the source starts emitting.
Onto closingNotifier:
const closingNotifier$ = src$.pipe(
  distinctUntilKeyChanged('t'),
  skip(1),
  share({ resetOnRefCountZero: false })
);
distinctUntilKeyChanged('t') is used because the signal that the buffer should emit its accumulated items is an item arriving with a different t value.
skip(1) is used because the very first value from the stream, arriving right after the first subscription to the closingNotifier, would otherwise flush the buffered items immediately, which is not what we want for the first batch of items.
share({ resetOnRefCountZero: false }) is the interesting part. As you've seen, we're using bufferWhen(() => closingNotifier$) instead of buffer(closingNotifier$); that is because buffer first subscribes to the source and then to the notifier, which complicates the situation a bit, so I decided to go with bufferWhen, which subscribes to the notifier first and then to the source. The problem with bufferWhen is that it resubscribes to the closingNotifier each time the notifier emits, so we need share, because we wouldn't want to repeat the first-batch logic (the skip operator) once items have already been flowing. The problem with plain share() (without the resetOnRefCountZero option) is that it would still resubscribe after each emission, because resetting is the default behavior when the inner Subject used by share is left without subscribers. This is solved by resetOnRefCountZero: false, which prevents resubscribing to the source when a new subscriber registers after the inner Subject had previously been left without subscribers.
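For contrast, here is a minimal sketch of a different approach (my own, not part of the answer above): grouping consecutive items with scan and a completion sentinel, so the source is subscribed only once and the delay(0)/share() coordination becomes unnecessary. It assumes RxJS 7 and the items array from the question.

import { from, of, concatWith, scan, filter, map } from 'rxjs';

type Tick = { t: number; price: number };

const groups$ = from(items).pipe(
  // A null sentinel after the source completes flushes the final group.
  concatWith(of(null as Tick | null)),
  scan(
    (acc: { buf: Tick[]; out: Tick[] | null }, item: Tick | null) => {
      // Emit the accumulated buffer when t changes or the stream ends.
      if (item === null || (acc.buf.length > 0 && acc.buf[0].t !== item.t)) {
        return { buf: item === null ? [] : [item], out: acc.buf };
      }
      return { buf: [...acc.buf, item], out: null };
    },
    { buf: [] as Tick[], out: null as Tick[] | null }
  ),
  filter(acc => acc.out !== null && acc.out.length > 0),
  map(acc => acc.out as Tick[])
);

groups$.subscribe(console.log); // same three buffers as above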

Related

Merge main Observable stream with updates

I need to merge main stream with updates stream this way:
Main: ----A-------B-------------C---|--
Upd: -D-----E---------F---G--------|--
========================================
Result:----A--AE---B----BF--BG---C---|--
I.e., when Main emits, the result should always be Main (or Main combined with an empty Upd). When Upd emits without a previous Main, it should be ignored. If Upd emits after Main, they should be combined.
Consider this TypeScript code:
interface Item {
  Id: number;
  Data: string;
}

function mergeUpdates(main: Item[], upd: Item[]) {
  if (!upd || upd.length === 0) {
    return main;
  }
  const result = main;
  // const result = {...main};
  for (const updatedItem of upd) {
    const srcIndex = result.findIndex(_ => _.Id === updatedItem.Id);
    if (srcIndex >= 0) {
      result[srcIndex] = updatedItem;
    } else {
      result.push(updatedItem);
    }
  }
  return result;
}
const main$ = new Subject<Item[]>();
const upd$ = new Subject<Item[]>();

const result$ = combineLatest([main$, upd$]).pipe( // combineLatest is the wrong operator!
  map(([main, upd]) => mergeUpdates(main, upd)));

result$.subscribe(r => console.log(r.map(_ => _.Data).join(',')));

main$.next([{Id: 1, Data: 'Data1'}, {Id: 2, Data: 'Data2'}]);
upd$.next([{Id: 1, Data: 'Updated1'}]);
upd$.next([{Id: 1, Data: 'Updated2'}]);
main$.next([{Id: 1, Data: 'Data1_Orig'}, {Id: 2, Data: 'Data2'}]);
// Expected result:
// 'Data1,Data2'
// 'Updated1,Data2'
// 'Updated2,Data2'
// 'Data1_Orig,Data2'
The only solution I have in mind is to use combineLatest and mark items in the upd$ stream as processed, so they are not applied again when data from main$ is emitted later. I believe this is not the best approach, as it causes unwanted side effects.
Is there any better solution for this task?
Thank you in advance.
Here would be my approach:
main$.pipe(
  switchMap(
    mainValue => merge(
      of(mainValue),
      upd$.pipe(
        map(updVal => mainValue + updVal)
      )
    )
  )
)
switchMap - makes sure the inner observable's emitted values are combined with the latest outer value
merge(of(a), upd$.pipe()) - emit the main value first, then listen to any notifications upd$ emits and combine them with the current main value
If another outer value comes in, the inner subscriber is unsubscribed, meaning that the upd$ subject won't accumulate redundant subscribers. A runnable sketch follows below.
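Here is a hedged, runnable sketch of this approach (my wiring, not the answerer's): it replaces the schematic mainValue + updVal with the question's own mergeUpdates helper and reuses the question's Subjects and Item interface (RxJS 7 imports assumed). Note that mergeUpdates mutates its main argument in place, which is what makes the updates cumulative here:

import { Subject, merge, of, switchMap, map } from 'rxjs';

const main$ = new Subject<Item[]>();
const upd$ = new Subject<Item[]>();

const result$ = main$.pipe(
  switchMap(mainValue => merge(
    of(mainValue),
    upd$.pipe(map(updVal => mergeUpdates(mainValue, updVal)))
  ))
);

result$.subscribe(r => console.log(r.map(_ => _.Data).join(',')));

main$.next([{Id: 1, Data: 'Data1'}, {Id: 2, Data: 'Data2'}]);     // 'Data1,Data2'
upd$.next([{Id: 1, Data: 'Updated1'}]);                           // 'Updated1,Data2'
upd$.next([{Id: 1, Data: 'Updated2'}]);                           // 'Updated2,Data2'
main$.next([{Id: 1, Data: 'Data1_Orig'}, {Id: 2, Data: 'Data2'}]); // 'Data1_Orig,Data2'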

Updating react-table values after Dragging and Dropping a row in React Redux

I've implemented react drag and drop functionality in my project so I can reorder rows in a react-table list. The problem is that I have a column named 'Sequence', which shows the order of the elements, and I can't update its values.
Example:
before (the rows are draggable):
Sequence | Name
1        | Jack
2        | Angel
after (I need to update the Sequence values when rows change position after dropping a dragged row; in this case I dragged Jack from the first position and dropped it at the second):
Sequence | Name
1        | Angel
2        | Jack
React/Redux lets me change the index order of this array of elements without getting the 'A state mutation was detected between dispatches' error message, but it does not let me update the Sequence values with the new order.
This is what I have tried so far:
// within the parent class component
// item is an array of objects from child
UpdateSequence(startIndex, endIndex, item) {
  // this.state.Data is already an array of objects
  const result = this.state.Data;
  const [removed] = result.splice(startIndex, 1);
  result.splice(endIndex, 0, removed);
  // this is working without the mutation state error
  this.setState({ Data: result })

  let positionDiff = 0;
  let direction = null;
  let newIndex = 0;
  positionDiff = endIndex - startIndex;
  if (startIndex > endIndex) {
    direction = "up";
  }
  else if (startIndex < endIndex) {
    direction = "down";
  }
  if (positionDiff !== 0) {
    for (var x = 0; x <= Math.abs(positionDiff); x++) {
      if (x === 0) {
        newIndex = startIndex + positionDiff - x;
        this.setState(prevState => ({
          Data: {
            ...prevState.Data,
            [prevState.Data[newIndex].Sequence]: Data[newIndex].Sequence + positionDiff
          },
        }));
      }
      else {
        if (direction === "down") {
          newIndex = startIndex + positionDiff - x;
          this.setState(prevState => ({
            Data: {
              ...prevState.Data,
              [prevState.Data[newIndex].Sequence]: Data[newIndex].Sequence - 1
            },
          }));
        }
        else if (direction === "up") {
          newIndex = startIndex + positionDiff + x;
          this.setState(prevState => ({
            Data: {
              ...prevState.Data,
              [prevState.Data[newIndex].Sequence]: Data[newIndex].Sequence + 1
            },
          }));
        }
      }
    }
  }

  // so when I call the save action I am stepping into the
  // 'A state mutation was detected between dispatches' error message.
  this.props.actions.saveSequence(this.state.Data)
    .then(() => {
      this.props.actions.loadData();
    })
    .catch(error => {
      toastr['error'](error, 'error....');
    })
}
Whenever I try to update the Sequence values of the array and call the 'saveSequence' action, I get the 'A state mutation was detected between dispatches' error message.
Any help would be greatly appreciated! Thank you!
Note: the logic applied to reorder the Sequence is fine.
While I don't know redux particularly well, I am noticing that you are directly modifying state, which seems like a likely culprit.
const result = this.state.Data;
const [removed] = result.splice(startIndex, 1);
splice is a destructive method that modifies its input, and its input is a reference to something in this.state.
To demonstrate:
> state = {Data: [1,2,3]}
{ Data: [ 1, 2, 3 ] }
> result = state.Data.splice(0,1)
[ 1 ]
> state
{ Data: [ 2, 3 ] }
Notice that state has been modified. This might be what Redux is detecting, and it's a general React no-no.
To avoid modifying state, the easy way out is to clone the data you are looking to modify:
const result = this.state.Data.slice()
Note that this does a shallow copy, so if Data has non-primitive values, you have to watch out for doing destructive edits on those values too. (Look up deep vs shallow copy if you want to find out more.) However, since you are only reordering things, I believe you're safe.
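To make that concrete, here is a minimal sketch (my own, building on the question's names; it assumes Sequence is 1-based, as in the tables above) that reorders immutably and recomputes Sequence in one pass, so nothing reachable from this.state is mutated:

UpdateSequence(startIndex, endIndex) {
  const result = this.state.Data.slice();         // shallow copy, state untouched
  const [removed] = result.splice(startIndex, 1); // safe: splicing the copy
  result.splice(endIndex, 0, removed);

  // Rebuild every row with a fresh Sequence instead of editing rows in place;
  // the spread creates new objects, so the originals in state stay intact.
  const resequenced = result.map((row, i) => ({ ...row, Sequence: i + 1 }));

  this.setState({ Data: resequenced });
  this.props.actions.saveSequence(resequenced);
}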
Well, I figured it out by changing this part of the code:
//code....
const result = item;
const [removed] = result.splice(startIndex, 1);
// I created a new copy of the const 'removed', called 'copy', and updated the
// Sequence property like this (the hardcoded sequence number is just a sample
// of what I came up with to fix it)
let copy;
copy = {
  ...removed,
  Sequence: 1000,
};
result.splice(endIndex, 0, copy);
After that I didn't setState for it, so I commented out this line:
// this.setState({ Data: result })
//...code
and the last step was passing result to the save action as the parameter, not the state:
this.props.actions.saveSequence(result)
It works, and now I have fully working drag and drop functionality that saves the new order sequence to the database with no more 'A state mutation was detected between dispatches' error messages!

TPL Dataflow not completing with multiple targets

I have a BufferBlock linked to two Target Blocks. The dataflow does not complete. I have followed the suggestions from this post, but I can't get the completion propagation right.
Any help would be appreciated.
// define blocks
var bufferBlock = new BufferBlock<int>();
var actionBlock1 = new TransformBlock<int, int>(i =>
{
    Console.WriteLine($"actionBlock1: {i}");
    return i;
});
var actionBlock2 = new ActionBlock<int>(i =>
{
    Console.WriteLine($"actionBlock2: {i}");
});

// link blocks
bufferBlock.LinkTo(actionBlock1, i => i == 1);
bufferBlock.LinkTo(actionBlock2, i => i == 2);
bufferBlock.LinkTo(DataflowBlock.NullTarget<int>());

// push to block
var items = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
foreach (var i in items)
{
    bufferBlock.Post(i);
}

// wait for all operations to complete
bufferBlock.Complete();

// NB: a block will only propagate completion to one block regardless of how many
// blocks are linked. This even applies to the BroadcastBlock that broadcasts
// messages; it will not broadcast completion. In that case you can configure a
// continuation on the Completion task.
// see https://stackoverflow.com/questions/47310402/tpl-complete-vs-completion/47311537#47311537
var bufferBlockCompletion = bufferBlock.Completion.ContinueWith(tsk =>
{
    if (!tsk.IsFaulted)
    {
        actionBlock1.Complete();
        actionBlock2.Complete();
    }
    else
    {
        ((IDataflowBlock)actionBlock1).Fault(tsk.Exception);
        ((IDataflowBlock)actionBlock2).Fault(tsk.Exception);
    }
});

await Task.WhenAll(bufferBlockCompletion, actionBlock1.Completion, actionBlock2.Completion);
actionBlock1 is a TransformBlock that is not linked to anything. Any items that the block produces will remain in its output buffer, in this case only the number 1. With items stuck in its output, the block can never complete. You can fix that in a couple of different ways, depending on what exactly you need:
1) Change the TransformBlock to an ActionBlock
2) Link the TransformBlock to a NullTarget or another block
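For example, option 2 can be as small as draining the TransformBlock's output into a null target, the same helper the question already uses on the bufferBlock (a sketch; whether you want to link to a real downstream block instead depends on your pipeline):

// Drain actionBlock1's output buffer so its Completion task can finish;
// NullTarget accepts and discards every item offered to it.
actionBlock1.LinkTo(DataflowBlock.NullTarget<int>());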

How to show scatter plot on specific condition which I set using dc.js

I want a scatter plot composed with a line chart, but I only want the scatter plot to show when the value is not zero.
I have data as below; val1 ranges over 0~100 and val2 is one of -1, 0, 1:
[
  { val1: 10,  val2: 0 },
  { val1: 20,  val2: 1 },
  { val1: 30,  val2: -1 },
  { val1: 40,  val2: -1 },
  { val1: 50,  val2: 1 },
  { val1: 60,  val2: 0 },
  { val1: 70,  val2: 0 },
  { val1: 80,  val2: 1 },
  { val1: 90,  val2: 1 },
  { val1: 100, val2: 1 }
]
I want to show a line chart of val1 at every tick, and I want to put a scatter plot on top of this line where val2 is -1 or 1 (not 0). The scatter plot should be colored by the value.
How can I do it?
This is another of those places where a "fake group" can come in handy, because we're both transforming a group (by coloring the dots), and omitting some points.
(Despite the ugly name for this pattern, it's quite powerful to do live transformations of the data after it's been aggregated, and this technique will probably shape future versions of dc.js.)
Crossfilter on indices
First though, we have to use another unusual technique in order to deal with this data, which has no field which corresponds to the X axis. This may or may not come up in your actual data.
We'll define the crossfilter data as the range of indices within the data, and the dimension key as the index:
var ndx = crossfilter(d3.range(experiments.length)),
dim = ndx.dimension(function(i) { return i; }),
Now whenever we read data, we'll need to use the index to read the original array. So the first group (for the line chart) can be defined like this:
    group1 = dim.group().reduceSum(function(i) { return experiments[i].val1; });
Transforming and filtering
Now we get to the heart of the question: how to produce another group which has colored dots for the non-zero val2 values.
Following the "fake group" pattern, we'll create a function which, given a group, produces a new object with a .all() method. The method pulls the data from the first group and transforms it.
function keep_nonzeros(group, field2) {
  return {
    all: function() {
      return group.all().map(function(kv) {
        return {
          key: kv.key,
          value: {
            y: kv.value,
            color: experiments[kv.key][field2]
          }
        };
      }).filter(function(kv) {
        return kv.value.color != 0;
      });
    }
  };
}
I chose to first transform the data by adding the color field to the value with .map(), and then filter out the zeros with .filter(). Most "fake groups" use one or both of these handy Array methods.
Building the composite
Now we can build a composite chart using a line chart and a scatter plot:
chart
  .width(600)
  .height(400)
  .x(d3.scale.linear())
  .xAxisPadding(0.25).yAxisPadding(5)
  .elasticX(true)
  .compose([
    dc.lineChart(chart).group(group1),
    dc.scatterPlot(chart).group(keep_nonzeros(group1, 'val2'))
      // https://github.com/dc-js/dc.js/issues/870
      .keyAccessor(function(kv) { return kv.key; })
      .valueAccessor(function(kv) { return kv.value.y; })
      .colorAccessor(function(kv) { return kv.value.color; })
      .colors(d3.scale.ordinal().domain([-1,1]).range(['red', 'black']))
  ]);
Most of this is boilerplate stuff at this point, but note that we have to set both the key and value accessors for the scatterPlot, because it makes unusual assumptions about the key structure which only matter if you want to do rectangular brushing.
Fiddle: https://jsfiddle.net/gordonwoodhull/6cm8bpym/17/

Dimensional Charting with Non-Exclusive Attributes

The following is a schematic, simplified table showing HTTP transactions. I'd like to build a dc.js analysis for it, but some of the columns don't map well to crossfilter.
In the settings of this question, all HTTP transactions have the fields time, host, requestHeaders, responseHeaders, and numBytes. However, different transactions have different specific HTTP request and response headers. In the table above, 0 and 1 represent the absence and presence, respectively, of a specific header in a specific transaction. The sub-columns of requestHeaders and responseHeaders represent the unions of the headers present in transactions. Different HTTP transaction datasets will almost surely generate different sub-columns.
For this question, a row in this chart is represented in code like this:
{
  "time": 0,
  "host": "a.com",
  "requestHeaders": {"foo": 0, "bar": 1, "baz": 1},
  "responseHeaders": {"shmip": 0, "shmap": 1, "shmoop": 0},
  "numBytes": 12
}
The time, host, and numBytes fields all translate easily into crossfilter, so it's possible to build charts answering things like: what was the total number of bytes seen for transactions between times 2 and 4 for host a.com? E.g.,
var ndx = crossfilter(data);
...
var hostDim = ndx.dimension(function(d) {
  return d.host;
});
var hostBytes = hostDim.group().reduceSum(function(d) {
  return d.numBytes;
});
The problem is that, for all slices of time and host, I'd like to show (capped) bar charts of the (leading) request and response headers by bytes. E.g. (see the first row), for time 0 and host a.com, the request headers bar chart should show that bar and baz each have 12.
There are two problems, a minor one and a major one.
Minor Problem
This doesn't fit quite naturally into dc, as it's one-directional. These bar charts should be updated for the other slices, but they can't be used for slicing themselves. E.g., you shouldn't be able to select bar and deselect baz and look for a resulting breakdown of hosts by bytes, because what would this mean: hosts in the transactions that have bar but don't have baz? hosts in the transactions that have bar and either do or don't have baz? It's too unintuitive.
How can I make some dc charts one-directional? Is it through some hack of disabling mouse inputs?
Major Problem
As opposed to host, foo and bar are non-exclusive. Each transaction's host is either something or the other, but a transaction's headers might include any combination of foo and bar.
How can I define crossfilter dimensions for requestHeaders, then, and how can I use dc? That is
var ndx = crossfilter(data);
...
var requestHeadersDim = ndx.dimension(function(d) {
  // What should go here?
});
The way I usually deal with the major problem you describe is to transform my data so that there is a separate record for each header (all other fields in these duplicate records are the same). Then I use custom group aggregations to avoid double-counting. These custom aggregations are a bit hard to manage, so I built Reductio to help with this via the 'exception' function: github.com/esjewett/reductio
Hacked it (efficiently, but very inelegantly) by looking at the source code of dc. It's possible to distort the meaning of crossfilter to achieve the desired effect.
The final result is in this fiddle. It is slightly more limited than the question, as the fields of responseHeaders are hardcoded to foo, bar, and baz. Removing this restriction is more in the domain of simple Javascript.
Minor Problem
Using a simple CSS hack, I defined
.avoid-clicks {
  pointer-events: none;
}
and gave the div this class. Inelegant but effective.
Major Problem
The major problem is solved by distorting the meaning of crossfilter concepts, and "fooling" dc.
Let's say the data looks like this:
var transactions = [
  {
    "time": 0,
    "host": "a.com",
    "requestHeaders": {"foo": 0, "bar": 1, "baz": 1},
    "responseHeaders": {"shmip": 0, "shmap": 1, "shmoop": 0},
    "numBytes": 12
  },
  {
    "time": 1,
    "host": "b.org",
    "requestHeaders": {"foo": 0, "bar": 1, "baz": 1},
    "responseHeaders": {"shmip": 0, "shmap": 1, "shmoop": 1},
    "numBytes": 3
  },
  ...
];
We can define a "dummy" dimension, which ignores the data:
var transactionsNdx = crossfilter(transactions);
var dummyDim = transactionsNdx
  .dimension(function(d) {
    return 0;
  });
Using this dimension, we can define a group that counts the total foo, bar, and baz bytes of the filtered rows:
var requestHeadersGroup = dummyDim
  .group()
  .reduce(
    /* callback for when data is added to the current filter results */
    function (p, v) {
      return {
        "foo": p.foo + v.requestHeaders.foo * v.numBytes,
        "bar": p.bar + v.requestHeaders.bar * v.numBytes,
        "baz": p.baz + v.requestHeaders.baz * v.numBytes,
      };
    },
    /* callback for when data is removed from the current filter results */
    function (p, v) {
      return {
        "foo": p.foo - v.requestHeaders.foo * v.numBytes,
        "bar": p.bar - v.requestHeaders.bar * v.numBytes,
        "baz": p.baz - v.requestHeaders.baz * v.numBytes,
      };
    },
    /* initialize p */
    function () {
      return {
        "foo": 0,
        "bar": 0,
        "baz": 0
      };
    }
  );
Note that this isn't a proper crossfilter group at all. It will not map the dimensions to their values. Rather, it maps 0 to a value which itself maps the dimensions to their values (ugly!). We therefore need to transform this group into something that actually looks like a crossfilter group:
var getSortedFromGroup = function() {
  var all = requestHeadersGroup.all()[0].value;
  all = [
    {
      "key": "foo",
      "value": all.foo
    },
    {
      "key": "bar",
      "value": all.bar
    },
    {
      "key": "baz",
      "value": all.baz
    }];
  return all.sort(function(lhs, rhs) {
    return lhs.value - rhs.value;
  });
};
var requestHeadersDisplayGroup = {
  "top": function(k) {
    return getSortedFromGroup();
  },
  "all": function() {
    return getSortedFromGroup();
  }
};
We can now create a regular dc chart and pass the adaptor group requestHeadersDisplayGroup to it. It works normally from this point on.
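For instance, a minimal sketch of that last step (the chart type, sizes, and element id here are my assumptions, not from the answer):

var requestHeadersChart = dc.rowChart('#request-headers-chart');
requestHeadersChart
  .width(300)
  .height(200)
  .dimension(dummyDim)                 // required by dc, but never used to filter
  .group(requestHeadersDisplayGroup)   // the adaptor group defined above
  .elasticX(true);
dc.renderAll();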
