I'm looking for a way to combine streams that are derived from a common source without seeing intermediate states.
Note: this is the simplest form of my problem; in reality I'm combining 8 different streams, some intertwined, some async, etc. :(
import { BehaviorSubject, map, combineLatest } from 'rxjs';
const $A = new BehaviorSubject(1)
const $B = $A.pipe(map(val => `$B : ${val}`))
const $C = $A.pipe(map(val => `$C : ${val}`))
// prints out:
// (1) [1, "$B : 1", "$C : 1"]
combineLatest([$A,$B,$C]).subscribe(console.log)
$A.next(2)
// prints out:
// (2) [2, "$B : 1", "$C : 1"]
// (3) [2, "$B : 2", "$C : 1"]
// (4) [2, "$B : 2", "$C : 2"]
The print out (1) is great, all streams have a value of "1": [1, "$B : 1", "$C : 1"]
The print out (4) is great, all streams have a value of "2": [2, "$B : 2", "$C : 2"]
But combineLatest fires for (2) and (3) as each stream updates individually, meaning you get a mixture of "1" and "2".
**How can I modify the code to only get notified when a change has fully propagated?**
My best solutions so far:
A) using debounceTime(100)
combineLatest([$A,$B,$C]).pipe(debounceTime(100)).subscribe(console.log)
But it's flaky because it can either swallow valid states if they are processed too quickly, or notify with invalid states if individual pipes are too slow.
B) filter only valid state
combineLatest([$A, $B, $C]).pipe(
  filter(([a, b, c]) => {
    return b.indexOf(a) > -1 && c.indexOf(a) > -1
  })
).subscribe(console.log)
Works, but adding a validation function seems like the wrong way to do it (and more work :))
C) Make $B and $C subjects into which we push the latest value, resetting them on every change
$A.pipe(tap(val => {
  $B.next(undefined);
  $B.next(val);
  $C.next(undefined);
  $C.next(val);
}))
...
combineLatest([$A, $B.pipe(filter(b => !!b)), $C.pipe(filter(c => !!c))]).pipe(
  filter(([a, b, c]) => {
    return b.indexOf(a) > -1 && c.indexOf(a) > -1
  })
).subscribe(console.log)
Works, but it's quite a lot of extra code and variables.
I have the feeling I'm missing a concept or not seeing how to achieve this in a clean/robust way, but I'm sure I'm not the first one :)
Thanks
As you've observed, the observable created by combineLatest will emit when any of its sources emit.
Your problem is occurring because you pass multiple observables into combineLatest that share a common source. So whenever that common source emits, it causes each derived observable to emit.
One way to "fix" this in a synchronous scenario is to simply apply debounceTime(0) which will mask the duplicate emission that happens in the same event loop. This approach is a bit naive, but works in simple scenarios:
combineLatest([$A, $B, $C]).pipe(
  debounceTime(0)
)
But, since you have some async things going on, I think your solution is to not include duplicate sources inside combineLatest and handle the logic further down the chain:
combineLatest([$A]).pipe(
  map(([val]) => [
    val,
    `$B : ${val}`,
    `$C : ${val}`,
  ])
)
The code above produces the desired output. Obviously, you wouldn't need combineLatest with a single source, but the idea is the same if you had multiple sources.
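For example, with a hypothetical second, independent source $D (not part of the original snippet), the derived values are still computed inside the map, so they can never be out of sync with $A:

const $D = new BehaviorSubject('other')
combineLatest([$A, $D]).pipe(
  map(([a, d]) => [
    a,
    `$B : ${a}`, // derived inline, always consistent with a
    `$C : ${a}`,
    d,
  ])
)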
Let's use a more concrete example that has the same issue:
const userId$ = new ReplaySubject<string>(1);
const maxMsgCount$ = new BehaviorSubject(2);
const details$ = userId$.pipe(switchMap(id => getDetails(id)));
const messages$ = combineLatest([userId$, maxMsgCount$]).pipe(
  switchMap(([id, max]) => getMessages(id, max))
);
const user$ = combineLatest([userId$, details$, messages$]).pipe(
  map(([id, details, messages]) => ({
    id,
    age: details.age,
    name: details.name,
    messages
  }))
);
Notice that when userId$ emits a new value, the user$ observable would end up emitting values that had the new userId, but the details from the old user!
We can prevent this by only including unique sources in our combineLatest:
const userId$ = new ReplaySubject<string>(1);
const maxMsgCount$ = new BehaviorSubject(2);
const user$ = combineLatest([userId$, maxMsgCount$]).pipe(
  switchMap(([id, max]) => combineLatest([getDetails(id), getMessages(id, max)]).pipe(
    map(([details, messages]) => ({
      id,
      age: details.age,
      name: details.name,
      messages
    }))
  ))
);
You can see this behavior in action in the below stackblitz samples:
Problem
Solution
I'm trying to fit a model using fitDataset(). I can train using the "normal" approach, with a for loop and getting random batches of data (20000 data points).
I'd like to use fitDataset() so I can use the entire dataset and not rely on the "randomness" of my getBatch function.
I'm getting closer using the API docs and the example on tfjs-data, but I'm stuck on what is probably a dumb data-manipulation issue...
So here's how I'm doing it:
const [trainX, trainY] = await bigData
const model = await cnnLSTM // gru performing well
const BATCH_SIZE = 32
const dataSet = flattenDataset(trainX.slice(200), trainY.slice(200))
model.compile({
  loss: 'categoricalCrossentropy',
  optimizer: tf.train.adam(0.001),
  metrics: ['accuracy']
})
await model.fitDataset(dataSet.train.batch(32), {
  epochs: C.trainSteps,
  validationData: dataSet.validation,
  callbacks: {
    onBatchEnd: async (batch, logs) => (await tf.nextFrame()),
    onEpochEnd: (epoch, logs) => {
      let i = epoch + 1
      lossValues.push({'epoch': i, 'loss': logs.loss, 'val_loss': logs.val_loss, 'set': 'train'})
      accuracyValues.push({'epoch': i, 'accuracy': logs.acc, 'val_accuracy': logs.val_acc, 'set': 'train'})
      // await md `${await plotLosses(train.lossValues)} ${await plotAccuracy(train.accuracyValues)}`
    }
  }
})
Here's my interpretation of the dataset creation:
flattenDataset = (features, labels, split = 0.35) => {
  return tf.tidy(() => {
    let slice = features.length - Math.floor(features.length * split)
    const featuresTrain = features.slice(0, slice)
    const featuresVal = features.slice(slice)
    const labelsTrain = labels.slice(0, slice)
    const labelsVal = labels.slice(slice)
    const data = {
      train: tf.data.array(featuresTrain, labelsTrain),
      validation: tf.data.array(featuresVal, labelsVal)
    }
    return data
  })
}
I'm getting an error:
Error: Dataset iterator for fitDataset() is expected to generate an Array of length 2: `[xs, ys]`, but instead generates Tensor
[[0.4106583, 0.5408, 0.4885066, 0.9021732, 0.1278526],
[0.3711334, 0.5141, 0.4848816, 0.9021571, 0.2688071],
[0.4336613, 0.5747, 0.4822159, 0.9021728, 0.3694479],
...,
[0.4123166, 0.4553, 0.478438 , 0.9020132, 0.8797594],
[0.3963479, 0.3714, 0.4871198, 0.901996 , 0.7170534],
[0.4832076, 0.3557, 0.4892016, 0.9019232, 0.9999322]],Tensor
[[0.3711334, 0.5141, 0.4848816, 0.9021571, 0.2688071],
[0.4336613, 0.5747, 0.4822159, 0.9021728, 0.3694479],
[0.4140858, 0.5985, 0.4789927, 0.9022084, 0.1912155],
...,
The input data is 6 timesteps with 5 dimensions, and the labels are just one-hot encoded classes: [0,0,1], [0,1,0] and [1,0,0]. I guess flattenDataset() is not sending the data in the correct way.
Does data.train need to output [6 timesteps with 5 dims, label] for each data point? I got this error when I tried that:
Error: The feature data generated by the dataset lacks the required input key 'conv1d_Conv1D5_input'.
Could really use some pro insight...
--------------------
Edit #1:
I feel I'm close to an answer.
const X = tf.data.array(trainX.slice(0, 100))//.map(x => x)
const Y = tf.data.array(trainY.slice(0, 100))//.map(x => x)
const zip = tf.data.zip([X, Y])
const dataSet = {
  train: zip
}
dataSet.train.forEach(x => console.log(x))
With this I get on the console:
[Array(6), Array(3)]
[Array(6), Array(3)]
[Array(6), Array(3)]
...
[Array(6), Array(3)]
[Array(6), Array(3)]
but fitDataset is still giving me: Error: The feature data generated by the dataset lacks the required input key 'conv1d_Conv1D5_input'.
My model looks like this:
const model = tf.sequential()
model.add(tf.layers.conv1d({
  inputShape: [6, 5],
  kernelSize: 3,
  filters: 64,
  strides: 1,
  padding: 'same',
  activation: 'elu',
  kernelInitializer: 'varianceScaling',
}))
model.add(tf.layers.maxPooling1d({poolSize: 2}))
model.add(tf.layers.conv1d({
  kernelSize: 1,
  filters: 64,
  strides: 1,
  padding: 'same',
  activation: 'elu'
}))
model.add(tf.layers.maxPooling1d({poolSize: 2}))
model.add(tf.layers.lstm({
  units: 18,
  activation: 'elu'
}))
model.add(tf.layers.dense({units: 3, activation: 'softmax'}))
model.compile({
  loss: 'categoricalCrossentropy',
  optimizer: tf.train.adam(0.001),
  metrics: ['accuracy']
})
return model
What is wrong here?
model.fitDataset expects a Dataset, where each element is a tuple of two items, [xs, ys].
So in your case, you need to create a feature dataset and a label dataset, then merge them with tf.data.zip to create the training dataset. Same for the validation dataset.
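A minimal sketch of that, assuming trainX and trainY are the plain nested arrays from the question (shapes [numSamples, 6, 5] and [numSamples, 3]); fitDataset also accepts elements shaped as {xs, ys}:

const xs = tf.data.array(trainX); // features: one [6, 5] window per element
const ys = tf.data.array(trainY); // labels: one one-hot [3] vector per element
// zip pairs the two datasets element-wise, so each batch arrives as {xs, ys}
const trainDataset = tf.data.zip({xs, ys}).batch(32);
await model.fitDataset(trainDataset, {epochs: 10}); // epochs value is arbitrary here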
Solved it
So after a lot of trial and error I found a way to make it work.
I had an input shape of [6, 5], meaning an array of 6 arrays with 5 floats each.
[[[0.3467378, 0.3737, 0.4781905, 0.90665, 0.68142351],
[0.44003019602788285, 0.3106, 0.4864576, 0.90193448, 0.5841830879700972],
[0.30672944860847245, 0.3404, 0.490295674, 0.90720676, 0.8331748581920732],
[0.37475716007758336, 0.265, 0.4847249, 0.902056932, 0.6611207914113887],
[0.5639427928616854, 0.2423002, 0.483168235, 0.9020202294447865, 0.82823],
[0.41581425627336555, 0.4086, 0.4721923, 0.902094287, 0.914699]], ... 20k more]
What I did was flatten the array into one long array of 5-dimensional arrays, then applied .batch(6) to it to rebuild the 6-timestep windows.
const BATCH_SIZE = 20 //batch size fed to the NN
const X = tf.data.array([].concat(...trainX)).batch(6).batch(BATCH_SIZE)
const Y = tf.data.array(trainY).batch(BATCH_SIZE)
const zip = tf.data.zip([X, Y])
const dataSet = {
  train: zip
}
Hope it can help others on complex data!!
If I have an array of events that includes a UTC timestamp and event data, as follows:
[{utcts: ..., data: ...}, ...];
how would you use RxJS to "replay" those events with the correct time differentials between each item in the array? Assume the array is ordered by the utcts field so the first item has the lowest value.
here is a very basic set of data to get started:
var testdata = [
  {utcts: 1, data: 'a'},
  {utcts: 4, data: 'b'},
  {utcts: 6, data: 'c'},
  {utcts: 10, data: 'd'}
];
Assume utcts is just the number of seconds from the start of the replay, which begins at 0 seconds.
Use delayWhen to give you a timed replay.
Since the given utcts is relative (not absolute) time, there's no need to refresh the timestamp inside the data object.
I have added a timestamp to the console log so we can see the elapsed output time.
Note that the extra few milliseconds are typical of RxJS processing time.
console.clear()
const testdata = [
  {utcts: 1, data: 'a'},
  {utcts: 4, data: 'b'},
  {utcts: 6, data: 'c'},
  {utcts: 10, data: 'd'}
];
const replayData = (data) => Rx.Observable.from(data)
  .delayWhen(event => Rx.Observable.of(event).delay(event.utcts * 1000))
// Show replay items with output time (in milliseconds)
const start = new Date()
replayData(testdata)
.timestamp()
.subscribe(x => console.log(x.value, 'at', x.timestamp - start, 'ms'))
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.6/Rx.js"></script>
Ref delayWhen, timestamp
This also works and is arguably simpler; not sure which is best.
mergeMap() flattens the inner observable, which is necessary to apply the delay.
const replayData = (data) => Rx.Observable.from(data)
  .mergeMap(event => Rx.Observable.of(event).delay(event.utcts * 1000))
Rough pseudocode (straight out of my head, not verified by running) might be something like this:
Observable.from(testdata)
  .scan((acc, value) => ({
    // gap since the previous event (the first event is delayed by its own utcts)
    delay: acc.value === null ? value.utcts : value.utcts - acc.value.utcts,
    value
  }), { delay: 0, value: null })
  .concatMap(({ delay, value }) => Observable.of(value).delay(delay * 1000))
The scan operator is similar to reduce, but emits intermediate values. Use it to compute the diff between consecutive timestamps to get each delay, then emit each value after its delay; concatMap (rather than mergeMap) keeps the inner observables sequential, so the per-event gaps accumulate instead of all timers starting at once. There are a couple of other approaches that would work the same way.
This should work in https://rxviz.com (copy-paste there):
const { delay, mergeMap } = RxOperators;
const { from, Observable, of } = Rx;
const testdata = [
  {utcts: 0.2, data: 'a'},
  {utcts: 2.0, data: 'b'},
  {utcts: 2.8, data: 'c'},
  {utcts: 4.0, data: 'd'}
];
from(testdata).pipe(
  mergeMap(event => of(event).pipe(
    delay(event.utcts * 1000)
  ))
)
I am studying:
1. the map operator vs. flatMap
2. how to add a promise into an observable chain.
Then I constructed 4 different versions of var source, as shown below.
Versions 1 and 3 work as expected, while versions 2 and 4 fail oddly.
My code has also been added in => js bin
Could someone tell what is wrong with my code?
Thanks,
Xi
console.clear();
var p = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve('resolved!');
  }, 1000);
});
var source = Rx.Observable.interval(200).take(3)
  .flatMap(x => Rx.Observable.timer(500).map(() => x)) // version 1, works OK
  // .flatMap(x => Rx.Observable.timer(500).map((x) => x)) // version 2, not OK, returns => 0, 0, 0
  // .map(x => p.then(s => console.log(s))); // version 3, works OK
  // .flatMap(x => p.then(s => console.log(s))); // version 4, not OK, error occurs
source.subscribe(x => console.log(x.toString()));
.flatMap(x => Rx.Observable.timer(500).map((x) => x))
returns "0", "0", "0" because timer emits 0 after 500 ms and map takes that value as input x and returns it with (x) => x. In the previous line, x was not redeclared in the map, so it came from flatMap.
.flatMap(x => p.then( s => console.log(s)));
gives an error because a promise emits the return value of the then callback. That's console.log(s), which returns undefined. So flatMap gives an Observable of undefined, undefined, undefined. When the first one reaches the subscribe, it tries to do undefined.toString() and errors out.
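For example, version 4 starts working if the then callback returns a value, so the flattened observable emits something defined (a small sketch reusing p from the code above):

var source = Rx.Observable.interval(200).take(3)
  .flatMap(x => p.then(s => {
    console.log(s);
    return s; // flatMap now emits 'resolved!' instead of undefined
  }));
source.subscribe(x => console.log(x.toString()));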
I haven't found a solution with data set up quite like mine...
var marketshare = [
  {"store": "store1", "share": "5.3%", "q1count": 2, "q2count": 4, "q3count": 0},
  {"store": "store2", "share": "1.9%", "q1count": 5, "q2count": 10, "q3count": 0},
  {"store": "store3", "share": "2.5%", "q1count": 3, "q2count": 6, "q3count": 0}
];
Code so far, returning undefined...
var minDataPoint = d3.min( d3.values(marketshare.q1count) ); //Expecting 2 from store 1
var maxDataPoint = d3.max( d3.values(marketshare.q2count) ); //Expecting 10 from store 2
I'm a little overwhelmed by d3.keys, d3.values, d3.maps, converting to array, etc. Any explanations or nudges would be appreciated.
I think you're looking for something like this instead:
d3.min(marketshare, function(d){ return d.q1count; }) // => 2.
You can pass an accessor function as the second argument to d3.min/d3.max.
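Applied to both of your variables (this assumes the v3-style d3.min/d3.max signature taking an array and an accessor, matching the d3.values usage above):

var minDataPoint = d3.min(marketshare, function(d){ return d.q1count; }); // 2 (store1)
var maxDataPoint = d3.max(marketshare, function(d){ return d.q2count; }); // 10 (store2)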