How to strip an array if its length is 0 in Yup validation

I want to strip the "other" array if its length is 0.
Here is my schema:
languages: Yup.object({
  native: Yup.string().oneOf(languages),
  other: Yup.array()
    .max(5)
    .of(
      Yup.object({
        language: Yup.string().oneOf(languages),
        speaking: Yup.string().oneOf(fluency),
        reading: Yup.string().oneOf(fluency),
        writing: Yup.string().oneOf(fluency),
      })
    )
    .when("other.length", {
      is: 0,
      then: (s) => s.strip(),
    }),
}),
The error I get:
Uncaught Error: Cyclic dependency, node was: "other"
Thanks in advance.

For anyone who encounters the same issue:
The fix was in the when condition:
.when(".length", {
  is: 0,
  then: (s) => s.strip(),
}),
I needed to reference the array's own .length (".length" rather than "other.length"), and this resolved the cyclic-dependency error.
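For reference, a sketch of the full corrected schema, assuming everything else stays exactly as in the question and only the when reference changes:
languages: Yup.object({
  native: Yup.string().oneOf(languages),
  other: Yup.array()
    .max(5)
    .of(
      Yup.object({
        language: Yup.string().oneOf(languages),
        speaking: Yup.string().oneOf(fluency),
        reading: Yup.string().oneOf(fluency),
        writing: Yup.string().oneOf(fluency),
      })
    )
    // ".length" points at this array's own length, so no cyclic dependency
    .when(".length", {
      is: 0,
      then: (s) => s.strip(),
    }),
}),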


How to synchronise rxjs observables

I'm looking for a way to keep streams that are combined from a common source in sync.
Note: this is the simplest form of my problem, in reality I'm combining 8 different streams some are intertwined, some are async etc :(
import { BehaviorSubject, map, combineLatest } from 'rxjs';
const $A = new BehaviorSubject(1)
const $B = $A.pipe(map(val => `$B : ${val}`))
const $C = $A.pipe(map(val => `$C : ${val}`))
// prints out:
// (1) [1, "$B : 1", "$C : 1"]
combineLatest([$A,$B,$C]).subscribe(console.log)
$A.next(2)
// prints out:
// (2) [2, "$B : 1", "$C : 1"]
// (3) [2, "$B : 2", "$C : 1"]
// (4) [2, "$B : 2", "$C : 2"]
Code example
The print out (1) is great, all streams have a value of "1": [1, "$B : 1", "$C : 1"]
The print out (4) is great, all streams have a value of "2": [2, "$B : 2", "$C : 2"]
But combineLatest fires for (2) and (3) as each stream is updated individually, meaning you get a mixture of "1" and "2".
**How can I modify the code so I only get notified once a change has fully propagated?**
My best solutions so far:
A) Using debounceTime(100)
combineLatest([$A,$B,$C]).pipe(debounceTime(100)).subscribe(console.log)
But it's flaky, because it can either swallow valid states if they are processed too quickly, or notify with invalid states if individual pipes are too slow.
B) Filtering only valid states
combineLatest([$A, $B, $C]).pipe(
  filter(([a, b, c]) => {
    return b.indexOf(a) > -1 && c.indexOf(a) > -1
  })
).subscribe(console.log)
This works, but adding a validation function seems like the wrong way to do it (and more work :)).
C) Making $B and $C subjects into which we push the latest value, resetting them on every change
$A.pipe(tap(val => {
  $B.next(undefined);
  $B.next(val);
  $C.next(undefined);
  $C.next(val);
}))
...
combineLatest([$A, $B.pipe(filter(b => !!b)), $C.pipe(filter(c => !!c))]).pipe(
  filter(([a, b, c]) => {
    return b.indexOf(a) > -1 && c.indexOf(a) > -1
  })
)
This works, but it needs quite a lot of extra code and variables.
I have the feeling I'm missing a concept or not seeing how to achieve this in a clean/robust way, but I'm sure I'm not the first one :)
Thanks
As you've observed, the observable created by combineLatest will emit when any of its sources emit.
Your problem is occurring because you pass multiple observables into combineLatest that share a common source. So whenever that common source emits, it causes each derived observable to emit.
One way to "fix" this in a synchronous scenario is to simply apply debounceTime(0) which will mask the duplicate emission that happens in the same event loop. This approach is a bit naive, but works in simple scenarios:
combineLatest([$A,$B,$C]).pipe(
debounceTime(0)
)
But, since you have some async things going on, I think your solution is to not include duplicate sources inside combineLatest and handle the logic further down the chain:
combineLatest([$A]).pipe(
  map(([val]) => [
    val,
    `$B : ${val}`,
    `$C : ${val}`,
  ])
)
The code above produces the desired output. Obviously, you wouldn't need combineLatest with a single source, but the idea is the same if you had multiple sources.
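For instance, here's a small sketch of that idea with two genuinely independent sources; $X is a hypothetical second source added only for illustration, and the derived $B/$C values are computed inside map instead of being separate observables passed to combineLatest:
import { BehaviorSubject, combineLatest, map } from 'rxjs';

const $A = new BehaviorSubject(1);
const $X = new BehaviorSubject('x'); // hypothetical independent source
combineLatest([$A, $X]).pipe(
  map(([a, x]) => [a, `$B : ${a}`, `$C : ${a}`, x])
).subscribe(console.log);
$A.next(2); // emits once, consistently: [2, "$B : 2", "$C : 2", "x"]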
Let's use a more concrete example that has the same issue:
const userId$ = new ReplaySubject<string>(1);
const maxMsgCount$ = new BehaviorSubject(2);
const details$ = userId$.pipe(switchMap(id => getDetails(id)));
const messages$ = combineLatest([userId$, maxMsgCount$]).pipe(
  switchMap(([id, max]) => getMessages(id, max))
);
const user$ = combineLatest([userId$, details$, messages$]).pipe(
  map(([id, details, messages]) => ({
    id,
    age: details.age,
    name: details.name,
    messages
  }))
);
Notice that when userId$ emits a new value, the user$ observable can end up emitting a value that has the new userId but the details from the old user!
We can prevent this by only including unique sources in our combineLatest:
const userId$ = new ReplaySubject<string>(1);
const maxMsgCount$ = new BehaviorSubject(2);
const user$ = combineLatest([userId$, maxMsgCount$]).pipe(
  switchMap(([id, max]) => combineLatest([getDetails(id), getMessages(id, max)]).pipe(
    map(([details, messages]) => ({
      id,
      age: details.age,
      name: details.name,
      messages
    }))
  ))
);
You can see this behavior in action in the StackBlitz samples below:
Problem
Solution

feed data to fitDataset()

I'm trying to fit a model using fitDataset(). I can train using the "normal" approach, with a for loop and getting random batches of data (20000 data points).
I'd like to use fitDataset() so I can use the entire dataset and not rely on the "randomness" of my getBatch function.
I'm getting closer, using the API docs and the example on tfjs-data, but I'm stuck on what is probably a dumb data-manipulation step...
So here's how I'm doing it:
const [trainX, trainY] = await bigData
const model = await cnnLSTM // gru performing well
const BATCH_SIZE = 32
const dataSet = flattenDataset(trainX.slice(200), trainY.slice(200))
model.compile({
  loss: 'categoricalCrossentropy',
  optimizer: tf.train.adam(0.001),
  metrics: ['accuracy']
})
await model.fitDataset(dataSet.train.batch(32), {
  epochs: C.trainSteps,
  validationData: dataSet.validation,
  callbacks: {
    onBatchEnd: async (batch, logs) => (await tf.nextFrame()),
    onEpochEnd: (epoch, logs) => {
      let i = epoch + 1
      lossValues.push({'epoch': i, 'loss': logs.loss, 'val_loss': logs.val_loss, 'set': 'train'})
      accuracyValues.push({'epoch': i, 'accuracy': logs.acc, 'val_accuracy': logs.val_acc, 'set': 'train'})
      // await md `${await plotLosses(train.lossValues)} ${await plotAccuracy(train.accuracyValues)}`
    }
  }
})
Here's my interpretation of the dataset creation:
flattenDataset = (features, labels, split = 0.35) => {
  return tf.tidy(() => {
    let slice = features.length - Math.floor(features.length * split)
    const featuresTrain = features.slice(0, slice)
    const featuresVal = features.slice(slice)
    const labelsTrain = labels.slice(0, slice)
    const labelsVal = labels.slice(slice)
    const data = {
      train: tf.data.array(featuresTrain, labelsTrain),
      validation: tf.data.array(featuresVal, labelsVal)
    }
    return data
  })
}
I'm getting an error:
Error: Dataset iterator for fitDataset() is expected to generate an Array of length 2: `[xs, ys]`, but instead generates Tensor
[[0.4106583, 0.5408, 0.4885066, 0.9021732, 0.1278526],
[0.3711334, 0.5141, 0.4848816, 0.9021571, 0.2688071],
[0.4336613, 0.5747, 0.4822159, 0.9021728, 0.3694479],
...,
[0.4123166, 0.4553, 0.478438 , 0.9020132, 0.8797594],
[0.3963479, 0.3714, 0.4871198, 0.901996 , 0.7170534],
[0.4832076, 0.3557, 0.4892016, 0.9019232, 0.9999322]],Tensor
[[0.3711334, 0.5141, 0.4848816, 0.9021571, 0.2688071],
[0.4336613, 0.5747, 0.4822159, 0.9021728, 0.3694479],
[0.4140858, 0.5985, 0.4789927, 0.9022084, 0.1912155],
...,
The input data is 6 timesteps with 5 dimensions, and the labels are just one-hot encoded classes: [0,0,1], [0,1,0] and [1,0,0]. I guess flattenDataset() is not sending the data in the correct way.
Does data.train need to output, for each data point, [6 timesteps with 5 dims, label]? I get this error when I tried that:
Error: The feature data generated by the dataset lacks the required input key 'conv1d_Conv1D5_input'.
Could really use some pro insight...
--------------------
Edit #1:
I feel I'm close to an answer.
const X = tf.data.array(trainX.slice(0, 100))//.map(x => x)
const Y = tf.data.array(trainY.slice(0, 100))//.map(x => x)
const zip = tf.data.zip([X, Y])
const dataSet = {
  train: zip
}
dataSet.train.forEach(x => console.log(x))
With this I get on the console:
[Array(6), Array(3)]
[Array(6), Array(3)]
[Array(6), Array(3)]
...
[Array(6), Array(3)]
[Array(6), Array(3)]
But fitDataset is giving me: Error: The feature data generated by the dataset lacks the required input key 'conv1d_Conv1D5_input'.
My model looks like this:
const model = tf.sequential()
model.add(tf.layers.conv1d({
  inputShape: [6, 5],
  kernelSize: (3),
  filters: 64,
  strides: 1,
  padding: 'same',
  activation: 'elu',
  kernelInitializer: 'varianceScaling',
}))
model.add(tf.layers.maxPooling1d({poolSize: (2)}))
model.add(tf.layers.conv1d({
  kernelSize: (1),
  filters: 64,
  strides: 1,
  padding: 'same',
  activation: 'elu'
}))
model.add(tf.layers.maxPooling1d({poolSize: (2)}))
model.add(tf.layers.lstm({
  units: 18,
  activation: 'elu'
}))
model.add(tf.layers.dense({units: 3, activation: 'softmax'}))
model.compile({
  loss: 'categoricalCrossentropy',
  optimizer: tf.train.adam(0.001),
  metrics: ['accuracy']
})
return model
What is wrong here?
What model.fitDataset expects is a Dataset, where each element inside this dataset is a tuple of two items, [feature, label].
So in your case, you need to create a featureDataset and a labelDataset, then merge them with tf.data.zip to create the trainDataset. Do the same for the validation dataset.
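A rough sketch of that idea, assuming trainX holds the nested [6][5] samples and trainY the one-hot [3] labels described in the question (the epoch count here is arbitrary); zipping a named pair of datasets yields the {xs, ys} elements that fitDataset accepts:
const xs = tf.data.array(trainX)                  // each element: one [6][5] sample
const ys = tf.data.array(trainY)                  // each element: one one-hot [3] label
const trainDataset = tf.data.zip({xs, ys}).batch(32)  // batches into [32,6,5] / [32,3] tensors
await model.fitDataset(trainDataset, {epochs: 10})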
Solved it
So after a lot of trial and error I found a way to make it work.
I had an input shape of [6, 5], meaning an array with 6 arrays of 5 floats each.
[[[0.3467378, 0.3737, 0.4781905, 0.90665, 0.68142351],
[0.44003019602788285, 0.3106, 0.4864576, 0.90193448, 0.5841830879700972],
[0.30672944860847245, 0.3404, 0.490295674, 0.90720676, 0.8331748581920732],
[0.37475716007758336, 0.265, 0.4847249, 0.902056932, 0.6611207914113887],
[0.5639427928616854, 0.2423002, 0.483168235, 0.9020202294447865, 0.82823],
[0.41581425627336555, 0.4086, 0.4721923, 0.902094287, 0.914699]], ... 20k more]
What I did was flatten the array so it became an array of 5-element arrays, then applied .batch(6) to it.
const BATCH_SIZE = 20 // batch size fed to the NN
const X = tf.data.array([].concat(...trainX)).batch(6).batch(BATCH_SIZE)
const Y = tf.data.array(trainY).batch(BATCH_SIZE)
const zip = tf.data.zip([X, Y])
const dataSet = {
  train: zip
}
Hope this can help others working with complex data!

Question about RxJS' Observable.distinct

I want to know how to use Observable.
What I want to do is remove duplicates. Sample 1 below works, but it isn't the form I want; what I'd like to know is how to do this when an array has already been prepared.
orgLayerDistinct(allList: LabelMasterExt[]) {
  // Observable.of( allList ).distinct( );

  // [sample 1] This sample works, but it's not the form I want.
  Observable.of<Person>(
    { age: 4, name: 'Foo'},
    { age: 7, name: 'Bar'},
    { age: 5, name: 'Foo'},
    { age: 6, name: 'Foo'})
    .distinct((p: Person) => p.name)
    .subscribe(x => console.log(x));

  // [sample 2 experimental] I want to use this on the assumption that an array has already been prepared.
  const persons: Person[] = [];
  persons.push({ age: 4, name: 'Foo'});
  persons.push({ age: 7, name: 'Bar'});
  persons.push({ age: 5, name: 'Foo'});
  persons.push({ age: 6, name: 'Foo'});

  Observable.of<Person[]>(persons)
    .distinct((p: Person) => p.name)
    .subscribe(x => console.log(x));
}
However, [sample 2 experimental] gives the following error:
The type argument for type parameter 'T' cannot be inferred from the usage.
Consider specifying the type arguments explicitly.
Type argument candidate 'Person[]' is not a valid type argument
because it is not a supertype of candidate 'Person'.
Property 'includes' is missing in type 'Person'.
Is there a good way to do this?
You can either use Observable.from<Person>(array) or Observable.of<Person>(...array).
The problem with your second example is that Observable.of<Person[]>() emits whole arrays of Person as its elements, but .distinct() expects elements of the Person type.
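For example, a minimal sketch in the same RxJS 5 style as the question, using the prepared persons array from sample 2:
// Emits each Person individually, then filters out duplicate names.
Observable.from<Person>(persons)
  .distinct((p: Person) => p.name)
  .subscribe(x => console.log(x));
// logs { age: 4, name: 'Foo' } and { age: 7, name: 'Bar' }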

How do I reference an array member in Ruby?

Given this array in Ruby:
myarray = [name: "John", age: 35]
How do I refer to the age?
I tried myarray[:age] but got the error "can't convert Symbol into Integer".
Update:
I was trying to simplify my question by extracting what I thought my problem was. I may not understand it completely.
I'm experimenting with Dashing and trying to send a number to a meter widget. I've created a variable, 'response_raw', and am trying to send it in the third send event. Here's my code:
SCHEDULER.every '1m', :first_in => 0 do
  # Get checks
  url = "https://#{CGI::escape user}:#{CGI::escape password}@api.pingdom.com/api/2.0/checks"
  response = RestClient.get(url, {"App-Key" => api_key})
  response = JSON.parse(response.body, :symbolize_names => true)
  if response[:checks]
    checks = response[:checks].map { |check|
      if check[:status] == 'up'
        state = 'up'
        last_response_time = "#{check[:lastresponsetime]}ms"
        response_raw = check[:lastresponsetime]
      else
        state = 'down'
        last_response_time = "DOWN"
        response_raw = 0
      end
      { name: check[:name], state: state, lastRepsonseTime: last_response_time, pt: response_raw }
    }
  else
    checks = [name: "pingdom", state: "down", lastRepsonseTime: "-", pt: 0]
  end
  checks.sort_by { |check| check['name'] }
  send_event('pingdom', { checks: checks })
  send_event('pingdom-meter', { value: checks[:pt] })
end
In CoffeeScript, [name: "John", age: 35] is an array containing a single object with two properties (name and age).
Here is how it'll look in plain JavaScript:
myarray = [
{
name: "John",
age: 35
}
];
So, answering your question, to access the age you should take the first element of the array and then reference the age property:
myarray[0].age
or
myarray[0]['age']
But, judging from your question, you're probably using the wrong data structure. Why not use a plain object instead of an array?
person = name: "John", age: 35
console.log "#{person.name}'s age is #{person.age}"
Update
It looks like your question is actually about Ruby and not about CoffeeScript, though my answer remains the same.
To access the age you should take the first element of the array and then look up the :age key:
myarray[0][:age]
Since myarray is an array, Ruby expects an integer index, but you're giving it the symbol :age instead.
I finally figured it out with Leonid's help. Thank you.
I changed:
send_event('pingdom-meter', { value: checks[:pt] })
to
send_event('pingdom-meter', { value: checks[0][:pt] })

XMLHttpRequestProgressEvent total / totalSize giving a wrong value

I am listening on xhr.onprogress
request.onprogress = function(e){
  return conf.progress ? conf.progress(e) : null;
};
where conf.progress is
function(e){
  var position = e.position || e.loaded;
  var total = e.totalSize || e.total;
  var percent = ((e.loaded/e.total)*100)+"";
  console.log(percent);
  console.log(position, total);
  console.log(e);
}
percent yields a wrong value in the console, like 2.789069431137492e-11, and this is what console.log(e) prints:
XMLHttpRequestProgressEvent
bubbles: false
cancelBubble: false
cancelable: true
clipboardData: undefined
currentTarget: undefined
defaultPrevented: false
eventPhase: 2
lengthComputable: false
loaded: 4982035
position: 4982035
returnValue: true
srcElement: undefined
target: undefined
timeStamp: 1323097256269
total: 18446744073709552000
totalSize: 18446744073709552000
type: "progress"
__proto__: XMLHttpRequestProgressEvent
Why is e.totalSize: 18446744073709552000 so big, even after the document has completely loaded (e.loaded: 4982035)? totalSize should equal loaded once loading is complete.
Actually, if you're on a WebKit-based browser, it could very likely be a WebKit bug where a length of -1 is being cast without checking for a negative: https://bugs.webkit.org/show_bug.cgi?id=36156
That's because totalSize is set to the maximum value of an unsigned 64-bit integer when the total is unknown; 18446744073709552000 is just that value after JavaScript's floating-point rounding. You have to rely on lengthComputable to check whether a Content-Length header was returned or not.
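A minimal sketch of a progress handler along those lines, checking lengthComputable before computing a percentage:
request.onprogress = function (e) {
  if (e.lengthComputable) {
    // Content-Length was returned, so e.total is meaningful.
    console.log(((e.loaded / e.total) * 100).toFixed(1) + "%");
  } else {
    // Total size unknown: report bytes loaded only.
    console.log(e.loaded + " bytes loaded");
  }
};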
