I have a request that triggers another request whose response contains a value I need later in the test.
I queued the code that uses the value, but it is still undefined. What am I doing wrong?
let val;
cy.request(api).then(response => {
  return fetch(`url-${response.id}`).then(response2 => {
    val = response2.id
  })
})
cy.then(() => {
  console.log('val', val) // undefined
})
Wrap the inner request in a Cypress.Promise and return it from the .then() callback.
Cypress automatically waits for returned promises to resolve before running the next queued command.
let val;
cy.request(api).then(response => {
  return new Cypress.Promise(resolve => {
    fetch(`url-${response.id}`).then(response2 => {
      val = response2.id
      resolve() // signals to Cypress that the 2nd request has completed
    })
  })
})
cy.then(() => {
  console.log('val', val) // passes
})
Related
I am writing a long test, so I moved the most reusable part into a custom command in the commands folder. However, I need access to a certain return value. How would I get the return value from the command?
Instead of returning salesContractNumber directly, wrap it with cy.wrap() and return that:
Your custom command:
Cypress.Commands.add('addStandardGrainSalesContract', () => {
  // Rest of the code
  return cy.wrap(salesContractNumber)
})
In your test you can do this:
cy.addStandardGrainSalesContract().then((salesContractNumber) => {
  cy.get(FixingsAddPageSelectors.ContractNumberField).type(salesContractNumber)
})
Generally speaking, you need to return the value from the last .then().
Cypress puts the results of the commands onto the queue for you, and trailing .then() sections can modify the results.
Cypress.Commands.add('addStandardGrainSalesContract', () => {
  let salesContractNumber;
  cy.get(SalesContractsAddSelectors.SalesContractNumber).should($h2 => {
    ...
    salesContractNumber = ...
  })
  .then(() => {
    ...
    return salesContractNumber
  })
})
cy.addStandardGrainSalesContract().then(salesContractNumber => {
...
Or this should also work:
Cypress.Commands.add('addStandardGrainSalesContract', () => {
  cy.get(SalesContractsAddSelectors.SalesContractNumber).should($h2 => {
    ...
    const salesContractNumber = ...
    return salesContractNumber; // pass into .then()
  })
  .then(salesContractNumber => {
    ...
    return salesContractNumber // returns to outer code
  })
})
cy.addStandardGrainSalesContract().then(salesContractNumber => {
...
Extra notes:
const salesContractHeader = $h2.text() // don't need Cypress.$()
const salesContractNumber = salesContractHeader.split(' ').pop() // take last item in array
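Putting those pieces together, a complete version of the command might look like the sketch below. The selector constant, the h2 header, and the "last word of the header is the contract number" assumption are all taken from the snippets above, not from the real application:
Cypress.Commands.add('addStandardGrainSalesContract', () => {
  // ...steps that create the sales contract go here...
  cy.get(SalesContractsAddSelectors.SalesContractNumber).then($h2 => {
    const salesContractHeader = $h2.text()                           // don't need Cypress.$()
    const salesContractNumber = salesContractHeader.split(' ').pop() // take last item in array
    return cy.wrap(salesContractNumber)                              // yielded to the caller's .then()
  })
})
A custom command yields whatever its last Cypress chain yields, so the test consumes it exactly as in the first example: cy.addStandardGrainSalesContract().then(salesContractNumber => ...).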
I perform HTTP requests to my DB and have noticed that if I send all the requests at once, some of them get timeout errors. I'd like to add a delay between calls so the server doesn't get overloaded. I'm trying to find the RxJS solution to this problem and don't want to add a setTimeout.
Here is what I currently do:
let observables = [];
for (let int = 0; int < 10000; int++) {
  observables.push(new Observable((observer) => {
    db.add(doc[int], (err, result) => {
      observer.next();
      observer.complete();
    })
  }))
}
forkJoin(observables).subscribe(
  data => {
  },
  error => {
    console.log(error);
  },
  () => {
    db.close();
  }
);
You can indeed achieve this with RxJS quite nicely. You'll need higher-order observables: you emit observables into another observable, and a flattening operator subscribes to them for you, limiting how many are active at once.
The nice thing about this approach is that you can easily run X requests in parallel without having to manage the pool of requests yourself.
Here's the working code:
import { Observable, Subject } from "rxjs";
import { mergeAll, take, tap } from "rxjs/operators";

// this is just a mock to demonstrate how it'd behave if the API was
// taking 2s to reply for a call
const mockDbAddHttpCall = (id, cb) =>
  setTimeout(() => {
    cb(null, `some result for call "${id}"`);
  }, 2000);

// I have no idea what your response type looks like so I'm assigning
// any but of course you should have your own type instead of this
type YourRequestType = any;

const NUMBER_OF_ITEMS_TO_FETCH = 10;

const calls$$ = new Subject<Observable<YourRequestType>>();

calls$$
  .pipe(
    mergeAll(3),
    take(NUMBER_OF_ITEMS_TO_FETCH),
    tap({ complete: () => console.log(`All calls are done`) })
  )
  .subscribe(console.log);

for (let id = 0; id < NUMBER_OF_ITEMS_TO_FETCH; id++) {
  calls$$.next(
    new Observable(observer => {
      console.log(`Starting a request for ID "${id}"`);
      mockDbAddHttpCall(id, (err, result) => {
        if (err) {
          observer.error(err);
        } else {
          observer.next(result);
          observer.complete();
        }
      });
    })
  );
}
And a live demo on Stackblitz: https://stackblitz.com/edit/rxjs-z1x5m9
Please open the console of your browser and note that the log showing when a call is triggered appears straight away for 3 of them; each remaining call then waits for one of the in-flight calls to finish before being picked up.
It looks like you could use an initial timer to trigger the HTTP calls, e.g.
timer(delayTime).pipe(combineLatest(()=>sendHttpRequest()));
This would only trigger the sendHttpRequest() method after the timer observable had completed.
So with your solution, you could do the following:
observables.push(
  timer(delay + int).pipe(
    combineLatest(new Observable((observer) => {
      db.add(doc[int], (err, result) => {
        observer.next();
        observer.complete();
      });
    }))
  )
);
Here delay could start at 0, and you could increase it by some margin using the int index of your loop.
Timer docs: https://www.learnrxjs.io/learn-rxjs/operators/creation/timer
Combine latest docs: https://www.learnrxjs.io/learn-rxjs/operators/combination/combinelatest
merge with concurrent value:
mergeAll and mergeMap both allow you to define the max number of subscribed observables. mergeAll(1)/mergeMap(LAMBDA, 1) is basically concatAll()/concatMap(LAMBDA).
merge is basically just the static mergeAll
Here's how you might use that:
let observables = [...Array(10000).keys()].map(intV =>
  new Observable(observer => {
    db.add(doc[intV], (err, result) => {
      observer.next();
      observer.complete();
    });
  })
);

const MAX_CONCURRENT_REQUESTS = 10;
merge(...observables, MAX_CONCURRENT_REQUESTS).subscribe({
  next: data => {},
  error: err => console.log(err),
  complete: () => db.close()
});
Of note: This doesn't batch your calls, but it should solve the problem described and it may be a bit faster than batching as well.
mergeMap with concurrent value:
Perhaps a slightly more RxJS way using range and mergeMap
const MAX_CONCURRENT_REQUESTS = 10;
range(0, 10000).pipe(
  mergeMap(intV =>
    new Observable(observer => {
      db.add(doc[intV], (err, result) => {
        observer.next();
        observer.complete();
      });
    }),
    MAX_CONCURRENT_REQUESTS
  )
).subscribe({
  next: data => {},
  error: err => console.log(err),
  complete: () => db.close()
});
I am trying to wrap a grpc-web server-streaming client with rxjs.Observable and be able to perform retries if, say, the server returns an error.
Consider the following code.
// server
foo = (call: ServerWritableStream<FooRequest, Empty>): void => {
  if (!call.request?.getMessage()) {
    call.emit("error", { code: StatusCode.FAILED_PRECONDITION, message: "Invalid request" })
  }
  for (let i = 0; i <= 2; i++) {
    call.write(new FooResponse())
  }
  call.end()
}
// client
test("should not end on retry", (done) => {
new Observable(obs => {
const call = new FooClient("http://localhost:8080").foo(new FooRequest())
call.on("data", data => obs.next(data))
call.on("error", err => {
console.log("server emitted error")
obs.error(err)
})
call.on("end", () => {
console.log("server emitted end")
obs.complete()
})
})
.pipe(retryWhen(<custom retry policy>))
.subscribe(
_resp => () => {},
_error => {
console.log("source observable error")
done()
},
() => {
console.log("source observable completed(?)")
done()
})
})
// output
server emitted error
server emitted end
source observable completed(?)
The server emits the "end" event after(?) emitting "error", so it seems like I have to remove the "end" handler from the source observable.
What would be an "Rx-y" way to end/complete the stream?
For anyone interested, I ended up removing the "end" event handler and replacing it with "status": if the server returns an OK status code (which signals the end of the stream), the observable is completed.
new Observable(obs => {
  const call = new FooClient("http://localhost:8080").foo(new FooRequest())
  call.on("data", data => obs.next(data))
  call.on("error", err => obs.error(err))
  call.on("status", (status: grpcWeb.Status) => {
    if (status.code == grpcWeb.StatusCode.OK) {
      obs.complete()
    }
  })
})
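With the stream wrapped this way, every subscription runs the subscriber function again and therefore opens a fresh call, so a retry policy composes naturally on top of it. A minimal usage sketch (fooStream$ is just a name for the observable built above, and the retry count is arbitrary):
// fooStream$ stands for the observable constructed above; each (re)subscription
// creates a new FooClient call because the client lives inside the subscriber.
fooStream$
  .pipe(retry(3)) // or retryWhen(<custom retry policy>) as in the test
  .subscribe({
    next: resp => console.log("data", resp),
    error: err => console.error("gave up after retries", err),
    complete: () => console.log("stream completed")
  })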
To ensure an error doesn't complete the outer observable, a common rxjs effects pattern I've adopted is:
public saySomething$: Observable<Action> = createEffect(() => {
  return this.actions.pipe(
    ofType<AppActions.SaySomething>(AppActions.SAY_SOMETHING),
    // Switch to the result of the inner observable.
    switchMap((action) => {
      // This service could fail.
      return this.service.saySomething(action.payload).pipe(
        // Return `null` to keep the outer observable alive!
        catchError((error) => {
          // What can I do with error here?
          return of(null);
        })
      )
    }),
    // The result could be null because something could go wrong.
    tap((result: Result | null) => {
      if (result) {
        // Do something with the result!
      }
    }),
    // Update the store state.
    map((result: Result | null) => {
      if (result) {
        return new AppActions.SaySomethingSuccess(result);
      }
      // It would be nice if I had access to the **error** here.
      return new AppActions.SaySomethingFail();
    })
  );
});
Notice that I'm using catchError on the inner observable to keep the outer observable alive if the underlying network call fails (service.saySomething(action.payload)):
catchError((error) => {
  // What can I do with error here?
  return of(null);
})
The subsequent tap and map operators accommodate this in their signatures by allowing null, i.e. (result: Result | null). However, I lose the error information. Ultimately, when the final map returns new AppActions.SaySomethingFail(), I have lost any information about the error.
How can I keep the error information throughout the pipe rather than losing it at the point it's caught?
As suggested in the comments, you should use a type guard function.
Unfortunately I can't run TypeScript in a snippet, so I've commented out the types.
const { of, throwError, operators: {
  switchMap,
  tap,
  map,
  catchError
} } = rxjs;

const actions = of({ payload: 'data' });
const service = {
  saySomething: () => throwError(new Error('test'))
};
const AppActions = {};
AppActions.SaySomethingSuccess = function () {};
AppActions.SaySomethingFail = function () {};

/* Type guard */
function isError(value /*: Result | Error*/) /*: value is Error*/ {
  return value instanceof Error;
}

const observable = actions.pipe(
  switchMap((action) => {
    return service.saySomething(action.payload).pipe(
      catchError((error) => {
        return of(error);
      })
    )
  }),
  tap((result /*: Result | Error*/) => {
    if (isError(result)) {
      console.log('tap error')
      return;
    }
    console.log('tap result');
  }),
  map((result /*: Result | Error*/) => {
    if (isError(result)) {
      console.log('map error')
      return new AppActions.SaySomethingFail();
    }
    console.log('map result');
    return new AppActions.SaySomethingSuccess(result);
  })
);

observable.subscribe(_ => {})
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/6.5.5/rxjs.umd.js"></script>
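For reference, outside a runnable snippet the commented-out annotations restore to a real TypeScript type guard like the sketch below (Result here is only a placeholder for whatever success type your service actually returns):
// Sketch: the snippet's commented types written as real TypeScript.
// `Result` is a placeholder type for illustration only.
type Result = { data: string };

function isError(value: Result | Error): value is Error {
  return value instanceof Error;
}
// Inside tap/map, calling isError(result) narrows `result` to Error
// in the true branch and to Result in the false branch.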
I wouldn't try to keep the error information throughout the pipe. Instead you should separate your success pipeline (tap, map) from your error pipeline (catchError) by adding all operators to the observable whose result they should actually work with, i.e. your inner observable.
public saySomething$: Observable<Action> = createEffect(() => {
  return this.actions.pipe(
    ofType<AppActions.SaySomething>(AppActions.SAY_SOMETHING),
    switchMap((action) => this.service.saySomething(action.payload).pipe(
      tap((result: Result) => {
        // Do something with the result!
      }),
      // Update the store state.
      map((result: Result) => {
        return new AppActions.SaySomethingSuccess(result);
      }),
      catchError((error) => {
        // I can access the **error** here.
        return of(new AppActions.SaySomethingFail());
      })
    )),
  );
});
This way tap and map will only be executed on success results from this.service.saySomething. Move all your error side effects and error mapping into catchError.
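If you also want the failure action to carry the error along (the original concern in the question), nothing stops you from handing it to the action inside catchError. A minimal sketch, assuming SaySomethingFail accepts an error argument (that constructor parameter is an assumption, not something shown above):
catchError((error) => {
  // Assumption: SaySomethingFail takes the error as a constructor argument,
  // so reducers or other effects downstream can still inspect it.
  return of(new AppActions.SaySomethingFail(error));
})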
I'm trying to test the rendering of some content after it is fetched from the server.
I use Vue Test Utils, but that is irrelevant here.
In the component's created hook the AJAX call is made with axios. I register the axios-mock-adapter response and "render" the component; the call is made and everything works fine, but I have to use the moxios lib only to wait for the request to finish.
it('displays metrics', (done) => {
  this.mock.onGet('/pl/metrics').reply((config) => {
    let value = 0
    if (config.params.start == '2020-01-26') {
      value = 80
    }
    if (config.params.start == '2020-01-28') {
      value = 100
    }
    return [200, {
      metrics: [
        {
          key: "i18n-key",
          type: "count",
          value: value
        }
      ]
    }]
  })
  .onAny().reply(404)

  let wrapper = mount(Dashboard)

  moxios.wait(function() {
    let text = wrapper.text()
    expect(text).toContain('80')
    expect(text).toContain('100')
    expect(text).toContain('+20')
    done()
  })
})
Is it possible to get rid of moxios and achieve the same with axios-mock-adapter only?
Yes, you can implement your own flushPromises method with async/await:
const flushPromises = () => new Promise(resolve => setTimeout(resolve))
it('displays metrics', async () => {
  this.mock.onGet('/pl/metrics').reply((config) => {
    // ..
  }).onAny().reply(404)

  let wrapper = mount(Dashboard)
  await flushPromises()

  let text = wrapper.text()
  expect(text).toContain('80')
})
Or use done and setTimeout:
it('displays metrics', (done) => {
  this.mock.onGet('/pl/metrics').reply((config) => {
    // ..
  }).onAny().reply(404)

  let wrapper = mount(Dashboard)

  setTimeout(() => {
    let text = wrapper.text()
    expect(text).toContain('80')
    done()
  })
})
moxios.wait simply schedules a callback with setTimeout. This works because a task scheduled with setTimeout only runs after the microtask queue, which holds the promise callbacks, has been emptied.
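A tiny standalone illustration of that ordering, with no test libraries involved:
// Macrotask vs. microtask ordering that makes both approaches above work.
Promise.resolve().then(() => console.log('promise callback (microtask)'))
setTimeout(() => console.log('setTimeout callback (macrotask)'))
console.log('synchronous code')

// Logged order:
// synchronous code
// promise callback (microtask)
// setTimeout callback (macrotask)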