Cypress - first test randomly fails with "Invalid or unexpected token" - cypress

Recently switched to using Cypress parallel for our Angular project in our pipeline. We run on AWS CodeBuild with 5 parallel threads of the Cypress runner. About a quarter of the time, the first test on one of the threads fails with this error:
An uncaught error was detected outside of a test
Invalid or unexpected token
This error originated from your test code, not from Cypress.
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test. We dynamically generated a new test to display this failure.
I've tried many things to fix this, including setting modifyObstructiveCode to false, setting chromeWebSecurity to false, and upgrading Cypress. We are already catching uncaught exceptions, so that doesn't seem like it should be the issue. I turned on some extra logging for this; here is the output:
[3] 2020-03-06T19:57:20.369Z cypress:server:project onMocha start
[3] 2020-03-06T19:57:20.369Z cypress:server:reporter got mocha event 'start' with args: [ { start: '2020-03-06T19:57:20.366Z' } ]
[3] 2020-03-06T19:57:20.374Z cypress:server:project onMocha suite
[3] 2020-03-06T19:57:20.374Z cypress:server:reporter got mocha event 'suite' with args: [ { id: 'r1', title: '', root: true, type: 'suite', file: 'cypress/integration/ci-tests/content-acquisition/channels/channel-manual-upload-run-acquired-items-tab.spec.ts' } ]
[3]
[3] 2020-03-06T19:57:20.390Z cypress:server:project onMocha test
[3] 2020-03-06T19:57:20.391Z cypress:server:reporter got mocha event 'test' with args: [ { id: 'r2', title: 'An uncaught error was detected outside of a test', body: 'function throwErr() {\n throw err;\n }', type: 'test' } ]
[3] 2020-03-06T19:57:20.555Z cypress:server:reporter got mocha event 'fail' with args: [ { id: 'r2', title: 'An uncaught error was detected outside of a test', err: { message: 'Unexpected end of input\n' + '\n' + 'This error originated from your test code, not from Cypress.\n' + '\n' + 'When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.\n' + '\n' + 'Cypress could not associate this error to any specific test.\n' + '\n' + 'We dynamically generated a new test to display this failure.', name: 'Uncaught SyntaxError', stack: 'Uncaught SyntaxError: Unexpected end of input\n' + '\n' + 'This error originated from your test code, not from Cypress.\n' + '\n' + 'When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.\n' + '\n' + 'Cypress could not associate this error to any specific test.\n' + '\n' + 'We dynamically generated a new test to display this failure.' }, state: 'failed', body: 'function throwErr() {\n throw err;\n }', type: 'test', duration: 179, wallClockStartedAt: '2020-03-06T19:57:20.374Z', timings: { lifecycle: 26, test: [Object] } } ]
I couldn't really make anything of these errors, but maybe someone else can. I'm kind of out of ideas on what to try (I've tried more things today than I've listed but can't recall them all). Any ideas?
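For reference, the uncaught-exception handling mentioned above lives in the support file and looks roughly like this (a minimal sketch, not our exact code):
// cypress/support/index.js - minimal sketch of swallowing uncaught app exceptions
Cypress.on('uncaught:exception', (err, runnable) => {
  // returning false prevents Cypress from failing the test
  // when the application under test throws an uncaught exception
  return false;
});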

As setting modifyObstructiveCode to false didn't help you, like the folks in https://github.com/cypress-io/cypress/issues/6132, I can share the debugging procedure I used when I encountered a similar flaky "unexpected .." error with Cypress:
cypress run has a burn= param that lets you run the same spec repeatedly. Enable .har output recording for those runs with the cypress-har-generator plugin.
Once you have a group of successful and a group of failing example .har files for the same request, open them in a browser and compare whether anything stands out.
I used diff plus jq queries on the .har files to compare the content per request path between the two groups, but simply opening a failing .har in the browser inspector's network tab already showed a 30 s processing time for a .js path that was ultimately incomplete and thus violated JS syntax, causing an "unexpected end of input" error, similar to your "unexpected token".
Interestingly, this happened to the same file at the same code line every time, hinting at a parsing problem in Cypress.
We exchanged that dependency (specifically, we updated it and changed how it was webpacked), Cypress stopped hiccuping on the resource, and the flakiness disappeared.
My impression is that running parallel threads of Cypress contributes to the problem occurring.
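If you want to compare the captures programmatically rather than eyeballing them, a rough Node sketch along the lines below can surface slow or truncated resources between a passing and a failing run (file names and the time threshold are placeholders; it is just an alternative to the diff + jq queries mentioned above):
// compare-har.js - sketch: flag entries that differ between a passing and a failing HAR
const fs = require('fs');
function summarize(path) {
  const har = JSON.parse(fs.readFileSync(path, 'utf8'));
  return har.log.entries.map(e => ({
    url: e.request.url,
    status: e.response.status,
    time: Math.round(e.time),
    bodySize: e.response.content && e.response.content.size,
  }));
}
const pass = summarize('passing.har'); // placeholder file names
const fail = summarize('failing.har');
for (const p of pass) {
  const f = fail.find(e => e.url === p.url);
  if (!f) continue;
  // flag status changes, truncated bodies, or requests that got much slower
  if (f.status !== p.status || f.bodySize !== p.bodySize || f.time > 10 * p.time) {
    console.log(p.url, { pass: p, fail: f });
  }
}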

Related

How to raise timeout error in unittesting

This is the first time I am touching Ruby, so I'm not sure about the correct terminology. I have tried searching for multiple things, but couldn't find a solution.
I have this code block
domain_response = MyDomain::Api::MyApi::Api.new(parameters: message.to_domain_object, timeout: 1000)
# :nocov:
case (response = domain_response.response)
when MyDomain::Api::MyApi::SuccessResponse
  ## do something
when Domain::ErrorResponses::TimeoutResponse
  ## do something.
Now I am trying to test TimeoutResponse. I have written (tried) this:
it "when api call timesout" do
expect(MyDomain::Api::MyApi::Api).to{
receive(:new)
} raise_error(MyDomain::ErrorResponses::TimeoutResponse)
end
This gave me an "unexpected identifier" error.
I have also tried without providing receive, and it gave me an error that a block is expected.
What's the proper way to raise an error that I can test?
Update:
Here is where I am stuck now:
it "when api call timesout" do
# 1
expect(MyDomain::Api::MyApi::Api).to(
receive(:new),
).and_return(domain_api_instance)
# 2
expect(domain_api_instance.response).to receive(:response).and_raise(Domain::ErrorResponses::TimeoutResponse)
expect(domain_api_instance.response).to eq(ApiError::Timeout)
end
But with this code I am getting this error:
1) Rpc::Package::SubPackage::V1::PackageService#first_test testing when api call timesout
Failure/Error: expect(domain_api_instance.response).to receive(:response).and_raise(Domain::ErrorResponses::TimeoutResponse)
#<InstanceDouble(MyDomain::Api::MyApi::Api) (anonymous)> received unexpected message :response with (no args)

Terminate / Skip / Stop all tests from all spec files if any one test fails in cypress

I am trying to skip all other tests from all spec files if one test fails, and found a working solution here: Is there a reliable way to have Cypress exit as soon as a test fails?. However, this only seems to work if the test fails in it() assertions. How can we skip the tests if something fails in beforeEach()?
For example:
before(() => {
  cy.get('[data-name="email-input"]').type(email);
  cy.get('[data-name="password-input"]').type(email);
  cy.get('[data-name="account-save-btn"]').click();
});
And if something goes wrong in the above code (e.g. CypressError: Timed out retrying: Expected to find element: '[data-name="email-input"]', but never found it.), then stop/skip all tests in all spec files.
Just in case anyone is looking for an answer to the same question: I have found a solution and would like to share it.
To implement the solution I use a cookie that is set to true if something fails; before executing each test, Cypress checks the value of the cookie. If the cookie's value is true, the remaining tests are skipped.
Cypress.on('fail', error => {
  document.cookie = "shouldSkip=true";
  throw error;
});

function stopTests() {
  cy.getCookie('shouldSkip').then(cookie => {
    if (cookie && typeof cookie === 'object' && cookie.value === 'true') {
      Cypress.runner.stop();
    }
  });
}

beforeEach(stopTests);
Also note: tests should be written in it() blocks; avoid putting test logic in before().
As of Cypress 10, tests don't run if a before or beforeEach hook fails.

How do I make sure that my hasura actions are ready to be used for my ci / cd tests?

I have started building up a backend with hasura. That backend is validated on my CI / CD service with api tests, among other things.
Within my hasura backend, I have implemented openfaas functions. I am deploying everything on a kubernetes cluster. Before running the tests, I wait until all jobs and all deployments are done. I am deploying with devspace, which deploys everything through helm charts. So, at the end of the deployment, I am dead sure the deployments are all done (ultimately, I've checked directly on the k8s cluster). Even the openfaas functions are deployed and ready to use.
Yet, when I run my acceptance tests, I run into issues. If I don't wait long enough, my actions don't work properly. They return some strange errors, e.g. that the response returned invalid JSON:
Error: GraphQL error: not a valid json response from webhook
or that the mutation is not in the mutation root:
Error: GraphQL error: field "login" not found in type: 'mutation_root'
However, the openfaas functions themselves log only success. There is no error there. They are called and they apparently throw no error.
Waiting 3-5 minutes after hasura deployment or trying to call the actions until they return something relevant works fine, however. My current work-around is to wait an additional 5 minutes after my deployments have been done and only then run my api tests.
Is that normal? Is there a more efficient way to get feedback on when hasura really is ready to accept calls to its actions? I am currently working with version 1.2.1.
EDIT
After re-verification, waiting "long enough" does not help. What does help is calling some actions until they return a successful answer. Currently, what I am doing is:
#! /bin/sh
if [ "$#" -lt "3" ] ; then
  echo "Usage: $0 <hasura-endpoint> <profile> <auth-app-id> [<timeout-in-sec> <deltat-in-sec>]"
  exit 1
fi
ENDPOINT=$1
PROFILE=$2
AUTH_APP_ID=$3
TIMEOUT=${4:-300}
DELTA_T=${5:-5}
FIXTURES_FILE=./shared/fixtures/${PROFILE}/database/Users/auth.json
username=$(jq -r '.[1].email' $FIXTURES_FILE)
password=$(jq -r '.[1].password' $FIXTURES_FILE)
user_id=$(jq -r '.[1].id' $FIXTURES_FILE)
echo "Trying to login with $username / $password / $AUTH_APP_ID"
for iteration in `seq 1 $TIMEOUT`; do
  result=$(gq $ENDPOINT -q 'mutation($username: String!, $password: String!, $appId: uuid!) { login(username: $username, password: $password, appId: $appId) { userId }}' -v "username=$username" -v "password=$password" -v "appId=$AUTH_APP_ID" | jq -r '.data.login.userId')
  if [ "$result" == "$user_id" ] ; then
    exit 0
  else
    sleep $DELTA_T
  fi
done
echo "Hasura actions availability timed out" && exit 1
This performs logins with valid credentials until the action returns the right user id instead of an error. The log of this script on my CI/CD looks something like:
$ ./scripts/login_until_it_works.sh ${API_ENDPOINT}/v1/graphql $PROFILE $AUTH_ADMIN_APP_ID
Trying to login with nathalie.droz#test-vtxnet.ch / yl2YOuSrz_ / [MASKED]
Executing query... error
Error: ApolloError: GraphQL error: not a valid json response from webhook
at new ApolloError (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:92:26)
at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1297:31)
at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3)
at SubscriptionObserver.next (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:235:7)
at /usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1102:36
at Set.forEach (<anonymous>)
at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1101:21)
at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3) {
graphQLErrors: [
{
extensions: [Object],
message: 'not a valid json response from webhook'
}
],
networkError: null,
message: 'GraphQL error: not a valid json response from webhook',
extraInfo: undefined
}
Executing query... done
Notice that the second query, 5 seconds after the first, is successful. My action is defined as follows:
- args:
enums: []
input_objects: []
objects:
- description: null
fields:
- description: null
name: token
type: String!
- description: null
name: refreshToken
type: String!
- description: null
name: userId
type: uuid!
name: LoginResponse
scalars: []
type: set_custom_types
- args:
comment: null
definition:
arguments:
- description: null
name: username
type: String!
- description: null
name: password
type: String!
- description: null
name: appId
type: uuid!
forward_client_headers: false
handler: http://gateway.openfaas:8080/function/login.{{FUNCTION_NAMESPACE}}
headers: []
kind: synchronous
output_type: LoginResponse
type: mutation
name: login
type: create_action
- args:
action: login
definition:
select:
filter: {}
role: incognito
type: create_action_permission
When you deploy via Helm, it creates the Deployments and everything else you've defined and tells you it's done. That doesn't mean that whatever you deployed is ready to serve requests. That's because each service may have its own boot time, especially services that advertise high availability.
Kubernetes is designed to address this issue with the help of "liveness/readiness probes". Basically, in your yaml/helm files you instruct K8s what it needs to check before it reports that a pod is ready. This could be, for example, a 200 HTTP status code from a /live endpoint in your app.
Check this out: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
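A minimal sketch of what such a readiness probe could look like for the Hasura container, assuming your chart lets you add probes to the pod spec (Hasura exposes a /healthz endpoint; the port and timings below are illustrative):
# deployment pod spec excerpt - illustrative values only
containers:
  - name: graphql-engine
    image: hasura/graphql-engine:v1.2.1
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz      # Hasura's health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 6
Note that this only tells you when the graphql-engine pod itself is ready; if the actions' webhooks (the openfaas functions here) come up later, a probe on those deployments, or a poll like the script above, is still needed.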

Always some test cases getting jasmine.DEFAULT_TIMEOUT_INTERVAL

I am creating end-to-end (e2e) tests using Protractor with Jasmine and Angular 6. I have written almost 10 test cases. They all work, but some cases always fail, and they fail because of the Jasmine timeout. I have configured the timeout value as shown below, but I am not getting consistent results: a test case succeeds on one run and fails on the next. I have searched on Google but have not found any useful solution.
I have defined some common helpers for waiting:
waitForElement(element: ElementFinder) {
  browser.waitForAngularEnabled(false);
  browser.wait(() => element.isPresent(), 100000, 'timeout: ');
}

waitForUrl(url: string) {
  browser.wait(() => protractor.ExpectedConditions.urlContains(url), 100000, 'timeout')
}
And in the protractor.conf.js file I have defined this:
jasmineNodeOpts: {
  showColors: true,
  includeStackTrace: true,
  defaultTimeoutInterval: 20000,
  print: function () {
  }
}
I am getting the error below:
- Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
- Failed: stale element reference: element is not attached to the page document
(Session info: chrome=76.0.3809.100)
(Driver info: chromedriver=76.0.3809.12 (220b19a666554bdcac56dff9ffd44c300842c933-refs/branch-heads/3809#{#83}),platform=Windows NT 10.0.17134 x86_64)
I found the solution:
I had configured a wait timeout of 100000 ms for individual element lookups while the whole spec timeout was only 20000 ms. So I followed this process:
The full spec timeout must not be lower than the sum of the individual element-wait timeouts. I configured defaultTimeoutInterval in jasmineNodeOpts to be greater than the sum of all the waits in a test case, and then added a large value, allScriptsTimeout: 2000000, inside exports.config. That resolved my problem.
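For reference, a minimal sketch of how those two settings sit together in the config (the defaultTimeoutInterval value is illustrative; it just needs to exceed the sum of the explicit waits in a single test case):
// protractor.conf.js - sketch of the relevant timeout settings
exports.config = {
  // overall timeout for scripts executed in the browser
  allScriptsTimeout: 2000000,
  jasmineNodeOpts: {
    showColors: true,
    includeStackTrace: true,
    // must be larger than the sum of the browser.wait timeouts inside one test
    defaultTimeoutInterval: 300000,
    print: function () {}
  }
};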
NB: I gave this answer because I think it may help others who will face this kind of problem.

Parse.com CloudCode beforeSave trigger errors

Context: parse.com Cloud Code, executing a beforeSave() trigger on a data object update.
I'm getting a rash of unexplained errors in a simple trigger function (code below). The errors are mostly
Result: Uncaught undefined
but there are also several
Result: Execution timed out
The code is simple; I have checked it to the point where I don't think the error is due to the code, but I'm including it anyway since I know people will ask to see it.
This seems (seems!) to be an issue in Parse itself, which I've been unable to solve.
Parse.Cloud.beforeSave("LogoItem", function(request, response) {
  var name = request.object.get("productName");
  if (typeof name == "undefined") {
    name = "";
  }
  request.object.set("lowercaseName", name.toLowerCase());
  response.success(); // Tells parse not to cancel save
});
My question is, has anyone seen this and been able to get any handle on a solution?
Below is some additional detail (logs from Parse) that likely won't be too useful (but you never know)...
E2016-03-14T14:12:26.306Z] - v643 Ran job ShopSenseJob with:
Input: {"plan":"paid"}
Result: ERROR: startDownloadItems() failed with error [undefined: undefined] undefined: undefined
E2016-03-14T14:11:01.536Z] - v643 before_save triggered for LogoItem as master:
Input: {"original":{"activeURL":"http://api.shopstyle.com/action/apiVisitRetailer?id=486671765\u0026pid=uid4009-26060253-59","category":5,"createdAt":"2016-03-14T13:41:07.487Z","imageURL":"https://resources.shopstyle.com/pim/aa/92/aa9286b799d7640f276657fa5e41ee92_best.jpg","importMarkerTag":6624,"lowercaseName":"mid rise skinny with knee holes in marie vintage blue","objectId":"8o9e8nMF7m","productName":"Mid Rise Skinny With Knee Holes In Marie Vintage Blue","referenceURL":"http://www.shopstyle.com/p/7-for-all-mankind-mid-rise-skinny-with-knee-holes-in-marie-vintage-blue/486671765?pid=uid4009-26060253-59","ssBrandId":3,"ssBrandName":"7 For All Mankind","ssDate":{"__type":"Date","iso":"2014-12-05T00:00:00.000Z"},"ssId":486671765,"ssRetailerId":193,"thumbnailURL":"https://resources.shopstyle.com/pim/aa/92/aa9286b799d7640f276657fa5e41ee92_best.jpg","updatedAt":"2016-03-14T13:41:07.487Z"},"update":{"importMarkerTag":1556}}
Result: Execution timed out
E2016-03-14T14:10:21.498Z] - v643 Ran job ShopSenseJob with:
Input: {}
Result: ERROR: startDownloadItems() failed with error [undefined: undefined] undefined: undefined
E2016-03-14T14:08:18.638Z] - v643 before_save triggered for LogoItem as master:
Input: {"original":{"activeURL":"http://api.shopstyle.com/action/apiVisitRetailer?id=490062821\u0026pid=uid4009-26060253-59","category":2,"createdAt":"2016-03-14T13:40:12.971Z","imageURL":"https://resources.shopstyle.com/pim/20/cb/20cb7387834dd02ac9d950bde0f57b83_best.jpg","importMarkerTag":1556,"lowercaseName":"baggu basic tote in brown","objectId":"I4UTiUMnUG","productName":"Baggu Basic Tote In Brown","referenceURL":"http://www.shopstyle.com/p/baggu-basic-tote-in-brown/490062821?pid=uid4009-26060253-59","ssBrandId":-1,"ssBrandName":"","ssDate":{"__type":"Date","iso":"2014-12-03T00:00:00.000Z"},"ssId":490062821,"ssRetailerId":193,"thumbnailURL":"https://resources.shopstyle.com/pim/20/cb/20cb7387834dd02ac9d950bde0f57b83_best.jpg","updatedAt":"2016-03-14T14:06:12.708Z"},"update":{"importMarkerTag":1556}}
Result: Uncaught undefined
E2016-03-14T14:08:18.225Z] - v643 Ran job ShopSenseJob with:
Input: {}
Result: ERROR: startDownloadItems() failed with error [undefined: undefined] undefined: undefined
E2016-03-14T14:06:12.824Z] - v643 before_save triggered for LogoItem as master:
Input: {"original":{"activeURL":"http://api.shopstyle.com/action/apiVisitRetailer?id=489915836\u0026pid=uid4009-26060253-59","category":2,"createdAt":"2016-03-14T13:40:13.193Z","imageURL":"https://resources.shopstyle.com/pim/14/ef/14ef17ff769227c4293892e770fcf50a_best.jpg","importMarkerTag":1556,"lowercaseName":"basic tote in black","objectId":"JQ7GuRjfgH","productName":"Basic Tote In Black","referenceURL":"http://www.shopstyle.com/p/baggu-basic-tote-in-black/489915836?pid=uid4009-26060253-59","ssBrandId":25820,"ssBrandName":"Baggu","ssDate":{"__type":"Date","iso":"2014-12-02T00:00:00.000Z"},"ssId":489915836,"ssRetailerId":193,"thumbnailURL":"https://resources.shopstyle.com/pim/14/ef/14ef17ff769227c4293892e770fcf50a_best.jpg","updatedAt":"2016-03-14T14:06:10.013Z"},"update":{"importMarkerTag":1556}}
Result: Execution timed out
E2016-03-14T14:05:23.844Z] - v643 Ran job ShopSenseJob with:
Input: {}
Result: ERROR: startDownloadItems() failed with error [undefined: undefined] undefined: undefined
