I have a bunch of HTML5 canvas tests in my library which I run via Karma and Jasmine. If I detect differences in my tests, I show the canvas DOM elements with my generated image output and a diff on the browser page. But when I run my tests in Chrome Headless and/or in my CI environment, I have no way of checking the test results if they fail. And that's currently the issue I'm facing: when I run the tests with a UI they are green, but if I run them in Chrome Headless they fail, and I have no chance of checking my visual output.
As a solution I'd like to write my generated images to disk. On local runs I can then check this folder to see what happened, and on CI I can publish the result images as artifacts. But here is the problem: Karma and Jasmine seem to have no proper mechanism in place for this task, and I could not find any plugin tackling the challenge of properly accessing the local file system from your tests.
A tricky aspect is also that I cannot use promise (async/await) operations at the place where I want to save files: I am within a Jasmine CustomMatcher, which does not have promise support. I could try rewriting my tests again to handle the error reporting to Jasmine, but I would rather not.
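For context, the situation looks roughly like this (simplified sketch; toMatchCanvas and saveImageSync are illustrative names, not my real implementation — saveImageSync is the synchronous helper I would need, sketched further below):

```js
// Simplified sketch of the custom matcher (toMatchCanvas and saveImageSync
// are illustrative names, not the real implementation).
beforeEach(function () {
  jasmine.addMatchers({
    toMatchCanvas: function () {
      return {
        compare: function (actualCanvas, expectedDataUrl) {
          var pass = actualCanvas.toDataURL('image/png') === expectedDataUrl;
          if (!pass) {
            // This is the spot where the image needs to be persisted -
            // synchronously, because the matcher cannot await anything.
            saveImageSync('failed-test-case', actualCanvas);
          }
          return { pass: pass, message: 'Canvas output differs from the expected image' };
        }
      };
    }
  });
});
```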
My attempts so far:
I started with a custom Karma reporter listening to browser logs, with the idea of using this as a channel to hand the image bytes over to Node for writing to disk. But this additional plugin registration messed with my overall Karma configuration: Karma and Rollup stopped working once I registered my custom reporter, and I never found out whether such large amounts of bytes can even be transferred via this channel.
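What I had in mind was roughly the following (a minimal sketch; the plugin name and the IMAGE_DUMP marker are made up, and as said I never verified that large payloads survive this channel). The test would log a marked base64 string via console.log, and the reporter would pick it up in onBrowserLog and write it to disk:

```js
// karma-image-log-reporter.js (hypothetical plugin): picks up marked
// browser console logs and writes the base64 payload to disk on the Node side.
const fs = require('fs');
const path = require('path');

function ImageLogReporter(config) {
  const outDir = (config.imageLog && config.imageLog.outputDir) || 'test-images';
  fs.mkdirSync(outDir, { recursive: true });

  // Called for every console.* message coming from the browser.
  this.onBrowserLog = function (browser, log, type) {
    const match = /IMAGE_DUMP:([\w-]+)\|data:image\/png;base64,([A-Za-z0-9+/=]+)/.exec(String(log));
    if (!match) {
      return;
    }
    fs.writeFileSync(path.join(outDir, match[1] + '.png'), match[2], 'base64');
  };
}

ImageLogReporter.$inject = ['config'];

module.exports = {
  'reporter:image-log': ['type', ImageLogReporter]
};
```

The test would then call console.log('IMAGE_DUMP:' + name + '|' + canvas.toDataURL()), and karma.conf.js would register the plugin via plugins: [require('./karma-image-log-reporter'), ...] and reporters: [..., 'image-log'] — but this registration is exactly where my setup broke.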
Then I started on an API via karma-express-http-server where I "upload" the files to be saved. But I stopped halfway, as even this simple task seems to require a bunch of libraries and custom implementation to get a basic file upload running (karma-express-http-server, multer). I would also need to rely on synchronous Ajax calls here, which is not really future-proof.
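To illustrate what I mean: the server side would only need something like the sketch below (endpoint name and port 9877 are made up; no multer needed if the canvas is sent as a base64 data URL in a JSON body), plus a synchronous XHR in the matcher.

```js
// save-server.js (hypothetical): a small sidecar server started next to Karma.
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
app.use(express.json({ limit: '50mb' })); // canvas dumps can be large

// Karma serves the tests from another port, so allow cross-origin calls.
app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  next();
});
app.options('/save-image', (req, res) => res.sendStatus(204));

app.post('/save-image', (req, res) => {
  const base64 = req.body.dataUrl.replace(/^data:image\/png;base64,/, '');
  fs.mkdirSync('test-images', { recursive: true });
  fs.writeFileSync(path.join('test-images', req.body.name + '.png'), base64, 'base64');
  res.sendStatus(204);
});

app.listen(9877);
```

```js
// Browser side, inside the custom matcher: a synchronous XHR,
// since async/await is not an option at this point.
function saveImageSync(name, canvas) {
  const xhr = new XMLHttpRequest();
  xhr.open('POST', 'http://localhost:9877/save-image', false); // false = synchronous
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({ name: name, dataUrl: canvas.toDataURL('image/png') }));
}
```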
The Native File System API heavily relies on Promises so I cannot use it.
There must be a simpler way of just writing a file to disk as part of your tests when using Karma and Jasmine.
Related
We use some third party enterprise software ("Container App"), which has an embedded Chromium browser, in which our webapp runs.
We use Cypress to test our webapp in a stand-alone browser outside of this container; however, we would like to be able to test it inside as well, since the app interacts with the container in various ways through JavaScript.
The only thing the container exposes is a "remote devtools URL" to the target (our) browser, which can be pasted into a native browser outside of the container and then debugged in DevTools.
The container provides two different URLs for the above debugging purposes, and they both seem to work in the same way. They are something like the following (not precise, unfortunately I am not at work at the moment):
devtools://...inspector.html?id=xxx
http://ip/...inspector.html?id=xxx
Is it possible to set up Cypress to test "as normal", only having access to this remote devtools URL/port?
The target browser inside the container cannot be started by Cypress, as only the container can start and close it. So the target browser will already be running (with a --remote-debugging-port). I can get the devtools-id dynamically through a call to /json/list.
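For example, something like this returns the page targets and their debugger URLs (sketch; host and port are placeholders for the remote-debugging-port exposed by the container):

```js
// Sketch: query the DevTools endpoint exposed by the container.
const http = require('http');

http.get('http://127.0.0.1:9222/json/list', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    const targets = JSON.parse(body);
    // Each page target carries an id and a webSocketDebuggerUrl.
    targets
      .filter((t) => t.type === 'page')
      .forEach((t) => console.log(t.id, t.webSocketDebuggerUrl));
  });
});
```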
If not possible, any other way to achieve the goal of testing the browser/app running inside the container?
It is not possible. Testing a web page with Cypress inside the embedded Chromium running in your application would require Cypress to connect to an already running browser, and Cypress doesn't have that capability.
The documentation states:
When you run tests in Cypress, we launch a browser for you. This enables us to:
Create a clean, pristine testing environment.
Access the privileged browser APIs for automation.
There is a request in the Cypress issue tracker to add an option to connect to an already running browser, but there is no response to it from the Cypress developers.
I am trying to use Cypress for running some monitoring tests on production. I am also using a snapshot-match plugin to compare screenshots.
I just want to know: is this safe to do?
I am not using any dashboard services from Cypress, just running tests on our local machines. Will Cypress send any info outside our network?
Cypress doesn't send anything to Cypress's servers unless you specifically configure it to - it's safe.
The only other thing is that, by default, Cypress will send crash reports (when Cypress itself crashes) to be analyzed. You can turn this off by following the instructions in the Cypress documentation.
I've been asking myself whether there is an easy way of debugging the JavaScript code of the transactions. JS already has mature debuggers; it is only a question of how to easily bind one to the code running in the container. Does anyone have a clue? Thanks.
One of the easiest ways to debug your transaction code is to deploy your business network into an embedded fabric, which basically means that your code runs like any other Node.js app: you can use the Node debugger to step through your code, or even simple console.log statements if that suffices.
To get an insight into how to achieve this, have a look at the code here:
https://github.com/hyperledger/composer-sample-networks/blob/master/packages/carauction-network/test/CarAuction.js#L31-L49
This is the beforeEach method of a unit test for the sample network, and as you'll see, it deploys the network to the 'embedded' fabric.
The code then goes on to perform tests that include calling the submitTransaction API on the embedded businessNetworkConnection, which in turn causes the transaction script code to be eval'ed by the embedded fabric.
So it's all happening within a single Node app and is much easier to debug.
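For example (a sketch based on the car-auction sample; the namespace and property names are only approximations of the sample's model), a console.log placed in a transaction processor function shows up directly in the output of the Node process running the tests, and a breakpoint there is reachable when mocha runs under node --inspect-brk:

```js
/**
 * Transaction processor function from the sample network (sketch only;
 * namespace and property names approximate the sample's model).
 * @param {org.acme.vehicle.auction.CloseBidding} closeBidding - the transaction
 * @transaction
 */
function closeBidding(closeBidding) {
  // With the embedded fabric this runs in the same Node process as the test,
  // so the log appears in the mocha output and breakpoints work here.
  console.log('closeBidding called for listing ' + closeBidding.listing.listingId);
  // ... the sample's auction logic continues here ...
}
```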
HTH
I'm trying to work out which lazy-loading techniques and server setup let me serve a page the quickest. Currently I'm using this workflow:
Test ping and download speed
Open the QuickTime screen recorder (so I can review the network tab and load times if there are any anomalies in the data, to see what caused them)
Open a new incognito tab with cache disabled and the network tab open
Load the website
Save the screencast
Log ping, download speed, time, date, commit version from Git, and website loading time into a spreadsheet
After I have another test run I can review the spreadsheet and make a quantified decision on what works.
Running this workflow currently takes about 4 minutes each time I run it (I'm doing all of this manually; generally I run the same test a couple of times to get an average and then change the variables: image-loading JS script tweaks, trying it on different VPSs, trying it with/without a CDN to allow sharding, etc.).
Is there an automated approach to doing this? I guess I could set up a Selenium script and run the tests (see the sketch below), but I was wondering if there was an off-the-shelf solution?
Ideally I would be able to say "test it with the following Git commits" (although I would have to do the server config changes manually), but it would be even quicker if I could automate the running, screencasts, and logging of the tests.
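Something like the sketch below is what I imagine for the Selenium route (Node + selenium-webdriver; the URL, the CSV file name, and the GIT_COMMIT variable are placeholders), but I would prefer something off the shelf if it exists:

```js
// Rough sketch: load the page, read Navigation Timing, append a row to a CSV.
const fs = require('fs');
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

(async () => {
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(new chrome.Options().addArguments('--incognito'))
    .build();
  try {
    await driver.get('https://example.com'); // the page under test
    await driver.sleep(1000); // give loadEventEnd a moment to be populated
    const loadMs = await driver.executeScript(
      'return performance.timing.loadEventEnd - performance.timing.navigationStart;'
    );
    const row = [new Date().toISOString(), process.env.GIT_COMMIT || 'n/a', loadMs].join(',');
    fs.appendFileSync('load-times.csv', row + '\n'); // spreadsheet-friendly log
    console.log('page load (ms):', loadMs);
  } finally {
    await driver.quit();
  }
})();
```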
Newbie to Visual Studio load testing
When I record a web performance test I get a list of requests; when I run this test, some of the 'requests' expand to show more.
Q: Why can I not see all of these requests in the recorded list?
If I run the same actions with a tool like LoadUIWeb I see all these requests in the recording, none hidden.
I want to be able to test the website with and without calls to external sites like Google Analytics, etc.
From googling, it appears that the only way around this is to write a plugin.
I'm surprised that I'd have to do this, as other tools show all requests in the recorded script. I want to ensure there isn't something I'm missing...
Thanks
The "expanded request" or "child requests" are actually dependent requests. If your site has dependencies to other resources (e.g. images, scripts, css files, etc...) it will result on additional HTTP requests made to obtain these resources (assuming they are not cached locally). VS will simulate this behavior by analyzing the body and issuing dependent requests.
The reason they are not recorded is because they may change.
The following question provides very useful information about the behavior of dependent requests: How can I turn off request caching in a Visual Studio load test