Currently I am using Nightwatch.js with ChromeDriver to perform e2e tests of my Vuetify app. However, the test results are non-deterministic.
I often get errors like Timed out while waiting for element <.menuable__content__active> to be present for 5000 milliseconds when running waitForElementVisible('.menuable__content__active', 5000) right after click('.v-select'), whereas sometimes the same test passes.
There must be a simpler way to select an item in <v-select> than clicking on it, waiting for .menuable__content__active to appear, and clicking on .menuable__content__active .v-list__tile--link. The same goes for <v-menu>, <v-autocomplete>, <v-date-picker>, etc.
Other times running click('#myid .v-btn') does not work, but execute('document.querySelector("#myid .v-btn").click()') does.
What is the proper way to do deterministic e2e testing of Vuetify apps with a lot of dynamic components?
I managed to successfully e2e-test Vuetify by using Cypress instead of Nightwatch.js: Cypress implicitly waits for elements to appear, retrying cy.get() until it succeeds, and its snapshots between test steps are really useful for debugging.
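For illustration, a minimal sketch of the question's v-select flow in Cypress; the route is an assumption and the selectors are taken from the question:

describe('v-select', function() {
  it('selects the first item', function() {
    cy.visit('/'); // assumed route
    cy.get('.v-select').click(); // open the dropdown
    // cy.get() retries automatically until the menu appears
    cy.get('.menuable__content__active .v-list__tile--link')
      .first()
      .click();
  });
});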
I am using Inertia and would like to run some tests to check whether the response contains a certain string.
->assertInertia(
fn (AssertableInertia $page) => $page->component('UsersPage')->has('profile')->dd('profile.0.buttons')
);
So the above works and I can dump profile.0.buttons and see the string I want to check for, but how do I automatically test that this string exists? In normal unit tests I'd use assertSee; whereContains also doesn't work here.
I think that the Inertia page about testing does a great job of summing up the testing options for Laravel/Inertia.
Endpoint tests (assertInertia) are feature tests; you can use them to check whether a controller is sending the right components and data to Inertia.
Your question goes more in the direction of "client-side unit tests", e.g. Jest, where you send some data to a React/Vue component and see how that data has been rendered.
There are also "end-to-end tests": Cypress is great but lacks nice integration with setting up the Laravel environment and seeders in tests.
That leaves us with Laravel Dusk. I love this tool because it gives us the best of both worlds (backend and frontend).
You can set up your test with seeders or model factories, and in the same test you can fire up a virtual browser and see how Inertia rendered the page. The best thing is that you can use the helpers for typing and clicking, so you can really test your app and how it behaves.
I wanted to know if JMeter has an option to wait until some element disappears.
For example, a loading bar: only once it has completed or is no longer visible should the test carry on. (I would also like to be able to monitor the length of time this takes.)
I have thought about writing it as a WebDriver test and then running it as a JUnit test in JMeter, but wanted to know if there is a simpler solution.
Any ideas welcome :)
First of all, you need to realize that JMeter is not a browser:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
So JMeter doesn't execute any client-side JavaScript; the only way of implementing a "wait until" option is to use a While Controller to re-execute the same request again and again until the response data contains (or stops containing) the element you're looking for.
If you need to evaluate client-side JavaScript, the only option is going for Selenium. I would recommend using the WebDriver Sampler instead of going for JUnit, as this way you won't have to recompile your script for every change; it will be inlined into the .jmx (see the sketch below).
You can use a Transaction Controller to monitor the time taken by the whole process, and to wait for a change, have a look at this:
http://www.sourcepole.ch/2011/1/4/waiting-for-a-page-change-in-jmeter
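For illustration, a hedged sketch of a WebDriver Sampler script (script language "javascript") that waits for a loading bar to disappear; the URL and the .loading-bar selector are assumptions, and the sampler's elapsed time tells you how long the bar stayed visible:

var pkg = JavaImporter(org.openqa.selenium, org.openqa.selenium.support.ui)

WDS.sampleResult.sampleStart() // start the timer
WDS.browser.get('http://example.com/page-under-test') // assumed URL
// wait up to 30 seconds for the loading bar to disappear
var wait = new pkg.WebDriverWait(WDS.browser, 30)
wait.until(pkg.ExpectedConditions.invisibilityOfElementLocated(pkg.By.cssSelector('.loading-bar')))
WDS.sampleResult.sampleEnd() // the sample time now equals the wait duration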
I have a script that uses Capybara to publish links on Google+. I would like to have tests to cover this functionality. Usually Capybara is used as a tool for writing integration tests; in my case I need to test the Capybara script itself.
I see 3 possible ways:
stub Capybara's methods (but in this case I test nothing but the stubbed methods)
test Capybara against a saved HTML/JS page (that will help me verify that I did not break anything during refactoring)
do not test at all (no comments here)
Have you ever faced such a problem?
If you register different drivers for your app and your test code, possibly manage the sessions manually (depending on how you're using Capybara in your app), and make sure you're careful with Capybara's settings, you should be able to go with option 2. You have to be careful with the settings because most of them are global, so changing them for your tests will also change them for your app.
I'm familiar with Python unittest tests, where if an assertion fails, that test is marked as "failed" and it moves on to other tests. Jasmine, on the other hand, will continue through all expects even if one of them fails. How can I make Jasmine stop processing a test after the first expectation fails?
it ("shouldn't need to test other expects if the first fails", function() {
expect(array.length).toBe(1);
// don't need to check this if the first failed.
expect(array[0]).toBe("foo");
});
Am I thinking about this wrong? I have some tests with lots of expects, and it seems like a waste to show all the stack traces when really only the first one is wrong.
@Gregg's answer was correct for the latest version of Jasmine at that time (v2.0.0).
However, since then, this new feature was added in v2.3.0:
Allow user to stop a specs execution when an expectation fails (Fixes #577)
It's activated by adding throwFailures=true to the query string of the runner page, e.g.:
http://localhost:8000/?throwFailures=true
Jasmine doesn't support failing early in a single spec. The idea is to give you all of the failures, in case that helps you figure out what is really wrong in your spec.
Jasmine has a stop-on-failure feature, and you can check it here:
https://plnkr.co/plunk/Ko5m6MQz9VUPMMrC
This starts Jasmine with the oneFailurePerSpec property enabled.
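For reference, a hedged sketch of enabling the same property programmatically, assuming a Jasmine version where env.configure is available (roughly 3.3+); run it in a helper file before the specs:

jasmine.getEnv().configure({ oneFailurePerSpec: true }); // stop each spec at its first failed expectation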
According to the comments on https://github.com/jasmine/jasmine/issues/414, I figured out that 2 solutions exist for this:
https://github.com/radialanalytics/protractor-jasmine2-fail-whale
https://github.com/Updater/jasmine-fail-fast
I just started to use protractor-jasmine2-fail-whale because it seems to have more features, although to take screenshots in case of test failures I currently use protractor-jasmine2-html-reporter.
I'm using Jasmine in Appium (a tool to test React Native apps).
I fixed the issue by adding stopSpecOnExpectationFailure=true to my Jasmine config.
jasmine v3.8.0 & jasmine-core v3.8.0
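For reference, a hedged sketch of the equivalent setup through the jasmine npm runner's programmatic API; the spec paths are placeholders:

var Jasmine = require('jasmine');
var runner = new Jasmine();
runner.loadConfig({
  spec_dir: 'spec', // placeholder
  spec_files: ['**/*[sS]pec.js'], // placeholder
  stopSpecOnExpectationFailure: true // fail the spec at the first failed expect
});
runner.execute();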
So let's say I am running a TestComplete keyword test. If something fails in it, the test stops. Actually, what I have found out is that if I have 8 checkpoints and the 4th one fails, the rest will always fail after it, so I get a "test execution was interrupted" error. That's fine, but it doesn't finish the test and close the application. This is an issue because any tests after it will fail, since the application is still left open. I could rewrite these tests so that the application is opened when they start, but is there a way to kill an application after your test fails? If the tests pass, the application is closed.
You need to organize your tests with test items. In this case, you create at least 3 test items: the first one starts the application, the second performs the test, and the third closes the application. If an error occurs during execution of the second test item, its execution ends and TestComplete runs the third (finalization) test item.
Information on test items can be found in the Tests and Test Items help topic. Please note that you need to specify the Test Item value in the Stop on error column for the needed test item (the second one in the above example). Information on this and other columns can be found here. The column is hidden by default; to add it, right-click the header of the test item list, select Field Chooser, and drag the needed column from the Field Chooser dialog to the header.
Find more information on this solution in Stopping Tests on Errors and Exceptions.
An alternative solution is to use the OnLogError or OnStopTest event handlers. A description of how to handle standard TestComplete events can be found in the Creating Event Handlers for TestComplete Events help topic.
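For illustration, a hedged sketch of such an OnLogError handler as a TestComplete JavaScript script unit; "MyApp" is a placeholder process name:

function GeneralEvents_OnLogError(Sender, LogParams)
{
  // If the tested application survived the failure, kill it so the
  // following tests start from a clean state. "MyApp" is a placeholder.
  var proc = Sys.WaitProcess("MyApp", 1000);
  if (proc.Exists)
    proc.Terminate();
}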
Perhaps I'm oversimplifying, but could it be the setting for the test playback? Please check the following page and let me know if it helps: http://support.smartbear.com/viewarticle/28751/.
If that doesn't work feel free to repost in the SmartBear Forum: http://community.smartbear.com/
The support team is monitoring the forum and I'm sure they'll be happy to help.