I'm writing Protractor e2e tests and use browser.pause() to enter the debugger. I appreciate the interactive mode, which is helpful when developing a new test.
However, when I spend too much time in the debugger, the test gets interrupted because the timeout is exceeded:
Error: timeout of 240000ms exceeded
I can easily fix that by increasing mochaOpts.timeout in my Protractor configuration, but I don't like changing it back and forth depending on whether I'm debugging or not.
Is there a better way?
For anyone who reads this hoping for a way to raise the timeout when using Jasmine: you can put this in your individual spec files:
jasmine.DEFAULT_TIMEOUT_INTERVAL = 120000; // whatever time you need
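To show the placement, here is a minimal sketch assuming a Jasmine-based spec file; the suite and spec names are made up:

jasmine.DEFAULT_TIMEOUT_INTERVAL = 120000; // whatever time you need

describe('slow checkout flow', function () {
  it('survives long debugging pauses', function () {
    // every spec that runs after the assignment above gets the 120 s budget
  });
});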
I found the answer here: https://stackoverflow.com/a/23492442/4358405
Adding this.timeout(10000000); in the test does the trick.
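For placement, a minimal sketch assuming a Mocha-based Protractor spec (note the regular function rather than an arrow function, so that this refers to Mocha's context; the names are illustrative):

describe('my e2e suite', function () {
  this.timeout(10000000); // large enough that time spent in browser.pause() doesn't abort the run

  it('can be debugged interactively', function () {
    browser.pause(); // sit in the debugger without tripping the Mocha timeout
  });
});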
When I run JMeter from the Windows CLI, after some random time the test stops or gets stuck. I can press Ctrl+C (once) just to refresh the run, but part of the requests are lost during the time it was stuck.
Take a look at the jmeter.log file; normally it should be possible to figure out what's wrong from the messages there. If you don't see any suspicious entries, you can increase JMeter's logging verbosity by changing values in the log4j2.xml file or via the -L command-line parameter (see the sketch after this list).
Take a thread dump and see what exactly the threads are doing when they're "stuck".
If you're using HTTP Request samplers, be aware that JMeter will wait for the response forever, and if the application fails to respond at all, your test will never end, so you need to set reasonable connect and response timeouts.
Make sure to follow JMeter Best Practices
Take a look at resource consumption (CPU, RAM, etc.); if your machine is overloaded and cannot produce the required load, you will need to switch to distributed testing.
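For the logging and thread-dump suggestions above, a rough sketch of the commands involved; the test plan file name and the process id are placeholders, and the exact flag syntax may vary across JMeter/JDK versions:

jmeter -n -t plan.jmx -Ljmeter.engine=DEBUG   # run non-GUI with more verbose engine logging
jstack <jmeter-pid> > threaddump.txt          # capture what the JVM threads are doing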
There are several approaches to debugging a JMeter test, which can be combined into a general systematic approach that is capable of diagnosing most problems.
The first thing I would suggest is running the test within the JMeter GUI to visualize the test execution. For this you may want to add a View Results Tree listener, which will provide you with real-time results for each request generated.
Another way you can monitor your test execution in real time within the JMeter GUI is the Log Viewer, found under the Options menu. If any exceptions are encountered during your test execution, you will see detailed output in this window.
Beyond this, JMeter writes output files which are often very useful in debugging your load tests. Both the .log file and the .jtl file provide a time-stamped history of every action your test performs. From there you can likely track down the offending request or error if your test unexpectedly hangs.
If you do decide to move your test into the cloud using a service that hosts your test, you may be able to ascertain more information through that platform. Here is a comprehensive example of how to debug JMeter load tests that covers the above approaches as well as more advanced concepts. Using a cloud load testing provider can give your test additional network and machine resources beyond what your local machine has, which helps if the problem is related to a performance bottleneck.
I have a requirement to load test a web application using LoadRunner (Community Edition 12.53). Currently I have my test scripts recorded using LoadRunner's default test script recorder. I'm assuming that the operations I chose to perform in the SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario?
When I ran my test scripts, I couldn't see any value updated in the application DB.
My test scripts are written in C, and manual correlation is applied using the web_reg_save_param function.
What might be the things that could go wrong in such a scenario? Any help would be deeply appreciated.
the operations I chose to perform in SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario? - Yes, this is the correct behaviour.
When I ran my test scripts, I couldn't see any value updated in the application DB. - You are probably missing something in the correlations. This generally happens when some variable is not correlated properly or gets missed, or when something like a timestamp that you might think is irrelevant actually needs to be taken care of.
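As an illustration only (the boundaries, URLs, and parameter names below are hypothetical, not taken from the original script), a manual correlation in a LoadRunner script typically looks like this:

// register the capture BEFORE the request whose response contains the value
web_reg_save_param("corr_SessionId",
    "LB=name=\"sessionId\" value=\"",   // left boundary (illustrative)
    "RB=\"",                            // right boundary (illustrative)
    LAST);

web_url("login", "URL=http://example.com/login", LAST);

// replay the captured value in the follow-up request; if the boundaries are
// wrong or a value is missed, the server rejects the request and nothing
// ever reaches the DB
web_submit_data("placeOrder",
    "Action=http://example.com/order",
    "Method=POST",
    ITEMDATA,
    "Name=sessionId", "Value={corr_SessionId}", ENDITEM,
    LAST);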
I'm trying to debug Protractor tests that contain many browser.wait statements.
I receive the following error message:
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
    at ontimeout (timers.js:475:11)
    at tryOnTimeout (timers.js:310:5)
(repeated a few times)
That doesn't help me at all.
Is it possible to force Protractor/Jasmine to report which particular condition it is waiting on while debugging the application? Or at least, when it crashes, to report which condition it was waiting for before the timeout exception was raised?
This would help me understand what is going on and in which step my tests really crash. I've tried debugging the tests step by step, but that doesn't help either, since the code does not seem to be executed at the line where Visual Studio Code stops, but only after stepping over the expect statement.
I'm not sure it's possible to know in which script it occurs.
I had the same problem, and putting a bigger Jasmine timeout in the conf.js file resolved the issue:
defaultTimeoutInterval: 2000000,
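For placement, a minimal sketch of a Protractor conf.js assuming the Jasmine framework (the spec path is illustrative):

exports.config = {
  framework: 'jasmine',
  specs: ['e2e/**/*.spec.js'],       // illustrative path
  jasmineNodeOpts: {
    defaultTimeoutInterval: 2000000, // ms; roomy enough for long browser.wait chains
  },
};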
I am using Selenium 2.30.0 to run a single test (on Windows) which runs for many hours (~8 hrs). I was using the Firefox driver, but it runs out of memory after just 45 minutes or less, and the test execution just hangs. I was unable to use HtmlUnitDriver (I thought a pure Java solution was the answer) to run the same way as the Firefox driver, as it needs to wait for page loads, and I definitely didn't want to put random thread sleeps in my code or implement any new function by extending HtmlUnitDriver.
I cannot break the test case into multiple smaller units.
I cannot reload the driver as and when I see heavy memory utilization.
Is there any way to get this working?
I found this link: creating-firefox-profile-for-your-selenium-rc-tests, and it was quite helpful. I created a new Firefox profile with absolutely minimal settings, and the test has been running without issues for the last 4 hours. Thanks a lot for the help, guys!
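For anyone looking for the mechanics, a hedged sketch of loading a named Firefox profile with the Java bindings of that Selenium 2.x era (the profile name is made up; the profile itself is created beforehand, e.g. via firefox -P):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;
import org.openqa.selenium.firefox.internal.ProfilesIni;

public class LongRunningTest {
    public static void main(String[] args) {
        // load the stripped-down profile by name
        FirefoxProfile profile = new ProfilesIni().getProfile("minimal-e2e");
        WebDriver driver = new FirefoxDriver(profile); // the 2.x constructor accepted a profile
        try {
            // ... the long-running test steps go here ...
        } finally {
            driver.quit();
        }
    }
}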
What sort of testing are you doing? Selenium is used primarily for Acceptance tests. It sounds like what you're trying to do is more like a soak test on your system.
If that's the case, take a look at JMeter; it's much more suited to this type of work. However, a rather significant difference between the two technologies is that JMeter works at the protocol (HTTP request) level as opposed to Selenium's use of the rendered HTML.
What exactly crashes, your Java test code or Firefox itself? If it's the Java test code, then are you sure that you're not leaking memory? Or maybe the memory leak is on the server side?
In our build there are certain scenarios that fail for reasons which are out of our control or would take too long to debug properly, such as asynchronous JavaScript.
Anyway, the point is that sometimes they work and sometimes they don't, so I was thinking it would be nice to add a tag to a scenario, such as @rerun_on_failure or @retry, which would retry the scenario X number of times before failing the build.
I understand this is not an ideal solution, but the test is still valuable, and we would like to keep it without the false negatives.
The actual test that fails clicks on a link and expects a tracking event to be sent to a server for analytics (via JavaScript). Sometimes the Selenium WebDriver loads the next page too fast and the event does not have time to be sent.
Thanks
More recent versions of Cucumber have a retry flag:
cucumber --retry 2
This retries a failing test up to two times.
I've been considering writing something like what you're describing, but I found this:
http://web.archive.org/web/20160713013212/http://blog.crowdint.com/2011/08/22/auto-retry-failed-cucumber-tests.html
If you're tired of having to re-kick builds in your CI server because of non-deterministic failures, this post is for you.
In a nutshell: he makes a new rake task called cucumber:rerun that uses rerun.txt to retry failed tests. It should be pretty easy to add some looping in there to retry at most 3x, for example; see the sketch below.
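A rough sketch of that loop idea (not the blog's exact code; it assumes Cucumber's rerun formatter, which writes the failed scenarios to rerun.txt):

# Rakefile
namespace :cucumber do
  desc "Run cucumber, then retry the failures from rerun.txt up to 3 times"
  task :rerun do
    passed = system("cucumber --format rerun --out rerun.txt")
    3.times do
      break if passed || !File.exist?("rerun.txt") || File.zero?("rerun.txt")
      # the @rerun.txt argument tells cucumber to run only the scenarios listed there
      passed = system("cucumber @rerun.txt --format rerun --out rerun.txt")
    end
    abort("cucumber still failing after retries") unless passed
  end
end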
For Cucumber + Java on Maven, I found this command:
mvn clean test -Dsurefire.rerunFailingTestsCount=2
You must have a recent enough version of the Surefire plugin; mine is 3.0.0-M5. Nothing else special is needed.
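To pin that plugin version, a minimal pom.xml sketch (only the plugin element is shown; the rest of the POM is assumed to already exist):

<!-- in <build><plugins> of your pom.xml -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M5</version>
</plugin>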