How to use jest.useFakeTimers().setSystemTime and avoid a hook timeout

I want to set the time in my Jest tests. However, Jest complains that the test exceeds the timeout.
beforeEach(() => {
  jest.useFakeTimers().setSystemTime(new Date('2020-01-01'))
})
The error:
● Test suite failed to run
thrown: "Exceeded timeout of 5000 ms for a hook.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
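A likely cause is that modern fake timers also replace process.nextTick and setImmediate, so any async work awaited in the hook (or by the framework itself) never resolves, and the 5000 ms hook timeout fires. A minimal sketch of one workaround, assuming Jest 29's fake-timer options (the doNotFake list and the afterEach cleanup are my additions, not from the original question):
beforeEach(() => {
  // Keep nextTick/setImmediate real so awaited promises can still resolve,
  // while setTimeout/Date and friends stay faked at 2020-01-01.
  jest.useFakeTimers({ doNotFake: ['nextTick', 'setImmediate'] })
  jest.setSystemTime(new Date('2020-01-01'))
})

afterEach(() => {
  // Restore real timers so later suites are unaffected.
  jest.useRealTimers()
})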

Related

Mocha - retry after timeout

I am using Mocha with the following flags: --timeout 450000 --retries 3.
Mocha retries whenever something goes wrong during test execution or when an assertion fails, i.e. it respects the --retries 3 flag and retries up to three (3) more times to see if the test passes before failing it.
But when a test times out, Mocha doesn't respect the --retries 3 flag. Instead it simply fails the test with the following message:
Error: Timeout of 450000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
How can I make Mocha retry a test that has timed out? This is important because in my setup the tests can time out, as they depend on network conditions.
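One avenue worth trying is declaring the settings per suite instead of only via CLI flags. Note that Mocha's retries re-run beforeEach/afterEach hooks but do not apply to before/after hooks, so a timeout inside those hooks will not be retried. A sketch, assuming a recent Mocha version (whether a timed-out test is retried has varied across releases, so upgrading Mocha is worth trying first):
describe('network-dependent tests', function () {
  // Must be a regular function, not an arrow function,
  // so `this` refers to the suite context.
  this.timeout(450000);
  this.retries(3);

  it('talks to the remote service', async function () {
    // A test body that can time out under bad network conditions.
  });
});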

Failed test in protractor stops the execution for rest test cases

I am running a Protractor suite (a spec file with multiple test cases). If any test case fails, Protractor does not continue with the next test case, and all the remaining test cases fail as well.
EXPECTED BEHAVIOR:
Upon failure on any test case, protractor should continue with next test case execution.
I used "Protractor-Fail-Fast" Npm package to stop the rest test case execution if any test case fail. But ideally I am not looking for the same.
But this will not help me!
Just for reference: In Visual Studio MS test, If I created ordered test (same as Spec file in protractor having multiple test cases) and then set test setting like "continue on failure", ordered test execution will continue even if some test case failed.
I am looking for a similar test setting or any solution for protractor.
If you don't want the whole test run to stop, simply stop using the Protractor-Fail-Fast library: by default, Protractor runs all tests to the end even if some of them fail.
Also set ignoreUncaughtExceptions: true in the config file, as follows:
/**
 * If set, Protractor will ignore uncaught exceptions instead of exiting
 * without an error code. The exceptions will still be logged as warnings.
 */
ignoreUncaughtExceptions?: boolean;
The description above comes from the Protractor config reference.
exports.config = {
  // ...
  ignoreUncaughtExceptions: true
};

How to get better error messaging from nightwatch when running tests in parallel

We have a problem when we run our Nightwatch tests in parallel and something is wrong with the setup, for example when the Selenium grid is not available: the tests finish very quickly and we get no error messages.
Started child process for: folder1/test1
Started child process for: folder1/test2
Started child process for: folder1/test3
Started child process for: folder1/test4
>> folder1/test1 finished.
>> folder1/test2 finished.
>> folder1/test3 finished.
>> folder1/test4 finished.
But when I run the tests serially, I get a useful error message like:
Error retrieving a new session from the selenium server
Connection refused! Is selenium server started?
{ status: 13,
value: { message: 'Error forwarding the new session Empty pool of VM for setup Capabilities [{acceptSslCerts=true, name=Test1, browserName=chrome, javascriptEnabled=true, uuid=ab54872b-10ee-43a1-bf65-7676262fa647, platform=ANY}]',
class: 'org.openqa.grid.common.exception.GridException' } }
Why don't I get the good error message when running in parallel mode? Is there something I can change so I get the good error message in parallel mode?
By setting
live_output: true
in your nightwatch config file, you'll see logs while running in parallel.
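For context, here is a minimal sketch of where that flag lives (the test_workers block is an assumption to illustrate a typical parallel setup, not taken from the question):
// nightwatch.conf.js
module.exports = {
  live_output: true, // stream each child process's output instead of swallowing it
  test_workers: {
    enabled: true,
    workers: 'auto'
  }
  // ...rest of your config...
};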
More information: config-basic

TFS2015 Test Agent Aborted - PowerShell script completed with errors

I am running a TFS nightly build that for the last few days has not been able to complete all of its tests. It fails after several hours with a "Test run is aborted" message. Previously the tests always ran successfully, and no major (or even minor) changes have been made to the system that runs them.
Information:
Two MSTest runs in the build (unit tests)
Timeout is set to 20 hours
Runs for approx. 15 hours before failure
Tests are set to continue on failure
When I look in the TFS log for the latest run, it lists the following (2017-04-11T06:42:47.5500707Z):
[warning]DistributedTests: Test run is aborted. Logging details of the run logs.
[warning]DistributedTests: New test run created.
[warning]Test Run queued for Project Collection Build Service
[warning]DistributedTests: Test discovery started.
[warning]DistributedTests: Test Run Discovery Completed . Test run id: 533
[warning]DistributedTests: 290 test cases discovered.
[warning]DistributedTests: Test execution started. Test run id : 533
[warning]DistributedTests: Test run timed out. Test run id : 533
[warning]DistributedTests: Test run aborted. Test run id: 533
[error]The test run was aborted, failing the task.
When I look at the run log (worker_20170410-234426-utc_864.log) I see:
06:42:47.659516 BaseLogger.LogConsoleMessage(scope.JobId = 7ced7f31-e360-47f3-b334-ef20faeaf000, message = ##[error]The test run was aborted, failing the task.)
06:42:47.659516 Microsoft.TeamFoundation.DistributedTask.Agent.Common.AgentExecutionTerminationException: PowerShell script completed with errors.
   at Microsoft.TeamFoundation.DistributedTask.Handlers.PowerShellHandler.Execute(ITaskContext context, CancellationToken cancellationToken)
   at Microsoft.TeamFoundation.DistributedTask.Worker.JobRunner.RunTask(ITaskContext context, TaskWrapper task, CancellationTokenSource tokenSource)
In the test log I don't see any errors from Visual Studio, just a warning about not being able to connect (I see these often):
W, 2060, 5, 2017/04/10, 16:26:03.595, XXXTESTING\QTController.exe, Test of LoadTestResultConnectString failed: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
I also see an error thrown in the Application Event log at the same time:
The description for Event ID 0 from source Application cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Error Handler Exception: System.ServiceModel.CommunicationException: There was an error reading from the pipe: The pipe has been ended. (109, 0x6d). ---> System.IO.IOException: The read operation failed, see inner exception. ---> System.ServiceModel.CommunicationException: There was an error reading from the pipe: The pipe has been ended. (109, 0x6d). ---> System.IO.PipeException: There was an error reading from the pipe: The pipe has been ended. (109, 0x6d).....
the message resource is present but the message is not found in the string/message table
The issue is that I really don't know how to interpret these messages; each log just says "test run was aborted, failing the task", and I'm not even certain the PowerShell issue is what caused it. I'm also not sure that the error thrown in the Application event log is related, though it was thrown at exactly the same time the run failed.
It's also difficult to research this issue when you don't really know what's causing the test agent to fail. There are posts related to VS and to the TFS test agent, but these don't strike me as related issues, and of course there is this somewhat unhelpful post about the PowerShell message.
Has anyone seen this sort of issue before? I don't think anything on my build server has changed over the last few days (maybe updates...). What do you think would cause an issue like this to occur?
If you look at the failed build (containing tests) after it is aborted in the "Build" section of TFS, it just says "Aborted". But if you look at the results in the "Test" section of TFS, it says the test run "Exceeded Timeout".
Apparently MSTest was running up against the default value of this little gem, the runTimeout setting. I think it defaults to 8 hours when not specified, but I'm not too sure about that. Anyway, I set the following in my "Default.testsettings" file:
<?xml version="1.0" encoding="utf-8"?>
<TestSettings name="TestSettings1">
  <Execution>
    <Timeouts runTimeout="200000000" />
  </Execution>
</TestSettings>
That seems to have resolved the issue: the tests run successfully and no longer time out. (Note that runTimeout is in milliseconds, so 200000000 is roughly 55 hours, comfortably above the ~15-hour runs.)

Elixir pry session interrupted because database connection timed out

I was happily following this advice on how to run a pry debugger inside my Phoenix controller tests:
require IEx in the target file
add IEx.pry to the desired line
run the tests inside IEx: iex -S mix test --trace
But after a few seconds this error always appeared:
16:51:08.108 [error] Postgrex.Protocol (#PID<0.250.0>) disconnected:
** (DBConnection.ConnectionError) owner #PID<0.384.0> timed out because
it owned the connection for longer than 15000ms
As the message says, the database connection appears to time out at this point and any commands that invoke the database connection will error out with a DBConnection.OwnershipError. How do I tell my database connection not to time out so I can debug my tests in peace?
The Ecto.Adapters.SQL.Sandbox FAQ mentions this issue and explains that you can add the :ownership_timeout setting to your Repo config to specify how long db connections should stay open before timing out. I set mine to 10 minutes (test environment only) so I never have to think about it again:
# config/test.exs
config :rumbl, Rumbl.Repo,
  # ...other settings...
  ownership_timeout: 10 * 60 * 1000 # long timeout so pry sessions don't break
As expected, I can now fool around in pry for 10 minutes before seeing that error.
