Does --num-flaky-test-attempts rerun the whole suite or just the failed test?

The documentation for --num-flaky-test-attempts parameter of gcloud firebase test android run says the following:
Specifies the number of times a test execution should be reattempted if one or more of its test cases fail for any reason.
This means it reruns only the failed tests but not the whole suite, right? In other words, as soon as a test passes it won't be retried, right?

The command line parameter --num-flaky-test-attempts of gcloud firebase test android run appears to rerun all of the tests instead of just the failed tests.
I ran a suite of tests using --num-flaky-test-attempts 10, and here are the timestamps from the logs for one test in the suite:
04-27 03:41:51.225 passed
04-27 03:41:50.519 passed
04-27 03:41:43.533 failed
04-27 03:41:48.625 failed
04-27 03:42:13.886 failed
04-27 03:41:33.749 failed
04-27 03:41:43.694 failed
04-27 03:41:42.101 failed
04-27 03:41:20.310 passed
04-27 03:40:17.819 passed
04-27 03:33:14.154 failed
It appears to have executed the entire test suite each time. In some runs the test mentioned above passed and in others it failed. Since it both passed and failed multiple times, it's clearly rerunning the test regardless of whether it previously passed.
I believe there were 11 total runs, because I specified --num-flaky-test-attempts 10, which means it attempted the suite once and, since that attempt failed, ran it 10 more times, for a total of 11.
Here is the full command in case that's helpful to anyone:
gcloud firebase test android run \
--project locuslabs-android-sdk \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--device model=walleye,version=28,locale=en_US,orientation=portrait \
--test-targets "class com.locuslabs.android.sdk.TestUITest" \
--use-orchestrator \
--num-flaky-test-attempts 10 \
--timeout 30m \
--environment-variables numShards=10,shardIndex=2 \
--verbosity debug

The documentation states the following for --num-flaky-test-attempts:
Specifies the number of times a test execution should be reattempted if one or more of its test cases fail for any reason. An execution that initially fails but succeeds on any reattempt is reported as FLAKY.
I.e. if one test case in a test execution fails, Test Lab will re-run the whole test execution again. A test execution consists of running the whole test suite on one device.
Example: you execute your test suite on two devices, let's call them A and B. The whole test suite succeeds on A, but one test case fails on B. In this case only the test suite on device B will be re-attempted.

Related

Error: failed post-processing: 820:39: missing ',' in argument list

protoc-gen-validate is a protoc plugin to generate polyglot message validators.
The project uses Bazel for builds and has an open pull request to add support for customization of validation error messages.
The original code was written in 2020. Recently I updated it with the latest upstream code, and after fixing all merge conflicts it now fails to build, but I can't find the issue:
~/GitHub/protoc-gen-validate (i18n) $ make bazel-tests
bazel test //tests/... --test_output=errors
INFO: Analyzed 68 targets (0 packages loaded, 0 targets configured).
INFO: Found 62 targets and 6 test targets...
ERROR: /Users/mparnisari/GitHub/protoc-gen-validate/tests/harness/cases/BUILD:46:21: Generating into bazel-out/darwin-fastbuild/bin/tests/harness/cases/go_/github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/go failed: (Exit 1): go-protoc-bin failed: error executing command bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/io_bazel_rules_go/go/tools/builders/go-protoc-bin_/go-protoc-bin -protoc bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc ... (remaining 117 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
[error] failed post-processing: 820:39: missing ',' in argument list (and 10 more errors)
--validate_out: protoc-gen-validate: Plugin failed with status code 1.
2021/12/14 23:46:43 error running protoc: exit status 1
ERROR: /Users/mparnisari/GitHub/protoc-gen-validate/tests/harness/cases/BUILD:46:21 GoCompilePkg tests/harness/cases/go.a failed: (Exit 1): go-protoc-bin failed: error executing command bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/io_bazel_rules_go/go/tools/builders/go-protoc-bin_/go-protoc-bin -protoc bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc ... (remaining 117 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
INFO: Elapsed time: 0.962s, Critical Path: 0.48s
INFO: 8 processes: 8 internal.
FAILED: Build did NOT complete successfully
Judging by the fact that the error ([error] failed post-processing: 820:39: missing ',' in argument list (and 10 more errors)) appears immediately after I run the test command, I figure this is an issue with Bazel, but I'm not sure.
UPDATE 1: I narrowed it down to this:
~/GitHub/fork/protoc-gen-validate (i18n) $ make testcases
cd tests/harness/cases && \
protoc \
-I . \
-I ../../.. \
--go_out="module=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/go,Mtests/harness/cases/other_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/other_package/go;other_package,Mtests/harness/cases/yet_another_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/yet_another_package/go,Mvalidate/validate.proto=github.com/envoyproxy/protoc-gen-validate/validate,Mgoogle/protobuf/any.proto=google.golang.org/protobuf/types/known/anypb,Mgoogle/protobuf/duration.proto=google.golang.org/protobuf/types/known/durationpb,Mgoogle/protobuf/struct.proto=google.golang.org/protobuf/types/known/structpb,Mgoogle/protobuf/timestamp.proto=google.golang.org/protobuf/types/known/timestamppb,Mgoogle/protobuf/wrappers.proto=google.golang.org/protobuf/types/known/wrapperspb,Mgoogle/protobuf/descriptor.proto=google.golang.org/protobuf/types/descriptorpb:./go" \
--plugin=protoc-gen-go=/Users/mparnisari/go/bin/protoc-gen-go \
--validate_out="module=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/go,lang=go,Mtests/harness/cases/other_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/other_package/go,Mtests/harness/cases/yet_another_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/yet_another_package/go:./go" \
./*.proto
filename-with-dash.proto:5:1: warning: Import validate/validate.proto is unused.
[error] failed post-processing: 820:39: missing ',' in argument list (and 10 more errors)
--validate_out: protoc-gen-validate: Plugin failed with status code 1.
make: *** [testcases] Error 1
UPDATE 2: I narrowed it down: if I delete the file tests/harness/cases/maps.proto, make testcases works.
UPDATE 3: I narrowed it down further: if I remove these lines: https://github.com/envoyproxy/protoc-gen-validate/blob/main/tests/harness/cases/maps.proto#L14-L17, make testcases works.
Building with the --sandbox_debug switch would provide an untruncated stack trace.
It's all about one missing comma... the only question is in which file, and on which line.
It complains about the harness_py_proto target at 46:21. I'd suggest running a syntax check on the file tests/harness/executor/cases.go, because if 820:39 isn't in some generated file, this is the file that fits best (by line count and by the error message). I'm not fluent enough in Go to spot it at sight, but a syntax checker or linter could.
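A minimal sketch of such a check, using gofmt's error-reporting mode (the paths are assumptions; point it at whichever hand-written or generated .go files your build involves):
# -e makes gofmt report all syntax errors, not just the first few
gofmt -e tests/harness/executor/cases.go
# if 820:39 refers to generated output, check those files as well, e.g.:
gofmt -e tests/harness/cases/go/*.pb.validate.go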
Also the Makefile might be a possible candidate, but there's not much going on:
.PHONY: harness
harness: testcases tests/harness/go/harness.pb.go tests/harness/go/main/go-harness tests/harness/cc/cc-harness bin/harness ## runs the test harness, validating a series of test cases in all supported languages
	./bin/harness -go -cc

.PHONY: bazel-tests
bazel-tests: ## runs all tests with Bazel
	bazel test //tests/... --test_output=errors
Have you ever tried running make harness? It does what the comment says:
## runs the test harness, validating a series of test cases in all supported languages

xcpretty not working when running Xcode UI tests in parallel

I've started running my UI tests in parallel in order to improve performance. However, when using xcpretty, I no longer see which tests passed or what made the failing tests fail (I only see which tests failed). Is there any way to solve this, or an alternative to xcpretty that works with the output of parallel tests? I want to keep a nice terminal output, as when running tests sequentially.
This is my script:
xcodebuild \
-workspace './code/ios/myApp/myApp.workspace' \
-scheme 'myApp' \
-destination 'platform=iOS Simulator,name=iPhone 6' \
test | xcpretty -c
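One low-tech workaround is to pipe the output through tee so the raw xcodebuild log is preserved (a sketch based on the script above; xcodebuild.log is an arbitrary file name). Even when xcpretty cannot make sense of the interleaved parallel output, the full pass/fail details stay readable in the log file:
xcodebuild \
-workspace './code/ios/myApp/myApp.workspace' \
-scheme 'myApp' \
-destination 'platform=iOS Simulator,name=iPhone 6' \
test | tee xcodebuild.log | xcpretty -c
# inspect the complete, unformatted output afterwards:
less xcodebuild.log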
This is the output I got when running tests sequentially (And the output I want to continue having when running them in parallel):
Selected tests
[15:35:45]: ▸ Test Suite UITests.xctest started
[15:35:45]: ▸ RegisterTest
[15:36:48]: ▸ ✗ testRegisterBrazil, failed - Couldn't find:
"homeBottomBar_myAccountButton" Button
[15:42:50]: ▸ ✓ testRegisterUSA (61.241 seconds)
[15:42:50]: ▸ Executed 4 tests, with 1 failures (1 unexpected) in 425.314 (425.319) seconds
This is the output I get now:
Failing tests:
UITests:
RegisterTest.testRegisterBrazil()
** TEST FAILED **

Failed test in Protractor stops the execution of the rest of the test cases

I am running a Protractor suite (a spec file with multiple test cases). If any test case fails, Protractor does not continue with the next test case, and all of the remaining test cases fail as well.
EXPECTED BEHAVIOR:
Upon failure of any test case, Protractor should continue with the next test case.
I have used the "Protractor-Fail-Fast" npm package, but that stops the remaining test execution when a test case fails, which is the opposite of what I want. So this will not help me!
Just for reference: in Visual Studio MSTest, if I create an ordered test (analogous to a spec file with multiple test cases in Protractor) and set the "continue on failure" test setting, the ordered test execution continues even if some test case fails.
I am looking for a similar test setting or any other solution for Protractor.
If you don't want to stop the whole test run, just stop using the Protractor-Fail-Fast library. Protractor tests run to the end by default, even if some of the tests fail.
Set ignoreUncaughtExceptions: true in your config file, as follows:
/**
* If set, Protractor will ignore uncaught exceptions instead of exiting
* without an error code. The exceptions will still be logged as warnings.
*/
ignoreUncaughtExceptions?: boolean;
You can find the description above here.
exports.config = {
  ...
  ignoreUncaughtExceptions: true
}
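For context, here is a minimal sketch of a complete config with that option applied (the spec paths and framework choice are placeholders, not from the original question):
// conf.js - minimal sketch
exports.config = {
  framework: 'jasmine',
  specs: ['./specs/*.spec.js'],
  // Keep the runner alive when an uncaught exception is thrown;
  // the exception is still logged as a warning.
  ignoreUncaughtExceptions: true
};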

TestNG @AfterTest called without executing all tests

I am facing an odd issue when I run my test cases.
I have around 50 test cases.
During the run, it suddenly calls @AfterTest and completes the test run. In the end it ran only 10 test cases, and every run executes a different number of test cases (like 4, 10, or 15, but never all of them).
Questions:
@AfterTest is supposed to be called after all @Test methods have executed.
Is there some process behind the scenes that terminates the @Test method execution and calls the @AfterTest method? How do I debug this kind of failure?
Could someone please help me with this?
I am using Appium to run the test cases. Whenever Appium throws an uncaught exception, execution comes out of the test block and runs the @AfterTest block without executing all the test cases.
Logs from Appium:
2018-06-12 00:56:21:720 - error: uncaughtException: write EPIPE date=Tue Jun 12 2018 00:56:21 GMT+0530 (IST), pid=68705, uid=503, gid=20, cwd=/Users/../Desktop/Automation/CodeBase/MyProject, execPath=/usr/local/bin/node, version=v8.9.4, argv=[/usr/local/bin/node, /usr/local/bin/appium, -a, 127.0.0.1, -p, 4721, -cp, 5721, -bp, 6721, --chromedriver-port, 7721, --no-reset, --log-level, debug, --local-timezone, --log, /Users/../target/AppiumLogs/appiumLogs_201.log], rss=194183168, heapTotal=137883648, heapUsed=129103560, external=213116, loadavg=[2.45849609375, 2.568359375, 2.4462890625], uptime=476242, trace=[column=11, file=util.js, function=_errnoException, line=1022, method=null, native=false, column=14, file=net.js, function=WriteWrap.afterWrite, line=867, method=afterWrite, native=false], stack=[Error: write EPIPE, at _errnoException (util.js:1022:11), at WriteWrap.afterWrite (net.js:867:14)]
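To illustrate the point, here is a hedged sketch (the class name and driver calls are placeholders, not code from the original post): converting the exception into an ordinary assertion failure keeps one crashed @Test from ending the run early, assuming the Appium server itself is still reachable afterwards.
import org.testng.Assert;
import org.testng.annotations.Test;

public class CheckoutTest {
    @Test
    public void testAddToCart() {
        try {
            // driver.findElement(...).click(); // Appium interactions go here
        } catch (Exception e) {
            // Turn the uncaught exception into a normal test failure so
            // TestNG moves on to the next @Test instead of skipping to @AfterTest.
            Assert.fail("Appium call failed: " + e.getMessage());
        }
    }
}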

Rerun a specific test once on failure

I have a flapping test in my test suite - it fails once out of every hundred runs or so.
However, this is somewhat expected behavior, as it's a test for an algorithm that runs a lot of things and returns a result - every once in a while a result just can't be found. If I run the test suite again, everything is green.
Is there a way to re-run this one particular test if it fails and use the results of the rerun?
For example, 99% of the time this test passes. In the event that it fails, it should automatically be re-run without triggering a failure. If that re-run test fails, it should be reported as a normal failing test.
I'm thinking something like this, which is used to stop the test suite on failure:
class << MiniTest::Unit.runner; self; end.class_eval do
  def puke(suite, test, e)
    super(suite, test, e)
    if ENV['FAIL_FAST'] && !e.kind_of?(MiniTest::Skip)
      turn_reporter.io.puts "****** Fail fast active for tests, exiting (environment variable FAIL_FAST exists)"
      exit(1)
    end
  end
end
but instead of 'failing fast' it would 're-run' the failing test. The difference is that the code above is gated on an environment variable, so any failing test 'pukes' - whereas I would only want to put this 're-run' flag on one specific test.
Has anyone found a way to do this?
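For what it's worth, a sketch of one way to get this behavior, assuming the third-party minitest-retry gem (which targets Minitest 5+, not the MiniTest::Unit runner shown above; the test name below is a placeholder):
# test_helper.rb
require 'minitest/retry'

# Re-run only the one flaky test, once; if the retry also fails,
# it is reported as a normal failing test.
Minitest::Retry.use!(
  retry_count: 1,
  methods_to_retry: ['AlgorithmTest#test_finds_result']
)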
