xcpretty not working when running Xcode UI tests in parallel

I've started to run my UI tests in parallel in order to improve performance. However, when using xcpretty, I no longer see which tests have passed or what made the failing tests fail (I only see which tests failed). Is there any way to solve this, or an alternative to xcpretty that works with the output of parallel tests? I want a nice terminal output, as with sequential testing.
This is my script:
xcodebuild \
-workspace './code/ios/myApp/myApp.workspace' \
-scheme 'myApp' \
-destination 'platform=iOS Simulator,name=iPhone 6' \
test | xcpretty -c
This is the output I got when running tests sequentially (and the output I want to keep having when running them in parallel):
Selected tests
[15:35:45]: ▸ Test Suite UITests.xctest started
[15:35:45]: ▸ RegisterTest
[15:36:48]: ▸ ✗ testRegisterBrazil, failed - Couldn't find:
"homeBottomBar_myAccountButton" Button
[15:42:50]: ▸ ✓ testRegisterUSA (61.241 seconds)
[15:42:50]: ▸ Executed 4 tests, with 1 failures (1 unexpected) in 425.314 (425.319) seconds
This is the output I get now:
Failing tests:
UITests:
RegisterTest.testRegisterBrazil()
** TEST FAILED **
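One workaround, not from the original thread and so just a sketch: swap xcpretty for xcbeautify (https://github.com/cpisciotta/xcbeautify), which is designed to cope with the log format that parallel test runs emit:

```shell
# Same pipeline as the script above, with xcbeautify instead of xcpretty.
# xcbeautify advertises support for xcodebuild's parallel-testing output.
xcodebuild \
-workspace './code/ios/myApp/myApp.workspace' \
-scheme 'myApp' \
-destination 'platform=iOS Simulator,name=iPhone 6' \
test | xcbeautify
```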

Error: failed post-processing: 820:39: missing ',' in argument list

protoc-gen-validate is a protoc plugin to generate polyglot message validators.
The project uses Bazel for builds and has an open pull request to add support for customization of validation error messages.
The original code was written in 2020. I recently updated it with the latest upstream code, and after fixing all merge conflicts it now fails to build, but I can't find the issue:
~/GitHub/protoc-gen-validate (i18n) $ make bazel-tests
bazel test //tests/... --test_output=errors
INFO: Analyzed 68 targets (0 packages loaded, 0 targets configured).
INFO: Found 62 targets and 6 test targets...
ERROR: /Users/mparnisari/GitHub/protoc-gen-validate/tests/harness/cases/BUILD:46:21: Generating into bazel-out/darwin-fastbuild/bin/tests/harness/cases/go_/github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/go failed: (Exit 1): go-protoc-bin failed: error executing command bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/io_bazel_rules_go/go/tools/builders/go-protoc-bin_/go-protoc-bin -protoc bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc ... (remaining 117 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
[error] failed post-processing: 820:39: missing ',' in argument list (and 10 more errors)
--validate_out: protoc-gen-validate: Plugin failed with status code 1.
2021/12/14 23:46:43 error running protoc: exit status 1
ERROR: /Users/mparnisari/GitHub/protoc-gen-validate/tests/harness/cases/BUILD:46:21 GoCompilePkg tests/harness/cases/go.a failed: (Exit 1): go-protoc-bin failed: error executing command bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/io_bazel_rules_go/go/tools/builders/go-protoc-bin_/go-protoc-bin -protoc bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc ... (remaining 117 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
INFO: Elapsed time: 0.962s, Critical Path: 0.48s
INFO: 8 processes: 8 internal.
FAILED: Build did NOT complete successfully
Judging by the fact that the error ([error] failed post-processing: 820:39: missing ',' in argument list (and 10 more errors)) appears immediately after I run the test command, I figure this is an issue with Bazel, but I'm not sure.
UPDATE 1: I narrowed it down to this:
~/GitHub/fork/protoc-gen-validate (i18n) $ make testcases
cd tests/harness/cases && \
protoc \
-I . \
-I ../../.. \
--go_out="module=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/go,Mtests/harness/cases/other_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/other_package/go;other_package,Mtests/harness/cases/yet_another_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/yet_another_package/go,Mvalidate/validate.proto=github.com/envoyproxy/protoc-gen-validate/validate,Mgoogle/protobuf/any.proto=google.golang.org/protobuf/types/known/anypb,Mgoogle/protobuf/duration.proto=google.golang.org/protobuf/types/known/durationpb,Mgoogle/protobuf/struct.proto=google.golang.org/protobuf/types/known/structpb,Mgoogle/protobuf/timestamp.proto=google.golang.org/protobuf/types/known/timestamppb,Mgoogle/protobuf/wrappers.proto=google.golang.org/protobuf/types/known/wrapperspb,Mgoogle/protobuf/descriptor.proto=google.golang.org/protobuf/types/descriptorpb:./go" \
--plugin=protoc-gen-go=/Users/mparnisari/go/bin/protoc-gen-go \
--validate_out="module=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/go,lang=go,Mtests/harness/cases/other_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/other_package/go,Mtests/harness/cases/yet_another_package/embed.proto=github.com/envoyproxy/protoc-gen-validate/tests/harness/cases/yet_another_package/go:./go" \
./*.proto
filename-with-dash.proto:5:1: warning: Import validate/validate.proto is unused.
[error] failed post-processing: 820:39: missing ',' in argument list (and 10 more errors)
--validate_out: protoc-gen-validate: Plugin failed with status code 1.
make: *** [testcases] Error 1
UPDATE 2: Narrowed it down further: if I delete the file tests/harness/cases/maps.proto, make testcases works.
UPDATE 3: Narrowed it down even more: if I remove these lines: https://github.com/envoyproxy/protoc-gen-validate/blob/main/tests/harness/cases/maps.proto#L14-L17, make testcases works.
Building with the --sandbox_debug switch would give an untruncated stack trace.
It's all about one missing comma... the only question is in which file and on which line.
It complains about the test target at BUILD:46:21. I'd suggest running a syntax check on tests/harness/executor/cases.go, because if 820:39 doesn't point into some generated file, this is the file that fits best (by line count and by the error message). I'm not fluent enough in Go syntax to spot it at sight, but a syntax checker or linter could.
Also the Makefile might be a possible candidate, but there's not much going on:
.PHONY: harness
harness: testcases tests/harness/go/harness.pb.go tests/harness/go/main/go-harness tests/harness/cc/cc-harness bin/harness ## runs the test harness, validating a series of test cases in all supported languages
	./bin/harness -go -cc

.PHONY: bazel-tests
bazel-tests: ## runs all tests with Bazel
	bazel test //tests/... --test_output=errors
Have you ever tried running make harness before? It does what the comment says:
## runs the test harness, validating a series of test cases in all supported languages
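The suggested syntax check can be run directly with the Go toolchain; a sketch, assuming the repo root as working directory and that tests/harness/executor/cases.go is indeed the right file:

```shell
# gofmt -e reports all syntax errors rather than stopping early; if this is
# the offending file, the 820:39 "missing ',' in argument list" location
# should appear in its output.
gofmt -e tests/harness/executor/cases.go > /dev/null
```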

Is there a way to set -UseModernBuildSystem=NO in a Fastlane Fastfile?

When building my app using fastlane gym I'm getting error: Unexpected duplicate tasks:. When I run into this issue in Xcode I can get rid of it by switching to the legacy build system. I would like to set the build system from fastlane as well, but have not found the correct way to pass it in using xcargs.
I've tried this command: fastlane gym --xcargs "UseModernBuildSystem=no"
Which in turn runs this: set -o pipefail && xcodebuild -workspace ./PolyAcademy.xcworkspace -scheme PolyAcademy -destination 'generic/platform=iOS' -archivePath /Users/mattmarshall/Library/Developer/Xcode/Archives/2019-10-21/PolyAcademy\ 2019-10-21\ 16.21.58.xcarchive UseModernBuildSystem=no archive | tee /Users/mattmarshall/Library/Logs/gym/PolyAcademy-PolyAcademy.log | xcpretty
This is the error output I'd like to get rid of so the app builds:
[16:22:00]: ▸ 2019-10-21 16:22:00.145 xcodebuild[32252:315891] DTDeviceKit: deviceType from 870b9074181ce2e0318a5477d3bd3536633ee1ee was NULL
[16:22:01]: ▸ ❌ error: Unexpected duplicate tasks:
[16:22:01]: ▸ ** ARCHIVE FAILED **
I solved it by adding
export_xcargs: {
useModernBuildSystem: "NO"
}
to my build_ios_app config in the Fastfile.
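Note also that xcodebuild expects this flag with a leading dash, so (an assumption on my part, not verified against this exact setup) passing it through gym's xcargs like this may work too:

```shell
# The attempt in the question passed "UseModernBuildSystem=no" without the
# dash, which xcodebuild treats as a build setting rather than an option;
# the option form -UseModernBuildSystem=NO selects the legacy build system.
fastlane gym --xcargs "-UseModernBuildSystem=NO"
```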

Does --num-flaky-test-attempts rerun the whole suite or just the failed test?

The documentation for --num-flaky-test-attempts parameter of gcloud firebase test android run says the following:
Specifies the number of times a test execution should be reattempted if one or more of its test cases fail for any reason.
This means it reruns only the failed tests but not the whole suite, right? In other words, as soon as a test passes it won't be retried, right?
The command line parameter --num-flaky-test-attempts of gcloud firebase test android run appears to rerun all of the tests instead of just the failed tests.
I ran a suite of tests using --num-flaky-test-attempts 10, and here are the timestamps from the logs for one test in the suite:
04-27 03:41:51.225 passed
04-27 03:41:50.519 passed
04-27 03:41:43.533 failed
04-27 03:41:48.625 failed
04-27 03:42:13.886 failed
04-27 03:41:33.749 failed
04-27 03:41:43.694 failed
04-27 03:41:42.101 failed
04-27 03:41:20.310 passed
04-27 03:40:17.819 passed
04-27 03:33:14.154 failed
It appears to have executed the entire test suite each time. In some cases the test mentioned above passed and in some cases it failed. It passed and failed multiple times, so clearly it's rerunning the test no matter whether it passed or failed.
I believe there were 11 total attempts because I specified --num-flaky-test-attempts 10, which means it ran the suite once and, since that failed, ran it 10 more times for a total of 11.
Here is the full command in case that's helpful to anyone:
gcloud firebase test android run \
--project locuslabs-android-sdk \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--device model=walleye,version=28,locale=en_US,orientation=portrait \
--test-targets "class com.locuslabs.android.sdk.TestUITest" \
--use-orchestrator \
--num-flaky-test-attempts 10 \
--timeout 30m \
--environment-variables numShards=10,shardIndex=2 \
--verbosity debug
The documentation states the following for --num-flaky-test-attempts:
Specifies the number of times a test execution should be reattempted if one or more of its test cases fail for any reason. An execution that initially fails but succeeds on any reattempt is reported as FLAKY.
I.e. if one test case in a test execution fails, Test Lab will re-run the whole test execution. A test execution consists of running the whole test suite on one device.
Example: You execute your test suite on two devices, let's call them A and B. The whole test suite succeeds on A, but one test case fails on B. In this case only the test suite on device B will be re-attempted.

Ginkgo does not provide coverage in Travis CI

I have a Go project which I build in Travis CI.
I have implemented a few tests with Ginkgo, and I get code coverage when I run them locally; however, I get no coverage when I run them on Travis.
My .travis.yml:
language: go
# safelist
branches:
  only:
    - master
    - travis
before_install:
  - go get github.com/onsi/gomega
  - go get github.com/onsi/ginkgo/ginkgo
  - go get github.com/modocache/gover
script:
  - ginkgo -r --randomizeAllSpecs --randomizeSuites --failOnPending --cover --trace --race --compilers=2
after_success:
  - gover . coverage.txt
  - ls -al
  - cat coverage.txt
  - bash <(curl -s https://codecov.io/bash)
When I run the script command on my own machine I get the following result:
$ ginkgo -r --randomizeAllSpecs --randomizeSuites --failOnPending --cover --trace --race --compilers=2
Running Suite: Gitserver Suite
==============================
Random Seed: 1470431018 - Will randomize all specs
Will run 4 of 4 specs
++++
Ran 4 of 4 Specs in 0.000 seconds
SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped PASS
coverage: 25.9% of statements
Ginkgo ran 1 suite in 4.411023s
Test Suite Passed
But on Travis CI the coverage says "0.0% of statements".
I have tried setting up a new GOPATH on my local machine to get a clean setup and running only the commands that occur in the Travis log, and I still get a reported 25% coverage. My machine is running Windows whereas Travis is Linux; that is the only difference I can think of right now.
I have just tried GoCover.io on my package, and it also gives me the 25% coverage that I get locally.
I finally got it to work after running the Travis build locally through their Docker image. For some reason I needed to specify which package to cover, so the ginkgo command was changed to
ginkgo -r --randomizeAllSpecs --randomizeSuites --failOnPending --coverpkg gitserver --trace --race --compilers=2
The following command seems to work for me.
ginkgo -r --randomizeAllSpecs --randomizeSuites --cover --race --trace
If you'd like to know more, look at the documentation here.

How do you configure the Xcode Jenkins plugin to run tests?

The instructions for running tests with the Jenkins Xcode plugin say to set the test target (which I've done), the SDK (which I've done), and the configuration (which I tried leaving empty, and setting to Debug and to Test).
However I keep getting "...is not configured for running".
How do I actually get it to run tests?
This is the output:
+ xcodebuild -workspace /Users/MyDir/.Jenkins/jobs/MyTests/workspace/folder/MyWorkspace.xcworkspace -scheme MyTestScheme clean
xcodebuild: error: Failed to build workspace MyWorkspace with scheme MyTestScheme.
Reason: Scheme "MyTestScheme" is not configured for running.
In Xcode, if I choose Product/Run for MyTestScheme I get the same error message, but if I choose Product/Test the test code executes successfully. The output from a successful run in Xcode is:
2013-08-28 11:10:25.828 otest[65917:303] Unknown Device Type. Using UIUserInterfaceIdiomPhone based on screen size
Test Suite 'Multiple Selected Tests' started at 2013-08-28 18:10:26 +0000
Test Suite '/Users/MyDir/Library/Developer/Xcode/DerivedData/MyWorkspace-ctngidolzdhijvbymvghygtoaiiw/Build/Products/Debug-iphonesimulator/MyTestScheme.octest(Tests)' started at 2013-08-28 18:10:26 +0000
Test Suite 'MyTests' started at 2013-08-28 18:10:26 +0000
Test Case '-[MyTests test1]' started.
2013-08-28 11:10:26.029 otest[65917:303] MDN: (null)
Test Case '-[MyTests testA1]' passed (0.346 seconds).
For me the problem was using the test scheme. The debug scheme should be used instead.
To configure the Xcode plugin for unit testing you need to write "test" in the "Custom xcodebuild arguments" field inside the "Advanced Xcode build options".
Xcode plugin maintainer here. I don't know the answer, but I would love to help you out.
Have you tried fiddling with the destination argument?
E.g. -destination 'OS=8.0,name=iPhone'
or -destination 'platform=iOS Simulator,OS=8.0,name=iPhone 6s'
(adjust depending on your needs)
If that doesn't work, please copy the output generated by running the tests from Xcode itself.
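For comparison, a minimal command-line invocation that runs tests outside Jenkins can help confirm the scheme itself is testable before configuring the plugin (destination values here are placeholders to adjust):

```shell
# If this succeeds from a plain shell, the scheme is test-capable and the
# remaining problem is in the plugin configuration.
xcodebuild \
-workspace MyWorkspace.xcworkspace \
-scheme MyTestScheme \
-destination 'platform=iOS Simulator,OS=8.0,name=iPhone 6s' \
clean test
```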
