How do I disable certain specific tests in mesos?
Say for example, I wish to disable TestCategory.SpecificTest from running when I execute make check. How do I do that? Where do I make those changes?
You can do this in two ways.

1. Specify GTEST_FILTER when invoking make check (a leading - excludes the matching tests):

make check GTEST_FILTER=-TestCategory.SpecificTest

2. Mark the test as DISABLED in the source:

TEST(TestCategory, DISABLED_SpecificTest) { ... }
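GTEST_FILTER accepts a colon-separated list of patterns with * wildcards, and a leading - negates everything after it. As a sketch (OtherCategory is a made-up name for illustration):

```shell
# Exclude one exact test plus every test in another category.
make check GTEST_FILTER='-TestCategory.SpecificTest:OtherCategory.*'
```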
References
Running a Subset of the Tests.
Temporarily Disabling Tests
Related
I have a Cypress workflow in GitHub and it runs nicely. But when the e2e tests fail for some reason and I want to re-run them using the re-run all jobs button, the following message appears:
The run you are attempting to access is already complete and will not accept new groups.
The existing run is: https://dashboard.cypress.io/projects/abcdef/runs
When a run finishes all of its groups, it waits for a configurable set of time before finally completing. You must add more groups during that time period.
The --tag flag you passed was:
The --group flag you passed was: core
What should I change in my configuration to make this possible? Sometimes the e2e tests fail because of a backend error that is fixed later.
I'd like to do this instead of forcing a new commit just to re-trigger the e2e run.
I was facing the same issue before.
I think you can try to pass GITHUB_TOKEN or add a custom build id; that fixed my issue. Hope it helps.
https://github.com/cypress-io/github-action#custom-build-id
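As a sketch of the custom-build-id approach (the exact value is an assumption; the point is that it must change on each re-run attempt so the re-run records a fresh run instead of trying to join the completed one):

```shell
# Hypothetical CI step: GITHUB_RUN_ATTEMPT increments on "Re-run all jobs",
# so each re-run gets a new build id and a new recorded run.
npx cypress run --record --group core \
  --ci-build-id "${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
```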
Check your Cypress Dashboard subscription plan. My free plan's quota filled up (500 tests for free; I was running 57 tests in 3 different browsers, i.e. 171 tests in one run, so it filled pretty quickly), and after that it didn't allow me to keep running or re-running more parallel tests. Tests kept running, but on only 1 machine out of 4 in the first browser, and the stages for the other 2 browsers started failing. I was able to keep the pipeline from failing by passing continueOnError: true in the configuration.
Quick edit: I don't remember where, but I read that you could also add a delay to your pipeline and/or reduce the default run-completion wait on the Dashboard, which is 60s (https://docs.cypress.io/guides/guides/parallelization#Run-completion-delay).
I have a Gradle build with both local and remote cache configured. Among other things I use the Spotless Gradle plugin. That plugin has marked its tasks (spotlessCheck and spotlessApply) as cacheable. The problem is that in my case the task itself is quite fast, so checking for the task's output in the remote cache takes more time than actually running the task.
So my question: is it possible to disable the cache for one task introduced by a third-party plugin? Even better, is it possible to disable just the remote cache for just one task?
I don't think those two particular tasks you mention have the build cache enabled. But other ones like spotlessJava do.
In any case, when you have figured out which tasks use the build cache (e.g. by running with -i), you can configure them with outputs.cacheIf { false }.
Note that this disables both the local and remote build cache. I am not aware of a way to selectively disable just the remote cache for a given task but keep the local one enabled.
For instance:
tasks.named("spotlessJava") {
outputs.cacheIf { false }
}
I don't think that disabling only the remote cache is possible, but if your problem is that the cached result is too big and the build wastes a lot of time trying to upload it (which always fails anyway), you can solve this using the incubating useExpectContinue property.
It tries to check whether the upload is possible before doing it; that's good enough for me.
I am running a Go test that I think should be cached when I run it more than once, however, Go is running the test every time.
Is there a flag or environment variable I can use to help determine why Go is deciding not to cache this particular test?
Set GODEBUG=gocachetest=1 in the environment, run the test twice and diff the output between test runs.
If that's not enough you can use GODEBUG=gocachehash=1 to determine the components of the cache hash.
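For instance (the test and package names here are placeholders), you can capture the cache-debug output of two consecutive runs and diff them to see which input changed between runs:

```shell
# Run the same test twice with cache debugging on, then compare.
GODEBUG=gocachetest=1 go test -run TestFoo ./mypkg 2> run1.log
GODEBUG=gocachetest=1 go test -run TestFoo ./mypkg 2> run2.log
diff run1.log run2.log

# If the reason is still unclear, dump the cache hash inputs as well.
GODEBUG=gocachehash=1 go test -run TestFoo ./mypkg 2> hash.log
```

Also note that go test only caches results when invoked in package-list mode (e.g. go test ./mypkg, not go test inside the package directory), and flags such as -count=1 deliberately bypass the cache.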
We've been using Kerberos auth with several (older) Cloudera instances without a problem, but are now getting 'KerberosName$NoMatchingRule: No rules applied to user@REALM' errors. We've been modifying code to add functionality, but AFAIK nobody has touched either the authentication code or the cluster configuration.
(I can't rule it out - and clearly SOMETHING has changed.)
I've set up a simple unit test and verified this behavior. At the command line I can execute 'kinit -kt user.keytab user' and get the corresponding Kerberos tickets, which verifies that the configuration and keytab file are correct.
However my standalone app fails with the error mentioned.
UPDATE
As I edit this I've been running the test in the debugger so I can track down exactly where the test is failing, and it seems to succeed when run in the debugger! Obviously there's something different in the environments, not some weird heisenbug that is only triggered when nobody is looking.
I'll update this if I find the cause. Does anyone else have any ideas?
auth_to_local has to have at least one rule.
Make sure you have a "DEFAULT" rule at the very end of auth_to_local.
If none of the rules before it match, at least the DEFAULT rule will kick in.
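For example, in a Hadoop-style core-site.xml (the realm EXAMPLE.COM and the leading rule are placeholders; the point is the trailing DEFAULT line):

```xml
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    DEFAULT
  </value>
</property>
```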
I have a test plan which contains a test suite with 4 sub-suites of automated test cases. I want to execute all the sub-suites one after the other without manual intervention. Is that possible?
Try running the test cases from the command prompt; this requires the least manual intervention, since once everything is set up you only need to change the build number to run your test cases.
Refer to this link.