How to debug lack of test caching - Go

I am running a Go test that I think should be cached when I run it more than once; however, Go runs the test every time.
Is there a flag or environment variable I can use to help determine why Go is deciding not to cache this particular test?

Set GODEBUG=gocachetest=1 in the environment, run the test twice, and diff the output between the two runs.
If that's not enough, you can use GODEBUG=gocachehash=1 to determine the components of the cache hash.
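For example, a minimal workflow could look like this (./mypkg is just a placeholder for your package path); comparing the two runs shows what Go reports about its caching decision, and the gocachehash run dumps the inputs that go into the cache key:
GODEBUG=gocachetest=1 go test ./mypkg >run1.txt 2>&1
GODEBUG=gocachetest=1 go test ./mypkg >run2.txt 2>&1
diff run1.txt run2.txt
GODEBUG=gocachehash=1 go test ./mypkg >hashes.txt 2>&1
Also note that results are only cached in package list mode (e.g. go test ./mypkg, not a bare go test inside the package directory), that only a small set of test flags is cacheable, and that passing -count=1 disables caching entirely.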

Related

Hibernate: DB not reliably rolled back at end of UnitTests in spite of @Transactional

We have a large application using Spring for application setup, initialisation and "wiring", and Hibernate as the persistence framework. For that application we have a couple of unit tests which are causing us headaches because they again and again run "red" when executed on our Jenkins build server.
These UnitTests execute and verify some rather complex and lengthy core operations of our application, and thus we considered it too complex and too much effort to mock the DB. Instead these UTs run against a real DB. Before the UTs are executed we create the objects required (the "pre-conditions"). Then we run a test and verify the creation of certain objects, their status, values, etc. All plain vanilla...
Since we run multiple tests in sequence which all need the same starting point, these tests derive from a common parent class that has an @Transactional annotation. The purpose of that is that the DB is always rolled back after each unit test so that the subsequent test can start from the same baseline.
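For context, the parent class is essentially just the standard Spring/JUnit 4 setup (class names here are simplified for this question, not our real code; TestConfig stands in for our Spring test configuration):
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TestConfig.class) // placeholder for our actual test configuration
@Transactional
public abstract class AbstractTransactionalDbTest {
    // common set-up creating the "pre-condition" objects lives here
}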
That approach is working perfectly and reliably when executing the unit-tests "locally" (i.e. running a "mvn verify" on a developer's workstation). However, when we execute the very same tests on our Jenkins, then - not always but very often - these tests fail because there are too many objects being found after a test or due to constraint violations because certain objects already exist that shouldn't yet be there.
As we found out by adding lots of log statements (because it's otherwise impossible to observe code running on Jenkins), the reason for these failures is that the DB is occasionally not properly rolled back after a prior test. Thus there are left-overs from the previous test(s) in the DB, and these then cause issues during subsequent tests.
What's puzzling us most is:
why are these tests failing ONLY when we execute them on Jenkins, but never when we run the very same tests locally? We are using an absolutely identical Maven command line and code here, and also the same Java version, Maven version, etc.
We are by now sure that this has nothing to do with UTs being executed in parallel, as we initially suspected. We disabled all options to run UTs in parallel that the Maven Surefire plugin offers. Our log statements also clearly show that the tests are perfectly serialized, but again and again objects "pile up", i.e. after each test method, objects that were supposed to have been removed/rolled back at the end of the test are still there, and their number increases with each test.
We also observed a certain "randomness" to this effect. Often the Jenkins builds run fine for several commits and then suddenly (even without any code change, just by retriggering a new build of the same branch) start to run red. The DB, however, is re-initialized before each build & test run, so that cannot be the source of this effect.
Any idea anyone what could cause this? Why do the DB rollbacks that are supposed to be triggered by the @org.springframework.transaction.annotation.Transactional annotation work reliably on our laptops but not on our build server? Any similar experiences and findings on that, anyone?

Re-running Cypress tests in GitHub Actions does not work

I have a Cypress workflow in GitHub Actions and it runs nicely. But when the e2e tests fail for some reason and I want to re-run them using the "Re-run all jobs" button, the following message appears:
The run you are attempting to access is already complete and will not accept new groups.
The existing run is: https://dashboard.cypress.io/projects/abcdef/runs
When a run finishes all of its groups, it waits for a configurable set of time before finally completing. You must add more groups during that time period.
The --tag flag you passed was:
The --group flag you passed was: core
What should I change in my configuration to make this possible? Sometimes the e2e tests fail because of a backend error that is fixed later.
I'd like to do this instead of forcing a new commit just to re-trigger the e2e run.
I was facing the same issue before.
I think you can try to pass GITHUB_TOKEN or add a custom build id; that fixed my issue. Hope it helps.
https://github.com/cypress-io/github-action#custom-build-id
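For example, here is a sketch of the relevant step (input names taken from the cypress-io/github-action README; the group name and secret names are whatever your workflow already uses). Including github.run_attempt in the build id means every re-run gets a fresh id instead of trying to join the already-completed run:
- name: Cypress run
  uses: cypress-io/github-action@v2
  with:
    record: true
    parallel: true
    group: core
    ci-build-id: ${{ github.run_id }}-${{ github.run_attempt }}
  env:
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}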
Check your Cypress Dashboard subscription plan. My free plan got full (500 tests for free, and I was running 57 tests in 3 different browsers, so it filled up pretty quickly since that is 171 tests in one run), and after that it didn't allow me to keep running or re-running more parallel tests. Tests kept running, but on 1 machine out of 4 in the first browser, and the stages for the other 2 browsers started failing. I was able to keep the pipeline from failing by passing continueOnError: true in the configuration (see the sketch below).
Quick edit: I don't remember where, but I read that you could also add a delay to your pipeline and/or reduce the default run-completion wait on the Dashboard, which is 60s (https://docs.cypress.io/guides/guides/parallelization#Run-completion-delay).
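Regarding continueOnError: in a GitHub Actions workflow the step-level equivalent is continue-on-error, roughly like this (just a sketch):
- name: Cypress run
  uses: cypress-io/github-action@v2
  continue-on-error: true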

Is it possible to run parallel tests with Browserstack local?

I'm using a test setup with BrowserMobProxy. I'm running my tests on Browserstack, so I have to start Browserstack Local, otherwise I can't use BrowserMobProxy.
The next step is to run my tests in parallel, but is this possible given that I'm using Browserstack Local? I can't find a solution.
Yes, it is possible to run parallel tests with Browserstack Local. You can run up to 100 parallel tests with only one instance of Browserstack Local without any performance impact.
In case you want to scale even more, you can start another instance of Browserstack Local with the flag --local-identifier <unique_string> and pass the capability browserstack.localIdentifier: <unique_string> in your test. This ensures the traffic of that test is routed through the binary instance with that local identifier. You can run multiple such instances by passing different unique strings.
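A rough sketch of how the two pieces fit together (the identifier, username, access key and browser capability are placeholders; adjust them to your own setup). Start the binary first:
./BrowserStackLocal --key <ACCESS_KEY> --local-identifier run-1
Then in the test:
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class LocalIdentifierExample {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browser", "Chrome");
        // route this test's traffic through the Local instance started with --local-identifier run-1
        caps.setCapability("browserstack.local", "true");
        caps.setCapability("browserstack.localIdentifier", "run-1");
        WebDriver driver = new RemoteWebDriver(
                new URL("https://<USERNAME>:<ACCESS_KEY>@hub-cloud.browserstack.com/wd/hub"),
                caps);
        driver.quit();
    }
}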

Change GVs at runtime in TIBCO

I need to change the value of a global variable (GV) while the engine is running. Does anybody know how to do it (Java code or whatever)? The idea is to integrate the change as part of a process.
What if I want to do it only in test mode? I have my unit tests in the same project as the functional code, so I want to make sure that no test GVs are active before starting. To do that without changing any EAR, I want to integrate this into the unit-test process: I start the tester, the process changes the GVs as needed, and then deletes them afterwards, so I don't have to worry about it.
Thanks in advance!!
Through the standard palette you can't alter GVs at runtime.
If you want to alter configuration at runtime, you can use shared variables and have a poller process reload them (e.g. on file change or every 5 min).

JMeter hanging in GUI mode

I'm having a problem with JMeter while running my test plan. Suddenly JMeter starts hanging and shows a black screen in GUI mode. I was running a Recording Controller with multiple thread groups (4 thread groups), each with 25 users.
I'm using JMeter 2.11 (the current version). I'm not sure whether it is due to overload or some other reason.
Don't ever use GUI mode for a load test. Run JMeter in command-line non-GUI mode as follows:
jmeter -n -t /path/to/your/testplan.jmx -l /path/to/testresults.jtl
Also, if you have any listeners in your test plan, disable or remove them as well. After test execution you should be able to open the testresults.jtl file with the listener of your choice and analyze the results, but don't use listeners during the load test.
Make sure that you follow the Performance Checklist and other recommendations from the JMeter Performance and Tuning Tips guide.
You have run out of JMeter/JVM memory. You can increase it with environment variables or command-line options when you start JMeter, or by changing some values in jmeter.properties.
This page describes how to start JMeter with more memory.
http://jmeter.apache.org/usermanual/get-started.html
One easy way is to set the environment variable before running jmeter:
set JVM_ARGS="-Xms1024m -Xmx1024m"
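(The set syntax above is for Windows; on Linux/macOS, assuming you launch JMeter via the standard jmeter shell script, the equivalent is:)
export JVM_ARGS="-Xms1024m -Xmx1024m"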
This will allow your tests to run longer before running out of memory, but if you store results in memory (for instance using a View Results Tree listener), you will still run out eventually. For long-running tests, or accurate measurement of short-running tests, it is better to run in non-GUI mode and save results to a file instead of memory.
Graphs can still be generated after the run from the saved results using jmeter utilities.
