I have created a bean to run a cron-based task and have written a test to ensure it works as expected.
@SpringBootTest(properties = ["tasks.mytask.cron=0/5 * * * * *"])
class ScheduledTasksTest {
    companion object {
        private val log = LoggerFactory.getLogger(ScheduledTasksTest::class.java)
    }

    @SpykBean
    lateinit var tasks: ScheduledTasks

    @Test
    fun `verify task mytask at least run twice in 10 seconds`() {
        await atMost (Duration.ofSeconds(10)) untilAsserted {
            verify(atLeast = 2) { tasks.mytask() }
        }
    }
}
This task is set to run once per day, but in this test, I changed it to run every 5 seconds.
The test runs fine on my local machine, but when I run all the tests on GitHub Actions, the task keeps running (at the every-5-seconds rate) and never stops.
Related
I want to run a periodic task. In Spring MVC it works flawlessly.
Now I want to integrate Spring WebFlux + Kotlin coroutines.
How can I call suspend functions from a @Scheduled method? I want it to wait until the suspend function has finished.
// This function runs every day at 00:10 UTC
@Scheduled(cron = "0 10 0 * * *", zone = "UTC")
fun myScheduler() {
    // ???
}
suspend fun mySuspendedFunction() {
// business logic
}
One option is to wrap the suspend call in runBlocking:
fun myScheduler() {
runBlocking {
mySuspendedFunction()
}
}
This way the coroutine will run in the thread that was blocked. If you need to run the code on a different thread or execute several coroutines in parallel, you can pass a dispatcher (e.g. Dispatchers.Default, Dispatchers.IO) to runBlocking() or use withContext().
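For example, here is a minimal sketch of the dispatcher variant (reusing mySuspendedFunction() and the cron expression from the question; the imports shown are only what such a setup would typically need):

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext
import org.springframework.scheduling.annotation.Scheduled

// Runs every day at 00:10 UTC; the scheduler thread blocks until the coroutine completes.
@Scheduled(cron = "0 10 0 * * *", zone = "UTC")
fun myScheduler() {
    runBlocking {
        // Offload the actual work to the IO dispatcher instead of the blocked scheduler thread.
        withContext(Dispatchers.IO) {
            mySuspendedFunction()
        }
    }
}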
Is it safe to launch a coroutine in a SmartLifecycle?
I need to use a CoroutineCrudRepository within an initializer on the very first startup, like the following, but I am unsure about the implications, since using GlobalScope is marked as a delicate API:
@Component
class Initializer(val configRepo: ConfigRepository) : SmartLifecycle {
private var running = false
override fun start() {
running = true
GlobalScope.launch {
val initialized = configRepo.findByKey(ConfigKey.INITIALIZED)
if (initialized == null) {
// very first run
// ... do some stuff ...
val c = Config(key = ConfigKey.INITIALIZED, value = "1")
configRepo.save(c)
}
running = false
}
}
override fun stop() {
}
override fun isRunning(): Boolean = running
}
From what I understand, there is no way to stop the coroutine, so I cannot really implement stop(). But my guess was that this is ok-ish during startup, because either startup fails and the whole application is shut down (hence the coroutine would stop consuming resources), or the application starts up fine and I can at least report isRunning from within the coroutine.
I would assume configRepo works fine, but I do not fully understand what would happen if the coroutine got stuck.
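A minimal sketch of one possible alternative (purely illustrative, not part of the question): launch the work in a scope owned by the bean instead of GlobalScope, so that stop() can cancel it:

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.launch
import org.springframework.context.SmartLifecycle
import org.springframework.stereotype.Component

@Component
class Initializer(val configRepo: ConfigRepository) : SmartLifecycle {
    // Scope owned by this bean, so the initialization work can be cancelled from stop().
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)
    private var running = false

    override fun start() {
        running = true
        scope.launch {
            if (configRepo.findByKey(ConfigKey.INITIALIZED) == null) {
                // very first run
                configRepo.save(Config(key = ConfigKey.INITIALIZED, value = "1"))
            }
            running = false
        }
    }

    override fun stop() {
        scope.cancel() // cancels the coroutine if it is still running
        running = false
    }

    override fun isRunning(): Boolean = running
}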
I have some performance problems with specifications implemented in Spock - long execution times in particular. After digging into the problem, I noticed that it is somehow related to setting the spec up - I don't mean the setup() method specifically.
After this discovery, I added the @Shared annotation to all the fields declared in the specification, and it now runs twice as fast as before. Then I thought the performance problems might be related to ConcurrentHashMap or the random* methods (from commons-lang3), but that wasn't the case.
In the end, in an act of desperation, I decorated all the fields in my specification in the following way:
class EntryFacadeSpec extends Specification {
static {
println(System.currentTimeMillis())
}
@Shared
def o = new Object()
static {
println(System.currentTimeMillis())
}
@Shared
private salesEntries = new InMemorySalesEntryRepository()
static {
println(System.currentTimeMillis())
}
@Shared
private purchaseEntries = new InMemoryPurchaseEntryRepository()
static {
println(System.currentTimeMillis())
}
...
Interestingly, no matter which field is declared first, it takes hundreds of milliseconds to initialize that field:
1542801494583
1542801495045
1542801495045
1542801495045
1542801495045
1542801495045
1542801495045
1542801495045
1542801495045
1542801495045
1542801495046
1542801495046
1542801495046
1542801495046
1542801495047
1542801495047
What's the problem? How can I save these several hundred milliseconds?
TL;DR
Calling println in the first static block initializes about 30k+ objects related to the Groovy Development Kit. This can take at least 50 ms to finish, depending on the horsepower of the machine the test runs on.
The details
I couldn't reproduce a lag of hundreds of milliseconds, but I was able to get a lag of between 30 and 80 milliseconds. Let's start with the class I used in my local tests to reproduce your use case.
import spock.lang.Shared
import spock.lang.Specification
class EntryFacadeSpec extends Specification {
static {
println("${System.currentTimeMillis()} - start")
}
@Shared
def o = new Object()
static {
println("${System.currentTimeMillis()} - object")
}
@Shared
private salesEntries = new InMemorySalesEntryRepository()
static {
println("${System.currentTimeMillis()} - sales")
}
@Shared
private purchaseEntries = new InMemoryPurchaseEntryRepository()
static {
println("${System.currentTimeMillis()} - purchase")
}
def "test 1"() {
setup:
System.out.println(String.format('%d - test 1', System.currentTimeMillis()))
when:
def a = 1
then:
a == 1
}
def "test 2"() {
setup:
System.out.println(String.format('%d - test 2', System.currentTimeMillis()))
when:
def a = 2
then:
a == 2
}
static class InMemorySalesEntryRepository {}
static class InMemoryPurchaseEntryRepository {}
}
Now, when I run it I see something like this in the console.
1542819186960 - start
1542819187019 - object
1542819187019 - sales
1542819187019 - purchase
1542819187035 - test 1
1542819187058 - test 2
We can see a 59-millisecond lag between the first two static blocks. It doesn't matter what is between these two blocks, because the Groovy compiler merges all 4 static blocks into a single static block that looks like this in plain Java:
static {
$getCallSiteArray()[0].callStatic(EntryFacadeSpec.class, new GStringImpl(new Object[]{$getCallSiteArray()[1].call(System.class)}, new String[]{"", " - start"}));
$getCallSiteArray()[2].callStatic(EntryFacadeSpec.class, new GStringImpl(new Object[]{$getCallSiteArray()[3].call(System.class)}, new String[]{"", " - object"}));
$getCallSiteArray()[4].callStatic(EntryFacadeSpec.class, new GStringImpl(new Object[]{$getCallSiteArray()[5].call(System.class)}, new String[]{"", " - sales"}));
$getCallSiteArray()[6].callStatic(EntryFacadeSpec.class, new GStringImpl(new Object[]{$getCallSiteArray()[7].call(System.class)}, new String[]{"", " - purchase"}));
}
So this 59-millisecond lag happens between the first two lines. Let's put a breakpoint on the first line and run the debugger.
Stepping over this line to the next one, we can see that invoking Groovy's println("${System.currentTimeMillis()} - start") caused the creation of more than 30k objects in the JVM. Stepping over the second line to the third one, only a few more objects get created.
This example shows that adding
static {
println(System.currentTimeMillis())
}
adds accidental complexity to the test setup: it does not show that there is a lag between the initialization of two fields, it creates this lag. However, the cost of initializing all the Groovy-related objects is something we can't completely avoid; it has to be paid somewhere. For instance, if we simplify the test to something like this:
import spock.lang.Specification
class EntryFacadeSpec extends Specification {
def "test 1"() {
setup:
println "asd ${System.currentTimeMillis()}"
println "asd ${System.currentTimeMillis()}"
when:
def a = 1
then:
a == 1
}
def "test 2"() {
setup:
System.out.println(String.format('%d - test 2', System.currentTimeMillis()))
when:
def a = 2
then:
a == 2
}
}
and we put a breakpoint on the first println statement and step over to the next one, we will see that it still creates a few thousand objects. However, this is much less than in the first example, because most of the objects we saw there had already been created before Spock executed the first method.
Overclocking Spock test performance
One of the first things we can do is use static compilation. In the case of my simple test, it reduced execution time from roughly 300 ms (without static compilation) to roughly 227 ms. The number of objects that have to be initialized is also significantly reduced. If I run the same debugger scenario as the one shown above with @CompileStatic added, the count is still pretty significant, but the number of objects initialized to invoke the println method drops noticeably.
And the last thing worth mentioning: when we use static compilation and we want to avoid calling Groovy methods in the class static block to print some output, we can use:
System.out.println(String.format("...", args))
because Groovy executes exactly this. On the other hand, the following Groovy code:
System.out.printf("...", args)
may look similar to the previous one, but it gets compiled to something like this (with static compilation enabled):
DefaultGroovyMethods.printf(System.out, "...", args)
The second case will be much slower when used in the class static block, because at that point the Groovy JAR is not yet loaded and the classloader has to resolve the DefaultGroovyMethods class from the JAR file. By the time Spock executes a test method, it doesn't make much difference whether you use System.out.println or DefaultGroovyMethods.printf, because the Groovy classes are already loaded.
That is why if we rewrite your initial example to something like this:
import groovy.transform.CompileStatic
import spock.lang.Shared
import spock.lang.Specification
@CompileStatic
class EntryFacadeSpec extends Specification {
static {
System.out.println(String.format('%d - start', System.currentTimeMillis()))
}
@Shared
def o = new Object()
static {
System.out.println(String.format('%d - object', System.currentTimeMillis()))
}
@Shared
private salesEntries = new InMemorySalesEntryRepository()
static {
System.out.println(String.format('%d - sales', System.currentTimeMillis()))
}
@Shared
private purchaseEntries = new InMemoryPurchaseEntryRepository()
static {
System.out.println(String.format('%d - purchase', System.currentTimeMillis()))
}
def "test 1"() {
setup:
System.out.println(String.format('%d - test 1', System.currentTimeMillis()))
when:
def a = 1
then:
a == 1
}
def "test 2"() {
setup:
System.out.println(String.format('%d - test 2', System.currentTimeMillis()))
when:
def a = 2
then:
a == 2
}
static class InMemorySalesEntryRepository {}
static class InMemoryPurchaseEntryRepository {}
}
we will get the following console output:
1542821438552 - start
1542821438552 - object
1542821438552 - sales
1542821438552 - purchase
1542821438774 - test 1
1542821438786 - test 2
But what is more important, it doesn't measure field initialization time at all, because Groovy compiles these 4 blocks into a single one like this:
static {
System.out.println(String.format("%d - start", System.currentTimeMillis()));
Object var10000 = null;
System.out.println(String.format("%d - object", System.currentTimeMillis()));
var10000 = null;
System.out.println(String.format("%d - sales", System.currentTimeMillis()));
var10000 = null;
System.out.println(String.format("%d - purchase", System.currentTimeMillis()));
var10000 = null;
}
There is no lag between the 1st and 2nd call, because there is no need to load Groovy classes at this point.
Questions about testng-failed.xml have already been asked many times, but my problem is a little different. I want to run all the failed test cases together, so in my pom I passed testng-failed.xml.
The problem I am facing is that first my testng.xml runs, then testng-failed.xml, and only then does testng-failed.xml get overwritten. Because of this, if I give my test cases a second, fresh run, testng.xml runs first, then testng-failed.xml still contains the previously failed test cases, so it reruns those, and only afterwards is testng-failed.xml updated with this run's failures.
I don't know which listener to add to handle this: whenever I run, testng.xml should run first, then it should overwrite testng-failed.xml, and then testng-failed.xml should run.
I am using Maven, Selenium, and TestNG.
I just entered testng-failed.xml in my pom as shown below. Please let me know which listener to use:
<suiteXmlFiles>
<suiteXmlFile>src/resources/testng/testng.xml</suiteXmlFile>
<suiteXmlFile>test-output/testng-failed.xml</suiteXmlFile>
</suiteXmlFiles>
Create a class 'RetryListener' by implementing 'IAnnotationTransformer':
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import org.testng.IAnnotationTransformer;
import org.testng.IRetryAnalyzer;
import org.testng.annotations.ITestAnnotation;

public class RetryListener implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation testannotation, Class testClass,
            Constructor testConstructor, Method testMethod) {
        IRetryAnalyzer retry = testannotation.getRetryAnalyzer();
        if (retry == null) {
            testannotation.setRetryAnalyzer(Retry.class);
        }
    }
}
Now create another class:
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class Retry implements IRetryAnalyzer {

    private int retryCount = 0;
    private int maxRetryCount = 1;

    // Returns 'true' if the test method has to be retried, otherwise 'false'.
    // It takes the result of the test method that just ran as its parameter.
    public boolean retry(ITestResult result) {
        if (retryCount < maxRetryCount) {
            System.out.println("Retrying test " + result.getName() + " with status "
                    + getResultStatusName(result.getStatus()) + " for the " + (retryCount + 1) + " time(s).");
            retryCount++;
            return true;
        }
        return false;
    }

    public String getResultStatusName(int status) {
        String resultName = null;
        if (status == 1)
            resultName = "SUCCESS";
        if (status == 2)
            resultName = "FAILURE";
        if (status == 3)
            resultName = "SKIP";
        return resultName;
    }
}
Now add the lines below to your TestNG XML file:
<listeners>
<listener class-name="com.pack.test.RetryListener"/>
</listeners>
And do not pass the XML file in pom.xml.
Hope it works.
Thanks
Why are you running the testng.xml and the failed-tests XML in the same TestNG task? You should have two separate build tasks: one that runs testng.xml and generates the failed-tests XML, and another that runs the failed-tests XML. That will work.
I implemented one full run, plus up to three reruns of only the newly failed tests:
mvn $par1=$pSuiteXmlFile test > $test1log
mvn $par1=$failedRelPath test > $failed1log
mvn $par1=$failedRelPath test > $failed2log
mvn $par1=$failedRelPath test > $failed3log
It works, but only with a small test-case count. I have a suite with 300 tests in it, and somehow testng-failed.xml is not created by Surefire/TestNG after the main (first) run. When the suite is smaller, testng-failed.xml is created as required.
In my Gradle file, I have 3 tasks of type Test:
/**
* This is a test block called 'sampleA' to run testng tests.
*/
task sampleA(type: Test) {
include "**/Helloworld4a*"
}
/**
* This is a test block called 'sampleB' to run testng tests.
*/
task sampleB(type: Test) {
include "**/Helloworld4b*"
}
/**
* This is a test block called 'sampleC' to run testng tests.
* This block depends on sampleB block.
*/
task sampleC(type: Test, dependsOn: sampleB) {
include "**/Helloworld4*"
exclude "**/Helloworld4a*"
exclude "**/Helloworld4b*"
}
Now I have created a plugin to which I add a TaskExecutionListener. In the TaskExecutionListener, I just create a file per task, irrespective of whether the task executed successfully or not.
sampleA will have test failures
sampleB will have test failures
sampleC will NOT have failures
When I run gradle sampleA sampleB sampleC, it runs only sampleA, which is expected (since the task failed).
But when I use the --continue option, the result is the same. Without the listener, I see both sampleA and sampleB being run.
Here is my listener class:
class TestInfraTaskListener implements TaskExecutionListener {

    /**
     * Required by the TaskExecutionListener interface; nothing to do before a task executes.
     */
    public void beforeExecute(Task task) {
    }

    /**
     * Generate the test results for the EMTestNGTest types
     */
    public void afterExecute(Task task, TaskState state) {
        if (task instanceof Test) {
            /* If testng test type, then generate results. */
            def resultHandler = new TestResultsHandler(task, state)
            /* Generate individual result files */
            resultHandler.generateResultFiles()
            resultHandler.uploadToJira()
            state.rethrowFailure()
        }
    }
}
When I set ignoreFailures = true, all 3 tasks are run. What am I doing wrong here? I want only sampleA and sampleB to run with the --continue option.
My Gradle version is 1.11 (I do not have authority to upgrade)
OK, I finally got it working with the rules below:
DO NOT set ignoreFailures = true in your Test tasks.
DO NOT call state.rethrowFailure() or throw new GradleException(...) under any circumstances (whether the task succeeded or failed).