I have a total of 405 tests. They all run fine on a single thread. However, when I run them in parallel, the tests do not seem to be allocated to the threads correctly.
For example, executing with 3 threads:
mvn integration-test -Dwebdriver.remote.url=http://selenium-hub.project.svc.cluster.local:4444/wd/hub \
-Dwebdriver.remote.driver=chrome \
-Dwebdriver.driver=chrome \
-Dconfig.threads=3 \
-Dserenity.batch.size=3 \
-Dserenity.batch.number=<"from 1 to 3"> \
-Dserenity.batch.strategy=DIVIDE_BY_TEST_COUNT \
-Dserenity.take.screenshots=FOR_EACH_ACTION
After triggering Maven as in the sample above, the tests were allocated as follows:
Thread 1: 106
Thread 2: 96
Thread 3: 103
Total: 305
The funny thing is that those numbers vary: the test count per thread changes on every execution. It also looks as if it is counting 4 threads instead of 3.
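For reference, here is a minimal sketch (hypothetical, not Serenity's actual implementation) of what an even DIVIDE_BY_TEST_COUNT split of 405 tests across 3 batches should look like; the totals above fall roughly 100 tests short of it:

```java
public class BatchSplit {
    public static void main(String[] args) {
        int totalTests = 405;
        int batchCount = 3;
        int sum = 0;
        // An even split gives each batch total/count tests, spreading
        // any remainder over the first batches.
        for (int batch = 1; batch <= batchCount; batch++) {
            int size = totalTests / batchCount
                    + (batch <= totalTests % batchCount ? 1 : 0);
            System.out.println("Batch " + batch + ": " + size + " tests");
            sum += size;
        }
        System.out.println("Total: " + sum); // prints "Total: 405", not 305
    }
}
```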
I'm running those tests using Jenkins, hosted in an Openshift environment.
I found a workaround by increasing the number of threads (e.g. from 3 to 4). It looks like there is somehow a limitation on the number of tests executed per thread.
I will search for this configuration and keep this post updated in case I find something.
I am running Cucumber JUnit tests in Java using a Spring Boot project, and I am getting the following console output:
Tests passed :30 of 30 tests
Note: I am actually running only 10 scenarios, each with 2 steps, but the output shows 30 passed (10 + 10*2 = 30). This number does not make sense; it should show me 10 of 10 tests.
In my build.gradle file, the Cucumber version used is '1.2.5':
List cucumber = ["info.cukes:cucumber-jvm:${cucumberVersion}",
"info.cukes:cucumber-core:${cucumberVersion}",
"info.cukes:cucumber-java:${cucumberVersion}",
"info.cukes:cucumber-junit:${cucumberVersion}",
"info.cukes:cucumber-spring:${cucumberVersion}"]
Is there any way I can get meaningful output, instead of the step count being added to the actual scenario count?
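For what it's worth, the reported 30 is exactly what you would get if the runner counted each scenario plus each of its steps as a separate test; a quick sanity check of that arithmetic:

```java
public class ScenarioCount {
    public static void main(String[] args) {
        int scenarios = 10;
        int stepsPerScenario = 2;
        // Counting every scenario AND every step as an individual "test":
        int reported = scenarios + scenarios * stepsPerScenario;
        System.out.println("Tests passed :" + reported + " of " + reported + " tests");
        // prints "Tests passed :30 of 30 tests"
    }
}
```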
I have a master/slave setup for JMeter using JMeter 5.1.
From time to time I notice that the tests just hang while waiting for threads to shut down.
In jmeter.log I am seeing:
2020-02-06 00:06:35,100 INFO o.a.j.r.Summariser: summary + 9 in 00:30:34 = 0.0/s Avg: 5647 Min: 5520 Max: 5833 Err: 0 (0.00%) Active: 1 Started: 4 Finished: 3
I tried waiting, but this 1 active thread never finishes, and it causes issues for the rest of the steps in my pipeline, which read the JMeter test result file and generate an HTML report.
Any suggestions on how to debug this?
I saw this post:
Threads keep running even after test finishes in Jmeter
But it would be nice to understand the issue, rather than just forcing the threads to stop.
If you want to "understand" the issue, you need to understand what this thread is doing, and the only way to get that information is to take a JVM thread dump. The options are:
Starting from JMeter version 3.2 there is an option to take a thread dump directly from the JMeter GUI
You can use the jstack tool, providing it the PID of the Java process where JMeter is running
On Linux you can use the kill -3 command, which will print the status of the threads to the console window
You can also check jmeter-server.log for any suspicious entries.
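As an illustration of what a thread dump contains, this minimal self-contained Java sketch dumps all live threads of its own JVM via the JMX ThreadMXBean API; jstack or kill -3 give you essentially the same information for the JMeter process:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class DumpThreads {
    public static void main(String[] args) {
        // dumpAllThreads(true, true) includes locked monitors and
        // ownable synchronizers, so you can see what a stuck thread
        // is waiting on.
        ThreadInfo[] infos = ManagementFactory.getThreadMXBean()
                .dumpAllThreads(true, true);
        for (ThreadInfo info : infos) {
            System.out.print(info); // thread name, state and stack trace
        }
    }
}
```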
I have a Maven project with JMeter integrated into it to test performance. So far I have 6 thread groups in my test plan, all of which contain HTTP Requests. I run the tests using the command "mvn clean verify" from the jmeter-maven plugin. Among the results I found multiple rows like this:
summary + 1 in 00:00:02 = 0.6/s Avg: 208 Min: 208 Max: 208 Err: 0 (0.00%) Active: 6 Started: 12 Finished: 6
I need some extra information in the console, especially the name and the average time of each thread group or HTTP Request that ran. For example, something similar to the aggregate report from GUI mode:
Label Samples Average Median 90% Line 95% Line 99% Line Min Max ...
AppleCodeRequest 6 196 119 279 284 284 108 284
PearCodeRequest 3 382 485 490 490 490 173 490
I want this because I am using a shell script to run the tests and I would like to catch performance issues before opening the HTML reports.
Is there any way to obtain this? Maybe some user properties (even though I searched for one with no result) or some workaround?
The easiest solution is to go for a plugin like BlazeMeter Uploader; this way you will be able to observe real-time test metrics in a fancy web UI. You can install the BlazeMeter Uploader plugin using the JMeter Plugins Manager.
An alternative solution would be to use the JMeterPluginsCMD Command Line Tool.
Add the following lines to your pom.xml file:
<configuration>
<jmeterExtensions>
<artifact>kg.apc:jmeter-plugins-cmd:2.2</artifact>
<artifact>kg.apc:jmeter-plugins-synthesis:2.2</artifact>
<artifact>kg.apc:jmeter-plugins-dummy:0.2</artifact>
<artifact>kg.apc:cmdrunner:2.0</artifact>
<artifact>kg.apc:jmeter-plugins-filterresults:2.2</artifact>
<artifact>kg.apc:jmeter-plugins-cmn-jmeter:0.6</artifact>
</jmeterExtensions>
<!-- The plugin uses some broken dependencies
An alternative is to set this to true and use excludedArtifacts, see below
-->
<downloadExtensionDependencies>false</downloadExtensionDependencies>
<propertiesJMeter>
<jmeter.save.saveservice.autoflush>true</jmeter.save.saveservice.autoflush>
</propertiesJMeter>
</configuration>
Add another Thread Group to your Test Plan with 1 user and an infinite number of loops
Add a JSR223 Sampler to the Thread Group
Put the following code into the "Script" area:
SampleResult.setIgnore() // don't record this helper sampler in the results
// pick up the results file produced by the jmeter-maven-plugin
def resultFile = new File('../results').list().first()
// build an aggregate-report CSV from the results via JMeterPluginsCMD
"java -jar ../lib/ext/cmdrunner-2.0.jar --tool Reporter --generate-csv temp.csv --input-jtl ../results/$resultFile --plugin-type AggregateReport".execute().waitFor()
// print the CSV to the console and clean up
println("cat temp.csv".execute().text)
new File("temp.csv").delete()
Control how frequently you want to see this info using e.g. a Constant Timer
You should be able to see the results in the console window.
I'm running the following command to run my .net Core tests:
dotnet test
This runs fine. I now want to generate code coverage stats, so after following this article, I ran this:
dotnet test AI.Core.Tests.csproj
/p:CollectCoverage=true
/p:CoverletOutputFormat=cobertura
/p:CoverletOutput=TestResults\Coverage
I get the following output from this command:
C:\Users\sp4_rm\.nuget\packages\coverlet.msbuild\2.2.1\build\netstandard2.0\coverlet.msbuild.targets(23,5): error :
Index was out of range. Must be non-negative and less than the size of the collection.
[C:\Users\sp4_rm\Desktop\EVO\AI.Core\src\Tests\AI.Core.Tests.csproj]
C:\Users\sp4_rm\.nuget\packages\coverlet.msbuild\2.2.1\build\netstandard2.0\coverlet.msbuild.targets(23,5): error :
Parameter name: index
[C:\Users\sp4_rm\Desktop\EVO\AI.Core\src\Tests\AI.Core.Tests.csproj]
Has anyone got this command running? What am I doing wrong?
OK, so this was due to a schoolboy error: I didn't actually have any code to test (or any test cases to test it with) in my sample project! Adding a couple of classes to the main project and a couple of tests to the test project makes this problem go away (just in case anyone does the same thing as me).
I'm trying to run those Flink Benchmarks:
https://github.com/dataArtisans/flink-benchmarks
I've generated the jar file using maven with that command:
mvn clean package -Pbuild-jar
Then I'm trying to run the benchmark on a Flink Cluster with that command:
./bin/flink run -c org.apache.flink.benchmark.WindowBenchmarks ~/flinkBenchmarks/target/flink-hackathon-benchmarks-0.1.jar
I've used the -c option to specify the entry class of the benchmark (WindowBenchmarks) I want to run.
Finally, I get that error:
# JMH version: 1.19
# VM version: JDK 1.8.0_151, VM 25.151-b12
# VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
# VM options: -Dlog.file=/home/user/flink-1.3.2/flink-dist/target/flink-1.3.2-bin/flink-1.3.2/log/flink-user-client-mypc.log -Dlog4j.configuration=file:/home/user/flink-1.3.2/flink-dist/target/flink-1.3.2-bin/flink-1.3.2/conf/log4j-cli.properties -Dlogback.configurationFile=file:/home/user/flink-1.3.2/flink-dist/target/flink-1.3.2-bin/flink-1.3.2/conf/logback.xml -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
# Warmup: 10 iterations, 1 s each
# Measurement: 10 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.apache.flink.benchmark.WindowBenchmarks.sessionWindow
# Run progress: 0.00% complete, ETA 00:04:00
# Fork: 1 of 3
Error: Could not find or load main class org.openjdk.jmh.runner.ForkedMain
<forked VM failed with exit code 1>
<stdout last='20 lines'>
</stdout>
<stderr last='20 lines'>
Error: Could not find or load main class org.openjdk.jmh.runner.ForkedMain
</stderr>
# Run complete. Total time: 00:00:00
Benchmark Mode Cnt Score Error Units
The program didn't contain a Flink job. Perhaps you forgot to call execute() on the execution environment.
I don't have any previous experience with Flink and Maven, so I can't figure out what is missing. My first thought was a missing-dependency error, but the dependencies look fine. Any suggestions?
Thank you in advance!
flink-benchmarks is a repository that contains sets of micro-benchmarks designed to run on a single machine, not on a cluster. The main functions defined in the various classes (test cases) are JMH runners, not Flink programs. As such, you can either execute the whole benchmark suite (which takes ~1 hour):
mvn -Dflink.version=1.5.0 clean install exec:exec
or, if you want to execute just one benchmark, the best approach is to run the selected main function manually, for example from your IDE (don't forget to select flink.version; the default value for the property is defined in pom.xml).
It is also possible to execute a single benchmark from the console, but I haven't tried that in a very long time.
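For the console route, a hedged sketch (the jar name comes from the question; the exact packaging may differ, so treat this as an assumption): run the benchmark class's own main with plain java rather than flink run, since it is a JMH runner, not a Flink job:

```shell
# Run the JMH main directly in a local JVM instead of submitting it
# to the cluster via 'flink run':
java -cp ~/flinkBenchmarks/target/flink-hackathon-benchmarks-0.1.jar \
     org.apache.flink.benchmark.WindowBenchmarks
```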