Unable to achieve expected transactions in karate-gatling for load testing [duplicate]

I am trying to reuse Karate scripts to perform load testing with Gatling. The scenario is defined to inject a constant 50 users per second for 10 seconds (500 users in total). However, the request rate never exceeds 20 requests per second in the Gatling report. Please let me know if I am doing anything wrong.
ExamplesTest.java, which executes the Karate scripts:
//package examples;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;
import java.io.File;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import net.masterthought.cucumber.Configuration;
import net.masterthought.cucumber.ReportBuilder;
import org.apache.commons.io.FileUtils;

class ExamplesTest {

    @Test
    void testParallel() {
        //System.setProperty("karate.env", "demo"); // ensure reset if other tests (e.g. mock) had set env in CI
        Results results = Runner.path("classpath:examples").tags("~@ignore").parallel(10);
        generateReport(results.getReportDir());
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }

    public static void generateReport(String karateOutputPath) {
        Collection<File> jsonFiles = FileUtils.listFiles(new File(karateOutputPath), new String[] {"json"}, true);
        List<String> jsonPaths = new ArrayList<String>(jsonFiles.size());
        jsonFiles.forEach(file -> jsonPaths.add(file.getAbsolutePath()));
        Configuration config = new Configuration(new File("target"), "demo");
        ReportBuilder reportBuilder = new ReportBuilder(jsonPaths, config);
        reportBuilder.generateReports();
    }
}
Scala code defining the load test scenario:
package perf

import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class KarateSimulate extends Simulation {

  val protocol = karateProtocol(
    "/v2/" -> Nil,
    "/v2/" -> pauseFor("get" -> 0, "post" -> 25)
  )

  val userfeeder = csv("data/Token.csv").circular

  val getScores = scenario("Get Scores for Students")
    .feed(userfeeder)
    .exec(karateFeature("classpath:examples/scores/student.feature"))

  setUp(
    getScores.inject(constantUsersPerSec(50) during (10 seconds)).protocols(protocol)
  )
}

We updated the docs (in the develop branch) with tips on how to increase the thread-pool size if needed: https://github.com/intuit/karate/tree/develop/karate-gatling#increasing-thread-pool-size
Add a file called gatling-akka.conf to the root of the classpath (typically src/test/resources). Here is an example:
akka {
  actor {
    default-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        fixed-pool-size = 100
      }
      throughput = 1
    }
  }
}
We made some fixes recently, so if the above does not work for 0.9.6.RC4, please try building from source; it is easy, and the instructions are here: https://github.com/intuit/karate/wiki/Developer-Guide
If that does not work, it is important that you follow this process so that we can replicate: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
Please see these links below for good examples of how others have worked with the Karate project team to replicate issues so that they can be fixed:
https://github.com/intuit/karate/issues/1668
https://github.com/intuit/karate/issues/845

Related

How to get the immediate parent name for a sampler in JMeter

How do I get the immediate parent name for a sampler in JMeter? I have many Transaction Controllers and I am using JMeter 5.3.
I have a Beanshell script for this, shown below, but it always prints the very first controller name.
import org.apache.jmeter.control.GenericController;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jorphan.collections.HashTree;
import org.apache.jorphan.collections.SearchByClass;
import java.lang.reflect.Field;
import java.util.Collection;

StandardJMeterEngine engine = ctx.getEngine();
Field test = engine.getClass().getDeclaredField("test");
test.setAccessible(true);
HashTree testPlanTree = (HashTree) test.get(engine);
SearchByClass simpleCtrlSearch = new SearchByClass(GenericController.class);
testPlanTree.traverse(simpleCtrlSearch);
Collection simpleControllers = simpleCtrlSearch.getSearchResults();
for (Object simpleController : simpleControllers) {
    log.info(((GenericController) simpleController).getName());
}
In general this is either not possible or not trivial.
For your particular case, if you need to determine the name of the Transaction Controller for a particular sampler, you can use a JSR223 Listener with the following code:
if (sampleEvent.isTransactionSampleEvent()) {
    log.info("Transaction Controller name: " + sampleEvent.result.getSampleLabel())
}
More information: Apache Groovy - Why and How You Should Use It

How to wait for full Kafka-message batch with Spring Boot?

When batch-consuming Kafka messages, one can limit the batch size using max.poll.records.
If the consumer is very fast and its commit offset does not lag significantly, most batches will be much smaller. I'd like to receive only "full" batches, i.e., to have my consumer function invoked only when the batch size is reached. So I'm looking for something like min.poll.records, which does not exist in that form.
Here is a minimal example of what I'm doing:
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.autoconfigure.kafka.KafkaProperties
import org.springframework.boot.runApplication
import org.springframework.context.annotation.Bean
import org.springframework.kafka.annotation.KafkaListener
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory
import org.springframework.kafka.core.DefaultKafkaConsumerFactory
import org.springframework.stereotype.Component

@SpringBootApplication
class Application

fun main(args: Array<String>) {
    runApplication<Application>(*args)
}

@Component
class TestConsumer {
    @Bean
    fun kafkaBatchListenerContainerFactory(kafkaProperties: KafkaProperties): ConcurrentKafkaListenerContainerFactory<String, String> {
        val configs = kafkaProperties.buildConsumerProperties()
        configs[ConsumerConfig.MAX_POLL_RECORDS_CONFIG] = 1000
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.consumerFactory = DefaultKafkaConsumerFactory(configs)
        factory.isBatchListener = true
        return factory
    }

    @KafkaListener(
        topics = ["myTopic"],
        containerFactory = "kafkaBatchListenerContainerFactory"
    )
    fun batchListen(values: List<ConsumerRecord<String, String>>) {
        println(values.count())
    }
}
When started with a bit of consumer lag, it outputs something like:
[...]
1000
1000
1000
[...]
1000
1000
1000
256
27
8
9
3
1
1
23
[...]
Is there any way (without manually sleep-ing in the consumer handler for "incomplete" batches) to have the function invoked only when one of the following two conditions is met?
- at least n messages have accumulated, or
- at least m milliseconds were spent waiting
Kafka has no min.poll.records; you can approximate it using fetch.min.bytes if your records are of similar length. Also see fetch.max.wait.ms.
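For illustration, here is a minimal sketch of that approximation applied to the container factory from the question. fetch.min.bytes and fetch.max.wait.ms are real Kafka consumer properties; the assumed average record size of ~1 KB is purely illustrative and must be tuned to your payloads:
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.springframework.boot.autoconfigure.kafka.KafkaProperties
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory
import org.springframework.kafka.core.DefaultKafkaConsumerFactory

fun approximateMinPollFactory(kafkaProperties: KafkaProperties): ConcurrentKafkaListenerContainerFactory<String, String> {
    val configs = kafkaProperties.buildConsumerProperties()
    configs[ConsumerConfig.MAX_POLL_RECORDS_CONFIG] = 1000
    // Approximate "min.poll.records": the broker holds the fetch until roughly
    // 1000 records' worth of bytes are available, assuming ~1 KB per record...
    configs[ConsumerConfig.FETCH_MIN_BYTES_CONFIG] = 1000 * 1024
    // ...or until this timeout expires, so low-traffic topics still make progress.
    configs[ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG] = 500
    val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
    factory.consumerFactory = DefaultKafkaConsumerFactory(configs)
    factory.isBatchListener = true
    return factory
}
Since fetch.min.bytes counts bytes rather than records, batch sizes will still vary whenever record sizes do.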
Since, as Gary Russell nicely pointed out, it's currently not possible to make Kafka do what I was looking for, here is my solution with manual buffering, which achieves the desired behavior:
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.kafka.annotation.KafkaListener
import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Component
import java.text.SimpleDateFormat
import java.util.*
import javax.annotation.PreDestroy

@SpringBootApplication
class Application

fun main(args: Array<String>) {
    runApplication<Application>(*args)
}

@Component
class TestConsumer {
    @KafkaListener(topics = ["myTopic"])
    fun listen(value: String) {
        addToBuffer(value)
    }

    private val buffer = mutableSetOf<String>()

    @Synchronized
    fun addToBuffer(message: String) {
        buffer.add(message)
        if (buffer.size >= 300) {
            flushBuffer()
        }
    }

    @Synchronized
    @Scheduled(fixedDelay = 700)
    @PreDestroy
    fun flushBuffer() {
        if (buffer.isEmpty()) {
            return
        }
        val timestamp = SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(Date())
        println("$timestamp: ${buffer.count()}")
        buffer.clear()
    }
}
Example output:
[...]
2020-01-03T07:01:13.032: 300
2020-01-03T07:01:13.041: 300
2020-01-03T07:01:13.078: 300
2020-01-03T07:01:13.133: 300
2020-01-03T07:01:13.143: 300
2020-01-03T07:01:13.188: 300
2020-01-03T07:01:13.197: 300
2020-01-03T07:01:13.321: 300
2020-01-03T07:01:13.352: 300
2020-01-03T07:01:13.359: 300
2020-01-03T07:01:13.399: 300
2020-01-03T07:01:13.407: 300
2020-01-03T07:01:13.533: 300
2020-01-03T07:01:13.571: 300
2020-01-03T07:01:13.580: 300
2020-01-03T07:01:13.607: 300
2020-01-03T07:01:13.611: 300
2020-01-03T07:01:13.632: 300
2020-01-03T07:01:13.682: 300
2020-01-03T07:01:13.687: 300
2020-01-03T07:01:13.708: 300
2020-01-03T07:01:13.712: 300
2020-01-03T07:01:13.738: 300
2020-01-03T07:01:13.880: 300
2020-01-03T07:01:13.884: 300
2020-01-03T07:01:13.911: 300
2020-01-03T07:01:14.301: 300
2020-01-03T07:01:14.714: 300
2020-01-03T07:01:15.029: 300
2020-01-03T07:01:15.459: 300
2020-01-03T07:01:15.888: 300
2020-01-03T07:01:16.359: 300
[...]
So we see that, after catching up with the consumer lag, it provides batches of 300 matching the topic throughput.
Yes, the @Synchronized does kill concurrent processing, but in my use case this part is far from being the bottleneck.
As you wait for the batch to fill up (accrue to 300), your offsets are committed each time the listener returns to fetch: the previous batch is committed even though its records may still be sitting unprocessed in your buffer.
If there is a failure (a listener crash, for example), you will lose the messages in the buffer. This may not be an issue for your use case, but I just wanted to highlight the possibility.
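If that possibility matters, one mitigation (a sketch of my own, not part of the answer above) is to switch the container to manual acknowledgment and commit only after a flush. Acknowledgment and ContainerProperties.AckMode.MANUAL are real Spring for Apache Kafka types; the manualAckFactory bean name is a hypothetical container factory configured with that ack mode, and the timed flush is omitted here to keep acknowledge() on the consumer thread:
import org.springframework.kafka.annotation.KafkaListener
import org.springframework.kafka.support.Acknowledgment
import org.springframework.stereotype.Component

@Component
class AckAfterFlushConsumer {

    private val buffer = mutableListOf<String>()
    private var pendingAck: Acknowledgment? = null

    // "manualAckFactory" (hypothetical) must set ContainerProperties.AckMode.MANUAL,
    // so offsets are committed only when acknowledge() is called.
    @KafkaListener(topics = ["myTopic"], containerFactory = "manualAckFactory")
    @Synchronized
    fun listen(value: String, ack: Acknowledgment) {
        buffer.add(value)
        pendingAck = ack // acking the newest record also covers earlier offsets in its partition
        if (buffer.size >= 300) {
            flushBuffer()
        }
    }

    @Synchronized
    fun flushBuffer() {
        if (buffer.isEmpty()) return
        println(buffer.count()) // stand-in for the real processing
        buffer.clear()
        pendingAck?.acknowledge() // commit only after the buffer has been processed
        pendingAck = null
    }
}
A crash before acknowledge() then replays the buffered records instead of losing them, at the cost of possible duplicates, so the processing must be idempotent.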

Groovy @CompileStatic and @TypeChecked order, bug or misunderstanding

I started getting a strange failure when compiling a gradle task class. This is the task I created:
package sample

import groovy.transform.CompileStatic
import groovy.transform.TypeChecked
import org.gradle.api.artifacts.Dependency
import org.gradle.api.provider.Property
import org.gradle.api.tasks.AbstractCopyTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.Internal
import org.gradle.api.tasks.bundling.Zip
import sample.internal.DataSourceXmlConfig

@TypeChecked
@CompileStatic
class DataSource extends Zip {

    @Internal
    final Property<File> warFile = project.objects.property(File.class)

    DataSource() {
        warFile.convention(project.provider {
            def files = project.configurations.getByName('warApp').fileCollection { Dependency d ->
                d.name == (archiveFileName.getOrElse("") - (~/\.[^.]+$/))
            }
            files.empty ? null : files.first()
        })
    }

    /**
     * This function is used to specify the location of data-sources.xml
     * and injects it into the archive
     * @param dsConf The configuration object used to specify the location of the
     * file as well as any extra variables which should be injected into the file
     */
    @Input
    void dataSourceXml(@DelegatesTo(DataSourceXmlConfig) Closure dsConf) {
        filesToUpdate {
            DataSourceXmlConfig ds = new DataSourceXmlConfig()
            dsConf.delegate = ds
            dsConf.resolveStrategy = Closure.DELEGATE_FIRST
            dsConf.call()
            exclude('**/WEB-INF/classes/data-sources.xml')
            from(ds.source) {
                if (ds.expansions) {
                    expand(ds.expansions)
                }
                into('WEB-INF/classes/')
                rename { 'data-sources.xml' }
            }
        }
    }

    private def filesToUpdate(@DelegatesTo(AbstractCopyTask) Closure action) {
        action.delegate = this
        action.resolveStrategy = Closure.DELEGATE_FIRST
        if (warFile.isPresent()) {
            from(project.zipTree(warFile)) {
                action.call()
            }
        }
    }
}
When groovy compiles this class, I get the following error:
Execution failed for task ':buildSrc:compileGroovy'.
BUG! exception in phase 'class generation' in source unit '/tmp/bus-server/buildSrc/src/main/groovy/sample/DataSource.groovy'
At line 28 column 28
On receiver: archiveFileName.getOrElse() with message: minus and arguments: .[^.]+$
This method should not have been called. Please try to create a simple example reproducing this error and file a bug report at https://issues.apache.org/jira/browse/GROOVY
Gradle version: 5.6
Groovy version: localGroovy() = 2.5.4
tl;dr: is this a bug, or am I missing something about how these annotations work?
The first thing I tried was removing one of the @TypeChecked and @CompileStatic annotations to see if the error would go away.
This fixed the problem right away: compiling the source with either annotation alone succeeded, but failed when both were present.
I read some questions and answers regarding the use of both annotations, but none of them suggested that one cannot use both at the same time.
Finally, I tried switching the order of the annotations, and to my surprise, it worked! No compilation errors!
This works:
@CompileStatic
@TypeChecked
class DataSource extends Zip { ... }
At this point, I guess my question is: is this a bug, or is there something I am not understanding about the use of these two annotations? I'm leaning towards it being a bug, simply because the order of the annotations made the error go away.

Gluon PositionService for Desktop - Testing Purposes

I want to be able to test the Gluon PositionService from my laptop. Is this possible? Looking at the runtime dependencies, iOS and Android have this jar file included, but the desktop dependencies don't.
What is a workaround to be able to test this service from my laptop? I just want to print the location to the console right now. Is this a limitation of my laptop?
Given that a laptop/desktop machine doesn't include a GPS sensor, there is no sense in having a DesktopPositionService implementation.
But if you just want to test your mobile code on your laptop, you can easily create a fake task that provides a random new position after a given period of time.
There are two simple ways to mock the PositionService on desktop.
One, by simply providing an alternative to the case where you don't actually have a PositionService implementation:
Services.get(PositionService.class)
        .map(s -> {
            // Mobile - real implementation
            s.positionProperty().addListener((obs, ov, nv) ->
                System.out.println(String.format("Lat: %.6f, Lon: %.6f", nv.getLatitude(), nv.getLongitude())));
            return s.getPosition();
        }).orElseGet(() -> {
            if (Platform.isDesktop()) {
                // Desktop - Mock implementation
                PauseTransition pause = new PauseTransition(Duration.seconds(5));
                pause.setOnFinished(t -> {
                    System.out.println(String.format("Lat: %.6f, Lon: %.6f", new Random().nextFloat() * 100, new Random().nextFloat() * 100));
                    pause.playFromStart();
                });
                pause.play();
            }
            return null;
        });
And two, following the design of the different plugins in Charm Down: actually provide a PositionService implementation by creating a DesktopPositionService class in the Desktop/Java package of your project, under the package com.gluonhq.charm.down.plugins.desktop.
package com.gluonhq.charm.down.plugins.desktop;

import com.gluonhq.charm.down.plugins.Position;
import com.gluonhq.charm.down.plugins.PositionService;
import java.util.Random;
import javafx.animation.PauseTransition;
import javafx.beans.property.ReadOnlyObjectProperty;
import javafx.beans.property.ReadOnlyObjectWrapper;
import javafx.util.Duration;

public class DesktopPositionService implements PositionService {

    private final ReadOnlyObjectWrapper<Position> positionProperty = new ReadOnlyObjectWrapper<>();

    public DesktopPositionService() {
        mockPosition();
    }

    @Override
    public ReadOnlyObjectProperty<Position> positionProperty() {
        return positionProperty.getReadOnlyProperty();
    }

    @Override
    public Position getPosition() {
        return positionProperty.get();
    }

    private void mockPosition() {
        PauseTransition pause = new PauseTransition(Duration.seconds(5));
        pause.setOnFinished(t -> {
            positionProperty.set(new Position(new Random().nextFloat() * 100, new Random().nextFloat() * 100));
            pause.playFromStart();
        });
        pause.play();
    }
}
So now this will work for both mobile (real sensor) and desktop (mock):
Services.get(PositionService.class)
        .ifPresent(s ->
            s.positionProperty().addListener((obs, ov, nv) ->
                System.out.println(String.format("Lat: %.6f, Lon: %.6f", nv.getLatitude(), nv.getLongitude()))));

Call a method of a web component and respond to events

I'm working on a Dart project where I have created, with the Web_ui package, a custom element that has some animation. What I was hoping to do is to have, within the Dart code for the element, something like this:
class MyElement extends WebComponent {
  ...
  void StartAnimation() { ... }
  ...
}
and then, in the main() function of the Dart app itself, have something like this:
void main() {
  MyElement elm = new MyElement();
  elm.StartAnimation(); // Kicks off the animation
}
The Dart editor tells me "Directly constructing a web component is not currently supported" and says to use WebComponent.forElement instead, but I'm not clear on how to use that to achieve my goal.
While you can't yet import web components into a Dart file, you can access them via query() and .xtag. xtag gives you a reference to the web component instance that the element is associated with. You do have to be careful to let the Web UI setup complete so that xtag is given a value.
Here's an example:
import 'dart:async';
import 'dart:html';
import 'package:web_ui/web_ui.dart';
main() {
  Timer.run(() {
    var myElement = query('#my-element').xtag;
    myElement.startAnimation();
  });
}
This will get better with the ability to import components, directly subclass Element, and maybe some lifecycle events that guarantee you get the correct class back from query(). This is what the example should look like in the future:
import 'dart:async';
import 'dart:html';
import 'package:web_ui/web_ui.dart';
import 'package:my_app/my_element.dart';
main() {
  MyElement myElement = query('#my-element');
  myElement.startAnimation();
}
