How to distribute workload to many compute nodes and do scatter-gather scenarios with Kafka Streams? - apache-kafka-streams

I am new to Kafka Streams and Alpakka Kafka.
Problem: I have been using a Java ExecutorService to run parallel jobs and, when ALL of them are done, mark the entire process as done. The issues are fault tolerance, high availability, and not utilizing all compute nodes to do the work: it uses just ONE host JVM.
We have Apache Kafka as infrastructure, so I was wondering how I can use Kafka Streams for a scatter-gather (or simply "execute child tasks") use case: distribute the workload and then gather the results, or at least get an indication that all tasks are done.
Any pointer to sample work on scatter-gather or fork-join with Kafka Streams or Alpakka Kafka would be great.
Here is a sample of my current ExecutorService approach:
import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.client.WebClient;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class Main {

    private static final ExecutorService executorService = Executors.newFixedThreadPool(15);

    public static void main(String[] args) throws Exception {
        final WebClient webClient = WebClient.builder().build();
        List<CompletableFuture<String>> allTasks = new LinkedList<>();
        String[] urls = {"http://test1", "http://test2", "http://test3"};
        // Distribute the work (WebClient can do async calls, but I just wanted to give an example).
        for (final String url : urls) {
            CompletableFuture<String> task = CompletableFuture.supplyAsync(() -> {
                // Some task; just as an example I have put a GET call here, it could be anything
                String response =
                        webClient.get().uri(url).accept(MediaType.APPLICATION_JSON).retrieve().bodyToMono(String.class).block();
                return response;
            }, executorService);
            allTasks.add(task);
        }
        // Wait for all to be done (join)
        CompletableFuture.allOf(allTasks.toArray(new CompletableFuture[]{})).join();
        for (CompletableFuture<String> task : allTasks) {
            processResponse(task.get());
        }
        // Shut the pool down so the JVM can exit
        executorService.shutdown();
    }

    public static void processResponse(String response) {
        System.out.println(response);
    }
}
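For context on what a Kafka-based gather step could look like: one common pattern is to scatter tasks onto a topic keyed by a job id, let any number of worker instances (one consumer group) process them and write one result record per task to a results topic, and then have a small Kafka Streams topology count results per job and emit a completion record once the expected number has arrived. The sketch below is only an illustration of that idea, not a tested design; the topic names, the String value type, and the EXPECTED_TASKS constant are assumptions.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ScatterGatherSketch {

    static final long EXPECTED_TASKS = 3; // assumption: known number of child tasks per job

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Each worker writes one record per finished task to "task-results", keyed by job id.
        KTable<String, Long> completedPerJob = builder
                .<String, String>stream("task-results")
                .groupByKey()
                .count();

        // Emit a record to "job-completed" once all tasks of a job have reported back.
        completedPerJob.toStream()
                .filter((jobId, done) -> done != null && done >= EXPECTED_TASKS)
                .mapValues(done -> "DONE")
                .to("job-completed", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "scatter-gather-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}

The state store and topics make the counting fault tolerant, and adding more instances of the worker and of this topology spreads the work over more hosts, which is the part the single-JVM ExecutorService approach cannot do.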

Related

aggregator spring cloud stream with timeout

I want to make an application that receives messages, stores those messages in a list, and later, on a schedule, releases those messages every X amount of time.
I know Spring Cloud Stream has an aggregator that already does this, but I think I need to do it manually because I need to keep a unique message per key and only replace the old message if it matches a specific condition (I think of it as a Set aggregator with conditions).
Here is what I have tried so far,
also in this link: https://github.com/chalimbu/AggregatorQuestionStack
Processor.
import org.springframework.cloud.stream.annotation.EnableBinding
import org.springframework.cloud.stream.annotation.Input
import org.springframework.cloud.stream.annotation.Output
import org.springframework.cloud.stream.messaging.Processor
import org.springframework.scheduling.annotation.Scheduled
@EnableBinding(Processor::class)
class SetAggregatorProcessor(val storageService: StorageService) {

    @Input
    public fun inputMessage(input: Map<String, Any>) {
        storageService.messages.add(input)
    }

    @Output
    @Scheduled(fixedDelay = 20000)
    public fun produceOutput(): List<Map<String, Any>> {
        val message = storageService.messages
        storageService.messages.clear()
        return message
    }
}
Memory storage.
import org.springframework.stereotype.Service
@Service
class StorageService {
    public var messages: MutableList<Map<String, Any>> = mutableListOf()
}
This code generates the following error when I start pushing messages.
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139) ~[spring-integration-core-5.5.8.jar:5.5.8]
The idea is to deploy this app as part of the Spring Cloud Stream (Data Flow) platform.
I prefer the declarative approach (over the functional approach), but if somebody knows how to do it the Reactor way, I could settle for it.
Thanks for any help or advice.
Thanks to this example (https://github.com/spring-cloud/spring-cloud-stream-samples/blob/main/processor-samples/sensor-average-reactive-kafka/src/main/java/sample/sensor/average/SensorAverageProcessorApplication.java) I was able to figure something out using Flux, in case someone else needs it:
import java.time.Duration
import java.util.function.BiFunction
import java.util.function.Function
import org.springframework.context.annotation.Configuration
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

@Configuration
class SetAggregatorProcessor : Function<Flux<Map<String, Any>>, Flux<MutableList<Map<String, Any>>>> {

    override fun apply(data: Flux<Map<String, Any>>): Flux<MutableList<Map<String, Any>>> {
        return data.window(Duration.ofSeconds(20)).flatMap { window: Flux<Map<String, Any>> ->
            this.aggregateList(window)
        }
    }

    private fun aggregateList(group: Flux<Map<String, Any>>): Mono<MutableList<Map<String, Any>>>? {
        return group.reduce(
            mutableListOf(),
            BiFunction<MutableList<Map<String, Any>>, Map<String, Any>, MutableList<Map<String, Any>>> {
                accumulator: MutableList<Map<String, Any>>, element: Map<String, Any> ->
                accumulator.add(element)
                accumulator
            }
        )
    }
}
Update: https://github.com/chalimbu/AggregatorQuestionStack/tree/main/src/main/kotlin/com/project/co/SetAggregator
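A note on the keyed requirement from the question: the reduce above keeps every message of the window in a list. To keep only one message per key and replace the old one conditionally, the window can be reduced into a map instead. The following is only a rough sketch in Java with Reactor; the "id" key field, the class name, and the shouldReplace condition are placeholders for illustration, not part of the original code.

import java.time.Duration;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import reactor.core.publisher.Flux;

// Illustrative only: collects each 20s window into a map keyed by an assumed
// "id" field, replacing an existing entry only when shouldReplace() says so.
public class KeyedSetAggregator
        implements Function<Flux<Map<String, Object>>, Flux<Collection<Map<String, Object>>>> {

    @Override
    public Flux<Collection<Map<String, Object>>> apply(Flux<Map<String, Object>> data) {
        return data.window(Duration.ofSeconds(20))
                .flatMap(window -> window
                        .reduce(new HashMap<String, Map<String, Object>>(), this::merge)
                        .map(m -> m.values()));
    }

    private Map<String, Map<String, Object>> merge(Map<String, Map<String, Object>> acc,
                                                   Map<String, Object> message) {
        String key = String.valueOf(message.get("id")); // assumed key field
        Map<String, Object> existing = acc.get(key);
        if (existing == null || shouldReplace(existing, message)) {
            acc.put(key, message);
        }
        return acc;
    }

    private boolean shouldReplace(Map<String, Object> oldMessage, Map<String, Object> newMessage) {
        return true; // placeholder for the question's real replacement condition
    }
}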

Running tests with cucumber-junit-platform-engine and Selenium WebDriver opens too many threads

I have tried to configure an existing Maven project to run using cucumber-junit-platform-engine.
I have used this repo as inspiration.
I added the Maven dependencies needed, as in the linked project using spring-boot-starter-parent version 2.4.5 and cucumber-jvm version 6.10.4.
I set the junit-platform properties as follows:
cucumber.execution.parallel.enabled=true
cucumber.execution.parallel.config.strategy=fixed
cucumber.execution.parallel.config.fixed.parallelism=4
I used the @Cucumber annotation in the runner class and @SpringBootTest for the classes with step definitions.
It seems to work fine with creating parallel threads, but the problem is it creates all the threads at the start and opens as many browser windows (drivers) as the number of scenarios (e.g. 51 instead of 4).
I am using a CucumberHooks class to add logic before and after scenarios and I'm guessing it interferes with the runner because of the annotations I'm using:
import java.util.List;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import io.cucumber.java.After;
import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import io.cucumber.plugin.ConcurrentEventListener;
import io.cucumber.plugin.event.EventHandler;
import io.cucumber.plugin.event.EventPublisher;
import io.cucumber.plugin.event.TestRunFinished;
import io.cucumber.plugin.event.TestRunStarted;
import io.github.bonigarcia.wdm.WebDriverManager;
public class CucumberHooks implements ConcurrentEventListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(CucumberHooks.class);

    @Autowired
    private ScenarioContext scenarioContext;

    @Before
    public void beforeScenario(Scenario scenario) {
        scenarioContext.getNewDriverInstance();
        scenarioContext.setScenario(scenario);
        LOGGER.info("Driver initialized for scenario - {}", scenario.getName());
        ....
        <some business logic here>
        ....
    }

    @After
    public void afterScenario() {
        Scenario scenario = scenarioContext.getScenario();
        WebDriver driver = scenarioContext.getDriver();
        takeErrorScreenshot(scenario, driver);
        LOGGER.info("Driver will close for scenario - {}", scenario.getName());
        driver.quit();
    }

    private void takeErrorScreenshot(Scenario scenario, WebDriver driver) {
        if (scenario.isFailed()) {
            final byte[] screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
            scenario.attach(screenshot, "image/png", "Failure");
        }
    }

    @Override
    public void setEventPublisher(EventPublisher eventPublisher) {
        eventPublisher.registerHandlerFor(TestRunStarted.class, beforeAll);
    }

    private EventHandler<TestRunStarted> beforeAll = event -> {
        // something that needs doing before everything
        .....<some business logic here>....
        WebDriverManager.getInstance(DriverManagerType.CHROME).setup();
    };
}
I tried replacing the @Before annotation from io.cucumber.java with @BeforeEach from org.junit.jupiter.api and it does not work.
How can I solve this issue?
New answer: JUnit 5 has been improved somewhat.
If you are on Java 9+ you can use the following in junit-platform.properties to enable custom parallelism.
cucumber.execution.parallel.enabled=true
cucumber.execution.parallel.config.strategy=custom
cucumber.execution.parallel.config.custom.class=com.example.MyCustomParallelStrategy
And you'd implement MyCustomParallelStrategy as:
package com.example;
import org.junit.platform.engine.ConfigurationParameters;
import org.junit.platform.engine.support.hierarchical.ParallelExecutionConfiguration;
import org.junit.platform.engine.support.hierarchical.ParallelExecutionConfigurationStrategy;
import java.util.concurrent.ForkJoinPool;
import java.util.function.Predicate;
public class MyCustomParallelStrategy implements ParallelExecutionConfiguration, ParallelExecutionConfigurationStrategy {

    private static final int FIXED_PARALLELISM = 4;

    @Override
    public ParallelExecutionConfiguration createConfiguration(final ConfigurationParameters configurationParameters) {
        return this;
    }

    @Override
    public Predicate<? super ForkJoinPool> getSaturatePredicate() {
        return (ForkJoinPool p) -> true;
    }

    @Override
    public int getParallelism() {
        return FIXED_PARALLELISM;
    }

    @Override
    public int getMinimumRunnable() {
        return FIXED_PARALLELISM;
    }

    @Override
    public int getMaxPoolSize() {
        return FIXED_PARALLELISM;
    }

    @Override
    public int getCorePoolSize() {
        return FIXED_PARALLELISM;
    }

    @Override
    public int getKeepAliveSeconds() {
        return 30;
    }
}
On Java 9+ this will limit the max pool size of the underlying fork/join pool to FIXED_PARALLELISM, and there should never be more than that many web drivers active at the same time.
Also, once JUnit5/#3044 is merged, released, and integrated into Cucumber, you can use cucumber.execution.parallel.config.fixed.max-pool-size on Java 9+ to limit the maximum number of concurrent tests.
So, as it turns out, parallelism is mostly a suggestion. Cucumber uses JUnit 5's ForkJoinPoolHierarchicalTestExecutorService, which constructs a ForkJoinPool.
From the docs on ForkJoinPool:
For applications that require separate or custom pools, a ForkJoinPool may be constructed with a given target parallelism level; by default, equal to the number of available processors. The pool attempts to maintain enough active (or available) threads by dynamically adding, suspending, or resuming internal worker threads, even if some tasks are stalled waiting to join others. However, no such adjustments are guaranteed in the face of blocked I/O or other unmanaged synchronization.
So within a ForkJoinPool, whenever a thread blocks, for example because it starts asynchronous communication with the web driver, another thread may be started to maintain the parallelism.
Since all threads wait, more threads are added to the pool and more web drivers are started.
This means that rather than relying on the ForkJoinPool to limit the number of web drivers, you have to do this yourself. You can use a library like Apache Commons Pool or implement a rudimentary pool using a counting semaphore:
@Component
@ScenarioScope
public class ScenarioContext {

    private static final int MAX_CONCURRENT_WEB_DRIVERS = 1;
    private static final Semaphore semaphore = new Semaphore(MAX_CONCURRENT_WEB_DRIVERS, true);

    private WebDriver driver;

    public WebDriver getDriver() {
        if (driver != null) {
            return driver;
        }
        try {
            semaphore.acquire();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        try {
            driver = CustomChromeDriver.getInstance();
        } catch (Throwable t) {
            semaphore.release();
            throw t;
        }
        return driver;
    }

    public void retireDriver() {
        if (driver == null) {
            return;
        }
        try {
            driver.quit();
        } finally {
            driver = null;
            semaphore.release();
        }
    }
}
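With a pooled ScenarioContext like this, the @After hook from the question should hand the driver back through the context instead of calling driver.quit() directly, so the semaphore permit is released. A sketch of the adjusted hook method; it belongs in the CucumberHooks class shown earlier and reuses its fields:

    @After
    public void afterScenario() {
        Scenario scenario = scenarioContext.getScenario();
        // Take the screenshot first, then quit the driver and release the pool permit in one place.
        takeErrorScreenshot(scenario, scenarioContext.getDriver());
        scenarioContext.retireDriver();
    }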

Need a way to prevent unwanted job param from propagating to next execution of spring boot batch job

I am running a batch app using spring boot 2.1.2 and spring batch 4.1.1. The app uses a MySQL database for the spring batch metadata data source.
First, I run the job with this command:
java -jar target/batchdemo-0.0.1-SNAPSHOT.jar -Dspring.batch.job.names=echo com.paypal.batch.batchdemo.BatchdemoApplication myparam1=value1 myparam2=value2
Notice I am passing two params:
myparam1=value1
myparam2=value2
Since the job uses RunIdIncrementer, the actual params used by the app are logged as:
Job: [SimpleJob: [name=echo]] completed with the following parameters: [{myparam2=value2, run.id=1, myparam1=value1}]
Next I run the job again, this time dropping myparam2:
java -jar target/batchdemo-0.0.1-SNAPSHOT.jar -Dspring.batch.job.names=echo com.paypal.batch.batchdemo.BatchdemoApplication myparam1=value1
This time the job again runs with myparam2 still included:
Job: [SimpleJob: [name=echo]] completed with the following parameters: [{myparam2=value2, run.id=2, myparam1=value1}]
This causes business logic to be invoked as if I had again passed myparam2 to the app.
Is there a way to drop the job parameter and have it not be passed to the next instance?
App code:
package com.paypal.batch.batchdemo;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
@SpringBootApplication
@EnableBatchProcessing
public class BatchdemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchdemoApplication.class, args);
    }

    @Autowired
    JobBuilderFactory jobBuilder;

    @Autowired
    StepBuilderFactory stepBuilder;

    @Autowired
    ParamEchoTasklet paramEchoTasklet;

    @Bean
    public RunIdIncrementer incrementer() {
        return new RunIdIncrementer();
    }

    @Bean
    public Job job() {
        return jobBuilder.get("echo").incrementer(incrementer()).start(echoParamsStep()).build();
    }

    @Bean
    public Step echoParamsStep() {
        return stepBuilder.get("echoParams").tasklet(paramEchoTasklet).build();
    }
}
package com.paypal.batch.batchdemo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.stereotype.Component;
@Component
public class ParamEchoTasklet implements Tasklet {

    private Logger LOGGER = LoggerFactory.getLogger(ParamEchoTasklet.class);

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        LOGGER.info("ParamEchoTasklet BEGIN");
        chunkContext.getStepContext().getJobParameters().entrySet().stream().forEachOrdered((entry) -> {
            String key = entry.getKey();
            Object value = entry.getValue();
            LOGGER.info("Param {} = {}", key, value);
        });
        LOGGER.info("ParamEchoTasklet END");
        return RepeatStatus.FINISHED;
    }
}
I debugged the spring batch and spring boot code, and here is what is happening. JobParametersBuilder line 273 adds the params from the most recent prior job instance to the nextParameters map along with any params added by the JobParametersIncrementer:
List<JobExecution> previousExecutions = this.jobExplorer.getJobExecutions(lastInstances.get(0));
if (previousExecutions.isEmpty()) {
    // Normally this will not happen - an instance exists with no executions
    nextParameters = incrementer.getNext(new JobParameters());
}
else {
    JobExecution previousExecution = previousExecutions.get(0);
    nextParameters = incrementer.getNext(previousExecution.getJobParameters());
}
Then since I am using spring boot, JobLauncherCommandLineRunner line 213 merges the prior params with the new params passed for the new execution, which results in the old param being passed to the new execution:
return merge(nextParameters, jobParameters);
It appears to be impossible to run the job ever again without the param unless I am missing something. Could it be a bug in spring batch?
The normal behavior for RunIdIncrementer appears to increment the run id for the JobExecution and pass along the remaining prior JobParameters. I would not call this a bug.
Keep in mind that the idea behind the RunIdIncrementer is simply to change one identifying parameter to allow a job to be run again, even if a prior run with the same (other) parameters completed successfully and restart has not been configured.
You could always create a customized incrementer by implementing JobParametersIncrementer.
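For example, here is a minimal sketch of such an incrementer that carries only run.id forward and deliberately drops every other parameter from the previous execution; the class name is illustrative, not something provided by Spring Batch.

import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.JobParametersIncrementer;

public class RunIdOnlyIncrementer implements JobParametersIncrementer {

    private static final String RUN_ID_KEY = "run.id";

    @Override
    public JobParameters getNext(JobParameters parameters) {
        long previousRunId = (parameters == null) ? 0L : parameters.getLong(RUN_ID_KEY, 0L);
        // Start from an empty builder so parameters of the previous execution
        // (e.g. myparam2) are not carried into the next one.
        return new JobParametersBuilder()
                .addLong(RUN_ID_KEY, previousRunId + 1)
                .toJobParameters();
    }
}

Plugging this in via .incrementer(new RunIdOnlyIncrementer()) means only the parameters actually passed on the command line (plus the new run.id) reach the next execution.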
Another alternative is to use the JobParametersBuilder to build a JobParameters object and then use the JobLauncher to run your job with those parameters. I often use the current system time in milliseconds to create uniqueness if I'm running jobs that will otherwise have the same JobParameters. You will obviously have to figure out the logic for pulling your specific parameters from the command line (or wherever else) and iterating over them to populate the JobParameters object.
Example:
public JobExecution executeJob(Job job) {
    JobExecution jobExecution = null;
    try {
        JobParameters jobParameters =
                new JobParametersBuilder()
                        .addLong("time.millis", System.currentTimeMillis(), true)
                        .addString("param1", "value1", true)
                        .toJobParameters();
        jobExecution = jobLauncher.run(job, jobParameters);
    } catch (JobInstanceAlreadyCompleteException | JobRestartException | JobParametersInvalidException | JobExecutionAlreadyRunningException e) {
        e.printStackTrace();
    }
    return jobExecution;
}

Is using sqs-consumer to detect receiveMessage events in SQS scalable?

I am using AWS SQS as a message queue. After sqs.sendMessage sends the data, I want to detect it via sqs.receiveMessage, through either an infinite loop or event triggering, in a scalable way. Then I came across sqs-consumer
to handle sqs.receiveMessage events the moment the messages arrive. But I was wondering: is it the most suitable way to handle message passing between microservices, or is there a better way to handle this?
I had written code in Java for fetching the data from the SQS queue with the SQSBufferedAsyncClient; the advantage of using this API is that it buffers the messages in async mode.
/**
*
*/
package com.sxm.aota.tsc.config;
import java.net.UnknownHostException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonWebServiceRequest;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.retry.RetryPolicy;
import com.amazonaws.retry.RetryPolicy.BackoffStrategy;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClient;
import com.amazonaws.services.sqs.buffered.AmazonSQSBufferedAsyncClient;
import com.amazonaws.services.sqs.buffered.QueueBufferConfig;
@Configuration
public class SQSConfiguration {

    /** The properties cache config. */
    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Bean
    public AmazonSQSAsync amazonSQSClient() {
        // Create client configuration
        ClientConfiguration clientConfig = new ClientConfiguration()
                .withMaxErrorRetry(5)
                .withConnectionTTL(10_000L)
                .withTcpKeepAlive(true)
                .withRetryPolicy(new RetryPolicy(
                        null,
                        new BackoffStrategy() {
                            @Override
                            public long delayBeforeNextRetry(AmazonWebServiceRequest req,
                                    AmazonClientException exception, int retries) {
                                // Delay between retries is 10s unless it is UnknownHostException
                                // for which retry is 60s
                                return exception.getCause() instanceof UnknownHostException ? 60_000L : 10_000L;
                            }
                        }, 10, true));
        // Create Amazon client
        AmazonSQSAsync asyncSqsClient = null;
        if (propertiesCacheConfig.isIamRole()) {
            asyncSqsClient = new AmazonSQSAsyncClient(new InstanceProfileCredentialsProvider(true), clientConfig);
        } else {
            // BasicAWSCredentials takes the access key first, then the secret key
            asyncSqsClient = new AmazonSQSAsyncClient(
                    new BasicAWSCredentials("accesskey", "secretkey"));
        }
        final Regions regions = Regions.fromName(propertiesCacheConfig.getRegionName());
        asyncSqsClient.setRegion(Region.getRegion(regions));
        asyncSqsClient.setEndpoint(propertiesCacheConfig.getEndPoint());
        // Buffer for request batching
        final QueueBufferConfig bufferConfig = new QueueBufferConfig();
        // Ensure visibility timeout is maintained
        bufferConfig.setVisibilityTimeoutSeconds(20);
        // Enable long polling
        bufferConfig.setLongPoll(true);
        // Set batch parameters
        // bufferConfig.setMaxBatchOpenMs(500);
        // Set to receive messages only on demand
        // bufferConfig.setMaxDoneReceiveBatches(0);
        // bufferConfig.setMaxInflightReceiveBatches(0);
        return new AmazonSQSBufferedAsyncClient(asyncSqsClient, bufferConfig);
    }
}
Then I wrote the scheduler, which executes every 2 seconds, fetches the data from the queue, processes it, and deletes it from the queue before the visibility timeout expires; otherwise the message becomes available for processing again once the visibility timeout expires.
package com.sxm.aota.tsc.sqs;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.DependsOn;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlResult;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.ReceiveMessageResult;
import com.fasterxml.jackson.databind.ObjectMapper;
/**
* The Class TSCDataSenderScheduledTask.
*
* Sends the aggregated Vehicle data to TSC in batches
*/
@EnableScheduling
@Component("sqsScheduledTask")
@DependsOn({ "propertiesCacheConfig", "amazonSQSClient" })
public class SQSScheduledTask {

    private static final Logger LOGGER = LoggerFactory.getLogger(SQSScheduledTask.class);

    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Autowired
    public AmazonSQSAsync amazonSQSClient;

    /**
     * Timer task that will run after a specific interval of time. Majorly
     * responsible for sending the data in batches to TSC.
     */
    private String queueUrl;
    private final ObjectMapper mapper = new ObjectMapper();

    @PostConstruct
    public void initialize() throws Exception {
        LOGGER.info("SQS-Publisher", "Publisher initializing for queue " + propertiesCacheConfig.getSQSQueueName(),
                "Publisher initializing for queue " + propertiesCacheConfig.getSQSQueueName());
        // Get queue URL
        final GetQueueUrlRequest request = new GetQueueUrlRequest().withQueueName(propertiesCacheConfig.getSQSQueueName());
        final GetQueueUrlResult response = amazonSQSClient.getQueueUrl(request);
        queueUrl = response.getQueueUrl();
        LOGGER.info("SQS-Publisher", "Publisher initialized for queue " + propertiesCacheConfig.getSQSQueueName(),
                "Publisher initialized for queue " + propertiesCacheConfig.getSQSQueueName() + ", URL = " + queueUrl);
    }

    @Scheduled(fixedDelayString = "${sqs.consumer.delay}")
    public void timerTask() {
        final ReceiveMessageResult receiveResult = getMessagesFromSQS();
        String messageBody = null;
        if (receiveResult != null && receiveResult.getMessages() != null && !receiveResult.getMessages().isEmpty()) {
            try {
                messageBody = receiveResult.getMessages().get(0).getBody();
                String messageReceiptHandle = receiveResult.getMessages().get(0).getReceiptHandle();
                Vehicles vehicles = mapper.readValue(messageBody, Vehicles.class);
                processMessage(vehicles.getVehicles(), messageReceiptHandle);
            } catch (Exception e) {
                LOGGER.error("Exception while processing SQS message : {}", messageBody);
                // Message is not deleted on SQS and will be processed again after visibility timeout
            }
        }
    }

    public void processMessage(List<Vehicle> vehicles, String messageReceiptHandle) throws InterruptedException {
        // processing code
        // delete the SQS message as the processing is completed
        // Need to create an atomic counter that will be incremented by all TS.. Once it is 0 we will delete the messages
        amazonSQSClient.deleteMessage(new DeleteMessageRequest(queueUrl, messageReceiptHandle));
    }

    private ReceiveMessageResult getMessagesFromSQS() {
        try {
            // Create new request and fetch data from the Amazon SQS queue
            final ReceiveMessageResult receiveResult = amazonSQSClient
                    .receiveMessage(new ReceiveMessageRequest().withMaxNumberOfMessages(1).withQueueUrl(queueUrl));
            return receiveResult;
        } catch (Exception e) {
            LOGGER.error("Error while fetching data from SQS", e);
        }
        return null;
    }
}

Route lines from file to persistent JMS queue: How to improve performance?

I need some help with performance tuning of a use case. In this use case the Camel route tails status lines in a log file and sends each line as a message to a JMS queue. I have implemented the use case like this:
package tests;
import java.io.File;
import java.net.URI;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.sjms.SjmsComponent;
import org.apache.camel.main.Main;
public class LinesToQueue {

    public static void main(String[] args) throws Exception {
        final File file = new File("data/log.txt");
        final String uri = "tcp://127.0.0.1:61616";

        final BrokerService jmsService = BrokerFactory.createBroker(new URI("broker:" + uri));
        jmsService.start();

        final SjmsComponent jmsComponent = new SjmsComponent();
        jmsComponent.setConnectionFactory(new ActiveMQConnectionFactory(uri));

        final Main main = new Main();
        main.bind("jms", jmsComponent);
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                fromF("stream:file?fileName=%s&scanStream=true&scanStreamDelay=0", file.getAbsolutePath())
                        .routeId("LinesToQueue")
                        .to("jms:LogLines?synchronous=false");
            }
        });
        main.enableHangupSupport();
        main.run();
    }
}
When I run this use case with a file already filled with 1.000.000 lines the overall performance I get in the route is about 313 lines/second. This means that it takes about 55 minutes to process the file.
As some sort of reference I have also created another use case. In this use case the Camel route tails status lines in a log file and sends each line as a document to an Elasticsearch index. I have implemented the use case like this:
package tests;
import java.io.File;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;
public class LinesToIndex {

    public static void main(String[] args) throws Exception {
        final File file = new File("data/log.txt");
        final String uri = "local";

        final Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                fromF("stream:file?fileName=%s&scanStream=true&scanStreamDelay=0", file.getAbsolutePath())
                        .routeId("LinesToIndex")
                        .bean(new LineConverter())
                        .toF("elasticsearch://%s?operation=INDEX&indexName=log&indexType=line", uri);
            }
        });
        main.enableHangupSupport();
        main.run();
    }
}
When I run this use case with a file already filled with 1.000.000 lines the overall performance I get in the route is about 8333 lines/second. This means that it takes about 2 minutes to process the file.
I understand that there is a huge difference between a JMS queue and an Elasticsearch index, but how can I make the JMS use case above perform better?
Update #1:
It seems to be the persistence in the JMS service that is the bottleneck in my first use case above. If I disable persistence in the JMS service, the performance in the route is about 11111 lines/second. Which persistence store for the JMS service will give me better performance?
A couple of things to consider:
ActiveMQ producer connections are expensive; make sure you use a pooled connection factory (see the sketch after this list)
consider using the VM transport for an in-process ActiveMQ instance
consider using an external ActiveMQ broker over TCP (so it doesn't compete for resources with your test)
set up/tune KahaDB or LevelDB to optimize persistent storage for your use case
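For the first point, a minimal sketch of wiring a pooled connection factory (from the activemq-pool artifact) into the SjmsComponent used in the route above; the helper class name and the pool size are arbitrary example choices, not part of the original answer.

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.component.sjms.SjmsComponent;

public class PooledJmsComponentFactory {

    // Builds an SjmsComponent backed by a connection pool so the route does not
    // pay the cost of a new connection/session per message.
    public static SjmsComponent create(String brokerUri) {
        PooledConnectionFactory pooled =
                new PooledConnectionFactory(new ActiveMQConnectionFactory(brokerUri));
        pooled.setMaxConnections(8); // example value, tune for your load
        SjmsComponent jmsComponent = new SjmsComponent();
        jmsComponent.setConnectionFactory(pooled);
        return jmsComponent;
    }
}

In the LinesToQueue example this would replace the plain ActiveMQConnectionFactory passed to the SjmsComponent before binding it as "jms".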
