Spring Boot test method involving a Kafka producer in an integration test - spring-boot

I am testing a method that uses Kafka as a producer. When I run the test, it just keeps looping, waiting for the consumer, which I have not set up.
Here is the method in the service class:
public String Applyjob(int order_id, int apply_id) {
    // check order_id
    DashBroad dashBroad = dashBroadRepository.findByOrder_id(order_id);
    try {
        if (dashBroad.getApplier_id().contains(userCoreService.findById(apply_id))) {
            return "you have already applied the job";
        }
        dashBroad.getApplier_id().add(userCoreService.getUser(apply_id)); // update the dashbroad
        dashBroad.setApplier_id(dashBroad.getApplier_id());
        dashBroadRepository.save(dashBroad);
        // add in applications records in user entity
        postApplication(apply_id, order_id);
        // send notification
        String notification = "You have successfully applied for job id:" + order_id;
        sendNotice(notification, apply_id, order_id);
        return "successfully added";
    } catch (IndexOutOfBoundsException exception) {
        return "the number of application exceed the limit";
    }
}
// kafka producer
public void sendNotice(String notification, int apply_id, int order_id) {
    try {
        LocalDateTime myDateObj = LocalDateTime.now();
        DateTimeFormatter myFormatObj = DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm:ss");
        String formattedDate = myDateObj.format(myFormatObj);
        kafkaTemplate.send("notificationTopic", new NoticeRespond(
                apply_id, formattedDate, notification
        ));
        log.info(apply_id + " has applied job with id: " + order_id);
    } catch (Exception exception) {
        log.error("cant found the consumer");
    }
}
private void postApplication(int apply_id, int order_id) {
    try {
        JobOrder job = jobService.findByOrderid(order_id);
        User user = userCoreService.findById(apply_id);
        user.getApplications().add(job);
        System.out.println(job);
        userCoreService.saveAndReturn(user);
        log.info("add application");
    } catch (IndexOutOfBoundsException exception) {
        String notification = "You have already send to much of applications.Please delete some and try again:" + order_id;
        sendNotice(notification, apply_id, order_id);
    }
}
I am testing the Applyjob method, which calls sendNotice (the Kafka producer method).
Test code:
@SpringBootTest
@AutoConfigureMockMvc
class DashbroadServiceTest {
    @Autowired
    private DashbroadService dashbroadService;
    @Autowired
    private DashBroadRepository dashBroadRepository;
    @Autowired
    private UserRepository userRepository;
    @Autowired
    private JobRepository jobRepository;
    @Autowired
    private UserCoreService userCoreService;

    @Test
    @Transactional
    void applyjob() {
        List<User> list = new ArrayList<>();
        User user1 = new User(0, "admin", "admin", null, null, "yl", "sd"
                , "434", "dsf", null, 4, 2, new ArrayList<>());
        User user2 = new User(0, "alex", "admin", null, null, "yl", "sd"
                , "434", "dsf", null, 4, 2, new ArrayList<>());
        userRepository.save(user1);
        userRepository.save(user2);
        jobRepository.save(new JobOrder(0, 1, "sda", null, null, null, 0, 3, false, 0, null));
        Assertions.assertEquals("admin", userCoreService.findById(1).getUsername());
        dashBroadRepository.save(new DashBroad(0, 1, 1, 2, list, list));
        String res = dashbroadService.Applyjob(1, 2);
        Assertions.assertEquals("successfully added", res);
    }
}
Log:
2023-02-12T02:26:17.457+08:00 WARN 15971 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
2023-02-12T02:26:17.659+08:00 INFO 15971 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Node -1 disconnected.
2023-02-12T02:26:18.873+08:00 WARN 15971 --
It just loops over the output above, but when I stop it, the test passes because of the catch block. My question is: can I make it throw a runtime error right away and let the catch handle it, or build a mock consumer for Kafka, or is there any way to just skip the Kafka part? Please help.

The producer sends messages to Kafka independently of the consumers. Why do you think the problem is waiting for the consumer? You probably haven't set up a Kafka configuration for the test, so the kafkaTemplate can't connect to a broker.
First of all, you can delegate the work of sending a message to a separate KafkaSender service, following the Single Responsibility Principle (move the sendNotice method to a new KafkaSender class):
@Service
@AllArgsConstructor
public class KafkaSender {
    private final KafkaTemplate<String, Object> kafkaTemplate;

    public void sendNotice(String notification, int apply_id, int order_id) {
        // ...
    }
}
This will make it easier to test the current complex DashbroadService class.
Next, what kind of test do you want to write?
If you want to write a unit test without Kafka, then just mock this KafkaSender bean in the Spring context for the test:
@SpringBootTest
@AutoConfigureMockMvc
class DashbroadServiceTest {
    // ...
    @MockBean
    private KafkaSender kafkaSender;
    // ...
}
You will also be able to verify the calls to this mocked kafkaSender bean via Mockito.verify(...) if needed.
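For example, a verification on the mock might look like this (a sketch only; the sendNotice signature follows the KafkaSender outline above, and the ids match the test data in the question):

// Sketch: assert the service delegated to the mocked KafkaSender exactly once,
// with applier id 2 and order id 1 (anyString() for the notification text).
String res = dashbroadService.Applyjob(1, 2);
Assertions.assertEquals("successfully added", res);
Mockito.verify(kafkaSender)
        .sendNotice(Mockito.anyString(), Mockito.eq(2), Mockito.eq(1));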
If you want to write an integration or E2E test with Kafka, then use Embedded Kafka or Kafka with Testcontainers (see the docs). In this case, you can configure the producer to connect to a running Kafka broker. You can also programmatically create a consumer for additional validation of messages in topics (it is not necessary to send messages through the Spring kafkaTemplate).
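A minimal sketch with spring-kafka-test's embedded broker (assuming the spring-kafka-test dependency is on the test classpath; the topic name follows the question's code, and the test data setup from the original test is omitted):

@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "notificationTopic",
        bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class DashbroadServiceIT {
    @Autowired
    private DashbroadService dashbroadService;

    @Test
    void applyjobPublishesNotification() {
        // The producer now connects to the embedded broker instead of the
        // unreachable localhost:9092 seen in the log, so the send completes.
        Assertions.assertEquals("successfully added", dashbroadService.Applyjob(1, 2));
    }
}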

Related

Assertions on RabbitMQ listener message received in Spring Boot integration tests behave strangely

I have a RabbitMQ listener which receives messages from the queue successfully, but it fails on an assertion in a Spring Boot integration test.
I'm adding every message received on the queue to a list, but at the assertion, the list is empty.
Below are my classes:
@Component
public class NotificationEventListener {
    private List<MigrationEvent> queuedEvents = new ArrayList<>();

    @RabbitListener(queues = "queue-name", concurrency = "1")
    public void handleNotificationEvent(@Payload final MigrationEvent migrationEvent) {
        queuedEvents.add(0, migrationEvent);
    }

    public MigrationEvent getLatestMigrationEvent() {
        return queuedEvents.get(0);
    }
}
This is in the integration test class:
@ExtendWith(SpringExtension.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureMockMvc
public class ControllerTest {
    @Autowired
    protected NotificationEventListener listener;

    @Test
    void givenUserMigrationStateExist_whenCallPostPlm_thenUserStateInQueueShouldBeReturned() throws Exception {
        final var userUid = TestDataGenerator.generateUserUid();
        final var txId = TestDataGenerator.generateCorrelationId();
        final var messageCount = getAllMigrationEvents().size();
        final var response = assertQueueUserSuccess(userUid, txId);
        // above line calls an http end point in the server code and puts message onto rabbit queue
        assertMigrationEventOnQueue(userUid, txId); // fails here, the list is empty
        assertThat((getAllMigrationEvents().size() - messageCount)).isEqualTo(1);
    }

    protected void assertMigrationEventOnQueue(final String userUid, final String txId) {
        assertThat(listener.getLatestMigrationEvent().getUseruid()).isEqualTo(userUid);
        assertThat(listener.getLatestMigrationEvent().getCorrelationId()).isEqualTo(txId);
    }
}
I can assure you this is not a case of delay, because by adding loggers I could see the message being received and added to the list. But at the time of the assertion in the test class, it fails, saying the list is empty.
It seems as if there are two different processes running, one verifying and the other listening. Is that something to do with the listener?

Transactional kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers are handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly configure the commonErrorHandler to null in the container factory. I do not see any evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional kafka listeners with Spring-Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after-rollback processor? Does it roll back the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional kafka listener, is there ever more than one layer of error handling I should be aware of? E.g. if my common error handler re-throws exceptions of kind T, will another handler catch that and potentially start retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {
    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        var afterRollbackProcessor =
                new DefaultAfterRollbackProcessor<Object, Object>(
                        (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                        new FixedBackOff(1_000, 1));
        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
                container -> {
                    var groupId = container.getContainerProperties().getGroupId();
                    if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                        container.setAfterRollbackProcessor(afterRollbackProcessor);
                        // If I set commonErrorHandler to null, it is defaulted instead.
                    }
                });
        return factory;
    }
}
@Component
public class InputProcessor {
    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);
    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);
        LOGGER.info("Processing {}", message);
        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);
        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):
spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As they say, we need to set the Kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...);
    return factory;
}
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    // Apply default spring-boot configuration.
    bootConfigurer.configure(factory, consumerFactory);
    factory.setContainerCustomizer(
            container -> {
                ... // do whatever
            });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container and the offsets are sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
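A sketch of that wiring under the question's setup (bean names are illustrative; it assumes Spring Boot's auto-configured, transactional ProducerFactory, i.e. the transaction-id-prefix set in the application.yaml above):

@Bean
public KafkaTransactionManager<Object, Object> kafkaTransactionManager(
        ProducerFactory<Object, Object> producerFactory) {
    // Requires a transactional producer factory (transaction-id-prefix set).
    return new KafkaTransactionManager<>(producerFactory);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        KafkaTransactionManager<Object, Object> kafkaTransactionManager) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    factory.setConsumerFactory(consumerFactory);
    // With the transaction manager on the container, the container starts the
    // Kafka transaction and sends the consumed offsets to it, so the
    // AfterRollbackProcessor handles errors after a rollback.
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}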

Netty: how to test a handler which uses the remote address of a client

I have a Netty TCP server with Spring Boot 2.3.1 with the following handler:
@Slf4j
@Component
@RequiredArgsConstructor
@ChannelHandler.Sharable
public class QrReaderProcessingHandler extends ChannelInboundHandlerAdapter {
    private final CarParkPermissionService permissionService;
    private final Gson gson = new Gson();
    private String remoteAddress;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ctx.fireChannelActive();
        remoteAddress = ctx.channel().remoteAddress().toString();
        if (log.isDebugEnabled()) {
            log.debug(remoteAddress);
        }
        ctx.writeAndFlush("Your remote address is " + remoteAddress + ".\r\n");
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        log.info("CLIENT_IP: {}", remoteAddress);
        String stringMsg = (String) msg;
        log.info("CLIENT_REQUEST: {}", stringMsg);
        String lowerCaseMsg = stringMsg.toLowerCase();
        if (RequestType.HEARTBEAT.containsName(lowerCaseMsg)) {
            HeartbeatRequest heartbeatRequest = gson.fromJson(stringMsg, HeartbeatRequest.class);
            log.debug("heartbeat request: {}", heartbeatRequest);
            HeartbeatResponse response = HeartbeatResponse.builder()
                    .responseCode("ok")
                    .build();
            ctx.writeAndFlush(response + "\n\r");
        }
    }
}
Request DTO:
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class HeartbeatRequest {
    private String messageID;
}
Response DTO:
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class HeartbeatResponse {
    private String responseCode;
}
The logic is quite simple; I only have to know the IP address of the client.
I need to test it as well.
I have been looking at many resources on testing Netty handlers, such as:
Testing Netty with EmbeddedChannel
How to unit test netty handler
However, it didn't work for me.
For EmbeddedChannel I get the following "error": Your remote address is embedded.
Here is code:
@ActiveProfiles("test")
@RunWith(MockitoJUnitRunner.class)
public class ProcessingHandlerTest_Embedded {
    @Mock
    private PermissionService permissionService;
    private EmbeddedChannel embeddedChannel;
    private final Gson gson = new Gson();
    private ProcessingHandler processingHandler;

    @Before
    public void setUp() {
        processingHandler = new ProcessingHandler(permissionService);
        embeddedChannel = new EmbeddedChannel(processingHandler);
    }

    @Test
    public void testHeartbeatMessage() {
        // given
        HeartbeatRequest heartbeatMessage = HeartbeatRequest.builder()
                .messageID("heartbeat")
                .build();
        HeartbeatResponse response = HeartbeatResponse.builder()
                .responseCode("ok")
                .build();
        String request = gson.toJson(heartbeatMessage).concat("\r\n");
        String expected = gson.toJson(response).concat("\r\n");
        // when
        embeddedChannel.writeInbound(request);
        // then
        Queue<Object> outboundMessages = embeddedChannel.outboundMessages();
        assertEquals(expected, outboundMessages.poll());
    }
}
Output:
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_IP: embedded
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_REQUEST: {"messageID":"heartbeat"}
22:21:29.067 [main] DEBUG handler.ProcessingHandler - heartbeat request: HeartbeatRequest(messageID=heartbeat)
org.junit.ComparisonFailure:
However, I don't know how to write an exact test for such a case.
Here is a snippet from configuration:
@Bean
@SneakyThrows
public InetSocketAddress tcpSocketAddress() {
    // for now, hostname is: localhost/127.0.0.1:9090
    return new InetSocketAddress("localhost", nettyProperties.getTcpPort());
    // for real client devices: A05264/172.28.1.162:9090
    // return new InetSocketAddress(InetAddress.getLocalHost(), nettyProperties.getTcpPort());
}
@Component
@RequiredArgsConstructor
public class QrReaderChannelInitializer extends ChannelInitializer<SocketChannel> {
    private final StringEncoder stringEncoder = new StringEncoder();
    private final StringDecoder stringDecoder = new StringDecoder();
    private final QrReaderProcessingHandler readerServerHandler;
    private final NettyProperties nettyProperties;

    @Override
    protected void initChannel(SocketChannel socketChannel) {
        ChannelPipeline pipeline = socketChannel.pipeline();
        // Add the text line codec combination first
        pipeline.addLast(new DelimiterBasedFrameDecoder(1024 * 1024, Delimiters.lineDelimiter()));
        pipeline.addLast(new ReadTimeoutHandler(nettyProperties.getClientTimeout()));
        pipeline.addLast(stringDecoder);
        pipeline.addLast(stringEncoder);
        pipeline.addLast(readerServerHandler);
    }
}
How can I test a handler that uses the IP address of a client?
Two things that could help:
1. Do not annotate with @ChannelHandler.Sharable if your handler is NOT sharable; this can be misleading. Remove unnecessary state from handlers: in your case you should remove the remoteAddress member variable and ensure that Gson and CarParkPermissionService can be reused and are thread-safe.
2. "Your remote address is embedded" is NOT an error. It is actually the message written by your handler onto the outbound channel (cf. your channelActive() method).
So it looks like it could work.
EDIT
Following your comments, here are some clarifications regarding the second point. I mean that your code making use of EmbeddedChannel is almost correct; there is just a misunderstanding about the expected results (the assertion).
To make the unit test pass, you just have either:
to comment out this line in channelActive(): ctx.writeAndFlush("Your remote ...")
or to poll the second message from Queue<Object> outboundMessages in testHeartbeatMessage()
Indeed, when you do this:
// when
embeddedChannel.writeInbound(request);
(1) You actually open the channel once, which fires a channelActive() event. You don't have a log in it but we see that the variable remoteAddress is not null afterwards, meaning that it was assigned in the channelActive() method.
(2) At the end of the channelActive() method, you eventually already send back a message by writing on the channel pipeline, as seen at this line:
ctx.writeAndFlush("Your remote address is " + remoteAddress + ".\r\n");
// In fact, this is the message you see in your failed assertion.
(3) Then the message written by embeddedChannel.writeInbound(request) is received and can be read, which fires a channelRead() event. This time, we see this in your log output:
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_IP: embedded
22:21:29.062 [main] INFO handler.ProcessingHandler - CLIENT_REQUEST: {"messageID":"heartbeat"}
22:21:29.067 [main] DEBUG handler.ProcessingHandler - heartbeat request: HeartbeatRequest(messageID=heartbeat)
(4) At the end of channelRead(ChannelHandlerContext ctx, Object msg), you will then send a second message (the expected one):
HeartbeatResponse response = HeartbeatResponse.builder()
.responseCode("ok")
.build();
ctx.writeAndFlush(response + "\n\r");
Therefore, with the following code of your unit test...
Queue<Object> outboundMessages = embeddedChannel.outboundMessages();
assertEquals(expected, outboundMessages.poll());
... you should be able to poll() two messages:
"Your remote address is embedded"
{"responseCode":"ok"}
Does it make sense to you?
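Concretely, a sketch of the second option (the exact content of the second message depends on how HeartbeatResponse is rendered; note the handler above writes response + "\n\r", i.e. its toString(), not JSON):

// then: poll both outbound messages in order
Queue<Object> outboundMessages = embeddedChannel.outboundMessages();
// first message, written by channelActive() when the channel opened
assertEquals("Your remote address is embedded.\r\n", outboundMessages.poll());
// second message, the heartbeat response written by channelRead()
Object heartbeatResponse = outboundMessages.poll();
assertNotNull(heartbeatResponse);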

Return job id "immediately" for spring batch job before it completes

I am working on a project where we are using Spring Boot, Spring Batch and Camel.
The batch process is started by a call to a rest endpoint. The rest controller starts a camel route that starts the spring batch job flow (via spring batch camel component).
I have no control over the external application that calls my application. My application is part of a bigger nightly work flow.
The batch job can take a long time to complete and therefore the external application periodically polls my batch job via another rest endpoint asking if the job is complete. It does this by polling a status rest endpoint with the id of the jobExecution it wants a status on.
To accomplish this flow I have implemented a rest controller that starts the camel route via a ProducerTemplate. My problem is returning the job execution id immediately after starting the camel route. I don't want the rest call to wait until the job is complete to return.
startJobViaRestCall ------> createBatchJob ----> runBatchJobUntilDone
        ^                        |
        |                        |
        +--- return jobExecutionData
I have tried using async calls and futures, but with no luck. I have also tried to use Camel's Wire Tap, to no avail. The problem is that there are only "onComplete" events. I need a hook that returns as soon as the job has been created, but not yet run.
For example, the following code waits until the batch job is done before returning the JobExecution data I want to send back (as JSON). It makes sense, as extractFutureBody will wait until the response is ready.
@RestController
@Slf4j
public class BatchJobController {
    @Autowired
    ProducerTemplate producerTemplate;

    @RequestMapping(value = "/batch/job/start", method = RequestMethod.GET)
    @ResponseBody
    public String startBatchJob() {
        log.info("BatchJob start called...");
        String jobExecution = producerTemplate.extractFutureBody(
                producerTemplate.asyncRequestBody(BatchRoute.ENDPOINT_JOB_START, ""), String.class);
        return jobExecution;
    }
}
The Camel route is a simple call to the spring-batch component:
public class BatchRoute<I, O> extends BaseRoute {
    private static final String ROUTE_START_BATCH = "spring-batch:springBatchJob";

    @Override
    public void configure() {
        super.configure();
        from(ENDPOINT_JOB_START).to(ROUTE_START_BATCH);
    }
}
Any ideas as to how I can return the JobExecution data as soon as it is available?
I'm not sure how you would do it in Camel, but here is a sample job execution using Spring REST.
@RestController
public class KpRest {
    private static final Logger LOG = LoggerFactory.getLogger(KpRest.class);
    private static String RUN_ID_KEY = "run.id";

    @Autowired
    private JobLauncher launcher;
    private final AtomicLong incrementer = new AtomicLong();

    @Autowired
    private Job job;

    @RequestMapping("/hello")
    public String sayHello() {
        try {
            JobParameters parameters = new JobParametersBuilder()
                    .addLong(RUN_ID_KEY, incrementer.incrementAndGet()).toJobParameters();
            JobExecution execution = launcher.run(job, parameters);
            LOG.info("JobId {}, JobStatus {}", execution.getJobId(), execution.getStatus().getBatchStatus());
            return String.valueOf(execution.getJobId());
        } catch (JobExecutionAlreadyRunningException | JobRestartException | JobInstanceAlreadyCompleteException
                | JobParametersInvalidException e) {
            LOG.info("Job execution failed, {}", e);
        }
        return "Some Error";
    }
}
You can make the job asynchronous by modifying the JobLauncher:
@Bean
public JobLauncher simpleJobLauncher(JobRepository jobRepository) {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository);
    jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    return jobLauncher;
}
With an asynchronous TaskExecutor, launcher.run() returns immediately, so the JobExecution (and its id) is available while the job keeps running in the background. Refer to the documentation for more info.

Spring RabbitListener stop listening to queue using annotation syntax

A colleague and I are working on an application using Spring which needs to get messages from a RabbitMQ queue. The idea is to do this using the (usually excellent) Spring annotation system to make the code easy to understand. We have the system working using the @RabbitListener annotation, but we want to get a message on demand. The @RabbitListener annotation does not do this; it just receives messages when they are available. The demand is determined by the "readiness" of the client, i.e. a client should "get" a message from the queue, stop listening, and process the message. Then it determines whether it is ready to receive a new one and reconnects to the queue.
We have been looking into doing this by hand using just the spring-amqp/spring-rabbit modules, and while this is probably possible, we would really like to do it with Spring. After many hours of searching and going through the documentation, we have not been able to find an answer.
Here is the receiving code we currently have:
@RabbitListener(queues = "jobRequests")
public class Receiver {
    @Autowired
    private JobProcessor jobProcessor;

    @RabbitHandler
    public void receive(Job job) throws InterruptedException, IOException {
        System.out.println(" [x] Received '" + job + "'");
        jobProcessor.processJob(job);
    }
}
Job processor:
@Service
public class JobProcessor {
    @Autowired
    private RabbitTemplate rabbitTemplate;

    public boolean processJob(Job job) throws InterruptedException, IOException {
        rabbitTemplate.convertAndSend("jobResponses", job);
        System.out.println(" [x] Processing job: " + job);
        rabbitTemplate.convertAndSend("processedJobs", job);
        return true;
    }
}
In other words, when a job is received by the Receiver, it should stop listening for new jobs, wait for the job processor to be done, and then start listening for new messages.
We have re-created the null pointer exception; here is the code we use to send from the server side.
@Controller
public class MainController {
    @Autowired
    RabbitTemplate rabbitTemplate;

    @Autowired
    private Queue jobRequests;

    @RequestMapping("/do-job")
    public String doJob() {
        Job job = new Job(new Application(), "henk", 42);
        System.out.println(" [X] Job sent: " + job);
        rabbitTemplate.convertAndSend(jobRequests.getName(), job);
        return "index";
    }
}
And then the receiving code on the client side
@Component
public class Receiver {
    @Autowired
    private JobProcessor jobProcessor;

    @Autowired
    private RabbitListenerEndpointRegistry rabbitListenerEndpointRegistry;

    @RabbitListener(queues = "jobRequests")
    public void receive(Job job) throws InterruptedException, IOException, TimeoutException {
        Collection<MessageListenerContainer> messageListenerContainers = rabbitListenerEndpointRegistry.getListenerContainers();
        for (MessageListenerContainer listenerContainer : messageListenerContainers) {
            System.out.println(listenerContainer);
            listenerContainer.stop();
        }
        System.out.println(" [x] Received '" + job + "'");
        jobProcessor.processJob(job);
        for (MessageListenerContainer listenerContainer : messageListenerContainers) {
            listenerContainer.start();
        }
    }
}
And the updated job processor
@Service
public class JobProcessor {
    public boolean processJob(Job job) throws InterruptedException, IOException {
        System.out.println(" [x] Processing job: " + job);
        return true;
    }
}
And the stacktrace
[x] Received 'Job{application=com.olifarm.application.Application@aaa517, name='henk', id=42}'
[x] Processing job: Job{application=com.olifarm.application.Application@aaa517, name='henk', id=42}
Exception in thread "SimpleAsyncTaskExecutor-1" java.lang.NullPointerException
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.isActive(SimpleMessageListenerContainer.java:838)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:93)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1301)
at java.lang.Thread.run(Thread.java:745)
2015-12-18 11:17:44.494 WARN 325899 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
java.lang.NullPointerException: null
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.isActive(SimpleMessageListenerContainer.java:838) ~[spring-rabbit-1.5.2.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:93) ~[spring-rabbit-1.5.2.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1195) ~[spring-rabbit-1.5.2.RELEASE.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]
The stopping of the listener works and we do receive a new job, but when it tries to start again, the NPE is thrown. We checked the RabbitMQ log and found that the connection is closed for about 2 seconds and then re-opened automatically, even if we put the thread to sleep in the job processor. Might this be the source of the problem? The error doesn't break the program, however, and after it is thrown the receiver is still able to receive new jobs. Are we abusing the mechanism here, or is this valid code?
To get messages on-demand, it's generally better to use rabbitTemplate.receiveAndConvert() rather than a listener; that way you completely control when you receive messages.
Starting with version 1.5 you can configure the template to block for some period of time (or until a message arrives). Otherwise it immediately returns null if there's no message.
The listener is really designed for message-driven applications.
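A minimal on-demand sketch (the queue name matches the question; the timeout value is illustrative):

// Poll a single job when the client is ready, instead of using a listener.
rabbitTemplate.setReceiveTimeout(5_000); // block up to 5s (spring-rabbit 1.5+)
Job job = (Job) rabbitTemplate.receiveAndConvert("jobRequests");
if (job != null) {
    jobProcessor.processJob(job); // fetch the next message only when ready
}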
If you can block the thread in the listener until the job completes, no more messages will be delivered - by default the container has only one thread.
If you can't block the thread until the job completes, for some reason, you can stop()/start() the listener container by getting a reference to it from the Endpoint Registry.
It's generally better to stop the container on a separate thread.
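For the stop/start approach, a sketch that keeps the stop() call off the listener thread (the listener id "jobListener" and the taskExecutor bean are illustrative):

@RabbitListener(id = "jobListener", queues = "jobRequests")
public void receive(Job job) {
    MessageListenerContainer container =
            rabbitListenerEndpointRegistry.getListenerContainer("jobListener");
    taskExecutor.execute(() -> {
        container.stop(); // stop consuming while the job runs
        try {
            jobProcessor.processJob(job);
        } catch (Exception e) {
            // log and decide whether the message should be redelivered
        } finally {
            container.start(); // resume when ready for the next message
        }
    });
}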
