Mock a @PostConstruct method for a mock object containing this method - Spring

I'm mocking a Kafka producer (a custom library), and when I run the test cases I get:
2022-03-10 11:34:33 [] WARN [kafka-producer-network-thread | test-kafka-producer] o.apache.kafka.clients.NetworkClient [NetworkClient.java:748] [Producer clientId=test-kafka-producer] Connection to node -1 (localhost/127.0.0.1:8000) could not be established. Broker may not be available.
There is a start method in the Producer class annotated with @PostConstruct. How can I mock this method to do nothing? I have a class that has the Producer @Autowired. When I mock the Producer it creates a mock object, but how can I stop Spring from calling the original @PostConstruct method of the Producer class?
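For what it's worth, the reason mocking can sidestep this at all: a mock is a generated stand-in whose method bodies never run, so calling its start() does nothing; the broker warning only appears if a real Producer is still instantiated somewhere. Below is a stdlib-only sketch of that stand-in idea using a JDK dynamic proxy; the Producer interface, start(), and mockProducer() are illustrative placeholders, not the actual library:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class MockInitDemo {
    // Hypothetical stand-in for the library's Producer API.
    interface Producer {
        void start();          // imagine this is the @PostConstruct method
        String send(String m);
    }

    static class RealProducer implements Producer {
        public void start() { throw new IllegalStateException("connects to a broker"); }
        public String send(String m) { return "sent:" + m; }
    }

    static final List<String> calls = new ArrayList<>();

    // A dynamic proxy that records the call and returns a default value:
    // the real start() body is never executed, which is essentially what
    // happens when you invoke methods on a Mockito mock.
    static Producer mockProducer() {
        InvocationHandler h = (proxy, method, args) -> {
            calls.add(method.getName());
            return method.getReturnType() == String.class ? "" : null;
        };
        return (Producer) Proxy.newProxyInstance(
                Producer.class.getClassLoader(), new Class<?>[]{Producer.class}, h);
    }

    public static void main(String[] args) {
        Producer p = mockProducer();
        p.start();                 // no broker connection attempted
        System.out.println(calls); // the calls were recorded, nothing ran
    }
}
```

The remaining question is only whether the container still creates the real bean alongside the mock; replacing the bean definition itself (rather than wrapping a real instance) is what keeps the original @PostConstruct from ever firing.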

ServiceActivator invoked twice when only one message is published

I have the following JUnit test that is basically an example of a production test.
@Autowired
private MessageChannel messageChannel;

@SpyBean
@Autowired
private Handler handler;

@Test
public void testPublishing() throws InterruptedException {
    SomeEvent event = new SomeEvent(); // implements Message
    messageChannel.send(event);
    Thread.sleep(2000); // give the async flow 2 seconds to complete
    Mockito.verify(handler, Mockito.times(1))
           .someMethod(Mockito.any());
}
The service activator is the someMethod method inside the Handler class. For some reason this test fails, stating that someMethod was invoked twice even though only a single message was published to the channel. I even added code to someMethod to print the memory address of the message consumed, and both invocations show the exact same address. Any idea what could cause this?
NOTE: I built this basic code example as a test case and it verifies a single invocation as I'd expect, but what could possibly (in my production system test) cause the send operation to result in 2 separate invocations of the service activator?
NOTE2: I added a print statement inside my real service activator. When I have the @SpyBean annotation on the handler and use the Mockito.verify(...) call, I get two printouts of the input. However, if I remove the annotation and the verify call, I only get one printout. This does not happen in the simple demo I shared here.
NOTE3: Seems to be some sort of weird SpyBean behavior as I am only seeing the single event downstream. No idea why Mockito is giving me trouble on this.

ProducerFencedException after kafka consumer group rebalancing

I cannot comment on the similar topic:
TransactionId prefix for producer-only and read-process-write - ProducerFencedException
so I will ask a new question.
Use case:
One topic with 2 partitions
Spring @KafkaListener with concurrency=1 (default) in consumer group "sample-consumer-group"
Two instances of the same application - both with the same "transaction-id-prefix"
1) I start the first app instance (let's call it "instance1") - everything is ok - a single consumer subscribes to both partitions. Log:
o.s.k.l.KafkaMessageListenerContainer : sample-consumer-group: partitions assigned: [sampleTopic-1, sampleTopic-0]
2) I start the second app instance (instance2) - everything seems ok - log from this instance:
o.s.k.l.KafkaMessageListenerContainer : sample-consumer-group: partitions assigned: [sampleTopic-1]
log from "instance1":
o.s.k.l.KafkaMessageListenerContainer : sample-consumer-group: partitions revoked: [sampleTopic-1, sampleTopic-0]
o.s.k.l.KafkaMessageListenerContainer : sample-consumer-group: partitions assigned: [sampleTopic-0]
Still seems ok...
But when I then try to send a message to some other topic (not from any @KafkaListener, but from a @Transactional method - so this is a producer-only transaction), the following errors occur:
ERROR 4395 --- [roducer-tx-prefix-0] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-tx-prefix-0, transactionalId=tx-prefix-0] Aborting producer batches due to fatal error
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
ERROR 4395 --- [roducer-tx-prefix-0] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='sync-register' and payload='2020-04-21T13:52:12.148412Z' to topic anotherTopic
So is it related to the problem that I should have a separate transaction-id-prefix on each instance for producer-only transactions? If yes, what is the current status of this, and how can I achieve it without using separate KafkaTemplates for consumer-started and producer-started transactions?
See the documentation.
As mentioned in the overview, the producer factory is configured with this property to build the producer's transactional.id. There is rather a dichotomy when specifying this property: when running multiple instances of the application, it must be the same on all instances to satisfy fencing zombies (also mentioned in the overview) when producing records on a listener container thread. However, when producing records using transactions that are not started by a listener container, the prefix has to be different on each instance. Version 2.3 makes this simpler to configure, especially in a Spring Boot application. In previous versions, you had to create two producer factories and KafkaTemplates - one for producing records on a listener container thread and one for stand-alone transactions started by kafkaTemplate.executeInTransaction() or by a transaction interceptor on a @Transactional method.
Now, you can override the factory’s transactionalIdPrefix on the KafkaTemplate and the KafkaTransactionManager.
When using a transaction manager and template for a listener container, you would normally leave this to default to the producer factory's property. This value should be the same for all application instances. For transactions started by the template (or the transaction manager for @Transactional), you should set the property on the template and the transaction manager, respectively. This property must have a different value on each application instance.
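Under that 2.3+ approach, the two-template wiring might look roughly like this. This is a sketch, not the question's actual configuration: the bean names and the UUID-based per-instance suffix are illustrative, and setTransactionIdPrefix should be verified against your spring-kafka version:

```java
import java.util.UUID;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaTxConfig {

    // Template used on listener-container threads: keeps the factory's
    // prefix, which must be IDENTICAL on every instance (zombie fencing).
    @Bean
    public KafkaTemplate<String, String> listenerTemplate(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf);
    }

    // Template for producer-only transactions: overrides the prefix with a
    // value that is UNIQUE per instance (a random UUID here, as an example).
    @Bean
    public KafkaTemplate<String, String> standaloneTxTemplate(ProducerFactory<String, String> pf) {
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);
        template.setTransactionIdPrefix("producer-only-" + UUID.randomUUID());
        return template;
    }
}
```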
Here's the working workaround I've figured out, if you use Kafka < 2.5 and you have to satisfy the requirement of having static transaction id-s for consumer-based transactions, with random transaction ids for producer-based:
public class TransactionSourceAwareKafkaProducerFactory<K, V> extends DefaultKafkaProducerFactory<K, V> {

    public TransactionSourceAwareKafkaProducerFactory(Map<String, Object> configs) {
        super(configs);
    }

    @NonNull
    @Override
    public Producer<K, V> createProducer(String txIdPrefixArg) {
        if (isProducerBasedTransaction()) {
            // "use some random id here" (original placeholder) - e.g. a UUID
            return super.createProducer(UUID.randomUUID().toString());
        }
        return super.createProducer(txIdPrefixArg);
    }

    protected boolean isConsumerBasedTransaction() {
        // A non-null suffix means we are on a listener-container thread.
        return TransactionSupport.getTransactionIdSuffix() != null;
    }

    protected boolean isProducerBasedTransaction() {
        return !isConsumerBasedTransaction();
    }
}

Notifying calling thread when all asynchronous child threads used for transformation have completed in spring

I need to transform data in child threads managed by a ThreadPoolTaskExecutor. Is there a way the calling thread can be notified when all asynchronous child threads have completed the transformation in Spring?
Approach outside Spring:
One way I can think of (outside Spring) is using ExecutorService. Each invocation of transformerChannel.send would happen in a Callable, which can return a Future, i.e.:
public DataToBeTransformed call() {
    return sendDataToTransformer();
}

ExecutorService pool = Executors.newFixedThreadPool(7);
List<Callable<DataToBeTransformed>> callList = new ArrayList<>();
List<Future<DataToBeTransformed>> futures = pool.invokeAll(callList);
Is there a way to do anything similar in Spring using the transformer tag? The transformer method in this case is not part of any Callable; the invocation of the transformer channel's send method is not being done in a Callable.
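As a concrete, runnable version of the ExecutorService idea above (the class and method names are made up for the sketch; transform() stands in for whatever sendDataToTransformer() would return): invokeAll blocks the calling thread until every Callable has completed, so the line after it is the "all children finished" notification point.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TransformAllDemo {
    // Stand-in for the real transformation work.
    static String transform(String input) {
        return input.toUpperCase();
    }

    static List<String> transformAll(List<String> inputs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(7);
        try {
            List<Callable<String>> tasks = new ArrayList<>();
            for (String input : inputs) {
                tasks.add(() -> transform(input));
            }
            // Blocks until ALL tasks have completed (or the pool is interrupted).
            List<Future<String>> futures = pool.invokeAll(tasks);
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // already done; get() won't block further
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transformAll(List.of("a", "b", "c"))); // [A, B, C]
    }
}
```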

WebAppInitializer -- how to cause shutdown on startup errors

I'm trying to perform some checks at the startup of a Spring web application (e.g. check that the DB version is as expected). If the checks fail, the servlet should be killed (or better yet, never started) so as to prevent it from serving any pages. Ideally the containing Tomcat/Netty/whatever service should also be killed (although this looks more tricky).
I can't call System.exit because my startup check depends on a lot of services that should be safely shut down (e.g. DB connections, etc...).
I found this thread, which suggests calling close on the spring context. However, except for reporting exceptions, spring merrily continues to start up the servlet (see below).
I've looked into the Java Servlet documentation - it says not to call destroy on the servlet - and I've no idea whether I'd be calling Servlet.destroy from methods where the Servlet object appears further up the stack (I don't want to eat my own tail). In fact, I'd rather the servlet was never created in the first place. Better to run my at-startup checks first, before starting any web serving.
Here's what I have...
@Service
class StartupCheckService extends InitializingBean {
  @Autowired var a: OtherServiceToCheckA = null
  @Autowired var b: OtherServiceToCheckB = null

  override def afterPropertiesSet = {
    try {
      checkSomeEssentialStuff()
    } catch {
      case e: Any => {
        // DON'T LET THE SERVICE START!
        val ctx = getTheContext()
        ctx.close()
        throw e
      }
    }
  }
}
The close call causes the error:
BeanFactory not initialized or already closed - call 'refresh' before accessing beans via the ApplicationContext.
presumably because you shouldn't call close while bean initialization is happening (and calling refresh there would likely put us into an infinite loop). Here's my startup code...
class WebAppInitializer extends WebApplicationInitializer {
  def onStartup(servletContext: ServletContext): Unit = {
    val ctx = new AnnotationConfigWebApplicationContext()
    // Includes StartupCheckService
    ctx.register(classOf[MyAppConfig])
    ctx.registerShutdownHook() // add a shutdown hook for the above context
    // Can't access the StartupCheckService bean here.
    val loaderListener = new ContextLoaderListener(ctx)
    // Make the context listen for servlet events
    servletContext.addListener(loaderListener)
    // Make the context aware of the servletContext
    ctx.setServletContext(servletContext)
    val servlet: Dynamic = servletContext.addServlet(DISPATCHER_SERVLET_NAME, new DispatcherServlet(ctx))
    servlet.addMapping("/")
    servlet.setLoadOnStartup(1)
  }
}
I've tried doing this kind of thing in onStartup
ctx.refresh()
val ss: StartupCheckService = ctx.getBean(classOf[StartupCheckService])
ss.runStartupRoutines()
but apparently I'm not allowed to call refresh until onStartup exits.
Sadly, Spring's infinite onion of abstraction layers is making it very hard for me to grapple with this simple problem. All of the important details about the order in which things get initialized are hidden.
Before the "should have Googled it" Nazis arrive: yes, I have already searched for this.
I'm not sure why you need to do this in a WebApplicationInitializer. If you want to configure a @Bean that does the health check for you, then do it in an ApplicationListener<ContextRefreshedEvent>. You can access the ConfigurableApplicationContext from there (the source of the event) and close it. That will shut down the Spring context. Throw an exception if you want the Servlet and the webapp to die.
You can't kill a container (Tomcat etc.) unless you started it. You could try using an embedded container (e.g. Spring Boot will do that for you easily).
As far as I understand, you don't have to explicitly call close().
Just let the exception escape afterPropertiesSet(); Spring should automatically stop instantiating the remaining beans and shut down the whole context.
You can use @PreDestroy if you have to make some cleanup on beans which have been initialized so far.
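A deliberately simplified, Spring-free sketch of the behavior this answer describes: a startup sequence where one initializer throwing aborts the rest and triggers cleanup of the beans created so far. All names here are illustrative; this is not Spring's actual refresh logic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class FailFastInitDemo {
    static final List<String> events = new ArrayList<>();

    // Runs initializers in order; if one throws (like afterPropertiesSet),
    // the remaining ones never run and the already-created "beans" get a
    // cleanup callback (the @PreDestroy analogue) before the error propagates.
    static void startUp(List<Supplier<String>> initializers) {
        List<String> started = new ArrayList<>();
        try {
            for (Supplier<String> init : initializers) {
                started.add(init.get());
            }
        } catch (RuntimeException e) {
            for (String bean : started) {
                events.add("destroy:" + bean);
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            startUp(List.of(
                    () -> { events.add("init:db"); return "db"; },
                    () -> { throw new IllegalStateException("DB version mismatch"); },
                    () -> { events.add("init:web"); return "web"; } // never reached
            ));
        } catch (IllegalStateException e) {
            events.add("startup-aborted");
        }
        System.out.println(events); // [init:db, destroy:db, startup-aborted]
    }
}
```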

How does a transaction work with multiple DAOs, want to understand how single connection is shared

This might be a repeat, but I couldn't find a suitable post myself.
My question is: how does it really work (how do Spring/Hibernate support this) to manage a single transaction across multiple DAO classes?
Does it really mean that the same JDBC connection is used across all the DAOs participating in a transaction? I would like to understand the fundamentals here.
Thanks in advance
Harinath
Using a simple example:
@Controller
@Transactional
@RequestMapping("/")
public class HomeController {

    @Inject
    private UserRepository userRepository;

    @Inject
    private TagRepository tagRepository;
    ...

    @RequestMapping(value = "/user/{user_id}", method = RequestMethod.POST)
    public @ResponseBody void operationX(@PathVariable("user_id") long userId) {
        User user = userRepository.findById(userId);
        List<Tags> tags = tagRepository.findTagsByUser(user);
        ...
    }
    ...
}
In this example your controller has the overarching transaction, thus the entity manager will keep track of all operations in this operationX method and commit the transaction at the end of the method. Spring's @Transactional annotation creates a proxy for the annotated class, which wraps its methods in a transaction when called. It achieves this through the use of AOP.
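A stripped-down, stdlib-only illustration of that proxy idea (this is not Spring's actual TransactionInterceptor; the interface and the begin/commit logging are made up for the sketch):

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class TxProxyDemo {
    // Hypothetical service interface the proxy will wrap.
    interface UserService { String load(long id); }

    static final List<String> log = new ArrayList<>();

    // Every call through the returned proxy is wrapped in begin/commit,
    // with rollback on exception - the shape of what @Transactional adds.
    static UserService transactional(UserService target) {
        return (UserService) Proxy.newProxyInstance(
                UserService.class.getClassLoader(),
                new Class<?>[]{UserService.class},
                (proxy, method, args) -> {
                    log.add("begin");
                    try {
                        Object result = method.invoke(target, args);
                        log.add("commit");
                        return result;
                    } catch (Exception e) {
                        log.add("rollback");
                        throw e;
                    }
                });
    }

    public static void main(String[] args) {
        UserService service = transactional(id -> "user-" + id);
        System.out.println(service.load(42)); // user-42
        System.out.println(log);              // [begin, commit]
    }
}
```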
Regarding the connection to the database - it is generally obtained from a connection pool and used for the duration of the transaction, after which it is returned to the pool. A similar question is answered here: Does the Spring transaction manager bind a connection to a thread?
EDIT:
Furthermore, for the duration of the transaction, the connection is bound to the thread. In subsequent database operations, the connection is obtained every time by getting the connection mapped to the thread in question. I believe the TransactionSynchronizationManager is responsible for doing this. The docs of which you can find here: TransactionSynchronizationManager
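A greatly simplified, stdlib-only sketch of the thread-binding idea behind TransactionSynchronizationManager (not Spring's real implementation; the point is the resources-in-a-ThreadLocal-map shape, keyed by the DataSource):

```java
import java.util.HashMap;
import java.util.Map;

public class TxBindingDemo {
    // Per-thread map of resources (e.g. a Connection) keyed by their
    // factory (e.g. a DataSource): every DAO running on the same thread
    // looks the resource up here and therefore sees the same connection.
    static final ThreadLocal<Map<Object, Object>> resources =
            ThreadLocal.withInitial(HashMap::new);

    static void bindResource(Object key, Object value) {
        resources.get().put(key, value);
    }

    static Object getResource(Object key) {
        return resources.get().get(key);
    }

    public static void main(String[] args) throws Exception {
        Object dataSource = new Object();
        bindResource(dataSource, "connection-1"); // tx begins: connection bound

        // "DAO 1" and "DAO 2" on this thread both get connection-1.
        System.out.println(getResource(dataSource));
        System.out.println(getResource(dataSource));

        // A different thread sees no bound connection (prints null).
        Thread other = new Thread(() -> System.out.println(getResource(dataSource)));
        other.start();
        other.join();
    }
}
```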
