@EventHandler retry logic and DistributedCommandBus setup - spring-boot

1st Question:
I am using Spring Eureka, and the DistributedCommandBus is set up via the following:
public CommandRouter springCloudCommandRouter(DiscoveryClient discoveryClient, Registration localServiceInstance) { ... }
public CommandBusConnector springHttpCommandBusConnector(@Qualifier("localSegment") CommandBus localSegment, RestOperations restOperations, Serializer serializer) { ... }
public DistributedCommandBus springCloudDistributedCommandBus(CommandRouter commandRouter, CommandBusConnector commandBusConnector) { ... }
My question for this part is: how can I show that this is working? I have two Kubernetes pods running the above code and have seen one run the @CommandHandler and the other run the @EventSourcingHandler, but I did not see anything in the logs to give any indication that the bus is being used.
I just want to be able to show that it is "working", as I have been asked to do.
The Eureka part is working, and I can see all the info on its dashboard.
Edit - removed 2nd question to ask in another thread

To keep my answer focused, I'll only provide a suggestion for your first question, which summarizes to:
How can I show that my DistributedCommandBus set up with Eureka is actually routing commands to different instances?
I would suggest setting up some logging around this.
That way, you can log when a message is dispatched by Node 1 and when it is handled by Node 2.
Ideal for this would be to register the LoggingInterceptor as both a MessageHandlerInterceptor and a MessageDispatchInterceptor.
To do so, you will have to register it on the DistributedCommandBus, but also on the "local segment" CommandBus. The DistributedCommandBus is in charge of dispatching the command and will thus call the LoggingInterceptor upon dispatching. The local segment/CommandBus is in charge of handing the command to a command handler in the right JVM, and as such will call the LoggingInterceptor upon handling.
The sole downside to this is that the LoggingInterceptor only becomes both a handler and a dispatch interceptor as of Axon Framework release 4.2.
Thus, for now, you will have to make do with it only being a handler interceptor.
That would suffice as well, though, as the LoggingInterceptor will then only log upon handling the command.
This will then only occur on the node which actually handles the command.
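For reference, here is a minimal sketch of that registration, assuming Axon 4.2+ (for the dispatch-interceptor part) and the bean set-up from your question:

import org.axonframework.commandhandling.CommandBus;
import org.axonframework.commandhandling.SimpleCommandBus;
import org.axonframework.commandhandling.distributed.CommandBusConnector;
import org.axonframework.commandhandling.distributed.CommandRouter;
import org.axonframework.commandhandling.distributed.DistributedCommandBus;
import org.axonframework.messaging.interceptors.LoggingInterceptor;

@Bean
@Qualifier("localSegment")
public CommandBus localSegment() {
    SimpleCommandBus localSegment = SimpleCommandBus.builder().build();
    // Logs on the node that actually handles the command
    localSegment.registerHandlerInterceptor(new LoggingInterceptor<>());
    return localSegment;
}

@Bean
public DistributedCommandBus springCloudDistributedCommandBus(CommandRouter commandRouter,
                                                              CommandBusConnector commandBusConnector) {
    DistributedCommandBus commandBus = DistributedCommandBus.builder()
            .commandRouter(commandRouter)
            .connector(commandBusConnector)
            .build();
    // Logs on the node that dispatches the command
    commandBus.registerDispatchInterceptor(new LoggingInterceptor<>());
    return commandBus;
}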
Hope this helps!

Related

Spring Boot: start listening to messages on application start

I have a Spring Boot application that starts listening on Azure IoT Hub at application start. It is done this way:
@EventListener
public void subscribeEventMessages(ContextRefreshedEvent event) {
    client
        .receive(false) // set this to false to read only the newly available events
        .subscribe(this::hubAllEventsCallback);
}
My problem is that this uses ContextRefreshedEvent, but in fact I only want to start it once, on application start.
I also checked other ways to start something at the beginning, like CommandLineRunner.
On the other hand, when implementing listeners for more standard stuff like JMS, there are specific annotations like @JmsListener, or you provide beans of specific types.
My question is: can I leverage some of these more message(subscribe)-related mechanisms to start my method?
If we don't want our @EventListener to listen on "context refresh" but only on "context start", please (try to) replace:
ContextRefreshedEvent
with ContextStartedEvent
...which is a "sibling class" with exactly this semantic difference.
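A minimal sketch of the suggested change (note: ContextStartedEvent is only published when the context's start() method is called, so in a plain Spring Boot application ApplicationReadyEvent is a common once-per-startup alternative):

import org.springframework.context.event.ContextStartedEvent;
import org.springframework.context.event.EventListener;

@EventListener
public void subscribeEventMessages(ContextStartedEvent event) {
    // Fires once on "context start" instead of on every refresh
    client.receive(false).subscribe(this::hubAllEventsCallback);
}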

Need some guidance with Spring Integration Flow

I am new to Spring Integration and have read quite some documentation and other topics here on StackOverflow. But I am still a bit overwhelmed on how to apply the newly acquired knowledge in a Spring Boot Application.
This is what should happen:
1. receive a message from a Kafka topic, e.g. from "request-topic" (payload is a custom Job POJO). InboundChannelAdapter?
2. do some preparation (checkout from a git repo)
3. process files using a batch job
4. commit & push to git, update the Job object with the commit-id
5. publish a message to Kafka with the updated Job object, e.g. to "reply-topic". OutboundChannelAdapter?
Using the DSL or plain Java configuration does not matter. My problem, after trying several variants, is that I could not achieve the desired result. For example, handlers would be called too early, or not at all, and thus the reply in step 5 would not be updated.
Also, there should only be one flow running at any given time, so I guess a queue should be involved at some point, probably at step 1(?).
Where and when should I use QueueChannels, DirectChannel (or any other?), and do I need GatewayHandlers, e.g. to reply with a commit-id?
Any hints are appreciated.
Something like this:
@Bean
IntegrationFlow flow() {
    return IntegrationFlows.from(Kafka.inboundGateway(...))
            .handle(...)     // prep
            .transform(...)  // to JobLaunchRequest
            .handle(...)     // JobLaunchingGateway
            .handle(...)     // cleanUp and return result
            .get();
}
It will only process one request at a time (with default concurrency).
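A more concrete sketch of the same flow, assuming spring-integration-kafka and spring-batch-integration are on the classpath. MyJob (the custom Job POJO, renamed here to avoid clashing with Spring Batch's Job) and the checkout()/commitAndPush() helpers are hypothetical placeholders for your payload type and git steps:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.integration.launch.JobLaunchRequest;
import org.springframework.batch.integration.launch.JobLaunchingGateway;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.kafka.dsl.Kafka;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;

@Bean
IntegrationFlow flow(ConsumerFactory<String, MyJob> consumerFactory,
                     KafkaTemplate<String, MyJob> template,
                     JobLauncher jobLauncher,
                     Job batchJob) {
    return IntegrationFlows
            .from(Kafka.messageDrivenChannelAdapter(consumerFactory, "request-topic")) // step 1
            .handle(MyJob.class, (job, headers) -> checkout(job))                      // step 2: git checkout
            .transform(MyJob.class, job -> new JobLaunchRequest(batchJob,              // step 3: wrap the payload
                    new JobParametersBuilder()                                         // in a JobLaunchRequest
                            .addString("jobId", job.getId())                           // ("jobId" is a made-up parameter)
                            .toJobParameters()))
            .handle(new JobLaunchingGateway(jobLauncher))                              // step 3: run the batch job
            .handle(JobExecution.class, (exec, headers) -> commitAndPush(exec))        // step 4: returns the updated MyJob
            .handle(Kafka.outboundChannelAdapter(template).topic("reply-topic"))       // step 5: publish the reply
            .get();
}

With the default single consumer on the inbound adapter, records from "request-topic" are processed one at a time, which covers the "only one flow at a time" requirement without an extra QueueChannel.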

Custom HealthIndicator not invoked during startup

I implemented a custom HealthIndicator for our application, which is working fine.
I noticed that when I run the application through my IDE (tested with both IntelliJ and Eclipse), the HealthIndicator.health() method is invoked during startup.
However, when I run the application by using the JAR file itself, the HealthIndicator.health() method is not invoked during startup of the application.
Why is the HealthIndicator.health() method not invoked during startup when I run the application as a JAR file, and shouldn't it behave the same as when running it through the IDE?
This is actually not really a bug, but a side effect caused by your IDE. You should be aware that Actuator endpoints are not only exposed over HTTP, but also over JMX. If you take a look at the documentation, you'll also see that the health endpoint is enabled by default on both HTTP and JMX.
Additionally, most IDEs, including IntelliJ and Eclipse, will enable a JMX agent when running the application through the IDE itself. This means that when the application is started, a JMX connection is made, which will in turn trigger the custom health indicator.
You can verify this quite easily. For example, let's assume the following health indicator:
@Bean
public HealthIndicator alwaysUpHealthIndicator() {
    return () -> {
        log.info("Indicator invoked");
        return Health.up().withDetail("Foo", "Bar").build();
    };
}
If you change your IntelliJ run configuration and disable "Enable JMX agent", you'll notice that the message no longer appears in the log.
Likewise, if you disable the health JMX endpoint, you'll also notice that you won't get the additional message within the logs:
management.endpoints.jmx.exposure.exclude=health
This means that you shouldn't rely on the HealthIndicator being executed during the startup of your application. If you have code that should be executed when your application starts up, consider using an ApplicationRunner or CommandLineRunner bean. For example:
@Bean
public ApplicationRunner applicationRunner() {
    return args -> log.info("This will be invoked on startup");
}
I can't answer the question directly, but it looks like there is no real question here: if it's a bug, submit it to the Spring Boot team. Otherwise it's just a statement, one that I fully agree with:
Either the HealthIndicator health() method should be executed both ways, or not at all.
What you're describing sounds more like a weird bug. Here is a dirty way to check what happens (remove it, of course, in production):
Inside the health method of the health indicator, obtain a stack trace and print it to the console.
Analyze the stack trace and check that it is not the result of some /health invocation over HTTP (for example, maybe your IDE is configured to automatically call the actuator's health endpoint on start, who knows).
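A minimal sketch of that check (debug only; Thread.dumpStack() prints the current call stack to System.err):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.context.annotation.Bean;

@Bean
public HealthIndicator debugHealthIndicator() {
    return () -> {
        // Shows whether HTTP, JMX, or something in the IDE triggered the check
        Thread.dumpStack();
        return Health.up().build();
    };
}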

Spring KafkaListener: How to know when it's ready

I have a simple Spring Boot application which reads from Kafka and writes to Kafka. I wrote a SpringBootTest using an EmbeddedKafka to test all that.
The main problem is: sometimes the test fails because it sends the Kafka message too early. In that case, the message is already written to Kafka before the Spring application (or its KafkaListener, to be precise) is ready. Since the listener reads from the latest offset (I do not want to change any config for my test, except bootstrap.servers), it will not receive all messages in that test.
Does anyone have an idea how I could know inside the test, that the KafkaListener is ready to receive messages?
The only way I could think of is waiting until /health becomes available, but I have no idea whether I can be sure that this implies the KafkaListener is ready at all.
Any help is greatly appreciated!
Best regards.
If you have a KafkaMessageListenerContainer instance, it is very easy to use org.springframework.kafka.test.utils.ContainerTestUtils.waitForAssignment(Object container, int partitions):
https://docs.spring.io/spring-kafka/api/org/springframework/kafka/test/utils/ContainerTestUtils.html
E.g. calling ContainerTestUtils.waitForAssignment(container, 1); in your test setup will block until the container has been assigned 1 partition.
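A minimal sketch of that setup, assuming the containers are created by @KafkaListener and can therefore be fetched from the KafkaListenerEndpointRegistry:

import org.junit.Before;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.test.utils.ContainerTestUtils;

@Autowired
private KafkaListenerEndpointRegistry registry;

@Before
public void waitForAssignment() {
    for (MessageListenerContainer container : registry.getListenerContainers()) {
        // Blocks until this container has been assigned one partition
        ContainerTestUtils.waitForAssignment(container, 1);
    }
}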
So, I just read about @PostConstruct, and it turns out that you can easily use this within the test as well:
@PostConstruct
public void checkApplicationReady() {
    applicationReady = true;
}
Now I added an @Before method to wait until that flag is set to true.
So far this seems to work very nicely!
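For completeness, a sketch of the wait described above, continuing the snippet with the applicationReady flag (a plain polling loop; a library like Awaitility would work just as well):

@Before
public void waitUntilReady() throws InterruptedException {
    // Poll until the @PostConstruct callback has set the flag
    while (!applicationReady) {
        Thread.sleep(100);
    }
}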

How to update multiple spring config instance clients

The Spring Cloud Config client helps to change properties at run time. Below are 2 ways to do that:
1. Update the Git repository and hit /refresh in the client application to get the latest values
2. Update the client directly by posting the update to /env and then /refresh
The problem with both approaches is that there could be multiple instances of the client application running in Cloud Foundry, and the above REST calls will reach only one of the instances, leaving the application in an inconsistent state.
E.g. a POST to /env could hit instance 1 and leave instance 2 with old data.
One solution I could think of is to continuously hit these endpoints "n" times in a loop just to make sure all instances get updated, but that is a crude solution. Does anybody have a better solution for this?
Note: We are deploying our application in a private PCF environment.
The canonical solution for that problem is the Spring Cloud Bus. If your apps are bound to a RabbitMQ service and have the bus on the classpath, there will be additional endpoints /bus/env and /bus/refresh that broadcast the messages to all instances. See the docs for more details.

Spring Cloud Config Server Not Refreshing

See the org.springframework.cloud.bootstrap.config.RefreshEndpoint code here:
public synchronized String[] refresh() {
    Map<String, Object> before = extract(context.getEnvironment()
            .getPropertySources());
    addConfigFilesToEnvironment();
    Set<String> keys = changes(before,
            extract(context.getEnvironment().getPropertySources())).keySet();
    scope.refreshAll();
    if (keys.isEmpty()) {
        return new String[0];
    }
    context.publishEvent(new EnvironmentChangeEvent(keys));
    return keys.toArray(new String[keys.size()]);
}
That means the /refresh endpoint pulls from Git first, then refreshes the cache and publishes an EnvironmentChangeEvent, so we can customize the code along these lines.
