Quarkus MDC not propagated to Mutiny thread logs - quarkus

I need a single MDC value to also appear in logs emitted from Mutiny callbacks.
It seems the MDC from the caller thread is not automatically propagated to the thread Mutiny emits on.
I actually thought quarkus-smallrye-context-propagation would handle this in the background.
Using Quarkus 2.10.3.Final
pom.xml
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-smallrye-context-propagation</artifactId>
</dependency>
Java class
import java.util.concurrent.CompletableFuture;

import javax.inject.Inject;

import org.jboss.logging.Logger;
import org.jboss.logging.MDC;

import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.subscription.Cancellable;
import software.amazon.awssdk.services.sqs.SqsAsyncClient;
import software.amazon.awssdk.services.sqs.model.SendMessageResponse;

class MyClass {

    @Inject
    protected Logger log;

    @Inject
    protected SqsAsyncClient awsSqsClient;

    public Cancellable callAsync() {
        MDC.put("mykey", "myvalue");
        log.info("hello");
        // prints: 2022-10-04 16:04:28,651 INFO [com....MyClass] (<thread name>) (myvalue) hello

        CompletableFuture<SendMessageResponse> cf = awsSqsClient.sendMessage(m ->
                m.queueUrl(<url>)
                 .messageGroupId(<mgid>)
                 .messageBody(<mb>));

        return Uni.createFrom()
                .completionStage(cf)
                // prints: 2022-10-04 16:04:28,651 INFO [com....MyClass] (<aws-java-sdk-NettyEventLoop-0-13>) () hello from mutiny
                .onItem().invoke(response -> log.info("hello from mutiny"))
                .subscribe().with(ignored -> {});
    }
}
Logs after calling callAsync:
2022-10-04 16:04:28 INFO [com....MyClass] (<thread name>) (myvalue) hello
2022-10-04 16:04:28 INFO [com....MyClass] (<aws-java-sdk-NettyEventLoop-0-13>) () hello from mutiny
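A manual workaround (not part of the original post, shown only as a sketch) would be to capture the MDC value on the calling thread and restore it inside the Mutiny callback before logging:
// Sketch only: capture the MDC value before subscribing, restore it on the
// thread that runs the Mutiny callback (here the AWS SDK Netty event loop).
Object mdcValue = MDC.get("mykey");

return Uni.createFrom()
        .completionStage(cf)
        .onItem().invoke(response -> {
            MDC.put("mykey", mdcValue);    // restore on the emitting thread
            log.info("hello from mutiny"); // now logged with (myvalue) again
        })
        .subscribe().with(ignored -> {});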

Related

Is it possible to enforce message order on ActiveMQ topics using Spring Boot and JmsTemplate?

In playing around with Spring Boot, ActiveMQ, and JmsTemplate, I noticed that message order is not always preserved. Reading up on ActiveMQ, "Message Groups" are offered as a potential solution to preserving message order when sending to a topic. Is there a way to do this with JmsTemplate?
Added note: I'm starting to think that JmsTemplate is nice for "getting launched", but has too many issues.
Sample code and console output posted below...
@RestController
public class EmptyControllerSB {

    @Autowired
    MsgSender msgSender;

    @RequestMapping(method = RequestMethod.GET, value = { "/v1/msgqueue" })
    public String getAccount() {
        msgSender.sendJmsMessageA();
        msgSender.sendJmsMessageB();
        return "Do nothing...successfully!";
    }
}

@Component
public class MsgSender {

    @Autowired
    JmsTemplate jmsTemplate;

    void sendJmsMessageA() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message A");
    }

    void sendJmsMessageB() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message B");
    }
}

@Component
public class MsgReceiver {

    // must be compile-time constants to be used in the @JmsListener annotation
    private static final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
    private static final String consumerTwo = "Consumer.myConsumer2.VirtualTopic.TEST-TOPIC";

    @JmsListener(destination = consumerOne)
    public void receiveMessage1(String strMessage) {
        System.out.println("Received on #1a -> " + strMessage);
    }

    @JmsListener(destination = consumerOne)
    public void receiveMessage2(String strMessage) {
        System.out.println("Received on #1b -> " + strMessage);
    }

    @JmsListener(destination = consumerTwo)
    public void receiveMessage3(String strMessage) {
        System.out.println("Received on #2 -> " + strMessage);
    }
}
Here's the console output (note the order of output in first sequence)...
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 672 ms
2019-04-03 09:23:08.705 INFO 13936 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-04-03 09:23:08.845 INFO 13936 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-04-03 09:23:08.877 INFO 13936 --- [ main] mil.navy.msgqueue.MsgqueueApplication : Started MsgqueueApplication in 1.391 seconds (JVM running for 1.857)
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2019-04-03 09:23:14.952 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 3 ms
Received on #2 -> message A
Received on #1a -> message B
Received on #1b -> message A
Received on #2 -> message B
<HIT DO-NOTHING ENDPOINT AGAIN>
Received on #1b -> message A
Received on #2 -> message A
Received on #1a -> message B
Received on #2 -> message B
BLUF - Add "?consumer.exclusive=true" to the declaration of the destination for the JmsListener annotation.
It seems that the solution is not that complex, especially if one abandons ActiveMQ's "message groups" in favor of "exclusive consumers". The drawback to "message groups" is that the sender has to have prior knowledge of the potential partitioning of message consumers. If the producer has this knowledge, then "message groups" are a nice solution, as it is somewhat independent of the consumer.
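For illustration only (not part of the original answer): sending with a message group from the producer side might look roughly like this with JmsTemplate, where the group id "my-group" is made up for the example:
// Sketch: assign every message to the same JMS group so ActiveMQ routes the
// whole group to a single consumer, preserving order within the group.
jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message A", message -> {
    message.setStringProperty("JMSXGroupID", "my-group");
    return message;
});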
But, a similar solution can be implemented from the consumer side, by having the consumer declare "exclusive consumer" on the queue. While I did not see anything in the JmsTemplate implementation that directly supports this, it seems that Spring's JmsTemplate implementation passes the queue name to ActiveMQ, and then ActiveMQ "does the right thing" and enforces the exclusive consumer behavior.
So...
Change the following...
private static final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
to...
private static final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC?consumer.exclusive=true";
Once I did this, only one of the two declared receive methods was invoked, and message order was maintained in all my test runs.
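Applied to the listener class from the question, only the destination string changes (a sketch of the same fix):
@Component
public class MsgReceiver {

    // Exclusive consumer: ActiveMQ delivers messages on this queue to a single
    // consumer at a time, which preserves ordering across the two listeners.
    private static final String consumerOne =
            "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC?consumer.exclusive=true";

    @JmsListener(destination = consumerOne)
    public void receiveMessage1(String strMessage) {
        System.out.println("Received on #1a -> " + strMessage);
    }

    @JmsListener(destination = consumerOne)
    public void receiveMessage2(String strMessage) {
        System.out.println("Received on #1b -> " + strMessage);
    }
}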

When is it safe to depend on Spring's #PreDestroy?

Per Spring's documentation here, I added a shutdown hook:
SpringApplication app = new SpringApplication(App.class);
DefaultProfileUtil.addDefaultProfile(app);
appContext = app.run(args);
appContext.registerShutdownHook();
However, the @PreDestroy method does not get called if the application is killed after starting.
import org.springframework.stereotype.Service;

import javax.annotation.PreDestroy;
import javax.annotation.PostConstruct;

@Service
public class Processor {

    public Processor() {
        // ...
    }

    @PostConstruct
    public void init() {
        System.err.println("processor started");
    }

    // not called reliably
    @PreDestroy
    public void shutdown() {
        System.err.println("starting shutdown");
        try {
            Thread.sleep(1000 * 10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.err.println("shutdown completed properly");
    }
}
All I ever see is processor started...
processor started
^C
If I wait at least 30 seconds for Spring to finish starting up, and THEN kill the process, the @PreDestroy annotated function does get called.
processor started
[...]
2018-12-26 17:01:09.050 INFO 31398 --- [ restartedMain] c.App : Started App in 67.555 seconds (JVM running for 69.338)
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.App :
----------------------------------------------------------
Application 'App' is running! Access URLs:
Local: http://localhost:8081
External: http://10.10.7.29:8081
Profile(s): [dev]
----------------------------------------------------------
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.app :
----------------------------------------------------------
^Cstarting shutdown
shutdown completed properly
How do I determine when it is safe to depend on all @PreDestroy annotated functions being called?
I know how to register a shutdown hook with the JVM, and that is what I am currently doing; however, it seems to me that @PreDestroy should be doing that.
By "safe to depend on" I am assuming a normal shutdown sequence (i.e. requested by SIGTERM or SIGINT), not power outages or forcibly killing the process, etc.

Allow Camel context to run forever

I am using the camel-spring jar for SpringCamelContext. When I start the Camel context, it runs for 5 minutes (the default). I can make my thread sleep for a specific time, i.e.
try {
    camelContext.start();
    Thread.sleep(50 * 60 * 1000);
    camelContext.stop();
} catch (Exception e) {
    e.printStackTrace();
}
BUT what I want is my camelContext to run FOREVER, because this application is going to be deployed and will be listening for messages from a Kafka server. I know there is the class
org.apache.camel.spring.Main
but I don't know how to configure it with SpringCamelContext, or whether there is another way. Thanks.
Update: even if I remove camelContext.stop(), the context is stopped after some time and I get the following logs:
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutting down
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.component.kafka.KafkaConsumer - Stopping Kafka consumer
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: Endpoint[kafka://localhost:9092?groupId=group0&serializerClass=org.springframework.integration.kafka.serializer.avro.AvroSerializer&topic=my-topic]
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 0 seconds
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) uptime 4 minutes
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutdown in 0.022 seconds
Here is a minimal example which runs forever and only copies files from one folder to another:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;
public class FileWriteRoute {
public static void main(String[] args) throws Exception {
Main main = new Main();
main.addRouteBuilder(new RouteBuilder() {
public void configure() {
from("file://D:/dev/playground/camel-activemq/src/data")
.to("file://D:/dev/playground/camel-activemq/src/data_out");
}
});
main.run();
}
}
Or, if you have your Route defined in a class, try:
public static void main(String[] args) throws Exception {
    Main main = new Main();
    CamelContext context = main.getOrCreateCamelContext();
    try {
        context.addRoutes(new YOURROUTECLASS());
        context.start();
        main.run();
    } catch (Exception e) {
        // handle the exception
    }
}
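Since the question specifically asks about org.apache.camel.spring.Main, a rough sketch (my assumption, not from the original answer) of pointing it at a Spring XML file that defines the CamelContext:
import org.apache.camel.spring.Main;

public class RunForever {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        // Hypothetical path; Camel's Spring Main loads XML files from
        // META-INF/spring/*.xml by default, if I recall correctly.
        main.setApplicationContextUri("META-INF/spring/camel-context.xml");
        // Blocks until the JVM is asked to shut down (e.g. Ctrl+C).
        main.run(args);
    }
}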

How do I make Spring Scheduling call a job with a single tasklet periodically?

With the annotation @Scheduled(fixedRate = 600000), I was expecting to trigger the job, and consequently the tasklet, every 10 minutes (600000 milliseconds = 600 seconds = 10 minutes). First, I tried returning RepeatStatus.FINISHED, since I understood the Spring scheduler would trigger an independent thread every 10 minutes. In fact, if I return RepeatStatus.FINISHED, the program finishes entirely; in other words, the Spring scheduler will not call the job again.
I am not sure if I have set up something wrong in the Spring scheduler or if I have a wrong concept in my mind about tasklets. As a rule of thumb, based on what I have studied recently, when I don't need a reader and a writer, a tasklet is a possible alternative. I want to create a batch process which will just move files from one folder to another every ten minutes. There will be no file processing.
From the console logs, I can see that TestScheduller.runJob was invoked once when I ran CommandLineJobRunner.
Then, as a first test, I changed it to return RepeatStatus.CONTINUABLE, and after that I noted that the tasklet ran repeatedly, but roughly every second instead of every 10 minutes. Certainly this isn't correct. Additionally, the job never finished.
So, my question is: how can I make Spring scheduling invoke the job below every ten minutes?
Scheduler created to trigger the tasklet every 10 minutes:
@Component
public class TestScheduller {

    private Job job;
    private JobLauncher jobLauncher;

    @Autowired
    public TestScheduller(JobLauncher jobLauncher,
                          @Qualifier("helloWorldJob") Job job) {
        this.job = job;
        this.jobLauncher = jobLauncher;
    }

    @Scheduled(fixedRate = 600000)
    public void runJob() {
        try {
            System.out.println("runJob");
            JobParameters jobParameters = new JobParametersBuilder()
                    .addLong("time", System.currentTimeMillis())
                    .toJobParameters();
            jobLauncher.run(job, jobParameters);
        } catch (Exception ex) {
            System.out.println("runJob exception ***********");
        }
    }
}
Java Configuration class
@Configuration
@ComponentScan("com.test.config")
@EnableScheduling
@Import(StandaloneInfrastructureConfiguration.class)
public class HelloWorldJobConfig {

    @Autowired
    private JobBuilderFactory jobBuilders;

    @Autowired
    private StepBuilderFactory stepBuilders;

    @Autowired
    private InfrastructureConfiguration infrastructureConfiguration;

    @Autowired
    private DataSource dataSource; // just for show...

    @Bean
    public Job helloWorldJob() {
        return jobBuilders.get("helloWorldJob")
                .start(step())
                .build();
    }

    @Bean
    public Step step() {
        return stepBuilders.get("step")
                .tasklet(tasklet())
                .build();
    }

    @Bean
    public Tasklet tasklet() {
        return new HelloWorldTasklet();
    }
}
Tasklet:
public class HelloWorldTasklet implements Tasklet {

    public RepeatStatus execute(StepContribution arg0, ChunkContext arg1)
            throws Exception {
        System.out.println("HelloWorldTasklet.execute called");
        return RepeatStatus.CONTINUABLE;
    }
}
Console Logs:
2016-01-18 14:16:16,376 INFO org.springframework.context.annotation.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@dcf3e99: startup date [Mon Jan 18 14:16:16 CST 2016]; root of context hierarchy
2016-01-18 14:16:16,985 WARN org.springframework.context.annotation.ConfigurationClassEnhancer - @Bean method ScopeConfiguration.stepScope is non-static and returns an object assignable to Spring's BeanFactoryPostProcessor interface. This will result in a failure to process annotations such as @Autowired, @Resource and @PostConstruct within the method's declaring @Configuration class. Add the 'static' modifier to this method to avoid these container lifecycle issues; see @Bean Javadoc for complete details
2016-01-18 14:16:17,024 WARN org.springframework.context.annotation.ConfigurationClassEnhancer - @Bean method ScopeConfiguration.jobScope is non-static and returns an object assignable to Spring's BeanFactoryPostProcessor interface. This will result in a failure to process annotations such as @Autowired, @Resource and @PostConstruct within the method's declaring @Configuration class. Add the 'static' modifier to this method to avoid these container lifecycle issues; see @Bean Javadoc for complete details
2016-01-18 14:16:17,091 INFO org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.scheduling.annotation.SchedulingConfiguration' of type [class org.springframework.scheduling.annotation.SchedulingConfiguration$$EnhancerBySpringCGLIB$$e07fa052] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2016-01-18 14:16:17,257 INFO org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseFactory - Starting embedded database: url='jdbc:hsqldb:mem:testdb', username='sa'
2016-01-18 14:16:17,425 INFO org.springframework.jdbc.datasource.init.ScriptUtils - Executing SQL script from class path resource [org/springframework/batch/core/schema-drop-hsqldb.sql]
2016-01-18 14:16:17,430 INFO org.springframework.jdbc.datasource.init.ScriptUtils - Executed SQL script from class path resource [org/springframework/batch/core/schema-drop-hsqldb.sql] in 5 ms.
2016-01-18 14:16:17,430 INFO org.springframework.jdbc.datasource.init.ScriptUtils - Executing SQL script from class path resource [org/springframework/batch/core/schema-hsqldb.sql]
2016-01-18 14:16:17,456 INFO org.springframework.jdbc.datasource.init.ScriptUtils - Executed SQL script from class path resource [org/springframework/batch/core/schema-hsqldb.sql] in 25 ms.
runJob
2016-01-18 14:16:18,083 INFO org.springframework.batch.core.repository.support.JobRepositoryFactoryBean - No database type set, using meta data indicating: HSQL
2016-01-18 14:16:18,103 INFO org.springframework.batch.core.repository.support.JobRepositoryFactoryBean - No database type set, using meta data indicating: HSQL
2016-01-18 14:16:18,448 INFO org.springframework.batch.core.launch.support.SimpleJobLauncher - No TaskExecutor has been set, defaulting to synchronous executor.
2016-01-18 14:16:18,454 INFO org.springframework.batch.core.launch.support.SimpleJobLauncher - No TaskExecutor has been set, defaulting to synchronous executor.
2016-01-18 14:16:18,558 INFO org.springframework.batch.core.launch.support.SimpleJobLauncher - Job: [SimpleJob: [name=helloWorldJob]] launched with the following parameters: [{time=1453148177985}]
2016-01-18 14:16:18,591 INFO org.springframework.batch.core.launch.support.SimpleJobLauncher - Job: [SimpleJob: [name=helloWorldJob]] launched with the following parameters: [{}]
2016-01-18 14:16:18,613 INFO org.springframework.batch.core.job.SimpleStepHandler - Executing step: [step]
HelloWorldTasklet.execute called
2016-01-18 14:16:18,661 INFO org.springframework.batch.core.launch.support.SimpleJobLauncher - Job: [SimpleJob: [name=helloWorldJob]] completed with the following parameters: [{}] and the following status: [COMPLETED]
2016-01-18 14:16:18,661 INFO org.springframework.context.annotation.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@dcf3e99: startup date [Mon Jan 18 14:16:16 CST 2016]; root of context hierarchy
2016-01-18 14:16:18,665 INFO org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseFactory - Shutting down embedded database: url='jdbc:hsqldb:mem:testdb'
2016-01-18 14:16:18,844 INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
Picked up JAVA_TOOL_OPTIONS: -agentlib:jvmhook
Picked up _JAVA_OPTIONS: -Xrunjvmhook -Xbootclasspath/a:C:\PROGRA~2\HP\QUICKT~1\bin\JAVA_S~1\classes;C:\PROGRA~2\HP\QUICKT~1\bin\JAVA_S~1\classes\jasmine.jar
You need to call the method setAllowStartIfComplete(true) of the TaskletStep.
So instead of having a method like
@Bean
public Step step() {
    return stepBuilders.get("step")
            .tasklet(tasklet())
            .build();
}
it should look like:
@Bean
public Step step() {
    TaskletStep step = stepBuilders.get("step")
            .tasklet(tasklet())
            .build();
    step.setAllowStartIfComplete(true);
    return step;
}
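For completeness, the tasklet itself is usually written to return RepeatStatus.FINISHED so each scheduled launch runs once and ends (a sketch, not from the original answer):
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class HelloWorldTasklet implements Tasklet {

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        System.out.println("HelloWorldTasklet.execute called");
        // FINISHED ends this step after one execution; the @Scheduled method in
        // TestScheduller launches a fresh JobInstance every 10 minutes because
        // the "time" job parameter differs on each run.
        return RepeatStatus.FINISHED;
    }
}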

Spring Web Application: Post-DispatcherServlet initialization

I am using Spring 3.2 DispatcherServlet. I am looking for an initialization hook that takes place after the DispatcherServlet initialization completes; either a standard Spring solution or servlet solution. Any suggestions?
As a point of reference, the final logging statements after servlet startup follow. I want my initialization method to execute right after the "configured successfully" log statement.
DEBUG o.s.w.s.DispatcherServlet - Published WebApplicationContext of servlet 'mySpringDispatcherServlet' as ServletContext attribute with name [org.springframework.web.servlet.FrameworkServlet.CONTEXT.mySpringDispatcherServlet]
INFO o.s.w.s.DispatcherServlet - FrameworkServlet 'mySpringDispatcherServlet': initialization completed in 5000 ms
DEBUG o.s.w.s.DispatcherServlet - Servlet 'mySpringDispatcherServlet' configured successfully
From my research, so far the following have not had the desired effect:
Extending ContextLoaderListener/implementing ServletContextListener per this answer.
Implementing WebApplicationInitializer per the javadoc.
My beans use @PostConstruct successfully; I'm looking for a Servlet or container-level hook that will be executed essentially after the container initializes and post-processes the beans.
The root issue was that I couldn't override the final method HttpServlet.init(). I found a nearby @Override-able method, DispatcherServlet.initWebApplicationContext(), that ensured my beans and context were fully initialized:
@Override
protected WebApplicationContext initWebApplicationContext() {
    WebApplicationContext wac = super.initWebApplicationContext();
    // do stuff with initialized Foo beans via:
    // wac.getBean(Foo.class);
    return wac;
}
From Spring's Standard and Custom Events.
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class ApplicationContextListener implements
        ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        System.out.println("ApplicationContext was initialized or refreshed: "
                + event.getApplicationContext().getDisplayName());
    }
}
The event above will be fired when the DispatcherServlet is initialized, such as when it prints:
INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'ServletName': initialization completed in 1234 ms
You can implement ApplicationListener<ContextStartedEvent> within your application context. This event listener will then be called once for your root context and once for each servlet context.
public class StartupListener implements ApplicationListener<ContextStartedEvent> {

    public void onApplicationEvent(ContextStartedEvent event) {
        ApplicationContext context = (ApplicationContext) event.getSource();
        System.out.println("Context '" + context.getDisplayName() + "' started.");
    }
}
If you define this listener within your servlet context, it should be called just once for the servlet context itself.
Try this: change your port number.
In my case I changed from server.port=8001 to server.port=8002.
