I have built a project using Spring Quartz in a clustered environment. I am trying to test whether jobs can be picked up by another node in case the server that initiated them shuts down. While this works perfectly as expected for cron triggers, the same cannot be said for SimpleTrigger jobs.
For cron triggers that have not yet been executed, Quartz runs the job without any hassle.
Steps to Reproduce:
Start the servers at port 8080 and port 8081.
Schedule a job using server at port 8081.
Shut down the server while the job is running.
This is what I get when the ClusterManager picks up the jobs:
2022-06-22 13:37:52.659 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: detected 2 failed or restarted instances.
2022-06-22 13:37:52.661 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: Scanning for instance "MacBook-Pro.local1655885157031"'s failed in-progress jobs.
2022-06-22 13:37:52.677 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: Scanning for instance "MacBook-Pro.local1655885169333"'s failed in-progress jobs.
2022-06-22 13:37:52.720 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: ......Deleted 1 complete triggers(s).
2022-06-22 13:37:52.722 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: ......Cleaned-up 1 other failed job(s).
This is what my Job looks like:
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ApiJob implements Job {

    static final Logger log = LoggerFactory.getLogger(ApiJob.class);

    // Field was missing in the snippet; added so the assignment below compiles.
    private JobExecutionContext context;

    @Override
    public void execute(JobExecutionContext context) {
        this.context = context;
        log.info("Job Execution Started");
        JobDataMap map = context.getMergedJobDataMap();
        ApiRequest request = new ApiRequest(map.getString("message"));
        try {
            Thread.sleep(2 * 60 * 1000);
            log.info("Job scheduled...{}", context.getJobDetail().getKey().getName());
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
Trigger:
Trigger trigger = TriggerBuilder.newTrigger().forJob(jobDetail)
        .withIdentity(jobDetail.getKey().getName(), "quartz-jobs-triggers")
        .withDescription("Random trigger")
        .startAt(Date.from(startTime.toInstant()))
        // Add custom logic here?
        .withSchedule(SimpleScheduleBuilder.simpleSchedule().withMisfireHandlingInstructionFireNow())
        .build();
And finally, this is how I set up the clustering in application.properties:
# Quartz properties
spring.quartz.job-store-type=jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount=5
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO
spring.quartz.properties.org.quartz.jobStore.isClustered=true
spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval=20000
I've set up Testcontainers like below in my test code.
@Testcontainers
public class TestEnvironmentSupport {

    static String version = "5.4.0";
    static DockerImageName kafkaImage = DockerImageName.parse("confluentinc/cp-server").withTag(version);
    static DockerImageName zookeeperImage = DockerImageName.parse("confluentinc/cp-zookeeper").withTag(version);
    static DockerImageName schemaRegistryImage = DockerImageName.parse("confluentinc/cp-schema-registry").withTag(version);

    static Network network = Network.newNetwork();

    @Container
    static GenericContainer<?> zookeeper = new GenericContainer<>(zookeeperImage)
            .withNetwork(network)
            .withCreateContainerCmdModifier(cmd -> cmd.withHostName("zookeeper"))
            .withExposedPorts(2181)
            .withEnv("ZOOKEEPER_CLIENT_PORT", "2181")
            .withEnv("ZOOKEEPER_TICK_TIME", "2000");

    @Container
    static GenericContainer<?> kafka = new GenericContainer<>(kafkaImage)
            .withNetwork(network)
            .withCreateContainerCmdModifier(cmd -> cmd.withHostName("kafka"))
            .withExposedPorts(9092)
            .dependsOn(zookeeper)
            .withEnv("KAFKA_BROKER_ID", "1")
            .withEnv("KAFKA_ZOOKEEPER_CONNECT", "zookeeper:2181")
            .withEnv("KAFKA_LISTENER_SECURITY_PROTOCOL_MAP", "PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT")
            .withEnv("KAFKA_ADVERTISED_LISTENERS", "PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092")
            .withEnv("KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL", "schema-registry:8081");

    @Container
    static GenericContainer<?> schemaRegistry = new GenericContainer<>(schemaRegistryImage)
            .withNetwork(network)
            .withCreateContainerCmdModifier(cmd -> cmd.withHostName("schema-registry"))
            .withExposedPorts(8081)
            .dependsOn(zookeeper, kafka)
            .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
            .withEnv("SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL", "zookeeper:2181");

    @Test
    void test() {
        assertTrue(zookeeper.isRunning());
        assertTrue(kafka.isRunning());
        assertTrue(schemaRegistry.isRunning());
    }
}
and it works totally fine.
But a problem occurred when I tried to run a Spring Boot test with the above Testcontainers configuration: Testcontainers generates the broker port dynamically, but NetworkClient keeps trying to reach the broker at localhost:9092, even though I dynamically override the properties in my @SpringBootTest code like below.
@DynamicPropertySource
static void testcontainerProperties(final DynamicPropertyRegistry registry) {
    var bootstrapServers = kafka.getHost() + ":" + kafka.getMappedPort(9092);
    var schemaRegistryUrl = "http://" + schemaRegistry.getHost() + ":" + schemaRegistry.getMappedPort(8081);
    registry.add("spring.cloud.stream.kafka.binder.brokers", () -> bootstrapServers);
    registry.add("bootstrap.servers", () -> bootstrapServers);
    registry.add("schema.registry.url", () -> schemaRegistryUrl);
    registry.add("spring.cloud.stream.kafka.default.consumer.configuration.schema.registry.url", () -> schemaRegistryUrl);
}
Below is the AdminClientConfig log from startup time, and it shows bootstrap.servers = [localhost:56013], where the port is the one dynamically bound by Testcontainers.
2021-02-21 20:28:52.291 INFO 78241 --- [ Test worker] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:56013]
client.dns.lookup = use_all_dns_ips
Even though it is set like this, it keeps trying to connect to localhost:9092, as below.
2021-02-21 20:28:52.457 INFO 78241 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1613906932454
2021-02-21 20:28:53.095 WARN 78241 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-02-21 20:28:53.202 WARN 78241 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-02-21 20:28:53.407 WARN 78241 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
And below is the result of docker ps while running spring boot test.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e340d9e15fe4 confluentinc/cp-schema-registry:5.4.0 "/etc/confluent/dock…" 46 seconds ago Up 46 seconds 0.0.0.0:56014->8081/tcp optimistic_joliot
ad3bf06df4b3 confluentinc/cp-server:5.4.0 "/etc/confluent/dock…" 55 seconds ago Up 54 seconds 0.0.0.0:56013->9092/tcp infallible_brown
f7fa5f4ae23c confluentinc/cp-zookeeper:5.4.0 "/etc/confluent/dock…" About a minute ago Up 59 seconds 0.0.0.0:56012->2181/tcp, 0.0.0.0:56011->2888/tcp, 0.0.0.0:56010->3888/tcp agitated_leavitt
b1c036cdf00b testcontainers/ryuk:0.3.0 "/app" About a minute ago Up About a minute 0.0.0.0:56009->8080/tcp testcontainers-ryuk-68190eaa-8513-4dd8-ab67-175275f15a82
I tried running Testcontainers with the Docker Compose module as well, but it has the same issue.
What am I doing wrong?
Please help.
It's trying to connect to localhost:9092 because you've connected to the advertised PLAINTEXT_HOST listener, and that's the address it will return. You shouldn't need to advertise two listeners for tests, so try using kafka:29092 directly instead of calling the mapped-port method. Also, unless you have a specific need for server-side schema validation, you only need the confluentinc/cp-kafka image.
The spring-kafka embedded broker should work for your tests too, in which case you wouldn't need Testcontainers at all.
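Alternatively, Testcontainers ships a dedicated Kafka module whose KafkaContainer wires up the advertised listeners for you, so the dynamically mapped port is what gets advertised back to host-side clients. A minimal sketch (the image tag is just an example):
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

// KafkaContainer manages the listener configuration itself; clients on the
// host connect via getBootstrapServers(), which reflects the mapped port.
static KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.4.0"));

// e.g. in the @DynamicPropertySource method:
// registry.add("spring.cloud.stream.kafka.binder.brokers", kafka::getBootstrapServers);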
I have set up a sample microservice system using three Spring Boot applications. After that, I am trying to connect all of them to a Eureka server.
The three Spring Boot applications are Task-Display, Task-Repo and Task-Status-Repo.
Task-Display communicates with the other two and retrieves data.
Now the problem is that all of them except Task-Display are linked to the Eureka server. I get the following error when Task-Display is deployed:
2019-08-31 18:43:00.055  INFO 15528 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_TASK-DISPLAY-ME;/192.168.1.9:task-display-me;:8180 - Re-registering apps/TASK-DISPLAY-ME;
2019-08-31 18:43:00.055  INFO 15528 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_TASK-DISPLAY-ME;/192.168.1.9:task-display-me;:8180: registering service...
2019-08-31 18:43:00.057  WARN 15528 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient    : Request execution failure with status code 400; retrying on another server if available
2019-08-31 18:43:00.059  WARN 15528 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient    : Request execution failure with status code 400; retrying on another server if available
2019-08-31 18:43:00.060  WARN 15528 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_TASK-DISPLAY-ME;/192.168.1.9:task-display-me;:8180 - registration failed Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
This is the Application.java of the service that is not getting linked to Eureka:
@SpringBootApplication
@EnableEurekaClient
public class TaskDisplayApplication {

    public static void main(String[] args) {
        SpringApplication.run(TaskDisplayApplication.class, args);
    }

    @LoadBalanced
    @Bean
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }
}
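For reference, the Eureka client settings involved here would be along these lines; this is a sketch with placeholder host/port values, not my exact file:
# application.properties sketch (values are placeholders)
spring.application.name=task-display-me
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true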
Per Spring's documentation here, I added a shutdown hook:
SpringApplication app = new SpringApplication(App.class);
DefaultProfileUtil.addDefaultProfile(app);
appContext = app.run(args);
appContext.registerShutdownHook();
However, the @PreDestroy method does not get called if the application is killed soon after starting.
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

@Service
public class Processor {

    public Processor() {
        ...
    }

    @PostConstruct
    public void init() {
        System.err.println("processor started");
    }

    // not called reliably
    @PreDestroy
    public void shutdown() {
        System.err.println("starting shutdown");
        try {
            Thread.sleep(1000 * 10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.err.println("shutdown completed properly");
    }
}
All I ever see is processor started...
processor started
^C
If I wait at least 30 seconds for Spring to finish starting up, and THEN kill the process, the @PreDestroy-annotated function does get called.
processor started
[...]
2018-12-26 17:01:09.050 INFO 31398 --- [ restartedMain] c.App : Started App in 67.555 seconds (JVM running for 69.338)
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.App :
----------------------------------------------------------
Application 'App' is running! Access URLs:
Local: http://localhost:8081
External: http://10.10.7.29:8081
Profile(s): [dev]
----------------------------------------------------------
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.app :
----------------------------------------------------------
^Cstarting shutdown
shutdown completed properly
How do I determine when it is safe to depend on all @PreDestroy-annotated functions being called?
I know how to register a shutdown hook with the JVM, and that is what I am currently doing; however, it seems to me that @PreDestroy should be doing that.
By "safe to depend on" I am assuming a normal shutdown sequence (i.e. requested by SIGTERM or SIGINT), not power outages, killing the process, etc.
I am developing a Spring Boot application which listens to IBM MQ with:
@JmsListener(id = "abc", destination = "${queueName}", containerFactory = "defaultJmsListenerContainerFactory")
I have a JmsListenerEndpointRegistry which starts the listenerContainer.
On message, it tries to push the same message, with some business logic applied, to Kafka. The publishing code is:
kafkaTemplate.send(kafkaProp.getTopic(), uniqueId, message)
Now, in case the Kafka producer fails, I want my Boot application to be terminated, so I have added a custom error handler via setErrorHandler.
I have tried `System.exit(1)`, `configurableApplicationContextObject.close()` and `Runtime.getRuntime().exit(1)`, but none of them work. Below is the log that gets generated after calling any of the above.
2018-05-24 12:12:47.981 INFO 18904 --- [ Thread-4] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@1d08376: startup date [Thu May 24 12:10:35 IST 2018]; root of context hierarchy
2018-05-24 12:12:48.027 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147483647
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2018-05-24 12:12:48.044 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 30000 ms.
But the application is still running, and below are the running threads:
Daemon Thread [Tomcat JDBC Pool Cleaner[14341596:1527144039908]] (Running)
Thread [DefaultMessageListenerContainer-1] (Running)
Thread [DestroyJavaVM] (Running)
Daemon Thread [JMSCCThreadPoolMaster] (Running)
Daemon Thread [RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection@12474910[qmid=*******,fap=**,channel=****,ccsid=***,sharecnv=***,hbint=*****,peer=*******,localport=****,ssl=****]] (Running)
Thread [Thread-4] (Running)
Any help is much appreciated. Thanks in advance. I simply want the application to exit.
Below is the thread dump before I call System.exit(1)
"DefaultMessageListenerContainer-1"
java.lang.Thread.State: RUNNABLE
at sun.management.ThreadImpl.getThreadInfo1(Native Method)
at sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:174)
at com.QueueErrorHandler.handleError(QueueErrorHandler.java:42)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeErrorHandler(AbstractMessageListenerContainer.java:931)
at org.springframework.jms.listener.AbstractMessageListenerContainer.handleListenerException(AbstractMessageListenerContainer.java:902)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:326)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:235)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1166)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1158)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1055)
at java.lang.Thread.run(Thread.java:745)
You should take a thread dump to see what Thread [DefaultMessageListenerContainer-1] (Running) is doing.
"Now in case a kafka producer fails"
What kind of failure? If the broker is down, the thread will block in the producer library for up to 60 seconds by default.
You can reduce that time by setting the max.block.ms producer property.
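For example, a sketch of lowering it (the 5-second value is arbitrary):
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

// Fail fast instead of blocking for the default 60 seconds when the broker
// is unreachable.
Properties props = new Properties();
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");

// Or, with Spring Boot's auto-configured producer:
// spring.kafka.producer.properties.max.block.ms=5000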
A couple of solutions worked for me to solve the above.
Solution 1.
Get all threads in the error handler, interrupt them all, and then exit the system.
// Interrupt every other live thread, then exit. (The original loop fetched
// ThreadInfo objects from ThreadMXBean but only ever interrupted the current
// thread; Thread.getAllStackTraces() gives the actual Thread objects.)
for (Thread thread : Thread.getAllStackTraces().keySet()) {
    if (thread != Thread.currentThread()) {
        thread.interrupt();
    }
}
System.exit(1);
Solution 2. Define an application context manager, like:
public class AppContextManager implements ApplicationContextAware {

    private static ApplicationContext _appCtx;

    @Override
    public void setApplicationContext(ApplicationContext ctx) {
        _appCtx = ctx;
    }

    public static ApplicationContext getAppContext() {
        return _appCtx;
    }

    public static void exit(Integer exitCode) {
        System.exit(SpringApplication.exit(_appCtx, () -> exitCode));
    }
}
Then use the same manager to exit in the error handler:
Executors.newSingleThreadExecutor().execute(new Runnable() {
    @Override
    public void run() {
        jmsListenerEndpointRegistry.stop();
        AppContextManager.exit(-1);
    }
});
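The reason for handing this off to a separate thread is that triggering the context shutdown from the listener thread itself can deadlock: closing the context stops the listener container, which waits for the very thread that initiated the shutdown to finish.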
I am using the camel-spring jar for SpringCamelContext. When I start the Camel context, it runs for 5 minutes (the default time). I can make my thread sleep for some specific time, i.e.:
try {
    camelContext.start();
    Thread.sleep(50 * 60 * 1000);
    camelContext.stop();
} catch (Exception e) {
    e.printStackTrace();
}
BUT what I want is for my CamelContext to run FOREVER, because this application is going to be deployed and will be listening for messages from a Kafka server. I know there is the class
org.apache.camel.spring.Main
but I don't know how to configure it with SpringCamelContext, and I'm not sure if there is any other way. Thanks.
Update: even if I remove camelContext.stop(), the context is stopped after some time and I get the following logs:
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutting down
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.component.kafka.KafkaConsumer - Stopping Kafka consumer
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: Endpoint[kafka://localhost:9092?groupId=group0&serializerClass=org.springframework.integration.kafka.serializer.avro.AvroSerializer&topic=my-topic]
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 0 seconds
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) uptime 4 minutes
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutdown in 0.022 seconds
Here is a minimal example which runs forever and only copies files from one folder to another:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class FileWriteRoute {

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                from("file://D:/dev/playground/camel-activemq/src/data")
                        .to("file://D:/dev/playground/camel-activemq/src/data_out");
            }
        });
        main.run();
    }
}
Or, if you have your route defined in a class, try:
public static void main(String[] args) throws Exception {
    Main main = new Main();
    CamelContext context = main.getOrCreateCamelContext();
    try {
        context.addRoutes(new YOURROUTECLASS());
        context.start();
        main.run();
    } catch (Exception e) {
        // handle startup failure
        e.printStackTrace();
    }
}
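Note that main.run() blocks the calling thread and keeps the Camel context running until the JVM is asked to stop (e.g. via Ctrl-C), which is what keeps the routes alive indefinitely instead of shutting down after the sleep expires.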