Error while triggering Quartz job in Spring Boot on Kubernetes

I'm using the Quartz Scheduler in Spring Boot with Postgres as the database. It was working fine in the local environment with both the dev and prod profiles, but once we deployed the application to Kubernetes, we got an error while triggering the job.
Here is the quartz.properties file:
org.quartz.scheduler.instanceId=AUTO
org.quartz.threadPool.threadCount=20
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
org.quartz.jobStore.misfireThreshold=60000
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=20000
org.quartz.scheduler.classLoadHelper.class=org.quartz.simpl.ThreadContextClassLoadHelper
We received the following error:
org.quartz.JobPersistenceException: Couldn't retrieve trigger: Bad value for type long : \x
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(JobStoreSupport.java:1538)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverMisfiredJobs(JobStoreSupport.java:984)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3264)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:4012)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:4033)
Caused by: org.postgresql.util.PSQLException: Bad value for type long : \x
at org.postgresql.jdbc.PgResultSet.toLong(PgResultSet.java:2878)
at org.postgresql.jdbc.PgResultSet.getLong(PgResultSet.java:2085)
at org.postgresql.jdbc.PgResultSet.getBlob(PgResultSet.java:417)
at org.postgresql.jdbc.PgResultSet.getBlob(PgResultSet.java:404)
at com.zaxxer.hikari.pool.HikariProxyResultSet.getBlob(HikariProxyResultSet.java)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.getObjectFromBlob(StdJDBCDelegate.java:3190)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.selectTrigger(StdJDBCDelegate.java:1780)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(JobStoreSupport.java:1536)

Looking at your stack trace, you are still using
org.quartz.impl.jdbcjobstore.StdJDBCDelegate
whereas your quartz.properties sets
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
Therefore my guess is that you are picking up the wrong quartz.properties when running in Kubernetes.
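If the container image ships a different (or no) quartz.properties, one way to rule that out is to point Spring's SchedulerFactoryBean at the intended file explicitly. A minimal sketch, assuming the file sits at the classpath root and a Spring-managed DataSource is available (the class and bean names here are illustrative):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzSchedulerConfig {

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean(DataSource dataSource) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        // Use the application's DataSource for the JDBC job store.
        factory.setDataSource(dataSource);
        // Load quartz.properties from the classpath explicitly, so the scheduler
        // cannot silently fall back to defaults (and StdJDBCDelegate) if a
        // different file ends up on the classpath inside the container.
        factory.setConfigLocation(new ClassPathResource("quartz.properties"));
        return factory;
    }
}

Logging the effective org.quartz.jobStore.driverDelegateClass at startup inside the cluster also quickly confirms which properties file actually won.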

Related

Liquibase in a Spring Application doesn't fail on unknown change type

We recently ran into an issue where we had a typo in a Liquibase change type and nothing caught it. We used the addNonNullConstraint change type, which does not exist, instead of addNotNullConstraint.
The issue here is that we're using the YAML format for the changeset file, which doesn't have any validation. With the XML format, we would at least get an error in the editor.
Executing the Liquibase CLI validate command finds the faulty change set:
$ liquibase validate
Starting Liquibase at 13:46:47 (version 4.17.0 #4922 built at 2022-10-05 14:56+0000)
Liquibase Version: 4.17.0
Liquibase Community 4.17.0 by Liquibase
Unexpected error running Liquibase: Error parsing db/changelog.yaml
- Caused by: Error parsing /db/changes/0019_mark-column-non-null.yaml
- Caused by: Error parsing /db/changes/0019_mark-column-non-null.yaml: Unknown change type 'addNonNullConstraint'. Check for spelling or capitalization errors and missing extensions such as liquibase-commercial.
For more information, please use the --log-level flag
But when starting up the Spring Boot app, there’s no hint that something went wrong with one of the new changesets that need to be applied:
INFO Liquibase.database : Set default schema name to public
INFO liquibase.lockservice : Successfully acquired change log lock
INFO liquibase.changelog : Reading from public.databasechangelog
Running Changeset: /db/changes/0019_mark-column-non-null.yaml::Mark column as non-null::Dev
INFO liquibase.changelog : ChangeSet /db/changes/0019_mark-column-non-null.yaml::Mark column as non-null::Dev ran successfully in 0ms
Ideally, the Spring Boot app would validate the change sets before applying them, and fail if the change type is unknown. But I could not find out how to run the Liquibase validate command in the context of the Spring integration.
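One way to get that check is to call Liquibase's validate() through its Java API before the changelog is applied. A rough sketch, assuming Liquibase 4.x and the db/changelog.yaml path shown above; the ChangelogValidator class is purely illustrative, not an existing Spring integration point:

import java.sql.Connection;

import javax.sql.DataSource;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class ChangelogValidator {

    // Throws a LiquibaseException if any change set uses an unknown change type,
    // mirroring what the CLI's validate command reports.
    public void validate(DataSource dataSource) throws Exception {
        try (Connection connection = dataSource.getConnection()) {
            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(connection));
            Liquibase liquibase = new Liquibase("db/changelog.yaml",
                    new ClassLoaderResourceAccessor(), database);
            liquibase.validate();
        }
    }
}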

Getting an error while pushing a message to IBM MQ from an Apache Camel Spring application

I am getting the below error while pushing a message from an Apache Camel component to IBM MQ.
Error:
Caused by: com.ibm.msg.client.jms.DetailedMessageFormatException: JMSCC0050: The property name 'JMS_Solace_DeadMsgQueueEligible' is reserved and cannot be set.
Below are the pom versions I am using:
<camel-spring-boot-starter.version>2.21.0</camel-spring-boot-starter.version>
<camel-spring.version>2.21.0</camel-spring.version>
<camel-jms.version>2.21.0</camel-jms.version>
I am running the application in a Spring Boot container.
Try to remove that header, something like:
from solace
removeHeaders("JMS_Solace*")
to ibmmq
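Expanded into a self-contained route, a minimal sketch (the endpoint URIs are placeholders, not the ones from the original route):

import org.apache.camel.builder.RouteBuilder;

public class SolaceToIbmMqRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("jms:queue:FROM.SOLACE")          // placeholder Solace endpoint
            // Strip all Solace-specific JMS properties before the message reaches
            // IBM MQ, whose JMS client rejects reserved vendor properties such as
            // JMS_Solace_DeadMsgQueueEligible.
            .removeHeaders("JMS_Solace*")
            .to("jms:queue:TO.IBMMQ");         // placeholder IBM MQ endpoint
    }
}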

Unable to create state store in Kafka streams

I am getting the error Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1 while creating a state store in my Kafka Streams application. Here is the complete stack trace of the application:
[2016-08-30 12:43:09,408] ERROR [StreamThread-1] User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group string-monitor failed on partition assignment (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
... 1 more
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
And I create the state store as below:
StateStoreSupplier avgStore = Stores.create("avgStore")
.withKeys(Serdes.String())
.withValues(Serdes.String())
.persistent()
.build();
Any idea of how to fix this?
Did you configure multiple threads within your application instance? If yes, it may be due to a known issue in older versions of Kafka: the underlying consumer used by Kafka Streams (in the app instance) can take too long to rebalance, causing it to be kicked out of the consumer group (and hence triggering another consumer group rebalance) while it is still working through the first rebalance.
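For reference, the thread count in question is num.stream.threads on the application instance; a minimal sketch of where it is configured (the application id matches your logs, the other values are placeholders):

import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class StreamsConfigExample {

    public static Properties streamsProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "string-monitor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // More than one stream thread per instance makes the long-rebalance
        // scenario described above more likely to surface.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
        return props;
    }
}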
The following error messages in your stacktrace indicate that you are actually running into the problem I described above:
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
The problem is summarized in this Apache Kafka ticket:
https://issues.apache.org/jira/browse/KAFKA-3758
A recent change to the underlying Kafka consumer client fixes this issue:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
However, it is not yet included in an official Kafka release and, as of today, is available only in Apache Kafka trunk. If you are able to run your app against Kafka trunk, you can verify whether this problem has already gone away.
I also saw this problem when the user didn't have permission to write to the default state.dir.
When I changed the following property to a directory with good permissions, everything was fine:
property.put(StreamsConfig.STATE_DIR_CONFIG, "{goodDir}")
This was observed in 0.10.2
Just a note for those who receive the same Failed to lock the state directory exception and use Spring for Apache Kafka (spring-kafka) (example from the Spring docs):
It took me a while to figure out from the source code that Spring starts your built streams automatically, so there is no need to manually do something like:
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();
I did that and faced the aforementioned exception because, not surprisingly, there were two application instances running: one created by Spring, another by me.
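In other words, with spring-kafka the framework owns the KafkaStreams lifecycle. A minimal sketch of the Spring-managed setup, with no hand-built KafkaStreams instance (topic names and types are illustrative, and it assumes the usual @EnableKafkaStreams configuration from the Spring documentation example is already in place):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;

@Configuration
@EnableKafkaStreams
public class StreamsTopologyConfig {

    @Bean
    public KStream<String, String> monitorStream(StreamsBuilder builder) {
        // Spring's StreamsBuilderFactoryBean builds and starts the KafkaStreams
        // instance behind this builder; constructing and starting another one by
        // hand leaves two instances competing for the same state directory.
        KStream<String, String> stream = builder.stream("input-topic");
        stream.to("output-topic");
        return stream;
    }
}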

Spring Boot Actuator Liquibase endpoint fail

I'm trying to use Liquibase with Spring Boot.
Here is my application.properties file:
# ----------------------------------------
# DATA PROPERTIES
# ----------------------------------------
spring.datasource.url=jdbc:postgresql://xxxxxx:5432/dev
spring.datasource.schema=my_schema
spring.datasource.username=my_username
spring.datasource.password=my_password
# LIQUIBASE (LiquibaseProperties)
liquibase.default-schema=${spring.datasource.schema}
liquibase.user=${spring.datasource.username}
liquibase.password=${spring.datasource.password}
Change sets are applied correctly (table creation is OK).
The problem comes when I access the /liquibase actuator endpoint: I get a 500 error:
Unable to get Liquibase changelog
I also get the following log:
org.postgresql.util.PSQLException: ERROR: relation "public.databasechangelog" does not exist
I think the problem is the schema prefix used to access the changelog table: "public" versus "my_schema".
I thought spring.datasource.schema was the right parameter to set?
Here is a working solution (which comes from this answer):
# ----------------------------------------
# DATA PROPERTIES
# ----------------------------------------
spring.datasource.url=jdbc:postgresql://xxxxxx:5432/dev?currentSchema=my_schema
Just a guess here: I think the issue is that your 'real' schema is being set by spring.datasource.schema, but the Liquibase tables are being stored in public, and the Spring Boot actuator may not know that those can be separate.

Spring Boot JMS & Batch

Previously everything worked properly. Today I configured Spring Batch together with my Spring Boot application and faced an issue with application.properties.
I have the following properties, with some values encrypted with Jasypt:
spring.profiles.active=https
ENVIRONMENT=h2
#aws sqs
aws.sqs.account.access.key=ENC(kjsdh456fgkjhdfsgkjhdfg)
#queue message listener
queue.message.listener.task.executor.threads.number=1
queue.message.listener.task.executor.max.concurrent.consumers=1
Now, in order to configure Spring Batch, I added
ENVIRONMENT=h2
to the application.properties file.
I have also added a batch-h2.properties file:
# Placeholders batch.* for H2 database:
batch.jdbc.driver=org.h2.Driver
batch.jdbc.url=jdbc:h2:~/testdb;CIPHER=AES;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE
batch.jdbc.user=sa
batch.jdbc.password="sa sa"
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-h2.sql
batch.schema.script=classpath:/org/springframework/batch/core/schema-h2.sql
batch.business.schema.script=classpath:/business-schema-h2.sql
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.H2SequenceMaxValueIncrementer
batch.database.incrementer.parent=sequenceIncrementerParent
batch.lob.handler.class=org.springframework.jdbc.support.lob.DefaultLobHandler
batch.grid.size=2
batch.jdbc.pool.size=6
batch.verify.cursor.position=true
batch.isolationlevel=ISOLATION_SERIALIZABLE
batch.table.prefix=BATCH_
and after that I continuously receive the following exception:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'aws.sqs.account.access.key' in string value "${aws.sqs.account.access.key}"
The aws.sqs.account.access.key property now cannot be resolved.
I'm injecting this property into my configuration:
@Configuration
public class SQSConfig {

    @Value("${aws.sqs.account.access.key}")
    private String accessKey;
}
How do I fix it?
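One direction to look at, sketched under the assumption that batch-h2.properties was wired in through a separate placeholder configurer: exposing the batch file through @PropertySource keeps every ${...} placeholder, including aws.sqs.account.access.key from application.properties, resolvable against the same Spring Environment.

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

// Assumption: ENVIRONMENT=h2 is already set, so this resolves to batch-h2.properties.
// Registering the file as a property source avoids adding a second placeholder
// configurer that only sees the batch properties and therefore fails on
// ${aws.sqs.account.access.key}.
@Configuration
@PropertySource("classpath:batch-${ENVIRONMENT}.properties")
public class BatchPropertiesConfig {
}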
