ShedLock only works on one instance, but not multiple - Spring

I would like to set up ShedLock to guard a sensitive process so that only ONE instance of the process ever runs, even when multiple application processes are started.
In my pom.xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-spring</artifactId>
</dependency>
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-jdbc-template</artifactId>
</dependency>
My DB:
CREATE TABLE shedlock(
    name VARCHAR(64) NOT NULL,
    lock_until TIMESTAMP NOT NULL,
    locked_at TIMESTAMP NOT NULL,
    locked_by VARCHAR(255) NOT NULL,
    PRIMARY KEY (name));
My configuration:
@Configuration
@EnableScheduling
public class ShedlockConfiguration {

    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(
                JdbcTemplateLockProvider.Configuration.builder()
                        .withJdbcTemplate(new JdbcTemplate(dataSource))
                        .usingDbTime()
                        .build()
        );
    }
}
My schedule:
@Component
public class SchedulerA {

    @Scheduled(initialDelayString = "${examples.scheduler.initial-delay:PT1S}",
            fixedDelayString = "${examples.scheduler.fixed-delay:PT10S}")
    @SchedulerLock(name = "example_scheduler",
            lockAtLeastFor = "${examples.scheduler.lock-at-least:PT5S}",
            lockAtMostFor = "${examples.scheduler.lock-at-most:PT30S}")
    public void schedule() {
        // Implementation not important
    }
}
Symptom:
If I start only one instance with multiple copies of the SchedulerA class (SchedulerB, SchedulerC, etc., all copies of the same code), I can see that ShedLock does its thing and only allows one LOCAL instance to execute at a time. But when I start up multiple Spring Boot applications, they all run their schedules concurrently, even though they use the same DB, the same table, and the same scheduler name. I also notice no entries in the DB table, yet the debug logs reveal no errors.
Question:
Is this the expected behaviour of ShedLock? Should I research another solution, or did I misconfigure something?

You need to add @EnableSchedulerLock to your configuration class, as per the documentation: "In order to enable schedule locking use @EnableSchedulerLock annotation".

You need to add the @EnableSchedulerLock annotation, with its mandatory parameter defaultLockAtMostFor, on the main class from which your Spring Boot application starts. It will prevent multiple instances of the same Spring Boot application from running a scheduled task at the same time.
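A minimal sketch of the fix, applied to the configuration class from the question; the defaultLockAtMostFor value is only an example fallback, used when a lock does not specify its own lockAtMostFor:

@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S") // fallback when lockAtMostFor is not set on a lock
public class ShedlockConfiguration {

    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(
                JdbcTemplateLockProvider.Configuration.builder()
                        .withJdbcTemplate(new JdbcTemplate(dataSource))
                        .usingDbTime()
                        .build()
        );
    }
}

Once the annotation is in place, rows should start appearing in the shedlock table, which is also a quick way to verify that the lock is actually being taken.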

Related

Change value at Runtime in Spring Boot

I have the code below. Can I change the ThreadPoolExecutor thread pool size at run time?
I am using Spring Boot.
@Configuration
public class ExecutorConfig {

    @Value("${numberOfThreads}")
    private int numberOfThreads; // numberOfThreads is configured in the app.properties file

    @Bean
    public ThreadPoolExecutor executorConfig() {
        // Executors.newFixedThreadPool returns an ExecutorService whose
        // runtime type is ThreadPoolExecutor, so the cast is safe
        return (ThreadPoolExecutor) Executors.newFixedThreadPool(numberOfThreads);
    }
}
One option is to add a setter for the numberOfThreads property and then provide a way to update it, like a new endpoint. But if your app restarts, it will still pick up the previous value from application.properties.
Another option is to use Spring Cloud Config, but this may or may not be overkill for your case.
Also, this answer goes a bit deeper, with some code examples to force a reload.
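For the first option, ThreadPoolExecutor itself exposes setCorePoolSize and setMaximumPoolSize, which take effect at runtime without recreating the pool. A minimal sketch; the endpoint path and controller name are made up for illustration:

@RestController
public class PoolSizeController {

    @Autowired
    private ThreadPoolExecutor executor;

    // Hypothetical endpoint for resizing the pool at runtime.
    @PostMapping("/executor/pool-size/{size}")
    public void resize(@PathVariable int size) {
        if (size > executor.getMaximumPoolSize()) {
            // growing: raise the maximum first so core <= max always holds
            executor.setMaximumPoolSize(size);
            executor.setCorePoolSize(size);
        } else {
            // shrinking: lower the core first for the same reason
            executor.setCorePoolSize(size);
            executor.setMaximumPoolSize(size);
        }
    }
}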

How do I get JobRunr to detect my scheduled background job in a Spring controller/service?

I have been looking into using JobRunr for starting background jobs on my Spring MVC application, as I really like the simplicity of it, and the ease of integrating it into an IoC container.
I am trying to create a simple test scheduled job that writes a line of text to my configured logger every minute, but I'm struggling to figure out how to get the JobRunr background job server to detect it and queue it up. I am not using Spring Boot so I am just using the generic jobrunr Maven artifact rather than the "Spring Boot Starter". My setup is as follows:
pom.xml
<dependency>
    <groupId>org.jobrunr</groupId>
    <artifactId>jobrunr</artifactId>
    <version>2.0.0</version>
</dependency>
ApplicationConfig.java
@Bean
public JobMapper jobMapper() {
    return new JobMapper(new JacksonJsonMapper());
}

@Bean
@DependsOn("jobMapper")
public StorageProvider storageProvider(JobMapper jobMapper) {
    InMemoryStorageProvider storageProvider = new InMemoryStorageProvider();
    storageProvider.setJobMapper(jobMapper);
    return storageProvider;
}

@Bean
@DependsOn("storageProvider")
public JobScheduler jobScheduler(StorageProvider storageProvider, ApplicationContext applicationContext) {
    return JobRunr.configure().useStorageProvider(storageProvider)
            .useJobActivator(applicationContext::getBean)
            .useDefaultBackgroundJobServer()
            .useDashboard()
            .useJmxExtensions()
            .initialize();
}
BackgroundJobsController.java
@Controller
public class BackgroundJobsController {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    private @Autowired JobScheduler jobScheduler;

    @Job(name = "Test")
    public void executeJob() {
        BackgroundJob.scheduleRecurrently(Cron.minutely(), () -> logger.debug("It works!"));
        jobScheduler.scheduleRecurrently(Cron.minutely(), () -> logger.debug("It works too!"));
    }
}
As you can see, I have tried both methods of initiating the background job in the executeJob method. The issue is basically getting JobRunr to detect the jobs - is it simply a case of somehow triggering the executeJob method upon startup of the application? If so, does anyone know the simplest way to do that? Previously I have used the Spring @Scheduled annotation to automatically run through methods in a Service/Controller class upon startup of the application, so I was hoping there was a straightforward way to get JobRunr to pick up the scheduled tasks I am trying to create. Apologies if it is something stupid that I have overlooked. I've spent a good few hours trying different things and reading through the documentation!
Thanks in advance!
There are different ways of doing this:
The example below is one; annotating a method with @PostConstruct is indeed another.
@SpringBootApplication
@Import(JobRunrExampleConfiguration.class)
public class JobRunrApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(JobRunrApplication.class, args);
        JobScheduler jobScheduler = applicationContext.getBean(JobScheduler.class);
        jobScheduler.<SampleJobService>scheduleRecurrently("recurring-sample-job", every5minutes(), x -> x.executeSampleJob("Hello from recurring job"));
    }
}
You can see an example here: https://github.com/jobrunr/example-java-mag/blob/main/src/main/java/org/jobrunr/examples/JobRunrApplication.java
Have you tried annotating your executeJob method with @PostConstruct? That way, upon initialisation of your application, the jobs would be registered with the JobServer.
I believe the @Job annotation is meant for the method of the job itself (in your case, the method doing the logging).
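A minimal sketch of that suggestion, reusing the controller from the question:

@Controller
public class BackgroundJobsController {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    private @Autowired JobScheduler jobScheduler;

    // Runs once after the bean is constructed, so the recurring job is
    // registered on application startup.
    @PostConstruct
    public void executeJob() {
        jobScheduler.scheduleRecurrently(Cron.minutely(), () -> logger.debug("It works!"));
    }
}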
There is now a new way to do this:
You can add @Recurring to any Spring Boot, Micronaut or Quarkus bean method. A Spring Boot example:
@Component
public class SomeService {

    @Recurring(id = "recurring-job-every-5-min", interval = "PT5M")
    @Job(name = "job name for the dashboard")
    public void runEvery5Minutes() {
        // business logic comes here
    }
}
For more info, see the JobRunr documentation.

Is there sample code available to capture Spring Batch micro metrics?

Developing a Spring Boot batch application, I wanted to know if there is sample code showing how to get the micro metrics discussed in the Spring Batch documentation.
I am looking for a way of getting these details per execution. Also, since I run the application using a cron task schedule, can we get this data separated per execution?
Found the solution.
You need not do any coding; it is all readily available.
Include the actuator dependency in your pom:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
In the properties file, add this line (instead of *, you can expose a comma-separated list of endpoints and add security). Reference:
management.endpoints.web.exposure.include=*
If your application is running on port 8080,
http://localhost:8080/actuator/metrics/spring.batch.job
will give the job stats. You can query any of the other metrics parameters the same way. Reference
You can collect metrics for every execution in the cron task schedule as follows.
First, create a custom metric endpoint:
@Component
@Endpoint(id = "spring.batch.executions")
public class BatchMetricEndpoint {

    @Autowired
    BatchAllJobsMetricContext batchAllJobsMetricContext;

    @ReadOperation
    public Map<String, BatchPerJobMetricContext> features() {
        return batchAllJobsMetricContext.getAll();
    }
}
Second, create a job listener that taps into the local metric endpoint after each execution:
@Component
public class BatchJobListener implements JobExecutionListener {

    @Autowired
    private MetricsEndpoint metricsEndpoint;

    @Autowired
    BatchAllJobsMetricContext batchAllJobsMetricContext;

    @Autowired
    BatchPerJobMetricContext batchPerJobMetricContext;

    @Override
    public void beforeJob(JobExecution jobExecution) {
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        MetricsEndpoint.MetricResponse metricResponse = metricsEndpoint.metric("spring.batch.job", null);
        String key = jobExecution.getJobParameters().getString("jobId");
        String execution = "Execution " + jobExecution.getJobParameters().getString("executionCout");
        if (batchAllJobsMetricContext.hasKey(key)) {
            batchAllJobsMetricContext.get(key).put(execution, metricResponse);
        } else {
            batchPerJobMetricContext.put(execution, metricResponse);
            batchAllJobsMetricContext.put(key, batchPerJobMetricContext);
        }
    }
}
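The two metric context holders above are custom classes the answer does not show; here is a minimal sketch, assuming they are simple map-backed components (each class would live in its own file):

@Component
public class BatchAllJobsMetricContext {

    // one entry per job, keyed by the "jobId" job parameter
    private final Map<String, BatchPerJobMetricContext> metricsPerJob = new ConcurrentHashMap<>();

    public boolean hasKey(String key) {
        return metricsPerJob.containsKey(key);
    }

    public BatchPerJobMetricContext get(String key) {
        return metricsPerJob.get(key);
    }

    public void put(String key, BatchPerJobMetricContext context) {
        metricsPerJob.put(key, context);
    }

    public Map<String, BatchPerJobMetricContext> getAll() {
        return Collections.unmodifiableMap(metricsPerJob);
    }
}

@Component
public class BatchPerJobMetricContext {

    // one snapshot of the spring.batch.job metric per execution
    private final Map<String, MetricsEndpoint.MetricResponse> metricsPerExecution = new ConcurrentHashMap<>();

    public void put(String execution, MetricsEndpoint.MetricResponse response) {
        metricsPerExecution.put(execution, response);
    }
}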
Third, tap into the local metric endpoint to aggregate the data.
Please note that holding a metric per iteration this way would be expensive on memory; you would want to keep some limit and push this data to a time-series data source.
Sample code

Spring Boot test: Wait for microservice's scheduled task

I'm trying to test a service that communicates with another one.
One of them generates audits which are stored in memory until a scheduled task flushes them to a Redis node:
@Component
public class AuditFlushTask {

    private AuditService auditService;

    public AuditFlushTask(AuditService auditService) {
        this.auditService = auditService;
    }

    @Scheduled(fixedDelayString = "${fo.audit-flush-interval}")
    public void flushAudits() {
        this.auditService.flush();
    }
}
On the other hand, this service provides an endpoint that serves those flushed audits:
public Collection<String> listAudits() {
    return this.boService.listRawAudits(deadlineTimestamp);
}
The problem is that I'm building an integration test to check whether this process works right, I mean, whether the audits are provided correctly.
So I don't know how to "wait until audits have been flushed" on the microservice.
Any ideas?
Don't test the framework: Spring almost certainly has tests which cover fixed delays.
Instead, keep all logic within the service itself, and integration test that in isolation from the Spring @Scheduled mechanism.
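A minimal sketch of that idea, reusing the names from the question: invoke the flush logic directly instead of waiting for the scheduler to fire.

@SpringBootTest
class AuditFlushIntegrationTest {

    @Autowired
    private AuditService auditService;

    @Test
    void flushedAuditsBecomeListable() {
        // arrange: generate some audits here (details depend on your service)

        // act: run the same logic the @Scheduled method delegates to
        auditService.flush();

        // assert: the flushed audits are now visible through the listing path,
        // e.g. via the endpoint or service that backs listAudits()
    }
}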

Table name configured with external properties file

I built a Spring Boot application that accesses a database and extracts data from it. Everything is working fine, but I want to configure the table names from an external .properties file,
like:
@Entity
@Table(name = "${fleet.table.name}")
public class Fleet {
    ...
}
I tried to find something, but I didn't find anything. You can access external properties with the @Value("...") annotation.
So my question is: Is there any way I can configure the table names? Or can I change/intercept the query that is sent by Hibernate?
Solution:
OK, Hibernate 5 works with the PhysicalNamingStrategy, so I created my own PhysicalNamingStrategy.
@Configuration
public class TableNameConfig {

    @Value("${fleet.table.name}")
    private String fleetTableName;

    @Value("${visits.table.name}")
    private String visitsTableName;

    @Value("${route.table.name}")
    private String routeTableName;

    @Bean
    public PhysicalNamingStrategyStandardImpl physicalNamingStrategyStandard() {
        return new PhysicalNamingImpl();
    }

    class PhysicalNamingImpl extends PhysicalNamingStrategyStandardImpl {

        @Override
        public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
            switch (name.getText()) {
                case "Fleet":
                    return new Identifier(fleetTableName, name.isQuoted());
                case "Visits":
                    return new Identifier(visitsTableName, name.isQuoted());
                case "Result":
                    return new Identifier(routeTableName, name.isQuoted());
                default:
                    return super.toPhysicalTableName(name, context);
            }
        }
    }
}
Also, this Stack Overflow question about NamingStrategy gave me the idea.
Table names really come from Hibernate itself via its strategy interfaces. Boot configures this as SpringNamingStrategy, and there were some changes in Boot 2.x to how things can be customised. It's worth reading gh-1525, where these changes were made. Configure Hibernate Naming Strategy has some more info.
There were some ideas to add custom properties to configure SpringNamingStrategy, but we went with allowing easier customisation of the whole strategy bean, as that lets users do whatever they need to do.
AFAIK, there's no direct way to do the config you asked about, but I'd assume that if you create your own strategy you can then auto-wire your own properties into it. As those customised strategy interfaces see the entity name, you could reserve a keyspace in Boot's configuration properties for this and match entity names:
mytables.naming.fleet.name=foobar
mytables.naming.othertable.name=xxx
Your configuration properties would take mytables, and within that, naming would be a Map. Then in your custom strategy it would simply be a matter of checking the map to see whether a custom name was defined.
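A minimal sketch of that idea, assuming Boot 2.x; the class names and the mytables prefix are made up for illustration:

@ConfigurationProperties(prefix = "mytables")
public class TableNamingProperties {

    // binds mytables.naming.<entity>.name=<physical table name>
    private final Map<String, TableProperties> naming = new HashMap<>();

    public Map<String, TableProperties> getNaming() {
        return naming;
    }

    public static class TableProperties {
        private String name;

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }
}

public class ConfigurableNamingStrategy extends SpringPhysicalNamingStrategy {

    private final TableNamingProperties tables;

    public ConfigurableNamingStrategy(TableNamingProperties tables) {
        this.tables = tables;
    }

    @Override
    public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment jdbcEnvironment) {
        // use the configured override when present, else fall back to the default rules
        TableNamingProperties.TableProperties override = tables.getNaming().get(name.getText().toLowerCase());
        return override != null
                ? Identifier.toIdentifier(override.getName())
                : super.toPhysicalTableName(name, jdbcEnvironment);
    }
}

You would still need to enable binding with @EnableConfigurationProperties(TableNamingProperties.class) and expose the strategy as a @Bean so that Boot applies it and can inject the properties object.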
Spring Boot solution:
Create the class below:
@Configuration
public class CustomPhysicalNamingStrategy extends SpringPhysicalNamingStrategy {

    @Value("${table.name}")
    private String tableName;

    @Override
    public Identifier toPhysicalTableName(final Identifier identifier, final JdbcEnvironment jdbcEnv) {
        return Identifier.toIdentifier(tableName);
    }
}
Add the properties below to application.properties:
spring.jpa.properties.hibernate.physical_naming_strategy=<package.name>.CustomPhysicalNamingStrategy
table.name=product
