Hikari connection is unavailable - spring-boot

I am encountering an issue where a Spring Boot service, after running for about a day under some load, is suddenly unable to obtain a database connection.
I used the standard Spring Boot 2 project starter and created a REST controller, a model, a service, and a repository.
@Entity
@Table(name = "partners")
public class Partner {
/** Unique identifier. */
@Id
@JsonProperty("id")
private long id;
/** Partner name. */
@JsonProperty("name")
private String name;
/**
* Default constructor. Not usable publicly.
*/
protected Partner() {
}
// etc...
}
Repo
/**
* Partner repo.
*/
public interface PartnerRepository extends CrudRepository<Partner, Long> {
}
The Service
@Service
public class PartnersService {
/** Access to Partner repo. */
@Autowired
private PartnerRepository partnerRepo;
/**
* Get partner data.
*
* @param id Partner identifier.
* @return Partner instance, or null if not found.
*/
*/
public Partner getPartner(final long id) throws ServiceException {
if (id <= 0) {
throw new ServiceException("Please enter a valid partner identifier.");
}
final Optional<Partner> partner = partnerRepo.findById(id);
return partner.orElse(null);
}
}
The Controller
@RestController
public class PartnerController extends BaseController {
/** Partners service. */
@Autowired
private PartnersService partnersService;
@RequestMapping(value = "/partners/api/v1/partners/{id}", method = GET)
public ResponseEntity<FilldResponse> getPartner(@PathVariable final long id) {
final FilldResponse response = new FilldResponse();
HttpStatus httpStatus = HttpStatus.OK;
try {
final Partner partner = partnersService.getPartner(id);
if (partner != null) {
response.setData(partner);
response.setMessage("Partner retrieved successfully.");
} else {
response.setMessage("Partner not found.");
response.setStatusCode(4000);
response.setStatus(false);
httpStatus = HttpStatus.NOT_FOUND;
}
} catch (Exception ex) {
response.setStatus(false);
response.setMessage(ex.getMessage());
response.setStatusCode(4000);
httpStatus = HttpStatus.BAD_REQUEST;
}
return new ResponseEntity<>(response, httpStatus);
}
}
Everything is pretty much default configuration. After a while of use in production, I get:
Mar 09 09:18:22 52.43.134.45-1 partners: 2020-03-09 16:18:22.449 WARN 1 --- [nio-8080-exec-9] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 08S01
Mar 09 09:18:22 52.43.134.45-1 partners: 2020-03-09 16:18:22.449 ERROR 1 --- [nio-8080-exec-9] o.h.engine.jdbc.spi.SqlExceptionHelper : HikariPool-1 - Connection is not available, request timed out after 30000ms.
This is running in AWS as a docker image.
Are there any known issues, or configuration that is not set out of the box, that might be causing this?
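For context, the pool is Hikari with everything left at its defaults. A sketch of the relevant settings spelled out as application.properties entries (these are the default values, shown for reference rather than as a fix):
# Hikari defaults made explicit
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.max-lifetime=1800000
# disabled by default; a nonzero value logs connections held longer than the threshold
spring.datasource.hikari.leak-detection-threshold=0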


How to manually initialize a database using a datasource?

My current AbstractDataSource implementation does the following:
1. Spring initializes with the default URL/schema so the user can log in.
2. After a successful login, the default schema changes to another schema based on the UserDetails class.
An example would be a user from company X being redirected to schema X and a user from company K being redirected to schema K after a successful login.
Problem:
I need to initialize the database from step 2 with:
spring.jpa.hibernate.ddl-auto=create
Currently, Spring Boot only initializes the default database that the user uses to log in. However, I need to execute a different initialization for the other schemas, which depend on the logged-in user.
public class UserSchemaAwareRoutingDataSource extends AbstractDataSource {
@Autowired
private UsuarioProvider usuarioProvider;
/**
* This is the data source that is dependent on the user.
*/
@Autowired
@Qualifier(value = "companyDependentDataSource")
private DataSource companyDependentDataSource;
/**
* This is the initial datasource.
*/
@Autowired
@Qualifier(value = "loginDataSource")
private DataSource loginDataSource;
/**
* Variable representing the environment in which the current application is
* running.
*/
@Autowired
Environment env;
/**
* A semi-persistent mapping from Schemas to dataSources. This exists,
* because ??? to increase performance and diminish overhead???
*/
private LoadingCache<String, DataSource> dataSources = createCache();
public UserSchemaAwareRoutingDataSource() {
}
/**
* Creates the cache. ???
*
* @return
*/
private LoadingCache<String, DataSource> createCache() {
return CacheBuilder.newBuilder()
.maximumSize(100)
.expireAfterWrite(10, TimeUnit.MINUTES)
.build(
new CacheLoader<String, DataSource>() {
public DataSource load(String key) throws Exception {
return buildDataSourceForSchema(key);
}
});
}
/**
* Builds the datasource with the schema parameter. Notice that the other
* parameters are fixed by the application.properties.
*
* @param schema
* @return
*/
private DataSource buildDataSourceForSchema(String schema) {
logger.info("building datasource with schema " + schema);
String url = env.getRequiredProperty("companydatasource.url");
String username = env.getRequiredProperty("companydatasource.username");
String password = env.getRequiredProperty("companydatasource.password");
DataSource build = DataSourceBuilder.create()
.driverClassName(env.getRequiredProperty("companydatasource.driver-class-name"))
.username(username)
.password(password)
.url(url)
.build();
return build;
}
/**
* Gets the DataSource for the schema from the cache, or builds one if it doesn't exist.
*
* @return
*/
private DataSource determineTargetDataSource() {
try {
String db_schema = determineTargetSchema();
logger.info("using schema " + db_schema);
return dataSources.get(db_schema);
} catch (Exception ex) {
throw new RuntimeException(ex);
}
}
/**
* Determine the schema based on the logged-in user.
*
* @return
*/
private String determineTargetSchema() {
try {
Usuario usuario = usuarioProvider.customUserDetails(); // request scoped answer!
return usuario.getTunnel().getDb_schema();
} catch (RuntimeException e) {
// This shouldn't be necessary, since we are planning to use a pre-initialized database.
// And there should only be usages of this DataSource in a logged-in situation
logger.info("usuario not present, falling back to default schema", e);
return "default_company_schema";
}
}
@Override
public Connection getConnection() throws SQLException {
return determineTargetDataSource().getConnection();
}
@Override
public Connection getConnection(String username, String password) throws SQLException {
return determineTargetDataSource().getConnection(username, password);
}
@Override
public ConnectionBuilder createConnectionBuilder() throws SQLException {
return super.createConnectionBuilder();
}
}
I'm expecting there is some way to call Hibernate tools to initialize the database in the following method:
/**
* Builds the datasource with the schema parameter. Notice that the other
* parameters are fixed by the application.properties.
*
* @param schema
* @return
*/
private DataSource buildDataSourceForSchema(String schema) {
logger.info("building datasource with schema " + schema);
String url = env.getRequiredProperty("companydatasource.url");
String username = env.getRequiredProperty("companydatasource.username");
String password = env.getRequiredProperty("companydatasource.password");
DataSource build = DataSourceBuilder.create()
.driverClassName(env.getRequiredProperty("companydatasource.driver-class-name"))
.username(username)
.password(password)
.url(url)
.build();
//Init database or update it with hibernate
String initDatabase = env.getRequiredProperty("spring.jpa.hibernate.ddl-auto");
if(initDatabase.equalsIgnoreCase("update")){
org.hibernate.tool.hbm2ddl.SchemaUpdate schemaUpdate = new SchemaUpdate();
schemaUpdate.execute(??);
}
return build;
}
Repository: https://github.com/KenobySky/SpringSchema
With field initialization, there's no guarantee that Spring has assigned the autowired Environment before the private dataSources field is initialized.
There are a couple of options:
Add a constructor that takes on the assignment of that property for you.
Use the @PostConstruct annotation to make that assignment wait until the object has been constructed.
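A minimal sketch of the @PostConstruct option, assuming the fields from the question (the init method name is illustrative):
// Declared but no longer initialized inline, so nothing runs before injection completes.
private LoadingCache<String, DataSource> dataSources;

@PostConstruct
private void initDataSourceCache() {
// env and the autowired DataSources are guaranteed to be set by now,
// so buildDataSourceForSchema() can safely read its properties.
this.dataSources = createCache();
}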

How to retrieve the repository from JHipster spring controller?

I have a JHipster microservice application, and I've added a spring controller. However, it is generated without a repository and I don't know how to retrieve it to perform data tasks.
This is the code:
@RestController
@RequestMapping("/api/data")
public class DataResource {
private final Logger log = LoggerFactory.getLogger(DataResource.class);
private final DeviceRepository deviceRepository;
public DataResource() {
}
/**
* GET global
*/
@GetMapping("/global")
public ResponseEntity<GlobalStatusDTO[]> global() {
List<Device> list=deviceRepository.findAll();
GlobalStatusDTO data[]=new GlobalStatusDTO[]{new GlobalStatusDTO(list.size(),1,1,1,1)};
return ResponseEntity.ok(data);
}
}
EDIT: I need to inject an already existing repository, here is the CRUD part where the repository is initialized:
@RestController
@RequestMapping("/api")
@Transactional
public class DeviceResource {
private final Logger log = LoggerFactory.getLogger(DeviceResource.class);
private static final String ENTITY_NAME = "powerbackDevice";
@Value("${jhipster.clientApp.name}")
private String applicationName;
private final DeviceRepository deviceRepository;
public DeviceResource(DeviceRepository deviceRepository) {
this.deviceRepository = deviceRepository;
}
/**
* {@code POST /devices} : Create a new device.
*
* @param device the device to create.
* @return the {@link ResponseEntity} with status {@code 201 (Created)} and with body the new device, or with status {@code 400 (Bad Request)} if the device already has an ID.
* @throws URISyntaxException if the Location URI syntax is incorrect.
*/
@PostMapping("/devices")
public ResponseEntity<Device> createDevice(@Valid @RequestBody Device device) throws URISyntaxException {
...
I might be misunderstanding you, but your first code snippet doesn't work because you didn't inject DeviceRepository through the constructor. Of course, there are other methods of injection.
@RestController
@RequestMapping("/api/data")
public class DataResource {
private final Logger log = LoggerFactory.getLogger(DataResource.class);
private final DeviceRepository deviceRepository;
//changes are here only, constructor method of injection
public DataResource(DeviceRepository deviceRepository) {
this.deviceRepository = deviceRepository;
}
/**
* GET global
*/
@GetMapping("/global")
public ResponseEntity<GlobalStatusDTO[]> global() {
List<Device> list=deviceRepository.findAll();
GlobalStatusDTO data[]=new GlobalStatusDTO[]{new GlobalStatusDTO(list.size(),1,1,1,1)};
return ResponseEntity.ok(data);
}
}
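For reference, one of the "other methods of injection" mentioned above is field injection, sketched here under the assumption that a single DeviceRepository bean exists (constructor injection remains the generally recommended style):
@RestController
@RequestMapping("/api/data")
public class DataResource {
// Spring populates this field after constructing the bean,
// so no constructor parameter is needed.
@Autowired
private DeviceRepository deviceRepository;
// ... endpoints as above ...
}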

UpdateById method in Spring Reactive Mongo Router Handler

In Spring Reactive Java how can I write an updateById() method using the Router and Handler?
For example, the Router has this code:
RouterFunctions.route(RequestPredicates.PUT("/employees/{id}").and(RequestPredicates.accept(MediaType.APPLICATION_JSON))
.and(RequestPredicates.contentType(MediaType.APPLICATION_JSON)),
employeeHandler::updateEmployeeById);
My question is how to write employeeHandler::updateEmployeeById(), keeping the ID the same but changing the other members of the Employee object.
public Mono<ServerResponse> updateEmployeeById(ServerRequest serverRequest) {
Mono<Employee> employeeMono = serverRequest.bodyToMono(Employee.class);
<And now what??>
return ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(employeeMono, Employee.class);
}
The Employee class looks like this:
@Document
@Data
@AllArgsConstructor
@NoArgsConstructor
public class Employee {
@Id
int id;
double salary;
}
Thanks for any help.
First of all, you have to add ReactiveMongoRepository to your classpath. You can also read about it here.
@Repository
public interface EmployeeRepository extends ReactiveMongoRepository<Employee, Integer> {
Mono<Employee> findById(Integer id);
}
Then your updateEmployeeById method can have the following structure:
public Mono<ServerResponse> updateEmployeeById(ServerRequest serverRequest) {
return serverRequest
.bodyToMono(Employee.class)
.doOnSubscribe(e -> log.info("update employee request received"))
.flatMap(employee -> {
Integer id = Integer.parseInt(serverRequest.pathVariable("id"));
return employeeRepository
.findById(id)
.switchIfEmpty(Mono.error(new NotFoundException("employee with " + id + " has not been found")))
// what you need to do is to update already found entity with
// new values. Usually map() function is used for that purpose
// because map is about 'transformation' what is setting new
// values in our case
.map(foundEmployee -> {
foundEmployee.setSalary(employee.getSalary());
return foundEmployee;
});
})
.flatMap(employeeRepository::save)
.doOnError(error -> log.error("error while updating employee", error))
.doOnSuccess(e -> log.info("employee [{}] has been updated", e.getId()))
.flatMap(employee -> ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(BodyInserters.fromValue(employee), Employee.class));
}
UPDATE:
Based on Prana's answer, I have updated the code above, merging our solutions into one. Logging with the help of Slf4j was added, as well as switchIfEmpty() for the case when the entity is not found.
I would also suggest reading about global exception handling, which will make your API even better. Part of it I can provide here:
/**
* Returns routing function.
*
* @param errorAttributes errorAttributes
* @return routing function
*/
@Override
protected RouterFunction<ServerResponse> getRoutingFunction(ErrorAttributes errorAttributes) {
return RouterFunctions.route(RequestPredicates.all(), this::renderErrorResponse);
}
private HttpStatus getStatus(Throwable error) {
HttpStatus status;
if (error instanceof NotFoundException) {
status = NOT_FOUND;
} else if (error instanceof ValidationException) {
status = BAD_REQUEST;
} else {
status = INTERNAL_SERVER_ERROR;
}
return status;
}
/**
* Custom global error handler.
*
* @param request request
* @return response
*/
private Mono<ServerResponse> renderErrorResponse(ServerRequest request) {
Map<String, Object> errorPropertiesMap = getErrorAttributes(request, false);
Throwable error = getError(request);
HttpStatus errorStatus = getStatus(error);
return ServerResponse
.status(errorStatus)
.contentType(APPLICATION_JSON)
.body(BodyInserters.fromValue(errorPropertiesMap));
}
A slightly different version of the above worked without any exceptions:
public Mono<ServerResponse> updateEmployeeById(ServerRequest serverRequest) {
Mono<ServerResponse> notFound = ServerResponse.notFound().build();
Mono<Employee> employeeMono = serverRequest.bodyToMono(Employee.class);
Integer employeeId = Integer.parseInt(serverRequest.pathVariable("id"));
employeeMono = employeeMono.flatMap(employee -> employeeRepository.findById(employeeId)
.map(foundEmployee -> {
foundEmployee.setSalary(employee.getSalary());
return foundEmployee;
})
.flatMap(employeeRepository::save));
return ServerResponse.ok().contentType(MediaType.APPLICATION_JSON).body(employeeMono, Employee.class).switchIfEmpty(notFound);
}
Thanks to Stepan Tsybulski.

What are the best practices for audit logs (user activity) in microservices?

In our microservice architecture, we are logging user activity to a MongoDB table. Is there a good way to store and retrieve audit logs?
You can think of a solution similar to the one below, storing AuditLogging into MongoDB by using the DAO pattern.
@Entity
@Table(name = "AuditLogging")
public class AuditLogging implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "auditid", updatable = false, nullable = false)
private Long auditId;
@Column(name = "event_type", length = 100)
@Enumerated(EnumType.STRING)
private AuditEvent event;
@Column(name = "event_creator", length = 100)
@Enumerated(EnumType.STRING)
private EventCreator eventCreator;
@Column(name = "adminid", length = 20)
private String adminId;
@Column(name = "userid", length = 20)
private String userId;
@Column(name = "event_date")
private Date eventDate;
}
public class Constants {
public static final String EVENT_TYPE = "eventType";
public static final String EVENT_CREATOR = "eventCreator";
public static final String USER_ID = "userId"; // referenced by the aspect below
public static final String NEW_EMAIL_ID = "newEmailId";
public static final String OLD_EMAIL_ID = "oldEmailId";
}
public enum AuditEvent {
USER_REGISTRATION,
USER_LOGIN,
USER_LOGIN_FAIL,
USER_ACCOUNT_LOCK,
USER_LOGOFF,
USER_PASSWORD_CHANGE,
USER_PASSWORD_CHANGE_FAIL,
USER_FORGOT_PASSWORD,
USER_FORGOT_PASSWORD_FAIL,
ADMIN_LOGIN
}
public enum EventCreator {
ADMIN_FOR_SELF,
USER_FOR_SELF,
ADMIN_FOR_USER
}
public interface AuditingDao {
/**
* Stores the event into the DB/Mongo or Whatever
*
* @param auditLogging
* @return Boolean status
*/
Boolean createAuditLog(final AuditLogging auditLogging);
/**
* Returns the log occurrences of a specific event
*
* @param event
* @return List of logged events by type
*/
List<AuditLogging> getLogsForEvent(final AuditEvent event);
}
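The DAO implementation itself is not shown here; a hedged sketch of what a MongoTemplate-backed version could look like (the class name and field mapping are illustrative; Query and Criteria come from org.springframework.data.mongodb.core.query):
@Repository
public class AuditingDaoImpl implements AuditingDao {
@Autowired
private MongoTemplate mongoTemplate;
@Override
public Boolean createAuditLog(final AuditLogging auditLogging) {
// save() writes the document to the collection mapped for AuditLogging
mongoTemplate.save(auditLogging);
return Boolean.TRUE;
}
@Override
public List<AuditLogging> getLogsForEvent(final AuditEvent event) {
// query by the 'event' field of the stored documents
return mongoTemplate.find(
Query.query(Criteria.where("event").is(event)),
AuditLogging.class);
}
}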
public interface AuditingService {
/**
* Creates an Audit entry in the AuditLogging table using the
* DAO layer
*
* @param auditEvent
* @param eventCreator
* @param userId
* @param adminId
* @param newEmailId
* @param oldEmailId
* @return {@link Boolean#TRUE} for success and {@link Boolean#FALSE} for
* failure
*/
Boolean createUserAuditEvent(final AuditEvent auditEvent,
final EventCreator eventCreator, final String userId, final String adminId,
final String newEmailId,final String oldEmailId);
/**
*
* Returns all events for a user/admin based on the id
*
* @param id
* @return List of logged events for an id
*/
List<AuditLogging> fetchLoggedEventsById(final String id);
/**
* Returns all events based on event type
*
* @param eventName
* @return List of logged events for an event
*/
List<AuditLogging> fetchLoggedEventsByEventName(final String eventName);
}
@Service("auditingService")
public class AuditServiceImpl implements AuditingService {
@Autowired
private AuditingDao auditingDao;
private static Logger log = LogManager.getLogger();
@Override
public Boolean createUserAuditEvent(AuditEvent auditEvent,
EventCreator eventCreator, String userId, String adminId,
String newEmailId, String oldEmailId) {
AuditLogging auditLogging = new AuditLogging();
auditLogging.setEvent(auditEvent);
auditLogging.setEventCreator(eventCreator);
auditLogging.setUserId(userId);
auditLogging.setAdminId(adminId);
auditLogging.setEventDate(new Date());
return auditingDao.createAuditLog(auditLogging); // persist via the DAO layer
}
@Override
public List<AuditLogging> fetchLoggedEventsByEventName(
final String eventName) {
AuditEvent event = null;
try {
event = AuditEvent.valueOf(eventName);
} catch (Exception e) {
log.error(e);
return Collections.emptyList();
}
return auditingDao.getLogsForEvent(event);
}
public void setAuditingDao(AuditingDao auditingDao) {
this.auditingDao = auditingDao;
}
}
Writing an aspect is always good for this type of scenario, pointing it at the appropriate controller methods to trigger the event.
@Aspect
@Component("auditingAspect")
public class AuditingAspect {
@Autowired
AuditingService auditingService;
/* You can think of the controllers below as your microservice endpoints */
@Pointcut("execution(* com.controller.RegistrationController.authenticateUser(..)) || execution(* com.controller.RegistrationController.changeUserPassword(..)) || execution(* com.controller.RegistrationController.resetPassword(..)) || execution(* com.controller.UpdateFunctionalityController.updateCustomerDetails(..))")
public void aroundPointCut() {}
@Around("aroundPointCut()")
public Object afterMethodInControllerClass(ProceedingJoinPoint joinPoint)
throws Throwable {
joinPoint.getSignature().getName();
joinPoint.getArgs();
// auditingService
Object result = joinPoint.proceed();
ResponseEntity entity = (ResponseEntity) result;
HttpServletRequest request =
((ServletRequestAttributes) RequestContextHolder
.getRequestAttributes()).getRequest();
if(!((request.getAttribute(Constants.EVENT_TYPE).toString()).equalsIgnoreCase(AuditEvent.USER_LOGIN.toString()) || (((request.getAttribute(Constants.EVENT_TYPE).toString()).equalsIgnoreCase(AuditEvent.ADMIN_LOGIN.toString()))))){
auditingService.createUserAuditEvent(
(AuditEvent) request.getAttribute(Constants.EVENT_TYPE),
(EventCreator) request.getAttribute(Constants.EVENT_CREATOR),
(request.getAttribute(Constants.USER_ID)!= null ? request.getAttribute(Constants.USER_ID).toString():""), null,
(request.getAttribute(Constants.NEW_EMAIL_ID) == null ? ""
: request.getAttribute(Constants.NEW_EMAIL_ID).toString()),
(request.getAttribute(Constants.OLD_EMAIL_ID) == null ? ""
: request.getAttribute(Constants.OLD_EMAIL_ID).toString()));
}
return entity;
}
}
From the REST controller the Aspect will be triggered when it finds the corresponding event.
@RestController
public class RegistrationController {
@RequestMapping(path = "/authenticateUser", method = RequestMethod.POST,
produces = MediaType.APPLICATION_JSON_VALUE)
/* This method call triggers the aspect */
@ResponseBody
public ResponseEntity<String> authenticateUser(HttpServletRequest request, @RequestBody User user)
throws Exception {
request.setAttribute(Constants.EVENT_TYPE, AuditEvent.USER_LOGIN);
request.setAttribute(Constants.EVENT_CREATOR, EventCreator.USER_FOR_SELF);
request.setAttribute(Constants.USER_ID, user.getUserId());
ResponseEntity<String> responseEntity = null;
try {
// Logic for authentication goes here
responseEntity = new ResponseEntity<>(respData, HttpStatus.OK);
} catch (Exception e) {
request.setAttribute(Constants.EVENT_TYPE, AuditEvent.USER_LOGIN_FAIL);
responseEntity = new ResponseEntity<>(respData, HttpStatus.INTERNAL_SERVER_ERROR);
}
return responseEntity;
}
}
I hope this answer makes sense and you can implement similar functionality for Mongo as well.
Cheers!

Spring Scheduled Task running in clustered environment

I am writing an application that has a cron job that executes every 60 seconds. The application is configured to scale onto multiple instances when required. I only want to execute the task on one instance every 60 seconds (on any node). Out of the box I cannot find a solution to this, and I am surprised it has not been asked multiple times before. I am using Spring 4.1.6.
<task:scheduled-tasks>
<task:scheduled ref="beanName" method="execute" cron="0/60 * * * * *"/>
</task:scheduled-tasks>
There is a ShedLock project that serves exactly this purpose. You just annotate tasks which should be locked when executed:
@Scheduled( ... )
@SchedulerLock(name = "scheduledTaskName")
public void scheduledTask() {
// do something
}
Configure Spring and a LockProvider
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "10m")
class MySpringConfiguration {
...
@Bean
public LockProvider lockProvider(DataSource dataSource) {
return new JdbcTemplateLockProvider(dataSource);
}
...
}
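Note that JdbcTemplateLockProvider keeps its locks in a database table; a sketch of the table it expects (check the ShedLock README for the exact DDL for your database and version):
CREATE TABLE shedlock(
name VARCHAR(64) NOT NULL,
lock_until TIMESTAMP NOT NULL,
locked_at TIMESTAMP NOT NULL,
locked_by VARCHAR(255) NOT NULL,
PRIMARY KEY (name)
);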
I think you have to use Quartz Clustering with a JDBC-JobStore for this purpose.
There is another simple and robust way to safely execute a job in a cluster. You can base it on a database and execute the task only if the node is the "leader" in the cluster.
Also, when a node fails or shuts down in the cluster, another node becomes the leader.
All you have to do is create a "leader election" mechanism and check every time whether you are the leader:
@Scheduled(cron = "*/30 * * * * *")
public void executeFailedEmailTasks() {
if (checkIfLeader()) {
final List<EmailTask> list = emailTaskService.getFailedEmailTasks();
for (EmailTask emailTask : list) {
dispatchService.sendEmail(emailTask);
}
}
}
Follow these steps:
1. Define the object and table that holds one entry per node in the cluster:
@Entity(name = "SYS_NODE")
public class SystemNode {
/** The id. */
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
/** The timestamp. */
@Column(name = "TIMESTAMP")
private String timestamp;
/** The ip. */
@Column(name = "IP")
private String ip;
/** The last ping. */
@Column(name = "LAST_PING")
private Date lastPing;
/** The creation date. */
@Column(name = "CREATED_AT")
private Date createdAt = new Date();
/** The leader flag. */
@Column(name = "IS_LEADER")
private Boolean isLeader = Boolean.FALSE;
public Long getId() {
return id;
}
public void setId(final Long id) {
this.id = id;
}
public String getTimestamp() {
return timestamp;
}
public void setTimestamp(final String timestamp) {
this.timestamp = timestamp;
}
public String getIp() {
return ip;
}
public void setIp(final String ip) {
this.ip = ip;
}
public Date getLastPing() {
return lastPing;
}
public void setLastPing(final Date lastPing) {
this.lastPing = lastPing;
}
public Date getCreatedAt() {
return createdAt;
}
public void setCreatedAt(final Date createdAt) {
this.createdAt = createdAt;
}
public Boolean getIsLeader() {
return isLeader;
}
public void setIsLeader(final Boolean isLeader) {
this.isLeader = isLeader;
}
@Override
public String toString() {
return "SystemNode{" +
"id=" + id +
", timestamp='" + timestamp + '\'' +
", ip='" + ip + '\'' +
", lastPing=" + lastPing +
", createdAt=" + createdAt +
", isLeader=" + isLeader +
'}';
}
}
2. Create the service that a) inserts the node into the database, b) checks for the leader:
@Service
@Transactional
public class SystemNodeServiceImpl implements SystemNodeService, ApplicationListener {
/** The logger. */
private static final Logger LOGGER = Logger.getLogger(SystemNodeService.class);
/** The constant NO_ALIVE_NODES. */
private static final String NO_ALIVE_NODES = "Not alive nodes found in list {0}";
/** The ip. */
private String ip;
/** The system service. */
private SystemService systemService;
/** The system node repository. */
private SystemNodeRepository systemNodeRepository;
@Autowired
public void setSystemService(final SystemService systemService) {
this.systemService = systemService;
}
@Autowired
public void setSystemNodeRepository(final SystemNodeRepository systemNodeRepository) {
this.systemNodeRepository = systemNodeRepository;
}
@Override
public void pingNode() {
final SystemNode node = systemNodeRepository.findByIp(ip);
if (node == null) {
createNode();
} else {
updateNode(node);
}
}
@Override
public void checkLeaderShip() {
final List<SystemNode> allList = systemNodeRepository.findAll();
final List<SystemNode> aliveList = filterAliveNodes(allList);
SystemNode leader = findLeader(allList);
if (leader != null && aliveList.contains(leader)) {
setLeaderFlag(allList, Boolean.FALSE);
leader.setIsLeader(Boolean.TRUE);
systemNodeRepository.save(allList);
} else {
final SystemNode node = findMinNode(aliveList);
setLeaderFlag(allList, Boolean.FALSE);
node.setIsLeader(Boolean.TRUE);
systemNodeRepository.save(allList);
}
}
/**
* Returns the leader
* @param list
* the list
* @return the leader
*/
private SystemNode findLeader(final List<SystemNode> list) {
for (SystemNode systemNode : list) {
if (systemNode.getIsLeader()) {
return systemNode;
}
}
return null;
}
@Override
public boolean isLeader() {
final SystemNode node = systemNodeRepository.findByIp(ip);
return node != null && node.getIsLeader();
}
@Override
public void onApplicationEvent(final ApplicationEvent applicationEvent) {
try {
ip = InetAddress.getLocalHost().getHostAddress();
} catch (Exception e) {
throw new RuntimeException(e);
}
if (applicationEvent instanceof ContextRefreshedEvent) {
pingNode();
}
}
/**
* Creates the node
*/
private void createNode() {
final SystemNode node = new SystemNode();
node.setIp(ip);
node.setTimestamp(String.valueOf(System.currentTimeMillis()));
node.setCreatedAt(new Date());
node.setLastPing(new Date());
node.setIsLeader(CollectionUtils.isEmpty(systemNodeRepository.findAll()));
systemNodeRepository.save(node);
}
/**
* Updates the node
*/
private void updateNode(final SystemNode node) {
node.setLastPing(new Date());
systemNodeRepository.save(node);
}
/**
* Returns the alive nodes.
*
* @param list
* the list
* @return the alive nodes
*/
private List<SystemNode> filterAliveNodes(final List<SystemNode> list) {
int timeout = systemService.getSetting(SettingEnum.SYSTEM_CONFIGURATION_SYSTEM_NODE_ALIVE_TIMEOUT, Integer.class);
final List<SystemNode> finalList = new LinkedList<>();
for (SystemNode systemNode : list) {
if (!DateUtils.hasExpired(systemNode.getLastPing(), timeout)) {
finalList.add(systemNode);
}
}
if (CollectionUtils.isEmpty(finalList)) {
LOGGER.warn(MessageFormat.format(NO_ALIVE_NODES, list));
throw new RuntimeException(MessageFormat.format(NO_ALIVE_NODES, list));
}
return finalList;
}
/**
* Finds the node with the minimum timestamp.
*
* @param list
* the list
* @return the min node
*/
private SystemNode findMinNode(final List<SystemNode> list) {
SystemNode min = list.get(0);
for (SystemNode systemNode : list) {
if (systemNode.getTimestamp().compareTo(min.getTimestamp()) < 0) { // negative means "smaller"
min = systemNode;
}
}
return min;
}
/**
* Sets the leader flag.
*
* @param list
* the list
* @param value
* the value
*/
private void setLeaderFlag(final List<SystemNode> list, final Boolean value) {
for (SystemNode systemNode : list) {
systemNode.setIsLeader(value);
}
}
}
3. Ping the database to signal that you are alive:
@Override
@Scheduled(cron = "0 0/5 * * * ?")
public void executeSystemNodePing() {
systemNodeService.pingNode();
}
@Override
@Scheduled(cron = "0 0/10 * * * ?")
public void executeLeaderResolution() {
systemNodeService.checkLeaderShip();
}
4. You are ready! Just check whether you are the leader before executing the task:
@Override
@Scheduled(cron = "*/30 * * * * *")
public void executeFailedEmailTasks() {
if (checkIfLeader()) {
final List<EmailTask> list = emailTaskService.getFailedEmailTasks();
for (EmailTask emailTask : list) {
dispatchService.sendEmail(emailTask);
}
}
}
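The checkIfLeader() call above is not defined in the answer; presumably it just delegates to the service's isLeader(). A hedged sketch of that glue:
@Autowired
private SystemNodeService systemNodeService;

// Hypothetical bridge between the scheduled job and the election service above.
private boolean checkIfLeader() {
return systemNodeService.isLeader();
}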
Batch and scheduled jobs are typically run on their own standalone servers, away from customer-facing apps, so it is not a common requirement to include a job in an application that is expected to run on a cluster. Additionally, jobs in clustered environments typically do not need to worry about other instances of the same job running in parallel, which is another reason isolation of job instances is not a big requirement.
A simple solution would be to configure your jobs inside a Spring Profile. For example, if your current configuration is:
<beans>
<bean id="someBean" .../>
<task:scheduled-tasks>
<task:scheduled ref="someBean" method="execute" cron="0/60 * * * * *"/>
</task:scheduled-tasks>
</beans>
change it to:
<beans>
<beans profile="scheduled">
<bean id="someBean" .../>
<task:scheduled-tasks>
<task:scheduled ref="someBean" method="execute" cron="0/60 * * * * *"/>
</task:scheduled-tasks>
</beans>
</beans>
Then, launch your application on just one machine with the scheduled profile activated (-Dspring.profiles.active=scheduled).
If the primary server becomes unavailable for some reason, just launch another server with the profile enabled and things will continue to work just fine.
Things change if you want automatic failover for the jobs as well. Then, you will need to keep the job running on all servers and check synchronization through a common resource such as a database table, a clustered cache, a JMX variable, etc.
I'm using a database table to do the locking. Only one task at a time can insert into the table; the other one will get a DuplicateKeyException.
The insert and delete logic is handled by an aspect around the @Scheduled annotation.
I'm using Spring Boot 2.0.
@Component
@Aspect
public class SchedulerLock {
private static final Logger LOGGER = LoggerFactory.getLogger(SchedulerLock.class);
@Autowired
private JdbcTemplate jdbcTemplate;
@Around("execution(@org.springframework.scheduling.annotation.Scheduled * *(..))")
public Object lockTask(ProceedingJoinPoint joinPoint) throws Throwable {
String jobSignature = joinPoint.getSignature().toString();
try {
jdbcTemplate.update("INSERT INTO scheduler_lock (signature, date) VALUES (?, ?)", new Object[] {jobSignature, new Date()});
Object proceed = joinPoint.proceed();
jdbcTemplate.update("DELETE FROM scheduler_lock WHERE lock_signature = ?", new Object[] {jobSignature});
return proceed;
}catch (DuplicateKeyException e) {
LOGGER.warn("Job is currently locked: "+jobSignature);
return null;
}
}
}
@Component
public class EveryTenSecondJob {
@Scheduled(cron = "0/10 * * * * *")
public void taskExecution() {
System.out.println("Hello World");
}
}
CREATE TABLE scheduler_lock(
signature varchar(255) NOT NULL,
date datetime DEFAULT NULL,
PRIMARY KEY(signature)
);
dlock is designed to run tasks only once by using database indexes and constraints. You can simply do something like the following:
@Scheduled(cron = "30 30 3 * * *")
@TryLock(name = "executeMyTask", owner = SERVER_NAME, lockFor = THREE_MINUTES)
public void execute() {
}
See the article about using it.
You can use ZooKeeper here to elect the master instance, and only the master instance will run the scheduled job.
One implementation here is with an aspect and Apache Curator:
@SpringBootApplication
@EnableScheduling
public class Application {
private static final int PORT = 2181;
@Bean
public CuratorFramework curatorFramework() {
CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:" + PORT, new ExponentialBackoffRetry(1000, 3));
client.start();
return client;
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
Aspect class
@Aspect
@Component
public class LeaderAspect implements LeaderLatchListener{
private static final Logger log = LoggerFactory.getLogger(LeaderAspect.class);
private static final String ELECTION_ROOT = "/election";
private volatile boolean isLeader = false;
@Autowired
public LeaderAspect(CuratorFramework client) throws Exception {
LeaderLatch ll = new LeaderLatch(client , ELECTION_ROOT);
ll.addListener(this);
ll.start();
}
@Override
public void isLeader() {
isLeader = true;
log.info("Leadership granted.");
}
@Override
public void notLeader() {
isLeader = false;
log.info("Leadership revoked.");
}
@Around("@annotation(com.example.apache.curator.annotation.LeaderOnly)")
public void onlyExecuteForLeader(ProceedingJoinPoint joinPoint) {
if (!isLeader) {
log.debug("I'm not leader, skip leader-only tasks.");
return;
}
try {
log.debug("I'm leader, execute leader-only tasks.");
joinPoint.proceed();
} catch (Throwable ex) {
log.error(ex.getMessage());
}
}
}
The LeaderOnly annotation:
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface LeaderOnly {
}
Scheduled Task
@Component
public class HelloWorld {
private static final Logger log = LoggerFactory.getLogger(HelloWorld.class);
@LeaderOnly
@Scheduled(fixedRate = 1000L)
public void sayHello() {
log.info("Hello, world!");
}
}
I am using a different approach, without the need to set up a database for managing the lock between the nodes.
The component is called FencedLock and is provided by Hazelcast.
We're using it to prevent another node from performing some operation (not necessarily linked to scheduling), but it can also be used to share a lock between nodes for a schedule.
For doing this, we just set up two helper functions that can create different lock names:
@Scheduled(cron = "${cron.expression}")
public void executeMyScheduler(){
// This can also be a member of the class.
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Lock lock = hazelcastInstance.getCPSubsystem().getLock("mySchedulerName");
lock.lock();
try {
// do your schedule tasks here
} finally {
// don't forget to release lock whatever happens: end of task or any exceptions.
lock.unlock();
}
}
Alternatively, you can release the lock automatically after a delay: let's say your cron job runs every hour; you can set up an automatic release after e.g. 50 minutes, like this:
@Scheduled(cron = "${cron.expression}")
public void executeMyScheduler() throws InterruptedException { // tryLock(long, TimeUnit) can be interrupted
// This can also be a member of the class.
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Lock lock = hazelcastInstance.getCPSubsystem().getLock("mySchedulerName");
if (lock.tryLock(50, TimeUnit.MINUTES)) {
try {
// do your schedule tasks here
} finally {
// don't forget to release lock whatever happens: end of task or any exceptions.
lock.unlock();
}
} else {
// warning: lock has been released by timeout!
}
}
Note that this Hazelcast component works very well in a cloud-based environment (e.g. k8s clusters) and without needing to pay for an extra database.
Here is what you need to configure:
// We need to specify the name otherwise it can conflict with internal Hazelcast beans
#Bean("hazelcastInstance")
public HazelcastInstance hazelcastInstance() {
Config config = new Config();
config.setClusterName(hazelcastProperties.getGroup().getName());
NetworkConfig networkConfig = config.getNetworkConfig();
networkConfig.setPortAutoIncrement(false);
networkConfig.getJoin().getKubernetesConfig().setEnabled(hazelcastProperties.isNetworkEnabled())
.setProperty("service-dns", hazelcastProperties.getServiceDNS())
.setProperty("service-port", hazelcastProperties.getServicePort().toString());
config.setProperty("hazelcast.metrics.enabled", "false");
networkConfig.getJoin().getMulticastConfig().setEnabled(false);
return Hazelcast.newHazelcastInstance(config);
}
HazelcastProperties is the @ConfigurationProperties object mapped to the properties.
For local testing you can just disable the network config by using the properties in your local profile:
hazelcast:
network-enabled: false
service-port: 5701
group:
name: your-hazelcast-group-name
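A hedged sketch of what that HazelcastProperties class could look like, inferred from the getters used in the bean definition above (all names are assumptions):
@ConfigurationProperties(prefix = "hazelcast")
public class HazelcastProperties {
private boolean networkEnabled; // hazelcast.network-enabled
private Integer servicePort; // hazelcast.service-port
private String serviceDNS;
private final Group group = new Group();
public boolean isNetworkEnabled() { return networkEnabled; }
public void setNetworkEnabled(boolean networkEnabled) { this.networkEnabled = networkEnabled; }
public Integer getServicePort() { return servicePort; }
public void setServicePort(Integer servicePort) { this.servicePort = servicePort; }
public String getServiceDNS() { return serviceDNS; }
public void setServiceDNS(String serviceDNS) { this.serviceDNS = serviceDNS; }
public Group getGroup() { return group; }
public static class Group {
private String name; // hazelcast.group.name
public String getName() { return name; }
public void setName(String name) { this.name = name; }
}
}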
You could use an embeddable scheduler like db-scheduler to accomplish this. It has persistent executions and uses a simple optimistic locking mechanism to guarantee execution by a single node.
Example code for how the use-case can be achieved:
RecurringTask<Void> recurring1 = Tasks.recurring("my-task-name", FixedDelay.of(Duration.ofSeconds(60)))
.execute((taskInstance, executionContext) -> {
System.out.println("Executing " + taskInstance.getTaskAndInstance());
});
final Scheduler scheduler = Scheduler
.create(dataSource)
.startTasks(recurring1)
.build();
scheduler.start();
I am using a free HTTP service called kJob-Manager. https://kjob-manager.ciesielski-systems.de/
The advantage is that you don't create a new table in your database, and you also don't need any database connection because it is just an HTTP request.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import org.apache.tomcat.util.json.JSONParser;
import org.apache.tomcat.util.json.ParseException;
import org.junit.jupiter.api.Test;
public class KJobManagerTest {
@Test
public void example() throws IOException, ParseException {
String data = "{\"token\":\"<API-Token>\"}";
URL url = new URL("https://kjob-manager.ciesielski-systems.de/api/ticket/<JOB-ID>");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestProperty("Content-Type", "application/json");
connection.setRequestMethod("POST");
connection.setDoOutput(true);
connection.getOutputStream().write(data.getBytes(StandardCharsets.UTF_8));
JSONParser jsonParser = new JSONParser(connection.getInputStream());
LinkedHashMap<String, LinkedHashMap<String, Object>> result = (LinkedHashMap<String, LinkedHashMap<String, Object>>) jsonParser.parse();
if ((boolean) result.get("ticket").get("open")) {
System.out.println("This replica could run the cronjob!");
} else {
System.out.println("This replica has nothing to do!");
}
}
}
The Spring context is not clustered, so managing a task in a distributed application is a little difficult. You need a system that supports JGroups to synchronize state and let one instance take priority in executing the action. Alternatively, you could use an EJB context to manage a clustered HA singleton service, as in a JBoss HA environment:
https://developers.redhat.com/quickstarts/eap/cluster-ha-singleton/?referrer=jbd
Or you could use a clustered cache and a lock resource shared between the services, where the first service to take the lock performs the action; or implement your own JGroups channel to let your services communicate and perform the action on one node.
