org.neo4j.graphdb.NotInTransactionException - spring

It's been a week of banging my head over this, but I still cannot seem to find a solution. I am using the spring-data-neo4j Maven artifact, and the following lines of code cause this issue:
/**
*
*/
@Autowired
private UserRepository userRepository;
@Transactional
public void addClassDescriptor(User user, ClassDescriptor classDescriptor) {
Project project = user.getDefaultProject();
ManagedFieldAccessorSet<ClassDescriptor> accessorSet = (ManagedFieldAccessorSet<ClassDescriptor>) project.getClassDescriptors();
accessorSet.add(classDescriptor);
/*
* Save the user object after updating the set
*/
userRepository.save(user);
}
When executing the method it gives the following error at
accessorSet.add(classDescriptor);
Stacktrace:
org.neo4j.graphdb.NotInTransactionException
at org.neo4j.kernel.impl.persistence.PersistenceManager.getResource(PersistenceManager.java:252)
at org.neo4j.kernel.impl.persistence.PersistenceManager.nodeCreate(PersistenceManager.java:155)
at org.neo4j.kernel.impl.core.NodeManager.createNode(NodeManager.java:270)
at org.neo4j.kernel.EmbeddedGraphDbImpl.createNode(EmbeddedGraphDbImpl.java:317)
at org.neo4j.kernel.EmbeddedGraphDatabase.createNode(EmbeddedGraphDatabase.java:103)
at org.springframework.data.neo4j.support.DelegatingGraphDatabase.createNode(DelegatingGraphDatabase.java:82)
at org.springframework.data.neo4j.support.mapping.EntityStateHandler.useOrCreateState(EntityStateHandler.java:115)
at org.springframework.data.neo4j.support.mapping.Neo4jEntityConverterImpl.write(Neo4jEntityConverterImpl.java:145)
at org.springframework.data.neo4j.support.mapping.Neo4jEntityPersister$CachedConverter.write(Neo4jEntityPersister.java:176)
at org.springframework.data.neo4j.support.mapping.Neo4jEntityPersister.persist(Neo4jEntityPersister.java:238)
at org.springframework.data.neo4j.support.mapping.Neo4jEntityPersister.persist(Neo4jEntityPersister.java:227)
at org.springframework.data.neo4j.support.Neo4jTemplate.save(Neo4jTemplate.java:295)
at org.springframework.data.neo4j.fieldaccess.AbstractNodeRelationshipFieldAccessor.getOrCreateState(AbstractNodeRelationshipFieldAccessor.java:97)
at org.springframework.data.neo4j.fieldaccess.AbstractNodeRelationshipFieldAccessor.createSetOfTargetNodes(AbstractNodeRelationshipFieldAccessor.java:89)
at org.springframework.data.neo4j.fieldaccess.OneToNRelationshipFieldAccessorFactory$OneToNRelationshipFieldAccessor.setValue(OneToNRelationshipFieldAccessorFactory.java:66)
at org.springframework.data.neo4j.fieldaccess.ManagedFieldAccessorSet.updateValue(ManagedFieldAccessorSet.java:90)
at org.springframework.data.neo4j.fieldaccess.ManagedFieldAccessorSet.update(ManagedFieldAccessorSet.java:78)
at org.springframework.data.neo4j.fieldaccess.ManagedFieldAccessorSet.add(ManagedFieldAccessorSet.java:104)
My entities are as follows (User.java):
@GraphId
private Long id;
@RelatedTo(elementClass = Project.class)
@Fetch
private Set<Project> projects;
(Project.java):
@GraphId
private Long id;
/**
*
*/
@RelatedTo(elementClass = ClassDescriptor.class)
@Fetch
private Set<ClassDescriptor> classDescriptors;
/**
*
*/
private boolean defaultProject;
Please help! Attached is the dependency tree.

I found that when adding relations using a collection operation, in addition to the @Transactional annotation you also need to obtain a reference to the GraphDatabaseService and explicitly begin and end a transaction:
@Autowired
private GraphDatabaseService graphDb;
@Transactional
public void addRelation() {
Transaction tx = graphDb.beginTx();
...
tx.success(); //or tx.failure();
tx.finish();
}

Abhi,
If it is happening consistently then I think your service is not a proper bean. How do you wire up your stuff?
Are you using simple or advanced mode (AspectJ)?
The stuff you have posted looks fine and simple, so I can't see why it wouldn't work...
Regards,
Lasse

You must protect the Transaction in a try/catch...
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
public class App {
static Node firstNode = null;
static GraphDatabaseService graphDb = null;
public static void main(String[] args) {
System.out.println("Start ...");
graphDb = ConnectNeo4j.initDB();
Transaction tx = graphDb.beginTx();
try {
firstNode = graphDb.createNode();
firstNode.setProperty("message", "Es Geht!");
System.out.println(firstNode.getProperty("message"));
tx.success();
} catch (Exception e) {
System.err.println(e.getMessage());
} finally {
tx.close();
}
}
}
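As a follow-up note, in Neo4j 2.x and later the embedded Transaction is AutoCloseable, so the same try/finally pattern can be written more compactly with try-with-resources. A minimal sketch, assuming the same ConnectNeo4j helper as above and a 2.x/3.x embedded database:
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

public class App {
    public static void main(String[] args) {
        GraphDatabaseService graphDb = ConnectNeo4j.initDB();
        // Transaction implements AutoCloseable, so close() runs automatically
        try (Transaction tx = graphDb.beginTx()) {
            Node firstNode = graphDb.createNode();
            firstNode.setProperty("message", "Es Geht!");
            System.out.println(firstNode.getProperty("message"));
            tx.success(); // mark for commit; without it the transaction rolls back on close()
        }
    }
}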

Related

Cannot Write Data to ElasticSearch with AbstractReactiveElasticsearchConfiguration

I am trying to write data to my local Elasticsearch Docker container (7.4.2). For simplicity I used the AbstractReactiveElasticsearchConfiguration provided by Spring, also overriding the entityMapper function. Then I constructed my repository extending ReactiveElasticsearchRepository.
In the end I used my autowired repository to saveAll() my collection of elements containing the data. However, Elasticsearch doesn't write any data. I also have a REST controller which starts my whole process and returns basically nothing, a DeferredResult<ResponseEntity<Void>>.
The REST method coming from my ApiDelegateImpl
@Override
public DeferredResult<ResponseEntity<Void>> openUsageExporterStartPost() {
final DeferredResult<ResponseEntity<Void>> deferredResult = new DeferredResult<>();
ForkJoinPool.commonPool().execute(() -> {
try {
openUsageExporterAdapter.startExport();
deferredResult.setResult(ResponseEntity.accepted().build());
} catch (Exception e) {
deferredResult.setErrorResult(e);
}
}
);
return deferredResult;
}
My Elasticsearch Configuration
@Configuration
public class ElasticSearchConfig extends AbstractReactiveElasticsearchConfiguration {
@Value("${spring.data.elasticsearch.client.reactive.endpoints}")
private String elasticSearchEndpoint;
@Bean
@Override
public EntityMapper entityMapper() {
final ElasticsearchEntityMapper entityMapper = new ElasticsearchEntityMapper(elasticsearchMappingContext(), new DefaultConversionService());
entityMapper.setConversions(elasticsearchCustomConversions());
return entityMapper;
}
@Override
public ReactiveElasticsearchClient reactiveElasticsearchClient() {
ClientConfiguration clientConfiguration = ClientConfiguration.builder()
.connectedTo(elasticSearchEndpoint)
.build();
return ReactiveRestClients.create(clientConfiguration);
}
}
My Repository
public interface OpenUsageRepository extends ReactiveElasticsearchRepository<OpenUsage, Long> {
}
My DTO
@Data
@Document(indexName = "open_usages", type = "open_usages")
@TypeAlias("OpenUsage")
public class OpenUsage {
#Field(name = "id")
#Id
private Long id;
......
}
My Adapter Implementation
@Autowired
private final OpenUsageRepository openUsageRepository;
...transform entity into OpenUsage...
public void doSomething(final List<OpenUsage> openUsages){
openUsageRepository.saveAll(openUsages)
}
And finally my IT test
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestPropertySource(locations = {"classpath:application-it.properties"})
@ContextConfiguration(initializers = OpenUsageExporterApplicationIT.Initializer.class)
class OpenUsageExporterApplicationIT {
@LocalServerPort
private int port;
private final static String STARTCALL = "http://localhost:%s/open-usage-exporter/start/";
@Container
private static ElasticsearchContainer container = new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:6.8.4").withExposedPorts(9200);
static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
@Override
public void initialize(final ConfigurableApplicationContext configurableApplicationContext) {
final List<String> pairs = new ArrayList<>();
pairs.add("spring.data.elasticsearch.client.reactive.endpoints=" + container.getContainerIpAddress() + ":" + container.getFirstMappedPort());
pairs.add("spring.elasticsearch.rest.uris=http://" + container.getContainerIpAddress() + ":" + container.getFirstMappedPort());
TestPropertyValues.of(pairs).applyTo(configurableApplicationContext);
}
}
@Test
void testExportToES() throws IOException, InterruptedException {
final List<OpenUsageEntity> openUsageEntities = dbPreparator.insertTestData();
assertTrue(openUsageEntities.size() > 0);
final String result = executeRestCall(STARTCALL);
// Awaitility here tells me nothing is in ElasticSearch :(
}
private String executeRestCall(final String urlTemplate) throws IOException {
final String url = String.format(urlTemplate, port);
final HttpUriRequest request = new HttpPost(url);
final HttpResponse response = HttpClientBuilder.create().build().execute(request);
// Get the result.
return EntityUtils.toString(response.getEntity());
}
}
public void doSomething(final List<OpenUsage> openUsages){
openUsageRepository.saveAll(openUsages)
}
This lacks a semicolon at the end, so it should not compile.
But I assume this is just a typo, and there is a semicolon in reality.
Anyway, saveAll() returns a Flux. This Flux is just a recipe for saving your data, and it is not 'executed' until subscribe() is called by someone (or something like blockLast()). You just throw that Flux away, so the saving never gets executed.
How to fix this? One option is to add .blockLast() call:
openUsageRepository.saveAll(openUsages).blockLast();
But this will save the data in a blocking way effectively defeating the reactivity.
Another option, if the code you are calling saveAll() from supports reactivity, is simply to return the Flux returned by saveAll(); but since your doSomething() has a void return type, this is doubtful.
It is also not clear how your startExport() connects to doSomething(). It looks like your calling code does not use any notion of reactivity, so a real solution would be either to rewrite the calling code to use reactivity (obtain a Publisher and subscribe() to it, then wait until the data arrives), or to revert to the blocking API (ElasticsearchRepository instead of ReactiveElasticsearchRepository).
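To make those two options concrete, here is a minimal sketch (the method and repository names are reused from the question; the calling code is assumed):
// Option 1: stay reactive - return the publisher instead of discarding it
public Flux<OpenUsage> doSomething(final List<OpenUsage> openUsages) {
    return openUsageRepository.saveAll(openUsages);
}
// ...and make sure something downstream actually subscribes, e.g. in the caller:
doSomething(openUsages).subscribe();

// Option 2: block until the save completes (simple, but loses the reactive benefit)
public void doSomething(final List<OpenUsage> openUsages) {
    openUsageRepository.saveAll(openUsages).blockLast();
}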

Spring Boot class cast exception in PostConstruct method

I am running a Spring Boot application with a PostConstruct method to populate a POJO before application initialization. This is to ensure that the database isn't hit by multiple requests to get the POJO content after it starts running.
I'm able to pull the data from the Oracle database through a Hibernate query and store it in my POJO. The problem arises when I try to access the stored data. The dataset contains a list of objects that contain strings and numbers. Just trying to print the description of the object at the top of the list raises a class cast exception. How should I mitigate this issue?
@Autowired
private TaskDescrBean taskBean;
@PostConstruct
public void loadDescriptions() {
TaskDataLoader taskData = new TaskDataLoader(taskBean.acquireDataSourceParams());
List<TaskDescription> taskList = taskData.getTaskDescription();
taskBean.setTaskDescriptionList(taskList);
System.out.println("Task description size: " + taskBean.getTaskDescriptionList().get(0).getTaskDescription());
}
My POJO class:
@Component
public class TaskDescrBean implements ApplicationContextAware {
@Resource
private Environment environment;
protected List<TaskDescription> taskDescriptionList;
public Properties acquireDataSourceParams() {
Properties dataSource = new Properties();
dataSource.setProperty("hibernate.connection.driver_class", environment.getProperty("spring.datasource.driver-class-name"));
dataSource.setProperty("hibernate.connection.url", environment.getProperty("spring.datasource.url"));
dataSource.setProperty("hibernate.connection.username", environment.getProperty("spring.datasource.username"));
dataSource.setProperty("hibernate.connection.password", environment.getProperty("spring.datasource.password"));
return dataSource;
}
public List<TaskDescription> getTaskDescriptionList() {
return taskDescriptionList;
}
public void setTaskDescriptionList(List<TaskDescription> taskDescriptionList) {
this.taskDescriptionList = taskDescriptionList;
}
private ApplicationContext applicationContext;
public ApplicationContext getApplicationContext() {
return applicationContext;
}
public void setApplicationContext(ApplicationContext applicationContext) {
this.applicationContext = applicationContext;
}
}
My DAO class:
public class TaskDataLoader {
private Session session;
private SessionFactory sessionFactory;
public TaskDataLoader(Properties connectionProperties) {
Configuration config = new Configuration().setProperties(connectionProperties);
config.addAnnotatedClass(TaskDescription.class);
sessionFactory = config.buildSessionFactory();
}
@SuppressWarnings("unchecked")
public List<TaskDescription> getTaskDescription() {
List<TaskDescription> taskList = null;
session = sessionFactory.openSession();
try {
String description = "from TaskDescription des";
Query taskDescriptionQuery = session.createQuery(description);
taskList = taskDescriptionQuery.list();
System.out.println("Task description fetched. " + taskList.getClass());
} catch (Exception e) {
e.printStackTrace();
} finally {
session.close();
}
return taskList;
}
}
TaskDescription Entity:
@Entity
@Table(name="TASK_DESCRIPTION")
@JsonIgnoreProperties
public class TaskDescription implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@Column(name="TASK_DESCRIPTION_ID")
private Long taskDescriptionId;
#Column(name="TASK_DESCRIPTION")
private String taskDescription;
public Long getTaskDescriptionId() {
return taskDescriptionId;
}
public void setTaskDescriptionId(Long taskDescriptionId) {
this.taskDescriptionId = taskDescriptionId;
}
public String getTaskDescription() {
return taskDescription;
}
public void setTaskDescription(String taskDescription) {
this.taskDescription = taskDescription;
}
}
StackTrace
Instead of sending the List in the return statement, I transformed it into a JSON object and sent its String representation, which I mapped back to the object after transforming it with mapper.readValue().
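For reference, a minimal sketch of that workaround with Jackson; the writeValueAsString/readValue pair is an assumption based on the mapper.readValue() mentioned above, and the property holding the JSON string is hypothetical:
// uses com.fasterxml.jackson.databind.ObjectMapper and com.fasterxml.jackson.core.type.TypeReference
ObjectMapper mapper = new ObjectMapper();
// serialize the result of the Hibernate query before storing it in the bean
String json = mapper.writeValueAsString(taskList);
taskBean.setTaskDescriptionJson(json); // hypothetical String property instead of the List
// map it back to plain objects wherever the data is needed
List<TaskDescription> descriptions =
        mapper.readValue(json, new TypeReference<List<TaskDescription>>() {});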

Spring Scheduled Task running in clustered environment

I am writing an application that has a cron job that executes every 60 seconds. The application is configured to scale when required onto multiple instances. I only want to execute the task on one instance every 60 seconds (on any node). Out of the box I cannot find a solution to this, and I am surprised it has not been asked multiple times before. I am using Spring 4.1.6.
<task:scheduled-tasks>
<task:scheduled ref="beanName" method="execute" cron="0/60 * * * * *"/>
</task:scheduled-tasks>
There is a ShedLock project that serves exactly this purpose. You just annotate tasks which should be locked when executed:
@Scheduled( ... )
@SchedulerLock(name = "scheduledTaskName")
public void scheduledTask() {
// do something
}
Configure Spring and a LockProvider:
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "10m")
class MySpringConfiguration {
...
@Bean
public LockProvider lockProvider(DataSource dataSource) {
return new JdbcTemplateLockProvider(dataSource);
}
...
}
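The JdbcTemplateLockProvider also expects a lock table in the shared database. The definition below reflects the schema documented by ShedLock for its JDBC providers; check the project README for the exact DDL matching your version:
CREATE TABLE shedlock(
name VARCHAR(64) NOT NULL,
lock_until TIMESTAMP NOT NULL,
locked_at TIMESTAMP NOT NULL,
locked_by VARCHAR(255) NOT NULL,
PRIMARY KEY (name)
);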
I think you have to use Quartz Clustering with JDBC-JobStore for this purpose.
There is another simple and robust way to safely execute a job in a cluster. You can base it on the database and execute the task only if the node is the "leader" in the cluster.
Also, when a node fails or shuts down in the cluster, another node becomes the leader.
All you have to do is create a "leader election" mechanism and check every time whether you are the leader:
@Scheduled(cron = "*/30 * * * * *")
public void executeFailedEmailTasks() {
if (checkIfLeader()) {
final List<EmailTask> list = emailTaskService.getFailedEmailTasks();
for (EmailTask emailTask : list) {
dispatchService.sendEmail(emailTask);
}
}
}
Follow these steps:
1. Define the object and table that holds one entry per node in the cluster:
@Entity(name = "SYS_NODE")
public class SystemNode {
/** The id. */
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
/** The timestamp. */
@Column(name = "TIMESTAMP")
private String timestamp;
/** The ip. */
@Column(name = "IP")
private String ip;
/** The last ping. */
@Column(name = "LAST_PING")
private Date lastPing;
/** The created at. */
@Column(name = "CREATED_AT")
private Date createdAt = new Date();
/** The leader flag. */
@Column(name = "IS_LEADER")
private Boolean isLeader = Boolean.FALSE;
public Long getId() {
return id;
}
public void setId(final Long id) {
this.id = id;
}
public String getTimestamp() {
return timestamp;
}
public void setTimestamp(final String timestamp) {
this.timestamp = timestamp;
}
public String getIp() {
return ip;
}
public void setIp(final String ip) {
this.ip = ip;
}
public Date getLastPing() {
return lastPing;
}
public void setLastPing(final Date lastPing) {
this.lastPing = lastPing;
}
public Date getCreatedAt() {
return createdAt;
}
public void setCreatedAt(final Date createdAt) {
this.createdAt = createdAt;
}
public Boolean getIsLeader() {
return isLeader;
}
public void setIsLeader(final Boolean isLeader) {
this.isLeader = isLeader;
}
@Override
public String toString() {
return "SystemNode{" +
"id=" + id +
", timestamp='" + timestamp + '\'' +
", ip='" + ip + '\'' +
", lastPing=" + lastPing +
", createdAt=" + createdAt +
", isLeader=" + isLeader +
'}';
}
}
2. Create the service that a) inserts the node in the database and b) checks for the leader:
@Service
@Transactional
public class SystemNodeServiceImpl implements SystemNodeService, ApplicationListener {
/** The logger. */
private static final Logger LOGGER = Logger.getLogger(SystemNodeService.class);
/** The constant NO_ALIVE_NODES. */
private static final String NO_ALIVE_NODES = "Not alive nodes found in list {0}";
/** The ip. */
private String ip;
/** The system service. */
private SystemService systemService;
/** The system node repository. */
private SystemNodeRepository systemNodeRepository;
@Autowired
public void setSystemService(final SystemService systemService) {
this.systemService = systemService;
}
@Autowired
public void setSystemNodeRepository(final SystemNodeRepository systemNodeRepository) {
this.systemNodeRepository = systemNodeRepository;
}
@Override
public void pingNode() {
final SystemNode node = systemNodeRepository.findByIp(ip);
if (node == null) {
createNode();
} else {
updateNode(node);
}
}
@Override
public void checkLeaderShip() {
final List<SystemNode> allList = systemNodeRepository.findAll();
final List<SystemNode> aliveList = filterAliveNodes(allList);
SystemNode leader = findLeader(allList);
if (leader != null && aliveList.contains(leader)) {
setLeaderFlag(allList, Boolean.FALSE);
leader.setIsLeader(Boolean.TRUE);
systemNodeRepository.save(allList);
} else {
final SystemNode node = findMinNode(aliveList);
setLeaderFlag(allList, Boolean.FALSE);
node.setIsLeader(Boolean.TRUE);
systemNodeRepository.save(allList);
}
}
/**
* Returns the leader.
* @param list
* the list
* @return the leader
*/
private SystemNode findLeader(final List<SystemNode> list) {
for (SystemNode systemNode : list) {
if (systemNode.getIsLeader()) {
return systemNode;
}
}
return null;
}
@Override
public boolean isLeader() {
final SystemNode node = systemNodeRepository.findByIp(ip);
return node != null && node.getIsLeader();
}
@Override
public void onApplicationEvent(final ApplicationEvent applicationEvent) {
try {
ip = InetAddress.getLocalHost().getHostAddress();
} catch (Exception e) {
throw new RuntimeException(e);
}
if (applicationEvent instanceof ContextRefreshedEvent) {
pingNode();
}
}
/**
* Creates the node
*/
private void createNode() {
final SystemNode node = new SystemNode();
node.setIp(ip);
node.setTimestamp(String.valueOf(System.currentTimeMillis()));
node.setCreatedAt(new Date());
node.setLastPing(new Date());
node.setIsLeader(CollectionUtils.isEmpty(systemNodeRepository.findAll()));
systemNodeRepository.save(node);
}
/**
* Updates the node
*/
private void updateNode(final SystemNode node) {
node.setLastPing(new Date());
systemNodeRepository.save(node);
}
/**
* Returns the alive nodes.
*
* @param list
* the list
* @return the alive nodes
*/
private List<SystemNode> filterAliveNodes(final List<SystemNode> list) {
int timeout = systemService.getSetting(SettingEnum.SYSTEM_CONFIGURATION_SYSTEM_NODE_ALIVE_TIMEOUT, Integer.class);
final List<SystemNode> finalList = new LinkedList<>();
for (SystemNode systemNode : list) {
if (!DateUtils.hasExpired(systemNode.getLastPing(), timeout)) {
finalList.add(systemNode);
}
}
if (CollectionUtils.isEmpty(finalList)) {
LOGGER.warn(MessageFormat.format(NO_ALIVE_NODES, list));
throw new RuntimeException(MessageFormat.format(NO_ALIVE_NODES, list));
}
return finalList;
}
/**
* Finds the min name node.
*
* @param list
* the list
* @return the min node
*/
private SystemNode findMinNode(final List<SystemNode> list) {
SystemNode min = list.get(0);
for (SystemNode systemNode : list) {
if (systemNode.getTimestamp().compareTo(min.getTimestamp()) < 0) {
min = systemNode;
}
}
return min;
}
/**
* Sets the leader flag.
*
* @param list
* the list
* @param value
* the value
*/
private void setLeaderFlag(final List<SystemNode> list, final Boolean value) {
for (SystemNode systemNode : list) {
systemNode.setIsLeader(value);
}
}
}
3. Ping the database periodically to signal that you are alive:
@Override
@Scheduled(cron = "0 0/5 * * * ?")
public void executeSystemNodePing() {
systemNodeService.pingNode();
}
@Override
@Scheduled(cron = "0 0/10 * * * ?")
public void executeLeaderResolution() {
systemNodeService.checkLeaderShip();
}
4. You are ready! Just check if you are the leader before executing the task:
@Override
@Scheduled(cron = "*/30 * * * * *")
public void executeFailedEmailTasks() {
if (checkIfLeader()) {
final List<EmailTask> list = emailTaskService.getFailedEmailTasks();
for (EmailTask emailTask : list) {
dispatchService.sendEmail(emailTask);
}
}
}
Batch and scheduled jobs are typically run on their own standalone servers, away from customer-facing apps, so it is not a common requirement to include a job in an application that is expected to run on a cluster. Additionally, jobs in clustered environments typically do not need to worry about other instances of the same job running in parallel, which is another reason why isolation of job instances is not a big requirement.
A simple solution would be to configure your jobs inside a Spring Profile. For example, if your current configuration is:
<beans>
<bean id="someBean" .../>
<task:scheduled-tasks>
<task:scheduled ref="someBean" method="execute" cron="0/60 * * * * *"/>
</task:scheduled-tasks>
</beans>
change it to:
<beans>
<beans profile="scheduled">
<bean id="someBean" .../>
<task:scheduled-tasks>
<task:scheduled ref="someBean" method="execute" cron="0/60 * * * * *"/>
</task:scheduled-tasks>
</beans>
</beans>
Then, launch your application on just one machine with the scheduled profile activated (-Dspring.profiles.active=scheduled).
If the primary server becomes unavailable for some reason, just launch another server with the profile enabled and things will continue to work just fine.
Things change if you want automatic failover for the jobs as well. Then, you will need to keep the job running on all servers and check synchronization through a common resource such as a database table, a clustered cache, a JMX variable, etc.
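If you prefer Java configuration over XML, roughly the same profile-gated setup could look like the sketch below (class and bean names are illustrative):
// Registered only when the "scheduled" profile is active, so only the instance
// started with -Dspring.profiles.active=scheduled executes the cron job.
@Profile("scheduled")
@Configuration
@EnableScheduling
public class ScheduledJobConfig {

    @Scheduled(cron = "0/60 * * * * *")
    public void execute() {
        // delegate to the actual job bean here
    }
}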
I'm using a database table to do the locking. Only one task at a time can do an insert into the table. The other one will get a DuplicateKeyException.
The insert and delete logic is handled by an aspect around the @Scheduled annotation.
I'm using Spring Boot 2.0.
@Component
@Aspect
public class SchedulerLock {
private static final Logger LOGGER = LoggerFactory.getLogger(SchedulerLock.class);
@Autowired
private JdbcTemplate jdbcTemplate;
@Around("execution(@org.springframework.scheduling.annotation.Scheduled * *(..))")
public Object lockTask(ProceedingJoinPoint joinPoint) throws Throwable {
String jobSignature = joinPoint.getSignature().toString();
try {
jdbcTemplate.update("INSERT INTO scheduler_lock (signature, date) VALUES (?, ?)", new Object[] {jobSignature, new Date()});
Object proceed = joinPoint.proceed();
jdbcTemplate.update("DELETE FROM scheduler_lock WHERE lock_signature = ?", new Object[] {jobSignature});
return proceed;
}catch (DuplicateKeyException e) {
LOGGER.warn("Job is currently locked: "+jobSignature);
return null;
}
}
}
@Component
public class EveryTenSecondJob {
@Scheduled(cron = "0/10 * * * * *")
public void taskExecution() {
System.out.println("Hello World");
}
}
CREATE TABLE scheduler_lock(
signature varchar(255) NOT NULL,
date datetime DEFAULT NULL,
PRIMARY KEY(signature)
);
dlock is designed to run tasks only once by using database indexes and constraints. You can simply do something like below.
@Scheduled(cron = "30 30 3 * * *")
@TryLock(name = "executeMyTask", owner = SERVER_NAME, lockFor = THREE_MINUTES)
public void execute() {
}
See the article about using it.
You can use ZooKeeper here to elect a master instance, and the master instance will be the only one to run the scheduled job.
One implementation here is with an Aspect and Apache Curator:
@SpringBootApplication
@EnableScheduling
public class Application {
private static final int PORT = 2181;
@Bean
public CuratorFramework curatorFramework() {
CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:" + PORT, new ExponentialBackoffRetry(1000, 3));
client.start();
return client;
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
Aspect class
@Aspect
@Component
public class LeaderAspect implements LeaderLatchListener{
private static final Logger log = LoggerFactory.getLogger(LeaderAspect.class);
private static final String ELECTION_ROOT = "/election";
private volatile boolean isLeader = false;
@Autowired
public LeaderAspect(CuratorFramework client) throws Exception {
LeaderLatch ll = new LeaderLatch(client , ELECTION_ROOT);
ll.addListener(this);
ll.start();
}
@Override
public void isLeader() {
isLeader = true;
log.info("Leadership granted.");
}
@Override
public void notLeader() {
isLeader = false;
log.info("Leadership revoked.");
}
#Around("#annotation(com.example.apache.curator.annotation.LeaderOnly)")
public void onlyExecuteForLeader(ProceedingJoinPoint joinPoint) {
if (!isLeader) {
log.debug("I'm not leader, skip leader-only tasks.");
return;
}
try {
log.debug("I'm leader, execute leader-only tasks.");
joinPoint.proceed();
} catch (Throwable ex) {
log.error(ex.getMessage());
}
}
}
LeaderOnlyAnnotation
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface LeaderOnly {
}
Scheduled Task
@Component
public class HelloWorld {
private static final Logger log = LoggerFactory.getLogger(HelloWorld.class);
@LeaderOnly
@Scheduled(fixedRate = 1000L)
public void sayHello() {
log.info("Hello, world!");
}
}
I am using a different approach, without the need to set up a database for managing the lock between the nodes.
The component is called FencedLock and is provided by Hazelcast.
We're using it to prevent another node from performing some operation (not necessarily linked to a schedule), but it could also be used for sharing a lock between nodes for a schedule.
For doing this, we just set up two helper functions that can create different lock names:
@Scheduled(cron = "${cron.expression}")
public void executeMyScheduler(){
// This can also be a member of the class.
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Lock lock = hazelcastInstance.getCPSubsystem().getLock("mySchedulerName");
lock.lock();
try {
// do your schedule tasks here
} finally {
// don't forget to release lock whatever happens: end of task or any exceptions.
lock.unlock();
}
}
Alternatively you can also release automatically the lock after a delay: let say your cron job is running every hour, you can setup an automatic release after e.g. 50 minutes like this:
@Scheduled(cron = "${cron.expression}")
public void executeMyScheduler(){
// This can also be a member of the class.
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Lock lock = hazelcastInstance.getCPSubsystem().getLock("mySchedulerName");
if ( lock.tryLock ( 50, TimeUnit.MINUTES ) ) {
try {
// do your schedule tasks here
} finally {
// don't forget to release lock whatever happens: end of task or any exceptions.
lock.unlock();
}
} else {
// warning: lock has been released by timeout!
}
}
Note that this Hazelcast component works very well in a cloud-based environment (e.g. k8s clusters), without the need to pay for an extra database.
Here is what you need to configure:
// We need to specify the name otherwise it can conflict with internal Hazelcast beans
#Bean("hazelcastInstance")
public HazelcastInstance hazelcastInstance() {
Config config = new Config();
config.setClusterName(hazelcastProperties.getGroup().getName());
NetworkConfig networkConfig = config.getNetworkConfig();
networkConfig.setPortAutoIncrement(false);
networkConfig.getJoin().getKubernetesConfig().setEnabled(hazelcastProperties.isNetworkEnabled())
.setProperty("service-dns", hazelcastProperties.getServiceDNS())
.setProperty("service-port", hazelcastProperties.getServicePort().toString());
config.setProperty("hazelcast.metrics.enabled", "false");
networkConfig.getJoin().getMulticastConfig().setEnabled(false);
return Hazelcast.newHazelcastInstance(config);
}
HazelcastProperties here is the ConfigurationProperties object mapped to these properties.
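The HazelcastProperties class itself is not shown here; a hypothetical version, with field names inferred from the configuration code above and the YAML below, might look like:
@ConfigurationProperties(prefix = "hazelcast")
public class HazelcastProperties {

    private boolean networkEnabled;      // hazelcast.network-enabled
    private String serviceDNS;           // hazelcast.service-dns
    private Integer servicePort;         // hazelcast.service-port
    private Group group = new Group();   // hazelcast.group.name

    public static class Group {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public boolean isNetworkEnabled() { return networkEnabled; }
    public void setNetworkEnabled(boolean networkEnabled) { this.networkEnabled = networkEnabled; }
    public String getServiceDNS() { return serviceDNS; }
    public void setServiceDNS(String serviceDNS) { this.serviceDNS = serviceDNS; }
    public Integer getServicePort() { return servicePort; }
    public void setServicePort(Integer servicePort) { this.servicePort = servicePort; }
    public Group getGroup() { return group; }
    public void setGroup(Group group) { this.group = group; }
}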
For local testing you can just disable the network config by using the properties in your local profile:
hazelcast:
network-enabled: false
service-port: 5701
group:
name: your-hazelcast-group-name
You could use an embeddable scheduler like db-scheduler to accomplish this. It has persistent executions and uses a simple optimistic locking mechanism to guarantee execution by a single node.
Example code for how the use-case can be achieved:
RecurringTask<Void> recurring1 = Tasks.recurring("my-task-name", FixedDelay.of(Duration.ofSeconds(60)))
.execute((taskInstance, executionContext) -> {
System.out.println("Executing " + taskInstance.getTaskAndInstance());
});
final Scheduler scheduler = Scheduler
.create(dataSource)
.startTasks(recurring1)
.build();
scheduler.start();
I am using a free HTTP service called kJob-Manager. https://kjob-manager.ciesielski-systems.de/
The advantage is that you don't create a new table in your database and also don't need any database connection, because it is just an HTTP request.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import org.apache.tomcat.util.json.JSONParser;
import org.apache.tomcat.util.json.ParseException;
import org.junit.jupiter.api.Test;
public class KJobManagerTest {
@Test
public void example() throws IOException, ParseException {
String data = "{\"token\":\"<API-Token>\"}";
URL url = new URL("https://kjob-manager.ciesielski-systems.de/api/ticket/<JOB-ID>");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestProperty("Content-Type", "application/json");
connection.setRequestMethod("POST");
connection.setDoOutput(true);
connection.getOutputStream().write(data.getBytes(StandardCharsets.UTF_8));
JSONParser jsonParser = new JSONParser(connection.getInputStream());
LinkedHashMap<String, LinkedHashMap<String, Object>> result = (LinkedHashMap<String, LinkedHashMap<String, Object>>) jsonParser.parse();
if ((boolean) result.get("ticket").get("open")) {
System.out.println("This replica could run the cronjob!");
} else {
System.out.println("This replica has nothing to do!");
}
}
}
The Spring context is not clustered, so managing the task in a distributed application is a little bit difficult. You need to use a system supporting JGroups to synchronize the state and let one node take priority to execute the action. Alternatively, you could use an EJB context to manage a clustered HA singleton service, as in a JBoss HA environment:
https://developers.redhat.com/quickstarts/eap/cluster-ha-singleton/?referrer=jbd
Or you could use a clustered cache and a shared lock resource between the services, so that the first service to take the lock performs the action; or implement your own JGroups channel so your services can communicate and perform the action on only one node.

OptimisticLockException not thrown when version has changed in spring-boot project

Model structure:
@MappedSuperclass
public class BaseModel<K extends Comparable> implements Serializable, Comparable<Object> {
private static final long serialVersionUID = 1L;
@Id
private K id;
@Version
private Integer version;
// getter/setter
}
@Entity
public class MyEntity extends BaseModel<String> {
// some fields and it's getter/setter
}
Record in my database for my_entity:
id: 1
version: 1
...
Below is my update method:
void update(String id, Integer currentVersion, ....) {
MyEntity myEntity = myRepository.findOne(id);
myEntity.setVersion(currentVersion);
// other assignments
myRepository.save(myEntity);
}
Below is the query being fired when this method is invoked.
update my_entity set version=?, x=?, y=?, ...
where id=? and version=?
I am expecting an OptimisticLockException when the currentVersion passed to the above method is anything other than 1.
Can anybody help me understand why I am not getting an OptimisticLockException?
I am using spring-boot for my webmvc project.
Section 11.1.54 of the JPA specification notes that:
In general, fields or properties that are specified with the Version
annotation should not be updated by the application.
From experience, I can advise that some JPA providers (OpenJPA being one) actually throw an exception should you try to manually update the version field.
While not strictly an answer to your question, you can re-factor as below to ensure both portability between JPA providers and strict compliance with the JPA specification:
public void update(String id, Integer currentVersion) throws MyWrappedException {
MyEntity myEntity = myRepository.findOne(id);
if(!currentVersion.equals(myEntity.getVersion())){
throw new MyWrappedException();
}
myRepository.save(myEntity);
//still an issue here however: see below
}
Assuming your update(...) method is running in a transaction however you still have an issue with the above as section 3.4.5 of the JPA specification notes:
3.4.5 OptimisticLockException Provider implementations may defer writing to the database until the end of the transaction, when
consistent with the lock mode and flush mode settings in effect. In
this case, an optimistic lock check may not occur until commit time,
and the OptimisticLockException may be thrown in the "before
completion" phase of the commit. If the OptimisticLockException must
be caught or handled by the application, the flush method should be
used by the application to force the database writes to occur. This
will allow the application to catch and handle optimistic lock
exceptions.
Essentially then, 2 users can submit concurrent modifications for the same Entity. Both threads can pass the initial check however one will fail when the updates are flushed to the database which may be on transaction commit i.e. after your method has completed.
In order that you can catch and handle the OptimisticLock exception, your code should then look something like the below:
public void update(String id, Integer currentVersion) throws MyWrappedException {
MyEntity myEntity = myRepository.findOne(id);
if(!currentVersion.equals(myEntity.getVersion())){
throw new MyWrappedException();
}
myRepository.save(myEntity);
try{
myRepository.flush();
}
catch(OptimisticLockingFailureException ex){
throw new MyWrappedException();
}
}
Use evict before updating when using JPA. I did not get the @Version check to work either: the property was increased, but no exception was thrown when updating an object that had the wrong version property.
The only thing I have got to work is to first evict the object and then save it. Then the HibernateOptimisticLockingException is thrown if the version properties do not match.
Set Hibernate's show_sql to 'true' to verify that the actual update SQL ends with "where id=? and version=?". If the object is not evicted first, the update statement only has "where id=?", and that will (for obvious reasons) not work.
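A minimal sketch of that evict-then-save approach using the JPA API (detach() is the JPA counterpart of Hibernate's Session.evict(); the repository and entity names are reused from the question, the injected EntityManager is assumed):
@PersistenceContext
private EntityManager entityManager;

public void update(String id, Integer currentVersion) {
    MyEntity myEntity = myRepository.findOne(id);
    entityManager.detach(myEntity);      // "evict": the entity is no longer managed
    myEntity.setVersion(currentVersion); // stale version supplied by the caller
    myRepository.save(myEntity);
    entityManager.flush();               // forces "update ... where id=? and version=?";
                                         // a stale version updates 0 rows and raises the
                                         // optimistic locking exception here
}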
Hibernate's optimistic locking works out of the box (you only need to put a @Version field on the entity):
@Entity
@Table(name = "product")
public class Product {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private Long quantity;
private Long likes;
@Version
private Long version;
public Product() {
}
//setter and getter
//equals and hashcode
}
repository
public interface ProductRepository extends JpaRepository<Product, Long> {}
service
@Service
public class ProductOptimisticLockingService {
private final ProductRepository productRepository;
public ProductOptimisticLockingService(ProductRepository productRepository) {
this.productRepository = productRepository;
}
@Transactional(readOnly = true)
public Product findById(Long id, String nameThread){
Product product =
productRepository
.findById(id)
.get();
System.out.printf(
"\n Select (%s) .... " +
"(id:) %d | (likes:) %d | (quantity:) %d | (version:) %d \n",
nameThread,
product.getId(),
product.getLikes(),
product.getQuantity(),
product.getVersion()
);
return product;
}
@Transactional(isolation = Isolation.READ_COMMITTED)
public void updateWithOptimisticLocking(Product product, String nameThread) {
try {
productRepository.save(product);
} catch (ObjectOptimisticLockingFailureException ex) {
System.out.printf(
"\n (%s) Another transaction is already working with a string with an ID: %d \n",
nameThread,
product.getId()
);
}
System.out.printf("\n--- Update has been performed (%s)---\n", nameThread);
}
}
test
@SpringBootTest
class ProductOptimisticLockingServiceTest {
@Autowired
private ProductOptimisticLockingService productService;
@Autowired
private ProductRepository productRepository;
@Test
void saveWithOptimisticLocking() {
/* The ID may be 1 or something else. You must pass the right ID into your methods and think about how to write your tests accordingly. */
Product product = new Product();
product.setLikes(7L);
product.setQuantity(5L);
productRepository.save(product);
ExecutorService executor = Executors.newFixedThreadPool(2);
Lock lockService = new ReentrantLock();
Runnable taskForAlice = makeTaskForAlice(lockService);
Runnable taskForBob = makeTaskForBob(lockService);
executor.submit(taskForAlice);
executor.submit(taskForBob);
executorServiceMethod(executor);
}
/*------ Alice-----*/
private Runnable makeTaskForAlice(Lock lockService){
return () -> {
System.out.println("Thread-1 - Alice");
Product product;
lockService.lock();
try{
product = productService
.findById(1L, "Thread-1 - Alice");
}finally {
lockService.unlock();
}
setPause(1000L); /*a pause is needed in order for the 2nd transaction to attempt
read the line from which the 1st transaction started working*/
lockService.lock();
try{
product.setQuantity(6L);
product.setLikes(7L);
update(product,"Thread-1 - Alice");
}finally {
lockService.unlock();
}
System.out.println("Thread-1 - Alice - end");
};
}
/*------ Bob-----*/
private Runnable makeTaskForBob(Lock lockService){
return () -> {
/*the pause makes it possible to start the transaction first
from Alice*/
setPause(50L);
System.out.println("Thread-2 - Bob");
Product product;
lockService.lock();
try{
product = findProduct("Thread-2 - Bob");
}finally {
lockService.unlock();
}
setPause(3000L); /*a pause is needed in order for the 1st transaction to update
the string that the 2nd transaction is trying to work with*/
lockService.lock();
try{
product.setQuantity(5L);
product.setLikes(10L);
update(product,"Thread-2 - Bob");
}finally {
lockService.unlock();
}
System.out.println("Thread-2 - Bob - end");
};
}
private void update(Product product, String nameThread){
productService
.updateWithOptimisticLocking(product, nameThread);
}
private Product findProduct(String nameThread){
return productService
.findById(1L, nameThread);
}
private void setPause(long timeOut){
try {
TimeUnit.MILLISECONDS.sleep(timeOut);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
private void executorServiceMethod(ExecutorService executor){
try {
executor.awaitTermination(10L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
executor.shutdown();
}
}

Update Logic for JPA Entity Fails when using DBUnit and Spring

I am currently using DBUnit in conjunction with Spring in order to unit test my application, but I have run into an issue where my update logic test always fails because a deadlock occurs on the database, and I cannot figure out why this is the case. Please note that I have been able to get around the issue by removing the method annotated with @After, which really isn't needed because I am using the @TransactionConfiguration annotation, but I'm concerned that I'm misunderstanding something regarding how the transaction processing works and thus am hoping someone can indicate why I always get the following exception when running my updateTerritory method.
java.sql.SQLTransactionRollbackException: A lock could not be obtained within the time requested
One thing that may be helpful to point out is that I am able to perform other actions like querying the database and inserting new records without any lock errors. In addition, I am using OpenJPA, and Spring is injecting the PersistenceUnit into my DAO. I'm guessing that mixing the usage of the PersistenceUnit and the direct use of the DataSource within my DBUnit setup code (testSetup and testTeardown) may be part of the issue. I'm currently using Derby as my database.
My Code is provided below:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "/applicationContext.xml")
@TransactionConfiguration(defaultRollback = true)
public class TerritoryZoneManagerTest {
@Autowired
private DataSource unitTestingDataSource;
@Autowired
private ITerritoryZoneDaoManager mgr;
@Before
public void testSetup() throws DatabaseUnitException, SQLException,
FileNotFoundException {
Connection con = DataSourceUtils.getConnection(unitTestingDataSource);
IDatabaseConnection dbUnitCon = new DatabaseConnection(con);
FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
IDataSet dataSet = builder
.build(new FileInputStream(
"./src/com.company.territoryzonelookup/dao/test/TerritoryZoneManagerTest.xml"));
try {
// NOTE: There is no need to use the DatabaseOperation.DELETE
// functionality because spring will automatically remove all
// inserted records after each test case is executed.
DatabaseOperation.REFRESH.execute(dbUnitCon, dataSet);
} finally {
DataSourceUtils.releaseConnection(con, unitTestingDataSource);
}
}
@After
public void testTeardown() throws DatabaseUnitException, SQLException,
FileNotFoundException {
Connection con = DataSourceUtils.getConnection(unitTestingDataSource);
IDatabaseConnection dbUnitCon = new DatabaseConnection(con);
FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
IDataSet dataSet = builder
.build(new FileInputStream(
"./src/com.company.territoryzonelookup/dao/test/TerritoryZoneManagerTest.xml"));
try {
// NOTE: There is no need to use the DatabaseOperation.DELETE
// functionality because spring will automatically remove all
// inserted records after each test case is executed.
DatabaseOperation.DELETE.execute(dbUnitCon, dataSet);
} finally {
DataSourceUtils.releaseConnection(con, unitTestingDataSource);
}
}
@Test
@Transactional
public void updateTerritory() {
TerritoryZone zone = new TerritoryZone();
int id = 1;
zone = mgr.getTerritory(id);
String newCity = "Congerville";
zone.setCity(newCity);
mgr.updateTerritory(zone);
zone = mgr.getTerritory(id);
Assert.assertEquals(newCity, zone.getCity());
}
}
The DAO object is provided below as well in case that is useful.
@Repository
public class TerritoryZoneDaoManager implements ITerritoryZoneDaoManager {
/*
@Autowired
private EntityManagerFactory emf;
*/
/*
* @PersistenceUnit EntityManagerFactory emf;
*
* @PersistenceContext private EntityManager getEntityManager(){ return
* emf.createEntityManager(); }
*/
@PersistenceContext
private EntityManager em;
private EntityManager getEntityManager() {
// return emf.createEntityManager();
return em;
}
/* (non-Javadoc)
* @see com.company.territoryzonelookup.dao.ITerritoryZoneManager#addTerritory(com.company.territoryzonelookup.dao.TerritoryZone)
*/
@Override
public TerritoryZone addTerritory(TerritoryZone territoryZone) {
EntityManager em = getEntityManager();
em.persist(territoryZone);
return territoryZone;
}
/* (non-Javadoc)
* @see com.company.territoryzonelookup.dao.ITerritoryZoneManager#getTerritory(int)
*/
@Override
public TerritoryZone getTerritory(int id) {
TerritoryZone obj = null;
Query query = getEntityManager().createNamedQuery("selectById");
query.setParameter("id", id);
obj = (TerritoryZone) query.getSingleResult();
return obj;
}
/* (non-Javadoc)
* @see com.company.territoryzonelookup.dao.ITerritoryZoneManager#updateTerritory(com.company.territoryzonelookup.dao.TerritoryZone)
*/
@Override
public TerritoryZone updateTerritory(TerritoryZone territoryZone){
getEntityManager().merge(territoryZone);
return territoryZone;
}
/* (non-Javadoc)
* @see com.company.territoryzonelookup.dao.ITerritoryZoneManager#getActiveTerritoriesByStateZipLob(java.lang.String, java.lang.String, java.util.Date, java.lang.String)
*/
@Override
public List<TerritoryZone> getActiveTerritoriesByStateZipLob(String stateCd, String zipCode, Date effectiveDate, String lobCd){
List<TerritoryZone> territoryList;
Query query = getEntityManager().createNamedQuery("selectActiveByZipStateLob");
query.setParameter("zipCode", zipCode);
query.setParameter("state", stateCd);
query.setParameter("lob",lobCd);
query.setParameter("effectiveDate", effectiveDate);
territoryList = (List<TerritoryZone>) query.getResultList();
return territoryList;
}
/* (non-Javadoc)
* @see com.company.territoryzonelookup.dao.ITerritoryZoneManager#deleteAll()
*/
@Override
public void deleteAll(){
Query query = getEntityManager().createNativeQuery("Delete from TerritoryZone");
query.executeUpdate();
}
/***
* the load method will remove all existing records from the database and then reload it using the data passed.
* @param terrList
*/
public void load(List<TerritoryZone> terrList){
deleteAll();
for (TerritoryZone terr:terrList){
addTerritory(terr);
}
}
}
Thanks in advance for your assistance.
Jeremy
jwmajors81
I cannot tell what's wrong with your unit testing code without more details.
I also used Spring unit tests and DBUnit for my HiMVC framework, a RAD framework based on Spring 3 and Hibernate. Here is the code of my superclass for unit testing:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:config/application-test-config.xml"})
@Transactional
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
public class HiMVCTransactionalUnitTest extends AbstractTransactionalJUnit4SpringContextTests{
@Autowired
protected DBUnitHelper dbHelper;
protected void loadFixture(){
try{
String fixtureFile=this.dbHelper.getDataFile();
if(fixtureFile==null){
fixtureFile=this.getDefaultXMLFixtureFile();
this.dbHelper.setDataFile(fixtureFile);
}
if(this.dbHelper.isDataFileExisted()){
if(this.dbHelper.isMSSQL()){
HiMVCInsertIdentityOperation operation=new HiMVCInsertIdentityOperation(DatabaseOperation.CLEAN_INSERT);
operation.setInTransaction(true);
this.dbHelper.executeDBOperation(operation);
}else{
this.dbHelper.executeDBOperation(DatabaseOperation.CLEAN_INSERT);
}
}
}catch(Exception x){
x.printStackTrace();
}
}
...
}
I use the @Transactional annotation in the class declaration, and I also specify the transactionManager. I wrote a DBUnitHelper to wrap the DBUnit details of data loading.
Here is a unit test sample:
public class MyTest extends HiMVCTransactionalUnitTest{
@Before
public void setup(){
super.loadFixture();
}
//other testing methods
}
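The DBUnitHelper referenced above is the author's own wrapper and is not shown; a hypothetical, stripped-down version exposing the methods used by the superclass (getDataFile/setDataFile, isDataFileExisted, isMSSQL, executeDBOperation) could look roughly like this:
// imports omitted: the same DBUnit classes as in the question above, plus
// org.springframework.core.io.ClassPathResource and org.springframework.jdbc.datasource.DataSourceUtils
public class DBUnitHelper {

    private DataSource dataSource;
    private String dataFile; // classpath location of the XML fixture

    public String getDataFile() { return dataFile; }
    public void setDataFile(String dataFile) { this.dataFile = dataFile; }

    public boolean isDataFileExisted() {
        return dataFile != null && new ClassPathResource(dataFile).exists();
    }

    public boolean isMSSQL() {
        // decide from configuration or the JDBC URL; stubbed here
        return false;
    }

    public void executeDBOperation(DatabaseOperation operation) throws Exception {
        Connection con = DataSourceUtils.getConnection(dataSource);
        try {
            IDatabaseConnection dbUnitCon = new DatabaseConnection(con);
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(new ClassPathResource(dataFile).getInputStream());
            operation.execute(dbUnitCon, dataSet);
        } finally {
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}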
Hope this code is helpful.
