It appears that updates made through MongoOperations do not trigger the events in AbstractMongoEventListener.
This post indicates that this was at least the case in Nov 2014.
Is there currently any way to listen to update events like the one below? This seems to be quite a big omission if that is the case.
MongoTemplate.updateMulti()
Thanks!
This is no oversight. Events are designed around the lifecycle of a domain object (or at least a document), which means they usually contain an instance of the domain object you're interested in.
Updates, on the other hand, are handled entirely in the database, so no documents or domain objects ever pass through MongoTemplate. Consider this basically the same way JPA @EntityListeners are only triggered for entities that are loaded into the persistence context in the first place, but not when a query is executed, as the execution of the query happens in the database.
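To make the distinction concrete, here is a minimal sketch of such a lifecycle listener (Person is just an illustrative domain class, and the event-object callback style shown is the one used by recent Spring Data MongoDB versions). Registered as a bean, it fires for save/insert operations, but nothing fires for updateFirst/updateMulti because no domain object ever passes through the template:
public class PersonEventListener extends AbstractMongoEventListener<Person> {

    @Override
    public void onBeforeSave(BeforeSaveEvent<Person> event) {
        // the event carries the domain object about to be persisted
        System.out.println("saving: " + event.getSource());
    }
}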
I know it's too late to answer this question, but I had the same situation with the MongoTemplate.findAndModify method, and the reason I needed events was for auditing purposes. Here is what I did.
1. Event publisher (a custom MongoTemplate whose overridden method publishes the event)
public class CustomMongoTemplate extends MongoTemplate {

    private ApplicationEventPublisher applicationEventPublisher;

    @Autowired
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.applicationEventPublisher = applicationEventPublisher;
    }

    //Default constructor here

    @Override
    public <T> T findAndModify(Query query, Update update, Class<T> entityClass) {
        T result = super.findAndModify(query, update, entityClass);
        // publish a custom event on findAndModify
        if (result != null && result instanceof Parent) { // all of my domain classes extend Parent
            this.applicationEventPublisher.publishEvent(
                new AfterFindAndModify(this, ((Parent) result).getId(), result.getClass().toString()));
        }
        return result;
    }
}
2. Application event
public class AfterFindAndModify extends ApplicationEvent {

    private DocumentAuditLog documentAuditLog;

    public AfterFindAndModify(Object source, String documentId, String documentObject) {
        super(source);
        this.documentAuditLog = new DocumentAuditLog(documentId, documentObject, new Date(), "UPDATE");
    }

    public DocumentAuditLog getDocumentAuditLog() {
        return documentAuditLog;
    }
}
3. Application listener
public class FindandUpdateMongoEventListner implements ApplicationListener<AfterFindAndModify> {

    @Autowired
    MongoOperations mongoOperations;

    @Override
    public void onApplicationEvent(AfterFindAndModify event) {
        mongoOperations.save(event.getDocumentAuditLog());
    }
}
and then
@Configuration
@EnableMongoRepositories(basePackages = "my.pkg")
@ComponentScan(basePackages = {"my.pkg"})
public class MongoConfig extends AbstractMongoConfiguration {

    //.....

    @Bean
    public FindandUpdateMongoEventListner findandUpdateMongoEventListner() {
        return new FindandUpdateMongoEventListner();
    }
}
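Also make sure the application actually uses the custom template. A minimal sketch of exposing it as the MongoTemplate bean (assuming CustomMongoTemplate declares a constructor taking a MongoDbFactory, which the "//Default constructor here" placeholder above stands for):
@Bean
public MongoTemplate mongoTemplate(MongoDbFactory mongoDbFactory) {
    // the event-publishing template is then used wherever MongoTemplate is injected
    return new CustomMongoTemplate(mongoDbFactory);
}
In a configuration class that extends AbstractMongoConfiguration you would typically override its mongoTemplate() method instead, so there is only one MongoTemplate bean.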
You can listen to database changes, even changes made completely outside your program, by using change streams (MongoDB 4.2 and newer).
(The code is in Kotlin; the same approach works in Java.)
@Autowired
private lateinit var op: MongoTemplate

@PostConstruct
fun listenOnExternalChanges() {
    Thread {
        op.getCollection("Item").watch().onEach {
            if (it.updateDescription.updatedFields.containsKey("name")) {
                println("name changed on a document: ${it.updateDescription.updatedFields["name"]}")
            }
        }
    }.start()
}
This code only works when replication is enabled. You can enable it even when you have a single node:
Add the following replica set details to the MongoDB configuration file (/etc/mongodb.conf, /usr/local/etc/mongod.conf, or C:\Program Files\MongoDB\Server\4.0\bin\mongod.cfg):
replication:
replSetName: "local"
Restart the MongoDB service, then open the mongo console and run this command:
rs.initiate()
Related
In my project I need to do some updates in the database and finish the transaction. Then I need to call an external application that should have access to the result of that transaction (via other endpoints). Currently I just have the @Transactional annotation on the endpoint method.
What is the common way to deal with such situations?
Use ApplicationEventPublisher to publish an event at the end of the @Transactional method. Implement a @TransactionalEventListener method to handle this event; by default it will only be called after the transaction commits successfully, so you don't need to worry about it executing accidentally if the transaction fails.
Code-wise, it looks like this:
@Service
public class MyService {

    @Autowired
    private ApplicationEventPublisher appEventPublisher;

    @Transactional
    public void doSomething() {
        Result result = doMyStuff();
        appEventPublisher.publishEvent(new StuffFinishedEvent(result));
    }
}

public class StuffFinishedEvent {

    private final Result result;

    public StuffFinishedEvent(Result result) {
        this.result = result;
    }

    public Result getResult() {
        return result;
    }
}
And the @TransactionalEventListener:
@Component
public class FinishStuffHandler {

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void handle(StuffFinishedEvent event) {
        // access the result here.....
        event.getResult();
    }
}
I would like to use the Oracle NoSQL database together with Spring Data. The aim is to access the data through Spring Data repositories and even use Spring Data REST on top of it.
So I think the spring-data-keyvalue project would help me to implement an adapter for Oracle NoSQL KV.
I tried to understand the documentation of spring-data-keyvalue (http://docs.spring.io/spring-data/keyvalue/docs/current/reference/html/#key-value.core-concepts), but didn't get the idea.
An example/tutorial about how to implement an adapter from scratch would be very helpful.
What I have is this configuration class where I provide a custom KeyValueAdapter. Now if I use CrudRepository methods, they go through my custom adapter.
@Configuration
@EnableMapRepositories
public class KeyValueConfig {

    @Bean
    public KeyValueOperations keyValueTemplate() {
        return new KeyValueTemplate(new OracleKeyValueAdapter());
    }
}
The OracleKeyValueAdapter is an implementation of KeyValueAdapter. I got this from the spring-data-keyvalue-redis project (https://github.com/christophstrobl/spring-data-keyvalue-redis/blob/master/src/main/java/org/springframework/data/keyvalue/redis/RedisKeyValueAdapter.java)
public class OracleKeyValueAdapter extends AbstractKeyValueAdapter {

    private KVStore store;

    public OracleKeyValueAdapter() {
        String storeName = "kvstore";
        String hostName = "localhost";
        String hostPort = "5000";
        store = KVStoreFactory.getStore(
            new KVStoreConfig(storeName, hostName + ":" + hostPort));
    }

    // Custom implementations:

    @Override
    public Object put(Serializable serializable, Object o, Serializable serializable1) {
        return null;
    }

    @Override
    public boolean contains(Serializable serializable, Serializable serializable1) {
        return false;
    }

    ...
Now I'm trying to implement this OracleKeyValueAdapter, but I don't know whether that even makes sense.
Can you help me?
You might want to start with how spring-data-keyvalue is implemented over Redis, the link here should be a good starting point - http://docs.spring.io/spring-data/data-keyvalue/docs/1.0.0.BUILD-SNAPSHOT/reference/redis.html
Let me know how that goes, I am interested in what you are trying to accomplish.
The following configuration should work (tested on v2.4.3):
@Configuration
@EnableMapRepositories
public class Configuration {

    @Bean
    public KeyValueOperations mapKeyValueTemplate() {
        return new KeyValueTemplate(keyValueAdapter());
    }

    @Bean
    public KeyValueAdapter keyValueAdapter() {
        return new YourKeyValueAdapter();
    }
}
The name (mapKeyValueTemplate) of the KeyValueOperations bean is important here, but it can also be changed as follows:
@Configuration
@EnableMapRepositories(keyValueTemplateRef = "foo")
public class Configuration {

    @Bean
    public KeyValueOperations foo() {
        return new KeyValueTemplate(keyValueAdapter());
    }

    @Bean
    public KeyValueAdapter keyValueAdapter() {
        return new YourKeyValueAdapter();
    }
}
I looked at the sources of the Spring KeyValue repository:
https://github.com/spring-projects/spring-data-keyvalue
I recommend understanding how Spring repositories work internally.
If you want to implement your own repository (CustomKeyValueRepository), you must create at least 6 classes:
EnableCustomKeyValueRepositories - annotation to enable this repository type in your project.
CustomKeyValueRepositoriesRegistrar - registrar for this annotation.
CustomKeyValueRepository - the repository itself.
CustomKeyValueRepositoryConfigurationExtension - implementation of the Spring ConfigurationExtension.
CustomKeyValueAdapter - implementation of a custom adapter for your data store.
CustomKeyValueConfiguration - configuration of the Adapter and Template beans.
I implemented an Infinispan KeyValue repository this way:
https://github.com/OsokinAlexander/infinispan-spring-repository
I also wrote an article about this:
https://habr.com/ru/post/535218/
In Chrome you can translate it into your language.
The simplest approach is to implement only the CustomKeyValueAdapter and the Configuration. In the Configuration you must redefine the Spring KeyValueAdapter bean and the KeyValueTemplate bean (it is very important that the bean name starts with a lowercase letter; that's the only way it worked for me):
@Configuration
public class CustomKeyValueConfiguration extends CachingConfigurerSupport {

    @Autowired
    private ApplicationContext applicationContext;

    @Bean
    public CustomKeyValueAdapter getKeyValueAdapter() {
        return new CustomKeyValueAdapter();
    }

    @Bean("keyValueTemplate")
    public KeyValueTemplate getKeyValueTemplate() {
        return new KeyValueTemplate(getKeyValueAdapter());
    }
}
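For reference, a rough in-memory sketch of what such an adapter can look like. This is only an illustration against the spring-data-keyvalue 2.x KeyValueAdapter signatures; a real CustomKeyValueAdapter would delegate each call to your data store instead of a nested ConcurrentHashMap:
public class CustomKeyValueAdapter extends AbstractKeyValueAdapter {

    // keyspace -> (id -> entity); stands in for the real data store
    private final Map<String, Map<Object, Object>> store = new ConcurrentHashMap<>();

    private Map<Object, Object> keyspace(String keyspace) {
        return store.computeIfAbsent(keyspace, k -> new ConcurrentHashMap<>());
    }

    @Override
    public Object put(Object id, Object item, String keyspace) {
        return keyspace(keyspace).put(id, item);
    }

    @Override
    public boolean contains(Object id, String keyspace) {
        return keyspace(keyspace).containsKey(id);
    }

    @Override
    public Object get(Object id, String keyspace) {
        return keyspace(keyspace).get(id);
    }

    @Override
    public Object delete(Object id, String keyspace) {
        return keyspace(keyspace).remove(id);
    }

    @Override
    public Iterable<?> getAllOf(String keyspace) {
        return keyspace(keyspace).values();
    }

    @Override
    public CloseableIterator<Map.Entry<Object, Object>> entries(String keyspace) {
        return new ForwardingCloseableIterator<>(keyspace(keyspace).entrySet().iterator());
    }

    @Override
    public void deleteAllOf(String keyspace) {
        store.remove(keyspace);
    }

    @Override
    public void clear() {
        store.clear();
    }

    @Override
    public long count(String keyspace) {
        return keyspace(keyspace).size();
    }

    @Override
    public void destroy() {
        store.clear();
    }
}
The remaining KeyValueAdapter methods (typed get/delete and the query-based find/count) are provided by AbstractKeyValueAdapter on top of these.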
I am trying to implement the proxy design pattern for caching services, as shown below.
public interface IProductService
{
int ProcessOrder(int orderId);
}
public class ProductService : IProductService
{
public int ProcessOrder(int orderId)
{
// implementation
}
}
public class CachedProductService : IProductService
{
    private readonly IProductService _realService;

    // simple in-memory cache, just for illustration
    private readonly Dictionary<int, int> _cache = new Dictionary<int, int>();

    public CachedProductService(IProductService realService)
    {
        _realService = realService;
    }

    public int ProcessOrder(int orderId)
    {
        int cached;
        if (_cache.TryGetValue(orderId, out cached))
            return cached;

        int result = _realService.ProcessOrder(orderId);
        _cache[orderId] = result;
        return result;
    }
}
How do I use an IoC container (Unity/Autofac) to create the real service and the cached service objects? I can register IProductService to either ProductService or CachedProductService, but CachedProductService in turn requires an IProductService object (ProductService) during creation.
I am trying to arrive at something like this:
The application will target IProductService and request IoC container for an instance and depending on the configuration of the application (if cache is enabled/disabled), the application will be provided with ProductService or CachedProductService instance.
Any ideas? Thanks.
Without a container your graph would look like this:
new CachedProductService(
new ProductService());
Here's an example using Simple Injector:
container.Register<IProductService, ProductService>();
// Add caching conditionally based on a config switch
if (ConfigurationManager.AppSettings["usecaching"] == "true")
container.RegisterDecorator<IProductService, CachedProductService>();
I have a listener with a @PostPersist method called "doAudit" to audit the create action. On debugging I see this method gets called and the audit record is created on the Java side. But when I check the DB I don't see the record.
public class AuditListener {

    @PostPersist
    public void doAudit(Object object) {
        AuditDao auditManager = AuditDaoImpl.getInstance();
        auditManager.logEvent("create", object);
    }
}

public interface AuditDao {

    @Transactional(propagation = Propagation.REQUIRED)
    public AuditEntity logEvent(String action, Object object);
}

@Component
public class AuditDaoImpl implements AuditDao {

    private static AuditDaoImpl me;

    public AuditDaoImpl() {
        me = this;
    }

    public static AuditDaoImpl getInstance() {
        return me;
    }

    @Autowired
    private AuditDao dao;

    @Transactional
    @Override
    public AuditEntity logEvent(String action, Object object) {
        AuditEntity act = new AuditEntity();
        act.setAction(action);
        act.setObject(object);
        dao.create(act);
        return act;
    }
}
I am using OpenJPA 2.0 as my ORM, deployed on a Karaf container, with PostgreSQL as my backend.
Add a debug breakpoint and check if the current stack-trace contains the TransactionInterceptor. If there's no such entry, the Spring transaction management is not properly configured and your DAOs don't use transactions at all.
JPA allows you to run queries without configuring transactions explicitly. For saving data, transactions are mandatory.
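If transaction management turns out not to be configured, a minimal sketch of enabling it in Java config (the class name TxConfig is just illustrative, and it assumes a Spring-managed EntityManagerFactory already exists):
@Configuration
@EnableTransactionManagement
public class TxConfig {

    // JPA transaction manager bound to the application's EntityManagerFactory
    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}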
In my application, I need to retrieve the lists of new, updated and removed entities for each transaction, like this:
// useful functionality
@Transactional
public void createNewBlogPost(int userId, String title, String text) {
    Post post = new Post();
    post.title = title; // "hello"
    post.text = text; // "there"
    postRepository.save(post);
    // more work with JPA repositories here
}

...

// gets called right after createNewBlogPost()
public void onTransaction(UnitOfWork uow) {
    List<?> newEntities = uow.getNewEntities();
    assertEquals(1, newEntities.size()); // 1 new entity

    Object firstNewEntity = newEntities.get(0);
    assertTrue(firstNewEntity instanceof Post); // this new entity is a Post

    Post newPost = (Post) firstNewEntity;
    assertEquals("hello", newPost.title);
    assertEquals("there", newPost.text);
}
The most relevant thing I managed to find was an audit functionality that Spring provides with annotations like #CreatedBy, #CreatedDate, #LastModifiedBy, #LastModifiedDate. Though it's technically very close, yet it's not exactly what I want to achieve.
Does Spring Data JPA provide a mechanism to retrieve data changes per every single transaction?
Since your use case is Hibernate and JPA specific, you should take a look at Hibernate Envers and Spring Data Envers. They might give you some ideas, but check the projects themselves; I'm not sure how actively they are maintained.
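To give an idea of what that looks like, here is a rough sketch (Post and PostRepository are illustrative; spring-data-envers additionally needs its repository factory bean, e.g. EnversRevisionRepositoryFactoryBean, registered via @EnableJpaRepositories):
// @Audited tells Envers to record every insert/update/delete of this entity in audit tables
@Entity
@Audited
public class Post {

    @Id
    @GeneratedValue
    private Long id;

    private String title;
    private String text;
}

// with spring-data-envers the repository can also expose the revision history
public interface PostRepository extends CrudRepository<Post, Long>,
        RevisionRepository<Post, Long, Integer> {
}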
I've spent some time on research and managed to find a relatively straightforward Hibernate-specific solution. There are basically 2 problems to solve:
Intercept data change events.
Do it on a per-request basis.
To address p.1, one can use EventListenerRegistry. Here's an example:
@Component
public class HibernateListenersConfigurer {

    @Autowired
    private EntityManagerFactory entityManagerFactory;

    @Autowired
    private HibernateListeners hibernateListeners;

    @PostConstruct
    public void init() {
        HibernateEntityManagerFactory hibernateEntityManagerFactory =
            (HibernateEntityManagerFactory) entityManagerFactory;
        SessionFactoryImpl sessionFactoryImpl =
            (SessionFactoryImpl) hibernateEntityManagerFactory.getSessionFactory();
        EventListenerRegistry eventListenerRegistry = sessionFactoryImpl
            .getServiceRegistry()
            .getService(EventListenerRegistry.class);
        eventListenerRegistry.appendListeners(EventType.PRE_INSERT, hibernateListeners);
        eventListenerRegistry.appendListeners(EventType.PRE_UPDATE, hibernateListeners);
        eventListenerRegistry.appendListeners(EventType.PRE_DELETE, hibernateListeners);
    }
}
The hibernateListeners object receives all these events and can do whatever is required to audit them. Here's an example:
@Component
public class HibernateListeners implements
        PreInsertEventListener,
        PreUpdateEventListener,
        PreDeleteEventListener {

    @Autowired
    private ChangeTracker changeTracker;

    @Override
    public boolean onPreInsert(PreInsertEvent event) {
        // the event has a bunch of relevant details
        changeTracker.trackChange(event);
        return false;
    }

    ...other listeners here...
Then, to address p.2, the changeTracker seen above is a request-scoped bean:
@Component
@Scope(value = "request", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class ChangeTracker {

    // a sort of "Unit of Work"
    private List<Change> changes = new ArrayList<Change>();

    public void trackChange(PreInsertEvent event) {
        changes.add(makeChangeFromEvent(event));
    }

    public void handleChanges() {
        // Do whatever needed :-)
    }
}
Then there are a few options for finally calling handleChanges() once request processing is complete: call it manually, use a HandlerInterceptor, use a filter, or use AOP. HandlerInterceptors and filters were not as good a fit, because in my case they were called after the response had already been sent to the client, which caused inconsistency between "business data" and "changes data". I eventually switched to AOP and it seems to work just fine; a rough sketch of such an aspect is shown below.
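A minimal sketch of the AOP variant (the pointcut expression and package name are illustrative, and aspect support such as @EnableAspectJAutoProxy is assumed to be enabled):
@Aspect
@Component
public class ChangeTrackingAspect {

    @Autowired
    private ChangeTracker changeTracker;

    // wrap the request-handling/service method and flush the collected changes after it returns
    @Around("execution(* com.example.service..*(..))")
    public Object track(ProceedingJoinPoint pjp) throws Throwable {
        Object result = pjp.proceed();
        changeTracker.handleChanges();
        return result;
    }
}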
Here's a playground: https://github.com/loki2302/spring-change-tracking-experiment