I have a single member with a MapStore/MapLoader that reads from and writes to a database, and a client that registers an EntryAddedListener.
If the member is bounced, I see entry-added listeners being fired as the MapLoader reloads the data from the database.
This suggests to the client that new entries have been added, whereas in fact they are only being "added" because the node is bootstrapping.
Basically, I don't want these listeners to be fired as a result of the MapLoader bootstrapping the map; they should only fire afterwards.
How do I stop these MapLoader events firing off EntryAdded listeners?
There's no way to do it.
Loading an Entry using a MapLoader is basically adding an Entry to the Map.
What you could do is add these listeners after the map has been loaded.
If your map store's initial load mode is set to EAGER, it's easy to determine when loading has finished completely.
To wait until loading has finished, invoke the map.size() operation; once it returns, the map is fully populated.
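For example, a minimal sketch against the Hazelcast 3.x API (the map name "orders" and the String key/value types are made up):

import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class RegisterAfterLoad {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("orders");

        // With InitialLoadMode.EAGER, size() returns only once the
        // MapLoader has fully populated the map.
        map.size();

        // Registered after the load, so it never sees loaded entries.
        map.addEntryListener(
                (EntryAddedListener<String, String>)
                        event -> System.out.println("added after load: " + event.getKey()),
                true);
    }
}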
It is possible to distinguish ADD and LOAD events starting from Hazelcast 3.11; this feature was introduced as part of the hazelcast-13181 issue.
You may also want to check the Javadocs for EntryAddedListener and EntryLoadedListener for more details.
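On 3.11 and later the distinction looks roughly like this (a sketch; the class name and the String key/value types are illustrative):

import com.hazelcast.core.EntryEvent;
import com.hazelcast.map.listener.EntryAddedListener;
import com.hazelcast.map.listener.EntryLoadedListener;

// One listener that tells genuine additions apart from MapLoader loads.
public class AddVsLoadListener
        implements EntryAddedListener<String, String>, EntryLoadedListener<String, String> {

    @Override
    public void entryAdded(EntryEvent<String, String> event) {
        System.out.println("genuinely added: " + event.getKey());
    }

    @Override
    public void entryLoaded(EntryEvent<String, String> event) {
        // fired for entries brought in by the MapLoader -- safe to ignore
    }
}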
I have a custom ApplicationEvent and a series of listeners for it. The listeners are properly ordered via the Ordered interface.
During the execution of my first listener, business-level checks determine whether the rest of the logic (in the subsequent listeners) should apply. If this check fails, none of the subsequent listeners should be executed.
The business-level context is not available to the event publisher, so I cannot perform the check before publishing the event.
Solutions I can think of myself:
Throwing an unchecked exception. This is what I am currently doing, but it does not feel clean.
Performing the check at the start of every subsequent listener. This wastes resources on repetitive checks and is error-prone, since new listeners (which may not implement the Ordered interface) may be added.
Making the first listener the only one that listens to this event type and, after it has processed the event, publishing the event wrapped in another type (see the sketch below). This seems like the way to go, but I want to understand whether there are better alternatives.
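A rough sketch of that third option (the event names OrderPlaced/OrderValidated and the check are made up for illustration):

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// OrderPlaced is the original event; OrderValidated wraps it once the checks pass.
record OrderPlaced(String orderId) {}
record OrderValidated(OrderPlaced original) {}

@Component
class ValidatingListener {

    private final ApplicationEventPublisher publisher;

    ValidatingListener(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @EventListener
    public void on(OrderPlaced event) {
        if (!passesBusinessChecks(event)) {
            return; // swallow the event; no downstream listener runs
        }
        publisher.publishEvent(new OrderValidated(event));
    }

    private boolean passesBusinessChecks(OrderPlaced event) {
        return true; // placeholder for the real business-level check
    }
}

@Component
class DownstreamListener {

    @EventListener
    public void on(OrderValidated event) {
        // only reached when the first listener approved the event
    }
}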
Thank you!
We are using microservices, CQRS, and an event store (via the Node.js cqrs-domain library). Everything works like a charm, and the typical flow goes like:
1. REST
2. Service
3. Command validation
4. Command
5. Aggregate
6. Event
7. Event store (transactional data)
8. Return aggregate with aggregate ID
9. Store in the microservice's local DB (essentially the read DB)
10. Publish event to the queue
The problem with the flow above: since the transactional save (persistence to the event store) and the write to the microservice's read DB happen in different transaction contexts, how should I handle a failure at step 9, given that the event has already been persisted to the event store and the aggregate has already been updated?
Any suggestions would be highly appreciated.
You retry it later.
The "book of record" is the event store. The downstream views (the "published events", the read models) are derived from the book of record. They are typically behind the book of record in time (eventual consistency) and are not typically synchronized with each other.
So you might have, at some point in time, 105 events written to the book of record, but only 100 published to the queue, and a representation in your service database constructed from only 98.
Updating a view is typically done in one of two ways. You can, of course, start with a brand-new representation and replay all of the events into it as part of each update. Alternatively, you can track in the view's metadata how far along in the event history you have already gotten, and use that information to determine where the next read of the event history begins.
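A sketch of that second approach, with all names made up (EventStore/ViewStore stand in for whatever persistence is actually used):

import java.util.List;

record Event(long position, String payload) {}

interface EventStore {
    List<Event> readFrom(long position); // events at or after the given position
}

interface ViewStore {
    long checkpoint();   // last event position applied to the view
    void apply(Event e); // update the view and advance the checkpoint together
}

class Projector {
    private final EventStore events;
    private final ViewStore view;

    Projector(EventStore events, ViewStore view) {
        this.events = events;
        this.view = view;
    }

    void catchUp() {
        // resume from the checkpoint instead of replaying the whole history
        for (Event e : events.readFrom(view.checkpoint() + 1)) {
            view.apply(e); // ideally the view update and the checkpoint move in one transaction
        }
    }
}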
Inside your event store, you could track whether read-side replication was successful.
As soon as step 9 succeeds, you can flag the event as 'replicated'.
That way, you can introduce a component that watches for unreplicated events and re-triggers step 9. You could also track whether replication has failed multiple times.
Updating the read side (step 9) and flagging the event as replicated should happen consistently; you could use a saga pattern here.
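A sketch of the flag-plus-watcher idea (the types and methods are illustrative, not cqrs-domain APIs):

import java.util.List;

record StoredEvent(long id, String payload) {}

interface EventStore {
    List<StoredEvent> findUnreplicated(int maxFailures); // skip events that failed too often
    void markReplicated(long eventId);
    void recordFailure(long eventId);
}

interface ReadDb {
    void apply(StoredEvent event); // step 9: update the microservice's read model
}

class ReplicationWatcher {
    private final EventStore store;
    private final ReadDb readDb;

    ReplicationWatcher(EventStore store, ReadDb readDb) {
        this.store = store;
        this.readDb = readDb;
    }

    // Run periodically, e.g. from a scheduler.
    void sweep() {
        for (StoredEvent e : store.findUnreplicated(5)) {
            try {
                readDb.apply(e);
                store.markReplicated(e.id()); // flag only after step 9 succeeded
            } catch (RuntimeException ex) {
                store.recordFailure(e.id());  // counted, so the watcher can eventually give up
            }
        }
    }
}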
I think I now understand it better.
The aggregate would still be created; the answer is that all validations for any type of consistency should happen before the aggregate is constructed. What needs to be handled is a failure beyond the purview of the code, i.e. a failure while updating the microservice's read-side DB.
So in the ideal case the aggregate is created, but the associated event remains marked as undispatched until all the read-side dependencies are updated; if they are not, it stays undispatched and can be handled separately.
The event store still has all the events, and eventual consistency is maintained this way.
I've read a lot about Event::queue but I just can't get my head around it. So I have something like:
Event::listen('send_notification', function() {
    // send the notification
});
and in the controller I use
Event::fire('send_notification');
But because this takes some time before the user is sent elsewhere, I instead want to use
Event::queue('send_notification');
to fire the event after the user has been redirected, but I don't know how.
(In app/config/app.php I have the queue driver set to sync.)
EDIT:
A small note about firing the event: you can do all your work just like normal, put all the Event::flush() calls in a filter, and then call that filter through ->after() or afterFilter().
First, let me make something clear: Event::queue has nothing to do with the Queue facade or the queue driver in the config. It won't enable you to fire the event after the request has finished.
But you can delay the firing of an event and thereby "prepare" it.
The usage is pretty basic. Obviously you need one or more Event::listen calls (it works without them, but then it makes no sense at all):
Event::listen('send_notification', function($text){
    // send the notification
});
Now we queue the event:
Event::queue('send_notification', array('Hello World'));
And finally, fire it by calling flush:
Event::flush('send_notification');
In your comment you asked about flushing multiple events at once. Unfortunately, that's not really possible; you have to call flush() multiple times:
Event::flush('send_notification');
Event::flush('foo');
Event::flush('bar');
If you have a lot of events to flush, you might want to rethink your architecture and see whether you can combine some of them into one event with multiple listeners.
Flushing the event after the redirect
Event::queue can't be used to fire an event after the request lifecycle has ended. You have to use "real" queues for that.
I have a Core Data nested-contexts setup: a main-queue context for the UI and for saving to the SQLite persistent store, and a private-queue context for syncing data with the web service.
My problem is that the syncing process can take a long time, and there is a chance that the object being synced gets deleted in the main-queue context. When the private-queue context is saved, it crashes with a "Core Data could not fulfill a fault" exception.
Do you have any suggestions on how to address this issue, or on how to configure the contexts to handle this case?
There is no magic behind nested contexts. They don't solve many concurrency problems without additional work. Many people (you seem to be one of them) expect things to work out of the box that are not supposed to. Here is a little background:
If you create a child context using the private-queue concurrency type, Core Data will create a queue for this context. To interact with objects registered with this context, you have to use either performBlock: or performBlockAndWait:. The most important thing these two methods do is ensure the passed block is invoked on the context's queue. Nothing more, nothing less.
Think about this for a moment in the context of an application not based on Core Data. If you want to do something in the background, you could create a new queue and schedule blocks to do the work on that queue. When the job is done, you want to communicate the result of the background operation to another layer of your app. What happens if, in the meantime, the user deleted the object/data that the result of the background operation relates to? Basically the same thing: a crash.
What you are experiencing is not a Core Data specific problem; it is a problem you get as soon as you introduce concurrency. What you need is a policy or some kind of contract between your child and parent contexts. For example, before you delete an object from the root context, you should cancel all operations/blocks running on other queues and wait for the cancellation to finish before actually deleting the object.
I came across an interesting question regarding events in ActionScript: are events buffered and ordered?
I.e., in an SWFLoader example, I set up a timer (1 second) to run a function, and in that function I add a listener for the INIT event of the loaded SWF. Whether the timer handler runs first or the INIT event fires first depends on network conditions. Imagine the case where the INIT event fires first but the handler for it is only set up later: will the handler be invoked?
Another question: if the loaded SWF fires several events in quick succession, are the events handled in the order they were fired?
First question: no. If the INIT event fires first and there is no handler for it, the event is lost. So the best practice is to set up all listeners first, and only then start any loading operation.
Second question: yes, all events are handled in the same order in which they are fired.
I just wanted to add that you can change the order via the optional parameters.
By default it's first in, first served, but if you change the priorities, that order changes:
obj.addEventListener(type, listener, useCapture, priority, useWeakReference);
The higher the number, the higher the priority. So if I add these listeners:
obj.addEventListener(type, listener1, useCapture, 1, useWeakReference);
obj.addEventListener(type, listener2, useCapture, 2, useWeakReference);
the second listener would run before the first one. P.S. After adding a listener there is no way to change its priority without removing it and adding it back.