I have a custom ApplicationEvent and a series of listeners for it. The listeners are properly ordered with the Ordered interface.
During the execution of my first listener, there is a business-level check that determines whether the rest of the logic (in the subsequent listeners) should apply. If this check fails, none of the subsequent listeners should be executed.
The business-level context is not available to the event publisher, so I cannot perform the check before publishing the event.
Solutions I can think of:
Throwing an unchecked exception. This is what I am currently doing, but it does not feel clean.
Performing the check at the start of every subsequent listener. This wastes resources on repetitive checks and is error prone, since new listeners (possibly not implementing the Ordered interface) may be added.
Making the first listener the only one that listens to this event type and, after it processes the event, having it publish the payload wrapped in another event type (a rough sketch of this follows). This seems like the way to go, but I want to understand whether there are better alternatives.
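Roughly, a sketch of that third option using Spring's annotation-driven events (the event names and the businessCheckPasses helper are made up for illustration):

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Original event, published by code that has no access to the business context.
class OrderCreatedEvent {                     // hypothetical event name
    final long orderId;
    OrderCreatedEvent(long orderId) { this.orderId = orderId; }
}

// Wrapper event, published only after the business check passes.
class OrderValidatedEvent {                   // hypothetical event name
    final OrderCreatedEvent original;
    OrderValidatedEvent(OrderCreatedEvent original) { this.original = original; }
}

@Component
class ValidationListener {
    private final ApplicationEventPublisher publisher;

    ValidationListener(ApplicationEventPublisher publisher) { this.publisher = publisher; }

    @EventListener
    public void on(OrderCreatedEvent event) {
        if (businessCheckPasses(event)) {      // hypothetical business-level check
            publisher.publishEvent(new OrderValidatedEvent(event));
        }
        // If the check fails, nothing is published and no downstream listener runs.
    }

    private boolean businessCheckPasses(OrderCreatedEvent event) { return true; }
}

// Every other listener subscribes to the wrapped event instead of the original one.
@Component
class DownstreamListener {
    @EventListener
    public void on(OrderValidatedEvent event) {
        // Runs only when the first listener decided to propagate.
    }
}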
Thank you!
I'm trying to wrap my head around Laravel's queued event listener vs jobs.
To me, it seems that both are very similar:
Both implement the ShouldQueue interface (for an event listener, this is optional).
Both implement the handle() method, and optionally failed(), to perform their respective tasks.
Essentially, to me, both are queued items that can be run asynchronously.
What I can distinguish so far is that jobs have more "advanced" features/configurations like the $timeout and $tries properties, and you can also delay the dispatch of a job (courtesy of the Illuminate\Bus\Queueable trait).
There are more, I'm sure, but those are the ones that stand out to me.
So, the question is: what's the actual difference between the two and, more importantly, when do you favor one over the other?
Good question. I will begin with how the Laravel docs explain it.
Events: Laravel's events provide a simple observer implementation, allowing you to subscribe and listen for various events that occur in your application. Events serve as a great way to decouple various aspects of your application, since a single event can have multiple listeners that do not depend on each other.
Whereas:
Jobs: Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
Essentially both push work onto queues and/or do whatever processing they are asked to do; the main difference, I would say, is how they are called.
Events wait to be triggered, whereas jobs are always dispatched explicitly.
The power of events is that we can register multiple listeners for a single event, and the event dispatcher will deliver the event to all of its registered listeners without us calling them explicitly, whereas with jobs we would have to dispatch each one explicitly.
In short, if you have a scenario where one occurrence should trigger multiple actions, an event helps. If it is a single action, a job is fine.
Event scenario: user signs up -> send email, dispatch complimentary swag, create a subdomain for the user profile (userxyz.site.com), etc.
Job scenario: user signs up -> send email.
In the exact context of the question, "event" here means a "queued event listener". Every Laravel event has a listener bound to it (the event listener); when we queue it (in the listener), it magically becomes a "queued event listener".
Well, they are very similar: the event listener takes the event as a parameter in its handle method, while jobs don't.
Events are good in situations where you want to decouple the triggering part from the action part, for instance when you have several modules in a project and you want one module to react to an event in another.
One limitation of events compared to jobs is job chaining. If you, for instance, trigger several events from a controller, you can't be sure that the worker processes them in sequence and that the first has completed before the next one starts.
In these (rare) situations I sometimes end up with (non-queued) listeners that in turn dispatch (queued) jobs that do the actual work (chained or unchained).
I have a Core Data nested-contexts setup: a main queue context for the UI and for saving to the SQLite persistent store, and a private queue context for syncing data with the web service.
My problem is that the syncing process can take a long time, and there is a chance that the object being synced gets deleted in the main queue context. When the private queue context is saved, it crashes with the "Core Data could not fulfill a fault" exception.
Do you have any suggestions on how to detect this issue, or how to configure the contexts to handle this case?
There is no magic behind nested contexts. They don't solve a lot of concurrency-related problems without additional work. Many people (you seem to be one of them) expect things to work out of the box that are not supposed to. Here is a little bit of background information:
If you create a child context using the private queue concurrency type, Core Data will create a queue for this context. To interact with objects registered with this context you have to use either performBlock: or performBlockAndWait:. The most important thing those two methods do is make sure the passed block is invoked on the queue of the context. Nothing more, nothing less.
Think about this for a moment in the context of a non-Core-Data-based application. If you want to do something in the background, you could create a new queue and schedule blocks to do work on that queue in the background. When the job is done, you want to communicate the result of the background operation to another layer of your app logic. What happens when, in the meantime, the user has deleted the object/data that the result of the background operation relates to? Basically the same thing: a crash.
What you are experiencing is not a Core Data specific problem. It is a problem you have as soon as you introduce concurrency. What you need is a policy, or some kind of contract, between your child and parent contexts. For example, before you delete an object from the root context, you should cancel all of the operations/blocks running on other queues and wait for the cancellation to finish before you actually delete the object.
When we call fsm.process_event(someEvent), is there a way to return true if the transition occurred and false if no_transition was called or an exception occurred?
Thanks
Seeing as no one has answered so far, I'll post my quite humble suggestion. You could try calling the current_state() method before and after calling fsm.process_event() and comparing the results. This, however, would not cover the case of self-transitions or internal transitions, and it is not something I would use if there are other alternatives (it's a hack at best).
If you are trying to catch the case of an event not being handled by any state and just propagating through, you could add one more bottom-layer superstate which reports events that reach it (i.e. that are ignored by all the states they propagated through).
I have had situations where I needed to know whether some event actually did something and when it did it (maybe it was deferred first and then executed). In that case I made my MSM post "ACK" messages to an outside queue; I'm not sure whether this applies to your problem.
In my humble experience exceptions and state machines don't mix very well; I usually either simply swallow them or turn them into some event, depending on the context. You should never allow your states (the underlying function objects) to throw.
At work, we have a huge framework and use events to send data from one part of it to another. I recently started a personal project, and I often think of using events to control the interactions between my objects.
For example, I have a Mixer class that plays sound effects, and I initially thought it should receive events in order to play a sound effect. Then I decided to simply make the class static and call
Mixer.playSfx(SoundEffect)
in my classes. I have a ton of examples like this one where I initially think of an implementation with events and then change my mind, telling myself it is too complex for no real benefit.
So when should I use events in a project? In which situations do events have a serious advantage over other techniques?
You generally use events to notify subscribers about some action or state change that occurred on the object. By using an event, you let different subscribers react differently, and by decoupling the subscriber (and its logic) from the event generator, the object becomes reusable.
In your Mixer example, I'd have events signal the start and end of playback of the sound effect. If I were to use this in a desktop application, I could use those events to enable/disable controls in the UI.
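Something along these lines, as a rough Java sketch (the SfxListener interface and the method names are invented for illustration):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical listener interface: each subscriber decides how to react.
interface SfxListener {
    void sfxStarted(String effectName);
    void sfxFinished(String effectName);
}

class Mixer {
    private final List<SfxListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(SfxListener listener) { listeners.add(listener); }

    void playSfx(String effectName) {
        listeners.forEach(l -> l.sfxStarted(effectName));
        // ... actually play the sound here ...
        listeners.forEach(l -> l.sfxFinished(effectName));
    }
}

// Example subscriber: a UI panel that disables a control while a sound is playing.
class PlaybackPanel implements SfxListener {
    public void sfxStarted(String effectName)  { /* disable the play button */ }
    public void sfxFinished(String effectName) { /* re-enable the play button */ }
}

The Mixer itself stays reusable: it does not know or care whether a UI panel, a logger, or nothing at all is listening.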
The difference between calling a subroutine and raising an event has to do with specification, election, cardinality and, ultimately, which side (the initiator or the receiver) has control.
With calls, the initiator elects to call the receiving routine, and the initiator specifies the receiver. This leads to many-to-one cardinality, as many callers may elect to call the same subroutine.
With events, on the other hand, the initiator raises an event that will be received by those routines that have elected to receive it. The receiver specifies which events it will receive from which initiators. This leads to one-to-many cardinality, as one event source can have many receivers.
So the decision between calls and events mostly comes down to whether the initiator determines the receiver or the receiver determines the initiator.
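As a rough Java sketch of those two directions of control (all class and method names here are invented for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class OrderService {
    // Call: the initiator specifies the receiver (many callers -> one subroutine).
    private final Logger logger = new Logger();

    // Event: receivers elect to subscribe; the initiator does not know who they are
    // (one event source -> many receivers).
    private final List<Consumer<String>> orderPlacedListeners = new ArrayList<>();

    void onOrderPlaced(Consumer<String> listener) { orderPlacedListeners.add(listener); }

    void placeOrder(String orderId) {
        logger.log("placing order " + orderId);               // direct call, receiver known
        orderPlacedListeners.forEach(l -> l.accept(orderId)); // event raised, receivers unknown
    }
}

class Logger {
    void log(String message) { System.out.println(message); }
}

class CallsVsEventsDemo {
    public static void main(String[] args) {
        OrderService service = new OrderService();
        service.onOrderPlaced(id -> System.out.println("audit trail for " + id));
        service.onOrderPlaced(id -> System.out.println("confirmation email for " + id));
        service.placeOrder("42");
    }
}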
It's a tradeoff between simplicity and reusability. Let's take the metaphor of sending an email:
If you know the recipients, and they are a finite number you can always determine, it is as simple as putting them in the "To" list and hitting the send button. It's simple, and it's what we use most of the time. This is like calling the function directly.
However, with a mailing list you don't know in advance how many users are going to subscribe. In that case, you set up a mailing list that users can subscribe to, and the email automatically goes out to all subscribed users. This is the event model.
Now, even though in both options emails are sent to users, you are the best judge of when to send email directly and when to use the mailing list. Apply the same judgement to your code, and I hope you will find your answer :)
Cheers,
Ajit.
I worked with a huge code base at my previous workplace and saw that using events can increase complexity quite a lot, often unnecessarily.
I often had to reverse engineer existing code in order to fix it or extend it.
In both cases, it is a lot easier to understand what is going on when you can simply read a list of function calls instead of just seeing an event being raised.
An event forces you to look up its subscribers in order to fully understand what is happening. That's not a problem with modern IDEs, but if you then encounter many functions which also raise events, it quickly becomes complex. I have encountered cases where it mattered in which order functions subscribed to an event, even though most languages don't even guarantee a calling order...
There are cases when it is a really good idea to use events. But before you start eventing, consider the alternative. It is probably easier to read and maintain.
A classic example of the use of events is a UI framework that provides elements like buttons.
You want the framework's "ButtonPressed()" handling to call some of your functions so that you can react to the user action.
The alternative to an event you can subscribe to would be, for example, a public bool "buttonPressed" that the UI framework exposes and that you regularly check for being true or false. This is, of course, very inefficient when there are hundreds of UI elements.
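As a rough Java sketch of that contrast (the Button class and onPressed method are invented; real UI frameworks differ):

import java.util.ArrayList;
import java.util.List;

class Button {
    // Event style: the framework calls you back when the user presses the button.
    private final List<Runnable> pressedListeners = new ArrayList<>();

    // Polling style: the state is merely exposed and you have to keep checking it.
    boolean buttonPressed = false;

    void onPressed(Runnable listener) { pressedListeners.add(listener); }

    // Called by the (hypothetical) framework when a click arrives.
    void simulateUserClick() {
        buttonPressed = true;
        pressedListeners.forEach(Runnable::run);
    }
}

class ButtonDemo {
    public static void main(String[] args) {
        Button save = new Button();
        save.onPressed(() -> System.out.println("saving..."));  // react only when it happens
        save.simulateUserClick();

        // Polling instead would mean looping over every element again and again:
        // while (true) { if (save.buttonPressed) { ... } }
    }
}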