Is it possible to skip any missed @Schedule events instead of catching them up? - websphere

Using WebSphere 9, I have a scheduled service, e.g.:
@Schedule(minute = "30", hour = "6-20", dayOfWeek = "Mon-Fri",
          dayOfMonth = "*", month = "*", year = "*", info = "TimerName", persistent = true)
public void scheduledTimeout(final Timer t)
{
    // do something
}
It's persistent so that it will only trigger on one of the nodes in the cluster.
If for some reason the timer runs long, or otherwise doesn't run, I don't want WebSphere to try again - I just want it to wait until the next trigger.
Is this possible?

I don't see any relevant settings for this in WAS v9; the EJB spec says it is the responsibility of the bean provider to handle any out-of-sequence or additional events. So you would have to implement that logic in your bean using the timer parameter.
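A minimal sketch of such a guard, on the assumption that while the container is catching up a backlog of missed expirations, getNextTimeout() does not yet point at a future expiration (a single missed expiration cannot be distinguished this way, so verify the behavior against your container before relying on it):

import javax.ejb.Schedule;
import javax.ejb.Timer;

@Schedule(minute = "30", hour = "6-20", dayOfWeek = "Mon-Fri",
          dayOfMonth = "*", month = "*", year = "*", info = "TimerName", persistent = true)
public void scheduledTimeout(final Timer t)
{
    // During a catch-up of several missed expirations, the timer's next
    // timeout is typically still in the past; treat those invocations as
    // missed events and skip the work.
    if (t.getNextTimeout().getTime() <= System.currentTimeMillis())
    {
        return; // skip the missed expiration and wait for the next trigger
    }
    // do something
}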
However, you could consider the WebSphere Liberty/Open Liberty server, which adds additional configuration (see the details here: https://github.com/OpenLiberty/open-liberty/issues/10563) and allows you, for example, to specify what to do with such events:
The new missedPersistentTimerAction element will have the following 2 options:

ALL - The timeout method is invoked immediately for all missed expirations. When multiple expirations have been missed for the same timer, each invocation will occur synchronously until all missed expirations have been processed, then the timer will resume with the next future expiration. ALL is the current behavior, and will be the default when failover is not enabled.

ONCE - The timeout method is invoked once immediately. All other missed expirations are skipped and the timer will resume with the next future expiration. ONCE will be the default behavior when failover is enabled. This is the minimal level of support required by the specification.

When the timer runs on server start, calling getNextTimeout() will return the next timeout in the future, accounting for all the expirations that will be skipped, not the next timeout based on the missed expiration (i.e. not a time in the past).

Note: this does not apply to single-action timers. Single-action timers will always run once on server start and then be removed.
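On Open Liberty this surfaces as an attribute of the EJB timer service configuration in server.xml; roughly like the sketch below (element placement inferred from the linked issue, so verify against the Liberty docs for your version):

<ejbContainer>
    <timerService missedPersistentTimerAction="ONCE"/>
</ejbContainer>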

Related

Is There Any Way to Stop the Execution of Subsequent Event Listeners in Spring?

I have defined an ApplicationEvent and a series of listeners for it. The listeners are ordered via the Ordered interface.
In the middle of my first listener's execution there are business-level checks that determine whether the rest of the logic (in the subsequent listeners) should apply. If this check fails, none of the subsequent event listeners should be executed.
The business-level context is not available to the event publisher, hence I am not able to do the checks before publishing the event.
Solutions I can think of myself:
Throwing an unchecked exception. This is what I am currently doing, but it does not look clean.
Performing the check at the start of every subsequent listener. This wastes a lot of resources on repetitive checks and is error-prone, since new listeners (which may not implement the Ordered interface) may be added.
Making the first listener the only one that listens to this type of event and, after it processes it, publishing the event wrapped in another type. This seems like the way to go; I just want to understand whether there are better alternatives. A sketch of this option follows.
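For illustration, a minimal sketch of that last option (the event and listener names are hypothetical, not from the question): the first listener is the only consumer of the original event and republishes a wrapped event only when the business check passes.

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

class InitialEvent { /* original payload */ }

class ValidatedEvent {
    final InitialEvent source;
    ValidatedEvent(InitialEvent source) { this.source = source; }
}

@Component
class GatekeeperListener {

    private final ApplicationEventPublisher publisher;

    GatekeeperListener(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @EventListener
    public void onInitialEvent(InitialEvent event) {
        // Only this listener consumes InitialEvent; the remaining listeners
        // subscribe to ValidatedEvent instead.
        if (businessCheckPasses(event)) {
            publisher.publishEvent(new ValidatedEvent(event));
        }
        // If the check fails, nothing is republished and the chain stops here.
    }

    private boolean businessCheckPasses(InitialEvent event) {
        return true; // the business-level check goes here
    }
}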
Thank you!

When to session.commit() in a NiFi Processor

I am implementing a NiFi processor and have a couple of clarifications to make with respect to best practices:
session.getProvenanceReporter().modify(...) - should we emit the event immediately after every session.transfer()?
session.commit() - the documentation says that, after performing operations on flow files, either commit or rollback can be invoked.
Developer guide: https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html#process_session
The question is: what do I lose by not invoking these methods explicitly?
1) Yes, typically the provenance event is emitted right after transferring the flow file.
2) It depends on whether you are extending AbstractProcessor or AbstractSessionFactoryProcessor. AbstractProcessor calls commit or rollback for you, so you don't need to; AbstractSessionFactoryProcessor requires you to call them appropriately.
If you are extending AbstractSessionFactoryProcessor and never call commit, that session will eventually be garbage collected, rollback will be called, and all the operations performed by that session will be rolled back.
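A minimal sketch of the explicit pattern when extending AbstractSessionFactoryProcessor (the success relationship and the choice of modifyContent as the provenance event are illustrative assumptions):

import java.util.Collections;
import java.util.Set;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.ProcessSessionFactory;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class ExampleProcessor extends AbstractSessionFactoryProcessor {

    static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSessionFactory sessionFactory)
            throws ProcessException {
        ProcessSession session = sessionFactory.createSession();
        try {
            FlowFile flowFile = session.get();
            if (flowFile == null) {
                return;
            }
            // ... work on the flow file here ...
            session.transfer(flowFile, REL_SUCCESS);
            // emit the provenance event right after the transfer
            session.getProvenanceReporter().modifyContent(flowFile);
            // nothing is final until commit; a session that is never committed
            // is eventually garbage collected and rolled back
            session.commit();
        } catch (final Throwable t) {
            session.rollback();
            throw t;
        }
    }
}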
There is also an annotation, @SupportsBatching, which can be placed on a processor. When this annotation is present, the UI shows a slider on the processor's scheduling tab that indicates how many milliseconds' worth of framework operations like commit() can be batched together behind the scenes for increased throughput. If latency is more important, then leaving the slider at 0 milliseconds is appropriate, but the key here is that the user gets to decide this when building the flow and configuring the processor.

Aggregator behavior on server restart - spring integration

Premise ->
In Spring Integration, I have an aggregator with a message group which is incomplete. Before the group release strategy is met, the server is restarted.
Current behavior ->
All the messages posted to the aggregator go to the same message group, not a new one; since the group is not marked complete, messages keep flowing in.
Expected ->
If the server is restarted, the aggregator picks up the leftover messages from the message store, marks the already-persisted ones complete, and then caters to new ones.
Is my expectation incorrect? Can somebody guide me?
I think you can meet your requirements with the MessageGroupStoreReaper, which you would run just once on server startup, e.g. by catching ContextRefreshedEvent:
The MessageGroupStore maintains a list of these callbacks which it applies, on demand, to all messages whose timestamp is earlier than a time supplied as a parameter (see the registerMessageGroupExpiryCallback(..) and expireMessageGroups(..) methods above).
The expireMessageGroups method can be called with a timeout value. Any message older than the current time minus this value will be expired, and have the callbacks applied. Thus it is the user of the store that defines what is meant by message group "expiry".
http://docs.spring.io/spring-integration/reference/html/messaging-routing-chapter.html#reaper
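A minimal sketch of that approach (assuming the aggregator's MessageGroupStore is exposed as a bean and the aggregator has registered its group-expiry callback with it):

import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.integration.store.MessageGroupStore;
import org.springframework.integration.store.MessageGroupStoreReaper;
import org.springframework.stereotype.Component;

@Component
public class LeftoverGroupReaper {

    private final MessageGroupStore messageStore;

    public LeftoverGroupReaper(MessageGroupStore messageStore) {
        this.messageStore = messageStore;
    }

    @EventListener
    public void onContextRefreshed(ContextRefreshedEvent event) {
        MessageGroupStoreReaper reaper = new MessageGroupStoreReaper(messageStore);
        // timeout 0: expire every group persisted before this startup
        reaper.setTimeout(0);
        reaper.run();
    }
}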

Does <do-status> with level "retry" block any other event from being processed?

I have a NetIQ (Novell) IDM 4.0.1 driver. In a policy I have a <do-status> rule with level retry.
Does this retry block any other event from being processed?
From the logic of the application, the event for (A) cannot be processed until the object (B) is associated by the very same driver. Therefore I have added the retry rule on (A). However, it seems that the event for (B) is blocked while the event for (A) is waiting to be retried. If I use veto instead of retry for (A), then the event for (B) is processed regularly.
Is the behaviour specified somewhere?
A retry takes the top event in the queue and retries it every 'interval' (which is defined in an Engine Control Value and defaults to 30 seconds).
So yes, it blocks all following events until it completes and stops being a retry.
What you could do is much simpler: in the Input Transform policy set, look for the operation add-association, since that is when the object has been successfully added to the connected system.
Then do your rule B stuff.
Unless you mean two different objects A and B that are otherwise unrelated. If so, I would let object A's logic go through, and when you see object B come through, then do the work on object A that is needed.

spring integration: stop a flow based on a given a condition

Is it possible to stop the flow execution in SI based on a header/message value?
Thanks.
You can use a Control Bus to start and stop an inbound-adapter.
If you want to stop an existing flow mid-execution, I'm not aware of any standard ESB component that will enable you to do that. You could perhaps use a Channel Interceptor and lock the thread execution manually, but this approach would only be as granular as your message endpoints.
Also, if you find a way to interrupt the execution, be careful of any timeout values you set in your flow configuration. Otherwise you may find the flow will fail when you eventually resume it!
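For the Control Bus route, a minimal sketch using Java configuration (the channel and adapter bean names here are illustrative, not standard):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.ExpressionControlBusFactoryBean;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.stereotype.Component;

@Configuration
class ControlBusConfig {

    @Bean
    public MessageChannel controlChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "controlChannel")
    public ExpressionControlBusFactoryBean controlBus() {
        return new ExpressionControlBusFactoryBean();
    }
}

@Component
class FlowStopper {

    private final MessageChannel controlChannel;

    FlowStopper(MessageChannel controlChannel) {
        this.controlChannel = controlChannel;
    }

    public void stopInboundAdapter() {
        // "myInboundAdapter" is the (illustrative) bean name of the adapter
        controlChannel.send(new GenericMessage<>("@myInboundAdapter.stop()"));
    }
}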
