Which AmqpEvent or AmqpException to handle when an exclusive consumer fails - spring

I have two instances of the same application, running in different virtual machines. I want to grant exclusive access to a queue for the consumer of one of them, while invalidating the local cache that is used by the consumer on the other.
Now, I have figured out that I need to handle ListenerContainerConsumerFailedEvent, but I am guessing that implementing an ApplicationListener for this event alone does not ensure that the event I receive was caused by an exclusive consumer exception. I might want to check the Throwable of the event, or perform even further checks.
Which subclass of AmqpException or what further checks should I perform to ensure that the exception is received due to exclusive consumer access?

The logic in the listener container implementations is like this:
if (e.getCause() instanceof ShutdownSignalException
        && e.getCause().getMessage().contains("in exclusive use")) {
    getExclusiveConsumerExceptionLogger().log(logger,
            "Exclusive consumer failure", e.getCause());
    publishConsumerFailedEvent("Consumer raised exception, attempting restart", false, e);
}
So, we do indeed raise a ListenerContainerConsumerFailedEvent, and you can inspect the cause message the same way we do in the framework. On the other hand, you can just inject your own ConditionalExceptionLogger:
/**
 * Set a {@link ConditionalExceptionLogger} for logging exclusive consumer failures. The
 * default is to log such failures at WARN level.
 * @param exclusiveConsumerExceptionLogger the conditional exception logger.
 * @since 1.5
 */
public void setExclusiveConsumerExceptionLogger(ConditionalExceptionLogger exclusiveConsumerExceptionLogger) {
and catch such an exclusive-use situation there.
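For example, a minimal sketch of such a logger, assuming the ConditionalExceptionLogger contract of log(Log, String, Throwable) and a hypothetical invalidateLocalCache() hook in your application, might look like this:
import org.apache.commons.logging.Log;

import org.springframework.amqp.support.ConditionalExceptionLogger;

import com.rabbitmq.client.ShutdownSignalException;

public class CacheInvalidatingExclusiveConsumerLogger implements ConditionalExceptionLogger {

    @Override
    public void log(Log logger, String message, Throwable t) {
        // The container calls this logger when an exclusive consumer failure is detected;
        // the throwable is the ShutdownSignalException cause shown in the framework code above.
        if (t instanceof ShutdownSignalException
                && t.getMessage() != null
                && t.getMessage().contains("in exclusive use")) {
            invalidateLocalCache(); // hypothetical hook to drop the local cache
        }
        logger.warn(message, t);
    }

    private void invalidateLocalCache() {
        // application-specific cache invalidation
    }
}
It would then be injected via setExclusiveConsumerExceptionLogger() on the listener container.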
You can also consider using RabbitUtils.isExclusiveUseChannelClose(cause) in your code:
/**
 * Return true if the {@link ShutdownSignalException} reason is AMQP.Channel.Close
 * and the operation that failed was basicConsumer and the failure text contains
 * "exclusive".
 * @param sig the exception.
 * @return true if the declaration failed because of an exclusive queue.
 */
public static boolean isExclusiveUseChannelClose(ShutdownSignalException sig) {
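Putting that together, an ApplicationListener along these lines could filter the events; this is just a sketch, assuming the event's throwable carries the ShutdownSignalException as its cause (as in the framework code above), and onExclusiveConsumerFailure() is a hypothetical hook:
import org.springframework.amqp.rabbit.connection.RabbitUtils;
import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import com.rabbitmq.client.ShutdownSignalException;

@Component
public class ExclusiveConsumerFailureListener
        implements ApplicationListener<ListenerContainerConsumerFailedEvent> {

    @Override
    public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
        Throwable throwable = event.getThrowable();
        Throwable cause = throwable == null ? null : throwable.getCause();
        if (cause instanceof ShutdownSignalException
                && RabbitUtils.isExclusiveUseChannelClose((ShutdownSignalException) cause)) {
            onExclusiveConsumerFailure(); // hypothetical hook, e.g. invalidate the local cache
        }
    }

    private void onExclusiveConsumerFailure() {
        // application-specific handling
    }
}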

Related

How to see the types that flow in Spring Integration's IntegrationFlow

I am trying to understand what type is returned when I aggregate in Spring Integration, and that's pretty hard. I'm using Project Reactor and my code snippet is:
public FluxAggregatorMessageHandler randomIdsBatchAggregator() {
    FluxAggregatorMessageHandler f = new FluxAggregatorMessageHandler();
    f.setWindowTimespan(Duration.ofSeconds(5));
    f.setCombineFunction(messageFlux -> messageFlux
            .map(Message::getPayload)
            .collectList()
            .map(GenericMessage::new));
    return f;
}
@Bean
public IntegrationFlow dataPipeline() {
    return IntegrationFlows.from(somePublisher)
            // ----> The type Message<?> passed? Or Flux<Message<?>>?
            .handle(randomIdsBatchAggregator())
            // ----> What type has been returned from the aggregation?
            .handle(bla())
            .get();
}
Beyond understanding the types passed in this example, I want to know, in general, how I can tell which objects flow through an IntegrationFlow and what their types are.
IntegrationFlows.from(somePublisher)
This creates a FluxMessageChannel internally which subscribes to the provided Publisher. Every single event is emitted from this channel to its subscriber - your aggregator.
The FluxAggregatorMessageHandler produces whatever is explained in the setCombineFunction() JavaDocs:
/**
 * Configure a transformation {@link Function} to apply for a {@link Flux} window to emit.
 * Requires a {@link Mono} result with a {@link Message} as value as a combination result
 * of the incoming {@link Flux} for window.
 * By default a {@link Flux} for window is fully wrapped into a message with headers copied
 * from the first message in window. Such a {@link Flux} in the payload has to be subscribed
 * and consumed downstream.
 * @param combineFunction the {@link Function} to use for result windows transformation.
 */
public void setCombineFunction(Function<Flux<Message<?>>, Mono<Message<?>>> combineFunction) {
So, it is a Mono with a message, which is exactly what your .collectList() produces. That Mono is subscribed by the framework and its value is emitted as the reply message from the FluxAggregatorMessageHandler. Therefore your .handle(bla()) must expect a list of payloads, which is quite natural for an aggregator result.
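For illustration only, a downstream bla() handler consuming the aggregated list could look like the following sketch (the bean name bla and the logging are assumptions, not part of your flow):
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.MessageHandler;

@Configuration
public class DownstreamConfig {

    @Bean
    public MessageHandler bla() {
        // The reply payload from the aggregator is the List produced by collectList()
        return message -> {
            List<?> payloads = (List<?>) message.getPayload();
            payloads.forEach(payload -> System.out.println("Aggregated item: " + payload));
        };
    }
}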
See more in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#flux-aggregator

Apache Ignite server crashes after incorporating Auditing events

At the start it works fine, but after a certain time (1-2 hours) it crashes with the following exception in the server logs.
ERROR 1 --- [-ignite-server%] : JVM will be halted immediately due to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.IgniteDeploymentCheckedException: Failed to obtain deployment for class: com.event.audit.AuditEventListener$$Lambda$1484/0x0000000800a7ec40]]
public static void remoteListener(Ignite ignite) {
    // This optional local callback is called for each event notification
    // that passed remote predicate listener.
    IgniteBiPredicate<UUID, CacheEvent> locLsnr = new IgniteBiPredicate<UUID, CacheEvent>() {
        @Override
        public boolean apply(UUID nodeId, CacheEvent evt) {
            System.out.println("Listener caught an event");
            //--- My custom code to persist the event in another cache
            return true;
        }
    };
    IgnitePredicate<CacheEvent> remoteListener = cacheEvent -> {
        return true;
    };
    // Register event listeners on all nodes to listen for task events.
    UUID lsnrId = ignite.events(ignite.cluster()).remoteListen(locLsnr, remoteListener,
            EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_REMOVED);
}
As I understand it, you are trying to perform cache operations in the event listener:
//--- My custom code to persists the event in another cache
Event listeners are called under locks, and it is a bad idea to perform any other cache operations inside them. I suppose this could be the root cause of your issue.
Try to change your design: for example, you can put each caught event into a queue, then read that queue in another thread and save the data to the other cache.
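A minimal sketch of that approach, assuming a hypothetical auditCache and a simple event-to-string mapping:
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.lang.IgniteBiPredicate;

import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED;

public class AuditEventPersister {

    private final BlockingQueue<CacheEvent> queue = new LinkedBlockingQueue<>();

    public void start(Ignite ignite) {
        // The listener only enqueues the event; no cache operations happen under the event lock.
        IgniteBiPredicate<UUID, CacheEvent> locLsnr = (nodeId, evt) -> {
            queue.offer(evt);
            return true;
        };
        ignite.events(ignite.cluster()).remoteListen(locLsnr, evt -> true,
                EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_REMOVED);

        // A separate thread drains the queue and writes to the audit cache.
        IgniteCache<UUID, String> auditCache = ignite.getOrCreateCache("auditCache");
        Thread writer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    CacheEvent evt = queue.take();
                    auditCache.put(UUID.randomUUID(), evt.toString());
                }
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "audit-event-writer");
        writer.setDaemon(true);
        writer.start();
    }
}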

Adempiere 380 Webui doesn't show popup for process error message and on complete error messages

I am using Adempiere 380 WebUI. I would like to show an error message on any failure of an Adempiere process or on completion of any document.
The code which I have written to show an error popup works in the desktop application, but in the WebUI (JBoss) it only prints to the JBoss console.
I have accomplished this using AbstractADWindowPanel.java, where I check the process ID or table, execute particular code there, and if the error condition is true I display FDialog.ask("Print Message");.
Is there any generic way to do this so that it can be used for all classes?
Since processes can be fully automated and run on the server, your code needs to be aware of the GUI being used so that the correct dialog script can be called. There are three options: a server process (no dialog), Swing (ADialog) or ZK (FDialog). Generally, it's discouraged to use dialogs in this way. Certainly, you wouldn't want a server process to block waiting for user input. But, if you know what you're doing and really need to...
In the most recent releases, the process code includes a flag that tests which of these states it's in so it can display errors. An example of how this is used is with the Migration Script saves to XML format. In the process, the GUI info is used to open the correct file dialog in Swing or, in ZK, pass the request to the browser.
Here is a snippet of how it works from ProcessInfo.java in the current release
/**
 * Get the interface type this process is being run from. The interface type
 * can be used by the process to perform UI type actions from within the process
 * or in the {@link #postProcess(boolean)}
 * @return The InterfaceType which will be one of
 * <li> {@link #INTERFACE_TYPE_NOT_SET}
 * <li> {@link #INTERFACE_TYPE_SWING} or
 * <li> {@link #INTERFACE_TYPE_ZK}
 */
public String getInterfaceType() {
    if (interfaceType == null || interfaceType.isEmpty())
        interfaceType = INTERFACE_TYPE_NOT_SET;
    return interfaceType;
}
/**
 * Sets the Interface Type
 * @param uiType which must equal one of the following:
 * <li> {@link #INTERFACE_TYPE_NOT_SET} (default)
 * <li> {@link #INTERFACE_TYPE_SWING} or
 * <li> {@link #INTERFACE_TYPE_ZK}
 * The interface should be set by UI dialogs that start the process.
 * @throws IllegalArgumentException if the interfaceType is not recognized.
 */
public void setInterfaceType(String uiType) {
    // Limit value to known types
    if (uiType.equals(INTERFACE_TYPE_NOT_SET)
            || uiType.equals(INTERFACE_TYPE_ZK)
            || uiType.equals(INTERFACE_TYPE_SWING))
    {
        this.interfaceType = uiType;
    }
    else
    {
        throw new IllegalArgumentException("Unknown interface type " + uiType);
    }
}
The call to setInterfaceType() is made when the process is launched by the ProcessModalDialog in swing or the AbstractZKForm or ProcessPanel in zk.
For other processes, the value is set by the AbstractFormController, which is used by both interfaces. If the interface type is not set, the loadProcessInfo method will try to figure it out as follows:
// Determine the interface type being used. Its set explicitly in the ProcessInfo data
// but we will fallback to testing the stack trace in case it wasn't. Note that the
// stack trace test may not be accurate as it depends on the calling class names.
// TODO Also note that we are only testing for ZK or Swing. If another UI is added, we'll
// have to fix this logic.
if (processInfo == null || processInfo.getInterfaceType().equals(ProcessInfo.INTERFACE_TYPE_NOT_SET))
{
    // Need to know which interface is being used as the events may be different and the proper
    // listeners have to be activated. Test the calling stack trace for "webui".
    // If not found, assume the SWING interface
    isSwing = true;
    StackTraceElement[] stElements = Thread.currentThread().getStackTrace();
    for (int i = 1; i < stElements.length; i++) {
        StackTraceElement ste = stElements[i];
        if (ste.getClassName().contains("webui")
                || ste.getClassName().contains("zk.ui")) {
            isSwing = false;
            break;
        }
    }
    log.warning("Process Info is null or interface type is not set. Testing isSwing = " + isSwing);
}
else
{
    isSwing = processInfo.getInterfaceType().equals(ProcessInfo.INTERFACE_TYPE_SWING);
}
Finally, this can be used to control the dialogs within your process with a call similar to
if (ProcessInfo.INTERFACE_TYPE_SWING.equals(this.getProcessInfo().getInterfaceType()))
{
    ... Do something on a swing...
}
else ...

JmsMessageDrivenChannelAdapter start phase finishing observation [duplicate]

I have an integration test for my Spring Integration config, which consumes messages from a JMS topic with durable subscription. For testing, I am using ActiveMQ instead of Tibco EMS.
The issue I have is that I have to delay sending the first message to the endpoint using a sleep call at the beginning of our test method. Otherwise the message is dropped.
If I remove the setting for durable subscription and selector, then the first message can be sent right away without delay.
I'd like to get rid of the sleep, which is unreliable. Is there a way to check if the endpoint is completely set up before I send the message?
Below is the configuration.
Thanks for your help!
<int-jms:message-driven-channel-adapter
id="myConsumer" connection-factory="myCachedConnectionFactory"
destination="myTopic" channel="myChannel" error-channel="errorChannel"
pub-sub-domain="true" subscription-durable="true"
durable-subscription-name="testDurable"
selector="..."
transaction-manager="emsTransactionManager" auto-startup="false"/>
If you are using a clean embedded ActiveMQ for the test, the durability of the subscription is irrelevant until the subscription is established. So you have no choice but to wait until that happens.
You could avoid the sleep by sending a series of startup messages and only start the real test when the last one is received.
EDIT
I forgot that there is a method isRegisteredWithDestination() on the DefaultMessageListenerContainer.
Javadocs...
/**
 * Return whether at least one consumer has entered a fixed registration with the
 * target destination. This is particularly interesting for the pub-sub case where
 * it might be important to have an actual consumer registered that is guaranteed
 * not to miss any messages that are just about to be published.
 * <p>This method may be polled after a {@link #start()} call, until asynchronous
 * registration of consumers has happened which is when the method will start returning
 * {@code true} – provided that the listener container ever actually establishes
 * a fixed registration. It will then keep returning {@code true} until shutdown,
 * since the container will hold on to at least one consumer registration thereafter.
 * <p>Note that a listener container is not bound to having a fixed registration in
 * the first place. It may also keep recreating consumers for every invoker execution.
 * This particularly depends on the {@link #setCacheLevel cache level} setting:
 * only {@link #CACHE_CONSUMER} will lead to a fixed registration.
 */
We use it in some channel tests, where we get the container using reflection and then poll the method until we are subscribed to the topic.
/**
 * Blocks until the listener container has subscribed; if the container does not support
 * this test, or the caching mode is incompatible, true is returned. Otherwise blocks
 * until timeout milliseconds have passed, or the consumer has registered.
 * @see DefaultMessageListenerContainer#isRegisteredWithDestination()
 * @param timeout Timeout in milliseconds.
 * @return True if a subscriber has connected or the container/attributes does not support
 * the test. False if a valid container does not have a registered consumer within
 * timeout milliseconds.
 */
private static boolean waitUntilRegisteredWithDestination(SubscribableJmsChannel channel, long timeout) {
    AbstractMessageListenerContainer container =
            (AbstractMessageListenerContainer) new DirectFieldAccessor(channel).getPropertyValue("container");
    if (container instanceof DefaultMessageListenerContainer) {
        DefaultMessageListenerContainer listenerContainer =
                (DefaultMessageListenerContainer) container;
        if (listenerContainer.getCacheLevel() != DefaultMessageListenerContainer.CACHE_CONSUMER) {
            return true;
        }
        while (timeout > 0) {
            if (listenerContainer.isRegisteredWithDestination()) {
                return true;
            }
            try {
                Thread.sleep(100);
            }
            catch (InterruptedException e) { }
            timeout -= 100;
        }
        return false;
    }
    return true;
}
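For your XML configuration, where the container lives inside the JmsMessageDrivenEndpoint created by the adapter, a variant along these lines could be used in the test; the field name "listenerContainer" is an assumption about the endpoint's internals and may differ between versions:
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.integration.jms.JmsMessageDrivenEndpoint;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public final class JmsTestUtils {

    private JmsTestUtils() {
    }

    /**
     * Polls the adapter's DefaultMessageListenerContainer until it is registered
     * with the destination, or until the timeout (in milliseconds) elapses.
     */
    public static boolean waitUntilRegistered(JmsMessageDrivenEndpoint endpoint, long timeout)
            throws InterruptedException {
        // "listenerContainer" is the assumed name of the endpoint's container field
        DefaultMessageListenerContainer container = (DefaultMessageListenerContainer)
                new DirectFieldAccessor(endpoint).getPropertyValue("listenerContainer");
        while (timeout > 0) {
            if (container.isRegisteredWithDestination()) {
                return true;
            }
            Thread.sleep(100);
            timeout -= 100;
        }
        return false;
    }
}
Since your adapter has auto-startup="false", the test would start it, call waitUntilRegistered(endpoint, 10000), and only then send the first message.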

What is the difference between Rx.Observable subscribe and forEach

After creating an Observable like so
var source = Rx.Observable.create(function(observer) {...});
What is the difference between subscribe
source.subscribe(function(x) {});
and forEach
source.forEach(function(x) {});
In the ES7 spec, which RxJS 5.0 follows (but RxJS 4.0 does not), the two are NOT the same.
subscribe
public subscribe(observerOrNext: Observer | Function, error: Function, complete: Function): Subscription
Observable.subscribe is where you will do most of your true Observable handling. It returns a subscription token, which you can use to cancel your subscription. This is important when you do not know the duration of the events/sequence you have subscribed to, or if you may need to stop listening before a known duration.
forEach
public forEach(next: Function, PromiseCtor?: PromiseConstructor): Promise
Observable.forEach returns a promise that will either resolve or reject when the Observable completes or errors. It is intended to clarify situations where you are processing an observable sequence of bounded/finite duration in a more 'synchronous' manner, such as collating all the incoming values and then presenting once, by handling the promise.
Effectively, you can act on each value, as well as error and completion events either way. So the most significant functional difference is the inability to cancel a promise.
I just reviewed the latest code available; technically, forEach simply calls subscribe in RxScala, RxJS, and RxJava, so there isn't a big difference. They now have a return type that gives the user a way to stop a subscription or similar.
When I worked with an earlier RxJava version, subscribe returned a Subscription and forEach was just void, which is why you may see different answers due to the changes.
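As a minimal RxJava 1.x sketch of that earlier difference (the interval source is just an arbitrary example):
import java.util.concurrent.TimeUnit;

import rx.Observable;
import rx.Subscription;

public class SubscribeVsForEach {

    public static void main(String[] args) throws InterruptedException {
        Observable<Long> ticks = Observable.interval(100, TimeUnit.MILLISECONDS);

        // subscribe returns a Subscription, so the stream can be cancelled later
        Subscription subscription = ticks.subscribe(tick -> System.out.println("tick " + tick));
        Thread.sleep(500);
        subscription.unsubscribe();

        // forEach consumes a finite sequence; in RxJava 1.x it returns void,
        // so there is no handle to cancel with
        Observable.just(1, 2, 3).forEach(value -> System.out.println("value " + value));
    }
}
For reference, here are the relevant framework definitions (RxScala, RxJS, and RxJava):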
/**
 * Subscribes to the [[Observable]] and receives notifications for each element.
 *
 * Alias to `subscribe(T => Unit)`.
 *
 * $noDefaultScheduler
 *
 * @param onNext function to execute for each item.
 * @throws java.lang.IllegalArgumentException if `onNext` is null
 * @throws rx.exceptions.OnErrorNotImplementedException if the [[Observable]] tries to call `onError`
 * @since 0.19
 * @see ReactiveX operators documentation: Subscribe
 */
def foreach(onNext: T => Unit): Unit = {
  asJavaObservable.subscribe(onNext)
}

def subscribe(onNext: T => Unit): Subscription = {
  asJavaObservable.subscribe(scalaFunction1ProducingUnitToAction1(onNext))
}
/**
 * Subscribes an o to the observable sequence.
 * @param {Mixed} [oOrOnNext] The object that is to receive notifications or an action to invoke for each element in the observable sequence.
 * @param {Function} [onError] Action to invoke upon exceptional termination of the observable sequence.
 * @param {Function} [onCompleted] Action to invoke upon graceful termination of the observable sequence.
 * @returns {Disposable} A disposable handling the subscriptions and unsubscriptions.
 */
observableProto.subscribe = observableProto.forEach = function (oOrOnNext, onError, onCompleted) {
  return this._subscribe(typeof oOrOnNext === 'object' ?
    oOrOnNext :
    observerCreate(oOrOnNext, onError, onCompleted));
};
/**
 * Subscribes to the {@link Observable} and receives notifications for each element.
 * <p>
 * Alias to {@link #subscribe(Action1)}
 * <dl>
 * <dt><b>Scheduler:</b></dt>
 * <dd>{@code forEach} does not operate by default on a particular {@link Scheduler}.</dd>
 * </dl>
 *
 * @param onNext
 *            {@link Action1} to execute for each item.
 * @throws IllegalArgumentException
 *             if {@code onNext} is null
 * @throws OnErrorNotImplementedException
 *             if the Observable calls {@code onError}
 * @see ReactiveX operators documentation: Subscribe
 */
public final void forEach(final Action1<? super T> onNext) {
    subscribe(onNext);
}

public final Disposable forEach(Consumer<? super T> onNext) {
    return subscribe(onNext);
}
