MassTransit's PublisherConfirmation option no longer exists

I'm upgrading to MassTransit 4.x, on top of RabbitMQ. My application configuration was using PublisherConfirmation set to true, to ensure message delivery without the overhead of transactions. (At least, that's what the docs used to say.)
In MT 4.x, it appears that PublisherConfirmation no longer exists.
I haven't found any info (yet) on why this went away, or what replaces it moving forward. Essentially, I don't want fire-and-forget; if the message doesn't reach the queue I want an exception.
Any guidance would be appreciated.

In MT 4.x and later, PublisherConfirmation is configured on the host instead of the bus:
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.RabbitMqTransport/Configuration/IRabbitMqHostConfigurator.cs#L24
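For illustration, a minimal C# sketch of the host-level setting, assuming the standard RabbitMQ bus factory; the connection URI and credentials are placeholders:

    using System;
    using MassTransit;

    public static class BusConfig
    {
        public static IBusControl Create()
        {
            return Bus.Factory.CreateUsingRabbitMq(cfg =>
            {
                cfg.Host(new Uri("rabbitmq://localhost/"), h =>
                {
                    h.Username("guest");            // placeholder credentials
                    h.Password("guest");
                    h.PublisherConfirmation = true; // ask the broker to confirm publishes
                });
            });
        }
    }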

Related

Should I set Kafka auto.offset.reset to earliest or latest?

I have a Spring Kafka consumer application that lives in Kubernetes. Sometimes the application is recycled/restarted, and when the consumer comes back up, I want it to consume all the messages that were produced while it was down. I experimented with auto.offset.reset=earliest and it works as expected, but I noticed that the default value for Kafka is latest.
What are the risks if I use earliest? In what scenarios would I go with latest vs. earliest? I tried to find a post on here that explains it via a scenario, but most of them were copy-pasted from some documentation rather than a real-life example.
That property only applies if the broker has no committed offset for the group/topic/partition.
I.e., the first time the app runs, or if the offsets expire (with modern brokers, the default expiration is when the consumer has not run for 7 days - 7 days after the last consumer left the group). With older brokers, offsets were expired much earlier - even if the consumer was still running but hadn't received anything. The current behavior started with broker version 2.1, IIRC.
When there is already a committed offset, that is used for the start position, and this property is ignored.
For most use cases, earliest is used but, as you say, latest is the default, which means "new" consumers will start at the end and won't get any records already in the topic.
So the "risk" is, if you don't run your app for a week, you'll get any unexpired records again. You can increase offset.retention.minutes to avoid that.

ActiveMQ - Detecting Message Duplication

I am currently exploring de-duplication strategies in ActiveMQ. Artemis supports duplicate detection, but I'm not sure about ActiveMQ 5.
Is it possible to prevent a message from being placed on a queue if it currently exists on the queue in ActiveMQ 5?
Messages which were on the queue in the past, but are no longer on it, should be allowed back on the queue.
The underlying capability I am trying to achieve is flow control, in which multiple messages with the same value are not placed on the queue, so as to avoid duplicate processing.
Based on the documentation, I have tried using the _AMQ_DUPL_ID message property, but I am still experiencing duplication. I suspect this may not be supported in ActiveMQ 5 and am unsure what the alternative options are. I'm open to suggestions.
NOTE: The ActiveMQ instance being used is provided by Amazon MQ.
As you suspect, ActiveMQ 5.x doesn't support automatic duplicate detection; that is only supported in ActiveMQ Artemis. Note also that in Artemis, messages are not removed from the broker's duplicate ID cache when the message is consumed from the queue, because in most cases a duplicate sent after the message is consumed is still considered a duplicate.
You may be able to implement some kind of duplicate detection in a broker plugin, but I have no idea whether Amazon MQ supports adding custom plugins. It's more likely that you'll have to implement duplicate detection in the clients themselves.
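If you end up doing it in the clients, one possible shape (a sketch, not an ActiveMQ feature): have producers stamp an application-defined ID property on every message - "DUPL_ID" below is a made-up name - and keep a bounded cache of seen IDs on the consumer side:

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical consumer-side duplicate filter: remembers the last N
    // producer-assigned IDs and silently drops any message seen before.
    public class DedupListener implements MessageListener {
        private static final int MAX_IDS = 10_000;
        private final Map<String, Boolean> seen = new LinkedHashMap<>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > MAX_IDS; // bound the cache
            }
        };

        private synchronized boolean isDuplicate(String id) {
            return seen.put(id, Boolean.TRUE) != null;
        }

        @Override
        public void onMessage(Message message) {
            try {
                // "DUPL_ID" is an application-defined property the producer
                // must stamp on every message it sends.
                String id = message.getStringProperty("DUPL_ID");
                if (id != null && isDuplicate(id)) {
                    return; // already processed; skip the duplicate
                }
                // ... normal processing ...
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }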

What does Actor[akka://play/deadLetters].tell() mean in a New Relic trace of a Play Framework 2.0 web transaction?

I have a Play Framework 2.0 Java application hosted on Heroku, and I am monitoring it using the free-tier New Relic addon. For most of the transactions, a majority of the time is spent in what New Relic labels as Actor[akka://play/deadLetters].tell(). What is the application actually doing during this time?
As a simple description, Akka (http://en.wikipedia.org/wiki/Akka_(toolkit); http://akka.io/) is part of the Play framework as one of its integrations. Because the application on Play is instrumented for monitoring, HTTP requests made by Akka are traced as web transactions. In short, we measure it. As for what it is specifically doing, I recommend checking the Play documentation or the Akka links in the first sentence.
If you have a Java agent version older than 3.2.0, upgrading the Java agent will give you the following change:
akka.actor.ActorKilledException is now ignored by default
The ActorKilledException is commonly thrown in Play applications as a control mechanism in normally functioning applications. In previous versions, this exception inflated the reported error rate. These exceptions are now ignored by default. You can override the default ignore_errors list to provide your own exceptions or to omit the ActorKilledException.
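For reference, a sketch of what that override can look like in the Java agent's newrelic.yml - the exact key placement may vary by agent version, so treat the layout below as an assumption:

    # newrelic.yml (Java agent) - a sketch; key placement may differ by version
    error_collector:
      enabled: true
      # Comma-separated exception class names the agent should ignore.
      # Remove akka.actor.ActorKilledException from the list to have it
      # counted toward the error rate again.
      ignore_errors: akka.actor.ActorKilledException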
Let us know if this information is helpful or if you need additional assistance.
Jeanie Swan
New Relic Support
I'm not very familiar with how New Relic collects data; however, deadLetters is a special Actor that receives "all messages that were sent to a dead (or non-existent) Actor". You can read more about dead letters in the official docs.
For example, you can subscribe to these dead letters and print them, which should give you enough information to track down their source and fix it. A typical case where many dead letters show up is when messages keep being sent to an Actor that has already stopped - something you should be able to detect once you print the dead letters.
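For illustration, a minimal Java sketch using the modern Akka classic API (Play 2.0 shipped a much older Akka, where the equivalent subscriber would be an UntypedActor, so treat this as a sketch rather than drop-in code; the system name is a placeholder):

    import akka.actor.AbstractActor;
    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.DeadLetter;
    import akka.actor.Props;

    // Logs every DeadLetter published on the system's event stream.
    public class DeadLetterLogger extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                .match(DeadLetter.class, dl -> System.out.printf(
                    "DeadLetter: %s from %s to %s%n",
                    dl.message(), dl.sender(), dl.recipient()))
                .build();
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("example"); // placeholder name
            ActorRef listener = system.actorOf(Props.create(DeadLetterLogger.class));
            system.eventStream().subscribe(listener, DeadLetter.class);
        }
    }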

Two consumers on same Websphere MQ JMS Queue, both receiving same message

I am working with someone who is trying to achieve load-balancing behavior using JMS queues with IBM Websphere MQ. As such, they have multiple Camel JMS consumers configured to read from the same queue. Although this behavior is undefined according to the JMS spec (last time I looked, anyway), they expect a sort of round-robin / load-balancing behavior. And while the spec leaves this undefined, I'm led to believe that the normal behavior of Websphere MQ is to deliver the message to only one of the consumers, and that it may do some type of load-balancing. See here, for example: When multi MessageConsumer connect to same queue(Websphere MQ),how to load balance message-consumer?
But in this particular case, it appears that both consumers are receiving the same message.
Can anyone who is more of an expert with Websphere MQ shed any light on this? Is there any situation where this behavior is expected? Is there any configuration change that can alleviate this?
I'm leaning towards telling everyone here to use the native Websphere MQ clustering facility and move away from having multiple consumers pointing at the same queue, but that will be a big change for them, so I'd love to discover a way to make this work.
Not that I'm a fan of relying on anything that's undefined, but if they're willing to rely on IBM-specific behavior, I'll leave that up to them.
The only ways for both of them to receive the same message are (the sketch after this answer illustrates #3 and #4):
There are multiple copies of the message.
The apps are browsing the message without a lock, then circling back to delete it.
The apps are backing out a transaction and making the message available again.
The connection is severed before the app acknowledges the message.
Having multiple apps compete for messages in a queue is a recommended practice. If one app goes down, the queue is still served. In a cluster this is crucial, because the cluster will continue to direct messages to the un-served queue instance until it fills up.
If it's a Dev system, install SupportPac MA0W and tell it to trace just that one queue and you will be able to see exactly what is happening.
See the JMS spec, section 4.4: the provider must never deliver a second copy of an acknowledged message. An exception is made for session handling in 4.4.13, which is covered by #4 above. That's pretty unambiguous, and it's part of the official spec, so it's not IBM-specific behavior.
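To make #3 and #4 concrete, a minimal sketch using the plain javax.jms API (queue name and timeout are placeholders): a transacted session that rolls back makes the same message deliverable again, and a connection severed before commit has the same effect.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Why a "consumed" message can reappear: in a transacted session, a
    // rollback (#3) - or a severed connection before commit/acknowledge (#4) -
    // puts the message back on the queue for redelivery to any consumer.
    public class RollbackDemo {
        public static void process(Connection connection) throws JMSException {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("DEV.QUEUE.1"); // placeholder queue name
            MessageConsumer consumer = session.createConsumer(queue);
            connection.start();

            Message message = consumer.receive(5000); // 5s timeout, placeholder
            if (message == null) {
                return; // nothing arrived
            }
            try {
                // ... business logic ...
                session.commit();   // message is gone for good
            } catch (RuntimeException e) {
                session.rollback(); // message becomes available again (a "duplicate")
                throw e;
            }
        }
    }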

Does ZMQ expose any internal logging? If so, how do you use it?

I've found references in a few places to some internal logging capabilities of ZMQ. The functionality that I think might exist is the ability to connect to an inproc or ipc SUB socket (or both) and listen to messages that give information about the internal state of ZMQ. This would be quite useful when debugging a distributed application. For instance, if messages are missing or being dropped, it might shed some light on why.
The most obvious mention of this is here: http://lists.zeromq.org/pipermail/zeromq-dev/2010-September/005724.html, but it's also referred to here: http://lists.zeromq.org/pipermail/zeromq-dev/2011-April/010830.html. However, I haven't found any documentation of this feature.
Is some sort of logging functionality truly available? If so, how is it used?
Some grepping through the git history eventually answered my question. The short answer is that a way for ZMQ to transmit logging messages to the outside world was implemented, but the rest of the code base never actually used it to send anything, so after a while it was removed.
The commit that originally added it, making use of an inproc socket:
https://github.com/zeromq/libzmq/commit/ce0972dca3982538fd123b61fbae3928fad6d1e7
The commit that added a new "sys" socket type specifically to support the logging:
https://github.com/zeromq/libzmq/commit/651c1adc80ddc724877f2ebedf07d18e21e363f6
JIRA issue, pull request, and commit to remove the functionality:
https://zeromq.jira.com/browse/LIBZMQ-336
https://github.com/zeromq/libzmq/pull/277
https://github.com/zeromq/libzmq/commit/5973da486696aca389dab0f558c5ef514470bcd2
