I am new to AMQP and still trying to understand the concepts, so my question might be very naive.
I am sending messages to an ActiveMQ broker, and when sending I have to specify a LinkName. However, it doesn't seem to matter what I put on the producer or consumer side; I receive the data anyway.
I am confused: what is the deal with the LinkName?
I can't really state it any better than section 2.6.1 of the AMQP 1.0 specification:
2.6.1 Naming A Link
Links are named so that they can be recovered when communication is interrupted. Link names MUST uniquely identify the link amongst all links of the same direction between the two participating containers. Link names are only used when attaching a link, so they can be arbitrarily long without a significant penalty.
A link’s name uniquely identifies the link from the container of the source to the container of the target node, i.e., if the container of the source node is A, and the container of the target node is B, the link can be globally identified by the (ordered) tuple (A,B,<name>). Consequently, a link can only be active in one connection at a time. If an attempt is made to attach the link subsequently when it is not suspended, then the link can be 'stolen', i.e., the second attach succeeds and the first attach MUST then be closed with a link error of stolen. This behavior ensures that in the event of a connection failure occurring and being noticed by one party, that re-establishment has the desired effect.
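To make the distinction concrete, here is a minimal sketch using the Apache Qpid Proton-J engine API (the container id, link name, and address below are made up, and your client library may expose the same two concepts under different parameter names): the link name identifies the link between the two containers so it can be recovered later, while the target address names the queue on the broker. That is why the broker delivers your messages regardless of what you pick as the link name.

import org.apache.qpid.proton.Proton;
import org.apache.qpid.proton.amqp.messaging.Source;
import org.apache.qpid.proton.amqp.messaging.Target;
import org.apache.qpid.proton.engine.Connection;
import org.apache.qpid.proton.engine.Sender;
import org.apache.qpid.proton.engine.Session;

public class LinkNameSketch {
    public static void main(String[] args) {
        // Container id of this peer ("A" in the spec's (A, B, <name>) tuple).
        Connection connection = Proton.connection();
        connection.setContainer("my-producer-container");

        Session session = connection.session();

        // The string passed to sender(...) is the link name. Together with the two
        // container ids it identifies the link, which is what allows a link to be
        // recovered (or "stolen") after an interruption.
        Sender sender = session.sender("orders-sender");

        // The target address is a different thing: it names the node (the queue
        // on the broker) that messages are actually sent to.
        Target target = new Target();
        target.setAddress("orders");
        sender.setTarget(target);
        sender.setSource(new Source());

        connection.open();
        session.open();
        sender.open();

        // Note: this only builds the engine-level objects; a real client would also
        // wire the connection to a transport before any frames go on the wire.
    }
}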
I'm trying to build a mental model of the role of off-chain workers in Substrate. The bigger picture seems to be that they move logic that would otherwise be done by oracles (triggered by predefined transactions) inside the Substrate node. There are two use cases I was thinking of specifically:
1: Validating file formats: an incoming transaction proposes a file accessible via URL or IPFS hash, and its format needs to be validated. An off-chain worker fetches the file, asserts the format (size, encoding, content, whatever), and if it is correct submits another transaction saying it's valid.
2: Key generation: let's assume there is a separate service distributed with the Substrate node, which manages keys for each instance. Node A runs a key-sharing algorithm (like Shamir's secret sharing) via this external service between participants A, B and C, then makes a transaction creating a group (A,B,C) on-chain. This transaction triggers all nodes in this group to run off-chain workers, which call into their local key stores to verify that they hold the key. They can all mark it on-chain afterwards.
If I understand correctly, off-chain workers are triggered in every node after block execution. In the former use case this would result in lots of transactions validating just one file, and nothing guarantees the correctness of these. What is a good way of reaching consensus on the validity of the file? Is it also possible without economic incentives like staking? That would be problematic when tokens have no value in the network, e.g. in enterprise settings. Is this even the right use case for off-chain workers? The second example should not suffer from this issue; we just need all parties to verify that they hold the key.
Where does the thought process above go wrong, and why?
If I understand correctly, off-chain workers are triggered in every node after block execution.
Yes and no. There is a CLI flag for it, and at the time of this writing it says:
--offchain-worker <ENABLED>
Should execute offchain workers on every block.
By default it's only enabled for nodes that are authoring new blocks. [default: WhenValidating] [possible
values: Always, Never, WhenValidating]
In the former use case, this would result in lots of transactions validating just one file, and nothing guarantees the correctness of these.
I think it is the responsibility of the receiving function (a.k.a. the call) to handle and incentivise this. For example, there could be a reward for validating an address, but if it has already been submitted by another transaction, you get slashed (or, even if not, you still pay a transaction fee for nothing). In such cases you can assume that not all participants will submit a transaction. They will only do it when there is a chance of improvement, which should be shaped by your reward/slash scheme.
Is this even the right use case for off-chain workers?
I am no expert here, but I think at least the validation example is a good example. It is just a matter of finding a good incentive + anti-spam slashing.
I am less familiar with the second example, so no comments on that.
When sending a message, MassTransit wraps the payload in an envelope which has a field called destinationAddress. What purpose does this field have?
I found this because I have a number of C# microservices communicating with some Node- and Java-based services, so I've been using the minimum payload defined here:
http://masstransit-project.com/MassTransit/advanced/interoperability.html
I've had no problem integrating the two services; I was just wondering what the point is of having the destinationAddress as part of the message itself. Is it just a belt-and-braces kind of thing to make sure messages don't end up on the wrong queue by mistake?
I would have thought that all of this information can be derived since it is literally just built up of a) the message bus host and b) the queue name used when actually sending the message?
Transports have a variety of ways of delivering messages. For instance, publishing a message to a topic would set the destination address to the URI of the topic, but the message may be delivered to a queue (via a subscription, forwarded by the transport) with a different address. In this case the envelope carries the original destinationAddress, whereas the queue has a different address.
There are also cases where messages may be scheduled, redelivered, faulted, etc., and having that information helps in troubleshooting production systems in cases where the original destination may not be known otherwise.
So, yeah, in the simplest case it seems superfluous; however, it comes in useful down the road when trying to figure out why something doesn't work.
Premise -
In Spring Integration, suppose I have an aggregator with a message group that is incomplete, and the server is restarted before the group release strategy is met.
Current Behavior->
All the messages posted to the aggregator go to the same message group and not a new one; since the group is not marked complete, messages keep flowing into it.
Expected->
If the server is restarted, the aggregator should pick up the leftover messages from the message store, mark the already-persisted ones complete, and then cater to new ones.
Is my expectation incorrect? Can somebody guide me?
I think we can meet your requirements with a MessageGroupStoreReaper, which you would run just once on server startup, e.g. by catching the ContextRefreshedEvent (see the sketch after the documentation link below):
The MessageGroupStore maintains a list of these callbacks which it applies, on demand, to all messages whose timestamp is earlier than a time supplied as a parameter (see the registerMessageGroupExpiryCallback(..) and expireMessageGroups(..) methods above).
The expireMessageGroups method can be called with a timeout value. Any message older than the current time minus this value will be expired, and have the callbacks applied. Thus it is the user of the store that defines what is meant by message group "expiry".
http://docs.spring.io/spring-integration/reference/html/messaging-routing-chapter.html#reaper
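As a minimal sketch (the bean name and wiring are assumptions; the store injected here must be the same persistent MessageGroupStore the aggregator uses): run the reaper once when the context starts so every group persisted before the restart is expired. Whether an expired group is discarded or released as a partial result is controlled by the aggregator's send-partial-result-on-expiry setting.

import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.integration.store.MessageGroupStore;
import org.springframework.integration.store.MessageGroupStoreReaper;
import org.springframework.stereotype.Component;

@Component
public class LeftoverGroupReaper implements ApplicationListener<ContextRefreshedEvent> {

    private final MessageGroupStore messageStore; // same persistent store as the aggregator

    public LeftoverGroupReaper(MessageGroupStore messageStore) {
        this.messageStore = messageStore;
    }

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        MessageGroupStoreReaper reaper = new MessageGroupStoreReaper(messageStore);
        reaper.setTimeout(0); // expire everything older than "now", i.e. all leftover groups
        reaper.run();         // applies the registered expiry callback(s) to each expired group
    }
}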
Problem
When my web application updates an item in the database, it sends a message containing the item ID via Camel onto an ActiveMQ queue, the consumer of which will get an external service (Solr) updated. The external service reads from the database independently.
What I want is that if the web application sends another message with the same item ID while the old one is still on the queue, the new message is dropped to avoid running the Solr update twice.
After the update request has been processed and the message with that item ID is off the queue, a new request with the same ID should again be accepted.
Is there a way to make this work out of the box? I'm really tempted to drop ActiveMQ and simply implement the update request queue as a database table with a unique constraint, ordered by timestamp or a running insert id.
What I tried so far
I've read this and this page on Stack Overflow. These are the solutions mentioned there:
Idempotent consumers in Camel: here I can specify an expression that defines what constitutes a duplicate, but that would also prevent all future attempts to send the same message, i.e. to update the same item. I only want new update requests to be dropped while one is still on the queue.
"ActiveMQ already does duplicate checks, look at auditDepth!": Well, this looks like a good start and definitely closest to what I want, but this determines equality based on the Message ID which I cannot set. So either I find a way to make ActiveMQ generate the Message ID for this queue in a certain way or I find a way to make the audit stuff look at my item ID field instead of the Message ID. (One comment in my second link even suggests using "a well defined property you set on the header", but fails to explain how.)
Write a custom plugin that redirects incoming messages to the dead-letter queue if they match one that's already on the queue. This seems to be the most complete solution offered so far, but it feels like overkill for what I perceive as a fairly mundane, everyday task.
PS: I found another SO page that asks the same thing without an answer.
What you want is not message broker functionality. Repeat after me: "A message broker is not a database, a message broker is not a database". Repeat as necessary.
The broker's job is to get messages reliably from point A to point B. The client offers some filtering capability via message selectors, but this is minimal and mainly useful for ensuring a single client only receives the specific messages it is interested in, and not others that some other client might be in charge of processing.
Your use case calls for a more stateful, database-centric solution, as you've described. Creating a broker plugin to walk the queue and check for a matching message is reinventing the wheel and error-prone if the queue depth is large, as ActiveMQ might not even page in all the messages for you, depending on memory constraints.
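For what it's worth, here is one minimal shape such a database-centric gate could take (the table, column, and class names are made up): the web application only sends the Camel/JMS message when the insert succeeds, and the queue consumer clears the row after the Solr update, so a later update for the same item is accepted again.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;
import javax.sql.DataSource;

public class PendingUpdateGate {

    private final DataSource dataSource;

    public PendingUpdateGate(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns true if no update for this item is pending yet, i.e. a message should be sent. */
    public boolean tryMarkPending(long itemId) throws SQLException {
        // pending_solr_update(item_id) has a unique (or primary key) constraint on item_id.
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO pending_solr_update (item_id) VALUES (?)")) {
            ps.setLong(1, itemId);
            ps.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException duplicate) {
            // An update for this item is already on the queue; drop the new request.
            // (Some JDBC drivers signal this only via SQLState 23xxx on a plain SQLException.)
            return false;
        }
    }

    /** Called by the queue consumer once the Solr update for the item has finished. */
    public void clearPending(long itemId) throws SQLException {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "DELETE FROM pending_solr_update WHERE item_id = ?")) {
            ps.setLong(1, itemId);
            ps.executeUpdate();
        }
    }
}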
We are trying to delete one or more queues from an MQ channel that was previously configured. The deletion was successful, but when we ran the application code we got the error code/description below:
2136 (0858) (RC2136): MQRC_MULTIPLE_REASONS
Explanation
An MQOPEN, MQPUT or MQPUT1 call was issued to open a distribution list or put a message to a distribution list, but the result of the call was not the same for all of the destinations in the list. One of the following applies:
• The call succeeded for some of the destinations but not others. The completion code is MQCC_WARNING in this case.
• The call failed for all of the destinations, but for differing reasons. The completion code is MQCC_FAILED in this case.
This reason code occurs in the following environments: AIX®, HP-UX, i5/OS™, Solaris, Windows, plus WebSphere® MQ clients connected to these systems.
Completion Code
MQCC_WARNING or MQCC_FAILED
Programmer response
Examine the MQRR response records to identify the destinations for which the call failed, and the reason for the failure. Ensure that sufficient response records are provided by the application on the call to enable the error(s) to be determined. For the MQPUT1 call, the response records must be specified using the MQOD structure, and not the MQPMO structure.
https://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/topic/com.ibm.mq.amqzao.doc/fm13300_.htm
http://www.mqseries.net/phpBB/viewtopic.php?p=6475&sid=eb310522e0959bb828917836dfa550ea
How can we solve this issue?
This reason code suggests that your application is doing an MQPUT to multiple queues at once by providing their names in an MQOR structure hung off the MQOD structure.
You say that you deleted some queues and that, after the deletion, your application started to report this error. This suggests that you deleted at least one of the queues your application was previously referring to, but not all of the queues it was using. So some of the per-destination reason codes were MQRC_NONE and some were MQRC_OBJECT_NOT_FOUND (because you deleted those queues), hence the MQRC_MULTIPLE_REASONS. As the text you quoted indicates, to see all the individual reason codes you need to look at the MQRR response records returned to your application.
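To illustrate, here is a rough sketch using the IBM MQ classes for Java (the queue manager and queue names are placeholders, and the exact exception behaviour and item accessors should be checked against your client version): after a put to a distribution list fails or warns with MQRC_MULTIPLE_REASONS, the per-destination completion and reason codes can be read back from the individual list items.

import com.ibm.mq.MQDistributionList;
import com.ibm.mq.MQDistributionListItem;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class DistributionListPut {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1"); // placeholder queue manager name

        MQDistributionListItem itemA = new MQDistributionListItem();
        itemA.setQueueName("QUEUE.A"); // still exists
        MQDistributionListItem itemB = new MQDistributionListItem();
        itemB.setQueueName("QUEUE.B"); // deleted, so this destination will fail

        MQDistributionListItem[] items = { itemA, itemB };

        try {
            MQDistributionList distList =
                    new MQDistributionList(qmgr, items, CMQC.MQOO_OUTPUT, null);
            MQMessage msg = new MQMessage();
            msg.writeString("item updated");
            distList.put(msg, new MQPutMessageOptions());
        } catch (MQException e) {
            // The overall reason can be 2136 MQRC_MULTIPLE_REASONS; the per-destination
            // detail (the MQRR response records) is available on the individual items.
            for (MQDistributionListItem item : items) {
                System.out.printf("%s: completionCode=%d reasonCode=%d%n",
                        item.getQueueName(), item.getCompletionCode(), item.getReasonCode());
            }
        } finally {
            qmgr.disconnect();
        }
    }
}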
Perhaps you could post your application code, or at least the part where the queue names are set, so we can advise further.
The solution is to not only delete the queue but also remove it from the distribution list.