I have a question about Composer events. Let's say ...
I have two committing peers
Business Network is deployed and an event is defined
I also have a client application subscribing to the event with the composer-client module.
In this case, when I emit the event from a Transaction Processor Function and the transaction is committed, how many times is the client application's event callback function called?
Two or one?
It's emitted by each peer (so, two). https://hyperledger.github.io/composer/business-network/publishing-events.html
Applications that subscribe to emitted (published) events through the composer-client API will get a notification from each peer (i.e., each peer that has an eventURL with a defined event listener in the connection profile). It's up to the application to decide how to process them.
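Since the application receives one notification per peer, a common way to "decide how to process them" is to deduplicate by event ID. This is a minimal, library-agnostic sketch (not composer-client code); it only assumes each event carries a unique `eventId`, which Composer events do.

```python
# Deduplicate event callbacks when the same committed event
# arrives once per peer (e.g., twice with two committing peers).
seen_event_ids = set()
processed = []

def on_event(event):
    """Process each logical event exactly once."""
    event_id = event["eventId"]
    if event_id in seen_event_ids:
        return  # duplicate notification from another peer; ignore it
    seen_event_ids.add(event_id)
    processed.append(event)

# Two peers each deliver a copy of the same committed event:
on_event({"eventId": "evt-1", "payload": "order shipped"})
on_event({"eventId": "evt-1", "payload": "order shipped"})
```

After both callbacks fire, `processed` contains a single entry, so the application's business logic runs once per event rather than once per peer.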
I was wondering which of my two methods is more appropriate, or is there even another one?
(1) Direct
Direct communication between GATEWAY and μSERVICE A
UI sends HTTP request to GATEWAY
GATEWAY sends HTTP request to μSERVICE A
μSERVICE A returns either SUCCESS or ERROR
Event is stored in EVENT STORE and published to QUEUE
PROJECTION DATABASE is updated
Other μSERVICES might consume event
(2) Events
Event-based communication via a message queue
UI sends HTTP request to GATEWAY
GATEWAY publishes event to QUEUE
μSERVICE A consumes event
Event is stored in EVENT STORE and published to QUEUE
PROJECTION DATABASE is updated
Other μSERVICES might consume event
GATEWAY consumes event and sends response (SUCCESS or ERROR) to UI
I am really sorry if I misunderstood some concept, I am relatively new to this style of architecture.
Thanks in advance for any help! :)
The second approach is the preferred way: it is asynchronous.
Direct
In the first approach, your microservices B and C wait for the event to be published. The scalability of the system is directly dependent on microservice A. What if microservice A is down, or falls behind writing events to the queue? It becomes a single point of failure and a bottleneck, and you can't scale the system easily.
Events
In a microservices architecture we keep the system asynchronous so it can scale.
The gateway should write to the queue using pub/sub, and all of these microservices can consume events at the same time. The system overall is more robust and can be scaled.
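The event-based flow in approach (2) can be sketched in-process with plain queues standing in for a real broker. Everything here is illustrative (the queue names, message shapes, and `correlation_id` field are assumptions, not part of any framework); the point is that the gateway and microservice A never call each other directly, and the gateway correlates the reply event back to the original request.

```python
import queue
import threading

command_queue = queue.Queue()  # GATEWAY -> microSERVICE A
reply_queue = queue.Queue()    # microSERVICE A -> GATEWAY

def microservice_a():
    """Consume one command, do the work, publish a SUCCESS/ERROR event."""
    cmd = command_queue.get()
    try:
        # ... store event in the EVENT STORE, update the PROJECTION DATABASE ...
        reply_queue.put({"correlation_id": cmd["correlation_id"],
                         "status": "SUCCESS"})
    except Exception:
        reply_queue.put({"correlation_id": cmd["correlation_id"],
                         "status": "ERROR"})

def gateway_handle_request():
    """Publish the command, then wait for the correlated reply event."""
    command_queue.put({"correlation_id": "req-42", "action": "create-order"})
    reply = reply_queue.get(timeout=5)  # GATEWAY consumes the result event
    return reply["status"]              # ... and responds to the UI

worker = threading.Thread(target=microservice_a)
worker.start()
status = gateway_handle_request()
worker.join()
```

In a real system the reply would arrive on a topic or callback queue and the gateway would match it by correlation ID, since multiple requests are in flight at once.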
As part of a data integration between two applications, we plan to send data to an Azure Service Bus queue. On the receiving end I have an API that loads the data into the target DB.
I need to understand how can I handle the following:
1. How should the API be notified whenever a message arrives in the queue? I am thinking of an Azure Function, a WebJob, or some scheduling component.
2. What if the API is down? Messages should be retained in the queue.
3. What if the API receives the message but the target DB is down? In that case, my messages should be retained in the queue for retries.
Please help me and guide me to get the correct approach to implement this.
Thanks!
Logic Apps! Use Logic Apps with the Service Bus queue trigger so that it fires on the arrival of new messages in the Service Bus queue. To send the data to the Web API, use the Logic Apps HTTP connector.
It scales per individual message.
1. How should the API be notified whenever a message arrives in the queue? Use Logic Apps.
2. What if the API is down? Azure Service Bus queues guarantee FIFO, support batching, etc. Messages are retained until they are dequeued and deleted.
3. What if the API receives the message but the target DB is down? Check the transaction feature of Azure Service Bus queues; a message that is never completed stays in the queue for retries.
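The retry behavior in point 3 comes from Service Bus's peek-lock receive mode: a received message is locked but not deleted until the receiver completes it, and an abandoned message goes back to the queue. This in-memory simulation (not the Azure SDK; `receive_once` and `load_to_db` are made-up names) shows the pattern:

```python
from collections import deque

q = deque(["payload-1"])  # the queue retains the message until it is completed

def load_to_db(msg, db_up):
    """Stand-in for the API writing to the target DB."""
    if not db_up:
        raise ConnectionError("target DB is down")

def receive_once(db_up):
    """Peek-lock style receive: complete on success, abandon (requeue) on failure."""
    msg = q.popleft()          # message is locked, not yet deleted
    try:
        load_to_db(msg, db_up)
        # complete: the message is removed for good (already popped)
    except ConnectionError:
        q.append(msg)          # abandon: the message returns to the queue

receive_once(db_up=False)      # DB down: the message is retained for retry
assert len(q) == 1
receive_once(db_up=True)       # DB back up: the message is processed and removed
assert len(q) == 0
```

The real service additionally tracks a delivery count and dead-letters a message after too many failed attempts, so poison messages don't retry forever.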
I am looking for a way for each consumer instance to receive a message that is published to RabbitMQ via MassTransit. The scenario is: we have multiple microservices that need to invalidate a cache on notification. Traditional pub-sub won't work here because there will be 5 consumers of the same type (the same code per service instance), so only one of them would receive the message.
Message observation could be an option, but then the messages would never be consumed and would hang around forever on the bus.
Can anyone suggest a pattern to use in the context of MassTransit?
Thanks in advance.
You should create a management endpoint in each service, which could even be a temporary queue (just request a receive endpoint without a queue name and one will be dynamically generated). Then, put your cache invalidation consumers on that endpoint. Each service instance will receive a unique instance of the message (when Publish is called), and those queues and bindings will automatically be removed once the service exits.
This is exactly how the bus endpoint works, but in your case you're creating a receive endpoint which can have consumer message type bindings, so that published messages are received, one copy per service instance.
cfg.ReceiveEndpoint(e => { e.Consumer<CacheInvalidationConsumer>(); });
Note that the queue name is not specified, and will be automatically generated uniquely.
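The broker-level idea behind this is worth making explicit: each service instance gets its own (temporary) queue bound to the published message type, so Publish fans out one copy per instance instead of the instances competing on a shared queue. A broker-agnostic Python sketch of that topology (all names here are illustrative, not MassTransit API):

```python
from collections import defaultdict
import queue

bindings = defaultdict(list)  # message type -> per-instance endpoint queues

def create_temporary_endpoint(message_type):
    """Like a receive endpoint with an auto-generated, instance-unique queue."""
    q = queue.Queue()
    bindings[message_type].append(q)  # bind the queue to the message type
    return q

def publish(message_type, message):
    """Every bound endpoint (one per instance) receives its own copy."""
    for endpoint in bindings[message_type]:
        endpoint.put(message)

# five instances of the same service, each with its own temporary queue
endpoints = [create_temporary_endpoint("CacheInvalidated") for _ in range(5)]
publish("CacheInvalidated", {"key": "user:42"})
received = sum(not ep.empty() for ep in endpoints)
```

With a shared queue name, the five instances would instead be competing consumers and only one would receive each message, which is exactly the problem described in the question.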
I would like to know if zmq already solves the following problem, or if the application sitting on top of zmq needs to take care of it.
1) A central publisher which publishes data to all subscribers. This data is static in nature, something like configuration. The data can be modified at any point in time.
2) Multiple subscribers subscribe to messages from this publisher. A subscriber can join at any point in time.
3) If data changes, publisher should just publish the diff to the existing subscribers.
4) If a subscriber joins later, the publisher should publish all the data (current configuration) to the new subscriber.
The ZeroMQ guide suggests the following for solving the slow-joiner syndrome, but it does not solve the above problem.
http://zguide.zeromq.org/page:all#Slow-Subscriber-Detection-Suicidal-Snail-Pattern
The Clone pattern from the Guide does precisely what you want.
The problem I'm seeing with your setup is that it requires all the subscribers to have the same state. If all subscribers are at version 7 and you publish the 7-to-8 diff, then they all update to version 8. But this requires a tightly-coupled state synchronization between nodes. How would you handle the case when subscribers get out of sync?
Consider this alternative setup:
the "publisher" has a single ROUTER socket that it binds
each "subscriber" has a single DEALER socket that connects to the ROUTER
can't use a REQ socket because that would prohibit the sending of "update-hints" (details to follow)
when a subscriber i joins the network, it sends an "update" request to the publisher, so that the publisher is aware of the subscriber's identity and its current version, version[i]
the publisher responds with the diffs necessary to bring subscriber i up to date
if data changes on the publisher (i.e., a new version) it sends an "update-hint" to all of the known subscribers
when a subscriber receives an "update-hint," it performs an "update" request
(optional) subscribers periodically send an "update" request (infrequent polling)
This approach has the following benefits:
the publisher is the server; the subscribers are clients
the publisher never initiates the sending of any actual data - it only responds to requests from clients (that is, the "update-hints" don't count as sending actual data)
the subscribers are all independently keeping themselves up to date (eventual consistency) even though they may be out of sync intermittently
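A minimal pyzmq sketch of the core "update" exchange in this setup, using the inproc transport and threads so it is self-contained. The publisher holds a list of versioned changes; each subscriber reports its current version and receives exactly the diffs needed to catch up. The message shapes, version numbering, and endpoint name are illustrative assumptions, and the "update-hint" push is omitted for brevity.

```python
import json
import threading
import zmq

ctx = zmq.Context.instance()

# the "publisher" has a single ROUTER socket that it binds
router = ctx.socket(zmq.ROUTER)
router.bind("inproc://config")

# the publisher's data, as an ordered list of versioned changes (diffs)
changes = ["v1: base config", "v2: add feature flag", "v3: tune timeout"]
results = {}

def subscriber(name, current_version):
    # each "subscriber" has a single DEALER socket that connects to the ROUTER
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect("inproc://config")
    # send an "update" request carrying this subscriber's current version
    dealer.send_json({"type": "update", "version": current_version})
    results[name] = dealer.recv_json()["diffs"]
    dealer.close()

threads = [
    threading.Thread(target=subscriber, args=("fresh", 0)),  # new joiner: needs everything
    threading.Thread(target=subscriber, args=("stale", 2)),  # only missing v3
]
for t in threads:
    t.start()

# the publisher answers each "update" request with the missing diffs
for _ in range(2):
    identity, payload = router.recv_multipart()
    version = json.loads(payload)["version"]
    reply = {"diffs": changes[version:]}
    router.send_multipart([identity, json.dumps(reply).encode()])

for t in threads:
    t.join()
router.close()
```

Note how the new joiner receives the full current state while the out-of-date subscriber gets only the tail of the change list: the publisher never needs to assume all subscribers share the same version.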