Corda State Events: Do events have an order?

A network consists of 3 nodes, where 1 node is read-only and participates in every transaction. A request can start from any of the nodes, which creates a request state. It is received and processed by another node, which creates a new response state. Both nodes only issue new states and do not consume them. Both state events are received by the read-only node. Would the state events received by the read-only Corda node have a defined order, or could they be processed in any order?
For example, can we say that the request originator's state event will be received/processed first and then the other node's? Or could it happen, under high load, that the other node's event is received/processed by the read-only node first and the originator's event afterwards?
My experience with Corda is very minimal, and I need to understand how events are received by the parties when one party acts as read-only and all the remaining parties only issue new states.

In general, the order of received messages is not guaranteed. A node processes messages in the order it receives them, but there is no guarantee that messages arrive in the order they were sent.
If Node A receives messages from Node B and Node C, and Node B produces a message before Node C does, there is no guarantee that Node B's message is processed first. Whichever message reaches Node A first gets processed first; the delay can have many causes, such as network latency.

Related

Newly added Raft node cannot receive proposal?

I am new to etcd/raft. I managed to build a simple raft demo that creates a cluster and syncs proposals. The nodes in the cluster are pre-configured in the peers argument of raft.StartNode(...).
However, when I tried to dynamically add a new node to a running cluster, although I managed to make the cluster recognize the new node, new proposals are not sent to it.
Here is my code: main.go
For short:
I get entries from Ready() and invoke ApplyConfChange() when I see an entry of type raftpb.EntryConfChange in CommittedEntries.
Messages to each node are sent over channels, simulating network RPCs.
Advance() and Step() are invoked to drive the state machines.
And how do I add the new node? Also in short:
Each node has another individual channel for receiving conf change messages.
I send a raftpb.ConfChange to every conf change channel.
Watching stdout, the new node is recognized by all previous nodes, because they all get EntryConfChange in CommittedEntries.
Then I make another proposal. All previous nodes receive the proposal from the leader node, but the new one does not.
I must have missed something or made a mistake somewhere, but I cannot figure out what.
Can anyone help? Thanks in advance!

Emulate radio communication using channels or mutexes

I need to emulate a radio communication network composed of N nodes, with these properties:
nodes either send then receive data, or receive then send data, but not at the same time.
data sent over-the-air are received by all nodes which are in receive mode at that time.
if two or more nodes send data simultaneously, the data are lost.
there is no time synchronization among nodes.
In Go, if I use a channel to emulate the transmission media, data are serialized, and only one receiver gets the data, not all of them.
Also, I cannot think of a way to "ruin" the data if two senders try to send at the same time. Whether I use a mutex or not, one of the senders will successfully get its message sent.
Why don't you create a publisher/subscriber module using Go channels?
Create a centralized queuing system where all your sender and receiver nodes register themselves. When any node sends data, it goes to that module, which picks the registered channels from its list and writes the data to them.
You have to create one channel per node and register it with a central pub/sub module. This will solve your problem.

Collector Node Issue (IIB)

Collector node issue: I am currently using the Collector node to group messages (XMLs). My requirement is to collect messages until the last message is received (reading from a File Input node).
Control terminal: I'm sending a control message to stop the collection and propagate to the next node, but this doesn't work; the node still waits for the timeout/quantity condition to be satisfied.
MY QUESTION: What condition can I use to collect messages until the last message is received?
Add a separate input terminal on the Collector node that is used to complete a collection. Once you send a message to the second terminal, the collection is complete and propagated.
The Control terminal can be used to signal the Collector node when complete collections are propagated, not to determine when a collection is complete.
A collection is complete when either the set number of messages has been received or the timeout has been exhausted for all input terminals.
So if you don't know in advance how many messages you want to include in a collection, you have 3 options:
1. Set message quantity to 0 and set an appropriate timeout for the input terminals.
This way the node will include in the collection all messages received between the first message and the timeout.
2. Set a large number as the message quantity and use collection expiry.
With collection expiry, incomplete collections can be propagated to the expiry terminal, but this works essentially the same as the previous method.
3. Develop your own collector flow.
You can develop a flow for combining messages using MQ Input, Get and Output nodes, keeping intermediate combined messages in MQ queues. Use this flow to combine your inputs and send the complete message onto the input queue of your processing flow.

Storm Global Grouping Fault Tolerance

I have started using Storm recently but could not find any resources on the net about the global grouping option's fault tolerance.
According to my understanding of the documentation: in a topology where a bolt (Bolt A) uses global grouping, the task of Bolt A receives tuples from all tasks of Bolt B. Because global grouping is used, there is only one task of Bolt A in the topology.
The question is: what happens if we store some historical data of the stream within Bolt A and the worker process that contains the task of Bolt A dies? Will the data stored in this bolt be lost?
Thanks in advance
Once all the downstream tasks have acked a tuple, it means they have successfully processed the message, and it need not be replayed after a shutdown. If you are keeping any state in memory, you should store it in a persistent store. A message should be acked only once the state change it caused has been persisted.

Does Kademlia iterativeFindNode operation store found contacts in the k-buckets?

Following the Kademlia specification found at XLattice, I was wondering about the exact working of the iterativeFindNode operation and how it is useful for bootstrapping and refreshing buckets. The document says:
At the end of this process, the node will have accumulated a set of k active contacts or (if the RPC was FIND_VALUE) may have found a data value. Either a set of triples or the value is returned to the caller. (§4.5, Node Lookup)
The found nodes are returned to the caller, but the specification doesn't say what to do with these values once they are returned, especially in the context of refresh and bootstrap:
If no node lookups have been performed in any given bucket's range for tRefresh (an hour in basic Kademlia), the node selects a random number in that range and does a refresh, an iterativeFindNode using that number as key. (§4.6, Refresh)
A node joins the network as follows: [...] it does an iterativeFindNode for n [the node id] (§4.7, Join)
Is running the iterativeFindNode operation by itself enough to refresh the k-buckets of contacts, or does the specification omit that the results should be inserted into the contact buckets?
Note: the iterativeFindNode operation uses the underlying RPCs and can update the k-buckets through them, as specified:
Whenever a node receives a communication from another, it updates the corresponding bucket. (§3.4.4, Updates)
However, only the recipient of the FIND_NODE RPC will be inserted in the k-buckets, and the response from that node (containing a list of k-contacts) will be ignored.
I can't speak for XLattice, but having worked on a bittorrent kademlia implementation this strikes me as strange.
Incoming requests are not verified to be reachable nodes (NAT and firewall issues) while responses to outgoing RPC calls are a good indicator that a node is indeed reachable.
So incoming requests could only be added as tentative contacts which still have to be verified, while incoming responses should be immediately useful for routing table maintenance.
But it's important to distinguish between the triples contained in the response and the response itself. The triples are unverified, the response itself on the other hand is a good verification for the liveness of that node.
Summary:
Incoming requests
semi-useful for routing table
reachability needs to be tested
Incoming responses
Immediately useful for the routing table
Triples inside responses
not useful by themselves
but you may end up visiting them as part of the lookup process, thus they can become responses
