How to retrieve messages from dmsMessageTable of a device controller using SNMP

I am working with the NTCIP/SNMP protocol. I was able to connect to the device controller using a MIB browser and walk through the different objects (OIDs) loaded from a MIB file. However, when I walk over the dmsMessageTable, only two messages are retrieved (again via their object IDs), even though the device controller holds more than two. The messages being retrieved are the default ones provided with the device.
Can anyone help with this?

Are you using the correct primary index (the second-to-last node of the OID)? This node corresponds to the message memory type; for changeable messages the index should be 3, for volatile messages 4.
You can retrieve the number of messages for a given memory type (for example, for changeable messages use dmsNumChangeableMsg - 1.3.6.1.4.1.1206.4.2.3.5.2.0), and then the last node of your OID should correspond to the message number in that memory bank.
EXAMPLE:
For the first message in changeable memory:
1.3.6.1.4.1.1206.4.2.3.5.8.1.3.3.1
For the second message in volatile memory:
1.3.6.1.4.1.1206.4.2.3.5.8.1.3.4.2
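Putting the indexing rule above into code, a small helper could build these OIDs. This is only a sketch: the base OID 1.3.6.1.4.1.1206.4.2.3.5.8.1.3 and the memory-type codes (3 = changeable, 4 = volatile) are taken from the examples above, not verified against the NTCIP 1203 MIB.

```cpp
#include <cassert>
#include <string>

// Build the OID for a message entry (hypothetical helper).
// memoryType: 3 = changeable, 4 = volatile (per the examples above);
// messageNumber: 1-based message number within that memory bank.
std::string dmsMessageOid(int memoryType, int messageNumber) {
    return "1.3.6.1.4.1.1206.4.2.3.5.8.1.3."
         + std::to_string(memoryType) + "."
         + std::to_string(messageNumber);
}
```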


How to get the position of a destination node?

I have been working on a position-based protocol using veins-inet and I want to get the position of the destination node.
In my code, I got the IP Address of the destination from the datagram.
const L3Address& destAddr = datagram->getDestinationAddress();
and I want to get the current position of this node.
I already checked the following question
How to get RSU coordinate from TraCIDem11p.cc?
But it seems that it refers to the node by using the node ID.
Is there a way to get the position of the node by referring to its IP Address?
I am using Instant Veins 4.7.1.
A very simple solution would be to have each node publish its current L3Address and Coord to a lookup table whenever it moves. This lookup table could be located in a shared module or every node could have its own lookup table. Remember, you are writing C++ code, so even a simple singleton class with methods for getting/setting information is enough to coordinate this.
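A minimal sketch of that singleton lookup table, assuming a plain std::string key standing in for inet::L3Address and a stand-in Coord type (in a real simulation you would use the INET/Veins classes directly):

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for inet::Coord so the sketch compiles without the framework.
struct Coord { double x, y; };

// Singleton registry: each node publishes its position whenever it moves,
// and any node can look up another node's last known position by address.
class PositionRegistry {
public:
    static PositionRegistry& instance() {
        static PositionRegistry reg;   // one shared table per process
        return reg;
    }
    // called by a node from its mobility/position-changed handler
    void update(const std::string& addr, Coord pos) { table_[addr] = pos; }
    // returns false if the address has never published a position
    bool lookup(const std::string& addr, Coord& out) const {
        auto it = table_.find(addr);
        if (it == table_.end()) return false;
        out = it->second;
        return true;
    }
private:
    PositionRegistry() = default;
    std::map<std::string, Coord> table_;
};
```

The key would be the node's actual L3Address (or its string form), and the update call would go wherever the mobility model reports movement.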
If, however, the process of "a node figures out where another node is" is something you would like to model (e.g., this should be a process that takes some time, can fail, causes load on the wireless channel, ...) you would first need to decide how this information would be transferred in real life, then model this using messages exchanged between nodes.

Inet framework getting time

I'm using the wireless example, and I want to get the simulation time and save it in a parameter to calculate the time between when a packet is sent and when it arrives. Does anyone have a solution for this?
There is a statistic for this collected automatically in the application models (e.g. UdpBasicApp etc.). It's called endToEndDelay.
The proper way to do this (and what is already done in INET) is to add a tag to the packet at creation time containing a simtime_t variable holding the current simtime, then read the same tag when the packet arrives and calculate the difference. Putting values into the "parameters" of a message would NOT work, because packets can be fragmented/defragmented in the network, so their identity is not preserved and the attached parameters are destroyed.
But again, this is already present in INET 4.2.
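The tag idea can be sketched outside of INET like this; simtime_t is stood in by a double, and the struct names are illustrative, not INET's actual tag API:

```cpp
#include <cassert>

// The tag travels with the packet itself (unlike message "parameters",
// which are lost on fragmentation/defragmentation).
struct CreationTimeTag { double creationTime; };

struct Packet {
    CreationTimeTag tag;   // in INET this would be attached via the tag API
};

// At the sender: stamp the packet with the current simtime.
Packet makePacket(double now) {
    return Packet{ CreationTimeTag{ now } };
}

// At the receiver: delay = arrival time minus creation time from the tag.
double endToEndDelay(const Packet& p, double arrivalTime) {
    return arrivalTime - p.tag.creationTime;
}
```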

sending snapshot time from physical process to node in Castalia

When the physical process replies to the node through the sensor manager, I want to send source_snapshot[i].time along with retVal.
For this purpose, I am using the phyMsg->setTimestamp() function in the physical process and getTimestamp() wherever needed.
Is this the correct practice?
EDIT: or maybe I can add a time variable in physical process message structure?
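For illustration only, the second option (a dedicated time field in the reply message) could look roughly like this; the names are made up, and in Castalia the message would actually be declared in a .msg file rather than plain C++:

```cpp
#include <cassert>

// Hypothetical reply message carrying the snapshot time explicitly,
// instead of overloading the generic timestamp field.
struct PhysicalProcessMessage {
    double value;         // the sensed value (retVal)
    double snapshotTime;  // source_snapshot[i].time, carried explicitly
};
```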

Does Kademlia iterativeFindNode operation store found contacts in the k-buckets?

Following the Kademlia specification found at XLattice, I was wondering about the exact working of the iterativeFindNode operation and how it is useful for bootstrapping and refreshing buckets. The document says:
At the end of this process, the node will have accumulated a set of k active contacts or (if the RPC was FIND_VALUE) may have found a data value. Either a set of triples or the value is returned to the caller. (§4.5, Node Lookup)
The found nodes will be returned to the caller, but the specification doesn't say what to do with these values once returned, especially in the context of refresh and bootstrap:
If no node lookups have been performed in any given bucket's range for tRefresh (an hour in basic Kademlia), the node selects a random number in that range and does a refresh, an iterativeFindNode using that number as key. (§4.6, Refresh)
A node joins the network as follows: [...] it does an iterativeFindNode for n [the node id] (§4.7, Join)
Is running the iterativeFindNode operation by itself enough to refresh the k-buckets of contacts, or does the specification omit that the result should be inserted into the contact buckets?
Note: the iterativeFindNode operation uses the underlying RPCs, and through them can update the k-buckets as specified:
Whenever a node receives a communication from another, it updates the corresponding bucket. (§3.4.4, Updates)
However, only the recipient of the FIND_NODE RPC will be inserted in the k-buckets, and the response from that node (containing a list of k-contacts) will be ignored.
I can't speak for XLattice, but having worked on a bittorrent kademlia implementation this strikes me as strange.
Incoming requests are not verified to be from reachable nodes (NAT and firewall issues), while responses to outgoing RPC calls are a good indicator that a node is indeed reachable.
So incoming requests could only be added as tentative contacts which have still to be verified while incoming responses should be immediately useful for routing table maintenance.
But it's important to distinguish between the triples contained in the response and the response itself. The triples are unverified, the response itself on the other hand is a good verification for the liveness of that node.
Summary:
- Incoming requests: semi-useful for the routing table; reachability still needs to be tested.
- Incoming responses: immediately useful for the routing table.
- Triples inside responses: not useful by themselves, but you may end up visiting them as part of the lookup process, at which point they can become responses.
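The distinction drawn in that summary (requests vs. responses vs. triples inside responses) can be sketched as a routing-table update policy. Everything here is illustrative and not taken from the XLattice spec:

```cpp
#include <cassert>
#include <set>
#include <string>

// Illustrative policy: a response proves the sender is reachable;
// an incoming request only creates a tentative (to-be-verified) contact;
// triples inside a response are not inserted at all - they only enter
// the table once a later lookup step gets a response from them.
class RoutingTable {
public:
    void onIncomingRequest(const std::string& id)  { tentative_.insert(id); }
    void onIncomingResponse(const std::string& id) {
        tentative_.erase(id);
        verified_.insert(id);   // responder is provably live and reachable
    }
    void onTripleInResponse(const std::string&) {
        // do nothing: unverified third-party contact
    }
    bool isVerified(const std::string& id) const { return verified_.count(id) > 0; }
    bool isTentative(const std::string& id) const { return tentative_.count(id) > 0; }
private:
    std::set<std::string> tentative_, verified_;
};
```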

How to determine the value of `MaxMsgLength` of queue

I am trying to write a simple string message to a queue. The MaxMsgLength property of the queue is set to 4 KB. The message has 2700 characters, and when I try to put it on the queue I get a 2030 (07EE) (RC2030): MQRC_MSG_TOO_BIG_FOR_Q exception. I am not doing any special kind of encoding, so whatever is default for Windows should be used.
I want to know how to determine the value that I should give the MaxMsgLength property, and how to calculate it.
Please remember that the MaxMsgLength as specified in the queue definition includes not just the payload, but also the message header and any properties that you set. If you check the Infocenter MQ_* (String Lengths) page and look for MQ_MSG_HEADER_LENGTH you will see that the MQMD alone is 4000 bytes. So if you set the MaxMsgLength of the queue to 4k, the largest payload you can have is 96 bytes. If the queue in question is a transmission queue, you need the queue size plus the size of the MQXQH transmission queue header.
To specifically answer the question in the title of the post, you can find the MaxMsgLength in two ways. Visually, by displaying the queue attributes. Programmatically, add "Inquire" to the open options when opening the queue and use the MQInq API call. Then add the total of the MQMD, any properties that you add (including the XML structures that contain them but are not returned in the API calls that manipulate them) plus any headers such as RFH2 (if the queues are set to use that instead of native properties), MQXQH, MQDLQ, etc.
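As a back-of-the-envelope check of the numbers above (the 4000-byte MQMD size is taken from the answer; real sizing must also count properties and any extra headers such as RFH2 or MQXQH):

```cpp
#include <cassert>

// MQ_MSG_HEADER_LENGTH per the answer above: the MQMD alone is 4000 bytes.
const int MQMD_LENGTH = 4000;

// MaxMsgLength must cover header + properties + payload.
int requiredMaxMsgLength(int payloadBytes, int propertyBytes) {
    return MQMD_LENGTH + propertyBytes + payloadBytes;
}
```

With a 4 KB (4096-byte) MaxMsgLength and no properties, the largest payload is 4096 - 4000 = 96 bytes, which is why a 2700-character message fails with MQRC_MSG_TOO_BIG_FOR_Q.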
Not sure what language you are using in your application. Assuming it is C, check the BufferLength parameter value you have specified on the MQPUT call.
This IBM MQ InfoCenter link explains the case where you can run into 2030 error and possible remedies.
