I have been working on a position-based protocol using veins-inet and I want to get the position of the destination node.
In my code, I get the IP address of the destination from the datagram:
const L3Address& destAddr = datagram->getDestinationAddress();
and I want to get the current position of this node.
I already checked the following question
How to get RSU coordinate from TraCIDemo11p.cc?
But it seems that it refers to the node by using the node ID.
Is there a way to get the position of the node by referring to its IP Address?
I am using Instant Veins 4.7.1.
A very simple solution would be to have each node publish its current L3Address and Coord to a lookup table whenever it moves. This lookup table could be located in a shared module or every node could have its own lookup table. Remember, you are writing C++ code, so even a simple singleton class with methods for getting/setting information is enough to coordinate this.
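A minimal sketch of such a singleton lookup table, assuming the INET L3Address and Veins Coord types (header paths and namespaces differ between Veins/INET versions, so adjust the includes to your tree):

#include <map>
#include "inet/networklayer/common/L3Address.h"
#include "veins/base/utils/Coord.h"

// Process-wide singleton mapping an L3Address to the last known position of that node.
class PositionTable {
public:
    static PositionTable& instance() {
        static PositionTable table; // created on first use, shared by all modules
        return table;
    }
    // Called by each node whenever its mobility module reports a new position.
    void update(const inet::L3Address& addr, const Coord& pos) {
        positions[addr] = pos;
    }
    // Returns true and fills pos if this address has been seen before.
    bool lookup(const inet::L3Address& addr, Coord& pos) const {
        auto it = positions.find(addr);
        if (it == positions.end())
            return false;
        pos = it->second;
        return true;
    }
private:
    std::map<inet::L3Address, Coord> positions;
};

Each node would then call PositionTable::instance().update(...) from its position-update handler, and the receiving code can call lookup() with the destAddr taken from the datagram.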
If, however, the process of "a node figures out where another node is" is something you would like to model (e.g., this should be a process that takes some time, can fail, causes load on the wireless channel, ...) you would first need to decide how this information would be transferred in real life, then model this using messages exchanged between nodes.
I have been thrown into a pool of Golang / Gremlin / Neptune and am able to get some things to work. Life is good enough, but I am hoping there is a simple answer (which I have not been able to find) to what seems like a simple question.
I have 'obs' nodes with some properties, two of which are ('type','domain') and ('value','whitehouse.com').
Another set of nodes is 'attack' ('type','group') and ('value','Emotet'), along with other properties.
An observation node can have an edge pointing to one or more attack nodes. (and actually, other types of nodes as well.) These edges have a time-based property - when the observation was seen manifesting a certain type of attack.
I'm working in Go, using Gremlin to communicate with a Neptune DB. In this environment you construct your query as a string, send it down the wire to Neptune, and get something called GraphSON back.
Thus, I construct this, and send it...
fmt.Sprintf("g.V().hasLabel('obs').has('value','%s').limit(1)", domain)
And I get back properties for a vertex, in GraphSON. Were I using the console, all I would get back would be the id. Go figure.
Then I construct this, and send it...
fmt.Sprintf("g.V().hasLabel('obs').has('value','%s').limit(1).out()", domain)
and I get back the properties of the connected nodes, in GraphSON. Again, using the console I would only get back ids. No sweat.
What I would LIKE to do is to combine these two queries somehow so that I am not doing what seems to be like two almost identical lookups.
Console-wise, assume both queries also have valueMap() or elementMap() tacked on the end. Is there any way to do them as one query?
There are many ways you could write this query. Here are a couple of options:
g.V().hasLabel('obs').
has('value','%s').
limit(1).as('a').
out().as('b').
select('a','b')
or, using project:
g.V().hasLabel('obs').
has('value','%s').
limit(1).
project('a','b').
by().
by(out().fold())
My preference is for the project example as you will get the connected vertices back in a list.
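If it helps, the project() version can be built as a string in Go the same way as the two original queries. This is just a sketch, with valueMap(true) folded into the by() modulators as mentioned in the question; the client call itself is left out because it depends on which driver you use:

package main

import "fmt"

// buildQuery returns the combined project()-based traversal as a string,
// ready to be sent to Neptune with whatever client you already use.
func buildQuery(domain string) string {
	return fmt.Sprintf(
		"g.V().hasLabel('obs').has('value','%s').limit(1)."+
			"project('a','b').by(valueMap(true)).by(out().valueMap(true).fold())",
		domain)
}

func main() {
	fmt.Println(buildQuery("whitehouse.com"))
}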
I was wondering if there is any way of attaching state to individual rooms; can that be done?
Say I create a room room_name:
socket.join("room_name")
How can I assign an object, array, variable(s) to that specific room? I want to do something like:
io.adapter.rooms["room_name"].max = maxPeople
Here I give the room room_name a state variable max, and max is assigned the value of the variable maxPeople, which is, say, an int. max stores the maximum number of people allowed to join that specific room. Other rooms could be assigned different max values.
Well, there is an object (internal to socket.io) that represents a room. It is stored in the adapter. You can see the basic definition of the object here in the source. So, if you reached into the adapter and got the room object for a given name, you could add data to it and it would stay there as long as the room didn't get removed.
But, that's a bit dangerous for a couple of reasons:
If there's only one connection in the room and that user disconnects and reconnects (say, because they navigate to a new page), the room object may get destroyed and then recreated, and your data would be lost.
Direct access to the Room object and the list of rooms is not part of the public socket.io interface (as far as I know). The adapter is a replaceable component, and when you're doing things like using the redis adapter with clustering it may work differently (in fact it probably does, because the list of rooms is centralized in a redis database). The non-public interface is also subject to change in future versions of socket.io (and socket.io has been known to rearrange some internals from time to time).
So, if this were my code, I'd just create my own data structure to keep room specific info.
When you add someone to a room, you make sure your own room object exists and is initialized properly with the desired data. It would probably work well to use a Map object with room name as key and your own Room object as value. When you remove someone from a room, you can clean up your own data structure if the room is empty.
You could even make your own room object the central API you use for joining or leaving a room; it would maintain its own data structure and also call socket.io to do the join or leave. This would centralize the maintenance of your own data structure when anyone joins or leaves a room. It would also allow you to pre-create room objects with their own properties before there are any users in them (if you wanted to do that), which is something socket.io will not do.
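As a minimal sketch of that idea in plain Node.js (the roomState map and the helper names are illustrative, not part of socket.io's API):

// roomName -> { max, members }
const roomState = new Map();

// Wrap socket.io's join so the room's own state stays in sync.
function joinRoom(socket, roomName, maxPeople) {
    if (!roomState.has(roomName)) {
        roomState.set(roomName, { max: maxPeople, members: new Set() });
    }
    const room = roomState.get(roomName);
    if (room.members.size >= room.max) {
        return false; // room is full
    }
    room.members.add(socket.id);
    socket.join(roomName);
    return true;
}

// Wrap socket.io's leave and clean up when the room empties.
function leaveRoom(socket, roomName) {
    socket.leave(roomName);
    const room = roomState.get(roomName);
    if (room) {
        room.members.delete(socket.id);
        if (room.members.size === 0) {
            roomState.delete(roomName);
        }
    }
}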
I have three RSUs in my app. Is there an ID for an RSU like the car ID for cars? If yes, how can I get the RSU ID in the RSU's initialize method? If not, how can I distinguish between RSUs?
If you consider the demo scenario, it has one RSU, stored in an array structure. However, you can have an arbitrary number of RSUs by increasing the number in the brackets.
Therefore, you can address every RSU individually by its module path *.rsu[<index>] (e.g. RSUExampleScenario.rsu[0]); in code, each module also has a unique numeric ID available via getId(). OMNeT++ also provides other useful functions for getting the name of a module.
If this identifier is not enough for you, at least in the MAC layer there is an additional ID which you can use to distinguish nodes.
If this is still not enough, you would need to add your own identifier variable to the NED module.
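For example, inside the RSU application's initialize() you could read these identifiers directly; the class and base-class names below are illustrative, but any OMNeT++ module offers getId(), getIndex() and getFullPath():

void MyRSUApp::initialize(int stage)
{
    BaseWaveApplLayer::initialize(stage); // replace with your actual base class

    if (stage == 0) {
        cModule* rsu = getParentModule();      // the rsu[<index>] compound module
        int moduleId = rsu->getId();           // unique numeric module id
        int rsuIndex = rsu->getIndex();        // the <index> in rsu[<index>]
        std::string path = rsu->getFullPath(); // e.g. "RSUExampleScenario.rsu[0]"
        EV << "RSU id=" << moduleId << " index=" << rsuIndex << " path=" << path << "\n";
    }
}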
I am working with the NTCIP/SNMP protocol. I was able to connect to the device controller using one of the MIB browsers and walk through the different objects (OIDs) loaded from a MIB file. However, when I do a walk over the dmsMessageTable I can see only two messages (again, through object IDs) being retrieved, but the device controller has more than two messages. The messages being retrieved are the default ones provided with the device.
Can anyone help with this?
Are you using the correct primary index (the second-to-last node of the OID)? This node corresponds to the message memory type; for changeable or volatile messages the index should be 3 or 4, respectively.
You can retrieve the number of messages for the memory type (for example, for changeable messages use dmsNumChangeableMsg - 1.3.6.1.4.1.1206.4.2.3.5.2.0), and then the last node of your OID should correspond to the message number in that type of memory bank.
EXAMPLE:
For the first message in changeable memory:
1.3.6.1.4.1.1206.4.2.3.5.8.1.3.3.1
For the second message in volatile memory:
1.3.6.1.4.1.1206.4.2.3.5.8.1.3.4.2
We have a messaging system where one module sends messages to another remote module at a high rate. The receiving module decodes each message into a specific format and forwards it to two threads: one is called the logger thread and the other the forwarder thread.
Before we send this message to these threads we need to do some kind of grouping of these messages.
Please note that these messages are coming at a high rate approx 800 per second.
The alert structure is as follows:
INT type
INT Sending System ID
INT Recpt System ID
INT timestamp
INT codes
INT Source Port
INT Destination Port
Source IP Address (ipv4 or ipv6)
Destination IP Address (ipv4 or ipv6)
At the end of the match we need to maintain a structure with the following details:
struct{
INT COUNT
INT First Alert Timestamp
INT Last Alert Timestamp
INT First Alert ID
INT Last Alert ID
}
For each alert which matches the 8 criteria, a group will be created/picked and the count will be incremented along with the other details.
The IP address fields can be either a structure of 5 fields (INT Address Type, INT Address1, INT Address2, INT Address3 and INT Address4) or they can be converted to strings and then stored in the structure.
We have been racking our brains for quite some time but have been unable to find a data structure or algorithm efficient enough to address both memory and speed.
Hence we thought of coming to you experts for help.
A doubly linked list to store the matched alerts makes it easy to retrieve the first and last alert ID. You might need to extend the doubly linked list to have a count field.
Depending on your performance requirements, you could group the alerts from a list with a hash on the identifiers. If that isn't fast enough, implement a more complex tree structure that groups by the identifying fields.
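As a sketch of the hash-based grouping in C++ (the field names follow the question; the key layout and hash combination are illustrative, not a definitive design):

#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>

// The fields the alerts are matched on (timestamp excluded from the key).
struct AlertKey {
    int32_t type;
    int32_t sendingSystemId;
    int32_t recptSystemId;
    int32_t codes;
    int32_t sourcePort;
    int32_t destinationPort;
    std::string sourceIp;      // normalized textual form, IPv4 or IPv6
    std::string destinationIp;

    bool operator==(const AlertKey& o) const {
        return type == o.type && sendingSystemId == o.sendingSystemId
            && recptSystemId == o.recptSystemId && codes == o.codes
            && sourcePort == o.sourcePort && destinationPort == o.destinationPort
            && sourceIp == o.sourceIp && destinationIp == o.destinationIp;
    }
};

struct AlertKeyHash {
    size_t operator()(const AlertKey& k) const {
        // Simple hash combining; plenty for 800 alerts per second.
        size_t h = std::hash<std::string>()(k.sourceIp);
        auto mix = [&h](size_t v) { h ^= v + 0x9e3779b9 + (h << 6) + (h >> 2); };
        mix(std::hash<std::string>()(k.destinationIp));
        mix(std::hash<int32_t>()(k.type));
        mix(std::hash<int32_t>()(k.sendingSystemId));
        mix(std::hash<int32_t>()(k.recptSystemId));
        mix(std::hash<int32_t>()(k.codes));
        mix(std::hash<int32_t>()(k.sourcePort));
        mix(std::hash<int32_t>()(k.destinationPort));
        return h;
    }
};

// The per-group record from the question.
struct AlertGroup {
    int32_t count = 0;
    int32_t firstAlertTimestamp = 0;
    int32_t lastAlertTimestamp = 0;
    int32_t firstAlertId = 0;
    int32_t lastAlertId = 0;
};

// One average O(1) map lookup per incoming alert.
void addAlert(std::unordered_map<AlertKey, AlertGroup, AlertKeyHash>& groups,
              const AlertKey& key, int32_t timestamp, int32_t alertId)
{
    AlertGroup& g = groups[key];
    if (g.count == 0) {
        g.firstAlertTimestamp = timestamp;
        g.firstAlertId = alertId;
    }
    ++g.count;
    g.lastAlertTimestamp = timestamp;
    g.lastAlertId = alertId;
}

At 800 alerts per second this is a single average-O(1) lookup per alert, so memory (one entry per distinct group) rather than speed is the main thing to watch.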
The best thing I can suggest is to get it working in the simplest way possible; 800 per second is nothing. If you then have performance issues, optimize. It's so much fun writing stuff like that using test-driven development; it beats the hell out of your average CRUD code!
What do you plan on writing this in? Any suggestion is going to depend heavily on the language.
Your best bet is to start off with something like a Dictionary<string, ContainerObject> where the key consists of the needed parameters concatenated for quick lookups. Keep working with this dictionary in memory while you have another process logging the values appropriately to, say, a DB or flat file.
Keep it simple and 800 a second shouldn't be a problem. However, the means of communication is going to be a major factor. Is this local or remote? If it's remote and coming from a single source, your nemesis is going to be latency building up if it's done as individual requests.