What exactly does handleParkingUpdate() do? - omnet++

I am trying to implement a VANET model for smart parking simulations. I am trying to fully understand TraCIDemo11p.cc and the files relevant to it, and it's proving quite difficult to get my head around the general structure of each module and the communication between them, despite understanding the TicToc tutorial.
I understand how SUMO and OMNeT++ are run in parallel: the TraCIScenarioManager in OMNeT++ communicates with the TraCI server in order to exchange information with SUMO, etc. But I'm finding it hard to get my head around how the TraCIDemoApp is utilised.
The question is quite specific, but hoping an answer to it would let me figure out the rest. Any help would be appreciated!
Thanks,
Wesley

Veins comes with a very small demo example in the city of Erlangen:
Vehicles start at the parking lot of the university and drive to a location out of sight. After some time the first vehicle (node[0]) emulates an accident and stops driving. It broadcasts this information, which gets redistributed via the RSU to all other vehicles in range. They, in turn, try to use an alternative route to their destination while re-broadcasting the information about the accident. Thus, newly spawned vehicles also get informed and immediately try to choose a different route to the destination.
All of this (i.e. accident, broadcasting, switching route) is implemented in the TraCIDemo* files, which represent a VANET application running in a car or RSU, utilizing the NIC (i.e. PHY & MAC) to communicate. See "what policy is based vehicle rerouting in case of accident?" for more information.
handleParkingUpdate() is used to react to a vehicle having switched its state from driving to parking or vice versa. Depending on the current state, and on whether parked cars are allowed to communicate in the simulation, this method registers or unregisters the vehicle's NIC module at the BaseConnectionManager, which is involved in handling the actual wireless communication. For more details see this module, or follow a packet from one application layer to another (i.e. twice through the networking stack and once through the wireless transmission).

Related

Need C++ functions like handleMessage() for a single- or multi-radio wireless node interface

At the interface element level of wireless nodes:
I know handleMessage() is called by the simulation kernel when the module receives a message.
Is there a similar function that is called when a physical wireless link is established between two single- or multi-radio nodes so they can communicate? If there is no such function, how can I create one?
Thanks
There is no such thing as a physical wireless link. Radios transmit packets that may or may not be received on the other end. A physical wireless link is just an abstraction layered on top of the low-level communication.
I.e. when do you consider a physical wireless link present? When data was exchanged between the two peers? One way only, or must data travel in both directions? How will node1 know that node2 received the sent data? Should it wait for an acknowledgement? For how long? What if the acknowledgement is lost during transmission? And so on...
Providing a reliable communication channel between two nodes is the responsibility of the link layer (or MAC layer, if you want to call it that, as in Ieee80211Mac). So you should add your logic somewhere there; however, you must define that logic yourself. Take a look at handleLowerPacket() as a good place to insert your code.

Problems getting Altera's Triple Speed Ethernet IP core to work

I am using a Cyclone V on a SoCKit board (link here) (provided by Terasic), connecting an HSMC-NET daughter card (link here) to it in order to create a system that can communicate using Ethernet, with both transmitted and received traffic going through the FPGA. The problem is, I am having a really, really hard time getting this system to work using Altera's Triple Speed Ethernet core.
I am using Qsys to construct the system that contains the Triple Speed Ethernet core, instantiating it inside a VHDL wrapper that also contains an instantiation of a packet generator module, connected directly to the transmit Avalon-ST sink port of the TSE core and controlled through an Avalon-MM slave interface connected to a JTAG to Avalon Master bridge core, which has its master port exported to the VHDL wrapper as well.
Then, using System Console, I am configuring the Triple Speed Ethernet core as described in the core's user guide (link here) at section 5-26 (Register Initialization) and instruct the packet generator module (also using System Console) to start and generate Ethernet packets into the TSE core's transmit Avalon-ST sink interface ports.
Although everything is configured exactly as described in the core's user guide (linked above), I cannot get it to output anything on the MII/GMII output interfaces, nor get any of the statistics counters to increase or even change. Clearly, I am doing something wrong, or missing something, but I just can't find out what exactly it is.
Can any one please, please help me with this?
Thanks ahead,
Itamar
Starting with the basic checks:
Have you simulated it? It's not clear to me if you are just simulating or synthesizing.
If you haven't simulated, you really should. If it's not working in simulation, why would it ever work in real life?
Make sure you are using the QIP file to synthesize the design. It will automatically include your auto generated SDC constraints. You will still need to add your own PIN constraints, more on that later.
The TSE is fairly old and reliable, so the obvious first things to check are Clock, Reset, Power and Pins.
a.) Power is usually less of a problem on devkits if you have already run the demo that came with the kit.
b.) Pins can cause a whole slew of issues if they are not mapped right on this core. I'll assume you are leveraging something from Terasic. It should define a pin for reset, the input clock, and signal standards. A lot of the time this goes in the .qsf file, and you also reference the QIP file (mentioned above) in there too.
c.) Clock & reset is the more likely culprit in my mind. No activity on the interface is a clue. One way to check is to route your clocks to spare pins and o-scope them to ensure they are what you think they are. Similarly, you may want to bring your reset out to a pin and check it. MAKE SURE YOU KNOW THE POLARITY and that you haven't been using ~reset in some places and non-inverted reset in others.
Reconfig block: some Altera chips and certain versions of Quartus require you to use a reconfig block to configure the XCVR. This doesn't seem like your issue to me, though, because you say the GMII is flatlined.

Transmission of vehicular status in Veins

I want to transmit a given car's vehicular data, such as its vType, instantaneous speed and position, to an RSU in a Veins scenario.
How can I obtain the data from SUMO and send it through MiXiM methods to an RSU node?
To achieve your goal you have to use the TraCIMobility component of Veins.
You can do that by first getting a pointer to that component in the initialize() method of your node:
cModule *tmpMobility = getParentModule()->getSubmodule("veinsmobility");
mobility = dynamic_cast<Veins::TraCIMobility*>(tmpMobility);
ASSERT(mobility);
Once you have the mobility component you can query it for various data.
The type of data that it can provide can be found in TraCIMobility.h
For example, in your case you could do:
double speed = mobility->getCurrentSpeed().length(); /* magnitude of the velocity vector, i.e. the speed */
double angle = mobility->getAngleRad(); /* angle of movement in radians */
Then you can attach this data to your message and send it to the RSU of your choice.
If this exact solution does not work for you, that could be because I am using a different Veins version than yours.
However you will certainly find what you need in TraCIDemo11p.cc or TraCIDemoRSU.cc of your Veins project.
Also, TraCICommandInterface is something you should have a look at.
On the official Veins website, under the Documentation section, it says:
Application modules can use the TraCICommandInterface class and related classes, conveniently accessible from TraCIMobility, to interact with the running simulation. The following example shows how to make a vehicle aware of slow traffic on a road called Second Street, potentially causing it to change its route to avoid this road.
mobility = TraCIMobilityAccess().get(getParentModule());
traci = mobility->getCommandInterface();
traciVehicle = mobility->getVehicleCommandInterface();
traciVehicle->changeRoute("Second Street", 3600);
Some of the other vehicle related commands are setSpeed or setParking.
Similar methods are available for the whole simulation (e.g., addVehicle, addPolygon), roads (getMeanSpeed), individual lanes (getShape), traffic lights (setProgram), polygons (setShape), points of interest, junctions, routes, vehicle types, or the graphical user interface.
How to use these modules is demonstrated in the source code of the Veins tutorial example. Again, a list of all 80+ available methods can be found in TraCICommandInterface.h or the autogenerated module documentation.
A potentially related question/answer here: https://stackoverflow.com/a/29918148/4786271

CAN Bus - Engine Control

I have read a lot of postings and solutions here in stackoverflow.
I am very new to CAN networks and protocols and currently working on a project that entails communicating with the vehicle's engine control unit (ECU) to cause the vehicle to decelerate to a preset speed.
Basically, I intend to establish a node in the CAN network where I can inject packets of data to the engine ECU to cause the car to decelerate to a predefined speed.
How do I translate signals received on the CAN bus, meant for the ECU, so that the ECU can decode them?
I plan to send two speed signals to the ECU.
speedSignal_1 = current vehicle speed
speedSignal_2 = target vehicle speed.
My intention is to make the ECU force the vehicle at current vehicle speed (speedSignal_1) to reduce to the target vehicle speed (speedSignal_2).
Can you advise me on how to proceed in achieving this?
Thanks
DISCLAIMER: Experimenting with the CAN bus on a moving vehicle is extremely dangerous. If you must audit an active vehicle, make sure that it is secured in a way in which it cannot escape and you cannot lose control (raised with wheels off the ground, on rollers, etc). Use the information in this post at your own risk. In no event shall I be liable for any direct, indirect, punitive, incidental, special, or consequential damages, to property or life, whatsoever arising out of or connected with the use or misuse of this information.
Your question assumes that the model of car that you're working on has this functionality built in. If your car has cruise control, you can most likely try to figure out what CAN messages are sent out when you activate the cruise control, and modify the message to request the speed that you desire. The exact message parameters are often proprietary to the manufacturer, though, so you will most likely have to do some reverse engineering to figure out the correct messages by listening on the correct CAN bus and toggling the cruise control on/off.

OMNeT++ model asymmetric channel

I have to model a BitTorrent network, so there are a number of nodes connected to each other. Each node has a download speed, say 600 KBps, and an upload speed, say 130 KBps.
The problem is: how can I model this in OMNeT++? In the NED file I created the network this way. If A and B are nodes:
A.mygate$o++ --> {something} -->B.mygate$i++
B.mygate$o++ --> {something} -->A.mygate$i++
where mygate is an inout gate, and $i and $o are the input and output half-channels. The {something} must be a speed, but:
if I set a speed on the first line of code, this is the upload speed of A BUT also the download speed of B. This is normal, because if I download from a slow server I have a slow download. But how can I model the download speed of a peer in OMNeT++? I cannot understand this. Should I say: "allow k simultaneous downloads until I reach the download speed"? Or is that a bad approach? Can someone suggest the right approach, and whether a built-in OMNeT++ module for this already exists? I have read the manual but it is a bit confusing. Thanks for every reply.
I suggest taking a look at OverSim, which is a peer-to-peer network simulator on top of the INET framework, the framework for simulating internet-related protocols in OMNeT++.
Generally, each host should have a queue at the link layer, and the interfaces should not put further packets on the line while it is busy (how long a packet occupies the line is determined by the line's datarate and the packet's length). Once the line becomes idle, the next packet can be sent out. This is how the datarate is limited by the actual channel.
If you don't want to implement this from scratch (there is no reason to), take a look at the INET framework. You could drop in hosts and connect their PPP interfaces with the asymmetric connection you proposed in your question. The PPP interface in StandardHost will do the queueing for you, so you only have to add some applications that generate your traffic, and you are set.
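The asymmetric connection itself can be expressed directly in NED by giving each direction its own DatarateChannel (a sketch, reusing the gate names from the question; note the NED datarate unit is bits per second, so 130 KBps and 600 KBps become 1040 kbps and 4800 kbps):

```ned
connections:
    // A's upload (= B's download) is the slow direction
    A.mygate$o++ --> ned.DatarateChannel { datarate = 1040kbps; } --> B.mygate$i++;
    // B's upload toward A is limited by A's download speed here
    B.mygate$o++ --> ned.DatarateChannel { datarate = 4800kbps; } --> A.mygate$i++;
```

With a queue in front of each output gate, packets then automatically wait their turn, which is exactly the rate limiting described above.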
Still, I would take a look at OverSim, as it provides an even higher-level abstraction on top of INET (though I have no experience with it).
