I am working on an OMNeT++, Veins, and SUMO project. The goal of the project is to calculate when two vehicles will crash into each other. To do this, they need to communicate their positions, directions, and speeds.
I could use the manager to get all of the vehicles at once, but the goal is to use messages.
However, it is difficult to find out how exactly these messages work, because the documentation is very minimal. So I was hoping someone here could help me out.
void InteractingVehicle::onBSM(veins::DemoSafetyMessage* bsm)
{
    EV_DEBUG << "TEST" << std::endl;
}
void InteractingVehicle::handlePositionUpdate(cObject* obj)
{
    veins::DemoSafetyMessage* bsm = new veins::DemoSafetyMessage();
    populateWSM(bsm);
    sendDown(bsm); // sendDown takes ownership; sending a dup() here would leak the original
}
This doesn't do much, but it should at least show "TEST" in debug mode, which it doesn't.
Am I forgetting something?
Any help would be much appreciated.
I am a newbie working with Veins 5.2 and OMNeT++ 5.6.2.
I am working on a simulation scenario in which an emergency vehicle (e.g., an ambulance) is passing by and an RSU has to send a stop message (beacon or WSM) to normal passenger cars. I am creating my application file by referring to the example scenario.
Please find below snippet of my code.
void MyApplLayer::handleSelfMsg(cMessage* msg)
{
    BaseFrame1609_4* wsm1 = new BaseFrame1609_4();
    sendDown(wsm1);
}
void MyApplLayer::onWSM(BaseFrame1609_4* frame)
{
    // this function is called for all the car nodes in my simulation
}
void TraCIDemoRSU11p::onWSM(BaseFrame1609_4* frame)
{
    // this function is never called for the RSU in TraCIDemoRSU11p.cc
}
handleSelfMsg() and onWSM() are being called for all the cars in my application layer, but onWSM() is never invoked in TraCIDemoRSU11p.cc. As per the requirement, the RSU has to broadcast a message to stop the vehicles; for this purpose, I am trying to get the onWSM() method invoked for the RSU (in TraCIDemoRSU11p.cc).
Could it be that I am missing something in the omnetpp.ini file? Could someone please suggest a way forward? Any kind of suggestions/ideas are much appreciated.
I found the following post during my research:
RSU not receiving WSMs in Veins 4.5
The above post suggests checking whether handleLowerMsg() is getting called. From the handleLowerMsg() function (inside DemoBaseApplLayer), onWSM() is getting called, but only for the car nodes in my scenario.
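One thing worth double-checking in this situation is whether omnetpp.ini actually installs an application on the RSU module at all; in the Veins example scenario the RSU application is configured separately from the car application. A sketch of what that might look like (the module paths and applType values are assumptions based on the example configuration, adjust them to your own network):

```ini
# cars get the car application
*.node[*].applType = "MyApplLayer"
# the RSU needs its own applType entry, otherwise its onWSM() is never reached
*.rsu[*].applType = "TraCIDemoRSU11p"
```

Also make sure the RSU is within radio range of the sending cars and listening on the channel the cars transmit on.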
I am writing an application for my own purposes that aims to receive play/pause events no matter what is going on in the system. I have gotten this much working:
let commandCenter = MPRemoteCommandCenter.shared()
commandCenter.togglePlayPauseCommand.isEnabled = true
commandCenter.togglePlayPauseCommand.addTarget { (MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus in
    print("Play Pause Command")
    return .success
}
commandCenter.nextTrackCommand.isEnabled = true
commandCenter.nextTrackCommand.addTarget { (MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus in
    print("NextTrackCommand")
    return .success
}
commandCenter.previousTrackCommand.isEnabled = true
commandCenter.previousTrackCommand.addTarget { (MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus in
    print("previousTrackCommand")
    return .success
}
commandCenter.playCommand.isEnabled = true
commandCenter.playCommand.addTarget { (MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus in
    print("playCommand")
    return .success
}
MPNowPlayingInfoCenter.default().playbackState = .playing
Most of those handlers are there because apparently you will not get any notifications without having nextTrackCommand, previousTrackCommand, or playCommand implemented.
Anyway, my one issue is that as soon as you open another application that uses audio, these event handlers stop getting called, and I can't find a way to detect and fix this.
I would normally try to use AVAudioSession to declare this as a background application, but that does not seem to work. Any ideas on how I can get play/pause events no matter what state the system is in?
I would like to be able to always listen for these events OR get an indication of when someone else has taken control of the audio. Perhaps I could even re-subscribe to these play/pause events.
There's an internal queue in the system which contains all the audio event subscribers. Other applications get on top of it when you start using them.
I would like to be able to always listen for these events
There's no API for that but there's a dirty workaround. If I understand your issue correctly, this snippet:
MPNowPlayingInfoCenter.default().playbackState = .paused
MPNowPlayingInfoCenter.default().playbackState = .playing
must do the trick for you if you run it in a loop somewhere in your application.
Note that this is not 100% reliable, because:
If an event is generated between two subsequent playbackState changes right after you've switched to a different application, it will still be caught by the application in the active window;
If another application is doing the same thing, there will be a constant race condition in the queue, with an unpredictable outcome.
References:
The documentation for playbackState;
A similar question;
A bug report for mpv with a similar issue (a pre-MPRemoteCommandCenter one, but still very valuable).
OR get an indication of when someone else has taken control of the audio
As far as I know there's no public API for this in macOS.
Let me give a brief context first:
I have a scenario where the RSUs broadcast a fixed message 'RSUmessage' about every TRSU seconds. I have implemented the following code for the RSU broadcast (these fixed messages have Psid = -100 to differentiate them from other messages):
void TraCIDemoRSU11p::handleSelfMsg(cMessage* msg) {
    if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
        if (wsm->getPsid() == -100) {
            sendDown(RSUmessage->dup());
            scheduleAt(simTime() + trsu + uniform(0.02, 0.05), RSUmessage);
        }
    }
    else {
        BaseWaveApplLayer::handleSelfMsg(msg); // wsm is null in this branch, so pass msg
    }
}
A car can receive these messages from other cars as well as from RSUs. RSUs discard the messages received from cars. The cars receive multiple such messages, do some comparison work, and periodically broadcast a similar type of message, 'aggregatedMessage', per interval Tcar. aggregatedMessage also has Psid = -100, so that the message can be differentiated from other messages easily.
I am scheduling the car events using self-messages. (Though it could have been done inside handlePositionUpdate, I believe.) The handleSelfMsg of a car is the following:
void TraCIDemo11p::handleSelfMsg(cMessage* msg) {
    if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
        wsm->setSerial(wsm->getSerial() + 1);
        if (wsm->getPsid() == -100) {
            sendDown(aggregatedMessage->dup());
            //sendDelayedDown(aggregatedMessage->dup(), simTime()+uniform(0.1,0.5));
            scheduleAt(simTime() + tcar + uniform(0.01, 0.05), aggregatedMessage);
        }
        //send this message on the service channel until the counter is 3 or higher.
        //this code only runs when channel switching is enabled
        else if (wsm->getSerial() >= 3) {
            //stop service advertisements
            stopService();
            delete (wsm);
        }
        else {
            scheduleAt(simTime() + 1, wsm);
        }
    }
    else {
        BaseWaveApplLayer::handleSelfMsg(msg);
    }
}
PROBLEM: With this setting, the simulation is very slow. I get about 50 simulated seconds in 5-6 hours or more in Express mode in the OMNeT++ GUI. (Number of RSUs: 64, number of vehicles: 40, on a roughly 1 km x 1 km map.)
Also, I am referring to this post. The OP says that he got a faster simulation by removing the sending of a message after each RSU received one. In my case I cannot remove that, because I need to send out the broadcast messages after each interval.
Question: I think this slowness occurs because every node tries to sendDown messages at the beginning of each simulated second. Is it the case that OMNeT++ slows down when all vehicles and nodes schedule and send messages at the same time? (It makes sense that it would slow down, but by what degree?) There are only around 100 nodes overall in the simulation; surely it cannot be this slow.
What I tried: I used sendDelayedDown(wsm->dup(), simTime()+uniform(0.1,0.5)); to spread the sending of the messages throughout the first half of each simulated second. This seems to stop messages from piling up at the beginning of each simulated second and sped things up a bit, but not by much overall.
Can anybody please let me know if this is normal behavior or whether I am doing something wrong.
Also I apologize for the long post. I could not explain my problem without giving the context.
It seems you are flooding your network with messages: every message from an RSU gets duplicated and transmitted again by every car which has received it. Hence, the computational time increases quadratically with the number of nodes (senders of messages) in your network, since every sent message has to be handled by every node in range to receive it. The limit of 3 transmissions per message does not seem to help much and, as the comment in the code indicates, is not used at all if there is no channel switching.
Therefore, if you cannot improve or change your code to simply send fewer messages, you have to live with that. Your little tweak of sending the messages in a delayed manner only distributes them over one second but does not solve the problem of flooding.
However, there are still some hints you can follow to improve the performance of your simulation:
Compile in release mode: make MODE=release
Run your simulation in the terminal environment Cmdenv: ./run -u Cmdenv ...
If you absolutely need the graphical environment, you should at least speed up the animation by using the slider in the upper part of the interface.
Removing the simtime-resolution parameter from the omnetpp.ini file solves the problem.
It seems the simulation kernel has an issue when the channel delay does not match the simulation-time resolution.
You can verify the solution by cloning the following repository. Note that you need a functional installation of the OMNeT++ framework. Specifically, I tested this fix with OMNeT++ 5.6.2.
https://github.com/Ryuuba/flooding
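For reference, the offending entry would look something like this in omnetpp.ini (the picosecond value is only an example; whatever resolution you have set, removing the line restores the default):

```ini
[General]
# removing (or commenting out) this line resolves the issue
# simtime-resolution = ps
```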
Code I'm using:
client.blockingConnect();
try {
Wearable.MessageApi.sendMessage(client,
nodeId, path, message.getBytes("UTF-16"));
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
client.disconnect();
The variables path and message are strings that contain just what they're named after, and client and nodeId are set with this code (which, with the latest Android Wear release, needs to be modified to accommodate multiple devices, but that is not the issue I'm working on):
client = new GoogleApiClient.Builder(context)
        .addApi(Wearable.API)
        .build();
while (nodeId.length() < 1) {
    client.blockingConnect();
    Wearable.NodeApi.getConnectedNodes(client).setResultCallback(new ResultCallback<NodeApi.GetConnectedNodesResult>() {
        @Override
        public void onResult(NodeApi.GetConnectedNodesResult nodes) {
            for (Node node : nodes.getNodes()) {
                nodeId = node.getId();
                //nodeName = node.getDisplayName();
                haveId = true;
                status = ConnectionStatus.connected;
            }
        }
    });
}
client.disconnect();
The problem I'm having is that sometimes it works, sometimes quickly, other times after a long delay, and sometimes not at all. Tides, phase of the moon, humidity, butterflies flapping on the other side of the world: I'm not sure what changes. Android Wear always reports the device as connected, though. Sometimes the messages have the same values, but they still need to be handled separately, because when they happen it's important that either the watch or the mobile responds.
Is there any way to improve the reliability?
I've tried:
sendMessage(String.valueOf(System.currentTimeMillis()), "wake up!");
But that doesn't always go through either.
No, the MessageApi is inherently unreliable. Think of it as UDP: you can use it if you want to deliver the message fast and you don't mind that it may fail, because you can repeat it (for example, the user switches tracks in your music app: either it works, or he will have to press the button again).
If you need reliability, use the DataApi. It's slower, but it guarantees eventual consistency.
If you want both speed and guaranteed delivery, use both approaches: send a message and set a data item with the same token. If the message is received, keep the token and ignore the data item later. If not, the data item will eventually trigger the action.
EDIT
The documentation states that messages will be delivered to a node only if the node is connected:
Messages are delivered to connected network nodes. A message is considered successful if it has been queued for delivery to the specified node. A message will only be queued if the specified node is connected. The DataApi should be used for messages to nodes which are not currently connected (to be delivered on connection).
While coding a server supporting both TCP and UDP with the Boost library, I encountered a strange problem: after the server receives any UDP message, a call to std::cin (or std::getline) will crash if I try to put the input into a string.
This only happens after at least one UDP message has been received. I have no idea what happens here, because I hardly do anything when receiving a message. I boiled the important code down:
void AsynchronousServer::DoReceiveUDP()
{
    m_udp_socket.async_receive_from(
        boost::asio::buffer(m_udp_receive_buffer, m_udp_receive_buffer.size()),
        udp::endpoint(),
        [this](boost::system::error_code error, std::size_t bytes_transferred)
        {
        });
}
The DoReceiveUDP() method is called right when the server is up, before io_service.run(). Usually it does a bit more (e.g. calls itself again), but for testing purposes I commented everything out so that it really does nothing more than receive once. m_udp_receive_buffer is a std::array<char, 8196>, an attribute of the AsynchronousServer class that is not used anywhere else.
In the main thread, this is all I really do after setting up the server:
while (true)
{
    std::string message;
    std::getline(std::cin, message); // On this line the program crashes
    //server.SendMessageTCP(1, message);
}
Now, as I said, the crash (the debug message says buffer overflow) only happens after a message was received via UDP. My server also reads TCP messages via async_read; this does not provoke the error, though.
I also tested storing the getline input in a constant-sized array, which works fine. But I can't really do that, since I don't know how long the message is then, which means the buffer is filled with a lot of useless characters when I send the message. Besides, I don't really feel safe with strange stuff like that happening and would rather solve the problem than bypass it.
Do any of you have some ideas on what the problem could be here? If you need more code, just ask, but I think I already posted everything relevant. :)
EDIT: I commented out the error code and bytes transferred too, but they are in the "full" version. I don't get any errors, and bytes transferred is exactly the length of the message.
After some more tests I can at least guess a little more. The problem seems to occur if I am expected to enter input via cin and, during this wait, a message is received.
E.g. if I do this:
while (true)
{
    std::string message;
    boost::this_thread::sleep(boost::posix_time::seconds(3));
    std::getline(std::cin, message);
}
and the client sends a UDP message within the three seconds the thread sleeps, everything goes fine. If the three seconds pass and THEN the message is received, it crashes as before.
However, there is one really strange behaviour: after I have sent a UDP message within these three seconds, the program won't crash anymore at all, even if I wait with the next message until the thread has reached getline again. I have no idea why that happens...
Alright, so I found a "solution" for this problem. I still don't know why it happens, whether this really is a solution, or whether I'll run into other problems later.
Also, I have no idea why this solution works. :D
Anyway, it works if the buffer is not a member variable but created anew for every call of DoReceiveUDP():
void AsynchronousServer::DoReceiveUDP()
{
    // note: the shared_ptr must actually be allocated; a default-constructed
    // (null) shared_ptr would make *udp_receive_buffer undefined behaviour
    auto udp_receive_buffer = std::make_shared<std::array<char, 8192>>();
    m_udp_socket.async_receive_from(
        boost::asio::buffer(*udp_receive_buffer, udp_receive_buffer->size()),
        udp::endpoint(),
        boost::bind<void>([this](boost::system::error_code error, std::size_t bytes_transferred,
                                 std::shared_ptr<std::array<char, 8192>> udp_receive_buffer)
        {
        }, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred, udp_receive_buffer));
}