I'm using omnetpp-5.4.1, veins-4.7.1, and sumo-0.30.0. I want to do fuzzy clustering from the RSU in Veins. I created a new module called FCM in veins/modules/application/traci, inherited from TraCIDemo11p, and wrote the clustering code in it.
Because I want the RSU to start the clustering, I used the initialize method in TraCIDemoRSU11p to call the method inside FCM at the start of the simulation:
void TraCIDemoRSU11p::initialize(int stage) {
    BaseWaveApplLayer::initialize(stage);
    std::cout << "starting clustering";
    FCM* fcm_clustering;
    fcm_clustering->clustering();
}
When I run the program, it aborts right at the start, saying "finished with error", and stops running.
What can I do to have the RSU start the clustering at the beginning of the simulation?
Please help me to solve my problem.
Thanks.
You have declared a pointer fcm_clustering but you never initialized it, so the attempt to call a method through it ends in a memory access violation (which is why the simulation aborts with an error).
Try creating an FCM object first, for example:
FCM* fcm_clustering = new FCM();
I am a newbie working with Veins 5.2 and OMNeT++ 5.6.2.
I am working on a simulation scenario in which an emergency vehicle (e.g. an ambulance) is passing by and the RSU has to send a stop message (beacon or WSM) to the normal passenger cars/vehicles. I am creating my application file based on the example scenario.
Please find below a snippet of my code.
void MyApplLayer::handleSelfMsg(cMessage* msg)
{
    BaseFrame1609_4* wsm1 = new BaseFrame1609_4();
    sendDown(wsm1);
}

void MyApplLayer::onWSM(BaseFrame1609_4* frame)
{
    // this function is called for all the car nodes in my simulation
}

void TraCIDemoRSU11p::onWSM(BaseFrame1609_4* frame)
{
    // this function is never called for the RSU in TraCIDemoRSU11p.cc
}
handleSelfMsg() and onWSM() are called for all the cars in my application layer, but onWSM() is never invoked in TraCIDemoRSU11p.cc (as per the requirement, the RSU has to broadcast a message to stop the vehicle; for this purpose I am trying to get onWSM() invoked for the RSU in TraCIDemoRSU11p.cc).
Could it be that I am missing something in the omnetpp.ini file? Could someone please suggest a way forward? Any kind of suggestions/ideas are much appreciated.
I found the following post during my research:
RSU not receiving WSMs in Veins 4.5
The above post suggests checking whether handleLowerMsg is getting called or not.
In my scenario, handleLowerMsg (inside DemoBaseApplLayer) does call onWSM, but only for the car nodes.
I am using Spring's Reactor pattern in my web application. Internally it uses LMAX's RingBuffer implementation as one of its message queues. I was wondering if there is any way to find out the current RingBuffer occupancy dynamically. It would help me determine the number of producers and consumers needed (and their relative rates), and whether the RingBuffer as a message queue is being used optimally.
I tried the getBacklog() of the reactor.event.dispatch.AbstractSingleThreadDispatcher class, but it seems to always give the same value: the size of the RingBuffer I used while instantiating the reactor.
Any light on the problem would be greatly appreciated.
Use com.lmax.disruptor.Sequencer.remainingCapacity().
To have access to the Sequencer instance you have to create it explicitly, and the RingBuffer as well.
In my case, the initialization of the outgoing Disruptor
Disruptor<MessageEvent> outcomingDisruptor =
    new Disruptor<MessageEvent>(
        MyEventFactory.getInstance(),
        RING_BUFFER_OUT_SIZE,
        MyExecutor.getInstance(),
        ProducerType.SINGLE, new BlockingWaitStrategy());
transforms into
this.sequencer =
    new SingleProducerSequencer(RING_BUFFER_OUT_SIZE, new BlockingWaitStrategy());
RingBuffer<MessageEvent> ringBuffer =
    new RingBuffer<MessageEvent>(MyEventFactory.getInstance(), sequencer);
Disruptor<MessageEvent> outcomingDisruptor =
    new Disruptor<MessageEvent>(ringBuffer, MyExecutor.getInstance());
and then
public long getOutCapacity() {
    return sequencer.remainingCapacity();
}
UPDATE
Small bug :| remainingCapacity() returns the number of free slots, not the occupancy, so we need outMessagesCount instead of getOutCapacity:
public long outMessagesCount() {
    return RING_BUFFER_OUT_SIZE - sequencer.remainingCapacity();
}
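For intuition about what remainingCapacity() measures, here is a small self-contained sketch (plain Java, no Disruptor dependency; the class and field names are invented for illustration) of the arithmetic a single-producer sequencer performs: it tracks a publish cursor and a consumer gating sequence, and the free capacity is the buffer size minus their difference.

```java
// Minimal model of single-producer occupancy accounting.
// Illustration only; not the real com.lmax.disruptor implementation.
public class OccupancySketch {
    static final int BUFFER_SIZE = 8;  // a power of two, as in the real Disruptor
    static long cursor = -1;           // last published sequence
    static long gatingSequence = -1;   // last sequence the consumer has processed

    static long remainingCapacity() {
        long occupied = cursor - gatingSequence;  // published but not yet consumed
        return BUFFER_SIZE - occupied;
    }

    public static void main(String[] args) {
        cursor += 3;  // producer publishes 3 events
        System.out.println("occupied = " + (BUFFER_SIZE - remainingCapacity()));
        gatingSequence += 2;  // consumer processes 2 events
        System.out.println("remaining = " + remainingCapacity());
    }
}
```

This is exactly why the UPDATE above subtracts remainingCapacity() from the buffer size to obtain the number of in-flight messages.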
The latest version of reactor-core in mvnrepository (version 1.1.4.RELEASE) does not have a way to dynamically monitor the state of the message queue. However, after going through the Reactor code on GitHub, I found the TraceableDelegatingDispatcher, which allows tracing the message queue (if supported by the underlying dispatcher implementation) at runtime via its remainingSlots() method. The easiest option was to compile the source and use it.
I have the following test code:
using (ShimsContext.Create())
{
    // act
    sut.MethodCall();
}
The SUT has the following method (for MethodCall):
Dim mq As New MSMQ.MessageQueue(messageQPath)
mq.Send(mqMsg)
But I'm getting the following error:
"The queue does not exist or you do not have sufficient permissions to perform the operation."
Obviously the queue won't exist and I won't have sufficient permissions, since no queue is created on the fake message queue. Does anyone have experience working with MSMQ and Fakes, so that the call to MSMQ's Send is basically a no-op that I can verify?
The shim needs to be set up like so:
ShimMessageQueue.AllInstances.SendObject = (m, o) =>
{
    // verification code here
};
As Fakes doesn't have a concept of verifying a call directly through the framework, you just put the verification code inside the lambda for the SendObject call.
Let's see how simple a question I can ask. I have:
void TCPClient::test(const boost::system::error_code& ErrorCode)
{
    // Anything can be here
}
and I would like to call it from another class. I have a global boost::thread_group that creates a thread:
clientThreadGroup->create_thread(boost::bind(&TCPClient::test, client, /* this is where I need help */));
but I am uncertain how to call test, or whether this is even the correct way.
As an explanation of the overall project: I am creating a TCP connection between a client and a server, and I have a method send (in another class) that is called when data needs to be sent. My current goal is to be able to call test (which currently contains async_send) and send the information through the socket that is already set up. However, I am open to other ideas on how to implement this, and will probably move to a producer/consumer model if this proves too difficult.
I can use either approach for this project, but I will later have to implement listen to be able to receive control packets from the server, so if there is any advice on which method to use, I would greatly appreciate it.
boost::system::error_code err;
clientThreadGroup->create_thread(boost::bind(&TCPClient::test, client, err));
This works for me. I don't know if the error_code will actually carry an error if something goes wrong, so if someone wants to correct me there, I would appreciate it (if just for experience's sake).
Using Plone 4, I have successfully created a subscriber event to do extra processing when a custom content type is saved. I accomplished this by using the Products.Archetypes.interfaces.IObjectInitializedEvent interface.
configure.zcml
<subscriber
    for="mycustom.product.interfaces.IRepositoryItem
         Products.Archetypes.interfaces.IObjectInitializedEvent"
    handler=".subscribers.notifyCreatedRepositoryItem"
    />
subscribers.py
def notifyCreatedRepositoryItem(repositoryitem, event):
    """
    This gets called on IObjectInitializedEvent, which occurs when a new
    object is created.
    """
    # my custom processing goes here; it should be asynchronous
However, the extra processing can sometimes take too long, and I was wondering if there is a way to run it in the background, i.e. asynchronously.
Is it possible to run subscriber events asynchronously for example when one is saving an object?
Not out of the box. You'd need to add async support to your environment.
Take a look at plone.app.async; you'll need a ZEO environment and at least one extra instance. The latter will run the async jobs you push into the queue from your site.
You can then define methods to be executed asynchronously and push tasks into the queue to run them.
Example code, push a task into the queue:
from plone.app.async.interfaces import IAsyncService
from zope.component import getUtility

async = getUtility(IAsyncService)
async.queueJob(an_async_task, someobject, arg1_value, arg2_value)
and the task itself:
def an_async_task(someobject, arg1, arg2):
    # do something with someobject
where someobject is a persistent object in your ZODB. The IAsyncService.queueJob takes at least a function and a context object, but you can add as many further arguments as you need to execute your task. The arguments must be pickleable.
The task will then be executed by an async worker instance when it can, outside of the context of the current request.
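To see the shape of the pattern without a Plone setup, here is a minimal stdlib-only Python sketch (all names invented) of the same idea: the event handler only enqueues a pickleable job and returns immediately, while a separate worker executes the job outside the request.

```python
import queue
import threading

job_queue = queue.Queue()
results = []

def an_async_task(someobject, arg1, arg2):
    # Stands in for the slow subscriber work done outside the request.
    results.append((someobject, arg1, arg2))

def worker():
    # Runs in the background, like the extra async worker instance.
    while True:
        job = job_queue.get()
        if job is None:          # sentinel: shut the worker down
            break
        func, args = job
        func(*args)              # execute the task outside the "request"
        job_queue.task_done()

def queue_job(func, *args):
    # The event handler calls this and returns immediately.
    job_queue.put((func, args))

t = threading.Thread(target=worker)
t.start()
queue_job(an_async_task, "someobject", "a", "b")
job_queue.join()                 # wait for the worker (demo only)
job_queue.put(None)              # stop the worker
t.join()
print(results)  # [('someobject', 'a', 'b')]
```

In plone.app.async the worker is a separate ZEO client process rather than a thread, and the queue lives in the ZODB, but the enqueue-and-return-immediately structure is the same.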
Just to give more options: you could try collective.taskqueue for that; it is really simple and really powerful (and avoids some of the drawbacks of plone.app.async).
The description on PyPI already has enough to get you up to speed in no time, and you can use Redis for the queue management, which is a big plus.