I am developing a simulation model in OMNeT++. Basically my work is to develop something related to LTE, but first I need to develop a simple model which takes packets from a source, stores them in a queue for some time, and delivers them to a sink.
I have developed this model and it's working fine for me.
Now I need to place a token bucket meter between the queue and the sink, to handle bursts and to send packets rejected by the meter back to the queue, something like the second attached image. I have taken this TokenBucketMeter from the SimuLTE package for OMNeT++.
When I simulate this, it shows an error like:
Quote: cannot cast (queueing::Job *)net.tokenBucketMeter.job-1 to type 'cPacket *'
I am not getting where exactly the problem is. Maybe the source I am using creates Jobs, and the token bucket meter accepts only packets. If so, what type of source should I use?
Will you please clarify this? I will be very thankful.
I am using OMNeT++ in a project at the moment too. Learning to use OMNeT++ having only touched some C99 before can be a bit frustrating.
From checking the demo projects you are using as a base for your project, it looks like Job and cPacket do not share any useful common base other than cMessage, so I would not try to cast like this.
Have a look at how PassiveQueue.cc in the queueinglib project handles Jobs - everything is passed around as a cMessage and converted using the built-in check_and_cast:
// msg arrives via the method signature, e.g. handleMessage(cMessage *msg)
Job *job = check_and_cast<Job *>(msg);
cPacket, which you want to use, is a child of cMessage in the inheritance hierarchy shown at this link:
http://www.omnetpp.org/doc/omnetpp/api/index.html
I am not using cPackets myself, but it seems likely, given how protocols work, that you would be able to translate a message into one or more packets.
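For illustration, translating could be as simple as building a fresh cPacket from the fields of the incoming message - a rough sketch only (the 512-byte length is a placeholder, since Job carries no size information):

cPacket *toPacket(Job *job)
{
    cPacket *pkt = new cPacket(job->getName());
    pkt->setByteLength(512);                    // placeholder; Job has no length field
    pkt->setTimestamp(job->getCreationTime());  // keep the original creation time
    return pkt;
}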
Related
I was using a very old queueinglib (maybe from ten years ago), in which Job was inherited from cPacket, not cMessage.
Now I have changed my IDE version from 5 to 6 and had to update queueinglib. When I did, I was very surprised to see that Job is now inherited from cMessage.
In my model, I have both internal and external messages (through a datarate channel). For internal messages it is okay to use cMessage, but I need to use cPacket for external messages. That's why my own message type was derived from cPacket.
Now I have messages derived from cPacket, but the queueinglib blocks cannot cast them to Job. How can I solve this problem? Here are some ideas that I can think of:
-I can change queueinglib entirely but I don't want to do this to an external library. I believe it is using cMessage instead of cPacket for a reason.
-Multiple inheritance. I could derive my message type from both cMessage and cPacket, but I saw in the manual that this is not possible.
-I can create a new message when transmitting between a block of mine and queueinglib. But then message IDs will be useless, and I will be constructing and destructing messages constantly.
So is there a better, recommended approach?
The reason why Jobs are messages in the queueinglib example is that those messages never travel along datarate channel links, so they don't need to be packets, and the example is intentionally kept as simple as possible.
BUT: this is an example. It was never meant to be an external library to be consumed by other projects, so it was not designed to be extensible. You can safely copy and modify it (I recommend marking/documenting your changes in case you want to upgrade/compare the lib in the future).
The easiest approach: modify Job.msg and change message Job to packet Job, and you are done. Then use Job as the base of your messages. As cPacket extends cMessage, the whole queueing library will continue to work just fine.
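Something like this (a sketch only - the existing fields and the necessary imports are elided, and MyPacket is a made-up name):

// Job.msg, before:
message Job
{
    // ...existing fields stay as they are
}

// Job.msg, after - the generated Job class now extends cPacket:
packet Job
{
    // ...existing fields stay as they are
}

// your own .msg file, deriving your message type from Job:
packet MyPacket extends Job
{
    int seqNumber;
}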
Without modifying the queueinglib sources: there are various fields in cMessage that can be (mis)used to carry a pointer. For example, you could use setContextPointer() or setControlInfo() to store your packet's pointer (after some casting) whenever it enters the parts where you use queueinglib, and remove the pointer when it leaves. This requires a bit more work (but only in your code) and has the advantage that the network packets do not carry any of the queueinglib fields, which is the proper design, as data related to the queueing components (e.g. priority or generation) is not meant to travel between network nodes.
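A rough sketch of the context-pointer variant (myPacket and the surrounding glue code are assumptions):

// on entry into the queueinglib part: wrap the real packet in a carrier Job
Job *job = new Job("carrier");
job->setContextPointer(myPacket);   // myPacket is your cPacket *

// ...the job travels through the queueing components...

// on exit: recover the packet and discard the carrier
cPacket *pkt = static_cast<cPacket *>(job->getContextPointer());
delete job;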
Also, using a proper external queueing library (like the one in INET's src/inet/queueing folder) would have been the best solution, but that ship has sailed long ago, I believe.
In short, go with the first recommendation.
I'm working with veins and OMNeT++ in a scenario that has different types of nodes (cars, pedestrians, and others). For evaluation purposes, I'm getting the std::map using the TraCIScenarioManager::getManagedHosts method based on this post (I also answered one of my related questions).
Now, I want to check the type of each node in the scenario. To be clearer, I want to obtain some kind of list that indicates the type of each node (is it a pedestrian? Is it a bus?). Is there any way to obtain this from the map? Is there any attribute that identifies the node type?
I can already identify the type of a node through messages, by adding specific tags to them, but now I need to obtain the type of a node independently of the arrival of messages.
I really appreciate any help you can provide.
TraCIScenarioManager::getManagedHosts returns a std::map<std::string, cModule*> which maps each SUMO identifier to one OMNeT++ cModule*. Depending on how cars, buses, etc. differ in your simulation, I can think of multiple ways of figuring out what type of SUMO object a host models.
Maybe they are named differently in SUMO? Then you can use the std::string to tell them apart.
Maybe they are named differently in OMNeT++? Then you can use getFullName() of the cModule* to tell them apart.
Maybe they use different C++ classes as models for their application layers? Then you can use something like getSubmodule() of the cModule* to get a pointer to their application layer module and check if a dynamic_cast<ApplicationOfACar*> of this pointer is successful.
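As a sketch of that last option - the submodule name "appl" and the class ApplicationOfACar are placeholders that depend on your network description, and manager is your TraCIScenarioManager pointer:

std::map<std::string, cModule*> hosts = manager->getManagedHosts();
for (auto const &entry : hosts) {
    cModule *appl = entry.second->getSubmodule("appl");   // application layer submodule
    if (appl && dynamic_cast<ApplicationOfACar *>(appl) != nullptr) {
        EV << entry.first << " models a car" << endl;
    }
}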
When working with NPAPI, you have control over two functions: NPP_WriteReady & NPP_Write. This is basically a data push model.
However, I need to implement support for a new file format. The library I am using takes any concrete subclass of the following source model (simplified C++ code):
struct compressed_source {
    virtual int read(char *buf, int num_bytes) = 0;
};
This model is trivial to implement when dealing with a FILE* (C), a socket (BSD), and others, since they comply with a pull data model. However, I do not see how to fulfill this pull model from the NPAPI push model.
As far as I understand, I cannot explicitly call NPP_Write within my concrete implementation of ::read(char *, int).
What is the solution here?
EDIT:
I did not want to add too many details, to avoid confusing answers. Just for reference, I want to build an OpenJPEG/NPAPI plugin. OpenJPEG is a huge library, and the underlying JPEG 2000 implementation really wants a pull data model to allow fine-grained access to massive images (e.g. a specific sub-region of a 100000 x 100000 image, thanks to low-level indexing information). In other words, I really need a pull data model plugin interface.
Preload the file
Well, preloading the whole file is always an option that would work, but often not a good one. From your other questions I gather that the files/downloads in question might be rather large, so avoiding network traffic is desirable, and preloading the file is not really an option.
Hack the library
If you're using some open source library, you might be able to implement a push API alongside or instead of the current pull API directly within the library.
Or you could implement things entirely by yourself. IIRC you're trying to decode some image format, and image formats are usually reasonably easy to implement from scratch.
Implement blocking reads by blocking the thread
You could put the image decoding stuff into a new thread, and whenever there is not enough buffered data to fulfill a read immediately, do a blocking wait until the data-receiving thread (the main thread in the case of NPAPI) indicates that the buffer is sufficiently filled again. This is essentially the producer/consumer problem.
Of course, you'll first need to choose how to use threads and synchronization primitives (a library such as C++11 std::thread, Boost threads, low-level pthreads and/or Windows threads, etc.). There are tons of related questions on SO/SE and tons of articles/postings/discussions/tutorials all over the internet.
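A minimal sketch with C++11 primitives, with names of my own choosing: push() is called from NPP_Write on the main thread, finish() on end of stream (e.g. from NPP_DestroyStream), and read() implements compressed_source::read on the decoder thread (error handling omitted):

#include <algorithm>
#include <condition_variable>
#include <cstring>
#include <mutex>
#include <vector>

class StreamBuffer {
    std::mutex m;
    std::condition_variable cv;
    std::vector<char> buf;
    bool eof = false;
public:
    // main thread: append data as it arrives in NPP_Write
    void push(const char *data, int len) {
        std::lock_guard<std::mutex> lock(m);
        buf.insert(buf.end(), data, data + len);
        cv.notify_one();
    }
    // main thread: signal that no more data will arrive
    void finish() {
        std::lock_guard<std::mutex> lock(m);
        eof = true;
        cv.notify_one();
    }
    // decoder thread: block until data is available, then hand out up to num_bytes
    int read(char *out, int num_bytes) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !buf.empty() || eof; });
        int n = std::min<int>(num_bytes, static_cast<int>(buf.size()));
        std::memcpy(out, buf.data(), n);
        buf.erase(buf.begin(), buf.begin() + n);
        return n;   // 0 means end of stream
    }
};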
I have a very specific problem: I want to write my own DMX software to control our DMX fixtures. Does anyone know an interface to use? It would be great if there were a framework for it, so that I only have to send the channel and the value to the interface.
I noticed your question was for Mac, but I wrote a Windows specific C++ program, which could probably be easily modified. It's adapted from the C# example on Enttec's OpenUSB website. See:
https://github.com/chloelle/DMX_CPP
There's some really good information & code samples (including a working class that I wrote) here: Lighting USB OpenDMX FTD2XX DMXking
Ultimately, you end up setting byte values (between 0 and 255 [0xFF], the brightest) in a byte array.
It's fairly trivial to implement simple effects such as fades or chases.
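For example, a linear fade on one channel might look like this (sendFrame is a stand-in for whatever actually writes the array out through your interface):

#include <chrono>
#include <thread>

static unsigned char dmx[512];   // one DMX universe, all channels start at 0

// stand-in: replace the body with your interface's write call (e.g. via FTD2XX)
void sendFrame(const unsigned char *frame) { /* write 512 bytes to the widget */ }

// linearly fade a single channel from 'from' to 'to' over 'steps' frames
void fadeChannel(int channel, int from, int to, int steps)
{
    for (int i = 0; i <= steps; ++i) {
        dmx[channel] = static_cast<unsigned char>(from + (to - from) * i / steps);
        sendFrame(dmx);
        std::this_thread::sleep_for(std::chrono::milliseconds(25));   // ~40 fps
    }
}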
You would need to use a USB controller to convert your program's instructions to the actual hardware.
I suggest using a simple iPhone application talking to a web service, which then interacts with the hardware.
The code samples above are in C#, but they will show you how to interact with a DMX controller.
I've been reading about TDD, and would like to use it for my next project, but I'm not sure how to structure my classes with this new paradigm. The language I'd like to use is Java, although the problem is not really language-specific.
The Project
I have a few pieces of hardware that come with an ASCII-over-RS232 interface. I can issue simple commands, get simple responses, and control them as if from their front panels. Each one has a slightly different syntax and a very different set of commands. My goal is to create an abstraction/interface so I can control them all through a GUI and/or remote procedure calls.
The Problem
I believe the first step is to create an abstract class (I'm bad at names, how about 'Communicator'?) to implement all the stuff like Serial I/O, and then create a subclass for each device. I'm sure it will be a little more complicated than that, but that's the core of the application in my mind.
Now, for unit tests, I don't think I really need the actual hardware or a serial connection. What I'd like to do is hand my Communicators an InputStream and OutputStream (or Reader and Writer) that could come from a serial port, a file, stdin/stdout, a pipe from a test function, whatever. So, would I just have the Communicator constructor take those as inputs? If so, it would be easy to put the responsibility of setting it all up on the testing framework, but for the real thing, who makes the actual connection? A separate constructor? The function calling the constructor again? A separate class whose job it is to 'connect' the Communicator to the correct I/O streams?
Edit
I was about to rewrite the problem section in order to get answers to the question I thought I was asking, but I think I figured it out. I had (correctly?) identified two different functional areas.
1) Dealing with the serial port
2) Communicating with the device (understanding its output & generating commands)
A few months ago, I would have combined it all into one class. My first idea towards breaking away from this was to pass just the IO streams to the class that understands the device, and I couldn't figure out who would then be responsible for creating the streams.
Having done more research on inversion of control, I think I have an answer. Have a separate interface and class that solve problem #1 and pass it to the constructor of the class(es?) that solve problem #2. That way, it's easy to test both separately. #1 by connecting to the actual hardware and allowing the test framework to do different things. #2 can be tested by being given a mock of #1.
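In code, the split might look something like this - sketched in C++ for brevity (the Java version is analogous), with all names made up:

#include <string>

// problem #1: the transport, behind an interface
struct Connection {
    virtual ~Connection() {}
    virtual void writeLine(const std::string &line) = 0;
    virtual std::string readLine() = 0;
};

// problem #2: the device protocol, handed its transport via the constructor
class Communicator {
    Connection &conn;
public:
    explicit Communicator(Connection &c) : conn(c) {}
    std::string queryStatus() {
        conn.writeLine("STATUS?");   // made-up command syntax
        return conn.readLine();
    }
};

// tests hand Communicator a mock Connection; production code hands it
// a class that owns the real serial port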
Does this sound reasonable? Do I need to share more information?
With TDD, you should let your design emerge: start small, with baby steps, and grow your classes test by test, little by little.
CLARIFIED: Start with a concrete class to send one command, and unit test it with a mock or a stub. When it works well enough (perhaps not with all options), test it against your real device, once, to validate your mock/stub/simulator.
Once the class for the first command is operational, start implementing a second command the same way: first against your mock/stub, then once against the device for validation. Now, if you're seeing similarities between your two classes, you can refactor towards your abstract-class-based design - or to something different.
Sorry for being a little Linux-centric...
My favorite way of simulating gadgets is to write character device drivers that simulate their behavior. This also gives you fun abilities, like providing an ioctl() interface that makes the simulated device behave abnormally.
At that point, going from testing to the real world is only a matter of which device(s) you actually open, read and write.
It should not be too hard to mimic the behavior of your gadgets - it sounds like they take very basic instructions and return very basic responses. Again, a simple ioctl() could tell the simulated device that it's time to misbehave, so you can ensure that your code handles such events adequately. For instance, fail intentionally on every n'th instruction, where n is randomly selected upon the call to ioctl().
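From the test side, driving such a simulator might look like this - the device node and the ioctl command are entirely made up, as your driver would define its own (error handling omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

// made-up ioctl command that the simulated driver would define
#define FAKEGADGET_FAIL_EVERY _IOW('f', 1, int)

int main()
{
    int fd = open("/dev/fakegadget", O_RDWR);   // hypothetical device node
    int n = 5;
    ioctl(fd, FAKEGADGET_FAIL_EVERY, &n);       // misbehave on every 5th instruction
    write(fd, "STATUS?\n", 8);                  // exercise the failure path
    close(fd);
    return 0;
}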
After seeing your edits I think you are heading in exactly the right direction. TDD tends to drive you towards a design composed of small classes with a well-defined responsibility. I would also echo tinkertim's advice - a device simulator which you can control and "provoke" into behaving in different ways is invaluable for testing.