How can I use queueinglib with cPacket instead of cMessage? - omnet++

I was using a very old queueinglib (maybe from ten years ago), in which Job was derived from cPacket, not cMessage.
Now I have moved from IDE version 5 to 6 and had to update queueinglib. When I did, I was very surprised to see that Job is now derived from cMessage.
In my model, I have both internal and external messages (the latter travel through datarate channels). For internal messages it is okay to use cMessage, but I need cPacket for external messages. That's why my own message type was derived from cPacket.
Now I have messages derived from cPacket, but queueinglib blocks cannot cast them to Job. How can I solve this problem? Here are some ideas that I can think of:
- I can change queueinglib entirely, but I don't want to do this to an external library. I believe it is using cMessage instead of cPacket for a reason.
- Multiple inheritance: I could derive my message type from both cMessage and cPacket, but I saw in the manual that this is not possible.
- I can create a new message when transmitting between one of my blocks and queueinglib. But then message IDs will be useless, and I will be constructing and destructing messages constantly.
So is there a better, recommended approach?

The reason why Jobs are messages in the queueinglib example is that those jobs never travel along datarate channel links, so they don't need to be packets, and the example is intentionally kept as simple as possible.
BUT: this is an example. It was never meant to be an external library consumed by other projects, so it was not designed to be extensible. You can safely copy and modify it (I recommend marking/documenting your changes in case you want to upgrade/compare the library in the future).
The easiest approach: modify Job.msg and change message Job to packet Job, and you are done. Then use Job as the base of your messages. Since packet (cPacket) itself extends cMessage, the whole queueing library will keep working just fine.
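A sketch of that one-keyword change in Job.msg (the field shown is illustrative; keep whatever fields your copy of the library actually declares):

```
// before: Job extends cMessage, so it cannot carry packet semantics
message Job
{
    int priority;
}

// after: Job extends cPacket, so it can travel over datarate channels
packet Job
{
    int priority;
}
```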
Without modifying the queueinglib sources: there are various fields in cMessage that can be (mis)used to carry a pointer. For example, you could use setContextPointer() or setControlInfo() to attach your packet's pointer (after some casting) whenever it enters the parts where you use queueinglib, and remove the pointer when it leaves. This requires a bit more work (but only in your code) and has the advantage that your network packets do not contain any of the queueinglib fields (which is a proper design), as data related to the queueing components (e.g. priority or generation) is not meant to travel between network nodes.
Also, using a proper external queueing library (like the one in INET's src/inet/queueing folder) would have been the best solution, but that ship has sailed long ago, I believe.
In short, go with the first recommendation.

Related

Which one is preferable : zmq_send or zmq_msg_send?

I am discovering zeroMQ, and I understand that zmq_send sends a buffer and zmq_msg_send sends a zmq_msg_t message.
It seems to me that it is two different ways of doing the same thing (both can send multi-part messages, etc).
What are the advantages of using zmq_msg_t structs?
The advantage is simply that your code works at a somewhat lower level, closer to the metal: calling zmq_msg_send() with a zmq_msg_t directly saves the few CPU cycles that the zmq_send() wrapper spends on preparing a zmq_msg_t struct and passing it on to ZeroMQ's internal message processing, which zmq_msg_send() does in one step.

NPAPI: data push model?

When working with NPAPI, you have the control over two functions: NPP_WriteReady & NPP_Write. This is basically a data push model.
However I need to implement support for a new file format. The library I am using takes any concrete subclass of the following source model (simplified C++ code):
struct compressed_source {
  virtual int read(char *buf, int num_bytes) = 0;
};
This model is trivial to implement when dealing with a FILE* (C) or a socket (BSD) and others, since they comply with a pull data model. However I do not see how to fulfill this pull model from the NPAPI push model.
As far as I understand, I cannot explicitly call NPP_Write within my concrete implementation of ::read(char *, int).
What is the solution here ?
EDIT:
I did not want to add too much detail, to avoid confusing answers. Just for reference, I want to build an OpenJPEG/NPAPI plugin. OpenJPEG is a huge library, and the underlying JPEG 2000 implementation really wants a pull data model, to allow fine-grained access to massive images (e.g. a specific sub-region of a 100000 x 100000 image, thanks to low-level indexing information). In other words, I really need a pull data model for the plugin interface.
Preload the file
Well, preloading the whole file would always work, but it is often not a good option. From your other questions I gather that the files/downloads in question might be rather large, and since avoiding unnecessary network traffic is a good idea, preloading the file is not really an option here.
Hack the library
If you're using some open source library, you might be able to implement a push API along or instead of the current pull API directly within the library.
Or you could implement things entirely by yourself. IIRC you're trying to decode some image format, and image formats are usually reasonably easy to implement from scratch.
Implement blocking reads by blocking the thread
You could put the image decoding into a new thread and, whenever there is not enough buffered data to fulfill a read immediately, block that thread until the data-receiving thread (the main thread in the case of NPAPI) indicates that the buffer is sufficiently filled again. This is essentially the producer/consumer problem.
Of course, you'll first need to choose your threads and synchronization primitives (C++11 std::thread, Boost threads, low-level pthreads and/or Windows threads, etc.). There are tons of related questions on SO/SE and tons of articles/postings/discussions/tutorials all over the internet.

omnet simulation of token bucket

I am developing a simulation model in OMNeT++. Basically my work is to develop something related to LTE, but first I need to develop a simple model which takes a packet from a source, stores it in a queue for some time, and delivers it to a sink.
I have developed this model and it's working fine for me.
Now I need to place a token bucket meter between the queue and the sink, to handle bursts and to send packets rejected by the meter back to the queue, something like the second attached image. I have taken this TokenBucketMeter from the SimuLTE package for OMNeT++.
When I simulate this, it is showing error like
Quote: cannot cast (queueing::Job *)net.tokenBucketMeter.job-1 to type 'cPacket *'
I am not getting where exactly the problem is. Maybe the source I am using creates Jobs, while the token bucket meter accepts only packets. If so, what type of source should I use?
Will you please clarify this? I will be very thankful.
I am using OMNeT++ in a project at the moment too. Learning to use OMNeT++ having only touched some C99 before can be a bit frustrating.
From checking the demo projects you are using as a base for your project, it looks like a Job is not a cPacket (cObject/cMessage are their only common bases), so I would not try to cast like this.
Have a look at how PassiveQueue.cc in the /queueinglib project handles Jobs - everything is passed around as a cMessage (it comes in through the method signature) and is downcast using the built-in checked cast:
void PassiveQueue::handleMessage(cMessage *msg)
{
    Job *job = check_and_cast<Job *>(msg);
    // ...
}
cPacket, which you want to use, is a child of cMessage in the inheritance hierarchy shown at this link:
http://www.omnetpp.org/doc/omnetpp/api/index.html
I am not using cPacket myself, but it seems likely, given how protocols work, that you would be able to translate a message into one or more packets.

Is there a preferred way to design signal or event APIs in Go?

I am designing a package where I want to provide an API based on the observer pattern: that is, there are points where I'd like to emit a signal that will trigger zero or more interested parties. Those interested parties shouldn't necessarily need to know about each other.
I know I can implement an API like this from scratch (e.g. using a collection of channels or callback functions), but was wondering if there was a preferred way of structuring such APIs.
In many of the languages or frameworks I've played with, there have been standard ways to build these APIs so that they behave the way users expect: e.g. the g_signal_* functions for glib-based applications, events and addEventListener() for JavaScript DOM apps, or multicast delegates for .NET.
Is there anything similar for Go? If not, is there some other way of structuring this type of API that is more idiomatic in Go?
I would say that a goroutine receiving from a channel is, to a certain extent, an analogue of an observer. An idiomatic way to expose events in Go would thus, IMHO, be to return channels from a package (function). Another observation is that callbacks are not used very often in Go programs; one of the reasons is the existence of the powerful select statement.
As a final note: some people (me too) consider GoF patterns as Go antipatterns.
Go gives you a lot of tools for designing a signal api.
First you have to decide a few things:
Do you want a push or a pull model? E.g. does the publisher push events to the subscribers, or do the subscribers pull events from the publisher?
If you want a push system, then having each subscriber give the publisher a channel to send messages on works really well. If you want a pull model, then a message box guarded with a mutex would work. Other than that, without knowing more about your requirements it's hard to give much more detail.
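The push variant can be sketched in a few lines (the Publisher/Subscribe/Publish names are illustrative, not from any standard package):

```go
package main

import "sync"

// Publisher is a push-model event source: each subscriber hands the
// publisher a channel, and the publisher pushes every event to all
// registered channels.
type Publisher struct {
	mu   sync.Mutex
	subs []chan string
}

// Subscribe registers a new subscriber and returns its receive channel.
func (p *Publisher) Subscribe() <-chan string {
	p.mu.Lock()
	defer p.mu.Unlock()
	ch := make(chan string, 8) // buffered, so slow readers don't stall Publish immediately
	p.subs = append(p.subs, ch)
	return ch
}

// Publish pushes an event to every registered subscriber.
// Note: this blocks if a subscriber's buffer is full; a real
// implementation must decide whether to drop, grow, or unregister.
func (p *Publisher) Publish(event string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, ch := range p.subs {
		ch <- event
	}
}
```

A subscriber then just ranges over (or selects on) its channel; unsubscribing would mean removing the channel from subs and closing it.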
I needed an "observer pattern" type thing in a couple of projects. Here's a reusable example from a recent project.
It's got a corresponding test that shows how to use it.
The basic theory is that an event emitter calls Submit with some piece of data whenever something interesting occurs. Any client that wants to be aware of that event Registers a channel on which it reads the event data. The channel you registered can be used in a select loop, or you can read it directly (or buffer and poll it).
When you're done, you Unregister.
It's not perfect for all cases (e.g. I may want a force-unregister type of event for slow observers), but it works where I use it.
I would say there is no standard way of doing this, because channels are built into the language. There is no channel library with standard ways of doing things with channels; there are simply channels. Having channels as built-in first-class objects frees you from having to work with standard techniques and lets you solve problems in the simplest, most natural way.
There is a basic Golang version of Node EventEmitter at https://github.com/chuckpreslar/emission
See http://itjumpstart.wordpress.com/2014/11/21/eventemitter-in-go/

Best form of IPC for a decentralized roguelike?

I've got a project to create a roguelike that in some way abstracts the UI from the engine, and the engine from map creation, line-of-sight, etc. To narrow the focus, I first want to just get the UI (player's client) and engine working.
My current idea is to make the client basically a program that decides what one character (player, monsters) will do for its turn and waits until it can move again. So each monster has a client, and so does the player. The player's client prints the map, waits for input, sends it to the engine, and tells the player what happened. The monster's client does the same except without printing the map and using AI instead of keyboard input.
Before I go any further, if this seems a somehow obfuscated way of doing things: my goal is to learn, not to write a roguelike. It's the journey, not the destination.
And so I need to choose what form of IPC fits this model best.
- My first attempt used pipes, because they're simplest. I wrote a UI for the player and a program to pipe in instructions such as where to put the map and player. While this works, it only allows one client, communicating through stdin and stdout.
- I've thought about making the engine a daemon that watches a spool where clients, when started, create unique-per-client temp files to give instructions to the engine and receive feedback.
- Lastly, I've done a little introductory programming with sockets. They seem like they might be the way to go, and would allow the game to perhaps someday be run over a network. But I'd like to use a simpler solution if possible, and since I'm unfamiliar with sockets, they are more error prone for me.
I'm always open to suggestions.
I've been playing around with using these combinations for a similar problem (multiple clients talking via a single daemon on the local box, with much of the intelligence shoved off into the clients).
- mmap for sharing large data blobs, with Unix domain sockets, message queues, or named pipes for notification
- the same, but using individual files per blob instead of munging them all together in one mmap
- the same, but without the files or mmap (in other words, more like conventional messaging)
In general I like the idea of breaking things up into separate executables this way -- it certainly makes testing easier, for instance. I think the choice of method comes down to usage patterns -- how large are messages, how persistent does the data in them need to be, can you afford the cost of multiple trips through the network stack for a socket-based message, that sort of thing. The fact that you're sticking to Linux makes things easy in terms of what's available -- you don't need to worry about portability of message queues, for instance.
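For the notification part, here is a minimal POSIX sketch of the kind of byte-stream message a Unix domain socket gives you (the function name is mine; it uses socketpair() so the example is self-contained in one process, whereas a real daemon would use socket()/bind()/listen()/accept() on an AF_UNIX address):

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Send a small notification over a connected pair of Unix domain stream
// sockets and read it back, the same channel an engine daemon and a
// client could use for turn notifications.
std::string notify_roundtrip(const std::string &msg) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return "";
    // "engine" side sends a notification...
    write(fds[0], msg.data(), msg.size());
    // ...and the "client" side receives it.
    char buf[128];
    ssize_t n = read(fds[1], buf, sizeof buf);
    close(fds[0]);
    close(fds[1]);
    return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

The same read/write calls work unchanged once you switch to a bound AF_UNIX (or even TCP) socket, which is what makes this option the easiest to later extend to play over a network.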
This one's also applicable: https://stackoverflow.com/a/1428542/1264797
