How to implement frames and commands in OMNeT++?

I'm working on a project evaluating the TSCH feature of 802.15.4. I don't know how to implement the Enhanced Beacon and the request and confirm messages between nodes in OMNeT++.

This is quite a generic question. Generally, I would look into the current codebase of INET (version 2.99 and later recommended) and try to find similar features as a starting point.
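Very roughly, the mechanics could look like the sketch below (these are not INET's actual classes; the frame field, byte length, gate name and "beaconInterval" parameter are made up for illustration). In a real model you would define the frames in .msg files so OMNeT++ generates the accessor code, and reuse INET's existing 802.15.4 frame definitions where they fit.

#include <omnetpp.h>

using namespace omnetpp;

// Toy stand-in for an 802.15.4e Enhanced Beacon; the field and length are placeholders.
class EnhancedBeacon : public cPacket
{
  public:
    int joinMetric = 0;
    EnhancedBeacon() : cPacket("EB") { setByteLength(23); }
};

// Minimal node that periodically broadcasts an EB and answers it with a request frame.
// Assumes a NED module with an "out" gate and a "beaconInterval" parameter.
class TschNode : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        scheduleAt(simTime() + par("beaconInterval").doubleValue(), new cMessage("sendEB"));
    }
    virtual void handleMessage(cMessage *msg) override {
        if (msg->isSelfMessage()) {
            send(new EnhancedBeacon(), "out");                                  // advertise the network
            scheduleAt(simTime() + par("beaconInterval").doubleValue(), msg);   // schedule next beacon
        }
        else if (dynamic_cast<EnhancedBeacon *>(msg)) {
            send(new cPacket("AssocRequest"), "out");                           // reply with a request frame
            delete msg;
        }
        else {
            delete msg;                                                         // confirm handling would go here
        }
    }
};

Define_Module(TschNode);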

Related

Fuchsia: how to use a built-in capability in a component

I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing.
I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy.
As I understand it, using the network (in my case I'd like to utilize fuchsia.net.http.Loader) is a capability that has to be granted to a running component. Makes sense, that's pretty much the core of the OS.
I also understand that the initiating component, the one that runs my component, needs to grant this capability to my component. That's fair.
What I don't understand, and I'd very much appreciate any additional information (pretty please!), is how I can grant this capability to my component.
Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in.
What am I missing? Thanks in advance!
I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing.
Thanks for your interest in Fuchsia! First of all, if you haven't already gone through Fuchsia Fundamentals I would strongly suggest that as a starting point for many of the foundational concepts.
Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in.
This is primarily because there isn't necessarily a concept of any set of components or capabilities being "built in" to the system. The capabilities available to components in the system are entirely dependent on the rest of the components in a particular product build and how they are organized (this is called the component topology).
I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy.
The answer has a few sharp edges to it at the moment, as Fuchsia is a rapidly evolving open source project. Hopefully some of the details below will help you move forward.
Determine the capability routes
So you'll have to do a bit of work to figure out where the capability you need is provided and routed. In fact, one of the components exercises shows you how to do this for the fuchsia.net.http.Loader capability. Knowing where a capability is offered/used allows you to determine where your component would need to be instantiated to obtain the necessary capability.
You might also find some of the content in the Connect components developer guide useful in accessing the capability.
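Once the route exists, the client-side code itself is fairly small. As a rough sketch using the C++ HLCPP bindings (treat the exact class and field names as assumptions to verify against the current FIDL definitions and libraries):

#include <fuchsia/net/http/cpp/fidl.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/async-loop/default.h>
#include <lib/sys/cpp/component_context.h>

int main() {
  async::Loop loop(&kAsyncLoopConfigAttachToCurrentThread);
  auto context = sys::ComponentContext::Create();

  // Connect to the Loader protocol that was routed to this component.
  fuchsia::net::http::LoaderSyncPtr loader;
  context->svc()->Connect(loader.NewRequest());

  // Build a simple GET request; Request is a FIDL table, hence the setters.
  fuchsia::net::http::Request request;
  request.set_method("GET");
  request.set_url("http://example.com/");

  fuchsia::net::http::Response response;
  loader->Fetch(std::move(request), &response);
  // The body arrives as a zx::socket in response.body(); read it and log it here.
  return 0;
}

The component's manifest still has to declare that it uses fuchsia.net.http.Loader; the routing work described above is what makes that declaration resolvable at runtime.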
Run the component
Knowing where a capability is routed allows you to determine how to run your component. The most straightforward way of instantiating a component in the topology is to do so dynamically using ffx component. However, this requires a collection somewhere on the system with the capabilities you need. The ffx-laboratory realm where most examples are run has a very limited set of capabilities that does not include fuchsia.net.http.Loader.
You'll likely need to add your component statically to the topology using a core realm shard so that the necessary routes can be declared explicitly between the components that offer fuchsia.net.http.Loader and your component. With the component included statically in your product build, you can execute it using ffx component commands.
For more details on component execution, check out the Run components developer guide as well.
Run a CLI binary
Since this is a learning exercise, another option is to build your code as a binary that runs within the context of a component that already has the capabilities you need, rather than creating and running an entirely new component. This is commonly done for CLI tools. With the ffx component explore command and its --tools argument, you can run your code as a binary inside the existing component that provides the HTTP capability you are looking for, without needing to work through all the capability routing pieces described above.
For more details on ffx component explore, see Explore components.

Veins 802.11p CSMA/CA and retransmission

Vehicles communicate using 802.11p in a Veins 5.1, OMNeT++ 5.6.2, SUMO 1.8.0 environment.
My questions:
Do I have to implement a retransmission process (like CSMA/CA) when a collision occurs?
Or is a retransmission process (like CSMA/CA) already implemented in a library or class?
I want to use the RTS/CTS option; do I have to implement that too?
Thank you
To directly answer your question: the 802.11p modules that are included in Veins 5.1 trigger automatic retransmissions (following the 802.11 specification) for lost unicast messages if the MAC layer useAcks parameter is set to true (which is not the default). RTS/CTS is not implemented, so if you want to use only modules from Veins you would need to implement this yourself.
More generally, though, your research sounds like it might be better served if you would combine Veins with The INET Framework (via veins_inet); this would allow you to use the more general 802.11 simulation model included in The INET Framework. It includes features like block-ACKs, RTS/CTS, fragmentation and reassembly, infrastructure mode, automatic rate selection, and many more.

Using an alternative connection channel/transport for GRPC

I currently have a primitive RPC setup relying on JSON transferred over secured sockets, but I would like to switch to gRPC. Unfortunately, I also need access to AF_UNIX on Windows (which Microsoft recently started supporting, but gRPC has not implemented).
Since I have an existing working connection (managed with a different library), my preference would be to just use that in conjunction with GRPC to send/receive commands in place of my JSON parsing, but I am struggling to identify the best way to do that.
I have seen Plugging custom transport into gRPC, but this question differs in the following ways (as well as my hope for a more recent answer):
I want to avoid making changes to the core of gRPC. I'd prefer to extend it if possible from within my library, but the answer there implies adding a new transport to gRPC. If I did need to do this at the transport level, is there a mechanism to register it with gRPC after the core has been built?
I am unsure if I need to define this as a full custom transport, since I do already have an existing connection established and ready. I have seen some things that imply I could simply extend Channel, but I might be wrong.
I need to be able to support Windows, or at least modern versions of it (which means that the from_fd options gRPC provides are not available, since they are currently only implemented for POSIX).
Has anyone solved similar problems with gRPC?
I may have figured out my own answer. I seem to have been overly focused on gRPC, when the service definition component of Protobuf does not depend on it.
How can I write my own RPC Implementation for Protocol Buffers utilizing ZeroMQ is very similar to my use case, and https://developers.google.com/protocol-buffers/docs/proto#services seems to resolve my issue (it also explains why I had been mixing up the different kinds of "Channels" involved).
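For anyone finding this later, the rough shape of that approach looks like the sketch below. It is only a sketch: MyConnection stands in for whatever already-established, secured connection you have, and the generated stubs require cc_generic_services to be enabled in the .proto file.

#include <string>
#include <google/protobuf/service.h>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/message.h>

// Placeholder for the existing transport (e.g. your secured-socket wrapper).
struct MyConnection {
  // Sends (method name, serialized request) to the peer and returns the raw reply bytes.
  std::string RoundTrip(const std::string& method, const std::string& payload);
};

// Custom protobuf RpcChannel: generated service stubs funnel every call through
// CallMethod(), and we carry the bytes over the pre-existing connection instead of gRPC.
class MyChannel : public google::protobuf::RpcChannel {
 public:
  explicit MyChannel(MyConnection* conn) : conn_(conn) {}

  void CallMethod(const google::protobuf::MethodDescriptor* method,
                  google::protobuf::RpcController* controller,
                  const google::protobuf::Message* request,
                  google::protobuf::Message* response,
                  google::protobuf::Closure* done) override {
    // Frame the call as (fully qualified method name, serialized request payload).
    std::string payload;
    request->SerializeToString(&payload);
    std::string reply = conn_->RoundTrip(method->full_name(), payload);

    // Parse the peer's reply into the caller-provided response message.
    if (!response->ParseFromString(reply))
      controller->SetFailed("malformed response");
    if (done != nullptr) done->Run();
  }

 private:
  MyConnection* conn_;
};

A generated stub (e.g. MyService::Stub stub(&channel);) then dispatches all of its calls through this channel, and the server side implements the matching generated service interface.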
I welcome any improvements/suggestions, and hope that maybe this can be found in future searches by people that had the same confusion.

Add protocol to gammu

By default, gammu has support for most standard modems. I have a very particular modem with some special requirements, and I would like to add a protocol to gammu.
Is there a guide for this somewhere or someone who can list the basic steps for me?
EDIT: #user1664784 recommended looking at Kannel, and actually any system able to handle incoming and outgoing SMS is acceptable as long as it is stable. But I need to know how to modify the system so that I can handle a specific protocol. It is AT-based, but a slight dialect. So any suggestion of a system handling SMS from a device connected over a serial port is interesting. I need to find a system where someone can give me information on where in the source code I can begin adding a new AT-based protocol.
If someone have done some sample code in this area it would also be greatly appreciated.
It really depends on how different it is from standard AT commands.
If the difference is minor (e.g. it needs custom initialization), it can easily be handled with feature flags. This can be seen in ATGEN_PostConnect, which handles initialization for ZTE or Huawei devices.
If the differences are big, you will probably need to write your own driver, which can fall back to AT in some cases. Something similar can be seen in the AT OBEX driver, which switches the Bluetooth connection between OBEX and IrMC modes.
I think we used to have documentation on adding support for new devices, but I'm unable to find it right now.

Where to begin with SNMP agent implementation?

Before I start, I realise there are a few SNMP-related questions here already, but not many seem to have been answered - that could mean I'm asking in the wrong place, but I don't know where else to go at the moment.
I've been reading up as best I can on SNMP for a couple of days but am finding it difficult to get my head around what is meant to be happening. The idea is that eventually we will integrate SNMP into our Java application server, which will allow the end users to incorporate it into their pre-existing Network Management Systems (NMS).
Unfortunately I'm feeling entirely confused by what is meant to be going on. What I understood from talking to the end users (which was unfortunately before any research) was that the monitoring allows their existing NMS to give their admin guys a view of the vital statistics in a tree-type display, giving them feedback about different parts of the system at a high level and allowing them to dig down into specific subsystems.
From reading around, we would implement an 'Agent' which has several defined interfaces allowing GET requests etc. to be processed and responded to. That makes sense, but I am at a loss to work out what the format of the communication is - there don't seem to be any specific examples of what any of the messages look like or how the information is encoded.
More of my confusion, though, is regarding the Management Information Base (MIB). I had, wrongly, assumed that the interface of the agent would allow the monitored attributes to be requested and then, in turn, the values for those attributes - allowing any new Agent to be started and detected without any configuration on the NMS end (with the exception of authentication in v3). This, if I understand correctly, is not the case, and the Agent must instead define MIBs which can be used by the NMS to determine those attributes.
My confusion is increased when people start referring to thousands of existing MIBs that can be reused, which I don't understand. Is the intention that a single MIB definition can be used to describe, say, a particular attribute of a network device (something simple like "internet connected on a router: yes/no") across many different devices? If so, I don't believe that our software would allow the monitoring of anything common to any other device/system - but should we be looking for already existing MIBs? At the moment I don't really see any good rationale for such a system; surely it would be easier for the Agent to export that information itself - so I'd appreciate it if someone could enlighten me!
I think it would help if I were able to set up a simple SNMP agent and some sort of client; then I could begin to see the process and eventually inspect the communication between the two, but I am finding it difficult to find anywhere that provides any information on doing such a thing. Nagios has been recommended to us as a test 'client'/NMS, but their 'get started quick' section recommends downloading a 600 MB virtual machine - surely there is a quicker way to get started?
Any help or suggestions will be appreciated. I have been through the Wiki page, but it doesn't seem to go into much detail about the MIBs, and, not having had to deal with anything like the referenced RFCs before, I find them completely impenetrable at the moment even though they may contain all of the information. Are there any books that can be recommended for an overview and implementation of v3?
Thanks for reading and even more thanks if you think you can help!
It seems to me that you have read all the SNMP information piece by piece in a disorganized way. This is not recommended and of course leads to confusion.
What about forgetting what you have learnt so far and diving into a good book such as Essential SNMP?
http://shop.oreilly.com/product/9780596008406.do
Click the Google Preview icon to preview it please.
You cannot depend on a network forum to teach you the ABCs; I have found that to be impractical.
The communications interface is SNMP. That's the protocol used for transmission (usually on top of UDP). The thing that services information requests is an SNMP Agent. The thing that sends information requests is an SNMP Manager.
The definition of what information should be made available by the Agent, and requested by the Manager, goes in a MIB. A MIB is the "glue", a directory of what sort of things any particular system can/should offer. It maps numeric codes to names and types that allow us to make sense of the data, much like how a phone directory maps phone numbers to people's names and addresses.
Generally you would create, ship, and use your own MIBs that describe aspects specific to your own product, but you are supposed to service some standard information requests as well, which are defined in existing MIBs. Yes, there are thousands of other pre-existing MIBs, and the likelihood that you need more than one or two of these is remote. They are typically published versions of MIBs for existing products.
The conventional way to "toy around" is to install Net-SNMP (a software suite that includes an agent implementation and allows you to "bolt on" your own logic and your own MIBs fairly easily) and then examine the results using a packet capturer like Wireshark.
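To give a feel for what "bolting on your own logic" looks like, here is a rough sketch of a read-only scalar registered through the Net-SNMP agent API (the OID uses a made-up enterprise number 99999 as a placeholder; in reality you would use your own registered subtree and write a matching MIB):

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

/* Placeholder OID: .1.3.6.1.4.1.99999.1.1 under a made-up enterprise number. */
static oid my_oid[] = { 1, 3, 6, 1, 4, 1, 99999, 1, 1 };

/* Handler invoked by the agent for GET requests on the OID above. */
static int handle_my_scalar(netsnmp_mib_handler *handler,
                            netsnmp_handler_registration *reginfo,
                            netsnmp_agent_request_info *reqinfo,
                            netsnmp_request_info *requests)
{
    static long value = 42;   /* the "vital statistic" exposed to the NMS */
    if (reqinfo->mode == MODE_GET)
        snmp_set_var_typed_value(requests->requestvb, ASN_INTEGER,
                                 &value, sizeof(value));
    return SNMP_ERR_NOERROR;
}

/* Call this from the (sub)agent's initialization code. */
void init_my_mib(void)
{
    netsnmp_register_scalar(
        netsnmp_create_handler_registration("myScalar", handle_my_scalar,
                                            my_oid, OID_LENGTH(my_oid),
                                            HANDLER_CAN_RONLY));
}

Querying the value with snmpget against the running agent, and watching the exchange in Wireshark, is a quick way to see the actual SNMP PDUs and their BER encoding on the wire.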
For a fuller implementation in production you may stick with Net-SNMP, or write your own Agent software, or do what I did and create a hybrid of the two that's a little more flexible and performant but uses Net-SNMP's backend for handling all the low-level SNMP stuff.
Your first step, though, is to read a book or some other teaching material that can clear all your misconceptions, because guesswork won't cut it.
I had success using the samples from this page. Both the shell and Perl Net-SNMP code were very straightforward to implement and query.
