SNMP++ (v3.2): receive informs over TCP

I'm trying to implement a simple NMS with the SNMP++ v3.2 API.
The objective is to receive SNMP informs over TCP.
The problem is that I only receive informs over UDP. I implemented an agent in Java with the SNMP4J API, but it only works when I send via UDP.
I have searched for examples, but I only find examples of an agent sending traps/informs via UDP to an SNMP++ manager.
I also found this: http://lists.agentpp.org/pipermail/agentpp/2005-October/003196.html. Is it possible that TCP communication is not yet implemented in SNMP++?
The big question is: can SNMP++ managers receive alerts via TCP? If so, does anyone have an example or tutorial to show me?

SNMP over TCP is defined in RFC 3430 as an experimental standard. It is not widely adopted, and according to its FAQ, SNMP++ does not support it at all:
http://oosnmp.net/confluence/pages/viewpage.action?pageId=7766018
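
What does work is receiving notifications over UDP. Below is a minimal sketch of a UDP trap/inform receiver built on SNMP++'s notify_register(); the empty OID/target collections mean "accept everything", and the event-loop call at the end follows the pattern of the receive_trap example shipped with SNMP++, so the exact member names may vary between SNMP++ versions and builds.

// Minimal SNMP++ notification receiver over UDP (sketch, not production code).
#include "snmp_pp/snmp_pp.h"
#include <iostream>

using namespace Snmp_pp;

// Called for every trap/inform that arrives; for informs SNMP++ sends the
// response PDU itself.
static void notify_callback(int reason, Snmp * /*session*/, Pdu &pdu,
                            SnmpTarget &target, void * /*cd*/)
{
  GenAddress addr;
  target.get_address(addr);
  std::cout << "Notification from " << addr.get_printable()
            << " (reason " << reason << ")" << std::endl;

  Vb vb;
  for (int i = 0; i < pdu.get_vb_count(); ++i) {
    pdu.get_vb(vb, i);
    std::cout << "  " << vb.get_printable_oid() << " = "
              << vb.get_printable_value() << std::endl;
  }
}

int main()
{
  Snmp::socket_startup();                 // required on Win32, harmless elsewhere

  int status;
  Snmp snmp(status);                      // session bound to a UDP socket
  if (status != SNMP_CLASS_SUCCESS) return 1;

  OidCollection oids;                     // empty = do not filter on trap OID
  TargetCollection targets;               // empty = accept any sender
  snmp.notify_register(oids, targets, notify_callback, nullptr);

  // Drive the event loop (pattern from the bundled receive_trap example;
  // threaded builds can use start_poll_thread() instead).
  while (true)
    snmp.eventListHolder->SNMPProcessEvents(1000);

  Snmp::socket_cleanup();                 // never reached in this sketch
  return 0;
}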

Related

How does gRPC achieve "bidirectional streaming RPC" like a WebSocket?

Is this bidirectional streaming native to HTTP/2? I looked at various HTTP/2 clients, but I couldn't find any example where the client and server establish a single connection and continuously push messages from both sides.
(For HTTP/2, perhaps at a lower level the communication between client and server uses just one TCP connection with all requests/responses multiplexed over it, but at the application level I can't find any example where you establish a single connection object and reuse it to push messages in both directions.)
So how does gRPC achieve "bidirectional streaming RPCs"? Specifically, this document:
https://grpc.io/docs/what-is-grpc/core-concepts/
indicates that the server side can define a bidirectional streaming RPC, which allows both the client and the server to continuously push messages and achieve WebSocket-like behavior.
Yes, bidirectional streaming is native to HTTP/2. You can read RFC 7540 for the details of how the protocol works, but basically it allows you to create several streams on a single TCP connection, and each stream can carry data in either direction independently of the others.
I'm not familiar with all of the HTTP/2 libraries out there, but I know that nghttp2 allows this in C++, and I believe Java and Go have HTTP/2 implementations in their standard libraries.
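
To make this concrete, here is a minimal sketch of the server side of a bidirectional streaming RPC using the gRPC C++ synchronous API. The Chat service, the Msg message and the generated header are hypothetical and would come from a .proto declaring rpc Talk(stream Msg) returns (stream Msg); the point is that Read() and Write() operate on one long-lived HTTP/2 stream.

// Sketch of a gRPC bidirectional-streaming handler (C++ synchronous API).
// Assumes a hypothetical proto: service Chat { rpc Talk(stream Msg) returns (stream Msg); }
#include <grpcpp/grpcpp.h>
#include <memory>
#include "chat.grpc.pb.h"   // generated from the hypothetical chat.proto

class ChatServiceImpl final : public Chat::Service {
  grpc::Status Talk(grpc::ServerContext* /*ctx*/,
                    grpc::ServerReaderWriter<Msg, Msg>* stream) override {
    Msg in;
    // Read() and Write() share the same HTTP/2 stream, so both sides can
    // keep pushing messages for as long as the stream stays open.
    while (stream->Read(&in)) {
      Msg out;
      out.set_text("echo: " + in.text());   // 'text' is a hypothetical field
      stream->Write(out);
    }
    return grpc::Status::OK;
  }
};

int main() {
  ChatServiceImpl service;
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();   // each client stream is multiplexed over an HTTP/2 connection
  return 0;
}

On the client side, stub->Talk(&context) returns a ClientReaderWriter with the mirror-image Read()/Write() pair, so either end can keep pushing messages until one of them half-closes the stream.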

Vert.x with ZeroMQ architecture

I have two apps that run on the same server. One is a C++ app and the other is a Java web server running on top of Vert.x. The web server needs to send requests to the C++ part and obtain responses. ZeroMQ seems like a performant solution for the inter-process communication, and a bridge to Vert.x exists (https://github.com/dano/vertx-zeromq), but it is not well documented.
I'm wondering whether what I have in mind can be done with this bridge:
The C++ ZeroMQ socket is a DEALER; it registers with the event bus by sending the appropriate message containing the handler address.
The web server sends data to the socket's event bus handler address and gets the response in its callback.
Does this have a chance of working, or am I misunderstanding the ZeroMQ bridge?
That sounds correct to me, but you don't need ZeroMQ: you can just use the regular TCP event bus bridge (https://vertx.io/docs/vertx-tcp-eventbus-bridge/java/), which has good documentation and support (see the sketch of the C++ side below).
I'm currently looking into the benefits of using ZeroMQ for my project and suspect it is useful for more complex topologies, like broadcasting an event without knowing who wants it (without requiring handlers to register), which Vert.x doesn't support from what I can see.
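
For illustration, here is a hedged sketch of the C++ side talking to the Vert.x TCP event bus bridge with plain POSIX sockets. It assumes the bridge listens on localhost:7000 and follows the frame format from the vertx-tcp-eventbus-bridge docs (a 4-byte big-endian length prefix followed by a JSON envelope with type/address/body); the "cpp.requests" address is made up.

// Sketch: send one "send" frame to a Vert.x TCP event bus bridge from C++.
// Frame format (per the bridge docs): 4-byte big-endian length + JSON envelope.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <iostream>
#include <string>

static bool send_frame(int fd, const std::string& json) {
  uint32_t len = htonl(static_cast<uint32_t>(json.size()));
  if (send(fd, &len, sizeof(len), 0) != static_cast<ssize_t>(sizeof(len))) return false;
  return send(fd, json.data(), json.size(), 0) == static_cast<ssize_t>(json.size());
}

int main() {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(7000);                      // bridge port (assumption)
  inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
  if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
    std::cerr << "connect failed\n";
    return 1;
  }

  // "cpp.requests" is a hypothetical event-bus address the Java side listens on.
  const std::string msg =
      R"({"type":"send","address":"cpp.requests","body":{"op":"ping"}})";
  if (!send_frame(fd, msg))
    std::cerr << "send failed\n";

  // Replies arrive the same way: a 4-byte length, then a JSON envelope.
  close(fd);
  return 0;
}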

ZeroMQ: a STREAM socket to a DEALER socket proxy

I have the following setup:
zmq::proxy( acceptor, clients, nullptr );
My acceptor is a zmq::socket_type::stream and my clients is a zmq::socket_type::dealer.
I am finding that when the other end sends a large request (~16 kB), the request gets broken up and distributed to my dealer threads in pieces. One dealer gets the head of the message, others get pieces from the middle. I am not setting any special options, so it seems like this is default ZeroMQ behaviour.
I am using ZeroMQ 4.2.2.
Is there any way to override this behaviour and guarantee delivery of complete messages to my dealer threads?
#namdam deserved [+1] for posting version details
Is there any way to override this... ?
Yes, kindly follow the API's documented rules:
A socket of type ZMQ_STREAM is used to send and receive TCP data from a non-ØMQ peer, when using the tcp:// transport. A ZMQ_STREAM socket can act as client and/or server, sending and/or receiving TCP data asynchronously.
Compatible peer sockets: none.
So either compose the proxy from compatible socket archetypes (without trying to hardwire ZMQ_STREAM to any other native ZeroMQ socket archetype), i.e. avoid using ZMQ_STREAM at all, or create a reading gateway that decodes and mediates the ZMQ_STREAM-compatible behaviour on one side and interfaces to other native ZeroMQ socket archetypes on the other side of the gateway logic (a sketch of such a gateway follows below).
If in doubt, you may enjoy reading the main conceptual differences, sketched in brief, in the [ ZeroMQ hierarchy in less than a five seconds ] section.
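
Below is a hedged sketch of such a reading gateway in C++ with a recent cppzmq, assuming the TCP peer prefixes every application message with a 4-byte big-endian length (any explicit application-level framing would do). It reassembles the raw chunks that the ZMQ_STREAM socket delivers per connection identity and forwards only complete messages to a DEALER, so downstream workers never see partial payloads.

// Sketch: ZMQ_STREAM -> DEALER gateway that forwards only complete messages.
// Assumes the TCP peer length-prefixes every application message (4 bytes, big endian).
#include <zmq.hpp>
#include <cstdint>
#include <map>
#include <string>

static bool extract(std::string& buf, std::string& out) {
  if (buf.size() < 4) return false;
  uint32_t n = (uint32_t(uint8_t(buf[0])) << 24) | (uint32_t(uint8_t(buf[1])) << 16) |
               (uint32_t(uint8_t(buf[2])) << 8)  |  uint32_t(uint8_t(buf[3]));
  if (buf.size() < 4 + n) return false;
  out = buf.substr(4, n);
  buf.erase(0, 4 + n);
  return true;
}

int main() {
  zmq::context_t ctx(1);

  zmq::socket_t acceptor(ctx, zmq::socket_type::stream);   // raw TCP side
  acceptor.bind("tcp://*:5555");

  zmq::socket_t backend(ctx, zmq::socket_type::dealer);    // worker side
  backend.bind("inproc://workers");                        // workers connect here

  std::map<std::string, std::string> buffers;              // per-connection buffers

  while (true) {
    // ZMQ_STREAM always delivers two frames: connection identity + data chunk.
    zmq::message_t id, chunk;
    (void)acceptor.recv(id, zmq::recv_flags::none);
    (void)acceptor.recv(chunk, zmq::recv_flags::none);
    if (chunk.size() == 0) continue;                       // connect/disconnect event

    std::string key(static_cast<char*>(id.data()), id.size());
    buffers[key].append(static_cast<char*>(chunk.data()), chunk.size());

    // Forward every *complete* application message to the DEALER.
    std::string msg;
    while (extract(buffers[key], msg)) {
      (void)backend.send(zmq::buffer(key), zmq::send_flags::sndmore);  // keep routing info
      (void)backend.send(zmq::buffer(msg), zmq::send_flags::none);
    }
  }
}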

How does Google protobuf's RpcController work?

I would like to know the internals of how the RPC mechanism in Google protobuf works.
What does it use, TCP or UDP? What protocol does it use to communicate between remote machines?
Protobuf's RpcController is an abstract interface that could be implemented by multiple different RPC systems using a variety of protocols.
If you're specifically asking about gRPC -- Google's Protobuf-based RPC framework -- it sends RPCs as HTTP/2 requests/responses with Protobuf-encoded bodies, over TCP. You can read all about it on the gRPC web site.
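
To make the "abstract interface" point concrete, here is a sketch using protobuf's classic generated services (option cc_generic_services = true). The SearchService/SearchRequest/SearchResponse names and their fields are hypothetical; the thing to notice is that the generated code only ever talks to the abstract RpcController/RpcChannel, and the concrete channel you plug in decides whether bytes travel over TCP, HTTP or anything else. (gRPC itself uses its own generated stubs rather than these generic services, but the layering idea is the same.)

// Sketch: classic protobuf generic services. RpcController and RpcChannel are
// abstract; the generated stub describes *what* to call, never *how* bytes move.
// Hypothetical proto (compiled with option cc_generic_services = true):
//   service SearchService { rpc Search(SearchRequest) returns (SearchResponse); }
#include <google/protobuf/service.h>
#include <google/protobuf/stubs/callback.h>
#include "search.pb.h"   // generated from the hypothetical search.proto

using google::protobuf::Closure;
using google::protobuf::NewCallback;
using google::protobuf::RpcChannel;
using google::protobuf::RpcController;

// Server side: implement the generated abstract service class.
class SearchServiceImpl : public SearchService {
 public:
  void Search(RpcController* /*controller*/, const SearchRequest* request,
              SearchResponse* response, Closure* done) override {
    response->set_result("hit for " + request->query());  // hypothetical fields
    done->Run();                     // tell the RPC system the call is finished
  }
};

void OnDone() { /* called by the RPC system when the reply arrives */ }

// Client side: the stub forwards every call to RpcChannel::CallMethod(); the
// channel implementation (supplied by an RPC library, not by protobuf) chooses
// the transport.
void CallSearch(RpcChannel* channel, RpcController* controller) {
  SearchService::Stub stub(channel);
  SearchRequest req;
  req.set_query("protobuf rpc");
  SearchResponse resp;
  stub.Search(controller, &req, &resp, NewCallback(&OnDone));
}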

Packetbeat can't analyze ICMP packets sent

I am trying to index ICMP packets into Elasticsearch using Packetbeat. I know that the current Packetbeat infrastructure only provides support for TCP and UDP plugins, so it starts at the transport layer. ICMP is one layer below (the network layer), but is there any way I could get this data indexed?
I tried adding this to packetbeat.yml:
icmp.enabled: true
This is not implemented yet, but an issue has been filed; it is still open but is being worked on.
If you don't feel like waiting and want to develop your own extension, you may do so by adding a new protocol yourself.
