netsnmp "snmptable" command retrieves excessive MIB objects - snmp

I have implemented several custom MIBs using the net-snmp-5.7.3 package. As an example, the custom OIDs are as follows:
.1.3.6.1.4.1.XXXXXX.1.1 //scalar
.1.3.6.1.4.1.XXXXXX.1.2 //scalar
.1.3.6.1.4.1.XXXXXX.3.2 //table
.1.3.6.1.4.1.XXXXXX.6.1 //scalar
.1.3.6.1.4.1.XXXXXX.6.2 //scalar
.1.3.6.1.4.1.XXXXXX.6.3 //scalar
For string-valued MIB objects, I registered the instance as below:
/* Register a read-write handler for the fwVersion scalar instance */
reginfo = netsnmp_create_handler_registration("fwVersion", handle_fwVersion,
                                              fwVersion_oid, OID_LENGTH(fwVersion_oid),
                                              HANDLER_CAN_RWRITE);
watcher_flags = WATCHER_SIZE_STRLEN;
/* Watch the fwVersion buffer as an OCTET STRING whose length is taken from strlen() */
netsnmp_init_watcher_info6(&watcher_info, fwVersion, strlen(fwVersion),
                           ASN_OCTET_STR, watcher_flags,
                           sizeof(fwVersion), NULL);
netsnmp_register_watched_instance(reginfo, &watcher_info);
The snmpget command works correctly for the scalars. However, snmptable retrieves not only the table OIDs; the handlers of the subsequent scalar objects are also called incorrectly.
I have referred to the data_set example at http://net-snmp.sourceforge.net/dev/agent/data_set_8c-example.html
In addition, I have tried implementing the table with different mib2c table configuration templates.
Interestingly, if I retrieve the example MIBs shipped with the net-snmp package (e.g. data_set, netSnmpHostsTable), my custom handlers (which all have subsequent OIDs) are again called incorrectly.
How can I prevent snmptable from calling the handlers of other MIB objects? Is this a failure of the snmptable command?
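For reference, a table of this kind would typically be retrieved with an invocation along these lines (host, community string and MIB/table names are placeholders, not the actual ones used here):
snmptable -v2c -c public localhost MY-ENTERPRISE-MIB::exampleTable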

Related

Apache Geode - Creating region on DUnit Based Test Server/Remote Server with same code from client

I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem that I met is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. Once this is fixed, I receive:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is being executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this the correct approach?
Region<?, ?> cache = instance.getRegion(name);
if (cache == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList argList = new ArrayList();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance
    .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
    .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem that I met is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. Once this is fixed, I receive:
Once the Function is registered on the server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), and you'll also avoid the serialization filter error. As an example: FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
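A rough sketch of that approach, assuming the function exposes an ID constant as in the Geode docs example:
// On the server (e.g. in a startup hook): register the function once so clients
// can invoke it by ID without shipping the class over the wire.
FunctionService.registerFunction(new CreateRegionFunction());

// On the client: execute by ID; only the arguments are serialized.
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
FunctionService.onServers(instance)
    .setArguments(argList)
    .execute(CreateRegionFunction.ID)
    .getResult();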
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is being executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
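A hedged sketch of what that can look like with ClusterStartupRule (exact overloads and the filter pattern vary by Geode version, so treat this as illustrative only):
Properties props = new Properties();
// Allow the test's own classes through the serialization filter.
props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");

MemberVM locator = cluster.startLocatorVM(0);
MemberVM server = cluster.startServerVM(1, props, locator.getPort());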
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this the correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a Region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
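For completeness, the gfsh route looks like this (region name and type are placeholders):
gfsh> create region --name=exampleRegion --type=PARTITION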

How to use kubebuilder's client.List method?

I'm working on a custom controller for a custom resource using kubebuilder (version 1.0.8). I have a scenario where I need to get a list of all the instances of my custom resource so I can sync up with an external database.
All the examples I've seen for Kubernetes controllers use either client-go or just call the API server directly over HTTP. However, kubebuilder has also given me this client.Client object to get and list resources, so I'm trying to use that.
After creating a client instance from the Manager that is passed in (i.e. mgr.GetClient()), I tried to write some code to get the list of all the Environment resources I created.
func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) {
	// Sync environments
	// Step 1 - read all the environments the cluster knows about
	clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
	c.List(context.Background(), /* what do I put here? */, clusterEnvironments)
}
The example in the documentation for the List method shows:
c.List(context.Background, &result);
which doesn't even compile.
I saw a few methods in the client package to limit the search to particular labels, or to a specific field with a specific value, but nothing to limit the result to a specific resource kind.
Is there a way to do this via the Client object? Should I do something else entirely?
So I figured it out - the answer is to pass nil for the second parameter. The type of the output pointer determines which kind of resource is actually retrieved.
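A minimal sketch of that, assuming the kubebuilder 1.x-era signature List(ctx, opts, list) described here (nil simply means "no list options"):
clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
// The concrete list type selects the resource kind; nil options list everything.
err := c.List(context.Background(), nil, clusterEnvironments)
if err != nil {
	// handle the error
}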
According to the latest documentation, the List method is defined as follows,
List(ctx context.Context, list ObjectList, opts ...ListOption) error
If the List method you are calling has the same definition as above, your code should compile. Since it takes variadic options to set the namespace and field matches, the only mandatory arguments are the Context and the ObjectList.
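With the current signature, the call from the question would look roughly like this (the namespace option is just an example of a ListOption and can be omitted):
clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
err := c.List(context.Background(), clusterEnvironments, client.InNamespace("default"))
if err != nil {
	// handle the error
}
for _, env := range clusterEnvironments.Items {
	fmt.Println(env.Name)
}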
Ref: KubeBuilder Book

How can I globally set Oracle fetch size in my Scala Play (Anorm) application?

Our DBAs would like me to increase the fetch size from the JDBC driver's default (10). Is there a way to do this globally via application.conf, the JDBC URL or similar?
My DB calls essentially look like
object SomeController extends Controller {
  def someMethod(acronym: String) = Action { implicit request =>
    DB.withConnection { implicit c =>
      val cust = SQL("""select whatever.... where acronym = {acronym}""").on("acronym" -> acronym).apply()
But there's a lot of them over many controllers and methods.
What can be done to have a central setting?
defaultRowPrefetch is an Oracle JDBC driver property that can be set to change the default of 10 (Table 4-2, Connection Properties Recognized by Oracle JDBC Drivers).
While not explicitly documented, it looks like custom JDBC properties are set under the datasource key (see this and this).
So something like db.default.datasource.defaultRowPrefetch="100" should work.
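In application.conf that would look something like the following (URL and driver entries are placeholders; the exact key layout under datasource depends on the connection pool in use, so treat this as a sketch):
db.default.driver=oracle.jdbc.OracleDriver
db.default.url="jdbc:oracle:thin:@//dbhost:1521/SERVICE"
db.default.datasource.defaultRowPrefetch=100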
After some searching through the Oracle JDBC jar, I found:
ojdbc6-unjar $ cat ./oracle/jdbc/defaultConnectionProperties.properties
# This properties file sets the default value for connection properties.
# Entries in this file override the predefined defaults as specified
# in the JavaDoc for oracle.jdbc.OracleConnection. These defaults are
# themselves overridden by any values set via -D which are overridden
# by values passed in the Properties argument to getConnection.
#
This bit and the Javadoc do a very bad job of explaining how to derive the actual parameter name, but after many tries with various case styles, package names, etc., I found this to work:
JAVA_OPTS="-Doracle.jdbc.defaultRowPrefetch=1000" \
./activator -Dconfig.file=conf/xe.conf run
This will make BoneCP use a reasonable fetch size without any code change.

Is there any way to recreate an object from the output of its Object.inspect method?

While working with the open source ELK stack, we have run into an issue where one of the Logstash inputs, snmptrap, formats data in a way that is unusable for us. Within the SNMPv1_Trap class there is an instance variable called agent_address which is stored as an SNMP::IpAddress. For anyone familiar with the way SNMP works, the agent address is extremely important in determining where an SNMP trap originated when using trap relays on your network.
The problem can be seen when you look at an event generated by Logstash upon receiving a trap. Specifically, the inspect output of the agent_address variable dumps data that does not match anything valid.
A sample event looks kind of like this:
#<SNMP::SNMPv1_Trap:0x2db53346 #enterprise=[1.3.6.1.4.1.6827.10.17.3.1.1.1], #timestamp=#<SNMP::TimeTicks:0x2a643dd1 #value=0>, #varbind_list=[#<SNMP::VarBind:0x2d5043a5 #name=[1.0], #value=#<SNMP::Integer:0x29fb6a4a #value=1>>], #specific_trap=1000, #source_ip=\"192.168.87.228\", #agent_addr=#<SNMP::IpAddress:0x227a4011 #value=\"\\xC0\\xA8V\\xFE\">, #generic_trap=6>
We know, however, that the IpAddress object used in SNMP::SNMPv1_Trap can return a nicely formatted string representing the IPv4 address it stores.
For example:
require 'snmp'
include SNMP
address = IpAddress.new("192.168.86.254")
puts address
will yield 192.168.86.254 whereas:
require 'snmp'
include SNMP
address = IpAddress.new("192.168.86.254")
puts address.inspect
will yield:
#<SNMP::IpAddress:0x0000000168ae88 #value="\xC0\xA8V\xFE">
This is the expected behaviour of an object whose .inspect method has not been overridden.
Obviously the IPv4 address in #value is not useful to us: it has only three valid hex sequences (\xC0=192, \xA8=168, \xFE=254) and also contains an invalid hex sequence ('V'). The same thing occurs whenever an octet string representing an IPv4 address is sent as a variable binding, which suggests some strange encoding.
Unfortunately, aside from writing our own SNMP input, there is no interface-level access to this object. The object we receive via 'event' contains the inspect string, not the object itself. Therefore, the easiest apparent way to get the information we need would be to reconstruct the SNMPv1_Trap object and then make our own calls to it via Object#send.
If I have the raw, unformatted and default string dump returned by Object#inspect, is there any way to physically recreate the object used to produce this inspect dump on the fly?
For example, given the string dump:
#<Integer:0x2737476 #value=1>
is it possible to recreate an Integer object with a field whose value is 1? If this is possible, is there also a way to recreate nested objects the same way? For example, given the string:
#<SNMP::SNMPv1_Trap:0x2ef73621 #value=1, #agent_address=#<SNMP::IpAddress:0x0000000168ae88 #value="\xC0\xA8V\xFE">>
Would it be possible to have an object that looks like the following?
SNMP::SNMPv1_Trap{
#value : 1
#agent_address : SNMP::IpAddress{
#value : 1
}
}
If I have the raw, unformatted and default string dump returned by Object#inspect, is there any way to physically recreate the object used to make this inspect dump on the fly?
No. inspect is intended for debugging purposes to be read by humans.
It is not guaranteed to be machine-readable. It is not guaranteed to be the same across different Ruby versions. It is not guaranteed to be the same across different Ruby implementations. It isn't even guaranteed to be the same across different versions of the same Ruby implementation implementing the same Ruby version. Heck, I don't even think it is guaranteed to be the same across two runs!
It is not a serialization format.
There are plenty of serialization formats, either specifically for Ruby (Marshal) or generic (XML, YAML, JSON, and of course ASN.1), but inspect isn't one of them.
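For contrast, here is a minimal sketch of a real serialization round-trip using Marshal (assuming the object holds only plain data, as SNMP::IpAddress does):
require 'snmp'

address = SNMP::IpAddress.new("192.168.86.254")

blob = Marshal.dump(address)   # machine-readable byte string
restored = Marshal.load(blob)  # rebuilds an equivalent object

puts restored                  # => 192.168.86.254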

Mongoengine Django Rest Framework - Serializer Error - ReferenceField is not JSON serializable

Everything works great until the ObjectId value of the ReferenceField no longer points to a valid document. Then the ObjectId is left as the value, and json doesn't know how to serialize it.
How do I deal with invalid ReferenceFields?
E.g.
class Food(Document):
    name = StringField()
    owner = ReferenceField("Person")

class Person(Document):
    first_name = StringField()
    last_name = StringField()

...
p = Person(...)
apple = Food(name="apple", owner=p)
p.delete()  # might be the wrong method, but you get the idea
At this point, attempting to fetch a list of foods via the REST API will fail with the "is not JSON serializable" error, since apple.owner no longer points to an owner that exists.
Since you are using DRF with mongoengine, you must be using django-rest-framework-mongoengine.
Apparently, it's a bug in django-rest-framework-mongoengine. Check this open issue on GitHub, which was reported recently about the same problem:
https://github.com/umutbozkurt/django-rest-framework-mongoengine/issues/91
One way is to write your own JSONEncoder for this. This link might help.
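A minimal sketch of such an encoder (the class name is made up; hook it into wherever your responses are rendered):
import json
from bson import ObjectId

class MongoJSONEncoder(json.JSONEncoder):
    # Fall back to the string form for ObjectIds that the stock encoder rejects.
    def default(self, obj):
        if isinstance(obj, ObjectId):
            return str(obj)
        return super(MongoJSONEncoder, self).default(obj)

json.dumps({"owner": ObjectId()}, cls=MongoJSONEncoder)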
Another option is to use the json_util module of PyMongo. It provides explicit BSON conversion to and from JSON.
As per the json_util docs:
This module provides two helper methods dumps and loads that wrap the native json methods and provide explicit BSON conversion to and from json. This allows for specialized encoding and decoding of BSON documents into Mongo Extended JSON's Strict mode. This lets you encode / decode BSON documents to JSON even when they use special BSON types.
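For example (illustrative only):
from bson import ObjectId, json_util

doc = {"owner": ObjectId()}
encoded = json_util.dumps(doc)    # -> '{"owner": {"$oid": "..."}}'
decoded = json_util.loads(encoded)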
