pysnmp Agent with HOST-RESOURCES-MIB - snmp

I'm trying to monitor a Python process using OpenNMS. To do this I need to implement an agent that supports the HOST-RESOURCES-MIB. OpenNMS checks the state of the process by inspecting the hrSwRunTable of the HOST-RESOURCES-MIB. The test is done by matching a given process name against hrSwRunName and checking the numeric value of hrSwRunState.
pysnmp provides a few examples of writing an agent, which I'm trying to modify, but I'm not having much success.
The relevant part of my code is as follows:
import logging
from pysnmp import debug
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity import engine, config
from pysnmp.entity.rfc3413 import cmdrsp, context
from pysnmp.proto.api import v2c
from pysnmp.smi import builder, instrum, exval
debug.setLogger(debug.Debug('all'))
formatting = '[%(asctime)s-%(levelname)s]-(%(module)s) %(message)s'
logging.basicConfig(level=logging.DEBUG, format=formatting, )
logging.info("Starting....")
# Create SNMP engine
snmpEngine = engine.SnmpEngine()
# Transport setup
# UDP over IPv4
config.addTransport(
    snmpEngine,
    udp.domainName,
    udp.UdpTransport().openServerMode(('mypc', 12345))
)
# SNMPv2c setup
# SecurityName <-> CommunityName mapping.
config.addV1System(snmpEngine, 'my-area', 'public')
# Allow read MIB access for this user / securityModels at VACM
config.addVacmUser(snmpEngine, 2, 'my-area', 'noAuthNoPriv', (1, 3, 6, 1, 2, 1, 25, 4, 2), (1, 3, 6, 1, 2, 1, 25, 4, 2))
# Create an SNMP context
snmpContext = context.SnmpContext(snmpEngine)
# --- define custom SNMP Table within a newly defined EXAMPLE-MIB ---
# ==================================================================
logging.debug('Loading SNMP-TARGET-MIB module...'),
mibBuilder1 = builder.MibBuilder().loadModules('SNMP-TARGET-MIB')
logging.debug('done')
logging.debug('Building MIB tree...'),
mibInstrum1 = instrum.MibInstrumController(mibBuilder1)
logging.debug('done')
logging.debug('Building table entry index from human-friendly representation...')
snmpTargetAddrEntry, = mibBuilder1.importSymbols('SNMP-TARGET-MIB', 'snmpTargetAddrEntry')
instanceId1 = snmpTargetAddrEntry.getInstIdFromIndices('my-area')
# ==================================================================
logging.debug('Loading HOST-RESOURCES-MIB module...'),
mibBuilder = builder.MibBuilder().loadModules('HOST-RESOURCES-MIB')
logging.debug('done')
logging.debug('Building MIB tree...'),
mibInstrum = instrum.MibInstrumController(mibBuilder)
logging.debug('done')
logging.debug('Building table entry index from human-friendly representation...')
# see http://www.oidview.com/mibs/0/HOST-RESOURCES-MIB.html
hostRunTable, = mibBuilder.importSymbols('HOST-RESOURCES-MIB', 'hrSWRunEntry')
instanceId = hostRunTable.getInstIdFromIndices('my-area')
logging.debug('done')
You will see that at the end of the code I'm trying to generate an instance of 'SNMP-TARGET-MIB->snmpTargetAddrEntry' and 'HOST-RESOURCES-MIB->hrSWRunEntry'. The code for the SNMP-TARGET-MIB (which is in the pysnmp documentation) works fine; however, the code that tries to generate the HOST-RESOURCES-MIB instance fails on the line instanceId = hostRunTable.getInstIdFromIndices('my-area')
The error is pyasn1.error.PyAsn1Error: Can't coerce 'my-area' into integer: invalid literal for int() with base 10: 'my-area'
Can anyone shed any light on what I'm doing wrong? I realise I'm new to SNMP, so it's quite possible it's a stupid error.

According to the HOST-RESOURCES-MIB, hrSWRunTable is indexed by the hrSWRunIndex column, whose values belong to the Integer32 type:
hrSWRunEntry OBJECT-TYPE
    SYNTAX     HrSWRunEntry
    INDEX      { hrSWRunIndex }
    ::= { hrSWRunTable 1 }

hrSWRunIndex OBJECT-TYPE
    SYNTAX     Integer32 (1..2147483647)
    ::= { hrSWRunEntry 1 }
You are trying to build an OID index from an index value of string type, not integer. That leads to the string->int conversion error:
instanceId = hostRunTable.getInstIdFromIndices('my-area')
So you probably want your first row to have 1 as an index value:
instanceId = hostRunTable.getInstIdFromIndices(1)
Here I assume that you compute instanceId for the purpose of building OIDs for your new tabular objects (e.g. MibScalarInstance).
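For illustration, a minimal sketch of that last step, assuming a made-up process name 'myprocess' and index value 1 (most hrSWRunTable columns are read-only, so attaching MibScalarInstance objects is one way to expose a row from your own agent):
MibScalarInstance, = mibBuilder.importSymbols('SNMPv2-SMI', 'MibScalarInstance')
hrSWRunIndex, hrSWRunName, hrSWRunStatus = mibBuilder.importSymbols(
    'HOST-RESOURCES-MIB', 'hrSWRunIndex', 'hrSWRunName', 'hrSWRunStatus')

instanceId = hostRunTable.getInstIdFromIndices(1)

# Export one row's worth of column instances: index, process name and run status.
# 'myprocess' is a placeholder -- use the name OpenNMS is configured to match.
mibBuilder.exportSymbols(
    '__MY_HOST_RESOURCES_MIB',
    MibScalarInstance(hrSWRunIndex.name, instanceId, hrSWRunIndex.syntax.clone(1)),
    MibScalarInstance(hrSWRunName.name, instanceId, hrSWRunName.syntax.clone('myprocess')),
    MibScalarInstance(hrSWRunStatus.name, instanceId, hrSWRunStatus.syntax.clone('running'))
)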

Related

Loading multiple CSV files (silos) to compose Tensorflow Federated dataset

I am working on pre-processed data that has already been siloed into separate CSV files to represent separate local data for federated learning.
To correctly implement federated learning with these multiple CSVs in TensorFlow Federated, I am first trying to reproduce the approach with a toy example on the iris dataset. However, when trying to use the method tff.simulation.datasets.TestClientData, I am getting the error:
TypeError: can't pickle _thread.RLock objects
The current code is as follows. First, load the three iris dataset CSV files (50 samples each) into a dictionary from the filenames iris1.csv, iris2.csv, and iris3.csv:
silos = {}
for silo in silos_files:
    silo_name = silo.replace(".csv", "")
    silos[silo_name] = pd.read_csv(silos_path + silo)
    silos[silo_name]["variety"].replace({"Setosa": 0, "Versicolor": 1, "Virginica": 2}, inplace=True)
Creating a new dict with tensors:
silos_tf = collections.OrderedDict()
for key, silo in silos.items():
    silos_tf[key] = tf.data.Dataset.from_tensor_slices(
        (silo.drop(columns=["variety"]).values, silo["variety"].values))
Finally, trying to convert the TensorFlow Dataset into a TensorFlow Federated dataset:
tff_dataset = tff.simulation.datasets.TestClientData(
    silos_tf
)
That raises the error:
TypeError Traceback (most recent call last)
<ipython-input-58-a4b5686509ce> in <module>()
1 tff_dataset = tff.simulation.datasets.TestClientData(
----> 2 silos_tf
3 )
/usr/local/lib/python3.7/dist-packages/tensorflow_federated/python/simulation/datasets/from_tensor_slices_client_data.py in __init__(self, tensor_slices_dict)
59 """
60 py_typecheck.check_type(tensor_slices_dict, dict)
---> 61 tensor_slices_dict = copy.deepcopy(tensor_slices_dict)
62 structures = list(tensor_slices_dict.values())
63 example_structure = structures[0]
...
/usr/lib/python3.7/copy.py in deepcopy(x, memo, _nil)
167 reductor = getattr(x, "__reduce_ex__", None)
168 if reductor:
--> 169 rv = reductor(4)
170 else:
171 reductor = getattr(x, "__reduce__", None)
TypeError: can't pickle _thread.RLock objects
I also tried to use a plain Python dictionary instead of OrderedDict, but the error is the same. For this experiment, I am using Google Colab with this notebook as reference, running TensorFlow 2.8.0 and TensorFlow Federated version 0.20.0. I also used these previous questions as references:
Is there a reasonable way to create tff clients datat sets?
'tensorflow_federated.python.simulation' has no attribute 'FromTensorSlicesClientData' when using tff-nightly
I am not sure if this approach generalizes well beyond the toy example, so if there is any suggestion on how to bring already-siloed data into TFF tests, I am thankful.
I did some searching of public code on GitHub using the class tff.simulation.datasets.TestClientData, and I found the following implementation (source here):
def to_ClientData(clientsData: np.ndarray, clientsDataLabels: np.ndarray,
                  ds_info, is_train=True) -> tff.simulation.datasets.TestClientData:
    """Transform dataset to be fed to fedjax

    :param clientsData: dataset for each client
    :param clientsDataLabels:
    :param ds_info: dataset information
    :param is_train: True if processing train split
    :return: dataset for each client cast into TestClientData
    """
    num_clients = ds_info['num_clients']
    client_data = collections.OrderedDict()
    for i in range(num_clients if is_train else 1):
        client_data[str(i)] = collections.OrderedDict(
            x=clientsData[i],
            y=clientsDataLabels[i])
    return tff.simulation.datasets.TestClientData(client_data)
I understood from this snippet that the tff.simulation.datasets.TestClientData class requires as its argument an OrderedDict composed of numpy arrays instead of a dict of tensors (as in my previous implementation), so I changed the code to the following:
silos_tf = collections.OrderedDict()
for key, silo in silos.items():
    silos_tf[key] = collections.OrderedDict(
        x=silo.drop(columns=["variety"]).values,
        y=silo["variety"].values)
Followed by:
tff_dataset = tff.simulation.datasets.TestClientData(
    silos_tf
)
That runs correctly and gives the following output:
>>> tff_dataset.client_ids
['iris3', 'iris1', 'iris2']
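As a quick sanity check (a hypothetical follow-up using one of the client ids listed above), the per-client dataset can be pulled back out with create_tf_dataset_for_client:
client_ds = tff_dataset.create_tf_dataset_for_client('iris1')
for sample in client_ds.take(1):
    # each element is an OrderedDict with the 'x' features and 'y' label built above
    print(sample)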

pycave kmeans-gpu disable model trainer outputs from being printed

This is a code snippet to run KMeans using GPU.
Documentation link: https://pycave.borchero.com/sites/generated/clustering/kmeans/pycave.clustering.KMeans.html
import torch
import pandas as pd  # needed for value_counts below
from pycave.clustering import KMeans

X = torch.cat([
    torch.randn(1000, 6) - 5,
    torch.randn(1000, 6),
    torch.randn(1000, 6) + 5,
])
estimator = KMeans(num_clusters=3, trainer_params=dict(gpus=1,
                                                       enable_progress_bar=0,
                                                       max_epochs=100,))
labels = estimator.fit_predict(X).numpy()
pd.value_counts(labels)
The issue is how to disable the console output from the estimator.
Current Output:
Running initialization...
{'batch_size': 3000, 'collate_fn': <function collate_tensor at 0x000002BE21221700>}
Fitting K-Means...
{'batch_size': 3000, 'collate_fn': <function collate_tensor at 0x000002BE21221700>}
{'batch_size': 1, 'sampler': None, 'batch_sampler': <pytorch_lightning.overrides.distributed.IndexBatchSamplerWrapper object at 0x000002BE593A55B0>, 'collate_fn': <function collate_tensor at 0x000002BE21221700>, 'shuffle': False, 'drop_last': False}
0 1000
2 1000
1 1000
dtype: int64
Expected Output:
0 1000
2 1000
1 1000
dtype: int64
Info regarding the trainer_params parameter:
(Optional[Dict[str, Any]]) --
Initialization parameters to use when initializing a PyTorch Lightning trainer. By default, it disables various stdout logs unless PyCave is configured to do verbose logging. Checkpointing and logging are disabled regardless of the log level.
The dictionaries that are printed should never be there; that's a bug in a dependency. It is resolved in the latest build.
As far as the PyCave logs are concerned (Running initialization... and Fitting K-Means...), you can turn them off easily by adding the following:
import logging
from pycave import set_logging_level
set_logging_level(logging.WARNING)
Note that set_logging_level(logging.WARNING) also turns off the progress bar and the model summary automatically so you don't have to set these flags explicitly.
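For example, a combined sketch based on the snippets above; the whole run then prints only the cluster counts:
import logging
import torch
import pandas as pd
from pycave import set_logging_level
from pycave.clustering import KMeans

set_logging_level(logging.WARNING)  # silences PyCave logs, progress bar and model summary

X = torch.cat([
    torch.randn(1000, 6) - 5,
    torch.randn(1000, 6),
    torch.randn(1000, 6) + 5,
])
estimator = KMeans(num_clusters=3, trainer_params=dict(gpus=1, max_epochs=100))
labels = estimator.fit_predict(X).numpy()
print(pd.value_counts(labels))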

How can you decode output sequences from TFGPT2Model?

I'm trying to get generated text from the TFGPT2Model in the Transformers library. I can see the output tensor, but I'm not able to decode it. Is the tokenizer not compatible with the TF model for decoding?
Code is:
import tensorflow as tf
from transformers import (
    TFGPT2Model,
    GPT2Tokenizer,
    GPT2Config,
)

model_name = "gpt2-medium"
config = GPT2Config.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = TFGPT2Model.from_pretrained(model_name, config=config)

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute",
                                         add_special_tokens=True))[None, :]  # Batch size 1
outputs = model(input_ids)
print(outputs[0])
result = tokenizer.decode(outputs[0])
print(result)
The resulting output is:
$ python run_tf_gpt2.py
2020-04-16 23:43:11.753181: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-04-16 23:43:11.777487: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
2020-04-16 23:43:27.617982: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-04-16 23:43:27.693316: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-04-16 23:43:27.824075: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA n
ode, so returning NUMA node zero
...
...
2020-04-16 23:43:38.149860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10565 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 0000:25:00.0, compute capability: 3.7)
2020-04-16 23:43:38.150217: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-16 23:43:38.150913: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10565 MB memory) -> physical GPU (device: 2, name: Tesla K80, pci bus id: 0000:26:00.0, compute capability: 3.7)
2020-04-16 23:43:44.438587: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
tf.Tensor(
[[[ 0.671073 0.60760975 -0.10744217 ... -0.51132596 -0.3369941
0.23458953]
[ 0.6403012 0.00396247 0.7443729 ... 0.2058892 -0.43869907
0.2180479 ]
[ 0.5131284 -0.35192695 0.12285632 ... -0.30060387 -1.0279727
0.13515341]
[ 0.3083361 -0.05588413 1.0543617 ... -0.11589152 -1.0487361
0.05204075]
[ 0.70787597 -0.40516227 0.4160383 ... 0.44217822 -0.34975922
0.02535546]
[-0.03940453 -0.1243843 0.40204537 ... 0.04586177 -0.48230025
0.5768887 ]]], shape=(1, 6, 1024), dtype=float32)
Traceback (most recent call last):
File "run_tf_gpt2.py", line 19, in <module>
result = tokenizer.decode(outputs[0])
File "/home/.../transformers/src/transformers/tokenization_utils.py", line 1605, in decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/home/.../transformers/src/transformers/tokenization_utils.py", line 1575, in convert_ids_to_tokens
index = int(index)
File "/home/.../venv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 853, in __int__
return int(self._numpy())
TypeError: only size-1 arrays can be converted to Python scalars
(I removed all the TF messages and modified paths of my environment)
Apparently, you are using the wrong GPT2 model. I tried your example using GPT2LMHeadModel, which is the same Transformer just with a language modeling head on top; it also returns prediction_scores. In addition to that, you need to use model.generate(input_ids) in order to get an output suitable for decoding. By default, a greedy search is performed.
import tensorflow as tf
from transformers import (
    TFGPT2LMHeadModel,
    GPT2Tokenizer,
    GPT2Config,
)

model_name = "gpt2-medium"
config = GPT2Config.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = TFGPT2LMHeadModel.from_pretrained(model_name, config=config)

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]  # Batch size 1
outputs = model.generate(input_ids=input_ids)
print(outputs[0])
result = tokenizer.decode(outputs[0])
print(result)
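If you want something other than greedy search, generate() also accepts the usual decoding parameters; a small sketch with purely illustrative values:
outputs = model.generate(
    input_ids=input_ids,
    max_length=50,    # maximum total length of the generated sequence
    do_sample=True,   # sample instead of greedy decoding
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))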

pyyaml parse data with tag

I have YAML data like the input below and I need the output as key-value pairs.
Input
a="""
--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess
code:
- '716'
- '718'
id:
- 488
- 499
"""
Output needed:
{'code': ['716', '718'], 'id': [488, 499]}
The default constructor was giving me an error. I tried adding a new constructor and now it's not giving me an error, but I am not able to get key-value pairs.
FYI, if I remove the !ruby/hash:ActiveSupport::HashWithIndifferentAccess line from my YAML then it gives me the desired output.
def new_constructor(loader, tag_suffix, node):
    if type(node.value)=='list':
        val=''.join(node.value)
    else:
        val=node.value
    val=node.value
    ret_val="""
    {0}
    """.format(val)
    return ret_val

yaml.add_multi_constructor('', new_constructor)
yaml.load(a)
output
"\n [(ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'code'), SequenceNode(tag=u'tag:yaml.org,2002:seq', value=[ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'716'), ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'718')])), (ScalarNode(tag=u'tag:yaml.org,2002:str', value=u'id'), SequenceNode(tag=u'tag:yaml.org,2002:seq', value=[ScalarNode(tag=u'tag:yaml.org,2002:int', value=u'488'), ScalarNode(tag=u'tag:yaml.org,2002:int', value=u'499')]))]\n "
Please suggest.
This is not a solution using PyYAML, but I recommend using ruamel.yaml instead, if for no other reason than that it's more actively maintained than PyYAML. A quote from the overview:
Many of the bugs filed against PyYAML, but that were never acted upon, have been fixed in ruamel.yaml
To load that string, you can do
import ruamel.yaml
parser = ruamel.yaml.YAML()
obj = parser.load(a) # as defined above.
I strongly recommend following @Andrew F's answer, but in case you wonder why your code did not get the proper result: that is because you don't correctly process the node under the tag in your tag handling.
Although the node's value is a list (of tuples with key-value pairs), you should test for the type of the node itself (using isinstance) and then hand it over to the "normal" mapping processing routine, as the tag is on a mapping:
import yaml
from yaml.loader import SafeLoader

a = """\
--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess
code:
- '716'
- '718'
id:
- 488
- 499
"""

def new_constructor(loader, tag_suffix, node):
    if isinstance(node, yaml.nodes.MappingNode):
        return loader.construct_mapping(node, deep=True)
    raise NotImplementedError

yaml.add_multi_constructor('', new_constructor, Loader=SafeLoader)

data = yaml.load(a, Loader=SafeLoader)
print(data)
which gives:
{'code': ['716', '718'], 'id': [488, 499]}
You should not use PyYAML's yaml.load() without specifying a loader: it is documented to be potentially unsafe and, above all, it is not necessary. Just add the new constructor to the SafeLoader.
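Since the constructor is registered on the SafeLoader, the same result can also be obtained with the safe_load shortcut:
data = yaml.safe_load(a)  # safe_load uses SafeLoader under the hood
print(data)               # {'code': ['716', '718'], 'id': [488, 499]}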

Creating SNMPD Agent - Writeable objects and more

Apologies for the long post; the majority of it is config files that need to be shown.
I've been creating my own SNMP agent. For creating my MIB and snmpd.conf file I've just searched the web for answers. For actually implementing the handlers I've used the example.c/.h found at http://www.net-snmp.org/dev/agent/example_8c_source.html
I'm using another PC (all Linux) to test my implementation and so far I've only been able to get snmpwalk/snmpget commands to work.
I've set up the WriteMethod function inside my source file for my settable objects. The problem is, I do not think this code is getting executed when trying to set the object.
Below is an example of trying to set the object:
root@jt:/usr/share/snmp/mibs# snmpset -v 2c -c communityNameHere -m MIB-NAME-HERE.txt 10.20.30.40 1.3.6.1.4.1.12345.1 s "0"
MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp
Cannot find module (MIB-NAME-HERE.txt): At line 0 in (none)
Error in packet.
Reason: notWritable (That object does not support modification)
Failed object: iso.3.6.1.4.1.12345.1
I've also tried to use snmpset without the -m option. I've tried using -m +MIB-NAME-HERE.txt as well.
Question - I have snmp.conf commented out. How can it not find the module when the MIB I specify is in /usr/share/snmp/mibs?
Below is my MIB :
MIB-NAME-HERE DEFINITIONS ::= BEGIN

IMPORTS
    MODULE-IDENTITY, OBJECT-TYPE, Integer32, enterprises,
    NOTIFICATION-TYPE                        FROM SNMPv2-SMI
    OBJECT-GROUP, NOTIFICATION-GROUP         FROM SNMPv2-CONF
    ;

testSnmp MODULE-IDENTITY
    LAST-UPDATED "201505200000Z"
    ORGANIZATION "www.example.com"
    CONTACT-INFO
        "email: support@example.com"
    DESCRIPTION
        "MIB Example."
    REVISION "201505200000Z"
    DESCRIPTION
        "version 1.0"
    ::= { enterprises 12345 }

--
-- top level structure
--
testSnmpValues OBJECT IDENTIFIER ::= { testSnmp 1 }

testSnmpValuesGroup OBJECT-GROUP
    OBJECTS { testObject }
    STATUS current
    DESCRIPTION
        "Group of all test variables."
    ::= { testSnmp 4 }

--
-- Values
--
testObject OBJECT-TYPE
    SYNTAX OCTET STRING (SIZE(1..4096))
    MAX-ACCESS read-write
    STATUS current
    DESCRIPTION
        "Test Example"
    ::= { testSnmpValues 1 }
Question - What is the purpose of:
testSnmpValues OBJECT IDENTIFIER ::= { testSnmp 1 }

testSnmpValuesGroup OBJECT-GROUP
    OBJECTS { testObject }
    STATUS current
    DESCRIPTION
        "Group of all test variables."
    ::= { testSnmp 4 }
Now for my snmpd.conf file:
###############################################################################
#
# snmpd.conf:
# Test snmpd configuration file. (See EXAMPLE.conf as a reference)
#
###############################################################################
# By default snmp looks here:
# /etc/snmp/snmpd.conf.
# Use '-C -c <configfile>' to override.
#
###############################################################################
# Access Control
###############################################################################
# sec.name source community
com2sec testall default communityNameHere
#---- Community 'communityNameHere' uses security name 'testall'. 'source' selects which IPs can connect.
####
# Second, map the security names into group names:
# sec.model sec.name
group TestGroup v1 testall
group TestGroup v2c testall
group TestGroup usm testall
####
# Third, create a view for us to let the groups have rights to:
# incl/excl subtree mask
#view all included .1 80
view testview included .1.3.6.1.4.1.12345
#---- testview - A view which only allows access to Test OIDs.
####
# Finally, grant the groups access to the 1 view with different
# write permissions:
# context sec.model sec.level match read write notif
#---- Grant read access to TEST group for all security models.
access TestGroup "" any noauth exact testview testview testview
# -----------------------------------------------------------------------------
# load the testsnmp module
dlmod testsnmp /usr/local/testsnmp.so
Question - Is there something I am missing to make an object writeable? I've seen other snmpd.conf files with different formats but I assume that shouldn't matter?
You generally don't need a MIB for net-snmp to work. It is enough to have the OID specified in the .c file.
Are you trying the snmpset/get/walk on a remote PC or on the same one?
I had to specify the following in my snmpd.conf:
agentAddress udp:161
Without it I didn't have access.
Your MIB file is missing "END" at the end; you can validate it here: simpleweb mib validation
I named my community "public" and had to add this to /etc/snmp/snmpd.conf:
com2sec ConfigUser default public
com2sec AllUser default public
group ConfigGroup v1 ConfigUser
group AllGroup v2c AllUser
Now you should be able to do your tests with v1.
I had to do export MIBS="MY-MIB", where MY-MIB.txt is my MIB file, which I put into /usr/local/share/snmp/mibs/. I don't remember exactly whether it was required for the mib2c tool or whether you can skip defining the MIBS variable.
Then you can start snmpd with the -d switch to see debug output, start your agent, and do your testing. I had to enable the ports used by snmpd in my firewall, which were blocked by default. I can test read/write on my dummy value with:
snmpget -v1 -c public localhost:10161 MY-MIB::test2.0
MY-MIB::test2.0 = INTEGER: 43 tests
snmpset -v1 -c public localhost:10161 MY-MIB::test2.0 = 123
MY-MIB::test2.0 = INTEGER: 123 tests
As long as you have a working agent, this should work. You can also use mib2c to create a simple sub-agent for your test MIB and test with it, just to make sure your config and agent are all right.
