PySNMP Trap OID string value is getting converted to hex

I'm trying to write a script that sends a trap when a syslog message is received.
When I use a short literal string it works well, but when the string gets bigger, or when I use a string variable as the value of an OID, the trap is sent with the value converted to hex.
Here is a sample of my code.
from pysnmp import debug
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntforg
from pysnmp.proto.api import v2c
src_host = "192.168.1.100"
dst_host = "192.168.1.101"
dst_port = 162
syslog_msg = ("Dec 13 21:19:10 amt-srv-co7 This is a syslog test message")
snmp_engine = engine.SnmpEngine()
config.addV1System(snmp_engine, "my-area", "public", transportTag="all-my-managers")
config.addTargetParams(snmp_engine, "my-creds", "my-area", "noAuthNoPriv", 1)
config.addTransport(snmp_engine, udp.domainName, udp.UdpSocketTransport().openClientMode())
config.addTargetAddr(snmp_engine, "my-nms", udp.domainName, (dst_host, dst_port), "my-creds", tagList="all-my-managers", sourceAddress=(src_host, 0))
config.addNotificationTarget(snmp_engine, "my-notification", "my-filter", "all-my-managers", "trap")
config.addContext(snmp_engine, "")
config.addVacmUser(snmp_engine, 2, "my-area", "noAuthNoPriv", (), (), (1,3,6))
ntf_org = ntforg.NotificationOriginator()
ostr = v2c.OctetString(syslog_msg) #THE PROBLEM IS HERE!
# ostr = v2c.OctetString("Short message is OK")
# ostr = v2c.OctetString("A long message, not much longer than is converted to a hex string.")
varBinds = [((1, 3, 6, 1, 6, 3, 1, 1, 4, 1, 0), v2c.ObjectIdentifier((1, 3, 6, 1, 2, 1, 192, 0, 1))), ((1, 3, 6, 1, 2, 1, 192, 1, 2, 1, 11), ostr)]
def cb_fun(snmp_engine, send_request_handle, error_indication,
           error_status, error_index, var_binds, cb_ctx):
    # minimal callback (the original snippet references cb_fun without defining it)
    if error_indication:
        print('Notification not sent: %s' % error_indication)
send_request_handle = ntf_org.sendVarBinds(snmp_engine, "my-notification", None, "", varBinds, cb_fun)
snmp_engine.transportDispatcher.runDispatcher()
I tried changing the OID to something that would represent a string, but I get the same issue.
I believe the issue happens when ntf_org.sendVarBinds is called... but maybe there is just something I'm doing wrong.
Also, if I just run snmptrap from the command line I get the text in string format.
snmptrap -v 2c -c public 192.168.1.101 '1.3.6.1.6.3.1.1.4.1.0' '1.3.6.1.2.1.192.0.1' '1.3.6.1.2.1.192.1.2.11' s 'Dec 13 21:19:10 amt-srv-co7 This is a syslog test message'
But I get different Trap Types.
For the script I get a Trap Type of 1.3.6.1.2.1.192.1.2.1.6.11
For the snmptrap command I get 1.3.6.1.2.1.6.192
This is what I get when I use a literal string inside the script
Unknown alert received from device amt-srv-co7 of type Host_Device. Device Time 0+00:00:00. (Trap type 1.3.6.1.2.1.192.1.2.1.6.11)
Trap var bind data: OID: 1.3.6.1.2.1.1.3.0 Value: 0 OID: 1.3.6.1.6.3.1.1.4.1.0 Value: 1.3.6.1.2.1.192.1.2.1.11 OID: 1.3.6.1.2.1.192.1.2.1.11 Value: Dec 13 21:19:10 amt-srv-co7 This is a syslog test message
And this is when I pass the string as a variable:
Unknown alert received from device amt-srv-co7 of type Host_Device. Device Time 0+00:00:00. (Trap type 1.3.6.1.2.1.192.1.2.1.6.11)
Trap var bind data: OID: 1.3.6.1.2.1.1.3.0 Value: 0 OID: 1.3.6.1.6.3.1.1.4.1.0 Value: 1.3.6.1.2.1.192.1.2.1.11 OID: 1.3.6.1.2.1.192.1.2.1.11 Value: 44.65.63.20.31.33.20.32.31.3A.31.39.3A.31.30.20.61.6D.74.2D.73.72.76.2D.63.6F.37.20.54.68.69.
Also the debug from pysnmp for the literal string includes:
2019-12-17 10:14:59,600 pysnmp: sendVarBinds: final varBinds [(<ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.2.1.1.3.0]>, <SysUpTime value object, tagSet <TagSet object, tags 64:0:3>, subtypeSpec <ConstraintsIntersection object, consts <ValueRangeConstraint object, consts 0, 4294967295>>, payload [0]>), (<ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.6.3.1.1.4.1.0]>, <ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.2.1.192.1.2.1.11]>), (<ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.2.1.192.1.2.1.11]>, <OctetString value object, tagSet <TagSet object, tags 0:0:4>, subtypeSpec <ConstraintsIntersection object, consts <ValueSizeConstraint object, consts 0, 65535>>, encoding iso-8859-1, payload [Dec 13 21:19:10 ...log test message]>)]
Also the debug from pysnmp for the variable includes:
2019-12-17 10:15:42,987 pysnmp: sendVarBinds: final varBinds [(<ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.2.1.1.3.0]>, <SysUpTime value object, tagSet <TagSet object, tags 64:0:3>, subtypeSpec <ConstraintsIntersection object, consts <ValueRangeConstraint object, consts 0, 4294967295>>, payload [0]>), (<ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.6.3.1.1.4.1.0]>, <ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.2.1.192.1.2.1.11]>), (<ObjectIdentifier value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.2.1.192.1.2.1.11]>, <OctetString value object, tagSet <TagSet object, tags 0:0:4>, subtypeSpec <ConstraintsIntersection object, consts <ValueSizeConstraint object, consts 0, 65535>>, encoding iso-8859-1, payload [0x44656320313320...6d6573736167650a]>)]
Can you guys give me a tip?
Thanks!

How do you know the string is getting hexified?
The thing is, pyasn1's OctetString object prints itself hexified if the payload is not all-ASCII. This does not depend on the length, solely on the printability of the contents.
However, the above-mentioned behavior only affects the object's printout; the payload itself is never changed. Having said that, I am wondering if you are actually getting corrupted data on the receiving end?
You can get consistent bytes (on Py3) by calling .asOctets() on the OctetString object.
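For example, a quick sketch using the v2c API imported in the question (note that the debug dump of the variable-based payload above ends in 0x0a, a trailing newline, which is exactly the kind of non-printable byte that triggers the hex printout):
from pysnmp.proto.api import v2c

ok = v2c.OctetString("This is a syslog test message")
print(ok.prettyPrint())   # all printable ASCII -> shown as text

bad = v2c.OctetString("This is a syslog test message\n")  # trailing newline
print(bad.prettyPrint())  # non-printable byte present -> shown hexified
print(bad.asOctets())     # the raw payload, identical bytes either way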

Related

How to "inspect to file" (or to string) in Elixir?

In Elixir, we can IO.inspect anyStructure to get anyStructure's internals printed to output. Is there a similar method to output it to a file (or, as a more flexible solution, to a string)?
I've looked through some articles on debugging and io but don't see a solution. I've also tried
{:ok, file} = File.open("test.log", [:append, {:delayed_write, 100, 20}])
structure = %{ a: 1, b: 2 }
IO.binwrite(file, structure)
File.close file
but that results in
no function clause matching in IO.binwrite/2 [...]
def binwrite(device, iodata) when is_list(iodata) or is_binary(iodata)
I’ve also googled some "elixir serialize" and "elixir object to string", but haven't found anything useful (like :erlang.term_to_binary which returns, well, binary). Is there a simple way to get the same result that IO.inspect prints, into a file or a string?
There is already an inspect/2 function (not the same as IO.inspect); just go with it:
#> inspect({1,2,3})
"{1, 2, 3}"
#> h inspect/2
def inspect(term, opts \\ [])
#spec inspect(
Inspect.t(),
keyword()
) :: String.t()
Inspects the given argument according to the Inspect protocol. The second
argument is a keyword list with options to control inspection.
You can do whatever you wish with the string afterwards.
You can give IO.inspect an additional param to tell it where to write to:
{:ok, pid} = StringIO.open("")
IO.inspect(pid, %{test: "data"}, label: "IO.inspect options work too \o/")
{:ok, {_in, out}} = StringIO.close(pid)
out # "IO.inspect options work too o/: %{test: \"data\"}\n"
It accepts a pid of a process to write to. StringIO provides such a process, returning you a string on close.
In Elixir, we can IO.inspect anyStructure to get anyStructure's internals printed to output.
This is not quite true; IO.inspect uses the Inspect protocol. What you see is not the internals of the struct, but whatever that struct's implementation of the Inspect protocol is written to produce. There are different options you can give to inspect, defined in Inspect.Opts, one of them is structs: false, which will print structs as maps.
For example, inspecting a range struct:
iex> inspect(1..10)
"1..10"
iex> inspect(1..10, structs: false)
"%{__struct__: Range, first: 1, last: 10, step: 1}"
To answer your question and to add to the other answers, here is a method that uses File.open!/3 to reuse an open file and log multiple inspect calls to the same file, then close the file:
File.open!("test.log", [:write], fn file ->
IO.inspect(file, %{ a: 1, b: 2 }, [])
IO.inspect(file, "logging a string", [])
IO.inspect(file, DateTime.utc_now!(), [])
IO.inspect(file, DateTime.utc_now!(), structs: false)
end)
This produces the following test.log file:
%{a: 1, b: 2}
"logging a string"
~U[2022-04-29 09:51:46.467338Z]
%{
  __struct__: DateTime,
  calendar: Calendar.ISO,
  day: 29,
  hour: 9,
  microsecond: {485474, 6},
  minute: 51,
  month: 4,
  second: 46,
  std_offset: 0,
  time_zone: "Etc/UTC",
  utc_offset: 0,
  year: 2022,
  zone_abbr: "UTC"
}
You simply need to combine inspect/2, which returns a binary, with File.write/3 or any other function that dumps to a file:
File.write("test.log", inspect(%{a: 1, b: 2}, limit: :infinity))
Note the limit: :infinity option; without it, long structures will be truncated, since the default is tuned for readability when inspecting to stdout.

QDataStream readQString() How to read utf8 String

I am trying to decode UDP packet data from an application which encoded the data using Qt's QDataStream methods, but I'm having trouble decoding the string fields. The docs say the data was encoded in UTF-8. The Python QDataStream class only has a readQString() method. Numbers decode fine, but the stream pointer gets messed up as soon as the first string decodes improperly.
How can I decode these UTF-8 strings?
I am using some documentation from the source project to interpret the encoding:
wsjtx-2.2.2.tgz
NetworkMessage.hpp Description in the header file
Header:
32-bit unsigned integer magic number 0xadbccbda
32-bit unsigned integer schema number
There is a heartbeat message, for example, described with comments like this:
Heartbeat Out/In 0 quint32
Id (unique key) utf8
Maximum schema number quint32
version utf8
revision utf8
Example data from the socket when a heartbeat message is received:
b'\xad\xbc\xcb\xda\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x06WSJT-X\x00\x00\x00\x03\x00\x00\x00\x052.1.0\x00\x00\x00\x0624fcd1'
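For reference, the fixed header of that datagram can be checked without Qt at all; a minimal sketch using only the standard library:
import struct

# the example datagram from above
data = b'\xad\xbc\xcb\xda\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x06WSJT-X\x00\x00\x00\x03\x00\x00\x00\x052.1.0\x00\x00\x00\x0624fcd1'
magic, schema, msg_type = struct.unpack_from('>III', data)
assert magic == 0xadbccbda  # the magic number from NetworkMessage.hpp
print(schema, msg_type)     # -> 2 0 (schema 2, message type 0 = heartbeat)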
def jt_decode_heart_beat(i):
    """
    Heartbeat Out/In 0 quint32
    Id (unique key) utf8
    Maximum schema number quint32
    version utf8
    revision utf8
    :param i: QDataStream
    :return: JT_HB_ID, JT_HB_SCHEMA, JT_HB_VERSION, JT_HB_REVISION
    """
    JT_HB_ID = i.readQString()
    JT_HB_SCHEMA = i.readInt32()
    JT_HB_VERSION = i.readQString()
    JT_HB_REVISION = i.readQString()
    print(f"HB:ID={JT_HB_ID} JT_HB_SCHEMA={JT_HB_SCHEMA} JT_HB_VERSION={JT_HB_VERSION} JT_HB_REVISION={JT_HB_REVISION}")
    return (JT_HB_ID, JT_HB_SCHEMA, JT_HB_VERSION, JT_HB_REVISION)
# setup (not shown in the original excerpt): a UDP socket and the Qt stream classes
import socket
from PyQt5.QtCore import QByteArray, QDataStream  # PySide2 works the same way

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("", 2237))  # WSJT-X's default UDP port

while 1:
    data, addr = s.recvfrom(1024)
    b = QByteArray(data)
    i = QDataStream(b)
    JT_QT_MAGIC_NUMBER = i.readInt32()
    JT_QT_SCHEMA_NUMBER = i.readInt32()
    JT_TYPE = i.readInt32()
    if JT_TYPE == 0:
        # Heart Beat
        jt_decode_heart_beat(i)
    elif JT_TYPE == 1:
        jt_decode_status(i)
Long story short, the WSJT-X UDP protocol I was reading did not encode the strings as serialized QStrings, so it was wrong to expect that i.readQString() would work.
Instead, each string was encoded as a 32-bit integer giving the length, followed by that many UTF-8 bytes (a serialized QByteArray).
I successfully encapsulated this functionality in a function:
def jt_decode_utf8_str(i):
    """
    Strings are encoded with an int32 indicating size,
    followed by an array of size bytes in utf-8.
    :param i: QDataStream
    :return: decoded string
    """
    sz = i.readInt32()
    b = i.readRawData(sz)
    return b.decode("utf-8")
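With that helper, the heartbeat decoder from the question becomes (a sketch; readUInt32 matches the quint32 fields in the spec):
def jt_decode_heart_beat(i):
    JT_HB_ID = jt_decode_utf8_str(i)        # Id (unique key), utf8
    JT_HB_SCHEMA = i.readUInt32()           # maximum schema number, quint32
    JT_HB_VERSION = jt_decode_utf8_str(i)   # version, utf8
    JT_HB_REVISION = jt_decode_utf8_str(i)  # revision, utf8
    return (JT_HB_ID, JT_HB_SCHEMA, JT_HB_VERSION, JT_HB_REVISION)

# For the example datagram above this returns:
# ('WSJT-X', 3, '2.1.0', '24fcd1')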

How to clear/overwrite all data in a shared memory?

I need to overwrite all previously written data in a shared memory (multiprocessing.shared_memory).
Here is the sample code:
from multiprocessing import shared_memory
import json
shared = shared_memory.SharedMemory(create=True, size=24, name='TEST')
data_one = {'ONE': 1, 'TWO': 2}
data_two = {'ACTIVE': 1}
_byte_data_one = bytes(json.dumps(data_one), encoding='ascii')
_byte_data_two = bytes(json.dumps(data_two), encoding='ascii')
# First write to shared memory
shared.buf[0:len(_byte_data_one)] = _byte_data_one
print(f'First write: {shared.buf.tobytes()}')
# Second write
shared.buf[0:len(_byte_data_two)] = _byte_data_two
print(f'Second write: {shared.buf.tobytes()}')
shared.close()
shared.unlink()
Output:
First write: b'{"ONE": 1, "TWO": 2}\x00\x00\x00\x00'
Second write: b'{"ACTIVE": 1}WO": 2}\x00\x00\x00\x00'
The output is understandable, since the second write starts at index 0 and ends at len(_byte_data_two) (shared.buf[0:len(_byte_data_two)] = _byte_data_two).
I need every new write to the shared memory to overwrite all previously written data.
I've tried shared.buf[0:] = b'' before every new write to the shared memory, but ended up getting
ValueError: memoryview assignment: lvalue and rvalue have different structures
I've also tried shared.buf[0:len(_byte_data_two)] = b'' after every new write, with the same result.
This is the result I'm looking for:
First write: b'{"ONE": 1, "TWO": 2}\x00\x00\x00\x00'
Second write: b'{"ACTIVE": 1}\x00\x00\x00\x00' (without the extra '"WO": 2}' left over from the first write)
How do I overwrite all previously written data in the shared memory?
The easiest might be to create a zero-filled byte array first, something like:
def set_zero_filled(sm, data):
    buf = bytearray(sm.size)  # zero-filled buffer spanning the whole segment
    buf[:len(data)] = data
    sm.buf[:] = buf
which you can use as:
set_zero_filled(shared, json.dumps(data_two).encode())
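Applied to the sample above, each write now clears the whole 24-byte segment before copying the new payload in:
set_zero_filled(shared, _byte_data_one)
print(shared.buf.tobytes())  # b'{"ONE": 1, "TWO": 2}\x00\x00\x00\x00'
set_zero_filled(shared, _byte_data_two)
print(shared.buf.tobytes())  # b'{"ACTIVE": 1}' followed by 11 zero bytes, no leftovers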

TreeView.insert throws UnicodeDecodeError

I'm trying to populate TreeView with data from os.listdir(path).
All is OK until I read a directory name with a non-UTF-8 character, in my case 0xf6.
As I'm running on Windows, the charset from os.listdir() is Windows-1252 or ANSI.
How can I solve this problem to achieve a correct display in the TreeView?
Here some of my code:
def fill_tree(treeview, node):
    if treeview.set(node, "type") != 'directory':
        return
    path = treeview.set(node, "fullpath")
    # Delete the possibly 'dummy' node present.
    treeview.delete(*treeview.get_children(node))
    parent = treeview.parent(node)
    for p in os.listdir(path):
        ptype = None
        p = os.path.join(path, p)
        if os.path.isdir(p):
            ptype = 'directory'
        fname = os.path.split(p)[1].decode('cp1252').encode('utf8')
        if ptype == 'directory':
            oid = treeview.insert(node, 'end', text=fname, values=[p, ptype])
            treeview.insert(oid, 0, text='dummy')
Regards
Göran
The UnicodeDecodeError is due to passing byte strings when the function is expecting Unicode strings. Python 2 attempts to implicitly decode byte strings to Unicode. Use Unicode strings explicitly instead: os.listdir(unicode_path) will return Unicode strings, for example os.listdir(u'.').
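A minimal sketch of the fix inside the question's loop (Python 2; treeview and node are from the question, 'mbcs' is Windows' ANSI codepage):
path = treeview.set(node, "fullpath").decode('mbcs')  # make the path unicode
for p in os.listdir(path):              # unicode path in -> unicode names out
    p = os.path.join(path, p)
    fname = os.path.split(p)[1]         # already unicode, no .decode()/.encode() needed
    treeview.insert(node, 'end', text=fname)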

YAML deserializer with position information?

Does anyone know of a YAML deserializer that can provide position information for the constructed objects?
I know how to deserialize a YAML file into a Java object. Simple instructions on http://yamlbeans.sourceforge.net/.
However, I want to do some algorithmic validation on the deserialized object and report errors back to the user, pointing to the position in the YAML that caused the error.
Example:
=========YAML file==========
name: Nathan Sweet
age: 28
address: 4011 16th Ave S
=======JAVA class======
public class Contact {
    public String name;
    public int age;
    public String address;
}
Imagine I want to first load the YAML into the Contact class and then validate the address against some repository, reporting an error back if it's invalid. Something like:
'Line 3 Column 9: The address does not match valid entry in the database'
The problem is, currently there is no way to get the position inside a deserialized object from YAML.
Anyone know a solution to this issue?
Most YAML parsers, if they keep any position information around at all, drop it while constructing the language-native objects.
In ruamel.yaml ¹, I keep more information around because I want to be able to round-trip with minimal loss of the original layout (e.g. keeping comments and key order in mappings).
I don't keep information on individual key-value pairs, but I do on the "upper-left" position of a mapping². Because the order of the mapping items is kept, you can give some rather nice feedback. Given an input file:
- name: anthon
  age: 53
  adres: Rijn en Schiekade 105
- name: Nathan Sweet
  age: 28
  address: 4011 16th Ave S
And a program that you call with the input file as argument:
#! /usr/bin/env python2.7
# coding: utf-8
# http://stackoverflow.com/questions/30677517/yaml-deserializer-with-position-information?noredirect=1#comment49491314_30677517

import sys
import ruamel.yaml

up_arrow = '↑'

def key_error(key, value, line, col, error, e='E'):
    print('E[{}:{}]: {}'.format(line, col, error))
    print('{}{}: {}'.format(' '*col, key, value))
    print('{}{}'.format(' '*(col), up_arrow))
    print('---')

def value_error(key, value, line, col, error, e='E'):
    val_col = col + len(key) + 2
    print('{}[{}:{}]: {}'.format(e, line, val_col, error))
    print('{}{}: {}'.format(' '*col, key, value))
    print('{}{}'.format(' '*(val_col), up_arrow))
    print('---')

def value_warning(key, value, line, col, error):
    value_error(key, value, line, col, error, e='W')

class Contact(object):
    def __init__(self, vals):
        for offset, k in enumerate(vals):
            self.check(k, vals[k], vals.lc.line+offset, vals.lc.col)
        for k in ['name', 'address', 'age']:
            if k not in vals:
                print('K[{}:{}]: {}'.format(
                    vals.lc.line+offset, vals.lc.col, "missing key: "+k
                ))
                print('---')

    def check(self, key, value, line, col):
        if key == 'name':
            if value[0].lower() == value[0]:
                value_error(key, value, line, col,
                            'value should start with uppercase')
        elif key == 'age':
            if value < 50:
                value_warning(key, value, line, col,
                              'probably too young for knowing ALGOL 60')
        elif key == 'address':
            pass
        else:
            key_error(key, value, line, col,
                      "unexpected key")

data = ruamel.yaml.load(open(sys.argv[1]), Loader=ruamel.yaml.RoundTripLoader)
for x in data:
    contact = Contact(x)
giving you E(rrors), W(arnings) and K(eys missing):
E[0:8]: value should start with uppercase
  name: anthon
        ↑
---
E[2:2]: unexpected key
  adres: Rijn en Schiekade 105
  ↑
---
K[2:2]: missing key: address
---
W[4:7]: probably too young for knowing ALGOL 60
  age: 28
       ↑
---
You should be able to parse this output in a calling program in any language to give feedback. The check method of course needs adjusting to your requirements. This is not as nice as being able to do the checking in the language the rest of your application is in, but it might be better than nothing.
In my experience handling the above format is certainly simpler than extending an existing (open source) YAML parser.
¹ Disclaimer: I am the author of that package
² I want to use that kind of information at some point to preserve spurious newlines, inserted for readability
In Python, you can readily write custom Dumper/Loader objects and use them to load (or dump) your YAML code. You can have these objects track the file/line info:
import yaml
from collections import OrderedDict
class YamlOrderedDict(OrderedDict):
    """
    An OrderedDict that was loaded from a yaml file, and is annotated
    with file/line info for reporting about errors in the source file
    """
    def _annotate(self, node):
        self._key_locs = {}
        self._value_locs = {}
        nodeiter = node.value.__iter__()
        for key in self:
            subnode = nodeiter.next()
            self._key_locs[key] = subnode[0].start_mark.name + ':' + \
                str(subnode[0].start_mark.line+1)
            self._value_locs[key] = subnode[1].start_mark.name + ':' + \
                str(subnode[1].start_mark.line+1)

    def key_loc(self, key):
        try:
            return self._key_locs[key]
        except (AttributeError, KeyError):
            return ''

    def value_loc(self, key):
        try:
            return self._value_locs[key]
        except (AttributeError, KeyError):
            return ''

# Use YamlOrderedDict objects for yaml maps instead of normal dict
yaml.add_representer(OrderedDict, lambda dumper, data:
                     dumper.represent_dict(data.iteritems()))
yaml.add_representer(YamlOrderedDict, lambda dumper, data:
                     dumper.represent_dict(data.iteritems()))

def _load_YamlOrderedDict(loader, node):
    rv = YamlOrderedDict(loader.construct_pairs(node))
    rv._annotate(node)
    return rv

yaml.add_constructor(yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _load_YamlOrderedDict)
Now when you read a yaml file, any mapping objects will be read as a YamlOrderedDict, which allows looking up the file location of keys in the mapping object. You can also add an iterator method like:
def iter_with_lines(self):
    for key, val in self.items():
        yield (key, val, self.key_loc(key))
...and now you can write a loop like:
for key, value, location in obj.iter_with_lines():
    # iterate through the key/value pairs in a YamlOrderedDict, with
    # the source file location
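Putting it together, a small usage sketch (the file name is hypothetical, and iter_with_lines is assumed to have been added to the class as above):
with open('contact.yaml') as f:
    doc = yaml.load(f)  # mappings come back as YamlOrderedDict
for key, value, location in doc.iter_with_lines():
    print('{0}: {1} (defined at {2})'.format(key, value, location))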
