How to enable/use SNMPv2 RowStatus feature on pysnmp if new row indexes aren't known on table creation? - snmp

I'm implementing an NTCIP agent using pysnmp, with lots of tables containing SNMPv2 RowStatus columns.
I need to allow clients to create new conceptual rows in existing agent tables, however there is no way to know the indices of such rows before their creation. Consequently, I can't create those RowStatus object instances in pysnmp. Without these instances, the client has no object to issue a SET command against in order to add a conceptual row to the table.
Is there any way to handle this in pysnmp? Perhaps a generic per-column callback mechanism or something similar.

I think I have found the cause of the problem with creating new rows.
The original (ASN.1) MIB file defines all RowStatus columns as read-write, but pysnmp's MibTableColumn createTest method fails if the symbol's access permission is not read-create. Changing the RowStatus definitions in the MIB source solved the problem.
After doing that I could create new rows, but I noticed another issue: an SNMP walk on the table caused a timeout. The problem is that pysnmp doesn't know which values to put in new row elements that are not indices and have no default values defined, so it puts None - which causes an 'Attempted "__hash__" operation on ASN.1 schema object' PyAsn1Error. To handle this, the client must either issue SET commands to every field in the newly created row before getting them, OR default values must be added to the column objects (not sure about that, but default values are not populated by mibdump, since the original ASN.1 MIBs never define default values for items that are not optional, by definition). My code to export columns for my StaticTable class follows (incomplete code, but I think some method and attribute names speak for themselves).
def add_create_test_callback(superclass, create_callback, error_callback=None):
    """Create a class based on superclass that calls a callback when an element is created"""
    class VarCCb(superclass):
        def createTest(self, name, val, idx, acInfo):
            # RowStatus: createAndGo(4), createAndWait(5)
            if create_callback and val in [4, 'createAndGo', 5, 'createAndWait']:
                superclass.createTest(self, name, val, idx, acInfo)
                create_callback(name, val, idx, acInfo)
            else:
                if error_callback:
                    error_callback(name, 'optionNotSupported')
                raise error.NoCreationError(idx=idx, name=name)

    return VarCCb
class StaticTable:
    # ....
    def config_cols(self):
        """Add callback to RowStatus columns and default values for other columns that are not indices"""
        MibTableColumn, = self.mib_builder.importSymbols('SNMPv2-SMI', 'MibTableColumn')
        _, column_symbols = self.import_column_symbols()
        for index, symbol in enumerate(column_symbols):
            if symbol.syntax.__class__.__name__ == 'DcmRowStatus':
                # add callback for row creation on all RowStatus columns
                MibTableColumnWCb = add_create_test_callback(MibTableColumn, self.create_callback,
                                                             self.error_callback)
                # setMaxAccess needs to be defined, otherwise the symbol defaults to read-only
                new_col = MibTableColumnWCb(symbol.name, symbol.syntax.clone()).setMaxAccess('readcreate')
                named_col = {symbol.label: new_col}
            elif index >= self.index_n and self.column_default_values:
                new_col = MibTableColumn(symbol.name, symbol.syntax.clone(self.column_default_values[index]))
                named_col = {symbol.label: new_col}
            else:
                new_col = None
                named_col = None
            if new_col:
                self.mib_builder.unexportSymbols(self.mib_name, symbol.label)
                self.mib_builder.exportSymbols(self.mib_name, **named_col)
    # ...
Not sure if this is the right way to do it; please correct me if I am wrong. Maybe I shouldn't include this here, but it is part of the path to solving the original question and may help others.
Thanks!

I think that with SNMP in general you can't remotely create table rows without knowing their indices, because the index is how you tell the SNMP agent where exactly the row shall reside in the table.
In technical terms, to invoke the RowStatus object for a row you need to know its index (e.g. the row ID).
If I am mistaken, please explain how that would work?
The other case is when you are not creating table rows remotely, but just exposing data you already have at your SNMP agent through the SNMP table mechanism. Then you can just build the indices off the existing data. That would not require your SNMP manager to know the indices beforehand.
A possible middle-ground approach is for your SNMP agent to expose some information that the SNMP manager can use for building proper indices for the table rows to be created.
All in all, I think the discussion would benefit from some more hints on your situation. ;-)
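To illustrate the point about indices: in SNMP the row index is encoded directly into the OID of every SET the manager sends, so a row simply cannot be addressed without one. A minimal sketch in plain Python (the column OID and index value are hypothetical, not from any real MIB):

```python
# Build the instance OID a manager would SET to create a row.
# The column OID and the chosen index below are made-up examples.
def instance_oid(column_oid, index):
    """Append the row index sub-identifiers to the column OID."""
    return column_oid + tuple(index)

# Hypothetical RowStatus column of some NTCIP-style table
row_status_col = (1, 3, 6, 1, 4, 1, 1206, 4, 2, 1, 3, 9)

# The manager must pick the index (here, row 2) before it can issue
# SET <instance_oid> = createAndGo(4)
oid = instance_oid(row_status_col, [2])
print('.'.join(map(str, oid)))
```

The agent learns which row to create purely from those trailing sub-identifiers; there is no other channel for the row's position.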

Related

PostgreSQL: Create index on length of all table fields

I have a table called profile, and I want to order them by which ones are the most filled out. Each of the columns is either a JSONB column or a TEXT column. I don't need this to a great degree of certainty, so typically I've ordered as follow:
SELECT * FROM profile ORDER BY LENGTH(CONCAT(profile.*)) DESC;
However, this is slow, and so I want to create an index. However, this does not work:
CREATE INDEX index_name ON profile (LENGTH(CONCAT(*)))
Nor does
CREATE INDEX index_name ON profile (LENGTH(CONCAT(CAST(* AS TEXT))))
Can't say I'm surprised. What is the right way to declare this index?
To measure the size of the row in text representation you can just cast the whole row to text, which is much faster than concatenating individual columns:
SELECT length(profile::text) FROM profile;
But there are 3 (or 4) issues with this expression in an index:
1. The syntax shorthand profile::text is not accepted in CREATE INDEX; you need to add extra parentheses or fall back to the standard syntax cast(profile AS text).
2. Still the same problem that #jjanes already discussed: only IMMUTABLE functions are allowed in index expressions, and casting a row type to text does not pass this requirement. You could build a fake IMMUTABLE wrapper function, like Jeff outlined.
3. There is an inherent ambiguity (that applies to Jeff's answer as well!): if you have a column name that's the same as the table name (which is a common case) you cannot reference the row type in CREATE INDEX, since the identifier always resolves to the column name first.
4. Minor difference to your original: this adds column separators, row decorators and possibly escape characters to the text representation. Shouldn't matter much for your use case.
However, I would suggest a more radical alternative as a crude indicator of the size of a row: pg_column_size(). Even shorter and faster, and it avoids issues 1, 3 and 4:
SELECT pg_column_size(profile) FROM profile;
Issue 2 remains, though: pg_column_size() is also only STABLE. You can create a simple and cheap SQL wrapper function:
CREATE OR REPLACE FUNCTION pg_column_size(profile)
RETURNS int LANGUAGE sql IMMUTABLE AS
'SELECT pg_catalog.pg_column_size($1)';
and then proceed like #jjanes outlined. More details:
Does PostgreSQL support "accent insensitive" collations?
Note that I created the function with the row type profile as parameter. Postgres allows function overloading, which is why we can use the same function name. Now, when we feed the matching row type to pg_column_size() our custom function matches more closely according to function type resolution rules and is picked instead of the polymorphic system function. Alternatively, use a separate name and possibly make the function polymorphic as well ...
Related:
Is there a way to disable function overloading in Postgres
You can declare a function which is falsely marked "immutable" and build an index on that.
CREATE OR REPLACE FUNCTION len_immut(record)
RETURNS int
LANGUAGE plperl
IMMUTABLE
AS $function$
## This function lies about its immutability.
## Use it with care. It is useful for indexing
## entire table rows.
return length(join ",", values %{$_[0]});
$function$
and then
create index on profile (len_immut(profile));
SELECT * FROM profile ORDER BY len_immut(profile) DESC;
Since the function is falsely labelled as immutable, the index may become out of date if you do things like add or drop columns on the table, or change the types of columns.

SNMP table with dynamic number of columns

I want to have an SNMP table with a dynamic number of rows and columns.
The code which creates the OIDs in snmpd is ready, but now I'm having problems with the MIB file.
The MIB file allows a dynamic number of rows (entries) but must have a constant number of columns.
I'm looking for a way to solve this problem. The following solutions might work, but I don't know whether they can be expressed in a MIB file:
The number of columns is between 1-32. If I could define the columns to be optional, that would solve my problem.
Having a dynamic number of tables: if I could define a template table with a template name and OID, this would allow me to split my table into smaller dynamic tables with a static number of columns.
Currently I can't find any record of such solutions.
SNMP does not allow a dynamic number of columns in a table. It requires that the MIB describes the table completely, so that a manager knows which columns are present, before trying to contact the agent.
Defining tables dynamically is also not permitted.
If you edit your question to describe the data you are trying to model, perhaps we could figure out whether or not it's possible to model it in a MIB. I can certainly imagine situations where the capabilities of SNMP are insufficient to model a data set. It works best where data is either scalar, a tree, or a table with a fixed set of columns.
Edit: As k1eran posted in a comment, it is possible to simply not populate some columns with data, leaving a "sparse table". Please see his comment for a link.
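The sparse-table idea can be sketched in plain Python (the table contents below are made up): the agent simply holds no value for absent cells and answers the SNMP equivalent of noSuchInstance when they are queried, while the MIB still declares the full, fixed set of columns:

```python
# Sparse SNMP-style table sketch: cells live in a dict keyed by
# (column, row index); unpopulated cells simply have no entry.
NO_SUCH_INSTANCE = object()  # stand-in for SNMP's noSuchInstance

table = {
    (1, 1): 'eth0', (2, 1): 1500,   # row 1 populates columns 1 and 2
    (1, 2): 'eth1',                 # row 2 only populates column 1
}

def get(column, row):
    """Return the cell value, or noSuchInstance for a missing cell."""
    return table.get((column, row), NO_SUCH_INSTANCE)
```

A manager walking such a table just sees fewer instances under the sparse columns; no MIB change is needed.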

How do I dig into Active Record association meta info?

I have a table with data I pull from an external source. It has three different columns that are hierarchical in nature and reference other tables. The foreign keys are NOT constrained however, so data in those fields is not necessarily valid.
I'm trying to write something generic that will print out any records with values that don't reference an existing row in the parent table for a given date.
I've basically got something like this:
klass.where(:date => date).each do |rec|
  next if rec.send(parent)
  # do stuff with rec
end
Where klass is the model for the table and parent is a symbol of a declared 'belongs_to' association.
This method works, however, there may be tens of thousands of records for the day, but unique values on the key are fewer than 100. The repeated lookups into the parent table are unpleasantly time consuming. What I'd like to do is stash all the keys I've already looked up and only perform the lookup on new keys.
Towards this end, I'd like to be able to retrieve the field name that has the foreign key reference at runtime. Ideally this would work regardless of whether or not it's using the default naming convention.
You don't show your schema, but this needs to be attacked in your database table, not in code, especially if you're dealing with "tens of thousands" of records.
As a first try, I'd add a "last_checked" field holding a DateTime value. At the start of the program's run, I'd capture the maximum value of that field, plus one second, as my cutoff. Any record older than that time is a candidate for checking. Each record you look at gets its last_checked field updated to the current DateTime.now value. You can whittle down the list of candidate records without beating up Ruby that way.
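The caching idea from the question (look each distinct key up only once) can also be sketched independently of Rails; here in plain Python, with a made-up record shape and a stand-in for the database lookup:

```python
# Cache parent-existence lookups so each distinct key hits the
# "database" only once. `lookup_parent` stands in for the real query.
def find_orphans(records, lookup_parent):
    cache = {}
    orphans = []
    for rec in records:
        key = rec['parent_id']
        if key not in cache:
            cache[key] = lookup_parent(key)   # one query per distinct key
        if not cache[key]:
            orphans.append(rec)               # parent row does not exist
    return orphans
```

With tens of thousands of rows but under 100 distinct keys, this reduces the parent-table lookups from one per row to one per key.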

Oracle get values from :new, :old dynamically by string key

How to get a value from the special :new or :old records by a "string key"?
e.g. in PHP:
$key = 'bar';
$foo[$key]; //get foo value
How to in Oracle?
:new.bar --get :new 'bar' value
and
key = 'bar';
:new[key] --How to?
Is it possible?
Thx!
It is not possible.
A trigger that fires at row level can access the data in the row that
it is processing by using correlation names. The default correlation
names are OLD, NEW, and PARENT.
...
OLD, NEW, and PARENT are also
called pseudorecords, because they have record structure, but are
allowed in fewer contexts than records are. The structure of a
pseudorecord is table_name%ROWTYPE, where table_name is the name of
the table on which the trigger is created (for OLD and NEW) or the
name of the parent table (for PARENT).
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#autoId4
So, these correlation names are basically records. A record is not a key-value store, so you cannot reference its fields by a string key.
Here's what you can do with them:
http://docs.oracle.com/cd/E11882_01/appdev.112/e10472/composites.htm#CIHFCFCJ
According to this, the first approach is syntactically fine; it is to be used like this:
create trigger trg_before_insert before insert on trigger_tbl
for each row
begin
  insert into trigger_log (txt) values ('[I] :old.a=' || :old.a || ', :new.a=' || :new.a);
end;
/
But if you want to access the field dynamically, one ugly thing I can think of that seems to work (and which is not dynamic at all in the end...) is using a CASE WHEN ... statement for each column you want to be able to use dynamically...
Something along these lines (updating the :new record):
key := 'bar';
value := 'newValue';
CASE key
  WHEN 'bar' THEN :new.bar := value;
  WHEN 'foo' THEN :new.foo := value;
  WHEN 'baz' THEN :new.baz := value;
END CASE;
To read a value from "dynamic column":
key := 'bar';
value := CASE key
  WHEN 'bar' THEN :new.bar
  WHEN 'foo' THEN :new.foo
  WHEN 'baz' THEN :new.baz
END;
Then use the value variable as required...
Beware however, as #beherenow noted:
what's the datatype of value variable in your reading example?
and how can you be sure you're not going to encounter a type mismatch?
These are questions that require decisions on the implementing side. For example, with a squint, this thing could be used to dynamically access values from columns that share the same type.
I have to emphasize, though, that I don't see a situation where such a bizarre contraption as I proposed should be used, nor do I endorse using it. The reason I kept it here, after #beherenow's complete and definitive answer, is so that everyone finding this page can see that even though there might be a way, it shouldn't be used...
To me, this thing seems:
ugly
brittle
badly scaling
appalling
difficult to maintain
...aaand horribly ugly...
I definitely recommend rethinking the use case you need this for. I myself would angrily shout at anyone writing this kind of code, unless it is absolutely the only way and the whole universe collapses if it is not done this way... (which is very unlikely).
Sorry if I misunderstood your question, it was not totally clear to me
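For contrast, the dynamic access the asker wanted is what the CASE workaround emulates by hand: an associative lookup. In a language with real key-value records it is a one-liner; a plain-Python sketch (the record and field names are made up):

```python
# What the PL/SQL CASE blocks above emulate manually:
# a record that supports lookup and assignment by string key.
new = {'bar': 'old1', 'foo': 'old2', 'baz': 'old3'}

key = 'bar'
value = 'newValue'

new[key] = value        # dynamic write - no CASE ladder needed
read_back = new[key]    # dynamic read
```

Oracle's :new/:old pseudorecords have a fixed %ROWTYPE structure, which is precisely why this shape of access is unavailable there.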

Enumerate indexes on a Extensible Storage Engine (ESENT) table

Background
I'm writing an adapter for ESE to .NET and LINQ in a Google Code project called eselinq. One important function I can't seem to figure out is how to get a list of indexes defined for a table. I need to be able to list available indexes so the LINQ part can automatically determine when indexes can be used. This will allow much more efficient plans for user queries if appropriate indexes can be found.
There are two related functions for querying index information:
JetGetTableIndexInfo - get index information by tableID
JetGetIndexInfo - get index information by tableName
These only differ in how the related table is specified (name or tableid). It sounds like these would support the function I want but all the info levels seem to require that I already have a certain index to query information for. The only exception is JET_IdxInfoCount, but that only counts how many indexes are present.
JET_IdxInfo with its JET_INDEXLIST sounds plausible but it only lists the columns on a specific index.
Alternatives
I am aware that I could get the index information another way, like annotations on .NET types corresponding to database tables, or by requiring a index mapping be provided ahead of time. I think there's enough introspection implemented to make everything else work out of the box without the user supplying extra information, except for this one function.
Another option may be to examine the system tables to find related index objects, but this is would mean depending on an undocumented interface.
To satisfy this question, I want a supported method of enumerating the indexes (just the name would be sufficient) on a table.
You are correct about JetGetTableIndexInfo, JetGetIndexInfo and JET_IdxInfo. The twist is that the data is returned in a somewhat complex form: a temporary table is returned containing a row for the index and then a row for each column in the index. To get just the index names you will need to skip the column rows (the column count is given by the value of the columnidcColumn column in the first row).
For a .NET example of how to decipher this, look at the ManagedEsent project. In the MetaDataHelpers.cs file there is a method called GetIndexInfoFromIndexlist that extracts all the data from the temporary table.
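The skipping logic can be sketched in plain Python, independent of the ESE API (the row layout is simplified from the description above, and the field names are illustrative, not the real JET_INDEXLIST member names): each index contributes one header row carrying its column count, followed by that many column rows to skip.

```python
# Extract index names from a flattened list of rows shaped like the
# temporary table described above: an index row carrying its column
# count, followed by that many per-column rows. Field names are made up.
def index_names(rows):
    names = []
    i = 0
    while i < len(rows):
        row = rows[i]
        names.append(row['indexname'])
        i += 1 + row['cColumn']   # skip this index's per-column rows
    return names

rows = [
    {'indexname': 'PK', 'cColumn': 1}, {'columnname': 'id'},
    {'indexname': 'ByName', 'cColumn': 2},
    {'columnname': 'last'}, {'columnname': 'first'},
]
```

Applied to the sample rows, this walks two index groups and returns just their names.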
