SNMP4J for updating a row in an SNMP table? - snmp

How can I update a row from an SNMP table using the SNMP4J library? I found the method:
TableUtils.createRow(Target target, OID rowStatusColumnOID, OID rowIndex, VariableBinding[] values)
to create a row, but there doesn't seem to be a method to update an existing one?

You can't use TableUtils for that. See instead Snmp::send which, quoting its javadoc:
Sends a PDU to the given target and returns the received response PDU.
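In SNMP, updating an existing row is just a SET request against the instance OIDs of that row's columns (the column OID with the row index appended). Here is a minimal sketch with snmp4j, assuming an already configured Snmp session and Target; the table OID, row index, and value are made up for illustration:

```java
import org.snmp4j.PDU;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;

public class UpdateRowSketch {
    public static void main(String[] args) {
        // Hypothetical writable column of some table; we update the row with index 5.
        OID column = new OID("1.3.6.1.4.1.9999.1.1.2");
        OID instance = new OID(column).append(5);  // instance OID = column OID + row index

        PDU pdu = new PDU();
        pdu.setType(PDU.SET);
        pdu.add(new VariableBinding(instance, new OctetString("new value")));

        // With a configured Snmp session and Target (not shown here):
        // ResponseEvent event = snmp.send(pdu, target);
        // PDU response = event.getResponse();  // null on timeout; else check getErrorStatus()

        System.out.println(instance);  // 1.3.6.1.4.1.9999.1.1.2.5
    }
}
```

Unlike row creation, no RowStatus manipulation is needed for a plain update; you only SET the columns you want to change.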

Related

NiFi - Call Rest API for every row in the file

I have a dataset of IDs; I've got a flow file that has one row per ID. I have an API that takes this ID as a parameter, and I want to harvest the results for all rows back into NiFi (example below).
https://service.com/api/thing/{ID}
How, in NiFi, can I call this API for all IDs in my dataset? Ideally using some parallelism if possible.
(For reference, in SSIS I could load these IDs into an array and then loop over an API call with a parameter for the ID.)
First, use SplitText to get each ID as its own flowfile.
Then copy the content to an attribute with ExtractText: add a custom property such as message.body, so that ExtractText adds a message.body.0 attribute to the flowfile, which you can then use in InvokeHTTP (e.g. https://service.com/api/thing/${message.body.0}). Please note that since your endpoint is https, you may need to configure an SSL Context Service.
Finally, you can set the concurrent task count on each processor for parallelism.

How to enable/use SNMPv2 RowStatus feature on pysnmp if new row indexes aren't known on table creation?

I'm implementing an NTCIP agent using pysnmp, with lots of tables containing SNMPv2 RowStatus columns.
I need to allow clients to create new conceptual rows in existing agent tables; however, there is no way to know the indices of such rows before their creation. Consequently, I can't create the corresponding RowStatus object instances in pysnmp. Without these instances, the client has no object against which to issue a SET command in order to add a conceptual row to the table.
Is there any way to handle this in pysnmp? Perhaps a generic per-column callback mechanism or something similar.
I think I have found the problem with creating new rows.
The original (ASN.1) MIB file defines all RowStatus columns as read-write, but pysnmp's MibTableColumn createTest method fails if the symbol's access permission is not read-create. Changing the RowStatus definitions in the MIB source solved the problem.
After doing that I could create new rows, but I noticed another issue: an SNMP walk on the table caused a timeout. The problem is that pysnmp doesn't know which values to put in new row elements that are not indices and have no default values defined, so it puts None - which causes an 'Attempted "__hash__" operation on ASN.1 schema object' PyAsn1Error.
To handle this, the client must issue SET commands for every field in the newly created row before getting them, OR default values must be added to the column objects (not sure about that, but default values are not populated by mibdump, as original ASN.1 MIBs never define default values for items which are not optional, by definition).
My code to export columns for my StaticTable class follows (incomplete code, but I think some method and attribute names speak for themselves).
from pysnmp.smi import error

def add_create_test_callback(superclass, create_callback, error_callback=None):
    """Create a class based on superclass that calls callback function when element is changed"""
    class VarCCb(superclass):
        def createTest(self, name, val, idx, acInfo):
            # RowStatus: createAndGo(4), createAndWait(5)
            if create_callback and val in [4, 'createAndGo', 5, 'createAndWait']:
                superclass.createTest(self, name, val, idx, acInfo)
                create_callback(name, val, idx, acInfo)
            else:
                if error_callback:
                    error_callback(name, 'optionNotSupported')
                raise error.NoCreationError(idx=idx, name=name)
    return VarCCb
class StaticTable:
    # ....
    def config_cols(self):
        """Add callback to RowStatus columns and default values for other columns that are not indices"""
        MibTableColumn, = self.mib_builder.importSymbols('SNMPv2-SMI', 'MibTableColumn')
        _, column_symbols = self.import_column_symbols()
        for index, symbol in enumerate(column_symbols):
            if symbol.syntax.__class__.__name__ == 'DcmRowStatus':
                # add callback for row creation on all row status columns
                MibTableColumnWCb = add_create_test_callback(MibTableColumn, self.create_callback,
                                                             self.error_callback)
                # setMaxAccess needs to be defined, otherwise the symbol defaults to read-only
                new_col = MibTableColumnWCb(symbol.name, symbol.syntax.clone()).setMaxAccess('readcreate')
                named_col = {symbol.label: new_col}
            elif index >= self.index_n and self.column_default_values:
                new_col = MibTableColumn(symbol.name, symbol.syntax.clone(self.column_default_values[index]))
                named_col = {symbol.label: new_col}
            else:
                new_col = None
                named_col = None
            if new_col:
                self.mib_builder.unexportSymbols(self.mib_name, symbol.label)
                self.mib_builder.exportSymbols(self.mib_name, **named_col)
    # ...
Not sure if this is the right way to do it, please correct me if I am wrong. Maybe I shouldn't include this here, but it is part of solving the original question and may help others.
Thanks!
I think with SNMP in general you can't remotely create table rows without knowing their indices, because the index is how you tell the SNMP agent where exactly the row shall reside in the table.
In technical terms, to invoke the RowStatus object for a column you need to know its index (i.e. the row ID).
If I am mistaken, please explain how that would work?
The other case is when you are not creating the table row remotely, but just exposing data you already have at your SNMP agent through the SNMP table mechanism. Then you can just build the indices off the existing data. That would not require your SNMP manager to know the indices beforehand.
A possible middle-ground approach would be for your SNMP agent to expose some information that the SNMP manager can use to build proper indices for the table rows to be created.
All in all, I think the discussion would benefit from some more hints on your situation. ;-)

Extract value from JDBC request and use in next jdbc request

I am writing a JDBC test plan for adding and deleting records.
I have extracted the queries from SQL Express Profiler for ADD and DELETE.
Now when I run the JDBC requests for add and delete, the record is added but the same record is not deleted, because the delete query contains the unique key (e.g. 35) of the record that was added when the query was captured in Express Profiler. Every time I run the add JDBC request, a new record is created with a different, incremented key value.
Is there any way to extract the unique key from the ADD JDBC request and use it in the delete JDBC request so that the same record can be deleted?
Response of the ADD JDBC request:
Delete query where I want to use the unique value from the response of the ADD request:
In the JDBC Request sampler you have a Variable Names input where you can specify the JMeter variables which will hold the result values. So given you put ScopeIdentity there, most likely you will be able to refer to its value later on as ${ScopeIdentity_1}.
References:
JDBC PostProcessor Example in Jmeter for Response assertion
Debugging JDBC Sampler Results in JMeter
You can solve this using the Variable Names field that you have in your JDBC Request sampler.
More information on the parameters used in JDBC requests can be found here:
https://jmeter.apache.org/usermanual/component_reference.html#JDBC_Request
Let me explain how to use them with your problem as an example:
For the ADD query, enter the variable name in the Variable Names field:
ScopeIdentity
This will result in the thread-local value for ScopeIdentity being saved in a variable named "ScopeIdentity" with a suffix of the thread number. So for a one-thread scenario the variable is ScopeIdentity_1.
For the DELETE query, enter this where you want to refer to the value:
${__V(ScopeIdentity_${__threadNum})}
Where ${__threadNum} gives the number of the current thread.
https://jmeter.apache.org/usermanual/functions.html#__threadNum
Where ${__V()} is used to evaluate the variable name composed by nesting the result of __threadNum.
https://jmeter.apache.org/usermanual/functions.html#what_can_do
The response of the ADD request won't retrieve your unique_id.
Add an additional step between ADD and DELETE, as below:
SELECT TOP 1 unique_id
FROM table
WHERE condition
order by unique_id desc;
Store this response to a variable and use it in the DELETE statement.

Retrieve multiple tables with snmp4j TableUtils

Documentation for the snmp4j TableUtils implies the getTable method can be used to retrieve more than one table. Does anyone know how to use it in that manner? It's just not intuitive for me. I'm wondering if I just put in the columns for table 1 and table 2 in the OID argument, and whether the table utils will be able to separate them all out, so I'll just have to distinguish them in the list of TableEvents (rows) that are returned?
http://www.snmp4j.org/doc/org/snmp4j/util/TableUtils.html
I have tried the same situation you have posted here. While trying out OIDs from different tables I reached the following conclusion, and I'm not sure whether it's the way they intended: the VariableBinding[] we get as output will contain results in the order in which we pass the OIDs into the array, and thereby we can match input and output.
For example:
input - new OID[]{"1.3.6.1.2.1.2.2.1.2", "1.3.6.1.2.1.25.4.2.1.2"}
output - one VariableBinding[] per row:
"1.3.6.1.2.1.2.2.1.2.1=somevalue", "1.3.6.1.2.1.25.4.2.1.2.1=System Idle Process"
"1.3.6.1.2.1.2.2.1.2.2=somevalue", null
...
Hope it was of some use to you.
Regards
Ajin
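As a sketch of that pattern: TableUtils.getTable accepts an array of column OIDs that may come from different tables, and each returned TableEvent row carries one VariableBinding per requested column, in the same order, with null where a table has no instance for that row index. The column OIDs below are ifDescr and hrSWRunName as in the example above; the Snmp session and Target are assumed to be created elsewhere:

```java
import org.snmp4j.Snmp;
import org.snmp4j.Target;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.util.DefaultPDUFactory;
import org.snmp4j.util.TableEvent;
import org.snmp4j.util.TableUtils;

public class MultiTableSketch {
    static void dumpTables(Snmp snmp, Target target) {
        TableUtils utils = new TableUtils(snmp, new DefaultPDUFactory());
        OID[] columns = {
            new OID("1.3.6.1.2.1.2.2.1.2"),    // ifDescr (IF-MIB::ifTable)
            new OID("1.3.6.1.2.1.25.4.2.1.2")  // hrSWRunName (HOST-RESOURCES-MIB::hrSWRunTable)
        };
        for (TableEvent row : utils.getTable(target, columns, null, null)) {
            VariableBinding[] vbs = row.getColumns();  // vbs[i] corresponds to columns[i]
            for (VariableBinding vb : vbs) {
                System.out.println(vb);  // null where the row index has no instance in that table
            }
        }
    }
}
```

Since rows are merged by index, a row index that exists in only one of the two tables yields null in the other position; matching by position in the array is how you tell the tables apart.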

ServiceNow Encoded Query validation

I intend to let users specify an encoded query in ServiceNow SOAP requests.
The problem is that if the user specifies an invalid query string (e.g. "?##" or "sometext"), ServiceNow does not return any exception, but instead returns either every row or no rows at all.
Is there a way to check the validity of an encoded query via web service?
Fandic,
If you know ahead of time which table the SOAP query will be running against (or can get that information from the user when they submit the query), you can set up your own web service exclusively for checking the validity of a query string.
GlideRecord has the addEncodedQuery method for putting in encoded queries, and getEncodedQuery for turning the current query back into an encoded string. If you instantiate a GlideRecord object and pass it an invalid query string like this:
var myGR = new GlideRecord('incident');
myGR.addEncodedQuery('invalid_field_foo=BAR');
You can then call getEncodedQuery to see whether it is valid:
var actual_query = myGR.getEncodedQuery(); //sys_idNotValidnull
You shouldn't do a simple comparison between the input and the output, as the API doesn't guarantee that a valid query will be returned as exactly the same string as entered. It's better to simply validate that actual_query does not equal 'sys_idNotValidnull'.
I always recommend enabling the system property glide.invalid_query.returns_no_rows to avoid this effect. The property does just what it says. I never understood why you'd ever want to return all rows for an invalid query. I've seen a number of cases where developers had a query defect and never knew it, since rows were coming back.
You can use the code below so that the query will return results only when it is correct.
gs.getSession().setStrictQuery(boolean);
This overrides the value of the system property:
glide.invalid_query.returns_no_rows
Reference : https://docs.servicenow.com/bundle/istanbul-platform-administration/page/administer/reference-pages/reference/r_AvailableSystemProperties.html
Check the value of the property glide.invalid_query.returns_no_rows. If it's true, an invalid query returns no rows. If it's false (or not set), the invalid part of the query is ignored.
This can be overridden with gs.getSession().setStrictQuery(boolean); on a script-by-script basis.
