RowStatus in a table in an SNMP MIB

In the example MIB entry below:
--
-- Logging configuration
--
nsLoggingTable OBJECT-TYPE
SYNTAX SEQUENCE OF NsLoggingEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"A table of individual logging output destinations, used to control
where various levels of output from the agent should be directed."
::= { nsConfigLogging 1 }
nsLoggingEntry OBJECT-TYPE
SYNTAX NsLoggingEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"A conceptual row within the logging table."
INDEX { nsLogLevel, IMPLIED nsLogToken }
::= { nsLoggingTable 1 }
NsLoggingEntry ::= SEQUENCE {
nsLogLevel INTEGER,
nsLogToken DisplayString,
nsLogType INTEGER,
nsLogMaxLevel INTEGER,
nsLogStatus RowStatus
}
Here the RowStatus entry is the last one in NsLoggingEntry. Can we put this RowStatus entry anywhere in NsLoggingEntry (e.g. after "nsLogToken DisplayString")?

Moving the nsLogStatus RowStatus entry to a different position within the NsLoggingEntry sequence is possible, but you need to update the order of the columnar objects to match the order of the sequence.
To give a little more detail, NsLoggingEntry ::= SEQUENCE defines the columns that will make up entries in the nsLoggingTable. The MIB file should have a further definition for each of those columns that will look something like:
nsLogStatus OBJECT-TYPE
SYNTAX RowStatus
MAX-ACCESS read-create
STATUS current
DESCRIPTION "<Some great description of this column>"
::= { nsLoggingEntry 5 }
The key part of that definition is the ::= { nsLoggingEntry 5 } line, which asserts that nsLogStatus will be the fifth column in rows of the nsLoggingTable. If you change the order of the NsLoggingEntry sequence, you should make sure that the individual column definitions follow that sequence.
For example, if you changed the order to be,
NsLoggingEntry ::= SEQUENCE {
nsLogLevel INTEGER,
nsLogToken DisplayString,
nsLogStatus RowStatus,
nsLogType INTEGER,
nsLogMaxLevel INTEGER
}
the OID assignments for each of the columns should become,
nsLogLevel ::= { nsLoggingEntry 1 }
nsLogToken ::= { nsLoggingEntry 2 }
nsLogStatus ::= { nsLoggingEntry 3 }
nsLogType ::= { nsLoggingEntry 4 }
nsLogMaxLevel ::= { nsLoggingEntry 5 }
There is one more thing to keep in mind: the index objects for the table should come first in the sequence, so nsLogLevel should remain in its current location, as should nsLogToken.
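The consistency rule above (the SEQUENCE order must match the column sub-identifiers) can also be checked mechanically. A minimal Python sketch with the column data hard-coded; a real tool would parse the MIB file instead:

```python
# Columns in the order they appear in the NsLoggingEntry SEQUENCE, paired with
# the sub-identifier from each column's ::= { nsLoggingEntry N } clause.
columns = [
    ("nsLogLevel", 1),
    ("nsLogToken", 2),
    ("nsLogStatus", 3),
    ("nsLogType", 4),
    ("nsLogMaxLevel", 5),
]

def check_column_order(columns):
    """Return the names of columns whose sub-identifier does not match
    their 1-based position in the SEQUENCE."""
    return [name for pos, (name, subid) in enumerate(columns, start=1)
            if pos != subid]

print(check_column_order(columns))  # [] means the two orders agree
```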


Conflict in ambiguous grammar due to ordered declaration of optional blocks

I need help defining some rules for a grammar in CUP. The rules in question belong to the declaration block, which consists of the declaration of zero or more constants, zero or more record types, and zero or more variables. An example of the code to parse:
x: constant := True;
y: constant := 32
type Tpersona is record
dni: Integer;
edad : Integer;
casado : Boolean;
end record;
type Tfecha is record
dia: Integer;
mes : Integer;
anyo : Integer;
end record;
type Tcita is record
usuario:Tpersona;
fecha:Tfecha;
end record;
a: Integer;
x,y: Boolean;
x,y: Boolean;
x,y: Boolean;
The order between the blocks must be respected, but any of them may be absent. This last property is what generates a shift/reduce conflict with the following rules.
declaration_block ::= const_block types_block var_block;
// Constant declaration
const_block ::= dec_const const_block | ;
dec_const ::= IDEN TWOPOINT CONSTANT ASSIGN const_values SEMICOLON;
//Types declaration
types_block ::= dec_type types_block | ;
dec_type ::= TYPE IDEN IS RECORD
reg_list
END RECORD SEMICOLON;
reg_list ::= dec_reg reg_list | dec_reg;
dec_reg ::= IDEN TWOPOINT valid_types SEMICOLON;
//Variable declaration
var_block ::= dec_var var_block | ;
dec_var ::= iden_list TWOPOINT valid_types SEMICOLON;
iden_list ::= IDEN | IDEN COMMA iden_list;
// common use
const_values ::= INT | booleans;
booleans ::= TRUE | FALSE;
valid_types ::= primitive_types | IDEN;
primitive_types ::= INTEGER | BOOLEAN;
The idea is that any X_block can be empty. I understand the shift-reduce conflict: when the parser starts and receives an identifier (IDEN), it doesn't know whether to reduce const_block ::= <empty> and take IDEN as part of dec_var, or to shift and take the IDEN token as part of const_block. If I remove the empty/epsilon production in const_block or in types_block, the conflict disappears, although the grammar would then be incorrect, because it would demand an endless list of constants and would give a syntax error at the reserved word "type".
So I may have an ambiguity caused by the fact that both constants and variables can appear at the beginning, both start with "id:", and either block can appear first. How can I rewrite the rules to resolve the ambiguities and the shift/reduce conflict they cause?
I tried to do something like:
declaration_block ::= const_block types_block var_block | const_block types_block | const_block var_block | types_block var_block | types_block | var_block | ;
but I have the same problem.
Another attempt was to create new rules to identify whether it is a constant or a variable... but the ambiguity of the empty rule in const_block does not disappear.
dec_const ::= start_const ASSIGN const_values SEMICOLON;
start_const ::= IDEN TWOPOINT CONSTANT;
// dec_var ::= start_var SEMICOLON;
// start_var ::= iden_list TWOPOINT valid_types;
If I reduce the problem to something simpler, without taking into account types and only allowing one declaration of a constant or a variable, the fact that these blocks can be empty produces the problem:
dec_var ::= iden_list TWOPOINT valid_types SEMICOLON | ;
iden_list ::= IDEN | IDEN COMMA iden_list;
I hope to rewrite the rules in a way that solves this conflict and teaches me how to deal with similar problems in the future.
Thanks so much
To start with, your grammar is not ambiguous. But it does have a shift-reduce conflict (in fact, two of them), which indicates that it cannot be parsed deterministically with only one lookahead token.
As it happens, you could solve the problem (more or less) by just increasing the lookahead, if you had a parser generator which allowed you to do that. However, such parser generators are pretty rare, and CUP isn't one of them. There are parser generators which allow arbitrary lookahead, either by backtracking (possibly with memoisation, such as ANTLR4), or by using an algorithm which allows multiple alternatives to be explored in parallel (GLR, for example). But I don't know of a parser generator which can produce a deterministic transition table using two lookahead tokens (which would suffice in this case).
So the solution is to add some apparent redundancy to the grammar in order to factor out the cases which require more than one lookahead token.
The fundamental problem is the following set of possible inputs:
...; a : constant := 3 ; ...
...; a : Integer ; ...
There's no ambiguity here whatsoever. The first one can only be a constant declaration; the second can only be a variable declaration. But observe that we don't discover that fact until we see either the keyword constant (as in the first case) or an identifier which could be a type (as in the second case).
What that means is that we need to avoid forcing the parser to make any decision involving the a and the : until the next token is available. In particular, we cannot force it to decide whether the a is just an IDEN, or the first (or only) element in an iden_list.
iden_list is needed to parse
...; a , b : Integer ; ...
but that's not a problem, since the , is a definite sign that we have a list. So the resolution has to include handling a : Integer without reducing a to an iden_list. And that requires an (apparently) redundant production:
var_block ::= /* empty */
| dec_var var_block ;
dec_var ::= iden_list TWOPOINT type SEMICOLON
| IDEN TWOPOINT type SEMICOLON ;
iden_list ::= IDEN COMMA IDEN
| iden_list COMMA IDEN ;
(Note: I changed valid_types to type because valid is redundant -- only valid syntaxes are parsed -- and because I think you should never use a plural name for a singular object; it confuses the reader.)
That's not quite enough, though, because we also need to avoid forcing the parser to decide whether the const_block needs to be reduced before the variable declaration. For that, we need something like the attempt you already made: remove the empty block definitions and instead provide eight different declaration_block productions, one for each possible combination of empty clauses. That will work fine, as long as you change the block definitions to be left-recursive rather than right-recursive. The right-recursive definition forces the parser to perform a reduction at the end of const_block, which means that it needs to know exactly where const_block ends with only one lookahead token.
On the whole, if you're going to use a bottom-up parser like CUP, you should make it a habit to use left-recursion unless you have a good reason not to (like defining a right-associative operator). There are a few exceptions, but on the whole left-recursion will produce fewer surprises, and in addition it will not burn through the parser stack on long inputs.
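To see why left-recursion is easier on the stack, consider a toy hand-simulation of a shift-reduce parser's stack for a simple list grammar. This is purely illustrative (it is not what CUP generates), but it shows where the stack growth comes from:

```python
def max_stack_left(n):
    """Simulate  list ::= list item | item  (left-recursive).
    Each item can be reduced as soon as it is shifted, so the stack stays flat."""
    stack, depth = [], 0
    for _ in range(n):
        stack.append("item")
        depth = max(depth, len(stack))
        if len(stack) >= 2 and stack[-2] == "list":
            stack[-2:] = ["list"]      # reduce: list item -> list
        else:
            stack[-1:] = ["list"]      # reduce: item -> list
    return depth

def max_stack_right(n):
    """Simulate  list ::= item list | item  (right-recursive).
    No reduction is possible until the last item, so all n items pile up first."""
    stack, depth = [], 0
    for _ in range(n):
        stack.append("item")
        depth = max(depth, len(stack))
    while len(stack) > 1:
        stack[-2:] = ["list"]          # reduce from the right end
    return depth

print(max_stack_left(1000), max_stack_right(1000))  # 2 1000
```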
Making all those changes, we end up with something like this, where:
The block definitions were changed to left-recursive definitions with a non-empty base case;
iden_list was forced to have at least two elements, and a "redundant" production was added for the one-identifier case;
The start production was divided into eight possible combinations in order to allow each of the three subclauses to be empty;
A few minor name changes were made.
declaration_block ::=
| var_block
| types_block
| types_block var_block
| const_block
| const_block var_block
| const_block types_block
| const_block types_block var_block
;
// Constant declaration
const_block ::= dec_const
| const_block dec_const ;
dec_const ::= IDEN TWOPOINT CONSTANT ASSIGN const_value SEMICOLON;
//Types declaration
types_block ::= dec_type
| types_block dec_type ;
dec_type ::= TYPE IDEN IS RECORD
reg_list
END RECORD SEMICOLON;
reg_list ::= dec_reg
| reg_list dec_reg;
dec_reg ::= IDEN TWOPOINT type SEMICOLON;
//Variable declaration
var_block ::= dec_var
| var_block dec_var;
dec_var ::= iden_list TWOPOINT type SEMICOLON
| IDEN TWOPOINT type SEMICOLON ;
iden_list ::= IDEN COMMA IDEN
| iden_list COMMA IDEN;
// common use
const_value ::= INT | boolean;
boolean ::= TRUE | FALSE;
type ::= primitive_type | IDEN;
primitive_type ::= INTEGER | BOOLEAN;

ANTLR4 not recognising a rule

in my g4 file, I have defined an integer like so:
INT: '0'
| '-'? [1-9] [0-9_]*
;
// no leading zeros are allowed!
A parser rule uses this like so:
versionDecl: PACK_VERSION_DECL INT;
However, when ANTLR comes across one, it doesn't recognise it, and throws a NullPointerException if I run ctx.INT().getText():
@Override
public void exitVersionDecl(VersionDeclContext ctx) {
System.out.println(ctx.INT().getText());
}
Log:
line 1:13 mismatched input '6' expecting INT
[...]
java.lang.NullPointerException
at com.blockypenguin.mcfs.MCFSCustomListener.exitVersionDecl(MCFSCustomListener.java:16)
at main.antlr.MCFSParser$VersionDeclContext.exitRule(MCFSParser.java:604)
at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:47)
at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:30)
at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
at com.blockypenguin.mcfs.Main.main(Main.java:40)
(Unrelated output omitted for brevity)
And finally, the input I am parsing:
pack_version 6
Why does ANTLR not recognise the integer? Any help appreciated, thank you :)
...
INT: '0'
| '-'? [1-9] [0-9_]*
;
// no leading zeros are allowed!
...
line 1:13 mismatched input '6' expecting INT
This error indicates that for the input 6, the lexer rule INT was not matched. This can happen if you have a lexer rule defined before the INT rule that also matches 6, like this for example:
DIGIT
: [0-9]
;
...
INT
: '0'
| '-'? [1-9] [0-9_]*
;
Now the input "6" (or any single digit) will be matched as a DIGIT token. Even if you have this in the parser part of your grammar:
parse
: INT
;
the input "6" will still be tokenised as a DIGIT token: the lexer is not "driven" by the parser, it operates on it's own 2 rules:
try to match as much characters as possible for a single lexer rule
in case 2 or more lexer rules match the same amount of characters, let the rule defined first "win"
So, the input "12" will be tokenised as an INT token (rule 1 applies here), and input "0" is tokenised as a DIGIT token (rule 2).

BNFC is not parsing individual functions

I have the following BNFC code:
GFDefC. GoalForm ::= Constraint ;
GFDefT. GoalForm ::= True ;
GFDefA. GoalForm ::= GoalForm "," GoalForm ;
GFDefO. GoalForm ::= GoalForm ";" GoalForm ;
ConFr. Constraint ::= Var "#" Term ;
TVar. Term ::= UnVar;
TFun. Term ::= Fun ;
FDef. Fun ::= FunId "(" [Arg] ")" ;
ADecl. Arg ::= Term ;
separator Arg "," ;
...
However, the following is not parsed
fun(X)
while it parses the one below
x # fun(Y)
so to sum up, it parses the function as a part of constraints, but not individually.
It should parse both of them.
Could anyone point out why?
You should set your entrypoints properly.
As you're parsing x # fun(Y) successfully, I assume you have set your entrypoints to Constraint and are using the generated pConstraint function to parse your expressions. Then you can change your rules for Constraint to
ConNoVar. Constraint ::= Term ;
ConFr. Constraint ::= Var "#" Term ;
Alternatively, you can add Term to your entrypoints and invoke pTerm to parse your function terms.

Must object names/descriptors be unique within an SNMP MIB module?

I have a vendor-provided MIB file where the same object name/descriptor is defined in two different tables in the same MIB. Unfortunately, I think the MIB is proprietary and can't post it here in its entirety. So I've created a similar sample Foobar.mib file that I've included at the end of this post.
My question is: Is there any way such a MIB is legal or could be considered valid?
Net::SNMP can print the tree of it and it looks like this:
+--foobar(12345678)
|
+--foo(1)
| |
| +--fooTable(1)
| |
| +--fooEntry(1)
| | Index: fooIndex
| |
| +-- -R-- INTEGER fooIndex(1)
| +-- -R-- String commonName(2)
|
+--bar(2)
|
+--barTable(1)
|
+--barEntry(1)
| Index: barIndex
|
+-- -R-- INTEGER barIndex(1)
+-- -R-- String commonName(2)
Note how commonName is defined under both fooTable and barTable in the very same MIB (see my sample Foobar.mib below).
This confuses Net::SNMP, since FooBarMib::commonName can now mean two different OIDs.
It would be grand to include a link to an RFC in a bug report for the vendor.
I've found that RFC 1155 - Structure and identification of management information for TCP/IP-based internets says:
Each OBJECT DESCRIPTOR corresponding to an object type in the
internet-standard MIB shall be a unique, but mnemonic, printable
string. This promotes a common language for humans to use when
discussing the MIB and also facilitates simple table mappings for
user interfaces.
Does this only apply to "internet-standard MIB"s and hence not to vendor MIBs?
I've also found RFC 2578 - Structure of Management Information Version 2 (SMIv2) that says:
For all descriptors appearing in an information module, the descriptor shall be unique and mnemonic, and shall not exceed 64 characters in length.
But does a MIB for an SNMP v1 agent also have to adhere to RFC 2578? The SNMP agent implementing the MIB only supports SNMP v1 for whatever reason, and RFC 2578 has SMIv2 in the title, where the 2 worries me a little. However, the MIB itself does import from SMIv2, FWIW.
I've found two internet references that say that object names / descriptors must be unique within a MIB, but without a source reference:
Andrew Komiagin in "SNMP OID with non-unique node names" here on SO says:
MIB Object names must be unique within entire MIB file.
and Dave Shield on the Net::SNMP mailing list says:
Within a given MIB module, all object names must be unique.
Both the objects defined within that MIB, and objects explicitly
IMPORTed. You can't have two objects with the same name,
both referenced in the same MIB.
I'd love to get a standards / RFC reference for either of those two equivalent statements.
Sample Foobar.mib
This defines commonName as both ::={ fooEntry 2 } and further down as ::={ barEntry 2 } also:
-- I've changed the MIB module name.
FooBarMib DEFINITIONS ::= BEGIN
IMPORTS sysName, sysLocation FROM SNMPv2-MIB;
IMPORTS enterprises, OBJECT-TYPE FROM SNMPv2-SMI;
-- I've provided a fake name and enterprise ID here
foobar OBJECT IDENTIFIER::= {enterprises 12345678}
foo OBJECT IDENTIFIER::={ foobar 1 }
fooTable OBJECT-TYPE
SYNTAX SEQUENCE OF FooEntry
MAX-ACCESS not-accessible
STATUS current
::={ foo 1 }
fooEntry OBJECT-TYPE
SYNTAX FooEntry
MAX-ACCESS not-accessible
STATUS current
INDEX { fooIndex }
::={ fooTable 1 }
FooEntry ::= SEQUENCE{
fooIndex INTEGER,
commonName OCTET STRING,
-- other leaves omitted
}
fooIndex OBJECT-TYPE
SYNTAX INTEGER
MAX-ACCESS read-only
STATUS current
::={ fooEntry 1 }
commonName OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"Label for the commonEntry"
::={ fooEntry 2 }
bar OBJECT IDENTIFIER::={ foobar 2 }
barTable OBJECT-TYPE
SYNTAX SEQUENCE OF BarEntry
MAX-ACCESS not-accessible
STATUS current
::={ bar 1 }
barEntry OBJECT-TYPE
SYNTAX BarEntry
MAX-ACCESS not-accessible
STATUS current
INDEX { barIndex }
::={ barTable 1 }
BarEntry ::= SEQUENCE{
barIndex INTEGER,
commonName OCTET STRING,
-- other leaves omitted
}
barIndex OBJECT-TYPE
SYNTAX INTEGER
MAX-ACCESS read-only
STATUS current
::={ barEntry 1 }
commonName OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"Label for the commonEntry"
::={ barEntry 2 }
END
Unfortunately, enterprises can do whatever they want. If they want to play nice, they are advised to adhere to the rules. Details at https://www.rfc-editor.org/rfc/rfc2578#section-3
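If you want to flag such duplicates yourself before filing the bug report, a rough Python sketch that scans a MIB file's text for descriptors defined more than once (a crude regex pass over one module, not a real SMI parser):

```python
import re
from collections import Counter

def duplicate_descriptors(mib_text):
    """Return descriptors defined via OBJECT-TYPE more than once in the text."""
    names = re.findall(r"^\s*([a-z][\w-]*)\s+OBJECT-TYPE", mib_text, re.M)
    return [name for name, count in Counter(names).items() if count > 1]

# Condensed stand-in for the sample Foobar.mib above.
sample = """
fooIndex OBJECT-TYPE
commonName OBJECT-TYPE
barIndex OBJECT-TYPE
commonName OBJECT-TYPE
"""
print(duplicate_descriptors(sample))  # ['commonName']
```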

Extracting information from a scanned GS1-type barcode

I want to determine product information such as the description, manufacturer and expiry date from a scanned GS1 barcode message.
How can I do that?
There are two processes involved in obtaining the information represented by a GS1-type barcode that stores data in GS1 Application Identifier Standard Format.
Extraction of the data fields (referred to as Application Identifiers) contained within the GS1-structured data obtained by scanning the symbol. This always includes a unique identifier for the item called a GTIN-14 and may include supplementary information such as an expiry date, LOT number, etc.
This process can be performed by a standalone application.
Lookup of the extracted GTIN in a database, either local to your application or via some public API, to provide a textual representation of the country of origin, manufacturer and possibly the item description.
To perform this process comprehensively an application requires access to external resources.
Background: GS1 Application Identifier Standard Format Composition
GS1-formatted data consists of a concatenated list of Application Identifiers (AIs) and values, beginning with AI (01) which represents the GTIN.
For example, the data "(01) 95012345678903 (10) 000123 (17) 150801" represents the following information:
GTIN: 95012345678903
BATCH/LOT: 000123
USE BY OR EXPIRY: 1st August 2015
Section 3: GS1 Application Identifier Definitions of the GS1 General Specifications provides the meaning of each of the Application Identifiers and importantly also states whether the AI values are by definition variable-length or fixed-length in which case the mandatory length is provided.
GS1 barcodes use a special non-data character (FNC1) both to indicate that the data conforms to GS1 Application Identifier standard format and to delimit the end of a variable-length data field from the next AI. For example, the above data could be encoded in a Code 128 symbol as {FNC1}019501234567890310000123{FNC1}17150801 to produce a GS1-128 symbol.
When this symbol is read by a barcode scanner it is decoded as follows[†]:
019501234567890310000123{GS}17150801
Note that the initial FNC1 non-data character has been discarded and the FNC1 used in the variable-length AI separator role has been represented by a GS character (ASCII value 29).
Extraction (and optionally validation)
Extraction of the GTIN and any supplementary information can be performed directly by your application.
To extract the original Application Identifier data from the decoded symbol data, your application needs a data structure, which we shall refer to as AI-TABLE, that maps AI prefixes to the lengths of their values, as derived from the section of the GS1 General Specifications linked to above:
AI | N (value length)
-------------------------
(00) | 18
(01) | 14
(10) | variable
(17) | 6
(240) | variable
(310n) | 6
(37) | variable
...
With this available you can proceed with AI-value extraction from the scanned barcode data as follows:
while more data:
    AI, N = entry from AI-TABLE matching a prefix of the data, otherwise FAIL
    if N is fixed-length:
        VALUE = next N characters
    else (N is variable-length):
        VALUE = characters until GS or end of data
    emit: (AI) VALUE
In practice you may choose to include more of the data from the General Specifications in your AI-TABLE, to permit your application to perform enhanced validation of each VALUE's type and length. However, the above is sufficient to extract the given data, such as AI (17) representing the expiry date which you are looking for.
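The pseudocode above translates fairly directly into runnable code. A minimal Python sketch with a deliberately tiny AI-TABLE; a real implementation would carry the full table from the General Specifications:

```python
GS = "\x1d"  # FNC1 in its separator role is reported as ASCII 29

# Tiny excerpt of AI-TABLE: AI prefix -> fixed value length, or None for variable.
AI_TABLE = {"00": 18, "01": 14, "10": None, "17": 6}

def extract_ais(data):
    """Return a list of (AI, value) pairs from decoded GS1 element-string data."""
    result = []
    while data:
        # AIs are 2-4 digits; try the longest known prefix first.
        for k in (4, 3, 2):
            ai = data[:k]
            if ai in AI_TABLE:
                break
        else:
            raise ValueError("unknown AI at: " + data)
        data = data[len(ai):]
        n = AI_TABLE[ai]
        if n is not None:                 # fixed-length value
            value, data = data[:n], data[n:]
        else:                             # variable-length: up to GS or end of data
            value, _, data = data.partition(GS)
        result.append((ai, value))
    return result

print(extract_ais("019501234567890310000123" + GS + "17150801"))
# [('01', '95012345678903'), ('10', '000123'), ('17', '150801')]
```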
Update August 2022: GS1 has recently released the GS1 Syntax Engine, a C library that is a reference implementation for processing GS1 Application Identifier syntax scan data: https://github.com/gs1/gs1-syntax-engine
Lookup
To obtain the remaining data that you are interested in (which is not directly encoded in the barcode) such as the item's name and manufacturer details requires that you look up the extracted GTIN using external resources such as a local product database or one of the public UPC database APIs that are available.
The GTIN itself contains a country of origin (actually it represents the national GS1 Member Organisation with which the manufacturer is registered, so not quite country of origin), manufacturer identifier – together these are referred to as the GS1 Prefix, are variable-length and are assigned by GS1 – and the remainder of the digits represent the product code which is assigned freely by the manufacturer.
Given a GTIN, some UPC databases will provide only details relating to the GS1 Prefix such as a textual representation of the GS1 Member Organisation and the manufacturer. Others attempt to maintain a record of individual GTIN assignments to common items, however this data will always be somewhat incomplete and out of date as there is no mandatory registry of real time GTIN assignments.
The answers to this question provide some examples of free product information platforms.
[†] In fact you might see ]C1019501234567890310000123{GS}17150801 in which case the leading symbology identifier for GS1-128 ]C1 can be discarded.
This is a solution written in JavaScript, proven with one specific customer; generalising it requires more work:
// define AIs, parameter names and, optionally, transformation functions
const SapApplicationIdentifiers = [
{ ai: '00', regex: /^00(\d{18})/, parameter: 'SSCC'},
{ ai: '01', regex: /^01(\d{14})/, parameter: 'EAN'},
{ ai: '02', regex: /^02(\d{14})/, parameter: 'EAN'},
{ ai: '10', regex: /^10([^\u001D]{1,20})/, parameter: 'LOTE'},
{ ai: '13', regex: /^13(\d{6})/},
{ ai: '15', regex: /^15(\d{6})/, parameter: 'F_CONS', transform: function(match){ return '20'+match[1].substr(0,2)+'-'+match[1].substr(2,2)+'-'+match[1].substr(4,2);}},
{ ai: '17', regex: /^17(\d{6})/, parameter: 'F_CONS', transform: function(match){ return '20'+match[1].substr(0,2)+'-'+match[1].substr(2,2)+'-'+match[1].substr(4,2);}},
{ ai: '19', regex: /^19(\d{6})/, parameter: 'F_CONS', transform: function(match){ return '20'+match[1].substr(0,2)+'-'+match[1].substr(2,2)+'-'+match[1].substr(4,2);}},
{ ai: '21', regex: /^21([\d\w]{1,20})/}, // serial number
{ ai: '30', regex: /^30(\d{1,8})/},
{ ai: '310', regex: /^310(\d)(\d{6})/, parameter: 'NTGEW', transform: function(match){ return parseInt( match[2] ) / Math.pow( 10,parseInt( match[1] ) )}},
{ ai: '320', regex: /^320(\d)(\d{6})/, parameter: 'NTGEW', transform: function(match){ return parseInt( match[2] ) / Math.pow( 10,parseInt( match[1] ) )}},
{ ai: '330', regex: /^330(\d)(\d{6})/},
{ ai: '37', regex: /^37(\d{1,8})/, parameter: 'CANT'}
];
//walks through the code, removing recognized fields
function parseAiByAi(code, mercancia, onError ){
var match;
if(!code)
return;
SapApplicationIdentifiers.forEach(function(AI){
if(code.indexOf(AI.ai)==0 && AI.regex.test(code)){
match= AI.regex.exec( code );
if(AI.parameter){
if(typeof AI.transform === 'function'){
mercancia[AI.parameter] = AI.transform(match);
}else
mercancia[AI.parameter]= match[1];
if(AI.parameter=="NTGEW"){
mercancia.NTGEW_IA= AI.ai;
}
}
code= code.replace(match[0],'').replace(/^[\0\u001D]/,'');
parseAiByAi(code, mercancia, onError);
}
});
}
parseAiByAi(code, mercancia, onError);
You could try using the UPC Database API. They have no guarantee of uptime, however, and they limit you to 1000 queries per day. I was also able to find this API, which charges $1/1000 calls. Good luck!
