Read all the available records in EMV at once - NFC

What simple APDU command can I use to read all the data from the available records and SFIs?
0x00,0xB2,0x01,0x0C,0x00
Currently I am using the above to read data from SFI 1, record 1.
It works and I get data plus SW 90 00.
When I try another SFI and record combination, like
0x00,0xB2,0x01,0x10,0x00
I get a "record not found" status word.
So what APDU command can I use to read data from all possible/available records?

You should not use any hard-coded values. Get the AFL (Application File Locator) from the GET PROCESSING OPTIONS (GPO) response, and then use it to form your READ RECORD APDUs. If the AFL does not exist, you do not need to issue any READ RECORDs.
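For illustration, a minimal Java sketch of turning an AFL into READ RECORD commands (the AFL bytes and the class name are only assumptions to show the byte layout): each 4-byte AFL entry carries the SFI in the top 5 bits of its first byte, then the first and last record numbers, so P1 is the record number and P2 is (SFI << 3) | 0x04.

import java.util.ArrayList;
import java.util.List;

public class AflToReadRecords {

    // Example AFL: SFI 1 records 1-3, SFI 2 records 1-2 (illustrative values only).
    static final byte[] SAMPLE_AFL = { 0x08, 0x01, 0x03, 0x00, 0x10, 0x01, 0x02, 0x00 };

    public static List<byte[]> buildReadRecordApdus(byte[] afl) {
        List<byte[]> commands = new ArrayList<>();
        for (int i = 0; i + 3 < afl.length; i += 4) {
            int sfi   = (afl[i] & 0xF8) >> 3;   // SFI sits in the upper 5 bits
            int first = afl[i + 1] & 0xFF;      // first record number
            int last  = afl[i + 2] & 0xFF;      // last record number
            for (int rec = first; rec <= last; rec++) {
                byte p2 = (byte) ((sfi << 3) | 0x04);   // "record referenced by SFI" per EMV Book 3
                commands.add(new byte[] { 0x00, (byte) 0xB2, (byte) rec, p2, 0x00 });
            }
        }
        return commands;
    }
}

Each command can then be sent to the card (for example with IsoDep.transceive on Android) and should return the record data followed by SW 90 00.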

Related

2 different Readers to fill an item in the same step

I have this situation: I have a CSV file with some information, and I have to complete this information with records from a database table and write a new file with the combined info.
I guess that I should use a multiple-reader implementation, one to read my file and another to read my database (something like this example: https://bigzidane.wordpress.com/2016/09/15/spring-batch-multiple-sources-as-input/). But I need to pass conditions to the query based on the current item being processed.
Anyway, if it is possible, I need to configure the query in my reader2 with info obtained in my reader1. How could I do this?
This is a little summary of my problem:
Input File (Reader1)
Id;data1;data2;data3
Database (Reader2)
Id|data4;data5;data6
Output File
Id;data1;data2;data3;data4;data5;data6
Sorry for my English. Any link to articles or docs is welcome.
This is a common pattern known as the "driving query pattern" and is described in the Common patterns section of the reference documentation.
You can use a FlatFileItemReader to read your input file and an item processor to query the database for the current item and enrich it with additional data.
Another idea is to load your flat file in a staging table in the database and use a database item reader to join data.
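A rough sketch of the processor-based approach (the class names, SQL, and column names are illustrative assumptions, not taken from your files):

import java.util.Map;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.jdbc.core.JdbcTemplate;

// Assumed POJO for one input line (Id;data1;data2;data3) plus the enriched columns.
class CsvRow {
    public String id, data1, data2, data3, data4, data5, data6;
}

// The step reads CsvRow items with a FlatFileItemReader; this processor runs the
// "driving query" for each item and copies the extra columns onto it.
public class EnrichProcessor implements ItemProcessor<CsvRow, CsvRow> {

    private final JdbcTemplate jdbcTemplate;

    public EnrichProcessor(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public CsvRow process(CsvRow item) {
        Map<String, Object> extra = jdbcTemplate.queryForMap(
                "SELECT data4, data5, data6 FROM my_table WHERE id = ?", item.id);
        item.data4 = (String) extra.get("data4");
        item.data5 = (String) extra.get("data5");
        item.data6 = (String) extra.get("data6");
        return item;
    }
}

A FlatFileItemWriter can then write Id;data1;data2;data3;data4;data5;data6 to the output file.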

Spring Batch, read whole csv file before reading line by line

I want to read a CSV file, enrich each row with some data from an external system, and then write the new enriched CSV to some directory.
Now, to get the data from the external system, I need to pass each row one by one and get the new columns back from the external system.
But to query the external system with each row, I need to pass a value which I first get from the external system by sending it all the values of a particular column.
e.g. my CSV file is:
name, value, age
10,v1,12
11,v2,13
So to enrich the rows, I first need to compute the total age, i.e. 12 + 13, get the corresponding value for that total from the external system, and then send that total together with each row to the external system to get the enriched value.
I am doing it using Spring Batch, but with FlatFileItemReader I can read only one line at a time. How would I refer to the whole column before that?
Please help.
Thanks
There are two ways to do this.
OPTION 1
Go for this option if you are okay with keeping all the records in memory; it totally depends on how many records you need to calculate the total age.
Reader (custom reader):
Write the logic to read one line at a time.
Return null from read() only once all the lines needed for calculating the total age have been read.
NOTE: a reader's read() method is called in a loop until it returns null.
Processor: you will get the full list of records; calculate the total age.
Connect to the external system and get the value. Form the records which need to be written and return them from the process method.
NOTE: you can return all the records with a particular field modified, or merge them into a single record. This is totally your choice.
Writer: write the records.
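One way to sketch OPTION 1 (the class names, file path, and CSV layout are assumptions based on the example above): the reader hands the whole file to the processor as a single item and then returns null.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;

// Reader: returns the whole file as one item, then null so the step ends.
public class WholeFileReader implements ItemReader<List<String>> {

    private final Path input = Path.of("input.csv");   // illustrative path
    private boolean consumed = false;

    @Override
    public List<String> read() throws Exception {
        if (consumed) {
            return null;                      // null tells Spring Batch there are no more items
        }
        consumed = true;
        return Files.readAllLines(input);     // every CSV line in a single item
    }
}

// Processor: with all lines in hand, compute the total age, call the external
// system once, then build the enriched records for the writer.
class TotalAgeProcessor implements ItemProcessor<List<String>, List<String>> {

    @Override
    public List<String> process(List<String> lines) {
        int totalAge = lines.stream()
                .skip(1)                                            // skip the header row
                .mapToInt(l -> Integer.parseInt(l.split(",")[2].trim()))
                .sum();
        // String total = externalSystem.totalFor(totalAge);        // hypothetical external call
        // ...enrich each line with 'total' before returning them...
        return lines;
    }
}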
OPTION 2
Go for this if option 1 is not feasible.
Step 1: read all the lines, calculate the total age, and pass the value to the next step.
Step 2: read all the lines again, update the records as required, and write them.
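For OPTION 2, one common way to pass the total to the next step is through the job's ExecutionContext; a rough sketch (the key name and the hard-coded total are illustrative, and the step-2 bean must be step-scoped for the late binding to work):

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;

// Attached to step 1: after the step finishes, store the total in the job-level context.
public class PromoteTotalAgeListener implements StepExecutionListener {

    @Override
    public void beforeStep(StepExecution stepExecution) {
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        int totalAge = 25;   // 12 + 13 from the example; in reality computed by step 1
        stepExecution.getJobExecution().getExecutionContext().putInt("totalAge", totalAge);
        return stepExecution.getExitStatus();
    }
}

// In a step-scoped bean of step 2, the value can then be injected with:
//   @Value("#{jobExecutionContext['totalAge']}") private Integer totalAge;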

Retrieve multiple tables with snmp4j TableUtils

Documentation for snmp4j TableUtils implies the getTable method can be used to retrieve more than one table. Does anyone know how to use it in that manner? It's just not intuitive to me. I'm wondering whether I just put the columns for table 1 and table 2 in the OID argument, and the table utils will be able to separate them all out so that I'll just have to distinguish them in the list of TableEvents (rows) that are returned?
http://www.snmp4j.org/doc/org/snmp4j/util/TableUtils.html
I have tried the same situation as you have posted here. While trying out OIDs from different tables I reached the following conclusion, and I'm not sure whether it is the way they intended it. The VariableBinding[] we get as output contains the results in the order in which we pass the OIDs into the array, and thereby we can match the input to the output.
For example, input:
new OID[] { ".1.3.6.1.2.1.2.2.1.2", ".1.3.6.1.2.1.25.4.2.1.2" }
output, one VariableBinding[] per row:
[ "1.3.6.1.2.1.2.2.1.2.1 = somevalue", "1.3.6.1.2.1.25.4.2.1.2.1 = System Idle Process" ]
[ "1.3.6.1.2.1.2.2.1.2.2 = somevalue", null ]
...
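For what it's worth, a minimal sketch of that kind of call against the classic snmp4j 2.x API (the target address, community string, and timeouts are assumptions):

import java.util.Arrays;
import java.util.List;
import org.snmp4j.CommunityTarget;
import org.snmp4j.Snmp;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;
import org.snmp4j.util.DefaultPDUFactory;
import org.snmp4j.util.TableEvent;
import org.snmp4j.util.TableUtils;

public class TwoTablesExample {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));
        target.setAddress(GenericAddress.parse("udp:127.0.0.1/161"));
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(1);
        target.setTimeout(3000);

        TableUtils utils = new TableUtils(snmp, new DefaultPDUFactory());
        // One column from ifTable and one from hrSWRunTable, as in the example above.
        OID[] columns = { new OID("1.3.6.1.2.1.2.2.1.2"), new OID("1.3.6.1.2.1.25.4.2.1.2") };

        List<TableEvent> rows = utils.getTable(target, columns, null, null);
        for (TableEvent row : rows) {
            if (row.isError()) {
                continue;                               // skip rows that failed
            }
            // Columns come back in the same order as 'columns'; an entry is null
            // when that table has no value for the row's index.
            VariableBinding[] vbs = row.getColumns();
            System.out.println("index " + row.getIndex() + " -> " + Arrays.toString(vbs));
        }
        snmp.close();
    }
}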
Hope it was of some use to you.
Regards
Ajin

Reading XML-files with StAX / Kettle (Pentaho)

I'm doing an ETL process with Pentaho (Spoon/Kettle) where I'd like to read an XML file and store element values to a DB.
This works just fine with the "Get data from XML" component... but the XML file is quite big, several gigabytes, and therefore reading the file takes too long.
Pentaho Wiki says:
The existing Get Data from XML step is easier to use but uses DOM parsers that need in-memory processing, and even the purging of parts of the file is not sufficient when these parts are very big.
The XML Input Stream (StAX) step uses a completely different approach to solve use cases with very big and complex data structures and the need for very fast data loads...
Therefore I'm now trying to do the same with StAX, but it just doesn't seem to work out as planned. I'm testing this with an XML file which only has one element group. The file is read and then mapped/inserted into the table... but now I get multiple rows in the table where all the values are "undefined", and some rows where I have the right values. In total I have 92 rows in the table, even though it should only have one row.
Flow goes like:
1) read with StAX
2) Modified Java Script Value
3) Output to DB
At step 2) I'm doing the following:
var id;
if (xml_data_type_description.equals("CHARACTERS") &&
    xml_path.equals("/labels/label/id")) {
  id = xml_data_value;
}
...
I'm using positional-staz.zip from http://forums.pentaho.com/showthread.php?83480-XPath-in-Get-data-from-XML-tool&p=261230#post261230 as an example.
How to use StAX for reading XML-file and storing the element values to DB?
I've been trying to look for examples but haven't found much. The above example uses a "Filter Rows" component before inserting the rows. I don't quite understand why it's being used; can't I just map the values I need? It might be that this problem occurs because I don't use, or don't know how to use, the Filter Rows component.
Cheers!
I posted a possible StAX-based solution on the forum listed above, but I'll post the gist of it here since it is awaiting moderator approval.
Using the StAX parser, you can select just those elements that you care about, namely those with a data type of CHARACTERS. For the forum example, you basically need to denormalize the rows in sets of 4 (EXPR, EXCH, DATE, ASK). To do this you add the row number to the stream (using an Add Sequence step) then use a Calculator to determine a "bucket number" = INT((rownum-1)/4). This will give you a grouping field for a Row Denormaliser step.
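For example, with the four fields per logical row (EXPR, EXCH, DATE, ASK), CHARACTERS rows 1 to 4 all get bucket INT((rownum-1)/4) = 0, rows 5 to 8 get bucket 1, and so on, so every group of four rows shares one bucket number for the Row Denormaliser to group on.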
When the post is approved, you'll see a link to a transformation that uses StAX and the method I describe above.
Is this what you're looking for? If not please let me know where I misunderstood and maybe I can help.

Store some fields from Pig to HBase

I am trying to extract some part of a string and store it to HBase in columns.
File content:
msgType1 Person xyz has opened Internet:www.google.com from IP:192.123.123.123 for duration 00:15:00
msgType2 Person xyz denied for opening Internet:202.x.x.x from IP:192.123.123.123 reason:unautheticated
msgType1 Person xyz has opened Internet:202.x.x.x from IP:192.123.123.123 for duration 00:15:00
The pattern of the messages corresponding to each msgType is fixed. Now I am trying to store the person name, destination, source, duration, etc. in HBase.
I am trying to write a script in Pig to do this task.
But I am stuck at the extracting part (extracting the IP or website name from the 'Internet:202.x.x.x' token inside the string).
I tried a regular expression but it's not working for me. The regex always throws this error:
ERROR 1045: Could not infer the matching function for org.apache.pig.builtin.REGEX_EXTRACT as multiple or none of them fit. Please use an explicit cast.
Is there any other way to extract these values and store them to HBase, in Pig or outside of Pig?
How do you use the REGEX_EXTRACT function? Have you seen the REGEX_EXTRACT_ALL function? According to the documentation (http://pig.apache.org/docs/r0.9.2/func.html#regex-extract-all), it should look like this:
test = LOAD 'test.csv' USING org.apache.pig.builtin.PigStorage(',') AS (key:chararray, value:chararray);
test = FOREACH test GENERATE FLATTEN(REGEX_EXTRACT_ALL (value, '(\\S+):(\\S+)')) as (match1:chararray, match2:chararray);
DUMP test;
My test file is like this:
1,a:b
2,c:d
3,
I know it's easy to be lazy and not take the extra step, but you really should use a user-defined function here. Pig is good as a data-flow language and not much else, so to get the full power out of it you are going to need a lot of UDFs to go through text and do more complicated operations.
The UDF will take a single string as a parameter and return a tuple that represents (person, destination, source, duration). To use it, you'll do:
A = LOAD ...
...
B = FOREACH A GENERATE MyParseUDF(logline);
...
STORE B INTO ...
You didn't mention what your HBase row key is, but be sure it's the first element in the relation before storing it.
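A minimal sketch of such a UDF (the class name MyParseUDF matches the skeleton above; the regular expression is an illustrative assumption based on the sample msgType1 lines):

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class MyParseUDF extends EvalFunc<Tuple> {

    // Illustrative pattern for the msgType1 lines shown in the question.
    private static final Pattern MSG = Pattern.compile(
            "Person (\\S+) has opened Internet:(\\S+) from IP:(\\S+) for duration (\\S+)");
    private static final TupleFactory FACTORY = TupleFactory.getInstance();

    @Override
    public Tuple exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        Matcher m = MSG.matcher(input.get(0).toString());
        if (!m.find()) {
            return null;                       // lines that don't match are dropped
        }
        Tuple out = FACTORY.newTuple(4);       // (person, destination, source, duration)
        out.set(0, m.group(1));
        out.set(1, m.group(2));
        out.set(2, m.group(3));
        out.set(3, m.group(4));
        return out;
    }
}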
