APDU always gets response trailer SW1 SW2 = 0x67 0x00 - nfc

I'm working with a PN5180 module to read data from my ePassport (ICAO 9303). I can send RATS/ATS and PPS, so technically I can now exchange data using APDU commands. First, I tried to select LDS1, but no matter what I tried, I always got SW1 SW2 = 0x67 0x00, which means "Wrong length".
Here is my trace:
RATS: 0xE0 0x80
ATS: 0E 78 77 D4 03 4D 4B 6A 43 4F 53 2D 33 37
PPS: 0xD0 0x11 0x00
PPS_resp: 0xD0
APDU_SELECT: 0x0A 0x00 0x00 0x00 0xA4 0x04 0x0C 0x07 0xA0 0x00 0x00 0x02 0x47 0x10 0x01
APDU_SELECT_resp: 0x0A 0x00 0x67 0x00
So maybe the INF in my APDU_SELECT is incorrect, but the thing is that I have used a PN532 to communicate and could read my ePassport with the same INF (using InListPassiveTarget and InDataExchange).
If anyone who sees this post has worked with the PN5180 or smart cards before, please let me know.

To everybody who is stuck on this problem like me:
It was my fault, because I had read the 2001 version of ISO 14443-4, but ISO has updated it to the 2018 version. The difference is in the block format.
Take a look at the block format: v2018 prepends one extra byte as the first byte of the block, a length byte, which v2001 does not include in its format.
That's why, whatever I tried, my ePassport always took 0x0C (right before 0x07, which is the true Lc) as the Lc, and therefore returned '6700' "Wrong length" every time.
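For reference, here is a minimal C sketch (my own illustration, not taken from ICAO 9303 or ISO 14443) of how the SELECT APDU in my trace is built, just to make clear that 0x07 is the real Lc (the AID length) and 0x0C is only P2:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Build a SELECT-by-AID APDU (CLA INS P1 P2 Lc AID). Only meant to show
   which byte of the trace above is the real Lc. */
static size_t build_select_aid(const uint8_t *aid, uint8_t aid_len, uint8_t *out)
{
    out[0] = 0x00;      /* CLA                          */
    out[1] = 0xA4;      /* INS: SELECT                  */
    out[2] = 0x04;      /* P1: select by DF name (AID)  */
    out[3] = 0x0C;      /* P2: no response data wanted  */
    out[4] = aid_len;   /* Lc: length of the AID        */
    memcpy(out + 5, aid, aid_len);
    return 5u + aid_len;
}

int main(void)
{
    const uint8_t lds1_aid[] = { 0xA0, 0x00, 0x00, 0x02, 0x47, 0x10, 0x01 };
    uint8_t apdu[32];
    size_t n = build_select_aid(lds1_aid, sizeof lds1_aid, apdu);
    for (size_t i = 0; i < n; i++)
        printf("%02X ", apdu[i]);   /* prints 00 A4 04 0C 07 A0 00 00 02 47 10 01 */
    printf("\n");
    return 0;
}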
Link about ISO14443-4 2018.
Good luck to everybody reading my post, and thank you so, so much, Maarten Bodewes!

I'm going to make a bit of a guess here; it may be the answer and it doesn't fit in a comment.
If you look at the I-block, then the final nibble is defined as follows:
b4 : CID following, if bit is set to 1
b3 : NAD following, if bit is set to 1
b2 shall be set to 1
b1 : Block number
I presume that this is the first APDU, so the block number is probably the initialized one. However, as you provide both a CID and a NAD, I presume that you need to set both of those bits to 1. That would make a nibble with value 1110, which translates to E, not A. I would assume that to the card the start of the APDU is then off by one byte, so it receives 0x0C instead of 0x07 as the Lc.
I don't see anything particularly wrong with the APDU, so I presume that the error lies outside of it.
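A small C sketch (my own shorthand, not text from the standard) of that PCB nibble, so you can check which value you are actually sending:

#include <stdint.h>

/* Build the low nibble of an I-block PCB from the bits listed above:
   b4 = CID following, b3 = NAD following, b2 = 1, b1 = block number. */
static uint8_t iblock_pcb(int cid_following, int nad_following, int block_num)
{
    uint8_t pcb = 0x02;                  /* b2 shall be set to 1     */
    if (cid_following) pcb |= 0x08;      /* b4: CID follows          */
    if (nad_following) pcb |= 0x04;      /* b3: NAD follows          */
    pcb |= (uint8_t)(block_num & 1);     /* b1: current block number */
    return pcb;
}

/* iblock_pcb(1, 0, 0) == 0x0A (CID only, block 0, what you send now)
   iblock_pcb(1, 1, 0) == 0x0E (CID and NAD, block 0)                */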

Related

Why am I getting an unexpected `0xcc` byte when loading nearby code bytes? Is it because of segment register %es?

I got an inconsistent result from an instruction.
I don't know why this happens, so I suspect the %es register is doing something weird, but I'm not sure.
Look at the code snippet below.
08048400 <main>:
8048400: bf 10 84 04 08 mov $HERE,%edi
8048405: 26 8b 07 mov %es:(%edi),%eax # <----- Result 1
8048408: bf 00 84 04 08 mov $main,%edi
804840d: 26 8b 07 mov %es:(%edi),%eax # <----- Result 2
08048410 <HERE>:
8048410: 11 11 adc %edx,(%ecx)
8048412: 11 11 adc %edx,(%ecx)
Result 1:
%eax : 0x11111111
Seeing this result, I guessed that mov %es:(%edi),%eax behaves like mov (%edi),%eax, because 0x11111111 is stored at HERE.
Result 2:
%eax : 0x048410cc
However, Result 2 was quite different.
I expected %eax to be 0x048410bf, because that value is stored at main.
But the result was different, as you can see.
Question:
Why does this inconsistency happen?
By the way, the value of %es was always 0x7b during execution of both instructions.
es is a red herring. The difference you see is one byte at main, cc vs. bf. That is because you set a software breakpoint at main and your debugger inserted an int3 instruction, which has machine code cc, temporarily overwriting your actual code.
Do not set a breakpoint where you intend to read from, or use a hardware breakpoint instead, which does not modify the code.
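If you want to see this for yourself, here is a tiny sketch (mine, not the original code) that dumps the first bytes of main at run time. Run it plainly and the first byte is the real opcode (bf); run it under a debugger with a software breakpoint on main and the same byte reads back as cc:

#include <stdio.h>

int main(void)
{
    /* Reading code bytes through a data pointer is not strictly portable,
       but it works for this experiment on x86 Linux. */
    const unsigned char *p = (const unsigned char *)main;
    for (int i = 0; i < 5; i++)
        printf("%02x ", p[i]);   /* first byte: bf normally, cc with a breakpoint set here */
    printf("\n");
    return 0;
}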

getting extra bytes 82 00 in pc/sc response

I am trying to read data from a Sony FeliCa card using a PC/SC transparent session and the transceive data object.
The response I am getting for a Read Without Encryption command is
c0 03 00 90 00 92 01 00 96 02 00 00 97 82 00 + Data
But according to the protocol, the response should be
c0 03 00 90 00 92 01 00 96 02 00 00 97 + Data
I am unable to figure out the last 82 00 appended in the response from the card.
Now when I try to authenticate with the card I get
c0 03 01 6F 01 90 00
which is an error in PC/SC. I want to resolve these extra bytes 82 00, which I believe will solve the issue with all the commands that require authentication and encryption.
The response data is BER-TLV encoded (see PC/SC 2.02, Part 3).
In BER-TLV encoding there are several possibilities to encode tag 0x97 with two octets of data 0xD0D1, e.g.:
97|02|D0D1 -- short form
97|8102|D0D1 -- long form with one length octet
97|820002|D0D1 -- long form with two length octets
97|83000002|D0D1 -- long form with three length octets
...
Your reader is using two octets for sending the length of the ICC Response data object (which is perfectly valid).
You should parse the response properly... Good luck!
PS: The above means that the Data part of your truncated responses still contains one extra byte of the response length (i.e. Len|Data).
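A small C sketch (my own, not from the PC/SC specification) of a length parser that accepts both the short and long forms listed above:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Parse a BER-TLV length at buf[*off]. Handles the short form (one octet,
   0x00..0x7F) and the long form (0x81..0x84 followed by that many length
   octets). Returns the decoded length, or -1 on error. */
static long parse_ber_length(const uint8_t *buf, size_t buf_len, size_t *off)
{
    if (*off >= buf_len) return -1;
    uint8_t first = buf[(*off)++];
    if ((first & 0x80) == 0)               /* short form: the length itself */
        return first;
    int num_octets = first & 0x7F;         /* long form: number of octets   */
    if (num_octets == 0 || num_octets > 4 || *off + num_octets > buf_len)
        return -1;
    long len = 0;
    for (int i = 0; i < num_octets; i++)
        len = (len << 8) | buf[(*off)++];
    return len;
}

int main(void)
{
    const uint8_t resp[] = { 0x97, 0x82, 0x00, 0x02, 0xD0, 0xD1 };
    size_t off = 1;                        /* skip tag 0x97 */
    long len = parse_ber_length(resp, sizeof resp, &off);
    printf("length = %ld, value starts at offset %zu\n", len, off);  /* 2, 4 */
    return 0;
}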

Encoding conditional jump (jecxz) within inline assembly

I am trying to encode a jecxz instruction within inline assembly. The jecxz should jump to the next immediate instruction (i.e., the nop).
int main() {
    asm("lea -24(%rdi), %rcx");
    asm("jecxz $0x00");
    asm("nop");
}
But I am getting the following error.
gcc -o t main.c
main.c: Assembler messages:
main.c:7: Error: operand type mismatch for `jecxz'
What needs to be fixed here?
The most compatible solution is to write the line as follows:
asm("jecxz nextline; nextline:");
Regarding the asm("jecxz .+3") solution:
In 16-bit mode, a jcxz is encoded as e3 XX and a jecxz is encoded as 67 e3 XX
In 32-bit mode, a jecxz is encoded as e3 XX and a jcxz is encoded as 67 e3 XX
In 64-bit mode, a jrcxz is encoded as e3 XX and a jecxz is encoded as 67 e3 XX (jcxz is not available)
(Where XX is a signed-byte offset from the end of the instruction to the jump target)
So then, the line asm("jecxz .+3"); would assemble to 67 e3 00 in 16-bit and 64-bit code, and e3 01 in 32-bit code. The 32-bit case would be incorrect, as it would jump one byte past the end of the instruction, given that the 32-bit form is only two bytes wide.
If we use a label, we cover all three cases.
As per Michael Petch's comment, the correct usage is
asm("jecxz .+3");
which encodes the relative distance to the next immediate instruction.
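For completeness, a minimal 64-bit sketch (my own) of the label-based form, using a numeric local label so the statement stays valid even if the compiler duplicates it:

int main() {
    /* Note: %rcx is clobbered behind the compiler's back here, which is
       tolerable in a throwaway test but should be declared in real code. */
    asm("lea   -24(%rdi), %rcx\n\t"
        "jecxz 1f\n"             /* taken or not, execution continues at 1: */
        "1:\n\t"
        "nop");
    return 0;
}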

How to determine the major compiler version from .obj files compiled with /GL?

I'm trying to determine the Visual Studio version (2002/2003, 2005, 2008, 2010, 2012, 2013, 2015) from a .obj file generated with the link-time code generation option.
The file I have, generated with MSVC2012, has the following COFF header contents:
File Header
+0 00 00 Machine - Unknown Machine
+2 FF FF NumberOfSections
+4 01 00 4C 01 TimeDateStamp
+8 70 94 F9 55 PointerToSymbolTable
+12 38 FE B3 0C NumberOfSymbols
+16 A5 D9 SizeOfOptionalHeader
+18 AB 4D Characteristics
Optional Header
+20 AC 9B Magic
+22 D6 B6 Linker Version Major/Minor
It seems that the initial 4 bytes being 00,00,FF,FF mark it as an LTCG object, and what follows is proprietary. None of the usual file header members make "sense" (maybe the timestamp is OK, I didn't check).
Does anyone know offhand if any part of this header is compiler-specific? All I need to determine is the MSVC major version used to compile the object...
It appears that there is a version, coded as <MAJOR:16:LE> 0x80 <MINOR:16:LE>, stored shortly after the header. E.g.:
17.00.61030 -> 0x11.0xEE66 -> 11 00 80 66 EE
19.00.23026 -> 0x13.0x59F2 -> 13 00 80 F2 59
What's needed is to figure out how to get to it reliably by offsets from preceding data.
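A rough heuristic sketch (my own guess, based only on the pattern above, not on any documented format) would be to scan the raw bytes for a plausible MSVC major version followed by 0x80:

#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.obj\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    static uint8_t buf[1 << 16];                /* the header region is enough */
    size_t n = fread(buf, 1, sizeof buf, f);
    fclose(f);

    /* Look for <MAJOR:16:LE> 0x80 <MINOR:16:LE> with MAJOR in the range of
       the compilers in question (13 = VS2002/2003 ... 19 = VS2015). */
    for (size_t i = 0; i + 5 <= n; i++) {
        unsigned major = buf[i] | (buf[i + 1] << 8);
        unsigned minor = buf[i + 3] | (buf[i + 4] << 8);
        if (buf[i + 2] == 0x80 && major >= 13 && major <= 19)
            printf("candidate at offset %zu: version %u.%u\n", i, major, minor);
    }
    return 0;
}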
This is a related question, with no resolution...
TL;DR:
You can't get the compiler version from this file format, I guess...
Complete answer:
It looks like some variation of the "anonymous file format", described in winnt.h by the various ANON_OBJECT_HEADER_XXX structures (replace XXX with V2 or BIGOBJ).
Here is a copy of the ANON_OBJECT_HEADER_BIGOBJ found in winnt.h:
typedef struct ANON_OBJECT_HEADER_BIGOBJ {
    /* same as ANON_OBJECT_HEADER_V2 */
    WORD    Sig1;               // Must be IMAGE_FILE_MACHINE_UNKNOWN
    WORD    Sig2;               // Must be 0xffff
    WORD    Version;            // >= 2 (implies the Flags field is present)
    WORD    Machine;            // Actual machine - IMAGE_FILE_MACHINE_xxx
    DWORD   TimeDateStamp;
    CLSID   ClassID;            // CLSID is a 16-byte struct (not original comment)
    DWORD   SizeOfData;         // Size of data that follows the header
    DWORD   Flags;              // 0x1 -> contains metadata
    DWORD   MetaDataSize;       // Size of CLR metadata
    DWORD   MetaDataOffset;     // Offset of CLR metadata
    /* bigobj specifics */
    DWORD   NumberOfSections;   // extended from WORD
    DWORD   PointerToSymbolTable;
    DWORD   NumberOfSymbols;
} ANON_OBJECT_HEADER_BIGOBJ;
The description matches:
Sig1: 00 00
Sig2: FF FF
Version: >= 2
Machine: 0x14C
The other header structures (i.e. ANON_OBJECT_HEADER and ANON_OBJECT_HEADER_V2) are basically the same, but with fewer fields.
For the Version field, I found some information here :
http://www.geoffchappell.com/studies/msvc/link/dump/infiles/obj.htm
It looks like the Version field is "1" for anonymous files, and it seems that anonymous files and the so-called "import files" share the same characteristics, except that Version = 0 for the import file format (admittedly, I do not really know what that is).
But yeah, just by looking at the header, it seems that we have no information on which compiler version was used. And even then, .obj files generated with the /GL switch do not exactly follow this format, and I didn't find much information about them. I'll be glad if someone proves me wrong.
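If it helps, here is a rough C sketch (mine, based only on the winnt.h layout above, not on any documented /GL format) that reads the leading fields shared by the ANON_OBJECT_HEADER variants and checks the anonymous-object signature:

#include <stdio.h>
#include <stdint.h>

/* Leading fields shared by the ANON_OBJECT_HEADER variants in winnt.h,
   assuming a packed little-endian on-disk layout. */
#pragma pack(push, 1)
typedef struct {
    uint16_t Sig1;           /* must be 0x0000 (IMAGE_FILE_MACHINE_UNKNOWN) */
    uint16_t Sig2;           /* must be 0xFFFF                              */
    uint16_t Version;
    uint16_t Machine;        /* actual machine, e.g. 0x014C for x86         */
    uint32_t TimeDateStamp;
} AnonObjHeaderPrefix;
#pragma pack(pop)

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.obj\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    AnonObjHeaderPrefix h;
    size_t ok = fread(&h, sizeof h, 1, f);
    fclose(f);
    if (ok != 1) { fprintf(stderr, "file too short\n"); return 1; }

    if (h.Sig1 == 0x0000 && h.Sig2 == 0xFFFF)
        printf("anonymous object: Version=%u, Machine=0x%04X\n", h.Version, h.Machine);
    else
        printf("not an anonymous object (regular COFF header?)\n");
    return 0;
}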

snmptrap unsigned type not working as expected

I am using an SNMPv3 adapter and passing v2 traps to it using commands like the ones below. It looks like the range for type u (i.e. unsigned) is up to 2^31 - 1 (i.e. 2147483647). I was expecting it to be 2^32 - 1 (i.e. 4294967295).
snmptrap -c public -v 2c clm-pun-009642 '' 1.3.6.1.4.1.20006.1.0.5 1.3.6.1.4.1.12345.1 u 2147483647
The above command generates the following log:
trace: ..\..\snmplib\snmp_api.c, 5293:
dumph_recv: Value
dumpx_recv: 42 04 7F FF FF FF
dumpv_recv: UInteger: 2147483647 (0x7FFFFFFF)
Whereas for:
snmptrap -c public -v 2c clm-pun-009642 '' 1.3.6.1.4.1.20006.1.0.5 1.3.6.1.4.1.12345.1 u 2147483648
The above command generates the following log:
trace: ..\..\snmplib\snmp_api.c, 5293:
dumph_recv: Value
dumpx_recv: 42 05 00 80 00 00 00
dumpv_recv: UInteger: -2147483648 (0x80000000)
Refer to:
http://www.net-snmp.org/docs/man/snmptrap.html
I am using net-snmp v5.5.
Is this the correct behavior or am I missing something?
I have discovered various problems with net-snmp over the years. This is apparently one more. The standards are quite clear. RFC 2578 defines Unsigned32 as follows:
-- an unsigned 32-bit quantity
-- indistinguishable from Gauge32
Unsigned32 ::=
    [APPLICATION 2]
        IMPLICIT INTEGER (0..4294967295)
As noted, this is identical to Gauge32, which is identical to Gauge in SNMPv1 (RFC 1155):
Gauge ::=
    [APPLICATION 2]
        IMPLICIT INTEGER (0..4294967295)
The encoding is correct; all integers within SNMP are encoded as signed, meaning a value above 2^31-1 must be encoded in 5 bytes. Thus the proper translation of the encoding is:
42 Type: Gauge32 or Unsigned32
05 Length: 5 bytes
00 80 00 00 00 Value: 2^31
net-snmp is incorrectly decoding the value.
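To make the encoding side concrete, here is a small C sketch (my own, not net-snmp code) that BER-encodes an Unsigned32/Gauge32 the way it appears on the wire above; values of 2^31 and above need a leading 0x00 octet and therefore 5 content bytes:

#include <stdint.h>
#include <stdio.h>

/* BER-encode an SNMP Unsigned32/Gauge32 (tag 0x42): minimal big-endian
   two's-complement contents, so a leading 0x00 is added whenever the top
   bit of the first content octet would otherwise be set. */
static size_t ber_encode_unsigned32(uint32_t v, uint8_t *out)
{
    uint8_t tmp[5];
    int n = 0;
    do { tmp[n++] = (uint8_t)(v & 0xFF); v >>= 8; } while (v);
    if (tmp[n - 1] & 0x80)
        tmp[n++] = 0x00;                 /* keep the value non-negative */

    size_t i = 0;
    out[i++] = 0x42;                     /* APPLICATION 2               */
    out[i++] = (uint8_t)n;               /* length                      */
    while (n--) out[i++] = tmp[n];       /* contents, big-endian        */
    return i;
}

int main(void)
{
    uint8_t buf[8];
    size_t n = ber_encode_unsigned32(2147483648u, buf);   /* 2^31 */
    for (size_t i = 0; i < n; i++)
        printf("%02X ", buf[i]);         /* prints 42 05 00 80 00 00 00 */
    printf("\n");
    return 0;
}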
