Specification defining the ECDSA signature data format

I want to know what specification (or standard) defines the data format of the ECDSA signature and public key.
I'm testing ECDSA signatures on Java Card. I found that there is a TLV format in the signature and in the public key value.
* Public key (TV format)
[Tag=04] [public key value 1] [public key value 2]
04 038A3F59E813995DAB730588CFCBB985F5A1ED90C0D62960AE0B274D 2E6B12672318E0B113DECC0406B62887B6BCB9B1583B1A50779EAB5A
* Signature (TLV format)
[Tag=30] [Length=3C~3E] [Tag=02] [Length=1C~1D] [signature value 1] [Tag=02] [Length=1C~1D] [signature value 2]
303C 021C 7EEB0B2596F74344B3D7B046EA0BD17C4461FC277658CE93509F1674 021C 4F5DBFB30D994664DA80528847A767F0194876B068E5958161797991
303E 021D 0080F20B82D407AE663F010F4990F12073631D653EA1D65DC75EBD4293 021D 00880DB667EF51AEA8E7C9BB012496C7C9ECE3BC5829B82B692B9211C3
303D 021D 00F77447EF326A4A49597D0B839F68F524891F3655DA4561F1AA10EF70 021C 152F7FF18644C5E5C9118736E1F7528F0B10C5FF641C7B7CDF012129
303D 021D 00A2EBCC5C5981341D0726F2E846CC3879C74EFD64D8698589A8CEAB60 021C 6E04FF884A451D7C0737A182BC2DE7F7D3008EE182B46A009BFFC9E8
I think that the data format is defined in some specification or standard. I just want to know the document name.

The ASN.1 structure is defined in SEC 1: Elliptic Curve Cryptography (part C: ASN.1 for Elliptic Curve Cryptography), from the SECG (Standards for Efficient Cryptography Group).

It's in here:
https://www.ietf.org/rfc/rfc5480.txt
ANSI X9.62 may also be relevant, but it is not freely available, I think.
For example:
ECDSA-Sig-Value ::= SEQUENCE {
    r INTEGER,
    s INTEGER
}
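As an illustration (not part of the original answer), here is a minimal Go sketch that parses one of the DER signatures from the question into that SEQUENCE of two INTEGERs, using the standard encoding/asn1 package:

package main

import (
	"encoding/asn1"
	"encoding/hex"
	"fmt"
	"math/big"
)

// ecdsaSigValue mirrors ECDSA-Sig-Value ::= SEQUENCE { r INTEGER, s INTEGER }.
type ecdsaSigValue struct {
	R, S *big.Int
}

func main() {
	// First DER-encoded signature from the question, spaces removed.
	der, _ := hex.DecodeString(
		"303C021C7EEB0B2596F74344B3D7B046EA0BD17C4461FC277658CE93509F1674" +
			"021C4F5DBFB30D994664DA80528847A767F0194876B068E5958161797991")

	var sig ecdsaSigValue
	if _, err := asn1.Unmarshal(der, &sig); err != nil {
		panic(err)
	}
	fmt.Printf("r = %x\ns = %x\n", sig.R, sig.S)
}

The varying lengths (3C to 3E) in the question come from DER's minimal signed INTEGER encoding: when the top bit of r or s would otherwise be set, an extra leading 00 octet is added, as in the 021D examples.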

The public key value is an uncompressed point. It is indicated by the value 04, the identifier for an uncompressed point, followed by the X and Y coordinates. X and Y are encoded as unsigned big-endian octet strings with the same size as the key size (the size of the order of the curve in the domain parameters). Note that 04 is also the tag for an OCTET STRING in ASN.1, but that has nothing to do with the uncompressed point indicator.
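As another illustration of my own, a minimal Go sketch that splits the question's uncompressed point into its X and Y coordinates (28 bytes each here, so the card apparently uses a 224-bit curve; which one is not stated in the question):

package main

import (
	"encoding/hex"
	"fmt"
	"math/big"
)

func main() {
	// Uncompressed point from the question: 0x04 || X || Y.
	point, _ := hex.DecodeString(
		"04038A3F59E813995DAB730588CFCBB985F5A1ED90C0D62960AE0B274D" +
			"2E6B12672318E0B113DECC0406B62887B6BCB9B1583B1A50779EAB5A")

	if point[0] != 0x04 {
		panic("not an uncompressed point")
	}
	coordLen := (len(point) - 1) / 2 // 28 bytes for a 224-bit curve
	x := new(big.Int).SetBytes(point[1 : 1+coordLen])
	y := new(big.Int).SetBytes(point[1+coordLen:])
	fmt.Printf("X = %x\nY = %x\n", x, y)
}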
The format of the domain parameters is unknown to me. It's certainly not encoded as described in RFC 5480 (https://www.ietf.org/rfc/rfc5480.txt), as Paul suggests. I presume it is some proprietary DER format, which uses multiple ASN.1 SEQUENCE values filled with two ASN.1 INTEGER values each. These integer values (after the length) are signed, unpadded, big-endian encodings, which fortunately are completely compatible with the encoding of Java's BigInteger.
Paul Bastian is correct with regard to the signature format: it is ANSI X9.62 compatible. Plain (raw r || s) signatures are not yet supported by Java Card.


What is the type 0x41 (65) in an SNMP variable bindings reply?

I am attempting to understand SNMP (in general, and v3). The goal is to include an SNMP agent in an embedded device running an RTOS.
I've already been through over a dozen RFCs with at least another dozen more to go. Each one creates more questions than it answers. (1052, 1065, 1067, 1155, 1156, 1157, 1212, 1213, 1592, 1905, 2578, 2579, 2580, 3410, 3411, 3412, 3413, 3414, 3415, 3416, 3417, 3418, 3584... )
I implemented mDNS-SD and 802.1X EAPOL with just a couple RFCs and it wasn't this confusing.
Many reviews of the books I considered complain about the same inconsistency and vagueness in the material. I bought a couple of books that had better reviews.
Searching online isn't getting anywhere largely because the keywords aren't finding things I want answers to. So I must not even know the best keywords to search with.
Eventually, I decided to just try to reverse engineer what's going on. I installed Wireshark on a Linux PC, along with the snmpd and snmp tools, so I could sniff it. Here is what I have, and I can't align what I see with what I read.
This is a v3 sniff; it's a reply to the first request from a manager. This question is just zeroing in on one of the things that I want to understand. I can't decode and examine a plaintext PDU, because I can't get a request in v2 or v1.
Wireshark shows this reply to a manager. It's apparently the first step in whatever authentication is to be used.
The book I have shows this as the protocol on the wire. And I am trying to parse out the variable bindings.
Here are the variable bindings from Wireshark
A "sequence" that is 15 bytes long (x30 x0f)
This, from the RFC, says that the list is a SEQUENCE of VarBinds, where each VarBind is the object name, and the value in ObjectSyntax. So it's looking okay so far.
Here is the next segment inside the SEQUENCE (Wireshark highlighted all 14 bytes)
An object ID that is 10 bytes long (x06, x0a)
Here is the actual object:
The objectName is the object ID, and it is x2b x6 x1 x6 x3 xf x1 x1 x4 x0, or (1.3).6.1.6.3.15.1.1.4.0
Given that this is ISO, ORG, DOD, INTERNET, 6?... I have to assume "6" is an object under the internet branch I've not yet come across. Likely something to do with the v3 security.
Next, is the value.
This is a type x41 (65), with a length of 1, and a value of 7.
Well, in "ObjectSyntax" what is x41? I can't find it defined anywhere.
For that matter, all these RFCs use words for identifiers, and I can find only a fraction of what their actual numeric values are.
Wireshark knew what it was... It's saying "Counter32"... is that what x41 is supposed to be? If so, it's nowhere near 32 bits. It's only one byte. Again, I'd like to find its definition.
Also, somewhere (I can't even recall which RFC), it said the reply to an OID request is to append the value to the requested object, not replace the zero (example: request 1.3.6.1.4.300.1 -> reply 1.3.6.1.4.300.1.15, so it is a value of 15). This OID has a trailing zero, and I'm not sure why.
Can anyone point me to some useful, concise, condensed information explaining this material? Every RFC requires that I go back and read some previous (and sometimes obsoleted) RFC, and I've now got over 25 of them already. I don't think it should take this many RFCs to be able to write a "simple" SNMP agent. A month of researching, and most of what I have to show for it is how to read MIB files. Although that takes some mental gymnastics too.
"Simple" is rather deceptive (as more than one book reviewer has stated).
RFC 1157 specifies that SNMP messages are encoded with "a subset of the basic encoding rules of ASN.1". I don't think the official basic encoding rules (BER) specification is available for free, but it's not hard to find explainers online (here's one I found with a simple search). To your question about the 0x41 byte, this is a BER identifier. The 2 most-significant bits (01) tell you the "class" (i.e. something like a namespace) is "application". The "form" bit (0) tells you that it's a primitive type (i.e. not a sequence). Finally the "tag" is 1. Consulting the SNMPv2-SMI MIB (RFC 2578) you can find this definition:
Counter32 ::=
    [APPLICATION 1]
        IMPLICIT INTEGER (0..4294967295)
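To make that bit-level breakdown concrete, here's a small sketch of my own (in Go, not taken from any SNMP library) that splits a one-byte BER identifier such as 0x41 into class, form, and tag:

package main

import "fmt"

// describeIdentifier decodes the class, form and tag of a one-byte BER identifier.
// (Tags >= 31 use a multi-byte form, which this sketch ignores.)
func describeIdentifier(b byte) (class string, constructed bool, tag int) {
	classes := []string{"universal", "application", "context-specific", "private"}
	class = classes[b>>6]     // top two bits
	constructed = b&0x20 != 0 // next bit: 0 = primitive, 1 = constructed
	tag = int(b & 0x1f)       // low five bits
	return
}

func main() {
	class, constructed, tag := describeIdentifier(0x41)
	fmt.Println(class, constructed, tag) // application false 1 -> Counter32 per RFC 2578
}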
You also asked about why a 32-bit integer is encoded with a single byte. This requires you to distinguish between the scope of the SNMP standard versus the ASN.1 standard. ASN.1 only has a single INTEGER type, which 1) has an unlimited range, 2) is always signed (two's complement), and 3) should be encoded in the least number of octets possible. This actually means that a Counter32 (or any other 32-bit unsigned integer type) might use up to 5 bytes for its encoding (see this answer I gave to a question about that).
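A rough Go sketch of that rule (my own illustration, not a quote from any standard): the content octets are the shortest two's-complement representation of the value, so the 7 in your capture takes a single octet, while the largest Counter32 value needs a leading 0x00 and therefore five.

package main

import "fmt"

// counter32Content returns the minimal two's-complement content octets
// for an unsigned 32-bit value, as BER requires for INTEGER-based types.
func counter32Content(v uint32) []byte {
	b := []byte{byte(v >> 24), byte(v >> 16), byte(v >> 8), byte(v)}
	// Drop leading zero octets while the value still fits.
	for len(b) > 1 && b[0] == 0 {
		b = b[1:]
	}
	// A leading 1-bit would make the value look negative, so prepend 0x00.
	if b[0]&0x80 != 0 {
		b = append([]byte{0x00}, b...)
	}
	return b
}

func main() {
	fmt.Printf("% x\n", counter32Content(7))          // 07             (1 octet, as in the capture)
	fmt.Printf("% x\n", counter32Content(4294967295)) // 00 ff ff ff ff (5 octets)
}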
Finally, you asked about the way the replies are modifying the requested OID. I was confused about this for a long time, but when I figured it out, I realized it's actually pretty simple. I think the best place to start is with this excerpt from RFC 1157:
Each instance of any object type defined in the MIB is identified in
SNMP operations by a unique name called its "variable name." In
general, the name of an SNMP variable is an OBJECT IDENTIFIER of the
form x.y, where x is the name of a non-aggregate object type defined
in the MIB and y is an OBJECT IDENTIFIER fragment that, in a way
specific to the named object type, identifies the desired instance.
This naming strategy admits the fullest exploitation of the semantics
of the GetNextRequest-PDU (see Section 4), because it assigns names
for related variables so as to be contiguous in the lexicographical
ordering of all variable names known in the MIB.
The type-specific naming of object instances is defined below for a
number of classes of object types. Instances of an object type to
which none of the following naming conventions are applicable are
named by OBJECT IDENTIFIERs of the form x.0, where x is the name of
said object type in the MIB definition.
For example, suppose one wanted to identify an instance of the
variable sysDescr. The object class for sysDescr is:
iso  org  dod  internet  mgmt  mib  system  sysDescr
 1    3    6      1       2     1     1        1
Hence, the object type, x, would be 1.3.6.1.2.1.1.1 to which is
appended an instance sub-identifier of 0. That is, 1.3.6.1.2.1.1.1.0
identifies the one and only instance of sysDescr.
So, to summarize, the OID that comes from the MIB doesn't refer to a concrete object, but to the "object type". Each concrete object (i.e. "instance") is identified by a suffix of one or more sub-identifiers (i.e. the y in this explanation). For singleton objects, this suffix is always 0. However, I think most SNMP objects are found in tables, not in singleton objects. I don't actually know of a good explanation of this in the standards, so I'll give it my best shot.
Like any table, SNMP tables are made up of rows and columns. In SNMP, however, the rows are called "entries", and each entry defines a custom type to describe the columns. Here's a simple example from the IF-MIB:
ifTable OBJECT-TYPE
    SYNTAX      SEQUENCE OF IfEntry
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION
        "A list of interface entries. The number of entries is
        given by the value of ifNumber."
    ::= { interfaces 2 }

ifEntry OBJECT-TYPE
    SYNTAX      IfEntry
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION
        "An entry containing management information applicable to a
        particular interface."
    INDEX { ifIndex }
    ::= { ifTable 1 }

IfEntry ::=
    SEQUENCE {
        ifIndex            InterfaceIndex,
        ifDescr            DisplayString,
        ifType             IANAifType,
        ifMtu              Integer32,
        ifSpeed            Gauge32,
        ifPhysAddress      PhysAddress,
        ifAdminStatus      INTEGER,
        ifOperStatus       INTEGER,
        ifLastChange       TimeTicks,
        ifInOctets         Counter32,
        ifInUcastPkts      Counter32,
        ifInNUcastPkts     Counter32,  -- deprecated
        ifInDiscards       Counter32,
        ifInErrors         Counter32,
        ifInUnknownProtos  Counter32,
        ifOutOctets        Counter32,
        ifOutUcastPkts     Counter32,
        ifOutNUcastPkts    Counter32,  -- deprecated
        ifOutDiscards      Counter32,
        ifOutErrors        Counter32,
        ifOutQLen          Gauge32,    -- deprecated
        ifSpecific         OBJECT IDENTIFIER  -- deprecated
    }
So, ifTable has an OID of 1.3.6.1.2.1.2.2, and ifEntry has an OID of 1.3.6.1.2.1.2.2.1. Each item in IfEntry also has its own definition, which includes its OID relative to ifEntry. Generally the column numbers match the order of the fields in the entry's SEQUENCE, so, for example, ifIndex, as the first column in IfEntry, has an OID of ifEntry.1. Confusingly, when you do a simple Get-Next walk, you will traverse in column-major order, meaning you will get all the ifIndexes, followed by all the ifDescrs, and so on.
So, with all that explained, I'm now prepared to explain the instance identifiers for these tables. Notice above that ifEntry defines
INDEX { ifIndex }
This means, first, that each row is guaranteed to have a unique ifIndex, and, more importantly, that the ifIndex is used as the instance identifier for the entire entry. For example, you can pick any column in the IfEntry data type, let's say ifOperStatus (1.3.6.1.2.1.2.2.1.8), and use Get-Next to find the first instance of that column. Let's say its OID is 1.3.6.1.2.1.2.2.1.8.1, and its value is 1 (up). The last sub-identifier tells you that it belongs to the row whose ifIndex is 1. To find the name of that interface, you can then query ifDescr.1, and to find its speed setting, you can query ifSpeed.1, and so forth. In this case, it is possible to query ifIndex.1, which will just return 1, but in many tables, the INDEX columns are not-accessible, meaning you can only find out what instances there are by walking some other column. Some tables also use multiple indices, or use OCTET STRING or even OBJECT IDENTIFIER rather than INTEGER-typed indices. The rules for encoding and decoding those are in RFC 2578 section 7.7.
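To tie that together, here's a tiny Go sketch of my own (using the IF-MIB OIDs discussed above) showing that an instance OID is just the column OID with the row's index sub-identifier appended:

package main

import "fmt"

func main() {
	ifEntry := []int{1, 3, 6, 1, 2, 1, 2, 2, 1}

	// Column OIDs are ifEntry.<column number>.
	ifDescr := append(append([]int{}, ifEntry...), 2)      // ifEntry.2
	ifOperStatus := append(append([]int{}, ifEntry...), 8) // ifEntry.8

	// Instance OIDs append the value of the INDEX column (ifIndex) to the column OID.
	ifIndex := 1
	fmt.Println(append(append([]int{}, ifOperStatus...), ifIndex)) // [1 3 6 1 2 1 2 2 1 8 1]
	fmt.Println(append(append([]int{}, ifDescr...), ifIndex))      // [1 3 6 1 2 1 2 2 1 2 1]
}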

Reversible hash to convert an integer ID to an alphabetic/alphanumeric string

I need a Delphi reversible Hashed ID function that is quick.
Short, obfuscated and efficient IDs
No collisions (at least up to 32-bit unsigned integers)
Reversible
Fast
preferably something that has an input Key, so it can be randomised a bit...
otherwise, a '3' will always be 23zkJ5 on all my software modules.
works cross-platform
Something like Youtube's video identifier.
Encode(3); // => "23zkJ5"
Decode('23zkJ5'); // => 3
PHP seems to have quite a few of these; I can't find one for Delphi.
I looked at this, but it's not really what I wanted, plus I need something in Delphi.
Reversible hash function?
$generator->encode(6); // => "43Vht7"
$generator->decode('43Vht7'); // => 6
I need something like what PHP offers:
https://github.com/delight-im/PHP-IDs
I can't use MD5 as it's not reversible; using Lockbox encryption/decryption seems a bit overkill? (If there is really no choice, which algorithm in Lockbox would be the best choice for this?)
Use AES and convert the cipher byte array to a hex string or to Base64.
For a code example, see here:
AES encrypt string in Delphi (10 Seattle) with DCrypt, decrypt with PHP/OpenSSL
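The linked answer has the Delphi specifics; as a language-neutral sketch of the idea (shown here in Go, with an example key you would obviously replace), you encrypt the 4-byte ID as a single fixed-size block and encode the ciphertext, which is deterministic for a given key and fully reversible:

package main

import (
	"crypto/aes"
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// key plays the role of the "input key" the question asks for: a different
// key yields a different encoding of the same ID. Example value only.
var key = []byte("0123456789abcdef") // 16 bytes -> AES-128

func encodeID(id uint32) string {
	block, _ := aes.NewCipher(key)
	var plain, out [16]byte
	binary.BigEndian.PutUint32(plain[12:], id) // place the 4-byte ID in one AES block
	block.Encrypt(out[:], plain[:])            // single-block encryption, no IV, so it is deterministic
	return base64.RawURLEncoding.EncodeToString(out[:])
}

func decodeID(s string) (uint32, error) {
	raw, err := base64.RawURLEncoding.DecodeString(s)
	if err != nil || len(raw) != 16 {
		return 0, fmt.Errorf("bad encoded ID")
	}
	block, _ := aes.NewCipher(key)
	var plain [16]byte
	block.Decrypt(plain[:], raw)
	return binary.BigEndian.Uint32(plain[12:]), nil
}

func main() {
	enc := encodeID(3)
	id, _ := decodeID(enc)
	fmt.Println(enc, id) // a 22-character string, then 3
}

Note that the output is 22 Base64 characters rather than the 6 in the question's example, because an AES block is 16 bytes; a shorter output needs a dedicated scheme, such as a format-preserving cipher or a Hashids-style alphabet encoding.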

How do I do deterministic RSA in Go

I have two go services, let's call them A and B. B holds an RSA key pair, while A only knows the public key. I want them to know if they agree on some value V.
I want to do this by having B encrypt V using the public key and have A do a comparison, but all the crypto/rsa functions take an RNG which adds entropy and makes each hash of V different. That means I can't compare the hashes.
Is there a function in the Go standard library that will deterministically hash V?
Note: I can achieve this by using a fresh RNG seeded with the same value every time I hash V, but I want to be able to compute this hash from other languages, and that would tie me to Go's RNG.
I want to do this by having B encrypt V using the public key and have A do a comparison…
You're using the wrong primitive.
If you want the owner of a private key to prove that they have some data, have them Sign that data. The recipient can Verify that signature using the public key.
Use the SignPSS and VerifyPSS methods to do this. The signature will not be deterministic, but it doesn't need to be -- the recipient will still be able to verify it.
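A minimal sketch of that flow (my own example, not code from the question): B signs a SHA-256 digest of V with its private key, and A verifies it with the public key it already has.

package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// B's key pair; A would only hold &priv.PublicKey.
	priv, _ := rsa.GenerateKey(rand.Reader, 2048)

	v := []byte("some agreed value V")
	digest := sha256.Sum256(v)

	// B signs. PSS signatures are randomized, which is fine: A doesn't
	// compare bytes, it verifies.
	sig, err := rsa.SignPSS(rand.Reader, priv, crypto.SHA256, digest[:], nil)
	if err != nil {
		panic(err)
	}

	// A verifies against its own copy of V.
	err = rsa.VerifyPSS(&priv.PublicKey, crypto.SHA256, digest[:], sig, nil)
	fmt.Println("signature valid:", err == nil)
}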
Take a look at the docs for EncryptOAEP:
The random parameter is used as a source of entropy to ensure that encrypting the same message twice doesn't result in the same ciphertext.
So the random data does not affect the recipient's ability to decrypt the message with the private key. The ciphertext bytes will be different each time you encrypt the same value, which is a good thing.
Take a look at the examples on Encrypt/Decrypt OAEP in those docs. It should be sufficient to get you moving the right direction.
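For completeness, a small sketch along the lines of those examples (my own code, not taken from the documentation): the two ciphertexts differ, yet both decrypt to the same plaintext with the private key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	priv, _ := rsa.GenerateKey(rand.Reader, 2048)

	msg := []byte("some agreed value V")
	label := []byte(nil)

	// Encrypting the same message twice yields different ciphertexts...
	c1, _ := rsa.EncryptOAEP(sha256.New(), rand.Reader, &priv.PublicKey, msg, label)
	c2, _ := rsa.EncryptOAEP(sha256.New(), rand.Reader, &priv.PublicKey, msg, label)
	fmt.Println("ciphertexts equal:", string(c1) == string(c2)) // false

	// ...but both decrypt to the same plaintext with the private key.
	p1, _ := rsa.DecryptOAEP(sha256.New(), nil, priv, c1, label)
	fmt.Println("plaintext:", string(p1))
}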

ECDH (256 bit key), Create Private Key from X component

I am trying to implement BitMessage Crypto with Windows CNG functions
I am trying to create a key pair from a single 32 byte value.
In order to encrypt the pubkey data, a double SHA-512 hash is calculated from the address version number, stream number, and ripe hash of the Bitmessage address that the pubkey corresponds to. The first 32 bytes of this hash are used to create a public and private key pair with which to encrypt and decrypt the pubkey data, using the same algorithm as message encryption (see Encryption).
Bitmessage Protocol regarding this
This can be done by using the 32 byte integer as the private key component.
But how can I do this using Windows CNG functions?
Or maybe I can do the calculation manually?
Thanks for any input.
Use BCryptImportKeyPair with a blank public key (X, Y), and use the random 32-byte number as the private part of the BCRYPT_ECCKEY_BLOB.
duh..

protocol buffer uint32 field with data always in [0,255]

In a Google protocol buffer, I'm going to use a field to store values that will be integers in [0,255]. From http://code.google.com/apis/protocolbuffers/docs/proto.html#scalar, it looks like the uint32 will be the appropriate value type to use. Despite the field being able to hold up to 32-bit integers, those extra bits will not be wasted in my case due to the variable length encoding. (Correct me if I'm wrong up to here.)
My question is: how should I indicate that the reader of a serialized message can assume that the largest value in that field will be 255? Just a comment in the protocol buffer specification? Is there any other way?
In .proto there is no such specification; you must simply document it (and presumably cast it appropriately in the consuming code).
Aside: if you happen to be using the C# protobuf-net implementation, then you can do this by working outside a .proto definition (protobuf-net allows code-first):
[ProtoMember(3)] // <=== field number
public byte SomeValue {get;set;}
This is then obviously constrained to 0-255, but is encoded on the wire as you expect (like a uint32). It also does a checked conversion when deserializing, to sanity-check the values.
In .proto, the above is closest to:
optional uint32 someValue = 3;
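To illustrate why the unused bits aren't wasted (my own sketch; Go's encoding/binary uses the same base-128 varint format that protobuf uses on the wire for uint32 field values): values up to 127 take one byte and values up to 255 take two, regardless of the declared 32-bit type.

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	for _, v := range []uint64{127, 255, 4294967295} {
		n := binary.PutUvarint(buf, v)
		fmt.Printf("%10d -> %d byte(s) on the wire: % x\n", v, n, buf[:n])
	}
}

The field's tag byte is additional, but it is the same whatever integer width you declare.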
