I have an X.509 certificate in Base64-encoded binary format. How can I retrieve information about the certificate with Oracle? I need to get the serial number of this certificate. Any ideas?
There is a solution on the Oracle forum: SQL to extract specific attributes from an x509 digital certificate
The code (the original is for certificates stored as a CLOB; I modified it to take a BLOB and to return the serial number):
create or replace and compile java source named testx509src
as
import java.security.cert.*;
import java.io.*;
import java.sql.*;
import oracle.sql.BLOB;
import oracle.sql.NUMBER;

public class TestX509 {
    public static NUMBER getSerialNumber(BLOB cert)
            throws SQLException, IOException, CertificateException {
        // No JDBC connection is needed here: the certificate is parsed
        // straight from the BLOB's binary stream.
        BufferedInputStream is = new BufferedInputStream(cert.getBinaryStream());
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate c = (X509Certificate) cf.generateCertificate(is);
        is.close();
        return new oracle.sql.NUMBER(c.getSerialNumber());
    }
}
/
CREATE OR REPLACE FUNCTION CERT_getSerialNumber(cert in blob)
RETURN NUMBER
AS LANGUAGE JAVA
NAME 'TestX509.getSerialNumber(oracle.sql.BLOB) return oracle.sql.NUMBER';
/
SQL> select CERT_GetSerialNumber(cert) serial from cert_storage where id = 1;
serial
-----------------------
243435653237
After you base64-decode a certificate, you most likely get a DER-encoded ASN.1 structure of an X.509 v3 certificate (enough keywords to continue searching for an answer).
I am not aware of any PL/SQL implementation of an ASN.1 parser for DER-encoded content, but it is possible to learn the ASN.1 structures (SEQUENCE, INTEGER, etc.) and their binary representation in DER, and then do the parsing in PL/SQL, byte by byte.
The serial number is close to the beginning of the DER content, so you do not need to support every ASN.1 element to extract it.
You would have to look at the X.509 certificate structure (the template that explains how a certificate is built from basic ASN.1 elements), then parse/extract the elements and pick out the info you're interested in.
More detailed description of what's in a certificate: an X.509 certificate consists of data fields such as version, serial number, valid from/to dates, issuer DN (distinguished name), subject DN, subject public key, signature hash algorithm, etc. This info is then "signed" by the certificate issuer: the issuer creates a hash code (e.g. using the SHA-1 algorithm) from the info mentioned above and then encrypts it using the issuer's private key (RSA encryption). Having the issuer's public key and trusting the issuer, one can decrypt the hash code created by the issuer, compute a hash code from the certificate details using the same algorithm, and compare the two. If they match, no one has modified the details, so if the issuer is trusted, the details found in the certificate can be trusted as well.
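As an aside, that whole check can be done outside the database if a tool such as Python's cryptography package is available (an assumption on my part; cert_der and issuer_der below are hypothetical byte strings holding the decoded certificates, and an RSA signature is assumed):

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

cert = x509.load_der_x509_certificate(cert_der)      # the certificate to check
issuer = x509.load_der_x509_certificate(issuer_der)  # its issuer's certificate

print(cert.serial_number)                            # the field the question asks for

# Recompute the hash over the signed part and check it against the signature;
# raises InvalidSignature if the details were tampered with.
issuer.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    cert.signature_hash_algorithm,
)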
X.509 certificate begins with (data types shown to the right):
Certificate SEQUENCE
Data SEQUENCE
Version [0] { INTEGER }
Serial Number INTEGER
Each element starts with a tag byte indicating the element type, followed by the element length, followed by the element content. If the content is fewer than 128 bytes, the length field requires only one byte. If it is more than 127 bytes, bit 7 of the length byte is set to 1 and bits 6 through 0 give the number of additional bytes that hold the content length. In the case of an X.509 certificate, Version is wrapped in the context-specific tag [0].
Books explaining ASN.1 can be downloaded for free from the web.
Here's an example for analysing the beginning of a certificate:
30 82 02 D7 30 82 02 40 A0 03 02 01 02 02 01 01 ...
Interpretation:
30 = Start of Certificate SEQUENCE
82 = the sequence length is encoded in the following two bytes
02 D7 = sequence length 0x02D7 (big-endian byte order)
30 = Start of Data SEQUENCE
82 = the sequence length is encoded in the following two bytes
02 40 = sequence length 0x0240 (big-endian byte order)
A0 = start of context-specific element [0]
03 = length of context-specific element [0]
02 01 02 = content of context-specific element [0] (Version INTEGER)
(02=start of Version INTEGER,
01=length of the integer,
02=Version value (zero-based, so value 02 actually means v3))
02 = Start of Serial Number INTEGER
01 = Length of Serial Number INTEGER
01 = The serial number itself
...
Of course, in your case, the length of the serial number may be bigger than the single byte shown here.
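To make the walkthrough concrete, here is a minimal sketch of that byte-by-byte parse (in Python just for illustration; the same logic can be transcribed into PL/SQL). It only handles the handful of elements described above, so it is not a general ASN.1 parser:

def der_serial_number(cert_der: bytes) -> int:
    """Return the serial number by walking the start of a DER-encoded
    X.509 certificate, exactly as in the byte walkthrough above."""

    def read_header(pos):
        # Returns (tag, content length, offset where the content starts).
        tag = cert_der[pos]
        length = cert_der[pos + 1]
        pos += 2
        if length & 0x80:                        # long form: low 7 bits = number of length bytes
            n = length & 0x7F
            length = int.from_bytes(cert_der[pos:pos + n], 'big')
            pos += n
        return tag, length, pos

    tag, _, pos = read_header(0)                 # Certificate SEQUENCE (0x30)
    tag, _, pos = read_header(pos)               # Data (TBSCertificate) SEQUENCE (0x30)
    tag, length, pos = read_header(pos)          # [0] wrapper (0xA0), or already the serial for v1 certs
    if tag == 0xA0:
        pos += length                            # skip the Version element
        tag, length, pos = read_header(pos)      # Serial Number INTEGER (0x02)
    return int.from_bytes(cert_der[pos:pos + length], 'big')

Feeding it the example bytes above (30 82 02 D7 30 82 02 40 A0 03 02 01 02 02 01 01 ...) returns 1.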
Related
"Request Message 1" is using static table index 31 to send content-type information. Then the entry is added to dynamic table with index value 63. How to derive the dynamic table index value from "Request Message 1"?
Request message 1:
Header: content-type: multipart/related; boundary=++Boundary
Name Length: 12
Name: content-type
Value Length: 38
Value: multipart/related; boundary=++Boundary
content-type: multipart/related; boundary=++Boundary
[Unescaped: multipart/related; boundary=++Boundary]
Representation: Literal Header Field with Incremental Indexing - Indexed Name
Index: 31
Hex dump
5f 9d a6 da 12 6a c7 62 58 b0 b4 0d 25 93 ed 48
cf 6d 52 0e cf 50 7f bf f7 74 f6 d5 20 ec f5
Request message 2:
Header: content-type: multipart/related; boundary=++Boundary
Name Length: 12
Name: content-type
Value Length: 38
Value: multipart/related; boundary=++Boundary
content-type: multipart/related; boundary=++Boundary
[Unescaped: multipart/related; boundary=++Boundary]
Representation: Indexed Header Field
Index: 63
Hex dump: 0xbf (dynamic table index value)
If I understand you right: your first request is marked as "Literal Header Field with Incremental Indexing". That means the header name already had an index in the static or dynamic table, but the header must still be added to the dynamic table (because it has a different value). The dynamic table's first index is 62, because the static table ends at 61. When a header is added to the dynamic table, it goes to the top, i.e. index 62 (RFC 7541, section 2.3). I will assume that you did not show us the whole request; most likely it had another incrementally indexed header, which took the position above this one.
MadBard is right. The hex dump of the header shows the first octet is 5f = 01011111.
According to RFC 7541 "6.2.1. Literal Header Field with Incremental Indexing," the first two bits - 01 - indicate the header field is a new one that should be appended to the dynamic table. Since the next 6 bits (011111) are not all 0, they reference a header name in the static table: 011111 is 31, so it takes the header name at index 31 of the static table, which is "content-type" (see "Appendix A. Static Table Definition" of RFC 7541). The new header field is thus composed of the name taken from the static table (content-type) and the value carried over the wire in Request 1. (The value is also Huffman encoded to save a few bytes, which is why we can't read the ASCII directly from the hex dump.) This new header is then appended to the table at index 62. The indices of all previous entries in the dynamic portion of the table are incremented by one (e.g. the previous 62 becomes 63, since it is a FIFO queue).
Another header was added to the dynamic portion of the table after the one of interest, since we can see that the lookup index in Request 2 is 63, not 62; it was therefore bumped up by 1 after it was added. If you were to keep monitoring as more headers were added, you would see the index of this particular header keep incrementing, until eventually it gets evicted when the dynamic table space is exhausted.
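To make that concrete, here is a small Python sketch (my own illustration, not output of any capture tool) that decodes the RFC 7541 section 5.1 prefixed integers from the first octet of each representation:

def decode_prefix_int(data, prefix_bits):
    """Decode an HPACK integer with an N-bit prefix (RFC 7541, section 5.1)."""
    mask = (1 << prefix_bits) - 1
    value = data[0] & mask
    if value < mask:
        return value, 1                  # the value fits entirely in the prefix
    shift, i = 0, 1                      # otherwise continuation bytes follow
    while True:
        b = data[i]
        value += (b & 0x7F) << shift
        shift += 7
        i += 1
        if not (b & 0x80):
            return value, i

# Request 1: first octet 0x5f = 0b01_011111
# '01' -> Literal Header Field with Incremental Indexing, 6-bit name index
print(decode_prefix_int(bytes([0x5F]), 6))   # (31, 1): static entry 31 = "content-type"

# Request 2: octet 0xbf = 0b1_0111111
# '1' -> Indexed Header Field, 7-bit index
print(decode_prefix_int(bytes([0xBF]), 7))   # (63, 1): a dynamic table entry

So Request 1 points at static entry 31 for the name only; once that request is processed, the full name/value pair sits at dynamic index 62, and one further insertion later Request 2 finds it at 63.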
I ran openssl speed rsa512 and it shows me how many signs and verifies it can do in a second. Unfortunately, the test does not say anything about the size of the message being signed. So I dug into the OpenSSL sources and found the following line in speed.c:
ret = RSA_sign(NID_md5_sha1, buf, 36, buf2, rsa_num, rsa_key[testnum]);
Looking into rsa.h, I can see the following function declaration:
int RSA_sign(int type, const unsigned char *m, unsigned int m_length,
unsigned char *sigret, unsigned int *siglen, RSA *rsa);
I guess m is the message and m_length is the length of the message.
Am I right that the message size is 36 bytes in the RSA speed test?
The same goes for ECDSA, e.g., openssl speed ecdsap256. speed.c uses the following line:
ret = ECDSA_sign(0, buf, 20, ecdsasig, ecdsasiglen, ecdsa[testnum]);
Am I right that the message size is 20 bytes in the ECDSA speed test?
My conclusion: it's not possible to compare them, since they sign different message lengths.
Asymmetric signatures, technically, don't sign messages. They sign hashes of messages.
Their rsa512 test does the RSA signature padding and transformation on an SSL "MD5 and SHA1" value (which is 16 + 20 = 36 bytes). So the number it produces is how many RSA pad-and-sign (and answer-copy) operations it can do; the time it takes to hash the message itself is not included.
Their ecdsap256 computation assumes that the digest was SHA-1 (20 bytes). Again, the time it takes to hash a message is not included.
Since both numbers measure only the per-signature work on a fixed-size digest, and the cost of hashing the data scales the same way for either scheme, they are comparable.
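A quick illustration of why the signed input has a fixed size regardless of the message length (plain hashlib in Python, used here purely for demonstration):

import hashlib

message = b"an arbitrarily long message " * 10_000

# What `openssl speed rsa512` signs: the SSL "MD5 and SHA1" value
ssl_md5_sha1 = hashlib.md5(message).digest() + hashlib.sha1(message).digest()
print(len(ssl_md5_sha1))                     # 36 bytes, whatever the message length

# What `openssl speed ecdsap256` assumes was hashed: a SHA-1 digest
print(len(hashlib.sha1(message).digest()))   # 20 bytes, whatever the message length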
Nginx can be configured to generate a uuid suitable for client identification. Upon receiving a request from a new client, it appends a uuid in two forms before forwarding the request upstream to the origin server(s):
cookie with uuid in Base64 (e.g. CgIGR1ZfUkeEXQ2YAwMZAg==)
header with uuid in hexadecimal (e.g. 4706020A47525F56980D5D8402190303)
I want to convert the hexadecimal representation to the Base64 equivalent. I have a working solution in Ruby, but I don't fully grasp the underlying mechanics, especially the switching of byte orders:
hex_str = "4706020A47525F56980D5D8402190303"
Treating hex_str as a sequence of high-nibble (most significant 4 bits first) binary data, produce the (ASCII-encoded) string representation:
binary_seq = [hex_str].pack("H*")
# 47 (71 decimal) -> "G"
# 06 (6 decimal) -> "\x06" (non-printable)
# 02 (2 decimal) -> "\x02" (non-printable)
# 0A (10 decimal) -> "\n"
# ...
#=> "G\x06\x02\nGR_V\x98\r]\x84\x02\x19\x03\x03"
Map binary_seq to an array of 32-bit little-endian unsigned integers. Each 4 characters (4 bytes = 32 bits) maps to an integer:
data = binary_seq.unpack("VVVV")
# "G\x06\x02\n" -> 167904839 (?)
# "GR_V" -> 1449087559 (?)
# "\x98\r]\x84" -> 2220690840 (?)
# "\x02\x19\x03\x03" -> 50534658 (?)
#=> [167904839, 1449087559, 2220690840, 50534658]
Treating data as an array of 32-bit big-endian unsigned integers, produce the (ASCII-encoded) string representation:
network_seq = data.pack("NNNN")
# 167904839 -> "\n\x02\x06G" (?)
# 1449087559 -> "V_RG" (?)
# 2220690840 -> "\x84]\r\x98" (?)
# 50534658 -> "\x03\x03\x19\x02" (?)
#=> "\n\x02\x06GV_RG\x84]\r\x98\x03\x03\x19\x02"
Encode network_seq in Base64 string:
Base64.encode64(network_seq).strip
#=> "CgIGR1ZfUkeEXQ2YAwMZAg=="
My rough understanding is that big-endian is the standard byte order for network communications, while little-endian is more common on host machines. Why nginx provides two forms that require switching byte order to convert between them, I'm not sure.
I also don't understand how the .unpack("VVVV") and .pack("NNNN") steps work. I can see that "G\x06\x02\n" becomes "\n\x02\x06G", but I don't understand the steps in between. For example, focusing on the first 8 digits of hex_str, why do .pack("H*") and .unpack("VVVV") produce:
"4706020A" -> "G\x06\x02\n" -> 167904839
whereas converting directly to base-10 produces:
"4706020A".to_i(16) -> 1191576074
? The fact that I'm asking this shows I need clarification on what exactly is going on in all these conversions :)
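For what it's worth, the same transformation written more explicitly in Python may make the mechanics clearer (struct and base64 are just the obvious stand-ins for Ruby's pack/unpack and Base64; this is an illustration, not part of the nginx setup):

import base64
import struct

hex_str = "4706020A47525F56980D5D8402190303"
raw = bytes.fromhex(hex_str)                  # same as [hex_str].pack("H*")

# unpack("VVVV"): read four 32-bit words as little-endian integers
words = struct.unpack("<4I", raw)
print(words[0])                               # 167904839 == 0x0A020647: bytes 47 06 02 0A read backwards

# pack("NNNN"): write the same integers back out big-endian,
# which reverses the byte order inside each 4-byte group
swapped = struct.pack(">4I", *words)

print(base64.b64encode(swapped).decode())     # CgIGR1ZfUkeEXQ2YAwMZAg==

Converting "4706020A" directly with to_i(16) reads the four bytes in the order written (big-endian), giving 0x4706020A = 1191576074, whereas unpack("V") reads the same four bytes in reverse order, giving 0x0A020647 = 167904839. So the net effect of unpack("VVVV") followed by pack("NNNN") is simply to reverse each group of four bytes, which is exactly the difference between the hexadecimal header form and the Base64 cookie form shown above.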
I am writing a solution to scan the PDF417 barcode (http://en.wikipedia.org/wiki/PDF417) on the back of a South African driver's license for iOS. I can't find any documentation or specification describing how to decode the barcode. Does anyone have a link to a specification, or sample code, that can decode the driver's license data stored in the PDF417 barcode? Thanks
The data after scanning the PDF417 barcode is 720 bytes. The first 4 bytes indicate the version of the barcode.
Version 2 covers all currently valid licenses.
Version 1: 01 e1 02 45
Version 2: 01 9b 09 45
The next two bytes are zero (00 00).
The remaining 714 bytes form 6 blocks: 5 blocks of 128 bytes and 1 block of 74 bytes.
Different keys are used depending on the version and the block size.
Version 1, 128 bytes:
-----BEGIN RSA PUBLIC KEY-----
MIGXAoGBAP7S4cJ+M2MxbncxenpSxUmBOVGGvkl0dgxyUY1j4FRKSNCIszLFsMNw
x2XWXZg8H53gpCsxDMwHrncL0rYdak3M6sdXaJvcv2CEePrzEvYIfMSWw3Ys9cRl
HK7No0mfrn7bfrQOPhjrMEFw6R7VsVaqzm9DLW7KbMNYUd6MZ49nAhEAu3l//ex/
nkLJ1vebE3BZ2w==
-----END RSA PUBLIC KEY-----
Version 1, 74 bytes:
-----BEGIN RSA PUBLIC KEY-----
MGACSwD/POxrX0Djw2YUUbn8+u866wbcIynA5vTczJJ5cmcWzhW74F7tLFcRvPj1
tsj3J221xDv6owQNwBqxS5xNFvccDOXqlT8MdUxrFwIRANsFuoItmswz+rfY9Cf5
zmU=
-----END RSA PUBLIC KEY-----
Version 2, 128 bytes:
-----BEGIN RSA PUBLIC KEY-----
MIGWAoGBAMqfGO9sPz+kxaRh/qVKsZQGul7NdG1gonSS3KPXTjtcHTFfexA4MkGA
mwKeu9XeTRFgMMxX99WmyaFvNzuxSlCFI/foCkx0TZCFZjpKFHLXryxWrkG1Bl9+
+gKTvTJ4rWk1RvnxYhm3n/Rxo2NoJM/822Oo7YBZ5rmk8NuJU4HLAhAYcJLaZFTO
sYU+aRX4RmoF
-----END RSA PUBLIC KEY-----
Version 2, 74 bytes:
-----BEGIN RSA PUBLIC KEY-----
MF8CSwC0BKDfEdHKz/GhoEjU1XP5U6YsWD10klknVhpteh4rFAQlJq9wtVBUc5Dq
bsdI0w/bga20kODDahmGtASy9fae9dobZj5ZUJEw5wIQMJz+2XGf4qXiDJu0R2U4
Kw==
-----END RSA PUBLIC KEY-----
Decrypt each block separately. Each block is decrypted by applying the RSA encrypt (public-key) operation with the corresponding public key.
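For example, the 4-byte version prefix described above can drive the choice of key set; a minimal Python sketch (variable names are mine, and data holds the 720 scanned bytes):

VERSION_PREFIXES = {
    bytes([0x01, 0xE1, 0x02, 0x45]): 1,
    bytes([0x01, 0x9B, 0x09, 0x45]): 2,
}

version = VERSION_PREFIXES.get(bytes(data[0:4]))
if version is None:
    raise ValueError('unknown license version prefix')
# select the version 1 or version 2 keys (128-byte and 74-byte) accordingly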
Based on the RSA public keys, the incomplete document, and the C# open-source project, I've successfully decoded the South African driving license in Python, except for the image part.
Steps:
Load the RSA public key from the PEM format.
pubKey = rsa.PublicKey.load_pkcs1(pk128)
Decrypt the data decoded from PDF417:
import rsa  # the pure-Python "rsa" package (pip install rsa)

# data: the 720 bytes decoded from the PDF417 barcode
# pk128 / pk74: the version-specific PEM public keys listed above
all = bytearray()

# Five 128-byte blocks
pubKey = rsa.PublicKey.load_pkcs1(pk128)
start = 6
for i in range(5):
    block = data[start: start + 128]
    input = int.from_bytes(block, byteorder='big', signed=False)
    output = pow(input, pubKey.e, mod=pubKey.n)   # raw RSA public-key operation
    decrypted_bytes = output.to_bytes(128, byteorder='big', signed=False)
    all += decrypted_bytes
    start = start + 128

# Final 74-byte block
pubKey = rsa.PublicKey.load_pkcs1(pk74)
block = data[start: start + 74]
input = int.from_bytes(block, byteorder='big', signed=False)
output = pow(input, pubKey.e, mod=pubKey.n)
decrypted_bytes = output.to_bytes(74, byteorder='big', signed=False)
all += decrypted_bytes
Parse the data:
# readString / readStrings are helper functions defined in the full code linked below
def parse_data(data):
    index = 0
    for i in range(0, len(data)):
        if data[i] == 0x82:
            index = i
            break

    # Section 1: Strings
    vehicleCodes, index = readStrings(data, index + 2, 4)
    print(f'Vehicle codes: {vehicleCodes}')

    surname, index, delimiter = readString(data, index)
    print(f'Surname: {surname}')

    initials, index, delimiter = readString(data, index)
    print(f'Initials: {initials}')

    PrDPCode = ''
    if delimiter == 0xe0:
        PrDPCode, index, delimiter = readString(data, index)
    print(f'PrDP Code: {PrDPCode}')

    idCountryOfIssue, index, delimiter = readString(data, index)
    print(f'ID Country of Issue: {idCountryOfIssue}')

    licenseCountryOfIssue, index, delimiter = readString(data, index)
    print(f'License Country of Issue: {licenseCountryOfIssue}')

    vehicleRestrictions, index = readStrings(data, index, 4)
    print(f'Vehicle Restriction: {vehicleRestrictions}')
    ...
You can visit https://github.com/yushulx/South-Africa-driving-license/blob/main/sadl/init.py to see the full code.
The Python package has been published to pypi.org. You can install it via pip install south-africa-driving-license.
How can I know if a TIFF image is in the format CCITT T.6(Group 4)?
You can use this (C#) code example.
It returns a value indicating the compression type:
1: no compression
2: CCITT Group 3
3: Facsimile-compatible CCITT Group 3
4: CCITT Group 4 (T.6)
5: LZW
public static int GetCompressionType(Image image)
{
    // TIFF tag 259 (0x103) holds the Compression value
    int compressionTagIndex = Array.IndexOf(image.PropertyIdList, 0x103);
    PropertyItem compressionTag = image.PropertyItems[compressionTagIndex];
    return BitConverter.ToInt16(compressionTag.Value, 0);
}
You can check these links:
The TIFF File Format
TIFF Tag Compression
TIFF File Format Summary
Tag 259 (hex 0x0103) stores the info about the compression method.
--- Compression
Tag = 259 (103)
Type = word
N = 1
Default = 1.
1 = No compression, but pack data into bytes as tightly as possible, with no
unused bits except at the end of a row. The bytes are stored as an array
of bytes, for BitsPerSample <= 8, word if BitsPerSample > 8 and <= 16, and
dword if BitsPerSample > 16 and <= 32. The byte ordering of data >8 bits
must be consistent with that specified in the TIFF file header (bytes 0
and 1). Rows are required to begin on byte boundaries.
2 = CCITT Group 3 1-Dimensional Modified Huffman run length encoding.
See ALGRTHMS.txt BitsPerSample must be 1, since this type of compression
is defined only for bilevel images (like FAX images...)
3 = Facsimile-compatible CCITT Group 3, exactly as specified in
"Standardization of Group 3 facsimile apparatus for document
transmission," Recommendation T.4, Volume VII, Fascicle VII.3,
Terminal Equipment and Protocols for Telematic Services, The
International Telegraph and Telephone Consultative Committee
(CCITT), Geneva, 1985, pages 16 through 31. Each strip must
begin on a byte boundary. (But recall that an image can be a
single strip.) Rows that are not the first row of a strip are
not required to begin on a byte boundary. The data is stored as
bytes, not words - byte-reversal is not allowed. See the
Group3Options field for Group 3 options such as 1D vs 2D coding.
4 = Facsimile-compatible CCITT Group 4, exactly as specified in
"Facsimile Coding Schemes and Coding Control Functions for Group
4 Facsimile Apparatus," Recommendation T.6, Volume VII, Fascicle
VII.3, Terminal Equipment and Protocols for Telematic Services,
The International Telegraph and Telephone Consultative Committee
(CCITT), Geneva, 1985, pages 40 through 48. Each strip must
begin on a byte boundary. Rows that are not the first row of a
strip are not required to begin on a byte boundary. The data is
stored as bytes, not words. See the Group4Options field for
Group 4 options.
5 = LZW Compression, for grayscale, mapped color, and full color images.
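If you would rather not rely on an imaging library at all, tag 259 can be read straight from the file using the layout above. Here is a rough Python sketch of that (my own illustration; it assumes a classic TIFF, not BigTIFF, and only looks at the first IFD):

import struct

def tiff_compression(path):
    """Return the value of TIFF tag 259 (Compression) from the first IFD,
    or None if the tag is not present."""
    with open(path, 'rb') as f:
        header = f.read(8)
        byte_order = {b'II': '<', b'MM': '>'}[header[:2]]          # little- or big-endian file
        magic, ifd_offset = struct.unpack(byte_order + 'HI', header[2:8])
        assert magic == 42, 'not a TIFF file'
        f.seek(ifd_offset)
        (num_entries,) = struct.unpack(byte_order + 'H', f.read(2))
        for _ in range(num_entries):
            tag, typ, count, value = struct.unpack(byte_order + 'HHI4s', f.read(12))
            if tag == 259:                                          # Compression
                # SHORT, count 1: the value sits in the first two bytes of the value field
                return struct.unpack(byte_order + 'H', value[:2])[0]
    return None

# A return value of 4 means CCITT Group 4 (T.6).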
You can run identify -verbose from the ImageMagick suite on the image. Look for "Compression: Group4" in the output.
UPDATE:
So, I downloaded the libtiff library from the link I mentioned before, and from what I've seen, you can do the following (untested):
int isTIFF_T6(const char* filename)
{
    TIFF* tif = TIFFOpen(filename, "r");
    if (!tif) return 0;
    /* td_compression lives in libtiff's internal TIFFDirectory structure,
       so this needs the private headers (tiffiop.h / tif_dir.h) */
    TIFFDirectory *td = &tif->tif_dir;
    int result = (td->td_compression == COMPRESSION_CCITTFAX4) ? 1 : 0;
    TIFFClose(tif);
    return result;
}
PREVIOUS:
This page has a lot of information about this format and links to some code in C:
Here's an excerpt:
The following paper covers T.4, T.6 and JBIG: "Review of standards for electronic imaging for facsimile systems" in Journal of Electronic Imaging, Vol. 1, No. 1, pp. 5-21, January 1992.
Source code can be obtained as part of a TIFF toolkit - TIFF image compression techniques for binary images include CCITT T.4 and T.6: ftp://ftp.sgi.com/graphics/tiff/tiff-v3.4beta035-tar.gz
Contact: sam@engr.sgi.com
Read more: http://www.faqs.org/faqs/compression-faq/part1/section-16.html#ixzz0TYLGKnHI